Merge "Don't read config in Manager class definition"
diff --git a/.gitignore b/.gitignore
index 287db4c..7cb052f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -18,6 +18,7 @@
dist
build
.testrepository
+.stestr
.idea
.project
.pydevproject
diff --git a/.stestr.conf b/.stestr.conf
new file mode 100644
index 0000000..e3201c1
--- /dev/null
+++ b/.stestr.conf
@@ -0,0 +1,4 @@
+[DEFAULT]
+test_path=./tempest/test_discover
+group_regex=([^\.]*\.)*
+
diff --git a/HACKING.rst b/HACKING.rst
index 446d865..8407734 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -103,10 +103,10 @@
Service Tagging
---------------
Service tagging is used to specify which services are exercised by a particular
-test method. You specify the services with the ``tempest.test.services``
+test method. You specify the services with the ``tempest.common.utils.services``
decorator. For example:
-@services('compute', 'image')
+@utils.services('compute', 'image')
Valid service tag names are the same as the list of directories in tempest.api
that have tests.
@@ -128,6 +128,12 @@
Test class level resources should be defined in the `resource_setup` method of
the test class, except for any credential obtained from the credentials
provider, which should be set-up in the `setup_credentials` method.
+Cleanup is best scheduled using `addClassResourceCleanup` which ensures that
+the cleanup code is always invoked, and in reverse order with respect to the
+creation order.
+
+In both cases - test level and class level cleanups - when a resource is
+deleted asynchronously, a wait loop should be scheduled before the actual delete.
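A minimal sketch of that ordering, mirroring the volume handling added to
``tempest/api/compute/base.py`` later in this change (shown as it would appear
inside a ``resource_setup`` classmethod; this is an illustration, not a
complete test class)::

    from tempest.lib.common.utils import test_utils

    volume = cls.volumes_client.create_volume(size=1)['volume']
    # Register the wait first: cleanups run in reverse order, so at
    # teardown the delete is issued and the wait then confirms that the
    # asynchronous delete has completed.
    cls.addClassResourceCleanup(
        cls.volumes_client.wait_for_resource_deletion, volume['id'])
    cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
                                cls.volumes_client.delete_volume,
                                volume['id'])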
The test base class `BaseTestCase` defines Tempest framework for class level
fixtures. `setUpClass` and `tearDownClass` are defined here and cannot be
@@ -358,10 +364,10 @@
When adding tests for new features that were not in previous releases of the
projects the new test has to be properly skipped with a feature flag. Whether
-this is just as simple as using the @test.requires_ext() decorator to check
-if the required extension (or discoverable optional API) is enabled or adding
-a new config option to the appropriate section. If there isn't a method of
-selecting the new **feature** from the config file then there won't be a
+this is just as simple as using the @utils.requires_ext() decorator to
+check if the required extension (or discoverable optional API) is enabled or
+adding a new config option to the appropriate section. If there isn't a method
+of selecting the new **feature** from the config file then there won't be a
mechanism to disable the test with older stable releases and the new test won't
be able to merge.
@@ -380,7 +386,7 @@
Otherwise the bug fix won't be able to land in the project.
Handily, `Zuul’s cross-repository dependencies
-<https://docs.openstack.org/infra/zuul/gating.html#cross-repository-dependencies>`_.
+<https://docs.openstack.org/infra/zuul/user/gating.html#cross-project-dependencies>`_.
can be leveraged to do without step 2 and to have steps 3 and 4 happen
"atomically". To do that, make the patch written in step 1 to depend (refer to
Zuul's documentation above) on the patch written in step 4. The commit message
diff --git a/README.rst b/README.rst
index 2e13fec..c67362a 100644
--- a/README.rst
+++ b/README.rst
@@ -3,7 +3,7 @@
========================
.. image:: http://governance.openstack.org/badges/tempest.svg
- :target: http://governance.openstack.org/reference/tags/index.html
+ :target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
@@ -183,11 +183,11 @@
Tempest also has a set of unit tests which test the Tempest code itself. These
tests can be run by specifying the test discovery path::
- $ OS_TEST_PATH=./tempest/tests testr run --parallel
+ $ stestr --test-path ./tempest/tests run
-By setting OS_TEST_PATH to ./tempest/tests it specifies that test discover
-should only be run on the unit test directory. The default value of OS_TEST_PATH
-is OS_TEST_PATH=./tempest/test_discover which will only run test discover on the
+By setting the ``--test-path`` option to ./tempest/tests, test discovery
+is run only on the unit test directory. The default value of ``test_path``
+is ``test_path=./tempest/test_discover``, which will only run test discovery on the
Tempest suite.
Alternatively, there are the py27 and py35 tox jobs which will run the unit
diff --git a/bindep.txt b/bindep.txt
index 8914ade..efd3a10 100644
--- a/bindep.txt
+++ b/bindep.txt
@@ -1,5 +1,5 @@
# This file contains runtime (non-python) dependencies
-# More info at: http://docs.openstack.org/infra/bindep/readme.html
+# More info at: https://docs.openstack.org/infra/bindep/readme.html
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 0cfdf34..067eb81 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -158,10 +158,6 @@
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-html_use_smartypants = False
-
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
diff --git a/data/tempest-plugins-registry.header b/doc/source/data/tempest-plugins-registry.header
similarity index 100%
rename from data/tempest-plugins-registry.header
rename to doc/source/data/tempest-plugins-registry.header
diff --git a/doc/source/library.rst b/doc/source/library.rst
index a461a0f..074d642 100644
--- a/doc/source/library.rst
+++ b/doc/source/library.rst
@@ -69,3 +69,4 @@
library/auth
library/clients
library/credential_providers
+ library/validation_resources
diff --git a/doc/source/library/credential_providers.rst b/doc/source/library/credential_providers.rst
index f4eb37d..d96c97a 100644
--- a/doc/source/library/credential_providers.rst
+++ b/doc/source/library/credential_providers.rst
@@ -130,19 +130,18 @@
# role
provider.clear_creds()
-API Reference
-=============
-------------------------------
+API Reference
+-------------
+
The dynamic credentials module
-------------------------------
+''''''''''''''''''''''''''''''
.. automodule:: tempest.lib.common.dynamic_creds
:members:
---------------------------------------
The pre-provisioned credentials module
---------------------------------------
+''''''''''''''''''''''''''''''''''''''
.. automodule:: tempest.lib.common.preprov_creds
:members:
diff --git a/doc/source/library/validation_resources.rst b/doc/source/library/validation_resources.rst
new file mode 100644
index 0000000..9b36476
--- /dev/null
+++ b/doc/source/library/validation_resources.rst
@@ -0,0 +1,11 @@
+.. _validation_resources:
+
+Validation Resources
+====================
+
+-------------------------------
+The validation_resources module
+-------------------------------
+
+.. automodule:: tempest.lib.common.validation_resources
+ :members:
diff --git a/doc/source/microversion_testing.rst b/doc/source/microversion_testing.rst
index d80081d..acf5593 100644
--- a/doc/source/microversion_testing.rst
+++ b/doc/source/microversion_testing.rst
@@ -300,35 +300,35 @@
* `2.2`_
- .. _2.2: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id2
+ .. _2.2: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id2
* `2.10`_
- .. _2.10: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id9
+ .. _2.10: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id9
* `2.20`_
- .. _2.20: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id18
+ .. _2.20: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id18
* `2.25`_
- .. _2.25: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-mitaka
+ .. _2.25: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-mitaka
* `2.32`_
- .. _2.32: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id29
+ .. _2.32: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id29
* `2.37`_
- .. _2.37: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id34
+ .. _2.37: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id34
* `2.42`_
- .. _2.42: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-ocata
+ .. _2.42: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-ocata
* `2.47`_
- .. _2.47: http://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id42
+ .. _2.47: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id42
* `2.48`_
@@ -338,4 +338,28 @@
* `3.3`_
- .. _3.3: https://docs.openstack.org/cinder/ocata/devref/api_microversion_history.html#id4
+ .. _3.3: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id3
+
+ * `3.9`_
+
+ .. _3.9: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id9
+
+ * `3.11`_
+
+ .. _3.11: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id11
+
+ * `3.12`_
+
+ .. _3.12: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id12
+
+ * `3.14`_
+
+ .. _3.14: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id14
+
+ * `3.19`_
+
+ .. _3.19: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id18
+
+ * `3.20`_
+
+ .. _3.20: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id19
diff --git a/doc/source/write_tests.rst b/doc/source/write_tests.rst
index aec55e9..5a2876e 100644
--- a/doc/source/write_tests.rst
+++ b/doc/source/write_tests.rst
@@ -59,10 +59,16 @@
* setup_clients
* resource_setup
-which is executed in that order. An example of a TestCase which defines all
+which is executed in that order. Cleanup of resources provisioned during
+the resource_setup must be scheduled right after provisioning using
+the addClassResourceCleanup helper. The resource cleanups stacked this way
+are executed in reverse order during tearDownClass, before the cleanup of
+test credentials takes place. An example of a TestCase which defines all
of these would be::
-
+
+ from tempest.common import waiters
from tempest import config
+ from tempest.lib.common.utils import test_utils
from tempest import test
CONF = config.CONF
@@ -111,6 +117,13 @@
"""
super(TestExampleCase, cls).resource_setup()
cls.shared_server = cls.servers_client.create_server(...)
+ cls.addClassResourceCleanup(waiters.wait_for_server_termination,
+ cls.servers_client,
+ cls.shared_server['id'])
+        cls.addClassResourceCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            cls.servers_client.delete_server,
+            cls.shared_server['id'])
.. _credentials:
diff --git a/releasenotes/notes/add-domain-param-in-cliclient-a270fcf35c8f09e6.yaml b/releasenotes/notes/add-domain-param-in-cliclient-a270fcf35c8f09e6.yaml
new file mode 100644
index 0000000..87a6af9
--- /dev/null
+++ b/releasenotes/notes/add-domain-param-in-cliclient-a270fcf35c8f09e6.yaml
@@ -0,0 +1,17 @@
+---
+fixes:
+ - |
+    Allow specifying new domain parameters:
+
+ * `user_domain_name`
+ * `user_domain_id`
+ * `project_domain_name`
+ * `project_domain_id`
+
+    for the CLIClient class; their values are passed as
+    ``--os-user-domain-name``, ``--os-user-domain-id``,
+    ``--os-project-domain-name`` and ``--os-project-domain-id`` respectively
+    during command execution.
+
+    This helps prevent possible test failures with authentication in
+    Keystone v3. Bug: #1719687
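A sketch of passing the new parameters (assuming the ``CLIClient`` constructor
in ``tempest.lib.cli.base``; the credential and endpoint values are purely
illustrative)::

    from tempest.lib.cli import base

    cli = base.CLIClient(username='demo', password='secret',
                         tenant_name='demo',
                         uri='http://keystone.example.com:5000/v3',
                         user_domain_name='Default',
                         project_domain_name='Default')
    # The domain values are appended to commands as --os-user-domain-name
    # and --os-project-domain-name respectively.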
diff --git a/releasenotes/notes/add-ip-version-check-in-addresses-x491ac6d9abaxa12.yaml b/releasenotes/notes/add-ip-version-check-in-addresses-x491ac6d9abaxa12.yaml
new file mode 100644
index 0000000..957e903
--- /dev/null
+++ b/releasenotes/notes/add-ip-version-check-in-addresses-x491ac6d9abaxa12.yaml
@@ -0,0 +1,4 @@
+---
+fixes:
+  - Add a more accurate IP version check in the addresses schema, which
+    limits the IP version value to either 4 or 6.
diff --git a/releasenotes/notes/add-is-resource-deleted-sg-client-f4a7a7a54ff024d7.yaml b/releasenotes/notes/add-is-resource-deleted-sg-client-f4a7a7a54ff024d7.yaml
new file mode 100644
index 0000000..e046326
--- /dev/null
+++ b/releasenotes/notes/add-is-resource-deleted-sg-client-f4a7a7a54ff024d7.yaml
@@ -0,0 +1,5 @@
+---
+features:
+ - |
+ Implement the `rest_client` method `is_resource_deleted` in the network
+ security group client.
diff --git a/releasenotes/notes/add-load-list-cmd-35a4a2e6ea0a36fd.yaml b/releasenotes/notes/add-load-list-cmd-35a4a2e6ea0a36fd.yaml
new file mode 100644
index 0000000..403bbad
--- /dev/null
+++ b/releasenotes/notes/add-load-list-cmd-35a4a2e6ea0a36fd.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ Adds a new cli option to tempest run, --load-list <list-file>
+ to specify target tests to run from a list-file. The list-file
+ supports the output format of the tempest run --list-tests
+ command.
diff --git a/releasenotes/notes/add-reset-group-snapshot-status-api-to-v3-group-snapshots-client-248d41827daf2a0c.yaml b/releasenotes/notes/add-reset-group-snapshot-status-api-to-v3-group-snapshots-client-248d41827daf2a0c.yaml
new file mode 100644
index 0000000..76b395d
--- /dev/null
+++ b/releasenotes/notes/add-reset-group-snapshot-status-api-to-v3-group-snapshots-client-248d41827daf2a0c.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add reset group snapshot status API to v3 group_snapshots_client library,
+    min_microversion of this API is 3.19. This feature makes it possible
+    to reset the group snapshot status.
diff --git a/releasenotes/notes/add-reset-group-status-api-to-v3-groups-client-9aa048617c66756a.yaml b/releasenotes/notes/add-reset-group-status-api-to-v3-groups-client-9aa048617c66756a.yaml
new file mode 100644
index 0000000..a39c23b
--- /dev/null
+++ b/releasenotes/notes/add-reset-group-status-api-to-v3-groups-client-9aa048617c66756a.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add reset group status API to v3 groups_client library, min_microversion
+    of this API is 3.20. This feature makes it possible to reset the group
+    status.
diff --git a/releasenotes/notes/add-return-value-to-retype-volume-a401aa619aaa2457.yaml b/releasenotes/notes/add-return-value-to-retype-volume-a401aa619aaa2457.yaml
index 4abfe9e..ca42014 100644
--- a/releasenotes/notes/add-return-value-to-retype-volume-a401aa619aaa2457.yaml
+++ b/releasenotes/notes/add-return-value-to-retype-volume-a401aa619aaa2457.yaml
@@ -1,5 +1,7 @@
---
fixes:
- Add a missing return statement to the retype_volume API in the v2 volumes_client library.
- This changes the response body from None to an empty dictionary.
+ - |
+ Add a missing return statement to the retype_volume API in the v2
+ volumes_client library: Bug#1703997
+ This changes the response body from None to an empty dictionary.
diff --git a/releasenotes/notes/add-validation-resources-to-lib-dc2600c4324ca4d7.yaml b/releasenotes/notes/add-validation-resources-to-lib-dc2600c4324ca4d7.yaml
new file mode 100644
index 0000000..7814f4e
--- /dev/null
+++ b/releasenotes/notes/add-validation-resources-to-lib-dc2600c4324ca4d7.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ Add the `validation_resources` module to tempest.lib. The module provides
+    a set of helpers that can be used to provision and clean up all the
+ resources required to perform ping / ssh tests against a virtual machine:
+ a keypair, a security group with targeted rules and a floating IP.
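Within Tempest test classes the new module is typically consumed through the
``get_class_validation_resources`` / ``get_test_validation_resources`` helpers
used elsewhere in this change; a sketch of the resulting usage inside a
compute test method::

    validation_resources = self.get_test_validation_resources(self.os_primary)
    server = self.create_test_server(
        validatable=True,
        validation_resources=validation_resources,
        wait_until='ACTIVE')
    # The provisioned resources are returned as a dict keyed by type.
    ip = self.get_server_ip(server, validation_resources)
    private_key = validation_resources['keypair']['private_key']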
diff --git a/releasenotes/notes/compare-header-version-func-de5139b2161b3627.yaml b/releasenotes/notes/compare-header-version-func-de5139b2161b3627.yaml
new file mode 100644
index 0000000..305e756
--- /dev/null
+++ b/releasenotes/notes/compare-header-version-func-de5139b2161b3627.yaml
@@ -0,0 +1,15 @@
+---
+features:
+ - |
+ Add a new function called ``compare_version_header_to_response`` to
+ ``tempest.lib.common.api_version_utils``, which compares the API
+    microversion in the response header to another microversion using the
+ comparators defined in
+ ``tempest.lib.common.api_version_request.APIVersionRequest``.
+
+ It is now possible to determine how to retrieve an attribute from a
+ response body of an API call, depending on the returned microversion.
+
+ Add a new exception type called ``InvalidParam`` to
+ ``tempest.lib.exceptions``, allowing the possibility of raising an
+ exception if an invalid parameter is passed to a library function.
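The new helper is exercised later in this change in
``tempest/api/compute/base.py``; a minimal sketch of the call, where ``image``
is the response returned by ``compute_images_client.create_image``::

    from tempest.lib.common import api_version_utils

    # Compare the microversion in the OpenStack-API-Version response
    # header against "compute 2.45" with the "lt" comparator; in this
    # change the result decides whether the image id is read from the
    # response body or parsed from the location header.
    result = api_version_utils.compare_version_header_to_response(
        "OpenStack-API-Version", "compute 2.45", image.response, "lt")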
diff --git a/releasenotes/notes/fix-list-group-snapshots-api-969d9321002c566c.yaml b/releasenotes/notes/fix-list-group-snapshots-api-969d9321002c566c.yaml
new file mode 100644
index 0000000..775a383
--- /dev/null
+++ b/releasenotes/notes/fix-list-group-snapshots-api-969d9321002c566c.yaml
@@ -0,0 +1,6 @@
+---
+fixes:
+ - |
+ Fix list_group_snapshots API in v3 group_snapshots_client: Bug#1715786.
+ The url path for list group snapshots with details API is changed from
+ ``?detail=True`` to ``/detail``.
diff --git a/releasenotes/notes/fix-remoteclient-default-ssh-shell-prologue-33e99343d086f601.yaml b/releasenotes/notes/fix-remoteclient-default-ssh-shell-prologue-33e99343d086f601.yaml
new file mode 100644
index 0000000..5063fd5
--- /dev/null
+++ b/releasenotes/notes/fix-remoteclient-default-ssh-shell-prologue-33e99343d086f601.yaml
@@ -0,0 +1,7 @@
+---
+fixes:
+ - |
+ Fix RemoteClient default ssh_shell_prologue: Bug#1707478
+
+    The default ssh_shell_prologue has been modified from
+    specifying the erroneous PATH=$$PATH:/sbin to PATH=$PATH:/sbin.
diff --git a/releasenotes/notes/http_proxy_config-cb39b55520e84db5.yaml b/releasenotes/notes/http_proxy_config-cb39b55520e84db5.yaml
new file mode 100644
index 0000000..56969de
--- /dev/null
+++ b/releasenotes/notes/http_proxy_config-cb39b55520e84db5.yaml
@@ -0,0 +1,9 @@
+---
+features:
+  - Adds a new config option, ``proxy_url``. This option is used to configure
+    running tempest through a proxy server.
+  - The RestClient class in tempest.lib.rest_client has a new kwarg parameter,
+    ``proxy_url``, that is used to set a proxy server.
+  - A new class was added to tempest.lib.http, ClosingProxyHttp. This behaves
+    identically to ClosingHttp except that it requires a proxy url and will
+    establish a connection through a proxy.
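A sketch of driving the new class directly (the proxy URL is illustrative, and
the ``request(url, method)`` call assumes the usual ``ClosingHttp``
interface)::

    from tempest.lib.common import http

    # Behaves like ClosingHttp, but every request is tunnelled through
    # the configured proxy.
    client = http.ClosingProxyHttp('http://proxy.example.com:3128')
    resp, body = client.request('https://example.com', 'GET')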
diff --git a/releasenotes/notes/identity-tests-domain-drivers-76235f6672221e45.yaml b/releasenotes/notes/identity-tests-domain-drivers-76235f6672221e45.yaml
new file mode 100644
index 0000000..7ed3081
--- /dev/null
+++ b/releasenotes/notes/identity-tests-domain-drivers-76235f6672221e45.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ A new boolean config option ``domain_specific_drivers``
+ is added to the section ``identity-feature-enabled``.
+ This option must be enabled when testing an environment that
+ is configured to use domain-specific identity drivers.
diff --git a/releasenotes/notes/make-object-storage-client-as-stable-interface-d1b07c7e8f17bef6.yaml b/releasenotes/notes/make-object-storage-client-as-stable-interface-d1b07c7e8f17bef6.yaml
new file mode 100644
index 0000000..2bba952
--- /dev/null
+++ b/releasenotes/notes/make-object-storage-client-as-stable-interface-d1b07c7e8f17bef6.yaml
@@ -0,0 +1,11 @@
+---
+features:
+ - |
+    Define the below object storage service clients as libraries.
+    Add these new service clients to the library interface so that
+    other projects can use them as stable libraries
+    without any maintenance changes.
+
+ * account_client
+ * container_client
+ * object_client
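A sketch of importing the modules from their stable location (instantiation
takes an auth provider, service type and region like any other ``tempest.lib``
service client, and is omitted here)::

    from tempest.lib.services.object_storage import account_client
    from tempest.lib.services.object_storage import container_client
    from tempest.lib.services.object_storage import object_client

    clients = (account_client.AccountClient,
               container_client.ContainerClient,
               object_client.ObjectClient)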
diff --git a/releasenotes/notes/raise-exception-when-error-deleting-on-volume-18d0d0c5886212dd.yaml b/releasenotes/notes/raise-exception-when-error-deleting-on-volume-18d0d0c5886212dd.yaml
new file mode 100644
index 0000000..194dbc1
--- /dev/null
+++ b/releasenotes/notes/raise-exception-when-error-deleting-on-volume-18d0d0c5886212dd.yaml
@@ -0,0 +1,8 @@
+---
+upgrade:
+ - |
+    Tempest checks that a volume is deleted by waiting for NotFound(404) on
+    show_volume(). Sometimes a volume delete fails and the volume status
+    becomes error_deleting, which means the delete has failed,
+    so Tempest does not need to keep waiting. A new release of Tempest
+    raises a DeleteErrorException instead of continuing to wait.
diff --git a/releasenotes/notes/remove-deprecated-apis-from-v2-volumes-client-3ca4a5db5fea518f.yaml b/releasenotes/notes/remove-deprecated-apis-from-v2-volumes-client-3ca4a5db5fea518f.yaml
new file mode 100644
index 0000000..c75da2e
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-apis-from-v2-volumes-client-3ca4a5db5fea518f.yaml
@@ -0,0 +1,11 @@
+---
+upgrade:
+ - |
+    Remove deprecated APIs from volume v2 volumes_client; the deprecated
+    APIs are now implemented in the volume v2 transfers_client.
+
+ * create_volume_transfer
+ * show_volume_transfer
+ * list_volume_transfers
+ * delete_volume_transfer
+ * accept_volume_transfer
diff --git a/releasenotes/notes/remove-deprecated-skip-decorators-f8b42d812d20b537.yaml b/releasenotes/notes/remove-deprecated-skip-decorators-f8b42d812d20b537.yaml
new file mode 100644
index 0000000..920bc5d
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-skip-decorators-f8b42d812d20b537.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ Remove two deprecated skip decorators in ``config`` module:
+ ``skip_unless_config`` and ``skip_if_config``.
diff --git a/releasenotes/notes/start-of-pike-support-f2a1b7ea8e8b0311.yaml b/releasenotes/notes/start-of-pike-support-f2a1b7ea8e8b0311.yaml
new file mode 100644
index 0000000..0787821
--- /dev/null
+++ b/releasenotes/notes/start-of-pike-support-f2a1b7ea8e8b0311.yaml
@@ -0,0 +1,11 @@
+---
+prelude: >
+ This release marks the start of support for the Pike release in Tempest.
+other:
+ - OpenStack Releases supported after this release are **Pike**, **Ocata**,
+ and **Newton**.
+
+    The release under development as of this tag is Queens, meaning
+ that every Tempest commit is also tested against master during the Queens
+ cycle. However, this does not necessarily mean that using Tempest as of
+ this tag will work against a Queens (or future release) cloud.
diff --git a/releasenotes/source/conf.py b/releasenotes/source/conf.py
index 3137541..ae3dca1 100644
--- a/releasenotes/source/conf.py
+++ b/releasenotes/source/conf.py
@@ -158,10 +158,6 @@
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-# html_use_smartypants = True
-
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index db01da0..df1de46 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,7 @@
:maxdepth: 1
unreleased
+ v17.0.0
v16.1.0
v16.0.0
v15.0.0
diff --git a/releasenotes/source/v17.0.0.rst b/releasenotes/source/v17.0.0.rst
new file mode 100644
index 0000000..3f50f11
--- /dev/null
+++ b/releasenotes/source/v17.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v17.0.0 Release Notes
+=====================
+
+.. release-notes:: 17.0.0 Release Notes
+ :version: 17.0.0
diff --git a/requirements.txt b/requirements.txt
index a74f5c2..8a2fa99 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,24 +2,24 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
-cliff>=2.8.0 # Apache-2.0
-jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
+cliff!=2.9.0,>=2.8.0 # Apache-2.0
+jsonschema<3.0.0,>=2.6.0 # MIT
testtools>=1.4.0 # MIT
-paramiko>=2.0 # LGPLv2.1+
-netaddr!=0.7.16,>=0.7.13 # BSD
+paramiko>=2.0.0 # LGPLv2.1+
+netaddr>=0.7.18 # BSD
testrepository>=0.0.18 # Apache-2.0/BSD
-oslo.concurrency>=3.8.0 # Apache-2.0
-oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
-oslo.log>=3.22.0 # Apache-2.0
-oslo.serialization!=2.19.1,>=1.10.0 # Apache-2.0
-oslo.utils>=3.20.0 # Apache-2.0
+oslo.concurrency>=3.20.0 # Apache-2.0
+oslo.config>=4.6.0 # Apache-2.0
+oslo.log>=3.30.0 # Apache-2.0
+oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
+oslo.utils>=3.28.0 # Apache-2.0
six>=1.9.0 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
-PyYAML>=3.10.0 # MIT
+PyYAML>=3.10 # MIT
python-subunit>=0.0.18 # Apache-2.0/BSD
stevedore>=1.20.0 # Apache-2.0
PrettyTable<0.8,>=0.7.1 # BSD
-os-testr>=0.8.0 # Apache-2.0
+os-testr>=1.0.0 # Apache-2.0
urllib3>=1.21.1 # MIT
debtcollector>=1.2.0 # Apache-2.0
-unittest2 # BSD
+unittest2>=1.1.0 # BSD
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index 41be620..36ff09e 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -27,7 +27,6 @@
def setup_clients(cls):
super(AggregatesAdminNegativeTestJSON, cls).setup_clients()
cls.client = cls.os_admin.aggregates_client
- cls.user_client = cls.aggregates_client
@classmethod
def resource_setup(cls):
@@ -52,7 +51,7 @@
# Regular user is not allowed to create an aggregate.
aggregate_name = data_utils.rand_name(self.aggregate_name_prefix)
self.assertRaises(lib_exc.Forbidden,
- self.user_client.create_aggregate,
+ self.aggregates_client.create_aggregate,
name=aggregate_name)
@decorators.attr(type=['negative'])
@@ -87,7 +86,7 @@
# Regular user is not allowed to delete an aggregate.
aggregate = self._create_test_aggregate()
self.assertRaises(lib_exc.Forbidden,
- self.user_client.delete_aggregate,
+ self.aggregates_client.delete_aggregate,
aggregate['id'])
@decorators.attr(type=['negative'])
@@ -95,7 +94,7 @@
def test_aggregate_list_as_user(self):
# Regular user is not allowed to list aggregates.
self.assertRaises(lib_exc.Forbidden,
- self.user_client.list_aggregates)
+ self.aggregates_client.list_aggregates)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('557cad12-34c9-4ff4-95f0-22f0dfbaf7dc')
@@ -103,7 +102,7 @@
# Regular user is not allowed to get aggregate details.
aggregate = self._create_test_aggregate()
self.assertRaises(lib_exc.Forbidden,
- self.user_client.show_aggregate,
+ self.aggregates_client.show_aggregate,
aggregate['id'])
@decorators.attr(type=['negative'])
@@ -140,7 +139,7 @@
# Regular user is not allowed to add a host to an aggregate.
aggregate = self._create_test_aggregate()
self.assertRaises(lib_exc.Forbidden,
- self.user_client.add_host,
+ self.aggregates_client.add_host,
aggregate['id'], host=self.host)
@decorators.attr(type=['negative'])
@@ -168,7 +167,7 @@
host=self.host)
self.assertRaises(lib_exc.Forbidden,
- self.user_client.remove_host,
+ self.aggregates_client.remove_host,
aggregate['id'], host=self.host)
@decorators.attr(type=['negative'])
diff --git a/tempest/api/compute/admin/test_auto_allocate_network.py b/tempest/api/compute/admin/test_auto_allocate_network.py
index 83fe215..a9772c4 100644
--- a/tempest/api/compute/admin/test_auto_allocate_network.py
+++ b/tempest/api/compute/admin/test_auto_allocate_network.py
@@ -16,12 +16,11 @@
from tempest.api.compute import base
from tempest.common import compute
-from tempest.common import credentials_factory as credentials
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_excs
-from tempest import test
CONF = config.CONF
LOG = log.getLogger(__name__)
@@ -46,14 +45,10 @@
@classmethod
def skip_checks(cls):
super(AutoAllocateNetworkTest, cls).skip_checks()
- identity_version = cls.get_identity_version()
- if not credentials.is_admin_available(
- identity_version=identity_version):
- msg = "Missing Identity Admin API credentials in configuration."
- raise cls.skipException(msg)
if not CONF.service_available.neutron:
raise cls.skipException('Neutron is required')
- if not test.is_extension_enabled('auto-allocated-topology', 'network'):
+ if not utils.is_extension_enabled('auto-allocated-topology',
+ 'network'):
raise cls.skipException(
'auto-allocated-topology extension is not available')
@@ -148,6 +143,8 @@
test_utils.call_and_ignore_notfound_exc(
cls.networks_client.delete_network, network['id'])
+ super(AutoAllocateNetworkTest, cls).resource_cleanup()
+
@decorators.idempotent_id('5eb7b8fa-9c23-47a2-9d7d-02ed5809dd34')
def test_server_create_no_allocate(self):
"""Tests that no networking is allocated for the server."""
@@ -180,9 +177,11 @@
_, servers = compute.create_test_server(
self.os_primary, networks='auto', wait_until='ACTIVE',
min_count=3)
- server_nets = set()
for server in servers:
self.addCleanup(self.delete_server, server['id'])
+
+ server_nets = set()
+ for server in servers:
# get the server ips
addresses = self.servers_client.list_addresses(
server['id'])['addresses']
diff --git a/tempest/api/compute/admin/test_create_server.py b/tempest/api/compute/admin/test_create_server.py
index 66bedd9..08b2d19 100644
--- a/tempest/api/compute/admin/test_create_server.py
+++ b/tempest/api/compute/admin/test_create_server.py
@@ -17,8 +17,10 @@
from tempest.api.compute import base
from tempest.common.utils.linux import remote_client
+from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
CONF = config.CONF
@@ -35,12 +37,6 @@
super(ServersWithSpecificFlavorTestJSON, cls).setup_clients()
cls.client = cls.servers_client
- @classmethod
- def resource_setup(cls):
- cls.set_validation_resources()
-
- super(ServersWithSpecificFlavorTestJSON, cls).resource_setup()
-
@decorators.idempotent_id('b3c7bcfc-bb5b-4e22-b517-c7f686b802ca')
@testtools.skipUnless(CONF.validation.run_validation,
'Instance validation tests are disabled.')
@@ -67,20 +63,30 @@
admin_pass = self.image_ssh_password
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
server_no_eph_disk = self.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
wait_until='ACTIVE',
adminPass=admin_pass,
flavor=flavor_no_eph_disk_id)
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server_no_eph_disk['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server,
+ server_no_eph_disk['id'])
+
# Get partition number of server without ephemeral disk.
server_no_eph_disk = self.client.show_server(
server_no_eph_disk['id'])['server']
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server_no_eph_disk),
+ self.get_server_ip(server_no_eph_disk,
+ validation_resources),
self.ssh_user,
admin_pass,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server_no_eph_disk,
servers_client=self.client)
disks_num = len(linux_client.get_disks().split('\n'))
@@ -90,17 +96,25 @@
server_with_eph_disk = self.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
wait_until='ACTIVE',
adminPass=admin_pass,
flavor=flavor_with_eph_disk_id)
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server_with_eph_disk['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server,
+ server_with_eph_disk['id'])
+
server_with_eph_disk = self.client.show_server(
server_with_eph_disk['id'])['server']
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server_with_eph_disk),
+ self.get_server_ip(server_with_eph_disk,
+ validation_resources),
self.ssh_user,
admin_pass,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server_with_eph_disk,
servers_client=self.client)
disks_num_eph = len(linux_client.get_disks().split('\n'))
diff --git a/tempest/api/compute/admin/test_fixed_ips.py b/tempest/api/compute/admin/test_fixed_ips.py
index 1e09eeb..ebba73c 100644
--- a/tempest/api/compute/admin/test_fixed_ips.py
+++ b/tempest/api/compute/admin/test_fixed_ips.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -29,6 +29,8 @@
if CONF.service_available.neutron:
msg = ("%s skipped as neutron is available" % cls.__name__)
raise cls.skipException(msg)
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
@classmethod
def setup_clients(cls):
@@ -49,17 +51,14 @@
break
@decorators.idempotent_id('16b7d848-2f7c-4709-85a3-2dfb4576cc52')
- @test.services('network')
def test_list_fixed_ip_details(self):
fixed_ip = self.client.show_fixed_ip(self.ip)
self.assertEqual(fixed_ip['fixed_ip']['address'], self.ip)
@decorators.idempotent_id('5485077b-7e46-4cec-b402-91dc3173433b')
- @test.services('network')
def test_set_reserve(self):
self.client.reserve_fixed_ip(self.ip, reserve="None")
@decorators.idempotent_id('7476e322-b9ff-4710-bf82-49d51bac6e2e')
- @test.services('network')
def test_set_unreserve(self):
self.client.reserve_fixed_ip(self.ip, unreserve="None")
diff --git a/tempest/api/compute/admin/test_fixed_ips_negative.py b/tempest/api/compute/admin/test_fixed_ips_negative.py
index a77011e..a5deb3c 100644
--- a/tempest/api/compute/admin/test_fixed_ips_negative.py
+++ b/tempest/api/compute/admin/test_fixed_ips_negative.py
@@ -13,10 +13,10 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -29,6 +29,8 @@
if CONF.service_available.neutron:
msg = ("%s skipped as neutron is available" % cls.__name__)
raise cls.skipException(msg)
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
@classmethod
def setup_clients(cls):
@@ -51,14 +53,12 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('9f17f47d-daad-4adc-986e-12370c93e407')
- @test.services('network')
def test_list_fixed_ip_details_with_non_admin_user(self):
self.assertRaises(lib_exc.Forbidden,
self.non_admin_client.show_fixed_ip, self.ip)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('ce60042c-fa60-4836-8d43-1c8e3359dc47')
- @test.services('network')
def test_set_reserve_with_non_admin_user(self):
self.assertRaises(lib_exc.Forbidden,
self.non_admin_client.reserve_fixed_ip,
@@ -66,7 +66,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('f1f7a35b-0390-48c5-9803-5f27461439db')
- @test.services('network')
def test_set_unreserve_with_non_admin_user(self):
self.assertRaises(lib_exc.Forbidden,
self.non_admin_client.reserve_fixed_ip,
@@ -74,7 +73,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('f51cf464-7fc5-4352-bc3e-e75cfa2cb717')
- @test.services('network')
def test_set_reserve_with_invalid_ip(self):
# NOTE(maurosr): since this exercises the same code snippet, we do it
# only for reserve action
@@ -87,7 +85,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('fd26ef50-f135-4232-9d32-281aab3f9176')
- @test.services('network')
def test_fixed_ip_with_invalid_action(self):
self.assertRaises(lib_exc.BadRequest,
self.client.reserve_fixed_ip,
diff --git a/tempest/api/compute/admin/test_flavors.py b/tempest/api/compute/admin/test_flavors.py
index 36ebc25..1483c2e 100644
--- a/tempest/api/compute/admin/test_flavors.py
+++ b/tempest/api/compute/admin/test_flavors.py
@@ -16,10 +16,10 @@
import uuid
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class FlavorsAdminTestJSON(base.BaseV2ComputeAdminTest):
@@ -28,7 +28,7 @@
@classmethod
def skip_checks(cls):
super(FlavorsAdminTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
+ if not utils.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
msg = "OS-FLV-EXT-DATA extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index 2c236ec..b8e2b42 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -14,8 +14,8 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class FlavorsAccessTestJSON(base.BaseV2ComputeAdminTest):
@@ -27,7 +27,7 @@
@classmethod
def skip_checks(cls):
super(FlavorsAccessTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
+ if not utils.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
msg = "OS-FLV-EXT-DATA extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors_access_negative.py b/tempest/api/compute/admin/test_flavors_access_negative.py
index be165cb..45ca10a 100644
--- a/tempest/api/compute/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/admin/test_flavors_access_negative.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class FlavorsAccessNegativeTestJSON(base.BaseV2ComputeAdminTest):
@@ -30,7 +30,7 @@
@classmethod
def skip_checks(cls):
super(FlavorsAccessNegativeTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
+ if not utils.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
msg = "OS-FLV-EXT-DATA extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs.py b/tempest/api/compute/admin/test_flavors_extra_specs.py
index 747cb42..4d27a22 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class FlavorsExtraSpecsTestJSON(base.BaseV2ComputeAdminTest):
@@ -29,7 +29,7 @@
@classmethod
def skip_checks(cls):
super(FlavorsExtraSpecsTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
+ if not utils.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
msg = "OS-FLV-EXT-DATA extension not enabled."
raise cls.skipException(msg)
@@ -53,12 +53,11 @@
ephemeral=ephemeral,
swap=swap,
rxtx_factor=rxtx)['flavor']
-
- @classmethod
- def resource_cleanup(cls):
- cls.admin_flavors_client.delete_flavor(cls.flavor['id'])
- cls.admin_flavors_client.wait_for_resource_deletion(cls.flavor['id'])
- super(FlavorsExtraSpecsTestJSON, cls).resource_cleanup()
+ cls.addClassResourceCleanup(
+ cls.admin_flavors_client.wait_for_resource_deletion,
+ cls.flavor['id'])
+ cls.addClassResourceCleanup(cls.admin_flavors_client.delete_flavor,
+ cls.flavor['id'])
@decorators.idempotent_id('0b2f9d4b-1ca2-4b99-bb40-165d4bb94208')
def test_flavor_set_get_update_show_unset_keys(self):
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
index f39feb9..5cde39e 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
@@ -15,10 +15,10 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class FlavorsExtraSpecsNegativeTestJSON(base.BaseV2ComputeAdminTest):
@@ -30,7 +30,7 @@
@classmethod
def skip_checks(cls):
super(FlavorsExtraSpecsNegativeTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
+ if not utils.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
msg = "OS-FLV-EXT-DATA extension not enabled."
raise cls.skipException(msg)
@@ -55,12 +55,11 @@
ephemeral=ephemeral,
swap=swap,
rxtx_factor=rxtx)['flavor']
-
- @classmethod
- def resource_cleanup(cls):
- cls.admin_flavors_client.delete_flavor(cls.flavor['id'])
- cls.admin_flavors_client.wait_for_resource_deletion(cls.flavor['id'])
- super(FlavorsExtraSpecsNegativeTestJSON, cls).resource_cleanup()
+ cls.addClassResourceCleanup(
+ cls.admin_flavors_client.wait_for_resource_deletion,
+ cls.flavor['id'])
+ cls.addClassResourceCleanup(cls.admin_flavors_client.delete_flavor,
+ cls.flavor['id'])
@decorators.attr(type=['negative'])
@decorators.idempotent_id('a00a3b81-5641-45a8-ab2b-4a8ec41e1d7d')
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
index 496f119..ba19937 100644
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ b/tempest/api/compute/admin/test_floating_ips_bulk.py
@@ -16,11 +16,11 @@
import netaddr
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
-from tempest import test
CONF = config.CONF
@@ -57,7 +57,7 @@
return
@decorators.idempotent_id('2c8f145f-8012-4cb8-ac7e-95a587f0e4ab')
- @test.services('network')
+ @utils.services('network')
def test_create_list_delete_floating_ips_bulk(self):
# Create, List and delete the Floating IPs Bulk
pool = 'test_pool'
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 256a267..411159b 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -20,10 +20,10 @@
from tempest.api.compute import base
from tempest.common import compute
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -46,6 +46,18 @@
"Less than 2 compute nodes, skipping migration test.")
@classmethod
+ def setup_credentials(cls):
+ # These tests don't attempt any SSH validation nor do they use
+ # floating IPs on the instance, so all we need is a network and
+ # a subnet so the instance being migrated has a single port, but
+ # we need that to make sure we are properly updating the port
+ # host bindings during the live migration.
+ # TODO(mriedem): SSH validation before and after the instance is
+ # live migrated would be a nice test wrinkle addition.
+ cls.set_network_resources(network=True, subnet=True)
+ super(LiveMigrationTest, cls).setup_credentials()
+
+ @classmethod
def setup_clients(cls):
super(LiveMigrationTest, cls).setup_clients()
cls.admin_migration_client = cls.os_admin.migrations_client
@@ -122,7 +134,7 @@
@decorators.skip_because(bug="1524898")
@decorators.idempotent_id('5071cf17-3004-4257-ae61-73a84e28badd')
- @test.services('volume')
+ @utils.services('volume')
def test_volume_backed_live_migration(self):
self._test_live_migration(volume_backed=True)
diff --git a/tempest/api/compute/admin/test_quotas.py b/tempest/api/compute/admin/test_quotas.py
index 1aa9227..c2bdf7e 100644
--- a/tempest/api/compute/admin/test_quotas.py
+++ b/tempest/api/compute/admin/test_quotas.py
@@ -17,6 +17,7 @@
from testtools import matchers
from tempest.api.compute import base
+from tempest.common import identity
from tempest.common import tempest_fixtures as fixtures
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
@@ -93,10 +94,11 @@
# Verify that GET shows the updated quota set of project
project_name = data_utils.rand_name('cpu_quota_project')
project_desc = project_name + '-desc'
- project = self.identity_utils.create_project(name=project_name,
- description=project_desc)
+ project = identity.identity_utils(self.os_admin).create_project(
+ name=project_name, description=project_desc)
project_id = project['id']
- self.addCleanup(self.identity_utils.delete_project, project_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
+ project_id)
self.adm_client.update_quota_set(project_id, ram='5120')
quota_set = self.adm_client.show_quota_set(project_id)['quota_set']
@@ -106,12 +108,12 @@
user_name = data_utils.rand_name('cpu_quota_user')
password = data_utils.rand_password()
email = user_name + '@testmail.tm'
- user = self.identity_utils.create_user(username=user_name,
- password=password,
- project=project,
- email=email)
+ user = identity.identity_utils(self.os_admin).create_user(
+ username=user_name, password=password, project=project,
+ email=email)
user_id = user['id']
- self.addCleanup(self.identity_utils.delete_user, user_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_user,
+ user_id)
self.adm_client.update_quota_set(project_id,
user_id=user_id,
@@ -125,10 +127,11 @@
# Admin can delete the resource quota set for a project
project_name = data_utils.rand_name('ram_quota_project')
project_desc = project_name + '-desc'
- project = self.identity_utils.create_project(name=project_name,
- description=project_desc)
+ project = identity.identity_utils(self.os_admin).create_project(
+ name=project_name, description=project_desc)
project_id = project['id']
- self.addCleanup(self.identity_utils.delete_project, project_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
+ project_id)
quota_set_default = (self.adm_client.show_quota_set(project_id)
['quota_set'])
ram_default = quota_set_default['ram']
@@ -157,8 +160,7 @@
def _restore_default_quotas(self, original_defaults):
LOG.debug("restoring quota class defaults")
- self.adm_client.update_quota_class_set(
- 'default', **original_defaults)['quota_class_set']
+ self.adm_client.update_quota_class_set('default', **original_defaults)
# NOTE(sdague): this test is problematic as it changes
# global state, and possibly needs to be part of a set of
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 747f320..5ef7ee4 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -13,11 +13,11 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -89,7 +89,7 @@
condition=CONF.service_available.neutron)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7c6c8f3b-2bf6-4918-b240-57b136a66aa0')
- @test.services('network')
+ @utils.services('network')
def test_security_groups_exceed_limit(self):
# Negative test: Creation Security Groups over limit should FAIL
# Set the quota to number of used security groups
@@ -108,7 +108,7 @@
condition=CONF.service_available.neutron)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('6e9f436d-f1ed-4f8e-a493-7275dfaa4b4d')
- @test.services('network')
+ @utils.services('network')
def test_security_groups_rules_exceed_limit(self):
# Negative test: Creation of Security Group Rules should FAIL
# when we reach limit maxSecurityGroupRules
diff --git a/tempest/api/compute/admin/test_security_groups.py b/tempest/api/compute/admin/test_security_groups.py
index 8abe03a..ff9caa3 100644
--- a/tempest/api/compute/admin/test_security_groups.py
+++ b/tempest/api/compute/admin/test_security_groups.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class SecurityGroupsTestAdminJSON(base.BaseV2ComputeAdminTest):
@@ -34,7 +34,7 @@
self.client.delete_security_group(securitygroup_id)
@decorators.idempotent_id('49667619-5af9-4c63-ab5d-2cfdd1c8f7f1')
- @test.services('network')
+ @utils.services('network')
def test_list_security_groups_list_all_tenants_filter(self):
# Admin can list security groups of all tenants
# List of all security groups created
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index 3656770..f720b84 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -61,7 +61,7 @@
flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
- self.servers[0]['id'],
+ self.s1_id,
flavor_ref['id'])
@decorators.idempotent_id('7368a427-2f26-4ad9-9ba9-911a0ec2b0db')
@@ -83,7 +83,7 @@
flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
- self.servers[0]['id'],
+ self.s1_id,
flavor_ref['id'])
@decorators.attr(type=['negative'])
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
index 22a5bc4..d715a42 100644
--- a/tempest/api/compute/admin/test_volume_swap.py
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -13,11 +13,11 @@
import time
from tempest.api.compute import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -80,7 +80,7 @@
raise lib_exc.TimeoutException(message)
@decorators.idempotent_id('1769f00d-a693-4d67-a631-6a3496773813')
- @test.services('volume')
+ @utils.services('volume')
def test_volume_swap(self):
# Create two volumes.
# NOTE(gmann): Volumes are created before server creation so that
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 746f83a..705814c 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -116,42 +116,6 @@
cls.ssh_user = CONF.validation.image_ssh_user
cls.image_ssh_user = CONF.validation.image_ssh_user
cls.image_ssh_password = CONF.validation.image_ssh_password
- cls.servers = []
- cls.images = []
- cls.security_groups = []
- cls.server_groups = []
- cls.volumes = []
-
- @classmethod
- def resource_cleanup(cls):
- cls.clear_resources('images', cls.images,
- cls.compute_images_client.delete_image)
- cls.clear_servers()
- cls.clear_resources('security groups', cls.security_groups,
- cls.security_groups_client.delete_security_group)
- cls.clear_resources('server groups', cls.server_groups,
- cls.server_groups_client.delete_server_group)
- cls.clear_volumes()
- super(BaseV2ComputeTest, cls).resource_cleanup()
-
- @classmethod
- def clear_servers(cls):
- LOG.debug('Clearing servers: %s', ','.join(
- server['id'] for server in cls.servers))
- for server in cls.servers:
- try:
- test_utils.call_and_ignore_notfound_exc(
- cls.servers_client.delete_server, server['id'])
- except Exception:
- LOG.exception('Deleting server %s failed', server['id'])
-
- for server in cls.servers:
- try:
- waiters.wait_for_server_termination(cls.servers_client,
- server['id'])
- except Exception:
- LOG.exception('Waiting for deletion of server %s failed',
- server['id'])
@classmethod
def server_check_teardown(cls):
@@ -190,7 +154,7 @@
@classmethod
def create_test_server(cls, validatable=False, volume_backed=False,
- **kwargs):
+ validation_resources=None, **kwargs):
"""Wrapper utility that returns a test server.
This wrapper utility calls the common create test server and
@@ -200,6 +164,10 @@
:param validatable: Whether the server will be pingable or sshable.
:param volume_backed: Whether the instance is volume backed or not.
+ :param validation_resources: Dictionary of validation resources as
+ returned by `get_class_validation_resources`.
+ :param kwargs: Extra arguments are passed down to the
+ `compute.create_test_server` call.
"""
if 'name' not in kwargs:
kwargs['name'] = data_utils.rand_name(cls.__name__ + "-server")
@@ -216,12 +184,20 @@
body, servers = compute.create_test_server(
cls.os_primary,
validatable,
- validation_resources=cls.validation_resources,
+ validation_resources=validation_resources,
tenant_network=tenant_network,
volume_backed=volume_backed,
**kwargs)
- cls.servers.extend(servers)
+        # Schedule a wait and then a delete for each server: cleanups run
+        # in reverse order, so all deletes are issued before any wait starts
+ for server in servers:
+ cls.addClassResourceCleanup(waiters.wait_for_server_termination,
+ cls.servers_client, server['id'])
+ for server in servers:
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ cls.servers_client.delete_server, server['id'])
return body
@@ -233,7 +209,10 @@
description = data_utils.rand_name('description')
body = cls.security_groups_client.create_security_group(
name=name, description=description)['security_group']
- cls.security_groups.append(body['id'])
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ cls.security_groups_client.delete_security_group,
+ body['id'])
return body
@@ -245,7 +224,10 @@
policy = ['affinity']
body = cls.server_groups_client.create_server_group(
name=name, policies=policy)['server_group']
- cls.server_groups.append(body['id'])
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ cls.server_groups_client.delete_server_group,
+ body['id'])
return body
def wait_for(self, condition):
@@ -263,18 +245,6 @@
return
time.sleep(self.build_interval)
- @staticmethod
- def _delete_volume(volumes_client, volume_id):
- """Deletes the given volume and waits for it to be gone."""
- try:
- volumes_client.delete_volume(volume_id)
- # TODO(mriedem): We should move the wait_for_resource_deletion
- # into the delete_volume method as a convenience to the caller.
- volumes_client.wait_for_resource_deletion(volume_id)
- except lib_exc.NotFound:
- LOG.warning("Unable to delete volume '%s' since it was not found. "
- "Maybe it was already deleted?", volume_id)
-
@classmethod
def prepare_instance_network(cls):
if (CONF.validation.auth_method != 'disabled' and
@@ -292,8 +262,14 @@
image = cls.compute_images_client.create_image(server_id, name=name,
**kwargs)
- image_id = data_utils.parse_image_id(image.response['location'])
- cls.images.append(image_id)
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(image.response['location'])
+ cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
+ cls.compute_images_client.delete_image,
+ image_id)
if wait_until is not None:
try:
@@ -325,14 +301,34 @@
return image
@classmethod
- def rebuild_server(cls, server_id, validatable=False, **kwargs):
- # Destroy an existing server and creates a new one
+ def recreate_server(cls, server_id, validatable=False, **kwargs):
+ """Destroy an existing class level server and creates a new one
+
+ Some test classes use a test server that can be used by multiple
+ tests. This is done to optimise runtime and test load.
+ If something goes wrong with the test server, it can be rebuilt
+ using this helper.
+
+ This helper can also be used for the initial provisioning if no
+ server_id is specified.
+
+ :param server_id: UUID of the server to be rebuilt. If None is
+ specified, a new server is provisioned.
+        :param validatable: whether the server needs to be
+ validatable. When True, validation resources are acquired via
+ the `get_class_validation_resources` helper.
+        :param kwargs: extra parameters are passed through to the
+ `create_test_server` call.
+ :return: the UUID of the created server.
+ """
if server_id:
cls.delete_server(server_id)
cls.password = data_utils.rand_password()
server = cls.create_test_server(
validatable,
+ validation_resources=cls.get_class_validation_resources(
+ cls.os_primary),
wait_until='ACTIVE',
adminPass=cls.password,
**kwargs)
@@ -360,17 +356,33 @@
@classmethod
def delete_volume(cls, volume_id):
"""Deletes the given volume and waits for it to be gone."""
- cls._delete_volume(cls.volumes_client, volume_id)
+ try:
+ cls.volumes_client.delete_volume(volume_id)
+ # TODO(mriedem): We should move the wait_for_resource_deletion
+ # into the delete_volume method as a convenience to the caller.
+ cls.volumes_client.wait_for_resource_deletion(volume_id)
+ except lib_exc.NotFound:
+ LOG.warning("Unable to delete volume '%s' since it was not found. "
+ "Maybe it was already deleted?", volume_id)
@classmethod
- def get_server_ip(cls, server):
+ def get_server_ip(cls, server, validation_resources=None):
"""Get the server fixed or floating IP.
Based on the configuration we're in, return a correct ip
address for validating that a guest is up.
+
+ :param server: The server dict as returned by the API
+ :param validation_resources: The dict of validation resources
+ provisioned for the server.
"""
if CONF.validation.connect_method == 'floating':
- return cls.validation_resources['floating_ip']['ip']
+ if validation_resources:
+ return validation_resources['floating_ip']['ip']
+ else:
+ msg = ('When validation.connect_method equals floating, '
+ 'validation_resources cannot be None')
+ raise exceptions.InvalidParam(invalid_param=msg)
elif CONF.validation.connect_method == 'fixed':
addresses = server['addresses'][CONF.validation.network_for_ssh]
for address in addresses:
@@ -401,28 +413,31 @@
if image_ref is not None:
kwargs['imageRef'] = image_ref
volume = cls.volumes_client.create_volume(**kwargs)['volume']
- cls.volumes.append(volume)
+ cls.addClassResourceCleanup(
+ cls.volumes_client.wait_for_resource_deletion, volume['id'])
+ cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
+ cls.volumes_client.delete_volume,
+ volume['id'])
waiters.wait_for_volume_resource_status(cls.volumes_client,
volume['id'], 'available')
return volume
- @classmethod
- def clear_volumes(cls):
- LOG.debug('Clearing volumes: %s', ','.join(
- volume['id'] for volume in cls.volumes))
- for volume in cls.volumes:
- try:
- test_utils.call_and_ignore_notfound_exc(
- cls.volumes_client.delete_volume, volume['id'])
- except Exception:
- LOG.exception('Deleting volume %s failed', volume['id'])
+ def _detach_volume(self, server, volume):
+ """Helper method to detach a volume.
- for volume in cls.volumes:
- try:
- cls.volumes_client.wait_for_resource_deletion(volume['id'])
- except Exception:
- LOG.exception('Waiting for deletion of volume %s failed',
- volume['id'])
+ Ignores 404 responses if the volume or server do not exist, or the
+ volume is already detached from the server.
+ """
+ try:
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+ # Check the status. You can only detach an in-use volume, otherwise
+ # the compute API will return a 400 response.
+ if volume['status'] == 'in-use':
+ self.servers_client.detach_volume(server['id'], volume['id'])
+ except exceptions.NotFound:
+ # Ignore 404s on detach in case the server is deleted or the volume
+ # is already detached.
+ pass
def attach_volume(self, server, volume, device=None, check_reserved=False):
"""Attaches volume to server and waits for 'in-use' volume status.
@@ -451,9 +466,7 @@
self.volumes_client, volume['id'], 'available')
# Ignore 404s on detach in case the server is deleted or the volume
# is already detached.
- self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self.servers_client.detach_volume,
- server['id'], volume['id'])
+ self.addCleanup(self._detach_volume, server, volume)
statuses = ['in-use']
if check_reserved:
statuses.append('reserved')
@@ -495,12 +508,10 @@
def get_host_other_than(self, server_id):
source_host = self.get_host_for_server(server_id)
- list_hosts_resp = self.os_admin.hosts_client.list_hosts()['hosts']
- hosts = [
- host_record['host_name']
- for host_record in list_hosts_resp
- if host_record['service'] == 'compute'
- ]
+ hypers = self.os_admin.hypervisor_client.list_hypervisors(
+ )['hypervisors']
+ hosts = [hyper['hypervisor_hostname'] for hyper in hypers
+ if hyper['state'] == 'up' and hyper['status'] == 'enabled']
for target_host in hosts:
if source_host != target_host:
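A note on the cleanup ordering added to `create_volume` above: class resource cleanups run in reverse registration order, so registering `wait_for_resource_deletion` first and the NotFound-tolerant `delete_volume` second means that, at teardown, the delete fires before the wait. A minimal standalone sketch of that ordering (plain Python stand-ins, not the real Tempest helpers):

    cleanups = []

    def add_class_resource_cleanup(func, *args):
        # Stand-in for addClassResourceCleanup: record the callback.
        cleanups.append((func, args))

    def run_class_cleanups():
        # Cleanups execute in reverse registration order (LIFO).
        for func, args in reversed(cleanups):
            func(*args)

    add_class_resource_cleanup(print, 'wait for volume deletion')       # registered first
    add_class_resource_cleanup(print, 'delete volume (ignore NotFound)')  # registered second

    run_class_cleanups()
    # prints:
    #   delete volume (ignore NotFound)
    #   wait for volume deletion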
diff --git a/tempest/api/compute/flavors/test_flavors.py b/tempest/api/compute/flavors/test_flavors.py
index d5bb45a..20294e9 100644
--- a/tempest/api/compute/flavors/test_flavors.py
+++ b/tempest/api/compute/flavors/test_flavors.py
@@ -18,8 +18,6 @@
class FlavorsV2TestJSON(base.BaseV2ComputeTest):
- _min_disk = 'minDisk'
- _min_ram = 'minRam'
@decorators.attr(type='smoke')
@decorators.idempotent_id('e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59')
@@ -89,7 +87,7 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_disk: flavor['disk'] + 1}
+ params = {'minDisk': flavor['disk'] + 1}
flavors = self.flavors_client.list_flavors(detail=True,
**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@@ -100,7 +98,7 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_ram: flavor['ram'] + 1}
+ params = {'minRam': flavor['ram'] + 1}
flavors = self.flavors_client.list_flavors(detail=True,
**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@@ -111,7 +109,7 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_disk: flavor['disk'] + 1}
+ params = {'minDisk': flavor['disk'] + 1}
flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@@ -121,6 +119,6 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_ram: flavor['ram'] + 1}
+ params = {'minRam': flavor['ram'] + 1}
flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index ebb9d2e..efd4f0e 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -19,11 +19,11 @@
from tempest.api.compute import base
from tempest.common import image as common_image
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -43,7 +43,7 @@
'[image-feature-enabled].')
@decorators.attr(type=['negative'])
- @test.services('image')
+ @utils.services('image')
@decorators.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
def test_boot_with_low_ram(self):
"""Try boot a vm with lower than min ram
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions.py b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
index faa7b5d..8938570 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
@@ -16,22 +16,22 @@
import testtools
from tempest.api.compute.floating_ips import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
class FloatingIPsTestJSON(base.BaseFloatingIPsTest):
- server_id = None
- floating_ip = None
@classmethod
def skip_checks(cls):
super(FloatingIPsTestJSON, cls).skip_checks()
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
@@ -43,7 +43,6 @@
@classmethod
def resource_setup(cls):
super(FloatingIPsTestJSON, cls).resource_setup()
- cls.floating_ip_id = None
# Server creation
server = cls.create_test_server(wait_until='ACTIVE')
@@ -51,18 +50,11 @@
# Floating IP creation
body = cls.client.create_floating_ip(
pool=CONF.network.floating_network_name)['floating_ip']
+ cls.addClassResourceCleanup(cls.client.delete_floating_ip, body['id'])
cls.floating_ip_id = body['id']
cls.floating_ip = body['ip']
- @classmethod
- def resource_cleanup(cls):
- # Deleting the floating IP which is created in this method
- if cls.floating_ip_id:
- cls.client.delete_floating_ip(cls.floating_ip_id)
- super(FloatingIPsTestJSON, cls).resource_cleanup()
-
@decorators.idempotent_id('f7bfb946-297e-41b8-9e8c-aba8e9bb5194')
- @test.services('network')
def test_allocate_floating_ip(self):
# Positive test:Allocation of a new floating IP to a project
# should be successful
@@ -78,7 +70,6 @@
self.assertIn(floating_ip_details, body)
@decorators.idempotent_id('de45e989-b5ca-4a9b-916b-04a52e7bbb8b')
- @test.services('network')
def test_delete_floating_ip(self):
# Positive test:Deletion of valid floating IP from project
# should be successful
@@ -93,7 +84,6 @@
self.client.wait_for_resource_deletion(floating_ip_body['id'])
@decorators.idempotent_id('307efa27-dc6f-48a0-8cd2-162ce3ef0b52')
- @test.services('network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_associate_disassociate_floating_ip(self):
@@ -116,7 +106,6 @@
self.server_id)
@decorators.idempotent_id('6edef4b2-aaf1-4abc-bbe3-993e2561e0fe')
- @test.services('network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_associate_already_associated_floating_ip(self):
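Across the floating IP test modules in this patch, the per-test `@test.services('network')` decorators are removed and the network check moves to a single class-level `skip_checks`. A small sketch of that gating style, with an illustrative class name (the helpers are the ones imported in the modules above):

    from tempest.api.compute.floating_ips import base
    from tempest.common import utils


    class ExampleNetworkGatedTest(base.BaseFloatingIPsTest):
        """Illustrative class, not part of the patch."""

        @classmethod
        def skip_checks(cls):
            # A single class-level gate replaces tagging every test method
            # with @test.services('network'); the decorator form survives as
            # @utils.services(...) for methods that touch other services.
            super(ExampleNetworkGatedTest, cls).skip_checks()
            if not utils.get_service_list()['network']:
                raise cls.skipException("network service not enabled.")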
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py b/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
index 483bd95..c3d7816 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
@@ -16,11 +16,11 @@
import testtools
from tempest.api.compute.floating_ips import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -30,6 +30,8 @@
@classmethod
def skip_checks(cls):
super(FloatingIPsNegativeTestJSON, cls).skip_checks()
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
@@ -58,7 +60,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('6e0f059b-e4dd-48fb-8207-06e3bba5b074')
- @test.services('network')
def test_allocate_floating_ip_from_nonexistent_pool(self):
# Negative test:Allocation of a new floating IP from a nonexistent_pool
# to a project should fail
@@ -68,7 +69,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('ae1c55a8-552b-44d4-bfb6-2a115a15d0ba')
- @test.services('network')
def test_delete_nonexistent_floating_ip(self):
# Negative test:Deletion of a nonexistent floating IP
# from project should fail
@@ -79,7 +79,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('595fa616-1a71-4670-9614-46564ac49a4c')
- @test.services('network')
def test_associate_nonexistent_floating_ip(self):
# Negative test:Association of a non existent floating IP
# to specific server should fail
@@ -90,7 +89,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('0a081a66-e568-4e6b-aa62-9587a876dca8')
- @test.services('network')
def test_dissociate_nonexistent_floating_ip(self):
# Negative test:Dissociation of a non existent floating IP should fail
# Dissociating non existent floating IP
@@ -100,7 +98,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('804b4fcb-bbf5-412f-925d-896672b61eb3')
- @test.services('network')
def test_associate_ip_to_server_without_passing_floating_ip(self):
# Negative test:Association of empty floating IP to specific server
# should raise NotFound or BadRequest(In case of Nova V2.1) exception.
@@ -110,7 +107,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('58a80596-ffb2-11e6-9393-fa163e4fa634')
- @test.services('network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_associate_ip_to_server_with_floating_ip(self):
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips.py b/tempest/api/compute/floating_ips/test_list_floating_ips.py
index 913b992..516c544 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -26,6 +26,8 @@
@classmethod
def skip_checks(cls):
super(FloatingIPDetailsTestJSON, cls).skip_checks()
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
@@ -39,21 +41,14 @@
def resource_setup(cls):
super(FloatingIPDetailsTestJSON, cls).resource_setup()
cls.floating_ip = []
- cls.floating_ip_id = []
for _ in range(3):
body = cls.client.create_floating_ip(
pool=CONF.network.floating_network_name)['floating_ip']
+ cls.addClassResourceCleanup(cls.client.delete_floating_ip,
+ body['id'])
cls.floating_ip.append(body)
- cls.floating_ip_id.append(body['id'])
-
- @classmethod
- def resource_cleanup(cls):
- for f_id in cls.floating_ip_id:
- cls.client.delete_floating_ip(f_id)
- super(FloatingIPDetailsTestJSON, cls).resource_cleanup()
@decorators.idempotent_id('16db31c3-fb85-40c9-bbe2-8cf7b67ff99f')
- @test.services('network')
def test_list_floating_ips(self):
# Positive test:Should return the list of floating IPs
body = self.client.list_floating_ips()['floating_ips']
@@ -64,7 +59,6 @@
self.assertIn(self.floating_ip[i], floating_ips)
@decorators.idempotent_id('eef497e0-8ff7-43c8-85ef-558440574f84')
- @test.services('network')
def test_get_floating_ip_details(self):
# Positive test:Should be able to GET the details of floatingIP
# Creating a floating IP for which details are to be checked
@@ -86,7 +80,6 @@
self.assertEqual(floating_ip_id, body['id'])
@decorators.idempotent_id('df389fc8-56f5-43cc-b290-20eda39854d3')
- @test.services('network')
def test_list_floating_ip_pools(self):
# Positive test:Should return the list of floating IP Pools
floating_ip_pools = self.pools_client.list_floating_ip_pools()
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py b/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
index b5bbb8c..0ade872 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
@@ -14,11 +14,11 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -28,6 +28,8 @@
@classmethod
def skip_checks(cls):
super(FloatingIPDetailsNegativeTestJSON, cls).skip_checks()
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
@@ -38,7 +40,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7ab18834-4a4b-4f28-a2c5-440579866695')
- @test.services('network')
def test_get_nonexistent_floating_ip_details(self):
# Negative test:Should not be able to GET the details
# of non-existent floating IP
diff --git a/tempest/api/compute/images/test_image_metadata.py b/tempest/api/compute/images/test_image_metadata.py
index 8d503dc..b497626 100644
--- a/tempest/api/compute/images/test_image_metadata.py
+++ b/tempest/api/compute/images/test_image_metadata.py
@@ -20,6 +20,7 @@
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
@@ -70,7 +71,9 @@
body = cls.glance_client.create_image(**params)
body = body['image'] if 'image' in body else body
cls.image_id = body['id']
- cls.images.append(cls.image_id)
+ cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
+ cls.glance_client.delete_image,
+ cls.image_id)
image_file = six.BytesIO((b'*' * 1024))
if CONF.image_feature_enabled.api_v1:
cls.glance_client.update_image(cls.image_id, data=image_file)
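The image cleanup above is scheduled through `test_utils.call_and_ignore_notfound_exc`, so an image that a test has already deleted does not make the class-level cleanup fail. A simplified stand-in for that idiom (not the real tempest.lib implementation):

    class NotFound(Exception):
        """Stand-in for tempest.lib.exceptions.NotFound."""


    def call_and_ignore_notfound_exc(func, *args, **kwargs):
        # Attempt the call; swallow NotFound so an already-deleted resource
        # does not abort the remaining cleanups.
        try:
            return func(*args, **kwargs)
        except NotFound:
            pass


    def delete_image(image_id):
        raise NotFound('image %s is already gone' % image_id)


    call_and_ignore_notfound_exc(delete_image, 'an-image-id')  # no exception raised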
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index 5987d39..058e7e6 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -15,6 +15,7 @@
from tempest.api.compute import base
from tempest import config
+from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
@@ -74,7 +75,6 @@
# Verify the image was deleted correctly
self.client.delete_image(image['id'])
- self.images.remove(image['id'])
self.client.wait_for_resource_deletion(image['id'])
@decorators.idempotent_id('3b7c6fe4-dfe7-477c-9243-b06359db51e6')
@@ -87,5 +87,9 @@
# 4 byte utf-8 character.
utf8_name = data_utils.rand_name(b'\xe2\x82\xa1'.decode('utf-8'))
body = self.client.create_image(self.server_id, name=utf8_name)
- image_id = data_utils.parse_image_id(body.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", body.response, "lt"):
+ image_id = body['image_id']
+ else:
+ image_id = data_utils.parse_image_id(body.response['location'])
self.addCleanup(self.client.delete_image, image_id)
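The microversion branch added here (and repeated in the other image tests and in `create_image_from_server`) exists because, starting with compute API microversion 2.45, the create-image action returns the snapshot UUID in the response body rather than in a Location header; the patch picks between the two by inspecting the response header with `api_version_utils.compare_version_header_to_response`. A standalone sketch of the underlying logic, with hypothetical names:

    def extract_image_id(body, headers, microversion):
        # Hypothetical helper, for illustration only.
        if microversion >= (2, 45):
            # 2.45 and later: the snapshot UUID is in the response body.
            return body['image_id']
        # Older microversions: parse it out of the Location header.
        return headers['location'].rstrip('/').rsplit('/', 1)[-1]


    print(extract_image_id({'image_id': 'abc-123'}, {}, (2, 45)))
    print(extract_image_id({}, {'location': 'http://glance/v2/images/abc-123'}, (2, 1)))
    # both print: abc-123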
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index cf32ba3..a2e58c9 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -19,6 +19,7 @@
from tempest.api.compute import base
from tempest.common import waiters
from tempest import config
+from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -51,7 +52,7 @@
self._reset_server()
def _reset_server(self):
- self.__class__.server_id = self.rebuild_server(self.server_id)
+ self.__class__.server_id = self.recreate_server(self.server_id)
@classmethod
def skip_checks(cls):
@@ -105,9 +106,12 @@
self.assertRaises(lib_exc.Conflict, self.create_image_from_server,
self.server_id)
- image_id = data_utils.parse_image_id(image.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(image.response['location'])
self.client.delete_image(image_id)
- self.images.remove(image_id)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('084f0cbc-500a-4963-8a4e-312905862581')
@@ -124,12 +128,15 @@
# Return an error while trying to delete an image what is creating
image = self.create_image_from_server(self.server_id)
- image_id = data_utils.parse_image_id(image.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(image.response['location'])
self.addCleanup(self._reset_server)
# Do not wait, attempt to delete the image, ensure it's successful
self.client.delete_image(image_id)
- self.images.remove(image_id)
self.assertRaises(lib_exc.NotFound,
self.client.show_image, image_id)
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index acc8b3e..d83d8df 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -23,6 +23,7 @@
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
@@ -74,7 +75,10 @@
body = cls.glance_client.create_image(**params)
body = body['image'] if 'image' in body else body
image_id = body['id']
- cls.images.append(image_id)
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ cls.compute_images_client.delete_image,
+ image_id)
# Wait 1 second between creation and upload to ensure a delta
# between created_at and updated_at.
time.sleep(1)
diff --git a/tempest/api/compute/security_groups/base.py b/tempest/api/compute/security_groups/base.py
index 6148e16..54a6da8 100644
--- a/tempest/api/compute/security_groups/base.py
+++ b/tempest/api/compute/security_groups/base.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
-from tempest import test
CONF = config.CONF
@@ -24,6 +24,12 @@
class BaseSecurityGroupsTest(base.BaseV2ComputeTest):
@classmethod
+ def skip_checks(cls):
+ super(BaseSecurityGroupsTest, cls).skip_checks()
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
+
+ @classmethod
def setup_credentials(cls):
# A network and a subnet will be created for these tests
cls.set_network_resources(network=True, subnet=True)
@@ -32,7 +38,7 @@
@staticmethod
def generate_random_security_group_id():
if (CONF.service_available.neutron and
- test.is_extension_enabled('security-group', 'network')):
+ utils.is_extension_enabled('security-group', 'network')):
return data_utils.rand_uuid()
else:
return data_utils.rand_int_id(start=999)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules.py b/tempest/api/compute/security_groups/test_security_group_rules.py
index 124db0e..4c99ea6 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules.py
@@ -15,7 +15,6 @@
from tempest.api.compute.security_groups import base
from tempest.lib import decorators
-from tempest import test
class SecurityGroupRulesTestJSON(base.BaseSecurityGroupsTest):
@@ -55,7 +54,6 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('850795d7-d4d3-4e55-b527-a774c0123d3a')
- @test.services('network')
def test_security_group_rules_create(self):
# Positive test: Creation of Security Group rule
# should be successful
@@ -73,7 +71,6 @@
self._check_expected_response(rule)
@decorators.idempotent_id('7a01873e-3c38-4f30-80be-31a043cfe2fd')
- @test.services('network')
def test_security_group_rules_create_with_optional_cidr(self):
# Positive test: Creation of Security Group rule
# with optional argument cidr
@@ -96,7 +93,6 @@
self._check_expected_response(rule)
@decorators.idempotent_id('7f5d2899-7705-4d4b-8458-4505188ffab6')
- @test.services('network')
def test_security_group_rules_create_with_optional_group_id(self):
# Positive test: Creation of Security Group rule
# with optional argument group_id
@@ -125,7 +121,6 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('a6154130-5a55-4850-8be4-5e9e796dbf17')
- @test.services('network')
def test_security_group_rules_list(self):
# Positive test: Created Security Group rules should be
# in the list of all rules
@@ -163,7 +158,6 @@
self.assertNotEmpty([i for i in rules if i['id'] == rule2_id])
@decorators.idempotent_id('fc5c5acf-2091-43a6-a6ae-e42760e9ffaf')
- @test.services('network')
def test_security_group_rules_delete_when_peer_group_deleted(self):
# Positive test:rule will delete when peer group deleting
# Creating a Security Group to add rules to it
@@ -178,7 +172,7 @@
ip_protocol=self.ip_protocol,
from_port=self.from_port,
to_port=self.to_port,
- group_id=sg2_id)['security_group_rule']
+ group_id=sg2_id)
# Delete group2
self.security_groups_client.delete_security_group(sg2_id)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules_negative.py b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
index 4efb8b7..8283aae 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules_negative.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
@@ -17,7 +17,6 @@
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class SecurityGroupRulesNegativeTestJSON(base.BaseSecurityGroupsTest):
@@ -29,7 +28,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('1d507e98-7951-469b-82c3-23f1e6b8c254')
- @test.services('network')
def test_create_security_group_rule_with_non_existent_id(self):
# Negative test: Creation of Security Group rule should FAIL
# with non existent Parent group id
@@ -46,7 +44,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('2244d7e4-adb7-4ecb-9930-2d77e123ce4f')
- @test.services('network')
def test_create_security_group_rule_with_invalid_id(self):
# Negative test: Creation of Security Group rule should FAIL
# with Parent group id which is not integer
@@ -63,7 +60,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('8bd56d02-3ffa-4d67-9933-b6b9a01d6089')
- @test.services('network')
def test_create_security_group_rule_duplicate(self):
# Negative test: Create Security Group rule duplicate should fail
# Creating a Security Group to add rule to it
@@ -88,7 +84,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('84c81249-9f6e-439c-9bbf-cbb0d2cddbdf')
- @test.services('network')
def test_create_security_group_rule_with_invalid_ip_protocol(self):
# Negative test: Creation of Security Group rule should FAIL
# with invalid ip_protocol
@@ -108,7 +103,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('12bbc875-1045-4f7a-be46-751277baedb9')
- @test.services('network')
def test_create_security_group_rule_with_invalid_from_port(self):
# Negative test: Creation of Security Group rule should FAIL
# with invalid from_port
@@ -127,7 +121,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('ff88804d-144f-45d1-bf59-dd155838a43a')
- @test.services('network')
def test_create_security_group_rule_with_invalid_to_port(self):
# Negative test: Creation of Security Group rule should FAIL
# with invalid to_port
@@ -146,7 +139,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('00296fa9-0576-496a-ae15-fbab843189e0')
- @test.services('network')
def test_create_security_group_rule_with_invalid_port_range(self):
# Negative test: Creation of Security Group rule should FAIL
# with invalid port range.
@@ -165,7 +157,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('56fddcca-dbb8-4494-a0db-96e9f869527c')
- @test.services('network')
def test_delete_security_group_rule_with_non_existent_id(self):
# Negative test: Deletion of Security Group rule should be FAIL
# with non existent id
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index a101a19..62d5bea 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -18,7 +18,6 @@
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class SecurityGroupsTestJSON(base.BaseSecurityGroupsTest):
@@ -30,7 +29,6 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('eb2b087d-633d-4d0d-a7bd-9e6ba35b32de')
- @test.services('network')
def test_security_groups_create_list_delete(self):
# Positive test:Should return the list of Security Groups
# Create 3 Security Groups
@@ -54,15 +52,13 @@
self.client.wait_for_resource_deletion(sg['id'])
# Now check if all the created Security Groups are deleted
fetched_list = self.client.list_security_groups()['security_groups']
- deleted_sgs = \
- [sg for sg in security_group_list if sg in fetched_list]
+ deleted_sgs = [sg for sg in security_group_list if sg in fetched_list]
self.assertFalse(deleted_sgs,
"Failed to delete Security Group %s "
"list" % ', '.join(m_group['name']
for m_group in deleted_sgs))
@decorators.idempotent_id('ecc0da4a-2117-48af-91af-993cca39a615')
- @test.services('network')
def test_security_group_create_get_delete(self):
# Security Group should be created, fetched and deleted
# with char space between name along with
@@ -83,7 +79,6 @@
self.client.wait_for_resource_deletion(securitygroup['id'])
@decorators.idempotent_id('fe4abc0d-83f5-4c50-ad11-57a1127297a2')
- @test.services('network')
def test_server_security_groups(self):
# Checks that security groups may be added and linked to a server
# and not deleted if the server is active.
@@ -125,7 +120,6 @@
self.client.delete_security_group(sg2['id'])
@decorators.idempotent_id('7d4e1d3c-3209-4d6d-b020-986304ebad1f')
- @test.services('network')
def test_update_security_groups(self):
# Update security group name and description
# Create a security group
@@ -144,7 +138,6 @@
self.assertEqual(s_new_des, fetched_group['description'])
@decorators.idempotent_id('79517d60-535a-438f-af3d-e6feab1cbea7')
- @test.services('network')
def test_list_security_groups_by_server(self):
# Create a couple security groups that we will use
# for the server resource this test creates
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index c4dff15..9c44bb2 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -20,7 +20,6 @@
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -34,7 +33,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('673eaec1-9b3e-48ed-bdf1-2786c1b9661c')
- @test.services('network')
def test_security_group_get_nonexistent_group(self):
# Negative test:Should not be able to GET the details
# of non-existent Security Group
@@ -46,7 +44,6 @@
condition=CONF.service_available.neutron)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('1759c3cb-b0fc-44b7-86ce-c99236be911d')
- @test.services('network')
def test_security_group_create_with_invalid_group_name(self):
# Negative test: Security Group should not be created with group name
# as an empty string/with white spaces/chars more than 255
@@ -69,7 +66,6 @@
condition=CONF.service_available.neutron)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('777b6f14-aca9-4758-9e84-38783cfa58bc')
- @test.services('network')
def test_security_group_create_with_invalid_group_description(self):
# Negative test: Security Group should not be created with description
# longer than 255 chars. Empty description is allowed by the API
@@ -85,7 +81,6 @@
@testtools.skipIf(CONF.service_available.neutron,
"Neutron allows duplicate names for security groups")
@decorators.attr(type=['negative'])
- @test.services('network')
def test_security_group_create_with_duplicate_name(self):
# Negative test:Security Group with duplicate name should not
# be created
@@ -99,7 +94,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('36a1629f-c6da-4a26-b8b8-55e7e5d5cd58')
- @test.services('network')
def test_delete_the_default_security_group(self):
# Negative test:Deletion of the "default" Security Group should Fail
default_security_group_id = None
@@ -115,7 +109,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('6727c00b-214c-4f9e-9a52-017ac3e98411')
- @test.services('network')
def test_delete_nonexistent_security_group(self):
# Negative test:Deletion of a non-existent Security Group should fail
non_exist_id = self.generate_random_security_group_id()
@@ -124,7 +117,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('1438f330-8fa4-4aeb-8a94-37c250106d7f')
- @test.services('network')
def test_delete_security_group_without_passing_id(self):
# Negative test:Deletion of a Security Group with out passing ID
# should Fail
@@ -135,7 +127,6 @@
@testtools.skipIf(CONF.service_available.neutron,
"Neutron does not check the security group ID")
@decorators.attr(type=['negative'])
- @test.services('network')
def test_update_security_group_with_invalid_sg_id(self):
# Update security_group with invalid sg_id should fail
s_name = data_utils.rand_name('sg')
@@ -150,7 +141,6 @@
@testtools.skipIf(CONF.service_available.neutron,
"Neutron does not check the security group name")
@decorators.attr(type=['negative'])
- @test.services('network')
def test_update_security_group_with_invalid_sg_name(self):
# Update security_group with invalid sg_name should fail
securitygroup = self.create_security_group()
@@ -165,7 +155,6 @@
@testtools.skipIf(CONF.service_available.neutron,
"Neutron does not check the security group description")
@decorators.attr(type=['negative'])
- @test.services('network')
def test_update_security_group_with_invalid_sg_des(self):
# Update security_group with invalid sg_des should fail
securitygroup = self.create_security_group()
@@ -178,7 +167,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('27edee9c-873d-4da6-a68a-3c256efebe8f')
- @test.services('network')
def test_update_non_existent_security_group(self):
# Update a non-existent Security Group should Fail
non_exist_id = self.generate_random_security_group_id()
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index bfde847..0248c65 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -19,12 +19,12 @@
from tempest.api.compute import base
from tempest.common import compute
+from tempest.common import utils
from tempest.common.utils import net_utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -103,7 +103,6 @@
['interfaceAttachment'])
iface = waiters.wait_for_interface_status(
self.interfaces_client, server['id'], iface['port_id'], 'ACTIVE')
- self._check_interface(iface)
return iface
def _test_create_interface_by_network_id(self, server, ifs):
@@ -185,12 +184,11 @@
self.assertEqual(sorted(list1), sorted(list2))
@decorators.idempotent_id('73fe8f02-590d-4bf1-b184-e9ca81065051')
- @test.services('network')
+ @utils.services('network')
def test_create_list_show_delete_interfaces(self):
server, ifs = self._create_server_get_interfaces()
interface_count = len(ifs)
self.assertGreater(interface_count, 0)
- self._check_interface(ifs[0])
try:
iface = self._test_create_interface(server)
@@ -222,13 +220,12 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('c7e0e60b-ee45-43d0-abeb-8596fd42a2f9')
- @test.services('network')
+ @utils.services('network')
def test_add_remove_fixed_ip(self):
# Add and Remove the fixed IP to server.
server, ifs = self._create_server_get_interfaces()
interface_count = len(ifs)
self.assertGreater(interface_count, 0)
- self._check_interface(ifs[0])
network_id = ifs[0]['net_id']
self.servers_client.add_fixed_ip(server['id'], networkId=network_id)
# Remove the fixed IP from server.
@@ -245,7 +242,6 @@
break
self.servers_client.remove_fixed_ip(server['id'], address=fixed_ip)
- @decorators.skip_because(bug='1607714')
@decorators.idempotent_id('2f3a0127-95c7-4977-92d2-bc5aec602fb4')
def test_reassign_port_between_servers(self):
"""Tests the following:
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index b727ddd..c660821 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -17,12 +17,11 @@
import testtools
from tempest.api.compute import base
-from tempest.common import compute
+from tempest.common import utils
from tempest.common.utils.linux import remote_client
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -43,8 +42,9 @@
@classmethod
def resource_setup(cls):
- cls.set_validation_resources()
super(ServersTestJSON, cls).resource_setup()
+ validation_resources = cls.get_class_validation_resources(
+ cls.os_primary)
cls.meta = {'hello': 'world'}
cls.accessIPv4 = '1.1.1.1'
cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2'
@@ -53,6 +53,7 @@
disk_config = cls.disk_config
server_initial = cls.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
wait_until='ACTIVE',
name=cls.name,
metadata=cls.meta,
@@ -106,11 +107,13 @@
# Verify that the number of vcpus reported by the instance matches
# the amount stated by the flavor
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ validation_resources = self.get_class_validation_resources(
+ self.os_primary)
linux_client = remote_client.RemoteClient(
- self.get_server_ip(self.server),
+ self.get_server_ip(self.server, validation_resources),
self.ssh_user,
self.password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=self.server,
servers_client=self.client)
output = linux_client.exec_command('grep -c ^processor /proc/cpuinfo')
@@ -121,11 +124,13 @@
'Instance validation tests are disabled.')
def test_host_name_is_same_as_server_name(self):
# Verify the instance host name is the same as the server name
+ validation_resources = self.get_class_validation_resources(
+ self.os_primary)
linux_client = remote_client.RemoteClient(
- self.get_server_ip(self.server),
+ self.get_server_ip(self.server, validation_resources),
self.ssh_user,
self.password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=self.server,
servers_client=self.client)
hostname = linux_client.exec_command("hostname").rstrip()
@@ -133,22 +138,6 @@
'hostname "%s" but got "%s".' % (self.name, hostname))
self.assertEqual(self.name.lower(), hostname, msg)
- @decorators.idempotent_id('ed20d3fb-9d1f-4329-b160-543fbd5d9811')
- @testtools.skipUnless(
- compute.is_scheduler_filter_enabled("ServerGroupAffinityFilter"),
- 'ServerGroupAffinityFilter is not available.')
- def test_create_server_with_scheduler_hint_group(self):
- # Create a server with the scheduler hint "group".
- group_id = self.create_test_server_group()['id']
- hints = {'group': group_id}
- server = self.create_test_server(scheduler_hints=hints,
- wait_until='ACTIVE')
-
- # Check a server is in the group
- server_group = (self.server_groups_client.show_server_group(group_id)
- ['server_group'])
- self.assertIn(server['id'], server_group['members'])
-
class ServersTestManualDisk(ServersTestJSON):
disk_config = 'MANUAL'
@@ -168,6 +157,6 @@
@classmethod
def skip_checks(cls):
super(ServersTestBootFromVolume, cls).skip_checks()
- if not test.get_service_list()['volume']:
+ if not utils.get_service_list()['volume']:
msg = "Volume service not enabled."
raise cls.skipException(msg)
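The validation changes above follow the pattern applied throughout this patch: instead of calling `set_validation_resources()` and reading `cls.validation_resources`, tests now request validation resources explicitly and pass them to both `create_test_server` and `get_server_ip`. A schematic fragment assembled from the calls used in this patch (class and test names are illustrative, and it assumes a deployed cloud with the usual Tempest configuration):

    from tempest.api.compute import base
    from tempest.common.utils.linux import remote_client
    from tempest import config
    from tempest.lib.common.utils import data_utils

    CONF = config.CONF


    class ValidationExampleTest(base.BaseV2ComputeTest):

        def test_ssh_reaches_server(self):
            # Test-scoped resources; get_class_validation_resources would be
            # used instead for a server shared by the whole class.
            validation_resources = self.get_test_validation_resources(
                self.os_primary)
            admin_pass = data_utils.rand_password()
            server = self.create_test_server(
                validatable=True,
                validation_resources=validation_resources,
                adminPass=admin_pass,
                wait_until='ACTIVE')
            linux_client = remote_client.RemoteClient(
                self.get_server_ip(server, validation_resources),
                CONF.validation.image_ssh_user,
                admin_pass,
                validation_resources['keypair']['private_key'],
                server=server,
                servers_client=self.servers_client)
            linux_client.validate_authentication()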
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 2b03b2b..0093752 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -17,10 +17,10 @@
from tempest.api.compute import base
from tempest.common import compute
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -104,7 +104,7 @@
waiters.wait_for_server_termination(self.client, server['id'])
@decorators.idempotent_id('d0f3f0d6-d9b6-4a32-8da4-23015dcab23c')
- @test.services('volume')
+ @utils.services('volume')
def test_delete_server_while_in_attached_volume(self):
# Delete a server while a volume is attached to it
device = '/dev/%s' % CONF.compute.volume_device_name
diff --git a/tempest/api/compute/servers/test_device_tagging.py b/tempest/api/compute/servers/test_device_tagging.py
index 7ee1b02..a126fd6 100644
--- a/tempest/api/compute/servers/test_device_tagging.py
+++ b/tempest/api/compute/servers/test_device_tagging.py
@@ -17,13 +17,13 @@
from oslo_log import log as logging
from tempest.api.compute import base
+from tempest.common import utils
from tempest.common.utils.linux import remote_client
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
-from tempest import test
CONF = config.CONF
@@ -66,11 +66,6 @@
dhcp=True)
super(DeviceTaggingTest, cls).setup_credentials()
- @classmethod
- def resource_setup(cls):
- cls.set_validation_resources()
- super(DeviceTaggingTest, cls).resource_setup()
-
def verify_device_metadata(self, md_json):
md_dict = json.loads(md_json)
for d in md_dict['devices']:
@@ -94,7 +89,7 @@
'other']))
@decorators.idempotent_id('a2e65a6c-66f1-4442-aaa8-498c31778d96')
- @test.services('network', 'volume', 'image')
+ @utils.services('network', 'volume', 'image')
def test_device_tagging(self):
# Create volumes
# The create_volume methods waits for the volumes to be available and
@@ -139,9 +134,12 @@
# Create server
admin_pass = data_utils.rand_password()
config_drive_enabled = CONF.compute_feature_enabled.config_drive
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
server = self.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
config_drive=config_drive_enabled,
adminPass=admin_pass,
name=data_utils.rand_name('device-tagging-server'),
@@ -208,10 +206,10 @@
self.addCleanup(self.delete_server, server['id'])
self.ssh_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
CONF.validation.image_ssh_user,
admin_pass,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.servers_client)
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index d1d29af..6fe4d82 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -19,13 +19,14 @@
from tempest.api.compute import base
from tempest.common import compute
+from tempest.common import utils
from tempest.common.utils.linux import remote_client
from tempest.common import waiters
from tempest import config
+from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -43,13 +44,18 @@
self.server_id, 'ACTIVE')
except lib_exc.NotFound:
# The server was deleted by previous test, create a new one
+ # Use class level validation resources to avoid them being
+ # deleted once a test is over
+ validation_resources = self.get_class_validation_resources(
+ self.os_primary)
server = self.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
wait_until='ACTIVE')
self.__class__.server_id = server['id']
except Exception:
# Rebuild server if something happened to it during a test
- self.__class__.server_id = self.rebuild_server(
+ self.__class__.server_id = self.recreate_server(
self.server_id, validatable=True)
def tearDown(self):
@@ -68,10 +74,8 @@
@classmethod
def resource_setup(cls):
- cls.set_validation_resources()
-
super(ServerActionsTestJSON, cls).resource_setup()
- cls.server_id = cls.rebuild_server(None, validatable=True)
+ cls.server_id = cls.recreate_server(None, validatable=True)
@decorators.idempotent_id('6158df09-4b82-4ab3-af6d-29cf36af858d')
@testtools.skipUnless(CONF.compute_feature_enabled.change_password,
@@ -79,8 +83,11 @@
def test_change_server_password(self):
# Since this test messes with the password and makes the
# server unreachable, it should create its own server
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
newserver = self.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
wait_until='ACTIVE')
# The server's password should be set to the provided password
new_password = 'Newpass1234'
@@ -91,7 +98,7 @@
# Verify that the user can authenticate with the new password
server = self.client.show_server(newserver['id'])['server']
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.ssh_user,
new_password,
server=server,
@@ -100,13 +107,15 @@
def _test_reboot_server(self, reboot_type):
if CONF.validation.run_validation:
+ validation_resources = self.get_class_validation_resources(
+ self.os_primary)
# Get the time the server was last rebooted,
server = self.client.show_server(self.server_id)['server']
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.ssh_user,
self.password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.client)
boot_time = linux_client.get_boot_time()
@@ -121,10 +130,10 @@
if CONF.validation.run_validation:
# Log in and verify the boot time has changed
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.ssh_user,
self.password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.client)
new_boot_time = linux_client.get_boot_time()
@@ -200,6 +209,8 @@
self.assertEqual(original_addresses, server['addresses'])
if CONF.validation.run_validation:
+ validation_resources = self.get_class_validation_resources(
+ self.os_primary)
# Authentication is attempted in the following order of priority:
# 1.The key passed in, if one was passed in.
# 2.Any key we can find through an SSH agent (if allowed).
@@ -207,10 +218,10 @@
# ~/.ssh/ (if allowed).
# 4.Plain username/password auth, if a password was given.
linux_client = remote_client.RemoteClient(
- self.get_server_ip(rebuilt_server),
+ self.get_server_ip(rebuilt_server, validation_resources),
self.ssh_user,
password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=rebuilt_server,
servers_client=self.client)
linux_client.validate_authentication()
@@ -252,7 +263,7 @@
self.client.start_server(self.server_id)
@decorators.idempotent_id('b68bd8d6-855d-4212-b59b-2e704044dace')
- @test.services('volume')
+ @utils.services('volume')
def test_rebuild_server_with_volume_attached(self):
# create a new volume and attach it to the server
volume = self.create_volume()
@@ -333,7 +344,7 @@
@decorators.idempotent_id('b963d4f1-94b3-4c40-9e97-7b583f46e470')
@testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
'Snapshotting not available, backup not possible.')
- @test.services('image')
+ @utils.services('image')
def test_create_backup(self):
# Positive test:create backup successfully and rotate backups correctly
# create the first and the second backup
@@ -369,7 +380,11 @@
"been successful as it should have been "
"deleted during rotation.", oldest_backup)
- image1_id = data_utils.parse_image_id(resp['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", resp, "lt"):
+ image1_id = resp['image_id']
+ else:
+ image1_id = data_utils.parse_image_id(resp['location'])
self.addCleanup(_clean_oldest_backup, image1_id)
waiters.wait_for_image_status(glance_client,
image1_id, 'active')
@@ -380,7 +395,11 @@
backup_type='daily',
rotation=2,
name=backup2).response
- image2_id = data_utils.parse_image_id(resp['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", resp, "lt"):
+ image2_id = resp['image_id']
+ else:
+ image2_id = data_utils.parse_image_id(resp['location'])
self.addCleanup(glance_client.delete_image, image2_id)
waiters.wait_for_image_status(glance_client,
image2_id, 'active')
@@ -419,7 +438,11 @@
backup_type='daily',
rotation=2,
name=backup3).response
- image3_id = data_utils.parse_image_id(resp['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", resp, "lt"):
+ image3_id = resp['image_id']
+ else:
+ image3_id = data_utils.parse_image_id(resp['location'])
self.addCleanup(glance_client.delete_image, image3_id)
# the first back up should be deleted
waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
@@ -518,7 +541,7 @@
@decorators.idempotent_id('77eba8e0-036e-4635-944b-f7a8f3b78dc9')
@testtools.skipUnless(CONF.compute_feature_enabled.shelve,
'Shelve is not available.')
- @test.services('image')
+ @utils.services('image')
def test_shelve_unshelve_server(self):
if CONF.image_feature_enabled.api_v2:
glance_client = self.os_primary.image_client_v2
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index 022ceba..f79b05f 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -14,8 +14,8 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class ServerAddressesTestJSON(base.BaseV2ComputeTest):
@@ -39,7 +39,7 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('6eb718c0-02d9-4d5e-acd1-4e0c269cef39')
- @test.services('network')
+ @utils.services('network')
def test_list_server_addresses(self):
# All public and private addresses for
# a server should be returned
@@ -51,13 +51,10 @@
self.assertNotEmpty(addresses)
for network_addresses in addresses.values():
self.assertNotEmpty(network_addresses)
- for address in network_addresses:
- self.assertTrue(address['addr'])
- self.assertTrue(address['version'])
@decorators.attr(type='smoke')
@decorators.idempotent_id('87bbc374-5538-4f64-b673-2b0e4443cc30')
- @test.services('network')
+ @utils.services('network')
def test_list_server_addresses_by_network(self):
# Providing a network type should filter
# the addresses return by that type
diff --git a/tempest/api/compute/servers/test_server_addresses_negative.py b/tempest/api/compute/servers/test_server_addresses_negative.py
index 76a102b..b2b3cc0 100644
--- a/tempest/api/compute/servers/test_server_addresses_negative.py
+++ b/tempest/api/compute/servers/test_server_addresses_negative.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class ServerAddressesNegativeTestJSON(base.BaseV2ComputeTest):
@@ -38,7 +38,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('02c3f645-2d2e-4417-8525-68c0407d001b')
- @test.services('network')
+ @utils.services('network')
def test_list_server_addresses_invalid_server_id(self):
# List addresses request should fail if server id not in system
self.assertRaises(lib_exc.NotFound, self.client.list_addresses,
@@ -46,7 +46,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('a2ab5144-78c0-4942-a0ed-cc8edccfd9ba')
- @test.services('network')
+ @utils.services('network')
def test_list_server_addresses_by_network_neg(self):
# List addresses by network should fail if network name not valid
self.assertRaises(lib_exc.NotFound,
diff --git a/tempest/api/compute/servers/test_server_group.py b/tempest/api/compute/servers/test_server_group.py
index 69d7897..5286c8f 100644
--- a/tempest/api/compute/servers/test_server_group.py
+++ b/tempest/api/compute/servers/test_server_group.py
@@ -13,10 +13,13 @@
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
+
from tempest.api.compute import base
+from tempest.common import compute
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class ServerGroupTestJSON(base.BaseV2ComputeTest):
@@ -30,7 +33,7 @@
@classmethod
def skip_checks(cls):
super(ServerGroupTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('os-server-groups', 'compute'):
+ if not utils.is_extension_enabled('os-server-groups', 'compute'):
msg = "os-server-groups extension is not enabled."
raise cls.skipException(msg)
@@ -106,3 +109,19 @@
# List the server-group
body = self.client.list_server_groups()['server_groups']
self.assertIn(self.created_server_group, body)
+
+ @decorators.idempotent_id('ed20d3fb-9d1f-4329-b160-543fbd5d9811')
+ @testtools.skipUnless(
+ compute.is_scheduler_filter_enabled("ServerGroupAffinityFilter"),
+ 'ServerGroupAffinityFilter is not available.')
+ def test_create_server_with_scheduler_hint_group(self):
+ # Create a server with the scheduler hint "group".
+ hints = {'group': self.created_server_group['id']}
+ server = self.create_test_server(scheduler_hints=hints,
+ wait_until='ACTIVE')
+ self.addCleanup(self.delete_server, server['id'])
+
+ # Check a server is in the group
+ server_group = (self.server_groups_client.show_server_group(
+ self.created_server_group['id'])['server_group'])
+ self.assertIn(server['id'], server_group['members'])
diff --git a/tempest/api/compute/servers/test_server_metadata.py b/tempest/api/compute/servers/test_server_metadata.py
index f77e7d3..fe95018 100644
--- a/tempest/api/compute/servers/test_server_metadata.py
+++ b/tempest/api/compute/servers/test_server_metadata.py
@@ -32,7 +32,7 @@
def setUp(self):
super(ServerMetadataTestJSON, self).setUp()
meta = {'key1': 'value1', 'key2': 'value2'}
- self.client.set_server_metadata(self.server['id'], meta)['metadata']
+ self.client.set_server_metadata(self.server['id'], meta)
@decorators.idempotent_id('479da087-92b3-4dcf-aeb3-fd293b2d14ce')
def test_list_server_metadata(self):
@@ -49,8 +49,7 @@
# The server's metadata should be replaced with the provided values
# Create a new set of metadata for the server
req_metadata = {'meta2': 'data2', 'meta3': 'data3'}
- self.client.set_server_metadata(self.server['id'],
- req_metadata)['metadata']
+ self.client.set_server_metadata(self.server['id'], req_metadata)
# Verify the expected values are correct, and that the
# previous values have been removed
diff --git a/tempest/api/compute/servers/test_server_personality.py b/tempest/api/compute/servers/test_server_personality.py
index 90b9da4..6f32b46 100644
--- a/tempest/api/compute/servers/test_server_personality.py
+++ b/tempest/api/compute/servers/test_server_personality.py
@@ -20,6 +20,7 @@
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -34,11 +35,6 @@
super(ServerPersonalityTestJSON, cls).setup_credentials()
@classmethod
- def resource_setup(cls):
- cls.set_validation_resources()
- super(ServerPersonalityTestJSON, cls).resource_setup()
-
- @classmethod
def skip_checks(cls):
super(ServerPersonalityTestJSON, cls).skip_checks()
if not CONF.compute_feature_enabled.personality:
@@ -48,7 +44,6 @@
def setup_clients(cls):
super(ServerPersonalityTestJSON, cls).setup_clients()
cls.client = cls.servers_client
- cls.user_client = cls.limits_client
@decorators.idempotent_id('3cfe87fd-115b-4a02-b942-7dc36a337fdf')
def test_create_server_with_personality(self):
@@ -57,16 +52,23 @@
personality = [{'path': file_path,
'contents': base64.encode_as_text(file_contents)}]
password = data_utils.rand_password()
- created_server = self.create_test_server(personality=personality,
- adminPass=password,
- wait_until='ACTIVE',
- validatable=True)
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+ created_server = self.create_test_server(
+ personality=personality, adminPass=password, wait_until='ACTIVE',
+ validatable=True,
+ validation_resources=validation_resources)
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, created_server['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server,
+ created_server['id'])
server = self.client.show_server(created_server['id'])['server']
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.ssh_user, password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.client)
self.assertEqual(file_contents,
@@ -75,8 +77,16 @@
@decorators.idempotent_id('128966d8-71fc-443c-8cab-08e24114ecc9')
def test_rebuild_server_with_personality(self):
- server = self.create_test_server(wait_until='ACTIVE', validatable=True)
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+ server = self.create_test_server(
+ wait_until='ACTIVE', validatable=True,
+ validation_resources=validation_resources)
server_id = server['id']
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server_id)
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, server_id)
file_contents = 'Test server rebuild.'
personality = [{'path': 'rebuild.txt',
'contents': base64.encode_as_text(file_contents)}]
@@ -93,7 +103,7 @@
# number of files are injected into the server.
file_contents = 'This is a test file.'
personality = []
- limits = self.user_client.show_limits()['limits']
+ limits = self.limits_client.show_limits()['limits']
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
@@ -112,7 +122,7 @@
# Server should be created successfully if maximum allowed number of
# files is injected into the server during creation.
file_contents = 'This is a test file.'
- limits = self.user_client.show_limits()['limits']
+ limits = self.limits_client.show_limits()['limits']
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
@@ -126,16 +136,22 @@
'contents': base64.encode_as_text(file_contents + str(i)),
})
password = data_utils.rand_password()
- created_server = self.create_test_server(personality=person,
- adminPass=password,
- wait_until='ACTIVE',
- validatable=True)
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+ created_server = self.create_test_server(
+ personality=person, adminPass=password, wait_until='ACTIVE',
+ validatable=True, validation_resources=validation_resources)
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, created_server['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server,
+ created_server['id'])
server = self.client.show_server(created_server['id'])['server']
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.ssh_user, password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.client)
for i in person:
diff --git a/tempest/api/compute/servers/test_server_rescue_negative.py b/tempest/api/compute/servers/test_server_rescue_negative.py
index 5fac433..1260c6b 100644
--- a/tempest/api/compute/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/servers/test_server_rescue_negative.py
@@ -16,12 +16,12 @@
import testtools
from tempest.api.compute import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -109,7 +109,7 @@
self.image_ref_alt)
@decorators.idempotent_id('d0ccac79-0091-4cf4-a1ce-26162d0cc55f')
- @test.services('volume')
+ @utils.services('volume')
@decorators.attr(type=['negative'])
def test_rescued_vm_attach_volume(self):
volume = self.create_volume()
@@ -129,7 +129,7 @@
device='/dev/%s' % self.device)
@decorators.idempotent_id('f56e465b-fe10-48bf-b75d-646cda3a8bc9')
- @test.services('volume')
+ @utils.services('volume')
@decorators.attr(type=['negative'])
def test_rescued_vm_detach_volume(self):
volume = self.create_volume()
diff --git a/tempest/api/compute/servers/test_server_tags.py b/tempest/api/compute/servers/test_server_tags.py
index 0370215..8d0a4e3 100644
--- a/tempest/api/compute/servers/test_server_tags.py
+++ b/tempest/api/compute/servers/test_server_tags.py
@@ -16,9 +16,9 @@
import six
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class ServerTagsTestJSON(base.BaseV2ComputeTest):
@@ -29,7 +29,7 @@
@classmethod
def skip_checks(cls):
super(ServerTagsTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('os-server-tags', 'compute'):
+ if not utils.is_extension_enabled('os-server-tags', 'compute'):
msg = "os-server-tags extension is not enabled."
raise cls.skipException(msg)
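The two files above show both halves of the decorator migration that runs through this change: @test.services becomes @utils.services, and test.is_extension_enabled becomes utils.is_extension_enabled, with the tempest.test import dropped. Below is a minimal sketch of what a compute test looks like after the move; the class name, test name and idempotent id are illustrative, everything else is taken from the hunks above.

from tempest.api.compute import base
from tempest.common import utils
from tempest.lib import decorators


class ExampleExtensionTest(base.BaseV2ComputeTest):

    @classmethod
    def skip_checks(cls):
        super(ExampleExtensionTest, cls).skip_checks()
        # utils.is_extension_enabled replaces test.is_extension_enabled.
        if not utils.is_extension_enabled('os-server-tags', 'compute'):
            raise cls.skipException("os-server-tags extension is not enabled.")

    @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
    @utils.services('volume')  # replaces @test.services('volume')
    def test_example_volume_is_created(self):
        volume = self.create_volume()
        self.assertIsNotNone(volume['id'])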
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index 7fd1dd1..c9ee671 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -19,6 +19,7 @@
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
CONF = config.CONF
@@ -31,10 +32,6 @@
super(ServersTestJSON, cls).setup_clients()
cls.client = cls.servers_client
- def tearDown(self):
- self.clear_servers()
- super(ServersTestJSON, self).tearDown()
-
@decorators.idempotent_id('b92d5ec7-b1dd-44a2-87e4-45e888c46ef0')
@testtools.skipUnless(CONF.compute_feature_enabled.
enable_instance_password,
@@ -43,6 +40,11 @@
# If an admin password is provided on server creation, the server's
# root password should be set to that password.
server = self.create_test_server(adminPass='testpassword')
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, server['id'])
# Verify the password is set correctly in the response
self.assertEqual('testpassword', server['adminPass'])
@@ -57,9 +59,19 @@
server = self.create_test_server(name=server_name,
wait_until='ACTIVE')
id1 = server['id']
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, id1)
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, id1)
server = self.create_test_server(name=server_name,
wait_until='ACTIVE')
id2 = server['id']
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, id2)
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, id2)
self.assertNotEqual(id1, id2, "Did not create a new server")
server = self.client.show_server(id1)['server']
name1 = server['name']
@@ -76,6 +88,11 @@
self.addCleanup(self.keypairs_client.delete_keypair, key_name)
self.keypairs_client.list_keypairs()
server = self.create_test_server(key_name=key_name)
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, server['id'])
waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
server = self.client.show_server(server['id'])['server']
self.assertEqual(key_name, server['key_name'])
@@ -98,6 +115,11 @@
def test_update_server_name(self):
# The server name should be changed to the provided value
server = self.create_test_server(wait_until='ACTIVE')
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, server['id'])
# Update instance name with non-ASCII characters
prefix_name = u'\u00CD\u00F1st\u00E1\u00F1c\u00E9'
self._update_server_name(server['id'], 'ACTIVE', prefix_name)
@@ -115,6 +137,11 @@
def test_update_access_server_address(self):
# The server's access addresses should reflect the provided values
server = self.create_test_server(wait_until='ACTIVE')
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, server['id'])
# Update the IPv4 and IPv6 access addresses
self.client.update_server(server['id'],
@@ -131,6 +158,11 @@
def test_create_server_with_ipv6_addr_only(self):
# Create a server without an IPv4 address(only IPv6 address).
server = self.create_test_server(accessIPv6='2001:2001::3')
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, server['id'])
waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
server = self.client.show_server(server['id'])['server']
self.assertEqual('2001:2001::3', server['accessIPv6'])
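With the class-level tearDown that called clear_servers removed, every server created in ServersTestJSON now schedules its own cleanup. Because addCleanup callbacks run in reverse registration order, the termination wait is registered first (so it runs last) and the not-found-tolerant delete second (so it runs first). A condensed sketch of that pattern follows; the helper name and standalone-function shape are illustrative, the calls are the ones added above.

from tempest.common import waiters
from tempest.lib.common.utils import test_utils


def schedule_server_cleanup(test, server_id):
    # Registered first, runs last: wait until the server is fully gone.
    test.addCleanup(waiters.wait_for_server_termination,
                    test.servers_client, server_id)
    # Registered last, runs first: delete the server, ignoring a 404 if
    # the test already deleted it itself.
    test.addCleanup(test_utils.call_and_ignore_notfound_exc,
                    test.servers_client.delete_server, server_id)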
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index 764767b..d067bb3 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -19,12 +19,12 @@
from tempest.api.compute import base
from tempest.common import compute
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -37,7 +37,7 @@
waiters.wait_for_server_status(self.client, self.server_id,
'ACTIVE')
except Exception:
- self.__class__.server_id = self.rebuild_server(self.server_id)
+ self.__class__.server_id = self.recreate_server(self.server_id)
def tearDown(self):
self.server_check_teardown()
@@ -217,7 +217,7 @@
@decorators.attr(type=['negative'])
@decorators.related_bug('1651064', status_code=500)
- @test.services('volume')
+ @utils.services('volume')
@decorators.idempotent_id('12146ac1-d7df-4928-ad25-b1f99e5286cd')
def test_create_server_invalid_bdm_in_2nd_dict(self):
volume = self.create_volume()
@@ -512,7 +512,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('74085be3-a370-4ca2-bc51-2d0e10e0f573')
- @test.services('volume', 'image')
+ @utils.services('volume', 'image')
def test_create_server_from_non_bootable_volume(self):
# Create a volume
volume = self.create_volume()
@@ -551,7 +551,7 @@
waiters.wait_for_server_status(self.servers_client, self.server_id,
'ACTIVE')
except Exception:
- self.__class__.server_id = self.rebuild_server(self.server_id)
+ self.__class__.server_id = self.recreate_server(self.server_id)
@classmethod
def setup_clients(cls):
diff --git a/tempest/api/compute/servers/test_virtual_interfaces.py b/tempest/api/compute/servers/test_virtual_interfaces.py
index a42b968..90f04ff 100644
--- a/tempest/api/compute/servers/test_virtual_interfaces.py
+++ b/tempest/api/compute/servers/test_virtual_interfaces.py
@@ -17,10 +17,10 @@
import testtools
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions
-from tempest import test
CONF = config.CONF
@@ -44,7 +44,7 @@
cls.server = cls.create_test_server(wait_until='ACTIVE')
@decorators.idempotent_id('96c4e2ef-5e4d-4d7f-87f5-fed6dca18016')
- @test.services('network')
+ @utils.services('network')
def test_list_virtual_interfaces(self):
        # Positive test: Should be able to GET the virtual interfaces list
# for a given server_id
@@ -56,11 +56,11 @@
self.client.list_virtual_interfaces(self.server['id'])
else:
output = self.client.list_virtual_interfaces(self.server['id'])
- virt_ifaces = output
- self.assertNotEmpty(virt_ifaces['virtual_interfaces'],
+ virt_ifaces = output['virtual_interfaces']
+ self.assertNotEmpty(virt_ifaces,
'Expected virtual interfaces, got 0 '
'interfaces.')
- for virt_iface in virt_ifaces['virtual_interfaces']:
+ for virt_iface in virt_ifaces:
mac_address = virt_iface['mac_address']
self.assertTrue(netaddr.valid_mac(mac_address),
"Invalid mac address detected. mac address: %s"
diff --git a/tempest/api/compute/servers/test_virtual_interfaces_negative.py b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
index 173784a..20923a8 100644
--- a/tempest/api/compute/servers/test_virtual_interfaces_negative.py
+++ b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
@@ -14,10 +14,10 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class VirtualInterfacesNegativeTestJSON(base.BaseV2ComputeTest):
@@ -35,7 +35,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('64ebd03c-1089-4306-93fa-60f5eb5c803c')
- @test.services('network')
+ @utils.services('network')
def test_list_virtual_interfaces_invalid_server_id(self):
# Negative test: Should not be able to GET virtual interfaces
# for an invalid server_id
diff --git a/tempest/api/compute/test_extensions.py b/tempest/api/compute/test_extensions.py
index 42e13bd..34faf5f 100644
--- a/tempest/api/compute/test_extensions.py
+++ b/tempest/api/compute/test_extensions.py
@@ -16,9 +16,9 @@
from oslo_log import log as logging
from tempest.api.compute import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -26,7 +26,7 @@
LOG = logging.getLogger(__name__)
-class ExtensionsTestJSON(base.BaseV2ComputeTest):
+class ExtensionsTest(base.BaseV2ComputeTest):
@decorators.idempotent_id('3bb27738-b759-4e0d-a5fa-37d7a6df07d1')
def test_list_extensions(self):
@@ -48,7 +48,7 @@
raise self.skipException('There are not any extensions configured')
@decorators.idempotent_id('05762f39-bdfa-4cdb-9b46-b78f8e78e2fd')
- @test.requires_ext(extension='os-consoles', service='compute')
+ @utils.requires_ext(extension='os-consoles', service='compute')
def test_get_extension(self):
# get the specified extensions
extension = self.extensions_client.show_extension('os-consoles')
diff --git a/tempest/api/compute/test_quotas.py b/tempest/api/compute/test_quotas.py
index 9d83ee1..7cf90ae 100644
--- a/tempest/api/compute/test_quotas.py
+++ b/tempest/api/compute/test_quotas.py
@@ -15,8 +15,8 @@
from tempest.api.compute import base
from tempest.common import tempest_fixtures as fixtures
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class QuotasTestJSON(base.BaseV2ComputeTest):
@@ -24,7 +24,7 @@
@classmethod
def skip_checks(cls):
super(QuotasTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('os-quota-sets', 'compute'):
+ if not utils.is_extension_enabled('os-quota-sets', 'compute'):
msg = "quotas extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/compute/test_tenant_networks.py b/tempest/api/compute/test_tenant_networks.py
index 18c5d38..b55e2c0 100644
--- a/tempest/api/compute/test_tenant_networks.py
+++ b/tempest/api/compute/test_tenant_networks.py
@@ -13,8 +13,8 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class ComputeTenantNetworksTest(base.BaseV2ComputeTest):
@@ -31,7 +31,7 @@
super(ComputeTenantNetworksTest, cls).setup_credentials()
@decorators.idempotent_id('edfea98e-bbe3-4c7a-9739-87b986baff26')
- @test.services('network')
+ @utils.services('network')
def test_list_show_tenant_networks(self):
# Fetch all networks that are visible to the tenant: this may include
# shared and external networks
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 502bc1b..9bef80f 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import testtools
-
from tempest.api.compute import base
from tempest.common import compute
from tempest.common.utils.linux import remote_client
@@ -42,35 +40,37 @@
@classmethod
def resource_setup(cls):
- cls.set_validation_resources()
super(AttachVolumeTestJSON, cls).resource_setup()
cls.device = CONF.compute.volume_device_name
def _create_server(self):
# Start a server and wait for it to become ready
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
server = self.create_test_server(
validatable=True,
+ validation_resources=validation_resources,
wait_until='ACTIVE',
adminPass=self.image_ssh_password)
self.addCleanup(self.delete_server, server['id'])
# Record addresses so that we can ssh later
server['addresses'] = self.servers_client.list_addresses(
server['id'])['addresses']
- return server
+ return server, validation_resources
@decorators.idempotent_id('52e9045a-e90d-4c0d-9087-79d657faffff')
def test_attach_detach_volume(self):
# Stop and Start a server with an attached volume, ensuring that
# the volume remains attached.
- server = self._create_server()
+ server, validation_resources = self._create_server()
# NOTE(andreaf) Create one remote client used throughout the test.
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.image_ssh_user,
self.image_ssh_password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.servers_client)
# NOTE(andreaf) We need to ensure the ssh key has been
@@ -113,7 +113,7 @@
@decorators.idempotent_id('7fa563fe-f0f7-43eb-9e22-a1ece036b513')
def test_list_get_volume_attachments(self):
# List volume attachment of the server
- server = self._create_server()
+ server, _ = self._create_server()
volume_1st = self.create_volume()
attachment_1st = self.attach_volume(server, volume_1st,
device=('/dev/%s' % self.device))
@@ -143,6 +143,10 @@
self.assertEqual(server['id'], body['serverId'])
self.assertEqual(attachment['volumeId'], body['volumeId'])
self.assertEqual(attachment['id'], body['id'])
+ self.servers_client.detach_volume(server['id'],
+ attachment['volumeId'])
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, attachment['volumeId'], 'available')
class AttachVolumeShelveTestJSON(AttachVolumeTestJSON):
@@ -155,15 +159,21 @@
min_microversion = '2.20'
max_microversion = 'latest'
- def _count_volumes(self, server):
+ @classmethod
+ def skip_checks(cls):
+ super(AttachVolumeShelveTestJSON, cls).skip_checks()
+ if not CONF.compute_feature_enabled.shelve:
+ raise cls.skipException('Shelve is not available.')
+
+ def _count_volumes(self, server, validation_resources):
# Count number of volumes on an instance
volumes = 0
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.image_ssh_user,
self.image_ssh_password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.servers_client)
@@ -171,7 +181,7 @@
volumes = int(linux_client.exec_command(command).strip())
return volumes
- def _shelve_server(self, server):
+ def _shelve_server(self, server, validation_resources):
# NOTE(andreaf) If we are going to shelve a server, we should
# check first whether the server is ssh-able. Otherwise we
# won't be able to distinguish failures introduced by shelve
@@ -180,10 +190,10 @@
# avoid breaking the VM
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
- self.get_server_ip(server),
+ self.get_server_ip(server, validation_resources),
self.image_ssh_user,
self.image_ssh_password,
- self.validation_resources['keypair']['private_key'],
+ validation_resources['keypair']['private_key'],
server=server,
servers_client=self.servers_client)
linux_client.validate_authentication()
@@ -191,32 +201,34 @@
# If validation went ok, or it was skipped, shelve the server
compute.shelve_server(self.servers_client, server['id'])
- def _unshelve_server_and_check_volumes(self, server, number_of_volumes):
+ def _unshelve_server_and_check_volumes(self, server,
+ validation_resources,
+ number_of_volumes):
# Unshelve the instance and check that there are expected volumes
self.servers_client.unshelve_server(server['id'])
waiters.wait_for_server_status(self.servers_client,
server['id'],
'ACTIVE')
if CONF.validation.run_validation:
- counted_volumes = self._count_volumes(server)
+ counted_volumes = self._count_volumes(
+ server, validation_resources)
self.assertEqual(number_of_volumes, counted_volumes)
@decorators.idempotent_id('13a940b6-3474-4c3c-b03f-29b89112bfee')
- @testtools.skipUnless(CONF.compute_feature_enabled.shelve,
- 'Shelve is not available.')
def test_attach_volume_shelved_or_offload_server(self):
# Create server, count number of volumes on it, shelve
# server and attach pre-created volume to shelved server
- server = self._create_server()
+ server, validation_resources = self._create_server()
volume = self.create_volume()
- num_vol = self._count_volumes(server)
- self._shelve_server(server)
+ num_vol = self._count_volumes(server, validation_resources)
+ self._shelve_server(server, validation_resources)
attachment = self.attach_volume(server, volume,
device=('/dev/%s' % self.device),
check_reserved=True)
# Unshelve the instance and check that attached volume exists
- self._unshelve_server_and_check_volumes(server, num_vol + 1)
+ self._unshelve_server_and_check_volumes(
+ server, validation_resources, num_vol + 1)
# Get volume attachment of the server
volume_attachment = self.servers_client.show_volume_attachment(
@@ -229,15 +241,13 @@
self.assertIsNotNone(volume_attachment['device'])
@decorators.idempotent_id('b54e86dd-a070-49c4-9c07-59ae6dae15aa')
- @testtools.skipUnless(CONF.compute_feature_enabled.shelve,
- 'Shelve is not available.')
def test_detach_volume_shelved_or_offload_server(self):
# Count number of volumes on instance, shelve
# server and attach pre-created volume to shelved server
- server = self._create_server()
+ server, validation_resources = self._create_server()
volume = self.create_volume()
- num_vol = self._count_volumes(server)
- self._shelve_server(server)
+ num_vol = self._count_volumes(server, validation_resources)
+ self._shelve_server(server, validation_resources)
# Attach and then detach the volume
self.attach_volume(server, volume, device=('/dev/%s' % self.device),
@@ -248,4 +258,5 @@
# Unshelve the instance and check that we have the expected number of
# volume(s)
- self._unshelve_server_and_check_volumes(server, num_vol)
+ self._unshelve_server_and_check_volumes(
+ server, validation_resources, num_vol)
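The attach-volume tests above (like the personality tests earlier) stop relying on class-level validation resources: each test fetches its own set with get_test_validation_resources(self.os_primary), passes it to create_test_server and get_server_ip, and reads the keypair from the returned dict when building the SSH client. A trimmed sketch of that flow follows; the function name and standalone shape are hypothetical, the calls and keyword arguments come from the hunks above.

from tempest.common.utils.linux import remote_client


def make_validatable_server(test):
    # Per-test validation resources replace the old class-level
    # test.validation_resources dict set up in resource_setup.
    validation_resources = test.get_test_validation_resources(
        test.os_primary)
    server = test.create_test_server(
        validatable=True,
        validation_resources=validation_resources,
        wait_until='ACTIVE',
        adminPass=test.image_ssh_password)
    # The same dict resolves both the server address and the SSH keypair.
    linux_client = remote_client.RemoteClient(
        test.get_server_ip(server, validation_resources),
        test.image_ssh_user,
        test.image_ssh_password,
        validation_resources['keypair']['private_key'],
        server=server,
        servers_client=test.servers_client)
    return server, linux_client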
diff --git a/tempest/api/compute/volumes/test_volume_snapshots.py b/tempest/api/compute/volumes/test_volume_snapshots.py
index 0f436eb..b8ca81d 100644
--- a/tempest/api/compute/volumes/test_volume_snapshots.py
+++ b/tempest/api/compute/volumes/test_volume_snapshots.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import testtools
-
from tempest.api.compute import base
from tempest.common import waiters
from tempest import config
@@ -38,6 +36,9 @@
if not CONF.service_available.cinder:
skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
raise cls.skipException(skip_msg)
+ if not CONF.volume_feature_enabled.snapshot:
+ skip_msg = ("Cinder volume snapshots are disabled")
+ raise cls.skipException(skip_msg)
@classmethod
def setup_clients(cls):
@@ -46,8 +47,6 @@
cls.snapshots_client = cls.snapshots_extensions_client
@decorators.idempotent_id('cd4ec87d-7825-450d-8040-6e2068f2da8f')
- @testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
- 'Cinder volume snapshots are disabled')
def test_volume_snapshot_create_get_list_delete(self):
volume = self.create_volume()
self.addCleanup(self.delete_volume, volume['id'])
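Both volume test files above move feature gating out of per-test @testtools.skipUnless decorators and into the class-level skip_checks hook, so the flag is evaluated once and the whole class is skipped when the feature is off. A minimal sketch of the shape; the class name is illustrative, the config option is the one checked above.

from tempest.api.compute import base
from tempest import config

CONF = config.CONF


class ExampleSnapshotDependentTest(base.BaseV2ComputeTest):

    @classmethod
    def skip_checks(cls):
        super(ExampleSnapshotDependentTest, cls).skip_checks()
        # Replaces @testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
        # 'Cinder volume snapshots are disabled') on each test method.
        if not CONF.volume_feature_enabled.snapshot:
            raise cls.skipException("Cinder volume snapshots are disabled")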
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 4bc987f..17db3ea 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -14,9 +14,12 @@
# under the License.
from tempest.api.identity import base
+from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+CONF = config.CONF
+
class GroupsV3TestJSON(base.BaseIdentityV3AdminTest):
@@ -130,7 +133,14 @@
self.addCleanup(self.groups_client.delete_group, group['id'])
group_ids.append(group['id'])
# List and Verify Groups
- body = self.groups_client.list_groups()['groups']
+        # When domain specific drivers are enabled, the operations of
+        # listing all users and listing all groups are not supported;
+        # a domain filter needs to be specified.
+ if CONF.identity_feature_enabled.domain_specific_drivers:
+ body = self.groups_client.list_groups(
+ domain_id=self.domain['id'])['groups']
+ else:
+ body = self.groups_client.list_groups()['groups']
for g in body:
fetched_ids.append(g['id'])
missing_groups = [g for g in group_ids if g not in fetched_ids]
diff --git a/tempest/api/identity/admin/v3/test_inherits.py b/tempest/api/identity/admin/v3/test_inherits.py
index e61dbc8..8b687cd 100644
--- a/tempest/api/identity/admin/v3/test_inherits.py
+++ b/tempest/api/identity/admin/v3/test_inherits.py
@@ -11,9 +11,9 @@
# under the License.
from tempest.api.identity import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class InheritsV3TestJSON(base.BaseIdentityV3AdminTest):
@@ -21,7 +21,7 @@
@classmethod
def skip_checks(cls):
super(InheritsV3TestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-INHERIT', 'identity'):
+ if not utils.is_extension_enabled('OS-INHERIT', 'identity'):
raise cls.skipException("Inherits aren't enabled")
@classmethod
diff --git a/tempest/api/identity/admin/v3/test_list_users.py b/tempest/api/identity/admin/v3/test_list_users.py
index 47a3580..506c729 100644
--- a/tempest/api/identity/admin/v3/test_list_users.py
+++ b/tempest/api/identity/admin/v3/test_list_users.py
@@ -14,9 +14,12 @@
# under the License.
from tempest.api.identity import base
+from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+CONF = config.CONF
+
class UsersV3TestJSON(base.BaseIdentityV3AdminTest):
@@ -82,6 +85,11 @@
def test_list_users_with_name(self):
# List users with name
params = {'name': self.domain_enabled_user['name']}
+        # When domain specific drivers are enabled, the operations of
+        # listing all users and listing all groups are not supported;
+        # a domain filter needs to be specified.
+ if CONF.identity_feature_enabled.domain_specific_drivers:
+ params['domain_id'] = self.domain_enabled_user['domain_id']
self._list_users_with_params(params, 'name',
self.domain_enabled_user,
self.non_domain_enabled_user)
@@ -89,7 +97,18 @@
@decorators.idempotent_id('b30d4651-a2ea-4666-8551-0c0e49692635')
def test_list_users(self):
# List users
- body = self.users_client.list_users()['users']
+        # When domain specific drivers are enabled, the operations of
+        # listing all users and listing all groups are not supported;
+        # a domain filter needs to be specified.
+ if CONF.identity_feature_enabled.domain_specific_drivers:
+ body_enabled_user = self.users_client.list_users(
+ domain_id=self.domain_enabled_user['domain_id'])['users']
+ body_non_enabled_user = self.users_client.list_users(
+ domain_id=self.non_domain_enabled_user['domain_id'])['users']
+ body = (body_enabled_user + body_non_enabled_user)
+ else:
+ body = self.users_client.list_users()['users']
+
fetched_ids = [u['id'] for u in body]
missing_users = [u['id'] for u in self.users
if u['id'] not in fetched_ids]
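The same guard now appears in both identity tests above: with CONF.identity_feature_enabled.domain_specific_drivers set, unfiltered list calls are not supported, so listing has to be scoped by domain. Here is a hypothetical wrapper condensing that branch, not part of the patch; list_users is shown, and the groups variant differs only in the client call.

from tempest import config

CONF = config.CONF


def list_users_maybe_scoped(users_client, domain_id):
    # Domain-specific identity drivers only support domain-scoped listing,
    # so only fall back to an unfiltered list when they are disabled.
    if CONF.identity_feature_enabled.domain_specific_drivers:
        return users_client.list_users(domain_id=domain_id)['users']
    return users_client.list_users()['users']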
diff --git a/tempest/api/identity/admin/v3/test_oauth_consumers.py b/tempest/api/identity/admin/v3/test_oauth_consumers.py
index 970ead3..062cce5 100644
--- a/tempest/api/identity/admin/v3/test_oauth_consumers.py
+++ b/tempest/api/identity/admin/v3/test_oauth_consumers.py
@@ -17,7 +17,7 @@
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest.lib import exceptions as exceptions
+from tempest.lib import exceptions
class OAUTHConsumersV3Test(base.BaseIdentityV3AdminTest):
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 5c3cd26..6343ea8 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -161,16 +161,14 @@
manager_project_id]
# Get available project scopes
- available_projects =\
- self.client.list_auth_projects()['projects']
+ available_projects = self.client.list_auth_projects()['projects']
# create list to save fetched project's id
fetched_project_ids = [i['id'] for i in available_projects]
# verifying the project ids in list
missing_project_ids = \
- [p for p in assigned_project_ids
- if p not in fetched_project_ids]
+ [p for p in assigned_project_ids if p not in fetched_project_ids]
self.assertEmpty(missing_project_ids,
"Failed to find project_id %s in fetched list" %
', '.join(missing_project_ids))
diff --git a/tempest/api/identity/v2/test_ec2_credentials.py b/tempest/api/identity/v2/test_ec2_credentials.py
index 599b784..237e728 100644
--- a/tempest/api/identity/v2/test_ec2_credentials.py
+++ b/tempest/api/identity/v2/test_ec2_credentials.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.identity import base
+from tempest.common import utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class EC2CredentialsTest(base.BaseIdentityV2Test):
@@ -24,7 +24,7 @@
@classmethod
def skip_checks(cls):
super(EC2CredentialsTest, cls).skip_checks()
- if not test.is_extension_enabled('OS-EC2', 'identity'):
+ if not utils.is_extension_enabled('OS-EC2', 'identity'):
msg = "OS-EC2 identity extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/identity/v3/test_catalog.py b/tempest/api/identity/v3/test_catalog.py
old mode 100755
new mode 100644
diff --git a/tempest/api/identity/v3/test_projects.py b/tempest/api/identity/v3/test_projects.py
index 0ae35ea..bbb4013 100644
--- a/tempest/api/identity/v3/test_projects.py
+++ b/tempest/api/identity/v3/test_projects.py
@@ -24,8 +24,7 @@
@decorators.idempotent_id('86128d46-e170-4644-866a-cc487f699e1d')
def test_list_projects_returns_only_authorized_projects(self):
- alt_project_name =\
- self.os_alt.credentials.project_name
+ alt_project_name = self.os_alt.credentials.project_name
resp = self.non_admin_users_client.list_user_projects(
self.os_primary.credentials.user_id)
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 2e68efd..c846f88 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -119,7 +119,7 @@
# Update Image
new_image_name = data_utils.rand_name('new-image')
- body = self.client.update_image(image['id'], [
+ self.client.update_image(image['id'], [
dict(replace='/name', value=new_image_name)])
# Verifying updating
diff --git a/tempest/api/network/admin/test_agent_management.py b/tempest/api/network/admin/test_agent_management.py
index 7304db9..5068fc4 100644
--- a/tempest/api/network/admin/test_agent_management.py
+++ b/tempest/api/network/admin/test_agent_management.py
@@ -14,8 +14,8 @@
from tempest.api.network import base
from tempest.common import tempest_fixtures as fixtures
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class AgentManagementTestJSON(base.BaseAdminNetworkTest):
@@ -23,7 +23,7 @@
@classmethod
def skip_checks(cls):
super(AgentManagementTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('agent', 'network'):
+ if not utils.is_extension_enabled('agent', 'network'):
msg = "agent extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index 485c8f5..8315c5d 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -13,8 +13,8 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class DHCPAgentSchedulersTestJSON(base.BaseAdminNetworkTest):
@@ -22,7 +22,7 @@
@classmethod
def skip_checks(cls):
super(DHCPAgentSchedulersTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('dhcp_agent_scheduler', 'network'):
+ if not utils.is_extension_enabled('dhcp_agent_scheduler', 'network'):
msg = "dhcp_agent_scheduler extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/network/admin/test_floating_ips_admin_actions.py b/tempest/api/network/admin/test_floating_ips_admin_actions.py
index 7ee819e..5aa337c 100644
--- a/tempest/api/network/admin/test_floating_ips_admin_actions.py
+++ b/tempest/api/network/admin/test_floating_ips_admin_actions.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -28,7 +28,7 @@
@classmethod
def skip_checks(cls):
super(FloatingIPAdminTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
if not CONF.network.public_network_id:
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index 85b2472..1a7b0ec 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -13,10 +13,10 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions
-from tempest import test
CONF = config.CONF
AGENT_TYPE = 'L3 agent'
@@ -41,7 +41,7 @@
@classmethod
def skip_checks(cls):
super(L3AgentSchedulerTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('l3_agent_scheduler', 'network'):
+ if not utils.is_extension_enabled('l3_agent_scheduler', 'network'):
msg = "L3 Agent Scheduler Extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/network/admin/test_metering_extensions.py b/tempest/api/network/admin/test_metering_extensions.py
index 21a7ab4..fd86782 100644
--- a/tempest/api/network/admin/test_metering_extensions.py
+++ b/tempest/api/network/admin/test_metering_extensions.py
@@ -13,9 +13,9 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class MeteringTestJSON(base.BaseAdminNetworkTest):
@@ -28,7 +28,7 @@
@classmethod
def skip_checks(cls):
super(MeteringTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('metering', 'network'):
+ if not utils.is_extension_enabled('metering', 'network'):
msg = "metering extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/network/admin/test_negative_quotas.py b/tempest/api/network/admin/test_negative_quotas.py
index 21688d2..6849653 100644
--- a/tempest/api/network/admin/test_negative_quotas.py
+++ b/tempest/api/network/admin/test_negative_quotas.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class QuotasNegativeTest(base.BaseAdminNetworkTest):
@@ -35,7 +35,7 @@
@classmethod
def skip_checks(cls):
super(QuotasNegativeTest, cls).skip_checks()
- if not test.is_extension_enabled('quotas', 'network'):
+ if not utils.is_extension_enabled('quotas', 'network'):
msg = "quotas extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/network/admin/test_quotas.py b/tempest/api/network/admin/test_quotas.py
index aa8b2dc..cf4236d 100644
--- a/tempest/api/network/admin/test_quotas.py
+++ b/tempest/api/network/admin/test_quotas.py
@@ -14,10 +14,11 @@
# under the License.
from tempest.api.network import base
+from tempest.common import identity
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest import test
class QuotasTest(base.BaseAdminNetworkTest):
@@ -38,7 +39,7 @@
@classmethod
def skip_checks(cls):
super(QuotasTest, cls).skip_checks()
- if not test.is_extension_enabled('quotas', 'network'):
+ if not utils.is_extension_enabled('quotas', 'network'):
msg = "quotas extension not enabled."
raise cls.skipException(msg)
@@ -46,10 +47,11 @@
# Add a project to conduct the test
project = data_utils.rand_name('test_project_')
description = data_utils.rand_name('desc_')
- project = self.identity_utils.create_project(name=project,
- description=description)
+ project = identity.identity_utils(self.os_admin).create_project(
+ name=project, description=description)
project_id = project['id']
- self.addCleanup(self.identity_utils.delete_project, project_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
+ project_id)
# Change quotas for project
quota_set = self.admin_quotas_client.update_quotas(
diff --git a/tempest/api/network/admin/test_routers.py b/tempest/api/network/admin/test_routers.py
index f180cda..8cdb41e 100644
--- a/tempest/api/network/admin/test_routers.py
+++ b/tempest/api/network/admin/test_routers.py
@@ -16,10 +16,11 @@
import testtools
from tempest.api.network import base
+from tempest.common import identity
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -41,23 +42,10 @@
self.addCleanup(self._cleanup_router, router)
return router
- def _add_router_interface_with_subnet_id(self, router_id, subnet_id):
- interface = self.routers_client.add_router_interface(
- router_id, subnet_id=subnet_id)
- self.addCleanup(self._remove_router_interface_with_subnet_id,
- router_id, subnet_id)
- self.assertEqual(subnet_id, interface['subnet_id'])
- return interface
-
- def _remove_router_interface_with_subnet_id(self, router_id, subnet_id):
- body = self.routers_client.remove_router_interface(router_id,
- subnet_id=subnet_id)
- self.assertEqual(subnet_id, body['subnet_id'])
-
@classmethod
def skip_checks(cls):
super(RoutersAdminTest, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
@@ -66,10 +54,11 @@
# Test creating router from admin user setting project_id.
project = data_utils.rand_name('test_tenant_')
description = data_utils.rand_name('desc_')
- project = self.identity_utils.create_project(name=project,
- description=description)
+ project = identity.identity_utils(self.os_admin).create_project(
+ name=project, description=description)
project_id = project['id']
- self.addCleanup(self.identity_utils.delete_project, project_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
+ project_id)
name = data_utils.rand_name('router-')
create_body = self.admin_routers_client.create_router(
@@ -79,7 +68,7 @@
self.assertEqual(project_id, create_body['router']['tenant_id'])
@decorators.idempotent_id('847257cc-6afd-4154-b8fb-af49f5670ce8')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_create_router_with_default_snat_value(self):
@@ -91,7 +80,7 @@
'enable_snat': True})
@decorators.idempotent_id('ea74068d-09e9-4fd7-8995-9b6a1ace920f')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_create_router_with_snat_explicit(self):
@@ -153,7 +142,7 @@
self._verify_gateway_port(router['id'])
@decorators.idempotent_id('b386c111-3b21-466d-880c-5e72b01e1a33')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_update_router_set_gateway_with_snat_explicit(self):
@@ -170,7 +159,7 @@
self._verify_gateway_port(router['id'])
@decorators.idempotent_id('96536bc7-8262-4fb2-9967-5c46940fa279')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_update_router_set_gateway_without_snat(self):
@@ -202,7 +191,7 @@
self.assertFalse(list_body['ports'])
@decorators.idempotent_id('f2faf994-97f4-410b-a831-9bc977b64374')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_update_router_reset_gateway_without_snat(self):
diff --git a/tempest/api/network/admin/test_routers_dvr.py b/tempest/api/network/admin/test_routers_dvr.py
index b6772b1..93478e6 100644
--- a/tempest/api/network/admin/test_routers_dvr.py
+++ b/tempest/api/network/admin/test_routers_dvr.py
@@ -16,9 +16,9 @@
import testtools
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class RoutersTestDVR(base.BaseAdminNetworkTest):
@@ -27,7 +27,7 @@
def skip_checks(cls):
super(RoutersTestDVR, cls).skip_checks()
for ext in ['router', 'dvr']:
- if not test.is_extension_enabled(ext, 'network'):
+ if not utils.is_extension_enabled(ext, 'network'):
msg = "%s extension not enabled." % ext
raise cls.skipException(msg)
# The check above will pass if api_extensions=all, which does
@@ -87,7 +87,7 @@
self.assertFalse(router['router']['distributed'])
@decorators.idempotent_id('acd43596-c1fb-439d-ada8-31ad48ae3c2e')
- @testtools.skipUnless(test.is_extension_enabled('l3-ha', 'network'),
+ @testtools.skipUnless(utils.is_extension_enabled('l3-ha', 'network'),
'HA routers are not available.')
def test_centralized_router_update_to_dvr(self):
"""Test centralized router update
diff --git a/tempest/api/network/admin/test_routers_negative.py b/tempest/api/network/admin/test_routers_negative.py
index f350a15..9356bcc 100644
--- a/tempest/api/network/admin/test_routers_negative.py
+++ b/tempest/api/network/admin/test_routers_negative.py
@@ -16,10 +16,10 @@
import testtools
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -29,13 +29,13 @@
@classmethod
def skip_checks(cls):
super(RoutersAdminNegativeTest, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7101cc02-058a-11e7-93e1-fa163e4fa634')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_router_set_gateway_used_ip_returns_409(self):
diff --git a/tempest/api/network/base.py b/tempest/api/network/base.py
index 6bec0d7..8308e34 100644
--- a/tempest/api/network/base.py
+++ b/tempest/api/network/base.py
@@ -96,6 +96,12 @@
cls.metering_labels = []
cls.metering_label_rules = []
cls.ethertype = "IPv" + str(cls._ip_version)
+ if cls._ip_version == 4:
+ cls.cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
+ cls.mask_bits = CONF.network.project_network_mask_bits
+ elif cls._ip_version == 6:
+ cls.cidr = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
+ cls.mask_bits = CONF.network.project_network_v6_mask_bits
@classmethod
def resource_cleanup(cls):
diff --git a/tempest/api/network/test_allowed_address_pair.py b/tempest/api/network/test_allowed_address_pair.py
index a90e4bf..3075047 100644
--- a/tempest/api/network/test_allowed_address_pair.py
+++ b/tempest/api/network/test_allowed_address_pair.py
@@ -13,13 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
-import netaddr
import six
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -44,7 +43,7 @@
@classmethod
def skip_checks(cls):
super(AllowedAddressPairTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('allowed-address-pairs', 'network'):
+ if not utils.is_extension_enabled('allowed-address-pairs', 'network'):
msg = "Allowed Address Pairs extension not enabled."
raise cls.skipException(msg)
@@ -103,8 +102,7 @@
@decorators.idempotent_id('4d6d178f-34f6-4bff-a01c-0a2f8fe909e4')
def test_update_port_with_cidr_address_pair(self):
# Update allowed address pair with cidr
- cidr = str(netaddr.IPNetwork(CONF.network.project_network_cidr))
- self._update_port_with_address(cidr)
+ self._update_port_with_address(str(self.cidr))
@decorators.idempotent_id('b3f20091-6cd5-472b-8487-3516137df933')
def test_update_port_with_multiple_ip_mac_address_pair(self):
diff --git a/tempest/api/network/test_extensions.py b/tempest/api/network/test_extensions.py
index 014d064..4804ada 100644
--- a/tempest/api/network/test_extensions.py
+++ b/tempest/api/network/test_extensions.py
@@ -15,8 +15,8 @@
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class ExtensionsTestJSON(base.BaseNetworkTest):
@@ -40,7 +40,7 @@
'allowed-address-pairs', 'extra_dhcp_opt',
'metering', 'dvr']
expected_alias = [ext for ext in expected_alias if
- test.is_extension_enabled(ext, 'network')]
+ utils.is_extension_enabled(ext, 'network')]
actual_alias = list()
extensions = self.network_extensions_client.list_extensions()
list_extensions = extensions['extensions']
@@ -66,5 +66,5 @@
# of extensions returned, but only for those that have been
# enabled via configuration
for e in expected_alias:
- if test.is_extension_enabled(e, 'network'):
+ if utils.is_extension_enabled(e, 'network'):
self.assertIn(e, actual_alias)
diff --git a/tempest/api/network/test_extra_dhcp_options.py b/tempest/api/network/test_extra_dhcp_options.py
index dc9042e..0d42033 100644
--- a/tempest/api/network/test_extra_dhcp_options.py
+++ b/tempest/api/network/test_extra_dhcp_options.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class ExtraDHCPOptionsTestJSON(base.BaseNetworkTest):
@@ -35,7 +35,7 @@
@classmethod
def skip_checks(cls):
super(ExtraDHCPOptionsTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('extra_dhcp_opt', 'network'):
+ if not utils.is_extension_enabled('extra_dhcp_opt', 'network'):
msg = "Extra DHCP Options extension not enabled."
raise cls.skipException(msg)
@@ -75,7 +75,7 @@
def test_update_show_port_with_extra_dhcp_options(self):
# Update port with extra dhcp options
name = data_utils.rand_name('new-port-name')
- body = self.ports_client.update_port(
+ self.ports_client.update_port(
self.port['id'],
name=name,
extra_dhcp_opts=self.extra_dhcp_opts)
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index c799b15..ef4a23a 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -14,10 +14,10 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest.common.utils import net_utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -43,7 +43,7 @@
@classmethod
def skip_checks(cls):
super(FloatingIPTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
if not CONF.network.public_network_id:
diff --git a/tempest/api/network/test_floating_ips_negative.py b/tempest/api/network/test_floating_ips_negative.py
index 5ca17fe..e904a81 100644
--- a/tempest/api/network/test_floating_ips_negative.py
+++ b/tempest/api/network/test_floating_ips_negative.py
@@ -15,10 +15,10 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -34,7 +34,7 @@
@classmethod
def skip_checks(cls):
super(FloatingIPNegativeTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
if not CONF.network.public_network_id:
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index 269f2c2..1c59556 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -18,12 +18,12 @@
from tempest.api.network import base
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -34,8 +34,7 @@
def resource_setup(cls):
super(BaseNetworkTestResources, cls).resource_setup()
cls.network = cls.create_network()
- cls.subnet = cls._create_subnet_with_last_subnet_block(cls.network,
- cls._ip_version)
+ cls.subnet = cls._create_subnet_with_last_subnet_block(cls.network)
cls._subnet_data = {6: {'gateway':
str(cls._get_gateway_from_tempest_conf(6)),
'allocation_pools':
@@ -64,20 +63,13 @@
'new_dns_nameservers': ['7.8.8.8', '7.8.4.4']}}
@classmethod
- def _create_subnet_with_last_subnet_block(cls, network, ip_version):
+ def _create_subnet_with_last_subnet_block(cls, network):
# Derive last subnet CIDR block from project CIDR and
# create the subnet with that derived CIDR
- if ip_version == 4:
- cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
- mask_bits = CONF.network.project_network_mask_bits
- elif ip_version == 6:
- cidr = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
- mask_bits = CONF.network.project_network_v6_mask_bits
-
- subnet_cidr = list(cidr.subnet(mask_bits))[-1]
+ subnet_cidr = list(cls.cidr.subnet(cls.mask_bits))[-1]
gateway_ip = str(netaddr.IPAddress(subnet_cidr) + 1)
return cls.create_subnet(network, gateway=gateway_ip,
- cidr=subnet_cidr, mask_bits=mask_bits)
+ cidr=subnet_cidr, mask_bits=cls.mask_bits)
@classmethod
def _get_gateway_from_tempest_conf(cls, ip_version):
@@ -209,7 +201,7 @@
def test_show_network_fields(self):
# Verify specific fields of a network
fields = ['id', 'name']
- if test.is_extension_enabled('net-mtu', 'network'):
+ if utils.is_extension_enabled('net-mtu', 'network'):
fields.append('mtu')
body = self.networks_client.show_network(self.network['id'],
fields=fields)
@@ -233,7 +225,7 @@
def test_list_networks_fields(self):
# Verify specific fields of the networks
fields = ['id', 'name']
- if test.is_extension_enabled('net-mtu', 'network'):
+ if utils.is_extension_enabled('net-mtu', 'network'):
fields.append('mtu')
body = self.networks_client.list_networks(fields=fields)
networks = body['networks']
@@ -370,30 +362,44 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('af774677-42a9-4e4b-bb58-16fe6a5bc1ec')
- @test.requires_ext(extension='external-net', service='network')
+ @utils.requires_ext(extension='external-net', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
def test_external_network_visibility(self):
- """Verifies user can see external networks but not subnets."""
+ public_network_id = CONF.network.public_network_id
+
+ # find external network matching public_network_id
body = self.networks_client.list_networks(**{'router:external': True})
- networks = [network['id'] for network in body['networks']]
- self.assertNotEmpty(networks, "No external networks found")
+ external_network = next((network for network in body['networks']
+ if network['id'] == public_network_id), None)
+ self.assertIsNotNone(external_network, "Public network %s not found "
+ "in external network list"
+ % public_network_id)
nonexternal = [net for net in body['networks'] if
not net['router:external']]
self.assertEmpty(nonexternal, "Found non-external networks"
" in filtered list (%s)." % nonexternal)
- self.assertIn(CONF.network.public_network_id, networks)
+
# only check the public network ID because the other networks may
# belong to other tests and their state may have changed during this
# test
- body = self.subnets_client.list_subnets(
- network_id=CONF.network.public_network_id)
- self.assertEmpty(body['subnets'], "Public subnets visible")
+ body = self.subnets_client.list_subnets(network_id=public_network_id)
+
+ # check subnet visibility of external_network
+ if external_network['shared']:
+ self.assertNotEmpty(body['subnets'], "Subnets should be visible "
+ "for shared public network %s"
+ % public_network_id)
+ else:
+ self.assertEmpty(body['subnets'], "Subnets should not be visible "
+ "for non-shared public "
+ "network %s"
+ % public_network_id)
@decorators.idempotent_id('c72c1c0c-2193-4aca-ccc4-b1442640bbbb')
- @test.requires_ext(extension="standard-attr-description",
- service="network")
+ @utils.requires_ext(extension="standard-attr-description",
+ service="network")
def test_create_update_network_description(self):
body = self.create_network(description='d1')
self.assertEqual('d1', body['description'])
@@ -473,14 +479,8 @@
def test_bulk_create_delete_subnet(self):
networks = [self.create_network(), self.create_network()]
# Creates 2 subnets in one request
- if self._ip_version == 4:
- cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
- mask_bits = CONF.network.project_network_mask_bits
- else:
- cidr = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
- mask_bits = CONF.network.project_network_v6_mask_bits
-
- cidrs = [subnet_cidr for subnet_cidr in cidr.subnet(mask_bits)]
+ cidrs = [subnet_cidr
+ for subnet_cidr in self.cidr.subnet(self.mask_bits)]
names = [data_utils.rand_name('subnet-') for i in range(len(networks))]
subnets_list = []
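With cls.cidr and cls.mask_bits now set once in the network base class (see the base.py hunk earlier), the subclasses above derive their subnets directly from those attributes instead of re-reading the config in each test. A standalone illustration of the netaddr arithmetic involved follows; the CIDR and mask bits are example values, not asserted Tempest defaults.

import netaddr

cidr = netaddr.IPNetwork('10.100.0.0/16')
mask_bits = 28

# Last /28 block inside the project range, as picked by
# _create_subnet_with_last_subnet_block above.
subnet_cidr = list(cidr.subnet(mask_bits))[-1]        # 10.100.255.240/28
gateway_ip = str(netaddr.IPAddress(subnet_cidr) + 1)  # '10.100.255.241'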
diff --git a/tempest/api/network/test_ports.py b/tempest/api/network/test_ports.py
index f81927d..eb53fbb 100644
--- a/tempest/api/network/test_ports.py
+++ b/tempest/api/network/test_ports.py
@@ -18,11 +18,11 @@
from tempest.api.network import base_security_groups as sec_base
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions
-from tempest import test
CONF = config.CONF
@@ -84,25 +84,13 @@
self.assertTrue(port1['admin_state_up'])
self.assertTrue(port2['admin_state_up'])
- @classmethod
- def _get_ipaddress_from_tempest_conf(cls):
- """Return subnet with mask bits for configured CIDR """
- if cls._ip_version == 4:
- cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
- cidr.prefixlen = CONF.network.project_network_mask_bits
-
- elif cls._ip_version == 6:
- cidr = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
- cidr.prefixlen = CONF.network.project_network_v6_mask_bits
-
- return cidr
-
@decorators.attr(type='smoke')
@decorators.idempotent_id('0435f278-40ae-48cb-a404-b8a087bc09b1')
def test_create_port_in_allowed_allocation_pools(self):
network = self.create_network()
net_id = network['id']
- address = self._get_ipaddress_from_tempest_conf()
+ address = self.cidr
+ address.prefixlen = self.mask_bits
if ((address.version == 4 and address.prefixlen >= 30) or
(address.version == 6 and address.prefixlen >= 126)):
msg = ("Subnet %s isn't large enough for the test" % address.cidr)
@@ -307,7 +295,7 @@
@decorators.idempotent_id('58091b66-4ff4-4cc1-a549-05d60c7acd1a')
@testtools.skipUnless(
- test.is_extension_enabled('security-group', 'network'),
+ utils.is_extension_enabled('security-group', 'network'),
'security-group extension not enabled.')
def test_update_port_with_security_group_and_extra_attributes(self):
self._update_port_with_security_groups(
@@ -315,7 +303,7 @@
@decorators.idempotent_id('edf6766d-3d40-4621-bc6e-2521a44c257d')
@testtools.skipUnless(
- test.is_extension_enabled('security-group', 'network'),
+ utils.is_extension_enabled('security-group', 'network'),
'security-group extension not enabled.')
def test_update_port_with_two_security_groups_and_extra_attributes(self):
self._update_port_with_security_groups(
@@ -342,7 +330,7 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('4179dcb9-1382-4ced-84fe-1b91c54f5735')
@testtools.skipUnless(
- test.is_extension_enabled('security-group', 'network'),
+ utils.is_extension_enabled('security-group', 'network'),
'security-group extension not enabled.')
def test_create_port_with_no_securitygroups(self):
network = self.create_network()
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index 128544b..99ffaa8 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -17,10 +17,10 @@
import testtools
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -55,17 +55,10 @@
@classmethod
def skip_checks(cls):
super(RoutersTest, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
- @classmethod
- def resource_setup(cls):
- super(RoutersTest, cls).resource_setup()
- cls.tenant_cidr = (CONF.network.project_network_cidr
- if cls._ip_version == 4 else
- CONF.network.project_network_v6_cidr)
-
@decorators.attr(type='smoke')
@decorators.idempotent_id('f64403e2-8483-4b34-8ccd-b09a87bcc68c')
@testtools.skipUnless(CONF.network.public_network_id,
@@ -139,35 +132,8 @@
self.assertEqual(show_port_body['port']['device_id'],
router['id'])
- def _verify_router_gateway(self, router_id, exp_ext_gw_info=None):
- show_body = self.admin_routers_client.show_router(router_id)
- actual_ext_gw_info = show_body['router']['external_gateway_info']
- if exp_ext_gw_info is None:
- self.assertIsNone(actual_ext_gw_info)
- return
- # Verify only keys passed in exp_ext_gw_info
- for k, v in exp_ext_gw_info.items():
- self.assertEqual(v, actual_ext_gw_info[k])
-
- def _verify_gateway_port(self, router_id):
- list_body = self.admin_ports_client.list_ports(
- network_id=CONF.network.public_network_id,
- device_id=router_id)
- self.assertEqual(len(list_body['ports']), 1)
- gw_port = list_body['ports'][0]
- fixed_ips = gw_port['fixed_ips']
- self.assertNotEmpty(fixed_ips)
- # Assert that all of the IPs from the router gateway port
- # are allocated from a valid public subnet.
- public_net_body = self.admin_networks_client.show_network(
- CONF.network.public_network_id)
- public_subnet_ids = public_net_body['network']['subnets']
- for fixed_ip in fixed_ips:
- subnet_id = fixed_ip['subnet_id']
- self.assertIn(subnet_id, public_subnet_ids)
-
@decorators.idempotent_id('cbe42f84-04c2-11e7-8adb-fa163e4fa634')
- @test.requires_ext(extension='ext-gw-mode', service='network')
+ @utils.requires_ext(extension='ext-gw-mode', service='network')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
@decorators.skip_because(bug='1676207')
@@ -198,11 +164,11 @@
fixed_ip['ip_address'])
@decorators.idempotent_id('c86ac3a8-50bd-4b00-a6b8-62af84a0765c')
- @test.requires_ext(extension='extraroute', service='network')
+ @utils.requires_ext(extension='extraroute', service='network')
def test_update_delete_extra_route(self):
# Create different cidr for each subnet to avoid cidr duplicate
# The cidr starts from project_cidr
- next_cidr = netaddr.IPNetwork(self.tenant_cidr)
+ next_cidr = self.cidr
# Prepare to build several routes
test_routes = []
routes_num = 4
@@ -278,7 +244,7 @@
network02 = self.create_network(
network_name=data_utils.rand_name('router-network02-'))
subnet01 = self.create_subnet(network01)
- sub02_cidr = netaddr.IPNetwork(self.tenant_cidr).next()
+ sub02_cidr = self.cidr.next()
subnet02 = self.create_subnet(network02, cidr=sub02_cidr)
router = self._create_router()
interface01 = self._add_router_interface_with_subnet_id(router['id'],
diff --git a/tempest/api/network/test_routers_negative.py b/tempest/api/network/test_routers_negative.py
index db165ab..c9ce55c 100644
--- a/tempest/api/network/test_routers_negative.py
+++ b/tempest/api/network/test_routers_negative.py
@@ -13,14 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
-import netaddr
-
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -30,7 +28,7 @@
@classmethod
def skip_checks(cls):
super(RoutersNegativeTest, cls).skip_checks()
- if not test.is_extension_enabled('router', 'network'):
+ if not utils.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
@@ -40,9 +38,6 @@
cls.router = cls.create_router()
cls.network = cls.create_network()
cls.subnet = cls.create_subnet(cls.network)
- cls.tenant_cidr = (CONF.network.project_network_cidr
- if cls._ip_version == 4 else
- CONF.network.project_network_v6_cidr)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('37a94fc0-a834-45b9-bd23-9a81d2fd1e22')
@@ -57,7 +52,7 @@
@decorators.idempotent_id('11836a18-0b15-4327-a50b-f0d9dc66bddd')
def test_router_add_gateway_net_not_external_returns_400(self):
alt_network = self.create_network()
- sub_cidr = netaddr.IPNetwork(self.tenant_cidr).next()
+ sub_cidr = self.cidr.next()
self.create_subnet(alt_network, cidr=sub_cidr)
self.assertRaises(lib_exc.BadRequest,
self.routers_client.update_router,
@@ -124,7 +119,7 @@
@classmethod
def skip_checks(cls):
super(DvrRoutersNegativeTest, cls).skip_checks()
- if not test.is_extension_enabled('dvr', 'network'):
+ if not utils.is_extension_enabled('dvr', 'network'):
msg = "DVR extension not enabled."
raise cls.skipException(msg)
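The per-class tenant_cidr setup deleted above is replaced by cidr/mask_bits attributes that the tests now read from their base class. A rough sketch of what that shared setup presumably looks like, reconstructed from the removed code; the class name is a stand-in, not the actual base class.

import netaddr

from tempest import config

CONF = config.CONF


class ExampleNetworkBase(object):
    """Stand-in for the shared network base class (illustration only)."""

    _ip_version = 4

    @classmethod
    def resource_setup(cls):
        # Compute the project CIDR and mask bits once per IP version,
        # mirroring the per-test logic removed in the hunks above.
        if cls._ip_version == 4:
            cls.cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
            cls.mask_bits = CONF.network.project_network_mask_bits
        else:
            cls.cidr = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
            cls.mask_bits = CONF.network.project_network_v6_mask_bits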
diff --git a/tempest/api/network/test_security_groups.py b/tempest/api/network/test_security_groups.py
index a121864..24bd8ea 100644
--- a/tempest/api/network/test_security_groups.py
+++ b/tempest/api/network/test_security_groups.py
@@ -14,21 +14,20 @@
# under the License.
from tempest.api.network import base_security_groups as base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
class SecGroupTest(base.BaseSecGroupTest):
- _project_network_cidr = CONF.network.project_network_cidr
@classmethod
def skip_checks(cls):
super(SecGroupTest, cls).skip_checks()
- if not test.is_extension_enabled('security-group', 'network'):
+ if not utils.is_extension_enabled('security-group', 'network'):
msg = "security-group extension not enabled."
raise cls.skipException(msg)
@@ -209,7 +208,7 @@
protocol = 'tcp'
port_range_min = 76
port_range_max = 77
- ip_prefix = self._project_network_cidr
+ ip_prefix = str(self.cidr)
self._create_verify_security_group_rule(sg_id, direction,
self.ethertype, protocol,
port_range_min,
@@ -238,4 +237,3 @@
class SecGroupIPv6Test(SecGroupTest):
_ip_version = 6
- _project_network_cidr = CONF.network.project_network_v6_cidr
diff --git a/tempest/api/network/test_security_groups_negative.py b/tempest/api/network/test_security_groups_negative.py
index f51fb33..d054865 100644
--- a/tempest/api/network/test_security_groups_negative.py
+++ b/tempest/api/network/test_security_groups_negative.py
@@ -14,22 +14,21 @@
# under the License.
from tempest.api.network import base_security_groups as base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
class NegativeSecGroupTest(base.BaseSecGroupTest):
- _project_network_cidr = CONF.network.project_network_cidr
@classmethod
def skip_checks(cls):
super(NegativeSecGroupTest, cls).skip_checks()
- if not test.is_extension_enabled('security-group', 'network'):
+ if not utils.is_extension_enabled('security-group', 'network'):
msg = "security-group extension not enabled."
raise cls.skipException(msg)
@@ -110,7 +109,7 @@
sg2_body, _ = self._create_security_group()
# Create rule specifying both remote_ip_prefix and remote_group_id
- prefix = self._project_network_cidr
+ prefix = str(self.cidr)
self.assertRaises(
lib_exc.BadRequest,
self.security_group_rules_client.create_security_group_rule,
@@ -225,7 +224,6 @@
class NegativeSecGroupIPv6Test(NegativeSecGroupTest):
_ip_version = 6
- _project_network_cidr = CONF.network.project_network_v6_cidr
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7607439c-af73-499e-bf64-f687fd12a842')
diff --git a/tempest/api/network/test_service_providers.py b/tempest/api/network/test_service_providers.py
index b90c81b..9ebcd89 100644
--- a/tempest/api/network/test_service_providers.py
+++ b/tempest/api/network/test_service_providers.py
@@ -13,15 +13,15 @@
import testtools
from tempest.api.network import base
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class ServiceProvidersTest(base.BaseNetworkTest):
@decorators.idempotent_id('2cbbeea9-f010-40f6-8df5-4eaa0c918ea6')
@testtools.skipUnless(
- test.is_extension_enabled('service-type', 'network'),
+ utils.is_extension_enabled('service-type', 'network'),
'service-type extension not enabled.')
def test_service_providers_list(self):
body = self.service_providers_client.list_service_providers()
diff --git a/tempest/api/network/test_subnetpools_extensions.py b/tempest/api/network/test_subnetpools_extensions.py
index 01d7db2..bfc2609 100644
--- a/tempest/api/network/test_subnetpools_extensions.py
+++ b/tempest/api/network/test_subnetpools_extensions.py
@@ -13,12 +13,12 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -42,7 +42,7 @@
@classmethod
def skip_checks(cls):
super(SubnetPoolsTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('subnet_allocation', 'network'):
+ if not utils.is_extension_enabled('subnet_allocation', 'network'):
msg = "subnet_allocation extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/network/test_tags.py b/tempest/api/network/test_tags.py
index 567a462..409d556 100644
--- a/tempest/api/network/test_tags.py
+++ b/tempest/api/network/test_tags.py
@@ -14,11 +14,11 @@
# under the License.
from tempest.api.network import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -40,7 +40,7 @@
@classmethod
def skip_checks(cls):
super(TagsTest, cls).skip_checks()
- if not test.is_extension_enabled('tag', 'network'):
+ if not utils.is_extension_enabled('tag', 'network'):
msg = "tag extension not enabled."
raise cls.skipException(msg)
@@ -115,7 +115,7 @@
@classmethod
def skip_checks(cls):
super(TagsExtTest, cls).skip_checks()
- if not test.is_extension_enabled('tag-ext', 'network'):
+ if not utils.is_extension_enabled('tag-ext', 'network'):
msg = "tag-ext extension not enabled."
raise cls.skipException(msg)
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index 11273e4..ee72163 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -43,7 +43,7 @@
for cont in containers:
try:
params = {'limit': 9999, 'format': 'json'}
- _, objlist = container_client.list_container_contents(cont, params)
+ _, objlist = container_client.list_container_objects(cont, params)
# delete every object in the container
for obj in objlist:
test_utils.call_and_ignore_notfound_exc(
@@ -71,9 +71,6 @@
def setup_credentials(cls):
cls.set_network_resources()
super(BaseObjectTest, cls).setup_credentials()
- # credentials may be overwritten by children classes
- if hasattr(cls, 'os_roles_operator'):
- cls.os = cls.os_roles_operator
@classmethod
def setup_clients(cls):
@@ -109,7 +106,7 @@
def create_container(cls):
# wrapper that returns a test container
container_name = data_utils.rand_name(name='TestContainer')
- cls.container_client.create_container(container_name)
+ cls.container_client.update_container(container_name)
cls.containers.append(container_name)
return container_name
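A minimal usage sketch (illustration only) of the renamed container client calls used throughout the object storage hunks: update_container issues the PUT that create_container used to, and list_container_objects replaces list_container_contents. The helper name is hypothetical.

from tempest.lib.common.utils import data_utils


def example_container_listing(container_client):
    """Illustration only; the helper name is hypothetical."""
    container_name = data_utils.rand_name(name='TestContainer')
    # PUT on the container URL creates (or updates) the container.
    container_client.update_container(container_name)
    # GET the object listing, formerly list_container_contents().
    resp, object_list = container_client.list_container_objects(
        container_name, params={'limit': 9999, 'format': 'json'})
    return resp, object_list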
diff --git a/tempest/api/object_storage/test_account_bulk.py b/tempest/api/object_storage/test_account_bulk.py
index 7c538e8..6599e43 100644
--- a/tempest/api/object_storage/test_account_bulk.py
+++ b/tempest/api/object_storage/test_account_bulk.py
@@ -17,8 +17,8 @@
from tempest.api.object_storage import base
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class BulkTest(base.BaseObjectTest):
@@ -69,7 +69,7 @@
self.assertNotIn(container_name, body)
@decorators.idempotent_id('a407de51-1983-47cc-9f14-47c2b059413c')
- @test.requires_ext(extension='bulk_upload', service='object')
+ @utils.requires_ext(extension='bulk_upload', service='object')
def test_extract_archive(self):
# Test bulk operation of file upload with an archived file
filepath, container_name, object_name = self._create_archive()
@@ -96,7 +96,7 @@
self.assertIn(container_name, [b['name'] for b in body])
param = {'format': 'json'}
- resp, contents_list = self.container_client.list_container_contents(
+ resp, contents_list = self.container_client.list_container_objects(
container_name, param)
self.assertHeaders(resp, 'Container', 'GET')
@@ -104,7 +104,7 @@
self.assertIn(object_name, [c['name'] for c in contents_list])
@decorators.idempotent_id('c075e682-0d2a-43b2-808d-4116200d736d')
- @test.requires_ext(extension='bulk_delete', service='object')
+ @utils.requires_ext(extension='bulk_delete', service='object')
def test_bulk_delete(self):
# Test bulk operation of deleting multiple files
filepath, container_name, object_name = self._create_archive()
@@ -129,7 +129,7 @@
self._check_contents_deleted(container_name)
@decorators.idempotent_id('dbea2bcb-efbb-4674-ac8a-a5a0e33d1d79')
- @test.requires_ext(extension='bulk_delete', service='object')
+ @utils.requires_ext(extension='bulk_delete', service='object')
def test_bulk_delete_by_POST(self):
# Test bulk operation of deleting multiple files
filepath, container_name, object_name = self._create_archive()
diff --git a/tempest/api/object_storage/test_account_quotas.py b/tempest/api/object_storage/test_account_quotas.py
index 092d369..48f42ec 100644
--- a/tempest/api/object_storage/test_account_quotas.py
+++ b/tempest/api/object_storage/test_account_quotas.py
@@ -13,10 +13,10 @@
# under the License.
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -29,7 +29,6 @@
@classmethod
def setup_credentials(cls):
super(AccountQuotasTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_reselleradmin = cls.os_roles_reseller
@classmethod
@@ -78,7 +77,7 @@
@decorators.attr(type="smoke")
@decorators.idempotent_id('a22ef352-a342-4587-8f47-3bbdb5b039c4')
- @test.requires_ext(extension='account_quotas', service='object')
+ @utils.requires_ext(extension='account_quotas', service='object')
def test_upload_valid_object(self):
object_name = data_utils.rand_name(name="TestObject")
data = data_utils.arbitrary_string()
@@ -89,7 +88,7 @@
@decorators.attr(type=["smoke"])
@decorators.idempotent_id('63f51f9f-5f1d-4fc6-b5be-d454d70949d6')
- @test.requires_ext(extension='account_quotas', service='object')
+ @utils.requires_ext(extension='account_quotas', service='object')
def test_admin_modify_quota(self):
"""Test ResellerAdmin can modify/remove the quota on a user's account
diff --git a/tempest/api/object_storage/test_account_quotas_negative.py b/tempest/api/object_storage/test_account_quotas_negative.py
index 60233b4..798926b 100644
--- a/tempest/api/object_storage/test_account_quotas_negative.py
+++ b/tempest/api/object_storage/test_account_quotas_negative.py
@@ -13,10 +13,10 @@
# under the License.
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -29,7 +29,6 @@
@classmethod
def setup_credentials(cls):
super(AccountQuotasNegativeTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_reselleradmin = cls.os_roles_reseller
@classmethod
@@ -77,7 +76,7 @@
@decorators.attr(type=["negative"])
@decorators.idempotent_id('d1dc5076-555e-4e6d-9697-28f1fe976324')
- @test.requires_ext(extension='account_quotas', service='object')
+ @utils.requires_ext(extension='account_quotas', service='object')
def test_user_modify_quota(self):
"""Test that a user cannot modify or remove a quota on its account."""
diff --git a/tempest/api/object_storage/test_account_services.py b/tempest/api/object_storage/test_account_services.py
index 2fb676f..d7c85a2 100644
--- a/tempest/api/object_storage/test_account_services.py
+++ b/tempest/api/object_storage/test_account_services.py
@@ -36,15 +36,14 @@
@classmethod
def setup_credentials(cls):
super(AccountTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_operator = cls.os_roles_operator_alt
@classmethod
def resource_setup(cls):
super(AccountTest, cls).resource_setup()
for i in range(ord('a'), ord('f') + 1):
- name = data_utils.rand_name(name='%s-' % chr(i))
- cls.container_client.create_container(name)
+ name = data_utils.rand_name(name='%s-' % six.int2byte(i))
+ cls.container_client.update_container(name)
cls.containers.append(name)
cls.containers_count = len(cls.containers)
diff --git a/tempest/api/object_storage/test_account_services_negative.py b/tempest/api/object_storage/test_account_services_negative.py
index e98a4f5..3e664d7 100644
--- a/tempest/api/object_storage/test_account_services_negative.py
+++ b/tempest/api/object_storage/test_account_services_negative.py
@@ -28,7 +28,6 @@
@classmethod
def setup_credentials(cls):
super(AccountNegativeTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_operator = cls.os_roles_operator_alt
@decorators.attr(type=['negative'])
diff --git a/tempest/api/object_storage/test_container_acl.py b/tempest/api/object_storage/test_container_acl.py
index 4b66ebf..765bc6d 100644
--- a/tempest/api/object_storage/test_container_acl.py
+++ b/tempest/api/object_storage/test_container_acl.py
@@ -41,10 +41,11 @@
tenant_name = self.os_roles_operator_alt.credentials.tenant_name
username = self.os_roles_operator_alt.credentials.username
cont_headers = {'X-Container-Read': tenant_name + ':' + username}
+ container_client = self.os_roles_operator.container_client
resp_meta, _ = (
- self.os_roles_operator.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix=''))
+ container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
object_name = data_utils.rand_name(name='Object')
@@ -68,10 +69,11 @@
tenant_name = self.os_roles_operator_alt.credentials.tenant_name
username = self.os_roles_operator_alt.credentials.username
cont_headers = {'X-Container-Write': tenant_name + ':' + username}
+ container_client = self.os_roles_operator.container_client
resp_meta, _ = (
- self.os_roles_operator.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix=''))
+ container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# set alternative authentication data; cannot simply use the
# other object client.
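A short sketch (illustration only) of the consolidated metadata call that replaces update_container_metadata()/delete_container_metadata() in the ACL and services hunks: the same method both sets and removes metadata depending on which keyword argument is passed. The helper name is hypothetical.

def example_container_metadata(container_client, container_name):
    """Illustration only; the helper name is hypothetical."""
    # Add or update metadata (sent as X-Container-Meta-* headers).
    container_client.create_update_or_delete_container_metadata(
        container_name,
        create_update_metadata={'test-container-meta1': 'Meta1'})
    # Remove the same key again via delete_metadata.
    container_client.create_update_or_delete_container_metadata(
        container_name,
        delete_metadata={'test-container-meta1': 'Meta1'})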
diff --git a/tempest/api/object_storage/test_container_acl_negative.py b/tempest/api/object_storage/test_container_acl_negative.py
index 655626c..90b24b4 100644
--- a/tempest/api/object_storage/test_container_acl_negative.py
+++ b/tempest/api/object_storage/test_container_acl_negative.py
@@ -29,7 +29,6 @@
@classmethod
def setup_credentials(cls):
super(ObjectACLsNegativeTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_operator = cls.os_roles_operator_alt
@classmethod
@@ -40,7 +39,7 @@
def setUp(self):
super(ObjectACLsNegativeTest, self).setUp()
self.container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(self.container_name)
+ self.container_client.update_container(self.container_name)
def tearDown(self):
self.delete_containers([self.container_name])
@@ -134,9 +133,10 @@
# attempt to read object using non-authorized user
# update X-Container-Read metadata ACL
cont_headers = {'X-Container-Read': 'badtenant:baduser'}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
object_name = data_utils.rand_name(name='Object')
@@ -158,9 +158,10 @@
# attempt to write object using non-authorized user
# update X-Container-Write metadata ACL
cont_headers = {'X-Container-Write': 'badtenant:baduser'}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# Trying to write the object without rights
self.object_client.auth_provider.set_alt_auth_data(
@@ -183,9 +184,10 @@
cont_headers = {'X-Container-Read':
tenant_name + ':' + username,
'X-Container-Write': ''}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# Trying to write the object without write rights
self.object_client.auth_provider.set_alt_auth_data(
@@ -208,9 +210,10 @@
cont_headers = {'X-Container-Read':
tenant_name + ':' + username,
'X-Container-Write': ''}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
object_name = data_utils.rand_name(name='Object')
diff --git a/tempest/api/object_storage/test_container_quotas.py b/tempest/api/object_storage/test_container_quotas.py
index 8266341..982c4a1 100644
--- a/tempest/api/object_storage/test_container_quotas.py
+++ b/tempest/api/object_storage/test_container_quotas.py
@@ -14,10 +14,10 @@
# under the License.
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
QUOTA_BYTES = 10
QUOTA_COUNT = 3
@@ -40,8 +40,8 @@
self.container_name = self.create_container()
metadata = {"quota-bytes": str(QUOTA_BYTES),
"quota-count": str(QUOTA_COUNT), }
- self.container_client.update_container_metadata(
- self.container_name, metadata)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=metadata)
def tearDown(self):
"""Cleans the container of any object after each test."""
@@ -49,7 +49,7 @@
super(ContainerQuotasTest, self).tearDown()
@decorators.idempotent_id('9a0fb034-86af-4df0-86fa-f8bd7db21ae0')
- @test.requires_ext(extension='container_quotas', service='object')
+ @utils.requires_ext(extension='container_quotas', service='object')
@decorators.attr(type="smoke")
def test_upload_valid_object(self):
"""Attempts to uploads an object smaller than the bytes quota."""
@@ -66,7 +66,7 @@
self.assertEqual(nbefore + len(data), nafter)
@decorators.idempotent_id('22eeeb2b-3668-4160-baef-44790f65a5a0')
- @test.requires_ext(extension='container_quotas', service='object')
+ @utils.requires_ext(extension='container_quotas', service='object')
@decorators.attr(type="smoke")
def test_upload_large_object(self):
"""Attempts to upload an object larger than the bytes quota."""
@@ -83,7 +83,7 @@
self.assertEqual(nbefore, nafter)
@decorators.idempotent_id('3a387039-697a-44fc-a9c0-935de31f426b')
- @test.requires_ext(extension='container_quotas', service='object')
+ @utils.requires_ext(extension='container_quotas', service='object')
@decorators.attr(type="smoke")
def test_upload_too_many_objects(self):
"""Attempts to upload many objects that exceeds the count limit."""
diff --git a/tempest/api/object_storage/test_container_services.py b/tempest/api/object_storage/test_container_services.py
index 76fe8d4..cdc420e 100644
--- a/tempest/api/object_storage/test_container_services.py
+++ b/tempest/api/object_storage/test_container_services.py
@@ -27,7 +27,7 @@
@decorators.idempotent_id('92139d73-7819-4db1-85f8-3f2f22a8d91f')
def test_create_container(self):
container_name = data_utils.rand_name(name='TestContainer')
- resp, _ = self.container_client.create_container(container_name)
+ resp, _ = self.container_client.update_container(container_name)
self.containers.append(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@@ -35,20 +35,20 @@
def test_create_container_overwrite(self):
# overwrite container with the same name
container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(container_name)
+ self.container_client.update_container(container_name)
self.containers.append(container_name)
- resp, _ = self.container_client.create_container(container_name)
+ resp, _ = self.container_client.update_container(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@decorators.idempotent_id('c2ac4d59-d0f5-40d5-ba19-0635056d48cd')
def test_create_container_with_metadata_key(self):
# create a container with a blank metadata value
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta': ''}
- resp, _ = self.container_client.create_container(
+ headers = {'X-Container-Meta-test-container-meta': ''}
+ resp, _ = self.container_client.update_container(
container_name,
- metadata=metadata)
+ **headers)
self.containers.append(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@@ -64,10 +64,10 @@
container_name = data_utils.rand_name(name='TestContainer')
# metadata name using underscores should be converted to hyphens
- metadata = {'test_container_meta': 'Meta1'}
- resp, _ = self.container_client.create_container(
+ headers = {'X-Container-Meta-test_container_meta': 'Meta1'}
+ resp, _ = self.container_client.update_container(
container_name,
- metadata=metadata)
+ **headers)
self.containers.append(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@@ -75,22 +75,20 @@
container_name)
self.assertIn('x-container-meta-test-container-meta', resp)
self.assertEqual(resp['x-container-meta-test-container-meta'],
- metadata['test_container_meta'])
+ headers['X-Container-Meta-test_container_meta'])
@decorators.idempotent_id('24d16451-1c0c-4e4f-b59c-9840a3aba40e')
def test_create_container_with_remove_metadata_key(self):
# create container with the blank value of remove metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata_1 = {'test-container-meta': 'Meta1'}
- self.container_client.create_container(
- container_name,
- metadata=metadata_1)
+ headers = {'X-Container-Meta-test-container-meta': 'Meta1'}
+ self.container_client.update_container(container_name, **headers)
self.containers.append(container_name)
- metadata_2 = {'test-container-meta': ''}
- resp, _ = self.container_client.create_container(
+ headers = {'X-Remove-Container-Meta-test-container-meta': ''}
+ resp, _ = self.container_client.update_container(
container_name,
- remove_metadata=metadata_2)
+ **headers)
self.assertHeaders(resp, 'Container', 'PUT')
resp, _ = self.container_client.list_container_metadata(
@@ -101,14 +99,13 @@
def test_create_container_with_remove_metadata_value(self):
# create container with remove metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata)
+ headers = {'X-Container-Meta-test-container-meta': 'Meta1'}
+ self.container_client.update_container(container_name, **headers)
self.containers.append(container_name)
-
- resp, _ = self.container_client.create_container(
+ headers = {'X-Remove-Container-Meta-test-container-meta': 'Meta1'}
+ resp, _ = self.container_client.update_container(
container_name,
- remove_metadata=metadata)
+ **headers)
self.assertHeaders(resp, 'Container', 'PUT')
resp, _ = self.container_client.list_container_metadata(
@@ -130,7 +127,7 @@
container_name = self.create_container()
object_name, _ = self.create_object(container_name)
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name)
self.assertHeaders(resp, 'Container', 'GET')
self.assertEqual([object_name], object_list)
@@ -140,7 +137,7 @@
# get empty container contents list
container_name = self.create_container()
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name)
self.assertHeaders(resp, 'Container', 'GET')
self.assertEmpty(object_list)
@@ -153,7 +150,7 @@
self.create_object(container_name, object_name)
params = {'delimiter': '/'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -166,7 +163,7 @@
object_name, _ = self.create_object(container_name)
params = {'end_marker': object_name + 'zzzz'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -179,7 +176,7 @@
self.create_object(container_name)
params = {'format': 'json'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -198,7 +195,7 @@
self.create_object(container_name)
params = {'format': 'xml'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -222,7 +219,7 @@
object_name, _ = self.create_object(container_name)
params = {'limit': data_utils.rand_int_id(1, 10000)}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -235,7 +232,7 @@
object_name, _ = self.create_object(container_name)
params = {'marker': 'AaaaObject1234567890'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -250,7 +247,7 @@
self.create_object(container_name, object_name)
params = {'path': 'Swift'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -264,7 +261,7 @@
prefix_key = object_name[0:8]
params = {'prefix': prefix_key}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -277,9 +274,9 @@
container_name = self.create_container()
metadata = {'name': 'Pictures'}
- self.container_client.update_container_metadata(
+ self.container_client.create_update_or_delete_container_metadata(
container_name,
- metadata=metadata)
+ create_update_metadata=metadata)
resp, _ = self.container_client.list_container_metadata(
container_name)
@@ -301,16 +298,16 @@
def test_update_container_metadata_with_create_and_delete_metadata(self):
# Send one request of adding and deleting metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata_1 = {'test-container-meta1': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata_1)
+ metadata_1 = {'X-Container-Meta-test-container-meta1': 'Meta1'}
+ self.container_client.update_container(container_name, **metadata_1)
self.containers.append(container_name)
metadata_2 = {'test-container-meta2': 'Meta2'}
- resp, _ = self.container_client.update_container_metadata(
- container_name,
- metadata=metadata_2,
- remove_metadata=metadata_1)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ create_update_metadata=metadata_2,
+ delete_metadata={'test-container-meta1': 'Meta1'}))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -326,9 +323,10 @@
container_name = self.create_container()
metadata = {'test-container-meta1': 'Meta1'}
- resp, _ = self.container_client.update_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ create_update_metadata=metadata))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -341,14 +339,14 @@
def test_update_container_metadata_with_delete_metadata(self):
# update container metadata using delete metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta1': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata)
+ metadata = {'X-Container-Meta-test-container-meta1': 'Meta1'}
+ self.container_client.update_container(container_name, **metadata)
self.containers.append(container_name)
- resp, _ = self.container_client.delete_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ delete_metadata={'test-container-meta1': 'Meta1'}))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -361,9 +359,10 @@
container_name = self.create_container()
metadata = {'test-container-meta1': ''}
- resp, _ = self.container_client.update_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ create_update_metadata=metadata))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -374,15 +373,15 @@
def test_update_container_metadata_with_delete_metadata_key(self):
# update container metadata with a blank value of metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta1': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata)
+ headers = {'X-Container-Meta-test-container-meta1': 'Meta1'}
+ self.container_client.update_container(container_name, **headers)
self.containers.append(container_name)
metadata = {'test-container-meta1': ''}
- resp, _ = self.container_client.delete_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ delete_metadata=metadata))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(container_name)
diff --git a/tempest/api/object_storage/test_container_services_negative.py b/tempest/api/object_storage/test_container_services_negative.py
index 387b7b6..b8c83b7 100644
--- a/tempest/api/object_storage/test_container_services_negative.py
+++ b/tempest/api/object_storage/test_container_services_negative.py
@@ -45,9 +45,10 @@
max_length = self.constraints['max_container_name_length']
# create a container with a long name
container_name = data_utils.arbitrary_string(size=max_length + 1)
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name)
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name)
self.assertIn('Container name length of ' + str(max_length + 1) +
' longer than ' + str(max_length), str(ex))
@@ -61,11 +62,13 @@
# that is longer than max.
max_length = self.constraints['max_meta_name_length']
container_name = data_utils.rand_name(name='TestContainer')
- metadata_name = data_utils.arbitrary_string(size=max_length + 1)
+ metadata_name = 'X-Container-Meta-' + data_utils.arbitrary_string(
+ size=max_length + 1)
metadata = {metadata_name: 'penguin'}
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name, metadata=metadata)
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name, **metadata)
self.assertIn('Metadata name too long', str(ex))
@decorators.attr(type=["negative"])
@@ -79,10 +82,11 @@
max_length = self.constraints['max_meta_value_length']
container_name = data_utils.rand_name(name='TestContainer')
metadata_value = data_utils.arbitrary_string(size=max_length + 1)
- metadata = {'animal': metadata_value}
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name, metadata=metadata)
+ metadata = {'X-Container-Meta-animal': metadata_value}
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name, **metadata)
self.assertIn('Metadata value longer than ' + str(max_length), str(ex))
@decorators.attr(type=["negative"])
@@ -97,11 +101,12 @@
container_name = data_utils.rand_name(name='TestContainer')
metadata = {}
for i in range(max_count + 1):
- metadata['animal-' + str(i)] = 'penguin'
+ metadata['X-Container-Meta-animal-' + str(i)] = 'penguin'
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name, metadata=metadata)
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name, **metadata)
self.assertIn('Too many metadata items; max ' + str(max_count),
str(ex))
@@ -120,9 +125,10 @@
# Attempts to update metadata using a nonexistent container name.
metadata = {'animal': 'penguin'}
- self.assertRaises(exceptions.NotFound,
- self.container_client.update_container_metadata,
- 'nonexistent_container_name', metadata)
+ self.assertRaises(
+ exceptions.NotFound,
+ self.container_client.create_update_or_delete_container_metadata,
+ 'nonexistent_container_name', create_update_metadata=metadata)
@decorators.attr(type=["negative"])
@decorators.idempotent_id('65387dbf-a0e2-4aac-9ddc-16eb3f1f69ba')
@@ -130,9 +136,10 @@
# Attempts to delete metadata using a nonexistent container name.
metadata = {'animal': 'penguin'}
- self.assertRaises(exceptions.NotFound,
- self.container_client.delete_container_metadata,
- 'nonexistent_container_name', metadata)
+ self.assertRaises(
+ exceptions.NotFound,
+ self.container_client.create_update_or_delete_container_metadata,
+ 'nonexistent_container_name', delete_metadata=metadata)
@decorators.attr(type=["negative"])
@decorators.idempotent_id('14331d21-1e81-420a-beea-19cb5e5207f5')
@@ -141,7 +148,7 @@
# that doesn't exist.
params = {'limit': 9999, 'format': 'json'}
self.assertRaises(exceptions.NotFound,
- self.container_client.list_container_contents,
+ self.container_client.list_container_objects,
'nonexistent_container_name', params)
@decorators.attr(type=["negative"])
@@ -155,7 +162,7 @@
self.assertHeaders(resp, 'Container', 'DELETE')
params = {'limit': 9999, 'format': 'json'}
self.assertRaises(exceptions.NotFound,
- self.container_client.list_container_contents,
+ self.container_client.list_container_objects,
container_name, params)
@decorators.attr(type=["negative"])
diff --git a/tempest/api/object_storage/test_container_staticweb.py b/tempest/api/object_storage/test_container_staticweb.py
index 943011d..1243b83 100644
--- a/tempest/api/object_storage/test_container_staticweb.py
+++ b/tempest/api/object_storage/test_container_staticweb.py
@@ -14,10 +14,10 @@
from tempest.api.object_storage import base
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class StaticWebTest(base.BaseObjectTest):
@@ -34,10 +34,10 @@
cls.object_name, cls.object_data = cls.create_object(
cls.container_name)
- cls.container_client.update_container_metadata(
+ cls.container_client.create_update_or_delete_container_metadata(
cls.container_name,
- metadata=headers_public_read_acl,
- metadata_prefix="X-Container-")
+ create_update_metadata=headers_public_read_acl,
+ create_update_metadata_prefix="X-Container-")
@classmethod
def resource_cleanup(cls):
@@ -45,12 +45,12 @@
super(StaticWebTest, cls).resource_cleanup()
@decorators.idempotent_id('c1f055ab-621d-4a6a-831f-846fcb578b8b')
- @test.requires_ext(extension='staticweb', service='object')
+ @utils.requires_ext(extension='staticweb', service='object')
def test_web_index(self):
headers = {'web-index': self.object_name}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# Maintain original headers, no auth added
self.account_client.auth_provider.set_alt_auth_data(
@@ -68,20 +68,21 @@
self.assertEqual(body, self.object_data)
# clean up before exiting
- self.container_client.update_container_metadata(self.container_name,
- {'web-index': ""})
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name,
+ create_update_metadata={'web-index': ""})
_, body = self.container_client.list_container_metadata(
self.container_name)
self.assertNotIn('x-container-meta-web-index', body)
@decorators.idempotent_id('941814cf-db9e-4b21-8112-2b6d0af10ee5')
- @test.requires_ext(extension='staticweb', service='object')
+ @utils.requires_ext(extension='staticweb', service='object')
def test_web_listing(self):
headers = {'web-listings': 'true'}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# test GET on http://account_url/container_name
# we should retrieve a listing of objects
@@ -100,21 +101,21 @@
self.assertIn(self.object_name, body.decode())
# clean up before exiting
- self.container_client.update_container_metadata(self.container_name,
- {'web-listings': ""})
-
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name,
+ create_update_metadata={'web-listings': ""})
_, body = self.container_client.list_container_metadata(
self.container_name)
self.assertNotIn('x-container-meta-web-listings', body)
@decorators.idempotent_id('bc37ec94-43c8-4990-842e-0e5e02fc8926')
- @test.requires_ext(extension='staticweb', service='object')
+ @utils.requires_ext(extension='staticweb', service='object')
def test_web_listing_css(self):
headers = {'web-listings': 'true',
'web-listings-css': 'listings.css'}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# Maintain original headers, no auth added
self.account_client.auth_provider.set_alt_auth_data(
@@ -131,13 +132,13 @@
self.assertIn(css, body.decode())
@decorators.idempotent_id('f18b4bef-212e-45e7-b3ca-59af3a465f82')
- @test.requires_ext(extension='staticweb', service='object')
+ @utils.requires_ext(extension='staticweb', service='object')
def test_web_error(self):
headers = {'web-listings': 'true',
'web-error': self.object_name}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# Create object to return when requested object not found
object_name_404 = "404" + self.object_name
diff --git a/tempest/api/object_storage/test_container_sync.py b/tempest/api/object_storage/test_container_sync.py
index 4cb1914..042d288 100644
--- a/tempest/api/object_storage/test_container_sync.py
+++ b/tempest/api/object_storage/test_container_sync.py
@@ -41,7 +41,6 @@
@classmethod
def setup_credentials(cls):
super(ContainerSyncTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_alt = cls.os_roles_operator_alt
@classmethod
@@ -103,7 +102,7 @@
while self.attempts > 0:
object_lists = []
for c_client, cont in zip(cont_client, self.containers):
- resp, object_list = c_client.list_container_contents(
+ resp, object_list = c_client.list_container_objects(
cont, params=params)
object_lists.append(dict(
(obj['name'], obj) for obj in object_list))
diff --git a/tempest/api/object_storage/test_container_sync_middleware.py b/tempest/api/object_storage/test_container_sync_middleware.py
index 9eae138..e77b079 100644
--- a/tempest/api/object_storage/test_container_sync_middleware.py
+++ b/tempest/api/object_storage/test_container_sync_middleware.py
@@ -13,9 +13,9 @@
# under the License.
from tempest.api.object_storage import test_container_sync
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -39,7 +39,7 @@
@decorators.attr(type='slow')
@decorators.idempotent_id('ea4645a1-d147-4976-82f7-e5a7a3065f80')
- @test.requires_ext(extension='container_sync', service='object')
+ @utils.requires_ext(extension='container_sync', service='object')
def test_container_synchronization(self):
def make_headers(cont, cont_client):
# tell first container to synchronize to a second
diff --git a/tempest/api/object_storage/test_crossdomain.py b/tempest/api/object_storage/test_crossdomain.py
index c47aa93..f61d9f8 100644
--- a/tempest/api/object_storage/test_crossdomain.py
+++ b/tempest/api/object_storage/test_crossdomain.py
@@ -14,8 +14,8 @@
from tempest.api.object_storage import base
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest.lib import decorators
-from tempest import test
class CrossdomainTest(base.BaseObjectTest):
@@ -38,7 +38,7 @@
self.account_client.skip_path()
@decorators.idempotent_id('d1b8b031-b622-4010-82f9-ff78a9e915c7')
- @test.requires_ext(extension='crossdomain', service='object')
+ @utils.requires_ext(extension='crossdomain', service='object')
def test_get_crossdomain_policy(self):
resp, body = self.account_client.get("crossdomain.xml", {})
body = body.decode()
diff --git a/tempest/api/object_storage/test_object_expiry.py b/tempest/api/object_storage/test_object_expiry.py
index ed1be90..86f7c8c 100644
--- a/tempest/api/object_storage/test_object_expiry.py
+++ b/tempest/api/object_storage/test_object_expiry.py
@@ -40,10 +40,10 @@
def _test_object_expiry(self, metadata):
# update object metadata
resp, _ = \
- self.object_client.update_object_metadata(self.container_name,
- self.object_name,
- metadata,
- metadata_prefix='')
+ self.object_client.create_or_update_object_metadata(
+ self.container_name,
+ self.object_name,
+ headers=metadata)
# verify object metadata
resp, _ = \
self.object_client.list_object_metadata(self.container_name,
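Object metadata updates follow the same pattern: a hedged sketch (illustration only) of create_or_update_object_metadata(), which takes the full header dict via headers= instead of the old metadata/metadata_prefix arguments. The helper name is hypothetical.

def example_object_metadata(object_client, container_name, object_name):
    """Illustration only; the helper name is hypothetical."""
    headers = {'X-Object-Meta-test-meta': 'Meta'}
    # POST the metadata headers on the object.
    resp, _ = object_client.create_or_update_object_metadata(
        container_name, object_name, headers=headers)
    return resp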
diff --git a/tempest/api/object_storage/test_object_formpost.py b/tempest/api/object_storage/test_object_formpost.py
index 3a2233a..cd834bf 100644
--- a/tempest/api/object_storage/test_object_formpost.py
+++ b/tempest/api/object_storage/test_object_formpost.py
@@ -19,9 +19,9 @@
from six.moves.urllib import parse as urlparse
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class ObjectFormPostTest(base.BaseObjectTest):
@@ -108,7 +108,7 @@
return body, content_type
@decorators.idempotent_id('80fac02b-6e54-4f7b-be0d-a965b5cbef76')
- @test.requires_ext(extension='formpost', service='object')
+ @utils.requires_ext(extension='formpost', service='object')
def test_post_object_using_form(self):
body, content_type = self.get_multipart_form()
diff --git a/tempest/api/object_storage/test_object_formpost_negative.py b/tempest/api/object_storage/test_object_formpost_negative.py
index c56d91a..df6a0fd 100644
--- a/tempest/api/object_storage/test_object_formpost_negative.py
+++ b/tempest/api/object_storage/test_object_formpost_negative.py
@@ -19,10 +19,10 @@
from six.moves.urllib import parse as urlparse
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class ObjectFormPostNegativeTest(base.BaseObjectTest):
@@ -109,7 +109,7 @@
return body, content_type
@decorators.idempotent_id('d3fb3c4d-e627-48ce-9379-a1631f21336d')
- @test.requires_ext(extension='formpost', service='object')
+ @utils.requires_ext(extension='formpost', service='object')
@decorators.attr(type=['negative'])
def test_post_object_using_form_expired(self):
body, content_type = self.get_multipart_form(expires=1)
@@ -126,7 +126,7 @@
self.assertIn('FormPost: Form Expired', str(exc))
@decorators.idempotent_id('b277257f-113c-4499-b8d1-5fead79f7360')
- @test.requires_ext(extension='formpost', service='object')
+ @utils.requires_ext(extension='formpost', service='object')
@decorators.attr(type=['negative'])
def test_post_object_using_form_invalid_signature(self):
self.key = "Wrong"
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index 556ca2f..acb578d 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -48,8 +48,9 @@
data_segments = [data + str(i) for i in range(segments)]
# uploading segments
for i in range(segments):
- self.object_client.create_object_segments(
- self.container_name, object_name, i, data_segments[i])
+ obj_name = "%s/%s" % (object_name, i)
+ self.object_client.create_object(
+ self.container_name, obj_name, data_segments[i])
return object_name, data_segments
@@ -184,12 +185,15 @@
# create object with transfer_encoding
object_name = data_utils.rand_name(name='TestObject')
data = data_utils.random_bytes(1024)
- _, _, resp_headers = self.object_client.put_object_with_chunk(
- container=self.container_name,
- name=object_name,
- contents=data_utils.chunkify(data, 512)
- )
- self.assertHeaders(resp_headers, 'Object', 'PUT')
+ headers = {'Transfer-Encoding': 'chunked'}
+ resp, _ = self.object_client.create_object(
+ self.container_name,
+ object_name,
+ data=data_utils.chunkify(data, 512),
+ headers=headers,
+ chunked=True)
+
+ self.assertHeaders(resp, 'Object', 'PUT')
# check uploaded content
_, body = self.object_client.get_object(self.container_name,
@@ -325,11 +329,10 @@
object_name, _ = self.create_object(self.container_name)
metadata = {'X-Object-Meta-test-meta': 'Meta'}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- metadata,
- metadata_prefix='')
+ headers=metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -350,11 +353,10 @@
metadata=create_metadata)
update_metadata = {'X-Remove-Object-Meta-test-meta1': 'Meta1'}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -375,11 +377,10 @@
update_metadata = {'X-Object-Meta-test-meta2': 'Meta2',
'X-Remove-Object-Meta-test-meta1': 'Meta1'}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -403,11 +404,10 @@
metadata=None)
object_prefix = '%s/%s' % (self.container_name, object_name)
update_metadata = {'X-Object-Manifest': object_prefix}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -422,11 +422,10 @@
object_name, _ = self.create_object(self.container_name)
update_metadata = {'X-Object-Meta-test-meta': ''}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -447,11 +446,10 @@
metadata=create_metadata)
update_metadata = {'X-Remove-Object-Meta-test-meta': ''}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -728,8 +726,13 @@
dst_object_name,
dst_data)
# copy source object to destination
- resp, _ = self.object_client.copy_object_in_same_container(
- self.container_name, src_object_name, dst_object_name)
+ headers = {}
+ headers['X-Copy-From'] = "%s/%s" % (str(self.container_name),
+ str(src_object_name))
+ resp, body = self.object_client.create_object(self.container_name,
+ dst_object_name,
+ data=None,
+ headers=headers)
self.assertHeaders(resp, 'Object', 'PUT')
# check data
@@ -749,8 +752,14 @@
# change the content type of the object
metadata = {'content-type': 'text/plain; charset=UTF-8'}
self.assertNotEqual(resp_tmp['content-type'], metadata['content-type'])
- resp, _ = self.object_client.copy_object_in_same_container(
- self.container_name, object_name, object_name, metadata)
+ headers = {}
+ headers['X-Copy-From'] = "%s/%s" % (str(self.container_name),
+ str(object_name))
+ resp, body = self.object_client.create_object(self.container_name,
+ object_name,
+ data=None,
+ metadata=metadata,
+ headers=headers)
self.assertHeaders(resp, 'Object', 'PUT')
# check the content type
@@ -786,12 +795,12 @@
def test_copy_object_across_containers(self):
# create a container to use as a source container
src_container_name = data_utils.rand_name(name='TestSourceContainer')
- self.container_client.create_container(src_container_name)
+ self.container_client.update_container(src_container_name)
self.containers.append(src_container_name)
# create a container to use as a destination container
dst_container_name = data_utils.rand_name(
name='TestDestinationContainer')
- self.container_client.create_container(dst_container_name)
+ self.container_client.update_container(dst_container_name)
self.containers.append(dst_container_name)
# create object in source container
object_name = data_utils.rand_name(name='Object')
@@ -801,16 +810,21 @@
# set object metadata
meta_key = data_utils.rand_name(name='test')
meta_value = data_utils.rand_name(name='MetaValue')
- orig_metadata = {meta_key: meta_value}
- resp, _ = self.object_client.update_object_metadata(src_container_name,
- object_name,
- orig_metadata)
+ orig_metadata = {'X-Object-Meta-' + meta_key: meta_value}
+ resp, _ = self.object_client.create_or_update_object_metadata(
+ src_container_name,
+ object_name,
+ headers=orig_metadata)
self.assertHeaders(resp, 'Object', 'POST')
# copy object from source container to destination container
- resp, _ = self.object_client.copy_object_across_containers(
- src_container_name, object_name, dst_container_name,
- object_name)
+ headers = {}
+ headers['X-Copy-From'] = "%s/%s" % (str(src_container_name),
+ str(object_name))
+ resp, body = self.object_client.create_object(dst_container_name,
+ object_name,
+ data=None,
+ headers=headers)
self.assertHeaders(resp, 'Object', 'PUT')
# check if object is present in destination container
@@ -897,8 +911,9 @@
data_segments = [data + str(i) for i in range(segments)]
# uploading segments
for i in range(segments):
- resp, _ = self.object_client.create_object_segments(
- self.container_name, object_name, i, data_segments[i])
+ obj_name = "%s/%s" % (object_name, i)
+ resp, _ = self.object_client.create_object(
+ self.container_name, obj_name, data_segments[i])
# creating a manifest file
metadata = {'X-Object-Manifest': '%s/%s/'
% (self.container_name, object_name)}
@@ -906,8 +921,8 @@
object_name, data='')
self.assertHeaders(resp, 'Object', 'PUT')
- resp, _ = self.object_client.update_object_metadata(
- self.container_name, object_name, metadata, metadata_prefix='')
+ resp, _ = self.object_client.create_or_update_object_metadata(
+ self.container_name, object_name, headers=metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -967,7 +982,6 @@
@classmethod
def setup_credentials(cls):
super(PublicObjectTest, cls).setup_credentials()
- cls.os = cls.os_roles_operator
cls.os_alt = cls.os_roles_operator_alt
@classmethod
@@ -978,7 +992,7 @@
def setUp(self):
super(PublicObjectTest, self).setUp()
self.container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(self.container_name)
+ self.container_client.update_container(self.container_name)
def tearDown(self):
self.delete_containers([self.container_name])
@@ -991,8 +1005,11 @@
# update container metadata to make it publicly readable
cont_headers = {'X-Container-Read': '.r:*,.rlistings'}
- resp_meta, body = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers, metadata_prefix='')
+ resp_meta, body = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name,
+ create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
@@ -1026,9 +1043,10 @@
# make container public-readable and access an object in it using
# another user's credentials
cont_headers = {'X-Container-Read': '.r:*,.rlistings'}
- resp_meta, body = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, body = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
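The object-storage hunks above replace several special-purpose helpers (create_object_segments, put_object_with_chunk, update_object_metadata, copy_object_*) with plain header-driven calls on the object client. A minimal sketch of the two recurring patterns, written against the test classes above and not itself part of the patch:

    # Object metadata is now a POST with fully prefixed Swift headers.
    resp, _ = self.object_client.create_or_update_object_metadata(
        self.container_name, object_name,
        headers={'X-Object-Meta-test-meta': 'Meta'})

    # A server-side copy is a PUT with an X-Copy-From header and no body.
    resp, _ = self.object_client.create_object(
        self.container_name, dst_object_name, data=None,
        headers={'X-Copy-From': '%s/%s' % (self.container_name,
                                           src_object_name)})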
diff --git a/tempest/api/object_storage/test_object_slo.py b/tempest/api/object_storage/test_object_slo.py
index 894e42d..c66776e 100644
--- a/tempest/api/object_storage/test_object_slo.py
+++ b/tempest/api/object_storage/test_object_slo.py
@@ -18,10 +18,10 @@
from tempest.api.object_storage import base
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest import test
# Each segment, except for the final one, must be at least 1 megabyte
MIN_SEGMENT_SIZE = 1024 * 1024
@@ -107,7 +107,7 @@
self.assertHeaders(resp, 'Object', method)
@decorators.idempotent_id('2c3f24a6-36e8-4711-9aa2-800ee1fc7b5b')
- @test.requires_ext(extension='slo', service='object')
+ @utils.requires_ext(extension='slo', service='object')
def test_upload_manifest(self):
# create static large object from multipart manifest
manifest = self._create_manifest()
@@ -122,7 +122,7 @@
self._assertHeadersSLO(resp, 'PUT')
@decorators.idempotent_id('e69ad766-e1aa-44a2-bdd2-bf62c09c1456')
- @test.requires_ext(extension='slo', service='object')
+ @utils.requires_ext(extension='slo', service='object')
def test_list_large_object_metadata(self):
# list static large object metadata using multipart manifest
object_name = self._create_large_object()
@@ -134,7 +134,7 @@
self._assertHeadersSLO(resp, 'HEAD')
@decorators.idempotent_id('49bc49bc-dd1b-4c0f-904e-d9f10b830ee8')
- @test.requires_ext(extension='slo', service='object')
+ @utils.requires_ext(extension='slo', service='object')
def test_retrieve_large_object(self):
# list static large object using multipart manifest
object_name = self._create_large_object()
@@ -149,7 +149,7 @@
self.assertEqual(body, sum_data)
@decorators.idempotent_id('87b6dfa1-abe9-404d-8bf0-6c3751e6aa77')
- @test.requires_ext(extension='slo', service='object')
+ @utils.requires_ext(extension='slo', service='object')
def test_delete_large_object(self):
# delete static large object using multipart manifest
object_name = self._create_large_object()
@@ -172,6 +172,6 @@
# Check only the format of common headers with custom matcher
self.assertThat(resp, custom_matchers.AreAllWellFormatted())
- resp, body = self.container_client.list_container_contents(
+ resp, body = self.container_client.list_container_objects(
self.container_name)
self.assertEqual(int(resp['x-container-object-count']), 0)
diff --git a/tempest/api/object_storage/test_object_temp_url.py b/tempest/api/object_storage/test_object_temp_url.py
index 91bc677..b99f93a 100644
--- a/tempest/api/object_storage/test_object_temp_url.py
+++ b/tempest/api/object_storage/test_object_temp_url.py
@@ -19,9 +19,9 @@
from six.moves.urllib import parse as urlparse
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
class ObjectTempUrlTest(base.BaseObjectTest):
@@ -88,7 +88,7 @@
return url
@decorators.idempotent_id('f91c96d4-1230-4bba-8eb9-84476d18d991')
- @test.requires_ext(extension='tempurl', service='object')
+ @utils.requires_ext(extension='tempurl', service='object')
def test_get_object_using_temp_url(self):
expires = self._get_expiry_date()
@@ -103,11 +103,11 @@
self.assertEqual(body, self.content)
# Testing a HEAD on this Temp URL
- resp, body = self.object_client.head(url)
+ resp, _ = self.object_client.head(url)
self.assertHeaders(resp, 'Object', 'HEAD')
@decorators.idempotent_id('671f9583-86bd-4128-a034-be282a68c5d8')
- @test.requires_ext(extension='tempurl', service='object')
+ @utils.requires_ext(extension='tempurl', service='object')
def test_get_object_using_temp_url_key_2(self):
key2 = 'Meta2-'
metadata = {'Temp-URL-Key-2': key2}
@@ -132,7 +132,7 @@
self.assertEqual(body, self.content)
@decorators.idempotent_id('9b08dade-3571-4152-8a4f-a4f2a873a735')
- @test.requires_ext(extension='tempurl', service='object')
+ @utils.requires_ext(extension='tempurl', service='object')
def test_put_object_using_temp_url(self):
new_data = data_utils.random_bytes(size=len(self.object_name))
@@ -142,11 +142,11 @@
expires, self.key)
# trying to put random data in the object using temp url
- resp, body = self.object_client.put(url, new_data, None)
+ resp, _ = self.object_client.put(url, new_data, None)
self.assertHeaders(resp, 'Object', 'PUT')
# Testing a HEAD on this Temp URL
- resp, body = self.object_client.head(url)
+ resp, _ = self.object_client.head(url)
self.assertHeaders(resp, 'Object', 'HEAD')
# Validate that the content of the object has been modified
@@ -158,7 +158,7 @@
self.assertEqual(body, new_data)
@decorators.idempotent_id('249a0111-5ad3-4534-86a7-1993d55f9185')
- @test.requires_ext(extension='tempurl', service='object')
+ @utils.requires_ext(extension='tempurl', service='object')
def test_head_object_using_temp_url(self):
expires = self._get_expiry_date()
@@ -172,7 +172,7 @@
self.assertHeaders(resp, 'Object', 'HEAD')
@decorators.idempotent_id('9d9cfd90-708b-465d-802c-e4a8090b823d')
- @test.requires_ext(extension='tempurl', service='object')
+ @utils.requires_ext(extension='tempurl', service='object')
def test_get_object_using_temp_url_with_inline_query_parameter(self):
expires = self._get_expiry_date()
diff --git a/tempest/api/object_storage/test_object_temp_url_negative.py b/tempest/api/object_storage/test_object_temp_url_negative.py
index c7d1fd5..17ae6c1 100644
--- a/tempest/api/object_storage/test_object_temp_url_negative.py
+++ b/tempest/api/object_storage/test_object_temp_url_negative.py
@@ -19,10 +19,10 @@
from six.moves.urllib import parse as urlparse
from tempest.api.object_storage import base
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class ObjectTempUrlNegativeTest(base.BaseObjectTest):
@@ -94,7 +94,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('5a583aca-c804-41ba-9d9a-e7be132bdf0b')
- @test.requires_ext(extension='tempurl', service='object')
+ @utils.requires_ext(extension='tempurl', service='object')
def test_get_object_after_expiration_time(self):
expires = self._get_expiry_date(1)
diff --git a/tempest/api/object_storage/test_object_version.py b/tempest/api/object_storage/test_object_version.py
index dc0d179..51b0a1d 100644
--- a/tempest/api/object_storage/test_object_version.py
+++ b/tempest/api/object_storage/test_object_version.py
@@ -51,18 +51,16 @@
def test_versioned_container(self):
# create container
vers_container_name = data_utils.rand_name(name='TestVersionContainer')
- resp, body = self.container_client.create_container(
- vers_container_name)
+ resp, _ = self.container_client.update_container(vers_container_name)
self.containers.append(vers_container_name)
self.assertHeaders(resp, 'Container', 'PUT')
self.assertContainer(vers_container_name, '0', '0', 'Missing Header')
base_container_name = data_utils.rand_name(name='TestBaseContainer')
headers = {'X-versions-Location': vers_container_name}
- resp, body = self.container_client.create_container(
+ resp, _ = self.container_client.update_container(
base_container_name,
- metadata=headers,
- metadata_prefix='')
+ **headers)
self.containers.append(base_container_name)
self.assertHeaders(resp, 'Container', 'PUT')
self.assertContainer(base_container_name, '0', '0',
@@ -76,20 +74,20 @@
data_2 = data_utils.random_bytes()
resp, _ = self.object_client.create_object(base_container_name,
object_name, data_2)
- resp, body = self.object_client.get_object(base_container_name,
- object_name)
+ _, body = self.object_client.get_object(base_container_name,
+ object_name)
self.assertEqual(body, data_2)
# delete object version 2
resp, _ = self.object_client.delete_object(base_container_name,
object_name)
self.assertContainer(base_container_name, '1', '1024',
vers_container_name)
- resp, body = self.object_client.get_object(base_container_name,
- object_name)
+ _, body = self.object_client.get_object(base_container_name,
+ object_name)
self.assertEqual(body, data_1)
# delete object version 1
- resp, _ = self.object_client.delete_object(base_container_name,
- object_name)
+ self.object_client.delete_object(base_container_name,
+ object_name)
# containers should be empty
self.assertContainer(base_container_name, '0', '0',
vers_container_name)
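The versioned-container test above relies on Swift's X-Versions-Location behaviour: once the base container points at a versions container, overwriting or deleting an object moves the previous copy into the versions container, which is what the assertContainer object counts track. A minimal sketch of wiring that up with the updated client call (illustrative only):

    headers = {'X-versions-Location': vers_container_name}
    resp, _ = self.container_client.update_container(base_container_name,
                                                     **headers)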
diff --git a/tempest/api/volume/admin/test_groups.py b/tempest/api/volume/admin/test_groups.py
index baea37b..6b53d85 100644
--- a/tempest/api/volume/admin/test_groups.py
+++ b/tempest/api/volume/admin/test_groups.py
@@ -23,10 +23,7 @@
CONF = config.CONF
-class GroupsTest(base.BaseVolumeAdminTest):
- _api_version = 3
- min_microversion = '3.14'
- max_microversion = 'latest'
+class BaseGroupsTest(base.BaseVolumeAdminTest):
def _delete_group(self, grp_id, delete_volumes=True):
self.groups_client.delete_group(grp_id, delete_volumes)
@@ -37,8 +34,7 @@
self.groups_client.wait_for_resource_deletion(grp_id)
def _delete_group_snapshot(self, group_snapshot_id, grp_id):
- self.group_snapshots_client.delete_group_snapshot(
- group_snapshot_id)
+ self.group_snapshots_client.delete_group_snapshot(group_snapshot_id)
vols = self.volumes_client.list_volumes(detail=True)['volumes']
snapshots = self.snapshots_client.list_snapshots(
detail=True)['snapshots']
@@ -65,6 +61,12 @@
self.assertEqual(grp_name, grp['name'])
return grp
+
+class GroupsTest(BaseGroupsTest):
+ _api_version = 3
+ min_microversion = '3.14'
+ max_microversion = 'latest'
+
@decorators.idempotent_id('4b111d28-b73d-4908-9bd2-03dc2992e4d4')
def test_group_create_show_list_delete(self):
# Create volume type
@@ -106,16 +108,16 @@
self.assertEqual(grp2_id, grp2['id'])
# Get all groups with detail
- grps = self.groups_client.list_groups(
- detail=True)['groups']
- filtered_grps = [g for g in grps if g['id'] in [grp1_id, grp2_id]]
- self.assertEqual(2, len(filtered_grps))
- for grp in filtered_grps:
- self.assertEqual([volume_type['id']], grp['volume_types'])
- self.assertEqual(group_type['id'], grp['group_type'])
+ grps = self.groups_client.list_groups(detail=True)['groups']
+ for grp_id in [grp1_id, grp2_id]:
+ filtered_grps = [g for g in grps if g['id'] == grp_id]
+ self.assertEqual(1, len(filtered_grps))
+ self.assertEqual([volume_type['id']],
+ filtered_grps[0]['volume_types'])
+ self.assertEqual(group_type['id'],
+ filtered_grps[0]['group_type'])
- vols = self.volumes_client.list_volumes(
- detail=True)['volumes']
+ vols = self.volumes_client.list_volumes(detail=True)['volumes']
filtered_vols = [v for v in vols if v['id'] in [vol1_id]]
self.assertEqual(1, len(filtered_vols))
for vol in filtered_vols:
@@ -126,8 +128,7 @@
self._delete_group(grp1_id)
# grp2 is empty so delete_volumes flag can be set to False
self._delete_group(grp2_id, delete_volumes=False)
- grps = self.groups_client.list_groups(
- detail=True)['groups']
+ grps = self.groups_client.list_groups(detail=True)['groups']
self.assertEmpty(grps)
@decorators.idempotent_id('1298e537-f1f0-47a3-a1dd-8adec8168897')
@@ -151,6 +152,9 @@
self.group_snapshots_client.create_group_snapshot(
group_id=grp['id'],
name=group_snapshot_name)['group_snapshot'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self._delete_group_snapshot,
+ group_snapshot['id'], grp['id'])
snapshots = self.snapshots_client.list_snapshots(
detail=True)['snapshots']
for snap in snapshots:
@@ -167,18 +171,20 @@
group_snapshot['id'])['group_snapshot']
self.assertEqual(group_snapshot_name, group_snapshot['name'])
- # Get all group snapshots with detail
- group_snapshots = (
- self.group_snapshots_client.list_group_snapshots(
- detail=True)['group_snapshots'])
+ # Get all group snapshots with details, check some detail-specific
+ # elements, and look for the created group snapshot
+ group_snapshots = (self.group_snapshots_client.list_group_snapshots(
+ detail=True)['group_snapshots'])
+ for grp_snapshot in group_snapshots:
+ self.assertIn('created_at', grp_snapshot)
+ self.assertIn('group_id', grp_snapshot)
self.assertIn((group_snapshot['name'], group_snapshot['id']),
[(m['name'], m['id']) for m in group_snapshots])
# Delete group snapshot
self._delete_group_snapshot(group_snapshot['id'], grp['id'])
- group_snapshots = (
- self.group_snapshots_client.list_group_snapshots(
- detail=True)['group_snapshots'])
+ group_snapshots = (self.group_snapshots_client.list_group_snapshots()
+ ['group_snapshots'])
self.assertEmpty(group_snapshots)
@decorators.idempotent_id('eff52c70-efc7-45ed-b47a-4ad675d09b81')
@@ -212,14 +218,12 @@
waiters.wait_for_volume_resource_status(
self.snapshots_client, snap['id'], 'available')
waiters.wait_for_volume_resource_status(
- self.group_snapshots_client,
- group_snapshot['id'], 'available')
+ self.group_snapshots_client, group_snapshot['id'], 'available')
# Create Group from Group snapshot
grp_name2 = data_utils.rand_name('Group_from_snap')
grp2 = self.groups_client.create_group_from_source(
- group_snapshot_id=group_snapshot['id'],
- name=grp_name2)['group']
+ group_snapshot_id=group_snapshot['id'], name=grp_name2)['group']
self.addCleanup(self._delete_group, grp2['id'])
self.assertEqual(grp_name2, grp2['name'])
vols = self.volumes_client.list_volumes(detail=True)['volumes']
@@ -250,8 +254,7 @@
source_group_id=grp['id'], name=grp_name2)['group']
self.addCleanup(self._delete_group, grp2['id'])
self.assertEqual(grp_name2, grp2['name'])
- vols = self.volumes_client.list_volumes(
- detail=True)['volumes']
+ vols = self.volumes_client.list_volumes(detail=True)['volumes']
for vol in vols:
if vol['group_id'] == grp2['id']:
waiters.wait_for_volume_resource_status(
@@ -296,12 +299,8 @@
self.assertEqual(new_desc, grp['description'])
# Get volumes in the group
- vols = self.volumes_client.list_volumes(
- detail=True)['volumes']
- grp_vols = []
- for vol in vols:
- if vol['group_id'] == grp['id']:
- grp_vols.append(vol)
+ vols = self.volumes_client.list_volumes(detail=True)['volumes']
+ grp_vols = [v for v in vols if v['group_id'] == grp['id']]
self.assertEqual(1, len(grp_vols))
# Add a volume to the group
@@ -313,10 +312,82 @@
self.groups_client, grp['id'], 'available')
# Get volumes in the group
- vols = self.volumes_client.list_volumes(
- detail=True)['volumes']
- grp_vols = []
- for vol in vols:
- if vol['group_id'] == grp['id']:
- grp_vols.append(vol)
+ vols = self.volumes_client.list_volumes(detail=True)['volumes']
+ grp_vols = [v for v in vols if v['group_id'] == grp['id']]
self.assertEqual(2, len(grp_vols))
+
+
+class GroupsV319Test(BaseGroupsTest):
+ _api_version = 3
+ min_microversion = '3.19'
+ max_microversion = 'latest'
+
+ @decorators.idempotent_id('3b42c9b9-c984-4444-816e-ca2e1ed30b40')
+ def test_reset_group_snapshot_status(self):
+ # Create volume type
+ volume_type = self.create_volume_type()
+
+ # Create group type
+ group_type = self.create_group_type()
+
+ # Create group
+ group = self._create_group(group_type, volume_type)
+
+ # Create volume
+ volume = self.create_volume(volume_type=volume_type['id'],
+ group_id=group['id'])
+
+ # Create group snapshot
+ group_snapshot_name = data_utils.rand_name('group_snapshot')
+ group_snapshot = (self.group_snapshots_client.create_group_snapshot(
+ group_id=group['id'], name=group_snapshot_name)['group_snapshot'])
+ self.addCleanup(self._delete_group_snapshot,
+ group_snapshot['id'], group['id'])
+ snapshots = self.snapshots_client.list_snapshots(
+ detail=True)['snapshots']
+ for snap in snapshots:
+ if volume['id'] == snap['volume_id']:
+ waiters.wait_for_volume_resource_status(
+ self.snapshots_client, snap['id'], 'available')
+ waiters.wait_for_volume_resource_status(
+ self.group_snapshots_client, group_snapshot['id'], 'available')
+
+ # Reset group snapshot status
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.group_snapshots_client,
+ group_snapshot['id'], 'available')
+ self.addCleanup(
+ self.admin_group_snapshots_client.reset_group_snapshot_status,
+ group_snapshot['id'], 'available')
+ for status in ['creating', 'available', 'error']:
+ self.admin_group_snapshots_client.reset_group_snapshot_status(
+ group_snapshot['id'], status)
+ waiters.wait_for_volume_resource_status(
+ self.group_snapshots_client, group_snapshot['id'], status)
+
+
+class GroupsV320Test(BaseGroupsTest):
+ _api_version = 3
+ min_microversion = '3.20'
+ max_microversion = 'latest'
+
+ @decorators.idempotent_id('b20c696b-0cbc-49a5-8b3a-b1fb9338f45c')
+ def test_reset_group_status(self):
+ # Create volume type
+ volume_type = self.create_volume_type()
+
+ # Create group type
+ group_type = self.create_group_type()
+
+ # Create group
+ group = self._create_group(group_type, volume_type)
+
+ # Reset group status
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.groups_client, group['id'], 'available')
+ self.addCleanup(self.admin_groups_client.reset_group_status,
+ group['id'], 'available')
+ for status in ['creating', 'available', 'error']:
+ self.admin_groups_client.reset_group_status(group['id'], status)
+ waiters.wait_for_volume_resource_status(
+ self.groups_client, group['id'], status)
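In the new reset-status tests the cleanup registration order is deliberate: addCleanup callbacks run last-in-first-out, so the waiter is registered before the reset call. On teardown the status is first reset back to 'available' and only then waited on, which keeps later class-level cleanup from racing a transient status. A minimal sketch of the pattern (illustrative, not part of the patch):

    # Registered first, runs last: wait for the group to settle.
    self.addCleanup(waiters.wait_for_volume_resource_status,
                    self.groups_client, group['id'], 'available')
    # Registered second, runs first: reset the status back to 'available'.
    self.addCleanup(self.admin_groups_client.reset_group_status,
                    group['id'], 'available')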
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index 2db8010..c0891e4 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -66,8 +66,7 @@
params = {'name': vol_name, 'volume_type': type_name,
'size': CONF.volume.volume_size}
- cls.volume = cls.admin_volume_client.create_volume(
- **params)['volume']
+ cls.volume = cls.create_volume(**params)
if with_prefix:
cls.volume_id_list_with_prefix.append(cls.volume['id'])
else:
@@ -76,21 +75,6 @@
waiters.wait_for_volume_resource_status(cls.admin_volume_client,
cls.volume['id'], 'available')
- @classmethod
- def resource_cleanup(cls):
- # volumes deletion
- vid_prefix = getattr(cls, 'volume_id_list_with_prefix', [])
- for volume_id in vid_prefix:
- cls.admin_volume_client.delete_volume(volume_id)
- cls.admin_volume_client.wait_for_resource_deletion(volume_id)
-
- vid_no_pre = getattr(cls, 'volume_id_list_without_prefix', [])
- for volume_id in vid_no_pre:
- cls.admin_volume_client.delete_volume(volume_id)
- cls.admin_volume_client.wait_for_resource_deletion(volume_id)
-
- super(VolumeMultiBackendTest, cls).resource_cleanup()
-
@decorators.idempotent_id('c1a41f3f-9dad-493e-9f09-3ff197d477cc')
def test_backend_name_reporting(self):
# get volume id which created by type without prefix
diff --git a/tempest/api/volume/admin/test_snapshots_actions.py b/tempest/api/volume/admin/test_snapshots_actions.py
index 471f39a..41849bc 100644
--- a/tempest/api/volume/admin/test_snapshots_actions.py
+++ b/tempest/api/volume/admin/test_snapshots_actions.py
@@ -14,6 +14,7 @@
# under the License.
from tempest.api.volume import base
+from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
@@ -43,6 +44,8 @@
snapshot_id = self.snapshot['id']
self.admin_snapshots_client.reset_snapshot_status(snapshot_id,
status)
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ snapshot_id, status)
super(SnapshotsActionsTest, self).tearDown()
def _create_reset_and_force_delete_temp_snapshot(self, status=None):
@@ -50,10 +53,11 @@
# and force delete temp snapshot
temp_snapshot = self.create_snapshot(volume_id=self.volume['id'])
if status:
- self.admin_snapshots_client.\
- reset_snapshot_status(temp_snapshot['id'], status)
- self.admin_snapshots_client.\
- force_delete_snapshot(temp_snapshot['id'])
+ self.admin_snapshots_client.reset_snapshot_status(
+ temp_snapshot['id'], status)
+ waiters.wait_for_volume_resource_status(
+ self.snapshots_client, temp_snapshot['id'], status)
+ self.admin_snapshots_client.force_delete_snapshot(temp_snapshot['id'])
self.snapshots_client.wait_for_resource_deletion(temp_snapshot['id'])
def _get_progress_alias(self):
@@ -63,18 +67,19 @@
def test_reset_snapshot_status(self):
# Reset snapshot status to creating
status = 'creating'
- self.admin_snapshots_client.\
- reset_snapshot_status(self.snapshot['id'], status)
- snapshot_get = self.admin_snapshots_client.show_snapshot(
- self.snapshot['id'])['snapshot']
- self.assertEqual(status, snapshot_get['status'])
+ self.admin_snapshots_client.reset_snapshot_status(
+ self.snapshot['id'], status)
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ self.snapshot['id'], status)
@decorators.idempotent_id('41288afd-d463-485e-8f6e-4eea159413eb')
def test_update_snapshot_status(self):
# Reset snapshot status to creating
status = 'creating'
- self.admin_snapshots_client.\
- reset_snapshot_status(self.snapshot['id'], status)
+ self.admin_snapshots_client.reset_snapshot_status(
+ self.snapshot['id'], status)
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ self.snapshot['id'], status)
# Update snapshot status to error
progress = '80%'
diff --git a/tempest/api/volume/admin/test_user_messages.py b/tempest/api/volume/admin/test_user_messages.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/admin/test_volume_hosts.py b/tempest/api/volume/admin/test_volume_hosts.py
index e4ec442..ce0cbd2 100644
--- a/tempest/api/volume/admin/test_volume_hosts.py
+++ b/tempest/api/volume/admin/test_volume_hosts.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import random
-
from tempest.api.volume import base
from tempest.lib import decorators
@@ -42,20 +40,25 @@
"The count of volume hosts is < 2, "
"response of list hosts is: %s" % hosts)
- # Note(jeremyZ): Host in volume is always presented in two formats:
- # <host-name> or <host-name>@<driver-name>. Since Mitaka is EOL,
- # both formats can be chosen for test.
- host_names = [host['host_name'] for host in hosts]
- self.assertNotEmpty(host_names, "No available volume host is found, "
- "all hosts that found are: %s" % hosts)
+        # Note(jeremyZ): The show host API shows volume usage info for the
+        # specified cinder-volume host. If the host does not run the
+        # cinder-volume service, or the cinder-volume service is disabled
+        # on the host, the show host API should fail (return code: 404).
+        # The cinder-volume host is presented as <host-name>@<driver-name>.
+ c_vol_hosts = [host['host_name'] for host in hosts
+ if (host['service'] == 'cinder-volume'
+ and host['service-state'] == 'enabled')]
+ self.assertNotEmpty(c_vol_hosts,
+ "No available cinder-volume host is found, "
+ "all hosts that found are: %s" % hosts)
- # Choose a random host to get and check its elements
- host_details = self.admin_hosts_client.show_host(
- random.choice(host_names))['host']
- self.assertNotEmpty(host_details)
+ # Check each cinder-volume host.
host_detail_keys = ['project', 'volume_count', 'snapshot_count',
'host', 'total_volume_gb', 'total_snapshot_gb']
- for detail in host_details:
- self.assertIn('resource', detail)
- for key in host_detail_keys:
- self.assertIn(key, detail['resource'])
+ for host in c_vol_hosts:
+ host_details = self.admin_hosts_client.show_host(host)['host']
+ self.assertNotEmpty(host_details)
+ for detail in host_details:
+ self.assertIn('resource', detail)
+ for key in host_detail_keys:
+ self.assertIn(key, detail['resource'])
diff --git a/tempest/api/volume/admin/test_volume_quota_classes.py b/tempest/api/volume/admin/test_volume_quota_classes.py
index f551575..75dca41 100644
--- a/tempest/api/volume/admin/test_volume_quota_classes.py
+++ b/tempest/api/volume/admin/test_volume_quota_classes.py
@@ -19,6 +19,7 @@
from testtools import matchers
from tempest.api.volume import base
+from tempest.common import identity
from tempest.common import tempest_fixtures as fixtures
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
@@ -92,9 +93,10 @@
# Verify a new project's default quotas.
project_name = data_utils.rand_name('quota_class_tenant')
description = data_utils.rand_name('desc_')
- project_id = self.identity_utils.create_project(
+ project_id = identity.identity_utils(self.os_admin).create_project(
name=project_name, description=description)['id']
- self.addCleanup(self.identity_utils.delete_project, project_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
+ project_id)
default_quotas = self.admin_quotas_client.show_default_quota_set(
project_id)['quota_set']
self.assertThat(default_quotas.items(),
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index 754104e..d56f1de 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -13,6 +13,7 @@
# under the License.
from tempest.api.volume import base
+from tempest.common import identity
from tempest.common import tempest_fixtures as fixtures
from tempest.common import waiters
from tempest.lib.common.utils import data_utils
@@ -100,7 +101,7 @@
volume = self.create_volume()
self.addCleanup(self.delete_volume,
- self.admin_volume_client, volume['id'])
+ self.volumes_client, volume['id'])
new_quota_usage = self.admin_quotas_client.show_quota_set(
self.demo_tenant_id, params={'usage': True})['quota_set']
@@ -117,10 +118,11 @@
# Admin can delete the resource quota set for a project
project_name = data_utils.rand_name('quota_tenant')
description = data_utils.rand_name('desc_')
- project = self.identity_utils.create_project(project_name,
- description=description)
+ project = identity.identity_utils(self.os_admin).create_project(
+ project_name, description=description)
project_id = project['id']
- self.addCleanup(self.identity_utils.delete_project, project_id)
+ self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
+ project_id)
quota_set_default = self.admin_quotas_client.show_default_quota_set(
project_id)['quota_set']
volume_default = quota_set_default['volumes']
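The quota tests above switch from the removed self.identity_utils property to the identity.identity_utils() helper, which wraps an admin credentials manager (here os_admin) for project create/delete. A minimal sketch of the call pattern, outside the patch:

    from tempest.common import identity

    project = identity.identity_utils(self.os_admin).create_project(
        project_name, description=description)
    self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
                    project['id'])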
diff --git a/tempest/api/volume/admin/test_volume_retype_with_migration.py b/tempest/api/volume/admin/test_volume_retype_with_migration.py
index 94d5299..f0b3a4f 100644
--- a/tempest/api/volume/admin/test_volume_retype_with_migration.py
+++ b/tempest/api/volume/admin/test_volume_retype_with_migration.py
@@ -85,9 +85,7 @@
volume_source = self.admin_volume_client.show_volume(
self.src_vol['id'])['volume']
- # TODO(erlon): change this to volumes_client client after Bug
- # #1657806 is fixed
- self.admin_volume_client.retype_volume(
+ self.volumes_client.retype_volume(
self.src_vol['id'],
new_type=self.dst_vol_type['name'],
migration_policy='on-demand')
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
index 4fa934e..fe249d6 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
@@ -61,7 +61,7 @@
@decorators.idempotent_id('a77dfda2-9100-448e-9076-ed1711f4bdfc')
def test_update_multiple_extra_spec(self):
# Should not update volume type extra specs with multiple specs as
- # body.
+ # body.
extra_spec = {"spec1": "val2", "spec2": "val1"}
self.assertRaises(
lib_exc.BadRequest,
@@ -73,7 +73,7 @@
@decorators.idempotent_id('49d5472c-a53d-4eab-a4d3-450c4db1c545')
def test_create_nonexistent_type_id(self):
# Should not create volume type extra spec for nonexistent volume
- # type id.
+ # type id.
extra_specs = {"spec2": "val1"}
self.assertRaises(
lib_exc.NotFound,
@@ -128,10 +128,10 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('c881797d-12ff-4f1a-b09d-9f6212159753')
- def test_get_nonexistent_extra_spec_id(self):
+ def test_get_nonexistent_extra_spec_name(self):
# Should not get volume type extra spec for nonexistent extra spec
- # id.
+ # name.
self.assertRaises(
lib_exc.NotFound,
self.admin_volume_types_client.show_volume_type_extra_specs,
- self.volume_type['id'], data_utils.rand_uuid())
+ self.volume_type['id'], "nonexistent_extra_spec_name")
diff --git a/tempest/api/volume/admin/test_volume_types_negative.py b/tempest/api/volume/admin/test_volume_types_negative.py
index 4cad52a..ae29049 100644
--- a/tempest/api/volume/admin/test_volume_types_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_negative.py
@@ -22,15 +22,6 @@
class VolumeTypesNegativeTest(base.BaseVolumeAdminTest):
@decorators.attr(type=['negative'])
- @decorators.idempotent_id('b48c98f2-e662-4885-9b71-032256906314')
- def test_create_with_nonexistent_volume_type(self):
- # Should not be able to create volume with nonexistent volume_type.
- params = {'name': data_utils.rand_uuid(),
- 'volume_type': data_utils.rand_uuid()}
- self.assertRaises(lib_exc.NotFound,
- self.volumes_client.create_volume, **params)
-
- @decorators.attr(type=['negative'])
@decorators.idempotent_id('878b4e57-faa2-4659-b0d1-ce740a06ae81')
def test_create_with_empty_name(self):
# Should not be able to create volume type with an empty name.
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index b81a477..8d09217 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -14,10 +14,10 @@
# under the License.
from tempest.api.volume import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -30,6 +30,8 @@
if status:
self.admin_volume_client.reset_volume_status(
temp_volume['id'], status=status)
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, temp_volume['id'], status)
self.admin_volume_client.force_delete_volume(temp_volume['id'])
self.volumes_client.wait_for_resource_deletion(temp_volume['id'])
@@ -37,14 +39,15 @@
def test_volume_reset_status(self):
# test volume reset status : available->error->available
volume = self.create_volume()
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.volumes_client, volume['id'], 'available')
self.addCleanup(self.admin_volume_client.reset_volume_status,
volume['id'], status='available')
for status in ['error', 'available', 'maintenance']:
self.admin_volume_client.reset_volume_status(
volume['id'], status=status)
- volume_get = self.admin_volume_client.show_volume(
- volume['id'])['volume']
- self.assertEqual(status, volume_get['status'])
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, volume['id'], status)
@decorators.idempotent_id('21737d5a-92f2-46d7-b009-a0cc0ee7a570')
def test_volume_force_delete_when_volume_is_creating(self):
@@ -67,7 +70,7 @@
self._create_reset_and_force_delete_temp_volume('maintenance')
@decorators.idempotent_id('d38285d9-929d-478f-96a5-00e66a115b81')
- @test.services('compute')
+ @utils.services('compute')
def test_force_detach_volume(self):
# Create a server and a volume
server_id = self.create_server()['id']
@@ -88,6 +91,8 @@
# Reset volume's status to error
self.admin_volume_client.reset_volume_status(volume_id, status='error')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume_id, 'error')
# Force detach volume
self.admin_volume_client.force_detach_volume(
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index afc3281..375aacb 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -99,8 +99,7 @@
'available')
# Verify Import Backup
- backups = self.admin_backups_client.list_backups(
- detail=True)['backups']
+ backups = self.admin_backups_client.list_backups()['backups']
self.assertIn(new_id, [b['id'] for b in backups])
# Restore backup
diff --git a/tempest/api/volume/admin/test_volumes_list.py b/tempest/api/volume/admin/test_volumes_list.py
index 9d98b7a..6ce4a85 100644
--- a/tempest/api/volume/admin/test_volumes_list.py
+++ b/tempest/api/volume/admin/test_volumes_list.py
@@ -45,9 +45,9 @@
# Create a volume in admin tenant
adm_vol = self.admin_volume_client.create_volume(
size=CONF.volume.volume_size)['volume']
+ self.addCleanup(self.admin_volume_client.delete_volume, adm_vol['id'])
waiters.wait_for_volume_resource_status(self.admin_volume_client,
adm_vol['id'], 'available')
- self.addCleanup(self.admin_volume_client.delete_volume, adm_vol['id'])
params = {'all_tenants': 1,
'project_id': self.volumes_client.tenant_id}
# Getting volume list from primary tenant using admin credentials
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 9142dc3..63ef85b 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -72,6 +72,11 @@
if cls._api_version == 3:
cls.backups_client = cls.os_primary.backups_v3_client
cls.volumes_client = cls.os_primary.volumes_v3_client
+ cls.messages_client = cls.os_primary.volume_v3_messages_client
+ cls.versions_client = cls.os_primary.volume_v3_versions_client
+ cls.groups_client = cls.os_primary.groups_v3_client
+ cls.group_snapshots_client = (
+ cls.os_primary.group_snapshots_v3_client)
else:
cls.backups_client = cls.os_primary.backups_v2_client
cls.volumes_client = cls.os_primary.volumes_v2_client
@@ -82,10 +87,6 @@
cls.availability_zone_client = (
cls.os_primary.volume_v2_availability_zone_client)
cls.volume_limits_client = cls.os_primary.volume_v2_limits_client
- cls.messages_client = cls.os_primary.volume_v3_messages_client
- cls.versions_client = cls.os_primary.volume_v3_versions_client
- cls.groups_client = cls.os_primary.groups_v3_client
- cls.group_snapshots_client = cls.os_primary.group_snapshots_v3_client
def setUp(self):
super(BaseVolumeTest, self).setUp()
@@ -259,6 +260,11 @@
cls.admin_volume_client = cls.os_admin.volumes_v2_client
if cls._api_version == 3:
cls.admin_volume_client = cls.os_admin.volumes_v3_client
+ cls.admin_groups_client = cls.os_admin.groups_v3_client
+ cls.admin_messages_client = cls.os_admin.volume_v3_messages_client
+ cls.admin_group_snapshots_client = \
+ cls.os_admin.group_snapshots_v3_client
+ cls.admin_group_types_client = cls.os_admin.group_types_v3_client
cls.admin_hosts_client = cls.os_admin.volume_hosts_v2_client
cls.admin_snapshot_manage_client = \
cls.os_admin.snapshot_manage_v2_client
@@ -274,11 +280,6 @@
cls.os_admin.volume_capabilities_v2_client
cls.admin_scheduler_stats_client = \
cls.os_admin.volume_scheduler_stats_v2_client
- cls.admin_messages_client = cls.os_admin.volume_v3_messages_client
- cls.admin_groups_client = cls.os_admin.groups_v3_client
- cls.admin_group_snapshots_client = \
- cls.os_admin.group_snapshots_v3_client
- cls.admin_group_types_client = cls.os_admin.group_types_v3_client
@classmethod
def resource_setup(cls):
diff --git a/tempest/api/volume/test_availability_zone.py b/tempest/api/volume/test_availability_zone.py
index d0a87db..0b6ee38 100644
--- a/tempest/api/volume/test_availability_zone.py
+++ b/tempest/api/volume/test_availability_zone.py
@@ -20,14 +20,10 @@
class AvailabilityZoneTestJSON(base.BaseVolumeTest):
"""Tests Availability Zone API List"""
- @classmethod
- def setup_clients(cls):
- super(AvailabilityZoneTestJSON, cls).setup_clients()
- cls.client = cls.availability_zone_client
-
@decorators.idempotent_id('01f1ae88-eba9-4c6b-a011-6f7ace06b725')
def test_get_availability_zone_list(self):
# List of availability zone
- availability_zone = (self.client.list_availability_zones()
- ['availabilityZoneInfo'])
+ availability_zone = (
+ self.availability_zone_client.list_availability_zones()
+ ['availabilityZoneInfo'])
self.assertNotEmpty(availability_zone)
diff --git a/tempest/api/volume/test_image_metadata.py b/tempest/api/volume/test_image_metadata.py
index 129981b..53b3acc 100644
--- a/tempest/api/volume/test_image_metadata.py
+++ b/tempest/api/volume/test_image_metadata.py
@@ -16,9 +16,9 @@
from testtools import matchers
from tempest.api.volume import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -39,7 +39,7 @@
cls.volume = cls.create_volume(imageRef=CONF.compute.image_ref)
@decorators.idempotent_id('03efff0b-5c75-4822-8f10-8789ac15b13e')
- @test.services('image')
+ @utils.services('image')
def test_update_show_delete_image_metadata(self):
# Update image metadata
image_metadata = {'image_id': '5137a025-3c5f-43c1-bc64-5f41270040a5',
diff --git a/tempest/api/volume/test_versions.py b/tempest/api/volume/test_versions.py
index 0083a3b..b4d48db 100644
--- a/tempest/api/volume/test_versions.py
+++ b/tempest/api/volume/test_versions.py
@@ -26,4 +26,4 @@
# NOTE: The version data is checked on service client side
# with JSON-Schema validation. It is enough to just call
# the API here.
- self.versions_client.list_versions()['versions']
+ self.versions_client.list_versions()
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index c4d10c3..be5638e 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -14,12 +14,12 @@
# under the License.
from tempest.api.volume import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -35,7 +35,7 @@
@decorators.idempotent_id('fff42874-7db5-4487-a8e1-ddda5fb5288d')
@decorators.attr(type='smoke')
- @test.services('compute')
+ @utils.services('compute')
def test_attach_detach_volume_to_instance(self):
# Create a server
server = self.create_server()
@@ -66,7 +66,7 @@
fetched_volume['bootable'])
@decorators.idempotent_id('9516a2c8-9135-488c-8dd6-5677a7e5f371')
- @test.services('compute')
+ @utils.services('compute')
def test_get_volume_attachment(self):
# Create a server
server = self.create_server()
@@ -94,7 +94,7 @@
self.assertEqual(self.volume['id'], attachment['volume_id'])
@decorators.idempotent_id('d8f1ca95-3d5b-44a3-b8ca-909691c9532d')
- @test.services('image')
+ @utils.services('image')
def test_volume_upload(self):
# NOTE(gfidente): the volume uploaded in Glance comes from setUpClass,
# it is shared with the other tests. After it is uploaded in Glance,
@@ -112,6 +112,10 @@
waiters.wait_for_volume_resource_status(self.volumes_client,
self.volume['id'], 'available')
+ image_info = self.images_client.show_image(image_id)
+ self.assertEqual(image_name, image_info['name'])
+ self.assertEqual(CONF.volume.disk_format, image_info['disk_format'])
+
@decorators.idempotent_id('92c4ef64-51b2-40c0-9f7e-4749fbaaba33')
def test_reserve_unreserve_volume(self):
# Mark volume as reserved.
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 1f91db6..1e240b8 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -17,11 +17,11 @@
from testtools import matchers
from tempest.api.volume import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -83,6 +83,9 @@
# Get all backups with detail
backups = self.backups_client.list_backups(
detail=True)['backups']
+ for backup_info in backups:
+ self.assertIn('created_at', backup_info)
+ self.assertIn('links', backup_info)
self.assertIn((backup['name'], backup['id']),
[(m['name'], m['id']) for m in backups])
@@ -97,7 +100,7 @@
matchers.ContainsAll(metadata.items()))
@decorators.idempotent_id('07af8f6d-80af-44c9-a5dc-c8427b1b62e6')
- @test.services('compute')
+ @utils.services('compute')
def test_backup_create_attached_volume(self):
"""Test backup create using force flag.
@@ -119,7 +122,7 @@
self.assertEqual(backup_name, backup['name'])
@decorators.idempotent_id('2a8ba340-dff2-4511-9db7-646f07156b15')
- @test.services('image')
+ @utils.services('image')
def test_bootable_volume_backup_and_restore(self):
# Create volume from image
img_uuid = CONF.compute.image_ref
diff --git a/tempest/api/volume/test_volumes_clone.py b/tempest/api/volume/test_volumes_clone.py
index 927bfa5..ea39a21 100644
--- a/tempest/api/volume/test_volumes_clone.py
+++ b/tempest/api/volume/test_volumes_clone.py
@@ -14,9 +14,9 @@
# under the License.
from tempest.api.volume import base
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -56,7 +56,7 @@
self._verify_volume_clone(src_vol, dst_vol, extra_size=1)
@decorators.idempotent_id('cbbcd7c6-5a6c-481a-97ac-ca55ab715d16')
- @test.services('image')
+ @utils.services('image')
def test_create_from_bootable_volume(self):
# Create volume from image
img_uuid = CONF.compute.image_ref
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 1eb76a0..de28a30 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -13,12 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.
+import time
+
import testtools
from tempest.api.volume import base
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
CONF = config.CONF
@@ -53,3 +56,129 @@
resized_volume = self.volumes_client.show_volume(
volume['id'])['volume']
self.assertEqual(extend_size, resized_volume['size'])
+
+
+class VolumesExtendAttachedTest(base.BaseVolumeTest):
+ """Tests extending the size of an attached volume."""
+
+ # We need admin credentials for getting instance action event details. By
+ # default a non-admin can list and show instance actions if they own the
+ # server instance, but since the event details can contain error messages
+ # and tracebacks, like an instance fault, those are not viewable by
+ # non-admins. This is obviously not a great user experience since the user
+ # may not know when the operation is actually complete. A microversion in
+    # the compute API will be added so that non-admins can see instance action
+    # events while continuing to hide the traceback field.
+ # TODO(mriedem): Change this to not rely on the admin user to get the event
+ # details once that microversion is available in Nova.
+ credentials = ['primary', 'admin']
+
+ _api_version = 3
+ # NOTE(mriedem): The minimum required volume API version is 3.42 and the
+ # minimum required compute API microversion is 2.51, but the compute call
+ # is implicit - Cinder calls Nova at that microversion, Tempest does not.
+ min_microversion = '3.42'
+
+ @classmethod
+ def setup_clients(cls):
+ super(VolumesExtendAttachedTest, cls).setup_clients()
+ cls.admin_servers_client = cls.os_admin.servers_client
+
+ def _find_extend_volume_instance_action(self, server_id):
+ actions = self.servers_client.list_instance_actions(
+ server_id)['instanceActions']
+ for action in actions:
+ if action['action'] == 'extend_volume':
+ return action
+
+ def _find_extend_volume_instance_action_finish_event(self, action):
+ # This has to be called by an admin client otherwise
+ # the events don't show up.
+ action = self.admin_servers_client.show_instance_action(
+ action['instance_uuid'], action['request_id'])['instanceAction']
+ for event in action['events']:
+ if (event['event'] == 'compute_extend_volume' and
+ event['finish_time']):
+ return event
+
+ @decorators.idempotent_id('301f5a30-1c6f-4ea0-be1a-91fd28d44354')
+ @testtools.skipUnless(CONF.volume_feature_enabled.extend_attached_volume,
+ "Attached volume extend is disabled.")
+ def test_extend_attached_volume(self):
+ """This is a happy path test which does the following:
+
+ * Create a volume at the configured volume_size.
+ * Create a server instance.
+ * Attach the volume to the server.
+ * Wait for the volume status to be "in-use".
+ * Extend the size of the volume and wait for the volume status to go
+ back to "in-use".
+ * Assert the volume size change is reflected in the volume API.
+ * Wait for the "compute_extend_volume" instance action event to show
+ up in the compute API with the success or failure status. We fail
+ if we timeout waiting for the instance action event to show up, or
+ if the action on the server fails.
+ """
+ # Create a test volume. Will be automatically cleaned up on teardown.
+ volume = self.create_volume()
+ # Create a test server. Will be automatically cleaned up on teardown.
+ server = self.create_server()
+ # Attach the volume to the server and wait for the volume status to be
+ # "in-use".
+ self.attach_volume(server['id'], volume['id'])
+ # Extend the size of the volume. If this is successful, the volume API
+ # will change the status on the volume to "extending" before doing an
+ # RPC cast to the volume manager on the backend. Note that we multiply
+ # the size of the volume since certain Cinder backends, e.g. ScaleIO,
+ # require multiples of 8GB.
+ extend_size = volume['size'] * 2
+ self.volumes_client.extend_volume(volume['id'], new_size=extend_size)
+ # The volume status should go back to in-use since it is still attached
+ # to the server instance.
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'in-use')
+ # Assert that the volume size has changed in the volume API.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+ self.assertEqual(extend_size, volume['size'])
+ # Now we wait for the "compute_extend_volume" instance action event
+ # to show up for the server instance. This is our indication that the
+ # asynchronous operation is complete on the compute side.
+ start_time = int(time.time())
+ timeout = self.servers_client.build_timeout
+ action = self._find_extend_volume_instance_action(server['id'])
+ while action is None and int(time.time()) - start_time < timeout:
+ time.sleep(self.servers_client.build_interval)
+ action = self._find_extend_volume_instance_action(server['id'])
+
+ if action is None:
+ msg = ("Timed out waiting to get 'extend_volume' instance action "
+ "record for server %(server)s after %(timeout)s seconds." %
+ {'server': server['id'], 'timeout': timeout})
+ raise lib_exc.TimeoutException(msg)
+
+ # Now that we found the extend_volume instance action, we can wait for
+ # the compute_extend_volume instance action event to show up to
+ # indicate the operation is complete.
+ start_time = int(time.time())
+ event = self._find_extend_volume_instance_action_finish_event(action)
+ while event is None and int(time.time()) - start_time < timeout:
+ time.sleep(self.servers_client.build_interval)
+ event = self._find_extend_volume_instance_action_finish_event(
+ action)
+
+ if event is None:
+ msg = ("Timed out waiting to get 'compute_extend_volume' instance "
+ "action event record for server %(server)s and request "
+ "%(request_id)s after %(timeout)s seconds." %
+ {'server': server['id'],
+ 'request_id': action['request_id'],
+ 'timeout': timeout})
+ raise lib_exc.TimeoutException(msg)
+
+ # Finally, assert that the action completed successfully.
+ self.assertTrue(
+ event['result'].lower() == 'success',
+ "Unexpected compute_extend_volume result '%(result)s' for request "
+ "%(request_id)s." %
+ {'result': event['result'],
+ 'request_id': action['request_id']})
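Both wait loops in the new attached-volume extend test share the same shape: poll a lookup every build_interval seconds and give up after build_timeout. The same shape can be expressed with tempest.lib's test_utils.call_until_true helper, which polls a callable until it returns truthy or the duration elapses; the sketch below is illustrative only, not what the patch does:

    from tempest.lib.common.utils import test_utils

    found = test_utils.call_until_true(
        lambda: self._find_extend_volume_instance_action(
            server['id']) is not None,
        self.servers_client.build_timeout,
        self.servers_client.build_interval)
    if not found:
        raise lib_exc.TimeoutException(
            "Timed out waiting for the extend_volume instance action")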
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
index ec9a0dd..71db95c 100644
--- a/tempest/api/volume/test_volumes_get.py
+++ b/tempest/api/volume/test_volumes_get.py
@@ -17,11 +17,11 @@
from testtools import matchers
from tempest.api.volume import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-from tempest import test
CONF = config.CONF
@@ -122,7 +122,7 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('54a01030-c7fc-447c-86ee-c1182beae638')
- @test.services('image')
+ @utils.services('image')
def test_volume_create_get_update_delete_from_image(self):
image = self.images_client.show_image(CONF.compute.image_ref)
min_disk = image['min_disk']
diff --git a/tempest/api/volume/test_volumes_list.py b/tempest/api/volume/test_volumes_list.py
index 8593d3a..b5f98ea 100644
--- a/tempest/api/volume/test_volumes_list.py
+++ b/tempest/api/volume/test_volumes_list.py
@@ -50,7 +50,7 @@
return
def str_vol(vol):
- return "%s:%s" % (vol['id'], vol[self.name])
+ return "%s:%s" % (vol['id'], vol['name'])
raw_msg = "Could not find volumes %s in expected list %s; fetched %s"
self.fail(raw_msg % ([str_vol(v) for v in missing_vols],
@@ -60,7 +60,6 @@
@classmethod
def resource_setup(cls):
super(VolumesListTestJSON, cls).resource_setup()
- cls.name = cls.VOLUME_FIELDS[1]
existing_volumes = cls.volumes_client.list_volumes()['volumes']
cls.volume_id_list = [vol['id'] for vol in existing_volumes]
@@ -117,22 +116,20 @@
@decorators.idempotent_id('a28e8da4-0b56-472f-87a8-0f4d3f819c02')
def test_volume_list_by_name(self):
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
- params = {self.name: volume[self.name]}
+ params = {'name': volume['name']}
fetched_vol = self.volumes_client.list_volumes(
params=params)['volumes']
self.assertEqual(1, len(fetched_vol), str(fetched_vol))
- self.assertEqual(fetched_vol[0][self.name],
- volume[self.name])
+ self.assertEqual(fetched_vol[0]['name'], volume['name'])
@decorators.idempotent_id('2de3a6d4-12aa-403b-a8f2-fdeb42a89623')
def test_volume_list_details_by_name(self):
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
- params = {self.name: volume[self.name]}
+ params = {'name': volume['name']}
fetched_vol = self.volumes_client.list_volumes(
detail=True, params=params)['volumes']
self.assertEqual(1, len(fetched_vol), str(fetched_vol))
- self.assertEqual(fetched_vol[0][self.name],
- volume[self.name])
+ self.assertEqual(fetched_vol[0]['name'], volume['name'])
@decorators.idempotent_id('39654e13-734c-4dab-95ce-7613bf8407ce')
def test_volumes_list_by_status(self):
@@ -213,7 +210,7 @@
def test_volume_list_param_display_name_and_status(self):
# Test to list volume when display name and status param is given
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
- params = {self.name: volume[self.name],
+ params = {'name': volume['name'],
'status': 'available'}
self._list_by_param_value_and_assert(params)
@@ -221,7 +218,7 @@
def test_volume_list_with_detail_param_display_name_and_status(self):
# Test to list volume when name and status param is given
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
- params = {self.name: volume[self.name],
+ params = {'name': volume['name'],
'status': 'available'}
self._list_by_param_value_and_assert(params, with_detail=True)
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index 4e19e62..f139283 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -16,13 +16,13 @@
import six
from tempest.api.volume import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -35,7 +35,6 @@
# Create a test shared instance and volume for attach/detach tests
cls.volume = cls.create_volume()
- cls.mountpoint = "/dev/vdc"
def create_image(self):
# Create image
@@ -168,7 +167,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('f5e56b0a-5d02-43c1-a2a7-c9b792c2e3f6')
- @test.services('compute')
+ @utils.services('compute')
def test_attach_volumes_with_nonexistent_volume_id(self):
server = self.create_server()
@@ -176,7 +175,7 @@
self.volumes_client.attach_volume,
data_utils.rand_uuid(),
instance_uuid=server['id'],
- mountpoint=self.mountpoint)
+ mountpoint="/dev/vdc")
@decorators.attr(type=['negative'])
@decorators.idempotent_id('9f9c24e4-011d-46b5-b992-952140ce237a')
@@ -292,7 +291,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('5b810c91-0ad1-47ce-aee8-615f789be78f')
- @test.services('image')
+ @utils.services('image')
def test_create_volume_from_image_with_decreasing_size(self):
# Create image
image = self.create_image()
@@ -307,7 +306,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('d15e7f35-2cfc-48c8-9418-c8223a89bcbb')
- @test.services('image')
+ @utils.services('image')
def test_create_volume_from_deactivated_image(self):
# Create image
image = self.create_image()
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index e68ab7e..dcd3518 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -14,11 +14,11 @@
from testtools import matchers
from tempest.api.volume import base
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
CONF = config.CONF
@@ -37,7 +37,7 @@
cls.volume_origin = cls.create_volume()
@decorators.idempotent_id('8567b54c-4455-446d-a1cf-651ddeaa3ff2')
- @test.services('compute')
+ @utils.services('compute')
def test_snapshot_create_delete_with_volume_in_use(self):
# Create a test instance
server = self.create_server()
@@ -59,7 +59,7 @@
self.delete_snapshot(snapshot2['id'])
@decorators.idempotent_id('5210a1de-85a0-11e6-bb21-641c676a5d61')
- @test.services('compute')
+ @utils.services('compute')
def test_snapshot_create_offline_delete_online(self):
# Create a snapshot while it is not attached
diff --git a/tempest/clients.py b/tempest/clients.py
index 5f02746..ca205c8 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -17,7 +17,6 @@
from tempest.lib import auth
from tempest.lib import exceptions as lib_exc
from tempest.lib.services import clients
-from tempest.services import object_storage
CONF = config.CONF
@@ -283,21 +282,11 @@
self.snapshots_client_latest = self.snapshots_v3_client
def _set_object_storage_clients(self):
- # NOTE(andreaf) Load configuration from config. Once object storage
- # is in lib, configuration will be pulled directly from the registry
- # and this will not be required anymore.
- params = config.service_client_config('object-storage')
-
- self.account_client = object_storage.AccountClient(self.auth_provider,
- **params)
- self.bulk_client = object_storage.BulkMiddlewareClient(
- self.auth_provider, **params)
- self.capabilities_client = object_storage.CapabilitiesClient(
- self.auth_provider, **params)
- self.container_client = object_storage.ContainerClient(
- self.auth_provider, **params)
- self.object_client = object_storage.ObjectClient(self.auth_provider,
- **params)
+ self.account_client = self.object_storage.AccountClient()
+ self.bulk_client = self.object_storage.BulkMiddlewareClient()
+ self.capabilities_client = self.object_storage.CapabilitiesClient()
+ self.container_client = self.object_storage.ContainerClient()
+ self.object_client = self.object_storage.ObjectClient()
def get_auth_provider_class(credentials):
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
index ac73cbf..a128b3f 100644
--- a/tempest/cmd/cleanup.py
+++ b/tempest/cmd/cleanup.py
@@ -104,7 +104,8 @@
def init(self, parsed_args):
cleanup_service.init_conf()
self.options = parsed_args
- self.admin_mgr = credentials.AdminManager()
+ self.admin_mgr = clients.Manager(
+ credentials.get_configured_admin_credentials())
self.dry_run_data = {}
self.json_data = {}
@@ -263,9 +264,10 @@
def _remove_admin_role(self, tenant_id):
LOG.debug("Remove admin user role for tenant: %s", tenant_id)
- # Must initialize AdminManager for each user role
+ # Must initialize Admin Manager for each user role
# Otherwise authentication exception is thrown, weird
- id_cl = credentials.AdminManager().identity_client
+ id_cl = clients.Manager(
+ credentials.get_configured_admin_credentials()).identity_client
if (self._tenant_exists(tenant_id)):
try:
id_cl.delete_role_from_user_on_project(tenant_id,
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index 11cd4bb..c75bc85 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -16,11 +16,12 @@
from oslo_log import log as logging
+from tempest import clients
from tempest.common import credentials_factory as credentials
from tempest.common import identity
+from tempest.common import utils
from tempest.common.utils import net_info
from tempest import config
-from tempest import test
LOG = logging.getLogger(__name__)
CONF = config.CONF
@@ -78,7 +79,8 @@
def _get_network_id(net_name, project_name):
- am = credentials.AdminManager()
+ am = clients.Manager(
+ credentials.get_configured_admin_credentials())
net_cl = am.networks_client
tn_cl = am.tenants_client
@@ -962,7 +964,7 @@
tenant_services.append(StackService)
if IS_NEUTRON:
tenant_services.append(NetworkFloatingIpService)
- if test.is_extension_enabled('metering', 'network'):
+ if utils.is_extension_enabled('metering', 'network'):
tenant_services.append(NetworkMeteringLabelRuleService)
tenant_services.append(NetworkMeteringLabelService)
tenant_services.append(NetworkRouterService)
diff --git a/tempest/cmd/run.py b/tempest/cmd/run.py
index 350dd0b..f07f197 100644
--- a/tempest/cmd/run.py
+++ b/tempest/cmd/run.py
@@ -47,6 +47,12 @@
You can also use the **--list-tests** option in conjunction with selection
arguments to list which tests will be run.
+You can also use the **--load-list** option, which takes the path to a file
+listing one test per line (no regexes), in the same format as the output of
+the **--list-tests** option. You can select the target tests by removing
+unneeded lines from a list file generated with **--list-tests**.
+
Test Execution
==============
There are several options to control how the tests are executed. By default
@@ -101,6 +107,7 @@
import six
from testrepository.commands import run_argv
+from tempest import clients
from tempest.cmd import cleanup_service
from tempest.cmd import init
from tempest.cmd import workspace
@@ -216,7 +223,8 @@
print("Initializing saved state.")
data = {}
self.global_services = cleanup_service.get_global_cleanup_services()
- self.admin_mgr = credentials.AdminManager()
+ self.admin_mgr = clients.Manager(
+ credentials.get_configured_admin_credentials())
admin_mgr = self.admin_mgr
kwargs = {'data': data,
'is_dry_run': False,
@@ -265,6 +273,12 @@
help='Path to a blacklist file, this file '
'contains a separate regex exclude on '
'each newline')
+ list_selector.add_argument('--load-list', '--load_list',
+ help='Path to a non-regex whitelist file, '
+ 'this file contains a separate test '
+ 'on each newline. This command '
+ 'supports files created by the tempest '
+ 'run ``--list-tests`` command')
# list only args
parser.add_argument('--list-tests', '-l', action='store_true',
help='List tests',
@@ -316,6 +330,8 @@
options.append("--parallel")
if parsed_args.concurrency:
options.append("--concurrency=%s" % parsed_args.concurrency)
+ if parsed_args.load_list:
+ options.append("--load-list=%s" % parsed_args.load_list)
return options
def _run(self, regex, options):
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index a72493d..3fff9af 100644
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -76,7 +76,6 @@
from tempest import config
import tempest.lib.common.http
from tempest.lib import exceptions as lib_exc
-from tempest.services import object_storage
CONF = config.CONF
@@ -236,11 +235,10 @@
def get_extension_client(os, service):
- params = config.service_client_config('object-storage')
extensions_client = {
'nova': os.compute.ExtensionsClient(),
'neutron': os.network.ExtensionsClient(),
- 'swift': object_storage.CapabilitiesClient(os.auth_provider, **params),
+ 'swift': os.object_storage.CapabilitiesClient(),
# NOTE: Cinder v3 API is current and v2 and v1 are deprecated.
# V3 extension API is the same as v2, so we reuse the v2 client
# for v3 API also.
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index 47196ec..86fe3f5 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -128,6 +128,8 @@
"this stage.")
raise ValueError(msg)
+ LOG.debug("Provisioning test server with validation resources %s",
+ validation_resources)
if 'security_groups' in kwargs:
kwargs['security_groups'].append(
{'name': validation_resources['security_group']['name']})
@@ -198,9 +200,27 @@
body = rest_client.ResponseBody(body.response, body['server'])
servers = [body]
- # The name of the method to associate a floating IP to as server is too
- # long for PEP8 compliance so:
- assoc = clients.compute_floating_ips_client.associate_floating_ip_to_server
+ def _setup_validation_fip():
+ if CONF.service_available.neutron:
+ ifaces = clients.interfaces_client.list_interfaces(server['id'])
+ validation_port = None
+ for iface in ifaces['interfaceAttachments']:
+ if iface['net_id'] == tenant_network['id']:
+ validation_port = iface['port_id']
+ break
+ if not validation_port:
+ # NOTE(artom) This will get caught by the catch-all clause in
+ # the wait_until loop below
+ raise ValueError('Unable to setup floating IP for validation: '
+ 'port not found on tenant network')
+ clients.floating_ips_client.update_floatingip(
+ validation_resources['floating_ip']['id'],
+ port_id=validation_port)
+ else:
+ fip_client = clients.compute_floating_ips_client
+ fip_client.associate_floating_ip_to_server(
+ floating_ip=validation_resources['floating_ip']['ip'],
+ server_id=servers[0]['id'])
if wait_until:
for server in servers:
@@ -212,9 +232,7 @@
# creation will fail with the condition above (l.58).
if CONF.validation.run_validation and validatable:
if CONF.validation.connect_method == 'floating':
- assoc(floating_ip=validation_resources[
- 'floating_ip']['ip'],
- server_id=servers[0]['id'])
+ _setup_validation_fip()
except Exception:
with excutils.save_and_reraise_exception():
diff --git a/tempest/common/identity.py b/tempest/common/identity.py
index 469defe..6e496d3 100644
--- a/tempest/common/identity.py
+++ b/tempest/common/identity.py
@@ -13,8 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
+from tempest import config
+from tempest.lib.common import cred_client
from tempest.lib import exceptions as lib_exc
+CONF = config.CONF
+
def get_tenant_by_name(client, tenant_name):
tenants = client.list_tenants()['tenants']
@@ -30,3 +34,37 @@
if user['name'] == username:
return user
raise lib_exc.NotFound('No such user(%s) in %s' % (username, users))
+
+
+def identity_utils(clients):
+ """A client that abstracts v2 and v3 identity operations.
+
+ This can be used for creating and tearing down projects in tests. It
+ should not be used for testing identity features.
+
+ :param clients: a client manager.
+ :return: a creds client, as returned by ``cred_client.get_creds_client``
+ """
+ if CONF.identity.auth_version == 'v2':
+ client = clients.identity_client
+ users_client = clients.users_client
+ project_client = clients.tenants_client
+ roles_client = clients.roles_client
+ domains_client = None
+ else:
+ client = clients.identity_v3_client
+ users_client = clients.users_v3_client
+ project_client = clients.projects_client
+ roles_client = clients.roles_v3_client
+ domains_client = clients.domains_client
+
+ try:
+ domain = client.auth_provider.credentials.project_domain_name
+ except AttributeError:
+ domain = CONF.auth.default_credentials_domain_name
+
+ return cred_client.get_creds_client(client, project_client,
+ users_client,
+ roles_client,
+ domains_client,
+ project_domain_name=domain)
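
A minimal usage sketch for the new ``identity_utils`` helper, assuming admin
credentials are configured; the project name below is hypothetical::

    from tempest import clients
    from tempest.common import credentials_factory as credentials
    from tempest.common import identity

    admin_mgr = clients.Manager(
        credentials.get_configured_admin_credentials())
    # The returned creds client hides whether the v2 tenants client or the
    # v3 projects client is used underneath.
    creds_client = identity.identity_utils(admin_mgr)
    project = creds_client.create_project(
        name='scratch-project', description='temporary test project')
    creds_client.delete_project(project['id'])
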
diff --git a/tempest/common/utils/__init__.py b/tempest/common/utils/__init__.py
index 84e31d0..5a86caa 100644
--- a/tempest/common/utils/__init__.py
+++ b/tempest/common/utils/__init__.py
@@ -12,10 +12,16 @@
# License for the specific language governing permissions and limitations
# under the License.
+import functools
from functools import partial
+import testtools
+
from tempest import config
+from tempest.exceptions import InvalidServiceTag
from tempest.lib.common.utils import data_utils as lib_data_utils
+from tempest.lib import decorators
+
CONF = config.CONF
@@ -36,3 +42,89 @@
return attr_obj
data_utils = DataUtils()
+
+
+def get_service_list():
+ service_list = {
+ 'compute': CONF.service_available.nova,
+ 'image': CONF.service_available.glance,
+ 'volume': CONF.service_available.cinder,
+ # NOTE(masayukig): We have two network services which are neutron and
+ # nova-network. And we have no way to know whether nova-network is
+ # available or not. After the pending removal of nova-network from
+ # nova, we can treat the network/neutron case in the same manner as
+ # the other services.
+ 'network': True,
+ # NOTE(masayukig): Tempest tests always require the identity service.
+ # So we should set this True here.
+ 'identity': True,
+ 'object_storage': CONF.service_available.swift,
+ }
+ return service_list
+
+
+def services(*args):
+ """A decorator used to set an attr for each service used in a test case
+
+ This decorator applies a testtools attr for each service that gets
+ exercised by a test case.
+ """
+ def decorator(f):
+ known_services = get_service_list()
+
+ for service in args:
+ if service not in known_services:
+ raise InvalidServiceTag('%s is not a valid service' % service)
+ decorators.attr(type=list(args))(f)
+
+ @functools.wraps(f)
+ def wrapper(self, *func_args, **func_kwargs):
+ service_list = get_service_list()
+
+ for service in args:
+ if not service_list[service]:
+ msg = 'Skipped because the %s service is not available' % (
+ service)
+ raise testtools.TestCase.skipException(msg)
+ return f(self, *func_args, **func_kwargs)
+ return wrapper
+ return decorator
+
+
+def requires_ext(**kwargs):
+ """A decorator to skip tests if an extension is not enabled
+
+ @param extension
+ @param service
+ """
+ def decorator(func):
+ @functools.wraps(func)
+ def wrapper(*func_args, **func_kwargs):
+ if not is_extension_enabled(kwargs['extension'],
+ kwargs['service']):
+ msg = "Skipped because %s extension: %s is not enabled" % (
+ kwargs['service'], kwargs['extension'])
+ raise testtools.TestCase.skipException(msg)
+ return func(*func_args, **func_kwargs)
+ return wrapper
+ return decorator
+
+
+def is_extension_enabled(extension_name, service):
+ """A function that will check the list of enabled extensions from config
+
+ """
+ config_dict = {
+ 'compute': CONF.compute_feature_enabled.api_extensions,
+ 'volume': CONF.volume_feature_enabled.api_extensions,
+ 'network': CONF.network_feature_enabled.api_extensions,
+ 'object': CONF.object_storage_feature_enabled.discoverable_apis,
+ 'identity': CONF.identity_feature_enabled.api_extensions
+ }
+ if not config_dict[service]:
+ return False
+ if config_dict[service][0] == 'all':
+ return True
+ if extension_name in config_dict[service]:
+ return True
+ return False
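
For reference, a hedged sketch of how the relocated helpers are meant to be
consumed in a test module; the class, test and extension names below are
illustrative only::

    from tempest.common import utils
    from tempest import test


    class ExampleVolumeTest(test.BaseTestCase):

        @utils.services('compute', 'volume')
        def test_attach_volume(self):
            # Tagged with 'compute' and 'volume' attrs; skipped when nova
            # or cinder is not available in the deployment under test.
            pass

        @utils.requires_ext(extension='os-volumes', service='compute')
        def test_compute_volume_extension(self):
            # Skipped unless the extension appears (or 'all' is set) in
            # [compute-feature-enabled]/api_extensions.
            pass


    # is_extension_enabled() can also be called directly, as
    # cleanup_service does above for the neutron 'metering' extension:
    utils.is_extension_enabled('metering', 'network')
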
diff --git a/tempest/common/validation_resources.py b/tempest/common/validation_resources.py
deleted file mode 100644
index 84f1c9d..0000000
--- a/tempest/common/validation_resources.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from oslo_log import log as logging
-
-from tempest.lib.common.utils import data_utils
-from tempest.lib import exceptions as lib_exc
-
-LOG = logging.getLogger(__name__)
-
-
-def _create_neutron_sec_group_rules(os, sec_group, ethertype='IPv4'):
- sec_group_rules_client = os.security_group_rules_client
-
- sec_group_rules_client.create_security_group_rule(
- security_group_id=sec_group['id'],
- protocol='tcp',
- ethertype=ethertype,
- port_range_min=22,
- port_range_max=22,
- direction='ingress')
- sec_group_rules_client.create_security_group_rule(
- security_group_id=sec_group['id'],
- protocol='icmp',
- ethertype=ethertype,
- direction='ingress')
-
-
-def create_ssh_security_group(os, add_rule=False, ethertype='IPv4',
- use_neutron=True):
- security_groups_client = os.compute_security_groups_client
- security_group_rules_client = os.compute_security_group_rules_client
- sg_name = data_utils.rand_name('securitygroup-')
- sg_description = data_utils.rand_name('description-')
- security_group = security_groups_client.create_security_group(
- name=sg_name, description=sg_description)['security_group']
- if add_rule:
- if use_neutron:
- _create_neutron_sec_group_rules(os, security_group,
- ethertype=ethertype)
- else:
- security_group_rules_client.create_security_group_rule(
- parent_group_id=security_group['id'], ip_protocol='tcp',
- from_port=22, to_port=22)
- security_group_rules_client.create_security_group_rule(
- parent_group_id=security_group['id'], ip_protocol='icmp',
- from_port=-1, to_port=-1)
- LOG.debug("SSH Validation resource security group with tcp and icmp "
- "rules %s created", sg_name)
- return security_group
-
-
-def create_validation_resources(os, validation_resources=None,
- ethertype='IPv4', use_neutron=True,
- floating_network_id=None,
- floating_network_name=None):
- # Create and Return the validation resources required to validate a VM
- validation_data = {}
- if validation_resources:
- if validation_resources['keypair']:
- keypair_name = data_utils.rand_name('keypair')
- validation_data.update(os.keypairs_client.create_keypair(
- name=keypair_name))
- LOG.debug("Validation resource key %s created", keypair_name)
- add_rule = False
- if validation_resources['security_group']:
- if validation_resources['security_group_rules']:
- add_rule = True
- validation_data['security_group'] = \
- create_ssh_security_group(
- os, add_rule, use_neutron=use_neutron, ethertype=ethertype)
- if validation_resources['floating_ip']:
- if use_neutron:
- floatingip = os.floating_ips_client.create_floatingip(
- floating_network_id=floating_network_id)
- # validation_resources['floating_ip'] has historically looked
- # like a compute API POST /os-floating-ips response, so we need
- # to mangle it a bit for a Neutron response with different
- # fields.
- validation_data['floating_ip'] = floatingip['floatingip']
- validation_data['floating_ip']['ip'] = (
- floatingip['floatingip']['floating_ip_address'])
- else:
- # NOTE(mriedem): The os-floating-ips compute API was deprecated
- # in the 2.36 microversion. Any tests for CRUD operations on
- # floating IPs using the compute API should be capped at 2.35.
- validation_data.update(
- os.compute_floating_ips_client.create_floating_ip(
- pool=floating_network_name))
- return validation_data
-
-
-def clear_validation_resources(os, validation_data=None):
- # Cleanup the vm validation resources
- has_exception = None
- if validation_data:
- if 'keypair' in validation_data:
- keypair_client = os.keypairs_client
- keypair_name = validation_data['keypair']['name']
- try:
- keypair_client.delete_keypair(keypair_name)
- except lib_exc.NotFound:
- LOG.warning(
- "Keypair %s is not found when attempting to delete",
- keypair_name
- )
- except Exception as exc:
- LOG.exception('Exception raised while deleting key %s',
- keypair_name)
- if not has_exception:
- has_exception = exc
- if 'security_group' in validation_data:
- security_group_client = os.compute_security_groups_client
- sec_id = validation_data['security_group']['id']
- try:
- security_group_client.delete_security_group(sec_id)
- security_group_client.wait_for_resource_deletion(sec_id)
- except lib_exc.NotFound:
- LOG.warning("Security group %s is not found when attempting "
- "to delete", sec_id)
- except lib_exc.Conflict as exc:
- LOG.exception('Conflict while deleting security '
- 'group %s VM might not be deleted', sec_id)
- if not has_exception:
- has_exception = exc
- except Exception as exc:
- LOG.exception('Exception raised while deleting security '
- 'group %s', sec_id)
- if not has_exception:
- has_exception = exc
- if 'floating_ip' in validation_data:
- floating_client = os.compute_floating_ips_client
- fip_id = validation_data['floating_ip']['id']
- try:
- floating_client.delete_floating_ip(fip_id)
- except lib_exc.NotFound:
- LOG.warning('Floating ip %s not found while attempting to '
- 'delete', fip_id)
- except Exception as exc:
- LOG.exception('Exception raised while deleting ip %s', fip_id)
- if not has_exception:
- has_exception = exc
- if has_exception:
- raise has_exception
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index f4c2866..10afee0 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -211,6 +211,8 @@
(resource_name, resource_id, statuses, resource_status,
client.build_timeout))
raise lib_exc.TimeoutException(message)
+ LOG.info('%s %s reached %s after waiting for %f seconds',
+ resource_name, resource_id, statuses, time.time() - start)
def wait_for_volume_retype(client, volume_id, new_volume_type):
diff --git a/tempest/config.py b/tempest/config.py
index af9eefc..b392a72 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -15,15 +15,12 @@
from __future__ import print_function
-import functools
import os
import tempfile
-import debtcollector.removals
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
-import testtools
from tempest.lib import exceptions
from tempest.lib.services import clients
@@ -197,6 +194,8 @@
default=60,
help='Timeout in seconds to wait for the http request to '
'return'),
+ cfg.StrOpt('proxy_url',
+ help='Specify an http proxy to use.')
]
identity_feature_group = cfg.OptGroup(name='identity-feature-enabled',
@@ -234,6 +233,12 @@
deprecated_reason="This feature flag was introduced to "
"support testing of old OpenStack versions, "
"which are not supported anymore"),
+ cfg.BoolOpt('domain_specific_drivers',
+ default=False,
+ help='Are domain specific drivers enabled? '
+ 'This configuration value should be same as '
+ '[identity]->domain_specific_drivers_enabled '
+ 'in keystone.conf.'),
cfg.BoolOpt('security_compliance',
default=False,
help='Does the environment have the security compliance '
@@ -833,7 +838,14 @@
help="Is the v2 volume API enabled"),
cfg.BoolOpt('api_v3',
default=True,
- help="Is the v3 volume API enabled")
+ help="Is the v3 volume API enabled"),
+ cfg.BoolOpt('extend_attached_volume',
+ default=False,
+ help='Does the cloud support extending the size of a volume '
+ 'which is currently attached to a server instance? This '
+ 'depends on the 3.42 volume API microversion and the '
+ '2.51 compute API microversion. Also, not all volume or '
+ 'compute backends support this operation.')
]
@@ -1278,79 +1290,6 @@
CONF = TempestConfigProxy()
-@debtcollector.removals.remove(
- message='use testtools.skipUnless instead', removal_version='Queens')
-def skip_unless_config(*args):
- """Decorator to raise a skip if a config opt doesn't exist or is False
-
- :param str group: The first arg, the option group to check
- :param str name: The second arg, the option name to check
- :param str msg: Optional third arg, the skip msg to use if a skip is raised
- :raises testtools.TestCaseskipException: If the specified config option
- doesn't exist or it exists and evaluates to False
- """
- def decorator(f):
- group = args[0]
- name = args[1]
-
- @functools.wraps(f)
- def wrapper(self, *func_args, **func_kwargs):
- if not hasattr(CONF, group):
- msg = "Config group %s doesn't exist" % group
- raise testtools.TestCase.skipException(msg)
-
- conf_group = getattr(CONF, group)
- if not hasattr(conf_group, name):
- msg = "Config option %s.%s doesn't exist" % (group,
- name)
- raise testtools.TestCase.skipException(msg)
-
- value = getattr(conf_group, name)
- if not value:
- if len(args) == 3:
- msg = args[2]
- else:
- msg = "Config option %s.%s is false" % (group,
- name)
- raise testtools.TestCase.skipException(msg)
- return f(self, *func_args, **func_kwargs)
- return wrapper
- return decorator
-
-
-@debtcollector.removals.remove(
- message='use testtools.skipIf instead', removal_version='Queens')
-def skip_if_config(*args):
- """Raise a skipException if a config exists and is True
-
- :param str group: The first arg, the option group to check
- :param str name: The second arg, the option name to check
- :param str msg: Optional third arg, the skip msg to use if a skip is raised
- :raises testtools.TestCase.skipException: If the specified config option
- exists and evaluates to True
- """
- def decorator(f):
- group = args[0]
- name = args[1]
-
- @functools.wraps(f)
- def wrapper(self, *func_args, **func_kwargs):
- if hasattr(CONF, group):
- conf_group = getattr(CONF, group)
- if hasattr(conf_group, name):
- value = getattr(conf_group, name)
- if value:
- if len(args) == 3:
- msg = args[2]
- else:
- msg = "Config option %s.%s is false" % (group,
- name)
- raise testtools.TestCase.skipException(msg)
- return f(self, *func_args, **func_kwargs)
- return wrapper
- return decorator
-
-
def service_client_config(service_client_name=None):
"""Return a dict with the parameters to init service clients
@@ -1371,6 +1310,7 @@
* `ca_certs`
* `trace_requests`
* `http_timeout`
+ * `proxy_url`
The dict returned by this does not fit a few service clients:
@@ -1393,7 +1333,8 @@
CONF.identity.disable_ssl_certificate_validation,
'ca_certs': CONF.identity.ca_certificates_file,
'trace_requests': CONF.debug.trace_requests,
- 'http_timeout': CONF.service_clients.http_timeout
+ 'http_timeout': CONF.service_clients.http_timeout,
+ 'proxy_url': CONF.service_clients.proxy_url,
}
if service_client_name is None:
@@ -1447,7 +1388,7 @@
module = service_clients[service_client]
configs = service_client.split('.')[0]
service_client_data = dict(
- name=service_client.replace('.', '_'),
+ name=service_client.replace('.', '_').replace('-', '_'),
service_version=service_client,
module_path=module.__name__,
client_names=module.__all__,
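
To show how the new ``proxy_url`` setting travels with the other HTTP-level
parameters, a small sketch, assuming ``proxy_url`` is set in the
``[service-clients]`` section of tempest.conf::

    from tempest import config

    params = config.service_client_config('compute')
    # Along with ca_certs, http_timeout and the rest, the dict now carries
    # the configured proxy; RestClient uses it to select the proxied HTTP
    # class added to tempest.lib.common.http in this change.
    print(params['proxy_url'])
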
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index b5b2d71..a430d5d 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -52,5 +52,5 @@
"the configured network")
-class RFCViolation(exceptions.RestClientException):
- message = "RFC Violation"
+class InvalidServiceTag(exceptions.TempestException):
+ message = "Invalid service tag"
diff --git a/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py b/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
index a3c9099..28ed816 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
@@ -65,7 +65,7 @@
'items': {
'type': 'object',
'properties': {
- 'version': {'type': 'integer'},
+ 'version': {'enum': [4, 6]},
'addr': {
'type': 'string',
'oneOf': [
diff --git a/tempest/lib/auth.py b/tempest/lib/auth.py
index ab4308f..a850fe1 100644
--- a/tempest/lib/auth.py
+++ b/tempest/lib/auth.py
@@ -261,12 +261,13 @@
def __init__(self, credentials, auth_url,
disable_ssl_certificate_validation=None,
ca_certs=None, trace_requests=None, scope='project',
- http_timeout=None):
+ http_timeout=None, proxy_url=None):
super(KeystoneAuthProvider, self).__init__(credentials, scope)
self.dscv = disable_ssl_certificate_validation
self.ca_certs = ca_certs
self.trace_requests = trace_requests
self.http_timeout = http_timeout
+ self.proxy_url = proxy_url
self.auth_url = auth_url
self.auth_client = self._auth_client(auth_url)
@@ -345,7 +346,7 @@
return json_v2id.TokenClient(
auth_url, disable_ssl_certificate_validation=self.dscv,
ca_certs=self.ca_certs, trace_requests=self.trace_requests,
- http_timeout=self.http_timeout)
+ http_timeout=self.http_timeout, proxy_url=self.proxy_url)
def _auth_params(self):
"""Auth parameters to be passed to the token request
@@ -433,7 +434,7 @@
return json_v3id.V3TokenClient(
auth_url, disable_ssl_certificate_validation=self.dscv,
ca_certs=self.ca_certs, trace_requests=self.trace_requests,
- http_timeout=self.http_timeout)
+ http_timeout=self.http_timeout, proxy_url=self.proxy_url)
def _auth_params(self):
"""Auth parameters to be passed to the token request
diff --git a/tempest/lib/cli/base.py b/tempest/lib/cli/base.py
index 5468a7b..f39ecbc 100644
--- a/tempest/lib/cli/base.py
+++ b/tempest/lib/cli/base.py
@@ -93,10 +93,20 @@
:type insecure: boolean
:param prefix: prefix to insert before commands
:type prefix: string
+ :param user_domain_name: User's domain name
+ :type user_domain_name: string
+ :param user_domain_id: User's domain ID
+ :type user_domain_id: string
+ :param project_domain_name: Project's domain name
+ :type project_domain_name: string
+ :param project_domain_id: Project's domain ID
+ :type project_domain_id: string
"""
def __init__(self, username='', password='', tenant_name='', uri='',
- cli_dir='', insecure=False, prefix='', *args, **kwargs):
+ cli_dir='', insecure=False, prefix='', user_domain_name=None,
+ user_domain_id=None, project_domain_name=None,
+ project_domain_id=None, *args, **kwargs):
"""Initialize a new CLIClient object."""
super(CLIClient, self).__init__()
self.cli_dir = cli_dir if cli_dir else '/usr/bin'
@@ -106,6 +116,10 @@
self.uri = uri
self.insecure = insecure
self.prefix = prefix
+ self.user_domain_name = user_domain_name
+ self.user_domain_id = user_domain_id
+ self.project_domain_name = project_domain_name
+ self.project_domain_id = project_domain_id
def nova(self, action, flags='', params='', fail_ok=False,
endpoint_type='publicURL', merge_stderr=False):
@@ -366,6 +380,14 @@
self.tenant_name,
self.password,
self.uri))
+ if self.user_domain_name is not None:
+ creds += ' --os-user-domain-name %s' % self.user_domain_name
+ if self.user_domain_id is not None:
+ creds += ' --os-user-domain-id %s' % self.user_domain_id
+ if self.project_domain_name is not None:
+ creds += ' --os-project-domain-name %s' % self.project_domain_name
+ if self.project_domain_id is not None:
+ creds += ' --os-project-domain-id %s' % self.project_domain_id
if self.insecure:
flags = creds + ' --insecure ' + flags
else:
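
A hedged sketch of the new domain parameters in use; the URI, user and
domain names below are placeholders::

    from tempest.lib.cli import base

    cli = base.CLIClient(username='demo', password='secret',
                         tenant_name='demo',
                         uri='http://mycloud/identity/v3',
                         user_domain_name='Default',
                         project_domain_name='Default')
    # The generated credential arguments now also include
    # --os-user-domain-name and --os-project-domain-name.
    print(cli.openstack('token issue'))
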
diff --git a/tempest/lib/common/api_version_utils.py b/tempest/lib/common/api_version_utils.py
index 1371b3c..bcb076b 100644
--- a/tempest/lib/common/api_version_utils.py
+++ b/tempest/lib/common/api_version_utils.py
@@ -120,3 +120,59 @@
api_microversion,
response_header))
raise exceptions.InvalidHTTPResponseHeader(msg)
+
+
+def compare_version_header_to_response(api_microversion_header_name,
+ api_microversion,
+ response_header,
+ operation='eq'):
+ """Compares API microversion in response header to ``api_microversion``.
+
+ If the microversion header is present in the response, compare its value
+ to ``api_microversion``; otherwise return False.
+
+ To keep this function usable with APIs that do not return a microversion
+ header in their responses (for example compute v2.0), it does *not* raise
+ InvalidHTTPResponseHeader.
+
+ :param api_microversion_header_name: Microversion header name. Example:
+ 'Openstack-Api-Version'.
+ :param api_microversion: Microversion number. Example:
+
+ * '2.10' for the old-style header name, 'X-OpenStack-Nova-API-Version'
+ * 'Compute 2.10' for the new-style header name, 'Openstack-Api-Version'
+
+ :param response_header: Response header where microversion is
+ expected to be present.
+ :param operation: The boolean operation to use to compare the
+ ``api_microversion`` to the microversion in ``response_header``.
+ Can be 'lt', 'eq', 'gt', 'le', 'ne', 'ge'. Default is 'eq'. The
+ operation type should be based on the order of the arguments:
+ ``api_microversion`` <operation> ``response_header`` microversion.
+ :returns: True if the comparison is logically true, else False if the
+ comparison is logically false or if ``api_microversion_header_name`` is
+ missing in the ``response_header``.
+ :raises InvalidParam: If the operation is not lt, eq, gt, le, ne or ge.
+ """
+ api_microversion_header_name = api_microversion_header_name.lower()
+ if api_microversion_header_name not in response_header:
+ return False
+
+ op = getattr(api_version_request.APIVersionRequest,
+ '__%s__' % operation, None)
+
+ if op is None:
+ msg = ("Operation %s is invalid. Valid options include: lt, eq, gt, "
+ "le, ne, ge." % operation)
+ raise exceptions.InvalidParam(invalid_param=msg)
+
+ # Remove "volume" from "volume <microversion>", for example, so that the
+ # microversion can be converted to `APIVersionRequest`.
+ api_version = api_microversion.split(' ')[-1]
+ resp_version = response_header[api_microversion_header_name].split(' ')[-1]
+ if not op(
+ api_version_request.APIVersionRequest(api_version),
+ api_version_request.APIVersionRequest(resp_version)):
+ return False
+
+ return True
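
A minimal sketch of the new comparison helper, assuming the (lower-cased)
response headers were captured in ``resp_headers``::

    from tempest.lib.common import api_version_utils

    resp_headers = {'openstack-api-version': 'volume 3.42'}
    # True: the requested microversion (3.40) is lower than or equal to the
    # microversion reported by the server (3.42).
    api_version_utils.compare_version_header_to_response(
        'OpenStack-API-Version', 'volume 3.40', resp_headers, 'le')
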
diff --git a/tempest/lib/common/dynamic_creds.py b/tempest/lib/common/dynamic_creds.py
index 90e67b4..4f1a883 100644
--- a/tempest/lib/common/dynamic_creds.py
+++ b/tempest/lib/common/dynamic_creds.py
@@ -28,6 +28,43 @@
class DynamicCredentialProvider(cred_provider.CredentialProvider):
+ """Creates credentials dynamically for tests
+
+ A credential provider that, based on an initial set of
+ admin credentials, creates new credentials on the fly for
+ tests to use and then discard.
+
+ :param str identity_version: identity API version to use `v2` or `v3`
+ :param str admin_role: name of the admin role added to admin users
+ :param str name: names of dynamic resources include this parameter
+ when specified
+ :param str credentials_domain: name of the domain where the users
+ are created. If not defined, the project
+ domain from admin_credentials is used
+ :param dict network_resources: network resources to be created for
+ the created credentials
+ :param Credentials admin_creds: initial admin credentials
+ :param bool identity_admin_domain_scope: Set to true if admin should be
+ scoped to the domain. By
+ default this is False and the
+ admin role is scoped to the
+ project.
+ :param str identity_admin_role: The role name to use for admin
+ :param list extra_roles: A list of strings for extra roles that should
+ be assigned to all created users
+ :param bool neutron_available: Whether we are running in an environment
+ with neutron
+ :param bool create_networks: Whether dynamic project networks should be
+ created or not
+ :param project_network_cidr: The CIDR to use for created project
+ networks
+ :param project_network_mask_bits: The network mask bits to use for
+ created project networks
+ :param public_network_id: The id for the public network to use
+ :param identity_admin_endpoint_type: The endpoint type for identity
+ admin clients. Defaults to public.
+ :param identity_uri: Identity URI of the target cloud
+ """
def __init__(self, identity_version, name=None, network_resources=None,
credentials_domain=None, admin_role=None, admin_creds=None,
@@ -37,43 +74,6 @@
project_network_cidr=None, project_network_mask_bits=None,
public_network_id=None, resource_prefix=None,
identity_admin_endpoint_type='public', identity_uri=None):
- """Creates credentials dynamically for tests
-
- A credential provider that, based on an initial set of
- admin credentials, creates new credentials on the fly for
- tests to use and then discard.
-
- :param str identity_version: identity API version to use `v2` or `v3`
- :param str admin_role: name of the admin role added to admin users
- :param str name: names of dynamic resources include this parameter
- when specified
- :param str credentials_domain: name of the domain where the users
- are created. If not defined, the project
- domain from admin_credentials is used
- :param dict network_resources: network resources to be created for
- the created credentials
- :param Credentials admin_creds: initial admin credentials
- :param bool identity_admin_domain_scope: Set to true if admin should be
- scoped to the domain. By
- default this is False and the
- admin role is scoped to the
- project.
- :param str identity_admin_role: The role name to use for admin
- :param list extra_roles: A list of strings for extra roles that should
- be assigned to all created users
- :param bool neutron_available: Whether we are running in an environemnt
- with neutron
- :param bool create_networks: Whether dynamic project networks should be
- created or not
- :param project_network_cidr: The CIDR to use for created project
- networks
- :param project_network_mask_bits: The network mask bits to use for
- created project networks
- :param public_network_id: The id for the public network to use
- :param identity_admin_endpoint_type: The endpoint type for identity
- admin clients. Defaults to public.
- :param identity_uri: Identity URI of the target cloud
- """
super(DynamicCredentialProvider, self).__init__(
identity_version=identity_version, identity_uri=identity_uri,
admin_role=admin_role, name=name,
@@ -451,7 +451,7 @@
creds.username)
# NOTE(zhufl): Only when neutron's security_group ext is
# enabled, _cleanup_default_secgroup will not raise error. But
- # here cannot use test.is_extension_enabled for it will cause
+ # here cannot use test_utils.is_extension_enabled for it will cause
# "circular dependency". So here just use try...except to
# ensure tenant deletion without big changes.
try:
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index 8a47d44..738c37f 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -17,6 +17,47 @@
import urllib3
+class ClosingProxyHttp(urllib3.ProxyManager):
+ def __init__(self, proxy_url, disable_ssl_certificate_validation=False,
+ ca_certs=None, timeout=None):
+ kwargs = {}
+
+ if disable_ssl_certificate_validation:
+ urllib3.disable_warnings()
+ kwargs['cert_reqs'] = 'CERT_NONE'
+ elif ca_certs:
+ kwargs['cert_reqs'] = 'CERT_REQUIRED'
+ kwargs['ca_certs'] = ca_certs
+
+ if timeout:
+ kwargs['timeout'] = timeout
+
+ super(ClosingProxyHttp, self).__init__(proxy_url, **kwargs)
+
+ def request(self, url, method, *args, **kwargs):
+
+ class Response(dict):
+ def __init__(self, info):
+ for key, value in info.getheaders().items():
+ self[key.lower()] = value
+ self.status = info.status
+ self['status'] = str(self.status)
+ self.reason = info.reason
+ self.version = info.version
+ self['content-location'] = url
+
+ original_headers = kwargs.get('headers', {})
+ new_headers = dict(original_headers, connection='close')
+ new_kwargs = dict(kwargs, headers=new_headers)
+
+ # Follow up to 5 redirections. Don't raise an exception if
+ # it's exceeded but return the HTTP 3XX response instead.
+ retry = urllib3.util.Retry(raise_on_redirect=False, redirect=5)
+ r = super(ClosingProxyHttp, self).request(method, url, retries=retry,
+ *args, **new_kwargs)
+ return Response(r), r.data
+
+
class ClosingHttp(urllib3.poolmanager.PoolManager):
def __init__(self, disable_ssl_certificate_validation=False,
ca_certs=None, timeout=None):
@@ -25,8 +66,7 @@
if disable_ssl_certificate_validation:
urllib3.disable_warnings()
kwargs['cert_reqs'] = 'CERT_NONE'
-
- if ca_certs:
+ elif ca_certs:
kwargs['cert_reqs'] = 'CERT_REQUIRED'
kwargs['ca_certs'] = ca_certs
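
A hedged usage sketch of the new proxied HTTP class; the proxy and target
URLs below are placeholders::

    from tempest.lib.common import http

    proxied = http.ClosingProxyHttp('http://proxy.example.com:3128',
                                    ca_certs=None, timeout=60)
    # Every request goes through the proxy with 'Connection: close', and up
    # to 5 redirects are followed without raising.
    resp, body = proxied.request('https://example.com/identity/v3', 'GET')
    print(resp.status, resp['content-location'])
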
diff --git a/tempest/lib/common/preprov_creds.py b/tempest/lib/common/preprov_creds.py
index cd3a10e..83db513 100644
--- a/tempest/lib/common/preprov_creds.py
+++ b/tempest/lib/common/preprov_creds.py
@@ -41,6 +41,35 @@
class PreProvisionedCredentialProvider(cred_provider.CredentialProvider):
+ """Credentials provider using pre-provisioned accounts
+
+ This credentials provider loads the details of pre-provisioned
+ accounts from a YAML file, in the format specified by
+ ``etc/accounts.yaml.sample``. It locks accounts while in use, using the
+ external locking mechanism, allowing for multiple python processes
+ to share a single account file, and thus running tests in parallel.
+
+ The accounts_lock_dir must be generated using `lockutils.get_lock_path`
+ from the oslo.concurrency library. For instance::
+
+ accounts_lock_dir = os.path.join(lockutils.get_lock_path(CONF),
+ 'test_accounts')
+
+ Role names for object storage are optional as long as the
+ `operator` and `reseller_admin` credential types are not used in the
+ accounts file.
+
+ :param identity_version: identity version of the credentials
+ :param admin_role: name of the admin role
+ :param test_accounts_file: path to the accounts YAML file
+ :param accounts_lock_dir: the directory for external locking
+ :param name: name of the hash file (optional)
+ :param credentials_domain: name of the domain credentials belong to
+ (if no domain is configured)
+ :param object_storage_operator_role: name of the role
+ :param object_storage_reseller_admin_role: name of the role
+ :param identity_uri: Identity URI of the target cloud
+ """
# Exclude from the hash fields specific to v2 or v3 identity API
# i.e. only include user*, project*, tenant* and password
@@ -51,35 +80,6 @@
accounts_lock_dir, name=None, credentials_domain=None,
admin_role=None, object_storage_operator_role=None,
object_storage_reseller_admin_role=None, identity_uri=None):
- """Credentials provider using pre-provisioned accounts
-
- This credentials provider loads the details of pre-provisioned
- accounts from a YAML file, in the format specified by
- `etc/accounts.yaml.sample`. It locks accounts while in use, using the
- external locking mechanism, allowing for multiple python processes
- to share a single account file, and thus running tests in parallel.
-
- The accounts_lock_dir must be generated using `lockutils.get_lock_path`
- from the oslo.concurrency library. For instance:
-
- accounts_lock_dir = os.path.join(lockutils.get_lock_path(CONF),
- 'test_accounts')
-
- Role names for object storage are optional as long as the
- `operator` and `reseller_admin` credential types are not used in the
- accounts file.
-
- :param identity_version: identity version of the credentials
- :param admin_role: name of the admin role
- :param test_accounts_file: path to the accounts YAML file
- :param accounts_lock_dir: the directory for external locking
- :param name: name of the hash file (optional)
- :param credentials_domain: name of the domain credentials belong to
- (if no domain is configured)
- :param object_storage_operator_role: name of the role
- :param object_storage_reseller_admin_role: name of the role
- :param identity_uri: Identity URI of the target cloud
- """
super(PreProvisionedCredentialProvider, self).__init__(
identity_version=identity_version, name=name,
admin_role=admin_role, credentials_domain=credentials_domain,
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index f58d737..22276d4 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -69,6 +69,7 @@
of the request and response payload
:param str http_timeout: Timeout in seconds to wait for the http request to
return
+ :param str proxy_url: http proxy url to use.
"""
# The version of the API this client implements
@@ -80,7 +81,8 @@
endpoint_type='publicURL',
build_interval=1, build_timeout=60,
disable_ssl_certificate_validation=False, ca_certs=None,
- trace_requests='', name=None, http_timeout=None):
+ trace_requests='', name=None, http_timeout=None,
+ proxy_url=None):
self.auth_provider = auth_provider
self.service = service
self.region = region
@@ -100,9 +102,16 @@
'retry-after', 'server',
'vary', 'www-authenticate'))
dscv = disable_ssl_certificate_validation
- self.http_obj = http.ClosingHttp(
- disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
- timeout=http_timeout)
+
+ if proxy_url:
+ self.http_obj = http.ClosingProxyHttp(
+ proxy_url,
+ disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
+ timeout=http_timeout)
+ else:
+ self.http_obj = http.ClosingHttp(
+ disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
+ timeout=http_timeout)
def get_headers(self, accept_type=None, send_type=None):
"""Return the default headers which will be used with outgoing requests
diff --git a/tempest/lib/common/utils/linux/remote_client.py b/tempest/lib/common/utils/linux/remote_client.py
index aef2ff3..cd4092b 100644
--- a/tempest/lib/common/utils/linux/remote_client.py
+++ b/tempest/lib/common/utils/linux/remote_client.py
@@ -67,7 +67,7 @@
def __init__(self, ip_address, username, password=None, pkey=None,
server=None, servers_client=None, ssh_timeout=300,
connect_timeout=60, console_output_enabled=True,
- ssh_shell_prologue="set -eu -o pipefail; PATH=$$PATH:/sbin;",
+ ssh_shell_prologue="set -eu -o pipefail; PATH=$PATH:/sbin;",
ping_count=1, ping_size=56):
"""Executes commands in a VM over ssh
diff --git a/tempest/lib/common/validation_resources.py b/tempest/lib/common/validation_resources.py
new file mode 100644
index 0000000..c35a01a
--- /dev/null
+++ b/tempest/lib/common/validation_resources.py
@@ -0,0 +1,457 @@
+# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
+# Copyright (c) 2017 IBM Corp.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import fixtures
+from oslo_log import log as logging
+from oslo_utils import excutils
+
+from tempest.lib.common.utils import data_utils
+from tempest.lib import exceptions as lib_exc
+
+LOG = logging.getLogger(__name__)
+
+
+def _network_service(clients, use_neutron):
+ # Internal helper to select the right network clients
+ if use_neutron:
+ return clients.network
+ else:
+ return clients.compute
+
+
+def create_ssh_security_group(clients, add_rule=False, ethertype='IPv4',
+ use_neutron=True):
+ """Create a security group for ping/ssh testing
+
+ Create a security group to be attached to a VM, using either the nova or
+ the neutron client. If rules are added, attaching the group to a VM
+ enables connectivity validation over ICMP and further testing over SSH.
+
+ :param clients: Instance of `tempest.lib.services.clients.ServiceClients`
+ or of a subclass of it. Resources are provisioned using clients from
+ `clients`.
+ :param add_rule: Whether security group rules are provisioned or not.
+ Defaults to `False`.
+ :param ethertype: 'IPv4' or 'IPv6'. Honoured only in case neutron is used.
+ :param use_neutron: When True resources are provisioned via neutron, when
+ False resources are provisioned via nova.
+ :returns: A dictionary with the security group as returned by the API.
+
+ Examples::
+
+ from tempest.common import validation_resources as vr
+ from tempest.lib import auth
+ from tempest.lib.services import clients
+
+ creds = auth.get_credentials('http://mycloud/identity/v3',
+ username='me', project_name='me',
+ password='secret', domain_name='Default')
+ osclients = clients.ServiceClients(creds, 'http://mycloud/identity/v3')
+ # Security group for IPv4 tests
+ sg4 = vr.create_ssh_security_group(osclients, add_rule=True)
+ # Security group for IPv6 tests
+ sg6 = vr.create_ssh_security_group(osclients, ethertype='IPv6',
+ add_rule=True)
+ """
+ network_service = _network_service(clients, use_neutron)
+ security_groups_client = network_service.SecurityGroupsClient()
+ security_group_rules_client = network_service.SecurityGroupRulesClient()
+ # Security Group clients for nova and neutron behave the same
+ sg_name = data_utils.rand_name('securitygroup-')
+ sg_description = data_utils.rand_name('description-')
+ security_group = security_groups_client.create_security_group(
+ name=sg_name, description=sg_description)['security_group']
+ # Security Group Rules clients require different parameters depending on
+ # the network service in use
+ if add_rule:
+ try:
+ if use_neutron:
+ security_group_rules_client.create_security_group_rule(
+ security_group_id=security_group['id'],
+ protocol='tcp',
+ ethertype=ethertype,
+ port_range_min=22,
+ port_range_max=22,
+ direction='ingress')
+ security_group_rules_client.create_security_group_rule(
+ security_group_id=security_group['id'],
+ protocol='icmp',
+ ethertype=ethertype,
+ direction='ingress')
+ else:
+ security_group_rules_client.create_security_group_rule(
+ parent_group_id=security_group['id'], ip_protocol='tcp',
+ from_port=22, to_port=22)
+ security_group_rules_client.create_security_group_rule(
+ parent_group_id=security_group['id'], ip_protocol='icmp',
+ from_port=-1, to_port=-1)
+ except Exception as sgc_exc:
+ # If adding security group rules fails, we cleanup the SG before
+ # re-raising the failure up
+ with excutils.save_and_reraise_exception():
+ try:
+ msg = ('Error while provisioning security group rules in '
+ 'security group %s. Trying to cleanup.')
+ # The exceptions logging is already handled, so using
+ # debug here just to provide more context
+ LOG.debug(msg, sgc_exc)
+ clear_validation_resources(
+ clients, keypair=None, floating_ip=None,
+ security_group=security_group,
+ use_neutron=use_neutron)
+ except Exception as cleanup_exc:
+ msg = ('Error during cleanup of a security group. '
+ 'The cleanup was triggered by an exception during '
+ 'the provisioning of security group rules.\n'
+ 'Provisioning exception: %s\n'
+ 'First cleanup exception: %s')
+ LOG.exception(msg, sgc_exc, cleanup_exc)
+ LOG.debug("SSH Validation resource security group with tcp and icmp "
+ "rules %s created", sg_name)
+ return security_group
+
+
+def create_validation_resources(clients, keypair=False, floating_ip=False,
+ security_group=False,
+ security_group_rules=False,
+ ethertype='IPv4', use_neutron=True,
+ floating_network_id=None,
+ floating_network_name=None):
+ """Provision resources for VM ping/ssh testing
+
+ Create resources required to be able to ping / ssh a virtual machine:
+ keypair, security group, security group rules and a floating IP.
+ Which of these resources are actually required depends on the cloud setup
+ and on the specific test; each one can be requested via the corresponding
+ argument.
+
+ Provisioned resources are returned in a dictionary.
+
+ :param clients: Instance of `tempest.lib.services.clients.ServiceClients`
+ or of a subclass of it. Resources are provisioned using clients from
+ `clients`.
+ :param keypair: Whether to provision a keypair. Defaults to False.
+ :param floating_ip: Whether to provision a floating IP. Defaults to False.
+ :param security_group: Whether to provision a security group. Defaults to
+ False.
+ :param security_group_rules: Whether to provision security group rules.
+ Defaults to False.
+ :param ethertype: 'IPv4' or 'IPv6'. Honoured only in case neutron is used.
+ :param use_neutron: When True resources are provisioned via neutron, when
+ False resources are provisioned via nova.
+ :param floating_network_id: The id of the network used to provision a
+ floating IP. Only used if a floating IP is requested and with neutron.
+ :param floating_network_name: The name of the floating IP pool used to
+ provision the floating IP. Only used if a floating IP is requested and
+ with nova-net.
+ :returns: A dictionary with the resources in the format they are returned
+ by the API. Valid keys are 'keypair', 'floating_ip' and
+ 'security_group'.
+
+ Examples::
+
+ from tempest.common import validation_resources as vr
+ from tempest.lib import auth
+ from tempest.lib.services import clients
+
+ creds = auth.get_credentials('http://mycloud/identity/v3',
+ username='me', project_name='me',
+ password='secret', domain_name='Default')
+ osclients = clients.ServiceClients(creds, 'http://mycloud/identity/v3')
+ # Request keypair and floating IP
+ resources = dict(keypair=True, security_group=False,
+ security_group_rules=False, floating_ip=True)
+ resources = vr.create_validation_resources(
+ osclients, use_neutron=True,
+ floating_network_id='4240E68E-23DA-4C82-AC34-9FEFAA24521C',
+ **resources)
+
+ # The floating IP to be attached to the VM
+ floating_ip = resources['floating_ip']['ip']
+ """
+ # Create and Return the validation resources required to validate a VM
+ msg = ('Requested validation resources keypair %s, floating IP %s, '
+ 'security group %s')
+ LOG.debug(msg, keypair, floating_ip, security_group)
+ validation_data = {}
+ try:
+ if keypair:
+ keypair_name = data_utils.rand_name('keypair')
+ validation_data.update(
+ clients.compute.KeyPairsClient().create_keypair(
+ name=keypair_name))
+ LOG.debug("Validation resource key %s created", keypair_name)
+ if security_group:
+ validation_data['security_group'] = create_ssh_security_group(
+ clients, add_rule=security_group_rules,
+ use_neutron=use_neutron, ethertype=ethertype)
+ if floating_ip:
+ floating_ip_client = _network_service(
+ clients, use_neutron).FloatingIPsClient()
+ if use_neutron:
+ floatingip = floating_ip_client.create_floatingip(
+ floating_network_id=floating_network_id)
+ # validation_resources['floating_ip'] has historically looked
+ # like a compute API POST /os-floating-ips response, so we need
+ # to mangle it a bit for a Neutron response with different
+ # fields.
+ validation_data['floating_ip'] = floatingip['floatingip']
+ validation_data['floating_ip']['ip'] = (
+ floatingip['floatingip']['floating_ip_address'])
+ else:
+ # NOTE(mriedem): The os-floating-ips compute API was deprecated
+ # in the 2.36 microversion. Any tests for CRUD operations on
+ # floating IPs using the compute API should be capped at 2.35.
+ validation_data.update(floating_ip_client.create_floating_ip(
+ pool=floating_network_name))
+ LOG.debug("Validation resource floating IP %s created",
+ validation_data['floating_ip'])
+ except Exception as prov_exc:
+ # If something goes wrong, cleanup as much as possible before we
+ # re-raise the exception
+ with excutils.save_and_reraise_exception():
+ if validation_data:
+ # Cleanup may fail as well
+ try:
+ msg = ('Error while provisioning validation resources %s. '
+ 'Trying to cleanup what we provisioned so far: %s')
+ # The exceptions logging is already handled, so using
+ # debug here just to provide more context
+ LOG.debug(msg, prov_exc, str(validation_data))
+ clear_validation_resources(
+ clients,
+ keypair=validation_data.get('keypair', None),
+ floating_ip=validation_data.get('floating_ip', None),
+ security_group=validation_data.get('security_group',
+ None),
+ use_neutron=use_neutron)
+ except Exception as cleanup_exc:
+ msg = ('Error during cleanup of validation resources. '
+ 'The cleanup was triggered by an exception during '
+ 'the provisioning step.\n'
+ 'Provisioning exception: %s\n'
+ 'First cleanup exception: %s')
+ LOG.exception(msg, prov_exc, cleanup_exc)
+ return validation_data
+
+
+def clear_validation_resources(clients, keypair=None, floating_ip=None,
+ security_group=None, use_neutron=True):
+ """Cleanup resources for VM ping/ssh testing
+
+ Cleanup a set of resources provisioned via `create_validation_resources`.
+ In case of errors during cleanup, the exception is logged and the cleanup
+ process is continued. The first exception that was raised is re-raised
+ after the cleanup is complete.
+
+ :param clients: Instance of `tempest.lib.services.clients.ServiceClients`
+ or of a subclass of it. Resources are provisioned using clients from
+ `clients`.
+ :param keypair: A dictionary with the keypair to be deleted. Defaults to
+ None.
+ :param floating_ip: A dictionary with the floating_ip to be deleted.
+ Defaults to None.
+ :param security_group: A dictionary with the security_group to be deleted.
+ Defaults to None.
+ :param use_neutron: When True resources are provisioned via neutron, when
+ False resources are provisioned via nova.
+
+ Examples::
+
+ from tempest.common import validation_resources as vr
+ from tempest.lib import auth
+ from tempest.lib.services import clients
+
+ creds = auth.get_credentials('http://mycloud/identity/v3',
+ username='me', project_name='me',
+ password='secret', domain_name='Default')
+ osclients = clients.ServiceClients(creds, 'http://mycloud/identity/v3')
+ # Request keypair and floating IP
+ resources = dict(keypair=True, security_group=False,
+ security_group_rules=False, floating_ip=True)
+ resources = vr.create_validation_resources(
+ osclients, validation_resources=resources, use_neutron=True,
+ floating_network_id='4240E68E-23DA-4C82-AC34-9FEFAA24521C')
+
+ # Now cleanup the resources
+ try:
+ vr.clear_validation_resources(osclients, use_neutron=True,
+ **resources)
+ except Exception as e:
+ LOG.exception('Something went wrong during cleanup, ignoring')
+ """
+ has_exception = None
+ if keypair:
+ keypair_client = clients.compute.KeyPairsClient()
+ keypair_name = keypair['name']
+ try:
+ keypair_client.delete_keypair(keypair_name)
+ except lib_exc.NotFound:
+ LOG.warning(
+ "Keypair %s is not found when attempting to delete",
+ keypair_name
+ )
+ except Exception as exc:
+ LOG.exception('Exception raised while deleting key %s',
+ keypair_name)
+ if not has_exception:
+ has_exception = exc
+ network_service = _network_service(clients, use_neutron)
+ if security_group:
+ security_group_client = network_service.SecurityGroupsClient()
+ sec_id = security_group['id']
+ try:
+ security_group_client.delete_security_group(sec_id)
+ security_group_client.wait_for_resource_deletion(sec_id)
+ except lib_exc.NotFound:
+ LOG.warning("Security group %s is not found when attempting "
+ "to delete", sec_id)
+ except lib_exc.Conflict as exc:
+ LOG.exception('Conflict while deleting security group %s; '
+ 'the VM may not have been deleted', sec_id)
+ if not has_exception:
+ has_exception = exc
+ except Exception as exc:
+ LOG.exception('Exception raised while deleting security '
+ 'group %s', sec_id)
+ if not has_exception:
+ has_exception = exc
+ if floating_ip:
+ floating_ip_client = network_service.FloatingIPsClient()
+ fip_id = floating_ip['id']
+ try:
+ if use_neutron:
+ floating_ip_client.delete_floatingip(fip_id)
+ else:
+ floating_ip_client.delete_floating_ip(fip_id)
+ except lib_exc.NotFound:
+ LOG.warning('Floating ip %s not found while attempting to '
+ 'delete', fip_id)
+ except Exception as exc:
+ LOG.exception('Exception raised while deleting ip %s', fip_id)
+ if not has_exception:
+ has_exception = exc
+ if has_exception:
+ raise has_exception
+
+
+class ValidationResourcesFixture(fixtures.Fixture):
+ """Fixture to provision and cleanup validation resources"""
+
+ DICT_KEYS = ['keypair', 'security_group', 'floating_ip']
+
+ def __init__(self, clients, keypair=False, floating_ip=False,
+ security_group=False, security_group_rules=False,
+ ethertype='IPv4', use_neutron=True, floating_network_id=None,
+ floating_network_name=None):
+ """Create a ValidationResourcesFixture
+
+ Create a ValidationResourcesFixture fixture, which provisions the
+ resources required to be able to ping / ssh a virtual machine upon
+ setUp and clears them out upon cleanup. Resources are keypair,
+ security group, security group rules and a floating IP, depending
+ on the parameters.
+
+ The fixture exposes a dictionary that includes provisioned resources.
+
+ :param clients: `tempest.lib.services.clients.ServiceClients` or of a
+ subclass of it. Resources are provisioned using clients from
+ `clients`.
+ :param keypair: Whether to provision a keypair. Defaults to False.
+ :param floating_ip: Whether to provision a floating IP.
+ Defaults to False.
+ :param security_group: Whether to provision a security group.
+ Defaults to False.
+ :param security_group_rules: Whether to provision security group rules.
+ Defaults to False.
+ :param ethertype: 'IPv4' or 'IPv6'. Honoured only if neutron is used.
+ :param use_neutron: When True resources are provisioned via neutron,
+ when False resources are provisioned via nova.
+ :param floating_network_id: The id of the network used to provision a
+ floating IP. Only used if a floating IP is requested and
+ neutron is used.
+ :param floating_network_name: The name of the floating IP pool used to
+ provision the floating IP. Only used if a floating IP is requested
+ and with nova-net.
+ The provisioned resources are exposed via the `resources` property as
+ a dictionary keyed by 'keypair', 'security_group' and 'floating_ip',
+ with values in the format they are returned by the API.
+
+ Examples::
+
+ from tempest.common import validation_resources as vr
+ from tempest.lib import auth
+ from tempest.lib.services import clients
+ import testtools
+
+
+ class TestWithVR(testtools.TestCase):
+
+ def setUp(self):
+ creds = auth.get_credentials(
+ 'http://mycloud/identity/v3',
+ username='me', project_name='me',
+ password='secret', domain_name='Default')
+
+ osclients = clients.ServiceClients(
+ creds, 'http://mycloud/identity/v3')
+ # Request keypair and floating IP
+ resources = dict(keypair=True, security_group=False,
+ security_group_rules=False,
+ floating_ip=True)
+ network_id = '4240E68E-23DA-4C82-AC34-9FEFAA24521C'
+ self.vr = self.useFixture(vr.ValidationResourcesFixture(
+ osclients, use_neutron=True,
+ floating_network_id=network_id,
+ **resources))
+
+ def test_use_ip(self):
+ # The floating IP to be attached to the VM
+ floating_ip = self.vr.resources['floating_ip']['ip']
+ """
+ self._clients = clients
+ self._keypair = keypair
+ self._floating_ip = floating_ip
+ self._security_group = security_group
+ self._security_group_rules = security_group_rules
+ self._ethertype = ethertype
+ self._use_neutron = use_neutron
+ self._floating_network_id = floating_network_id
+ self._floating_network_name = floating_network_name
+ self._validation_resources = None
+
+ def _setUp(self):
+ msg = ('Requested setup of ValidationResources keypair %s, floating '
+ 'IP %s, security group %s')
+ LOG.debug(msg, self._keypair, self._floating_ip, self._security_group)
+ self._validation_resources = create_validation_resources(
+ self._clients, keypair=self._keypair,
+ floating_ip=self._floating_ip,
+ security_group=self._security_group,
+ security_group_rules=self._security_group_rules,
+ ethertype=self._ethertype, use_neutron=self._use_neutron,
+ floating_network_id=self._floating_network_id,
+ floating_network_name=self._floating_network_name)
+ # If provisioning raises an exception we won't have anything to
+ # cleanup here, so we don't need a try-finally around provisioning
+ vr = self._validation_resources
+ self.addCleanup(clear_validation_resources, self._clients,
+ keypair=vr.get('keypair', None),
+ floating_ip=vr.get('floating_ip', None),
+ security_group=vr.get('security_group', None),
+ use_neutron=self._use_neutron)
+
+ @property
+ def resources(self):
+ return self._validation_resources
diff --git a/tempest/lib/exceptions.py b/tempest/lib/exceptions.py
index c538c72..9b2e87e 100644
--- a/tempest/lib/exceptions.py
+++ b/tempest/lib/exceptions.py
@@ -276,3 +276,7 @@
class InvalidTestResource(TempestException):
message = "%(name)s is not a valid %(type)s, or the name is ambiguous"
+
+
+class InvalidParam(TempestException):
+ message = "Invalid Parameter passed: %(invalid_param)s"
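For context, TempestException subclasses format their message template with the keyword arguments passed at raise time, so the new exception would be used roughly as sketched below; the 'metadata' value is only an illustrative offending parameter::

    from tempest.lib import exceptions as lib_exc

    # The keyword must match the placeholder in the message template.
    raise lib_exc.InvalidParam(invalid_param='metadata')
    # -> InvalidParam: Invalid Parameter passed: metadata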
diff --git a/tempest/lib/services/clients.py b/tempest/lib/services/clients.py
index 4fa7a7a..8918a8c 100644
--- a/tempest/lib/services/clients.py
+++ b/tempest/lib/services/clients.py
@@ -31,6 +31,7 @@
from tempest.lib.services import identity
from tempest.lib.services import image
from tempest.lib.services import network
+from tempest.lib.services import object_storage
from tempest.lib.services import volume
warnings.simplefilter("once")
@@ -50,20 +51,13 @@
'image.v1': image.v1,
'image.v2': image.v2,
'network': network,
+ 'object-storage': object_storage,
'volume.v1': volume.v1,
'volume.v2': volume.v2,
'volume.v3': volume.v3
}
-def _tempest_internal_modules():
- # Set of unstable service clients available in Tempest
- # NOTE(andreaf) This list will exists only as long the remain clients
- # are migrated to tempest.lib, and it will then be deleted without
- # deprecation or advance notice
- return set(['object-storage'])
-
-
def available_modules():
"""Set of service client modules available in Tempest and plugins
@@ -101,17 +95,6 @@
plug_service_versions))
name_conflicts.append(exceptions.PluginRegistrationException(
name=plugin_name, detailed_error=detailed_error))
- # NOTE(andreaf) Once all tempest clients are stable, the following
- # if will have to be removed.
- if not plug_service_versions.isdisjoint(
- _tempest_internal_modules()):
- detailed_error = (
- 'Plugin %s is trying to register a service %s already '
- 'claimed by a Tempest one' % (plugin_name,
- _tempest_internal_modules() &
- plug_service_versions))
- name_conflicts.append(exceptions.PluginRegistrationException(
- name=plugin_name, detailed_error=detailed_error))
extra_service_versions |= plug_service_versions
if name_conflicts:
LOG.error(
@@ -276,7 +259,7 @@
@removals.removed_kwarg('client_parameters')
def __init__(self, credentials, identity_uri, region=None, scope='project',
disable_ssl_certificate_validation=True, ca_certs=None,
- trace_requests='', client_parameters=None):
+ trace_requests='', client_parameters=None, proxy_url=None):
"""Service Clients provider
Instantiate a `ServiceClients` object, from a set of credentials and an
@@ -336,6 +319,8 @@
name, as declared in `service_clients.available_modules()` except
for the version. Values are dictionaries of parameters that are
going to be passed to all clients in the service client module.
+ :param proxy_url: Applies to auth and to all service clients, set a
+ proxy url for the clients to use.
"""
self._registered_services = set([])
self.credentials = credentials
@@ -360,16 +345,20 @@
self.dscv = disable_ssl_certificate_validation
self.ca_certs = ca_certs
self.trace_requests = trace_requests
+ self.proxy_url = proxy_url
# Creates an auth provider for the credentials
self.auth_provider = auth_provider_class(
self.credentials, self.identity_uri, scope=scope,
disable_ssl_certificate_validation=self.dscv,
- ca_certs=self.ca_certs, trace_requests=self.trace_requests)
+ ca_certs=self.ca_certs, trace_requests=self.trace_requests,
+ proxy_url=proxy_url)
+
# Setup some defaults for client parameters of registered services
client_parameters = client_parameters or {}
self.parameters = {}
+
# Parameters are provided for unversioned services
- all_modules = available_modules() | _tempest_internal_modules()
+ all_modules = available_modules()
unversioned_services = set(
[x.split('.')[0] for x in all_modules])
for service in unversioned_services:
@@ -420,8 +409,8 @@
clients in tempest.
:param client_names: List or set of names of service client classes.
:param kwargs: Extra optional parameters to be passed to all clients.
- ServiceClient provides defaults for region, dscv, ca_certs and
- trace_requests.
+ ServiceClient provides defaults for region, dscv, ca_certs, http
+ proxies and trace_requests.
:raise ServiceClientRegistrationException: if the provided name is
already in use or if service_version is already registered.
:raise ImportError: if module_path cannot be imported.
@@ -442,7 +431,8 @@
params = dict(region=self.region,
disable_ssl_certificate_validation=self.dscv,
ca_certs=self.ca_certs,
- trace_requests=self.trace_requests)
+ trace_requests=self.trace_requests,
+ proxy_url=self.proxy_url)
params.update(kwargs)
# Instantiate the client factory
_factory = ClientsFactory(module_path=module_path,
@@ -456,9 +446,7 @@
@property
def registered_services(self):
- # NOTE(andreaf) Once all tempest modules are stable this needs to
- # be updated to remove _tempest_internal_modules
- return self._registered_services | _tempest_internal_modules()
+ return self._registered_services
def _setup_parameters(self, parameters):
"""Setup default values for client parameters
diff --git a/tempest/lib/services/network/security_groups_client.py b/tempest/lib/services/network/security_groups_client.py
index 1f30216..d3ebf20 100644
--- a/tempest/lib/services/network/security_groups_client.py
+++ b/tempest/lib/services/network/security_groups_client.py
@@ -10,6 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+from tempest.lib import exceptions as lib_exc
from tempest.lib.services.network import base
@@ -66,3 +67,10 @@
"""
uri = '/security-groups'
return self.list_resources(uri, **filters)
+
+ def is_resource_deleted(self, id):
+ try:
+ self.show_security_group(id)
+ except lib_exc.NotFound:
+ return True
+ return False
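Adding is_resource_deleted is what makes the generic wait_for_resource_deletion poller usable for security groups, which clear_validation_resources above relies on; a minimal sketch, assuming sg_client is an instance of this client and sg_id an existing group::

    sg_client.delete_security_group(sg_id)
    # Polls is_resource_deleted() until it returns True or the client's
    # build timeout expires.
    sg_client.wait_for_resource_deletion(sg_id)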
diff --git a/tempest/lib/services/object_storage/__init__.py b/tempest/lib/services/object_storage/__init__.py
index e69de29..4303d09 100644
--- a/tempest/lib/services/object_storage/__init__.py
+++ b/tempest/lib/services/object_storage/__init__.py
@@ -0,0 +1,25 @@
+# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.object_storage.account_client import AccountClient
+from tempest.lib.services.object_storage.bulk_middleware_client import \
+ BulkMiddlewareClient
+from tempest.lib.services.object_storage.capabilities_client import \
+ CapabilitiesClient
+from tempest.lib.services.object_storage.container_client import \
+ ContainerClient
+from tempest.lib.services.object_storage.object_client import ObjectClient
+
+__all__ = ['AccountClient', 'BulkMiddlewareClient', 'CapabilitiesClient',
+ 'ContainerClient', 'ObjectClient']
diff --git a/tempest/services/object_storage/account_client.py b/tempest/lib/services/object_storage/account_client.py
similarity index 75%
rename from tempest/services/object_storage/account_client.py
rename to tempest/lib/services/object_storage/account_client.py
index 5a1737e..67f01a6 100644
--- a/tempest/services/object_storage/account_client.py
+++ b/tempest/lib/services/object_storage/account_client.py
@@ -50,41 +50,20 @@
return resp, body
def list_account_metadata(self):
- """HEAD on the storage URL
-
- Returns all account metadata headers
- """
+ """List all account metadata."""
resp, body = self.head('')
self.expected_success(204, resp.status)
return resp, body
def list_account_containers(self, params=None):
- """GET on the (base) storage URL
+ """List all containers for the account.
Given valid X-Auth-Token, returns a list of all containers for the
account.
- Optional Arguments:
- limit=[integer value N]
- Limits the number of results to at most N values
- DEFAULT: 10,000
-
- marker=[string value X]
- Given string value X, return object names greater in value
- than the specified marker.
- DEFAULT: No Marker
-
- prefix=[string value Y]
- Given string value Y, return object names starting with that prefix
-
- reverse=[boolean value Z]
- Reverse the result order based on the boolean value Z
- DEFAULT: False
-
- format=[string value, either 'json' or 'xml']
- Specify either json or xml to return the respective serialized
- response.
- DEFAULT: Python-List returned in response body
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/object-store/#show-account-details-and-list-containers
"""
url = '?%s' % urllib.urlencode(params) if params else ''
diff --git a/tempest/lib/services/object_storage/container_client.py b/tempest/lib/services/object_storage/container_client.py
new file mode 100644
index 0000000..2da8e24
--- /dev/null
+++ b/tempest/lib/services/object_storage/container_client.py
@@ -0,0 +1,124 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from xml.etree import ElementTree as etree
+
+import debtcollector.moves
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class ContainerClient(rest_client.RestClient):
+
+ def update_container(self, container_name, **headers):
+ """Creates or Updates a container
+
+ with optional metadata passed in as a dictionary.
+ For a full list of allowed headers and values, please refer to the
+ official API reference:
+ https://developer.openstack.org/api-ref/object-store/#create-container
+ """
+ url = str(container_name)
+
+ resp, body = self.put(url, body=None, headers=headers)
+ self.expected_success([201, 202], resp.status)
+ return resp, body
+
+ # NOTE: This alias is for the usability because PUT can be used for both
+ # updating/creating a resource and this PUT is mainly used for creating
+ # on Swift container API.
+ create_container = update_container
+
+ def delete_container(self, container_name):
+ """Deletes the container (if it's empty)."""
+ url = str(container_name)
+ resp, body = self.delete(url)
+ self.expected_success(204, resp.status)
+ return resp, body
+
+ def create_update_or_delete_container_metadata(
+ self, container_name,
+ create_update_metadata=None,
+ delete_metadata=None,
+ create_update_metadata_prefix='X-Container-Meta-',
+ delete_metadata_prefix='X-Remove-Container-Meta-'):
+ """Creates, Updates or deletes an containter metadata entry.
+
+ Container Metadata can be created, updated or deleted based on
+ metadata header or value. For detailed info, please refer to the
+ official API reference:
+ https://developer.openstack.org/api-ref/object-store/#create-update-or-delete-container-metadata
+ """
+ url = str(container_name)
+ headers = {}
+ if create_update_metadata:
+ for key in create_update_metadata:
+ metadata_header_name = create_update_metadata_prefix + key
+ headers[metadata_header_name] = create_update_metadata[key]
+ if delete_metadata:
+ for key in delete_metadata:
+ headers[delete_metadata_prefix + key] = delete_metadata[key]
+
+ resp, body = self.post(url, headers=headers, body=None)
+ self.expected_success(204, resp.status)
+ return resp, body
+
+ update_container_metadata = debtcollector.moves.moved_function(
+ create_update_or_delete_container_metadata,
+ 'update_container_metadata', __name__,
+ version='Queens', removal_version='Rocky')
+
+ def list_container_metadata(self, container_name):
+ """List all container metadata."""
+ url = str(container_name)
+ resp, body = self.head(url)
+ self.expected_success(204, resp.status)
+ return resp, body
+
+ def list_container_objects(self, container_name, params=None):
+ """List the objects in a container, given the container name
+
+ Returns the container object listing as a plain text list, or as
+ xml or json if that option is specified via the 'format' argument.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/object-storage/?expanded=show-container-details-and-list-objects-detail
+ """
+
+ url = str(container_name)
+ if params:
+ url += '?'
+ url += '&%s' % urllib.urlencode(params)
+
+ resp, body = self.get(url, headers={})
+ if params and params.get('format') == 'json':
+ body = json.loads(body)
+ elif params and params.get('format') == 'xml':
+ body = etree.fromstring(body)
+ # Else the content-type is plain/text
+ else:
+ body = [
+ obj_name for obj_name in body.decode().split('\n') if obj_name
+ ]
+
+ self.expected_success([200, 204], resp.status)
+ return resp, body
+
+ list_container_contents = debtcollector.moves.moved_function(
+ list_container_objects, 'list_container_contents', __name__,
+ version='Queens', removal_version='Rocky')
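A short sketch of the new metadata helper, assuming container_client is an instance of this class and 'my-container' exists; overriding the default X-Container-Meta- prefix is how the object storage scenario test further down sets a public-read ACL::

    # Send the raw X-Container-Read header by using an empty prefix.
    container_client.create_update_or_delete_container_metadata(
        'my-container',
        create_update_metadata={'X-Container-Read': '.r:*'},
        create_update_metadata_prefix='')

    # Remove a previously set X-Container-Meta-foo entry.
    container_client.create_update_or_delete_container_metadata(
        'my-container', delete_metadata={'foo': ''})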
diff --git a/tempest/services/object_storage/object_client.py b/tempest/lib/services/object_storage/object_client.py
similarity index 62%
rename from tempest/services/object_storage/object_client.py
rename to tempest/lib/services/object_storage/object_client.py
index 6d656ec..383aff6 100644
--- a/tempest/services/object_storage/object_client.py
+++ b/tempest/lib/services/object_storage/object_client.py
@@ -23,7 +23,8 @@
class ObjectClient(rest_client.RestClient):
def create_object(self, container, object_name, data,
- params=None, metadata=None, headers=None):
+ params=None, metadata=None, headers=None,
+ chunked=False):
"""Create storage object."""
if headers is None:
@@ -37,7 +38,7 @@
if params:
url += '?%s' % urlparse.urlencode(params)
- resp, body = self.put(url, data, headers)
+ resp, body = self.put(url, data, headers, chunked=chunked)
self.expected_success(201, resp.status)
return resp, body
@@ -50,28 +51,27 @@
self.expected_success([200, 204], resp.status)
return resp, body
- def update_object_metadata(self, container, object_name, metadata,
- metadata_prefix='X-Object-Meta-'):
+ def create_or_update_object_metadata(self, container, object_name,
+ headers=None):
"""Add, remove, or change X-Object-Meta metadata for storage object."""
- headers = {}
- for key in metadata:
- headers["%s%s" % (str(metadata_prefix), str(key))] = metadata[key]
-
url = "%s/%s" % (str(container), str(object_name))
resp, body = self.post(url, None, headers=headers)
self.expected_success(202, resp.status)
return resp, body
- def list_object_metadata(self, container, object_name):
+ def list_object_metadata(self, container, object_name,
+ params=None, headers=None):
"""List all storage object X-Object-Meta- metadata."""
url = "%s/%s" % (str(container), str(object_name))
- resp, body = self.head(url)
+ if params:
+ url += '?%s' % urlparse.urlencode(params)
+ resp, body = self.head(url, headers=headers)
self.expected_success(200, resp.status)
return resp, body
- def get_object(self, container, object_name, metadata=None):
+ def get_object(self, container, object_name, metadata=None, params=None):
"""Retrieve object's data."""
headers = {}
@@ -80,45 +80,12 @@
headers[str(key)] = metadata[key]
url = "{0}/{1}".format(container, object_name)
+ if params:
+ url += '?%s' % urlparse.urlencode(params)
resp, body = self.get(url, headers=headers)
self.expected_success([200, 206], resp.status)
return resp, body
- def copy_object_in_same_container(self, container, src_object_name,
- dest_object_name, metadata=None):
- """Copy storage object's data to the new object using PUT."""
-
- url = "{0}/{1}".format(container, dest_object_name)
- headers = {}
- headers['X-Copy-From'] = "%s/%s" % (str(container),
- str(src_object_name))
- headers['content-length'] = '0'
- if metadata:
- for key in metadata:
- headers[str(key)] = metadata[key]
-
- resp, body = self.put(url, None, headers=headers)
- self.expected_success(201, resp.status)
- return resp, body
-
- def copy_object_across_containers(self, src_container, src_object_name,
- dst_container, dst_object_name,
- metadata=None):
- """Copy storage object's data to the new object using PUT."""
-
- url = "{0}/{1}".format(dst_container, dst_object_name)
- headers = {}
- headers['X-Copy-From'] = "%s/%s" % (str(src_container),
- str(src_object_name))
- headers['content-length'] = '0'
- if metadata:
- for key in metadata:
- headers[str(key)] = metadata[key]
-
- resp, body = self.put(url, None, headers=headers)
- self.expected_success(201, resp.status)
- return resp, body
-
def copy_object_2d_way(self, container, src_object_name, dest_object_name,
metadata=None):
"""Copy storage object's data to the new object using COPY."""
@@ -135,38 +102,6 @@
self.expected_success(201, resp.status)
return resp, body
- def create_object_segments(self, container, object_name, segment, data):
- """Creates object segments."""
- url = "{0}/{1}/{2}".format(container, object_name, segment)
- resp, body = self.put(url, data)
- self.expected_success(201, resp.status)
- return resp, body
-
- def put_object_with_chunk(self, container, name, contents):
- """Put an object with Transfer-Encoding header
-
- :param container: name of the container
- :type container: string
- :param name: name of the object
- :type name: string
- :param contents: object data
- :type contents: iterable
- """
- headers = {'Transfer-Encoding': 'chunked'}
- if self.token:
- headers['X-Auth-Token'] = self.token
-
- url = "%s/%s" % (container, name)
- resp, body = self.put(
- url, headers=headers,
- body=contents,
- chunked=True
- )
-
- self._error_checker(resp, body)
- self.expected_success(201, resp.status)
- return resp.status, resp.reason, resp
-
def create_object_continue(self, container, object_name,
data, metadata=None):
"""Put an object using Expect:100-continue"""
@@ -183,8 +118,7 @@
path = str(parsed.path) + "/"
path += "%s/%s" % (str(container), str(object_name))
- conn = create_connection(parsed)
-
+ conn = _create_connection(parsed)
# Send the PUT request and the headers including the "Expect" header
conn.putrequest('PUT', path)
@@ -218,7 +152,7 @@
return resp.status, resp.reason
-def create_connection(parsed_url):
+def _create_connection(parsed_url):
"""Helper function to create connection with httplib
:param parsed_url: parsed url of the remote location
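With put_object_with_chunk removed, the same transfer can be expressed through the new chunked argument of create_object; a sketch assuming object_client is an ObjectClient and the container exists::

    # Stream the body with Transfer-Encoding: chunked, as the removed
    # helper used to do.
    chunks = (part for part in [b'first-chunk', b'second-chunk'])
    object_client.create_object(
        'my-container', 'my-object', chunks,
        headers={'Transfer-Encoding': 'chunked'}, chunked=True)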
diff --git a/tempest/lib/services/volume/v1/encryption_types_client.py b/tempest/lib/services/volume/v1/encryption_types_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/volume/v2/encryption_types_client.py b/tempest/lib/services/volume/v2/encryption_types_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/volume/v2/volumes_client.py b/tempest/lib/services/volume/v2/volumes_client.py
index e932adc..79973ee 100644
--- a/tempest/lib/services/volume/v2/volumes_client.py
+++ b/tempest/lib/services/volume/v2/volumes_client.py
@@ -13,7 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from debtcollector import moves
from debtcollector import removals
from oslo_serialization import jsonutils as json
import six
@@ -22,43 +21,12 @@
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
from tempest.lib.services.volume import base_client
-from tempest.lib.services.volume.v2 import transfers_client
class VolumesClient(base_client.BaseClient):
"""Client class to send CRUD Volume V2 API requests"""
api_version = "v2"
- create_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.create_volume_transfer,
- 'VolumesClient.create_volume_transfer', __name__,
- message='Use create_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- show_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.show_volume_transfer,
- 'VolumesClient.show_volume_transfer', __name__,
- message='Use show_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- list_volume_transfers = moves.moved_function(
- transfers_client.TransfersClient.list_volume_transfers,
- 'VolumesClient.list_volume_transfers', __name__,
- message='Use list_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- delete_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.delete_volume_transfer,
- 'VolumesClient.delete_volume_transfer', __name__,
- message='Use delete_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- accept_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.accept_volume_transfer,
- 'VolumesClient.accept_volume_transfer', __name__,
- message='Use accept_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
def _prepare_params(self, params):
"""Prepares params for use in get or _ext_get methods.
@@ -197,10 +165,18 @@
return rest_client.ResponseBody(resp, body)
def is_resource_deleted(self, id):
+ """Check the specified resource is deleted or not.
+
+ :param id: A checked resource id
+ :raises lib_exc.DeleteErrorException: If the specified resource is on
+ the status the delete was failed.
+ """
try:
- self.show_volume(id)
+ volume = self.show_volume(id)
except lib_exc.NotFound:
return True
+ if volume["volume"]["status"] == "error_deleting":
+ raise lib_exc.DeleteErrorException(resource_id=id)
return False
@property
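The effect of the new error_deleting check, sketched with an assumed volumes_client and vol_id: a failed backend delete now surfaces as DeleteErrorException instead of a wait timeout::

    from tempest.lib import exceptions as lib_exc

    volumes_client.delete_volume(vol_id)
    try:
        # wait_for_resource_deletion polls is_resource_deleted(), which
        # raises as soon as the volume reaches error_deleting.
        volumes_client.wait_for_resource_deletion(vol_id)
    except lib_exc.DeleteErrorException:
        pass  # report or clean up the stuck volume here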
diff --git a/tempest/lib/services/volume/v3/group_snapshots_client.py b/tempest/lib/services/volume/v3/group_snapshots_client.py
index e644f02..6e53e3e 100644
--- a/tempest/lib/services/volume/v3/group_snapshots_client.py
+++ b/tempest/lib/services/volume/v3/group_snapshots_client.py
@@ -60,7 +60,7 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def list_group_snapshots(self, **params):
+ def list_group_snapshots(self, detail=False, **params):
"""Information for all the tenant's group snapshots.
For more information, please refer to the official API reference:
@@ -68,6 +68,8 @@
https://developer.openstack.org/api-ref/block-storage/v3/#list-group-snapshots-with-details
"""
url = "group_snapshots"
+ if detail:
+ url += "/detail"
if params:
url += '?%s' % urllib.urlencode(params)
resp, body = self.get(url)
@@ -75,6 +77,18 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
+ def reset_group_snapshot_status(self, group_snapshot_id, status_to_set):
+ """Resets group snapshot status.
+
+ For more information, please refer to the official API reference:
+ https://developer.openstack.org/api-ref/block-storage/v3/#reset-group-snapshot-status
+ """
+ post_body = json.dumps({'reset_status': {'status': status_to_set}})
+ resp, body = self.post('group_snapshots/%s/action' % group_snapshot_id,
+ post_body)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def is_resource_deleted(self, id):
try:
self.show_group_snapshot(id)
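A sketch of the two group snapshot additions, with group_snapshots_client and group_snapshot_id assumed and 'available' used only as an example status value::

    # GET group_snapshots/detail rather than the plain listing.
    detailed = group_snapshots_client.list_group_snapshots(detail=True)

    # Admin action: force the snapshot status back to a known value.
    group_snapshots_client.reset_group_snapshot_status(
        group_snapshot_id, 'available')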
diff --git a/tempest/lib/services/volume/v3/groups_client.py b/tempest/lib/services/volume/v3/groups_client.py
index b463fdf..e2e477d 100644
--- a/tempest/lib/services/volume/v3/groups_client.py
+++ b/tempest/lib/services/volume/v3/groups_client.py
@@ -109,6 +109,17 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
+ def reset_group_status(self, group_id, status_to_set):
+ """Resets group status.
+
+ For more information, please refer to the official API reference:
+ https://developer.openstack.org/api-ref/block-storage/v3/#reset-group-status
+ """
+ post_body = json.dumps({'reset_status': {'status': status_to_set}})
+ resp, body = self.post('groups/%s/action' % group_id, post_body)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def is_resource_deleted(self, id):
try:
self.show_group(id)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 2843222..2d8935e 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -89,16 +89,14 @@
# The create_[resource] functions only return body and discard the
# resp part which is not used in scenario tests
- def _create_port(self, network_id, client=None, namestart='port-quotatest',
- **kwargs):
+ def create_port(self, network_id, client=None, **kwargs):
if not client:
client = self.ports_client
- name = data_utils.rand_name(namestart)
+ name = data_utils.rand_name(self.__class__.__name__)
result = client.create_port(
name=name,
network_id=network_id,
**kwargs)
- self.assertIsNotNone(result, 'Unable to allocate port')
port = result['port']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_port, port['id'])
@@ -147,8 +145,7 @@
if vnic_type:
ports = []
- create_port_body = {'binding:vnic_type': vnic_type,
- 'namestart': 'port-smoke'}
+ create_port_body = {'binding:vnic_type': vnic_type}
if kwargs:
# Convert security group names to security group ids
# to pass to create_port
@@ -185,9 +182,9 @@
for net in networks:
net_id = net.get('uuid', net.get('id'))
if 'port' not in net:
- port = self._create_port(network_id=net_id,
- client=clients.ports_client,
- **create_port_body)
+ port = self.create_port(network_id=net_id,
+ client=clients.ports_client,
+ **create_port_body)
ports.append({'port': port['id']})
else:
ports.append({'port': net['port']})
@@ -271,10 +268,8 @@
if backend_name:
extra_specs = {"volume_backend_name": backend_name}
- body = client.create_volume_type(name=randomized_name,
- extra_specs=extra_specs)
- volume_type = body['volume_type']
- self.assertIn('id', volume_type)
+ volume_type = client.create_volume_type(
+ name=randomized_name, extra_specs=extra_specs)['volume_type']
self.addCleanup(client.delete_volume_type, volume_type['id'])
return volume_type
@@ -506,27 +501,6 @@
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], 'available')
- volume = self.volumes_client.show_volume(volume['id'])['volume']
- self.assertEqual('available', volume['status'])
-
- def rebuild_server(self, server_id, image=None,
- preserve_ephemeral=False, wait=True,
- rebuild_kwargs=None):
- if image is None:
- image = CONF.compute.image_ref
-
- rebuild_kwargs = rebuild_kwargs or {}
-
- LOG.debug("Rebuilding server (id: %s, image: %s, preserve eph: %s)",
- server_id, image, preserve_ephemeral)
- self.servers_client.rebuild_server(
- server_id=server_id, image_ref=image,
- preserve_ephemeral=preserve_ephemeral,
- **rebuild_kwargs)
- if wait:
- waiters.wait_for_server_status(self.servers_client,
- server_id, 'ACTIVE')
-
def ping_ip_address(self, ip_address, should_succeed=True,
ping_timeout=None, mtu=None):
timeout = ping_timeout or CONF.validation.ping_timeout
@@ -858,22 +832,6 @@
floating_ip['id'])
return floating_ip
- def _associate_floating_ip(self, floating_ip, server):
- port_id, _ = self._get_server_port_id_and_ip4(server)
- kwargs = dict(port_id=port_id)
- floating_ip = self.floating_ips_client.update_floatingip(
- floating_ip['id'], **kwargs)['floatingip']
- self.assertEqual(port_id, floating_ip['port_id'])
- return floating_ip
-
- def _disassociate_floating_ip(self, floating_ip):
- """:param floating_ip: floating_ips_client.create_floatingip"""
- kwargs = dict(port_id=None)
- floating_ip = self.floating_ips_client.update_floatingip(
- floating_ip['id'], **kwargs)['floatingip']
- self.assertIsNone(floating_ip['port_id'])
- return floating_ip
-
def check_floating_ip_status(self, floating_ip, status):
"""Verifies floatingip reaches the given status
@@ -925,16 +883,13 @@
self._log_net_info(e)
raise
- def _check_remote_connectivity(self, source, dest, should_succeed=True,
- nic=None):
+ def check_remote_connectivity(self, source, dest, should_succeed=True,
+ nic=None):
"""assert ping server via source ssh connection
- Note: This is an internal method. Use check_remote_connectivity
- instead.
-
:param source: RemoteClient: an ssh connection from which to ping
- :param dest: and IP to ping against
- :param should_succeed: boolean should ping succeed or not
+ :param dest: an IP to ping against
+ :param should_succeed: boolean: should ping succeed or not
:param nic: specific network interface to ping from
"""
def ping_remote():
@@ -946,21 +901,8 @@
return not should_succeed
return should_succeed
- return test_utils.call_until_true(ping_remote,
- CONF.validation.ping_timeout,
- 1)
-
- def check_remote_connectivity(self, source, dest, should_succeed=True,
- nic=None):
- """assert ping server via source ssh connection
-
- :param source: RemoteClient: an ssh connection from which to ping
- :param dest: and IP to ping against
- :param should_succeed: boolean should ping succeed or not
- :param nic: specific network interface to ping from
- """
- result = self._check_remote_connectivity(source, dest, should_succeed,
- nic)
+ result = test_utils.call_until_true(ping_remote,
+ CONF.validation.ping_timeout, 1)
source_host = source.ssh_client.host
if should_succeed:
msg = "Timed out waiting for %s to become reachable from %s" \
@@ -1024,23 +966,6 @@
client.delete_security_group, secgroup['id'])
return secgroup
- def _default_security_group(self, client=None, tenant_id=None):
- """Get default secgroup for given tenant_id.
-
- :returns: default secgroup for given tenant
- """
- if client is None:
- client = self.security_groups_client
- if not tenant_id:
- tenant_id = client.tenant_id
- sgs = [
- sg for sg in list(client.list_security_groups().values())[0]
- if sg['tenant_id'] == tenant_id and sg['name'] == 'default'
- ]
- msg = "No default security group for tenant %s." % (tenant_id)
- self.assertNotEmpty(sgs, msg)
- return sgs[0]
-
def _create_security_group_rule(self, secgroup=None,
sec_group_rules_client=None,
tenant_id=None,
@@ -1069,8 +994,12 @@
if not tenant_id:
tenant_id = security_groups_client.tenant_id
if secgroup is None:
- secgroup = self._default_security_group(
- client=security_groups_client, tenant_id=tenant_id)
+ # Get default secgroup for tenant_id
+ default_secgroups = security_groups_client.list_security_groups(
+ name='default', tenant_id=tenant_id)['security_groups']
+ msg = "No default security group for tenant %s." % (tenant_id)
+ self.assertNotEmpty(default_secgroups, msg)
+ secgroup = default_secgroups[0]
ruleset = dict(security_group_id=secgroup['id'],
tenant_id=secgroup['tenant_id'])
@@ -1183,12 +1112,6 @@
router['id'])
return router
- def _update_router_admin_state(self, router, admin_state_up):
- kwargs = dict(admin_state_up=admin_state_up)
- router = self.routers_client.update_router(
- router['id'], **kwargs)['router']
- self.assertEqual(admin_state_up, router['admin_state_up'])
-
def create_networks(self, networks_client=None,
routers_client=None, subnets_client=None,
tenant_id=None, dns_nameservers=None,
@@ -1318,7 +1241,7 @@
def create_container(self, container_name=None):
name = container_name or data_utils.rand_name(
'swift-scenario-container')
- self.container_client.create_container(name)
+ self.container_client.update_container(name)
# look for the container to assure it is created
self.list_and_check_container_objects(name)
LOG.debug('Container %s created', name)
@@ -1355,7 +1278,7 @@
present_obj = []
if not_present_obj is None:
not_present_obj = []
- _, object_list = self.container_client.list_container_contents(
+ _, object_list = self.container_client.list_container_objects(
container_name)
if present_obj:
for obj in present_obj:
@@ -1364,14 +1287,6 @@
for obj in not_present_obj:
self.assertNotIn(obj, object_list)
- def change_container_acl(self, container_name, acl):
- metadata_param = {'metadata_prefix': 'x-container-',
- 'metadata': {'read': acl}}
- self.container_client.update_container_metadata(container_name,
- **metadata_param)
- resp, _ = self.container_client.list_container_metadata(container_name)
- self.assertEqual(resp['x-container-read'], acl)
-
def download_and_verify(self, container_name, obj_name, expected_data):
_, obj = self.object_client.get_object(container_name, obj_name)
self.assertEqual(obj, expected_data)
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 25227be..9ff6227 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -14,10 +14,10 @@
# under the License.
from tempest.common import tempest_fixtures as fixtures
+from tempest.common import utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
class TestAggregatesBasicOps(manager.ScenarioTest):
@@ -97,7 +97,7 @@
@decorators.idempotent_id('cb2b4c4f-0c7c-4164-bdde-6285b302a081')
@decorators.attr(type='slow')
- @test.services('compute')
+ @utils.services('compute')
def test_aggregate_basic_ops(self):
self.useFixture(fixtures.LockFixture('availability_zone'))
az = 'foo_zone'
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index cbdf307..b5220e9 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -13,10 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -54,7 +54,7 @@
@decorators.idempotent_id('79165fb4-5534-4b9d-8429-97ccffb8f86e')
@decorators.attr(type='slow')
- @test.services('compute', 'volume', 'image')
+ @utils.services('compute', 'volume', 'image')
def test_encrypted_cinder_volumes_luks(self):
server = self.launch_instance()
volume = self.create_encrypted_volume('nova.volume.encryptors.'
@@ -64,7 +64,7 @@
@decorators.idempotent_id('cbc752ed-b716-4717-910f-956cce965722')
@decorators.attr(type='slow')
- @test.services('compute', 'volume', 'image')
+ @utils.services('compute', 'volume', 'image')
def test_encrypted_cinder_volumes_cryptsetup(self):
server = self.launch_instance()
volume = self.create_encrypted_volume('nova.volume.encryptors.'
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 26a834b..29f1743 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -16,13 +16,13 @@
import testtools
from tempest.common import custom_matchers
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -105,7 +105,7 @@
'The public_network_id option must be specified.')
@testtools.skipUnless(CONF.network_feature_enabled.floating_ips,
'Floating ips are not available')
- @test.services('compute', 'volume', 'image', 'network')
+ @utils.services('compute', 'volume', 'image', 'network')
def test_minimum_basic_scenario(self):
image = self.glance_image_create()
keypair = self.create_keypair()
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index c8add8b..340c3c9 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -15,11 +15,11 @@
import testtools
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -59,7 +59,7 @@
def _setup_server(self, keypair):
security_groups = []
- if test.is_extension_enabled('security-group', 'network'):
+ if utils.is_extension_enabled('security-group', 'network'):
security_group = self._create_security_group()
security_groups = [{'name': security_group['name']}]
network, _, _ = self.create_networks()
@@ -107,7 +107,7 @@
@decorators.idempotent_id('61f1aa9a-1573-410e-9054-afa557cab021')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_stop_start(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
@@ -122,7 +122,7 @@
server, keypair, floating_ip)
@decorators.idempotent_id('7b6860c2-afa3-4846-9522-adeb38dfbe08')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_reboot(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
@@ -133,7 +133,7 @@
@decorators.idempotent_id('88a529c2-1daa-4c85-9aec-d541ba3eb699')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_rebuild(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
@@ -148,7 +148,7 @@
@testtools.skipUnless(CONF.compute_feature_enabled.pause,
'Pause is not available.')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_pause_unpause(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
@@ -166,7 +166,7 @@
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
'Suspend is not available.')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_suspend_resume(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
@@ -184,7 +184,7 @@
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize is not available.')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_resize(self):
resize_flavor = CONF.compute.flavor_ref_alt
keypair = self.create_keypair()
@@ -205,7 +205,7 @@
'Less than 2 compute nodes, skipping multinode '
'tests.')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_cold_migration(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
@@ -231,7 +231,7 @@
'Less than 2 compute nodes, skipping multinode '
'tests.')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_connectivity_cold_migration_revert(self):
keypair = self.create_keypair()
server = self._setup_server(keypair)
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 48ddac6..1c4e262 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -19,13 +19,13 @@
from oslo_log import log as logging
import testtools
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -87,7 +87,7 @@
'public_network_id must be defined.')
raise cls.skipException(msg)
for ext in ['router', 'security-group']:
- if not test.is_extension_enabled(ext, 'network'):
+ if not utils.is_extension_enabled(ext, 'network'):
msg = "%s extension not enabled." % ext
raise cls.skipException(msg)
if not CONF.network_feature_enabled.floating_ips:
@@ -113,7 +113,7 @@
port_id = None
if boot_with_port:
# create a port on the network and boot with that
- port_id = self._create_port(self.network['id'])['id']
+ port_id = self.create_port(self.network['id'])['id']
self.ports.append({'port': port_id})
server = self._create_server(self.network, port_id)
@@ -213,17 +213,20 @@
def _disassociate_floating_ips(self):
floating_ip, _ = self.floating_ip_tuple
- self._disassociate_floating_ip(floating_ip)
- self.floating_ip_tuple = Floating_IP_tuple(
- floating_ip, None)
+ floating_ip = self.floating_ips_client.update_floatingip(
+ floating_ip['id'], port_id=None)['floatingip']
+ self.assertIsNone(floating_ip['port_id'])
+ self.floating_ip_tuple = Floating_IP_tuple(floating_ip, None)
def _reassociate_floating_ips(self):
floating_ip, server = self.floating_ip_tuple
# create a new server for the floating ip
server = self._create_server(self.network)
- self._associate_floating_ip(floating_ip, server)
- self.floating_ip_tuple = Floating_IP_tuple(
- floating_ip, server)
+ port_id, _ = self._get_server_port_id_and_ip4(server)
+ floating_ip = self.floating_ips_client.update_floatingip(
+ floating_ip['id'], port_id=port_id)['floatingip']
+ self.assertEqual(port_id, floating_ip['port_id'])
+ self.floating_ip_tuple = Floating_IP_tuple(floating_ip, server)
def _create_new_network(self, create_gateway=False):
self.new_net = self._create_network()
@@ -355,9 +358,15 @@
self.check_remote_connectivity(ssh_source, remote_ip,
should_connect)
+ def _update_router_admin_state(self, router, admin_state_up):
+ kwargs = dict(admin_state_up=admin_state_up)
+ router = self.routers_client.update_router(
+ router['id'], **kwargs)['router']
+ self.assertEqual(admin_state_up, router['admin_state_up'])
+
@decorators.attr(type='smoke')
@decorators.idempotent_id('f323b3ba-82f8-4db7-8ea6-6a895869ec49')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_network_basic_ops(self):
"""Basic network operation test
@@ -409,10 +418,10 @@
"floating ip")
@decorators.idempotent_id('b158ea55-472e-4086-8fa9-c64ac0c6c1d0')
- @testtools.skipUnless(test.is_extension_enabled('net-mtu', 'network'),
+ @testtools.skipUnless(utils.is_extension_enabled('net-mtu', 'network'),
'No way to calculate MTU for networks')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_mtu_sized_frames(self):
"""Validate that network MTU sized frames fit through."""
self._setup_network_and_servers()
@@ -425,7 +434,7 @@
'multitenant network environment')
@decorators.skip_because(bug="1610994")
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_connectivity_between_vms_on_different_networks(self):
"""Test connectivity between VMs on different networks
@@ -479,7 +488,7 @@
@testtools.skipIf(CONF.network.port_vnic_type in ['direct', 'macvtap'],
'NIC hotplug not supported for '
'vnic_type direct or macvtap')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_hotplug_nic(self):
"""Test hotplug network interface
@@ -501,7 +510,7 @@
'Router state can be altered only with multitenant '
'networks capabilities')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_update_router_admin_state(self):
"""Test to update admin state up of router
@@ -535,7 +544,7 @@
@testtools.skipUnless(CONF.scenario.dhcp_client,
"DHCP client is not available.")
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_subnet_details(self):
"""Tests that subnet's extra configuration details are affecting VMs.
@@ -619,7 +628,7 @@
"Changing a port's admin state is not supported "
"by the test environment")
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_update_instance_port_admin_state(self):
"""Test to update admin_state_up attribute of instance port
@@ -666,7 +675,7 @@
@decorators.idempotent_id('759462e1-8535-46b0-ab3a-33aa45c55aaa')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_preserve_preexisting_port(self):
"""Test preserve pre-existing port
@@ -715,10 +724,10 @@
'server %s.' % server['id'])
self.assertEqual(port['id'], port_list[0]['id'])
- @test.requires_ext(service='network', extension='l3_agent_scheduler')
+ @utils.requires_ext(service='network', extension='l3_agent_scheduler')
@decorators.idempotent_id('2e788c46-fb3f-4ac9-8f82-0561555bea73')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_router_rescheduling(self):
"""Tests that router can be removed from agent and add to a new agent.
@@ -793,12 +802,12 @@
should_connect=True,
msg='After router rescheduling')
- @test.requires_ext(service='network', extension='port-security')
+ @utils.requires_ext(service='network', extension='port-security')
@testtools.skipUnless(CONF.compute_feature_enabled.interface_attach,
'NIC hotplug not available')
@decorators.idempotent_id('7c0bb1a2-d053-49a4-98f9-ca1a1d849f63')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_port_security_macspoofing_port(self):
"""Tests port_security extension enforces mac spoofing
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index bf26c2e..b687aa0 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -14,11 +14,11 @@
# under the License.
import functools
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -210,49 +210,49 @@
@decorators.attr(type='slow')
@decorators.idempotent_id('2c92df61-29f0-4eaa-bee3-7c65bef62a43')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_slaac_from_os(self):
self._prepare_and_test(address6_mode='slaac')
@decorators.attr(type='slow')
@decorators.idempotent_id('d7e1f858-187c-45a6-89c9-bdafde619a9f')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_dhcp6_stateless_from_os(self):
self._prepare_and_test(address6_mode='dhcpv6-stateless')
@decorators.attr(type='slow')
@decorators.idempotent_id('7ab23f41-833b-4a16-a7c9-5b42fe6d4123')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_multi_prefix_dhcpv6_stateless(self):
self._prepare_and_test(address6_mode='dhcpv6-stateless', n_subnets6=2)
@decorators.attr(type='slow')
@decorators.idempotent_id('dec222b1-180c-4098-b8c5-cc1b8342d611')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_multi_prefix_slaac(self):
self._prepare_and_test(address6_mode='slaac', n_subnets6=2)
@decorators.attr(type='slow')
@decorators.idempotent_id('b6399d76-4438-4658-bcf5-0d6c8584fde2')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_dualnet_slaac_from_os(self):
self._prepare_and_test(address6_mode='slaac', dualnet=True)
@decorators.attr(type='slow')
@decorators.idempotent_id('76f26acd-9688-42b4-bc3e-cd134c4cb09e')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_dualnet_dhcp6_stateless_from_os(self):
self._prepare_and_test(address6_mode='dhcpv6-stateless', dualnet=True)
@decorators.attr(type='slow')
@decorators.idempotent_id('cf1c4425-766b-45b8-be35-e2959728eb00')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_dualnet_multi_prefix_dhcpv6_stateless(self):
self._prepare_and_test(address6_mode='dhcpv6-stateless', n_subnets6=2,
dualnet=True)
@decorators.idempotent_id('9178ad42-10e4-47e9-8987-e02b170cc5cd')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_dualnet_multi_prefix_slaac(self):
self._prepare_and_test(address6_mode='slaac', n_subnets6=2,
dualnet=True)
diff --git a/tempest/scenario/test_object_storage_basic_ops.py b/tempest/scenario/test_object_storage_basic_ops.py
index 25e9f5c..cbe321e 100644
--- a/tempest/scenario/test_object_storage_basic_ops.py
+++ b/tempest/scenario/test_object_storage_basic_ops.py
@@ -13,14 +13,14 @@
# License for the specific language governing permissions and limitations
# under the License.
+from tempest.common import utils
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
class TestObjectStorageBasicOps(manager.ObjectStorageScenarioTest):
@decorators.idempotent_id('b920faf1-7b8a-4657-b9fe-9c4512bfb381')
- @test.services('object_storage')
+ @utils.services('object_storage')
def test_swift_basic_ops(self):
"""Test swift basic ops.
@@ -47,7 +47,7 @@
@decorators.idempotent_id('916c7111-cb1f-44b2-816d-8f760e4ea910')
@decorators.attr(type='slow')
- @test.services('object_storage')
+ @utils.services('object_storage')
def test_swift_acl_anonymous_download(self):
"""This test will cover below steps:
@@ -58,12 +58,18 @@
5. Delete the object and container
"""
container_name = self.create_container()
- obj_name, _ = self.upload_object_to_container(container_name)
+ obj_name, obj_data = self.upload_object_to_container(container_name)
obj_url = '%s/%s/%s' % (self.object_client.base_url,
container_name, obj_name)
resp, _ = self.object_client.raw_request(obj_url, 'GET')
self.assertEqual(resp.status, 401)
-
- self.change_container_acl(container_name, '.r:*')
- resp, _ = self.object_client.raw_request(obj_url, 'GET')
+ metadata_param = {'X-Container-Read': '.r:*'}
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name, create_update_metadata=metadata_param,
+ create_update_metadata_prefix='')
+ resp, _ = self.container_client.list_container_metadata(container_name)
+ self.assertEqual(metadata_param['X-Container-Read'],
+ resp['x-container-read'])
+ resp, data = self.object_client.raw_request(obj_url, 'GET')
self.assertEqual(resp.status, 200)
+ self.assertEqual(obj_data, data)
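The rewritten assertions above map directly onto Swift's referrer ACLs: posting `X-Container-Read: .r:*` to a container makes its objects readable without a token. A standalone sketch of that behaviour, using `requests` with placeholder endpoint and token values (not part of this patch)::

    import requests

    container_url = 'http://swift.example.test/v1/AUTH_demo/my-container'
    token = 'PLACEHOLDER_TOKEN'

    # Before the ACL is set, an anonymous GET is rejected
    assert requests.get(container_url + '/my-object').status_code == 401

    # Grant read access to any referrer, i.e. anonymous users as well
    requests.post(container_url,
                  headers={'X-Auth-Token': token,
                           'X-Container-Read': '.r:*'})

    # The same anonymous GET now succeeds and returns the object data
    assert requests.get(container_url + '/my-object').status_code == 200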
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 51716e8..e39afe0 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -16,12 +16,12 @@
import testtools
from tempest.common import compute
+from tempest.common import utils
from tempest.common.utils import net_info
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
LOG = log.getLogger(__name__)
@@ -142,7 +142,7 @@
msg = ('Either project_networks_reachable must be "true", or '
'public_network_id must be defined.')
raise cls.skipException(msg)
- if not test.is_extension_enabled('security-group', 'network'):
+ if not utils.is_extension_enabled('security-group', 'network'):
msg = "security-group extension not enabled."
raise cls.skipException(msg)
if CONF.network.shared_physical_network:
@@ -471,7 +471,7 @@
servers=[tenant.access_point], client=client)
@decorators.idempotent_id('e79f879e-debb-440c-a7e4-efeda05b6848')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_cross_tenant_traffic(self):
if not self.credentials_provider.is_multi_tenant():
raise self.skipException("No secondary tenant defined")
@@ -491,7 +491,7 @@
raise
@decorators.idempotent_id('63163892-bbf6-4249-aa12-d5ea1f8f421b')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_in_tenant_traffic(self):
try:
self._create_tenant_servers(self.primary_tenant, num=1)
@@ -505,7 +505,7 @@
@decorators.idempotent_id('f4d556d7-1526-42ad-bafb-6bebf48568f6')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_port_update_new_security_group(self):
"""Verifies the traffic after updating the vm port
@@ -559,7 +559,7 @@
@decorators.idempotent_id('d2f77418-fcc4-439d-b935-72eca704e293')
@decorators.attr(type='slow')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_multiple_security_groups(self):
"""Verify multiple security groups and checks that rules
@@ -591,9 +591,9 @@
should_connect=True)
@decorators.attr(type='slow')
- @test.requires_ext(service='network', extension='port-security')
+ @utils.requires_ext(service='network', extension='port-security')
@decorators.idempotent_id('7c811dcc-263b-49a3-92d2-1b4d8405f50c')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_port_security_disable_security_group(self):
"""Verify the default security group rules is disabled."""
new_tenant = self.primary_tenant
@@ -631,7 +631,7 @@
raise
@decorators.attr(type='slow')
- @test.requires_ext(service='network', extension='port-security')
+ @utils.requires_ext(service='network', extension='port-security')
@decorators.idempotent_id('13ccf253-e5ad-424b-9c4a-97b88a026699')
# TODO(mriedem): We shouldn't actually need to check this since neutron
# disables the port_security extension by default, but the problem is nova
@@ -641,7 +641,7 @@
@testtools.skipUnless(
CONF.network_feature_enabled.port_security,
'Port security must be enabled.')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_boot_into_disabled_port_security_network_without_secgroup(self):
tenant = self.primary_tenant
self._create_tenant_network(tenant, port_security_enabled=False)
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 6d6318c..d4f29ad 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -16,11 +16,11 @@
from oslo_log import log as logging
import testtools
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -45,7 +45,7 @@
@decorators.idempotent_id('e6c28180-7454-4b59-b188-0257af08a63b')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize is not available.')
- @test.services('compute', 'volume')
+ @utils.services('compute', 'volume')
def test_resize_volume_backed_server_confirm(self):
# We create an instance for use in this test
instance = self.create_server(volume_backed=True)
@@ -67,7 +67,7 @@
@decorators.idempotent_id('949da7d5-72c8-4808-8802-e3d70df98e2c')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
'Suspend is not available.')
- @test.services('compute')
+ @utils.services('compute')
def test_server_sequence_suspend_resume(self):
# We create an instance for use in this test
instance_id = self.create_server()['id']
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index 0c441ab..d5c378e 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -16,6 +16,7 @@
import json
import re
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -23,7 +24,6 @@
from tempest.lib import decorators
from tempest.lib import exceptions
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -132,7 +132,7 @@
@decorators.idempotent_id('7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba')
@decorators.attr(type='smoke')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_server_basic_ops(self):
keypair = self.create_keypair()
security_group = self._create_security_group()
diff --git a/tempest/scenario/test_server_multinode.py b/tempest/scenario/test_server_multinode.py
index 552ab27..fdf875c 100644
--- a/tempest/scenario/test_server_multinode.py
+++ b/tempest/scenario/test_server_multinode.py
@@ -13,11 +13,11 @@
# License for the specific language governing permissions and limitations
# under the License.
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -36,7 +36,7 @@
@decorators.idempotent_id('9cecbe35-b9d4-48da-a37e-7ce70aa43d30')
@decorators.attr(type='smoke')
- @test.services('compute', 'network')
+ @utils.services('compute', 'network')
def test_schedule_to_all_nodes(self):
available_zone = \
self.os_admin.availability_zone_client.list_availability_zones(
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index fc04b44..68f18d1 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -16,11 +16,11 @@
import testtools
from tempest.common import compute
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -78,7 +78,7 @@
@decorators.idempotent_id('1164e700-0af0-4a4c-8792-35909a88743c')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
- @test.services('compute', 'network', 'image')
+ @utils.services('compute', 'network', 'image')
def test_shelve_instance(self):
self._create_server_then_shelve_and_unshelve()
@@ -86,6 +86,6 @@
@decorators.idempotent_id('c1b6318c-b9da-490b-9c67-9339b627271f')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
- @test.services('compute', 'volume', 'network', 'image')
+ @utils.services('compute', 'volume', 'network', 'image')
def test_shelve_volume_backed_instance(self):
self._create_server_then_shelve_and_unshelve(boot_from_volume=True)
diff --git a/tempest/scenario/test_snapshot_pattern.py b/tempest/scenario/test_snapshot_pattern.py
index 52767dc..b51a781 100644
--- a/tempest/scenario/test_snapshot_pattern.py
+++ b/tempest/scenario/test_snapshot_pattern.py
@@ -15,10 +15,10 @@
import testtools
+from tempest.common import utils
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
@@ -44,7 +44,7 @@
@decorators.attr(type='slow')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
- @test.services('compute', 'network', 'image')
+ @utils.services('compute', 'network', 'image')
def test_snapshot_pattern(self):
# prepare for booting an instance
keypair = self.create_keypair()
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index 3632648..ef369d6 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -16,12 +16,12 @@
from oslo_log import log as logging
import testtools
+from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -76,7 +76,7 @@
'Snapshotting is not available.')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
- @test.services('compute', 'network', 'volume', 'image')
+ @utils.services('compute', 'network', 'volume', 'image')
def test_stamp_pattern(self):
# prepare for booting an instance
keypair = self.create_keypair()
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index b6f3b38..64ea8f6 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -13,12 +13,12 @@
from oslo_log import log as logging
import testtools
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -76,7 +76,7 @@
@decorators.idempotent_id('557cd2c2-4eb8-4dce-98be-f86765ff311b')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
- @test.services('compute', 'volume', 'image')
+ @utils.services('compute', 'volume', 'image')
def test_volume_boot_pattern(self):
"""This test case attempts to reproduce the following steps:
@@ -156,7 +156,7 @@
@decorators.idempotent_id('05795fb2-b2a7-4c9f-8fac-ff25aedb1489')
@decorators.attr(type='slow')
- @test.services('compute', 'image', 'volume')
+ @utils.services('compute', 'image', 'volume')
def test_create_server_from_volume_snapshot(self):
# Create a volume from an image
boot_volume = self._create_volume_from_image()
@@ -192,7 +192,7 @@
created_volume_info['attachments'][0]['volume_id'])
@decorators.idempotent_id('36c34c67-7b54-4b59-b188-02a2f458a63b')
- @test.services('compute', 'volume', 'image')
+ @utils.services('compute', 'volume', 'image')
def test_create_ebs_image_and_check_boot(self):
# create an instance from volume
volume_origin = self._create_volume_from_image()
@@ -216,7 +216,7 @@
@decorators.idempotent_id('cb78919a-e553-4bab-b73b-10cf4d2eb125')
@testtools.skipUnless(CONF.compute_feature_enabled.attach_encrypted_volume,
'Encrypted volume attach is not supported')
- @test.services('compute', 'volume')
+ @utils.services('compute', 'volume')
def test_boot_server_from_encrypted_volume_luks(self):
# Create an encrypted volume
volume = self.create_encrypted_volume('nova.volume.encryptors.'
diff --git a/tempest/scenario/test_volume_migrate_attached.py b/tempest/scenario/test_volume_migrate_attached.py
index 5667fbb..cd10bbd 100644
--- a/tempest/scenario/test_volume_migrate_attached.py
+++ b/tempest/scenario/test_volume_migrate_attached.py
@@ -12,11 +12,11 @@
from oslo_log import log as logging
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
-from tempest import test
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -38,11 +38,6 @@
credentials = ['primary', 'admin']
@classmethod
- def setup_clients(cls):
- super(TestVolumeMigrateRetypeAttached, cls).setup_clients()
- cls.admin_volumes_client = cls.os_admin.volumes_v2_client
-
- @classmethod
def skip_checks(cls):
super(TestVolumeMigrateRetypeAttached, cls).skip_checks()
if not CONF.volume_feature_enabled.multi_backend:
@@ -82,7 +77,7 @@
def _volume_retype_with_migration(self, volume_id, new_volume_type):
migration_policy = 'on-demand'
- self.admin_volumes_client.retype_volume(
+ self.volumes_client.retype_volume(
volume_id, new_type=new_volume_type,
migration_policy=migration_policy)
waiters.wait_for_volume_retype(self.volumes_client,
@@ -90,7 +85,7 @@
@decorators.attr(type='slow')
@decorators.idempotent_id('deadd2c2-beef-4dce-98be-f86765ff311b')
- @test.services('compute', 'volume')
+ @utils.services('compute', 'volume')
def test_volume_migrate_attached(self):
LOG.info("Creating keypair and security group")
keypair = self.create_keypair()
diff --git a/tempest/services/object_storage/__init__.py b/tempest/services/object_storage/__init__.py
deleted file mode 100644
index a2f0992..0000000
--- a/tempest/services/object_storage/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-from tempest.lib.services.object_storage.bulk_middleware_client import \
- BulkMiddlewareClient
-from tempest.lib.services.object_storage.capabilities_client import \
- CapabilitiesClient
-from tempest.services.object_storage.account_client import AccountClient
-from tempest.services.object_storage.container_client import ContainerClient
-from tempest.services.object_storage.object_client import ObjectClient
-
-__all__ = ['AccountClient', 'BulkMiddlewareClient', 'CapabilitiesClient',
- 'ContainerClient', 'ObjectClient']
diff --git a/tempest/services/object_storage/container_client.py b/tempest/services/object_storage/container_client.py
deleted file mode 100644
index afedd36..0000000
--- a/tempest/services/object_storage/container_client.py
+++ /dev/null
@@ -1,150 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from xml.etree import ElementTree as etree
-
-from oslo_serialization import jsonutils as json
-from six.moves.urllib import parse as urllib
-
-from tempest.lib.common import rest_client
-
-
-class ContainerClient(rest_client.RestClient):
-
- def create_container(
- self, container_name,
- metadata=None,
- remove_metadata=None,
- metadata_prefix='X-Container-Meta-',
- remove_metadata_prefix='X-Remove-Container-Meta-'):
- """Creates a container
-
- with optional metadata passed in as a dictionary
- """
- url = str(container_name)
- headers = {}
-
- if metadata is not None:
- for key in metadata:
- headers[metadata_prefix + key] = metadata[key]
- if remove_metadata is not None:
- for key in remove_metadata:
- headers[remove_metadata_prefix + key] = remove_metadata[key]
-
- resp, body = self.put(url, body=None, headers=headers)
- self.expected_success([201, 202], resp.status)
- return resp, body
-
- def delete_container(self, container_name):
- """Deletes the container (if it's empty)."""
- url = str(container_name)
- resp, body = self.delete(url)
- self.expected_success(204, resp.status)
- return resp, body
-
- def update_container_metadata(
- self, container_name,
- metadata=None,
- remove_metadata=None,
- metadata_prefix='X-Container-Meta-',
- remove_metadata_prefix='X-Remove-Container-Meta-'):
- """Updates arbitrary metadata on container."""
- url = str(container_name)
- headers = {}
-
- if metadata is not None:
- for key in metadata:
- headers[metadata_prefix + key] = metadata[key]
- if remove_metadata is not None:
- for key in remove_metadata:
- headers[remove_metadata_prefix + key] = remove_metadata[key]
-
- resp, body = self.post(url, body=None, headers=headers)
- self.expected_success(204, resp.status)
- return resp, body
-
- def delete_container_metadata(self, container_name, metadata,
- metadata_prefix='X-Remove-Container-Meta-'):
- """Deletes arbitrary metadata on container."""
- url = str(container_name)
- headers = {}
-
- if metadata is not None:
- for item in metadata:
- headers[metadata_prefix + item] = metadata[item]
-
- resp, body = self.post(url, body=None, headers=headers)
- self.expected_success(204, resp.status)
- return resp, body
-
- def list_container_metadata(self, container_name):
- """Retrieves container metadata headers"""
- url = str(container_name)
- resp, body = self.head(url)
- self.expected_success(204, resp.status)
- return resp, body
-
- def list_container_contents(self, container, params=None):
- """List the objects in a container, given the container name
-
- Returns the container object listing as a plain text list, or as
- xml or json if that option is specified via the 'format' argument.
-
- Optional Arguments:
- limit = integer
- For an integer value n, limits the number of results to at most
- n values.
-
- marker = 'string'
- Given a string value x, return object names greater in value
- than the specified marker.
-
- prefix = 'string'
- For a string value x, causes the results to be limited to names
- beginning with the substring x.
-
- format = 'json' or 'xml'
- Specify either json or xml to return the respective serialized
- response.
- If json, returns a list of json objects
- if xml, returns a string of xml
-
- path = 'string'
- For a string value x, return the object names nested in the
- pseudo path (assuming preconditions are met - see below).
-
- delimiter = 'character'
- For a character c, return all the object names nested in the
- container (without the need for the directory marker objects).
- """
-
- url = str(container)
- if params:
- url += '?'
- url += '&%s' % urllib.urlencode(params)
-
- resp, body = self.get(url, headers={})
- if params and params.get('format') == 'json':
- body = json.loads(body)
- elif params and params.get('format') == 'xml':
- body = etree.fromstring(body)
- # Else the content-type is plain/text
- else:
- body = [
- obj_name for obj_name in body.decode().split('\n') if obj_name
- ]
-
- self.expected_success([200, 204], resp.status)
- return resp, body
diff --git a/tempest/test.py b/tempest/test.py
index 47cbb5e..9da85d5 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -14,7 +14,6 @@
# under the License.
import atexit
-import functools
import os
import sys
@@ -26,10 +25,10 @@
from tempest import clients
from tempest.common import credentials_factory as credentials
-import tempest.common.validation_resources as vresources
+from tempest.common import utils
from tempest import config
-from tempest.lib.common import cred_client
from tempest.lib.common import fixed_network
+from tempest.lib.common import validation_resources as vr
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -49,95 +48,19 @@
version='Pike', removal_version='?')
-class InvalidServiceTag(lib_exc.TempestException):
- message = "Invalid service tag"
+services = debtcollector.moves.moved_function(
+ utils.services, 'services', __name__,
+ version='Pike', removal_version='?')
-def get_service_list():
- service_list = {
- 'compute': CONF.service_available.nova,
- 'image': CONF.service_available.glance,
- 'volume': CONF.service_available.cinder,
- # NOTE(masayukig): We have two network services which are neutron and
- # nova-network. And we have no way to know whether nova-network is
- # available or not. After the pending removal of nova-network from
- # nova, we can treat the network/neutron case in the same manner as
- # the other services.
- 'network': True,
- # NOTE(masayukig): Tempest tests always require the identity service.
- # So we should set this True here.
- 'identity': True,
- 'object_storage': CONF.service_available.swift,
- }
- return service_list
+requires_ext = debtcollector.moves.moved_function(
+ utils.requires_ext, 'requires_ext', __name__,
+ version='Pike', removal_version='?')
-def services(*args):
- """A decorator used to set an attr for each service used in a test case
-
- This decorator applies a testtools attr for each service that gets
- exercised by a test case.
- """
- def decorator(f):
- known_services = get_service_list()
-
- for service in args:
- if service not in known_services:
- raise InvalidServiceTag('%s is not a valid service' % service)
- decorators.attr(type=list(args))(f)
-
- @functools.wraps(f)
- def wrapper(self, *func_args, **func_kwargs):
- service_list = get_service_list()
-
- for service in args:
- if not service_list[service]:
- msg = 'Skipped because the %s service is not available' % (
- service)
- raise testtools.TestCase.skipException(msg)
- return f(self, *func_args, **func_kwargs)
- return wrapper
- return decorator
-
-
-def requires_ext(**kwargs):
- """A decorator to skip tests if an extension is not enabled
-
- @param extension
- @param service
- """
- def decorator(func):
- @functools.wraps(func)
- def wrapper(*func_args, **func_kwargs):
- if not is_extension_enabled(kwargs['extension'],
- kwargs['service']):
- msg = "Skipped because %s extension: %s is not enabled" % (
- kwargs['service'], kwargs['extension'])
- raise testtools.TestCase.skipException(msg)
- return func(*func_args, **func_kwargs)
- return wrapper
- return decorator
-
-
-def is_extension_enabled(extension_name, service):
- """A function that will check the list of enabled extensions from config
-
- """
- config_dict = {
- 'compute': CONF.compute_feature_enabled.api_extensions,
- 'volume': CONF.volume_feature_enabled.api_extensions,
- 'network': CONF.network_feature_enabled.api_extensions,
- 'object': CONF.object_storage_feature_enabled.discoverable_apis,
- 'identity': CONF.identity_feature_enabled.api_extensions
- }
- if not config_dict[service]:
- return False
- if config_dict[service][0] == 'all':
- return True
- if extension_name in config_dict[service]:
- return True
- return False
-
+is_extension_enabled = debtcollector.moves.moved_function(
+ utils.is_extension_enabled, 'is_extension_enabled', __name__,
+ version='Pike', removal_version='?')
at_exit_set = set()
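The three `moved_function` shims above keep the legacy `tempest.test` names importable while pointing callers at `tempest.common.utils`. A minimal, self-contained sketch of the pattern (the helper names below are illustrative only, not part of the patch)::

    import warnings

    from debtcollector import moves


    def new_helper():
        """The relocated implementation."""
        return 'ok'

    # The old name keeps working, but emits a deprecation warning
    old_helper = moves.moved_function(
        new_helper, 'old_helper', __name__,
        version='Pike', removal_version='?')

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        assert old_helper() == 'ok'
        assert len(caught) == 1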
@@ -174,16 +97,24 @@
- resource_cleanup
"""
- setUpClassCalled = False
-
# NOTE(andreaf) credentials holds a list of the credentials to be allocated
# at class setup time. Credential types can be 'primary', 'alt', 'admin' or
# a list of roles - the first element of the list being a label, and the
# rest the actual roles
credentials = []
+
+ # Track if setUpClass was invoked
+ __setupclass_called = False
+
+ # Network resources to be provisioned for the requested test credentials.
+ # Only used with the dynamic credentials provider.
+ _network_resources = {}
+
+ # Stack of resource cleanups
+ _class_cleanups = []
+
# Resources required to validate a server using ssh
- validation_resources = {}
- network_resources = {}
+ _validation_resources = {}
# NOTE(sdague): log_format is defined inline here instead of using the oslo
# default because going through the config path recouples config to the
@@ -198,23 +129,39 @@
TIMEOUT_SCALING_FACTOR = 1
@classmethod
+ def _reset_class(cls):
+ cls.__setup_credentials_called = False
+ cls.__resource_cleanup_called = False
+ cls.__skip_checks_called = False
+ # Stack of callable to be invoked in reverse order
+ cls._class_cleanups = []
+ # Stack of (name, callable) to be invoked in reverse order at teardown
+ cls._teardowns = []
+
+ @classmethod
def setUpClass(cls):
+ cls.__setupclass_called = True
+ # Reset state
+ cls._reset_class()
# It should never be overridden by descendants
if hasattr(super(BaseTestCase, cls), 'setUpClass'):
super(BaseTestCase, cls).setUpClass()
- cls.setUpClassCalled = True
- # Stack of (name, callable) to be invoked in reverse order at teardown
- cls.teardowns = []
# All the configuration checks that may generate a skip
cls.skip_checks()
+ if not cls.__skip_checks_called:
+ raise RuntimeError("skip_checks for %s did not call the super's "
+ "skip_checks" % cls.__name__)
try:
# Allocation of all required credentials and client managers
- cls.teardowns.append(('credentials', cls.clear_credentials))
+ cls._teardowns.append(('credentials', cls.clear_credentials))
cls.setup_credentials()
+ if not cls.__setup_credentials_called:
+ raise RuntimeError("setup_credentials for %s did not call the "
+ "super's setup_credentials" % cls.__name__)
# Shortcuts to clients
cls.setup_clients()
# Additional class-wide test resources
- cls.teardowns.append(('resources', cls.resource_cleanup))
+ cls._teardowns.append(('resources', cls.resource_cleanup))
cls.resource_setup()
except Exception:
etype, value, trace = sys.exc_info()
@@ -241,18 +188,29 @@
# If there was no exception during setup we shall re-raise the first
# exception in teardown
re_raise = (etype is None)
- while cls.teardowns:
- name, teardown = cls.teardowns.pop()
+ while cls._teardowns:
+ name, teardown = cls._teardowns.pop()
# Catch any exception in tearDown so we can re-raise the original
# exception at the end
try:
teardown()
+ if name == 'resources':
+ if not cls.__resource_cleanup_called:
+ raise RuntimeError(
+ "resource_cleanup for %s did not call the "
+ "super's resource_cleanup" % cls.__name__)
except Exception as te:
sys_exec_info = sys.exc_info()
tetype = sys_exec_info[0]
- # TODO(andreaf): Till we have the ability to cleanup only
- # resources that were successfully setup in resource_cleanup,
- # log AttributeError as info instead of exception.
+ # TODO(andreaf): Resource cleanup is often implemented by
+ # storing an array of resources at class level, and cleaning
+ # them up during `resource_cleanup`.
+ # In case of failure during setup, some resource arrays might
+ # not be defined at all, in which case the cleanup code might
+ # trigger an AttributeError. In such cases we log
+ # AttributeError as info instead of exception. Once all
+ # cleanups are migrated to addClassResourceCleanup we can
+ # remove this.
if tetype is AttributeError and name == 'resources':
LOG.info("tearDownClass of %s failed: %s", name, te)
else:
@@ -288,18 +246,45 @@
"""Class level skip checks.
Subclasses verify in here all conditions that might prevent the
- execution of the entire test class.
- Checks implemented here may not make use API calls, and should rely on
- configuration alone.
- In general skip checks that require an API call are discouraged.
- If one is really needed it may be implemented either in the
- resource_setup or at test level.
+ execution of the entire test class. Skipping here prevents any other
+ class fixture from being executed, i.e. no credentials or other
+ resource allocation will happen.
+
+ Tests defined in the test class will no longer appear in test results.
+ The `setUpClass` for the entire test class will be marked as SKIPPED
+ instead.
+
+ At this stage no test credentials are available, so skip checks
+ should rely on configuration alone. This is deliberate since skips
+ based on the result of an API call are discouraged.
+
+ The following checks are implemented in `test.py` already:
+ - check that alt credentials are available when requested by the test
+ - check that admin credentials are available when requested by the test
+ - check that the identity version specified by the test is marked as
+ enabled in the configuration
+
+ Overriders of skip_checks must always invoke skip_checks on `super`
+ first.
+
+ Example::
+
+ @classmethod
+ def skip_checks(cls):
+ super(Example, cls).skip_checks()
+ if not CONF.service_available.my_service:
+ skip_msg = ("%s skipped as my_service is not available")
+ raise cls.skipException(skip_msg % cls.__name__)
"""
+ cls.__skip_checks_called = True
identity_version = cls.get_identity_version()
- if 'admin' in cls.credentials and not credentials.is_admin_available(
- identity_version=identity_version):
- msg = "Missing Identity Admin API credentials in configuration."
- raise cls.skipException(msg)
+ # setting force_tenant_isolation to True also needs admin credentials.
+ if ('admin' in cls.credentials or
+ getattr(cls, 'force_tenant_isolation', False)):
+ if not credentials.is_admin_available(
+ identity_version=identity_version):
+ raise cls.skipException(
+ "Missing Identity Admin API credentials in configuration.")
if 'alt' in cls.credentials and not credentials.is_alt_available(
identity_version=identity_version):
msg = "Missing a 2nd set of API credentials in configuration."
@@ -316,13 +301,67 @@
def setup_credentials(cls):
"""Allocate credentials and create the client managers from them.
- For every element of credentials param function creates tenant/user,
- Then it creates client manager for that credential.
+ `setup_credentials` looks for the content of the `credentials`
+ attribute in the test class. If the value is a non-empty collection,
+ a credentials provider is setup, and credentials are provisioned or
+ allocated based on the content of the collection. Every set of
+ credentials is associated to an object of type `cls.client_manager`.
+ The client manager is accessible by tests via class attribute
+ `os_[type]`:
- Network related tests must override this function with
- set_network_resources() method, otherwise it will create
- network resources(network resources are created in a later step).
+ Valid values in `credentials` are:
+ - 'primary':
+ A normal user is provisioned.
+ It can be used only once. Multiple entries will be ignored.
+ Clients are available at os_primary.
+ - 'alt':
+ A normal user other than 'primary' is provisioned.
+ It can be used only once. Multiple entries will be ignored.
+ Clients are available at os_alt.
+ - 'admin':
+ An admin user is provisioned.
+ It can be used only once. Multiple entries will be ignored.
+ Clients are available at os_admin.
+ - A list in the format ['any_label', 'role1', ... , 'roleN']:
+ A client with roles <list>[1:] is provisioned.
+ It can be used multiple times, with unique labels.
+ Clients are available at os_roles_<list>[0].
+
+ By default network resources are allocated (in case of dynamic
+ credentials). Tests that do not need network or that require a
+ custom network setup must specify which network resources shall
+ be provisioned using the `set_network_resources()` method (note
+ that it must be invoked before `setup_credentials` is
+ invoked on super).
+
+ Example::
+
+ class TestWithCredentials(test.BaseTestCase):
+
+ credentials = ['primary', 'admin',
+ ['special', 'special_role1']]
+
+ @classmethod
+ def setup_credentials(cls):
+ # set_network_resources must be called first
+ cls.set_network_resources(network=True)
+ super(TestWithCredentials, cls).setup_credentials()
+
+ @classmethod
+ def setup_clients(cls):
+ cls.servers = cls.os_primary.compute.ServersClient()
+ cls.admin_servers = cls.os_admin.compute.ServersClient()
+ # certain API calls may require a user with a specific
+ # role assigned. In this example `special_role1` is
+ # assigned to the user in `cls.os_roles_special`.
+ cls.special_servers = (
+ cls.os_roles_special.compute.ServersClient())
+
+ def test_special_servers(self):
+ # Do something with servers
+ pass
"""
+ cls.__setup_credentials_called = True
for credentials_type in cls.credentials:
# This may raise an exception in case credentials are not available
# In that case we want to let the exception through and the test
@@ -364,50 +403,184 @@
@classmethod
def setup_clients(cls):
- """Create links to the clients into the test object."""
- # TODO(andreaf) There is a fair amount of code that could me moved from
- # base / test classes in here. Ideally tests should be able to only
- # specify which client is `client` and nothing else.
+ """Create aliases to the clients in the client managers.
+
+ `setup_clients` is invoked after the credential provisioning step.
+ Client manager objects are available to tests already. The purpose
+ of this helper is to setup shortcuts to specific clients that are
+ useful for the tests implemented in the test class.
+
+ Its purpose is mostly for code readability; however, it should be used
+ carefully to avoid doing exactly the opposite, i.e. making the code
+ unreadable and hard to debug. If aliases are defined in a super class,
+ it won't be obvious what they refer to, so it's good practice to define
+ all aliases used in the class. Aliases are meant to be shortcuts to
+ be used in tests, not shortcuts to avoid helper method attributes.
+ If a helper method starts relying on a client alias and a subclass
+ overrides that alias, it will become rather difficult to understand
+ what the helper method actually does.
+
+ Example::
+
+ class TestDoneItRight(test.BaseTestCase):
+
+ credentials = ['primary', 'alt']
+
+ @classmethod
+ def setup_clients(cls):
+ super(TestDoneItRight, cls).setup_clients()
+ cls.servers = cls.os_primary.ServersClient()
+ cls.servers_alt = cls.os_alt.ServersClient()
+
+ def _a_good_helper(self, clients):
+ # Some complex logic we're going to use many times
+ servers = clients.ServersClient()
+ vm = servers.create_server(...)
+
+ def delete_server():
+ test_utils.call_and_ignore_notfound_exc(
+ servers.delete_server, vm['id'])
+
+ self.addCleanup(delete_server)
+ return vm
+
+ def test_with_servers(self):
+ vm = self._a_good_helper(self.os_primary)
+ vm_alt = self._a_good_helper(self.os_alt)
+ self.servers.show_server(vm['id'])
+ self.servers_alt.show_server(vm_alt['id'])
+ """
pass
@classmethod
def resource_setup(cls):
- """Class level resource setup for test cases."""
- if (CONF.validation.ip_version_for_ssh not in (4, 6) and
- CONF.service_available.neutron):
- msg = "Invalid IP version %s in ip_version_for_ssh. Use 4 or 6"
- raise lib_exc.InvalidConfiguration(
- msg % CONF.validation.ip_version_for_ssh)
- if hasattr(cls, "os_primary"):
- cls.validation_resources = vresources.create_validation_resources(
- cls.os_primary, cls.validation_resources,
- use_neutron=CONF.service_available.neutron,
- ethertype='IPv' + str(CONF.validation.ip_version_for_ssh),
- floating_network_id=CONF.network.public_network_id,
- floating_network_name=CONF.network.floating_network_name)
- else:
- LOG.warning("Client manager not found, validation resources not"
- " created")
+ """Class level resource setup for test cases.
+
+ `resource_setup` is invoked once all credentials (and related network
+ resources) have been provisioned and after client aliases - if any -
+ have been defined.
+
+ The use case for `resource_setup` is test optimization: provisioning
+ of project-specific "expensive" resources that are not dirtied by tests
+ and can thus safely be re-used by multiple tests.
+
+ System wide resources shared by all tests could instead be provisioned
+ only once, before the test run.
+
+ Resources provisioned here must be cleaned up during
+ `resource_cleanup`. This is best achieved by scheduling a cleanup via
+ `addClassResourceCleanup`.
+
+ Some test resources have an asynchronous delete process. It's best
+ practice for them to schedule a wait for delete via
+ `addClassResourceCleanup` to avoid having resources in the process of
+ deletion when we reach the credentials cleanup step.
+
+ Example::
+
+ @classmethod
+ def resource_setup(cls):
+ super(MyTest, cls).resource_setup()
+ servers = cls.os_primary.compute.ServersClient()
+ # Schedule delete and wait so that we can first delete the
+ # two servers and then wait for both to delete
+ # Create server 1
+ cls.shared_server = servers.create_server()
+ # Create server 2. If something goes wrong we schedule cleanup
+ # of server 1 anyways.
+ try:
+ cls.shared_server2 = servers.create_server()
+ # Wait server 2
+ cls.addClassResourceCleanup(
+ waiters.wait_for_server_termination,
+ servers, cls.shared_server2['id'],
+ ignore_error=False)
+ finally:
+ # Wait server 1
+ cls.addClassResourceCleanup(
+ waiters.wait_for_server_termination,
+ servers, cls.shared_server['id'],
+ ignore_error=False)
+ # Delete server 1
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ servers.delete_server,
+ cls.shared_server['id'])
+ # Delete server 2 (if it was created)
+ if hasattr(cls, 'shared_server2'):
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ servers.delete_server,
+ cls.shared_server2['id'])
+ """
+ pass
@classmethod
def resource_cleanup(cls):
"""Class level resource cleanup for test cases.
- Resource cleanup must be able to handle the case of partially setup
- resources, in case a failure during `resource_setup` should happen.
+ Resource cleanup processes the stack of cleanups produced by
+ `addClassResourceCleanup` and then cleans up validation resources
+ if any were provisioned.
+
+ All cleanups are processed whatever the outcome. Exceptions are
+ accumulated and re-raised as a `MultipleExceptions` at the end.
+
+ In most cases test cases won't need to override `resource_cleanup`,
+ but if they do they must invoke `resource_cleanup` on super.
+
+ Example::
+
+ class TestWithReallyComplexCleanup(test.BaseTestCase):
+
+ @classmethod
+ def resource_setup(cls):
+ # provision resource A
+ cls.addClassResourceCleanup(delete_resource, A)
+ # provision resource B
+ cls.addClassResourceCleanup(delete_resource, B)
+
+ @classmethod
+ def resource_cleanup(cls):
+ # It's possible to override resource_cleanup but in most
+ # cases it shouldn't be required. Nothing that may fail
+ # should be executed before the call to super since it
+ # might cause resource leak in case of error.
+ super(TestWithReallyComplexCleanup, cls).resource_cleanup()
+ # At this point test credentials are still available but
+ # anything from the cleanup stack has been already deleted.
"""
- if cls.validation_resources:
- if hasattr(cls, "os_primary"):
- vresources.clear_validation_resources(cls.os_primary,
- cls.validation_resources)
- cls.validation_resources = {}
- else:
- LOG.warning("Client manager not found, validation resources "
- "not deleted")
+ cls.__resource_cleanup_called = True
+ cleanup_errors = []
+ while cls._class_cleanups:
+ try:
+ fn, args, kwargs = cls._class_cleanups.pop()
+ fn(*args, **kwargs)
+ except Exception:
+ cleanup_errors.append(sys.exc_info())
+ if cleanup_errors:
+ raise testtools.MultipleExceptions(*cleanup_errors)
+
+ @classmethod
+ def addClassResourceCleanup(cls, fn, *arguments, **keywordArguments):
+ """Add a cleanup function to be called during resource_cleanup.
+
+ Functions added with addClassResourceCleanup will be called in reverse
+ order of adding at the beginning of resource_cleanup, before any
+ credential, networking or validation resources cleanup is processed.
+
+ If a function added with addClassResourceCleanup raises an exception,
+ the error will be recorded as a test error, and the next cleanup will
+ then be run.
+
+ Cleanup functions are always called during the test class tearDown
+ fixture, even if an exception occurred during setUp or tearDown.
+ """
+ cls._class_cleanups.append((fn, arguments, keywordArguments))
def setUp(self):
super(BaseTestCase, self).setUp()
- if not self.setUpClassCalled:
+ if not self.__setupclass_called:
raise RuntimeError("setUpClass does not calls the super's"
"setUpClass in the "
+ self.__class__.__name__)
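Since `addClassResourceCleanup` pops its stack in reverse order of registration, a delete and its wait loop can be scheduled right next to the create call. A hypothetical sketch, assuming a `volumes_client` alias was defined in `setup_clients`::

    from tempest.lib.common.utils import test_utils
    from tempest import test


    class MyVolumeTest(test.BaseTestCase):

        @classmethod
        def resource_setup(cls):
            super(MyVolumeTest, cls).resource_setup()
            cls.volume = cls.volumes_client.create_volume(size=1)['volume']
            # Registered first, so it runs last: wait for the async delete
            cls.addClassResourceCleanup(
                cls.volumes_client.wait_for_resource_deletion,
                cls.volume['id'])
            # Registered last, so it runs first: issue the delete itself
            cls.addClassResourceCleanup(
                test_utils.call_and_ignore_notfound_exc,
                cls.volumes_client.delete_volume, cls.volume['id'])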
@@ -438,37 +611,6 @@
def credentials_provider(self):
return self._get_credentials_provider()
- @property
- def identity_utils(self):
- """A client that abstracts v2 and v3 identity operations.
-
- This can be used for creating and tearing down projects in tests. It
- should not be used for testing identity features.
- """
- if CONF.identity.auth_version == 'v2':
- client = self.os_admin.identity_client
- users_client = self.os_admin.users_client
- project_client = self.os_admin.tenants_client
- roles_client = self.os_admin.roles_client
- domains_client = None
- else:
- client = self.os_admin.identity_v3_client
- users_client = self.os_admin.users_v3_client
- project_client = self.os_admin.projects_client
- roles_client = self.os_admin.roles_v3_client
- domains_client = self.os_admin.domains_client
-
- try:
- domain = client.auth_provider.credentials.project_domain_name
- except AttributeError:
- domain = 'Default'
-
- return cred_client.get_creds_client(client, project_client,
- users_client,
- roles_client,
- domains_client,
- project_domain_name=domain)
-
@classmethod
def get_identity_version(cls):
"""Returns the identity version used by the test class"""
@@ -490,7 +632,7 @@
False)
cls._creds_provider = credentials.get_credentials_provider(
- name=cls.__name__, network_resources=cls.network_resources,
+ name=cls.__name__, network_resources=cls._network_resources,
force_tenant_isolation=force_tenant_isolation)
return cls._creds_provider
@@ -545,62 +687,131 @@
if hasattr(cls, '_creds_provider'):
cls._creds_provider.clear_creds()
+ @staticmethod
+ def _validation_resources_params_from_conf():
+ return dict(
+ keypair=(CONF.validation.auth_method.lower() == "keypair"),
+ floating_ip=(CONF.validation.connect_method.lower() == "floating"),
+ security_group=CONF.validation.security_group,
+ security_group_rules=CONF.validation.security_group_rules,
+ use_neutron=CONF.service_available.neutron,
+ ethertype='IPv' + str(CONF.validation.ip_version_for_ssh),
+ floating_network_id=CONF.network.public_network_id,
+ floating_network_name=CONF.network.floating_network_name)
+
@classmethod
- def set_validation_resources(cls, keypair=None, floating_ip=None,
- security_group=None,
- security_group_rules=None):
- """Specify which ssh server validation resources should be created.
+ def get_class_validation_resources(cls, os_clients):
+ """Provision validation resources according to configuration
- Each of the argument must be set to either None, True or False, with
- None - use default from config (security groups and security group
- rules get created when set to None)
- False - Do not create the validation resource
- True - create the validation resource
+ This is a wrapper around `create_validation_resources` from
+ `tempest.lib.common.validation_resources` that passes parameters from
+ Tempest configuration. Only one instance of class level
+ validation resources is managed by the helper, so if resources
+ were already provisioned before, existing ones will be returned.
- @param keypair
- @param security_group
- @param security_group_rules
- @param floating_ip
+ Resources are returned as a dictionary. They are also scheduled for
+ automatic cleanup during class teardown using
+ `addClassResourceCleanup`.
+
+ If `CONF.validation.run_validation` is False no resource will be
+ provisioned at all.
+
+ @param os_clients: Clients to be used to provision the resources.
"""
if not CONF.validation.run_validation:
return
- if keypair is None:
- keypair = (CONF.validation.auth_method.lower() == "keypair")
+ if os_clients in cls._validation_resources:
+ return cls._validation_resources[os_clients]
- if floating_ip is None:
- floating_ip = (CONF.validation.connect_method.lower() ==
- "floating")
+ if (CONF.validation.ip_version_for_ssh not in (4, 6) and
+ CONF.service_available.neutron):
+ msg = "Invalid IP version %s in ip_version_for_ssh. Use 4 or 6"
+ raise lib_exc.InvalidConfiguration(
+ msg % CONF.validation.ip_version_for_ssh)
- if security_group is None:
- security_group = CONF.validation.security_group
+ resources = vr.create_validation_resources(
+ os_clients,
+ **cls._validation_resources_params_from_conf())
- if security_group_rules is None:
- security_group_rules = CONF.validation.security_group_rules
+ cls.addClassResourceCleanup(
+ vr.clear_validation_resources, os_clients,
+ use_neutron=CONF.service_available.neutron,
+ **resources)
+ cls._validation_resources[os_clients] = resources
+ return resources
- if not cls.validation_resources:
- cls.validation_resources = {
- 'keypair': keypair,
- 'security_group': security_group,
- 'security_group_rules': security_group_rules,
- 'floating_ip': floating_ip}
+ def get_test_validation_resources(self, os_clients):
+ """Returns a dict of validation resources according to configuration
+
+ Initialise a validation resources fixture based on configuration.
+ Start the fixture and return the validation resources.
+
+ If `CONF.validation.run_validation` is False no resource will be
+ provisioned at all.
+
+ @param os_clients: Clients to be used to provision the resources.
+ """
+
+ params = {}
+ # Test will try to use the fixture, so for this to be useful
+ # we must return a fixture. If validation is disabled though
+ # we don't need to provision anything, which is the default
+ # behavior for the fixture.
+ if CONF.validation.run_validation:
+ params = self._validation_resources_params_from_conf()
+
+ validation = self.useFixture(
+ vr.ValidationResourcesFixture(os_clients, **params))
+ return validation.resources
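A hypothetical example of how a test class might consume the two helpers above - class-wide resources provisioned once in `resource_setup`, per-test resources via the fixture - assuming `CONF.validation.run_validation` is enabled::

    class MyValidationTest(test.BaseTestCase):

        credentials = ['primary']

        @classmethod
        def resource_setup(cls):
            super(MyValidationTest, cls).resource_setup()
            # Keypair / floating IP / security group as per configuration;
            # cleanup is scheduled automatically via addClassResourceCleanup.
            cls.validation = cls.get_class_validation_resources(
                cls.os_primary)

        def test_something_needing_ssh(self):
            # Provisioned for this test only and torn down by the fixture
            resources = self.get_test_validation_resources(self.os_primary)
            keypair = resources.get('keypair')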
@classmethod
def set_network_resources(cls, network=False, router=False, subnet=False,
dhcp=False):
"""Specify which network resources should be created
+ The dynamic credentials provider by default provisions network
+ resources for each user/project that is provisioned. This behavior
+ can be altered using this method, which allows tests to define which
+ specific network resources to provision - none if no parameter
+ is specified.
+
+ This method is designed so that only the network resources set on the
+ leaf class are honoured.
+
+ Credentials are provisioned as part of the class setup fixture,
+ during the `setup_credentials` step. For this to be effective this
+ helper must be invoked before super's `setup_credentials` is executed.
+
@param network
@param router
@param subnet
@param dhcp
+
+ Example::
+
+ @classmethod
+ def setup_credentials(cls):
+ # Do not setup network resources for this test
+ cls.set_network_resources()
+ super(MyTest, cls).setup_credentials()
"""
- # network resources should be set only once from callers
+ # If this is invoked after the credentials are setup, it won't take
+ # any effect. To avoid this situation, fail the test in case this was
+ # invoked too late in the test lifecycle.
+ if cls.__setup_credentials_called:
+ raise RuntimeError(
+ "set_network_resources invoked after setup_credentials on the "
+ "super class has been already invoked. For "
+ "set_network_resources to have effect please invoke it before "
+ "the call to super().setup_credentials")
+
+ # Network resources should be set only once from callers
# in order to ensure that even if it's called multiple times in
# a chain of overloaded methods, the attribute is set only
- # in the leaf class
- if not cls.network_resources:
- cls.network_resources = {
+ # in the leaf class.
+ if not cls._network_resources:
+ cls._network_resources = {
'network': network,
'router': router,
'subnet': subnet,
diff --git a/tempest/tests/api/compute/test_base.py b/tempest/tests/api/compute/test_base.py
index 6345728..5024100 100644
--- a/tempest/tests/api/compute/test_base.py
+++ b/tempest/tests/api/compute/test_base.py
@@ -37,14 +37,16 @@
fake_image = mock.Mock(response={'location': image_id})
compute_images_client.create_image.return_value = fake_image
# call the utility method
- image = compute_base.BaseV2ComputeTest.create_image_from_server(
- mock.sentinel.server_id, name='fake-snapshot-name')
+ cleanup_path = 'tempest.test.BaseTestCase.addClassResourceCleanup'
+ with mock.patch(cleanup_path) as mock_cleanup:
+ image = compute_base.BaseV2ComputeTest.create_image_from_server(
+ mock.sentinel.server_id, name='fake-snapshot-name')
self.assertEqual(fake_image, image)
# make our assertions
compute_images_client.create_image.assert_called_once_with(
mock.sentinel.server_id, name='fake-snapshot-name')
- self.assertEqual(1, len(compute_base.BaseV2ComputeTest.images))
- self.assertEqual(image_id, compute_base.BaseV2ComputeTest.images[0])
+ mock_cleanup.assert_called_once()
+ self.assertIn(image_id, mock_cleanup.call_args[0])
@mock.patch.multiple(compute_base.BaseV2ComputeTest,
compute_images_client=mock.DEFAULT,
diff --git a/tempest/tests/cmd/test_run.py b/tempest/tests/cmd/test_run.py
index 7ac347d..0485e14 100644
--- a/tempest/tests/cmd/test_run.py
+++ b/tempest/tests/cmd/test_run.py
@@ -13,6 +13,7 @@
# under the License.
import argparse
+import atexit
import os
import shutil
import subprocess
@@ -25,6 +26,7 @@
from tempest.tests import base
DEVNULL = open(os.devnull, 'wb')
+atexit.register(DEVNULL.close)
class TestTempestRun(base.TestCase):
@@ -38,6 +40,7 @@
setattr(args, "subunit", True)
setattr(args, "parallel", False)
setattr(args, "concurrency", 10)
+ setattr(args, "load_list", '')
options = self.run_cmd._build_options(args)
self.assertEqual(['--subunit',
'--concurrency=10'],
@@ -68,6 +71,34 @@
self.assertEqual('i_am_a_fun_little_regex',
self.run_cmd._build_regex(args))
+ def test__build_whitelist_file(self):
+ args = mock.Mock(spec=argparse.Namespace)
+ setattr(args, 'smoke', False)
+ setattr(args, 'regex', None)
+ self.tests = tempfile.NamedTemporaryFile(
+ prefix='whitelist', delete=False)
+ self.tests.write(b"volume \n compute")
+ self.tests.close()
+ setattr(args, 'whitelist_file', self.tests.name)
+ setattr(args, 'blacklist_file', None)
+ self.assertEqual("volume|compute",
+ self.run_cmd._build_regex(args))
+ os.unlink(self.tests.name)
+
+ def test__build_blacklist_file(self):
+ args = mock.Mock(spec=argparse.Namespace)
+ setattr(args, 'smoke', False)
+ setattr(args, 'regex', None)
+ self.tests = tempfile.NamedTemporaryFile(
+ prefix='blacklist', delete=False)
+ self.tests.write(b"volume \n compute")
+ self.tests.close()
+ setattr(args, 'whitelist_file', None)
+ setattr(args, 'blacklist_file', self.tests.name)
+ self.assertEqual("^((?!compute|volume).)*$",
+ self.run_cmd._build_regex(args))
+ os.unlink(self.tests.name)
+
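The two new cases pin down the regex shapes `_build_regex` produces from list files: a whitelist collapses into an alternation, a blacklist into a negative lookahead. A quick standalone illustration of how those shapes select test ids::

    import re

    whitelist_re = re.compile("volume|compute")
    blacklist_re = re.compile("^((?!compute|volume).)*$")

    # Whitelist: anything mentioning a listed fragment is selected
    assert whitelist_re.search("tempest.api.volume.test_volumes")

    # Blacklist: ids containing a listed fragment are rejected...
    assert not blacklist_re.match("tempest.api.volume.test_volumes")
    # ...while everything else still matches
    assert blacklist_re.match("tempest.api.identity.test_tokens")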
class TestRunReturnCode(base.TestCase):
def setUp(self):
diff --git a/tempest/tests/lib/cli/test_execute.py b/tempest/tests/lib/cli/test_execute.py
index 0130454..c276386 100644
--- a/tempest/tests/lib/cli/test_execute.py
+++ b/tempest/tests/lib/cli/test_execute.py
@@ -91,3 +91,37 @@
self.assertEqual(mock_execute.call_count, 1)
self.assertEqual(mock_execute.call_args[1],
{'prefix': 'env LAC_ALL=C'})
+
+ @mock.patch.object(cli_base, 'execute')
+ def test_execute_with_domain_name(self, mock_execute):
+ cli = cli_base.CLIClient(
+ user_domain_name='default',
+ project_domain_name='default'
+ )
+ cli.glance('action')
+ self.assertEqual(mock_execute.call_count, 1)
+ self.assertIn('--os-user-domain-name default',
+ mock_execute.call_args[0][2])
+ self.assertIn('--os-project-domain-name default',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-user-domain-id',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-project-domain-id',
+ mock_execute.call_args[0][2])
+
+ @mock.patch.object(cli_base, 'execute')
+ def test_execute_with_domain_id(self, mock_execute):
+ cli = cli_base.CLIClient(
+ user_domain_id='default',
+ project_domain_id='default'
+ )
+ cli.glance('action')
+ self.assertEqual(mock_execute.call_count, 1)
+ self.assertIn('--os-user-domain-id default',
+ mock_execute.call_args[0][2])
+ self.assertIn('--os-project-domain-id default',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-user-domain-name',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-project-domain-name',
+ mock_execute.call_args[0][2])
diff --git a/tempest/tests/lib/common/test_api_version_utils.py b/tempest/tests/lib/common/test_api_version_utils.py
index c063556..b99e8d4 100644
--- a/tempest/tests/lib/common/test_api_version_utils.py
+++ b/tempest/tests/lib/common/test_api_version_utils.py
@@ -92,24 +92,106 @@
def test_header_matches(self):
microversion_header_name = 'x-openstack-xyz-api-version'
request_microversion = '2.1'
- test_respose = {microversion_header_name: request_microversion}
+ test_response = {microversion_header_name: request_microversion}
api_version_utils.assert_version_header_matches_request(
- microversion_header_name, request_microversion, test_respose)
+ microversion_header_name, request_microversion, test_response)
def test_header_does_not_match(self):
microversion_header_name = 'x-openstack-xyz-api-version'
request_microversion = '2.1'
- test_respose = {microversion_header_name: '2.2'}
+ test_response = {microversion_header_name: '2.2'}
self.assertRaises(
exceptions.InvalidHTTPResponseHeader,
api_version_utils.assert_version_header_matches_request,
- microversion_header_name, request_microversion, test_respose)
+ microversion_header_name, request_microversion, test_response)
def test_header_not_present(self):
microversion_header_name = 'x-openstack-xyz-api-version'
request_microversion = '2.1'
- test_respose = {}
+ test_response = {}
self.assertRaises(
exceptions.InvalidHTTPResponseHeader,
api_version_utils.assert_version_header_matches_request,
- microversion_header_name, request_microversion, test_respose)
+ microversion_header_name, request_microversion, test_response)
+
+ def test_compare_versions_less_than(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.2'
+ test_response = {microversion_header_name: '2.1'}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "lt"))
+
+ def test_compare_versions_less_than_equal(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.2'
+ test_response = {microversion_header_name: '2.1'}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "le"))
+
+ def test_compare_versions_greater_than_equal(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.1'
+ test_response = {microversion_header_name: '2.2'}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "ge"))
+
+ def test_compare_versions_greater_than(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.1'
+ test_response = {microversion_header_name: '2.2'}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "gt"))
+
+ def test_compare_versions_equal(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.11'
+ test_response = {microversion_header_name: '2.1'}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "eq"))
+
+ def test_compare_versions_not_equal(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.1'
+ test_response = {microversion_header_name: '2.1'}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "ne"))
+
+ def test_compare_versions_with_name_in_microversion(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = 'volume 3.1'
+ test_response = {microversion_header_name: 'volume 3.1'}
+ self.assertTrue(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "eq"))
+
+ def test_compare_versions_invalid_operation(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.1'
+ test_response = {microversion_header_name: '2.1'}
+ self.assertRaises(
+ exceptions.InvalidParam,
+ api_version_utils.compare_version_header_to_response,
+ microversion_header_name, request_microversion, test_response,
+ "foo")
+
+ def test_compare_versions_header_not_present(self):
+ microversion_header_name = 'x-openstack-xyz-api-version'
+ request_microversion = '2.1'
+ test_response = {}
+ self.assertFalse(
+ api_version_utils.compare_version_header_to_response(
+ microversion_header_name, request_microversion, test_response,
+ "eq"))
diff --git a/tempest/tests/lib/common/test_dynamic_creds.py b/tempest/tests/lib/common/test_dynamic_creds.py
index 6aa7a42..ebcf5d1 100644
--- a/tempest/tests/lib/common/test_dynamic_creds.py
+++ b/tempest/tests/lib/common/test_dynamic_creds.py
@@ -40,6 +40,7 @@
from tempest.tests import fake_config
from tempest.tests.lib import fake_http
from tempest.tests.lib import fake_identity
+from tempest.tests.lib.services import registry_fixture
class TestDynamicCredentialProvider(base.TestCase):
@@ -62,6 +63,7 @@
def setUp(self):
super(TestDynamicCredentialProvider, self).setUp()
self.useFixture(fake_config.ConfigFixture())
+ self.useFixture(registry_fixture.RegistryFixture())
self.patchobject(config, 'TempestConfigPrivate',
fake_config.FakePrivate)
self.patchobject(self.token_client_class, 'raw_request',
diff --git a/tempest/tests/lib/common/test_http.py b/tempest/tests/lib/common/test_http.py
new file mode 100644
index 0000000..a292209
--- /dev/null
+++ b/tempest/tests/lib/common/test_http.py
@@ -0,0 +1,68 @@
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.common import http
+from tempest.tests import base
+
+
+class TestClosingHttp(base.TestCase):
+ def setUp(self):
+ super(TestClosingHttp, self).setUp()
+ self.cert_none = "CERT_NONE"
+ self.cert_location = "/etc/ssl/certs/ca-certificates.crt"
+
+ def test_constructor_invalid_ca_certs_and_timeout(self):
+ connection = http.ClosingHttp(
+ disable_ssl_certificate_validation=False,
+ ca_certs=None,
+ timeout=None)
+ for attr in ('cert_reqs', 'ca_certs', 'timeout'):
+ self.assertNotIn(attr, connection.connection_pool_kw)
+
+ def test_constructor_valid_ca_certs(self):
+ cert_required = 'CERT_REQUIRED'
+ connection = http.ClosingHttp(
+ disable_ssl_certificate_validation=False,
+ ca_certs=self.cert_location,
+ timeout=None)
+ self.assertEqual(cert_required,
+ connection.connection_pool_kw['cert_reqs'])
+ self.assertEqual(self.cert_location,
+ connection.connection_pool_kw['ca_certs'])
+ self.assertNotIn('timeout',
+ connection.connection_pool_kw)
+
+ def test_constructor_ssl_cert_validation_disabled(self):
+ connection = http.ClosingHttp(
+ disable_ssl_certificate_validation=True,
+ ca_certs=None,
+ timeout=30)
+ self.assertEqual(self.cert_none,
+ connection.connection_pool_kw['cert_reqs'])
+ self.assertEqual(30,
+ connection.connection_pool_kw['timeout'])
+ self.assertNotIn('ca_certs',
+ connection.connection_pool_kw)
+
+ def test_constructor_ssl_cert_validation_disabled_and_ca_certs(self):
+ connection = http.ClosingHttp(
+ disable_ssl_certificate_validation=True,
+ ca_certs=self.cert_location,
+ timeout=None)
+ self.assertNotIn('timeout',
+ connection.connection_pool_kw)
+ self.assertEqual(self.cert_none,
+ connection.connection_pool_kw['cert_reqs'])
+ self.assertNotIn('ca_certs',
+ connection.connection_pool_kw)
diff --git a/tempest/tests/lib/common/test_preprov_creds.py b/tempest/tests/lib/common/test_preprov_creds.py
index 5402e47..9b10159 100644
--- a/tempest/tests/lib/common/test_preprov_creds.py
+++ b/tempest/tests/lib/common/test_preprov_creds.py
@@ -32,6 +32,7 @@
from tempest.tests import base
from tempest.tests import fake_config
from tempest.tests.lib import fake_identity
+from tempest.tests.lib.services import registry_fixture
class TestPreProvisionedCredentials(base.TestCase):
@@ -92,9 +93,8 @@
return_value=self.test_accounts))
self.useFixture(fixtures.MockPatch(
'os.path.isfile', return_value=True))
- # NOTE(andreaf) Ensure config is loaded so service clients are
- # registered in the registry before tests
- config.service_client_config()
+ # Make sure we leave the registry clean
+ self.useFixture(registry_fixture.RegistryFixture())
def tearDown(self):
super(TestPreProvisionedCredentials, self).tearDown()
diff --git a/tempest/tests/lib/common/test_validation_resources.py b/tempest/tests/lib/common/test_validation_resources.py
new file mode 100644
index 0000000..d5139f4
--- /dev/null
+++ b/tempest/tests/lib/common/test_validation_resources.py
@@ -0,0 +1,344 @@
+# Copyright (c) 2017 IBM Corp.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import fixtures
+import mock
+import testtools
+
+from tempest.lib.common import validation_resources as vr
+from tempest.lib import exceptions as lib_exc
+from tempest.lib.services import clients
+from tempest.tests import base
+from tempest.tests.lib import fake_credentials
+from tempest.tests.lib.services import registry_fixture
+
+FAKE_SECURITY_GROUP = {'security_group': {'id': 'sg_id'}}
+FAKE_KEYPAIR = {'keypair': {'name': 'keypair_name'}}
+FAKE_FIP_NOVA_NET = {'floating_ip': {'ip': '1.2.3.4', 'id': '1234'}}
+FAKE_FIP_NEUTRON = {'floatingip': {'floating_ip_address': '1.2.3.4',
+ 'id': '1234'}}
+
+SERVICES = 'tempest.lib.services'
+SG_CLIENT = (SERVICES + '.%s.security_groups_client.SecurityGroupsClient.%s')
+SGR_CLIENT = (SERVICES + '.%s.security_group_rules_client.'
+ 'SecurityGroupRulesClient.create_security_group_rule')
+KP_CLIENT = (SERVICES + '.compute.keypairs_client.KeyPairsClient.%s')
+FIP_CLIENT = (SERVICES + '.%s.floating_ips_client.FloatingIPsClient.%s')
+
+
+class TestValidationResources(base.TestCase):
+
+ def setUp(self):
+ super(TestValidationResources, self).setUp()
+ self.useFixture(registry_fixture.RegistryFixture())
+ self.mock_sg_compute = self.useFixture(fixtures.MockPatch(
+ SG_CLIENT % ('compute', 'create_security_group'), autospec=True,
+ return_value=FAKE_SECURITY_GROUP))
+ self.mock_sg_network = self.useFixture(fixtures.MockPatch(
+ SG_CLIENT % ('network', 'create_security_group'), autospec=True,
+ return_value=FAKE_SECURITY_GROUP))
+ self.mock_sgr_compute = self.useFixture(fixtures.MockPatch(
+ SGR_CLIENT % 'compute', autospec=True))
+ self.mock_sgr_network = self.useFixture(fixtures.MockPatch(
+ SGR_CLIENT % 'network', autospec=True))
+ self.mock_kp = self.useFixture(fixtures.MockPatch(
+ KP_CLIENT % 'create_keypair', autospec=True,
+ return_value=FAKE_KEYPAIR))
+ self.mock_fip_compute = self.useFixture(fixtures.MockPatch(
+ FIP_CLIENT % ('compute', 'create_floating_ip'), autospec=True,
+ return_value=FAKE_FIP_NOVA_NET))
+ self.mock_fip_network = self.useFixture(fixtures.MockPatch(
+ FIP_CLIENT % ('network', 'create_floatingip'), autospec=True,
+ return_value=FAKE_FIP_NEUTRON))
+ self.os = clients.ServiceClients(
+ fake_credentials.FakeKeystoneV3Credentials(), 'fake_uri')
+
+ def test_create_ssh_security_group_nova_net(self):
+ expected_sg_id = FAKE_SECURITY_GROUP['security_group']['id']
+ sg = vr.create_ssh_security_group(self.os, add_rule=True,
+ use_neutron=False)
+ self.assertEqual(FAKE_SECURITY_GROUP['security_group'], sg)
+ # Neutron clients have not been used
+ self.assertEqual(self.mock_sg_network.mock.call_count, 0)
+ self.assertEqual(self.mock_sgr_network.mock.call_count, 0)
+ # Nova-net clients assertions
+ self.assertGreater(self.mock_sg_compute.mock.call_count, 0)
+ self.assertGreater(self.mock_sgr_compute.mock.call_count, 0)
+ for call in self.mock_sgr_compute.mock.call_args_list[1:]:
+ self.assertIn(expected_sg_id, call[1].values())
+
+ def test_create_ssh_security_group_neutron(self):
+ expected_sg_id = FAKE_SECURITY_GROUP['security_group']['id']
+ expected_ethertype = 'fake_ethertype'
+ sg = vr.create_ssh_security_group(self.os, add_rule=True,
+ use_neutron=True,
+ ethertype=expected_ethertype)
+ self.assertEqual(FAKE_SECURITY_GROUP['security_group'], sg)
+ # Nova-net clients have not been used
+ self.assertEqual(self.mock_sg_compute.mock.call_count, 0)
+ self.assertEqual(self.mock_sgr_compute.mock.call_count, 0)
+        # Neutron clients assertions
+ self.assertGreater(self.mock_sg_network.mock.call_count, 0)
+ self.assertGreater(self.mock_sgr_network.mock.call_count, 0)
+ # Check SG ID and ethertype are passed down to rules
+ for call in self.mock_sgr_network.mock.call_args_list[1:]:
+ self.assertIn(expected_sg_id, call[1].values())
+ self.assertIn(expected_ethertype, call[1].values())
+
+ def test_create_ssh_security_no_rules(self):
+ sg = vr.create_ssh_security_group(self.os, add_rule=False)
+ self.assertEqual(FAKE_SECURITY_GROUP['security_group'], sg)
+ # SG Rules clients have not been used
+ self.assertEqual(self.mock_sgr_compute.mock.call_count, 0)
+ self.assertEqual(self.mock_sgr_network.mock.call_count, 0)
+
+ @mock.patch.object(vr, 'create_ssh_security_group',
+ return_value=FAKE_SECURITY_GROUP['security_group'])
+ def test_create_validation_resources_nova_net(self, mock_create_sg):
+ expected_floating_network_id = 'my_fni'
+ expected_floating_network_name = 'my_fnn'
+ resources = vr.create_validation_resources(
+ self.os, keypair=True, floating_ip=True, security_group=True,
+ security_group_rules=True, ethertype='IPv6', use_neutron=False,
+ floating_network_id=expected_floating_network_id,
+ floating_network_name=expected_floating_network_name)
+ # Keypair calls
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ # Floating IP calls
+ self.assertGreater(self.mock_fip_compute.mock.call_count, 0)
+ for call in self.mock_fip_compute.mock.call_args_list[1:]:
+ self.assertIn(expected_floating_network_name, call[1].values())
+ self.assertNotIn(expected_floating_network_id, call[1].values())
+ self.assertEqual(self.mock_fip_network.mock.call_count, 0)
+ # SG calls
+ mock_create_sg.assert_called_once()
+ # Resources
+ for resource in ['keypair', 'floating_ip', 'security_group']:
+ self.assertIn(resource, resources)
+ self.assertEqual(FAKE_KEYPAIR['keypair'], resources['keypair'])
+ self.assertEqual(FAKE_SECURITY_GROUP['security_group'],
+ resources['security_group'])
+ self.assertEqual(FAKE_FIP_NOVA_NET['floating_ip'],
+ resources['floating_ip'])
+
+ @mock.patch.object(vr, 'create_ssh_security_group',
+ return_value=FAKE_SECURITY_GROUP['security_group'])
+ def test_create_validation_resources_neutron(self, mock_create_sg):
+ expected_floating_network_id = 'my_fni'
+ expected_floating_network_name = 'my_fnn'
+ resources = vr.create_validation_resources(
+ self.os, keypair=True, floating_ip=True, security_group=True,
+ security_group_rules=True, ethertype='IPv6', use_neutron=True,
+ floating_network_id=expected_floating_network_id,
+ floating_network_name=expected_floating_network_name)
+ # Keypair calls
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ # Floating IP calls
+ self.assertEqual(self.mock_fip_compute.mock.call_count, 0)
+ self.assertGreater(self.mock_fip_network.mock.call_count, 0)
+ for call in self.mock_fip_compute.mock.call_args_list[1:]:
+ self.assertIn(expected_floating_network_id, call[1].values())
+ self.assertNotIn(expected_floating_network_name, call[1].values())
+ # SG calls
+ mock_create_sg.assert_called_once()
+ # Resources
+ for resource in ['keypair', 'floating_ip', 'security_group']:
+ self.assertIn(resource, resources)
+ self.assertEqual(FAKE_KEYPAIR['keypair'], resources['keypair'])
+ self.assertEqual(FAKE_SECURITY_GROUP['security_group'],
+ resources['security_group'])
+ self.assertIn('ip', resources['floating_ip'])
+ self.assertEqual(resources['floating_ip']['ip'],
+ FAKE_FIP_NEUTRON['floatingip']['floating_ip_address'])
+ self.assertEqual(resources['floating_ip']['id'],
+ FAKE_FIP_NEUTRON['floatingip']['id'])
+
+
+class TestClearValidationResourcesFixture(base.TestCase):
+
+ def setUp(self):
+ super(TestClearValidationResourcesFixture, self).setUp()
+ self.useFixture(registry_fixture.RegistryFixture())
+ self.mock_sg_compute = self.useFixture(fixtures.MockPatch(
+ SG_CLIENT % ('compute', 'delete_security_group'), autospec=True))
+ self.mock_sg_network = self.useFixture(fixtures.MockPatch(
+ SG_CLIENT % ('network', 'delete_security_group'), autospec=True))
+ self.mock_sg_wait_compute = self.useFixture(fixtures.MockPatch(
+ SG_CLIENT % ('compute', 'wait_for_resource_deletion'),
+ autospec=True))
+ self.mock_sg_wait_network = self.useFixture(fixtures.MockPatch(
+ SG_CLIENT % ('network', 'wait_for_resource_deletion'),
+ autospec=True))
+ self.mock_kp = self.useFixture(fixtures.MockPatch(
+ KP_CLIENT % 'delete_keypair', autospec=True))
+ self.mock_fip_compute = self.useFixture(fixtures.MockPatch(
+ FIP_CLIENT % ('compute', 'delete_floating_ip'), autospec=True))
+ self.mock_fip_network = self.useFixture(fixtures.MockPatch(
+ FIP_CLIENT % ('network', 'delete_floatingip'), autospec=True))
+ self.os = clients.ServiceClients(
+ fake_credentials.FakeKeystoneV3Credentials(), 'fake_uri')
+
+ def test_clear_validation_resources_nova_net(self):
+ vr.clear_validation_resources(
+ self.os,
+ floating_ip=FAKE_FIP_NOVA_NET['floating_ip'],
+ security_group=FAKE_SECURITY_GROUP['security_group'],
+ keypair=FAKE_KEYPAIR['keypair'],
+ use_neutron=False)
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ for call in self.mock_kp.mock.call_args_list[1:]:
+ self.assertIn(FAKE_KEYPAIR['keypair']['name'], call[1].values())
+ self.assertGreater(self.mock_sg_compute.mock.call_count, 0)
+ for call in self.mock_sg_compute.mock.call_args_list[1:]:
+ self.assertIn(FAKE_SECURITY_GROUP['security_group']['id'],
+ call[1].values())
+ self.assertGreater(self.mock_sg_wait_compute.mock.call_count, 0)
+ for call in self.mock_sg_wait_compute.mock.call_args_list[1:]:
+ self.assertIn(FAKE_SECURITY_GROUP['security_group']['id'],
+ call[1].values())
+ self.assertEqual(self.mock_sg_network.mock.call_count, 0)
+ self.assertEqual(self.mock_sg_wait_network.mock.call_count, 0)
+ self.assertGreater(self.mock_fip_compute.mock.call_count, 0)
+ for call in self.mock_fip_compute.mock.call_args_list[1:]:
+ self.assertIn(FAKE_FIP_NOVA_NET['floating_ip']['id'],
+ call[1].values())
+ self.assertEqual(self.mock_fip_network.mock.call_count, 0)
+
+ def test_clear_validation_resources_neutron(self):
+ vr.clear_validation_resources(
+ self.os,
+ floating_ip=FAKE_FIP_NEUTRON['floatingip'],
+ security_group=FAKE_SECURITY_GROUP['security_group'],
+ keypair=FAKE_KEYPAIR['keypair'],
+ use_neutron=True)
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ for call in self.mock_kp.mock.call_args_list[1:]:
+ self.assertIn(FAKE_KEYPAIR['keypair']['name'], call[1].values())
+ self.assertGreater(self.mock_sg_network.mock.call_count, 0)
+ for call in self.mock_sg_network.mock.call_args_list[1:]:
+ self.assertIn(FAKE_SECURITY_GROUP['security_group']['id'],
+ call[1].values())
+ self.assertGreater(self.mock_sg_wait_network.mock.call_count, 0)
+ for call in self.mock_sg_wait_network.mock.call_args_list[1:]:
+ self.assertIn(FAKE_SECURITY_GROUP['security_group']['id'],
+ call[1].values())
+ self.assertEqual(self.mock_sg_compute.mock.call_count, 0)
+ self.assertEqual(self.mock_sg_wait_compute.mock.call_count, 0)
+ self.assertGreater(self.mock_fip_network.mock.call_count, 0)
+ for call in self.mock_fip_network.mock.call_args_list[1:]:
+ self.assertIn(FAKE_FIP_NEUTRON['floatingip']['id'],
+ call[1].values())
+ self.assertEqual(self.mock_fip_compute.mock.call_count, 0)
+
+ def test_clear_validation_resources_exceptions(self):
+ # Test that even with exceptions all cleanups are invoked and that only
+ # the first exception is reported.
+        # NOTE(andreaf) There's no way of knowing which exception is going to
+        # be raised first unless we enforce which resource is cleared first,
+        # which is not really interesting but also not harmful; the keypair
+        # is cleared first here.
+ self.mock_kp.mock.side_effect = Exception('keypair exception')
+ self.mock_sg_network.mock.side_effect = Exception('sg exception')
+ self.mock_fip_network.mock.side_effect = Exception('fip exception')
+ with testtools.ExpectedException(Exception, value_re='keypair'):
+ vr.clear_validation_resources(
+ self.os,
+ floating_ip=FAKE_FIP_NEUTRON['floatingip'],
+ security_group=FAKE_SECURITY_GROUP['security_group'],
+ keypair=FAKE_KEYPAIR['keypair'],
+ use_neutron=True)
+        # Client calls are still made, but not the wait call
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ self.assertGreater(self.mock_sg_network.mock.call_count, 0)
+ self.assertGreater(self.mock_fip_network.mock.call_count, 0)
+
+ def test_clear_validation_resources_wait_not_found_wait(self):
+ # Test that a not found on wait is not an exception
+ self.mock_sg_wait_network.mock.side_effect = lib_exc.NotFound('yay')
+ vr.clear_validation_resources(
+ self.os,
+ floating_ip=FAKE_FIP_NEUTRON['floatingip'],
+ security_group=FAKE_SECURITY_GROUP['security_group'],
+ keypair=FAKE_KEYPAIR['keypair'],
+ use_neutron=True)
+        # Client calls are still made, including the wait call
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ self.assertGreater(self.mock_sg_network.mock.call_count, 0)
+ self.assertGreater(self.mock_sg_wait_network.mock.call_count, 0)
+ self.assertGreater(self.mock_fip_network.mock.call_count, 0)
+
+ def test_clear_validation_resources_wait_not_found_delete(self):
+ # Test that a not found on delete is not an exception
+ self.mock_kp.mock.side_effect = lib_exc.NotFound('yay')
+ self.mock_sg_network.mock.side_effect = lib_exc.NotFound('yay')
+ self.mock_fip_network.mock.side_effect = lib_exc.NotFound('yay')
+ vr.clear_validation_resources(
+ self.os,
+ floating_ip=FAKE_FIP_NEUTRON['floatingip'],
+ security_group=FAKE_SECURITY_GROUP['security_group'],
+ keypair=FAKE_KEYPAIR['keypair'],
+ use_neutron=True)
+        # Client calls are still made, but not the wait call
+ self.assertGreater(self.mock_kp.mock.call_count, 0)
+ self.assertGreater(self.mock_sg_network.mock.call_count, 0)
+ self.assertEqual(self.mock_sg_wait_network.mock.call_count, 0)
+ self.assertGreater(self.mock_fip_network.mock.call_count, 0)
+
+
+class TestValidationResourcesFixture(base.TestCase):
+
+ @mock.patch.object(vr, 'create_validation_resources', autospec=True)
+ def test_use_fixture(self, mock_vr):
+ exp_vr = dict(keypair='keypair',
+ floating_ip='floating_ip',
+ security_group='security_group')
+ mock_vr.return_value = exp_vr
+ exp_clients = 'clients'
+ exp_parameters = dict(keypair=True, floating_ip=True,
+ security_group=True, security_group_rules=True,
+ ethertype='v6', use_neutron=True,
+ floating_network_id='fnid',
+ floating_network_name='fnname')
+ # First mock cleanup
+ self.useFixture(fixtures.MockPatchObject(
+ vr, 'clear_validation_resources', autospec=True))
+ # And then use vr fixture, so when the fixture is cleaned-up, the mock
+ # is still there
+ vr_fixture = self.useFixture(vr.ValidationResourcesFixture(
+ exp_clients, **exp_parameters))
+ # Assert vr have been provisioned
+ mock_vr.assert_called_once_with(exp_clients, **exp_parameters)
+ # Assert vr have been setup in the fixture
+ self.assertEqual(exp_vr, vr_fixture.resources)
+
+ @mock.patch.object(vr, 'clear_validation_resources', autospec=True)
+ @mock.patch.object(vr, 'create_validation_resources', autospec=True)
+ def test_use_fixture_context(self, mock_vr, mock_clear):
+ exp_vr = dict(keypair='keypair',
+ floating_ip='floating_ip',
+ security_group='security_group')
+ mock_vr.return_value = exp_vr
+ exp_clients = 'clients'
+ exp_parameters = dict(keypair=True, floating_ip=True,
+ security_group=True, security_group_rules=True,
+ ethertype='v6', use_neutron=True,
+ floating_network_id='fnid',
+ floating_network_name='fnname')
+ with vr.ValidationResourcesFixture(exp_clients,
+ **exp_parameters) as vr_fixture:
+ # Assert vr have been provisioned
+ mock_vr.assert_called_once_with(exp_clients, **exp_parameters)
+ # Assert vr have been setup in the fixture
+ self.assertEqual(exp_vr, vr_fixture.resources)
+ # After context manager is closed, clear is invoked
+ exp_vr['use_neutron'] = exp_parameters['use_neutron']
+ mock_clear.assert_called_once_with(exp_clients, **exp_vr)
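
A minimal sketch, mirroring the fixture usage exercised above, of how validation resources could be provisioned and automatically cleared via the context-manager form; the client manager argument and the floating network id/name are placeholders, not values from this change.

    from tempest.lib.common import validation_resources as vr

    def keys_of_provisioned_resources(clients):
        # 'clients' is assumed to be a tempest.lib ServiceClients manager
        with vr.ValidationResourcesFixture(
                clients, keypair=True, floating_ip=True, security_group=True,
                security_group_rules=True, ethertype='IPv4', use_neutron=True,
                floating_network_id='fnid',
                floating_network_name='fnname') as fixture:
            # fixture.resources holds the keypair, security group and
            # floating IP while the context is open; they are cleared via
            # clear_validation_resources when the context exits.
            return sorted(fixture.resources)
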
diff --git a/tempest/tests/lib/common/utils/linux/test_remote_client.py b/tempest/tests/lib/common/utils/linux/test_remote_client.py
index cf312f4..7a21a5f 100644
--- a/tempest/tests/lib/common/utils/linux/test_remote_client.py
+++ b/tempest/tests/lib/common/utils/linux/test_remote_client.py
@@ -34,7 +34,7 @@
client = remote_client.RemoteClient('192.168.1.10', 'username')
client.exec_command('ls')
mock_ssh_exec_command.assert_called_once_with(
- 'set -eu -o pipefail; PATH=$$PATH:/sbin; ls')
+ 'set -eu -o pipefail; PATH=$PATH:/sbin; ls')
@mock.patch.object(ssh.Client, 'test_connection_auth')
def test_validate_authentication(self, mock_test_connection_auth):
diff --git a/tempest/tests/lib/services/object_storage/test_object_client.py b/tempest/tests/lib/services/object_storage/test_object_client.py
new file mode 100644
index 0000000..a16d1d7
--- /dev/null
+++ b/tempest/tests/lib/services/object_storage/test_object_client.py
@@ -0,0 +1,108 @@
+# Copyright 2016 IBM Corp.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+import mock
+
+from tempest.lib import exceptions
+from tempest.lib.services.object_storage import object_client
+from tempest.tests import base
+from tempest.tests.lib import fake_auth_provider
+
+
+class TestObjectClient(base.TestCase):
+
+ def setUp(self):
+ super(TestObjectClient, self).setUp()
+ self.fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.url = self.fake_auth.base_url(None)
+ self.object_client = object_client.ObjectClient(self.fake_auth,
+ 'swift', 'region1')
+
+ @mock.patch.object(object_client, '_create_connection')
+ def test_create_object_continue_no_data(self, mock_poc):
+ self._validate_create_object_continue(None, mock_poc)
+
+ @mock.patch.object(object_client, '_create_connection')
+ def test_create_object_continue_with_data(self, mock_poc):
+ self._validate_create_object_continue('hello', mock_poc)
+
+ @mock.patch.object(object_client, '_create_connection')
+ def test_create_continue_with_no_continue_received(self, mock_poc):
+ self._validate_create_object_continue('hello', mock_poc,
+ initial_status=201)
+
+ def _validate_create_object_continue(self, req_data,
+ mock_poc, initial_status=100):
+
+ expected_hdrs = {
+ 'X-Auth-Token': self.fake_auth.get_token(),
+ 'content-length': 0 if req_data is None else len(req_data),
+ 'Expect': '100-continue'}
+
+ # Setup the Mocks prior to invoking the object creation
+ mock_resp_cls = mock.Mock()
+ mock_resp_cls._read_status.return_value = ("1", initial_status, "OK")
+
+ mock_poc.return_value.response_class.return_value = mock_resp_cls
+
+ # This is the final expected return value
+ mock_poc.return_value.getresponse.return_value.status = 201
+ mock_poc.return_value.getresponse.return_value.reason = 'OK'
+
+ # Call method to PUT object using expect:100-continue
+ cnt = "container1"
+ obj = "object1"
+ path = "/%s/%s" % (cnt, obj)
+
+ # If the expected initial status is not 100, then an exception
+ # should be thrown and the connection closed
+        if initial_status == 100:
+ status, reason = \
+ self.object_client.create_object_continue(cnt, obj, req_data)
+ else:
+ self.assertRaises(exceptions.UnexpectedResponseCode,
+ self.object_client.create_object_continue, cnt,
+ obj, req_data)
+ mock_poc.return_value.close.assert_called_once_with()
+
+ # Verify that putrequest is called 1 time with the appropriate values
+ mock_poc.return_value.putrequest.assert_called_once_with('PUT', path)
+
+ # Verify that headers were written, including "Expect:100-continue"
+ calls = []
+
+ for header, value in expected_hdrs.items():
+ calls.append(mock.call(header, value))
+
+ mock_poc.return_value.putheader.assert_has_calls(calls, False)
+ mock_poc.return_value.endheaders.assert_called_once_with()
+
+ # The following steps are only taken if the initial status is 100
+        if initial_status == 100:
+ # Verify that the method returned what it was supposed to
+ self.assertEqual(status, 201)
+
+ # Verify that _safe_read was called once to remove the CRLF
+ # after the 100 response
+ mock_rc = mock_poc.return_value.response_class.return_value
+ mock_rc._safe_read.assert_called_once_with(2)
+
+ # Verify the actual data was written via send
+ mock_poc.return_value.send.assert_called_once_with(req_data)
+
+            # Verify that the getresponse method was called to receive
+            # the final response
+ mock_poc.return_value.getresponse.assert_called_once_with()
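
An illustrative sketch (not the object client's actual implementation) of the PUT with Expect: 100-continue flow that the mocks above exercise, assuming a connection object that behaves like http.client.HTTPConnection: headers are written first, the body is sent only if the server answers with an interim 100 status, and otherwise the connection is closed and an error raised (the client under test raises exceptions.UnexpectedResponseCode at that point).

    def put_expect_continue(conn, path, headers, data):
        # 'conn' is assumed to quack like http.client.HTTPConnection
        conn.putrequest('PUT', path)
        for name, value in headers.items():
            conn.putheader(name, value)
        conn.endheaders()
        # Read the interim status line directly off the connection
        resp = conn.response_class(conn.sock, method='PUT')
        _version, status, _reason = resp._read_status()
        if status != 100:
            # The real client raises exceptions.UnexpectedResponseCode here
            conn.close()
            raise ValueError('expected an interim 100, got %s' % status)
        resp._safe_read(2)  # consume the CRLF after the interim response
        if data is not None:
            conn.send(data)
        final = conn.getresponse()
        return final.status, final.reason
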
diff --git a/tempest/tests/lib/services/registry_fixture.py b/tempest/tests/lib/services/registry_fixture.py
new file mode 100644
index 0000000..1da2112
--- /dev/null
+++ b/tempest/tests/lib/services/registry_fixture.py
@@ -0,0 +1,65 @@
+# Copyright 2017 IBM Corp.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import fixtures
+
+from tempest.lib.services import clients
+
+
+class RegistryFixture(fixtures.Fixture):
+ """A fixture to setup a test client registry
+
+ The clients registry is a singleton. In Tempest it's filled with
+ content from configuration. When testing Tempest lib classes without
+    configuration it's handy to have the registry set up to be able to access
+ service client factories.
+
+ This fixture sets up the registry using a fake plugin, which includes all
+ services specified at __init__ time. Any other plugin in the registry
+ is removed at setUp time. The fake plugin is removed from the registry
+ on cleanup.
+ """
+
+ PLUGIN_NAME = 'fake_plugin_for_test'
+
+ def __init__(self):
+ """Initialise the registry fixture"""
+ self.services = set(['compute', 'identity.v2', 'identity.v3',
+ 'image.v1', 'image.v2', 'network', 'volume.v1',
+ 'volume.v2', 'volume.v3', 'object-storage'])
+
+ def _setUp(self):
+ # Cleanup the registry
+ registry = clients.ClientsRegistry()
+ registry._service_clients = {}
+ # Prepare the clients for registration
+ all_clients = []
+ service_clients = clients.tempest_modules()
+ for sc in self.services:
+ sc_module = service_clients[sc]
+ sc_unversioned = sc.split('.')[0]
+ sc_name = sc.replace('.', '_').replace('-', '_')
+ # Pass the bare minimum params to satisfy the clients interface
+ service_client_data = dict(
+ name=sc_name, service_version=sc, service=sc_unversioned,
+ module_path=sc_module.__name__,
+ client_names=sc_module.__all__)
+ all_clients.append(service_client_data)
+ registry.register_service_client(self.PLUGIN_NAME, all_clients)
+
+ def _cleanup():
+ del registry._service_clients[self.PLUGIN_NAME]
+
+ self.addCleanup(_cleanup)
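
A minimal sketch (hypothetical test class name) of how the fixture is consumed elsewhere in this change: installing it in setUp resets the clients registry to the fake plugin for the duration of the test, so service client factories resolve without loading any configuration.

    from tempest.tests import base
    from tempest.tests.lib.services import registry_fixture


    class TestWithCleanRegistry(base.TestCase):

        def setUp(self):
            super(TestWithCleanRegistry, self).setUp()
            # Make sure we leave the registry clean
            self.useFixture(registry_fixture.RegistryFixture())
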
diff --git a/tempest/tests/lib/services/test_clients.py b/tempest/tests/lib/services/test_clients.py
index 6d0f27a..43fd88f 100644
--- a/tempest/tests/lib/services/test_clients.py
+++ b/tempest/tests/lib/services/test_clients.py
@@ -189,9 +189,7 @@
def setUp(self):
super(TestServiceClients, self).setUp()
self.useFixture(fixtures.MockPatch(
- 'tempest.lib.services.clients.tempest_modules', return_value={}))
- self.useFixture(fixtures.MockPatch(
- 'tempest.lib.services.clients._tempest_internal_modules',
+ 'tempest.lib.services.clients.tempest_modules',
return_value=set(['fake_service1'])))
def test___init___creds_v2_uri(self):
@@ -416,6 +414,7 @@
_manager = self._get_manager()
duplicate_service = 'fake_service1'
expected_error = '.*' + duplicate_service
+ _manager._registered_services = [duplicate_service]
with testtools.ExpectedException(
exceptions.ServiceClientRegistrationException, expected_error):
_manager.register_service_client_module(
diff --git a/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py b/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py
index 5ac5c08..c2784b2 100644
--- a/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py
+++ b/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py
@@ -93,7 +93,8 @@
bytes_body,
group_snapshot_id="3fbbcccf-d058-4502-8844-6feeffdf4cb5")
- def _test_list_group_snapshots(self, bytes_body=False, detail=False):
+ def _test_list_group_snapshots(self, detail=False, bytes_body=False,
+ mock_args='group_snapshots', **params):
resp_body = []
if detail:
resp_body = self.FAKE_LIST_GROUP_SNAPSHOTS
@@ -111,8 +112,10 @@
self.client.list_group_snapshots,
'tempest.lib.common.rest_client.RestClient.get',
resp_body,
- bytes_body,
- detail=detail)
+ to_utf=bytes_body,
+ mock_args=[mock_args],
+ detail=detail,
+ **params)
def test_create_group_snapshot_with_str_body(self):
self._test_create_group_snapshot()
@@ -132,6 +135,25 @@
def test_list_group_snapshots_with_bytes_body(self):
self._test_list_group_snapshots(bytes_body=True)
+ def test_list_group_snapshots_with_detail_with_str_body(self):
+ mock_args = "group_snapshots/detail"
+ self._test_list_group_snapshots(detail=True, mock_args=mock_args)
+
+ def test_list_group_snapshots_with_detail_with_bytes_body(self):
+ mock_args = "group_snapshots/detail"
+ self._test_list_group_snapshots(detail=True, bytes_body=True,
+ mock_args=mock_args)
+
+ def test_list_group_snapshots_with_params(self):
+        # Run the test separately for each param, to avoid an assertion
+        # error resulting from randomized param order.
+ mock_args = 'group_snapshots?sort_key=name'
+ self._test_list_group_snapshots(mock_args=mock_args, sort_key='name')
+
+ mock_args = 'group_snapshots/detail?limit=10'
+ self._test_list_group_snapshots(detail=True, bytes_body=True,
+ mock_args=mock_args, limit=10)
+
def test_delete_group_snapshot(self):
self.check_service_client_function(
self.client.delete_group_snapshot,
@@ -139,3 +161,12 @@
{},
group_snapshot_id='0e701ab8-1bec-4b9f-b026-a7ba4af13578',
status=202)
+
+ def test_reset_group_snapshot_status(self):
+ self.check_service_client_function(
+ self.client.reset_group_snapshot_status,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ {},
+ status=202,
+ group_snapshot_id='0e701ab8-1bec-4b9f-b026-a7ba4af13578',
+ status_to_set='error')
diff --git a/tempest/tests/lib/services/volume/v3/test_groups_client.py b/tempest/tests/lib/services/volume/v3/test_groups_client.py
index 0884e5a..918e958 100644
--- a/tempest/tests/lib/services/volume/v3/test_groups_client.py
+++ b/tempest/tests/lib/services/volume/v3/test_groups_client.py
@@ -184,3 +184,12 @@
group_id='0e701ab8-1bec-4b9f-b026-a7ba4af13578',
status=202,
**self.FAKE_UPDATE_GROUP['group'])
+
+ def test_reset_group_status(self):
+ self.check_service_client_function(
+ self.client.reset_group_status,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ {},
+ status=202,
+ group_id='0e701ab8-1bec-4b9f-b026-a7ba4af13578',
+ status_to_set='error')
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index a16da1c..37fe646 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -12,11 +12,11 @@
# License for the specific language governing permissions and limitations
# under the License.
-from io import StringIO
import socket
import mock
import six
+from six import StringIO
import testtools
from tempest.lib.common import ssh
diff --git a/tempest/tests/services/__init__.py b/tempest/tests/services/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/tests/services/__init__.py
+++ /dev/null
diff --git a/tempest/tests/services/object_storage/__init__.py b/tempest/tests/services/object_storage/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/tests/services/object_storage/__init__.py
+++ /dev/null
diff --git a/tempest/tests/services/object_storage/test_object_client.py b/tempest/tests/services/object_storage/test_object_client.py
index 748614c..86535f9 100644
--- a/tempest/tests/services/object_storage/test_object_client.py
+++ b/tempest/tests/services/object_storage/test_object_client.py
@@ -31,15 +31,15 @@
self.object_client = object_client.ObjectClient(self.fake_auth,
'swift', 'region1')
- @mock.patch.object(object_client, 'create_connection')
+ @mock.patch.object(object_client, '_create_connection')
def test_create_object_continue_no_data(self, mock_poc):
self._validate_create_object_continue(None, mock_poc)
- @mock.patch.object(object_client, 'create_connection')
+ @mock.patch.object(object_client, '_create_connection')
def test_create_object_continue_with_data(self, mock_poc):
self._validate_create_object_continue('hello', mock_poc)
- @mock.patch.object(object_client, 'create_connection')
+ @mock.patch.object(object_client, '_create_connection')
def test_create_continue_with_no_continue_received(self, mock_poc):
self._validate_create_object_continue('hello', mock_poc,
initial_status=201)
diff --git a/tempest/tests/test_base_test.py b/tempest/tests/test_base_test.py
index 6c6f612..3ece11d 100644
--- a/tempest/tests/test_base_test.py
+++ b/tempest/tests/test_base_test.py
@@ -13,10 +13,10 @@
# under the License.
import mock
+from oslo_config import cfg
from tempest import clients
from tempest.common import credentials_factory as credentials
-from tempest import config
from tempest.lib.common import fixed_network
from tempest import test
from tempest.tests import base
@@ -28,8 +28,9 @@
super(TestBaseTestCase, self).setUp()
self.useFixture(fake_config.ConfigFixture())
self.fixed_network_name = 'fixed-net'
- config.CONF.compute.fixed_network_name = self.fixed_network_name
- config.CONF.service_available.neutron = True
+ cfg.CONF.set_default('fixed_network_name', self.fixed_network_name,
+ 'compute')
+ cfg.CONF.set_default('neutron', True, 'service_available')
@mock.patch.object(test.BaseTestCase, 'get_client_manager')
@mock.patch.object(test.BaseTestCase, '_get_credentials_provider')
@@ -56,7 +57,7 @@
def test_get_tenant_network_with_nova_net(self, mock_man, mock_iaa,
mock_giv, mock_gtn, mock_gcp,
mock_gcm):
- config.CONF.service_available.neutron = False
+ cfg.CONF.set_default('neutron', False, 'service_available')
mock_prov = mock.Mock()
mock_admin_man = mock.Mock()
mock_iaa.return_value = True
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index 2fc84dc..6018441 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -16,7 +16,9 @@
from oslo_config import cfg
import testtools
+from tempest.common import utils
from tempest import config
+from tempest import exceptions
from tempest.lib.common.utils import data_utils
from tempest import test
from tempest.tests import base
@@ -31,6 +33,10 @@
fake_config.FakePrivate)
+# NOTE: This test module is for tempest.test.idempotent_id.
+# After all projects switch to using decorators.idempotent_id,
+# we can remove tempest.test.idempotent_id as well as this
+# test module.
class TestIdempotentIdDecorator(BaseDecoratorsTest):
def _test_helper(self, _id, **decorator_args):
@@ -71,7 +77,7 @@
class TestServicesDecorator(BaseDecoratorsTest):
def _test_services_helper(self, *decorator_args):
class TestFoo(test.BaseTestCase):
- @test.services(*decorator_args)
+ @utils.services(*decorator_args)
def test_bar(self):
return 0
@@ -90,7 +96,7 @@
self._test_services_helper('compute', 'compute')
def test_services_decorator_with_invalid_service(self):
- self.assertRaises(test.InvalidServiceTag,
+ self.assertRaises(exceptions.InvalidServiceTag,
self._test_services_helper, 'compute',
'bad_service')
@@ -102,11 +108,11 @@
'volume')
def test_services_list(self):
- service_list = test.get_service_list()
+ service_list = utils.get_service_list()
for service in service_list:
try:
self._test_services_helper(service)
- except test.InvalidServiceTag:
+ except exceptions.InvalidServiceTag:
self.fail('%s is not listed in the valid service tag list'
% service)
except KeyError:
@@ -133,7 +139,7 @@
def _test_requires_ext_helper(self, expected_to_skip=True,
**decorator_args):
class TestFoo(test.BaseTestCase):
- @test.requires_ext(**decorator_args)
+ @utils.requires_ext(**decorator_args)
def test_bar(self):
return 0
@@ -170,96 +176,3 @@
self._test_requires_ext_helper,
extension='enabled_ext',
service='bad_service')
-
-
-class TestConfigDecorators(BaseDecoratorsTest):
- def setUp(self):
- super(TestConfigDecorators, self).setUp()
- cfg.CONF.set_default('nova', True, 'service_available')
- cfg.CONF.set_default('glance', False, 'service_available')
-
- def _assert_skip_message(self, func, skip_msg):
- try:
- func()
- self.fail()
- except testtools.TestCase.skipException as skip_exc:
- self.assertEqual(skip_exc.args[0], skip_msg)
-
- def _test_skip_unless_config(self, expected_to_skip=True, *decorator_args):
-
- class TestFoo(test.BaseTestCase):
- @config.skip_unless_config(*decorator_args)
- def test_bar(self):
- return 0
-
- t = TestFoo('test_bar')
- if expected_to_skip:
- self.assertRaises(testtools.TestCase.skipException, t.test_bar)
- if (len(decorator_args) >= 3):
- # decorator_args[2]: skip message specified
- self._assert_skip_message(t.test_bar, decorator_args[2])
- else:
- try:
- self.assertEqual(t.test_bar(), 0)
- except testtools.TestCase.skipException:
- # We caught a skipException but we didn't expect to skip
- # this test so raise a hard test failure instead.
- raise testtools.TestCase.failureException(
- "Not supposed to skip")
-
- def _test_skip_if_config(self, expected_to_skip=True,
- *decorator_args):
-
- class TestFoo(test.BaseTestCase):
- @config.skip_if_config(*decorator_args)
- def test_bar(self):
- return 0
-
- t = TestFoo('test_bar')
- if expected_to_skip:
- self.assertRaises(testtools.TestCase.skipException, t.test_bar)
- if (len(decorator_args) >= 3):
- # decorator_args[2]: skip message specified
- self._assert_skip_message(t.test_bar, decorator_args[2])
- else:
- try:
- self.assertEqual(t.test_bar(), 0)
- except testtools.TestCase.skipException:
- # We caught a skipException but we didn't expect to skip
- # this test so raise a hard test failure instead.
- raise testtools.TestCase.failureException(
- "Not supposed to skip")
-
- def test_skip_unless_no_group(self):
- self._test_skip_unless_config(True, 'fake_group', 'an_option')
-
- def test_skip_unless_no_option(self):
- self._test_skip_unless_config(True, 'service_available',
- 'not_an_option')
-
- def test_skip_unless_false_option(self):
- self._test_skip_unless_config(True, 'service_available', 'glance')
-
- def test_skip_unless_false_option_msg(self):
- self._test_skip_unless_config(True, 'service_available', 'glance',
- 'skip message')
-
- def test_skip_unless_true_option(self):
- self._test_skip_unless_config(False,
- 'service_available', 'nova')
-
- def test_skip_if_no_group(self):
- self._test_skip_if_config(False, 'fake_group', 'an_option')
-
- def test_skip_if_no_option(self):
- self._test_skip_if_config(False, 'service_available', 'not_an_option')
-
- def test_skip_if_false_option(self):
- self._test_skip_if_config(False, 'service_available', 'glance')
-
- def test_skip_if_true_option(self):
- self._test_skip_if_config(True, 'service_available', 'nova')
-
- def test_skip_if_true_option_msg(self):
- self._test_skip_if_config(True, 'service_available', 'nova',
- 'skip message')
diff --git a/tempest/tests/test_hacking.py b/tempest/tests/test_hacking.py
index c04d933..bc3a753 100644
--- a/tempest/tests/test_hacking.py
+++ b/tempest/tests/test_hacking.py
@@ -86,13 +86,13 @@
def test_scenario_tests_need_service_tags(self):
self.assertFalse(checks.scenario_tests_need_service_tags(
'def test_fake:', './tempest/scenario/test_fake.py',
- "@test.services('compute')"))
+ "@utils.services('compute')"))
self.assertFalse(checks.scenario_tests_need_service_tags(
'def test_fake_test:', './tempest/api/compute/test_fake.py',
- "@test.services('image')"))
+ "@utils.services('image')"))
self.assertFalse(checks.scenario_tests_need_service_tags(
'def test_fake:', './tempest/scenario/orchestration/test_fake.py',
- "@test.services('compute')"))
+ "@utils.services('compute')"))
self.assertTrue(checks.scenario_tests_need_service_tags(
'def test_fake_test:', './tempest/scenario/test_fake.py',
'\n'))
@@ -113,12 +113,13 @@
def test_service_tags_not_in_module_path(self):
self.assertTrue(checks.service_tags_not_in_module_path(
- "@test.services('compute')", './tempest/api/compute/fake_test.py'))
+ "@utils.services('compute')",
+ './tempest/api/compute/fake_test.py'))
self.assertFalse(checks.service_tags_not_in_module_path(
- "@test.services('compute')",
+ "@utils.services('compute')",
'./tempest/scenario/compute/fake_test.py'))
self.assertFalse(checks.service_tags_not_in_module_path(
- "@test.services('compute')", './tempest/api/image/fake_test.py'))
+ "@utils.services('compute')", './tempest/api/image/fake_test.py'))
def test_no_hyphen_at_end_of_rand_name(self):
self.assertIsNone(checks.no_hyphen_at_end_of_rand_name(
diff --git a/tempest/tests/test_list_tests.py b/tempest/tests/test_list_tests.py
index a238879..4af7463 100644
--- a/tempest/tests/test_list_tests.py
+++ b/tempest/tests/test_list_tests.py
@@ -23,12 +23,10 @@
class TestTestList(base.TestCase):
- def test_testr_list_tests_no_errors(self):
- # Remove unit test discover path from env to test tempest tests
+ def test_stestr_list_no_errors(self):
test_env = os.environ.copy()
- test_env.pop('OS_TEST_PATH')
import_failures = []
- p = subprocess.Popen(['testr', 'list-tests'], stdout=subprocess.PIPE,
+ p = subprocess.Popen(['stestr', 'list'], stdout=subprocess.PIPE,
env=test_env)
ids, err = p.communicate()
self.assertEqual(0, p.returncode,
diff --git a/tempest/tests/test_tempest_plugin.py b/tempest/tests/test_tempest_plugin.py
index 13e2499..ddadef5 100644
--- a/tempest/tests/test_tempest_plugin.py
+++ b/tempest/tests/test_tempest_plugin.py
@@ -17,9 +17,16 @@
from tempest.test_discover import plugins
from tempest.tests import base
from tempest.tests import fake_tempest_plugin as fake_plugin
+from tempest.tests.lib.services import registry_fixture
class TestPluginDiscovery(base.TestCase):
+
+ def setUp(self):
+ super(TestPluginDiscovery, self).setUp()
+ # Make sure we leave the registry clean
+ self.useFixture(registry_fixture.RegistryFixture())
+
def test_load_tests_with_one_plugin(self):
# we can't mock stevedore since it's a singleton and already executed
# during test discovery. So basically this test covers the plugin loop
diff --git a/tempest/tests/test_test.py b/tempest/tests/test_test.py
new file mode 100644
index 0000000..fc50736
--- /dev/null
+++ b/tempest/tests/test_test.py
@@ -0,0 +1,626 @@
+# Copyright 2017 IBM Corp
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import sys
+
+import mock
+from oslo_config import cfg
+import testtools
+
+from tempest import clients
+from tempest import config
+from tempest.lib.common import validation_resources as vr
+from tempest.lib import exceptions as lib_exc
+from tempest import test
+from tempest.tests import base
+from tempest.tests import fake_config
+from tempest.tests.lib import fake_credentials
+from tempest.tests.lib.services import registry_fixture
+
+
+if sys.version_info >= (2, 7):
+ import unittest
+else:
+ import unittest2 as unittest
+
+
+class LoggingTestResult(testtools.TestResult):
+
+ def __init__(self, log, *args, **kwargs):
+ super(LoggingTestResult, self).__init__(*args, **kwargs)
+ self.log = log
+
+ def addError(self, test, err=None, details=None):
+ self.log.append((test, err, details))
+
+
+class TestValidationResources(base.TestCase):
+
+ validation_resources_module = 'tempest.lib.common.validation_resources'
+
+ def setUp(self):
+ super(TestValidationResources, self).setUp()
+ self.useFixture(fake_config.ConfigFixture())
+ self.useFixture(registry_fixture.RegistryFixture())
+ self.patchobject(config, 'TempestConfigPrivate',
+ fake_config.FakePrivate)
+
+ class TestTestClass(test.BaseTestCase):
+ pass
+
+ self.test_test_class = TestTestClass
+
+ def test_validation_resources_no_validation(self):
+ cfg.CONF.set_default('run_validation', False, 'validation')
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ osclients = clients.Manager(creds)
+ vr = self.test_test_class.get_class_validation_resources(osclients)
+ self.assertIsNone(vr)
+
+ def test_validation_resources_exists(self):
+ cfg.CONF.set_default('run_validation', True, 'validation')
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ osclients = clients.Manager(creds)
+ expected_vr = 'expected_validation_resources'
+ self.test_test_class._validation_resources[osclients] = expected_vr
+ obtained_vr = self.test_test_class.get_class_validation_resources(
+ osclients)
+ self.assertEqual(expected_vr, obtained_vr)
+
+ @mock.patch(validation_resources_module + '.create_validation_resources',
+ autospec=True)
+ def test_validation_resources_new(self, mock_create_vr):
+ cfg.CONF.set_default('run_validation', True, 'validation')
+ cfg.CONF.set_default('neutron', True, 'service_available')
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ osclients = clients.Manager(creds)
+ expected_vr = {'expected_validation_resources': None}
+ mock_create_vr.return_value = expected_vr
+ with mock.patch.object(
+ self.test_test_class,
+ 'addClassResourceCleanup') as mock_add_class_cleanup:
+ obtained_vr = self.test_test_class.get_class_validation_resources(
+ osclients)
+ self.assertEqual(1, mock_add_class_cleanup.call_count)
+ self.assertEqual(mock.call(vr.clear_validation_resources,
+ osclients,
+ use_neutron=True,
+ **expected_vr),
+ mock_add_class_cleanup.call_args)
+ self.assertEqual(mock_create_vr.call_count, 1)
+ self.assertIn(osclients, mock_create_vr.call_args_list[0][0])
+ self.assertEqual(expected_vr, obtained_vr)
+ self.assertIn(osclients, self.test_test_class._validation_resources)
+ self.assertEqual(expected_vr,
+ self.test_test_class._validation_resources[osclients])
+
+ def test_validation_resources_invalid_config(self):
+ invalid_version = 999
+ cfg.CONF.set_default('run_validation', True, 'validation')
+ cfg.CONF.set_default('ip_version_for_ssh', invalid_version,
+ 'validation')
+ cfg.CONF.set_default('neutron', True, 'service_available')
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ osclients = clients.Manager(creds)
+ with testtools.ExpectedException(
+ lib_exc.InvalidConfiguration,
+ value_re='^.*\n.*' + str(invalid_version)):
+ self.test_test_class.get_class_validation_resources(osclients)
+
+ @mock.patch(validation_resources_module + '.create_validation_resources',
+ autospec=True)
+ def test_validation_resources_invalid_config_nova_net(self,
+ mock_create_vr):
+ invalid_version = 999
+ cfg.CONF.set_default('run_validation', True, 'validation')
+ cfg.CONF.set_default('ip_version_for_ssh', invalid_version,
+ 'validation')
+ cfg.CONF.set_default('neutron', False, 'service_available')
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ osclients = clients.Manager(creds)
+ expected_vr = {'expected_validation_resources': None}
+ mock_create_vr.return_value = expected_vr
+ obtained_vr = self.test_test_class.get_class_validation_resources(
+ osclients)
+ self.assertEqual(mock_create_vr.call_count, 1)
+ self.assertIn(osclients, mock_create_vr.call_args_list[0][0])
+ self.assertEqual(expected_vr, obtained_vr)
+ self.assertIn(osclients, self.test_test_class._validation_resources)
+ self.assertEqual(expected_vr,
+ self.test_test_class._validation_resources[osclients])
+
+ @mock.patch(validation_resources_module + '.create_validation_resources',
+ autospec=True)
+ @mock.patch(validation_resources_module + '.clear_validation_resources',
+ autospec=True)
+ def test_validation_resources_fixture(self, mock_clean_vr, mock_create_vr):
+
+ class TestWithRun(self.test_test_class):
+
+ def runTest(self):
+ pass
+
+ cfg.CONF.set_default('run_validation', True, 'validation')
+ test_case = TestWithRun()
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ osclients = clients.Manager(creds)
+ test_case.get_test_validation_resources(osclients)
+ self.assertEqual(1, mock_create_vr.call_count)
+ self.assertEqual(0, mock_clean_vr.call_count)
+
+
+class TestSetNetworkResources(base.TestCase):
+
+ def setUp(self):
+ super(TestSetNetworkResources, self).setUp()
+
+ class ParentTest(test.BaseTestCase):
+
+ @classmethod
+ def setup_credentials(cls):
+ cls.set_network_resources(dhcp=True)
+ super(ParentTest, cls).setup_credentials()
+
+ def runTest(self):
+ pass
+
+ self.parent_class = ParentTest
+
+ def test_set_network_resources_child_only(self):
+
+ class ChildTest(self.parent_class):
+
+ @classmethod
+ def setup_credentials(cls):
+ cls.set_network_resources(router=True)
+ super(ChildTest, cls).setup_credentials()
+
+ child_test = ChildTest()
+ child_test.setUpClass()
+        # Assert that the parent's network resources are not set
+ self.assertFalse(child_test._network_resources['dhcp'])
+ # Assert that the child network resources are set
+ self.assertTrue(child_test._network_resources['router'])
+
+ def test_set_network_resources_right_order(self):
+
+ class ChildTest(self.parent_class):
+
+ @classmethod
+ def setup_credentials(cls):
+ super(ChildTest, cls).setup_credentials()
+ cls.set_network_resources(router=True)
+
+ child_test = ChildTest()
+ with testtools.ExpectedException(RuntimeError,
+ value_re='set_network_resources'):
+ child_test.setUpClass()
+
+ def test_set_network_resources_children(self):
+
+ class ChildTest(self.parent_class):
+
+ @classmethod
+ def setup_credentials(cls):
+ cls.set_network_resources(router=True)
+ super(ChildTest, cls).setup_credentials()
+
+ class GrandChildTest(ChildTest):
+ pass
+
+        # Invoke setUpClass on both and check that the setup_credentials
+ # call check mechanism does not report any false negative.
+ child_test = ChildTest()
+ child_test.setUpClass()
+ grandchild_test = GrandChildTest()
+ grandchild_test.setUpClass()
+
+
+class TestTempestBaseTestClass(base.TestCase):
+
+ def setUp(self):
+ super(TestTempestBaseTestClass, self).setUp()
+ self.useFixture(fake_config.ConfigFixture())
+ self.patchobject(config, 'TempestConfigPrivate',
+ fake_config.FakePrivate)
+
+ class ParentTest(test.BaseTestCase):
+
+ def runTest(self):
+ pass
+
+ self.parent_test = ParentTest
+
+ def test_resource_cleanup(self):
+ cfg.CONF.set_default('neutron', False, 'service_available')
+ exp_args = (1, 2,)
+ exp_kwargs = {'a': 1, 'b': 2}
+ mock1 = mock.Mock()
+ mock2 = mock.Mock()
+ exp_functions = [mock1, mock2]
+
+ class TestWithCleanups(self.parent_test):
+
+ @classmethod
+ def resource_setup(cls):
+ for fn in exp_functions:
+ cls.addClassResourceCleanup(fn, *exp_args,
+ **exp_kwargs)
+
+ test_cleanups = TestWithCleanups()
+ suite = unittest.TestSuite((test_cleanups,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # No exception raised - error log is empty
+ self.assertFalse(log)
+ # All stacked resource cleanups invoked
+ mock1.assert_called_once_with(*exp_args, **exp_kwargs)
+ mock2.assert_called_once_with(*exp_args, **exp_kwargs)
+ # Cleanup stack is empty
+ self.assertEqual(0, len(test_cleanups._class_cleanups))
+
+ def test_resource_cleanup_failures(self):
+ cfg.CONF.set_default('neutron', False, 'service_available')
+ exp_args = (1, 2,)
+ exp_kwargs = {'a': 1, 'b': 2}
+ mock1 = mock.Mock()
+ mock1.side_effect = Exception('mock1 resource cleanup failure')
+ mock2 = mock.Mock()
+ mock3 = mock.Mock()
+ mock3.side_effect = Exception('mock3 resource cleanup failure')
+ exp_functions = [mock1, mock2, mock3]
+
+ class TestWithFailingCleanups(self.parent_test):
+
+ @classmethod
+ def resource_setup(cls):
+ for fn in exp_functions:
+ cls.addClassResourceCleanup(fn, *exp_args,
+ **exp_kwargs)
+
+ test_cleanups = TestWithFailingCleanups()
+ suite = unittest.TestSuite((test_cleanups,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+        # A single MultipleExceptions error is captured in the log
+ self.assertEqual(1, len(log))
+        # log[0] is a (test, err, details) tuple; err is an exc_info tuple
+        # (type, exception, traceback) whose exception is MultipleExceptions
+ found_exc = log[0][1][1]
+ self.assertTrue(isinstance(found_exc, testtools.MultipleExceptions))
+ self.assertEqual(2, len(found_exc.args))
+ # Each arg is exc_info - match messages and order
+ self.assertIn('mock3 resource', str(found_exc.args[0][1]))
+ self.assertIn('mock1 resource', str(found_exc.args[1][1]))
+ # All stacked resource cleanups invoked
+ mock1.assert_called_once_with(*exp_args, **exp_kwargs)
+ mock2.assert_called_once_with(*exp_args, **exp_kwargs)
+ # Cleanup stack is empty
+ self.assertEqual(0, len(test_cleanups._class_cleanups))
+
+ def test_super_resource_cleanup_not_invoked(self):
+
+ class BadResourceCleanup(self.parent_test):
+
+ @classmethod
+ def resource_cleanup(cls):
+ pass
+
+ bad_class = BadResourceCleanup()
+ suite = unittest.TestSuite((bad_class,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+        # A single error is captured in the log
+ self.assertEqual(1, len(log))
+        # log[0] is a (test, err, details) tuple; err is an exc_info tuple
+        # (type, exception, traceback) whose exception is a RuntimeError
+ found_exc = log[0][1][1]
+ self.assertTrue(isinstance(found_exc, RuntimeError))
+ self.assertIn(BadResourceCleanup.__name__, str(found_exc))
+
+ def test_super_skip_checks_not_invoked(self):
+
+ class BadSkipChecks(self.parent_test):
+
+ @classmethod
+ def skip_checks(cls):
+ pass
+
+ bad_class = BadSkipChecks()
+ with testtools.ExpectedException(
+ RuntimeError,
+ value_re='^.* ' + BadSkipChecks.__name__):
+ bad_class.setUpClass()
+
+ def test_super_setup_credentials_not_invoked(self):
+
+ class BadSetupCredentials(self.parent_test):
+
+ @classmethod
+ def skip_checks(cls):
+ pass
+
+ bad_class = BadSetupCredentials()
+ with testtools.ExpectedException(
+ RuntimeError,
+ value_re='^.* ' + BadSetupCredentials.__name__):
+ bad_class.setUpClass()
+
+ def test_grandparent_skip_checks_not_invoked(self):
+
+ class BadSkipChecks(self.parent_test):
+
+ @classmethod
+ def skip_checks(cls):
+ pass
+
+ class SonOfBadSkipChecks(BadSkipChecks):
+ pass
+
+ bad_class = SonOfBadSkipChecks()
+ with testtools.ExpectedException(
+ RuntimeError,
+ value_re='^.* ' + SonOfBadSkipChecks.__name__):
+ bad_class.setUpClass()
+
+ @mock.patch('tempest.common.credentials_factory.is_admin_available',
+ autospec=True, return_value=True)
+ def test_skip_checks_admin(self, mock_iaa):
+ identity_version = 'identity_version'
+
+ class NeedAdmin(self.parent_test):
+ credentials = ['admin']
+
+ @classmethod
+ def get_identity_version(cls):
+ return identity_version
+
+ NeedAdmin().skip_checks()
+ mock_iaa.assert_called_once_with('identity_version')
+
+ @mock.patch('tempest.common.credentials_factory.is_admin_available',
+ autospec=True, return_value=False)
+ def test_skip_checks_admin_not_available(self, mock_iaa):
+ identity_version = 'identity_version'
+
+ class NeedAdmin(self.parent_test):
+ credentials = ['admin']
+
+ @classmethod
+ def get_identity_version(cls):
+ return identity_version
+
+ with testtools.ExpectedException(testtools.testcase.TestSkipped):
+ NeedAdmin().skip_checks()
+ mock_iaa.assert_called_once_with('identity_version')
+
+ def test_skip_checks_identity_v2_not_available(self):
+ cfg.CONF.set_default('api_v2', False, 'identity-feature-enabled')
+
+ class NeedV2(self.parent_test):
+ identity_version = 'v2'
+
+ with testtools.ExpectedException(testtools.testcase.TestSkipped):
+ NeedV2().skip_checks()
+
+ def test_skip_checks_identity_v3_not_available(self):
+ cfg.CONF.set_default('api_v3', False, 'identity-feature-enabled')
+
+ class NeedV3(self.parent_test):
+ identity_version = 'v3'
+
+ with testtools.ExpectedException(testtools.testcase.TestSkipped):
+ NeedV3().skip_checks()
+
+ def test_setup_credentials_all(self):
+ expected_creds = ['string', ['list', 'role1', 'role2']]
+
+ class AllCredentials(self.parent_test):
+ credentials = expected_creds
+
+ expected_clients = 'clients'
+ with mock.patch.object(
+ AllCredentials,
+ 'get_client_manager') as mock_get_client_manager:
+ mock_get_client_manager.return_value = expected_clients
+ all_creds = AllCredentials()
+ all_creds.setup_credentials()
+ self.assertTrue(hasattr(all_creds, 'os_string'))
+ self.assertEqual(expected_clients, all_creds.os_string)
+ self.assertTrue(hasattr(all_creds, 'os_roles_list'))
+ self.assertEqual(expected_clients, all_creds.os_roles_list)
+ self.assertEqual(2, mock_get_client_manager.call_count)
+ self.assertEqual(
+ expected_creds[0],
+ mock_get_client_manager.mock_calls[0][2]['credential_type'])
+ self.assertEqual(
+ expected_creds[1][1:],
+ mock_get_client_manager.mock_calls[1][2]['roles'])
+
+ def test_setup_class_overwritten(self):
+
+ class OverridesSetup(self.parent_test):
+
+ @classmethod
+ def setUpClass(cls): # noqa
+ pass
+
+ overrides_setup = OverridesSetup()
+ suite = unittest.TestSuite((overrides_setup,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # Record 0, test (error holder). The error is generated during the test run.
+ self.assertIn('runTest', str(log[0][0]))
+ # Record 0, traceback
+ self.assertRegex(
+ str(log[0][2]['traceback']).replace('\n', ' '),
+ RuntimeError.__name__ + ': .* ' + OverridesSetup.__name__)
+
+
+class TestTempestBaseTestClassFixtures(base.TestCase):
+
+ SETUP_FIXTURES = [test.BaseTestCase.setUpClass.__name__,
+ test.BaseTestCase.skip_checks.__name__,
+ test.BaseTestCase.setup_credentials.__name__,
+ test.BaseTestCase.setup_clients.__name__,
+ test.BaseTestCase.resource_setup.__name__]
+ TEARDOWN_FIXTURES = [test.BaseTestCase.tearDownClass.__name__,
+ test.BaseTestCase.resource_cleanup.__name__,
+ test.BaseTestCase.clear_credentials.__name__]
+
+ def setUp(self):
+ super(TestTempestBaseTestClassFixtures, self).setUp()
+ self.mocks = {}
+ for fix in self.SETUP_FIXTURES + self.TEARDOWN_FIXTURES:
+ self.mocks[fix] = mock.Mock()
+
+ def tracker_builder(name):
+
+ def tracker(cls):
+ # Track that the fixture was invoked
+ cls.fixtures_invoked.append(name)
+ # Run the fixture
+ getattr(super(TestWithClassFixtures, cls), name)()
+ # Run a mock we can use for side effects
+ self.mocks[name]()
+
+ return tracker
+
+ class TestWithClassFixtures(test.BaseTestCase):
+
+ credentials = []
+ fixtures_invoked = []
+
+ def runTest(_self):
+ pass
+
+ # Decorate all test class fixtures with tracker_builder
+ for method_name in self.SETUP_FIXTURES + self.TEARDOWN_FIXTURES:
+ setattr(TestWithClassFixtures, method_name,
+ classmethod(tracker_builder(method_name)))
+
+ self.test = TestWithClassFixtures()
+
+ def test_no_error_flow(self):
+ # If all setup fixtures are executed, all cleanup fixtures are
+ # executed too
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ self.assertEqual(self.SETUP_FIXTURES + self.TEARDOWN_FIXTURES,
+ self.test.fixtures_invoked)
+
+ def test_skip_only(self):
+ # If a skip condition is hit in the test, no credentials or resources
+ # are provisioned or cleaned up
+ self.mocks['skip_checks'].side_effect = (
+ testtools.testcase.TestSkipped())
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # If we trigger a skip condition, teardown is not invoked at all
+ self.assertEqual(self.SETUP_FIXTURES[:2],
+ self.test.fixtures_invoked)
+
+ def test_skip_credentials_fails(self):
+ expected_exc = 'sc exploded'
+ self.mocks['setup_credentials'].side_effect = Exception(expected_exc)
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # If setup_credentials explodes, tearDownClass and clear_credentials
+ # are still invoked, and the exception is re-raised
+ self.assertEqual((self.SETUP_FIXTURES[:3] +
+ [self.TEARDOWN_FIXTURES[i] for i in (0, 2)]),
+ self.test.fixtures_invoked)
+ found_exc = log[0][1][1]
+ self.assertIn(expected_exc, str(found_exc))
+
+ def test_skip_credentials_fails_clear_fails(self):
+ # If cleanup fails after an earlier failure, we log the cleanup
+ # exception and do not re-raise it. Since the exception happens
+ # outside of the Tempest test setUp, logging is not captured on
+ # the Tempest test side; it is captured by the unit test instead.
+ expected_exc = 'sc exploded'
+ clear_exc = 'clear exploded'
+ self.mocks['setup_credentials'].side_effect = Exception(expected_exc)
+ self.mocks['clear_credentials'].side_effect = Exception(clear_exc)
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # If setup_credentials explodes, tearDownClass and clear_credentials
+ # are still invoked, and the setup exception is re-raised
+ self.assertEqual((self.SETUP_FIXTURES[:3] +
+ [self.TEARDOWN_FIXTURES[i] for i in (0, 2)]),
+ self.test.fixtures_invoked)
+ found_exc = log[0][1][1]
+ self.assertIn(expected_exc, str(found_exc))
+ # Since log capture depends on OS_LOG_CAPTURE, we can only check the
+ # log output when logging was actually captured
+ if os.environ.get('OS_LOG_CAPTURE'):
+ self.assertIn(clear_exc, self.log_fixture.logger.output)
+
+ def test_skip_credentials_clients_resources_credentials_clear_fails(self):
+ # If cleanup fails with no previous failure, we re-raise the exception.
+ expected_exc = 'clear exploded'
+ self.mocks['clear_credentials'].side_effect = Exception(expected_exc)
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # If clear_credentials explodes with no previous failure, all setup
+ # and teardown fixtures are invoked and the exception is re-raised
+ self.assertEqual(self.SETUP_FIXTURES + self.TEARDOWN_FIXTURES,
+ self.test.fixtures_invoked)
+ found_exc = log[0][1][1]
+ self.assertIn(expected_exc, str(found_exc))
+
+ def test_skip_credentials_clients_fails(self):
+ expected_exc = 'clients exploded'
+ self.mocks['setup_clients'].side_effect = Exception(expected_exc)
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # If setup_clients explodes, tearDownClass and clear_credentials
+ # are still invoked, and the exception is re-raised
+ self.assertEqual((self.SETUP_FIXTURES[:4] +
+ [self.TEARDOWN_FIXTURES[i] for i in (0, 2)]),
+ self.test.fixtures_invoked)
+ found_exc = log[0][1][1]
+ self.assertIn(expected_exc, str(found_exc))
+
+ def test_skip_credentials_clients_resources_fails(self):
+ expected_exc = 'resource setup exploded'
+ self.mocks['resource_setup'].side_effect = Exception(expected_exc)
+ suite = unittest.TestSuite((self.test,))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+ # If resource_setup explodes, tearDownClass, resource_cleanup and
+ # clear_credentials are still invoked, and the exception is re-raised
+ self.assertEqual(self.SETUP_FIXTURES + self.TEARDOWN_FIXTURES,
+ self.test.fixtures_invoked)
+ found_exc = log[0][1][1]
+ self.assertIn(expected_exc, str(found_exc))
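For context, the class-level cleanup behaviour exercised by the unit tests
above boils down to the usage pattern below; a minimal sketch, assuming a
hypothetical "widgets_client" and widget API that are not part of this change:

    # Illustrative sketch only: shows the addClassResourceCleanup pattern
    # verified by the tests above; widgets_client, create_widget and
    # delete_widget are hypothetical placeholders.
    from tempest import test


    class ExampleWithCleanups(test.BaseTestCase):

        credentials = ['primary']

        @classmethod
        def resource_setup(cls):
            super(ExampleWithCleanups, cls).resource_setup()
            widget = cls.widgets_client.create_widget()
            # Cleanups registered here always run at class teardown, in
            # reverse registration order; any cleanup failures are
            # collected into a single testtools.MultipleExceptions.
            cls.addClassResourceCleanup(
                cls.widgets_client.delete_widget, widget['id'])

The reverse ordering, the argument pass-through and the MultipleExceptions
aggregation are exactly what the assertions above check.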
diff --git a/test-requirements.txt b/test-requirements.txt
index 09c7685..37644d0 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4,9 +4,9 @@
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
# needed for doc build
sphinx>=1.6.2 # BSD
-openstackdocstheme>=1.16.0 # Apache-2.0
-reno!=2.3.1,>=1.8.0 # Apache-2.0
-mock>=2.0 # BSD
+openstackdocstheme>=1.17.0 # Apache-2.0
+reno>=2.5.0 # Apache-2.0
+mock>=2.0.0 # BSD
coverage!=4.4,>=4.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
flake8-import-order==0.11 # LGPLv3
diff --git a/tools/generate-tempest-plugins-list.sh b/tools/generate-tempest-plugins-list.sh
index e6aad86..20c99b2 100755
--- a/tools/generate-tempest-plugins-list.sh
+++ b/tools/generate-tempest-plugins-list.sh
@@ -33,8 +33,8 @@
# * network access to https://git.openstack.org/cgit
# ))
#
-# If a file named data/tempest-plugins-registry.header or
-# data/tempest-plugins-registry.footer is found relative to the
+# If a file named doc/source/data/tempest-plugins-registry.header or
+# doc/source/data/tempest-plugins-registry.footer is found relative to the
# current working directory, it will be prepended or appended to
# the generated reStructuredText plugins table respectively.
@@ -43,8 +43,8 @@
(
declare -A plugins
-if [[ -r data/tempest-plugins-registry.header ]]; then
- cat data/tempest-plugins-registry.header
+if [[ -r doc/source/data/tempest-plugins-registry.header ]]; then
+ cat doc/source/data/tempest-plugins-registry.header
fi
sorted_plugins=$(python tools/generate-tempest-plugins-list.py)
@@ -56,8 +56,8 @@
printf "+----------------------------+-------------------------------------------------------------------------+\n"
done
-if [[ -r data/tempest-plugins-registry.footer ]]; then
- cat data/tempest-plugins-registry.footer
+if [[ -r doc/source/data/tempest-plugins-registry.footer ]]; then
+ cat doc/source/data/tempest-plugins-registry.footer
fi
) > doc/source/plugin-registry.rst
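The change above only relocates the optional header and footer files; the
wrapping logic itself is unchanged and is roughly equivalent to the following
sketch (illustration only, the real tooling stays in the shell script):

    # Rough, illustrative Python equivalent of the shell wrapping above.
    import os
    import sys

    HEADER = 'doc/source/data/tempest-plugins-registry.header'
    FOOTER = 'doc/source/data/tempest-plugins-registry.footer'


    def write_registry(table_text, out=sys.stdout):
        # Prepend the header if it is readable relative to the current
        # working directory, then write the generated table, then append
        # the footer if it is readable.
        if os.access(HEADER, os.R_OK):
            with open(HEADER) as f:
                out.write(f.read())
        out.write(table_text)
        if os.access(FOOTER, os.R_OK):
            with open(FOOTER) as f:
                out.write(f.read())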
diff --git a/tools/tempest-plugin-sanity.sh b/tools/tempest-plugin-sanity.sh
index a4f706e..44bf840 100644
--- a/tools/tempest-plugin-sanity.sh
+++ b/tools/tempest-plugin-sanity.sh
@@ -20,7 +20,7 @@
# What it does:
# * Creates the virtualenv
# * Install tempest
-# * Retrive the project lists having tempest plugin if project name is
+# * Retrieve the list of projects having a tempest plugin if a project name is
# given.
# * For each project in a list, It does:
# * Clone the Project
diff --git a/tox.ini b/tox.ini
index 6f37d00..21696eb 100644
--- a/tox.ini
+++ b/tox.ini
@@ -16,11 +16,11 @@
[testenv]
setenv =
VIRTUAL_ENV={envdir}
- OS_TEST_PATH=./tempest/tests
+ OS_LOG_CAPTURE=1
PYTHONWARNINGS=default::DeprecationWarning
BRANCH_NAME=master
CLIENT_NAME=tempest
-passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH OS_TEST_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
+passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
usedevelop = True
install_command =
{toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
@@ -30,7 +30,7 @@
-r{toxinidir}/test-requirements.txt
commands =
find . -type f -name "*.pyc" -delete
- ostestr {posargs}
+ stestr --test-path ./tempest/tests run {posargs}
[testenv:genconfig]
commands = oslo-config-generator --config-file tempest/cmd/config-generator.tempest.conf
@@ -138,6 +138,7 @@
[testenv:docs]
commands =
+ rm -rf doc/build
python setup.py build_sphinx {posargs}
[testenv:pep8]
@@ -159,12 +160,14 @@
# E129 skipped because it is too limiting when combined with other rules
ignore = E125,E123,E129
show-source = True
-exclude = .git,.venv,.tox,dist,doc,*egg
+exclude = .git,.venv,.tox,dist,doc,*egg,build
enable-extensions = H106,H203,H904
import-order-style = pep8
[testenv:releasenotes]
-commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
+commands =
+ rm -rf releasenotes/build
+ sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:pip-check-reqs]
# Do not install test-requirements as that will pollute the virtualenv for