Merge "Use get_tenant_network in get_server_ip"
diff --git a/.gitignore b/.gitignore
index 287db4c..7cb052f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -18,6 +18,7 @@
dist
build
.testrepository
+.stestr
.idea
.project
.pydevproject
diff --git a/.stestr.conf b/.stestr.conf
new file mode 100644
index 0000000..e3201c1
--- /dev/null
+++ b/.stestr.conf
@@ -0,0 +1,4 @@
+[DEFAULT]
+test_path=./tempest/test_discover
+group_regex=([^\.]*\.)*
+
diff --git a/.zuul.yaml b/.zuul.yaml
new file mode 100644
index 0000000..5b73695
--- /dev/null
+++ b/.zuul.yaml
@@ -0,0 +1,23 @@
+- job:
+ name: devstack-tempest
+ parent: devstack
+ description: Base Tempest job.
+ required-projects:
+ - openstack/tempest
+ timeout: 7200
+ roles:
+ - zuul: openstack-dev/devstack
+ vars:
+ devstack_services:
+ tempest: True
+ run: playbooks/devstack-tempest.yaml
+
+- project:
+ name: openstack/tempest
+ check:
+ jobs:
+ - devstack-tempest:
+ files:
+ - ^playbooks/
+ - ^roles/
+ - ^.zuul.yaml$
diff --git a/HACKING.rst b/HACKING.rst
index 8407734..c942cb1 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -84,7 +84,7 @@
It is recommended to use testtools `matcher`_ for the more tricky assertions.
You can implement your own specific `matcher`_ as well.
-.. _matcher: http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers
+.. _matcher: https://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers
If the test case fails you can see the related logs and the information
carried by the exception (exception class, backtrack and exception info).
@@ -178,7 +178,7 @@
All negative tests should be based on `API-WG guideline`_ . Such negative
tests can block any changes from accurate failure code to invalid one.
-.. _API-WG guideline: http://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications
+.. _API-WG guideline: https://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications
If facing some gray area which is not clarified on the above guideline, propose
a new guideline to the API-WG. With a proposal to the API-WG we will be able to
diff --git a/README.rst b/README.rst
index 17d4cba..c087f29 100644
--- a/README.rst
+++ b/README.rst
@@ -2,7 +2,7 @@
Team and repository tags
========================
-.. image:: http://governance.openstack.org/badges/tempest.svg
+.. image:: https://governance.openstack.org/badges/tempest.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
@@ -61,7 +61,7 @@
#. You first need to install Tempest. This is done with pip after you check out
the Tempest repo::
- $ git clone http://git.openstack.org/openstack/tempest
+ $ git clone https://git.openstack.org/openstack/tempest
$ pip install tempest/
This can be done within a venv, but the assumption for this guide is that
@@ -133,7 +133,7 @@
Release Versioning
------------------
-`Tempest Release Notes <http://docs.openstack.org/releasenotes/tempest>`_
+`Tempest Release Notes <https://docs.openstack.org/releasenotes/tempest>`_
shows what changes have been released on each version.
Tempest's released versions are broken into 2 sets of information. Depending on
@@ -183,11 +183,11 @@
Tempest also has a set of unit tests which test the Tempest code itself. These
tests can be run by specifying the test discovery path::
- $ OS_TEST_PATH=./tempest/tests testr run --parallel
+ $ stestr --test-path ./tempest/tests run
-By setting OS_TEST_PATH to ./tempest/tests it specifies that test discover
-should only be run on the unit test directory. The default value of OS_TEST_PATH
-is OS_TEST_PATH=./tempest/test_discover which will only run test discover on the
+By setting the ``--test-path`` option to ./tempest/tests, test discovery is
+restricted to the unit test directory. The default value of ``test_path``
+is ``test_path=./tempest/test_discover``, which will only run test discovery on the
Tempest suite.
Alternatively, there are the py27 and py35 tox jobs which will run the unit
diff --git a/REVIEWING.rst b/REVIEWING.rst
index 7d28320..5e08a6b 100644
--- a/REVIEWING.rst
+++ b/REVIEWING.rst
@@ -2,7 +2,7 @@
======================
To start read the `OpenStack Common Review Checklist
-<http://docs.openstack.org/infra/manual/developers.html#peer-review>`_
+<https://docs.openstack.org/infra/manual/developers.html#peer-review>`_
Ensuring code is executed
@@ -16,7 +16,7 @@
If a new test is added that depends on a new config option (like a feature
flag), the commit message must reference a change in DevStack or DevStack-Gate
that enables the execution of this newly introduced test. This reference could
-either be a `Cross-Repository Dependency <http://docs.openstack.org/infra/
+either be a `Cross-Repository Dependency <https://docs.openstack.org/infra/
manual/developers.html#cross-repository-dependencies>`_ or a simple link
to a Gerrit review.
diff --git a/doc/source/_extra/.htaccess b/doc/source/_extra/.htaccess
new file mode 100644
index 0000000..7745594
--- /dev/null
+++ b/doc/source/_extra/.htaccess
@@ -0,0 +1 @@
+redirectmatch 301 ^/developer/tempest/(.*) /tempest/latest/$1
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 067eb81..0a061b8 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -154,6 +154,9 @@
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
+# Add any paths that contain "extra" files, such as .htaccess or
+# robots.txt.
+html_extra_path = ['_extra']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
diff --git a/doc/source/microversion_testing.rst b/doc/source/microversion_testing.rst
index 307eb07..acf5593 100644
--- a/doc/source/microversion_testing.rst
+++ b/doc/source/microversion_testing.rst
@@ -338,4 +338,28 @@
* `3.3`_
- .. _3.3: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id4
+ .. _3.3: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id3
+
+ * `3.9`_
+
+ .. _3.9: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id9
+
+ * `3.11`_
+
+ .. _3.11: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id11
+
+ * `3.12`_
+
+ .. _3.12: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id12
+
+ * `3.14`_
+
+ .. _3.14: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id14
+
+ * `3.19`_
+
+ .. _3.19: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id18
+
+ * `3.20`_
+
+ .. _3.20: https://docs.openstack.org/cinder/latest/contributor/api_microversion_history.html#id19
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
index 77ef9ed..2afb1e5 100644
--- a/doc/source/plugin.rst
+++ b/doc/source/plugin.rst
@@ -29,6 +29,8 @@
* tempest.config
* tempest.test_discover.plugins
* tempest.common.credentials_factory
+* tempest.clients
+* tempest.test
If there is an interface from tempest that you need to rely on in your plugin
which is not listed above, it likely needs to be migrated to tempest.lib. In
diff --git a/playbooks/devstack-tempest.yaml b/playbooks/devstack-tempest.yaml
new file mode 100644
index 0000000..a684984
--- /dev/null
+++ b/playbooks/devstack-tempest.yaml
@@ -0,0 +1,14 @@
+# Changes that run through devstack-tempest are likely to have an impact on
+# the devstack part of the job, so we keep devstack in the main play to
+# avoid zuul retrying on legitimate failures.
+- hosts: all
+ roles:
+ - run-devstack
+
+# We run tests on only one node, regardless of how many nodes are in the system
+- hosts: tempest
+ roles:
+ - setup-tempest-run-dir
+ - setup-tempest-data-dir
+ - acl-devstack-files
+ - run-tempest
diff --git a/releasenotes/notes/add-domain-param-in-cliclient-a270fcf35c8f09e6.yaml b/releasenotes/notes/add-domain-param-in-cliclient-a270fcf35c8f09e6.yaml
new file mode 100644
index 0000000..87a6af9
--- /dev/null
+++ b/releasenotes/notes/add-domain-param-in-cliclient-a270fcf35c8f09e6.yaml
@@ -0,0 +1,17 @@
+---
+fixes:
+ - |
+ Allow specifying new domain parameters:
+
+ * ``user_domain_name``
+ * ``user_domain_id``
+ * ``project_domain_name``
+ * ``project_domain_id``
+
+ for the CLIClient class. Their values are passed as
+ ``--os-user-domain-name``, ``--os-user-domain-id``,
+ ``--os-project-domain-name`` and ``--os-project-domain-id`` respectively
+ during command execution.
+
+ This helps prevent possible authentication failures against
+ Keystone v3. Bug: #1719687
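
For illustration only (not part of the change above): a minimal sketch of how the new
domain parameters might be supplied, assuming the CLIClient constructor in
tempest.lib.cli.base simply forwards them to the CLI; all values are hypothetical::

    from tempest.lib.cli import base as cli_base

    # The domain kwargs end up on the command line as
    # --os-user-domain-name / --os-project-domain-name and friends.
    openstack_cli = cli_base.CLIClient(
        username='demo',
        password='secret',
        tenant_name='demo',
        uri='https://keystone.example.com/v3',
        user_domain_name='Default',
        project_domain_name='Default')
    openstack_cli.openstack('server list')
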
diff --git a/releasenotes/notes/add-load-list-cmd-35a4a2e6ea0a36fd.yaml b/releasenotes/notes/add-load-list-cmd-35a4a2e6ea0a36fd.yaml
new file mode 100644
index 0000000..403bbad
--- /dev/null
+++ b/releasenotes/notes/add-load-list-cmd-35a4a2e6ea0a36fd.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ Adds a new CLI option to ``tempest run``, ``--load-list <list-file>``,
+ to specify target tests to run from a list file. The list file
+ supports the output format of the ``tempest run --list-tests``
+ command.
diff --git a/releasenotes/notes/add-reset-group-snapshot-status-api-to-v3-group-snapshots-client-248d41827daf2a0c.yaml b/releasenotes/notes/add-reset-group-snapshot-status-api-to-v3-group-snapshots-client-248d41827daf2a0c.yaml
new file mode 100644
index 0000000..76b395d
--- /dev/null
+++ b/releasenotes/notes/add-reset-group-snapshot-status-api-to-v3-group-snapshots-client-248d41827daf2a0c.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add the reset group snapshot status API to the v3 group_snapshots_client
+ library; the minimum microversion of this API is 3.19. This makes it
+ possible to reset the status of a group snapshot.
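
As a rough usage sketch only (assuming an already-initialized admin v3
group_snapshots_client, and that the call takes the snapshot id plus the status to set)::

    # Reset a group snapshot stuck in a transient state; requires volume
    # API microversion >= 3.19.
    group_snapshots_client.reset_group_snapshot_status(
        group_snapshot['id'], 'error')
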
diff --git a/releasenotes/notes/add-support-args-kwargs-in-call-until-true-a91k592h5a64exf7.yaml b/releasenotes/notes/add-support-args-kwargs-in-call-until-true-a91k592h5a64exf7.yaml
new file mode 100644
index 0000000..e23abe3
--- /dev/null
+++ b/releasenotes/notes/add-support-args-kwargs-in-call-until-true-a91k592h5a64exf7.yaml
@@ -0,0 +1,5 @@
+---
+features:
+ - Add support for passing args and kwargs to func in call_until_true,
+ and log the elapsed time when call_until_true returns True or False,
+ to aid debugging.
diff --git a/releasenotes/notes/add_proxy_url_get_credentials-aef66b085450513f.yaml b/releasenotes/notes/add_proxy_url_get_credentials-aef66b085450513f.yaml
new file mode 100644
index 0000000..94ab462
--- /dev/null
+++ b/releasenotes/notes/add_proxy_url_get_credentials-aef66b085450513f.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add the optional proxy_url parameter to the get_credentials method in
+ tempest/lib/auth.py so that the helper can be used when going through
+ an HTTP proxy.
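
A minimal sketch of the new parameter in use; endpoint, credential and proxy values
below are hypothetical::

    from tempest.lib import auth

    credentials = auth.get_credentials(
        'https://keystone.example.com/v3',
        identity_version='v3',
        username='demo',
        password='secret',
        project_name='demo',
        user_domain_name='Default',
        project_domain_name='Default',
        # Keystone traffic issued by this helper goes through the proxy.
        proxy_url='http://proxy.example.com:3128')
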
diff --git a/releasenotes/notes/disable-identity-v2-testing-4ef1565d1a5aedcf.yaml b/releasenotes/notes/disable-identity-v2-testing-4ef1565d1a5aedcf.yaml
new file mode 100644
index 0000000..e5d4ab7
--- /dev/null
+++ b/releasenotes/notes/disable-identity-v2-testing-4ef1565d1a5aedcf.yaml
@@ -0,0 +1,7 @@
+---
+upgrade:
+ - |
+ As of the Queens release, tempest no longer tests the identity v2.0 API
+ because the majority of the v2.0 API has been removed from the identity
+ project. Once the Queens release reaches end-of-life, we can remove the
+ v2.0 tempest tests and clean up v2.0 testing cruft.
diff --git a/releasenotes/notes/drop-DEFAULT_PARAMS-bfcc2e7b74ef880b.yaml b/releasenotes/notes/drop-DEFAULT_PARAMS-bfcc2e7b74ef880b.yaml
new file mode 100644
index 0000000..c9a49a7
--- /dev/null
+++ b/releasenotes/notes/drop-DEFAULT_PARAMS-bfcc2e7b74ef880b.yaml
@@ -0,0 +1,13 @@
+---
+upgrade:
+ - |
+ Replace any call in your code to credentials_factory.DEFAULT_PARAMS with
+ a call to config.service_client_config().
+fixes:
+ - |
+ The credentials_factory module used to load configuration at import time,
+ which caused configuration to be loaded at test discovery time.
+ This was fixed by removing the DEFAULT_PARAMS variable. This variable
+ was redundant (and outdated); the same dictionary (but up to date) can
+ be obtained by invoking config.service_client_config() with no service
+ parameter.
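
A call site could be updated roughly as follows (sketch only)::

    from tempest import config

    # Previously: params = credentials_factory.DEFAULT_PARAMS
    # Now: the same dictionary, built from configuration on demand.
    params = config.service_client_config()
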
diff --git a/releasenotes/notes/fix-list-group-snapshots-api-969d9321002c566c.yaml b/releasenotes/notes/fix-list-group-snapshots-api-969d9321002c566c.yaml
new file mode 100644
index 0000000..775a383
--- /dev/null
+++ b/releasenotes/notes/fix-list-group-snapshots-api-969d9321002c566c.yaml
@@ -0,0 +1,6 @@
+---
+fixes:
+ - |
+ Fix the list_group_snapshots API in the v3 group_snapshots_client: Bug #1715786.
+ The URL path of the list group snapshots with details API is changed from
+ ``?detail=True`` to ``/detail``.
diff --git a/releasenotes/notes/http_proxy_config-cb39b55520e84db5.yaml b/releasenotes/notes/http_proxy_config-cb39b55520e84db5.yaml
new file mode 100644
index 0000000..56969de
--- /dev/null
+++ b/releasenotes/notes/http_proxy_config-cb39b55520e84db5.yaml
@@ -0,0 +1,9 @@
+---
+features:
+ - Adds a new config option, ``proxy_url``. This option is used to configure
+ running tempest through a proxy server.
+ - The RestClient class in tempest.lib.rest_client has a new kwarg parameter,
+ ``proxy_url``, that is used to set a proxy server.
+ - A new class was added to tempest.lib.http, ClosingProxyHttp. This behaves
+ identically to ClosingHttp except that it requires a proxy URL and will
+ establish connections through that proxy.
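
A rough sketch of the new class, assuming it mirrors ClosingHttp's request interface;
the proxy and target URLs are hypothetical::

    from tempest.lib.common import http

    # Every request made by this client is sent through the proxy.
    proxied = http.ClosingProxyHttp('http://proxy.example.com:3128')
    response, body = proxied.request('https://identity.example.com/v3', 'GET')
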
diff --git a/releasenotes/notes/intermediate-queens-release-2f9f305775fca454.yaml b/releasenotes/notes/intermediate-queens-release-2f9f305775fca454.yaml
new file mode 100644
index 0000000..1493b0b
--- /dev/null
+++ b/releasenotes/notes/intermediate-queens-release-2f9f305775fca454.yaml
@@ -0,0 +1,4 @@
+---
+prelude: >
+ This is an intermediate release during the Queens development cycle to
+ make new functionality available to plugins and other consumers.
diff --git a/releasenotes/notes/list-auth-domains-v3-endpoint-9ec60c7d3011c397.yaml b/releasenotes/notes/list-auth-domains-v3-endpoint-9ec60c7d3011c397.yaml
new file mode 100644
index 0000000..0f104cf
--- /dev/null
+++ b/releasenotes/notes/list-auth-domains-v3-endpoint-9ec60c7d3011c397.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add the ``list_auth_domains`` API endpoint to the identity v3 client. This
+ makes it possible to list all domains a user has access to
+ via role assignments.
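
Usage sketch only; the client attribute holding the new call is an assumption here::

    # Returns the domains the authenticated user can scope a token to,
    # based on its role assignments.
    domains = self.domains_client.list_auth_domains()['domains']
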
diff --git a/releasenotes/notes/make-account-client-as-stable-interface-d1b07c7e8f17bef6.yaml b/releasenotes/notes/make-object-storage-client-as-stable-interface-d1b07c7e8f17bef6.yaml
similarity index 84%
rename from releasenotes/notes/make-account-client-as-stable-interface-d1b07c7e8f17bef6.yaml
rename to releasenotes/notes/make-object-storage-client-as-stable-interface-d1b07c7e8f17bef6.yaml
index 9d5a1f5..2bba952 100644
--- a/releasenotes/notes/make-account-client-as-stable-interface-d1b07c7e8f17bef6.yaml
+++ b/releasenotes/notes/make-object-storage-client-as-stable-interface-d1b07c7e8f17bef6.yaml
@@ -7,3 +7,5 @@
without any maintenance changes.
* account_client
+ * container_client
+ * object_client
diff --git a/releasenotes/notes/remove-deprecated-apis-from-v2-volumes-client-3ca4a5db5fea518f.yaml b/releasenotes/notes/remove-deprecated-apis-from-v2-volumes-client-3ca4a5db5fea518f.yaml
new file mode 100644
index 0000000..c75da2e
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-apis-from-v2-volumes-client-3ca4a5db5fea518f.yaml
@@ -0,0 +1,11 @@
+---
+upgrade:
+ - |
+ Remove the following deprecated APIs from the volume v2 volumes_client;
+ they are now provided by the volume v2 transfers_client.
+
+ * create_volume_transfer
+ * show_volume_transfer
+ * list_volume_transfers
+ * delete_volume_transfer
+ * accept_volume_transfer
diff --git a/releasenotes/notes/remove-deprecated-volume-apis-from-v2-volumes-client-cf35e5b4cca89860.yaml b/releasenotes/notes/remove-deprecated-volume-apis-from-v2-volumes-client-cf35e5b4cca89860.yaml
new file mode 100644
index 0000000..12ac5b5
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-volume-apis-from-v2-volumes-client-cf35e5b4cca89860.yaml
@@ -0,0 +1,7 @@
+---
+upgrade:
+ - |
+ Remove deprecated APIs (``show_pools`` and ``show_backend_capabilities``)
+ from the volume v2 volumes_client; they are now provided by the
+ volume v2 scheduler_stats_client (``list_pools``) and capabilities_client
+ (``show_backend_capabilities``) respectively.
diff --git a/releasenotes/notes/remove-get-ipv6-addr-by-EUI64-c79972d799c7a430.yaml b/releasenotes/notes/remove-get-ipv6-addr-by-EUI64-c79972d799c7a430.yaml
new file mode 100644
index 0000000..609000c
--- /dev/null
+++ b/releasenotes/notes/remove-get-ipv6-addr-by-EUI64-c79972d799c7a430.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ Remove the deprecated get_ipv6_addr_by_EUI64 method from data_utils.
+ Use the same method from oslo_utils.netutils instead.
diff --git a/releasenotes/notes/test-clients-stable-for-plugin-90b1e7dc83f28ccd.yaml b/releasenotes/notes/test-clients-stable-for-plugin-90b1e7dc83f28ccd.yaml
new file mode 100644
index 0000000..e27ee33
--- /dev/null
+++ b/releasenotes/notes/test-clients-stable-for-plugin-90b1e7dc83f28ccd.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+ Two extra modules are now marked as stable for plugins: test.py and clients.py.
+ The former includes the test base class with its automatic credential
+ provisioning and test resource managing fixtures.
+ The latter is built on top of ServiceClients and adds aliases and a few custom
+ configurations to it.
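
A skeleton of a plugin test consuming the newly stable modules; class and attribute
names below are illustrative::

    from tempest import test

    class MyPluginTest(test.BaseTestCase):

        # The base class provisions these credentials automatically.
        credentials = ['primary']

        @classmethod
        def setup_clients(cls):
            super(MyPluginTest, cls).setup_clients()
            # cls.os_primary is a tempest.clients.Manager, i.e. a
            # ServiceClients with Tempest's aliases and configuration.
            cls.servers_client = cls.os_primary.servers_client
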
diff --git a/releasenotes/source/conf.py b/releasenotes/source/conf.py
index ae3dca1..57ec7e1 100644
--- a/releasenotes/source/conf.py
+++ b/releasenotes/source/conf.py
@@ -65,16 +65,12 @@
project = u'tempest Release Notes'
copyright = u'2016, tempest Developers'
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-from tempest.version import version_info as tempest_version
+# Release notes do not need a version number in the title, as they
+# cover multiple versions.
# The full version, including alpha/beta/rc tags.
-release = tempest_version.version_string_with_vcs()
+release = ''
# The short X.Y version.
-version = tempest_version.canonical_version_string()
+version = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/requirements.txt b/requirements.txt
index 911f0e5..023148b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,9 +2,9 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
-cliff>=2.8.0 # Apache-2.0
+cliff!=2.9.0,>=2.8.0 # Apache-2.0
jsonschema<3.0.0,>=2.6.0 # MIT
-testtools>=1.4.0 # MIT
+testtools>=2.2.0 # MIT
paramiko>=2.0.0 # LGPLv2.1+
netaddr>=0.7.18 # BSD
testrepository>=0.0.18 # Apache-2.0/BSD
@@ -12,11 +12,11 @@
oslo.config>=4.6.0 # Apache-2.0
oslo.log>=3.30.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
-oslo.utils>=3.28.0 # Apache-2.0
-six>=1.9.0 # MIT
+oslo.utils>=3.31.0 # Apache-2.0
+six>=1.10.0 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
PyYAML>=3.10 # MIT
-python-subunit>=0.0.18 # Apache-2.0/BSD
+python-subunit>=1.0.0 # Apache-2.0/BSD
stevedore>=1.20.0 # Apache-2.0
PrettyTable<0.8,>=0.7.1 # BSD
os-testr>=1.0.0 # Apache-2.0
diff --git a/roles/acl-devstack-files/README.rst b/roles/acl-devstack-files/README.rst
new file mode 100644
index 0000000..76e7e58
--- /dev/null
+++ b/roles/acl-devstack-files/README.rst
@@ -0,0 +1,10 @@
+Grant global read access to the devstack `files` folder.
+
+This is handy for granting the `tempest` user access to VM images for testing.
+
+**Role Variables**
+
+.. zuul:rolevar:: devstack_data_dir
+ :default: /opt/stack/data
+
+ The devstack data directory.
diff --git a/roles/acl-devstack-files/defaults/main.yaml b/roles/acl-devstack-files/defaults/main.yaml
new file mode 100644
index 0000000..14265f0
--- /dev/null
+++ b/roles/acl-devstack-files/defaults/main.yaml
@@ -0,0 +1 @@
+devstack_data_dir: /opt/stack/data
diff --git a/roles/acl-devstack-files/tasks/main.yaml b/roles/acl-devstack-files/tasks/main.yaml
new file mode 100644
index 0000000..b3eeec7
--- /dev/null
+++ b/roles/acl-devstack-files/tasks/main.yaml
@@ -0,0 +1,6 @@
+- name: Grant global read access to devstack files
+ file:
+ path: "{{devstack_data_dir}}/files"
+ mode: "o+rx"
+ recurse: yes
+ become: yes
diff --git a/roles/run-tempest/README.rst b/roles/run-tempest/README.rst
new file mode 100644
index 0000000..a75fc31
--- /dev/null
+++ b/roles/run-tempest/README.rst
@@ -0,0 +1,18 @@
+Run Tempest
+
+**Role Variables**
+
+.. zuul:rolevar:: devstack_base_dir
+ :default: /opt/stack
+
+ The devstack base directory.
+
+.. zuul:rolevar:: tempest_concurrency
+ :default: 0
+
+ The number of parallel test processes.
+
+.. zuul:rolevar:: tox_venvlist
+ :default: smoke
+
+ The Tempest tox environment to run.
diff --git a/roles/run-tempest/defaults/main.yaml b/roles/run-tempest/defaults/main.yaml
new file mode 100644
index 0000000..e1e81da
--- /dev/null
+++ b/roles/run-tempest/defaults/main.yaml
@@ -0,0 +1,2 @@
+devstack_base_dir: /opt/stack
+tox_venvlist: smoke
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
new file mode 100644
index 0000000..d079513
--- /dev/null
+++ b/roles/run-tempest/tasks/main.yaml
@@ -0,0 +1,28 @@
+# NOTE(andreaf) The number of vcpus is not available on all systems.
+# See https://github.com/ansible/ansible/issues/30688
+# When not available, we fall back to the hw.logicalcpu value from sysctl
+- name: Get hw.logicalcpu from sysctl
+ shell: sysctl hw.logicalcpu | cut -d' ' -f2
+ register: sysctl_hw_logicalcpu
+ when: ansible_processor_vcpus is not defined
+
+- name: Number of cores
+ set_fact:
+ num_cores: "{{ansible_processor_vcpus|default(sysctl_hw_logicalcpu.stdout)}}"
+
+- name: Set concurrency for cores == 3 or less
+ set_fact:
+ default_concurrency: "{{ num_cores }}"
+ when: num_cores|int <= 3
+
+- name: Limit max concurrency when more than 3 vcpus are available
+ set_fact:
+ default_concurrency: "{{ num_cores|int // 2 }}"
+ when: num_cores|int > 3
+
+- name: Run Tempest
+ command: tox -e {{tox_venvlist}} -- --concurrency={{tempest_concurrency|default(default_concurrency)}}
+ args:
+ chdir: "{{devstack_base_dir}}/tempest"
+ become: true
+ become_user: tempest
diff --git a/roles/setup-tempest-data-dir/README.rst b/roles/setup-tempest-data-dir/README.rst
new file mode 100644
index 0000000..db0b083
--- /dev/null
+++ b/roles/setup-tempest-data-dir/README.rst
@@ -0,0 +1,12 @@
+Set up the `tempest` user as the owner of Tempest's data folder.
+
+Tempest's devstack plugin creates the data folder, but it has no knowledge
+of the `tempest` user, so we need a role to fix ownership on the data folder.
+
+
+**Role Variables**
+
+.. zuul:rolevar:: devstack_data_dir
+ :default: /opt/stack/data
+
+ The devstack data directory.
diff --git a/roles/setup-tempest-data-dir/defaults/main.yaml b/roles/setup-tempest-data-dir/defaults/main.yaml
new file mode 100644
index 0000000..14265f0
--- /dev/null
+++ b/roles/setup-tempest-data-dir/defaults/main.yaml
@@ -0,0 +1 @@
+devstack_data_dir: /opt/stack/data
diff --git a/roles/setup-tempest-data-dir/tasks/main.yaml b/roles/setup-tempest-data-dir/tasks/main.yaml
new file mode 100644
index 0000000..9dd6309
--- /dev/null
+++ b/roles/setup-tempest-data-dir/tasks/main.yaml
@@ -0,0 +1,7 @@
+- name: Set tempest as owner of Tempest data folder
+ file:
+ path: "{{devstack_data_dir}}/tempest"
+ owner: tempest
+ group: stack
+ recurse: yes
+ become: yes
diff --git a/roles/setup-tempest-run-dir/README.rst b/roles/setup-tempest-run-dir/README.rst
new file mode 100644
index 0000000..c8e2339
--- /dev/null
+++ b/roles/setup-tempest-run-dir/README.rst
@@ -0,0 +1,14 @@
+Set up the Tempest run folder.
+
+To support isolation between multiple runs, separate run folders are required.
+Set `tempest` as the owner of Tempest's current run folder.
+There is an implicit assumption here of a one-to-one relationship between
+devstack versions and Tempest runs.
+
+
+**Role Variables**
+
+.. zuul:rolevar:: devstack_base_dir
+ :default: /opt/stack
+
+ The devstack base directory.
diff --git a/roles/setup-tempest-run-dir/defaults/main.yaml b/roles/setup-tempest-run-dir/defaults/main.yaml
new file mode 100644
index 0000000..fea05c8
--- /dev/null
+++ b/roles/setup-tempest-run-dir/defaults/main.yaml
@@ -0,0 +1 @@
+devstack_base_dir: /opt/stack
diff --git a/roles/setup-tempest-run-dir/tasks/main.yaml b/roles/setup-tempest-run-dir/tasks/main.yaml
new file mode 100644
index 0000000..a012d72
--- /dev/null
+++ b/roles/setup-tempest-run-dir/tasks/main.yaml
@@ -0,0 +1,7 @@
+- name: Set tempest as owner of Tempest run folder
+ file:
+ path: "{{devstack_base_dir}}/tempest"
+ owner: tempest
+ group: stack
+ recurse: yes
+ become: yes
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index 41be620..36ff09e 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -27,7 +27,6 @@
def setup_clients(cls):
super(AggregatesAdminNegativeTestJSON, cls).setup_clients()
cls.client = cls.os_admin.aggregates_client
- cls.user_client = cls.aggregates_client
@classmethod
def resource_setup(cls):
@@ -52,7 +51,7 @@
# Regular user is not allowed to create an aggregate.
aggregate_name = data_utils.rand_name(self.aggregate_name_prefix)
self.assertRaises(lib_exc.Forbidden,
- self.user_client.create_aggregate,
+ self.aggregates_client.create_aggregate,
name=aggregate_name)
@decorators.attr(type=['negative'])
@@ -87,7 +86,7 @@
# Regular user is not allowed to delete an aggregate.
aggregate = self._create_test_aggregate()
self.assertRaises(lib_exc.Forbidden,
- self.user_client.delete_aggregate,
+ self.aggregates_client.delete_aggregate,
aggregate['id'])
@decorators.attr(type=['negative'])
@@ -95,7 +94,7 @@
def test_aggregate_list_as_user(self):
# Regular user is not allowed to list aggregates.
self.assertRaises(lib_exc.Forbidden,
- self.user_client.list_aggregates)
+ self.aggregates_client.list_aggregates)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('557cad12-34c9-4ff4-95f0-22f0dfbaf7dc')
@@ -103,7 +102,7 @@
# Regular user is not allowed to get aggregate details.
aggregate = self._create_test_aggregate()
self.assertRaises(lib_exc.Forbidden,
- self.user_client.show_aggregate,
+ self.aggregates_client.show_aggregate,
aggregate['id'])
@decorators.attr(type=['negative'])
@@ -140,7 +139,7 @@
# Regular user is not allowed to add a host to an aggregate.
aggregate = self._create_test_aggregate()
self.assertRaises(lib_exc.Forbidden,
- self.user_client.add_host,
+ self.aggregates_client.add_host,
aggregate['id'], host=self.host)
@decorators.attr(type=['negative'])
@@ -168,7 +167,7 @@
host=self.host)
self.assertRaises(lib_exc.Forbidden,
- self.user_client.remove_host,
+ self.aggregates_client.remove_host,
aggregate['id'], host=self.host)
@decorators.attr(type=['negative'])
diff --git a/tempest/api/compute/admin/test_fixed_ips.py b/tempest/api/compute/admin/test_fixed_ips.py
index ebba73c..66c2c2d 100644
--- a/tempest/api/compute/admin/test_fixed_ips.py
+++ b/tempest/api/compute/admin/test_fixed_ips.py
@@ -42,6 +42,7 @@
super(FixedIPsTestJson, cls).resource_setup()
server = cls.create_test_server(wait_until='ACTIVE')
server = cls.servers_client.show_server(server['id'])['server']
+ cls.ip = None
for ip_set in server['addresses']:
for ip in server['addresses'][ip_set]:
if ip['OS-EXT-IPS:type'] == 'fixed':
@@ -49,6 +50,9 @@
break
if cls.ip:
break
+ if cls.ip is None:
+ raise cls.skipException("No fixed ip found for server: %s"
+ % server['id'])
@decorators.idempotent_id('16b7d848-2f7c-4709-85a3-2dfb4576cc52')
def test_list_fixed_ip_details(self):
diff --git a/tempest/api/compute/admin/test_fixed_ips_negative.py b/tempest/api/compute/admin/test_fixed_ips_negative.py
index a5deb3c..7d41f46 100644
--- a/tempest/api/compute/admin/test_fixed_ips_negative.py
+++ b/tempest/api/compute/admin/test_fixed_ips_negative.py
@@ -43,6 +43,7 @@
super(FixedIPsNegativeTestJson, cls).resource_setup()
server = cls.create_test_server(wait_until='ACTIVE')
server = cls.servers_client.show_server(server['id'])['server']
+ cls.ip = None
for ip_set in server['addresses']:
for ip in server['addresses'][ip_set]:
if ip['OS-EXT-IPS:type'] == 'fixed':
@@ -50,6 +51,9 @@
break
if cls.ip:
break
+ if cls.ip is None:
+ raise cls.skipException("No fixed ip found for server: %s"
+ % server['id'])
@decorators.attr(type=['negative'])
@decorators.idempotent_id('9f17f47d-daad-4adc-986e-12370c93e407')
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 14be947..411159b 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -46,6 +46,18 @@
"Less than 2 compute nodes, skipping migration test.")
@classmethod
+ def setup_credentials(cls):
+ # These tests don't attempt any SSH validation nor do they use
+ # floating IPs on the instance, so all we need is a network and
+ # a subnet so the instance being migrated has a single port, but
+ # we need that to make sure we are properly updating the port
+ # host bindings during the live migration.
+ # TODO(mriedem): SSH validation before and after the instance is
+ # live migrated would be a nice test wrinkle addition.
+ cls.set_network_resources(network=True, subnet=True)
+ super(LiveMigrationTest, cls).setup_credentials()
+
+ @classmethod
def setup_clients(cls):
super(LiveMigrationTest, cls).setup_clients()
cls.admin_migration_client = cls.os_admin.migrations_client
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 683d3e9..705814c 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -262,7 +262,11 @@
image = cls.compute_images_client.create_image(server_id, name=name,
**kwargs)
- image_id = data_utils.parse_image_id(image.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(image.response['location'])
cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
cls.compute_images_client.delete_image,
image_id)
@@ -297,7 +301,7 @@
return image
@classmethod
- def rebuild_server(cls, server_id, validatable=False, **kwargs):
+ def recreate_server(cls, server_id, validatable=False, **kwargs):
"""Destroy an existing class level server and creates a new one
Some test classes use a test server that can be used by multiple
@@ -418,6 +422,23 @@
volume['id'], 'available')
return volume
+ def _detach_volume(self, server, volume):
+ """Helper method to detach a volume.
+
+ Ignores 404 responses if the volume or server do not exist, or the
+ volume is already detached from the server.
+ """
+ try:
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+ # Check the status. You can only detach an in-use volume, otherwise
+ # the compute API will return a 400 response.
+ if volume['status'] == 'in-use':
+ self.servers_client.detach_volume(server['id'], volume['id'])
+ except exceptions.NotFound:
+ # Ignore 404s on detach in case the server is deleted or the volume
+ # is already detached.
+ pass
+
def attach_volume(self, server, volume, device=None, check_reserved=False):
"""Attaches volume to server and waits for 'in-use' volume status.
@@ -445,9 +466,7 @@
self.volumes_client, volume['id'], 'available')
# Ignore 404s on detach in case the server is deleted or the volume
# is already detached.
- self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self.servers_client.detach_volume,
- server['id'], volume['id'])
+ self.addCleanup(self._detach_volume, server, volume)
statuses = ['in-use']
if check_reserved:
statuses.append('reserved')
diff --git a/tempest/api/compute/flavors/test_flavors.py b/tempest/api/compute/flavors/test_flavors.py
index d5bb45a..20294e9 100644
--- a/tempest/api/compute/flavors/test_flavors.py
+++ b/tempest/api/compute/flavors/test_flavors.py
@@ -18,8 +18,6 @@
class FlavorsV2TestJSON(base.BaseV2ComputeTest):
- _min_disk = 'minDisk'
- _min_ram = 'minRam'
@decorators.attr(type='smoke')
@decorators.idempotent_id('e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59')
@@ -89,7 +87,7 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_disk: flavor['disk'] + 1}
+ params = {'minDisk': flavor['disk'] + 1}
flavors = self.flavors_client.list_flavors(detail=True,
**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@@ -100,7 +98,7 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_ram: flavor['ram'] + 1}
+ params = {'minRam': flavor['ram'] + 1}
flavors = self.flavors_client.list_flavors(detail=True,
**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@@ -111,7 +109,7 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_disk: flavor['disk'] + 1}
+ params = {'minDisk': flavor['disk'] + 1}
flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@@ -121,6 +119,6 @@
flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
- params = {self._min_ram: flavor['ram'] + 1}
+ params = {'minRam': flavor['ram'] + 1}
flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
diff --git a/tempest/api/compute/floating_ips/base.py b/tempest/api/compute/floating_ips/base.py
index 142eaec..262a3c1 100644
--- a/tempest/api/compute/floating_ips/base.py
+++ b/tempest/api/compute/floating_ips/base.py
@@ -14,6 +14,10 @@
# under the License.
from tempest.api.compute import base
+from tempest.common import utils
+from tempest import config
+
+CONF = config.CONF
class BaseFloatingIPsTest(base.BaseV2ComputeTest):
@@ -24,3 +28,17 @@
cls.set_network_resources(network=True, subnet=True,
router=True, dhcp=True)
super(BaseFloatingIPsTest, cls).setup_credentials()
+
+ @classmethod
+ def skip_checks(cls):
+ super(BaseFloatingIPsTest, cls).skip_checks()
+ if not utils.get_service_list()['network']:
+ raise cls.skipException("network service not enabled.")
+ if not CONF.network_feature_enabled.floating_ips:
+ raise cls.skipException("Floating ips are not available")
+
+ @classmethod
+ def setup_clients(cls):
+ super(BaseFloatingIPsTest, cls).setup_clients()
+ cls.client = cls.floating_ips_client
+ cls.pools_client = cls.floating_ip_pools_client
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions.py b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
index 86e244b..2adc482 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
@@ -16,7 +16,6 @@
import testtools
from tempest.api.compute.floating_ips import base
-from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
@@ -26,35 +25,8 @@
class FloatingIPsTestJSON(base.BaseFloatingIPsTest):
- server_id = None
- floating_ip = None
- @classmethod
- def skip_checks(cls):
- super(FloatingIPsTestJSON, cls).skip_checks()
- if not utils.get_service_list()['network']:
- raise cls.skipException("network service not enabled.")
- if not CONF.network_feature_enabled.floating_ips:
- raise cls.skipException("Floating ips are not available")
-
- @classmethod
- def setup_clients(cls):
- super(FloatingIPsTestJSON, cls).setup_clients()
- cls.client = cls.floating_ips_client
-
- @classmethod
- def resource_setup(cls):
- super(FloatingIPsTestJSON, cls).resource_setup()
-
- # Server creation
- server = cls.create_test_server(wait_until='ACTIVE')
- cls.server_id = server['id']
- # Floating IP creation
- body = cls.client.create_floating_ip(
- pool=CONF.network.floating_network_name)['floating_ip']
- cls.addClassResourceCleanup(cls.client.delete_floating_ip, body['id'])
- cls.floating_ip_id = body['id']
- cls.floating_ip = body['ip']
+ max_microversion = '2.35'
@decorators.idempotent_id('f7bfb946-297e-41b8-9e8c-aba8e9bb5194')
def test_allocate_floating_ip(self):
@@ -85,6 +57,25 @@
# Check it was really deleted.
self.client.wait_for_resource_deletion(floating_ip_body['id'])
+
+class FloatingIPsAssociationTestJSON(base.BaseFloatingIPsTest):
+
+ max_microversion = '2.43'
+
+ @classmethod
+ def resource_setup(cls):
+ super(FloatingIPsAssociationTestJSON, cls).resource_setup()
+
+ # Server creation
+ cls.server = cls.create_test_server(wait_until='ACTIVE')
+ cls.server_id = cls.server['id']
+ # Floating IP creation
+ body = cls.client.create_floating_ip(
+ pool=CONF.network.floating_network_name)['floating_ip']
+ cls.addClassResourceCleanup(cls.client.delete_floating_ip, body['id'])
+ cls.floating_ip_id = body['id']
+ cls.floating_ip = body['ip']
+
@decorators.idempotent_id('307efa27-dc6f-48a0-8cd2-162ce3ef0b52')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py b/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
index c3d7816..9257458 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
@@ -16,7 +16,6 @@
import testtools
from tempest.api.compute.floating_ips import base
-from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
@@ -27,26 +26,12 @@
class FloatingIPsNegativeTestJSON(base.BaseFloatingIPsTest):
- @classmethod
- def skip_checks(cls):
- super(FloatingIPsNegativeTestJSON, cls).skip_checks()
- if not utils.get_service_list()['network']:
- raise cls.skipException("network service not enabled.")
- if not CONF.network_feature_enabled.floating_ips:
- raise cls.skipException("Floating ips are not available")
-
- @classmethod
- def setup_clients(cls):
- super(FloatingIPsNegativeTestJSON, cls).setup_clients()
- cls.client = cls.floating_ips_client
+ max_microversion = '2.35'
@classmethod
def resource_setup(cls):
super(FloatingIPsNegativeTestJSON, cls).resource_setup()
- # Server creation
- server = cls.create_test_server(wait_until='ACTIVE')
- cls.server_id = server['id']
# Generating a nonexistent floatingIP id
body = cls.client.list_floating_ips()['floating_ips']
floating_ip_ids = [floating_ip['id'] for floating_ip in body]
@@ -77,6 +62,17 @@
self.assertRaises(lib_exc.NotFound, self.client.delete_floating_ip,
self.non_exist_id)
+
+class FloatingIPsAssociationNegativeTestJSON(base.BaseFloatingIPsTest):
+
+ max_microversion = '2.43'
+
+ @classmethod
+ def resource_setup(cls):
+ super(FloatingIPsAssociationNegativeTestJSON, cls).resource_setup()
+ cls.server = cls.create_test_server(wait_until='ACTIVE')
+ cls.server_id = cls.server['id']
+
@decorators.attr(type=['negative'])
@decorators.idempotent_id('595fa616-1a71-4670-9614-46564ac49a4c')
def test_associate_nonexistent_floating_ip(self):
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips.py b/tempest/api/compute/floating_ips/test_list_floating_ips.py
index 516c544..944f798 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips.py
@@ -13,29 +13,16 @@
# License for the specific language governing permissions and limitations
# under the License.
-from tempest.api.compute import base
-from tempest.common import utils
+from tempest.api.compute.floating_ips import base
from tempest import config
from tempest.lib import decorators
CONF = config.CONF
-class FloatingIPDetailsTestJSON(base.BaseV2ComputeTest):
+class FloatingIPDetailsTestJSON(base.BaseFloatingIPsTest):
- @classmethod
- def skip_checks(cls):
- super(FloatingIPDetailsTestJSON, cls).skip_checks()
- if not utils.get_service_list()['network']:
- raise cls.skipException("network service not enabled.")
- if not CONF.network_feature_enabled.floating_ips:
- raise cls.skipException("Floating ips are not available")
-
- @classmethod
- def setup_clients(cls):
- super(FloatingIPDetailsTestJSON, cls).setup_clients()
- cls.client = cls.floating_ips_client
- cls.pools_client = cls.floating_ip_pools_client
+ max_microversion = '2.35'
@classmethod
def resource_setup(cls):
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py b/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
index 0ade872..d69248c 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
@@ -13,8 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-from tempest.api.compute import base
-from tempest.common import utils
+from tempest.api.compute.floating_ips import base
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
@@ -23,20 +22,9 @@
CONF = config.CONF
-class FloatingIPDetailsNegativeTestJSON(base.BaseV2ComputeTest):
+class FloatingIPDetailsNegativeTestJSON(base.BaseFloatingIPsTest):
- @classmethod
- def skip_checks(cls):
- super(FloatingIPDetailsNegativeTestJSON, cls).skip_checks()
- if not utils.get_service_list()['network']:
- raise cls.skipException("network service not enabled.")
- if not CONF.network_feature_enabled.floating_ips:
- raise cls.skipException("Floating ips are not available")
-
- @classmethod
- def setup_clients(cls):
- super(FloatingIPDetailsNegativeTestJSON, cls).setup_clients()
- cls.client = cls.floating_ips_client
+ max_microversion = '2.35'
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7ab18834-4a4b-4f28-a2c5-440579866695')
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index e62e25e..058e7e6 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -15,6 +15,7 @@
from tempest.api.compute import base
from tempest import config
+from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
@@ -86,5 +87,9 @@
# 4 byte utf-8 character.
utf8_name = data_utils.rand_name(b'\xe2\x82\xa1'.decode('utf-8'))
body = self.client.create_image(self.server_id, name=utf8_name)
- image_id = data_utils.parse_image_id(body.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", body.response, "lt"):
+ image_id = body['image_id']
+ else:
+ image_id = data_utils.parse_image_id(body.response['location'])
self.addCleanup(self.client.delete_image, image_id)
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 7ecfa0a..a2e58c9 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -19,6 +19,7 @@
from tempest.api.compute import base
from tempest.common import waiters
from tempest import config
+from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -51,7 +52,7 @@
self._reset_server()
def _reset_server(self):
- self.__class__.server_id = self.rebuild_server(self.server_id)
+ self.__class__.server_id = self.recreate_server(self.server_id)
@classmethod
def skip_checks(cls):
@@ -105,7 +106,11 @@
self.assertRaises(lib_exc.Conflict, self.create_image_from_server,
self.server_id)
- image_id = data_utils.parse_image_id(image.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(image.response['location'])
self.client.delete_image(image_id)
@decorators.attr(type=['negative'])
@@ -123,7 +128,11 @@
# Return an error while trying to delete an image what is creating
image = self.create_image_from_server(self.server_id)
- image_id = data_utils.parse_image_id(image.response['location'])
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(image.response['location'])
self.addCleanup(self._reset_server)
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 4cfc665..bce7524 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -55,7 +55,7 @@
self.__class__.server_id = server['id']
except Exception:
# Rebuild server if something happened to it during a test
- self.__class__.server_id = self.rebuild_server(
+ self.__class__.server_id = self.recreate_server(
self.server_id, validatable=True)
def tearDown(self):
@@ -75,7 +75,7 @@
@classmethod
def resource_setup(cls):
super(ServerActionsTestJSON, cls).resource_setup()
- cls.server_id = cls.rebuild_server(None, validatable=True)
+ cls.server_id = cls.recreate_server(None, validatable=True)
@decorators.idempotent_id('6158df09-4b82-4ab3-af6d-29cf36af858d')
@testtools.skipUnless(CONF.compute_feature_enabled.change_password,
@@ -281,45 +281,61 @@
self.assertEqual(self.server_id,
vol_after_rebuild['attachments'][0]['server_id'])
- def _test_resize_server_confirm(self, stop=False):
+ def _test_resize_server_confirm(self, server_id, stop=False):
# The server's RAM and disk space should be modified to that of
# the provided flavor
if stop:
- self.client.stop_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
+ self.client.stop_server(server_id)
+ waiters.wait_for_server_status(self.client, server_id,
'SHUTOFF')
- self.client.resize_server(self.server_id, self.flavor_ref_alt)
+ self.client.resize_server(server_id, self.flavor_ref_alt)
# NOTE(jlk): Explicitly delete the server to get a new one for later
# tests. Avoids resize down race issues.
- self.addCleanup(self.delete_server, self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
+ self.addCleanup(self.delete_server, server_id)
+ waiters.wait_for_server_status(self.client, server_id,
'VERIFY_RESIZE')
- self.client.confirm_resize_server(self.server_id)
+ self.client.confirm_resize_server(server_id)
expected_status = 'SHUTOFF' if stop else 'ACTIVE'
- waiters.wait_for_server_status(self.client, self.server_id,
+ waiters.wait_for_server_status(self.client, server_id,
expected_status)
- server = self.client.show_server(self.server_id)['server']
+ server = self.client.show_server(server_id)['server']
self.assertEqual(self.flavor_ref_alt, server['flavor']['id'])
if stop:
# NOTE(mriedem): tearDown requires the server to be started.
- self.client.start_server(self.server_id)
+ self.client.start_server(server_id)
@decorators.idempotent_id('1499262a-9328-4eda-9068-db1ac57498d2')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
def test_resize_server_confirm(self):
- self._test_resize_server_confirm(stop=False)
+ self._test_resize_server_confirm(self.server_id, stop=False)
+
+ @decorators.idempotent_id('e6c28180-7454-4b59-b188-0257af08a63b')
+ @decorators.related_bug('1728603')
+ @testtools.skipUnless(CONF.compute_feature_enabled.resize,
+ 'Resize not available.')
+ @utils.services('volume')
+ def test_resize_volume_backed_server_confirm(self):
+ # We have to create a new server that is volume-backed since the one
+ # from setUp is not volume-backed.
+ server = self.create_test_server(
+ volume_backed=True, wait_until='ACTIVE')
+ self._test_resize_server_confirm(server['id'])
+ # Now do something interactive with the guest like get its console
+ # output; we don't actually care about the output, just that it doesn't
+ # raise an error.
+ self.client.get_console_output(server['id'])
@decorators.idempotent_id('138b131d-66df-48c9-a171-64f45eb92962')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
def test_resize_server_confirm_from_stopped(self):
- self._test_resize_server_confirm(stop=True)
+ self._test_resize_server_confirm(self.server_id, stop=True)
@decorators.idempotent_id('c03aab19-adb1-44f5-917d-c419577e9e68')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
diff --git a/tempest/api/compute/servers/test_server_personality.py b/tempest/api/compute/servers/test_server_personality.py
index 2f0f5ee..6f32b46 100644
--- a/tempest/api/compute/servers/test_server_personality.py
+++ b/tempest/api/compute/servers/test_server_personality.py
@@ -44,7 +44,6 @@
def setup_clients(cls):
super(ServerPersonalityTestJSON, cls).setup_clients()
cls.client = cls.servers_client
- cls.user_client = cls.limits_client
@decorators.idempotent_id('3cfe87fd-115b-4a02-b942-7dc36a337fdf')
def test_create_server_with_personality(self):
@@ -104,7 +103,7 @@
# number of files are injected into the server.
file_contents = 'This is a test file.'
personality = []
- limits = self.user_client.show_limits()['limits']
+ limits = self.limits_client.show_limits()['limits']
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
@@ -123,7 +122,7 @@
# Server should be created successfully if maximum allowed number of
# files is injected into the server during creation.
file_contents = 'This is a test file.'
- limits = self.user_client.show_limits()['limits']
+ limits = self.limits_client.show_limits()['limits']
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index 8170b28..d067bb3 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -37,7 +37,7 @@
waiters.wait_for_server_status(self.client, self.server_id,
'ACTIVE')
except Exception:
- self.__class__.server_id = self.rebuild_server(self.server_id)
+ self.__class__.server_id = self.recreate_server(self.server_id)
def tearDown(self):
self.server_check_teardown()
@@ -551,7 +551,7 @@
waiters.wait_for_server_status(self.servers_client, self.server_id,
'ACTIVE')
except Exception:
- self.__class__.server_id = self.rebuild_server(self.server_id)
+ self.__class__.server_id = self.recreate_server(self.server_id)
@classmethod
def setup_clients(cls):
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
index 59fc4d8..947706e 100644
--- a/tempest/api/identity/admin/v2/test_endpoints.py
+++ b/tempest/api/identity/admin/v2/test_endpoints.py
@@ -23,15 +23,15 @@
@classmethod
def resource_setup(cls):
super(EndPointsTestJSON, cls).resource_setup()
- cls.service_ids = list()
s_name = data_utils.rand_name('service')
s_type = data_utils.rand_name('type')
s_description = data_utils.rand_name('description')
service_data = cls.services_client.create_service(
name=s_name, type=s_type,
description=s_description)['OS-KSADM:service']
+ cls.addClassResourceCleanup(cls.services_client.delete_service,
+ service_data['id'])
cls.service_id = service_data['id']
- cls.service_ids.append(cls.service_id)
# Create endpoints so as to use for LIST and GET test cases
cls.setup_endpoints = list()
for _ in range(2):
@@ -43,18 +43,12 @@
publicurl=url,
adminurl=url,
internalurl=url)['endpoint']
+ cls.addClassResourceCleanup(cls.endpoints_client.delete_endpoint,
+ endpoint['id'])
# list_endpoints() will return 'enabled' field
endpoint['enabled'] = True
cls.setup_endpoints.append(endpoint)
- @classmethod
- def resource_cleanup(cls):
- for e in cls.setup_endpoints:
- cls.endpoints_client.delete_endpoint(e['id'])
- for s in cls.service_ids:
- cls.services_client.delete_service(s)
- super(EndPointsTestJSON, cls).resource_cleanup()
-
@decorators.idempotent_id('11f590eb-59d8-4067-8b2b-980c7f387f51')
def test_list_endpoints(self):
# Get a list of endpoints
diff --git a/tempest/api/identity/admin/v2/test_roles.py b/tempest/api/identity/admin/v2/test_roles.py
index 124bb5f..9736a76 100644
--- a/tempest/api/identity/admin/v2/test_roles.py
+++ b/tempest/api/identity/admin/v2/test_roles.py
@@ -28,14 +28,11 @@
for _ in range(5):
role_name = data_utils.rand_name(name='role')
role = cls.roles_client.create_role(name=role_name)['role']
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ cls.roles_client.delete_role, role['id'])
cls.roles.append(role)
- @classmethod
- def resource_cleanup(cls):
- super(RolesTestJSON, cls).resource_cleanup()
- for role in cls.roles:
- cls.roles_client.delete_role(role['id'])
-
def _get_role_params(self):
user = self.setup_test_user()
tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
diff --git a/tempest/api/identity/admin/v3/test_domains.py b/tempest/api/identity/admin/v3/test_domains.py
index bf04ede..ca6b03e 100644
--- a/tempest/api/identity/admin/v3/test_domains.py
+++ b/tempest/api/identity/admin/v3/test_domains.py
@@ -34,19 +34,6 @@
domain = cls.create_domain(enabled=i < 2)
cls.setup_domains.append(domain)
- @classmethod
- def resource_cleanup(cls):
- for domain in cls.setup_domains:
- cls._delete_domain(domain['id'])
- super(DomainsTestJSON, cls).resource_cleanup()
-
- @classmethod
- def _delete_domain(cls, domain_id):
- # It is necessary to disable the domain before deleting,
- # or else it would result in unauthorized error
- cls.domains_client.update_domain(domain_id, enabled=False)
- cls.domains_client.delete_domain(domain_id)
-
@decorators.idempotent_id('8cf516ef-2114-48f1-907b-d32726c734d4')
def test_list_domains(self):
# Test to list domains
@@ -92,7 +79,7 @@
domain = self.domains_client.create_domain(
name=d_name, description=d_desc)['domain']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self._delete_domain, domain['id'])
+ self.delete_domain, domain['id'])
self.assertIn('description', domain)
self.assertIn('name', domain)
self.assertIn('enabled', domain)
@@ -145,7 +132,7 @@
# Create domain only with name
d_name = data_utils.rand_name('domain')
domain = self.domains_client.create_domain(name=d_name)['domain']
- self.addCleanup(self._delete_domain, domain['id'])
+ self.addCleanup(self.delete_domain, domain['id'])
expected_data = {'name': d_name, 'enabled': True}
self.assertEqual('', domain['description'])
self.assertDictContainsSubset(expected_data, domain)
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 17db3ea..507810b 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -28,13 +28,6 @@
super(GroupsV3TestJSON, cls).resource_setup()
cls.domain = cls.create_domain()
- @classmethod
- def resource_cleanup(cls):
- # Cleanup the domains created in the setup
- cls.domains_client.update_domain(cls.domain['id'], enabled=False)
- cls.domains_client.delete_domain(cls.domain['id'])
- super(GroupsV3TestJSON, cls).resource_cleanup()
-
@decorators.idempotent_id('2e80343b-6c81-4ac3-88c7-452f3e9d5129')
def test_group_create_update_get(self):
name = data_utils.rand_name('Group')
diff --git a/tempest/api/identity/admin/v3/test_inherits.py b/tempest/api/identity/admin/v3/test_inherits.py
index 8b687cd..c0c79b9 100644
--- a/tempest/api/identity/admin/v3/test_inherits.py
+++ b/tempest/api/identity/admin/v3/test_inherits.py
@@ -49,8 +49,6 @@
cls.groups_client.delete_group(cls.group['id'])
cls.users_client.delete_user(cls.user['id'])
cls.projects_client.delete_project(cls.project['id'])
- cls.domains_client.update_domain(cls.domain['id'], enabled=False)
- cls.domains_client.delete_domain(cls.domain['id'])
super(InheritsV3TestJSON, cls).resource_cleanup()
def _list_assertions(self, body, fetched_role_ids, role_id):
diff --git a/tempest/api/identity/admin/v3/test_list_projects.py b/tempest/api/identity/admin/v3/test_list_projects.py
index 7e70c14..25dd52b 100644
--- a/tempest/api/identity/admin/v3/test_list_projects.py
+++ b/tempest/api/identity/admin/v3/test_list_projects.py
@@ -51,9 +51,6 @@
# Cleanup the projects created during setup in inverse order
for project in reversed(cls.projects):
cls.projects_client.delete_project(project['id'])
- # Cleanup the domain created during setup
- cls.domains_client.update_domain(cls.domain['id'], enabled=False)
- cls.domains_client.delete_domain(cls.domain['id'])
super(ListProjectsTestJSON, cls).resource_cleanup()
@decorators.idempotent_id('1d830662-22ad-427c-8c3e-4ec854b0af44')
diff --git a/tempest/api/identity/admin/v3/test_list_users.py b/tempest/api/identity/admin/v3/test_list_users.py
index 506c729..88cd8be 100644
--- a/tempest/api/identity/admin/v3/test_list_users.py
+++ b/tempest/api/identity/admin/v3/test_list_users.py
@@ -60,9 +60,6 @@
# Cleanup the users created during setup
for user in cls.users:
cls.users_client.delete_user(user['id'])
- # Cleanup the domain created during setup
- cls.domains_client.update_domain(cls.domain['id'], enabled=False)
- cls.domains_client.delete_domain(cls.domain['id'])
super(UsersV3TestJSON, cls).resource_cleanup()
@decorators.idempotent_id('08f9aabb-dcfe-41d0-8172-82b5fa0bd73d')
diff --git a/tempest/api/identity/admin/v3/test_oauth_consumers.py b/tempest/api/identity/admin/v3/test_oauth_consumers.py
index 970ead3..062cce5 100644
--- a/tempest/api/identity/admin/v3/test_oauth_consumers.py
+++ b/tempest/api/identity/admin/v3/test_oauth_consumers.py
@@ -17,7 +17,7 @@
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest.lib import exceptions as exceptions
+from tempest.lib import exceptions
class OAUTHConsumersV3Test(base.BaseIdentityV3AdminTest):
diff --git a/tempest/api/identity/admin/v3/test_projects.py b/tempest/api/identity/admin/v3/test_projects.py
index 1b1d3f7..ac23067 100644
--- a/tempest/api/identity/admin/v3/test_projects.py
+++ b/tempest/api/identity/admin/v3/test_projects.py
@@ -87,7 +87,8 @@
# project and domain APIs
projects_list = self.projects_client.list_projects(
params={'is_domain': True})['projects']
- self.assertIn(project, projects_list)
+ project_ids = [p['id'] for p in projects_list]
+ self.assertIn(project['id'], project_ids)
# The domains API return different attributes for the entity, so we
# compare the entities IDs
@@ -205,3 +206,31 @@
self.assertEqual(project['id'],
new_user_get['project_id'])
self.assertEqual(u_email, new_user_get['email'])
+
+ @decorators.idempotent_id('d1db68b6-aebe-4fa0-b79d-d724d2e21162')
+ def test_project_get_equals_list(self):
+ fields = ['parent_id', 'is_domain', 'description', 'links',
+ 'name', 'enabled', 'domain_id', 'id', 'tags']
+
+ # Tags must be unique; the keystone API rejects duplicates
+ tags = ['a', 'c', 'b', 'd']
+
+ # Create a project; cleanup is handled by the helper
+ project = self.setup_test_project(tags=tags)
+
+ # Show and list for the project
+ project_get = self.projects_client.show_project(
+ project['id'])['project']
+ _projects = self.projects_client.list_projects()['projects']
+ project_list = next(x for x in _projects if x['id'] == project['id'])
+
+ # Assert the expected set of fields is returned (checking project_get
+ # alone is enough; the dict comparison below covers the list entry)
+ self.assertSetEqual(set(fields), set(project_get.keys()))
+
+ # Ensure the set of tags is identical and match the expected one
+ get_tags = set(project_get.pop("tags"))
+ self.assertSetEqual(get_tags, set(project_list.pop("tags")))
+ self.assertSetEqual(get_tags, set(tags))
+
+ # Ensure all other fields are identical
+ self.assertDictEqual(project_get, project_list)
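Note on the assertions above: tag ordering in the two API views is not guaranteed, so the tags are popped and compared as sets before the remaining fields are compared as plain dicts. A standalone sketch of the same comparison with hypothetical data (plain Python, no Tempest required)::

    # Hypothetical "show" and "list" views of the same project.
    project_get = {'id': 'p1', 'name': 'demo', 'tags': ['a', 'c', 'b', 'd']}
    project_list = {'id': 'p1', 'name': 'demo', 'tags': ['d', 'b', 'c', 'a']}

    # Compare the order-insensitive field as a set first ...
    get_tags = set(project_get.pop('tags'))
    assert get_tags == set(project_list.pop('tags'))
    assert get_tags == {'a', 'b', 'c', 'd'}

    # ... then the remaining fields must match exactly.
    assert project_get == project_list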
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index ec904e6..e7b005c 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -58,10 +58,6 @@
cls.groups_client.delete_group(cls.group_body['id'])
cls.users_client.delete_user(cls.user_body['id'])
cls.projects_client.delete_project(cls.project['id'])
- # NOTE(harika-vakadi): It is necessary to disable the domain
- # before deleting,or else it would result in unauthorized error
- cls.domains_client.update_domain(cls.domain['id'], enabled=False)
- cls.domains_client.delete_domain(cls.domain['id'])
for role in cls.roles:
cls.roles_client.delete_role(role['id'])
super(RolesV3TestJSON, cls).resource_cleanup()
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 6343ea8..0845407 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -26,6 +26,8 @@
class TokensV3TestJSON(base.BaseIdentityV3AdminTest):
+ credentials = ['primary', 'admin', 'alt']
+
@decorators.idempotent_id('0f9f5a5f-d5cd-4a86-8a5b-c5ded151f212')
def test_tokens(self):
# Valid user's token is authenticated
@@ -163,12 +165,78 @@
# Get available project scopes
available_projects = self.client.list_auth_projects()['projects']
- # create list to save fetched project's id
+ # Create list to save fetched project IDs
fetched_project_ids = [i['id'] for i in available_projects]
# verifying the project ids in list
missing_project_ids = \
[p for p in assigned_project_ids if p not in fetched_project_ids]
self.assertEmpty(missing_project_ids,
- "Failed to find project_id %s in fetched list" %
+ "Failed to find project_ids %s in fetched list" %
', '.join(missing_project_ids))
+
+ @decorators.idempotent_id('ec5ecb05-af64-4c04-ac86-4d9f6f12f185')
+ def test_get_available_domain_scopes(self):
+ # Verify that listing domain scopes works for a user who either has a
+ # domain role directly or belongs to a group that has one. The admin
+ # client is used to grant the roles to the alt user (who performs the
+ # API calls under test) so those calls do not fail with 401
+ # Unauthorized.
+ alt_user_id = self.os_alt.credentials.user_id
+
+ def _create_user_domain_role_for_alt_user():
+ domain_id = self.setup_test_domain()['id']
+ role_id = self.setup_test_role()['id']
+
+ # Create a role association between the user and domain.
+ self.roles_client.create_user_role_on_domain(
+ domain_id, alt_user_id, role_id)
+ self.addCleanup(
+ self.roles_client.delete_role_from_user_on_domain,
+ domain_id, alt_user_id, role_id)
+
+ return domain_id
+
+ def _create_group_domain_role_for_alt_user():
+ domain_id = self.setup_test_domain()['id']
+ role_id = self.setup_test_role()['id']
+
+ # Create a group.
+ group_name = data_utils.rand_name('Group')
+ group_id = self.groups_client.create_group(
+ name=group_name, domain_id=domain_id)['group']['id']
+ self.addCleanup(self.groups_client.delete_group, group_id)
+
+ # Add the alt user to the group.
+ self.groups_client.add_group_user(group_id, alt_user_id)
+ self.addCleanup(self.groups_client.delete_group_user,
+ group_id, alt_user_id)
+
+ # Create a role association between the group and domain.
+ self.roles_client.create_group_role_on_domain(
+ domain_id, group_id, role_id)
+ self.addCleanup(
+ self.roles_client.delete_role_from_group_on_domain,
+ domain_id, group_id, role_id)
+
+ return domain_id
+
+ # Give the alt user two direct domain role assignments and two
+ # group-based ones, each on its own randomly created domain and role.
+ assigned_domain_ids = []
+ for _ in range(2):
+ domain_id = _create_user_domain_role_for_alt_user()
+ assigned_domain_ids.append(domain_id)
+ domain_id = _create_group_domain_role_for_alt_user()
+ assigned_domain_ids.append(domain_id)
+
+ # Get available domain scopes for the alt user.
+ available_domains = self.os_alt.identity_v3_client.list_auth_domains()[
+ 'domains']
+ fetched_domain_ids = [i['id'] for i in available_domains]
+
+ # Verify the expected domain IDs are in the list.
+ missing_domain_ids = \
+ [p for p in assigned_domain_ids if p not in fetched_domain_ids]
+ self.assertEmpty(missing_domain_ids,
+ "Failed to find domain_ids %s in fetched list"
+ % ", ".join(missing_domain_ids))
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 30d2a36..9edccbb 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -249,13 +249,16 @@
if 'description' not in kwargs:
kwargs['description'] = data_utils.rand_name('desc')
domain = cls.domains_client.create_domain(**kwargs)['domain']
+ cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
+ cls.delete_domain, domain['id'])
return domain
- def delete_domain(self, domain_id):
+ @classmethod
+ def delete_domain(cls, domain_id):
# NOTE(mpavlase) It is necessary to disable the domain before deleting
# otherwise it raises Forbidden exception
- self.domains_client.update_domain(domain_id, enabled=False)
- self.domains_client.delete_domain(domain_id)
+ cls.domains_client.update_domain(domain_id, enabled=False)
+ cls.domains_client.delete_domain(domain_id)
def setup_test_user(self, password=None):
"""Set up a test user."""
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index 70ba2fe..7103d56 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -46,16 +46,6 @@
cls.created_images = []
@classmethod
- def resource_cleanup(cls):
- for image_id in cls.created_images:
- test_utils.call_and_ignore_notfound_exc(
- cls.client.delete_image, image_id)
-
- for image_id in cls.created_images:
- cls.client.wait_for_resource_deletion(image_id)
- super(BaseImageTest, cls).resource_cleanup()
-
- @classmethod
def create_image(cls, data=None, **kwargs):
"""Wrapper that returns a test image."""
@@ -75,6 +65,10 @@
if 'image' in image:
image = image['image']
cls.created_images.append(image['id'])
+ cls.addClassResourceCleanup(cls.client.wait_for_resource_deletion,
+ image['id'])
+ cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
+ cls.client.delete_image, image['id'])
return image
@classmethod
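Class resource cleanups run last-in, first-out, which is why create_image registers wait_for_resource_deletion before delete_image: at teardown the delete fires first and the wait then confirms it finished. A standalone illustration of that ordering (plain Python)::

    # Callables registered later run earlier, mirroring addClassResourceCleanup.
    cleanups = []
    calls = []

    cleanups.append((calls.append, 'wait_for_resource_deletion'))  # registered first
    cleanups.append((calls.append, 'delete_image'))                # registered second

    while cleanups:
        func, arg = cleanups.pop()  # pop from the end -> LIFO
        func(arg)

    assert calls == ['delete_image', 'wait_for_resource_deletion']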
diff --git a/tempest/api/network/admin/test_metering_extensions.py b/tempest/api/network/admin/test_metering_extensions.py
index fd86782..5063fef 100644
--- a/tempest/api/network/admin/test_metering_extensions.py
+++ b/tempest/api/network/admin/test_metering_extensions.py
@@ -15,6 +15,7 @@
from tempest.api.network import base
from tempest.common import utils
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
@@ -52,7 +53,10 @@
description=description,
name=name)
metering_label = body['metering_label']
- cls.metering_labels.append(metering_label)
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ cls.admin_metering_labels_client.delete_metering_label,
+ metering_label['id'])
return metering_label
@classmethod
@@ -64,7 +68,9 @@
remote_ip_prefix=remote_ip_prefix, direction=direction,
metering_label_id=metering_label_id)
metering_label_rule = body['metering_label_rule']
- cls.metering_label_rules.append(metering_label_rule)
+ cls.addClassResourceCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ client.delete_metering_label_rule, metering_label_rule['id'])
return metering_label_rule
def _delete_metering_label(self, metering_label_id):
diff --git a/tempest/api/network/base.py b/tempest/api/network/base.py
index 8308e34..c2a67e3 100644
--- a/tempest/api/network/base.py
+++ b/tempest/api/network/base.py
@@ -93,8 +93,6 @@
cls.ports = []
cls.routers = []
cls.floating_ips = []
- cls.metering_labels = []
- cls.metering_label_rules = []
cls.ethertype = "IPv" + str(cls._ip_version)
if cls._ip_version == 4:
cls.cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
@@ -111,20 +109,6 @@
test_utils.call_and_ignore_notfound_exc(
cls.floating_ips_client.delete_floatingip,
floating_ip['id'])
-
- # Clean up metering label rules
- # Not all classes in the hierarchy have the client class variable
- if cls.metering_label_rules:
- label_rules_client = cls.admin_metering_label_rules_client
- for metering_label_rule in cls.metering_label_rules:
- test_utils.call_and_ignore_notfound_exc(
- label_rules_client.delete_metering_label_rule,
- metering_label_rule['id'])
- # Clean up metering labels
- for metering_label in cls.metering_labels:
- test_utils.call_and_ignore_notfound_exc(
- cls.admin_metering_labels_client.delete_metering_label,
- metering_label['id'])
# Clean up ports
for port in cls.ports:
test_utils.call_and_ignore_notfound_exc(
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index 99ffaa8..abbb779 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -65,9 +65,12 @@
'The public_network_id option must be specified.')
def test_create_show_list_update_delete_router(self):
# Create a router
+ name = data_utils.rand_name(self.__class__.__name__ + '-router')
router = self._create_router(
+ name=name,
admin_state_up=False,
external_network_id=CONF.network.public_network_id)
+ self.assertEqual(router['name'], name)
self.assertEqual(router['admin_state_up'], False)
self.assertEqual(
router['external_gateway_info']['network_id'],
diff --git a/tempest/api/network/test_security_groups.py b/tempest/api/network/test_security_groups.py
index 97ccee9..24bd8ea 100644
--- a/tempest/api/network/test_security_groups.py
+++ b/tempest/api/network/test_security_groups.py
@@ -23,7 +23,6 @@
class SecGroupTest(base.BaseSecGroupTest):
- _project_network_cidr = CONF.network.project_network_cidr
@classmethod
def skip_checks(cls):
@@ -209,7 +208,7 @@
protocol = 'tcp'
port_range_min = 76
port_range_max = 77
- ip_prefix = self._project_network_cidr
+ ip_prefix = str(self.cidr)
self._create_verify_security_group_rule(sg_id, direction,
self.ethertype, protocol,
port_range_min,
@@ -238,4 +237,3 @@
class SecGroupIPv6Test(SecGroupTest):
_ip_version = 6
- _project_network_cidr = CONF.network.project_network_v6_cidr
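Dropping the per-class _project_network_cidr attribute works because the network base class already builds cls.cidr as a netaddr.IPNetwork from the configured project network (v4 or v6 depending on _ip_version), so str(self.cidr) yields the right prefix in both subclasses. A small illustration of that conversion (assumes the netaddr package, which these tests already use)::

    import netaddr

    # str() on an IPNetwork gives back the "address/prefixlen" form that the
    # security-group-rule API expects as remote_ip_prefix.
    assert str(netaddr.IPNetwork('10.100.0.0/16')) == '10.100.0.0/16'
    assert str(netaddr.IPNetwork('2001:db8::/64')) == '2001:db8::/64'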
diff --git a/tempest/api/network/test_security_groups_negative.py b/tempest/api/network/test_security_groups_negative.py
index 435673b..d054865 100644
--- a/tempest/api/network/test_security_groups_negative.py
+++ b/tempest/api/network/test_security_groups_negative.py
@@ -24,7 +24,6 @@
class NegativeSecGroupTest(base.BaseSecGroupTest):
- _project_network_cidr = CONF.network.project_network_cidr
@classmethod
def skip_checks(cls):
@@ -110,7 +109,7 @@
sg2_body, _ = self._create_security_group()
# Create rule specifying both remote_ip_prefix and remote_group_id
- prefix = self._project_network_cidr
+ prefix = str(self.cidr)
self.assertRaises(
lib_exc.BadRequest,
self.security_group_rules_client.create_security_group_rule,
@@ -225,7 +224,6 @@
class NegativeSecGroupIPv6Test(NegativeSecGroupTest):
_ip_version = 6
- _project_network_cidr = CONF.network.project_network_v6_cidr
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7607439c-af73-499e-bf64-f687fd12a842')
diff --git a/tempest/api/network/test_tags.py b/tempest/api/network/test_tags.py
index 409d556..85f6896 100644
--- a/tempest/api/network/test_tags.py
+++ b/tempest/api/network/test_tags.py
@@ -131,11 +131,8 @@
prefix = CONF.network.default_network
cls.subnetpool = cls.subnetpools_client.create_subnetpool(
name=subnetpool_name, prefixes=prefix)['subnetpool']
-
- @classmethod
- def resource_cleanup(cls):
- cls.subnetpools_client.delete_subnetpool(cls.subnetpool['id'])
- super(TagsExtTest, cls).resource_cleanup()
+ cls.addClassResourceCleanup(cls.subnetpools_client.delete_subnetpool,
+ cls.subnetpool['id'])
def _create_tags_for_each_resource(self):
# Create a tag for each resource in `SUPPORTED_RESOURCES` and return
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index 4c49b2a..ee72163 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -43,7 +43,7 @@
for cont in containers:
try:
params = {'limit': 9999, 'format': 'json'}
- _, objlist = container_client.list_container_contents(cont, params)
+ _, objlist = container_client.list_container_objects(cont, params)
# delete every object in the container
for obj in objlist:
test_utils.call_and_ignore_notfound_exc(
@@ -106,7 +106,7 @@
def create_container(cls):
# wrapper that returns a test container
container_name = data_utils.rand_name(name='TestContainer')
- cls.container_client.create_container(container_name)
+ cls.container_client.update_container(container_name)
cls.containers.append(container_name)
return container_name
diff --git a/tempest/api/object_storage/test_account_bulk.py b/tempest/api/object_storage/test_account_bulk.py
index 9abd59e..6599e43 100644
--- a/tempest/api/object_storage/test_account_bulk.py
+++ b/tempest/api/object_storage/test_account_bulk.py
@@ -96,7 +96,7 @@
self.assertIn(container_name, [b['name'] for b in body])
param = {'format': 'json'}
- resp, contents_list = self.container_client.list_container_contents(
+ resp, contents_list = self.container_client.list_container_objects(
container_name, param)
self.assertHeaders(resp, 'Container', 'GET')
diff --git a/tempest/api/object_storage/test_account_services.py b/tempest/api/object_storage/test_account_services.py
index 0f86540..d7c85a2 100644
--- a/tempest/api/object_storage/test_account_services.py
+++ b/tempest/api/object_storage/test_account_services.py
@@ -43,7 +43,7 @@
super(AccountTest, cls).resource_setup()
for i in range(ord('a'), ord('f') + 1):
name = data_utils.rand_name(name='%s-' % six.int2byte(i))
- cls.container_client.create_container(name)
+ cls.container_client.update_container(name)
cls.containers.append(name)
cls.containers_count = len(cls.containers)
diff --git a/tempest/api/object_storage/test_container_acl.py b/tempest/api/object_storage/test_container_acl.py
index 4b66ebf..765bc6d 100644
--- a/tempest/api/object_storage/test_container_acl.py
+++ b/tempest/api/object_storage/test_container_acl.py
@@ -41,10 +41,11 @@
tenant_name = self.os_roles_operator_alt.credentials.tenant_name
username = self.os_roles_operator_alt.credentials.username
cont_headers = {'X-Container-Read': tenant_name + ':' + username}
+ container_client = self.os_roles_operator.container_client
resp_meta, _ = (
- self.os_roles_operator.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix=''))
+ container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
object_name = data_utils.rand_name(name='Object')
@@ -68,10 +69,11 @@
tenant_name = self.os_roles_operator_alt.credentials.tenant_name
username = self.os_roles_operator_alt.credentials.username
cont_headers = {'X-Container-Write': tenant_name + ':' + username}
+ container_client = self.os_roles_operator.container_client
resp_meta, _ = (
- self.os_roles_operator.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix=''))
+ container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# set alternative authentication data; cannot simply use the
# other object client.
diff --git a/tempest/api/object_storage/test_container_acl_negative.py b/tempest/api/object_storage/test_container_acl_negative.py
index e064753..90b24b4 100644
--- a/tempest/api/object_storage/test_container_acl_negative.py
+++ b/tempest/api/object_storage/test_container_acl_negative.py
@@ -39,7 +39,7 @@
def setUp(self):
super(ObjectACLsNegativeTest, self).setUp()
self.container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(self.container_name)
+ self.container_client.update_container(self.container_name)
def tearDown(self):
self.delete_containers([self.container_name])
@@ -133,9 +133,10 @@
# attempt to read object using non-authorized user
# update X-Container-Read metadata ACL
cont_headers = {'X-Container-Read': 'badtenant:baduser'}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
object_name = data_utils.rand_name(name='Object')
@@ -157,9 +158,10 @@
# attempt to write object using non-authorized user
# update X-Container-Write metadata ACL
cont_headers = {'X-Container-Write': 'badtenant:baduser'}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# Trying to write the object without rights
self.object_client.auth_provider.set_alt_auth_data(
@@ -182,9 +184,10 @@
cont_headers = {'X-Container-Read':
tenant_name + ':' + username,
'X-Container-Write': ''}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# Trying to write the object without write rights
self.object_client.auth_provider.set_alt_auth_data(
@@ -207,9 +210,10 @@
cont_headers = {'X-Container-Read':
tenant_name + ':' + username,
'X-Container-Write': ''}
- resp_meta, _ = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
object_name = data_utils.rand_name(name='Object')
diff --git a/tempest/api/object_storage/test_container_quotas.py b/tempest/api/object_storage/test_container_quotas.py
index c87bed5..982c4a1 100644
--- a/tempest/api/object_storage/test_container_quotas.py
+++ b/tempest/api/object_storage/test_container_quotas.py
@@ -40,8 +40,8 @@
self.container_name = self.create_container()
metadata = {"quota-bytes": str(QUOTA_BYTES),
"quota-count": str(QUOTA_COUNT), }
- self.container_client.update_container_metadata(
- self.container_name, metadata)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=metadata)
def tearDown(self):
"""Cleans the container of any object after each test."""
diff --git a/tempest/api/object_storage/test_container_services.py b/tempest/api/object_storage/test_container_services.py
index 76fe8d4..cdc420e 100644
--- a/tempest/api/object_storage/test_container_services.py
+++ b/tempest/api/object_storage/test_container_services.py
@@ -27,7 +27,7 @@
@decorators.idempotent_id('92139d73-7819-4db1-85f8-3f2f22a8d91f')
def test_create_container(self):
container_name = data_utils.rand_name(name='TestContainer')
- resp, _ = self.container_client.create_container(container_name)
+ resp, _ = self.container_client.update_container(container_name)
self.containers.append(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@@ -35,20 +35,20 @@
def test_create_container_overwrite(self):
# overwrite container with the same name
container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(container_name)
+ self.container_client.update_container(container_name)
self.containers.append(container_name)
- resp, _ = self.container_client.create_container(container_name)
+ resp, _ = self.container_client.update_container(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@decorators.idempotent_id('c2ac4d59-d0f5-40d5-ba19-0635056d48cd')
def test_create_container_with_metadata_key(self):
# create container with the blank value of metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta': ''}
- resp, _ = self.container_client.create_container(
+ headers = {'X-Container-Meta-test-container-meta': ''}
+ resp, _ = self.container_client.update_container(
container_name,
- metadata=metadata)
+ **headers)
self.containers.append(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@@ -64,10 +64,10 @@
container_name = data_utils.rand_name(name='TestContainer')
# metadata name using underscores should be converted to hyphens
- metadata = {'test_container_meta': 'Meta1'}
- resp, _ = self.container_client.create_container(
+ headers = {'X-Container-Meta-test_container_meta': 'Meta1'}
+ resp, _ = self.container_client.update_container(
container_name,
- metadata=metadata)
+ **headers)
self.containers.append(container_name)
self.assertHeaders(resp, 'Container', 'PUT')
@@ -75,22 +75,20 @@
container_name)
self.assertIn('x-container-meta-test-container-meta', resp)
self.assertEqual(resp['x-container-meta-test-container-meta'],
- metadata['test_container_meta'])
+ headers['X-Container-Meta-test_container_meta'])
@decorators.idempotent_id('24d16451-1c0c-4e4f-b59c-9840a3aba40e')
def test_create_container_with_remove_metadata_key(self):
# create container with the blank value of remove metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata_1 = {'test-container-meta': 'Meta1'}
- self.container_client.create_container(
- container_name,
- metadata=metadata_1)
+ headers = {'X-Container-Meta-test-container-meta': 'Meta1'}
+ self.container_client.update_container(container_name, **headers)
self.containers.append(container_name)
- metadata_2 = {'test-container-meta': ''}
- resp, _ = self.container_client.create_container(
+ headers = {'X-Remove-Container-Meta-test-container-meta': ''}
+ resp, _ = self.container_client.update_container(
container_name,
- remove_metadata=metadata_2)
+ **headers)
self.assertHeaders(resp, 'Container', 'PUT')
resp, _ = self.container_client.list_container_metadata(
@@ -101,14 +99,13 @@
def test_create_container_with_remove_metadata_value(self):
# create container with remove metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata)
+ headers = {'X-Container-Meta-test-container-meta': 'Meta1'}
+ self.container_client.update_container(container_name, **headers)
self.containers.append(container_name)
-
- resp, _ = self.container_client.create_container(
+ headers = {'X-Remove-Container-Meta-test-container-meta': 'Meta1'}
+ resp, _ = self.container_client.update_container(
container_name,
- remove_metadata=metadata)
+ **headers)
self.assertHeaders(resp, 'Container', 'PUT')
resp, _ = self.container_client.list_container_metadata(
@@ -130,7 +127,7 @@
container_name = self.create_container()
object_name, _ = self.create_object(container_name)
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name)
self.assertHeaders(resp, 'Container', 'GET')
self.assertEqual([object_name], object_list)
@@ -140,7 +137,7 @@
# get empty container contents list
container_name = self.create_container()
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name)
self.assertHeaders(resp, 'Container', 'GET')
self.assertEmpty(object_list)
@@ -153,7 +150,7 @@
self.create_object(container_name, object_name)
params = {'delimiter': '/'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -166,7 +163,7 @@
object_name, _ = self.create_object(container_name)
params = {'end_marker': object_name + 'zzzz'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -179,7 +176,7 @@
self.create_object(container_name)
params = {'format': 'json'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -198,7 +195,7 @@
self.create_object(container_name)
params = {'format': 'xml'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -222,7 +219,7 @@
object_name, _ = self.create_object(container_name)
params = {'limit': data_utils.rand_int_id(1, 10000)}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -235,7 +232,7 @@
object_name, _ = self.create_object(container_name)
params = {'marker': 'AaaaObject1234567890'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -250,7 +247,7 @@
self.create_object(container_name, object_name)
params = {'path': 'Swift'}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -264,7 +261,7 @@
prefix_key = object_name[0:8]
params = {'prefix': prefix_key}
- resp, object_list = self.container_client.list_container_contents(
+ resp, object_list = self.container_client.list_container_objects(
container_name,
params=params)
self.assertHeaders(resp, 'Container', 'GET')
@@ -277,9 +274,9 @@
container_name = self.create_container()
metadata = {'name': 'Pictures'}
- self.container_client.update_container_metadata(
+ self.container_client.create_update_or_delete_container_metadata(
container_name,
- metadata=metadata)
+ create_update_metadata=metadata)
resp, _ = self.container_client.list_container_metadata(
container_name)
@@ -301,16 +298,16 @@
def test_update_container_metadata_with_create_and_delete_metadata(self):
# Send one request of adding and deleting metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata_1 = {'test-container-meta1': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata_1)
+ metadata_1 = {'X-Container-Meta-test-container-meta1': 'Meta1'}
+ self.container_client.update_container(container_name, **metadata_1)
self.containers.append(container_name)
metadata_2 = {'test-container-meta2': 'Meta2'}
- resp, _ = self.container_client.update_container_metadata(
- container_name,
- metadata=metadata_2,
- remove_metadata=metadata_1)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ create_update_metadata=metadata_2,
+ delete_metadata={'test-container-meta1': 'Meta1'}))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -326,9 +323,10 @@
container_name = self.create_container()
metadata = {'test-container-meta1': 'Meta1'}
- resp, _ = self.container_client.update_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ create_update_metadata=metadata))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -341,14 +339,14 @@
def test_update_container_metadata_with_delete_metadata(self):
# update container metadata using delete metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta1': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata)
+ metadata = {'X-Container-Meta-test-container-meta1': 'Meta1'}
+ self.container_client.update_container(container_name, **metadata)
self.containers.append(container_name)
- resp, _ = self.container_client.delete_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ delete_metadata={'test-container-meta1': 'Meta1'}))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -361,9 +359,10 @@
container_name = self.create_container()
metadata = {'test-container-meta1': ''}
- resp, _ = self.container_client.update_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ create_update_metadata=metadata))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(
@@ -374,15 +373,15 @@
def test_update_container_metadata_with_delete_metadata_key(self):
# update container metadata with a blank value of metadata
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta1': 'Meta1'}
- self.container_client.create_container(container_name,
- metadata=metadata)
+ headers = {'X-Container-Meta-test-container-meta1': 'Meta1'}
+ self.container_client.update_container(container_name, **headers)
self.containers.append(container_name)
metadata = {'test-container-meta1': ''}
- resp, _ = self.container_client.delete_container_metadata(
- container_name,
- metadata=metadata)
+ resp, _ = (
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name,
+ delete_metadata=metadata))
self.assertHeaders(resp, 'Container', 'POST')
resp, _ = self.container_client.list_container_metadata(container_name)
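Under its default prefixes, create_update_or_delete_container_metadata turns create_update_metadata keys into X-Container-Meta-* headers and delete_metadata keys into X-Remove-Container-Meta-* headers in a single POST; the ACL tests above pass create_update_metadata_prefix='' when they need to send raw headers such as X-Container-Read. A standalone sketch approximating that header construction (an illustration of the behaviour, not the client's actual code)::

    def build_metadata_headers(create_update_metadata=None,
                               delete_metadata=None,
                               create_update_metadata_prefix='X-Container-Meta-',
                               delete_metadata_prefix='X-Remove-Container-Meta-'):
        """Approximate how the single POST's headers are assembled."""
        headers = {}
        for key, value in (create_update_metadata or {}).items():
            headers[create_update_metadata_prefix + key] = value
        for key, value in (delete_metadata or {}).items():
            headers[delete_metadata_prefix + key] = value
        return headers


    assert build_metadata_headers(
        create_update_metadata={'test-container-meta2': 'Meta2'},
        delete_metadata={'test-container-meta1': 'Meta1'}) == {
            'X-Container-Meta-test-container-meta2': 'Meta2',
            'X-Remove-Container-Meta-test-container-meta1': 'Meta1'}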
diff --git a/tempest/api/object_storage/test_container_services_negative.py b/tempest/api/object_storage/test_container_services_negative.py
index 387b7b6..b8c83b7 100644
--- a/tempest/api/object_storage/test_container_services_negative.py
+++ b/tempest/api/object_storage/test_container_services_negative.py
@@ -45,9 +45,10 @@
max_length = self.constraints['max_container_name_length']
# create a container with long name
container_name = data_utils.arbitrary_string(size=max_length + 1)
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name)
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name)
self.assertIn('Container name length of ' + str(max_length + 1) +
' longer than ' + str(max_length), str(ex))
@@ -61,11 +62,13 @@
# that is longer than max.
max_length = self.constraints['max_meta_name_length']
container_name = data_utils.rand_name(name='TestContainer')
- metadata_name = data_utils.arbitrary_string(size=max_length + 1)
+ metadata_name = 'X-Container-Meta-' + data_utils.arbitrary_string(
+ size=max_length + 1)
metadata = {metadata_name: 'penguin'}
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name, metadata=metadata)
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name, **metadata)
self.assertIn('Metadata name too long', str(ex))
@decorators.attr(type=["negative"])
@@ -79,10 +82,11 @@
max_length = self.constraints['max_meta_value_length']
container_name = data_utils.rand_name(name='TestContainer')
metadata_value = data_utils.arbitrary_string(size=max_length + 1)
- metadata = {'animal': metadata_value}
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name, metadata=metadata)
+ metadata = {'X-Container-Meta-animal': metadata_value}
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name, **metadata)
self.assertIn('Metadata value longer than ' + str(max_length), str(ex))
@decorators.attr(type=["negative"])
@@ -97,11 +101,12 @@
container_name = data_utils.rand_name(name='TestContainer')
metadata = {}
for i in range(max_count + 1):
- metadata['animal-' + str(i)] = 'penguin'
+ metadata['X-Container-Meta-animal-' + str(i)] = 'penguin'
- ex = self.assertRaises(exceptions.BadRequest,
- self.container_client.create_container,
- container_name, metadata=metadata)
+ ex = self.assertRaises(
+ exceptions.BadRequest,
+ self.container_client.update_container,
+ container_name, **metadata)
self.assertIn('Too many metadata items; max ' + str(max_count),
str(ex))
@@ -120,9 +125,10 @@
# Attempts to update metadata using a nonexistent container name.
metadata = {'animal': 'penguin'}
- self.assertRaises(exceptions.NotFound,
- self.container_client.update_container_metadata,
- 'nonexistent_container_name', metadata)
+ self.assertRaises(
+ exceptions.NotFound,
+ self.container_client.create_update_or_delete_container_metadata,
+ 'nonexistent_container_name', create_update_metadata=metadata)
@decorators.attr(type=["negative"])
@decorators.idempotent_id('65387dbf-a0e2-4aac-9ddc-16eb3f1f69ba')
@@ -130,9 +136,10 @@
# Attempts to delete metadata using a nonexistent container name.
metadata = {'animal': 'penguin'}
- self.assertRaises(exceptions.NotFound,
- self.container_client.delete_container_metadata,
- 'nonexistent_container_name', metadata)
+ self.assertRaises(
+ exceptions.NotFound,
+ self.container_client.create_update_or_delete_container_metadata,
+ 'nonexistent_container_name', delete_metadata=metadata)
@decorators.attr(type=["negative"])
@decorators.idempotent_id('14331d21-1e81-420a-beea-19cb5e5207f5')
@@ -141,7 +148,7 @@
# that doesn't exist.
params = {'limit': 9999, 'format': 'json'}
self.assertRaises(exceptions.NotFound,
- self.container_client.list_container_contents,
+ self.container_client.list_container_objects,
'nonexistent_container_name', params)
@decorators.attr(type=["negative"])
@@ -155,7 +162,7 @@
self.assertHeaders(resp, 'Container', 'DELETE')
params = {'limit': 9999, 'format': 'json'}
self.assertRaises(exceptions.NotFound,
- self.container_client.list_container_contents,
+ self.container_client.list_container_objects,
container_name, params)
@decorators.attr(type=["negative"])
diff --git a/tempest/api/object_storage/test_container_staticweb.py b/tempest/api/object_storage/test_container_staticweb.py
index 92fa690..1243b83 100644
--- a/tempest/api/object_storage/test_container_staticweb.py
+++ b/tempest/api/object_storage/test_container_staticweb.py
@@ -34,10 +34,10 @@
cls.object_name, cls.object_data = cls.create_object(
cls.container_name)
- cls.container_client.update_container_metadata(
+ cls.container_client.create_update_or_delete_container_metadata(
cls.container_name,
- metadata=headers_public_read_acl,
- metadata_prefix="X-Container-")
+ create_update_metadata=headers_public_read_acl,
+ create_update_metadata_prefix="X-Container-")
@classmethod
def resource_cleanup(cls):
@@ -49,8 +49,8 @@
def test_web_index(self):
headers = {'web-index': self.object_name}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# Maintain original headers, no auth added
self.account_client.auth_provider.set_alt_auth_data(
@@ -68,8 +68,9 @@
self.assertEqual(body, self.object_data)
# clean up before exiting
- self.container_client.update_container_metadata(self.container_name,
- {'web-index': ""})
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name,
+ create_update_metadata={'web-index': ""})
_, body = self.container_client.list_container_metadata(
self.container_name)
@@ -80,8 +81,8 @@
def test_web_listing(self):
headers = {'web-listings': 'true'}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# test GET on http://account_url/container_name
# we should retrieve a listing of objects
@@ -100,9 +101,9 @@
self.assertIn(self.object_name, body.decode())
# clean up before exiting
- self.container_client.update_container_metadata(self.container_name,
- {'web-listings': ""})
-
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name,
+ create_update_metadata={'web-listings': ""})
_, body = self.container_client.list_container_metadata(
self.container_name)
self.assertNotIn('x-container-meta-web-listings', body)
@@ -113,8 +114,8 @@
headers = {'web-listings': 'true',
'web-listings-css': 'listings.css'}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# Maintain original headers, no auth added
self.account_client.auth_provider.set_alt_auth_data(
@@ -136,8 +137,8 @@
headers = {'web-listings': 'true',
'web-error': self.object_name}
- self.container_client.update_container_metadata(
- self.container_name, metadata=headers)
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=headers)
# Create object to return when requested object not found
object_name_404 = "404" + self.object_name
diff --git a/tempest/api/object_storage/test_container_sync.py b/tempest/api/object_storage/test_container_sync.py
index 7665b48..042d288 100644
--- a/tempest/api/object_storage/test_container_sync.py
+++ b/tempest/api/object_storage/test_container_sync.py
@@ -102,7 +102,7 @@
while self.attempts > 0:
object_lists = []
for c_client, cont in zip(cont_client, self.containers):
- resp, object_list = c_client.list_container_contents(
+ resp, object_list = c_client.list_container_objects(
cont, params=params)
object_lists.append(dict(
(obj['name'], obj) for obj in object_list))
diff --git a/tempest/api/object_storage/test_object_expiry.py b/tempest/api/object_storage/test_object_expiry.py
index ed1be90..86f7c8c 100644
--- a/tempest/api/object_storage/test_object_expiry.py
+++ b/tempest/api/object_storage/test_object_expiry.py
@@ -40,10 +40,10 @@
def _test_object_expiry(self, metadata):
# update object metadata
resp, _ = \
- self.object_client.update_object_metadata(self.container_name,
- self.object_name,
- metadata,
- metadata_prefix='')
+ self.object_client.create_or_update_object_metadata(
+ self.container_name,
+ self.object_name,
+ headers=metadata)
# verify object metadata
resp, _ = \
self.object_client.list_object_metadata(self.container_name,
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index d3cdb72..acb578d 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -48,8 +48,9 @@
data_segments = [data + str(i) for i in range(segments)]
# uploading segments
for i in range(segments):
- self.object_client.create_object_segments(
- self.container_name, object_name, i, data_segments[i])
+ obj_name = "%s/%s" % (object_name, i)
+ self.object_client.create_object(
+ self.container_name, obj_name, data_segments[i])
return object_name, data_segments
@@ -184,12 +185,15 @@
# create object with transfer_encoding
object_name = data_utils.rand_name(name='TestObject')
data = data_utils.random_bytes(1024)
- _, _, resp_headers = self.object_client.put_object_with_chunk(
- container=self.container_name,
- name=object_name,
- contents=data_utils.chunkify(data, 512)
- )
- self.assertHeaders(resp_headers, 'Object', 'PUT')
+ headers = {'Transfer-Encoding': 'chunked'}
+ resp, _ = self.object_client.create_object(
+ self.container_name,
+ object_name,
+ data=data_utils.chunkify(data, 512),
+ headers=headers,
+ chunked=True)
+
+ self.assertHeaders(resp, 'Object', 'PUT')
# check uploaded content
_, body = self.object_client.get_object(self.container_name,
@@ -325,11 +329,10 @@
object_name, _ = self.create_object(self.container_name)
metadata = {'X-Object-Meta-test-meta': 'Meta'}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- metadata,
- metadata_prefix='')
+ headers=metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -350,11 +353,10 @@
metadata=create_metadata)
update_metadata = {'X-Remove-Object-Meta-test-meta1': 'Meta1'}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -375,11 +377,10 @@
update_metadata = {'X-Object-Meta-test-meta2': 'Meta2',
'X-Remove-Object-Meta-test-meta1': 'Meta1'}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -403,11 +404,10 @@
metadata=None)
object_prefix = '%s/%s' % (self.container_name, object_name)
update_metadata = {'X-Object-Manifest': object_prefix}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -422,11 +422,10 @@
object_name, _ = self.create_object(self.container_name)
update_metadata = {'X-Object-Meta-test-meta': ''}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -447,11 +446,10 @@
metadata=create_metadata)
update_metadata = {'X-Remove-Object-Meta-test-meta': ''}
- resp, _ = self.object_client.update_object_metadata(
+ resp, _ = self.object_client.create_or_update_object_metadata(
self.container_name,
object_name,
- update_metadata,
- metadata_prefix='')
+ headers=update_metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -728,8 +726,13 @@
dst_object_name,
dst_data)
# copy source object to destination
- resp, _ = self.object_client.copy_object_in_same_container(
- self.container_name, src_object_name, dst_object_name)
+ headers = {}
+ headers['X-Copy-From'] = "%s/%s" % (str(self.container_name),
+ str(src_object_name))
+ resp, body = self.object_client.create_object(self.container_name,
+ dst_object_name,
+ data=None,
+ headers=headers)
self.assertHeaders(resp, 'Object', 'PUT')
# check data
@@ -749,8 +752,14 @@
# change the content type of the object
metadata = {'content-type': 'text/plain; charset=UTF-8'}
self.assertNotEqual(resp_tmp['content-type'], metadata['content-type'])
- resp, _ = self.object_client.copy_object_in_same_container(
- self.container_name, object_name, object_name, metadata)
+ headers = {}
+ headers['X-Copy-From'] = "%s/%s" % (str(self.container_name),
+ str(object_name))
+ resp, body = self.object_client.create_object(self.container_name,
+ object_name,
+ data=None,
+ metadata=metadata,
+ headers=headers)
self.assertHeaders(resp, 'Object', 'PUT')
# check the content type
@@ -786,12 +795,12 @@
def test_copy_object_across_containers(self):
# create a container to use as a source container
src_container_name = data_utils.rand_name(name='TestSourceContainer')
- self.container_client.create_container(src_container_name)
+ self.container_client.update_container(src_container_name)
self.containers.append(src_container_name)
# create a container to use as a destination container
dst_container_name = data_utils.rand_name(
name='TestDestinationContainer')
- self.container_client.create_container(dst_container_name)
+ self.container_client.update_container(dst_container_name)
self.containers.append(dst_container_name)
# create object in source container
object_name = data_utils.rand_name(name='Object')
@@ -801,16 +810,21 @@
# set object metadata
meta_key = data_utils.rand_name(name='test')
meta_value = data_utils.rand_name(name='MetaValue')
- orig_metadata = {meta_key: meta_value}
- resp, _ = self.object_client.update_object_metadata(src_container_name,
- object_name,
- orig_metadata)
+ orig_metadata = {'X-Object-Meta-' + meta_key: meta_value}
+ resp, _ = self.object_client.create_or_update_object_metadata(
+ src_container_name,
+ object_name,
+ headers=orig_metadata)
self.assertHeaders(resp, 'Object', 'POST')
# copy object from source container to destination container
- resp, _ = self.object_client.copy_object_across_containers(
- src_container_name, object_name, dst_container_name,
- object_name)
+ headers = {}
+ headers['X-Copy-From'] = "%s/%s" % (str(src_container_name),
+ str(object_name))
+ resp, body = self.object_client.create_object(dst_container_name,
+ object_name,
+ data=None,
+ headers=headers)
self.assertHeaders(resp, 'Object', 'PUT')
# check if object is present in destination container
@@ -897,8 +911,9 @@
data_segments = [data + str(i) for i in range(segments)]
# uploading segments
for i in range(segments):
- resp, _ = self.object_client.create_object_segments(
- self.container_name, object_name, i, data_segments[i])
+ obj_name = "%s/%s" % (object_name, i)
+ resp, _ = self.object_client.create_object(
+ self.container_name, obj_name, data_segments[i])
# creating a manifest file
metadata = {'X-Object-Manifest': '%s/%s/'
% (self.container_name, object_name)}
@@ -906,8 +921,8 @@
object_name, data='')
self.assertHeaders(resp, 'Object', 'PUT')
- resp, _ = self.object_client.update_object_metadata(
- self.container_name, object_name, metadata, metadata_prefix='')
+ resp, _ = self.object_client.create_or_update_object_metadata(
+ self.container_name, object_name, headers=metadata)
self.assertHeaders(resp, 'Object', 'POST')
resp, _ = self.object_client.list_object_metadata(
@@ -977,7 +992,7 @@
def setUp(self):
super(PublicObjectTest, self).setUp()
self.container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(self.container_name)
+ self.container_client.update_container(self.container_name)
def tearDown(self):
self.delete_containers([self.container_name])
@@ -990,8 +1005,11 @@
# update container metadata to make it publicly readable
cont_headers = {'X-Container-Read': '.r:*,.rlistings'}
- resp_meta, body = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers, metadata_prefix='')
+ resp_meta, body = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name,
+ create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
@@ -1025,9 +1043,10 @@
# make container public-readable and access an object in it using
# another user's credentials
cont_headers = {'X-Container-Read': '.r:*,.rlistings'}
- resp_meta, body = self.container_client.update_container_metadata(
- self.container_name, metadata=cont_headers,
- metadata_prefix='')
+ resp_meta, body = (
+ self.container_client.create_update_or_delete_container_metadata(
+ self.container_name, create_update_metadata=cont_headers,
+ create_update_metadata_prefix=''))
self.assertHeaders(resp_meta, 'Container', 'POST')
# create object
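The removed copy_object_* helpers are replaced by a plain object PUT whose X-Copy-From header names the source as "<container>/<object>" and whose body is empty, which is how Swift performs server-side copies. A hedged sketch of the equivalent raw request (the requests library, endpoint and token values are assumptions used only for illustration)::

    import requests

    STORAGE_URL = 'http://swift.example.com/v1/AUTH_demo'  # assumption
    TOKEN = 'gAAAAA-example-token'                          # assumption

    # PUT the destination object with an empty body; Swift copies the data
    # server-side from the object named in X-Copy-From.
    resp = requests.put(
        '%s/dst-container/dst-object' % STORAGE_URL,
        headers={'X-Auth-Token': TOKEN,
                 'X-Copy-From': 'src-container/src-object'},
        data=b'')
    resp.raise_for_status()  # Swift returns 201 Created on success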
diff --git a/tempest/api/object_storage/test_object_slo.py b/tempest/api/object_storage/test_object_slo.py
index 65da63d..c66776e 100644
--- a/tempest/api/object_storage/test_object_slo.py
+++ b/tempest/api/object_storage/test_object_slo.py
@@ -172,6 +172,6 @@
# Check only the format of common headers with custom matcher
self.assertThat(resp, custom_matchers.AreAllWellFormatted())
- resp, body = self.container_client.list_container_contents(
+ resp, body = self.container_client.list_container_objects(
self.container_name)
self.assertEqual(int(resp['x-container-object-count']), 0)
diff --git a/tempest/api/object_storage/test_object_version.py b/tempest/api/object_storage/test_object_version.py
index 4799053..51b0a1d 100644
--- a/tempest/api/object_storage/test_object_version.py
+++ b/tempest/api/object_storage/test_object_version.py
@@ -51,18 +51,16 @@
def test_versioned_container(self):
# create container
vers_container_name = data_utils.rand_name(name='TestVersionContainer')
- resp, _ = self.container_client.create_container(
- vers_container_name)
+ resp, _ = self.container_client.update_container(vers_container_name)
self.containers.append(vers_container_name)
self.assertHeaders(resp, 'Container', 'PUT')
self.assertContainer(vers_container_name, '0', '0', 'Missing Header')
base_container_name = data_utils.rand_name(name='TestBaseContainer')
headers = {'X-versions-Location': vers_container_name}
- resp, _ = self.container_client.create_container(
+ resp, _ = self.container_client.update_container(
base_container_name,
- metadata=headers,
- metadata_prefix='')
+ **headers)
self.containers.append(base_container_name)
self.assertHeaders(resp, 'Container', 'PUT')
self.assertContainer(base_container_name, '0', '0',
diff --git a/tempest/api/volume/admin/test_groups.py b/tempest/api/volume/admin/test_groups.py
index d4b2faa..6b53d85 100644
--- a/tempest/api/volume/admin/test_groups.py
+++ b/tempest/api/volume/admin/test_groups.py
@@ -63,9 +63,9 @@
class GroupsTest(BaseGroupsTest):
+ _api_version = 3
min_microversion = '3.14'
max_microversion = 'latest'
- _api_version = 3
@decorators.idempotent_id('4b111d28-b73d-4908-9bd2-03dc2992e4d4')
def test_group_create_show_list_delete(self):
@@ -108,16 +108,16 @@
self.assertEqual(grp2_id, grp2['id'])
# Get all groups with detail
- grps = self.groups_client.list_groups(
- detail=True)['groups']
- filtered_grps = [g for g in grps if g['id'] in [grp1_id, grp2_id]]
- self.assertEqual(2, len(filtered_grps))
- for grp in filtered_grps:
- self.assertEqual([volume_type['id']], grp['volume_types'])
- self.assertEqual(group_type['id'], grp['group_type'])
+ grps = self.groups_client.list_groups(detail=True)['groups']
+ for grp_id in [grp1_id, grp2_id]:
+ filtered_grps = [g for g in grps if g['id'] == grp_id]
+ self.assertEqual(1, len(filtered_grps))
+ self.assertEqual([volume_type['id']],
+ filtered_grps[0]['volume_types'])
+ self.assertEqual(group_type['id'],
+ filtered_grps[0]['group_type'])
- vols = self.volumes_client.list_volumes(
- detail=True)['volumes']
+ vols = self.volumes_client.list_volumes(detail=True)['volumes']
filtered_vols = [v for v in vols if v['id'] in [vol1_id]]
self.assertEqual(1, len(filtered_vols))
for vol in filtered_vols:
@@ -171,18 +171,20 @@
group_snapshot['id'])['group_snapshot']
self.assertEqual(group_snapshot_name, group_snapshot['name'])
- # Get all group snapshots with detail
- group_snapshots = (
- self.group_snapshots_client.list_group_snapshots(
- detail=True)['group_snapshots'])
+ # Get all group snapshots with details, check some detail-specific
+ # elements, and look for the created group snapshot
+ group_snapshots = (self.group_snapshots_client.list_group_snapshots(
+ detail=True)['group_snapshots'])
+ for grp_snapshot in group_snapshots:
+ self.assertIn('created_at', grp_snapshot)
+ self.assertIn('group_id', grp_snapshot)
self.assertIn((group_snapshot['name'], group_snapshot['id']),
[(m['name'], m['id']) for m in group_snapshots])
# Delete group snapshot
self._delete_group_snapshot(group_snapshot['id'], grp['id'])
- group_snapshots = (
- self.group_snapshots_client.list_group_snapshots(
- detail=True)['group_snapshots'])
+ group_snapshots = (self.group_snapshots_client.list_group_snapshots()
+ ['group_snapshots'])
self.assertEmpty(group_snapshots)
@decorators.idempotent_id('eff52c70-efc7-45ed-b47a-4ad675d09b81')
@@ -297,8 +299,7 @@
self.assertEqual(new_desc, grp['description'])
# Get volumes in the group
- vols = self.volumes_client.list_volumes(
- detail=True)['volumes']
+ vols = self.volumes_client.list_volumes(detail=True)['volumes']
grp_vols = [v for v in vols if v['group_id'] == grp['id']]
self.assertEqual(1, len(grp_vols))
@@ -316,6 +317,55 @@
self.assertEqual(2, len(grp_vols))
+class GroupsV319Test(BaseGroupsTest):
+ _api_version = 3
+ min_microversion = '3.19'
+ max_microversion = 'latest'
+
+ @decorators.idempotent_id('3b42c9b9-c984-4444-816e-ca2e1ed30b40')
+ def test_reset_group_snapshot_status(self):
+ # Create volume type
+ volume_type = self.create_volume_type()
+
+ # Create group type
+ group_type = self.create_group_type()
+
+ # Create group
+ group = self._create_group(group_type, volume_type)
+
+ # Create volume
+ volume = self.create_volume(volume_type=volume_type['id'],
+ group_id=group['id'])
+
+ # Create group snapshot
+ group_snapshot_name = data_utils.rand_name('group_snapshot')
+ group_snapshot = (self.group_snapshots_client.create_group_snapshot(
+ group_id=group['id'], name=group_snapshot_name)['group_snapshot'])
+ self.addCleanup(self._delete_group_snapshot,
+ group_snapshot['id'], group['id'])
+ snapshots = self.snapshots_client.list_snapshots(
+ detail=True)['snapshots']
+ for snap in snapshots:
+ if volume['id'] == snap['volume_id']:
+ waiters.wait_for_volume_resource_status(
+ self.snapshots_client, snap['id'], 'available')
+ waiters.wait_for_volume_resource_status(
+ self.group_snapshots_client, group_snapshot['id'], 'available')
+
+ # Reset group snapshot status
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.group_snapshots_client,
+ group_snapshot['id'], 'available')
+ self.addCleanup(
+ self.admin_group_snapshots_client.reset_group_snapshot_status,
+ group_snapshot['id'], 'available')
+ for status in ['creating', 'available', 'error']:
+ self.admin_group_snapshots_client.reset_group_snapshot_status(
+ group_snapshot['id'], status)
+ waiters.wait_for_volume_resource_status(
+ self.group_snapshots_client, group_snapshot['id'], status)
+
+
class GroupsV320Test(BaseGroupsTest):
_api_version = 3
min_microversion = '3.20'
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index d56f1de..42bfcd6 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -19,7 +19,8 @@
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
-QUOTA_KEYS = ['gigabytes', 'snapshots', 'volumes', 'backups']
+QUOTA_KEYS = ['gigabytes', 'snapshots', 'volumes', 'backups',
+ 'backup_gigabytes', 'per_volume_gigabytes']
QUOTA_USAGE_KEYS = ['reserved', 'limit', 'in_use']
@@ -67,7 +68,9 @@
new_quota_set = {'gigabytes': 1009,
'volumes': 11,
'snapshots': 11,
- 'backups': 11}
+ 'backups': 11,
+ 'backup_gigabytes': 1009,
+ 'per_volume_gigabytes': 1009}
# Update limits for all quota resources
quota_set = self.admin_quotas_client.update_quota_set(
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs.py b/tempest/api/volume/admin/test_volume_types_extra_specs.py
index b5a2fb7..730acdf 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs.py
@@ -46,14 +46,32 @@
self.volume_type['id'], extra_specs)['extra_specs']
self.assertEqual(extra_specs, body,
"Volume type extra spec incorrectly created")
+
+ # Only update an extra spec
spec_key = "spec2"
extra_spec = {spec_key: "val2"}
body = self.admin_volume_types_client.update_volume_type_extra_specs(
self.volume_type['id'], spec_key, extra_spec)
self.assertIn(spec_key, body)
+ self.assertEqual(extra_spec[spec_key], body[spec_key])
+ body = self.admin_volume_types_client.show_volume_type_extra_specs(
+ self.volume_type['id'], spec_key)
+ self.assertIn(spec_key, body)
self.assertEqual(extra_spec[spec_key], body[spec_key],
"Volume type extra spec incorrectly updated")
+ # Update an existing extra spec and create a new extra spec
+ extra_specs = {spec_key: "val3", "spec4": "val4"}
+ body = self.admin_volume_types_client.create_volume_type_extra_specs(
+ self.volume_type['id'], extra_specs)['extra_specs']
+ self.assertEqual(extra_specs, body)
+ body = self.admin_volume_types_client.list_volume_types_extra_specs(
+ self.volume_type['id'])['extra_specs']
+ for key in extra_specs:
+ self.assertIn(key, body)
+ self.assertEqual(extra_specs[key], body[key],
+ "Volume type extra spec incorrectly created")
+
@decorators.idempotent_id('d4772798-601f-408a-b2a5-29e8a59d1220')
def test_volume_type_extra_spec_create_get_delete(self):
# Create/Get/Delete volume type extra spec.
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index 8d09217..3e0deef 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -37,7 +37,7 @@
@decorators.idempotent_id('d063f96e-a2e0-4f34-8b8a-395c42de1845')
def test_volume_reset_status(self):
- # test volume reset status : available->error->available
+ # test volume reset status : available->error->available->maintenance
volume = self.create_volume()
self.addCleanup(waiters.wait_for_volume_resource_status,
self.volumes_client, volume['id'], 'available')
diff --git a/tempest/api/volume/test_availability_zone.py b/tempest/api/volume/test_availability_zone.py
index d0a87db..0b6ee38 100644
--- a/tempest/api/volume/test_availability_zone.py
+++ b/tempest/api/volume/test_availability_zone.py
@@ -20,14 +20,10 @@
class AvailabilityZoneTestJSON(base.BaseVolumeTest):
"""Tests Availability Zone API List"""
- @classmethod
- def setup_clients(cls):
- super(AvailabilityZoneTestJSON, cls).setup_clients()
- cls.client = cls.availability_zone_client
-
@decorators.idempotent_id('01f1ae88-eba9-4c6b-a011-6f7ace06b725')
def test_get_availability_zone_list(self):
# List of availability zone
- availability_zone = (self.client.list_availability_zones()
- ['availabilityZoneInfo'])
+ availability_zone = (
+ self.availability_zone_client.list_availability_zones()
+ ['availabilityZoneInfo'])
self.assertNotEmpty(availability_zone)
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 1eb76a0..de28a30 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -13,12 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.
+import time
+
import testtools
from tempest.api.volume import base
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
CONF = config.CONF
@@ -53,3 +56,129 @@
resized_volume = self.volumes_client.show_volume(
volume['id'])['volume']
self.assertEqual(extend_size, resized_volume['size'])
+
+
+class VolumesExtendAttachedTest(base.BaseVolumeTest):
+ """Tests extending the size of an attached volume."""
+
+ # We need admin credentials for getting instance action event details. By
+ # default a non-admin can list and show instance actions if they own the
+ # server instance, but since the event details can contain error messages
+ # and tracebacks, like an instance fault, those are not viewable by
+ # non-admins. This is obviously not a great user experience since the user
+ # may not know when the operation is actually complete. A microversion in
+ # the compute API will be added so that non-admins can see instance action
+ # events but will continue to hide the traceback field.
+ # TODO(mriedem): Change this to not rely on the admin user to get the event
+ # details once that microversion is available in Nova.
+ credentials = ['primary', 'admin']
+
+ _api_version = 3
+ # NOTE(mriedem): The minimum required volume API version is 3.42 and the
+ # minimum required compute API microversion is 2.51, but the compute call
+ # is implicit - Cinder calls Nova at that microversion, Tempest does not.
+ min_microversion = '3.42'
+
+ @classmethod
+ def setup_clients(cls):
+ super(VolumesExtendAttachedTest, cls).setup_clients()
+ cls.admin_servers_client = cls.os_admin.servers_client
+
+ def _find_extend_volume_instance_action(self, server_id):
+ actions = self.servers_client.list_instance_actions(
+ server_id)['instanceActions']
+ for action in actions:
+ if action['action'] == 'extend_volume':
+ return action
+
+ def _find_extend_volume_instance_action_finish_event(self, action):
+ # This has to be called by an admin client otherwise
+ # the events don't show up.
+ action = self.admin_servers_client.show_instance_action(
+ action['instance_uuid'], action['request_id'])['instanceAction']
+ for event in action['events']:
+ if (event['event'] == 'compute_extend_volume' and
+ event['finish_time']):
+ return event
+
+ @decorators.idempotent_id('301f5a30-1c6f-4ea0-be1a-91fd28d44354')
+ @testtools.skipUnless(CONF.volume_feature_enabled.extend_attached_volume,
+ "Attached volume extend is disabled.")
+ def test_extend_attached_volume(self):
+ """This is a happy path test which does the following:
+
+ * Create a volume at the configured volume_size.
+ * Create a server instance.
+ * Attach the volume to the server.
+ * Wait for the volume status to be "in-use".
+ * Extend the size of the volume and wait for the volume status to go
+ back to "in-use".
+ * Assert the volume size change is reflected in the volume API.
+ * Wait for the "compute_extend_volume" instance action event to show
+ up in the compute API with the success or failure status. We fail
+ if we timeout waiting for the instance action event to show up, or
+ if the action on the server fails.
+ """
+ # Create a test volume. Will be automatically cleaned up on teardown.
+ volume = self.create_volume()
+ # Create a test server. Will be automatically cleaned up on teardown.
+ server = self.create_server()
+ # Attach the volume to the server and wait for the volume status to be
+ # "in-use".
+ self.attach_volume(server['id'], volume['id'])
+ # Extend the size of the volume. If this is successful, the volume API
+ # will change the status on the volume to "extending" before doing an
+ # RPC cast to the volume manager on the backend. Note that we multiply
+ # the size of the volume since certain Cinder backends, e.g. ScaleIO,
+ # require multiples of 8GB.
+ extend_size = volume['size'] * 2
+ self.volumes_client.extend_volume(volume['id'], new_size=extend_size)
+ # The volume status should go back to in-use since it is still attached
+ # to the server instance.
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'in-use')
+ # Assert that the volume size has changed in the volume API.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+ self.assertEqual(extend_size, volume['size'])
+ # Now we wait for the "compute_extend_volume" instance action event
+ # to show up for the server instance. This is our indication that the
+ # asynchronous operation is complete on the compute side.
+ start_time = int(time.time())
+ timeout = self.servers_client.build_timeout
+ action = self._find_extend_volume_instance_action(server['id'])
+ while action is None and int(time.time()) - start_time < timeout:
+ time.sleep(self.servers_client.build_interval)
+ action = self._find_extend_volume_instance_action(server['id'])
+
+ if action is None:
+ msg = ("Timed out waiting to get 'extend_volume' instance action "
+ "record for server %(server)s after %(timeout)s seconds." %
+ {'server': server['id'], 'timeout': timeout})
+ raise lib_exc.TimeoutException(msg)
+
+ # Now that we found the extend_volume instance action, we can wait for
+ # the compute_extend_volume instance action event to show up to
+ # indicate the operation is complete.
+ start_time = int(time.time())
+ event = self._find_extend_volume_instance_action_finish_event(action)
+ while event is None and int(time.time()) - start_time < timeout:
+ time.sleep(self.servers_client.build_interval)
+ event = self._find_extend_volume_instance_action_finish_event(
+ action)
+
+ if event is None:
+ msg = ("Timed out waiting to get 'compute_extend_volume' instance "
+ "action event record for server %(server)s and request "
+ "%(request_id)s after %(timeout)s seconds." %
+ {'server': server['id'],
+ 'request_id': action['request_id'],
+ 'timeout': timeout})
+ raise lib_exc.TimeoutException(msg)
+
+ # Finally, assert that the action completed successfully.
+ self.assertTrue(
+ event['result'].lower() == 'success',
+ "Unexpected compute_extend_volume result '%(result)s' for request "
+ "%(request_id)s." %
+ {'result': event['result'],
+ 'request_id': action['request_id']})
diff --git a/tempest/clients.py b/tempest/clients.py
index e617c3c..ca205c8 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -17,7 +17,6 @@
from tempest.lib import auth
from tempest.lib import exceptions as lib_exc
from tempest.lib.services import clients
-from tempest.services import object_storage
CONF = config.CONF
@@ -25,8 +24,6 @@
class Manager(clients.ServiceClients):
"""Top level manager for OpenStack tempest clients"""
- default_params = config.service_client_config()
-
def __init__(self, credentials, scope='project'):
"""Initialization of Manager class.
@@ -47,6 +44,10 @@
self._set_object_storage_clients()
self._set_image_clients()
self._set_network_clients()
+ # TODO(andreaf) This is maintained for backward compatibility
+ # with plugins, but it should be removed eventually, since it was
+ # never a stable interface and it's not useful anyway
+ self.default_params = config.service_client_config()
def _set_network_clients(self):
self.network_agents_client = self.network.AgentsClient()
@@ -281,21 +282,11 @@
self.snapshots_client_latest = self.snapshots_v3_client
def _set_object_storage_clients(self):
- # NOTE(andreaf) Load configuration from config. Once object storage
- # is in lib, configuration will be pulled directly from the registry
- # and this will not be required anymore.
- params = config.service_client_config('object-storage')
-
- self.account_client = object_storage.AccountClient(self.auth_provider,
- **params)
- self.bulk_client = object_storage.BulkMiddlewareClient(
- self.auth_provider, **params)
- self.capabilities_client = object_storage.CapabilitiesClient(
- self.auth_provider, **params)
- self.container_client = object_storage.ContainerClient(
- self.auth_provider, **params)
- self.object_client = object_storage.ObjectClient(self.auth_provider,
- **params)
+ self.account_client = self.object_storage.AccountClient()
+ self.bulk_client = self.object_storage.BulkMiddlewareClient()
+ self.capabilities_client = self.object_storage.CapabilitiesClient()
+ self.container_client = self.object_storage.ContainerClient()
+ self.object_client = self.object_storage.ObjectClient()
def get_auth_provider_class(credentials):
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
index a128b3f..d0aa7dc 100644
--- a/tempest/cmd/cleanup.py
+++ b/tempest/cmd/cleanup.py
@@ -54,17 +54,17 @@
not delete the projects themselves.
**--dry-run**: Creates a report (``./dry_run.json``) of the projects that will
-be cleaned up (in the ``_tenants_to_clean`` dictionary [1]_) and the global
+be cleaned up (in the ``_projects_to_clean`` dictionary [1]_) and the global
objects that will be removed (domains, flavors, images, roles, projects,
and users). Once the cleanup command is executed (e.g. run without
parameters), running it again with **--dry-run** should yield an empty report.
**--help**: Print the help text for the command and parameters.
-.. [1] The ``_tenants_to_clean`` dictionary in ``dry_run.json`` lists the
+.. [1] The ``_projects_to_clean`` dictionary in ``dry_run.json`` lists the
projects that ``tempest cleanup`` will loop through to delete child
objects, but the command will, by default, not delete the projects
- themselves. This may differ from the ``tenants`` list as you can clean
+ themselves. This may differ from the ``projects`` list as you can clean
the Tempest and alternate Tempest users and projects but they will not be
deleted unless the **--delete-tempest-conf-objects** flag is used to
force their deletion.
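
For orientation, a minimal sketch of the shape ``dry_run.json`` takes, based on
the description above (the project ID and name are placeholders, and the exact
set of top-level keys depends on which global services are enabled):

    {
        "_projects_to_clean": {
            "<project-id>": {"name": "<project-name>"}
        },
        "projects": []
    }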
@@ -111,13 +111,13 @@
self.admin_id = ""
self.admin_role_id = ""
- self.admin_tenant_id = ""
+ self.admin_project_id = ""
self._init_admin_ids()
self.admin_role_added = []
# available services
- self.tenant_services = cleanup_service.get_tenant_cleanup_services()
+ self.project_services = cleanup_service.get_project_cleanup_services()
self.global_services = cleanup_service.get_global_cleanup_services()
if parsed_args.init_saved_state:
@@ -133,24 +133,24 @@
is_save_state = False
if is_dry_run:
- self.dry_run_data["_tenants_to_clean"] = {}
+ self.dry_run_data["_projects_to_clean"] = {}
admin_mgr = self.admin_mgr
- # Always cleanup tempest and alt tempest tenants unless
+ # Always cleanup tempest and alt tempest projects unless
# they are in saved state json. Therefore is_preserve is False
kwargs = {'data': self.dry_run_data,
'is_dry_run': is_dry_run,
'saved_state_json': self.json_data,
'is_preserve': False,
'is_save_state': is_save_state}
- tenant_service = cleanup_service.TenantService(admin_mgr, **kwargs)
- tenants = tenant_service.list()
- print("Process %s tenants" % len(tenants))
+ project_service = cleanup_service.ProjectService(admin_mgr, **kwargs)
+ projects = project_service.list()
+ print("Process %s projects" % len(projects))
- # Loop through list of tenants and clean them up.
- for tenant in tenants:
- self._add_admin(tenant['id'])
- self._clean_tenant(tenant)
+ # Loop through list of projects and clean them up.
+ for project in projects:
+ self._add_admin(project['id'])
+ self._clean_project(project)
kwargs = {'data': self.dry_run_data,
'is_dry_run': is_dry_run,
@@ -169,49 +169,51 @@
self._remove_admin_user_roles()
def _remove_admin_user_roles(self):
- tenant_ids = self.admin_role_added
- LOG.debug("Removing admin user roles where needed for tenants: %s",
- tenant_ids)
- for tenant_id in tenant_ids:
- self._remove_admin_role(tenant_id)
+ project_ids = self.admin_role_added
+ LOG.debug("Removing admin user roles where needed for projects: %s",
+ project_ids)
+ for project_id in project_ids:
+ self._remove_admin_role(project_id)
- def _clean_tenant(self, tenant):
- print("Cleaning tenant: %s " % tenant['name'])
+ def _clean_project(self, project):
+ print("Cleaning project: %s " % project['name'])
is_dry_run = self.options.dry_run
dry_run_data = self.dry_run_data
is_preserve = not self.options.delete_tempest_conf_objects
- tenant_id = tenant['id']
- tenant_name = tenant['name']
- tenant_data = None
+ project_id = project['id']
+ project_name = project['name']
+ project_data = None
if is_dry_run:
- tenant_data = dry_run_data["_tenants_to_clean"][tenant_id] = {}
- tenant_data['name'] = tenant_name
+ project_data = dry_run_data["_projects_to_clean"][project_id] = {}
+ project_data['name'] = project_name
kwargs = {"username": CONF.auth.admin_username,
"password": CONF.auth.admin_password,
- "tenant_name": tenant['name']}
+ "project_name": project['name']}
mgr = clients.Manager(credentials=credentials.get_credentials(
**kwargs))
- kwargs = {'data': tenant_data,
+ kwargs = {'data': project_data,
'is_dry_run': is_dry_run,
'saved_state_json': None,
'is_preserve': is_preserve,
'is_save_state': False,
- 'tenant_id': tenant_id}
- for service in self.tenant_services:
+ 'project_id': project_id}
+ for service in self.project_services:
svc = service(mgr, **kwargs)
svc.run()
def _init_admin_ids(self):
- tn_cl = self.admin_mgr.tenants_client
- rl_cl = self.admin_mgr.roles_client
+ pr_cl = self.admin_mgr.projects_client
+ rl_cl = self.admin_mgr.roles_v3_client
+ rla_cl = self.admin_mgr.role_assignments_client
+ us_cl = self.admin_mgr.users_v3_client
- tenant = identity.get_tenant_by_name(tn_cl,
- CONF.auth.admin_project_name)
- self.admin_tenant_id = tenant['id']
-
- user = identity.get_user_by_username(tn_cl, self.admin_tenant_id,
- CONF.auth.admin_username)
+ project = identity.get_project_by_name(pr_cl,
+ CONF.auth.admin_project_name)
+ self.admin_project_id = project['id']
+ user = identity.get_user_by_project(us_cl, rla_cl,
+ self.admin_project_id,
+ CONF.auth.admin_username)
self.admin_id = user['id']
roles = rl_cl.list_roles()['roles']
@@ -236,7 +238,7 @@
dest='delete_tempest_conf_objects',
default=False,
help="Force deletion of the tempest and "
- "alternate tempest users and tenants.")
+ "alternate tempest users and projects.")
parser.add_argument('--dry-run', action="store_true",
dest='dry_run', default=False,
help="Generate JSON file:" + DRY_RUN_JSON +
@@ -247,44 +249,44 @@
def get_description(self):
return 'Cleanup after tempest run'
- def _add_admin(self, tenant_id):
- rl_cl = self.admin_mgr.roles_client
+ def _add_admin(self, project_id):
+ rl_cl = self.admin_mgr.roles_v3_client
needs_role = True
- roles = rl_cl.list_user_roles_on_project(tenant_id,
+ roles = rl_cl.list_user_roles_on_project(project_id,
self.admin_id)['roles']
for role in roles:
if role['id'] == self.admin_role_id:
needs_role = False
- LOG.debug("User already had admin privilege for this tenant")
+ LOG.debug("User already had admin privilege for this project")
if needs_role:
- LOG.debug("Adding admin privilege for : %s", tenant_id)
- rl_cl.create_user_role_on_project(tenant_id, self.admin_id,
+ LOG.debug("Adding admin privilege for : %s", project_id)
+ rl_cl.create_user_role_on_project(project_id, self.admin_id,
self.admin_role_id)
- self.admin_role_added.append(tenant_id)
+ self.admin_role_added.append(project_id)
- def _remove_admin_role(self, tenant_id):
- LOG.debug("Remove admin user role for tenant: %s", tenant_id)
+ def _remove_admin_role(self, project_id):
+ LOG.debug("Remove admin user role for project: %s", project_id)
# Must initialize Admin Manager for each user role
# Otherwise authentication exception is thrown, weird
id_cl = clients.Manager(
credentials.get_configured_admin_credentials()).identity_client
- if (self._tenant_exists(tenant_id)):
+ if (self._project_exists(project_id)):
try:
- id_cl.delete_role_from_user_on_project(tenant_id,
+ id_cl.delete_role_from_user_on_project(project_id,
self.admin_id,
self.admin_role_id)
except Exception as ex:
- LOG.exception("Failed removing role from tenant which still"
+ LOG.exception("Failed removing role from project which still "
"exists, exception: %s", ex)
- def _tenant_exists(self, tenant_id):
- tn_cl = self.admin_mgr.tenants_client
+ def _project_exists(self, project_id):
+ pr_cl = self.admin_mgr.projects_client
try:
- t = tn_cl.show_tenant(tenant_id)
- LOG.debug("Tenant is: %s", str(t))
+ p = pr_cl.show_project(project_id)
+ LOG.debug("Project is: %s", str(p))
return True
except Exception as ex:
- LOG.debug("Tenant no longer exists? %s", ex)
+ LOG.debug("Project no longer exists? %s", ex)
return False
def _init_state(self):
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index c75bc85..d1e80f1 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -32,7 +32,7 @@
CONF_PRIV_NETWORK_NAME = None
CONF_PUB_NETWORK = None
CONF_PUB_ROUTER = None
-CONF_TENANTS = None
+CONF_PROJECTS = None
CONF_USERS = None
IS_CINDER = None
@@ -50,7 +50,7 @@
global CONF_PRIV_NETWORK_NAME
global CONF_PUB_NETWORK
global CONF_PUB_ROUTER
- global CONF_TENANTS
+ global CONF_PROJECTS
global CONF_USERS
global IS_CINDER
global IS_GLANCE
@@ -69,7 +69,7 @@
CONF_PRIV_NETWORK_NAME = CONF.compute.fixed_network_name
CONF_PUB_NETWORK = CONF.network.public_network_id
CONF_PUB_ROUTER = CONF.network.public_router_id
- CONF_TENANTS = [CONF.auth.admin_project_name]
+ CONF_PROJECTS = [CONF.auth.admin_project_name]
CONF_USERS = [CONF.auth.admin_username]
if IS_NEUTRON:
@@ -82,14 +82,14 @@
am = clients.Manager(
credentials.get_configured_admin_credentials())
net_cl = am.networks_client
- tn_cl = am.tenants_client
+ pr_cl = am.projects_client
networks = net_cl.list_networks()
- tenant = identity.get_tenant_by_name(tn_cl, project_name)
- t_id = tenant['id']
+ project = identity.get_project_by_name(pr_cl, project_name)
+ p_id = project['id']
n_id = None
for net in networks['networks']:
- if (net['tenant_id'] == t_id and net['name'] == net_name):
+ if (net['project_id'] == p_id and net['name'] == net_name):
n_id = net['id']
break
return n_id
@@ -141,7 +141,7 @@
def __init__(self, manager, **kwargs):
super(SnapshotService, self).__init__(kwargs)
- self.client = manager.snapshots_client
+ self.client = manager.snapshots_client_latest
def list(self):
client = self.client
@@ -319,7 +319,7 @@
class VolumeService(BaseService):
def __init__(self, manager, **kwargs):
super(VolumeService, self).__init__(kwargs)
- self.client = manager.volumes_client
+ self.client = manager.volumes_client_latest
def list(self):
client = self.client
@@ -344,7 +344,7 @@
class VolumeQuotaService(BaseService):
def __init__(self, manager, **kwargs):
super(VolumeQuotaService, self).__init__(kwargs)
- self.client = manager.volume_quotas_client
+ self.client = manager.volume_quotas_v2_client
def delete(self):
client = self.client
@@ -786,14 +786,14 @@
class IdentityService(BaseService):
def __init__(self, manager, **kwargs):
super(IdentityService, self).__init__(kwargs)
- self.client = manager.identity_client
+ self.client = manager.identity_v3_client
class UserService(BaseService):
def __init__(self, manager, **kwargs):
super(UserService, self).__init__(kwargs)
- self.client = manager.users_client
+ self.client = manager.users_v3_client
def list(self):
users = self.client.list_users()['users']
@@ -872,43 +872,43 @@
self.data['roles'][role['id']] = role['name']
-class TenantService(BaseService):
+class ProjectService(BaseService):
def __init__(self, manager, **kwargs):
- super(TenantService, self).__init__(kwargs)
- self.client = manager.tenants_client
+ super(ProjectService, self).__init__(kwargs)
+ self.client = manager.projects_client
def list(self):
- tenants = self.client.list_tenants()['tenants']
+ projects = self.client.list_projects()['projects']
if not self.is_save_state:
- tenants = [tenant for tenant in tenants if (tenant['id']
- not in self.saved_state_json['tenants'].keys()
- and tenant['name'] != CONF.auth.admin_project_name)]
+ projects = [project for project in projects if (project['id']
+ not in self.saved_state_json['projects'].keys()
+ and project['name'] != CONF.auth.admin_project_name)]
if self.is_preserve:
- tenants = [tenant for tenant in tenants if tenant['name']
- not in CONF_TENANTS]
+ projects = [project for project in projects if project['name']
+ not in CONF_PROJECTS]
- LOG.debug("List count, %s Tenants after reconcile", len(tenants))
- return tenants
+ LOG.debug("List count, %s Projects after reconcile", len(projects))
+ return projects
def delete(self):
- tenants = self.list()
- for tenant in tenants:
+ projects = self.list()
+ for project in projects:
try:
- self.client.delete_tenant(tenant['id'])
+ self.client.delete_project(project['id'])
except Exception:
- LOG.exception("Delete Tenant exception.")
+ LOG.exception("Delete project exception.")
def dry_run(self):
- tenants = self.list()
- self.data['tenants'] = tenants
+ projects = self.list()
+ self.data['projects'] = projects
def save_state(self):
- tenants = self.list()
- self.data['tenants'] = {}
- for tenant in tenants:
- self.data['tenants'][tenant['id']] = tenant['name']
+ projects = self.list()
+ self.data['projects'] = {}
+ for project in projects:
+ self.data['projects'][project['id']] = project['name']
class DomainService(BaseService):
@@ -948,35 +948,35 @@
self.data['domains'][domain['id']] = domain['name']
-def get_tenant_cleanup_services():
- tenant_services = []
+def get_project_cleanup_services():
+ project_services = []
# TODO(gmann): Tempest should provide some plugin hook for cleanup
# script extension to plugin tests also.
if IS_NOVA:
- tenant_services.append(ServerService)
- tenant_services.append(KeyPairService)
- tenant_services.append(SecurityGroupService)
- tenant_services.append(ServerGroupService)
+ project_services.append(ServerService)
+ project_services.append(KeyPairService)
+ project_services.append(SecurityGroupService)
+ project_services.append(ServerGroupService)
if not IS_NEUTRON:
- tenant_services.append(FloatingIpService)
- tenant_services.append(NovaQuotaService)
+ project_services.append(FloatingIpService)
+ project_services.append(NovaQuotaService)
if IS_HEAT:
- tenant_services.append(StackService)
+ project_services.append(StackService)
if IS_NEUTRON:
- tenant_services.append(NetworkFloatingIpService)
+ project_services.append(NetworkFloatingIpService)
if utils.is_extension_enabled('metering', 'network'):
- tenant_services.append(NetworkMeteringLabelRuleService)
- tenant_services.append(NetworkMeteringLabelService)
- tenant_services.append(NetworkRouterService)
- tenant_services.append(NetworkPortService)
- tenant_services.append(NetworkSubnetService)
- tenant_services.append(NetworkService)
- tenant_services.append(NetworkSecGroupService)
+ project_services.append(NetworkMeteringLabelRuleService)
+ project_services.append(NetworkMeteringLabelService)
+ project_services.append(NetworkRouterService)
+ project_services.append(NetworkPortService)
+ project_services.append(NetworkSubnetService)
+ project_services.append(NetworkService)
+ project_services.append(NetworkSecGroupService)
if IS_CINDER:
- tenant_services.append(SnapshotService)
- tenant_services.append(VolumeService)
- tenant_services.append(VolumeQuotaService)
- return tenant_services
+ project_services.append(SnapshotService)
+ project_services.append(VolumeService)
+ project_services.append(VolumeQuotaService)
+ return project_services
def get_global_cleanup_services():
@@ -986,7 +986,7 @@
if IS_GLANCE:
global_services.append(ImageService)
global_services.append(UserService)
- global_services.append(TenantService)
+ global_services.append(ProjectService)
global_services.append(DomainService)
global_services.append(RoleService)
return global_services
diff --git a/tempest/cmd/run.py b/tempest/cmd/run.py
index e71032a..f07f197 100644
--- a/tempest/cmd/run.py
+++ b/tempest/cmd/run.py
@@ -47,6 +47,12 @@
You can also use the **--list-tests** option in conjunction with selection
arguments to list which tests will be run.
+You can also use the **--load-list** option to pass a file path to tempest
+run. The file lists tests in a non-regex format, one test ID per line,
+matching the output of the **--list-tests** option, so you can select the
+target tests by removing unneeded entries from a list generated with
+**--list-tests**.
+
Test Execution
==============
There are several options to control how the tests are executed. By default
@@ -267,6 +273,12 @@
help='Path to a blacklist file, this file '
'contains a separate regex exclude on '
'each newline')
+ list_selector.add_argument('--load-list', '--load_list',
+ help='Path to a non-regex whitelist file, '
+ 'this file contains a separate test '
+ 'on each newline. This command '
+ 'supports files created by the tempest '
+ 'run ``--list-tests`` command')
# list only args
parser.add_argument('--list-tests', '-l', action='store_true',
help='List tests',
@@ -318,6 +330,8 @@
options.append("--parallel")
if parsed_args.concurrency:
options.append("--concurrency=%s" % parsed_args.concurrency)
+ if parsed_args.load_list:
+ options.append("--load-list=%s" % parsed_args.load_list)
return options
def _run(self, regex, options):
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index a72493d..fdf28d5 100644
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -76,7 +76,6 @@
from tempest import config
import tempest.lib.common.http
from tempest.lib import exceptions as lib_exc
-from tempest.services import object_storage
CONF = config.CONF
@@ -197,10 +196,6 @@
def verify_keystone_api_versions(os, update):
# Check keystone api versions
versions = _get_api_versions(os, 'keystone')
- if (CONF.identity_feature_enabled.api_v2 !=
- contains_version('v2.', versions)):
- print_and_or_update('api_v2', 'identity-feature-enabled',
- not CONF.identity_feature_enabled.api_v2, update)
if (CONF.identity_feature_enabled.api_v3 !=
contains_version('v3.', versions)):
print_and_or_update('api_v3', 'identity-feature-enabled',
@@ -236,11 +231,10 @@
def get_extension_client(os, service):
- params = config.service_client_config('object-storage')
extensions_client = {
'nova': os.compute.ExtensionsClient(),
'neutron': os.network.ExtensionsClient(),
- 'swift': object_storage.CapabilitiesClient(os.auth_provider, **params),
+ 'swift': os.object_storage.CapabilitiesClient(),
# NOTE: Cinder v3 API is current and v2 and v1 are deprecated.
# V3 extension API is the same as v2, so we reuse the v2 client
# for v3 API also.
diff --git a/tempest/common/credentials_factory.py b/tempest/common/credentials_factory.py
index a340531..da34975 100644
--- a/tempest/common/credentials_factory.py
+++ b/tempest/common/credentials_factory.py
@@ -219,13 +219,6 @@
'alt_user': ('identity', 'alt')
}
-DEFAULT_PARAMS = {
- 'disable_ssl_certificate_validation':
- CONF.identity.disable_ssl_certificate_validation,
- 'ca_certs': CONF.identity.ca_certificates_file,
- 'trace_requests': CONF.debug.trace_requests
-}
-
def get_configured_admin_credentials(fill_in=True, identity_version=None):
"""Get admin credentials from the config file
@@ -252,7 +245,7 @@
if identity_version == 'v3':
conf_attributes.append('domain_name')
# Read the parts of credentials from config
- params = DEFAULT_PARAMS.copy()
+ params = config.service_client_config()
for attr in conf_attributes:
params[attr] = getattr(CONF.auth, 'admin_' + attr)
# Build and validate credentials. We are reading configured credentials,
@@ -282,7 +275,7 @@
:param kwargs: Attributes to be used to build the Credentials object.
:returns: An object of a sub-type of `auth.Credentials`
"""
- params = dict(DEFAULT_PARAMS, **kwargs)
+ params = dict(config.service_client_config(), **kwargs)
identity_version = identity_version or CONF.identity.auth_version
# In case of "v3" add the domain from config if not specified
# To honour the "default_credentials_domain_name", if not domain
diff --git a/tempest/common/identity.py b/tempest/common/identity.py
index 6e496d3..eaf651b 100644
--- a/tempest/common/identity.py
+++ b/tempest/common/identity.py
@@ -20,6 +20,15 @@
CONF = config.CONF
+def get_project_by_name(client, project_name):
+ projects = client.list_projects({'name': project_name})['projects']
+ for project in projects:
+ if project['name'] == project_name:
+ return project
+ raise lib_exc.NotFound('No such project(%s) in %s' % (project_name,
+ projects))
+
+
def get_tenant_by_name(client, tenant_name):
tenants = client.list_tenants()['tenants']
for tenant in tenants:
@@ -36,6 +45,18 @@
raise lib_exc.NotFound('No such user(%s) in %s' % (username, users))
+def get_user_by_project(users_client, roles_client, project_id, username):
+ users = users_client.list_users(**{'name': username})['users']
+ users_in_project = roles_client.list_role_assignments(
+ **{'scope.project.id': project_id})['role_assignments']
+ for user in users:
+ if user['name'] == username:
+ for u in users_in_project:
+ if u['user']['id'] == user['id']:
+ return user
+ raise lib_exc.NotFound('No such user(%s) in %s' % (username, users))
+
+
def identity_utils(clients):
"""A client that abstracts v2 and v3 identity operations.
diff --git a/tempest/config.py b/tempest/config.py
index 4d0839a..0743220 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -194,6 +194,8 @@
default=60,
help='Timeout in seconds to wait for the http request to '
'return'),
+ cfg.StrOpt('proxy_url',
+ help='Specify an http proxy to use.')
]
identity_feature_group = cfg.OptGroup(name='identity-feature-enabled',
@@ -205,8 +207,14 @@
help='Does the identity service have delegation and '
'impersonation enabled'),
cfg.BoolOpt('api_v2',
- default=True,
- help='Is the v2 identity API enabled'),
+ default=False,
+ help='Is the v2 identity API enabled',
+ deprecated_for_removal=True,
+ deprecated_reason='The identity v2.0 API was removed in the '
+ 'Queens release. Tests that exercise the '
+ 'v2.0 API will be removed from tempest in '
+ 'the v22.0.0 release. They are kept only to '
+ 'test stable branches.'),
cfg.BoolOpt('api_v2_admin',
default=True,
help="Is the v2 identity admin API available? This setting "
@@ -836,7 +844,14 @@
help="Is the v2 volume API enabled"),
cfg.BoolOpt('api_v3',
default=True,
- help="Is the v3 volume API enabled")
+ help="Is the v3 volume API enabled"),
+ cfg.BoolOpt('extend_attached_volume',
+ default=False,
+ help='Does the cloud support extending the size of a volume '
+ 'which is currently attached to a server instance? This '
+ 'depends on the 3.42 volume API microversion and the '
+ '2.51 compute API microversion. Also, not all volume or '
+ 'compute backends support this operation.')
]
@@ -1301,6 +1316,7 @@
* `ca_certs`
* `trace_requests`
* `http_timeout`
+ * `proxy_url`
The dict returned by this does not fit a few service clients:
@@ -1323,7 +1339,8 @@
CONF.identity.disable_ssl_certificate_validation,
'ca_certs': CONF.identity.ca_certificates_file,
'trace_requests': CONF.debug.trace_requests,
- 'http_timeout': CONF.service_clients.http_timeout
+ 'http_timeout': CONF.service_clients.http_timeout,
+ 'proxy_url': CONF.service_clients.proxy_url,
}
if service_client_name is None:
@@ -1377,7 +1394,7 @@
module = service_clients[service_client]
configs = service_client.split('.')[0]
service_client_data = dict(
- name=service_client.replace('.', '_'),
+ name=service_client.replace('.', '_').replace('-', '_'),
service_version=service_client,
module_path=module.__name__,
client_names=module.__all__,
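
Since ``service_client_config()`` now also carries ``proxy_url``, service
clients can be built straight from it; a minimal sketch, assuming Tempest's
configuration has been loaded (the commented line shows where an already-built
``auth_provider`` would be used):

    from tempest import config

    # Includes disable_ssl_certificate_validation, ca_certs,
    # trace_requests, http_timeout and now proxy_url, plus the
    # object-storage service settings (catalog_type, region, ...).
    params = config.service_client_config('object-storage')
    # object_client = ObjectClient(auth_provider, **params)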
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index a8a6ff0..a430d5d 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -52,9 +52,5 @@
"the configured network")
-class RFCViolation(exceptions.RestClientException):
- message = "RFC Violation"
-
-
class InvalidServiceTag(exceptions.TempestException):
message = "Invalid service tag"
diff --git a/tempest/lib/auth.py b/tempest/lib/auth.py
index ab4308f..2dd9d00 100644
--- a/tempest/lib/auth.py
+++ b/tempest/lib/auth.py
@@ -261,12 +261,13 @@
def __init__(self, credentials, auth_url,
disable_ssl_certificate_validation=None,
ca_certs=None, trace_requests=None, scope='project',
- http_timeout=None):
+ http_timeout=None, proxy_url=None):
super(KeystoneAuthProvider, self).__init__(credentials, scope)
self.dscv = disable_ssl_certificate_validation
self.ca_certs = ca_certs
self.trace_requests = trace_requests
self.http_timeout = http_timeout
+ self.proxy_url = proxy_url
self.auth_url = auth_url
self.auth_client = self._auth_client(auth_url)
@@ -345,7 +346,7 @@
return json_v2id.TokenClient(
auth_url, disable_ssl_certificate_validation=self.dscv,
ca_certs=self.ca_certs, trace_requests=self.trace_requests,
- http_timeout=self.http_timeout)
+ http_timeout=self.http_timeout, proxy_url=self.proxy_url)
def _auth_params(self):
"""Auth parameters to be passed to the token request
@@ -433,7 +434,7 @@
return json_v3id.V3TokenClient(
auth_url, disable_ssl_certificate_validation=self.dscv,
ca_certs=self.ca_certs, trace_requests=self.trace_requests,
- http_timeout=self.http_timeout)
+ http_timeout=self.http_timeout, proxy_url=self.proxy_url)
def _auth_params(self):
"""Auth parameters to be passed to the token request
@@ -599,7 +600,8 @@
def get_credentials(auth_url, fill_in=True, identity_version='v2',
disable_ssl_certificate_validation=None, ca_certs=None,
- trace_requests=None, http_timeout=None, **kwargs):
+ trace_requests=None, http_timeout=None, proxy_url=None,
+ **kwargs):
"""Builds a credentials object based on the configured auth_version
:param auth_url (string): Full URI of the OpenStack Identity API(Keystone)
@@ -617,6 +619,7 @@
:param trace_requests: trace in log API requests to the auth system
:param http_timeout: timeout in seconds to wait for the http request to
return
+ :param proxy_url: URL of HTTP(s) proxy used when fill_in is True
:param kwargs (dict): Dict of credential key/value pairs
Examples:
@@ -641,7 +644,7 @@
auth_provider = auth_provider_class(
creds, auth_url, disable_ssl_certificate_validation=dscv,
ca_certs=ca_certs, trace_requests=trace_requests,
- http_timeout=http_timeout)
+ http_timeout=http_timeout, proxy_url=proxy_url)
creds = auth_provider.fill_credentials()
return creds
diff --git a/tempest/lib/cli/base.py b/tempest/lib/cli/base.py
index 5468a7b..f39ecbc 100644
--- a/tempest/lib/cli/base.py
+++ b/tempest/lib/cli/base.py
@@ -93,10 +93,20 @@
:type insecure: boolean
:param prefix: prefix to insert before commands
:type prefix: string
+ :param user_domain_name: User's domain name
+ :type user_domain_name: string
+ :param user_domain_id: User's domain ID
+ :type user_domain_id: string
+ :param project_domain_name: Project's domain name
+ :type project_domain_name: string
+ :param project_domain_id: Project's domain ID
+ :type project_domain_id: string
"""
def __init__(self, username='', password='', tenant_name='', uri='',
- cli_dir='', insecure=False, prefix='', *args, **kwargs):
+ cli_dir='', insecure=False, prefix='', user_domain_name=None,
+ user_domain_id=None, project_domain_name=None,
+ project_domain_id=None, *args, **kwargs):
"""Initialize a new CLIClient object."""
super(CLIClient, self).__init__()
self.cli_dir = cli_dir if cli_dir else '/usr/bin'
@@ -106,6 +116,10 @@
self.uri = uri
self.insecure = insecure
self.prefix = prefix
+ self.user_domain_name = user_domain_name
+ self.user_domain_id = user_domain_id
+ self.project_domain_name = project_domain_name
+ self.project_domain_id = project_domain_id
def nova(self, action, flags='', params='', fail_ok=False,
endpoint_type='publicURL', merge_stderr=False):
@@ -366,6 +380,14 @@
self.tenant_name,
self.password,
self.uri))
+ if self.user_domain_name is not None:
+ creds += ' --os-user-domain-name %s' % self.user_domain_name
+ if self.user_domain_id is not None:
+ creds += ' --os-user-domain-id %s' % self.user_domain_id
+ if self.project_domain_name is not None:
+ creds += ' --os-project-domain-name %s' % self.project_domain_name
+ if self.project_domain_id is not None:
+ creds += ' --os-project-domain-id %s' % self.project_domain_id
if self.insecure:
flags = creds + ' --insecure ' + flags
else:
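
A hedged sketch of the new domain parameters in use (the endpoint and
credentials are placeholders; the client binaries are assumed to live in the
default ``/usr/bin``):

    from tempest.lib.cli import base

    cli = base.CLIClient(username='admin', password='secret',
                         tenant_name='admin',
                         uri='https://keystone.example.com/v3',
                         user_domain_name='Default',
                         project_domain_name='Default')
    # The --os-user-domain-name / --os-project-domain-name flags are
    # appended to the credential arguments automatically.
    output = cli.nova('list')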
diff --git a/tempest/lib/common/api_version_utils.py b/tempest/lib/common/api_version_utils.py
index 98f174d..bcb076b 100644
--- a/tempest/lib/common/api_version_utils.py
+++ b/tempest/lib/common/api_version_utils.py
@@ -12,7 +12,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from oslo_log import log as logging
import testtools
from tempest.lib.common import api_version_request
@@ -20,7 +19,6 @@
LATEST_MICROVERSION = 'latest'
-LOG = logging.getLogger(__name__)
class BaseMicroversionTest(object):
@@ -166,7 +164,6 @@
if op is None:
msg = ("Operation %s is invalid. Valid options include: lt, eq, gt, "
"le, ne, ge." % operation)
- LOG.debug(msg)
raise exceptions.InvalidParam(invalid_param=msg)
# Remove "volume" from "volume <microversion>", for example, so that the
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index b4b1fc9..738c37f 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -17,6 +17,47 @@
import urllib3
+class ClosingProxyHttp(urllib3.ProxyManager):
+ def __init__(self, proxy_url, disable_ssl_certificate_validation=False,
+ ca_certs=None, timeout=None):
+ kwargs = {}
+
+ if disable_ssl_certificate_validation:
+ urllib3.disable_warnings()
+ kwargs['cert_reqs'] = 'CERT_NONE'
+ elif ca_certs:
+ kwargs['cert_reqs'] = 'CERT_REQUIRED'
+ kwargs['ca_certs'] = ca_certs
+
+ if timeout:
+ kwargs['timeout'] = timeout
+
+ super(ClosingProxyHttp, self).__init__(proxy_url, **kwargs)
+
+ def request(self, url, method, *args, **kwargs):
+
+ class Response(dict):
+ def __init__(self, info):
+ for key, value in info.getheaders().items():
+ self[key.lower()] = value
+ self.status = info.status
+ self['status'] = str(self.status)
+ self.reason = info.reason
+ self.version = info.version
+ self['content-location'] = url
+
+ original_headers = kwargs.get('headers', {})
+ new_headers = dict(original_headers, connection='close')
+ new_kwargs = dict(kwargs, headers=new_headers)
+
+ # Follow up to 5 redirections. If that limit is exceeded, don't
+ # raise an exception; return the HTTP 3XX response instead.
+ retry = urllib3.util.Retry(raise_on_redirect=False, redirect=5)
+ r = super(ClosingProxyHttp, self).request(method, url, retries=retry,
+ *args, **new_kwargs)
+ return Response(r), r.data
+
+
class ClosingHttp(urllib3.poolmanager.PoolManager):
def __init__(self, disable_ssl_certificate_validation=False,
ca_certs=None, timeout=None):
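
A minimal standalone sketch of the proxy-aware variant (the proxy address and
target URL are placeholders; in Tempest this object is normally created by the
rest client below when ``proxy_url`` is set):

    from tempest.lib.common import http

    proxy = http.ClosingProxyHttp('http://localhost:3128',
                                  disable_ssl_certificate_validation=True,
                                  timeout=30)
    # Note the (url, method) argument order, matching ClosingHttp and
    # httplib2-style callers rather than urllib3's (method, url).
    resp, body = proxy.request('https://example.org/', 'GET')
    print(resp.status, resp['content-location'])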
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index f58d737..22276d4 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -69,6 +69,7 @@
of the request and response payload
:param str http_timeout: Timeout in seconds to wait for the http request to
return
+ :param str proxy_url: HTTP proxy URL to use, if any.
"""
# The version of the API this client implements
@@ -80,7 +81,8 @@
endpoint_type='publicURL',
build_interval=1, build_timeout=60,
disable_ssl_certificate_validation=False, ca_certs=None,
- trace_requests='', name=None, http_timeout=None):
+ trace_requests='', name=None, http_timeout=None,
+ proxy_url=None):
self.auth_provider = auth_provider
self.service = service
self.region = region
@@ -100,9 +102,16 @@
'retry-after', 'server',
'vary', 'www-authenticate'))
dscv = disable_ssl_certificate_validation
- self.http_obj = http.ClosingHttp(
- disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
- timeout=http_timeout)
+
+ if proxy_url:
+ self.http_obj = http.ClosingProxyHttp(
+ proxy_url,
+ disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
+ timeout=http_timeout)
+ else:
+ self.http_obj = http.ClosingHttp(
+ disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
+ timeout=http_timeout)
def get_headers(self, accept_type=None, send_type=None):
"""Return the default headers which will be used with outgoing requests
diff --git a/tempest/lib/common/utils/data_utils.py b/tempest/lib/common/utils/data_utils.py
index a0941ef..c5df590 100644
--- a/tempest/lib/common/utils/data_utils.py
+++ b/tempest/lib/common/utils/data_utils.py
@@ -18,9 +18,6 @@
import string
import uuid
-from debtcollector import removals
-import netaddr
-from oslo_utils import netutils
from oslo_utils import uuidutils
import six.moves
@@ -177,36 +174,6 @@
for i in range(size)])
-@removals.remove(
- message="use get_ipv6_addr_by_EUI64 from oslo_utils.netutils",
- version="Newton",
- removal_version="Ocata")
-def get_ipv6_addr_by_EUI64(cidr, mac):
- """Generate a IPv6 addr by EUI-64 with CIDR and MAC
-
- :param str cidr: a IPv6 CIDR
- :param str mac: a MAC address
- :return: an IPv6 Address
- :rtype: netaddr.IPAddress
- """
- # Check if the prefix is IPv4 address
- is_ipv4 = netutils.is_valid_ipv4(cidr)
- if is_ipv4:
- msg = "Unable to generate IP address by EUI64 for IPv4 prefix"
- raise TypeError(msg)
- try:
- eui64 = int(netaddr.EUI(mac).eui64())
- prefix = netaddr.IPNetwork(cidr)
- return netaddr.IPAddress(prefix.first + eui64 ^ (1 << 57))
- except (ValueError, netaddr.AddrFormatError):
- raise TypeError('Bad prefix or mac format for generating IPv6 '
- 'address by EUI-64: %(prefix)s, %(mac)s:'
- % {'prefix': cidr, 'mac': mac})
- except TypeError:
- raise TypeError('Bad prefix type for generate IPv6 address by '
- 'EUI-64: %s' % cidr)
-
-
# Courtesy of http://stackoverflow.com/a/312464
def chunkify(sequence, chunksize):
"""Yield successive chunks from `sequence`."""
diff --git a/tempest/lib/common/utils/test_utils.py b/tempest/lib/common/utils/test_utils.py
index bd0db7c..c2e93ee 100644
--- a/tempest/lib/common/utils/test_utils.py
+++ b/tempest/lib/common/utils/test_utils.py
@@ -86,22 +86,29 @@
pass
-def call_until_true(func, duration, sleep_for):
+def call_until_true(func, duration, sleep_for, *args, **kwargs):
"""Call the given function until it returns True (and return True)
or until the specified duration (in seconds) elapses (and return False).
- :param func: A zero argument callable that returns True on success.
+ :param func: A callable that returns True on success.
:param duration: The number of seconds for which to attempt a
successful call of the function.
:param sleep_for: The number of seconds to sleep after an unsuccessful
invocation of the function.
+ :param args: args that are passed to func.
+ :param kwargs: kwargs that are passed to func.
"""
now = time.time()
+ begin_time = now
timeout = now + duration
while now < timeout:
- if func():
+ if func(*args, **kwargs):
+ LOG.debug("Call %s returns true in %f seconds",
+ getattr(func, '__name__'), time.time() - begin_time)
return True
time.sleep(sleep_for)
now = time.time()
+ LOG.debug("Call %s returns false in %f seconds",
+ getattr(func, '__name__'), duration)
return False
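
A small self-contained sketch of the extended signature (the fake client
stands in for a real compute servers client):

    from tempest.lib.common.utils import test_utils

    class FakeServersClient(object):
        """Stand-in for a compute servers client (illustration only)."""
        def show_server(self, server_id):
            return {'server': {'id': server_id, 'status': 'ACTIVE'}}

    def server_is_active(client, server_id):
        return client.show_server(server_id)['server']['status'] == 'ACTIVE'

    # Poll every 2 seconds for up to 60 seconds; the extra positional
    # arguments are forwarded to server_is_active on every call.
    test_utils.call_until_true(server_is_active, 60, 2,
                               FakeServersClient(), 'some-server-id')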
diff --git a/tempest/lib/services/clients.py b/tempest/lib/services/clients.py
index 4fa7a7a..8918a8c 100644
--- a/tempest/lib/services/clients.py
+++ b/tempest/lib/services/clients.py
@@ -31,6 +31,7 @@
from tempest.lib.services import identity
from tempest.lib.services import image
from tempest.lib.services import network
+from tempest.lib.services import object_storage
from tempest.lib.services import volume
warnings.simplefilter("once")
@@ -50,20 +51,13 @@
'image.v1': image.v1,
'image.v2': image.v2,
'network': network,
+ 'object-storage': object_storage,
'volume.v1': volume.v1,
'volume.v2': volume.v2,
'volume.v3': volume.v3
}
-def _tempest_internal_modules():
- # Set of unstable service clients available in Tempest
- # NOTE(andreaf) This list will exists only as long the remain clients
- # are migrated to tempest.lib, and it will then be deleted without
- # deprecation or advance notice
- return set(['object-storage'])
-
-
def available_modules():
"""Set of service client modules available in Tempest and plugins
@@ -101,17 +95,6 @@
plug_service_versions))
name_conflicts.append(exceptions.PluginRegistrationException(
name=plugin_name, detailed_error=detailed_error))
- # NOTE(andreaf) Once all tempest clients are stable, the following
- # if will have to be removed.
- if not plug_service_versions.isdisjoint(
- _tempest_internal_modules()):
- detailed_error = (
- 'Plugin %s is trying to register a service %s already '
- 'claimed by a Tempest one' % (plugin_name,
- _tempest_internal_modules() &
- plug_service_versions))
- name_conflicts.append(exceptions.PluginRegistrationException(
- name=plugin_name, detailed_error=detailed_error))
extra_service_versions |= plug_service_versions
if name_conflicts:
LOG.error(
@@ -276,7 +259,7 @@
@removals.removed_kwarg('client_parameters')
def __init__(self, credentials, identity_uri, region=None, scope='project',
disable_ssl_certificate_validation=True, ca_certs=None,
- trace_requests='', client_parameters=None):
+ trace_requests='', client_parameters=None, proxy_url=None):
"""Service Clients provider
Instantiate a `ServiceClients` object, from a set of credentials and an
@@ -336,6 +319,8 @@
name, as declared in `service_clients.available_modules()` except
for the version. Values are dictionaries of parameters that are
going to be passed to all clients in the service client module.
+ :param proxy_url: HTTP(S) proxy URL applied to the auth provider and
+ to all service clients.
"""
self._registered_services = set([])
self.credentials = credentials
@@ -360,16 +345,20 @@
self.dscv = disable_ssl_certificate_validation
self.ca_certs = ca_certs
self.trace_requests = trace_requests
+ self.proxy_url = proxy_url
# Creates an auth provider for the credentials
self.auth_provider = auth_provider_class(
self.credentials, self.identity_uri, scope=scope,
disable_ssl_certificate_validation=self.dscv,
- ca_certs=self.ca_certs, trace_requests=self.trace_requests)
+ ca_certs=self.ca_certs, trace_requests=self.trace_requests,
+ proxy_url=proxy_url)
+
# Setup some defaults for client parameters of registered services
client_parameters = client_parameters or {}
self.parameters = {}
+
# Parameters are provided for unversioned services
- all_modules = available_modules() | _tempest_internal_modules()
+ all_modules = available_modules()
unversioned_services = set(
[x.split('.')[0] for x in all_modules])
for service in unversioned_services:
@@ -420,8 +409,8 @@
clients in tempest.
:param client_names: List or set of names of service client classes.
:param kwargs: Extra optional parameters to be passed to all clients.
- ServiceClient provides defaults for region, dscv, ca_certs and
- trace_requests.
+ ServiceClients provides defaults for region, dscv, ca_certs,
+ trace_requests and proxy_url.
:raise ServiceClientRegistrationException: if the provided name is
already in use or if service_version is already registered.
:raise ImportError: if module_path cannot be imported.
@@ -442,7 +431,8 @@
params = dict(region=self.region,
disable_ssl_certificate_validation=self.dscv,
ca_certs=self.ca_certs,
- trace_requests=self.trace_requests)
+ trace_requests=self.trace_requests,
+ proxy_url=self.proxy_url)
params.update(kwargs)
# Instantiate the client factory
_factory = ClientsFactory(module_path=module_path,
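
An end-to-end sketch of the new ``proxy_url`` parameter (the endpoint,
credentials and proxy address are all placeholders; ``fill_in=False`` keeps
the sketch offline):

    from tempest.lib import auth
    from tempest.lib.services import clients

    creds = auth.get_credentials(
        'https://keystone.example.com/v3', fill_in=False,
        identity_version='v3', username='demo', password='secret',
        project_name='demo', user_domain_name='Default',
        project_domain_name='Default')
    # Token requests and every registered service client now go
    # through the proxy.
    manager = clients.ServiceClients(
        creds, 'https://keystone.example.com/v3',
        proxy_url='http://localhost:3128')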
@@ -456,9 +446,7 @@
@property
def registered_services(self):
- # NOTE(andreaf) Once all tempest modules are stable this needs to
- # be updated to remove _tempest_internal_modules
- return self._registered_services | _tempest_internal_modules()
+ return self._registered_services
def _setup_parameters(self, parameters):
"""Setup default values for client parameters
diff --git a/tempest/lib/services/identity/v3/identity_client.py b/tempest/lib/services/identity/v3/identity_client.py
index 2512a3e..ad770bf 100644
--- a/tempest/lib/services/identity/v3/identity_client.py
+++ b/tempest/lib/services/identity/v3/identity_client.py
@@ -57,3 +57,10 @@
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
+
+ def list_auth_domains(self):
+ """Get available domain scopes."""
+ resp, body = self.get("auth/domains")
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
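
A hedged usage sketch of the new call; the client is assumed to be an
authenticated v3 identity client (e.g. ``clients.Manager(creds).identity_v3_client``),
and Keystone returns the available scopes under a ``domains`` key:

    def list_domain_scopes(identity_client):
        # identity_client: an authenticated v3 IdentityClient (assumed).
        return identity_client.list_auth_domains()['domains']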
diff --git a/tempest/lib/services/object_storage/__init__.py b/tempest/lib/services/object_storage/__init__.py
index e69de29..4303d09 100644
--- a/tempest/lib/services/object_storage/__init__.py
+++ b/tempest/lib/services/object_storage/__init__.py
@@ -0,0 +1,25 @@
+# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.object_storage.account_client import AccountClient
+from tempest.lib.services.object_storage.bulk_middleware_client import \
+ BulkMiddlewareClient
+from tempest.lib.services.object_storage.capabilities_client import \
+ CapabilitiesClient
+from tempest.lib.services.object_storage.container_client import \
+ ContainerClient
+from tempest.lib.services.object_storage.object_client import ObjectClient
+
+__all__ = ['AccountClient', 'BulkMiddlewareClient', 'CapabilitiesClient',
+ 'ContainerClient', 'ObjectClient']
diff --git a/tempest/lib/services/object_storage/container_client.py b/tempest/lib/services/object_storage/container_client.py
new file mode 100644
index 0000000..2da8e24
--- /dev/null
+++ b/tempest/lib/services/object_storage/container_client.py
@@ -0,0 +1,124 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from xml.etree import ElementTree as etree
+
+import debtcollector.moves
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class ContainerClient(rest_client.RestClient):
+
+ def update_container(self, container_name, **headers):
+ """Creates or Updates a container
+
+ Optional metadata is passed in as request headers. For the full
+ list of allowed headers and values, please refer to the
+ official API reference:
+ https://developer.openstack.org/api-ref/object-store/#create-container
+ """
+ url = str(container_name)
+
+ resp, body = self.put(url, body=None, headers=headers)
+ self.expected_success([201, 202], resp.status)
+ return resp, body
+
+ # NOTE: This alias exists for usability: PUT can be used both to create
+ # and to update a resource, and in the Swift container API it is mainly
+ # used for creation.
+ create_container = update_container
+
+ def delete_container(self, container_name):
+ """Deletes the container (if it's empty)."""
+ url = str(container_name)
+ resp, body = self.delete(url)
+ self.expected_success(204, resp.status)
+ return resp, body
+
+ def create_update_or_delete_container_metadata(
+ self, container_name,
+ create_update_metadata=None,
+ delete_metadata=None,
+ create_update_metadata_prefix='X-Container-Meta-',
+ delete_metadata_prefix='X-Remove-Container-Meta-'):
+        """Creates, updates or deletes a container metadata entry.
+
+        Container metadata can be created, updated or deleted based on
+        the metadata header or value. For detailed info, please refer to the
+ official API reference:
+ https://developer.openstack.org/api-ref/object-store/#create-update-or-delete-container-metadata
+ """
+ url = str(container_name)
+ headers = {}
+ if create_update_metadata:
+ for key in create_update_metadata:
+ metadata_header_name = create_update_metadata_prefix + key
+ headers[metadata_header_name] = create_update_metadata[key]
+ if delete_metadata:
+ for key in delete_metadata:
+ headers[delete_metadata_prefix + key] = delete_metadata[key]
+
+ resp, body = self.post(url, headers=headers, body=None)
+ self.expected_success(204, resp.status)
+ return resp, body
+
+ update_container_metadata = debtcollector.moves.moved_function(
+ create_update_or_delete_container_metadata,
+ 'update_container_metadata', __name__,
+ version='Queens', removal_version='Rocky')
+
+ def list_container_metadata(self, container_name):
+ """List all container metadata."""
+ url = str(container_name)
+ resp, body = self.head(url)
+ self.expected_success(204, resp.status)
+ return resp, body
+
+ def list_container_objects(self, container_name, params=None):
+ """List the objects in a container, given the container name
+
+ Returns the container object listing as a plain text list, or as
+ xml or json if that option is specified via the 'format' argument.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/object-storage/?expanded=show-container-details-and-list-objects-detail
+ """
+
+ url = str(container_name)
+ if params:
+ url += '?'
+ url += '&%s' % urllib.urlencode(params)
+
+ resp, body = self.get(url, headers={})
+ if params and params.get('format') == 'json':
+ body = json.loads(body)
+ elif params and params.get('format') == 'xml':
+ body = etree.fromstring(body)
+        # Else the content-type is text/plain
+ else:
+ body = [
+ obj_name for obj_name in body.decode().split('\n') if obj_name
+ ]
+
+ self.expected_success([200, 204], resp.status)
+ return resp, body
+
+ list_container_contents = debtcollector.moves.moved_function(
+ list_container_objects, 'list_container_contents', __name__,
+ version='Queens', removal_version='Rocky')
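
Taken together, the new stable interface can be exercised roughly as follows;
the container name and metadata keys are purely illustrative and the client is
assumed to be already configured::

    def exercise_container_client(container_client):
        # Create (or update) the container, then attach and strip metadata.
        container_client.update_container('demo-container')
        container_client.create_update_or_delete_container_metadata(
            'demo-container',
            create_update_metadata={'colour': 'blue'},   # X-Container-Meta-colour
            delete_metadata={'flavour': ''})             # X-Remove-Container-Meta-flavour
        # List the objects as JSON and finally remove the (still empty) container.
        resp, objects = container_client.list_container_objects(
            'demo-container', params={'format': 'json'})
        container_client.delete_container('demo-container')
        return objects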
diff --git a/tempest/services/object_storage/object_client.py b/tempest/lib/services/object_storage/object_client.py
similarity index 62%
rename from tempest/services/object_storage/object_client.py
rename to tempest/lib/services/object_storage/object_client.py
index 6d656ec..383aff6 100644
--- a/tempest/services/object_storage/object_client.py
+++ b/tempest/lib/services/object_storage/object_client.py
@@ -23,7 +23,8 @@
class ObjectClient(rest_client.RestClient):
def create_object(self, container, object_name, data,
- params=None, metadata=None, headers=None):
+ params=None, metadata=None, headers=None,
+ chunked=False):
"""Create storage object."""
if headers is None:
@@ -37,7 +38,7 @@
if params:
url += '?%s' % urlparse.urlencode(params)
- resp, body = self.put(url, data, headers)
+ resp, body = self.put(url, data, headers, chunked=chunked)
self.expected_success(201, resp.status)
return resp, body
@@ -50,28 +51,27 @@
self.expected_success([200, 204], resp.status)
return resp, body
- def update_object_metadata(self, container, object_name, metadata,
- metadata_prefix='X-Object-Meta-'):
+ def create_or_update_object_metadata(self, container, object_name,
+ headers=None):
"""Add, remove, or change X-Object-Meta metadata for storage object."""
- headers = {}
- for key in metadata:
- headers["%s%s" % (str(metadata_prefix), str(key))] = metadata[key]
-
url = "%s/%s" % (str(container), str(object_name))
resp, body = self.post(url, None, headers=headers)
self.expected_success(202, resp.status)
return resp, body
- def list_object_metadata(self, container, object_name):
+ def list_object_metadata(self, container, object_name,
+ params=None, headers=None):
"""List all storage object X-Object-Meta- metadata."""
url = "%s/%s" % (str(container), str(object_name))
- resp, body = self.head(url)
+ if params:
+ url += '?%s' % urlparse.urlencode(params)
+ resp, body = self.head(url, headers=headers)
self.expected_success(200, resp.status)
return resp, body
- def get_object(self, container, object_name, metadata=None):
+ def get_object(self, container, object_name, metadata=None, params=None):
"""Retrieve object's data."""
headers = {}
@@ -80,45 +80,12 @@
headers[str(key)] = metadata[key]
url = "{0}/{1}".format(container, object_name)
+ if params:
+ url += '?%s' % urlparse.urlencode(params)
resp, body = self.get(url, headers=headers)
self.expected_success([200, 206], resp.status)
return resp, body
- def copy_object_in_same_container(self, container, src_object_name,
- dest_object_name, metadata=None):
- """Copy storage object's data to the new object using PUT."""
-
- url = "{0}/{1}".format(container, dest_object_name)
- headers = {}
- headers['X-Copy-From'] = "%s/%s" % (str(container),
- str(src_object_name))
- headers['content-length'] = '0'
- if metadata:
- for key in metadata:
- headers[str(key)] = metadata[key]
-
- resp, body = self.put(url, None, headers=headers)
- self.expected_success(201, resp.status)
- return resp, body
-
- def copy_object_across_containers(self, src_container, src_object_name,
- dst_container, dst_object_name,
- metadata=None):
- """Copy storage object's data to the new object using PUT."""
-
- url = "{0}/{1}".format(dst_container, dst_object_name)
- headers = {}
- headers['X-Copy-From'] = "%s/%s" % (str(src_container),
- str(src_object_name))
- headers['content-length'] = '0'
- if metadata:
- for key in metadata:
- headers[str(key)] = metadata[key]
-
- resp, body = self.put(url, None, headers=headers)
- self.expected_success(201, resp.status)
- return resp, body
-
def copy_object_2d_way(self, container, src_object_name, dest_object_name,
metadata=None):
"""Copy storage object's data to the new object using COPY."""
@@ -135,38 +102,6 @@
self.expected_success(201, resp.status)
return resp, body
- def create_object_segments(self, container, object_name, segment, data):
- """Creates object segments."""
- url = "{0}/{1}/{2}".format(container, object_name, segment)
- resp, body = self.put(url, data)
- self.expected_success(201, resp.status)
- return resp, body
-
- def put_object_with_chunk(self, container, name, contents):
- """Put an object with Transfer-Encoding header
-
- :param container: name of the container
- :type container: string
- :param name: name of the object
- :type name: string
- :param contents: object data
- :type contents: iterable
- """
- headers = {'Transfer-Encoding': 'chunked'}
- if self.token:
- headers['X-Auth-Token'] = self.token
-
- url = "%s/%s" % (container, name)
- resp, body = self.put(
- url, headers=headers,
- body=contents,
- chunked=True
- )
-
- self._error_checker(resp, body)
- self.expected_success(201, resp.status)
- return resp.status, resp.reason, resp
-
def create_object_continue(self, container, object_name,
data, metadata=None):
"""Put an object using Expect:100-continue"""
@@ -183,8 +118,7 @@
path = str(parsed.path) + "/"
path += "%s/%s" % (str(container), str(object_name))
- conn = create_connection(parsed)
-
+ conn = _create_connection(parsed)
# Send the PUT request and the headers including the "Expect" header
conn.putrequest('PUT', path)
@@ -218,7 +152,7 @@
return resp.status, resp.reason
-def create_connection(parsed_url):
+def _create_connection(parsed_url):
"""Helper function to create connection with httplib
:param parsed_url: parsed url of the remote location
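
Since put_object_with_chunk and the old metadata helper are removed, the same
operations now route through create_object and
create_or_update_object_metadata. A hedged sketch of the equivalent calls;
container, object name and payload are illustrative::

    def upload_chunked_and_tag(object_client, container, name):
        # Chunked upload: pass an iterable of byte chunks and chunked=True,
        # mirroring what the removed put_object_with_chunk helper did.
        resp, _ = object_client.create_object(
            container, name,
            data=iter([b'chunk-1', b'chunk-2']),
            headers={'Transfer-Encoding': 'chunked'},
            chunked=True)
        # Metadata is now supplied as ready-made headers.
        object_client.create_or_update_object_metadata(
            container, name, headers={'X-Object-Meta-colour': 'blue'})
        return resp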
diff --git a/tempest/lib/services/volume/v1/encryption_types_client.py b/tempest/lib/services/volume/v1/encryption_types_client.py
index 067b4e8..0fac6bd 100644
--- a/tempest/lib/services/volume/v1/encryption_types_client.py
+++ b/tempest/lib/services/volume/v1/encryption_types_client.py
@@ -49,9 +49,9 @@
def create_encryption_type(self, volume_type_id, **kwargs):
"""Create encryption type.
- TODO: Current api-site doesn't contain this API description.
- After fixing the api-site, we need to fix here also for putting
- the link to api-site.
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/block-storage/v2/#create-an-encryption-type-for-v2
"""
url = "/types/%s/encryption" % volume_type_id
post_body = json.dumps({'encryption': kwargs})
diff --git a/tempest/lib/services/volume/v1/hosts_client.py b/tempest/lib/services/volume/v1/hosts_client.py
index 56ba12c..9b19b84 100644
--- a/tempest/lib/services/volume/v1/hosts_client.py
+++ b/tempest/lib/services/volume/v1/hosts_client.py
@@ -23,8 +23,12 @@
"""Client class to send CRUD Volume Host API V1 requests"""
def list_hosts(self, **params):
- """Lists all hosts."""
+        """Lists all hosts.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-all-hosts
+ """
url = 'os-hosts'
if params:
url += '?%s' % urllib.urlencode(params)
diff --git a/tempest/lib/services/volume/v1/qos_client.py b/tempest/lib/services/volume/v1/qos_client.py
index e247b7b..593bddd 100644
--- a/tempest/lib/services/volume/v1/qos_client.py
+++ b/tempest/lib/services/volume/v1/qos_client.py
@@ -92,7 +92,9 @@
:param keys: keys to delete from the QoS specification.
- TODO(jordanP): Add a link once LP #1524877 is fixed.
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/block-storage/v2/#unset-keys-in-qos-specification
"""
put_body = json.dumps({'keys': keys})
resp, body = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
diff --git a/tempest/lib/services/volume/v1/quotas_client.py b/tempest/lib/services/volume/v1/quotas_client.py
index 678fd82..84f34f2 100644
--- a/tempest/lib/services/volume/v1/quotas_client.py
+++ b/tempest/lib/services/volume/v1/quotas_client.py
@@ -47,7 +47,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref-blockstorage-v1.html#updateQuota
+ https://developer.openstack.org/api-ref/block-storage/v2/#update-quotas
"""
put_body = jsonutils.dumps({'quota_set': kwargs})
resp, body = self.put('os-quota-sets/%s' % tenant_id, put_body)
diff --git a/tempest/lib/services/volume/v1/snapshots_client.py b/tempest/lib/services/volume/v1/snapshots_client.py
index 3433e68..51f7b9b 100644
--- a/tempest/lib/services/volume/v1/snapshots_client.py
+++ b/tempest/lib/services/volume/v1/snapshots_client.py
@@ -27,7 +27,8 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#list-snapshots-with-details-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-snapshots
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-snapshots-with-details
"""
url = 'snapshots'
if detail:
@@ -45,7 +46,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#show-snapshot-details-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#show-snapshot-details
"""
url = "snapshots/%s" % snapshot_id
resp, body = self.get(url)
@@ -58,7 +59,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#create-snapshot-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#create-snapshot
"""
post_body = json.dumps({'snapshot': kwargs})
resp, body = self.post('snapshots', post_body)
@@ -71,7 +72,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#delete-snapshot-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#delete-snapshot
"""
resp, body = self.delete("snapshots/%s" % snapshot_id)
self.expected_success(202, resp.status)
@@ -123,7 +124,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#update-snapshot-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#update-snapshot
"""
put_body = json.dumps({'snapshot': kwargs})
resp, body = self.put('snapshots/%s' % snapshot_id, put_body)
@@ -136,7 +137,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#show-snapshot-metadata-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#show-snapshot-metadata
"""
url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.get(url)
@@ -149,7 +150,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#update-snapshot-metadata-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#update-snapshot-metadata
"""
put_body = json.dumps(kwargs)
url = "snapshots/%s/metadata" % snapshot_id
diff --git a/tempest/lib/services/volume/v1/types_client.py b/tempest/lib/services/volume/v1/types_client.py
index 4ae9935..58a80b7 100644
--- a/tempest/lib/services/volume/v1/types_client.py
+++ b/tempest/lib/services/volume/v1/types_client.py
@@ -40,7 +40,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#list-volume-types-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-all-volume-types-for-v2
"""
url = 'types'
if params:
@@ -56,7 +56,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#show-volume-type-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#show-volume-type-details-for-v2
"""
url = "types/%s" % volume_type_id
resp, body = self.get(url)
@@ -69,7 +69,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#create-volume-type-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#create-volume-type-for-v2
"""
post_body = json.dumps({'volume_type': kwargs})
resp, body = self.post('types', post_body)
@@ -82,7 +82,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#delete-volume-type-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#delete-volume-type
"""
resp, body = self.delete("types/%s" % volume_type_id)
self.expected_success(202, resp.status)
@@ -137,7 +137,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#update-volume-type-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#update-volume-type
"""
put_body = json.dumps({'volume_type': kwargs})
resp, body = self.put('types/%s' % volume_type_id, put_body)
@@ -155,7 +155,7 @@
updated value.
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#update-extra-specs-for-a-volume-type-v1
+ https://developer.openstack.org/api-ref/block-storage/v2/#update-extra-specs-for-a-volume-type
"""
url = "types/%s/extra_specs/%s" % (volume_type_id, extra_spec_name)
put_body = json.dumps(extra_specs)
diff --git a/tempest/lib/services/volume/v1/volumes_client.py b/tempest/lib/services/volume/v1/volumes_client.py
index 7a25697..0e6ea9f 100644
--- a/tempest/lib/services/volume/v1/volumes_client.py
+++ b/tempest/lib/services/volume/v1/volumes_client.py
@@ -38,6 +38,11 @@
"""List all the volumes created.
Params can be a string (must be urlencoded) or a dictionary.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes-with-details
"""
url = 'volumes'
if detail:
@@ -63,7 +68,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#create-volume
+ https://developer.openstack.org/api-ref/block-storage/v2/#create-volume
"""
post_body = json.dumps({'volume': kwargs})
resp, body = self.post('volumes', post_body)
@@ -76,7 +81,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#update-volume
+ https://developer.openstack.org/api-ref/block-storage/v2/#update-volume
"""
put_body = json.dumps({'volume': kwargs})
resp, body = self.put('volumes/%s' % volume_id, put_body)
@@ -104,7 +109,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#attach-volume
+ https://developer.openstack.org/api-ref/block-storage/v2/#attach-volume-to-server
"""
post_body = json.dumps({'os-attach': kwargs})
url = 'volumes/%s/action' % (volume_id)
@@ -161,7 +166,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#extend-volume
+ https://developer.openstack.org/api-ref/block-storage/v2/#extend-volume-size
"""
post_body = json.dumps({'os-extend': kwargs})
url = 'volumes/%s/action' % (volume_id)
@@ -174,7 +179,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#reset-volume-status
+ https://developer.openstack.org/api-ref/block-storage/v2/#reset-volume-statuses
"""
post_body = json.dumps({'os-reset_status': kwargs})
resp, body = self.post('volumes/%s/action' % volume_id, post_body)
@@ -186,7 +191,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#create-volume-transfer
+ https://developer.openstack.org/api-ref/block-storage/v2/#create-volume-transfer
"""
post_body = json.dumps({'transfer': kwargs})
resp, body = self.post('os-volume-transfer', post_body)
@@ -207,7 +212,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#list-volume-transfers
+ https://developer.openstack.org/api-ref/block-storage/v2/#list-volume-transfers
"""
url = 'os-volume-transfer'
if params:
@@ -228,7 +233,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v1/#accept-volume-transfer
+ https://developer.openstack.org/api-ref/block-storage/v2/#accept-volume-transfer
"""
url = 'os-volume-transfer/%s/accept' % transfer_id
post_body = json.dumps({'accept': kwargs})
diff --git a/tempest/lib/services/volume/v2/volumes_client.py b/tempest/lib/services/volume/v2/volumes_client.py
index d13e449..da3f2b5 100644
--- a/tempest/lib/services/volume/v2/volumes_client.py
+++ b/tempest/lib/services/volume/v2/volumes_client.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from debtcollector import moves
-from debtcollector import removals
from oslo_serialization import jsonutils as json
import six
from six.moves.urllib import parse as urllib
@@ -22,43 +20,12 @@
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
from tempest.lib.services.volume import base_client
-from tempest.lib.services.volume.v2 import transfers_client
class VolumesClient(base_client.BaseClient):
"""Client class to send CRUD Volume V2 API requests"""
api_version = "v2"
- create_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.create_volume_transfer,
- 'VolumesClient.create_volume_transfer', __name__,
- message='Use create_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- show_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.show_volume_transfer,
- 'VolumesClient.show_volume_transfer', __name__,
- message='Use show_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- list_volume_transfers = moves.moved_function(
- transfers_client.TransfersClient.list_volume_transfers,
- 'VolumesClient.list_volume_transfers', __name__,
- message='Use list_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- delete_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.delete_volume_transfer,
- 'VolumesClient.delete_volume_transfer', __name__,
- message='Use delete_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
- accept_volume_transfer = moves.moved_function(
- transfers_client.TransfersClient.accept_volume_transfer,
- 'VolumesClient.accept_volume_transfer', __name__,
- message='Use accept_volume_transfer from new location.',
- version='Pike', removal_version='Queens')
-
def _prepare_params(self, params):
"""Prepares params for use in get or _ext_get methods.
@@ -372,34 +339,6 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- @removals.remove(message="use list_pools from tempest.lib.services."
- "volume.v2.scheduler_stats_client")
- def show_pools(self, detail=False):
- # List all the volumes pools (hosts)
- url = 'scheduler-stats/get_pools'
- if detail:
- url += '?detail=True'
-
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- @removals.remove(message="use show_backend_capabilities from tempest.lib."
- "services.volume.v2.capabilities_client")
- def show_backend_capabilities(self, host):
- """Shows capabilities for a storage back end.
-
- For a full list of available parameters, please refer to the official
- API reference:
- http://developer.openstack.org/api-ref/block-storage/v2/#show-back-end-capabilities
- """
- url = 'capabilities/%s' % host
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def unmanage_volume(self, volume_id):
"""Unmanage volume.
diff --git a/tempest/lib/services/volume/v3/group_snapshots_client.py b/tempest/lib/services/volume/v3/group_snapshots_client.py
index e644f02..6e53e3e 100644
--- a/tempest/lib/services/volume/v3/group_snapshots_client.py
+++ b/tempest/lib/services/volume/v3/group_snapshots_client.py
@@ -60,7 +60,7 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def list_group_snapshots(self, **params):
+ def list_group_snapshots(self, detail=False, **params):
"""Information for all the tenant's group snapshots.
For more information, please refer to the official API reference:
@@ -68,6 +68,8 @@
https://developer.openstack.org/api-ref/block-storage/v3/#list-group-snapshots-with-details
"""
url = "group_snapshots"
+ if detail:
+ url += "/detail"
if params:
url += '?%s' % urllib.urlencode(params)
resp, body = self.get(url)
@@ -75,6 +77,18 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
+ def reset_group_snapshot_status(self, group_snapshot_id, status_to_set):
+ """Resets group snapshot status.
+
+ For more information, please refer to the official API reference:
+ https://developer.openstack.org/api-ref/block-storage/v3/#reset-group-snapshot-status
+ """
+ post_body = json.dumps({'reset_status': {'status': status_to_set}})
+ resp, body = self.post('group_snapshots/%s/action' % group_snapshot_id,
+ post_body)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def is_resource_deleted(self, id):
try:
self.show_group_snapshot(id)
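
A hedged sketch of the two additions working together; the status value is
illustrative and the reset action is assumed to run with admin credentials::

    def reset_errored_group_snapshots(group_snapshots_client):
        snapshots = group_snapshots_client.list_group_snapshots(
            detail=True)['group_snapshots']
        for snapshot in snapshots:
            if snapshot['status'] == 'error':
                # Force the snapshot back into a known state.
                group_snapshots_client.reset_group_snapshot_status(
                    snapshot['id'], 'available')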
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index e5d5c69..06b4b59 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -89,16 +89,14 @@
# The create_[resource] functions only return body and discard the
# resp part which is not used in scenario tests
- def _create_port(self, network_id, client=None, namestart='port-quotatest',
- **kwargs):
+ def create_port(self, network_id, client=None, **kwargs):
if not client:
client = self.ports_client
- name = data_utils.rand_name(namestart)
+ name = data_utils.rand_name(self.__class__.__name__)
result = client.create_port(
name=name,
network_id=network_id,
**kwargs)
- self.assertIsNotNone(result, 'Unable to allocate port')
port = result['port']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_port, port['id'])
@@ -147,8 +145,7 @@
if vnic_type:
ports = []
- create_port_body = {'binding:vnic_type': vnic_type,
- 'namestart': 'port-smoke'}
+ create_port_body = {'binding:vnic_type': vnic_type}
if kwargs:
# Convert security group names to security group ids
# to pass to create_port
@@ -185,9 +182,9 @@
for net in networks:
net_id = net.get('uuid', net.get('id'))
if 'port' not in net:
- port = self._create_port(network_id=net_id,
- client=clients.ports_client,
- **create_port_body)
+ port = self.create_port(network_id=net_id,
+ client=clients.ports_client,
+ **create_port_body)
ports.append({'port': port['id']})
else:
ports.append({'port': net['port']})
@@ -271,10 +268,8 @@
if backend_name:
extra_specs = {"volume_backend_name": backend_name}
- body = client.create_volume_type(name=randomized_name,
- extra_specs=extra_specs)
- volume_type = body['volume_type']
- self.assertIn('id', volume_type)
+ volume_type = client.create_volume_type(
+ name=randomized_name, extra_specs=extra_specs)['volume_type']
self.addCleanup(client.delete_volume_type, volume_type['id'])
return volume_type
@@ -506,27 +501,6 @@
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], 'available')
- volume = self.volumes_client.show_volume(volume['id'])['volume']
- self.assertEqual('available', volume['status'])
-
- def rebuild_server(self, server_id, image=None,
- preserve_ephemeral=False, wait=True,
- rebuild_kwargs=None):
- if image is None:
- image = CONF.compute.image_ref
-
- rebuild_kwargs = rebuild_kwargs or {}
-
- LOG.debug("Rebuilding server (id: %s, image: %s, preserve eph: %s)",
- server_id, image, preserve_ephemeral)
- self.servers_client.rebuild_server(
- server_id=server_id, image_ref=image,
- preserve_ephemeral=preserve_ephemeral,
- **rebuild_kwargs)
- if wait:
- waiters.wait_for_server_status(self.servers_client,
- server_id, 'ACTIVE')
-
def ping_ip_address(self, ip_address, should_succeed=True,
ping_timeout=None, mtu=None):
timeout = ping_timeout or CONF.validation.ping_timeout
@@ -730,17 +704,14 @@
network['id'])
return network
- def _create_subnet(self, network, subnets_client=None,
- routers_client=None, namestart='subnet-smoke',
- **kwargs):
+ def create_subnet(self, network, subnets_client=None,
+ namestart='subnet-smoke', **kwargs):
"""Create a subnet for the given network
within the cidr block configured for tenant networks.
"""
if not subnets_client:
subnets_client = self.subnets_client
- if not routers_client:
- routers_client = self.routers_client
def cidr_in_use(cidr, tenant_id):
"""Check cidr existence
@@ -883,11 +854,11 @@
LOG.info("FloatingIP: {fp} is at status: {st}"
.format(fp=floating_ip, st=status))
- def _check_tenant_network_connectivity(self, server,
- username,
- private_key,
- should_connect=True,
- servers_for_debug=None):
+ def check_tenant_network_connectivity(self, server,
+ username,
+ private_key,
+ should_connect=True,
+ servers_for_debug=None):
if not CONF.network.project_networks_reachable:
msg = 'Tenant networks not configured to be reachable.'
LOG.info(msg)
@@ -907,16 +878,13 @@
self._log_net_info(e)
raise
- def _check_remote_connectivity(self, source, dest, should_succeed=True,
- nic=None):
+ def check_remote_connectivity(self, source, dest, should_succeed=True,
+ nic=None):
"""assert ping server via source ssh connection
- Note: This is an internal method. Use check_remote_connectivity
- instead.
-
:param source: RemoteClient: an ssh connection from which to ping
- :param dest: and IP to ping against
- :param should_succeed: boolean should ping succeed or not
+ :param dest: an IP to ping against
+ :param should_succeed: boolean: should ping succeed or not
:param nic: specific network interface to ping from
"""
def ping_remote():
@@ -928,28 +896,19 @@
return not should_succeed
return should_succeed
- return test_utils.call_until_true(ping_remote,
- CONF.validation.ping_timeout,
- 1)
+ result = test_utils.call_until_true(ping_remote,
+ CONF.validation.ping_timeout, 1)
+ if result:
+ return
- def check_remote_connectivity(self, source, dest, should_succeed=True,
- nic=None):
- """assert ping server via source ssh connection
-
- :param source: RemoteClient: an ssh connection from which to ping
- :param dest: and IP to ping against
- :param should_succeed: boolean should ping succeed or not
- :param nic: specific network interface to ping from
- """
- result = self._check_remote_connectivity(source, dest, should_succeed,
- nic)
source_host = source.ssh_client.host
if should_succeed:
msg = "Timed out waiting for %s to become reachable from %s" \
% (dest, source_host)
else:
msg = "%s is reachable from %s" % (dest, source_host)
- self.assertTrue(result, msg)
+ self._log_console_output()
+ self.fail(msg)
def _create_security_group(self, security_group_rules_client=None,
tenant_id=None,
@@ -1006,23 +965,6 @@
client.delete_security_group, secgroup['id'])
return secgroup
- def _default_security_group(self, client=None, tenant_id=None):
- """Get default secgroup for given tenant_id.
-
- :returns: default secgroup for given tenant
- """
- if client is None:
- client = self.security_groups_client
- if not tenant_id:
- tenant_id = client.tenant_id
- sgs = [
- sg for sg in list(client.list_security_groups().values())[0]
- if sg['tenant_id'] == tenant_id and sg['name'] == 'default'
- ]
- msg = "No default security group for tenant %s." % (tenant_id)
- self.assertNotEmpty(sgs, msg)
- return sgs[0]
-
def _create_security_group_rule(self, secgroup=None,
sec_group_rules_client=None,
tenant_id=None,
@@ -1051,8 +993,12 @@
if not tenant_id:
tenant_id = security_groups_client.tenant_id
if secgroup is None:
- secgroup = self._default_security_group(
- client=security_groups_client, tenant_id=tenant_id)
+ # Get default secgroup for tenant_id
+ default_secgroups = security_groups_client.list_security_groups(
+ name='default', tenant_id=tenant_id)['security_groups']
+ msg = "No default security group for tenant %s." % (tenant_id)
+ self.assertNotEmpty(default_secgroups, msg)
+ secgroup = default_secgroups[0]
ruleset = dict(security_group_id=secgroup['id'],
tenant_id=secgroup['tenant_id'])
@@ -1140,31 +1086,18 @@
body = client.show_router(router_id)
return body['router']
elif network_id:
- router = self._create_router(client, tenant_id)
- kwargs = {'external_gateway_info': dict(network_id=network_id)}
- router = client.update_router(router['id'], **kwargs)['router']
+ router = client.create_router(
+ name=data_utils.rand_name(self.__class__.__name__ + '-router'),
+ admin_state_up=True,
+ tenant_id=tenant_id,
+ external_gateway_info=dict(network_id=network_id))['router']
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ client.delete_router, router['id'])
return router
else:
raise Exception("Neither of 'public_router_id' or "
"'public_network_id' has been defined.")
- def _create_router(self, client=None, tenant_id=None,
- namestart='router-smoke'):
- if not client:
- client = self.routers_client
- if not tenant_id:
- tenant_id = client.tenant_id
- name = data_utils.rand_name(namestart)
- result = client.create_router(name=name,
- admin_state_up=True,
- tenant_id=tenant_id)
- router = result['router']
- self.assertEqual(router['name'], name)
- self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- client.delete_router,
- router['id'])
- return router
-
def create_networks(self, networks_client=None,
routers_client=None, subnets_client=None,
tenant_id=None, dns_nameservers=None,
@@ -1199,12 +1132,11 @@
router = self._get_router(client=routers_client,
tenant_id=tenant_id)
subnet_kwargs = dict(network=network,
- subnets_client=subnets_client,
- routers_client=routers_client)
+ subnets_client=subnets_client)
# use explicit check because empty list is a valid option
if dns_nameservers is not None:
subnet_kwargs['dns_nameservers'] = dns_nameservers
- subnet = self._create_subnet(**subnet_kwargs)
+ subnet = self.create_subnet(**subnet_kwargs)
if not routers_client:
routers_client = self.routers_client
router_id = router['id']
@@ -1294,7 +1226,7 @@
def create_container(self, container_name=None):
name = container_name or data_utils.rand_name(
'swift-scenario-container')
- self.container_client.create_container(name)
+ self.container_client.update_container(name)
# look for the container to assure it is created
self.list_and_check_container_objects(name)
LOG.debug('Container %s created', name)
@@ -1331,7 +1263,7 @@
present_obj = []
if not_present_obj is None:
not_present_obj = []
- _, object_list = self.container_client.list_container_contents(
+ _, object_list = self.container_client.list_container_objects(
container_name)
if present_obj:
for obj in present_obj:
@@ -1340,14 +1272,6 @@
for obj in not_present_obj:
self.assertNotIn(obj, object_list)
- def change_container_acl(self, container_name, acl):
- metadata_param = {'metadata_prefix': 'x-container-',
- 'metadata': {'read': acl}}
- self.container_client.update_container_metadata(container_name,
- **metadata_param)
- resp, _ = self.container_client.list_container_metadata(container_name)
- self.assertEqual(resp['x-container-read'], acl)
-
def download_and_verify(self, container_name, obj_name, expected_data):
_, obj = self.object_client.get_object(container_name, obj_name)
self.assertEqual(obj, expected_data)
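
With the underscore-prefixed helpers promoted to public names, a scenario test
can build its topology directly. A rough sketch; the test class and topology
details are illustrative::

    from tempest.scenario import manager

    class TestExampleTopology(manager.NetworkScenarioTest):

        def _build_topology(self):
            network = self._create_network()
            # create_subnet and create_port replace the old _create_subnet
            # and _create_port helpers.
            subnet = self.create_subnet(network=network)
            port = self.create_port(network_id=network['id'])
            return network, subnet, port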
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index 340c3c9..7c404ad 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -83,7 +83,7 @@
should_connect=True):
username = CONF.validation.image_ssh_user
private_key = keypair['private_key']
- self._check_tenant_network_connectivity(
+ self.check_tenant_network_connectivity(
server, username, private_key,
should_connect=should_connect,
servers_for_debug=[server])
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 0c3bf23..6332c6d 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -113,11 +113,16 @@
port_id = None
if boot_with_port:
# create a port on the network and boot with that
- port_id = self._create_port(self.network['id'])['id']
+ port_id = self.create_port(self.network['id'])['id']
self.ports.append({'port': port_id})
server = self._create_server(self.network, port_id)
- self._check_tenant_network_connectivity()
+ ssh_login = CONF.validation.image_ssh_user
+ for server in self.servers:
+ # call the common method in the parent class
+ self.check_tenant_network_connectivity(
+ server, ssh_login, self._get_server_key(server),
+ servers_for_debug=self.servers)
floating_ip = self.create_floating_ip(server)
self.floating_ip_tuple = Floating_IP_tuple(floating_ip, server)
@@ -170,15 +175,6 @@
def _get_server_key(self, server):
return self.keypairs[server['key_name']]['private_key']
- def _check_tenant_network_connectivity(self):
- ssh_login = CONF.validation.image_ssh_user
- for server in self.servers:
- # call the common method in the parent class
- super(TestNetworkBasicOps, self).\
- _check_tenant_network_connectivity(
- server, ssh_login, self._get_server_key(server),
- servers_for_debug=self.servers)
-
def check_public_network_connectivity(
self, should_connect=True, msg=None,
should_check_floating_ip_status=True, mtu=None):
@@ -231,10 +227,10 @@
def _create_new_network(self, create_gateway=False):
self.new_net = self._create_network()
if create_gateway:
- self.new_subnet = self._create_subnet(
+ self.new_subnet = self.create_subnet(
network=self.new_net)
else:
- self.new_subnet = self._create_subnet(
+ self.new_subnet = self.create_subnet(
network=self.new_net,
gateway_ip=None)
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index b687aa0..9f4e62b 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -12,8 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
-import functools
-
from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import test_utils
@@ -78,9 +76,9 @@
if dualnet:
network_v6 = self._create_network()
- sub4 = self._create_subnet(network=network,
- namestart='sub4',
- ip_version=4)
+ sub4 = self.create_subnet(network=network,
+ namestart='sub4',
+ ip_version=4)
router = self._get_router()
self.routers_client.add_router_interface(router['id'],
@@ -93,11 +91,11 @@
self.subnets_v6 = []
for _ in range(n_subnets6):
net6 = network_v6 if dualnet else network
- sub6 = self._create_subnet(network=net6,
- namestart='sub6',
- ip_version=6,
- ipv6_ra_mode=address6_mode,
- ipv6_address_mode=address6_mode)
+ sub6 = self.create_subnet(network=net6,
+ namestart='sub6',
+ ip_version=6,
+ ipv6_ra_mode=address6_mode,
+ ipv6_address_mode=address6_mode)
self.routers_client.add_router_interface(router['id'],
subnet_id=sub6['id'])
@@ -132,7 +130,7 @@
ssh = self.get_remote_client(
ip_address=fip['floating_ip_address'],
username=username, server=srv)
- return ssh, ips, srv["id"]
+ return ssh, ips, srv
def turn_nic6_on(self, ssh, sid, network_id):
"""Turns the IPv6 vNIC on
@@ -163,8 +161,8 @@
n_subnets6=n_subnets6,
dualnet=dualnet)
- sshv4_1, ips_from_api_1, sid1 = self.prepare_server(networks=net_list)
- sshv4_2, ips_from_api_2, sid2 = self.prepare_server(networks=net_list)
+ sshv4_1, ips_from_api_1, srv1 = self.prepare_server(networks=net_list)
+ sshv4_2, ips_from_api_2, srv2 = self.prepare_server(networks=net_list)
def guest_has_address(ssh, addr):
return addr in ssh.exec_command("ip address")
@@ -172,8 +170,8 @@
# Turn on 2nd NIC for Cirros when dualnet
if dualnet:
_, network_v6 = net_list
- self.turn_nic6_on(sshv4_1, sid1, network_v6['id'])
- self.turn_nic6_on(sshv4_2, sid2, network_v6['id'])
+ self.turn_nic6_on(sshv4_1, srv1['id'], network_v6['id'])
+ self.turn_nic6_on(sshv4_2, srv2['id'], network_v6['id'])
# get addresses assigned to vNIC as reported by 'ip address' utility
ips_from_ip_1 = sshv4_1.exec_command("ip address")
@@ -183,17 +181,19 @@
for i in range(n_subnets6):
# v6 should be configured since the image supports it
# It can take time for ipv6 automatic address to get assigned
- srv1_v6_addr_assigned = functools.partial(
- guest_has_address, sshv4_1, ips_from_api_1['6'][i])
-
- srv2_v6_addr_assigned = functools.partial(
- guest_has_address, sshv4_2, ips_from_api_2['6'][i])
-
- self.assertTrue(test_utils.call_until_true(srv1_v6_addr_assigned,
- CONF.validation.ping_timeout, 1))
-
- self.assertTrue(test_utils.call_until_true(srv2_v6_addr_assigned,
- CONF.validation.ping_timeout, 1))
+ for srv, ssh, ips in (
+ (srv1, sshv4_1, ips_from_api_1),
+ (srv2, sshv4_2, ips_from_api_2)):
+ ip = ips['6'][i]
+ result = test_utils.call_until_true(
+ guest_has_address,
+ CONF.validation.ping_timeout, 1, ssh, ip)
+ if not result:
+ self._log_console_output(servers=[srv])
+ self.fail(
+ 'Address %s not configured for instance %s, '
+ 'ip address output is\n%s' %
+ (ip, srv['id'], ssh.exec_command("ip address")))
self.check_remote_connectivity(sshv4_1, ips_from_api_2['4'])
self.check_remote_connectivity(sshv4_2, ips_from_api_1['4'])
diff --git a/tempest/scenario/test_object_storage_basic_ops.py b/tempest/scenario/test_object_storage_basic_ops.py
index da0b1e8..cbe321e 100644
--- a/tempest/scenario/test_object_storage_basic_ops.py
+++ b/tempest/scenario/test_object_storage_basic_ops.py
@@ -58,12 +58,18 @@
5. Delete the object and container
"""
container_name = self.create_container()
- obj_name, _ = self.upload_object_to_container(container_name)
+ obj_name, obj_data = self.upload_object_to_container(container_name)
obj_url = '%s/%s/%s' % (self.object_client.base_url,
container_name, obj_name)
resp, _ = self.object_client.raw_request(obj_url, 'GET')
self.assertEqual(resp.status, 401)
-
- self.change_container_acl(container_name, '.r:*')
- resp, _ = self.object_client.raw_request(obj_url, 'GET')
+ metadata_param = {'X-Container-Read': '.r:*'}
+ self.container_client.create_update_or_delete_container_metadata(
+ container_name, create_update_metadata=metadata_param,
+ create_update_metadata_prefix='')
+ resp, _ = self.container_client.list_container_metadata(container_name)
+ self.assertEqual(metadata_param['X-Container-Read'],
+ resp['x-container-read'])
+ resp, data = self.object_client.raw_request(obj_url, 'GET')
self.assertEqual(resp.status, 200)
+ self.assertEqual(obj_data, data)
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index d4f29ad..89b9fdd 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -42,28 +42,6 @@
super(TestServerAdvancedOps, cls).setup_credentials()
@decorators.attr(type='slow')
- @decorators.idempotent_id('e6c28180-7454-4b59-b188-0257af08a63b')
- @testtools.skipUnless(CONF.compute_feature_enabled.resize,
- 'Resize is not available.')
- @utils.services('compute', 'volume')
- def test_resize_volume_backed_server_confirm(self):
- # We create an instance for use in this test
- instance = self.create_server(volume_backed=True)
- instance_id = instance['id']
- resize_flavor = CONF.compute.flavor_ref_alt
- LOG.debug("Resizing instance %s from flavor %s to flavor %s",
- instance['id'], instance['flavor']['id'], resize_flavor)
- self.servers_client.resize_server(instance_id, resize_flavor)
- waiters.wait_for_server_status(self.servers_client, instance_id,
- 'VERIFY_RESIZE')
-
- LOG.debug("Confirming resize of instance %s", instance_id)
- self.servers_client.confirm_resize_server(instance_id)
-
- waiters.wait_for_server_status(self.servers_client, instance_id,
- 'ACTIVE')
-
- @decorators.attr(type='slow')
@decorators.idempotent_id('949da7d5-72c8-4808-8802-e3d70df98e2c')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
'Suspend is not available.')
diff --git a/tempest/services/object_storage/__init__.py b/tempest/services/object_storage/__init__.py
deleted file mode 100644
index 771ed8f..0000000
--- a/tempest/services/object_storage/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-from tempest.lib.services.object_storage.account_client import AccountClient
-from tempest.lib.services.object_storage.bulk_middleware_client import \
- BulkMiddlewareClient
-from tempest.lib.services.object_storage.capabilities_client import \
- CapabilitiesClient
-from tempest.services.object_storage.container_client import ContainerClient
-from tempest.services.object_storage.object_client import ObjectClient
-
-__all__ = ['AccountClient', 'BulkMiddlewareClient', 'CapabilitiesClient',
- 'ContainerClient', 'ObjectClient']
diff --git a/tempest/services/object_storage/container_client.py b/tempest/services/object_storage/container_client.py
deleted file mode 100644
index afedd36..0000000
--- a/tempest/services/object_storage/container_client.py
+++ /dev/null
@@ -1,150 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from xml.etree import ElementTree as etree
-
-from oslo_serialization import jsonutils as json
-from six.moves.urllib import parse as urllib
-
-from tempest.lib.common import rest_client
-
-
-class ContainerClient(rest_client.RestClient):
-
- def create_container(
- self, container_name,
- metadata=None,
- remove_metadata=None,
- metadata_prefix='X-Container-Meta-',
- remove_metadata_prefix='X-Remove-Container-Meta-'):
- """Creates a container
-
- with optional metadata passed in as a dictionary
- """
- url = str(container_name)
- headers = {}
-
- if metadata is not None:
- for key in metadata:
- headers[metadata_prefix + key] = metadata[key]
- if remove_metadata is not None:
- for key in remove_metadata:
- headers[remove_metadata_prefix + key] = remove_metadata[key]
-
- resp, body = self.put(url, body=None, headers=headers)
- self.expected_success([201, 202], resp.status)
- return resp, body
-
- def delete_container(self, container_name):
- """Deletes the container (if it's empty)."""
- url = str(container_name)
- resp, body = self.delete(url)
- self.expected_success(204, resp.status)
- return resp, body
-
- def update_container_metadata(
- self, container_name,
- metadata=None,
- remove_metadata=None,
- metadata_prefix='X-Container-Meta-',
- remove_metadata_prefix='X-Remove-Container-Meta-'):
- """Updates arbitrary metadata on container."""
- url = str(container_name)
- headers = {}
-
- if metadata is not None:
- for key in metadata:
- headers[metadata_prefix + key] = metadata[key]
- if remove_metadata is not None:
- for key in remove_metadata:
- headers[remove_metadata_prefix + key] = remove_metadata[key]
-
- resp, body = self.post(url, body=None, headers=headers)
- self.expected_success(204, resp.status)
- return resp, body
-
- def delete_container_metadata(self, container_name, metadata,
- metadata_prefix='X-Remove-Container-Meta-'):
- """Deletes arbitrary metadata on container."""
- url = str(container_name)
- headers = {}
-
- if metadata is not None:
- for item in metadata:
- headers[metadata_prefix + item] = metadata[item]
-
- resp, body = self.post(url, body=None, headers=headers)
- self.expected_success(204, resp.status)
- return resp, body
-
- def list_container_metadata(self, container_name):
- """Retrieves container metadata headers"""
- url = str(container_name)
- resp, body = self.head(url)
- self.expected_success(204, resp.status)
- return resp, body
-
- def list_container_contents(self, container, params=None):
- """List the objects in a container, given the container name
-
- Returns the container object listing as a plain text list, or as
- xml or json if that option is specified via the 'format' argument.
-
- Optional Arguments:
- limit = integer
- For an integer value n, limits the number of results to at most
- n values.
-
- marker = 'string'
- Given a string value x, return object names greater in value
- than the specified marker.
-
- prefix = 'string'
- For a string value x, causes the results to be limited to names
- beginning with the substring x.
-
- format = 'json' or 'xml'
- Specify either json or xml to return the respective serialized
- response.
- If json, returns a list of json objects
- if xml, returns a string of xml
-
- path = 'string'
- For a string value x, return the object names nested in the
- pseudo path (assuming preconditions are met - see below).
-
- delimiter = 'character'
- For a character c, return all the object names nested in the
- container (without the need for the directory marker objects).
- """
-
- url = str(container)
- if params:
- url += '?'
- url += '&%s' % urllib.urlencode(params)
-
- resp, body = self.get(url, headers={})
- if params and params.get('format') == 'json':
- body = json.loads(body)
- elif params and params.get('format') == 'xml':
- body = etree.fromstring(body)
- # Else the content-type is plain/text
- else:
- body = [
- obj_name for obj_name in body.decode().split('\n') if obj_name
- ]
-
- self.expected_success([200, 204], resp.status)
- return resp, body
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index 1206e3f..9c18052 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -76,7 +76,7 @@
conf.register_opt(my_config.service_option,
group='service_available')
conf.register_group(my_config.my_service_group)
- conf.register_opts(my_config.MyService +
+ conf.register_opts(my_config.MyServiceGroup,
my_config.my_service_group)
conf.register_group(my_config.my_service_feature_group)
diff --git a/tempest/tests/cmd/test_account_generator.py b/tempest/tests/cmd/test_account_generator.py
index f907bd0..8bf4c5b 100644
--- a/tempest/tests/cmd/test_account_generator.py
+++ b/tempest/tests/cmd/test_account_generator.py
@@ -44,6 +44,7 @@
self.patchobject(config, 'TempestConfigPrivate',
fake_config.FakePrivate)
self.opts = FakeOpts(version=identity_version)
+ self.patch('oslo_log.log.setup', autospec=True)
def mock_resource_creation(self):
fake_resource = dict(id='id', name='name')
diff --git a/tempest/tests/cmd/test_run.py b/tempest/tests/cmd/test_run.py
index 6e1250f..0485e14 100644
--- a/tempest/tests/cmd/test_run.py
+++ b/tempest/tests/cmd/test_run.py
@@ -40,6 +40,7 @@
setattr(args, "subunit", True)
setattr(args, "parallel", False)
setattr(args, "concurrency", 10)
+ setattr(args, "load_list", '')
options = self.run_cmd._build_options(args)
self.assertEqual(['--subunit',
'--concurrency=10'],
diff --git a/tempest/tests/cmd/test_verify_tempest_config.py b/tempest/tests/cmd/test_verify_tempest_config.py
index 810f9e5..8641b63 100644
--- a/tempest/tests/cmd/test_verify_tempest_config.py
+++ b/tempest/tests/cmd/test_verify_tempest_config.py
@@ -176,22 +176,6 @@
False, True)
@mock.patch('tempest.lib.common.http.ClosingHttp.request')
- def test_verify_keystone_api_versions_no_v2(self, mock_request):
- self.useFixture(fixtures.MockPatchObject(
- verify_tempest_config, '_get_unversioned_endpoint',
- return_value='http://fake_endpoint:5000'))
- fake_resp = {'versions': {'values': [{'id': 'v3.0'}]}}
- fake_resp = json.dumps(fake_resp)
- mock_request.return_value = (None, fake_resp)
- fake_os = mock.MagicMock()
- with mock.patch.object(verify_tempest_config,
- 'print_and_or_update') as print_mock:
- verify_tempest_config.verify_keystone_api_versions(fake_os, True)
- print_mock.assert_called_once_with('api_v2',
- 'identity-feature-enabled',
- False, True)
-
- @mock.patch('tempest.lib.common.http.ClosingHttp.request')
def test_verify_cinder_api_versions_no_v3(self, mock_request):
self.useFixture(fixtures.MockPatchObject(
verify_tempest_config, '_get_unversioned_endpoint',
diff --git a/tempest/tests/common/test_credentials_factory.py b/tempest/tests/common/test_credentials_factory.py
index 020818e..7cf87f8 100644
--- a/tempest/tests/common/test_credentials_factory.py
+++ b/tempest/tests/common/test_credentials_factory.py
@@ -183,7 +183,7 @@
# Build the expected params
expected_params = dict(
[(field, value) for _, field, value in all_params])
- expected_params.update(cf.DEFAULT_PARAMS)
+ expected_params.update(config.service_client_config())
admin_creds = cf.get_configured_admin_credentials()
mock_get_credentials.assert_called_once_with(
fill_in=True, identity_version='v3', **expected_params)
@@ -205,7 +205,7 @@
# Build the expected params
expected_params = dict(
[(field, value) for _, field, value in all_params])
- expected_params.update(cf.DEFAULT_PARAMS)
+ expected_params.update(config.service_client_config())
admin_creds = cf.get_configured_admin_credentials(
fill_in=False, identity_version='v3')
mock_get_credentials.assert_called_once_with(
@@ -232,7 +232,7 @@
cfg.CONF.set_default('uri', expected_uri, 'identity')
params = {'foo': 'bar'}
expected_params = params.copy()
- expected_params.update(cf.DEFAULT_PARAMS)
+ expected_params.update(config.service_client_config())
result = cf.get_credentials(identity_version='v2', **params)
self.assertEqual(expected_result, result)
mock_auth_get_credentials.assert_called_once_with(
@@ -251,7 +251,7 @@
params = {'foo': 'bar'}
expected_params = params.copy()
expected_params['domain_name'] = expected_domain
- expected_params.update(cf.DEFAULT_PARAMS)
+ expected_params.update(config.service_client_config())
result = cf.get_credentials(fill_in=False, identity_version='v3',
**params)
self.assertEqual(expected_result, result)
@@ -270,7 +270,7 @@
expected_domain, 'auth')
params = {'foo': 'bar', 'user_domain_name': expected_domain}
expected_params = params.copy()
- expected_params.update(cf.DEFAULT_PARAMS)
+ expected_params.update(config.service_client_config())
result = cf.get_credentials(fill_in=False, identity_version='v3',
**params)
self.assertEqual(expected_result, result)
diff --git a/tempest/tests/lib/cli/test_execute.py b/tempest/tests/lib/cli/test_execute.py
index 0130454..c276386 100644
--- a/tempest/tests/lib/cli/test_execute.py
+++ b/tempest/tests/lib/cli/test_execute.py
@@ -91,3 +91,37 @@
self.assertEqual(mock_execute.call_count, 1)
self.assertEqual(mock_execute.call_args[1],
{'prefix': 'env LAC_ALL=C'})
+
+ @mock.patch.object(cli_base, 'execute')
+ def test_execute_with_domain_name(self, mock_execute):
+ cli = cli_base.CLIClient(
+ user_domain_name='default',
+ project_domain_name='default'
+ )
+ cli.glance('action')
+ self.assertEqual(mock_execute.call_count, 1)
+ self.assertIn('--os-user-domain-name default',
+ mock_execute.call_args[0][2])
+ self.assertIn('--os-project-domain-name default',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-user-domain-id',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-project-domain-id',
+ mock_execute.call_args[0][2])
+
+ @mock.patch.object(cli_base, 'execute')
+ def test_execute_with_domain_id(self, mock_execute):
+ cli = cli_base.CLIClient(
+ user_domain_id='default',
+ project_domain_id='default'
+ )
+ cli.glance('action')
+ self.assertEqual(mock_execute.call_count, 1)
+ self.assertIn('--os-user-domain-id default',
+ mock_execute.call_args[0][2])
+ self.assertIn('--os-project-domain-id default',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-user-domain-name',
+ mock_execute.call_args[0][2])
+ self.assertNotIn('--os-project-domain-name',
+ mock_execute.call_args[0][2])
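
For reference, the behaviour exercised above boils down to the following
usage; the credential values and endpoint URI are illustrative. Only the flags
matching the supplied domain keywords end up on the command line::

    from tempest.lib.cli import base as cli_base

    # Passing *_domain_name produces --os-user-domain-name and
    # --os-project-domain-name; passing *_domain_id instead would produce
    # the corresponding --os-*-domain-id flags.
    cli = cli_base.CLIClient(username='demo', password='secret',
                             tenant_name='demo',
                             uri='http://keystone.example:5000/v3',
                             user_domain_name='default',
                             project_domain_name='default')
    listing = cli.glance('image-list')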
diff --git a/tempest/tests/lib/common/utils/test_data_utils.py b/tempest/tests/lib/common/utils/test_data_utils.py
index 8bdf70e..b8385b2 100644
--- a/tempest/tests/lib/common/utils/test_data_utils.py
+++ b/tempest/tests/lib/common/utils/test_data_utils.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import netaddr
-
from tempest.lib.common.utils import data_utils
from tempest.tests import base
@@ -81,7 +79,11 @@
self.assertEqual(len(actual), 3)
self.assertRegex(actual, "[A-Za-z0-9~!@#%^&*_=+]{3}")
actual2 = data_utils.rand_password(2)
- self.assertNotEqual(actual, actual2)
+        # NOTE(masayukig): Originally, we checked that actual and actual2
+        # differ from each other. But two 3-character passwords can be
+        # identical in a very rare case, so we just check the length of
+        # actual2 here as well.
+ self.assertEqual(len(actual2), 3)
def test_rand_url(self):
actual = data_utils.rand_url()
@@ -137,43 +139,6 @@
actual = data_utils.random_bytes(size=2048)
self.assertEqual(2048, len(actual))
- def test_get_ipv6_addr_by_EUI64(self):
- actual = data_utils.get_ipv6_addr_by_EUI64('2001:db8::',
- '00:16:3e:33:44:55')
- self.assertIsInstance(actual, netaddr.IPAddress)
- self.assertEqual(actual,
- netaddr.IPAddress('2001:db8::216:3eff:fe33:4455'))
-
- def test_get_ipv6_addr_by_EUI64_with_IPv4_prefix(self):
- ipv4_prefix = '10.0.8'
- mac = '00:16:3e:33:44:55'
- self.assertRaises(TypeError, data_utils.get_ipv6_addr_by_EUI64,
- ipv4_prefix, mac)
-
- def test_get_ipv6_addr_by_EUI64_bad_cidr_type(self):
- bad_cidr = 123
- mac = '00:16:3e:33:44:55'
- self.assertRaises(TypeError, data_utils.get_ipv6_addr_by_EUI64,
- bad_cidr, mac)
-
- def test_get_ipv6_addr_by_EUI64_bad_cidr_value(self):
- bad_cidr = 'bb'
- mac = '00:16:3e:33:44:55'
- self.assertRaises(TypeError, data_utils.get_ipv6_addr_by_EUI64,
- bad_cidr, mac)
-
- def test_get_ipv6_addr_by_EUI64_bad_mac_value(self):
- cidr = '2001:db8::'
- bad_mac = '00:16:3e:33:44:5Z'
- self.assertRaises(TypeError, data_utils.get_ipv6_addr_by_EUI64,
- cidr, bad_mac)
-
- def test_get_ipv6_addr_by_EUI64_bad_mac_type(self):
- cidr = '2001:db8::'
- bad_mac = 99999999999999999999
- self.assertRaises(TypeError, data_utils.get_ipv6_addr_by_EUI64,
- cidr, bad_mac)
-
def test_chunkify(self):
data = "aaa"
chunks = data_utils.chunkify(data, 2)
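
The relaxed assertion above is backed by a quick collision estimate: with the 73-symbol alphabet implied by the regex [A-Za-z0-9~!@#%^&*_=+], two independent 3-character passwords can occasionally coincide. A back-of-the-envelope check:

    # Letters (52), digits (10) and the 11 punctuation characters in the
    # regex give a 73-symbol alphabet; two independent 3-character strings
    # collide with probability 1/73**3, i.e. roughly one in 390,000.
    alphabet_size = 26 + 26 + 10 + 11
    collision_probability = 1.0 / alphabet_size ** 3
    print('%.2e' % collision_probability)   # ~2.57e-06
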
diff --git a/tempest/tests/lib/common/utils/test_test_utils.py b/tempest/tests/lib/common/utils/test_test_utils.py
index 29c5684..f638ba6 100644
--- a/tempest/tests/lib/common/utils/test_test_utils.py
+++ b/tempest/tests/lib/common/utils/test_test_utils.py
@@ -81,11 +81,13 @@
@mock.patch('time.sleep')
@mock.patch('time.time')
def test_call_until_true_when_f_never_returns_true(self, m_time, m_sleep):
+ def set_value(bool_value):
+ return bool_value
timeout = 42 # The value doesn't matter as we mock time.time()
sleep = 60 # The value doesn't matter as we mock time.sleep()
m_time.side_effect = utils.generate_timeout_series(timeout)
self.assertEqual(
- False, test_utils.call_until_true(lambda: False, timeout, sleep)
+ False, test_utils.call_until_true(set_value, timeout, sleep, False)
)
m_sleep.call_args_list = [mock.call(sleep)] * 2
m_time.call_args_list = [mock.call()] * 2
@@ -93,11 +95,30 @@
@mock.patch('time.sleep')
@mock.patch('time.time')
def test_call_until_true_when_f_returns_true(self, m_time, m_sleep):
+ def set_value(bool_value=False):
+ return bool_value
timeout = 42 # The value doesn't matter as we mock time.time()
sleep = 60 # The value doesn't matter as we mock time.sleep()
m_time.return_value = 0
self.assertEqual(
- True, test_utils.call_until_true(lambda: True, timeout, sleep)
+ True, test_utils.call_until_true(set_value, timeout, sleep,
+ bool_value=True)
)
self.assertEqual(0, m_sleep.call_count)
- self.assertEqual(1, m_time.call_count)
+ # When logging the time spent, we need to read the current time once more.
+ self.assertEqual(2, m_time.call_count)
+
+ @mock.patch('time.sleep')
+ @mock.patch('time.time')
+ def test_call_until_true_when_f_returns_true_no_param(
+ self, m_time, m_sleep):
+ def set_value(bool_value=False):
+ return bool_value
+ timeout = 42 # The value doesn't matter as we mock time.time()
+ sleep = 60 # The value doesn't matter as we mock time.sleep()
+ m_time.side_effect = utils.generate_timeout_series(timeout)
+ self.assertEqual(
+ False, test_utils.call_until_true(set_value, timeout, sleep)
+ )
+ m_sleep.call_args_list = [mock.call(sleep)] * 2
+ m_time.call_args_list = [mock.call()] * 2
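
The rewritten tests pass extra positional and keyword arguments through call_until_true and expect a second time.time() read when the predicate succeeds. A minimal sketch of a helper with that shape, assuming rather than quoting the tempest.lib implementation:

    import time

    # Sketch only: forwards *args/**kwargs to the predicate and reads
    # time.time() a second time to report how long the wait took, which is
    # why the success-path test above expects exactly two time.time() calls.
    def call_until_true(func, duration, sleep_for, *args, **kwargs):
        start = time.time()
        timeout = start + duration
        now = start
        while now < timeout:
            if func(*args, **kwargs):
                # second time.time() call, used to report the elapsed time
                print('succeeded after %.2fs' % (time.time() - start))
                return True
            time.sleep(sleep_for)
            now = time.time()
        return False

    print(call_until_true(lambda flag: flag, 1, 0.1, True))
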
diff --git a/tempest/tests/lib/services/identity/v3/test_identity_client.py b/tempest/tests/lib/services/identity/v3/test_identity_client.py
index 6572947..3739fe6 100644
--- a/tempest/tests/lib/services/identity/v3/test_identity_client.py
+++ b/tempest/tests/lib/services/identity/v3/test_identity_client.py
@@ -60,6 +60,34 @@
}
}
+ FAKE_AUTH_DOMAINS = {
+ "domains": [
+ {
+ "description": "my domain description",
+ "enabled": True,
+ "id": "1789d1",
+ "links": {
+ "self": "https://example.com/identity/v3/domains/1789d1"
+ },
+ "name": "my domain"
+ },
+ {
+ "description": "description of my other domain",
+ "enabled": True,
+ "id": "43e8da",
+ "links": {
+ "self": "https://example.com/identity/v3/domains/43e8da"
+ },
+ "name": "another domain"
+ }
+ ],
+ "links": {
+ "self": "https://example.com/identity/v3/auth/domains",
+ "previous": None,
+ "next": None
+ }
+ }
+
def setUp(self):
super(TestIdentityClient, self).setUp()
fake_auth = fake_auth_provider.FakeAuthProvider()
@@ -89,6 +117,13 @@
self.FAKE_AUTH_PROJECTS,
bytes_body)
+ def _test_list_auth_domains(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_auth_domains,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_AUTH_DOMAINS,
+ bytes_body)
+
def test_show_api_description_with_str_body(self):
self._test_show_api_description()
@@ -122,3 +157,9 @@
def test_list_auth_projects_with_bytes_body(self):
self._test_list_auth_projects(bytes_body=True)
+
+ def test_list_auth_domains_with_str_body(self):
+ self._test_list_auth_domains()
+
+ def test_list_auth_domains_with_bytes_body(self):
+ self._test_list_auth_domains(bytes_body=True)
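
The new cases drive a list_auth_domains call against the canned FAKE_AUTH_DOMAINS payload. A hypothetical sketch of the kind of client method being exercised, with the transport stubbed out; class and method shapes here are assumptions, not tempest's RestClient API:

    import json

    # Stand-in for the mocked rest_client.RestClient.get call in the test.
    class FakeTransport(object):
        def get(self, url):
            return 200, json.dumps({'domains': [], 'links': {}})

    class IdentityV3Sketch(object):
        def __init__(self, transport):
            self.transport = transport

        def list_auth_domains(self):
            # GET /auth/domains, check the status, decode the JSON body
            status, body = self.transport.get('auth/domains')
            assert status == 200
            return json.loads(body)

    print(IdentityV3Sketch(FakeTransport()).list_auth_domains())
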
diff --git a/tempest/tests/lib/services/object_storage/test_object_client.py b/tempest/tests/lib/services/object_storage/test_object_client.py
new file mode 100644
index 0000000..a16d1d7
--- /dev/null
+++ b/tempest/tests/lib/services/object_storage/test_object_client.py
@@ -0,0 +1,108 @@
+# Copyright 2016 IBM Corp.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+import mock
+
+from tempest.lib import exceptions
+from tempest.lib.services.object_storage import object_client
+from tempest.tests import base
+from tempest.tests.lib import fake_auth_provider
+
+
+class TestObjectClient(base.TestCase):
+
+ def setUp(self):
+ super(TestObjectClient, self).setUp()
+ self.fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.url = self.fake_auth.base_url(None)
+ self.object_client = object_client.ObjectClient(self.fake_auth,
+ 'swift', 'region1')
+
+ @mock.patch.object(object_client, '_create_connection')
+ def test_create_object_continue_no_data(self, mock_poc):
+ self._validate_create_object_continue(None, mock_poc)
+
+ @mock.patch.object(object_client, '_create_connection')
+ def test_create_object_continue_with_data(self, mock_poc):
+ self._validate_create_object_continue('hello', mock_poc)
+
+ @mock.patch.object(object_client, '_create_connection')
+ def test_create_continue_with_no_continue_received(self, mock_poc):
+ self._validate_create_object_continue('hello', mock_poc,
+ initial_status=201)
+
+ def _validate_create_object_continue(self, req_data,
+ mock_poc, initial_status=100):
+
+ expected_hdrs = {
+ 'X-Auth-Token': self.fake_auth.get_token(),
+ 'content-length': 0 if req_data is None else len(req_data),
+ 'Expect': '100-continue'}
+
+ # Setup the Mocks prior to invoking the object creation
+ mock_resp_cls = mock.Mock()
+ mock_resp_cls._read_status.return_value = ("1", initial_status, "OK")
+
+ mock_poc.return_value.response_class.return_value = mock_resp_cls
+
+ # This is the final expected return value
+ mock_poc.return_value.getresponse.return_value.status = 201
+ mock_poc.return_value.getresponse.return_value.reason = 'OK'
+
+ # Call method to PUT object using expect:100-continue
+ cnt = "container1"
+ obj = "object1"
+ path = "/%s/%s" % (cnt, obj)
+
+ # If the expected initial status is not 100, then an exception
+ # should be thrown and the connection closed
+ if initial_status == 100:
+ status, reason = \
+ self.object_client.create_object_continue(cnt, obj, req_data)
+ else:
+ self.assertRaises(exceptions.UnexpectedResponseCode,
+ self.object_client.create_object_continue, cnt,
+ obj, req_data)
+ mock_poc.return_value.close.assert_called_once_with()
+
+ # Verify that putrequest is called 1 time with the appropriate values
+ mock_poc.return_value.putrequest.assert_called_once_with('PUT', path)
+
+ # Verify that headers were written, including "Expect:100-continue"
+ calls = []
+
+ for header, value in expected_hdrs.items():
+ calls.append(mock.call(header, value))
+
+ mock_poc.return_value.putheader.assert_has_calls(calls, False)
+ mock_poc.return_value.endheaders.assert_called_once_with()
+
+ # The following steps are only taken if the initial status is 100
+ if initial_status == 100:
+ # Verify that the method returned what it was supposed to
+ self.assertEqual(status, 201)
+
+ # Verify that _safe_read was called once to remove the CRLF
+ # after the 100 response
+ mock_rc = mock_poc.return_value.response_class.return_value
+ mock_rc._safe_read.assert_called_once_with(2)
+
+ # Verify the actual data was written via send
+ mock_poc.return_value.send.assert_called_once_with(req_data)
+
+ # Verify that the getresponse method was called to receive
+ # the final response
+ mock_poc.return_value.getresponse.assert_called_once_with()
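
For orientation, the handshake this new test simulates with mocks is HTTP's Expect: 100-continue: headers go out first, and the body is only sent once the server answers with an interim 100 status. A rough standard-library sketch (host, path and token are placeholders, and the real client inspects the interim status line through lower-level hooks that http.client does not expose directly):

    import http.client

    def put_with_expect_continue(host, path, data, token):
        # data is expected to be bytes
        conn = http.client.HTTPConnection(host)
        conn.putrequest('PUT', path)
        conn.putheader('X-Auth-Token', token)
        conn.putheader('content-length', len(data))
        conn.putheader('Expect', '100-continue')
        conn.endheaders()
        # The real client waits for "100 Continue" before sending the body;
        # with plain http.client the simplest approximation is to send and
        # then read the final response.
        conn.send(data)
        resp = conn.getresponse()
        return resp.status, resp.reason

    # Example (placeholder endpoint, not executed here):
    # put_with_expect_continue('swift.example.com', '/v1/AUTH_t/c/o',
    #                          b'hello', 'token')
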
diff --git a/tempest/tests/lib/services/registry_fixture.py b/tempest/tests/lib/services/registry_fixture.py
index 8484209..1da2112 100644
--- a/tempest/tests/lib/services/registry_fixture.py
+++ b/tempest/tests/lib/services/registry_fixture.py
@@ -38,7 +38,7 @@
"""Initialise the registry fixture"""
self.services = set(['compute', 'identity.v2', 'identity.v3',
'image.v1', 'image.v2', 'network', 'volume.v1',
- 'volume.v2', 'volume.v3'])
+ 'volume.v2', 'volume.v3', 'object-storage'])
def _setUp(self):
# Cleanup the registry
@@ -50,7 +50,7 @@
for sc in self.services:
sc_module = service_clients[sc]
sc_unversioned = sc.split('.')[0]
- sc_name = sc.replace('.', '_')
+ sc_name = sc.replace('.', '_').replace('-', '_')
# Pass the bare minimum params to satisfy the clients interface
service_client_data = dict(
name=sc_name, service_version=sc, service=sc_unversioned,
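
The fixture change adds 'object-storage' to the registry and extends the name mangling so hyphens, like version dots, become underscores. A tiny demonstration of the resulting attribute-friendly names:

    # Both the version separator '.' and the hyphen in 'object-storage'
    # map to '_', so every registered name is a valid Python identifier.
    services = ['compute', 'identity.v3', 'volume.v2', 'object-storage']
    for sc in services:
        sc_name = sc.replace('.', '_').replace('-', '_')
        sc_unversioned = sc.split('.')[0]
        print('%s -> %s (service: %s)' % (sc, sc_name, sc_unversioned))
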
diff --git a/tempest/tests/lib/services/test_clients.py b/tempest/tests/lib/services/test_clients.py
index 6d0f27a..43fd88f 100644
--- a/tempest/tests/lib/services/test_clients.py
+++ b/tempest/tests/lib/services/test_clients.py
@@ -189,9 +189,7 @@
def setUp(self):
super(TestServiceClients, self).setUp()
self.useFixture(fixtures.MockPatch(
- 'tempest.lib.services.clients.tempest_modules', return_value={}))
- self.useFixture(fixtures.MockPatch(
- 'tempest.lib.services.clients._tempest_internal_modules',
+ 'tempest.lib.services.clients.tempest_modules',
return_value=set(['fake_service1'])))
def test___init___creds_v2_uri(self):
@@ -416,6 +414,7 @@
_manager = self._get_manager()
duplicate_service = 'fake_service1'
expected_error = '.*' + duplicate_service
+ _manager._registered_services = [duplicate_service]
with testtools.ExpectedException(
exceptions.ServiceClientRegistrationException, expected_error):
_manager.register_service_client_module(
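
The added line seeds _manager._registered_services so the duplicate check fires. A hypothetical sketch of that duplicate-registration guard (class and attribute names are illustrative, not tempest's):

    # Registration fails as soon as the service name is already present in
    # the internal set of registered services.
    class ServiceRegistrySketch(object):
        def __init__(self):
            self._registered_services = set()

        def register_service_client_module(self, name):
            if name in self._registered_services:
                raise ValueError('service %s is already registered' % name)
            self._registered_services.add(name)

    registry = ServiceRegistrySketch()
    registry.register_service_client_module('fake_service1')
    try:
        registry.register_service_client_module('fake_service1')
    except ValueError as exc:
        print(exc)
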
diff --git a/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py b/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py
index 5ac5c08..c2784b2 100644
--- a/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py
+++ b/tempest/tests/lib/services/volume/v3/test_group_snapshots_client.py
@@ -93,7 +93,8 @@
bytes_body,
group_snapshot_id="3fbbcccf-d058-4502-8844-6feeffdf4cb5")
- def _test_list_group_snapshots(self, bytes_body=False, detail=False):
+ def _test_list_group_snapshots(self, detail=False, bytes_body=False,
+ mock_args='group_snapshots', **params):
resp_body = []
if detail:
resp_body = self.FAKE_LIST_GROUP_SNAPSHOTS
@@ -111,8 +112,10 @@
self.client.list_group_snapshots,
'tempest.lib.common.rest_client.RestClient.get',
resp_body,
- bytes_body,
- detail=detail)
+ to_utf=bytes_body,
+ mock_args=[mock_args],
+ detail=detail,
+ **params)
def test_create_group_snapshot_with_str_body(self):
self._test_create_group_snapshot()
@@ -132,6 +135,25 @@
def test_list_group_snapshots_with_bytes_body(self):
self._test_list_group_snapshots(bytes_body=True)
+ def test_list_group_snapshots_with_detail_with_str_body(self):
+ mock_args = "group_snapshots/detail"
+ self._test_list_group_snapshots(detail=True, mock_args=mock_args)
+
+ def test_list_group_snapshots_with_detail_with_bytes_body(self):
+ mock_args = "group_snapshots/detail"
+ self._test_list_group_snapshots(detail=True, bytes_body=True,
+ mock_args=mock_args)
+
+ def test_list_group_snapshots_with_params(self):
+ # Run the test separately for each param, to avoid assertion error
+ # resulting from randomized params order.
+ mock_args = 'group_snapshots?sort_key=name'
+ self._test_list_group_snapshots(mock_args=mock_args, sort_key='name')
+
+ mock_args = 'group_snapshots/detail?limit=10'
+ self._test_list_group_snapshots(detail=True, bytes_body=True,
+ mock_args=mock_args, limit=10)
+
def test_delete_group_snapshot(self):
self.check_service_client_function(
self.client.delete_group_snapshot,
@@ -139,3 +161,12 @@
{},
group_snapshot_id='0e701ab8-1bec-4b9f-b026-a7ba4af13578',
status=202)
+
+ def test_reset_group_snapshot_status(self):
+ self.check_service_client_function(
+ self.client.reset_group_snapshot_status,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ {},
+ status=202,
+ group_snapshot_id='0e701ab8-1bec-4b9f-b026-a7ba4af13578',
+ status_to_set='error')
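
The reworked helper now receives the exact expected request path via mock_args, so each combination of detail and query parameters maps to one URL, and each parameter is exercised separately to keep the query-string ordering deterministic. A small sketch of how those paths are formed:

    from urllib.parse import urlencode

    # 'detail' selects the sub-resource; remaining keyword arguments become
    # the query string (one parameter per test above, to avoid ordering
    # surprises when the dict is encoded).
    def group_snapshots_url(detail=False, **params):
        url = 'group_snapshots'
        if detail:
            url += '/detail'
        if params:
            url += '?' + urlencode(params)
        return url

    print(group_snapshots_url(sort_key='name'))        # group_snapshots?sort_key=name
    print(group_snapshots_url(detail=True, limit=10))  # group_snapshots/detail?limit=10
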
diff --git a/tempest/tests/services/__init__.py b/tempest/tests/services/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/tests/services/__init__.py
+++ /dev/null
diff --git a/tempest/tests/services/object_storage/__init__.py b/tempest/tests/services/object_storage/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/tests/services/object_storage/__init__.py
+++ /dev/null
diff --git a/tempest/tests/services/object_storage/test_object_client.py b/tempest/tests/services/object_storage/test_object_client.py
index 748614c..86535f9 100644
--- a/tempest/tests/services/object_storage/test_object_client.py
+++ b/tempest/tests/services/object_storage/test_object_client.py
@@ -31,15 +31,15 @@
self.object_client = object_client.ObjectClient(self.fake_auth,
'swift', 'region1')
- @mock.patch.object(object_client, 'create_connection')
+ @mock.patch.object(object_client, '_create_connection')
def test_create_object_continue_no_data(self, mock_poc):
self._validate_create_object_continue(None, mock_poc)
- @mock.patch.object(object_client, 'create_connection')
+ @mock.patch.object(object_client, '_create_connection')
def test_create_object_continue_with_data(self, mock_poc):
self._validate_create_object_continue('hello', mock_poc)
- @mock.patch.object(object_client, 'create_connection')
+ @mock.patch.object(object_client, '_create_connection')
def test_create_continue_with_no_continue_received(self, mock_poc):
self._validate_create_object_continue('hello', mock_poc,
initial_status=201)
diff --git a/tempest/tests/test_base_test.py b/tempest/tests/test_base_test.py
index 3ece11d..011bc9b 100644
--- a/tempest/tests/test_base_test.py
+++ b/tempest/tests/test_base_test.py
@@ -17,6 +17,7 @@
from tempest import clients
from tempest.common import credentials_factory as credentials
+from tempest import config
from tempest.lib.common import fixed_network
from tempest import test
from tempest.tests import base
@@ -27,6 +28,8 @@
def setUp(self):
super(TestBaseTestCase, self).setUp()
self.useFixture(fake_config.ConfigFixture())
+ self.patchobject(config, 'TempestConfigPrivate',
+ fake_config.FakePrivate)
self.fixed_network_name = 'fixed-net'
cfg.CONF.set_default('fixed_network_name', self.fixed_network_name,
'compute')
diff --git a/tempest/tests/test_imports.py b/tempest/tests/test_imports.py
new file mode 100644
index 0000000..6f1cfca
--- /dev/null
+++ b/tempest/tests/test_imports.py
@@ -0,0 +1,69 @@
+# Copyright 2017 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import mock
+
+from tempest.tests import base
+
+
+class ConfCounter(object):
+
+ def __init__(self, *args, **kwargs):
+ self.count = 0
+
+ def __getattr__(self, key):
+ self.count += 1
+ return mock.MagicMock()
+
+ def get_counts(self):
+ return self.count
+
+
+class TestImports(base.TestCase):
+ def setUp(self):
+ super(TestImports, self).setUp()
+ self.conf_mock = self.patch('tempest.config.CONF',
+ new_callable=ConfCounter)
+
+ def test_account_generator_command_import(self):
+ from tempest.cmd import account_generator # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_cleanup_command_import(self):
+ from tempest.cmd import cleanup # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_init_command_import(self):
+ from tempest.cmd import init # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_list_plugins_command_import(self):
+ from tempest.cmd import list_plugins # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_run_command_import(self):
+ from tempest.cmd import run # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_subunit_describe_command_import(self):
+ from tempest.cmd import subunit_describe_calls # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_verify_tempest_config_command_import(self):
+ from tempest.cmd import verify_tempest_config # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
+
+ def test_workspace_command_import(self):
+ from tempest.cmd import workspace # noqa
+ self.assertEqual(0, self.conf_mock.get_counts())
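
The ConfCounter proxy above counts every attribute access on the patched CONF object, so a non-zero count after an import proves that the module reads configuration at import time. A standalone demonstration of the counting trick:

    import mock

    # Every attribute lookup that falls through to __getattr__ bumps the
    # counter; 'count' itself is a real instance attribute, so reading it
    # does not.
    class ConfCounter(object):
        def __init__(self, *args, **kwargs):
            self.count = 0

        def __getattr__(self, key):
            self.count += 1
            return mock.MagicMock()

    conf = ConfCounter()
    print(conf.count)   # 0 -- nothing has been read yet
    conf.identity.uri   # the chained read counts as a single access here
    print(conf.count)   # 1
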
diff --git a/tempest/tests/test_list_tests.py b/tempest/tests/test_list_tests.py
index a238879..4af7463 100644
--- a/tempest/tests/test_list_tests.py
+++ b/tempest/tests/test_list_tests.py
@@ -23,12 +23,10 @@
class TestTestList(base.TestCase):
- def test_testr_list_tests_no_errors(self):
- # Remove unit test discover path from env to test tempest tests
+ def test_stestr_list_no_errors(self):
test_env = os.environ.copy()
- test_env.pop('OS_TEST_PATH')
import_failures = []
- p = subprocess.Popen(['testr', 'list-tests'], stdout=subprocess.PIPE,
+ p = subprocess.Popen(['stestr', 'list'], stdout=subprocess.PIPE,
env=test_env)
ids, err = p.communicate()
self.assertEqual(0, p.returncode,
diff --git a/tox.ini b/tox.ini
index 7bdc580..21696eb 100644
--- a/tox.ini
+++ b/tox.ini
@@ -16,12 +16,11 @@
[testenv]
setenv =
VIRTUAL_ENV={envdir}
- OS_TEST_PATH=./tempest/tests
OS_LOG_CAPTURE=1
PYTHONWARNINGS=default::DeprecationWarning
BRANCH_NAME=master
CLIENT_NAME=tempest
-passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH OS_TEST_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
+passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
usedevelop = True
install_command =
{toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
@@ -31,7 +30,7 @@
-r{toxinidir}/test-requirements.txt
commands =
find . -type f -name "*.pyc" -delete
- ostestr {posargs}
+ stestr --test-path ./tempest/tests run {posargs}
[testenv:genconfig]
commands = oslo-config-generator --config-file tempest/cmd/config-generator.tempest.conf