Merge "Remove python-devel package for rpm based distributions from bindep"
diff --git a/HACKING.rst b/HACKING.rst
index dc28e4e..caf954b 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -194,6 +194,13 @@
attribute should be sparingly applied to only the tests that sanity-check the
most essential functionality of an OpenStack cloud.
+Multinode Attribute
+^^^^^^^^^^^^^^^^^^^
+The ``type='multinode'`` attribute is used to signify that a test should be
+executed in a multinode environment. By marking tests with this attribute we
+can avoid running tests which are not beneficial for the multinode setup and
+thus reduce the consumption of resources.
+
Test fixtures and resources
---------------------------
Test level resources should be cleaned-up after the test execution. Clean-up
@@ -322,7 +329,14 @@
- If the execution of a set of tests is required to be serialized then locking
can be used to perform this. See usage of ``LockFixture`` for examples of
- using locking.
+ using locking. However, LockFixture only helps if you want to separate the
+ execution of two small sets of test cases. On the other hand, if you need to
+ run a set of tests separately from potentially all other tests then
+ ``LockFixture`` does not scale as you would need to take the lock in all the
+ other tests too. In this case, you can use the ``@serial`` decorator on top
+ of the test class holding the tests that need to run separately from the
+ potentially parallel test set. See more in :ref:`tempest_test_writing`.
+
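For illustration, a minimal sketch of how these markers look in a test module;
the ``LockFixture`` import path and the lock name are assumptions based on
existing usage in the tree, while ``@decorators.attr(type='multinode')``
matches the tests updated later in this change::

    from tempest.api.compute import base
    from tempest.common import tempest_fixtures as fixtures
    from tempest.lib import decorators


    class SampleMultinodeTest(base.BaseV2ComputeAdminTest):
        """Illustrative only, not part of this change."""

        @decorators.attr(type='multinode')
        def test_needs_two_compute_nodes(self):
            # Jobs can include or exclude this test by filtering on the
            # 'multinode' attribute.
            pass

        def test_serialized_against_quota_tests(self):
            # Hold a named lock so that tests sharing the same lock name
            # never run concurrently with this one.
            self.useFixture(fixtures.LockFixture('compute_quotas'))
            pass
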
Sample Configuration File
-------------------------
diff --git a/doc/source/keystone_scopes_and_roles_support.rst b/doc/source/keystone_scopes_and_roles_support.rst
index f446f8c..4d70565 100644
--- a/doc/source/keystone_scopes_and_roles_support.rst
+++ b/doc/source/keystone_scopes_and_roles_support.rst
@@ -203,6 +203,10 @@
cls.az_p_reader_client = (
cls.os_project_reader.availability_zone_client)
+   .. note::
+      'primary', 'project_admin', 'project_member', and 'project_reader'
+      credentials will be created under the same project.
+
#. Project alternate Admin: This is supported and can be requested and used from
the test as below:
@@ -248,6 +252,10 @@
cls.az_p_alt_reader_client = (
cls.os_project_alt_reader.availability_zone_client)
+   .. note::
+      'alt', 'project_alt_admin', 'project_alt_member', and
+      'project_alt_reader' credentials will be created under the same project.
+
#. Project other roles: This is supported and can be requested and used from
the test as below:
@@ -269,6 +277,16 @@
cls.az_role2_client = (
cls.os_project_my_role2.availability_zone_client)
+   .. note::
+      'admin' credentials are considered and kept as legacy admin and
+      will be created under a new project. If any test wants to test with
+      the admin role in projectA and a non-admin or admin role in projectB,
+      then the test can request the projectA admin using 'admin' or
+      'project_alt_admin', the non-admin in projectB using 'primary',
+      'project_member', or 'project_reader', or the admin in projectB using
+      'project_admin'. Many existing tests use 'admin' with a new project to
+      assert on the resource list, so we are keeping 'admin' as a kind of
+      legacy admin.
+
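As a concrete sketch of the notes above: a test can combine the legacy
'admin' credential (created under its own project) with 'primary' and
'project_admin', which share a second project. The base class and client
attribute names below are illustrative, following the availability zone
examples earlier in this document::

    from tempest.api.compute import base


    class SampleCrossProjectTest(base.BaseV2ComputeTest):

        # 'admin' lives in its own (legacy) project, while 'primary' and
        # 'project_admin' are created under the same project.
        credentials = ['primary', 'admin', 'project_admin']

        @classmethod
        def setup_clients(cls):
            super(SampleCrossProjectTest, cls).setup_clients()
            cls.az_admin_client = cls.os_admin.availability_zone_client
            cls.az_client = cls.os_primary.availability_zone_client
            cls.az_p_admin_client = (
                cls.os_project_admin.availability_zone_client)
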
Pre-Provisioned Credentials
---------------------------
diff --git a/doc/source/plugins/plugin.rst b/doc/source/plugins/plugin.rst
index b1fd6f8..0771318 100644
--- a/doc/source/plugins/plugin.rst
+++ b/doc/source/plugins/plugin.rst
@@ -345,6 +345,8 @@
plugin package on your system and then running Tempest inside a venv will not
work.
-Tempest also exposes a tox job, all-plugin, which will setup a tox virtualenv
-with system site-packages enabled. This will let you leverage tox without
-requiring to manually install plugins in the tox venv before running tests.
+For example, you can use tox to install and run tests from a tempest plugin like
+this::
+
+ [~/tempest] $ tox -e venv-tempest -- pip install (path to the plugin directory)
+ [~/tempest] $ tox -e all
diff --git a/doc/source/write_tests.rst b/doc/source/write_tests.rst
index 34df089..3626a3f 100644
--- a/doc/source/write_tests.rst
+++ b/doc/source/write_tests.rst
@@ -256,6 +256,33 @@
worth checking the immediate parent for what is set to determine if your
class needs to override that setting.
+Running some tests in serial
+----------------------------
+Tempest potentially runs test cases in parallel, depending on the configuration.
+However, sometimes you need to make sure that tests are not interfering with
+each other via OpenStack resources. Tempest creates separate projects for each
+test class to separate project-based resources between test cases.
+
+If your tests use resources outside of projects, e.g. host aggregates, then
+you might need to explicitly separate interfering test cases. If you only need
+to separate a small set of test cases from each other, then you can use
+``LockFixture``.
+
+However, in some cases a small set of tests needs to be run independently from
+the rest of the test cases. For example, some of the host aggregate and
+availability zone testing needs compute nodes without any running nova server
+to be able to move compute hosts between availability zones. But many tempest
+tests start one or more nova servers. In this scenario you can mark the small
+set of tests that needs to be independent from the rest with the ``@serial``
+class decorator. This will make sure that even if tempest is configured to run
+the tests in parallel, the tests in the marked test class will always be
+executed separately from the rest of the test cases.
+
+Please note that, due to test ordering optimization reasons, test cases marked
+for ``@serial`` execution need to be put under the ``tempest/serial_tests``
+directory. This will ensure that the serial tests block the parallel tests for
+the least amount of time.
+
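A minimal sketch of this pattern, assuming the decorator is exposed as
``decorators.serial`` and using an illustrative aggregate-related test class::

    # Placed under tempest/serial_tests/ so that the test scheduler can
    # keep the time spent blocking the parallel workers to a minimum.
    from tempest.api.compute import base
    from tempest.lib import decorators


    @decorators.serial
    class AggregateRelatedSerialTest(base.BaseV2ComputeAdminTest):
        """Runs while no other test is executing in parallel."""

        def test_move_host_between_availability_zones(self):
            # Safe to assume that no parallel test has servers running on
            # the compute hosts being moved.
            pass
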
Interacting with Credentials and Clients
========================================
diff --git a/playbooks/enable-fips.yaml b/playbooks/enable-fips.yaml
deleted file mode 100644
index c8f042d..0000000
--- a/playbooks/enable-fips.yaml
+++ /dev/null
@@ -1,4 +0,0 @@
-- hosts: all
- tasks:
- - include_role:
- name: enable-fips
diff --git a/releasenotes/notes/2023.2-intermediate-release-8725d48b96854dce.yaml b/releasenotes/notes/2023.2-intermediate-release-8725d48b96854dce.yaml
new file mode 100644
index 0000000..7d3d3c4
--- /dev/null
+++ b/releasenotes/notes/2023.2-intermediate-release-8725d48b96854dce.yaml
@@ -0,0 +1,5 @@
+---
+prelude: >
+ This is an intermediate release during the 2023.2 development cycle to
+  make the scenario tests' server SSHABLE functionality available to plugins
+ and other consumers.
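The functionality referred to here is the ``wait_until='SSHABLE'`` option of
``create_test_server``, used for example by the new assisted volume snapshot
test later in this change::

    validation_resources = self.get_class_validation_resources(
        self.os_primary)
    server = self.create_test_server(
        validatable=True,
        validation_resources=validation_resources,
        wait_until='SSHABLE')
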
diff --git a/releasenotes/notes/add-image-task-apis-as-tempest-clients-228ccba01f59cbf3.yaml b/releasenotes/notes/add-image-task-apis-as-tempest-clients-228ccba01f59cbf3.yaml
new file mode 100644
index 0000000..cb99a29
--- /dev/null
+++ b/releasenotes/notes/add-image-task-apis-as-tempest-clients-228ccba01f59cbf3.yaml
@@ -0,0 +1,54 @@
+---
+features:
+ - |
+    A new ``tasks_client`` tempest client for the glance v2 image
+    task API is implemented in this release.
diff --git a/releasenotes/notes/add-keystone-config-opt-minimum-password-age-426e9d225f743137.yaml b/releasenotes/notes/add-keystone-config-opt-minimum-password-age-426e9d225f743137.yaml
new file mode 100644
index 0000000..06f993e
--- /dev/null
+++ b/releasenotes/notes/add-keystone-config-opt-minimum-password-age-426e9d225f743137.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+    Adding a new config option ``[identity]/user_minimum_password_age``
+    which allows specifying the number of days that a password must
+ be used before the user can change it. For this option to take
+ effect, identity-feature-enabled.security_compliance must be set
+ to True.
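A sketch of how a test might consume the new option; the option and the
``security_compliance`` flag are referenced exactly as named above, while the
test and base class names are illustrative only::

    from tempest.api.identity import base
    from tempest import config

    CONF = config.CONF


    class SamplePasswordAgeTest(base.BaseIdentityV3Test):

        @classmethod
        def skip_checks(cls):
            super(SamplePasswordAgeTest, cls).skip_checks()
            if not CONF.identity_feature_enabled.security_compliance:
                raise cls.skipException(
                    "Security compliance support is not enabled")

        def test_minimum_password_age_is_enforced(self):
            min_age_days = CONF.identity.user_minimum_password_age
            # Exercise password changes against min_age_days here.
            pass
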
diff --git a/releasenotes/notes/add-server-external-events-client-c86b269b0091077b.yaml b/releasenotes/notes/add-server-external-events-client-c86b269b0091077b.yaml
new file mode 100644
index 0000000..2af8e95
--- /dev/null
+++ b/releasenotes/notes/add-server-external-events-client-c86b269b0091077b.yaml
@@ -0,0 +1,5 @@
+---
+features:
+ - |
+ The ``server_external_events`` tempest client for compute
+ Server External Events API is implemented in this release.
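Example usage, mirroring the new API test added later in this change::

    events = [{'name': 'network-changed', 'server_uuid': server_id}]
    client = self.os_service_user.server_external_events_client
    created = client.create_server_external_events(events=events)['events'][0]
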
diff --git a/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml b/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml
new file mode 100644
index 0000000..33f11ce
--- /dev/null
+++ b/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml
@@ -0,0 +1,10 @@
+---
+features:
+ - |
+ Adds a ``ssh_allow_agent`` parameter to the ``RemoteClient`` class
+ wrapper and the direct ssh ``Client`` class to allow a caller to
+ explicitly request that the SSH Agent is not consulted for
+    authentication. This is useful if you are attempting explicit password-
+    based authentication, as ``paramiko``, the underlying library used for
+ SSH, defaults to utilizing an ssh-agent process before attempting
+ password authentication.
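A sketch of disabling the agent through the wrapper; the import path and the
surrounding variables are assumptions, and only ``ssh_allow_agent`` is the new
parameter::

    from tempest.common.utils.linux import remote_client

    # ip_address, username and password come from the test's own setup.
    linux_client = remote_client.RemoteClient(
        ip_address, username, password=password,
        ssh_allow_agent=False)
    linux_client.validate_authentication()
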
diff --git a/releasenotes/notes/add-volume-detach-libs-2cbb3ca924aed0ac.yaml b/releasenotes/notes/add-volume-detach-libs-2cbb3ca924aed0ac.yaml
new file mode 100644
index 0000000..30127b3
--- /dev/null
+++ b/releasenotes/notes/add-volume-detach-libs-2cbb3ca924aed0ac.yaml
@@ -0,0 +1,5 @@
+---
+features:
+ - |
+ Add delete_attachment to the v3 AttachmentsClient and terminate_connection
+ to the v3 VolumesClient.
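A sketch of calling the two new methods from a volume test; the argument
names are assumptions beyond the method and client names stated above::

    # Delete an individual attachment record via the v3 attachments API.
    self.attachments_client.delete_attachment(attachment['id'])

    # Ask Cinder to terminate the connection for the given connector.
    self.volumes_client.terminate_connection(
        volume['id'], connector={})
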
diff --git a/releasenotes/notes/end-of-support-of-wallaby-455e4871ae4cb32e.yaml b/releasenotes/notes/end-of-support-of-wallaby-455e4871ae4cb32e.yaml
new file mode 100644
index 0000000..d5c2974
--- /dev/null
+++ b/releasenotes/notes/end-of-support-of-wallaby-455e4871ae4cb32e.yaml
@@ -0,0 +1,12 @@
+---
+prelude: |
+ This is an intermediate release during the 2023.1 development cycle to
+  mark the end of support for the EM Wallaby release in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * Zed
+ * Yoga
+ * Xena
+
+  Current development of Tempest is for the OpenStack 2023.1 development
+ cycle.
diff --git a/releasenotes/notes/end-of-support-of-xena-2e747cff7f8bc48a.yaml b/releasenotes/notes/end-of-support-of-xena-2e747cff7f8bc48a.yaml
new file mode 100644
index 0000000..39f6866
--- /dev/null
+++ b/releasenotes/notes/end-of-support-of-xena-2e747cff7f8bc48a.yaml
@@ -0,0 +1,12 @@
+---
+prelude: >
+ This is an intermediate release during the 2023.2 development cycle to
+  mark the end of support for the EM Xena release in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * 2023.1
+ * Zed
+ * Yoga
+
+  Current development of Tempest is for the OpenStack 2023.2 development
+ cycle.
diff --git a/releasenotes/notes/enforce_scope_placement-47a12c741e330f60.yaml b/releasenotes/notes/enforce_scope_placement-47a12c741e330f60.yaml
new file mode 100644
index 0000000..e5e602e
--- /dev/null
+++ b/releasenotes/notes/enforce_scope_placement-47a12c741e330f60.yaml
@@ -0,0 +1,4 @@
+---
+prelude: >
+  Adding the placement service to the ``enforce_scope`` config options so
+  that we can switch the scope and new defaults enforcement for the
+  placement service.
diff --git a/releasenotes/notes/fix-bug-1964509-b742f2c95d854980.yaml b/releasenotes/notes/fix-bug-1964509-b742f2c95d854980.yaml
new file mode 100644
index 0000000..db627de
--- /dev/null
+++ b/releasenotes/notes/fix-bug-1964509-b742f2c95d854980.yaml
@@ -0,0 +1,19 @@
+---
+fixes:
+ - |
+    There was a bug (bug#1964509) in dynamic credentials creation where
+    project credentials with different roles were created with new
+    projects. Credentials with different roles for a project must be
+    created within the same project. For example, the 'project_admin',
+    'project_member', 'project_reader', and 'primary' credentials will be
+    created in the same project. The 'alt', 'project_alt_admin',
+    'project_alt_member', and 'project_alt_reader' credentials will be
+    created within the same project.
+
+    'admin' credentials are considered and kept as legacy admin and
+    will be created under a new project. If any test wants to test with
+    the admin role in projectA and a non-admin or admin role in projectB,
+    then the test can request the projectA admin using 'admin' or
+    'project_alt_admin', the non-admin in projectB using 'primary',
+    'project_member', or 'project_reader', or the admin in projectB using
+    'project_admin'. Many existing tests use 'admin' with a new project to
+    assert on the resource list, so we are keeping 'admin' as a kind of
+    legacy admin.
diff --git a/releasenotes/notes/remove-glance-v1-api-tests-5a39d3ea4b6bd71e.yaml b/releasenotes/notes/remove-glance-v1-api-tests-5a39d3ea4b6bd71e.yaml
new file mode 100644
index 0000000..dc36ac0
--- /dev/null
+++ b/releasenotes/notes/remove-glance-v1-api-tests-5a39d3ea4b6bd71e.yaml
@@ -0,0 +1,8 @@
+---
+prelude: >
+  Glance v1 APIs were removed in the Rocky release and the last
+  release to support v1 was Queens. Tempest master does not
+  support the Rocky or Queens releases, so we removed the
+  Glance v1 tests, config options, and service clients. If you
+  would like to test the v1 APIs, you can use an older Tempest
+  version.
diff --git a/releasenotes/notes/remove-nova-network-tests-f694bcd30a97a4ca.yaml b/releasenotes/notes/remove-nova-network-tests-f694bcd30a97a4ca.yaml
new file mode 100644
index 0000000..6ee5691
--- /dev/null
+++ b/releasenotes/notes/remove-nova-network-tests-f694bcd30a97a4ca.yaml
@@ -0,0 +1,11 @@
+---
+prelude: >
+  Tempest removed the nova-network tests and service clients.
+  nova-network was removed in the Rocky release and current
+  Tempest master does not support the Rocky release. Below are
+  the service clients that have been removed:
+
+ * floating_ip_pools_client
+ * floating_ips_bulk_client
+ * fixed_ips_client
+ * list_virtual_interfaces
diff --git a/releasenotes/notes/tempest-2023-1-release-b18a240afadae8c9.yaml b/releasenotes/notes/tempest-2023-1-release-b18a240afadae8c9.yaml
new file mode 100644
index 0000000..092f4e3
--- /dev/null
+++ b/releasenotes/notes/tempest-2023-1-release-b18a240afadae8c9.yaml
@@ -0,0 +1,17 @@
+---
+prelude: |
+  This release is to tag Tempest for the OpenStack 2023.1 release.
+ This release marks the start of 2023.1 release support in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * 2023.1
+ * Zed
+ * Yoga
+ * Xena
+
+  Current development of Tempest is for the OpenStack 2023.2 development
+ cycle. Every Tempest commit is also tested against master during
+ the 2023.2 cycle. However, this does not necessarily mean that using
+ Tempest as of this tag will work against a 2023.2 (or future release)
+ cloud.
+  To be on the safe side, use this tag to test the OpenStack 2023.1 release.
diff --git a/releasenotes/notes/update-v3-entrypoint-29d56c902439cc03.yaml b/releasenotes/notes/update-v3-entrypoint-29d56c902439cc03.yaml
new file mode 100644
index 0000000..363e59f
--- /dev/null
+++ b/releasenotes/notes/update-v3-entrypoint-29d56c902439cc03.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+ - |
+    Update the default value of the config option
+    ``CONF.identity.v3_entrypoint_type`` from adminURL to public. The adminURL
+    value was deprecated in the Queens release but updating the default was
+    missed at the time. The default entrypoint used by tempest should be the
+    public one.
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index b36be01..4c1edd5 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,9 @@
:maxdepth: 1
unreleased
+ v34.2.0
+ v34.0.0
+ v33.0.0
v32.0.0
v31.1.0
v31.0.0
diff --git a/releasenotes/source/v33.0.0.rst b/releasenotes/source/v33.0.0.rst
new file mode 100644
index 0000000..fe7bd7d
--- /dev/null
+++ b/releasenotes/source/v33.0.0.rst
@@ -0,0 +1,5 @@
+=====================
+v33.0.0 Release Notes
+=====================
+.. release-notes:: 33.0.0 Release Notes
+ :version: 33.0.0
diff --git a/releasenotes/source/v34.0.0.rst b/releasenotes/source/v34.0.0.rst
new file mode 100644
index 0000000..94d3b67
--- /dev/null
+++ b/releasenotes/source/v34.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v34.0.0 Release Notes
+=====================
+
+.. release-notes:: 34.0.0 Release Notes
+ :version: 34.0.0
diff --git a/releasenotes/source/v34.2.0.rst b/releasenotes/source/v34.2.0.rst
new file mode 100644
index 0000000..386cf71
--- /dev/null
+++ b/releasenotes/source/v34.2.0.rst
@@ -0,0 +1,6 @@
+=====================
+v34.2.0 Release Notes
+=====================
+
+.. release-notes:: 34.2.0 Release Notes
+ :version: 34.2.0
diff --git a/requirements.txt b/requirements.txt
index a118856..6e66046 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -22,3 +22,4 @@
urllib3>=1.21.1 # MIT
debtcollector>=1.2.0 # Apache-2.0
defusedxml>=0.7.1 # PSFL
+fasteners>=0.16.0 # Apache-2.0
diff --git a/roles/run-tempest-26/README.rst b/roles/run-tempest-26/README.rst
index 3643edb..8ff1656 100644
--- a/roles/run-tempest-26/README.rst
+++ b/roles/run-tempest-26/README.rst
@@ -21,7 +21,7 @@
A regular expression used to select the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+   ('all', 'all-site-packages').
In the following example only api scenario and third party tests
will be executed.
@@ -47,7 +47,7 @@
A regular expression used to skip the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
::
vars:
diff --git a/roles/run-tempest-26/tasks/main.yaml b/roles/run-tempest-26/tasks/main.yaml
index f846006..7ad5c99 100644
--- a/roles/run-tempest-26/tasks/main.yaml
+++ b/roles/run-tempest-26/tasks/main.yaml
@@ -17,7 +17,7 @@
- name: Limit max concurrency when more than 3 vcpus are available
set_fact:
- default_concurrency: "{{ num_cores|int // 2 }}"
+ default_concurrency: "{{ num_cores|int - 2 }}"
when: num_cores|int > 3
- name: Override target branch
@@ -62,7 +62,9 @@
when: blacklist_stat.stat.exists
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} {{blacklist_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
+ {{tempest_test_regex|quote if (tempest_test_regex|length>0)|default(None, True)}} \
+ {{blacklist_option|default(None)}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
--black-regex={{tempest_black_regex|quote}}
args:
diff --git a/roles/run-tempest/README.rst b/roles/run-tempest/README.rst
index d9f855a..04db849 100644
--- a/roles/run-tempest/README.rst
+++ b/roles/run-tempest/README.rst
@@ -21,7 +21,7 @@
A regular expression used to select the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
In the following example only api scenario and third party tests
will be executed.
@@ -56,7 +56,7 @@
A regular expression used to skip the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
::
vars:
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
index f302fa5..3d78557 100644
--- a/roles/run-tempest/tasks/main.yaml
+++ b/roles/run-tempest/tasks/main.yaml
@@ -17,7 +17,7 @@
- name: Limit max concurrency when more than 3 vcpus are available
set_fact:
- default_concurrency: "{{ num_cores|int // 2 }}"
+ default_concurrency: "{{ num_cores|int - 2 }}"
when: num_cores|int > 3
- name: Override target branch
@@ -25,11 +25,11 @@
target_branch: "{{ zuul.override_checkout }}"
when: zuul.override_checkout is defined
-- name: Use stable branch upper-constraints till stable/victoria
+- name: Use stable branch upper-constraints till stable/wallaby
set_fact:
# TOX_CONSTRAINTS_FILE is new name, UPPER_CONSTRAINTS_FILE is old one, best to set both
tempest_tox_environment: "{{ tempest_tox_environment | combine({'UPPER_CONSTRAINTS_FILE': stable_constraints_file}) | combine({'TOX_CONSTRAINTS_FILE': stable_constraints_file}) }}"
- when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/victoria"]
+ when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/victoria", "stable/wallaby"]
- name: Use Configured upper-constraints for non-master Tempest
set_fact:
@@ -120,10 +120,11 @@
- target_branch in ["stable/train", "stable/ussuri", "stable/victoria"]
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} \
- {{blacklist_option|default('')}} {{exclude_list_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
+ {{tempest_test_regex|quote if (tempest_test_regex|length>0)|default(None, True)}} \
+ {{blacklist_option|default(None)}} {{exclude_list_option|default(None)}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
- {{tempest_test_exclude_regex|default('')}}
+ {{tempest_test_exclude_regex|default(None)}}
args:
chdir: "{{devstack_base_dir}}/tempest"
register: tempest_run_result
diff --git a/setup.cfg b/setup.cfg
index a531eb4..bb1ced5 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -17,6 +17,8 @@
Programming Language :: Python :: 3
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
+ Programming Language :: Python :: 3.10
+ Programming Language :: Python :: 3.11
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: Implementation :: CPython
diff --git a/tempest/api/compute/admin/test_assisted_volume_snapshots.py b/tempest/api/compute/admin/test_assisted_volume_snapshots.py
new file mode 100644
index 0000000..b7be796
--- /dev/null
+++ b/tempest/api/compute/admin/test_assisted_volume_snapshots.py
@@ -0,0 +1,70 @@
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.compute import base
+from tempest import config
+from tempest.lib.common.utils import data_utils
+from tempest.lib import decorators
+
+
+CONF = config.CONF
+
+
+class VolumesAssistedSnapshotsTest(base.BaseV2ComputeAdminTest):
+ """Test volume assisted snapshots"""
+
+ create_default_network = True
+
+ # TODO(gmann): Remove the admin access to service user
+ # once nova change the default of this API to service
+ # role. To merge the nova changing the policy default
+ # we need to use token with admin as well as service
+ # role and later we can use only service token.
+ credentials = ['primary', 'admin', ['service_user', 'admin', 'service']]
+
+ @classmethod
+ def skip_checks(cls):
+ super(VolumesAssistedSnapshotsTest, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
+ def setup_clients(cls):
+ super(VolumesAssistedSnapshotsTest, cls).setup_clients()
+ cls.assisted_v_client = (
+ cls.os_service_user.assisted_volume_snapshots_client)
+ cls.volumes_client = cls.os_admin.volumes_client_latest
+ cls.servers_client = cls.os_admin.servers_client
+
+ @decorators.idempotent_id('8aee84a3-1b1f-42e4-9b00-613931ccac24')
+ def test_volume_assisted_snapshot_create_delete(self):
+ """Test create/delete volume assisted snapshot"""
+ volume = self.create_volume()
+ self.addCleanup(self.delete_volume, volume['id'])
+ validation_resources = self.get_class_validation_resources(
+ self.os_primary)
+ server = self.create_test_server(
+ validatable=True,
+ validation_resources=validation_resources,
+ wait_until='SSHABLE'
+ )
+ # Attach created volume to server
+ self.attach_volume(server, volume)
+ snapshot_id = data_utils.rand_uuid()
+ snapshot = self.assisted_v_client.create_assisted_volume_snapshot(
+ volume_id=volume['id'], snapshot_id=snapshot_id,
+ type='qcow2', new_file='new_file')['snapshot']
+ self.assisted_v_client.delete_assisted_volume_snapshot(
+ volume_id=volume['id'], snapshot_id=snapshot['id'])
diff --git a/tempest/api/compute/admin/test_fixed_ips.py b/tempest/api/compute/admin/test_fixed_ips.py
deleted file mode 100644
index 9de3da9..0000000
--- a/tempest/api/compute/admin/test_fixed_ips.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright 2013 IBM Corp
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.compute import base
-from tempest.common import utils
-from tempest import config
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-class FixedIPsTestJson(base.BaseV2ComputeAdminTest):
- """Test fixed ips API"""
-
- @classmethod
- def skip_checks(cls):
- super(FixedIPsTestJson, cls).skip_checks()
- if CONF.service_available.neutron:
- msg = ("%s skipped as neutron is available" % cls.__name__)
- raise cls.skipException(msg)
- if not utils.get_service_list()['network']:
- raise cls.skipException("network service not enabled.")
-
- @classmethod
- def setup_clients(cls):
- super(FixedIPsTestJson, cls).setup_clients()
- cls.client = cls.os_admin.fixed_ips_client
-
- @classmethod
- def resource_setup(cls):
- super(FixedIPsTestJson, cls).resource_setup()
- server = cls.create_test_server(wait_until='ACTIVE')
- server = cls.servers_client.show_server(server['id'])['server']
- cls.ip = None
- for ip_set in server['addresses']:
- for ip in server['addresses'][ip_set]:
- if ip['OS-EXT-IPS:type'] == 'fixed':
- cls.ip = ip['addr']
- break
- if cls.ip:
- break
- if cls.ip is None:
- raise cls.skipException("No fixed ip found for server: %s"
- % server['id'])
-
- @decorators.idempotent_id('16b7d848-2f7c-4709-85a3-2dfb4576cc52')
- def test_list_fixed_ip_details(self):
- """Test getting fixed ip details"""
- fixed_ip = self.client.show_fixed_ip(self.ip)
- self.assertEqual(fixed_ip['fixed_ip']['address'], self.ip)
-
- @decorators.idempotent_id('5485077b-7e46-4cec-b402-91dc3173433b')
- def test_set_reserve(self):
- """Test reserving fixed ip"""
- self.client.reserve_fixed_ip(self.ip, reserve="None")
-
- @decorators.idempotent_id('7476e322-b9ff-4710-bf82-49d51bac6e2e')
- def test_set_unreserve(self):
- """Test unreserving fixed ip"""
- self.client.reserve_fixed_ip(self.ip, unreserve="None")
diff --git a/tempest/api/compute/admin/test_fixed_ips_negative.py b/tempest/api/compute/admin/test_fixed_ips_negative.py
deleted file mode 100644
index 1629faa..0000000
--- a/tempest/api/compute/admin/test_fixed_ips_negative.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright 2013 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.compute import base
-from tempest.common import utils
-from tempest import config
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-CONF = config.CONF
-
-
-class FixedIPsNegativeTestJson(base.BaseV2ComputeAdminTest):
- """Negative tests of fixed ips API"""
-
- @classmethod
- def skip_checks(cls):
- super(FixedIPsNegativeTestJson, cls).skip_checks()
- if CONF.service_available.neutron:
- msg = ("%s skipped as neutron is available" % cls.__name__)
- raise cls.skipException(msg)
- if not utils.get_service_list()['network']:
- raise cls.skipException("network service not enabled.")
-
- @classmethod
- def setup_clients(cls):
- super(FixedIPsNegativeTestJson, cls).setup_clients()
- cls.client = cls.os_admin.fixed_ips_client
- cls.non_admin_client = cls.fixed_ips_client
-
- @classmethod
- def resource_setup(cls):
- super(FixedIPsNegativeTestJson, cls).resource_setup()
- server = cls.create_test_server(wait_until='ACTIVE')
- server = cls.servers_client.show_server(server['id'])['server']
- cls.ip = None
- for ip_set in server['addresses']:
- for ip in server['addresses'][ip_set]:
- if ip['OS-EXT-IPS:type'] == 'fixed':
- cls.ip = ip['addr']
- break
- if cls.ip:
- break
- if cls.ip is None:
- raise cls.skipException("No fixed ip found for server: %s"
- % server['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('9f17f47d-daad-4adc-986e-12370c93e407')
- def test_list_fixed_ip_details_with_non_admin_user(self):
- """Test listing fixed ip with detail by non-admin user is forbidden"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_client.show_fixed_ip, self.ip)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('ce60042c-fa60-4836-8d43-1c8e3359dc47')
- def test_set_reserve_with_non_admin_user(self):
- """Test reserving fixed ip by non-admin user is forbidden"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_client.reserve_fixed_ip,
- self.ip, reserve="None")
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('f1f7a35b-0390-48c5-9803-5f27461439db')
- def test_set_unreserve_with_non_admin_user(self):
- """Test unreserving fixed ip by non-admin user is forbidden"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_client.reserve_fixed_ip,
- self.ip, unreserve="None")
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('f51cf464-7fc5-4352-bc3e-e75cfa2cb717')
- def test_set_reserve_with_invalid_ip(self):
- """Test reserving invalid fixed ip should fail"""
- # NOTE(maurosr): since this exercises the same code snippet, we do it
- # only for reserve action
- # NOTE(eliqiao): in Juno, the exception is NotFound, but in master, we
- # change the error code to BadRequest, both exceptions should be
- # accepted by tempest
- self.assertRaises((lib_exc.NotFound, lib_exc.BadRequest),
- self.client.reserve_fixed_ip,
- "my.invalid.ip", reserve="None")
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('fd26ef50-f135-4232-9d32-281aab3f9176')
- def test_fixed_ip_with_invalid_action(self):
- """Test operating fixed ip with invalid action should fail"""
- self.assertRaises(lib_exc.BadRequest,
- self.client.reserve_fixed_ip,
- self.ip, invalid_action="None")
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
deleted file mode 100644
index 786c7f0..0000000
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright 2014 NEC Technologies India Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import netaddr
-
-from tempest.api.compute import base
-from tempest.common import utils
-from tempest import config
-from tempest.lib.common.utils import test_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions
-
-CONF = config.CONF
-
-
-# TODO(stephenfin): Remove this test class once the nova queens branch goes
-# into extended maintenance mode.
-class FloatingIPsBulkAdminTestJSON(base.BaseV2ComputeAdminTest):
- """Tests Floating IPs Bulk APIs that require admin privileges.
-
- API documentation - http://docs.openstack.org/api/openstack-compute/2/
- content/ext-os-floating-ips-bulk.html
- """
- max_microversion = '2.35'
- depends_on_nova_network = True
-
- @classmethod
- def setup_clients(cls):
- super(FloatingIPsBulkAdminTestJSON, cls).setup_clients()
- cls.client = cls.os_admin.floating_ips_bulk_client
-
- @classmethod
- def resource_setup(cls):
- super(FloatingIPsBulkAdminTestJSON, cls).resource_setup()
- cls.ip_range = CONF.validation.floating_ip_range
- cls.verify_unallocated_floating_ip_range(cls.ip_range)
-
- @classmethod
- def verify_unallocated_floating_ip_range(cls, ip_range):
- # Verify whether configure floating IP range is not already allocated.
- body = cls.client.list_floating_ips_bulk()['floating_ip_info']
- allocated_ips_list = map(lambda x: x['address'], body)
- for ip_addr in netaddr.IPNetwork(ip_range).iter_hosts():
- if str(ip_addr) in allocated_ips_list:
- msg = ("Configured unallocated floating IP range is already "
- "allocated. Configure the correct unallocated range "
- "as 'floating_ip_range'")
- raise exceptions.InvalidConfiguration(msg)
- return
-
- @decorators.idempotent_id('2c8f145f-8012-4cb8-ac7e-95a587f0e4ab')
- @utils.services('network')
- def test_create_list_delete_floating_ips_bulk(self):
- """Creating, listing and deleting the Floating IPs Bulk"""
- pool = 'test_pool'
- # NOTE(GMann): Reserving the IP range but those are not attached
- # anywhere. Using the below mentioned interface which is not ever
- # expected to be used. Clean Up has been done for created IP range
- interface = 'eth0'
- body = (self.client.create_floating_ips_bulk(self.ip_range,
- pool,
- interface)
- ['floating_ips_bulk_create'])
- self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self.client.delete_floating_ips_bulk, self.ip_range)
- self.assertEqual(self.ip_range, body['ip_range'])
- ips_list = self.client.list_floating_ips_bulk()['floating_ip_info']
- self.assertNotEmpty(ips_list)
- for ip in netaddr.IPNetwork(self.ip_range).iter_hosts():
- self.assertIn(str(ip), map(lambda x: x['address'], ips_list))
- body = (self.client.delete_floating_ips_bulk(self.ip_range)
- ['floating_ips_bulk_delete'])
- self.assertEqual(self.ip_range, body)
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 2826f56..d68334f 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -140,6 +140,7 @@
LOG.info("Live migrate back to source %s", source_host)
self._live_migrate(server_id, source_host, state, volume_backed)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('1dce86b8-eb04-4c03-a9d8-9c1dc3ee0c7b')
@testtools.skipUnless(CONF.compute_feature_enabled.
block_migration_for_live_migration,
@@ -148,6 +149,7 @@
"""Test live migrating an active server"""
self._test_live_migration()
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('1e107f21-61b2-4988-8f22-b196e938ab88')
@testtools.skipUnless(CONF.compute_feature_enabled.
block_migration_for_live_migration,
@@ -158,6 +160,7 @@
"""Test live migrating a paused server"""
self._test_live_migration(state='PAUSED')
+ @decorators.attr(type='multinode')
@testtools.skipUnless(CONF.compute_feature_enabled.
volume_backed_live_migration,
'Volume-backed live migration not available')
@@ -167,6 +170,7 @@
"""Test live migrating an active server booted from volume"""
self._test_live_migration(volume_backed=True)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('e19c0cc6-6720-4ed8-be83-b6603ed5c812')
@testtools.skipIf(not CONF.compute_feature_enabled.
block_migration_for_live_migration,
@@ -198,7 +202,8 @@
volume = self.create_volume()
# Attach the volume to the server
- self.attach_volume(server, volume, device='/dev/xvdb')
+ self.attach_volume(server, volume, device='/dev/xvdb',
+ wait_for_detach=False)
server = self.admin_servers_client.show_server(server_id)['server']
volume_id1 = server["os-extended-volumes:volumes_attached"][0]["id"]
self._live_migrate(server_id, target_host, 'ACTIVE')
@@ -253,6 +258,9 @@
port = self.ports_client.show_port(port_id)['port']
return port['status'] == 'ACTIVE'
+ @decorators.unstable_test(bug='2024160')
+ @decorators.unstable_test(bug='2033887')
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('0022c12e-a482-42b0-be2d-396b5f0cffe3')
@utils.requires_ext(service='network', extension='trunk')
@utils.services('network')
@@ -297,12 +305,17 @@
min_microversion = '2.6'
max_microversion = 'latest'
+ @classmethod
+ def skip_checks(cls):
+ super(LiveMigrationRemoteConsolesV26Test, cls).skip_checks()
+ if not CONF.compute_feature_enabled.serial_console:
+ skip_msg = ("Serial console not supported.")
+ raise cls.skipException(skip_msg)
+ if not compute.is_scheduler_filter_enabled("DifferentHostFilter"):
+ raise cls.skipException("DifferentHostFilter is not available.")
+
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('6190af80-513e-4f0f-90f2-9714e84955d7')
- @testtools.skipUnless(CONF.compute_feature_enabled.serial_console,
- 'Serial console not supported.')
- @testtools.skipUnless(
- compute.is_scheduler_filter_enabled("DifferentHostFilter"),
- 'DifferentHostFilter is not available.')
def test_live_migration_serial_console(self):
"""Test the live-migration of an instance which has a serial console
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index 89152d6..b3d2833 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -158,6 +158,7 @@
dst_host = self.get_host_for_server(server['id'])
assert_func(src_host, dst_host)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('4bf0be52-3b6f-4746-9a27-3143636fe30d')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration not available.')
@@ -165,6 +166,7 @@
"""Test cold migrating server and then confirm the migration"""
self._test_cold_migrate_server(revert=False)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('caa1aa8b-f4ef-4374-be0d-95f001c2ac2d')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration not available.')
diff --git a/tempest/api/compute/admin/test_networks.py b/tempest/api/compute/admin/test_networks.py
index fb6376e..d7fb62d 100644
--- a/tempest/api/compute/admin/test_networks.py
+++ b/tempest/api/compute/admin/test_networks.py
@@ -64,5 +64,5 @@
configured_network = CONF.compute.fixed_network_name
self.assertIn(configured_network, [x['label'] for x in networks])
else:
- network_labels = [x['label'] for x in networks]
- self.assertNotEmpty(network_labels)
+ raise self.skipException(
+ "Environment has no known-for-sure existing network.")
diff --git a/tempest/api/compute/admin/test_server_external_events.py b/tempest/api/compute/admin/test_server_external_events.py
new file mode 100644
index 0000000..d867a39
--- /dev/null
+++ b/tempest/api/compute/admin/test_server_external_events.py
@@ -0,0 +1,44 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.compute import base
+from tempest.lib import decorators
+
+
+class ServerExternalEventsTest(base.BaseV2ComputeAdminTest):
+    """Test server external events"""
+
+ # TODO(gmann): Remove the admin access to service user
+ # once nova change the default of this API to service
+ # role. To merge the nova changing the policy default
+ # we need to use token with admin as well as service
+ # role and later we can use only service token.
+ credentials = ['primary', 'admin', ['service_user', 'admin', 'service']]
+
+ @decorators.idempotent_id('6bbf4723-61d2-4372-af55-7ba27f1c9ba6')
+ def test_create_server_external_events(self):
+ """Test create a server and add some external events"""
+ server_id = self.create_test_server(wait_until='ACTIVE')['id']
+ events = [
+ {
+ "name": "network-changed",
+ "server_uuid": server_id,
+ }
+ ]
+ client = self.os_service_user.server_external_events_client
+ events_resp = client.create_server_external_events(
+ events=events)['events'][0]
+ self.assertEqual(server_id, events_resp['server_uuid'])
+ self.assertEqual('network-changed', events_resp['name'])
+ self.assertEqual(200, events_resp['code'])
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
index bc00f8c..321078c 100644
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -25,6 +25,8 @@
class ServersAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Servers API using admin privileges"""
+ create_default_network = True
+
@classmethod
def setup_clients(cls):
super(ServersAdminTestJSON, cls).setup_clients()
diff --git a/tempest/api/compute/admin/test_servers_on_multinodes.py b/tempest/api/compute/admin/test_servers_on_multinodes.py
index 9082306..013e7d8 100644
--- a/tempest/api/compute/admin/test_servers_on_multinodes.py
+++ b/tempest/api/compute/admin/test_servers_on_multinodes.py
@@ -61,6 +61,7 @@
return hosts
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('26a9d5df-6890-45f2-abc4-a659290cb130')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("SameHostFilter"),
@@ -73,6 +74,7 @@
host02 = self.get_host_for_server(server02)
self.assertEqual(self.host01, host02)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('cc7ca884-6e3e-42a3-a92f-c522fcf25e8e')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("DifferentHostFilter"),
@@ -85,6 +87,7 @@
host02 = self.get_host_for_server(server02)
self.assertNotEqual(self.host01, host02)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('7869cc84-d661-4e14-9f00-c18cdc89cf57')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("DifferentHostFilter"),
@@ -97,6 +100,7 @@
host02 = self.get_host_for_server(server02)
self.assertNotEqual(self.host01, host02)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('f8bd0867-e459-45f5-ba53-59134552fe04')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("ServerGroupAntiAffinityFilter"),
@@ -112,6 +116,7 @@
self.assertNotEqual(hostnames[0], hostnames[1],
'Servers are on the same host: %s' % hosts)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('9d2e924a-baf4-11e7-b856-fa163e65f5ce')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("ServerGroupAffinityFilter"),
@@ -152,6 +157,7 @@
waiters.wait_for_server_status(self.servers_client, server['id'],
'ACTIVE')
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('b5cc0889-50c2-46a0-b8ff-b5fb4c3a6e20')
def test_unshelve_to_specific_host(self):
"""Test unshelve to a specific host, new behavior introduced in
diff --git a/tempest/api/compute/admin/test_volume.py b/tempest/api/compute/admin/test_volume.py
index 99d8e2a..e7c931e 100644
--- a/tempest/api/compute/admin/test_volume.py
+++ b/tempest/api/compute/admin/test_volume.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import io
-
from tempest.api.compute import base
from tempest.common import waiters
from tempest import config
@@ -49,9 +47,11 @@
:param return image_id: The UUID of the newly created image.
"""
image = self.admin_image_client.show_image(CONF.compute.image_ref)
- image_data = self.admin_image_client.show_image_file(
- CONF.compute.image_ref).data
- image_file = io.BytesIO(image_data)
+ # NOTE(danms): We need to stream this, so chunked=True means we get
+ # back a urllib3.HTTPResponse and have to carefully pass it to
+ # store_image_file() to upload it in pieces.
+ image_data_resp = self.admin_image_client.show_image_file(
+ CONF.compute.image_ref, chunked=True)
create_dict = {
'container_format': image['container_format'],
'disk_format': image['disk_format'],
@@ -60,12 +60,16 @@
'visibility': 'public',
}
create_dict.update(kwargs)
- new_image = self.admin_image_client.create_image(**create_dict)
- self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
- new_image['id'])
- self.addCleanup(self.admin_image_client.delete_image, new_image['id'])
- self.admin_image_client.store_image_file(new_image['id'], image_file)
-
+ try:
+ new_image = self.admin_image_client.create_image(**create_dict)
+ self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
+ new_image['id'])
+ self.addCleanup(
+ self.admin_image_client.delete_image, new_image['id'])
+ self.admin_image_client.store_image_file(new_image['id'],
+ image_data_resp)
+ finally:
+ image_data_resp.release_conn()
return new_image['id']
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
index 7da87c7..9576b74 100644
--- a/tempest/api/compute/admin/test_volume_swap.py
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -13,7 +13,6 @@
import time
from tempest.api.compute import base
-from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
@@ -33,6 +32,8 @@
@classmethod
def skip_checks(cls):
super(TestVolumeSwapBase, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
if not CONF.compute_feature_enabled.swap_volume:
raise cls.skipException("Swapping volumes is not supported.")
@@ -81,7 +82,6 @@
# so it's marked as such.
@decorators.attr(type='slow')
@decorators.idempotent_id('1769f00d-a693-4d67-a631-6a3496773813')
- @utils.services('volume')
def test_volume_swap(self):
"""Test swapping of volume attached to server with admin user
@@ -183,7 +183,6 @@
# multiple computes but that would just side-step the underlying bug.
@decorators.skip_because(bug='1807723',
condition=CONF.compute.min_compute_nodes > 1)
- @utils.services('volume')
def test_volume_swap_with_multiattach(self):
"""Test swapping volume attached to multiple servers
@@ -199,11 +198,12 @@
"server1"
8. Check "volume2" is attached to "server1".
"""
+ multiattach_vol_type = CONF.volume.volume_type_multiattach
# Create two volumes.
# NOTE(gmann): Volumes are created before server creation so that
# volumes cleanup can happen successfully irrespective of which volume
# is attached to server.
- volume1 = self.create_volume(multiattach=True)
+ volume1 = self.create_volume(volume_type=multiattach_vol_type)
# Make volume1 read-only since you can't swap from a volume with
# multiple read/write attachments, and you can't change the readonly
# flag on an in-use volume so we have to do this before attaching
@@ -211,7 +211,7 @@
# attach modes, then we can handle this differently.
self.admin_volumes_client.update_volume_readonly(
volume1['id'], readonly=True)
- volume2 = self.create_volume(multiattach=True)
+ volume2 = self.create_volume(volume_type=multiattach_vol_type)
# Create two servers and wait for them to be ACTIVE.
validation_resources = self.get_class_validation_resources(
diff --git a/tempest/api/compute/admin/test_volumes_negative.py b/tempest/api/compute/admin/test_volumes_negative.py
index 91ab09e..55c842f 100644
--- a/tempest/api/compute/admin/test_volumes_negative.py
+++ b/tempest/api/compute/admin/test_volumes_negative.py
@@ -115,9 +115,11 @@
5. Check "vol1" is still attached to both servers
6. Check "vol2" is not attached to any server
"""
+ multiattach_vol_type = CONF.volume.volume_type_multiattach
+
# Create two multiattach capable volumes.
- vol1 = self.create_volume(multiattach=True)
- vol2 = self.create_volume(multiattach=True)
+ vol1 = self.create_volume(volume_type=multiattach_vol_type)
+ vol2 = self.create_volume(volume_type=multiattach_vol_type)
# Create two instances.
validation_resources = self.get_class_validation_resources(
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 75df5ae..d02532d 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -51,6 +51,9 @@
super(BaseV2ComputeTest, cls).skip_checks()
if not CONF.service_available.nova:
raise cls.skipException("Nova is not available")
+ if cls.create_default_network and not CONF.service_available.neutron:
+ raise cls.skipException("Neutron is not available")
+
api_version_utils.check_skip_with_microversion(
cls.min_microversion, cls.max_microversion,
CONF.compute.min_microversion, CONF.compute.max_microversion)
@@ -79,7 +82,6 @@
cls.flavors_client = cls.os_primary.flavors_client
cls.compute_images_client = cls.os_primary.compute_images_client
cls.extensions_client = cls.os_primary.extensions_client
- cls.floating_ip_pools_client = cls.os_primary.floating_ip_pools_client
cls.floating_ips_client = cls.os_primary.compute_floating_ips_client
cls.keypairs_client = cls.os_primary.keypairs_client
cls.security_group_rules_client = (
@@ -94,7 +96,6 @@
cls.snapshots_extensions_client =\
cls.os_primary.snapshots_extensions_client
cls.interfaces_client = cls.os_primary.interfaces_client
- cls.fixed_ips_client = cls.os_primary.fixed_ips_client
cls.availability_zone_client = cls.os_primary.availability_zone_client
cls.agents_client = cls.os_primary.agents_client
cls.aggregates_client = cls.os_primary.aggregates_client
@@ -112,43 +113,11 @@
cls.attachments_client = cls.os_primary.attachments_client_latest
cls.snapshots_client = cls.os_primary.snapshots_client_latest
if CONF.service_available.glance:
- if CONF.image_feature_enabled.api_v1:
- cls.images_client = cls.os_primary.image_client
- elif CONF.image_feature_enabled.api_v2:
+ if CONF.image_feature_enabled.api_v2:
cls.images_client = cls.os_primary.image_client_v2
else:
raise lib_exc.InvalidConfiguration(
- 'Either api_v1 or api_v2 must be True in '
- '[image-feature-enabled].')
- cls._check_depends_on_nova_network()
-
- @classmethod
- def _check_depends_on_nova_network(cls):
- # Since nova-network APIs were removed from Nova in the Rocky release,
- # determine, based on the max version from the version document, if
- # the compute API is >Queens and if so, skip tests that rely on
- # nova-network.
- if not getattr(cls, 'depends_on_nova_network', False):
- return
- versions = cls.versions_client.list_versions()['versions']
- # Find the v2.1 version which will tell us our max version for the
- # compute API we're testing against.
- for version in versions:
- if version['id'] == 'v2.1':
- max_version = api_version_request.APIVersionRequest(
- version['version'])
- break
- else:
- LOG.warning(
- 'Unable to determine max v2.1 compute API version: %s',
- versions)
- return
-
- # The max compute API version in Queens is 2.60 so we cap
- # at that version.
- queens = api_version_request.APIVersionRequest('2.60')
- if max_version > queens:
- raise cls.skipException('nova-network is gone')
+ 'api_v2 must be True in [image-feature-enabled].')
@classmethod
def resource_setup(cls):
@@ -326,18 +295,18 @@
body['id'])
return body
- def wait_for(self, condition):
+ def wait_for(self, condition, *args):
"""Repeatedly calls condition() until a timeout."""
start_time = int(time.time())
while True:
try:
- condition()
+ condition(*args)
except Exception:
pass
else:
return
if int(time.time()) - start_time >= self.build_timeout:
- condition()
+ condition(*args)
return
time.sleep(self.build_interval)
@@ -462,9 +431,11 @@
self, server_id, new_flavor_id, wait_until='ACTIVE', **kwargs
):
"""resize and confirm_resize an server, waits for it to be ACTIVE."""
- self.servers_client.resize_server(server_id, new_flavor_id, **kwargs)
- waiters.wait_for_server_status(self.servers_client, server_id,
- 'VERIFY_RESIZE')
+ body = self.servers_client.resize_server(
+ server_id, new_flavor_id, **kwargs)
+ waiters.wait_for_server_status(
+ self.servers_client, server_id, 'VERIFY_RESIZE',
+ request_id=body.response['x-openstack-request-id'])
self.servers_client.confirm_resize_server(server_id)
waiters.wait_for_server_status(
@@ -522,6 +493,8 @@
"""Create a volume and wait for it to become 'available'.
:param image_ref: Specify an image id to create a bootable volume.
+ :param wait_for_available: Wait until the volume becomes available
+ before returning
:param kwargs: other parameters to create volume.
:returns: The available volume.
"""
@@ -532,6 +505,7 @@
kwargs['display_name'] = vol_name
if image_ref is not None:
kwargs['imageRef'] = image_ref
+ wait = kwargs.pop('wait_for_available', True)
if CONF.volume.volume_type and 'volume_type' not in kwargs:
# If volume_type is not provided in config then no need to
# add a volume type and
@@ -547,8 +521,9 @@
cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
cls.volumes_client.delete_volume,
volume['id'])
- waiters.wait_for_volume_resource_status(cls.volumes_client,
- volume['id'], 'available')
+ if wait:
+ waiters.wait_for_volume_resource_status(cls.volumes_client,
+ volume['id'], 'available')
return volume
def _detach_volume(self, server, volume):
@@ -568,7 +543,8 @@
# is already detached.
pass
- def attach_volume(self, server, volume, device=None, tag=None):
+ def attach_volume(self, server, volume, device=None, tag=None,
+ wait_for_detach=True):
"""Attaches volume to server and waits for 'in-use' volume status.
The volume will be detached when the test tears down.
@@ -605,7 +581,7 @@
# the contents of the console log. The final check of the volume state
# should be a no-op by this point and is just added for completeness
# when detaching non-multiattach volumes.
- if not volume['multiattach']:
+ if not volume['multiattach'] and wait_for_detach:
self.addCleanup(
waiters.wait_for_volume_resource_status, self.volumes_client,
volume['id'], 'available')
@@ -698,6 +674,8 @@
binary='nova-compute')['services']
hosts = []
for svc in svcs:
+ if svc['host'].endswith('-ironic'):
+ continue
if svc['state'] == 'up' and svc['status'] == 'enabled':
if CONF.compute.compute_volume_common_az:
if svc['zone'] == CONF.compute.compute_volume_common_az:
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index 5d6a7d7..22b71fc 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -17,7 +17,6 @@
import random
from tempest.api.compute import base
-from tempest.common import image as common_image
from tempest.common import utils
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -48,23 +47,15 @@
'name': data_utils.rand_name('image'),
'container_format': CONF.image.container_formats[0],
'disk_format': CONF.image.disk_formats[0],
- 'min_ram': min_img_ram
+ 'min_ram': min_img_ram,
+ 'visibility': 'private'
}
- if CONF.image_feature_enabled.api_v1:
- params.update({'is_public': False})
- params = {'headers': common_image.image_meta_to_headers(**params)}
- else:
- params.update({'visibility': 'private'})
-
image = self.images_client.create_image(**params)
image = image['image'] if 'image' in image else image
self.addCleanup(self.images_client.delete_image, image['id'])
- if CONF.image_feature_enabled.api_v1:
- self.images_client.update_image(image['id'], data=image_file)
- else:
- self.images_client.store_image_file(image['id'], data=image_file)
+ self.images_client.store_image_file(image['id'], data=image_file)
self.assertEqual(min_img_ram, image['min_ram'])
diff --git a/tempest/api/compute/floating_ips/base.py b/tempest/api/compute/floating_ips/base.py
index 262a3c1..d6c302d 100644
--- a/tempest/api/compute/floating_ips/base.py
+++ b/tempest/api/compute/floating_ips/base.py
@@ -41,4 +41,3 @@
def setup_clients(cls):
super(BaseFloatingIPsTest, cls).setup_clients()
cls.client = cls.floating_ips_client
- cls.pools_client = cls.floating_ip_pools_client
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips.py b/tempest/api/compute/floating_ips/test_list_floating_ips.py
index 6bfee95..fcbea2f 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips.py
@@ -66,10 +66,3 @@
self.assertEqual(floating_ip_fixed_ip,
body['fixed_ip'])
self.assertEqual(floating_ip_id, body['id'])
-
- @decorators.idempotent_id('df389fc8-56f5-43cc-b290-20eda39854d3')
- def test_list_floating_ip_pools(self):
- """Test listing floating ip pools"""
- floating_ip_pools = self.pools_client.list_floating_ip_pools()
- self.assertNotEmpty(floating_ip_pools['floating_ip_pools'],
- "Expected floating IP Pools. Got zero.")
diff --git a/tempest/api/compute/images/test_image_metadata.py b/tempest/api/compute/images/test_image_metadata.py
index ece983d..f630bc8 100644
--- a/tempest/api/compute/images/test_image_metadata.py
+++ b/tempest/api/compute/images/test_image_metadata.py
@@ -16,7 +16,6 @@
import io
from tempest.api.compute import base
-from tempest.common import image as common_image
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -42,17 +41,11 @@
@classmethod
def setup_clients(cls):
super(ImagesMetadataTestJSON, cls).setup_clients()
- # Check if glance v1 is available to determine which client to use. We
- # prefer glance v1 for the compute API tests since the compute image
- # API proxy was written for glance v1.
- if CONF.image_feature_enabled.api_v1:
- cls.glance_client = cls.os_primary.image_client
- elif CONF.image_feature_enabled.api_v2:
+ if CONF.image_feature_enabled.api_v2:
cls.glance_client = cls.os_primary.image_client_v2
else:
raise exceptions.InvalidConfiguration(
- 'Either api_v1 or api_v2 must be True in '
- '[image-feature-enabled].')
+ 'api_v2 must be True in [image-feature-enabled].')
cls.client = cls.compute_images_client
@classmethod
@@ -63,13 +56,9 @@
params = {
'name': data_utils.rand_name('image'),
'container_format': 'bare',
- 'disk_format': 'raw'
+ 'disk_format': 'raw',
+ 'visibility': 'private'
}
- if CONF.image_feature_enabled.api_v1:
- params.update({'is_public': False})
- params = {'headers': common_image.image_meta_to_headers(**params)}
- else:
- params.update({'visibility': 'private'})
body = cls.glance_client.create_image(**params)
body = body['image'] if 'image' in body else body
@@ -78,10 +67,7 @@
cls.glance_client.delete_image,
cls.image_id)
image_file = io.BytesIO((b'*' * 1024))
- if CONF.image_feature_enabled.api_v1:
- cls.glance_client.update_image(cls.image_id, data=image_file)
- else:
- cls.glance_client.store_image_file(cls.image_id, data=image_file)
+ cls.glance_client.store_image_file(cls.image_id, data=image_file)
waiters.wait_for_image_status(cls.client, cls.image_id, 'ACTIVE')
def setUp(self):
diff --git a/tempest/api/compute/images/test_image_metadata_negative.py b/tempest/api/compute/images/test_image_metadata_negative.py
index b9806c7..33a59ae 100644
--- a/tempest/api/compute/images/test_image_metadata_negative.py
+++ b/tempest/api/compute/images/test_image_metadata_negative.py
@@ -14,10 +14,13 @@
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
+CONF = config.CONF
+
class ImagesMetadataNegativeTestJSON(base.BaseV2ComputeTest):
"""Negative tests of image metadata
@@ -28,6 +31,13 @@
max_microversion = '2.38'
@classmethod
+ def skip_checks(cls):
+ super(ImagesMetadataNegativeTestJSON, cls).skip_checks()
+ if not CONF.service_available.glance:
+ skip_msg = ("%s skipped as glance is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
def setup_clients(cls):
super(ImagesMetadataNegativeTestJSON, cls).setup_clients()
cls.client = cls.compute_images_client
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index 23f8326..2b859da 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -24,6 +24,8 @@
class ImagesOneServerTestJSON(base.BaseV2ComputeTest):
"""Test server images API"""
+ create_default_network = True
+
@classmethod
def resource_setup(cls):
super(ImagesOneServerTestJSON, cls).resource_setup()
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index 15b8a00..c6eff9b 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -19,7 +19,6 @@
import testtools
from tempest.api.compute import base
-from tempest.common import image as common_image
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -46,17 +45,11 @@
def setup_clients(cls):
super(ListImageFiltersTestJSON, cls).setup_clients()
cls.client = cls.compute_images_client
- # Check if glance v1 is available to determine which client to use. We
- # prefer glance v1 for the compute API tests since the compute image
- # API proxy was written for glance v1.
- if CONF.image_feature_enabled.api_v1:
- cls.glance_client = cls.os_primary.image_client
- elif CONF.image_feature_enabled.api_v2:
+ if CONF.image_feature_enabled.api_v2:
cls.glance_client = cls.os_primary.image_client_v2
else:
raise exceptions.InvalidConfiguration(
- 'Either api_v1 or api_v2 must be True in '
- '[image-feature-enabled].')
+ 'api_v2 must be True in [image-feature-enabled].')
@classmethod
def resource_setup(cls):
@@ -66,14 +59,9 @@
params = {
'name': data_utils.rand_name(cls.__name__ + '-image'),
'container_format': 'bare',
- 'disk_format': 'raw'
+ 'disk_format': 'raw',
+ 'visibility': 'private'
}
- if CONF.image_feature_enabled.api_v1:
- params.update({'is_public': False})
- params = {'headers':
- common_image.image_meta_to_headers(**params)}
- else:
- params.update({'visibility': 'private'})
body = cls.glance_client.create_image(**params)
body = body['image'] if 'image' in body else body
@@ -86,10 +74,7 @@
# between created_at and updated_at.
time.sleep(1)
image_file = io.BytesIO((b'*' * 1024))
- if CONF.image_feature_enabled.api_v1:
- cls.glance_client.update_image(image_id, data=image_file)
- else:
- cls.glance_client.store_image_file(image_id, data=image_file)
+ cls.glance_client.store_image_file(image_id, data=image_file)
waiters.wait_for_image_status(cls.client, image_id, 'ACTIVE')
body = cls.client.show_image(image_id)['image']
return body
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index efecd6c..9b6bf84 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -295,8 +295,8 @@
def test_reassign_port_between_servers(self):
"""Tests reassigning port between servers
- 1. Create a port in Neutron.
- 2. Create two servers in Nova.
+ 1. Create two servers in Nova.
+ 2. Create a port in Neutron.
3. Attach the port to the first server.
4. Detach the port from the first server.
5. Attach the port to the second server.
@@ -304,11 +304,6 @@
"""
network = self.get_tenant_network()
network_id = network['id']
- port = self.ports_client.create_port(
- network_id=network_id,
- name=data_utils.rand_name(self.__class__.__name__))
- port_id = port['port']['id']
- self.addCleanup(self.ports_client.delete_port, port_id)
# NOTE(artom) We create two servers one at a time because
# create_test_server doesn't support multiple validatable servers.
@@ -318,12 +313,21 @@
def _create_validatable_server():
_, servers = compute.create_test_server(
self.os_primary, tenant_network=network,
- wait_until='ACTIVE', validatable=True,
+ validatable=True,
validation_resources=validation_resources)
return servers[0]
+ # NOTE(danms): We create these with no waiters because we will wait
+        # for them to be validatable (i.e. SSHABLE) below. That way the
+        # server creations overlap each other and with create_port.
servers = [_create_validatable_server(), _create_validatable_server()]
+ port = self.ports_client.create_port(
+ network_id=network_id,
+ name=data_utils.rand_name(self.__class__.__name__))
+ port_id = port['port']['id']
+ self.addCleanup(self.ports_client.delete_port, port_id)
+
# add our cleanups for the servers since we bypassed the base class
for server in servers:
self.addCleanup(self.delete_server, server['id'])
@@ -332,7 +336,9 @@
# NOTE(mgoddard): Get detailed server to ensure addresses are
# present in fixed IP case.
server = self.servers_client.show_server(server['id'])['server']
- self._wait_for_validation(server, validation_resources)
+ compute.wait_for_ssh_or_ping(server, self.os_primary, network,
+ True, validation_resources,
+ 'SSHABLE', True)
# attach the port to the server
iface = self.interfaces_client.create_interface(
server['id'], port_id=port_id)['interfaceAttachment']
diff --git a/tempest/api/compute/servers/test_create_server_multi_nic.py b/tempest/api/compute/servers/test_create_server_multi_nic.py
index bd3f58d..6ec058d 100644
--- a/tempest/api/compute/servers/test_create_server_multi_nic.py
+++ b/tempest/api/compute/servers/test_create_server_multi_nic.py
@@ -23,6 +23,28 @@
CONF = config.CONF
+def get_subnets(count=2):
+    """Returns a list of the requested subnets from the project_network_cidr block.
+
+ Args:
+ count (int): Number of blocks required.
+
+ Returns:
+ CIDRs as a list of strings
+ e.g. ['19.80.0.0/24', '19.86.0.0/24']
+ """
+ default_rtn = ['19.80.0.0/24', '19.86.0.0/24']
+ _net = netaddr.IPNetwork(CONF.network.project_network_cidr)
+
+ # Split the subnet into the requested number of smaller subnets.
+ sub_prefix_len = (32 - _net.prefixlen) // count
+ if sub_prefix_len < 1:
+ return default_rtn
+
+ _new_cidr = _net.prefixlen + sub_prefix_len
+ return [str(net) for _, net in zip(range(count), _net.subnet(_new_cidr))]
+
+
class ServersTestMultiNic(base.BaseV2ComputeTest):
"""Test multiple networks in servers"""
@@ -65,8 +87,9 @@
The networks order given at the server creation is preserved within
the server.
"""
- net1 = self._create_net_subnet_ret_net_from_cidr('19.80.0.0/24')
- net2 = self._create_net_subnet_ret_net_from_cidr('19.86.0.0/24')
+ _cidrs = get_subnets()
+ net1 = self._create_net_subnet_ret_net_from_cidr(_cidrs[0])
+ net2 = self._create_net_subnet_ret_net_from_cidr(_cidrs[1])
networks = [{'uuid': net1['network']['id']},
{'uuid': net2['network']['id']}]
@@ -86,14 +109,12 @@
['addresses'])
# We can't predict the ip addresses assigned to the server on networks.
- # Sometimes the assigned addresses are ['19.80.0.2', '19.86.0.2'], at
- # other times ['19.80.0.3', '19.86.0.3']. So we check if the first
- # address is in first network, similarly second address is in second
- # network.
+        # So we check that the first address is in the first network and,
+        # similarly, that the second address is in the second network.
addr = [addresses[net1['network']['name']][0]['addr'],
addresses[net2['network']['name']][0]['addr']]
- networks = [netaddr.IPNetwork('19.80.0.0/24'),
- netaddr.IPNetwork('19.86.0.0/24')]
+ networks = [netaddr.IPNetwork(_cidrs[0]),
+ netaddr.IPNetwork(_cidrs[1])]
for address, network in zip(addr, networks):
self.assertIn(address, network)
@@ -107,8 +128,9 @@
"""
# Verify that server creation does not fail when more than one nic
# is created on the same network.
- net1 = self._create_net_subnet_ret_net_from_cidr('19.80.0.0/24')
- net2 = self._create_net_subnet_ret_net_from_cidr('19.86.0.0/24')
+ _cidrs = get_subnets()
+ net1 = self._create_net_subnet_ret_net_from_cidr(_cidrs[0])
+ net2 = self._create_net_subnet_ret_net_from_cidr(_cidrs[1])
networks = [{'uuid': net1['network']['id']},
{'uuid': net2['network']['id']},
@@ -124,8 +146,8 @@
addr = [addresses[net1['network']['name']][0]['addr'],
addresses[net2['network']['name']][0]['addr'],
addresses[net1['network']['name']][1]['addr']]
- networks = [netaddr.IPNetwork('19.80.0.0/24'),
- netaddr.IPNetwork('19.86.0.0/24'),
- netaddr.IPNetwork('19.80.0.0/24')]
+ networks = [netaddr.IPNetwork(_cidrs[0]),
+ netaddr.IPNetwork(_cidrs[1]),
+ netaddr.IPNetwork(_cidrs[0])]
for address, network in zip(addr, networks):
self.assertIn(address, network)
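The get_subnets() helper above derives the two CIDRs from CONF.network.project_network_cidr instead of hard-coding 19.80.0.0/24 and 19.86.0.0/24, so the test keeps working on deployments that override the project network range. A standalone sketch of the same netaddr arithmetic (the 10.1.0.0/20 value is purely illustrative):

    import netaddr

    # Assume the configured project network CIDR is 10.1.0.0/20.
    net = netaddr.IPNetwork('10.1.0.0/20')
    count = 2

    # Same arithmetic as get_subnets(): widen the prefix so the block
    # splits into at least `count` smaller subnets.
    sub_prefix_len = (32 - net.prefixlen) // count   # (32 - 20) // 2 == 6
    new_prefix = net.prefixlen + sub_prefix_len      # 20 + 6 == 26

    cidrs = [str(s) for _, s in zip(range(count), net.subnet(new_prefix))]
    print(cidrs)  # ['10.1.0.0/26', '10.1.0.64/26']
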
diff --git a/tempest/api/compute/servers/test_multiple_create.py b/tempest/api/compute/servers/test_multiple_create.py
index 10c76bb..b464c45 100644
--- a/tempest/api/compute/servers/test_multiple_create.py
+++ b/tempest/api/compute/servers/test_multiple_create.py
@@ -15,6 +15,7 @@
from tempest.api.compute import base
from tempest.common import compute
+from tempest.common import waiters
from tempest.lib import decorators
@@ -34,8 +35,15 @@
wait_until='ACTIVE',
min_count=2,
tenant_network=tenant_network)
+
+ for server in servers:
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client,
+ server['id'])
+
for server in servers:
self.addCleanup(self.servers_client.delete_server, server['id'])
+
# NOTE(maurosr): do status response check and also make sure that
# reservation_id is not in the response body when the request send
# contains return_reservation_id=False
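The hunk above registers a wait_for_server_termination cleanup for each server before the delete_server cleanup. Since unittest runs cleanups in LIFO order, the delete request is issued first at teardown and the waiter then blocks until the server is really gone, so the servers cannot leak past the test. A self-contained sketch of that ordering, using plain unittest (no Tempest required):

    import unittest


    class CleanupOrderExample(unittest.TestCase):
        def test_lifo_cleanups(self):
            events = []
            # Registered first, so it runs *last* at cleanup time.
            self.addCleanup(events.append, 'wait_for_server_termination')
            # Registered second, so it runs *first* at cleanup time.
            self.addCleanup(events.append, 'delete_server')
            # The runner normally calls doCleanups() after tearDown; calling
            # it here just makes the ordering visible inside the test body.
            self.doCleanups()
            self.assertEqual(
                ['delete_server', 'wait_for_server_termination'], events)


    if __name__ == '__main__':
        unittest.main()
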
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index a69dbb3..a181839 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -34,13 +34,13 @@
LOG = logging.getLogger(__name__)
-class ServerActionsTestJSON(base.BaseV2ComputeTest):
+class ServerActionsBase(base.BaseV2ComputeTest):
"""Test server actions"""
def setUp(self):
# NOTE(afazekas): Normally we use the same server with all test cases,
# but if it has an issue, we build a new one
- super(ServerActionsTestJSON, self).setUp()
+ super().setUp()
# Check if the server is in a clean state after test
try:
self.validation_resources = self.get_class_validation_resources(
@@ -73,7 +73,7 @@
self.server_id, validatable=True, wait_until='SSHABLE')
def tearDown(self):
- super(ServerActionsTestJSON, self).tearDown()
+ super(ServerActionsBase, self).tearDown()
# NOTE(zhufl): Because server_check_teardown will raise Exception
# which will prevent other cleanup steps from being executed, so
# server_check_teardown should be called after super's tearDown.
@@ -82,51 +82,19 @@
@classmethod
def setup_credentials(cls):
cls.prepare_instance_network()
- super(ServerActionsTestJSON, cls).setup_credentials()
+ super(ServerActionsBase, cls).setup_credentials()
@classmethod
def setup_clients(cls):
- super(ServerActionsTestJSON, cls).setup_clients()
+ super(ServerActionsBase, cls).setup_clients()
cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
- super(ServerActionsTestJSON, cls).resource_setup()
+ super(ServerActionsBase, cls).resource_setup()
cls.server_id = cls.recreate_server(None, validatable=True,
wait_until='SSHABLE')
- @decorators.idempotent_id('6158df09-4b82-4ab3-af6d-29cf36af858d')
- @testtools.skipUnless(CONF.compute_feature_enabled.change_password,
- 'Change password not available.')
- def test_change_server_password(self):
- """Test changing server's password
-
- The server's password should be set to the provided password and
- the user can authenticate with the new password.
- """
- # Since this test messes with the password and makes the
- # server unreachable, it should create its own server
- newserver = self.create_test_server(
- validatable=True,
- validation_resources=self.validation_resources,
- wait_until='ACTIVE')
- self.addCleanup(self.delete_server, newserver['id'])
- # The server's password should be set to the provided password
- new_password = 'Newpass1234'
- self.client.change_password(newserver['id'], adminPass=new_password)
- waiters.wait_for_server_status(self.client, newserver['id'], 'ACTIVE')
-
- if CONF.validation.run_validation:
- # Verify that the user can authenticate with the new password
- server = self.client.show_server(newserver['id'])['server']
- linux_client = remote_client.RemoteClient(
- self.get_server_ip(server, self.validation_resources),
- self.ssh_user,
- new_password,
- server=server,
- servers_client=self.client)
- linux_client.validate_authentication()
-
def _test_reboot_server(self, reboot_type):
if CONF.validation.run_validation:
# Get the time the server was last rebooted,
@@ -159,68 +127,23 @@
self.assertGreater(new_boot_time, boot_time,
'%s > %s' % (new_boot_time, boot_time))
- @decorators.attr(type='smoke')
- @decorators.idempotent_id('2cb1baf6-ac8d-4429-bf0d-ba8a0ba53e32')
- def test_reboot_server_hard(self):
- """Test hard rebooting server
-
- The server should be power cycled.
- """
- self._test_reboot_server('HARD')
-
- @decorators.idempotent_id('1d1c9104-1b0a-11e7-a3d4-fa163e65f5ce')
- def test_remove_server_all_security_groups(self):
- """Test removing all security groups from server"""
- server = self.create_test_server(wait_until='ACTIVE')
-
- # Remove all Security group
- self.client.remove_security_group(
- server['id'], name=server['security_groups'][0]['name'])
-
- # Verify all Security group
- server = self.client.show_server(server['id'])['server']
- self.assertNotIn('security_groups', server)
-
- def _rebuild_server_and_check(self, image_ref, server):
- rebuilt_server = (self.client.rebuild_server(server['id'], image_ref)
- ['server'])
- if CONF.validation.run_validation:
- tenant_network = self.get_tenant_network()
- compute.wait_for_ssh_or_ping(
- server, self.os_primary, tenant_network,
- True, self.validation_resources, "SSHABLE", True)
- else:
- waiters.wait_for_server_status(self.client, self.server['id'],
- 'ACTIVE')
-
- msg = ('Server was not rebuilt to the original image. '
- 'The original image: {0}. The current image: {1}'
- .format(image_ref, rebuilt_server['image']['id']))
- self.assertEqual(image_ref, rebuilt_server['image']['id'], msg)
-
- def _test_rebuild_server(self):
+ def _test_rebuild_server(self, server_id):
# Get the IPs the server has before rebuilding it
- original_addresses = (self.client.show_server(self.server_id)['server']
+ original_addresses = (self.client.show_server(server_id)['server']
['addresses'])
# The server should be rebuilt using the provided image and data
meta = {'rebuild': 'server'}
new_name = data_utils.rand_name(self.__class__.__name__ + '-server')
password = 'rebuildPassw0rd'
rebuilt_server = self.client.rebuild_server(
- self.server_id,
+ server_id,
self.image_ref_alt,
name=new_name,
metadata=meta,
adminPass=password)['server']
- # If the server was rebuilt on a different image, restore it to the
- # original image once the test ends
- if self.image_ref_alt != self.image_ref:
- self.addCleanup(self._rebuild_server_and_check, self.image_ref,
- rebuilt_server)
-
# Verify the properties in the initial response are correct
- self.assertEqual(self.server_id, rebuilt_server['id'])
+ self.assertEqual(server_id, rebuilt_server['id'])
rebuilt_image_id = rebuilt_server['image']['id']
self.assertTrue(self.image_ref_alt.endswith(rebuilt_image_id))
self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
@@ -250,86 +173,6 @@
servers_client=self.client)
linux_client.validate_authentication()
- @decorators.idempotent_id('aaa6cdf3-55a7-461a-add9-1c8596b9a07c')
- def test_rebuild_server(self):
- """Test rebuilding server
-
- The server should be rebuilt using the provided image and data.
- """
- self._test_rebuild_server()
-
- @decorators.idempotent_id('30449a88-5aff-4f9b-9866-6ee9b17f906d')
- def test_rebuild_server_in_stop_state(self):
- """Test rebuilding server in stop state
-
- The server in stop state should be rebuilt using the provided
- image and remain in SHUTOFF state.
- """
- server = self.client.show_server(self.server_id)['server']
- old_image = server['image']['id']
- new_image = (self.image_ref_alt
- if old_image == self.image_ref else self.image_ref)
- self.client.stop_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
- rebuilt_server = (self.client.rebuild_server(self.server_id, new_image)
- ['server'])
- # If the server was rebuilt on a different image, restore it to the
- # original image once the test ends
- if self.image_ref_alt != self.image_ref:
- self.addCleanup(self._rebuild_server_and_check, old_image, server)
-
- # Verify the properties in the initial response are correct
- self.assertEqual(self.server_id, rebuilt_server['id'])
- rebuilt_image_id = rebuilt_server['image']['id']
- self.assertEqual(new_image, rebuilt_image_id)
- self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
-
- # Verify the server properties after the rebuild completes
- waiters.wait_for_server_status(self.client,
- rebuilt_server['id'], 'SHUTOFF')
- server = self.client.show_server(rebuilt_server['id'])['server']
- rebuilt_image_id = server['image']['id']
- self.assertEqual(new_image, rebuilt_image_id)
-
- self.client.start_server(self.server_id)
-
- # NOTE(mriedem): Marked as slow because while rebuild and volume-backed is
- # common, we don't actually change the image (you can't with volume-backed
- # rebuild) so this isn't testing much outside normal rebuild
- # (and it's slow).
- @decorators.attr(type='slow')
- @decorators.idempotent_id('b68bd8d6-855d-4212-b59b-2e704044dace')
- @utils.services('volume')
- def test_rebuild_server_with_volume_attached(self):
- """Test rebuilding server with volume attached
-
- The volume should be attached to the instance after rebuild.
- """
- # create a new volume and attach it to the server
- volume = self.create_volume()
-
- server = self.client.show_server(self.server_id)['server']
- self.attach_volume(server, volume)
-
- # run general rebuild test
- self._test_rebuild_server()
-
- # make sure the volume is attached to the instance after rebuild
- vol_after_rebuild = self.volumes_client.show_volume(volume['id'])
- vol_after_rebuild = vol_after_rebuild['volume']
- self.assertEqual('in-use', vol_after_rebuild['status'])
- self.assertEqual(self.server_id,
- vol_after_rebuild['attachments'][0]['server_id'])
- if CONF.validation.run_validation:
- linux_client = remote_client.RemoteClient(
- self.get_server_ip(server, self.validation_resources),
- self.ssh_alt_user,
- password=None,
- pkey=self.validation_resources['keypair']['private_key'],
- server=server,
- servers_client=self.client)
- linux_client.validate_authentication()
-
def _test_resize_server_confirm(self, server_id, stop=False):
# The server's RAM and disk space should be modified to that of
# the provided flavor
@@ -358,6 +201,82 @@
# NOTE(mriedem): tearDown requires the server to be started.
self.client.start_server(server_id)
+ def _get_output(self, server_id):
+ output = self.client.get_console_output(
+ server_id, length=3)['output']
+ self.assertTrue(output, "Console output was empty.")
+ lines = len(output.split('\n'))
+ self.assertEqual(lines, 3)
+
+ def _validate_url(self, url):
+ valid_scheme = ['http', 'https']
+ parsed_url = urlparse.urlparse(url)
+ self.assertNotEqual('None', parsed_url.port)
+ self.assertNotEqual('None', parsed_url.hostname)
+ self.assertIn(parsed_url.scheme, valid_scheme)
+
+
+class ServerActionsTestJSON(ServerActionsBase):
+ @decorators.idempotent_id('6158df09-4b82-4ab3-af6d-29cf36af858d')
+ @testtools.skipUnless(CONF.compute_feature_enabled.change_password,
+ 'Change password not available.')
+ def test_change_server_password(self):
+ """Test changing server's password
+
+ The server's password should be set to the provided password and
+ the user can authenticate with the new password.
+ """
+ # Since this test messes with the password and makes the
+ # server unreachable, it should create its own server
+ newserver = self.create_test_server(
+ validatable=True,
+ validation_resources=self.validation_resources,
+ wait_until='ACTIVE')
+ self.addCleanup(self.delete_server, newserver['id'])
+ # The server's password should be set to the provided password
+ new_password = 'Newpass1234'
+ self.client.change_password(newserver['id'], adminPass=new_password)
+ waiters.wait_for_server_status(self.client, newserver['id'], 'ACTIVE')
+
+ if CONF.validation.run_validation:
+ # Verify that the user can authenticate with the new password
+ server = self.client.show_server(newserver['id'])['server']
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server, self.validation_resources),
+ self.ssh_user,
+ new_password,
+ server=server,
+ servers_client=self.client)
+ linux_client.validate_authentication()
+
+ @decorators.attr(type='smoke')
+ @decorators.idempotent_id('2cb1baf6-ac8d-4429-bf0d-ba8a0ba53e32')
+ def test_reboot_server_hard(self):
+ """Test hard rebooting server
+
+ The server should be power cycled.
+ """
+ self._test_reboot_server('HARD')
+
+ @decorators.idempotent_id('aaa6cdf3-55a7-461a-add9-1c8596b9a07c')
+ def test_rebuild_server(self):
+ """Test rebuilding server
+
+ The server should be rebuilt using the provided image and data.
+ """
+ tenant_network = self.get_tenant_network()
+ _, servers = compute.create_test_server(
+ self.os_primary,
+ wait_until='ACTIVE',
+ tenant_network=tenant_network)
+ server = servers[0]
+
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.client, server['id'])
+ self.addCleanup(self.client.delete_server, server['id'])
+
+ self._test_rebuild_server(server_id=server['id'])
+
@decorators.idempotent_id('1499262a-9328-4eda-9068-db1ac57498d2')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@@ -365,6 +284,194 @@
"""Test resizing server and then confirming"""
self._test_resize_server_confirm(self.server_id, stop=False)
+ @decorators.idempotent_id('c03aab19-adb1-44f5-917d-c419577e9e68')
+ @testtools.skipUnless(CONF.compute_feature_enabled.resize,
+ 'Resize not available.')
+ def test_resize_server_revert(self):
+ """Test resizing server and then reverting
+
+ The server's RAM and disk space should return to its original
+ values after a resize is reverted.
+ """
+
+ self.client.resize_server(self.server_id, self.flavor_ref_alt)
+ # NOTE(zhufl): Explicitly delete the server to get a new one for later
+ # tests. Avoids resize down race issues.
+ self.addCleanup(self.delete_server, self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id,
+ 'VERIFY_RESIZE')
+
+ self.client.revert_resize_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+
+ server = self.client.show_server(self.server_id)['server']
+ self.assert_flavor_equal(self.flavor_ref, server['flavor'])
+
+ @decorators.idempotent_id('4b8867e6-fffa-4d54-b1d1-6fdda57be2f3')
+ @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
+ 'Console output not supported.')
+ def test_get_console_output(self):
+ """Test getting console output for a server
+
+ Should be able to GET the console output for a given server_id and
+ number of lines.
+ """
+
+        # This reboot is necessary to produce some console log output after
+        # creating an instance backup. When an instance backup is created,
+        # the console log file is truncated and we cannot get any console
+        # log through the "console-log" API.
+ # The detail is https://bugs.launchpad.net/nova/+bug/1251920
+ self.reboot_server(self.server_id, type='HARD')
+ self.wait_for(self._get_output, self.server_id)
+
+ @decorators.idempotent_id('bd61a9fd-062f-4670-972b-2d6c3e3b9e73')
+ @testtools.skipUnless(CONF.compute_feature_enabled.pause,
+ 'Pause is not available.')
+ def test_pause_unpause_server(self):
+ """Test pausing and unpausing server"""
+ self.client.pause_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'PAUSED')
+ self.client.unpause_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+
+ @decorators.idempotent_id('0d8ee21e-b749-462d-83da-b85b41c86c7f')
+ @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
+ 'Suspend is not available.')
+ def test_suspend_resume_server(self):
+ """Test suspending and resuming server"""
+ self.client.suspend_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id,
+ 'SUSPENDED')
+ self.client.resume_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+
+ @decorators.idempotent_id('af8eafd4-38a7-4a4b-bdbc-75145a580560')
+ def test_stop_start_server(self):
+ """Test stopping and starting server"""
+ self.client.stop_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
+ self.client.start_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+
+ @decorators.idempotent_id('80a8094c-211e-440a-ab88-9e59d556c7ee')
+ def test_lock_unlock_server(self):
+ """Test locking and unlocking server
+
+        Lock the server; trying to stop it will then fail because a locked
+        server is not allowed to be stopped by a non-admin user.
+        Then unlock the server; it can now be stopped and started again.
+ """
+        # Lock the server, try to stop it (expecting an exception), then
+        # unlock it and retry the stop.
+ self.client.lock_server(self.server_id)
+ self.addCleanup(self.client.unlock_server, self.server_id)
+ server = self.client.show_server(self.server_id)['server']
+ self.assertEqual(server['status'], 'ACTIVE')
+ # Locked server is not allowed to be stopped by non-admin user
+ self.assertRaises(lib_exc.Conflict,
+ self.client.stop_server, self.server_id)
+ self.client.unlock_server(self.server_id)
+ self.client.stop_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
+ self.client.start_server(self.server_id)
+ waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+
+
+class ServerActionsTestOtherA(ServerActionsBase):
+ @decorators.idempotent_id('1d1c9104-1b0a-11e7-a3d4-fa163e65f5ce')
+ def test_remove_server_all_security_groups(self):
+ """Test removing all security groups from server"""
+ server = self.create_test_server(wait_until='ACTIVE')
+
+ # Remove all Security group
+ self.client.remove_security_group(
+ server['id'], name=server['security_groups'][0]['name'])
+
+ # Verify all Security group
+ server = self.client.show_server(server['id'])['server']
+ self.assertNotIn('security_groups', server)
+
+ @decorators.idempotent_id('30449a88-5aff-4f9b-9866-6ee9b17f906d')
+ def test_rebuild_server_in_stop_state(self):
+ """Test rebuilding server in stop state
+
+ The server in stop state should be rebuilt using the provided
+ image and remain in SHUTOFF state.
+ """
+ tenant_network = self.get_tenant_network()
+ _, servers = compute.create_test_server(
+ self.os_primary,
+ wait_until='ACTIVE',
+ tenant_network=tenant_network)
+ server = servers[0]
+
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.client, server['id'])
+ self.addCleanup(self.client.delete_server, server['id'])
+ server = self.client.show_server(server['id'])['server']
+ old_image = server['image']['id']
+ new_image = (self.image_ref_alt
+ if old_image == self.image_ref else self.image_ref)
+ self.client.stop_server(server['id'])
+ waiters.wait_for_server_status(self.client, server['id'], 'SHUTOFF')
+ rebuilt_server = (self.client.rebuild_server(server['id'], new_image)
+ ['server'])
+
+ # Verify the properties in the initial response are correct
+ self.assertEqual(server['id'], rebuilt_server['id'])
+ rebuilt_image_id = rebuilt_server['image']['id']
+ self.assertEqual(new_image, rebuilt_image_id)
+ self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
+
+ # Verify the server properties after the rebuild completes
+ waiters.wait_for_server_status(self.client,
+ rebuilt_server['id'], 'SHUTOFF')
+ server = self.client.show_server(rebuilt_server['id'])['server']
+ rebuilt_image_id = server['image']['id']
+ self.assertEqual(new_image, rebuilt_image_id)
+
+ # NOTE(mriedem): Marked as slow because while rebuild and volume-backed is
+ # common, we don't actually change the image (you can't with volume-backed
+ # rebuild) so this isn't testing much outside normal rebuild
+ # (and it's slow).
+ @decorators.attr(type='slow')
+ @decorators.idempotent_id('b68bd8d6-855d-4212-b59b-2e704044dace')
+ @utils.services('volume')
+ def test_rebuild_server_with_volume_attached(self):
+ """Test rebuilding server with volume attached
+
+ The volume should be attached to the instance after rebuild.
+ """
+ # create a new volume and attach it to the server
+ volume = self.create_volume(wait_for_available=False)
+ network = self.get_tenant_network()
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+ _, servers = compute.create_test_server(
+ self.os_primary, tenant_network=network,
+ validatable=True,
+ validation_resources=validation_resources,
+ wait_until='SSHABLE')
+ server = servers[0]
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.client, server['id'])
+ self.addCleanup(self.client.delete_server, server['id'])
+
+ server = self.client.show_server(server['id'])['server']
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ self.attach_volume(server, volume)
+
+ # run general rebuild test
+ self._test_rebuild_server(server_id=server['id'])
+
+ # make sure the volume is attached to the instance after rebuild
+ vol_after_rebuild = self.volumes_client.show_volume(volume['id'])
+ vol_after_rebuild = vol_after_rebuild['volume']
+ self.assertEqual('in-use', vol_after_rebuild['status'])
+ self.assertEqual(server['id'],
+ vol_after_rebuild['attachments'][0]['server_id'])
+
@decorators.idempotent_id('e6c28180-7454-4b59-b188-0257af08a63b')
@decorators.related_bug('1728603')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
@@ -402,6 +509,8 @@
servers_client=self.client)
linux_client.validate_authentication()
+
+class ServerActionsTestOtherB(ServerActionsBase):
@decorators.idempotent_id('138b131d-66df-48c9-a171-64f45eb92962')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@@ -409,29 +518,6 @@
"""Test resizing a stopped server and then confirming"""
self._test_resize_server_confirm(self.server_id, stop=True)
- @decorators.idempotent_id('c03aab19-adb1-44f5-917d-c419577e9e68')
- @testtools.skipUnless(CONF.compute_feature_enabled.resize,
- 'Resize not available.')
- def test_resize_server_revert(self):
- """Test resizing server and then reverting
-
- The server's RAM and disk space should return to its original
- values after a resize is reverted.
- """
-
- self.client.resize_server(self.server_id, self.flavor_ref_alt)
- # NOTE(zhufl): Explicitly delete the server to get a new one for later
- # tests. Avoids resize down race issues.
- self.addCleanup(self.delete_server, self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
- 'VERIFY_RESIZE')
-
- self.client.revert_resize_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
-
- server = self.client.show_server(self.server_id)['server']
- self.assert_flavor_equal(self.flavor_ref, server['flavor'])
-
@decorators.idempotent_id('fbbf075f-a812-4022-bc5c-ccb8047eef12')
@decorators.related_bug('1737599')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
@@ -483,17 +569,11 @@
# create the first and the second backup
- # Check if glance v1 is available to determine which client to use. We
- # prefer glance v1 for the compute API tests since the compute image
- # API proxy was written for glance v1.
- if CONF.image_feature_enabled.api_v1:
- glance_client = self.os_primary.image_client
- elif CONF.image_feature_enabled.api_v2:
+ if CONF.image_feature_enabled.api_v2:
glance_client = self.os_primary.image_client_v2
else:
raise lib_exc.InvalidConfiguration(
- 'Either api_v1 or api_v2 must be True in '
- '[image-feature-enabled].')
+ 'api_v2 must be True in [image-feature-enabled].')
backup1 = data_utils.rand_name('backup-1')
resp = self.client.create_backup(self.server_id,
@@ -549,16 +629,9 @@
'sort_key': 'created_at',
'sort_dir': 'asc'
}
- if CONF.image_feature_enabled.api_v1:
- for key, value in properties.items():
- params['property-%s' % key] = value
- image_list = glance_client.list_images(
- detail=True,
- **params)['images']
- else:
- # Additional properties are flattened in glance v2.
- params.update(properties)
- image_list = glance_client.list_images(params)['images']
+ # Additional properties are flattened in glance v2.
+ params.update(properties)
+ image_list = glance_client.list_images(params)['images']
self.assertEqual(2, len(image_list))
self.assertEqual((backup1, backup2),
@@ -582,11 +655,7 @@
waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
glance_client.wait_for_resource_deletion(image1_id)
oldest_backup_exist = False
- if CONF.image_feature_enabled.api_v1:
- image_list = glance_client.list_images(
- detail=True, **params)['images']
- else:
- image_list = glance_client.list_images(params)['images']
+ image_list = glance_client.list_images(params)['images']
self.assertEqual(2, len(image_list),
'Unexpected number of images for '
'v2:test_create_backup; was the oldest backup not '
@@ -595,31 +664,6 @@
self.assertEqual((backup2, backup3),
(image_list[0]['name'], image_list[1]['name']))
- def _get_output(self):
- output = self.client.get_console_output(
- self.server_id, length=3)['output']
- self.assertTrue(output, "Console output was empty.")
- lines = len(output.split('\n'))
- self.assertEqual(lines, 3)
-
- @decorators.idempotent_id('4b8867e6-fffa-4d54-b1d1-6fdda57be2f3')
- @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
- 'Console output not supported.')
- def test_get_console_output(self):
- """Test getting console output for a server
-
- Should be able to GET the console output for a given server_id and
- number of lines.
- """
-
- # This reboot is necessary for outputting some console log after
- # creating an instance backup. If an instance backup, the console
- # log file is truncated and we cannot get any console log through
- # "console-log" API.
- # The detail is https://bugs.launchpad.net/nova/+bug/1251920
- self.reboot_server(self.server_id, type='HARD')
- self.wait_for(self._get_output)
-
@decorators.idempotent_id('89104062-69d8-4b19-a71b-f47b7af093d7')
@testtools.skipUnless(CONF.compute_feature_enabled.console_output,
'Console output not supported.')
@@ -643,6 +687,7 @@
self.wait_for(_check_full_length_console_log)
+ @decorators.skip_because(bug='2028851')
@decorators.idempotent_id('5b65d4e7-4ecd-437c-83c0-d6b79d927568')
@testtools.skipUnless(CONF.compute_feature_enabled.console_output,
'Console output not supported.')
@@ -661,28 +706,7 @@
self.client.stop_server(temp_server_id)
waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
- self.wait_for(self._get_output)
-
- @decorators.idempotent_id('bd61a9fd-062f-4670-972b-2d6c3e3b9e73')
- @testtools.skipUnless(CONF.compute_feature_enabled.pause,
- 'Pause is not available.')
- def test_pause_unpause_server(self):
- """Test pausing and unpausing server"""
- self.client.pause_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'PAUSED')
- self.client.unpause_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
-
- @decorators.idempotent_id('0d8ee21e-b749-462d-83da-b85b41c86c7f')
- @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
- 'Suspend is not available.')
- def test_suspend_resume_server(self):
- """Test suspending and resuming server"""
- self.client.suspend_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
- 'SUSPENDED')
- self.client.resume_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.wait_for(self._get_output, temp_server_id)
@decorators.idempotent_id('77eba8e0-036e-4635-944b-f7a8f3b78dc9')
@testtools.skipUnless(CONF.compute_feature_enabled.shelve,
@@ -692,34 +716,26 @@
"""Test shelving and unshelving server"""
if CONF.image_feature_enabled.api_v2:
glance_client = self.os_primary.image_client_v2
- elif CONF.image_feature_enabled.api_v1:
- glance_client = self.os_primary.image_client
else:
raise lib_exc.InvalidConfiguration(
- 'Either api_v1 or api_v2 must be True in '
- '[image-feature-enabled].')
+ 'api_v2 must be True in [image-feature-enabled].')
compute.shelve_server(self.client, self.server_id,
force_shelve_offload=True)
- def _unshelve_server():
- server_info = self.client.show_server(self.server_id)['server']
- if 'SHELVED' in server_info['status']:
- self.client.unshelve_server(self.server_id)
- self.addCleanup(_unshelve_server)
-
server = self.client.show_server(self.server_id)['server']
image_name = server['name'] + '-shelved'
params = {'name': image_name}
- if CONF.image_feature_enabled.api_v2:
- images = glance_client.list_images(params)['images']
- elif CONF.image_feature_enabled.api_v1:
- images = glance_client.list_images(
- detail=True, **params)['images']
+ images = glance_client.list_images(params)['images']
self.assertEqual(1, len(images))
self.assertEqual(image_name, images[0]['name'])
- self.client.unshelve_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ body = self.client.unshelve_server(self.server_id)
+ waiters.wait_for_server_status(
+ self.client,
+ self.server_id,
+ "ACTIVE",
+ request_id=body.response["x-openstack-request-id"],
+ )
glance_client.wait_for_resource_deletion(images[0]['id'])
@decorators.idempotent_id('8cf9f450-a871-42cf-9bef-77eba189c0b0')
@@ -737,43 +753,6 @@
compute.shelve_server(self.client, server['id'],
force_shelve_offload=True)
- @decorators.idempotent_id('af8eafd4-38a7-4a4b-bdbc-75145a580560')
- def test_stop_start_server(self):
- """Test stopping and starting server"""
- self.client.stop_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
- self.client.start_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
-
- @decorators.idempotent_id('80a8094c-211e-440a-ab88-9e59d556c7ee')
- def test_lock_unlock_server(self):
- """Test locking and unlocking server
-
- Lock the server, and trying to stop it will fail because locked
- server is not allowed to be stopped by non-admin user.
- Then unlock the server, now the server can be stopped and started.
- """
- # Lock the server,try server stop(exceptions throw),unlock it and retry
- self.client.lock_server(self.server_id)
- self.addCleanup(self.client.unlock_server, self.server_id)
- server = self.client.show_server(self.server_id)['server']
- self.assertEqual(server['status'], 'ACTIVE')
- # Locked server is not allowed to be stopped by non-admin user
- self.assertRaises(lib_exc.Conflict,
- self.client.stop_server, self.server_id)
- self.client.unlock_server(self.server_id)
- self.client.stop_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
- self.client.start_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
-
- def _validate_url(self, url):
- valid_scheme = ['http', 'https']
- parsed_url = urlparse.urlparse(url)
- self.assertNotEqual('None', parsed_url.port)
- self.assertNotEqual('None', parsed_url.hostname)
- self.assertIn(parsed_url.scheme, valid_scheme)
-
@decorators.idempotent_id('c6bc11bf-592e-4015-9319-1c98dc64daf5')
@testtools.skipUnless(CONF.compute_feature_enabled.vnc_console,
'VNC Console feature is disabled.')
@@ -804,9 +783,15 @@
min_microversion = '2.47'
+ @classmethod
+ def skip_checks(cls):
+ if not CONF.service_available.glance:
+ skip_msg = ("%s skipped as glance is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+ super(ServersAaction247Test, cls).skip_checks()
+
@testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
'Snapshotting not available, backup not possible.')
- @utils.services('image')
@decorators.idempotent_id('252a4bdd-6366-4dae-9994-8c30aa660f23')
def test_create_backup(self):
server = self.create_test_server(wait_until='ACTIVE')
@@ -817,3 +802,114 @@
backup_type='daily',
rotation=2,
name=backup1)
+
+
+class ServerActionsV293TestJSON(base.BaseV2ComputeTest):
+
+ min_microversion = '2.93'
+ volume_min_microversion = '3.68'
+
+ @classmethod
+ def skip_checks(cls):
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
+ return super().skip_checks()
+
+ @classmethod
+ def setup_credentials(cls):
+ cls.prepare_instance_network()
+ super(ServerActionsV293TestJSON, cls).setup_credentials()
+
+ @classmethod
+ def resource_setup(cls):
+ super(ServerActionsV293TestJSON, cls).resource_setup()
+ cls.server_id = cls.recreate_server(None, volume_backed=True,
+ validatable=True)
+
+ @decorators.idempotent_id('6652dab9-ea24-4c93-ab5a-93d79c3041cf')
+ def test_rebuild_volume_backed_server(self):
+ """Test rebuilding a volume backed server"""
+ self.validation_resources = self.get_class_validation_resources(
+ self.os_primary)
+ server = self.servers_client.show_server(self.server_id)['server']
+ volume_id = server['os-extended-volumes:volumes_attached'][0]['id']
+ volume_before_rebuild = self.volumes_client.show_volume(volume_id)
+ image_before_rebuild = (
+ volume_before_rebuild['volume']
+ ['volume_image_metadata']['image_id'])
+        # Verify that the image inside the volume is our initial image
+        # before the rebuild.
+ self.assertEqual(self.image_ref, image_before_rebuild)
+
+ # Authentication is attempted in the following order of priority:
+ # 1.The key passed in, if one was passed in.
+ # 2.Any key we can find through an SSH agent (if allowed).
+ # 3.Any "id_rsa", "id_dsa" or "id_ecdsa" key discoverable in
+ # ~/.ssh/ (if allowed).
+ # 4.Plain username/password auth, if a password was given.
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server, self.validation_resources),
+ self.ssh_user,
+ password=None,
+ pkey=self.validation_resources['keypair']['private_key'],
+ server=server,
+ servers_client=self.servers_client)
+ output = linux_client.exec_command('touch test_file')
+ # No output means success
+ self.assertEqual('', output.strip())
+
+ # The server should be rebuilt using the provided image and data
+ meta = {'rebuild': 'server'}
+ new_name = data_utils.rand_name(self.__class__.__name__ + '-server')
+ password = 'rebuildPassw0rd'
+ rebuilt_server = self.servers_client.rebuild_server(
+ server['id'],
+ self.image_ref_alt,
+ name=new_name,
+ metadata=meta,
+ adminPass=password)['server']
+
+ # Verify the properties in the initial response are correct
+ self.assertEqual(server['id'], rebuilt_server['id'])
+ rebuilt_image_id = rebuilt_server['image']
+        # Since it is a volume-backed server, the image id will remain empty.
+ self.assertEqual('', rebuilt_image_id)
+ self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
+
+ # Verify the server properties after the rebuild completes
+ waiters.wait_for_server_status(self.servers_client,
+ rebuilt_server['id'], 'ACTIVE')
+ server = self.servers_client.show_server(
+ rebuilt_server['id'])['server']
+ volume_id = server['os-extended-volumes:volumes_attached'][0]['id']
+ volume_after_rebuild = self.volumes_client.show_volume(volume_id)
+ image_after_rebuild = (
+ volume_after_rebuild['volume']
+ ['volume_image_metadata']['image_id'])
+
+ self.assertEqual(new_name, server['name'])
+        # Verify the volume ID remains the same before and after the rebuild
+ self.assertEqual(volume_before_rebuild['volume']['id'],
+ volume_after_rebuild['volume']['id'])
+        # Verify that the image inside the volume is our final image
+        # after the rebuild.
+ self.assertEqual(self.image_ref_alt, image_after_rebuild)
+
+ # Authentication is attempted in the following order of priority:
+ # 1.The key passed in, if one was passed in.
+ # 2.Any key we can find through an SSH agent (if allowed).
+ # 3.Any "id_rsa", "id_dsa" or "id_ecdsa" key discoverable in
+ # ~/.ssh/ (if allowed).
+ # 4.Plain username/password auth, if a password was given.
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(rebuilt_server, self.validation_resources),
+ self.ssh_alt_user,
+ password,
+ self.validation_resources['keypair']['private_key'],
+ server=rebuilt_server,
+ servers_client=self.servers_client)
+ linux_client.validate_authentication()
+ e = self.assertRaises(lib_exc.SSHExecCommandFailed,
+ linux_client.exec_command,
+ 'cat test_file')
+ # If we rebuilt the boot volume, then we should not find
+ # the file we touched.
+ self.assertIn('No such file or directory', str(e))
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index 5a3f5d0..978a9da 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.compute import base
-from tempest.common import utils
from tempest.lib import decorators
@@ -35,7 +34,6 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('6eb718c0-02d9-4d5e-acd1-4e0c269cef39')
- @utils.services('network')
def test_list_server_addresses(self):
"""Test listing server address
@@ -52,7 +50,6 @@
@decorators.attr(type='smoke')
@decorators.idempotent_id('87bbc374-5538-4f64-b673-2b0e4443cc30')
- @utils.services('network')
def test_list_server_addresses_by_network(self):
"""Test listing server addresses filtered by network addresses
diff --git a/tempest/api/compute/servers/test_server_addresses_negative.py b/tempest/api/compute/servers/test_server_addresses_negative.py
index e7444d2..bb21594 100644
--- a/tempest/api/compute/servers/test_server_addresses_negative.py
+++ b/tempest/api/compute/servers/test_server_addresses_negative.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.compute import base
-from tempest.common import utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -35,7 +34,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('02c3f645-2d2e-4417-8525-68c0407d001b')
- @utils.services('network')
def test_list_server_addresses_invalid_server_id(self):
"""List addresses request should fail if server id not in system"""
self.assertRaises(lib_exc.NotFound, self.client.list_addresses,
@@ -43,7 +41,6 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('a2ab5144-78c0-4942-a0ed-cc8edccfd9ba')
- @utils.services('network')
def test_list_server_addresses_by_network_neg(self):
"""List addresses by network should fail if network name not valid"""
self.assertRaises(lib_exc.NotFound,
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index 716ecda..97c2774 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -239,13 +239,15 @@
# after unrescue the server. Due to that we need to make
# server SSHable before it try to detach, more details are
# in bug#1960346
+ volume = self.create_volume(wait_for_available=False)
validation_resources = self.get_class_validation_resources(
self.os_primary)
server, rescue_image_id = self._create_server_and_rescue_image(
hw_rescue_device='disk', hw_rescue_bus='virtio', validatable=True,
validation_resources=validation_resources, wait_until="SSHABLE")
server = self.servers_client.show_server(server['id'])['server']
- volume = self.create_volume()
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
self.attach_volume(server, volume)
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], 'in-use')
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index 1c839eb..388b9b0 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -28,10 +28,16 @@
"""Test servers API"""
create_default_network = True
+ credentials = ['primary', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(ServersTestJSON, cls).setup_clients()
cls.client = cls.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_client = cls.client
@decorators.idempotent_id('b92d5ec7-b1dd-44a2-87e4-45e888c46ef0')
@testtools.skipUnless(CONF.compute_feature_enabled.
@@ -64,9 +70,9 @@
id2 = server['id']
self.addCleanup(self.delete_server, id2)
self.assertNotEqual(id1, id2, "Did not create a new server")
- server = self.client.show_server(id1)['server']
+ server = self.reader_client.show_server(id1)['server']
name1 = server['name']
- server = self.client.show_server(id2)['server']
+ server = self.reader_client.show_server(id2)['server']
name2 = server['name']
self.assertEqual(name1, name2)
@@ -80,7 +86,7 @@
server = self.create_test_server(key_name=key_name,
wait_until='ACTIVE')
self.addCleanup(self.delete_server, server['id'])
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_client.show_server(server['id'])['server']
self.assertEqual(key_name, server['key_name'])
def _update_server_name(self, server_id, status, prefix_name='server'):
@@ -93,7 +99,7 @@
waiters.wait_for_server_status(self.client, server_id, status)
# Verify the name of the server has changed
- server = self.client.show_server(server_id)['server']
+ server = self.reader_client.show_server(server_id)['server']
self.assertEqual(new_name, server['name'])
return server
@@ -128,7 +134,7 @@
waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
# Verify the access addresses have been updated
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_client.show_server(server['id'])['server']
self.assertEqual('1.1.1.1', server['accessIPv4'])
self.assertEqual('::babe:202:202', server['accessIPv6'])
@@ -138,7 +144,7 @@
server = self.create_test_server(accessIPv6='2001:2001::3',
wait_until='ACTIVE')
self.addCleanup(self.delete_server, server['id'])
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_client.show_server(server['id'])['server']
self.assertEqual('2001:2001::3', server['accessIPv6'])
@decorators.related_bug('1730756')
@@ -169,12 +175,22 @@
# also. 2.47 APIs schema are on top of 2.9->2.19->2.26 schema so
# below tests cover all of the schema.
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServerShowV247Test, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_client = cls.servers_client
+
@decorators.idempotent_id('88b0bdb2-494c-11e7-a919-92ebcb67fe33')
def test_show_server(self):
"""Test getting server detail"""
server = self.create_test_server()
# All fields will be checked by API schema
- self.servers_client.show_server(server['id'])
+ self.reader_client.show_server(server['id'])
@decorators.idempotent_id('8de397c2-57d0-4b90-aa30-e5d668f21a8b')
def test_update_rebuild_list_server(self):
@@ -198,6 +214,16 @@
min_microversion = '2.63'
max_microversion = 'latest'
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServerShowV263Test, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_client = cls.servers_client
+
@testtools.skipUnless(CONF.compute.certified_image_ref,
'``[compute]/certified_image_ref`` required to test '
'image certificate validation.')
@@ -214,7 +240,7 @@
wait_until='ACTIVE')
# Check show API response schema
- self.servers_client.show_server(server['id'])['server']
+ self.reader_client.show_server(server['id'])['server']
# Check update API response schema
self.servers_client.update_server(server['id'])
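The three classes above share one pattern: they add 'project_reader' to the requested credentials and route read-only show_server calls through a reader client when [enforce_scope] nova is enabled, falling back to the primary servers client otherwise. A condensed sketch of that pattern (illustrative only; idempotent-id decorators and microversion attributes of a real Tempest test are omitted):

    from tempest.api.compute import base
    from tempest import config

    CONF = config.CONF


    class ReaderClientExample(base.BaseV2ComputeTest):
        """Illustrative sketch of the project_reader fallback pattern."""

        credentials = ['primary', 'project_reader']

        @classmethod
        def setup_clients(cls):
            super(ReaderClientExample, cls).setup_clients()
            if CONF.enforce_scope.nova:
                cls.reader_client = cls.os_project_reader.servers_client
            else:
                cls.reader_client = cls.servers_client

        def test_show_server_with_reader(self):
            server = self.create_test_server(wait_until='ACTIVE')
            self.addCleanup(self.delete_server, server['id'])
            # Read-only GET issued with the least-privileged credential.
            self.reader_client.show_server(server['id'])
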
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index 4f85048..bd383d3 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -508,10 +508,7 @@
server = self.client.show_server(self.server_id)['server']
image_name = server['name'] + '-shelved'
- if CONF.image_feature_enabled.api_v1:
- kwargs = {'name': image_name}
- else:
- kwargs = {'params': {'name': image_name}}
+ kwargs = {'params': {'name': image_name}}
images = self.images_client.list_images(**kwargs)['images']
self.assertEqual(1, len(images))
self.assertEqual(image_name, images[0]['name'])
diff --git a/tempest/api/compute/servers/test_virtual_interfaces.py b/tempest/api/compute/servers/test_virtual_interfaces.py
deleted file mode 100644
index b2e02c5..0000000
--- a/tempest/api/compute/servers/test_virtual_interfaces.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import netaddr
-import testtools
-
-from tempest.api.compute import base
-from tempest.common import utils
-from tempest import config
-from tempest.lib import decorators
-from tempest.lib import exceptions
-
-CONF = config.CONF
-
-
-# TODO(mriedem): Remove this test class once the nova queens branch goes into
-# extended maintenance mode.
-class VirtualInterfacesTestJSON(base.BaseV2ComputeTest):
- """Test virtual interfaces API with compute microversion less than 2.44"""
-
- max_microversion = '2.43'
-
- depends_on_nova_network = True
-
- create_default_network = True
-
- @classmethod
- def setup_clients(cls):
- super(VirtualInterfacesTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
-
- @classmethod
- def resource_setup(cls):
- super(VirtualInterfacesTestJSON, cls).resource_setup()
- cls.server = cls.create_test_server(wait_until='ACTIVE')
-
- @decorators.idempotent_id('96c4e2ef-5e4d-4d7f-87f5-fed6dca18016')
- @utils.services('network')
- def test_list_virtual_interfaces(self):
- """Test listing virtual interfaces of a server"""
- if CONF.service_available.neutron:
- with testtools.ExpectedException(exceptions.BadRequest):
- self.client.list_virtual_interfaces(self.server['id'])
- else:
- output = self.client.list_virtual_interfaces(self.server['id'])
- virt_ifaces = output['virtual_interfaces']
- self.assertNotEmpty(virt_ifaces,
- 'Expected virtual interfaces, got 0 '
- 'interfaces.')
- for virt_iface in virt_ifaces:
- mac_address = virt_iface['mac_address']
- self.assertTrue(netaddr.valid_mac(mac_address),
- "Invalid mac address detected. mac address: %s"
- % mac_address)
diff --git a/tempest/api/compute/servers/test_virtual_interfaces_negative.py b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
deleted file mode 100644
index 5667281..0000000
--- a/tempest/api/compute/servers/test_virtual_interfaces_negative.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.compute import base
-from tempest.common import utils
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-# TODO(mriedem): Remove this test class once the nova queens branch goes into
-# extended maintenance mode.
-class VirtualInterfacesNegativeTestJSON(base.BaseV2ComputeTest):
- """Negative tests of virtual interfaces API
-
- Negative tests of virtual interfaces API for compute microversion less
- than 2.44.
- """
-
- max_microversion = '2.43'
-
- depends_on_nova_network = True
-
- @classmethod
- def setup_credentials(cls):
- # For this test no network resources are needed
- cls.set_network_resources()
- super(VirtualInterfacesNegativeTestJSON, cls).setup_credentials()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('64ebd03c-1089-4306-93fa-60f5eb5c803c')
- @utils.services('network')
- def test_list_virtual_interfaces_invalid_server_id(self):
- """Test listing virtual interfaces of an invalid server should fail"""
- invalid_server_id = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound,
- self.servers_client.list_virtual_interfaces,
- invalid_server_id)
diff --git a/tempest/api/compute/test_tenant_networks.py b/tempest/api/compute/test_tenant_networks.py
index 17f4b80..da28b9b 100644
--- a/tempest/api/compute/test_tenant_networks.py
+++ b/tempest/api/compute/test_tenant_networks.py
@@ -14,8 +14,11 @@
from tempest.api.compute import base
from tempest.common import utils
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class ComputeTenantNetworksTest(base.BaseV2ComputeTest):
"""Test compute tenant networks API with microversion less than 2.36"""
@@ -23,6 +26,14 @@
max_microversion = '2.35'
@classmethod
+ def skip_checks(cls):
+ super(ComputeTenantNetworksTest, cls).skip_checks()
+ if not CONF.service_available.neutron:
+ skip_msg = (
+ "%s skipped as Neutron is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
def resource_setup(cls):
super(ComputeTenantNetworksTest, cls).resource_setup()
cls.client = cls.os_primary.tenant_networks_client
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 5380c67..7ea8f09 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -369,7 +369,9 @@
kwargs = {}
if bootable:
kwargs['image_ref'] = CONF.compute.image_ref
- return self.create_volume(multiattach=True, **kwargs)
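+ # Multiattach is now requested through a multiattach-capable volume
+ # type instead of the legacy multiattach flag.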
+ multiattach_vol_type = CONF.volume.volume_type_multiattach
+ return self.create_volume(volume_type=multiattach_vol_type,
+ **kwargs)
def _create_and_multiattach(self):
"""Creates two server instances and a volume and attaches to both.
diff --git a/tempest/api/identity/v3/test_access_rules.py b/tempest/api/identity/v3/test_access_rules.py
index 608eb59..64a6959 100644
--- a/tempest/api/identity/v3/test_access_rules.py
+++ b/tempest/api/identity/v3/test_access_rules.py
@@ -17,6 +17,7 @@
from tempest.api.identity import base
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -37,10 +38,6 @@
super(AccessRulesV3Test, cls).resource_setup()
cls.user_id = cls.os_primary.credentials.user_id
cls.project_id = cls.os_primary.credentials.project_id
-
- def setUp(self):
- super(AccessRulesV3Test, self).setUp()
- ac = self.non_admin_app_creds_client
access_rules = [
{
"path": "/v2.1/servers/*/ips",
@@ -48,11 +45,15 @@
"service": "compute"
}
]
- self.app_cred = ac.create_application_credential(
- self.user_id,
+ cls.ac = cls.non_admin_app_creds_client
+ cls.app_cred = cls.ac.create_application_credential(
+ cls.user_id,
name=data_utils.rand_name('application_credential'),
access_rules=access_rules
)['application_credential']
+ cls.addClassResourceCleanup(
+ cls.ac.delete_application_credential,
+ cls.user_id, cls.app_cred['id'])
@decorators.idempotent_id('2354c498-5119-4ba5-9f0d-44f16f78fb0e')
def test_list_access_rules(self):
@@ -67,18 +68,33 @@
@decorators.idempotent_id('278757e9-e193-4bf8-adf2-0b0a229a17d0')
def test_delete_access_rule(self):
- access_rule_id = self.app_cred['access_rules'][0]['id']
- app_cred_id = self.app_cred['id']
+ access_rules = [
+ {
+ "path": "/v2.1/servers/*/ips",
+ "method": "GET",
+ "service": "monitoring"
+ }
+ ]
+ app_cred = self.ac.create_application_credential(
+ self.user_id,
+ name=data_utils.rand_name('application_credential'),
+ access_rules=access_rules
+ )['application_credential']
+ self.addCleanup(
+ test_utils.call_and_ignore_notfound_exc,
+ self.ac.delete_application_credential,
+ self.user_id, app_cred['id'])
+ access_rule_id = app_cred['access_rules'][0]['id']
self.assertRaises(
lib_exc.Forbidden,
self.non_admin_access_rules_client.delete_access_rule,
self.user_id,
access_rule_id)
- self.non_admin_app_creds_client.delete_application_credential(
- self.user_id, app_cred_id)
+ self.ac.delete_application_credential(
+ self.user_id, app_cred['id'])
ar = self.non_admin_access_rules_client.list_access_rules(self.user_id)
- self.assertEqual(1, len(ar['access_rules']))
+ self.assertIn(access_rule_id, [x['id'] for x in ar['access_rules']])
self.non_admin_access_rules_client.delete_access_rule(
self.user_id, access_rule_id)
ar = self.non_admin_access_rules_client.list_access_rules(self.user_id)
- self.assertEqual(0, len(ar['access_rules']))
+ self.assertNotIn(access_rule_id, [x['id'] for x in ar['access_rules']])
diff --git a/tempest/api/identity/v3/test_users.py b/tempest/api/identity/v3/test_users.py
index dc6dd4a..53814ad 100644
--- a/tempest/api/identity/v3/test_users.py
+++ b/tempest/api/identity/v3/test_users.py
@@ -31,6 +31,12 @@
"""Test identity user password"""
@classmethod
+ def skip_checks(cls):
+ super(IdentityV3UsersTest, cls).skip_checks()
+ if not CONF.identity_feature_enabled.security_compliance:
+ raise cls.skipException("Security compliance not available.")
+
+ @classmethod
def resource_setup(cls):
super(IdentityV3UsersTest, cls).resource_setup()
cls.creds = cls.os_primary.credentials
@@ -77,13 +83,15 @@
time.sleep(1)
self.non_admin_users_client.auth_provider.set_auth()
- @testtools.skipUnless(CONF.identity_feature_enabled.security_compliance,
- 'Security compliance not available.')
@decorators.idempotent_id('ad71bd23-12ad-426b-bb8b-195d2b635f27')
@testtools.skipIf(CONF.identity_feature_enabled.immutable_user_source,
'Skipped because environment has an '
'immutable user source and solely '
'provides read-only access to users.')
+ @testtools.skipIf(CONF.identity.user_minimum_password_age > 0,
+ 'Skipped because password cannot '
+ 'be changed immediately, resulting '
+ 'in failed password update.')
def test_user_update_own_password(self):
"""Test updating user's own password"""
old_pass = self.creds.password
@@ -107,13 +115,15 @@
user_id=self.user_id,
password=old_pass)
- @testtools.skipUnless(CONF.identity_feature_enabled.security_compliance,
- 'Security compliance not available.')
@decorators.idempotent_id('941784ee-5342-4571-959b-b80dd2cea516')
@testtools.skipIf(CONF.identity_feature_enabled.immutable_user_source,
'Skipped because environment has an '
'immutable user source and solely '
'provides read-only access to users.')
+ @testtools.skipIf(CONF.identity.user_minimum_password_age > 0,
+ 'Skipped because password cannot '
+ 'be changed immediately, resulting '
+ 'in failed password update.')
def test_password_history_check_self_service_api(self):
"""Test checking password changing history"""
old_pass = self.creds.password
@@ -142,8 +152,6 @@
# A different password can be set
self._update_password(original_password=new_pass1, password=new_pass2)
- @testtools.skipUnless(CONF.identity_feature_enabled.security_compliance,
- 'Security compliance not available.')
@decorators.idempotent_id('a7ad8bbf-2cff-4520-8c1d-96332e151658')
def test_user_account_lockout(self):
"""Test locking out user account after failure attempts"""
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index 23e7fd8..7bae712 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -12,9 +12,8 @@
# License for the specific language governing permissions and limitations
# under the License.
-import io
+import time
-from tempest.common import image as common_image
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
@@ -22,6 +21,7 @@
import tempest.test
CONF = config.CONF
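+# Retry count for image update calls that can transiently fail with
+# BadRequest while glance fetches a remote image over HTTP.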
+BAD_REQUEST_RETRIES = 3
class BaseImageTest(tempest.test.BaseTestCase):
@@ -54,17 +54,7 @@
name = data_utils.rand_name(cls.__name__ + "-image")
kwargs['name'] = name
- params = cls._get_create_params(**kwargs)
- if data:
- # NOTE: On glance v1 API, the data should be passed on
- # a header. Then here handles the data separately.
- params['data'] = data
-
- image = cls.client.create_image(**params)
- # Image objects returned by the v1 client have the image
- # data inside a dict that is keyed against 'image'.
- if 'image' in image:
- image = image['image']
+ image = cls.client.create_image(**kwargs)
cls.created_images.append(image['id'])
cls.addClassResourceCleanup(cls.client.wait_for_resource_deletion,
image['id'])
@@ -72,54 +62,6 @@
cls.client.delete_image, image['id'])
return image
- @classmethod
- def _get_create_params(cls, **kwargs):
- return kwargs
-
-
-class BaseV1ImageTest(BaseImageTest):
-
- @classmethod
- def skip_checks(cls):
- super(BaseV1ImageTest, cls).skip_checks()
- if not CONF.image_feature_enabled.api_v1:
- msg = "Glance API v1 not supported"
- raise cls.skipException(msg)
-
- @classmethod
- def setup_clients(cls):
- super(BaseV1ImageTest, cls).setup_clients()
- cls.client = cls.os_primary.image_client
-
- @classmethod
- def _get_create_params(cls, **kwargs):
- return {'headers': common_image.image_meta_to_headers(**kwargs)}
-
-
-class BaseV1ImageMembersTest(BaseV1ImageTest):
-
- credentials = ['primary', 'alt']
-
- @classmethod
- def setup_clients(cls):
- super(BaseV1ImageMembersTest, cls).setup_clients()
- cls.image_member_client = cls.os_primary.image_member_client
- cls.alt_image_member_client = cls.os_alt.image_member_client
- cls.alt_img_cli = cls.os_alt.image_client
-
- @classmethod
- def resource_setup(cls):
- super(BaseV1ImageMembersTest, cls).resource_setup()
- cls.alt_tenant_id = cls.alt_image_member_client.tenant_id
-
- def _create_image(self):
- image_file = io.BytesIO(data_utils.random_bytes())
- image = self.create_image(container_format='bare',
- disk_format='raw',
- is_public=False,
- data=image_file)
- return image['id']
-
class BaseV2ImageTest(BaseImageTest):
@@ -159,6 +101,82 @@
pass
return stores
+ def _update_image_with_retries(self, image, patch):
+ # NOTE(danms): If glance was unable to fetch the remote image via
+ # HTTP, it will return BadRequest. Because this can be transient in
+ # CI, we try this a few times before we agree that it has failed
+ # for a reason worthy of failing the test.
+ for i in range(BAD_REQUEST_RETRIES):
+ try:
+ self.client.update_image(image, patch)
+ break
+ except exceptions.BadRequest:
+ if i + 1 == BAD_REQUEST_RETRIES:
+ raise
+ else:
+ time.sleep(1)
+
+ def check_set_location(self):
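+ """Add a single location to a fresh queued image and validate it."""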
+ image = self.client.create_image(container_format='bare',
+ disk_format='raw')
+
+ # Locations should be empty when there is no data
+ self.assertEqual('queued', image['status'])
+ self.assertEqual([], image['locations'])
+
+ # Add a new location
+ new_loc = {'metadata': {'foo': 'bar'},
+ 'url': CONF.image.http_image}
+ self._update_image_with_retries(image['id'], [
+ dict(add='/locations/-', value=new_loc)])
+
+ # The image should now be active, with one location that looks
+ # like we expect
+ image = self.client.show_image(image['id'])
+ self.assertEqual(1, len(image['locations']),
+ 'Image should have one location but has %i' % (
+ len(image['locations'])))
+ self.assertEqual(new_loc['url'], image['locations'][0]['url'])
+ self.assertEqual('bar', image['locations'][0]['metadata'].get('foo'))
+ if 'direct_url' in image:
+ self.assertEqual(image['direct_url'], image['locations'][0]['url'])
+
+ # If we added the location directly, the image goes straight
+ # to active and no hashing is done
+ self.assertEqual('active', image['status'])
+ self.assertIsNone(None, image['os_hash_algo'])
+ self.assertIsNone(None, image['os_hash_value'])
+
+ return image
+
+ def check_set_multiple_locations(self):
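+ """Add a second location to the image and validate both entries."""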
+ image = self.check_set_location()
+
+ new_loc = {'metadata': {'speed': '88mph'},
+ 'url': '%s#new' % CONF.image.http_image}
+ self._update_image_with_retries(image['id'],
+ [dict(add='/locations/-',
+ value=new_loc)])
+
+ # The image should now have two locations and the last one
+ # (locations are ordered) should have the new URL.
+ image = self.client.show_image(image['id'])
+ self.assertEqual(2, len(image['locations']),
+ 'Image should have two locations but has %i' % (
+ len(image['locations'])))
+ self.assertEqual(new_loc['url'], image['locations'][1]['url'])
+
+ # The image should still be active and still have no hashes
+ self.assertEqual('active', image['status'])
+ self.assertIsNone(None, image['os_hash_algo'])
+ self.assertIsNone(None, image['os_hash_value'])
+
+ # The direct_url should still match the first location
+ if 'direct_url' in image:
+ self.assertEqual(image['direct_url'], image['locations'][0]['url'])
+
+ return image
+
class BaseV2MemberImageTest(BaseV2ImageTest):
diff --git a/tempest/api/image/v1/test_image_members.py b/tempest/api/image/v1/test_image_members.py
deleted file mode 100644
index 5e2c8af..0000000
--- a/tempest/api/image/v1/test_image_members.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from tempest.api.image import base
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-class ImageMembersTest(base.BaseV1ImageMembersTest):
- """Test image members"""
-
- @decorators.idempotent_id('1d6ef640-3a20-4c84-8710-d95828fdb6ad')
- def test_add_image_member(self):
- """Test adding member for image"""
- image = self._create_image()
- self.image_member_client.create_image_member(image, self.alt_tenant_id)
- body = self.image_member_client.list_image_members(image)
- members = body['members']
- members = [member['member_id'] for member in members]
- self.assertIn(self.alt_tenant_id, members)
- # get image as alt user
- self.alt_img_cli.show_image(image)
-
- @decorators.idempotent_id('6a5328a5-80e8-4b82-bd32-6c061f128da9')
- def test_get_shared_images(self):
- """Test getting shared images"""
- image = self._create_image()
- self.image_member_client.create_image_member(image, self.alt_tenant_id)
- share_image = self._create_image()
- self.image_member_client.create_image_member(share_image,
- self.alt_tenant_id)
- body = self.image_member_client.list_shared_images(
- self.alt_tenant_id)
- images = body['shared_images']
- images = [img['image_id'] for img in images]
- self.assertIn(share_image, images)
- self.assertIn(image, images)
-
- @decorators.idempotent_id('a76a3191-8948-4b44-a9d6-4053e5f2b138')
- def test_remove_member(self):
- """Test removing member from image"""
- image_id = self._create_image()
- self.image_member_client.create_image_member(image_id,
- self.alt_tenant_id)
- self.image_member_client.delete_image_member(image_id,
- self.alt_tenant_id)
- body = self.image_member_client.list_image_members(image_id)
- members = body['members']
- self.assertEmpty(members)
- self.assertRaises(
- lib_exc.NotFound, self.alt_img_cli.show_image, image_id)
diff --git a/tempest/api/image/v1/test_image_members_negative.py b/tempest/api/image/v1/test_image_members_negative.py
deleted file mode 100644
index 4e3c27c..0000000
--- a/tempest/api/image/v1/test_image_members_negative.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.image import base
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-class ImageMembersNegativeTest(base.BaseV1ImageMembersTest):
- """Negative tests of image members"""
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('147a9536-18e3-45da-91ea-b037a028f364')
- def test_add_member_with_non_existing_image(self):
- """Add member with non existing image"""
- non_exist_image = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound,
- self.image_member_client.create_image_member,
- non_exist_image, self.alt_tenant_id)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('e1559f05-b667-4f1b-a7af-518b52dc0c0f')
- def test_delete_member_with_non_existing_image(self):
- """Delete member with non existing image"""
- non_exist_image = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound,
- self.image_member_client.delete_image_member,
- non_exist_image, self.alt_tenant_id)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('f5720333-dd69-4194-bb76-d2f048addd56')
- def test_delete_member_with_non_existing_tenant(self):
- """Delete member from image with non existing tenant"""
- image_id = self._create_image()
- non_exist_tenant = data_utils.rand_uuid_hex()
- self.assertRaises(lib_exc.NotFound,
- self.image_member_client.delete_image_member,
- image_id, non_exist_tenant)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('f25f89e4-0b6c-453b-a853-1f80b9d7ef26')
- def test_get_image_without_membership(self):
- """Get image without membership
-
- Image is hidden from another tenants.
- """
- image_id = self._create_image()
- self.assertRaises(lib_exc.NotFound,
- self.alt_img_cli.show_image,
- image_id)
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
deleted file mode 100644
index 6fd6c4e..0000000
--- a/tempest/api/image/v1/test_images.py
+++ /dev/null
@@ -1,341 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import io
-
-from tempest.api.image import base
-from tempest.common import image as common_image
-from tempest.common import waiters
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions
-
-CONF = config.CONF
-
-
-def get_container_and_disk_format():
- a_formats = ['ami', 'ari', 'aki']
-
- container_format = CONF.image.container_formats[0]
-
- # In v1, If container_format is one of ['ami', 'ari', 'aki'], then
- # disk_format must be same with container_format.
- # If they are of different item sequence in tempest.conf, such as:
- # container_formats = ami,ari,aki,bare
- # disk_formats = ari,ami,aki,vhd
- # we can select one in disk_format list that is same with container_format.
- if container_format in a_formats:
- if container_format in CONF.image.disk_formats:
- disk_format = container_format
- else:
- msg = ("The container format and the disk format don't match. "
- "Container format: %(container)s, Disk format: %(disk)s." %
- {'container': container_format, 'disk':
- CONF.image.disk_formats})
- raise exceptions.InvalidConfiguration(msg)
- else:
- disk_format = CONF.image.disk_formats[0]
-
- return container_format, disk_format
-
-
-class CreateRegisterImagesTest(base.BaseV1ImageTest):
- """Here we test the registration and creation of images."""
-
- @decorators.idempotent_id('3027f8e6-3492-4a11-8575-c3293017af4d')
- def test_register_then_upload(self):
- """Register, then upload an image"""
- properties = {'prop1': 'val1'}
- container_format, disk_format = get_container_and_disk_format()
- image = self.create_image(name='New Name',
- container_format=container_format,
- disk_format=disk_format,
- is_public=False,
- properties=properties)
- self.assertEqual('New Name', image.get('name'))
- self.assertFalse(image.get('is_public'))
- self.assertEqual('queued', image.get('status'))
- for key, val in properties.items():
- self.assertEqual(val, image.get('properties')[key])
-
- # Now try uploading an image file
- image_file = io.BytesIO(data_utils.random_bytes())
- body = self.client.update_image(image['id'], data=image_file)['image']
- self.assertIn('size', body)
- self.assertEqual(1024, body.get('size'))
-
- @decorators.idempotent_id('69da74d9-68a9-404b-9664-ff7164ccb0f5')
- def test_register_remote_image(self):
- """Register a new remote image"""
- container_format, disk_format = get_container_and_disk_format()
- body = self.create_image(name='New Remote Image',
- container_format=container_format,
- disk_format=disk_format, is_public=False,
- location=CONF.image.http_image,
- properties={'key1': 'value1',
- 'key2': 'value2'})
- self.assertEqual('New Remote Image', body.get('name'))
- self.assertFalse(body.get('is_public'))
- self.assertEqual('active', body.get('status'))
- properties = body.get('properties')
- self.assertEqual(properties['key1'], 'value1')
- self.assertEqual(properties['key2'], 'value2')
-
- @decorators.idempotent_id('6d0e13a7-515b-460c-b91f-9f4793f09816')
- def test_register_http_image(self):
- """Register a new image from an http image path url"""
- container_format, disk_format = get_container_and_disk_format()
- image = self.create_image(name='New Http Image',
- container_format=container_format,
- disk_format=disk_format, is_public=False,
- copy_from=CONF.image.http_image)
- self.assertEqual('New Http Image', image.get('name'))
- self.assertFalse(image.get('is_public'))
- waiters.wait_for_image_status(self.client, image['id'], 'active')
- self.client.show_image(image['id'])
-
- @decorators.idempotent_id('05b19d55-140c-40d0-b36b-fafd774d421b')
- def test_register_image_with_min_ram(self):
- """Register an image with min ram"""
- container_format, disk_format = get_container_and_disk_format()
- properties = {'prop1': 'val1'}
- body = self.create_image(name='New_image_with_min_ram',
- container_format=container_format,
- disk_format=disk_format,
- is_public=False,
- min_ram=40,
- properties=properties)
- self.assertEqual('New_image_with_min_ram', body.get('name'))
- self.assertFalse(body.get('is_public'))
- self.assertEqual('queued', body.get('status'))
- self.assertEqual(40, body.get('min_ram'))
- for key, val in properties.items():
- self.assertEqual(val, body.get('properties')[key])
- self.client.delete_image(body['id'])
-
-
-class ListImagesTest(base.BaseV1ImageTest):
- """Here we test the listing of image information"""
-
- @classmethod
- def skip_checks(cls):
- super(ListImagesTest, cls).skip_checks()
- if (len(CONF.image.container_formats) < 2 or
- len(CONF.image.disk_formats) < 2):
- skip_msg = ("%s skipped as multiple container formats "
- "or disk formats are not available." % cls.__name__)
- raise cls.skipException(skip_msg)
-
- @classmethod
- def resource_setup(cls):
- super(ListImagesTest, cls).resource_setup()
- # We add a few images here to test the listing functionality of
- # the images API
- a_formats = ['ami', 'ari', 'aki']
-
- (cls.container_format,
- container_format_alt) = CONF.image.container_formats[:2]
- cls.disk_format, cls.disk_format_alt = CONF.image.disk_formats[:2]
- if cls.container_format in a_formats:
- cls.disk_format = cls.container_format
- if container_format_alt in a_formats:
- cls.disk_format_alt = container_format_alt
-
- img1 = cls._create_remote_image('one', cls.container_format,
- cls.disk_format)
- img2 = cls._create_remote_image('two', container_format_alt,
- cls.disk_format_alt)
- img3 = cls._create_remote_image('dup', cls.container_format,
- cls.disk_format)
- img4 = cls._create_remote_image('dup', cls.container_format,
- cls.disk_format)
- img5 = cls._create_standard_image('1', container_format_alt,
- cls.disk_format_alt, 42)
- img6 = cls._create_standard_image('2', container_format_alt,
- cls.disk_format_alt, 142)
- img7 = cls._create_standard_image('33', cls.container_format,
- cls.disk_format, 142)
- img8 = cls._create_standard_image('33', cls.container_format,
- cls.disk_format, 142)
- cls.created_set = set(cls.created_images)
- # same container format
- cls.same_container_format_set = set((img1, img3, img4, img7, img8))
- # same disk format
- cls.same_disk_format_set = set((img2, img5, img6))
-
- # 1x with size 42
- cls.size42_set = set((img5,))
- # 3x with size 142
- cls.size142_set = set((img6, img7, img8,))
- # dup named
- cls.dup_set = set((img3, img4))
-
- @classmethod
- def _create_remote_image(cls, name, container_format, disk_format):
- """Create a new remote image and return newly-registered image-id"""
-
- name = 'New Remote Image %s' % name
- location = CONF.image.http_image
- image = cls.create_image(name=name,
- container_format=container_format,
- disk_format=disk_format,
- is_public=False,
- location=location)
- return image['id']
-
- @classmethod
- def _create_standard_image(cls, name, container_format,
- disk_format, size):
- """Create a new standard image and return newly-registered image-id
-
- Note that the size of the new image is a random number between
- 1024 and 4096
- """
- image_file = io.BytesIO(data_utils.random_bytes(size))
- name = 'New Standard Image %s' % name
- image = cls.create_image(name=name,
- container_format=container_format,
- disk_format=disk_format,
- is_public=False, data=image_file)
- return image['id']
-
- @decorators.idempotent_id('246178ab-3b33-4212-9a4b-a7fe8261794d')
- def test_index_no_params(self):
- """Simple test to see all fixture images returned"""
- images_list = self.client.list_images()['images']
- image_list = [image['id'] for image in images_list]
- for image_id in self.created_images:
- self.assertIn(image_id, image_list)
-
- @decorators.idempotent_id('f1755589-63d6-4468-b098-589820eb4031')
- def test_index_disk_format(self):
- """Test listing images by disk format"""
- images_list = self.client.list_images(
- disk_format=self.disk_format_alt)['images']
- for image in images_list:
- self.assertEqual(image['disk_format'], self.disk_format_alt)
- result_set = set(map(lambda x: x['id'], images_list))
- self.assertTrue(self.same_disk_format_set <= result_set)
- self.assertFalse(self.created_set - self.same_disk_format_set <=
- result_set)
-
- @decorators.idempotent_id('2143655d-96d9-4bec-9188-8674206b4b3b')
- def test_index_container_format(self):
- """Test listing images by container format"""
- images_list = self.client.list_images(
- container_format=self.container_format)['images']
- for image in images_list:
- self.assertEqual(image['container_format'], self.container_format)
- result_set = set(map(lambda x: x['id'], images_list))
- self.assertTrue(self.same_container_format_set <= result_set)
- self.assertFalse(self.created_set - self.same_container_format_set <=
- result_set)
-
- @decorators.idempotent_id('feb32ac6-22bb-4a16-afd8-9454bb714b14')
- def test_index_max_size(self):
- """Test listing images by max size"""
- images_list = self.client.list_images(size_max=42)['images']
- for image in images_list:
- self.assertLessEqual(image['size'], 42)
- result_set = set(map(lambda x: x['id'], images_list))
- self.assertTrue(self.size42_set <= result_set)
- self.assertFalse(self.created_set - self.size42_set <= result_set)
-
- @decorators.idempotent_id('6ffc16d0-4cbf-4401-95c8-4ac63eac34d8')
- def test_index_min_size(self):
- """Test listing images by min size"""
- images_list = self.client.list_images(size_min=142)['images']
- for image in images_list:
- self.assertGreaterEqual(image['size'], 142)
- result_set = set(map(lambda x: x['id'], images_list))
- self.assertTrue(self.size142_set <= result_set)
- self.assertFalse(self.size42_set <= result_set)
-
- @decorators.idempotent_id('e5dc26d9-9aa2-48dd-bda5-748e1445da98')
- def test_index_status_active_detail(self):
- """Test listing active images sorting by size in descending order"""
- images_list = self.client.list_images(detail=True,
- status='active',
- sort_key='size',
- sort_dir='desc')['images']
- top_size = images_list[0]['size'] # We have non-zero sized images
- for image in images_list:
- size = image['size']
- self.assertLessEqual(size, top_size)
- top_size = size
- self.assertEqual(image['status'], 'active')
-
- @decorators.idempotent_id('097af10a-bae8-4342-bff4-edf89969ed2a')
- def test_index_name(self):
- """Test listing images by its name"""
- images_list = self.client.list_images(
- detail=True,
- name='New Remote Image dup')['images']
- result_set = set(map(lambda x: x['id'], images_list))
- for image in images_list:
- self.assertEqual(image['name'], 'New Remote Image dup')
- self.assertTrue(self.dup_set <= result_set)
- self.assertFalse(self.created_set - self.dup_set <= result_set)
-
-
-class UpdateImageMetaTest(base.BaseV1ImageTest):
- """Test image metadata"""
-
- @classmethod
- def resource_setup(cls):
- super(UpdateImageMetaTest, cls).resource_setup()
- container_format, disk_format = get_container_and_disk_format()
- cls.image_id = cls._create_standard_image('1', container_format,
- disk_format, 42)
-
- @classmethod
- def _create_standard_image(cls, name, container_format,
- disk_format, size):
- """Create a new standard image and return newly-registered image-id"""
-
- image_file = io.BytesIO(data_utils.random_bytes(size))
- name = 'New Standard Image %s' % name
- image = cls.create_image(name=name,
- container_format=container_format,
- disk_format=disk_format,
- is_public=False, data=image_file,
- properties={'key1': 'value1'})
- return image['id']
-
- @decorators.idempotent_id('01752c1c-0275-4de3-9e5b-876e44541928')
- def test_list_image_metadata(self):
- """Test listing image metadata"""
- # All metadata key/value pairs for an image should be returned
- resp = self.client.check_image(self.image_id)
- resp_metadata = common_image.get_image_meta_from_headers(resp)
- expected = {'key1': 'value1'}
- self.assertEqual(expected, resp_metadata['properties'])
-
- @decorators.idempotent_id('d6d7649c-08ce-440d-9ea7-e3dda552f33c')
- def test_update_image_metadata(self):
- """Test updating image metadata"""
- # The metadata for the image should match the updated values
- req_metadata = {'key1': 'alt1', 'key2': 'value2'}
- resp = self.client.check_image(self.image_id)
- metadata = common_image.get_image_meta_from_headers(resp)
- self.assertEqual(metadata['properties'], {'key1': 'value1'})
- metadata['properties'].update(req_metadata)
- headers = common_image.image_meta_to_headers(
- properties=metadata['properties'])
- self.client.update_image(self.image_id, headers=headers)
- resp = self.client.check_image(self.image_id)
- resp_metadata = common_image.get_image_meta_from_headers(resp)
- self.assertEqual(req_metadata, resp_metadata['properties'])
diff --git a/tempest/api/image/v1/test_images_negative.py b/tempest/api/image/v1/test_images_negative.py
deleted file mode 100644
index 2af1288..0000000
--- a/tempest/api/image/v1/test_images_negative.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright 2013 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from tempest.api.image import base
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-class CreateDeleteImagesNegativeTest(base.BaseV1ImageTest):
- """Here are negative tests for the deletion and creation of images."""
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('036ede36-6160-4463-8c01-c781eee6369d')
- def test_register_with_invalid_container_format(self):
- """Create image with invalid container format
-
- Negative tests for invalid data supplied to POST /images
- """
- self.assertRaises(lib_exc.BadRequest, self.client.create_image,
- headers={'x-image-meta-name': 'test',
- 'x-image-meta-container_format': 'wrong',
- 'x-image-meta-disk_format': 'vhd'})
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('993face5-921d-4e84-aabf-c1bba4234a67')
- def test_register_with_invalid_disk_format(self):
- """Create image with invalid disk format"""
- self.assertRaises(lib_exc.BadRequest, self.client.create_image,
- headers={'x-image-meta-name': 'test',
- 'x-image-meta-container_format': 'bare',
- 'x-image-meta-disk_format': 'wrong'})
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('ec652588-7e3c-4b67-a2f2-0fa96f57c8fc')
- def test_delete_non_existent_image(self):
- """Return an error while trying to delete a non-existent image"""
-
- non_existent_image_id = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.delete_image,
- non_existent_image_id)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('04f72aa3-fcec-45a3-81a3-308ef7cc82bc')
- def test_delete_image_blank_id(self):
- """Return an error while trying to delete an image with blank Id"""
- self.assertRaises(lib_exc.NotFound, self.client.delete_image, '')
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('950e5054-a3c7-4dee-ada5-e576f1087abd')
- def test_delete_image_non_hex_string_id(self):
- """Return an error while trying to delete an image with non hex id"""
- invalid_image_id = data_utils.rand_uuid()[:-1] + "j"
- self.assertRaises(lib_exc.NotFound, self.client.delete_image,
- invalid_image_id)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('4ed757cd-450c-44b1-9fd1-c819748c650d')
- def test_delete_image_negative_image_id(self):
- """Return an error while trying to delete an image with negative id"""
- self.assertRaises(lib_exc.NotFound, self.client.delete_image, -1)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('a4a448ab-3db2-4d2d-b9b2-6a1271241dfe')
- def test_delete_image_id_over_character_limit(self):
- """Return an error while trying to delete image with id over limit"""
- overlimit_image_id = data_utils.rand_uuid() + "1"
- self.assertRaises(lib_exc.NotFound, self.client.delete_image,
- overlimit_image_id)
diff --git a/tempest/api/image/v2/admin/test_image_task.py b/tempest/api/image/v2/admin/test_image_task.py
new file mode 100644
index 0000000..9439e91
--- /dev/null
+++ b/tempest/api/image/v2/admin/test_image_task.py
@@ -0,0 +1,140 @@
+# Copyright 2023 Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.image import base
+from tempest.common import waiters
+from tempest import config
+from tempest.lib.common.utils import data_utils
+from tempest.lib import decorators
+
+CONF = config.CONF
+
+
+class ImageTaskCreate(base.BaseV2ImageAdminTest):
+ """Test image task operations"""
+
+ @classmethod
+ def skip_checks(cls):
+ # TODO(msava): Add an additional skip check once task conversion_format
+ # and the glance ceph backend become available in the tempest image
+ # service config options.
+ super(ImageTaskCreate, cls).skip_checks()
+ if not CONF.image.http_image:
+ skip_msg = ("%s skipped as http_image is not available " %
+ cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
+ def resource_setup(cls):
+ super(ImageTaskCreate, cls).resource_setup()
+
+ @staticmethod
+ def _prepare_image_tasks_param(type="import",
+ disk_format=['qcow2'],
+ image_from_format=['qcow2'],
+ image_location=CONF.image.http_image):
+ # TODO(msava): Add additional disk formats once task conversion_format
+ # and the glance Ceph backend are available in the image service options.
+ """Prepare image task params.
+ By default, will create task type 'import'
+
+ The same index is used for both params and creates a task
+ :param type Type of the task.
+ :param disk_format: Each format in the list is a different task.
+ :param image_from_format: Each format in the list is a different task.
+ :param image_location Location to import image from.
+ :return: A list with all task.
+ """
+ i = 0
+ tasks = list()
+ while i < len(disk_format):
+ image_name = data_utils.rand_name("task_image")
+ image_property = {"container_format": "bare",
+ "disk_format": disk_format[0],
+ "visibility": "public",
+ "name": image_name
+ }
+ task = {
+ "type": type,
+ "input": {
+ "image_properties": image_property,
+ "import_from_format": image_from_format[0],
+ "import_from": image_location
+ }
+ }
+ tasks.append(task)
+ i += 1
+ return tasks
+
+ def _verify_disk_format(self, task_body):
+ expected_disk_format = \
+ task_body['input']['image_properties']['disk_format']
+ image_id = task_body['result']['image_id']
+ observed_disk_format = self.admin_client.show_image(
+ image_id)['disk_format']
+ # If the glance backend storage is Ceph, glance will convert the
+ # image to raw format.
+ # TODO(msava): Change the next lines once task conversion_format and
+ # the glance ceph backend are available in the image service options.
+ if observed_disk_format == 'raw':
+ return
+ self.assertEqual(observed_disk_format, expected_disk_format,
+ message="Expected disk format not match ")
+
+ @decorators.skip_because(bug='2030527')
+ @decorators.idempotent_id('669d5387-0340-4abf-b62d-7cc89f539c8c')
+ def test_image_tasks_create(self):
+ """Test task type 'import' image """
+
+ # Prepare params for task type 'import'
+ tasks = self._prepare_image_tasks_param()
+
+ # Create task type 'import'
+ body = self.os_admin.tasks_client.create_task(**tasks[0])
+ task_id = body['id']
+ task_body = waiters.wait_for_tasks_status(self.os_admin.tasks_client,
+ task_id, 'success')
+ self.addCleanup(self.admin_client.delete_image,
+ task_body['result']['image_id'])
+ task_image_id = task_body['result']['image_id']
+ waiters.wait_for_image_status(self.client, task_image_id, 'active')
+ self._verify_disk_format(task_body)
+
+ # Verify disk format
+ image_body = self.client.show_image(task_image_id)
+ task_disk_format = \
+ task_body['input']['image_properties']['disk_format']
+ image_disk_format = image_body['disk_format']
+ self.assertEqual(
+ image_disk_format, task_disk_format,
+ message="Image Disc format %s not match to expected %s"
+ % (image_disk_format, task_disk_format))
+
+ @decorators.idempotent_id("ad6450c6-7060-4ee7-a2d1-41c2604b446c")
+ @decorators.attr(type=['negative'])
+ def test_task_create_fake_image_location(self):
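+ """Test that an import task from a nonexistent location fails"""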
+ http_fake_url = ''.join(
+ ["http://", data_utils.rand_name('dummy-img-file'), ".qcow2"])
+ task = self._prepare_image_tasks_param(
+ image_from_format=['qcow2'],
+ disk_format=['qcow2'],
+ image_location=http_fake_url)
+ body = self.os_admin.tasks_client.create_task(**task[0])
+ task_observed = \
+ waiters.wait_for_tasks_status(self.os_admin.tasks_client,
+ body['id'], 'failure')
+ task_observed = task_observed['status']
+ self.assertEqual(task_observed, 'failure')
diff --git a/tempest/api/image/v2/admin/test_images.py b/tempest/api/image/v2/admin/test_images.py
index 733c778..ce50c5d 100644
--- a/tempest/api/image/v2/admin/test_images.py
+++ b/tempest/api/image/v2/admin/test_images.py
@@ -20,6 +20,7 @@
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
CONF = config.CONF
@@ -59,6 +60,24 @@
self.assertNotEqual(created_image_info['owner'],
updated_image_info['owner'])
+ @decorators.idempotent_id('f6ab4aa0-035e-4664-9f2d-c57c6df50605')
+ def test_list_public_image(self):
+ """Test create image as admin and list public image as none admin"""
+ name = data_utils.rand_name(self.__class__.__name__ + '-Image')
+ image = self.admin_client.create_image(
+ name=name,
+ container_format='bare',
+ visibility='public',
+ disk_format='raw')
+ waiters.wait_for_image_status(self.admin_client, image['id'], 'queued')
+ created_image = self.admin_client.show_image(image['id'])
+ self.assertEqual(image['id'], created_image['id'])
+ self.addCleanup(self.admin_client.delete_image, image['id'])
+
+ images_list = self.client.list_images()['images']
+ fetched_images_id = [img['id'] for img in images_list]
+ self.assertIn(image['id'], fetched_images_id)
+
class ImportCopyImagesTest(base.BaseV2ImageAdminTest):
"""Test the import copy-image operations"""
@@ -120,3 +139,40 @@
self.assertEqual(0, len(failed_stores),
"Failed to copy the following stores: %s" %
str(failed_stores))
+
+
+class ImageLocationsAdminTest(base.BaseV2ImageAdminTest):
+
+ @classmethod
+ def skip_checks(cls):
+ super(ImageLocationsAdminTest, cls).skip_checks()
+ if not CONF.image_feature_enabled.manage_locations:
+ skip_msg = (
+ "%s skipped as show_multiple_locations is not available" % (
+ cls.__name__))
+ raise cls.skipException(skip_msg)
+
+ @decorators.idempotent_id('8a648de4-b745-4c28-a7b5-20de1c3da4d2')
+ def test_delete_locations(self):
+ image = self.check_set_multiple_locations()
+ expected_remaining_loc = image['locations'][1]
+
+ self.admin_client.update_image(image['id'], [
+ dict(remove='/locations/0')])
+
+ # The image should now have only the one location we did not delete
+ image = self.client.show_image(image['id'])
+ self.assertEqual(1, len(image['locations']),
+ 'Image should have one location but has %i' % (
+ len(image['locations'])))
+ self.assertEqual(expected_remaining_loc['url'],
+ image['locations'][0]['url'])
+
+ # The direct_url should now be the last remaining location
+ if 'direct_url' in image:
+ self.assertEqual(image['direct_url'], image['locations'][0]['url'])
+
+ # Removing the last location should be disallowed
+ self.assertRaises(lib_exc.Forbidden,
+ self.admin_client.update_image, image['id'], [
+ dict(remove='/locations/0')])
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index d590668..977ad82 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -22,6 +22,7 @@
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -733,6 +734,30 @@
body = self.schemas_client.show_schema(schema)
self.assertEqual("images", body['name'])
+ @decorators.idempotent_id('d43f3efc-da4c-4af9-b636-868f0c6acedb')
+ def test_list_hidden_image(self):
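+ """Test that a hidden image is not returned by the image list"""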
+ image = self.client.create_image(os_hidden=True)
+ image = image['image'] if 'image' in image else image
+ self.addCleanup(self.client.wait_for_resource_deletion, image['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.client.delete_image, image['id'])
+ images_list = self.client.list_images()['images']
+ fetched_images_id = [img['id'] for img in images_list]
+ self.assertNotIn(image['id'], fetched_images_id)
+
+ @decorators.idempotent_id('fdb96b81-257b-42ac-978b-ddeefa3760e4')
+ def test_list_update_hidden_image(self):
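+ """Test that an image stops being listed once it is hidden"""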
+ image = self.create_image()
+ images_list = self.client.list_images()['images']
+ fetched_images_id = [img['id'] for img in images_list]
+ self.assertIn(image['id'], fetched_images_id)
+
+ self.client.update_image(image['id'],
+ [dict(replace='/os_hidden', value=True)])
+ images_list = self.client.list_images()['images']
+ fetched_images_id = [img['id'] for img in images_list]
+ self.assertNotIn(image['id'], fetched_images_id)
+
class ListSharedImagesTest(base.BaseV2ImageTest):
"""Here we test the listing of a shared image information"""
@@ -806,73 +831,13 @@
return image
- def _check_set_location(self):
- image = self.client.create_image(container_format='bare',
- disk_format='raw')
-
- # Locations should be empty when there is no data
- self.assertEqual('queued', image['status'])
- self.assertEqual([], image['locations'])
-
- # Add a new location
- new_loc = {'metadata': {'foo': 'bar'},
- 'url': CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
-
- # The image should now be active, with one location that looks
- # like we expect
- image = self.client.show_image(image['id'])
- self.assertEqual(1, len(image['locations']),
- 'Image should have one location but has %i' % (
- len(image['locations'])))
- self.assertEqual(new_loc['url'], image['locations'][0]['url'])
- self.assertEqual('bar', image['locations'][0]['metadata'].get('foo'))
- if 'direct_url' in image:
- self.assertEqual(image['direct_url'], image['locations'][0]['url'])
-
- # If we added the location directly, the image goes straight
- # to active and no hashing is done
- self.assertEqual('active', image['status'])
- self.assertIsNone(None, image['os_hash_algo'])
- self.assertIsNone(None, image['os_hash_value'])
-
- return image
-
@decorators.idempotent_id('37599b8a-d5c0-4590-aee5-73878502be15')
def test_set_location(self):
- self._check_set_location()
-
- def _check_set_multiple_locations(self):
- image = self._check_set_location()
-
- new_loc = {'metadata': {'speed': '88mph'},
- 'url': '%s#new' % CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
-
- # The image should now have two locations and the last one
- # (locations are ordered) should have the new URL.
- image = self.client.show_image(image['id'])
- self.assertEqual(2, len(image['locations']),
- 'Image should have two locations but has %i' % (
- len(image['locations'])))
- self.assertEqual(new_loc['url'], image['locations'][1]['url'])
-
- # The image should still be active and still have no hashes
- self.assertEqual('active', image['status'])
- self.assertIsNone(None, image['os_hash_algo'])
- self.assertIsNone(None, image['os_hash_value'])
-
- # The direct_url should still match the first location
- if 'direct_url' in image:
- self.assertEqual(image['direct_url'], image['locations'][0]['url'])
-
- return image
+ self.check_set_location()
@decorators.idempotent_id('bf6e0009-c039-4884-b498-db074caadb10')
def test_replace_location(self):
- image = self._check_set_multiple_locations()
+ image = self.check_set_multiple_locations()
original_locs = image['locations']
# Replacing with the exact thing should work
@@ -909,31 +874,6 @@
len(image['locations'])))
self.assertEqual(original_locs, image['locations'])
- @decorators.idempotent_id('8a648de4-b745-4c28-a7b5-20de1c3da4d2')
- def test_delete_locations(self):
- image = self._check_set_multiple_locations()
- expected_remaining_loc = image['locations'][1]
-
- self.client.update_image(image['id'], [
- dict(remove='/locations/0')])
-
- # The image should now have only the one location we did not delete
- image = self.client.show_image(image['id'])
- self.assertEqual(1, len(image['locations']),
- 'Image should have one location but has %i' % (
- len(image['locations'])))
- self.assertEqual(expected_remaining_loc['url'],
- image['locations'][0]['url'])
-
- # The direct_url should now be the last remaining location
- if 'direct_url' in image:
- self.assertEqual(image['direct_url'], image['locations'][0]['url'])
-
- # Removing the last location should be disallowed
- self.assertRaises(lib_exc.Forbidden,
- self.client.update_image, image['id'], [
- dict(remove='/locations/0')])
-
@decorators.idempotent_id('a9a20396-8399-4b36-909d-564949be098f')
def test_set_location_bad_scheme(self):
image = self.client.create_image(container_format='bare',
@@ -961,8 +901,9 @@
'os_hash_algo': 'sha512'},
'metadata': {},
'url': CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries(image['id'],
+ [dict(add='/locations/-',
+ value=new_loc)])
# Expect that all of our values ended up on the image
image = self.client.show_image(image['id'])
@@ -989,8 +930,9 @@
'os_hash_algo': orig_image['os_hash_algo']},
'metadata': {},
'url': '%s#new' % CONF.image.http_image}
- self.client.update_image(orig_image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries(orig_image['id'],
+ [dict(add='/locations/-',
+ value=new_loc)])
# Setting the same exact values on a new location should work
image = self.client.show_image(orig_image['id'])
@@ -1024,17 +966,17 @@
# This should always fail due to the mismatch
self.assertRaises(lib_exc.Conflict,
- self.client.update_image,
- orig_image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries,
+ orig_image['id'],
+ [dict(add='/locations/-', value=new_loc)])
# Now try to add a new location with all of the substitutions,
# which should also fail
new_loc['validation_data'] = values
self.assertRaises(lib_exc.Conflict,
- self.client.update_image,
- orig_image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries,
+ orig_image['id'],
+ [dict(add='/locations/-', value=new_loc)])
# Make sure nothing has changed on our image after all the
# above failures
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index 2506185..3c0efee 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -14,6 +14,7 @@
from tempest.api.network import base
from tempest.common import utils
+from tempest.common import waiters
from tempest.lib import decorators
@@ -36,6 +37,16 @@
cls.create_subnet(cls.network)
cls.port = cls.create_port(cls.network)
+ @decorators.idempotent_id('f164801e-1dd8-4b8b-b5d3-cc3ac77cfaa5')
+ def test_dhcp_port_status_active(self):
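+ """Test that all ports on the network, including DHCP, become ACTIVE"""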
+ ports = self.admin_ports_client.list_ports(
+ network_id=self.network['id'])['ports']
+ for port in ports:
+ waiters.wait_for_port_status(
+ client=self.admin_ports_client,
+ port_id=port['id'],
+ status='ACTIVE')
+
@decorators.idempotent_id('5032b1fe-eb42-4a64-8f3b-6e189d8b5c7d')
def test_list_dhcp_agent_hosting_network(self):
"""Test Listing DHCP agents hosting a network"""
diff --git a/tempest/api/network/test_service_providers.py b/tempest/api/network/test_service_providers.py
index 5af5244..e203a2c 100644
--- a/tempest/api/network/test_service_providers.py
+++ b/tempest/api/network/test_service_providers.py
@@ -10,8 +10,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import testtools
-
from tempest.api.network import base
from tempest.common import utils
from tempest.lib import decorators
@@ -20,10 +18,14 @@
class ServiceProvidersTest(base.BaseNetworkTest):
"""Test network service providers"""
+ @classmethod
+ def skip_checks(cls):
+ super(ServiceProvidersTest, cls).skip_checks()
+ if not utils.is_extension_enabled('service-type', 'network'):
+ skip_msg = ("service-type extension not enabled.")
+ raise cls.skipException(skip_msg)
+
@decorators.idempotent_id('2cbbeea9-f010-40f6-8df5-4eaa0c918ea6')
- @testtools.skipUnless(
- utils.is_extension_enabled('service-type', 'network'),
- 'service-type extension not enabled.')
def test_service_providers_list(self):
"""Test listing network service providers"""
body = self.service_providers_client.list_service_providers()
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index 7107dc4..58ad9d4 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -15,6 +15,8 @@
import time
+from oslo_log import log
+
from tempest.common import custom_matchers
from tempest.common import waiters
from tempest import config
@@ -23,6 +25,7 @@
import tempest.test
CONF = config.CONF
+LOG = log.getLogger(__name__)
def delete_containers(containers, container_client, object_client):
@@ -41,17 +44,33 @@
for cont in containers:
try:
- params = {'limit': 9999, 'format': 'json'}
- _, objlist = container_client.list_container_objects(cont, params)
- # delete every object in the container
- for obj in objlist:
- object_client.delete_object(cont, obj['name'])
- object_client.wait_for_resource_deletion(obj['name'], cont)
- # Verify resource deletion
+ delete_objects(cont, container_client, object_client)
container_client.delete_container(cont)
container_client.wait_for_resource_deletion(cont)
except lib_exc.NotFound:
- pass
+ LOG.warning(f"Container {cont} wasn't deleted as it wasn't found.")
+
+
+def delete_objects(container, container_client, object_client):
+ """Remove all objects from container.
+
+ Will not throw any error if the objects do not exist
+
+ :param container: Name of the container that contains the objects to be
+ deleted
+ :param container_client: Client to be used to list objects in
+ the container
+ :param object_client: Client to be used to delete objects
+ """
+ params = {'limit': 9999, 'format': 'json'}
+ _, objlist = container_client.list_container_objects(container, params)
+
+ for obj in objlist:
+ try:
+ object_client.delete_object(container, obj['name'])
+ object_client.wait_for_resource_deletion(obj['name'], container)
+ except lib_exc.NotFound:
+ LOG.warning(f"Object {obj} wasn't deleted as it wasn't found.")
class BaseObjectTest(tempest.test.BaseTestCase):
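The delete_objects() helper above is usable on its own as well as from delete_containers(). A minimal sketch of calling it directly from an object-storage test; the create_container() helper and the client attributes are assumed to be available on the base class:

    # Sketch only: empty a container, then remove it.
    from tempest.api.object_storage import base

    class ContainerCleanupExample(base.BaseObjectTest):

        def test_empty_and_delete(self):
            container = self.create_container()  # assumed helper
            base.delete_objects(container,
                                self.container_client,
                                self.object_client)
            self.container_client.delete_container(container)
            self.container_client.wait_for_resource_deletion(container)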
diff --git a/tempest/api/object_storage/test_container_acl_negative.py b/tempest/api/object_storage/test_container_acl_negative.py
index 85e6ddb..347c79e 100644
--- a/tempest/api/object_storage/test_container_acl_negative.py
+++ b/tempest/api/object_storage/test_container_acl_negative.py
@@ -41,6 +41,7 @@
super(ObjectACLsNegativeTest, self).setUp()
self.container_name = data_utils.rand_name(name='TestContainer')
self.container_client.update_container(self.container_name)
+ self.containers.append(self.container_name)
@classmethod
def resource_cleanup(cls):
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index 7d5bd26..61b9136 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -1016,9 +1016,10 @@
super(PublicObjectTest, self).setUp()
self.container_name = data_utils.rand_name(name='TestContainer')
self.container_client.update_container(self.container_name)
+ self.containers.append(self.container_name)
def tearDown(self):
- self.delete_containers([self.container_name])
+ self.delete_containers()
super(PublicObjectTest, self).tearDown()
@decorators.idempotent_id('07c9cf95-c0d4-4b49-b9c8-0ef2c9b27193')
diff --git a/tempest/api/volume/admin/test_encrypted_volumes_extend.py b/tempest/api/volume/admin/test_encrypted_volumes_extend.py
index e85a00d..4506389 100644
--- a/tempest/api/volume/admin/test_encrypted_volumes_extend.py
+++ b/tempest/api/volume/admin/test_encrypted_volumes_extend.py
@@ -14,7 +14,6 @@
from tempest.api.volume import base
from tempest.api.volume import test_volumes_extend as extend
-from tempest.common import utils
from tempest import config
from tempest.lib import decorators
@@ -25,23 +24,25 @@
base.BaseVolumeAdminTest):
"""Tests extending the size of an attached encrypted volume."""
+ @classmethod
+ def skip_checks(cls):
+ super(EncryptedVolumesExtendAttachedTest, cls).skip_checks()
+ if not CONF.service_available.nova:
+ skip_msg = ("%s skipped as Nova is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+ if not CONF.volume_feature_enabled.extend_attached_encrypted_volume:
+ raise cls.skipException(
+ "Attached encrypted volume extend is disabled.")
+
@decorators.idempotent_id('e93243ec-7c37-4b5b-a099-ebf052c13216')
- @testtools.skipUnless(
- CONF.volume_feature_enabled.extend_attached_encrypted_volume,
- "Attached encrypted volume extend is disabled.")
- @utils.services('compute')
def test_extend_attached_encrypted_volume_luksv1(self):
"""LUKs v1 decrypts and extends through libvirt."""
volume = self.create_encrypted_volume(encryption_provider="luks")
self._test_extend_attached_volume(volume)
@decorators.idempotent_id('381a2a3a-b2f4-4631-a910-720881f2cc2f')
- @testtools.skipUnless(
- CONF.volume_feature_enabled.extend_attached_encrypted_volume,
- "Attached encrypted volume extend is disabled.")
@testtools.skipIf(CONF.volume.storage_protocol == 'ceph',
'Ceph only supports LUKSv2 if doing host attach.')
- @utils.services('compute')
def test_extend_attached_encrypted_volume_luksv2(self):
"""LUKs v2 decrypts and extends through os-brick."""
volume = self.create_encrypted_volume(encryption_provider="luks2")
diff --git a/tempest/api/volume/admin/test_group_snapshots.py b/tempest/api/volume/admin/test_group_snapshots.py
index 73903cf..8af8435 100644
--- a/tempest/api/volume/admin/test_group_snapshots.py
+++ b/tempest/api/volume/admin/test_group_snapshots.py
@@ -91,9 +91,15 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- vol = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+        # create_volume() registers a test-level cleanup; this volume must
+        # outlive the group, so the delete_group cleanup removes it.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
# Create group snapshot
group_snapshot_name = data_utils.rand_name('group_snapshot')
@@ -153,9 +159,15 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- vol = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+        # create_volume() registers a test-level cleanup; this volume must
+        # outlive the group, so the delete_group cleanup removes it.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
# Create group_snapshot
group_snapshot_name = data_utils.rand_name('group_snapshot')
@@ -215,8 +227,15 @@
# volume-type and group id.
volume_list = []
for _ in range(2):
- volume = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+            # create_volume() registers a test-level cleanup; this volume
+            # must outlive the group, so the delete_group cleanup removes it.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ volume = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, volume['id'], 'available')
volume_list.append(volume['id'])
for vol in volume_list:
@@ -268,9 +287,15 @@
group = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- volume = self.create_volume(volume_type=volume_type['id'],
- group_id=group['id'])
+        # create_volume() registers a test-level cleanup; this volume must
+        # outlive the group, so the delete_group cleanup removes it.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': group['id'],
+ 'size': CONF.volume.volume_size}
+ volume = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, volume['id'], 'available')
# Create group snapshot
group_snapshot = self._create_group_snapshot(group_id=group['id'])
diff --git a/tempest/api/volume/admin/test_groups.py b/tempest/api/volume/admin/test_groups.py
index f16e4d2..094f142 100644
--- a/tempest/api/volume/admin/test_groups.py
+++ b/tempest/api/volume/admin/test_groups.py
@@ -108,11 +108,17 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volumes
+        # create_volume() registers a test-level cleanup; this volume must
+        # outlive the group, so the delete_group cleanup removes it.
grp_vols = []
for _ in range(2):
- vol = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
grp_vols.append(vol)
vol2 = grp_vols[1]
@@ -171,8 +177,15 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- self.create_volume(volume_type=volume_type['id'], group_id=grp['id'])
+        # create_volume() registers a test-level cleanup; this volume must
+        # outlive the group, so the delete_group cleanup removes it.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
# Create Group from Group
grp_name2 = data_utils.rand_name('Group_from_grp')
diff --git a/tempest/api/volume/admin/test_snapshot_manage.py b/tempest/api/volume/admin/test_snapshot_manage.py
index ab0aa38..478bd16 100644
--- a/tempest/api/volume/admin/test_snapshot_manage.py
+++ b/tempest/api/volume/admin/test_snapshot_manage.py
@@ -14,6 +14,7 @@
# under the License.
from tempest.api.volume import base
+from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -31,6 +32,8 @@
managed by Cinder from a storage back end to Cinder
"""
+ create_default_network = True
+
@classmethod
def skip_checks(cls):
super(SnapshotManageAdminTest, cls).skip_checks()
@@ -46,8 +49,7 @@
"it should be a list of two elements")
raise exceptions.InvalidConfiguration(msg)
- @decorators.idempotent_id('0132f42d-0147-4b45-8501-cc504bbf7810')
- def test_unmanage_manage_snapshot(self):
+ def _test_unmanage_manage_snapshot(self, attached_volume=False):
"""Test unmanaging and managing volume snapshot"""
# Create a volume
volume = self.create_volume()
@@ -55,6 +57,13 @@
# Create a snapshot
snapshot = self.create_snapshot(volume_id=volume['id'])
+ if attached_volume:
+ # Create a server
+ server = self.create_server(wait_until='SSHABLE')
+ # Attach volume to instance
+ self.attach_volume(server['id'], volume['id'],
+ wait_for_detach=False)
+
# Unmanage the snapshot
# Unmanage snapshot function works almost the same as delete snapshot,
# but it does not delete the snapshot data
@@ -100,3 +109,17 @@
self.assertEqual(snapshot['size'], new_snapshot_info['size'])
for key in ['volume_id', 'name', 'description', 'metadata']:
self.assertEqual(snapshot_ref[key], new_snapshot_info[key])
+
+ @decorators.idempotent_id('0132f42d-0147-4b45-8501-cc504bbf7810')
+ def test_unmanage_manage_snapshot(self):
+ self._test_unmanage_manage_snapshot()
+
+ @decorators.idempotent_id('7c735385-e953-4198-8534-68137f72dbdc')
+ @utils.services('compute')
+ def test_snapshot_manage_with_attached_volume(self):
+ """Test manage a snapshot with an attached volume.
+
+        This case validates the manage snapshot operation while the
+        parent volume is attached to an instance.
+ """
+ self._test_unmanage_manage_snapshot(attached_volume=True)
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index ecddfba..b6e9f32 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -83,7 +83,7 @@
server_id = self.create_server()['id']
volume_id = self.create_volume()['id']
- # Attach volume
+ # Request Cinder to map & export volume (it's not attached to instance)
self.volumes_client.attach_volume(
volume_id,
instance_uuid=server_id,
@@ -101,7 +101,9 @@
waiters.wait_for_volume_resource_status(self.volumes_client,
volume_id, 'error')
- # Force detach volume
+        # The force detach volume call works because the volume is not
+        # really connected to the instance (it is safe); otherwise it would
+        # be rejected for security reasons (bug #2004555).
self.admin_volume_client.force_detach_volume(
volume_id, connector=None,
attachment_id=attachment['attachment_id'])
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 172b6ed..ad8f573 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -19,6 +19,8 @@
from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
+from tempest.lib.decorators import cleanup_order
+from tempest.lib import exceptions as lib_exc
import tempest.test
CONF = config.CONF
@@ -40,6 +42,10 @@
if not CONF.service_available.cinder:
skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
raise cls.skipException(skip_msg)
+ if cls.create_default_network and not CONF.service_available.neutron:
+ skip_msg = (
+ "%s skipped as Neutron is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
api_version_utils.check_skip_with_microversion(
cls.volume_min_microversion, cls.volume_max_microversion,
@@ -49,6 +55,8 @@
def setup_credentials(cls):
cls.set_network_resources(
network=cls.create_default_network,
+ router=cls.create_default_network,
+ dhcp=cls.create_default_network,
subnet=cls.create_default_network)
super(BaseVolumeTest, cls).setup_credentials()
@@ -94,8 +102,8 @@
cls.build_interval = CONF.volume.build_interval
cls.build_timeout = CONF.volume.build_timeout
- @classmethod
- def create_volume(cls, wait_until='available', **kwargs):
+ @cleanup_order
+ def create_volume(self, wait_until='available', **kwargs):
"""Wrapper utility that returns a test volume.
:param wait_until: wait till volume status, None means no wait.
@@ -104,12 +112,12 @@
kwargs['size'] = CONF.volume.volume_size
if 'imageRef' in kwargs:
- image = cls.images_client.show_image(kwargs['imageRef'])
+ image = self.images_client.show_image(kwargs['imageRef'])
min_disk = image['min_disk']
kwargs['size'] = max(kwargs['size'], min_disk)
if 'name' not in kwargs:
- name = data_utils.rand_name(cls.__name__ + '-Volume')
+ name = data_utils.rand_name(self.__name__ + '-Volume')
kwargs['name'] = name
if CONF.volume.volume_type and 'volume_type' not in kwargs:
@@ -123,27 +131,46 @@
kwargs.setdefault('availability_zone',
CONF.compute.compute_volume_common_az)
- volume = cls.volumes_client.create_volume(**kwargs)['volume']
- cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
- cls.delete_volume, cls.volumes_client,
- volume['id'])
+ volume = self.volumes_client.create_volume(**kwargs)['volume']
+ self.cleanup(test_utils.call_and_ignore_notfound_exc,
+ self._delete_volume_for_cleanup,
+ self.volumes_client, volume['id'])
if wait_until:
- waiters.wait_for_volume_resource_status(cls.volumes_client,
+ waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], wait_until)
return volume
- @classmethod
- def create_snapshot(cls, volume_id=1, **kwargs):
+ @staticmethod
+ def _delete_volume_for_cleanup(volumes_client, volume_id):
+ """Delete a volume (only) for cleanup.
+
+ If it is attached to a server, wait for it to become available,
+ assuming we have already deleted the server and just need nova to
+ complete the delete operation before it is available to be deleted.
+ Otherwise proceed to the regular delete_volume().
+ """
+ try:
+ vol = volumes_client.show_volume(volume_id)['volume']
+ if vol['status'] == 'in-use':
+ waiters.wait_for_volume_resource_status(volumes_client,
+ volume_id,
+ 'available')
+ except lib_exc.NotFound:
+ pass
+ BaseVolumeTest.delete_volume(volumes_client, volume_id)
+
+ @cleanup_order
+ def create_snapshot(self, volume_id=1, **kwargs):
"""Wrapper utility that returns a test snapshot."""
if 'name' not in kwargs:
- name = data_utils.rand_name(cls.__name__ + '-Snapshot')
+ name = data_utils.rand_name(self.__name__ + '-Snapshot')
kwargs['name'] = name
- snapshot = cls.snapshots_client.create_snapshot(
+ snapshot = self.snapshots_client.create_snapshot(
volume_id=volume_id, **kwargs)['snapshot']
- cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
- cls.delete_snapshot, snapshot['id'])
- waiters.wait_for_volume_resource_status(cls.snapshots_client,
+ self.cleanup(test_utils.call_and_ignore_notfound_exc,
+ self.delete_snapshot, snapshot['id'])
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
snapshot['id'], 'available')
return snapshot
@@ -175,23 +202,25 @@
client.delete_volume(volume_id)
client.wait_for_resource_deletion(volume_id)
- @classmethod
- def delete_snapshot(cls, snapshot_id, snapshots_client=None):
+ @cleanup_order
+ def delete_snapshot(self, snapshot_id, snapshots_client=None):
"""Delete snapshot by the given client"""
if snapshots_client is None:
- snapshots_client = cls.snapshots_client
+ snapshots_client = self.snapshots_client
snapshots_client.delete_snapshot(snapshot_id)
snapshots_client.wait_for_resource_deletion(snapshot_id)
- def attach_volume(self, server_id, volume_id):
+ def attach_volume(self, server_id, volume_id, wait_for_detach=True):
"""Attach a volume to a server"""
self.servers_client.attach_volume(
server_id, volumeId=volume_id,
device='/dev/%s' % CONF.compute.volume_device_name)
waiters.wait_for_volume_resource_status(self.volumes_client,
volume_id, 'in-use')
- self.addCleanup(waiters.wait_for_volume_resource_status,
- self.volumes_client, volume_id, 'available')
+ if wait_for_detach:
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.volumes_client, volume_id, 'available',
+ server_id, self.servers_client)
self.addCleanup(self.servers_client.detach_volume, server_id,
volume_id)
@@ -200,6 +229,14 @@
'name',
data_utils.rand_name(self.__class__.__name__ + '-instance'))
+ if wait_until == 'SSHABLE' and not kwargs.get('validation_resources'):
+ # If we were asked for SSHABLE but were not provided with the
+ # required validation_resources and validatable flag, ensure we
+ # pass them to create_test_server() so that it will actually wait.
+ kwargs['validation_resources'] = (
+ self.get_test_validation_resources(self.os_primary))
+ kwargs['validatable'] = True
+
tenant_network = self.get_tenant_network()
body, _ = compute.create_test_server(
self.os_primary,
@@ -278,23 +315,23 @@
cls.admin_scheduler_stats_client = \
cls.os_admin.volume_scheduler_stats_client_latest
- @classmethod
- def create_test_qos_specs(cls, name=None, consumer=None, **kwargs):
+ @cleanup_order
+ def create_test_qos_specs(self, name=None, consumer=None, **kwargs):
"""create a test Qos-Specs."""
- name = name or data_utils.rand_name(cls.__name__ + '-QoS')
+ name = name or data_utils.rand_name(self.__name__ + '-QoS')
consumer = consumer or 'front-end'
- qos_specs = cls.admin_volume_qos_client.create_qos(
+ qos_specs = self.admin_volume_qos_client.create_qos(
name=name, consumer=consumer, **kwargs)['qos_specs']
- cls.addClassResourceCleanup(cls.clear_qos_spec, qos_specs['id'])
+ self.cleanup(self.clear_qos_spec, qos_specs['id'])
return qos_specs
- @classmethod
- def create_volume_type(cls, name=None, **kwargs):
+ @cleanup_order
+ def create_volume_type(self, name=None, **kwargs):
"""Create a test volume-type"""
- name = name or data_utils.rand_name(cls.__name__ + '-volume-type')
- volume_type = cls.admin_volume_types_client.create_volume_type(
+ name = name or data_utils.rand_name(self.__name__ + '-volume-type')
+ volume_type = self.admin_volume_types_client.create_volume_type(
name=name, **kwargs)['volume_type']
- cls.addClassResourceCleanup(cls.clear_volume_type, volume_type['id'])
+ self.cleanup(self.clear_volume_type, volume_type['id'])
return volume_type
def create_encryption_type(self, type_id=None, provider=None,
@@ -328,19 +365,19 @@
group_type['id'])
return group_type
- @classmethod
- def clear_qos_spec(cls, qos_id):
+ @cleanup_order
+ def clear_qos_spec(self, qos_id):
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_qos_client.delete_qos, qos_id)
+ self.admin_volume_qos_client.delete_qos, qos_id)
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_qos_client.wait_for_resource_deletion, qos_id)
+ self.admin_volume_qos_client.wait_for_resource_deletion, qos_id)
- @classmethod
- def clear_volume_type(cls, vol_type_id):
+ @cleanup_order
+ def clear_volume_type(self, vol_type_id):
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_types_client.delete_volume_type, vol_type_id)
+ self.admin_volume_types_client.delete_volume_type, vol_type_id)
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_types_client.wait_for_resource_deletion,
+ self.admin_volume_types_client.wait_for_resource_deletion,
vol_type_id)
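With the volume helpers above converted from classmethods to @cleanup_order instance methods, a resource created inside a test registers its cleanup through self.cleanup(), i.e. at test level rather than for the whole class. A minimal sketch of the intended usage (the assertion is only illustrative):

    from tempest.api.volume import base

    class ExampleSnapshotTest(base.BaseVolumeTest):

        def test_volume_and_snapshot(self):
            # Both helpers register their cleanup for this test only, so the
            # snapshot created second is removed before the volume.
            volume = self.create_volume()
            snapshot = self.create_snapshot(volume_id=volume['id'])
            self.assertEqual(volume['id'], snapshot['volume_id'])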
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 138d120..89ff497 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -29,6 +29,8 @@
class VolumesBackupsTest(base.BaseVolumeTest):
"""Test volumes backup"""
+ create_default_network = True
+
@classmethod
def skip_checks(cls):
super(VolumesBackupsTest, cls).skip_checks()
@@ -114,10 +116,16 @@
is "available" or "in-use".
"""
# Create a server
- volume = self.create_volume()
+ volume = self.create_volume(wait_until=False)
self.addCleanup(self.delete_volume, self.volumes_client, volume['id'])
- server = self.create_server()
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+ server = self.create_server(wait_until='SSHABLE',
+ validation_resources=validation_resources,
+ validatable=True)
# Attach volume to instance
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
self.attach_volume(server['id'], volume['id'])
# Create backup using force flag
backup_name = data_utils.rand_name(
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index fcbc982..c766db8 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -18,7 +18,6 @@
import testtools
from tempest.api.volume import base
-from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
@@ -46,6 +45,9 @@
@decorators.idempotent_id('86be1cba-2640-11e5-9c82-635fb964c912')
@testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
"Cinder volume snapshots are disabled")
+ @testtools.skipUnless(
+ CONF.volume_feature_enabled.extend_volume_with_snapshot,
+ "Extending volume with snapshot is disabled.")
def test_volume_extend_when_volume_has_snapshot(self):
"""Test extending a volume which has a snapshot"""
volume = self.create_volume()
@@ -114,7 +116,7 @@
if the action on the server fails.
"""
# Create a test server. Will be automatically cleaned up on teardown.
- server = self.create_server()
+ server = self.create_server(wait_until='SSHABLE')
# Attach the volume to the server and wait for the volume status to be
# "in-use".
self.attach_volume(server['id'], volume['id'])
@@ -178,10 +180,16 @@
class VolumesExtendAttachedTest(BaseVolumesExtendAttachedTest):
+ @classmethod
+ def skip_checks(cls):
+ super(VolumesExtendAttachedTest, cls).skip_checks()
+ if not CONF.service_available.nova:
+ skip_msg = ("%s skipped as Nova is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+ if not CONF.volume_feature_enabled.extend_attached_volume:
+ raise cls.skipException("Attached volume extend is disabled.")
+
@decorators.idempotent_id('301f5a30-1c6f-4ea0-be1a-91fd28d44354')
- @testtools.skipUnless(CONF.volume_feature_enabled.extend_attached_volume,
- "Attached volume extend is disabled.")
- @utils.services('compute')
def test_extend_attached_volume(self):
volume = self.create_volume()
self._test_extend_attached_volume(volume)
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index b3a04f8..95521e7 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -44,12 +44,17 @@
@utils.services('compute')
def test_snapshot_create_delete_with_volume_in_use(self):
"""Test create/delete snapshot from volume attached to server"""
- # Create a test instance
- server = self.create_server(wait_until='SSHABLE')
# NOTE(zhufl) Here we create volume from self.image_ref for adding
# coverage for "creating snapshot from non-blank volume".
volume = self.create_volume(imageRef=self.image_ref)
- self.attach_volume(server['id'], volume['id'])
+
+ # Create a test instance
+ server = self.create_server(wait_until='SSHABLE')
+
+ # NOTE(danms): We are attaching this volume to a server, but we do
+ # not need to block on detach during cleanup because we will be
+ # deleting the server anyway.
+ self.attach_volume(server['id'], volume['id'], wait_for_detach=False)
# Snapshot a volume which attached to an instance with force=False
self.assertRaises(lib_exc.BadRequest, self.create_snapshot,
@@ -81,7 +86,11 @@
# Create a server and attach it
server = self.create_server(wait_until='SSHABLE')
- self.attach_volume(server['id'], self.volume_origin['id'])
+ # NOTE(danms): We are attaching this volume to a server, but we do
+ # not need to block on detach during cleanup because we will be
+ # deleting the server anyway.
+ self.attach_volume(server['id'], self.volume_origin['id'],
+ wait_for_detach=False)
# Now that the volume is attached, create other snapshots
snapshot2 = self.create_snapshot(self.volume_origin['id'], force=True)
diff --git a/tempest/clients.py b/tempest/clients.py
index a65c43b..5b31cf8 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -83,8 +83,6 @@
def _set_image_clients(self):
if CONF.service_available.glance:
- self.image_client = self.image_v1.ImagesClient()
- self.image_member_client = self.image_v1.ImageMembersClient()
self.image_client_v2 = self.image_v2.ImagesClient()
self.image_member_client_v2 = self.image_v2.ImageMembersClient()
self.image_cache_client = self.image_v2.ImageCacheClient()
@@ -97,6 +95,7 @@
self.image_v2.NamespacePropertiesClient()
self.namespace_tags_client = self.image_v2.NamespaceTagsClient()
self.image_versions_client = self.image_v2.VersionsClient()
+ self.tasks_client = self.image_v2.TaskClient()
# NOTE(danms): If no alternate endpoint is configured,
# this client will work the same as the base self.images_client.
# If your test needs to know if these are different, check the
@@ -124,15 +123,12 @@
self.quota_classes_client = self.compute.QuotaClassesClient()
self.flavors_client = self.compute.FlavorsClient()
self.extensions_client = self.compute.ExtensionsClient()
- self.floating_ip_pools_client = self.compute.FloatingIPPoolsClient()
- self.floating_ips_bulk_client = self.compute.FloatingIPsBulkClient()
self.compute_floating_ips_client = self.compute.FloatingIPsClient()
self.compute_security_group_rules_client = (
self.compute.SecurityGroupRulesClient())
self.compute_security_groups_client = (
self.compute.SecurityGroupsClient())
self.interfaces_client = self.compute.InterfacesClient()
- self.fixed_ips_client = self.compute.FixedIPsClient()
self.availability_zone_client = self.compute.AvailabilityZoneClient()
self.aggregates_client = self.compute.AggregatesClient()
self.services_client = self.compute.ServicesClient()
@@ -144,6 +140,8 @@
self.tenant_networks_client = self.compute.TenantNetworksClient()
self.assisted_volume_snapshots_client = (
self.compute.AssistedVolumeSnapshotsClient())
+ self.server_external_events_client = (
+ self.compute.ServerExternalEventsClient())
# NOTE: The following client needs special timeout values because
# the API is a proxy for the other component.
diff --git a/tempest/cmd/account_generator.py b/tempest/cmd/account_generator.py
index ad0b547..f4f4b17 100755
--- a/tempest/cmd/account_generator.py
+++ b/tempest/cmd/account_generator.py
@@ -155,7 +155,7 @@
# Create the list of resources to be provisioned for each process
# NOTE(andreaf) get_credentials expects a string for types or a list for
# roles. Adding all required inputs to the spec list.
- spec = ['primary', 'alt']
+ spec = ['primary', 'alt', 'project_reader']
if CONF.service_available.swift:
spec.append([CONF.object_storage.operator_role])
spec.append([CONF.object_storage.reseller_admin_role])
@@ -163,8 +163,13 @@
spec.append('admin')
resources = []
for cred_type in spec:
+ scope = None
+ if "_" in cred_type:
+ scope = cred_type.split("_")[0]
+ cred_type = cred_type.split("_")[1:2]
+
resources.append((cred_type, cred_provider.get_credentials(
- credential_type=cred_type)))
+ credential_type=cred_type, scope=scope)))
return resources
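The scope handling above simply splits an entry such as 'project_reader' on underscores: the first token becomes the scope and the remainder becomes a one-element role list. A tiny standalone illustration of what get_credentials() ends up receiving:

    cred_type = 'project_reader'
    scope = None
    if "_" in cred_type:
        scope = cred_type.split("_")[0]        # 'project'
        cred_type = cred_type.split("_")[1:2]  # ['reader']
    print(scope, cred_type)                    # project ['reader']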
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
index 0b96d9e..a8a344a 100644
--- a/tempest/cmd/cleanup.py
+++ b/tempest/cmd/cleanup.py
@@ -90,7 +90,6 @@
from tempest import clients
from tempest.cmd import cleanup_service
from tempest.common import credentials_factory as credentials
-from tempest.common import identity
from tempest import config
from tempest.lib import exceptions
@@ -140,11 +139,6 @@
self.dry_run_data = {}
self.json_data = {}
- self.admin_id = ""
- self.admin_role_id = ""
- self.admin_project_id = ""
- self._init_admin_ids()
-
# available services
self.project_associated_services = (
cleanup_service.get_project_associated_cleanup_services())
@@ -227,26 +221,6 @@
svc = service(self.admin_mgr, **kwargs)
svc.run()
- def _init_admin_ids(self):
- pr_cl = self.admin_mgr.projects_client
- rl_cl = self.admin_mgr.roles_v3_client
- rla_cl = self.admin_mgr.role_assignments_client
- us_cl = self.admin_mgr.users_v3_client
-
- project = identity.get_project_by_name(pr_cl,
- CONF.auth.admin_project_name)
- self.admin_project_id = project['id']
- user = identity.get_user_by_project(us_cl, rla_cl,
- self.admin_project_id,
- CONF.auth.admin_username)
- self.admin_id = user['id']
-
- roles = rl_cl.list_roles()['roles']
- for role in roles:
- if role['name'] == CONF.identity.admin_role:
- self.admin_role_id = role['id']
- break
-
def get_parser(self, prog_name):
parser = super(TempestCleanup, self).get_parser(prog_name)
parser.add_argument('--init-saved-state', action="store_true",
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index 3d476b9..b105c70 100644
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -118,25 +118,16 @@
# Since we want to verify that the configuration is correct, we cannot
# rely on a specific version of the API being available.
try:
- _, versions = os.image_v1.ImagesClient().get_versions()
+ versions = os.image_v2.VersionsClient().list_versions()['versions']
+ versions = [x['id'] for x in versions]
except lib_exc.NotFound:
- # If not found, we use v2. The assumption is that either v1 or v2
- # are available, since glance is marked as available in the catalog.
- # If not, glance should be disabled in Tempest conf.
- try:
- versions = os.image_v2.VersionsClient().list_versions()['versions']
- versions = [x['id'] for x in versions]
- except lib_exc.NotFound:
- msg = ('Glance is available in the catalog, but no known version, '
- '(v1.x or v2.x) of Glance could be found, so Glance should '
- 'be configured as not available')
- LOG.warning(msg)
- print_and_or_update('glance', 'service-available', False, update)
- return
+        msg = ('Glance is available in the catalog, but no known version '
+ 'of Glance could be found, so Glance should '
+ 'be configured as not available')
+ LOG.warning(msg)
+ print_and_or_update('glance', 'service-available', False, update)
+ return
- if CONF.image_feature_enabled.api_v1 != contains_version('v1.', versions):
- print_and_or_update('api_v1', 'image-feature-enabled',
- not CONF.image_feature_enabled.api_v1, update)
if CONF.image_feature_enabled.api_v2 != contains_version('v2.', versions):
print_and_or_update('api_v2', 'image-feature-enabled',
not CONF.image_feature_enabled.api_v2, update)
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index 00f133e..53d44f1 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -304,6 +304,10 @@
# this additional wait state for later use.
wait_until_extra = None
if wait_until in ['PINGABLE', 'SSHABLE']:
+ if not validatable and validation_resources is None:
+ raise RuntimeError(
+ 'SSHABLE/PINGABLE requires validatable=True '
+ 'and validation_resources to be passed')
wait_until_extra = wait_until
wait_until = 'ACTIVE'
@@ -352,6 +356,8 @@
except Exception:
LOG.exception('Server %s failed to delete in time',
server['id'])
+ if servers and not multiple_create_request:
+ body = rest_client.ResponseBody(body.response, servers[0])
return body, servers
return body, created_servers
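Because create_test_server() now rejects wait_until='SSHABLE'/'PINGABLE' unless validation resources are supplied, callers that want the extra wait have to pass them explicitly, as the volume tests earlier in this change do. A minimal sketch of the calling convention inside a test method:

    # Sketch: request an SSH-reachable server from within a Tempest test.
    validation_resources = self.get_test_validation_resources(self.os_primary)
    server = self.create_server(
        wait_until='SSHABLE',
        validation_resources=validation_resources,
        validatable=True)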
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 9d9fab7..0d93430 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -109,6 +109,15 @@
LOG.debug('(get_nic_name_by_ip) Command result: %s', nic)
return nic.strip().strip(":").split('@')[0].lower()
+ def get_nic_ip_addresses(self, nic_name, ip_version=None):
+ cmd = "ip "
+ if ip_version:
+ cmd += "-%s " % ip_version
+ cmd += "-o addr | awk '/%s/ {print $4}'" % nic_name
+ ip_addresses = self.exec_command(cmd)
+        LOG.debug('(get_nic_ip_addresses): Command result: %s', ip_addresses)
+ return ip_addresses.strip().split()
+
def _get_dns_servers(self):
cmd = 'cat /etc/resolv.conf'
resolve_file = self.exec_command(cmd).strip().split('\n')
@@ -145,15 +154,20 @@
cmd = "sudo /sbin/dhclient -r && sudo /sbin/dhclient"
self.exec_command(cmd)
+ def _renew_lease_dhcpcd(self, fixed_ip=None):
+        """Renews DHCP lease via dhcpcd client."""
+ cmd = "sudo /sbin/dhcpcd --rebind"
+ self.exec_command(cmd)
+
def renew_lease(self, fixed_ip=None, dhcp_client='udhcpc'):
"""Wrapper method for renewing DHCP lease via given client
Supporting:
* udhcpc
* dhclient
+ * dhcpcd
"""
- # TODO(yfried): add support for dhcpcd
- supported_clients = ['udhcpc', 'dhclient']
+ supported_clients = ['udhcpc', 'dhclient', 'dhcpcd']
if dhcp_client not in supported_clients:
raise tempest.lib.exceptions.InvalidConfiguration(
'%s DHCP client unsupported' % dhcp_client)
@@ -169,7 +183,7 @@
self.exec_command('sudo umount %s' % mount_path)
def make_fs(self, dev_name, fs='ext4'):
- cmd_mkfs = 'sudo mke2fs -t %s /dev/%s' % (fs, dev_name)
+ cmd_mkfs = 'sudo mkfs -t %s /dev/%s' % (fs, dev_name)
try:
self.exec_command(cmd_mkfs)
except tempest.lib.exceptions.SSHExecCommandFailed:
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 71599bd..d3be6fd 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -16,12 +16,10 @@
from oslo_log import log as logging
-from tempest.common import image as common_image
from tempest import config
from tempest import exceptions
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc
-from tempest.lib.services.image.v1 import images_client as images_v1_client
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -77,7 +75,8 @@
if 'fault' in body:
details += 'Fault: %s.' % body['fault']
if request_id:
- details += ' Server boot request ID: %s.' % request_id
+ details += ' Request ID of server operation performed before'
+        details += ' checking the server status: %s.' % request_id
raise exceptions.BuildErrorException(details, server_id=server_id)
timed_out = int(time.time()) - start_time >= timeout
@@ -92,7 +91,8 @@
'expected_task_state': expected_task_state,
'timeout': timeout})
if request_id:
- message += ' Server boot request ID: %s.' % request_id
+ message += ' Request ID of server operation performed before'
+        message += ' checking the server status: %s.' % request_id
message += ' Current status: %s.' % server_status
message += ' Current task state: %s.' % task_state
caller = test_utils.find_test_caller()
@@ -154,17 +154,7 @@
The client should have a show_image(image_id) method to get the image.
The client should also have build_interval and build_timeout attributes.
"""
- if isinstance(client, images_v1_client.ImagesClient):
- # The 'check_image' method is used here because the show_image method
- # returns image details plus the image itself which is very expensive.
- # The 'check_image' method returns just image details.
- def _show_image_v1(image_id):
- resp = client.check_image(image_id)
- return common_image.get_image_meta_from_headers(resp)
-
- show_image = _show_image_v1
- else:
- show_image = client.show_image
+ show_image = client.show_image
current_status = 'An unknown status'
start = int(time.time())
@@ -222,6 +212,24 @@
raise lib_exc.TimeoutException(message)
+def wait_for_tasks_status(client, task_id, status):
+ start = int(time.time())
+ while int(time.time()) - start < client.build_timeout:
+ task = client.show_tasks(task_id)
+ if task['status'] == status:
+ return task
+ time.sleep(client.build_interval)
+    message = ('Task %(task_id)s failed to reach %(status)s state within '
+               'the required time (%(timeout)s s).'
+               % {'task_id': task_id, 'status': status,
+                  'timeout': client.build_timeout})
+ caller = test_utils.find_test_caller()
+ if caller:
+ message = '(%s) %s' % (caller, message)
+ raise lib_exc.TimeoutException(message)
+
+
def wait_for_image_imported_to_stores(client, image_id, stores=None):
"""Waits for an image to be imported to all requested stores.
@@ -303,12 +311,16 @@
raise lib_exc.TimeoutException(message)
-def wait_for_volume_resource_status(client, resource_id, status):
+def wait_for_volume_resource_status(client, resource_id, status,
+ server_id=None, servers_client=None):
"""Waits for a volume resource to reach a given status.
This function is a common function for volume, snapshot and backup
resources. The function extracts the name of the desired resource from
the client class name of the resource.
+
+ If server_id and servers_client are provided, dump the console for that
+ server on failure.
"""
resource_name = re.findall(
r'(volume|group-snapshot|snapshot|backup|group)',
@@ -330,6 +342,11 @@
raise exceptions.VolumeExtendErrorException(volume_id=resource_id)
if int(time.time()) - start >= client.build_timeout:
+ if server_id and servers_client:
+ console_output = servers_client.get_console_output(
+ server_id)['output']
+ LOG.debug('Console output for %s\nbody=\n%s',
+ server_id, console_output)
message = ('%s %s failed to reach %s status (current %s) '
'within the required time (%s s).' %
(resource_name, resource_id, status, resource_status,
@@ -604,6 +621,22 @@
raise lib_exc.TimeoutException()
+def wait_for_port_status(client, port_id, status):
+    """Wait for a port to reach a certain status: BUILD, DOWN or ACTIVE.
+
+    :param client: The network client to use when querying the port's
+        status.
+    :param port_id: The uuid of the port we would like queried for status.
+    :param status: A string to compare the current port status to.
+ """
+ start_time = time.time()
+ while (time.time() - start_time <= client.build_timeout):
+ result = client.show_port(port_id)
+ if result['port']['status'].lower() == status.lower():
+ return result
+ time.sleep(client.build_interval)
+    message = ('Port %s failed to reach %s status within the required '
+               'time (%s s).' % (port_id, status, client.build_timeout))
+    raise lib_exc.TimeoutException(message)
+
+
def wait_for_ssh(ssh_client, timeout=30):
"""Waits for SSH connection to become usable"""
start_time = int(time.time())
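The waiters added above are plain polling helpers. A sketch of how tests in this change call them from a test method; the 'success' task status and the variable names are illustrative:

    from tempest.common import waiters

    # Glance task: poll until the task reaches the requested status.
    task = waiters.wait_for_tasks_status(self.tasks_client, task_id, 'success')

    # Neutron port: poll until the port is ACTIVE.
    waiters.wait_for_port_status(client=self.admin_ports_client,
                                 port_id=port['id'], status='ACTIVE')

    # Volume: on timeout, also dump the attached server's console log.
    waiters.wait_for_volume_resource_status(
        self.volumes_client, volume['id'], 'available',
        server_id=server['id'], servers_client=self.servers_client)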
diff --git a/tempest/config.py b/tempest/config.py
index a60e5a8..ee083d8 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -153,13 +153,11 @@
help="The public endpoint type to use for OpenStack Identity "
"(Keystone) API v2"),
cfg.StrOpt('v3_endpoint_type',
- default='adminURL',
+ default='public',
choices=['public', 'admin', 'internal',
'publicURL', 'adminURL', 'internalURL'],
help="The endpoint type to use for OpenStack Identity "
- "(Keystone) API v3. The default value adminURL is "
- "deprecated and will be modified to publicURL in "
- "the next release."),
+ "(Keystone) API v3."),
cfg.StrOpt('admin_role',
default='admin',
help="Role required to administrate keystone."),
@@ -201,8 +199,15 @@
"default value is 0 meaning disabling this feature. "
"NOTE: This config option value must be same as "
"keystone.conf: security_compliance.unique_last_password_"
- "count otherwise test might fail"
- ),
+ "count otherwise test might fail"),
+ cfg.IntOpt('user_minimum_password_age',
+ default=0,
+ help="The number of days that a password must be used before "
+ "the user can change it. This only takes effect when "
+ "identity-feature-enabled.security_compliance is set to "
+ "'True'. For more details, refer to keystone config "
+ "options "
+ "keystone.conf:security_compliance.minimum_password_age.")
]
service_clients_group = cfg.OptGroup(name='service-clients',
@@ -714,14 +719,6 @@
'are current one. In future, Tempest will '
'test v2 APIs only so this config option '
'will be removed.'),
- cfg.BoolOpt('api_v1',
- default=False,
- help="Is the v1 image API enabled",
- deprecated_for_removal=True,
- deprecated_reason='Glance v1 APIs are deprecated and v2 APIs '
- 'are current one. In future, Tempest will '
- 'test v2 APIs only so this config option '
- 'will be removed.'),
# Image import feature is setup in devstack victoria onwards.
# Once all stable branches setup the same via glance standalone
# mode or with uwsgi, we can remove this config option.
@@ -975,12 +972,12 @@
default='ecdsa',
help='Type of key to use for ssh connections. '
'Valid types are rsa, ecdsa'),
- cfg.IntOpt('allowed_network_downtime',
- default=5.0,
- help="Allowed VM network connection downtime during live "
- "migration, in seconds. "
- "When the measured downtime exceeds this value, an "
- "exception is raised."),
+ cfg.FloatOpt('allowed_network_downtime',
+ default=5.0,
+ help="Allowed VM network connection downtime during live "
+ "migration, in seconds. "
+ "When the measured downtime exceeds this value, an "
+ "exception is raised."),
]
volume_group = cfg.OptGroup(name='volume',
@@ -1015,6 +1012,10 @@
cfg.StrOpt('volume_type',
default='',
help='Volume type to be used while creating volume.'),
+ cfg.StrOpt('volume_type_multiattach',
+ default='',
+ help='Multiattach volume type used while creating multiattach '
+ 'volume.'),
cfg.StrOpt('storage_protocol',
default='iSCSI',
help='Backend protocol to target when creating volume types'),
@@ -1105,7 +1106,13 @@
'server instance? This depends on the 3.42 volume API '
'microversion and the 2.51 compute API microversion. '
'Also, not all volume or compute backends support this '
+ 'operation.'),
+ cfg.BoolOpt('extend_volume_with_snapshot',
+ default=True,
+ help='Does the cloud support extending the size of a volume '
+                     'which has a snapshot? Some drivers do not support this '
'operation.')
+
]
@@ -1200,13 +1207,12 @@
help='Image container format'),
cfg.DictOpt('img_properties', help='Glance image properties. '
'Use for custom images which require them'),
- # TODO(yfried): add support for dhcpcd
cfg.StrOpt('dhcp_client',
default='udhcpc',
- choices=["udhcpc", "dhclient", ""],
+ choices=["udhcpc", "dhclient", "dhcpcd", ""],
                help='DHCP client used by images to renew DHCP lease. '
'If left empty, update operation will be skipped. '
- 'Supported clients: "udhcpc", "dhclient"'),
+ 'Supported clients: "udhcpc", "dhclient", "dhcpcd"'),
cfg.StrOpt('protocol',
default='icmp',
choices=('icmp', 'tcp', 'udp'),
@@ -1280,6 +1286,13 @@
'enabled when keystone.conf: [oslo_policy]. '
'enforce_new_defaults and keystone.conf: [oslo_policy]. '
'enforce_scope options are enabled in keystone conf.'),
+ cfg.BoolOpt('placement',
+ default=False,
+ help='Does the placement service API policies enforce scope '
+ 'and new defaults? This configuration value should be '
+ 'enabled when placement.conf: [oslo_policy]. '
+                    'enforce_new_defaults and placement.conf: [oslo_policy]. '
+ 'enforce_scope options are enabled in placement conf.'),
]
debug_group = cfg.OptGroup(name="debug",
diff --git a/tempest/lib/api_schema/response/compute/v2_1/fixed_ips.py b/tempest/lib/api_schema/response/compute/v2_1/fixed_ips.py
deleted file mode 100644
index a653213..0000000
--- a/tempest/lib/api_schema/response/compute/v2_1/fixed_ips.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.lib.api_schema.response.compute.v2_1 import parameter_types
-
-get_fixed_ip = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'fixed_ip': {
- 'type': 'object',
- 'properties': {
- 'address': parameter_types.ip_address,
- 'cidr': {'type': 'string'},
- 'host': {'type': 'string'},
- 'hostname': {'type': 'string'}
- },
- 'additionalProperties': False,
- 'required': ['address', 'cidr', 'host', 'hostname']
- }
- },
- 'additionalProperties': False,
- 'required': ['fixed_ip']
- }
-}
-
-reserve_unreserve_fixed_ip = {
- 'status_code': [202]
-}
diff --git a/tempest/lib/api_schema/response/compute/v2_1/floating_ips.py b/tempest/lib/api_schema/response/compute/v2_1/floating_ips.py
index 0c66590..274540c 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/floating_ips.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/floating_ips.py
@@ -58,91 +58,6 @@
}
}
-list_floating_ip_pools = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'floating_ip_pools': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'name': {'type': 'string'}
- },
- 'additionalProperties': False,
- 'required': ['name'],
- }
- }
- },
- 'additionalProperties': False,
- 'required': ['floating_ip_pools'],
- }
-}
-
add_remove_floating_ip = {
'status_code': [202]
}
-
-create_floating_ips_bulk = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'floating_ips_bulk_create': {
- 'type': 'object',
- 'properties': {
- 'interface': {'type': ['string', 'null']},
- 'ip_range': {'type': 'string'},
- 'pool': {'type': ['string', 'null']},
- },
- 'additionalProperties': False,
- 'required': ['interface', 'ip_range', 'pool'],
- }
- },
- 'additionalProperties': False,
- 'required': ['floating_ips_bulk_create'],
- }
-}
-
-delete_floating_ips_bulk = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'floating_ips_bulk_delete': {'type': 'string'}
- },
- 'additionalProperties': False,
- 'required': ['floating_ips_bulk_delete'],
- }
-}
-
-list_floating_ips_bulk = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'floating_ip_info': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'address': parameter_types.ip_address,
- 'instance_uuid': {'type': ['string', 'null']},
- 'interface': {'type': ['string', 'null']},
- 'pool': {'type': ['string', 'null']},
- 'project_id': {'type': ['string', 'null']},
- 'fixed_ip': parameter_types.ip_address
- },
- 'additionalProperties': False,
- # NOTE: fixed_ip is introduced after JUNO release,
- # So it is not defined as 'required'.
- 'required': ['address', 'instance_uuid', 'interface',
- 'pool', 'project_id'],
- }
- }
- },
- 'additionalProperties': False,
- 'required': ['floating_ip_info'],
- }
-}
diff --git a/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py b/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
index 8aed37d..b36c9d6 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
@@ -30,7 +30,7 @@
mac_address = {
'type': 'string',
- 'pattern': '(?:[a-f0-9]{2}:){5}[a-f0-9]{2}'
+ 'pattern': '(?:[a-fA-F0-9]{2}:){5}[a-fA-F0-9]{2}'
}
ip_address = {
diff --git a/tempest/lib/api_schema/response/compute/v2_1/server_external_events.py b/tempest/lib/api_schema/response/compute/v2_1/server_external_events.py
new file mode 100644
index 0000000..2ab69e2
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_1/server_external_events.py
@@ -0,0 +1,55 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+create = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'events': {
+ 'type': 'array', 'minItems': 1,
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'server_uuid': {
+ 'type': 'string', 'format': 'uuid'
+ },
+ 'name': {
+ 'type': 'string',
+ 'enum': [
+ 'network-changed',
+ 'network-vif-plugged',
+ 'network-vif-unplugged',
+ 'network-vif-deleted'
+ ],
+ },
+ 'status': {
+ 'type': 'string',
+ 'enum': ['failed', 'completed', 'in-progress'],
+ },
+ 'tag': {
+ 'type': 'string', 'maxLength': 255,
+ },
+ 'code': {'type': 'integer'},
+ },
+ 'required': [
+ 'server_uuid', 'name', 'code'],
+ 'additionalProperties': False,
+ },
+ },
+ },
+ 'required': ['events'],
+ 'additionalProperties': False,
+ }
+}
diff --git a/tempest/lib/api_schema/response/compute/v2_1/servers.py b/tempest/lib/api_schema/response/compute/v2_1/servers.py
index bd42afd..14e2d3b 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/servers.py
@@ -250,33 +250,6 @@
rescue_server_with_admin_pass['response_body'].update(
{'required': ['adminPass']})
-
-list_virtual_interfaces = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'virtual_interfaces': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'id': {'type': 'string'},
- 'mac_address': parameter_types.mac_address,
- 'OS-EXT-VIF-NET:net_id': {'type': 'string'}
- },
- 'additionalProperties': False,
- # 'OS-EXT-VIF-NET:net_id' is API extension So it is
- # not defined as 'required'
- 'required': ['id', 'mac_address']
- }
- }
- },
- 'additionalProperties': False,
- 'required': ['virtual_interfaces']
- }
-}
-
common_attach_volume_info = {
'type': 'object',
'properties': {
diff --git a/tempest/lib/api_schema/response/volume/volumes.py b/tempest/lib/api_schema/response/volume/volumes.py
index 4f44526..900e5ef 100644
--- a/tempest/lib/api_schema/response/volume/volumes.py
+++ b/tempest/lib/api_schema/response/volume/volumes.py
@@ -295,6 +295,7 @@
attach_volume = {'status_code': [202]}
set_bootable_volume = {'status_code': [200]}
detach_volume = {'status_code': [202]}
+terminate_connection = {'status_code': [202]}
reserve_volume = {'status_code': [202]}
unreserve_volume = {'status_code': [202]}
extend_volume = {'status_code': [202]}
diff --git a/tempest/lib/cli/base.py b/tempest/lib/cli/base.py
index c661d21..c9cffd2 100644
--- a/tempest/lib/cli/base.py
+++ b/tempest/lib/cli/base.py
@@ -97,6 +97,10 @@
:type identity_api_version: string
"""
+ CLIENTS_WITHOUT_IDENTITY_VERSION = ['nova', 'nova_manage', 'keystone',
+ 'glance', 'ceilometer', 'heat',
+ 'cinder', 'neutron', 'sahara']
+
def __init__(self, username='', password='', tenant_name='', uri='',
cli_dir='', insecure=False, prefix='', user_domain_name=None,
user_domain_id=None, project_domain_name=None,
@@ -377,8 +381,9 @@
self.password,
self.uri))
if self.identity_api_version:
- creds += ' --os-identity-api-version %s' % (
- self.identity_api_version)
+ if cmd not in self.CLIENTS_WITHOUT_IDENTITY_VERSION:
+ creds += ' --os-identity-api-version %s' % (
+ self.identity_api_version)
if self.user_domain_name is not None:
creds += ' --os-user-domain-name %s' % self.user_domain_name
if self.user_domain_id is not None:
diff --git a/tempest/lib/common/cred_client.py b/tempest/lib/common/cred_client.py
index f13d6d0..69798a4 100644
--- a/tempest/lib/common/cred_client.py
+++ b/tempest/lib/common/cred_client.py
@@ -58,6 +58,10 @@
def create_project(self, name, description):
pass
+ @abc.abstractmethod
+ def show_project(self, project_id):
+ pass
+
def _check_role_exists(self, role_name):
try:
roles = self._list_roles()
@@ -118,6 +122,9 @@
name=name, description=description)['tenant']
return tenant
+ def show_project(self, project_id):
+ return self.projects_client.show_tenant(project_id)['tenant']
+
def delete_project(self, project_id):
self.projects_client.delete_tenant(project_id)
@@ -159,6 +166,9 @@
domain_id=self.creds_domain['id'])['project']
return project
+ def show_project(self, project_id):
+ return self.projects_client.show_project(project_id)['project']
+
def delete_project(self, project_id):
self.projects_client.delete_project(project_id)
diff --git a/tempest/lib/common/dynamic_creds.py b/tempest/lib/common/dynamic_creds.py
index d687eb5..99647d4 100644
--- a/tempest/lib/common/dynamic_creds.py
+++ b/tempest/lib/common/dynamic_creds.py
@@ -163,7 +163,8 @@
os.network.PortsClient(),
os.network.SecurityGroupsClient())
- def _create_creds(self, admin=False, roles=None, scope='project'):
+ def _create_creds(self, admin=False, roles=None, scope='project',
+ project_id=None):
"""Create credentials with random name.
Creates user and role assignments on a project, domain, or system. When
@@ -177,6 +178,8 @@
:type roles: list
:param str scope: The scope for the role assignment, may be one of
'project', 'domain', or 'system'.
+        :param str project_id: The id of an existing project; when given,
+            the credentials are created under it instead of a new project.
:return: Readonly Credentials with network resources
:raises: Exception if scope is invalid
"""
@@ -190,12 +193,20 @@
'system': None
}
if scope == 'project':
- project_name = data_utils.rand_name(
- root, prefix=self.resource_prefix)
- project_desc = project_name + '-desc'
- project = self.creds_client.create_project(
- name=project_name, description=project_desc)
-
+ if not project_id:
+ project_name = data_utils.rand_name(
+ root, prefix=self.resource_prefix)
+ project_desc = project_name + '-desc'
+ project = self.creds_client.create_project(
+ name=project_name, description=project_desc)
+ else:
+ # NOTE(gmann) This is the case where creds are requested
+                # from the existing creds within the same project. We
+                # should not create a new project in this case.
+ project = self.creds_client.show_project(project_id)
+ project_name = project['name']
+ LOG.info("Using the existing project %s for scope %s and "
+ "roles: %s", project['id'], scope, roles)
# NOTE(andreaf) User and project can be distinguished from the
# context, having the same ID in both makes it easier to match them
# and debug.
@@ -372,48 +383,78 @@
self.routers_admin_client.add_router_interface(router_id,
subnet_id=subnet_id)
- def get_credentials(self, credential_type, scope=None):
- if not scope and self._creds.get(str(credential_type)):
- credentials = self._creds[str(credential_type)]
- elif scope and (
- self._creds.get("%s_%s" % (scope, str(credential_type)))):
- credentials = self._creds["%s_%s" % (scope, str(credential_type))]
+ def _get_project_id(self, credential_type, scope):
+ same_creds = [['admin'], ['member'], ['reader']]
+ same_alt_creds = [['alt_admin'], ['alt_member'], ['alt_reader']]
+ search_in = []
+ if credential_type in same_creds:
+ search_in = same_creds
+ elif credential_type in same_alt_creds:
+ search_in = same_alt_creds
+ for cred in search_in:
+ found_cred = self._creds.get("%s_%s" % (scope, str(cred)))
+ if found_cred:
+ project_id = found_cred.get("%s_%s" % (scope, 'id'))
+ LOG.debug("Reusing existing project %s from creds: %s ",
+ project_id, found_cred)
+ return project_id
+ return None
+
+ def get_credentials(self, credential_type, scope=None, by_role=False):
+ cred_prefix = ''
+ if by_role:
+ cred_prefix = 'role_'
+ if not scope and self._creds.get(
+ "%s%s" % (cred_prefix, str(credential_type))):
+ credentials = self._creds[
+ "%s%s" % (cred_prefix, str(credential_type))]
+ elif scope and (self._creds.get(
+ "%s%s_%s" % (cred_prefix, scope, str(credential_type)))):
+ credentials = self._creds[
+ "%s%s_%s" % (cred_prefix, scope, str(credential_type))]
else:
LOG.debug("Creating new dynamic creds for scope: %s and "
"credential_type: %s", scope, credential_type)
+ project_id = None
if scope:
- if credential_type in [['admin'], ['alt_admin']]:
+ if scope == 'project':
+ project_id = self._get_project_id(
+ credential_type, 'project')
+ if by_role:
credentials = self._create_creds(
- admin=True, scope=scope)
+ roles=credential_type, scope=scope)
+ elif credential_type in [['admin'], ['alt_admin']]:
+ credentials = self._create_creds(
+ admin=True, scope=scope, project_id=project_id)
elif credential_type in [['alt_member'], ['alt_reader']]:
cred_type = credential_type[0][4:]
if isinstance(cred_type, str):
cred_type = [cred_type]
credentials = self._create_creds(
- roles=cred_type, scope=scope)
- else:
+ roles=cred_type, scope=scope, project_id=project_id)
+ elif credential_type in [['member'], ['reader']]:
credentials = self._create_creds(
- roles=credential_type, scope=scope)
+ roles=credential_type, scope=scope,
+ project_id=project_id)
elif credential_type in ['primary', 'alt', 'admin']:
is_admin = (credential_type == 'admin')
credentials = self._create_creds(admin=is_admin)
else:
credentials = self._create_creds(roles=credential_type)
if scope:
- self._creds["%s_%s" %
- (scope, str(credential_type))] = credentials
+ self._creds["%s%s_%s" % (
+ cred_prefix, scope, str(credential_type))] = credentials
else:
- self._creds[str(credential_type)] = credentials
+ self._creds[
+ "%s%s" % (cred_prefix, str(credential_type))] = credentials
# Maintained until tests are ported
LOG.info("Acquired dynamic creds:\n"
" credentials: %s", credentials)
# NOTE(gmann): For 'domain' and 'system' scoped token, there is no
# project_id so we are skipping the network creation for both
- # scope. How these scoped token can create the network, Nova
- # server or other project mapped resources is one of the open
- # question and discussed a lot in Xena cycle PTG. Once we sort
- # out that then if needed we can update the network creation here.
- if (not scope or scope == 'project'):
+ # scope.
+        # We need to create network resources only once per project.
+ if (not project_id and (not scope or scope == 'project')):
if (self.neutron_available and self.create_networks):
network, subnet, router = self._create_network_resources(
credentials.tenant_id)
@@ -422,24 +463,22 @@
LOG.info("Created isolated network resources for:\n"
" credentials: %s", credentials)
else:
- LOG.info("Network resources are not created for scope: %s",
- scope)
+ LOG.info("Network resources are not created for requested "
+ "scope: %s and credentials: %s", scope, credentials)
return credentials
# TODO(gmann): Remove this method in favor of get_project_member_creds()
# after the deprecation phase.
def get_primary_creds(self):
- return self.get_credentials('primary')
+ return self.get_project_member_creds()
- # TODO(gmann): Remove this method in favor of get_project_admin_creds()
- # after the deprecation phase.
def get_admin_creds(self):
return self.get_credentials('admin')
- # TODO(gmann): Replace this method with more appropriate name.
- # like get_project_alt_member_creds()
+ # TODO(gmann): Remove this method in favor of
+ # get_project_alt_member_creds() after the deprecation phase.
def get_alt_creds(self):
- return self.get_credentials('alt')
+ return self.get_project_alt_member_creds()
def get_system_admin_creds(self):
return self.get_credentials(['admin'], scope='system')
@@ -481,9 +520,9 @@
roles = list(set(roles))
# The roles list as a str will become the index as the dict key for
# the created credentials set in the dynamic_creds dict.
- creds_name = str(roles)
+ creds_name = "role_%s" % str(roles)
if scope:
- creds_name = "%s_%s" % (scope, str(roles))
+ creds_name = "role_%s_%s" % (scope, str(roles))
exist_creds = self._creds.get(creds_name)
# If force_new flag is True 2 cred sets with the same roles are needed
# handle this by creating a separate index for old one to store it
@@ -492,7 +531,7 @@
new_index = creds_name + '-' + str(len(self._creds))
self._creds[new_index] = exist_creds
del self._creds[creds_name]
- return self.get_credentials(roles, scope=scope)
+ return self.get_credentials(roles, scope=scope, by_role=True)
def _clear_isolated_router(self, router_id, router_name):
client = self.routers_admin_client
@@ -553,31 +592,20 @@
if not self._creds:
return
self._clear_isolated_net_resources()
+ project_ids = set()
for creds in self._creds.values():
+            # NOTE(gmann): With the new RBAC personas a single project can
+            # have multiple users created under it; to avoid conflicts we
+            # clean up the projects at the end.
+            # Add the project only when its id is not None, which skips
+            # domain and system creds.
+ if creds.project_id:
+ project_ids.add(creds.project_id)
try:
self.creds_client.delete_user(creds.user_id)
except lib_exc.NotFound:
LOG.warning("user with name: %s not found for delete",
creds.username)
- if creds.tenant_id:
- # NOTE(zhufl): Only when neutron's security_group ext is
- # enabled, cleanup_default_secgroup will not raise error. But
- # here cannot use test_utils.is_extension_enabled for it will
- # cause "circular dependency". So here just use try...except to
- # ensure tenant deletion without big changes.
- try:
- if self.neutron_available:
- self.cleanup_default_secgroup(
- self.security_groups_admin_client, creds.tenant_id)
- except lib_exc.NotFound:
- LOG.warning("failed to cleanup tenant %s's secgroup",
- creds.tenant_name)
- try:
- self.creds_client.delete_project(creds.tenant_id)
- except lib_exc.NotFound:
- LOG.warning("tenant with name: %s not found for delete",
- creds.tenant_name)
-
# if cred is domain scoped, delete ephemeral domain
# do not delete default domain
if (hasattr(creds, 'domain_id') and
@@ -587,6 +615,28 @@
except lib_exc.NotFound:
LOG.warning("domain with name: %s not found for delete",
creds.domain_name)
+ for project_id in project_ids:
+ # NOTE(zhufl): Only when neutron's security_group ext is
+ # enabled, cleanup_default_secgroup will not raise error. But
+ # here cannot use test_utils.is_extension_enabled for it will
+ # cause "circular dependency". So here just use try...except to
+ # ensure tenant deletion without big changes.
+ LOG.info("Deleting project and security group for project: %s",
+ project_id)
+
+ try:
+ if self.neutron_available:
+ self.cleanup_default_secgroup(
+ self.security_groups_admin_client, project_id)
+ except lib_exc.NotFound:
+ LOG.warning("failed to cleanup tenant %s's secgroup",
+ project_id)
+ try:
+ self.creds_client.delete_project(project_id)
+ except lib_exc.NotFound:
+ LOG.warning("tenant with id: %s not found for delete",
+ project_id)
+
self._creds = {}
def is_multi_user(self):
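The project reuse above can be exercised directly through the provider. A rough sketch, assuming `provider` is an already configured DynamicCredentialProvider with project-scoped personas enabled:

    # Illustrative only: roles are passed as lists, matching the
    # get_credentials() signature introduced above.
    member = provider.get_credentials(['member'], scope='project')
    reader = provider.get_credentials(['reader'], scope='project')
    admin = provider.get_credentials(['admin'], scope='project')
    # _get_project_id() finds the project created for the first persona,
    # so all three credential sets share one project.
    assert member.project_id == reader.project_id == admin.project_id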
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index 33f871b..d163968 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -60,7 +60,12 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingProxyHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
- return Response(r), r.data
+ if not kwargs.get('preload_content', True):
+ # This means we asked urllib3 for streaming content, so we
+ # need to return the raw response and not read any data yet
+ return r, b''
+ else:
+ return Response(r), r.data
class ClosingHttp(urllib3.poolmanager.PoolManager):
@@ -109,4 +114,9 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
- return Response(r), r.data
+ if not kwargs.get('preload_content', True):
+ # This means we asked urllib3 for streaming content, so we
+ # need to return the raw response and not read any data yet
+ return r, b''
+ else:
+ return Response(r), r.data
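A rough usage sketch of the streaming path above, assuming `http_client` is an existing ClosingHttp instance and `url` points at a large response:

    # With preload_content=False the raw urllib3.HTTPResponse comes back
    # and nothing has been read yet; the body placeholder is b''.
    resp, body = http_client.request('GET', url, preload_content=False)
    assert body == b''
    for chunk in resp.stream(64 * 1024):  # urllib3 incremental read
        pass  # consume each chunk here
    resp.release_conn()  # the connection must be released when streaming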
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index a11b7c1..6cf5b73 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -19,6 +19,7 @@
import re
import time
import urllib
+import urllib3
import jsonschema
from oslo_log import log as logging
@@ -298,7 +299,7 @@
"""
return self.request('POST', url, extra_headers, headers, body, chunked)
- def get(self, url, headers=None, extra_headers=False):
+ def get(self, url, headers=None, extra_headers=False, chunked=False):
"""Send a HTTP GET request using keystone service catalog and auth
:param str url: the relative url to send the get request to
@@ -307,11 +308,19 @@
returned by the get_headers() method are to
be used but additional headers are needed in
the request pass them in as a dict.
+ :param bool chunked: Boolean value that indicates if we should stream
+ the response instead of reading it all at once.
+ If True, data will be empty and the raw urllib3
+ response object will be returned.
+ NB: If you pass True here, you **MUST** call
+ release_conn() on the response object before
+ finishing!
:return: a tuple with the first entry containing the response headers
and the second the response body
:rtype: tuple
"""
- return self.request('GET', url, extra_headers, headers)
+ return self.request('GET', url, extra_headers, headers,
+ chunked=chunked)
def delete(self, url, headers=None, body=None, extra_headers=False):
"""Send a HTTP DELETE request using keystone service catalog and auth
@@ -480,7 +489,7 @@
self.LOG.info(
'Request (%s): %s %s %s%s',
caller_name,
- resp['status'],
+ resp.status,
method,
req_url,
secs,
@@ -617,17 +626,30 @@
"""
if headers is None:
headers = self.get_headers()
+ # In urllib3, chunked only affects the upload. However, we may
+ # want to read large responses to GET incrementally. Re-purpose
+ # chunked=True on a GET to also control how we handle the response.
+ preload = not (method.lower() == 'get' and chunked)
+ if not preload:
+ # NOTE(danms): Not specifically necessary, but don't send
+ # chunked=True to urllib3 on a GET, since it is technically
+ # for PUT/POST type operations
+ chunked = False
# Do the actual request, and time it
start = time.time()
self._log_request_start(method, url)
resp, resp_body = self.http_obj.request(
url, method, headers=headers,
- body=body, chunked=chunked)
+ body=body, chunked=chunked, preload_content=preload)
end = time.time()
req_body = body if log_req_body is None else log_req_body
- self._log_request(method, url, resp, secs=(end - start),
- req_headers=headers, req_body=req_body,
- resp_body=resp_body)
+ if preload:
+ # NOTE(danms): If we are reading the whole response, we can do
+ # this logging. If not, skip the logging because it will result
+ # in us reading the response data prematurely.
+ self._log_request(method, url, resp, secs=(end - start),
+ req_headers=headers, req_body=req_body,
+ resp_body=resp_body)
return resp, resp_body
def request(self, method, url, extra_headers=False, headers=None,
@@ -773,6 +795,10 @@
# resp this could possibly fail
if str(type(resp)) == "<type 'instance'>":
ctype = resp.getheader('content-type')
+ elif isinstance(resp, urllib3.HTTPResponse):
+ # If we requested chunked=True streaming, this will be a raw
+ # urllib3.HTTPResponse
+ ctype = resp.getheaders()['content-type']
else:
try:
ctype = resp['content-type']
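At the RestClient level the same behaviour is reached with chunked=True on a GET. A hedged sketch, with `client` and `image_id` as placeholders:

    resp, _ = client.get('images/%s/file' % image_id, chunked=True)
    try:
        for chunk in resp.stream(64 * 1024):
            pass  # process the raw bytes incrementally
    finally:
        # Mandatory when chunked=True, per the docstring above.
        resp.release_conn()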
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index cb59a82..aad04b8 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -53,7 +53,8 @@
def __init__(self, host, username, password=None, timeout=300, pkey=None,
channel_timeout=10, look_for_keys=False, key_filename=None,
- port=22, proxy_client=None, ssh_key_type='rsa'):
+ port=22, proxy_client=None, ssh_key_type='rsa',
+ ssh_allow_agent=True):
"""SSH client.
Many of parameters are just passed to the underlying implementation
@@ -76,6 +77,9 @@
for ssh-over-ssh. The default is None, which means
not to use ssh-over-ssh.
:param ssh_key_type: ssh key type (rsa, ecdsa)
+        :param ssh_allow_agent: boolean, default True, whether the SSH client
+            is allowed to also use the ssh-agent. Tests that rely on explicit
+            passwords may need this set to False.
:type proxy_client: ``tempest.lib.common.ssh.Client`` object
"""
self.host = host
@@ -105,6 +109,7 @@
raise exceptions.SSHClientProxyClientLoop(
host=self.host, port=self.port, username=self.username)
self._proxy_conn = None
+ self.ssh_allow_agent = ssh_allow_agent
def _get_ssh_connection(self, sleep=1.5, backoff=1):
"""Returns an ssh connection to the specified host."""
@@ -133,7 +138,7 @@
look_for_keys=self.look_for_keys,
key_filename=self.key_filename,
timeout=self.channel_timeout, pkey=self.pkey,
- sock=proxy_chan)
+ sock=proxy_chan, allow_agent=self.ssh_allow_agent)
LOG.info("ssh connection to %s@%s successfully created",
self.username, self.host)
return ssh
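A minimal sketch of the new flag, for tests that must authenticate with an explicit password and want any ssh-agent keys ignored (host and credentials below are made up):

    from tempest.lib.common import ssh

    client = ssh.Client('198.51.100.10', 'cirros', password='gocubsgo',
                        ssh_allow_agent=False)
    output = client.exec_command('uname -a')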
diff --git a/tempest/lib/common/utils/linux/remote_client.py b/tempest/lib/common/utils/linux/remote_client.py
index d0cdc25..662b452 100644
--- a/tempest/lib/common/utils/linux/remote_client.py
+++ b/tempest/lib/common/utils/linux/remote_client.py
@@ -69,7 +69,8 @@
server=None, servers_client=None, ssh_timeout=300,
connect_timeout=60, console_output_enabled=True,
ssh_shell_prologue="set -eu -o pipefail; PATH=$PATH:/sbin;",
- ping_count=1, ping_size=56, ssh_key_type='rsa'):
+ ping_count=1, ping_size=56, ssh_key_type='rsa',
+ ssh_allow_agent=True):
"""Executes commands in a VM over ssh
:param ip_address: IP address to ssh to
@@ -85,6 +86,8 @@
:param ping_count: Number of ping packets
:param ping_size: Packet size for ping packets
:param ssh_key_type: ssh key type (rsa, ecdsa)
+        :param ssh_allow_agent: Boolean, whether ssh-agent support is
+            permitted. Defaults to True.
"""
self.server = server
self.servers_client = servers_client
@@ -94,11 +97,14 @@
self.ping_count = ping_count
self.ping_size = ping_size
self.ssh_key_type = ssh_key_type
+ self.ssh_allow_agent = ssh_allow_agent
self.ssh_client = ssh.Client(ip_address, username, password,
ssh_timeout, pkey=pkey,
channel_timeout=connect_timeout,
- ssh_key_type=ssh_key_type)
+ ssh_key_type=ssh_key_type,
+ ssh_allow_agent=ssh_allow_agent,
+ )
@debug_ssh
def exec_command(self, cmd):
diff --git a/tempest/lib/common/utils/test_utils.py b/tempest/lib/common/utils/test_utils.py
index 4cf8351..c79db15 100644
--- a/tempest/lib/common/utils/test_utils.py
+++ b/tempest/lib/common/utils/test_utils.py
@@ -93,6 +93,7 @@
if attempt >= 3:
raise
LOG.warning('Got ServerFault while running %s, retrying...', func)
+ time.sleep(1)
def call_until_true(func, duration, sleep_for, *args, **kwargs):
diff --git a/tempest/lib/decorators.py b/tempest/lib/decorators.py
index a4633ca..7d54c1a 100644
--- a/tempest/lib/decorators.py
+++ b/tempest/lib/decorators.py
@@ -13,6 +13,7 @@
# under the License.
import functools
+from types import MethodType
import uuid
from oslo_log import log as logging
@@ -189,3 +190,41 @@
raise e
return inner
return decor
+
+
+class cleanup_order:
+ """Descriptor for base create function to cleanup based on caller.
+
+    Some create functions are classmethods whose cleanup is managed by the
+    class with addClassResourceCleanup. When such a function is called at
+    class level (resource_setup) that is fine, but when it is called from a
+    testcase there is no reason to defer deleting the resource until the
+    class tears down.
+
+    The testcase results would also not reflect the resource cleanup: a test
+    may pass while the class cleanup fails. If the resources were created by
+    a testcase it is better to let that testcase delete them and report any
+    failure as part of the testcase.
+ """
+
+ def __init__(self, func):
+ self.func = func
+ functools.update_wrapper(self, func)
+
+ def __get__(self, instance, owner):
+ if instance:
+ # instance is the caller
+ instance.cleanup = instance.addCleanup
+ instance.__name__ = owner.__name__
+ return MethodType(self.func, instance)
+ elif owner:
+ # class is the caller
+ owner.cleanup = owner.addClassResourceCleanup
+ return MethodType(self.func, owner)
+
+
+def serial(cls):
+ """A decorator to mark a test class for serial execution"""
+ cls._serial = True
+ LOG.debug('marked %s for serial execution', cls.__name__)
+ return cls
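A hedged sketch of how the two new decorators could be combined; the base class, client attribute, and resource are illustrative, not prescribed by this change:

    from tempest.api.compute import base
    from tempest.lib import decorators


    @decorators.serial
    class FlavorExhaustionTest(base.BaseV2ComputeTest):
        """Runs apart from the parallel test set, per HACKING.rst."""

        @decorators.cleanup_order
        def create_flavor(self, **kwargs):
            flavor = self.admin_flavors_client.create_flavor(
                **kwargs)['flavor']
            # cleanup resolves to addCleanup when called from a testcase
            # and to addClassResourceCleanup when called at class level.
            self.cleanup(self.admin_flavors_client.delete_flavor,
                         flavor['id'])
            return flavor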
diff --git a/tempest/lib/services/clients.py b/tempest/lib/services/clients.py
index 8b5c758..86ce6ec 100644
--- a/tempest/lib/services/clients.py
+++ b/tempest/lib/services/clients.py
@@ -48,7 +48,6 @@
'placement': placement,
'identity.v2': identity.v2,
'identity.v3': identity.v3,
- 'image.v1': image.v1,
'image.v2': image.v2,
'network': network,
'object-storage': object_storage,
diff --git a/tempest/lib/services/compute/__init__.py b/tempest/lib/services/compute/__init__.py
index 8d07a45..10ec9be 100644
--- a/tempest/lib/services/compute/__init__.py
+++ b/tempest/lib/services/compute/__init__.py
@@ -24,12 +24,7 @@
CertificatesClient
from tempest.lib.services.compute.extensions_client import \
ExtensionsClient
-from tempest.lib.services.compute.fixed_ips_client import FixedIPsClient
from tempest.lib.services.compute.flavors_client import FlavorsClient
-from tempest.lib.services.compute.floating_ip_pools_client import \
- FloatingIPPoolsClient
-from tempest.lib.services.compute.floating_ips_bulk_client import \
- FloatingIPsBulkClient
from tempest.lib.services.compute.floating_ips_client import \
FloatingIPsClient
from tempest.lib.services.compute.hosts_client import HostsClient
@@ -52,6 +47,8 @@
SecurityGroupRulesClient
from tempest.lib.services.compute.security_groups_client import \
SecurityGroupsClient
+from tempest.lib.services.compute.server_external_events_client \
+ import ServerExternalEventsClient
from tempest.lib.services.compute.server_groups_client import \
ServerGroupsClient
from tempest.lib.services.compute.servers_client import ServersClient
@@ -67,14 +64,13 @@
__all__ = ['AgentsClient', 'AggregatesClient', 'AssistedVolumeSnapshotsClient',
'AvailabilityZoneClient', 'BaremetalNodesClient',
- 'CertificatesClient', 'ExtensionsClient', 'FixedIPsClient',
- 'FlavorsClient', 'FloatingIPPoolsClient',
- 'FloatingIPsBulkClient', 'FloatingIPsClient', 'HostsClient',
- 'HypervisorClient', 'ImagesClient', 'InstanceUsagesAuditLogClient',
+ 'CertificatesClient', 'ExtensionsClient', 'FlavorsClient',
+ 'FloatingIPsClient', 'HostsClient', 'HypervisorClient',
+ 'ImagesClient', 'InstanceUsagesAuditLogClient',
'InterfacesClient', 'KeyPairsClient', 'LimitsClient',
'MigrationsClient', 'NetworksClient', 'QuotaClassesClient',
'QuotasClient', 'SecurityGroupDefaultRulesClient',
'SecurityGroupRulesClient', 'SecurityGroupsClient',
- 'ServerGroupsClient', 'ServersClient', 'ServicesClient',
- 'SnapshotsClient', 'TenantNetworksClient', 'TenantUsagesClient',
- 'VersionsClient', 'VolumesClient']
+ 'ServerExternalEventsClient', 'ServerGroupsClient', 'ServersClient',
+ 'ServicesClient', 'SnapshotsClient', 'TenantNetworksClient',
+ 'TenantUsagesClient', 'VersionsClient', 'VolumesClient']
diff --git a/tempest/lib/services/compute/fixed_ips_client.py b/tempest/lib/services/compute/fixed_ips_client.py
deleted file mode 100644
index 098c856..0000000
--- a/tempest/lib/services/compute/fixed_ips_client.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Copyright 2013 IBM Corp
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.api_schema.response.compute.v2_1 import fixed_ips as schema
-from tempest.lib.common import rest_client
-from tempest.lib.services.compute import base_compute_client
-
-
-class FixedIPsClient(base_compute_client.BaseComputeClient):
-
- def show_fixed_ip(self, fixed_ip):
- url = "os-fixed-ips/%s" % fixed_ip
- resp, body = self.get(url)
- body = json.loads(body)
- self.validate_response(schema.get_fixed_ip, resp, body)
- return rest_client.ResponseBody(resp, body)
-
- def reserve_fixed_ip(self, fixed_ip, **kwargs):
- """Reserve/Unreserve a fixed IP.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/compute/#reserve-or-release-a-fixed-ip
- """
- url = "os-fixed-ips/%s/action" % fixed_ip
- resp, body = self.post(url, json.dumps(kwargs))
- self.validate_response(schema.reserve_unreserve_fixed_ip, resp, body)
- return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/compute/floating_ip_pools_client.py b/tempest/lib/services/compute/floating_ip_pools_client.py
deleted file mode 100644
index aa065b8..0000000
--- a/tempest/lib/services/compute/floating_ip_pools_client.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from urllib import parse as urllib
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.api_schema.response.compute.v2_1 import floating_ips as schema
-from tempest.lib.common import rest_client
-from tempest.lib.services.compute import base_compute_client
-
-
-class FloatingIPPoolsClient(base_compute_client.BaseComputeClient):
-
- def list_floating_ip_pools(self, params=None):
- """Gets all floating IP Pools list."""
- url = 'os-floating-ip-pools'
- if params:
- url += '?%s' % urllib.urlencode(params)
-
- resp, body = self.get(url)
- body = json.loads(body)
- self.validate_response(schema.list_floating_ip_pools, resp, body)
- return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/compute/floating_ips_bulk_client.py b/tempest/lib/services/compute/floating_ips_bulk_client.py
deleted file mode 100644
index 5f06009..0000000
--- a/tempest/lib/services/compute/floating_ips_bulk_client.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.api_schema.response.compute.v2_1 import floating_ips as schema
-from tempest.lib.common import rest_client
-from tempest.lib.services.compute import base_compute_client
-
-
-class FloatingIPsBulkClient(base_compute_client.BaseComputeClient):
-
- def create_floating_ips_bulk(self, ip_range, pool, interface):
- """Allocate floating IPs in bulk."""
- post_body = {
- 'ip_range': ip_range,
- 'pool': pool,
- 'interface': interface
- }
- post_body = json.dumps({'floating_ips_bulk_create': post_body})
- resp, body = self.post('os-floating-ips-bulk', post_body)
- body = json.loads(body)
- self.validate_response(schema.create_floating_ips_bulk, resp, body)
- return rest_client.ResponseBody(resp, body)
-
- def list_floating_ips_bulk(self):
- """Gets all floating IPs in bulk."""
- resp, body = self.get('os-floating-ips-bulk')
- body = json.loads(body)
- self.validate_response(schema.list_floating_ips_bulk, resp, body)
- return rest_client.ResponseBody(resp, body)
-
- def delete_floating_ips_bulk(self, ip_range):
- """Deletes the provided floating IPs in bulk."""
- post_body = json.dumps({'ip_range': ip_range})
- resp, body = self.put('os-floating-ips-bulk/delete', post_body)
- body = json.loads(body)
- self.validate_response(schema.delete_floating_ips_bulk, resp, body)
- return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/compute/server_external_events_client.py b/tempest/lib/services/compute/server_external_events_client.py
new file mode 100644
index 0000000..683dce1
--- /dev/null
+++ b/tempest/lib/services/compute/server_external_events_client.py
@@ -0,0 +1,36 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.api_schema.response.compute.v2_1 import \
+ server_external_events as schema
+from tempest.lib.common import rest_client
+from tempest.lib.services.compute import base_compute_client
+
+
+class ServerExternalEventsClient(base_compute_client.BaseComputeClient):
+
+ def create_server_external_events(self, events):
+ """Create Server External Events.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/compute/#run-events
+ """
+ post_body = json.dumps({'events': events})
+ resp, body = self.post("os-server-external-events", post_body)
+ body = json.loads(body)
+ self.validate_response(schema.create, resp, body)
+ return rest_client.ResponseBody(resp, body)
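A usage sketch for the new client; the event name follows the os-server-external-events API and the ids are placeholders:

    # `events_client` is assumed to come from a client manager, and
    # server_id/port_id to reference existing resources.
    events = [{
        'name': 'network-vif-plugged',
        'server_uuid': server_id,
        'tag': port_id,
    }]
    body = events_client.create_server_external_events(events=events)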
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
index d2bdb6e..7e3b99f 100644
--- a/tempest/lib/services/compute/servers_client.py
+++ b/tempest/lib/services/compute/servers_client.py
@@ -676,14 +676,6 @@
self.validate_response(schema.get_remote_consoles, resp, body)
return rest_client.ResponseBody(resp, body)
- def list_virtual_interfaces(self, server_id):
- """List the virtual interfaces used in an instance."""
- resp, body = self.get('/'.join(['servers', server_id,
- 'os-virtual-interfaces']))
- body = json.loads(body)
- self.validate_response(schema.list_virtual_interfaces, resp, body)
- return rest_client.ResponseBody(resp, body)
-
def rescue_server(self, server_id, **kwargs):
"""Rescue the provided server.
diff --git a/tempest/lib/services/image/__init__.py b/tempest/lib/services/image/__init__.py
index 4b01663..ee1c32c 100644
--- a/tempest/lib/services/image/__init__.py
+++ b/tempest/lib/services/image/__init__.py
@@ -12,7 +12,6 @@
# License for the specific language governing permissions and limitations under
# the License.
-from tempest.lib.services.image import v1
from tempest.lib.services.image import v2
-__all__ = ['v1', 'v2']
+__all__ = ['v2']
diff --git a/tempest/lib/services/image/v1/__init__.py b/tempest/lib/services/image/v1/__init__.py
deleted file mode 100644
index 1f33cef..0000000
--- a/tempest/lib/services/image/v1/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-import warnings
-
-from tempest.lib.services.image.v1.image_members_client import \
- ImageMembersClient
-from tempest.lib.services.image.v1.images_client import ImagesClient
-
-__all__ = ['ImageMembersClient', 'ImagesClient']
-
-
-warnings.warn(
- "The tempest.lib.services.image.v1 module (Image v1 APIs service "
- "clients) is deprecated in favor of tempest.lib.services.image.v2 "
- "(Image v2 APIs service clients) and will be removed once Tempest stop "
- "supporting stable Ussuri.", DeprecationWarning)
diff --git a/tempest/lib/services/image/v1/image_members_client.py b/tempest/lib/services/image/v1/image_members_client.py
deleted file mode 100644
index 7499ec0..0000000
--- a/tempest/lib/services/image/v1/image_members_client.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.common import rest_client
-
-
-class ImageMembersClient(rest_client.RestClient):
- api_version = "v1"
-
- def list_image_members(self, image_id):
- """List all members of an image."""
- url = 'images/%s/members' % image_id
- resp, body = self.get(url)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_shared_images(self, tenant_id):
- """List image memberships for the given tenant.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/image/v1/#list-shared-images
- """
-
- url = 'shared-images/%s' % tenant_id
- resp, body = self.get(url)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def create_image_member(self, image_id, member_id, **kwargs):
- """Add a member to an image.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/image/v1/#add-member-to-image
- """
- url = 'images/%s/members/%s' % (image_id, member_id)
- body = json.dumps({'member': kwargs})
- resp, __ = self.put(url, body)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def delete_image_member(self, image_id, member_id):
- """Removes a membership from the image.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/image/v1/#remove-member
- """
- url = 'images/%s/members/%s' % (image_id, member_id)
- resp, __ = self.delete(url)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
diff --git a/tempest/lib/services/image/v1/images_client.py b/tempest/lib/services/image/v1/images_client.py
deleted file mode 100644
index c9a4a94..0000000
--- a/tempest/lib/services/image/v1/images_client.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# Copyright 2013 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import functools
-from urllib import parse as urllib
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.common import rest_client
-from tempest.lib import exceptions as lib_exc
-
-CHUNKSIZE = 1024 * 64 # 64kB
-
-
-class ImagesClient(rest_client.RestClient):
- api_version = "v1"
-
- def _create_with_data(self, headers, data):
- # We are going to do chunked transfert, so split the input data
- # info fixed-sized chunks.
- headers['Content-Type'] = 'application/octet-stream'
- data = iter(functools.partial(data.read, CHUNKSIZE), b'')
- resp, body = self.request('POST', 'images',
- headers=headers, body=data, chunked=True)
- self._error_checker(resp, body)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def _update_with_data(self, image_id, headers, data):
- # We are going to do chunked transfert, so split the input data
- # info fixed-sized chunks.
- headers['Content-Type'] = 'application/octet-stream'
- data = iter(functools.partial(data.read, CHUNKSIZE), b'')
- url = 'images/%s' % image_id
- resp, body = self.request('PUT', url, headers=headers,
- body=data, chunked=True)
- self._error_checker(resp, body)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- @property
- def http(self):
- if self._http is None:
- self._http = self._get_http()
- return self._http
-
- def create_image(self, data=None, headers=None):
- """Create an image.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/image/v1/index.html#create-image
- """
- if headers is None:
- headers = {}
-
- if data is not None:
- return self._create_with_data(headers, data)
-
- resp, body = self.post('images', None, headers)
- self.expected_success(201, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def update_image(self, image_id, data=None, headers=None):
- """Update an image.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/image/v1/index.html#update-image
- """
- if headers is None:
- headers = {}
-
- if data is not None:
- return self._update_with_data(image_id, headers, data)
-
- url = 'images/%s' % image_id
- resp, body = self.put(url, None, headers)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def delete_image(self, image_id):
- url = 'images/%s' % image_id
- resp, body = self.delete(url)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def list_images(self, detail=False, **kwargs):
- """Return a list of all images filtered by input parameters.
-
- For a full list of available parameters, please refer to the official
- API reference:
- https://docs.openstack.org/api-ref/image/v1/#list-images
-
- Most parameters except the following are passed to the API without
- any changes.
- :param changes_since: The name is changed to changes-since
- """
- url = 'images'
-
- if detail:
- url += '/detail'
-
- if 'changes_since' in kwargs:
- kwargs['changes-since'] = kwargs.pop('changes_since')
-
- if kwargs:
- url += '?%s' % urllib.urlencode(kwargs)
-
- resp, body = self.get(url)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def check_image(self, image_id):
- """Check image metadata."""
- url = 'images/%s' % image_id
- resp, body = self.head(url)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def show_image(self, image_id):
- """Get image details plus the image itself."""
- url = 'images/%s' % image_id
- resp, body = self.get(url)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBodyData(resp, body)
-
- def is_resource_deleted(self, id):
- try:
- resp = self.check_image(id)
- if resp.response["x-image-meta-status"] == 'deleted':
- return True
- except lib_exc.NotFound:
- return True
- return False
-
- @property
- def resource_type(self):
- """Returns the primary type of resource this client works with."""
- return 'image_meta'
diff --git a/tempest/lib/services/image/v2/__init__.py b/tempest/lib/services/image/v2/__init__.py
index a2f5bdc..5e303e3 100644
--- a/tempest/lib/services/image/v2/__init__.py
+++ b/tempest/lib/services/image/v2/__init__.py
@@ -27,9 +27,11 @@
from tempest.lib.services.image.v2.resource_types_client import \
ResourceTypesClient
from tempest.lib.services.image.v2.schemas_client import SchemasClient
+from tempest.lib.services.image.v2.tasks_client import TaskClient
from tempest.lib.services.image.v2.versions_client import VersionsClient
+
__all__ = ['ImageMembersClient', 'ImagesClient', 'ImageCacheClient',
'NamespaceObjectsClient', 'NamespacePropertiesClient',
'NamespaceTagsClient', 'NamespacesClient', 'ResourceTypesClient',
- 'SchemasClient', 'VersionsClient']
+ 'SchemasClient', 'TaskClient', 'VersionsClient']
diff --git a/tempest/lib/services/image/v2/images_client.py b/tempest/lib/services/image/v2/images_client.py
index ae6ce25..8460b57 100644
--- a/tempest/lib/services/image/v2/images_client.py
+++ b/tempest/lib/services/image/v2/images_client.py
@@ -248,17 +248,26 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp)
- def show_image_file(self, image_id):
+ def show_image_file(self, image_id, chunked=False):
"""Download binary image data.
+ :param bool chunked: If True, do not read the body and return only
+ the raw urllib3 response object for processing.
+ NB: If you pass True here, you **MUST** call
+ release_conn() on the response object before
+ finishing!
+
For a full list of available parameters, please refer to the official
API reference:
https://docs.openstack.org/api-ref/image/v2/#download-binary-image-data
"""
url = 'images/%s/file' % image_id
- resp, body = self.get(url)
+ resp, body = self.get(url, chunked=chunked)
self.expected_success([200, 204, 206], resp.status)
- return rest_client.ResponseBodyData(resp, body)
+ if chunked:
+ return resp
+ else:
+ return rest_client.ResponseBodyData(resp, body)
def add_image_tag(self, image_id, tag):
"""Add an image tag.
diff --git a/tempest/lib/services/image/v2/tasks_client.py b/tempest/lib/services/image/v2/tasks_client.py
new file mode 100644
index 0000000..2cb33eb
--- /dev/null
+++ b/tempest/lib/services/image/v2/tasks_client.py
@@ -0,0 +1,70 @@
+# Copyright 2023 Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+from urllib import parse as urllib
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.common import rest_client
+
+CHUNKSIZE = 1024 * 64 # 64kB
+
+
+class TaskClient(rest_client.RestClient):
+ api_version = "v2"
+
+ def create_task(self, **kwargs):
+ """Create a task.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/image/v2/#create-task
+ """
+ data = json.dumps(kwargs)
+ resp, body = self.post('tasks', data)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_tasks(self, **kwargs):
+ """List tasks.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/image/v2/#list-tasks
+ """
+ url = 'tasks'
+
+ if kwargs:
+ url += '?%s' % urllib.urlencode(kwargs)
+
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def show_tasks(self, task_id):
+ """Show task details.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/image/v2/#show-task-details
+ """
+ url = 'tasks/%s' % task_id
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
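A hedged sketch of the new TaskClient; the import-task input mirrors the Glance tasks API, with the source URL and formats as placeholders:

    # `tasks_client` is assumed to be a TaskClient instance.
    task = tasks_client.create_task(
        type='import',
        input={'import_from': 'http://example.com/cirros.qcow2',
               'import_from_format': 'qcow2',
               'image_properties': {'disk_format': 'qcow2',
                                    'container_format': 'bare'}})
    status = tasks_client.show_tasks(task['id'])['status']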
diff --git a/tempest/lib/services/object_storage/container_client.py b/tempest/lib/services/object_storage/container_client.py
index ee87726..bdca0d0 100644
--- a/tempest/lib/services/object_storage/container_client.py
+++ b/tempest/lib/services/object_storage/container_client.py
@@ -43,7 +43,7 @@
url = str(container_name)
resp, body = self.put(url, body=None, headers=headers)
- self.expected_success([201, 202], resp.status)
+ self.expected_success([201, 202, 204], resp.status)
return resp, body
# NOTE: This alias is for the usability because PUT can be used for both
diff --git a/tempest/lib/services/volume/v3/attachments_client.py b/tempest/lib/services/volume/v3/attachments_client.py
index 5e448f7..ef8be37 100644
--- a/tempest/lib/services/volume/v3/attachments_client.py
+++ b/tempest/lib/services/volume/v3/attachments_client.py
@@ -26,3 +26,11 @@
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
+
+ def delete_attachment(self, attachment_id):
+ """Delete volume attachment."""
+ url = "attachments/%s" % (attachment_id)
+ resp, body = self.delete(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/volume/v3/volumes_client.py b/tempest/lib/services/volume/v3/volumes_client.py
index ad8bd71..c6f8973 100644
--- a/tempest/lib/services/volume/v3/volumes_client.py
+++ b/tempest/lib/services/volume/v3/volumes_client.py
@@ -205,14 +205,23 @@
self.validate_response(schema.set_bootable_volume, resp, body)
return rest_client.ResponseBody(resp, body)
- def detach_volume(self, volume_id):
+ def detach_volume(self, volume_id, **kwargs):
"""Detaches a volume from an instance."""
- post_body = json.dumps({'os-detach': {}})
+ post_body = json.dumps({'os-detach': kwargs})
url = 'volumes/%s/action' % (volume_id)
resp, body = self.post(url, post_body)
self.validate_response(schema.detach_volume, resp, body)
return rest_client.ResponseBody(resp, body)
+ def terminate_connection(self, volume_id, connector):
+ """Detaches a volume from an instance using terminate_connection."""
+ post_body = json.dumps(
+ {'os-terminate_connection': {'connector': connector}})
+ url = 'volumes/%s/action' % (volume_id)
+ resp, body = self.post(url, post_body)
+ self.validate_response(schema.terminate_connection, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
def reserve_volume(self, volume_id):
"""Reserves a volume."""
post_body = json.dumps({'os-reserve': {}})
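A short sketch of the detach-related additions; these are alternative paths, not a sequence, and the connector contents are assumed:

    # os-detach now forwards extra kwargs such as attachment_id.
    volumes_client.detach_volume(volume_id, attachment_id=attachment_id)
    # Or detach via terminate_connection with a backend-specific connector.
    volumes_client.terminate_connection(volume_id, connector={})
    # Or remove the attachment record through the attachments API.
    attachments_client.delete_attachment(attachment_id)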
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 2843498..0450d94 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -14,6 +14,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+import copy
import os
import subprocess
@@ -24,7 +25,6 @@
from oslo_utils import netutils
from tempest.common import compute
-from tempest.common import image as common_image
from tempest.common.utils.linux import remote_client
from tempest.common.utils import net_utils
from tempest.common import waiters
@@ -89,6 +89,16 @@
volume_microversion=cls.volume_request_microversion,
placement_microversion=cls.placement_request_microversion)
+ @classmethod
+ def setup_credentials(cls):
+ # Setting network=True, subnet=True creates a default network
+ cls.set_network_resources(
+ network=True,
+ subnet=True,
+ router=True,
+ dhcp=True)
+ super(ScenarioTest, cls).setup_credentials()
+
def setup_compute_client(cls):
"""Compute client"""
cls.compute_images_client = cls.os_primary.compute_images_client
@@ -113,15 +123,11 @@
"""This setup the service clients for the tests"""
super(ScenarioTest, cls).setup_clients()
if CONF.service_available.glance:
- # Check if glance v1 is available to determine which client to use.
- if CONF.image_feature_enabled.api_v1:
- cls.image_client = cls.os_primary.image_client
- elif CONF.image_feature_enabled.api_v2:
+ if CONF.image_feature_enabled.api_v2:
cls.image_client = cls.os_primary.image_client_v2
else:
raise lib_exc.InvalidConfiguration(
- 'Either api_v1 or api_v2 must be True in '
- '[image-feature-enabled].')
+ 'api_v2 must be True in [image-feature-enabled].')
cls.setup_compute_client(cls)
cls.setup_network_client(cls)
@@ -145,6 +151,7 @@
- 'binding:vnic_type' - defaults to CONF.network.port_vnic_type
- 'binding:profile' - defaults to CONF.network.port_profile
"""
+
if not client:
client = self.ports_client
name = data_utils.rand_name(
@@ -158,10 +165,12 @@
network_id=network_id,
**kwargs)
self.assertIsNotNone(result, 'Unable to allocate port')
- port = result['port']
+ port_id = result['port']['id']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- client.delete_port, port['id'])
- return port
+ client.delete_port, port_id)
+ port = waiters.wait_for_port_status(
+ client=client, port_id=port_id, status="DOWN")
+ return port["port"]
def create_keypair(self, client=None, **kwargs):
"""Creates keypair
@@ -181,7 +190,7 @@
return body['keypair']
def create_server(self, name=None, image_id=None, flavor=None,
- validatable=False, wait_until='ACTIVE',
+ validatable=None, wait_until='ACTIVE',
clients=None, **kwargs):
"""Wrapper utility that returns a test server.
@@ -306,6 +315,28 @@
kwargs.setdefault('availability_zone',
CONF.compute.compute_volume_common_az)
+ kwargs['validatable'] = bool(validatable)
+ keypair = kwargs.pop('keypair', None)
+ if wait_until == 'SSHABLE' and (
+ kwargs.get('validation_resources') is None):
+            # NOTE(danms): We should do this whether validation is enabled or
+ # not to consistently provide the resources to the
+ # create_test_server() function. If validation is disabled, then
+ # get_test_validation_resources() is basically a no-op for
+ # performance.
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+ if keypair:
+ validation_resources = copy.deepcopy(validation_resources)
+ validation_resources.update(
+ keypair=keypair)
+ kwargs.update({
+ 'validatable': (validatable if validatable is not None
+ else True),
+ 'validation_resources': validation_resources})
+ if keypair:
+ kwargs.update({'key_name': keypair['name']})
+
body, _ = compute.create_test_server(
clients,
tenant_network=tenant_network,
@@ -322,22 +353,20 @@
def create_volume(self, size=None, name=None, snapshot_id=None,
imageRef=None, volume_type=None, wait_until='available',
- **kwargs):
+ client=None, **kwargs):
"""Creates volume
This wrapper utility creates volume and waits for volume to be
in 'available' state by default. If wait_until is None, means no wait.
This method returns the volume's full representation by GET request.
"""
+ if client is None:
+ client = self.volumes_client
if size is None:
size = CONF.volume.volume_size
if imageRef:
- if CONF.image_feature_enabled.api_v1:
- resp = self.image_client.check_image(imageRef)
- image = common_image.get_image_meta_from_headers(resp)
- else:
- image = self.image_client.show_image(imageRef)
+ image = self.image_client.show_image(imageRef)
min_disk = image.get('min_disk')
size = max(size, min_disk)
if name is None:
@@ -352,19 +381,20 @@
kwargs.setdefault('availability_zone',
CONF.compute.compute_volume_common_az)
- volume = self.volumes_client.create_volume(**kwargs)['volume']
+ volume = client.create_volume(**kwargs)['volume']
- self.addCleanup(self.volumes_client.wait_for_resource_deletion,
+ self.addCleanup(client.wait_for_resource_deletion,
volume['id'])
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self.volumes_client.delete_volume, volume['id'])
+ client.delete_volume, volume['id'])
self.assertEqual(name, volume['name'])
if wait_until:
- waiters.wait_for_volume_resource_status(self.volumes_client,
+ waiters.wait_for_volume_resource_status(client,
volume['id'], wait_until)
# The volume retrieved on creation has a non-up-to-date status.
# Retrieval after it becomes active ensures correct details.
- volume = self.volumes_client.show_volume(volume['id'])['volume']
+ volume = client.show_volume(volume['id'])['volume']
+
return volume
def create_backup(self, volume_id, name=None, description=None,
@@ -757,27 +787,18 @@
'name': name,
'container_format': img_container_format,
'disk_format': img_disk_format or img_container_format,
+ 'visibility': 'private'
}
- if CONF.image_feature_enabled.api_v1:
- params['is_public'] = 'False'
- if img_properties:
- params['properties'] = img_properties
- params = {'headers': common_image.image_meta_to_headers(**params)}
- else:
- params['visibility'] = 'private'
- # Additional properties are flattened out in the v2 API.
- if img_properties:
- params.update(img_properties)
+ # Additional properties are flattened out in the v2 API.
+ if img_properties:
+ params.update(img_properties)
params.update(kwargs)
body = self.image_client.create_image(**params)
image = body['image'] if 'image' in body else body
self.addCleanup(self.image_client.delete_image, image['id'])
self.assertEqual("queued", image['status'])
with open(img_path, 'rb') as image_file:
- if CONF.image_feature_enabled.api_v1:
- self.image_client.update_image(image['id'], data=image_file)
- else:
- self.image_client.store_image_file(image['id'], image_file)
+ self.image_client.store_image_file(image['id'], image_file)
LOG.debug("image:%s", image['id'])
return image['id']
@@ -825,15 +846,9 @@
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
_image_client.delete_image, image_id)
- if CONF.image_feature_enabled.api_v1:
- # In glance v1 the additional properties are stored in the headers
- resp = _image_client.check_image(image_id)
- snapshot_image = common_image.get_image_meta_from_headers(resp)
- image_props = snapshot_image.get('properties', {})
- else:
- # In glance v2 the additional properties are flattened.
- snapshot_image = _image_client.show_image(image_id)
- image_props = snapshot_image
+ # In glance v2 the additional properties are flattened.
+ snapshot_image = _image_client.show_image(image_id)
+ image_props = snapshot_image
bdm = image_props.get('block_device_mapping')
if bdm:
@@ -855,32 +870,43 @@
image_name, server['name'])
return snapshot_image
- def nova_volume_attach(self, server, volume_to_attach, **kwargs):
+ def nova_volume_attach(self, server, volume_to_attach,
+ volumes_client=None, servers_client=None,
+ **kwargs):
"""Compute volume attach
This utility attaches volume from compute and waits for the
volume status to be 'in-use' state.
"""
- volume = self.servers_client.attach_volume(
+ if volumes_client is None:
+ volumes_client = self.volumes_client
+ if servers_client is None:
+ servers_client = self.servers_client
+
+ volume = servers_client.attach_volume(
server['id'], volumeId=volume_to_attach['id'],
**kwargs)['volumeAttachment']
self.assertEqual(volume_to_attach['id'], volume['id'])
- waiters.wait_for_volume_resource_status(self.volumes_client,
+ waiters.wait_for_volume_resource_status(volumes_client,
volume['id'], 'in-use')
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self.nova_volume_detach, server, volume)
+ self.nova_volume_detach, server, volume,
+ servers_client)
# Return the updated volume after the attachment
- return self.volumes_client.show_volume(volume['id'])['volume']
+ return volumes_client.show_volume(volume['id'])['volume']
- def nova_volume_detach(self, server, volume):
+ def nova_volume_detach(self, server, volume, servers_client=None):
"""Compute volume detach
This utility detaches the volume from the server and checks whether the
volume attachment has been removed from Nova.
"""
- self.servers_client.detach_volume(server['id'], volume['id'])
+ if servers_client is None:
+ servers_client = self.servers_client
+
+ servers_client.detach_volume(server['id'], volume['id'])
waiters.wait_for_volume_attachment_remove_from_server(
- self.servers_client, server['id'], volume['id'])
+ servers_client, server['id'], volume['id'])
def ping_ip_address(self, ip_address, should_succeed=True,
ping_timeout=None, mtu=None, server=None):
@@ -1037,6 +1063,20 @@
floating_ip['id'])
return floating_ip
+ def get_floating_ip(self, server):
+ """Attempt to get an existing floating ip or a server
+        """Attempt to get an existing floating ip for a server.
+
+        If one exists, return its address, otherwise return None.
+ port_id, ip4 = self.get_server_port_id_and_ip4(server)
+ ips = self.floating_ips_client.list_floatingips(
+ floating_network_id=CONF.network.public_network_id,
+ port_id=port_id)
+ try:
+ return ips['floatingips'][0]['floating_ip_address']
+ except (KeyError, IndexError):
+ return None
+
def associate_floating_ip(self, floating_ip, server):
"""Associate floating ip to server
@@ -1065,7 +1105,7 @@
def create_timestamp(self, ip_address, dev_name=None, mount_path='/mnt',
private_key=None, server=None, username=None,
- fs='ext4'):
+ fs='vfat'):
"""Creates timestamp
This wrapper utility does ssh, creates timestamp and returns the
@@ -1076,14 +1116,19 @@
server=server,
username=username)
+ # Default the directory in which to write the timestamp file to /tmp
+ # and only use the mount_path as the target directory if we mounted
+ # dev_name to mount_path.
+ target_dir = '/tmp'
if dev_name is not None:
ssh_client.make_fs(dev_name, fs=fs)
ssh_client.exec_command('sudo mount /dev/%s %s' % (dev_name,
mount_path))
- cmd_timestamp = 'sudo sh -c "date > %s/timestamp; sync"' % mount_path
+ target_dir = mount_path
+ cmd_timestamp = 'sudo sh -c "date > %s/timestamp; sync"' % target_dir
ssh_client.exec_command(cmd_timestamp)
timestamp = ssh_client.exec_command('sudo cat %s/timestamp'
- % mount_path)
+ % target_dir)
if dev_name is not None:
ssh_client.exec_command('sudo umount %s' % mount_path)
return timestamp
@@ -1108,10 +1153,15 @@
server=server,
username=username)
+ # Default the directory from which to read the timestamp file to /tmp
+ # and only use the mount_path as the target directory if we mounted
+ # dev_name to mount_path.
+ target_dir = '/tmp'
if dev_name is not None:
ssh_client.mount(dev_name, mount_path)
+ target_dir = mount_path
timestamp = ssh_client.exec_command('sudo cat %s/timestamp'
- % mount_path)
+ % target_dir)
if dev_name is not None:
ssh_client.exec_command('sudo umount %s' % mount_path)
return timestamp
@@ -1131,8 +1181,14 @@
# The tests calling this method don't have a floating IP
# and can't make use of the validation resources. So the
# method is creating the floating IP there.
- return self.create_floating_ip(
- server, **kwargs)['floating_ip_address']
+ fip = self.get_floating_ip(server)
+ if fip:
+ # Already have a floating ip, so use it instead of creating
+ # another
+ return fip
+ else:
+ return self.create_floating_ip(
+ server, **kwargs)['floating_ip_address']
elif CONF.validation.connect_method == 'fixed':
# Determine the network name to look for based on config or creds
# provider network resources.
@@ -1181,7 +1237,7 @@
create_kwargs = dict({'image_id': ''})
if keypair:
- create_kwargs['key_name'] = keypair['name']
+ create_kwargs['keypair'] = keypair
if security_group:
create_kwargs['security_groups'] = [
{'name': security_group['name']}]
@@ -1589,7 +1645,8 @@
def create_encrypted_volume(self, encryption_provider, volume_type,
key_size=256, cipher='aes-xts-plain64',
- control_location='front-end'):
+ control_location='front-end',
+ wait_until='available'):
"""Creates an encrypted volume"""
volume_type = self.create_volume_type(name=volume_type)
self.create_encryption_type(type_id=volume_type['id'],
@@ -1597,7 +1654,8 @@
key_size=key_size,
cipher=cipher,
control_location=control_location)
- return self.create_volume(volume_type=volume_type['name'])
+ return self.create_volume(volume_type=volume_type['name'],
+ wait_until=wait_until)
class ObjectStorageScenarioTest(ScenarioTest):
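Inside a scenario test, the new keypair and wait_until handling might look like this sketch (the surrounding test method is hypothetical):

    keypair = self.create_keypair()
    # Passing the whole keypair lets create_server wire it into the
    # validation resources; SSHABLE waits until the guest accepts ssh.
    server = self.create_server(keypair=keypair, wait_until='SSHABLE')
    # Reuse an existing floating ip for the server if one is present.
    fip = self.get_floating_ip(server)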
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index 9788e19..753e64f 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -16,6 +16,7 @@
import testtools
from tempest.common import utils
+from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
@@ -45,9 +46,7 @@
raise cls.skipException('Encrypted volume attach is not supported')
def launch_instance(self):
- keypair = self.create_keypair()
-
- return self.create_server(key_name=keypair['name'])
+ return self.create_server(wait_until='SSHABLE')
def attach_detach_volume(self, server, volume):
attached_volume = self.nova_volume_attach(server, volume)
@@ -58,9 +57,16 @@
@utils.services('compute', 'volume', 'image')
def test_encrypted_cinder_volumes_luks(self):
"""LUKs v1 decrypts volume through libvirt."""
- server = self.launch_instance()
volume = self.create_encrypted_volume('luks',
- volume_type='luks')
+ volume_type='luks',
+ wait_until=None)
+ server = self.launch_instance()
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+
self.attach_detach_volume(server, volume)
@decorators.idempotent_id('7abec0a3-61a0-42a5-9e36-ad3138fb38b4')
@@ -70,16 +76,30 @@
@utils.services('compute', 'volume', 'image')
def test_encrypted_cinder_volumes_luksv2(self):
"""LUKs v2 decrypts volume through os-brick."""
- server = self.launch_instance()
volume = self.create_encrypted_volume('luks2',
- volume_type='luksv2')
+ volume_type='luksv2',
+ wait_until=None)
+ server = self.launch_instance()
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+
self.attach_detach_volume(server, volume)
@decorators.idempotent_id('cbc752ed-b716-4717-910f-956cce965722')
@decorators.attr(type='slow')
@utils.services('compute', 'volume', 'image')
def test_encrypted_cinder_volumes_cryptsetup(self):
- server = self.launch_instance()
volume = self.create_encrypted_volume('plain',
- volume_type='cryptsetup')
+ volume_type='cryptsetup',
+ wait_until=None)
+ server = self.launch_instance()
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+
self.attach_detach_volume(server, volume)
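The three tests above all switch to the same overlap pattern: start the volume build without waiting, boot the server (usually the slower step), then wait for the volume and re-read it before attaching, since the object returned at creation time carries a stale status. A minimal sketch of that pattern, assuming a ScenarioTest-style class with the usual create_volume/create_server helpers and a volumes_client attribute:

    # Sketch only, not part of the patch: overlap the volume and server builds.
    from tempest.common import waiters

    def _boot_and_attach(self):
        # Start the volume build but do not block on it yet.
        volume = self.create_volume(wait_until=None)
        # Boot the server; this is usually the slower operation.
        server = self.create_server(wait_until='SSHABLE')
        # Now block until the volume is usable and refresh its stale details.
        waiters.wait_for_volume_resource_status(
            self.volumes_client, volume['id'], 'available')
        volume = self.volumes_client.show_volume(volume['id'])['volume']
        self.nova_volume_attach(server, volume)
        return server, volume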
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 90e1bc5..6372c6b 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -38,6 +38,12 @@
* check command outputs
"""
+ @classmethod
+ def skip_checks(cls):
+ super(TestMinimumBasicScenario, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
+
def nova_show(self, server):
got_server = (self.servers_client.show_server(server['id'])
['server'])
@@ -86,6 +92,7 @@
'%s' % (secgroup['id'], server['id']))
raise exceptions.TimeoutException(msg)
+ @decorators.attr(type='slow')
@decorators.idempotent_id('bdbb5441-9204-419d-a225-b4fdbfb1a1a8')
@utils.services('compute', 'volume', 'image', 'network')
def test_minimum_basic_scenario(self):
@@ -159,6 +166,7 @@
self.servers_client, server, floating_ip,
wait_for_disassociate=True)
+ @decorators.attr(type='slow')
@decorators.idempotent_id('a8fd48ec-1d01-4895-b932-02321661ec1e')
@testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
"Cinder volume snapshots are disabled")
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index e630e29..2c7c085 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -218,7 +218,7 @@
@testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_cold_migration(self):
keypair = self.create_keypair()
@@ -244,7 +244,7 @@
@testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_live_migration(self):
keypair = self.create_keypair()
@@ -270,26 +270,28 @@
new_host = self.get_host_for_server(server['id'])
self.assertNotEqual(old_host, new_host, 'Server did not migrate')
+ # we first wait until the VM replies pings again, then check the
+ # network downtime
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+
downtime = downtime_meter.get_downtime()
self.assertIsNotNone(downtime)
LOG.debug("Downtime seconds measured with downtime_meter = %r",
downtime)
allowed_downtime = CONF.validation.allowed_network_downtime
- self.assertLess(
+ self.assertLessEqual(
downtime, allowed_downtime,
"Downtime of {} seconds is higher than expected '{}'".format(
downtime, allowed_downtime))
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
-
@decorators.idempotent_id('25b188d7-0183-4b1e-a11d-15840c8e2fd6')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration is not available.')
@testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_cold_migration_revert(self):
keypair = self.create_keypair()
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index cbe8c20..7b819e0 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -897,10 +897,20 @@
self.check_remote_connectivity(ssh_client, dest=peer_address,
nic=spoof_nic, should_succeed=True)
# Set a mac address by making nic down temporary
- cmd = ("sudo ip link set {nic} down;"
+ spoof_ip_addresses = ssh_client.get_nic_ip_addresses(spoof_nic)
+ dhcp_cmd = ("sudo start-stop-daemon -K -x /sbin/dhcpcd -p "
+ "/var/run/dhcpcd/pid -o || true")
+ cmd = ("{dhcp_cmd}; sudo ip link set {nic} down;"
"sudo ip link set dev {nic} address {mac};"
- "sudo ip link set {nic} up").format(nic=spoof_nic,
- mac=spoof_mac)
+ "sudo ip link set {nic} up;"
+ "sudo ip address flush dev {nic};").format(nic=spoof_nic,
+ dhcp_cmd=dhcp_cmd,
+ mac=spoof_mac)
+ for ip_address in spoof_ip_addresses:
+ cmd += (
+ "sudo ip addr add {ip_address} dev {nic};"
+ ).format(ip_address=ip_address, nic=spoof_nic)
+
ssh_client.exec_command(cmd)
new_mac = ssh_client.get_mac_address(nic=spoof_nic)
diff --git a/tempest/scenario/test_network_qos_placement.py b/tempest/scenario/test_network_qos_placement.py
index 365eb1b..0b2cfcb 100644
--- a/tempest/scenario/test_network_qos_placement.py
+++ b/tempest/scenario/test_network_qos_placement.py
@@ -278,6 +278,7 @@
port = self.os_admin.ports_client.show_port(not_valid_port['id'])
self.assertEqual(0, len(port['port']['binding:profile']))
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('8a98150c-a506-49a5-96c6-73a5e7b04ada')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration is not available.')
@@ -851,6 +852,7 @@
self.assert_allocations(server, port, min_kbps, min_kpps)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('bdd0b31c-c8b0-4b7b-b80a-545a46b32abe')
@testtools.skipUnless(
CONF.compute_feature_enabled.cold_migration,
@@ -1033,6 +1035,7 @@
self.assert_allocations(server, port2, 0, 0)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('36ffdb85-6cc2-4cc9-a426-cad5bac8626b')
@testtools.skipUnless(
CONF.compute.min_compute_nodes > 1,
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index aff7509..2fc5f32 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -480,6 +480,7 @@
direction='ingress')
return ruleset
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('e79f879e-debb-440c-a7e4-efeda05b6848')
@utils.services('compute', 'network')
def test_cross_tenant_traffic(self):
@@ -510,6 +511,7 @@
self._log_console_output_for_all_tenants()
raise
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('63163892-bbf6-4249-aa12-d5ea1f8f421b')
@utils.services('compute', 'network')
def test_in_tenant_traffic(self):
@@ -524,7 +526,7 @@
raise
@decorators.idempotent_id('f4d556d7-1526-42ad-bafb-6bebf48568f6')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_port_update_new_security_group(self):
"""Verifies the traffic after updating the vm port
@@ -578,7 +580,7 @@
raise
@decorators.idempotent_id('d2f77418-fcc4-439d-b935-72eca704e293')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_multiple_security_groups(self):
"""Verify multiple security groups and checks that rules
@@ -610,7 +612,7 @@
private_key=private_key,
should_connect=True)
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.requires_ext(service='network', extension='port-security')
@decorators.idempotent_id('7c811dcc-263b-49a3-92d2-1b4d8405f50c')
@utils.services('compute', 'network')
@@ -650,7 +652,7 @@
self._log_console_output_for_all_tenants()
raise
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.requires_ext(service='network', extension='port-security')
@decorators.idempotent_id('13ccf253-e5ad-424b-9c4a-97b88a026699')
# TODO(mriedem): We shouldn't actually need to check this since neutron
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 990b325..1c2246d 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -14,7 +14,6 @@
# under the License.
from oslo_log import log as logging
-import testtools
from tempest.common import utils
from tempest.common import waiters
@@ -36,14 +35,21 @@
"""
@classmethod
+ def skip_checks(cls):
+ super(TestServerAdvancedOps, cls).skip_checks()
+ if not CONF.service_available.nova:
+ skip_msg = ("%s skipped as Nova is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+ if not CONF.compute_feature_enabled.suspend:
+ raise cls.skipException("Suspend is not available.")
+
+ @classmethod
def setup_credentials(cls):
cls.set_network_resources(network=True, subnet=True)
super(TestServerAdvancedOps, cls).setup_credentials()
@decorators.attr(type='slow')
@decorators.idempotent_id('949da7d5-72c8-4808-8802-e3d70df98e2c')
- @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
- 'Suspend is not available.')
@utils.services('compute')
def test_server_sequence_suspend_resume(self):
# We create an instance for use in this test
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index 2a15470..3830fbc 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -49,16 +49,8 @@
def verify_ssh(self, keypair):
if self.run_ssh:
- # Obtain a floating IP if floating_ips is enabled
- if (CONF.network_feature_enabled.floating_ips and
- CONF.network.floating_network_name):
- fip = self.create_floating_ip(self.instance)
- self.ip = self.associate_floating_ip(
- fip, self.instance)['floating_ip_address']
- else:
- server = self.servers_client.show_server(
- self.instance['id'])['server']
- self.ip = self.get_server_ip(server)
+ # Obtain server IP
+ self.ip = self.get_server_ip(self.instance)
# Check ssh
self.ssh_client = self.get_remote_client(
ip_address=self.ip,
@@ -133,7 +125,8 @@
security_group = self.create_security_group()
self.md = {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}
self.instance = self.create_server(
- key_name=keypair['name'],
+ keypair=keypair,
+ wait_until='SSHABLE',
security_groups=[{'name': security_group['name']}],
config_drive=CONF.compute_feature_enabled.config_drive,
metadata=self.md)
diff --git a/tempest/scenario/test_server_multinode.py b/tempest/scenario/test_server_multinode.py
index fdf875c..fe85234 100644
--- a/tempest/scenario/test_server_multinode.py
+++ b/tempest/scenario/test_server_multinode.py
@@ -14,6 +14,7 @@
# under the License.
from tempest.common import utils
+from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions
@@ -35,7 +36,7 @@
"Less than 2 compute nodes, skipping multinode tests.")
@decorators.idempotent_id('9cecbe35-b9d4-48da-a37e-7ce70aa43d30')
- @decorators.attr(type='smoke')
+ @decorators.attr(type=['smoke', 'multinode'])
@utils.services('compute', 'network')
def test_schedule_to_all_nodes(self):
available_zone = \
@@ -46,7 +47,8 @@
if zone['zoneState']['available']:
for host in zone['hosts']:
if 'nova-compute' in zone['hosts'][host] and \
- zone['hosts'][host]['nova-compute']['available']:
+ zone['hosts'][host]['nova-compute']['available'] and \
+ not host.endswith('-ironic'):
hosts.append({'zone': zone['zoneName'],
'host_name': host})
@@ -60,6 +62,7 @@
# threshold (so that things don't get crazy if you have 1000
# compute nodes but set min to 3).
servers = []
+ host_server_ids = {}
for host in hosts[:CONF.compute.min_compute_nodes]:
# by getting to active state here, this means this has
@@ -67,12 +70,18 @@
# in order to use the availability_zone:host scheduler hint,
# admin client is need here.
inst = self.create_server(
+ wait_until=None,
clients=self.os_admin,
availability_zone='%(zone)s:%(host_name)s' % host)
+ host_server_ids[host['host_name']] = inst['id']
+
+ for host_name, server_id in host_server_ids.items():
+ waiters.wait_for_server_status(self.os_admin.servers_client,
+ server_id, 'ACTIVE')
server = self.os_admin.servers_client.show_server(
- inst['id'])['server']
+ server_id)['server']
# ensure server is located on the requested host
- self.assertEqual(host['host_name'], server['OS-EXT-SRV-ATTR:host'])
+ self.assertEqual(host_name, server['OS-EXT-SRV-ATTR:host'])
servers.append(server)
# make sure we really have the number of servers we think we should
diff --git a/tempest/scenario/test_server_volume_attachment.py b/tempest/scenario/test_server_volume_attachment.py
new file mode 100644
index 0000000..076b835
--- /dev/null
+++ b/tempest/scenario/test_server_volume_attachment.py
@@ -0,0 +1,208 @@
+# Copyright 2023 Red Hat
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from unittest import mock
+
+from tempest.common import utils
+from tempest.common import waiters
+from tempest import config
+from tempest.lib import decorators
+from tempest.lib import exceptions
+from tempest.scenario import manager
+
+CONF = config.CONF
+
+
+class BaseAttachmentTest(manager.ScenarioTest):
+
+ @classmethod
+ def skip_checks(cls):
+ super(BaseAttachmentTest, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
+
+ @classmethod
+ def setup_clients(cls):
+ super().setup_clients()
+ cls.attachments_client = cls.os_primary.attachments_client_latest
+ cls.admin_volume_client = cls.os_admin.volumes_client_latest
+
+ def _call_with_fake_service_token(self, valid_token,
+ client, method_name, *args, **kwargs):
+ """Call client method with non-service service token
+
+ Add a service token header that can be a valid normal user token (which
+ won't have the service role) or an invalid token altogether.
+ """
+ original_raw_request = client.raw_request
+
+ def raw_request(url, method, headers=None, body=None, chunked=False,
+ log_req_body=None):
+ token = headers['X-Auth-Token']
+ if not valid_token:
+ token = token[:-1] + ('a' if token[-1] != 'a' else 'b')
+ headers['X-Service-Token'] = token
+ return original_raw_request(url, method, headers=headers,
+ body=body, chunked=chunked,
+ log_req_body=log_req_body)
+
+ client_method = getattr(client, method_name)
+ with mock.patch.object(client, 'raw_request', raw_request):
+ return client_method(*args, **kwargs)
+
+
+class TestServerVolumeAttachmentScenario(BaseAttachmentTest):
+
+ """Test server attachment behaviors
+
+ This tests that volume attachments to servers may not be removed directly
+ and are only allowed through the compute service (bug #2004555).
+ """
+
+ @decorators.attr(type='slow')
+ @decorators.idempotent_id('be615530-f105-437a-8afe-ce998c9535d9')
+ @utils.services('compute', 'volume', 'image', 'network')
+ def test_server_detach_rules(self):
+ """Test that various methods of detaching a volume honors the rules"""
+ volume = self.create_volume(wait_until=None)
+ volume2 = self.create_volume(wait_until=None)
+
+ server = self.create_server(wait_until='SSHABLE')
+ servers = self.servers_client.list_servers()['servers']
+ self.assertIn(server['id'], [x['id'] for x in servers])
+
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+
+ volume = self.nova_volume_attach(server, volume)
+ self.addCleanup(self.nova_volume_detach, server, volume)
+ att_id = volume['attachments'][0]['attachment_id']
+
+ # Test user call to detach volume is rejected
+ self.assertRaises((exceptions.Forbidden, exceptions.Conflict),
+ self.volumes_client.detach_volume, volume['id'])
+
+ # Test user call to terminate connection is rejected
+ self.assertRaises((exceptions.Forbidden, exceptions.Conflict),
+ self.volumes_client.terminate_connection,
+ volume['id'], connector={})
+
+ # Test faking of service token on call to detach, force detach,
+ # terminate_connection
+ for valid_token in (True, False):
+ valid_exceptions = [exceptions.Forbidden, exceptions.Conflict]
+ if not valid_token:
+ valid_exceptions.append(exceptions.Unauthorized)
+ self.assertRaises(
+ tuple(valid_exceptions),
+ self._call_with_fake_service_token,
+ valid_token,
+ self.volumes_client,
+ 'detach_volume',
+ volume['id'])
+ self.assertRaises(
+ tuple(valid_exceptions),
+ self._call_with_fake_service_token,
+ valid_token,
+ self.volumes_client,
+ 'terminate_connection',
+ volume['id'], connector={})
+
+ # Reset volume's status to error
+ self.admin_volume_client.reset_volume_status(volume['id'],
+ status='error')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'error')
+
+ # For the cleanup, we need to reset the volume status to in-use before
+ # the other cleanup steps try to detach it.
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.volumes_client, volume['id'], 'in-use')
+ self.addCleanup(self.admin_volume_client.reset_volume_status,
+ volume['id'], status='in-use')
+
+ # Test user call to force detach volume is rejected
+ self.assertRaises(
+ (exceptions.Forbidden, exceptions.Conflict),
+ self.admin_volume_client.force_detach_volume,
+ volume['id'], connector=None,
+ attachment_id=att_id)
+
+ # Test trying to override detach with force and service token
+ for valid_token in (True, False):
+ valid_exceptions = [exceptions.Forbidden, exceptions.Conflict]
+ if not valid_token:
+ valid_exceptions.append(exceptions.Unauthorized)
+ self.assertRaises(
+ tuple(valid_exceptions),
+ self._call_with_fake_service_token,
+ valid_token,
+ self.admin_volume_client,
+ 'force_detach_volume',
+ volume['id'], connector=None, attachment_id=att_id)
+
+ # Test user call to detach with mismatch is rejected
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume2['id'], 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume2 = self.volumes_client.show_volume(volume2['id'])['volume']
+
+ volume2 = self.nova_volume_attach(server, volume2)
+ att_id2 = volume2['attachments'][0]['attachment_id']
+ self.assertRaises(
+ (exceptions.Forbidden, exceptions.BadRequest),
+ self.volumes_client.detach_volume,
+ volume['id'], attachment_id=att_id2)
+
+
+class TestServerVolumeAttachScenarioOldVersion(BaseAttachmentTest):
+ volume_min_microversion = '3.27'
+ volume_max_microversion = 'latest'
+
+ @decorators.attr(type='slow')
+ @decorators.idempotent_id('6f4d2144-99f4-495c-8b0b-c6a537971418')
+ @utils.services('compute', 'volume', 'image', 'network')
+ def test_old_versions_reject(self):
+ server = self.create_server(wait_until='SSHABLE')
+ servers = self.servers_client.list_servers()['servers']
+ self.assertIn(server['id'], [x['id'] for x in servers])
+
+ volume = self.create_volume()
+
+ volume = self.nova_volume_attach(server, volume)
+ self.addCleanup(self.nova_volume_detach, server, volume)
+ att_id = volume['attachments'][0]['attachment_id']
+
+ for valid_token in (True, False):
+ valid_exceptions = [exceptions.Forbidden,
+ exceptions.Conflict]
+ if not valid_token:
+ valid_exceptions.append(exceptions.Unauthorized)
+ self.assertRaises(
+ tuple(valid_exceptions),
+ self._call_with_fake_service_token,
+ valid_token,
+ self.attachments_client,
+ 'delete_attachment',
+ att_id)
+
+ self.assertRaises(
+ (exceptions.Forbidden, exceptions.Conflict),
+ self.attachments_client.delete_attachment,
+ att_id)
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index 29612ec..204471e 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -119,7 +119,7 @@
def test_shelve_volume_backed_instance(self):
self._create_server_then_shelve_and_unshelve(boot_from_volume=True)
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@decorators.idempotent_id('1295fd9e-193a-4cf8-b211-55358e021bae')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index 4b81b9e..92dbffb 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -16,6 +16,7 @@
import testtools
from tempest.common import utils
+from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
@@ -50,6 +51,8 @@
@classmethod
def skip_checks(cls):
super(TestStampPattern, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
if not CONF.volume_feature_enabled.snapshot:
raise cls.skipException("Cinder volume snapshots are disabled")
@@ -84,7 +87,7 @@
security_group = self.create_security_group()
# boot an instance and create a timestamp file in it
- volume = self.create_volume()
+ volume = self.create_volume(wait_until=None)
server = self.create_server(
key_name=keypair['name'],
security_groups=[{'name': security_group['name']}])
@@ -97,6 +100,12 @@
ip_for_server, private_key=keypair['private_key'],
server=server)
disks_list_before_attach = linux_client.list_disks()
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+
self.nova_volume_attach(server, volume)
volume_device_name = self._attached_volume_name(
disks_list_before_attach, ip_for_server, keypair['private_key'])
@@ -115,7 +124,7 @@
# create second volume from the snapshot(volume2)
volume_from_snapshot = self.create_volume(
- snapshot_id=volume_snapshot['id'])
+ snapshot_id=volume_snapshot['id'], wait_until=None)
# boot second instance from the snapshot(instance2)
server_from_snapshot = self.create_server(
@@ -135,6 +144,14 @@
disks_list_before_attach = linux_client.list_disks()
# attach volume2 to instance2
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume_from_snapshot['id'],
+ 'available')
+ # The volume retrieved on creation has a non-up-to-date status.
+ # Retrieval after it becomes active ensures correct details.
+ volume_from_snapshot = self.volumes_client.show_volume(
+ volume_from_snapshot['id'])['volume']
+
self.nova_volume_attach(server_from_snapshot, volume_from_snapshot)
volume_device_name = self._attached_volume_name(
disks_list_before_attach, ip_for_snapshot, keypair['private_key'])
diff --git a/tempest/scenario/test_volume_backup_restore.py b/tempest/scenario/test_volume_backup_restore.py
index d0885cf..07ca38a 100644
--- a/tempest/scenario/test_volume_backup_restore.py
+++ b/tempest/scenario/test_volume_backup_restore.py
@@ -41,6 +41,8 @@
@classmethod
def skip_checks(cls):
super(TestVolumeBackupRestore, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
if not CONF.volume_feature_enabled.backup:
raise cls.skipException('Backup is not enabled.')
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 2e87c15..6ebee48 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -31,6 +31,12 @@
# breathing room to get through deletes in the time allotted.
TIMEOUT_SCALING_FACTOR = 2
+ @classmethod
+ def skip_checks(cls):
+ super(TestVolumeBootPattern, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
+
def _delete_server(self, server):
self.servers_client.delete_server(server['id'])
waiters.wait_for_server_termination(self.servers_client, server['id'])
@@ -187,6 +193,7 @@
source_id=volume_origin['id'],
source_type='volume',
delete_on_termination=True,
+ wait_until='SSHABLE',
name=name)
# Create a snapshot image from the volume-backed server.
# The compute service will have the block service create a snapshot of
@@ -200,7 +207,8 @@
# disk for the server.
name = data_utils.rand_name(self.__class__.__name__ +
'-image-snapshot-server')
- instance2 = self.create_server(image_id=image['id'], name=name)
+ instance2 = self.create_server(image_id=image['id'], name=name,
+ wait_until='SSHABLE')
# Verify the server was created from the image-defined BDM.
volume_attachments = instance2['os-extended-volumes:volumes_attached']
diff --git a/tempest/scenario/test_volume_migrate_attached.py b/tempest/scenario/test_volume_migrate_attached.py
index 57d2a1a..5005346 100644
--- a/tempest/scenario/test_volume_migrate_attached.py
+++ b/tempest/scenario/test_volume_migrate_attached.py
@@ -48,6 +48,8 @@
@classmethod
def skip_checks(cls):
super(TestVolumeMigrateRetypeAttached, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ raise cls.skipException("Cinder is not available")
if not CONF.volume_feature_enabled.multi_backend:
raise cls.skipException("Cinder multi-backend feature disabled")
diff --git a/tempest/api/image/v1/__init__.py b/tempest/serial_tests/__init__.py
similarity index 100%
rename from tempest/api/image/v1/__init__.py
rename to tempest/serial_tests/__init__.py
diff --git a/tempest/api/image/v1/__init__.py b/tempest/serial_tests/api/__init__.py
similarity index 100%
copy from tempest/api/image/v1/__init__.py
copy to tempest/serial_tests/api/__init__.py
diff --git a/tempest/api/image/v1/__init__.py b/tempest/serial_tests/api/admin/__init__.py
similarity index 100%
copy from tempest/api/image/v1/__init__.py
copy to tempest/serial_tests/api/admin/__init__.py
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/serial_tests/api/admin/test_aggregates.py
similarity index 99%
rename from tempest/api/compute/admin/test_aggregates.py
rename to tempest/serial_tests/api/admin/test_aggregates.py
index a6c6535..2ca91aa 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/serial_tests/api/admin/test_aggregates.py
@@ -26,6 +26,7 @@
CONF = config.CONF
+@decorators.serial
class AggregatesAdminTestBase(base.BaseV2ComputeAdminTest):
"""Tests Aggregates API that require admin privileges"""
diff --git a/tempest/api/image/v1/__init__.py b/tempest/serial_tests/scenario/__init__.py
similarity index 100%
copy from tempest/api/image/v1/__init__.py
copy to tempest/serial_tests/scenario/__init__.py
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/serial_tests/scenario/test_aggregates_basic_ops.py
similarity index 99%
rename from tempest/scenario/test_aggregates_basic_ops.py
rename to tempest/serial_tests/scenario/test_aggregates_basic_ops.py
index 58e234f..ba31d84 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/serial_tests/scenario/test_aggregates_basic_ops.py
@@ -20,6 +20,7 @@
from tempest.scenario import manager
+@decorators.serial
class TestAggregatesBasicOps(manager.ScenarioTest):
"""Creates an aggregate within an availability zone
diff --git a/tempest/test.py b/tempest/test.py
index dba2695..3360221 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -18,7 +18,9 @@
import sys
import debtcollector.moves
+from fasteners import process_lock
import fixtures
+from oslo_concurrency import lockutils
from oslo_log import log as logging
import testtools
@@ -123,6 +125,23 @@
# A way to adjust slow test classes
TIMEOUT_SCALING_FACTOR = 1
+ # An interprocess lock to implement serial test execution if requested.
+ # The serial test classes are the writers as only one of them can be
+ # executed. The rest of the test classes are the readers as many of them
+ # can be run in parallel.
+ # Only classes can be decorated with the @serial decorator, not individual
+ # test cases, as tempest allows test-class-level resource setup, which could
+ # interfere with serialized execution at the test-case level. I.e. the class
+ # setup of one of the test cases could run before a test-case-level lock is
+ # taken.
+ # We cannot init the lock here as the external lock needs the oslo
+ # configuration to be loaded first to get the lock_path.
+ serial_rw_lock = None
+
+ # Defines if the tests in this class should be run without any parallelism
+ # Use the @serial decorator on your test class to indicate such requirement
+ _serial = False
+
@classmethod
def _reset_class(cls):
cls.__setup_credentials_called = False
@@ -134,14 +153,33 @@
cls._teardowns = []
@classmethod
+ def is_serial_execution_requested(cls):
+ return cls._serial
+
+ @classmethod
def setUpClass(cls):
cls.__setupclass_called = True
+
+ if cls.serial_rw_lock is None:
+ path = os.path.join(
+ lockutils.get_lock_path(CONF), 'tempest-serial-rw-lock')
+ cls.serial_rw_lock = (
+ process_lock.InterProcessReaderWriterLock(path)
+ )
+
# Reset state
cls._reset_class()
# It should never be overridden by descendants
if hasattr(super(BaseTestCase, cls), 'setUpClass'):
super(BaseTestCase, cls).setUpClass()
try:
+ if cls.is_serial_execution_requested():
+ LOG.debug('%s taking the write lock', cls.__name__)
+ cls.serial_rw_lock.acquire_write_lock()
+ LOG.debug('%s took the write lock', cls.__name__)
+ else:
+ cls.serial_rw_lock.acquire_read_lock()
+
cls.skip_checks()
if not cls.__skip_checks_called:
@@ -184,35 +222,44 @@
# If there was no exception during setup we shall re-raise the first
# exception in teardown
re_raise = (etype is None)
- while cls._teardowns:
- name, teardown = cls._teardowns.pop()
- # Catch any exception in tearDown so we can re-raise the original
- # exception at the end
- try:
- teardown()
- if name == 'resources':
- if not cls.__resource_cleanup_called:
- raise RuntimeError(
- "resource_cleanup for %s did not call the "
- "super's resource_cleanup" % cls.__name__)
- except Exception as te:
- sys_exec_info = sys.exc_info()
- tetype = sys_exec_info[0]
- # TODO(andreaf): Resource cleanup is often implemented by
- # storing an array of resources at class level, and cleaning
- # them up during `resource_cleanup`.
- # In case of failure during setup, some resource arrays might
- # not be defined at all, in which case the cleanup code might
- # trigger an AttributeError. In such cases we log
- # AttributeError as info instead of exception. Once all
- # cleanups are migrated to addClassResourceCleanup we can
- # remove this.
- if tetype is AttributeError and name == 'resources':
- LOG.info("tearDownClass of %s failed: %s", name, te)
- else:
- LOG.exception("teardown of %s failed: %s", name, te)
- if not etype:
- etype, value, trace = sys_exec_info
+ try:
+ while cls._teardowns:
+ name, teardown = cls._teardowns.pop()
+ # Catch any exception in tearDown so we can re-raise the
+ # original exception at the end
+ try:
+ teardown()
+ if name == 'resources':
+ if not cls.__resource_cleanup_called:
+ raise RuntimeError(
+ "resource_cleanup for %s did not call the "
+ "super's resource_cleanup" % cls.__name__)
+ except Exception as te:
+ sys_exec_info = sys.exc_info()
+ tetype = sys_exec_info[0]
+ # TODO(andreaf): Resource cleanup is often implemented by
+ # storing an array of resources at class level, and
+ # cleaning them up during `resource_cleanup`.
+ # In case of failure during setup, some resource arrays
+ # might not be defined at all, in which case the cleanup
+ # code might trigger an AttributeError. In such cases we
+ # log AttributeError as info instead of exception. Once all
+ # cleanups are migrated to addClassResourceCleanup we can
+ # remove this.
+ if tetype is AttributeError and name == 'resources':
+ LOG.info("tearDownClass of %s failed: %s", name, te)
+ else:
+ LOG.exception("teardown of %s failed: %s", name, te)
+ if not etype:
+ etype, value, trace = sys_exec_info
+ finally:
+ if cls.is_serial_execution_requested():
+ LOG.debug('%s releasing the write lock', cls.__name__)
+ cls.serial_rw_lock.release_write_lock()
+ LOG.debug('%s released the write lock', cls.__name__)
+ else:
+ cls.serial_rw_lock.release_read_lock()
+
# If exceptions were raised during teardown, and not before, re-raise
# the first one
if re_raise and etype is not None:
@@ -762,7 +809,7 @@
@param os_clients: Clients to be used to provision the resources.
"""
if not CONF.validation.run_validation:
- return
+ return {}
if os_clients in cls._validation_resources:
return cls._validation_resources[os_clients]
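The setUpClass/tearDownClass changes in tempest/test.py above rely on an inter-process reader/writer lock from fasteners: every ordinary test class holds the lock as a reader, so any number of them run in parallel, while a class marked @serial takes it as a writer and therefore waits until no reader or other writer is active. A stripped-down sketch of that handshake, with the lock file path made up for illustration:

    # Sketch only, not part of the patch: the reader/writer handshake used to
    # serialize @serial test classes across parallel test workers.
    from fasteners import process_lock

    LOCK_PATH = '/tmp/tempest-serial-rw-lock'  # assumed path for illustration
    rw_lock = process_lock.InterProcessReaderWriterLock(LOCK_PATH)

    def run_test_class(is_serial, run_tests):
        if is_serial:
            # A serial class is a writer: it waits for all readers to drain
            # and blocks new ones while it runs.
            rw_lock.acquire_write_lock()
            try:
                run_tests()
            finally:
                rw_lock.release_write_lock()
        else:
            # A normal class is a reader: many readers may hold the lock at once.
            rw_lock.acquire_read_lock()
            try:
                run_tests()
            finally:
                rw_lock.release_read_lock()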
diff --git a/tempest/test_discover/test_discover.py b/tempest/test_discover/test_discover.py
index a19f20b..679d58b 100644
--- a/tempest/test_discover/test_discover.py
+++ b/tempest/test_discover/test_discover.py
@@ -25,7 +25,7 @@
base_path = os.path.split(os.path.dirname(os.path.abspath(__file__)))[0]
base_path = os.path.split(base_path)[0]
# Load local tempest tests
- for test_dir in ['api', 'scenario']:
+ for test_dir in ['api', 'scenario', 'serial_tests']:
full_test_dir = os.path.join(base_path, 'tempest', test_dir)
if not pattern:
suite.addTests(loader.discover(full_test_dir,
diff --git a/tempest/tests/cmd/test_account_generator.py b/tempest/tests/cmd/test_account_generator.py
index 7d764be..9647467 100644
--- a/tempest/tests/cmd/test_account_generator.py
+++ b/tempest/tests/cmd/test_account_generator.py
@@ -153,13 +153,14 @@
resources = account_generator.generate_resources(
self.cred_provider, admin=False)
resource_types = [k for k, _ in resources]
- # No admin, no swift, expect two credentials only
- self.assertEqual(2, len(resources))
- # Ensure create_user was invoked twice (two distinct users)
- self.assertEqual(2, self.user_create_fixture.mock.call_count)
+ # No admin, no swift, expect three credentials only
+ self.assertEqual(3, len(resources))
+ # Ensure create_user was invoked three times (three distinct users)
+ self.assertEqual(3, self.user_create_fixture.mock.call_count)
self.assertIn('primary', resource_types)
self.assertIn('alt', resource_types)
self.assertNotIn('admin', resource_types)
+ self.assertIn(['reader'], resource_types)
self.assertNotIn(['fake_operator'], resource_types)
self.assertNotIn(['fake_reseller'], resource_types)
self.assertNotIn(['fake_owner'], resource_types)
@@ -178,12 +179,13 @@
self.cred_provider, admin=True)
resource_types = [k for k, _ in resources]
# Admin, no swift, expect three credentials only
- self.assertEqual(3, len(resources))
- # Ensure create_user was invoked 3 times (3 distinct users)
- self.assertEqual(3, self.user_create_fixture.mock.call_count)
+ self.assertEqual(4, len(resources))
+ # Ensure create_user was invoked 4 times (4 distinct users)
+ self.assertEqual(4, self.user_create_fixture.mock.call_count)
self.assertIn('primary', resource_types)
self.assertIn('alt', resource_types)
self.assertIn('admin', resource_types)
+ self.assertIn(['reader'], resource_types)
self.assertNotIn(['fake_operator'], resource_types)
self.assertNotIn(['fake_reseller'], resource_types)
self.assertNotIn(['fake_owner'], resource_types)
@@ -201,13 +203,14 @@
resources = account_generator.generate_resources(
self.cred_provider, admin=True)
resource_types = [k for k, _ in resources]
- # all options on, expect five credentials
- self.assertEqual(5, len(resources))
- # Ensure create_user was invoked 5 times (5 distinct users)
- self.assertEqual(5, self.user_create_fixture.mock.call_count)
+ # all options on, expect six credentials
+ self.assertEqual(6, len(resources))
+ # Ensure create_user was invoked 6 times (6 distinct users)
+ self.assertEqual(6, self.user_create_fixture.mock.call_count)
self.assertIn('primary', resource_types)
self.assertIn('alt', resource_types)
self.assertIn('admin', resource_types)
+ self.assertIn(['reader'], resource_types)
self.assertIn(['fake_operator'], resource_types)
self.assertIn(['fake_reseller'], resource_types)
for resource in resources:
@@ -224,13 +227,14 @@
resources = account_generator.generate_resources(
self.cred_provider, admin=False)
resource_types = [k for k, _ in resources]
- # No Admin, swift, expect four credentials only
- self.assertEqual(4, len(resources))
- # Ensure create_user was invoked 4 times (4 distinct users)
- self.assertEqual(4, self.user_create_fixture.mock.call_count)
+ # No Admin, swift, expect five credentials only
+ self.assertEqual(5, len(resources))
+ # Ensure create_user was invoked 5 times (5 distinct users)
+ self.assertEqual(5, self.user_create_fixture.mock.call_count)
self.assertIn('primary', resource_types)
self.assertIn('alt', resource_types)
self.assertNotIn('admin', resource_types)
+ self.assertIn(['reader'], resource_types)
self.assertIn(['fake_operator'], resource_types)
self.assertIn(['fake_reseller'], resource_types)
self.assertNotIn(['fake_owner'], resource_types)
@@ -284,14 +288,14 @@
# Ordered args in [0], keyword args in [1]
accounts, f = yaml_dump_mock.call_args[0]
self.assertEqual(handle, f)
- self.assertEqual(5, len(accounts))
+ self.assertEqual(6, len(accounts))
if self.domain_is_in:
self.assertIn('domain_name', accounts[0].keys())
else:
self.assertNotIn('domain_name', accounts[0].keys())
self.assertEqual(1, len([x for x in accounts if
x.get('types') == ['admin']]))
- self.assertEqual(2, len([x for x in accounts if 'roles' in x]))
+ self.assertEqual(3, len([x for x in accounts if 'roles' in x]))
for account in accounts:
self.assertIn('resources', account)
self.assertIn('network', account.get('resources'))
@@ -315,14 +319,14 @@
# Ordered args in [0], keyword args in [1]
accounts, f = yaml_dump_mock.call_args[0]
self.assertEqual(handle, f)
- self.assertEqual(5, len(accounts))
+ self.assertEqual(6, len(accounts))
if self.domain_is_in:
self.assertIn('domain_name', accounts[0].keys())
else:
self.assertNotIn('domain_name', accounts[0].keys())
self.assertEqual(1, len([x for x in accounts if
x.get('types') == ['admin']]))
- self.assertEqual(2, len([x for x in accounts if 'roles' in x]))
+ self.assertEqual(3, len([x for x in accounts if 'roles' in x]))
for account in accounts:
self.assertIn('resources', account)
self.assertIn('network', account.get('resources'))
diff --git a/tempest/tests/cmd/test_verify_tempest_config.py b/tempest/tests/cmd/test_verify_tempest_config.py
index 05ea84e..fa43e58 100644
--- a/tempest/tests/cmd/test_verify_tempest_config.py
+++ b/tempest/tests/cmd/test_verify_tempest_config.py
@@ -178,13 +178,13 @@
def test_verify_glance_version_no_v2_with_v1_1(self):
# This test verifies that wrong config api_v2 = True is detected
class FakeClient(object):
- def get_versions(self):
- return (None, ['v1.1'])
+ def list_versions(self):
+ return {'versions': [{'id': 'v1.1'}]}
fake_os = mock.MagicMock()
fake_module = mock.MagicMock()
- fake_module.ImagesClient = FakeClient
- fake_os.image_v1 = fake_module
+ fake_module.VersionsClient = FakeClient
+ fake_os.image_v2 = fake_module
with mock.patch.object(verify_tempest_config,
'print_and_or_update') as print_mock:
verify_tempest_config.verify_glance_api_versions(fake_os, True)
@@ -194,53 +194,28 @@
def test_verify_glance_version_no_v2_with_v1_0(self):
# This test verifies that wrong config api_v2 = True is detected
class FakeClient(object):
- def get_versions(self):
- return (None, ['v1.0'])
+ def list_versions(self):
+ return {'versions': [{'id': 'v1.0'}]}
fake_os = mock.MagicMock()
fake_module = mock.MagicMock()
- fake_module.ImagesClient = FakeClient
- fake_os.image_v1 = fake_module
+ fake_module.VersionsClient = FakeClient
+ fake_os.image_v2 = fake_module
with mock.patch.object(verify_tempest_config,
'print_and_or_update') as print_mock:
verify_tempest_config.verify_glance_api_versions(fake_os, True)
print_mock.assert_called_with('api_v2', 'image-feature-enabled',
False, True)
- def test_verify_glance_version_no_v1(self):
- # This test verifies that wrong config api_v1 = True is detected
- class FakeClient(object):
- def get_versions(self):
- raise lib_exc.NotFound()
-
- def list_versions(self):
- return {'versions': [{'id': 'v2.0'}]}
-
- fake_os = mock.MagicMock()
- fake_module = mock.MagicMock()
- fake_module.ImagesClient = FakeClient
- fake_module.VersionsClient = FakeClient
- fake_os.image_v1 = fake_module
- fake_os.image_v2 = fake_module
- with mock.patch.object(verify_tempest_config,
- 'print_and_or_update') as print_mock:
- verify_tempest_config.verify_glance_api_versions(fake_os, True)
- print_mock.assert_not_called()
-
def test_verify_glance_version_no_version(self):
- # This test verifies that wrong config api_v1 = True is detected
+ # This test verifies that wrong config api_v2 = True is detected
class FakeClient(object):
- def get_versions(self):
- raise lib_exc.NotFound()
-
def list_versions(self):
raise lib_exc.NotFound()
fake_os = mock.MagicMock()
fake_module = mock.MagicMock()
- fake_module.ImagesClient = FakeClient
fake_module.VersionsClient = FakeClient
- fake_os.image_v1 = fake_module
fake_os.image_v2 = fake_module
with mock.patch.object(verify_tempest_config,
'print_and_or_update') as print_mock:
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index 71088a4..93c949e 100755
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -21,6 +21,7 @@
from tempest import exceptions
from tempest.lib import exceptions as lib_exc
from tempest.lib.services.compute import servers_client
+from tempest.lib.services.network import ports_client
from tempest.lib.services.volume.v2 import volumes_client
from tempest.tests import base
import tempest.tests.utils as utils
@@ -385,6 +386,29 @@
mock_sleep.assert_called_once_with(1)
@mock.patch.object(time, 'sleep')
+ def test_wait_for_volume_status_timeout_console(self, mock_sleep):
+ # Tests that the wait method gets the server console log if the
+ # timeout is hit.
+ client = mock.Mock(spec=volumes_client.VolumesClient,
+ resource_type="volume",
+ build_interval=1,
+ build_timeout=1)
+ servers_client = mock.Mock()
+ servers_client.get_console_output.return_value = {
+ 'output': 'console log'}
+ volume = {'volume': {'status': 'detaching'}}
+ mock_show = mock.Mock(return_value=volume)
+ client.show_volume = mock_show
+ volume_id = '7532b91e-aa0a-4e06-b3e5-20c0c5ee1caa'
+ self.assertRaises(lib_exc.TimeoutException,
+ waiters.wait_for_volume_resource_status,
+ client, volume_id, 'available',
+ server_id='someserver',
+ servers_client=servers_client)
+ servers_client.get_console_output.assert_called_once_with(
+ 'someserver')
+
+ @mock.patch.object(time, 'sleep')
def test_wait_for_volume_status_error_extending(self, mock_sleep):
# Tests that the wait method raises VolumeExtendErrorException if
# the volume status is 'error_extending'.
@@ -612,6 +636,48 @@
)
+class TestPortCreationWaiter(base.TestCase):
+ def test_wait_for_port_status(self):
+ """Test that the waiter replies with the port before the timeout"""
+
+ def client_response(self):
+ """Mock client response, replies with the final status after
+ 2 calls
+ """
+ if mock_client.call_count >= 2:
+ return mock_port
+ else:
+ mock_client.call_count += 1
+ return mock_port_build
+
+ mock_port = {'port': {'id': '1234', 'status': "DOWN"}}
+ mock_port_build = {'port': {'id': '1234', 'status': "BUILD"}}
+ mock_client = mock.Mock(
+ spec=ports_client.PortsClient,
+ build_timeout=30, build_interval=1,
+ show_port=client_response)
+ fake_port_id = "1234"
+ fake_status = "DOWN"
+ self.assertEqual(mock_port, waiters.wait_for_port_status(
+ mock_client, fake_port_id, fake_status))
+
+ def test_wait_for_port_status_timeout(self):
+ """Negative test - checking that a timeout
+ presented by a small 'fake_timeout' and a static status of
+ 'BUILD' in the mock will raise a timeout exception
+ """
+ mock_port = {'port': {'id': '1234', 'status': "BUILD"}}
+ mock_client = mock.Mock(
+ spec=ports_client.PortsClient,
+ build_timeout=2, build_interval=1,
+ show_port=lambda id: mock_port)
+ fake_port_id = "1234"
+ fake_status = "ACTIVE"
+ self.assertRaises(lib_exc.TimeoutException,
+ waiters.wait_for_port_status, mock_client,
+ fake_port_id, fake_status)
+
+
class TestServerFloatingIPWaiters(base.TestCase):
def test_wait_for_server_floating_ip_associate_timeout(self):
diff --git a/tempest/tests/lib/common/test_dynamic_creds.py b/tempest/tests/lib/common/test_dynamic_creds.py
index b4b1b91..d3d01c0 100644
--- a/tempest/tests/lib/common/test_dynamic_creds.py
+++ b/tempest/tests/lib/common/test_dynamic_creds.py
@@ -60,6 +60,7 @@
fake_response = fake_identity._fake_v2_response
tenants_client_class = tenants_client.TenantsClient
delete_tenant = 'delete_tenant'
+ create_tenant = 'create_tenant'
def setUp(self):
super(TestDynamicCredentialProvider, self).setUp()
@@ -140,7 +141,9 @@
return_value=(rest_client.ResponseBody
(200, {'roles': [
{'id': '1', 'name': 'FakeRole'},
- {'id': '2', 'name': 'member'}]}))))
+ {'id': '2', 'name': 'member'},
+ {'id': '3', 'name': 'reader'},
+ {'id': '4', 'name': 'admin'}]}))))
return roles_fix
def _mock_list_ec2_credentials(self, user_id, tenant_id):
@@ -191,6 +194,205 @@
self.assertEqual(primary_creds.tenant_id, '1234')
self.assertEqual(primary_creds.user_id, '1234')
+ def _request_and_check_second_creds(
+ self, creds_obj, func, creds_to_compare,
+ show_mock, sm_count=1, sm_count_in_diff_project=0,
+ same_project_request=True, **func_kwargs):
+ self._mock_user_create('111', 'fake_user')
+ with mock.patch.object(creds_obj.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '22', 'name': 'fake_project'}
+ new_creds = func(**func_kwargs)
+ if same_project_request:
+ # Check that with second creds request, create_project is not
+ # called and show_project is called. Which means new project is
+ # not created for the second requested creds instead new user is
+ # created under existing project.
+ self.assertEqual(len(create_mock.mock_calls), 0)
+ self.assertEqual(len(show_mock.mock_calls), sm_count)
+ # Verify project name and id is same as creds_to_compare
+ self.assertEqual(creds_to_compare.tenant_name,
+ new_creds.tenant_name)
+ self.assertEqual(creds_to_compare.tenant_id,
+ new_creds.tenant_id)
+ else:
+ # Check that with different project creds request, create_project
+ # is called and show_project is not called. Which means new project
+ # is created for this new creds request.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls),
+ sm_count_in_diff_project)
+ # Verify project name and id is not same as creds_to_compare
+ self.assertNotEqual(creds_to_compare.tenant_name,
+ new_creds.tenant_name)
+ self.assertNotEqual(creds_to_compare.tenant_id,
+ new_creds.tenant_id)
+ self.assertEqual(new_creds.tenant_name, 'fake_project')
+ self.assertEqual(new_creds.tenant_id, '22')
+ # Verify new user name and id
+ self.assertEqual(new_creds.username, 'fake_user')
+ self.assertEqual(new_creds.user_id, '111')
+ return new_creds
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def _creds_within_same_project(self, MockRestClient, test_alt_creds=False):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ if test_alt_creds:
+ admin_func = creds.get_project_alt_admin_creds
+ member_func = creds.get_project_alt_member_creds
+ reader_func = creds.get_project_alt_reader_creds
+ else:
+ admin_func = creds.get_project_admin_creds
+ member_func = creds.get_project_member_creds
+ reader_func = creds.get_project_reader_creds
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = member_func()
+ # Check that with first creds request, create_project is called and
+ # show_project is not called. Which means new project is created for
+ # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+
+ # Now request for the project reader creds which should not create new
+ # project instead should use the project_id of member_creds already
+ # created project.
+ self._request_and_check_second_creds(
+ creds, reader_func, member_creds, show_mock)
+
+ # Now request for the project admin creds which should not create new
+ # project instead should use the project_id of member_creds already
+ # created project.
+ self._request_and_check_second_creds(
+ creds, admin_func, member_creds, show_mock, sm_count=2)
+
+ def test_creds_within_same_project(self):
+ self._creds_within_same_project()
+
+ def test_alt_creds_within_same_project(self):
+ self._creds_within_same_project(test_alt_creds=True)
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_creds_in_different_project(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = creds.get_project_member_creds()
+ # Check that with first creds request, create_project is called and
+ # show_project is not called. Which means new project is created for
+ # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+
+ # Now request for the project alt reader creds which should create
+ # new project as this request is for alt creds.
+ alt_reader_creds = self._request_and_check_second_creds(
+ creds, creds.get_project_alt_reader_creds,
+ member_creds, show_mock, same_project_request=False)
+
+ # Check that with second creds request, create_project is not called
+ # and show_project is called. Which means new project is not created
+ # for the second requested creds instead new user is created under
+ # existing project.
+ self._request_and_check_second_creds(
+ creds, creds.get_project_reader_creds, member_creds, show_mock)
+
+ # Now request for the project alt member creds which should not create
+ # new project instead use the alt project already created for
+ # alt_reader creds.
+ show_mock.return_value = {
+ 'id': alt_reader_creds.tenant_id,
+ 'name': alt_reader_creds.tenant_name}
+ self._request_and_check_second_creds(
+ creds, creds.get_project_alt_member_creds,
+ alt_reader_creds, show_mock, sm_count=2,
+ same_project_request=True)
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_creds_by_role_in_different_project(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = creds.get_project_member_creds()
+ # Check that with first creds request, create_project is called and
+ # show_project is not called. Which means new project is created for
+ # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+ # Check that with second creds request, create_project is not called
+ # and show_project is called. Which means new project is not created
+ # for the second requested creds instead new user is created under
+ # existing project.
+ self._request_and_check_second_creds(
+ creds, creds.get_project_reader_creds, member_creds, show_mock)
+ # Now request the creds by role which should create new project.
+ self._request_and_check_second_creds(
+ creds, creds.get_creds_by_roles, member_creds, show_mock,
+ sm_count_in_diff_project=1, same_project_request=False,
+ roles=['member'], scope='project')
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_legacy_admin_creds_in_different_project(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = creds.get_project_member_creds()
+ # Check that with first creds request, create_project is called and
+ # show_project is not called. Which means new project is created for
+ # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+
+ # Now request for the legacy admin creds which should create
+ # new project instead of using project member creds project.
+ self._request_and_check_second_creds(
+ creds, creds.get_admin_creds,
+ member_creds, show_mock, same_project_request=False)
+
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_admin_creds(self, MockRestClient):
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
@@ -321,7 +523,8 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def _test_get_same_role_creds_with_project_scope(self, MockRestClient,
- scope=None):
+ scope=None,
+ force_new=False):
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_list_2_roles()
self._mock_user_create('1234', 'fake_role_user')
@@ -329,7 +532,7 @@
with mock.patch.object(self.roles_client.RolesClient,
'create_user_role_on_project') as user_mock:
role_creds = creds.get_creds_by_roles(
- roles=['role1', 'role2'], scope=scope)
+ roles=['role1', 'role2'], force_new=force_new, scope=scope)
calls = user_mock.mock_calls
# Assert that the role creation is called with the 2 specified roles
self.assertEqual(len(calls), 2)
@@ -338,13 +541,18 @@
with mock.patch.object(self.roles_client.RolesClient,
'create_user_role_on_project') as user_mock1:
role_creds_new = creds.get_creds_by_roles(
- roles=['role1', 'role2'], scope=scope)
+ roles=['role1', 'role2'], force_new=force_new, scope=scope)
calls = user_mock1.mock_calls
+ # With force_new, assert that new creds are created
+ if force_new:
+ self.assertEqual(len(calls), 2)
+ self.assertNotEqual(role_creds, role_creds_new)
# Assert that previously created creds are returned and no call to
- # role creation.
- self.assertEqual(len(calls), 0)
+ # role creation
# Check if previously created creds are returned.
- self.assertEqual(role_creds, role_creds_new)
+ else:
+ self.assertEqual(len(calls), 0)
+ self.assertEqual(role_creds, role_creds_new)
def test_get_same_role_creds_with_project_scope(self):
self._test_get_same_role_creds_with_project_scope(scope='project')
@@ -352,6 +560,13 @@
def test_get_same_role_creds_with_default_scope(self):
self._test_get_same_role_creds_with_project_scope()
+ def test_get_same_role_creds_with_project_scope_force_new(self):
+ self._test_get_same_role_creds_with_project_scope(
+ scope='project', force_new=True)
+
+ def test_get_same_role_creds_with_default_scope_force_new(self):
+ self._test_get_same_role_creds_with_project_scope(force_new=True)
+
@mock.patch('tempest.lib.common.rest_client.RestClient')
def _test_get_different_role_creds_with_project_scope(
self, MockRestClient, scope=None):
@@ -391,8 +606,12 @@
self._mock_assign_user_role()
self._mock_list_role()
self._mock_tenant_create('1234', 'fake_prim_tenant')
- self._mock_user_create('1234', 'fake_prim_user')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '1234', 'name': 'fake_prim_tenant'}
+ self._mock_user_create('1234', 'fake_project1_user')
creds.get_primary_creds()
+ self._mock_user_create('12341', 'fake_project1_user')
+ creds.get_project_admin_creds()
self._mock_tenant_create('12345', 'fake_alt_tenant')
self._mock_user_create('12345', 'fake_alt_user')
creds.get_alt_creds()
@@ -407,10 +626,11 @@
creds.clear_creds()
# Verify user delete calls
calls = user_mock.mock_calls
- self.assertEqual(len(calls), 3)
+ self.assertEqual(len(calls), 4)
args = map(lambda x: x[1][0], calls)
args = list(args)
self.assertIn('1234', args)
+ self.assertIn('12341', args)
self.assertIn('12345', args)
self.assertIn('123456', args)
# Verify tenant delete calls
@@ -512,6 +732,9 @@
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
self._mock_tenant_create('1234', 'fake_prim_tenant')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '1234', 'name': 'fake_prim_tenant'}
+ self._mock_user_create('12341', 'fake_project1_user')
self._mock_network_create(creds, '1234', 'fake_net')
self._mock_subnet_create(creds, '1234', 'fake_subnet')
self._mock_router_create('1234', 'fake_router')
@@ -519,6 +742,7 @@
'tempest.lib.services.network.routers_client.RoutersClient.'
'add_router_interface')
creds.get_primary_creds()
+ creds.get_project_admin_creds()
router_interface_mock.assert_called_once_with('1234', subnet_id='1234')
router_interface_mock.reset_mock()
# Create alternate tenant and network
@@ -779,6 +1003,7 @@
fake_response = fake_identity._fake_v3_response
tenants_client_class = tenants_client.ProjectsClient
delete_tenant = 'delete_project'
+ create_tenant = 'create_project'
def setUp(self):
super(TestDynamicCredentialProviderV3, self).setUp()
diff --git a/tempest/tests/lib/common/test_http.py b/tempest/tests/lib/common/test_http.py
index a19153f..aae6ba2 100644
--- a/tempest/tests/lib/common/test_http.py
+++ b/tempest/tests/lib/common/test_http.py
@@ -149,6 +149,31 @@
'xtra key': 'Xtra Value'},
response)
+ def test_request_preload(self):
+ # Given
+ connection = self.closing_http()
+ headers = {'Xtra Key': 'Xtra Value'}
+ http_response = urllib3.HTTPResponse(headers=headers)
+ request = self.patch('urllib3.PoolManager.request',
+ return_value=http_response)
+ retry = self.patch('urllib3.util.Retry')
+
+ # When
+ response, _ = connection.request(
+ method=REQUEST_METHOD,
+ url=REQUEST_URL,
+ headers=headers,
+ preload_content=False)
+
+ # Then
+ request.assert_called_once_with(
+ REQUEST_METHOD,
+ REQUEST_URL,
+ headers=dict(headers, connection='close'),
+ preload_content=False,
+ retries=retry(raise_on_redirect=False, redirect=5))
+ self.assertIsInstance(response, urllib3.HTTPResponse)
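For reference, a minimal sketch of what preload_content=False means at the plain urllib3 level, outside of Tempest: the body is not read eagerly, so the caller can stream it and release the connection explicitly. The URL and chunk size are illustrative.

import urllib3

http = urllib3.PoolManager()
resp = http.request('GET', 'https://example.com/large-object',
                    preload_content=False)
try:
    for chunk in resp.stream(64 * 1024):   # read the body in 64 KiB pieces
        pass                               # process each chunk here
finally:
    resp.release_conn()                    # hand the socket back to the pool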
+
class TestClosingProxyHttp(TestClosingHttp):
diff --git a/tempest/tests/lib/common/test_rest_client.py b/tempest/tests/lib/common/test_rest_client.py
index 910756f..81a76e0 100644
--- a/tempest/tests/lib/common/test_rest_client.py
+++ b/tempest/tests/lib/common/test_rest_client.py
@@ -55,6 +55,7 @@
def test_get(self):
__, return_dict = self.rest_client.get(self.url)
self.assertEqual('GET', return_dict['method'])
+ self.assertTrue(return_dict['preload_content'])
def test_delete(self):
__, return_dict = self.rest_client.delete(self.url)
@@ -78,6 +79,17 @@
__, return_dict = self.rest_client.copy(self.url)
self.assertEqual('COPY', return_dict['method'])
+ def test_get_chunked(self):
+ self.useFixture(fixtures.MockPatchObject(self.rest_client,
+ '_log_request'))
+ __, return_dict = self.rest_client.get(self.url, chunked=True)
+ # Default is preload_content=True, make sure we passed False
+ self.assertFalse(return_dict['preload_content'])
+ # Make sure we did not pass chunked=True to urllib3 for GET
+ self.assertFalse(return_dict['chunked'])
+ # Make sure we did not call _log_request() on the raw response
+ self.rest_client._log_request.assert_not_called()
+
class TestRestClientNotFoundHandling(BaseRestClientTestClass):
def setUp(self):
diff --git a/tempest/tests/lib/fake_http.py b/tempest/tests/lib/fake_http.py
index cfa4b93..5fa0c43 100644
--- a/tempest/tests/lib/fake_http.py
+++ b/tempest/tests/lib/fake_http.py
@@ -21,14 +21,17 @@
self.return_type = return_type
def request(self, uri, method="GET", body=None, headers=None,
- redirections=5, connection_type=None, chunked=False):
+ redirections=5, connection_type=None, chunked=False,
+ preload_content=False):
if not self.return_type:
fake_headers = fake_http_response(headers)
return_obj = {
'uri': uri,
'method': method,
'body': body,
- 'headers': headers
+ 'headers': headers,
+ 'chunked': chunked,
+ 'preload_content': preload_content,
}
return (fake_headers, return_obj)
elif isinstance(self.return_type, int):
diff --git a/tempest/tests/lib/services/compute/test_fixedIPs_client.py b/tempest/tests/lib/services/compute/test_fixedIPs_client.py
deleted file mode 100644
index 65bda45..0000000
--- a/tempest/tests/lib/services/compute/test_fixedIPs_client.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright 2015 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.lib.services.compute import fixed_ips_client
-from tempest.tests.lib import fake_auth_provider
-from tempest.tests.lib.services import base
-
-
-class TestFixedIPsClient(base.BaseServiceTest):
- FIXED_IP_INFO = {"fixed_ip": {"address": "10.0.0.1",
- "cidr": "10.11.12.0/24",
- "host": "localhost",
- "hostname": "OpenStack"}}
-
- def setUp(self):
- super(TestFixedIPsClient, self).setUp()
- fake_auth = fake_auth_provider.FakeAuthProvider()
- self.fixedIPsClient = (fixed_ips_client.
- FixedIPsClient
- (fake_auth, 'compute',
- 'regionOne'))
-
- def _test_show_fixed_ip(self, bytes_body=False):
- self.check_service_client_function(
- self.fixedIPsClient.show_fixed_ip,
- 'tempest.lib.common.rest_client.RestClient.get',
- self.FIXED_IP_INFO, bytes_body,
- status=200, fixed_ip='Identifier')
-
- def test_show_fixed_ip_with_str_body(self):
- self._test_show_fixed_ip()
-
- def test_show_fixed_ip_with_bytes_body(self):
- self._test_show_fixed_ip(True)
-
- def _test_reserve_fixed_ip(self, bytes_body=False):
- self.check_service_client_function(
- self.fixedIPsClient.reserve_fixed_ip,
- 'tempest.lib.common.rest_client.RestClient.post',
- {}, bytes_body,
- status=202, fixed_ip='Identifier')
-
- def test_reserve_fixed_ip_with_str_body(self):
- self._test_reserve_fixed_ip()
-
- def test_reserve_fixed_ip_with_bytes_body(self):
- self._test_reserve_fixed_ip(True)
diff --git a/tempest/tests/lib/services/compute/test_floating_ip_pools_client.py b/tempest/tests/lib/services/compute/test_floating_ip_pools_client.py
deleted file mode 100644
index 6278df4..0000000
--- a/tempest/tests/lib/services/compute/test_floating_ip_pools_client.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# Copyright 2015 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.lib.services.compute import floating_ip_pools_client
-from tempest.tests.lib import fake_auth_provider
-from tempest.tests.lib.services import base
-
-
-class TestFloatingIPPoolsClient(base.BaseServiceTest):
-
- FAKE_FLOATING_IP_POOLS = {
- "floating_ip_pools":
- [
- {"name": '\u3042'},
- {"name": '\u3044'}
- ]
- }
-
- def setUp(self):
- super(TestFloatingIPPoolsClient, self).setUp()
- fake_auth = fake_auth_provider.FakeAuthProvider()
- self.client = floating_ip_pools_client.FloatingIPPoolsClient(
- fake_auth, 'compute', 'regionOne')
-
- def test_list_floating_ip_pools_with_str_body(self):
- self.check_service_client_function(
- self.client.list_floating_ip_pools,
- 'tempest.lib.common.rest_client.RestClient.get',
- self.FAKE_FLOATING_IP_POOLS)
-
- def test_list_floating_ip_pools_with_bytes_body(self):
- self.check_service_client_function(
- self.client.list_floating_ip_pools,
- 'tempest.lib.common.rest_client.RestClient.get',
- self.FAKE_FLOATING_IP_POOLS, to_utf=True)
diff --git a/tempest/tests/lib/services/compute/test_floating_ips_bulk_client.py b/tempest/tests/lib/services/compute/test_floating_ips_bulk_client.py
deleted file mode 100644
index ace76f8..0000000
--- a/tempest/tests/lib/services/compute/test_floating_ips_bulk_client.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2015 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.tests.lib import fake_auth_provider
-
-from tempest.lib.services.compute import floating_ips_bulk_client
-from tempest.tests.lib.services import base
-
-
-class TestFloatingIPsBulkClient(base.BaseServiceTest):
-
- FAKE_FIP_BULK_LIST = {"floating_ip_info": [{
- "address": "10.10.10.1",
- "instance_uuid": None,
- "fixed_ip": None,
- "interface": "eth0",
- "pool": "nova",
- "project_id": None
- },
- {
- "address": "10.10.10.2",
- "instance_uuid": None,
- "fixed_ip": None,
- "interface": "eth0",
- "pool": "nova",
- "project_id": None
- }]}
-
- def setUp(self):
- super(TestFloatingIPsBulkClient, self).setUp()
- fake_auth = fake_auth_provider.FakeAuthProvider()
- self.client = floating_ips_bulk_client.FloatingIPsBulkClient(
- fake_auth, 'compute', 'regionOne')
-
- def _test_list_floating_ips_bulk(self, bytes_body=False):
- self.check_service_client_function(
- self.client.list_floating_ips_bulk,
- 'tempest.lib.common.rest_client.RestClient.get',
- self.FAKE_FIP_BULK_LIST,
- to_utf=bytes_body)
-
- def _test_create_floating_ips_bulk(self, bytes_body=False):
- fake_fip_create_data = {"floating_ips_bulk_create": {
- "ip_range": "192.168.1.0/24", "pool": "nova", "interface": "eth0"}}
- self.check_service_client_function(
- self.client.create_floating_ips_bulk,
- 'tempest.lib.common.rest_client.RestClient.post',
- fake_fip_create_data,
- to_utf=bytes_body,
- ip_range="192.168.1.0/24", pool="nova", interface="eth0")
-
- def _test_delete_floating_ips_bulk(self, bytes_body=False):
- fake_fip_delete_data = {"floating_ips_bulk_delete": "192.168.1.0/24"}
- self.check_service_client_function(
- self.client.delete_floating_ips_bulk,
- 'tempest.lib.common.rest_client.RestClient.put',
- fake_fip_delete_data,
- to_utf=bytes_body,
- ip_range="192.168.1.0/24")
-
- def test_list_floating_ips_bulk_with_str_body(self):
- self._test_list_floating_ips_bulk()
-
- def test_list_floating_ips_bulk_with_bytes_body(self):
- self._test_list_floating_ips_bulk(True)
-
- def test_create_floating_ips_bulk_with_str_body(self):
- self._test_create_floating_ips_bulk()
-
- def test_create_floating_ips_bulk_with_bytes_body(self):
- self._test_create_floating_ips_bulk(True)
-
- def test_delete_floating_ips_bulk_with_str_body(self):
- self._test_delete_floating_ips_bulk()
-
- def test_delete_floating_ips_bulk_with_bytes_body(self):
- self._test_delete_floating_ips_bulk(True)
diff --git a/tempest/tests/lib/services/compute/test_server_external_events_client.py b/tempest/tests/lib/services/compute/test_server_external_events_client.py
new file mode 100644
index 0000000..63922b3
--- /dev/null
+++ b/tempest/tests/lib/services/compute/test_server_external_events_client.py
@@ -0,0 +1,56 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.compute import server_external_events_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestServerExternalEventsClient(base.BaseServiceTest):
+
+ events = [
+ {
+ "code": 200,
+ "name": "network-changed",
+ "server_uuid": "ff1df7b2-6772-45fd-9326-c0a3b05591c2",
+ "status": "completed",
+ "tag": "foo"
+ }
+ ]
+
+ events_req = [
+ {
+ "name": "network-changed",
+ "server_uuid": "ff1df7b2-6772-45fd-9326-c0a3b05591c2",
+ }
+ ]
+
+ def setUp(self):
+ super(TestServerExternalEventsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = server_external_events_client.ServerExternalEventsClient(
+ fake_auth, 'compute', 'regionOne')
+
+ def _test_create_server_external_events(self, bytes_body=False):
+ expected = {"events": self.events}
+ self.check_service_client_function(
+ self.client.create_server_external_events,
+ 'tempest.lib.common.rest_client.RestClient.post', expected,
+ bytes_body, events=self.events_req)
+
+ def test_create_server_external_events_str_body(self):
+ self._test_create_server_external_events(bytes_body=False)
+
+ def test_create_server_external_events_byte_body(self):
+ self._test_create_server_external_events(bytes_body=True)
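A hedged usage sketch for the call the tests above exercise; the server UUID is a placeholder and `client` is assumed to be an authenticated ServerExternalEventsClient.

# Illustrative only: tell Nova that a port on the given server changed.
events = [{
    'name': 'network-changed',
    'server_uuid': 'ff1df7b2-6772-45fd-9326-c0a3b05591c2',
}]
resp = client.create_server_external_events(events=events)
for event in resp['events']:
    print(event['name'], event['status'])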
diff --git a/tempest/tests/lib/services/compute/test_servers_client.py b/tempest/tests/lib/services/compute/test_servers_client.py
index a82b255..8df82f7 100644
--- a/tempest/tests/lib/services/compute/test_servers_client.py
+++ b/tempest/tests/lib/services/compute/test_servers_client.py
@@ -789,21 +789,6 @@
length='fake-length'
)
- def test_list_virtual_interfaces_with_str_body(self):
- self._test_list_virtual_interfaces()
-
- def test_list_virtual_interfaces_with_bytes_body(self):
- self._test_list_virtual_interfaces(True)
-
- def _test_list_virtual_interfaces(self, bytes_body=False):
- self.check_service_client_function(
- self.client.list_virtual_interfaces,
- 'tempest.lib.common.rest_client.RestClient.get',
- {'virtual_interfaces': [self.FAKE_VIRTUAL_INTERFACES]},
- bytes_body,
- server_id=self.server_id
- )
-
def test_rescue_server_with_str_body(self):
self._test_rescue_server()
diff --git a/tempest/tests/lib/services/image/v1/__init__.py b/tempest/tests/lib/services/image/v1/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/tests/lib/services/image/v1/__init__.py
+++ /dev/null
diff --git a/tempest/tests/lib/services/image/v1/test_image_members_client.py b/tempest/tests/lib/services/image/v1/test_image_members_client.py
deleted file mode 100644
index a5a6128..0000000
--- a/tempest/tests/lib/services/image/v1/test_image_members_client.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# Copyright 2016 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.lib.services.image.v1 import image_members_client
-from tempest.tests.lib import fake_auth_provider
-from tempest.tests.lib.services import base
-
-
-class TestImageMembersClient(base.BaseServiceTest):
- FAKE_LIST_IMAGE_MEMBERS = {
- "members": [
- {
- "created_at": "2013-10-07T17:58:03Z",
- "image_id": "dbc999e3-c52f-4200-bedd-3b18fe7f87fe",
- "member_id": "123456789",
- "status": "pending",
- "updated_at": "2013-10-07T17:58:03Z"
- },
- {
- "created_at": "2013-10-07T17:58:55Z",
- "image_id": "dbc999e3-c52f-4200-bedd-3b18fe7f87fe",
- "member_id": "987654321",
- "status": "accepted",
- "updated_at": "2013-10-08T12:08:55Z"
- }
- ]
- }
-
- def setUp(self):
- super(TestImageMembersClient, self).setUp()
- fake_auth = fake_auth_provider.FakeAuthProvider()
- self.client = image_members_client.ImageMembersClient(fake_auth,
- 'image',
- 'regionOne')
-
- def _test_list_image_members(self, bytes_body=False):
- self.check_service_client_function(
- self.client.list_image_members,
- 'tempest.lib.common.rest_client.RestClient.get',
- self.FAKE_LIST_IMAGE_MEMBERS,
- bytes_body,
- image_id="0ae74cc5-5147-4239-9ce2-b0c580f7067e")
-
- def _test_create_image_member(self, bytes_body=False):
- self.check_service_client_function(
- self.client.create_image_member,
- 'tempest.lib.common.rest_client.RestClient.put',
- {},
- bytes_body,
- image_id="0ae74cc5-5147-4239-9ce2-b0c580f7067e",
- member_id="8989447062e04a818baf9e073fd04fa7",
- status=204)
-
- def test_list_image_members_with_str_body(self):
- self._test_list_image_members()
-
- def test_list_image_members_with_bytes_body(self):
- self._test_list_image_members(bytes_body=True)
-
- def test_create_image_member_with_str_body(self):
- self._test_create_image_member()
-
- def test_create_image_member_with_bytes_body(self):
- self._test_create_image_member(bytes_body=True)
-
- def test_delete_image_member(self):
- self.check_service_client_function(
- self.client.delete_image_member,
- 'tempest.lib.common.rest_client.RestClient.delete',
- {},
- image_id="0ae74cc5-5147-4239-9ce2-b0c580f7067e",
- member_id="8989447062e04a818baf9e073fd04fa7",
- status=204)
diff --git a/tempest/tests/lib/services/image/v2/test_image_tasks_client.py b/tempest/tests/lib/services/image/v2/test_image_tasks_client.py
new file mode 100644
index 0000000..6e3b3b5
--- /dev/null
+++ b/tempest/tests/lib/services/image/v2/test_image_tasks_client.py
@@ -0,0 +1,86 @@
+# Copyright 2023 Red Hat, Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.image.v2 import tasks_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestImageTaskClient(base.BaseServiceTest):
+ def setUp(self):
+ super(TestImageTaskClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = tasks_client.TaskClient(
+ fake_auth, 'image', 'regionOne')
+
+ def test_list_task(self):
+ fake_result = {
+
+ "first": "/v2/tasks",
+ "schema": "/v2/schemas/tasks",
+ "tasks": [
+ {
+ "id": "08b7e1c8-3821-4f54-b3b8-d6655d178cdf",
+ "owner": "fa6c8c1600f4444281658a23ee6da8e8",
+ "schema": "/v2/schemas/task",
+ "self": "/v2/tasks/08b7e1c8-3821-4f54-b3b8-d6655d178cdf",
+ "status": "processing",
+ "type": "import"
+ },
+ {
+ "id": "231c311d-3557-4e23-afc4-6d98af1419e7",
+ "owner": "fa6c8c1600f4444281658a23ee6da8e8",
+ "schema": "/v2/schemas/task",
+ "self": "/v2/tasks/231c311d-3557-4e23-afc4-6d98af1419e7",
+ "status": "processing",
+ "type": "import"
+ }
+ ]
+ }
+ self.check_service_client_function(
+ self.client.list_tasks,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ fake_result,
+ mock_args=['tasks'])
+
+ def test_create_task(self):
+ fake_result = {
+ "type": "import",
+ "input": {
+ "import_from":
+ "http://download.cirros-cloud.net/0.6.1/ \
+ cirros-0.6.1-x86_64-disk.img",
+ "import_from_format": "qcow2",
+ "image_properties": {
+ "disk_format": "qcow2",
+ "container_format": "bare"
+ }
+ }
+ }
+ self.check_service_client_function(
+ self.client.create_task,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ fake_result,
+ status=201)
+
+ def test_show_task(self):
+ fake_result = {
+ "task_id": "08b7e1c8-3821-4f54-b3b8-d6655d178cdf"
+ }
+ self.check_service_client_function(
+ self.client.show_tasks,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ fake_result,
+ status=200,
+ task_id="e485aab9-0907-4973-921c-bb6da8a8fcf8")
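A hedged usage sketch for the tasks client exercised above; `client` is assumed to be an authenticated TaskClient, the import source is illustrative, and the 'id'/'status' fields are assumed to follow the Images v2 tasks API.

# Illustrative only: kick off a Glance image import task and read its status.
task = client.create_task(
    type='import',
    input={
        'import_from': 'http://example.com/cirros-0.6.1-x86_64-disk.img',
        'import_from_format': 'qcow2',
        'image_properties': {'disk_format': 'qcow2',
                             'container_format': 'bare'},
    })
status = client.show_tasks(task_id=task['id'])['status']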
diff --git a/tempest/tests/lib/services/image/v2/test_images_client.py b/tempest/tests/lib/services/image/v2/test_images_client.py
index 5b162f8..27a50a9 100644
--- a/tempest/tests/lib/services/image/v2/test_images_client.py
+++ b/tempest/tests/lib/services/image/v2/test_images_client.py
@@ -13,6 +13,9 @@
# under the License.
import io
+from unittest import mock
+
+import fixtures
from tempest.lib.common.utils import data_utils
from tempest.lib.services.image.v2 import images_client
@@ -239,6 +242,21 @@
headers={'Content-Type': 'application/octet-stream'},
status=200)
+ def test_show_image_file_chunked(self):
+ # Since chunked=True on a GET should pass the response object
+ # basically untouched, we use a mock here so we get some assurances.
+ http_response = mock.MagicMock()
+ http_response.status = 200
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.common.rest_client.RestClient.get',
+ return_value=(http_response, b'')))
+ resp = self.client.show_image_file(
+ self.FAKE_CREATE_UPDATE_SHOW_IMAGE['id'],
+ chunked=True)
+ self.assertEqual(http_response, resp)
+ resp.__contains__.assert_not_called()
+ resp.__getitem__.assert_not_called()
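A hedged usage sketch of what the assertion above enables: with chunked=True the raw, un-preloaded response is assumed to come back untouched, so a large image can be streamed to disk instead of buffered. The path and chunk size are illustrative.

# Illustrative only; `images_client` is an authenticated ImagesClient and
# the return value is assumed to be the raw urllib3 response object.
resp = images_client.show_image_file(image_id, chunked=True)
try:
    with open('/tmp/downloaded-image.qcow2', 'wb') as f:
        for chunk in resp.stream(1024 * 1024):
            f.write(chunk)
finally:
    resp.release_conn()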
+
def test_add_image_tag(self):
self.check_service_client_function(
self.client.add_image_tag,
diff --git a/tempest/tests/lib/services/registry_fixture.py b/tempest/tests/lib/services/registry_fixture.py
index a368705..d722b06 100644
--- a/tempest/tests/lib/services/registry_fixture.py
+++ b/tempest/tests/lib/services/registry_fixture.py
@@ -37,7 +37,7 @@
def __init__(self):
"""Initialise the registry fixture"""
self.services = set(['compute', 'identity.v2', 'identity.v3',
- 'image.v1', 'image.v2', 'network', 'placement',
+ 'image.v2', 'network', 'placement',
'volume.v2', 'volume.v3', 'object-storage'])
def _setUp(self):
diff --git a/tempest/tests/lib/test_decorators.py b/tempest/tests/lib/test_decorators.py
index fc93f76..f9a12b6 100644
--- a/tempest/tests/lib/test_decorators.py
+++ b/tempest/tests/lib/test_decorators.py
@@ -21,6 +21,7 @@
from tempest.lib import base as test
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+from tempest.lib import exceptions
from tempest.lib import exceptions as lib_exc
from tempest.tests import base
@@ -289,3 +290,109 @@
with mock.patch.object(decorators.LOG, 'error'):
self.assertRaises(lib_exc.InvalidParam, test_foo, object())
+
+
+class TestCleanupOrderDecorator(base.TestCase):
+
+ @decorators.cleanup_order
+ def _create_volume(self, raise_exception=False):
+ """Test doc"""
+ vol_id = "487ef6b6-546a-40c7-bc3f-b405d6239fc8"
+ self.cleanup(self._delete_dummy, vol_id)
+ if raise_exception:
+ raise exceptions.NotFound("Not found")
+ return "volume"
+
+ def _delete_dummy(self, vol_id):
+ pass
+
+ class DummyClassResourceCleanup(list):
+ """dummy list class simulate ClassResourceCleanup"""
+
+ def __call__(self, func, vol_id):
+ self.append((func, vol_id))
+
+ @classmethod
+ def resource_setup(cls):
+ cls.addClassResourceCleanup = cls.DummyClassResourceCleanup()
+ cls.volume = cls._create_volume()
+
+ @classmethod
+ def resource_setup_exception(cls):
+ cls.addClassResourceCleanup = cls.DummyClassResourceCleanup()
+ cls.volume = cls._create_volume(raise_exception=True)
+
+ def setUp(self):
+ super().setUp()
+ self.volume_instance = self._create_volume()
+
+ def test_cleanup_order_when_called_from_instance_testcase(self):
+ # create a volume
+ my_vol = self._create_volume()
+ # Verify method runs and return value
+ self.assertEqual(my_vol, "volume")
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(self.cleanup, self.addCleanup)
+ # New __name__ created from type(self)
+ self.assertEqual(self.__name__, type(self).__name__)
+ # Verify function added to instance _cleanups
+ self.assertIn(self._delete_dummy, [e[0] for e in self._cleanups])
+
+ def test_cleanup_order_when_called_from_setup_instance(self):
+ # create a volume
+ my_vol = self.volume_instance
+ # Verify method runs and return value
+ self.assertEqual(my_vol, "volume")
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(self.cleanup, self.addCleanup)
+ # New __name__ created from type(self)
+ self.assertEqual(self.__name__, type(self).__name__)
+ # Verify function added to instance _cleanups
+ self.assertIn(self._delete_dummy, [e[0] for e in self._cleanups])
+
+ def test_cleanup_order_when_called_from_instance_raise(self):
+ # create a volume and expect the exception to be raised
+ self.assertRaises(exceptions.NotFound, self._create_volume,
+ raise_exception=True)
+ # cleanup was registered before the exception was raised
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(self.cleanup, self.addCleanup)
+ # New __name__ created from type(self)
+ self.assertEqual(self.__name__, type(self).__name__)
+ # Verify function added to instance _cleanups before exception
+ self.assertIn(self._delete_dummy, [e[0] for e in self._cleanups])
+
+ def test_cleanup_order_when_called_from_class_method(self):
+ # call class method
+ type(self).resource_setup()
+ # create a volume
+ my_vol = self.volume
+ # Verify method runs and return value
+ self.assertEqual(my_vol, "volume")
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addClassResourceCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(type(self).cleanup, self.addClassResourceCleanup)
+ # Verify function added to instance addClassResourceCleanup
+ self.assertIn(type(self)._delete_dummy,
+ [e[0] for e in self.addClassResourceCleanup])
+
+ def test_cleanup_order_when_called_from_class_method_raise(self):
+ # call class method
+ self.assertRaises(exceptions.NotFound,
+ type(self).resource_setup_exception)
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addClassResourceCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(type(self).cleanup, self.addClassResourceCleanup)
+ # Verify function added to instance addClassResourceCleanup
+ self.assertIn(type(self)._delete_dummy,
+ [e[0] for e in self.addClassResourceCleanup])
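To make the intent of these tests concrete, a hedged sketch of how a test class might use the decorator: the same helper is shared between resource_setup() and individual tests, and self.cleanup is expected to resolve to addClassResourceCleanup or addCleanup depending on the caller. The base class and volumes_client are hypothetical; only decorators.cleanup_order is the API under test.

from tempest.lib import decorators


class VolumesExampleTest(SomeVolumeBaseTest):   # hypothetical base class

    @decorators.cleanup_order
    def create_volume(cls_or_self):
        volume = cls_or_self.volumes_client.create_volume()['volume']
        # Registered via addClassResourceCleanup when called from
        # resource_setup(), or via addCleanup when called from a test.
        cls_or_self.cleanup(cls_or_self.volumes_client.delete_volume,
                            volume['id'])
        return volume

    @classmethod
    def resource_setup(cls):
        super().resource_setup()
        cls.shared_volume = cls.create_volume()   # class-scoped cleanup

    def test_attach(self):
        volume = self.create_volume()             # test-scoped cleanup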
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index 886d99c..13870ba 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -75,7 +75,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=None
+ sock=None,
+ allow_agent=True
)]
self.assertEqual(expected_connect, client_mock.connect.mock_calls)
self.assertEqual(0, s_mock.call_count)
@@ -91,7 +92,8 @@
proxy_client = ssh.Client('proxy-host', 'proxy-user', timeout=2)
client = ssh.Client('localhost', 'root', timeout=2,
- proxy_client=proxy_client)
+ proxy_client=proxy_client,
+ ssh_allow_agent=False)
client._get_ssh_connection(sleep=1)
aa_mock.assert_has_calls([mock.call(), mock.call()])
@@ -106,7 +108,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=None
+ sock=None,
+ allow_agent=True
)]
self.assertEqual(proxy_expected_connect,
proxy_client_mock.connect.mock_calls)
@@ -121,7 +124,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=proxy_client_mock.get_transport().open_session()
+ sock=proxy_client_mock.get_transport().open_session(),
+ allow_agent=False
)]
self.assertEqual(expected_connect, client_mock.connect.mock_calls)
self.assertEqual(0, s_mock.call_count)
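A hedged usage sketch of the new knob covered by the updated call assertions: ssh_allow_agent=False is expected to reach paramiko as connect(allow_agent=False), which helps when a loaded SSH agent key would be offered instead of the intended credentials. Host names and the command are placeholders.

from tempest.lib.common import ssh

# Illustrative only: hop through a jump host, and disable the local SSH
# agent for the final connection so only the supplied key/password is used.
proxy = ssh.Client('proxy.example.org', 'proxy-user', timeout=2)
client = ssh.Client('10.0.0.5', 'root', timeout=2,
                    proxy_client=proxy,
                    ssh_allow_agent=False)
print(client.exec_command('hostname'))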
diff --git a/tempest/tests/test_test.py b/tempest/tests/test_test.py
index cbb81e2..80825a4 100644
--- a/tempest/tests/test_test.py
+++ b/tempest/tests/test_test.py
@@ -17,12 +17,14 @@
import unittest
from unittest import mock
+from oslo_concurrency import lockutils
from oslo_config import cfg
import testtools
from tempest import clients
from tempest import config
from tempest.lib.common import validation_resources as vr
+from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest.lib.services.compute import base_compute_client
from tempest.lib.services.placement import base_placement_client
@@ -33,6 +35,8 @@
from tempest.tests.lib import fake_credentials
from tempest.tests.lib.services import registry_fixture
+CONF = config.CONF
+
class LoggingTestResult(testtools.TestResult):
@@ -65,7 +69,7 @@
creds = fake_credentials.FakeKeystoneV3Credentials()
osclients = clients.Manager(creds)
vr = self.test_test_class.get_class_validation_resources(osclients)
- self.assertIsNone(vr)
+ self.assertEqual({}, vr)
def test_validation_resources_exists(self):
cfg.CONF.set_default('run_validation', True, 'validation')
@@ -594,6 +598,52 @@
str(log[0][2]['traceback']).replace('\n', ' '),
RuntimeError.__name__ + ': .* ' + OverridesSetup.__name__)
+ @mock.patch.object(test.process_lock, 'InterProcessReaderWriterLock')
+ def test_serial_execution_if_requested(self, mock_lock):
+
+ @decorators.serial
+ class SerialTests(self.parent_test):
+ pass
+
+ class ParallelTests(self.parent_test):
+ pass
+
+ @decorators.serial
+ class SerialTests2(self.parent_test):
+ pass
+
+ suite = unittest.TestSuite(
+ (SerialTests(), ParallelTests(), SerialTests2()))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+
+ expected_lock_path = os.path.join(
+ lockutils.get_lock_path(CONF), 'tempest-serial-rw-lock')
+
+ # We expect that each test class has a lock with the _same_ external
+ # path so that even if they are run by different processes they
+ # still use the same lock.
+ # Also, we expect that each serial class takes and releases the
+ # write-lock while each non-serial class takes and releases the
+ # read-lock.
+ self.assertEqual(
+ [
+ mock.call(expected_lock_path),
+ mock.call().acquire_write_lock(),
+ mock.call().release_write_lock(),
+
+ mock.call(expected_lock_path),
+ mock.call().acquire_read_lock(),
+ mock.call().release_read_lock(),
+
+ mock.call(expected_lock_path),
+ mock.call().acquire_write_lock(),
+ mock.call().release_write_lock(),
+ ],
+ mock_lock.mock_calls
+ )
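For context, a hedged sketch of the reader/writer pattern the expected mock calls describe, written directly against fasteners (assumed here to be what test.process_lock wraps): serial classes take the write lock, every other class takes the read lock, and all workers point at the same external lock file.

from fasteners import process_lock

LOCK_PATH = '/tmp/tempest-serial-rw-lock'   # shared by every worker process

def run_serial_class(run_tests):
    lock = process_lock.InterProcessReaderWriterLock(LOCK_PATH)
    lock.acquire_write_lock()      # waits until no readers or writers remain
    try:
        run_tests()
    finally:
        lock.release_write_lock()

def run_parallel_class(run_tests):
    lock = process_lock.InterProcessReaderWriterLock(LOCK_PATH)
    lock.acquire_read_lock()       # many readers may hold this concurrently
    try:
        run_tests()
    finally:
        lock.release_read_lock()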
+
class TestTempestBaseTestClassFixtures(base.TestCase):
@@ -750,6 +800,11 @@
class TestAPIMicroversionTest1(test.BaseTestCase):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.useFixture(fake_config.ConfigFixture())
+ config.TempestConfigPrivate = fake_config.FakePrivate
+
@classmethod
def resource_setup(cls):
super(TestAPIMicroversionTest1, cls).resource_setup()
@@ -812,6 +867,11 @@
class TestAPIMicroversionTest2(test.BaseTestCase):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.useFixture(fake_config.ConfigFixture())
+ config.TempestConfigPrivate = fake_config.FakePrivate
+
@classmethod
def resource_setup(cls):
super(TestAPIMicroversionTest2, cls).resource_setup()
diff --git a/tools/generate-tempest-plugins-list.py b/tools/generate-tempest-plugins-list.py
index b96bbe4..0b6b342 100644
--- a/tools/generate-tempest-plugins-list.py
+++ b/tools/generate-tempest-plugins-list.py
@@ -77,6 +77,9 @@
'x/ranger-tempest-plugin'
'x/tap-as-a-service-tempest-plugin'
'x/trio2o'
+ # No changes are being merged in this repository:
+ # https://review.opendev.org/q/project:x%252Fnetworking-fortinet
+ 'x/networking-fortinet'
]
url = 'https://review.opendev.org/projects/'
diff --git a/tools/tempest-extra-tests-list.txt b/tools/tempest-extra-tests-list.txt
new file mode 100644
index 0000000..9c88109
--- /dev/null
+++ b/tools/tempest-extra-tests-list.txt
@@ -0,0 +1,20 @@
+# This file includes the list of tests which need to be
+# excluded from integrated testing (the tempest-full job
+# or other generic jobs). We will run these tests in
+# separate jobs. This is needed to avoid the job timeout, details in
+# bug#2004780.
+# Basic criteria to add a test to this list are:
+# * Admin tests which are not needed for interop; most of them
+# are run as part of other API and scenario tests.
+# * Negative tests which are mostly covered in tempest API tests
+# or service unit/functional tests.
+
+# All admin tests except keystone admin tests, which might not have much
+# coverage in other existing tests
+tempest.api.compute.admin
+tempest.api.volume.admin
+tempest.api.image.admin
+tempest.api.network.admin
+
+# All negative tests
+negative
diff --git a/tox.ini b/tox.ini
index 94eb4d9..fc882cf 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,11 +1,8 @@
[tox]
envlist = pep8,py39,bashate,pip-check-reqs
minversion = 3.18.0
-skipsdist = True
-ignore_basepython_conflict = True
[tempestenv]
-basepython = python3
sitepackages = False
setenv =
VIRTUAL_ENV={envdir}
@@ -16,7 +13,6 @@
-r{toxinidir}/requirements.txt
[testenv]
-basepython = python3
setenv =
VIRTUAL_ENV={envdir}
OS_LOG_CAPTURE=1
@@ -24,10 +20,25 @@
OS_STDERR_CAPTURE=1
OS_TEST_TIMEOUT=160
PYTHONWARNINGS=default::DeprecationWarning,ignore::DeprecationWarning:distutils,ignore::DeprecationWarning:site
-passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
+passenv =
+ OS_STDOUT_CAPTURE
+ OS_STDERR_CAPTURE
+ OS_TEST_TIMEOUT
+ OS_TEST_LOCK_PATH
+ TEMPEST_CONFIG
+ TEMPEST_CONFIG_DIR
+ http_proxy
+ HTTP_PROXY
+ https_proxy
+ HTTPS_PROXY
+ no_proxy
+ NO_PROXY
+ ZUUL_CACHE_DIR
+ REQUIREMENTS_PIP_LOCATION
+ GENERATE_TEMPEST_PLUGIN_LIST
usedevelop = True
-install_command = pip install {opts} {packages}
-allowlist_externals = *
+allowlist_externals =
+ find
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
@@ -58,7 +69,6 @@
[testenv:all]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
# 'all' includes slow tests
setenv =
{[tempestenv]setenv}
@@ -79,7 +89,6 @@
# 'all' includes slow tests
setenv =
{[tempestenv]setenv}
-basepython = {[tempestenv]basepython}
deps = {[tempestenv]deps}
commands =
echo "WARNING: The all-plugin env is deprecated and will be removed"
@@ -92,7 +101,6 @@
# 'all' includes slow tests
setenv =
{[tempestenv]setenv}
-basepython = {[tempestenv]basepython}
deps = {[tempestenv]deps}
commands =
find . -type f -name "*.pyc" -delete
@@ -101,7 +109,6 @@
[testenv:full]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag:
@@ -110,166 +117,223 @@
commands =
find . -type f -name "*.pyc" -delete
tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' {posargs}
+
+[testenv:integrated-full]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select which tests to run. It excludes the extra
+# tests mentioned in tools/tempest-extra-tests-list.txt and the slow tag:
+# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
+# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --regex {[testenv:integrated-full]regex1} --exclude-list ./tools/tempest-extra-tests-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-full]regex2} {posargs}
+
+[testenv:extra-tests]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select extra tests mentioned in
+# tools/tempest-extra-tests-list.txt and exclude slow-tagged tests:
+# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
+# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
+exclude-regex = '\[.*\bslow\b.*\]'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --exclude-regex {[testenv:extra-tests]exclude-regex} --include-list ./tools/tempest-extra-tests-list.txt {posargs}
[testenv:full-parallel]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
-# The regex below is used to select all tempest scenario and including the non slow api tests
+# But exclude the extra tests mentioned in tools/tempest-extra-tests-list.txt
+regex = '(^tempest\.scenario.*)|(^tempest\.serial_tests)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(^tempest\.scenario.*)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
+ tempest run --regex {[testenv:full-parallel]regex} --exclude-list ./tools/tempest-extra-tests-list.txt {posargs}
[testenv:api-microversion-tests]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.api\.compute)|(^tempest\.api\.volume)'
# The regex below is used to select all tempest api tests for services having API
# microversion concept.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(^tempest\.api\.compute)|(^tempest\.api\.volume)' {posargs}
+ tempest run --regex {[testenv:api-microversion-tests]regex} {posargs}
[testenv:integrated-network]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-network]regex1} --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-network]regex2} --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
[testenv:integrated-compute]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-compute]regex1} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-compute]regex2} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
[testenv:integrated-placement]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-placement]regex1} --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-placement]regex2} --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
[testenv:integrated-storage]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-storage]regex1} --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-storage]regex2} --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
[testenv:integrated-object-storage]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-object-storage]regex1} --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-object-storage]regex2} --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
[testenv:full-serial]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|serial_tests))'
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
+ tempest run --serial --regex {[testenv:full-serial]regex} {posargs}
[testenv:scenario]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.scenario)'
# The regex below is used to select all scenario tests
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '(^tempest\.scenario)' {posargs}
+ tempest run --serial --regex {[testenv:scenario]regex} {posargs}
[testenv:smoke]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --regex {[testenv:smoke]regex} {posargs}
[testenv:smoke-serial]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
# This is still serial because neutron doesn't work with parallel. See:
# https://bugs.launchpad.net/tempest/+bug/1216076 so the neutron smoke
# job would fail if we moved it to parallel.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --serial --regex {[testenv:smoke-serial]regex} {posargs}
[testenv:slow-serial]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bslow\b.*\]'
# The regex below is used to select the slow tagged tests to run serially:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '\[.*\bslow\b.*\]' {posargs}
+ tempest run --serial --regex {[testenv:slow-serial]regex} {posargs}
+
+[testenv:slow]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select the slow tagged tests:
+regex = '\[.*\bslow\b.*\]'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --regex {[testenv:slow]regex} {posargs}
+
+[testenv:multinode]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select the multinode and smoke tagged tests
+regex = '\[.*\bsmoke|multinode\b.*\]'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --regex {[testenv:multinode]regex} {posargs}
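As a sanity check of the selection regex above, a small illustrative snippet showing which kinds of test IDs it keeps (anything tagged smoke or multinode) and which it drops; the IDs are made up.

import re

pattern = re.compile(r'\[.*\bsmoke|multinode\b.*\]')
ids = [
    'tempest.api.compute.test_x.TestX.test_ok[id-1,smoke]',
    'tempest.scenario.test_y.TestY.test_live[id-2,multinode]',
    'tempest.api.volume.test_z.TestZ.test_other[id-3]',
]
print([i for i in ids if pattern.search(i)])   # keeps only the first two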
[testenv:ipv6-only]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke|ipv6|test_network_v6\b.*\]'
# Run only smoke and ipv6 tests. This env is used to test
# ipv6 deployments and to check that the basic tests run fine so that we can
# verify that services listen on IPv6 addresses.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '\[.*\bsmoke|ipv6|test_network_v6\b.*\]' {posargs}
+ tempest run --regex {[testenv:ipv6-only]regex} {posargs}
[testenv:venv]
deps =
@@ -281,7 +345,6 @@
[testenv:venv-tempest]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
commands = {posargs}
@@ -425,11 +488,11 @@
[testenv:stestr-master]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
-basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
# The command below installs the stestr master version and runs smoke tests
commands =
find . -type f -name "*.pyc" -delete
pip install -U git+https://github.com/mtreinish/stestr
- tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --regex {[testenv:stestr-master]regex} {posargs}
diff --git a/zuul.d/base.yaml b/zuul.d/base.yaml
index 3deb944..0ac893a 100644
--- a/zuul.d/base.yaml
+++ b/zuul.d/base.yaml
@@ -13,6 +13,8 @@
roles: &base_roles
- zuul: opendev.org/openstack/devstack
vars: &base_vars
+ devstack_localrc:
+ IMAGE_URLS: http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img, http://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img
devstack_services:
tempest: true
devstack_local_conf:
@@ -72,7 +74,8 @@
and a tempest one exist.
timeout: 10800
vars:
- tox_envlist: full
+ # This job runs multinode and smoke tests.
+ tox_envlist: multinode
devstack_localrc:
FORCE_CONFIG_DRIVE: false
NOVA_ALLOW_MOVE_TO_SAME_HOST: false
diff --git a/zuul.d/integrated-gate.yaml b/zuul.d/integrated-gate.yaml
index 4c08ad9..87b8af0 100644
--- a/zuul.d/integrated-gate.yaml
+++ b/zuul.d/integrated-gate.yaml
@@ -11,16 +11,15 @@
vars:
tox_envlist: all
tempest_test_regex: tempest
- # TODO(gmann): Enable File injection tests once nova bug is fixed
- # https://bugs.launchpad.net/nova/+bug/1882421
- # devstack_localrc:
- # ENABLE_FILE_INJECTION: true
+ devstack_localrc:
+ MYSQL_REDUCE_MEMORY: true
+ # TODO(gmann): Enable File injection tests once nova bug is fixed
+ # https://bugs.launchpad.net/nova/+bug/1882421
+ # ENABLE_FILE_INJECTION: true
- job:
name: tempest-ipv6-only
parent: devstack-tempest-ipv6
- # This currently works from stable/pike on.
- branches: ^(?!stable/ocata).*$
description: |
Integration test of IPv6-only deployments. This job runs
smoke and IPv6 relates tests only. Basic idea is to test
@@ -32,10 +31,6 @@
- job:
name: tempest-full
parent: devstack-tempest
- # This currently works from stable/pike on.
- # Before stable/pike, legacy version of tempest-full
- # 'legacy-tempest-dsvm-neutron-full' run.
- branches: ^(?!stable/ocata).*$
description: |
Base integration test with Neutron networking and py27.
This job is supposed to run until stable/train setup only.
@@ -60,11 +55,26 @@
c-bak: false
- job:
+ name: tempest-extra-tests
+ parent: tempest-full-py3
+ description: |
+ This job runs the extra tests mentioned in
+ tools/tempest-extra-tests-list.txt.
+ vars:
+ tox_envlist: extra-tests
+
+- job:
name: tempest-full-py3
parent: devstack-tempest
# This job version is with swift enabled on py3
# as swift is ready on py3 from stable/ussuri onwards.
- branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train)).*$
+ # As this uses the 'integrated-full' tox env, which is not
+ # available in the old tempest used till stable/wallaby,
+ # this job definition is only for stable/xena onwards;
+ # a separate job definition is used until stable/wallaby.
+ branches:
+ regex: ^stable/(stein|train|ussuri|victoria|wallaby)$
+ negate: true
description: |
Base integration test with Neutron networking, horizon, swift enable,
and py3.
@@ -74,7 +84,12 @@
required-projects:
- openstack/horizon
vars:
- tox_envlist: full
+ # NOTE(gmann): Default concurrency is higher (number of CPUs - 2), which
+ # ends up being 6 in upstream CI. Higher concurrency means more parallel
+ # requests to services and can cause more OOM issues. To avoid that,
+ # the concurrency is set to 4 in this job.
+ tempest_concurrency: 4
+ tox_envlist: integrated-full
devstack_localrc:
USE_PYTHON3: true
FORCE_CONFIG_DRIVE: true
@@ -91,18 +106,20 @@
parent: tempest-full-py3
nodeset: devstack-single-node-centos-9-stream
# centos-9-stream is supported from yoga release onwards
- branches: ^(?!stable/(pike|queens|rocky|stein|train|ussuri|victoria|wallaby|xena)).*$
+ branches:
+ regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena)$
+ negate: true
description: |
Base integration test on CentOS 9 stream
vars:
# Required until bug/1949606 is resolved when using libvirt and QEMU
# >=5.0.0 with a [libvirt]virt_type of qemu (TCG).
configure_swap_size: 4096
+ tox_envlist: full
- job:
name: tempest-integrated-networking
parent: devstack-tempest
- branches: ^(?!stable/ocata).*$
description: |
This job runs integration tests for networking. This is subset of
'tempest-full-py3' job and run only Neutron and Nova related tests.
@@ -122,12 +139,16 @@
- job:
name: tempest-integrated-compute
parent: devstack-tempest
- branches: ^(?!stable/ocata).*$
description: |
This job runs integration tests for compute. This is
subset of 'tempest-full-py3' job and run Nova, Neutron, Cinder (except backup tests)
and Glance related tests. This is meant to be run on Nova gate only.
vars:
+ # NOTE(gmann): Default concurrency is higher (number of CPUs - 2), which
+ # ends up being 6 in upstream CI. Higher concurrency means more parallel
+ # requests to services and can cause more OOM issues. To avoid that,
+ # the concurrency is set to 4 in this job.
+ tempest_concurrency: 4
tox_envlist: integrated-compute
tempest_exclude_regex: ""
devstack_localrc:
@@ -146,7 +167,9 @@
parent: tempest-integrated-compute
nodeset: devstack-single-node-centos-9-stream
# centos-9-stream is supported from yoga release onwards
- branches: ^(?!stable/(pike|queens|rocky|stein|train|ussuri|victoria|wallaby|xena)).*$
+ branches:
+ regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena)$
+ negate: true
description: |
This job runs integration tests for compute. This is
subset of 'tempest-full-py3' job and run Nova, Neutron, Cinder (except backup tests)
@@ -160,12 +183,16 @@
- job:
name: tempest-integrated-placement
parent: devstack-tempest
- branches: ^(?!stable/ocata).*$
description: |
This job runs integration tests for placement. This is
subset of 'tempest-full-py3' job and run Nova and Neutron
related tests. This is meant to be run on Placement gate only.
vars:
+ # NOTE(gmann): Default concurrency is higher (number of CPUs - 2), which
+ # ends up being 6 in upstream CI. Higher concurrency means more parallel
+ # requests to services and can cause more OOM issues. To avoid that,
+ # the concurrency is set to 4 in this job.
+ tempest_concurrency: 4
tox_envlist: integrated-placement
devstack_localrc:
USE_PYTHON3: true
@@ -181,7 +208,6 @@
- job:
name: tempest-integrated-storage
parent: devstack-tempest
- branches: ^(?!stable/ocata).*$
description: |
This job runs integration tests for image & block storage. This is
subset of 'tempest-full-py3' job and run Cinder, Glance, Swift and Nova
@@ -197,7 +223,6 @@
- job:
name: tempest-integrated-object-storage
parent: devstack-tempest
- branches: ^(?!stable/ocata).*$
description: |
This job runs integration tests for object storage. This is
subset of 'tempest-full-py3' job and run Swift, Cinder and Glance
@@ -225,33 +250,34 @@
TEMPEST_PLACEMENT_MIN_MICROVERSION: 'latest'
- job:
- name: tempest-multinode-full
- parent: tempest-multinode-full-base
- nodeset: openstack-two-node-focal
- # This job runs on Focal from stable/victoria on.
- branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train|ussuri)).*$
- vars:
- devstack_localrc:
- USE_PYTHON3: False
- group-vars:
- subnode:
- devstack_localrc:
- USE_PYTHON3: False
-
-- job:
name: tempest-multinode-full-py3
- parent: tempest-multinode-full
+ parent: tempest-multinode-full-base
+ nodeset: openstack-two-node-jammy
+ # This job runs on Ubuntu Jammy and only on branches after stable/zed.
+ branches:
+ regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena|yoga|zed)$
+ negate: true
vars:
+ # NOTE(gmann): The default concurrency is higher (number of CPUs - 2),
+ # which ends up being 6 in upstream CI. Higher concurrency means more
+ # parallel requests to the services and can cause OOM issues. To avoid
+ # that, set the concurrency to 4 in this job.
+ tempest_concurrency: 4
devstack_localrc:
USE_PYTHON3: true
devstack_plugins:
neutron: https://opendev.org/openstack/neutron
devstack_services:
neutron-trunk: true
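+ # br-ex-tcpdump and br-int-flows capture a tcpdump on br-ex and the
+ # OVS flow tables of br-int, which helps debug networking failures
+ # in multinode runs.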
+ br-ex-tcpdump: true
+ br-int-flows: true
group-vars:
subnode:
devstack_localrc:
USE_PYTHON3: true
+ devstack_services:
+ br-ex-tcpdump: true
+ br-int-flows: true
- job:
name: tempest-slow
@@ -265,9 +291,7 @@
* legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
* tempest-scenario-multinode-lvm-multibackend
timeout: 10800
- # This job runs on stable/stein onwards.
- branches: ^(?!stable/(ocata|pike|queens|rocky)).*$
- vars: &tempest_slow_vars
+ vars:
tox_envlist: slow-serial
devstack_localrc:
CINDER_ENABLED_BACKENDS: lvm:lvmdriver-1,lvm:lvmdriver-2
@@ -277,7 +301,6 @@
devstack_services:
neutron-placement: true
neutron-qos: true
- tempest_concurrency: 2
group-vars:
# NOTE(mriedem): The ENABLE_VOLUME_MULTIATTACH variable is used on both
# the controller and subnode prior to Rocky so we have to make sure the
@@ -292,8 +315,29 @@
# This job version is with swift enabled on py3
# as swift is ready on py3 from stable/ussuri onwards.
timeout: 10800
- branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train)).*$
- vars: *tempest_slow_vars
+ # As the 'slow' tox env is not available in the old Tempest used
+ # until stable/wallaby, this job definition is only for stable/xena
+ # onwards; a separate job definition covers branches until stable/wallaby.
+ branches:
+ regex: ^stable/(stein|train|ussuri|victoria|wallaby)$
+ negate: true
+ vars:
+ tox_envlist: slow
+ devstack_localrc:
+ CINDER_ENABLED_BACKENDS: lvm:lvmdriver-1,lvm:lvmdriver-2
+ ENABLE_VOLUME_MULTIATTACH: true
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ neutron-placement: true
+ neutron-qos: true
+ group-vars:
+ # NOTE(mriedem): The ENABLE_VOLUME_MULTIATTACH variable is used on both
+ # the controller and subnode prior to Rocky so we have to make sure the
+ # variable is set in both locations.
+ subnode:
+ devstack_localrc:
+ ENABLE_VOLUME_MULTIATTACH: true
- job:
name: tempest-cinder-v2-api
@@ -315,23 +359,24 @@
description: |
Integration testing for a FIPS enabled Centos 8 system
nodeset: devstack-single-node-centos-8-stream
- pre-run: playbooks/enable-fips.yaml
vars:
tox_envlist: full
configure_swap_size: 4096
nslookup_target: 'opendev.org'
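+ # The devstack job's enable_fips variable turns on FIPS mode on the
+ # node, replacing the playbooks/enable-fips.yaml pre-run used before.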
+ enable_fips: True
- job:
name: tempest-centos9-stream-fips
parent: devstack-tempest
description: |
Integration testing for a FIPS enabled Centos 9 system
+ timeout: 10800
nodeset: devstack-single-node-centos-9-stream
- pre-run: playbooks/enable-fips.yaml
vars:
tox_envlist: full
configure_swap_size: 4096
nslookup_target: 'opendev.org'
+ enable_fips: True
- job:
name: tempest-pg-full
@@ -346,6 +391,43 @@
# ENABLE_FILE_INJECTION: true
DATABASE_TYPE: postgresql
+- job:
+ name: tempest-full-enforce-scope-new-defaults
+ parent: tempest-full-py3
+ description: |
+ This job runs the Tempest tests with scope and new defaults enabled.
+ # TODO: remove this once https://review.opendev.org/c/openstack/neutron-lib/+/864213
+ # fix is released in neutron-lib
+ required-projects:
+ - openstack/neutron-lib
+ - openstack/neutron
+ vars:
+ devstack_localrc:
+ # Enabling scope and new defaults for services.
+ # NOTE(gmann): We need to keep the keystone scope check disabled as
+ # services (except ironic) do not support the system scope and
+ # they need keystone to continue working with project scope. Until
+ # Keystone policies are changed to work for both system and
+ # project scope, we need to keep the scope check disabled for
+ # keystone.
+ # Nova and Glance have enabled the new defaults and scope by default
+ # in devstack.
+ CINDER_ENFORCE_SCOPE: true
+ NEUTRON_ENFORCE_SCOPE: true
+ PLACEMENT_ENFORCE_SCOPE: true
+
+- job:
+ name: tempest-all-rbac-old-defaults
+ parent: tempest-all
+ description: |
+ Integration test that runs all tests on RBAC old defaults.
+ devstack_localrc:
+ # NOTE(gmann): Nova and Glance have enabled the new defaults and scope
+ # by default in devstack, so we need some jobs to keep testing the old
+ # defaults until they are removed on the service side.
+ NOVA_ENFORCE_SCOPE: false
+ GLANCE_ENFORCE_SCOPE: false
+
- project-template:
name: integrated-gate-networking
description: |
@@ -357,19 +439,29 @@
- grenade
- grenade-skip-level:
voting: false
+ branches:
+ - stable/2023.1
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
gate:
jobs:
- grenade
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
- project-template:
name: integrated-gate-compute
@@ -387,24 +479,46 @@
jobs:
- grenade-skip-level:
voting: false
+ branches:
+ - stable/2023.1
+ # NOTE(gmann): Nova decided to always run grenade skip-level testing
+ # (on SLURP as well as non-SLURP releases), so we are adding the
+ # grenade-skip-level-always job in the integrated gate and we do not need
+ # to update the skip-level job here until Nova changes that decision.
+ # This is added from the 2023.2 release cycle onwards, so we need to use a
+ # branch variant to make sure we do not run this job on gates older than 2023.2.
+ - grenade-skip-level-always:
+ branches:
+ - master
- tempest-integrated-compute
# centos-8-stream is tested from wallaby -> yoga branches
- tempest-integrated-compute-centos-8-stream:
branches: ^stable/(wallaby|xena|yoga).*$
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
gate:
jobs:
+ - grenade-skip-level-always:
+ branches:
+ - master
- tempest-integrated-compute
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
periodic-weekly:
jobs:
# centos-9-stream is tested from zed release onwards
- tempest-integrated-compute-centos-9-stream:
- branches: ^(?!stable/(pike|queens|rocky|stein|train|ussuri|victoria|wallaby|xena|yoga)).*$
+ branches:
+ regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena|yoga)$
+ negate: true
- project-template:
name: integrated-gate-placement
@@ -418,19 +532,29 @@
- grenade
- grenade-skip-level:
voting: false
+ branches:
+ - stable/2023.1
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
gate:
jobs:
- grenade
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
- project-template:
name: integrated-gate-storage
@@ -444,19 +568,29 @@
- grenade
- grenade-skip-level:
voting: false
+ branches:
+ - stable/2023.1
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
gate:
jobs:
- grenade
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
- project-template:
name: integrated-gate-object-storage
@@ -471,13 +605,21 @@
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
gate:
jobs:
- grenade
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and the job is broken on branches up to wallaby due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches:
+ regex: ^stable/(ussuri|victoria|wallaby)$
+ negate: true
diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml
index 2824677..469b659 100644
--- a/zuul.d/project.yaml
+++ b/zuul.d/project.yaml
@@ -11,7 +11,8 @@
- openstack-tox-py38
- openstack-tox-py39
- openstack-tox-py310
- - tempest-full-parallel:
+ - openstack-tox-py311
+ - tempest-full-py3:
# Define list of irrelevant files to use everywhere else
irrelevant-files: &tempest-irrelevant-files
- ^.*\.rst$
@@ -26,24 +27,20 @@
- ^.gitignore$
- ^.gitreview$
- ^.mailmap$
- - tempest-full-py3:
- irrelevant-files: *tempest-irrelevant-files
- - tempest-full-py3-ipv6:
- voting: false
+ - tempest-extra-tests:
irrelevant-files: *tempest-irrelevant-files
- glance-multistore-cinder-import:
voting: false
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-zed:
+ # NOTE(gmann): We will be testing the latest and oldest
+ # supported stable branches in the Tempest master gate, assuming
+ # that if things work on the latest and oldest they will work on the
+ # in-between stable branches too. If anything breaks, we will catch
+ # it in the respective stable branch gate.
+ - tempest-full-2023-1:
irrelevant-files: *tempest-irrelevant-files
- tempest-full-yoga:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-xena:
- irrelevant-files: *tempest-irrelevant-files
- - tempest-full-wallaby-py3:
- irrelevant-files: *tempest-irrelevant-files
- - tempest-slow-wallaby:
- irrelevant-files: *tempest-irrelevant-files
- tempest-multinode-full-py3:
irrelevant-files: *tempest-irrelevant-files
- tempest-tox-plugin-sanity-check:
@@ -68,6 +65,7 @@
- ^tools/tempest-integrated-gate-placement-exclude-list.txt
- ^tools/tempest-integrated-gate-storage-blacklist.txt
- ^tools/tempest-integrated-gate-storage-exclude-list.txt
+ - ^tools/tempest-extra-tests-list.txt
- ^tools/verify-ipv6-only-deployments.sh
- ^tools/with_venv.sh
# tools/ is not here since this relies on a script in tools/.
@@ -91,6 +89,7 @@
- ^tools/tempest-integrated-gate-placement-exclude-list.txt
- ^tools/tempest-integrated-gate-storage-blacklist.txt
- ^tools/tempest-integrated-gate-storage-exclude-list.txt
+ - ^tools/tempest-extra-tests-list.txt
- ^tools/tempest-plugin-sanity.sh
- ^tools/with_venv.sh
- ^.coveragerc$
@@ -101,6 +100,8 @@
irrelevant-files: *tempest-irrelevant-files
- nova-live-migration:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-enforce-scope-new-defaults:
+ irrelevant-files: *tempest-irrelevant-files
- devstack-plugin-ceph-tempest-py3:
# TODO(kopecmartin): make it voting once the below bug is fixed
# https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1975648
@@ -118,49 +119,55 @@
- tempest-full-test-account-py3:
voting: false
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-test-account-no-admin-py3:
- voting: false
+ - ironic-tempest-bios-ipmi-direct-tinyipa:
irrelevant-files: *tempest-irrelevant-files
- openstack-tox-bashate:
irrelevant-files: *tempest-irrelevant-files-2
- - tempest-full-centos-9-stream:
- # TODO(gmann): make it voting once below fix is merged
- # https://review.opendev.org/c/openstack/tempest/+/842140
- voting: false
- irrelevant-files: *tempest-irrelevant-files
gate:
jobs:
- openstack-tox-pep8
- openstack-tox-py38
- openstack-tox-py39
- openstack-tox-py310
+ - openstack-tox-py311
- tempest-slow-py3:
irrelevant-files: *tempest-irrelevant-files
- neutron-ovs-grenade-multinode:
irrelevant-files: *tempest-irrelevant-files
- tempest-full-py3:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-extra-tests:
+ irrelevant-files: *tempest-irrelevant-files
- grenade:
irrelevant-files: *tempest-irrelevant-files
- tempest-ipv6-only:
irrelevant-files: *tempest-irrelevant-files-3
- tempest-multinode-full-py3:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-enforce-scope-new-defaults:
+ irrelevant-files: *tempest-irrelevant-files
#- devstack-plugin-ceph-tempest-py3:
# irrelevant-files: *tempest-irrelevant-files
- #- tempest-full-centos-9-stream:
- # irrelevant-files: *tempest-irrelevant-files
- nova-live-migration:
irrelevant-files: *tempest-irrelevant-files
+ - ironic-tempest-bios-ipmi-direct-tinyipa:
+ irrelevant-files: *tempest-irrelevant-files
experimental:
jobs:
- nova-multi-cell
+ - nova-ceph-multistore:
+ irrelevant-files: *tempest-irrelevant-files
- tempest-with-latest-microversion
- tempest-stestr-master
- tempest-cinder-v2-api:
irrelevant-files: *tempest-irrelevant-files
- tempest-all:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-all-rbac-old-defaults
+ - tempest-full-parallel
+ - tempest-full-zed-extra-tests
+ - tempest-full-yoga-extra-tests
+ - tempest-full-enforce-scope-new-defaults-zed
- neutron-ovs-tempest-dvr-ha-multinode-full:
irrelevant-files: *tempest-irrelevant-files
- nova-tempest-v2-api:
@@ -169,21 +176,34 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-pg-full:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-py3-ipv6:
+ irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-centos-9-stream:
+ irrelevant-files: *tempest-irrelevant-files
- tempest-centos9-stream-fips:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-test-account-no-admin-py3:
+ irrelevant-files: *tempest-irrelevant-files
periodic-stable:
jobs:
+ - tempest-full-2023-1
- tempest-full-zed
- tempest-full-yoga
- - tempest-full-xena
- - tempest-full-wallaby-py3
+ - tempest-slow-2023-1
- tempest-slow-zed
- tempest-slow-yoga
- - tempest-slow-xena
- - tempest-slow-wallaby
+ - tempest-full-2023-1-extra-tests
+ - tempest-full-zed-extra-tests
+ - tempest-full-yoga-extra-tests
periodic:
jobs:
- tempest-all
+ - tempest-all-rbac-old-defaults
+ - tempest-full-parallel
- tempest-full-oslo-master
- tempest-stestr-master
+ - tempest-full-py3-ipv6
- tempest-centos9-stream-fips
+ - tempest-full-centos-9-stream
+ - tempest-full-test-account-no-admin-py3
+ - tempest-full-enforce-scope-new-defaults-zed
diff --git a/zuul.d/stable-jobs.yaml b/zuul.d/stable-jobs.yaml
index 6d97fad..cc13426 100644
--- a/zuul.d/stable-jobs.yaml
+++ b/zuul.d/stable-jobs.yaml
@@ -1,43 +1,93 @@
# NOTE(gmann): This file includes all stable release jobs definition.
- job:
+ name: tempest-full-2023-1
+ parent: tempest-full-py3
+ nodeset: openstack-single-node-jammy
+ override-checkout: stable/2023.1
+
+- job:
name: tempest-full-zed
parent: tempest-full-py3
+ nodeset: openstack-single-node-focal
override-checkout: stable/zed
- job:
name: tempest-full-yoga
parent: tempest-full-py3
+ nodeset: openstack-single-node-focal
override-checkout: stable/yoga
- job:
- name: tempest-full-xena
- parent: tempest-full-py3
- override-checkout: stable/xena
+ name: tempest-full-2023-1-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-jammy
+ override-checkout: stable/2023.1
- job:
- name: tempest-full-wallaby-py3
- parent: tempest-full-py3
- override-checkout: stable/wallaby
+ name: tempest-full-zed-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-focal
+ override-checkout: stable/zed
+
+- job:
+ name: tempest-full-yoga-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-focal
+ override-checkout: stable/yoga
+
+- job:
+ name: tempest-slow-2023-1
+ parent: tempest-slow-py3
+ nodeset: openstack-two-node-jammy
+ override-checkout: stable/2023.1
+
+- job:
+ name: tempest-full-enforce-scope-new-defaults-zed
+ parent: tempest-full-enforce-scope-new-defaults
+ nodeset: openstack-single-node-focal
+ override-checkout: stable/zed
- job:
name: tempest-slow-zed
parent: tempest-slow-py3
+ nodeset: openstack-two-node-focal
override-checkout: stable/zed
- job:
name: tempest-slow-yoga
parent: tempest-slow-py3
+ nodeset: openstack-two-node-focal
override-checkout: stable/yoga
- job:
- name: tempest-slow-xena
- parent: tempest-slow-py3
- override-checkout: stable/xena
-
-- job:
- name: tempest-slow-wallaby
- parent: tempest-slow-py3
- override-checkout: stable/wallaby
+ name: tempest-full-py3
+ parent: devstack-tempest
+ # This job version uses the 'full' tox env, which is the
+ # env available on stable/ussuri through stable/wallaby.
+ branches:
+ - stable/ussuri
+ - stable/victoria
+ - stable/wallaby
+ description: |
+ Base integration test with Neutron networking, horizon, swift enabled,
+ and py3.
+ Former names for this job were:
+ * legacy-tempest-dsvm-py35
+ * gate-tempest-dsvm-py35
+ required-projects:
+ - openstack/horizon
+ vars:
+ tox_envlist: full
+ devstack_localrc:
+ USE_PYTHON3: true
+ FORCE_CONFIG_DRIVE: true
+ ENABLE_VOLUME_MULTIATTACH: true
+ GLANCE_USE_IMPORT_WORKFLOW: True
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ # Enable horizon so that we can run horizon tests.
+ horizon: true
- job:
name: tempest-full-py3
@@ -45,9 +95,6 @@
# This job version is with swift disabled on py3
# as swift was not ready on py3 until stable/train.
branches:
- - stable/pike
- - stable/queens
- - stable/rocky
- stable/stein
- stable/train
description: |
@@ -95,6 +142,69 @@
neutron-qos: true
- job:
+ name: tempest-multinode-full-py3
+ parent: tempest-multinode-full
+ nodeset: openstack-two-node-bionic
+ # This job runs on Bionic.
+ branches:
+ - stable/stein
+ - stable/train
+ - stable/ussuri
+ vars:
+ devstack_localrc:
+ USE_PYTHON3: true
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ neutron-trunk: true
+ group-vars:
+ subnode:
+ devstack_localrc:
+ USE_PYTHON3: true
+
+- job:
+ name: tempest-multinode-full-py3
+ parent: tempest-multinode-full
+ nodeset: openstack-two-node-focal
+ # This job runs on Focal and is supposed to run until stable/zed.
+ branches:
+ - stable/victoria
+ - stable/wallaby
+ - stable/xena
+ - stable/yoga
+ - stable/zed
+ vars:
+ devstack_localrc:
+ USE_PYTHON3: true
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ neutron-trunk: true
+ group-vars:
+ subnode:
+ devstack_localrc:
+ USE_PYTHON3: true
+
+- job:
+ name: tempest-multinode-full
+ parent: tempest-multinode-full-base
+ nodeset: openstack-two-node-focal
+ # This job runs on Focal with Python 2. This is for stable/victoria to stable/zed.
+ branches:
+ - stable/victoria
+ - stable/wallaby
+ - stable/xena
+ - stable/yoga
+ - stable/zed
+ vars:
+ devstack_localrc:
+ USE_PYTHON3: False
+ group-vars:
+ subnode:
+ devstack_localrc:
+ USE_PYTHON3: False
+
+- job:
name: tempest-multinode-full
parent: tempest-multinode-full-base
nodeset: openstack-two-node-bionic
@@ -114,73 +224,11 @@
USE_PYTHON3: False
- job:
- name: tempest-multinode-full
- parent: tempest-multinode-full-base
- nodeset: openstack-two-node-xenial
- # This job runs on Xenial and this is for stable/pike, stable/queens
- # and stable/rocky. This job is prepared to make sure all stable branches
- # before stable/stein will keep running on xenial. This job can be
- # removed once stable/rocky is EOL.
- branches:
- - stable/pike
- - stable/queens
- - stable/rocky
- vars:
- devstack_localrc:
- USE_PYTHON3: False
- group-vars:
- subnode:
- devstack_localrc:
- USE_PYTHON3: False
-
-- job:
- name: tempest-slow
- parent: tempest-multinode-full
- description: |
- This multinode integration job will run all the tests tagged as slow.
- It enables the lvm multibackend setup to cover few scenario tests.
- This job will run only slow tests (API or Scenario) serially.
- Former names for this job were:
- * legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
- * tempest-scenario-multinode-lvm-multibackend
- timeout: 10800
- branches:
- - stable/pike
- - stable/queens
- - stable/rocky
- vars:
- tox_envlist: slow-serial
- devstack_localrc:
- CINDER_ENABLED_BACKENDS: lvm:lvmdriver-1,lvm:lvmdriver-2
- ENABLE_VOLUME_MULTIATTACH: true
- # to avoid https://bugs.launchpad.net/neutron/+bug/1914037
- # as we couldn't backport the fix to rocky and older releases
- IPV6_PUBLIC_RANGE: 2001:db8:0:10::/64
- IPV6_PUBLIC_NETWORK_GATEWAY: 2001:db8:0:10::2
- IPV6_ROUTER_GW_IP: 2001:db8:0:10::1
- devstack_plugins:
- neutron: https://opendev.org/openstack/neutron
- devstack_services:
- neutron-placement: true
- neutron-qos: true
- tempest_concurrency: 2
- group-vars:
- # NOTE(mriedem): The ENABLE_VOLUME_MULTIATTACH variable is used on both
- # the controller and subnode prior to Rocky so we have to make sure the
- # variable is set in both locations.
- subnode:
- devstack_localrc:
- ENABLE_VOLUME_MULTIATTACH: true
-
-- job:
name: tempest-slow-py3
parent: tempest-slow
# This job version is with swift disabled on py3
# as swift was not ready on py3 until stable/train.
branches:
- - stable/pike
- - stable/queens
- - stable/rocky
- stable/stein
- stable/train
vars:
@@ -199,6 +247,18 @@
USE_PYTHON3: true
- job:
+ name: tempest-slow-py3
+ parent: tempest-slow
+ # This job version uses the 'slow-serial' tox env for
+ # stable/ussuri to stable/wallaby testing.
+ branches:
+ - stable/ussuri
+ - stable/victoria
+ - stable/wallaby
+ vars:
+ tox_envlist: slow-serial
+
+- job:
name: tempest-full-py3-opensuse15
parent: tempest-full-py3
nodeset: devstack-single-node-opensuse-15
@@ -209,9 +269,6 @@
# This job is not used after stable/xena and can be
# removed once stable/xena is EOL.
branches:
- - stable/pike
- - stable/queens
- - stable/rocky
- stable/stein
- stable/train
- stable/ussuri
diff --git a/zuul.d/tempest-specific.yaml b/zuul.d/tempest-specific.yaml
index 822feaa..10490b4 100644
--- a/zuul.d/tempest-specific.yaml
+++ b/zuul.d/tempest-specific.yaml
@@ -30,11 +30,12 @@
- opendev.org/openstack/oslo.utils
- opendev.org/openstack/oslo.versionedobjects
- opendev.org/openstack/oslo.vmware
+ vars:
+ tox_envlist: full
- job:
name: tempest-full-parallel
parent: tempest-full-py3
- voting: false
branches:
- master
description: |
@@ -48,11 +49,11 @@
run_tempest_dry_cleanup: true
devstack_localrc:
DEVSTACK_PARALLEL: True
+ MYSQL_REDUCE_MEMORY: true
- job:
name: tempest-full-py3-ipv6
parent: devstack-tempest-ipv6
- branches: ^(?!stable/ocata).*$
description: |
Base integration test with Neutron networking, IPv6 and py3.
vars:
@@ -73,7 +74,7 @@
parent: tox
description: |
Run tempest plugin sanity check script using tox.
- nodeset: ubuntu-focal
+ nodeset: ubuntu-jammy
vars:
tox_envlist: plugin-sanity-check
timeout: 5000
@@ -91,7 +92,12 @@
vars:
devstack_localrc:
TEMPEST_USE_TEST_ACCOUNTS: True
-
+ # FIXME(gmann): Nova and Glance have enabled the new defaults and scope
+ # by default in devstack, and the pre-provisioned account code and testing
+ # need to be moved to the new RBAC design testing. Until we do that, let's
+ # run these jobs with old defaults.
+ NOVA_ENFORCE_SCOPE: false
+ GLANCE_ENFORCE_SCOPE: false
- job:
name: tempest-full-test-account-no-admin-py3
parent: tempest-full-test-account-py3