Merge "Enable placement scope and new defaults in rbac test job"
diff --git a/HACKING.rst b/HACKING.rst
index dc28e4e..17e2a49 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -322,7 +322,14 @@
- If the execution of a set of tests is required to be serialized then locking
can be used to perform this. See usage of ``LockFixture`` for examples of
- using locking.
+ using locking. However, ``LockFixture`` only helps when you need to separate
+ the execution of two small sets of test cases from each other. If instead a
+ set of tests needs to run separately from potentially all other tests, then
+ ``LockFixture`` does not scale, as the lock would have to be taken in every
+ other test as well. In that case, you can use the ``@serial`` decorator on
+ the test class holding the tests that need to run separately from the
+ potentially parallel test set. See more in :ref:`tempest_test_writing`.
+
Sample Configuration File
-------------------------
diff --git a/doc/source/plugins/plugin.rst b/doc/source/plugins/plugin.rst
index b1fd6f8..0771318 100644
--- a/doc/source/plugins/plugin.rst
+++ b/doc/source/plugins/plugin.rst
@@ -345,6 +345,8 @@
plugin package on your system and then running Tempest inside a venv will not
work.
-Tempest also exposes a tox job, all-plugin, which will setup a tox virtualenv
-with system site-packages enabled. This will let you leverage tox without
-requiring to manually install plugins in the tox venv before running tests.
+For example, you can use tox to install a tempest plugin and run its tests
+like this::
+
+ [~/tempest] $ tox -e venv-tempest -- pip install (path to the plugin directory)
+ [~/tempest] $ tox -e all
diff --git a/doc/source/write_tests.rst b/doc/source/write_tests.rst
index 34df089..3626a3f 100644
--- a/doc/source/write_tests.rst
+++ b/doc/source/write_tests.rst
@@ -256,6 +256,33 @@
worth checking the immediate parent for what is set to determine if your
class needs to override that setting.
+Running some tests in serial
+----------------------------
+Tempest may run test cases in parallel, depending on the configuration.
+However, sometimes you need to make sure that tests do not interfere with
+each other via OpenStack resources. Tempest creates a separate project for
+each test class to isolate project-based resources between test cases.
+
+If your tests use resources outside of projects, e.g. host aggregates, then
+you might need to explicitly separate interfering test cases. If you only need
+to separate a small set of test cases from each other, then you can use
+``LockFixture``.
+
+However, in some cases a small set of tests needs to run independently from
+the rest of the test cases. For example, some of the host aggregate and
+availability zone tests need compute nodes without any running nova server
+in order to move compute hosts between availability zones, while many tempest
+tests start one or more nova servers. In this scenario you can mark the small
+set of tests that needs to be independent from the rest with the ``@serial``
+class decorator. This ensures that even if tempest is configured to run
+tests in parallel, the tests in the marked test class are always executed
+separately from the rest of the test cases.
+
+Please note that, for test ordering optimization reasons, test cases marked
+for ``@serial`` execution need to be placed under the ``tempest/serial_tests``
+directory. This ensures that the serial tests block the parallel tests for
+the least amount of time.
+
Interacting with Credentials and Clients
========================================
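The ``@serial`` guidance added above can be sketched as follows. The real decorator lives in tempest (``tempest.lib.decorators`` is assumed from the text); since tempest itself is not importable here, a hypothetical stand-in that only marks the class is used so the sketch stays self-contained:

```python
# Sketch of marking a test class for serial execution, as described above.
# `serial` below is a stand-in stub for the real tempest decorator; it only
# records the marking so this example runs on its own.
import unittest


def serial(cls):
    """Stand-in for the tempest serial class decorator: mark the class."""
    cls._serial = True
    return cls


@serial
class AggregatesAZSketchTest(unittest.TestCase):
    """Would live under tempest/serial_tests/ so the scheduler can run it
    apart from the parallel workers."""

    def test_move_host_between_azs(self):
        # A real test would move a compute host between availability
        # zones, which requires no other nova servers to be running.
        self.assertTrue(getattr(type(self), '_serial', False))
```

The stand-in only illustrates the shape; in tempest the marking is honored by the test scheduler, not by the class itself.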
diff --git a/releasenotes/notes/add-server-external-events-client-c86b269b0091077b.yaml b/releasenotes/notes/add-server-external-events-client-c86b269b0091077b.yaml
new file mode 100644
index 0000000..2af8e95
--- /dev/null
+++ b/releasenotes/notes/add-server-external-events-client-c86b269b0091077b.yaml
@@ -0,0 +1,5 @@
+---
+features:
+ - |
+ The ``server_external_events`` tempest client for the compute
+ Server External Events API is implemented in this release.
diff --git a/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml b/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml
new file mode 100644
index 0000000..33f11ce
--- /dev/null
+++ b/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml
@@ -0,0 +1,10 @@
+---
+features:
+ - |
+ Adds a ``ssh_allow_agent`` parameter to the ``RemoteClient`` class
+ wrapper and the direct ssh ``Client`` class to allow a caller to
+ explicitly request that the SSH Agent is not consulted for
+ authentication. This is useful if you are attempting explicit
+ password-based authentication, as ``paramiko``, the underlying library
+ used for SSH, defaults to consulting an ssh-agent process before
+ attempting password authentication.
diff --git a/releasenotes/notes/end-of-support-of-wallaby-455e4871ae4cb32e.yaml b/releasenotes/notes/end-of-support-of-wallaby-455e4871ae4cb32e.yaml
new file mode 100644
index 0000000..d5c2974
--- /dev/null
+++ b/releasenotes/notes/end-of-support-of-wallaby-455e4871ae4cb32e.yaml
@@ -0,0 +1,12 @@
+---
+prelude: |
+ This is an intermediate release during the 2023.1 development cycle to
+ mark the end of support for the EM Wallaby release in Tempest.
+ After this release, Tempest will support the following OpenStack releases:
+
+ * Zed
+ * Yoga
+ * Xena
+
+ Current development of Tempest is for OpenStack 2023.1 development
+ cycle.
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index b36be01..ccd5fe1 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,7 @@
:maxdepth: 1
unreleased
+ v33.0.0
v32.0.0
v31.1.0
v31.0.0
diff --git a/releasenotes/source/v33.0.0.rst b/releasenotes/source/v33.0.0.rst
new file mode 100644
index 0000000..fe7bd7d
--- /dev/null
+++ b/releasenotes/source/v33.0.0.rst
@@ -0,0 +1,5 @@
+=====================
+v33.0.0 Release Notes
+=====================
+.. release-notes:: 33.0.0 Release Notes
+ :version: 33.0.0
diff --git a/requirements.txt b/requirements.txt
index a118856..6e66046 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -22,3 +22,4 @@
urllib3>=1.21.1 # MIT
debtcollector>=1.2.0 # Apache-2.0
defusedxml>=0.7.1 # PSFL
+fasteners>=0.16.0 # Apache-2.0
diff --git a/roles/run-tempest-26/README.rst b/roles/run-tempest-26/README.rst
index 3643edb..8ff1656 100644
--- a/roles/run-tempest-26/README.rst
+++ b/roles/run-tempest-26/README.rst
@@ -21,7 +21,7 @@
A regular expression used to select the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
In the following example only api scenario and third party tests
will be executed.
@@ -47,7 +47,7 @@
A regular expression used to skip the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
::
vars:
diff --git a/roles/run-tempest-26/tasks/main.yaml b/roles/run-tempest-26/tasks/main.yaml
index f846006..7423bfb 100644
--- a/roles/run-tempest-26/tasks/main.yaml
+++ b/roles/run-tempest-26/tasks/main.yaml
@@ -62,7 +62,9 @@
when: blacklist_stat.stat.exists
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} {{blacklist_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
+ {{tempest_test_regex|quote if (tempest_test_regex|length>0)|default(None, True)}} \
+ {{blacklist_option|default(None)}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
--black-regex={{tempest_black_regex|quote}}
args:
diff --git a/roles/run-tempest/README.rst b/roles/run-tempest/README.rst
index d9f855a..04db849 100644
--- a/roles/run-tempest/README.rst
+++ b/roles/run-tempest/README.rst
@@ -21,7 +21,7 @@
A regular expression used to select the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
In the following example only api scenario and third party tests
will be executed.
@@ -56,7 +56,7 @@
A regular expression used to skip the tests.
It works only when used with some specific tox environments
- ('all', 'all-plugin'.)
+ ('all', 'all-site-packages').
::
vars:
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
index f302fa5..3fb494f 100644
--- a/roles/run-tempest/tasks/main.yaml
+++ b/roles/run-tempest/tasks/main.yaml
@@ -25,11 +25,11 @@
target_branch: "{{ zuul.override_checkout }}"
when: zuul.override_checkout is defined
-- name: Use stable branch upper-constraints till stable/victoria
+- name: Use stable branch upper-constraints till stable/wallaby
set_fact:
# TOX_CONSTRAINTS_FILE is new name, UPPER_CONSTRAINTS_FILE is old one, best to set both
tempest_tox_environment: "{{ tempest_tox_environment | combine({'UPPER_CONSTRAINTS_FILE': stable_constraints_file}) | combine({'TOX_CONSTRAINTS_FILE': stable_constraints_file}) }}"
- when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/victoria"]
+ when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/victoria", "stable/wallaby"]
- name: Use Configured upper-constraints for non-master Tempest
set_fact:
@@ -120,10 +120,11 @@
- target_branch in ["stable/train", "stable/ussuri", "stable/victoria"]
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} \
- {{blacklist_option|default('')}} {{exclude_list_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
+ {{tempest_test_regex|quote if (tempest_test_regex|length>0)|default(None, True)}} \
+ {{blacklist_option|default(None)}} {{exclude_list_option|default(None)}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
- {{tempest_test_exclude_regex|default('')}}
+ {{tempest_test_exclude_regex|default(None)}}
args:
chdir: "{{devstack_base_dir}}/tempest"
register: tempest_run_result
diff --git a/setup.cfg b/setup.cfg
index 8fb10cb..beaf9b4 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -16,6 +16,7 @@
Programming Language :: Python
Programming Language :: Python :: 3
Programming Language :: Python :: 3.8
+ Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: Implementation :: CPython
diff --git a/tempest/api/compute/admin/test_server_external_events.py b/tempest/api/compute/admin/test_server_external_events.py
new file mode 100644
index 0000000..1c5c295
--- /dev/null
+++ b/tempest/api/compute/admin/test_server_external_events.py
@@ -0,0 +1,37 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.compute import base
+from tempest.lib import decorators
+
+
+class ServerExternalEventsTest(base.BaseV2ComputeAdminTest):
+ """Test server external events"""
+
+ @decorators.idempotent_id('6bbf4723-61d2-4372-af55-7ba27f1c9ba6')
+ def test_create_server_external_events(self):
+ """Test create a server and add some external events"""
+ server_id = self.create_test_server(wait_until='ACTIVE')['id']
+ events = [
+ {
+ "name": "network-changed",
+ "server_uuid": server_id,
+ }
+ ]
+ client = self.os_admin.server_external_events_client
+ events_resp = client.create_server_external_events(
+ events=events)['events'][0]
+ self.assertEqual(server_id, events_resp['server_uuid'])
+ self.assertEqual('network-changed', events_resp['name'])
+ self.assertEqual(200, events_resp['code'])
diff --git a/tempest/api/compute/admin/test_volume.py b/tempest/api/compute/admin/test_volume.py
index 99d8e2a..e7c931e 100644
--- a/tempest/api/compute/admin/test_volume.py
+++ b/tempest/api/compute/admin/test_volume.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import io
-
from tempest.api.compute import base
from tempest.common import waiters
from tempest import config
@@ -49,9 +47,11 @@
:param return image_id: The UUID of the newly created image.
"""
image = self.admin_image_client.show_image(CONF.compute.image_ref)
- image_data = self.admin_image_client.show_image_file(
- CONF.compute.image_ref).data
- image_file = io.BytesIO(image_data)
+ # NOTE(danms): We need to stream this, so chunked=True means we get
+ # back a urllib3.HTTPResponse and have to carefully pass it to
+ # store_image_file() to upload it in pieces.
+ image_data_resp = self.admin_image_client.show_image_file(
+ CONF.compute.image_ref, chunked=True)
create_dict = {
'container_format': image['container_format'],
'disk_format': image['disk_format'],
@@ -60,12 +60,16 @@
'visibility': 'public',
}
create_dict.update(kwargs)
- new_image = self.admin_image_client.create_image(**create_dict)
- self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
- new_image['id'])
- self.addCleanup(self.admin_image_client.delete_image, new_image['id'])
- self.admin_image_client.store_image_file(new_image['id'], image_file)
-
+ try:
+ new_image = self.admin_image_client.create_image(**create_dict)
+ self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
+ new_image['id'])
+ self.addCleanup(
+ self.admin_image_client.delete_image, new_image['id'])
+ self.admin_image_client.store_image_file(new_image['id'],
+ image_data_resp)
+ finally:
+ image_data_resp.release_conn()
return new_image['id']
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 75df5ae..ea1cddc 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -698,6 +698,8 @@
binary='nova-compute')['services']
hosts = []
for svc in svcs:
+ if svc['host'].endswith('-ironic'):
+ continue
if svc['state'] == 'up' and svc['status'] == 'enabled':
if CONF.compute.compute_volume_common_az:
if svc['zone'] == CONF.compute.compute_volume_common_az:
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index d590668..1d05f13 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -16,6 +16,7 @@
import io
import random
+import time
from oslo_log import log as logging
from tempest.api.image import base
@@ -27,6 +28,7 @@
CONF = config.CONF
LOG = logging.getLogger(__name__)
+BAD_REQUEST_RETRIES = 3
class ImportImagesTest(base.BaseV2ImageTest):
@@ -817,8 +819,21 @@
# Add a new location
new_loc = {'metadata': {'foo': 'bar'},
'url': CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
+
+ # NOTE(danms): If glance was unable to fetch the remote image via
+ # HTTP, it will return BadRequest. Because this can be transient in
+ # CI, we try this a few times before we agree that it has failed
+ # for a reason worthy of failing the test.
+ for i in range(BAD_REQUEST_RETRIES):
+ try:
+ self.client.update_image(image['id'], [
+ dict(add='/locations/-', value=new_loc)])
+ break
+ except lib_exc.BadRequest:
+ if i + 1 == BAD_REQUEST_RETRIES:
+ raise
+ else:
+ time.sleep(1)
# The image should now be active, with one location that looks
# like we expect
@@ -848,8 +863,21 @@
new_loc = {'metadata': {'speed': '88mph'},
'url': '%s#new' % CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
+
+ # NOTE(danms): If glance was unable to fetch the remote image via
+ # HTTP, it will return BadRequest. Because this can be transient in
+ # CI, we try this a few times before we agree that it has failed
+ # for a reason worthy of failing the test.
+ for i in range(BAD_REQUEST_RETRIES):
+ try:
+ self.client.update_image(image['id'], [
+ dict(add='/locations/-', value=new_loc)])
+ break
+ except lib_exc.BadRequest:
+ if i + 1 == BAD_REQUEST_RETRIES:
+ raise
+ else:
+ time.sleep(1)
# The image should now have two locations and the last one
# (locations are ordered) should have the new URL.
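The two hunks above repeat the same bounded-retry shape: attempt a call a fixed number of times, sleep between attempts, and re-raise on the final failure. A generic sketch of that pattern (names here are illustrative, not part of tempest):

```python
# Generic form of the BAD_REQUEST_RETRIES loop used above: retry `func`
# up to `attempts` times on `exc_type`, sleeping `delay` seconds between
# attempts, and re-raise if the last attempt still fails.
import time


def retry_on(exc_type, attempts, func, *args, delay=1, **kwargs):
    for i in range(attempts):
        try:
            return func(*args, **kwargs)
        except exc_type:
            if i + 1 == attempts:
                raise  # last attempt: surface the error to the caller
            time.sleep(delay)


calls = []


def flaky():
    # Fails twice, then succeeds -- mimicking a transient BadRequest.
    calls.append(1)
    if len(calls) < 3:
        raise ValueError('transient')
    return 'ok'


print(retry_on(ValueError, 3, flaky, delay=0))  # prints: ok
```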
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index 2506185..3c0efee 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -14,6 +14,7 @@
from tempest.api.network import base
from tempest.common import utils
+from tempest.common import waiters
from tempest.lib import decorators
@@ -36,6 +37,16 @@
cls.create_subnet(cls.network)
cls.port = cls.create_port(cls.network)
+ @decorators.idempotent_id('f164801e-1dd8-4b8b-b5d3-cc3ac77cfaa5')
+ def test_dhcp_port_status_active(self):
+ ports = self.admin_ports_client.list_ports(
+ network_id=self.network['id'])['ports']
+ for port in ports:
+ waiters.wait_for_port_status(
+ client=self.admin_ports_client,
+ port_id=port['id'],
+ status='ACTIVE')
+
@decorators.idempotent_id('5032b1fe-eb42-4a64-8f3b-6e189d8b5c7d')
def test_list_dhcp_agent_hosting_network(self):
"""Test Listing DHCP agents hosting a network"""
diff --git a/tempest/api/volume/admin/test_group_snapshots.py b/tempest/api/volume/admin/test_group_snapshots.py
index 73903cf..8af8435 100644
--- a/tempest/api/volume/admin/test_group_snapshots.py
+++ b/tempest/api/volume/admin/test_group_snapshots.py
@@ -91,9 +91,15 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- vol = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+ # The volume is created at the instance level and must not be deleted
+ # before the group; its deletion is handled by the delete_group cleanup.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
# Create group snapshot
group_snapshot_name = data_utils.rand_name('group_snapshot')
@@ -153,9 +159,15 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- vol = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+ # The volume is created at the instance level and must not be deleted
+ # before the group; its deletion is handled by the delete_group cleanup.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
# Create group_snapshot
group_snapshot_name = data_utils.rand_name('group_snapshot')
@@ -215,8 +227,15 @@
# volume-type and group id.
volume_list = []
for _ in range(2):
- volume = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+ # The volume is created at the instance level and must not be deleted
+ # before the group; deletion is handled by the delete_group cleanup.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ volume = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, volume['id'], 'available')
volume_list.append(volume['id'])
for vol in volume_list:
@@ -268,9 +287,15 @@
group = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- volume = self.create_volume(volume_type=volume_type['id'],
- group_id=group['id'])
+ # The volume is created at the instance level and must not be deleted
+ # before the group; its deletion is handled by the delete_group cleanup.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': group['id'],
+ 'size': CONF.volume.volume_size}
+ volume = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, volume['id'], 'available')
# Create group snapshot
group_snapshot = self._create_group_snapshot(group_id=group['id'])
diff --git a/tempest/api/volume/admin/test_groups.py b/tempest/api/volume/admin/test_groups.py
index f16e4d2..094f142 100644
--- a/tempest/api/volume/admin/test_groups.py
+++ b/tempest/api/volume/admin/test_groups.py
@@ -108,11 +108,17 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volumes
+ # The volume is created at the instance level and must not be deleted
+ # before the group; its deletion is handled by the delete_group cleanup.
grp_vols = []
for _ in range(2):
- vol = self.create_volume(volume_type=volume_type['id'],
- group_id=grp['id'])
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
grp_vols.append(vol)
vol2 = grp_vols[1]
@@ -171,8 +177,15 @@
grp = self.create_group(group_type=group_type['id'],
volume_types=[volume_type['id']])
- # Create volume
- self.create_volume(volume_type=volume_type['id'], group_id=grp['id'])
+ # The volume is created at the instance level and must not be deleted
+ # before the group; its deletion is handled by the delete_group cleanup.
+ params = {'name': data_utils.rand_name("volume"),
+ 'volume_type': volume_type['id'],
+ 'group_id': grp['id'],
+ 'size': CONF.volume.volume_size}
+ vol = self.volumes_client.create_volume(**params)['volume']
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, vol['id'], 'available')
# Create Group from Group
grp_name2 = data_utils.rand_name('Group_from_grp')
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 172b6ed..49f9e22 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -19,6 +19,7 @@
from tempest.lib.common import api_version_utils
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
+from tempest.lib.decorators import cleanup_order
import tempest.test
CONF = config.CONF
@@ -94,8 +95,8 @@
cls.build_interval = CONF.volume.build_interval
cls.build_timeout = CONF.volume.build_timeout
- @classmethod
- def create_volume(cls, wait_until='available', **kwargs):
+ @cleanup_order
+ def create_volume(self, wait_until='available', **kwargs):
"""Wrapper utility that returns a test volume.
:param wait_until: wait till volume status, None means no wait.
@@ -104,12 +105,12 @@
kwargs['size'] = CONF.volume.volume_size
if 'imageRef' in kwargs:
- image = cls.images_client.show_image(kwargs['imageRef'])
+ image = self.images_client.show_image(kwargs['imageRef'])
min_disk = image['min_disk']
kwargs['size'] = max(kwargs['size'], min_disk)
if 'name' not in kwargs:
- name = data_utils.rand_name(cls.__name__ + '-Volume')
+ name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
kwargs['name'] = name
if CONF.volume.volume_type and 'volume_type' not in kwargs:
@@ -123,27 +124,26 @@
kwargs.setdefault('availability_zone',
CONF.compute.compute_volume_common_az)
- volume = cls.volumes_client.create_volume(**kwargs)['volume']
- cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
- cls.delete_volume, cls.volumes_client,
- volume['id'])
+ volume = self.volumes_client.create_volume(**kwargs)['volume']
+ self.cleanup(test_utils.call_and_ignore_notfound_exc,
+ self.delete_volume, self.volumes_client, volume['id'])
if wait_until:
- waiters.wait_for_volume_resource_status(cls.volumes_client,
+ waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], wait_until)
return volume
- @classmethod
- def create_snapshot(cls, volume_id=1, **kwargs):
+ @cleanup_order
+ def create_snapshot(self, volume_id=1, **kwargs):
"""Wrapper utility that returns a test snapshot."""
if 'name' not in kwargs:
- name = data_utils.rand_name(cls.__name__ + '-Snapshot')
+ name = data_utils.rand_name(self.__class__.__name__ + '-Snapshot')
kwargs['name'] = name
- snapshot = cls.snapshots_client.create_snapshot(
+ snapshot = self.snapshots_client.create_snapshot(
volume_id=volume_id, **kwargs)['snapshot']
- cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
- cls.delete_snapshot, snapshot['id'])
- waiters.wait_for_volume_resource_status(cls.snapshots_client,
+ self.cleanup(test_utils.call_and_ignore_notfound_exc,
+ self.delete_snapshot, snapshot['id'])
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
snapshot['id'], 'available')
return snapshot
@@ -175,11 +175,11 @@
client.delete_volume(volume_id)
client.wait_for_resource_deletion(volume_id)
- @classmethod
- def delete_snapshot(cls, snapshot_id, snapshots_client=None):
+ @cleanup_order
+ def delete_snapshot(self, snapshot_id, snapshots_client=None):
"""Delete snapshot by the given client"""
if snapshots_client is None:
- snapshots_client = cls.snapshots_client
+ snapshots_client = self.snapshots_client
snapshots_client.delete_snapshot(snapshot_id)
snapshots_client.wait_for_resource_deletion(snapshot_id)
@@ -278,23 +278,23 @@
cls.admin_scheduler_stats_client = \
cls.os_admin.volume_scheduler_stats_client_latest
- @classmethod
- def create_test_qos_specs(cls, name=None, consumer=None, **kwargs):
+ @cleanup_order
+ def create_test_qos_specs(self, name=None, consumer=None, **kwargs):
"""create a test Qos-Specs."""
- name = name or data_utils.rand_name(cls.__name__ + '-QoS')
+ name = name or data_utils.rand_name(self.__class__.__name__ + '-QoS')
consumer = consumer or 'front-end'
- qos_specs = cls.admin_volume_qos_client.create_qos(
+ qos_specs = self.admin_volume_qos_client.create_qos(
name=name, consumer=consumer, **kwargs)['qos_specs']
- cls.addClassResourceCleanup(cls.clear_qos_spec, qos_specs['id'])
+ self.cleanup(self.clear_qos_spec, qos_specs['id'])
return qos_specs
- @classmethod
- def create_volume_type(cls, name=None, **kwargs):
+ @cleanup_order
+ def create_volume_type(self, name=None, **kwargs):
"""Create a test volume-type"""
- name = name or data_utils.rand_name(cls.__name__ + '-volume-type')
- volume_type = cls.admin_volume_types_client.create_volume_type(
+ name = name or data_utils.rand_name(self.__class__.__name__ + '-volume-type')
+ volume_type = self.admin_volume_types_client.create_volume_type(
name=name, **kwargs)['volume_type']
- cls.addClassResourceCleanup(cls.clear_volume_type, volume_type['id'])
+ self.cleanup(self.clear_volume_type, volume_type['id'])
return volume_type
def create_encryption_type(self, type_id=None, provider=None,
@@ -328,19 +328,19 @@
group_type['id'])
return group_type
- @classmethod
- def clear_qos_spec(cls, qos_id):
+ @cleanup_order
+ def clear_qos_spec(self, qos_id):
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_qos_client.delete_qos, qos_id)
+ self.admin_volume_qos_client.delete_qos, qos_id)
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_qos_client.wait_for_resource_deletion, qos_id)
+ self.admin_volume_qos_client.wait_for_resource_deletion, qos_id)
- @classmethod
- def clear_volume_type(cls, vol_type_id):
+ @cleanup_order
+ def clear_volume_type(self, vol_type_id):
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_types_client.delete_volume_type, vol_type_id)
+ self.admin_volume_types_client.delete_volume_type, vol_type_id)
test_utils.call_and_ignore_notfound_exc(
- cls.admin_volume_types_client.wait_for_resource_deletion,
+ self.admin_volume_types_client.wait_for_resource_deletion,
vol_type_id)
diff --git a/tempest/clients.py b/tempest/clients.py
index a65c43b..1aa34d0 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -144,6 +144,8 @@
self.tenant_networks_client = self.compute.TenantNetworksClient()
self.assisted_volume_snapshots_client = (
self.compute.AssistedVolumeSnapshotsClient())
+ self.server_external_events_client = (
+ self.compute.ServerExternalEventsClient())
# NOTE: The following client needs special timeout values because
# the API is a proxy for the other component.
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
index 0b96d9e..a8a344a 100644
--- a/tempest/cmd/cleanup.py
+++ b/tempest/cmd/cleanup.py
@@ -90,7 +90,6 @@
from tempest import clients
from tempest.cmd import cleanup_service
from tempest.common import credentials_factory as credentials
-from tempest.common import identity
from tempest import config
from tempest.lib import exceptions
@@ -140,11 +139,6 @@
self.dry_run_data = {}
self.json_data = {}
- self.admin_id = ""
- self.admin_role_id = ""
- self.admin_project_id = ""
- self._init_admin_ids()
-
# available services
self.project_associated_services = (
cleanup_service.get_project_associated_cleanup_services())
@@ -227,26 +221,6 @@
svc = service(self.admin_mgr, **kwargs)
svc.run()
- def _init_admin_ids(self):
- pr_cl = self.admin_mgr.projects_client
- rl_cl = self.admin_mgr.roles_v3_client
- rla_cl = self.admin_mgr.role_assignments_client
- us_cl = self.admin_mgr.users_v3_client
-
- project = identity.get_project_by_name(pr_cl,
- CONF.auth.admin_project_name)
- self.admin_project_id = project['id']
- user = identity.get_user_by_project(us_cl, rla_cl,
- self.admin_project_id,
- CONF.auth.admin_username)
- self.admin_id = user['id']
-
- roles = rl_cl.list_roles()['roles']
- for role in roles:
- if role['name'] == CONF.identity.admin_role:
- self.admin_role_id = role['id']
- break
-
def get_parser(self, prog_name):
parser = super(TempestCleanup, self).get_parser(prog_name)
parser.add_argument('--init-saved-state', action="store_true",
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 9d9fab7..4fdf6a4 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -109,6 +109,15 @@
LOG.debug('(get_nic_name_by_ip) Command result: %s', nic)
return nic.strip().strip(":").split('@')[0].lower()
+ def get_nic_ip_addresses(self, nic_name, ip_version=None):
+ cmd = "ip "
+ if ip_version:
+ cmd += "-%s " % ip_version
+ cmd += "-o addr | awk '/%s/ {print $4}'" % nic_name
+ ip_addresses = self.exec_command(cmd)
+ LOG.debug('(get_nic_ip_addresses): Command result: %s', ip_addresses)
+ return ip_addresses.strip().split()
+
def _get_dns_servers(self):
cmd = 'cat /etc/resolv.conf'
resolve_file = self.exec_command(cmd).strip().split('\n')
@@ -145,15 +154,20 @@
cmd = "sudo /sbin/dhclient -r && sudo /sbin/dhclient"
self.exec_command(cmd)
+ def _renew_lease_dhcpcd(self, fixed_ip=None):
+ """Renews DHCP lease via the dhcpcd client."""
+ cmd = "sudo /sbin/dhcpcd --rebind"
+ self.exec_command(cmd)
+
def renew_lease(self, fixed_ip=None, dhcp_client='udhcpc'):
"""Wrapper method for renewing DHCP lease via given client
Supporting:
* udhcpc
* dhclient
+ * dhcpcd
"""
- # TODO(yfried): add support for dhcpcd
- supported_clients = ['udhcpc', 'dhclient']
+ supported_clients = ['udhcpc', 'dhclient', 'dhcpcd']
if dhcp_client not in supported_clients:
raise tempest.lib.exceptions.InvalidConfiguration(
'%s DHCP client unsupported' % dhcp_client)
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 71599bd..45a7b8a 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -604,6 +604,22 @@
raise lib_exc.TimeoutException()
+def wait_for_port_status(client, port_id, status):
+ """Wait for a port to reach a certain status: "BUILD", "DOWN" or "ACTIVE"
+ :param client: The network client to use when querying the port's
+ status
+ :param port_id: The uuid of the port to query for status.
+ :param status: The desired status the port should reach.
+ """
+ start_time = time.time()
+ while (time.time() - start_time <= client.build_timeout):
+ result = client.show_port(port_id)
+ if result['port']['status'].lower() == status.lower():
+ return result
+ time.sleep(client.build_interval)
+    raise lib_exc.TimeoutException()
+
+
def wait_for_ssh(ssh_client, timeout=30):
"""Waits for SSH connection to become usable"""
start_time = int(time.time())
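The new waiter follows tempest's usual poll-until-status loop: query, compare case-insensitively, sleep, and raise once the budget is spent. A self-contained sketch of that loop shape (``wait_for_status`` and the fake client are illustrative names, not tempest APIs):

```python
import time


class TimeoutException(Exception):
    pass


def wait_for_status(fetch, wanted, build_timeout=5, build_interval=0.01):
    # Poll fetch() until its reported status matches wanted
    # (case-insensitively), or raise once build_timeout has elapsed.
    start_time = time.time()
    while time.time() - start_time <= build_timeout:
        result = fetch()
        if result['status'].lower() == wanted.lower():
            return result
        time.sleep(build_interval)
    raise TimeoutException()


# A fake "client" whose port flips from BUILD to DOWN on the second poll.
states = iter([{'status': 'BUILD'}, {'status': 'DOWN'}])
port = wait_for_status(lambda: next(states), 'DOWN')
```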
diff --git a/tempest/config.py b/tempest/config.py
index d91fca4..00b394e 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -1200,13 +1200,12 @@
help='Image container format'),
cfg.DictOpt('img_properties', help='Glance image properties. '
'Use for custom images which require them'),
- # TODO(yfried): add support for dhcpcd
cfg.StrOpt('dhcp_client',
default='udhcpc',
- choices=["udhcpc", "dhclient", ""],
+ choices=["udhcpc", "dhclient", "dhcpcd", ""],
               help='DHCP client used by images to renew DHCP lease. '
'If left empty, update operation will be skipped. '
- 'Supported clients: "udhcpc", "dhclient"'),
+ 'Supported clients: "udhcpc", "dhclient", "dhcpcd"'),
cfg.StrOpt('protocol',
default='icmp',
choices=('icmp', 'tcp', 'udp'),
diff --git a/tempest/lib/api_schema/response/compute/v2_1/server_external_events.py b/tempest/lib/api_schema/response/compute/v2_1/server_external_events.py
new file mode 100644
index 0000000..2ab69e2
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_1/server_external_events.py
@@ -0,0 +1,55 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+create = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'events': {
+ 'type': 'array', 'minItems': 1,
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'server_uuid': {
+ 'type': 'string', 'format': 'uuid'
+ },
+ 'name': {
+ 'type': 'string',
+ 'enum': [
+ 'network-changed',
+ 'network-vif-plugged',
+ 'network-vif-unplugged',
+ 'network-vif-deleted'
+ ],
+ },
+ 'status': {
+ 'type': 'string',
+ 'enum': ['failed', 'completed', 'in-progress'],
+ },
+ 'tag': {
+ 'type': 'string', 'maxLength': 255,
+ },
+ 'code': {'type': 'integer'},
+ },
+ 'required': [
+ 'server_uuid', 'name', 'code'],
+ 'additionalProperties': False,
+ },
+ },
+ },
+ 'required': ['events'],
+ 'additionalProperties': False,
+ }
+}
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index 33f871b..d163968 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -60,7 +60,12 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingProxyHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
- return Response(r), r.data
+ if not kwargs.get('preload_content', True):
+ # This means we asked urllib3 for streaming content, so we
+ # need to return the raw response and not read any data yet
+ return r, b''
+ else:
+ return Response(r), r.data
class ClosingHttp(urllib3.poolmanager.PoolManager):
@@ -109,4 +114,9 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
- return Response(r), r.data
+ if not kwargs.get('preload_content', True):
+ # This means we asked urllib3 for streaming content, so we
+ # need to return the raw response and not read any data yet
+ return r, b''
+ else:
+ return Response(r), r.data
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index a11b7c1..6cf5b73 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -19,6 +19,7 @@
import re
import time
import urllib
+import urllib3
import jsonschema
from oslo_log import log as logging
@@ -298,7 +299,7 @@
"""
return self.request('POST', url, extra_headers, headers, body, chunked)
- def get(self, url, headers=None, extra_headers=False):
+ def get(self, url, headers=None, extra_headers=False, chunked=False):
"""Send a HTTP GET request using keystone service catalog and auth
:param str url: the relative url to send the get request to
@@ -307,11 +308,19 @@
returned by the get_headers() method are to
be used but additional headers are needed in
the request pass them in as a dict.
+ :param bool chunked: Boolean value that indicates if we should stream
+ the response instead of reading it all at once.
+ If True, data will be empty and the raw urllib3
+ response object will be returned.
+ NB: If you pass True here, you **MUST** call
+ release_conn() on the response object before
+ finishing!
:return: a tuple with the first entry containing the response headers
and the second the response body
:rtype: tuple
"""
- return self.request('GET', url, extra_headers, headers)
+ return self.request('GET', url, extra_headers, headers,
+ chunked=chunked)
def delete(self, url, headers=None, body=None, extra_headers=False):
"""Send a HTTP DELETE request using keystone service catalog and auth
@@ -480,7 +489,7 @@
self.LOG.info(
'Request (%s): %s %s %s%s',
caller_name,
- resp['status'],
+ resp.status,
method,
req_url,
secs,
@@ -617,17 +626,30 @@
"""
if headers is None:
headers = self.get_headers()
+ # In urllib3, chunked only affects the upload. However, we may
+ # want to read large responses to GET incrementally. Re-purpose
+ # chunked=True on a GET to also control how we handle the response.
+ preload = not (method.lower() == 'get' and chunked)
+ if not preload:
+ # NOTE(danms): Not specifically necessary, but don't send
+ # chunked=True to urllib3 on a GET, since it is technically
+ # for PUT/POST type operations
+ chunked = False
# Do the actual request, and time it
start = time.time()
self._log_request_start(method, url)
resp, resp_body = self.http_obj.request(
url, method, headers=headers,
- body=body, chunked=chunked)
+ body=body, chunked=chunked, preload_content=preload)
end = time.time()
req_body = body if log_req_body is None else log_req_body
- self._log_request(method, url, resp, secs=(end - start),
- req_headers=headers, req_body=req_body,
- resp_body=resp_body)
+ if preload:
+ # NOTE(danms): If we are reading the whole response, we can do
+ # this logging. If not, skip the logging because it will result
+ # in us reading the response data prematurely.
+ self._log_request(method, url, resp, secs=(end - start),
+ req_headers=headers, req_body=req_body,
+ resp_body=resp_body)
return resp, resp_body
def request(self, method, url, extra_headers=False, headers=None,
@@ -773,6 +795,10 @@
# resp this could possibly fail
if str(type(resp)) == "<type 'instance'>":
ctype = resp.getheader('content-type')
+ elif isinstance(resp, urllib3.HTTPResponse):
+ # If we requested chunked=True streaming, this will be a raw
+ # urllib3.HTTPResponse
+ ctype = resp.getheaders()['content-type']
else:
try:
ctype = resp['content-type']
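The preload/chunked juggling added to the request path can be isolated as a tiny pure function. A standalone sketch of that decision (``resolve_transfer_flags`` is an illustrative name, not a tempest helper):

```python
def resolve_transfer_flags(method, chunked):
    """Mirror the GET re-purposing logic: chunked=True on a GET means
    "stream the response", so preloading is disabled and chunked is not
    forwarded to urllib3, which reserves it for PUT/POST style uploads.
    """
    preload = not (method.lower() == 'get' and chunked)
    if not preload:
        # Streaming GET: suppress chunked before handing off to urllib3.
        chunked = False
    return preload, chunked
```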
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index cb59a82..aad04b8 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -53,7 +53,8 @@
def __init__(self, host, username, password=None, timeout=300, pkey=None,
channel_timeout=10, look_for_keys=False, key_filename=None,
- port=22, proxy_client=None, ssh_key_type='rsa'):
+ port=22, proxy_client=None, ssh_key_type='rsa',
+ ssh_allow_agent=True):
"""SSH client.
Many of parameters are just passed to the underlying implementation
@@ -76,6 +77,9 @@
for ssh-over-ssh. The default is None, which means
not to use ssh-over-ssh.
:param ssh_key_type: ssh key type (rsa, ecdsa)
+        :param ssh_allow_agent: boolean, default True, whether the SSH
+            client is allowed to also use the ssh-agent. Tests that rely
+            on explicit password authentication may need this set to False.
:type proxy_client: ``tempest.lib.common.ssh.Client`` object
"""
self.host = host
@@ -105,6 +109,7 @@
raise exceptions.SSHClientProxyClientLoop(
host=self.host, port=self.port, username=self.username)
self._proxy_conn = None
+ self.ssh_allow_agent = ssh_allow_agent
def _get_ssh_connection(self, sleep=1.5, backoff=1):
"""Returns an ssh connection to the specified host."""
@@ -133,7 +138,7 @@
look_for_keys=self.look_for_keys,
key_filename=self.key_filename,
timeout=self.channel_timeout, pkey=self.pkey,
- sock=proxy_chan)
+ sock=proxy_chan, allow_agent=self.ssh_allow_agent)
LOG.info("ssh connection to %s@%s successfully created",
self.username, self.host)
return ssh
diff --git a/tempest/lib/common/utils/linux/remote_client.py b/tempest/lib/common/utils/linux/remote_client.py
index d0cdc25..662b452 100644
--- a/tempest/lib/common/utils/linux/remote_client.py
+++ b/tempest/lib/common/utils/linux/remote_client.py
@@ -69,7 +69,8 @@
server=None, servers_client=None, ssh_timeout=300,
connect_timeout=60, console_output_enabled=True,
ssh_shell_prologue="set -eu -o pipefail; PATH=$PATH:/sbin;",
- ping_count=1, ping_size=56, ssh_key_type='rsa'):
+ ping_count=1, ping_size=56, ssh_key_type='rsa',
+ ssh_allow_agent=True):
"""Executes commands in a VM over ssh
:param ip_address: IP address to ssh to
@@ -85,6 +86,8 @@
:param ping_count: Number of ping packets
:param ping_size: Packet size for ping packets
:param ssh_key_type: ssh key type (rsa, ecdsa)
+        :param ssh_allow_agent: Boolean indicating whether ssh-agent
+            support is permitted. Defaults to True.
"""
self.server = server
self.servers_client = servers_client
@@ -94,11 +97,14 @@
self.ping_count = ping_count
self.ping_size = ping_size
self.ssh_key_type = ssh_key_type
+ self.ssh_allow_agent = ssh_allow_agent
self.ssh_client = ssh.Client(ip_address, username, password,
ssh_timeout, pkey=pkey,
channel_timeout=connect_timeout,
- ssh_key_type=ssh_key_type)
+ ssh_key_type=ssh_key_type,
+ ssh_allow_agent=ssh_allow_agent,
+ )
@debug_ssh
def exec_command(self, cmd):
diff --git a/tempest/lib/decorators.py b/tempest/lib/decorators.py
index a4633ca..7d54c1a 100644
--- a/tempest/lib/decorators.py
+++ b/tempest/lib/decorators.py
@@ -13,6 +13,7 @@
# under the License.
import functools
+from types import MethodType
import uuid
from oslo_log import log as logging
@@ -189,3 +190,41 @@
raise e
return inner
return decor
+
+
+class cleanup_order:
+    """Descriptor that registers cleanup for a create function per caller.
+
+    Some create functions are classmethods whose cleanup is managed by the
+    class with addClassResourceCleanup. That is fine when the function is
+    called at class level (resource_setup), but when it is called at test
+    case level there is no reason to keep the resource until the class
+    tears down.
+
+    Test case results would also not reflect such cleanups: a test may
+    pass while the class-level cleanup fails. If resources were created by
+    a test case, it is better to let the test case delete them and report
+    any failure as part of that test case.
+    """
+
+ def __init__(self, func):
+ self.func = func
+ functools.update_wrapper(self, func)
+
+ def __get__(self, instance, owner):
+ if instance:
+ # instance is the caller
+ instance.cleanup = instance.addCleanup
+ instance.__name__ = owner.__name__
+ return MethodType(self.func, instance)
+ elif owner:
+ # class is the caller
+ owner.cleanup = owner.addClassResourceCleanup
+ return MethodType(self.func, owner)
+
+
+def serial(cls):
+ """A decorator to mark a test class for serial execution"""
+ cls._serial = True
+ LOG.debug('marked %s for serial execution', cls.__name__)
+ return cls
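The ``@serial`` decorator only plants a class-level marker; everything else happens in ``tempest/test.py`` when ``setUpClass`` inspects it. A minimal sketch of how the marker composes with a base class (class names here are illustrative):

```python
def serial(cls):
    # Same shape as the tempest decorator: set the marker, return the
    # class unchanged.
    cls._serial = True
    return cls


class BaseTestCase:
    # Parallel execution is the default.
    _serial = False

    @classmethod
    def is_serial_execution_requested(cls):
        return cls._serial


@serial
class AggregateTests(BaseTestCase):
    """Would take the write lock and run alone."""


class ParallelTests(BaseTestCase):
    """Would take a read lock and run alongside other readers."""
```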
diff --git a/tempest/lib/services/compute/__init__.py b/tempest/lib/services/compute/__init__.py
index 8d07a45..da800af 100644
--- a/tempest/lib/services/compute/__init__.py
+++ b/tempest/lib/services/compute/__init__.py
@@ -52,6 +52,8 @@
SecurityGroupRulesClient
from tempest.lib.services.compute.security_groups_client import \
SecurityGroupsClient
+from tempest.lib.services.compute.server_external_events_client \
+ import ServerExternalEventsClient
from tempest.lib.services.compute.server_groups_client import \
ServerGroupsClient
from tempest.lib.services.compute.servers_client import ServersClient
@@ -75,6 +77,6 @@
'MigrationsClient', 'NetworksClient', 'QuotaClassesClient',
'QuotasClient', 'SecurityGroupDefaultRulesClient',
'SecurityGroupRulesClient', 'SecurityGroupsClient',
- 'ServerGroupsClient', 'ServersClient', 'ServicesClient',
- 'SnapshotsClient', 'TenantNetworksClient', 'TenantUsagesClient',
- 'VersionsClient', 'VolumesClient']
+ 'ServerExternalEventsClient', 'ServerGroupsClient', 'ServersClient',
+ 'ServicesClient', 'SnapshotsClient', 'TenantNetworksClient',
+ 'TenantUsagesClient', 'VersionsClient', 'VolumesClient']
diff --git a/tempest/lib/services/compute/server_external_events_client.py b/tempest/lib/services/compute/server_external_events_client.py
new file mode 100644
index 0000000..683dce1
--- /dev/null
+++ b/tempest/lib/services/compute/server_external_events_client.py
@@ -0,0 +1,36 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.api_schema.response.compute.v2_1 import \
+ server_external_events as schema
+from tempest.lib.common import rest_client
+from tempest.lib.services.compute import base_compute_client
+
+
+class ServerExternalEventsClient(base_compute_client.BaseComputeClient):
+
+ def create_server_external_events(self, events):
+ """Create Server External Events.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/compute/#run-events
+ """
+ post_body = json.dumps({'events': events})
+ resp, body = self.post("os-server-external-events", post_body)
+ body = json.loads(body)
+ self.validate_response(schema.create, resp, body)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/image/v2/images_client.py b/tempest/lib/services/image/v2/images_client.py
index ae6ce25..8460b57 100644
--- a/tempest/lib/services/image/v2/images_client.py
+++ b/tempest/lib/services/image/v2/images_client.py
@@ -248,17 +248,26 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp)
- def show_image_file(self, image_id):
+ def show_image_file(self, image_id, chunked=False):
"""Download binary image data.
+ :param bool chunked: If True, do not read the body and return only
+ the raw urllib3 response object for processing.
+ NB: If you pass True here, you **MUST** call
+ release_conn() on the response object before
+ finishing!
+
For a full list of available parameters, please refer to the official
API reference:
https://docs.openstack.org/api-ref/image/v2/#download-binary-image-data
"""
url = 'images/%s/file' % image_id
- resp, body = self.get(url)
+ resp, body = self.get(url, chunked=chunked)
self.expected_success([200, 204, 206], resp.status)
- return rest_client.ResponseBodyData(resp, body)
+ if chunked:
+ return resp
+ else:
+ return rest_client.ResponseBodyData(resp, body)
def add_image_tag(self, image_id, tag):
"""Add an image tag.
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 2843498..bf3f62f 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -145,6 +145,7 @@
- 'binding:vnic_type' - defaults to CONF.network.port_vnic_type
- 'binding:profile' - defaults to CONF.network.port_profile
"""
+
if not client:
client = self.ports_client
name = data_utils.rand_name(
@@ -158,10 +159,12 @@
network_id=network_id,
**kwargs)
self.assertIsNotNone(result, 'Unable to allocate port')
- port = result['port']
+ port_id = result['port']['id']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- client.delete_port, port['id'])
- return port
+ client.delete_port, port_id)
+ port = waiters.wait_for_port_status(
+ client=client, port_id=port_id, status="DOWN")
+ return port["port"]
def create_keypair(self, client=None, **kwargs):
"""Creates keypair
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 90e1bc5..5513f4d 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -86,6 +86,7 @@
'%s' % (secgroup['id'], server['id']))
raise exceptions.TimeoutException(msg)
+ @decorators.attr(type='slow')
@decorators.idempotent_id('bdbb5441-9204-419d-a225-b4fdbfb1a1a8')
@utils.services('compute', 'volume', 'image', 'network')
def test_minimum_basic_scenario(self):
@@ -159,6 +160,7 @@
self.servers_client, server, floating_ip,
wait_for_disassociate=True)
+ @decorators.attr(type='slow')
@decorators.idempotent_id('a8fd48ec-1d01-4895-b932-02321661ec1e')
@testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
"Cinder volume snapshots are disabled")
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index cbe8c20..cbe4122 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -897,10 +897,17 @@
self.check_remote_connectivity(ssh_client, dest=peer_address,
nic=spoof_nic, should_succeed=True)
# Set a mac address by making nic down temporary
+ spoof_ip_addresses = ssh_client.get_nic_ip_addresses(spoof_nic)
cmd = ("sudo ip link set {nic} down;"
"sudo ip link set dev {nic} address {mac};"
- "sudo ip link set {nic} up").format(nic=spoof_nic,
- mac=spoof_mac)
+ "sudo ip link set {nic} up;"
+ "sudo ip address flush dev {nic};").format(nic=spoof_nic,
+ mac=spoof_mac)
+ for ip_address in spoof_ip_addresses:
+ cmd += (
+ "sudo ip addr add {ip_address} dev {nic};"
+ ).format(ip_address=ip_address, nic=spoof_nic)
+
ssh_client.exec_command(cmd)
new_mac = ssh_client.get_mac_address(nic=spoof_nic)
diff --git a/tempest/serial_tests/__init__.py b/tempest/serial_tests/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tempest/serial_tests/__init__.py
diff --git a/tempest/serial_tests/api/__init__.py b/tempest/serial_tests/api/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tempest/serial_tests/api/__init__.py
diff --git a/tempest/serial_tests/api/admin/__init__.py b/tempest/serial_tests/api/admin/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tempest/serial_tests/api/admin/__init__.py
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/serial_tests/api/admin/test_aggregates.py
similarity index 99%
rename from tempest/api/compute/admin/test_aggregates.py
rename to tempest/serial_tests/api/admin/test_aggregates.py
index a6c6535..2ca91aa 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/serial_tests/api/admin/test_aggregates.py
@@ -26,6 +26,7 @@
CONF = config.CONF
+@decorators.serial
class AggregatesAdminTestBase(base.BaseV2ComputeAdminTest):
"""Tests Aggregates API that require admin privileges"""
diff --git a/tempest/serial_tests/scenario/__init__.py b/tempest/serial_tests/scenario/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tempest/serial_tests/scenario/__init__.py
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/serial_tests/scenario/test_aggregates_basic_ops.py
similarity index 99%
rename from tempest/scenario/test_aggregates_basic_ops.py
rename to tempest/serial_tests/scenario/test_aggregates_basic_ops.py
index 58e234f..ba31d84 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/serial_tests/scenario/test_aggregates_basic_ops.py
@@ -20,6 +20,7 @@
from tempest.scenario import manager
+@decorators.serial
class TestAggregatesBasicOps(manager.ScenarioTest):
"""Creates an aggregate within an availability zone
diff --git a/tempest/test.py b/tempest/test.py
index dba2695..d49458e 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -18,7 +18,9 @@
import sys
import debtcollector.moves
+from fasteners import process_lock
import fixtures
+from oslo_concurrency import lockutils
from oslo_log import log as logging
import testtools
@@ -123,6 +125,23 @@
# A way to adjust slow test classes
TIMEOUT_SCALING_FACTOR = 1
+ # An interprocess lock to implement serial test execution if requested.
+ # The serial test classes are the writers as only one of them can be
+ # executed. The rest of the test classes are the readers as many of them
+ # can be run in parallel.
+    # Only classes can be decorated with the @serial decorator, not
+    # individual test cases, as tempest allows test class level resource
+    # setup, which could interfere with serialized execution at the test
+    # case level. I.e. the class setup of one of the test cases could run
+    # before a test case level lock is taken.
+    # We cannot init the lock here as the external lock needs the oslo
+    # configuration to be loaded first to get the lock_path.
+ serial_rw_lock = None
+
+ # Defines if the tests in this class should be run without any parallelism
+ # Use the @serial decorator on your test class to indicate such requirement
+ _serial = False
+
@classmethod
def _reset_class(cls):
cls.__setup_credentials_called = False
@@ -134,14 +153,33 @@
cls._teardowns = []
@classmethod
+ def is_serial_execution_requested(cls):
+ return cls._serial
+
+ @classmethod
def setUpClass(cls):
cls.__setupclass_called = True
+
+ if cls.serial_rw_lock is None:
+ path = os.path.join(
+ lockutils.get_lock_path(CONF), 'tempest-serial-rw-lock')
+ cls.serial_rw_lock = (
+ process_lock.InterProcessReaderWriterLock(path)
+ )
+
# Reset state
cls._reset_class()
# It should never be overridden by descendants
if hasattr(super(BaseTestCase, cls), 'setUpClass'):
super(BaseTestCase, cls).setUpClass()
try:
+ if cls.is_serial_execution_requested():
+ LOG.debug('%s taking the write lock', cls.__name__)
+ cls.serial_rw_lock.acquire_write_lock()
+ LOG.debug('%s took the write lock', cls.__name__)
+ else:
+ cls.serial_rw_lock.acquire_read_lock()
+
cls.skip_checks()
if not cls.__skip_checks_called:
@@ -184,35 +222,44 @@
# If there was no exception during setup we shall re-raise the first
# exception in teardown
re_raise = (etype is None)
- while cls._teardowns:
- name, teardown = cls._teardowns.pop()
- # Catch any exception in tearDown so we can re-raise the original
- # exception at the end
- try:
- teardown()
- if name == 'resources':
- if not cls.__resource_cleanup_called:
- raise RuntimeError(
- "resource_cleanup for %s did not call the "
- "super's resource_cleanup" % cls.__name__)
- except Exception as te:
- sys_exec_info = sys.exc_info()
- tetype = sys_exec_info[0]
- # TODO(andreaf): Resource cleanup is often implemented by
- # storing an array of resources at class level, and cleaning
- # them up during `resource_cleanup`.
- # In case of failure during setup, some resource arrays might
- # not be defined at all, in which case the cleanup code might
- # trigger an AttributeError. In such cases we log
- # AttributeError as info instead of exception. Once all
- # cleanups are migrated to addClassResourceCleanup we can
- # remove this.
- if tetype is AttributeError and name == 'resources':
- LOG.info("tearDownClass of %s failed: %s", name, te)
- else:
- LOG.exception("teardown of %s failed: %s", name, te)
- if not etype:
- etype, value, trace = sys_exec_info
+ try:
+ while cls._teardowns:
+ name, teardown = cls._teardowns.pop()
+ # Catch any exception in tearDown so we can re-raise the
+ # original exception at the end
+ try:
+ teardown()
+ if name == 'resources':
+ if not cls.__resource_cleanup_called:
+ raise RuntimeError(
+ "resource_cleanup for %s did not call the "
+ "super's resource_cleanup" % cls.__name__)
+ except Exception as te:
+ sys_exec_info = sys.exc_info()
+ tetype = sys_exec_info[0]
+ # TODO(andreaf): Resource cleanup is often implemented by
+ # storing an array of resources at class level, and
+ # cleaning them up during `resource_cleanup`.
+ # In case of failure during setup, some resource arrays
+ # might not be defined at all, in which case the cleanup
+ # code might trigger an AttributeError. In such cases we
+ # log AttributeError as info instead of exception. Once all
+ # cleanups are migrated to addClassResourceCleanup we can
+ # remove this.
+ if tetype is AttributeError and name == 'resources':
+ LOG.info("tearDownClass of %s failed: %s", name, te)
+ else:
+ LOG.exception("teardown of %s failed: %s", name, te)
+ if not etype:
+ etype, value, trace = sys_exec_info
+ finally:
+ if cls.is_serial_execution_requested():
+ LOG.debug('%s releasing the write lock', cls.__name__)
+ cls.serial_rw_lock.release_write_lock()
+ LOG.debug('%s released the write lock', cls.__name__)
+ else:
+ cls.serial_rw_lock.release_read_lock()
+
# If exceptions were raised during teardown, and not before, re-raise
# the first one
if re_raise and etype is not None:
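The locking scheme above is classic reader-writer semantics: many parallel test classes hold read locks at once, while a ``@serial`` class waits for exclusive write access. Tempest uses fasteners' interprocess lock; as a rough in-process analogue (a simplified, reader-preferring sketch, not the fasteners implementation):

```python
import threading


class ReaderWriterLock:
    """Many readers may hold the lock concurrently; a writer waits until
    it can hold it alone. Simplified: no fairness, writers can starve."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read_lock(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read_lock(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_write_lock(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write_lock(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```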
diff --git a/tempest/test_discover/test_discover.py b/tempest/test_discover/test_discover.py
index a19f20b..679d58b 100644
--- a/tempest/test_discover/test_discover.py
+++ b/tempest/test_discover/test_discover.py
@@ -25,7 +25,7 @@
base_path = os.path.split(os.path.dirname(os.path.abspath(__file__)))[0]
base_path = os.path.split(base_path)[0]
# Load local tempest tests
- for test_dir in ['api', 'scenario']:
+ for test_dir in ['api', 'scenario', 'serial_tests']:
full_test_dir = os.path.join(base_path, 'tempest', test_dir)
if not pattern:
suite.addTests(loader.discover(full_test_dir,
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index 71088a4..2695048 100755
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -21,6 +21,7 @@
from tempest import exceptions
from tempest.lib import exceptions as lib_exc
from tempest.lib.services.compute import servers_client
+from tempest.lib.services.network import ports_client
from tempest.lib.services.volume.v2 import volumes_client
from tempest.tests import base
import tempest.tests.utils as utils
@@ -612,6 +613,48 @@
)
+class TestPortCreationWaiter(base.TestCase):
+ def test_wait_for_port_status(self):
+ """Test that the waiter replies with the port before the timeout"""
+
+ def client_response(self):
+ """Mock client response, replies with the final status after
+ 2 calls
+ """
+ if mock_client.call_count >= 2:
+ return mock_port
+ else:
+ mock_client.call_count += 1
+ return mock_port_build
+
+ mock_port = {'port': {'id': '1234', 'status': "DOWN"}}
+ mock_port_build = {'port': {'id': '1234', 'status': "BUILD"}}
+ mock_client = mock.Mock(
+ spec=ports_client.PortsClient,
+ build_timeout=30, build_interval=1,
+ show_port=client_response)
+ fake_port_id = "1234"
+ fake_status = "DOWN"
+ self.assertEqual(mock_port, waiters.wait_for_port_status(
+ mock_client, fake_port_id, fake_status))
+
+ def test_wait_for_port_status_timeout(self):
+        """Negative test - a small build_timeout combined with a mock
+        port stuck in 'BUILD' status must raise a timeout exception
+        """
+ mock_port = {'port': {'id': '1234', 'status': "BUILD"}}
+ mock_client = mock.Mock(
+ spec=ports_client.PortsClient,
+ build_timeout=2, build_interval=1,
+ show_port=lambda id: mock_port)
+ fake_port_id = "1234"
+ fake_status = "ACTIVE"
+ self.assertRaises(lib_exc.TimeoutException,
+ waiters.wait_for_port_status, mock_client,
+ fake_port_id, fake_status)
+
+
class TestServerFloatingIPWaiters(base.TestCase):
def test_wait_for_server_floating_ip_associate_timeout(self):
diff --git a/tempest/tests/lib/common/test_http.py b/tempest/tests/lib/common/test_http.py
index a19153f..aae6ba2 100644
--- a/tempest/tests/lib/common/test_http.py
+++ b/tempest/tests/lib/common/test_http.py
@@ -149,6 +149,31 @@
'xtra key': 'Xtra Value'},
response)
+ def test_request_preload(self):
+ # Given
+ connection = self.closing_http()
+ headers = {'Xtra Key': 'Xtra Value'}
+ http_response = urllib3.HTTPResponse(headers=headers)
+ request = self.patch('urllib3.PoolManager.request',
+ return_value=http_response)
+ retry = self.patch('urllib3.util.Retry')
+
+ # When
+ response, _ = connection.request(
+ method=REQUEST_METHOD,
+ url=REQUEST_URL,
+ headers=headers,
+ preload_content=False)
+
+ # Then
+ request.assert_called_once_with(
+ REQUEST_METHOD,
+ REQUEST_URL,
+ headers=dict(headers, connection='close'),
+ preload_content=False,
+ retries=retry(raise_on_redirect=False, redirect=5))
+ self.assertIsInstance(response, urllib3.HTTPResponse)
+
class TestClosingProxyHttp(TestClosingHttp):
diff --git a/tempest/tests/lib/common/test_rest_client.py b/tempest/tests/lib/common/test_rest_client.py
index 910756f..81a76e0 100644
--- a/tempest/tests/lib/common/test_rest_client.py
+++ b/tempest/tests/lib/common/test_rest_client.py
@@ -55,6 +55,7 @@
def test_get(self):
__, return_dict = self.rest_client.get(self.url)
self.assertEqual('GET', return_dict['method'])
+ self.assertTrue(return_dict['preload_content'])
def test_delete(self):
__, return_dict = self.rest_client.delete(self.url)
@@ -78,6 +79,17 @@
__, return_dict = self.rest_client.copy(self.url)
self.assertEqual('COPY', return_dict['method'])
+ def test_get_chunked(self):
+ self.useFixture(fixtures.MockPatchObject(self.rest_client,
+ '_log_request'))
+ __, return_dict = self.rest_client.get(self.url, chunked=True)
+ # Default is preload_content=True, make sure we passed False
+ self.assertFalse(return_dict['preload_content'])
+ # Make sure we did not pass chunked=True to urllib3 for GET
+ self.assertFalse(return_dict['chunked'])
+ # Make sure we did not call _log_request() on the raw response
+ self.rest_client._log_request.assert_not_called()
+
class TestRestClientNotFoundHandling(BaseRestClientTestClass):
def setUp(self):
diff --git a/tempest/tests/lib/fake_http.py b/tempest/tests/lib/fake_http.py
index cfa4b93..5fa0c43 100644
--- a/tempest/tests/lib/fake_http.py
+++ b/tempest/tests/lib/fake_http.py
@@ -21,14 +21,17 @@
self.return_type = return_type
def request(self, uri, method="GET", body=None, headers=None,
- redirections=5, connection_type=None, chunked=False):
+ redirections=5, connection_type=None, chunked=False,
+ preload_content=False):
if not self.return_type:
fake_headers = fake_http_response(headers)
return_obj = {
'uri': uri,
'method': method,
'body': body,
- 'headers': headers
+ 'headers': headers,
+ 'chunked': chunked,
+ 'preload_content': preload_content,
}
return (fake_headers, return_obj)
elif isinstance(self.return_type, int):
diff --git a/tempest/tests/lib/services/compute/test_server_external_events_client.py b/tempest/tests/lib/services/compute/test_server_external_events_client.py
new file mode 100644
index 0000000..63922b3
--- /dev/null
+++ b/tempest/tests/lib/services/compute/test_server_external_events_client.py
@@ -0,0 +1,56 @@
+# Copyright 2022 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.compute import server_external_events_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestServerExternalEventsClient(base.BaseServiceTest):
+
+ events = [
+ {
+ "code": 200,
+ "name": "network-changed",
+ "server_uuid": "ff1df7b2-6772-45fd-9326-c0a3b05591c2",
+ "status": "completed",
+ "tag": "foo"
+ }
+ ]
+
+ events_req = [
+ {
+ "name": "network-changed",
+ "server_uuid": "ff1df7b2-6772-45fd-9326-c0a3b05591c2",
+ }
+ ]
+
+ def setUp(self):
+ super(TestServerExternalEventsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = server_external_events_client.ServerExternalEventsClient(
+ fake_auth, 'compute', 'regionOne')
+
+ def _test_create_server_external_events(self, bytes_body=False):
+ expected = {"events": self.events}
+ self.check_service_client_function(
+ self.client.create_server_external_events,
+ 'tempest.lib.common.rest_client.RestClient.post', expected,
+ bytes_body, events=self.events_req)
+
+ def test_create_server_external_events_str_body(self):
+ self._test_create_server_external_events(bytes_body=False)
+
+ def test_create_server_external_events_byte_body(self):
+ self._test_create_server_external_events(bytes_body=True)
diff --git a/tempest/tests/lib/services/image/v2/test_images_client.py b/tempest/tests/lib/services/image/v2/test_images_client.py
index 5b162f8..27a50a9 100644
--- a/tempest/tests/lib/services/image/v2/test_images_client.py
+++ b/tempest/tests/lib/services/image/v2/test_images_client.py
@@ -13,6 +13,9 @@
# under the License.
import io
+from unittest import mock
+
+import fixtures
from tempest.lib.common.utils import data_utils
from tempest.lib.services.image.v2 import images_client
@@ -239,6 +242,21 @@
headers={'Content-Type': 'application/octet-stream'},
status=200)
+ def test_show_image_file_chunked(self):
+ # Since chunked=True on a GET should pass the response object
+ # basically untouched, we use a mock here so we get some assurances.
+ http_response = mock.MagicMock()
+ http_response.status = 200
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.common.rest_client.RestClient.get',
+ return_value=(http_response, b'')))
+ resp = self.client.show_image_file(
+ self.FAKE_CREATE_UPDATE_SHOW_IMAGE['id'],
+ chunked=True)
+ self.assertEqual(http_response, resp)
+ resp.__contains__.assert_not_called()
+ resp.__getitem__.assert_not_called()
+
def test_add_image_tag(self):
self.check_service_client_function(
self.client.add_image_tag,
diff --git a/tempest/tests/lib/test_decorators.py b/tempest/tests/lib/test_decorators.py
index fc93f76..f9a12b6 100644
--- a/tempest/tests/lib/test_decorators.py
+++ b/tempest/tests/lib/test_decorators.py
@@ -21,6 +21,7 @@
from tempest.lib import base as test
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+from tempest.lib import exceptions
from tempest.lib import exceptions as lib_exc
from tempest.tests import base
@@ -289,3 +290,109 @@
with mock.patch.object(decorators.LOG, 'error'):
self.assertRaises(lib_exc.InvalidParam, test_foo, object())
+
+
+class TestCleanupOrderDecorator(base.TestCase):
+
+ @decorators.cleanup_order
+ def _create_volume(self, raise_exception=False):
+ """Test doc"""
+ vol_id = "487ef6b6-546a-40c7-bc3f-b405d6239fc8"
+ self.cleanup(self._delete_dummy, vol_id)
+ if raise_exception:
+ raise exceptions.NotFound("Not found")
+ return "volume"
+
+ def _delete_dummy(self, vol_id):
+ pass
+
+ class DummyClassResourceCleanup(list):
+ """Dummy list class simulating addClassResourceCleanup"""
+
+ def __call__(self, func, vol_id):
+ self.append((func, vol_id))
+
+ @classmethod
+ def resource_setup(cls):
+ cls.addClassResourceCleanup = cls.DummyClassResourceCleanup()
+ cls.volume = cls._create_volume()
+
+ @classmethod
+ def resource_setup_exception(cls):
+ cls.addClassResourceCleanup = cls.DummyClassResourceCleanup()
+ cls.volume = cls._create_volume(raise_exception=True)
+
+ def setUp(self):
+ super().setUp()
+ self.volume_instance = self._create_volume()
+
+ def test_cleanup_order_when_called_from_instance_testcase(self):
+ # create a volume
+ my_vol = self._create_volume()
+ # Verify method runs and return value
+ self.assertEqual(my_vol, "volume")
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(self.cleanup, self.addCleanup)
+ # New __name__ created from type(self)
+ self.assertEqual(self.__name__, type(self).__name__)
+ # Verify function added to instance _cleanups
+ self.assertIn(self._delete_dummy, [e[0] for e in self._cleanups])
+
+ def test_cleanup_order_when_called_from_setup_instance(self):
+ # create a volume
+ my_vol = self.volume_instance
+ # Verify method runs and return value
+ self.assertEqual(my_vol, "volume")
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(self.cleanup, self.addCleanup)
+ # New __name__ created from type(self)
+ self.assertEqual(self.__name__, type(self).__name__)
+ # Verify function added to instance _cleanups
+ self.assertIn(self._delete_dummy, [e[0] for e in self._cleanups])
+
+ def test_cleanup_order_when_called_from_instance_raise(self):
+ # attempt to create a volume that raises an exception
+ self.assertRaises(exceptions.NotFound, self._create_volume,
+ raise_exception=True)
+ # the cleanup was registered before the exception was raised
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(self.cleanup, self.addCleanup)
+ # New __name__ created from type(self)
+ self.assertEqual(self.__name__, type(self).__name__)
+ # Verify function added to instance _cleanups before exception
+ self.assertIn(self._delete_dummy, [e[0] for e in self._cleanups])
+
+ def test_cleanup_order_when_called_from_class_method(self):
+ # call class method
+ type(self).resource_setup()
+ # create a volume
+ my_vol = self.volume
+ # Verify method runs and return value
+ self.assertEqual(my_vol, "volume")
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addClassResourceCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(type(self).cleanup, self.addClassResourceCleanup)
+ # Verify function added to instance addClassResourceCleanup
+ self.assertIn(type(self)._delete_dummy,
+ [e[0] for e in self.addClassResourceCleanup])
+
+ def test_cleanup_order_when_called_from_class_method_raise(self):
+ # call class method
+ self.assertRaises(exceptions.NotFound,
+ type(self).resource_setup_exception)
+ # Verify __doc__ exists from original function
+ self.assertEqual(self._create_volume.__doc__, "Test doc")
+ # New cleanup created and refers to addClassResourceCleanup
+ self.assertTrue(hasattr(self, "cleanup"))
+ self.assertEqual(type(self).cleanup, self.addClassResourceCleanup)
+ # Verify function added to instance addClassResourceCleanup
+ self.assertIn(type(self)._delete_dummy,
+ [e[0] for e in self.addClassResourceCleanup])
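The tests above exercise ``tempest.lib.decorators.cleanup_order``, which points ``self.cleanup`` at ``addCleanup`` for instance-level calls and at ``addClassResourceCleanup`` for class-level calls, so a single helper registers its teardown in the right scope. A minimal sketch of the instance-level dispatch only (the class-level path, which the real decorator also handles, is omitted; ``FakeTest`` and its methods are hypothetical stand-ins, not tempest code):

```python
import functools


def cleanup_order(func):
    # Sketch only: bind self.cleanup to addCleanup so resources created
    # inside the wrapped method register their teardown on the instance.
    # The real tempest decorator additionally dispatches class-level
    # calls to addClassResourceCleanup.
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        self.cleanup = self.addCleanup
        self.__name__ = type(self).__name__
        return func(self, *args, **kwargs)
    return wrapper


class FakeTest:
    """Hypothetical stand-in for a unittest.TestCase."""

    def __init__(self):
        self.cleanups = []

    def addCleanup(self, func, *args):
        self.cleanups.append((func, args))

    @cleanup_order
    def create_volume(self):
        # self.cleanup is bound by the decorator before this body runs
        self.cleanup(self.delete_volume, "vol-1")
        return "volume"

    def delete_volume(self, vol_id):
        pass


t = FakeTest()
assert t.create_volume() == "volume"
assert t.cleanups[0][0] == t.delete_volume
```

Because the cleanup is registered before any later failure in the helper, the teardown still runs even when the helper raises, which is what the ``*_raise`` tests above verify.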
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index 886d99c..13870ba 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -75,7 +75,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=None
+ sock=None,
+ allow_agent=True
)]
self.assertEqual(expected_connect, client_mock.connect.mock_calls)
self.assertEqual(0, s_mock.call_count)
@@ -91,7 +92,8 @@
proxy_client = ssh.Client('proxy-host', 'proxy-user', timeout=2)
client = ssh.Client('localhost', 'root', timeout=2,
- proxy_client=proxy_client)
+ proxy_client=proxy_client,
+ ssh_allow_agent=False)
client._get_ssh_connection(sleep=1)
aa_mock.assert_has_calls([mock.call(), mock.call()])
@@ -106,7 +108,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=None
+ sock=None,
+ allow_agent=True
)]
self.assertEqual(proxy_expected_connect,
proxy_client_mock.connect.mock_calls)
@@ -121,7 +124,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=proxy_client_mock.get_transport().open_session()
+ sock=proxy_client_mock.get_transport().open_session(),
+ allow_agent=False
)]
self.assertEqual(expected_connect, client_mock.connect.mock_calls)
self.assertEqual(0, s_mock.call_count)
diff --git a/tempest/tests/test_test.py b/tempest/tests/test_test.py
index cbb81e2..26e8079 100644
--- a/tempest/tests/test_test.py
+++ b/tempest/tests/test_test.py
@@ -17,12 +17,14 @@
import unittest
from unittest import mock
+from oslo_concurrency import lockutils
from oslo_config import cfg
import testtools
from tempest import clients
from tempest import config
from tempest.lib.common import validation_resources as vr
+from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest.lib.services.compute import base_compute_client
from tempest.lib.services.placement import base_placement_client
@@ -33,6 +35,8 @@
from tempest.tests.lib import fake_credentials
from tempest.tests.lib.services import registry_fixture
+CONF = config.CONF
+
class LoggingTestResult(testtools.TestResult):
@@ -594,6 +598,52 @@
str(log[0][2]['traceback']).replace('\n', ' '),
RuntimeError.__name__ + ': .* ' + OverridesSetup.__name__)
+ @mock.patch.object(test.process_lock, 'InterProcessReaderWriterLock')
+ def test_serial_execution_if_requested(self, mock_lock):
+
+ @decorators.serial
+ class SerialTests(self.parent_test):
+ pass
+
+ class ParallelTests(self.parent_test):
+ pass
+
+ @decorators.serial
+ class SerialTests2(self.parent_test):
+ pass
+
+ suite = unittest.TestSuite(
+ (SerialTests(), ParallelTests(), SerialTests2()))
+ log = []
+ result = LoggingTestResult(log)
+ suite.run(result)
+
+ expected_lock_path = os.path.join(
+ lockutils.get_lock_path(CONF), 'tempest-serial-rw-lock')
+
+ # We expect that each test class has a lock with the _same_ external
+ # path so that even if they were run by different processes they
+ # would still use the same lock.
+ # Also we expect that each serial class takes and releases the
+ # write-lock while each non-serial class takes and releases the
+ # read-lock.
+ self.assertEqual(
+ [
+ mock.call(expected_lock_path),
+ mock.call().acquire_write_lock(),
+ mock.call().release_write_lock(),
+
+ mock.call(expected_lock_path),
+ mock.call().acquire_read_lock(),
+ mock.call().release_read_lock(),
+
+ mock.call(expected_lock_path),
+ mock.call().acquire_write_lock(),
+ mock.call().release_write_lock(),
+ ],
+ mock_lock.mock_calls
+ )
+
class TestTempestBaseTestClassFixtures(base.TestCase):
@@ -750,6 +800,11 @@
class TestAPIMicroversionTest1(test.BaseTestCase):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.useFixture(fake_config.ConfigFixture())
+ config.TempestConfigPrivate = fake_config.FakePrivate
+
@classmethod
def resource_setup(cls):
super(TestAPIMicroversionTest1, cls).resource_setup()
@@ -812,6 +867,11 @@
class TestAPIMicroversionTest2(test.BaseTestCase):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.useFixture(fake_config.ConfigFixture())
+ config.TempestConfigPrivate = fake_config.FakePrivate
+
@classmethod
def resource_setup(cls):
super(TestAPIMicroversionTest2, cls).resource_setup()
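The ``test_serial_execution_if_requested`` test above pins down the coordination scheme: every test class constructs a lock on the same external path, ``@serial`` classes acquire it for writing and all other classes for reading. The same shared/exclusive idea can be sketched with the stdlib ``fcntl.flock`` (POSIX-only; the lock path and ``SerialGate`` class here are hypothetical illustrations, tempest itself uses the ``InterProcessReaderWriterLock`` mocked in the test):

```python
import fcntl
import os
import tempfile

# Hypothetical lock path; all workers must agree on it, which is why
# the test asserts each class constructs its lock with the same path.
LOCK_PATH = os.path.join(tempfile.gettempdir(), 'tempest-serial-rw-lock')


class SerialGate:
    """Sketch: serial classes hold the file lock exclusively (LOCK_EX,
    like acquire_write_lock), while parallel classes hold it shared
    (LOCK_SH, like acquire_read_lock), so any number of parallel
    classes overlap with each other but never with a serial one."""

    def __init__(self, serial):
        self.serial = serial
        self.fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)

    def __enter__(self):
        fcntl.flock(self.fd, fcntl.LOCK_EX if self.serial else fcntl.LOCK_SH)
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)


# Two "parallel" holders can coexist; a "serial" holder would block
# until both release the shared lock.
with SerialGate(serial=False), SerialGate(serial=False):
    pass
```

Because the lock lives on the filesystem rather than in memory, the serialization holds across the separate worker processes that a parallel test runner spawns.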
diff --git a/tox.ini b/tox.ini
index 94eb4d9..e1c17df 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,7 +1,6 @@
[tox]
envlist = pep8,py39,bashate,pip-check-reqs
minversion = 3.18.0
-skipsdist = True
ignore_basepython_conflict = True
[tempestenv]
@@ -24,10 +23,25 @@
OS_STDERR_CAPTURE=1
OS_TEST_TIMEOUT=160
PYTHONWARNINGS=default::DeprecationWarning,ignore::DeprecationWarning:distutils,ignore::DeprecationWarning:site
-passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
+passenv =
+ OS_STDOUT_CAPTURE
+ OS_STDERR_CAPTURE
+ OS_TEST_TIMEOUT
+ OS_TEST_LOCK_PATH
+ TEMPEST_CONFIG
+ TEMPEST_CONFIG_DIR
+ http_proxy
+ HTTP_PROXY
+ https_proxy
+ HTTPS_PROXY
+ no_proxy
+ NO_PROXY
+ ZUUL_CACHE_DIR
+ REQUIREMENTS_PIP_LOCATION
+ GENERATE_TEMPEST_PLUGIN_LIST
usedevelop = True
-install_command = pip install {opts} {packages}
-allowlist_externals = *
+allowlist_externals =
+ find
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
@@ -110,7 +124,7 @@
commands =
find . -type f -name "*.pyc" -delete
tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' {posargs}
[testenv:full-parallel]
envdir = .tox/tempest
@@ -118,10 +132,11 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.scenario.*)|(^tempest\.serial_tests)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
# The regex below is used to select all tempest scenario tests, including the non-slow api tests
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(^tempest\.scenario.*)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
+ tempest run --regex {[testenv:full-parallel]regex} {posargs}
[testenv:api-microversion-tests]
envdir = .tox/tempest
@@ -129,11 +144,12 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.api\.compute)|(^tempest\.api\.volume)'
# The regex below is used to select all tempest api tests for services that have
# the API microversion concept.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(^tempest\.api\.compute)|(^tempest\.api\.volume)' {posargs}
+ tempest run --regex {[testenv:api-microversion-tests]regex} {posargs}
[testenv:integrated-network]
envdir = .tox/tempest
@@ -141,12 +157,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-network]regex1} --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-network]regex2} --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
[testenv:integrated-compute]
envdir = .tox/tempest
@@ -154,12 +172,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-compute]regex1} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-compute]regex2} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
[testenv:integrated-placement]
envdir = .tox/tempest
@@ -167,12 +187,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-placement]regex1} --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-placement]regex2} --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
[testenv:integrated-storage]
envdir = .tox/tempest
@@ -180,12 +202,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-storage]regex1} --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-storage]regex2} --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
[testenv:integrated-object-storage]
envdir = .tox/tempest
@@ -193,12 +217,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-object-storage]regex1} --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-object-storage]regex2} --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
[testenv:full-serial]
envdir = .tox/tempest
@@ -206,12 +232,13 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|serial_tests))'
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
+ tempest run --serial --regex {[testenv:full-serial]regex} {posargs}
[testenv:scenario]
envdir = .tox/tempest
@@ -219,10 +246,11 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.scenario)'
# The regex below is used to select all scenario tests
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '(^tempest\.scenario)' {posargs}
+ tempest run --serial --regex {[testenv:scenario]regex} {posargs}
[testenv:smoke]
envdir = .tox/tempest
@@ -230,9 +258,10 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --regex {[testenv:smoke]regex} {posargs}
[testenv:smoke-serial]
envdir = .tox/tempest
@@ -240,12 +269,13 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
# This is still serial because neutron doesn't work with parallel. See:
# https://bugs.launchpad.net/tempest/+bug/1216076 so the neutron smoke
# job would fail if we moved it to parallel.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --serial --regex {[testenv:smoke-serial]regex} {posargs}
[testenv:slow-serial]
envdir = .tox/tempest
@@ -253,10 +283,11 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bslow\b.*\]'
# The regex below is used to select the slow tagged tests to run serially:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '\[.*\bslow\b.*\]' {posargs}
+ tempest run --serial --regex {[testenv:slow-serial]regex} {posargs}
[testenv:ipv6-only]
envdir = .tox/tempest
@@ -264,12 +295,13 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke|ipv6|test_network_v6\b.*\]'
# Run only smoke and ipv6 tests. This env is used to test
# ipv6 deployments and that basic tests run fine so that we can
# verify that services listen on the IPv6 address.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '\[.*\bsmoke|ipv6|test_network_v6\b.*\]' {posargs}
+ tempest run --regex {[testenv:ipv6-only]regex} {posargs}
[testenv:venv]
deps =
@@ -428,8 +460,9 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
# The below commands install the stestr master version and run smoke tests
commands =
find . -type f -name "*.pyc" -delete
pip install -U git+https://github.com/mtreinish/stestr
- tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --regex {[testenv:stestr-master]regex} {posargs}
diff --git a/zuul.d/integrated-gate.yaml b/zuul.d/integrated-gate.yaml
index 038a5ee..e461490 100644
--- a/zuul.d/integrated-gate.yaml
+++ b/zuul.d/integrated-gate.yaml
@@ -324,6 +324,7 @@
parent: devstack-tempest
description: |
Integration testing for a FIPS enabled Centos 9 system
+ timeout: 10800
nodeset: devstack-single-node-centos-9-stream
pre-run: playbooks/enable-fips.yaml
vars:
@@ -383,16 +384,20 @@
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
- project-template:
name: integrated-gate-compute
@@ -416,13 +421,15 @@
branches: ^stable/(wallaby|xena|yoga).*$
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
gate:
jobs:
- tempest-integrated-compute
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
periodic-weekly:
jobs:
# centos-9-stream is tested from zed release onwards
@@ -444,6 +451,8 @@
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches: ^(?!stable/ussuri).*$
gate:
@@ -452,8 +461,10 @@
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
- project-template:
name: integrated-gate-storage
@@ -470,16 +481,20 @@
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
- project-template:
name: integrated-gate-object-storage
@@ -494,13 +509,17 @@
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
+ # and job is broken on wallaby branch due to the issue
+ # described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|wallaby)).*$
diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml
index 46c0d8d..b9672fd 100644
--- a/zuul.d/project.yaml
+++ b/zuul.d/project.yaml
@@ -30,23 +30,18 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-full-ubuntu-focal:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-py3-ipv6:
- voting: false
- irrelevant-files: *tempest-irrelevant-files
- glance-multistore-cinder-import:
voting: false
irrelevant-files: *tempest-irrelevant-files
- tempest-full-zed:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-yoga:
- irrelevant-files: *tempest-irrelevant-files
- tempest-full-xena:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-wallaby-py3:
- irrelevant-files: *tempest-irrelevant-files
- - tempest-slow-wallaby:
- irrelevant-files: *tempest-irrelevant-files
+ # Temporarily marked as non-voting due to the below bug, which blocks
+ # the CI and complicates merging of patches.
+ # https://bugs.launchpad.net/tempest/+bug/1998916
- tempest-multinode-full-py3:
+ voting: false
irrelevant-files: *tempest-irrelevant-files
- tempest-tox-plugin-sanity-check:
irrelevant-files: &tempest-irrelevant-files-2
@@ -122,16 +117,8 @@
- tempest-full-test-account-py3:
voting: false
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-test-account-no-admin-py3:
- voting: false
- irrelevant-files: *tempest-irrelevant-files
- openstack-tox-bashate:
irrelevant-files: *tempest-irrelevant-files-2
- - tempest-full-centos-9-stream:
- # TODO(gmann): make it voting once below fix is merged
- # https://review.opendev.org/c/openstack/tempest/+/842140
- voting: false
- irrelevant-files: *tempest-irrelevant-files
gate:
jobs:
- openstack-tox-pep8
@@ -150,19 +137,20 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-ipv6-only:
irrelevant-files: *tempest-irrelevant-files-3
- - tempest-multinode-full-py3:
- irrelevant-files: *tempest-irrelevant-files
+ # https://bugs.launchpad.net/tempest/+bug/1998916
+ #- tempest-multinode-full-py3:
+ # irrelevant-files: *tempest-irrelevant-files
- tempest-full-enforce-scope-new-defaults:
irrelevant-files: *tempest-irrelevant-files
#- devstack-plugin-ceph-tempest-py3:
# irrelevant-files: *tempest-irrelevant-files
- #- tempest-full-centos-9-stream:
- # irrelevant-files: *tempest-irrelevant-files
- nova-live-migration:
irrelevant-files: *tempest-irrelevant-files
experimental:
jobs:
- nova-multi-cell
+ - nova-ceph-multistore:
+ irrelevant-files: *tempest-irrelevant-files
- tempest-with-latest-microversion
- tempest-stestr-master
- tempest-cinder-v2-api:
@@ -177,21 +165,28 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-pg-full:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-py3-ipv6:
+ irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-centos-9-stream:
+ irrelevant-files: *tempest-irrelevant-files
- tempest-centos9-stream-fips:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-test-account-no-admin-py3:
+ irrelevant-files: *tempest-irrelevant-files
periodic-stable:
jobs:
- tempest-full-zed
- tempest-full-yoga
- tempest-full-xena
- - tempest-full-wallaby-py3
- tempest-slow-zed
- tempest-slow-yoga
- tempest-slow-xena
- - tempest-slow-wallaby
periodic:
jobs:
- tempest-all
- tempest-full-oslo-master
- tempest-stestr-master
+ - tempest-full-py3-ipv6
- tempest-centos9-stream-fips
+ - tempest-full-centos-9-stream
+ - tempest-full-test-account-no-admin-py3
diff --git a/zuul.d/stable-jobs.yaml b/zuul.d/stable-jobs.yaml
index 82c3e71..fb2300b 100644
--- a/zuul.d/stable-jobs.yaml
+++ b/zuul.d/stable-jobs.yaml
@@ -18,12 +18,6 @@
override-checkout: stable/xena
- job:
- name: tempest-full-wallaby-py3
- parent: tempest-full-py3
- nodeset: openstack-single-node-focal
- override-checkout: stable/wallaby
-
-- job:
name: tempest-slow-zed
parent: tempest-slow-py3
nodeset: openstack-two-node-focal
@@ -42,12 +36,6 @@
override-checkout: stable/xena
- job:
- name: tempest-slow-wallaby
- parent: tempest-slow-py3
- nodeset: openstack-two-node-focal
- override-checkout: stable/wallaby
-
-- job:
name: tempest-full-py3
parent: devstack-tempest
# This job version is with swift disabled on py3
@@ -105,12 +93,30 @@
- job:
name: tempest-multinode-full-py3
parent: tempest-multinode-full
- nodeset: openstack-two-node-focal
- # This job runs on Focal and supposed to run until stable/zed.
+ nodeset: openstack-two-node-bionic
+ # This job runs on Bionic.
branches:
- stable/stein
- stable/train
- stable/ussuri
+ vars:
+ devstack_localrc:
+ USE_PYTHON3: true
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ neutron-trunk: true
+ group-vars:
+ subnode:
+ devstack_localrc:
+ USE_PYTHON3: true
+
+- job:
+ name: tempest-multinode-full-py3
+ parent: tempest-multinode-full
+ nodeset: openstack-two-node-focal
+ # This job runs on Focal and is supposed to run until stable/zed.
+ branches:
- stable/victoria
- stable/wallaby
- stable/xena