Merge "Fix server cleanup in test_multiple_create test"
diff --git a/HACKING.rst b/HACKING.rst
index 17e2a49..caf954b 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -194,6 +194,13 @@
attribute should be sparingly applied to only the tests that sanity-check the
most essential functionality of an OpenStack cloud.
+Multinode Attribute
+^^^^^^^^^^^^^^^^^^^
+The ``type='multinode'`` attribute signifies that a test is intended to be
+executed in a multinode environment. By marking tests with this attribute
+we can avoid running tests which provide little benefit in a multinode
+setup and thus reduce resource consumption.
+
Test fixtures and resources
---------------------------
Test level resources should be cleaned-up after the test execution. Clean-up
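The new ``type='multinode'`` attribute is applied as a decorator, as the test changes later in this patch show. A minimal self-contained sketch of the mechanism, using a simplified stand-in for ``tempest.lib.decorators.attr`` (in Tempest it delegates to testtools' attribute support; the stand-in below is illustrative only):

```python
# Simplified stand-in for tempest.lib.decorators.attr (hypothetical: the
# real decorator delegates to testtools, which records attributes so a
# runner can select or skip tests by attribute).
def attr(**kwargs):
    def decorator(func):
        if 'type' in kwargs:
            types = kwargs['type']
            types = types if isinstance(types, list) else [types]
            existing = getattr(func, '__testtools_attrs', set())
            func.__testtools_attrs = existing | set(types)
        return func
    return decorator


class LiveMigrationTest:
    @attr(type='multinode')
    def test_live_migration(self):
        """Marked so multinode jobs can select it and others can skip it."""


# A runner can now filter on the recorded attribute:
print('multinode' in LiveMigrationTest.test_live_migration.__testtools_attrs)
# True
```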
diff --git a/doc/source/keystone_scopes_and_roles_support.rst b/doc/source/keystone_scopes_and_roles_support.rst
index f446f8c..4d70565 100644
--- a/doc/source/keystone_scopes_and_roles_support.rst
+++ b/doc/source/keystone_scopes_and_roles_support.rst
@@ -203,6 +203,10 @@
cls.az_p_reader_client = (
cls.os_project_reader.availability_zone_client)
+ .. note::
+ 'primary', 'project_admin', 'project_member', and 'project_reader'
+ credentials will be created under the same project.
+
#. Project alternate Admin: This is supported and can be requested and used from
the test as below:
@@ -248,6 +252,10 @@
cls.az_p_alt_reader_client = (
cls.os_project_alt_reader.availability_zone_client)
+ .. note::
+ 'alt', 'project_alt_admin', 'project_alt_member', and
+ 'project_alt_reader' credentials will be created under the same project.
+
#. Project other roles: This is supported and can be requested and used from
the test as below:
@@ -269,6 +277,16 @@
cls.az_role2_client = (
cls.os_project_my_role2.availability_zone_client)
+ .. note::
+ The 'admin' credentials are considered legacy admin and will be
+ created under a new project. If a test needs an admin role in
+ projectA and a non-admin or admin role in projectB, it can request
+ the projectA admin using 'admin' or 'project_alt_admin', the
+ non-admin in projectB using 'primary', 'project_member', or
+ 'project_reader', and the admin in projectB using 'project_admin'.
+ Many existing tests use 'admin' with a new project to assert on
+ resource lists, so 'admin' is kept as a kind of legacy admin.
+
Pre-Provisioned Credentials
---------------------------
diff --git a/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml b/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml
new file mode 100644
index 0000000..33f11ce
--- /dev/null
+++ b/releasenotes/notes/add-ssh-allow-agent-2dee6448fd250e50.yaml
@@ -0,0 +1,10 @@
+---
+features:
+ - |
+ Adds a ``ssh_allow_agent`` parameter to the ``RemoteClient`` class
+ wrapper and the direct ssh ``Client`` class to allow a caller to
+ explicitly request that the SSH agent not be consulted for
+ authentication. This is useful when you are attempting explicit
+ password-based authentication, as ``paramiko``, the underlying
+ library used for SSH, defaults to consulting an ssh-agent process
+ before attempting password authentication.
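A sketch of how such a flag can be threaded down to paramiko's ``allow_agent`` connect option. ``FakeSSHClient`` stands in for ``paramiko.SSHClient`` so the example runs without a live SSH server, and the ``Client`` shape is illustrative, not Tempest's actual implementation:

```python
# Hypothetical sketch: forwarding an ssh_allow_agent flag to paramiko's
# allow_agent connect option. FakeSSHClient stands in for
# paramiko.SSHClient so this runs without a real SSH server.
class FakeSSHClient:
    def __init__(self):
        self.connect_kwargs = None

    def connect(self, host, **kwargs):
        # paramiko.SSHClient.connect() accepts allow_agent=False to skip
        # the ssh-agent and go straight to password authentication.
        self.connect_kwargs = kwargs


class Client:
    """Illustrative shape only; not Tempest's actual Client class."""

    def __init__(self, host, username, password=None, ssh_allow_agent=True):
        self.host = host
        self.username = username
        self.password = password
        self.ssh_allow_agent = ssh_allow_agent

    def _get_ssh_connection(self, ssh=None):
        ssh = ssh or FakeSSHClient()
        ssh.connect(self.host, username=self.username,
                    password=self.password,
                    allow_agent=self.ssh_allow_agent)
        return ssh


# Explicit password auth: disable the agent so it is not tried first.
client = Client('10.0.0.5', 'cirros', password='secret',
                ssh_allow_agent=False)
conn = client._get_ssh_connection()
print(conn.connect_kwargs['allow_agent'])  # False
```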
diff --git a/releasenotes/notes/fix-bug-1964509-b742f2c95d854980.yaml b/releasenotes/notes/fix-bug-1964509-b742f2c95d854980.yaml
new file mode 100644
index 0000000..db627de
--- /dev/null
+++ b/releasenotes/notes/fix-bug-1964509-b742f2c95d854980.yaml
@@ -0,0 +1,19 @@
+---
+fixes:
+ - |
+ There was a bug (bug#1964509) in dynamic credentials creation where
+ project credentials with different roles were created under new
+ projects. Credentials for the different roles of a project must be
+ created within the same project. For example, 'project_admin',
+ 'project_member', 'project_reader', and 'primary' credentials will
+ be created in the same project; 'alt', 'project_alt_admin',
+ 'project_alt_member', and 'project_alt_reader' share another project.
+
+ The 'admin' credentials are considered legacy admin and will be
+ created under a new project. If a test needs an admin role in
+ projectA and a non-admin or admin role in projectB, it can request
+ the projectA admin using 'admin' or 'project_alt_admin', the
+ non-admin in projectB using 'primary', 'project_member', or
+ 'project_reader', and the admin in projectB using 'project_admin'.
+ Many existing tests use 'admin' with a new project to assert on
+ resource lists, so 'admin' is kept as a kind of legacy admin.
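The intended grouping can be sketched with a toy credential factory. The grouping follows the release note above; the implementation is a simplification for illustration, not Tempest's actual dynamic credentials code:

```python
import itertools

# Toy model: the first credential of a group creates a project, later
# credentials of the same group reuse it, and the legacy 'admin' always
# gets a fresh project. Names follow the release note; the logic is a
# simplification of Tempest's dynamic credentials.
_ids = itertools.count(1)

GROUPS = {
    'primary': 'project', 'project_admin': 'project',
    'project_member': 'project', 'project_reader': 'project',
    'alt': 'alt_project', 'project_alt_admin': 'alt_project',
    'project_alt_member': 'alt_project', 'project_alt_reader': 'alt_project',
}


class DynamicCreds:
    def __init__(self):
        self._projects = {}  # group name -> project id

    def create_creds(self, cred_type):
        group = GROUPS.get(cred_type)  # None for legacy 'admin'
        if group is None or group not in self._projects:
            project_id = 'proj-%d' % next(_ids)  # create a new project
            if group is not None:
                self._projects[group] = project_id
        else:
            project_id = self._projects[group]   # reuse existing project
        return {'type': cred_type, 'project_id': project_id}


creds = DynamicCreds()
a = creds.create_creds('primary')
b = creds.create_creds('project_admin')
c = creds.create_creds('project_alt_member')
d = creds.create_creds('admin')
print(a['project_id'] == b['project_id'])  # True: same project group
print(a['project_id'] == c['project_id'])  # False: alt group
```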
diff --git a/releasenotes/notes/tempest-2023-1-release-b18a240afadae8c9.yaml b/releasenotes/notes/tempest-2023-1-release-b18a240afadae8c9.yaml
new file mode 100644
index 0000000..092f4e3
--- /dev/null
+++ b/releasenotes/notes/tempest-2023-1-release-b18a240afadae8c9.yaml
@@ -0,0 +1,17 @@
+---
+prelude: |
+ This release is to tag Tempest for OpenStack 2023.1 release.
+ This release marks the start of 2023.1 release support in Tempest.
+ After this release, Tempest will support the following OpenStack releases:
+
+ * 2023.1
+ * Zed
+ * Yoga
+ * Xena
+
+ Current development of Tempest is for OpenStack 2023.2 development
+ cycle. Every Tempest commit is also tested against master during
+ the 2023.2 cycle. However, this does not necessarily mean that using
+ Tempest as of this tag will work against a 2023.2 (or future release)
+ cloud.
+ To be on the safe side, use this tag to test the OpenStack 2023.1 release.
diff --git a/roles/run-tempest-26/tasks/main.yaml b/roles/run-tempest-26/tasks/main.yaml
index f846006..7423bfb 100644
--- a/roles/run-tempest-26/tasks/main.yaml
+++ b/roles/run-tempest-26/tasks/main.yaml
@@ -62,7 +62,9 @@
when: blacklist_stat.stat.exists
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} {{blacklist_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
+ {{tempest_test_regex|quote if (tempest_test_regex|length>0)|default(None, True)}} \
+ {{blacklist_option|default(None)}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
--black-regex={{tempest_black_regex|quote}}
args:
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
index e569e53..3fb494f 100644
--- a/roles/run-tempest/tasks/main.yaml
+++ b/roles/run-tempest/tasks/main.yaml
@@ -120,10 +120,11 @@
- target_branch in ["stable/train", "stable/ussuri", "stable/victoria"]
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} \
- {{blacklist_option|default('')}} {{exclude_list_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
+ {{tempest_test_regex|quote if (tempest_test_regex|length>0)|default(None, True)}} \
+ {{blacklist_option|default(None)}} {{exclude_list_option|default(None)}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
- {{tempest_test_exclude_regex|default('')}}
+ {{tempest_test_exclude_regex|default(None)}}
args:
chdir: "{{devstack_base_dir}}/tempest"
register: tempest_run_result
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 2826f56..f7c0dd9 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -140,6 +140,7 @@
LOG.info("Live migrate back to source %s", source_host)
self._live_migrate(server_id, source_host, state, volume_backed)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('1dce86b8-eb04-4c03-a9d8-9c1dc3ee0c7b')
@testtools.skipUnless(CONF.compute_feature_enabled.
block_migration_for_live_migration,
@@ -148,6 +149,7 @@
"""Test live migrating an active server"""
self._test_live_migration()
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('1e107f21-61b2-4988-8f22-b196e938ab88')
@testtools.skipUnless(CONF.compute_feature_enabled.
block_migration_for_live_migration,
@@ -158,6 +160,7 @@
"""Test live migrating a paused server"""
self._test_live_migration(state='PAUSED')
+ @decorators.attr(type='multinode')
@testtools.skipUnless(CONF.compute_feature_enabled.
volume_backed_live_migration,
'Volume-backed live migration not available')
@@ -167,6 +170,7 @@
"""Test live migrating an active server booted from volume"""
self._test_live_migration(volume_backed=True)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('e19c0cc6-6720-4ed8-be83-b6603ed5c812')
@testtools.skipIf(not CONF.compute_feature_enabled.
block_migration_for_live_migration,
@@ -198,7 +202,8 @@
volume = self.create_volume()
# Attach the volume to the server
- self.attach_volume(server, volume, device='/dev/xvdb')
+ self.attach_volume(server, volume, device='/dev/xvdb',
+ wait_for_detach=False)
server = self.admin_servers_client.show_server(server_id)['server']
volume_id1 = server["os-extended-volumes:volumes_attached"][0]["id"]
self._live_migrate(server_id, target_host, 'ACTIVE')
@@ -253,6 +258,7 @@
port = self.ports_client.show_port(port_id)['port']
return port['status'] == 'ACTIVE'
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('0022c12e-a482-42b0-be2d-396b5f0cffe3')
@utils.requires_ext(service='network', extension='trunk')
@utils.services('network')
@@ -297,6 +303,7 @@
min_microversion = '2.6'
max_microversion = 'latest'
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('6190af80-513e-4f0f-90f2-9714e84955d7')
@testtools.skipUnless(CONF.compute_feature_enabled.serial_console,
'Serial console not supported.')
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index 89152d6..b3d2833 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -158,6 +158,7 @@
dst_host = self.get_host_for_server(server['id'])
assert_func(src_host, dst_host)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('4bf0be52-3b6f-4746-9a27-3143636fe30d')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration not available.')
@@ -165,6 +166,7 @@
"""Test cold migrating server and then confirm the migration"""
self._test_cold_migrate_server(revert=False)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('caa1aa8b-f4ef-4374-be0d-95f001c2ac2d')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration not available.')
diff --git a/tempest/api/compute/admin/test_servers_on_multinodes.py b/tempest/api/compute/admin/test_servers_on_multinodes.py
index 9082306..013e7d8 100644
--- a/tempest/api/compute/admin/test_servers_on_multinodes.py
+++ b/tempest/api/compute/admin/test_servers_on_multinodes.py
@@ -61,6 +61,7 @@
return hosts
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('26a9d5df-6890-45f2-abc4-a659290cb130')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("SameHostFilter"),
@@ -73,6 +74,7 @@
host02 = self.get_host_for_server(server02)
self.assertEqual(self.host01, host02)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('cc7ca884-6e3e-42a3-a92f-c522fcf25e8e')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("DifferentHostFilter"),
@@ -85,6 +87,7 @@
host02 = self.get_host_for_server(server02)
self.assertNotEqual(self.host01, host02)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('7869cc84-d661-4e14-9f00-c18cdc89cf57')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("DifferentHostFilter"),
@@ -97,6 +100,7 @@
host02 = self.get_host_for_server(server02)
self.assertNotEqual(self.host01, host02)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('f8bd0867-e459-45f5-ba53-59134552fe04')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("ServerGroupAntiAffinityFilter"),
@@ -112,6 +116,7 @@
self.assertNotEqual(hostnames[0], hostnames[1],
'Servers are on the same host: %s' % hosts)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('9d2e924a-baf4-11e7-b856-fa163e65f5ce')
@testtools.skipUnless(
compute.is_scheduler_filter_enabled("ServerGroupAffinityFilter"),
@@ -152,6 +157,7 @@
waiters.wait_for_server_status(self.servers_client, server['id'],
'ACTIVE')
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('b5cc0889-50c2-46a0-b8ff-b5fb4c3a6e20')
def test_unshelve_to_specific_host(self):
"""Test unshelve to a specific host, new behavior introduced in
diff --git a/tempest/api/compute/admin/test_volume.py b/tempest/api/compute/admin/test_volume.py
index 2fcd053..e7c931e 100644
--- a/tempest/api/compute/admin/test_volume.py
+++ b/tempest/api/compute/admin/test_volume.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import io
-
from tempest.api.compute import base
from tempest.common import waiters
from tempest import config
@@ -49,9 +47,11 @@
:param return image_id: The UUID of the newly created image.
"""
image = self.admin_image_client.show_image(CONF.compute.image_ref)
- image_data = self.admin_image_client.show_image_file(
- CONF.compute.image_ref).data
- image_file = io.BytesIO(image_data)
+ # NOTE(danms): We need to stream this, so chunked=True means we get
+ # back a urllib3.HTTPResponse and have to carefully pass it to
+ # store_image_file() to upload it in pieces.
+ image_data_resp = self.admin_image_client.show_image_file(
+ CONF.compute.image_ref, chunked=True)
create_dict = {
'container_format': image['container_format'],
'disk_format': image['disk_format'],
@@ -60,24 +60,22 @@
'visibility': 'public',
}
create_dict.update(kwargs)
- new_image = self.admin_image_client.create_image(**create_dict)
- self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
- new_image['id'])
- self.addCleanup(self.admin_image_client.delete_image, new_image['id'])
- self.admin_image_client.store_image_file(new_image['id'], image_file)
-
+ try:
+ new_image = self.admin_image_client.create_image(**create_dict)
+ self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
+ new_image['id'])
+ self.addCleanup(
+ self.admin_image_client.delete_image, new_image['id'])
+ self.admin_image_client.store_image_file(new_image['id'],
+ image_data_resp)
+ finally:
+ image_data_resp.release_conn()
return new_image['id']
class AttachSCSIVolumeTestJSON(BaseAttachSCSIVolumeTest):
"""Test attaching scsi volume to server"""
- # NOTE(gibi): https://bugs.launchpad.net/nova/+bug/2002951/comments/5 shows
- # that calling _create_image_with_custom_property can cause excessive
- # memory usage in the test executor as it downloads a glance image in
- # memory. This is causing gate failures so the test is disabled. One
- # potential fix is to do a chunked data download / upload loop instead.
- @decorators.skip_because(bug="2002951", condition=True)
@decorators.idempotent_id('777e468f-17ca-4da4-b93d-b7dbf56c0494')
def test_attach_scsi_disk_with_config_drive(self):
"""Test the attach/detach volume with config drive/scsi disk
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index ea1cddc..260d4e0 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -568,7 +568,8 @@
# is already detached.
pass
- def attach_volume(self, server, volume, device=None, tag=None):
+ def attach_volume(self, server, volume, device=None, tag=None,
+ wait_for_detach=True):
"""Attaches volume to server and waits for 'in-use' volume status.
The volume will be detached when the test tears down.
@@ -605,7 +606,7 @@
# the contents of the console log. The final check of the volume state
# should be a no-op by this point and is just added for completeness
# when detaching non-multiattach volumes.
- if not volume['multiattach']:
+ if not volume['multiattach'] and wait_for_detach:
self.addCleanup(
waiters.wait_for_volume_resource_status, self.volumes_client,
volume['id'], 'available')
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index 1c839eb..388b9b0 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -28,10 +28,16 @@
"""Test servers API"""
create_default_network = True
+ credentials = ['primary', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(ServersTestJSON, cls).setup_clients()
cls.client = cls.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_client = cls.client
@decorators.idempotent_id('b92d5ec7-b1dd-44a2-87e4-45e888c46ef0')
@testtools.skipUnless(CONF.compute_feature_enabled.
@@ -64,9 +70,9 @@
id2 = server['id']
self.addCleanup(self.delete_server, id2)
self.assertNotEqual(id1, id2, "Did not create a new server")
- server = self.client.show_server(id1)['server']
+ server = self.reader_client.show_server(id1)['server']
name1 = server['name']
- server = self.client.show_server(id2)['server']
+ server = self.reader_client.show_server(id2)['server']
name2 = server['name']
self.assertEqual(name1, name2)
@@ -80,7 +86,7 @@
server = self.create_test_server(key_name=key_name,
wait_until='ACTIVE')
self.addCleanup(self.delete_server, server['id'])
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_client.show_server(server['id'])['server']
self.assertEqual(key_name, server['key_name'])
def _update_server_name(self, server_id, status, prefix_name='server'):
@@ -93,7 +99,7 @@
waiters.wait_for_server_status(self.client, server_id, status)
# Verify the name of the server has changed
- server = self.client.show_server(server_id)['server']
+ server = self.reader_client.show_server(server_id)['server']
self.assertEqual(new_name, server['name'])
return server
@@ -128,7 +134,7 @@
waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
# Verify the access addresses have been updated
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_client.show_server(server['id'])['server']
self.assertEqual('1.1.1.1', server['accessIPv4'])
self.assertEqual('::babe:202:202', server['accessIPv6'])
@@ -138,7 +144,7 @@
server = self.create_test_server(accessIPv6='2001:2001::3',
wait_until='ACTIVE')
self.addCleanup(self.delete_server, server['id'])
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_client.show_server(server['id'])['server']
self.assertEqual('2001:2001::3', server['accessIPv6'])
@decorators.related_bug('1730756')
@@ -169,12 +175,22 @@
# also. 2.47 APIs schema are on top of 2.9->2.19->2.26 schema so
# below tests cover all of the schema.
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServerShowV247Test, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_client = cls.servers_client
+
@decorators.idempotent_id('88b0bdb2-494c-11e7-a919-92ebcb67fe33')
def test_show_server(self):
"""Test getting server detail"""
server = self.create_test_server()
# All fields will be checked by API schema
- self.servers_client.show_server(server['id'])
+ self.reader_client.show_server(server['id'])
@decorators.idempotent_id('8de397c2-57d0-4b90-aa30-e5d668f21a8b')
def test_update_rebuild_list_server(self):
@@ -198,6 +214,16 @@
min_microversion = '2.63'
max_microversion = 'latest'
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServerShowV263Test, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_client = cls.servers_client
+
@testtools.skipUnless(CONF.compute.certified_image_ref,
'``[compute]/certified_image_ref`` required to test '
'image certificate validation.')
@@ -214,7 +240,7 @@
wait_until='ACTIVE')
# Check show API response schema
- self.servers_client.show_server(server['id'])['server']
+ self.reader_client.show_server(server['id'])['server']
# Check update API response schema
self.servers_client.update_server(server['id'])
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index d590668..b723977 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -16,6 +16,7 @@
import io
import random
+import time
from oslo_log import log as logging
from tempest.api.image import base
@@ -27,6 +28,7 @@
CONF = config.CONF
LOG = logging.getLogger(__name__)
+BAD_REQUEST_RETRIES = 3
class ImportImagesTest(base.BaseV2ImageTest):
@@ -817,7 +819,7 @@
# Add a new location
new_loc = {'metadata': {'foo': 'bar'},
'url': CONF.image.http_image}
- self.client.update_image(image['id'], [
+ self._update_image_with_retries(image['id'], [
dict(add='/locations/-', value=new_loc)])
# The image should now be active, with one location that looks
@@ -843,13 +845,29 @@
def test_set_location(self):
self._check_set_location()
+ def _update_image_with_retries(self, image, patch):
+ # NOTE(danms): If glance was unable to fetch the remote image via
+ # HTTP, it will return BadRequest. Because this can be transient in
+ # CI, we try this a few times before we agree that it has failed
+ # for a reason worthy of failing the test.
+ for i in range(BAD_REQUEST_RETRIES):
+ try:
+ self.client.update_image(image, patch)
+ break
+ except lib_exc.BadRequest:
+ if i + 1 == BAD_REQUEST_RETRIES:
+ raise
+ else:
+ time.sleep(1)
+
def _check_set_multiple_locations(self):
image = self._check_set_location()
new_loc = {'metadata': {'speed': '88mph'},
'url': '%s#new' % CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries(image['id'],
+ [dict(add='/locations/-',
+ value=new_loc)])
# The image should now have two locations and the last one
# (locations are ordered) should have the new URL.
@@ -961,8 +979,9 @@
'os_hash_algo': 'sha512'},
'metadata': {},
'url': CONF.image.http_image}
- self.client.update_image(image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries(image['id'],
+ [dict(add='/locations/-',
+ value=new_loc)])
# Expect that all of our values ended up on the image
image = self.client.show_image(image['id'])
@@ -989,8 +1008,9 @@
'os_hash_algo': orig_image['os_hash_algo']},
'metadata': {},
'url': '%s#new' % CONF.image.http_image}
- self.client.update_image(orig_image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries(orig_image['id'],
+ [dict(add='/locations/-',
+ value=new_loc)])
# Setting the same exact values on a new location should work
image = self.client.show_image(orig_image['id'])
@@ -1024,17 +1044,17 @@
# This should always fail due to the mismatch
self.assertRaises(lib_exc.Conflict,
- self.client.update_image,
- orig_image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries,
+ orig_image['id'],
+ [dict(add='/locations/-', value=new_loc)])
# Now try to add a new location with all of the substitutions,
# which should also fail
new_loc['validation_data'] = values
self.assertRaises(lib_exc.Conflict,
- self.client.update_image,
- orig_image['id'], [
- dict(add='/locations/-', value=new_loc)])
+ self._update_image_with_retries,
+ orig_image['id'],
+ [dict(add='/locations/-', value=new_loc)])
# Make sure nothing has changed on our image after all the
# above failures
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 49f9e22..9ba9949 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -20,6 +20,7 @@
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib.decorators import cleanup_order
+from tempest.lib import exceptions as lib_exc
import tempest.test
CONF = config.CONF
@@ -126,12 +127,32 @@
volume = self.volumes_client.create_volume(**kwargs)['volume']
self.cleanup(test_utils.call_and_ignore_notfound_exc,
- self.delete_volume, self.volumes_client, volume['id'])
+ self._delete_volume_for_cleanup,
+ self.volumes_client, volume['id'])
if wait_until:
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], wait_until)
return volume
+ @staticmethod
+ def _delete_volume_for_cleanup(volumes_client, volume_id):
+ """Delete a volume (only) for cleanup.
+
+ If it is attached to a server, wait for it to become available,
+ assuming we have already deleted the server and just need nova to
+ complete the delete operation before it is available to be deleted.
+ Otherwise proceed to the regular delete_volume().
+ """
+ try:
+ vol = volumes_client.show_volume(volume_id)['volume']
+ if vol['status'] == 'in-use':
+ waiters.wait_for_volume_resource_status(volumes_client,
+ volume_id,
+ 'available')
+ except lib_exc.NotFound:
+ pass
+ BaseVolumeTest.delete_volume(volumes_client, volume_id)
+
@cleanup_order
def create_snapshot(self, volume_id=1, **kwargs):
"""Wrapper utility that returns a test snapshot."""
@@ -183,15 +204,17 @@
snapshots_client.delete_snapshot(snapshot_id)
snapshots_client.wait_for_resource_deletion(snapshot_id)
- def attach_volume(self, server_id, volume_id):
+ def attach_volume(self, server_id, volume_id, wait_for_detach=True):
"""Attach a volume to a server"""
self.servers_client.attach_volume(
server_id, volumeId=volume_id,
device='/dev/%s' % CONF.compute.volume_device_name)
waiters.wait_for_volume_resource_status(self.volumes_client,
volume_id, 'in-use')
- self.addCleanup(waiters.wait_for_volume_resource_status,
- self.volumes_client, volume_id, 'available')
+ if wait_for_detach:
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.volumes_client, volume_id, 'available',
+ server_id, self.servers_client)
self.addCleanup(self.servers_client.detach_volume, server_id,
volume_id)
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index b3a04f8..95521e7 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -44,12 +44,17 @@
@utils.services('compute')
def test_snapshot_create_delete_with_volume_in_use(self):
"""Test create/delete snapshot from volume attached to server"""
- # Create a test instance
- server = self.create_server(wait_until='SSHABLE')
# NOTE(zhufl) Here we create volume from self.image_ref for adding
# coverage for "creating snapshot from non-blank volume".
volume = self.create_volume(imageRef=self.image_ref)
- self.attach_volume(server['id'], volume['id'])
+
+ # Create a test instance
+ server = self.create_server(wait_until='SSHABLE')
+
+ # NOTE(danms): We are attaching this volume to a server, but we do
+ # not need to block on detach during cleanup because we will be
+ # deleting the server anyway.
+ self.attach_volume(server['id'], volume['id'], wait_for_detach=False)
# Snapshot a volume which attached to an instance with force=False
self.assertRaises(lib_exc.BadRequest, self.create_snapshot,
@@ -81,7 +86,11 @@
# Create a server and attach it
server = self.create_server(wait_until='SSHABLE')
- self.attach_volume(server['id'], self.volume_origin['id'])
+ # NOTE(danms): We are attaching this volume to a server, but we do
+ # not need to block on detach during cleanup because we will be
+ # deleting the server anyway.
+ self.attach_volume(server['id'], self.volume_origin['id'],
+ wait_for_detach=False)
# Now that the volume is attached, create other snapshots
snapshot2 = self.create_snapshot(self.volume_origin['id'], force=True)
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 45a7b8a..c5da412 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -303,12 +303,16 @@
raise lib_exc.TimeoutException(message)
-def wait_for_volume_resource_status(client, resource_id, status):
+def wait_for_volume_resource_status(client, resource_id, status,
+ server_id=None, servers_client=None):
"""Waits for a volume resource to reach a given status.
This function is a common function for volume, snapshot and backup
resources. The function extracts the name of the desired resource from
the client class name of the resource.
+
+ If server_id and servers_client are provided, dump the console for that
+ server on failure.
"""
resource_name = re.findall(
r'(volume|group-snapshot|snapshot|backup|group)',
@@ -330,6 +334,11 @@
raise exceptions.VolumeExtendErrorException(volume_id=resource_id)
if int(time.time()) - start >= client.build_timeout:
+ if server_id and servers_client:
+ console_output = servers_client.get_console_output(
+ server_id)['output']
+ LOG.debug('Console output for %s\nbody=\n%s',
+ server_id, console_output)
message = ('%s %s failed to reach %s status (current %s) '
'within the required time (%s s).' %
(resource_name, resource_id, status, resource_status,
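The waiter pattern being extended here, poll until a target status or timeout and dump extra diagnostics (such as a server's console log) on failure, can be sketched generically; names and intervals are illustrative:

```python
import time

# Generic sketch of the waiter pattern: poll for a target status and, on
# timeout, call an optional diagnostics hook (e.g. dumping a server's
# console log) before raising.
def wait_for_status(get_status, target, timeout=5.0, interval=0.05,
                    dump_diagnostics=None):
    start = time.time()
    while True:
        status = get_status()
        if status == target:
            return status
        if time.time() - start >= timeout:
            if dump_diagnostics is not None:
                dump_diagnostics()
            raise TimeoutError('failed to reach %s (current %s) within %ss'
                               % (target, status, timeout))
        time.sleep(interval)


statuses = iter(['creating', 'creating', 'available'])
print(wait_for_status(lambda: next(statuses), 'available'))  # available
```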
diff --git a/tempest/config.py b/tempest/config.py
index 00b394e..dfc0a8e 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -975,12 +975,12 @@
default='ecdsa',
help='Type of key to use for ssh connections. '
'Valid types are rsa, ecdsa'),
- cfg.IntOpt('allowed_network_downtime',
- default=5.0,
- help="Allowed VM network connection downtime during live "
- "migration, in seconds. "
- "When the measured downtime exceeds this value, an "
- "exception is raised."),
+ cfg.FloatOpt('allowed_network_downtime',
+ default=5.0,
+ help="Allowed VM network connection downtime during live "
+ "migration, in seconds. "
+ "When the measured downtime exceeds this value, an "
+ "exception is raised."),
]
volume_group = cfg.OptGroup(name='volume',
diff --git a/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py b/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
index 8aed37d..b36c9d6 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/parameter_types.py
@@ -30,7 +30,7 @@
mac_address = {
'type': 'string',
- 'pattern': '(?:[a-f0-9]{2}:){5}[a-f0-9]{2}'
+ 'pattern': '(?:[a-fA-F0-9]{2}:){5}[a-fA-F0-9]{2}'
}
ip_address = {
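The widened MAC pattern now matches upper-case hex digits as well; a quick stdlib check (JSON-schema ``pattern`` matching is unanchored, so ``fullmatch`` here is just for illustration):

```python
import re

# The updated pattern from parameter_types.py: each octet now accepts
# upper- or lower-case hex digits.
mac_pattern = '(?:[a-fA-F0-9]{2}:){5}[a-fA-F0-9]{2}'

print(bool(re.fullmatch(mac_pattern, 'fa:16:3e:23:ab:cd')))  # True
print(bool(re.fullmatch(mac_pattern, 'FA:16:3E:23:AB:CD')))  # True
print(bool(re.fullmatch(mac_pattern, 'zz:16:3e:23:ab:cd')))  # False
```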
diff --git a/tempest/lib/cli/base.py b/tempest/lib/cli/base.py
index c661d21..c9cffd2 100644
--- a/tempest/lib/cli/base.py
+++ b/tempest/lib/cli/base.py
@@ -97,6 +97,10 @@
:type identity_api_version: string
"""
+ CLIENTS_WITHOUT_IDENTITY_VERSION = ['nova', 'nova_manage', 'keystone',
+ 'glance', 'ceilometer', 'heat',
+ 'cinder', 'neutron', 'sahara']
+
def __init__(self, username='', password='', tenant_name='', uri='',
cli_dir='', insecure=False, prefix='', user_domain_name=None,
user_domain_id=None, project_domain_name=None,
@@ -377,8 +381,9 @@
self.password,
self.uri))
if self.identity_api_version:
- creds += ' --os-identity-api-version %s' % (
- self.identity_api_version)
+ if cmd not in self.CLIENTS_WITHOUT_IDENTITY_VERSION:
+ creds += ' --os-identity-api-version %s' % (
+ self.identity_api_version)
if self.user_domain_name is not None:
creds += ' --os-user-domain-name %s' % self.user_domain_name
if self.user_domain_id is not None:
diff --git a/tempest/lib/common/cred_client.py b/tempest/lib/common/cred_client.py
index f13d6d0..69798a4 100644
--- a/tempest/lib/common/cred_client.py
+++ b/tempest/lib/common/cred_client.py
@@ -58,6 +58,10 @@
def create_project(self, name, description):
pass
+ @abc.abstractmethod
+ def show_project(self, project_id):
+ pass
+
def _check_role_exists(self, role_name):
try:
roles = self._list_roles()
@@ -118,6 +122,9 @@
name=name, description=description)['tenant']
return tenant
+ def show_project(self, project_id):
+ return self.projects_client.show_tenant(project_id)['tenant']
+
def delete_project(self, project_id):
self.projects_client.delete_tenant(project_id)
@@ -159,6 +166,9 @@
domain_id=self.creds_domain['id'])['project']
return project
+ def show_project(self, project_id):
+ return self.projects_client.show_project(project_id)['project']
+
def delete_project(self, project_id):
self.projects_client.delete_project(project_id)
diff --git a/tempest/lib/common/dynamic_creds.py b/tempest/lib/common/dynamic_creds.py
index d687eb5..99647d4 100644
--- a/tempest/lib/common/dynamic_creds.py
+++ b/tempest/lib/common/dynamic_creds.py
@@ -163,7 +163,8 @@
os.network.PortsClient(),
os.network.SecurityGroupsClient())
- def _create_creds(self, admin=False, roles=None, scope='project'):
+ def _create_creds(self, admin=False, roles=None, scope='project',
+ project_id=None):
"""Create credentials with random name.
Creates user and role assignments on a project, domain, or system. When
@@ -177,6 +178,8 @@
:type roles: list
:param str scope: The scope for the role assignment, may be one of
'project', 'domain', or 'system'.
+ :param str project_id: The ID of an already created project. When
+ provided, the credentials are created under that existing project.
:return: Readonly Credentials with network resources
:raises: Exception if scope is invalid
"""
@@ -190,12 +193,20 @@
'system': None
}
if scope == 'project':
- project_name = data_utils.rand_name(
- root, prefix=self.resource_prefix)
- project_desc = project_name + '-desc'
- project = self.creds_client.create_project(
- name=project_name, description=project_desc)
-
+ if not project_id:
+ project_name = data_utils.rand_name(
+ root, prefix=self.resource_prefix)
+ project_desc = project_name + '-desc'
+ project = self.creds_client.create_project(
+ name=project_name, description=project_desc)
+ else:
+ # NOTE(gmann) This is the case where creds are requested
+ # within the same project as existing creds. We should not
+ # create a new project in this case.
+ project = self.creds_client.show_project(project_id)
+ project_name = project['name']
+ LOG.info("Using the existing project %s for scope %s and "
+ "roles: %s", project['id'], scope, roles)
# NOTE(andreaf) User and project can be distinguished from the
# context, having the same ID in both makes it easier to match them
# and debug.
@@ -372,48 +383,78 @@
self.routers_admin_client.add_router_interface(router_id,
subnet_id=subnet_id)
- def get_credentials(self, credential_type, scope=None):
- if not scope and self._creds.get(str(credential_type)):
- credentials = self._creds[str(credential_type)]
- elif scope and (
- self._creds.get("%s_%s" % (scope, str(credential_type)))):
- credentials = self._creds["%s_%s" % (scope, str(credential_type))]
+ def _get_project_id(self, credential_type, scope):
+ same_creds = [['admin'], ['member'], ['reader']]
+ same_alt_creds = [['alt_admin'], ['alt_member'], ['alt_reader']]
+ search_in = []
+ if credential_type in same_creds:
+ search_in = same_creds
+ elif credential_type in same_alt_creds:
+ search_in = same_alt_creds
+ for cred in search_in:
+ found_cred = self._creds.get("%s_%s" % (scope, str(cred)))
+ if found_cred:
+ project_id = found_cred.get("%s_%s" % (scope, 'id'))
+ LOG.debug("Reusing existing project %s from creds: %s ",
+ project_id, found_cred)
+ return project_id
+ return None
+
+ def get_credentials(self, credential_type, scope=None, by_role=False):
+ cred_prefix = ''
+ if by_role:
+ cred_prefix = 'role_'
+ if not scope and self._creds.get(
+ "%s%s" % (cred_prefix, str(credential_type))):
+ credentials = self._creds[
+ "%s%s" % (cred_prefix, str(credential_type))]
+ elif scope and (self._creds.get(
+ "%s%s_%s" % (cred_prefix, scope, str(credential_type)))):
+ credentials = self._creds[
+ "%s%s_%s" % (cred_prefix, scope, str(credential_type))]
else:
LOG.debug("Creating new dynamic creds for scope: %s and "
"credential_type: %s", scope, credential_type)
+ project_id = None
if scope:
- if credential_type in [['admin'], ['alt_admin']]:
+ if scope == 'project':
+ project_id = self._get_project_id(
+ credential_type, 'project')
+ if by_role:
credentials = self._create_creds(
- admin=True, scope=scope)
+ roles=credential_type, scope=scope)
+ elif credential_type in [['admin'], ['alt_admin']]:
+ credentials = self._create_creds(
+ admin=True, scope=scope, project_id=project_id)
elif credential_type in [['alt_member'], ['alt_reader']]:
cred_type = credential_type[0][4:]
if isinstance(cred_type, str):
cred_type = [cred_type]
credentials = self._create_creds(
- roles=cred_type, scope=scope)
- else:
+ roles=cred_type, scope=scope, project_id=project_id)
+ elif credential_type in [['member'], ['reader']]:
credentials = self._create_creds(
- roles=credential_type, scope=scope)
+ roles=credential_type, scope=scope,
+ project_id=project_id)
elif credential_type in ['primary', 'alt', 'admin']:
is_admin = (credential_type == 'admin')
credentials = self._create_creds(admin=is_admin)
else:
credentials = self._create_creds(roles=credential_type)
if scope:
- self._creds["%s_%s" %
- (scope, str(credential_type))] = credentials
+ self._creds["%s%s_%s" % (
+ cred_prefix, scope, str(credential_type))] = credentials
else:
- self._creds[str(credential_type)] = credentials
+ self._creds[
+ "%s%s" % (cred_prefix, str(credential_type))] = credentials
# Maintained until tests are ported
LOG.info("Acquired dynamic creds:\n"
" credentials: %s", credentials)
# NOTE(gmann): For 'domain' and 'system' scoped token, there is no
# project_id so we are skipping the network creation for both
- # scope. How these scoped token can create the network, Nova
- # server or other project mapped resources is one of the open
- # question and discussed a lot in Xena cycle PTG. Once we sort
- # out that then if needed we can update the network creation here.
- if (not scope or scope == 'project'):
+ # scope.
+ # We need to create network resources only once per project.
+ if (not project_id and (not scope or scope == 'project')):
if (self.neutron_available and self.create_networks):
network, subnet, router = self._create_network_resources(
credentials.tenant_id)
@@ -422,24 +463,22 @@
LOG.info("Created isolated network resources for:\n"
" credentials: %s", credentials)
else:
- LOG.info("Network resources are not created for scope: %s",
- scope)
+ LOG.info("Network resources are not created for requested "
+ "scope: %s and credentials: %s", scope, credentials)
return credentials
# TODO(gmann): Remove this method in favor of get_project_member_creds()
# after the deprecation phase.
def get_primary_creds(self):
- return self.get_credentials('primary')
+ return self.get_project_member_creds()
- # TODO(gmann): Remove this method in favor of get_project_admin_creds()
- # after the deprecation phase.
def get_admin_creds(self):
return self.get_credentials('admin')
- # TODO(gmann): Replace this method with more appropriate name.
- # like get_project_alt_member_creds()
+ # TODO(gmann): Remove this method in favor of
+ # get_project_alt_member_creds() after the deprecation phase.
def get_alt_creds(self):
- return self.get_credentials('alt')
+ return self.get_project_alt_member_creds()
def get_system_admin_creds(self):
return self.get_credentials(['admin'], scope='system')
@@ -481,9 +520,9 @@
roles = list(set(roles))
# The roles list as a str will become the index as the dict key for
# the created credentials set in the dynamic_creds dict.
- creds_name = str(roles)
+ creds_name = "role_%s" % str(roles)
if scope:
- creds_name = "%s_%s" % (scope, str(roles))
+ creds_name = "role_%s_%s" % (scope, str(roles))
exist_creds = self._creds.get(creds_name)
# If force_new flag is True 2 cred sets with the same roles are needed
# handle this by creating a separate index for old one to store it
@@ -492,7 +531,7 @@
new_index = creds_name + '-' + str(len(self._creds))
self._creds[new_index] = exist_creds
del self._creds[creds_name]
- return self.get_credentials(roles, scope=scope)
+ return self.get_credentials(roles, scope=scope, by_role=True)
def _clear_isolated_router(self, router_id, router_name):
client = self.routers_admin_client
@@ -553,31 +592,20 @@
if not self._creds:
return
self._clear_isolated_net_resources()
+ project_ids = set()
for creds in self._creds.values():
+ # NOTE(gmann): With the new RBAC personas, a single project can
+ # have multiple users created under it; to avoid conflicts, clean
+ # up the projects at the end.
+ # Add the project only if its id is not None, which skips domain
+ # and system creds.
+ if creds.project_id:
+ project_ids.add(creds.project_id)
try:
self.creds_client.delete_user(creds.user_id)
except lib_exc.NotFound:
LOG.warning("user with name: %s not found for delete",
creds.username)
- if creds.tenant_id:
- # NOTE(zhufl): Only when neutron's security_group ext is
- # enabled, cleanup_default_secgroup will not raise error. But
- # here cannot use test_utils.is_extension_enabled for it will
- # cause "circular dependency". So here just use try...except to
- # ensure tenant deletion without big changes.
- try:
- if self.neutron_available:
- self.cleanup_default_secgroup(
- self.security_groups_admin_client, creds.tenant_id)
- except lib_exc.NotFound:
- LOG.warning("failed to cleanup tenant %s's secgroup",
- creds.tenant_name)
- try:
- self.creds_client.delete_project(creds.tenant_id)
- except lib_exc.NotFound:
- LOG.warning("tenant with name: %s not found for delete",
- creds.tenant_name)
-
# if cred is domain scoped, delete ephemeral domain
# do not delete default domain
if (hasattr(creds, 'domain_id') and
@@ -587,6 +615,28 @@
except lib_exc.NotFound:
LOG.warning("domain with name: %s not found for delete",
creds.domain_name)
+ for project_id in project_ids:
+ # NOTE(zhufl): Only when neutron's security_group ext is
+ # enabled, cleanup_default_secgroup will not raise error. But
+ # here cannot use test_utils.is_extension_enabled for it will
+ # cause "circular dependency". So here just use try...except to
+ # ensure tenant deletion without big changes.
+ LOG.info("Deleting project and security group for project: %s",
+ project_id)
+
+ try:
+ if self.neutron_available:
+ self.cleanup_default_secgroup(
+ self.security_groups_admin_client, project_id)
+ except lib_exc.NotFound:
+ LOG.warning("failed to cleanup tenant %s's secgroup",
+ project_id)
+ try:
+ self.creds_client.delete_project(project_id)
+ except lib_exc.NotFound:
+ LOG.warning("tenant with id: %s not found for delete",
+ project_id)
+
self._creds = {}
def is_multi_user(self):
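The reuse logic above boils down to a keyed credential cache in which sibling personas share one project: the first persona in a group creates the project, later siblings look up and reuse its id. A minimal sketch with hypothetical names:

```python
class ProjectReuseCache:
    """Sketch of the persona/project reuse pattern (illustrative only)."""

    SIBLINGS = ('admin', 'member', 'reader')

    def __init__(self):
        self._creds = {}
        self._next_id = 0

    def _find_project_id(self, scope):
        # Reuse the project of any sibling persona already created
        for role in self.SIBLINGS:
            found = self._creds.get('%s_%s' % (scope, role))
            if found:
                return found['project_id']
        return None

    def get(self, role, scope='project'):
        key = '%s_%s' % (scope, role)
        if key in self._creds:
            return self._creds[key]
        project_id = self._find_project_id(scope)
        if project_id is None:
            # First persona of the group: "create" a new project
            project_id = 'project-%d' % self._next_id
            self._next_id += 1
        creds = {'role': role, 'project_id': project_id}
        self._creds[key] = creds
        return creds
```

Cleanup then only needs the set of distinct `project_id` values, which is exactly why the diff defers project deletion until after all users are removed.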
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index 33f871b..d163968 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -60,7 +60,12 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingProxyHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
- return Response(r), r.data
+ if not kwargs.get('preload_content', True):
+ # This means we asked urllib3 for streaming content, so we
+ # need to return the raw response and not read any data yet
+ return r, b''
+ else:
+ return Response(r), r.data
class ClosingHttp(urllib3.poolmanager.PoolManager):
@@ -109,4 +114,9 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
- return Response(r), r.data
+ if not kwargs.get('preload_content', True):
+ # This means we asked urllib3 for streaming content, so we
+ # need to return the raw response and not read any data yet
+ return r, b''
+ else:
+ return Response(r), r.data
diff --git a/tempest/lib/common/jsonschema_validator.py b/tempest/lib/common/jsonschema_validator.py
index 1618175..5212221 100644
--- a/tempest/lib/common/jsonschema_validator.py
+++ b/tempest/lib/common/jsonschema_validator.py
@@ -18,7 +18,7 @@
# JSON Schema validator and format checker used for JSON Schema validation
JSONSCHEMA_VALIDATOR = jsonschema.Draft4Validator
-FORMAT_CHECKER = jsonschema.draft4_format_checker
+FORMAT_CHECKER = jsonschema.Draft4Validator.FORMAT_CHECKER
# NOTE(gmann): Add customized format checker for 'date-time' format because:
@@ -39,7 +39,7 @@
return True
-@jsonschema.FormatChecker.cls_checks('base64')
+@FORMAT_CHECKER.checks('base64')
def _validate_base64_format(instance):
try:
if isinstance(instance, str):
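For reference, the `base64` format check registered above can be exercised standalone. This sketch uses strict decoding (`validate=True`), which may be stricter than the original checker:

```python
import base64
import binascii


def validate_base64_format(instance):
    """Return True if instance is valid base64 (str or bytes)."""
    try:
        if isinstance(instance, str):
            instance = instance.encode('utf-8')
        base64.b64decode(instance, validate=True)
    except (TypeError, binascii.Error):
        return False
    return True
```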
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index a11b7c1..6cf5b73 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -19,6 +19,7 @@
import re
import time
import urllib
+import urllib3
import jsonschema
from oslo_log import log as logging
@@ -298,7 +299,7 @@
"""
return self.request('POST', url, extra_headers, headers, body, chunked)
- def get(self, url, headers=None, extra_headers=False):
+ def get(self, url, headers=None, extra_headers=False, chunked=False):
"""Send a HTTP GET request using keystone service catalog and auth
:param str url: the relative url to send the get request to
@@ -307,11 +308,19 @@
returned by the get_headers() method are to
be used but additional headers are needed in
the request pass them in as a dict.
+ :param bool chunked: Boolean value that indicates if we should stream
+ the response instead of reading it all at once.
+ If True, data will be empty and the raw urllib3
+ response object will be returned.
+ NB: If you pass True here, you **MUST** call
+ release_conn() on the response object before
+ finishing!
:return: a tuple with the first entry containing the response headers
and the second the response body
:rtype: tuple
"""
- return self.request('GET', url, extra_headers, headers)
+ return self.request('GET', url, extra_headers, headers,
+ chunked=chunked)
def delete(self, url, headers=None, body=None, extra_headers=False):
"""Send a HTTP DELETE request using keystone service catalog and auth
@@ -480,7 +489,7 @@
self.LOG.info(
'Request (%s): %s %s %s%s',
caller_name,
- resp['status'],
+ resp.status,
method,
req_url,
secs,
@@ -617,17 +626,30 @@
"""
if headers is None:
headers = self.get_headers()
+ # In urllib3, chunked only affects the upload. However, we may
+ # want to read large responses to GET incrementally. Re-purpose
+ # chunked=True on a GET to also control how we handle the response.
+ preload = not (method.lower() == 'get' and chunked)
+ if not preload:
+ # NOTE(danms): Not specifically necessary, but don't send
+ # chunked=True to urllib3 on a GET, since it is technically
+ # for PUT/POST type operations
+ chunked = False
# Do the actual request, and time it
start = time.time()
self._log_request_start(method, url)
resp, resp_body = self.http_obj.request(
url, method, headers=headers,
- body=body, chunked=chunked)
+ body=body, chunked=chunked, preload_content=preload)
end = time.time()
req_body = body if log_req_body is None else log_req_body
- self._log_request(method, url, resp, secs=(end - start),
- req_headers=headers, req_body=req_body,
- resp_body=resp_body)
+ if preload:
+ # NOTE(danms): If we are reading the whole response, we can do
+ # this logging. If not, skip the logging because it will result
+ # in us reading the response data prematurely.
+ self._log_request(method, url, resp, secs=(end - start),
+ req_headers=headers, req_body=req_body,
+ resp_body=resp_body)
return resp, resp_body
def request(self, method, url, extra_headers=False, headers=None,
@@ -773,6 +795,10 @@
# resp this could possibly fail
if str(type(resp)) == "<type 'instance'>":
ctype = resp.getheader('content-type')
+ elif isinstance(resp, urllib3.HTTPResponse):
+ # If we requested chunked=True streaming, this will be a raw
+ # urllib3.HTTPResponse
+ ctype = resp.getheaders()['content-type']
else:
try:
ctype = resp['content-type']
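The GET/streaming dispatch above reduces to a small decision; `resolve_request_opts` is a hypothetical helper that mirrors the diff's logic:

```python
def resolve_request_opts(method, chunked):
    """Decide how chunked interacts with response preloading.

    urllib3's chunked flag controls chunked *uploads*; the change
    above re-purposes chunked=True on a GET to mean 'stream the
    response' (preload_content=False) instead.
    """
    preload = not (method.lower() == 'get' and chunked)
    if not preload:
        # Don't send an upload-style chunked flag on a GET
        chunked = False
    return chunked, preload
```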
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index cb59a82..aad04b8 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -53,7 +53,8 @@
def __init__(self, host, username, password=None, timeout=300, pkey=None,
channel_timeout=10, look_for_keys=False, key_filename=None,
- port=22, proxy_client=None, ssh_key_type='rsa'):
+ port=22, proxy_client=None, ssh_key_type='rsa',
+ ssh_allow_agent=True):
"""SSH client.
Many of parameters are just passed to the underlying implementation
@@ -76,6 +77,9 @@
for ssh-over-ssh. The default is None, which means
not to use ssh-over-ssh.
:param ssh_key_type: ssh key type (rsa, ecdsa)
+ :param ssh_allow_agent: boolean, default True, if the SSH client is
+ allowed to also utilize the ssh-agent. Explicit use of passwords
+ in some tests may need this set as False.
:type proxy_client: ``tempest.lib.common.ssh.Client`` object
"""
self.host = host
@@ -105,6 +109,7 @@
raise exceptions.SSHClientProxyClientLoop(
host=self.host, port=self.port, username=self.username)
self._proxy_conn = None
+ self.ssh_allow_agent = ssh_allow_agent
def _get_ssh_connection(self, sleep=1.5, backoff=1):
"""Returns an ssh connection to the specified host."""
@@ -133,7 +138,7 @@
look_for_keys=self.look_for_keys,
key_filename=self.key_filename,
timeout=self.channel_timeout, pkey=self.pkey,
- sock=proxy_chan)
+ sock=proxy_chan, allow_agent=self.ssh_allow_agent)
LOG.info("ssh connection to %s@%s successfully created",
self.username, self.host)
return ssh
diff --git a/tempest/lib/common/utils/linux/remote_client.py b/tempest/lib/common/utils/linux/remote_client.py
index d0cdc25..662b452 100644
--- a/tempest/lib/common/utils/linux/remote_client.py
+++ b/tempest/lib/common/utils/linux/remote_client.py
@@ -69,7 +69,8 @@
server=None, servers_client=None, ssh_timeout=300,
connect_timeout=60, console_output_enabled=True,
ssh_shell_prologue="set -eu -o pipefail; PATH=$PATH:/sbin;",
- ping_count=1, ping_size=56, ssh_key_type='rsa'):
+ ping_count=1, ping_size=56, ssh_key_type='rsa',
+ ssh_allow_agent=True):
"""Executes commands in a VM over ssh
:param ip_address: IP address to ssh to
@@ -85,6 +86,8 @@
:param ping_count: Number of ping packets
:param ping_size: Packet size for ping packets
:param ssh_key_type: ssh key type (rsa, ecdsa)
+ :param ssh_allow_agent: Boolean if ssh agent support is permitted.
+ Defaults to True.
"""
self.server = server
self.servers_client = servers_client
@@ -94,11 +97,14 @@
self.ping_count = ping_count
self.ping_size = ping_size
self.ssh_key_type = ssh_key_type
+ self.ssh_allow_agent = ssh_allow_agent
self.ssh_client = ssh.Client(ip_address, username, password,
ssh_timeout, pkey=pkey,
channel_timeout=connect_timeout,
- ssh_key_type=ssh_key_type)
+ ssh_key_type=ssh_key_type,
+ ssh_allow_agent=ssh_allow_agent,
+ )
@debug_ssh
def exec_command(self, cmd):
diff --git a/tempest/lib/services/image/v2/images_client.py b/tempest/lib/services/image/v2/images_client.py
index ae6ce25..8460b57 100644
--- a/tempest/lib/services/image/v2/images_client.py
+++ b/tempest/lib/services/image/v2/images_client.py
@@ -248,17 +248,26 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp)
- def show_image_file(self, image_id):
+ def show_image_file(self, image_id, chunked=False):
"""Download binary image data.
+ :param bool chunked: If True, do not read the body and return only
+ the raw urllib3 response object for processing.
+ NB: If you pass True here, you **MUST** call
+ release_conn() on the response object before
+ finishing!
+
For a full list of available parameters, please refer to the official
API reference:
https://docs.openstack.org/api-ref/image/v2/#download-binary-image-data
"""
url = 'images/%s/file' % image_id
- resp, body = self.get(url)
+ resp, body = self.get(url, chunked=chunked)
self.expected_success([200, 204, 206], resp.status)
- return rest_client.ResponseBodyData(resp, body)
+ if chunked:
+ return resp
+ else:
+ return rest_client.ResponseBodyData(resp, body)
def add_image_tag(self, image_id, tag):
"""Add an image tag.
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 90e1bc5..5513f4d 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -86,6 +86,7 @@
'%s' % (secgroup['id'], server['id']))
raise exceptions.TimeoutException(msg)
+ @decorators.attr(type='slow')
@decorators.idempotent_id('bdbb5441-9204-419d-a225-b4fdbfb1a1a8')
@utils.services('compute', 'volume', 'image', 'network')
def test_minimum_basic_scenario(self):
@@ -159,6 +160,7 @@
self.servers_client, server, floating_ip,
wait_for_disassociate=True)
+ @decorators.attr(type='slow')
@decorators.idempotent_id('a8fd48ec-1d01-4895-b932-02321661ec1e')
@testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
"Cinder volume snapshots are disabled")
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index e630e29..e6c6eb6 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -218,7 +218,7 @@
@testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_cold_migration(self):
keypair = self.create_keypair()
@@ -244,7 +244,7 @@
@testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_live_migration(self):
keypair = self.create_keypair()
@@ -275,7 +275,7 @@
LOG.debug("Downtime seconds measured with downtime_meter = %r",
downtime)
allowed_downtime = CONF.validation.allowed_network_downtime
- self.assertLess(
+ self.assertLessEqual(
downtime, allowed_downtime,
"Downtime of {} seconds is higher than expected '{}'".format(
downtime, allowed_downtime))
@@ -289,7 +289,7 @@
@testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
'Less than 2 compute nodes, skipping multinode '
'tests.')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_cold_migration_revert(self):
keypair = self.create_keypair()
diff --git a/tempest/scenario/test_network_qos_placement.py b/tempest/scenario/test_network_qos_placement.py
index 365eb1b..0b2cfcb 100644
--- a/tempest/scenario/test_network_qos_placement.py
+++ b/tempest/scenario/test_network_qos_placement.py
@@ -278,6 +278,7 @@
port = self.os_admin.ports_client.show_port(not_valid_port['id'])
self.assertEqual(0, len(port['port']['binding:profile']))
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('8a98150c-a506-49a5-96c6-73a5e7b04ada')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
'Cold migration is not available.')
@@ -851,6 +852,7 @@
self.assert_allocations(server, port, min_kbps, min_kpps)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('bdd0b31c-c8b0-4b7b-b80a-545a46b32abe')
@testtools.skipUnless(
CONF.compute_feature_enabled.cold_migration,
@@ -1033,6 +1035,7 @@
self.assert_allocations(server, port2, 0, 0)
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('36ffdb85-6cc2-4cc9-a426-cad5bac8626b')
@testtools.skipUnless(
CONF.compute.min_compute_nodes > 1,
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index aff7509..2fc5f32 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -480,6 +480,7 @@
direction='ingress')
return ruleset
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('e79f879e-debb-440c-a7e4-efeda05b6848')
@utils.services('compute', 'network')
def test_cross_tenant_traffic(self):
@@ -510,6 +511,7 @@
self._log_console_output_for_all_tenants()
raise
+ @decorators.attr(type='multinode')
@decorators.idempotent_id('63163892-bbf6-4249-aa12-d5ea1f8f421b')
@utils.services('compute', 'network')
def test_in_tenant_traffic(self):
@@ -524,7 +526,7 @@
raise
@decorators.idempotent_id('f4d556d7-1526-42ad-bafb-6bebf48568f6')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_port_update_new_security_group(self):
"""Verifies the traffic after updating the vm port
@@ -578,7 +580,7 @@
raise
@decorators.idempotent_id('d2f77418-fcc4-439d-b935-72eca704e293')
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_multiple_security_groups(self):
"""Verify multiple security groups and checks that rules
@@ -610,7 +612,7 @@
private_key=private_key,
should_connect=True)
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.requires_ext(service='network', extension='port-security')
@decorators.idempotent_id('7c811dcc-263b-49a3-92d2-1b4d8405f50c')
@utils.services('compute', 'network')
@@ -650,7 +652,7 @@
self._log_console_output_for_all_tenants()
raise
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@utils.requires_ext(service='network', extension='port-security')
@decorators.idempotent_id('13ccf253-e5ad-424b-9c4a-97b88a026699')
# TODO(mriedem): We shouldn't actually need to check this since neutron
diff --git a/tempest/scenario/test_server_multinode.py b/tempest/scenario/test_server_multinode.py
index fdf875c..9285da2 100644
--- a/tempest/scenario/test_server_multinode.py
+++ b/tempest/scenario/test_server_multinode.py
@@ -35,7 +35,7 @@
"Less than 2 compute nodes, skipping multinode tests.")
@decorators.idempotent_id('9cecbe35-b9d4-48da-a37e-7ce70aa43d30')
- @decorators.attr(type='smoke')
+ @decorators.attr(type=['smoke', 'multinode'])
@utils.services('compute', 'network')
def test_schedule_to_all_nodes(self):
available_zone = \
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index 29612ec..204471e 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -119,7 +119,7 @@
def test_shelve_volume_backed_instance(self):
self._create_server_then_shelve_and_unshelve(boot_from_volume=True)
- @decorators.attr(type='slow')
+ @decorators.attr(type=['slow', 'multinode'])
@decorators.idempotent_id('1295fd9e-193a-4cf8-b211-55358e021bae')
@testtools.skipUnless(CONF.network.public_network_id,
'The public_network_id option must be specified.')
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index 2695048..93c949e 100755
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -386,6 +386,29 @@
mock_sleep.assert_called_once_with(1)
@mock.patch.object(time, 'sleep')
+ def test_wait_for_volume_status_timeout_console(self, mock_sleep):
+ # Tests that the wait method gets the server console log if the
+ # timeout is hit.
+ client = mock.Mock(spec=volumes_client.VolumesClient,
+ resource_type="volume",
+ build_interval=1,
+ build_timeout=1)
+ servers_client = mock.Mock()
+ servers_client.get_console_output.return_value = {
+ 'output': 'console log'}
+ volume = {'volume': {'status': 'detaching'}}
+ mock_show = mock.Mock(return_value=volume)
+ client.show_volume = mock_show
+ volume_id = '7532b91e-aa0a-4e06-b3e5-20c0c5ee1caa'
+ self.assertRaises(lib_exc.TimeoutException,
+ waiters.wait_for_volume_resource_status,
+ client, volume_id, 'available',
+ server_id='someserver',
+ servers_client=servers_client)
+ servers_client.get_console_output.assert_called_once_with(
+ 'someserver')
+
+ @mock.patch.object(time, 'sleep')
def test_wait_for_volume_status_error_extending(self, mock_sleep):
# Tests that the wait method raises VolumeExtendErrorException if
# the volume status is 'error_extending'.
diff --git a/tempest/tests/lib/common/test_dynamic_creds.py b/tempest/tests/lib/common/test_dynamic_creds.py
index b4b1b91..d3d01c0 100644
--- a/tempest/tests/lib/common/test_dynamic_creds.py
+++ b/tempest/tests/lib/common/test_dynamic_creds.py
@@ -60,6 +60,7 @@
fake_response = fake_identity._fake_v2_response
tenants_client_class = tenants_client.TenantsClient
delete_tenant = 'delete_tenant'
+ create_tenant = 'create_tenant'
def setUp(self):
super(TestDynamicCredentialProvider, self).setUp()
@@ -140,7 +141,9 @@
return_value=(rest_client.ResponseBody
(200, {'roles': [
{'id': '1', 'name': 'FakeRole'},
- {'id': '2', 'name': 'member'}]}))))
+ {'id': '2', 'name': 'member'},
+ {'id': '3', 'name': 'reader'},
+ {'id': '4', 'name': 'admin'}]}))))
return roles_fix
def _mock_list_ec2_credentials(self, user_id, tenant_id):
@@ -191,6 +194,205 @@
self.assertEqual(primary_creds.tenant_id, '1234')
self.assertEqual(primary_creds.user_id, '1234')
+ def _request_and_check_second_creds(
+ self, creds_obj, func, creds_to_compare,
+ show_mock, sm_count=1, sm_count_in_diff_project=0,
+ same_project_request=True, **func_kwargs):
+ self._mock_user_create('111', 'fake_user')
+ with mock.patch.object(creds_obj.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '22', 'name': 'fake_project'}
+ new_creds = func(**func_kwargs)
+ if same_project_request:
+ # Check that for the second creds request, create_project is not
+ # called and show_project is called. This means no new project is
+ # created for the second requested creds; instead, a new user is
+ # created under the existing project.
+ self.assertEqual(len(create_mock.mock_calls), 0)
+ self.assertEqual(len(show_mock.mock_calls), sm_count)
+ # Verify project name and id is same as creds_to_compare
+ self.assertEqual(creds_to_compare.tenant_name,
+ new_creds.tenant_name)
+ self.assertEqual(creds_to_compare.tenant_id,
+ new_creds.tenant_id)
+ else:
+ # Check that for a different-project creds request, create_project
+ # is called and show_project is not. This means a new project is
+ # created for this creds request.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls),
+ sm_count_in_diff_project)
+ # Verify project name and id is not same as creds_to_compare
+ self.assertNotEqual(creds_to_compare.tenant_name,
+ new_creds.tenant_name)
+ self.assertNotEqual(creds_to_compare.tenant_id,
+ new_creds.tenant_id)
+ self.assertEqual(new_creds.tenant_name, 'fake_project')
+ self.assertEqual(new_creds.tenant_id, '22')
+ # Verify new user name and id
+ self.assertEqual(new_creds.username, 'fake_user')
+ self.assertEqual(new_creds.user_id, '111')
+ return new_creds
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def _creds_within_same_project(self, MockRestClient, test_alt_creds=False):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ if test_alt_creds:
+ admin_func = creds.get_project_alt_admin_creds
+ member_func = creds.get_project_alt_member_creds
+ reader_func = creds.get_project_alt_reader_creds
+ else:
+ admin_func = creds.get_project_admin_creds
+ member_func = creds.get_project_member_creds
+ reader_func = creds.get_project_reader_creds
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = member_func()
+ # Check that for the first creds request, create_project is called
+ # and show_project is not. This means a new project is created for
+ # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+
+ # Now request the project reader creds, which should not create a
+ # new project but instead reuse the project_id of the project
+ # already created for member_creds.
+ self._request_and_check_second_creds(
+ creds, reader_func, member_creds, show_mock)
+
+ # Now request the project admin creds, which should not create a
+ # new project but instead reuse the project_id of the project
+ # already created for member_creds.
+ self._request_and_check_second_creds(
+ creds, admin_func, member_creds, show_mock, sm_count=2)
+
+ def test_creds_within_same_project(self):
+ self._creds_within_same_project()
+
+ def test_alt_creds_within_same_project(self):
+ self._creds_within_same_project(test_alt_creds=True)
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_creds_in_different_project(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = creds.get_project_member_creds()
+        # Check that the first creds request calls create_project and does
+        # not call show_project, which means a new project is created for
+        # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+
+        # Now request the project alt reader creds, which should create a
+        # new project as this request is for alt creds.
+ alt_reader_creds = self._request_and_check_second_creds(
+ creds, creds.get_project_alt_reader_creds,
+ member_creds, show_mock, same_project_request=False)
+
+        # Check that the second creds request calls show_project and does
+        # not call create_project, which means no new project is created
+        # for the second requested creds; instead a new user is created
+        # under the existing project.
+ self._request_and_check_second_creds(
+ creds, creds.get_project_reader_creds, member_creds, show_mock)
+
+        # Now request the project alt member creds, which should not create
+        # a new project but instead use the alt project already created for
+        # the alt_reader creds.
+ show_mock.return_value = {
+ 'id': alt_reader_creds.tenant_id,
+ 'name': alt_reader_creds.tenant_name}
+ self._request_and_check_second_creds(
+ creds, creds.get_project_alt_member_creds,
+ alt_reader_creds, show_mock, sm_count=2,
+ same_project_request=True)
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_creds_by_role_in_different_project(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = creds.get_project_member_creds()
+        # Check that the first creds request calls create_project and does
+        # not call show_project, which means a new project is created for
+        # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+        # Check that the second creds request calls show_project and does
+        # not call create_project, which means no new project is created
+        # for the second requested creds; instead a new user is created
+        # under the existing project.
+ self._request_and_check_second_creds(
+ creds, creds.get_project_reader_creds, member_creds, show_mock)
+        # Now request the creds by role, which should create a new project.
+ self._request_and_check_second_creds(
+ creds, creds.get_creds_by_roles, member_creds, show_mock,
+ sm_count_in_diff_project=1, same_project_request=False,
+ roles=['member'], scope='project')
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_legacy_admin_creds_in_different_project(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_assign_user_role()
+ self._mock_list_role()
+ self._mock_user_create('11', 'fake_user1')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ with mock.patch.object(creds.creds_client,
+ 'create_project') as create_mock:
+ create_mock.return_value = {'id': '21', 'name': 'fake_project1'}
+ member_creds = creds.get_project_member_creds()
+        # Check that the first creds request calls create_project and does
+        # not call show_project, which means a new project is created for
+        # the requested creds.
+ self.assertEqual(len(create_mock.mock_calls), 1)
+ self.assertEqual(len(show_mock.mock_calls), 0)
+ # Verify project, user name and IDs
+ self.assertEqual(member_creds.username, 'fake_user1')
+ self.assertEqual(member_creds.tenant_name, 'fake_project1')
+ self.assertEqual(member_creds.tenant_id, '21')
+ self.assertEqual(member_creds.user_id, '11')
+
+        # Now request the legacy admin creds, which should create a new
+        # project instead of using the project of the member creds.
+ self._request_and_check_second_creds(
+ creds, creds.get_admin_creds,
+ member_creds, show_mock, same_project_request=False)
+
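The create-once, look-up-later pattern these tests assert can be sketched with a plain `unittest.mock` stub. The names below (`FakeCredsProvider`, `get_creds`) are illustrative only, not the tempest API; the point is the call-count assertions on `create_project` vs `show_project`:

```python
from unittest import mock

# Hypothetical sketch of the pattern asserted above: the first creds
# request creates a project (create_project), while later requests for
# creds within the same project only look it up (show_project).
class FakeCredsProvider:
    def __init__(self, client):
        self.client = client
        self._project = None

    def get_creds(self):
        if self._project is None:
            # First request: create a new project.
            self._project = self.client.create_project()
        else:
            # Subsequent requests: reuse the existing project.
            self._project = self.client.show_project()
        return self._project

client = mock.Mock()
provider = FakeCredsProvider(client)
provider.get_creds()  # triggers create_project
provider.get_creds()  # triggers show_project
assert len(client.create_project.mock_calls) == 1
assert len(client.show_project.mock_calls) == 1
```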
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_admin_creds(self, MockRestClient):
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
@@ -321,7 +523,8 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def _test_get_same_role_creds_with_project_scope(self, MockRestClient,
- scope=None):
+ scope=None,
+ force_new=False):
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_list_2_roles()
self._mock_user_create('1234', 'fake_role_user')
@@ -329,7 +532,7 @@
with mock.patch.object(self.roles_client.RolesClient,
'create_user_role_on_project') as user_mock:
role_creds = creds.get_creds_by_roles(
- roles=['role1', 'role2'], scope=scope)
+ roles=['role1', 'role2'], force_new=force_new, scope=scope)
calls = user_mock.mock_calls
# Assert that the role creation is called with the 2 specified roles
self.assertEqual(len(calls), 2)
@@ -338,13 +541,18 @@
with mock.patch.object(self.roles_client.RolesClient,
'create_user_role_on_project') as user_mock1:
role_creds_new = creds.get_creds_by_roles(
- roles=['role1', 'role2'], scope=scope)
+ roles=['role1', 'role2'], force_new=force_new, scope=scope)
calls = user_mock1.mock_calls
+ # With force_new, assert that new creds are created
+ if force_new:
+ self.assertEqual(len(calls), 2)
+ self.assertNotEqual(role_creds, role_creds_new)
# Assert that previously created creds are return and no call to
- # role creation.
- self.assertEqual(len(calls), 0)
+ # role creation
# Check if previously created creds are returned.
- self.assertEqual(role_creds, role_creds_new)
+ else:
+ self.assertEqual(len(calls), 0)
+ self.assertEqual(role_creds, role_creds_new)
def test_get_same_role_creds_with_project_scope(self):
self._test_get_same_role_creds_with_project_scope(scope='project')
@@ -352,6 +560,13 @@
def test_get_same_role_creds_with_default_scope(self):
self._test_get_same_role_creds_with_project_scope()
+ def test_get_same_role_creds_with_project_scope_force_new(self):
+ self._test_get_same_role_creds_with_project_scope(
+ scope='project', force_new=True)
+
+ def test_get_same_role_creds_with_default_scope_force_new(self):
+ self._test_get_same_role_creds_with_project_scope(force_new=True)
+
@mock.patch('tempest.lib.common.rest_client.RestClient')
def _test_get_different_role_creds_with_project_scope(
self, MockRestClient, scope=None):
@@ -391,8 +606,12 @@
self._mock_assign_user_role()
self._mock_list_role()
self._mock_tenant_create('1234', 'fake_prim_tenant')
- self._mock_user_create('1234', 'fake_prim_user')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '1234', 'name': 'fake_prim_tenant'}
+ self._mock_user_create('1234', 'fake_project1_user')
creds.get_primary_creds()
+ self._mock_user_create('12341', 'fake_project1_user')
+ creds.get_project_admin_creds()
self._mock_tenant_create('12345', 'fake_alt_tenant')
self._mock_user_create('12345', 'fake_alt_user')
creds.get_alt_creds()
@@ -407,10 +626,11 @@
creds.clear_creds()
# Verify user delete calls
calls = user_mock.mock_calls
- self.assertEqual(len(calls), 3)
+ self.assertEqual(len(calls), 4)
args = map(lambda x: x[1][0], calls)
args = list(args)
self.assertIn('1234', args)
+ self.assertIn('12341', args)
self.assertIn('12345', args)
self.assertIn('123456', args)
# Verify tenant delete calls
@@ -512,6 +732,9 @@
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
self._mock_tenant_create('1234', 'fake_prim_tenant')
+ show_mock = self.patchobject(creds.creds_client, 'show_project')
+ show_mock.return_value = {'id': '1234', 'name': 'fake_prim_tenant'}
+ self._mock_user_create('12341', 'fake_project1_user')
self._mock_network_create(creds, '1234', 'fake_net')
self._mock_subnet_create(creds, '1234', 'fake_subnet')
self._mock_router_create('1234', 'fake_router')
@@ -519,6 +742,7 @@
'tempest.lib.services.network.routers_client.RoutersClient.'
'add_router_interface')
creds.get_primary_creds()
+ creds.get_project_admin_creds()
router_interface_mock.assert_called_once_with('1234', subnet_id='1234')
router_interface_mock.reset_mock()
# Create alternate tenant and network
@@ -779,6 +1003,7 @@
fake_response = fake_identity._fake_v3_response
tenants_client_class = tenants_client.ProjectsClient
delete_tenant = 'delete_project'
+ create_tenant = 'create_project'
def setUp(self):
super(TestDynamicCredentialProviderV3, self).setUp()
diff --git a/tempest/tests/lib/common/test_http.py b/tempest/tests/lib/common/test_http.py
index a19153f..aae6ba2 100644
--- a/tempest/tests/lib/common/test_http.py
+++ b/tempest/tests/lib/common/test_http.py
@@ -149,6 +149,31 @@
'xtra key': 'Xtra Value'},
response)
+ def test_request_preload(self):
+ # Given
+ connection = self.closing_http()
+ headers = {'Xtra Key': 'Xtra Value'}
+ http_response = urllib3.HTTPResponse(headers=headers)
+ request = self.patch('urllib3.PoolManager.request',
+ return_value=http_response)
+ retry = self.patch('urllib3.util.Retry')
+
+ # When
+ response, _ = connection.request(
+ method=REQUEST_METHOD,
+ url=REQUEST_URL,
+ headers=headers,
+ preload_content=False)
+
+ # Then
+ request.assert_called_once_with(
+ REQUEST_METHOD,
+ REQUEST_URL,
+ headers=dict(headers, connection='close'),
+ preload_content=False,
+ retries=retry(raise_on_redirect=False, redirect=5))
+ self.assertIsInstance(response, urllib3.HTTPResponse)
+
class TestClosingProxyHttp(TestClosingHttp):
diff --git a/tempest/tests/lib/common/test_rest_client.py b/tempest/tests/lib/common/test_rest_client.py
index 910756f..81a76e0 100644
--- a/tempest/tests/lib/common/test_rest_client.py
+++ b/tempest/tests/lib/common/test_rest_client.py
@@ -55,6 +55,7 @@
def test_get(self):
__, return_dict = self.rest_client.get(self.url)
self.assertEqual('GET', return_dict['method'])
+ self.assertTrue(return_dict['preload_content'])
def test_delete(self):
__, return_dict = self.rest_client.delete(self.url)
@@ -78,6 +79,17 @@
__, return_dict = self.rest_client.copy(self.url)
self.assertEqual('COPY', return_dict['method'])
+ def test_get_chunked(self):
+ self.useFixture(fixtures.MockPatchObject(self.rest_client,
+ '_log_request'))
+ __, return_dict = self.rest_client.get(self.url, chunked=True)
+ # Default is preload_content=True, make sure we passed False
+ self.assertFalse(return_dict['preload_content'])
+ # Make sure we did not pass chunked=True to urllib3 for GET
+ self.assertFalse(return_dict['chunked'])
+ # Make sure we did not call _log_request() on the raw response
+ self.rest_client._log_request.assert_not_called()
+
class TestRestClientNotFoundHandling(BaseRestClientTestClass):
def setUp(self):
diff --git a/tempest/tests/lib/fake_http.py b/tempest/tests/lib/fake_http.py
index cfa4b93..5fa0c43 100644
--- a/tempest/tests/lib/fake_http.py
+++ b/tempest/tests/lib/fake_http.py
@@ -21,14 +21,17 @@
self.return_type = return_type
def request(self, uri, method="GET", body=None, headers=None,
- redirections=5, connection_type=None, chunked=False):
+ redirections=5, connection_type=None, chunked=False,
+ preload_content=False):
if not self.return_type:
fake_headers = fake_http_response(headers)
return_obj = {
'uri': uri,
'method': method,
'body': body,
- 'headers': headers
+ 'headers': headers,
+ 'chunked': chunked,
+ 'preload_content': preload_content,
}
return (fake_headers, return_obj)
elif isinstance(self.return_type, int):
diff --git a/tempest/tests/lib/services/image/v2/test_images_client.py b/tempest/tests/lib/services/image/v2/test_images_client.py
index 5b162f8..27a50a9 100644
--- a/tempest/tests/lib/services/image/v2/test_images_client.py
+++ b/tempest/tests/lib/services/image/v2/test_images_client.py
@@ -13,6 +13,9 @@
# under the License.
import io
+from unittest import mock
+
+import fixtures
from tempest.lib.common.utils import data_utils
from tempest.lib.services.image.v2 import images_client
@@ -239,6 +242,21 @@
headers={'Content-Type': 'application/octet-stream'},
status=200)
+ def test_show_image_file_chunked(self):
+        # Since chunked=True on a GET should pass the response object
+        # through basically untouched, we use a mock here so we can make
+        # assertions about it.
+ http_response = mock.MagicMock()
+ http_response.status = 200
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.common.rest_client.RestClient.get',
+ return_value=(http_response, b'')))
+ resp = self.client.show_image_file(
+ self.FAKE_CREATE_UPDATE_SHOW_IMAGE['id'],
+ chunked=True)
+ self.assertEqual(http_response, resp)
+ resp.__contains__.assert_not_called()
+ resp.__getitem__.assert_not_called()
+
def test_add_image_tag(self):
self.check_service_client_function(
self.client.add_image_tag,
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index 886d99c..13870ba 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -75,7 +75,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=None
+ sock=None,
+ allow_agent=True
)]
self.assertEqual(expected_connect, client_mock.connect.mock_calls)
self.assertEqual(0, s_mock.call_count)
@@ -91,7 +92,8 @@
proxy_client = ssh.Client('proxy-host', 'proxy-user', timeout=2)
client = ssh.Client('localhost', 'root', timeout=2,
- proxy_client=proxy_client)
+ proxy_client=proxy_client,
+ ssh_allow_agent=False)
client._get_ssh_connection(sleep=1)
aa_mock.assert_has_calls([mock.call(), mock.call()])
@@ -106,7 +108,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=None
+ sock=None,
+ allow_agent=True
)]
self.assertEqual(proxy_expected_connect,
proxy_client_mock.connect.mock_calls)
@@ -121,7 +124,8 @@
look_for_keys=False,
timeout=10.0,
password=None,
- sock=proxy_client_mock.get_transport().open_session()
+ sock=proxy_client_mock.get_transport().open_session(),
+ allow_agent=False
)]
self.assertEqual(expected_connect, client_mock.connect.mock_calls)
self.assertEqual(0, s_mock.call_count)
diff --git a/tools/tempest-extra-tests-list.txt b/tools/tempest-extra-tests-list.txt
new file mode 100644
index 0000000..9c88109
--- /dev/null
+++ b/tools/tempest-extra-tests-list.txt
@@ -0,0 +1,20 @@
+# This file includes the list of tests which need to be
+# excluded from integrated testing (the tempest-full job
+# or other generic jobs). We will run these tests in a
+# separate job. This is needed to avoid the job timeout,
+# details in bug#2004780.
+# Basic criteria to add a test to this list are:
+# * Admin tests which are not needed for interop; most of them
+# run as part of other API and Scenario tests.
+# * Negative tests which are mostly covered in tempest API tests
+# or service unit/functional tests.
+
+# All admin tests except keystone admin tests, which might not have
+# much coverage in other existing tests
+tempest.api.compute.admin
+tempest.api.volume.admin
+tempest.api.image.admin
+tempest.api.network.admin
+
+# All negative tests
+negative
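Entries in this list are matched against full test IDs (assumed stestr-style include/exclude-list behavior, where each non-comment line is treated as a regex). A minimal sketch of that selection, using a subset of the entries above:

```python
import re

# Minimal sketch of how an include/exclude list is applied, assuming
# stestr-style behavior: comment and blank lines are ignored, every
# other line is a regex matched against the full test ID.
raw = """\
# comment lines are ignored
tempest.api.compute.admin
negative
"""
patterns = [re.compile(line) for line in raw.splitlines()
            if line and not line.startswith('#')]

def selected(test_id):
    return any(p.search(test_id) for p in patterns)

assert selected('tempest.api.compute.admin.test_flavors.TestF.test_list')
assert selected('tempest.api.volume.test_volumes_negative.TestV.test_x')
assert not selected('tempest.api.compute.test_servers.TestS.test_list')
```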
diff --git a/tox.ini b/tox.ini
index 618f9e0..47ef5eb 100644
--- a/tox.ini
+++ b/tox.ini
@@ -126,16 +126,49 @@
tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' {posargs}
+[testenv:integrated-full]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+basepython = {[tempestenv]basepython}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select which tests to run. It excludes the
+# extra tests mentioned in tools/tempest-extra-tests-list.txt and the slow tag:
+# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
+# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --regex {[testenv:integrated-full]regex1} --exclude-list ./tools/tempest-extra-tests-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-full]regex2} {posargs}
+
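The negative lookahead used throughout these tox regexes can be verified directly with Python's `re` module: it skips any test whose ID carries a `[...slow...]` tag while still selecting `tempest.api` tests (the test IDs below are illustrative):

```python
import re

# The same negative-lookahead regex as in the tox envs: match tempest.api
# test IDs unless the ID contains a bracketed tag list with 'slow' in it.
pattern = re.compile(r'(?!.*\[.*\bslow\b.*\])(^tempest\.api)')

assert pattern.search('tempest.api.compute.test_servers.TestS.test_list')
assert not pattern.search('tempest.api.compute.test_x.TestX.test_y[slow,id-1]')
assert not pattern.search('tempest.scenario.test_network_basic_ops.TestN.test_ops')
```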
+[testenv:extra-tests]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+basepython = {[tempestenv]basepython}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select the extra tests mentioned in
+# tools/tempest-extra-tests-list.txt and exclude tests with the slow tag:
+# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
+# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
+exclude-regex = '\[.*\bslow\b.*\]'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --exclude-regex {[testenv:extra-tests]exclude-regex} --include-list ./tools/tempest-extra-tests-list.txt {posargs}
+
[testenv:full-parallel]
envdir = .tox/tempest
sitepackages = {[tempestenv]sitepackages}
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
-# The regex below is used to select all tempest scenario and including the non slow api tests
+# But exclude the extra tests mentioned in tools/tempest-extra-tests-list.txt
+regex = '(^tempest\.scenario.*)|(^tempest\.serial_tests)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(^tempest\.scenario.*)|(^tempest\.serial_tests)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
+ tempest run --regex {[testenv:full-parallel]regex} --exclude-list ./tools/tempest-extra-tests-list.txt {posargs}
[testenv:api-microversion-tests]
envdir = .tox/tempest
@@ -143,11 +176,12 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.api\.compute)|(^tempest\.api\.volume)'
# The regex below is used to select all tempest api tests for services having API
# microversion concept.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(^tempest\.api\.compute)|(^tempest\.api\.volume)' {posargs}
+ tempest run --regex {[testenv:api-microversion-tests]regex} {posargs}
[testenv:integrated-network]
envdir = .tox/tempest
@@ -155,12 +189,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-network]regex1} --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-network]regex2} --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
[testenv:integrated-compute]
envdir = .tox/tempest
@@ -168,12 +204,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-compute]regex1} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-compute]regex2} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
[testenv:integrated-placement]
envdir = .tox/tempest
@@ -181,12 +219,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-placement]regex1} --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-placement]regex2} --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
[testenv:integrated-storage]
envdir = .tox/tempest
@@ -194,12 +234,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-storage]regex1} --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-storage]regex2} --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
[testenv:integrated-object-storage]
envdir = .tox/tempest
@@ -207,12 +249,14 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex1 = '(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
+regex2 = '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)'
# The regex below is used to select which tests to run and exclude the slow tag and
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)|(^tempest\.serial_tests)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --regex {[testenv:integrated-object-storage]regex1} --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex {[testenv:integrated-object-storage]regex2} --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
[testenv:full-serial]
envdir = .tox/tempest
@@ -220,12 +264,13 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|serial_tests))'
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|serial_tests))' {posargs}
+ tempest run --serial --regex {[testenv:full-serial]regex} {posargs}
[testenv:scenario]
envdir = .tox/tempest
@@ -233,10 +278,11 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '(^tempest\.scenario)'
# The regex below is used to select all scenario tests
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '(^tempest\.scenario)' {posargs}
+ tempest run --serial --regex {[testenv:scenario]regex} {posargs}
[testenv:smoke]
envdir = .tox/tempest
@@ -244,9 +290,10 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --regex {[testenv:smoke]regex} {posargs}
[testenv:smoke-serial]
envdir = .tox/tempest
@@ -254,12 +301,13 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
# This is still serial because neutron doesn't work with parallel. See:
# https://bugs.launchpad.net/tempest/+bug/1216076 so the neutron smoke
# job would fail if we moved it to parallel.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --serial --regex {[testenv:smoke-serial]regex} {posargs}
[testenv:slow-serial]
envdir = .tox/tempest
@@ -267,10 +315,35 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bslow\b.*\]'
# The regex below is used to select the slow tagged tests to run serially:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --serial --regex '\[.*\bslow\b.*\]' {posargs}
+ tempest run --serial --regex {[testenv:slow-serial]regex} {posargs}
+
+[testenv:slow]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+basepython = {[tempestenv]basepython}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select the slow tagged tests:
+regex = '\[.*\bslow\b.*\]'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --regex {[testenv:slow]regex} {posargs}
+
+[testenv:multinode]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+basepython = {[tempestenv]basepython}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select the multinode and smoke tagged tests
+regex = '\[.*\bsmoke|multinode\b.*\]'
+commands =
+ find . -type f -name "*.pyc" -delete
+ tempest run --regex {[testenv:multinode]regex} {posargs}
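A note on the multinode regex: because alternation binds loosely, it reads as `\[.*\bsmoke` OR `multinode\b.*\]`, which still selects IDs tagged `[smoke]` or `[multinode]` in practice. A quick check (illustrative test IDs):

```python
import re

# The same regex as the multinode env above; the two alternatives are
# '\[.*\bsmoke' and 'multinode\b.*\]' due to alternation precedence.
pattern = re.compile(r'\[.*\bsmoke|multinode\b.*\]')

assert pattern.search('tempest.scenario.test_x.TestX.test_y[smoke,id-1]')
assert pattern.search('tempest.scenario.test_x.TestX.test_y[id-2,multinode]')
assert not pattern.search('tempest.api.compute.test_x.TestX.test_y[slow]')
```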
[testenv:ipv6-only]
envdir = .tox/tempest
@@ -278,12 +351,13 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke|ipv6|test_network_v6\b.*\]'
# Run only smoke and ipv6 tests. This env is used to tests
# the ipv6 deployments and basic tests run fine so that we can
# verify that services listen on IPv6 address.
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '\[.*\bsmoke|ipv6|test_network_v6\b.*\]' {posargs}
+ tempest run --regex {[testenv:ipv6-only]regex} {posargs}
[testenv:venv]
deps =
@@ -442,8 +516,9 @@
basepython = {[tempestenv]basepython}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
+regex = '\[.*\bsmoke\b.*\]'
# The below command install stestr master version and run smoke tests
commands =
find . -type f -name "*.pyc" -delete
pip install -U git+https://github.com/mtreinish/stestr
- tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
+ tempest run --regex {[testenv:stestr-master]regex} {posargs}
diff --git a/zuul.d/base.yaml b/zuul.d/base.yaml
index 3deb944..2d978c0 100644
--- a/zuul.d/base.yaml
+++ b/zuul.d/base.yaml
@@ -72,7 +72,8 @@
and a tempest one exist.
timeout: 10800
vars:
- tox_envlist: full
+        # This job runs multinode and smoke tests.
+ tox_envlist: multinode
devstack_localrc:
FORCE_CONFIG_DRIVE: false
NOVA_ALLOW_MOVE_TO_SAME_HOST: false
diff --git a/zuul.d/integrated-gate.yaml b/zuul.d/integrated-gate.yaml
index f379041..4f21956 100644
--- a/zuul.d/integrated-gate.yaml
+++ b/zuul.d/integrated-gate.yaml
@@ -11,10 +11,11 @@
vars:
tox_envlist: all
tempest_test_regex: tempest
- # TODO(gmann): Enable File injection tests once nova bug is fixed
- # https://bugs.launchpad.net/nova/+bug/1882421
- # devstack_localrc:
- # ENABLE_FILE_INJECTION: true
+ devstack_localrc:
+ MYSQL_REDUCE_MEMORY: true
+ # TODO(gmann): Enable File injection tests once nova bug is fixed
+ # https://bugs.launchpad.net/nova/+bug/1882421
+ # ENABLE_FILE_INJECTION: true
- job:
name: tempest-ipv6-only
@@ -60,11 +61,24 @@
c-bak: false
- job:
+ name: tempest-extra-tests
+ parent: devstack-tempest
+ description: |
+ This job runs the extra tests mentioned in
+ tools/tempest-extra-tests-list.txt.
+ vars:
+ tox_envlist: extra-tests
+
+- job:
name: tempest-full-py3
parent: devstack-tempest
# This job version is with swift enabled on py3
# as swift is ready on py3 from stable/ussuri onwards.
- branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train)).*$
+    # As this uses the 'integrated-full' tox env, which is not
+    # available in the old tempest used until stable/wallaby,
+    # this job definition is only for stable/xena onwards;
+    # a separate job definition covers up to stable/wallaby.
+ branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train|ussuri|victoria|wallaby)).*$
description: |
Base integration test with Neutron networking, horizon, swift enable,
and py3.
@@ -74,7 +88,7 @@
required-projects:
- openstack/horizon
vars:
- tox_envlist: full
+ tox_envlist: integrated-full
devstack_localrc:
USE_PYTHON3: true
FORCE_CONFIG_DRIVE: true
@@ -107,6 +121,7 @@
# Required until bug/1949606 is resolved when using libvirt and QEMU
# >=5.0.0 with a [libvirt]virt_type of qemu (TCG).
configure_swap_size: 4096
+ tox_envlist: full
- job:
name: tempest-integrated-networking
@@ -246,10 +261,15 @@
neutron: https://opendev.org/openstack/neutron
devstack_services:
neutron-trunk: true
+ br-ex-tcpdump: true
+ br-int-flows: true
group-vars:
subnode:
devstack_localrc:
USE_PYTHON3: true
+ devstack_services:
+ br-ex-tcpdump: true
+ br-int-flows: true
- job:
name: tempest-slow
@@ -294,6 +314,15 @@
vars: *tempest_slow_vars
- job:
+ name: tempest-slow-parallel
+ parent: tempest-slow-py3
+    # This job runs slow tests in parallel.
+ vars:
+ tox_envlist: slow
+ devstack_localrc:
+ MYSQL_REDUCE_MEMORY: true
+
+- job:
name: tempest-cinder-v2-api
parent: devstack-tempest
# NOTE(gmann): Cinder v2 APIs are available until
@@ -368,6 +397,7 @@
CINDER_ENFORCE_SCOPE: true
GLANCE_ENFORCE_SCOPE: true
NEUTRON_ENFORCE_SCOPE: true
+ PLACEMENT_ENFORCE_SCOPE: true
- project-template:
name: integrated-gate-networking
@@ -383,20 +413,20 @@
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
- project-template:
name: integrated-gate-compute
@@ -420,15 +450,15 @@
branches: ^stable/(wallaby|xena|yoga).*$
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
gate:
jobs:
- tempest-integrated-compute
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
periodic-weekly:
jobs:
# centos-9-stream is tested from zed release onwards
@@ -450,20 +480,20 @@
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/ussuri).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
- project-template:
name: integrated-gate-storage
@@ -480,20 +510,20 @@
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
- project-template:
name: integrated-gate-object-storage
@@ -508,17 +538,17 @@
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
gate:
jobs:
- grenade
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
- # and job is broken on wallaby branch due to the issue
+ # and the job is broken up to the wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
- branches: ^(?!stable/(ussuri|wallaby)).*$
+ branches: ^(?!stable/(ussuri|victoria|wallaby)).*$
diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml
index 966cc9a..3df61d8 100644
--- a/zuul.d/project.yaml
+++ b/zuul.d/project.yaml
@@ -11,7 +11,7 @@
- openstack-tox-py38
- openstack-tox-py39
- openstack-tox-py310
- - tempest-full-parallel:
+ - tempest-full-py3:
# Define list of irrelevant files to use everywhere else
irrelevant-files: &tempest-irrelevant-files
- ^.*\.rst$
@@ -26,20 +26,15 @@
- ^.gitignore$
- ^.gitreview$
- ^.mailmap$
- - tempest-full-py3:
+ - tempest-extra-tests:
irrelevant-files: *tempest-irrelevant-files
- tempest-full-ubuntu-focal:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-py3-ipv6:
- voting: false
- irrelevant-files: *tempest-irrelevant-files
- glance-multistore-cinder-import:
voting: false
irrelevant-files: *tempest-irrelevant-files
- tempest-full-zed:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-yoga:
- irrelevant-files: *tempest-irrelevant-files
- tempest-full-xena:
irrelevant-files: *tempest-irrelevant-files
- tempest-multinode-full-py3:
@@ -66,6 +61,7 @@
- ^tools/tempest-integrated-gate-placement-exclude-list.txt
- ^tools/tempest-integrated-gate-storage-blacklist.txt
- ^tools/tempest-integrated-gate-storage-exclude-list.txt
+ - ^tools/tempest-extra-tests-list.txt
- ^tools/verify-ipv6-only-deployments.sh
- ^tools/with_venv.sh
# tools/ is not here since this relies on a script in tools/.
@@ -89,6 +85,7 @@
- ^tools/tempest-integrated-gate-placement-exclude-list.txt
- ^tools/tempest-integrated-gate-storage-blacklist.txt
- ^tools/tempest-integrated-gate-storage-exclude-list.txt
+ - ^tools/tempest-extra-tests-list.txt
- ^tools/tempest-plugin-sanity.sh
- ^tools/with_venv.sh
- ^.coveragerc$
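The `irrelevant-files` patterns above act as path filters: Zuul skips a job when every file touched by the change matches at least one pattern. A simplified sketch of that semantic (dots escaped here; real Zuul has additional matching rules):

```python
import re

# A subset of the irrelevant-files list above.
irrelevant = [
    r"^.*\.rst$",
    r"^tools/tempest-extra-tests-list\.txt",
    r"^\.coveragerc$",
]

def job_runs(changed_files):
    """The job is skipped only when every changed file is irrelevant."""
    return not all(
        any(re.match(p, f) for p in irrelevant) for f in changed_files
    )

assert not job_runs(["HACKING.rst", "tools/tempest-extra-tests-list.txt"])
assert job_runs(["tempest/api/compute/servers/test_multiple_create.py"])
```

This is why the diff adds `^tools/tempest-extra-tests-list.txt` to the list: a change that only edits the extra-tests list should not trigger the full plugin jobs.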
@@ -118,16 +115,8 @@
- tempest-full-test-account-py3:
voting: false
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-test-account-no-admin-py3:
- voting: false
- irrelevant-files: *tempest-irrelevant-files
- openstack-tox-bashate:
irrelevant-files: *tempest-irrelevant-files-2
- - tempest-full-centos-9-stream:
- # TODO(gmann): make it voting once below fix is merged
- # https://review.opendev.org/c/openstack/tempest/+/842140
- voting: false
- irrelevant-files: *tempest-irrelevant-files
gate:
jobs:
- openstack-tox-pep8
@@ -142,6 +131,8 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-full-py3:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-extra-tests:
+ irrelevant-files: *tempest-irrelevant-files
- grenade:
irrelevant-files: *tempest-irrelevant-files
- tempest-ipv6-only:
@@ -152,19 +143,24 @@
irrelevant-files: *tempest-irrelevant-files
#- devstack-plugin-ceph-tempest-py3:
# irrelevant-files: *tempest-irrelevant-files
- #- tempest-full-centos-9-stream:
- # irrelevant-files: *tempest-irrelevant-files
- nova-live-migration:
irrelevant-files: *tempest-irrelevant-files
experimental:
jobs:
- nova-multi-cell
+ - nova-ceph-multistore:
+ irrelevant-files: *tempest-irrelevant-files
- tempest-with-latest-microversion
- tempest-stestr-master
- tempest-cinder-v2-api:
irrelevant-files: *tempest-irrelevant-files
- tempest-all:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-slow-parallel
+ - tempest-full-parallel
+ - tempest-full-zed-extra-tests
+ - tempest-full-yoga-extra-tests
+ - tempest-full-xena-extra-tests
- neutron-ovs-tempest-dvr-ha-multinode-full:
irrelevant-files: *tempest-irrelevant-files
- nova-tempest-v2-api:
@@ -173,8 +169,14 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-pg-full:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-py3-ipv6:
+ irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-centos-9-stream:
+ irrelevant-files: *tempest-irrelevant-files
- tempest-centos9-stream-fips:
irrelevant-files: *tempest-irrelevant-files
+ - tempest-full-test-account-no-admin-py3:
+ irrelevant-files: *tempest-irrelevant-files
periodic-stable:
jobs:
- tempest-full-zed
@@ -183,9 +185,17 @@
- tempest-slow-zed
- tempest-slow-yoga
- tempest-slow-xena
+ - tempest-full-zed-extra-tests
+ - tempest-full-yoga-extra-tests
+ - tempest-full-xena-extra-tests
periodic:
jobs:
- tempest-all
+ - tempest-slow-parallel
+ - tempest-full-parallel
- tempest-full-oslo-master
- tempest-stestr-master
+ - tempest-full-py3-ipv6
- tempest-centos9-stream-fips
+ - tempest-full-centos-9-stream
+ - tempest-full-test-account-no-admin-py3
diff --git a/zuul.d/stable-jobs.yaml b/zuul.d/stable-jobs.yaml
index fb2300b..8aeb748 100644
--- a/zuul.d/stable-jobs.yaml
+++ b/zuul.d/stable-jobs.yaml
@@ -18,6 +18,24 @@
override-checkout: stable/xena
- job:
+ name: tempest-full-zed-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-focal
+ override-checkout: stable/zed
+
+- job:
+ name: tempest-full-yoga-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-focal
+ override-checkout: stable/yoga
+
+- job:
+ name: tempest-full-xena-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-focal
+ override-checkout: stable/xena
+
+- job:
name: tempest-slow-zed
parent: tempest-slow-py3
nodeset: openstack-two-node-focal
@@ -38,6 +56,36 @@
- job:
name: tempest-full-py3
parent: devstack-tempest
+ # This job version uses the 'full' tox env, which is
+ # also available from stable/ussuri to stable/wallaby.
+ branches:
+ - stable/ussuri
+ - stable/victoria
+ - stable/wallaby
+ description: |
+ Base integration test with Neutron networking, horizon, swift enabled,
+ and py3.
+ Former names for this job were:
+ * legacy-tempest-dsvm-py35
+ * gate-tempest-dsvm-py35
+ required-projects:
+ - openstack/horizon
+ vars:
+ tox_envlist: full
+ devstack_localrc:
+ USE_PYTHON3: true
+ FORCE_CONFIG_DRIVE: true
+ ENABLE_VOLUME_MULTIATTACH: true
+ GLANCE_USE_IMPORT_WORKFLOW: True
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ # Enable horizon so that we can run horizon tests.
+ horizon: true
+
+- job:
+ name: tempest-full-py3
+ parent: devstack-tempest
# This job version is with swift disabled on py3
# as swift was not ready on py3 until stable/train.
branches:
diff --git a/zuul.d/tempest-specific.yaml b/zuul.d/tempest-specific.yaml
index ca9ba7f..a8c29af 100644
--- a/zuul.d/tempest-specific.yaml
+++ b/zuul.d/tempest-specific.yaml
@@ -30,11 +30,12 @@
- opendev.org/openstack/oslo.utils
- opendev.org/openstack/oslo.versionedobjects
- opendev.org/openstack/oslo.vmware
+ vars:
+ tox_envlist: full
- job:
name: tempest-full-parallel
parent: tempest-full-py3
- voting: false
branches:
- master
description: |
@@ -48,6 +49,7 @@
run_tempest_dry_cleanup: true
devstack_localrc:
DEVSTACK_PARALLEL: True
+ MYSQL_REDUCE_MEMORY: true
- job:
name: tempest-full-py3-ipv6