Merge "New test case for Swift Policy quota limit added"
diff --git a/doc/source/plugins/plugin.rst b/doc/source/plugins/plugin.rst
index 31aa134..573aa7d 100644
--- a/doc/source/plugins/plugin.rst
+++ b/doc/source/plugins/plugin.rst
@@ -29,6 +29,10 @@
* tempest.config
* tempest.test_discover.plugins
* tempest.common.credentials_factory
+* tempest.common.compute
+* tempest.common.identity
+* tempest.common.image
+* tempest.common.object_storage
* tempest.clients
* tempest.test
* tempest.scenario.manager
diff --git a/doc/source/supported_version.rst b/doc/source/supported_version.rst
index f38b6ca..d4b7b4c 100644
--- a/doc/source/supported_version.rst
+++ b/doc/source/supported_version.rst
@@ -9,9 +9,9 @@
Tempest master supports the below OpenStack Releases:
+* 2025.2
* 2025.1
* 2024.2
-* 2024.1
For older OpenStack Release:
@@ -32,7 +32,7 @@
Tempest master supports the below python versions:
-* Python 3.9
* Python 3.10
* Python 3.11
* Python 3.12
+* Python 3.13
diff --git a/releasenotes/notes/add-alt-manager-dynamic-creds-f8f1007862ea5dfb.yaml b/releasenotes/notes/add-alt-manager-dynamic-creds-f8f1007862ea5dfb.yaml
new file mode 100644
index 0000000..bff36c7
--- /dev/null
+++ b/releasenotes/notes/add-alt-manager-dynamic-creds-f8f1007862ea5dfb.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+ Add alt manager role to the dynamic credentials provider for project scope.
diff --git a/releasenotes/notes/add-concurrency-utility-function-e8d4f2a1c9b5e7f3.yaml b/releasenotes/notes/add-concurrency-utility-function-e8d4f2a1c9b5e7f3.yaml
new file mode 100644
index 0000000..402633e
--- /dev/null
+++ b/releasenotes/notes/add-concurrency-utility-function-e8d4f2a1c9b5e7f3.yaml
@@ -0,0 +1,14 @@
+---
+features:
+ - |
+ Add ``run_concurrent_tasks`` helper function in ``tempest.common.concurrency``
+ module to simplify writing concurrency tests for OpenStack services. This
+ utility uses multiprocessing to execute operations in parallel and collect
+ results in a shared list. It automatically handles process management,
+ exception collection, and cleanup. Tempest plugins can use this function
+ to test concurrent operations such as parallel volume creation, snapshot
+ creation, or server operations. The number of concurrent processes is
+ specified via the ``resource_count`` parameter, allowing plugins to pass
+ their service-specific configuration values (e.g.,
+ ``CONF.volume.concurrent_resource_count`` for cinder,
+ ``CONF.share.concurrent_resource_count`` for manila).
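
The note above describes the helper's behavior (multiprocessing, a shared results list, exception collection, process cleanup) without showing it. Below is a minimal, self-contained sketch of that pattern — the names ``run_tasks_sketch`` and ``_collect`` are illustrative assumptions, not the real ``tempest.common.concurrency`` API:

```python
# Sketch of the multiprocessing pattern the release note describes.
# Not the real run_concurrent_tasks implementation or signature.
import multiprocessing


def _collect(task, results, index, task_args):
    """Worker wrapper: record the result, or the exception, never crash."""
    try:
        results.append(task(index, *task_args))
    except Exception as exc:  # collected for the caller to inspect
        results.append(exc)


def run_tasks_sketch(task, resource_count, *task_args):
    """Run ``task`` in ``resource_count`` processes, collecting results
    in a shared list, and join every child before returning."""
    with multiprocessing.Manager() as manager:
        results = manager.list()  # shared list, visible to all workers
        procs = [
            multiprocessing.Process(
                target=_collect, args=(task, results, i, task_args))
            for i in range(resource_count)
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()  # cleanup: wait for every child process
        return list(results)
```

A plugin would pass its service-specific count (e.g. a ``concurrent_resource_count`` config value) as ``resource_count``.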
diff --git a/releasenotes/notes/add-config-nova-policy-roles-37fc4ef511f97f50.yaml b/releasenotes/notes/add-config-nova-policy-roles-37fc4ef511f97f50.yaml
new file mode 100644
index 0000000..f357b00
--- /dev/null
+++ b/releasenotes/notes/add-config-nova-policy-roles-37fc4ef511f97f50.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ A new config option ``nova_policy_roles`` is added in the
+ ``compute-feature-enabled`` section. This can be used to
+ configure the available roles that are used as defaults in
+ the nova policy rules.
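
The note above describes the option without showing its shape; a hedged tempest.conf fragment follows. The role list shown is purely illustrative — the accepted values depend on the deployment's nova policy configuration, not on Tempest:

```ini
[compute-feature-enabled]
# Illustrative value only: the roles nova's policy defaults are
# configured with in this deployment.
nova_policy_roles = reader,member,manager,service
```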
diff --git a/releasenotes/notes/add-keystone-system-token-config-1658eb05ed07e4da.yaml b/releasenotes/notes/add-keystone-system-token-config-1658eb05ed07e4da.yaml
new file mode 100644
index 0000000..2a752eb
--- /dev/null
+++ b/releasenotes/notes/add-keystone-system-token-config-1658eb05ed07e4da.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ Added a new config option in the ``identity`` section,
+ ``use_system_token``, which tells Tempest to use a system-scoped
+ token to test the keystone APIs. It is disabled by default, which
+ means Tempest will use a project-scoped token.
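
For reference, a hedged sketch of how the option above would look in tempest.conf (the section and option name come from the note; the choice to enable it is the illustration):

```ini
[identity]
# Use a system-scoped token for keystone API tests; defaults to
# disabled, i.e. project-scoped tokens.
use_system_token = true
```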
diff --git a/releasenotes/notes/add-location-api-5a57ab29dc6d6cd7.yaml b/releasenotes/notes/add-location-api-5a57ab29dc6d6cd7.yaml
new file mode 100644
index 0000000..f9166a2
--- /dev/null
+++ b/releasenotes/notes/add-location-api-5a57ab29dc6d6cd7.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+ Add new location API support to image V2 client
diff --git a/releasenotes/notes/add-server-migrations-clients-ffbf5cbdf7818305.yaml b/releasenotes/notes/add-server-migrations-clients-ffbf5cbdf7818305.yaml
new file mode 100644
index 0000000..8e88513
--- /dev/null
+++ b/releasenotes/notes/add-server-migrations-clients-ffbf5cbdf7818305.yaml
@@ -0,0 +1,5 @@
+---
+features:
+ - |
+ Add the server live migration list and force complete service
+ clients.
diff --git a/releasenotes/notes/bug-2132971-a89a576348dcd1d6.yaml b/releasenotes/notes/bug-2132971-a89a576348dcd1d6.yaml
new file mode 100644
index 0000000..d21289c
--- /dev/null
+++ b/releasenotes/notes/bug-2132971-a89a576348dcd1d6.yaml
@@ -0,0 +1,5 @@
+---
+fixes:
+ - |
+ Fixed bug #2132971. ``test_rebuild_server`` will no longer expect a
+ floating ip when floating ip networks are disabled.
diff --git a/releasenotes/notes/drop-python-3-9-b8a25c06e4bc0787.yaml b/releasenotes/notes/drop-python-3-9-b8a25c06e4bc0787.yaml
new file mode 100644
index 0000000..f9488d7
--- /dev/null
+++ b/releasenotes/notes/drop-python-3-9-b8a25c06e4bc0787.yaml
@@ -0,0 +1,8 @@
+---
+prelude: >
+ Tempest has dropped support for Python 3.9.
+upgrade:
+ - |
+ Python 3.9 support has been dropped. The last release of Tempest
+ to support Python 3.9 is Tempest 45.0.0. The minimum version
+ of Python supported by Tempest is Python 3.10.
diff --git a/releasenotes/notes/plugin-sytable-interface-18f865ba3a415c70.yaml b/releasenotes/notes/plugin-sytable-interface-18f865ba3a415c70.yaml
new file mode 100644
index 0000000..6d466c7
--- /dev/null
+++ b/releasenotes/notes/plugin-sytable-interface-18f865ba3a415c70.yaml
@@ -0,0 +1,9 @@
+---
+prelude: |
+ Tempest declares the following interfaces as stable interfaces
+ to be used by the tempest plugins:
+
+ * tempest.common.compute
+ * tempest.common.identity
+ * tempest.common.image
+ * tempest.common.object_storage
diff --git a/releasenotes/notes/tempest-2024-2-release-e706f62c7e841bd0.yaml b/releasenotes/notes/tempest-2025-1-release-e706f62c7e841bd0.yaml
similarity index 98%
rename from releasenotes/notes/tempest-2024-2-release-e706f62c7e841bd0.yaml
rename to releasenotes/notes/tempest-2025-1-release-e706f62c7e841bd0.yaml
index 86af60c..24129cf 100644
--- a/releasenotes/notes/tempest-2024-2-release-e706f62c7e841bd0.yaml
+++ b/releasenotes/notes/tempest-2025-1-release-e706f62c7e841bd0.yaml
@@ -1,5 +1,5 @@
---
-prelude: >
+prelude: |
This release is to tag Tempest for OpenStack 2025.1 release.
This release marks the start of 2025.1 release support in Tempest.
After this release, Tempest will support below OpenStack Releases:
diff --git a/releasenotes/notes/tempest-2025-2-release-085c56b9b4cf2c84.yaml b/releasenotes/notes/tempest-2025-2-release-085c56b9b4cf2c84.yaml
new file mode 100644
index 0000000..cfc6b19
--- /dev/null
+++ b/releasenotes/notes/tempest-2025-2-release-085c56b9b4cf2c84.yaml
@@ -0,0 +1,17 @@
+---
+prelude: >
+ This release is to tag Tempest for OpenStack 2025.2 release.
+ This release marks the start of 2025.2 release support in Tempest.
+ After this release, Tempest will support the below OpenStack Releases:
+
+ * 2025.2
+ * 2025.1
+ * 2024.2
+ * 2024.1
+
+ Current development of Tempest is for the OpenStack 2026.1
+ development cycle. Every Tempest commit is also tested against
+ master during the 2026.1 cycle. However, this does not necessarily
+ mean that using Tempest as of this tag will work against a 2026.1
+ (or future release) cloud.
+ To be on the safe side, use this tag to test the OpenStack 2025.2
+ release.
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index 058f65f..33c141d 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,8 @@
:maxdepth: 1
unreleased
+ v45.0.0
+ v44.0.0
v43.0.0
v42.0.0
v41.0.0
diff --git a/releasenotes/source/v44.0.0.rst b/releasenotes/source/v44.0.0.rst
new file mode 100644
index 0000000..bdb3b69
--- /dev/null
+++ b/releasenotes/source/v44.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v44.0.0 Release Notes
+=====================
+
+.. release-notes:: 44.0.0 Release Notes
+ :version: 44.0.0
diff --git a/releasenotes/source/v45.0.0.rst b/releasenotes/source/v45.0.0.rst
new file mode 100644
index 0000000..4183a63
--- /dev/null
+++ b/releasenotes/source/v45.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v45.0.0 Release Notes
+=====================
+
+.. release-notes:: 45.0.0 Release Notes
+ :version: 45.0.0
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
index 15b1743..60402ee 100644
--- a/roles/run-tempest/tasks/main.yaml
+++ b/roles/run-tempest/tasks/main.yaml
@@ -25,11 +25,11 @@
target_branch: "{{ zuul.override_checkout }}"
when: zuul.override_checkout is defined
-- name: Use stable branch upper-constraints till 2023.1
+- name: Use stable branch upper-constraints till 2024.1
set_fact:
# TOX_CONSTRAINTS_FILE is new name, UPPER_CONSTRAINTS_FILE is old one, best to set both
tempest_tox_environment: "{{ tempest_tox_environment | combine({'UPPER_CONSTRAINTS_FILE': stable_constraints_file}) | combine({'TOX_CONSTRAINTS_FILE': stable_constraints_file}) }}"
- when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/2023.1", "unmaintained/victoria", "unmaintained/wallaby", "unmaintained/xena", "unmaintained/yoga", "unmaintained/zed", "unmaintained/2023.1"]
+ when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/2023.1", "unmaintained/victoria", "unmaintained/wallaby", "unmaintained/xena", "unmaintained/yoga", "unmaintained/zed", "unmaintained/2023.1", "unmaintained/2024.1"]
- name: Use Configured upper-constraints for non-master Tempest
set_fact:
diff --git a/setup.cfg b/setup.cfg
index 67555f4..fa17801 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -14,10 +14,10 @@
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 3
- Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
Programming Language :: Python :: 3.12
+ Programming Language :: Python :: 3.13
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: Implementation :: CPython
diff --git a/tempest/api/compute/admin/test_assisted_volume_snapshots.py b/tempest/api/compute/admin/test_assisted_volume_snapshots.py
index b7be796..034efc9 100644
--- a/tempest/api/compute/admin/test_assisted_volume_snapshots.py
+++ b/tempest/api/compute/admin/test_assisted_volume_snapshots.py
@@ -26,11 +26,6 @@
create_default_network = True
- # TODO(gmann): Remove the admin access to service user
- # once nova change the default of this API to service
- # role. To merge the nova changing the policy default
- # we need to use token with admin as well as service
- # role and later we can use only service token.
credentials = ['primary', 'admin', ['service_user', 'admin', 'service']]
@classmethod
@@ -39,6 +34,13 @@
if not CONF.service_available.cinder:
skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
raise cls.skipException(skip_msg)
+ # NOTE(gmaan): If the new policy is enforced and the service role
+ # is present in nova, then use the service user (no admin role) for
+ # the assisted volume snapshots APIs.
+ if (CONF.enforce_scope.nova and 'service' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ cls.credentials = [
+ 'primary', 'admin', ['service_user', 'service']]
@classmethod
def setup_clients(cls):
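
The hunk above narrows the credential set at class setup time based on two config values. A hedged, standalone sketch of that selection logic — ``FakeConf`` and ``pick_credentials`` are stand-ins for illustration, not the real tempest objects:

```python
# Stand-in for the two CONF values the hunk above reads; not the real
# tempest config object.
class FakeConf:
    def __init__(self, enforce_scope_nova, nova_policy_roles):
        self.enforce_scope_nova = enforce_scope_nova
        self.nova_policy_roles = nova_policy_roles


def pick_credentials(conf):
    """Drop the admin role from the service user once nova enforces
    the new policy defaults with a service role available."""
    creds = ['primary', 'admin', ['service_user', 'admin', 'service']]
    if conf.enforce_scope_nova and 'service' in conf.nova_policy_roles:
        creds = ['primary', 'admin', ['service_user', 'service']]
    return creds
```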
diff --git a/tempest/api/compute/admin/test_auto_allocate_network.py b/tempest/api/compute/admin/test_auto_allocate_network.py
index e8011a6..ca654b5 100644
--- a/tempest/api/compute/admin/test_auto_allocate_network.py
+++ b/tempest/api/compute/admin/test_auto_allocate_network.py
@@ -37,6 +37,8 @@
calls to Neutron to automatically allocate the network topology.
"""
+ credentials = ['primary', 'project_reader']
+
force_tenant_isolation = True
min_microversion = '2.37'
@@ -65,6 +67,14 @@
cls.routers_client = cls.os_primary.routers_client
cls.subnets_client = cls.os_primary.subnets_client
cls.ports_client = cls.os_primary.ports_client
+ if CONF.enforce_scope.nova:
+ cls.reader_networks_client = cls.os_project_reader.networks_client
+ cls.reader_routers_client = cls.os_project_reader.routers_client
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_networks_client = cls.networks_client
+ cls.reader_routers_client = cls.routers_client
+ cls.reader_ports_client = cls.ports_client
@classmethod
def resource_setup(cls):
@@ -74,14 +84,14 @@
tenant_id = cls.networks_client.tenant_id
# (1) Retrieve non-public network list owned by the tenant.
search_opts = {'tenant_id': tenant_id, 'shared': False}
- nets = cls.networks_client.list_networks(
+ nets = cls.reader_networks_client.list_networks(
**search_opts).get('networks', [])
if nets:
raise lib_excs.TempestException(
'Found tenant networks: %s' % nets)
# (2) Retrieve shared network list.
search_opts = {'shared': True}
- nets = cls.networks_client.list_networks(
+ nets = cls.reader_networks_client.list_networks(
**search_opts).get('networks', [])
if nets:
raise cls.skipException('Found shared networks: %s' % nets)
@@ -93,7 +103,7 @@
# Find the auto-allocated router for the tenant.
# This is a bit hacky since we don't have a great way to find the
# auto-allocated router given the private tenant network we have.
- routers = cls.routers_client.list_routers().get('routers', [])
+ routers = cls.reader_routers_client.list_routers().get('routers', [])
if len(routers) > 1:
# This indicates a race where nova is concurrently calling the
# neutron auto-allocated-topology API for multiple server builds
@@ -109,7 +119,7 @@
# created. All such networks will be in the current tenant. Neutron
# will cleanup duplicate resources automatically, so ignore 404s.
search_opts = {'tenant_id': cls.networks_client.tenant_id}
- networks = cls.networks_client.list_networks(
+ networks = cls.reader_networks_client.list_networks(
**search_opts).get('networks', [])
for router in routers:
@@ -127,7 +137,7 @@
for network in networks:
# Get and delete the ports for the given network.
- ports = cls.ports_client.list_ports(
+ ports = cls.reader_ports_client.list_ports(
network_id=network['id']).get('ports', [])
for port in ports:
test_utils.call_and_ignore_notfound_exc(
@@ -150,7 +160,7 @@
# create the server with no networking
server = self.create_test_server(networks='none', wait_until='ACTIVE')
# get the server ips
- addresses = self.servers_client.list_addresses(
+ addresses = self.reader_servers_client.list_addresses(
server['id'])['addresses']
# assert that there is no networking
self.assertEqual({}, addresses)
@@ -180,7 +190,7 @@
server_nets = set()
for server in servers:
# get the server ips
- addresses = self.servers_client.list_addresses(
+ addresses = self.reader_servers_client.list_addresses(
server['id'])['addresses']
# assert that there is networking (should only be one)
self.assertEqual(1, len(addresses))
@@ -196,7 +206,7 @@
search_opts = {'tenant_id': self.networks_client.tenant_id,
'shared': False,
'admin_state_up': True}
- nets = self.networks_client.list_networks(
+ nets = self.reader_networks_client.list_networks(
**search_opts).get('networks', [])
self.assertEqual(1, len(nets))
# verify the single private tenant network is the one that the servers
diff --git a/tempest/api/compute/admin/test_availability_zone.py b/tempest/api/compute/admin/test_availability_zone.py
index 3eb0d9a..10730c2 100644
--- a/tempest/api/compute/admin/test_availability_zone.py
+++ b/tempest/api/compute/admin/test_availability_zone.py
@@ -14,21 +14,30 @@
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class AZAdminV2TestJSON(base.BaseV2ComputeAdminTest):
"""Tests Availability Zone API List"""
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(AZAdminV2TestJSON, cls).setup_clients()
cls.client = cls.availability_zone_admin_client
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.availability_zone_client
+ else:
+ cls.reader_client = cls.availability_zone_client
@decorators.idempotent_id('d3431479-8a09-4f76-aa2d-26dc580cb27c')
def test_get_availability_zone_list(self):
"""Test listing availability zones"""
- availability_zone = self.client.list_availability_zones()
+ availability_zone = self.reader_client.list_availability_zones()
self.assertNotEmpty(availability_zone['availabilityZoneInfo'])
@decorators.idempotent_id('ef726c58-530f-44c2-968c-c7bed22d5b8c')
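
The ``setup_clients()`` wiring above recurs throughout this change: read-only calls go through a ``project_reader`` credential when nova enforces scope, and fall back to the primary project's client otherwise. A minimal sketch of that fallback, with placeholder arguments standing in for the real client objects:

```python
# Generalized form of the reader-client fallback used across these
# test classes; the client arguments are placeholders for the real
# tempest service clients.
def pick_reader_client(enforce_scope_nova, reader_client, primary_client):
    """Return the client that read-only test calls should use."""
    if enforce_scope_nova:
        return reader_client
    return primary_client
```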
diff --git a/tempest/api/compute/admin/test_create_server.py b/tempest/api/compute/admin/test_create_server.py
index 293e284..b56faa9 100644
--- a/tempest/api/compute/admin/test_create_server.py
+++ b/tempest/api/compute/admin/test_create_server.py
@@ -29,6 +29,8 @@
class ServersWithSpecificFlavorTestJSON(base.BaseV2ComputeAdminTest):
"""Test creating servers with specific flavor"""
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_credentials(cls):
cls.prepare_instance_network()
@@ -38,6 +40,10 @@
def setup_clients(cls):
super(ServersWithSpecificFlavorTestJSON, cls).setup_clients()
cls.client = cls.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
@decorators.idempotent_id('b3c7bcfc-bb5b-4e22-b517-c7f686b802ca')
@testtools.skipUnless(CONF.validation.run_validation,
@@ -46,7 +52,7 @@
"Aarch64 does not support ephemeral disk test")
def test_verify_created_server_ephemeral_disk(self):
"""Verify that the ephemeral disk is created when creating server"""
- flavor_base = self.flavors_client.show_flavor(
+ flavor_base = self.reader_flavors_client.show_flavor(
self.flavor_ref)['flavor']
def create_flavor_with_ephemeral(ephem_disk):
@@ -67,7 +73,7 @@
# create server which should have been contained in
# self.flavor_ref.
extra_spec_keys = \
- self.admin_flavors_client.list_flavor_extra_specs(
+ self.reader_flavors_client.list_flavor_extra_specs(
self.flavor_ref)['extra_specs']
if extra_spec_keys:
self.admin_flavors_client.set_flavor_extra_spec(
@@ -96,7 +102,7 @@
server_no_eph_disk['id'])
# Get partition number of server without ephemeral disk.
- server_no_eph_disk = self.client.show_server(
+ server_no_eph_disk = self.reader_servers_client.show_server(
server_no_eph_disk['id'])['server']
linux_client = remote_client.RemoteClient(
self.get_server_ip(server_no_eph_disk,
@@ -124,7 +130,7 @@
self.servers_client.delete_server,
server_with_eph_disk['id'])
- server_with_eph_disk = self.client.show_server(
+ server_with_eph_disk = self.reader_servers_client.show_server(
server_with_eph_disk['id'])['server']
linux_client = remote_client.RemoteClient(
self.get_server_ip(server_with_eph_disk,
diff --git a/tempest/api/compute/admin/test_delete_server.py b/tempest/api/compute/admin/test_delete_server.py
index c625939..982c3a7 100644
--- a/tempest/api/compute/admin/test_delete_server.py
+++ b/tempest/api/compute/admin/test_delete_server.py
@@ -15,12 +15,17 @@
from tempest.api.compute import base
from tempest.common import waiters
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class DeleteServersAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Test deletion of servers"""
+ credentials = ['primary', 'admin', 'project_reader']
+
# NOTE: Server creations of each test class should be under 10
# for preventing "Quota exceeded for instances".
@@ -36,7 +41,7 @@
server = self.create_test_server(wait_until='ACTIVE')
self.admin_client.reset_state(server['id'], state='error')
# Verify server's state
- server = self.non_admin_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual(server['status'], 'ERROR')
self.non_admin_client.delete_server(server['id'])
waiters.wait_for_server_termination(self.servers_client,
diff --git a/tempest/api/compute/admin/test_flavors.py b/tempest/api/compute/admin/test_flavors.py
index cece905..48a2867 100644
--- a/tempest/api/compute/admin/test_flavors.py
+++ b/tempest/api/compute/admin/test_flavors.py
@@ -27,6 +27,16 @@
class FlavorsAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Flavors API Create and Delete that require admin privileges"""
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsAdminTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
+
@classmethod
def resource_setup(cls):
super(FlavorsAdminTestJSON, cls).resource_setup()
@@ -92,7 +102,7 @@
rxtx_factor=self.rxtx)
# Check if flavor is present in list
- flavors_list = self.admin_flavors_client.list_flavors(
+ flavors_list = self.reader_flavors_client.list_flavors(
detail=True)['flavors']
self.assertIn(flavor_name, [f['name'] for f in flavors_list])
@@ -130,13 +140,15 @@
verify_flavor_response_extension(flavor)
# Verify flavor is retrieved
- flavor = self.admin_flavors_client.show_flavor(new_flavor_id)['flavor']
+ flavor = self.reader_flavors_client.show_flavor(
+ new_flavor_id)['flavor']
self.assertEqual(flavor['name'], flavor_name)
verify_flavor_response_extension(flavor)
# Check if flavor is present in list
flavors_list = [
- f for f in self.flavors_client.list_flavors(detail=True)['flavors']
+ f for f in self.reader_flavors_client.list_flavors(
+ detail=True)['flavors']
if f['name'] == flavor_name
]
self.assertNotEmpty(flavors_list)
@@ -160,7 +172,7 @@
disk=self.disk,
is_public="False")
# Verify flavor is not retrieved
- flavors_list = self.admin_flavors_client.list_flavors(
+ flavors_list = self.reader_flavors_client.list_flavors(
detail=True)['flavors']
self.assertNotIn(flavor_name, [f['name'] for f in flavors_list])
@@ -197,7 +209,8 @@
disk=self.disk,
is_public="True")
# Verify flavor is retrieved with new user
- flavors_list = self.flavors_client.list_flavors(detail=True)['flavors']
+ flavors_list = self.reader_flavors_client.list_flavors(detail=True)[
+ 'flavors']
self.assertIn(flavor_name, [f['name'] for f in flavors_list])
@decorators.idempotent_id('fb9cbde6-3a0e-41f2-a983-bdb0a823c44e')
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index c86ff76..f12d239 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -14,8 +14,11 @@
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class FlavorsAccessTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Flavor Access API extension.
@@ -23,6 +26,16 @@
Add and remove Flavor Access require admin privileges.
"""
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsAccessTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
+
@classmethod
def resource_setup(cls):
super(FlavorsAccessTestJSON, cls).resource_setup()
@@ -64,7 +77,8 @@
self.assertIn(resp_body, add_body)
# The flavor is present in list.
- flavors = self.flavors_client.list_flavors(detail=True)['flavors']
+ flavors = self.reader_flavors_client.list_flavors(
+ detail=True)['flavors']
self.assertIn(flavor['id'], map(lambda x: x['id'], flavors))
# Remove flavor access from a tenant.
@@ -73,5 +87,6 @@
self.assertNotIn(resp_body, remove_body)
# The flavor is not present in list.
- flavors = self.flavors_client.list_flavors(detail=True)['flavors']
+ flavors = self.reader_flavors_client.list_flavors(
+ detail=True)['flavors']
self.assertNotIn(flavor['id'], map(lambda x: x['id'], flavors))
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs.py b/tempest/api/compute/admin/test_flavors_extra_specs.py
index 5829269..cd15d76 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs.py
@@ -28,6 +28,16 @@
GET Flavor Extra specs can be performed even by without admin privileges.
"""
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsExtraSpecsTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
+
@classmethod
def resource_setup(cls):
super(FlavorsExtraSpecsTestJSON, cls).resource_setup()
@@ -69,7 +79,7 @@
self.flavor['id'], **specs)['extra_specs']
self.assertEqual(set_body, specs)
# GET extra specs and verify
- get_body = (self.admin_flavors_client.list_flavor_extra_specs(
+ get_body = (self.reader_flavors_client.list_flavor_extra_specs(
self.flavor['id'])['extra_specs'])
self.assertEqual(get_body, specs)
@@ -80,7 +90,7 @@
# GET extra specs and verify the value of the 'hw:cpu_policy'
# is the same as before
- get_body = self.admin_flavors_client.list_flavor_extra_specs(
+ get_body = self.reader_flavors_client.list_flavor_extra_specs(
self.flavor['id'])['extra_specs']
self.assertEqual(
get_body, {'hw:numa_nodes': '2', 'hw:cpu_policy': 'shared'}
@@ -93,7 +103,7 @@
self.admin_flavors_client.unset_flavor_extra_spec(
self.flavor['id'], 'hw:cpu_policy'
)
- get_body = self.admin_flavors_client.list_flavor_extra_specs(
+ get_body = self.reader_flavors_client.list_flavor_extra_specs(
self.flavor['id'])['extra_specs']
self.assertEmpty(get_body)
@@ -103,7 +113,7 @@
specs = {'hw:numa_nodes': '1', 'hw:cpu_policy': 'shared'}
self.admin_flavors_client.set_flavor_extra_spec(self.flavor['id'],
**specs)
- body = (self.flavors_client.list_flavor_extra_specs(
+ body = (self.reader_flavors_client.list_flavor_extra_specs(
self.flavor['id'])['extra_specs'])
for key in specs:
@@ -119,7 +129,7 @@
self.assertEqual(body['hw:numa_nodes'], '1')
self.assertIn('hw:cpu_policy', body)
- body = self.flavors_client.show_flavor_extra_spec(
+ body = self.reader_flavors_client.show_flavor_extra_spec(
self.flavor['id'], 'hw:numa_nodes')
self.assertEqual(body['hw:numa_nodes'], '1')
self.assertNotIn('hw:cpu_policy', body)
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
index 7f518d2..f6d15f5 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
@@ -29,6 +29,16 @@
SET, UNSET, UPDATE Flavor Extra specs require admin privileges.
"""
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsExtraSpecsNegativeTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
+
@classmethod
def resource_setup(cls):
super(FlavorsExtraSpecsNegativeTestJSON, cls).resource_setup()
@@ -110,7 +120,7 @@
def test_flavor_get_nonexistent_key(self):
"""Getting non existence flavor extra spec key should fail"""
self.assertRaises(lib_exc.NotFound,
- self.flavors_client.show_flavor_extra_spec,
+ self.reader_flavors_client.show_flavor_extra_spec,
self.flavor['id'],
'hw:cpu_thread_policy')
diff --git a/tempest/api/compute/admin/test_flavors_microversions.py b/tempest/api/compute/admin/test_flavors_microversions.py
index d904cbd..4380326 100644
--- a/tempest/api/compute/admin/test_flavors_microversions.py
+++ b/tempest/api/compute/admin/test_flavors_microversions.py
@@ -13,9 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+CONF = config.CONF
+
class FlavorsV255TestJSON(base.BaseV2ComputeAdminTest):
"""Test flavors API with compute microversion greater than 2.54"""
@@ -23,9 +26,19 @@
min_microversion = '2.55'
max_microversion = 'latest'
+ credentials = ['primary', 'admin', 'project_reader']
+
# NOTE(gmann): This class tests the flavors APIs
# response schema for the 2.55 microversion.
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsV255TestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
+
@decorators.idempotent_id('61976b25-488d-41dc-9dcb-cb9693a7b075')
def test_crud_flavor(self):
"""Test create/show/update/list flavor
@@ -40,14 +53,14 @@
disk=10,
id=flavor_id)['id']
# Checking show API response schema
- self.flavors_client.show_flavor(new_flavor_id)
+ self.reader_flavors_client.show_flavor(new_flavor_id)
# Checking update API response schema
self.admin_flavors_client.update_flavor(new_flavor_id,
description='new')
# Checking list details API response schema
- self.flavors_client.list_flavors(detail=True)
+ self.reader_flavors_client.list_flavors(detail=True)
# Checking list API response schema
- self.flavors_client.list_flavors()
+ self.reader_flavors_client.list_flavors()
class FlavorsV261TestJSON(FlavorsV255TestJSON):
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index f6a1ae9..5b7614d 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -26,6 +26,7 @@
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exceptions
CONF = config.CONF
LOG = logging.getLogger(__name__)
@@ -34,6 +35,7 @@
class LiveMigrationTestBase(base.BaseV2ComputeAdminTest):
"""Test live migration operations supported by admin user"""
+ credentials = ['primary', 'admin', 'project_manager', 'project_reader']
create_default_network = True
@classmethod
@@ -51,13 +53,19 @@
@classmethod
def setup_clients(cls):
super(LiveMigrationTestBase, cls).setup_clients()
- cls.admin_migration_client = cls.os_admin.migrations_client
+ cls.migration_client = cls.os_admin.migrations_client
cls.networks_client = cls.os_primary.networks_client
cls.subnets_client = cls.os_primary.subnets_client
cls.ports_client = cls.os_primary.ports_client
cls.trunks_client = cls.os_primary.trunks_client
+ cls.server_client = cls.admin_servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_ports_client = cls.ports_client
- def _migrate_server_to(self, server_id, dest_host, volume_backed=False):
+ def _migrate_server_to(self, server_id, dest_host, volume_backed=False,
+ use_manager_client=False):
kwargs = dict()
block_migration = getattr(self, 'block_migration', None)
if self.block_migration is None:
@@ -66,7 +74,19 @@
block_migration = (CONF.compute_feature_enabled.
block_migration_for_live_migration and
not volume_backed)
- self.admin_servers_client.live_migrate_server(
+ # Avoid changing self.server_client permanently because
+ # [compute_feature_enabled]live_migrate_back_and_forth might be
+ # set to True. If it is, the test will live migrate the server back to
+ # the source using the os-migrate-server:migrate_live:host API, which
+ # is not allowed for the project manager by default policy.
+ server_client = self.server_client
+ if use_manager_client:
+ server_client = self.os_project_manager.servers_client
+ LOG.info("Using project manager for live migrating server: %s, "
+ "project manager user id: %s",
+ server_id, server_client.user_id)
+
+ server_client.live_migrate_server(
server_id, host=dest_host, block_migration=block_migration,
**kwargs)
@@ -74,11 +94,21 @@
volume_backed=False):
# If target_host is None, check whether the source host is different
# from the new host after migration.
+ use_manager_client = False
if target_host is None:
source_host = self.get_host_for_server(server_id)
- self._migrate_server_to(server_id, target_host, volume_backed)
- waiters.wait_for_server_status(self.servers_client, server_id, state)
- migration_list = (self.admin_migration_client.list_migrations()
+ # NOTE(gmaan): If the new policy is enforced and the manager role
+ # is present in nova, use the manager user to live migrate.
+ if (CONF.enforce_scope.nova and 'manager' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ use_manager_client = True
+
+ self._migrate_server_to(server_id, target_host, volume_backed,
+ use_manager_client)
+ waiters.wait_for_server_status(
+ self.reader_servers_client, server_id, state)
+
+ migration_list = (self.os_admin.migrations_client.list_migrations()
['migrations'])
msg = ("Live Migration failed. Migrations list for Instance "
@@ -98,6 +128,9 @@
class LiveMigrationTest(LiveMigrationTestBase):
max_microversion = '2.24'
block_migration = None
+ # Whether the test case requests a destination host from Nova;
+ # if not, the Nova scheduler will pick one.
+ request_host = True
@classmethod
def setup_credentials(cls):
@@ -119,15 +152,16 @@
server_id = self.create_test_server(wait_until="ACTIVE",
volume_backed=volume_backed)['id']
source_host = self.get_host_for_server(server_id)
- if not CONF.compute_feature_enabled.can_migrate_between_any_hosts:
+ if (self.request_host and
+ CONF.compute_feature_enabled.can_migrate_between_any_hosts):
+ destination_host = self.get_host_other_than(server_id)
+ else:
# not to specify a host so that the scheduler will pick one
destination_host = None
- else:
- destination_host = self.get_host_other_than(server_id)
if state == 'PAUSED':
self.admin_servers_client.pause_server(server_id)
- waiters.wait_for_server_status(self.admin_servers_client,
+ waiters.wait_for_server_status(self.reader_servers_client,
server_id, state)
LOG.info("Live migrate from source %s to destination %s",
@@ -201,11 +235,11 @@
# Attach the volume to the server
self.attach_volume(server, volume, device='/dev/xvdb',
wait_for_detach=False)
- server = self.admin_servers_client.show_server(server_id)['server']
+ server = self.reader_servers_client.show_server(server_id)['server']
volume_id1 = server["os-extended-volumes:volumes_attached"][0]["id"]
self._live_migrate(server_id, target_host, 'ACTIVE')
- server = self.admin_servers_client.show_server(server_id)['server']
+ server = self.reader_servers_client.show_server(server_id)['server']
volume_id2 = server["os-extended-volumes:volumes_attached"][0]["id"]
self.assertEqual(volume_id1, volume_id2)
@@ -255,7 +289,7 @@
return trunk, parent, subport
def _is_port_status_active(self, port_id):
- port = self.ports_client.show_port(port_id)['port']
+ port = self.reader_ports_client.show_port(port_id)['port']
return port['status'] == 'ACTIVE'
@decorators.unstable_test(bug='2024160')
@@ -280,7 +314,7 @@
test_utils.call_until_true(
self._is_port_status_active, CONF.validation.connect_timeout,
5, parent['id']))
- subport = self.ports_client.show_port(subport['id'])['port']
+ subport = self.reader_ports_client.show_port(subport['id'])['port']
if not CONF.compute_feature_enabled.can_migrate_between_any_hosts:
# not to specify a host so that the scheduler will pick one
@@ -381,3 +415,153 @@
min_microversion = '2.25'
max_microversion = 'latest'
block_migration = 'auto'
+
+
+class LiveMigrationWithoutHostTest(LiveMigrationTest):
+ # Test live migrations without a host and let the Nova scheduler pick one.
+ request_host = False
+
+ @classmethod
+ def skip_checks(cls):
+ super(LiveMigrationWithoutHostTest, cls).skip_checks()
+ if not CONF.compute_feature_enabled.can_migrate_between_any_hosts:
+ skip_msg = ("Existing live migration tests are configured to "
+ "live migrate without requesting a host.")
+ raise cls.skipException(skip_msg)
+
+
+class LiveMigrationManagerWorkflowTest(LiveMigrationTestBase):
+ # This test exercises the live migration workflow for the project manager.
+ # NOTE(gmaan): microversion 2.80 adds the project_id query param to the
+ # list migrations API, which the project manager needs in order to request
+ # their own project's migration list.
+ min_microversion = '2.80'
+ block_migration = None
+ request_host = False
+
+ @classmethod
+ def skip_checks(cls):
+ super(LiveMigrationManagerWorkflowTest, cls).skip_checks()
+ # If the new RBAC defaults (manager defaults) are not present, the
+ # existing test cases already cover the live migration scenarios.
+ if (not CONF.enforce_scope.nova or 'manager' not in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ skip_msg = ("Nova RBAC new defaults are not enabled or manager "
+ "role is not present so skipping the project manager "
+ "specific test.")
+ raise cls.skipException(skip_msg)
+
+ @classmethod
+ def setup_clients(cls):
+ super(LiveMigrationManagerWorkflowTest, cls).setup_clients()
+ cls.mgr_server_client = cls.os_project_manager.servers_client
+ LOG.info("Using project manager for live migrating servers, "
+ "project manager user id: %s",
+ cls.mgr_server_client.user_id)
+
+ def _initiate_live_migration(self):
+ # The project member creates the server.
+ server_id = self.create_test_server(wait_until="ACTIVE")['id']
+ source_host = self.get_host_for_server(server_id)
+ # The project manager does not know the host info, so they do not
+ # specify a host and instead let the nova scheduler pick one.
+ dest_host = None
+
+ LOG.info("Live migrate from source %s", source_host)
+ # Live migrate an instance to another host
+ self._migrate_server_to(
+ server_id, dest_host, use_manager_client=True)
+ # Try to list the in-progress live migration. This list can be empty
+ # if the migration is already completed.
+ in_progress_migrations = (
+ self.mgr_server_client.list_in_progress_live_migration(
+ server_id)['migrations'])
+ LOG.info("in-progress live migrations: %s", in_progress_migrations)
+
+ in_progress_migration_uuid = None
+ for migration in in_progress_migrations:
+ if (migration['server_uuid'] == server_id):
+ in_progress_migration_uuid = migration['uuid']
+ # The project manager should not get any host-related fields.
+ self.assertIsNone(migration['dest_compute'])
+ self.assertIsNone(migration['dest_host'])
+ self.assertIsNone(migration['dest_node'])
+ self.assertIsNone(migration['source_compute'])
+ self.assertIsNone(migration['source_node'])
+
+ return server_id, source_host, in_progress_migration_uuid
+
+ @decorators.attr(type='multinode')
+ @decorators.idempotent_id('cc4e2431-4476-49b0-9a80-d7a2f638f091')
+ def test_live_migration_by_project_manager(self):
+ """Tests the live migration workflow for the project manager.
+
+ - Create a server as the project member.
+ - The project manager performs the below steps:
+
+ * Initiate the live migration.
+ * List the in-progress live migration. If the migration is already
+ completed the list may be empty, but the test does not fail on an
+ empty list.
+ * Assuming the migration is in progress, force complete the live
+ migration. The test does not fail if the migration is already
+ completed and the force complete request raises an error.
+ * Wait for the server to be active.
+ * List migrations with the project_id filter, i.e. request only their
+ own project's migrations. This should return the migration initiated
+ by the project manager.
+ * Check that the server is migrated to a different host than the
+ source host.
+ """
+ server_id, source_host, in_progress_migration_uuid = (
+ self._initiate_live_migration())
+ try:
+ # If we know the migration is in progress, try to force complete
+ # it, but there is a chance the migration completes before the test
+ # sends the request to nova. In that case, the test will not fail;
+ # it skips the force complete and resumes with the next step.
+ if in_progress_migration_uuid:
+ LOG.info("Starting force complete of live migration: %s",
+ in_progress_migration_uuid)
+ self.mgr_server_client.force_complete_live_migration(
+ server_id, in_progress_migration_uuid)
+ LOG.info("Finished force complete of live migration: %s",
+ in_progress_migration_uuid)
+ except lib_exceptions.BadRequest:
+ # If migration is already completed then nova will raise
+ # HTTPBadRequest. In that case, log the info and execute the
+ # rest of the steps.
+ LOG.info("Server %s live migration %s is already completed. Due "
+ "to that force completed live migration is not "
+ "performed.", server_id, in_progress_migration_uuid)
+
+ waiters.wait_for_server_status(self.reader_servers_client,
+ server_id, 'ACTIVE')
+ # List migrations with project_id as a filter so that the manager
+ # gets only their own project's migrations.
+ mgr_migration_client = self.os_project_manager.migrations_client
+ project_id = mgr_migration_client.project_id
+ migrations = (mgr_migration_client.list_migrations(
+ migration_type='live-migration',
+ project_id=project_id)['migrations'])
+ migration_uuid = None
+ LOG.info("Project %s migrations list: %s", project_id, migrations)
+ for migration in migrations:
+ if (migration['instance_uuid'] == server_id):
+ migration_uuid = migration['uuid']
+ # Check that the project manager does not get other projects' migrations.
+ self.assertEqual(project_id, migration['project_id'])
+ # The project manager should not get any host-related fields.
+ self.assertIsNone(migration['dest_compute'])
+ self.assertIsNone(migration['dest_host'])
+ self.assertIsNone(migration['dest_node'])
+ self.assertIsNone(migration['source_compute'])
+ self.assertIsNone(migration['source_node'])
+ self.assertIsNotNone(migration_uuid)
+ if in_progress_migration_uuid:
+ self.assertEqual(in_progress_migration_uuid, migration_uuid)
+ msg = ("Server %s live migration %s failed." %
+ (server_id, migration_uuid))
+ # Check that the server is migrated to a different host than the source.
+ self.assertNotEqual(source_host,
+ self.get_host_for_server(server_id), msg)
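The force-complete step above deliberately tolerates a race: Nova may finish the live migration before the force-complete request lands, in which case it answers with HTTP 400 and the test logs and moves on. A minimal sketch of that pattern, using hypothetical stand-ins (`BadRequest`, the callable clients) rather than the real `tempest.lib` objects:

```python
# Race-tolerant force-complete: issue the request, but treat "already
# finished" (signalled by BadRequest) as success rather than failure.
# BadRequest and the client callables are stand-ins for illustration.

class BadRequest(Exception):
    pass

def force_complete(client, server_id, migration_uuid):
    """Return True if force-complete was issued, False if already done."""
    try:
        client(server_id, migration_uuid)
        return True
    except BadRequest:
        # Migration already completed; skip and continue with next steps.
        return False

def still_running(server_id, migration_uuid):
    return None  # stand-in: request accepted, migration in progress

def already_done(server_id, migration_uuid):
    raise BadRequest("Migration is not in a running state")

assert force_complete(still_running, "srv-1", "mig-1") is True
assert force_complete(already_done, "srv-1", "mig-1") is False
```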
diff --git a/tempest/api/compute/admin/test_live_migration_negative.py b/tempest/api/compute/admin/test_live_migration_negative.py
index c956d99..a5b6d71 100644
--- a/tempest/api/compute/admin/test_live_migration_negative.py
+++ b/tempest/api/compute/admin/test_live_migration_negative.py
@@ -26,6 +26,12 @@
class LiveMigrationNegativeTest(base.BaseV2ComputeAdminTest):
"""Negative tests of live migration"""
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(LiveMigrationNegativeTest, cls).setup_clients()
+
@classmethod
def skip_checks(cls):
super(LiveMigrationNegativeTest, cls).skip_checks()
@@ -49,8 +55,8 @@
self.assertRaises(lib_exc.BadRequest, self._migrate_server_to,
server['id'], target_host)
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'ACTIVE')
+ waiters.wait_for_server_status(self.reader_servers_client,
+ server['id'], 'ACTIVE')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('6e2f94f5-2ee8-4830-bef5-5bc95bb0795b')
@@ -59,7 +65,7 @@
server = self.create_test_server(wait_until="ACTIVE")
self.admin_servers_client.suspend_server(server['id'])
- waiters.wait_for_server_status(self.servers_client,
+ waiters.wait_for_server_status(self.reader_servers_client,
server['id'], 'SUSPENDED')
destination_host = self.get_host_other_than(server['id'])
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index fa8a737..fa9d68f 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+from oslo_log import log as logging
import testtools
from tempest.api.compute import base
@@ -22,15 +23,31 @@
from tempest.lib import exceptions
CONF = config.CONF
+LOG = logging.getLogger(__name__)
class MigrationsAdminTest(base.BaseV2ComputeAdminTest):
"""Test migration operations supported by admin user"""
+ credentials = ['primary', 'admin', 'project_manager', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(MigrationsAdminTest, cls).setup_clients()
cls.client = cls.os_admin.migrations_client
+ cls.mgr_server_client = cls.admin_servers_client
+ # NOTE(gmaan): If the new policy is enforced and the manager role
+ # is present in nova, use the manager user to migrate servers.
+ if (CONF.enforce_scope.nova and 'manager' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ cls.mgr_server_client = cls.os_project_manager.servers_client
+ LOG.info("Using project manager for migrating servers, "
+ "project manager user id: %s",
+ cls.mgr_server_client.user_id)
+ if CONF.enforce_scope.nova:
+ cls.reader_flavors_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_flavors_client = cls.flavors_client
@decorators.idempotent_id('75c0b83d-72a0-4cf8-a153-631e83e7d53f')
def test_list_migrations(self):
@@ -71,7 +88,7 @@
# First we have to create a flavor that we can delete so make a copy
# of the normal flavor from which we'd create a server.
- flavor = self.admin_flavors_client.show_flavor(
+ flavor = self.reader_flavors_client.show_flavor(
self.flavor_ref)['flavor']
flavor = self.admin_flavors_client.create_flavor(
name=data_utils.rand_name(
@@ -87,7 +104,7 @@
# because the environment may need some special extra specs to
# create server which should have been contained in
# self.flavor_ref.
- extra_spec_keys = self.admin_flavors_client.list_flavor_extra_specs(
+ extra_spec_keys = self.reader_flavors_client.list_flavor_extra_specs(
self.flavor_ref)['extra_specs']
if extra_spec_keys:
self.admin_flavors_client.set_flavor_extra_spec(
@@ -96,14 +113,15 @@
# Now boot a server with the copied flavor.
server = self.create_test_server(
wait_until='ACTIVE', flavor=flavor['id'])
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
# If 'id' not in server['flavor'], we can only compare the flavor
# details, so here we should save the to-be-deleted flavor's details,
# for the flavor comparison after the server resizing.
if not server['flavor'].get('id'):
pre_flavor = {}
- body = self.flavors_client.show_flavor(flavor['id'])['flavor']
+ body = (self.reader_flavors_client.show_flavor(flavor['id'])
+ ['flavor'])
for key in ['name', 'ram', 'vcpus', 'disk']:
pre_flavor[key] = body[key]
@@ -112,16 +130,16 @@
# Now resize the server and wait for it to go into verify state.
self.servers_client.resize_server(server['id'], self.flavor_ref_alt)
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'VERIFY_RESIZE')
+ waiters.wait_for_server_status(self.reader_servers_client,
+ server['id'], 'VERIFY_RESIZE')
# Now revert the resize, it should be OK even though the original
# flavor used to boot the server was deleted.
self.servers_client.revert_resize_server(server['id'])
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'ACTIVE')
+ waiters.wait_for_server_status(self.reader_servers_client,
+ server['id'], 'ACTIVE')
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
if server['flavor'].get('id'):
msg = ('server flavor is not same as flavor!')
self.assertEqual(flavor['id'], server['flavor']['id'], msg)
@@ -143,9 +161,9 @@
server = self.create_test_server(wait_until="ACTIVE")
src_host = self.get_host_for_server(server['id'])
- self.admin_servers_client.migrate_server(server['id'])
+ self.mgr_server_client.migrate_server(server['id'])
- waiters.wait_for_server_status(self.servers_client,
+ waiters.wait_for_server_status(self.reader_servers_client,
server['id'], 'VERIFY_RESIZE')
if revert:
@@ -155,7 +173,7 @@
self.servers_client.confirm_resize_server(server['id'])
assert_func = self.assertNotEqual
- waiters.wait_for_server_status(self.servers_client,
+ waiters.wait_for_server_status(self.reader_servers_client,
server['id'], 'ACTIVE')
dst_host = self.get_host_for_server(server['id'])
assert_func(src_host, dst_host)
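The recurring `reader_*_client` assignments in these hunks all follow one selection rule: when Nova enforces the new RBAC defaults (`CONF.enforce_scope.nova`), read-only calls go through a project_reader-scoped client, otherwise they fall back to the client the tests used before. A self-contained sketch of just that rule, with hypothetical stand-in objects in place of the real tempest clients:

```python
# Client-selection pattern applied across the compute admin tests:
# prefer the least-privileged (project_reader) client for reads when
# the new RBAC defaults are enforced. FakeClient is a stand-in.

class FakeClient:
    def __init__(self, name):
        self.name = name

def pick_reader_client(enforce_scope_nova, legacy_client, reader_client):
    # Mirrors: cls.reader_x_client = os_project_reader.x_client if
    # CONF.enforce_scope.nova else the pre-existing client.
    return reader_client if enforce_scope_nova else legacy_client

legacy = FakeClient("flavors_client")
reader = FakeClient("os_project_reader.flavors_client")

# New RBAC defaults enforced: reads use the project_reader client.
assert pick_reader_client(True, legacy, reader) is reader
# Legacy policy: the original client keeps handling reads.
assert pick_reader_client(False, legacy, reader) is legacy
```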
diff --git a/tempest/api/compute/admin/test_networks.py b/tempest/api/compute/admin/test_networks.py
index d7fb62d..64b9cc0 100644
--- a/tempest/api/compute/admin/test_networks.py
+++ b/tempest/api/compute/admin/test_networks.py
@@ -28,15 +28,21 @@
"""
max_microversion = '2.35'
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(NetworksTest, cls).setup_clients()
cls.client = cls.os_admin.compute_networks_client
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.compute_networks_client
+ else:
+ cls.reader_client = cls.client
@decorators.idempotent_id('d206d211-8912-486f-86e2-a9d090d1f416')
def test_get_network(self):
"""Test getting network from nova side"""
- networks = self.client.list_networks()['networks']
+ networks = self.reader_client.list_networks()['networks']
if CONF.compute.fixed_network_name:
configured_network = [x for x in networks if x['label'] ==
CONF.compute.fixed_network_name]
@@ -51,14 +57,14 @@
raise self.skipException(
"Environment has no known-for-sure existing network.")
configured_network = configured_network[0]
- network = (self.client.show_network(configured_network['id'])
+ network = (self.reader_client.show_network(configured_network['id'])
['network'])
self.assertEqual(configured_network['label'], network['label'])
@decorators.idempotent_id('df3d1046-6fa5-4b2c-ad0c-cfa46a351cb9')
def test_list_all_networks(self):
"""Test getting all networks from nova side"""
- networks = self.client.list_networks()['networks']
+ networks = self.reader_client.list_networks()['networks']
# Check the configured network is in the list
if CONF.compute.fixed_network_name:
configured_network = CONF.compute.fixed_network_name
diff --git a/tempest/api/compute/admin/test_quotas.py b/tempest/api/compute/admin/test_quotas.py
index 70711f5..ad4f625 100644
--- a/tempest/api/compute/admin/test_quotas.py
+++ b/tempest/api/compute/admin/test_quotas.py
@@ -31,6 +31,8 @@
class QuotasAdminTestBase(base.BaseV2ComputeAdminTest):
force_tenant_isolation = True
+ credentials = ['primary', 'admin', 'project_reader']
+
def setUp(self):
# NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
self.useFixture(fixtures.LockFixture('compute_quotas'))
@@ -40,6 +42,10 @@
def setup_clients(cls):
super(QuotasAdminTestBase, cls).setup_clients()
cls.adm_client = cls.os_admin.quotas_client
+ if CONF.enforce_scope.nova:
+ cls.reader_quotas_client = cls.os_project_reader.quotas_client
+ else:
+ cls.reader_quotas_client = cls.quotas_client
def _get_updated_quotas(self):
# Verify that GET shows the updated quota set of project
@@ -110,7 +116,7 @@
def test_get_default_quotas(self):
"""Test admin can get the default compute quota set for a project"""
expected_quota_set = self.default_quota_set | set(['id'])
- quota_set = self.adm_client.show_default_quota_set(
+ quota_set = self.reader_quotas_client.show_default_quota_set(
self.demo_tenant_id)['quota_set']
self.assertEqual(quota_set['id'], self.demo_tenant_id)
for quota in expected_quota_set:
@@ -121,7 +127,7 @@
'Legacy quota update not available with unified limits')
def test_update_all_quota_resources_for_tenant(self):
"""Test admin can update all the compute quota limits for a project"""
- default_quota_set = self.adm_client.show_default_quota_set(
+ default_quota_set = self.reader_quotas_client.show_default_quota_set(
self.demo_tenant_id)['quota_set']
new_quota_set = {'metadata_items': 256, 'ram': 10240,
'key_pairs': 200, 'instances': 20,
@@ -170,15 +176,16 @@
project_id = project['id']
self.addCleanup(identity.identity_utils(self.os_admin).delete_project,
project_id)
- quota_set_default = (self.adm_client.show_quota_set(project_id)
- ['quota_set'])
+ quota_set_default = (self.adm_client.show_quota_set(
+ project_id)['quota_set'])
ram_default = quota_set_default['ram']
self.adm_client.update_quota_set(project_id, ram='5120')
self.adm_client.delete_quota_set(project_id)
- quota_set_new = self.adm_client.show_quota_set(project_id)['quota_set']
+ quota_set_new = (self.adm_client.show_quota_set(
+ project_id)['quota_set'])
self.assertEqual(ram_default, quota_set_new['ram'])
@@ -227,6 +234,8 @@
class QuotaClassesAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests the os-quota-class-sets API to update default quotas."""
+ credentials = ['primary', 'admin', 'project_reader']
+
def setUp(self):
# All test cases in this class need to externally lock on doing
# anything with default quota values.
@@ -237,6 +246,11 @@
def resource_setup(cls):
super(QuotaClassesAdminTestJSON, cls).resource_setup()
cls.adm_client = cls.os_admin.quota_classes_client
+ if CONF.enforce_scope.nova:
+ cls.reader_quota_classes_client = (
+ cls.os_project_reader.quota_classes_client)
+ else:
+ cls.reader_quota_classes_client = cls.adm_client
def _restore_default_quotas(self, original_defaults):
LOG.debug("restoring quota class defaults")
@@ -270,8 +284,8 @@
self.assertThat(update_body.items(),
matchers.ContainsAll(body.items()))
# check quota values are changed
- show_body = self.adm_client.show_quota_class_set(
- 'default')['quota_class_set']
+ show_body = (self.adm_client.show_quota_class_set(
+ 'default')['quota_class_set'])
self.assertThat(show_body.items(),
matchers.ContainsAll(body.items()))
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index ef89cc1..faf1330 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -27,6 +27,8 @@
class QuotasAdminNegativeTestBase(base.BaseV2ComputeAdminTest):
force_tenant_isolation = True
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(QuotasAdminNegativeTestBase, cls).setup_clients()
@@ -34,6 +36,12 @@
cls.adm_client = cls.os_admin.quotas_client
cls.sg_client = cls.security_groups_client
cls.sgr_client = cls.security_group_rules_client
+ if CONF.enforce_scope.nova:
+ cls.reader_quotas_client = cls.os_project_reader.quotas_client
+ cls.reader_limits_client = cls.os_project_reader.limits_client
+ else:
+ cls.reader_quotas_client = cls.client
+ cls.reader_limits_client = cls.limits_client
@classmethod
def resource_setup(cls):
@@ -43,8 +51,8 @@
cls.demo_tenant_id = cls.client.tenant_id
def _update_quota(self, quota_item, quota_value):
- quota_set = (self.adm_client.show_quota_set(self.demo_tenant_id)
- ['quota_set'])
+ quota_set = (self.reader_quotas_client.show_quota_set(
+ self.demo_tenant_id)['quota_set'])
default_quota_value = quota_set[quota_item]
self.adm_client.update_quota_set(self.demo_tenant_id,
@@ -112,8 +120,8 @@
def test_security_groups_exceed_limit(self):
"""Negative test: Creation Security Groups over limit should FAIL"""
# Set the quota to number of used security groups
- sg_quota = self.limits_client.show_limits()['limits']['absolute'][
- 'totalSecurityGroupsUsed']
+ sg_quota = (self.reader_limits_client.show_limits()['limits']
+ ['absolute']['totalSecurityGroupsUsed'])
self._update_quota('security_groups', sg_quota)
# Check we cannot create anymore
diff --git a/tempest/api/compute/admin/test_security_groups.py b/tempest/api/compute/admin/test_security_groups.py
index 41acc94..f614218 100644
--- a/tempest/api/compute/admin/test_security_groups.py
+++ b/tempest/api/compute/admin/test_security_groups.py
@@ -31,11 +31,18 @@
max_microversion = '2.35'
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(SecurityGroupsTestAdminJSON, cls).setup_clients()
cls.adm_client = cls.os_admin.compute_security_groups_client
cls.client = cls.security_groups_client
+ if CONF.enforce_scope.nova:
+ cls.reader_client = (
+ cls.os_project_reader.compute_security_groups_client)
+ else:
+ cls.reader_client = cls.client
def _delete_security_group(self, securitygroup_id, admin=True):
if admin:
@@ -93,8 +100,8 @@
# Fetch all security groups for non-admin user with 'all_tenants'
# search filter
- fetched_list = (self.client.list_security_groups(all_tenants='true')
- ['security_groups'])
+ fetched_list = (self.reader_client.list_security_groups(
+ all_tenants='true')['security_groups'])
sec_group_id_list = [sg['id'] for sg in fetched_list]
# Now check that 'all_tenants='true' filter for non-admin user only
# provide the requested non-admin user's created security groups,
diff --git a/tempest/api/compute/admin/test_server_external_events.py b/tempest/api/compute/admin/test_server_external_events.py
index d867a39..6b26c68 100644
--- a/tempest/api/compute/admin/test_server_external_events.py
+++ b/tempest/api/compute/admin/test_server_external_events.py
@@ -19,11 +19,6 @@
class ServerExternalEventsTest(base.BaseV2ComputeAdminTest):
"""Test server external events test"""
- # TODO(gmann): Remove the admin access to service user
- # once nova change the default of this API to service
- # role. To merge the nova changing the policy default
- # we need to use token with admin as well as service
- # role and later we can use only service token.
credentials = ['primary', 'admin', ['service_user', 'admin', 'service']]
@decorators.idempotent_id('6bbf4723-61d2-4372-af55-7ba27f1c9ba6')
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
index 6c9aafb..62696ee 100644
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -27,6 +27,8 @@
create_default_network = True
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(ServersAdminTestJSON, cls).setup_clients()
@@ -48,7 +50,7 @@
server = cls.create_test_server(name=cls.s2_name,
wait_until='ACTIVE')
cls.s2_id = server['id']
- waiters.wait_for_server_status(cls.non_admin_client,
+ waiters.wait_for_server_status(cls.reader_servers_client,
cls.s1_id, 'ACTIVE')
@decorators.idempotent_id('06f960bb-15bb-48dc-873d-f96e89be7870')
@@ -56,11 +58,11 @@
"""Test filtering the list of servers by server error status"""
params = {'status': 'error'}
self.client.reset_state(self.s1_id, state='error')
- body = self.non_admin_client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
# Reset server's state to 'active'
self.client.reset_state(self.s1_id, state='active')
# Verify server's state
- server = self.client.show_server(self.s1_id)['server']
+ server = self.reader_servers_client.show_server(self.s1_id)['server']
self.assertEqual(server['status'], 'ACTIVE')
servers = body['servers']
# Verify error server in list result
@@ -72,11 +74,13 @@
"""Test filtering the list of servers by invalid server status"""
params = {'status': 'invalid_status'}
if self.is_requested_microversion_compatible('2.37'):
- body = self.client.list_servers(detail=True, **params)
+ body = self.reader_servers_client.list_servers(
+ detail=True, **params)
servers = body['servers']
self.assertEmpty(servers)
else:
- self.assertRaises(lib_exc.BadRequest, self.client.list_servers,
+ self.assertRaises(lib_exc.BadRequest,
+ self.reader_servers_client.list_servers,
detail=True, **params)
@decorators.idempotent_id('51717b38-bdc1-458b-b636-1cf82d99f62f')
@@ -154,7 +158,8 @@
nonexistent_params = {'host': 'nonexistent_host',
'all_tenants': '1'}
- nonexistent_body = self.client.list_servers(**nonexistent_params)
+ nonexistent_body = self.client.list_servers(
+ **nonexistent_params)
nonexistent_servers = nonexistent_body['servers']
self.assertNotIn(server['id'],
map(lambda x: x['id'], nonexistent_servers))
@@ -166,14 +171,14 @@
self.client.reset_state(self.s1_id, state='error')
# Verify server's state
- server = self.client.show_server(self.s1_id)['server']
+ server = self.reader_servers_client.show_server(self.s1_id)['server']
self.assertEqual(server['status'], 'ERROR')
# Reset server's state to 'active'
self.client.reset_state(self.s1_id, state='active')
# Verify server's state
- server = self.client.show_server(self.s1_id)['server']
+ server = self.reader_servers_client.show_server(self.s1_id)['server']
self.assertEqual(server['status'], 'ACTIVE')
@decorators.idempotent_id('682cb127-e5bb-4f53-87ce-cb9003604442')
@@ -187,7 +192,8 @@
self.client.reset_state(self.s1_id, state='error')
rebuilt_server = self.non_admin_client.rebuild_server(
self.s1_id, self.image_ref_alt)['server']
- self.addCleanup(waiters.wait_for_server_status, self.non_admin_client,
+ self.addCleanup(waiters.wait_for_server_status,
+ self.reader_servers_client,
self.s1_id, 'ACTIVE')
self.addCleanup(self.non_admin_client.rebuild_server, self.s1_id,
self.image_ref)
@@ -197,11 +203,11 @@
rebuilt_image_id = rebuilt_server['image']['id']
self.assertEqual(self.image_ref_alt, rebuilt_image_id)
self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
- waiters.wait_for_server_status(self.non_admin_client,
+ waiters.wait_for_server_status(self.reader_servers_client,
rebuilt_server['id'], 'ACTIVE',
raise_on_error=False)
# Verify the server properties after rebuilding
- server = (self.non_admin_client.show_server(rebuilt_server['id'])
+ server = (self.reader_servers_client.show_server(rebuilt_server['id'])
['server'])
rebuilt_image_id = server['image']['id']
self.assertEqual(self.image_ref_alt, rebuilt_image_id)
@@ -235,20 +241,26 @@
min_microversion = '2.75'
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServersAdmin275Test, cls).setup_clients()
+
@decorators.idempotent_id('bf2b4a00-73a3-4d53-81fa-acbcd97d6339')
def test_rebuild_update_server_275(self):
server = self.create_test_server()
# Checking update response schema.
self.servers_client.update_server(server['id'])
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'ACTIVE')
+ waiters.wait_for_server_status(
+ self.reader_servers_client, server['id'], 'ACTIVE')
# Checking rebuild API response schema
self.servers_client.rebuild_server(server['id'], self.image_ref_alt)
- waiters.wait_for_server_status(self.servers_client,
+ waiters.wait_for_server_status(self.reader_servers_client,
server['id'], 'ACTIVE')
# Checking rebuild server with admin response schema.
self.os_admin.servers_client.rebuild_server(
server['id'], self.image_ref)
self.addCleanup(waiters.wait_for_server_status,
- self.os_admin.servers_client,
+ self.reader_servers_client,
server['id'], 'ACTIVE')
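Many of the swapped call sites above are `waiters.wait_for_server_status`, which just changes *which client* polls the server. A minimal, self-contained sketch of that polling loop (the real helper in `tempest.common.waiters` also handles ERROR states, config-driven timeouts, and richer failure messages; this stand-in shows only the core idea):

```python
# Core of the status-waiter pattern: poll a status getter until the
# target status appears or the timeout expires.
import time

def wait_for_status(get_status, target, timeout=10, interval=0):
    """Poll get_status() until it returns target; raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == target:
            return True
        time.sleep(interval)
    raise TimeoutError("server never reached %s" % target)

# Simulated server that becomes ACTIVE on the third poll.
statuses = iter(["BUILD", "BUILD", "ACTIVE"])
assert wait_for_status(lambda: next(statuses), "ACTIVE") is True
```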
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index c933c80..8480c24 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -28,11 +28,18 @@
class ServersAdminNegativeTestJSON(base.BaseV2ComputeAdminTest):
"""Negative Tests of Servers API using admin privileges"""
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(ServersAdminNegativeTestJSON, cls).setup_clients()
cls.client = cls.os_admin.servers_client
cls.quotas_client = cls.os_admin.quotas_client
+ if CONF.enforce_scope.nova:
+ cls.reader_quotas_client = (
+ cls.os_project_reader.quotas_client)
+ else:
+ cls.reader_quotas_client = cls.quotas_client
@classmethod
def resource_setup(cls):
@@ -144,7 +151,7 @@
server_id = server['id']
# suspend the server.
self.client.suspend_server(server_id)
- waiters.wait_for_server_status(self.client,
+ waiters.wait_for_server_status(self.reader_servers_client,
server_id, 'SUSPENDED')
# migrate a suspended server should fail
self.assertRaises(lib_exc.Conflict,
diff --git a/tempest/api/compute/admin/test_servers_on_multinodes.py b/tempest/api/compute/admin/test_servers_on_multinodes.py
index e0290e4..7a0d5ca 100644
--- a/tempest/api/compute/admin/test_servers_on_multinodes.py
+++ b/tempest/api/compute/admin/test_servers_on_multinodes.py
@@ -25,6 +25,18 @@
class ServersOnMultiNodesTest(base.BaseV2ComputeAdminTest):
"""Test creating servers on multiple nodes with scheduler_hints."""
+
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServersOnMultiNodesTest, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_server_groups_client = (
+ cls.os_project_reader.server_groups_client)
+ else:
+ cls.reader_server_groups_client = cls.server_groups_client
+
@classmethod
def resource_setup(cls):
super(ServersOnMultiNodesTest, cls).resource_setup()
@@ -47,12 +59,12 @@
return_reservation_id=True)['reservation_id']
# Get the servers using the reservation_id.
- servers = self.servers_client.list_servers(
+ servers = self.reader_servers_client.list_servers(
detail=True, reservation_id=reservation_id)['servers']
self.assertEqual(2, len(servers))
# Assert the servers are in the group.
- server_group = self.server_groups_client.show_server_group(
+ server_group = self.reader_server_groups_client.show_server_group(
group_id)['server_group']
hosts = {}
for server in servers:
@@ -142,6 +154,12 @@
min_microversion = '2.91'
max_microversion = 'latest'
+ credentials = ['primary', 'admin', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(UnshelveToHostMultiNodesTest, cls).setup_clients()
+
@classmethod
def skip_checks(cls):
super(UnshelveToHostMultiNodesTest, cls).skip_checks()
@@ -167,8 +185,8 @@
server['id'],
body={'unshelve': {'host': host}}
)
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'ACTIVE')
+ waiters.wait_for_server_status(
+ self.reader_servers_client, server['id'], 'ACTIVE')
@decorators.attr(type='multinode')
@decorators.idempotent_id('b5cc0889-50c2-46a0-b8ff-b5fb4c3a6e20')
diff --git a/tempest/api/compute/admin/test_simple_tenant_usage.py b/tempest/api/compute/admin/test_simple_tenant_usage.py
index c24f420..b71c8b6 100644
--- a/tempest/api/compute/admin/test_simple_tenant_usage.py
+++ b/tempest/api/compute/admin/test_simple_tenant_usage.py
@@ -16,10 +16,13 @@
import datetime
from tempest.api.compute import base
+from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as e
+CONF = config.CONF
+
# Time that waits for until returning valid response
# TODO(takmatsu): Ideally this value would come from configuration.
VALID_WAIT = 30
@@ -28,11 +31,18 @@
class TenantUsagesTestJSON(base.BaseV2ComputeAdminTest):
"""Test tenant usages"""
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def setup_clients(cls):
super(TenantUsagesTestJSON, cls).setup_clients()
cls.adm_client = cls.os_admin.tenant_usages_client
cls.client = cls.os_primary.tenant_usages_client
+ if CONF.enforce_scope.nova:
+ cls.reader_client = (
+ cls.os_project_reader.tenant_usages_client)
+ else:
+ cls.reader_client = cls.client
@classmethod
def resource_setup(cls):
@@ -87,6 +97,6 @@
def test_get_usage_tenant_with_non_admin_user(self):
"""Test getting usage for a specific tenant with non admin user"""
tenant_usage = self.call_until_valid(
- self.client.show_tenant_usage, VALID_WAIT,
+ self.reader_client.show_tenant_usage, VALID_WAIT,
self.tenant_id, start=self.start, end=self.end)['tenant_usage']
self.assertEqual(len(tenant_usage), 8)
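The usage test above funnels its API call through `call_until_valid`, which retries until the usage API returns a complete response. A minimal sketch of that retry shape (assuming the helper simply retries on a failure until a deadline; the real Tempest helper also validates the response body):

```python
import time


def call_until_valid(func, duration, *args, **kwargs):
    # Minimal retry-until-valid sketch: keep calling func until it
    # stops raising, or until the deadline passes.
    deadline = time.time() + duration
    while True:
        try:
            return func(*args, **kwargs)
        except ValueError:
            if time.time() >= deadline:
                raise
            time.sleep(0.01)


calls = {"count": 0}


def flaky_show_usage(tenant_id):
    # Simulated usage API: incomplete until the third call.
    calls["count"] += 1
    if calls["count"] < 3:
        raise ValueError("usage not populated yet")
    return {"tenant_usage": {"tenant_id": tenant_id, "total_hours": 1}}


usage = call_until_valid(flaky_show_usage, 5, "tenant-1")["tenant_usage"]
```

`flaky_show_usage` and the `ValueError` trigger are hypothetical; they stand in for whatever incomplete-response condition the real helper checks.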
diff --git a/tempest/api/compute/admin/test_spice.py b/tempest/api/compute/admin/test_spice.py
index f09012d..8942565 100644
--- a/tempest/api/compute/admin/test_spice.py
+++ b/tempest/api/compute/admin/test_spice.py
@@ -31,6 +31,8 @@
min_microversion = '2.99'
max_microversion = 'latest'
+ credentials = ['primary', 'admin', 'project_reader']
+
# SPICE client protocol constants
magic = b'REDQ'
major = 2
diff --git a/tempest/api/compute/admin/test_volume.py b/tempest/api/compute/admin/test_volume.py
index 2813d7a..bdfa9e5 100644
--- a/tempest/api/compute/admin/test_volume.py
+++ b/tempest/api/compute/admin/test_volume.py
@@ -25,18 +25,39 @@
"""Base class for the admin volume tests in this module."""
create_default_network = True
+ credentials = ['primary', 'admin', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(BaseAttachSCSIVolumeTest, cls).skip_checks()
if not CONF.service_available.cinder:
skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
raise cls.skipException(skip_msg)
+ if not CONF.service_available.glance:
+ skip_msg = ("%s skipped as Glance is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+ if not CONF.image_feature_enabled.api_v2:
+ skip_msg = ("%s skipped as Glance API v2 is not enabled" %
+ cls.__name__)
+ raise cls.skipException(skip_msg)
@classmethod
def setup_credentials(cls):
cls.prepare_instance_network()
super(BaseAttachSCSIVolumeTest, cls).setup_credentials()
+ @classmethod
+ def setup_clients(cls):
+ super(BaseAttachSCSIVolumeTest, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_volumes_client = (
+ cls.os_project_reader.volumes_client_latest)
+ cls.reader_image_client = (
+ cls.os_project_reader.image_client_v2)
+ else:
+ cls.reader_volumes_client = cls.volumes_client
+ cls.reader_image_client = cls.images_client
+
def _create_image_with_custom_property(self, **kwargs):
"""Wrapper utility that returns the custom image.
@@ -46,7 +67,7 @@
:param return image_id: The UUID of the newly created image.
"""
- image = self.admin_image_client.show_image(CONF.compute.image_ref)
+ image = self.reader_image_client.show_image(CONF.compute.image_ref)
# NOTE(danms): We need to stream this, so chunked=True means we get
# back a urllib3.HTTPResponse and have to carefully pass it to
# store_image_file() to upload it in pieces.
@@ -67,8 +88,9 @@
create_dict.update(kwargs)
try:
new_image = self.admin_image_client.create_image(**create_dict)
- self.addCleanup(self.admin_image_client.wait_for_resource_deletion,
- new_image['id'])
+ self.addCleanup(
+ self.reader_image_client.wait_for_resource_deletion,
+ new_image['id'])
self.addCleanup(
self.admin_image_client.delete_image, new_image['id'])
self.admin_image_client.store_image_file(new_image['id'],
@@ -110,20 +132,21 @@
# deleted otherwise image deletion can start before server is
# deleted.
self.addCleanup(waiters.wait_for_server_termination,
- self.servers_client, server['id'])
+ self.reader_servers_client, server['id'])
self.addCleanup(self.servers_client.delete_server, server['id'])
volume = self.create_volume()
attachment = self.attach_volume(server, volume)
waiters.wait_for_volume_resource_status(
- self.volumes_client, attachment['volumeId'], 'in-use')
- volume_after_attach = self.servers_client.list_volume_attachments(
- server['id'])['volumeAttachments']
+ self.reader_volumes_client, attachment['volumeId'], 'in-use')
+ volume_after_attach = (
+ self.reader_servers_client.list_volume_attachments(
+ server['id'])['volumeAttachments'])
self.assertEqual(1, len(volume_after_attach),
"Failed to attach volume")
self.servers_client.detach_volume(
server['id'], attachment['volumeId'])
waiters.wait_for_volume_resource_status(
- self.volumes_client, attachment['volumeId'], 'available')
+ self.reader_volumes_client, attachment['volumeId'], 'available')
waiters.wait_for_volume_attachment_remove_from_server(
- self.servers_client, server['id'], attachment['volumeId'])
+ self.reader_servers_client, server['id'], attachment['volumeId'])
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
index 9576b74..481367c 100644
--- a/tempest/api/compute/admin/test_volume_swap.py
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -24,6 +24,9 @@
class TestVolumeSwapBase(base.BaseV2ComputeAdminTest):
create_default_network = True
+ credentials = ['primary', 'admin', 'project_reader',
+ ['service_user', 'admin', 'service']]
+
@classmethod
def setup_credentials(cls):
cls.prepare_instance_network()
@@ -37,11 +40,22 @@
if not CONF.compute_feature_enabled.swap_volume:
raise cls.skipException("Swapping volumes is not supported.")
+ @classmethod
+ def setup_clients(cls):
+ super(TestVolumeSwapBase, cls).setup_clients()
+ cls.service_client = cls.os_service_user.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_volumes_client = (
+ cls.os_project_reader.volumes_client_latest)
+ else:
+ cls.reader_volumes_client = cls.volumes_client
+
def wait_for_server_volume_swap(self, server_id, old_volume_id,
new_volume_id):
"""Waits for a server to swap the old volume to a new one."""
- volume_attachments = self.servers_client.list_volume_attachments(
- server_id)['volumeAttachments']
+ volume_attachments = (
+ self.reader_servers_client.list_volume_attachments(
+ server_id)['volumeAttachments'])
attached_volume_ids = [attachment['volumeId']
for attachment in volume_attachments]
start = int(time.time())
@@ -49,8 +63,9 @@
while (old_volume_id in attached_volume_ids) \
or (new_volume_id not in attached_volume_ids):
time.sleep(self.servers_client.build_interval)
- volume_attachments = self.servers_client.list_volume_attachments(
- server_id)['volumeAttachments']
+ volume_attachments = (
+ self.reader_servers_client.list_volume_attachments(
+ server_id)['volumeAttachments'])
attached_volume_ids = [attachment['volumeId']
for attachment in volume_attachments]
@@ -74,16 +89,17 @@
class TestVolumeSwap(TestVolumeSwapBase):
- """The test suite for swapping of volume with admin user"""
+ """The test suite for swapping a volume with the service user"""
# NOTE(mriedem): This is an uncommon scenario to call the compute API
# to swap volumes directly; swap volume is primarily only for volume
# live migration and retype callbacks from the volume service, and is slow
# so it's marked as such.
+ @decorators.skip_because(bug='2112187')
@decorators.attr(type='slow')
@decorators.idempotent_id('1769f00d-a693-4d67-a631-6a3496773813')
def test_volume_swap(self):
- """Test swapping of volume attached to server with admin user
+ """Test swapping a volume attached to a server with the service user
The following is the scenario outline:
@@ -91,11 +107,8 @@
2. Create a volume "volume2" with non-admin.
3. Boot an instance "instance1" with non-admin.
4. Attach "volume1" to "instance1" with non-admin.
- 5. Swap volume from "volume1" to "volume2" as admin.
- 6. Check the swap volume is successful and "volume2"
- is attached to "instance1" and "volume1" is in available state.
- 7. Swap volume from "volume2" to "volume1" as admin.
- 8. Check the swap volume is successful and "volume1"
+ 5. Swap volume from "volume1" to "volume2" as the service user.
+ 6. Check the swap volume is rejected and "volume1"
is attached to "instance1" and "volume2" is in available state.
"""
# Create two volumes.
@@ -118,34 +131,19 @@
# Attach "volume1" to server
self.attach_volume(server, volume1)
# Swap volume from "volume1" to "volume2"
- self.admin_servers_client.update_attached_volume(
+ self.assertRaises(
+ lib_exc.Conflict, self.service_client.update_attached_volume,
server['id'], volume1['id'], volumeId=volume2['id'])
- waiters.wait_for_volume_resource_status(self.volumes_client,
- volume1['id'], 'available')
- waiters.wait_for_volume_resource_status(self.volumes_client,
- volume2['id'], 'in-use')
- self.wait_for_server_volume_swap(server['id'], volume1['id'],
- volume2['id'])
- # Verify "volume2" is attached to the server
- vol_attachments = self.servers_client.list_volume_attachments(
- server['id'])['volumeAttachments']
- self.assertEqual(1, len(vol_attachments))
- self.assertIn(volume2['id'], vol_attachments[0]['volumeId'])
-
- # Swap volume from "volume2" to "volume1"
- self.admin_servers_client.update_attached_volume(
- server['id'], volume2['id'], volumeId=volume1['id'])
- waiters.wait_for_volume_resource_status(self.volumes_client,
- volume2['id'], 'available')
- waiters.wait_for_volume_resource_status(self.volumes_client,
- volume1['id'], 'in-use')
- self.wait_for_server_volume_swap(server['id'], volume2['id'],
- volume1['id'])
# Verify "volume1" is attached to the server
- vol_attachments = self.servers_client.list_volume_attachments(
+ vol_attachments = self.reader_servers_client.list_volume_attachments(
server['id'])['volumeAttachments']
self.assertEqual(1, len(vol_attachments))
self.assertIn(volume1['id'], vol_attachments[0]['volumeId'])
+ waiters.wait_for_volume_resource_status(
+ self.reader_volumes_client, volume1['id'], 'in-use')
+ # Verify "volume2" is still available
+ waiters.wait_for_volume_resource_status(
+ self.reader_volumes_client, volume2['id'], 'available')
class TestMultiAttachVolumeSwap(TestVolumeSwapBase):
@@ -194,9 +192,8 @@
4. Attach "volume1" to "server1" with non-admin.
5. Attach "volume1" to "server2" with non-admin.
6. Swap "volume1" to "volume2" on "server1"
- 7. Check "volume1" is attached to "server2" and not attached to
- "server1"
- 8. Check "volume2" is attached to "server1".
+ 7. Check the swap volume is rejected and "volume1" is attached to
+ "server1".
"""
multiattach_vol_type = CONF.volume.volume_type_multiattach
# Create two volumes.
@@ -227,7 +224,7 @@
return_reservation_id=True,
)['reservation_id']
# Get the servers using the reservation_id.
- servers = self.servers_client.list_servers(
+ servers = self.reader_servers_client.list_servers(
reservation_id=reservation_id)['servers']
self.assertEqual(2, len(servers))
# Attach volume1 to server1
@@ -237,26 +234,25 @@
server2 = servers[1]
self.attach_volume(server2, volume1)
- # Swap volume1 to volume2 on server1, volume1 should remain attached
- # to server 2
- self.admin_servers_client.update_attached_volume(
+ # The volume swap is expected to be rejected by nova
+ self.assertRaises(
+ lib_exc.Conflict, self.service_client.update_attached_volume,
server1['id'], volume1['id'], volumeId=volume2['id'])
- # volume1 will return to in-use after the swap
- waiters.wait_for_volume_resource_status(self.volumes_client,
- volume1['id'], 'in-use')
- waiters.wait_for_volume_resource_status(self.volumes_client,
- volume2['id'], 'in-use')
- self.wait_for_server_volume_swap(server1['id'], volume1['id'],
- volume2['id'])
- # Verify volume2 is attached to server1
- vol_attachments = self.servers_client.list_volume_attachments(
+ # volume1 remains in the in-use state and volume2 in available
+ waiters.wait_for_volume_resource_status(
+ self.reader_volumes_client, volume1['id'], 'in-use')
+ waiters.wait_for_volume_resource_status(
+ self.reader_volumes_client, volume2['id'], 'available')
+
+ # Verify volume1 is attached to server1
+ vol_attachments = self.reader_servers_client.list_volume_attachments(
server1['id'])['volumeAttachments']
self.assertEqual(1, len(vol_attachments))
- self.assertIn(volume2['id'], vol_attachments[0]['volumeId'])
+ self.assertIn(volume1['id'], vol_attachments[0]['volumeId'])
# Verify volume1 is still attached to server2
- vol_attachments = self.servers_client.list_volume_attachments(
+ vol_attachments = self.reader_servers_client.list_volume_attachments(
server2['id'])['volumeAttachments']
self.assertEqual(1, len(vol_attachments))
self.assertIn(volume1['id'], vol_attachments[0]['volumeId'])
diff --git a/tempest/api/compute/admin/test_volumes_negative.py b/tempest/api/compute/admin/test_volumes_negative.py
index 55c842f..1868e2b 100644
--- a/tempest/api/compute/admin/test_volumes_negative.py
+++ b/tempest/api/compute/admin/test_volumes_negative.py
@@ -26,6 +26,8 @@
"""Negative tests of volume swapping"""
create_default_network = True
+ credentials = ['primary', 'admin', 'project_reader',
+ ['service_user', 'admin', 'service']]
@classmethod
def setup_credentials(cls):
@@ -39,6 +41,19 @@
skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
raise cls.skipException(skip_msg)
+ @classmethod
+ def setup_clients(cls):
+ super(VolumesAdminNegativeTest, cls).setup_clients()
+ cls.service_client = cls.os_service_user.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_volumes_client = (
+ cls.os_project_reader.volumes_client_latest)
+ cls.reader_attachments_client = (
+ cls.os_project_reader.attachments_client_latest)
+ else:
+ cls.reader_volumes_client = cls.volumes_client
+ cls.reader_attachments_client = cls.attachments_client
+
@decorators.attr(type=['negative'])
@decorators.idempotent_id('309b5ecd-0585-4a7e-a36f-d2b2bf55259d')
def test_update_attached_volume_with_nonexistent_volume_in_uri(self):
@@ -47,10 +62,11 @@
volume = self.create_volume()
nonexistent_volume = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.admin_servers_client.update_attached_volume,
+ self.service_client.update_attached_volume,
self.server['id'], nonexistent_volume,
volumeId=volume['id'])
+ @decorators.skip_because(bug='2112187')
@decorators.related_bug('1629110', status_code=400)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7dcac15a-b107-46d3-a5f6-cb863f4e454a')
@@ -71,8 +87,8 @@
self.attach_volume(self.server, volume)
nonexistent_volume = data_utils.rand_uuid()
- self.assertRaises(lib_exc.BadRequest,
- self.admin_servers_client.update_attached_volume,
+ self.assertRaises(lib_exc.Conflict,
+ self.service_client.update_attached_volume,
self.server['id'], volume['id'],
volumeId=nonexistent_volume)
@@ -89,6 +105,8 @@
volume_min_microversion = '3.27'
create_default_network = True
+ credentials = ['primary', 'admin', 'project_reader',
+ ['service_user', 'admin', 'service']]
@classmethod
def setup_credentials(cls):
@@ -101,9 +119,23 @@
if not CONF.compute_feature_enabled.volume_multiattach:
raise cls.skipException('Volume multi-attach is not available.')
+ @classmethod
+ def setup_clients(cls):
+ super(UpdateMultiattachVolumeNegativeTest, cls).setup_clients()
+ cls.service_client = cls.os_service_user.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_volumes_client = (
+ cls.os_project_reader.volumes_client_latest)
+ cls.reader_attachments_client = (
+ cls.os_project_reader.attachments_client_latest)
+ else:
+ cls.reader_volumes_client = cls.volumes_client
+ cls.reader_attachments_client = cls.attachments_client
+
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7576d497-b7c6-44bd-9cc5-c5b4e50fec71')
@utils.services('volume')
+ @decorators.skip_because(bug='2112187')
def test_multiattach_rw_volume_update_failure(self):
"""Test swapping volume attached to multi-servers with read-write mode
@@ -143,7 +175,7 @@
vol1_attachment2 = self.attach_volume(server2, vol1)
# Assert that we now have two attachments.
- vol1 = self.volumes_client.show_volume(vol1['id'])['volume']
+ vol1 = self.reader_volumes_client.show_volume(vol1['id'])['volume']
self.assertEqual(2, len(vol1['attachments']))
# By default both of these attachments should have an attach_mode of
@@ -151,21 +183,21 @@
# the volume will be rejected.
for volume_attachment in vol1['attachments']:
attachment_id = volume_attachment['attachment_id']
- attachment = self.attachments_client.show_attachment(
+ attachment = self.reader_attachments_client.show_attachment(
attachment_id)['attachment']
self.assertEqual('rw', attachment['attach_mode'])
# Assert that a BadRequest is raised when we attempt to update volume1
# to volume2 on server1 or server2.
self.assertRaises(lib_exc.BadRequest,
- self.admin_servers_client.update_attached_volume,
+ self.service_client.update_attached_volume,
server1['id'], vol1['id'], volumeId=vol2['id'])
self.assertRaises(lib_exc.BadRequest,
- self.admin_servers_client.update_attached_volume,
+ self.service_client.update_attached_volume,
server2['id'], vol1['id'], volumeId=vol2['id'])
# Fetch the volume 1 to check the current attachments.
- vol1 = self.volumes_client.show_volume(vol1['id'])['volume']
+ vol1 = self.reader_volumes_client.show_volume(vol1['id'])['volume']
vol1_attachment_ids = [a['id'] for a in vol1['attachments']]
# Assert that volume 1 is still attached to both server 1 and 2.
@@ -173,5 +205,5 @@
self.assertIn(vol1_attachment2['id'], vol1_attachment_ids)
# Assert that volume 2 has no attachments.
- vol2 = self.volumes_client.show_volume(vol2['id'])['volume']
+ vol2 = self.reader_volumes_client.show_volume(vol2['id'])['volume']
self.assertEqual([], vol2['attachments'])
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index b974b52..3b44ded 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -44,7 +44,7 @@
# TODO(andreaf) We should care also for the alt_manager here
# but only once client lazy load in the manager is done
- credentials = ['primary']
+ credentials = ['primary', 'project_reader']
@classmethod
def skip_checks(cls):
@@ -78,6 +78,10 @@
def setup_clients(cls):
super(BaseV2ComputeTest, cls).setup_clients()
cls.servers_client = cls.os_primary.servers_client
+ if CONF.enforce_scope.nova and hasattr(cls, 'os_project_reader'):
+ cls.reader_servers_client = cls.os_project_reader.servers_client
+ else:
+ cls.reader_servers_client = cls.servers_client
cls.server_groups_client = cls.os_primary.server_groups_client
cls.flavors_client = cls.os_primary.flavors_client
cls.compute_images_client = cls.os_primary.compute_images_client
@@ -556,8 +560,17 @@
'tagging metadata was not checked in the '
'metadata API')
return True
+
cmd = 'curl %s' % md_url
- md_json = ssh_client.exec_command(cmd)
+ try:
+ md_json = ssh_client.exec_command(cmd)
+ except lib_exc.SSHExecCommandFailed:
+ # NOTE(eolivare): We cannot guarantee that the metadata service
+ # is available right after the VM is ssh-able, because it could
+ # obtain authorized ssh keys from config_drive or it could use
+ # password. Hence, retries may be needed.
+ LOG.exception('metadata service not available yet')
+ return False
return verify_method(md_json)
# NOTE(gmann) Keep refreshing the metadata info until the metadata
# cache is refreshed. For safer side, we will go with wait loop of
@@ -680,7 +693,7 @@
class BaseV2ComputeAdminTest(BaseV2ComputeTest):
"""Base test case class for Compute Admin API tests."""
- credentials = ['primary', 'admin']
+ credentials = ['primary', 'admin', 'project_reader']
@classmethod
def setup_clients(cls):
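The metadata fix in base.py above converts a transient SSH failure into a `False` return so the surrounding wait loop retries instead of erroring out. That exception-as-"not yet" shape can be sketched outside Tempest like this (the polling helper and the failing command are simplified stand-ins):

```python
import time


def call_until_true(predicate, timeout, interval):
    # Minimal stand-in for tempest.lib's call_until_true: poll until
    # predicate() returns True or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


attempts = {"count": 0}


def fetch_metadata():
    # Simulated guest command: fails once before the service is up,
    # the way curl against the metadata API can right after boot.
    attempts["count"] += 1
    if attempts["count"] < 2:
        raise ConnectionError("metadata service not available yet")
    return '{"tags": []}'


def verify_metadata():
    try:
        body = fetch_metadata()
    except ConnectionError:
        # Swallow the transient failure and report "not yet" so the
        # outer wait loop retries, as the hunk above does for
        # SSHExecCommandFailed.
        return False
    return "tags" in body


reachable = call_until_true(verify_metadata, timeout=5, interval=0.01)
```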
diff --git a/tempest/api/compute/flavors/test_flavors.py b/tempest/api/compute/flavors/test_flavors.py
index 9ab75c5..752a48d 100644
--- a/tempest/api/compute/flavors/test_flavors.py
+++ b/tempest/api/compute/flavors/test_flavors.py
@@ -14,18 +14,30 @@
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
+
class FlavorsV2TestJSON(base.BaseV2ComputeTest):
"""Tests Flavors"""
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsV2TestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_client = cls.flavors_client
+
@decorators.attr(type='smoke')
@decorators.idempotent_id('e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59')
def test_list_flavors(self):
"""List of all flavors should contain the expected flavor"""
- flavors = self.flavors_client.list_flavors()['flavors']
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavors = self.reader_client.list_flavors()['flavors']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_min_detail = {'id': flavor['id'], 'links': flavor['links'],
'name': flavor['name']}
# description field is added to the response of list_flavors in 2.55
@@ -36,93 +48,93 @@
@decorators.idempotent_id('6e85fde4-b3cd-4137-ab72-ed5f418e8c24')
def test_list_flavors_with_detail(self):
"""Detailed list of all flavors should contain the expected flavor"""
- flavors = self.flavors_client.list_flavors(detail=True)['flavors']
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavors = self.reader_client.list_flavors(detail=True)['flavors']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
self.assertIn(flavor, flavors)
@decorators.attr(type='smoke')
@decorators.idempotent_id('1f12046b-753d-40d2-abb6-d8eb8b30cb2f')
def test_get_flavor(self):
"""The expected flavor details should be returned"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
self.assertEqual(self.flavor_ref, flavor['id'])
@decorators.idempotent_id('8d7691b3-6ed4-411a-abc9-2839a765adab')
def test_list_flavors_limit_results(self):
"""Only the expected number of flavors should be returned"""
params = {'limit': 1}
- flavors = self.flavors_client.list_flavors(**params)['flavors']
+ flavors = self.reader_client.list_flavors(**params)['flavors']
self.assertEqual(1, len(flavors))
@decorators.idempotent_id('b26f6327-2886-467a-82be-cef7a27709cb')
def test_list_flavors_detailed_limit_results(self):
"""Only the expected number of flavors(detailed) should be returned"""
params = {'limit': 1}
- flavors = self.flavors_client.list_flavors(detail=True,
- **params)['flavors']
+ flavors = self.reader_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertEqual(1, len(flavors))
@decorators.idempotent_id('e800f879-9828-4bd0-8eae-4f17189951fb')
def test_list_flavors_using_marker(self):
"""The list of flavors should start from the provided marker"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'marker': flavor_id}
- flavors = self.flavors_client.list_flavors(**params)['flavors']
+ flavors = self.reader_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id],
'The list of flavors did not start after the marker.')
@decorators.idempotent_id('6db2f0c0-ddee-4162-9c84-0703d3dd1107')
def test_list_flavors_detailed_using_marker(self):
"""The list of flavors should start from the provided marker"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'marker': flavor_id}
- flavors = self.flavors_client.list_flavors(detail=True,
- **params)['flavors']
+ flavors = self.reader_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id],
'The list of flavors did not start after the marker.')
@decorators.idempotent_id('3df2743e-3034-4e57-a4cb-b6527f6eac79')
def test_list_flavors_detailed_filter_by_min_disk(self):
"""The detailed list of flavors should be filtered by disk space"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'minDisk': flavor['disk'] + 1}
- flavors = self.flavors_client.list_flavors(detail=True,
- **params)['flavors']
+ flavors = self.reader_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@decorators.idempotent_id('09fe7509-b4ee-4b34-bf8b-39532dc47292')
def test_list_flavors_detailed_filter_by_min_ram(self):
"""The detailed list of flavors should be filtered by RAM"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'minRam': flavor['ram'] + 1}
- flavors = self.flavors_client.list_flavors(detail=True,
- **params)['flavors']
+ flavors = self.reader_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@decorators.idempotent_id('10645a4d-96f5-443f-831b-730711e11dd4')
def test_list_flavors_filter_by_min_disk(self):
"""The list of flavors should be filtered by disk space"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'minDisk': flavor['disk'] + 1}
- flavors = self.flavors_client.list_flavors(**params)['flavors']
+ flavors = self.reader_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
@decorators.idempotent_id('935cf550-e7c8-4da6-8002-00f92d5edfaa')
def test_list_flavors_filter_by_min_ram(self):
"""The list of flavors should be filtered by RAM"""
- flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.reader_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'minRam': flavor['ram'] + 1}
- flavors = self.flavors_client.list_flavors(**params)['flavors']
+ flavors = self.reader_client.list_flavors(**params)['flavors']
self.assertEmpty([i for i in flavors if i['id'] == flavor_id])
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index efd9cdd..5700ecd 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -28,6 +28,14 @@
class FlavorsV2NegativeTest(base.BaseV2ComputeTest):
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsV2NegativeTest, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.flavors_client
+ else:
+ cls.reader_client = cls.flavors_client
+
@decorators.attr(type=['negative'])
@utils.services('image')
@decorators.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
@@ -38,7 +46,7 @@
Try to create server with flavor of insufficient ram size from
that image
"""
- flavor = self.flavors_client.show_flavor(
+ flavor = self.reader_client.show_flavor(
CONF.compute.flavor_ref)['flavor']
min_img_ram = flavor['ram'] + 1
size = random.randint(1024, 4096)
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index eddfd73..1ba4e58 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -60,6 +60,13 @@
super(AttachInterfacesTestBase, cls).setup_clients()
cls.subnets_client = cls.os_primary.subnets_client
cls.ports_client = cls.os_primary.ports_client
+ if CONF.enforce_scope.nova:
+ cls.reader_interfaces_client = (
+ cls.os_project_reader.interfaces_client)
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_interfaces_client = cls.interfaces_client
+ cls.reader_ports_client = cls.ports_client
def _wait_for_validation(self, server, validation_resources):
linux_client = remote_client.RemoteClient(
@@ -81,7 +88,8 @@
wait_until='ACTIVE')
# NOTE(mgoddard): Get detailed server to ensure addresses are present
# in fixed IP case.
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ server['id'])['server']
# NOTE(artom) self.create_test_server adds cleanups, but this is
# apparently not enough? Add cleanup here.
self.addCleanup(self.delete_server, server['id'])
@@ -90,7 +98,7 @@
fip = set([validation_resources['floating_ip']['ip']])
except KeyError:
fip = ()
- ifs = (self.interfaces_client.list_interfaces(server['id'])
+ ifs = (self.reader_interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
body = waiters.wait_for_interface_status(
self.interfaces_client, server['id'], ifs[0]['port_id'], 'ACTIVE')
@@ -107,7 +115,7 @@
:param port_id: The id of the port being detached.
:returns: The final port dict from the show_port response.
"""
- port = self.ports_client.show_port(port_id)['port']
+ port = self.reader_ports_client.show_port(port_id)['port']
device_id = port['device_id']
start = int(time.time())
@@ -115,7 +123,7 @@
# None, but it's not contractual so handle Falsey either way.
while device_id:
time.sleep(self.build_interval)
- port = self.ports_client.show_port(port_id)['port']
+ port = self.reader_ports_client.show_port(port_id)['port']
device_id = port['device_id']
timed_out = int(time.time()) - start >= self.build_timeout
@@ -205,13 +213,13 @@
# NOTE(danms): delete not the first or last, but one in the middle
iface = ifs[1]
self.interfaces_client.delete_interface(server['id'], iface['port_id'])
- _ifs = (self.interfaces_client.list_interfaces(server['id'])
+ _ifs = (self.reader_interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
start = int(time.time())
while len(ifs) == len(_ifs):
time.sleep(self.build_interval)
- _ifs = (self.interfaces_client.list_interfaces(server['id'])
+ _ifs = (self.reader_interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
timed_out = int(time.time()) - start >= self.build_timeout
if len(ifs) == len(_ifs) and timed_out:
@@ -254,7 +262,7 @@
iface = self._test_create_interface_by_port_id(server, ifs)
ifs.append(iface)
- _ifs = (self.interfaces_client.list_interfaces(server['id'])
+ _ifs = (self.reader_interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
self._compare_iface_list(ifs, _ifs)
@@ -284,7 +292,7 @@
iface = self._test_create_interface_by_fixed_ips(server, ifs)
ifs.append(iface)
- _ifs = (self.interfaces_client.list_interfaces(server['id'])
+ _ifs = (self.reader_interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
self._compare_iface_list(ifs, _ifs)
@@ -340,7 +348,8 @@
for server in servers:
# NOTE(mgoddard): Get detailed server to ensure addresses are
# present in fixed IP case.
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])[
+ 'server']
compute.wait_for_ssh_or_ping(server, self.os_primary, network,
True, validation_resources,
'SSHABLE', True)
@@ -419,7 +428,7 @@
'Timed out while waiting for IP count to increase.')
# Remove the fixed IP that we just added.
- server_detail = self.os_primary.servers_client.show_server(
+ server_detail = self.reader_servers_client.show_server(
server['id'])['server']
# Get the Fixed IP from server.
fixed_ip = None
@@ -467,4 +476,4 @@
# just to check the response schema
self.interfaces_client.show_interface(
server['id'], iface['port_id'])
- self.interfaces_client.list_interfaces(server['id'])
+ self.reader_interfaces_client.list_interfaces(server['id'])
diff --git a/tempest/api/compute/servers/test_availability_zone.py b/tempest/api/compute/servers/test_availability_zone.py
index d239149..8e2b660 100644
--- a/tempest/api/compute/servers/test_availability_zone.py
+++ b/tempest/api/compute/servers/test_availability_zone.py
@@ -14,9 +14,13 @@
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
+
class AZV2TestJSON(base.BaseV2ComputeTest):
"""Tests Availability Zone API List"""
@@ -24,9 +28,14 @@
def setup_clients(cls):
super(AZV2TestJSON, cls).setup_clients()
cls.client = cls.availability_zone_client
+ if CONF.enforce_scope.nova:
+ cls.reader_az_client = (
+ cls.os_project_reader.availability_zone_client)
+ else:
+ cls.reader_az_client = cls.client
@decorators.idempotent_id('a8333aa2-205c-449f-a828-d38c2489bf25')
def test_get_availability_zone_list_with_non_admin_user(self):
"""List of availability zone with non-administrator user"""
- availability_zone = self.client.list_availability_zones()
+ availability_zone = self.reader_az_client.list_availability_zones()
self.assertNotEmpty(availability_zone['availabilityZoneInfo'])
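The hunk above shows the client-selection pattern this change applies throughout: when scope enforcement is on for nova, read-only calls go through a project-reader client, otherwise they fall back to the primary client. A minimal standalone sketch of that decision (the function name and string stand-ins are assumptions for illustration, not Tempest API):

```python
# Sketch of the reader-client selection pattern used in these hunks.
# With [enforce_scope] nova = True, read-only API calls are issued with
# project-reader credentials; otherwise the primary client is reused.

def pick_reader_client(enforce_scope_nova, reader_client, primary_client):
    """Return the client that read-only calls should use."""
    if enforce_scope_nova:
        return reader_client
    return primary_client

# Usage: with scope enforcement enabled, the reader client is chosen.
print(pick_reader_client(True, "reader", "primary"))   # reader
print(pick_reader_client(False, "reader", "primary"))  # primary
```

The fallback branch is what keeps these tests working on deployments that have not enabled secure RBAC scope checks.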
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index 0b39b8a..a61f5fb 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -33,7 +33,6 @@
This is to create server booted from image and with disk_config 'AUTO'
"""
-
disk_config = 'AUTO'
volume_backed = False
@@ -45,7 +44,6 @@
@classmethod
def setup_clients(cls):
super(ServersTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -71,7 +69,8 @@
disk_config=disk_config,
adminPass=cls.password,
volume_backed=cls.volume_backed)
- cls.server = cls.client.show_server(server_initial['id'])['server']
+ cls.server = cls.reader_servers_client.show_server(
+ server_initial['id'])['server']
@decorators.attr(type='smoke')
@decorators.idempotent_id('5de47127-9977-400a-936f-abcfbec1218f')
@@ -95,7 +94,7 @@
@decorators.idempotent_id('9a438d88-10c6-4bcd-8b5b-5b6e25e1346f')
def test_list_servers(self):
"""The created server should be in the list of all servers"""
- body = self.client.list_servers()
+ body = self.reader_servers_client.list_servers()
servers = body['servers']
found = [i for i in servers if i['id'] == self.server['id']]
self.assertNotEmpty(found)
@@ -103,7 +102,7 @@
@decorators.idempotent_id('585e934c-448e-43c4-acbf-d06a9b899997')
def test_list_servers_with_detail(self):
"""The created server should be in the detailed list of all servers"""
- body = self.client.list_servers(detail=True)
+ body = self.reader_servers_client.list_servers(detail=True)
servers = body['servers']
found = [i for i in servers if i['id'] == self.server['id']]
self.assertNotEmpty(found)
@@ -126,7 +125,7 @@
self.password,
validation_resources['keypair']['private_key'],
server=self.server,
- servers_client=self.client)
+ servers_client=self.servers_client)
output = linux_client.exec_command('grep -c ^processor /proc/cpuinfo')
self.assertEqual(flavor['vcpus'], int(output))
@@ -143,7 +142,7 @@
self.password,
validation_resources['keypair']['private_key'],
server=self.server,
- servers_client=self.client)
+ servers_client=self.servers_client)
hostname = linux_client.exec_command("hostname").rstrip()
msg = ('Failed while verifying servername equals hostname. Expected '
'hostname "%s" but got "%s".' %
@@ -201,7 +200,6 @@
@classmethod
def setup_clients(cls):
super(ServersTestFqdnHostnames, cls).setup_clients()
- cls.client = cls.servers_client
@decorators.idempotent_id('622066d2-39fc-4c09-9eeb-35903c114a0a')
@testtools.skipUnless(
@@ -234,7 +232,7 @@
self.password,
validation_resources['keypair']['private_key'],
server=test_server,
- servers_client=self.client)
+ servers_client=self.servers_client)
hostname = linux_client.exec_command("hostname").rstrip()
self.assertEqual('guest-instance-1-domain-com', hostname)
@@ -260,7 +258,6 @@
@classmethod
def setup_clients(cls):
super(ServersV294TestFqdnHostnames, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -279,7 +276,8 @@
accessIPv4=cls.accessIPv4,
adminPass=cls.password,
hostname=cls.hostname)
- cls.server = cls.client.show_server(cls.test_server['id'])['server']
+ cls.server = cls.reader_servers_client.show_server(
+ cls.test_server['id'])['server']
def verify_metadata_hostname(self, md_json):
md_dict = json.loads(md_json)
@@ -307,6 +305,6 @@
self.password,
self.validation_resources['keypair']['private_key'],
server=self.test_server,
- servers_client=self.client)
+ servers_client=self.servers_client)
self.verify_metadata_from_api(
self.test_server, linux_client, self.verify_metadata_hostname)
diff --git a/tempest/api/compute/servers/test_create_server_multi_nic.py b/tempest/api/compute/servers/test_create_server_multi_nic.py
index 1cbb976..fc75e2f 100644
--- a/tempest/api/compute/servers/test_create_server_multi_nic.py
+++ b/tempest/api/compute/servers/test_create_server_multi_nic.py
@@ -62,7 +62,6 @@
@classmethod
def setup_clients(cls):
super(ServersTestMultiNic, cls).setup_clients()
- cls.client = cls.servers_client
cls.networks_client = cls.os_primary.networks_client
cls.subnets_client = cls.os_primary.subnets_client
@@ -107,8 +106,8 @@
# we're OK.
self.addCleanup(self.delete_server, server_multi_nics['id'])
- addresses = (self.client.list_addresses(server_multi_nics['id'])
- ['addresses'])
+ addresses = (self.reader_servers_client.list_addresses(
+ server_multi_nics['id'])['addresses'])
# We can't predict the ip addresses assigned to the server on networks.
# So we check if the first address is in first network, similarly
@@ -142,8 +141,8 @@
networks=networks, wait_until='ACTIVE')
self.addCleanup(self.delete_server, server_multi_nics['id'])
- addresses = (self.client.list_addresses(server_multi_nics['id'])
- ['addresses'])
+ addresses = (self.reader_servers_client.list_addresses(
+ server_multi_nics['id'])['addresses'])
addr = [addresses[net1['network']['name']][0]['addr'],
addresses[net2['network']['name']][0]['addr'],
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 596d2bd..5fb669e 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -35,30 +35,30 @@
@classmethod
def setup_clients(cls):
super(DeleteServersTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@decorators.idempotent_id('9e6e0c87-3352-42f7-9faf-5d6210dbd159')
def test_delete_server_while_in_building_state(self):
"""Test deleting a server while it's VM state is Building"""
server = self.create_test_server(wait_until='BUILD')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
@decorators.idempotent_id('925fdfb4-5b13-47ea-ac8a-c36ae6fddb05')
def test_delete_active_server(self):
"""Test deleting a server while it's VM state is Active"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
@decorators.idempotent_id('546d368c-bb6c-4645-979a-83ed16f3a6be')
def test_delete_server_while_in_shutoff_state(self):
"""Test deleting a server while it's VM state is Shutoff"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.stop_server(server['id'])
- waiters.wait_for_server_status(self.client, server['id'], 'SHUTOFF')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.stop_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'SHUTOFF')
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
@decorators.idempotent_id('943bd6e8-4d7a-4904-be83-7a6cc2d4213b')
@testtools.skipUnless(CONF.compute_feature_enabled.pause,
@@ -66,10 +66,11 @@
def test_delete_server_while_in_pause_state(self):
"""Test deleting a server while it's VM state is Pause"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.pause_server(server['id'])
- waiters.wait_for_server_status(self.client, server['id'], 'PAUSED')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.pause_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'PAUSED')
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
@decorators.idempotent_id('1f82ebd3-8253-4f4e-b93f-de9b7df56d8b')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
@@ -77,10 +78,11 @@
def test_delete_server_while_in_suspended_state(self):
"""Test deleting a server while it's VM state is Suspended"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.suspend_server(server['id'])
- waiters.wait_for_server_status(self.client, server['id'], 'SUSPENDED')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.suspend_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'SUSPENDED')
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
@decorators.idempotent_id('bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad')
@testtools.skipUnless(CONF.compute_feature_enabled.shelve,
@@ -88,10 +90,10 @@
def test_delete_server_while_in_shelved_state(self):
"""Test deleting a server while it's VM state is Shelved"""
server = self.create_test_server(wait_until='ACTIVE')
- compute.shelve_server(self.client, server['id'])
+ compute.shelve_server(self.servers_client, server['id'])
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
@decorators.idempotent_id('ab0c38b4-cdd8-49d3-9b92-0cb898723c01')
@testtools.skipIf(not CONF.compute_feature_enabled.resize,
@@ -99,14 +101,16 @@
def test_delete_server_while_in_verify_resize_state(self):
"""Test deleting a server while it's VM state is VERIFY_RESIZE"""
server = self.create_test_server(wait_until='ACTIVE')
- body = self.client.resize_server(server['id'], self.flavor_ref_alt)
+ body = self.servers_client.resize_server(server['id'],
+ self.flavor_ref_alt)
request_id = body.response['x-openstack-request-id']
waiters.wait_for_server_status(
- self.client, server['id'], 'VERIFY_RESIZE', request_id=request_id)
- body = self.client.delete_server(server['id'])
+ self.servers_client, server['id'], 'VERIFY_RESIZE',
+ request_id=request_id)
+ body = self.servers_client.delete_server(server['id'])
request_id = body.response['x-openstack-request-id']
waiters.wait_for_server_termination(
- self.client, server['id'], request_id=request_id)
+ self.servers_client, server['id'], request_id=request_id)
@decorators.idempotent_id('d0f3f0d6-d9b6-4a32-8da4-23015dcab23c')
@utils.services('volume')
@@ -117,7 +121,7 @@
volume = self.create_volume()
self.attach_volume(server, volume)
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], 'available')
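Every delete test above follows the same delete-then-poll shape via ``waiters.wait_for_server_termination``. A simplified analogue of that waiter, assuming a ``show_server`` callable that raises ``KeyError`` once the server is gone (the real waiter catches ``lib_exc.NotFound`` from the servers client instead):

```python
import time

def wait_for_server_termination(show_server, server_id,
                                timeout=10, interval=0.1):
    """Poll until looking up the server fails, i.e. it has been deleted.

    Simplified analogue of tempest.common.waiters.wait_for_server_termination;
    ``show_server`` is any callable that raises KeyError once the server is
    gone (an assumption for this sketch).
    """
    start = time.time()
    while time.time() - start < timeout:
        try:
            show_server(server_id)
        except KeyError:
            return  # server no longer exists: termination complete
        time.sleep(interval)
    raise TimeoutError('server %s was not deleted in time' % server_id)

# Usage with an in-memory stand-in for the compute API:
servers = {'s1': {'status': 'ACTIVE'}}
def show(server_id):
    return servers[server_id]

del servers['s1']
wait_for_server_termination(show, 's1')  # returns immediately
```

Polling with a timeout rather than asserting immediately is what makes these tests robust against asynchronous deletion in nova.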
diff --git a/tempest/api/compute/servers/test_device_tagging.py b/tempest/api/compute/servers/test_device_tagging.py
index d2fdd52..fd087ea 100644
--- a/tempest/api/compute/servers/test_device_tagging.py
+++ b/tempest/api/compute/servers/test_device_tagging.py
@@ -256,7 +256,8 @@
self.addCleanup(self.delete_server, server['id'])
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ server['id'])['server']
ssh_client = remote_client.RemoteClient(
self.get_server_ip(server, validation_resources),
CONF.validation.image_ssh_user,
@@ -388,7 +389,8 @@
# NOTE(mgoddard): Get detailed server to ensure addresses are present
# in fixed IP case.
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ server['id'])['server']
# Attach tagged nic and volume
interface = self.interfaces_client.create_interface(
diff --git a/tempest/api/compute/servers/test_disk_config.py b/tempest/api/compute/servers/test_disk_config.py
index e5e051a..fc79e37 100644
--- a/tempest/api/compute/servers/test_disk_config.py
+++ b/tempest/api/compute/servers/test_disk_config.py
@@ -38,53 +38,56 @@
@classmethod
def setup_clients(cls):
super(ServerDiskConfigTestJSON, cls).setup_clients()
- cls.client = cls.os_primary.servers_client
def _update_server_with_disk_config(self, server_id, disk_config):
- server = self.client.show_server(server_id)['server']
+ server = self.reader_servers_client.show_server(server_id)['server']
if disk_config != server['OS-DCF:diskConfig']:
- server = self.client.update_server(
+ server = self.servers_client.update_server(
server_id, disk_config=disk_config)['server']
- waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
- server = self.client.show_server(server['id'])['server']
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'ACTIVE')
+ server = self.reader_servers_client.show_server(
+ server['id'])['server']
self.assertEqual(disk_config, server['OS-DCF:diskConfig'])
@decorators.idempotent_id('bef56b09-2e8c-4883-a370-4950812f430e')
def test_rebuild_server_with_manual_disk_config(self):
"""A server should be rebuilt using the manual disk config option"""
server = self.create_test_server(wait_until='ACTIVE')
- self.addCleanup(self.client.delete_server, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
self._update_server_with_disk_config(server['id'],
disk_config='AUTO')
- server = self.client.rebuild_server(server['id'],
- self.image_ref_alt,
- disk_config='MANUAL')['server']
+ server = self.servers_client.rebuild_server(
+ server['id'], self.image_ref_alt,
+ disk_config='MANUAL')['server']
# Wait for the server to become active
- waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'ACTIVE')
# Verify the specified attributes are set correctly
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('MANUAL', server['OS-DCF:diskConfig'])
@decorators.idempotent_id('9c9fae77-4feb-402f-8450-bf1c8b609713')
def test_rebuild_server_with_auto_disk_config(self):
"""A server should be rebuilt using the auto disk config option"""
server = self.create_test_server(wait_until='ACTIVE')
- self.addCleanup(self.client.delete_server, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
self._update_server_with_disk_config(server['id'],
disk_config='MANUAL')
- server = self.client.rebuild_server(server['id'],
- self.image_ref_alt,
- disk_config='AUTO')['server']
+ server = self.servers_client.rebuild_server(
+ server['id'], self.image_ref_alt,
+ disk_config='AUTO')['server']
# Wait for the server to become active
- waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'ACTIVE')
# Verify the specified attributes are set correctly
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('AUTO', server['OS-DCF:diskConfig'])
@decorators.idempotent_id('414e7e93-45b5-44bc-8e03-55159c6bfc97')
@@ -93,14 +96,14 @@
def test_resize_server_from_manual_to_auto(self):
"""A server should be resized from manual to auto disk config"""
server = self.create_test_server(wait_until='ACTIVE')
- self.addCleanup(self.client.delete_server, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
self._update_server_with_disk_config(server['id'],
disk_config='MANUAL')
# Resize with auto option
self.resize_server(server['id'], self.flavor_ref_alt,
disk_config='AUTO')
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('AUTO', server['OS-DCF:diskConfig'])
@decorators.idempotent_id('693d16f3-556c-489a-8bac-3d0ca2490bad')
@@ -109,29 +112,30 @@
def test_resize_server_from_auto_to_manual(self):
"""A server should be resized from auto to manual disk config"""
server = self.create_test_server(wait_until='ACTIVE')
- self.addCleanup(self.client.delete_server, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
self._update_server_with_disk_config(server['id'],
disk_config='AUTO')
# Resize with manual option
self.resize_server(server['id'], self.flavor_ref_alt,
disk_config='MANUAL')
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('MANUAL', server['OS-DCF:diskConfig'])
@decorators.idempotent_id('5ef18867-358d-4de9-b3c9-94d4ba35742f')
def test_update_server_from_auto_to_manual(self):
"""A server should be updated from auto to manual disk config"""
server = self.create_test_server(wait_until='ACTIVE')
- self.addCleanup(self.client.delete_server, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
self._update_server_with_disk_config(server['id'],
disk_config='AUTO')
# Update the disk_config attribute to manual
- server = self.client.update_server(server['id'],
- disk_config='MANUAL')['server']
- waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
+ server = self.servers_client.update_server(
+ server['id'], disk_config='MANUAL')['server']
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'ACTIVE')
# Verify the disk_config attribute is set correctly
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('MANUAL', server['OS-DCF:diskConfig'])
diff --git a/tempest/api/compute/servers/test_instance_actions.py b/tempest/api/compute/servers/test_instance_actions.py
index 028da68..6a68aab 100644
--- a/tempest/api/compute/servers/test_instance_actions.py
+++ b/tempest/api/compute/servers/test_instance_actions.py
@@ -15,9 +15,13 @@
from tempest.api.compute import base
from tempest.common import waiters
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
+
class InstanceActionsTestJSON(base.BaseV2ComputeTest):
"""Test instance actions API"""
@@ -26,7 +30,6 @@
@classmethod
def setup_clients(cls):
super(InstanceActionsTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -39,8 +42,8 @@
"""Test listing actions of the provided server"""
self.reboot_server(self.server['id'], type='HARD')
- body = (self.client.list_instance_actions(self.server['id'])
- ['instanceActions'])
+ body = (self.reader_servers_client.list_instance_actions(
+ self.server['id'])['instanceActions'])
self.assertEqual(len(body), 2, str(body))
self.assertEqual(sorted([i['action'] for i in body]),
['create', 'reboot'])
@@ -48,7 +51,7 @@
@decorators.idempotent_id('aacc71ca-1d70-4aa5-bbf6-0ff71470e43c')
def test_get_instance_action(self):
"""Test getting the action details of the provided server"""
- body = self.client.show_instance_action(
+ body = self.reader_servers_client.show_instance_action(
self.server['id'], self.request_id)['instanceAction']
self.assertEqual(self.server['id'], body['instance_uuid'])
self.assertEqual('create', body['action'])
@@ -65,7 +68,6 @@
@classmethod
def setup_clients(cls):
super(InstanceActionsV221TestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@decorators.idempotent_id('0a0f85d4-10fa-41f6-bf80-a54fb4aa2ae1')
def test_get_list_deleted_instance_actions(self):
@@ -75,9 +77,9 @@
actions should contain 'create' and 'delete'.
"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
- body = (self.client.list_instance_actions(server['id'])
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
+ body = (self.reader_servers_client.list_instance_actions(server['id'])
['instanceActions'])
self.assertEqual(len(body), 2, str(body))
self.assertEqual(sorted([i['action'] for i in body]),
diff --git a/tempest/api/compute/servers/test_instance_actions_negative.py b/tempest/api/compute/servers/test_instance_actions_negative.py
index dd2bf06..8e5b883 100644
--- a/tempest/api/compute/servers/test_instance_actions_negative.py
+++ b/tempest/api/compute/servers/test_instance_actions_negative.py
@@ -27,7 +27,6 @@
@classmethod
def setup_clients(cls):
super(InstanceActionsNegativeTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -40,12 +39,13 @@
"""Test listing actions for non existent instance should fail"""
non_existent_server_id = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.list_instance_actions,
+ self.reader_servers_client.list_instance_actions,
non_existent_server_id)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('0269f40a-6f18-456c-b336-c03623c897f1')
def test_get_instance_action_invalid_request(self):
"""Test getting instance action with invalid request_id should fail"""
- self.assertRaises(lib_exc.NotFound, self.client.show_instance_action,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_servers_client.show_instance_action,
self.server['id'], '999')
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index 7873296..20d460c 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -36,7 +36,6 @@
@classmethod
def setup_clients(cls):
super(ListServerFiltersTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -69,9 +68,9 @@
flavor=cls.flavor_ref_alt,
wait_until='ACTIVE')
- waiters.wait_for_server_status(cls.client, cls.s1['id'],
+ waiters.wait_for_server_status(cls.servers_client, cls.s1['id'],
'ACTIVE')
- waiters.wait_for_server_status(cls.client, cls.s2['id'],
+ waiters.wait_for_server_status(cls.servers_client, cls.s2['id'],
'ACTIVE')
@decorators.idempotent_id('05e8a8e7-9659-459a-989d-92c2f501f4ba')
@@ -80,7 +79,7 @@
def test_list_servers_filter_by_image(self):
"""Filter the list of servers by image"""
params = {'image': self.image_ref}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1['id'], map(lambda x: x['id'], servers))
@@ -91,7 +90,7 @@
def test_list_servers_filter_by_flavor(self):
"""Filter the list of servers by flavor"""
params = {'flavor': self.flavor_ref_alt}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertNotIn(self.s1['id'], map(lambda x: x['id'], servers))
@@ -102,7 +101,7 @@
def test_list_servers_filter_by_server_name(self):
"""Filter the list of servers by server name"""
params = {'name': self.s1_name}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
@@ -113,7 +112,7 @@
def test_list_servers_filter_by_active_status(self):
"""Filter the list of servers by server active status"""
params = {'status': 'active'}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1['id'], map(lambda x: x['id'], servers))
@@ -124,12 +123,12 @@
def test_list_servers_filter_by_shutoff_status(self):
"""Filter the list of servers by server shutoff status"""
params = {'status': 'shutoff'}
- self.client.stop_server(self.s1['id'])
- waiters.wait_for_server_status(self.client, self.s1['id'],
+ self.servers_client.stop_server(self.s1['id'])
+ waiters.wait_for_server_status(self.servers_client, self.s1['id'],
'SHUTOFF')
- body = self.client.list_servers(**params)
- self.client.start_server(self.s1['id'])
- waiters.wait_for_server_status(self.client, self.s1['id'],
+ body = self.reader_servers_client.list_servers(**params)
+ self.servers_client.start_server(self.s1['id'])
+ waiters.wait_for_server_status(self.servers_client, self.s1['id'],
'ACTIVE')
servers = body['servers']
@@ -144,7 +143,7 @@
Verify only the expected number of servers are returned (one server)
"""
params = {'limit': 1}
- servers = self.client.list_servers(**params)
+ servers = self.reader_servers_client.list_servers(**params)
self.assertEqual(1, len([x for x in servers['servers'] if 'id' in x]))
@decorators.idempotent_id('b1495414-2d93-414c-8019-849afe8d319e')
@@ -154,7 +153,7 @@
Verify only the expected number of servers are returned (no server)
"""
params = {'limit': 0}
- servers = self.client.list_servers(**params)
+ servers = self.reader_servers_client.list_servers(**params)
self.assertEmpty(servers['servers'])
@decorators.idempotent_id('37791bbd-90c0-4de0-831e-5f38cba9c6b3')
@@ -164,8 +163,8 @@
Verify only the expected number of servers are returned (all servers)
"""
params = {'limit': 100000}
- servers = self.client.list_servers(**params)
- all_servers = self.client.list_servers()
+ servers = self.reader_servers_client.list_servers(**params)
+ all_servers = self.reader_servers_client.list_servers()
self.assertEqual(len([x for x in all_servers['servers'] if 'id' in x]),
len([x for x in servers['servers'] if 'id' in x]))
@@ -175,7 +174,7 @@
def test_list_servers_detailed_filter_by_image(self):
""""Filter the detailed list of servers by image"""
params = {'image': self.image_ref}
- body = self.client.list_servers(detail=True, **params)
+ body = self.reader_servers_client.list_servers(detail=True, **params)
servers = body['servers']
self.assertIn(self.s1['id'], map(lambda x: x['id'], servers))
@@ -186,7 +185,7 @@
def test_list_servers_detailed_filter_by_flavor(self):
"""Filter the detailed list of servers by flavor"""
params = {'flavor': self.flavor_ref_alt}
- body = self.client.list_servers(detail=True, **params)
+ body = self.reader_servers_client.list_servers(detail=True, **params)
servers = body['servers']
self.assertNotIn(self.s1['id'], map(lambda x: x['id'], servers))
@@ -197,7 +196,7 @@
def test_list_servers_detailed_filter_by_server_name(self):
"""Filter the detailed list of servers by server name"""
params = {'name': self.s1_name}
- body = self.client.list_servers(detail=True, **params)
+ body = self.reader_servers_client.list_servers(detail=True, **params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
@@ -208,7 +207,7 @@
def test_list_servers_detailed_filter_by_server_status(self):
"""Filter the detailed list of servers by server status"""
params = {'status': 'active'}
- body = self.client.list_servers(detail=True, **params)
+ body = self.reader_servers_client.list_servers(detail=True, **params)
servers = body['servers']
test_ids = [s['id'] for s in (self.s1, self.s2, self.s3)]
@@ -223,7 +222,7 @@
"""Filter the list of servers by part of server name"""
# List all servers that contain '-instance' in name
params = {'name': '-instance'}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
@@ -234,7 +233,7 @@
part_name = self.s1_name[6:-1]
params = {'name': part_name}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
@@ -248,7 +247,7 @@
regexes = [r'^.*\-instance\-[0-9]+$', r'^.*\-instance\-.*$']
for regex in regexes:
params = {'name': regex}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
@@ -259,7 +258,7 @@
part_name = self.s1_name[-10:]
params = {'name': part_name}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
@@ -279,22 +278,25 @@
# so here look for the longest server ip, and filter by that ip,
# so as to ensure only one server is returned.
ip_list = {}
- self.s1 = self.client.show_server(self.s1['id'])['server']
+ self.s1 = self.reader_servers_client.show_server(
+ self.s1['id'])['server']
# Get first ip address in spite of v4 or v6
ip_addr = self.s1['addresses'][self.fixed_network_name][0]['addr']
ip_list[ip_addr] = self.s1['id']
- self.s2 = self.client.show_server(self.s2['id'])['server']
+ self.s2 = self.reader_servers_client.show_server(
+ self.s2['id'])['server']
ip_addr = self.s2['addresses'][self.fixed_network_name][0]['addr']
ip_list[ip_addr] = self.s2['id']
- self.s3 = self.client.show_server(self.s3['id'])['server']
+ self.s3 = self.reader_servers_client.show_server(
+ self.s3['id'])['server']
ip_addr = self.s3['addresses'][self.fixed_network_name][0]['addr']
ip_list[ip_addr] = self.s3['id']
longest_ip = max([[len(ip), ip] for ip in ip_list])[1]
params = {'ip': longest_ip}
- body = self.client.list_servers(**params)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(ip_list[longest_ip], map(lambda x: x['id'], servers))
@@ -311,7 +313,7 @@
# query addresses of the 3 servers
addrs = []
for s in [self.s1, self.s2, self.s3]:
- s_show = self.client.show_server(s['id'])['server']
+ s_show = self.reader_servers_client.show_server(s['id'])['server']
addr_spec = s_show['addresses'][self.fixed_network_name][0]
addrs.append(addr_spec['addr'])
# find common part of the 3 ip addresses
@@ -329,8 +331,8 @@
else:
params = {'ip6': prefix}
# capture all servers in case something goes wrong
- all_servers = self.client.list_servers(detail=True)
- body = self.client.list_servers(**params)
+ all_servers = self.reader_servers_client.list_servers(detail=True)
+ body = self.reader_servers_client.list_servers(**params)
servers = body['servers']
self.assertIn(self.s1_name, map(lambda x: x['name'], servers),
@@ -350,5 +352,6 @@
Verify only the expected number of servers are returned (one server)
"""
params = {'limit': 1}
- servers = self.client.list_servers(detail=True, **params)
+ servers = self.reader_servers_client.list_servers(detail=True,
+ **params)
self.assertEqual(1, len(servers['servers']))
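The pattern applied throughout this file is a read/write client split: read-only API calls go through a reader-scoped client, mutations through the member-scoped one. A toy sketch of that routing (the classes below are illustrative, not Tempest code):

```python
# Hypothetical sketch of the reader/member client split this change
# applies; real Tempest clients wrap authenticated REST sessions.
class ToyServersClient:
    def __init__(self, role):
        self.role = role

    def list_servers(self):
        # Read-only: permitted for any role, including reader.
        return {'servers': [], 'role': self.role}

    def delete_server(self, server_id):
        # Mutation: only the member-scoped client may do this.
        assert self.role == 'member', 'reader role cannot mutate'
        return {'deleted': server_id}

reader_servers_client = ToyServersClient('reader')
servers_client = ToyServersClient('member')
body = reader_servers_client.list_servers()   # read via reader role
result = servers_client.delete_server('uuid-1')  # write via member role
```

Routing reads through the least-privileged client is what lets these tests double as RBAC coverage for the reader role.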
diff --git a/tempest/api/compute/servers/test_list_servers_negative.py b/tempest/api/compute/servers/test_list_servers_negative.py
index 3d55696..afa785c 100644
--- a/tempest/api/compute/servers/test_list_servers_negative.py
+++ b/tempest/api/compute/servers/test_list_servers_negative.py
@@ -15,10 +15,14 @@
from tempest.api.compute import base
from tempest.common import waiters
+from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
+CONF = config.CONF
+
+
class ListServersNegativeTestJSON(base.BaseV2ComputeTest):
"""Negative tests of listing servers"""
@@ -27,7 +31,6 @@
@classmethod
def setup_clients(cls):
super(ListServersNegativeTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -41,15 +44,16 @@
# delete one of the created servers
cls.deleted_id = body['server']['id']
- cls.client.delete_server(cls.deleted_id)
- waiters.wait_for_server_termination(cls.client, cls.deleted_id)
+ cls.servers_client.delete_server(cls.deleted_id)
+ waiters.wait_for_server_termination(
+ cls.servers_client, cls.deleted_id)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('24a26f1a-1ddc-4eea-b0d7-a90cc874ad8f')
def test_list_servers_with_a_deleted_server(self):
"""Test that deleted servers do not show by default in list servers"""
# List servers and verify server not returned
- body = self.client.list_servers()
+ body = self.reader_servers_client.list_servers()
servers = body['servers']
actual = [srv for srv in servers
if srv['id'] == self.deleted_id]
@@ -59,7 +63,8 @@
@decorators.idempotent_id('ff01387d-c7ad-47b4-ae9e-64fa214638fe')
def test_list_servers_by_non_existing_image(self):
"""Test listing servers for a non existing image returns empty list"""
- body = self.client.list_servers(image='non_existing_image')
+ body = self.reader_servers_client.list_servers(
+ image='non_existing_image')
servers = body['servers']
self.assertEmpty(servers)
@@ -67,7 +72,8 @@
@decorators.idempotent_id('5913660b-223b-44d4-a651-a0fbfd44ca75')
def test_list_servers_by_non_existing_flavor(self):
"""Test listing servers by non existing flavor returns empty list"""
- body = self.client.list_servers(flavor='non_existing_flavor')
+ body = self.reader_servers_client.list_servers(
+ flavor='non_existing_flavor')
servers = body['servers']
self.assertEmpty(servers)
@@ -80,7 +86,8 @@
list.
"""
- body = self.client.list_servers(name='non_existing_server_name')
+ body = self.reader_servers_client.list_servers(
+ name='non_existing_server_name')
servers = body['servers']
self.assertEmpty(servers)
@@ -95,12 +102,15 @@
"""
if self.is_requested_microversion_compatible('2.37'):
- body = self.client.list_servers(status='non_existing_status')
+ body = self.reader_servers_client.list_servers(
+ status='non_existing_status')
servers = body['servers']
self.assertEmpty(servers)
else:
- self.assertRaises(lib_exc.BadRequest, self.client.list_servers,
- status='non_existing_status')
+ self.assertRaises(
+ lib_exc.BadRequest,
+ self.reader_servers_client.list_servers,
+ status='non_existing_status')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('d47c17fb-eebd-4287-8e95-f20a7e627b18')
@@ -112,33 +122,39 @@
"""
# Gather the complete list of servers in the project for reference
- full_list = self.client.list_servers()['servers']
+ full_list = self.reader_servers_client.list_servers()['servers']
# List servers by specifying a greater value for limit
limit = len(full_list) + 100
- body = self.client.list_servers(limit=limit)
+ body = self.reader_servers_client.list_servers(limit=limit)
self.assertEqual(len(full_list), len(body['servers']))
@decorators.attr(type=['negative'])
@decorators.idempotent_id('679bc053-5e70-4514-9800-3dfab1a380a6')
def test_list_servers_by_limits_pass_string(self):
"""Test listing servers by non-integer limit should fail"""
- self.assertRaises(lib_exc.BadRequest, self.client.list_servers,
- limit='testing')
+ self.assertRaises(
+ lib_exc.BadRequest,
+ self.reader_servers_client.list_servers,
+ limit='testing')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('62610dd9-4713-4ee0-8beb-fd2c1aa7f950')
def test_list_servers_by_limits_pass_negative_value(self):
"""Test listing servers by negative limit should fail"""
- self.assertRaises(lib_exc.BadRequest, self.client.list_servers,
- limit=-1)
+ self.assertRaises(
+ lib_exc.BadRequest,
+ self.reader_servers_client.list_servers,
+ limit=-1)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('87d12517-e20a-4c9c-97b6-dd1628d6d6c9')
def test_list_servers_by_changes_since_invalid_date(self):
"""Test listing servers by invalid changes-since format should fail"""
params = {'changes-since': '2011/01/01'}
- self.assertRaises(lib_exc.BadRequest, self.client.list_servers,
- **params)
+ self.assertRaises(
+ lib_exc.BadRequest,
+ self.reader_servers_client.list_servers,
+ **params)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('74745ad8-b346-45b5-b9b8-509d7447fc1f')
@@ -154,14 +170,14 @@
# {'status': 'ACTIVE'} along with changes-since as filter.
changes_since = {'changes-since': '2051-01-01T12:34:00Z',
'status': 'ACTIVE'}
- body = self.client.list_servers(**changes_since)
+ body = self.reader_servers_client.list_servers(**changes_since)
self.assertEmpty(body['servers'])
@decorators.attr(type=['negative'])
@decorators.idempotent_id('93055106-2d34-46fe-af68-d9ddbf7ee570')
def test_list_servers_detail_server_is_deleted(self):
"""Test listing servers detail should not contain deleted server"""
- body = self.client.list_servers(detail=True)
+ body = self.reader_servers_client.list_servers(detail=True)
servers = body['servers']
actual = [srv for srv in servers
if srv['id'] == self.deleted_id]
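The negative tests above expect `BadRequest` for `changes-since='2011/01/01'` but an empty list for the ISO 8601 timestamp `'2051-01-01T12:34:00Z'`. An illustrative sketch of why (this is not Nova's actual parser):

```python
from datetime import datetime

# Illustrative changes-since validation: the parameter must be an
# ISO 8601 timestamp, so slash-separated dates are rejected.
def parse_changes_since(value):
    try:
        # fromisoformat() on Python < 3.11 does not accept a trailing 'Z'
        return datetime.fromisoformat(value.replace('Z', '+00:00'))
    except ValueError:
        return None

assert parse_changes_since('2011/01/01') is None
parsed = parse_changes_since('2051-01-01T12:34:00Z')
```

A far-future but well-formed timestamp parses fine, which is why that test gets an empty result set rather than an error.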
diff --git a/tempest/api/compute/servers/test_novnc.py b/tempest/api/compute/servers/test_novnc.py
index 1308b19..95f5b99 100644
--- a/tempest/api/compute/servers/test_novnc.py
+++ b/tempest/api/compute/servers/test_novnc.py
@@ -13,9 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-import struct
import urllib.parse as urlparse
-import urllib3
from tempest.api.compute import base
from tempest.common import compute
@@ -25,7 +23,8 @@
CONF = config.CONF
-class NoVNCConsoleTestJSON(base.BaseV2ComputeTest):
+class NoVNCConsoleTestJSON(base.BaseV2ComputeTest,
+ compute.NoVNCValidateMixin):
"""Test novnc console"""
create_default_network = True
@@ -38,12 +37,12 @@
def setUp(self):
super(NoVNCConsoleTestJSON, self).setUp()
- self._websocket = None
+ self.websocket = None
def tearDown(self):
super(NoVNCConsoleTestJSON, self).tearDown()
- if self._websocket is not None:
- self._websocket.close()
+ if self.websocket is not None:
+ self.websocket.close()
# NOTE(zhufl): Because server_check_teardown will raise Exception
# which will prevent other cleanup steps from being executed, so
# server_check_teardown should be called after super's tearDown.
@@ -52,7 +51,6 @@
@classmethod
def setup_clients(cls):
super(NoVNCConsoleTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -62,137 +60,25 @@
if not cls.is_requested_microversion_compatible('2.5'):
cls.use_get_remote_console = True
- def _validate_novnc_html(self, vnc_url):
- """Verify we can connect to novnc and get back the javascript."""
- resp = urllib3.PoolManager().request('GET', vnc_url)
- # Make sure that the GET request was accepted by the novncproxy
- self.assertEqual(resp.status, 200, 'Got a Bad HTTP Response on the '
- 'initial call: ' + str(resp.status))
- # Do some basic validation to make sure it is an expected HTML document
- resp_data = resp.data.decode()
- # This is needed in the case of example: <html lang="en">
- self.assertRegex(resp_data, '<html.*>',
- 'Not a valid html document in the response.')
- self.assertIn('</html>', resp_data,
- 'Not a valid html document in the response.')
- # Just try to make sure we got JavaScript back for noVNC, since we
- # won't actually use it since not inside of a browser
- self.assertIn('noVNC', resp_data,
- 'Not a valid noVNC javascript html document.')
- self.assertIn('<script', resp_data,
- 'Not a valid noVNC javascript html document.')
-
- def _validate_rfb_negotiation(self):
- """Verify we can connect to novnc and do the websocket connection."""
- # Turn the Socket into a WebSocket to do the communication
- data = self._websocket.receive_frame()
- self.assertFalse(data is None or not data,
- 'Token must be invalid because the connection '
- 'closed.')
- # Parse the RFB version from the data to make sure it is valid
- # and belong to the known supported RFB versions.
- version = float("%d.%d" % (int(data[4:7], base=10),
- int(data[8:11], base=10)))
- # Add the max RFB versions supported
- supported_versions = [3.3, 3.8]
- self.assertIn(version, supported_versions,
- 'Bad RFB Version: ' + str(version))
- # Send our RFB version to the server
- self._websocket.send_frame(data)
- # Get the sever authentication type and make sure None is supported
- data = self._websocket.receive_frame()
- self.assertIsNotNone(data, 'Expected authentication type None.')
- data_length = len(data)
- if version == 3.3:
- # For RFB 3.3: in the security handshake, rather than a two-way
- # negotiation, the server decides the security type and sends a
- # single word(4 bytes).
- self.assertEqual(
- data_length, 4, 'Expected authentication type None.')
- self.assertIn(1, [int(data[i]) for i in (0, 3)],
- 'Expected authentication type None.')
- else:
- self.assertGreaterEqual(
- len(data), 2, 'Expected authentication type None.')
- self.assertIn(
- 1,
- [int(data[i + 1]) for i in range(int(data[0]))],
- 'Expected authentication type None.')
- # Send to the server that we only support authentication
- # type None
- self._websocket.send_frame(bytes((1,)))
-
- # The server should send 4 bytes of 0's if security
- # handshake succeeded
- data = self._websocket.receive_frame()
- self.assertEqual(
- len(data), 4,
- 'Server did not think security was successful.')
- self.assertEqual(
- [int(i) for i in data], [0, 0, 0, 0],
- 'Server did not think security was successful.')
-
- # Say to leave the desktop as shared as part of client initialization
- self._websocket.send_frame(bytes((1,)))
- # Get the server initialization packet back and make sure it is the
- # right structure where bytes 20-24 is the name length and
- # 24-N is the name
- data = self._websocket.receive_frame()
- data_length = len(data) if data is not None else 0
- self.assertFalse(data_length <= 24 or
- data_length != (struct.unpack(">L",
- data[20:24])[0] + 24),
- 'Server initialization was not the right format.')
- # Since the rest of the data on the screen is arbitrary, we will
- # close the socket and end our validation of the data at this point
- # Assert that the latest check was false, meaning that the server
- # initialization was the right format
- self.assertFalse(data_length <= 24 or
- data_length != (struct.unpack(">L",
- data[20:24])[0] + 24))
-
- def _validate_websocket_upgrade(self):
- """Verify that the websocket upgrade was successful.
-
- Parses response and ensures that required response
- fields are present and accurate.
- (https://tools.ietf.org/html/rfc7231#section-6.2.2)
- """
-
- self.assertTrue(
- self._websocket.response.startswith(b'HTTP/1.1 101 Switching '
- b'Protocols'),
- 'Incorrect HTTP return status code: {}'.format(
- str(self._websocket.response)
- )
- )
- _required_header = 'upgrade: websocket'
- _response = str(self._websocket.response).lower()
- self.assertIn(
- _required_header,
- _response,
- 'Did not get the expected WebSocket HTTP Response.'
- )
-
@decorators.idempotent_id('c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc')
def test_novnc(self):
"""Test accessing novnc console of server"""
if self.use_get_remote_console:
- body = self.client.get_remote_console(
+ body = self.servers_client.get_remote_console(
self.server['id'], console_type='novnc',
protocol='vnc')['remote_console']
else:
- body = self.client.get_vnc_console(self.server['id'],
- type='novnc')['console']
+ body = self.servers_client.get_vnc_console(self.server['id'],
+ type='novnc')['console']
self.assertEqual('novnc', body['type'])
# Do the initial HTTP Request to novncproxy to get the NoVNC JavaScript
- self._validate_novnc_html(body['url'])
+ self.validate_novnc_html(body['url'])
# Do the WebSockify HTTP Request to novncproxy to do the RFB connection
- self._websocket = compute.create_websocket(body['url'])
+ self.websocket = compute.create_websocket(body['url'])
# Validate that we successfully connected and upgraded to Web Sockets
- self._validate_websocket_upgrade()
+ self.validate_websocket_upgrade()
# Validate the RFB Negotiation to determine if a valid VNC session
- self._validate_rfb_negotiation()
+ self.validate_rfb_negotiation()
@decorators.idempotent_id('f9c79937-addc-4aaa-9e0e-841eef02aeb7')
def test_novnc_bad_token(self):
@@ -202,12 +88,12 @@
the novnc proxy should reject the connection and close it.
"""
if self.use_get_remote_console:
- body = self.client.get_remote_console(
+ body = self.servers_client.get_remote_console(
self.server['id'], console_type='novnc',
protocol='vnc')['remote_console']
else:
- body = self.client.get_vnc_console(self.server['id'],
- type='novnc')['console']
+ body = self.servers_client.get_vnc_console(self.server['id'],
+ type='novnc')['console']
self.assertEqual('novnc', body['type'])
# Do the WebSockify HTTP Request to novncproxy with a bad token
parts = urlparse.urlparse(body['url'])
@@ -222,9 +108,9 @@
parts.path, parts.params, new_query,
parts.fragment)
url = urlparse.urlunparse(new_parts)
- self._websocket = compute.create_websocket(url)
+ self.websocket = compute.create_websocket(url)
# Make sure the novncproxy rejected the connection and closed it
- data = self._websocket.receive_frame()
+ data = self.websocket.receive_frame()
self.assertTrue(data is None or not data,
"The novnc proxy actually sent us some data, but we "
"expected it to close the connection.")
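The removed `_validate_rfb_negotiation` helper (now provided via `compute.NoVNCValidateMixin`, per the class change above) starts by parsing the 12-byte RFB ProtocolVersion handshake. A self-contained sketch of that parsing, matching the slicing the deleted code used:

```python
# Sketch of the RFB ProtocolVersion parsing done by the shared noVNC
# validation helper; the handshake is the 12-byte string
# b'RFB xxx.yyy\n' defined by the RFB protocol (RFC 6143).
def parse_rfb_version(data):
    """Parse b'RFB 003.008\\n' into a float like 3.8."""
    return float("%d.%d" % (int(data[4:7], base=10),
                            int(data[8:11], base=10)))

version = parse_rfb_version(b'RFB 003.008\n')
```

The mixin then checks the result against the supported versions `[3.3, 3.8]` before echoing the version back to the server.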
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 10153bb..10c2e91 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -45,7 +45,7 @@
try:
self.validation_resources = self.get_class_validation_resources(
self.os_primary)
- waiters.wait_for_server_status(self.client,
+ waiters.wait_for_server_status(self.servers_client,
self.server_id, 'ACTIVE')
except lib_exc.NotFound:
# The server was deleted by previous test, create a new one
@@ -78,7 +78,6 @@
@classmethod
def setup_clients(cls):
super(ServerActionsBase, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -89,14 +88,15 @@
def _test_reboot_server(self, reboot_type):
if CONF.validation.run_validation:
# Get the time the server was last rebooted,
- server = self.client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
linux_client = remote_client.RemoteClient(
self.get_server_ip(server, self.validation_resources),
self.ssh_user,
self.password,
self.validation_resources['keypair']['private_key'],
server=server,
- servers_client=self.client)
+ servers_client=self.servers_client)
boot_time = linux_client.get_boot_time()
# NOTE: This sync is for avoiding the loss of pub key data
@@ -113,22 +113,23 @@
self.password,
self.validation_resources['keypair']['private_key'],
server=server,
- servers_client=self.client)
+ servers_client=self.servers_client)
new_boot_time = linux_client.get_boot_time()
self.assertGreater(new_boot_time, boot_time,
'%s > %s' % (new_boot_time, boot_time))
def _test_rebuild_server(self, server_id, **kwargs):
# Get the IPs the server has before rebuilding it
- original_addresses = (self.client.show_server(server_id)['server']
- ['addresses'])
+ original_addresses = (
+ self.reader_servers_client.show_server(server_id)['server']
+ ['addresses'])
# The server should be rebuilt using the provided image and data
meta = {'rebuild': 'server'}
new_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix,
name=self.__class__.__name__ + '-server')
password = 'rebuildPassw0rd'
- rebuilt_server = self.client.rebuild_server(
+ rebuilt_server = self.servers_client.rebuild_server(
server_id,
self.image_ref_alt,
name=new_name,
@@ -142,9 +143,10 @@
self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
# Verify the server properties after the rebuild completes
- waiters.wait_for_server_status(self.client,
+ waiters.wait_for_server_status(self.servers_client,
rebuilt_server['id'], 'ACTIVE')
- server = self.client.show_server(rebuilt_server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ rebuilt_server['id'])['server']
rebuilt_image_id = server['image']['id']
self.assertTrue(self.image_ref_alt.endswith(rebuilt_image_id))
self.assertEqual(new_name, server['name'])
@@ -169,7 +171,7 @@
password,
validation_resources['keypair']['private_key'],
server=rebuilt_server,
- servers_client=self.client)
+ servers_client=self.servers_client)
linux_client.validate_authentication()
def _test_resize_server_confirm(self, server_id, stop=False):
@@ -177,31 +179,31 @@
# the provided flavor
if stop:
- self.client.stop_server(server_id)
- waiters.wait_for_server_status(self.client, server_id,
+ self.servers_client.stop_server(server_id)
+ waiters.wait_for_server_status(self.servers_client, server_id,
'SHUTOFF')
- self.client.resize_server(server_id, self.flavor_ref_alt)
+ self.servers_client.resize_server(server_id, self.flavor_ref_alt)
# NOTE(jlk): Explicitly delete the server to get a new one for later
# tests. Avoids resize down race issues.
self.addCleanup(self.delete_server, server_id)
- waiters.wait_for_server_status(self.client, server_id,
+ waiters.wait_for_server_status(self.servers_client, server_id,
'VERIFY_RESIZE')
- self.client.confirm_resize_server(server_id)
+ self.servers_client.confirm_resize_server(server_id)
expected_status = 'SHUTOFF' if stop else 'ACTIVE'
- waiters.wait_for_server_status(self.client, server_id,
+ waiters.wait_for_server_status(self.servers_client, server_id,
expected_status)
- server = self.client.show_server(server_id)['server']
+ server = self.reader_servers_client.show_server(server_id)['server']
self.assert_flavor_equal(self.flavor_ref_alt, server['flavor'])
if stop:
# NOTE(mriedem): tearDown requires the server to be started.
- self.client.start_server(server_id)
+ self.servers_client.start_server(server_id)
def _get_output(self, server_id):
- output = self.client.get_console_output(
+ output = self.servers_client.get_console_output(
server_id, length=3)['output']
self.assertTrue(output, "Console output was empty.")
lines = len(output.split('\n'))
@@ -234,18 +236,21 @@
self.addCleanup(self.delete_server, newserver['id'])
# The server's password should be set to the provided password
new_password = 'Newpass1234'
- self.client.change_password(newserver['id'], adminPass=new_password)
- waiters.wait_for_server_status(self.client, newserver['id'], 'ACTIVE')
+ self.servers_client.change_password(newserver['id'],
+ adminPass=new_password)
+ waiters.wait_for_server_status(self.servers_client, newserver['id'],
+ 'ACTIVE')
if CONF.validation.run_validation:
# Verify that the user can authenticate with the new password
- server = self.client.show_server(newserver['id'])['server']
+ server = self.reader_servers_client.show_server(
+ newserver['id'])['server']
linux_client = remote_client.RemoteClient(
self.get_server_ip(server, self.validation_resources),
self.ssh_user,
new_password,
server=server,
- servers_client=self.client)
+ servers_client=self.servers_client)
linux_client.validate_authentication()
@decorators.attr(type='smoke')
@@ -278,14 +283,15 @@
# a situation when a newly created server doesn't have a floating
# ip attached at the beginning of the test_rebuild_server let's
# make sure right here the floating ip is attached
- waiters.wait_for_server_floating_ip(
- self.client,
- server,
- validation_resources['floating_ip'])
+ if 'floating_ip' in validation_resources:
+ waiters.wait_for_server_floating_ip(
+ self.servers_client,
+ server,
+ validation_resources['floating_ip'])
self.addCleanup(waiters.wait_for_server_termination,
- self.client, server['id'])
- self.addCleanup(self.client.delete_server, server['id'])
+ self.servers_client, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
self._test_rebuild_server(
server_id=server['id'],
@@ -308,17 +314,19 @@
values after a resize is reverted.
"""
- self.client.resize_server(self.server_id, self.flavor_ref_alt)
+ self.servers_client.resize_server(self.server_id, self.flavor_ref_alt)
# NOTE(zhufl): Explicitly delete the server to get a new one for later
# tests. Avoids resize down race issues.
self.addCleanup(self.delete_server, self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
'VERIFY_RESIZE')
- self.client.revert_resize_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.servers_client.revert_resize_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
- server = self.client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
self.assert_flavor_equal(self.flavor_ref, server['flavor'])
@decorators.idempotent_id('4b8867e6-fffa-4d54-b1d1-6fdda57be2f3')
@@ -344,29 +352,34 @@
'Pause is not available.')
def test_pause_unpause_server(self):
"""Test pausing and unpausing server"""
- self.client.pause_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'PAUSED')
- self.client.unpause_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.servers_client.pause_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'PAUSED')
+ self.servers_client.unpause_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
@decorators.idempotent_id('0d8ee21e-b749-462d-83da-b85b41c86c7f')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
'Suspend is not available.')
def test_suspend_resume_server(self):
"""Test suspending and resuming server"""
- self.client.suspend_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
+ self.servers_client.suspend_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
'SUSPENDED')
- self.client.resume_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.servers_client.resume_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
@decorators.idempotent_id('af8eafd4-38a7-4a4b-bdbc-75145a580560')
def test_stop_start_server(self):
"""Test stopping and starting server"""
- self.client.stop_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
- self.client.start_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.servers_client.stop_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'SHUTOFF')
+ self.servers_client.start_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
@decorators.idempotent_id('80a8094c-211e-440a-ab88-9e59d556c7ee')
def test_lock_unlock_server(self):
@@ -377,18 +390,21 @@
Then unlock the server, now the server can be stopped and started.
"""
# Lock the server, try server stop (exceptions throw), unlock it and retry
- self.client.lock_server(self.server_id)
- self.addCleanup(self.client.unlock_server, self.server_id)
- server = self.client.show_server(self.server_id)['server']
+ self.servers_client.lock_server(self.server_id)
+ self.addCleanup(self.servers_client.unlock_server, self.server_id)
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
self.assertEqual(server['status'], 'ACTIVE')
# Locked server is not allowed to be stopped by non-admin user
self.assertRaises(lib_exc.Conflict,
- self.client.stop_server, self.server_id)
- self.client.unlock_server(self.server_id)
- self.client.stop_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'SHUTOFF')
- self.client.start_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.servers_client.stop_server, self.server_id)
+ self.servers_client.unlock_server(self.server_id)
+ self.servers_client.stop_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'SHUTOFF')
+ self.servers_client.start_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
class ServerActionsTestOtherA(ServerActionsBase):
@@ -398,11 +414,11 @@
server = self.create_test_server(wait_until='ACTIVE')
# Remove all Security group
- self.client.remove_security_group(
+ self.servers_client.remove_security_group(
server['id'], name=server['security_groups'][0]['name'])
# Verify all Security group
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertNotIn('security_groups', server)
@decorators.idempotent_id('30449a88-5aff-4f9b-9866-6ee9b17f906d')
@@ -420,16 +436,17 @@
server = servers[0]
self.addCleanup(waiters.wait_for_server_termination,
- self.client, server['id'])
- self.addCleanup(self.client.delete_server, server['id'])
- server = self.client.show_server(server['id'])['server']
+ self.servers_client, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
+ server = self.reader_servers_client.show_server(server['id'])['server']
old_image = server['image']['id']
new_image = (self.image_ref_alt
if old_image == self.image_ref else self.image_ref)
- self.client.stop_server(server['id'])
- waiters.wait_for_server_status(self.client, server['id'], 'SHUTOFF')
- rebuilt_server = (self.client.rebuild_server(server['id'], new_image)
- ['server'])
+ self.servers_client.stop_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'SHUTOFF')
+ rebuilt_server = (self.servers_client.rebuild_server(
+ server['id'], new_image)['server'])
# Verify the properties in the initial response are correct
self.assertEqual(server['id'], rebuilt_server['id'])
@@ -438,9 +455,10 @@
self.assert_flavor_equal(self.flavor_ref, rebuilt_server['flavor'])
# Verify the server properties after the rebuild completes
- waiters.wait_for_server_status(self.client,
+ waiters.wait_for_server_status(self.servers_client,
rebuilt_server['id'], 'SHUTOFF')
- server = self.client.show_server(rebuilt_server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ rebuilt_server['id'])['server']
rebuilt_image_id = server['image']['id']
self.assertEqual(new_image, rebuilt_image_id)
@@ -468,10 +486,10 @@
wait_until='SSHABLE')
server = servers[0]
self.addCleanup(waiters.wait_for_server_termination,
- self.client, server['id'])
- self.addCleanup(self.client.delete_server, server['id'])
+ self.servers_client, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
- server = self.client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], 'available')
self.attach_volume(server, volume)
@@ -506,7 +524,7 @@
# NOTE(mgoddard): Get detailed server to ensure addresses are present
# in fixed IP case.
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self._test_resize_server_confirm(server['id'])
@@ -514,7 +532,7 @@
# Now do something interactive with the guest like get its console
# output; we don't actually care about the output,
# just that it doesn't raise an error.
- self.client.get_console_output(server['id'])
+ self.servers_client.get_console_output(server['id'])
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
self.get_server_ip(server, self.validation_resources),
@@ -522,7 +540,7 @@
password=None,
pkey=self.validation_resources['keypair']['private_key'],
server=server,
- servers_client=self.client)
+ servers_client=self.servers_client)
linux_client.validate_authentication()
@@ -550,21 +568,24 @@
# Create a blank volume and attach it to the server created in setUp.
volume = self.create_volume()
- server = self.client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
self.attach_volume(server, volume)
# Now resize the server with the blank volume attached.
- self.client.resize_server(self.server_id, self.flavor_ref_alt)
+ self.servers_client.resize_server(self.server_id, self.flavor_ref_alt)
# Explicitly delete the server to get a new one for later
# tests. Avoids resize down race issues.
self.addCleanup(self.delete_server, self.server_id)
waiters.wait_for_server_status(
- self.client, self.server_id, 'VERIFY_RESIZE')
+ self.servers_client, self.server_id, 'VERIFY_RESIZE')
# Now revert the resize which should move the instance and its volume
# attachment back to the original source compute host.
- self.client.revert_resize_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.servers_client.revert_resize_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
# Make sure everything still looks OK.
- server = self.client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
self.assert_flavor_equal(self.flavor_ref, server['flavor'])
attached_volumes = server['os-extended-volumes:volumes_attached']
self.assertEqual(1, len(attached_volumes))
@@ -593,10 +614,10 @@
backup1 = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='backup-1')
- resp = self.client.create_backup(self.server_id,
- backup_type='daily',
- rotation=2,
- name=backup1)
+ resp = self.servers_client.create_backup(self.server_id,
+ backup_type='daily',
+ rotation=2,
+ name=backup1)
oldest_backup_exist = True
# the oldest one should be deleted automatically in this test
@@ -630,11 +651,12 @@
backup2 = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='backup-2')
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
- resp = self.client.create_backup(self.server_id,
- backup_type='daily',
- rotation=2,
- name=backup2)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
+ resp = self.servers_client.create_backup(self.server_id,
+ backup_type='daily',
+ rotation=2,
+ name=backup2)
if api_version_utils.compare_version_header_to_response(
"OpenStack-API-Version", "compute 2.45", resp.response, "lt"):
image2_id = resp['image_id']
@@ -669,11 +691,12 @@
# the first one will be deleted
backup3 = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='backup-3')
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
- resp = self.client.create_backup(self.server_id,
- backup_type='daily',
- rotation=2,
- name=backup3)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
+ resp = self.servers_client.create_backup(self.server_id,
+ backup_type='daily',
+ rotation=2,
+ name=backup3)
if api_version_utils.compare_version_header_to_response(
"OpenStack-API-Version", "compute 2.45", resp.response, "lt"):
image3_id = resp['image_id']
@@ -683,7 +706,8 @@
image3_id, 'success')
self.addCleanup(glance_client.delete_image, image3_id)
# the first backup should be deleted
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'ACTIVE')
glance_client.wait_for_resource_deletion(image1_id)
oldest_backup_exist = False
image_list = glance_client.list_images(params)['images']
@@ -707,7 +731,8 @@
server = self.create_test_server(wait_until='ACTIVE')
def _check_full_length_console_log():
- output = self.client.get_console_output(server['id'])['output']
+ output = self.servers_client.get_console_output(server['id'])[
+ 'output']
self.assertTrue(output, "Console output was empty.")
lines = len(output.split('\n'))
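The `_check_full_length_console_log` helper above asserts the console output is non-empty and counts its newline-separated lines. The core of that check, reduced to a standalone function with a toy output string in place of the `get_console_output` call:

```python
def console_line_count(output):
    """Fail on empty console output, otherwise count its lines."""
    if not output:
        raise AssertionError("Console output was empty.")
    return len(output.split('\n'))


print(console_line_count("line1\nline2\nline3"))  # 3
```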
@@ -735,8 +760,9 @@
server = self.create_test_server(wait_until='ACTIVE')
temp_server_id = server['id']
- self.client.stop_server(temp_server_id)
- waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
+ self.servers_client.stop_server(temp_server_id)
+ waiters.wait_for_server_status(self.servers_client, temp_server_id,
+ 'SHUTOFF')
self.wait_for(self._get_output, temp_server_id)
@decorators.idempotent_id('77eba8e0-036e-4635-944b-f7a8f3b78dc9')
@@ -750,19 +776,20 @@
else:
raise lib_exc.InvalidConfiguration(
'api_v2 must be True in [image-feature-enabled].')
- compute.shelve_server(self.client, self.server_id,
+ compute.shelve_server(self.servers_client, self.server_id,
force_shelve_offload=True)
- server = self.client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
image_name = server['name'] + '-shelved'
params = {'name': image_name}
images = glance_client.list_images(params)['images']
self.assertEqual(1, len(images))
self.assertEqual(image_name, images[0]['name'])
- body = self.client.unshelve_server(self.server_id)
+ body = self.servers_client.unshelve_server(self.server_id)
waiters.wait_for_server_status(
- self.client,
+ self.servers_client,
self.server_id,
"ACTIVE",
request_id=body.response["x-openstack-request-id"],
@@ -778,10 +805,11 @@
def test_shelve_paused_server(self):
"""Test shelving a paused server"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.pause_server(server['id'])
- waiters.wait_for_server_status(self.client, server['id'], 'PAUSED')
+ self.servers_client.pause_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'PAUSED')
# Check that the shelve operation succeeds on a paused server.
- compute.shelve_server(self.client, server['id'],
+ compute.shelve_server(self.servers_client, server['id'],
force_shelve_offload=True)
@decorators.idempotent_id('c6bc11bf-592e-4015-9319-1c98dc64daf5')
@@ -793,10 +821,10 @@
The returned vnc console url should be in valid format.
"""
if self.is_requested_microversion_compatible('2.5'):
- body = self.client.get_vnc_console(
+ body = self.servers_client.get_vnc_console(
self.server_id, type='novnc')['console']
else:
- body = self.client.get_remote_console(
+ body = self.servers_client.get_remote_console(
self.server_id, console_type='novnc',
protocol='vnc')['remote_console']
self.assertEqual('novnc', body['type'])
@@ -853,6 +881,10 @@
super(ServerActionsV293TestJSON, cls).setup_credentials()
@classmethod
+ def setup_clients(cls):
+ super(ServerActionsV293TestJSON, cls).setup_clients()
+
+ @classmethod
def resource_setup(cls):
super(ServerActionsV293TestJSON, cls).resource_setup()
cls.server_id = cls.recreate_server(None, volume_backed=True,
@@ -863,7 +895,8 @@
"""Test rebuilding a volume backed server"""
self.validation_resources = self.get_class_validation_resources(
self.os_primary)
- server = self.servers_client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
volume_id = server['os-extended-volumes:volumes_attached'][0]['id']
volume_before_rebuild = self.volumes_client.show_volume(volume_id)
image_before_rebuild = (
@@ -912,7 +945,7 @@
# Verify the server properties after the rebuild completes
waiters.wait_for_server_status(self.servers_client,
rebuilt_server['id'], 'ACTIVE')
- server = self.servers_client.show_server(
+ server = self.reader_servers_client.show_server(
rebuilt_server['id'])['server']
volume_id = server['os-extended-volumes:volumes_attached'][0]['id']
volume_after_rebuild = self.volumes_client.show_volume(volume_id)
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index 978a9da..429ffff 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -14,9 +14,13 @@
# under the License.
from tempest.api.compute import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
+
class ServerAddressesTestJSON(base.BaseV2ComputeTest):
"""Test server addresses"""
create_default_network = True
@@ -24,7 +28,6 @@
@classmethod
def setup_clients(cls):
super(ServerAddressesTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -40,7 +43,8 @@
All public and private addresses for a server should be returned.
"""
- addresses = self.client.list_addresses(self.server['id'])['addresses']
+ addresses = self.reader_servers_client.list_addresses(
+ self.server['id'])['addresses']
# We do not know the exact network configuration, but an instance
# should at least have a single public or private address
@@ -57,14 +61,16 @@
the specified one.
"""
- addresses = self.client.list_addresses(self.server['id'])['addresses']
+ addresses = self.reader_servers_client.list_addresses(
+ self.server['id'])['addresses']
# Once again we don't know the environment's exact network config,
# but the response for each individual network should be the same
# as the partial result of the full address list
id = self.server['id']
for addr_type in addresses:
- addr = self.client.list_addresses_by_network(id, addr_type)
+ addr = self.reader_servers_client.list_addresses_by_network(
+ id, addr_type)
addr = addr[addr_type]
for address in addresses[addr_type]:
diff --git a/tempest/api/compute/servers/test_server_addresses_negative.py b/tempest/api/compute/servers/test_server_addresses_negative.py
index bb21594..8688cd7 100644
--- a/tempest/api/compute/servers/test_server_addresses_negative.py
+++ b/tempest/api/compute/servers/test_server_addresses_negative.py
@@ -25,7 +25,6 @@
@classmethod
def setup_clients(cls):
super(ServerAddressesNegativeTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -36,13 +35,13 @@
@decorators.idempotent_id('02c3f645-2d2e-4417-8525-68c0407d001b')
def test_list_server_addresses_invalid_server_id(self):
"""List addresses request should fail if server id not in system"""
- self.assertRaises(lib_exc.NotFound, self.client.list_addresses,
- '999')
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_servers_client.list_addresses, '999')
@decorators.attr(type=['negative'])
@decorators.idempotent_id('a2ab5144-78c0-4942-a0ed-cc8edccfd9ba')
def test_list_server_addresses_by_network_neg(self):
"""List addresses by network should fail if network name not valid"""
self.assertRaises(lib_exc.NotFound,
- self.client.list_addresses_by_network,
+ self.reader_servers_client.list_addresses_by_network,
self.server['id'], 'invalid')
diff --git a/tempest/api/compute/servers/test_server_group.py b/tempest/api/compute/servers/test_server_group.py
index f92b5ba..c4408ed 100644
--- a/tempest/api/compute/servers/test_server_group.py
+++ b/tempest/api/compute/servers/test_server_group.py
@@ -37,6 +37,10 @@
def setup_clients(cls):
super(ServerGroupTestJSON, cls).setup_clients()
cls.client = cls.server_groups_client
+ if CONF.enforce_scope.nova:
+ cls.reader_sg_client = cls.os_project_reader.server_groups_client
+ else:
+ cls.reader_sg_client = cls.client
@classmethod
def _set_policy(cls, policy):
@@ -78,7 +82,8 @@
# delete the test server-group
self.client.delete_server_group(server_group['id'])
# validation of server-group deletion
- server_group_list = self.client.list_server_groups()['server_groups']
+ server_group_list = self.reader_sg_client.list_server_groups()[
+ 'server_groups']
self.assertNotIn(server_group, server_group_list)
def _create_delete_server_group(self, policy):
@@ -118,14 +123,14 @@
@decorators.idempotent_id('b3545034-dd78-48f0-bdc2-a4adfa6d0ead')
def test_show_server_group(self):
"""Test getting the server-group detail"""
- body = self.client.show_server_group(
+ body = self.reader_sg_client.show_server_group(
self.created_server_group['id'])['server_group']
self.assertEqual(self.created_server_group, body)
@decorators.idempotent_id('d4874179-27b4-4d7d-80e4-6c560cdfe321')
def test_list_server_groups(self):
"""Test listing the server-groups"""
- body = self.client.list_server_groups()['server_groups']
+ body = self.reader_sg_client.list_server_groups()['server_groups']
self.assertIn(self.created_server_group, body)
@decorators.idempotent_id('ed20d3fb-9d1f-4329-b160-543fbd5d9811')
@@ -140,7 +145,7 @@
self.addCleanup(self.delete_server, server['id'])
# Check a server is in the group
- server_group = (self.server_groups_client.show_server_group(
+ server_group = (self.reader_sg_client.show_server_group(
self.created_server_group['id'])['server_group'])
self.assertIn(server['id'], server_group['members'])
@@ -154,6 +159,14 @@
create_default_network = True
min_microversion = '2.64'
+ @classmethod
+ def setup_clients(cls):
+ super(ServerGroup264TestJSON, cls).setup_clients()
+ if CONF.enforce_scope.nova:
+ cls.reader_client = cls.os_project_reader.server_groups_client
+ else:
+ cls.reader_client = cls.server_groups_client
+
@decorators.idempotent_id('b52f09dd-2133-4037-9a5d-bdb260096a88')
def test_create_get_server_group(self):
# create, get the test server-group with given policy
@@ -162,5 +175,5 @@
self.addCleanup(
self.server_groups_client.delete_server_group,
server_group['id'])
- self.server_groups_client.list_server_groups()
- self.server_groups_client.show_server_group(server_group['id'])
+ self.reader_client.list_server_groups()
+ self.reader_client.show_server_group(server_group['id'])
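The recurring pattern in this change, visible in both `setup_clients` hunks above: when nova scope enforcement is enabled, read-only calls go through a project-reader client, otherwise they fall back to the regular client. A toy version of that selection, with plain objects standing in for tempest's `CONF` and credential managers (all names here are illustrative):

```python
class Conf:
    """Stand-in for tempest's CONF object."""
    class enforce_scope:
        nova = True  # flip to False to fall back to the primary client


def pick_reader_client(conf, primary_client, reader_client):
    """Return the client that read-only test calls should use."""
    if conf.enforce_scope.nova:
        return reader_client
    return primary_client


primary, reader = 'primary-servers-client', 'reader-servers-client'
result = pick_reader_client(Conf, primary, reader)
print(result)  # reader-servers-client
```

Keeping the fallback to the primary client means the same test body runs unchanged on deployments with scope enforcement disabled.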
diff --git a/tempest/api/compute/servers/test_server_metadata.py b/tempest/api/compute/servers/test_server_metadata.py
index 5f35b15..3e82202 100644
--- a/tempest/api/compute/servers/test_server_metadata.py
+++ b/tempest/api/compute/servers/test_server_metadata.py
@@ -29,7 +29,6 @@
@classmethod
def setup_clients(cls):
super(ServerMetadataTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -39,7 +38,7 @@
def setUp(self):
super(ServerMetadataTestJSON, self).setUp()
meta = {'key1': 'value1', 'key2': 'value2'}
- self.client.set_server_metadata(self.server['id'], meta)
+ self.servers_client.set_server_metadata(self.server['id'], meta)
@decorators.idempotent_id('479da087-92b3-4dcf-aeb3-fd293b2d14ce')
def test_list_server_metadata(self):
@@ -47,8 +46,9 @@
All metadata key/value pairs for a server should be returned.
"""
- resp_metadata = (self.client.list_server_metadata(self.server['id'])
- ['metadata'])
+ resp_metadata = (
+ self.reader_servers_client.list_server_metadata(self.server['id'])
+ ['metadata'])
# Verify the expected metadata items are in the list
expected = {'key1': 'value1', 'key2': 'value2'}
@@ -62,12 +62,14 @@
"""
# Create a new set of metadata for the server
req_metadata = {'meta2': 'data2', 'meta3': 'data3'}
- self.client.set_server_metadata(self.server['id'], req_metadata)
+ self.servers_client.set_server_metadata(self.server['id'],
+ req_metadata)
# Verify the expected values are correct, and that the
# previous values have been removed
- resp_metadata = (self.client.list_server_metadata(self.server['id'])
- ['metadata'])
+ resp_metadata = (
+ self.reader_servers_client.list_server_metadata(self.server['id'])
+ ['metadata'])
self.assertEqual(resp_metadata, req_metadata)
@decorators.idempotent_id('344d981e-0c33-4997-8a5d-6c1d803e4134')
@@ -77,11 +79,12 @@
The server's metadata values should be updated to the provided values.
"""
meta = {'key1': 'alt1', 'key3': 'value3'}
- self.client.update_server_metadata(self.server['id'], meta)
+ self.servers_client.update_server_metadata(self.server['id'], meta)
# Verify the values have been updated to the proper values
- resp_metadata = (self.client.list_server_metadata(self.server['id'])
- ['metadata'])
+ resp_metadata = (
+ self.reader_servers_client.list_server_metadata(self.server['id'])
+ ['metadata'])
expected = {'key1': 'alt1', 'key2': 'value2', 'key3': 'value3'}
self.assertEqual(expected, resp_metadata)
@@ -93,17 +96,18 @@
body is passed.
"""
meta = {}
- self.client.update_server_metadata(self.server['id'], meta)
- resp_metadata = (self.client.list_server_metadata(self.server['id'])
- ['metadata'])
+ self.servers_client.update_server_metadata(self.server['id'], meta)
+ resp_metadata = (
+ self.reader_servers_client.list_server_metadata(self.server['id'])
+ ['metadata'])
expected = {'key1': 'value1', 'key2': 'value2'}
self.assertEqual(expected, resp_metadata)
@decorators.idempotent_id('3043c57d-7e0e-49a6-9a96-ad569c265e6a')
def test_get_server_metadata_item(self):
"""Test getting specific server metadata item"""
- meta = self.client.show_server_metadata_item(self.server['id'],
- 'key2')['meta']
+ meta = self.reader_servers_client.show_server_metadata_item(
+ self.server['id'], 'key2')['meta']
self.assertEqual('value2', meta['key2'])
@decorators.idempotent_id('58c02d4f-5c67-40be-8744-d3fa5982eb1c')
@@ -115,11 +119,13 @@
# Update the metadata value.
meta = {'nova': 'alt'}
- self.client.set_server_metadata_item(self.server['id'], 'nova', meta)
+ self.servers_client.set_server_metadata_item(self.server['id'],
+ 'nova', meta)
# Verify the meta item's value has been updated
- resp_metadata = (self.client.list_server_metadata(self.server['id'])
- ['metadata'])
+ resp_metadata = (
+ self.reader_servers_client.list_server_metadata(self.server['id'])
+ ['metadata'])
expected = {'key1': 'value1', 'key2': 'value2', 'nova': 'alt'}
self.assertEqual(expected, resp_metadata)
@@ -129,10 +135,12 @@
The metadata value/key pair should be deleted from the server.
"""
- self.client.delete_server_metadata_item(self.server['id'], 'key1')
+ self.servers_client.delete_server_metadata_item(self.server['id'],
+ 'key1')
# Verify the metadata item has been removed
- resp_metadata = (self.client.list_server_metadata(self.server['id'])
- ['metadata'])
+ resp_metadata = (
+ self.reader_servers_client.list_server_metadata(self.server['id'])
+ ['metadata'])
expected = {'key2': 'value2'}
self.assertEqual(expected, resp_metadata)
diff --git a/tempest/api/compute/servers/test_server_metadata_negative.py b/tempest/api/compute/servers/test_server_metadata_negative.py
index 2059dfa..5ba83ad 100644
--- a/tempest/api/compute/servers/test_server_metadata_negative.py
+++ b/tempest/api/compute/servers/test_server_metadata_negative.py
@@ -27,12 +27,11 @@
@classmethod
def setup_clients(cls):
super(ServerMetadataNegativeTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
super(ServerMetadataNegativeTestJSON, cls).resource_setup()
- cls.tenant_id = cls.client.tenant_id
+ cls.tenant_id = cls.servers_client.tenant_id
cls.server = cls.create_test_server(metadata={}, wait_until='ACTIVE')
@decorators.attr(type=['negative'])
@@ -68,7 +67,7 @@
# GET on a non-existent server should not succeed
non_existent_server_id = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.show_server_metadata_item,
+ self.reader_servers_client.show_server_metadata_item,
non_existent_server_id,
'test2')
@@ -78,7 +77,7 @@
"""Test listing metadata for a non existent server should fail"""
non_existent_server_id = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.list_server_metadata,
+ self.reader_servers_client.list_server_metadata,
non_existent_server_id)
@decorators.attr(type=['negative'])
@@ -90,7 +89,7 @@
"""
meta = {'testkey': 'testvalue'}
self.assertRaises(lib_exc.BadRequest,
- self.client.set_server_metadata_item,
+ self.servers_client.set_server_metadata_item,
self.server['id'], 'key', meta)
@decorators.attr(type=['negative'])
@@ -100,7 +99,7 @@
non_existent_server_id = data_utils.rand_uuid()
meta = {'meta1': 'data1'}
self.assertRaises(lib_exc.NotFound,
- self.client.set_server_metadata,
+ self.servers_client.set_server_metadata,
non_existent_server_id,
meta)
@@ -111,7 +110,7 @@
non_existent_server_id = data_utils.rand_uuid()
meta = {'key1': 'value1', 'key2': 'value2'}
self.assertRaises(lib_exc.NotFound,
- self.client.update_server_metadata,
+ self.servers_client.update_server_metadata,
non_existent_server_id,
meta)
@@ -121,7 +120,7 @@
"""Test updating server metadata to blank key should fail"""
meta = {'': 'data1'}
self.assertRaises(lib_exc.BadRequest,
- self.client.update_server_metadata,
+ self.servers_client.update_server_metadata,
self.server['id'], meta=meta)
@decorators.attr(type=['negative'])
@@ -133,7 +132,7 @@
"""
non_existent_server_id = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.delete_server_metadata_item,
+ self.servers_client.delete_server_metadata_item,
non_existent_server_id,
'd')
@@ -155,14 +154,14 @@
for num in range(1, quota_metadata + 2):
req_metadata['key' + str(num)] = 'val' + str(num)
self.assertRaises((lib_exc.OverLimit, lib_exc.Forbidden),
- self.client.set_server_metadata,
+ self.servers_client.set_server_metadata,
self.server['id'], req_metadata)
# A 403 Forbidden or 413 OverLimit (old behaviour) exception
# will be raised when exceeding the metadata items limit for
# the tenant.
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
- self.client.update_server_metadata,
+ self.servers_client.update_server_metadata,
self.server['id'], req_metadata)
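The over-quota test above builds one more metadata item than the quota allows and expects the set/update calls to fail with 403 Forbidden or 413 OverLimit. A sketch of that construction with a toy quota gate in place of nova's server-side check (the exception type here is only a stand-in):

```python
def set_server_metadata(metadata, quota_metadata):
    """Toy quota gate standing in for nova's 403/413 response."""
    if len(metadata) > quota_metadata:
        raise OverflowError("metadata quota exceeded")
    return dict(metadata)


quota_metadata = 3
req_metadata = {}
for num in range(1, quota_metadata + 2):
    req_metadata['key' + str(num)] = 'val' + str(num)
# req_metadata now holds quota_metadata + 1 items, one over the limit

try:
    set_server_metadata(req_metadata, quota_metadata)
    raised = False
except OverflowError:
    raised = True
print(raised, len(req_metadata))  # True 4
```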
@decorators.attr(type=['negative'])
@@ -171,7 +170,7 @@
"""Test setting server metadata with blank key should fail"""
meta = {'': 'data1'}
self.assertRaises(lib_exc.BadRequest,
- self.client.set_server_metadata,
+ self.servers_client.set_server_metadata,
self.server['id'], meta=meta)
@decorators.attr(type=['negative'])
@@ -180,5 +179,5 @@
"""Test setting server metadata without metadata field should fail"""
meta = {'meta1': 'data1'}
self.assertRaises(lib_exc.BadRequest,
- self.client.set_server_metadata,
+ self.servers_client.set_server_metadata,
self.server['id'], meta=meta, no_metadata_field=True)
diff --git a/tempest/api/compute/servers/test_server_password.py b/tempest/api/compute/servers/test_server_password.py
index f61d4fd..205b0ec 100644
--- a/tempest/api/compute/servers/test_server_password.py
+++ b/tempest/api/compute/servers/test_server_password.py
@@ -15,15 +15,23 @@
from tempest.api.compute import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
+
class ServerPasswordTestJSON(base.BaseV2ComputeTest):
"""Test server password"""
create_default_network = True
@classmethod
+ def setup_clients(cls):
+ super(ServerPasswordTestJSON, cls).setup_clients()
+
+ @classmethod
def resource_setup(cls):
super(ServerPasswordTestJSON, cls).resource_setup()
cls.server = cls.create_test_server(wait_until="ACTIVE")
@@ -31,7 +39,7 @@
@decorators.idempotent_id('f83b582f-62a8-4f22-85b0-0dee50ff783a')
def test_get_server_password(self):
"""Test getting password of a server"""
- self.servers_client.show_password(self.server['id'])
+ self.reader_servers_client.show_password(self.server['id'])
@decorators.idempotent_id('f8229e8b-b625-4493-800a-bde86ac611ea')
def test_delete_server_password(self):
diff --git a/tempest/api/compute/servers/test_server_personality.py b/tempest/api/compute/servers/test_server_personality.py
index 8a05e7a..a4b2ad3 100644
--- a/tempest/api/compute/servers/test_server_personality.py
+++ b/tempest/api/compute/servers/test_server_personality.py
@@ -45,7 +45,10 @@
@classmethod
def setup_clients(cls):
super(ServerPersonalityTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
+ if CONF.enforce_scope.nova:
+ cls.reader_limits_client = cls.os_project_reader.limits_client
+ else:
+ cls.reader_limits_client = cls.limits_client
# NOTE(mriedem): Marked as slow because personality (file injection) is
# deprecated in nova so we don't care as much about running this all the
@@ -70,14 +73,15 @@
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.servers_client.delete_server,
created_server['id'])
- server = self.client.show_server(created_server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ created_server['id'])['server']
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
self.get_server_ip(server, validation_resources),
self.ssh_user, password,
validation_resources['keypair']['private_key'],
server=server,
- servers_client=self.client)
+ servers_client=self.servers_client)
self.assertEqual(file_contents,
linux_client.exec_command(
'sudo cat %s' % file_path))
@@ -102,10 +106,10 @@
file_contents = 'Test server rebuild.'
personality = [{'path': 'rebuild.txt',
'contents': base64.encode_as_text(file_contents)}]
- rebuilt_server = self.client.rebuild_server(server_id,
- self.image_ref_alt,
- personality=personality)
- waiters.wait_for_server_status(self.client, server_id, 'ACTIVE')
+ rebuilt_server = self.servers_client.rebuild_server(
+ server_id, self.image_ref_alt, personality=personality)
+ waiters.wait_for_server_status(self.servers_client, server_id,
+ 'ACTIVE')
self.assertEqual(self.image_ref_alt,
rebuilt_server['server']['image']['id'])
@@ -118,7 +122,7 @@
"""
file_contents = 'This is a test file.'
personality = []
- limits = self.limits_client.show_limits()['limits']
+ limits = self.reader_limits_client.show_limits()['limits']
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
@@ -144,7 +148,7 @@
files is injected into the server during creation.
"""
file_contents = 'This is a test file.'
- limits = self.limits_client.show_limits()['limits']
+ limits = self.reader_limits_client.show_limits()['limits']
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
@@ -168,14 +172,15 @@
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.servers_client.delete_server,
created_server['id'])
- server = self.client.show_server(created_server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ created_server['id'])['server']
if CONF.validation.run_validation:
linux_client = remote_client.RemoteClient(
self.get_server_ip(server, validation_resources),
self.ssh_user, password,
validation_resources['keypair']['private_key'],
server=server,
- servers_client=self.client)
+ servers_client=self.servers_client)
for i in person:
self.assertEqual(base64.decode_as_text(i['contents']),
linux_client.exec_command(
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index d6c0324..d9ecadc 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -245,7 +245,8 @@
server, rescue_image_id = self._create_server_and_rescue_image(
hw_rescue_device='disk', hw_rescue_bus='virtio', validatable=True,
validation_resources=validation_resources, wait_until="SSHABLE")
- server = self.servers_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(
+ server['id'])['server']
waiters.wait_for_volume_resource_status(self.volumes_client,
volume['id'], 'available')
self.attach_volume(server, volume)
@@ -282,6 +283,7 @@
"""
block_device_mapping_v2 = [{
"boot_index": "0",
+ "delete_on_termination": "true",
"source_type": "blank",
"volume_size": CONF.volume.volume_size,
"destination_type": "volume"}]
@@ -300,6 +302,7 @@
"""
block_device_mapping_v2 = [{
"boot_index": "0",
+ "delete_on_termination": "true",
"source_type": "image",
"volume_size": CONF.volume.volume_size,
"uuid": CONF.compute.image_ref,
diff --git a/tempest/api/compute/servers/test_server_tags.py b/tempest/api/compute/servers/test_server_tags.py
index 0b5870a..5eb5ea5 100644
--- a/tempest/api/compute/servers/test_server_tags.py
+++ b/tempest/api/compute/servers/test_server_tags.py
@@ -32,7 +32,6 @@
@classmethod
def setup_clients(cls):
super(ServerTagsTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -43,14 +42,15 @@
if not isinstance(tags, (list, tuple)):
tags = [tags]
for tag in tags:
- self.client.update_tag(server_id, tag)
- self.addCleanup(self.client.delete_all_tags, server_id)
+ self.servers_client.update_tag(server_id, tag)
+ self.addCleanup(self.servers_client.delete_all_tags, server_id)
@decorators.idempotent_id('8d95abe2-c658-4c42-9a44-c0258500306b')
def test_create_delete_tag(self):
"""Test creating and deleting server tag"""
# Check that no tags exist.
- fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ fetched_tags = self.reader_servers_client.list_tags(
+ self.server['id'])['tags']
self.assertEmpty(fetched_tags)
# Add server tag to the server.
@@ -59,12 +59,14 @@
self._update_server_tags(self.server['id'], assigned_tag)
# Check that added tag exists.
- fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ fetched_tags = self.reader_servers_client.list_tags(
+ self.server['id'])['tags']
self.assertEqual([assigned_tag], fetched_tags)
# Remove assigned tag from server and check that it was removed.
- self.client.delete_tag(self.server['id'], assigned_tag)
- fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ self.servers_client.delete_tag(self.server['id'], assigned_tag)
+ fetched_tags = self.reader_servers_client.list_tags(
+ self.server['id'])['tags']
self.assertEmpty(fetched_tags)
@decorators.idempotent_id('a2c1af8c-127d-417d-974b-8115f7e3d831')
@@ -81,12 +83,13 @@
# Replace tags with new tags and check that they are present.
new_tags = [data_utils.rand_name(**kwargs),
data_utils.rand_name(**kwargs)]
- replaced_tags = self.client.update_all_tags(
+ replaced_tags = self.servers_client.update_all_tags(
self.server['id'], new_tags)['tags']
self.assertCountEqual(new_tags, replaced_tags)
# List the tags and check that the tags were replaced.
- fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ fetched_tags = self.reader_servers_client.list_tags(
+ self.server['id'])['tags']
self.assertCountEqual(new_tags, fetched_tags)
@decorators.idempotent_id('a63b2a74-e918-4b7c-bcab-10c855f3a57e')
@@ -102,8 +105,9 @@
self._update_server_tags(self.server['id'], assigned_tags)
# Delete tags from the server and check that they were deleted.
- self.client.delete_all_tags(self.server['id'])
- fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ self.servers_client.delete_all_tags(self.server['id'])
+ fetched_tags = self.reader_servers_client.list_tags(
+ self.server['id'])['tags']
self.assertEmpty(fetched_tags)
@decorators.idempotent_id('81279a66-61c3-4759-b830-a2dbe64cbe08')
@@ -116,4 +120,5 @@
# Check that added tag exists. Throws a 404 if not found, else a 204,
# which was already checked by the schema validation.
- self.client.check_tag_existence(self.server['id'], assigned_tag)
+ self.servers_client.check_tag_existence(self.server['id'],
+ assigned_tag)
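The tag tests above walk a fixed lifecycle: list (initially empty), add via `update_tag`, verify with `check_tag_existence` (404 if missing, 204 if present), then `delete_all_tags`. A toy in-memory client tracing the same sequence (class and exception are illustrative stand-ins for tempest's servers client and its NotFound):

```python
class FakeTagsClient:
    """In-memory stand-in for the server-tags API calls used above."""

    def __init__(self):
        self._tags = {}

    def update_tag(self, server_id, tag):
        self._tags.setdefault(server_id, set()).add(tag)

    def list_tags(self, server_id):
        return {'tags': sorted(self._tags.get(server_id, set()))}

    def check_tag_existence(self, server_id, tag):
        if tag not in self._tags.get(server_id, set()):
            raise LookupError(tag)  # real client raises a 404 NotFound

    def delete_all_tags(self, server_id):
        self._tags.pop(server_id, None)


client = FakeTagsClient()
assert client.list_tags('srv-1')['tags'] == []
client.update_tag('srv-1', 'db')
client.check_tag_existence('srv-1', 'db')
client.delete_all_tags('srv-1')
print(client.list_tags('srv-1')['tags'])  # []
```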
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index ea3a710..b529b6b 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -28,16 +28,9 @@
"""Test servers API"""
create_default_network = True
- credentials = ['primary', 'project_reader']
-
@classmethod
def setup_clients(cls):
super(ServersTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
- if CONF.enforce_scope.nova:
- cls.reader_client = cls.os_project_reader.servers_client
- else:
- cls.reader_client = cls.client
@decorators.idempotent_id('b92d5ec7-b1dd-44a2-87e4-45e888c46ef0')
@testtools.skipUnless(CONF.compute_feature_enabled.
@@ -71,9 +64,9 @@
id2 = server['id']
self.addCleanup(self.delete_server, id2)
self.assertNotEqual(id1, id2, "Did not create a new server")
- server = self.reader_client.show_server(id1)['server']
+ server = self.reader_servers_client.show_server(id1)['server']
name1 = server['name']
- server = self.reader_client.show_server(id2)['server']
+ server = self.reader_servers_client.show_server(id2)['server']
name2 = server['name']
self.assertEqual(name1, name2)
@@ -88,7 +81,7 @@
server = self.create_test_server(key_name=key_name,
wait_until='ACTIVE')
self.addCleanup(self.delete_server, server['id'])
- server = self.reader_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual(key_name, server['key_name'])
def _update_server_name(self, server_id, status, prefix_name='server'):
@@ -97,12 +90,12 @@
prefix=CONF.resource_name_prefix, name=prefix_name)
# Update the server with a new name
- self.client.update_server(server_id,
- name=new_name)
- waiters.wait_for_server_status(self.client, server_id, status)
+ self.servers_client.update_server(server_id,
+ name=new_name)
+ waiters.wait_for_server_status(self.servers_client, server_id, status)
# Verify the name of the server has changed
- server = self.reader_client.show_server(server_id)['server']
+ server = self.reader_servers_client.show_server(server_id)['server']
self.assertEqual(new_name, server['name'])
return server
@@ -116,8 +109,9 @@
self._update_server_name(server['id'], 'ACTIVE', prefix_name)
# stop server and check server name update again
- self.client.stop_server(server['id'])
- waiters.wait_for_server_status(self.client, server['id'], 'SHUTOFF')
+ self.servers_client.stop_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'SHUTOFF')
# Update instance name with non-ASCII characters
updated_server = self._update_server_name(server['id'],
'SHUTOFF',
@@ -131,13 +125,14 @@
self.addCleanup(self.delete_server, server['id'])
# Update the IPv4 and IPv6 access addresses
- self.client.update_server(server['id'],
- accessIPv4='1.1.1.1',
- accessIPv6='::babe:202:202')
- waiters.wait_for_server_status(self.client, server['id'], 'ACTIVE')
+ self.servers_client.update_server(server['id'],
+ accessIPv4='1.1.1.1',
+ accessIPv6='::babe:202:202')
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'ACTIVE')
# Verify the access addresses have been updated
- server = self.reader_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('1.1.1.1', server['accessIPv4'])
self.assertEqual('::babe:202:202', server['accessIPv6'])
@@ -147,7 +142,7 @@
server = self.create_test_server(accessIPv6='2001:2001::3',
wait_until='ACTIVE')
self.addCleanup(self.delete_server, server['id'])
- server = self.reader_client.show_server(server['id'])['server']
+ server = self.reader_servers_client.show_server(server['id'])['server']
self.assertEqual('2001:2001::3', server['accessIPv6'])
@decorators.related_bug('1730756')
@@ -180,22 +175,16 @@
# also. 2.47 APIs schema are on top of 2.9->2.19->2.26 schema so
# below tests cover all of the schema.
- credentials = ['primary', 'project_reader']
-
@classmethod
def setup_clients(cls):
super(ServerShowV247Test, cls).setup_clients()
- if CONF.enforce_scope.nova:
- cls.reader_client = cls.os_project_reader.servers_client
- else:
- cls.reader_client = cls.servers_client
@decorators.idempotent_id('88b0bdb2-494c-11e7-a919-92ebcb67fe33')
def test_show_server(self):
"""Test getting server detail"""
server = self.create_test_server()
# All fields will be checked by API schema
- self.reader_client.show_server(server['id'])
+ self.reader_servers_client.show_server(server['id'])
@decorators.idempotent_id('8de397c2-57d0-4b90-aa30-e5d668f21a8b')
def test_update_rebuild_list_server(self):
@@ -210,7 +199,7 @@
waiters.wait_for_server_status(self.servers_client,
server['id'], 'ACTIVE')
# Checking list details API response schema
- self.servers_client.list_servers(detail=True)
+ self.reader_servers_client.list_servers(detail=True)
class ServerShowV263Test(base.BaseV2ComputeTest):
@@ -219,15 +208,9 @@
min_microversion = '2.63'
max_microversion = 'latest'
- credentials = ['primary', 'project_reader']
-
@classmethod
def setup_clients(cls):
super(ServerShowV263Test, cls).setup_clients()
- if CONF.enforce_scope.nova:
- cls.reader_client = cls.os_project_reader.servers_client
- else:
- cls.reader_client = cls.servers_client
@testtools.skipUnless(CONF.compute.certified_image_ref,
'``[compute]/certified_image_ref`` required to test '
@@ -245,7 +228,7 @@
wait_until='ACTIVE')
# Check show API response schema
- self.reader_client.show_server(server['id'])['server']
+ self.reader_servers_client.show_server(server['id'])['server']
# Check update API response schema
self.servers_client.update_server(server['id'])
@@ -260,7 +243,7 @@
# Check list details API response schema
params = {'trusted_image_certificates': trusted_certs}
- servers = self.servers_client.list_servers(
+ servers = self.reader_servers_client.list_servers(
detail=True, **params)['servers']
self.assertNotEmpty(servers)
@@ -275,13 +258,17 @@
min_microversion = '2.96'
max_microversion = 'latest'
@decorators.idempotent_id('4eee1ffe-9e00-4c99-a431-0d3e0f323a8f')
def test_list_show_update_rebuild_server_296(self):
server = self.create_test_server(wait_until='ACTIVE')
# Checking list API response schema.
- self.servers_client.list_servers(detail=True)
+ self.reader_servers_client.list_servers(detail=True)
# Checking show API response schema
- self.servers_client.show_server(server['id'])
+ self.reader_servers_client.show_server(server['id'])
# Checking update API response schema
self.servers_client.update_server(server['id'])
# Check rebuild API response schema
@@ -296,13 +283,17 @@
min_microversion = '2.98'
max_microversion = 'latest'
@decorators.idempotent_id('3981e496-3bf7-4015-b807-63ffee7c520c')
def test_list_show_update_rebuild_server_298(self):
server = self.create_test_server(wait_until='ACTIVE')
# Check list details API response schema
- self.servers_client.list_servers(detail=True)
+ self.reader_servers_client.list_servers(detail=True)
# Check show API response schema
- self.servers_client.show_server(server['id'])
+ self.reader_servers_client.show_server(server['id'])
# Checking update API response schema
self.servers_client.update_server(server['id'])
# Check rebuild API response schema
@@ -321,13 +312,17 @@
min_microversion = '2.100'
max_microversion = 'latest'
@decorators.idempotent_id('2c3a8270-e6f7-4400-af0f-db003c117e48')
def test_list_show_rebuild_update_server_2100(self):
server = self.create_test_server(wait_until='ACTIVE')
# Checking list API response schema.
- self.servers_client.list_servers(detail=True)
+ self.reader_servers_client.list_servers(detail=True)
# Checking show API response schema
- self.servers_client.show_server(server['id'])
+ self.reader_servers_client.show_server(server['id'])
# Checking update API response schema
self.servers_client.update_server(server['id'])
waiters.wait_for_server_status(self.servers_client,
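The per-class `credentials` lists and `setup_clients` scope checks removed in the hunks above all duplicated the same logic, which this change presumably consolidates in a shared base class. A minimal runnable sketch of that selection logic, assuming names for illustration (this is not the actual Tempest base-class code):

```python
# Hypothetical sketch of the reader-client selection implied by the
# removed per-class blocks: when scope enforcement is on for Nova,
# read-only calls (show/list) go through a project_reader-scoped
# client; otherwise the primary client is reused. All names below are
# illustrative stand-ins, not real Tempest objects.

class Conf:
    class enforce_scope:
        nova = True  # mirrors CONF.enforce_scope.nova

def select_reader_client(conf, primary_client, project_reader_client):
    """Return the client to use for read-only server API calls."""
    if conf.enforce_scope.nova:
        return project_reader_client
    return primary_client

primary = object()
reader = object()
assert select_reader_client(Conf, primary, reader) is reader
Conf.enforce_scope.nova = False
assert select_reader_client(Conf, primary, reader) is primary
```

With the selection in one place, every test class can simply use `self.reader_servers_client`, which is what the mechanical renames in this diff rely on.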
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index fa40629..582819a 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -37,7 +37,7 @@
def setUp(self):
super(ServersNegativeTestJSON, self).setUp()
try:
- waiters.wait_for_server_status(self.client, self.server_id,
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
'ACTIVE')
except Exception:
self.__class__.server_id = self.recreate_server(self.server_id)
@@ -52,7 +52,6 @@
@classmethod
def setup_clients(cls):
super(ServersNegativeTestJSON, cls).setup_clients()
- cls.client = cls.servers_client
@classmethod
def resource_setup(cls):
@@ -62,8 +61,8 @@
# Wait until the instance is active to avoid the delete racing
server = cls.create_test_server(wait_until='ACTIVE')
- cls.client.delete_server(server['id'])
- waiters.wait_for_server_termination(cls.client, server['id'])
+ cls.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(cls.servers_client, server['id'])
cls.deleted_server_id = server['id']
@decorators.attr(type=['negative'])
@@ -135,7 +134,7 @@
"""Resizing a non-existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.resize_server,
+ self.servers_client.resize_server,
nonexistent_server, self.flavor_ref)
@decorators.idempotent_id('ced1a1d7-2ab6-45c9-b90f-b27d87b30efd')
@@ -145,7 +144,8 @@
def test_resize_server_with_non_existent_flavor(self):
"""Resizing a server with non existent flavor should fail"""
nonexistent_flavor = data_utils.rand_uuid()
- self.assertRaises(lib_exc.BadRequest, self.client.resize_server,
+ self.assertRaises(lib_exc.BadRequest,
+ self.servers_client.resize_server,
self.server_id, flavor_ref=nonexistent_flavor)
@decorators.idempotent_id('45436a7d-a388-4a35-a9d8-3adc5d0d940b')
@@ -154,7 +154,8 @@
@decorators.attr(type=['negative'])
def test_resize_server_with_null_flavor(self):
"""Resizing a server with null flavor should fail"""
- self.assertRaises(lib_exc.BadRequest, self.client.resize_server,
+ self.assertRaises(lib_exc.BadRequest,
+ self.servers_client.resize_server,
self.server_id, flavor_ref="")
@decorators.attr(type=['negative'])
@@ -162,7 +163,7 @@
def test_reboot_non_existent_server(self):
"""Rebooting a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.reboot_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.reboot_server,
nonexistent_server, type='SOFT')
@decorators.idempotent_id('d1417e7f-a509-41b5-a102-d5eed8613369')
@@ -171,19 +172,20 @@
@decorators.attr(type=['negative'])
def test_pause_paused_server(self):
"""Pausing a paused server should fail"""
- self.client.pause_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id, 'PAUSED')
+ self.servers_client.pause_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
+ 'PAUSED')
self.assertRaises(lib_exc.Conflict,
- self.client.pause_server,
+ self.servers_client.pause_server,
self.server_id)
- self.client.unpause_server(self.server_id)
+ self.servers_client.unpause_server(self.server_id)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('98fa0458-1485-440f-873b-fe7f0d714930')
def test_rebuild_deleted_server(self):
"""Rebuilding a deleted server should fail"""
self.assertRaises(lib_exc.NotFound,
- self.client.rebuild_server,
+ self.servers_client.rebuild_server,
self.deleted_server_id, self.image_ref)
@decorators.related_bug('1660878', status_code=409)
@@ -191,7 +193,7 @@
@decorators.idempotent_id('581a397d-5eab-486f-9cf9-1014bbd4c984')
def test_reboot_deleted_server(self):
"""Rebooting a deleted server should fail"""
- self.assertRaises(lib_exc.NotFound, self.client.reboot_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.reboot_server,
self.deleted_server_id, type='SOFT')
@decorators.attr(type=['negative'])
@@ -200,7 +202,7 @@
"""Rebuilding a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.rebuild_server,
+ self.servers_client.rebuild_server,
nonexistent_server,
self.image_ref)
@@ -292,7 +294,7 @@
prefix=CONF.resource_name_prefix,
name=self.__class__.__name__ + '-server') + '_updated'
- self.assertRaises(lib_exc.NotFound, self.client.update_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.update_server,
nonexistent_server, name=new_name)
@decorators.attr(type=['negative'])
@@ -300,7 +302,8 @@
def test_update_server_set_empty_name(self):
"""Updating name of the server to an empty string should fail"""
new_name = ''
- self.assertRaises(lib_exc.BadRequest, self.client.update_server,
+ self.assertRaises(lib_exc.BadRequest,
+ self.servers_client.update_server,
self.server_id, name=new_name)
@decorators.attr(type=['negative'])
@@ -313,7 +316,7 @@
"""
new_name = 'a' * 256
self.assertRaises(lib_exc.BadRequest,
- self.client.update_server,
+ self.servers_client.update_server,
self.server_id,
name=new_name)
@@ -322,14 +325,15 @@
def test_delete_non_existent_server(self):
"""Deleting a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.delete_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.delete_server,
nonexistent_server)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('75f79124-277c-45e6-a373-a1d6803f4cc4')
def test_delete_server_pass_negative_id(self):
"""Passing an invalid string parameter to delete server should fail"""
- self.assertRaises(lib_exc.NotFound, self.client.delete_server, -1)
+ self.assertRaises(lib_exc.NotFound,
+ self.servers_client.delete_server, -1)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('f4d7279b-5fd2-4bf2-9ba4-ae35df0d18c5')
@@ -339,7 +343,7 @@
Pass a server ID that exceeds length limit to delete server, an error
is returned.
"""
- self.assertRaises(lib_exc.NotFound, self.client.delete_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.delete_server,
sys.maxsize + 1)
@decorators.attr(type=['negative'])
@@ -356,7 +360,8 @@
def test_get_non_existent_server(self):
"""Getting a non existent server details should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.show_server,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_servers_client.show_server,
nonexistent_server)
@decorators.attr(type=['negative'])
@@ -374,7 +379,7 @@
def test_pause_non_existent_server(self):
"""Pausing a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.pause_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.pause_server,
nonexistent_server)
@decorators.idempotent_id('705b8e3a-e8a7-477c-a19b-6868fc24ac75')
@@ -384,7 +389,7 @@
def test_unpause_non_existent_server(self):
"""Unpausing a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.unpause_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.unpause_server,
nonexistent_server)
@decorators.idempotent_id('c8e639a7-ece8-42dd-a2e0-49615917ba4f')
@@ -394,7 +399,7 @@
def test_unpause_server_invalid_state(self):
"""Unpausing an active server should fail"""
self.assertRaises(lib_exc.Conflict,
- self.client.unpause_server,
+ self.servers_client.unpause_server,
self.server_id)
@decorators.idempotent_id('d1f032d5-7b6e-48aa-b252-d5f16dd994ca')
@@ -404,7 +409,7 @@
def test_suspend_non_existent_server(self):
"""Suspending a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.suspend_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.suspend_server,
nonexistent_server)
@decorators.idempotent_id('7f323206-05a9-4bf8-996b-dd5b2036501b')
@@ -413,13 +418,13 @@
@decorators.attr(type=['negative'])
def test_suspend_server_invalid_state(self):
"""Suspending a suspended server should fail"""
- self.client.suspend_server(self.server_id)
- waiters.wait_for_server_status(self.client, self.server_id,
+ self.servers_client.suspend_server(self.server_id)
+ waiters.wait_for_server_status(self.servers_client, self.server_id,
'SUSPENDED')
self.assertRaises(lib_exc.Conflict,
- self.client.suspend_server,
+ self.servers_client.suspend_server,
self.server_id)
- self.client.resume_server(self.server_id)
+ self.servers_client.resume_server(self.server_id)
@decorators.idempotent_id('221cd282-bddb-4837-a683-89c2487389b6')
@testtools.skipUnless(CONF.compute_feature_enabled.suspend,
@@ -428,7 +433,7 @@
def test_resume_non_existent_server(self):
"""Resuming a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.resume_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.resume_server,
nonexistent_server)
@decorators.idempotent_id('ccb6294d-c4c9-498f-8a43-554c098bfadb')
@@ -438,7 +443,7 @@
def test_resume_server_invalid_state(self):
"""Resuming an active server should fail"""
self.assertRaises(lib_exc.Conflict,
- self.client.resume_server,
+ self.servers_client.resume_server,
self.server_id)
@decorators.attr(type=['negative'])
@@ -447,7 +452,7 @@
"""Getting the console output for a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.get_console_output,
+ self.servers_client.get_console_output,
nonexistent_server, length=10)
@decorators.attr(type=['negative'])
@@ -456,7 +461,7 @@
"""Force-deleting a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.force_delete_server,
+ self.servers_client.force_delete_server,
nonexistent_server)
@decorators.attr(type=['negative'])
@@ -469,7 +474,7 @@
"""
nonexistent_server = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.client.restore_soft_deleted_server,
+ self.servers_client.restore_soft_deleted_server,
nonexistent_server)
@decorators.idempotent_id('abca56e2-a892-48ea-b5e5-e07e69774816')
@@ -479,7 +484,7 @@
def test_shelve_non_existent_server(self):
"""Shelving a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.shelve_server,
+ self.assertRaises(lib_exc.NotFound, self.servers_client.shelve_server,
nonexistent_server)
@decorators.idempotent_id('443e4f9b-e6bf-4389-b601-3a710f15fddd')
@@ -488,15 +493,17 @@
@decorators.attr(type=['negative'])
def test_shelve_shelved_server(self):
"""Shelving a shelved server should fail"""
- compute.shelve_server(self.client, self.server_id)
+ compute.shelve_server(self.servers_client, self.server_id)
def _unshelve_server():
- server_info = self.client.show_server(self.server_id)['server']
+ server_info = self.reader_servers_client.show_server(
+ self.server_id)['server']
if 'SHELVED' in server_info['status']:
- self.client.unshelve_server(self.server_id)
+ self.servers_client.unshelve_server(self.server_id)
self.addCleanup(_unshelve_server)
- server = self.client.show_server(self.server_id)['server']
+ server = self.reader_servers_client.show_server(
+ self.server_id)['server']
image_name = server['name'] + '-shelved'
kwargs = {'params': {'name': image_name}}
images = self.images_client.list_images(**kwargs)['images']
@@ -504,10 +511,10 @@
self.assertEqual(image_name, images[0]['name'])
self.assertRaises(lib_exc.Conflict,
- self.client.shelve_server,
+ self.servers_client.shelve_server,
self.server_id)
- self.client.unshelve_server(self.server_id)
+ self.servers_client.unshelve_server(self.server_id)
@decorators.idempotent_id('23d23b37-afaf-40d7-aa5d-5726f82d8821')
@testtools.skipUnless(CONF.compute_feature_enabled.shelve,
@@ -516,7 +523,8 @@
def test_unshelve_non_existent_server(self):
"""Unshelving a non existent server should fail"""
nonexistent_server = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.client.unshelve_server,
+ self.assertRaises(lib_exc.NotFound,
+ self.servers_client.unshelve_server,
nonexistent_server)
@decorators.idempotent_id('8f198ded-1cca-4228-9e65-c6b449c54880')
@@ -526,7 +534,7 @@
def test_unshelve_server_invalid_state(self):
"""Unshelving an active server should fail"""
self.assertRaises(lib_exc.Conflict,
- self.client.unshelve_server,
+ self.servers_client.unshelve_server,
self.server_id)
@decorators.attr(type=['negative'])
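Nearly every negative test in the file above follows the same idiom: generate a random UUID, call the API via `assertRaises`, and expect `NotFound` (or `Conflict`/`BadRequest` for bad state or input). A self-contained sketch of that idiom, with a fake client standing in for the real `servers_client` so it runs without OpenStack:

```python
# Self-contained sketch of the negative-test idiom used throughout
# test_servers_negative.py: call the API with a random UUID and expect
# NotFound. FakeServersClient and assert_raises are stand-ins for the
# real tempest client and testtools helper.
import uuid

class NotFound(Exception):
    """Mimics tempest.lib.exceptions.NotFound for this sketch."""

class FakeServersClient:
    def __init__(self):
        self._servers = {}

    def delete_server(self, server_id):
        # Deleting an unknown server raises, like the real API's 404.
        if server_id not in self._servers:
            raise NotFound(server_id)
        del self._servers[server_id]

def assert_raises(exc_type, func, *args):
    """Tiny stand-in for testtools' assertRaises."""
    try:
        func(*args)
    except exc_type:
        return True
    raise AssertionError("%s not raised" % exc_type.__name__)

client = FakeServersClient()
nonexistent_server = str(uuid.uuid4())
assert assert_raises(NotFound, client.delete_server, nonexistent_server)
```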
diff --git a/tempest/api/identity/admin/v3/test_domains.py b/tempest/api/identity/admin/v3/test_domains.py
index 80c4d1c..7291a0b 100644
--- a/tempest/api/identity/admin/v3/test_domains.py
+++ b/tempest/api/identity/admin/v3/test_domains.py
@@ -26,6 +26,27 @@
class DomainsTestJSON(base.BaseIdentityV3AdminTest):
"""Test identity domains"""
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(DomainsTestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing domains
+ cls.reader_domains_client = (
+ cls.os_system_reader.domains_client)
+ # Use system reader for showing users
+ cls.reader_users_client = (
+ cls.os_system_reader.users_v3_client)
+ # Use system reader for showing groups
+ cls.reader_groups_client = (
+ cls.os_system_reader.groups_client)
+ else:
+ # Use admin client by default
+ cls.reader_domains_client = cls.domains_client
+ cls.reader_users_client = cls.users_client
+ cls.reader_groups_client = cls.groups_client
+
@classmethod
def resource_setup(cls):
super(DomainsTestJSON, cls).resource_setup()
@@ -41,7 +62,7 @@
"""Test listing domains"""
fetched_ids = list()
# List and Verify Domains
- body = self.domains_client.list_domains()['domains']
+ body = self.reader_domains_client.list_domains()['domains']
for d in body:
fetched_ids.append(d['id'])
missing_doms = [d for d in self.setup_domains
@@ -52,7 +73,7 @@
def test_list_domains_filter_by_name(self):
"""Test listing domains filtering by name"""
params = {'name': self.setup_domains[0]['name']}
- fetched_domains = self.domains_client.list_domains(
+ fetched_domains = self.reader_domains_client.list_domains(
**params)['domains']
# Verify the filtered list is correct, domain names are unique
# so exactly one domain should be found with the provided name
@@ -64,7 +85,7 @@
def test_list_domains_filter_by_enabled(self):
"""Test listing domains filtering by enabled domains"""
params = {'enabled': True}
- fetched_domains = self.domains_client.list_domains(
+ fetched_domains = self.reader_domains_client.list_domains(
**params)['domains']
# Verify the filtered list is correct
self.assertIn(self.setup_domains[0], fetched_domains)
@@ -108,14 +129,14 @@
self.assertEqual(new_desc, updated_domain['description'])
self.assertEqual(False, updated_domain['enabled'])
# Show domain
- fetched_domain = self.domains_client.show_domain(
+ fetched_domain = self.reader_domains_client.show_domain(
domain['id'])['domain']
self.assertEqual(new_name, fetched_domain['name'])
self.assertEqual(new_desc, fetched_domain['description'])
self.assertEqual(False, fetched_domain['enabled'])
# Delete domain
self.domains_client.delete_domain(domain['id'])
- body = self.domains_client.list_domains()['domains']
+ body = self.reader_domains_client.list_domains()['domains']
domains_list = [d['id'] for d in body]
self.assertNotIn(domain['id'], domains_list)
@@ -130,11 +151,11 @@
self.delete_domain(domain['id'])
# Check the domain, its users and groups are gone
self.assertRaises(exceptions.NotFound,
- self.domains_client.show_domain, domain['id'])
+ self.reader_domains_client.show_domain, domain['id'])
self.assertRaises(exceptions.NotFound,
- self.users_client.show_user, user['id'])
+ self.reader_users_client.show_user, user['id'])
self.assertRaises(exceptions.NotFound,
- self.groups_client.show_group, group['id'])
+ self.reader_groups_client.show_group, group['id'])
@decorators.idempotent_id('036df86e-bb5d-42c0-a7c2-66b9db3a6046')
def test_create_domain_with_disabled_status(self):
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index f9f3e72..defdcc7 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -30,10 +30,21 @@
# pre-provisioned credentials provider.
force_tenant_isolation = False
+ credentials = ['primary', 'admin', 'system_reader']
+
@classmethod
def setup_clients(cls):
super(EndPointsTestJSON, cls).setup_clients()
cls.client = cls.endpoints_client
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing endpoints
+ cls.reader_client = cls.os_system_reader.endpoints_v3_client
+ # Use system reader for showing regions
+ cls.reader_regions_client = cls.os_system_reader.regions_client
+ else:
+ # Use admin client by default
+ cls.reader_client = cls.client
+ cls.reader_regions_client = cls.regions_client
@classmethod
def resource_setup(cls):
@@ -55,7 +66,8 @@
endpoint = cls.client.create_endpoint(
service_id=cls.service_ids[i], interface=interfaces[i],
url=url, region=region_name, enabled=True)['endpoint']
- region = cls.regions_client.show_region(region_name)['region']
+ region = cls.reader_regions_client.show_region(region_name)[
+ 'region']
cls.addClassResourceCleanup(
cls.regions_client.delete_region, region['id'])
cls.addClassResourceCleanup(
@@ -81,7 +93,7 @@
def test_list_endpoints(self):
"""Test listing keystone endpoints by filters"""
# Get the list of all the endpoints.
- fetched_endpoints = self.client.list_endpoints()['endpoints']
+ fetched_endpoints = self.reader_client.list_endpoints()['endpoints']
fetched_endpoint_ids = [e['id'] for e in fetched_endpoints]
# Check that all the created endpoints are present in
# "fetched_endpoints".
@@ -93,9 +105,9 @@
', '.join(str(e) for e in missing_endpoints))
# Check that filtering endpoints by service_id works.
- fetched_endpoints_for_service = self.client.list_endpoints(
+ fetched_endpoints_for_service = self.reader_client.list_endpoints(
service_id=self.service_ids[0])['endpoints']
- fetched_endpoints_for_alt_service = self.client.list_endpoints(
+ fetched_endpoints_for_alt_service = self.reader_client.list_endpoints(
service_id=self.service_ids[1])['endpoints']
# Assert that both filters returned the correct result.
@@ -106,9 +118,9 @@
fetched_endpoints_for_alt_service[0]['id']]))
# Check that filtering endpoints by interface works.
- fetched_public_endpoints = self.client.list_endpoints(
+ fetched_public_endpoints = self.reader_client.list_endpoints(
interface='public')['endpoints']
- fetched_internal_endpoints = self.client.list_endpoints(
+ fetched_internal_endpoints = self.reader_client.list_endpoints(
interface='internal')['endpoints']
# Check that the expected endpoint_id is present per filter. [0] is
@@ -129,7 +141,7 @@
interface=interface,
url=url, region=region_name,
enabled=True)['endpoint']
- region = self.regions_client.show_region(region_name)['region']
+ region = self.reader_regions_client.show_region(region_name)['region']
self.addCleanup(self.regions_client.delete_region, region['id'])
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.client.delete_endpoint, endpoint['id'])
@@ -138,13 +150,13 @@
self.assertEqual(url, endpoint['url'])
# Checking if created endpoint is present in the list of endpoints
- fetched_endpoints = self.client.list_endpoints()['endpoints']
+ fetched_endpoints = self.reader_client.list_endpoints()['endpoints']
fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
self.assertIn(endpoint['id'], fetched_endpoints_id)
# Show endpoint
fetched_endpoint = (
- self.client.show_endpoint(endpoint['id'])['endpoint'])
+ self.reader_client.show_endpoint(endpoint['id'])['endpoint'])
# Asserting if the attributes of endpoint are the same
self.assertEqual(self.service_ids[0], fetched_endpoint['service_id'])
self.assertEqual(interface, fetched_endpoint['interface'])
@@ -156,7 +168,7 @@
self.client.delete_endpoint(endpoint['id'])
# Checking whether endpoint is deleted successfully
- fetched_endpoints = self.client.list_endpoints()['endpoints']
+ fetched_endpoints = self.reader_client.list_endpoints()['endpoints']
fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
self.assertNotIn(endpoint['id'], fetched_endpoints_id)
@@ -187,7 +199,8 @@
interface=interface1,
url=url1, region=region1_name,
enabled=True)['endpoint'])
- region1 = self.regions_client.show_region(region1_name)['region']
+ region1 = self.reader_regions_client.show_region(region1_name)[
+ 'region']
self.addCleanup(self.regions_client.delete_region, region1['id'])
# Updating endpoint with new values
@@ -199,7 +212,8 @@
interface=interface2,
url=url2, region=region2_name,
enabled=False)['endpoint']
- region2 = self.regions_client.show_region(region2_name)['region']
+ region2 = self.reader_regions_client.show_region(region2_name)[
+ 'region']
self.addCleanup(self.regions_client.delete_region, region2['id'])
self.addCleanup(self.client.delete_endpoint, endpoint_for_update['id'])
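The cleanups registered in the endpoint tests above run in LIFO order, so an endpoint registered last is deleted before the regions it references. A minimal stand-in demonstrating that ordering guarantee (this is not testtools itself, just a sketch of the behavior the tests rely on):

```python
# Minimal stand-in showing the LIFO ordering that addCleanup provides:
# cleanups registered later run first, so a dependent resource (the
# endpoint) is removed before the regions it points at.
calls = []

class CleanupMixin:
    def __init__(self):
        self._cleanups = []

    def addCleanup(self, func, *args):
        self._cleanups.append((func, args))

    def run_cleanups(self):
        while self._cleanups:
            func, args = self._cleanups.pop()  # last-registered first
            func(*args)

case = CleanupMixin()
case.addCleanup(calls.append, "delete_region")
case.addCleanup(calls.append, "delete_endpoint")
case.run_cleanups()
assert calls == ["delete_endpoint", "delete_region"]
```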
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 96218bb..f704f02 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -30,6 +30,23 @@
# pre-provisioned credentials provider.
force_tenant_isolation = False
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(GroupsV3TestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing groups
+ cls.reader_groups_client = (
+ cls.os_system_reader.groups_client)
+ # Use system reader for listing user groups
+ cls.reader_users_client = (
+ cls.os_system_reader.users_v3_client)
+ else:
+ # Use admin client by default
+ cls.reader_groups_client = cls.groups_client
+ cls.reader_users_client = cls.users_client
+
@classmethod
def resource_setup(cls):
super(GroupsV3TestJSON, cls).resource_setup()
@@ -60,7 +77,7 @@
self.assertEqual(updated_group['description'], first_desc_update)
# Verify that the updated values are reflected after performing show.
- new_group = self.groups_client.show_group(group['id'])['group']
+ new_group = self.reader_groups_client.show_group(group['id'])['group']
self.assertEqual(group['id'], new_group['id'])
self.assertEqual(first_name_update, new_group['name'])
self.assertEqual(first_desc_update, new_group['description'])
@@ -94,7 +111,8 @@
self.groups_client.add_group_user(group['id'], user['id'])
# list users in group
- group_users = self.groups_client.list_group_users(group['id'])['users']
+ group_users = self.reader_groups_client.list_group_users(group['id'])[
+ 'users']
self.assertEqual(sorted(users, key=lambda k: k['name']),
sorted(group_users, key=lambda k: k['name']))
# check and delete user in group
@@ -102,7 +120,8 @@
self.groups_client.check_group_user_existence(
group['id'], user['id'])
self.groups_client.delete_group_user(group['id'], user['id'])
- group_users = self.groups_client.list_group_users(group['id'])['users']
+ group_users = self.reader_groups_client.list_group_users(group['id'])[
+ 'users']
self.assertEqual(len(group_users), 0)
@decorators.idempotent_id('64573281-d26a-4a52-b899-503cb0f4e4ec')
@@ -121,7 +140,8 @@
groups.append(group)
self.groups_client.add_group_user(group['id'], user['id'])
# list groups which user belongs to
- user_groups = self.users_client.list_user_groups(user['id'])['groups']
+ user_groups = self.reader_users_client.list_user_groups(user['id'])[
+ 'groups']
# The `membership_expires_at` attribute is present when listing user
# group memberships, and is not an attribute of the groups themselves.
# Therefore we remove it from the comparison.
@@ -146,10 +166,10 @@
# of listing all users and listing all groups are not supported,
# they need a domain filter to be specified
if CONF.identity_feature_enabled.domain_specific_drivers:
- body = self.groups_client.list_groups(
+ body = self.reader_groups_client.list_groups(
domain_id=self.domain['id'])['groups']
else:
- body = self.groups_client.list_groups()['groups']
+ body = self.reader_groups_client.list_groups()['groups']
for g in body:
fetched_ids.append(g['id'])
missing_groups = [g for g in group_ids if g not in fetched_ids]
diff --git a/tempest/api/identity/admin/v3/test_list_projects.py b/tempest/api/identity/admin/v3/test_list_projects.py
index 2135fcc..c758dfa 100644
--- a/tempest/api/identity/admin/v3/test_list_projects.py
+++ b/tempest/api/identity/admin/v3/test_list_projects.py
@@ -26,13 +26,26 @@
class BaseListProjectsTestJSON(base.BaseIdentityV3AdminTest):
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(BaseListProjectsTestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing projects
+ cls.reader_projects_client = (
+ cls.os_system_reader.projects_client)
+ else:
+ # Use admin client by default
+ cls.reader_projects_client = cls.projects_client
+
def _list_projects_with_params(self, included, excluded, params, key):
# Validate that projects in ``included`` belongs to the projects
# returned that match ``params`` but not projects in ``excluded``
- all_projects = self.projects_client.list_projects()['projects']
+ all_projects = self.reader_projects_client.list_projects()['projects']
LOG.debug("Complete list of projects available in keystone: %s",
all_projects)
- body = self.projects_client.list_projects(params)['projects']
+ body = self.reader_projects_client.list_projects(params)['projects']
for p in included:
self.assertIn(p[key], map(lambda x: x[key], body))
for p in excluded:
@@ -75,7 +88,7 @@
def test_list_projects_with_parent(self):
"""Test listing projects with parent"""
params = {'parent_id': self.p3['parent_id']}
- fetched_projects = self.projects_client.list_projects(
+ fetched_projects = self.reader_projects_client.list_projects(
params)['projects']
self.assertNotEmpty(fetched_projects)
for project in fetched_projects:
@@ -111,10 +124,10 @@
@decorators.idempotent_id('1d830662-22ad-427c-8c3e-4ec854b0af44')
def test_list_projects(self):
"""Test listing projects"""
- list_projects = self.projects_client.list_projects()['projects']
+ list_projects = self.reader_projects_client.list_projects()['projects']
for p in [self.p1, self.p2]:
- show_project = self.projects_client.show_project(p['id'])[
+ show_project = self.reader_projects_client.show_project(p['id'])[
'project']
self.assertIn(show_project, list_projects)
diff --git a/tempest/api/identity/admin/v3/test_list_users.py b/tempest/api/identity/admin/v3/test_list_users.py
index 3884989..e8d0ff5 100644
--- a/tempest/api/identity/admin/v3/test_list_users.py
+++ b/tempest/api/identity/admin/v3/test_list_users.py
@@ -24,12 +24,25 @@
class UsersV3TestJSON(base.BaseIdentityV3AdminTest):
"""Test listing keystone users"""
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(UsersV3TestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing users
+ cls.reader_users_client = (
+ cls.os_system_reader.users_v3_client)
+ else:
+ # Use admin client by default
+ cls.reader_users_client = cls.users_client
+
def _list_users_with_params(self, params, key, expected, not_expected):
# Helper method to list users filtered with params and
# assert the response based on expected and not_expected
# expected: user expected in the list response
# not_expected: user, which should not be present in list response
- body = self.users_client.list_users(**params)['users']
+ body = self.reader_users_client.list_users(**params)['users']
self.assertIn(expected[key], map(lambda x: x[key], body))
self.assertNotIn(not_expected[key],
map(lambda x: x[key], body))
@@ -105,13 +118,13 @@
# of listing all users and listing all groups are not supported,
# they need a domain filter to be specified
if CONF.identity_feature_enabled.domain_specific_drivers:
- body_enabled_user = self.users_client.list_users(
+ body_enabled_user = self.reader_users_client.list_users(
domain_id=self.domain_enabled_user['domain_id'])['users']
- body_non_enabled_user = self.users_client.list_users(
+ body_non_enabled_user = self.reader_users_client.list_users(
domain_id=self.non_domain_enabled_user['domain_id'])['users']
body = (body_enabled_user + body_non_enabled_user)
else:
- body = self.users_client.list_users()['users']
+ body = self.reader_users_client.list_users()['users']
fetched_ids = [u['id'] for u in body]
missing_users = [u['id'] for u in self.users
@@ -123,7 +136,7 @@
@decorators.idempotent_id('b4baa3ae-ac00-4b4e-9e27-80deaad7771f')
def test_get_user(self):
"""Get a user detail"""
- user = self.users_client.show_user(self.users[0]['id'])['user']
+ user = self.reader_users_client.show_user(self.users[0]['id'])['user']
self.assertEqual(self.users[0]['id'], user['id'])
self.assertEqual(self.users[0]['name'], user['name'])
self.assertEqual(self.alt_email, user['email'])
diff --git a/tempest/api/identity/admin/v3/test_policies.py b/tempest/api/identity/admin/v3/test_policies.py
index 2d3775a..6bce533 100644
--- a/tempest/api/identity/admin/v3/test_policies.py
+++ b/tempest/api/identity/admin/v3/test_policies.py
@@ -24,6 +24,19 @@
class PoliciesTestJSON(base.BaseIdentityV3AdminTest):
"""Test keystone policies"""
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(PoliciesTestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing policies
+ cls.reader_policies_client = (
+ cls.os_system_reader.policies_client)
+ else:
+ # Use admin client by default
+ cls.reader_policies_client = cls.policies_client
+
def _delete_policy(self, policy_id):
self.policies_client.delete_policy(policy_id)
@@ -43,7 +56,7 @@
self.addCleanup(self._delete_policy, policy['id'])
policy_ids.append(policy['id'])
# List and Verify Policies
- body = self.policies_client.list_policies()['policies']
+ body = self.reader_policies_client.list_policies()['policies']
for p in body:
fetched_ids.append(p['id'])
missing_pols = [p for p in policy_ids if p not in fetched_ids]
@@ -70,7 +83,7 @@
policy['id'], type=update_type)['policy']
self.assertIn('type', data)
# Assertion for updated value with fetched value
- fetched_policy = self.policies_client.show_policy(
+ fetched_policy = self.reader_policies_client.show_policy(
policy['id'])['policy']
self.assertIn('id', fetched_policy)
self.assertIn('blob', fetched_policy)
diff --git a/tempest/api/identity/admin/v3/test_projects.py b/tempest/api/identity/admin/v3/test_projects.py
index 3b0052c..c191955 100644
--- a/tempest/api/identity/admin/v3/test_projects.py
+++ b/tempest/api/identity/admin/v3/test_projects.py
@@ -30,6 +30,27 @@
# pre-provisioned credentials provider.
force_tenant_isolation = False
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ProjectsTestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing projects
+ cls.reader_projects_client = (
+ cls.os_system_reader.projects_client)
+ # Use system reader for listing/showing domains
+ cls.reader_domains_client = (
+ cls.os_system_reader.domains_client)
+ # Use system reader for showing users
+ cls.reader_users_client = (
+ cls.os_system_reader.users_v3_client)
+ else:
+ # Use admin client by default
+ cls.reader_projects_client = cls.projects_client
+ cls.reader_domains_client = cls.domains_client
+ cls.reader_users_client = cls.users_client
+
@decorators.idempotent_id('0ecf465c-0dc4-4532-ab53-91ffeb74d12d')
def test_project_create_with_description(self):
"""Test creating project with a description"""
@@ -40,7 +61,7 @@
desc1 = project['description']
self.assertEqual(desc1, project_desc, 'Description should have '
'been sent in response for create')
- body = self.projects_client.show_project(project_id)['project']
+ body = self.reader_projects_client.show_project(project_id)['project']
desc2 = body['description']
self.assertEqual(desc2, project_desc, 'Description does not appear '
'to be set')
@@ -56,7 +77,7 @@
project_id = project['id']
self.assertEqual(project_name, project['name'])
self.assertEqual(domain['id'], project['domain_id'])
- body = self.projects_client.show_project(project_id)['project']
+ body = self.reader_projects_client.show_project(project_id)['project']
self.assertEqual(project_name, body['name'])
self.assertEqual(domain['id'], body['domain_id'])
@@ -97,15 +118,15 @@
# Check if the is_domain project is correctly returned by both
# project and domain APIs
- projects_list = self.projects_client.list_projects(
+ projects_list = self.reader_projects_client.list_projects(
params={'is_domain': True})['projects']
project_ids = [p['id'] for p in projects_list]
self.assertIn(project['id'], project_ids)
# The domains API return different attributes for the entity, so we
# compare the entities IDs
- domains_ids = [d['id'] for d in self.domains_client.list_domains()[
- 'domains']]
+ domains_list = self.reader_domains_client.list_domains()['domains']
+ domains_ids = [d['id'] for d in domains_list]
self.assertIn(project['id'], domains_ids)
@decorators.idempotent_id('1f66dc76-50cc-4741-a200-af984509e480')
@@ -115,7 +136,7 @@
project_id = project['id']
self.assertTrue(project['enabled'],
'Enable should be True in response')
- body = self.projects_client.show_project(project_id)['project']
+ body = self.reader_projects_client.show_project(project_id)['project']
self.assertTrue(body['enabled'], 'Enable should be True in lookup')
@decorators.idempotent_id('78f96a9c-e0e0-4ee6-a3ba-fbf6dfd03207')
@@ -124,7 +145,8 @@
project = self.setup_test_project(enabled=False)
self.assertFalse(project['enabled'],
'Enable should be False in response')
- body = self.projects_client.show_project(project['id'])['project']
+ body = self.reader_projects_client.show_project(project['id'])[
+ 'project']
self.assertFalse(body['enabled'],
'Enable should be False in lookup')
@@ -144,7 +166,8 @@
resp2_name = body['name']
self.assertNotEqual(resp1_name, resp2_name)
- body = self.projects_client.show_project(project['id'])['project']
+ body = self.reader_projects_client.show_project(project['id'])[
+ 'project']
resp3_name = body['name']
self.assertNotEqual(resp1_name, resp3_name)
@@ -166,7 +189,8 @@
resp2_desc = body['description']
self.assertNotEqual(resp1_desc, resp2_desc)
- body = self.projects_client.show_project(project['id'])['project']
+ body = self.reader_projects_client.show_project(project['id'])[
+ 'project']
resp3_desc = body['description']
self.assertNotEqual(resp1_desc, resp3_desc)
@@ -187,7 +211,8 @@
resp2_en = body['enabled']
self.assertNotEqual(resp1_en, resp2_en)
- body = self.projects_client.show_project(project['id'])['project']
+ body = self.reader_projects_client.show_project(project['id'])[
+ 'project']
resp3_en = body['enabled']
self.assertNotEqual(resp1_en, resp3_en)
@@ -217,7 +242,7 @@
self.addCleanup(self.users_client.delete_user, user['id'])
# Get User To validate the user details
- new_user_get = self.users_client.show_user(user['id'])['user']
+ new_user_get = self.reader_users_client.show_user(user['id'])['user']
# Assert response body of GET
self.assertEqual(u_name, new_user_get['name'])
self.assertEqual(u_desc, new_user_get['description'])
@@ -238,9 +263,9 @@
project = self.setup_test_project(tags=tags)
# Show and list for the project
- project_get = self.projects_client.show_project(
+ project_get = self.reader_projects_client.show_project(
project['id'])['project']
- _projects = self.projects_client.list_projects()['projects']
+ _projects = self.reader_projects_client.list_projects()['projects']
project_list = next(x for x in _projects if x['id'] == project['id'])
# Assert the expected fields exist. More fields than expected may
diff --git a/tempest/api/identity/admin/v3/test_regions.py b/tempest/api/identity/admin/v3/test_regions.py
index 870a406..f021cc2 100644
--- a/tempest/api/identity/admin/v3/test_regions.py
+++ b/tempest/api/identity/admin/v3/test_regions.py
@@ -30,10 +30,18 @@
# pre-provisioned credentials provider.
force_tenant_isolation = False
+ credentials = ['primary', 'admin', 'system_reader']
+
@classmethod
def setup_clients(cls):
super(RegionsTestJSON, cls).setup_clients()
cls.client = cls.regions_client
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing regions
+ cls.reader_client = cls.os_system_reader.regions_client
+ else:
+ # Use admin client by default
+ cls.reader_client = cls.client
@classmethod
def resource_setup(cls):
@@ -77,13 +85,13 @@
self.assertEqual(self.setup_regions[1]['id'],
region['parent_region_id'])
# Get the details of region
- region = self.client.show_region(region['id'])['region']
+ region = self.reader_client.show_region(region['id'])['region']
self.assertEqual(r_alt_description, region['description'])
self.assertEqual(self.setup_regions[1]['id'],
region['parent_region_id'])
# Delete the region
self.client.delete_region(region['id'])
- body = self.client.list_regions()['regions']
+ body = self.reader_client.list_regions()['regions']
regions_list = [r['id'] for r in body]
self.assertNotIn(region['id'], regions_list)
@@ -104,7 +112,7 @@
@decorators.idempotent_id('d180bf99-544a-445c-ad0d-0c0d27663796')
def test_list_regions(self):
"""Test getting a list of regions"""
- fetched_regions = self.client.list_regions()['regions']
+ fetched_regions = self.reader_client.list_regions()['regions']
missing_regions =\
[e for e in self.setup_regions if e not in fetched_regions]
# Asserting List Regions response
@@ -124,7 +132,8 @@
self.addCleanup(self.client.delete_region, region['id'])
# Get the list of regions filtering with the parent_region_id
params = {'parent_region_id': self.setup_regions[0]['id']}
- fetched_regions = self.client.list_regions(params=params)['regions']
+ fetched_regions = self.reader_client.list_regions(params=params)[
+ 'regions']
# Asserting list regions response
self.assertIn(region, fetched_regions)
for r in fetched_regions:
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index ab96027..d1c90dc 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -32,6 +32,19 @@
# pre-provisioned credentials provider.
force_tenant_isolation = False
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(RolesV3TestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing roles
+ cls.reader_roles_client = (
+ cls.os_system_reader.roles_v3_client)
+ else:
+ # Use admin client by default
+ cls.reader_roles_client = cls.roles_client
+
@classmethod
def resource_setup(cls):
super(RolesV3TestJSON, cls).resource_setup()
@@ -97,11 +110,11 @@
self.assertIn('links', updated_role)
self.assertNotEqual(r_name, updated_role['name'])
- new_role = self.roles_client.show_role(role['id'])['role']
+ new_role = self.reader_roles_client.show_role(role['id'])['role']
self.assertEqual(new_name, new_role['name'])
self.assertEqual(updated_role['id'], new_role['id'])
- roles = self.roles_client.list_roles()['roles']
+ roles = self.reader_roles_client.list_roles()['roles']
self.assertIn(role['id'], [r['id'] for r in roles])
@decorators.idempotent_id('c6b80012-fe4a-498b-9ce8-eb391c05169f')
@@ -114,7 +127,7 @@
self.user_body['id'],
self.role['id'])
- roles = self.roles_client.list_user_roles_on_project(
+ roles = self.reader_roles_client.list_user_roles_on_project(
self.project['id'], self.user_body['id'])['roles']
self.assertEqual(1, len(roles))
@@ -135,7 +148,7 @@
self.roles_client.create_user_role_on_domain(
self.domain['id'], self.user_body['id'], self.role['id'])
- roles = self.roles_client.list_user_roles_on_domain(
+ roles = self.reader_roles_client.list_user_roles_on_domain(
self.domain['id'], self.user_body['id'])['roles']
self.assertEqual(1, len(roles))
@@ -155,7 +168,7 @@
self.roles_client.create_user_role_on_system(
self.user_body['id'], self.role['id'])
- roles = self.roles_client.list_user_roles_on_system(
+ roles = self.reader_roles_client.list_user_roles_on_system(
self.user_body['id'])['roles']
self.assertEqual(1, len(roles))
@@ -177,7 +190,7 @@
self.roles_client.create_group_role_on_project(
self.project['id'], self.group_body['id'], self.role['id'])
# List group roles on project
- roles = self.roles_client.list_group_roles_on_project(
+ roles = self.reader_roles_client.list_group_roles_on_project(
self.project['id'], self.group_body['id'])['roles']
self.assertEqual(1, len(roles))
@@ -210,7 +223,7 @@
self.roles_client.create_group_role_on_domain(
self.domain['id'], self.group_body['id'], self.role['id'])
- roles = self.roles_client.list_group_roles_on_domain(
+ roles = self.reader_roles_client.list_group_roles_on_domain(
self.domain['id'], self.group_body['id'])['roles']
self.assertEqual(1, len(roles))
@@ -227,7 +240,7 @@
self.roles_client.create_group_role_on_system(
self.group_body['id'], self.role['id'])
- roles = self.roles_client.list_group_roles_on_system(
+ roles = self.reader_roles_client.list_group_roles_on_system(
self.group_body['id'])['roles']
self.assertEqual(1, len(roles))
@@ -243,7 +256,7 @@
def test_list_roles(self):
"""Test listing roles"""
# Return a list of all roles
- body = self.roles_client.list_roles()['roles']
+ body = self.reader_roles_client.list_roles()['roles']
found = [role for role in body if role in self.roles]
self.assertEqual(len(found), len(self.roles))
@@ -278,7 +291,7 @@
prior_role_id, implies_role_id)
# Show the inference rule and check its elements
- resp_body = self.roles_client.show_role_inference_rule(
+ resp_body = self.reader_roles_client.show_role_inference_rule(
prior_role_id, implies_role_id)
self.assertIn('role_inference', resp_body)
role_inference = resp_body['role_inference']
@@ -293,7 +306,7 @@
# Check if the inference rule no longer exists
self.assertRaises(
lib_exc.NotFound,
- self.roles_client.show_role_inference_rule,
+ self.reader_roles_client.show_role_inference_rule,
prior_role_id,
implies_role_id)
@@ -313,14 +326,14 @@
self.roles[2]['id'], self.role['id'])
# Listing inferences rules from "roles[2]" should only return "role"
- rules = self.roles_client.list_role_inferences_rules(
+ rules = self.reader_roles_client.list_role_inferences_rules(
self.roles[2]['id'])['role_inference']
self.assertEqual(1, len(rules['implies']))
self.assertEqual(self.role['id'], rules['implies'][0]['id'])
# Listing inferences rules from "roles[0]" should return "roles[1]" and
# "roles[2]" (only direct rules are listed)
- rules = self.roles_client.list_role_inferences_rules(
+ rules = self.reader_roles_client.list_role_inferences_rules(
self.roles[0]['id'])['role_inference']
implies_ids = [role['id'] for role in rules['implies']]
self.assertEqual(2, len(implies_ids))
@@ -384,13 +397,13 @@
self.roles_client.delete_role,
domain_role['id'])
- domain_roles = self.roles_client.list_roles(
+ domain_roles = self.reader_roles_client.list_roles(
domain_id=self.domain['id'])['roles']
self.assertEqual(1, len(domain_roles))
self.assertIn(domain_role, domain_roles)
self.roles_client.delete_role(domain_role['id'])
- domain_roles = self.roles_client.list_roles(
+ domain_roles = self.reader_roles_client.list_roles(
domain_id=self.domain['id'])['roles']
self.assertEmpty(domain_roles)
@@ -465,7 +478,7 @@
self._create_implied_role(
self.roles[2]['id'], self.role['id'])
- rules = self.roles_client.list_all_role_inference_rules()[
+ rules = self.reader_roles_client.list_all_role_inference_rules()[
'role_inferences']
# NOTE(jaosorior): With the work related to the define-default-roles
diff --git a/tempest/api/identity/admin/v3/test_services.py b/tempest/api/identity/admin/v3/test_services.py
index b67e175..3379c3e 100644
--- a/tempest/api/identity/admin/v3/test_services.py
+++ b/tempest/api/identity/admin/v3/test_services.py
@@ -25,11 +25,25 @@
class ServicesTestJSON(base.BaseIdentityV3AdminTest):
"""Test keystone services"""
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServicesTestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing services
+ cls.reader_services_client = (
+ cls.os_system_reader.identity_services_v3_client)
+ else:
+ # Use admin client by default
+ cls.reader_services_client = cls.services_client
+
def _del_service(self, service_id):
# Used for deleting the services created in this class
self.services_client.delete_service(service_id)
# Checking whether service is deleted successfully
- self.assertRaises(lib_exc.NotFound, self.services_client.show_service,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_services_client.show_service,
service_id)
@decorators.attr(type='smoke')
@@ -61,7 +75,8 @@
self.assertNotEqual(resp1_desc, resp2_desc)
# Get service
- fetched_service = self.services_client.show_service(s_id)['service']
+ fetched_service = self.reader_services_client.show_service(s_id)[
+ 'service']
resp3_desc = fetched_service['description']
self.assertEqual(resp2_desc, resp3_desc)
@@ -100,14 +115,14 @@
service_types.append(serv_type)
# List and Verify Services
- services = self.services_client.list_services()['services']
+ services = self.reader_services_client.list_services()['services']
fetched_ids = [service['id'] for service in services]
found = [s for s in fetched_ids if s in service_ids]
self.assertEqual(len(found), len(service_ids))
# Check that filtering by service type works.
for serv_type in service_types:
- fetched_services = self.services_client.list_services(
+ fetched_services = self.reader_services_client.list_services(
type=serv_type)['services']
self.assertEqual(1, len(fetched_services))
self.assertEqual(serv_type, fetched_services[0]['type'])
diff --git a/tempest/api/identity/admin/v3/test_trusts.py b/tempest/api/identity/admin/v3/test_trusts.py
index 5bd6756..d843abf 100644
--- a/tempest/api/identity/admin/v3/test_trusts.py
+++ b/tempest/api/identity/admin/v3/test_trusts.py
@@ -29,6 +29,19 @@
class TrustsV3TestJSON(base.BaseIdentityV3AdminTest):
"""Test keystone trusts"""
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(TrustsV3TestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing trusts
+ cls.reader_trusts_client = (
+ cls.os_system_reader.trusts_client)
+ else:
+ # Use admin client by default
+ cls.reader_trusts_client = cls.trusts_client
+
@classmethod
def skip_checks(cls):
super(TrustsV3TestJSON, cls).skip_checks()
@@ -293,7 +306,7 @@
original_scope = self.os_admin.auth_provider.scope
set_scope(self.os_admin.auth_provider, 'project')
self.addCleanup(set_scope, self.os_admin.auth_provider, original_scope)
- trusts_get = self.trusts_client.list_trusts()['trusts']
+ trusts_get = self.reader_trusts_client.list_trusts()['trusts']
trusts = [t for t in trusts_get
if t['id'] == self.trust_id]
self.assertEqual(1, len(trusts))
diff --git a/tempest/api/identity/admin/v3/test_users.py b/tempest/api/identity/admin/v3/test_users.py
index 9bcbba5..1272adb 100644
--- a/tempest/api/identity/admin/v3/test_users.py
+++ b/tempest/api/identity/admin/v3/test_users.py
@@ -29,6 +29,27 @@
class UsersV3TestJSON(base.BaseIdentityV3AdminTest):
"""Test keystone users"""
+ credentials = ['primary', 'admin', 'system_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(UsersV3TestJSON, cls).setup_clients()
+ if CONF.identity.use_system_token:
+ # Use system reader for listing/showing users
+ cls.reader_users_client = (
+ cls.os_system_reader.users_v3_client)
+ # Use system reader for showing roles
+ cls.reader_roles_client = (
+ cls.os_system_reader.roles_v3_client)
+ # Use system reader for showing projects
+ cls.reader_projects_client = (
+ cls.os_system_reader.projects_client)
+ else:
+ # Use admin client by default
+ cls.reader_users_client = cls.users_client
+ cls.reader_roles_client = cls.roles_client
+ cls.reader_projects_client = cls.projects_client
+
@classmethod
def skip_checks(cls):
super(UsersV3TestJSON, cls).skip_checks()
@@ -67,7 +88,7 @@
self.assertEqual(update_kwargs[field], updated_user[field])
# GET by id after updating
- new_user_get = self.users_client.show_user(user['id'])['user']
+ new_user_get = self.reader_users_client.show_user(user['id'])['user']
# Assert response body of GET after updation
for field in update_kwargs:
self.assertEqual(update_kwargs[field], new_user_get[field])
@@ -120,19 +141,20 @@
# Creating Role
role_body = self.setup_test_role()
- user = self.users_client.show_user(user_body['id'])['user']
- role = self.roles_client.show_role(role_body['id'])['role']
+ user = self.reader_users_client.show_user(user_body['id'])['user']
+ role = self.reader_roles_client.show_role(role_body['id'])['role']
for _ in range(2):
# Creating project so as to assign role
project_body = self.setup_test_project()
- project = self.projects_client.show_project(
+ project = self.reader_projects_client.show_project(
project_body['id'])['project']
# Assigning roles to user on project
self.roles_client.create_user_role_on_project(project['id'],
user['id'],
role['id'])
assigned_project_ids.append(project['id'])
- body = self.users_client.list_user_projects(user['id'])['projects']
+ body = self.reader_users_client.list_user_projects(user['id'])[
+ 'projects']
for i in body:
fetched_project_ids.append(i['id'])
# verifying the project ids in list
@@ -148,7 +170,7 @@
def test_get_user(self):
"""Test getting a user detail"""
user = self.setup_test_user()
- fetched_user = self.users_client.show_user(user['id'])['user']
+ fetched_user = self.reader_users_client.show_user(user['id'])['user']
self.assertEqual(user['id'], fetched_user['id'])
@testtools.skipUnless(CONF.identity_feature_enabled.security_compliance,
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 9309c76..4375da5 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -19,6 +19,7 @@
from oslo_log import log as logging
from tempest.api.image import base
+from tempest.common import image as image_utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -980,3 +981,87 @@
self.assertEqual(orig_image['os_hash_value'], image['os_hash_value'])
self.assertEqual(orig_image['os_hash_algo'], image['os_hash_algo'])
self.assertNotIn('validation_data', image['locations'][0])
+
+
+class HashCalculationRemoteDeletionTest(base.BaseV2ImageTest):
+ """Test calculation of image hash with new location API when the image is
+ deleted from a remote Glance service.
+ """
+ @classmethod
+ def resource_setup(cls):
+ super(HashCalculationRemoteDeletionTest,
+ cls).resource_setup()
+ if not cls.versions_client.has_version('2.17'):
+ # API is not new enough to support add location API
+            skip_msg = ('%s skipped as Glance does not support '
+                        'v2.17' % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
+ def skip_checks(cls):
+ super(HashCalculationRemoteDeletionTest,
+ cls).skip_checks()
+ if not CONF.image_feature_enabled.do_secure_hash:
+ skip_msg = (
+ "%s skipped as do_secure_hash is disabled" %
+ cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ if not CONF.image_feature_enabled.http_store_enabled:
+ skip_msg = (
+ "%s skipped as http store is disabled" %
+ cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @decorators.idempotent_id('123e4567-e89b-12d3-a456-426614174000')
+ def test_hash_calculation_cancelled(self):
+ """Test that image hash calculation is cancelled when the image
+ is deleted from a remote Glance service.
+
+        This test creates an image using the new location API, verifies
+        that the hash calculation is initiated, then deletes the image
+        from a remote Glance service and verifies that the hash calculation
+        is properly cancelled and the image deleted successfully.
+ """
+
+ # Create an image with a location
+ image_name = data_utils.rand_name('image')
+ container_format = CONF.image.container_formats[0]
+ disk_format = CONF.image.disk_formats[0]
+ image = self.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
+ self.assertEqual(image_name, image['name'])
+ self.assertEqual('queued', image['status'])
+
+ # Start http server at random port to simulate the image location
+ # and to provide random data for the image with slow transfer
+ server = image_utils.RandomDataServer()
+ server.start()
+ self.addCleanup(server.stop)
+
+ # Add a location to the image
+ location = 'http://localhost:%d' % server.port
+ self.client.add_image_location(image['id'], location)
+ waiters.wait_for_image_status(self.client, image['id'], 'active')
+
+ # Verify that the hash calculation is initiated
+ image_info = self.client.show_image(image['id'])
+ self.assertEqual(CONF.image.hashing_algorithm,
+ image_info['os_hash_algo'])
+ self.assertEqual('active', image_info['status'])
+
+ if CONF.image.alternate_image_endpoint:
+ # If alternate image endpoint is configured, we will delete the
+ # image from the alternate worker
+ self.os_primary.image_client_remote.delete_image(image['id'])
+ else:
+ # delete image from backend
+ self.client.delete_image(image['id'])
+
+ # If image is deleted successfully, the hash calculation is cancelled
+ self.client.wait_for_resource_deletion(image['id'])
+
+ # Stop the server to release the port
+ server.stop()
diff --git a/tempest/api/network/test_agent_management_negative.py b/tempest/api/network/test_agent_management_negative.py
index d1c02ce..f4107e2 100644
--- a/tempest/api/network/test_agent_management_negative.py
+++ b/tempest/api/network/test_agent_management_negative.py
@@ -14,15 +14,28 @@
# under the License.
from tempest.api.network import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class AgentManagementNegativeTest(base.BaseNetworkTest):
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(AgentManagementNegativeTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.network_agents_client
+ else:
+ cls.reader_client = cls.agents_client
+
@decorators.idempotent_id('e335be47-b9a1-46fd-be30-0874c0b751e6')
@decorators.attr(type=['negative'])
def test_list_agents_non_admin(self):
"""Validate that non-admin user cannot list agents."""
# Listing agents requires admin_only permissions.
- body = self.agents_client.list_agents()
+ body = self.reader_client.list_agents()
self.assertEmpty(body["agents"])
diff --git a/tempest/api/network/test_allowed_address_pair.py b/tempest/api/network/test_allowed_address_pair.py
index 58160e0..4570b18 100644
--- a/tempest/api/network/test_allowed_address_pair.py
+++ b/tempest/api/network/test_allowed_address_pair.py
@@ -39,6 +39,7 @@
api_extensions
"""
+ credentials = ['primary', 'project_reader']
@classmethod
def skip_checks(cls):
@@ -48,6 +49,14 @@
raise cls.skipException(msg)
@classmethod
+ def setup_clients(cls):
+ super(AllowedAddressPairTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_client = cls.ports_client
+
+ @classmethod
def resource_setup(cls):
super(AllowedAddressPairTestJSON, cls).resource_setup()
cls.network = cls.create_network()
@@ -73,7 +82,7 @@
self.ports_client.delete_port, port_id)
# Confirm port was created with allowed address pair attribute
- body = self.ports_client.list_ports()
+ body = self.reader_client.list_ports()
ports = body['ports']
port = [p for p in ports if p['id'] == port_id]
msg = 'Created port not found in list of ports returned by Neutron'
diff --git a/tempest/api/network/test_dhcp_ipv6.py b/tempest/api/network/test_dhcp_ipv6.py
index fee6af5..eaead8a 100644
--- a/tempest/api/network/test_dhcp_ipv6.py
+++ b/tempest/api/network/test_dhcp_ipv6.py
@@ -41,6 +41,8 @@
addressing in subnets with router
"""
+ credentials = ['primary', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(NetworksTestDHCPv6, cls).skip_checks()
@@ -53,6 +55,18 @@
raise cls.skipException(msg)
@classmethod
+ def setup_clients(cls):
+ super(NetworksTestDHCPv6, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ cls.reader_subnets_client = cls.os_project_reader.subnets_client
+ cls.reader_routers_client = cls.os_project_reader.routers_client
+ else:
+ cls.reader_ports_client = cls.ports_client
+ cls.reader_subnets_client = cls.subnets_client
+ cls.reader_routers_client = cls.routers_client
+
+ @classmethod
def resource_setup(cls):
super(NetworksTestDHCPv6, cls).resource_setup()
cls.network = cls.create_network()
@@ -67,7 +81,7 @@
del things_list[index]
def _clean_network(self):
- body = self.ports_client.list_ports()
+ body = self.reader_ports_client.list_ports()
ports = body['ports']
for port in ports:
if (net_info.is_router_interface_port(port) and
@@ -78,13 +92,13 @@
if port['id'] in [p['id'] for p in self.ports]:
self.ports_client.delete_port(port['id'])
self._remove_from_list_by_index(self.ports, port)
- body = self.subnets_client.list_subnets()
+ body = self.reader_subnets_client.list_subnets()
subnets = body['subnets']
for subnet in subnets:
if subnet['id'] in [s['id'] for s in self.subnets]:
self.subnets_client.delete_subnet(subnet['id'])
self._remove_from_list_by_index(self.subnets, subnet)
- body = self.routers_client.list_routers()
+ body = self.reader_routers_client.list_routers()
routers = body['routers']
for router in routers:
if router['id'] in [r['id'] for r in self.routers]:
@@ -221,7 +235,7 @@
subnet_slaac]]
self.ports_client.delete_port(port['id'])
self.ports.pop()
- body = self.ports_client.list_ports()
+ body = self.reader_ports_client.list_ports()
ports_id_list = [i['id'] for i in body['ports']]
self.assertNotIn(port['id'], ports_id_list)
self._clean_network()
@@ -398,7 +412,7 @@
self.routers.append(router)
port = self.create_router_interface(router['id'],
subnet['id'])
- body = self.ports_client.show_port(port['port_id'])
+ body = self.reader_ports_client.show_port(port['port_id'])
return subnet, body['port']
@decorators.idempotent_id('e98f65db-68f4-4330-9fea-abd8c5192d4d')
diff --git a/tempest/api/network/test_extensions.py b/tempest/api/network/test_extensions.py
index e116d7c..98b2bb1 100644
--- a/tempest/api/network/test_extensions.py
+++ b/tempest/api/network/test_extensions.py
@@ -16,8 +16,11 @@
from tempest.api.network import base
from tempest.common import utils
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class ExtensionsTestJSON(base.BaseNetworkTest):
"""Tests the following operations in the Neutron API:
@@ -29,6 +32,16 @@
etc/tempest.conf.
"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ExtensionsTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.network_extensions_client
+ else:
+ cls.reader_client = cls.network_extensions_client
+
@decorators.attr(type='smoke')
@decorators.idempotent_id('ef28c7e6-e646-4979-9d67-deb207bc5564')
def test_list_show_extensions(self):
@@ -42,14 +55,14 @@
expected_alias = [ext for ext in expected_alias if
utils.is_extension_enabled(ext, 'network')]
actual_alias = list()
- extensions = self.network_extensions_client.list_extensions()
+ extensions = self.reader_client.list_extensions()
list_extensions = extensions['extensions']
# Show and verify the details of the available extensions
for ext in list_extensions:
ext_name = ext['name']
ext_alias = ext['alias']
actual_alias.append(ext['alias'])
- ext_details = self.network_extensions_client.show_extension(
+ ext_details = self.reader_client.show_extension(
ext_alias)
ext_details = ext_details['extension']
diff --git a/tempest/api/network/test_extra_dhcp_options.py b/tempest/api/network/test_extra_dhcp_options.py
index 36578b1..5ff43a7 100644
--- a/tempest/api/network/test_extra_dhcp_options.py
+++ b/tempest/api/network/test_extra_dhcp_options.py
@@ -36,6 +36,8 @@
section of etc/tempest.conf
"""
+ credentials = ['primary', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(ExtraDHCPOptionsTestJSON, cls).skip_checks()
@@ -44,6 +46,14 @@
raise cls.skipException(msg)
@classmethod
+ def setup_clients(cls):
+ super(ExtraDHCPOptionsTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_ports_client = cls.ports_client
+
+ @classmethod
def resource_setup(cls):
super(ExtraDHCPOptionsTestJSON, cls).resource_setup()
cls.network = cls.create_network()
@@ -72,7 +82,7 @@
self.ports_client.delete_port, port_id)
# Confirm port created has Extra DHCP Options
- body = self.ports_client.list_ports()
+ body = self.reader_ports_client.list_ports()
ports = body['ports']
port = [p for p in ports if p['id'] == port_id]
self.assertTrue(port)
@@ -88,7 +98,7 @@
name=name,
extra_dhcp_opts=self.extra_dhcp_opts)
# Confirm extra dhcp options were added to the port
- body = self.ports_client.show_port(self.port['id'])
+ body = self.reader_ports_client.show_port(self.port['id'])
self._confirm_extra_dhcp_options(body['port'], self.extra_dhcp_opts)
def _confirm_extra_dhcp_options(self, port, extra_dhcp_opts):
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index 07f0903..799bce3 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -42,6 +42,8 @@
public_network_id which is the id for the external network present
"""
+ credentials = ['primary', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(FloatingIPTestJSON, cls).skip_checks()
@@ -55,6 +57,14 @@
raise cls.skipException("Floating ips are not available")
@classmethod
+ def setup_clients(cls):
+ super(FloatingIPTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.floating_ips_client
+ else:
+ cls.reader_client = cls.floating_ips_client
+
+ @classmethod
def resource_setup(cls):
super(FloatingIPTestJSON, cls).resource_setup()
cls.ext_net_id = CONF.network.public_network_id
@@ -92,7 +102,7 @@
self.assertIn(created_floating_ip['fixed_ip_address'],
[ip['ip_address'] for ip in self.ports[0]['fixed_ips']])
# Verifies the details of a floating_ip
- floating_ip = self.floating_ips_client.show_floatingip(
+ floating_ip = self.reader_client.show_floatingip(
created_floating_ip['id'])
shown_floating_ip = floating_ip['floatingip']
self.assertEqual(shown_floating_ip['id'], created_floating_ip['id'])
@@ -105,7 +115,7 @@
self.assertEqual(shown_floating_ip['port_id'], self.ports[0]['id'])
# Verify the floating ip exists in the list of all floating_ips
- floating_ips = self.floating_ips_client.list_floatingips()
+ floating_ips = self.reader_client.list_floatingips()
floatingip_id_list = list()
for f in floating_ips['floatingips']:
floatingip_id_list.append(f['id'])
@@ -162,7 +172,7 @@
# Delete port
self.ports_client.delete_port(created_port['id'])
# Verifies the details of the floating_ip
- floating_ip = self.floating_ips_client.show_floatingip(
+ floating_ip = self.reader_client.show_floatingip(
created_floating_ip['id'])
shown_floating_ip = floating_ip['floatingip']
# Confirm the fields are back to None
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index b1fba2d..ff02e80 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -155,6 +155,24 @@
project_network_v6_mask_bits is the equivalent for ipv6 subnets
"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(NetworksTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_networks_client = cls.os_project_reader.networks_client
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ cls.reader_subnets_client = cls.os_project_reader.subnets_client
+ cls.reader_network_extensions_client = (
+ cls.os_project_reader.network_extensions_client)
+ else:
+ cls.reader_networks_client = cls.networks_client
+ cls.reader_ports_client = cls.ports_client
+ cls.reader_subnets_client = cls.subnets_client
+ cls.reader_network_extensions_client = (
+ cls.network_extensions_client)
+
@decorators.attr(type='smoke')
@decorators.idempotent_id('0e269138-0da6-4efc-a46d-578161e7b221')
def test_create_update_delete_network_subnet(self):
@@ -185,7 +203,7 @@
@decorators.idempotent_id('2bf13842-c93f-4a69-83ed-717d2ec3b44e')
def test_show_network(self):
"""Verify the details of a network"""
- body = self.networks_client.show_network(self.network['id'])
+ body = self.reader_networks_client.show_network(self.network['id'])
network = body['network']
for key in ['id', 'name']:
self.assertEqual(network[key], self.network[key])
@@ -196,8 +214,8 @@
fields = ['id', 'name']
if utils.is_extension_enabled('net-mtu', 'network'):
fields.append('mtu')
- body = self.networks_client.show_network(self.network['id'],
- fields=fields)
+ body = self.reader_networks_client.show_network(self.network['id'],
+ fields=fields)
network = body['network']
self.assertEqual(sorted(network.keys()), sorted(fields))
for field_name in fields:
@@ -209,7 +227,7 @@
@decorators.idempotent_id('f7ffdeda-e200-4a7a-bcbe-05716e86bf43')
def test_list_networks(self):
"""Verify the network exists in the list of all networks"""
- body = self.networks_client.list_networks()
+ body = self.reader_networks_client.list_networks()
networks = [network['id'] for network in body['networks']
if network['id'] == self.network['id']]
self.assertNotEmpty(networks, "Created network not found in the list")
@@ -220,7 +238,7 @@
fields = ['id', 'name']
if utils.is_extension_enabled('net-mtu', 'network'):
fields.append('mtu')
- body = self.networks_client.list_networks(fields=fields)
+ body = self.reader_networks_client.list_networks(fields=fields)
networks = body['networks']
self.assertNotEmpty(networks, "Network list returned is empty")
for network in networks:
@@ -230,7 +248,7 @@
@decorators.idempotent_id('bd635d81-6030-4dd1-b3b9-31ba0cfdf6cc')
def test_show_subnet(self):
"""Verify the details of a subnet"""
- body = self.subnets_client.show_subnet(self.subnet['id'])
+ body = self.reader_subnets_client.show_subnet(self.subnet['id'])
subnet = body['subnet']
self.assertNotEmpty(subnet, "Subnet returned has no fields")
for key in ['id', 'cidr']:
@@ -241,8 +259,8 @@
def test_show_subnet_fields(self):
"""Verify specific fields of a subnet"""
fields = ['id', 'network_id']
- body = self.subnets_client.show_subnet(self.subnet['id'],
- fields=fields)
+ body = self.reader_subnets_client.show_subnet(self.subnet['id'],
+ fields=fields)
subnet = body['subnet']
self.assertEqual(sorted(subnet.keys()), sorted(fields))
for field_name in fields:
@@ -252,7 +270,7 @@
@decorators.idempotent_id('db68ba48-f4ea-49e9-81d1-e367f6d0b20a')
def test_list_subnets(self):
"""Verify the subnet exists in the list of all subnets"""
- body = self.subnets_client.list_subnets()
+ body = self.reader_subnets_client.list_subnets()
subnets = [subnet['id'] for subnet in body['subnets']
if subnet['id'] == self.subnet['id']]
self.assertNotEmpty(subnets, "Created subnet not found in the list")
@@ -261,7 +279,7 @@
def test_list_subnets_fields(self):
"""Verify specific fields of subnets"""
fields = ['id', 'network_id']
- body = self.subnets_client.list_subnets(fields=fields)
+ body = self.reader_subnets_client.list_subnets(fields=fields)
subnets = body['subnets']
self.assertNotEmpty(subnets, "Subnet list returned is empty")
for subnet in subnets:
@@ -284,7 +302,8 @@
self.networks_client.delete_network(net_id)
# Verify that the subnet got automatically deleted.
- self.assertRaises(lib_exc.NotFound, self.subnets_client.show_subnet,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_subnets_client.show_subnet,
subnet_id)
@decorators.idempotent_id('d2d596e2-8e76-47a9-ac51-d4648009f4d3')
@@ -373,7 +392,8 @@
public_network_id = CONF.network.public_network_id
# find external network matching public_network_id
- body = self.networks_client.list_networks(**{'router:external': True})
+ body = self.reader_networks_client.list_networks(
+ **{'router:external': True})
external_network = next((network for network in body['networks']
if network['id'] == public_network_id), None)
self.assertIsNotNone(external_network, "Public network %s not found "
@@ -388,10 +408,12 @@
# only check the public network ID because the other networks may
# belong to other tests and their state may have changed during this
# test
- body = self.subnets_client.list_subnets(network_id=public_network_id)
+ body = self.reader_subnets_client.list_subnets(
+ network_id=public_network_id)
extensions = [
ext['alias'] for ext in
- self.network_extensions_client.list_extensions()['extensions']]
+ self.reader_network_extensions_client.list_extensions()[
+ 'extensions']]
is_sen_ext = 'subnet-external-network' in extensions
# check subnet visibility of external_network
@@ -412,12 +434,14 @@
body = self.create_network(description='d1')
self.assertEqual('d1', body['description'])
net_id = body['id']
- body = self.networks_client.list_networks(id=net_id)['networks'][0]
+ body = self.reader_networks_client.list_networks(
+ id=net_id)['networks'][0]
self.assertEqual('d1', body['description'])
body = self.networks_client.update_network(body['id'],
description='d2')
self.assertEqual('d2', body['network']['description'])
- body = self.networks_client.list_networks(id=net_id)['networks'][0]
+ body = self.reader_networks_client.list_networks(
+ id=net_id)['networks'][0]
self.assertEqual('d2', body['description'])
@@ -439,11 +463,25 @@
the block defined by project-network_cidr
"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(BulkNetworkOpsTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_networks_client = cls.os_project_reader.networks_client
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ cls.reader_subnets_client = cls.os_project_reader.subnets_client
+ else:
+ cls.reader_networks_client = cls.networks_client
+ cls.reader_ports_client = cls.ports_client
+ cls.reader_subnets_client = cls.subnets_client
+
def _delete_networks(self, created_networks):
for n in created_networks:
self.networks_client.delete_network(n['id'])
# Asserting that the networks are not found in the list after deletion
- body = self.networks_client.list_networks()
+ body = self.reader_networks_client.list_networks()
networks_list = [network['id'] for network in body['networks']]
for n in created_networks:
self.assertNotIn(n['id'], networks_list)
@@ -452,7 +490,7 @@
for n in created_subnets:
self.subnets_client.delete_subnet(n['id'])
# Asserting that the subnets are not found in the list after deletion
- body = self.subnets_client.list_subnets()
+ body = self.reader_subnets_client.list_subnets()
subnets_list = [subnet['id'] for subnet in body['subnets']]
for n in created_subnets:
self.assertNotIn(n['id'], subnets_list)
@@ -461,7 +499,7 @@
for n in created_ports:
self.ports_client.delete_port(n['id'])
# Asserting that the ports are not found in the list after deletion
- body = self.ports_client.list_ports()
+ body = self.reader_ports_client.list_ports()
ports_list = [port['id'] for port in body['ports']]
for n in created_ports:
self.assertNotIn(n['id'], ports_list)
@@ -480,7 +518,7 @@
created_networks = body['networks']
self.addCleanup(self._delete_networks, created_networks)
# Asserting that the networks are found in the list after creation
- body = self.networks_client.list_networks()
+ body = self.reader_networks_client.list_networks()
networks_list = [network['id'] for network in body['networks']]
for n in created_networks:
self.assertIsNotNone(n['id'])
@@ -512,7 +550,7 @@
created_subnets = body['subnets']
self.addCleanup(self._delete_subnets, created_subnets)
# Asserting that the subnets are found in the list after creation
- body = self.subnets_client.list_subnets()
+ body = self.reader_subnets_client.list_subnets()
subnets_list = [subnet['id'] for subnet in body['subnets']]
for n in created_subnets:
self.assertIsNotNone(n['id'])
@@ -541,7 +579,7 @@
created_ports = body['ports']
self.addCleanup(self._delete_ports, created_ports)
# Asserting that the ports are found in the list after creation
- body = self.ports_client.list_ports()
+ body = self.reader_ports_client.list_ports()
ports_list = [port['id'] for port in body['ports']]
for n in created_ports:
self.assertIsNotNone(n['id'])
@@ -555,6 +593,16 @@
class NetworksIpV6Test(NetworksTest):
_ip_version = 6
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(NetworksIpV6Test, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_subnets_client = cls.os_project_reader.subnets_client
+ else:
+ cls.reader_subnets_client = cls.subnets_client
+
@decorators.idempotent_id('e41a4888-65a6-418c-a095-f7c2ef4ad59a')
def test_create_delete_subnet_with_gw(self):
"""Verify creating and deleting subnet with gateway"""
@@ -600,7 +648,7 @@
# Verifies Subnet GW is None in IPv4
self.assertIsNone(subnet2['gateway_ip'])
# Verifies all 2 subnets in the same network
- body = self.subnets_client.list_subnets()
+ body = self.reader_subnets_client.list_subnets()
subnets = [sub['id'] for sub in body['subnets']
if sub['network_id'] == network['id']]
test_subnet_ids = [sub['id'] for sub in (subnet1, subnet2)]
@@ -613,6 +661,16 @@
_ip_version = 6
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(NetworksIpV6TestAttrs, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_subnets_client = cls.os_project_reader.subnets_client
+ else:
+ cls.reader_subnets_client = cls.subnets_client
+
@classmethod
def skip_checks(cls):
super(NetworksIpV6TestAttrs, cls).skip_checks()
@@ -651,7 +709,7 @@
port = self.create_port(slaac_network)
self.assertIsNotNone(port['fixed_ips'][0]['ip_address'])
self.subnets_client.delete_subnet(subnet_slaac['id'])
- subnets = self.subnets_client.list_subnets()
+ subnets = self.reader_subnets_client.list_subnets()
subnet_ids = [subnet['id'] for subnet in subnets['subnets']]
self.assertNotIn(subnet_slaac['id'], subnet_ids,
"Subnet wasn't deleted")
diff --git a/tempest/api/network/test_networks_negative.py b/tempest/api/network/test_networks_negative.py
index 6c91df0..72655a3 100644
--- a/tempest/api/network/test_networks_negative.py
+++ b/tempest/api/network/test_networks_negative.py
@@ -26,12 +26,27 @@
class NetworksNegativeTestJSON(base.BaseNetworkTest):
"""Negative tests of network"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(NetworksNegativeTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ cls.reader_subnets_client = cls.os_project_reader.subnets_client
+ cls.reader_networks_client = cls.os_project_reader.networks_client
+ else:
+ cls.reader_ports_client = cls.ports_client
+ cls.reader_subnets_client = cls.subnets_client
+ cls.reader_networks_client = cls.networks_client
+
@decorators.attr(type=['negative'])
@decorators.idempotent_id('9293e937-824d-42d2-8d5b-e985ea67002a')
def test_show_non_existent_network(self):
"""Test showing non existent network"""
non_exist_id = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.networks_client.show_network,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_networks_client.show_network,
non_exist_id)
@decorators.attr(type=['negative'])
@@ -39,7 +54,8 @@
def test_show_non_existent_subnet(self):
"""Test showing non existent subnet"""
non_exist_id = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.subnets_client.show_subnet,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_subnets_client.show_subnet,
non_exist_id)
@decorators.attr(type=['negative'])
@@ -47,7 +63,8 @@
def test_show_non_existent_port(self):
"""Test showing non existent port"""
non_exist_id = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.ports_client.show_port,
+ self.assertRaises(lib_exc.NotFound,
+ self.reader_ports_client.show_port,
non_exist_id)
@decorators.attr(type=['negative'])
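The negative tests above check that a `show` on a random UUID raises `NotFound` regardless of which credential performs the read. A self-contained sketch of that pattern — `FakeNetworksClient` and this `NotFound` class are stand-ins for illustration, not Tempest's `lib_exc.NotFound` or a real Neutron client:

```python
import uuid


class NotFound(Exception):
    """Stand-in for tempest.lib.exceptions.NotFound."""


class FakeNetworksClient:
    """Toy client: knows only the networks it was told about."""
    def __init__(self):
        self._networks = {}  # id -> network dict

    def show_network(self, network_id):
        if network_id not in self._networks:
            raise NotFound(network_id)
        return {"network": self._networks[network_id]}


# A freshly generated UUID cannot match any existing network, so the
# lookup must fail with NotFound — same assertion the tests make with
# assertRaises against the reader client.
client = FakeNetworksClient()
non_exist_id = str(uuid.uuid4())
try:
    client.show_network(non_exist_id)
    print("unexpected success")
except NotFound:
    print("NotFound raised as expected")
```

Routing the failing read through the reader client (as the diff does) additionally verifies that the reader role gets a clean 404 rather than a 403, i.e. that scope enforcement does not mask the not-found result.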
diff --git a/tempest/api/network/test_ports.py b/tempest/api/network/test_ports.py
index 02faa59..82de7e3 100644
--- a/tempest/api/network/test_ports.py
+++ b/tempest/api/network/test_ports.py
@@ -40,6 +40,16 @@
port update
"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(PortsTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_client = cls.ports_client
+
@classmethod
def resource_setup(cls):
super(PortsTestJSON, cls).resource_setup()
@@ -48,7 +58,7 @@
def _delete_port(self, port_id):
self.ports_client.delete_port(port_id)
- body = self.ports_client.list_ports()
+ body = self.reader_client.list_ports()
ports_list = body['ports']
self.assertFalse(port_id in [n['id'] for n in ports_list])
@@ -153,7 +163,7 @@
@decorators.idempotent_id('c9a685bd-e83f-499c-939f-9f7863ca259f')
def test_show_port(self):
"""Verify the details of port"""
- body = self.ports_client.show_port(self.port['id'])
+ body = self.reader_client.show_port(self.port['id'])
port = body['port']
self.assertIn('id', port)
# NOTE(rfolco): created_at and updated_at may get inconsistent values
@@ -170,8 +180,8 @@
def test_show_port_fields(self):
"""Verify specific fields of a port"""
fields = ['id', 'mac_address']
- body = self.ports_client.show_port(self.port['id'],
- fields=fields)
+ body = self.reader_client.show_port(self.port['id'],
+ fields=fields)
port = body['port']
self.assertEqual(sorted(port.keys()), sorted(fields))
for field_name in fields:
@@ -181,7 +191,7 @@
@decorators.idempotent_id('cf95b358-3e92-4a29-a148-52445e1ac50e')
def test_list_ports(self):
"""Verify the port exists in the list of all ports"""
- body = self.ports_client.list_ports()
+ body = self.reader_client.list_ports()
ports = [port['id'] for port in body['ports']
if port['id'] == self.port['id']]
self.assertNotEmpty(ports, "Created port not found in the list")
@@ -212,7 +222,7 @@
# List ports filtered by fixed_ips
port_1_fixed_ip = port_1['port']['fixed_ips'][0]['ip_address']
fixed_ips = 'ip_address=' + port_1_fixed_ip
- port_list = self.ports_client.list_ports(fixed_ips=fixed_ips)
+ port_list = self.reader_client.list_ports(fixed_ips=fixed_ips)
# Check that we got the desired port
ports = port_list['ports']
project_ids = set([port['project_id'] for port in ports])
@@ -281,7 +291,7 @@
ips_filter = 'ip_address_substr=' + ip_address_1[:-1]
else:
ips_filter = 'ip_address_substr=' + ip_address_1
- ports = self.ports_client.list_ports(fixed_ips=ips_filter)['ports']
+ ports = self.reader_client.list_ports(fixed_ips=ips_filter)['ports']
# Check that we got the desired port
port_ids = [port['id'] for port in ports]
fixed_ips = [port['fixed_ips'] for port in ports]
@@ -302,7 +312,7 @@
while substr not in ip_address_2:
substr = substr[:-1]
ips_filter = 'ip_address_substr=' + substr
- ports = self.ports_client.list_ports(fixed_ips=ips_filter)['ports']
+ ports = self.reader_client.list_ports(fixed_ips=ips_filter)['ports']
# Check that we got both port
port_ids = [port['id'] for port in ports]
fixed_ips = [port['fixed_ips'] for port in ports]
@@ -339,7 +349,7 @@
self.routers_client.remove_router_interface,
router['id'], port_id=port['port']['id'])
# List ports filtered by router_id
- port_list = self.ports_client.list_ports(device_id=router['id'])
+ port_list = self.reader_client.list_ports(device_id=router['id'])
ports = port_list['ports']
self.assertEqual(len(ports), 1)
self.assertEqual(ports[0]['id'], port['port']['id'])
@@ -349,7 +359,7 @@
def test_list_ports_fields(self):
"""Verify specific fields of ports"""
fields = ['id', 'mac_address']
- body = self.ports_client.list_ports(fields=fields)
+ body = self.reader_client.list_ports(fields=fields)
ports = body['ports']
self.assertNotEmpty(ports, "Port list returned is empty")
# Asserting the fields returned are correct
@@ -501,7 +511,7 @@
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.ports_client.delete_port, body['port']['id'])
port = body['port']
- body = self.ports_client.show_port(port['id'])
+ body = self.reader_client.show_port(port['id'])
show_port = body['port']
self.assertEqual(free_mac_address,
show_port['mac_address'])
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index fedf2f4..9e5a604 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -37,6 +37,18 @@
self.assertEqual(subnet_id, interface['subnet_id'])
return interface
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(RoutersTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_routers_client = cls.os_project_reader.routers_client
+ cls.reader_ports_client = cls.os_project_reader.ports_client
+ else:
+ cls.reader_routers_client = cls.routers_client
+ cls.reader_ports_client = cls.ports_client
+
@classmethod
def skip_checks(cls):
super(RoutersTest, cls).skip_checks()
@@ -65,7 +77,7 @@
router['external_gateway_info']['network_id'],
CONF.network.public_network_id)
# Show details of the created router
- router_show = self.routers_client.show_router(
+ router_show = self.reader_routers_client.show_router(
router['id'])['router']
self.assertEqual(router_show['name'], router['name'])
self.assertEqual(
@@ -79,7 +91,7 @@
router_update = self.routers_client.update_router(
router['id'], name=updated_name)['router']
self.assertEqual(router_update['name'], updated_name)
- router_show = self.routers_client.show_router(
+ router_show = self.reader_routers_client.show_router(
router['id'])['router']
self.assertEqual(router_show['name'], updated_name)
@@ -107,7 +119,7 @@
self.assertIn('subnet_id', interface.keys())
self.assertIn('port_id', interface.keys())
# Verify router id is equal to device id in port details
- show_port_body = self.ports_client.show_port(
+ show_port_body = self.reader_ports_client.show_port(
interface['port_id'])
self.assertEqual(show_port_body['port']['device_id'],
router['id'])
@@ -140,7 +152,7 @@
self.assertIn('subnet_id', interface.keys())
self.assertIn('port_id', interface.keys())
# Verify router id is equal to device id in port details
- show_port_body = self.ports_client.show_port(
+ show_port_body = self.reader_ports_client.show_port(
interface['port_id'])
self.assertEqual(show_port_body['port']['device_id'],
router['id'])
@@ -194,7 +206,7 @@
test_routes.sort(key=lambda x: x['destination'])
extra_route = self.routers_client.update_router(
router['id'], routes=test_routes)
- show_body = self.routers_client.show_router(router['id'])
+ show_body = self.reader_routers_client.show_router(router['id'])
# Assert the number of routes
self.assertEqual(routes_num, len(extra_route['router']['routes']))
self.assertEqual(routes_num, len(show_body['router']['routes']))
@@ -215,7 +227,7 @@
self.assertEqual(test_routes[i]['nexthop'], routes[i]['nexthop'])
self._delete_extra_routes(router['id'])
- show_body_after_deletion = self.routers_client.show_router(
+ show_body_after_deletion = self.reader_routers_client.show_router(
router['id'])
self.assertEmpty(show_body_after_deletion['router']['routes'])
@@ -232,7 +244,7 @@
update_body = self.routers_client.update_router(router['id'],
admin_state_up=True)
self.assertTrue(update_body['router']['admin_state_up'])
- show_body = self.routers_client.show_router(router['id'])
+ show_body = self.reader_routers_client.show_router(router['id'])
self.assertTrue(show_body['router']['admin_state_up'])
@decorators.attr(type='smoke')
@@ -288,7 +300,7 @@
subnet['id'])
self.assertIn('port_id', interface)
self.assertIn('subnet_id', interface)
- port = self.ports_client.show_port(interface['port_id'])
+ port = self.reader_ports_client.show_port(interface['port_id'])
self.assertEqual(port['port']['id'], interface['port_id'])
router_port = self.ports_client.update_port(port['port']['id'],
fixed_ips=fixed_ip)
@@ -296,7 +308,7 @@
router_port['port']['fixed_ips'][0]['subnet_id'])
def _verify_router_interface(self, router_id, subnet_id, port_id):
- show_port_body = self.ports_client.show_port(port_id)
+ show_port_body = self.reader_ports_client.show_port(port_id)
interface_port = show_port_body['port']
self.assertEqual(router_id, interface_port['device_id'])
self.assertEqual(subnet_id,
diff --git a/tempest/api/network/test_routers_negative.py b/tempest/api/network/test_routers_negative.py
index 299e0e9..5b06016 100644
--- a/tempest/api/network/test_routers_negative.py
+++ b/tempest/api/network/test_routers_negative.py
@@ -26,6 +26,16 @@
class RoutersNegativeTest(base.BaseNetworkTest):
"""Negative tests of routers"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(RoutersNegativeTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.routers_client
+ else:
+ cls.reader_client = cls.routers_client
+
@classmethod
def skip_checks(cls):
super(RoutersNegativeTest, cls).skip_checks()
@@ -105,7 +115,7 @@
"""Test showing non existent router"""
router = data_utils.rand_name(
name='non_exist_router', prefix=CONF.resource_name_prefix)
- self.assertRaises(lib_exc.NotFound, self.routers_client.show_router,
+ self.assertRaises(lib_exc.NotFound, self.reader_client.show_router,
router)
@decorators.attr(type=['negative'])
diff --git a/tempest/api/network/test_security_groups.py b/tempest/api/network/test_security_groups.py
index c7f6b8f..b60abac 100644
--- a/tempest/api/network/test_security_groups.py
+++ b/tempest/api/network/test_security_groups.py
@@ -26,6 +26,21 @@
class SecGroupTest(base.BaseSecGroupTest):
"""Test security groups"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(SecGroupTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_security_groups_client = (
+ cls.os_project_reader.security_groups_client)
+ cls.reader_security_group_rules_client = (
+ cls.os_project_reader.security_group_rules_client)
+ else:
+ cls.reader_security_groups_client = cls.security_groups_client
+ cls.reader_security_group_rules_client = (
+ cls.security_group_rules_client)
+
@classmethod
def skip_checks(cls):
super(SecGroupTest, cls).skip_checks()
@@ -72,7 +87,7 @@
@decorators.idempotent_id('e30abd17-fef9-4739-8617-dc26da88e686')
def test_list_security_groups(self):
"""Verify that default security group exist"""
- body = self.security_groups_client.list_security_groups()
+ body = self.reader_security_groups_client.list_security_groups()
security_groups = body['security_groups']
found = None
for n in security_groups:
@@ -88,7 +103,7 @@
group_create_body, _ = self._create_security_group()
# List security groups and verify if created group is there in response
- list_body = self.security_groups_client.list_security_groups()
+ list_body = self.reader_security_groups_client.list_security_groups()
secgroup_list = list()
for secgroup in list_body['security_groups']:
secgroup_list.append(secgroup['id'])
@@ -106,7 +121,7 @@
self.assertEqual(update_body['security_group']['description'],
new_description)
# Show details of the updated security group
- show_body = self.security_groups_client.show_security_group(
+ show_body = self.reader_security_groups_client.show_security_group(
group_create_body['security_group']['id'])
self.assertEqual(show_body['security_group']['name'], new_name)
self.assertEqual(show_body['security_group']['description'],
@@ -136,7 +151,8 @@
# List rules and verify created rule is not in response
rule_list_body = (
- self.security_group_rules_client.list_security_group_rules())
+ self.reader_security_group_rules_client
+ .list_security_group_rules())
rule_list = [rule['id']
for rule in rule_list_body['security_group_rules']]
self.assertNotIn(rule_id, rule_list)
@@ -170,7 +186,8 @@
# List rules and verify created rule is in response
rule_list_body = (
- self.security_group_rules_client.list_security_group_rules())
+ self.reader_security_group_rules_client
+ .list_security_group_rules())
rule_list = [rule['id']
for rule in rule_list_body['security_group_rules']]
self.assertIn(rule_create_body['security_group_rule']['id'],
diff --git a/tempest/api/network/test_security_groups_negative.py b/tempest/api/network/test_security_groups_negative.py
index beaeb20..7f68f52 100644
--- a/tempest/api/network/test_security_groups_negative.py
+++ b/tempest/api/network/test_security_groups_negative.py
@@ -26,6 +26,21 @@
class NegativeSecGroupTest(base.BaseSecGroupTest):
"""Negative tests of security groups"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(NegativeSecGroupTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_security_groups_client = (
+ cls.os_project_reader.security_groups_client)
+ cls.reader_security_group_rules_client = (
+ cls.os_project_reader.security_group_rules_client)
+ else:
+ cls.reader_security_groups_client = cls.security_groups_client
+ cls.reader_security_group_rules_client = (
+ cls.security_group_rules_client)
+
@classmethod
def skip_checks(cls):
super(NegativeSecGroupTest, cls).skip_checks()
@@ -39,7 +54,8 @@
"""Test showing non existent security group"""
non_exist_id = data_utils.rand_uuid()
self.assertRaises(
- lib_exc.NotFound, self.security_groups_client.show_security_group,
+ lib_exc.NotFound,
+ self.reader_security_groups_client.show_security_group,
non_exist_id)
@decorators.attr(type=['negative'])
@@ -49,7 +65,7 @@
non_exist_id = data_utils.rand_uuid()
self.assertRaises(
lib_exc.NotFound,
- self.security_group_rules_client.show_security_group_rule,
+ self.reader_security_group_rules_client.show_security_group_rule,
non_exist_id)
@decorators.attr(type=['negative'])
diff --git a/tempest/api/network/test_service_providers.py b/tempest/api/network/test_service_providers.py
index e203a2c..6771392 100644
--- a/tempest/api/network/test_service_providers.py
+++ b/tempest/api/network/test_service_providers.py
@@ -12,12 +12,25 @@
from tempest.api.network import base
from tempest.common import utils
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class ServiceProvidersTest(base.BaseNetworkTest):
"""Test network service providers"""
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServiceProvidersTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.service_providers_client
+ else:
+ cls.reader_client = cls.service_providers_client
+
@classmethod
def skip_checks(cls):
super(ServiceProvidersTest, cls).skip_checks()
@@ -28,6 +41,6 @@
@decorators.idempotent_id('2cbbeea9-f010-40f6-8df5-4eaa0c918ea6')
def test_service_providers_list(self):
"""Test listing network service providers"""
- body = self.service_providers_client.list_service_providers()
+ body = self.reader_client.list_service_providers()
self.assertIn('service_providers', body)
self.assertIsInstance(body['service_providers'], list)
diff --git a/tempest/api/network/test_subnetpools_extensions.py b/tempest/api/network/test_subnetpools_extensions.py
index 689844b..bd20358 100644
--- a/tempest/api/network/test_subnetpools_extensions.py
+++ b/tempest/api/network/test_subnetpools_extensions.py
@@ -39,6 +39,8 @@
"""
+ credentials = ['primary', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(SubnetPoolsTestJSON, cls).skip_checks()
@@ -46,6 +48,14 @@
msg = "subnet_allocation extension not enabled."
raise cls.skipException(msg)
+ @classmethod
+ def setup_clients(cls):
+ super(SubnetPoolsTestJSON, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.subnetpools_client
+ else:
+ cls.reader_client = cls.subnetpools_client
+
@decorators.attr(type='smoke')
@decorators.idempotent_id('62595970-ab1c-4b7f-8fcc-fddfe55e9811')
def test_create_list_show_update_delete_subnetpools(self):
@@ -62,7 +72,7 @@
subnetpool_id)
self.assertEqual(subnetpool_name, body["subnetpool"]["name"])
# get detail about subnet pool
- body = self.subnetpools_client.show_subnetpool(subnetpool_id)
+ body = self.reader_client.show_subnetpool(subnetpool_id)
self.assertEqual(subnetpool_name, body["subnetpool"]["name"])
# update the subnet pool
subnetpool_name = data_utils.rand_name(
@@ -73,5 +83,5 @@
# delete subnet pool
body = self.subnetpools_client.delete_subnetpool(subnetpool_id)
self.assertRaises(lib_exc.NotFound,
- self.subnetpools_client.show_subnetpool,
+ self.reader_client.show_subnetpool,
subnetpool_id)
diff --git a/tempest/api/network/test_tags.py b/tempest/api/network/test_tags.py
index a0c6342..527b745 100644
--- a/tempest/api/network/test_tags.py
+++ b/tempest/api/network/test_tags.py
@@ -37,6 +37,8 @@
tags on their networks. The extension supports networks only.
"""
+ credentials = ['primary', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(TagsTest, cls).skip_checks()
@@ -45,6 +47,14 @@
raise cls.skipException(msg)
@classmethod
+ def setup_clients(cls):
+ super(TagsTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.tags_client
+ else:
+ cls.reader_client = cls.tags_client
+
+ @classmethod
def resource_setup(cls):
super(TagsTest, cls).resource_setup()
cls.network = cls.create_network()
@@ -61,7 +71,7 @@
tag_name)
# Validate that listing tags on a network resource works.
- retrieved_tags = self.tags_client.list_tags(
+ retrieved_tags = self.reader_client.list_tags(
'networks', self.network['id'])['tags']
self.assertEqual([tag_name], retrieved_tags)
@@ -115,6 +125,8 @@
# the singular case for the corresponding class resource object.
SUPPORTED_RESOURCES = ['subnets', 'ports', 'routers', 'subnetpools']
+ credentials = ['primary', 'project_reader']
+
@classmethod
def skip_checks(cls):
super(TagsExtTest, cls).skip_checks()
@@ -127,6 +139,14 @@
raise cls.skipException(msg)
@classmethod
+ def setup_clients(cls):
+ super(TagsExtTest, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.tags_client
+ else:
+ cls.reader_client = cls.tags_client
+
+ @classmethod
def resource_setup(cls):
super(TagsExtTest, cls).resource_setup()
cls.network = cls.create_network()
@@ -169,7 +189,7 @@
for i, resource in enumerate(self.SUPPORTED_RESOURCES):
# Ensure that a tag was created for each resource.
resource_object = getattr(self, resource[:-1])
- retrieved_tags = self.tags_client.list_tags(
+ retrieved_tags = self.reader_client.list_tags(
resource, resource_object['id'])['tags']
self.assertEqual(1, len(retrieved_tags))
self.assertEqual(tag_names[i], retrieved_tags[0])
@@ -181,7 +201,7 @@
# Delete the tag and ensure it was deleted.
self.tags_client.delete_tag(
resource, resource_object['id'], tag_names[i])
- retrieved_tags = self.tags_client.list_tags(
+ retrieved_tags = self.reader_client.list_tags(
resource, resource_object['id'])['tags']
self.assertEmpty(retrieved_tags)
diff --git a/tempest/api/network/test_versions.py b/tempest/api/network/test_versions.py
index 020cb5c..84add7a 100644
--- a/tempest/api/network/test_versions.py
+++ b/tempest/api/network/test_versions.py
@@ -13,10 +13,24 @@
# under the License.
from tempest.api.network import base
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class NetworksApiDiscovery(base.BaseNetworkTest):
+
+ credentials = ['primary', 'project_reader']
+
+ @classmethod
+ def setup_clients(cls):
+ super(NetworksApiDiscovery, cls).setup_clients()
+ if CONF.enforce_scope.neutron:
+ cls.reader_client = cls.os_project_reader.network_versions_client
+ else:
+ cls.reader_client = cls.network_versions_client
+
@decorators.attr(type='smoke')
@decorators.idempotent_id('cac8a836-c2e0-4304-b556-cd299c7281d1')
def test_api_version_resources(self):
@@ -28,7 +42,7 @@
schema.
"""
- result = self.network_versions_client.list_versions()
+ result = self.reader_client.list_versions()
expected_versions = ('v2.0',)
expected_resources = ('id', 'links', 'status')
received_list = result.values()
@@ -45,7 +59,7 @@
"""Test that GET /v2.0/ returns expected resources."""
current_version = 'v2.0'
expected_resources = ('subnet', 'network', 'port')
- result = self.network_versions_client.show_version(current_version)
+ result = self.reader_client.show_version(current_version)
actual_resources = [r['name'] for r in result['resources']]
for resource in expected_resources:
self.assertIn(resource, actual_resources)
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index 8110915..00cd347 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -13,12 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
+import hashlib
import random
import re
import time
import zlib
-from oslo_utils.secretutils import md5
from tempest.api.object_storage import base
from tempest.common import custom_matchers
from tempest import config
@@ -158,7 +158,7 @@
object_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='TestObject')
data = data_utils.random_bytes()
- create_md5 = md5(data, usedforsecurity=False).hexdigest()
+ create_md5 = hashlib.md5(data, usedforsecurity=False).hexdigest()
metadata = {'Etag': create_md5}
resp, _ = self.object_client.create_object(
self.container_name,
@@ -661,7 +661,7 @@
object_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='TestObject')
data = data_utils.random_bytes(10)
- create_md5 = md5(data, usedforsecurity=False).hexdigest()
+ create_md5 = hashlib.md5(data, usedforsecurity=False).hexdigest()
create_metadata = {'Etag': create_md5}
self.object_client.create_object(self.container_name,
object_name,
@@ -703,7 +703,7 @@
object_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='TestObject')
data = data_utils.random_bytes()
- create_md5 = md5(data, usedforsecurity=False).hexdigest()
+ create_md5 = hashlib.md5(data, usedforsecurity=False).hexdigest()
create_metadata = {'Etag': create_md5}
self.object_client.create_object(self.container_name,
object_name,
@@ -711,7 +711,7 @@
metadata=create_metadata)
list_data = data_utils.random_bytes()
- list_md5 = md5(list_data, usedforsecurity=False).hexdigest()
+ list_md5 = hashlib.md5(list_data, usedforsecurity=False).hexdigest()
list_metadata = {'If-None-Match': list_md5}
resp, body = self.object_client.get_object(
self.container_name,
@@ -1011,7 +1011,7 @@
"""
object_name, data = self.create_object(self.container_name)
# local copy is identical, no download
- object_md5 = md5(data, usedforsecurity=False).hexdigest()
+ object_md5 = hashlib.md5(data, usedforsecurity=False).hexdigest()
headers = {'If-None-Match': object_md5}
url = "%s/%s" % (self.container_name, object_name)
resp, _ = self.object_client.get(url, headers=headers)
@@ -1026,7 +1026,8 @@
# local copy is different, download
local_data = "something different"
- other_md5 = md5(local_data.encode(), usedforsecurity=False).hexdigest()
+ other_md5 = hashlib.md5(
+ local_data.encode(), usedforsecurity=False).hexdigest()
headers = {'If-None-Match': other_md5}
resp, _ = self.object_client.get(url, headers=headers)
self.assertHeaders(resp, 'Object', 'GET')
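
The switch from ``oslo_utils.secretutils.md5`` to ``hashlib.md5`` above works because Python 3.9+ accepts the ``usedforsecurity`` keyword natively; a minimal standalone check of the Etag computation pattern used by these tests:

```python
import hashlib

# Compute an Etag the way the object tests do; usedforsecurity=False marks
# the digest as non-cryptographic (FIPS-friendly), supported natively since
# Python 3.9, which is why the oslo wrapper is no longer needed.
data = b"some object payload"
etag = hashlib.md5(data, usedforsecurity=False).hexdigest()
print(len(etag))  # 32 hex characters
```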
diff --git a/tempest/api/object_storage/test_object_slo.py b/tempest/api/object_storage/test_object_slo.py
index b8e0b55..0cf493b 100644
--- a/tempest/api/object_storage/test_object_slo.py
+++ b/tempest/api/object_storage/test_object_slo.py
@@ -12,9 +12,10 @@
# License for the specific language governing permissions and limitations
# under the License.
+import hashlib
+
from oslo_serialization import jsonutils as json
-from oslo_utils.secretutils import md5
from tempest.api.object_storage import base
from tempest.common import utils
from tempest import config
@@ -66,14 +67,20 @@
object_name_base_1)
path_object_2 = '/%s/%s' % (self.container_name,
object_name_base_2)
- data_manifest = [{'path': path_object_1,
- 'etag': md5(self.content,
- usedforsecurity=False).hexdigest(),
- 'size_bytes': data_size},
- {'path': path_object_2,
- 'etag': md5(self.content,
- usedforsecurity=False).hexdigest(),
- 'size_bytes': data_size}]
+ data_manifest = [
+ {
+ 'path': path_object_1,
+ 'etag': hashlib.md5(
+ self.content, usedforsecurity=False).hexdigest(),
+ 'size_bytes': data_size
+ },
+ {
+ 'path': path_object_2,
+ 'etag': hashlib.md5(
+ self.content, usedforsecurity=False).hexdigest(),
+ 'size_bytes': data_size
+ }
+ ]
return json.dumps(data_manifest)
diff --git a/tempest/api/volume/admin/test_volume_retype.py b/tempest/api/volume/admin/test_volume_retype.py
index 4a3f494..e19038e 100644
--- a/tempest/api/volume/admin/test_volume_retype.py
+++ b/tempest/api/volume/admin/test_volume_retype.py
@@ -204,3 +204,50 @@
# Retype the volume from snapshot
self._retype_volume(src_vol, migration_policy='never')
+
+
+class VolumeRetypeMultiattachTest(VolumeRetypeTest):
+ """Test volume retype with/without multiattach"""
+
+ volume_min_microversion = '3.50'
+ volume_max_microversion = 'latest'
+
+ @classmethod
+ def skip_checks(cls):
+ super(VolumeRetypeMultiattachTest, cls).skip_checks()
+ if not CONF.compute_feature_enabled.volume_multiattach:
+ raise cls.skipException('Volume multi-attach is not available.')
+
+ @classmethod
+ def resource_setup(cls):
+ super(VolumeRetypeMultiattachTest, cls).resource_setup()
+ extra_specs = {"multiattach": '<is> True'}
+ cls.src_vol_type = cls.create_volume_type()
+ cls.dst_vol_type = cls.create_volume_type(extra_specs=extra_specs)
+
+ def _verify_migration(self, source_vol, dest_vol):
+ self.assertEqual(dest_vol['status'], "available")
+ self.assertEqual(dest_vol['volume_type'], self.dst_vol_type['name'])
+ if "multiattach" in self.dst_vol_type['extra_specs'].keys():
+ self.assertEqual(dest_vol['multiattach'], True)
+ else:
+ self.assertEqual(dest_vol['multiattach'], False)
+
+ @decorators.idempotent_id('c0521465-ed82-4d03-961d-a68d673a5051')
+ def test_volume_retype_multiattach(self):
+ """Test volume retype with/without multiattach
+
+ 1. Create dst_vol_type with "multiattach = '<is> True'"
+ 2. Create src_vol_type without the "multiattach" property
+ 3. Retype volume from src_vol_type (non-multiattach)
+ to dst_vol_type (multiattach) and vice versa
+ 4. Verify successful retype.
+ """
+ # Retype from non-multiattach to multiattach
+ vol = self.create_volume(volume_type=self.src_vol_type['name'])
+ self._retype_volume(vol, migration_policy='never')
+
+ self.dst_vol_type = self.src_vol_type
+
+ # Retype from multiattach to non-multiattach
+ self._retype_volume(vol, migration_policy='never')
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 8b2bc69..6261ddc 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -126,13 +126,6 @@
image_id)
waiters.wait_for_image_status(self.images_client, image_id,
'active')
- # This is required for the optimized upload volume path.
- # New location APIs are async so we need to wait for the location
- # import task to complete.
- # This should work with old location API since we don't fail if
- # there are no tasks for the image
- waiters.wait_for_image_tasks_status(self.images_client,
- image_id, 'success')
waiters.wait_for_volume_resource_status(self.volumes_client,
self.volume['id'],
'available')
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 35afffd..4d24372 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -40,6 +40,21 @@
super(VolumesSnapshotTestJSON, cls).resource_setup()
cls.volume_origin = cls.create_volume()
+ def setUp(self):
+ super(VolumesSnapshotTestJSON, self).setUp()
+ # Check volume and make sure it is in available state before the next
+ # test uses it.
+ try:
+ vol = self.volumes_client.show_volume(
+ self.volume_origin['id'])['volume']
+ if vol['status'] != 'available':
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client,
+ self.volume_origin['id'],
+ 'available')
+ except (lib_exc.NotFound, lib_exc.TimeoutException):
+ self.volume_origin = self.create_volume()
+
@decorators.idempotent_id('8567b54c-4455-446d-a1cf-651ddeaa3ff2')
@utils.services('compute')
def test_snapshot_create_delete_with_volume_in_use(self):
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index b885b83..268a463 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -19,9 +19,11 @@
import struct
import textwrap
from urllib import parse as urlparse
+import urllib3
from oslo_log import log as logging
from oslo_utils import excutils
+import testtools
from tempest.common.utils.linux import remote_client
from tempest.common import waiters
@@ -548,3 +550,122 @@
self.cached_stream = self.response[end_loc + 4:]
# ensure response ends with '\r\n\r\n'.
self.response = self.response[:end_loc + 4]
+
+
+class NoVNCValidateMixin(testtools.TestCase):
+ """Mixin methods to validate a novnc connection."""
+
+ def validate_novnc_html(self, vnc_url):
+ """Verify we can connect to novnc and get back the javascript."""
+
+ resp = urllib3.PoolManager().request('GET', vnc_url)
+ # Make sure that the GET request was accepted by the novncproxy
+ self.assertEqual(resp.status, 200, 'Got a Bad HTTP Response on the '
+ 'initial call: ' + str(resp.status))
+ # Do some basic validation to make sure it is an expected HTML document
+ resp_data = resp.data.decode()
+ # This is needed in the case of example: <html lang="en">
+ self.assertRegex(resp_data, '<html.*>',
+ 'Not a valid html document in the response.')
+ self.assertIn('</html>', resp_data,
+ 'Not a valid html document in the response.')
+ # Just try to make sure we got JavaScript back for noVNC, since we
+ # won't actually use it since not inside of a browser
+ self.assertIn('noVNC', resp_data,
+ 'Not a valid noVNC javascript html document.')
+ self.assertIn('<script', resp_data,
+ 'Not a valid noVNC javascript html document.')
+
+ def validate_rfb_negotiation(self):
+ """Verify we can connect to novnc and do the websocket connection."""
+ self.assertIsNotNone(self.websocket)
+ # Turn the Socket into a WebSocket to do the communication
+ data = self.websocket.receive_frame()
+ self.assertFalse(data is None or not data,
+ 'Token must be invalid because the connection '
+ 'closed.')
+ # Parse the RFB version from the data to make sure it is valid
+ # and belong to the known supported RFB versions.
+ version = float("%d.%d" % (int(data[4:7], base=10),
+ int(data[8:11], base=10)))
+ # Add the max RFB versions supported
+ supported_versions = [3.3, 3.8]
+ self.assertIn(version, supported_versions,
+ 'Bad RFB Version: ' + str(version))
+ # Send our RFB version to the server
+ self.websocket.send_frame(data)
+ # Get the server authentication type and make sure None is supported
+ data = self.websocket.receive_frame()
+ self.assertIsNotNone(data, 'Expected authentication type None.')
+ data_length = len(data)
+ if version == 3.3:
+ # For RFB 3.3: in the security handshake, rather than a two-way
+ # negotiation, the server decides the security type and sends a
+ # single word (4 bytes).
+ self.assertEqual(
+ data_length, 4, 'Expected authentication type None.')
+ self.assertIn(1, [int(data[i]) for i in (0, 3)],
+ 'Expected authentication type None.')
+ else:
+ self.assertGreaterEqual(
+ len(data), 2, 'Expected authentication type None.')
+ self.assertIn(
+ 1,
+ [int(data[i + 1]) for i in range(int(data[0]))],
+ 'Expected authentication type None.')
+ # Send to the server that we only support authentication
+ # type None
+ self.websocket.send_frame(bytes((1,)))
+
+ # The server should send 4 bytes of 0's if security
+ # handshake succeeded
+ data = self.websocket.receive_frame()
+ self.assertEqual(
+ len(data), 4,
+ 'Server did not think security was successful.')
+ self.assertEqual(
+ [int(i) for i in data], [0, 0, 0, 0],
+ 'Server did not think security was successful.')
+
+ # Say to leave the desktop as shared as part of client initialization
+ self.websocket.send_frame(bytes((1,)))
+ # Get the server initialization packet back and make sure it is the
+ # right structure where bytes 20-24 is the name length and
+ # 24-N is the name
+ data = self.websocket.receive_frame()
+ data_length = len(data) if data is not None else 0
+ self.assertFalse(data_length <= 24 or
+ data_length != (struct.unpack(">L",
+ data[20:24])[0] + 24),
+ 'Server initialization was not the right format.')
+ # Since the rest of the data on the screen is arbitrary, we will
+ # close the socket and end our validation of the data at this point
+ # Assert that the latest check was false, meaning that the server
+ # initialization was the right format
+ self.assertFalse(data_length <= 24 or
+ data_length != (struct.unpack(">L",
+ data[20:24])[0] + 24))
+
+ def validate_websocket_upgrade(self):
+ """Verify that the websocket upgrade was successful.
+
+ Parses response and ensures that required response
+ fields are present and accurate.
+ (https://tools.ietf.org/html/rfc7231#section-6.2.2)
+ """
+
+ self.assertIsNotNone(self.websocket)
+ self.assertTrue(
+ self.websocket.response.startswith(b'HTTP/1.1 101 Switching '
+ b'Protocols'),
+ 'Incorrect HTTP return status code: {}'.format(
+ str(self.websocket.response)
+ )
+ )
+ _required_header = 'upgrade: websocket'
+ _response = str(self.websocket.response).lower()
+ self.assertIn(
+ _required_header,
+ _response,
+ 'Did not get the expected WebSocket HTTP Response.'
+ )
diff --git a/tempest/common/concurrency.py b/tempest/common/concurrency.py
new file mode 100644
index 0000000..bcff6c5
--- /dev/null
+++ b/tempest/common/concurrency.py
@@ -0,0 +1,61 @@
+# Copyright 2025 Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import multiprocessing
+
+
+def run_concurrent_tasks(target, resource_count, **kwargs):
+ """Run a target function concurrently using multiprocessing.
+
+ :param target: Function to execute concurrently. Must accept
+ (index, resource_ids, **kwargs) as parameters.
+ :param resource_count: Number of concurrent processes to spawn.
+ :param kwargs: Additional keyword arguments passed to the target function.
+ :return: List of results collected from all processes.
+ :raises RuntimeError: If any worker process fails during execution.
+ """
+ manager = multiprocessing.Manager()
+ resource_ids = manager.list()
+ errors = manager.list() # Capture exceptions from workers
+
+ def wrapped_target(index, resource_ids, **kwargs):
+ try:
+ target(index, resource_ids, **kwargs)
+ except Exception as exc:
+ errors.append(f"Worker {index} failed: {exc}")
+
+ processes = []
+ for i in range(resource_count):
+ p = multiprocessing.Process(
+ target=wrapped_target,
+ args=(i, resource_ids),
+ kwargs=kwargs
+ )
+ processes.append(p)
+
+ # Start all processes
+ for p in processes:
+ p.start()
+
+ # Wait for all processes to finish
+ for p in processes:
+ p.join()
+
+ if errors:
+ raise RuntimeError(
+ "One or more concurrent tasks failed:\n" + "\n".join(errors)
+ )
+
+ return list(resource_ids)
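
A minimal usage sketch of the pattern ``run_concurrent_tasks`` implements, simplified for illustration: the error-capturing wrapper is omitted, and ``make_volume`` is a hypothetical worker, not a real tempest helper.

```python
import multiprocessing


def make_volume(index, resource_ids, prefix="vol"):
    # Hypothetical worker: record a fake resource id in the shared list.
    resource_ids.append(f"{prefix}-{index}")


def run_tasks(target, resource_count, **kwargs):
    # Same shape as run_concurrent_tasks, minus error capture: a
    # Manager-backed list is shared across the worker processes.
    manager = multiprocessing.Manager()
    resource_ids = manager.list()
    processes = [
        multiprocessing.Process(target=target, args=(i, resource_ids),
                                kwargs=kwargs)
        for i in range(resource_count)
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    return list(resource_ids)


if __name__ == "__main__":
    ids = run_tasks(make_volume, resource_count=3, prefix="vol")
    print(sorted(ids))  # completion order varies; the set of ids does not
```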
diff --git a/tempest/common/image.py b/tempest/common/image.py
index 3618f7e..b8f76fb 100644
--- a/tempest/common/image.py
+++ b/tempest/common/image.py
@@ -14,6 +14,10 @@
# under the License.
import copy
+from http import server
+import random
+import threading
+import time
def get_image_meta_from_headers(resp):
@@ -63,3 +67,57 @@
headers['x-image-meta-%s' % key] = str(value)
return headers
+
+
+class RandomDataHandler(server.BaseHTTPRequestHandler):
+ def do_GET(self):
+ self.send_response(200)
+ self.send_header('Content-Type', 'application/octet-stream')
+ self.end_headers()
+
+ start_time = time.time()
+ chunk_size = 64 * 1024 # 64 KiB per chunk
+ while time.time() - start_time < 60:
+ data = bytes(random.getrandbits(8) for _ in range(chunk_size))
+ try:
+ self.wfile.write(data)
+ self.wfile.flush()
+ # simulate slow transfer
+ time.sleep(0.2)
+ except BrokenPipeError:
+ # Client disconnected; stop sending data
+ break
+
+ def do_HEAD(self):
+ # same size as in do_GET: 300 chunks * 64 KiB = 19,660,800 bytes (~18.75 MiB)
+ size = 300 * 65536
+ self.send_response(200)
+ self.send_header('Content-Type', 'application/octet-stream')
+ self.send_header('Content-Length', str(size))
+ self.end_headers()
+
+
+class RandomDataServer(object):
+ def __init__(self, handler_class=RandomDataHandler):
+ self.handler_class = handler_class
+ self.server = None
+ self.thread = None
+ self.port = None
+
+ def start(self):
+ # Bind to port 0 for an unused port
+ self.server = server.HTTPServer(('localhost', 0), self.handler_class)
+ self.port = self.server.server_address[1]
+
+ # Run server in background thread
+ self.thread = threading.Thread(target=self.server.serve_forever)
+ self.thread.daemon = True
+ self.thread.start()
+
+ def stop(self):
+ if self.server:
+ self.server.shutdown()
+ self.server.server_close()
+ self.thread.join()
+ self.server = None
+ self.thread = None
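
The bind-to-port-0 / daemon-thread pattern used by ``RandomDataServer`` can be exercised in isolation; ``HelloHandler`` below is a trivial stand-in for ``RandomDataHandler``:

```python
import threading
import urllib.request
from http import server


class HelloHandler(server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging


# Binding to port 0 asks the kernel for any free port.
srv = server.HTTPServer(("localhost", 0), HelloHandler)
port = srv.server_address[1]
thread = threading.Thread(target=srv.serve_forever, daemon=True)
thread.start()

with urllib.request.urlopen(f"http://localhost:{port}/") as resp:
    body = resp.read()
print(body)  # b'hello'

srv.shutdown()
srv.server_close()
thread.join()
```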
diff --git a/tempest/config.py b/tempest/config.py
index 9c288ff..f0ed2ab 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -219,7 +219,13 @@
"identity-feature-enabled.security_compliance is set to "
"'True'. For more details, refer to keystone config "
"options "
- "keystone.conf:security_compliance.minimum_password_age.")
+ "keystone.conf:security_compliance.minimum_password_age."),
+ cfg.BoolOpt('use_system_token',
+ default=False,
+ help="Keystone supports both system and project scoped "
+ "tokens. This config option tells tempest to use a "
+ "system scoped token for the keystone identity "
+ "tests.")
]
service_clients_group = cfg.OptGroup(name='service-clients',
@@ -627,6 +633,14 @@
cfg.BoolOpt('unified_limits',
default=False,
help='Does the test environment support unified limits?'),
+ cfg.ListOpt('nova_policy_roles',
+ default=['admin', 'member', 'reader'],
+ help='List of roles used as defaults in the Nova API '
+ 'policy rules. Tempest uses this option to run the '
+ 'tests with the roles available in the Nova release '
+ 'under test. For example, if the manager role is not '
+ 'present in that release, tempest will fall back to '
+ 'the old default roles to call the nova APIs'),
]
@@ -688,6 +702,11 @@
'vdi', 'iso', 'vhdx'],
help="A list of image's disk formats "
"users can specify."),
+ cfg.StrOpt('hashing_algorithm',
+ default='sha512',
+ help=('Hashing algorithm used by glance to calculate image '
+ 'hashes. This value should be the same as the '
+ 'glance-api.conf: hashing_algorithm config option.')),
cfg.StrOpt('images_manifest_file',
default=None,
help="A path to a manifest.yml generated using the "
@@ -732,6 +751,17 @@
help=('Indicates that image format is enforced by glance, '
'such that we should not expect to be able to upload '
'bad images for testing other services.')),
+ cfg.BoolOpt('do_secure_hash',
+ default=True,
+ help=('Whether do_secure_hash is enabled in glance. '
+ 'This value should be the same as the '
+ 'glance-api.conf: do_secure_hash config option.')),
+ cfg.BoolOpt('http_store_enabled',
+ default=False,
+ help=('Whether the http store is enabled in glance. '
+ 'The http store must be listed either in the '
+ 'glance-api.conf: stores option or in the '
+ 'enabled_backends configuration option.')),
]
network_group = cfg.OptGroup(name='network',
@@ -1030,9 +1060,11 @@
help='Disk format to use when copying a volume to image'),
cfg.IntOpt('volume_size',
default=1,
+ min=1,
help='Default size in GB for volumes created by volumes tests'),
cfg.IntOpt('volume_size_extend',
default=1,
+ min=1,
help="Size in GB a volume is extended by - if a test "
"extends a volume, the size of the new volume will be "
"volume_size + volume_size_extend."),
diff --git a/tempest/lib/api_schema/response/compute/v2_1/servers.py b/tempest/lib/api_schema/response/compute/v2_1/servers.py
index 14e2d3b..d86ac6a 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/servers.py
@@ -520,3 +520,49 @@
'type': 'object'
}
}
+
+list_live_migrations = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'migrations': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'integer'},
+ 'status': {'type': ['string', 'null']},
+ 'server_uuid': {'type': ['string', 'null']},
+ 'source_node': {'type': ['string', 'null']},
+ 'source_compute': {'type': ['string', 'null']},
+ 'dest_node': {'type': ['string', 'null']},
+ 'dest_compute': {'type': ['string', 'null']},
+ 'dest_host': {'type': ['string', 'null']},
+ 'disk_processed_bytes': {'type': ['integer', 'null']},
+ 'disk_remaining_bytes': {'type': ['integer', 'null']},
+ 'disk_total_bytes': {'type': ['integer', 'null']},
+ 'memory_processed_bytes': {
+ 'type': ['integer', 'null']},
+ 'memory_remaining_bytes': {
+ 'type': ['integer', 'null']},
+ 'memory_total_bytes': {'type': ['integer', 'null']},
+ 'created_at': parameter_types.date_time,
+ 'updated_at': parameter_types.date_time_or_null
+ },
+ 'additionalProperties': False,
+ 'required': [
+ 'id', 'status', 'server_uuid', 'source_node',
+ 'source_compute', 'dest_node', 'dest_compute',
+ 'dest_host', 'disk_processed_bytes',
+ 'disk_remaining_bytes', 'disk_total_bytes',
+ 'memory_processed_bytes', 'memory_remaining_bytes',
+ 'memory_total_bytes', 'created_at', 'updated_at'
+ ]
+ }
+ }
+ },
+ 'additionalProperties': False,
+ 'required': ['migrations']
+ }
+}
diff --git a/tempest/lib/api_schema/response/compute/v2_100/servers.py b/tempest/lib/api_schema/response/compute/v2_100/servers.py
index 8721387..8a2c15d 100644
--- a/tempest/lib/api_schema/response/compute/v2_100/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_100/servers.py
@@ -127,3 +127,4 @@
get_remote_consoles = copy.deepcopy(servers299.get_remote_consoles)
show_instance_action = copy.deepcopy(servers299.show_instance_action)
create_backup = copy.deepcopy(servers299.create_backup)
+list_live_migrations = copy.deepcopy(servers299.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_16/servers.py b/tempest/lib/api_schema/response/compute/v2_16/servers.py
index 2b3ce38..f09ea7f 100644
--- a/tempest/lib/api_schema/response/compute/v2_16/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_16/servers.py
@@ -173,3 +173,4 @@
list_volume_attachments = copy.deepcopy(servers.list_volume_attachments)
show_instance_action = copy.deepcopy(servers.show_instance_action)
create_backup = copy.deepcopy(servers.create_backup)
+list_live_migrations = copy.deepcopy(servers.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_19/servers.py b/tempest/lib/api_schema/response/compute/v2_19/servers.py
index ba3d787..5cb5bf3 100644
--- a/tempest/lib/api_schema/response/compute/v2_19/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_19/servers.py
@@ -63,3 +63,4 @@
list_volume_attachments = copy.deepcopy(serversv216.list_volume_attachments)
show_instance_action = copy.deepcopy(serversv216.show_instance_action)
create_backup = copy.deepcopy(serversv216.create_backup)
+list_live_migrations = copy.deepcopy(serversv216.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_26/servers.py b/tempest/lib/api_schema/response/compute/v2_26/servers.py
index 123eb72..4ce7f90 100644
--- a/tempest/lib/api_schema/response/compute/v2_26/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_26/servers.py
@@ -106,3 +106,4 @@
list_volume_attachments = copy.deepcopy(servers219.list_volume_attachments)
show_instance_action = copy.deepcopy(servers219.show_instance_action)
create_backup = copy.deepcopy(servers219.create_backup)
+list_live_migrations = copy.deepcopy(servers219.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_3/servers.py b/tempest/lib/api_schema/response/compute/v2_3/servers.py
index d19f1ad..c7e0147 100644
--- a/tempest/lib/api_schema/response/compute/v2_3/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_3/servers.py
@@ -178,3 +178,4 @@
list_volume_attachments = copy.deepcopy(servers.list_volume_attachments)
show_instance_action = copy.deepcopy(servers.show_instance_action)
create_backup = copy.deepcopy(servers.create_backup)
+list_live_migrations = copy.deepcopy(servers.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_45/servers.py b/tempest/lib/api_schema/response/compute/v2_45/servers.py
index cb0fc13..0746465 100644
--- a/tempest/lib/api_schema/response/compute/v2_45/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_45/servers.py
@@ -47,3 +47,4 @@
attach_volume = copy.deepcopy(servers226.attach_volume)
show_volume_attachment = copy.deepcopy(servers226.show_volume_attachment)
list_volume_attachments = copy.deepcopy(servers226.list_volume_attachments)
+list_live_migrations = copy.deepcopy(servers226.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_47/servers.py b/tempest/lib/api_schema/response/compute/v2_47/servers.py
index 1399c2d..d24cc25 100644
--- a/tempest/lib/api_schema/response/compute/v2_47/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_47/servers.py
@@ -72,3 +72,4 @@
list_volume_attachments = copy.deepcopy(servers245.list_volume_attachments)
show_instance_action = copy.deepcopy(servers226.show_instance_action)
create_backup = copy.deepcopy(servers245.create_backup)
+list_live_migrations = copy.deepcopy(servers245.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_48/servers.py b/tempest/lib/api_schema/response/compute/v2_48/servers.py
index 5b53906..a500155 100644
--- a/tempest/lib/api_schema/response/compute/v2_48/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_48/servers.py
@@ -134,3 +134,4 @@
list_volume_attachments = copy.deepcopy(servers247.list_volume_attachments)
show_instance_action = copy.deepcopy(servers247.show_instance_action)
create_backup = copy.deepcopy(servers247.create_backup)
+list_live_migrations = copy.deepcopy(servers247.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_51/servers.py b/tempest/lib/api_schema/response/compute/v2_51/servers.py
index 50d6aaa..27e5f45 100644
--- a/tempest/lib/api_schema/response/compute/v2_51/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_51/servers.py
@@ -41,3 +41,4 @@
show_volume_attachment = copy.deepcopy(servers248.show_volume_attachment)
list_volume_attachments = copy.deepcopy(servers248.list_volume_attachments)
create_backup = copy.deepcopy(servers248.create_backup)
+list_live_migrations = copy.deepcopy(servers248.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_54/servers.py b/tempest/lib/api_schema/response/compute/v2_54/servers.py
index 9de3016..bef1e7f 100644
--- a/tempest/lib/api_schema/response/compute/v2_54/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_54/servers.py
@@ -60,3 +60,4 @@
list_volume_attachments = copy.deepcopy(servers251.list_volume_attachments)
show_instance_action = copy.deepcopy(servers251.show_instance_action)
create_backup = copy.deepcopy(servers251.create_backup)
+list_live_migrations = copy.deepcopy(servers251.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_57/servers.py b/tempest/lib/api_schema/response/compute/v2_57/servers.py
index ee91391..7bee542 100644
--- a/tempest/lib/api_schema/response/compute/v2_57/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_57/servers.py
@@ -64,3 +64,4 @@
list_volume_attachments = copy.deepcopy(servers254.list_volume_attachments)
show_instance_action = copy.deepcopy(servers254.show_instance_action)
create_backup = copy.deepcopy(servers254.create_backup)
+list_live_migrations = copy.deepcopy(servers254.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_58/servers.py b/tempest/lib/api_schema/response/compute/v2_58/servers.py
index 637b765..3e7be49 100644
--- a/tempest/lib/api_schema/response/compute/v2_58/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_58/servers.py
@@ -43,3 +43,4 @@
show_volume_attachment = copy.deepcopy(servers257.show_volume_attachment)
list_volume_attachments = copy.deepcopy(servers257.list_volume_attachments)
create_backup = copy.deepcopy(servers257.create_backup)
+list_live_migrations = copy.deepcopy(servers257.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_59/servers.py b/tempest/lib/api_schema/response/compute/v2_59/servers.py
new file mode 100644
index 0000000..a52c3f4
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_59/servers.py
@@ -0,0 +1,57 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_58 import servers as servers258
+
+###########################################################################
+#
+# 2.59:
+#
+# The uuid value is now returned in the response body in addition to the
+# migration id for the following API responses:
+#
+# - GET /os-migrations
+# - GET /servers/{server_id}/migrations/{migration_id}
+# - GET /servers/{server_id}/migrations
+#
+###########################################################################
+
+list_live_migrations = copy.deepcopy(servers258.list_live_migrations)
+list_live_migrations['response_body']['properties']['migrations']['items'][
+ 'properties'].update({'uuid': {'type': 'string', 'format': 'uuid'}})
+list_live_migrations['response_body']['properties']['migrations']['items'][
+ 'required'].append('uuid')
+
+# Below are the unchanged schema in this microversion. We need
+# to keep this schema in this file to have the generic way to select the
+# right schema based on self.schema_versions_info mapping in service client.
+list_servers = copy.deepcopy(servers258.list_servers)
+show_server_diagnostics = copy.deepcopy(servers258.show_server_diagnostics)
+get_remote_consoles = copy.deepcopy(servers258.get_remote_consoles)
+list_tags = copy.deepcopy(servers258.list_tags)
+update_all_tags = copy.deepcopy(servers258.update_all_tags)
+delete_all_tags = copy.deepcopy(servers258.delete_all_tags)
+check_tag_existence = copy.deepcopy(servers258.check_tag_existence)
+update_tag = copy.deepcopy(servers258.update_tag)
+delete_tag = copy.deepcopy(servers258.delete_tag)
+get_server = copy.deepcopy(servers258.get_server)
+list_servers_detail = copy.deepcopy(servers258.list_servers_detail)
+update_server = copy.deepcopy(servers258.update_server)
+rebuild_server = copy.deepcopy(servers258.rebuild_server)
+rebuild_server_with_admin_pass = copy.deepcopy(
+ servers258.rebuild_server_with_admin_pass)
+attach_volume = copy.deepcopy(servers258.attach_volume)
+show_volume_attachment = copy.deepcopy(servers258.show_volume_attachment)
+list_volume_attachments = copy.deepcopy(servers258.list_volume_attachments)
+show_instance_action = copy.deepcopy(servers258.show_instance_action)
+create_backup = copy.deepcopy(servers258.create_backup)
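The new v2_59 module follows Tempest's usual microversion pattern: deep-copy the previous version's schema, then graft on only what changed. A minimal sketch of that pattern, using a stand-in base schema rather than the real `tempest.lib.api_schema.response.compute.v2_58.servers` module:

```python
import copy

# Stand-in for the inherited 2.58 schema (the real one is
# tempest.lib.api_schema.response.compute.v2_58.servers).
base_list_live_migrations = {
    'response_body': {
        'properties': {
            'migrations': {
                'items': {
                    'properties': {'id': {'type': 'integer'}},
                    'required': ['id'],
                }
            }
        }
    }
}

# 2.59 pattern: deep-copy, then mutate only what the microversion adds.
list_live_migrations = copy.deepcopy(base_list_live_migrations)
items = list_live_migrations['response_body']['properties'][
    'migrations']['items']
items['properties'].update({'uuid': {'type': 'string', 'format': 'uuid'}})
items['required'].append('uuid')

# The deep copy leaves the 2.58 schema untouched, so pre-2.59
# responses without 'uuid' keep validating against the old module.
assert 'uuid' in items['required']
assert 'uuid' not in base_list_live_migrations['response_body'][
    'properties']['migrations']['items']['required']
```

Because `copy.deepcopy` detaches all nested dicts, older microversion modules keep validating their own responses even after newer modules extend the copied schema.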
diff --git a/tempest/lib/api_schema/response/compute/v2_6/servers.py b/tempest/lib/api_schema/response/compute/v2_6/servers.py
index 05ab616..d3fc884 100644
--- a/tempest/lib/api_schema/response/compute/v2_6/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_6/servers.py
@@ -33,6 +33,7 @@
list_volume_attachments = copy.deepcopy(servers.list_volume_attachments)
show_instance_action = copy.deepcopy(servers.show_instance_action)
create_backup = copy.deepcopy(servers.create_backup)
+list_live_migrations = copy.deepcopy(servers.list_live_migrations)
# NOTE: The consolidated remote console API got introduced with v2.6
# with bp/consolidate-console-api. See Nova commit 578bafeda
diff --git a/tempest/lib/api_schema/response/compute/v2_62/servers.py b/tempest/lib/api_schema/response/compute/v2_62/servers.py
index d761fe9..829479f 100644
--- a/tempest/lib/api_schema/response/compute/v2_62/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_62/servers.py
@@ -11,11 +11,11 @@
# under the License.
import copy
-from tempest.lib.api_schema.response.compute.v2_58 import servers as servers258
+from tempest.lib.api_schema.response.compute.v2_59 import servers as servers259
# microversion 2.62 added hostId and host to the event, but only hostId is
# mandatory
-show_instance_action = copy.deepcopy(servers258.show_instance_action)
+show_instance_action = copy.deepcopy(servers259.show_instance_action)
show_instance_action['response_body']['properties']['instanceAction'][
'properties']['events']['items'][
'properties']['hostId'] = {'type': 'string'}
@@ -27,22 +27,23 @@
# Below are the unchanged schema in this microversion. We need
# to keep this schema in this file to have the generic way to select the
# right schema based on self.schema_versions_info mapping in service client.
-list_servers = copy.deepcopy(servers258.list_servers)
-show_server_diagnostics = copy.deepcopy(servers258.show_server_diagnostics)
-get_remote_consoles = copy.deepcopy(servers258.get_remote_consoles)
-list_tags = copy.deepcopy(servers258.list_tags)
-update_all_tags = copy.deepcopy(servers258.update_all_tags)
-delete_all_tags = copy.deepcopy(servers258.delete_all_tags)
-check_tag_existence = copy.deepcopy(servers258.check_tag_existence)
-update_tag = copy.deepcopy(servers258.update_tag)
-delete_tag = copy.deepcopy(servers258.delete_tag)
-get_server = copy.deepcopy(servers258.get_server)
-list_servers_detail = copy.deepcopy(servers258.list_servers_detail)
-update_server = copy.deepcopy(servers258.update_server)
-rebuild_server = copy.deepcopy(servers258.rebuild_server)
+list_servers = copy.deepcopy(servers259.list_servers)
+show_server_diagnostics = copy.deepcopy(servers259.show_server_diagnostics)
+get_remote_consoles = copy.deepcopy(servers259.get_remote_consoles)
+list_tags = copy.deepcopy(servers259.list_tags)
+update_all_tags = copy.deepcopy(servers259.update_all_tags)
+delete_all_tags = copy.deepcopy(servers259.delete_all_tags)
+check_tag_existence = copy.deepcopy(servers259.check_tag_existence)
+update_tag = copy.deepcopy(servers259.update_tag)
+delete_tag = copy.deepcopy(servers259.delete_tag)
+get_server = copy.deepcopy(servers259.get_server)
+list_servers_detail = copy.deepcopy(servers259.list_servers_detail)
+update_server = copy.deepcopy(servers259.update_server)
+rebuild_server = copy.deepcopy(servers259.rebuild_server)
rebuild_server_with_admin_pass = copy.deepcopy(
- servers258.rebuild_server_with_admin_pass)
-attach_volume = copy.deepcopy(servers258.attach_volume)
-show_volume_attachment = copy.deepcopy(servers258.show_volume_attachment)
-list_volume_attachments = copy.deepcopy(servers258.list_volume_attachments)
-create_backup = copy.deepcopy(servers258.create_backup)
+ servers259.rebuild_server_with_admin_pass)
+attach_volume = copy.deepcopy(servers259.attach_volume)
+show_volume_attachment = copy.deepcopy(servers259.show_volume_attachment)
+list_volume_attachments = copy.deepcopy(servers259.list_volume_attachments)
+create_backup = copy.deepcopy(servers259.create_backup)
+list_live_migrations = copy.deepcopy(servers259.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_63/servers.py b/tempest/lib/api_schema/response/compute/v2_63/servers.py
index 865b4fd..fe596f5 100644
--- a/tempest/lib/api_schema/response/compute/v2_63/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_63/servers.py
@@ -78,3 +78,4 @@
list_volume_attachments = copy.deepcopy(servers262.list_volume_attachments)
show_instance_action = copy.deepcopy(servers262.show_instance_action)
create_backup = copy.deepcopy(servers262.create_backup)
+list_live_migrations = copy.deepcopy(servers262.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_70/servers.py b/tempest/lib/api_schema/response/compute/v2_70/servers.py
index 6bb688a..bafc7cb 100644
--- a/tempest/lib/api_schema/response/compute/v2_70/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_70/servers.py
@@ -80,3 +80,4 @@
delete_tag = copy.deepcopy(servers263.delete_tag)
show_instance_action = copy.deepcopy(servers263.show_instance_action)
create_backup = copy.deepcopy(servers263.create_backup)
+list_live_migrations = copy.deepcopy(servers263.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_71/servers.py b/tempest/lib/api_schema/response/compute/v2_71/servers.py
index b1c202b..6444e7b 100644
--- a/tempest/lib/api_schema/response/compute/v2_71/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_71/servers.py
@@ -84,3 +84,4 @@
list_volume_attachments = copy.deepcopy(servers270.list_volume_attachments)
show_instance_action = copy.deepcopy(servers270.show_instance_action)
create_backup = copy.deepcopy(servers270.create_backup)
+list_live_migrations = copy.deepcopy(servers270.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_73/servers.py b/tempest/lib/api_schema/response/compute/v2_73/servers.py
index 89f100d..e6ca52e 100644
--- a/tempest/lib/api_schema/response/compute/v2_73/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_73/servers.py
@@ -81,3 +81,4 @@
list_volume_attachments = copy.deepcopy(servers271.list_volume_attachments)
show_instance_action = copy.deepcopy(servers271.show_instance_action)
create_backup = copy.deepcopy(servers271.create_backup)
+list_live_migrations = copy.deepcopy(servers271.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_75/servers.py b/tempest/lib/api_schema/response/compute/v2_75/servers.py
index 6b3e93d..a06355b 100644
--- a/tempest/lib/api_schema/response/compute/v2_75/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_75/servers.py
@@ -62,3 +62,4 @@
list_volume_attachments = copy.deepcopy(servers273.list_volume_attachments)
show_instance_action = copy.deepcopy(servers273.show_instance_action)
create_backup = copy.deepcopy(servers273.create_backup)
+list_live_migrations = copy.deepcopy(servers273.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_79/servers.py b/tempest/lib/api_schema/response/compute/v2_79/servers.py
index 77d9beb..f2d3103 100644
--- a/tempest/lib/api_schema/response/compute/v2_79/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_79/servers.py
@@ -67,3 +67,4 @@
delete_tag = copy.deepcopy(servers275.delete_tag)
show_instance_action = copy.deepcopy(servers275.show_instance_action)
create_backup = copy.deepcopy(servers275.create_backup)
+list_live_migrations = copy.deepcopy(servers275.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_8/servers.py b/tempest/lib/api_schema/response/compute/v2_8/servers.py
index 366fb1b..0d37155 100644
--- a/tempest/lib/api_schema/response/compute/v2_8/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_8/servers.py
@@ -40,3 +40,4 @@
list_volume_attachments = copy.deepcopy(servers.list_volume_attachments)
show_instance_action = copy.deepcopy(servers.show_instance_action)
create_backup = copy.deepcopy(servers.create_backup)
+list_live_migrations = copy.deepcopy(servers.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_80/servers.py b/tempest/lib/api_schema/response/compute/v2_80/servers.py
new file mode 100644
index 0000000..cde1612
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_80/servers.py
@@ -0,0 +1,60 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_79 import servers as servers279
+
+###########################################################################
+#
+# 2.80:
+#
+# The user_id and project_id values are now returned in the response body in
+# addition to the migration id for the following API responses:
+#
+# - GET /os-migrations
+#
+###########################################################################
+
+list_live_migrations = copy.deepcopy(servers279.list_live_migrations)
+list_live_migrations['response_body']['properties']['migrations']['items'][
+ 'properties'].update({
+ 'user_id': {'type': 'string'},
+ 'project_id': {'type': 'string'}
+ })
+list_live_migrations['response_body']['properties']['migrations']['items'][
+ 'required'].extend(['user_id', 'project_id'])
+
+# NOTE(zhufl): Below are the unchanged schema in this microversion. We
+# need to keep this schema in this file to have the generic way to select the
+# right schema based on self.schema_versions_info mapping in service client.
+# ****** Schemas unchanged since microversion 2.79 ***
+rebuild_server = copy.deepcopy(servers279.rebuild_server)
+rebuild_server_with_admin_pass = copy.deepcopy(
+ servers279.rebuild_server_with_admin_pass)
+update_server = copy.deepcopy(servers279.update_server)
+get_server = copy.deepcopy(servers279.get_server)
+list_servers_detail = copy.deepcopy(servers279.list_servers_detail)
+list_servers = copy.deepcopy(servers279.list_servers)
+show_server_diagnostics = copy.deepcopy(servers279.show_server_diagnostics)
+get_remote_consoles = copy.deepcopy(servers279.get_remote_consoles)
+list_tags = copy.deepcopy(servers279.list_tags)
+update_all_tags = copy.deepcopy(servers279.update_all_tags)
+delete_all_tags = copy.deepcopy(servers279.delete_all_tags)
+check_tag_existence = copy.deepcopy(servers279.check_tag_existence)
+update_tag = copy.deepcopy(servers279.update_tag)
+delete_tag = copy.deepcopy(servers279.delete_tag)
+show_instance_action = copy.deepcopy(servers279.show_instance_action)
+create_backup = copy.deepcopy(servers279.create_backup)
+attach_volume = copy.deepcopy(servers279.attach_volume)
+show_volume_attachment = copy.deepcopy(servers279.show_volume_attachment)
+list_volume_attachments = copy.deepcopy(servers279.list_volume_attachments)
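The 2.80 hunk above extends both `properties` and `required` in one pass. A sketch with a stand-in item schema (not the real `servers279` one) showing what the stricter `required` list means at validation time, using hypothetical response bodies:

```python
import copy

# Stand-in 2.79 migration item schema; the real one comes from
# servers279.list_live_migrations.
items_279 = {
    'properties': {'id': {'type': 'integer'}},
    'required': ['id'],
}
items_280 = copy.deepcopy(items_279)
items_280['properties'].update({
    'user_id': {'type': 'string'},
    'project_id': {'type': 'string'},
})
items_280['required'].extend(['user_id', 'project_id'])

def missing_required(item_schema, body):
    """Return required keys absent from a response item."""
    return [k for k in item_schema['required'] if k not in body]

# A 2.80 response item must now carry both identifiers.
assert missing_required(items_280, {'id': 1}) == ['user_id', 'project_id']
assert missing_required(items_279, {'id': 1}) == []
```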
diff --git a/tempest/lib/api_schema/response/compute/v2_89/servers.py b/tempest/lib/api_schema/response/compute/v2_89/servers.py
index debf0dc..f072eda 100644
--- a/tempest/lib/api_schema/response/compute/v2_89/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_89/servers.py
@@ -12,7 +12,7 @@
import copy
-from tempest.lib.api_schema.response.compute.v2_79 import servers as servers279
+from tempest.lib.api_schema.response.compute.v2_80 import servers as servers280
###########################################################################
@@ -27,11 +27,11 @@
# - POST /servers/{server_id}/os-volume_attachments
###########################################################################
-attach_volume = copy.deepcopy(servers279.attach_volume)
+attach_volume = copy.deepcopy(servers280.attach_volume)
-show_volume_attachment = copy.deepcopy(servers279.show_volume_attachment)
+show_volume_attachment = copy.deepcopy(servers280.show_volume_attachment)
-list_volume_attachments = copy.deepcopy(servers279.list_volume_attachments)
+list_volume_attachments = copy.deepcopy(servers280.list_volume_attachments)
# Remove properties
# 'id' is available until v2.88
@@ -64,21 +64,22 @@
# NOTE(zhufl): Below are the unchanged schema in this microversion. We
# need to keep this schema in this file to have the generic way to select the
# right schema based on self.schema_versions_info mapping in service client.
-# ****** Schemas unchanged since microversion 2.75 ***
-rebuild_server = copy.deepcopy(servers279.rebuild_server)
+# ****** Schemas unchanged since microversion 2.80 ***
+rebuild_server = copy.deepcopy(servers280.rebuild_server)
rebuild_server_with_admin_pass = copy.deepcopy(
- servers279.rebuild_server_with_admin_pass)
-update_server = copy.deepcopy(servers279.update_server)
-get_server = copy.deepcopy(servers279.get_server)
-list_servers_detail = copy.deepcopy(servers279.list_servers_detail)
-list_servers = copy.deepcopy(servers279.list_servers)
-show_server_diagnostics = copy.deepcopy(servers279.show_server_diagnostics)
-get_remote_consoles = copy.deepcopy(servers279.get_remote_consoles)
-list_tags = copy.deepcopy(servers279.list_tags)
-update_all_tags = copy.deepcopy(servers279.update_all_tags)
-delete_all_tags = copy.deepcopy(servers279.delete_all_tags)
-check_tag_existence = copy.deepcopy(servers279.check_tag_existence)
-update_tag = copy.deepcopy(servers279.update_tag)
-delete_tag = copy.deepcopy(servers279.delete_tag)
-show_instance_action = copy.deepcopy(servers279.show_instance_action)
-create_backup = copy.deepcopy(servers279.create_backup)
+ servers280.rebuild_server_with_admin_pass)
+update_server = copy.deepcopy(servers280.update_server)
+get_server = copy.deepcopy(servers280.get_server)
+list_servers_detail = copy.deepcopy(servers280.list_servers_detail)
+list_servers = copy.deepcopy(servers280.list_servers)
+show_server_diagnostics = copy.deepcopy(servers280.show_server_diagnostics)
+get_remote_consoles = copy.deepcopy(servers280.get_remote_consoles)
+list_tags = copy.deepcopy(servers280.list_tags)
+update_all_tags = copy.deepcopy(servers280.update_all_tags)
+delete_all_tags = copy.deepcopy(servers280.delete_all_tags)
+check_tag_existence = copy.deepcopy(servers280.check_tag_existence)
+update_tag = copy.deepcopy(servers280.update_tag)
+delete_tag = copy.deepcopy(servers280.delete_tag)
+show_instance_action = copy.deepcopy(servers280.show_instance_action)
+create_backup = copy.deepcopy(servers280.create_backup)
+list_live_migrations = copy.deepcopy(servers280.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_9/servers.py b/tempest/lib/api_schema/response/compute/v2_9/servers.py
index b4c7865..ad39b14 100644
--- a/tempest/lib/api_schema/response/compute/v2_9/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_9/servers.py
@@ -59,3 +59,4 @@
list_volume_attachments = copy.deepcopy(servers.list_volume_attachments)
show_instance_action = copy.deepcopy(servers.show_instance_action)
create_backup = copy.deepcopy(servers.create_backup)
+list_live_migrations = copy.deepcopy(servers.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_96/servers.py b/tempest/lib/api_schema/response/compute/v2_96/servers.py
index 8a4ed9f..0c4be65 100644
--- a/tempest/lib/api_schema/response/compute/v2_96/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_96/servers.py
@@ -84,3 +84,4 @@
delete_tag = copy.deepcopy(servers289.delete_tag)
show_instance_action = copy.deepcopy(servers289.show_instance_action)
create_backup = copy.deepcopy(servers289.create_backup)
+list_live_migrations = copy.deepcopy(servers289.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_98/servers.py b/tempest/lib/api_schema/response/compute/v2_98/servers.py
index 2fca3eb..0296410 100644
--- a/tempest/lib/api_schema/response/compute/v2_98/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_98/servers.py
@@ -83,3 +83,4 @@
delete_tag = copy.deepcopy(servers296.delete_tag)
show_instance_action = copy.deepcopy(servers296.show_instance_action)
create_backup = copy.deepcopy(servers296.create_backup)
+list_live_migrations = copy.deepcopy(servers296.list_live_migrations)
diff --git a/tempest/lib/api_schema/response/compute/v2_99/servers.py b/tempest/lib/api_schema/response/compute/v2_99/servers.py
index e667321..25b3150 100644
--- a/tempest/lib/api_schema/response/compute/v2_99/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_99/servers.py
@@ -31,6 +31,7 @@
list_volume_attachments = copy.deepcopy(servers.list_volume_attachments)
show_instance_action = copy.deepcopy(servers.show_instance_action)
create_backup = copy.deepcopy(servers.create_backup)
+list_live_migrations = copy.deepcopy(servers.list_live_migrations)
console_auth_tokens = {
'status_code': [200],
diff --git a/tempest/lib/api_schema/response/volume/volume_types.py b/tempest/lib/api_schema/response/volume/volume_types.py
index 51b3a72..4d09bcd 100644
--- a/tempest/lib/api_schema/response/volume/volume_types.py
+++ b/tempest/lib/api_schema/response/volume/volume_types.py
@@ -31,8 +31,7 @@
'qos_specs_id': {'type': ['string', 'null'], 'format': 'uuid'}
},
'additionalProperties': False,
- 'required': ['name', 'is_public', 'description', 'id',
- 'os-volume-type-access:is_public']
+ 'required': ['name', 'is_public', 'description', 'id']
}
show_volume_type = {
diff --git a/tempest/lib/common/cred_provider.py b/tempest/lib/common/cred_provider.py
index 93b9586..9be7c5e 100644
--- a/tempest/lib/common/cred_provider.py
+++ b/tempest/lib/common/cred_provider.py
@@ -100,6 +100,10 @@
return
@abc.abstractmethod
+ def get_project_alt_manager_creds(self):
+ return
+
+ @abc.abstractmethod
def get_project_member_creds(self):
return
diff --git a/tempest/lib/common/dynamic_creds.py b/tempest/lib/common/dynamic_creds.py
index 1815dc6..11e7215 100644
--- a/tempest/lib/common/dynamic_creds.py
+++ b/tempest/lib/common/dynamic_creds.py
@@ -427,7 +427,8 @@
elif credential_type in [['admin'], ['alt_admin']]:
credentials = self._create_creds(
admin=True, scope=scope, project_id=project_id)
- elif credential_type in [['alt_member'], ['alt_reader']]:
+ elif credential_type in [['alt_manager'], ['alt_member'],
+ ['alt_reader']]:
cred_type = credential_type[0][4:]
if isinstance(cred_type, str):
cred_type = [cred_type]
@@ -511,6 +512,9 @@
def get_project_manager_creds(self):
return self.get_credentials(['manager'], scope='project')
+ def get_project_alt_manager_creds(self):
+ return self.get_credentials(['alt_manager'], scope='project')
+
def get_project_member_creds(self):
return self.get_credentials(['member'], scope='project')
diff --git a/tempest/lib/common/preprov_creds.py b/tempest/lib/common/preprov_creds.py
index 3ba7db1..e685c2c 100644
--- a/tempest/lib/common/preprov_creds.py
+++ b/tempest/lib/common/preprov_creds.py
@@ -12,11 +12,11 @@
# License for the specific language governing permissions and limitations
# under the License.
+import hashlib
import os
from oslo_concurrency import lockutils
from oslo_log import log as logging
-from oslo_utils.secretutils import md5
import yaml
from tempest.lib import auth
@@ -134,7 +134,7 @@
scope = 'domain'
elif 'system' in account:
scope = 'system'
- temp_hash = md5(usedforsecurity=False)
+ temp_hash = hashlib.md5(usedforsecurity=False)
account_for_hash = dict((k, v) for (k, v) in account.items()
if k in cls.HASH_CRED_FIELDS)
temp_hash.update(str(account_for_hash).encode('utf-8'))
@@ -392,6 +392,10 @@
self._creds['project_manager'] = project_manager
return project_manager
+ def get_project_alt_manager_creds(self):
+    # TODO(msava): Implement alt manager hash.
+ return
+
def get_project_member_creds(self):
if self._creds.get('project_member'):
return self._creds.get('project_member')
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index aad04b8..cc318db 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -13,7 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-
+import hashlib
import io
import select
import socket
@@ -21,7 +21,6 @@
import warnings
from oslo_log import log as logging
-from oslo_utils.secretutils import md5
from tempest.lib import exceptions
@@ -43,7 +42,7 @@
TODO(alee) Remove this when paramiko is patched.
See https://github.com/paramiko/paramiko/pull/1928
"""
- return md5(self.asbytes(), usedforsecurity=False).digest()
+ return hashlib.md5(self.asbytes(), usedforsecurity=False).digest()
paramiko.pkey.PKey.get_fingerprint = get_fingerprint
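Both hunks above drop `oslo_utils.secretutils.md5` in favor of plain `hashlib.md5`; that wrapper only existed because the `usedforsecurity` keyword landed in the stdlib in Python 3.9, which Tempest's supported Python range now guarantees. A sketch of the preprov account-hash computation with a made-up account dict (the real field subset comes from `HASH_CRED_FIELDS`):

```python
import hashlib

# Stand-in account entry; the real hashed fields are the
# HASH_CRED_FIELDS subset of a preprovisioned accounts entry.
account = {'username': 'demo', 'project_name': 'demo', 'password': 'secret'}
temp_hash = hashlib.md5(usedforsecurity=False)
temp_hash.update(str(account).encode('utf-8'))
digest = temp_hash.hexdigest()
assert len(digest) == 32  # md5 hex digest
```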
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
index 4a607a3..1778194 100644
--- a/tempest/lib/services/compute/servers_client.py
+++ b/tempest/lib/services/compute/servers_client.py
@@ -43,6 +43,7 @@
from tempest.lib.api_schema.response.compute.v2_75 import servers as schemav275
from tempest.lib.api_schema.response.compute.v2_79 import servers as schemav279
from tempest.lib.api_schema.response.compute.v2_8 import servers as schemav28
+from tempest.lib.api_schema.response.compute.v2_80 import servers as schemav280
from tempest.lib.api_schema.response.compute.v2_89 import servers as schemav289
from tempest.lib.api_schema.response.compute.v2_9 import servers as schemav29
from tempest.lib.api_schema.response.compute.v2_96 import servers as schemav296
@@ -79,7 +80,8 @@
{'min': '2.71', 'max': '2.72', 'schema': schemav271},
{'min': '2.73', 'max': '2.74', 'schema': schemav273},
{'min': '2.75', 'max': '2.78', 'schema': schemav275},
- {'min': '2.79', 'max': '2.88', 'schema': schemav279},
+ {'min': '2.79', 'max': '2.79', 'schema': schemav279},
+ {'min': '2.80', 'max': '2.88', 'schema': schemav280},
{'min': '2.89', 'max': '2.95', 'schema': schemav289},
{'min': '2.96', 'max': '2.97', 'schema': schemav296},
{'min': '2.98', 'max': '2.98', 'schema': schemav298},
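Splitting the former `2.79`-`2.88` range in two works because the client walks `schema_versions_info` and picks the entry whose range contains the negotiated microversion. A simplified selector — the real lookup is `get_schema()` on the base compute client, and the `schema` values here are stand-in strings rather than imported modules:

```python
# Simplified sketch of microversion-based schema selection; the real
# lookup is get_schema() in the base compute client.
def select_schema(versions_info, microversion):
    """Return the schema whose [min, max] range contains microversion."""
    def as_tuple(version):
        # '2.80' -> (2, 80), so '2.9' correctly sorts below '2.80'
        major, minor = version.split('.')
        return (int(major), int(minor))

    for entry in versions_info:
        if (as_tuple(entry['min']) <= as_tuple(microversion)
                <= as_tuple(entry['max'])):
            return entry['schema']
    raise ValueError('no schema covers microversion %s' % microversion)

versions_info = [
    {'min': '2.79', 'max': '2.79', 'schema': 'schemav279'},
    {'min': '2.80', 'max': '2.88', 'schema': 'schemav280'},
    {'min': '2.89', 'max': '2.95', 'schema': 'schemav289'},
]
assert select_schema(versions_info, '2.79') == 'schemav279'
assert select_schema(versions_info, '2.85') == 'schemav280'
```

Comparing versions as integer tuples rather than strings is what makes `2.9` sort below `2.80`, which plain string comparison would get wrong.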
@@ -549,6 +551,35 @@
"""
return self.action(server_id, 'os-migrateLive', **kwargs)
+ def list_in_progress_live_migration(self, server_id, **kwargs):
+ """This should be called with administrator privileges.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/compute/#id318
+ """
+ resp, body = self.get('servers/%s/migrations' % server_id)
+ body = json.loads(body)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.list_live_migrations, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def force_complete_live_migration(self, server_id, migration_id, **kwargs):
+    """Force complete an in-progress live migration.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/compute/#force-migration-complete-action-force-complete-action
+ """
+    post_body = json.dumps({"force_complete": None})
+ resp, body = self.post('servers/%s/migrations/%s/action' %
+ (server_id, migration_id),
+ post_body)
+ body = json.loads(body)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.server_actions_common_schema, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
def migrate_server(self, server_id, **kwargs):
"""Migrate a server to a new host.
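Per the Nova API reference, the force-complete action body must carry a bare JSON null as the value. Only Python `None` serializes that way; the string `'null'` would produce the JSON string `"null"`, which Nova's action schema (`type: null`) rejects. A quick check of the serialized body:

```python
import json

# None maps to a bare JSON null; the string 'null' would serialize
# as the JSON string "null" and fail Nova's schema validation.
post_body = json.dumps({'force_complete': None})
assert post_body == '{"force_complete": null}'
```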
diff --git a/tempest/lib/services/image/v2/images_client.py b/tempest/lib/services/image/v2/images_client.py
index a6a1623..c491d9b 100644
--- a/tempest/lib/services/image/v2/images_client.py
+++ b/tempest/lib/services/image/v2/images_client.py
@@ -304,3 +304,13 @@
resp, _ = self.delete(url)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp)
+
+ def add_image_location(self, image_id, url, validation_data=None):
+ """Add location for specific Image."""
+ if not validation_data:
+ validation_data = {}
+ data = json.dumps({'url': url, 'validation_data': validation_data})
+ resp, _ = self.post('images/%s/locations' % (image_id),
+ data)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp)
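For the new `add_image_location`, the request body pairs the location `url` with an optional `validation_data` mapping (checksum/hash fields, where the deployment requires them). A sketch of the body construction with a hypothetical RBD location URL:

```python
import json

# Hypothetical location URL; Glance accepts backend-specific schemes
# such as rbd:// or http(s)://. validation_data may carry checksum and
# os_hash_* fields and defaults to an empty dict here, matching the
# client method above.
validation_data = {}
data = json.dumps({'url': 'rbd://fake-pool/fake-image/snap',
                   'validation_data': validation_data})
assert set(json.loads(data)) == {'url', 'validation_data'}
```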
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index f4ee98d..d8ffa54 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -33,6 +33,8 @@
class BaseTestNetworkAdvancedServerOps(manager.NetworkScenarioTest):
"""Base class for defining methods used in tests."""
+ credentials = ['primary', 'admin', 'project_manager']
+
@classmethod
def skip_checks(cls):
super(BaseTestNetworkAdvancedServerOps, cls).skip_checks()
@@ -47,7 +49,7 @@
@classmethod
def setup_clients(cls):
super(BaseTestNetworkAdvancedServerOps, cls).setup_clients()
- cls.admin_servers_client = cls.os_admin.servers_client
+ cls.mgr_server_client = cls.os_admin.servers_client
cls.sec_group_rules_client = \
cls.os_primary.security_group_rules_client
cls.sec_groups_client = cls.os_primary.security_groups_client
@@ -159,7 +161,13 @@
self._wait_server_status_and_check_network_connectivity(
server, keypair, floating_ip)
- self.admin_servers_client.migrate_server(
+ if (not dest_host and CONF.enforce_scope.nova and 'manager' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ self.mgr_server_client = self.os_project_manager.servers_client
+ LOG.info("Using project manager for migrating server: %s, "
+ "project manager user id: %s",
+ server['id'], self.mgr_server_client.user_id)
+ self.mgr_server_client.migrate_server(
server['id'], host=dest_host)
waiters.wait_for_server_status(self.servers_client, server['id'],
'VERIFY_RESIZE')
@@ -210,8 +218,13 @@
if dest_host:
migration_kwargs['host'] = dest_host
-
- self.admin_servers_client.live_migrate_server(
+ elif (CONF.enforce_scope.nova and 'manager' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ self.mgr_server_client = self.os_project_manager.servers_client
+ LOG.info("Using project manager for migrating server: %s, "
+ "project manager user id: %s",
+ server['id'], self.mgr_server_client.user_id)
+ self.mgr_server_client.live_migrate_server(
server['id'], **migration_kwargs)
waiters.wait_for_server_status(self.servers_client,
server['id'], 'ACTIVE')
@@ -260,7 +273,13 @@
self._wait_server_status_and_check_network_connectivity(
server, keypair, floating_ip)
- self.admin_servers_client.migrate_server(
+ if (not dest_host and CONF.enforce_scope.nova and 'manager' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ self.mgr_server_client = self.os_project_manager.servers_client
+ LOG.info("Using project manager for migrating server: %s, "
+ "project manager user id: %s",
+ server['id'], self.mgr_server_client.user_id)
+ self.mgr_server_client.migrate_server(
server['id'], host=dest_host)
waiters.wait_for_server_status(self.servers_client, server['id'],
'VERIFY_RESIZE')
@@ -415,7 +434,7 @@
- Cold Migration with revert
- Live Migration
"""
- credentials = ['primary', 'admin']
+ credentials = ['primary', 'admin', 'project_manager']
compute_min_microversion = "2.74"
@classmethod
@@ -441,7 +460,7 @@
cls.keypairs_client = cls.os_admin.keypairs_client
cls.floating_ips_client = cls.os_admin.floating_ips_client
cls.servers_client = cls.os_admin.servers_client
- cls.admin_servers_client = cls.os_admin.servers_client
+ cls.mgr_server_client = cls.os_admin.servers_client
@decorators.idempotent_id('06e23934-79ae-11ee-b962-0242ac120002')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
diff --git a/tempest/scenario/test_network_qos_placement.py b/tempest/scenario/test_network_qos_placement.py
index 055dcb6..faff6f9 100644
--- a/tempest/scenario/test_network_qos_placement.py
+++ b/tempest/scenario/test_network_qos_placement.py
@@ -152,21 +152,36 @@
min_kbps=self.BANDWIDTH_2
)
- def _create_network_and_qos_policies(self, policy_method):
- physnet_name = CONF.network_feature_enabled.qos_placement_physnet
- base_segm = \
- CONF.network_feature_enabled.provider_net_base_segmentation_id
-
- self.prov_network, _, _ = self.setup_network_subnet_with_router(
- networks_client=self.networks_client,
- routers_client=self.routers_client,
- subnets_client=self.subnets_client,
+ def _use_or_create_network_and_qos_policies(self, policy_method):
+ vlan_ext_nets = self.networks_client.list_networks(
**{
- 'shared': True,
'provider:network_type': 'vlan',
- 'provider:physical_network': physnet_name,
- 'provider:segmentation_id': base_segm
- })
+ 'router:external': True}
+ )['networks']
+ if vlan_ext_nets:
+ self.prov_network = vlan_ext_nets[0]
+ if not self.prov_network['shared']:
+ self.prov_network = self.networks_client.update_network(
+ self.prov_network['id'], shared=True)['network']
+ self.addClassResourceCleanup(
+ self.networks_client.update_network,
+ self.prov_network['id'],
+ shared=False)
+ else:
+ physnet_name = CONF.network_feature_enabled.qos_placement_physnet
+ base_segm = \
+ CONF.network_feature_enabled.provider_net_base_segmentation_id
+
+ self.prov_network, _, _ = self.setup_network_subnet_with_router(
+ networks_client=self.networks_client,
+ routers_client=self.routers_client,
+ subnets_client=self.subnets_client,
+ **{
+ 'shared': True,
+ 'provider:network_type': 'vlan',
+ 'provider:physical_network': physnet_name,
+ 'provider:segmentation_id': base_segm
+ })
policy_method()
@@ -261,7 +276,8 @@
* Create port with invalid QoS policy, and try to boot VM with that,
it should fail.
"""
- self._create_network_and_qos_policies(self._create_qos_basic_policies)
+ self._use_or_create_network_and_qos_policies(
+ self._create_qos_basic_policies)
server1, valid_port = self._boot_vm_with_min_bw(
qos_policy_id=self.qos_policy_valid['id'])
self._assert_allocation_is_as_expected(server1['id'],
@@ -297,7 +313,8 @@
* If the VM goes to ACTIVE state check that allocations are as
expected.
"""
- self._create_network_and_qos_policies(self._create_qos_basic_policies)
+ self._use_or_create_network_and_qos_policies(
+ self._create_qos_basic_policies)
server, valid_port = self._boot_vm_with_min_bw(
qos_policy_id=self.qos_policy_valid['id'])
self._assert_allocation_is_as_expected(server['id'],
@@ -335,7 +352,8 @@
* If the VM goes to ACTIVE state check that allocations are as
expected.
"""
- self._create_network_and_qos_policies(self._create_qos_basic_policies)
+ self._use_or_create_network_and_qos_policies(
+ self._create_qos_basic_policies)
server, valid_port = self._boot_vm_with_min_bw(
qos_policy_id=self.qos_policy_valid['id'])
self._assert_allocation_is_as_expected(server['id'],
@@ -378,7 +396,7 @@
if not utils.is_network_feature_enabled('update_port_qos'):
raise self.skipException("update_port_qos feature is not enabled")
- self._create_network_and_qos_policies(
+ self._use_or_create_network_and_qos_policies(
self._create_qos_policies_from_life)
port = self.create_port(
@@ -432,7 +450,7 @@
if not utils.is_network_feature_enabled('update_port_qos'):
raise self.skipException("update_port_qos feature is not enabled")
- self._create_network_and_qos_policies(
+ self._use_or_create_network_and_qos_policies(
self._create_qos_policies_from_life)
port = self.create_port(self.prov_network['id'])
@@ -457,7 +475,7 @@
if not utils.is_network_feature_enabled('update_port_qos'):
raise self.skipException("update_port_qos feature is not enabled")
- self._create_network_and_qos_policies(
+ self._use_or_create_network_and_qos_policies(
self._create_qos_policies_from_life)
port = self.create_port(
@@ -479,7 +497,7 @@
if not utils.is_network_feature_enabled('update_port_qos'):
raise self.skipException("update_port_qos feature is not enabled")
- self._create_network_and_qos_policies(
+ self._use_or_create_network_and_qos_policies(
self._create_qos_policies_from_life)
port1 = self.create_port(
@@ -506,7 +524,7 @@
if not utils.is_network_feature_enabled('update_port_qos'):
raise self.skipException("update_port_qos feature is not enabled")
- self._create_network_and_qos_policies(
+ self._use_or_create_network_and_qos_policies(
self._create_qos_policies_from_life)
port = self.create_port(
@@ -552,7 +570,7 @@
direction=self.EGRESS_DIRECTION,
)
- self._create_network_and_qos_policies(create_policies)
+ self._use_or_create_network_and_qos_policies(create_policies)
port = self.create_port(
self.prov_network['id'],
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index 204471e..d53e918 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -13,6 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+from oslo_log import log as logging
import testtools
from tempest.common import compute
@@ -23,6 +24,7 @@
from tempest.scenario import manager
CONF = config.CONF
+LOG = logging.getLogger(__name__)
class TestShelveInstance(manager.ScenarioTest):
@@ -38,12 +40,18 @@
"""
- credentials = ['primary', 'admin']
+ credentials = ['primary', 'admin', 'project_manager']
@classmethod
def setup_clients(cls):
super(TestShelveInstance, cls).setup_clients()
- cls.admin_servers_client = cls.os_admin.servers_client
+ cls.mgr_servers_client = cls.os_admin.servers_client
+ if (CONF.enforce_scope.nova and 'manager' in
+ CONF.compute_feature_enabled.nova_policy_roles):
+ cls.mgr_servers_client = cls.os_project_manager.servers_client
+ LOG.info("Using project manager for migrating server, "
+ "project manager user id: %s",
+ cls.mgr_servers_client.user_id)
@classmethod
def skip_checks(cls):
@@ -62,7 +70,7 @@
def _cold_migrate_server(self, server):
src_host = self.get_host_for_server(server['id'])
- self.admin_servers_client.migrate_server(server['id'])
+ self.mgr_servers_client.migrate_server(server['id'])
waiters.wait_for_server_status(self.servers_client,
server['id'], 'VERIFY_RESIZE')
self.servers_client.confirm_resize_server(server['id'])
diff --git a/tempest/tests/common/test_concurrency.py b/tempest/tests/common/test_concurrency.py
new file mode 100644
index 0000000..0f3d742
--- /dev/null
+++ b/tempest/tests/common/test_concurrency.py
@@ -0,0 +1,99 @@
+# Copyright 2025 Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.common import concurrency
+from tempest.tests import base
+
+
+class TestConcurrency(base.TestCase):
+
+ def test_run_concurrent_tasks_success(self):
+ """Test successful concurrent task execution."""
+ def target_func(index, resource_ids, prefix='resource'):
+ resource_ids.append(f"{prefix}_{index}")
+
+ result = concurrency.run_concurrent_tasks(
+ target_func,
+ resource_count=3,
+ prefix='test_resource'
+ )
+
+ self.assertEqual(len(result), 3)
+ self.assertIn('test_resource_0', result)
+ self.assertIn('test_resource_1', result)
+ self.assertIn('test_resource_2', result)
+
+ def test_run_concurrent_tasks_multiple_workers(self):
+ """Test concurrent task execution with multiple workers."""
+ def target_func(index, resource_ids):
+ resource_ids.append(f"item_{index}")
+
+ result = concurrency.run_concurrent_tasks(
+ target_func,
+ resource_count=4
+ )
+
+ self.assertEqual(len(result), 4)
+ self.assertIn('item_0', result)
+ self.assertIn('item_1', result)
+ self.assertIn('item_2', result)
+ self.assertIn('item_3', result)
+
+ def test_run_concurrent_tasks_single_process(self):
+ """Test concurrent task execution with single process."""
+ def target_func(index, resource_ids, value):
+ resource_ids.append(value * 2)
+
+ result = concurrency.run_concurrent_tasks(
+ target_func,
+ resource_count=1,
+ value=5
+ )
+
+ self.assertEqual(len(result), 1)
+ self.assertEqual(result[0], 10)
+
+ def test_run_concurrent_tasks_with_exception(self):
+ """Test that exceptions in tasks are properly captured and raised."""
+ def failing_target(index, resource_ids):
+ if index == 1:
+ raise ValueError("Test error in worker 1")
+ resource_ids.append(f"resource_{index}")
+
+ error = self.assertRaises(
+ RuntimeError,
+ concurrency.run_concurrent_tasks,
+ failing_target,
+ resource_count=3
+ )
+ self.assertIn("Worker 1 failed", str(error))
+ self.assertIn("Test error in worker 1", str(error))
+
+ def test_run_concurrent_tasks_dict_return_values(self):
+ """Test concurrent task execution with dict return values."""
+ def target_returning_dict(index, resource_ids):
+ resource_ids.append({'id': index, 'name': f'resource_{index}'})
+
+ result = concurrency.run_concurrent_tasks(
+ target_returning_dict,
+ resource_count=3
+ )
+
+ self.assertEqual(len(result), 3)
+ # Verify that dicts are present
+ ids = [r['id'] for r in result]
+ self.assertIn(0, ids)
+ self.assertIn(1, ids)
+ self.assertIn(2, ids)
diff --git a/tempest/tests/lib/common/test_dynamic_creds.py b/tempest/tests/lib/common/test_dynamic_creds.py
index 4122db3..b62d854 100644
--- a/tempest/tests/lib/common/test_dynamic_creds.py
+++ b/tempest/tests/lib/common/test_dynamic_creds.py
@@ -248,6 +248,7 @@
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
if test_alt_creds:
admin_func = creds.get_project_alt_admin_creds
+ manager_func = creds.get_project_alt_manager_creds
member_func = creds.get_project_alt_member_creds
reader_func = creds.get_project_alt_reader_creds
else:
@@ -290,11 +291,8 @@
# Now request for the project manager creds which should not create new
# project instead should use the project_id of member_creds already
# created project.
- # TODO(gmaan): test test_alt_creds also once alt project
- # manager is available.
- if not test_alt_creds:
- self._request_and_check_second_creds(
- creds, manager_func, member_creds, show_mock, sm_count=3)
+ self._request_and_check_second_creds(
+ creds, manager_func, member_creds, show_mock, sm_count=3)
def test_creds_within_same_project(self):
self._creds_within_same_project()
diff --git a/tempest/tests/lib/common/test_preprov_creds.py b/tempest/tests/lib/common/test_preprov_creds.py
index 5a36f71..c5eaf7e 100644
--- a/tempest/tests/lib/common/test_preprov_creds.py
+++ b/tempest/tests/lib/common/test_preprov_creds.py
@@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+import hashlib
import os
import shutil
from unittest import mock
@@ -21,7 +22,6 @@
import fixtures
from oslo_concurrency.fixture import lockutils as lockutils_fixtures
from oslo_config import cfg
-from oslo_utils.secretutils import md5
from tempest import config
from tempest.lib import auth
@@ -111,7 +111,7 @@
hash_fields = (
preprov_creds.PreProvisionedCredentialProvider.HASH_CRED_FIELDS)
for account in accounts_list:
- hash = md5(usedforsecurity=False)
+ hash = hashlib.md5(usedforsecurity=False)
account_for_hash = dict((k, v) for (k, v) in account.items()
if k in hash_fields)
hash.update(str(account_for_hash).encode('utf-8'))
diff --git a/tempest/tests/test_test.py b/tempest/tests/test_test.py
index 7fb9bb3..f6f3588 100644
--- a/tempest/tests/test_test.py
+++ b/tempest/tests/test_test.py
@@ -407,7 +407,7 @@
def get_identity_version(cls):
return identity_version
- with testtools.ExpectedException(testtools.testcase.TestSkipped):
+ with testtools.ExpectedException(unittest.SkipTest):
NeedAdmin().skip_checks()
mock_iaa.assert_called_once_with('identity_version')
@@ -417,7 +417,7 @@
class NeedV2(self.parent_test):
identity_version = 'v2'
- with testtools.ExpectedException(testtools.testcase.TestSkipped):
+ with testtools.ExpectedException(unittest.SkipTest):
NeedV2().skip_checks()
def test_skip_checks_identity_v3_not_available(self):
@@ -426,7 +426,7 @@
class NeedV3(self.parent_test):
identity_version = 'v3'
- with testtools.ExpectedException(testtools.testcase.TestSkipped):
+ with testtools.ExpectedException(unittest.SkipTest):
NeedV3().skip_checks()
def test_setup_credentials_all(self):
diff --git a/test-requirements.txt b/test-requirements.txt
index b925921..f599d53 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,4 +1,4 @@
hacking>=7.0.0,<7.1.0
coverage!=4.4,>=4.0 # Apache-2.0
oslotest>=3.2.0 # Apache-2.0
-flake8-import-order>=0.18.0,<0.19.0 # LGPLv3
+flake8-import-order>=0.19.0 # LGPLv3
diff --git a/tools/format.sh b/tools/format.sh
index ef5cc92..9685cdf 100755
--- a/tools/format.sh
+++ b/tools/format.sh
@@ -15,7 +15,7 @@
# isort is not compatible with the default flake8 (H306), maybe flake8-isort
# isort -rc -sl -fss ../tempest ../setup.py
-$AUTOPEP8 --exit-code --max-line-length=79 --experimental --in-place \
+$AUTOPEP8 --exit-code --max-line-length=79 --in-place \
-r ../tempest ../setup.py
ERROR=$?
diff --git a/tools/generate-tempest-plugins-list.py b/tools/generate-tempest-plugins-list.py
index 2e8ced5..0690d57 100644
--- a/tools/generate-tempest-plugins-list.py
+++ b/tools/generate-tempest-plugins-list.py
@@ -79,6 +79,10 @@
# No changes are merging in this
# https://review.opendev.org/q/project:x%252Fnetworking-fortinet
- 'x/networking-fortinet'
+ 'x/networking-fortinet',
+ # It is broken and it uses the retired plugin 'patrol'. The last
+ # change in this plugin was made 7 years ago.
+ # https://opendev.org/airship/tempest-plugin
+ 'airship/tempest-plugin'
]
url = 'https://review.opendev.org/projects/'
diff --git a/tox.ini b/tox.ini
index 0fbc252..634b72f 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = pep8,py39,bashate,pip-check-reqs
+envlist = pep8,py,bashate,pip-check-reqs
minversion = 3.18.0
[tempestenv]
@@ -389,12 +389,14 @@
{[testenv]deps}
autopep8>=2.1.0
commands =
- autopep8 --exit-code --max-line-length=79 --experimental --diff -r tempest setup.py
+ autopep8 --exit-code --max-line-length=79 --diff -r tempest setup.py
flake8 {posargs}
check-uuid
[testenv:autopep8]
deps = autopep8>=2.1.0
+allowlist_externals =
+ {toxinidir}/tools/format.sh
commands =
{toxinidir}/tools/format.sh
@@ -473,8 +475,8 @@
pip-extra-reqs -d --ignore-file=tempest/tests/* tempest
pip-missing-reqs -d --ignore-file=tempest/tests/* tempest
-
[testenv:bindep]
+skip_install = true
# Do not install any requirements. We want this to be fast and work even if
# system dependencies are missing, since it's used to tell you what system
# dependencies are missing! This also means that bindep must be installed
diff --git a/zuul.d/integrated-gate.yaml b/zuul.d/integrated-gate.yaml
index 8e0a9f3..98a1831 100644
--- a/zuul.d/integrated-gate.yaml
+++ b/zuul.d/integrated-gate.yaml
@@ -82,8 +82,8 @@
Former names for this job where:
* legacy-tempest-dsvm-py35
* gate-tempest-dsvm-py35
- required-projects:
- - openstack/horizon
+ # required-projects:
+ # - openstack/horizon
vars:
# NOTE(gmann): Default concurrency is higher (number of cpu -2) which
# end up 6 in upstream CI. Higher concurrency means high parallel
@@ -101,7 +101,11 @@
neutron: https://opendev.org/openstack/neutron
devstack_services:
# Enable horizon so that we can run horizon test.
- horizon: true
+ # horizon: true
+ # FIXME(sean-k-mooney): restore horizon deployment
+ # once horizon does not depend on setuptools to provide
+ # pkg_resources or bug #2141277 is resolved by other means
+ horizon: false
- job:
name: tempest-full-centos-9-stream
@@ -190,8 +194,9 @@
parent: tempest-integrated-compute
nodeset: devstack-single-node-centos-9-stream
# centos-9-stream is supported from yoga release onwards
+ # PYTHON3_VERSION override support missing before 2025.2
branches:
- regex: ^.*/(victoria|wallaby|xena)$
+ regex: ^.*/(victoria|wallaby|xena|yoga|zed|2023.1|2024.1|2024.2|2025.1)$
negate: true
description: |
This job runs integration tests for compute. This is
@@ -373,6 +378,9 @@
- job:
name: tempest-centos9-stream-fips
parent: devstack-tempest
+ branches:
+ regex: ^.*/(victoria|wallaby|xena|yoga|zed|2023.1|2024.1|2024.2|2025.1)$
+ negate: true
description: |
Integration testing for a FIPS enabled Centos 9 system
timeout: 10800
@@ -452,6 +460,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
- tempest-integrated-networking
@@ -479,6 +488,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
# Do not run it on ussuri until below issue is fixed
@@ -523,6 +533,7 @@
- ^.*/2024.1
- ^.*/2024.2
- ^.*/2025.1
+ - ^.*/2025.2
- master
- tempest-integrated-compute
# Do not run it on ussuri until below issue is fixed
@@ -541,6 +552,7 @@
- ^.*/2024.1
- ^.*/2024.2
- ^.*/2025.1
+ - ^.*/2025.2
- master
- tempest-integrated-compute
- openstacksdk-functional-devstack:
@@ -584,6 +596,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
- tempest-integrated-placement
@@ -611,6 +624,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
# Do not run it on ussuri until below issue is fixed
@@ -651,6 +665,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
- tempest-integrated-storage
@@ -677,6 +692,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
- tempest-integrated-storage
@@ -711,6 +727,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
- tempest-integrated-object-storage
@@ -737,6 +754,7 @@
# but if project feel that is not required to run for non SLURP releases then they can opt to make it non-voting or remove it.
- grenade-skip-level-always:
branches:
+ - ^.*/2025.2
- ^.*/2025.1
- master
- tempest-integrated-object-storage
diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml
index 9c9bc61..45e117f 100644
--- a/zuul.d/project.yaml
+++ b/zuul.d/project.yaml
@@ -8,10 +8,10 @@
check:
jobs:
- openstack-tox-pep8
- - openstack-tox-py39
- openstack-tox-py310
- openstack-tox-py311
- openstack-tox-py312
+ - openstack-tox-py313
- tempest-full-py3:
# Define list of irrelevant files to use everywhere else
irrelevant-files: &tempest-irrelevant-files
@@ -37,9 +37,9 @@
# if things are working in latest and oldest it will work in between
# stable branches also. If anything is breaking we will be catching
# those in respective stable branch gate.
- - tempest-full-2025-1:
+ - tempest-full-2025-2:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-2024-1:
+ - tempest-full-2024-2:
irrelevant-files: *tempest-irrelevant-files
- tempest-multinode-full-py3:
irrelevant-files: *tempest-irrelevant-files
@@ -113,22 +113,20 @@
- neutron-ovs-tempest-dvr:
voting: false
irrelevant-files: *tempest-irrelevant-files
- - interop-tempest-consistency:
- irrelevant-files: *tempest-irrelevant-files
- tempest-full-test-account-py3:
voting: false
irrelevant-files: *tempest-irrelevant-files
- - ironic-tempest-bios-ipmi-direct-tinyipa:
+ - ironic-tempest-bios-ipmi-direct:
irrelevant-files: *tempest-irrelevant-files
- openstack-tox-bashate:
irrelevant-files: *tempest-irrelevant-files-2
gate:
jobs:
- openstack-tox-pep8
- - openstack-tox-py39
- openstack-tox-py310
- openstack-tox-py311
- openstack-tox-py312
+ - openstack-tox-py313
- tempest-slow-py3:
irrelevant-files: *tempest-irrelevant-files
- neutron-ovs-grenade-multinode:
@@ -151,11 +149,12 @@
irrelevant-files: *tempest-irrelevant-files
- nova-live-migration:
irrelevant-files: *tempest-irrelevant-files
- - ironic-tempest-bios-ipmi-direct-tinyipa:
+ - ironic-tempest-bios-ipmi-direct:
irrelevant-files: *tempest-irrelevant-files
experimental:
jobs:
- nova-multi-cell
+ - nova-alt-configurations
- tempest-with-latest-microversion
- tempest-full-oslo-master
- tempest-stestr-master
@@ -184,30 +183,31 @@
irrelevant-files: *tempest-irrelevant-files
# Run stable releases jobs except those are running in check
# pipeline already
- - tempest-full-2024-2
- - tempest-multinode-2025-1
- - tempest-multinode-2024-2
- - tempest-multinode-2024-1
- - tempest-slow-2025-1
- - tempest-slow-2024-2
- - tempest-slow-2024-1
- - tempest-full-2025-1-extra-tests
- - tempest-full-2024-2-extra-tests
- - tempest-full-2024-1-extra-tests
- periodic-stable:
- jobs:
- tempest-full-2025-1
- tempest-full-2024-2
- - tempest-full-2024-1
+ - tempest-multinode-2025-2
- tempest-multinode-2025-1
- tempest-multinode-2024-2
- - tempest-multinode-2024-1
+ - tempest-slow-2025-2
- tempest-slow-2025-1
- tempest-slow-2024-2
- - tempest-slow-2024-1
+ - tempest-full-2025-2-extra-tests
- tempest-full-2025-1-extra-tests
- tempest-full-2024-2-extra-tests
- - tempest-full-2024-1-extra-tests
+ periodic-stable:
+ jobs:
+ - tempest-full-2025-2
+ - tempest-full-2025-1
+ - tempest-full-2024-2
+ - tempest-multinode-2025-2
+ - tempest-multinode-2025-1
+ - tempest-multinode-2024-2
+ - tempest-slow-2025-2
+ - tempest-slow-2025-1
+ - tempest-slow-2024-2
+ - tempest-full-2025-2-extra-tests
+ - tempest-full-2025-1-extra-tests
+ - tempest-full-2024-2-extra-tests
periodic:
jobs:
- tempest-all
diff --git a/zuul.d/stable-jobs.yaml b/zuul.d/stable-jobs.yaml
index 27e65b9..479ffff 100644
--- a/zuul.d/stable-jobs.yaml
+++ b/zuul.d/stable-jobs.yaml
@@ -1,4 +1,11 @@
# NOTE(gmann): This file includes all stable release jobs definition.
+
+- job:
+ name: tempest-full-2025-2
+ parent: tempest-full-py3
+ nodeset: openstack-single-node-noble
+ override-checkout: stable/2025.2
+
- job:
name: tempest-full-2025-1
parent: tempest-full-py3
@@ -12,10 +19,10 @@
override-checkout: stable/2024.2
- job:
- name: tempest-full-2024-1
- parent: tempest-full-py3
- nodeset: openstack-single-node-jammy
- override-checkout: stable/2024.1
+ name: tempest-full-2025-2-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-noble
+ override-checkout: stable/2025.2
- job:
name: tempest-full-2025-1-extra-tests
@@ -30,10 +37,10 @@
override-checkout: stable/2024.2
- job:
- name: tempest-full-2024-1-extra-tests
- parent: tempest-extra-tests
- nodeset: openstack-single-node-jammy
- override-checkout: stable/2024.1
+ name: tempest-multinode-2025-2
+ parent: tempest-multinode-full-py3
+ nodeset: openstack-two-node-noble
+ override-checkout: stable/2025.2
- job:
name: tempest-multinode-2025-1
@@ -48,10 +55,10 @@
override-checkout: stable/2024.2
- job:
- name: tempest-multinode-2024-1
- parent: tempest-multinode-full-py3
- nodeset: openstack-two-node-jammy
- override-checkout: stable/2024.1
+ name: tempest-slow-2025-2
+ parent: tempest-slow-py3
+ nodeset: openstack-two-node-noble
+ override-checkout: stable/2025.2
- job:
name: tempest-slow-2025-1
@@ -66,12 +73,6 @@
override-checkout: stable/2024.2
- job:
- name: tempest-slow-2024-1
- parent: tempest-slow-py3
- nodeset: openstack-two-node-jammy
- override-checkout: stable/2024.1
-
-- job:
name: tempest-full-py3
parent: devstack-tempest
# This job version is to use the 'full' tox env which
@@ -187,3 +188,43 @@
- ^.*/victoria
- ^.*/wallaby
- ^.*/xena
+
+- job:
+ name: tempest-integrated-compute-centos-9-stream
+ parent: tempest-integrated-compute
+ nodeset: devstack-single-node-centos-9-stream
+ # centos-9-stream before 2026.1 needs to run with the default
+ # PYTHON3_VERSION, i.e. 3.9
+ branches: &centos9_stable
+ - ^.*/yoga
+ - ^.*/zed
+ - ^.*/2023.1
+ - ^.*/2024.1
+ - ^.*/2024.2
+ - ^.*/2025.1
+ description: |
+ This job runs integration tests for compute. This is a
+ subset of the 'tempest-full-py3' job and runs Nova, Neutron,
+ Cinder (except backup tests) and Glance related tests. This is
+ meant to be run on the Nova gate only.
+ This version of the job also uses CentOS 9 stream.
+ vars:
+ # Required until bug/1949606 is resolved when using libvirt and QEMU
+ # >=5.0.0 with a [libvirt]virt_type of qemu (TCG).
+ configure_swap_size: 4096
+
+- job:
+ name: tempest-centos9-stream-fips
+ parent: devstack-tempest
+ description: |
+ Integration testing for a FIPS enabled Centos 9 system
+ timeout: 10800
+ nodeset: devstack-single-node-centos-9-stream
+ # centos-9-stream before 2026.1 needs to run with the default
+ # PYTHON3_VERSION, i.e. 3.9
+ branches: *centos9_stable
+ vars:
+ tox_envlist: full
+ configure_swap_size: 4096
+ nslookup_target: 'opendev.org'
+ enable_fips: True