Merge "Test glance reserved properties"
diff --git a/REVIEWING.rst b/REVIEWING.rst
index e07e358..4c63aa0 100644
--- a/REVIEWING.rst
+++ b/REVIEWING.rst
@@ -160,13 +160,11 @@
When to approve
---------------
* It's OK to hold off on an approval until a subject matter expert reviews it.
-* Every patch needs two +2's before being approved.
-* However, a single Tempest core reviewer can approve patches without waiting
- for another +2 in the following cases:
+* Every patch needs at least one +2 before being approved. A single
+ Tempest core reviewer can approve a patch, but may always wait for another
+ +2 in any case. The following are cases where a single +2 can be used
+ without any issue:
- * If a patch has already been approved but requires a trivial rebase to
- merge, then there is no need to wait for a second +2, since the patch has
- already had two +2's.
* If any trivial patch set fixes one of the items below:
* Documentation or code comment typo
@@ -187,7 +185,4 @@
voting ``tempest-tox-plugin-sanity-check`` job) and unblock the
tempest gate
- Note that such a policy should be used judiciously, as we should strive to
- have two +2's on each patch set, prior to approval.
-
.. _example: https://review.opendev.org/#/c/611032/
diff --git a/doc/source/contributor/contributing.rst b/doc/source/contributor/contributing.rst
index 9c79a1f..62953ff 100644
--- a/doc/source/contributor/contributing.rst
+++ b/doc/source/contributor/contributing.rst
@@ -43,10 +43,9 @@
Getting Your Patch Merged
~~~~~~~~~~~~~~~~~~~~~~~~~
-All changes proposed to the Tempest require two ``Code-Review +2`` votes from
-Tempest core reviewers before one of the core reviewers can approve the patch by
-giving ``Workflow +1`` vote. More detailed guidelines for reviewers are available
-at :doc:`../REVIEWING`.
+All changes proposed to Tempest require a single ``Code-Review +2`` vote from
+a Tempest core reviewer, who can then approve the patch with a ``Workflow +1``
+vote. More detailed guidelines for reviewers are available at :doc:`../REVIEWING`.
Project Team Lead Duties
~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/data/tempest-blacklisted-plugins-registry.header b/doc/source/data/tempest-non-active-plugins-registry.header
similarity index 67%
rename from doc/source/data/tempest-blacklisted-plugins-registry.header
rename to doc/source/data/tempest-non-active-plugins-registry.header
index 6b6af11..06d8eaa 100644
--- a/doc/source/data/tempest-blacklisted-plugins-registry.header
+++ b/doc/source/data/tempest-non-active-plugins-registry.header
@@ -1,7 +1,7 @@
-Blacklisted Plugins
+Non Active Plugins
===================
List of Tempest plugin projects that are stale or unmaintained for a long
-time (6 months or more). They can be moved out of blacklist state once one
+time (6 months or more). They can be moved out of the non-active list once one
of the relevant patches gets merged:
https://review.opendev.org/#/q/topic:tempest-sanity-gate+%28status:open%29
diff --git a/doc/source/microversion_testing.rst b/doc/source/microversion_testing.rst
index c1981f9..06062c2 100644
--- a/doc/source/microversion_testing.rst
+++ b/doc/source/microversion_testing.rst
@@ -302,6 +302,10 @@
.. _2.2: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id2
+ * `2.3`_
+
+ .. _2.3: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-kilo
+
* `2.6`_
.. _2.6: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id5
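For context on how the newly documented 2.3 microversion is consumed, compute tests opt into a microversion range through the ``min_microversion``/``max_microversion`` class attributes and are skipped when the deployed API cannot satisfy it. A minimal sketch (class name, test body and idempotent id are illustrative only)::

    from tempest.api.compute import base
    from tempest.lib import decorators


    class ServerList23Test(base.BaseV2ComputeTest):
        min_microversion = '2.3'
        max_microversion = 'latest'

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        def test_list_servers_with_23(self):
            # The request is made with the 2.3 microversion negotiated above.
            self.servers_client.list_servers()
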
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index e51b90b..2eaf72f 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -113,7 +113,7 @@
There is also the option to use `stestr`_ directly. For example, from
the workspace dir run::
- $ stestr run --black-regex '\[.*\bslow\b.*\]' '^tempest\.(api|scenario)'
+ $ stestr run --exclude-regex '\[.*\bslow\b.*\]' '^tempest\.(api|scenario)'
will run the same set of tests as the default gate jobs. Or you can
use `unittest`_ compatible test runners such as `stestr`_, `pytest`_ etc.
diff --git a/doc/source/stable_branch_support_policy.rst b/doc/source/stable_branch_support_policy.rst
index 87e3ad1..9c2d1ed 100644
--- a/doc/source/stable_branch_support_policy.rst
+++ b/doc/source/stable_branch_support_policy.rst
@@ -20,7 +20,7 @@
testing branches in these phases, it's possible that we'll introduce changes to
Tempest on master which will break support on *Extended Maintenance* phase
branches. When this happens the expectation for those branches is to either
-switch to running Tempest from a tag with support for the branch, or blacklist
+switch to running Tempest from a tag with support for the branch, or exclude
a newly introduced test (if that is the cause of the issue). Tempest will not
be creating stable branches to support *Extended Maintenance* phase branches, as
the burden is on the *Extended Maintenance* phase branche maintainers, not the Tempest
diff --git a/etc/whitelist.yaml b/etc/allow-list.yaml
similarity index 100%
rename from etc/whitelist.yaml
rename to etc/allow-list.yaml
diff --git a/etc/rbac-persona-accounts.yaml.sample b/etc/rbac-persona-accounts.yaml.sample
new file mode 100644
index 0000000..0b59538
--- /dev/null
+++ b/etc/rbac-persona-accounts.yaml.sample
@@ -0,0 +1,108 @@
+- user_domain_name: Default
+ password: password
+ roles:
+ - admin
+ username: tempest-system-admin-1
+ system: all
+- user_domain_name: Default
+ password: password
+ username: tempest-system-member-1
+ roles:
+ - member
+ system: all
+- user_domain_name: Default
+ password: password
+ username: tempest-system-reader-1
+ roles:
+ - reader
+ system: all
+- user_domain_name: Default
+ password: password
+ domain_name: tempest-test-domain
+ username: tempest-domain-admin-1
+ roles:
+ - admin
+- user_domain_name: Default
+ password: password
+ domain_name: tempest-test-domain
+ username: tempest-domain-member-1
+ roles:
+ - member
+- user_domain_name: Default
+ password: password
+ domain_name: tempest-test-domain
+ username: tempest-domain-reader-1
+ roles:
+ - reader
+- user_domain_name: Default
+ password: password
+ project_name: tempest-test-project
+ username: tempest-project-admin-1
+ roles:
+ - admin
+- user_domain_name: Default
+ password: password
+ project_name: tempest-test-project
+ username: tempest-project-member-1
+ roles:
+ - member
+- user_domain_name: Default
+ password: password
+ project_name: tempest-test-project
+ username: tempest-project-reader-1
+ roles:
+ - reader
+- user_domain_name: Default
+ password: password
+ username: tempest-system-admin-2
+ roles:
+ - admin
+ system: all
+- user_domain_name: Default
+ password: password
+ username: tempest-system-member-2
+ roles:
+ - member
+ system: all
+- user_domain_name: Default
+ password: password
+ system: all
+ username: tempest-system-reader-2
+ roles:
+ - reader
+- user_domain_name: Default
+ password: password
+ domain_name: tempest-test-domain
+ username: tempest-domain-admin-2
+ roles:
+ - admin
+- user_domain_name: Default
+ password: password
+ domain_name: tempest-test-domain
+ username: tempest-domain-member-2
+ roles:
+ - member
+- user_domain_name: Default
+ password: password
+ domain_name: tempest-test-domain
+ username: tempest-domain-reader-2
+ roles:
+ - reader
+- user_domain_name: Default
+ password: password
+ project_name: tempest-test-project
+ username: tempest-project-admin-2
+ roles:
+ - admin
+- user_domain_name: Default
+ password: password
+ project_name: tempest-test-project
+ username: tempest-project-member-2
+ roles:
+ - member
+- user_domain_name: Default
+ password: password
+ project_name: tempest-test-project
+ username: tempest-project-reader-2
+ roles:
+ - reader
diff --git a/releasenotes/notes/Add-keystone-v3-OS_FEDERATION-APIs-as-tempest-clients-fe9e10a0fe5f09d4.yaml b/releasenotes/notes/Add-keystone-v3-OS_FEDERATION-APIs-as-tempest-clients-fe9e10a0fe5f09d4.yaml
new file mode 100644
index 0000000..33df7c4
--- /dev/null
+++ b/releasenotes/notes/Add-keystone-v3-OS_FEDERATION-APIs-as-tempest-clients-fe9e10a0fe5f09d4.yaml
@@ -0,0 +1,10 @@
+---
+features:
+ - |
+ The following tempest clients for keystone v3 OS_FEDERATION API were
+ implemented in this release
+
+ * identity_providers
+ * protocols
+ * mappings
+ * service_providers
diff --git a/releasenotes/notes/Inclusive-jargon-17621346744f0cf4.yaml b/releasenotes/notes/Inclusive-jargon-17621346744f0cf4.yaml
new file mode 100644
index 0000000..089569e
--- /dev/null
+++ b/releasenotes/notes/Inclusive-jargon-17621346744f0cf4.yaml
@@ -0,0 +1,13 @@
+---
+deprecations:
+ - |
+ In this release the following tempest arguments are deprecated and
+ replaced by new ones which are functionally equivalent:
+
+ * --black-regex is replaced by --exclude-regex
+ * --blacklist-file is replaced by --exclude-list
+ * --whitelist-file is replaced by --include-list
+
+ For now Tempest supports both (new and old ones) in order to make the
+ transition for all consumers smoother. However, that's just a temporary
+ case and the old options will be removed soon.
diff --git a/releasenotes/notes/Remove-manager-2e0b0af48f01294a.yaml b/releasenotes/notes/Remove-manager-2e0b0af48f01294a.yaml
new file mode 100644
index 0000000..822df7d
--- /dev/null
+++ b/releasenotes/notes/Remove-manager-2e0b0af48f01294a.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ In this release tempest/manager.py is removed after more than 4 years
+ of deprecation.
diff --git a/releasenotes/notes/add-identity-roles-system-methods-519dc144231993a3.yaml b/releasenotes/notes/add-identity-roles-system-methods-519dc144231993a3.yaml
new file mode 100644
index 0000000..1840c10
--- /dev/null
+++ b/releasenotes/notes/add-identity-roles-system-methods-519dc144231993a3.yaml
@@ -0,0 +1,13 @@
+---
+features:
+ - |
+ Added methods to the identity v3 roles client to support:
+
+ - PUT /v3/system/users/{user}/roles/{role}
+ - GET /v3/system/users/{user}/roles
+ - GET /v3/system/users/{user}/roles/{role}
+ - DELETE /v3/system/users/{user}/roles/{role}
+ - PUT /v3/system/groups/{group}/roles/{role}
+ - GET /v3/system/groups/{group}/roles
+ - GET /v3/system/groups/{group}/roles/{role}
+ - DELETE /v3/system/groups/{group}/roles/{role}
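A minimal sketch of how the new system-scope methods on the v3 roles client fit together (the helper name is illustrative; ``roles_client`` is an identity v3 ``RolesClient`` instance)::

    def grant_and_revoke_system_role(roles_client, user_id, role_id):
        """Exercise the new system-scope role assignment calls."""
        roles_client.create_user_role_on_system(user_id, role_id)
        assigned = roles_client.list_user_roles_on_system(user_id)['roles']
        assert any(r['id'] == role_id for r in assigned)
        roles_client.check_user_role_existence_on_system(user_id, role_id)
        roles_client.delete_role_from_user_on_system(user_id, role_id)
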
diff --git a/releasenotes/notes/create_loginable_secgroup_rule-73722fd4b4eb12d0.yaml b/releasenotes/notes/create_loginable_secgroup_rule-73722fd4b4eb12d0.yaml
new file mode 100644
index 0000000..e53411d
--- /dev/null
+++ b/releasenotes/notes/create_loginable_secgroup_rule-73722fd4b4eb12d0.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Added public interface create_loginable_secgroup_rule().
+ Since this interface is meant to be used by tempest plugins,
+ it does not necessarily need to be a private API.
diff --git a/releasenotes/notes/create_security_group_rule-16d58a8f0f0ff262.yaml b/releasenotes/notes/create_security_group_rule-16d58a8f0f0ff262.yaml
new file mode 100644
index 0000000..3354f65
--- /dev/null
+++ b/releasenotes/notes/create_security_group_rule-16d58a8f0f0ff262.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Added public interface create_security_group_rule().
+ Since this interface is meant to be used by tempest plugins,
+ it does not necessarily need to be a private API.
diff --git a/releasenotes/notes/log_console_output-dae6b8740b5a5821.yaml b/releasenotes/notes/log_console_output-dae6b8740b5a5821.yaml
new file mode 100644
index 0000000..2779b26
--- /dev/null
+++ b/releasenotes/notes/log_console_output-dae6b8740b5a5821.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+ Added public interface log_console_output().
+ It used to be a private method named _log_console_output().
+ Since this interface is meant to be used by tempest plugins,
+ it does not necessarily need to be a private API.
+
diff --git a/releasenotes/notes/merge-tempest-horizon-plugin-39d555339ab8c7ce.yaml b/releasenotes/notes/merge-tempest-horizon-plugin-39d555339ab8c7ce.yaml
new file mode 100644
index 0000000..ff406fb
--- /dev/null
+++ b/releasenotes/notes/merge-tempest-horizon-plugin-39d555339ab8c7ce.yaml
@@ -0,0 +1,6 @@
+---
+prelude: >
+ The integrated horizon dashboard test has been moved
+ from the tempest-horizon plugin into Tempest. You no longer need
+ to install tempest-horizon to run the horizon test, which
+ can now be run using Tempest itself.
diff --git a/releasenotes/notes/random-bytes-size-limit-ee94a8c6534fe916.yaml b/releasenotes/notes/random-bytes-size-limit-ee94a8c6534fe916.yaml
new file mode 100644
index 0000000..42322e4
--- /dev/null
+++ b/releasenotes/notes/random-bytes-size-limit-ee94a8c6534fe916.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+ - |
+ The ``tempest.lib.common.utils.data_utils.random_bytes()`` helper
+ function will no longer allow a ``size`` of more than 1MiB. Tests
+ generally do not need to generate and use large payloads for
+ feature verification and it is easy to lose track of and duplicate
+ large buffers. The sum total of such errors can become problematic
+ in parallelized and constrained CI environments.
diff --git a/releasenotes/notes/system-scope-44244cc955a7825f.yaml b/releasenotes/notes/system-scope-44244cc955a7825f.yaml
new file mode 100644
index 0000000..969a71f
--- /dev/null
+++ b/releasenotes/notes/system-scope-44244cc955a7825f.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ Adds new personas that can be used to test service policies for all
+ default scopes (project, domain, and system) and roles (reader, member,
+ and admin). Both dynamic credentials and pre-provisioned credentials are
+ supported.
diff --git a/roles/run-tempest/README.rst b/roles/run-tempest/README.rst
index 3643edb..f9fcf28 100644
--- a/roles/run-tempest/README.rst
+++ b/roles/run-tempest/README.rst
@@ -32,7 +32,11 @@
.. zuul:rolevar:: tempest_test_blacklist
- Specifies a blacklist file to skip tests that are not needed.
+ DEPRECATED option, please use tempest_test_exclude_list instead.
+
+.. zuul:rolevar:: tempest_test_exclude_list
+
+ Specifies an exclude list file to skip tests that are not needed.
Pass a full path to the file.
@@ -44,6 +48,11 @@
.. zuul:rolevar:: tempest_black_regex
:default: ''
+ DEPRECATED option, please use tempest_exclude_regex instead.
+
+.. zuul:rolevar:: tempest_exclude_regex
+ :default: ''
+
A regular expression used to skip the tests.
It works only when used with some specific tox environments
@@ -51,7 +60,7 @@
::
vars:
- tempest_black_regex: (tempest.api.identity).*$
+ tempest_exclude_regex: (tempest.api.identity).*$
.. zuul:rolevar:: tox_extra_args
:default: ''
diff --git a/roles/run-tempest/defaults/main.yaml b/roles/run-tempest/defaults/main.yaml
index 5867b6c..52713be 100644
--- a/roles/run-tempest/defaults/main.yaml
+++ b/roles/run-tempest/defaults/main.yaml
@@ -1,7 +1,6 @@
devstack_base_dir: /opt/stack
tempest_test_regex: ''
tox_envlist: smoke
-tempest_black_regex: ''
tox_extra_args: ''
tempest_test_timeout: ''
stable_constraints_file: "{{ devstack_base_dir }}/requirements/upper-constraints.txt"
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
index 1de3725..999e256 100644
--- a/roles/run-tempest/tasks/main.yaml
+++ b/roles/run-tempest/tasks/main.yaml
@@ -36,6 +36,9 @@
tempest_tox_environment: "{{ tempest_tox_environment | combine({'OS_TEST_TIMEOUT': tempest_test_timeout}) }}"
when: tempest_test_timeout != ''
+# TODO(kopecmartin) remove the following 'when block' after all consumers of
+# the role have switched to tempest_test_exclude_list option, until then it's
+# kept here for backward compatibility
- when:
- tempest_test_blacklist is defined
block:
@@ -50,10 +53,42 @@
blacklist_option: "--blacklist-file={{ tempest_test_blacklist|quote }}"
when: blacklist_stat.stat.exists
+- when:
+ - tempest_test_exclude_list is defined
+ block:
+ - name: Check for test exclude list file
+ stat:
+ path: "{{ tempest_test_exclude_list }}"
+ register:
+ exclude_list_stat
+
+ - name: Build exclude list option
+ set_fact:
+ exclude_list_option: "--exclude-list={{ tempest_test_exclude_list|quote }}"
+ when: exclude_list_stat.stat.exists
+
+# TODO(kopecmartin) remove this after all consumers of the role have switched
+# to tempest_exclude_regex option, until then it's kept here for the backward
+# compatibility
+- name: Build exclude regex (old param)
+ set_fact:
+ tempest_test_exclude_regex: "--black-regex={{tempest_black_regex|quote}}"
+ when:
+ - tempest_black_regex is defined
+ - tempest_exclude_regex is not defined
+
+- name: Build exclude regex (new param)
+ set_fact:
+ tempest_test_exclude_regex: "--exclude-regex={{tempest_exclude_regex|quote}}"
+ when:
+ - tempest_black_regex is not defined
+ - tempest_exclude_regex is defined
+
- name: Run Tempest
- command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} {{blacklist_option|default('')}} \
+ command: tox -e {{tox_envlist}} {{tox_extra_args}} -- {{tempest_test_regex|quote}} \
+ {{blacklist_option|default('')}} {{exclude_list_option|default('')}} \
--concurrency={{tempest_concurrency|default(default_concurrency)}} \
- --black-regex={{tempest_black_regex|quote}}
+ {{tempest_test_exclude_regex|default('')}}
args:
chdir: "{{devstack_base_dir}}/tempest"
register: tempest_run_result
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 0296220..275a26f 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -110,20 +110,30 @@
Creating another server image when first image is being saved is
not allowed.
"""
- # Create first snapshot
- image = self.create_image_from_server(self.server_id)
- self.addCleanup(self._reset_server)
+ try:
+ # Create first snapshot
+ image = self.create_image_from_server(self.server_id)
+ self.addCleanup(self._reset_server)
- # Create second snapshot
- self.assertRaises(lib_exc.Conflict, self.create_image_from_server,
- self.server_id)
+ # Create second snapshot
+ self.assertRaises(lib_exc.Conflict, self.create_image_from_server,
+ self.server_id)
- if api_version_utils.compare_version_header_to_response(
- "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
- image_id = image['image_id']
- else:
- image_id = data_utils.parse_image_id(image.response['location'])
- self.client.delete_image(image_id)
+ if api_version_utils.compare_version_header_to_response(
+ "OpenStack-API-Version", "compute 2.45", image.response, "lt"):
+ image_id = image['image_id']
+ else:
+ image_id = data_utils.parse_image_id(
+ image.response['location'])
+ self.client.delete_image(image_id)
+
+ except lib_exc.TimeoutException as ex:
+ # The test cannot capture the image saving state.
+ # If the timeout is reached, there is no need to check the state:
+ # the server would no longer be in the 'SAVING' state, and this
+ # test case has no scope for other state transitions.
+ # Hence, skip the test.
+ raise self.skipException("This test is skipped because " + str(ex))
@decorators.attr(type=['negative'])
@decorators.idempotent_id('084f0cbc-500a-4963-8a4e-312905862581')
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index dd7d5af..e5137f4 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -142,6 +142,26 @@
self.roles_client.delete_role_from_user_on_domain(
self.domain['id'], self.user_body['id'], self.role['id'])
+ @testtools.skipIf(CONF.identity_feature_enabled.immutable_user_source,
+ 'Skipped because environment has an immutable user '
+ 'source and solely provides read-only access to users.')
+ @decorators.idempotent_id('e5a81737-d294-424d-8189-8664858aae4c')
+ def test_grant_list_revoke_role_to_user_on_system(self):
+ self.roles_client.create_user_role_on_system(
+ self.user_body['id'], self.role['id'])
+
+ roles = self.roles_client.list_user_roles_on_system(
+ self.user_body['id'])['roles']
+
+ self.assertEqual(1, len(roles))
+ self.assertEqual(self.role['id'], roles[0]['id'])
+
+ self.roles_client.check_user_role_existence_on_system(
+ self.user_body['id'], self.role['id'])
+
+ self.roles_client.delete_role_from_user_on_system(
+ self.user_body['id'], self.role['id'])
+
@decorators.idempotent_id('cbf11737-1904-4690-9613-97bcbb3df1c4')
@testtools.skipIf(CONF.identity_feature_enabled.immutable_user_source,
'Skipped because environment has an immutable user '
@@ -197,6 +217,23 @@
self.roles_client.delete_role_from_group_on_domain(
self.domain['id'], self.group_body['id'], self.role['id'])
+ @decorators.idempotent_id('c888fe4f-8018-48db-b959-542225c1b4b6')
+ def test_grant_list_revoke_role_to_group_on_system(self):
+ self.roles_client.create_group_role_on_system(
+ self.group_body['id'], self.role['id'])
+
+ roles = self.roles_client.list_group_roles_on_system(
+ self.group_body['id'])['roles']
+
+ self.assertEqual(1, len(roles))
+ self.assertEqual(self.role['id'], roles[0]['id'])
+
+ self.roles_client.check_role_from_group_on_system_existence(
+ self.group_body['id'], self.role['id'])
+
+ self.roles_client.delete_role_from_group_on_system(
+ self.group_body['id'], self.role['id'])
+
@decorators.idempotent_id('f5654bcc-08c4-4f71-88fe-05d64e06de94')
def test_list_roles(self):
"""Test listing roles"""
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 9e25901..ca72388 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -90,7 +90,7 @@
self.assertEqual('uploading', body['status'])
# import image from staging to backend
self.client.image_import(image['id'], method='glance-direct')
- self.client.wait_for_resource_activation(image['id'])
+ waiters.wait_for_image_imported_to_stores(self.client, image['id'])
@decorators.idempotent_id('f6feb7a4-b04f-4706-a011-206129f83e62')
def test_image_web_download_import(self):
@@ -111,7 +111,7 @@
image_uri = CONF.image.http_image
self.client.image_import(image['id'], method='web-download',
image_uri=image_uri)
- self.client.wait_for_resource_activation(image['id'])
+ waiters.wait_for_image_imported_to_stores(self.client, image['id'])
class MultiStoresImportImagesTest(base.BaseV2ImageTest):
@@ -158,7 +158,7 @@
self.client.stage_image_file(
image['id'],
- six.BytesIO(data_utils.random_bytes(10485760)))
+ six.BytesIO(data_utils.random_bytes()))
# Check image status is 'uploading'
body = self.client.show_image(image['id'])
self.assertEqual(image['id'], body['id'])
diff --git a/tempest/cmd/run.py b/tempest/cmd/run.py
index 8bebce2..2669ff7 100644
--- a/tempest/cmd/run.py
+++ b/tempest/cmd/run.py
@@ -22,10 +22,10 @@
* ``--regex/-r``: This is a selection regex like what stestr uses. It will run
any tests that match on re.match() with the regex
* ``--smoke/-s``: Run all the tests tagged as smoke
-* ``--black-regex``: It allows to do simple test exclusion via passing a
- rejection/black regexp
+* ``--exclude-regex``: It allows simple test exclusion by passing a
+ rejection/exclude regexp
-There are also the ``--blacklist-file`` and ``--whitelist-file`` options that
+There are also the ``--exclude-list`` and ``--include-list`` options that
let you pass a filepath to tempest run with the file format being a line
separated regex, with '#' used to signify the start of a comment on a line.
For example::
@@ -128,6 +128,7 @@
import sys
from cliff import command
+from oslo_log import log
from oslo_serialization import jsonutils as json
from stestr import commands
@@ -141,6 +142,8 @@
CONF = config.CONF
SAVED_STATE_JSON = "saved_state.json"
+LOG = log.getLogger(__name__)
+
class TempestRun(command.Command):
@@ -201,23 +204,71 @@
self._init_state()
regex = self._build_regex(parsed_args)
+
+ # temporary method for parsing deprecated and new stestr options
+ # and showing warning messages in order to make the transition
+ # smoother for all tempest consumers
+ # TODO(kopecmartin) remove this after stestr>=3.1.0 is used
+ # in all supported OpenStack releases
+ def parse_dep(old_o, old_v, new_o, new_v):
+ ret = ''
+ if old_v:
+ LOG.warning("'%s' option is deprecated, use '%s' instead "
+ "which is functionally equivalent. Right now "
+ "Tempest still supports this option for "
+ "backward compatibility, however, it will be "
+ "removed soon.",
+ old_o, new_o)
+ ret = old_v
+ if old_v and new_v:
+ # both options are specified
+ LOG.warning("'%s' and '%s' are specified at the same time, "
+ "'%s' takes precedence over '%s'",
+ new_o, old_o, new_o, old_o)
+ if new_v:
+ ret = new_v
+ return ret
+ ex_regex = parse_dep('--black-regex', parsed_args.black_regex,
+ '--exclude-regex', parsed_args.exclude_regex)
+ ex_list = parse_dep('--blacklist-file', parsed_args.blacklist_file,
+ '--exclude-list', parsed_args.exclude_list)
+ in_list = parse_dep('--whitelist-file', parsed_args.whitelist_file,
+ '--include-list', parsed_args.include_list)
+
return_code = 0
if parsed_args.list_tests:
- return_code = commands.list_command(
- filters=regex, whitelist_file=parsed_args.whitelist_file,
- blacklist_file=parsed_args.blacklist_file,
- black_regex=parsed_args.black_regex)
+ try:
+ return_code = commands.list_command(
+ filters=regex, include_list=in_list,
+ exclude_list=ex_list, exclude_regex=ex_regex)
+ except TypeError:
+ # exclude_list, include_list and exclude_regex are defined only
+ # in stestr >= 3.1.0, this except block catches the case when
+ # tempest is executed with an older stestr
+ return_code = commands.list_command(
+ filters=regex, whitelist_file=in_list,
+ blacklist_file=ex_list, black_regex=ex_regex)
else:
serial = not parsed_args.parallel
- return_code = commands.run_command(
- filters=regex, subunit_out=parsed_args.subunit,
- serial=serial, concurrency=parsed_args.concurrency,
- blacklist_file=parsed_args.blacklist_file,
- whitelist_file=parsed_args.whitelist_file,
- black_regex=parsed_args.black_regex,
- worker_path=parsed_args.worker_file,
- load_list=parsed_args.load_list, combine=parsed_args.combine)
+ params = {
+ 'filters': regex, 'subunit_out': parsed_args.subunit,
+ 'serial': serial, 'concurrency': parsed_args.concurrency,
+ 'worker_path': parsed_args.worker_file,
+ 'load_list': parsed_args.load_list,
+ 'combine': parsed_args.combine
+ }
+ try:
+ return_code = commands.run_command(
+ **params, exclude_list=ex_list,
+ include_list=in_list, exclude_regex=ex_regex)
+ except TypeError:
+ # exclude_list, include_list and exclude_regex are defined only
+ # in stestr >= 3.1.0, this except block catches the case when
+ # tempest is executed with an older stestr
+ return_code = commands.run_command(
+ **params, blacklist_file=ex_list,
+ whitelist_file=in_list, black_regex=ex_regex)
if return_code > 0:
sys.exit(return_code)
return return_code
@@ -271,15 +322,38 @@
help='A normal stestr selection regex used to '
'specify a subset of tests to run')
parser.add_argument('--black-regex', dest='black_regex',
+ help='DEPRECATED: This option is deprecated and '
+ 'will be removed soon, use --exclude-regex '
+ 'which is functionally equivalent. If this '
+ 'is specified at the same time as '
+ '--exclude-regex, this flag will be ignored '
+ 'and --exclude-regex will be used')
+ parser.add_argument('--exclude-regex', dest='exclude_regex',
help='A regex to exclude tests that match it')
parser.add_argument('--whitelist-file', '--whitelist_file',
- help="Path to a whitelist file, this file "
- "contains a separate regex on each "
- "newline.")
+ help='DEPRECATED: This option is deprecated and '
+ 'will be removed soon, use --include-list '
+ 'which is functionally equivalent. If this '
+ 'is specified at the same time as '
+ '--include-list, this flag will be ignored '
+ 'and --include-list will be used')
+ parser.add_argument('--include-list', '--include_list',
+ help="Path to an include file which contains the "
+ "regex for tests to be included in tempest "
+ "run, this file contains a separate regex on "
+ "each newline.")
parser.add_argument('--blacklist-file', '--blacklist_file',
- help='Path to a blacklist file, this file '
- 'contains a separate regex exclude on '
- 'each newline')
+ help='DEPRECATED: This option is deprecated and '
+ 'will be removed soon, use --exclude-list '
+ 'which is functionally equivalent. If this '
+ 'is specified at the same time as '
+ '--exclude-list, this flag will be ignored '
+ 'and --exclude-list will be used')
+ parser.add_argument('--exclude-list', '--exclude_list',
+ help='Path to an exclude file which contains the '
+ 'regex for tests to be excluded in tempest '
+ 'run, this file contains a separate regex on '
+ 'each newline.')
parser.add_argument('--load-list', '--load_list',
help='Path to a non-regex whitelist file, '
'this file contains a separate test '
diff --git a/tempest/common/credentials_factory.py b/tempest/common/credentials_factory.py
index c6e5dcb..2d486a7 100644
--- a/tempest/common/credentials_factory.py
+++ b/tempest/common/credentials_factory.py
@@ -245,6 +245,9 @@
if identity_version == 'v3':
conf_attributes.append('domain_name')
+ conf_attributes.append('user_domain_name')
+ conf_attributes.append('project_domain_name')
+ conf_attributes.append('system')
# Read the parts of credentials from config
params = config.service_client_config()
for attr in conf_attributes:
@@ -284,7 +287,8 @@
if identity_version == 'v3':
domain_fields = set(x for x in auth.KeystoneV3Credentials.ATTRIBUTES
if 'domain' in x)
- if not domain_fields.intersection(kwargs.keys()):
+ if (not params.get('system') and
+ not domain_fields.intersection(kwargs.keys())):
domain_name = CONF.auth.default_credentials_domain_name
# NOTE(andreaf) Setting domain_name implicitly sets user and
# project domain names, if they are None
diff --git a/tempest/common/utils/__init__.py b/tempest/common/utils/__init__.py
index 914acf7..38881ee 100644
--- a/tempest/common/utils/__init__.py
+++ b/tempest/common/utils/__init__.py
@@ -59,6 +59,7 @@
# So we should set this True here.
'identity': True,
'object_storage': CONF.service_available.swift,
+ 'dashboard': CONF.service_available.horizon,
}
return service_list
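With ``dashboard`` registered in the service availability map, a test can declare the dependency through the existing ``services`` decorator and gets skipped automatically when ``[service_available] horizon`` is disabled. An illustrative sketch (class and test names are not the actual merged horizon test)::

    from tempest.common import utils
    from tempest.lib import decorators
    from tempest.scenario import manager


    class DashboardSmokeTest(manager.ScenarioTest):

        @decorators.idempotent_id('11111111-1111-1111-1111-111111111111')
        @utils.services('dashboard')
        def test_dashboard_reachable(self):
            # A real test would talk to CONF.dashboard.dashboard_url here.
            pass
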
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index e3c33c7..eaac05e 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -193,26 +193,34 @@
raise lib_exc.TimeoutException(message)
-def wait_for_image_imported_to_stores(client, image_id, stores):
+def wait_for_image_imported_to_stores(client, image_id, stores=None):
"""Waits for an image to be imported to all requested stores.
+ Short circuits to fail if the server reports failure of any store.
+ If stores is None, just wait for status==active.
+
The client should also have build_interval and build_timeout attributes.
"""
+ exc_cls = lib_exc.TimeoutException
start = int(time.time())
while int(time.time()) - start < client.build_timeout:
image = client.show_image(image_id)
- if image['status'] == 'active' and image['stores'] == stores:
+ if image['status'] == 'active' and (stores is None or
+ image['stores'] == stores):
return
+ if image.get('os_glance_failed_import'):
+ exc_cls = lib_exc.OtherRestClientException
+ break
time.sleep(client.build_interval)
message = ('Image %s failed to import on stores: %s' %
- (image_id, str(image['os_glance_failed_import'])))
+ (image_id, str(image.get('os_glance_failed_import'))))
caller = test_utils.find_test_caller()
if caller:
message = '(%s) %s' % (caller, message)
- raise lib_exc.TimeoutException(message)
+ raise exc_cls(message)
def wait_for_image_copied_to_stores(client, image_id):
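The widened waiter can now serve both as a plain 'active' wait and as a per-store wait; a thin wrapper sketch, assuming ``image_client`` is a glance v2 client with ``build_interval``/``build_timeout`` set::

    from tempest.common import waiters


    def wait_for_import(image_client, image_id, stores=None):
        # With stores=None this only waits for status == 'active'; with a
        # value (the comma-separated form glance reports in the image
        # 'stores' field) it also waits for that match, and it now raises
        # OtherRestClientException early if os_glance_failed_import is set.
        waiters.wait_for_image_imported_to_stores(image_client, image_id,
                                                  stores)
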
diff --git a/tempest/config.py b/tempest/config.py
index b36643a..1367678 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -92,7 +92,24 @@
cfg.StrOpt('admin_domain_name',
default='Default',
help="Admin domain name for authentication (Keystone V3). "
- "The same domain applies to user and project"),
+ "The same domain applies to user and project if "
+ "admin_user_domain_name and admin_project_domain_name "
+ "are not specified"),
+ cfg.StrOpt('admin_user_domain_name',
+ help="Domain name that contains the admin user (Keystone V3). "
+ "May be different from admin_project_domain_name and "
+ "admin_domain_name"),
+ cfg.StrOpt('admin_project_domain_name',
+ help="Domain name that contains the project given by "
+ "admin_project_name (Keystone V3). May be different from "
+ "admin_user_domain_name and admin_domain_name"),
+ cfg.StrOpt('admin_system',
+ default=None,
+ help="The system scope on which an admin user has an admin "
+ "role assignment, if any. Valid values are 'all' or None. "
+ "This must be set to 'all' if using the "
+ "[oslo_policy]/enforce_scope=true option for the "
+ "identity service."),
]
identity_group = cfg.OptGroup(name='identity',
@@ -835,6 +852,18 @@
'This value will be increased in case of conflict.')
]
+dashboard_group = cfg.OptGroup(name="dashboard",
+ title="Dashboard options")
+
+DashboardGroup = [
+ cfg.StrOpt('dashboard_url',
+ default='http://localhost/',
+ help="Where the dashboard can be found"),
+ cfg.BoolOpt('disable_ssl_certificate_validation',
+ default=False,
+ help="Set to True if using self-signed SSL certificates."),
+]
+
validation_group = cfg.OptGroup(name='validation',
title='SSH Validation options')
@@ -1180,6 +1209,42 @@
cfg.BoolOpt('nova',
default=True,
help="Whether or not nova is expected to be available"),
+ cfg.BoolOpt('horizon',
+ default=True,
+ help="Whether or not horizon is expected to be available"),
+]
+
+enforce_scope_group = cfg.OptGroup(name="enforce_scope",
+ title="OpenStack Services with "
+ "enforce scope")
+
+
+EnforceScopeGroup = [
+ cfg.BoolOpt('nova',
+ default=False,
+ help='Do the compute service API policies enforce scope? '
+ 'This configuration value should be same as '
+ 'nova.conf: [oslo_policy].enforce_scope option.'),
+ cfg.BoolOpt('neutron',
+ default=False,
+ help='Do the network service API policies enforce scope? '
+ 'This configuration value should be same as '
+ 'neutron.conf: [oslo_policy].enforce_scope option.'),
+ cfg.BoolOpt('glance',
+ default=False,
+ help='Do the Image service API policies enforce scope? '
+ 'This configuration value should be same as '
+ 'glance.conf: [oslo_policy].enforce_scope option.'),
+ cfg.BoolOpt('cinder',
+ default=False,
+ help='Do the Volume service API policies enforce scope? '
+ 'This configuration value should be same as '
+ 'cinder.conf: [oslo_policy].enforce_scope option.'),
+ cfg.BoolOpt('keystone',
+ default=False,
+ help='Do the Identity service API policies enforce scope? '
+ 'This configuration value should be same as '
+ 'keystone.conf: [oslo_policy].enforce_scope option.'),
]
debug_group = cfg.OptGroup(name="debug",
@@ -1243,6 +1308,7 @@
(image_feature_group, ImageFeaturesGroup),
(network_group, NetworkGroup),
(network_feature_group, NetworkFeaturesGroup),
+ (dashboard_group, DashboardGroup),
(validation_group, ValidationGroup),
(volume_group, VolumeGroup),
(volume_feature_group, VolumeFeaturesGroup),
@@ -1250,6 +1316,7 @@
(object_storage_feature_group, ObjectStoreFeaturesGroup),
(scenario_group, ScenarioGroup),
(service_available_group, ServiceAvailableGroup),
+ (enforce_scope_group, EnforceScopeGroup),
(debug_group, DebugGroup),
(placement_group, PlacementGroup),
(profiler_group, ProfilerGroup),
@@ -1310,6 +1377,7 @@
self.image_feature_enabled = _CONF['image-feature-enabled']
self.network = _CONF.network
self.network_feature_enabled = _CONF['network-feature-enabled']
+ self.dashboard = _CONF.dashboard
self.validation = _CONF.validation
self.volume = _CONF.volume
self.volume_feature_enabled = _CONF['volume-feature-enabled']
@@ -1318,6 +1386,7 @@
'object-storage-feature-enabled']
self.scenario = _CONF.scenario
self.service_available = _CONF.service_available
+ self.enforce_scope = _CONF.enforce_scope
self.debug = _CONF.debug
logging.tempest_set_log_file('tempest.log')
# Setting attributes for plugins
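A short sketch of how test code would consume the new ``[dashboard]``, ``[service_available]`` and ``[enforce_scope]`` options added above (option names are taken from this hunk; the surrounding logic is illustrative)::

    from tempest import config

    CONF = config.CONF

    dashboard_url = CONF.dashboard.dashboard_url   # default: http://localhost/
    verify_ssl = not CONF.dashboard.disable_ssl_certificate_validation

    if CONF.service_available.horizon and CONF.enforce_scope.keystone:
        # e.g. use system-scoped admin credentials, since keystone
        # enforces [oslo_policy]/enforce_scope on its policies
        pass
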
diff --git a/tempest/lib/auth.py b/tempest/lib/auth.py
index 7c279ab..9f8c7c5 100644
--- a/tempest/lib/auth.py
+++ b/tempest/lib/auth.py
@@ -428,7 +428,7 @@
class KeystoneV3AuthProvider(KeystoneAuthProvider):
"""Provides authentication based on the Identity V3 API"""
- SCOPES = set(['project', 'domain', 'unscoped', None])
+ SCOPES = set(['system', 'project', 'domain', 'unscoped', None])
def _auth_client(self, auth_url):
return json_v3id.V3TokenClient(
@@ -441,8 +441,8 @@
Fields available in Credentials are passed to the token request,
depending on the value of scope. Valid values for scope are: "project",
- "domain". Any other string (e.g. "unscoped") or None will lead to an
- unscoped token request.
+ "domain", or "system". Any other string (e.g. "unscoped") or None will
+ lead to an unscoped token request.
"""
auth_params = dict(
@@ -465,12 +465,16 @@
domain_id=self.credentials.domain_id,
domain_name=self.credentials.domain_name)
+ if self.scope == 'system':
+ auth_params.update(system='all')
+
return auth_params
def _fill_credentials(self, auth_data_body):
- # project or domain, depending on the scope
+ # project, domain, or system depending on the scope
project = auth_data_body.get('project', None)
domain = auth_data_body.get('domain', None)
+ system = auth_data_body.get('system', None)
# user is always there
user = auth_data_body['user']
# Set project fields
@@ -490,6 +494,9 @@
self.credentials.domain_id = domain['id']
if self.credentials.domain_name is None:
self.credentials.domain_name = domain['name']
+ # Set system scope
+ if system is not None:
+ self.credentials.system = 'all'
# Set user fields
if self.credentials.username is None:
self.credentials.username = user['name']
@@ -677,7 +684,8 @@
raise exceptions.InvalidCredentials(msg)
for key in attr:
if key in self.ATTRIBUTES:
- setattr(self, key, attr[key])
+ if attr[key] is not None:
+ setattr(self, key, attr[key])
else:
msg = '%s is not a valid attr for %s' % (key, self.__class__)
raise exceptions.InvalidCredentials(msg)
@@ -779,7 +787,7 @@
ATTRIBUTES = ['domain_id', 'domain_name', 'password', 'username',
'project_domain_id', 'project_domain_name', 'project_id',
'project_name', 'tenant_id', 'tenant_name', 'user_domain_id',
- 'user_domain_name', 'user_id']
+ 'user_domain_name', 'user_id', 'system']
COLLISIONS = [('project_name', 'tenant_name'), ('project_id', 'tenant_id')]
def __setattr__(self, key, value):
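With ``system`` now a first-class credential attribute and an accepted scope, a system-scoped auth provider can be built directly from tempest.lib; a minimal sketch with placeholder credential values and endpoint::

    from tempest.lib import auth

    creds = auth.KeystoneV3Credentials(
        username='tempest-system-admin-1', password='password',
        user_domain_name='Default', system='all')
    provider = auth.KeystoneV3AuthProvider(
        creds, 'https://keystone.example.com/identity/v3', scope='system')
    # Token requests issued through this provider are now system-scoped.
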
diff --git a/tempest/lib/common/cred_client.py b/tempest/lib/common/cred_client.py
index a81f53c..e16a565 100644
--- a/tempest/lib/common/cred_client.py
+++ b/tempest/lib/common/cred_client.py
@@ -39,11 +39,15 @@
self.projects_client = projects_client
self.roles_client = roles_client
- def create_user(self, username, password, project, email):
+ def create_user(self, username, password, project=None, email=None):
params = {'name': username,
- 'password': password,
- self.project_id_param: project['id'],
- 'email': email}
+ 'password': password}
+ # with keystone v3, a default project is not required
+ if project:
+ params[self.project_id_param] = project['id']
+ # email is not a first-class attribute of a user
+ if email:
+ params['email'] = email
user = self.users_client.create_user(**params)
if 'user' in user:
user = user['user']
@@ -83,12 +87,15 @@
role['id'], project['id'], user['id'])
@abc.abstractmethod
- def get_credentials(self, user, project, password):
+ def get_credentials(
+ self, user, project, password, domain=None, system=None):
"""Produces a Credentials object from the details provided
:param user: a user dict
- :param project: a project dict
+ :param project: a project dict or None if using domain or system scope
:param password: the password as a string
+ :param domain: a domain dict
+ :param system: the system scope to use, e.g. 'all', or None
:return: a Credentials object with all the available credential details
"""
pass
@@ -116,7 +123,8 @@
def delete_project(self, project_id):
self.projects_client.delete_tenant(project_id)
- def get_credentials(self, user, project, password):
+ def get_credentials(
+ self, user, project, password, domain=None, system=None):
# User and project already include both ID and name here,
# so there's no need to use the fill_in mode
return auth.get_credentials(
@@ -156,23 +164,46 @@
def delete_project(self, project_id):
self.projects_client.delete_project(project_id)
- def get_credentials(self, user, project, password):
+ def create_domain(self, name, description):
+ domain = self.domains_client.create_domain(
+ name=name, description=description)['domain']
+ return domain
+
+ def delete_domain(self, domain_id):
+ self.domains_client.update_domain(domain_id, enabled=False)
+ self.domains_client.delete_domain(domain_id)
+
+ def get_credentials(
+ self, user, project, password, domain=None, system=None):
# User, project and domain already include both ID and name here,
# so there's no need to use the fill_in mode.
# NOTE(andreaf) We need to set all fields in the returned credentials.
# Scope is then used to pick only those relevant for the type of
# token needed by each service client.
+ if project:
+ project_name = project['name']
+ project_id = project['id']
+ else:
+ project_name = None
+ project_id = None
+ if domain:
+ domain_name = domain['name']
+ domain_id = domain['id']
+ else:
+ domain_name = self.creds_domain['name']
+ domain_id = self.creds_domain['id']
return auth.get_credentials(
auth_url=None,
fill_in=False,
identity_version='v3',
username=user['name'], user_id=user['id'],
- project_name=project['name'], project_id=project['id'],
+ project_name=project_name, project_id=project_id,
password=password,
project_domain_id=self.creds_domain['id'],
project_domain_name=self.creds_domain['name'],
- domain_id=self.creds_domain['id'],
- domain_name=self.creds_domain['name'])
+ domain_id=domain_id,
+ domain_name=domain_name,
+ system=system)
def assign_user_role_on_domain(self, user, role_name, domain=None):
"""Assign the specified role on a domain
@@ -197,6 +228,23 @@
LOG.debug("Role %s already assigned on domain %s for user %s",
role['id'], domain['id'], user['id'])
+ def assign_user_role_on_system(self, user, role_name):
+ """Assign the specified role on the system
+
+ :param user: a user dict
+ :param role_name: name of the role to be assigned
+ """
+ role = self._check_role_exists(role_name)
+ if not role:
+ msg = 'No "%s" role found' % role_name
+ raise lib_exc.NotFound(msg)
+ try:
+ self.roles_client.create_user_role_on_system(
+ user['id'], role['id'])
+ except lib_exc.Conflict:
+ LOG.debug("Role %s already assigned on the system for user %s",
+ role['id'], user['id'])
+
def get_creds_client(identity_client,
projects_client,
diff --git a/tempest/lib/common/dynamic_creds.py b/tempest/lib/common/dynamic_creds.py
index 8b82391..ecbbe8f 100644
--- a/tempest/lib/common/dynamic_creds.py
+++ b/tempest/lib/common/dynamic_creds.py
@@ -142,7 +142,14 @@
else:
# We use a dedicated client manager for identity client in case we
# need a different token scope for them.
- scope = 'domain' if self.identity_admin_domain_scope else 'project'
+ if self.default_admin_creds.system:
+ scope = 'system'
+ elif (self.identity_admin_domain_scope and
+ (self.default_admin_creds.domain_id or
+ self.default_admin_creds.domain_name)):
+ scope = 'domain'
+ else:
+ scope = 'project'
identity_os = clients.ServiceClients(self.default_admin_creds,
self.identity_uri,
scope=scope)
@@ -157,62 +164,98 @@
os.network.PortsClient(),
os.network.SecurityGroupsClient())
- def _create_creds(self, admin=False, roles=None):
+ def _create_creds(self, admin=False, roles=None, scope='project'):
"""Create credentials with random name.
- Creates project and user. When admin flag is True create user
- with admin role. Assign user with additional roles (for example
- _member_) and roles requested by caller.
+ Creates a user and role assignments on a project, domain, or system. When
+ the admin flag is True, creates the user with the admin role on the
+ resource. If roles are provided, assigns those roles on the resource.
+ Otherwise, assigns the user the 'member' role on the resource.
:param admin: Flag if to assign to the user admin role
:type admin: bool
:param roles: Roles to assign for the user
:type roles: list
+ :param str scope: The scope for the role assignment, may be one of
+ 'project', 'domain', or 'system'.
:return: Readonly Credentials with network resources
+ :raises: Exception if scope is invalid
"""
+ if not roles:
+ roles = []
root = self.name
- project_name = data_utils.rand_name(root, prefix=self.resource_prefix)
- project_desc = project_name + "-desc"
- project = self.creds_client.create_project(
- name=project_name, description=project_desc)
+ cred_params = {
+ 'project': None,
+ 'domain': None,
+ 'system': None
+ }
+ if scope == 'project':
+ project_name = data_utils.rand_name(
+ root, prefix=self.resource_prefix)
+ project_desc = project_name + '-desc'
+ project = self.creds_client.create_project(
+ name=project_name, description=project_desc)
- # NOTE(andreaf) User and project can be distinguished from the context,
- # having the same ID in both makes it easier to match them and debug.
- username = project_name
- user_password = data_utils.rand_password()
- email = data_utils.rand_name(
- root, prefix=self.resource_prefix) + "@example.com"
- user = self.creds_client.create_user(
- username, user_password, project, email)
- role_assigned = False
+ # NOTE(andreaf) User and project can be distinguished from the
+ # context, having the same ID in both makes it easier to match them
+ # and debug.
+ username = project_name + '-project'
+ cred_params['project'] = project
+ elif scope == 'domain':
+ domain_name = data_utils.rand_name(
+ root, prefix=self.resource_prefix)
+ domain_desc = domain_name + '-desc'
+ domain = self.creds_client.create_domain(
+ name=domain_name, description=domain_desc)
+ username = domain_name + '-domain'
+ cred_params['domain'] = domain
+ elif scope == 'system':
+ prefix = data_utils.rand_name(root, prefix=self.resource_prefix)
+ username = prefix + '-system'
+ cred_params['system'] = 'all'
+ else:
+ raise lib_exc.InvalidScopeType(scope=scope)
if admin:
- self.creds_client.assign_user_role(user, project, self.admin_role)
- role_assigned = True
+ username += '-admin'
+ elif roles and len(roles) == 1:
+ username += '-' + roles[0]
+ user_password = data_utils.rand_password()
+ cred_params['password'] = user_password
+ user = self.creds_client.create_user(
+ username, user_password)
+ cred_params['user'] = user
+ roles_to_assign = [r for r in roles]
+ if admin:
+ roles_to_assign.append(self.admin_role)
+ self.creds_client.assign_user_role(
+ user, project, self.identity_admin_role)
if (self.identity_version == 'v3' and
self.identity_admin_domain_scope):
self.creds_client.assign_user_role_on_domain(
user, self.identity_admin_role)
# Add roles specified in config file
- for conf_role in self.extra_roles:
- self.creds_client.assign_user_role(user, project, conf_role)
- role_assigned = True
- # Add roles requested by caller
- if roles:
- for role in roles:
- self.creds_client.assign_user_role(user, project, role)
- role_assigned = True
+ roles_to_assign.extend(self.extra_roles)
+ # If there are still no roles, default to 'member'
# NOTE(mtreinish) For a user to have access to a project with v3 auth
# it must beassigned a role on the project. So we need to ensure that
# our newly created user has a role on the newly created project.
- if self.identity_version == 'v3' and not role_assigned:
+ if not roles_to_assign and self.identity_version == 'v3':
+ roles_to_assign = ['member']
try:
self.creds_client.create_user_role('member')
except lib_exc.Conflict:
LOG.warning('member role already exists, ignoring conflict.')
- self.creds_client.assign_user_role(user, project, 'member')
+ for role in roles_to_assign:
+ if scope == 'project':
+ self.creds_client.assign_user_role(user, project, role)
+ elif scope == 'domain':
+ self.creds_client.assign_user_role_on_domain(
+ user, role, domain)
+ elif scope == 'system':
+ self.creds_client.assign_user_role_on_system(user, role)
- creds = self.creds_client.get_credentials(user, project, user_password)
+ creds = self.creds_client.get_credentials(**cred_params)
return cred_provider.TestResources(creds)
def _create_network_resources(self, tenant_id):
@@ -327,16 +370,29 @@
self.routers_admin_client.add_router_interface(router_id,
subnet_id=subnet_id)
- def get_credentials(self, credential_type):
- if self._creds.get(str(credential_type)):
+ def get_credentials(self, credential_type, scope=None):
+ if not scope and self._creds.get(str(credential_type)):
credentials = self._creds[str(credential_type)]
+ elif scope and self._creds.get("%s_%s" % (scope, credential_type[0])):
+ credentials = self._creds["%s_%s" % (scope, credential_type[0])]
else:
- if credential_type in ['primary', 'alt', 'admin']:
+ if scope:
+ if credential_type == 'admin':
+ credentials = self._create_creds(
+ admin=True, scope=scope)
+ else:
+ credentials = self._create_creds(
+ roles=credential_type, scope=scope)
+ elif credential_type in ['primary', 'alt', 'admin']:
is_admin = (credential_type == 'admin')
credentials = self._create_creds(admin=is_admin)
else:
credentials = self._create_creds(roles=credential_type)
- self._creds[str(credential_type)] = credentials
+ if scope:
+ self._creds["%s_%s" %
+ (scope, credential_type[0])] = credentials
+ else:
+ self._creds[str(credential_type)] = credentials
# Maintained until tests are ported
LOG.info("Acquired dynamic creds:\n"
" credentials: %s", credentials)
@@ -358,6 +414,33 @@
def get_alt_creds(self):
return self.get_credentials('alt')
+ def get_system_admin_creds(self):
+ return self.get_credentials(['admin'], scope='system')
+
+ def get_system_member_creds(self):
+ return self.get_credentials(['member'], scope='system')
+
+ def get_system_reader_creds(self):
+ return self.get_credentials(['reader'], scope='system')
+
+ def get_domain_admin_creds(self):
+ return self.get_credentials(['admin'], scope='domain')
+
+ def get_domain_member_creds(self):
+ return self.get_credentials(['member'], scope='domain')
+
+ def get_domain_reader_creds(self):
+ return self.get_credentials(['reader'], scope='domain')
+
+ def get_project_admin_creds(self):
+ return self.get_credentials(['admin'], scope='project')
+
+ def get_project_member_creds(self):
+ return self.get_credentials(['member'], scope='project')
+
+ def get_project_reader_creds(self):
+ return self.get_credentials(['reader'], scope='project')
+
def get_creds_by_roles(self, roles, force_new=False):
roles = list(set(roles))
# The roles list as a str will become the index as the dict key for
@@ -465,6 +548,16 @@
except lib_exc.NotFound:
LOG.warning("tenant with name: %s not found for delete",
creds.tenant_name)
+
+ # if cred is domain scoped, delete ephemeral domain
+ # do not delete default domain
+ if (hasattr(creds, 'domain_id') and
+ creds.domain_id != creds.project_domain_id):
+ try:
+ self.creds_client.delete_domain(creds.domain_id)
+ except lib_exc.NotFound:
+ LOG.warning("domain with name: %s not found for delete",
+ creds.domain_name)
self._creds = {}
def is_multi_user(self):
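The new persona getters yield one credential per scope/role pair; a small helper sketch that works against a provider instance (dynamic here, though the pre-provisioned provider below grows the same getters)::

    def collect_personas(provider):
        """Fetch one credential object per new persona."""
        return {
            'system_admin': provider.get_system_admin_creds(),
            'domain_reader': provider.get_domain_reader_creds(),
            'project_member': provider.get_project_member_creds(),
            # dynamic provider only: the generic roles-plus-scope form
            # 'project_member': provider.get_credentials(['member'],
            #                                            scope='project'),
        }
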
diff --git a/tempest/lib/common/preprov_creds.py b/tempest/lib/common/preprov_creds.py
index 641d727..8325f44 100644
--- a/tempest/lib/common/preprov_creds.py
+++ b/tempest/lib/common/preprov_creds.py
@@ -104,15 +104,24 @@
return hash_dict
@classmethod
+ def _append_scoped_role(cls, scope, role, account_hash, hash_dict):
+ key = "%s_%s" % (scope, role)
+ hash_dict['scoped_roles'].setdefault(key, [])
+ hash_dict['scoped_roles'][key].append(account_hash)
+ return hash_dict
+
+ @classmethod
def get_hash_dict(cls, accounts, admin_role,
object_storage_operator_role=None,
object_storage_reseller_admin_role=None):
- hash_dict = {'roles': {}, 'creds': {}, 'networks': {}}
+ hash_dict = {'roles': {}, 'creds': {}, 'networks': {},
+ 'scoped_roles': {}}
# Loop over the accounts read from the yaml file
for account in accounts:
roles = []
types = []
+ scope = None
resources = []
if 'roles' in account:
roles = account.pop('roles')
@@ -120,6 +129,12 @@
types = account.pop('types')
if 'resources' in account:
resources = account.pop('resources')
+ if 'project_name' in account:
+ scope = 'project'
+ elif 'domain_name' in account:
+ scope = 'domain'
+ elif 'system' in account:
+ scope = 'system'
temp_hash = hashlib.md5()
account_for_hash = dict((k, v) for (k, v) in account.items()
if k in cls.HASH_CRED_FIELDS)
@@ -129,6 +144,9 @@
for role in roles:
hash_dict = cls._append_role(role, temp_hash_key,
hash_dict)
+ if scope:
+ hash_dict = cls._append_scoped_role(
+ scope, role, temp_hash_key, hash_dict)
# If types are set for the account append the matching role
# subdict with the hash
for type in types:
@@ -201,17 +219,25 @@
'the credentials for this allocation request' % ','.join(names))
raise lib_exc.InvalidCredentials(msg)
- def _get_match_hash_list(self, roles=None):
+ def _get_match_hash_list(self, roles=None, scope=None):
hashes = []
if roles:
# Loop over all the creds for each role in the subdict and generate
# a list of cred lists for each role
for role in roles:
- temp_hashes = self.hash_dict['roles'].get(role, None)
- if not temp_hashes:
- raise lib_exc.InvalidCredentials(
- "No credentials with role: %s specified in the "
- "accounts ""file" % role)
+ if scope:
+ key = "%s_%s" % (scope, role)
+ temp_hashes = self.hash_dict['scoped_roles'].get(key)
+ if not temp_hashes:
+ raise lib_exc.InvalidCredentials(
+ "No credentials matching role: %s, scope: %s "
+ "specified in the accounts file" % (role, scope))
+ else:
+ temp_hashes = self.hash_dict['roles'].get(role, None)
+ if not temp_hashes:
+ raise lib_exc.InvalidCredentials(
+ "No credentials with role: %s specified in the "
+ "accounts file" % role)
hashes.append(temp_hashes)
# Take the list of lists and do a boolean and between each list to
# find the creds which fall under all the specified roles
@@ -239,8 +265,8 @@
temp_creds.pop('password')
return temp_creds
- def _get_creds(self, roles=None):
- useable_hashes = self._get_match_hash_list(roles)
+ def _get_creds(self, roles=None, scope=None):
+ useable_hashes = self._get_match_hash_list(roles, scope)
if not useable_hashes:
msg = 'No users configured for type/roles %s' % roles
raise lib_exc.InvalidCredentials(msg)
@@ -296,6 +322,69 @@
self._creds['alt'] = net_creds
return net_creds
+ def get_system_admin_creds(self):
+ if self._creds.get('system_admin'):
+ return self._creds.get('system_admin')
+ system_admin = self._get_creds(['admin'], scope='system')
+ self._creds['system_admin'] = system_admin
+ return system_admin
+
+ def get_system_member_creds(self):
+ if self._creds.get('system_member'):
+ return self._creds.get('system_member')
+ system_member = self._get_creds(['member'], scope='system')
+ self._creds['system_member'] = system_member
+ return system_member
+
+ def get_system_reader_creds(self):
+ if self._creds.get('system_reader'):
+ return self._creds.get('system_reader')
+ system_reader = self._get_creds(['reader'], scope='system')
+ self._creds['system_reader'] = system_reader
+ return system_reader
+
+ def get_domain_admin_creds(self):
+ if self._creds.get('domain_admin'):
+ return self._creds.get('domain_admin')
+ domain_admin = self._get_creds(['admin'], scope='domain')
+ self._creds['domain_admin'] = domain_admin
+ return domain_admin
+
+ def get_domain_member_creds(self):
+ if self._creds.get('domain_member'):
+ return self._creds.get('domain_member')
+ domain_member = self._get_creds(['member'], scope='domain')
+ self._creds['domain_member'] = domain_member
+ return domain_member
+
+ def get_domain_reader_creds(self):
+ if self._creds.get('domain_reader'):
+ return self._creds.get('domain_reader')
+ domain_reader = self._get_creds(['reader'], scope='domain')
+ self._creds['domain_reader'] = domain_reader
+ return domain_reader
+
+ def get_project_admin_creds(self):
+ if self._creds.get('project_admin'):
+ return self._creds.get('project_admin')
+ project_admin = self._get_creds(['admin'], scope='project')
+ self._creds['project_admin'] = project_admin
+ return project_admin
+
+ def get_project_member_creds(self):
+ if self._creds.get('project_member'):
+ return self._creds.get('project_member')
+ project_member = self._get_creds(['member'], scope='project')
+ self._creds['project_member'] = project_member
+ return project_member
+
+ def get_project_reader_creds(self):
+ if self._creds.get('project_reader'):
+ return self._creds.get('project_reader')
+ project_reader = self._get_creds(['reader'], scope='project')
+ self._creds['project_reader'] = project_reader
+ return project_reader
+
def get_creds_by_roles(self, roles, force_new=False):
roles = list(set(roles))
exist_creds = self._creds.get(six.text_type(roles).encode(
diff --git a/tempest/lib/common/utils/data_utils.py b/tempest/lib/common/utils/data_utils.py
index 44b55eb..b6671b5 100644
--- a/tempest/lib/common/utils/data_utils.py
+++ b/tempest/lib/common/utils/data_utils.py
@@ -169,6 +169,8 @@
:return: size randomly bytes
:rtype: string
"""
+ if size > 1 << 20:
+ raise RuntimeError('Size should be less than 1MiB')
return b''.join([six.int2byte(random.randint(0, 255))
for i in range(size)])
diff --git a/tempest/lib/decorators.py b/tempest/lib/decorators.py
index ebe2d61..25ff473 100644
--- a/tempest/lib/decorators.py
+++ b/tempest/lib/decorators.py
@@ -72,19 +72,13 @@
def decorator(f):
@functools.wraps(f)
def wrapper(*func_args, **func_kwargs):
- skip = False
- msg = ''
- if "condition" in kwargs:
- if kwargs["condition"] is True:
- skip = True
- else:
- skip = True
- if "bug" in kwargs and skip is True:
- bug = kwargs['bug']
+ condition = kwargs.get('condition', True)
+ bug = kwargs.get('bug', None)
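+            # Skip only when a bug is referenced and the optional
+            # condition (True by default) holds; otherwise run the test.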
+ if bug and condition:
bug_type = kwargs.get('bug_type', 'launchpad')
bug_url = _get_bug_url(bug, bug_type)
- msg = "Skipped until bug: %s is resolved." % bug_url
- raise testtools.TestCase.skipException(msg)
+ raise testtools.TestCase.skipException(
+ "Skipped until bug: %s is resolved." % bug_url)
return f(*func_args, **func_kwargs)
return wrapper
return decorator
diff --git a/tempest/lib/exceptions.py b/tempest/lib/exceptions.py
index 84b7ee6..abe68d2 100644
--- a/tempest/lib/exceptions.py
+++ b/tempest/lib/exceptions.py
@@ -294,3 +294,7 @@
class ConsistencyGroupSnapshotException(TempestException):
message = ("Consistency group snapshot %(cgsnapshot_id)s failed and is "
"in ERROR status")
+
+
+class InvalidScopeType(TempestException):
+ message = "Invalid scope %(scope)s"
diff --git a/tempest/lib/services/clients.py b/tempest/lib/services/clients.py
index 90debd9..d328956 100644
--- a/tempest/lib/services/clients.py
+++ b/tempest/lib/services/clients.py
@@ -257,7 +257,7 @@
# class should only be used by tests hosted in Tempest.
@removals.removed_kwarg('client_parameters')
- def __init__(self, credentials, identity_uri, region=None, scope='project',
+ def __init__(self, credentials, identity_uri, region=None, scope=None,
disable_ssl_certificate_validation=True, ca_certs=None,
trace_requests='', client_parameters=None, proxy_url=None):
"""Service Clients provider
@@ -348,6 +348,14 @@
self.ca_certs = ca_certs
self.trace_requests = trace_requests
self.proxy_url = proxy_url
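+        # Derive the auth scope from the supplied credentials: project
+        # scoping takes precedence, then system, then domain, with
+        # project as the fallback.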
+ if self.credentials.project_id or self.credentials.project_name:
+ scope = 'project'
+ elif self.credentials.system:
+ scope = 'system'
+ elif self.credentials.domain_id or self.credentials.domain_name:
+ scope = 'domain'
+ else:
+ scope = 'project'
# Creates an auth provider for the credentials
self.auth_provider = auth_provider_class(
self.credentials, self.identity_uri, scope=scope,
diff --git a/tempest/lib/services/identity/v3/identity_providers_client.py b/tempest/lib/services/identity/v3/identity_providers_client.py
new file mode 100644
index 0000000..af6a245
--- /dev/null
+++ b/tempest/lib/services/identity/v3/identity_providers_client.py
@@ -0,0 +1,92 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class IdentityProvidersClient(rest_client.RestClient):
+
+ def register_identity_provider(self, identity_provider_id, **kwargs):
+ """Register an identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#register-an-identity-provider
+ """
+ post_body = json.dumps({'identity_provider': kwargs})
+ resp, body = self.put(
+ 'OS-FEDERATION/identity_providers/%s' % identity_provider_id,
+ post_body)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_identity_providers(self, **params):
+ """List identity providers.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#list-identity-providers
+ """
+        url = 'OS-FEDERATION/identity_providers'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def get_identity_provider(self, identity_provider_id):
+ """Get identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#get-identity-provider
+ """
+ resp, body = self.get(
+ 'OS-FEDERATION/identity_providers/%s' % identity_provider_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_identity_provider(self, identity_provider_id):
+ """Delete identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#delete-identity-provider
+ """
+ resp, body = self.delete(
+ 'OS-FEDERATION/identity_providers/%s' % identity_provider_id)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_identity_provider(self, identity_provider_id, **kwargs):
+ """Update identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#update-identity-provider
+ """
+ post_body = json.dumps({'identity_provider': kwargs})
+ resp, body = self.patch(
+ 'OS-FEDERATION/identity_providers/%s' % identity_provider_id,
+ post_body)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/identity/v3/mappings_client.py b/tempest/lib/services/identity/v3/mappings_client.py
new file mode 100644
index 0000000..9ec5384
--- /dev/null
+++ b/tempest/lib/services/identity/v3/mappings_client.py
@@ -0,0 +1,90 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class MappingsClient(rest_client.RestClient):
+
+ def create_mapping(self, mapping_id, **kwargs):
+ """Create a mapping.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#create-a-mapping
+ """
+ post_body = json.dumps({'mapping': kwargs})
+ resp, body = self.put(
+ 'OS-FEDERATION/mappings/%s' % mapping_id, post_body)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def get_mapping(self, mapping_id):
+ """Get a mapping.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#get-a-mapping
+ """
+ resp, body = self.get(
+ 'OS-FEDERATION/mappings/%s' % mapping_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_mapping(self, mapping_id, **kwargs):
+ """Update a mapping.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#update-a-mapping
+ """
+ post_body = json.dumps({'mapping': kwargs})
+ resp, body = self.patch(
+ 'OS-FEDERATION/mappings/%s' % mapping_id, post_body)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_mappings(self, **kwargs):
+ """List mappings.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#list-mappings
+ """
+ url = 'OS-FEDERATION/mappings'
+ if kwargs:
+ url += '?%s' % urllib.urlencode(kwargs)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_mapping(self, mapping_id):
+ """Delete a mapping.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#delete-a-mapping
+ """
+ resp, body = self.delete(
+ 'OS-FEDERATION/mappings/%s' % mapping_id)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/identity/v3/protocols_client.py b/tempest/lib/services/identity/v3/protocols_client.py
new file mode 100644
index 0000000..2e0221b
--- /dev/null
+++ b/tempest/lib/services/identity/v3/protocols_client.py
@@ -0,0 +1,96 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class ProtocolsClient(rest_client.RestClient):
+
+ def add_protocol_to_identity_provider(self, idp_id, protocol_id,
+ **kwargs):
+ """Add protocol to identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#add-protocol-to-identity-provider
+ """
+ post_body = json.dumps({'protocol': kwargs})
+ resp, body = self.put(
+ 'OS-FEDERATION/identity_providers/%s/protocols/%s'
+ % (idp_id, protocol_id), post_body)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_protocols_of_identity_provider(self, idp_id, **kwargs):
+ """List protocols of identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#list-protocols-of-identity-provider
+ """
+ url = 'OS-FEDERATION/identity_providers/%s/protocols' % idp_id
+ if kwargs:
+ url += '?%s' % urllib.urlencode(kwargs)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def get_protocol_for_identity_provider(self, idp_id, protocol_id):
+ """Get protocol for identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#get-protocol-for-identity-provider
+ """
+ resp, body = self.get(
+ 'OS-FEDERATION/identity_providers/%s/protocols/%s'
+ % (idp_id, protocol_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_mapping_for_identity_provider(self, idp_id, protocol_id,
+ **kwargs):
+ """Update attribute mapping for identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#update-attribute-mapping-for-identity-provider
+ """
+ post_body = json.dumps({'protocol': kwargs})
+ resp, body = self.patch(
+ 'OS-FEDERATION/identity_providers/%s/protocols/%s'
+ % (idp_id, protocol_id), post_body)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_protocol_from_identity_provider(self, idp_id, protocol_id):
+ """Delete a protocol from identity provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#delete-a-protocol-from-identity-provider
+ """
+ resp, body = self.delete(
+ 'OS-FEDERATION/identity_providers/%s/protocols/%s'
+ % (idp_id, protocol_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/identity/v3/roles_client.py b/tempest/lib/services/identity/v3/roles_client.py
index 0d7593a..e41dc28 100644
--- a/tempest/lib/services/identity/v3/roles_client.py
+++ b/tempest/lib/services/identity/v3/roles_client.py
@@ -89,6 +89,13 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
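+    # The system-scoped calls added to this client mirror the existing
+    # project and domain variants; e.g. (illustrative ids)
+    #   roles_client.create_user_role_on_system(user['id'], role['id'])
+    # grants a role on the whole deployment rather than a single project.
+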
+ def create_user_role_on_system(self, user_id, role_id):
+ """Add roles to a user on the system."""
+ resp, body = self.put('system/users/%s/roles/%s' %
+ (user_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def list_user_roles_on_project(self, project_id, user_id):
"""list roles of a user on a project."""
resp, body = self.get('projects/%s/users/%s/roles' %
@@ -105,6 +112,13 @@
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
+ def list_user_roles_on_system(self, user_id):
+ """list roles of a user on the system."""
+ resp, body = self.get('system/users/%s/roles' % user_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
def delete_role_from_user_on_project(self, project_id, user_id, role_id):
"""Delete role of a user on a project."""
resp, body = self.delete('projects/%s/users/%s/roles/%s' %
@@ -119,6 +133,13 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
+ def delete_role_from_user_on_system(self, user_id, role_id):
+ """Delete role of a user on the system."""
+ resp, body = self.delete('system/users/%s/roles/%s' %
+ (user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def check_user_role_existence_on_project(self, project_id,
user_id, role_id):
"""Check role of a user on a project."""
@@ -135,6 +156,12 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp)
+ def check_user_role_existence_on_system(self, user_id, role_id):
+ """Check role of a user on the system."""
+ resp, body = self.head('system/users/%s/roles/%s' % (user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
def create_group_role_on_project(self, project_id, group_id, role_id):
"""Add roles to a group on a project."""
resp, body = self.put('projects/%s/groups/%s/roles/%s' %
@@ -149,6 +176,13 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
+ def create_group_role_on_system(self, group_id, role_id):
+ """Add roles to a group on the system."""
+ resp, body = self.put('system/groups/%s/roles/%s' %
+ (group_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def list_group_roles_on_project(self, project_id, group_id):
"""list roles of a group on a project."""
resp, body = self.get('projects/%s/groups/%s/roles' %
@@ -165,6 +199,13 @@
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
+ def list_group_roles_on_system(self, group_id):
+ """list roles of a group on the system."""
+ resp, body = self.get('system/groups/%s/roles' % group_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
def delete_role_from_group_on_project(self, project_id, group_id, role_id):
"""Delete role of a group on a project."""
resp, body = self.delete('projects/%s/groups/%s/roles/%s' %
@@ -179,6 +220,13 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
+ def delete_role_from_group_on_system(self, group_id, role_id):
+ """Delete role of a group on the system."""
+ resp, body = self.delete('system/groups/%s/roles/%s' %
+ (group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def check_role_from_group_on_project_existence(self, project_id,
group_id, role_id):
"""Check role of a group on a project."""
@@ -195,6 +243,13 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp)
+ def check_role_from_group_on_system_existence(self, group_id, role_id):
+ """Check role of a group on the system."""
+ resp, body = self.head('system/groups/%s/roles/%s' %
+ (group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
def create_role_inference_rule(self, prior_role, implies_role):
"""Create a role inference rule."""
resp, body = self.put('roles/%s/implies/%s' %
diff --git a/tempest/lib/services/identity/v3/service_providers_client.py b/tempest/lib/services/identity/v3/service_providers_client.py
new file mode 100644
index 0000000..b84cf43
--- /dev/null
+++ b/tempest/lib/services/identity/v3/service_providers_client.py
@@ -0,0 +1,92 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class ServiceProvidersClient(rest_client.RestClient):
+
+ def register_service_provider(self, service_provider_id, **kwargs):
+ """Register a service provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#register-a-service-provider
+ """
+ post_body = json.dumps({'service_provider': kwargs})
+ resp, body = self.put(
+ 'OS-FEDERATION/service_providers/%s' % service_provider_id,
+ post_body)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_service_providers(self, **kwargs):
+ """List service providers.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#list-service-providers
+ """
+ url = 'OS-FEDERATION/service_providers'
+ if kwargs:
+ url += '?%s' % urllib.urlencode(kwargs)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def get_service_provider(self, service_provider_id):
+ """Get a service provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#get-service-provider
+ """
+ resp, body = self.get(
+ 'OS-FEDERATION/service_providers/%s' % service_provider_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_service_provider(self, service_provider_id):
+ """Delete a service provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#delete-service-provider
+ """
+ resp, body = self.delete(
+ 'OS-FEDERATION/service_providers/%s' % service_provider_id)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_service_provider(self, service_provider_id, **kwargs):
+ """Update a service provider.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://docs.openstack.org/api-ref/identity/v3-ext/index.html#update-service-provider
+ """
+ post_body = json.dumps({'service_provider': kwargs})
+ resp, body = self.patch(
+ 'OS-FEDERATION/service_providers/%s' % service_provider_id,
+ post_body)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/identity/v3/token_client.py b/tempest/lib/services/identity/v3/token_client.py
index 6956297..08a8f46 100644
--- a/tempest/lib/services/identity/v3/token_client.py
+++ b/tempest/lib/services/identity/v3/token_client.py
@@ -51,7 +51,7 @@
def auth(self, user_id=None, username=None, password=None, project_id=None,
project_name=None, user_domain_id=None, user_domain_name=None,
project_domain_id=None, project_domain_name=None, domain_id=None,
- domain_name=None, token=None, app_cred_id=None,
+ domain_name=None, system=None, token=None, app_cred_id=None,
app_cred_secret=None):
"""Obtains a token from the authentication service
@@ -65,6 +65,7 @@
:param domain_name: a domain name to scope to
:param project_id: a project id to scope to
:param project_name: a project name to scope to
+        :param system: the system scope to get a token for, e.g. 'all'
:param token: a token to re-scope.
Accepts different combinations of credentials.
@@ -74,6 +75,7 @@
- user_id, password
- username, password, user_domain_id
- username, password, project_name, user_domain_id, project_domain_id
+ - username, password, user_domain_id, system
Validation is left to the server side.
"""
creds = {
@@ -135,6 +137,8 @@
creds['auth']['scope'] = dict(domain={'id': domain_id})
elif domain_name:
creds['auth']['scope'] = dict(domain={'name': domain_name})
+ elif system:
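+            # e.g. system='all' produces {"system": {"all": true}} as the
+            # token request scope (illustrative value)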
+ creds['auth']['scope'] = dict(system={system: True})
body = json.dumps(creds, sort_keys=True)
resp, body = self.post(self.auth_url, body=body)
diff --git a/tempest/lib/services/object_storage/object_client.py b/tempest/lib/services/object_storage/object_client.py
index 6970c0a..1d38153 100644
--- a/tempest/lib/services/object_storage/object_client.py
+++ b/tempest/lib/services/object_storage/object_client.py
@@ -166,7 +166,6 @@
conn = httplib.HTTPSConnection(parsed_url.netloc,
context=context)
else:
- conn = httplib.HTTPConnection(parsed_url.netloc,
- context=context)
+ conn = httplib.HTTPConnection(parsed_url.netloc)
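+            # (plain HTTPConnection takes no ssl context; only
+            # HTTPSConnection does, hence the dropped argument)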
return conn
diff --git a/tempest/manager.py b/tempest/manager.py
deleted file mode 100644
index b485ef2..0000000
--- a/tempest/manager.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_log import log as logging
-
-from tempest import clients as tempest_clients
-from tempest import config
-from tempest.lib.services import clients
-
-CONF = config.CONF
-LOG = logging.getLogger(__name__)
-
-
-class Manager(clients.ServiceClients):
- """Service client manager class for backward compatibility
-
- The former manager.Manager is not a stable interface in Tempest,
- nonetheless it is consumed by a number of plugins already. This class
- exists to provide some grace time for the move to tempest.lib.
- """
-
- def __init__(self, credentials, scope='project'):
- msg = ("tempest.manager.Manager is not a stable interface and as such "
- "it should not be imported directly. It will be removed as "
- "soon as the client manager becomes available in tempest.lib.")
- LOG.warning(msg)
- dscv = CONF.identity.disable_ssl_certificate_validation
- _, uri = tempest_clients.get_auth_provider_class(credentials)
- super(Manager, self).__init__(
- credentials=credentials, scope=scope,
- identity_uri=uri,
- disable_ssl_certificate_validation=dscv,
- ca_certs=CONF.identity.ca_certificates_file,
- trace_requests=CONF.debug.trace_requests)
-
-
-def get_auth_provider(credentials, pre_auth=False, scope='project'):
- """Shim to get_auth_provider in clients.py
-
- get_auth_provider used to be hosted in this module, but it has been
- moved to clients.py now as a more permanent location.
- This module will be removed eventually, and this shim is only
- maintained for the benefit of plugins already consuming this interface.
- """
- msg = ("tempest.manager.get_auth_provider is not a stable interface and "
- "as such it should not imported directly. It will be removed as "
- "the client manager becomes available in tempest.lib.")
- LOG.warning(msg)
- return tempest_clients.get_auth_provider(credentials=credentials,
- pre_auth=pre_auth, scope=scope)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index acc563a..4652af4 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -375,21 +375,35 @@
def create_backup(self, volume_id, name=None, description=None,
force=False, snapshot_id=None, incremental=False,
- container=None):
- """Creates backup
+ container=None, **kwargs):
+ """Creates a backup of the given volume_id or snapshot_id
- This wrapper utility creates backup and waits for backup to be
- in 'available' state.
+ This wrapper utility creates a backup and waits until it is in
+ 'available' state.
+
+ :param volume_id: UUID of the volume to back up
+ :param name: backup name, '$classname-backup' by default
+ :param description: Description of the backup, None by default
+        :param force: boolean, whether to back up even if the volume is
+            attached; False by default
+ :param snapshot_id: UUID of the source snapshot to back up
+ None by default
+ :param incremental: boolean, False by default
+ :param container: a container name, None by default
+ :param **kwargs: additional parameters per the documentation:
+ https://docs.openstack.org/api-ref/block-storage/v3/
+ #create-a-backup
"""
name = name or data_utils.rand_name(
self.__class__.__name__ + "-backup")
- kwargs = {'name': name,
- 'description': description,
- 'force': force,
- 'snapshot_id': snapshot_id,
- 'incremental': incremental,
- 'container': container}
+ args = {'name': name,
+ 'description': description,
+ 'force': force,
+ 'snapshot_id': snapshot_id,
+ 'incremental': incremental,
+ 'container': container}
+ args.update(kwargs)
-        backup = self.backups_client.create_backup(volume_id=volume_id,
-                                                   **kwargs)['backup']
+        backup = self.backups_client.create_backup(volume_id=volume_id,
+                                                   **args)['backup']
self.addCleanup(self.backups_client.delete_backup, backup['id'])
@@ -397,14 +411,20 @@
backup['id'], 'available')
return backup
- def restore_backup(self, backup_id):
- """Restore backup
+ def restore_backup(self, backup_id, **kwargs):
+ """Restores a backup given by the backup_id
- This wrapper utility restores backup and waits for backup to be
- in 'available' state.
+ This wrapper utility restores a backup and waits until it is in
+ 'available' state.
+
+ :param backup_id: UUID of a backup to restore
+ :param **kwargs: additional parameters per the documentation:
+ https://docs.openstack.org/api-ref/block-storage/v3/
+ #restore-a-backup
"""
- restore = self.backups_client.restore_backup(backup_id)['restore']
+ body = self.backups_client.restore_backup(backup_id, **kwargs)
+ restore = body['restore']
self.addCleanup(self.volumes_client.delete_volume,
restore['volume_id'])
waiters.wait_for_volume_resource_status(self.backups_client,
@@ -431,11 +451,20 @@
server_id, 'ACTIVE')
def create_volume_snapshot(self, volume_id, name=None, description=None,
- metadata=None, force=False):
- """Creates volume
+ metadata=None, force=False, **kwargs):
+ """Creates volume's snapshot
- This wrapper utility creates volume snapshot and waits for backup
- to be in 'available' state.
+        This wrapper utility creates a volume snapshot and waits until it
+        is in 'available' state.
+
+ :param volume_id: UUID of a volume to create snapshot of
+ :param name: name of the snapshot, '$classname-snapshot' by default
+ :param description: description of the snapshot
+ :param metadata: metadata key and value pairs for the snapshot
+        :param force: whether to snapshot even when the volume is attached
+        :param **kwargs: additional parameters per the documentation:
+ https://docs.openstack.org/api-ref/block-storage/v3/
+ #create-a-snapshot
"""
name = name or data_utils.rand_name(
@@ -445,7 +474,8 @@
force=force,
name=name,
description=description,
- metadata=metadata)['snapshot']
+ metadata=metadata,
+ **kwargs)['snapshot']
self.addCleanup(self.snapshots_client.wait_for_resource_deletion,
snapshot['id'])
@@ -515,7 +545,7 @@
self.addCleanup(self._cleanup_volume_type, volume_type)
return volume_type
- def _create_loginable_secgroup_rule(self, secgroup_id=None, rulesets=None):
+ def create_loginable_secgroup_rule(self, secgroup_id=None, rulesets=None):
"""Create loginable security group rule by compute clients.
This function will create by default the following rules:
@@ -575,7 +605,7 @@
secgroup['id'])
# Add rules to the security group
- self._create_loginable_secgroup_rule(secgroup['id'])
+ self.create_loginable_secgroup_rule(secgroup['id'])
return secgroup
def get_remote_client(self, ip_address, username=None, private_key=None,
@@ -660,7 +690,7 @@
LOG.debug("image:%s", image['id'])
return image['id']
- def _log_console_output(self, servers=None, client=None, **kwargs):
+ def log_console_output(self, servers=None, client=None, **kwargs):
"""Console log output"""
if not CONF.compute_feature_enabled.console_output:
LOG.debug('Console output not supported, cannot log')
@@ -796,7 +826,7 @@
'result': 'expected' if result else 'unexpected'
})
if server:
- self._log_console_output([server])
+ self.log_console_output([server])
return result
def check_vm_connectivity(self, ip_address,
@@ -1285,7 +1315,7 @@
should_connect=should_connect)
except Exception as e:
LOG.exception('Tenant network connectivity check failed')
- self._log_console_output(servers_for_debug)
+ self.log_console_output(servers_for_debug)
self._log_net_info(e)
raise
@@ -1328,7 +1358,7 @@
% (dest, source_host)
else:
msg = "%s is reachable from %s" % (dest, source_host)
- self._log_console_output()
+ self.log_console_output()
self.fail(msg)
def _create_security_group(self, security_group_rules_client=None,
@@ -1346,7 +1376,7 @@
project_id=project_id)
# Add rules to the security group
- rules = self._create_loginable_secgroup_rule(
+ rules = self.create_loginable_secgroup_rule(
security_group_rules_client=security_group_rules_client,
secgroup=secgroup,
security_groups_client=security_groups_client)
@@ -1387,10 +1417,10 @@
client.delete_security_group, secgroup['id'])
return secgroup
- def _create_security_group_rule(self, secgroup=None,
- sec_group_rules_client=None,
- project_id=None,
- security_groups_client=None, **kwargs):
+ def create_security_group_rule(self, secgroup=None,
+ sec_group_rules_client=None,
+ project_id=None,
+ security_groups_client=None, **kwargs):
"""Create a rule from a dictionary of rule parameters.
Create a rule in a secgroup. if secgroup not defined will search for
@@ -1435,9 +1465,9 @@
return sg_rule
- def _create_loginable_secgroup_rule(self, security_group_rules_client=None,
- secgroup=None,
- security_groups_client=None):
+ def create_loginable_secgroup_rule(self, security_group_rules_client=None,
+ secgroup=None,
+ security_groups_client=None):
"""Create loginable security group rule by neutron clients by default.
This function will create:
@@ -1474,7 +1504,7 @@
for r_direction in ['ingress', 'egress']:
ruleset['direction'] = r_direction
try:
- sg_rule = self._create_security_group_rule(
+ sg_rule = self.create_security_group_rule(
sec_group_rules_client=sec_group_rules_client,
secgroup=secgroup,
security_groups_client=security_groups_client,
@@ -1490,7 +1520,7 @@
return rules
- def _get_router(self, client=None, project_id=None):
+ def _get_router(self, client=None, project_id=None, **kwargs):
"""Retrieve a router for the given tenant id.
If a public router has been configured, it will be returned.
@@ -1510,11 +1540,20 @@
body = client.show_router(router_id)
return body['router']
elif network_id:
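+            # Allow callers to override the generated router name and the
+            # external gateway info; any remaining kwargs are passed
+            # through to create_router unchanged.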
+ name = kwargs.pop('name', None)
+ if not name:
+ namestart = self.__class__.__name__ + '-router'
+ name = data_utils.rand_name(namestart)
+
+ ext_gw_info = kwargs.pop('external_gateway_info', None)
+ if not ext_gw_info:
+ ext_gw_info = dict(network_id=network_id)
router = client.create_router(
- name=data_utils.rand_name(self.__class__.__name__ + '-router'),
- admin_state_up=True,
+ name=name,
+ admin_state_up=kwargs.get('admin_state_up', True),
project_id=project_id,
- external_gateway_info=dict(network_id=network_id))['router']
+ external_gateway_info=ext_gw_info,
+ **kwargs)['router']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_router, router['id'])
return router
diff --git a/tempest/scenario/test_dashboard_basic_ops.py b/tempest/scenario/test_dashboard_basic_ops.py
new file mode 100644
index 0000000..b1098fa
--- /dev/null
+++ b/tempest/scenario/test_dashboard_basic_ops.py
@@ -0,0 +1,141 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import html.parser
+import ssl
+from urllib import parse
+from urllib import request
+
+from tempest.common import utils
+from tempest import config
+from tempest.lib import decorators
+from tempest import test
+
+CONF = config.CONF
+
+
+class HorizonHTMLParser(html.parser.HTMLParser):
+ csrf_token = None
+ region = None
+ login = None
+
+ def _find_name(self, attrs, name):
+ for attrpair in attrs:
+ if attrpair[0] == 'name' and attrpair[1] == name:
+ return True
+ return False
+
+ def _find_value(self, attrs):
+ for attrpair in attrs:
+ if attrpair[0] == 'value':
+ return attrpair[1]
+ return None
+
+ def _find_attr_value(self, attrs, attr_name):
+ for attrpair in attrs:
+ if attrpair[0] == attr_name:
+ return attrpair[1]
+ return None
+
+ def handle_starttag(self, tag, attrs):
+ if tag == 'input':
+ if self._find_name(attrs, 'csrfmiddlewaretoken'):
+ self.csrf_token = self._find_value(attrs)
+ if self._find_name(attrs, 'region'):
+ self.region = self._find_value(attrs)
+ if tag == 'form':
+ self.login = self._find_attr_value(attrs, 'action')
+
+
+class TestDashboardBasicOps(test.BaseTestCase):
+
+ """The test suite for dashboard basic operations
+
+ This is a basic scenario test:
+ * checks that the login page is available
+ * logs in as a regular user
+ * checks that the user home page loads without error
+ """
+ opener = None
+
+ credentials = ['primary']
+
+ @classmethod
+ def skip_checks(cls):
+ super(TestDashboardBasicOps, cls).skip_checks()
+ if not CONF.service_available.horizon:
+ raise cls.skipException("Horizon support is required")
+
+ @classmethod
+ def setup_credentials(cls):
+ cls.set_network_resources()
+ super(TestDashboardBasicOps, cls).setup_credentials()
+
+ def check_login_page(self):
+ response = self._get_opener().open(CONF.dashboard.dashboard_url).read()
+ self.assertIn("id_username", response.decode("utf-8"))
+
+ def user_login(self, username, password):
+ response = self._get_opener().open(CONF.dashboard.dashboard_url).read()
+
+ # Grab the CSRF token and default region
+ parser = HorizonHTMLParser()
+ parser.feed(response.decode("utf-8"))
+
+ # construct login url for dashboard, discovery accommodates non-/ web
+ # root for dashboard
+ login_url = parse.urljoin(CONF.dashboard.dashboard_url, parser.login)
+
+ # Prepare login form request
+ req = request.Request(login_url)
+ req.add_header('Content-type', 'application/x-www-form-urlencoded')
+ req.add_header('Referer', CONF.dashboard.dashboard_url)
+
+ # Pass the default domain name regardless of the auth version in order
+ # to test the scenario of when horizon is running with keystone v3
+ params = {'username': username,
+ 'password': password,
+ 'region': parser.region,
+ 'domain': CONF.auth.default_credentials_domain_name,
+ 'csrfmiddlewaretoken': parser.csrf_token}
+ self._get_opener().open(req, parse.urlencode(params).encode())
+
+ def check_home_page(self):
+ response = self._get_opener().open(CONF.dashboard.dashboard_url).read()
+ self.assertIn('Overview', response.decode("utf-8"))
+
+ def _get_opener(self):
+ if not self.opener:
+ if (CONF.dashboard.disable_ssl_certificate_validation and
+ self._ssl_default_context_supported()):
+ ctx = ssl.create_default_context()
+ ctx.check_hostname = False
+ ctx.verify_mode = ssl.CERT_NONE
+ self.opener = request.build_opener(
+ request.HTTPSHandler(context=ctx),
+ request.HTTPCookieProcessor())
+ else:
+ self.opener = request.build_opener(
+ request.HTTPCookieProcessor())
+ return self.opener
+
+ def _ssl_default_context_supported(self):
+ return (hasattr(ssl, 'create_default_context'))
+
+ @decorators.attr(type='smoke')
+ @decorators.idempotent_id('4f8851b1-0e69-482b-b63b-84c6e76f6c80')
+ @utils.services('dashboard')
+ def test_basic_scenario(self):
+ creds = self.os_primary.credentials
+ self.check_login_page()
+ self.user_login(creds.username, creds.password)
+ self.check_home_page()
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index fc93a5e..6ee9f28 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -30,8 +30,7 @@
For both LUKS and cryptsetup encryption types, this test performs
the following:
- * Creates an image in Glance
- * Boots an instance from the image
+ * Boots an instance from an image (CONF.compute.image_ref)
* Creates an encryption type (as admin)
* Creates a volume of that encryption type (as a regular user)
* Attaches and detaches the encrypted volume to the instance
@@ -44,10 +43,9 @@
raise cls.skipException('Encrypted volume attach is not supported')
def launch_instance(self):
- image = self.image_create()
keypair = self.create_keypair()
- return self.create_server(image_id=image, key_name=keypair['name'])
+ return self.create_server(key_name=keypair['name'])
def attach_detach_volume(self, server, volume):
attached_volume = self.nova_volume_attach(server, volume)
diff --git a/tempest/scenario/test_minbw_allocation_placement.py b/tempest/scenario/test_minbw_allocation_placement.py
index a9d15bc..8c2752d 100644
--- a/tempest/scenario/test_minbw_allocation_placement.py
+++ b/tempest/scenario/test_minbw_allocation_placement.py
@@ -20,6 +20,7 @@
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
from tempest.scenario import manager
@@ -54,6 +55,8 @@
# https://github.com/openstack/placement/blob/master/placement/
# db/constants.py#L16
PLACEMENT_MAX_INT = 0x7FFFFFFF
+ BANDWIDTH_1 = 1000
+ BANDWIDTH_2 = 2000
@classmethod
def setup_clients(cls):
@@ -61,6 +64,7 @@
cls.placement_client = cls.os_admin.placement_client
cls.networks_client = cls.os_admin.networks_client
cls.subnets_client = cls.os_admin.subnets_client
+ cls.ports_client = cls.os_primary.ports_client
cls.routers_client = cls.os_adm.routers_client
cls.qos_client = cls.os_admin.qos_client
cls.qos_min_bw_client = cls.os_admin.qos_min_bw_client
@@ -78,7 +82,6 @@
def setUp(self):
super(MinBwAllocationPlacementTest, self).setUp()
self._check_if_allocation_is_possible()
- self._create_network_and_qos_policies()
def _create_policy_and_min_bw_rule(self, name_prefix, min_kbps):
policy = self.qos_client.create_qos_policy(
@@ -99,7 +102,7 @@
return policy
- def _create_qos_policies(self):
+ def _create_qos_basic_policies(self):
self.qos_policy_valid = self._create_policy_and_min_bw_rule(
name_prefix='test_policy_valid',
min_kbps=self.SMALLEST_POSSIBLE_BW)
@@ -107,7 +110,20 @@
name_prefix='test_policy_not_valid',
min_kbps=self.PLACEMENT_MAX_INT)
- def _create_network_and_qos_policies(self):
+ def _create_qos_policies_from_life(self):
+ # For tempest-slow the max bandwidth configured is 1000000,
+ # https://opendev.org/openstack/tempest/src/branch/master/
+ # .zuul.yaml#L416-L420
+ self.qos_policy_1 = self._create_policy_and_min_bw_rule(
+ name_prefix='test_policy_1',
+ min_kbps=self.BANDWIDTH_1
+ )
+ self.qos_policy_2 = self._create_policy_and_min_bw_rule(
+ name_prefix='test_policy_2',
+ min_kbps=self.BANDWIDTH_2
+ )
+
+ def _create_network_and_qos_policies(self, policy_method):
physnet_name = CONF.network_feature_enabled.qos_placement_physnet
base_segm = \
CONF.network_feature_enabled.provider_net_base_segmentation_id
@@ -123,7 +139,7 @@
'provider:segmentation_id': base_segm
})
- self._create_qos_policies()
+ policy_method()
def _check_if_allocation_is_possible(self):
alloc_candidates = self.placement_client.list_allocation_candidates(
@@ -157,20 +173,29 @@
status=status, ready_wait=False, raise_on_error=False)
return server, port
- def _assert_allocation_is_as_expected(self, allocations, port_id):
- self.assertGreater(len(allocations['allocations']), 0)
+ def _assert_allocation_is_as_expected(self, consumer, port_ids,
+ min_kbps=SMALLEST_POSSIBLE_BW):
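+        # consumer: server uuid whose placement allocations are checked;
+        # port_ids: ports whose binding:profile allocation must point at
+        # the same resource provider; min_kbps: expected amount of the
+        # INGRESS_RESOURCE_CLASS resource (0 means no bandwidth
+        # allocation is expected).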
+ allocations = self.placement_client.list_allocations(
+ consumer)['allocations']
+ self.assertGreater(len(allocations), 0)
bw_resource_in_alloc = False
- for rp, resources in allocations['allocations'].items():
+ for rp, resources in allocations.items():
if self.INGRESS_RESOURCE_CLASS in resources['resources']:
+ self.assertEqual(
+ min_kbps,
+ resources['resources'][self.INGRESS_RESOURCE_CLASS])
bw_resource_in_alloc = True
allocation_rp = rp
- self.assertTrue(bw_resource_in_alloc)
+ if min_kbps:
+ self.assertTrue(bw_resource_in_alloc)
- # Check binding_profile of the port is not empty and equals with the
- # rp uuid
- port = self.os_admin.ports_client.show_port(port_id)
- self.assertEqual(allocation_rp,
- port['port']['binding:profile']['allocation'])
+        # Check that the binding:profile of each port is not empty and
+        # matches the allocation rp uuid
+ for port_id in port_ids:
+ port = self.os_admin.ports_client.show_port(port_id)
+ self.assertEqual(
+ allocation_rp,
+ port['port']['binding:profile']['allocation'])
@decorators.idempotent_id('78625d92-212c-400e-8695-dd51706858b8')
@utils.services('compute', 'network')
@@ -193,11 +218,11 @@
* Create port with invalid QoS policy, and try to boot VM with that,
it should fail.
"""
-
+ self._create_network_and_qos_policies(self._create_qos_basic_policies)
server1, valid_port = self._boot_vm_with_min_bw(
qos_policy_id=self.qos_policy_valid['id'])
- allocations = self.placement_client.list_allocations(server1['id'])
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server1['id'],
+ [valid_port['id']])
server2, not_valid_port = self._boot_vm_with_min_bw(
self.qos_policy_not_valid['id'], status='ERROR')
@@ -228,27 +253,28 @@
* If the VM goes to ACTIVE state check that allocations are as
expected.
"""
+ self._create_network_and_qos_policies(self._create_qos_basic_policies)
server, valid_port = self._boot_vm_with_min_bw(
qos_policy_id=self.qos_policy_valid['id'])
- allocations = self.placement_client.list_allocations(server['id'])
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server['id'],
+ [valid_port['id']])
self.servers_client.migrate_server(server_id=server['id'])
waiters.wait_for_server_status(
client=self.os_primary.servers_client, server_id=server['id'],
status='VERIFY_RESIZE', ready_wait=False, raise_on_error=False)
- allocations = self.placement_client.list_allocations(server['id'])
# TODO(lajoskatona): Check that the allocations are ok for the
# migration?
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server['id'],
+ [valid_port['id']])
self.servers_client.confirm_resize_server(server_id=server['id'])
waiters.wait_for_server_status(
client=self.os_primary.servers_client, server_id=server['id'],
status='ACTIVE', ready_wait=False, raise_on_error=True)
- allocations = self.placement_client.list_allocations(server['id'])
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server['id'],
+ [valid_port['id']])
@decorators.idempotent_id('c29e7fd3-035d-4993-880f-70819847683f')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
@@ -264,10 +290,11 @@
* If the VM goes to ACTIVE state check that allocations are as
expected.
"""
+ self._create_network_and_qos_policies(self._create_qos_basic_policies)
server, valid_port = self._boot_vm_with_min_bw(
qos_policy_id=self.qos_policy_valid['id'])
- allocations = self.placement_client.list_allocations(server['id'])
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server['id'],
+ [valid_port['id']])
old_flavor = self.flavors_client.show_flavor(
CONF.compute.flavor_ref)['flavor']
@@ -285,15 +312,176 @@
waiters.wait_for_server_status(
client=self.os_primary.servers_client, server_id=server['id'],
status='VERIFY_RESIZE', ready_wait=False, raise_on_error=False)
- allocations = self.placement_client.list_allocations(server['id'])
# TODO(lajoskatona): Check that the allocations are ok for the
# migration?
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server['id'],
+ [valid_port['id']])
self.servers_client.confirm_resize_server(server_id=server['id'])
waiters.wait_for_server_status(
client=self.os_primary.servers_client, server_id=server['id'],
status='ACTIVE', ready_wait=False, raise_on_error=True)
- allocations = self.placement_client.list_allocations(server['id'])
- self._assert_allocation_is_as_expected(allocations, valid_port['id'])
+ self._assert_allocation_is_as_expected(server['id'],
+ [valid_port['id']])
+
+ @decorators.idempotent_id('79fdaa1c-df62-4738-a0f0-1cff9dc415f6')
+ @utils.services('compute', 'network')
+ def test_qos_min_bw_allocation_update_policy(self):
+ """Test the update of QoS policy on bound port
+
+ Related RFE in neutron: #1882804
+ The scenario is the following:
+ * Have a port with QoS policy and minimum bandwidth rule.
+ * Boot a VM with the port.
+ * Update the port with a new policy with different minimum bandwidth
+ values.
+ * The allocation on placement side should be according to the new
+ rules.
+ """
+ if not utils.is_network_feature_enabled('update_port_qos'):
+ raise self.skipException("update_port_qos feature is not enabled")
+
+ self._create_network_and_qos_policies(
+ self._create_qos_policies_from_life)
+
+ port = self.create_port(
+ self.prov_network['id'], qos_policy_id=self.qos_policy_1['id'])
+
+ server1 = self.create_server(
+ networks=[{'port': port['id']}])
+
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_1)
+
+ self.ports_client.update_port(
+ port['id'],
+ **{'qos_policy_id': self.qos_policy_2['id']})
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_2)
+
+        # Switch back to the original policy and verify the allocation
+ self.ports_client.update_port(
+ port['id'],
+ **{'qos_policy_id': self.qos_policy_1['id']})
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_1)
+
+        # Updating to a policy that cannot be allocated must fail with
+        # Conflict and leave the original policy in place.
+ self.qos_policy_not_valid = self._create_policy_and_min_bw_rule(
+ name_prefix='test_policy_not_valid',
+ min_kbps=self.PLACEMENT_MAX_INT)
+ port_orig = self.ports_client.show_port(port['id'])['port']
+ self.assertRaises(
+ lib_exc.Conflict,
+ self.ports_client.update_port,
+ port['id'], **{'qos_policy_id': self.qos_policy_not_valid['id']})
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_1)
+
+ port_upd = self.ports_client.show_port(port['id'])['port']
+ self.assertEqual(port_orig['qos_policy_id'],
+ port_upd['qos_policy_id'])
+ self.assertEqual(self.qos_policy_1['id'], port_upd['qos_policy_id'])
+
+ @decorators.idempotent_id('9cfc3bb8-f433-4c91-87b6-747cadc8958a')
+ @utils.services('compute', 'network')
+ def test_qos_min_bw_allocation_update_policy_from_zero(self):
+ """Test port without QoS policy to have QoS policy
+
+ This scenario checks if updating a port without QoS policy to
+ have QoS policy with minimum_bandwidth rule succeeds only on
+ controlplane, but placement allocation remains 0.
+ """
+ if not utils.is_network_feature_enabled('update_port_qos'):
+ raise self.skipException("update_port_qos feature is not enabled")
+
+ self._create_network_and_qos_policies(
+ self._create_qos_policies_from_life)
+
+ port = self.create_port(self.prov_network['id'])
+
+ server1 = self.create_server(
+ networks=[{'port': port['id']}])
+
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']], 0)
+
+ self.ports_client.update_port(
+ port['id'], **{'qos_policy_id': self.qos_policy_2['id']})
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']], 0)
+
+ @decorators.idempotent_id('a9725a70-1d28-4e3b-ae0e-450abc235962')
+ @utils.services('compute', 'network')
+ def test_qos_min_bw_allocation_update_policy_to_zero(self):
+ """Test port with QoS policy to remove QoS policy
+
+ In this scenario port with QoS minimum_bandwidth rule update to
+ remove QoS policy results in 0 placement allocation.
+ """
+ if not utils.is_network_feature_enabled('update_port_qos'):
+ raise self.skipException("update_port_qos feature is not enabled")
+
+ self._create_network_and_qos_policies(
+ self._create_qos_policies_from_life)
+
+ port = self.create_port(
+ self.prov_network['id'], qos_policy_id=self.qos_policy_1['id'])
+
+ server1 = self.create_server(
+ networks=[{'port': port['id']}])
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_1)
+
+ self.ports_client.update_port(
+ port['id'],
+ **{'qos_policy_id': None})
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']], 0)
+
+ @decorators.idempotent_id('756ced7f-6f1a-43e7-a851-2fcfc16f3dd7')
+ @utils.services('compute', 'network')
+ def test_qos_min_bw_allocation_update_with_multiple_ports(self):
+ if not utils.is_network_feature_enabled('update_port_qos'):
+ raise self.skipException("update_port_qos feature is not enabled")
+
+ self._create_network_and_qos_policies(
+ self._create_qos_policies_from_life)
+
+ port1 = self.create_port(
+ self.prov_network['id'], qos_policy_id=self.qos_policy_1['id'])
+ port2 = self.create_port(
+ self.prov_network['id'], qos_policy_id=self.qos_policy_2['id'])
+
+ server1 = self.create_server(
+ networks=[{'port': port1['id']}, {'port': port2['id']}])
+ self._assert_allocation_is_as_expected(
+ server1['id'], [port1['id'], port2['id']],
+ self.BANDWIDTH_1 + self.BANDWIDTH_2)
+
+ self.ports_client.update_port(
+ port1['id'],
+ **{'qos_policy_id': self.qos_policy_2['id']})
+ self._assert_allocation_is_as_expected(
+ server1['id'], [port1['id'], port2['id']],
+ 2 * self.BANDWIDTH_2)
+
+ @decorators.idempotent_id('0805779e-e03c-44fb-900f-ce97a790653b')
+ @utils.services('compute', 'network')
+ def test_empty_update(self):
+ if not utils.is_network_feature_enabled('update_port_qos'):
+ raise self.skipException("update_port_qos feature is not enabled")
+
+ self._create_network_and_qos_policies(
+ self._create_qos_policies_from_life)
+
+ port = self.create_port(
+ self.prov_network['id'], qos_policy_id=self.qos_policy_1['id'])
+
+ server1 = self.create_server(
+ networks=[{'port': port['id']}])
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_1)
+ self.ports_client.update_port(
+ port['id'],
+ **{'description': 'foo'})
+ self._assert_allocation_is_as_expected(server1['id'], [port['id']],
+ self.BANDWIDTH_1)
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index 14f24c7..9be28c4 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -218,7 +218,7 @@
guest_has_address,
CONF.validation.ping_timeout, 1, ssh, ip)
if not result:
- self._log_console_output(servers=[srv])
+ self.log_console_output(servers=[srv])
self.fail(
'Address %s not configured for instance %s, '
'ip address output is\n%s' %
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 3fc93e4..496a371 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -217,7 +217,7 @@
direction='ingress',
)
sec_group_rules_client = tenant.manager.security_group_rules_client
- self._create_security_group_rule(
+ self.create_security_group_rule(
secgroup=access_sg,
sec_group_rules_client=sec_group_rules_client,
**ssh_rule)
@@ -385,7 +385,7 @@
remote_group_id=tenant.security_groups['default']['id'],
direction='ingress'
)
- self._create_security_group_rule(
+ self.create_security_group_rule(
secgroup=tenant.security_groups['default'],
security_groups_client=tenant.manager.security_groups_client,
**ruleset
@@ -413,7 +413,7 @@
protocol = ruleset['protocol']
sec_group_rules_client = (
dest_tenant.manager.security_group_rules_client)
- self._create_security_group_rule(
+ self.create_security_group_rule(
secgroup=dest_tenant.security_groups['default'],
sec_group_rules_client=sec_group_rules_client,
**ruleset
@@ -429,7 +429,7 @@
# allow reverse traffic and check
sec_group_rules_client = (
source_tenant.manager.security_group_rules_client)
- self._create_security_group_rule(
+ self.create_security_group_rule(
secgroup=source_tenant.security_groups['default'],
sec_group_rules_client=sec_group_rules_client,
**ruleset
@@ -464,9 +464,9 @@
def _log_console_output_for_all_tenants(self):
for tenant in self.tenants.values():
client = tenant.manager.servers_client
- self._log_console_output(servers=tenant.servers, client=client)
+ self.log_console_output(servers=tenant.servers, client=client)
if tenant.access_point is not None:
- self._log_console_output(
+ self.log_console_output(
servers=[tenant.access_point], client=client)
def _create_protocol_ruleset(self, protocol, port=80):
@@ -543,7 +543,7 @@
direction='ingress',
)
sec_group_rules_client = new_tenant.manager.security_group_rules_client
- self._create_security_group_rule(
+ self.create_security_group_rule(
secgroup=new_sg,
sec_group_rules_client=sec_group_rules_client,
**icmp_rule)
@@ -596,7 +596,7 @@
protocol='icmp',
direction='ingress'
)
- self._create_security_group_rule(
+ self.create_security_group_rule(
secgroup=tenant.security_groups['default'],
**ruleset
)
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 8aa729b..990b325 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -37,7 +37,7 @@
@classmethod
def setup_credentials(cls):
- cls.set_network_resources()
+ cls.set_network_resources(network=True, subnet=True)
super(TestServerAdvancedOps, cls).setup_credentials()
@decorators.attr(type='slow')
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index 02bc692..60242d5 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -67,7 +67,10 @@
def verify_metadata(self):
if self.run_ssh and CONF.compute_feature_enabled.metadata_service:
# Verify metadata service
- md_url = 'http://169.254.169.254/latest/meta-data/public-ipv4'
+ if CONF.network.public_network_id:
+ md_url = 'http://169.254.169.254/latest/meta-data/public-ipv4'
+ else:
+ md_url = 'http://169.254.169.254/latest/meta-data/local-ipv4'
def exec_cmd_and_verify_output():
cmd = 'curl ' + md_url
diff --git a/tempest/test.py b/tempest/test.py
index f383bc1..68602d6 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -38,12 +38,6 @@
CONF = config.CONF
-# TODO(oomichi): This test.idempotent_id should be removed after all projects
-# switch to use decorators.idempotent_id.
-idempotent_id = debtcollector.moves.moved_function(
- decorators.idempotent_id, 'idempotent_id', __name__,
- version='Mitaka', removal_version='?')
-
attr = debtcollector.moves.moved_function(
decorators.attr, 'attr', __name__,
diff --git a/tempest/tests/cmd/test_run.py b/tempest/tests/cmd/test_run.py
index 3c99bbe..ec7b760 100644
--- a/tempest/tests/cmd/test_run.py
+++ b/tempest/tests/cmd/test_run.py
@@ -68,6 +68,11 @@
class TestRunReturnCode(base.TestCase):
+
+ exclude_regex = '--exclude-regex'
+ exclude_list = '--exclude-list'
+ include_list = '--include-list'
+
def setUp(self):
super(TestRunReturnCode, self).setUp()
# Setup test dirs
@@ -92,6 +97,14 @@
self.addCleanup(os.chdir, os.path.abspath(os.curdir))
os.chdir(self.directory)
+ def _get_test_list_file(self, content):
+ fd, path = tempfile.mkstemp()
+ self.addCleanup(os.remove, path)
+ test_file = os.fdopen(fd, 'wb', 0)
+ self.addCleanup(test_file.close)
+ test_file.write(content.encode('utf-8'))
+ return path
+
def assertRunExit(self, cmd, expected):
p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
@@ -115,19 +128,23 @@
subprocess.call(['stestr', 'init'])
self.assertRunExit(['tempest', 'run', '--regex', 'failing'], 1)
- def test_tempest_run_blackregex_failing(self):
- self.assertRunExit(['tempest', 'run', '--black-regex', 'failing'], 0)
+ def test_tempest_run_exclude_regex_failing(self):
+ self.assertRunExit(['tempest', 'run',
+ self.exclude_regex, 'failing'], 0)
- def test_tempest_run_blackregex_failing_with_stestr_repository(self):
+ def test_tempest_run_exclude_regex_failing_with_stestr_repository(self):
subprocess.call(['stestr', 'init'])
- self.assertRunExit(['tempest', 'run', '--black-regex', 'failing'], 0)
+ self.assertRunExit(['tempest', 'run',
+ self.exclude_regex, 'failing'], 0)
- def test_tempest_run_blackregex_passing(self):
- self.assertRunExit(['tempest', 'run', '--black-regex', 'passing'], 1)
+ def test_tempest_run_exclude_regex_passing(self):
+ self.assertRunExit(['tempest', 'run',
+ self.exclude_regex, 'passing'], 1)
- def test_tempest_run_blackregex_passing_with_stestr_repository(self):
+ def test_tempest_run_exclude_regex_passing_with_stestr_repository(self):
subprocess.call(['stestr', 'init'])
- self.assertRunExit(['tempest', 'run', '--black-regex', 'passing'], 1)
+ self.assertRunExit(['tempest', 'run',
+ self.exclude_regex, 'passing'], 1)
def test_tempest_run_fails(self):
self.assertRunExit(['tempest', 'run'], 1)
@@ -149,47 +166,31 @@
self.assertEqual(result, tests)
def test_tempest_run_with_worker_file(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- worker_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(worker_file.close)
- worker_file.write(
- '- worker:\n - passing\n concurrency: 3'.encode('utf-8'))
+ path = self._get_test_list_file(
+ '- worker:\n - passing\n concurrency: 3')
self.assertRunExit(['tempest', 'run', '--worker-file=%s' % path], 0)
- def test_tempest_run_with_whitelist(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- whitelist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(whitelist_file.close)
- whitelist_file.write('passing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--whitelist-file=%s' % path], 0)
+ def test_tempest_run_with_include_list(self):
+ path = self._get_test_list_file('passing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.include_list, path)], 0)
- def test_tempest_run_with_whitelist_regex_include_pass_check_fail(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- whitelist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(whitelist_file.close)
- whitelist_file.write('passing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--whitelist-file=%s' % path,
+ def test_tempest_run_with_include_regex_include_pass_check_fail(self):
+ path = self._get_test_list_file('passing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.include_list, path),
'--regex', 'fail'], 1)
- def test_tempest_run_with_whitelist_regex_include_pass_check_pass(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- whitelist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(whitelist_file.close)
- whitelist_file.write('passing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--whitelist-file=%s' % path,
+ def test_tempest_run_with_include_regex_include_pass_check_pass(self):
+ path = self._get_test_list_file('passing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.include_list, path),
'--regex', 'passing'], 0)
- def test_tempest_run_with_whitelist_regex_include_fail_check_pass(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- whitelist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(whitelist_file.close)
- whitelist_file.write('failing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--whitelist-file=%s' % path,
+ def test_tempest_run_with_include_regex_include_fail_check_pass(self):
+ path = self._get_test_list_file('failing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.include_list, path),
'--regex', 'pass'], 1)
def test_tempest_run_passes_with_config_file(self):
@@ -197,50 +198,75 @@
'--config-file', self.stestr_conf_file,
'--regex', 'passing'], 0)
- def test_tempest_run_with_blacklist_failing(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- blacklist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(blacklist_file.close)
- blacklist_file.write('failing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--blacklist-file=%s' % path], 0)
+ def test_tempest_run_with_exclude_list_failing(self):
+ path = self._get_test_list_file('failing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.exclude_list, path)], 0)
- def test_tempest_run_with_blacklist_passing(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- blacklist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(blacklist_file.close)
- blacklist_file.write('passing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--blacklist-file=%s' % path], 1)
+ def test_tempest_run_with_exclude_list_passing(self):
+ path = self._get_test_list_file('passing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.exclude_list, path)], 1)
- def test_tempest_run_with_blacklist_regex_exclude_fail_check_pass(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- blacklist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(blacklist_file.close)
- blacklist_file.write('failing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--blacklist-file=%s' % path,
+ def test_tempest_run_with_exclude_list_regex_exclude_fail_check_pass(self):
+ path = self._get_test_list_file('failing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.exclude_list, path),
'--regex', 'pass'], 0)
- def test_tempest_run_with_blacklist_regex_exclude_pass_check_pass(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- blacklist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(blacklist_file.close)
- blacklist_file.write('passing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--blacklist-file=%s' % path,
+ def test_tempest_run_with_exclude_list_regex_exclude_pass_check_pass(self):
+ path = self._get_test_list_file('passing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.exclude_list, path),
'--regex', 'pass'], 1)
- def test_tempest_run_with_blacklist_regex_exclude_pass_check_fail(self):
- fd, path = tempfile.mkstemp()
- self.addCleanup(os.remove, path)
- blacklist_file = os.fdopen(fd, 'wb', 0)
- self.addCleanup(blacklist_file.close)
- blacklist_file.write('passing'.encode('utf-8'))
- self.assertRunExit(['tempest', 'run', '--blacklist-file=%s' % path,
+ def test_tempest_run_with_exclude_list_regex_exclude_pass_check_fail(self):
+ path = self._get_test_list_file('passing')
+ self.assertRunExit(['tempest', 'run',
+ '%s=%s' % (self.exclude_list, path),
'--regex', 'fail'], 1)
+class TestOldArgRunReturnCode(TestRunReturnCode):
+ """A class for testing deprecated but still supported args.
+
+ This class will be removed once we remove the following arguments:
+ * --black-regex
+ * --blacklist-file
+ * --whitelist-file
+ """
+ exclude_regex = '--black-regex'
+ exclude_list = '--blacklist-file'
+ include_list = '--whitelist-file'
+
+ def _test_args_passing(self, args):
+ self.assertRunExit(['tempest', 'run'] + args, 0)
+
+ def test_tempest_run_new_old_arg_comb(self):
+ path = self._get_test_list_file('failing')
+ self._test_args_passing(['--black-regex', 'failing',
+ '--exclude-regex', 'failing'])
+ self._test_args_passing(['--blacklist-file=' + path,
+ '--exclude-list=' + path])
+ path = self._get_test_list_file('passing')
+ self._test_args_passing(['--whitelist-file=' + path,
+ '--include-list=' + path])
+
+ def _test_args_passing_with_stestr_repository(self, args):
+ subprocess.call(['stestr', 'init'])
+ self.assertRunExit(['tempest', 'run'] + args, 0)
+
+ def test_tempest_run_new_old_arg_comb_with_stestr_repository(self):
+ path = self._get_test_list_file('failing')
+ self._test_args_passing_with_stestr_repository(
+ ['--black-regex', 'failing', '--exclude-regex', 'failing'])
+ self._test_args_passing_with_stestr_repository(
+ ['--blacklist-file=' + path, '--exclude-list=' + path])
+ path = self._get_test_list_file('passing')
+ self._test_args_passing_with_stestr_repository(
+ ['--whitelist-file=' + path, '--include-list=' + path])
+
+
class TestConfigPathCheck(base.TestCase):
def setUp(self):
super(TestConfigPathCheck, self).setUp()
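
The refactor above collapses the repeated tempfile boilerplate into _get_test_list_file and parametrizes the option names as class attributes, so TestOldArgRunReturnCode can re-run the whole suite with the deprecated spellings. The mapping between old and new filter options, as a minimal sketch (the 'failing' value is just the placeholder test id these tests use):

    import subprocess

    # Deprecated option        -> current replacement (both spellings still work)
    # --black-regex REGEX      -> --exclude-regex REGEX
    # --blacklist-file PATH    -> --exclude-list PATH
    # --whitelist-file PATH    -> --include-list PATH

    def run_without(regex):
        # Skip every test whose id matches `regex`; the exit code mirrors the run.
        return subprocess.call(['tempest', 'run', '--exclude-regex', regex])

    run_without('failing')
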
diff --git a/tempest/tests/common/test_credentials_factory.py b/tempest/tests/common/test_credentials_factory.py
index 0ef3742..374474d 100644
--- a/tempest/tests/common/test_credentials_factory.py
+++ b/tempest/tests/common/test_credentials_factory.py
@@ -173,10 +173,15 @@
@mock.patch.object(cf, 'get_credentials')
def test_get_configured_admin_credentials(self, mock_get_credentials):
cfg.CONF.set_default('auth_version', 'v3', 'identity')
- all_params = [('admin_username', 'username', 'my_name'),
- ('admin_password', 'password', 'secret'),
- ('admin_project_name', 'project_name', 'my_pname'),
- ('admin_domain_name', 'domain_name', 'my_dname')]
+ all_params = [
+ ('admin_username', 'username', 'my_name'),
+ ('admin_user_domain_name', 'user_domain_name', 'my_dname'),
+ ('admin_password', 'password', 'secret'),
+ ('admin_project_name', 'project_name', 'my_pname'),
+ ('admin_project_domain_name', 'project_domain_name', 'my_dname'),
+ ('admin_domain_name', 'domain_name', 'my_dname'),
+ ('admin_system', 'system', None),
+ ]
expected_result = 'my_admin_credentials'
mock_get_credentials.return_value = expected_result
for config_item, _, value in all_params:
@@ -194,10 +199,15 @@
def test_get_configured_admin_credentials_not_fill_valid(
self, mock_get_credentials):
cfg.CONF.set_default('auth_version', 'v2', 'identity')
- all_params = [('admin_username', 'username', 'my_name'),
- ('admin_password', 'password', 'secret'),
- ('admin_project_name', 'project_name', 'my_pname'),
- ('admin_domain_name', 'domain_name', 'my_dname')]
+ all_params = [
+ ('admin_username', 'username', 'my_name'),
+ ('admin_user_domain_name', 'user_domain_name', 'my_dname'),
+ ('admin_password', 'password', 'secret'),
+ ('admin_project_domain_name', 'project_domain_name', 'my_dname'),
+ ('admin_project_name', 'project_name', 'my_pname'),
+ ('admin_domain_name', 'domain_name', 'my_dname'),
+ ('admin_system', 'system', None),
+ ]
expected_result = mock.Mock()
expected_result.is_valid.return_value = True
mock_get_credentials.return_value = expected_result
@@ -278,3 +288,20 @@
mock_auth_get_credentials.assert_called_once_with(
expected_uri, fill_in=False, identity_version='v3',
**expected_params)
+
+ @mock.patch('tempest.lib.auth.get_credentials')
+ def test_get_credentials_v3_system(self, mock_auth_get_credentials):
+ expected_uri = 'V3_URI'
+ expected_result = 'my_creds'
+ mock_auth_get_credentials.return_value = expected_result
+ cfg.CONF.set_default('uri_v3', expected_uri, 'identity')
+ cfg.CONF.set_default('admin_system', 'all', 'auth')
+ params = {'system': 'all'}
+ expected_params = params.copy()
+ expected_params.update(config.service_client_config())
+ result = cf.get_credentials(fill_in=False, identity_version='v3',
+ **params)
+ self.assertEqual(expected_result, result)
+ mock_auth_get_credentials.assert_called_once_with(
+ expected_uri, fill_in=False, identity_version='v3',
+ **expected_params)
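
These additions cover the user/project domain options and the new [auth] admin_system setting, plus a plain system scope in get_credentials. Under the same config fixture these unit tests use, requesting system-scoped admin credentials reduces to roughly this sketch (the Keystone URI is a placeholder):

    from oslo_config import cfg
    from tempest.common import credentials_factory as cf

    # Options registered by tempest's config fixture in these tests.
    cfg.CONF.set_default('uri_v3', 'https://keystone.example.test/v3', 'identity')
    cfg.CONF.set_default('admin_system', 'all', 'auth')

    # fill_in=False builds system-scoped KeystoneV3Credentials without
    # contacting Keystone.
    creds = cf.get_credentials(fill_in=False, identity_version='v3', system='all')
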
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index ff74877..d64d7b0 100755
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -66,7 +66,7 @@
# Ensure waiter returns before build_timeout
self.assertLess((end_time - start_time), 10)
- def test_wait_for_image_imported_to_stores_timeout(self):
+ def test_wait_for_image_imported_to_stores_failure(self):
time_mock = self.patch('time.time')
client = mock.MagicMock()
client.build_timeout = 2
@@ -77,6 +77,20 @@
'status': 'saving',
'stores': 'fake_store',
'os_glance_failed_import': 'fake_os_glance_failed_import'})
+ self.assertRaises(lib_exc.OtherRestClientException,
+ waiters.wait_for_image_imported_to_stores,
+ client, 'fake_image_id', 'fake_store')
+
+ def test_wait_for_image_imported_to_stores_timeout(self):
+ time_mock = self.patch('time.time')
+ client = mock.MagicMock()
+ client.build_timeout = 2
+ self.patch('time.time', side_effect=[0., 1., 2.])
+ time_mock.side_effect = utils.generate_timeout_series(1)
+
+ client.show_image.return_value = ({
+ 'status': 'saving',
+ 'stores': 'fake_store'})
self.assertRaises(lib_exc.TimeoutException,
waiters.wait_for_image_imported_to_stores,
client, 'fake_image_id', 'fake_store')
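
The split gives the waiter two distinct exits: an explicit glance import failure raises OtherRestClientException immediately, while an image that simply never reports the store raises TimeoutException once build_timeout elapses. A simplified sketch of that behaviour (the real implementation lives in tempest.common.waiters):

    import time

    from tempest.lib import exceptions as lib_exc

    def wait_for_import(client, image_id, stores, timeout, interval=1):
        start = time.time()
        while time.time() - start < timeout:
            image = client.show_image(image_id)
            if image.get('os_glance_failed_import'):
                # Glance flagged the import as failed; no point in waiting longer.
                raise lib_exc.OtherRestClientException('image import failed')
            if stores in image.get('stores', ''):
                return
            time.sleep(interval)
        raise lib_exc.TimeoutException('image not imported to %s' % stores)
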
diff --git a/tempest/tests/lib/common/test_cred_client.py b/tempest/tests/lib/common/test_cred_client.py
index 860a465..b99311c 100644
--- a/tempest/tests/lib/common/test_cred_client.py
+++ b/tempest/tests/lib/common/test_cred_client.py
@@ -43,6 +43,14 @@
self.projects_client.delete_tenant.assert_called_once_with(
'fake_id')
+ def test_get_credentials(self):
+ ret = self.creds_client.get_credentials(
+ {'name': 'some_user', 'id': 'fake_id'},
+ {'name': 'some_project', 'id': 'fake_id'},
+ 'password123')
+ self.assertEqual(ret.username, 'some_user')
+ self.assertEqual(ret.project_name, 'some_project')
+
class TestCredClientV3(base.TestCase):
def setUp(self):
@@ -53,7 +61,7 @@
self.roles_client = mock.MagicMock()
self.domains_client = mock.MagicMock()
self.domains_client.list_domains.return_value = {
- 'domains': [{'id': 'fake_domain_id'}]
+ 'domains': [{'id': 'fake_domain_id', 'name': 'some_domain'}]
}
self.creds_client = cred_client.V3CredsClient(self.identity_client,
self.projects_client,
@@ -75,3 +83,31 @@
self.creds_client.delete_project('fake_id')
self.projects_client.delete_project.assert_called_once_with(
'fake_id')
+
+ def test_get_credentials(self):
+ ret = self.creds_client.get_credentials(
+ {'name': 'some_user', 'id': 'fake_id'},
+ {'name': 'some_project', 'id': 'fake_id'},
+ 'password123')
+ self.assertEqual(ret.username, 'some_user')
+ self.assertEqual(ret.project_name, 'some_project')
+ self.assertIsNone(ret.system)
+ self.assertEqual(ret.domain_name, 'some_domain')
+ ret = self.creds_client.get_credentials(
+ {'name': 'some_user', 'id': 'fake_id'},
+ None,
+ 'password123',
+ domain={'name': 'another_domain', 'id': 'another_id'})
+ self.assertEqual(ret.username, 'some_user')
+ self.assertIsNone(ret.project_name)
+ self.assertIsNone(ret.system)
+ self.assertEqual(ret.domain_name, 'another_domain')
+ ret = self.creds_client.get_credentials(
+ {'name': 'some_user', 'id': 'fake_id'},
+ None,
+ 'password123',
+ system={'system': 'all'})
+ self.assertEqual(ret.username, 'some_user')
+ self.assertIsNone(ret.project_name)
+ self.assertEqual(ret.system, {'system': 'all'})
+ self.assertEqual(ret.domain_name, 'some_domain')
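
The three calls above pin down the resolution rules in V3CredsClient.get_credentials: the project is optional, an explicit domain argument overrides the domain the client was created with, and a system dict is passed through untouched. The domain precedence, as a tiny standalone check (names taken from the test data):

    def pick_domain_name(call_domain, client_domain):
        # A per-call domain wins; otherwise fall back to the creds client's domain.
        return (call_domain or client_domain)['name']

    assert pick_domain_name(None, {'name': 'some_domain'}) == 'some_domain'
    assert pick_domain_name({'name': 'another_domain'},
                            {'name': 'some_domain'}) == 'another_domain'
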
diff --git a/tempest/tests/lib/services/identity/v3/test_identity_providers_client.py b/tempest/tests/lib/services/identity/v3/test_identity_providers_client.py
new file mode 100644
index 0000000..964c51b
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_identity_providers_client.py
@@ -0,0 +1,142 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.identity.v3 import identity_providers_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestIdentityProvidersClient(base.BaseServiceTest):
+ FAKE_IDENTITY_PROVIDERS_INFO = {
+ "identity_providers": [
+ {
+ "domain_id": "FAKE_DOMAIN_ID",
+ "description": "FAKE IDENTITY PROVIDER",
+ "remote_ids": ["fake_id_1", "fake_id_2"],
+ "enabled": True,
+ "id": "FAKE_ID",
+ "links": {
+ "protocols": "http://example.com/identity/v3/" +
+ "OS-FEDERATION/identity_providers/" +
+ "FAKE_ID/protocols",
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/FAKE_ID"
+ }
+ }
+ ],
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers"
+ }
+ }
+
+ FAKE_IDENTITY_PROVIDER_INFO = {
+ "identity_provider": {
+ "authorization_ttl": None,
+ "domain_id": "FAKE_DOMAIN_ID",
+ "description": "FAKE IDENTITY PROVIDER",
+ "remote_ids": ["fake_id_1", "fake_id_2"],
+ "enabled": True,
+ "id": "ACME",
+ "links": {
+ "protocols": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/FAKE_ID/protocols",
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/FAKE_ID"
+ }
+ }
+ }
+
+ def setUp(self):
+ super(TestIdentityProvidersClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = identity_providers_client.IdentityProvidersClient(
+ fake_auth, 'identity', 'regionOne')
+
+ def _test_register_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.register_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ self.FAKE_IDENTITY_PROVIDER_INFO,
+ bytes_body,
+ identity_provider_id="FAKE_ID",
+ status=201)
+
+ def _test_list_identity_providers(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_identity_providers,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_IDENTITY_PROVIDERS_INFO,
+ bytes_body,
+ status=200)
+
+ def _test_get_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.get_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_IDENTITY_PROVIDER_INFO,
+ bytes_body,
+ identity_provider_id="FAKE_ID",
+ status=200)
+
+ def _test_delete_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.delete_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ bytes_body,
+ identity_provider_id="FAKE_ID",
+ status=204)
+
+ def _test_update_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_IDENTITY_PROVIDER_INFO,
+ bytes_body,
+ identity_provider_id="FAKE_ID",
+ status=200)
+
+ def test_register_identity_provider_with_str_body(self):
+ self._test_register_identity_provider()
+
+ def test_register_identity_provider_with_bytes_body(self):
+ self._test_register_identity_provider(bytes_body=True)
+
+ def test_list_identity_providers_with_str_body(self):
+ self._test_list_identity_providers()
+
+ def test_list_identity_providers_with_bytes_body(self):
+ self._test_list_identity_providers(bytes_body=True)
+
+ def test_get_identity_provider_with_str_body(self):
+ self._test_get_identity_provider()
+
+ def test_get_identity_provider_with_bytes_body(self):
+ self._test_get_identity_provider(bytes_body=True)
+
+ def test_delete_identity_provider_with_str_body(self):
+ self._test_delete_identity_provider()
+
+ def test_delete_identity_provider_with_bytes_body(self):
+ self._test_delete_identity_provider(bytes_body=True)
+
+ def test_update_identity_provider_with_str_body(self):
+ self._test_update_identity_provider()
+
+ def test_update_identity_provider_with_bytes_body(self):
+ self._test_update_identity_provider(bytes_body=True)
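
The four new federation client test modules (identity providers, mappings, protocols and service providers below) all lean on the same BaseServiceTest helper: check_service_client_function patches the named RestClient transport method, returns the canned body, invokes the client wrapper and asserts both payload and status. A condensed, annotated version of the pattern, with a trimmed fake body:

    from tempest.lib.services.identity.v3 import identity_providers_client
    from tempest.tests.lib import fake_auth_provider
    from tempest.tests.lib.services import base

    class TestShowProviderAnnotated(base.BaseServiceTest):
        FAKE_BODY = {"identity_provider": {"id": "FAKE_ID", "enabled": True}}

        def setUp(self):
            super(TestShowProviderAnnotated, self).setUp()
            self.client = identity_providers_client.IdentityProvidersClient(
                fake_auth_provider.FakeAuthProvider(), 'identity', 'regionOne')

        def test_get(self):
            self.check_service_client_function(
                self.client.get_identity_provider,                # wrapper under test
                'tempest.lib.common.rest_client.RestClient.get',  # transport to patch
                self.FAKE_BODY,                                    # canned response body
                False,                                             # str body (True = bytes)
                identity_provider_id="FAKE_ID",                    # kwargs for the wrapper
                status=200)                                        # expected HTTP status
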
diff --git a/tempest/tests/lib/services/identity/v3/test_mappings_client.py b/tempest/tests/lib/services/identity/v3/test_mappings_client.py
new file mode 100644
index 0000000..845a3f9
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_mappings_client.py
@@ -0,0 +1,183 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.identity.v3 import mappings_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestMappingsClient(base.BaseServiceTest):
+ FAKE_MAPPING_INFO = {
+ "mapping": {
+ "id": "fake123",
+ "links": {
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "mappings/fake123"
+ },
+ "rules": [
+ {
+ "local": [
+ {
+ "user": {
+ "name": "{0}"
+ }
+ },
+ {
+ "group": {
+ "id": "0cd5e9"
+ }
+ }
+ ],
+ "remote": [
+ {
+ "type": "UserName"
+ },
+ {
+ "type": "orgPersonType",
+ "not_any_of": [
+ "Contractor",
+ "Guest"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+
+ FAKE_MAPPINGS_INFO = {
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/OS-FEDERATION/mappings"
+ },
+ "mappings": [
+ {
+ "id": "fake123",
+ "links": {
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "mappings/fake123"
+ },
+ "rules": [
+ {
+ "local": [
+ {
+ "user": {
+ "name": "{0}"
+ }
+ },
+ {
+ "group": {
+ "id": "0cd5e9"
+ }
+ }
+ ],
+ "remote": [
+ {
+ "type": "UserName"
+ },
+ {
+ "type": "orgPersonType",
+ "any_one_of": [
+ "Contractor",
+ "SubContractor"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestMappingsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = mappings_client.MappingsClient(
+ fake_auth, 'identity', 'regionOne')
+
+ def _test_create_mapping(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_mapping,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ self.FAKE_MAPPING_INFO,
+ bytes_body,
+ mapping_id="fake123",
+ status=201)
+
+ def _test_get_mapping(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.get_mapping,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_MAPPING_INFO,
+ bytes_body,
+ mapping_id="fake123",
+ status=200)
+
+ def _test_update_mapping(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_mapping,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_MAPPING_INFO,
+ bytes_body,
+ mapping_id="fake123",
+ status=200)
+
+ def _test_list_mappings(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_mappings,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_MAPPINGS_INFO,
+ bytes_body,
+ status=200)
+
+ def _test_delete_mapping(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.delete_mapping,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ bytes_body,
+ mapping_id="fake123",
+ status=204)
+
+ def test_create_mapping_with_str_body(self):
+ self._test_create_mapping()
+
+ def test_create_mapping_with_bytes_body(self):
+ self._test_create_mapping(bytes_body=True)
+
+ def test_get_mapping_with_str_body(self):
+ self._test_get_mapping()
+
+ def test_get_mapping_with_bytes_body(self):
+ self._test_get_mapping(bytes_body=True)
+
+ def test_update_mapping_with_str_body(self):
+ self._test_update_mapping()
+
+ def test_update_mapping_with_bytes_body(self):
+ self._test_update_mapping(bytes_body=True)
+
+ def test_list_mappings_with_str_body(self):
+ self._test_list_mappings()
+
+ def test_list_mappings_with_bytes_body(self):
+ self._test_list_mappings(bytes_body=True)
+
+ def test_delete_mapping_with_str_body(self):
+ self._test_delete_mapping()
+
+ def test_delete_mapping_with_bytes_body(self):
+ self._test_delete_mapping(bytes_body=True)
diff --git a/tempest/tests/lib/services/identity/v3/test_protocols_client.py b/tempest/tests/lib/services/identity/v3/test_protocols_client.py
new file mode 100644
index 0000000..c1d04f4
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_protocols_client.py
@@ -0,0 +1,140 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.identity.v3 import protocols_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestProtocolsClient(base.BaseServiceTest):
+ FAKE_PROTOCOLS_INFO = {
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/FAKE_ID/protocols"
+ },
+ "protocols": [
+ {
+ "id": "fake_id1",
+ "links": {
+ "identity_provider": "http://example.com/identity/v3/" +
+ "OS-FEDERATION/identity_providers/" +
+ "FAKE_ID",
+ "self": "http://example.com/identity/v3/OS-FEDERATION/"
+ "identity_providers/FAKE_ID/protocols/fake_id1"
+ },
+ "mapping_id": "fake123"
+ }
+ ]
+ }
+
+ FAKE_PROTOCOL_INFO = {
+ "protocol": {
+ "id": "fake_id1",
+ "links": {
+ "identity_provider": "http://example.com/identity/v3/OS-" +
+ "FEDERATION/identity_providers/FAKE_ID",
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/FAKE_ID/protocols/fake_id1"
+ },
+ "mapping_id": "fake123"
+ }
+ }
+
+ def setUp(self):
+ super(TestProtocolsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = protocols_client.ProtocolsClient(
+ fake_auth, 'identity', 'regionOne')
+
+ def _test_add_protocol_to_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.add_protocol_to_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ self.FAKE_PROTOCOL_INFO,
+ bytes_body,
+ idp_id="FAKE_ID",
+ protocol_id="fake_id1",
+ status=201)
+
+ def _test_list_protocols_of_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_protocols_of_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_PROTOCOLS_INFO,
+ bytes_body,
+ idp_id="FAKE_ID",
+ status=200)
+
+ def _test_get_protocol_for_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.get_protocol_for_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_PROTOCOL_INFO,
+ bytes_body,
+ idp_id="FAKE_ID",
+ protocol_id="fake_id1",
+ status=200)
+
+ def _test_update_mapping_for_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_mapping_for_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_PROTOCOL_INFO,
+ bytes_body,
+ idp_id="FAKE_ID",
+ protocol_id="fake_id1",
+ status=200)
+
+ def _test_delete_protocol_from_identity_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.delete_protocol_from_identity_provider,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ bytes_body,
+ idp_id="FAKE_ID",
+ protocol_id="fake_id1",
+ status=204)
+
+ def test_add_protocol_to_identity_provider_with_str_body(self):
+ self._test_add_protocol_to_identity_provider()
+
+ def test_add_protocol_to_identity_provider_with_bytes_body(self):
+ self._test_add_protocol_to_identity_provider(bytes_body=True)
+
+ def test_list_protocols_of_identity_provider_with_str_body(self):
+ self._test_list_protocols_of_identity_provider()
+
+ def test_list_protocols_of_identity_provider_with_bytes_body(self):
+ self._test_list_protocols_of_identity_provider(bytes_body=True)
+
+ def test_get_protocol_for_identity_provider_with_str_body(self):
+ self._test_get_protocol_for_identity_provider()
+
+ def test_get_protocol_for_identity_provider_with_bytes_body(self):
+ self._test_get_protocol_for_identity_provider(bytes_body=True)
+
+ def test_update_mapping_for_identity_provider_with_str_body(self):
+ self._test_update_mapping_for_identity_provider()
+
+ def test_update_mapping_for_identity_provider_with_bytes_body(self):
+ self._test_update_mapping_for_identity_provider(bytes_body=True)
+
+ def test_delete_protocol_from_identity_provider_with_str_body(self):
+ self._test_delete_protocol_from_identity_provider()
+
+ def test_delete_protocol_from_identity_provider_with_bytes_body(self):
+ self._test_delete_protocol_from_identity_provider(bytes_body=True)
diff --git a/tempest/tests/lib/services/identity/v3/test_roles_client.py b/tempest/tests/lib/services/identity/v3/test_roles_client.py
index 8d6bb42..e963310 100644
--- a/tempest/tests/lib/services/identity/v3/test_roles_client.py
+++ b/tempest/tests/lib/services/identity/v3/test_roles_client.py
@@ -225,6 +225,16 @@
role_id="1234",
status=204)
+ def _test_create_user_role_on_system(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_user_role_on_system,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ user_id="123",
+ role_id="1234",
+ status=204)
+
def _test_list_user_roles_on_project(self, bytes_body=False):
self.check_service_client_function(
self.client.list_user_roles_on_project,
@@ -243,6 +253,14 @@
domain_id="b344506af7644f6794d9cb316600b020",
user_id="123")
+ def _test_list_user_roles_on_system(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_user_roles_on_system,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body,
+ user_id="123")
+
def _test_create_group_role_on_project(self, bytes_body=False):
self.check_service_client_function(
self.client.create_group_role_on_project,
@@ -265,6 +283,16 @@
role_id="1234",
status=204)
+ def _test_create_group_role_on_system(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_group_role_on_system,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ group_id="123",
+ role_id="1234",
+ status=204)
+
def _test_list_group_roles_on_project(self, bytes_body=False):
self.check_service_client_function(
self.client.list_group_roles_on_project,
@@ -283,6 +311,15 @@
domain_id="b344506af7644f6794d9cb316600b020",
group_id="123")
+ def _test_list_group_roles_on_system(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_group_roles_on_system,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123")
+
def _test_create_role_inference_rule(self, bytes_body=False):
self.check_service_client_function(
self.client.create_role_inference_rule,
@@ -405,6 +442,15 @@
role_id="1234",
status=204)
+ def test_delete_role_from_user_on_system(self):
+ self.check_service_client_function(
+ self.client.delete_role_from_user_on_system,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ user_id="123",
+ role_id="1234",
+ status=204)
+
def test_delete_role_from_group_on_project(self):
self.check_service_client_function(
self.client.delete_role_from_group_on_project,
@@ -425,6 +471,15 @@
role_id="1234",
status=204)
+ def test_delete_role_from_group_on_system(self):
+ self.check_service_client_function(
+ self.client.delete_role_from_group_on_system,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ group_id="123",
+ role_id="1234",
+ status=204)
+
def test_check_user_role_existence_on_project(self):
self.check_service_client_function(
self.client.check_user_role_existence_on_project,
@@ -445,6 +500,15 @@
role_id="1234",
status=204)
+ def test_check_user_role_existence_on_system(self):
+ self.check_service_client_function(
+ self.client.check_user_role_existence_on_system,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ user_id="123",
+ role_id="1234",
+ status=204)
+
def test_check_role_from_group_on_project_existence(self):
self.check_service_client_function(
self.client.check_role_from_group_on_project_existence,
@@ -465,6 +529,15 @@
role_id="1234",
status=204)
+ def test_check_role_from_group_on_system_existence(self):
+ self.check_service_client_function(
+ self.client.check_role_from_group_on_system_existence,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ group_id="123",
+ role_id="1234",
+ status=204)
+
def test_create_role_inference_rule_with_str_body(self):
self._test_create_role_inference_rule()
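
The new *_on_system helpers mirror the existing project and domain variants but target Keystone's system role-assignment endpoints; each test patches the matching HTTP verb and expects 204 for assignments, checks and deletes, or the canned role list for GETs. The method-to-endpoint mapping these tests imply, as a hedged reference sketch (paths follow the documented Keystone v3 API, not a copy of the client internals):

    # HTTP verb and path (relative to the v3 base URL) per new client method.
    SYSTEM_ROLE_CALLS = {
        'create_user_role_on_system':
            ('PUT', 'system/users/{user_id}/roles/{role_id}'),      # 204
        'list_user_roles_on_system':
            ('GET', 'system/users/{user_id}/roles'),                # 200
        'check_user_role_existence_on_system':
            ('HEAD', 'system/users/{user_id}/roles/{role_id}'),     # 204
        'delete_role_from_user_on_system':
            ('DELETE', 'system/users/{user_id}/roles/{role_id}'),   # 204
        'create_group_role_on_system':
            ('PUT', 'system/groups/{group_id}/roles/{role_id}'),    # 204
    }

    def system_role_call(method, **ids):
        verb, path = SYSTEM_ROLE_CALLS[method]
        return verb, path.format(**ids)

    # e.g. ('PUT', 'system/users/123/roles/1234')
    print(system_role_call('create_user_role_on_system', user_id='123', role_id='1234'))
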
diff --git a/tempest/tests/lib/services/identity/v3/test_service_providers_client.py b/tempest/tests/lib/services/identity/v3/test_service_providers_client.py
new file mode 100644
index 0000000..ec908bc
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_service_providers_client.py
@@ -0,0 +1,157 @@
+# Copyright 2020 Samsung Electronics Co., Ltd
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.identity.v3 import service_providers_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestServiceProvidersClient(base.BaseServiceTest):
+ FAKE_SERVICE_PROVIDER_INFO = {
+ "service_provider": {
+ "auth_url": "https://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/FAKE_ID/protocols/fake_id1/auth",
+ "description": "Fake Service Provider",
+ "enabled": True,
+ "id": "FAKE_ID",
+ "links": {
+ "self": "https://example.com/identity/v3/OS-FEDERATION/" +
+ "service_providers/FAKE_ID"
+ },
+ "relay_state_prefix": "ss:mem:",
+ "sp_url": "https://example.com/identity/Shibboleth.sso/" +
+ "FAKE_ID1/ECP"
+ }
+ }
+
+ FAKE_SERVICE_PROVIDERS_INFO = {
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "service_providers"
+ },
+ "service_providers": [
+ {
+ "auth_url": "https://example.com/identity/v3/OS-FEDERATION/" +
+ "identity_providers/acme/protocols/saml2/auth",
+ "description": "Stores ACME identities",
+ "enabled": True,
+ "id": "ACME",
+ "links": {
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "service_providers/ACME"
+ },
+ "relay_state_prefix": "ss:mem:",
+ "sp_url": "https://example.com/identity/Shibboleth.sso/" +
+ "SAML2/ECP"
+ },
+ {
+ "auth_url": "https://other.example.com/identity/v3/" +
+ "OS-FEDERATION/identity_providers/acme/" +
+ "protocols/saml2/auth",
+ "description": "Stores contractor identities",
+ "enabled": False,
+ "id": "ACME-contractors",
+ "links": {
+ "self": "http://example.com/identity/v3/OS-FEDERATION/" +
+ "service_providers/ACME-contractors"
+ },
+ "relay_state_prefix": "ss:mem:",
+ "sp_url": "https://other.example.com/identity/Shibboleth" +
+ ".sso/SAML2/ECP"
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestServiceProvidersClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = service_providers_client.ServiceProvidersClient(
+ fake_auth, 'identity', 'regionOne')
+
+ def _test_register_service_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.register_service_provider,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ self.FAKE_SERVICE_PROVIDER_INFO,
+ bytes_body,
+ service_provider_id="FAKE_ID",
+ status=201)
+
+ def _test_list_service_providers(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_service_providers,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_SERVICE_PROVIDERS_INFO,
+ bytes_body,
+ status=200)
+
+ def _test_get_service_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.get_service_provider,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_SERVICE_PROVIDER_INFO,
+ bytes_body,
+ service_provider_id="FAKE_ID",
+ status=200)
+
+ def _test_delete_service_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.delete_service_provider,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ bytes_body,
+ service_provider_id="FAKE_ID",
+ status=204)
+
+ def _test_update_service_provider(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_service_provider,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_SERVICE_PROVIDER_INFO,
+ bytes_body,
+ service_provider_id="FAKE_ID",
+ status=200)
+
+ def test_register_service_provider_with_str_body(self):
+ self._test_register_service_provider()
+
+ def test_register_service_provider_with_bytes_body(self):
+ self._test_register_service_provider(bytes_body=True)
+
+ def test_list_service_providers_with_str_body(self):
+ self._test_list_service_providers()
+
+ def test_list_service_providers_with_bytes_body(self):
+ self._test_list_service_providers(bytes_body=True)
+
+ def test_get_service_provider_with_str_body(self):
+ self._test_get_service_provider()
+
+ def test_get_service_provider_with_bytes_body(self):
+ self._test_get_service_provider(bytes_body=True)
+
+ def test_delete_service_provider_with_str_body(self):
+ self._test_delete_service_provider()
+
+ def test_delete_service_provider_with_bytes_body(self):
+ self._test_delete_service_provider(bytes_body=True)
+
+ def test_update_service_provider_with_str_body(self):
+ self._test_update_service_provider()
+
+ def test_update_service_provider_with_bytes_body(self):
+ self._test_update_service_provider(bytes_body=True)
diff --git a/tempest/tests/lib/services/identity/v3/test_trusts_client.py b/tempest/tests/lib/services/identity/v3/test_trusts_client.py
index a1ca020..33dca7d 100644
--- a/tempest/tests/lib/services/identity/v3/test_trusts_client.py
+++ b/tempest/tests/lib/services/identity/v3/test_trusts_client.py
@@ -94,6 +94,35 @@
}
}
+ FAKE_LIST_TRUSTS_ROLES = {
+ "roles": [
+ {
+ "id": "c1648e",
+ "links": {
+ "self": "http://example.com/identity/v3/roles/c1648e"
+ },
+ "name": "manager"
+ },
+ {
+ "id": "ed7b78",
+ "links": {
+ "self": "http://example.com/identity/v3/roles/ed7b78"
+ },
+ "name": "member"
+ }
+ ]
+ }
+
+ FAKE_TRUST_ROLE = {
+ "role": {
+ "id": "c1648e",
+ "links": {
+ "self": "http://example.com/identity/v3/roles/c1648e"
+ },
+ "name": "manager"
+ }
+ }
+
def setUp(self):
super(TestTrustsClient, self).setUp()
fake_auth = fake_auth_provider.FakeAuthProvider()
@@ -123,6 +152,43 @@
self.FAKE_LIST_TRUSTS,
bytes_body)
+ def _test_list_trust_roles(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_trust_roles,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_TRUSTS_ROLES,
+ bytes_body,
+ trust_id="1ff900")
+
+ def test_check_trust_role(self):
+ self.check_service_client_function(
+ self.client.check_trust_role,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ trust_id="1ff900",
+ role_id="ed7b78")
+
+ def _check_show_trust_role(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_trust_role,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_TRUST_ROLE,
+ bytes_body,
+ trust_id="1ff900",
+ role_id="ed7b78")
+
+ def test_list_trust_roles_with_str_body(self):
+ self._test_list_trust_roles()
+
+ def test_list_trust_roles_with_bytes_body(self):
+ self._test_list_trust_roles(bytes_body=True)
+
+ def test_check_show_trust_role_with_str_body(self):
+ self._check_show_trust_role()
+
+ def test_check_show_trust_role_with_bytes_body(self):
+ self._check_show_trust_role(bytes_body=True)
+
def test_create_trust_with_str_body(self):
self._test_create_trust()
diff --git a/tempest/tests/lib/services/network/test_ports_client.py b/tempest/tests/lib/services/network/test_ports_client.py
index 20ef3f1..9ca9ac6 100644
--- a/tempest/tests/lib/services/network/test_ports_client.py
+++ b/tempest/tests/lib/services/network/test_ports_client.py
@@ -22,53 +22,126 @@
class TestPortsClient(base.BaseServiceTest):
+ FAKE_CREATE_PORTS = {
+ "port": {
+ "binding:host_id": "4df8d9ff-6f6f-438f-90a1-ef660d4586ad",
+ "binding:profile": {
+ "local_link_information": [
+ {
+ "port_id": "Ethernet3/1",
+ "switch_id": "0a:1b:2c:3d:4e:5f",
+ "switch_info": "switch1"
+ }
+ ]
+ },
+ "binding:vnic_type": "baremetal",
+ "device_id": "d90a13da-be41-461f-9f99-1dbcf438fdf2",
+ "device_owner": "baremetal:none",
+ "dns_domain": "my-domain.org.",
+ "dns_name": "myport",
+ "qos_policy_id": "29d5e02e-d5ab-4929-bee4-4a9fc12e22ae",
+ "uplink_status_propagation": False
+ }
+ }
+
FAKE_PORTS = {
"ports": [
{
"admin_state_up": True,
"allowed_address_pairs": [],
+ "created_at": "2016-03-08T20:19:41",
"data_plane_status": None,
"description": "",
"device_id": "9ae135f4-b6e0-4dad-9e91-3c223e385824",
"device_owner": "network:router_gateway",
- "extra_dhcp_opts": [],
+ "dns_assignment": [
+ {
+ "hostname": "myport",
+ "ip_address": "172.24.4.2",
+ "fqdn": "myport.my-domain.org"
+ }
+ ],
+ "dns_domain": "my-domain.org.",
+ "dns_name": "myport",
+ "extra_dhcp_opts": [
+ {
+ "opt_value": "pxelinux.0",
+ "ip_version": 4,
+ "opt_name": "bootfile-name"
+ }
+ ],
"fixed_ips": [
{
"ip_address": "172.24.4.2",
- "subnet_id": "008ba151-0b8c-4a67-98b5-0d2b87666062"
+ "subnet_id":
+ "008ba151-0b8c-4a67-98b5-0d2b87666062"
}
],
"id": "d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b",
+ "ip_allocation": "immediate",
"mac_address": "fa:16:3e:58:42:ed",
"name": "",
"network_id": "70c1db1f-b701-45bd-96e0-a313ee3430b3",
"project_id": "",
+ "revision_number": 1,
"security_groups": [],
"status": "ACTIVE",
- "tenant_id": ""
+ "tags": ["tag1,tag2"],
+ "tenant_id": "d6700c0c9ffa4f1cb322cd4a1f3906fa",
+ "updated_at": "2016-03-08T20:19:41",
+ "qos_network_policy_id":
+ "174dd0c1-a4eb-49d4-a807-ae80246d82f4",
+ "qos_policy_id": "29d5e02e-d5ab-4929-bee4-4a9fc12e22ae",
+ "port_security_enabled": False,
+ "uplink_status_propagation": False
},
{
"admin_state_up": True,
"allowed_address_pairs": [],
+ "created_at": "2016-03-08T20:19:41",
"data_plane_status": None,
"description": "",
"device_id": "9ae135f4-b6e0-4dad-9e91-3c223e385824",
"device_owner": "network:router_interface",
- "extra_dhcp_opts": [],
+ "dns_assignment": [
+ {
+ "hostname": "myport2",
+ "ip_address": "10.0.0.1",
+ "fqdn": "myport2.my-domain.org"
+ }
+ ],
+ "dns_domain": "my-domain.org.",
+ "dns_name": "myport2",
+ "extra_dhcp_opts": [
+ {
+ "opt_value": "pxelinux.0",
+ "ip_version": 4,
+ "opt_name": "bootfile-name"
+ }
+ ],
"fixed_ips": [
{
"ip_address": "10.0.0.1",
- "subnet_id": "288bf4a1-51ba-43b6-9d0a-520e9005db17"
+ "subnet_id":
+ "288bf4a1-51ba-43b6-9d0a-520e9005db17"
}
],
"id": "f71a6703-d6de-4be1-a91a-a570ede1d159",
+ "ip_allocation": "immediate",
"mac_address": "fa:16:3e:bb:3c:e4",
"name": "",
"network_id": "f27aa545-cbdd-4907-b0c6-c9e8b039dcc2",
"project_id": "d397de8a63f341818f198abb0966f6f3",
+ "revision_number": 1,
"security_groups": [],
"status": "ACTIVE",
- "tenant_id": "d397de8a63f341818f198abb0966f6f3"
+ "tags": ["tag1,tag2"],
+ "tenant_id": "d397de8a63f341818f198abb0966f6f3",
+ "updated_at": "2016-03-08T20:19:41",
+ "qos_network_policy_id": None,
+ "qos_policy_id": None,
+ "port_security_enabled": False,
+ "uplink_status_propagation": False
}
]
}
@@ -112,7 +185,7 @@
self.check_service_client_function(
self.ports_client.create_port,
"tempest.lib.common.rest_client.RestClient.post",
- {"port": self.FAKE_PORTS["ports"][0]},
+ self.FAKE_CREATE_PORTS,
bytes_body,
201,
**self.FAKE_PORT1)
diff --git a/tempest/tests/lib/test_auth.py b/tempest/tests/lib/test_auth.py
index c3a792f..3edb122 100644
--- a/tempest/tests/lib/test_auth.py
+++ b/tempest/tests/lib/test_auth.py
@@ -786,6 +786,19 @@
self.assertIn(attr, auth_params.keys())
self.assertEqual(getattr(all_creds, attr), auth_params[attr])
+ def test_auth_parameters_with_system_scope(self):
+ all_creds = fake_credentials.FakeKeystoneV3AllCredentials()
+ self.auth_provider.credentials = all_creds
+ self.auth_provider.scope = 'system'
+ auth_params = self.auth_provider._auth_params()
+ self.assertNotIn('scope', auth_params.keys())
+ for attr in all_creds.get_init_attributes():
+ if attr.startswith('project_') or attr.startswith('domain_'):
+ self.assertNotIn(attr, auth_params.keys())
+ else:
+ self.assertIn(attr, auth_params.keys())
+ self.assertEqual(getattr(all_creds, attr), auth_params[attr])
+
class TestKeystoneV3Credentials(base.TestCase):
def testSetAttrUserDomain(self):
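
The system-scope case strips project and domain scoping from the token request: the provider keeps only the user-identity attributes and never sends a 'scope' key of its own. The filtering the assertions above describe, as a standalone sketch:

    def system_scoped_auth_params(cred_attrs):
        # cred_attrs: credential attribute name -> value. Project- and
        # domain-scoping attributes are dropped; user_domain_name stays.
        return {name: value for name, value in cred_attrs.items()
                if not name.startswith(('project_', 'domain_'))}

    params = system_scoped_auth_params({
        'username': 'u', 'user_domain_name': 'Default',
        'project_name': 'p', 'project_domain_name': 'Default',
        'domain_name': 'Default'})
    assert params == {'username': 'u', 'user_domain_name': 'Default'}
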
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index 6018441..1889420 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -19,7 +19,6 @@
from tempest.common import utils
from tempest import config
from tempest import exceptions
-from tempest.lib.common.utils import data_utils
from tempest import test
from tempest.tests import base
from tempest.tests import fake_config
@@ -33,47 +32,6 @@
fake_config.FakePrivate)
-# NOTE: The test module is for tempest.test.idempotent_id.
-# After all projects switch to use decorators.idempotent_id,
-# we can remove tempest.test.idempotent_id as well as this
-# test module
-class TestIdempotentIdDecorator(BaseDecoratorsTest):
-
- def _test_helper(self, _id, **decorator_args):
- @test.idempotent_id(_id)
- def foo():
- """Docstring"""
- pass
-
- return foo
-
- def _test_helper_without_doc(self, _id, **decorator_args):
- @test.idempotent_id(_id)
- def foo():
- pass
-
- return foo
-
- def test_positive(self):
- _id = data_utils.rand_uuid()
- foo = self._test_helper(_id)
- self.assertIn('id-%s' % _id, getattr(foo, '__testtools_attrs'))
- self.assertTrue(foo.__doc__.startswith('Test idempotent id: %s' % _id))
-
- def test_positive_without_doc(self):
- _id = data_utils.rand_uuid()
- foo = self._test_helper_without_doc(_id)
- self.assertTrue(foo.__doc__.startswith('Test idempotent id: %s' % _id))
-
- def test_idempotent_id_not_str(self):
- _id = 42
- self.assertRaises(TypeError, self._test_helper, _id)
-
- def test_idempotent_id_not_valid_uuid(self):
- _id = '42'
- self.assertRaises(ValueError, self._test_helper, _id)
-
-
class TestServicesDecorator(BaseDecoratorsTest):
def _test_services_helper(self, *decorator_args):
class TestFoo(test.BaseTestCase):
diff --git a/tools/check_logs.py b/tools/check_logs.py
index de7e41d..7e191a0 100755
--- a/tools/check_logs.py
+++ b/tools/check_logs.py
@@ -56,39 +56,39 @@
's-proxy'])
-def process_files(file_specs, url_specs, whitelists):
+def process_files(file_specs, url_specs, allow_lists):
regexp = re.compile(r"^.* (ERROR|CRITICAL|TRACE) .*\[.*\-.*\]")
logs_with_errors = []
for (name, filename) in file_specs:
- whitelist = whitelists.get(name, [])
+ allow_list = allow_lists.get(name, [])
with open(filename) as content:
- if scan_content(content, regexp, whitelist):
+ if scan_content(content, regexp, allow_list):
logs_with_errors.append(name)
for (name, url) in url_specs:
- whitelist = whitelists.get(name, [])
+ allow_list = allow_lists.get(name, [])
req = urlreq.Request(url)
req.add_header('Accept-Encoding', 'gzip')
page = urlreq.urlopen(req)
buf = six.StringIO(page.read())
f = gzip.GzipFile(fileobj=buf)
- if scan_content(f.read().splitlines(), regexp, whitelist):
+ if scan_content(f.read().splitlines(), regexp, allow_list):
logs_with_errors.append(name)
return logs_with_errors
-def scan_content(content, regexp, whitelist):
+def scan_content(content, regexp, allow_list):
had_errors = False
for line in content:
if not line.startswith("Stderr:") and regexp.match(line):
- whitelisted = False
- for w in whitelist:
+ allowed = False
+ for w in allow_list:
pat = ".*%s.*%s.*" % (w['module'].replace('.', '\\.'),
w['message'])
if re.match(pat, line):
- whitelisted = True
+ allowed = True
break
- if not whitelisted or dump_all_errors:
- if not whitelisted:
+ if not allowed or dump_all_errors:
+ if not allowed:
had_errors = True
return had_errors
@@ -105,9 +105,9 @@
print("Must provide exactly one of -d or -u")
return 1
print("Checking logs...")
- WHITELIST_FILE = os.path.join(
+ ALLOW_LIST_FILE = os.path.join(
os.path.abspath(os.path.dirname(os.path.dirname(__file__))),
- "etc", "whitelist.yaml")
+ "etc", "allow-list.yaml")
file_matcher = re.compile(r".*screen-([\w-]+)\.log")
files = []
@@ -132,17 +132,17 @@
if m:
urls_to_process.append((m.group(1), u))
- whitelists = {}
- with open(WHITELIST_FILE) as stream:
+ allow_lists = {}
+ with open(ALLOW_LIST_FILE) as stream:
loaded = yaml.safe_load(stream)
if loaded:
for (name, l) in six.iteritems(loaded):
for w in l:
assert 'module' in w, 'no module in %s' % name
assert 'message' in w, 'no message in %s' % name
- whitelists = loaded
+ allow_lists = loaded
logs_with_errors = process_files(files_to_process, urls_to_process,
- whitelists)
+ allow_lists)
failed = False
if logs_with_errors:
@@ -164,14 +164,14 @@
usage = """
-Find non-white-listed log errors in log files from a devstack-gate run.
+Find non-allow-listed log errors in log files from a devstack-gate run.
Log files will be searched for ERROR or CRITICAL messages. If any
-error messages do not match any of the whitelist entries contained in
-etc/whitelist.yaml, those messages will be printed to the console and
+error messages do not match any of the allow-list entries contained in
+etc/allow-list.yaml, those messages will be printed to the console and
failure will be returned. A file directory containing logs or a url to the
log files of an OpenStack gate job can be provided.
-The whitelist yaml looks like:
+The allow-list yaml looks like:
log-name:
- module: "a.b.c"
@@ -179,7 +179,7 @@
- module: "a.b.c"
message: "regexp"
-repeated for each log file with a whitelist.
+repeated for each log file with an allow-list.
"""
parser = argparse.ArgumentParser(description=usage)
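
Beyond the terminology, the behaviour is unchanged: a log line only counts against the job when it matches the ERROR/CRITICAL/TRACE pattern and no allow-list entry covers it. A self-contained sketch of that check (the log line and the entry are made up):

    import re

    ERROR_RE = re.compile(r"^.* (ERROR|CRITICAL|TRACE) .*\[.*\-.*\]")
    allow_list = [{'module': 'nova.compute', 'message': 'Instance .* not found'}]

    line = "2021-01-01 ERROR nova.compute [req-1 admin-] Instance 42 not found"
    allowed = any(
        re.match(".*%s.*%s.*" % (w['module'].replace('.', '\\.'), w['message']), line)
        for w in allow_list)

    # An error hit that an allow-list entry covers is logged but not fatal.
    assert ERROR_RE.match(line) and allowed
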
diff --git a/tools/generate-tempest-plugins-list.py b/tools/generate-tempest-plugins-list.py
index 618c388..1b5b369 100644
--- a/tools/generate-tempest-plugins-list.py
+++ b/tools/generate-tempest-plugins-list.py
@@ -32,9 +32,9 @@
# List of projects having tempest plugin stale or unmaintained for a long time
# (6 months or more)
-# TODO(masayukig): Some of these can be removed from BLACKLIST in the future
-# when the patches are merged.
-BLACKLIST = [
+# TODO(masayukig): Some of these can be removed from NON_ACTIVE_LIST in the
+# future when the patches are merged.
+NON_ACTIVE_LIST = [
'x/gce-api', # It looks like gce-api doesn't support python3 yet.
'x/glare', # To avoid sanity-job failure
'x/group-based-policy', # It looks like this doesn't support python3 yet.
@@ -52,8 +52,11 @@
'x/tap-as-a-service', # To avoid sanity-job failure
'x/valet', # https://review.opendev.org/#/c/638339/
'x/kingbird', # https://bugs.launchpad.net/kingbird/+bug/1869722
- # vmware-nsx is blacklisted since https://review.opendev.org/#/c/736952
+ # vmware-nsx is excluded since https://review.opendev.org/#/c/736952
'x/vmware-nsx-tempest-plugin',
+ # mogan is unmaintained now, remove from the list when this is merged:
+ # https://review.opendev.org/c/x/mogan/+/767718
+ 'x/mogan',
]
url = 'https://review.opendev.org/projects/'
@@ -86,10 +89,10 @@
False
-if len(sys.argv) > 1 and sys.argv[1] == 'blacklist':
- for black_plugin in BLACKLIST:
- print(black_plugin)
- # We just need BLACKLIST when we use this `blacklist` option.
+if len(sys.argv) > 1 and sys.argv[1] == 'nonactivelist':
+ for non_active_plugin in NON_ACTIVE_LIST:
+ print(non_active_plugin)
+ # We just need NON_ACTIVE_LIST when we use this `nonactivelist` option.
# So, this exits here.
sys.exit()
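
The script keeps both output modes: with no argument it prints every project carrying a tempest plugin, and with the nonactivelist argument only the stale ones, which the shell wrapper below renders as a separate table. Consumed from Python, that looks roughly like this (run from a tempest checkout; the default mode queries review.opendev.org, so it needs network access):

    import subprocess

    def plugin_projects(*args):
        out = subprocess.check_output(
            ['python', 'tools/generate-tempest-plugins-list.py'] + list(args))
        return out.decode().split()

    non_active = set(plugin_projects('nonactivelist'))
    active = [p for p in plugin_projects() if p not in non_active]
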
diff --git a/tools/generate-tempest-plugins-list.sh b/tools/generate-tempest-plugins-list.sh
index 33675ed..4430bbf 100755
--- a/tools/generate-tempest-plugins-list.sh
+++ b/tools/generate-tempest-plugins-list.sh
@@ -81,17 +81,17 @@
printf "\n\n"
-# Print BLACKLIST
-if [[ -r doc/source/data/tempest-blacklisted-plugins-registry.header ]]; then
- cat doc/source/data/tempest-blacklisted-plugins-registry.header
+# Print NON_ACTIVE_LIST
+if [[ -r doc/source/data/tempest-non-active-plugins-registry.header ]]; then
+ cat doc/source/data/tempest-non-active-plugins-registry.header
fi
-blacklist=$(python tools/generate-tempest-plugins-list.py blacklist)
-name_col_len=$(echo "${blacklist}" | wc -L)
+nonactivelist=$(python tools/generate-tempest-plugins-list.py nonactivelist)
+name_col_len=$(echo "${nonactivelist}" | wc -L)
name_col_len=$(( name_col_len + 20 ))
printf "\n\n"
-print_plugin_table "${blacklist}"
+print_plugin_table "${nonactivelist}"
printf "\n\n"
diff --git a/tools/tempest-integrated-gate-compute-blacklist.txt b/tools/tempest-integrated-gate-compute-exclude-list.txt
similarity index 60%
rename from tools/tempest-integrated-gate-compute-blacklist.txt
rename to tools/tempest-integrated-gate-compute-exclude-list.txt
index 2290751..8805262 100644
--- a/tools/tempest-integrated-gate-compute-blacklist.txt
+++ b/tools/tempest-integrated-gate-compute-exclude-list.txt
@@ -11,9 +11,3 @@
tempest.scenario.test_object_storage_basic_ops.TestObjectStorageBasicOps.test_swift_basic_ops
tempest.scenario.test_object_storage_basic_ops.TestObjectStorageBasicOps.test_swift_acl_anonymous_download
tempest.scenario.test_volume_backup_restore.TestVolumeBackupRestore.test_volume_backup_restore
-
-# Skip test scenario when creating second image from instance
-# https://bugs.launchpad.net/tripleo/+bug/1881592
-# The test is most likely wrong and may fail if the first image is created quickly.
-# FIXME: Either fix the test so it won't race or consider if we should cover the scenario at all.
-tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON.test_create_second_image_when_first_image_is_being_saved
diff --git a/tools/tempest-integrated-gate-networking-blacklist.txt b/tools/tempest-integrated-gate-networking-exclude-list.txt
similarity index 100%
rename from tools/tempest-integrated-gate-networking-blacklist.txt
rename to tools/tempest-integrated-gate-networking-exclude-list.txt
diff --git a/tools/tempest-integrated-gate-object-storage-blacklist.txt b/tools/tempest-integrated-gate-object-storage-exclude-list.txt
similarity index 100%
rename from tools/tempest-integrated-gate-object-storage-blacklist.txt
rename to tools/tempest-integrated-gate-object-storage-exclude-list.txt
diff --git a/tools/tempest-integrated-gate-placement-blacklist.txt b/tools/tempest-integrated-gate-placement-exclude-list.txt
similarity index 100%
rename from tools/tempest-integrated-gate-placement-blacklist.txt
rename to tools/tempest-integrated-gate-placement-exclude-list.txt
diff --git a/tools/tempest-integrated-gate-storage-blacklist.txt b/tools/tempest-integrated-gate-storage-blacklist.txt
new file mode 120000
index 0000000..2d691f8
--- /dev/null
+++ b/tools/tempest-integrated-gate-storage-blacklist.txt
@@ -0,0 +1 @@
+tempest-integrated-gate-storage-exclude-list.txt
\ No newline at end of file
diff --git a/tools/tempest-integrated-gate-storage-blacklist.txt b/tools/tempest-integrated-gate-storage-exclude-list.txt
similarity index 100%
rename from tools/tempest-integrated-gate-storage-blacklist.txt
rename to tools/tempest-integrated-gate-storage-exclude-list.txt
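
The symlink added above appears to keep the old filename resolvable for anything that still points at it; a small sketch of what that looks like once the patch is applied (paths as in this repository):

    readlink tools/tempest-integrated-gate-storage-blacklist.txt
    # tempest-integrated-gate-storage-exclude-list.txt
    # old and new paths now refer to the same file
    diff tools/tempest-integrated-gate-storage-blacklist.txt \
         tools/tempest-integrated-gate-storage-exclude-list.txt && echo identical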
diff --git a/tools/tempest-plugin-sanity.sh b/tools/tempest-plugin-sanity.sh
index c983da9..106a9c6 100644
--- a/tools/tempest-plugin-sanity.sh
+++ b/tools/tempest-plugin-sanity.sh
@@ -44,7 +44,7 @@
# retrieve a list of projects having tempest plugins
PROJECT_LIST="$(python tools/generate-tempest-plugins-list.py)"
-BLACKLIST="$(python tools/generate-tempest-plugins-list.py blacklist)"
+NON_ACTIVE_LIST="$(python tools/generate-tempest-plugins-list.py nonactivelist)"
# Function to clone project using zuul-cloner or from git
function clone_project {
@@ -117,8 +117,8 @@
failed_plugin=''
# Perform sanity on all tempest plugin projects
for project in $PROJECT_LIST; do
- # Remove blacklisted tempest plugins
- if ! [[ `echo $BLACKLIST | grep -c $project ` -gt 0 ]]; then
+ # Remove non-active tempest plugins
+ if ! [[ `echo $NON_ACTIVE_LIST | grep -c $project ` -gt 0 ]]; then
plugin_sanity_check $project && passed_plugin+=", $project" || \
failed_plugin+="$project, " > $SANITY_DIR/$project.txt
fi
diff --git a/tox.ini b/tox.ini
index d8e059a..2315163 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,6 +1,6 @@
[tox]
envlist = pep8,py36,py38,bashate,pip-check-reqs
-minversion = 3.1.1
+minversion = 3.18.0
skipsdist = True
ignore_basepython_conflict = True
@@ -26,7 +26,7 @@
passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
usedevelop = True
install_command = pip install {opts} {packages}
-whitelist_externals = *
+allowlist_externals = *
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
@@ -108,7 +108,7 @@
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
-# FIXME: We can replace it with the `--black-regex` option to exclude tests now.
+# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
commands =
find . -type f -name "*.pyc" -delete
tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
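
A hedged sketch of what the FIXME above could look like once acted on, assuming the installed tempest/stestr accepts `--exclude-regex` (as the comment itself suggests):

    # drop the negative lookahead and exclude the slow tag explicitly
    tempest run --regex '^tempest\.api' --exclude-regex '\[.*\bslow\b.*\]'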
@@ -132,11 +132,11 @@
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag and
-# tests listed in blacklist file:
+# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --blacklist_file ./tools/tempest-integrated-gate-networking-blacklist.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --blacklist_file ./tools/tempest-integrated-gate-networking-blacklist.txt {posargs}
+ tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-networking-exclude-list.txt {posargs}
[testenv:integrated-compute]
envdir = .tox/tempest
@@ -145,11 +145,11 @@
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag and
-# tests listed in blacklist file:
+# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --blacklist_file ./tools/tempest-integrated-gate-compute-blacklist.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --blacklist_file ./tools/tempest-integrated-gate-compute-blacklist.txt {posargs}
+ tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
[testenv:integrated-placement]
envdir = .tox/tempest
@@ -158,11 +158,11 @@
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag and
-# tests listed in blacklist file:
+# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --blacklist_file ./tools/tempest-integrated-gate-placement-blacklist.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --blacklist_file ./tools/tempest-integrated-gate-placement-blacklist.txt {posargs}
+ tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-placement-exclude-list.txt {posargs}
[testenv:integrated-storage]
envdir = .tox/tempest
@@ -171,11 +171,11 @@
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag and
-# tests listed in blacklist file:
+# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --blacklist_file ./tools/tempest-integrated-gate-storage-blacklist.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --blacklist_file ./tools/tempest-integrated-gate-storage-blacklist.txt {posargs}
+ tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-storage-exclude-list.txt {posargs}
[testenv:integrated-object-storage]
envdir = .tox/tempest
@@ -184,11 +184,11 @@
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag and
-# tests listed in blacklist file:
+# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --blacklist_file ./tools/tempest-integrated-gate-object-storage-blacklist.txt {posargs}
- tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --blacklist_file ./tools/tempest-integrated-gate-object-storage-blacklist.txt {posargs}
+ tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
+ tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' --exclude-list ./tools/tempest-integrated-gate-object-storage-exclude-list.txt {posargs}
[testenv:full-serial]
envdir = .tox/tempest
@@ -198,7 +198,7 @@
deps = {[tempestenv]deps}
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
-# FIXME: We can replace it with the `--black-regex` option to exclude tests now.
+# FIXME: We can replace it with the `--exclude-regex` option to exclude tests now.
commands =
find . -type f -name "*.pyc" -delete
tempest run --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
@@ -290,12 +290,12 @@
sphinx-apidoc -f -o doc/source/tests/volume tempest/api/volume
rm -rf doc/build
sphinx-build -W -b html doc/source doc/build/html
-whitelist_externals =
+allowlist_externals =
rm
[testenv:pdf-docs]
deps = {[testenv:docs]deps}
-whitelist_externals =
+allowlist_externals =
rm
make
commands =
@@ -369,7 +369,7 @@
rm -rf releasenotes/build
sphinx-build -a -E -W -d releasenotes/build/doctrees \
-b html releasenotes/source releasenotes/build/html
-whitelist_externals = rm
+allowlist_externals = rm
[testenv:bashate]
# if you want to test out some changes you have made to bashate
@@ -377,7 +377,7 @@
# modified bashate tree
deps =
{env:BASHATE_INSTALL_PATH:bashate}
-whitelist_externals = bash
+allowlist_externals = bash
commands = bash -c "find {toxinidir}/tools \
-not \( -type d -name .?\* -prune \) \
-type f \
@@ -406,6 +406,6 @@
[testenv:plugin-sanity-check]
# perform tempest plugin sanity
-whitelist_externals = bash
+allowlist_externals = bash
commands =
bash tools/tempest-plugin-sanity.sh
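
The `whitelist_externals` -> `allowlist_externals` rename throughout this file is presumably what drives the `minversion` bump above, since the new spelling is only understood by tox 3.18.0 and later; a quick local check:

    tox --version   # should report >= 3.18.0 for allowlist_externals to be recognized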
diff --git a/zuul.d/integrated-gate.yaml b/zuul.d/integrated-gate.yaml
index 4c1ee5a..52ccd3e 100644
--- a/zuul.d/integrated-gate.yaml
+++ b/zuul.d/integrated-gate.yaml
@@ -11,8 +11,10 @@
vars:
tox_envlist: all
tempest_test_regex: tempest
- devstack_localrc:
- ENABLE_FILE_INJECTION: true
+ # TODO(gmann): Enable File injection tests once nova bug is fixed
+ # https://bugs.launchpad.net/nova/+bug/1882421
+ # devstack_localrc:
+ # ENABLE_FILE_INJECTION: true
- job:
name: tempest-ipv6-only
@@ -69,6 +71,8 @@
      Former names for this job were:
* legacy-tempest-dsvm-py35
* gate-tempest-dsvm-py35
+ required-projects:
+ - openstack/horizon
vars:
tox_envlist: full
devstack_localrc:
@@ -89,6 +93,8 @@
network-feature-enabled:
qos_placement_physnet: public
devstack_services:
+      # Enable horizon so that we can run horizon tests.
+ horizon: true
s-account: false
s-container: false
s-object: false
@@ -290,6 +296,8 @@
* legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
* tempest-scenario-multinode-lvm-multibackend
timeout: 10800
+    # This job runs from stable/stein onwards.
+ branches: ^(?!stable/(ocata|pike|queens|rocky)).*$
vars:
tox_envlist: slow-serial
devstack_localrc:
@@ -310,6 +318,46 @@
ENABLE_VOLUME_MULTIATTACH: true
- job:
+ name: tempest-slow
+ parent: tempest-multinode-full
+ description: |
+ This multinode integration job will run all the tests tagged as slow.
+    It enables the lvm multibackend setup to cover a few scenario tests.
+ This job will run only slow tests (API or Scenario) serially.
+
+ Former names for this job were:
+ * legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
+ * tempest-scenario-multinode-lvm-multibackend
+ timeout: 10800
+ branches:
+ - stable/pike
+ - stable/queens
+ - stable/rocky
+ vars:
+ tox_envlist: slow-serial
+ devstack_localrc:
+ CINDER_ENABLED_BACKENDS: lvm:lvmdriver-1,lvm:lvmdriver-2
+ ENABLE_VOLUME_MULTIATTACH: true
+ # to avoid https://bugs.launchpad.net/neutron/+bug/1914037
+ # as we couldn't backport the fix to rocky and older releases
+ IPV6_PUBLIC_RANGE: 2001:db8:0:10::/64
+ IPV6_PUBLIC_NETWORK_GATEWAY: 2001:db8:0:10::2
+ IPV6_ROUTER_GW_IP: 2001:db8:0:10::1
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ neutron-placement: true
+ neutron-qos: true
+ tempest_concurrency: 2
+ group-vars:
+ # NOTE(mriedem): The ENABLE_VOLUME_MULTIATTACH variable is used on both
+ # the controller and subnode prior to Rocky so we have to make sure the
+ # variable is set in both locations.
+ subnode:
+ devstack_localrc:
+ ENABLE_VOLUME_MULTIATTACH: true
+
+- job:
name: tempest-slow-py3
parent: tempest-slow
vars:
@@ -348,7 +396,9 @@
Former name for this job was legacy-tempest-dsvm-neutron-pg-full.
vars:
devstack_localrc:
- ENABLE_FILE_INJECTION: true
+ # TODO(gmann): Enable File injection tests once nova bug is fixed
+ # https://bugs.launchpad.net/nova/+bug/1882421
+ # ENABLE_FILE_INJECTION: true
DATABASE_TYPE: postgresql
- project-template:
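
The two `tempest-slow` definitions above split coverage by branch: the pre-existing job now carries a negative-lookahead matcher so it only runs from stable/stein onwards, while the newly added copy pins itself to pike/queens/rocky with the IPv6 workaround. A small sketch of how that matcher behaves, assuming GNU grep built with PCRE support:

    printf 'master\nstable/stein\nstable/rocky\n' \
        | grep -P '^(?!stable/(ocata|pike|queens|rocky)).*$'
    # master
    # stable/stein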
diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml
index 5dcd27f..d5b2787 100644
--- a/zuul.d/project.yaml
+++ b/zuul.d/project.yaml
@@ -8,17 +8,6 @@
- release-notes-jobs-python3
check:
jobs:
- - devstack-tempest:
- files:
- - ^playbooks/
- - ^roles/
- - ^.zuul.yaml$
- - devstack-tempest-ipv6:
- voting: false
- files:
- - ^playbooks/
- - ^roles/
- - ^.zuul.yaml$
- tempest-full-parallel:
# Define list of irrelevant files to use everywhere else
irrelevant-files: &tempest-irrelevant-files