Merge "Make scenario snapshot tests work with ephemeral|swap"
diff --git a/HACKING.rst b/HACKING.rst
index caf954b..23bc61b 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -184,7 +184,7 @@
the ``slow`` attribute is leveraged to run slow tests on a selective basis,
to keep total `Zuul`_ job runtime down to a reasonable time frame.
-.. _Zuul: https://docs.openstack.org/infra/zuul/
+.. _Zuul: https://zuul-ci.org/docs/zuul/latest/
Smoke Attribute
^^^^^^^^^^^^^^^
@@ -488,7 +488,7 @@
Otherwise the bug fix won't be able to land in the project.
Handily, `Zuul's cross-repository dependencies
-<https://docs.openstack.org/infra/zuul/user/gating.html#cross-project-dependencies>`_.
+<https://zuul-ci.org/docs/zuul/latest/gating.html#cross-project-dependencies>`_
can be leveraged to do without step 2 and to have steps 3 and 4 happen
"atomically". To do that, make the patch written in step 1 to depend (refer to
Zuul's documentation above) on the patch written in step 4. The commit message
diff --git a/README.rst b/README.rst
index 3cde2bf..7880357 100644
--- a/README.rst
+++ b/README.rst
@@ -3,7 +3,6 @@
========================
.. image:: https://governance.openstack.org/tc/badges/tempest.svg
- :target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
diff --git a/doc/source/contributor/contributing.rst b/doc/source/contributor/contributing.rst
index 139f0b7..81a1874 100644
--- a/doc/source/contributor/contributing.rst
+++ b/doc/source/contributor/contributing.rst
@@ -14,8 +14,9 @@
Communication
~~~~~~~~~~~~~
* IRC channel ``#openstack-qa`` at OFTC
-* Mailing list (prefix subjects with ``[qa]`` for faster responses)
- http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
+* `Mailing list <https://lists.openstack.org/mailman3/lists/openstack-discuss.lists.openstack.org/>`_
+ (prefix subjects with ``[qa]`` for faster responses)
+
Contacting the Core Team
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -26,25 +27,29 @@
~~~~~~~~~~~~~~~~~~~~
If you want to propose a new feature please read `Feature Proposal Process`_
Tempest features are tracked on `Launchpad BP <https://blueprints.launchpad.net/tempest>`_.
+It also helps to bring the feature up during the next PTG and contact
+`the current PTL of the QA project <https://governance.openstack.org/tc/reference/projects/>`_.
+Information about the PTG is always posted on `the Mailing list
+<https://lists.openstack.org/mailman3/lists/openstack-discuss.lists.openstack.org/>`_.
Task Tracking
~~~~~~~~~~~~~
We track our tasks in `Launchpad <https://bugs.launchpad.net/tempest>`_.
-If you're looking for some smaller, easier work item to pick up and get started
+If you're looking for some smaller, easier work items to pick up and get started
on, search for the 'low-hanging-fruit' tag.
Reporting a Bug
~~~~~~~~~~~~~~~
-You found an issue and want to make sure we are aware of it? You can do so on
-`Launchpad <https://bugs.launchpad.net/tempest/+filebug>`__.
+Have you found an issue and want to make sure we are aware of it? You can do so
+on `Launchpad <https://bugs.launchpad.net/tempest/+filebug>`__.
More info about Launchpad usage can be found on `OpenStack docs page
<https://docs.openstack.org/contributors/common/task-tracking.html#launchpad>`_
Getting Your Patch Merged
~~~~~~~~~~~~~~~~~~~~~~~~~
-All changes proposed to the Tempest require single ``Code-Review +2`` votes from
-Tempest core reviewers by giving ``Workflow +1`` vote. More detailed guidelines
+All changes proposed to Tempest require a single ``Code-Review +2`` vote
+from a Tempest core followed by a ``Workflow +1`` vote. More detailed guidelines
for reviewers are available at :doc:`../REVIEWING`.
Project Team Lead Duties
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 2f29cf2..0340f8d 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -83,6 +83,7 @@
:maxdepth: 2
HACKING
+ serial_tests
REVIEWING
microversion_testing
test_removal
diff --git a/doc/source/library/api_microversion_testing.rst b/doc/source/library/api_microversion_testing.rst
index 8be924d..e979683 100644
--- a/doc/source/library/api_microversion_testing.rst
+++ b/doc/source/library/api_microversion_testing.rst
@@ -9,9 +9,9 @@
Many of the OpenStack components have implemented API microversions.
It is important to test those microversions in Tempest or external plugins.
-Tempest now provides stable interfaces to support to test the API microversions.
+Tempest now provides stable interfaces to support testing the API microversions.
Based on the microversion range coming from the combination of both configuration
-and each test case, APIs request will be made with selected microversion.
+and each test case, API requests will be made with the selected microversion.
This document explains the interfaces needed for microversion testing.
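The range-intersection behaviour described above can be sketched roughly as follows. This is an illustrative approximation, not Tempest's actual implementation; all function and variable names are invented:

```python
# Illustrative sketch of microversion selection: the effective request
# version must fall inside both the configured range and the range the
# test case declares. Names here are invented, not Tempest's real API.

def _key(v):
    # Convert "2.10" -> (2, 10) so versions compare numerically.
    major, minor = v.split(".")
    return (int(major), int(minor))

def select_microversion(cfg_min, cfg_max, test_min, test_max):
    """Return the highest microversion in the intersection of the
    configured range and the test's declared range, or None."""
    low = max(_key(cfg_min), _key(test_min))
    high = min(_key(cfg_max), _key(test_max))
    if low > high:
        return None  # no overlap: the test should be skipped
    return "%d.%d" % high

print(select_microversion("2.1", "2.96", "2.10", "2.99"))  # -> 2.96
```

When the ranges do not overlap (for example, the cloud caps the API below what the test requires), the sketch returns ``None``, mirroring the skip behaviour described in this document.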
diff --git a/doc/source/library/clients.rst b/doc/source/library/clients.rst
index 0f4ba4c..fe9f4ca 100644
--- a/doc/source/library/clients.rst
+++ b/doc/source/library/clients.rst
@@ -6,11 +6,11 @@
Tests make requests against APIs using service clients. Service clients are
specializations of the ``RestClient`` class. The service clients that cover the
APIs exposed by a service should be grouped in a service clients module.
-A service clients module is python module where all service clients are
+A service clients module is a Python module where all service clients are
defined. If major API versions are available, submodules should be defined,
one for each version.
-The ``ClientsFactory`` class helps initializing all clients of a specific
+The ``ClientsFactory`` class helps to initialize all clients of a specific
service client module from a set of shared parameters.
The ``ServiceClients`` class provides a convenient way to get access to all
diff --git a/doc/source/library/credential_providers.rst b/doc/source/library/credential_providers.rst
index d25f85c..8c9a16a 100644
--- a/doc/source/library/credential_providers.rst
+++ b/doc/source/library/credential_providers.rst
@@ -4,12 +4,12 @@
====================
These library interfaces are used to deal with allocating credentials on demand
-either dynamically by calling keystone to allocate new credentials, or from
+either dynamically by calling Keystone to allocate new credentials, or from
a list of preprovisioned credentials. These 2 modules are implementations of
the same abstract credential providers class and can be used interchangeably.
However, each implementation has some additional parameters that are used to
influence the behavior of the modules. The API reference at the bottom of this
-doc shows the interface definitions for both modules, however that may be a bit
+doc shows the interface definitions for both modules; however, that may be a bit
opaque. You can see some examples of how to leverage this interface below.
Initialization Example
@@ -30,7 +30,7 @@
# If a test requires a new account to work, it can have it via forcing
# dynamic credentials. A new account will be produced only for that test.
# In case admin credentials are not available for the account creation,
- # the test should be skipped else it would fail.
+ # the test should be skipped, else it will fail.
identity_version = identity_version or CONF.identity.auth_version
if CONF.auth.use_dynamic_credentials or force_tenant_isolation:
admin_creds = get_configured_admin_credentials(
@@ -81,12 +81,12 @@
Once you have a credential provider object created the access patterns for
allocating and removing credentials are the same across both the dynamic
and preprovisioned credentials. These are defined in the abstract
-CredentialProvider class. At a high level the credentials provider enables
-you to get 3 basic types of credentials at once (per object): a primary, alt,
+CredentialProvider class. At a high level, the credentials provider enables
+you to get 3 basic types of credentials at once (per object): primary, alt,
and admin. You're also able to allocate a credential by role. These credentials
-are tracked by the provider object and delete must manually be called otherwise
-the created resources will not be deleted (or returned to the pool in the case
-of preprovisioned creds)
+are tracked by the provider object, and delete must be called manually;
+otherwise, the created resources will not be deleted (or returned to the pool
+in the case of preprovisioned creds).
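As a rough illustration of the allocate-then-delete pattern described above, here is a toy stand-in, not the real ``CredentialProvider`` class; all names are invented:

```python
# Toy stand-in illustrating the access pattern: credentials of three
# basic types are allocated per provider object, tracked internally,
# and must be released explicitly or their resources linger.

class ToyCredentialProvider:
    def __init__(self):
        self._allocated = {}

    def _get(self, kind):
        # Allocate lazily and track, mirroring primary/alt/admin reuse.
        if kind not in self._allocated:
            self._allocated[kind] = {"type": kind,
                                     "username": "user-%s" % kind}
        return self._allocated[kind]

    def get_primary_creds(self):
        return self._get("primary")

    def get_alt_creds(self):
        return self._get("alt")

    def get_admin_creds(self):
        return self._get("admin")

    def clear_creds(self):
        # The explicit "delete" step: without it, resources are not
        # cleaned up (or returned to the pool for preprovisioned creds).
        self._allocated.clear()

provider = ToyCredentialProvider()
primary = provider.get_primary_creds()
assert primary is provider.get_primary_creds()  # same creds reused
provider.clear_creds()
```

The key point mirrored here is that repeated calls return the same tracked credentials, and nothing is released until ``clear_creds`` (the "delete" step) is invoked.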
Examples
''''''''
diff --git a/doc/source/microversion_testing.rst b/doc/source/microversion_testing.rst
index 20ace9e..33e75ff 100644
--- a/doc/source/microversion_testing.rst
+++ b/doc/source/microversion_testing.rst
@@ -454,6 +454,10 @@
.. _2.86: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id79
+ * `2.96`_
+
+ .. _2.96: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-2024-1-caracal-and-2024-2-dalmatian
+
* Volume
* `3.3`_
diff --git a/doc/source/plugins/plugin.rst b/doc/source/plugins/plugin.rst
index 0771318..31aa134 100644
--- a/doc/source/plugins/plugin.rst
+++ b/doc/source/plugins/plugin.rst
@@ -80,30 +80,30 @@
Since all that's required for a plugin to be detected by Tempest is a valid
setuptools entry point in the proper namespace there is no difference from the
-Tempest perspective on either creating a separate python package to
-house the plugin or adding the code to an existing python project. However,
+Tempest perspective on either creating a separate Python package to
+house the plugin or adding the code to an existing Python project. However,
there are tradeoffs to consider when deciding which approach to take when
creating a new plugin.
-If you create a separate python project for your plugin this makes a lot of
+If you create a separate Python project for your plugin this makes a lot of
things much easier. Firstly it makes packaging and versioning much simpler, you
can easily decouple the requirements for the plugin from the requirements for
the other project. It lets you version the plugin independently and maintain a
single version of the test code across project release boundaries (see the
`Branchless Tempest Spec`_ for more details on this). It also greatly
simplifies the install time story for external users. Instead of having to
-install the right version of a project in the same python namespace as Tempest
+install the right version of a project in the same Python namespace as Tempest
they simply need to pip install the plugin in that namespace. It also means
that users don't have to worry about inadvertently installing a Tempest plugin
when they install another package.
.. _Branchless Tempest Spec: https://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/branchless-tempest.html
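For reference, the detection mechanism mentioned above is just a setuptools entry point; a minimal ``setup.cfg`` fragment might look like this (the package, module, and class names are illustrative, and the namespace is assumed to be ``tempest.test_plugins``):

```ini
[entry_points]
tempest.test_plugins =
    my_plugin = my_plugin.plugin:MyPlugin
```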
-The sole advantage to integrating a plugin into an existing python project is
+The sole advantage of integrating a plugin into an existing Python project is
that it enables you to land code changes at the same time you land test changes
in the plugin. This reduces some of the burden on contributors by not having
-to land 2 changes to add a new API feature and then test it and doing it as a
-single combined commit.
+to land 2 changes, one to add a new API feature and another to test it;
+instead it can be done as a single combined commit.
Plugin Class
@@ -122,7 +122,7 @@
class MyPlugin(plugins.TempestPlugin):
Then you need to ensure you locally define all of the mandatory methods in the
-abstract class, you can refer to the api doc below for a reference of what that
+abstract class; you can refer to the API doc below for a reference of what that
entails.
Abstract Plugin Class
@@ -135,7 +135,7 @@
================
While there are no hard and fast rules for the structure of a plugin, there are
basically no constraints on what the plugin looks like as long as the 2 steps
-above are done. However, there are some recommended patterns to follow to make
+above are done. However, there are some recommended patterns to follow to make
it easy for people to contribute and work with your plugin. For example, if you
create a directory structure with something like::
@@ -214,7 +214,7 @@
Parameters:
* **name**: Name of the attribute used to access the ``ClientsFactory`` from
- the ``ServiceClients`` instance. See example below.
+ the ``ServiceClients`` instance. See the example below.
* **service_version**: Tempest enforces a single implementation for each
service client. Available service clients are held in a ``ClientsRegistry``
singleton, and registered with ``service_version``, which means that
@@ -229,7 +229,7 @@
.. code-block:: python
- # my_creds is instance of tempest.lib.auth.Credentials
+ # my_creds is an instance of tempest.lib.auth.Credentials
# identity_uri is v2 or v3 depending on the configuration
from tempest.lib.services import clients
@@ -241,13 +241,13 @@
constraints on the structure of the configuration options exposed by the
plugin.
-First ``service_version`` should be in the format `service_config[.version]`.
+Firstly, ``service_version`` should be in the format `service_config[.version]`.
The `.version` part is optional, and should only be used if there are multiple
versions of the same API available. The `service_config` must match the name of
a configuration options group defined by the plugin. Different versions of one
API must share the same configuration group.
-Second the configuration options group `service_config` must contain the
+Secondly, the configuration options group `service_config` must contain the
following options:
* `catalog_type`: corresponds to `service` in the catalog
@@ -257,10 +257,10 @@
as they do not necessarily apply to all service clients.
* `region`: default to identity.region
-* `build_timeout` : default to compute.build_timeout
+* `build_timeout`: default to compute.build_timeout
* `build_interval`: default to compute.build_interval
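Putting these options together, the plugin's configuration group might look like this in ``tempest.conf``; the group name and values below are illustrative only:

```ini
[my_service]
catalog_type = my-service
region = RegionOne
build_timeout = 300
build_interval = 1
```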
-Third the service client classes should inherit from ``RestClient``, should
+Thirdly, the service client classes should inherit from ``RestClient``, should
accept generic keyword arguments, and should pass those arguments to the
``__init__`` method of ``RestClient``. Extra arguments can be added. For
instance:
@@ -276,7 +276,7 @@
self.my_arg = my_arg
self.my_args2 = my_arg
-Finally the service client should be structured in a python module, so that all
+Finally, the service client should be structured in a Python module, so that all
service client classes are importable from it. Each major API version should
have its own module.
@@ -299,7 +299,7 @@
__all__ = ['API1Client', 'API2Client']
The following folder and module structure is recommended for multiple major
-API version::
+API versions::
plugin_dir/
services/
@@ -325,14 +325,14 @@
=============
Tempest will automatically discover any installed plugins when it is run. So by
-just installing the python packages which contain your plugin you'll be using
+just installing the Python packages that contain your plugin, you'll be using
them with Tempest; nothing else is really required.
-However, you should take care when installing plugins. By their very nature
+However, you should take care when installing plugins. By their very nature,
there are no guarantees when running Tempest with plugins enabled about the
quality of the plugin. Additionally, while there is no limitation on running
with multiple plugins, it's worth noting that poorly written plugins might not
+properly isolate their tests, which could cause unexpected cross-interactions
+properly isolate their tests which could cause unexpected cross-interactions
between plugins.
Notes for using plugins with virtualenvs
diff --git a/doc/source/serial_tests.rst b/doc/source/serial_tests.rst
new file mode 120000
index 0000000..6709115
--- /dev/null
+++ b/doc/source/serial_tests.rst
@@ -0,0 +1 @@
+../../tempest/serial_tests/README.rst
\ No newline at end of file
diff --git a/doc/source/stable_branch_support_policy.rst b/doc/source/stable_branch_support_policy.rst
index 9c2d1ed..cea632b 100644
--- a/doc/source/stable_branch_support_policy.rst
+++ b/doc/source/stable_branch_support_policy.rst
@@ -23,7 +23,7 @@
switch to running Tempest from a tag with support for the branch, or exclude
a newly introduced test (if that is the cause of the issue). Tempest will not
be creating stable branches to support *Extended Maintenance* phase branches, as
-the burden is on the *Extended Maintenance* phase branche maintainers, not the Tempest
+the burden is on the *Extended Maintenance* phase branch maintainers, not the Tempest
project, to support that branch.
.. _Extended Maintenance policy: https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
diff --git a/doc/source/supported_version.rst b/doc/source/supported_version.rst
index 89f0f90..0adfebd 100644
--- a/doc/source/supported_version.rst
+++ b/doc/source/supported_version.rst
@@ -9,10 +9,9 @@
Tempest master supports the below OpenStack Releases:
+* 2024.2
+* 2024.1
* 2023.2
-* 2023.1
-* Zed
-* Yoga
For older OpenStack Release:
@@ -33,7 +32,7 @@
Tempest master supports the below python versions:
-* Python 3.8
* Python 3.9
* Python 3.10
* Python 3.11
+* Python 3.12
diff --git a/doc/source/tests/modules.rst b/doc/source/tests/modules.rst
index 026a7a5..697b011 100644
--- a/doc/source/tests/modules.rst
+++ b/doc/source/tests/modules.rst
@@ -19,3 +19,10 @@
network/modules
object_storage/modules
volume/modules
+
+Serial Tests
+------------
+.. toctree::
+ :maxdepth: 2
+
+ serial_tests/modules
diff --git a/playbooks/devstack-tempest-ipv6.yaml b/playbooks/devstack-tempest-ipv6.yaml
index 568077e..89eec6d 100644
--- a/playbooks/devstack-tempest-ipv6.yaml
+++ b/playbooks/devstack-tempest-ipv6.yaml
@@ -17,6 +17,16 @@
# fail early if anything missing the IPv6 settings or deployments.
- devstack-ipv6-only-deployments-verification
tasks:
+ - name: Run tempest cleanup init-saved-state
+ include_role:
+ name: tempest-cleanup
+ vars:
+ init_saved_state: true
+ when: (run_tempest_dry_cleanup is defined and run_tempest_dry_cleanup | bool) or
+ (run_tempest_cleanup is defined and run_tempest_cleanup | bool) or
+ (run_tempest_fail_if_leaked_resources is defined and run_tempest_fail_if_leaked_resources | bool) or
+ (run_tempest_cleanup_prefix is defined and run_tempest_cleanup_prefix | bool)
+
- name: Run Tempest version <= 26.0.0
include_role:
name: run-tempest-26
@@ -30,3 +40,15 @@
when:
- zuul.branch is defined
- zuul.branch not in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein"]
+
+ - name: Run tempest cleanup dry-run
+ include_role:
+ name: tempest-cleanup
+ vars:
+ dry_run: true
+ when: run_tempest_dry_cleanup is defined and run_tempest_dry_cleanup | bool
+
+ - name: Run tempest cleanup
+ include_role:
+ name: tempest-cleanup
+ when: run_tempest_cleanup is defined and run_tempest_cleanup | bool
diff --git a/playbooks/devstack-tempest.yaml b/playbooks/devstack-tempest.yaml
index 269999c..5fb1afc 100644
--- a/playbooks/devstack-tempest.yaml
+++ b/playbooks/devstack-tempest.yaml
@@ -20,6 +20,11 @@
include_role:
name: acl-devstack-files
+ - name: Set source and destination host
+ include_role:
+ name: set-src-dest-host
+ when: tempest_set_src_dest_host is defined and tempest_set_src_dest_host | bool
+
- name: Run tempest cleanup init-saved-state
include_role:
name: tempest-cleanup
@@ -27,7 +32,8 @@
init_saved_state: true
when: (run_tempest_dry_cleanup is defined and run_tempest_dry_cleanup | bool) or
(run_tempest_cleanup is defined and run_tempest_cleanup | bool) or
- (run_tempest_fail_if_leaked_resources is defined and run_tempest_fail_if_leaked_resources | bool)
+ (run_tempest_fail_if_leaked_resources is defined and run_tempest_fail_if_leaked_resources | bool) or
+ (run_tempest_cleanup_prefix is defined and run_tempest_cleanup_prefix | bool)
- name: Run Tempest version <= 26.0.0
include_role:
diff --git a/releasenotes/notes/2024.2-intermediate-release-2a9f305375fcb462.yaml b/releasenotes/notes/2024.2-intermediate-release-2a9f305375fcb462.yaml
new file mode 100644
index 0000000..11d3a4f
--- /dev/null
+++ b/releasenotes/notes/2024.2-intermediate-release-2a9f305375fcb462.yaml
@@ -0,0 +1,5 @@
+---
+prelude: >
+ This is an intermediate release during the 2024.2 Dalmatian development
+ cycle to make new functionality available to plugins and other consumers.
+
diff --git a/releasenotes/notes/Add-http_qcow2_image-config-option-a9dca410897c3044.yaml b/releasenotes/notes/Add-http_qcow2_image-config-option-a9dca410897c3044.yaml
new file mode 100644
index 0000000..c1b0033
--- /dev/null
+++ b/releasenotes/notes/Add-http_qcow2_image-config-option-a9dca410897c3044.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+    Added a new config option in the `image` section, `http_qcow2_image`,
+    which specifies a `qcow2` format image to download from the external
+    source and use for image conversion in glance tests. By default it
+    will download
+    `http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img`
diff --git a/releasenotes/notes/Add-scenario-config-opt-target-dir-5a969b64be1dc718.yaml b/releasenotes/notes/Add-scenario-config-opt-target-dir-5a969b64be1dc718.yaml
new file mode 100644
index 0000000..3adacfc
--- /dev/null
+++ b/releasenotes/notes/Add-scenario-config-opt-target-dir-5a969b64be1dc718.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+    Added a new config option `[scenario]/target_dir` which allows
+    users to specify the location where timestamp files will be
+    written. The default value is /tmp which, however, cannot be
+    expected to persist across reboots of an instance.
diff --git a/releasenotes/notes/Allow-tempest-cleanup-delete-resources-based-on-prefix-96d9562f1f30e979.yaml b/releasenotes/notes/Allow-tempest-cleanup-delete-resources-based-on-prefix-96d9562f1f30e979.yaml
new file mode 100644
index 0000000..872f664
--- /dev/null
+++ b/releasenotes/notes/Allow-tempest-cleanup-delete-resources-based-on-prefix-96d9562f1f30e979.yaml
@@ -0,0 +1,10 @@
+---
+features:
+ - |
+    We add a new argument, ``--prefix``, to the ``tempest cleanup`` tool that
+    will allow users to delete only resources that match the prefix. When this
+    option is used, the ``saved_state.json`` file is not needed (no need to
+    run with ``--init-saved-state`` first). If there is one, it will be
+    ignored and the cleanup will be done based on the given prefix only.
+    Note that some resources are not named, thus they will not be deleted
+    when filtering based on the prefix.
diff --git a/releasenotes/notes/add-delete-image-from-specific-store-api-84c0ecd50724f6de.yaml b/releasenotes/notes/add-delete-image-from-specific-store-api-84c0ecd50724f6de.yaml
new file mode 100644
index 0000000..a8a0b70
--- /dev/null
+++ b/releasenotes/notes/add-delete-image-from-specific-store-api-84c0ecd50724f6de.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+    Add the delete image from specific store API to the image V2 client.
diff --git a/releasenotes/notes/add-enable-volume-image-dep-tests-option-150b929d18da233f.yaml b/releasenotes/notes/add-enable-volume-image-dep-tests-option-150b929d18da233f.yaml
new file mode 100644
index 0000000..e78201e
--- /dev/null
+++ b/releasenotes/notes/add-enable-volume-image-dep-tests-option-150b929d18da233f.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+    Add a new config option 'enable_volume_image_dep_tests' in the
+    [volume-feature-enabled] section which should be used in
+ image<->volume<->snapshot dependency tests.
diff --git a/releasenotes/notes/add-manager-creds-49acd9192110c3e3.yaml b/releasenotes/notes/add-manager-creds-49acd9192110c3e3.yaml
new file mode 100644
index 0000000..a5d7984
--- /dev/null
+++ b/releasenotes/notes/add-manager-creds-49acd9192110c3e3.yaml
@@ -0,0 +1,7 @@
+---
+features:
+ - |
+ Add support for project manager and domain manager personas by adding
+ ``get_project_manager_creds`` and ``get_domain_manager_creds`` to
+ the ``DynamicCredentialProvider`` and ``PreProvisionedCredentialProvider``
+ classes of the common library.
diff --git a/releasenotes/notes/add-option-to-specify-source-host.yaml b/releasenotes/notes/add-option-to-specify-source-host.yaml
new file mode 100644
index 0000000..f8df40a
--- /dev/null
+++ b/releasenotes/notes/add-option-to-specify-source-host.yaml
@@ -0,0 +1,5 @@
+---
+features:
+  - Add new config options migration_source_host and migration_dest_host
+    in the compute section; if set, the source or destination host is
+    taken from these options, otherwise a host is chosen automatically.
diff --git a/releasenotes/notes/add-placement-resource-provider-traits-api-calls-9f4b0455afec9afb.yaml b/releasenotes/notes/add-placement-resource-provider-traits-api-calls-9f4b0455afec9afb.yaml
new file mode 100644
index 0000000..1d1811c
--- /dev/null
+++ b/releasenotes/notes/add-placement-resource-provider-traits-api-calls-9f4b0455afec9afb.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+ Adds API calls for traits in ResourceProvidersClient.
diff --git a/releasenotes/notes/add-placement-traits-api-calls-087061f5455f0b12.yaml b/releasenotes/notes/add-placement-traits-api-calls-087061f5455f0b12.yaml
new file mode 100644
index 0000000..77d0b38
--- /dev/null
+++ b/releasenotes/notes/add-placement-traits-api-calls-087061f5455f0b12.yaml
@@ -0,0 +1,4 @@
+---
+features:
+ - |
+ Adds API calls for traits in PlacementClient.
diff --git a/releasenotes/notes/add-target-host-filter-94803e93b701d052.yaml b/releasenotes/notes/add-target-host-filter-94803e93b701d052.yaml
new file mode 100644
index 0000000..83a3728
--- /dev/null
+++ b/releasenotes/notes/add-target-host-filter-94803e93b701d052.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add a new config option `[compute]/target_hosts_to_avoid` which will
+ filter out any hypervisor candidates with a hostname that matches the
+ provided pattern when determining target hosts for migration.
diff --git a/releasenotes/notes/add-volume_types_for_data_volume-config-option.yaml b/releasenotes/notes/add-volume_types_for_data_volume-config-option.yaml
new file mode 100644
index 0000000..30a2278
--- /dev/null
+++ b/releasenotes/notes/add-volume_types_for_data_volume-config-option.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+ A new config option in the ``volume_feature_enabled`` section,
+ ``volume_types_for_data_volume``, is added to allow the user to specify
+ which volume types can be used for data volumes in a new test
+ ``test_instances_with_cinder_volumes_on_all_compute_nodes``. By default,
+ this option is set to None.
diff --git a/releasenotes/notes/change-volume-catalog_type-default-fbcb2be6ebc42818.yaml b/releasenotes/notes/change-volume-catalog_type-default-fbcb2be6ebc42818.yaml
new file mode 100644
index 0000000..a507bd7
--- /dev/null
+++ b/releasenotes/notes/change-volume-catalog_type-default-fbcb2be6ebc42818.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+ - |
+ The default for ``[volume] catalog_type``, which is used to determine the
+ service type to use to identify the block storage service in the service
+ catalog, has changed from ``volumev3`` to ``block-storage``.
diff --git a/releasenotes/notes/cleanup-attr-decorator-alias-78ce21eb20d87e01.yaml b/releasenotes/notes/cleanup-attr-decorator-alias-78ce21eb20d87e01.yaml
new file mode 100644
index 0000000..43091e1
--- /dev/null
+++ b/releasenotes/notes/cleanup-attr-decorator-alias-78ce21eb20d87e01.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ The ``attr`` decorator is no longer available in the ``tempest.test``
+ module. Use the ``tempest.lib.decorators`` module instead.
diff --git a/releasenotes/notes/cleanup-container-client-interface-6a9fe49072cfdb17.yaml b/releasenotes/notes/cleanup-container-client-interface-6a9fe49072cfdb17.yaml
new file mode 100644
index 0000000..48c1717
--- /dev/null
+++ b/releasenotes/notes/cleanup-container-client-interface-6a9fe49072cfdb17.yaml
@@ -0,0 +1,8 @@
+---
+upgrade:
+ - |
+    The following deprecated alias methods of the ``ContainerClient`` class
+    have been removed.
+
+ - ``update_container_metadata``, replaced by ``create_update_or_delete_container_metadata``
+ - ``list_container_contents``, replaced by ``list_container_objects``
diff --git a/releasenotes/notes/cleanup-decorator-aliases-e940b6e114e6f481.yaml b/releasenotes/notes/cleanup-decorator-aliases-e940b6e114e6f481.yaml
new file mode 100644
index 0000000..fd4a546
--- /dev/null
+++ b/releasenotes/notes/cleanup-decorator-aliases-e940b6e114e6f481.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+ - |
+ The following decorators are no longer available in the ``tempest.test``
+ module. Use the ``tempest.common.utils`` module instead.
+
+ - ``services``
+ - ``requires_ext``
+ - ``is_extension_enabled``
diff --git a/releasenotes/notes/deprecate-import_image-e8c627aab833b64d.yaml b/releasenotes/notes/deprecate-import_image-e8c627aab833b64d.yaml
new file mode 100644
index 0000000..d408538
--- /dev/null
+++ b/releasenotes/notes/deprecate-import_image-e8c627aab833b64d.yaml
@@ -0,0 +1,12 @@
+---
+upgrade:
+ - |
+ Default value of the ``[image-feature-enabled] image_import`` has been
+ changed from ``False`` to ``True``, and now the image import feature is
+ tested by default.
+
+deprecations:
+ - |
+ The ``[image-feature-enabled] image_import`` option has been deprecated.
+ The image import feature works in both standalone mode and WSGI mode since
+    Victoria and the image import feature can always be tested.
diff --git a/releasenotes/notes/deprecate-os_glance_reserved-bace16f21facca3b.yaml b/releasenotes/notes/deprecate-os_glance_reserved-bace16f21facca3b.yaml
new file mode 100644
index 0000000..2834876
--- /dev/null
+++ b/releasenotes/notes/deprecate-os_glance_reserved-bace16f21facca3b.yaml
@@ -0,0 +1,11 @@
+---
+upgrade:
+ - |
+ Default value of the ``[image-feature-enabled] os_glance_reserved`` has
+ been changed from ``False`` to ``True`` and now the reservation of
+ os_glance namespace is tested by default.
+
+deprecations:
+ - |
+ The ``[image-feature-enabled] os_glance_reserved`` option has been
+    deprecated because glance reserves the os_glance namespace since Wallaby.
diff --git a/releasenotes/notes/drop-python38-support-c0a696af00110602.yaml b/releasenotes/notes/drop-python38-support-c0a696af00110602.yaml
new file mode 100644
index 0000000..035f628
--- /dev/null
+++ b/releasenotes/notes/drop-python38-support-c0a696af00110602.yaml
@@ -0,0 +1,8 @@
+---
+prelude: >
+  Tempest dropped support for Python 3.8.
+upgrade:
+ - |
+    Python 3.8 support has been dropped. The last release of Tempest
+    to support Python 3.8 is Tempest 41.0.0. The minimum version
+    of Python now supported by Tempest is Python 3.9.
diff --git a/releasenotes/notes/enable-neutron-by-default-57b87a20acc1ac47.yaml b/releasenotes/notes/enable-neutron-by-default-57b87a20acc1ac47.yaml
new file mode 100644
index 0000000..b8722ea
--- /dev/null
+++ b/releasenotes/notes/enable-neutron-by-default-57b87a20acc1ac47.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+ - |
+ Default value of the ``[service_available] neutron`` option has been
+ updated from ``False`` to ``True``.
+
+ - |
+ All tests which require network features are now skipped when
+    the ``[service_available] neutron`` option is set to ``False``.
diff --git a/releasenotes/notes/end-of-support-of-2023-1-ddec1dac59700063.yaml b/releasenotes/notes/end-of-support-of-2023-1-ddec1dac59700063.yaml
new file mode 100644
index 0000000..d52b54e
--- /dev/null
+++ b/releasenotes/notes/end-of-support-of-2023-1-ddec1dac59700063.yaml
@@ -0,0 +1,12 @@
+---
+prelude: >
+ This is an intermediate release during the 2025.1 development cycle to
+ mark the end of support for 2023.1 release in Tempest.
+    After this release, Tempest will support the below OpenStack releases:
+
+ * 2024.2
+ * 2024.1
+ * 2023.2
+
+ Current development of Tempest is for OpenStack 2025.1 development
+ cycle.
diff --git a/releasenotes/notes/end-of-support-of-yoga-4ad45e91fe893024.yaml b/releasenotes/notes/end-of-support-of-yoga-4ad45e91fe893024.yaml
new file mode 100644
index 0000000..ceeb2b2
--- /dev/null
+++ b/releasenotes/notes/end-of-support-of-yoga-4ad45e91fe893024.yaml
@@ -0,0 +1,12 @@
+---
+prelude: >
+  This is an intermediate release during the 2024.1 development cycle to
+  mark the end of support for the Yoga release in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * 2023.2
+ * 2023.1
+ * Zed
+
+  Current development of Tempest is for the OpenStack 2024.1 development
+ cycle.
diff --git a/releasenotes/notes/end-of-support-of-zed-43e2d5dd5608cb10.yaml b/releasenotes/notes/end-of-support-of-zed-43e2d5dd5608cb10.yaml
new file mode 100644
index 0000000..a0b3ac2
--- /dev/null
+++ b/releasenotes/notes/end-of-support-of-zed-43e2d5dd5608cb10.yaml
@@ -0,0 +1,12 @@
+---
+prelude: >
+  This is an intermediate release during the 2024.2 development cycle to
+  mark the end of support for the Zed release in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * 2024.1
+ * 2023.2
+ * 2023.1
+
+  Current development of Tempest is for the OpenStack 2024.2 development
+ cycle.
diff --git a/releasenotes/notes/identity-feature-opt-cleanup-caracal-7afd283855a07025.yaml b/releasenotes/notes/identity-feature-opt-cleanup-caracal-7afd283855a07025.yaml
new file mode 100644
index 0000000..67f6ede
--- /dev/null
+++ b/releasenotes/notes/identity-feature-opt-cleanup-caracal-7afd283855a07025.yaml
@@ -0,0 +1,21 @@
+---
+upgrade:
+ - |
+ The following deprecated options in the ``[identity-feature-enabled]``
+    section have been removed. The project tags API and application
+    credentials API are now always tested if the identity v3 API is
+    available.
+
+ - ``project_tag``
+ - ``application_credentials``
+
+ - |
+    The default value of the ``[identity-feature-enabled] access_rule``
+    option has been changed from ``False`` to ``True``, and the access rule
+    API is now always tested when the identity API is available.
+
+deprecations:
+ - |
+    The Keystone access_rule feature has been enabled by default since the
+    Train release, so Tempest no longer needs a separate config option to
+    enable it. Therefore the ``[identity-feature-enabled] access_rule``
+    option has been deprecated and will be removed in a future release.
diff --git a/releasenotes/notes/image-config-http-image-default-value-change-476622e984e16ab5.yaml b/releasenotes/notes/image-config-http-image-default-value-change-476622e984e16ab5.yaml
new file mode 100644
index 0000000..96e9251
--- /dev/null
+++ b/releasenotes/notes/image-config-http-image-default-value-change-476622e984e16ab5.yaml
@@ -0,0 +1,7 @@
+---
+upgrade:
+ - |
+    Changed the default value of the ``http_image`` config option in the
+    ``image`` group from
+    ``http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-uec.tar.gz``
+    to
+    ``http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-uec.tar.gz``
+    as the former image is very old and the latest one should always be used.
\ No newline at end of file
diff --git a/releasenotes/notes/image-enforcement-config-0bc67791a40bac56.yaml b/releasenotes/notes/image-enforcement-config-0bc67791a40bac56.yaml
new file mode 100644
index 0000000..2bbc82b
--- /dev/null
+++ b/releasenotes/notes/image-enforcement-config-0bc67791a40bac56.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+    Add a new config option
+    ``[image_feature_enabled] image_format_enforcement`` which tells tempest
+    that glance will do image format inspection and enforcement on upload.
+    This will disable tests that require glance to accept a bad image in
+    order to test another service (e.g. nova).
diff --git a/releasenotes/notes/image-wait-multiple-79c55305b584b1ba.yaml b/releasenotes/notes/image-wait-multiple-79c55305b584b1ba.yaml
new file mode 100644
index 0000000..6f63ebd
--- /dev/null
+++ b/releasenotes/notes/image-wait-multiple-79c55305b584b1ba.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+    The ``wait_for_image_status()`` waiter now accepts a list of status
+    values instead of just a string, and returns the state the image was in
+    when waiting stopped.
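The str-or-list waiter behaviour described in the note above can be sketched as follows. This is an illustrative sketch only, not Tempest's actual implementation: the client with a ``show_image()`` call returning a ``status`` field, and the parameter names, are assumptions.

```python
import time


def wait_for_image_status(client, image_id, status, interval=1, timeout=10):
    """Wait until the image reaches one of the accepted statuses.

    ``status`` may be a single status string or a list of acceptable
    statuses; the status actually reached is returned so callers can
    branch on it.
    """
    # Normalize a single status string into a list so callers can pass
    # either form.
    if isinstance(status, str):
        status = [status]
    start = time.time()
    while time.time() - start < timeout:
        current = client.show_image(image_id)['status']
        if current in status:
            return current  # report which accepted state was reached
        time.sleep(interval)
    raise TimeoutError(
        'Image %s did not reach %s within %ds' % (image_id, status, timeout))
```

Returning the reached status lets a caller distinguish, for example, whether waiting ended in ``active`` or ``error`` when both are acceptable terminal states.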
diff --git a/releasenotes/notes/remove-compute-feature-enabled-block-migrate-cinder-iscsi-882da88096019f3c.yaml b/releasenotes/notes/remove-compute-feature-enabled-block-migrate-cinder-iscsi-882da88096019f3c.yaml
new file mode 100644
index 0000000..2808677
--- /dev/null
+++ b/releasenotes/notes/remove-compute-feature-enabled-block-migrate-cinder-iscsi-882da88096019f3c.yaml
@@ -0,0 +1,8 @@
+---
+upgrade:
+ - |
+ The deprecated ``[compute-feature-enabled] block_migrate_cinder_iscsi``
+ option has been removed.
+ Now the ``[compute-feature-enabled] block_migration_for_live_migration``
+ option is solely used to determine when to run block migration based tests
+ during live migration.
diff --git a/releasenotes/notes/remove-dns_servers_option-f49fdb2b4eb50f8f.yaml b/releasenotes/notes/remove-dns_servers_option-f49fdb2b4eb50f8f.yaml
new file mode 100644
index 0000000..6be1db9
--- /dev/null
+++ b/releasenotes/notes/remove-dns_servers_option-f49fdb2b4eb50f8f.yaml
@@ -0,0 +1,4 @@
+---
+upgrade:
+ - |
+ The deprecated ``[network] dns_servers`` option has been removed.
diff --git a/releasenotes/notes/remove-identity-v2-tests-369b3fa190f624da.yaml b/releasenotes/notes/remove-identity-v2-tests-369b3fa190f624da.yaml
new file mode 100644
index 0000000..d927a68
--- /dev/null
+++ b/releasenotes/notes/remove-identity-v2-tests-369b3fa190f624da.yaml
@@ -0,0 +1,23 @@
+---
+upgrade:
+ - |
+ Tests for identity v2 API have been removed.
+
+deprecations:
+ - |
+ The following options have been formally deprecated. These options were
+ used to test identity v2 API which was removed during Queens cycle.
+ The tests for identity v2 API were removed from tempest and these options
+ have no effect.
+
+ - ``[identity] uri``
+ - ``[identity] v2_admin_endpoint_type``
+ - ``[identity] v2_public_endpoint_type``
+ - ``[identity-feature-enabled] api_v2_admin``
+
+ - |
+ The following options have been deprecated because only identity v3 API
+ is used.
+
+ - ``[identity] auth_version``
+ - ``[identity-feature-enabled] api_v3``
diff --git a/releasenotes/notes/remove-nova_cert-e2ee70a40e117e8a.yaml b/releasenotes/notes/remove-nova_cert-e2ee70a40e117e8a.yaml
new file mode 100644
index 0000000..1a292f0
--- /dev/null
+++ b/releasenotes/notes/remove-nova_cert-e2ee70a40e117e8a.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+ - |
+ The deprecated ``[compute-feature-enabled] nova_cert`` option has been
+ removed. The nova-cert service was removed from nova in 16.0.0 release.
+ Tests of compute root certificates API have also been removed.
diff --git a/releasenotes/notes/remove-rdp_console-34e11f58d525905a.yaml b/releasenotes/notes/remove-rdp_console-34e11f58d525905a.yaml
new file mode 100644
index 0000000..4f03150
--- /dev/null
+++ b/releasenotes/notes/remove-rdp_console-34e11f58d525905a.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ The deprecated ``[compute-feature-enabled] rdp_console`` config option has
+ been removed.
diff --git a/releasenotes/notes/remove-vnc-server-header-1a9731ba10242603.yaml b/releasenotes/notes/remove-vnc-server-header-1a9731ba10242603.yaml
new file mode 100644
index 0000000..cf14513
--- /dev/null
+++ b/releasenotes/notes/remove-vnc-server-header-1a9731ba10242603.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ The deprecated ``[compute-feature-enabled] vnc_server_header`` option has
+ been removed.
diff --git a/releasenotes/notes/remove-xenapi_apis-86720c0c399460ab.yaml b/releasenotes/notes/remove-xenapi_apis-86720c0c399460ab.yaml
new file mode 100644
index 0000000..26da18c
--- /dev/null
+++ b/releasenotes/notes/remove-xenapi_apis-86720c0c399460ab.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - |
+ The deprecated ``[compute-feature-enabled] xenapi_apis`` option has been
+ removed.
diff --git a/releasenotes/notes/resource-list-cbf9779e8b434654.yaml b/releasenotes/notes/resource-list-cbf9779e8b434654.yaml
new file mode 100644
index 0000000..bbd2f16
--- /dev/null
+++ b/releasenotes/notes/resource-list-cbf9779e8b434654.yaml
@@ -0,0 +1,11 @@
+---
+features:
+ - |
+    A new option ``--resource-list`` has been introduced in the
+    ``tempest cleanup`` command to remove the resources created by
+    Tempest. A new config option in the default section, ``record_resources``,
+    has been added to allow recording of all resources created by Tempest.
+    A list of these resources is saved in the ``resource_list.json`` file,
+    which is appended to across multiple Tempest runs. This file
+    is intended to be used with the ``tempest cleanup`` command together
+    with the newly added ``--resource-list`` option.
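The append-across-runs behaviour described in this note can be sketched as below. The JSON schema used here (resource type mapped to id/name pairs) and the helper name are assumptions for illustration only, not Tempest's actual ``resource_list.json`` format.

```python
import json
import os


def record_resources(path, new_resources):
    """Merge newly created resources into an existing resource list file."""
    data = {}
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    # Append rather than overwrite, so entries recorded by earlier
    # Tempest runs survive.
    for rtype, entries in new_resources.items():
        data.setdefault(rtype, {}).update(entries)
    with open(path, 'w') as f:
        json.dump(data, f, indent=2)
    return data
```

A cleanup pass reading this file back would then know exactly which resources a run created, rather than guessing from name prefixes.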
diff --git a/releasenotes/notes/tempest-2024-1-release-d51f15c6bfe60b35.yaml b/releasenotes/notes/tempest-2024-1-release-d51f15c6bfe60b35.yaml
new file mode 100644
index 0000000..81d6a05
--- /dev/null
+++ b/releasenotes/notes/tempest-2024-1-release-d51f15c6bfe60b35.yaml
@@ -0,0 +1,17 @@
+---
+prelude: >
+  This release is to tag Tempest for the OpenStack 2024.1 release.
+  This release marks the start of 2024.1 release support in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * 2024.1
+ * 2023.2
+ * 2023.1
+ * Zed
+
+  Current development of Tempest is for the OpenStack 2024.2 development
+ cycle. Every Tempest commit is also tested against master during
+ the 2024.2 cycle. However, this does not necessarily mean that using
+ Tempest as of this tag will work against a 2024.2 (or future release)
+ cloud.
+  To be on the safe side, use this tag to test the OpenStack 2024.1 release.
diff --git a/releasenotes/notes/tempest-2024-2-release-78846595720db3cd.yaml b/releasenotes/notes/tempest-2024-2-release-78846595720db3cd.yaml
new file mode 100644
index 0000000..57367c8
--- /dev/null
+++ b/releasenotes/notes/tempest-2024-2-release-78846595720db3cd.yaml
@@ -0,0 +1,17 @@
+---
+prelude: >
+  This release is to tag Tempest for the OpenStack 2024.2 release.
+  This release marks the start of 2024.2 release support in Tempest.
+  After this release, Tempest will support the following OpenStack releases:
+
+ * 2024.2
+ * 2024.1
+ * 2023.2
+ * 2023.1
+
+  Current development of Tempest is for the OpenStack 2025.1 development
+ cycle. Every Tempest commit is also tested against master during
+ the 2025.1 cycle. However, this does not necessarily mean that using
+ Tempest as of this tag will work against a 2025.1 (or future release)
+ cloud.
+  To be on the safe side, use this tag to test the OpenStack 2024.2 release.
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index 989d3b5..633d90e 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,11 @@
:maxdepth: 1
unreleased
+ v41.0.0
+ v40.0.0
+ v39.0.0
+ v38.0.0
+ v37.0.0
v36.0.0
v35.0.0
v34.2.0
diff --git a/releasenotes/source/v37.0.0.rst b/releasenotes/source/v37.0.0.rst
new file mode 100644
index 0000000..72b8bc6
--- /dev/null
+++ b/releasenotes/source/v37.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v37.0.0 Release Notes
+=====================
+
+.. release-notes:: 37.0.0 Release Notes
+ :version: 37.0.0
diff --git a/releasenotes/source/v38.0.0.rst b/releasenotes/source/v38.0.0.rst
new file mode 100644
index 0000000..2664374
--- /dev/null
+++ b/releasenotes/source/v38.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v38.0.0 Release Notes
+=====================
+
+.. release-notes:: 38.0.0 Release Notes
+ :version: 38.0.0
diff --git a/releasenotes/source/v39.0.0.rst b/releasenotes/source/v39.0.0.rst
new file mode 100644
index 0000000..a971fbc
--- /dev/null
+++ b/releasenotes/source/v39.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v39.0.0 Release Notes
+=====================
+
+.. release-notes:: 39.0.0 Release Notes
+ :version: 39.0.0
diff --git a/releasenotes/source/v40.0.0.rst b/releasenotes/source/v40.0.0.rst
new file mode 100644
index 0000000..995767b
--- /dev/null
+++ b/releasenotes/source/v40.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v40.0.0 Release Notes
+=====================
+
+.. release-notes:: 40.0.0 Release Notes
+ :version: 40.0.0
diff --git a/releasenotes/source/v41.0.0.rst b/releasenotes/source/v41.0.0.rst
new file mode 100644
index 0000000..6d79c4c
--- /dev/null
+++ b/releasenotes/source/v41.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v41.0.0 Release Notes
+=====================
+
+.. release-notes:: 41.0.0 Release Notes
+ :version: 41.0.0
diff --git a/requirements.txt b/requirements.txt
index 6e66046..a1eff53 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,6 +1,3 @@
-# The order of packages is significant, because pip processes them in the order
-# of appearance. Changing the order has an impact on the overall integration
-# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
cliff!=2.9.0,>=2.8.0 # Apache-2.0
jsonschema>=3.2.0 # MIT
@@ -13,7 +10,7 @@
oslo.log>=3.36.0 # Apache-2.0
stestr>=1.0.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
-oslo.utils>=4.7.0 # Apache-2.0
+oslo.utils>=7.0.0 # Apache-2.0
fixtures>=3.0.0 # Apache-2.0/BSD
PyYAML>=3.12 # MIT
python-subunit>=1.0.0 # Apache-2.0/BSD
@@ -23,3 +20,4 @@
debtcollector>=1.2.0 # Apache-2.0
defusedxml>=0.7.1 # PSFL
fasteners>=0.16.0 # Apache-2.0
+testscenarios>=0.5.0 # Apache-2.0/BSD
diff --git a/roles/run-tempest/README.rst b/roles/run-tempest/README.rst
index 04db849..c682641 100644
--- a/roles/run-tempest/README.rst
+++ b/roles/run-tempest/README.rst
@@ -81,7 +81,7 @@
.. zuul:rolevar:: stable_constraints_file
:default: ''
- Upper constraints file to be used for stable branch till stable/victoria.
+   Upper constraints file to be used for stable branch till Wallaby.
.. zuul:rolevar:: tempest_tox_environment
:default: ''
diff --git a/roles/run-tempest/tasks/main.yaml b/roles/run-tempest/tasks/main.yaml
index 3d78557..15b1743 100644
--- a/roles/run-tempest/tasks/main.yaml
+++ b/roles/run-tempest/tasks/main.yaml
@@ -25,11 +25,11 @@
target_branch: "{{ zuul.override_checkout }}"
when: zuul.override_checkout is defined
-- name: Use stable branch upper-constraints till stable/wallaby
+- name: Use stable branch upper-constraints till 2023.1
set_fact:
# TOX_CONSTRAINTS_FILE is new name, UPPER_CONSTRAINTS_FILE is old one, best to set both
tempest_tox_environment: "{{ tempest_tox_environment | combine({'UPPER_CONSTRAINTS_FILE': stable_constraints_file}) | combine({'TOX_CONSTRAINTS_FILE': stable_constraints_file}) }}"
- when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/victoria", "stable/wallaby"]
+ when: target_branch in ["stable/ocata", "stable/pike", "stable/queens", "stable/rocky", "stable/stein", "stable/train", "stable/ussuri", "stable/2023.1", "unmaintained/victoria", "unmaintained/wallaby", "unmaintained/xena", "unmaintained/yoga", "unmaintained/zed", "unmaintained/2023.1"]
- name: Use Configured upper-constraints for non-master Tempest
set_fact:
@@ -80,14 +80,14 @@
- name: Tempest 26.1.0 workaround to fallback exclude-list to blacklist
# NOTE(gmann): stable/train|ussuri|victoria use Tempest 26.1.0 and with
- # stestr 2.5.1/3.0.1 (beacause of upper constraints of stestr 2.5.1/3.0.1
+ # stestr 2.5.1/3.0.1 (because of upper constraints of stestr 2.5.1/3.0.1
# in stable/train|ussuri|victoria) which does not have new args exclude-list
# so let's fallback to old arg if new arg is passed.
set_fact:
exclude_list_option: "--blacklist-file={{ tempest_test_exclude_list|quote }}"
when:
- tempest_test_exclude_list is defined
- - target_branch in ["stable/train", "stable/ussuri", "stable/victoria"]
+ - target_branch in ["stable/train", "stable/ussuri", "unmaintained/victoria"]
# TODO(kopecmartin) remove this after all consumers of the role have switched
# to tempest_exclude_regex option, until then it's kept here for the backward
@@ -105,11 +105,11 @@
when:
- tempest_black_regex is not defined
- tempest_exclude_regex is defined
- - target_branch not in ["stable/train", "stable/ussuri", "stable/victoria"]
+ - target_branch not in ["stable/train", "stable/ussuri", "unmaintained/victoria"]
- name: Tempest 26.1.0 workaround to fallback exclude-regex to black-regex
# NOTE(gmann): stable/train|ussuri|victoria use Tempest 26.1.0 and with stestr
- # 2.5.1/3.0.1 (beacause of upper constraints of stestr 2.5.1/3.0.1 in
+ # 2.5.1/3.0.1 (because of upper constraints of stestr 2.5.1/3.0.1 in
# stable/train|ussuri|victoria) which does not have new args exclude-list so
# let's fallback to old arg if new arg is passed.
set_fact:
@@ -117,7 +117,7 @@
when:
- tempest_black_regex is not defined
- tempest_exclude_regex is defined
- - target_branch in ["stable/train", "stable/ussuri", "stable/victoria"]
+ - target_branch in ["stable/train", "stable/ussuri", "unmaintained/victoria"]
- name: Run Tempest
command: tox -e {{tox_envlist}} {{tox_extra_args}} -- \
diff --git a/roles/set-src-dest-host/defaults/main.yaml b/roles/set-src-dest-host/defaults/main.yaml
new file mode 100644
index 0000000..fea05c8
--- /dev/null
+++ b/roles/set-src-dest-host/defaults/main.yaml
@@ -0,0 +1 @@
+devstack_base_dir: /opt/stack
diff --git a/roles/set-src-dest-host/tasks/main.yaml b/roles/set-src-dest-host/tasks/main.yaml
new file mode 100644
index 0000000..78b7a2c
--- /dev/null
+++ b/roles/set-src-dest-host/tasks/main.yaml
@@ -0,0 +1,29 @@
+- name: Find out hostnames
+ set_fact:
+ devstack_hostnames: "{{ devstack_hostnames|default([]) + [hostvars[zj_item]['ansible_hostname'] | default('unknown')] }}"
+ loop: "{{ query('inventory_hostnames', 'all,!localhost') }}"
+ loop_control:
+ loop_var: zj_item
+ ignore_errors: yes # noqa ignore-errors
+
+- name: Found hostnames
+ debug:
+ msg: |
+ # Available hosts
+ {{ devstack_hostnames }}
+
+- name: Set migration_source_host in tempest.conf
+ become: true
+ community.general.ini_file:
+ path: "{{ devstack_base_dir }}/tempest/etc/tempest.conf"
+ section: compute
+ option: migration_source_host
+ value: "{{ devstack_hostnames[0] }}"
+
+- name: Set migration_dest_host in tempest.conf
+ become: true
+ community.general.ini_file:
+ path: "{{ devstack_base_dir }}/tempest/etc/tempest.conf"
+ section: compute
+ option: migration_dest_host
+ value: "{{ devstack_hostnames[1] }}"
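A rough Python equivalent of what the role's two ``ini_file`` tasks do (write the first two discovered hostnames into the ``[compute]`` section of ``tempest.conf``) might look like the sketch below; the function name and hostnames are illustrative, not part of the role.

```python
import configparser


def set_migration_hosts(tempest_conf_path, hostnames):
    # Mirror the two ini_file tasks: the first discovered host becomes
    # the migration source, the second becomes the destination.
    parser = configparser.ConfigParser()
    parser.read(tempest_conf_path)
    if not parser.has_section('compute'):
        parser.add_section('compute')
    parser.set('compute', 'migration_source_host', hostnames[0])
    parser.set('compute', 'migration_dest_host', hostnames[1])
    with open(tempest_conf_path, 'w') as f:
        parser.write(f)
```

Like the ``ini_file`` module, this preserves other sections of the file and only adds or updates the two migration options.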
diff --git a/roles/tempest-cleanup/README.rst b/roles/tempest-cleanup/README.rst
index d1fad90..255ca2d 100644
--- a/roles/tempest-cleanup/README.rst
+++ b/roles/tempest-cleanup/README.rst
@@ -40,6 +40,21 @@
some must have been leaked. This can be also used to verify that tempest
cleanup was successful.
+.. zuul:rolevar:: run_tempest_cleanup_prefix
+ :default: false
+
+  When true, tempest cleanup will be called with ``--prefix tempest`` to
+  delete only resources with names that match the prefix. This option can be
+  used together with dry_run.
+
+.. zuul:rolevar:: run_tempest_cleanup_resource_list
+ :default: false
+
+  When true, tempest cleanup will be called with ``--resource-list`` to
+  delete only resources listed in ./resource_list.json, which is created
+  when the record_resources config option in the default section of the
+  tempest.conf file is enabled (set to True). The resource_list.json file
+  contains all resources created by Tempest during a Tempest run.
Role usage
----------
diff --git a/roles/tempest-cleanup/defaults/main.yaml b/roles/tempest-cleanup/defaults/main.yaml
index ce78bdb..1ec2f8c 100644
--- a/roles/tempest-cleanup/defaults/main.yaml
+++ b/roles/tempest-cleanup/defaults/main.yaml
@@ -2,3 +2,5 @@
init_saved_state: false
dry_run: false
run_tempest_fail_if_leaked_resources: false
+run_tempest_cleanup_prefix: false
+run_tempest_cleanup_resource_list: false
diff --git a/roles/tempest-cleanup/tasks/dry_run.yaml b/roles/tempest-cleanup/tasks/dry_run.yaml
index 46749ab..8ae5183 100644
--- a/roles/tempest-cleanup/tasks/dry_run.yaml
+++ b/roles/tempest-cleanup/tasks/dry_run.yaml
@@ -5,3 +5,22 @@
command: tox -evenv-tempest -- tempest cleanup --dry-run --debug
args:
chdir: "{{ devstack_base_dir }}/tempest"
+ when:
+ - not run_tempest_cleanup_prefix
+ - run_tempest_cleanup_resource_list is not defined or not run_tempest_cleanup_resource_list
+
+- name: Run tempest cleanup dry-run with tempest prefix
+ become: yes
+ become_user: tempest
+ command: tox -evenv-tempest -- tempest cleanup --dry-run --debug --prefix tempest
+ args:
+ chdir: "{{ devstack_base_dir }}/tempest"
+ when: run_tempest_cleanup_prefix
+
+- name: Run tempest cleanup dry-run with tempest resource list
+ become: yes
+ become_user: tempest
+ command: tox -evenv-tempest -- tempest cleanup --dry-run --debug --resource-list
+ args:
+ chdir: "{{ devstack_base_dir }}/tempest"
+ when: run_tempest_cleanup_resource_list
diff --git a/roles/tempest-cleanup/tasks/main.yaml b/roles/tempest-cleanup/tasks/main.yaml
index c1d63f0..1e1c1a7 100644
--- a/roles/tempest-cleanup/tasks/main.yaml
+++ b/roles/tempest-cleanup/tasks/main.yaml
@@ -27,6 +27,29 @@
command: tox -evenv-tempest -- tempest cleanup --debug
args:
chdir: "{{ devstack_base_dir }}/tempest"
+ when:
+ - not run_tempest_cleanup_prefix
+ - run_tempest_cleanup_resource_list is not defined or not run_tempest_cleanup_resource_list
+
+ - name: Run tempest cleanup with tempest prefix
+ become: yes
+ become_user: tempest
+ command: tox -evenv-tempest -- tempest cleanup --debug --prefix tempest
+ args:
+ chdir: "{{ devstack_base_dir }}/tempest"
+ when: run_tempest_cleanup_prefix
+
+ - name: Cat resource_list.json
+ command: cat "{{ devstack_base_dir }}/tempest/resource_list.json"
+ when: run_tempest_cleanup_resource_list
+
+ - name: Run tempest cleanup with tempest resource list
+ become: yes
+ become_user: tempest
+ command: tox -evenv-tempest -- tempest cleanup --debug --resource-list
+ args:
+ chdir: "{{ devstack_base_dir }}/tempest"
+ when: run_tempest_cleanup_resource_list
- when:
- run_tempest_fail_if_leaked_resources
diff --git a/setup.cfg b/setup.cfg
index bb1ced5..67555f4 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -6,7 +6,6 @@
author = OpenStack
author_email = openstack-discuss@lists.openstack.org
home_page = https://docs.openstack.org/tempest/latest/
-python_requires = >=3.8
classifier =
Intended Audience :: Information Technology
Intended Audience :: System Administrators
@@ -15,10 +14,10 @@
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 3
- Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
+ Programming Language :: Python :: 3.12
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: Implementation :: CPython
diff --git a/tempest/README.rst b/tempest/README.rst
index b345032..b300dcb 100644
--- a/tempest/README.rst
+++ b/tempest/README.rst
@@ -8,7 +8,7 @@
implementations for both correctness, as well as a burn in tool for
OpenStack clouds.
-As such Tempest tests come in many flavors, each with their own rules
+As such Tempest tests come in many flavors, each with its own rules
and guidelines. Below is the overview of the Tempest repository structure
to make this clear.
@@ -17,6 +17,7 @@
tempest/
api/ - API tests
scenario/ - complex scenario tests
+   serial_tests/ - tests that always run in serial mode
tests/ - unit tests for Tempest internals
Each of these directories contains different types of tests. What
@@ -41,13 +42,20 @@
---------------------------
Scenario tests are complex "through path" tests for OpenStack
-functionality. They are typically a series of steps where complicated
+functionality. They are typically a series of steps where a complicated
 state requiring multiple services is set up, exercised, and torn down.
Scenario tests should not use the existing Python clients for OpenStack,
but should instead use the Tempest implementations of clients.
+:ref:`serial_tests_guide`
+--------------------------------
+
+Tests within this category will always be executed serially, separately
+from the rest of the test cases.
+
+
:ref:`unit_tests_field_guide`
-----------------------------
diff --git a/tempest/api/README.rst b/tempest/api/README.rst
index a796922..7051230 100644
--- a/tempest/api/README.rst
+++ b/tempest/api/README.rst
@@ -7,20 +7,20 @@
What are these tests?
---------------------
-One of Tempest's prime function is to ensure that your OpenStack cloud
+One of Tempest's prime functions is to ensure that your OpenStack cloud
works with the OpenStack API as documented. The current largest
portion of Tempest code is devoted to test cases that do exactly this.
It's also important to test not only the expected positive path on
APIs, but also to provide them with invalid data to ensure they fail
in expected and documented ways. The latter type of tests is called
-``negative tests`` in Tempest source code. Over the course of the OpenStack
-project Tempest has discovered many fundamental bugs by doing just
+``negative tests`` in Tempest source code. Throughout the OpenStack
+project, Tempest has discovered many fundamental bugs by doing just
this.
In order for some APIs to return meaningful results, there must be
enough data in the system. This means these tests might start by
-spinning up a server, image, etc, then operating on it.
+spinning up a server, image, etc., and then operating on it.
Why are these tests in Tempest?
@@ -32,7 +32,7 @@
It could be argued that some of the negative testing could be done
back in the projects themselves, and we might evolve there over time,
-but currently in the OpenStack gate this is a fundamentally important
+but currently, in the OpenStack gate, this is a fundamentally important
place to keep things.
@@ -43,7 +43,7 @@
OpenStack API, as we want to ensure that bugs aren't hidden by the
official clients.
-They should test specific API calls, and can build up complex state if
+They should test specific API calls and can build up complex states if
it's needed for the API call to be meaningful.
They should send not only good data, but bad data at the API and look
diff --git a/tempest/api/compute/admin/test_agents.py b/tempest/api/compute/admin/test_agents.py
deleted file mode 100644
index 8fc155b..0000000
--- a/tempest/api/compute/admin/test_agents.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.compute import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-# TODO(stephenfin): Remove these tests once the nova Ussuri branch goes EOL
-class AgentsAdminTestJSON(base.BaseV2ComputeAdminTest):
- """Tests Compute Agents API"""
-
- @classmethod
- def skip_checks(cls):
- super(AgentsAdminTestJSON, cls).skip_checks()
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise cls.skipException('The os-agents API is not supported.')
-
- @classmethod
- def setup_clients(cls):
- super(AgentsAdminTestJSON, cls).setup_clients()
- cls.client = cls.os_admin.agents_client
-
- @classmethod
- def resource_setup(cls):
- super(AgentsAdminTestJSON, cls).resource_setup()
- cls.params_agent = cls._param_helper(
- hypervisor='common', os='linux', architecture='x86_64',
- version='7.0', url='xxx://xxxx/xxx/xxx',
- md5hash='add6bb58e139be103324d04d82d8f545')
-
- @staticmethod
- def _param_helper(**kwargs):
- rand_key = 'architecture'
- if rand_key in kwargs:
- # NOTE: The rand_name is for avoiding agent conflicts.
- # If you try to create an agent with the same hypervisor,
- # os and architecture as an existing agent, Nova will return
- # an HTTPConflict or HTTPServerError.
- kwargs[rand_key] = data_utils.rand_name(
- prefix=CONF.resource_name_prefix,
- name=kwargs[rand_key])
- return kwargs
-
- @decorators.idempotent_id('1fc6bdc8-0b6d-4cc7-9f30-9b04fabe5b90')
- def test_create_agent(self):
- """Test creating a compute agent"""
- params = self._param_helper(
- hypervisor='kvm', os='win', architecture='x86',
- version='7.0', url='xxx://xxxx/xxx/xxx',
- md5hash='add6bb58e139be103324d04d82d8f545')
- body = self.client.create_agent(**params)['agent']
- self.addCleanup(self.client.delete_agent, body['agent_id'])
- for expected_item, value in params.items():
- self.assertEqual(value, body[expected_item])
-
- @decorators.idempotent_id('dc9ffd51-1c50-4f0e-a820-ae6d2a568a9e')
- def test_update_agent(self):
- """Test updating a compute agent"""
- # Create and update an agent.
- body = self.client.create_agent(**self.params_agent)['agent']
- self.addCleanup(self.client.delete_agent, body['agent_id'])
- agent_id = body['agent_id']
- params = self._param_helper(
- version='8.0', url='xxx://xxxx/xxx/xxx2',
- md5hash='add6bb58e139be103324d04d82d8f547')
- body = self.client.update_agent(agent_id, **params)['agent']
- for expected_item, value in params.items():
- self.assertEqual(value, body[expected_item])
-
- @decorators.idempotent_id('470e0b89-386f-407b-91fd-819737d0b335')
- def test_delete_agent(self):
- """Test deleting a compute agent"""
- body = self.client.create_agent(**self.params_agent)['agent']
- self.client.delete_agent(body['agent_id'])
-
- # Verify the list doesn't contain the deleted agent.
- agents = self.client.list_agents()['agents']
- self.assertNotIn(body['agent_id'], map(lambda x: x['agent_id'],
- agents))
-
- @decorators.idempotent_id('6a326c69-654b-438a-80a3-34bcc454e138')
- def test_list_agents(self):
- """Test listing compute agents"""
- body = self.client.create_agent(**self.params_agent)['agent']
- self.addCleanup(self.client.delete_agent, body['agent_id'])
- agents = self.client.list_agents()['agents']
- self.assertNotEmpty(agents, 'Cannot get any agents.')
- self.assertIn(body['agent_id'], map(lambda x: x['agent_id'], agents))
-
- @decorators.idempotent_id('eabadde4-3cd7-4ec4-a4b5-5a936d2d4408')
- def test_list_agents_with_filter(self):
- """Test listing compute agents by the filter"""
- body = self.client.create_agent(**self.params_agent)['agent']
- self.addCleanup(self.client.delete_agent, body['agent_id'])
- params = self._param_helper(
- hypervisor='xen', os='linux', architecture='x86',
- version='7.0', url='xxx://xxxx/xxx/xxx1',
- md5hash='add6bb58e139be103324d04d82d8f546')
- agent_xen = self.client.create_agent(**params)['agent']
- self.addCleanup(self.client.delete_agent, agent_xen['agent_id'])
-
- agent_id_xen = agent_xen['agent_id']
- agents = (self.client.list_agents(hypervisor=agent_xen['hypervisor'])
- ['agents'])
- self.assertNotEmpty(agents, 'Cannot get any agents.')
- self.assertIn(agent_id_xen, map(lambda x: x['agent_id'], agents))
- self.assertNotIn(body['agent_id'], map(lambda x: x['agent_id'],
- agents))
- for agent in agents:
- self.assertEqual(agent_xen['hypervisor'], agent['hypervisor'])
diff --git a/tempest/api/compute/admin/test_hosts.py b/tempest/api/compute/admin/test_hosts.py
index 0d79570..849b535 100644
--- a/tempest/api/compute/admin/test_hosts.py
+++ b/tempest/api/compute/admin/test_hosts.py
@@ -14,8 +14,11 @@
from tempest.api.compute import base
from tempest.common import tempest_fixtures as fixtures
+from tempest import config
from tempest.lib import decorators
+CONF = config.CONF
+
class HostsAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests nova hosts API using admin privileges."""
@@ -70,7 +73,7 @@
hosts = [host for host in hosts if (
host['service'] == 'compute' and
- not host['host_name'].endswith('-ironic'))]
+ CONF.compute.target_hosts_to_avoid not in host['host_name'])]
self.assertNotEmpty(hosts)
for host in hosts:
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 429755a..f6a1ae9 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -175,9 +175,6 @@
@testtools.skipIf(not CONF.compute_feature_enabled.
block_migration_for_live_migration,
'Block Live migration not available')
- @testtools.skipIf(not CONF.compute_feature_enabled.
- block_migrate_cinder_iscsi,
- 'Block Live migration not configured for iSCSI')
@utils.services('volume')
def test_live_block_migration_with_attached_volume(self):
"""Test the live-migration of an instance with an attached volume.
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
index be838fc..6c9aafb 100644
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -207,15 +207,10 @@
self.assertEqual(self.image_ref_alt, rebuilt_image_id)
@decorators.idempotent_id('7a1323b4-a6a2-497a-96cb-76c07b945c71')
- def test_reset_network_inject_network_info(self):
- """Test resetting and injecting network info of a server"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'The resetNetwork server action is not supported.')
-
- # Reset Network of a Server
+ def test_inject_network_info(self):
+ """Test injecting network info of a server"""
+ # Create a server
server = self.create_test_server(wait_until='ACTIVE')
- self.client.reset_network(server['id'])
# Inject the Network Info into Server
self.client.inject_network_info(server['id'])
diff --git a/tempest/api/compute/admin/test_servers_on_multinodes.py b/tempest/api/compute/admin/test_servers_on_multinodes.py
index 013e7d8..c5d5b19 100644
--- a/tempest/api/compute/admin/test_servers_on_multinodes.py
+++ b/tempest/api/compute/admin/test_servers_on_multinodes.py
@@ -24,7 +24,7 @@
class ServersOnMultiNodesTest(base.BaseV2ComputeAdminTest):
- """Test creating servers on mutiple nodes with scheduler_hints."""
+ """Test creating servers on multiple nodes with scheduler_hints."""
@classmethod
def resource_setup(cls):
super(ServersOnMultiNodesTest, cls).resource_setup()
@@ -150,6 +150,15 @@
compute.shelve_server(self.servers_client, server['id'],
force_shelve_offload=True)
+ # Work around https://bugs.launchpad.net/nova/+bug/2045785
+ # This can be removed when ^ is fixed.
+ def _check_server_host_is_none():
+ server_details = self.os_admin.servers_client.show_server(
+ server['id'])
+ self.assertIsNone(server_details['server']['OS-EXT-SRV-ATTR:host'])
+
+ self.wait_for(_check_server_host_is_none)
+
self.os_admin.servers_client.unshelve_server(
server['id'],
body={'unshelve': {'host': host}}
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 2557e47..b974b52 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -410,7 +410,7 @@
:param validatable: whether the server needs to be
validatable. When True, validation resources are acquired via
the `get_class_validation_resources` helper.
- :param kwargs: extra paramaters are passed through to the
+ :param kwargs: extra parameters are passed through to the
`create_test_server` call.
:return: the UUID of the created server.
"""
@@ -538,6 +538,37 @@
volume['id'], 'available')
return volume
+ def verify_metadata_from_api(self, server, ssh_client, verify_method):
+ md_url = 'http://169.254.169.254/openstack/latest/meta_data.json'
+ LOG.info('Attempting to verify tagged devices in server %s via '
+ 'the metadata service: %s', server['id'], md_url)
+
+ def get_and_verify_metadata():
+ try:
+ ssh_client.exec_command('curl -V')
+ except lib_exc.SSHExecCommandFailed:
+ if not CONF.compute_feature_enabled.config_drive:
+ raise self.skipException('curl not found in guest '
+ 'and config drive is '
+ 'disabled')
+ LOG.warning('curl was not found in the guest, device '
+ 'tagging metadata was not checked in the '
+ 'metadata API')
+ return True
+ cmd = 'curl %s' % md_url
+ md_json = ssh_client.exec_command(cmd)
+ return verify_method(md_json)
+ # NOTE(gmann): Keep refreshing the metadata info until the metadata
+ # cache is refreshed. To be safe, poll every build_interval until
+ # build_timeout. verify_method() above returns True once all metadata
+ # verification is done as expected.
+ if not test_utils.call_until_true(get_and_verify_metadata,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
+ raise lib_exc.TimeoutException('Timeout while verifying '
+ 'metadata on server.')
+
def _detach_volume(self, server, volume):
"""Helper method to detach a volume.
@@ -689,7 +720,7 @@
binary='nova-compute')['services']
hosts = []
for svc in svcs:
- if svc['host'].endswith('-ironic'):
+ if CONF.compute.target_hosts_to_avoid in svc['host']:
continue
if svc['state'] == 'up' and svc['status'] == 'enabled':
if CONF.compute.compute_volume_common_az:
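The `verify_metadata_from_api` helper moved into `base.py` above relies on `test_utils.call_until_true` to poll the metadata service until the cache refreshes. A minimal standalone sketch of that poll-until-true pattern (illustrative only, not tempest's implementation):

```python
import time


def call_until_true(func, duration, sleep_for):
    """Call ``func`` every ``sleep_for`` seconds until it returns True
    or ``duration`` seconds have elapsed.

    Returns True as soon as ``func`` does; False on timeout.
    """
    deadline = time.time() + duration
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)
    return False
```

On timeout the caller raises `TimeoutException`, which is exactly what the hunk above does when metadata verification never succeeds.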
diff --git a/tempest/api/compute/certificates/test_certificates.py b/tempest/api/compute/certificates/test_certificates.py
deleted file mode 100644
index 5917931..0000000
--- a/tempest/api/compute/certificates/test_certificates.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.compute import base
-from tempest import config
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-class CertificatesV2TestJSON(base.BaseV2ComputeTest):
- """Test Certificates API"""
-
- @classmethod
- def skip_checks(cls):
- super(CertificatesV2TestJSON, cls).skip_checks()
- if not CONF.compute_feature_enabled.nova_cert:
- raise cls.skipException("Nova cert is not available")
-
- @decorators.idempotent_id('c070a441-b08e-447e-a733-905909535b1b')
- def test_create_root_certificate(self):
- """Test creating root certificate"""
- self.certificates_client.create_certificate()
-
- @decorators.idempotent_id('3ac273d0-92d2-4632-bdfc-afbc21d4606c')
- def test_get_root_certificate(self):
- """Test getting root certificate details"""
- self.certificates_client.show_certificate('root')
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index 09f54b5..efd9cdd 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -47,7 +47,7 @@
'name': data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='image'),
'container_format': CONF.image.container_formats[0],
- 'disk_format': CONF.image.disk_formats[0],
+ 'disk_format': 'raw',
'min_ram': min_img_ram,
'visibility': 'private'
}
diff --git a/tempest/api/compute/images/test_images.py b/tempest/api/compute/images/test_images.py
index 87cedae..a90d500 100644
--- a/tempest/api/compute/images/test_images.py
+++ b/tempest/api/compute/images/test_images.py
@@ -71,7 +71,7 @@
self.assertEqual(snapshot_name, image['name'])
except lib_exceptions.TimeoutException as ex:
# If timeout is reached, we don't need to check state,
- # since, it wouldn't be a 'SAVING' state atleast and apart from
+ # since, it wouldn't be a 'SAVING' state at least and apart from
# it, this testcase doesn't have scope for other state transition
# Hence, skip the test.
raise self.skipException("This test is skipped because " + str(ex))
@@ -90,6 +90,14 @@
name=snapshot_name,
wait_until='ACTIVE',
wait_for_server=False)
+ # This is required due to ceph issue:
+ # https://bugs.launchpad.net/glance/+bug/2045769.
+ # New location APIs are async so we need to wait for the location
+ # import task to complete.
+ # This should work with old location API since we don't fail if there
+ # are no tasks for the image
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image['id'], 'success')
self.addCleanup(self.client.delete_image, image['id'])
self.assertEqual(snapshot_name, image['name'])
@@ -110,6 +118,14 @@
name=snapshot_name,
wait_until='ACTIVE',
wait_for_server=False)
+ # This is required due to ceph issue:
+ # https://bugs.launchpad.net/glance/+bug/2045769.
+ # New location APIs are async so we need to wait for the location
+ # import task to complete.
+ # This should work with old location API since we don't fail if there
+ # are no tasks for the image
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image['id'], 'success')
self.addCleanup(self.client.delete_image, image['id'])
self.assertEqual(snapshot_name, image['name'])
@@ -130,6 +146,14 @@
name=snapshot_name,
wait_until='ACTIVE',
wait_for_server=False)
+ # This is required due to ceph issue:
+ # https://bugs.launchpad.net/glance/+bug/2045769.
+ # New location APIs are async so we need to wait for the location
+ # import task to complete.
+ # This should work with old location API since we don't fail if there
+ # are no tasks for the image
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image['id'], 'success')
self.addCleanup(self.client.delete_image, image['id'])
self.assertEqual(snapshot_name, image['name'])
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 275a26f..a245a8a 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -130,7 +130,7 @@
except lib_exc.TimeoutException as ex:
# Test cannot capture the image saving state.
# If timeout is reached, we don't need to check state,
- # since, it wouldn't be a 'SAVING' state atleast and apart from
+ # since, it wouldn't be a 'SAVING' state at least and apart from
# it, this testcase doesn't have scope for other state transition
# Hence, skip the test.
raise self.skipException("This test is skipped because " + str(ex))
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index 8984d1d..eddfd73 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -316,6 +316,7 @@
_, servers = compute.create_test_server(
self.os_primary, tenant_network=network,
validatable=True,
+ wait_until='ACTIVE',
validation_resources=validation_resources)
return servers[0]
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index 6664e15..0b39b8a 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -16,6 +16,8 @@
import netaddr
import testtools
+from oslo_serialization import jsonutils as json
+
from tempest.api.compute import base
from tempest.common import utils
from tempest.common.utils.linux import remote_client
@@ -185,7 +187,7 @@
class ServersTestFqdnHostnames(base.BaseV2ComputeTest):
- """Test creating server with FQDN hostname and verifying atrributes
+ """Test creating server with FQDN hostname and verifying attributes
Starting Wallaby release, Nova sanitizes freeform characters in
server hostname with dashes. This test verifies the same.
@@ -235,3 +237,76 @@
servers_client=self.client)
hostname = linux_client.exec_command("hostname").rstrip()
self.assertEqual('guest-instance-1-domain-com', hostname)
+
+
+class ServersV294TestFqdnHostnames(base.BaseV2ComputeTest):
+ """Test creating server with FQDN hostname and verifying attributes
+
+ Starting with the Antelope release, Nova allows setting the hostname
+ as an FQDN and allows free-form characters in the hostname via the
+ --hostname parameter with microversion 2.94 and above.
+
+ This test creates a server with --hostname set to an FQDN-type value
+ longer than 64 characters.
+ """
+
+ min_microversion = '2.94'
+
+ @classmethod
+ def setup_credentials(cls):
+ cls.prepare_instance_network()
+ super(ServersV294TestFqdnHostnames, cls).setup_credentials()
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServersV294TestFqdnHostnames, cls).setup_clients()
+ cls.client = cls.servers_client
+
+ @classmethod
+ def resource_setup(cls):
+ super(ServersV294TestFqdnHostnames, cls).resource_setup()
+ cls.validation_resources = cls.get_class_validation_resources(
+ cls.os_primary)
+ cls.accessIPv4 = '1.1.1.1'
+ cls.name = 'guest-instance-1'
+ cls.password = data_utils.rand_password()
+ cls.hostname = 'x' * 52 + '-guest-test.domaintest.com'
+ cls.test_server = cls.create_test_server(
+ validatable=True,
+ validation_resources=cls.validation_resources,
+ wait_until='ACTIVE',
+ name=cls.name,
+ accessIPv4=cls.accessIPv4,
+ adminPass=cls.password,
+ hostname=cls.hostname)
+ cls.server = cls.client.show_server(cls.test_server['id'])['server']
+
+ def verify_metadata_hostname(self, md_json):
+ md_dict = json.loads(md_json)
+ dhcp_domain = CONF.compute_feature_enabled.dhcp_domain
+ return md_dict['hostname'] == f"{self.hostname}{dhcp_domain}"
+
+ @decorators.idempotent_id('e7b05488-f9d5-4fce-91b3-e82216c52017')
+ @testtools.skipUnless(CONF.validation.run_validation,
+ 'Instance validation tests are disabled.')
+ def test_verify_hostname_allows_fqdn(self):
+ """Test to verify --hostname allows FQDN type name scheme
+
+ Verify the hostname has FQDN value and Freeform characters
+ in the hostname are allowed
+ """
+ self.assertEqual(
+ self.hostname, self.server['OS-EXT-SRV-ATTR:hostname'])
+ # Verify that metadata API has correct hostname inside guest
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(self.test_server, self.validation_resources),
+ self.ssh_user,
+ self.password,
+ self.validation_resources['keypair']['private_key'],
+ server=self.test_server,
+ servers_client=self.client)
+ self.verify_metadata_from_api(
+ self.test_server, linux_client, self.verify_metadata_hostname)
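The `verify_metadata_hostname` callback above compares the guest-visible metadata hostname against the requested hostname with the deployment's `dhcp_domain` suffix appended. A standalone sketch of that comparison (the helper name and sample domain are illustrative):

```python
import json


def metadata_hostname_matches(md_json, requested_hostname, dhcp_domain):
    """Return True if meta_data.json reports the requested hostname
    with the deployment's dhcp_domain suffix appended."""
    md_dict = json.loads(md_json)
    return md_dict['hostname'] == f"{requested_hostname}{dhcp_domain}"
```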
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index ee25a22..596d2bd 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -99,11 +99,14 @@
def test_delete_server_while_in_verify_resize_state(self):
"""Test deleting a server while it's VM state is VERIFY_RESIZE"""
server = self.create_test_server(wait_until='ACTIVE')
- self.client.resize_server(server['id'], self.flavor_ref_alt)
- waiters.wait_for_server_status(self.client, server['id'],
- 'VERIFY_RESIZE')
- self.client.delete_server(server['id'])
- waiters.wait_for_server_termination(self.client, server['id'])
+ body = self.client.resize_server(server['id'], self.flavor_ref_alt)
+ request_id = body.response['x-openstack-request-id']
+ waiters.wait_for_server_status(
+ self.client, server['id'], 'VERIFY_RESIZE', request_id=request_id)
+ body = self.client.delete_server(server['id'])
+ request_id = body.response['x-openstack-request-id']
+ waiters.wait_for_server_termination(
+ self.client, server['id'], request_id=request_id)
@decorators.idempotent_id('d0f3f0d6-d9b6-4a32-8da4-23015dcab23c')
@utils.services('volume')
diff --git a/tempest/api/compute/servers/test_device_tagging.py b/tempest/api/compute/servers/test_device_tagging.py
index 2640311..d2fdd52 100644
--- a/tempest/api/compute/servers/test_device_tagging.py
+++ b/tempest/api/compute/servers/test_device_tagging.py
@@ -23,9 +23,7 @@
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
-from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest.lib import exceptions
CONF = config.CONF
@@ -64,36 +62,6 @@
dhcp=True)
super(DeviceTaggingBase, cls).setup_credentials()
- def verify_metadata_from_api(self, server, ssh_client, verify_method):
- md_url = 'http://169.254.169.254/openstack/latest/meta_data.json'
- LOG.info('Attempting to verify tagged devices in server %s via '
- 'the metadata service: %s', server['id'], md_url)
-
- def get_and_verify_metadata():
- try:
- ssh_client.exec_command('curl -V')
- except exceptions.SSHExecCommandFailed:
- if not CONF.compute_feature_enabled.config_drive:
- raise self.skipException('curl not found in guest '
- 'and config drive is '
- 'disabled')
- LOG.warning('curl was not found in the guest, device '
- 'tagging metadata was not checked in the '
- 'metadata API')
- return True
- cmd = 'curl %s' % md_url
- md_json = ssh_client.exec_command(cmd)
- return verify_method(md_json)
- # NOTE(gmann) Keep refreshing the metadata info until the metadata
- # cache is refreshed. For safer side, we will go with wait loop of
- # build_interval till build_timeout. verify_method() above will return
- # True if all metadata verification is done as expected.
- if not test_utils.call_until_true(get_and_verify_metadata,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
- raise exceptions.TimeoutException('Timeout while verifying '
- 'metadata on server.')
-
def verify_metadata_on_config_drive(self, server, ssh_client,
verify_method):
LOG.info('Attempting to verify tagged devices in server %s via '
diff --git a/tempest/api/compute/servers/test_multiple_create_negative.py b/tempest/api/compute/servers/test_multiple_create_negative.py
index 3a970dd..d2e2935 100644
--- a/tempest/api/compute/servers/test_multiple_create_negative.py
+++ b/tempest/api/compute/servers/test_multiple_create_negative.py
@@ -40,7 +40,7 @@
@decorators.attr(type=['negative'])
@decorators.idempotent_id('a6f9c2ab-e060-4b82-b23c-4532cb9390ff')
def test_max_count_less_than_one(self):
- """Test creating server with max_count < 1 shoudld fail"""
+ """Test creating server with max_count < 1 should fail"""
invalid_max_count = 0
self.assertRaises(lib_exc.BadRequest, self.create_test_server,
max_count=invalid_max_count)
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 21ed0cd..c911039 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -605,6 +605,14 @@
self.addCleanup(_clean_oldest_backup, image1_id)
waiters.wait_for_image_status(glance_client,
image1_id, 'active')
+ # This is required due to ceph issue:
+ # https://bugs.launchpad.net/glance/+bug/2045769.
+ # New location APIs are async so we need to wait for the location
+ # import task to complete.
+ # This should work with old location API since we don't fail if there
+ # are no tasks for the image
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image1_id, 'success')
backup2 = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='backup-2')
@@ -621,6 +629,8 @@
self.addCleanup(glance_client.delete_image, image2_id)
waiters.wait_for_image_status(glance_client,
image2_id, 'active')
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image2_id, 'success')
# verify they have been created
properties = {
@@ -655,6 +665,8 @@
image3_id = resp['image_id']
else:
image3_id = data_utils.parse_image_id(resp.response['location'])
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image3_id, 'success')
self.addCleanup(glance_client.delete_image, image3_id)
# the first back up should be deleted
waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
diff --git a/tempest/api/compute/servers/test_server_metadata.py b/tempest/api/compute/servers/test_server_metadata.py
index 9f93e76..5f35b15 100644
--- a/tempest/api/compute/servers/test_server_metadata.py
+++ b/tempest/api/compute/servers/test_server_metadata.py
@@ -27,13 +27,6 @@
create_default_network = True
@classmethod
- def skip_checks(cls):
- super(ServerMetadataTestJSON, cls).skip_checks()
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise cls.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
- @classmethod
def setup_clients(cls):
super(ServerMetadataTestJSON, cls).setup_clients()
cls.client = cls.servers_client
diff --git a/tempest/api/compute/servers/test_server_metadata_negative.py b/tempest/api/compute/servers/test_server_metadata_negative.py
index 655909c..2059dfa 100644
--- a/tempest/api/compute/servers/test_server_metadata_negative.py
+++ b/tempest/api/compute/servers/test_server_metadata_negative.py
@@ -14,13 +14,10 @@
# under the License.
from tempest.api.compute import base
-from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-CONF = config.CONF
-
class ServerMetadataNegativeTestJSON(base.BaseV2ComputeTest):
"""Negative tests of server metadata"""
@@ -91,10 +88,6 @@
Raise BadRequest if key in uri does not match the key passed in body.
"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
meta = {'testkey': 'testvalue'}
self.assertRaises(lib_exc.BadRequest,
self.client.set_server_metadata_item,
@@ -104,10 +97,6 @@
@decorators.idempotent_id('0df38c2a-3d4e-4db5-98d8-d4d9fa843a12')
def test_set_metadata_non_existent_server(self):
"""Test setting metadata for a non existent server should fail"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
non_existent_server_id = data_utils.rand_uuid()
meta = {'meta1': 'data1'}
self.assertRaises(lib_exc.NotFound,
@@ -119,10 +108,6 @@
@decorators.idempotent_id('904b13dc-0ef2-4e4c-91cd-3b4a0f2f49d8')
def test_update_metadata_non_existent_server(self):
"""Test updating metadata for a non existent server should fail"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
non_existent_server_id = data_utils.rand_uuid()
meta = {'key1': 'value1', 'key2': 'value2'}
self.assertRaises(lib_exc.NotFound,
@@ -134,10 +119,6 @@
@decorators.idempotent_id('a452f38c-05c2-4b47-bd44-a4f0bf5a5e48')
def test_update_metadata_with_blank_key(self):
"""Test updating server metadata to blank key should fail"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
meta = {'': 'data1'}
self.assertRaises(lib_exc.BadRequest,
self.client.update_server_metadata,
@@ -150,10 +131,6 @@
Should not be able to delete metadata item from a non-existent server.
"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
non_existent_server_id = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
self.client.delete_server_metadata_item,
@@ -168,10 +145,6 @@
A 403 Forbidden or 413 Overlimit (old behaviour) exception
will be raised while exceeding metadata items limit for project.
"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
quota_metadata = quota_set['metadata_items']
@@ -196,10 +169,6 @@
@decorators.idempotent_id('96100343-7fa9-40d8-80fa-d29ef588ce1c')
def test_set_server_metadata_blank_key(self):
"""Test setting server metadata with blank key should fail"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
meta = {'': 'data1'}
self.assertRaises(lib_exc.BadRequest,
self.client.set_server_metadata,
@@ -209,10 +178,6 @@
@decorators.idempotent_id('64a91aee-9723-4863-be44-4c9d9f1e7d0e')
def test_set_server_metadata_missing_metadata(self):
"""Test setting server metadata without metadata field should fail"""
- if not CONF.compute_feature_enabled.xenapi_apis:
- raise self.skipException(
- 'Metadata is read-only on non-Xen-based deployments.')
-
meta = {'meta1': 'data1'}
self.assertRaises(lib_exc.BadRequest,
self.client.set_server_metadata,
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index 97c2774..d6c0324 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -234,7 +234,7 @@
and virtio as the rescue disk.
"""
# This test just check detach fail and does not
- # perfom the detach operation but in cleanup from
+ # perform the detach operation but in cleanup from
# self.attach_volume() it will try to detach the server
# after unrescue the server. Due to that we need to make
# server SSHable before it try to detach, more details are
diff --git a/tempest/api/compute/servers/test_server_rescue_negative.py b/tempest/api/compute/servers/test_server_rescue_negative.py
index 955ba1c..fd05ec6 100644
--- a/tempest/api/compute/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/servers/test_server_rescue_negative.py
@@ -139,7 +139,7 @@
"""Test detaching volume from a rescued server should fail"""
volume = self.create_volume()
# This test just check detach fail and does not
- # perfom the detach operation but in cleanup from
+ # perform the detach operation but in cleanup from
# self.attach_volume() it will try to detach the server
# after unrescue the server. Due to that we need to make
# server SSHable before it try to detach, more details are
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index c72b74e..e7e84d6 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -263,3 +263,22 @@
servers = self.servers_client.list_servers(
detail=True, **params)['servers']
self.assertNotEmpty(servers)
+
+
+class ServersListShow296Test(base.BaseV2ComputeTest):
+ """Test compute server with microversion >= than 2.96
+
+ This test tests the Server APIs response schema for 2.96 microversion.
+ No specific assert or behaviour verification is needed.
+ """
+
+ min_microversion = '2.96'
+ max_microversion = 'latest'
+
+ @decorators.idempotent_id('4eee1ffe-9e00-4c99-a431-0d3e0f323a8f')
+ def test_list_show_server_296(self):
+ server = self.create_test_server()
+ # Checking list API response schema.
+ self.servers_client.list_servers(detail=True)
+ # Checking show API response schema
+ self.servers_client.show_server(server['id'])
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 7ea8f09..e267b0f 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -465,6 +465,73 @@
self._boot_from_multiattach_volume()
@utils.services('image')
+ @decorators.idempotent_id('07eb6686-571c-45f0-9d96-446b120f1121')
+ def test_boot_with_multiattach_volume_direct_lun(self, boot=False):
+ image = self.images_client.show_image(CONF.compute.image_ref)
+ if image.get('hw_scsi_model') != 'virtio-scsi':
+ # NOTE(danms): Technically we don't need this to be virtio-scsi,
+ # but cirros (and other) test images won't see the device unless
+ # they have lsilogic drivers (which is the default). So use this
+ # as sort of the indication that the test should be enabled.
+ raise self.skipException('hw_scsi_model=virtio-scsi not set on image')
+ if not CONF.validation.run_validation:
+ raise self.skipException('validation is required for this test')
+
+ validation_resources = self.get_test_validation_resources(
+ self.os_primary)
+
+ volume = self._create_multiattach_volume(bootable=boot)
+ # Create an image-backed instance with the multi-attach volume as a
+ # block device with device_type=lun
+ bdm = [{'source_type': 'image',
+ 'destination_type': 'local',
+ 'uuid': CONF.compute.image_ref,
+ 'boot_index': 0},
+ {'uuid': volume['id'],
+ 'source_type': 'volume',
+ 'destination_type': 'volume',
+ 'device_type': 'lun',
+ 'disk_bus': 'scsi'}]
+
+ if boot:
+ # If we're booting from it, we don't need the local-from-image
+ # disk, but we need the volume to have a boot_index
+ bdm.pop(0)
+ bdm[0]['boot_index'] = 0
+
+ server = self.create_test_server(
+ validatable=True,
+ validation_resources=validation_resources,
+ block_device_mapping_v2=bdm, wait_until='SSHABLE')
+
+ # Assert the volume is attached to the server.
+ attachments = self.servers_client.list_volume_attachments(
+ server['id'])['volumeAttachments']
+ self.assertEqual(1, len(attachments))
+ self.assertEqual(volume['id'], attachments[0]['volumeId'])
+
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server, validation_resources),
+ self.image_ssh_user,
+ self.image_ssh_password,
+ validation_resources['keypair']['private_key'],
+ server=server,
+ servers_client=self.servers_client)
+
+ # Assert the volume appears as a SCSI device
+ command = 'lsblk -S'
+ blks = linux_client.exec_command(command).strip()
+ self.assertIn('\nsda ', blks)
+
+ self.servers_client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.servers_client, server['id'])
+
+ @utils.services('image')
+ @decorators.idempotent_id('bfe61d6e-767a-4f93-9de8-054355536475')
+ def test_boot_from_multiattach_volume_direct_lun(self, boot=False):
+ self.test_boot_with_multiattach_volume_direct_lun(boot=True)
+
+ @utils.services('image')
@decorators.idempotent_id('885ac48a-2d7a-40c5-ae8b-1993882d724c')
@testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
'Snapshotting is not available.')
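`test_boot_with_multiattach_volume_direct_lun` above reuses one block-device list for both variants: when booting from the volume it drops the image-backed root disk and promotes the volume to boot index 0. A minimal sketch of that transformation (the helper name is illustrative; the field names follow Nova's `block_device_mapping_v2` API):

```python
def build_bdm(image_ref, volume_id, boot_from_volume=False):
    """Build a block_device_mapping_v2 list attaching the multiattach
    volume as a SCSI LUN; optionally boot from the volume instead of
    an image-backed local disk."""
    bdm = [{'source_type': 'image',
            'destination_type': 'local',
            'uuid': image_ref,
            'boot_index': 0},
           {'uuid': volume_id,
            'source_type': 'volume',
            'destination_type': 'volume',
            'device_type': 'lun',
            'disk_bus': 'scsi'}]
    if boot_from_volume:
        # Drop the image-backed local disk and make the volume bootable.
        bdm.pop(0)
        bdm[0]['boot_index'] = 0
    return bdm
```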
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
deleted file mode 100644
index 20d023b..0000000
--- a/tempest/api/identity/admin/v2/test_endpoints.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-class EndPointsTestJSON(base.BaseIdentityV2AdminTest):
- """Test keystone v2 endpoints"""
-
- @classmethod
- def resource_setup(cls):
- super(EndPointsTestJSON, cls).resource_setup()
- s_name = data_utils.rand_name(
- name='service', prefix=CONF.resource_name_prefix)
- s_type = data_utils.rand_name(
- name='type', prefix=CONF.resource_name_prefix)
- s_description = data_utils.rand_name(
- name='description', prefix=CONF.resource_name_prefix)
- service_data = cls.services_client.create_service(
- name=s_name, type=s_type,
- description=s_description)['OS-KSADM:service']
- cls.addClassResourceCleanup(cls.services_client.delete_service,
- service_data['id'])
- cls.service_id = service_data['id']
- # Create endpoints so as to use for LIST and GET test cases
- cls.setup_endpoints = list()
- for _ in range(2):
- region = data_utils.rand_name(
- name='region', prefix=CONF.resource_name_prefix)
- url = data_utils.rand_url()
- endpoint = cls.endpoints_client.create_endpoint(
- service_id=cls.service_id,
- region=region,
- publicurl=url,
- adminurl=url,
- internalurl=url)['endpoint']
- cls.addClassResourceCleanup(cls.endpoints_client.delete_endpoint,
- endpoint['id'])
- # list_endpoints() will return 'enabled' field
- endpoint['enabled'] = True
- cls.setup_endpoints.append(endpoint)
-
- @decorators.idempotent_id('11f590eb-59d8-4067-8b2b-980c7f387f51')
- def test_list_endpoints(self):
- """Test listing keystone endpoints"""
- # Get a list of endpoints
- fetched_endpoints = self.endpoints_client.list_endpoints()['endpoints']
- # Asserting LIST endpoints
- missing_endpoints =\
- [e for e in self.setup_endpoints if e not in fetched_endpoints]
- self.assertEmpty(missing_endpoints,
- "Failed to find endpoint %s in fetched list" %
- ', '.join(str(e) for e in missing_endpoints))
-
- @decorators.idempotent_id('9974530a-aa28-4362-8403-f06db02b26c1')
- def test_create_list_delete_endpoint(self):
- """Test creating, listing and deleting a keystone endpoint"""
- region = data_utils.rand_name(
- name='region', prefix=CONF.resource_name_prefix)
- url = data_utils.rand_url()
- endpoint = self.endpoints_client.create_endpoint(
- service_id=self.service_id,
- region=region,
- publicurl=url,
- adminurl=url,
- internalurl=url)['endpoint']
- # Asserting Create Endpoint response body
- self.assertIn('id', endpoint)
- self.assertEqual(region, endpoint['region'])
- self.assertEqual(url, endpoint['publicurl'])
- # Checking if created endpoint is present in the list of endpoints
- fetched_endpoints = self.endpoints_client.list_endpoints()['endpoints']
- fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
- self.assertIn(endpoint['id'], fetched_endpoints_id)
- # Deleting the endpoint created in this method
- self.endpoints_client.delete_endpoint(endpoint['id'])
- # Checking whether endpoint is deleted successfully
- fetched_endpoints = self.endpoints_client.list_endpoints()['endpoints']
- fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
- self.assertNotIn(endpoint['id'], fetched_endpoints_id)
diff --git a/tempest/api/identity/admin/v2/test_roles.py b/tempest/api/identity/admin/v2/test_roles.py
deleted file mode 100644
index 6d384ab..0000000
--- a/tempest/api/identity/admin/v2/test_roles.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib.common.utils import test_utils
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-class RolesTestJSON(base.BaseIdentityV2AdminTest):
-
- @classmethod
- def resource_setup(cls):
- super(RolesTestJSON, cls).resource_setup()
- cls.roles = list()
- for _ in range(5):
- role_name = data_utils.rand_name(
- name='role', prefix=CONF.resource_name_prefix)
- role = cls.roles_client.create_role(name=role_name)['role']
- cls.addClassResourceCleanup(
- test_utils.call_and_ignore_notfound_exc,
- cls.roles_client.delete_role, role['id'])
- cls.roles.append(role)
-
- def _get_role_params(self):
- user = self.setup_test_user()
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- role = self.setup_test_role()
- return (user, tenant, role)
-
- def assert_role_in_role_list(self, role, roles):
- found = False
- for user_role in roles:
- if user_role['id'] == role['id']:
- found = True
- self.assertTrue(found, "assigned role was not in list")
-
- @decorators.idempotent_id('75d9593f-50b7-4fcf-bd64-e3fb4a278e23')
- def test_list_roles(self):
- """Return a list of all roles."""
- body = self.roles_client.list_roles()['roles']
- found = [role for role in body if role in self.roles]
- self.assertNotEmpty(found)
- self.assertEqual(len(found), len(self.roles))
-
- @decorators.idempotent_id('c62d909d-6c21-48c0-ae40-0a0760e6db5e')
- def test_role_create_delete(self):
- """Role should be created, verified, and deleted."""
- role_name = data_utils.rand_name(
- name='role-test', prefix=CONF.resource_name_prefix)
- body = self.roles_client.create_role(name=role_name)['role']
- self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- self.roles_client.delete_role, body['id'])
- self.assertEqual(role_name, body['name'])
-
- body = self.roles_client.list_roles()['roles']
- found = [role for role in body if role['name'] == role_name]
- self.assertNotEmpty(found)
-
- body = self.roles_client.delete_role(found[0]['id'])
-
- body = self.roles_client.list_roles()['roles']
- found = [role for role in body if role['name'] == role_name]
- self.assertEmpty(found)
-
- @decorators.idempotent_id('db6870bd-a6ed-43be-a9b1-2f10a5c9994f')
- def test_get_role_by_id(self):
- """Get a role by its id."""
- role = self.setup_test_role()
- role_id = role['id']
- role_name = role['name']
- body = self.roles_client.show_role(role_id)['role']
- self.assertEqual(role_id, body['id'])
- self.assertEqual(role_name, body['name'])
-
- @decorators.idempotent_id('0146f675-ffbd-4208-b3a4-60eb628dbc5e')
- def test_assign_user_role(self):
- """Assign a role to a user on a tenant."""
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- roles = self.roles_client.list_user_roles_on_project(
- tenant['id'], user['id'])['roles']
- self.assert_role_in_role_list(role, roles)
-
- @decorators.idempotent_id('f0b9292c-d3ba-4082-aa6c-440489beef69')
- def test_remove_user_role(self):
- """Remove a role assigned to a user on a tenant."""
- (user, tenant, role) = self._get_role_params()
- user_role = self.roles_client.create_user_role_on_project(
- tenant['id'], user['id'], role['id'])['role']
- self.roles_client.delete_role_from_user_on_project(tenant['id'],
- user['id'],
- user_role['id'])
-
- @decorators.idempotent_id('262e1e3e-ed71-4edd-a0e5-d64e83d66d05')
- def test_list_user_roles(self):
- """List roles assigned to a user on tenant."""
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- roles = self.roles_client.list_user_roles_on_project(
- tenant['id'], user['id'])['roles']
- self.assert_role_in_role_list(role, roles)
diff --git a/tempest/api/identity/admin/v2/test_roles_negative.py b/tempest/api/identity/admin/v2/test_roles_negative.py
deleted file mode 100644
index 0f0466e..0000000
--- a/tempest/api/identity/admin/v2/test_roles_negative.py
+++ /dev/null
@@ -1,296 +0,0 @@
-# Copyright 2013 Huawei Technologies Co.,LTD.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-CONF = config.CONF
-
-
-class RolesNegativeTestJSON(base.BaseIdentityV2AdminTest):
- """Negative tests of keystone roles via v2 API"""
-
- def _get_role_params(self):
- user = self.setup_test_user()
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- role = self.setup_test_role()
- return (user, tenant, role)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('d5d5f1df-f8ca-4de0-b2ef-259c1cc67025')
- def test_list_roles_by_unauthorized_user(self):
- """Test Non-admin user should not be able to list roles via v2 API"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_roles_client.list_roles)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('11a3c7da-df6c-40c2-abc2-badd682edf9f')
- def test_list_roles_request_without_token(self):
- """Test listing roles without a valid token via v2 API should fail"""
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized, self.roles_client.list_roles)
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('c0b89e56-accc-4c73-85f8-9c0f866104c1')
- def test_role_create_blank_name(self):
- """Test creating a role with a blank name via v2 API is not allowed"""
- self.assertRaises(lib_exc.BadRequest, self.roles_client.create_role,
- name='')
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('585c8998-a8a4-4641-a5dd-abef7a8ced00')
- def test_create_role_by_unauthorized_user(self):
- """Test non-admin user should not be able to create role via v2 API"""
- role_name = data_utils.rand_name(
- name='role', prefix=CONF.resource_name_prefix)
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_roles_client.create_role,
- name=role_name)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('a7edd17a-e34a-4aab-8bb7-fa6f498645b8')
- def test_create_role_request_without_token(self):
- """Test creating role without a valid token via v2 API should fail"""
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- role_name = data_utils.rand_name(
- name='role', prefix=CONF.resource_name_prefix)
- self.assertRaises(lib_exc.Unauthorized,
- self.roles_client.create_role, name=role_name)
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('c0cde2c8-81c1-4bb0-8fe2-cf615a3547a8')
- def test_role_create_duplicate(self):
- """Test role names should be unique via v2 API"""
- role_name = data_utils.rand_name(
- name='role-dup', prefix=CONF.resource_name_prefix)
- body = self.roles_client.create_role(name=role_name)['role']
- role1_id = body.get('id')
- self.addCleanup(self.roles_client.delete_role, role1_id)
- self.assertRaises(lib_exc.Conflict, self.roles_client.create_role,
- name=role_name)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('15347635-b5b1-4a87-a280-deb2bd6d865e')
- def test_delete_role_by_unauthorized_user(self):
- """Test non-admin user should not be able to delete role via v2 API"""
- role_name = data_utils.rand_name(
- name='role', prefix=CONF.resource_name_prefix)
- body = self.roles_client.create_role(name=role_name)['role']
- self.addCleanup(self.roles_client.delete_role, body['id'])
- role_id = body.get('id')
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_roles_client.delete_role, role_id)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('44b60b20-70de-4dac-beaf-a3fc2650a16b')
- def test_delete_role_request_without_token(self):
- """Test deleting role without a valid token via v2 API should fail"""
- role_name = data_utils.rand_name(
- name='role', prefix=CONF.resource_name_prefix)
- body = self.roles_client.create_role(name=role_name)['role']
- self.addCleanup(self.roles_client.delete_role, body['id'])
- role_id = body.get('id')
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized,
- self.roles_client.delete_role,
- role_id)
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('38373691-8551-453a-b074-4260ad8298ef')
- def test_delete_role_non_existent(self):
- """Test deleting a non existent role via v2 API should fail"""
- non_existent_role = data_utils.rand_uuid_hex()
- self.assertRaises(lib_exc.NotFound, self.roles_client.delete_role,
- non_existent_role)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('391df5cf-3ec3-46c9-bbe5-5cb58dd4dc41')
- def test_assign_user_role_by_unauthorized_user(self):
- """Test non-admin user assigning a role to user via v2 API
-
- Non-admin user should not be authorized to assign a role to user via
- v2 API.
- """
- (user, tenant, role) = self._get_role_params()
- self.assertRaises(
- lib_exc.Forbidden,
- self.non_admin_roles_client.create_user_role_on_project,
- tenant['id'], user['id'], role['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('f0d2683c-5603-4aee-95d7-21420e87cfd8')
- def test_assign_user_role_request_without_token(self):
- """Test assigning a role to a user without a valid token via v2 API
-
- Assigning a role to a user without a valid token via v2 API should
- fail.
- """
- (user, tenant, role) = self._get_role_params()
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(
- lib_exc.Unauthorized,
- self.roles_client.create_user_role_on_project, tenant['id'],
- user['id'], role['id'])
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('99b297f6-2b5d-47c7-97a9-8b6bb4f91042')
- def test_assign_user_role_for_non_existent_role(self):
- """Test assigning a non existent role to user via v2 API
-
- Assigning a non existent role to user via v2 API should fail.
- """
- (user, tenant, _) = self._get_role_params()
- non_existent_role = data_utils.rand_uuid_hex()
- self.assertRaises(lib_exc.NotFound,
- self.roles_client.create_user_role_on_project,
- tenant['id'], user['id'], non_existent_role)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('b2285aaa-9e76-4704-93a9-7a8acd0a6c8f')
- def test_assign_user_role_for_non_existent_tenant(self):
- """Test assigning a role on a non existent tenant via v2 API
-
- Assigning a role on a non existent tenant via v2 API should fail.
- """
- (user, _, role) = self._get_role_params()
- non_existent_tenant = data_utils.rand_uuid_hex()
- self.assertRaises(lib_exc.NotFound,
- self.roles_client.create_user_role_on_project,
- non_existent_tenant, user['id'], role['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('5c3132cd-c4c8-4402-b5ea-71eb44e97793')
- def test_assign_duplicate_user_role(self):
- """Test duplicate user role should not get assigned via v2 API"""
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- self.assertRaises(lib_exc.Conflict,
- self.roles_client.create_user_role_on_project,
- tenant['id'], user['id'], role['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('d0537987-0977-448f-a435-904c15de7298')
- def test_remove_user_role_by_unauthorized_user(self):
- """Test non-admin user removing a user's role via v2 API
-
- Non-admin user should not be authorized to remove a user's role via
- v2 API
- """
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- self.assertRaises(
- lib_exc.Forbidden,
- self.non_admin_roles_client.delete_role_from_user_on_project,
- tenant['id'], user['id'], role['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('cac81cf4-c1d2-47dc-90d3-f2b7eb572286')
- def test_remove_user_role_request_without_token(self):
- """Test removing a user's role without a valid token via v2 API
-
- Removing a user's role without a valid token via v2 API should fail.
- """
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized,
- self.roles_client.delete_role_from_user_on_project,
- tenant['id'], user['id'], role['id'])
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('ab32d759-cd16-41f1-a86e-44405fa9f6d2')
- def test_remove_user_role_non_existent_role(self):
- """Test deleting a non existent role from a user via v2 API
-
- Deleting a non existent role from a user via v2 API should fail.
- """
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- non_existent_role = data_utils.rand_uuid_hex()
- self.assertRaises(lib_exc.NotFound,
- self.roles_client.delete_role_from_user_on_project,
- tenant['id'], user['id'], non_existent_role)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('67a679ec-03dd-4551-bbfc-d1c93284f023')
- def test_remove_user_role_non_existent_tenant(self):
- """Test removing a role from a non existent tenant via v2 API
-
- Removing a role from a non existent tenant via v2 API should fail.
- """
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- non_existent_tenant = data_utils.rand_uuid_hex()
- self.assertRaises(lib_exc.NotFound,
- self.roles_client.delete_role_from_user_on_project,
- non_existent_tenant, user['id'], role['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('7391ab4c-06f3-477a-a64a-c8e55ce89837')
- def test_list_user_roles_by_unauthorized_user(self):
- """Test non-admin user listing a user's roles via v2 API
-
- Non-admin user should not be authorized to list a user's roles via v2
- API.
- """
- (user, tenant, role) = self._get_role_params()
- self.roles_client.create_user_role_on_project(tenant['id'],
- user['id'],
- role['id'])
- self.assertRaises(
- lib_exc.Forbidden,
- self.non_admin_roles_client.list_user_roles_on_project,
- tenant['id'], user['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('682adfb2-fd5f-4b0a-a9ca-322e9bebb907')
- def test_list_user_roles_request_without_token(self):
- """Test listing user's roles without a valid token via v2 API
-
- Listing user's roles without a valid token via v2 API should fail
- """
- (user, tenant, _) = self._get_role_params()
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- try:
- self.assertRaises(lib_exc.Unauthorized,
- self.roles_client.list_user_roles_on_project,
- tenant['id'],
- user['id'])
- finally:
- self.client.auth_provider.clear_auth()
diff --git a/tempest/api/identity/admin/v2/test_services.py b/tempest/api/identity/admin/v2/test_services.py
deleted file mode 100644
index 0e5d378..0000000
--- a/tempest/api/identity/admin/v2/test_services.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-CONF = config.CONF
-
-
-class ServicesTestJSON(base.BaseIdentityV2AdminTest):
- """Test identity services via v2 API"""
-
- def _del_service(self, service_id):
- # Deleting the service created in this method
- self.services_client.delete_service(service_id)
- # Checking whether service is deleted successfully
- self.assertRaises(lib_exc.NotFound, self.services_client.show_service,
- service_id)
-
- @decorators.idempotent_id('84521085-c6e6-491c-9a08-ec9f70f90110')
- def test_create_get_delete_service(self):
- """Test verifies the identity service create/get/delete via v2 API"""
- # GET Service
- # Creating a Service
- name = data_utils.rand_name(
- name='service', prefix=CONF.resource_name_prefix)
- s_type = data_utils.rand_name(
- name='type', prefix=CONF.resource_name_prefix)
- description = data_utils.rand_name(
- name='description', prefix=CONF.resource_name_prefix)
- service_data = self.services_client.create_service(
- name=name, type=s_type,
- description=description)['OS-KSADM:service']
- self.assertIsNotNone(service_data['id'])
- self.addCleanup(self._del_service, service_data['id'])
- # Verifying response body of create service
- self.assertIn('name', service_data)
- self.assertEqual(name, service_data['name'])
- self.assertIn('type', service_data)
- self.assertEqual(s_type, service_data['type'])
- self.assertIn('description', service_data)
- self.assertEqual(description, service_data['description'])
- # Get service
- fetched_service = (
- self.services_client.show_service(service_data['id'])
- ['OS-KSADM:service'])
- # verifying the existence of service created
- self.assertIn('id', fetched_service)
- self.assertEqual(fetched_service['id'], service_data['id'])
- self.assertIn('name', fetched_service)
- self.assertEqual(fetched_service['name'], service_data['name'])
- self.assertIn('type', fetched_service)
- self.assertEqual(fetched_service['type'], service_data['type'])
- self.assertIn('description', fetched_service)
- self.assertEqual(fetched_service['description'],
- service_data['description'])
-
- @decorators.idempotent_id('5d3252c8-e555-494b-a6c8-e11d7335da42')
- def test_create_service_without_description(self):
- """Test creating identity service without description via v2 API
-
- Create a service only with name and type.
- """
- name = data_utils.rand_name(
- name='service', prefix=CONF.resource_name_prefix)
- s_type = data_utils.rand_name(
- name='type', prefix=CONF.resource_name_prefix)
- service = self.services_client.create_service(
- name=name, type=s_type)['OS-KSADM:service']
- self.assertIn('id', service)
- self.addCleanup(self._del_service, service['id'])
- self.assertIn('name', service)
- self.assertEqual(name, service['name'])
- self.assertIn('type', service)
- self.assertEqual(s_type, service['type'])
-
- @decorators.attr(type='smoke')
- @decorators.idempotent_id('34ea6489-012d-4a86-9038-1287cadd5eca')
- def test_list_services(self):
- """Test Create/List/Verify/Delete of identity service via v2 API"""
- services = []
- for _ in range(3):
- name = data_utils.rand_name(
- name='service', prefix=CONF.resource_name_prefix)
- s_type = data_utils.rand_name(
- name='type', prefix=CONF.resource_name_prefix)
- description = data_utils.rand_name(
- name='description', prefix=CONF.resource_name_prefix)
-
- service = self.services_client.create_service(
- name=name, type=s_type,
- description=description)['OS-KSADM:service']
- self.addCleanup(self.services_client.delete_service, service['id'])
- services.append(service)
- service_ids = [svc['id'] for svc in services]
-
- # List and Verify Services
- body = self.services_client.list_services()['OS-KSADM:services']
- found = [serv for serv in body if serv['id'] in service_ids]
- self.assertEqual(len(found), len(services), 'Services not found')
diff --git a/tempest/api/identity/admin/v2/test_tenant_negative.py b/tempest/api/identity/admin/v2/test_tenant_negative.py
deleted file mode 100644
index 4c7c44c..0000000
--- a/tempest/api/identity/admin/v2/test_tenant_negative.py
+++ /dev/null
@@ -1,164 +0,0 @@
-# Copyright 2013 Huawei Technologies Co.,LTD.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-CONF = config.CONF
-
-
-class TenantsNegativeTestJSON(base.BaseIdentityV2AdminTest):
- """Negative tests of keystone tenants via v2 API"""
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('ca9bb202-63dd-4240-8a07-8ef9c19c04bb')
- def test_list_tenants_by_unauthorized_user(self):
- """Test Non-admin should not be able to list tenants via v2 API"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_tenants_client.list_tenants)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('df33926c-1c96-4d8d-a762-79cc6b0c3cf4')
- def test_list_tenant_request_without_token(self):
- """Test listing tenants without a valid token via v2 API
-
- Listing tenants without a valid token via v2 API should fail.
- """
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized,
- self.tenants_client.list_tenants)
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('162ba316-f18b-4987-8c0c-fd9140cd63ed')
- def test_tenant_delete_by_unauthorized_user(self):
- """Test non-admin should not be able to delete a tenant via v2 API"""
- tenant = self.setup_test_tenant()
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_tenants_client.delete_tenant,
- tenant['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('e450db62-2e9d-418f-893a-54772d6386b1')
- def test_tenant_delete_request_without_token(self):
- """Test deleting a tenant without a valid token via v2 API
-
- Deleting a tenant without a valid token via v2 API should fail.
- """
- tenant = self.setup_test_tenant()
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized,
- self.tenants_client.delete_tenant,
- tenant['id'])
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('9c9a2aed-6e3c-467a-8f5c-89da9d1b516b')
- def test_delete_non_existent_tenant(self):
- """Test deleting a non existent tenant via v2 API should fail"""
- self.assertRaises(lib_exc.NotFound, self.tenants_client.delete_tenant,
- data_utils.rand_uuid_hex())
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('af16f44b-a849-46cb-9f13-a751c388f739')
- def test_tenant_create_duplicate(self):
- """Test tenant names should be unique via v2 API"""
- tenant_name = data_utils.rand_name(
- name='tenant', prefix=CONF.resource_name_prefix)
- self.setup_test_tenant(name=tenant_name)
- self.assertRaises(lib_exc.Conflict, self.tenants_client.create_tenant,
- name=tenant_name)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('d26b278a-6389-4702-8d6e-5980d80137e0')
- def test_create_tenant_by_unauthorized_user(self):
- """Test non-admin user creating a tenant via v2 API
-
- Non-admin user should not be authorized to create a tenant via v2 API.
- """
- tenant_name = data_utils.rand_name(
- name='tenant', prefix=CONF.resource_name_prefix)
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_tenants_client.create_tenant,
- name=tenant_name)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('a3ee9d7e-6920-4dd5-9321-d4b2b7f0a638')
- def test_create_tenant_request_without_token(self):
- """Test creating tenant without a token via v2 API is not allowed"""
- tenant_name = data_utils.rand_name(
- name='tenant', prefix=CONF.resource_name_prefix)
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized,
- self.tenants_client.create_tenant,
- name=tenant_name)
- self.client.auth_provider.clear_auth()
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('5a2e4ca9-b0c0-486c-9c48-64a94fba2395')
- def test_create_tenant_with_empty_name(self):
- """Test tenant name should not be empty via v2 API"""
- self.assertRaises(lib_exc.BadRequest,
- self.tenants_client.create_tenant,
- name='')
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('2ff18d1e-dfe3-4359-9dc3-abf582c196b9')
- def test_create_tenants_name_length_over_64(self):
- """Test tenant name length should not exceed 64 via v2 API"""
- tenant_name = 'a' * 65
- self.assertRaises(lib_exc.BadRequest,
- self.tenants_client.create_tenant,
- name=tenant_name)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('bd20dc2a-9557-4db7-b755-f48d952ad706')
- def test_update_non_existent_tenant(self):
- """Test updating a non existent tenant via v2 API should fail"""
- self.assertRaises(lib_exc.NotFound, self.tenants_client.update_tenant,
- data_utils.rand_uuid_hex())
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('41704dc5-c5f7-4f79-abfa-76e6fedc570b')
- def test_tenant_update_by_unauthorized_user(self):
- """Test non-admin user updating a tenant via v2 API
-
- Non-admin user should not be able to update a tenant via v2 API
- """
- tenant = self.setup_test_tenant()
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_tenants_client.update_tenant,
- tenant['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('7a421573-72c7-4c22-a98e-ce539219c657')
- def test_tenant_update_request_without_token(self):
- """Test updating a tenant without a valid token via v2 API
-
- Updating a tenant without a valid token via v2 API should fail
- """
- tenant = self.setup_test_tenant()
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
- self.assertRaises(lib_exc.Unauthorized,
- self.tenants_client.update_tenant,
- tenant['id'])
- self.client.auth_provider.clear_auth()
diff --git a/tempest/api/identity/admin/v2/test_tenants.py b/tempest/api/identity/admin/v2/test_tenants.py
deleted file mode 100644
index 4f674a8..0000000
--- a/tempest/api/identity/admin/v2/test_tenants.py
+++ /dev/null
@@ -1,157 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-class TenantsTestJSON(base.BaseIdentityV2AdminTest):
- """Test identity tenants via v2 API"""
-
- @decorators.idempotent_id('16c6e05c-6112-4b0e-b83f-5e43f221b6b0')
- def test_tenant_list_delete(self):
- """Test listing and deleting tenants via v2 API
-
- Create several tenants and delete them
- """
- tenants = []
- for _ in range(3):
- tenant = self.setup_test_tenant()
- tenants.append(tenant)
- tenant_ids = [tn['id'] for tn in tenants]
- body = self.tenants_client.list_tenants()['tenants']
- found = [t for t in body if t['id'] in tenant_ids]
- self.assertEqual(len(found), len(tenants), 'Tenants not created')
-
- for tenant in tenants:
- self.tenants_client.delete_tenant(tenant['id'])
-
- body = self.tenants_client.list_tenants()['tenants']
- found = [tenant for tenant in body if tenant['id'] in tenant_ids]
- self.assertEmpty(found, 'Tenants failed to delete')
-
- @decorators.idempotent_id('d25e9f24-1310-4d29-b61b-d91299c21d6d')
- def test_tenant_create_with_description(self):
- """Test creating tenant with a description via v2 API"""
- tenant_desc = data_utils.rand_name(
- name='desc', prefix=CONF.resource_name_prefix)
- tenant = self.setup_test_tenant(description=tenant_desc)
- tenant_id = tenant['id']
- desc1 = tenant['description']
- self.assertEqual(desc1, tenant_desc, 'Description should have '
- 'been sent in response for create')
- body = self.tenants_client.show_tenant(tenant_id)['tenant']
- desc2 = body['description']
- self.assertEqual(desc2, tenant_desc, 'Description does not appear '
- 'to be set')
- self.tenants_client.delete_tenant(tenant_id)
-
- @decorators.idempotent_id('670bdddc-1cd7-41c7-b8e2-751cfb67df50')
- def test_tenant_create_enabled(self):
- """Test creating a tenant that is enabled via v2 API"""
- tenant = self.setup_test_tenant(enabled=True)
- tenant_id = tenant['id']
- self.assertTrue(tenant['enabled'], 'Enable should be True in response')
- body = self.tenants_client.show_tenant(tenant_id)['tenant']
- self.assertTrue(body['enabled'], 'Enable should be True in lookup')
- self.tenants_client.delete_tenant(tenant_id)
-
- @decorators.idempotent_id('3be22093-b30f-499d-b772-38340e5e16fb')
- def test_tenant_create_not_enabled(self):
- """Test creating a tenant that is not enabled via v2 API"""
- tenant = self.setup_test_tenant(enabled=False)
- tenant_id = tenant['id']
- self.assertFalse(tenant['enabled'],
- 'Enable should be False in response')
- body = self.tenants_client.show_tenant(tenant_id)['tenant']
- self.assertFalse(body['enabled'],
- 'Enable should be False in lookup')
- self.tenants_client.delete_tenant(tenant_id)
-
- @decorators.idempotent_id('781f2266-d128-47f3-8bdb-f70970add238')
- def test_tenant_update_name(self):
- """Test updating name attribute of a tenant via v2 API"""
- t_name1 = data_utils.rand_name(
- name='tenant', prefix=CONF.resource_name_prefix)
- tenant = self.setup_test_tenant(name=t_name1)
- t_id = tenant['id']
- resp1_name = tenant['name']
-
- t_name2 = data_utils.rand_name(
- name='tenant2', prefix=CONF.resource_name_prefix)
- body = self.tenants_client.update_tenant(t_id, name=t_name2)['tenant']
- resp2_name = body['name']
- self.assertNotEqual(resp1_name, resp2_name)
-
- body = self.tenants_client.show_tenant(t_id)['tenant']
- resp3_name = body['name']
-
- self.assertNotEqual(resp1_name, resp3_name)
- self.assertEqual(t_name1, resp1_name)
- self.assertEqual(resp2_name, resp3_name)
-
- self.tenants_client.delete_tenant(t_id)
-
- @decorators.idempotent_id('859fcfe1-3a03-41ef-86f9-b19a47d1cd87')
- def test_tenant_update_desc(self):
- """Test updating description attribute of a tenant via v2 API"""
- t_desc = data_utils.rand_name(
- name='desc', prefix=CONF.resource_name_prefix)
- tenant = self.setup_test_tenant(description=t_desc)
- t_id = tenant['id']
- resp1_desc = tenant['description']
-
- t_desc2 = data_utils.rand_name(
- name='desc2', prefix=CONF.resource_name_prefix)
- body = self.tenants_client.update_tenant(t_id, description=t_desc2)
- updated_tenant = body['tenant']
- resp2_desc = updated_tenant['description']
- self.assertNotEqual(resp1_desc, resp2_desc)
-
- body = self.tenants_client.show_tenant(t_id)['tenant']
- resp3_desc = body['description']
-
- self.assertNotEqual(resp1_desc, resp3_desc)
- self.assertEqual(t_desc, resp1_desc)
- self.assertEqual(resp2_desc, resp3_desc)
-
- self.tenants_client.delete_tenant(t_id)
-
- @decorators.idempotent_id('8fc8981f-f12d-4c66-9972-2bdcf2bc2e1a')
- def test_tenant_update_enable(self):
- """Test updating the enabled attribute of a tenant via v2 API"""
- t_en = False
- tenant = self.setup_test_tenant(enabled=t_en)
- t_id = tenant['id']
- resp1_en = tenant['enabled']
-
- t_en2 = True
- body = self.tenants_client.update_tenant(t_id, enabled=t_en2)
- updated_tenant = body['tenant']
- resp2_en = updated_tenant['enabled']
- self.assertNotEqual(resp1_en, resp2_en)
-
- body = self.tenants_client.show_tenant(t_id)['tenant']
- resp3_en = body['enabled']
-
- self.assertNotEqual(resp1_en, resp3_en)
- self.assertFalse(tenant['enabled'])
- self.assertEqual(resp2_en, resp3_en)
-
- self.tenants_client.delete_tenant(t_id)
diff --git a/tempest/api/identity/admin/v2/test_tokens.py b/tempest/api/identity/admin/v2/test_tokens.py
deleted file mode 100644
index 78a2aad..0000000
--- a/tempest/api/identity/admin/v2/test_tokens.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright 2013 Huawei Technologies Co.,LTD.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-CONF = config.CONF
-
-
-class TokensTestJSON(base.BaseIdentityV2AdminTest):
- """Test keystone tokens via v2 API"""
-
- @decorators.idempotent_id('453ad4d5-e486-4b2f-be72-cffc8149e586')
- def test_create_check_get_delete_token(self):
- """Test getting create/check/get/delete token for user via v2 API"""
- # get a token by username and password
- user_name = data_utils.rand_name(
- name='user', prefix=CONF.resource_name_prefix)
- user_password = data_utils.rand_password()
- # first:create a tenant
- tenant = self.setup_test_tenant()
- # second:create a user
- user = self.create_test_user(name=user_name,
- password=user_password,
- tenantId=tenant['id'],
- email='')
- # then get a token for the user
- body = self.token_client.auth(user_name,
- user_password,
- tenant['name'])
- self.assertEqual(body['token']['tenant']['name'],
- tenant['name'])
- # Perform GET Token
- token_id = body['token']['id']
- self.client.check_token_existence(token_id)
- token_details = self.client.show_token(token_id)['access']
- self.assertEqual(token_id, token_details['token']['id'])
- self.assertEqual(user['id'], token_details['user']['id'])
- self.assertEqual(user_name, token_details['user']['name'])
- self.assertEqual(tenant['name'],
- token_details['token']['tenant']['name'])
- # then delete the token
- self.client.delete_token(token_id)
- self.assertRaises(lib_exc.NotFound,
- self.client.check_token_existence,
- token_id)
-
- @decorators.idempotent_id('25ba82ee-8a32-4ceb-8f50-8b8c71e8765e')
- def test_rescope_token(self):
- """Test an unscoped token can be requested via v2 API
-
- That token can be used to request a scoped token.
- """
-
- # Create a user.
- user_name = data_utils.rand_name(
- name='user', prefix=CONF.resource_name_prefix)
- user_password = data_utils.rand_password()
- tenant_id = None # No default tenant so will get unscoped token.
- user = self.create_test_user(name=user_name,
- password=user_password,
- tenantId=tenant_id,
- email='')
-
- # Create a couple tenants.
- tenant1_name = data_utils.rand_name(
- name='tenant', prefix=CONF.resource_name_prefix)
- tenant1 = self.setup_test_tenant(name=tenant1_name)
-
- tenant2_name = data_utils.rand_name(
- name='tenant', prefix=CONF.resource_name_prefix)
- tenant2 = self.setup_test_tenant(name=tenant2_name)
-
- # Create a role
- role = self.setup_test_role()
-
- # Grant the user the role on the tenants.
- self.roles_client.create_user_role_on_project(tenant1['id'],
- user['id'],
- role['id'])
-
- self.roles_client.create_user_role_on_project(tenant2['id'],
- user['id'],
- role['id'])
-
- # Get an unscoped token.
- body = self.token_client.auth(user_name, user_password)
-
- token_id = body['token']['id']
-
- # Use the unscoped token to get a token scoped to tenant1
- body = self.token_client.auth_token(token_id,
- tenant=tenant1_name)
-
- scoped_token_id = body['token']['id']
-
- # Revoke the scoped token
- self.client.delete_token(scoped_token_id)
-
- # Use the unscoped token to get a token scoped to tenant2
- body = self.token_client.auth_token(token_id,
- tenant=tenant2_name)
-
- @decorators.idempotent_id('ca3ea6f7-ed08-4a61-adbd-96906456ad31')
- def test_list_endpoints_for_token(self):
- """Test listing endpoints for token via v2 API"""
- tempest_services = ['keystone', 'nova', 'neutron', 'swift', 'cinder',
- 'neutron']
- # get a token for the user
- creds = self.os_primary.credentials
- username = creds.username
- password = creds.password
- tenant_name = creds.tenant_name
- token = self.token_client.auth(username,
- password,
- tenant_name)['token']
- endpoints = self.client.list_endpoints_for_token(
- token['id'])['endpoints']
- self.assertIsInstance(endpoints, list)
- # Store list of service names
- service_names = [e['name'] for e in endpoints]
- # Get the list of available services. Keystone is always available.
- available_services = [s[0] for s in list(
- CONF.service_available.items()) if s[1] is True] + ['keystone']
- # Verify that all available services are present.
- for service in tempest_services:
- if service in available_services:
- self.assertIn(service, service_names)
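The deleted endpoint test above filters `CONF.service_available` with an `items()` comprehension that indexes each pair as `s[0]`/`s[1]`. As a minimal sketch of the same filtering logic (a plain dict stands in for the real `CONF.service_available` option group, and tuple unpacking replaces the positional indexing):

```python
# Stand-in for CONF.service_available: feature flags keyed by service name.
service_available = {'nova': True, 'neutron': True, 'swift': False,
                     'cinder': True}

# Keep only the enabled services; keystone is always present, so the
# deleted test appended it unconditionally.
available_services = [name for name, enabled in service_available.items()
                      if enabled] + ['keystone']

assert 'swift' not in available_services
assert 'keystone' in available_services
```

Unpacking `name, enabled` reads more directly than `s[0]`/`s[1] is True`, which is how the removed test spelled the same check.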
diff --git a/tempest/api/identity/admin/v2/test_tokens_negative.py b/tempest/api/identity/admin/v2/test_tokens_negative.py
deleted file mode 100644
index f2e41ff..0000000
--- a/tempest/api/identity/admin/v2/test_tokens_negative.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright 2017 AT&T Corporation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-class TokensAdminTestNegative(base.BaseIdentityV2AdminTest):
- """Negative tests of keystone tokens via v2 API"""
-
- credentials = ['primary', 'admin', 'alt']
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('a0a0a600-4292-4364-99c5-922c834fdf05')
- def test_check_token_existence_negative(self):
- """Test checking other tenant's token existence via v2 API
-
- Checking other tenant's token existence via v2 API should fail.
- """
- creds = self.os_primary.credentials
- creds_alt = self.os_alt.credentials
- username = creds.username
- password = creds.password
- tenant_name = creds.tenant_name
- alt_tenant_name = creds_alt.tenant_name
- body = self.token_client.auth(username, password, tenant_name)
- self.assertRaises(lib_exc.Unauthorized,
- self.client.check_token_existence,
- body['token']['id'],
- belongsTo=alt_tenant_name)
diff --git a/tempest/api/identity/admin/v2/test_users.py b/tempest/api/identity/admin/v2/test_users.py
deleted file mode 100644
index 011419e..0000000
--- a/tempest/api/identity/admin/v2/test_users.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import time
-
-from testtools import matchers
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-
-CONF = config.CONF
-
-
-class UsersTestJSON(base.BaseIdentityV2AdminTest):
- """Test keystone users via v2 API"""
-
- @classmethod
- def resource_setup(cls):
- super(UsersTestJSON, cls).resource_setup()
- cls.alt_user = data_utils.rand_name(
- name='test_user', prefix=CONF.resource_name_prefix)
- cls.alt_email = cls.alt_user + '@testmail.tm'
-
- @decorators.attr(type='smoke')
- @decorators.idempotent_id('2d55a71e-da1d-4b43-9c03-d269fd93d905')
- def test_create_user(self):
- """Test creating a user via v2 API"""
- tenant = self.setup_test_tenant()
- user = self.create_test_user(name=self.alt_user, tenantId=tenant['id'])
- self.assertEqual(self.alt_user, user['name'])
-
- @decorators.idempotent_id('89d9fdb8-15c2-4304-a429-48715d0af33d')
- def test_create_user_with_enabled(self):
- """Test creating a user with enabled : False via v2 API"""
- tenant = self.setup_test_tenant()
- name = data_utils.rand_name(
- name='test_user', prefix=CONF.resource_name_prefix)
- user = self.create_test_user(name=name,
- tenantId=tenant['id'],
- email=self.alt_email,
- enabled=False)
- self.assertEqual(name, user['name'])
- self.assertEqual(False, user['enabled'])
- self.assertEqual(self.alt_email, user['email'])
-
- @decorators.idempotent_id('39d05857-e8a5-4ed4-ba83-0b52d3ab97ee')
- def test_update_user(self):
- """Test updating user attributes via v2 API"""
- tenant = self.setup_test_tenant()
- user = self.create_test_user(tenantId=tenant['id'])
-
- # Updating user details with new values
- u_name2 = data_utils.rand_name(
- name='user2', prefix=CONF.resource_name_prefix)
- u_email2 = u_name2 + '@testmail.tm'
- update_user = self.users_client.update_user(user['id'], name=u_name2,
- email=u_email2,
- enabled=False)['user']
- self.assertEqual(u_name2, update_user['name'])
- self.assertEqual(u_email2, update_user['email'])
- self.assertEqual(False, update_user['enabled'])
- # GET by id after updating
- updated_user = self.users_client.show_user(user['id'])['user']
- # Assert response body of GET after updating
- self.assertEqual(u_name2, updated_user['name'])
- self.assertEqual(u_email2, updated_user['email'])
- self.assertEqual(False, update_user['enabled'])
-
- @decorators.idempotent_id('29ed26f4-a74e-4425-9a85-fdb49fa269d2')
- def test_delete_user(self):
- """Test deleting a user via v2 API"""
- tenant = self.setup_test_tenant()
- user = self.create_test_user(tenantId=tenant['id'])
- self.users_client.delete_user(user['id'])
-
- @decorators.idempotent_id('aca696c3-d645-4f45-b728-63646045beb1')
- def test_user_authentication(self):
- """Test that valid user's token is authenticated via v2 API"""
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- # Get a token
- self.token_client.auth(user['name'],
- password,
- tenant['name'])
- # Re-auth
- self.token_client.auth(user['name'],
- password,
- tenant['name'])
-
- @decorators.idempotent_id('5d1fa498-4c2d-4732-a8fe-2b054598cfdd')
- def test_authentication_request_without_token(self):
- """Test authentication request without token via v2 API"""
- # Request for token authentication with a valid token in header
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- self.token_client.auth(user['name'],
- password,
- tenant['name'])
- # Get the token of the current client
- token = self.client.auth_provider.get_token()
- # Delete the token from database
- self.client.delete_token(token)
- # Re-auth
- self.token_client.auth(user['name'],
- password,
- tenant['name'])
- self.client.auth_provider.clear_auth()
-
- @decorators.idempotent_id('a149c02e-e5e0-4b89-809e-7e8faf33ccda')
- def test_get_users(self):
- """Test getting users via v2 API
-
- Get a list of users and find the test user
- """
- user = self.setup_test_user()
- users = self.users_client.list_users()['users']
- self.assertThat([u['name'] for u in users],
- matchers.Contains(user['name']),
- "Could not find %s" % user['name'])
-
- @decorators.idempotent_id('6e317209-383a-4bed-9f10-075b7c82c79a')
- def test_list_users_for_tenant(self):
- """Test returning a list of all users for a tenant via v2 API"""
- tenant = self.setup_test_tenant()
- user_ids = list()
- fetched_user_ids = list()
- user1 = self.create_test_user(tenantId=tenant['id'])
- user_ids.append(user1['id'])
- user2 = self.create_test_user(tenantId=tenant['id'])
- user_ids.append(user2['id'])
- # List of users for the respective tenant ID
- body = (self.tenants_client.list_tenant_users(tenant['id'])
- ['users'])
- for i in body:
- fetched_user_ids.append(i['id'])
- # verifying the user Id in the list
- missing_users =\
- [user for user in user_ids if user not in fetched_user_ids]
- self.assertEmpty(missing_users,
- "Failed to find user %s in fetched list" %
- ', '.join(m_user for m_user in missing_users))
-
- @decorators.idempotent_id('a8b54974-40e1-41c0-b812-50fc90827971')
- def test_list_users_with_roles_for_tenant(self):
- """Test listing users on tenant with roles assigned via v2 API"""
- user = self.setup_test_user()
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- role = self.setup_test_role()
- # Assigning roles to two users
- user_ids = list()
- fetched_user_ids = list()
- user_ids.append(user['id'])
- role = self.roles_client.create_user_role_on_project(
- tenant['id'], user['id'], role['id'])['role']
-
- second_user = self.create_test_user(tenantId=tenant['id'])
- user_ids.append(second_user['id'])
- role = self.roles_client.create_user_role_on_project(
- tenant['id'], second_user['id'], role['id'])['role']
- # List of users with roles for the respective tenant ID
- body = (self.tenants_client.list_tenant_users(tenant['id'])['users'])
- for i in body:
- fetched_user_ids.append(i['id'])
- # verifying the user Id in the list
- missing_users = [missing_user for missing_user in user_ids
- if missing_user not in fetched_user_ids]
- self.assertEmpty(missing_users,
- "Failed to find user %s in fetched list" %
- ', '.join(m_user for m_user in missing_users))
-
- @decorators.idempotent_id('1aeb25ac-6ec5-4d8b-97cb-7ac3567a989f')
- def test_update_user_password(self):
- """Test updating of user password via v2 API"""
- user = self.setup_test_user()
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- # Updating the user with new password
- new_pass = data_utils.rand_password()
- update_user = self.users_client.update_user_password(
- user['id'], password=new_pass)['user']
- self.assertEqual(update_user['id'], user['id'])
- # NOTE(morganfainberg): Fernet tokens are not subsecond aware and
- # Keystone should only be precise to the second. Sleep to ensure
- # we are passing the second boundary.
- time.sleep(1)
- # Validate the updated password through getting a token.
- body = self.token_client.auth(user['name'], new_pass,
- tenant['name'])
- self.assertIn('id', body['token'])
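The `time.sleep(1)` in the deleted `test_update_user_password` exists because, as the NOTE says, Fernet tokens are only precise to the second. A small illustration of why sub-second events cannot be ordered at that granularity (dates here are arbitrary examples):

```python
import datetime

# Fernet tokens record issuance only to whole-second precision, so two
# events inside the same second are indistinguishable by timestamp.
t1 = datetime.datetime(2024, 1, 1, 12, 0, 0, 250000)
t2 = datetime.datetime(2024, 1, 1, 12, 0, 0, 750000)

def truncate(t):
    """Drop sub-second precision, as Fernet timestamps effectively do."""
    return t.replace(microsecond=0)

assert truncate(t1) == truncate(t2)  # same second: cannot be ordered

# Sleeping past the second boundary guarantees a distinct timestamp,
# so the token issued after the password change is provably newer.
t3 = t2 + datetime.timedelta(seconds=1)
assert truncate(t3) > truncate(t2)
```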
diff --git a/tempest/api/identity/admin/v2/test_users_negative.py b/tempest/api/identity/admin/v2/test_users_negative.py
deleted file mode 100644
index 7ccd75c..0000000
--- a/tempest/api/identity/admin/v2/test_users_negative.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-CONF = config.CONF
-
-
-class UsersNegativeTestJSON(base.BaseIdentityV2AdminTest):
- """Negative tests of identity users via v2 API"""
-
- @classmethod
- def resource_setup(cls):
- super(UsersNegativeTestJSON, cls).resource_setup()
- cls.alt_user = data_utils.rand_name(
- 'test_user', prefix=CONF.resource_name_prefix)
- cls.alt_password = data_utils.rand_password()
- cls.alt_email = cls.alt_user + '@testmail.tm'
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('60a1f5fa-5744-4cdf-82bf-60b7de2d29a4')
- def test_create_user_by_unauthorized_user(self):
- """Non-admin should not be authorized to create a user via v2 API"""
- tenant = self.setup_test_tenant()
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_users_client.create_user,
- name=self.alt_user, password=self.alt_password,
- tenantId=tenant['id'],
- email=self.alt_email)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('d80d0c2f-4514-4d1e-806d-0930dfc5a187')
- def test_create_user_with_empty_name(self):
- """User with an empty name should not be created via v2 API"""
- tenant = self.setup_test_tenant()
- self.assertRaises(lib_exc.BadRequest, self.users_client.create_user,
- name='', password=self.alt_password,
- tenantId=tenant['id'],
- email=self.alt_email)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('7704b4f3-3b75-4b82-87cc-931d41c8f780')
- def test_create_user_with_name_length_over_255(self):
- """Length of user name should not exceed 255 via v2 API"""
- tenant = self.setup_test_tenant()
- self.assertRaises(lib_exc.BadRequest, self.users_client.create_user,
- name='a' * 256, password=self.alt_password,
- tenantId=tenant['id'],
- email=self.alt_email)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('57ae8558-120c-4723-9308-3751474e7ecf')
- def test_create_user_with_duplicate_name(self):
- """Duplicate user should not be created via v2 API"""
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- self.assertRaises(lib_exc.Conflict, self.users_client.create_user,
- name=user['name'],
- password=password,
- tenantId=tenant['id'],
- email=user['email'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('0132cc22-7c4f-42e1-9e50-ac6aad31d59a')
- def test_create_user_for_non_existent_tenant(self):
- """Creating a user in a non-existent tenant via v2 API should fail"""
- self.assertRaises(lib_exc.NotFound, self.users_client.create_user,
- name=self.alt_user,
- password=self.alt_password,
- tenantId='49ffgg99999',
- email=self.alt_email)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('55bbb103-d1ae-437b-989b-bcdf8175c1f4')
- def test_create_user_request_without_a_token(self):
- """Creating a user without a valid token via v2 API should fail"""
- tenant = self.setup_test_tenant()
- # Get the token of the current client
- token = self.client.auth_provider.get_token()
- # Delete the token from database
- self.client.delete_token(token)
-
- # Unset the token to allow further tests to generate a new token
- self.addCleanup(self.client.auth_provider.clear_auth)
-
- self.assertRaises(lib_exc.Unauthorized, self.users_client.create_user,
- name=self.alt_user, password=self.alt_password,
- tenantId=tenant['id'],
- email=self.alt_email)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('23a2f3da-4a1a-41da-abdd-632328a861ad')
- def test_create_user_with_enabled_non_bool(self):
- """Creating a user with invalid enabled para via v2 API should fail"""
- tenant = self.setup_test_tenant()
- name = data_utils.rand_name(
- 'test_user', prefix=CONF.resource_name_prefix)
- self.assertRaises(lib_exc.BadRequest, self.users_client.create_user,
- name=name, password=self.alt_password,
- tenantId=tenant['id'],
- email=self.alt_email, enabled=3)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('3d07e294-27a0-4144-b780-a2a1bf6fee19')
- def test_update_user_for_non_existent_user(self):
- """Updating a non-existent user via v2 API should fail"""
- user_name = data_utils.rand_name(
- 'user', prefix=CONF.resource_name_prefix)
- non_existent_id = data_utils.rand_uuid()
- self.assertRaises(lib_exc.NotFound, self.users_client.update_user,
- non_existent_id, name=user_name)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('3cc2a64b-83aa-4b02-88f0-d6ab737c4466')
- def test_update_user_request_without_a_token(self):
- """Updating a user without a valid token via v2 API should fail"""
-
- # Get the token of the current client
- token = self.client.auth_provider.get_token()
- # Delete the token from database
- self.client.delete_token(token)
-
- # Unset the token to allow further tests to generate a new token
- self.addCleanup(self.client.auth_provider.clear_auth)
-
- self.assertRaises(lib_exc.Unauthorized, self.users_client.update_user,
- self.alt_user)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('424868d5-18a7-43e1-8903-a64f95ee3aac')
- def test_update_user_by_unauthorized_user(self):
- """Non-admin should not be authorized to update user via v2 API"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_users_client.update_user,
- self.alt_user)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('d45195d5-33ed-41b9-a452-7d0d6a00f6e9')
- def test_delete_users_by_unauthorized_user(self):
- """Non-admin should not be authorized to delete a user via v2 API"""
- user = self.setup_test_user()
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_users_client.delete_user,
- user['id'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('7cc82f7e-9998-4f89-abae-23df36495867')
- def test_delete_non_existent_user(self):
- """Attempt to delete a non-existent user via v2 API should fail"""
- self.assertRaises(lib_exc.NotFound, self.users_client.delete_user,
- 'junk12345123')
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('57fe1df8-0aa7-46c0-ae9f-c2e785c7504a')
- def test_delete_user_request_without_a_token(self):
- """Deleting a user without a valid token via v2 API should fail"""
-
- # Get the token of the current client
- token = self.client.auth_provider.get_token()
- # Delete the token from database
- self.client.delete_token(token)
-
- # Unset the token to allow further tests to generate a new token
- self.addCleanup(self.client.auth_provider.clear_auth)
-
- self.assertRaises(lib_exc.Unauthorized, self.users_client.delete_user,
- self.alt_user)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('593a4981-f6d4-460a-99a1-57a78bf20829')
- def test_authentication_for_disabled_user(self):
- """Disabled user's token should not get authenticated via v2 API"""
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- self.disable_user(user['name'])
- self.assertRaises(lib_exc.Unauthorized, self.token_client.auth,
- user['name'],
- password,
- tenant['name'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('440a7a8d-9328-4b7b-83e0-d717010495e4')
- def test_authentication_when_tenant_is_disabled(self):
- """Test User's token for a disabled tenant via v2 API
-
- User's token for a disabled tenant should not be authenticated via
- v2 API.
- """
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- self.disable_tenant(tenant['name'])
- self.assertRaises(lib_exc.Unauthorized, self.token_client.auth,
- user['name'],
- password,
- tenant['name'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('921f1ad6-7907-40b8-853f-637e7ee52178')
- def test_authentication_with_invalid_tenant(self):
- """Test User's token for an invalid tenant via v2 API
-
- User's token for an invalid tenant should not be authenticated via V2
- API.
- """
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- self.assertRaises(lib_exc.Unauthorized, self.token_client.auth,
- user['name'],
- password,
- 'junktenant1234')
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('bde9aecd-3b1c-4079-858f-beb5deaa5b5e')
- def test_authentication_with_invalid_username(self):
- """Non-existent user's token should not get authorized via v2 API"""
- password = data_utils.rand_password()
- user = self.setup_test_user(password)
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- self.assertRaises(lib_exc.Unauthorized, self.token_client.auth,
- 'junkuser123', password, tenant['name'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('d5308b33-3574-43c3-8d87-1c090c5e1eca')
- def test_authentication_with_invalid_password(self):
- """Test User's token with invalid password via v2 API
-
- User's token with invalid password should not be authenticated via V2
- API.
- """
- user = self.setup_test_user()
- tenant = self.tenants_client.show_tenant(user['tenantId'])['tenant']
- self.assertRaises(lib_exc.Unauthorized, self.token_client.auth,
- user['name'], 'junkpass1234', tenant['name'])
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('284192ce-fb7c-4909-a63b-9a502e0ddd11')
- def test_get_users_by_unauthorized_user(self):
- """Non-admin should not be authorized to get user list via v2 API"""
- self.assertRaises(lib_exc.Forbidden,
- self.non_admin_users_client.list_users)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('a73591ec-1903-4ffe-be42-282b39fefc9d')
- def test_get_users_request_without_token(self):
- """Listing users without a valid token via v2 API should fail"""
- token = self.client.auth_provider.get_token()
- self.client.delete_token(token)
-
- # Unset the token to allow further tests to generate a new token
- self.addCleanup(self.client.auth_provider.clear_auth)
-
- self.assertRaises(lib_exc.Unauthorized, self.users_client.list_users)
-
- @decorators.attr(type=['negative'])
- @decorators.idempotent_id('f5d39046-fc5f-425c-b29e-bac2632da28e')
- def test_list_users_with_invalid_tenant(self):
- """Listing users for a non-existent tenant via v2 API should fail"""
- # Assign invalid tenant ids
- invalid_id = list()
- invalid_id.append(data_utils.rand_name('999'))
- invalid_id.append('alpha')
- invalid_id.append(data_utils.rand_name("dddd@#%%^$"))
- invalid_id.append('!@#()$%^&*?<>{}[]')
- # List the users with invalid tenant id
- for invalid in invalid_id:
- self.assertRaises(lib_exc.NotFound,
- self.tenants_client.list_tenant_users, invalid)
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index b5b3c5d..96218bb 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -128,7 +128,7 @@
for g in user_groups:
if 'membership_expires_at' in g:
self.assertIsNone(g['membership_expires_at'])
- del(g['membership_expires_at'])
+ del g['membership_expires_at']
self.assertEqual(sorted(groups, key=lambda k: k['name']),
sorted(user_groups, key=lambda k: k['name']))
self.assertEqual(2, len(user_groups))
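The hunk above replaces `del(g['membership_expires_at'])` with the statement form. In Python, `del` is a statement rather than a function, so the parentheses in the old spelling are just redundant grouping around the target; both forms behave identically, but the call-like form is misleading and some linters flag it:

```python
# Both forms remove the key; `del` is a statement, so the parentheses
# in the first form are redundant grouping, not a function call.
g = {'name': 'admins', 'membership_expires_at': None}

del (g['membership_expires_at'])  # works, but reads like a call
assert 'membership_expires_at' not in g

g['membership_expires_at'] = None
del g['membership_expires_at']    # idiomatic statement form
assert 'membership_expires_at' not in g
```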
diff --git a/tempest/api/identity/admin/v3/test_project_tags.py b/tempest/api/identity/admin/v3/test_project_tags.py
index 2cc7257..2004cbc 100644
--- a/tempest/api/identity/admin/v3/test_project_tags.py
+++ b/tempest/api/identity/admin/v3/test_project_tags.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import testtools
-
from tempest.api.identity import base
from tempest import config
from tempest.lib.common.utils import data_utils
@@ -33,8 +31,6 @@
force_tenant_isolation = False
@decorators.idempotent_id('7c123aac-999d-416a-a0fb-84b915ab10de')
- @testtools.skipUnless(CONF.identity_feature_enabled.project_tags,
- 'Project tags not available.')
def test_list_update_delete_project_tags(self):
"""Test listing, updating and deleting of project tags"""
project = self.setup_test_project()
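The hunk above drops the `testtools.skipUnless` guard now that the feature flag is gone. The decorator's behavior can be sketched with `unittest.skipUnless`, which `testtools` mirrors (the flag name here is a stand-in for `CONF.identity_feature_enabled.project_tags`):

```python
import unittest

FEATURE_ENABLED = False  # stand-in for the removed config flag


class ProjectTagsTest(unittest.TestCase):
    @unittest.skipUnless(FEATURE_ENABLED, 'Project tags not available.')
    def test_list_update_delete_project_tags(self):
        # Never reached while the flag is False: the runner records a
        # skip instead of executing the body.
        self.fail('should have been skipped')


result = unittest.TestResult()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ProjectTagsTest)
suite.run(result)
assert len(result.skipped) == 1
```

Removing the decorator means the test now always runs, which is the point of the change: the feature is assumed available on every supported cloud.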
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index c9e0e1c..811dff4 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -98,85 +98,6 @@
return role
-class BaseIdentityV2Test(BaseIdentityTest):
-
- credentials = ['primary']
-
- # identity v2 tests should obtain tokens and create accounts via v2
- # regardless of the configured CONF.identity.auth_version
- identity_version = 'v2'
-
- @classmethod
- def setup_clients(cls):
- super(BaseIdentityV2Test, cls).setup_clients()
- cls.non_admin_client = cls.os_primary.identity_public_client
- cls.non_admin_token_client = cls.os_primary.token_client
- cls.non_admin_tenants_client = cls.os_primary.tenants_public_client
- cls.non_admin_users_client = cls.os_primary.users_public_client
-
-
-class BaseIdentityV2AdminTest(BaseIdentityV2Test):
-
- credentials = ['primary', 'admin']
-
- # NOTE(andreaf) Identity tests work with credentials, so it is safer
- # for them to always use disposable credentials. Forcing dynamic creds
- # on regular identity tests would be however to restrictive, since it
- # would prevent any identity test from being executed against clouds where
- # admin credentials are not available.
- # Since All admin tests require admin credentials to be
- # executed, so this will not impact the ability to execute tests.
- force_tenant_isolation = True
-
- @classmethod
- def skip_checks(cls):
- super(BaseIdentityV2AdminTest, cls).skip_checks()
- if not CONF.identity_feature_enabled.api_v2_admin:
- raise cls.skipException('Identity v2 admin not available')
-
- @classmethod
- def setup_clients(cls):
- super(BaseIdentityV2AdminTest, cls).setup_clients()
- cls.client = cls.os_admin.identity_client
- cls.non_admin_client = cls.os_primary.identity_client
- cls.token_client = cls.os_admin.token_client
- cls.tenants_client = cls.os_admin.tenants_client
- cls.non_admin_tenants_client = cls.os_primary.tenants_client
- cls.roles_client = cls.os_admin.roles_client
- cls.non_admin_roles_client = cls.os_primary.roles_client
- cls.users_client = cls.os_admin.users_client
- cls.non_admin_users_client = cls.os_primary.users_client
- cls.services_client = cls.os_admin.identity_services_client
- cls.endpoints_client = cls.os_admin.endpoints_client
-
- @classmethod
- def resource_setup(cls):
- super(BaseIdentityV2AdminTest, cls).resource_setup()
- cls.projects_client = cls.tenants_client
-
- def setup_test_user(self, password=None):
- """Set up a test user."""
- tenant = self.setup_test_tenant()
- user = self.create_test_user(tenantId=tenant['id'], password=password)
- return user
-
- def setup_test_tenant(self, **kwargs):
- """Set up a test tenant."""
- if 'name' not in kwargs:
- kwargs['name'] = data_utils.rand_name(
- name='test_tenant',
- prefix=CONF.resource_name_prefix)
- if 'description' not in kwargs:
- kwargs['description'] = data_utils.rand_name(
- name='desc', prefix=CONF.resource_name_prefix)
- tenant = self.projects_client.create_tenant(**kwargs)['tenant']
- # Delete the tenant at the end of the test
- self.addCleanup(
- test_utils.call_and_ignore_notfound_exc,
- self.tenants_client.delete_tenant, tenant['id'])
- return tenant
-
-
class BaseIdentityV3Test(BaseIdentityTest):
credentials = ['primary']
@@ -322,13 +243,6 @@
class BaseApplicationCredentialsV3Test(BaseIdentityV3Test):
@classmethod
- def skip_checks(cls):
- super(BaseApplicationCredentialsV3Test, cls).skip_checks()
- if not CONF.identity_feature_enabled.application_credentials:
- raise cls.skipException("Application credentials are not available"
- " in this environment")
-
- @classmethod
def resource_setup(cls):
super(BaseApplicationCredentialsV3Test, cls).resource_setup()
cls.user_id = cls.os_primary.credentials.user_id
diff --git a/tempest/api/identity/v2/__init__.py b/tempest/api/identity/v2/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/api/identity/v2/__init__.py
+++ /dev/null
diff --git a/tempest/api/identity/v2/test_api_discovery.py b/tempest/api/identity/v2/test_api_discovery.py
deleted file mode 100644
index afda104..0000000
--- a/tempest/api/identity/v2/test_api_discovery.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright 2015 OpenStack Foundation.
-# Copyright 2015, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest.lib import decorators
-
-
-class TestApiDiscovery(base.BaseIdentityV2Test):
- """Tests for identity v2 API discovery features."""
-
- @decorators.attr(type='smoke')
- @decorators.idempotent_id('ea889a68-a15f-4166-bfb1-c12456eae853')
- def test_api_version_resources(self):
- """Test showing identity v2 api version resources"""
- descr = self.non_admin_client.show_api_description()['version']
- expected_resources = ('id', 'links', 'media-types', 'status',
- 'updated')
-
- keys = descr.keys()
- for res in expected_resources:
- self.assertIn(res, keys)
-
- @decorators.attr(type='smoke')
- @decorators.idempotent_id('007a0be0-78fe-4fdb-bbee-e9216cc17bb2')
- def test_api_media_types(self):
- """Test showing identity v2 api version media type"""
- descr = self.non_admin_client.show_api_description()['version']
- # Get MIME type bases and descriptions
- media_types = [(media_type['base'], media_type['type']) for
- media_type in descr['media-types']]
- # These are supported for API version 2
- supported_types = [('application/json',
- 'application/vnd.openstack.identity-v2.0+json')]
-
- # Check if supported types exist in response body
- for s_type in supported_types:
- self.assertIn(s_type, media_types)
-
- @decorators.attr(type='smoke')
- @decorators.idempotent_id('77fd6be0-8801-48e6-b9bf-38cdd2f253ec')
- def test_api_version_statuses(self):
- """Test showing identity v2 api version status"""
- descr = self.non_admin_client.show_api_description()['version']
- status = descr['status'].lower()
- supported_statuses = ['current', 'stable', 'experimental',
- 'supported', 'deprecated']
-
- self.assertIn(status, supported_statuses)
diff --git a/tempest/api/identity/v2/test_ec2_credentials.py b/tempest/api/identity/v2/test_ec2_credentials.py
deleted file mode 100644
index 9981ef8..0000000
--- a/tempest/api/identity/v2/test_ec2_credentials.py
+++ /dev/null
@@ -1,113 +0,0 @@
-# Copyright 2015 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest.common import utils
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-class EC2CredentialsTest(base.BaseIdentityV2Test):
-
- @classmethod
- def skip_checks(cls):
- super(EC2CredentialsTest, cls).skip_checks()
- if not utils.is_extension_enabled('OS-EC2', 'identity'):
- msg = "OS-EC2 identity extension not enabled."
- raise cls.skipException(msg)
-
- @classmethod
- def resource_setup(cls):
- super(EC2CredentialsTest, cls).resource_setup()
- cls.creds = cls.os_primary.credentials
-
- @decorators.idempotent_id('b580fab9-7ae9-46e8-8138-417260cb6f9f')
- def test_create_ec2_credential(self):
- """Create user ec2 credential."""
- resp = self.non_admin_users_client.create_user_ec2_credential(
- self.creds.user_id,
- tenant_id=self.creds.tenant_id)["credential"]
- access = resp['access']
- self.addCleanup(
- self.non_admin_users_client.delete_user_ec2_credential,
- self.creds.user_id, access)
- self.assertNotEmpty(resp['access'])
- self.assertNotEmpty(resp['secret'])
- self.assertEqual(self.creds.user_id, resp['user_id'])
- self.assertEqual(self.creds.tenant_id, resp['tenant_id'])
-
- @decorators.idempotent_id('9e2ea42f-0a4f-468c-a768-51859ce492e0')
- def test_list_ec2_credentials(self):
- """Get the list of user ec2 credentials."""
- created_creds = []
- # create first ec2 credentials
- creds1 = self.non_admin_users_client.create_user_ec2_credential(
- self.creds.user_id,
- tenant_id=self.creds.tenant_id)["credential"]
- created_creds.append(creds1['access'])
- self.addCleanup(
- self.non_admin_users_client.delete_user_ec2_credential,
- self.creds.user_id, creds1['access'])
-
- # create second ec2 credentials
- creds2 = self.non_admin_users_client.create_user_ec2_credential(
- self.creds.user_id,
- tenant_id=self.creds.tenant_id)["credential"]
- created_creds.append(creds2['access'])
- self.addCleanup(
- self.non_admin_users_client.delete_user_ec2_credential,
- self.creds.user_id, creds2['access'])
-
- # get the list of user ec2 credentials
- resp = self.non_admin_users_client.list_user_ec2_credentials(
- self.creds.user_id)["credentials"]
- fetched_creds = [cred['access'] for cred in resp]
- # created credentials should be in a fetched list
- missing = [cred for cred in created_creds
- if cred not in fetched_creds]
- self.assertEmpty(missing,
- "Failed to find ec2_credentials %s in fetched list" %
- ', '.join(cred for cred in missing))
-
- @decorators.idempotent_id('cb284075-b613-440d-83ca-fe0b33b3c2b8')
- def test_show_ec2_credential(self):
- """Get the definite user ec2 credential."""
- resp = self.non_admin_users_client.create_user_ec2_credential(
- self.creds.user_id,
- tenant_id=self.creds.tenant_id)["credential"]
- self.addCleanup(
- self.non_admin_users_client.delete_user_ec2_credential,
- self.creds.user_id, resp['access'])
-
- ec2_creds = self.non_admin_users_client.show_user_ec2_credential(
- self.creds.user_id, resp['access']
- )["credential"]
- for key in ['access', 'secret', 'user_id', 'tenant_id']:
- self.assertEqual(ec2_creds[key], resp[key])
-
- @decorators.idempotent_id('6aba0d4c-b76b-4e46-aa42-add79bc1551d')
- def test_delete_ec2_credential(self):
- """Delete user ec2 credential."""
- resp = self.non_admin_users_client.create_user_ec2_credential(
- self.creds.user_id,
- tenant_id=self.creds.tenant_id)["credential"]
- access = resp['access']
- self.non_admin_users_client.delete_user_ec2_credential(
- self.creds.user_id, access)
- self.assertRaises(
- lib_exc.NotFound,
- self.non_admin_users_client.show_user_ec2_credential,
- self.creds.user_id,
- access)
diff --git a/tempest/api/identity/v2/test_extension.py b/tempest/api/identity/v2/test_extension.py
deleted file mode 100644
index 13555bd..0000000
--- a/tempest/api/identity/v2/test_extension.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright 2014 NEC Corporation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest.lib import decorators
-
-
-class ExtensionTestJSON(base.BaseIdentityV2Test):
- """Test extensions in identity v2 API"""
-
- @decorators.idempotent_id('85f3f661-f54c-4d48-b563-72ae952b9383')
- def test_list_extensions(self):
- """List all the identity extensions via v2 API"""
- body = self.non_admin_client.list_extensions()['extensions']['values']
- self.assertNotEmpty(body)
- keys = ['name', 'updated', 'alias', 'links',
- 'namespace', 'description']
- for value in body:
- for key in keys:
- self.assertIn(key, value)
diff --git a/tempest/api/identity/v2/test_tenants.py b/tempest/api/identity/v2/test_tenants.py
deleted file mode 100644
index 1752b65..0000000
--- a/tempest/api/identity/v2/test_tenants.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# Copyright 2015 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.identity import base
-from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
-
-
-class IdentityTenantsTest(base.BaseIdentityV2Test):
- """Test listing tenants in identity v2 API"""
-
- credentials = ['primary', 'alt']
-
- @decorators.idempotent_id('ecae2459-243d-4ba1-ad02-65f15dc82b78')
- def test_list_tenants_returns_only_authorized_tenants(self):
- """Test listing tenants only returns authorized tenants via v2 API"""
- alt_tenant_name = self.os_alt.credentials.tenant_name
- resp = self.non_admin_tenants_client.list_tenants()
-
- # check that user can see only that tenants that he presents in so user
- # can successfully authenticate using his credentials and tenant name
- # from received tenants list
- for tenant in resp['tenants']:
- body = self.non_admin_token_client.auth(
- self.os_primary.credentials.username,
- self.os_primary.credentials.password,
- tenant['name'])
- self.assertNotEmpty(body['token']['id'])
- self.assertEqual(body['token']['tenant']['id'], tenant['id'])
- self.assertEqual(body['token']['tenant']['name'], tenant['name'])
- self.assertEqual(
- body['user']['id'], self.os_primary.credentials.user_id)
-
- # check that user cannot log in to alt user's tenant
- self.assertRaises(
- lib_exc.Unauthorized,
- self.non_admin_token_client.auth,
- self.os_primary.credentials.username,
- self.os_primary.credentials.password,
- alt_tenant_name)
diff --git a/tempest/api/identity/v2/test_tokens.py b/tempest/api/identity/v2/test_tokens.py
deleted file mode 100644
index d3776b8..0000000
--- a/tempest/api/identity/v2/test_tokens.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright 2015 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_utils import timeutils
-from tempest.api.identity import base
-from tempest.lib import decorators
-
-
-class TokensTest(base.BaseIdentityV2Test):
- """Test tokens in identity v2 API"""
-
- @decorators.idempotent_id('65ae3b78-91ff-467b-a705-f6678863b8ec')
- def test_create_token(self):
- """Test creating token for user via v2 API"""
- token_client = self.non_admin_token_client
-
- # get a token for the user
- creds = self.os_primary.credentials
- username = creds.username
- password = creds.password
- tenant_name = creds.tenant_name
-
- body = token_client.auth(username, password, tenant_name)
-
- self.assertNotEmpty(body['token']['id'])
- self.assertIsInstance(body['token']['id'], str)
-
- now = timeutils.utcnow()
- expires_at = timeutils.normalize_time(
- timeutils.parse_isotime(body['token']['expires']))
- self.assertGreater(expires_at, now)
-
- self.assertEqual(body['token']['tenant']['id'],
- creds.tenant_id)
- self.assertEqual(body['token']['tenant']['name'],
- tenant_name)
-
- self.assertEqual(body['user']['id'], creds.user_id)
diff --git a/tempest/api/identity/v2/test_users.py b/tempest/api/identity/v2/test_users.py
deleted file mode 100644
index a63b45c..0000000
--- a/tempest/api/identity/v2/test_users.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright 2015 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import time
-
-import testtools
-
-from tempest.api.identity import base
-from tempest import config
-from tempest.lib.common.utils import data_utils
-from tempest.lib import decorators
-from tempest.lib import exceptions
-
-
-CONF = config.CONF
-
-
-class IdentityUsersTest(base.BaseIdentityV2Test):
- """Test user password in identity v2 API"""
-
- @classmethod
- def resource_setup(cls):
- super(IdentityUsersTest, cls).resource_setup()
- cls.creds = cls.os_primary.credentials
- cls.username = cls.creds.username
- cls.password = cls.creds.password
- cls.tenant_name = cls.creds.tenant_name
-
- def _update_password(self, user_id, original_password, password):
- self.non_admin_users_client.update_user_own_password(
- user_id, password=password, original_password=original_password)
-
- # NOTE(morganfainberg): Fernet tokens are not subsecond aware and
- # Keystone should only be precise to the second. Sleep to ensure
- # we are passing the second boundary.
- time.sleep(1)
-
- # check authorization with new password
- self.non_admin_token_client.auth(self.username,
- password,
- self.tenant_name)
-
- # Reset auth to get a new token with the new password
- self.non_admin_users_client.auth_provider.clear_auth()
- self.non_admin_users_client.auth_provider.credentials.password = (
- password)
-
- def _restore_password(self, user_id, old_pass, new_pass):
- if CONF.identity_feature_enabled.security_compliance:
- # First we need to clear the password history
- unique_count = CONF.identity.user_unique_last_password_count
- for _ in range(unique_count):
- random_pass = data_utils.rand_password()
- self._update_password(
- user_id, original_password=new_pass, password=random_pass)
- new_pass = random_pass
-
- self._update_password(
- user_id, original_password=new_pass, password=old_pass)
- # Reset auth again to verify the password restore does work.
- # Clear auth restores the original credentials and deletes
- # cached auth data
- self.non_admin_users_client.auth_provider.clear_auth()
- # NOTE(lbragstad): Fernet tokens are not subsecond aware and
- # Keystone should only be precise to the second. Sleep to ensure we
- # are passing the second boundary before attempting to
- # authenticate.
- time.sleep(1)
- self.non_admin_users_client.auth_provider.set_auth()
-
- @decorators.idempotent_id('165859c9-277f-4124-9479-a7d1627b0ca7')
- @testtools.skipIf(CONF.identity_feature_enabled.immutable_user_source,
- 'Skipped because environment has an '
- 'immutable user source and solely '
- 'provides read-only access to users.')
- def test_user_update_own_password(self):
- """test updating user's own password via v2 API"""
- old_pass = self.creds.password
- old_token = self.non_admin_users_client.token
- new_pass = data_utils.rand_password()
- user_id = self.creds.user_id
-
- # to change password back. important for use_dynamic_credentials=false
- self.addCleanup(self._restore_password, user_id, old_pass, new_pass)
-
- # user updates own password
- self._update_password(
- user_id, original_password=old_pass, password=new_pass)
-
- # authorize with old token should lead to Unauthorized
- self.assertRaises(exceptions.Unauthorized,
- self.non_admin_token_client.auth_token,
- old_token)
-
- # authorize with old password should lead to Unauthorized
- self.assertRaises(exceptions.Unauthorized,
- self.non_admin_token_client.auth,
- self.username,
- old_pass,
- self.tenant_name)
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index 0544c31..c2f067c 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+import io
import time
from tempest import config
@@ -95,6 +96,35 @@
namespace_name)
return namespace
+ def create_and_stage_image(self, all_stores=False):
+ """Create Image & stage image file for glance-direct import method."""
+ image_name = data_utils.rand_name('test-image')
+ container_format = CONF.image.container_formats[0]
+ image = self.create_image(name=image_name,
+ container_format=container_format,
+ disk_format='raw',
+ visibility='private')
+ self.assertEqual('queued', image['status'])
+
+ self.client.stage_image_file(
+ image['id'],
+ io.BytesIO(data_utils.random_bytes()))
+ # Check image status is 'uploading'
+ body = self.client.show_image(image['id'])
+ self.assertEqual(image['id'], body['id'])
+ self.assertEqual('uploading', body['status'])
+
+ if all_stores:
+ stores_list = ','.join([store['id']
+ for store in self.available_stores
+ if store.get('read-only') != 'true'])
+ else:
+ stores = [store['id'] for store in self.available_stores
+ if store.get('read-only') != 'true']
+ stores_list = stores[::max(1, len(stores) - 1)]
+
+ return body, stores_list
+
@classmethod
def get_available_stores(cls):
stores = []
@@ -147,8 +177,8 @@
# If we added the location directly, the image goes straight
# to active and no hashing is done
self.assertEqual('active', image['status'])
- self.assertIsNone(None, image['os_hash_algo'])
- self.assertIsNone(None, image['os_hash_value'])
+ self.assertIsNone(image['os_hash_algo'])
+ self.assertIsNone(image['os_hash_value'])
return image
@@ -171,8 +201,8 @@
# The image should still be active and still have no hashes
self.assertEqual('active', image['status'])
- self.assertIsNone(None, image['os_hash_algo'])
- self.assertIsNone(None, image['os_hash_value'])
+ self.assertIsNone(image['os_hash_algo'])
+ self.assertIsNone(image['os_hash_value'])
# The direct_url should still match the first location
if 'direct_url' in image:
diff --git a/tempest/api/image/v2/admin/test_image_caching.py b/tempest/api/image/v2/admin/test_image_caching.py
index 75369c9..333f946 100644
--- a/tempest/api/image/v2/admin/test_image_caching.py
+++ b/tempest/api/image/v2/admin/test_image_caching.py
@@ -37,13 +37,17 @@
# NOTE(abhishekk): As caching is enabled instance boot or volume
# boot or image download can also cache image, so we are going to
# maintain our caching information to avoid disturbing other tests
- self.cached_info = {}
+ self.cached_info = []
+ self.cached_info_remote = []
def tearDown(self):
# Delete all from cache/queue if we exit abruptly
for image_id in self.cached_info:
- self.os_admin.image_cache_client.cache_delete(
- image_id)
+ self.os_admin.image_cache_client.cache_delete(image_id)
+
+ for image_id in self.cached_info_remote:
+ self.os_admin.image_cache_client.cache_delete(image_id)
+
super(ImageCachingTest, self).tearDown()
@classmethod
@@ -75,19 +79,13 @@
image = self.client.show_image(image['id'])
return image
- def _assertCheckQueues(self, queued_images):
- for image in self.cached_info:
- if self.cached_info[image] == 'queued':
- self.assertIn(image, queued_images)
-
- def _assertCheckCache(self, cached_images):
+ def _assertCheckCache(self, cached_images, cached):
cached_list = []
for image in cached_images:
cached_list.append(image['image_id'])
- for image in self.cached_info:
- if self.cached_info[image] == 'cached':
- self.assertIn(image, cached_list)
+ for image in cached:
+ self.assertIn(image, cached_list)
@decorators.idempotent_id('4bf6adba-2f9f-47e9-a6d5-37f21ad4387c')
def test_image_caching_cycle(self):
@@ -97,10 +95,9 @@
self.assertRaises(lib_exc.Forbidden,
self.os_primary.image_cache_client.list_cache)
- # Check there is nothing is queued for cached by us
+ # Check there is nothing cached by us
output = self.os_admin.image_cache_client.list_cache()
- self._assertCheckQueues(output['queued_images'])
- self._assertCheckCache(output['cached_images'])
+ self._assertCheckCache(output['cached_images'], self.cached_info)
# Non-existing image should raise NotFound exception
self.assertRaises(lib_exc.NotFound,
@@ -122,12 +119,6 @@
# Queue image for caching
self.os_admin.image_cache_client.cache_queue(image['id'])
- self.cached_info[image['id']] = 'queued'
- # Verify that we have 1 image for queueing and 0 for caching
- output = self.os_admin.image_cache_client.list_cache()
- self._assertCheckQueues(output['queued_images'])
- self._assertCheckCache(output['cached_images'])
-
# Wait for image caching
LOG.info("Waiting for image %s to get cached", image['id'])
caching = waiters.wait_for_caching(
@@ -135,10 +126,9 @@
self.os_admin.image_cache_client,
image['id'])
- self.cached_info[image['id']] = 'cached'
- # verify that we have image in cache and not in queued
- self._assertCheckQueues(caching['queued_images'])
- self._assertCheckCache(caching['cached_images'])
+ self.cached_info.append(image['id'])
+ # verify that we have image cached
+ self._assertCheckCache(caching['cached_images'], self.cached_info)
# Verify that we can delete images from caching and queueing with
# api call.
@@ -152,4 +142,78 @@
self.os_admin.image_cache_client.cache_clear,
target="invalid")
# Remove all data from local information
- self.cached_info = {}
+ self.cached_info = []
+
+ @decorators.idempotent_id('0a6b7e10-bc30-4a41-91ff-69fb4f5e65f2')
+ def test_remote_and_self_cache(self):
+ """Test image cache works with self and remote glance service"""
+ if not CONF.image.alternate_image_endpoint:
+ raise self.skipException('No image_remote service to test '
+ 'against')
+
+        # Check there is nothing cached by us on the current and
+        # remote nodes
+ output = self.os_admin.image_cache_client.list_cache()
+ self._assertCheckCache(output['cached_images'], self.cached_info)
+
+ output = self.os_admin.cache_client_remote.list_cache()
+ self._assertCheckCache(output['cached_images'],
+ self.cached_info_remote)
+
+ # Create one image
+ image = self.image_create_and_upload(name='first',
+ container_format='bare',
+ disk_format='raw',
+ visibility='private')
+ self.assertEqual('active', image['status'])
+
+ # Queue image for caching on local node
+ self.os_admin.image_cache_client.cache_queue(image['id'])
+ # Wait for image caching
+ LOG.info("Waiting for image %s to get cached", image['id'])
+ caching = waiters.wait_for_caching(
+ self.client,
+ self.os_admin.image_cache_client,
+ image['id'])
+ self.cached_info.append(image['id'])
+ # verify that we have image in cache on local node
+ self._assertCheckCache(caching['cached_images'], self.cached_info)
+ # verify that we don't have anything cached on remote node
+ output = self.os_admin.cache_client_remote.list_cache()
+ self._assertCheckCache(output['cached_images'],
+ self.cached_info_remote)
+
+ # cache same image on remote node
+ self.os_admin.cache_client_remote.cache_queue(image['id'])
+ # Wait for image caching
+ LOG.info("Waiting for image %s to get cached", image['id'])
+ caching = waiters.wait_for_caching(
+ self.client,
+ self.os_admin.cache_client_remote,
+ image['id'])
+ self.cached_info_remote.append(image['id'])
+
+ # verify that we have image cached on remote node
+ output = self.os_admin.cache_client_remote.list_cache()
+ self._assertCheckCache(output['cached_images'],
+ self.cached_info_remote)
+
+        # Verify that we can delete the image from the remote cache while it
+        # is still present in the local cache
+ self.os_admin.cache_client_remote.cache_clear()
+ output = self.os_admin.cache_client_remote.list_cache()
+ self.assertEqual(0, len(output['queued_images']))
+ self.assertEqual(0, len(output['cached_images']))
+
+ output = self.os_admin.image_cache_client.list_cache()
+ self._assertCheckCache(output['cached_images'], self.cached_info)
+
+ # Delete image from local cache as well
+ self.os_admin.image_cache_client.cache_clear()
+ output = self.os_admin.image_cache_client.list_cache()
+ self.assertEqual(0, len(output['queued_images']))
+ self.assertEqual(0, len(output['cached_images']))
+
+ # Remove all data from local and remote information
+ self.cached_info = []
+ self.cached_info_remote = []
diff --git a/tempest/api/image/v2/admin/test_images.py b/tempest/api/image/v2/admin/test_images.py
index 27cdcd8..2c2e9a8 100644
--- a/tempest/api/image/v2/admin/test_images.py
+++ b/tempest/api/image/v2/admin/test_images.py
@@ -112,7 +112,7 @@
image_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='copy-image')
container_format = CONF.image.container_formats[0]
- disk_format = CONF.image.disk_formats[0]
+ disk_format = 'raw'
image = self.create_image(name=image_name,
container_format=container_format,
disk_format=disk_format,
@@ -179,3 +179,59 @@
self.assertRaises(lib_exc.Forbidden,
self.admin_client.update_image, image['id'], [
dict(remove='/locations/0')])
+
+
+class MultiStoresImagesTest(base.BaseV2ImageAdminTest, base.BaseV2ImageTest):
+ """Test importing and deleting image in multiple stores"""
+ @classmethod
+ def skip_checks(cls):
+ super(MultiStoresImagesTest, cls).skip_checks()
+ if not CONF.image_feature_enabled.import_image:
+ skip_msg = (
+ "%s skipped as image import is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
+ def resource_setup(cls):
+ super(MultiStoresImagesTest, cls).resource_setup()
+ cls.available_import_methods = \
+ cls.client.info_import()['import-methods']['value']
+ if not cls.available_import_methods:
+ raise cls.skipException('Server does not support '
+ 'any import method')
+
+        # NOTE(pdeore): Skip if glance-direct import method and multistore
+ # are not enabled/configured, or only one store is configured in
+ # multiple stores setup.
+ cls.available_stores = cls.get_available_stores()
+ if ('glance-direct' not in cls.available_import_methods or
+ not len(cls.available_stores) > 1):
+ raise cls.skipException(
+                'Either the glance-direct import method is not present in '
+                '%s, or fewer than two stores are '
+                'configured: %s' % (cls.available_import_methods,
+                                    cls.available_stores))
+
+ @decorators.idempotent_id('1ecec683-41d4-4470-a0df-54969ec74514')
+ def test_delete_image_from_specific_store(self):
+ """Test delete image from specific store"""
+ # Import image to available stores
+ image, stores = self.create_and_stage_image(all_stores=True)
+ self.client.image_import(image['id'],
+ method='glance-direct',
+ all_stores=True)
+ self.addCleanup(self.admin_client.delete_image, image['id'])
+ waiters.wait_for_image_imported_to_stores(
+ self.client,
+ image['id'], stores)
+ observed_image = self.client.show_image(image['id'])
+
+ # Image will be deleted from first store
+ first_image_store_deleted = (observed_image['stores'].split(","))[0]
+ self.admin_client.delete_image_from_store(
+ observed_image['id'], first_image_store_deleted)
+ waiters.wait_for_image_deleted_from_store(
+ self.admin_client,
+ observed_image,
+ stores,
+ first_image_store_deleted)
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index be7424f..9309c76 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -56,7 +56,14 @@
image_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='image')
container_format = container_format or CONF.image.container_formats[0]
- disk_format = disk_format or CONF.image.disk_formats[0]
+ disk_format = disk_format or 'raw'
+ if disk_format not in CONF.image.disk_formats:
+ # If the test asked for some disk format that is not available,
+ # consider that a programming error. Tests with specific
+ # requirements should be checking to see if it is available and
+ # skipping themselves instead of this helper doing it.
+ raise RuntimeError('Test requires unavailable disk_format %s, '
+ 'but did not skip' % disk_format)
image = self.create_image(name=image_name,
container_format=container_format,
disk_format=disk_format,
@@ -76,7 +83,7 @@
'%s import method' % method)
def _stage_and_check(self):
- image = self._create_image()
+ image = self._create_image(disk_format='raw')
# Stage image data
file_content = data_utils.random_bytes()
image_file = io.BytesIO(file_content)
@@ -125,13 +132,13 @@
"""
self._require_import_method('web-download')
- image = self._create_image()
+ image = self._create_image(disk_format='qcow2')
# Now try to get image details
body = self.client.show_image(image['id'])
self.assertEqual(image['id'], body['id'])
self.assertEqual('queued', body['status'])
# import image from web to backend
- image_uri = CONF.image.http_image
+ image_uri = CONF.image.http_qcow2_image
self.client.image_import(image['id'], method='web-download',
import_params={'uri': image_uri})
waiters.wait_for_image_imported_to_stores(self.client, image['id'])
@@ -344,37 +351,6 @@
'configured %s' % (cls.available_import_methods,
cls.available_stores))
- def _create_and_stage_image(self, all_stores=False):
- """Create Image & stage image file for glance-direct import method."""
- image_name = data_utils.rand_name(
- prefix=CONF.resource_name_prefix, name='test-image')
- container_format = CONF.image.container_formats[0]
- disk_format = CONF.image.disk_formats[0]
- image = self.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- self.assertEqual('queued', image['status'])
-
- self.client.stage_image_file(
- image['id'],
- io.BytesIO(data_utils.random_bytes()))
- # Check image status is 'uploading'
- body = self.client.show_image(image['id'])
- self.assertEqual(image['id'], body['id'])
- self.assertEqual('uploading', body['status'])
-
- if all_stores:
- stores_list = ','.join([store['id']
- for store in self.available_stores
- if store.get('read-only') != 'true'])
- else:
- stores = [store['id'] for store in self.available_stores
- if store.get('read-only') != 'true']
- stores_list = stores[::max(1, len(stores) - 1)]
-
- return body, stores_list
-
@decorators.idempotent_id('bf04ff00-3182-47cb-833a-f1c6767b47fd')
def test_glance_direct_import_image_to_all_stores(self):
"""Test image is imported in all available stores
@@ -382,7 +358,7 @@
Create image, import image to all available stores using glance-direct
import method and verify that import succeeded.
"""
- image, stores = self._create_and_stage_image(all_stores=True)
+ image, stores = self.create_and_stage_image(all_stores=True)
self.client.image_import(
image['id'], method='glance-direct', all_stores=True)
@@ -397,7 +373,7 @@
Create image, import image to specified store(s) using glance-direct
import method and verify that import succeeded.
"""
- image, stores = self._create_and_stage_image()
+ image, stores = self.create_and_stage_image()
self.client.image_import(image['id'], method='glance-direct',
stores=stores)
@@ -421,7 +397,7 @@
image_name = data_utils.rand_name(
prefix=CONF.resource_name_prefix, name='image')
container_format = CONF.image.container_formats[0]
- disk_format = CONF.image.disk_formats[0]
+ disk_format = 'raw'
image = self.create_image(name=image_name,
container_format=container_format,
disk_format=disk_format,
@@ -560,6 +536,15 @@
for container_fmt in container_fmts
for disk_fmt in disk_fmts]
+        # NOTE(danms): This test depends on being able to lie about image
+ # content. We can probably improve this in some way, but without a
+ # valid sample of each image format in each container format, there is
+ # no easy solution.
+ if CONF.image_feature_enabled.image_format_enforcement:
+ raise cls.skipException(
+ 'Image format enforcement prevents testing with '
+ 'bogus image data')
+
for (container_fmt, disk_fmt) in all_pairs[:6]:
LOG.debug("Creating an image "
"(Container format: %s, Disk format: %s).",
@@ -785,7 +770,7 @@
# Create an image to be shared using default visibility
image_file = io.BytesIO(data_utils.random_bytes(2048))
container_format = CONF.image.container_formats[0]
- disk_format = CONF.image.disk_formats[0]
+ disk_format = 'raw'
image = self.create_image(container_format=container_format,
disk_format=disk_format)
self.client.store_image_file(image['id'], data=image_file)
diff --git a/tempest/api/image/v2/test_images_dependency.py b/tempest/api/image/v2/test_images_dependency.py
new file mode 100644
index 0000000..41611bb
--- /dev/null
+++ b/tempest/api/image/v2/test_images_dependency.py
@@ -0,0 +1,135 @@
+# Copyright 2024 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import io
+
+from oslo_log import log as logging
+
+from tempest.api.compute import base as compute_base
+from tempest.api.image import base as image_base
+from tempest.common import utils
+from tempest.common import waiters
+from tempest import config
+from tempest.lib.common.utils import data_utils
+from tempest.lib import decorators
+from tempest.scenario import manager
+
+CONF = config.CONF
+LOG = logging.getLogger(__name__)
+
+
+class ImageDependencyTests(image_base.BaseV2ImageTest,
+ compute_base.BaseV2ComputeTest,
+ manager.ScenarioTest):
+ """Test image, instance, and snapshot dependency.
+
+    The tests create an image and then remove the base image that other
+    snapshots depend on. In OpenStack, images and snapshots should be
+    independent, but in some configurations, such as Glance with Ceph
+    storage, there were cases where the base image couldn't be removed.
+    This was fixed in glance_store for the RBD backend.
+
+ * Dependency scenarios:
+        - image -> instance -> snapshot dependency
+
+    NOTE: volume -> image dependency tests are in cinder-tempest-plugin
+ """
+
+ @classmethod
+ def skip_checks(cls):
+ super(ImageDependencyTests, cls).skip_checks()
+ if not CONF.volume_feature_enabled.enable_volume_image_dep_tests:
+ skip_msg = (
+ "%s Volume/image dependency tests "
+ "not enabled" % (cls.__name__))
+ raise cls.skipException(skip_msg)
+
+ def _create_instance_snapshot(self, bfv=False):
+ """Create instance from image and then snapshot the instance."""
+ # Create image and store data to image
+ source = 'volume' if bfv else 'image'
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name='image-dependency-test')
+ image = self.create_image(name=image_name,
+ container_format='bare',
+ disk_format='raw',
+ visibility='private')
+ file_content = data_utils.random_bytes()
+ image_file = io.BytesIO(file_content)
+ self.client.store_image_file(image['id'], image_file)
+ waiters.wait_for_image_status(
+ self.client, image['id'], 'active')
+ if bfv:
+ # Create instance
+ instance = self.create_test_server(
+ name='instance-depend-image',
+ image_id=image['id'],
+ volume_backed=True,
+ wait_until='ACTIVE')
+ else:
+ # Create instance
+ instance = self.create_test_server(
+ name='instance-depend-image',
+ image_id=image['id'],
+ wait_until='ACTIVE')
+        LOG.info("Instance created from %s: %s", source, instance)
+ instance_observed = \
+ self.servers_client.show_server(instance['id'])['server']
+ # Create instance snapshot
+ snapshot_instance = self.create_server_snapshot(
+ server=instance_observed)
+ LOG.info("Instance snapshot is created %s", snapshot_instance)
+ return image['id'], snapshot_instance['id']
+
+ @decorators.idempotent_id('d19b0731-e98e-4103-8b0e-02f651b8f586')
+ @utils.services('compute')
+ def test_nova_image_snapshot_dependency(self):
+        """Test the image -> instance -> snapshot dependency.
+
+        Create an instance snapshot and check that we are able to
+        delete the base image.
+
+        """
+ base_image_id, snapshot_image_id = self._create_instance_snapshot()
+ self.client.delete_image(base_image_id)
+ self.client.wait_for_resource_deletion(base_image_id)
+ images_list = self.client.list_images()['images']
+ fetched_images_id = [img['id'] for img in images_list]
+ self.assertNotIn(base_image_id, fetched_images_id)
+ self.assertIn(snapshot_image_id, fetched_images_id)
+
+ @utils.services('compute', 'volume')
+ @decorators.idempotent_id('f0c8a35d-8f8f-443c-8bcb-85a9c0f87d19')
+ def test_image_volume_server_snapshot_dependency(self):
+        """Test the image -> volume -> instance -> snapshot dependency.
+
+ We are going to perform the following steps in the test:
+ * Create image
+ * Create a bootable volume from Image
+ * Launch an instance from the bootable volume
+ * Take snapshot of the instance -- which creates the volume snapshot
+ * Delete the image.
+
+        This tests the image -> volume -> instance -> snapshot chain.
+ """
+ base_image_id, snapshot_image_id = self._create_instance_snapshot(
+ bfv=True)
+ self.client.delete_image(base_image_id)
+ self.client.wait_for_resource_deletion(base_image_id)
+ images_list = self.client.list_images()['images']
+ fetched_images_id = [img['id'] for img in images_list]
+ self.assertNotIn(base_image_id, fetched_images_id)
+ self.assertIn(snapshot_image_id, fetched_images_id)
diff --git a/tempest/api/image/v2/test_images_formats.py b/tempest/api/image/v2/test_images_formats.py
new file mode 100644
index 0000000..f0dec90
--- /dev/null
+++ b/tempest/api/image/v2/test_images_formats.py
@@ -0,0 +1,212 @@
+# Copyright 2024 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+
+import testscenarios
+import yaml
+
+from tempest.api.compute import base as compute_base
+from tempest.api.image import base
+from tempest.common import waiters
+from tempest import config
+from tempest import exceptions
+from tempest.lib.common.utils import data_utils
+from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
+
+CONF = config.CONF
+
+
+def load_tests(loader, suite, pattern):
+ """Generate scenarios from the image manifest."""
+ if CONF.image.images_manifest_file is None:
+ return suite
+ ImagesFormatTest.scenarios = []
+ with open(CONF.image.images_manifest_file) as f:
+ ImagesFormatTest._manifest = yaml.load(f, Loader=yaml.SafeLoader)
+ for imgdef in ImagesFormatTest._manifest['images']:
+ ImagesFormatTest.scenarios.append((imgdef['name'],
+ {'imgdef': imgdef}))
+ result = loader.suiteClass()
+ result.addTests(testscenarios.generate_scenarios(suite))
+ return result
+
+
+class ImagesFormatTest(base.BaseV2ImageTest,
+ compute_base.BaseV2ComputeTest):
+ def setUp(self):
+ super().setUp()
+ if CONF.image.images_manifest_file is None:
+ self.skipTest('Image format testing is not configured')
+ self._image_base = os.path.dirname(os.path.abspath(
+ CONF.image.images_manifest_file))
+
+ self.images = []
+
+ def tearDown(self):
+ for img in self.images:
+ try:
+ self.client.delete_image(img['id'])
+ except lib_exc.NotFound:
+ pass
+ return super().tearDown()
+
+ @classmethod
+ def resource_setup(cls):
+ super().resource_setup()
+ cls.available_import_methods = cls.client.info_import()[
+ 'import-methods']['value']
+
+ def _test_image(self, image_def, override_format=None, asimport=False):
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name=image_def['name'])
+ image = self.client.create_image(
+ name=image_name,
+ container_format='bare',
+ disk_format=override_format or image_def['format'])
+ self.images.append(image)
+ image_fn = os.path.join(self._image_base, image_def['filename'])
+ with open(image_fn, 'rb') as f:
+ if asimport:
+ self.client.stage_image_file(image['id'], f)
+ self.client.image_import(image['id'], method='glance-direct')
+ else:
+ self.client.store_image_file(image['id'], f)
+ return image
+
+ @decorators.idempotent_id('a245fcbe-63ce-4dc1-a1d0-c16d76d9e6df')
+ def test_accept_usable_formats(self):
+ if self.imgdef['usable']:
+ if self.imgdef['format'] in CONF.image.disk_formats:
+ # These are expected to work
+ self._test_image(self.imgdef)
+ else:
+ # If this is not configured to be supported, we should get
+ # a BadRequest from glance
+ self.assertRaises(lib_exc.BadRequest,
+ self._test_image, self.imgdef)
+ else:
+ self.skipTest(
+ 'Glance does not currently reject unusable images on upload')
+
+ @decorators.idempotent_id('7c7c2f16-2e97-4dce-8cb4-bc10be031c85')
+ def test_accept_reject_formats_import(self):
+ """Make sure glance rejects invalid images during conversion."""
+ if 'glance-direct' not in self.available_import_methods:
+ self.skipTest('Import via glance-direct is not available')
+ if not CONF.image_feature_enabled.image_conversion:
+ self.skipTest('Import image_conversion not enabled')
+
+        # VMDK with footer was not supported by earlier service versions, so
+        # we need to tolerate it either passing or failing (skip the latter).
+ # See this for more info:
+ # https://bugs.launchpad.net/glance/+bug/2073262
+ is_broken = 'footer' in self.imgdef['name']
+
+ if (self.imgdef['format'] in CONF.image.disk_formats and
+ self.imgdef['usable']):
+ # Usable images should end up in active state
+ image = self._test_image(self.imgdef, asimport=True)
+ try:
+ waiters.wait_for_image_status(self.client, image['id'],
+ 'active')
+ except lib_exc.TimeoutException:
+ if is_broken:
+ self.skipTest(
+ 'Older glance did not support vmdk-with-footer')
+ else:
+ raise
+ else:
+ # FIXME(danms): Make this better, but gpt will fail before
+ # the import even starts until glance has it in its API
+ # schema as a valid value. Other formats expected to fail
+ # do so during import and return to queued state.
+ if self.imgdef['format'] not in CONF.image.disk_formats:
+ self.assertRaises(lib_exc.BadRequest,
+ self._test_image,
+ self.imgdef, asimport=True)
+ else:
+ image = self._test_image(self.imgdef, asimport=True)
+ waiters.wait_for_image_status(self.client, image['id'],
+ 'queued')
+ self.client.delete_image(image['id'])
+
+ if self.imgdef['format'] == 'iso':
+ # NOTE(danms): Glance has a special case to not convert ISO images
+ # because they are special and must remain as ISOs in order to be
+ # properly used for CD-based rescue and boot.
+ self.assertEqual('iso', image['disk_format'])
+
+ def _create_server_with_image_def(self, image_def, **overrides):
+ image_def = dict(image_def, **overrides)
+ image = self._test_image(image_def)
+ server = self.create_test_server(name='server-%s' % image['name'],
+ image_id=image['id'],
+ wait_until='ACTIVE')
+ return server
+
+ @decorators.idempotent_id('f77394bc-81f4-4d54-9f5b-e48f3d6b5376')
+ def test_compute_rejects_invalid(self):
+ """Make sure compute rejects invalid/insecure images."""
+ if self.imgdef['format'] not in CONF.image.disk_formats:
+ # if this format is not allowed by glance, we can not create
+ # a properly-formatted image for it, so skip it.
+ self.skipTest(
+ 'Format %s not allowed by config' % self.imgdef['format'])
+ if CONF.image_feature_enabled.image_format_enforcement:
+ # If glance rejects bad images during upload, we cannot get them
+ # registered so that we can test nova.
+ self.skipTest(
+ 'Unable to test compute image formats if glance does not '
+ 'allow them to be uploaded')
+
+        # VMDK with footer was not supported by earlier service versions, so
+        # we need to tolerate it either passing or failing (skip the latter).
+ # See this for more info:
+ # https://bugs.launchpad.net/glance/+bug/2073262
+ is_broken = 'footer' in self.imgdef['name']
+
+ if self.imgdef['usable']:
+ try:
+ server = self._create_server_with_image_def(self.imgdef)
+ except exceptions.BuildErrorException:
+ if is_broken:
+                    self.skipTest('Tolerating failed build with known-broken '
+                                  'image format')
+ else:
+ raise
+ self.delete_server(server['id'])
+ else:
+ self.assertRaises(exceptions.BuildErrorException,
+ self._create_server_with_image_def,
+ self.imgdef)
+
+ @decorators.idempotent_id('ffe21610-e801-4992-9b81-e2d646e2e2e9')
+ def test_compute_rejects_format_mismatch(self):
+ """Make sure compute rejects any image with a format mismatch."""
+ if CONF.image_feature_enabled.image_format_enforcement:
+ # If glance rejects bad images during upload, we cannot get them
+ # registered so that we can test nova.
+ self.skipTest(
+ 'Unable to test compute image formats if glance does not '
+ 'allow them to be uploaded')
+ # Lying about the disk_format should always fail
+        override_fmt = (
+            'qcow2' if self.imgdef['format'] in ('raw', 'gpt') else 'raw')
+ self.assertRaises(exceptions.BuildErrorException,
+ self._create_server_with_image_def,
+ self.imgdef,
+ format=override_fmt)
diff --git a/tempest/api/image/v2/test_images_negative.py b/tempest/api/image/v2/test_images_negative.py
index 80c01a5..f0b891f 100644
--- a/tempest/api/image/v2/test_images_negative.py
+++ b/tempest/api/image/v2/test_images_negative.py
@@ -58,7 +58,10 @@
def test_get_delete_deleted_image(self):
"""Get and delete the deleted image"""
# create and delete image
- image = self.client.create_image(name='test',
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name="test")
+ image = self.client.create_image(name=image_name,
container_format='bare',
disk_format='raw')
self.client.delete_image(image['id'])
@@ -111,7 +114,10 @@
@decorators.idempotent_id('ab980a34-8410-40eb-872b-f264752f46e5')
def test_delete_protected_image(self):
"""Create a protected image"""
- image = self.create_image(protected=True)
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name="test")
+ image = self.create_image(name=image_name, protected=True)
self.addCleanup(self.client.update_image, image['id'],
[dict(replace="/protected", value=False)])
@@ -132,7 +138,10 @@
if not CONF.image_feature_enabled.os_glance_reserved:
raise self.skipException('os_glance_reserved is not enabled')
- image = self.create_image(name='test',
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name="test")
+ image = self.create_image(name=image_name,
container_format='bare',
disk_format='raw')
self.assertRaises(lib_exc.Forbidden,
@@ -152,9 +161,12 @@
if not CONF.image_feature_enabled.os_glance_reserved:
raise self.skipException('os_glance_reserved is not enabled')
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name="test")
self.assertRaises(lib_exc.Forbidden,
self.create_image,
- name='test',
+ name=image_name,
container_format='bare',
disk_format='raw',
os_glance_foo='bar')
@@ -195,7 +207,10 @@
if 'web-download' not in self.available_import_methods:
raise self.skipException('Server does not support '
'web-download import method')
- image = self.client.create_image(name='test',
+ image_name = data_utils.rand_name(
+ prefix=CONF.resource_name_prefix,
+ name="test")
+ image = self.client.create_image(name=image_name,
container_format='bare',
disk_format='raw')
# Now try to get image details
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index b4bfc61..8b4766c 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -48,12 +48,13 @@
@decorators.idempotent_id('f164801e-1dd8-4b8b-b5d3-cc3ac77cfaa5')
def test_dhcp_port_status_active(self):
- ports = self.admin_ports_client.list_ports(
+ dhcp_ports = self.admin_ports_client.list_ports(
+ device_owner='network:dhcp',
network_id=self.network['id'])['ports']
- for port in ports:
+ for dhcp_port in dhcp_ports:
waiters.wait_for_port_status(
client=self.admin_ports_client,
- port_id=port['id'],
+ port_id=dhcp_port['id'],
status='ACTIVE')
@decorators.idempotent_id('5032b1fe-eb42-4a64-8f3b-6e189d8b5c7d')
diff --git a/tempest/api/network/test_allowed_address_pair.py b/tempest/api/network/test_allowed_address_pair.py
index 5c28e96..01dda06 100644
--- a/tempest/api/network/test_allowed_address_pair.py
+++ b/tempest/api/network/test_allowed_address_pair.py
@@ -108,7 +108,7 @@
# both cases, with and without that "active" attribute, we need to
# removes that field from the allowed_address_pairs which are returned
# by the Neutron server.
- # We could make expected results of those tests to be dependend on the
+ # We could make expected results of those tests to be dependent on the
# available Neutron's API extensions but in that case existing tests
# may fail randomly as all tests are always using same IP addresses
# thus allowed_address_pair may be active=True or active=False.
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index e39ad08..07f0903 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -129,7 +129,7 @@
self.assertIsNone(updated_floating_ip['fixed_ip_address'])
self.assertIsNone(updated_floating_ip['router_id'])
- # Explicity test deletion of floating IP
+ # Explicitly test deletion of floating IP
self.floating_ips_client.delete_floatingip(created_floating_ip['id'])
@decorators.idempotent_id('e1f6bffd-442f-4668-b30e-df13f2705e77')
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index fd93779..b1fba2d 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -389,17 +389,20 @@
# belong to other tests and their state may have changed during this
# test
body = self.subnets_client.list_subnets(network_id=public_network_id)
+ extensions = [
+ ext['alias'] for ext in
+ self.network_extensions_client.list_extensions()['extensions']]
+ is_sen_ext = 'subnet-external-network' in extensions
# check subnet visibility of external_network
- if external_network['shared']:
- self.assertNotEmpty(body['subnets'], "Subnets should be visible "
- "for shared public network %s"
- % public_network_id)
+ if external_network['shared'] or is_sen_ext:
+ self.assertNotEmpty(body['subnets'],
+ 'Subnets should be visible for shared or '
+ 'external networks %s' % public_network_id)
else:
- self.assertEmpty(body['subnets'], "Subnets should not be visible "
- "for non-shared public "
- "network %s"
- % public_network_id)
+ self.assertEmpty(body['subnets'],
+                             'Subnets should not be visible for non-shared, '
+                             'non-external networks %s' % public_network_id)
@decorators.idempotent_id('c72c1c0c-2193-4aca-ccc4-b1442640bbbb')
@utils.requires_ext(extension="standard-attr-description",
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index 0dd7c70..aaedba2 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -18,6 +18,7 @@
from tempest.api.network import base
from tempest.common import utils
+from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
@@ -33,11 +34,17 @@
interface = self.routers_client.add_router_interface(
router_id, subnet_id=subnet_id)
self.addCleanup(self._remove_router_interface_with_subnet_id,
- router_id, subnet_id)
+ router_id, subnet_id, interface['port_id'])
self.assertEqual(subnet_id, interface['subnet_id'])
return interface
- def _remove_router_interface_with_subnet_id(self, router_id, subnet_id):
+ def _remove_router_interface_with_subnet_id(self, router_id, subnet_id,
+ port_id):
+        # NOTE: with DVR and without a VM port, it is not possible to know
+        # which agent will host the router interface, so it won't be bound.
+ if not utils.is_extension_enabled('dvr', 'network'):
+ waiters.wait_for_port_status(client=self.ports_client,
+ port_id=port_id, status='ACTIVE')
body = self.routers_client.remove_router_interface(router_id,
subnet_id=subnet_id)
self.assertEqual(subnet_id, body['subnet_id'])
@@ -107,7 +114,7 @@
interface = self.routers_client.add_router_interface(
router['id'], subnet_id=subnet['id'])
self.addCleanup(self._remove_router_interface_with_subnet_id,
- router['id'], subnet['id'])
+ router['id'], subnet['id'], interface['port_id'])
self.assertIn('subnet_id', interface.keys())
self.assertIn('port_id', interface.keys())
# Verify router id is equal to device id in port details
@@ -183,9 +190,10 @@
next_cidr = next_cidr.next()
# Add router interface with subnet id
- self.create_router_interface(router['id'], subnet['id'])
+ interface = self.create_router_interface(router['id'],
+ subnet['id'])
self.addCleanup(self._remove_router_interface_with_subnet_id,
- router['id'], subnet['id'])
+ router['id'], subnet['id'], interface['port_id'])
cidr = netaddr.IPNetwork(subnet['cidr'])
next_hop = str(cidr[2])
destination = str(subnet['cidr'])
diff --git a/tempest/api/network/test_tags.py b/tempest/api/network/test_tags.py
index bd3e360..a0c6342 100644
--- a/tempest/api/network/test_tags.py
+++ b/tempest/api/network/test_tags.py
@@ -118,7 +118,7 @@
@classmethod
def skip_checks(cls):
super(TagsExtTest, cls).skip_checks()
- # Added condition to support backward compatiblity since
+ # Added condition to support backward compatibility since
# tag-ext has been renamed to standard-attr-tag
if not (utils.is_extension_enabled('tag-ext', 'network') or
utils.is_extension_enabled('standard-attr-tag', 'network')):
diff --git a/tempest/api/object_storage/test_container_sync.py b/tempest/api/object_storage/test_container_sync.py
index e2c9d54..2524def 100644
--- a/tempest/api/object_storage/test_container_sync.py
+++ b/tempest/api/object_storage/test_container_sync.py
@@ -142,7 +142,7 @@
"""Test container synchronization"""
def make_headers(cont, cont_client):
# tell first container to synchronize to a second
- # use rsplit with a maxsplit of 1 to ensure ipv6 adresses are
+ # use rsplit with a maxsplit of 1 to ensure ipv6 addresses are
# handled properly as well
client_proxy_ip = urlparse.urlparse(
cont_client.base_url).netloc.rsplit(':', 1)[0]
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 53ffe7c..7a08545 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -202,7 +202,7 @@
cont = data_utils.rand_name(
prefix=CONF.resource_name_prefix,
name=cont_name)
- kwargs['container'] = cont
+ kwargs['container'] = cont.lower()
self.addCleanup(object_storage.delete_containers,
kwargs['container'], container_client,
diff --git a/tempest/api/volume/test_volume_delete_cascade.py b/tempest/api/volume/test_volume_delete_cascade.py
index 53f1bca..1a50eb5 100644
--- a/tempest/api/volume/test_volume_delete_cascade.py
+++ b/tempest/api/volume/test_volume_delete_cascade.py
@@ -78,8 +78,9 @@
self._assert_cascade_delete(volume['id'])
@decorators.idempotent_id('59a77ede-609b-4ee8-9f68-fc3c6ffe97b5')
- @testtools.skipIf(CONF.volume.storage_protocol == 'ceph',
- 'Skip because of Bug#1677525')
+ @testtools.skipUnless(
+ CONF.volume_feature_enabled.enable_volume_image_dep_tests,
+ 'Volume dependency tests disabled')
def test_volume_from_snapshot_cascade_delete(self):
"""Test deleting a volume with associated volume-associated snapshot
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 150677d..8cf44be 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -119,6 +119,13 @@
self.images_client.delete_image,
image_id)
waiters.wait_for_image_status(self.images_client, image_id, 'active')
+        # This is required for the optimized volume upload path.
+        # New location APIs are async, so we need to wait for the location
+        # import task to complete.
+        # This should work with the old location API since we don't fail if
+        # there are no tasks for the image.
+ waiters.wait_for_image_tasks_status(self.images_client,
+ image_id, 'success')
waiters.wait_for_volume_resource_status(self.volumes_client,
self.volume['id'], 'available')
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 2810440..a3ba974 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -82,7 +82,7 @@
if CONF.volume.backup_driver == "swift":
kwargs["container"] = data_utils.rand_name(
prefix=CONF.resource_name_prefix,
- name=self.__class__.__name__ + '-Backup-container')
+ name=self.__class__.__name__ + '-backup-container').lower()
backup = self.create_backup(volume_id=volume['id'], **kwargs)
self.assertEqual(kwargs["name"], backup['name'])
waiters.wait_for_volume_resource_status(self.volumes_client,
@@ -172,6 +172,52 @@
self.assertTrue(restored_volume_info['bootable'])
+ @decorators.idempotent_id('f86eff09-2a6d-43c1-905e-8079e5754f1e')
+ @utils.services('compute')
+ @decorators.related_bug('1703011')
+ def test_volume_backup_incremental(self):
+        """Test creating a backup after the latest incremental is deleted"""
+ # Create a volume
+ volume = self.create_volume()
+
+ # Create a server
+ server = self.create_server(wait_until='SSHABLE')
+
+ # Attach volume to the server
+ self.attach_volume(server['id'], volume['id'])
+
+ # Create a backup to the attached volume
+ backup1 = self.create_backup(volume['id'], force=True)
+
+ # Validate backup details
+ backup_info = self.backups_client.show_backup(backup1['id'])['backup']
+ self.assertEqual(False, backup_info['has_dependent_backups'])
+ self.assertEqual(False, backup_info['is_incremental'])
+
+        # Create an incremental backup
+ backup2 = self.backups_client.create_backup(
+ volume_id=volume['id'], incremental=True, force=True)['backup']
+ waiters.wait_for_volume_resource_status(self.backups_client,
+ backup2['id'], 'available')
+
+ # Validate incremental backup details
+ backup2_info = self.backups_client.show_backup(backup2['id'])['backup']
+ self.assertEqual(True, backup2_info['is_incremental'])
+ self.assertEqual(False, backup2_info['has_dependent_backups'])
+
+ # Delete the last incremental backup that was created
+ self.backups_client.delete_backup(backup2['id'])
+ self.backups_client.wait_for_resource_deletion(backup2['id'])
+
+ # Create another incremental backup
+ backup3 = self.create_backup(
+ volume_id=volume['id'], incremental=True, force=True)
+
+ # Validate incremental backup details
+ backup3_info = self.backups_client.show_backup(backup3['id'])['backup']
+ self.assertEqual(True, backup3_info['is_incremental'])
+ self.assertEqual(False, backup3_info['has_dependent_backups'])
+
class VolumesBackupsV39Test(base.BaseVolumeTest):
"""Test volumes backup with volume microversion greater than 3.8"""
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index d8480df..754b676 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -45,7 +45,7 @@
image = self.images_client.create_image(
name=image_name,
container_format=CONF.image.container_formats[0],
- disk_format=CONF.image.disk_formats[0],
+ disk_format='raw',
visibility='private',
min_disk=CONF.volume.volume_size + CONF.volume.volume_size_extend)
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
diff --git a/tempest/clients.py b/tempest/clients.py
index 5b31cf8..6707429 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -13,8 +13,13 @@
# License for the specific language governing permissions and limitations
# under the License.
+import os
+
+from oslo_concurrency import lockutils
+
from tempest import config
from tempest.lib import auth
+from tempest.lib.common.rest_client import RestClient
from tempest.lib import exceptions as lib_exc
from tempest.lib.services import clients
@@ -35,6 +40,11 @@
super(Manager, self).__init__(
credentials=credentials, identity_uri=identity_uri, scope=scope,
region=CONF.identity.region)
+ if CONF.record_resources:
+ RestClient.lock_dir = os.path.join(
+ lockutils.get_lock_path(CONF),
+ 'tempest-rec-rw-lock')
+ RestClient.record_resources = True
# TODO(andreaf) When clients are initialised without the right
# parameters available, the calls below will trigger a KeyError.
# We should catch that and raise a better error.
@@ -104,6 +114,15 @@
service=CONF.image.alternate_image_endpoint,
endpoint_type=CONF.image.alternate_image_endpoint_type,
region=CONF.image.region)
+ # NOTE(abhishekk): If no alternate endpoint is configured,
+ # this client will work the same as the base
+ # self.image_cache_client. If your test needs to know if
+ # these are different, check the config option to see if
+ # the alternate_image_endpoint is set.
+ self.cache_client_remote = self.image_v2.ImageCacheClient(
+ service=CONF.image.alternate_image_endpoint,
+ endpoint_type=CONF.image.alternate_image_endpoint_type,
+ region=CONF.image.region)
def _set_compute_clients(self):
self.agents_client = self.compute.AgentsClient()
@@ -164,30 +183,6 @@
self.placement.ResourceProvidersClient()
def _set_identity_clients(self):
- # Clients below use the admin endpoint type of Keystone API v2
- params_v2_admin = {
- 'endpoint_type': CONF.identity.v2_admin_endpoint_type}
- self.endpoints_client = self.identity_v2.EndpointsClient(
- **params_v2_admin)
- self.identity_client = self.identity_v2.IdentityClient(
- **params_v2_admin)
- self.tenants_client = self.identity_v2.TenantsClient(
- **params_v2_admin)
- self.roles_client = self.identity_v2.RolesClient(**params_v2_admin)
- self.users_client = self.identity_v2.UsersClient(**params_v2_admin)
- self.identity_services_client = self.identity_v2.ServicesClient(
- **params_v2_admin)
-
- # Clients below use the public endpoint type of Keystone API v2
- params_v2_public = {
- 'endpoint_type': CONF.identity.v2_public_endpoint_type}
- self.identity_public_client = self.identity_v2.IdentityClient(
- **params_v2_public)
- self.tenants_public_client = self.identity_v2.TenantsClient(
- **params_v2_public)
- self.users_public_client = self.identity_v2.UsersClient(
- **params_v2_public)
-
# Clients below use the endpoint type of Keystone API v3, which is set
# in endpoint_type
params_v3 = {'endpoint_type': CONF.identity.v3_endpoint_type}
@@ -232,16 +227,6 @@
self.identity_limits_client = \
self.identity_v3.LimitsClient(**params_v3)
- # Token clients do not use the catalog. They only need default_params.
- # They read auth_url, so they should only be set if the corresponding
- # API version is marked as enabled
- if CONF.identity_feature_enabled.api_v2:
- if CONF.identity.uri:
- self.token_client = self.identity_v2.TokenClient(
- auth_url=CONF.identity.uri)
- else:
- msg = 'Identity v2 API enabled, but no identity.uri set'
- raise lib_exc.InvalidConfiguration(msg)
if CONF.identity_feature_enabled.api_v3:
if CONF.identity.uri_v3:
self.token_v3_client = self.identity_v3.V3TokenClient(
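The `clients.py` hunk above points every `RestClient` at a shared lock directory before enabling resource recording, so parallel test workers can append to one `resource_list.json` without corrupting it. A minimal sketch of that pattern, using the stdlib `fcntl` in place of `oslo_concurrency` and a hypothetical `record_resource` helper (not Tempest's actual API):

```python
import fcntl
import json
import os


def record_resource(lock_path, list_path, res_type, res_id, name):
    """Append one resource to a shared JSON list under a file lock.

    Multiple workers may call this concurrently; the exclusive flock
    serializes the read-modify-write of the JSON file.
    """
    with open(lock_path, 'w') as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until the lock is held
        try:
            data = {}
            if os.path.exists(list_path):
                with open(list_path) as f:
                    data = json.load(f)
            # resource_list.json groups ids by resource type
            data.setdefault(res_type, {})[res_id] = name
            with open(list_path, 'w') as f:
                json.dump(data, f, indent=2, sort_keys=True)
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

Without the lock, two workers loading and re-dumping the file at the same time would silently drop each other's entries.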
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
index a8a344a..8d06f93 100644
--- a/tempest/cmd/cleanup.py
+++ b/tempest/cmd/cleanup.py
@@ -28,6 +28,10 @@
.. warning::
+ We advise not to run tempest cleanup on production environments.
+
+.. warning::
+
If step 1 is skipped in the example below, the cleanup procedure
may delete resources that existed in the cloud before the test run. This
may cause an unwanted destruction of cloud resources, so use caution with
@@ -45,7 +49,10 @@
* ``--init-saved-state``: Initializes the saved state of the OpenStack
deployment and will output a ``saved_state.json`` file containing resources
from your deployment that will be preserved from the cleanup command. This
- should be done prior to running Tempest tests.
+ should be done prior to running Tempest tests. Note that resources created
+ by other users of your cloud after running ``--init-saved-state`` will not
+ be protected, as they won't be present in the saved_state.json file.
* ``--delete-tempest-conf-objects``: If option is present, then the command
will delete the admin project in addition to the resources associated with
@@ -58,7 +65,44 @@
global objects that will be removed (domains, flavors, images, roles,
projects, and users). Once the cleanup command is executed (e.g. run without
parameters), running it again with ``--dry-run`` should yield an empty
- report.
+ report. We STRONGLY ENCOURAGE running ``tempest cleanup`` with ``--dry-run``
+ first and then verifying that the resources listed in the ``dry_run.json``
+ file are meant to be deleted.
+
+* ``--prefix``: Only resources that match the prefix will be deleted. When this
+ option is used, ``saved_state.json`` file is not needed (no need to run with
+ ``--init-saved-state`` first).
+
+ All tempest resources are created with the prefix value from the config
+ option ``resource_name_prefix`` in tempest.conf. To clean up only the
+ resources created by tempest, you should use the prefix set in your
+ tempest.conf (the default value of ``resource_name_prefix`` is ``tempest``).
+
+ Note that some resources are not named, thus they will not be deleted when
+ filtering based on the prefix. This option will be ignored when
+ ``--init-saved-state`` is used so that it can capture the true init state -
+ all resources present at that moment. If there is any ``saved_state.json``
+ file present (e.g. if you ran the tempest cleanup with ``--init-saved-state``
+ before) and you run the tempest cleanup with ``--prefix``, the
+ ``saved_state.json`` file will be ignored and cleanup will be done based on
+ the passed prefix only.
+
+* ``--resource-list``: Allows the use of the file ``./resource_list.json``,
+ which contains all resources created by Tempest during all Tempest runs, as
+ another method for removing only resources created by Tempest. The list of
+ these resources is built when the config option ``record_resources`` in the
+ default section is set to true. After a cleanup with this option, the
+ deleted resources are removed from the existing ``./resource_list.json``.
+
+ When this option is used, ``saved_state.json`` file is not needed (no
+ need to run with ``--init-saved-state`` first). If there is any
+ ``saved_state.json`` file present and you run the tempest cleanup with
+ ``--resource-list``, the ``saved_state.json`` file will be ignored and
+ cleanup will be done based on the ``resource_list.json`` only.
+
+ If you run tempest cleanup with both ``--prefix`` and ``--resource-list``,
+ the ``--resource-list`` option will be ignored and cleanup will be done
+ based on the ``--prefix`` option only.
* ``--help``: Print the help text for the command and parameters.
@@ -95,6 +139,7 @@
SAVED_STATE_JSON = "saved_state.json"
DRY_RUN_JSON = "dry_run.json"
+RESOURCE_LIST_JSON = "resource_list.json"
LOG = logging.getLogger(__name__)
CONF = config.CONF
@@ -137,6 +182,7 @@
self.admin_mgr = clients.Manager(
credentials.get_configured_admin_credentials())
self.dry_run_data = {}
+ self.resource_data = {}
self.json_data = {}
# available services
@@ -150,13 +196,22 @@
self._init_state()
return
- self._load_json()
+ if parsed_args.prefix:
+ return
+
+ if parsed_args.resource_list:
+ self._load_resource_list()
+ return
+
+ self._load_saved_state()
def _cleanup(self):
LOG.info("Begin cleanup")
is_dry_run = self.options.dry_run
is_preserve = not self.options.delete_tempest_conf_objects
+ is_resource_list = self.options.resource_list
is_save_state = False
+ cleanup_prefix = self.options.prefix
if is_dry_run:
self.dry_run_data["_projects_to_clean"] = {}
@@ -166,9 +221,12 @@
# they are in saved state json. Therefore is_preserve is False
kwargs = {'data': self.dry_run_data,
'is_dry_run': is_dry_run,
+ 'resource_list_json': self.resource_data,
'saved_state_json': self.json_data,
'is_preserve': False,
- 'is_save_state': is_save_state}
+ 'is_resource_list': is_resource_list,
+ 'is_save_state': is_save_state,
+ 'prefix': cleanup_prefix}
project_service = cleanup_service.ProjectService(admin_mgr, **kwargs)
projects = project_service.list()
LOG.info("Processing %s projects", len(projects))
@@ -179,9 +237,12 @@
kwargs = {'data': self.dry_run_data,
'is_dry_run': is_dry_run,
+ 'resource_list_json': self.resource_data,
'saved_state_json': self.json_data,
'is_preserve': is_preserve,
+ 'is_resource_list': is_resource_list,
'is_save_state': is_save_state,
+ 'prefix': cleanup_prefix,
'got_exceptions': self.GOT_EXCEPTIONS}
LOG.info("Processing global services")
for service in self.global_services:
@@ -198,14 +259,21 @@
f.write(json.dumps(self.dry_run_data, sort_keys=True,
indent=2, separators=(',', ': ')))
+ if is_resource_list:
+ LOG.info("Clearing 'resource_list.json' file.")
+ with open(RESOURCE_LIST_JSON, 'w') as f:
+ f.write('{}')
+
def _clean_project(self, project):
LOG.debug("Cleaning project: %s ", project['name'])
is_dry_run = self.options.dry_run
dry_run_data = self.dry_run_data
is_preserve = not self.options.delete_tempest_conf_objects
+ is_resource_list = self.options.resource_list
project_id = project['id']
project_name = project['name']
project_data = None
+ cleanup_prefix = self.options.prefix
if is_dry_run:
project_data = dry_run_data["_projects_to_clean"][project_id] = {}
project_data['name'] = project_name
@@ -213,9 +281,12 @@
kwargs = {'data': project_data,
'is_dry_run': is_dry_run,
'saved_state_json': self.json_data,
+ 'resource_list_json': self.resource_data,
'is_preserve': is_preserve,
+ 'is_resource_list': is_resource_list,
'is_save_state': False,
'project_id': project_id,
+ 'prefix': cleanup_prefix,
'got_exceptions': self.GOT_EXCEPTIONS}
for service in self.project_associated_services:
svc = service(self.admin_mgr, **kwargs)
@@ -243,10 +314,39 @@
help="Generate JSON file:" + DRY_RUN_JSON +
", that reports the objects that would have "
"been deleted had a full cleanup been run.")
+ parser.add_argument('--prefix', dest='prefix', default=None,
+ help="Only resources that match the prefix will "
+ "be deleted (resources in saved_state.json are "
+ "not taken into account). All tempest resources "
+ "are created with the prefix value set by "
+ "resource_name_prefix in tempest.conf, default "
+ "prefix is tempest. Note that some resources are "
+ "not named, thus they will not be deleted when "
+ "filtering based on the prefix. This opt will be "
+ "ignored when --init-saved-state is used so that "
+ "it can capture the true init state - all "
+ "resources present at that moment.")
+ parser.add_argument('--resource-list', action="store_true",
+ dest='resource_list', default=False,
+ help="Runs tempest cleanup with generated "
+ "JSON file: " + RESOURCE_LIST_JSON + " to "
+ "erase resources created during Tempest run. "
+ "NOTE: To create " + RESOURCE_LIST_JSON + " "
+ "set the config option record_resources under the "
+ "default section in tempest.conf to true. This "
+ "option will be ignored when --init-saved-state "
+ "is used so that it can capture the true init "
+ "state - all resources present at that moment. "
+ "This option will be ignored if passed with "
+ "--prefix.")
return parser
def get_description(self):
- return 'Cleanup after tempest run'
+ return ('tempest cleanup tool, read the full documentation before '
+ 'using this tool. We advise not to run it on production '
+ 'environments. In environments where other users may also '
+ 'create resources, we strongly advise using the --dry-run '
+ 'argument first and verifying the content of the dry_run.json file.')
def _init_state(self):
LOG.info("Initializing saved state.")
@@ -256,7 +356,12 @@
'is_dry_run': False,
'saved_state_json': data,
'is_preserve': False,
+ 'is_resource_list': False,
'is_save_state': True,
+ # must be None as we want to capture true init state
+ # (all resources present) thus no filtering based
+ # on the prefix
+ 'prefix': None,
'got_exceptions': self.GOT_EXCEPTIONS}
for service in self.global_services:
svc = service(admin_mgr, **kwargs)
@@ -274,15 +379,31 @@
f.write(json.dumps(data, sort_keys=True,
indent=2, separators=(',', ': ')))
- def _load_json(self, saved_state_json=SAVED_STATE_JSON):
+ def _load_resource_list(self, resource_list_json=RESOURCE_LIST_JSON):
+ try:
+ with open(resource_list_json, 'rb') as json_file:
+ self.resource_data = json.load(json_file)
+ except IOError as ex:
+ LOG.exception(
+ "Failed loading 'resource_list.json', please "
+ "be sure you created this file by setting the config "
+ "option record_resources in the default section to true "
+ "prior to running tempest. Exception: %s", ex)
+ sys.exit(ex)
+ except Exception as ex:
+ LOG.exception(
+ "Exception parsing 'resource_list.json' : %s", ex)
+ sys.exit(ex)
+
+ def _load_saved_state(self, saved_state_json=SAVED_STATE_JSON):
try:
with open(saved_state_json, 'rb') as json_file:
self.json_data = json.load(json_file)
-
except IOError as ex:
- LOG.exception("Failed loading saved state, please be sure you"
- " have first run cleanup with --init-saved-state "
- "flag prior to running tempest. Exception: %s", ex)
+ LOG.exception(
+ "Failed loading saved state, please be sure you"
+ " have first run cleanup with --init-saved-state "
+ "flag prior to running tempest. Exception: %s", ex)
sys.exit(ex)
except Exception as ex:
LOG.exception("Exception parsing saved state json : %s", ex)
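The precedence among the cleanup modes documented above (``--init-saved-state`` ignores both filters, ``--prefix`` wins over ``--resource-list``, and the default consults ``saved_state.json``) can be condensed into one decision function. This is an illustrative sketch, not part of the tool:

```python
def pick_cleanup_mode(init_saved_state=False, prefix=None, resource_list=False):
    """Return which state source `tempest cleanup` would consult.

    Mirrors the documented rules: --init-saved-state captures the true
    initial state with no filtering; --prefix is ignored neither by
    saved state nor by --resource-list, it simply wins; otherwise
    resource_list.json, then saved_state.json, is used.
    """
    if init_saved_state:
        return 'init-saved-state'  # record everything present right now
    if prefix:
        return 'prefix'            # saved_state.json and resource list ignored
    if resource_list:
        return 'resource-list'     # cleanup driven by resource_list.json
    return 'saved-state'           # default: skip ids in saved_state.json
```

This matches the `init()` flow in the hunk above, which returns early for prefix and resource-list runs before ever loading the saved state.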
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index f2370f3..db4407d 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -115,6 +115,34 @@
return [item for item in item_list
if item['tenant_id'] == self.tenant_id]
+ def _filter_by_prefix(self, item_list, top_key=None):
+ items = []
+ for item in item_list:
+ name = item[top_key]['name'] if top_key else item['name']
+ if name.startswith(self.prefix):
+ items.append(item)
+ return items
+
+ def _filter_by_resource_list(self, item_list, attr):
+ if attr not in self.resource_list_json:
+ return []
+ items = []
+ for item in item_list:
+ item_id = (item['keypair']['name'] if attr == 'keypairs'
+ else item['id'])
+ if item_id in self.resource_list_json[attr].keys():
+ items.append(item)
+ return items
+
+ def _filter_out_ids_from_saved(self, item_list, attr):
+ items = []
+ for item in item_list:
+ item_id = (item['keypair']['name'] if attr == 'keypairs'
+ else item['id'])
+ if item_id not in self.saved_state_json[attr].keys():
+ items.append(item)
+ return items
+
def list(self):
pass
@@ -156,10 +184,14 @@
def list(self):
client = self.client
snaps = client.list_snapshots()['snapshots']
- if not self.is_save_state:
+
+ if self.prefix:
+ snaps = self._filter_by_prefix(snaps)
+ elif self.is_resource_list:
+ snaps = self._filter_by_resource_list(snaps, 'snapshots')
+ elif not self.is_save_state:
# recreate list removing saved snapshots
- snaps = [snap for snap in snaps if snap['id']
- not in self.saved_state_json['snapshots'].keys()]
+ snaps = self._filter_out_ids_from_saved(snaps, 'snapshots')
LOG.debug("List count, %s Snapshots", len(snaps))
return snaps
@@ -194,10 +226,14 @@
client = self.client
servers_body = client.list_servers()
servers = servers_body['servers']
- if not self.is_save_state:
+
+ if self.prefix:
+ servers = self._filter_by_prefix(servers)
+ elif self.is_resource_list:
+ servers = self._filter_by_resource_list(servers, 'servers')
+ elif not self.is_save_state:
# recreate list removing saved servers
- servers = [server for server in servers if server['id']
- not in self.saved_state_json['servers'].keys()]
+ servers = self._filter_out_ids_from_saved(servers, 'servers')
LOG.debug("List count, %s Servers", len(servers))
return servers
@@ -226,11 +262,15 @@
def list(self):
client = self.server_groups_client
- sgs = client.list_server_groups()['server_groups']
- if not self.is_save_state:
+ sgs = client.list_server_groups(all_projects=True)['server_groups']
+
+ if self.prefix:
+ sgs = self._filter_by_prefix(sgs)
+ elif self.is_resource_list:
+ sgs = self._filter_by_resource_list(sgs, 'server_groups')
+ elif not self.is_save_state:
# recreate list removing saved server_groups
- sgs = [sg for sg in sgs if sg['id']
- not in self.saved_state_json['server_groups'].keys()]
+ sgs = self._filter_out_ids_from_saved(sgs, 'server_groups')
LOG.debug("List count, %s Server Groups", len(sgs))
return sgs
@@ -263,11 +303,13 @@
def list(self):
client = self.client
keypairs = client.list_keypairs()['keypairs']
- if not self.is_save_state:
- # recreate list removing saved keypairs
- keypairs = [keypair for keypair in keypairs
- if keypair['keypair']['name']
- not in self.saved_state_json['keypairs'].keys()]
+
+ if self.prefix:
+ keypairs = self._filter_by_prefix(keypairs, 'keypair')
+ elif self.is_resource_list:
+ keypairs = self._filter_by_resource_list(keypairs, 'keypairs')
+ elif not self.is_save_state:
+ keypairs = self._filter_out_ids_from_saved(keypairs, 'keypairs')
LOG.debug("List count, %s Keypairs", len(keypairs))
return keypairs
@@ -302,10 +344,14 @@
def list(self):
client = self.client
vols = client.list_volumes()['volumes']
- if not self.is_save_state:
+
+ if self.prefix:
+ vols = self._filter_by_prefix(vols)
+ elif self.is_resource_list:
+ vols = self._filter_by_resource_list(vols, 'volumes')
+ elif not self.is_save_state:
# recreate list removing saved volumes
- vols = [vol for vol in vols if vol['id']
- not in self.saved_state_json['volumes'].keys()]
+ vols = self._filter_out_ids_from_saved(vols, 'volumes')
LOG.debug("List count, %s Volumes", len(vols))
return vols
@@ -336,6 +382,10 @@
self.client = manager.volume_quotas_client_latest
def delete(self):
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore do nothing
+ return
client = self.client
try:
LOG.debug("Deleting Volume Quotas for project with id %s",
@@ -346,6 +396,10 @@
self.project_id)
def dry_run(self):
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore do nothing
+ return
quotas = self.client.show_quota_set(
self.project_id, params={'usage': True})['quota_set']
self.data['volume_quotas'] = quotas
@@ -358,6 +412,10 @@
self.limits_client = manager.limits_client
def delete(self):
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore do nothing
+ return
client = self.client
try:
LOG.debug("Deleting Nova Quotas for project with id %s",
@@ -368,6 +426,10 @@
self.project_id)
def dry_run(self):
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore do nothing
+ return
client = self.limits_client
quotas = client.show_limits()['limits']
self.data['compute_quotas'] = quotas['absolute']
@@ -379,6 +441,10 @@
self.client = manager.network_quotas_client
def delete(self):
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore do nothing
+ return
client = self.client
try:
LOG.debug("Deleting Network Quotas for project with id %s",
@@ -389,6 +455,10 @@
self.project_id)
def dry_run(self):
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore do nothing
+ return
resp = [quota for quota in self.client.list_quotas()['quotas']
if quota['project_id'] == self.project_id]
self.data['network_quotas'] = resp
@@ -423,10 +493,15 @@
networks = client.list_networks(**self.tenant_filter)
networks = networks['networks']
- if not self.is_save_state:
- # recreate list removing saved networks
- networks = [network for network in networks if network['id']
- not in self.saved_state_json['networks'].keys()]
+ if self.prefix:
+ networks = self._filter_by_prefix(networks)
+ elif self.is_resource_list:
+ networks = self._filter_by_resource_list(networks, 'networks')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved networks
+ networks = self._filter_out_ids_from_saved(
+ networks, 'networks')
# filter out networks declared in tempest.conf
if self.is_preserve:
networks = [network for network in networks
@@ -462,10 +537,15 @@
flips = client.list_floatingips(**self.tenant_filter)
flips = flips['floatingips']
- if not self.is_save_state:
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore return empty list
+ return []
+ elif self.is_resource_list:
+ flips = self._filter_by_resource_list(flips, 'floatingips')
+ elif not self.is_save_state:
# recreate list removing saved flips
- flips = [flip for flip in flips if flip['id']
- not in self.saved_state_json['floatingips'].keys()]
+ flips = self._filter_out_ids_from_saved(flips, 'floatingips')
LOG.debug("List count, %s Network Floating IPs", len(flips))
return flips
@@ -499,14 +579,17 @@
routers = client.list_routers(**self.tenant_filter)
routers = routers['routers']
- if not self.is_save_state:
- # recreate list removing saved routers
- routers = [router for router in routers if router['id']
- not in self.saved_state_json['routers'].keys()]
+ if self.prefix:
+ routers = self._filter_by_prefix(routers)
+ elif self.is_resource_list:
+ routers = self._filter_by_resource_list(routers, 'routers')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved routers
+ routers = self._filter_out_ids_from_saved(routers, 'routers')
if self.is_preserve:
routers = [router for router in routers
if router['id'] != CONF_PUB_ROUTER]
-
LOG.debug("List count, %s Routers", len(routers))
return routers
@@ -552,10 +635,17 @@
rules = rules['metering_label_rules']
rules = self._filter_by_tenant_id(rules)
- if not self.is_save_state:
- saved_rules = self.saved_state_json['metering_label_rules'].keys()
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore return empty list
+ return []
+ elif self.is_resource_list:
+ rules = self._filter_by_resource_list(
+ rules, 'metering_label_rules')
+ elif not self.is_save_state:
+ rules = self._filter_out_ids_from_saved(
+ rules, 'metering_label_rules')
# recreate list removing saved rules
- rules = [rule for rule in rules if rule['id'] not in saved_rules]
LOG.debug("List count, %s Metering Label Rules", len(rules))
return rules
@@ -590,10 +680,15 @@
labels = labels['metering_labels']
labels = self._filter_by_tenant_id(labels)
- if not self.is_save_state:
+ if self.prefix:
+ labels = self._filter_by_prefix(labels)
+ elif self.is_resource_list:
+ labels = self._filter_by_resource_list(
+ labels, 'metering_labels')
+ elif not self.is_save_state:
# recreate list removing saved labels
- labels = [label for label in labels if label['id']
- not in self.saved_state_json['metering_labels'].keys()]
+ labels = self._filter_out_ids_from_saved(
+ labels, 'metering_labels')
LOG.debug("List count, %s Metering Labels", len(labels))
return labels
@@ -628,13 +723,16 @@
if port["device_owner"] == "" or
port["device_owner"].startswith("compute:")]
- if not self.is_save_state:
- # recreate list removing saved ports
- ports = [port for port in ports if port['id']
- not in self.saved_state_json['ports'].keys()]
+ if self.prefix:
+ ports = self._filter_by_prefix(ports)
+ elif self.is_resource_list:
+ ports = self._filter_by_resource_list(ports, 'ports')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved ports
+ ports = self._filter_out_ids_from_saved(ports, 'ports')
if self.is_preserve:
ports = self._filter_by_conf_networks(ports)
-
LOG.debug("List count, %s Ports", len(ports))
return ports
@@ -668,15 +766,21 @@
client.list_security_groups(**filter)['security_groups']
if secgroup['name'] != 'default']
- if not self.is_save_state:
- # recreate list removing saved security_groups
- secgroups = [secgroup for secgroup in secgroups if secgroup['id']
- not in self.saved_state_json['security_groups'].keys()
- ]
+ if self.prefix:
+ secgroups = self._filter_by_prefix(secgroups)
+ elif self.is_resource_list:
+ secgroups = self._filter_by_resource_list(
+ secgroups, 'security_groups')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved security_groups
+ secgroups = self._filter_out_ids_from_saved(
+ secgroups, 'security_groups')
if self.is_preserve:
- secgroups = [secgroup for secgroup in secgroups
- if secgroup['security_group_rules'][0]['project_id']
- not in CONF_PROJECTS]
+ secgroups = [
+ secgroup for secgroup in secgroups
+ if secgroup['security_group_rules'][0]['project_id']
+ not in CONF_PROJECTS]
LOG.debug("List count, %s security_groups", len(secgroups))
return secgroups
@@ -708,10 +812,15 @@
client = self.subnets_client
subnets = client.list_subnets(**self.tenant_filter)
subnets = subnets['subnets']
- if not self.is_save_state:
- # recreate list removing saved subnets
- subnets = [subnet for subnet in subnets if subnet['id']
- not in self.saved_state_json['subnets'].keys()]
+
+ if self.prefix:
+ subnets = self._filter_by_prefix(subnets)
+ elif self.is_resource_list:
+ subnets = self._filter_by_resource_list(subnets, 'subnets')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved subnets
+ subnets = self._filter_out_ids_from_saved(subnets, 'subnets')
if self.is_preserve:
subnets = self._filter_by_conf_networks(subnets)
LOG.debug("List count, %s Subnets", len(subnets))
@@ -743,10 +852,15 @@
def list(self):
client = self.subnetpools_client
pools = client.list_subnetpools(**self.tenant_filter)['subnetpools']
- if not self.is_save_state:
- # recreate list removing saved subnet pools
- pools = [pool for pool in pools if pool['id']
- not in self.saved_state_json['subnetpools'].keys()]
+
+ if self.prefix:
+ pools = self._filter_by_prefix(pools)
+ elif self.is_resource_list:
+ pools = self._filter_by_resource_list(pools, 'subnetpools')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved subnet pools
+ pools = self._filter_out_ids_from_saved(pools, 'subnetpools')
if self.is_preserve:
pools = [pool for pool in pools if pool['project_id']
not in CONF_PROJECTS]
@@ -784,9 +898,18 @@
def list(self):
client = self.client
regions = client.list_regions()
- if not self.is_save_state:
- regions = [region for region in regions['regions'] if region['id']
- not in self.saved_state_json['regions'].keys()]
+
+ if self.prefix:
+ # this means we're cleaning resources based on a certain prefix,
+ # this resource doesn't have a name, therefore return empty list
+ return []
+ elif self.is_resource_list:
+ regions = self._filter_by_resource_list(
+ regions['regions'], 'regions')
+ return regions
+ elif not self.is_save_state:
+ regions = self._filter_out_ids_from_saved(
+ regions['regions'], 'regions')
LOG.debug("List count, %s Regions", len(regions))
return regions
else:
@@ -824,11 +947,15 @@
def list(self):
client = self.client
flavors = client.list_flavors({"is_public": None})['flavors']
- if not self.is_save_state:
- # recreate list removing saved flavors
- flavors = [flavor for flavor in flavors if flavor['id']
- not in self.saved_state_json['flavors'].keys()]
+ if self.prefix:
+ flavors = self._filter_by_prefix(flavors)
+ elif self.is_resource_list:
+ flavors = self._filter_by_resource_list(flavors, 'flavors')
+ else:
+ if not self.is_save_state:
+ # recreate list removing saved flavors
+ flavors = self._filter_out_ids_from_saved(flavors, 'flavors')
if self.is_preserve:
flavors = [flavor for flavor in flavors
if flavor['id'] not in CONF_FLAVORS]
@@ -872,9 +999,13 @@
response = client.list_images(params={"marker": marker})
images.extend(response['images'])
- if not self.is_save_state:
- images = [image for image in images if image['id']
- not in self.saved_state_json['images'].keys()]
+ if self.prefix:
+ images = self._filter_by_prefix(images)
+ elif self.is_resource_list:
+ images = self._filter_by_resource_list(images, 'images')
+ else:
+ if not self.is_save_state:
+ images = self._filter_out_ids_from_saved(images, 'images')
if self.is_preserve:
images = [image for image in images
if image['id'] not in CONF_IMAGES]
@@ -910,19 +1041,19 @@
def list(self):
users = self.client.list_users()['users']
-
- if not self.is_save_state:
- users = [user for user in users if user['id']
- not in self.saved_state_json['users'].keys()]
-
+ if self.prefix:
+ users = self._filter_by_prefix(users)
+ elif self.is_resource_list:
+ users = self._filter_by_resource_list(users, 'users')
+ else:
+ if not self.is_save_state:
+ users = self._filter_out_ids_from_saved(users, 'users')
if self.is_preserve:
users = [user for user in users if user['name']
not in CONF_USERS]
-
elif not self.is_save_state: # Never delete admin user
users = [user for user in users if user['name'] !=
CONF.auth.admin_username]
-
LOG.debug("List count, %s Users after reconcile", len(users))
return users
@@ -955,13 +1086,17 @@
def list(self):
try:
roles = self.client.list_roles()['roles']
- # reconcile roles with saved state and never list admin role
- if not self.is_save_state:
- roles = [role for role in roles if
- (role['id'] not in
- self.saved_state_json['roles'].keys() and
- role['name'] != CONF.identity.admin_role)]
- LOG.debug("List count, %s Roles after reconcile", len(roles))
+
+ if self.prefix:
+ roles = self._filter_by_prefix(roles)
+ elif self.is_resource_list:
+ roles = self._filter_by_resource_list(roles, 'roles')
+ elif not self.is_save_state:
+ # reconcile roles with saved state and never list admin role
+ roles = self._filter_out_ids_from_saved(roles, 'roles')
+ roles = [role for role in roles
+ if role['name'] != CONF.identity.admin_role]
+ LOG.debug("List count, %s Roles after reconcile", len(roles))
return roles
except Exception:
LOG.exception("Cannot retrieve Roles.")
@@ -995,18 +1130,20 @@
def list(self):
projects = self.client.list_projects()['projects']
- if not self.is_save_state:
- project_ids = self.saved_state_json['projects']
- projects = [project
- for project in projects
- if (project['id'] not in project_ids and
- project['name'] != CONF.auth.admin_project_name)]
+ if self.prefix:
+ projects = self._filter_by_prefix(projects)
+ elif self.is_resource_list:
+ projects = self._filter_by_resource_list(projects, 'projects')
+ else:
+ if not self.is_save_state:
+ projects = self._filter_out_ids_from_saved(
+ projects, 'projects')
+ projects = [project for project in projects
+ if project['name'] != CONF.auth.admin_project_name]
if self.is_preserve:
- projects = [project
- for project in projects
+ projects = [project for project in projects
if project['name'] not in CONF_PROJECTS]
-
LOG.debug("List count, %s Projects after reconcile", len(projects))
return projects
@@ -1039,10 +1176,13 @@
def list(self):
client = self.client
domains = client.list_domains()['domains']
- if not self.is_save_state:
- domains = [domain for domain in domains if domain['id']
- not in self.saved_state_json['domains'].keys()]
+ if self.prefix:
+ domains = self._filter_by_prefix(domains)
+ elif self.is_resource_list:
+ domains = self._filter_by_resource_list(domains, 'domains')
+ elif not self.is_save_state:
+ domains = self._filter_out_ids_from_saved(domains, 'domains')
LOG.debug("List count, %s Domains after reconcile", len(domains))
return domains
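Every service's `list()` above now funnels through the same three filter helpers. Their behavior on sample API responses can be sketched with a standalone reimplementation (for illustration only); note the keypair special case, where items are keyed by a nested name rather than a top-level id:

```python
def filter_by_prefix(items, prefix, top_key=None):
    """Keep items whose name starts with the prefix (e.g. 'tempest')."""
    out = []
    for item in items:
        name = item[top_key]['name'] if top_key else item['name']
        if name.startswith(prefix):
            out.append(item)
    return out


def filter_by_resource_list(items, recorded, attr):
    """Keep only items recorded in resource_list.json under `attr`."""
    if attr not in recorded:
        return []
    # keypairs are recorded by name, everything else by id
    key = ((lambda i: i['keypair']['name']) if attr == 'keypairs'
           else (lambda i: i['id']))
    return [i for i in items if key(i) in recorded[attr]]


servers = [{'id': '1', 'name': 'tempest-server'},
           {'id': '2', 'name': 'prod-db'}]
keypairs = [{'keypair': {'name': 'tempest-kp'}}]
```

With a prefix of ``tempest``, only `tempest-server` and `tempest-kp` survive; with a recorded-resource map containing only id ``2``, only `prod-db` would be cleaned up.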
diff --git a/tempest/cmd/run.py b/tempest/cmd/run.py
index 2669ff7..e305646 100644
--- a/tempest/cmd/run.py
+++ b/tempest/cmd/run.py
@@ -269,6 +269,8 @@
return_code = commands.run_command(
**params, blacklist_file=ex_list,
whitelist_file=in_list, black_regex=ex_regex)
+ if parsed_args.slowest:
+ commands.slowest_command()
if return_code > 0:
sys.exit(return_code)
return return_code
@@ -392,6 +394,9 @@
help='Combine the output of this run with the '
"previous run's as a combined stream in the "
"stestr repository after it finish")
+ parser.add_argument('--slowest', action='store_true',
+ help='Show the longest running tests in the '
+ 'stestr repository after it finishes')
parser.set_defaults(parallel=True)
return parser
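The new `--slowest` flag is plain argparse `store_true` wiring plus a follow-up call after the run completes; a minimal sketch under that reading (`slowest_command` here is a stand-in for stestr's slowest-report step, not its real signature):

```python
import argparse

# Stand-in for the post-run report step triggered by --slowest.
def slowest_command():
    return 'slowest tests report'

parser = argparse.ArgumentParser()
parser.add_argument('--slowest', action='store_true',
                    help='Show the longest running tests after the run')
args = parser.parse_args(['--slowest'])
# Only chain the report when the flag was given, as in run.py above.
report = slowest_command() if args.slowest else None
print(report)
```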
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index a8aafe9..49fcaf2 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -424,7 +424,7 @@
class _WebSocket(object):
def __init__(self, client_socket, url):
- """Contructor for the WebSocket wrapper to the socket."""
+ """Constructor for the WebSocket wrapper to the socket."""
self._socket = client_socket
# cached stream for early frames.
self.cached_stream = b''
diff --git a/tempest/common/custom_matchers.py b/tempest/common/custom_matchers.py
index b0bf5b2..8d257b0 100644
--- a/tempest/common/custom_matchers.py
+++ b/tempest/common/custom_matchers.py
@@ -53,7 +53,7 @@
# Check common headers for all HTTP methods.
#
# Please note that for 1xx and 204 responses Content-Length presence
- # is not checked intensionally. According to RFC 7230 a server MUST
+ # is not checked intentionally. According to RFC 7230 a server MUST
# NOT send the header in such responses. Thus, clients should not
# depend on this header. However, the standard does not require them
# to validate the server's behavior. We leverage that to not refuse
diff --git a/tempest/common/utils/__init__.py b/tempest/common/utils/__init__.py
index 0fa5ce4..0c510de 100644
--- a/tempest/common/utils/__init__.py
+++ b/tempest/common/utils/__init__.py
@@ -29,12 +29,7 @@
'compute': CONF.service_available.nova,
'image': CONF.service_available.glance,
'volume': CONF.service_available.cinder,
- # NOTE(masayukig): We have two network services which are neutron and
- # nova-network. And we have no way to know whether nova-network is
- # available or not. After the pending removal of nova-network from
- # nova, we can treat the network/neutron case in the same manner as
- # the other services.
- 'network': True,
+ 'network': CONF.service_available.neutron,
# NOTE(masayukig): Tempest tests always require the identity service.
# So we should set this True here.
'identity': True,
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 0d93430..79cc09c 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -59,14 +59,14 @@
output = self.exec_command(command)
selected = []
pos = None
- for l in output.splitlines():
- if pos is None and l.find("TYPE") > 0:
- pos = l.find("TYPE")
+ for line in output.splitlines():
+ if pos is None and line.find("TYPE") > 0:
+ pos = line.find("TYPE")
# Show header line too
- selected.append(l)
+ selected.append(line)
# lsblk lists disk type in a column right-aligned with TYPE
- elif pos is not None and pos > 0 and l[pos:pos + 4] == "disk":
- selected.append(l)
+ elif pos is not None and pos > 0 and line[pos:pos + 4] == "disk":
+ selected.append(line)
if selected:
return "\n".join(selected)
@@ -121,9 +121,9 @@
def _get_dns_servers(self):
cmd = 'cat /etc/resolv.conf'
resolve_file = self.exec_command(cmd).strip().split('\n')
- entries = (l.split() for l in resolve_file)
- dns_servers = [l[1] for l in entries
- if len(l) and l[0] == 'nameserver']
+ entries = (line.split() for line in resolve_file)
+ dns_servers = [line[1] for line in entries
+ if len(line) and line[0] == 'nameserver']
return dns_servers
def get_dns_servers(self, timeout=5):
@@ -182,6 +182,9 @@
def umount(self, mount_path='/mnt'):
self.exec_command('sudo umount %s' % mount_path)
+ def mkdir(self, dir_path):
+ self.exec_command('sudo mkdir -p %s' % dir_path)
+
def make_fs(self, dev_name, fs='ext4'):
cmd_mkfs = 'sudo mkfs -t %s /dev/%s' % (fs, dev_name)
try:
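The rename from `l` to `line` in `_get_dns_servers` keeps the same parsing logic, which can be exercised standalone against a synthetic `resolv.conf`:

```python
# Same parsing as _get_dns_servers above: split each line into fields and
# keep the second field of every 'nameserver' entry.
resolv_conf = """\
# comment
search example.org
nameserver 10.0.0.2
nameserver 8.8.8.8
"""

lines = resolv_conf.strip().split('\n')
entries = (line.split() for line in lines)
dns_servers = [line[1] for line in entries
               if len(line) and line[0] == 'nameserver']
print(dns_servers)
```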
diff --git a/tempest/common/utils/net_downtime.py b/tempest/common/utils/net_downtime.py
index 9675ec8..ec1a4c8 100644
--- a/tempest/common/utils/net_downtime.py
+++ b/tempest/common/utils/net_downtime.py
@@ -22,12 +22,38 @@
LOG = log.getLogger(__name__)
+PASSED = 'PASSED'
+FAILED = 'FAILED'
+METADATA_SCRIPT_PATH = '/tmp/metadata_meter_script.sh'
+METADATA_RESULTS_PATH = '/tmp/metadata_meter.log'
+METADATA_PID_PATH = '/tmp/metadata_meter.pid'
+# /proc/uptime is used because it includes two decimals in cirros, while
+# `date +%s.%N` does not work in cirros (minimum granularity is seconds)
+METADATA_SCRIPT = """#!/bin/sh
+echo $$ > %(metadata_meter_pidfile)s
+old_time=$(cut -d" " -f1 /proc/uptime)
+while true; do
+ curl http://169.254.169.254/latest/meta-data/hostname 2>/dev/null | \
+grep -q `hostname`
+ result=$?
+ new_time=$(cut -d" " -f1 /proc/uptime)
+ runtime=$(awk -v new=$new_time -v old=$old_time "BEGIN {print new-old}")
+ old_time=$new_time
+ if [ $result -eq 0 ]; then
+ echo "PASSED $runtime"
+ else
+ echo "FAILED $runtime"
+ fi
+ sleep %(interval)s
+done
+"""
+
class NetDowntimeMeter(fixtures.Fixture):
- def __init__(self, dest_ip, interval='0.2'):
+ def __init__(self, dest_ip, interval=0.2):
self.dest_ip = dest_ip
# Note: for intervals lower than 0.2 ping requires root privileges
- self.interval = interval
+ self.interval = float(interval)
self.ping_process = None
def _setUp(self):
@@ -35,18 +61,18 @@
def start_background_pinger(self):
cmd = ['ping', '-q', '-s1']
- cmd.append('-i{}'.format(self.interval))
+ cmd.append('-i%g' % self.interval)
cmd.append(self.dest_ip)
- LOG.debug("Starting background pinger to '{}' with interval {}".format(
- self.dest_ip, self.interval))
+ LOG.debug("Starting background pinger to '%s' with interval %g",
+ self.dest_ip, self.interval)
self.ping_process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
self.addCleanup(self.cleanup)
def cleanup(self):
if self.ping_process and self.ping_process.poll() is None:
- LOG.debug('Terminating background pinger with pid {}'.format(
- self.ping_process.pid))
+ LOG.debug('Terminating background pinger with pid %d',
+ self.ping_process.pid)
self.ping_process.terminate()
self.ping_process = None
@@ -57,7 +83,68 @@
output = self.ping_process.stderr.readline().strip().decode('utf-8')
if output and len(output.split()[0].split('/')) == 2:
succ, total = output.split()[0].split('/')
- return (int(total) - int(succ)) * float(self.interval)
+ return (int(total) - int(succ)) * self.interval
else:
LOG.warning('Unexpected output obtained from the pinger: %s',
output)
+
+
+class MetadataDowntimeMeter(fixtures.Fixture):
+ def __init__(self, ssh_client,
+ interval='0.2', script_path=METADATA_SCRIPT_PATH,
+ output_path=METADATA_RESULTS_PATH,
+ pidfile_path=METADATA_PID_PATH):
+ self.ssh_client = ssh_client
+ self.interval = interval
+ self.script_path = script_path
+ self.output_path = output_path
+ self.pidfile_path = pidfile_path
+ self.pid = None
+
+ def _setUp(self):
+ self.addCleanup(self.cleanup)
+ self.upload_metadata_script()
+ self.run_metadata_script()
+
+ def upload_metadata_script(self):
+ metadata_script = METADATA_SCRIPT % {
+ 'metadata_meter_pidfile': self.pidfile_path,
+ 'interval': self.interval}
+ echo_cmd = "echo '{}' > {}".format(
+ metadata_script, self.script_path)
+ chmod_cmd = 'chmod +x {}'.format(self.script_path)
+ self.ssh_client.exec_command(';'.join((echo_cmd, chmod_cmd)))
+ LOG.debug('script created: %s', self.script_path)
+ output = self.ssh_client.exec_command(
+ 'cat {}'.format(self.script_path))
+ LOG.debug('script content: %s', output)
+
+ def run_metadata_script(self):
+ self.ssh_client.exec_command('{} > {} &'.format(self.script_path,
+ self.output_path))
+ self.pid = self.ssh_client.exec_command(
+ 'cat {}'.format(self.pidfile_path)).strip()
+ LOG.debug('running metadata downtime meter script in background with '
+ 'PID = %s', self.pid)
+
+ def get_results(self):
+ output = self.ssh_client.exec_command(
+ 'cat {}'.format(self.output_path))
+ results = {}
+ results['successes'] = output.count(PASSED)
+ results['failures'] = output.count(FAILED)
+ downtime = {PASSED: 0.0, FAILED: 0.0}
+ for line in output.splitlines():
+ key, value = line.strip().split()
+ downtime[key] += float(value)
+
+ results['downtime'] = downtime
+ LOG.debug('metadata downtime meter results: %r', results)
+ return results
+
+ def cleanup(self):
+ if self.pid:
+ self.ssh_client.exec_command('kill {}'.format(self.pid))
+ LOG.debug('killed metadata downtime script with PID %s', self.pid)
+ else:
+ LOG.debug('No metadata downtime script found')
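The `get_results()` aggregation added above can be replayed against a synthetic `metadata_meter.log` to see the expected shape of the result dict (the log content here is fabricated for illustration):

```python
PASSED = 'PASSED'
FAILED = 'FAILED'

# Synthetic meter output: one "<OUTCOME> <runtime>" line per probe.
output = "PASSED 0.21\nFAILED 0.20\nPASSED 0.19\n"

# Count outcomes, then sum per-outcome runtimes, as get_results() does.
results = {'successes': output.count(PASSED),
           'failures': output.count(FAILED)}
downtime = {PASSED: 0.0, FAILED: 0.0}
for line in output.splitlines():
    key, value = line.strip().split()
    downtime[key] += float(value)
results['downtime'] = downtime
print(results)
```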
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index d3be6fd..b4312b7 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -103,7 +103,8 @@
old_task_state = task_state
-def wait_for_server_termination(client, server_id, ignore_error=False):
+def wait_for_server_termination(client, server_id, ignore_error=False,
+ request_id=None):
"""Waits for server to reach termination."""
try:
body = client.show_server(server_id)['server']
@@ -126,9 +127,13 @@
'/'.join((server_status, str(task_state))),
time.time() - start_time)
if server_status == 'ERROR' and not ignore_error:
- raise lib_exc.DeleteErrorException(
- "Server %s failed to delete and is in ERROR status" %
- server_id)
+ details = ("Server %s failed to delete and is in ERROR status." %
+ server_id)
+ if 'fault' in body:
+ details += ' Fault: %s.' % body['fault']
+ if request_id:
+ details += ' Server delete request ID: %s.' % request_id
+ raise lib_exc.DeleteErrorException(details, server_id=server_id)
if server_status == 'SOFT_DELETED':
# Soft-deleted instances need to be forcibly deleted to
@@ -149,13 +154,21 @@
def wait_for_image_status(client, image_id, status):
- """Waits for an image to reach a given status.
+ """Waits for an image to reach a given status (or list of them).
The client should have a show_image(image_id) method to get the image.
The client should also have build_interval and build_timeout attributes.
+
+ status can be either a string or a list of strings that constitute a
+ terminal state that we will return.
"""
show_image = client.show_image
+ if isinstance(status, str):
+ terminal_status = [status]
+ else:
+ terminal_status = status
+
current_status = 'An unknown status'
start = int(time.time())
while int(time.time()) - start < client.build_timeout:
@@ -166,8 +179,8 @@
image = image['image']
current_status = image['status']
- if current_status == status:
- return
+ if current_status in terminal_status:
+ return current_status
if current_status.lower() == 'killed':
raise exceptions.ImageKilledException(image_id=image_id,
status=status)
@@ -179,7 +192,7 @@
message = ('Image %(image_id)s failed to reach %(status)s state '
'(current state %(current_status)s) within the required '
'time (%(timeout)s s).' % {'image_id': image_id,
- 'status': status,
+ 'status': ','.join(terminal_status),
'current_status': current_status,
'timeout': client.build_timeout})
caller = test_utils.find_test_caller()
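The behavioral core of the `wait_for_image_status` change is normalizing a bare string into a one-element list of terminal statuses, so membership checks work the same for both call styles; a minimal sketch:

```python
def normalize_status(status):
    # Accept either a single status string or a list of terminal
    # statuses, mirroring the updated waiter above.
    if isinstance(status, str):
        return [status]
    return status

terminal = normalize_status(['active', 'killed'])
print('active' in terminal, ','.join(terminal))
```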
@@ -311,6 +324,35 @@
raise lib_exc.TimeoutException(message)
+def wait_for_image_deleted_from_store(client, image, available_stores,
+ image_store_deleted):
+ """Waits for an image to be deleted from a specific store.
+
+ The API does not allow deleting an image's last remaining location,
+ so this raises if only one store is available. Otherwise it returns
+ once the image is no longer present in the given store.
+ """
+
+ # Check whether the image is down to its last store location
+ exc_cls = lib_exc.OtherRestClientException
+ if len(available_stores) == 1:
+ message = 'Delete from last store location not allowed'
+ raise exc_cls(message)
+ start = int(time.time())
+ while int(time.time()) - start < client.build_timeout:
+ image = client.show_image(image['id'])
+ image_stores = image['stores'].split(",")
+ if image_store_deleted not in image_stores:
+ return
+ time.sleep(client.build_interval)
+ message = ('Failed to delete %s from requested store location: %s '
+ 'within the required time: (%s s)' %
+ (image, image_store_deleted, client.build_timeout))
+ caller = test_utils.find_test_caller()
+ if caller:
+ message = '(%s) %s' % (caller, message)
+ raise exc_cls(message)
+
+
def wait_for_volume_resource_status(client, resource_id, status,
server_id=None, servers_client=None):
"""Waits for a volume resource to reach a given status.
@@ -513,7 +555,7 @@
interface_status = body['port_state']
start = int(time.time())
- while(interface_status != status):
+ while interface_status != status:
time.sleep(client.build_interval)
body = (client.show_interface(server_id, port_id)
['interfaceAttachment'])
@@ -576,7 +618,7 @@
floating IPs.
:param server: The server JSON dict on which to wait.
:param floating_ip: The floating IP JSON dict on which to wait.
- :param wait_for_disassociate: Boolean indiating whether to wait for
+ :param wait_for_disassociate: Boolean indicating whether to wait for
disassociation instead of association.
"""
@@ -637,6 +679,28 @@
raise lib_exc.TimeoutException
+def wait_for_server_ports_active(client, server_id, is_active, **kwargs):
+ """Wait for all server ports to reach active status
+
+ :param client: The network client to use when querying the port's status
+ :param server_id: The uuid of the server whose ports we need to verify.
+ :param is_active: A function called to check a port's active status.
+ :param kwargs: Additional arguments, if any, to pass to list_ports()
+ """
+ start_time = time.time()
+ while (time.time() - start_time <= client.build_timeout):
+ ports = client.list_ports(device_id=server_id, **kwargs)['ports']
+ if all(is_active(port) for port in ports):
+ LOG.debug("Server ID %s ports are all ACTIVE: %s",
+ server_id, ports)
+ return ports
+ LOG.warning("Server ID %s has ports that are not ACTIVE, waiting "
+ "for state to change on all: %s", server_id, ports)
+ time.sleep(client.build_interval)
+ LOG.error("Server ID %s ports have failed to transition to ACTIVE, "
+ "timing out: %s", server_id, ports)
+ raise lib_exc.TimeoutException
+
+
def wait_for_ssh(ssh_client, timeout=30):
"""Waits for SSH connection to become usable"""
start_time = int(time.time())
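`wait_for_server_ports_active` delegates the per-port check to the caller-supplied `is_active` predicate and only succeeds when `all()` ports pass. A standalone illustration with hypothetical port dicts (the `status` field name is an assumption for the sketch):

```python
# Caller-supplied predicate, as passed to wait_for_server_ports_active.
def is_active(port):
    return port['status'] == 'ACTIVE'

ports = [{'id': 'p1', 'status': 'ACTIVE'},
         {'id': 'p2', 'status': 'DOWN'}]
print(all(is_active(p) for p in ports))   # one port still DOWN

ports[1]['status'] = 'ACTIVE'
print(all(is_active(p) for p in ports))   # now every port passes
```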
diff --git a/tempest/config.py b/tempest/config.py
index 699e271..36f0152 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -126,12 +126,20 @@
default=None,
help='Specify a CA bundle file to use in verifying a '
'TLS (https) server certificate.'),
- cfg.StrOpt('uri',
+ cfg.URIOpt('uri',
+ schemes=['http', 'https'],
+ deprecated_for_removal=True,
+ deprecated_reason='The identity v2 API tests were removed '
+ 'and this option has no effect',
help="Full URI of the OpenStack Identity API (Keystone), v2"),
- cfg.StrOpt('uri_v3',
+ cfg.URIOpt('uri_v3',
+ schemes=['http', 'https'],
help='Full URI of the OpenStack Identity API (Keystone), v3'),
cfg.StrOpt('auth_version',
default='v3',
+ deprecated_for_removal=True,
+ deprecated_reason='Identity v2 API was removed and v3 is '
+ 'the only available identity API version now',
help="Identity API version to be used for authentication "
"for API tests."),
cfg.StrOpt('region',
@@ -144,12 +152,16 @@
default='adminURL',
choices=['public', 'admin', 'internal',
'publicURL', 'adminURL', 'internalURL'],
+ deprecated_for_removal=True,
+ deprecated_reason='This option has no effect',
help="The admin endpoint type to use for OpenStack Identity "
"(Keystone) API v2"),
cfg.StrOpt('v2_public_endpoint_type',
default='publicURL',
choices=['public', 'admin', 'internal',
'publicURL', 'adminURL', 'internalURL'],
+ deprecated_for_removal=True,
+ deprecated_reason='This option has no effect',
help="The public endpoint type to use for OpenStack Identity "
"(Keystone) API v2"),
cfg.StrOpt('v3_endpoint_type',
@@ -232,19 +244,22 @@
'impersonation enabled'),
cfg.BoolOpt('api_v2',
default=False,
- help='Is the v2 identity API enabled',
deprecated_for_removal=True,
- deprecated_reason='The identity v2.0 API was removed in the '
- 'Queens release. Tests that exercise the '
- 'v2.0 API will be removed from tempest in '
- 'the v22.0.0 release. They are kept only to '
- 'test stable branches.'),
+ deprecated_reason='The identity v2 API tests were removed '
+ 'and this option has no effect',
+ help='Is the v2 identity API enabled'),
cfg.BoolOpt('api_v2_admin',
default=True,
- help="Is the v2 identity admin API available? This setting "
- "only applies if api_v2 is set to True."),
+ deprecated_for_removal=True,
+ deprecated_reason='The identity v2 API tests were removed '
+ 'and this option has no effect',
+ help="Is the v2 identity admin API available?"),
cfg.BoolOpt('api_v3',
default=True,
+ deprecated_for_removal=True,
+ deprecated_reason='Identity v2 API was removed and v3 is '
+ 'the only available identity API version '
+ 'now',
help='Is the v3 identity API enabled'),
cfg.ListOpt('api_extensions',
default=['all'],
@@ -263,23 +278,11 @@
default=False,
help='Does the environment have the security compliance '
'settings enabled?'),
- cfg.BoolOpt('project_tags',
- default=True,
- help='Is the project tags identity v3 API available?',
- deprecated_for_removal=True,
- deprecated_reason='Project tags API is a default feature '
- 'since Queens'),
- cfg.BoolOpt('application_credentials',
- default=True,
- help='Does the environment have application credentials '
- 'enabled?',
- deprecated_for_removal=True,
- deprecated_reason='Application credentials is a default '
- 'feature since Queens'),
- # Access rules for application credentials is a default feature in Train.
- # This config option can removed once Stein is EOL.
cfg.BoolOpt('access_rules',
- default=False,
+ default=True,
+ deprecated_for_removal=True,
+ deprecated_reason='Access rules for application credentials '
+ 'is a default feature since Train',
help='Does the environment have access rules enabled?'),
cfg.BoolOpt('immutable_user_source',
default=False,
@@ -405,6 +408,21 @@
'allow_availability_zone_fallback=False in cinder.conf), '
'the volume create request will fail and the instance '
'will fail the build request.'),
+ cfg.StrOpt('migration_source_host',
+ default=None,
+ help="Specify the source host for live-migration, cold-migration"
+ " and resize tests. If this option is not set, tests will"
+ " choose a host automatically."),
+ cfg.StrOpt('migration_dest_host',
+ default=None,
+ help="Specify the destination host for live-migration and cold"
+ " migration tests. If this option is not set, tests will"
+ " choose a host automatically."),
+ cfg.StrOpt('target_hosts_to_avoid',
+ default='-ironic',
+ help="When aggregating available hypervisors for testing,"
+ " avoid migrating to and booting any test VM on hosts with"
+ " a name that matches the provided pattern"),
]
placement_group = cfg.OptGroup(name='placement',
@@ -455,6 +473,15 @@
"the '.' with '-' to comply with fqdn hostname. Nova "
"changed that in Wallaby cycle, if your cloud is older "
"than wallaby then you can keep/make it False."),
+ cfg.StrOpt('dhcp_domain',
+ default='.novalocal',
+ help="Configure a fully-qualified domain name for instance "
+ "hostnames. The value is suffixed to instance hostname "
+ "from the database to construct the hostname that "
+ "appears in the metadata API. To disable this behavior "
+ "(for example in order to correctly support "
+ "microversion's 2.94 FQDN hostnames), set this to the "
+ "empty string."),
cfg.BoolOpt('change_password',
default=False,
help="Does the test environment support changing the admin "
@@ -501,18 +528,6 @@
default=False,
help="Does the test environment use block devices for live "
"migration"),
- cfg.BoolOpt('block_migrate_cinder_iscsi',
- default=False,
- help="Does the test environment support block migration with "
- "Cinder iSCSI volumes. Note: libvirt >= 1.2.17 is required "
- "to support this if using the libvirt compute driver.",
- deprecated_for_removal=True,
- deprecated_reason='This option duplicates the more generic '
- '[compute-feature-enabled]/block_migration '
- '_for_live_migration now that '
- 'MIN_LIBVIRT_VERSION is >= 1.2.17 on all '
- 'branches from stable/rocky and will be '
- 'removed in a future release.'),
cfg.BoolOpt('can_migrate_between_any_hosts',
default=True,
help="Does the test environment support migrating between "
@@ -523,15 +538,6 @@
default=False,
help='Enable VNC console. This configuration value should '
'be same as nova.conf: vnc.enabled'),
- cfg.StrOpt('vnc_server_header',
- default='WebSockify',
- help='Expected VNC server name (WebSockify, nginx, etc) '
- 'in response header.',
- deprecated_for_removal=True,
- deprecated_reason='This option will be ignored because the '
- 'usage of different response header fields '
- 'to accomplish the same goal (in accordance '
- 'with RFC7231 S6.2.2) makes it obsolete.'),
cfg.BoolOpt('spice_console',
default=False,
help='Enable Spice console. This configuration value should '
@@ -540,14 +546,6 @@
deprecated_reason="This config option is not being used "
"in Tempest, we can add it back when "
"adding the test cases."),
- cfg.BoolOpt('rdp_console',
- default=False,
- help='Enable RDP console. This configuration value should '
- 'be same as nova.conf: rdp.enabled',
- deprecated_for_removal=True,
- deprecated_reason="This config option is not being used "
- "in Tempest, we can add it back when "
- "adding the test cases."),
cfg.BoolOpt('serial_console',
default=False,
help='Enable serial console. This configuration value '
@@ -575,13 +573,6 @@
default=True,
help='Does the test environment support creating snapshot '
'images of running instances?'),
- cfg.BoolOpt('nova_cert',
- default=False,
- help='Does the test environment have the nova cert running?',
- deprecated_for_removal=True,
- deprecated_reason="On Nova side, the nova-cert service is "
- "deprecated and the service will be removed "
- "as early as Ocata."),
cfg.BoolOpt('personality',
default=False,
help='Does the test environment support server personality'),
@@ -630,18 +621,6 @@
help='Does the test environment support attaching a volume to '
'more than one instance? This depends on hypervisor and '
'volume backend/type and compute API version 2.60.'),
- cfg.BoolOpt('xenapi_apis',
- default=False,
- help='Does the test environment support the XenAPI-specific '
- 'APIs: os-agents, writeable server metadata and the '
- 'resetNetwork server action? '
- 'These were removed in Victoria alongside the XenAPI '
- 'virt driver.',
- deprecated_for_removal=True,
- deprecated_reason="On Nova side, XenAPI virt driver and the "
- "APIs that only worked with that driver "
- "have been removed and there's nothing to "
- "test after Ussuri."),
cfg.BoolOpt('ide_bus',
default=True,
help='Does the test environment support attaching devices '
@@ -682,12 +661,17 @@
cfg.BoolOpt('image_caching_enabled',
default=False,
help=("Flag to enable if caching is enabled by image "
- "service, operator should set this parameter to True"
+ "service, operator should set this parameter to True "
"if 'image_cache_dir' is set in glance-api.conf")),
cfg.StrOpt('http_image',
- default='http://download.cirros-cloud.net/0.3.1/'
- 'cirros-0.3.1-x86_64-uec.tar.gz',
+ default='http://download.cirros-cloud.net/0.6.2/'
+ 'cirros-0.6.2-x86_64-uec.tar.gz',
help='http accessible image'),
+ cfg.StrOpt('http_qcow2_image',
+ default='http://download.cirros-cloud.net/0.6.2/'
+ 'cirros-0.6.2-x86_64-disk.img',
+ help='http qcow2 accessible image which will be used '
+ 'for image conversion if enabled.'),
cfg.IntOpt('build_timeout',
default=300,
help="Timeout in seconds to wait for an image to "
@@ -697,14 +681,18 @@
help="Time in seconds between image operation status "
"checks."),
cfg.ListOpt('container_formats',
- default=['ami', 'ari', 'aki', 'bare', 'ovf', 'ova'],
+ default=['bare', 'ami', 'ari', 'aki', 'ovf', 'ova'],
help="A list of image's container formats "
"users can specify."),
cfg.ListOpt('disk_formats',
- default=['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2',
+ default=['qcow2', 'raw', 'ami', 'ari', 'aki', 'vhd', 'vmdk',
'vdi', 'iso', 'vhdx'],
help="A list of image's disk formats "
- "users can specify.")
+ "users can specify."),
+ cfg.StrOpt('images_manifest_file',
+ default=None,
+ help="A path to a manifest.yml generated using the "
+ "os-test-images project"),
]
image_feature_group = cfg.OptGroup(name='image-feature-enabled',
@@ -719,24 +707,32 @@
'are current one. In future, Tempest will '
'test v2 APIs only so this config option '
'will be removed.'),
- # Image import feature is setup in devstack victoria onwards.
- # Once all stable branches setup the same via glance standalone
- # mode or with uwsgi, we can remove this config option.
cfg.BoolOpt('import_image',
- default=False,
- help="Is image import feature enabled"),
- # NOTE(danms): Starting mid-Wallaby glance began enforcing the
- # previously-informal requirement that os_glance_* properties are
- # reserved for internal use. Thus, we can only run these checks
- # if we know we are on a new enough glance.
+ default=True,
+ help="Is image import feature enabled",
+ deprecated_for_removal=True,
+ deprecated_reason='Issue with image import in WSGI mode was '
+ 'fixed in Victoria, and this feature works '
+ 'in any deployment architecture now.'),
cfg.BoolOpt('os_glance_reserved',
- default=False,
- help="Should we check that os_glance namespace is reserved"),
+ default=True,
+ help="Should we check that os_glance namespace is reserved",
+ deprecated_for_removal=True,
+ deprecated_reason='os_glance namespace is always reserved '
+ 'since Wallaby'),
cfg.BoolOpt('manage_locations',
default=False,
help=('Is show_multiple_locations enabled in glance. '
'Note that at least one http store must be enabled as '
'well, because we use that location scheme to test.')),
+ cfg.BoolOpt('image_conversion',
+ default=False,
+ help=('Is image_conversion enabled in glance.')),
+ cfg.BoolOpt('image_format_enforcement',
+ default=True,
+ help=('Indicates that image format is enforced by glance, '
+ 'such that we should not expect to be able to upload '
+ 'bad images for testing other services.')),
]
network_group = cfg.OptGroup(name='network',
@@ -801,13 +797,6 @@
default=1,
help="Time in seconds between network operation status "
"checks."),
- cfg.ListOpt('dns_servers',
- default=["8.8.8.8", "8.8.4.4"],
- help="List of dns servers which should be used"
- " for subnet creation",
- deprecated_for_removal=True,
- deprecated_reason="This config option is no longer "
- "used anywhere, so it can be removed."),
cfg.StrOpt('port_vnic_type',
choices=[None, 'normal', 'direct', 'macvtap', 'direct-physical',
'baremetal', 'virtio-forwarder'],
@@ -883,8 +872,9 @@
title="Dashboard options")
DashboardGroup = [
- cfg.StrOpt('dashboard_url',
+ cfg.URIOpt('dashboard_url',
default='http://localhost/',
+ schemes=['http', 'https'],
help="Where the dashboard can be found"),
cfg.BoolOpt('disable_ssl_certificate_validation',
default=False,
@@ -909,10 +899,11 @@
help='Enable/disable security group rules.'),
cfg.StrOpt('connect_method',
default='floating',
- choices=['fixed', 'floating'],
- help='Default IP type used for validation: '
- '-fixed: uses the first IP belonging to the fixed network '
- '-floating: creates and uses a floating IP'),
+ choices=[('fixed',
+ 'uses the first IP belonging to the fixed network'),
+ ('floating',
+ 'creates and uses a floating IP')],
+ help='Default IP type used for validation'),
cfg.StrOpt('auth_method',
default='keypair',
choices=['keypair'],
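The `connect_method` change switches oslo.config's `choices` from plain strings to `(value, description)` pairs, so each choice carries its own help text. A dependency-free sketch of extracting the valid values from either form (`valid_values` is an illustrative helper, not oslo.config API):

```python
def valid_values(choices):
    # choices may be plain values or (value, description) pairs,
    # mirroring the two forms oslo.config's StrOpt accepts.
    return [c[0] if isinstance(c, tuple) else c for c in choices]

old_style = ['fixed', 'floating']
new_style = [('fixed', 'uses the first IP belonging to the fixed network'),
             ('floating', 'creates and uses a floating IP')]
print(valid_values(old_style) == valid_values(new_style))
```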
@@ -970,14 +961,20 @@
"connect_method=floating."),
cfg.StrOpt('ssh_key_type',
default='ecdsa',
- help='Type of key to use for ssh connections. '
- 'Valid types are rsa, ecdsa'),
+ choices=['ecdsa', 'rsa'],
+ help='Type of key to use for ssh connections.'),
cfg.FloatOpt('allowed_network_downtime',
default=5.0,
help="Allowed VM network connection downtime during live "
"migration, in seconds. "
"When the measured downtime exceeds this value, an "
"exception is raised."),
+ cfg.FloatOpt('allowed_metadata_downtime',
+ default=6.0,
+ help="Allowed VM metadata connection downtime during live "
+ "migration, in seconds. "
+ "When the measured downtime exceeds this value, an "
+ "exception is raised."),
]
volume_group = cfg.OptGroup(name='volume',
@@ -992,7 +989,7 @@
help='Timeout in seconds to wait for a volume to become '
'available.'),
cfg.StrOpt('catalog_type',
- default='volumev3',
+ default='block-storage',
help="Catalog type of the Volume Service"),
cfg.StrOpt('region',
default='',
@@ -1118,8 +1115,15 @@
default=True,
help='Does the cloud support extending the size of a volume '
'which has snapshot? Some drivers do not support this '
- 'operation.')
-
+ 'operation.'),
+ cfg.StrOpt('volume_types_for_data_volume',
+ default=None,
+ help='Volume types used for data volumes. Multiple volume '
+ 'types can be assigned.'),
+ cfg.BoolOpt('enable_volume_image_dep_tests',
+ default=True,
+ help='Run tests for dependencies between images, volumes '
+ 'and instance snapshots')
]
@@ -1217,7 +1221,7 @@
cfg.StrOpt('dhcp_client',
default='udhcpc',
choices=["udhcpc", "dhclient", "dhcpcd", ""],
- help='DHCP client used by images to renew DCHP lease. '
+ help='DHCP client used by images to renew DHCP lease. '
'If left empty, update operation will be skipped. '
'Supported clients: "udhcpc", "dhclient", "dhcpcd"'),
cfg.StrOpt('protocol',
@@ -1225,6 +1229,9 @@
choices=('icmp', 'tcp', 'udp'),
help='The protocol used in security groups tests to check '
'connectivity.'),
+ cfg.StrOpt('target_dir',
+ default='/tmp',
+ help='Directory in which to write the timestamp file.'),
]
@@ -1236,7 +1243,7 @@
default=True,
help="Whether or not cinder is expected to be available"),
cfg.BoolOpt('neutron',
- default=False,
+ default=True,
help="Whether or not neutron is expected to be available"),
cfg.BoolOpt('glance',
default=True,
@@ -1347,9 +1354,9 @@
The best use case is investigating used resources of one test.
A test can be run as follows:
- $ stestr run --pdb TEST_ID
+$ stestr run --pdb TEST_ID
or
- $ python -m testtools.run TEST_ID"""),
+$ python -m testtools.run TEST_ID"""),
cfg.StrOpt('resource_name_prefix',
default='tempest',
help="Define the prefix name for the resources created by "
@@ -1357,6 +1364,15 @@
"to cleanup only the resources that match the prefix. "
"Make sure this prefix does not match with the resource "
"name you do not want Tempest cleanup CLI to delete."),
+ cfg.BoolOpt('record_resources',
+ default=False,
+ help="Allows recording all resources created by Tempest. "
+ "These resources are stored in the resource_list.json "
+ "file, which can later be used for resource deletion "
+ "by the tempest cleanup command. The file is appended "
+ "to across multiple Tempest runs, so it will contain "
+ "a list of resources created during all Tempest "
+ "runs."),
]
_opts = [
diff --git a/tempest/hacking/checks.py b/tempest/hacking/checks.py
index 1c9c55b..c81ec03 100644
--- a/tempest/hacking/checks.py
+++ b/tempest/hacking/checks.py
@@ -16,7 +16,6 @@
import re
from hacking import core
-import pycodestyle
PYTHON_CLIENTS = ['cinder', 'glance', 'keystone', 'nova', 'swift', 'neutron',
@@ -40,22 +39,22 @@
@core.flake8ext
-def import_no_clients_in_api_and_scenario_tests(physical_line, filename):
+def import_no_clients_in_api_and_scenario_tests(logical_line, filename):
"""Check for client imports from tempest/api & tempest/scenario tests
T102: Cannot import OpenStack python clients
"""
if "tempest/api" in filename or "tempest/scenario" in filename:
- res = PYTHON_CLIENT_RE.match(physical_line)
+ res = PYTHON_CLIENT_RE.match(logical_line)
if res:
- return (physical_line.find(res.group(1)),
+ return (logical_line.find(res.group(1)),
("T102: python clients import not allowed"
" in tempest/api/* or tempest/scenario/* tests"))
@core.flake8ext
-def scenario_tests_need_service_tags(physical_line, filename,
+def scenario_tests_need_service_tags(logical_line, filename,
previous_logical):
"""Check that scenario tests have service tags
@@ -63,28 +62,28 @@
"""
if 'tempest/scenario/' in filename and '/test_' in filename:
- if TEST_DEFINITION.match(physical_line):
+ if TEST_DEFINITION.match(logical_line):
if not SCENARIO_DECORATOR.match(previous_logical):
- return (physical_line.find('def'),
+ return (logical_line.find('def'),
"T104: Scenario tests require a service decorator")
@core.flake8ext
-def no_setup_teardown_class_for_tests(physical_line, filename):
+def no_setup_teardown_class_for_tests(logical_line, filename, noqa):
- if pycodestyle.noqa(physical_line):
+ if noqa:
return
if 'tempest/test.py' in filename or 'tempest/lib/' in filename:
return
- if SETUP_TEARDOWN_CLASS_DEFINITION.match(physical_line):
- return (physical_line.find('def'),
+ if SETUP_TEARDOWN_CLASS_DEFINITION.match(logical_line):
+ return (logical_line.find('def'),
"T105: (setUp|tearDown)Class can not be used in tests")
@core.flake8ext
-def service_tags_not_in_module_path(physical_line, filename):
+def service_tags_not_in_module_path(logical_line, filename):
"""Check that a service tag isn't in the module path
A service tag should only be added if the service name isn't already in
@@ -96,14 +95,14 @@
# created for services like heat which would cause false negatives for
# those tests, so just exclude the scenario tests.
if 'tempest/scenario' not in filename:
- matches = SCENARIO_DECORATOR.match(physical_line)
+ matches = SCENARIO_DECORATOR.match(logical_line)
if matches:
services = matches.group(1).split(',')
for service in services:
service_name = service.strip().strip("'")
modulepath = os.path.split(filename)[0]
if service_name in modulepath:
- return (physical_line.find(service_name),
+ return (logical_line.find(service_name),
"T107: service tag should not be in path")
@@ -140,28 +139,27 @@
"decorators.skip_because from tempest.lib")
-def _common_service_clients_check(logical_line, physical_line, filename):
+def _common_service_clients_check(logical_line, filename, noqa):
+ if noqa:
+ return False
+
if not re.match('tempest/(lib/)?services/.*', filename):
return False
- if not METHOD.match(physical_line):
- return False
-
- if pycodestyle.noqa(physical_line):
+ if not METHOD.match(logical_line):
return False
return True
@core.flake8ext
-def get_resources_on_service_clients(physical_line, logical_line, filename,
- line_number, lines):
+def get_resources_on_service_clients(logical_line, filename,
+ line_number, lines, noqa):
"""Check that service client names of GET should be consistent
T110
"""
- if not _common_service_clients_check(logical_line, physical_line,
- filename):
+ if not _common_service_clients_check(logical_line, filename, noqa):
return
for line in lines[line_number:]:
@@ -182,14 +180,13 @@
@core.flake8ext
-def delete_resources_on_service_clients(physical_line, logical_line, filename,
- line_number, lines):
+def delete_resources_on_service_clients(logical_line, filename,
+ line_number, lines, noqa):
"""Check that service client names of DELETE should be consistent
T111
"""
- if not _common_service_clients_check(logical_line, physical_line,
- filename):
+ if not _common_service_clients_check(logical_line, filename, noqa):
return
for line in lines[line_number:]:
@@ -262,7 +259,7 @@
'oslo_config' in logical_line):
msg = ('T114: tempest.lib can not have any dependency on tempest '
'config.')
- yield(0, msg)
+ yield (0, msg)
@core.flake8ext
@@ -281,7 +278,7 @@
if not re.match(r'.\/tempest\/api\/.*\/admin\/.*', filename):
msg = 'T115: All admin tests should exist under admin path.'
- yield(0, msg)
+ yield (0, msg)
@core.flake8ext
@@ -293,11 +290,11 @@
result = EX_ATTRIBUTE.search(logical_line)
msg = ("[T116] Unsupported 'message' Exception attribute in PY3")
if result:
- yield(0, msg)
+ yield (0, msg)
@core.flake8ext
-def negative_test_attribute_always_applied_to_negative_tests(physical_line,
+def negative_test_attribute_always_applied_to_negative_tests(logical_line,
filename):
"""Check ``@decorators.attr(type=['negative'])`` applied to negative tests.
@@ -307,13 +304,13 @@
if re.match(r'.\/tempest\/api\/.*_negative.*', filename):
- if NEGATIVE_TEST_DECORATOR.match(physical_line):
+ if NEGATIVE_TEST_DECORATOR.match(logical_line):
_HAVE_NEGATIVE_DECORATOR = True
return
- if TEST_DEFINITION.match(physical_line):
+ if TEST_DEFINITION.match(logical_line):
if not _HAVE_NEGATIVE_DECORATOR:
- return (
+ yield (
0, "T117: Must apply `@decorators.attr(type=['negative'])`"
" to all negative API tests"
)
diff --git a/tempest/api/identity/admin/v2/__init__.py b/tempest/lib/api_schema/response/compute/v2_80/__init__.py
similarity index 100%
rename from tempest/api/identity/admin/v2/__init__.py
rename to tempest/lib/api_schema/response/compute/v2_80/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_80/migrations.py b/tempest/lib/api_schema/response/compute/v2_80/migrations.py
new file mode 100644
index 0000000..f2fa008
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_80/migrations.py
@@ -0,0 +1,40 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_59 import migrations
+
+###########################################################################
+#
+# 2.80:
+#
+# The user_id and project_id values are now returned in the response body in
+# addition to the migration id for the following API responses:
+#
+# - GET /os-migrations
+#
+###########################################################################
+
+user_id = {'type': 'string'}
+project_id = {'type': 'string'}
+
+list_migrations = copy.deepcopy(migrations.list_migrations)
+
+list_migrations['response_body']['properties']['migrations']['items'][
+ 'properties'].update({
+ 'user_id': user_id,
+ 'project_id': project_id
+ })
+
+list_migrations['response_body']['properties']['migrations']['items'][
+ 'required'].extend(['user_id', 'project_id'])
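The new schema module follows Tempest's copy-and-extend convention: deep-copy the previous microversion's schema, then add the new fields to both `properties` and `required`. The same pattern in isolation, using a simplified stand-in for the v2.59 base schema (not the real one):

```python
import copy

# Simplified stand-in for the previous microversion's schema.
base_list_migrations = {
    'response_body': {
        'type': 'object',
        'properties': {
            'migrations': {
                'type': 'array',
                'items': {
                    'type': 'object',
                    'properties': {'id': {'type': 'integer'}},
                    'required': ['id'],
                }
            }
        }
    }
}

# Deep-copy so the schema used by older microversions stays untouched.
list_migrations = copy.deepcopy(base_list_migrations)
items = list_migrations['response_body']['properties']['migrations']['items']
items['properties'].update({'user_id': {'type': 'string'},
                            'project_id': {'type': 'string'}})
items['required'].extend(['user_id', 'project_id'])
```

A shallow `copy.copy` here would be a subtle bug: the nested `properties` dict would be shared, so extending the v2.80 schema would silently change what v2.59 responses are validated against.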
diff --git a/tempest/api/identity/admin/v2/__init__.py b/tempest/lib/api_schema/response/compute/v2_89/__init__.py
similarity index 100%
copy from tempest/api/identity/admin/v2/__init__.py
copy to tempest/lib/api_schema/response/compute/v2_89/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_89/servers.py b/tempest/lib/api_schema/response/compute/v2_89/servers.py
new file mode 100644
index 0000000..debf0dc
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_89/servers.py
@@ -0,0 +1,84 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_79 import servers as servers279
+
+
+###########################################################################
+#
+# 2.89:
+#
+# The attachment_id and bdm_uuid parameters are now returned in the response
+# body of the following calls:
+#
+# - GET /servers/{server_id}/os-volume_attachments
+# - GET /servers/{server_id}/os-volume_attachments/{volume_id}
+# - POST /servers/{server_id}/os-volume_attachments
+###########################################################################
+
+attach_volume = copy.deepcopy(servers279.attach_volume)
+
+show_volume_attachment = copy.deepcopy(servers279.show_volume_attachment)
+
+list_volume_attachments = copy.deepcopy(servers279.list_volume_attachments)
+
+# Remove properties
+# 'id' is available until v2.88
+show_volume_attachment['response_body']['properties'][
+ 'volumeAttachment']['properties'].pop('id')
+show_volume_attachment['response_body']['properties'][
+ 'volumeAttachment']['required'].remove('id')
+list_volume_attachments['response_body']['properties'][
+ 'volumeAttachments']['items']['properties'].pop('id')
+list_volume_attachments['response_body']['properties'][
+ 'volumeAttachments']['items']['required'].remove('id')
+
+
+# Add new properties
+new_properties = {
+ 'attachment_id': {'type': 'string', 'format': 'uuid'},
+ 'bdm_uuid': {'type': 'string', 'format': 'uuid'}
+}
+
+show_volume_attachment['response_body']['properties'][
+ 'volumeAttachment']['properties'].update(new_properties)
+show_volume_attachment['response_body']['properties'][
+ 'volumeAttachment']['required'].extend(new_properties.keys())
+list_volume_attachments['response_body']['properties'][
+ 'volumeAttachments']['items']['properties'].update(new_properties)
+list_volume_attachments['response_body']['properties'][
+ 'volumeAttachments']['items']['required'].extend(new_properties.keys())
+
+
+# NOTE(zhufl): Below are the unchanged schema in this microversion. We
+# need to keep this schema in this file to have the generic way to select the
+# right schema based on self.schema_versions_info mapping in service client.
+# ****** Schemas unchanged since microversion 2.75 ***
+rebuild_server = copy.deepcopy(servers279.rebuild_server)
+rebuild_server_with_admin_pass = copy.deepcopy(
+ servers279.rebuild_server_with_admin_pass)
+update_server = copy.deepcopy(servers279.update_server)
+get_server = copy.deepcopy(servers279.get_server)
+list_servers_detail = copy.deepcopy(servers279.list_servers_detail)
+list_servers = copy.deepcopy(servers279.list_servers)
+show_server_diagnostics = copy.deepcopy(servers279.show_server_diagnostics)
+get_remote_consoles = copy.deepcopy(servers279.get_remote_consoles)
+list_tags = copy.deepcopy(servers279.list_tags)
+update_all_tags = copy.deepcopy(servers279.update_all_tags)
+delete_all_tags = copy.deepcopy(servers279.delete_all_tags)
+check_tag_existence = copy.deepcopy(servers279.check_tag_existence)
+update_tag = copy.deepcopy(servers279.update_tag)
+delete_tag = copy.deepcopy(servers279.delete_tag)
+show_instance_action = copy.deepcopy(servers279.show_instance_action)
+create_backup = copy.deepcopy(servers279.create_backup)
diff --git a/tempest/api/identity/admin/v2/__init__.py b/tempest/lib/api_schema/response/compute/v2_96/__init__.py
similarity index 100%
copy from tempest/api/identity/admin/v2/__init__.py
copy to tempest/lib/api_schema/response/compute/v2_96/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_96/servers.py b/tempest/lib/api_schema/response/compute/v2_96/servers.py
new file mode 100644
index 0000000..7036a11
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_96/servers.py
@@ -0,0 +1,62 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_89 import servers as servers289
+
+
+###########################################################################
+#
+# 2.96:
+#
+# The pinned_availability_zone parameter is now returned in the response body
+# of the following calls:
+#
+# - GET /servers/detail
+# - GET /servers/{server_id}
+###########################################################################
+
+get_server = copy.deepcopy(servers289.get_server)
+get_server['response_body']['properties']['server'][
+ 'properties'].update(
+ {'pinned_availability_zone': {'type': ['string', 'null']}})
+
+list_servers_detail = copy.deepcopy(servers289.list_servers_detail)
+list_servers_detail['response_body']['properties']['servers']['items'][
+ 'properties'].update(
+ {'pinned_availability_zone': {'type': ['string', 'null']}})
+
+# NOTE(zhufl): Below are the unchanged schema in this microversion. We
+# need to keep this schema in this file to have the generic way to select the
+# right schema based on self.schema_versions_info mapping in service client.
+# ****** Schemas unchanged since microversion 2.89 ***
+attach_volume = copy.deepcopy(servers289.attach_volume)
+show_volume_attachment = copy.deepcopy(servers289.show_volume_attachment)
+list_volume_attachments = copy.deepcopy(servers289.list_volume_attachments)
+rebuild_server = copy.deepcopy(servers289.rebuild_server)
+rebuild_server_with_admin_pass = copy.deepcopy(
+ servers289.rebuild_server_with_admin_pass)
+update_server = copy.deepcopy(servers289.update_server)
+list_servers = copy.deepcopy(servers289.list_servers)
+show_server_diagnostics = copy.deepcopy(servers289.show_server_diagnostics)
+get_remote_consoles = copy.deepcopy(servers289.get_remote_consoles)
+list_tags = copy.deepcopy(servers289.list_tags)
+update_all_tags = copy.deepcopy(servers289.update_all_tags)
+delete_all_tags = copy.deepcopy(servers289.delete_all_tags)
+check_tag_existence = copy.deepcopy(servers289.check_tag_existence)
+update_tag = copy.deepcopy(servers289.update_tag)
+delete_tag = copy.deepcopy(servers289.delete_tag)
+show_instance_action = copy.deepcopy(servers289.show_instance_action)
+create_backup = copy.deepcopy(servers289.create_backup)
diff --git a/tempest/lib/api_schema/response/volume/volumes.py b/tempest/lib/api_schema/response/volume/volumes.py
index 900e5ef..9b5dfda 100644
--- a/tempest/lib/api_schema/response/volume/volumes.py
+++ b/tempest/lib/api_schema/response/volume/volumes.py
@@ -236,7 +236,7 @@
}
}
-# TODO(zhufl): This is under discussion, so will be merged in a seperate patch.
+# TODO(zhufl): This is under discussion, so will be merged in a separate patch.
# https://bugs.launchpad.net/cinder/+bug/1880566
# upload_volume = {
# 'status_code': [202],
diff --git a/tempest/lib/auth.py b/tempest/lib/auth.py
index 8bdf98e..12fffdb 100644
--- a/tempest/lib/auth.py
+++ b/tempest/lib/auth.py
@@ -21,6 +21,7 @@
from urllib import parse as urlparse
from oslo_log import log as logging
+from oslo_utils import timeutils
from tempest.lib import exceptions
from tempest.lib.services.identity.v2 import token_client as json_v2id
@@ -419,8 +420,7 @@
def is_expired(self, auth_data):
_, access = auth_data
expiry = self._parse_expiry_time(access['token']['expires'])
- return (expiry - self.token_expiry_threshold <=
- datetime.datetime.utcnow())
+ return (expiry - self.token_expiry_threshold <= timeutils.utcnow())
class KeystoneV3AuthProvider(KeystoneAuthProvider):
@@ -595,8 +595,7 @@
def is_expired(self, auth_data):
_, access = auth_data
expiry = self._parse_expiry_time(access['expires_at'])
- return (expiry - self.token_expiry_threshold <=
- datetime.datetime.utcnow())
+ return (expiry - self.token_expiry_threshold <= timeutils.utcnow())
def is_identity_version_supported(identity_version):
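Both `is_expired` implementations now call `oslo_utils.timeutils.utcnow()` instead of `datetime.datetime.utcnow()`; `timeutils.utcnow` can be frozen in unit tests with `timeutils.set_time_override`, and `datetime.utcnow` is deprecated as of Python 3.12. The threshold logic itself is unchanged, sketched here with plain `datetime` objects for illustration:

```python
import datetime


def is_expired(expiry, threshold, now):
    # The token is treated as expired once 'now' falls within
    # 'threshold' of the actual expiry time, so it is refreshed early.
    return expiry - threshold <= now


now = datetime.datetime(2024, 1, 1, 12, 0, 0)
threshold = datetime.timedelta(seconds=60)

# Expires 30s from now: already inside the 60s safety margin.
print(is_expired(now + datetime.timedelta(seconds=30), threshold, now))  # True
# Expires 5 minutes from now: still fresh.
print(is_expired(now + datetime.timedelta(minutes=5), threshold, now))  # False
```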
diff --git a/tempest/lib/cli/base.py b/tempest/lib/cli/base.py
index c9cffd2..c308c30 100644
--- a/tempest/lib/cli/base.py
+++ b/tempest/lib/cli/base.py
@@ -374,12 +374,13 @@
:param merge_stderr: if True the stderr buffer is merged into stdout
:type merge_stderr: boolean
"""
- creds = ('--os-username %s --os-project-name %s --os-password %s '
+ creds = ('--os-username %s --os-password %s '
'--os-auth-url %s' %
(self.username,
- self.tenant_name,
self.password,
self.uri))
+ if self.tenant_name is not None:
+ creds += ' --os-project-name %s' % self.tenant_name
if self.identity_api_version:
if cmd not in self.CLIENTS_WITHOUT_IDENTITY_VERSION:
creds += ' --os-identity-api-version %s' % (
diff --git a/tempest/lib/cmd/check_uuid.py b/tempest/lib/cmd/check_uuid.py
index 466222d..af1112d 100755
--- a/tempest/lib/cmd/check_uuid.py
+++ b/tempest/lib/cmd/check_uuid.py
@@ -266,7 +266,7 @@
"groups! This is not valid according to the PEP8 "
"style guide. " % source_path)
- # Divide grouped_imports into groupes based on PEP8 style guide
+ # Divide grouped_imports into groups based on PEP8 style guide
pep8_groups = {}
package_name = self.package.__name__.split(".")[0]
for key in grouped_imports:
diff --git a/tempest/lib/common/cred_provider.py b/tempest/lib/common/cred_provider.py
index 2da206f..93b9586 100644
--- a/tempest/lib/common/cred_provider.py
+++ b/tempest/lib/common/cred_provider.py
@@ -76,6 +76,10 @@
return
@abc.abstractmethod
+ def get_domain_manager_creds(self):
+ return
+
+ @abc.abstractmethod
def get_domain_member_creds(self):
return
@@ -92,6 +96,10 @@
return
@abc.abstractmethod
+ def get_project_manager_creds(self):
+ return
+
+ @abc.abstractmethod
def get_project_member_creds(self):
return
diff --git a/tempest/lib/common/dynamic_creds.py b/tempest/lib/common/dynamic_creds.py
index 99647d4..6c90938 100644
--- a/tempest/lib/common/dynamic_creds.py
+++ b/tempest/lib/common/dynamic_creds.py
@@ -51,7 +51,7 @@
:param str identity_admin_role: The role name to use for admin
:param list extra_roles: A list of strings for extra roles that should
be assigned to all created users
- :param bool neutron_available: Whether we are running in an environemnt
+ :param bool neutron_available: Whether we are running in an environment
with neutron
:param bool create_networks: Whether dynamic project networks should be
created or not
@@ -432,7 +432,7 @@
cred_type = [cred_type]
credentials = self._create_creds(
roles=cred_type, scope=scope, project_id=project_id)
- elif credential_type in [['member'], ['reader']]:
+ elif credential_type in [['manager'], ['member'], ['reader']]:
credentials = self._create_creds(
roles=credential_type, scope=scope,
project_id=project_id)
@@ -453,7 +453,7 @@
# NOTE(gmann): For 'domain' and 'system' scoped token, there is no
# project_id so we are skipping the network creation for both
# scope.
- # We need to create nework resource once per project.
+ # We need to create network resource once per project.
if (not project_id and (not scope or scope == 'project')):
if (self.neutron_available and self.create_networks):
network, subnet, router = self._create_network_resources(
@@ -492,6 +492,9 @@
def get_domain_admin_creds(self):
return self.get_credentials(['admin'], scope='domain')
+ def get_domain_manager_creds(self):
+ return self.get_credentials(['manager'], scope='domain')
+
def get_domain_member_creds(self):
return self.get_credentials(['member'], scope='domain')
@@ -504,6 +507,9 @@
def get_project_alt_admin_creds(self):
return self.get_credentials(['alt_admin'], scope='project')
+ def get_project_manager_creds(self):
+ return self.get_credentials(['manager'], scope='project')
+
def get_project_member_creds(self):
return self.get_credentials(['member'], scope='project')
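The new manager accessors all delegate to `get_credentials(roles, scope=...)`, which caches one credential set per role/scope combination. A toy model of that cache (the `create` callback stands in for Tempest's real credential creation):

```python
class CredCache:
    """Caches credentials keyed by (scope, roles), creating them lazily."""

    def __init__(self, create):
        self._create = create  # callback: (roles, scope) -> credentials
        self._creds = {}

    def get_credentials(self, roles, scope='project'):
        key = (scope, tuple(sorted(roles)))
        if key not in self._creds:  # create on first use, then reuse
            self._creds[key] = self._create(roles, scope)
        return self._creds[key]

    def get_project_manager_creds(self):
        return self.get_credentials(['manager'], scope='project')

    def get_domain_manager_creds(self):
        return self.get_credentials(['manager'], scope='domain')


calls = []
cache = CredCache(
    lambda roles, scope: calls.append((tuple(roles), scope)) or len(calls))
first = cache.get_project_manager_creds()
second = cache.get_project_manager_creds()
cache.get_domain_manager_creds()
print(first == second, len(calls))  # True 2
```

The same `manager` role yields distinct cached entries for project and domain scope, which is why the diff adds a separate accessor per scope.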
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index d163968..5bdcecd 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -60,6 +60,14 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingProxyHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
+
+ # Clearing the pool is necessary to free memory that holds certificates
+ # loaded by the HTTPConnection class in urllib3. This line can be
+ # removed once we require a newer version of urllib3 (e.g., 2.2.3) that
+ # does not retain certificates in memory for each HTTPConnection
+ # managed by the PoolManager.
+ self.clear()
+
if not kwargs.get('preload_content', True):
# This means we asked urllib3 for streaming content, so we
# need to return the raw response and not read any data yet
@@ -114,6 +122,14 @@
retry = urllib3.util.Retry(redirect=False)
r = super(ClosingHttp, self).request(method, url, retries=retry,
*args, **new_kwargs)
+
+ # Clearing the pool is necessary to free memory that holds certificates
+ # loaded by the HTTPConnection class in urllib3. This line can be
+ # removed once we require a newer version of urllib3 (e.g., 2.2.3) that
+ # does not retain certificates in memory for each HTTPConnection
+ # managed by the PoolManager.
+ self.clear()
+
if not kwargs.get('preload_content', True):
# This means we asked urllib3 for streaming content, so we
# need to return the raw response and not read any data yet
diff --git a/tempest/lib/common/preprov_creds.py b/tempest/lib/common/preprov_creds.py
index 6d948cf..3ba7db1 100644
--- a/tempest/lib/common/preprov_creds.py
+++ b/tempest/lib/common/preprov_creds.py
@@ -353,6 +353,13 @@
self._creds['domain_admin'] = domain_admin
return domain_admin
+ def get_domain_manager_creds(self):
+ if self._creds.get('domain_manager'):
+ return self._creds.get('domain_manager')
+ domain_manager = self._get_creds(['manager'], scope='domain')
+ self._creds['domain_manager'] = domain_manager
+ return domain_manager
+
def get_domain_member_creds(self):
if self._creds.get('domain_member'):
return self._creds.get('domain_member')
@@ -378,6 +385,13 @@
# TODO(gmann): Implement alt admin hash.
return
+ def get_project_manager_creds(self):
+ if self._creds.get('project_manager'):
+ return self._creds.get('project_manager')
+ project_manager = self._get_creds(['manager'], scope='project')
+ self._creds['project_manager'] = project_manager
+ return project_manager
+
def get_project_member_creds(self):
if self._creds.get('project_member'):
return self._creds.get('project_member')
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index 6cf5b73..b360569 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -21,6 +21,7 @@
import urllib
import urllib3
+from fasteners import process_lock
import jsonschema
from oslo_log import log as logging
from oslo_log import versionutils
@@ -45,6 +46,8 @@
JSONSCHEMA_VALIDATOR = jsonschema_validator.JSONSCHEMA_VALIDATOR
FORMAT_CHECKER = jsonschema_validator.FORMAT_CHECKER
+RESOURCE_LIST_JSON = "resource_list.json"
+
class RestClient(object):
"""Unified OpenStack RestClient class
@@ -78,6 +81,17 @@
# The version of the API this client implements
api_version = None
+ # Directory for storing read-write lock
+ lock_dir = None
+
+ # An interprocess lock used when recording of all resources created by
+ # Tempest is enabled.
+ rec_rw_lock = None
+
+ # Mirrors the 'record_resources' config option, which enables recording
+ # of all resources created by Tempest.
+ record_resources = False
+
LOG = logging.getLogger(__name__)
def __init__(self, auth_provider, service, region,
@@ -297,7 +311,13 @@
and the second the response body
:rtype: tuple
"""
- return self.request('POST', url, extra_headers, headers, body, chunked)
+ resp_header, resp_body = self.request(
+ 'POST', url, extra_headers, headers, body, chunked)
+
+ if self.record_resources:
+ self.resource_record(resp_body)
+
+ return resp_header, resp_body
def get(self, url, headers=None, extra_headers=False, chunked=False):
"""Send a HTTP GET request using keystone service catalog and auth
@@ -861,6 +881,11 @@
resp_body = self._parse_resp(resp_body)
raise exceptions.Gone(resp_body, resp=resp)
+ if resp.status == 406:
+ if parse_resp:
+ resp_body = self._parse_resp(resp_body)
+ raise exceptions.NotAcceptable(resp_body, resp=resp)
+
if resp.status == 409:
if parse_resp:
resp_body = self._parse_resp(resp_body)
@@ -1006,6 +1031,66 @@
"""Returns the primary type of resource this client works with."""
return 'resource'
+ def resource_update(self, data, res_type, res_dict):
+ """Updates resource_list.json file with current resource."""
+ if not isinstance(res_dict, dict):
+ return
+
+ if not res_type.endswith('s'):
+ res_type += 's'
+
+ if res_type not in data:
+ data[res_type] = {}
+
+ if 'uuid' in res_dict:
+ data[res_type].update(
+ {res_dict.get('uuid'): res_dict.get('name')})
+ elif 'id' in res_dict:
+ data[res_type].update(
+ {res_dict.get('id'): res_dict.get('name')})
+ elif 'name' in res_dict:
+ data[res_type].update({res_dict.get('name'): ""})
+
+ self.rec_rw_lock.acquire_write_lock()
+ with open(RESOURCE_LIST_JSON, 'w+') as f:
+ f.write(json.dumps(data, indent=2, separators=(',', ': ')))
+ self.rec_rw_lock.release_write_lock()
+
+ def resource_record(self, resp_dict):
+ """Records resources into resource_list.json file."""
+ if self.rec_rw_lock is None:
+ path = self.lock_dir
+ self.rec_rw_lock = (
+ process_lock.InterProcessReaderWriterLock(path)
+ )
+
+ self.rec_rw_lock.acquire_read_lock()
+ try:
+ with open(RESOURCE_LIST_JSON, 'rb') as f:
+ data = json.load(f)
+ except IOError:
+ data = {}
+ self.rec_rw_lock.release_read_lock()
+
+ try:
+ resp_dict = json.loads(resp_dict.decode('utf-8'))
+ except (AttributeError, TypeError, ValueError):
+ return
+
+ # check if response has any keys
+ if not resp_dict.keys():
+ return
+
+ resource_type = list(resp_dict.keys())[0]
+
+ resource_dict = resp_dict[resource_type]
+
+ if isinstance(resource_dict, list):
+ for resource in resource_dict:
+ self.resource_update(data, resource_type, resource)
+ else:
+ self.resource_update(data, resource_type, resource_dict)
+
@classmethod
def validate_response(cls, schema, resp, body):
# Only check the response if the status code is a success code
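`resource_record` takes the first top-level key of each POST response as the resource type and merges `{uuid-or-id: name}` entries into `resource_list.json` under an interprocess read/write lock. The merge step alone, without the locking and file I/O, behaves like this:

```python
import json


def resource_update(data, res_type, res_dict):
    """Merge one created resource into the {type: {id: name}} mapping."""
    if not isinstance(res_dict, dict):
        return
    if not res_type.endswith('s'):  # naive pluralisation, as in Tempest
        res_type += 's'
    bucket = data.setdefault(res_type, {})
    # Prefer uuid, fall back to id, then to name alone.
    if 'uuid' in res_dict:
        bucket[res_dict['uuid']] = res_dict.get('name')
    elif 'id' in res_dict:
        bucket[res_dict['id']] = res_dict.get('name')
    elif 'name' in res_dict:
        bucket[res_dict['name']] = ""


data = {}
resource_update(data, 'server', {'id': 'abc-123', 'name': 'vm1'})
resource_update(data, 'keypair', {'name': 'kp1'})
print(json.dumps(data, indent=2))
```

Because the file is shared across parallel workers, the real implementation wraps the read-modify-write in `fasteners` `InterProcessReaderWriterLock`; without it, concurrent test processes could lose each other's entries.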
diff --git a/tempest/lib/common/utils/test_utils.py b/tempest/lib/common/utils/test_utils.py
index c79db15..7b85dec 100644
--- a/tempest/lib/common/utils/test_utils.py
+++ b/tempest/lib/common/utils/test_utils.py
@@ -93,7 +93,7 @@
if attempt >= 3:
raise
LOG.warning('Got ServerFault while running %s, retrying...', func)
- time.sleep(1)
+ time.sleep(5)
def call_until_true(func, duration, sleep_for, *args, **kwargs):
diff --git a/tempest/lib/decorators.py b/tempest/lib/decorators.py
index 7d54c1a..144450b 100644
--- a/tempest/lib/decorators.py
+++ b/tempest/lib/decorators.py
@@ -198,7 +198,7 @@
There are functions created as classmethod and the cleanup
was managed by the class with addClassResourceCleanup,
In case the function called from a class level (resource_setup) its ok
- But when it is called from testcase level there is no reson to delete the
+ But when it is called from testcase level there is no reason to delete the
resource when class tears down.
The testcase results will not reflect the resources cleanup because test
diff --git a/tempest/lib/exceptions.py b/tempest/lib/exceptions.py
index dd7885e..0242de2 100644
--- a/tempest/lib/exceptions.py
+++ b/tempest/lib/exceptions.py
@@ -94,6 +94,11 @@
message = "Object not found"
+class NotAcceptable(ClientRestClientException):
+ status_code = 406
+ message = "Not Acceptable"
+
+
class Conflict(ClientRestClientException):
status_code = 409
message = "Conflict with state of target resource"
diff --git a/tempest/lib/services/compute/migrations_client.py b/tempest/lib/services/compute/migrations_client.py
index 8a6e62a..d43fe83 100644
--- a/tempest/lib/services/compute/migrations_client.py
+++ b/tempest/lib/services/compute/migrations_client.py
@@ -21,6 +21,8 @@
as schemav223
from tempest.lib.api_schema.response.compute.v2_59 import migrations \
as schemav259
+from tempest.lib.api_schema.response.compute.v2_80 import migrations \
+ as schemav280
from tempest.lib.common import rest_client
from tempest.lib.services.compute import base_compute_client
@@ -29,7 +31,8 @@
schema_versions_info = [
{'min': None, 'max': '2.22', 'schema': schema},
{'min': '2.23', 'max': '2.58', 'schema': schemav223},
- {'min': '2.59', 'max': None, 'schema': schemav259}]
+ {'min': '2.59', 'max': '2.79', 'schema': schemav259},
+ {'min': '2.80', 'max': None, 'schema': schemav280}]
def list_migrations(self, **params):
"""List all migrations.
diff --git a/tempest/lib/services/compute/server_groups_client.py b/tempest/lib/services/compute/server_groups_client.py
index 9895653..5c1e623 100644
--- a/tempest/lib/services/compute/server_groups_client.py
+++ b/tempest/lib/services/compute/server_groups_client.py
@@ -14,6 +14,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+from urllib import parse as urllib
+
from oslo_serialization import jsonutils as json
from tempest.lib.api_schema.response.compute.v2_1 import server_groups \
@@ -55,9 +57,14 @@
self.validate_response(schema.delete_server_group, resp, body)
return rest_client.ResponseBody(resp, body)
- def list_server_groups(self):
+ def list_server_groups(self, **params):
"""List the server-groups."""
- resp, body = self.get("os-server-groups")
+
+ url = 'os-server-groups'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+
+ resp, body = self.get(url)
body = json.loads(body)
schema = self.get_schema(self.schema_versions_info)
self.validate_response(schema.list_server_groups, resp, body)
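The updated `list_server_groups` builds its query string with `urllib.parse.urlencode`, which percent-encodes values that naive string interpolation would leave broken:

```python
from urllib import parse as urllib


def build_url(base, **params):
    # Same pattern as the client: only append '?' when params exist.
    url = base
    if params:
        url += '?%s' % urllib.urlencode(params)
    return url


print(build_url('os-server-groups'))  # os-server-groups
print(build_url('os-server-groups', all_projects=True, limit=10))
```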
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
index 7e3b99f..e91c87a 100644
--- a/tempest/lib/services/compute/servers_client.py
+++ b/tempest/lib/services/compute/servers_client.py
@@ -43,7 +43,9 @@
from tempest.lib.api_schema.response.compute.v2_75 import servers as schemav275
from tempest.lib.api_schema.response.compute.v2_79 import servers as schemav279
from tempest.lib.api_schema.response.compute.v2_8 import servers as schemav28
+from tempest.lib.api_schema.response.compute.v2_89 import servers as schemav289
from tempest.lib.api_schema.response.compute.v2_9 import servers as schemav29
+from tempest.lib.api_schema.response.compute.v2_96 import servers as schemav296
from tempest.lib.common import rest_client
from tempest.lib.services.compute import base_compute_client
@@ -73,7 +75,9 @@
{'min': '2.71', 'max': '2.72', 'schema': schemav271},
{'min': '2.73', 'max': '2.74', 'schema': schemav273},
{'min': '2.75', 'max': '2.78', 'schema': schemav275},
- {'min': '2.79', 'max': None, 'schema': schemav279}]
+ {'min': '2.79', 'max': '2.88', 'schema': schemav279},
+ {'min': '2.89', 'max': '2.95', 'schema': schemav289},
+ {'min': '2.96', 'max': None, 'schema': schemav296}]
def __init__(self, auth_provider, service, region,
enable_instance_password=True, **kwargs):
@@ -896,7 +900,11 @@
API reference:
https://docs.openstack.org/api-ref/compute/#evacuate-server-evacuate-action
"""
- if self.enable_instance_password:
+ api_version = self.get_headers().get(self.api_microversion_header_name)
+
+ if not api_version and self.enable_instance_password:
+ evacuate_schema = schema.evacuate_server_with_admin_pass
+ elif api_version < '2.14':
evacuate_schema = schema.evacuate_server_with_admin_pass
else:
evacuate_schema = schema.evacuate_server
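One caveat in the evacuate hunk above: `api_version < '2.14'` compares the microversion header as a plain string, and lexicographic ordering disagrees with numeric ordering once the minor part reaches two digits, so a request pinned to microversion 2.9 would sort *after* '2.14' and incorrectly skip the admin-pass schema. Parsing the version into integer tuples avoids the problem:

```python
def ver(v):
    """Parse '2.14' into (2, 14) for a numeric comparison."""
    return tuple(int(part) for part in v.split('.'))


# Lexicographic string comparison misorders two-digit minors:
print('2.9' < '2.14')            # False ('9' > '1' as characters)
# Numeric tuple comparison gets it right:
print(ver('2.9') < ver('2.14'))  # True
```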
diff --git a/tempest/lib/services/image/v2/images_client.py b/tempest/lib/services/image/v2/images_client.py
index 8460b57..a6a1623 100644
--- a/tempest/lib/services/image/v2/images_client.py
+++ b/tempest/lib/services/image/v2/images_client.py
@@ -159,7 +159,7 @@
"""
url = 'images/%s/file' % image_id
- # We are going to do chunked transfert, so split the input data
+ # We are going to do chunked transfer, so split the input data
# info fixed-sized chunks.
headers = {'Content-Type': 'application/octet-stream'}
data = iter(functools.partial(data.read, CHUNKSIZE), b'')
@@ -292,3 +292,15 @@
resp, _ = self.delete(url)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp)
+
+ def delete_image_from_store(self, image_id, store_name):
+ """Delete image from store
+
+ For a full list of available parameters,
+ please refer to the official API reference:
+ https://docs.openstack.org/api-ref/image/v2/#delete-image-from-store
+ """
+ url = 'stores/%s/%s' % (store_name, image_id)
+ resp, _ = self.delete(url)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
diff --git a/tempest/lib/services/object_storage/container_client.py b/tempest/lib/services/object_storage/container_client.py
index bdca0d0..47edf70 100644
--- a/tempest/lib/services/object_storage/container_client.py
+++ b/tempest/lib/services/object_storage/container_client.py
@@ -15,7 +15,6 @@
from urllib import parse as urllib
-import debtcollector.moves
from defusedxml import ElementTree as etree
from oslo_serialization import jsonutils as json
@@ -64,7 +63,7 @@
delete_metadata=None,
create_update_metadata_prefix='X-Container-Meta-',
delete_metadata_prefix='X-Remove-Container-Meta-'):
- """Creates, Updates or deletes an containter metadata entry.
+    """Creates, updates or deletes a container metadata entry.
Container Metadata can be created, updated or deleted based on
metadata header or value. For detailed info, please refer to the
@@ -85,11 +84,6 @@
self.expected_success(204, resp.status)
return resp, body
- update_container_metadata = debtcollector.moves.moved_function(
- create_update_or_delete_container_metadata,
- 'update_container_metadata', __name__,
- version='Queens', removal_version='Rocky')
-
def list_container_metadata(self, container_name):
"""List all container metadata."""
url = str(container_name)
@@ -126,7 +120,3 @@
self.expected_success([200, 204], resp.status)
return resp, body
-
- list_container_contents = debtcollector.moves.moved_function(
- list_container_objects, 'list_container_contents', __name__,
- version='Queens', removal_version='Rocky')
diff --git a/tempest/lib/services/placement/placement_client.py b/tempest/lib/services/placement/placement_client.py
index 216ac08..f272cbf 100644
--- a/tempest/lib/services/placement/placement_client.py
+++ b/tempest/lib/services/placement/placement_client.py
@@ -49,3 +49,39 @@
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
+
+ def list_traits(self, **params):
+ """API ref https://docs.openstack.org/api-ref/placement/#traits
+ """
+ url = "/traits"
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+    def show_trait(self, name, **params):
+        url = f"/traits/{name}"
+        if params:
+            url += '?%s' % urllib.urlencode(params)
+        resp, _ = self.get(url)
+        self.expected_success(204, resp.status)
+        return resp.status
+
+ def create_trait(self, name, **params):
+ url = f"/traits/{name}"
+ json_body = json.dumps(params)
+ resp, _ = self.put(url, body=json_body)
+ return resp.status
+
+ def delete_trait(self, name):
+ url = f"/traits/{name}"
+ resp, _ = self.delete(url)
+ self.expected_success(204, resp.status)
+ return resp.status
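The trait clients build query strings with the same `urllib.urlencode` pattern as the rest of the library; a sketch using the placement API's documented `in:` filter syntax for trait names:

```python
from urllib import parse as urllib  # same alias the tempest clients use

params = {'name': 'in:HW_CPU_X86_VMX,HW_CPU_X86_SSE'}
url = '/traits?%s' % urllib.urlencode(params)
# urlencode percent-escapes the ':' and ',' in the filter value.
assert url == '/traits?name=in%3AHW_CPU_X86_VMX%2CHW_CPU_X86_SSE'
```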
diff --git a/tempest/lib/services/placement/resource_providers_client.py b/tempest/lib/services/placement/resource_providers_client.py
index 3214053..a336500 100644
--- a/tempest/lib/services/placement/resource_providers_client.py
+++ b/tempest/lib/services/placement/resource_providers_client.py
@@ -121,3 +121,29 @@
resp, body = self.delete(url)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
+
+ def list_resource_provider_traits(self, rp_uuid, **kwargs):
+ """https://docs.openstack.org/api-ref/placement/#resource-provider-traits
+ """
+ url = f"/resource_providers/{rp_uuid}/traits"
+ if kwargs:
+ url += '?%s' % urllib.urlencode(kwargs)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_resource_provider_traits(self, rp_uuid, **kwargs):
+ url = f"/resource_providers/{rp_uuid}/traits"
+ data = json.dumps(kwargs)
+ resp, body = self.put(url, data)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_resource_provider_traits(self, rp_uuid):
+ url = f"/resource_providers/{rp_uuid}/traits"
+        resp, body = self.delete(url)
+        self.expected_success(204, resp.status)
+        # DELETE returns 204 No Content, so there is no JSON body to parse
+        return rest_client.ResponseBody(resp, body)
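`update_resource_provider_traits` serializes its kwargs directly into the PUT body; per the placement API reference, that payload carries the trait list plus `resource_provider_generation` for conflict detection. A sketch of the expected shape (values hypothetical):

```python
import json

# Hypothetical provider generation and traits for illustration.
payload = json.dumps({
    'resource_provider_generation': 1,
    'traits': ['CUSTOM_GOLD', 'HW_CPU_X86_VMX'],
})
body = json.loads(payload)
assert body['resource_provider_generation'] == 1
assert 'CUSTOM_GOLD' in body['traits']
```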
diff --git a/tempest/lib/services/volume/v3/volumes_client.py b/tempest/lib/services/volume/v3/volumes_client.py
index c6f8973..95f3ffc 100644
--- a/tempest/lib/services/volume/v3/volumes_client.py
+++ b/tempest/lib/services/volume/v3/volumes_client.py
@@ -86,7 +86,7 @@
def migrate_volume(self, volume_id, **kwargs):
"""Migrate a volume to a new backend
- For a full list of available parameters please refer to the offical
+ For a full list of available parameters please refer to the official
API reference:
https://docs.openstack.org/api-ref/block-storage/v3/index.html#migrate-a-volume
@@ -173,7 +173,7 @@
resp, body = self.post(url, post_body)
body = json.loads(body)
# TODO(zhufl): This is under discussion, so will be merged
- # in a seperate patch.
+ # in a separate patch.
# https://bugs.launchpad.net/cinder/+bug/1880566
# self.validate_response(schema.upload_volume, resp, body)
self.expected_success(202, resp.status)
diff --git a/tempest/scenario/README.rst b/tempest/scenario/README.rst
index efcd139..6c51f22 100644
--- a/tempest/scenario/README.rst
+++ b/tempest/scenario/README.rst
@@ -7,14 +7,14 @@
What are these tests?
---------------------
-Scenario tests are "through path" tests of OpenStack
-function. Complicated setups where one part might depend on completion
+Scenario tests are "through path" tests of OpenStack function.
+Complicated setups where one part might depend on the completion
of a previous part. They ideally involve the integration between
multiple OpenStack services to exercise the touch points between them.
Any scenario test should have a real-life use case. An example would be:
-- "As operator I want to start with a blank environment":
+- "As an operator, I want to start with a blank environment":
1. upload a glance image
2. deploy a vm from it
@@ -24,12 +24,14 @@
Why are these tests in Tempest?
-------------------------------
+
This is one of Tempest's core purposes, testing the integration between
projects.
Scope of these tests
--------------------
+
Scenario tests should always use the Tempest implementation of the
OpenStack API, as we want to ensure that bugs aren't hidden by the
official clients.
@@ -40,6 +42,7 @@
Example of a good test
----------------------
+
While we are looking for interaction of 2 or more services, be
specific in your interactions. A giant "this is my data center" smoke
test is hard to debug when it goes wrong.
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index a809342..be2b2d6 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -441,7 +441,7 @@
'container': container}
args.update(kwargs)
backup = self.backups_client.create_backup(volume_id=volume_id,
- **kwargs)['backup']
+ **args)['backup']
self.addCleanup(self.backups_client.delete_backup, backup['id'])
waiters.wait_for_volume_resource_status(self.backups_client,
backup['id'], 'available')
@@ -751,6 +751,31 @@
return rules
+ def create_and_add_security_group_to_server(self, server):
+ """Create a security group and add it to the server.
+
+ :param server: The server to add the security group to.
+        :return: None; the security group is added to the server.
+ """
+
+ secgroup = self.create_security_group()
+ self.servers_client.add_security_group(server['id'],
+ name=secgroup['name'])
+ self.addCleanup(self.servers_client.remove_security_group,
+ server['id'], name=secgroup['name'])
+
+ def wait_for_secgroup_add():
+ body = (self.servers_client.show_server(server['id'])
+ ['server'])
+ return {'name': secgroup['name']} in body['security_groups']
+
+ if not test_utils.call_until_true(wait_for_secgroup_add,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
+            msg = ('Timed out waiting to add security group %s to server '
+                   '%s' % (secgroup['id'], server['id']))
+ raise lib_exc.TimeoutException(msg)
+
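`test_utils.call_until_true` polls a predicate until it returns True or the timeout elapses; a simplified standalone sketch of that contract:

```python
import time

def call_until_true(func, duration, sleep_for):
    """Call func until it returns True or duration seconds elapse."""
    deadline = time.time() + duration
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)
    return False

# A predicate that succeeds on the third attempt.
attempts = []
ok = call_until_true(lambda: len(attempts) >= 2 or attempts.append(None),
                     duration=5, sleep_for=0.01)
assert ok is True
```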
def get_remote_client(self, ip_address, username=None, private_key=None,
server=None):
"""Get a SSH client to a remote server
@@ -826,7 +851,7 @@
kernel_img_path = os.path.join(extract_dir, fname)
elif re.search(r'(.*-initrd.*|ari-.*/image$)', fname):
ramdisk_img_path = os.path.join(extract_dir, fname)
- elif re.search(f'(.*\\.img$|ami-.*/image$)', fname):
+        elif re.search(r'(.*\.img$|ami-.*/image$)', fname):
img_path = os.path.join(extract_dir, fname)
# Create the kernel image.
kparams = {
@@ -1085,8 +1110,6 @@
if ip_addr and not kwargs.get('fixed_ips'):
kwargs['fixed_ips'] = 'ip_address=%s' % ip_addr
- ports = self.os_admin.ports_client.list_ports(
- device_id=server['id'], **kwargs)['ports']
# A port can have more than one IP address in some cases.
# If the network is dual-stack (IPv4 + IPv6), this port is associated
@@ -1101,6 +1124,18 @@
return (port['status'] == 'ACTIVE' or
port.get('binding:vnic_type') == 'baremetal')
+ # Wait for all compute ports to be ACTIVE.
+ # This will raise a TimeoutException if that does not happen.
+ client = self.os_admin.ports_client
+ try:
+ ports = waiters.wait_for_server_ports_active(
+ client=client, server_id=server['id'],
+ is_active=_is_active, **kwargs)
+ except lib_exc.TimeoutException:
+ LOG.error("Server ports failed transitioning to ACTIVE for "
+ "server: %s", server)
+ raise
+
port_map = [(p["id"], fxip["ip_address"])
for p in ports
for fxip in p["fixed_ips"]
@@ -1108,7 +1143,8 @@
_is_active(p))]
inactive = [p for p in ports if p['status'] != 'ACTIVE']
if inactive:
- LOG.warning("Instance has ports that are not ACTIVE: %s", inactive)
+ # This should just be Ironic ports, see _is_active() above
+ LOG.debug("Instance has ports that are not ACTIVE: %s", inactive)
self.assertNotEmpty(port_map,
"No IPv4 addresses found in: %s" % ports)
@@ -1189,6 +1225,15 @@
self.assertIsNone(floating_ip['port_id'])
return floating_ip
+ def create_file(self, ip_address, path, private_key=None, server=None,
+ username=None):
+ """Create a file on a remote server"""
+ ssh_client = self.get_remote_client(ip_address,
+ private_key=private_key,
+ server=server,
+ username=username)
+ ssh_client.exec_command('sudo mkdir -p %s' % path)
+
def create_timestamp(self, ip_address, dev_name=None, mount_path='/mnt',
private_key=None, server=None, username=None,
fs='vfat'):
@@ -1205,18 +1250,20 @@
# Default the directory in which to write the timestamp file to /tmp
# and only use the mount_path as the target directory if we mounted
# dev_name to mount_path.
- target_dir = '/tmp'
+ target_dir = CONF.scenario.target_dir
if dev_name is not None:
+ mount_path = os.path.join(mount_path, dev_name)
ssh_client.make_fs(dev_name, fs=fs)
- ssh_client.exec_command('sudo mount /dev/%s %s' % (dev_name,
- mount_path))
+ ssh_client.mkdir(mount_path)
+ ssh_client.mount(dev_name, mount_path)
target_dir = mount_path
+
cmd_timestamp = 'sudo sh -c "date > %s/timestamp; sync"' % target_dir
ssh_client.exec_command(cmd_timestamp)
timestamp = ssh_client.exec_command('sudo cat %s/timestamp'
% target_dir)
if dev_name is not None:
- ssh_client.exec_command('sudo umount %s' % mount_path)
+ ssh_client.umount(mount_path)
return timestamp
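The change above now mounts each device at `mount_path/dev_name` instead of reusing the same mount point, falling back to the configured target directory when no device is given; a minimal sketch of the path handling (values hypothetical):

```python
import os

mount_path = '/mnt'
dev_name = 'vdb'
# With a device, the timestamp target becomes mount_path/dev_name;
# otherwise the configured default (CONF.scenario.target_dir) is used.
target_dir = os.path.join(mount_path, dev_name) if dev_name else '/tmp'
assert target_dir == '/mnt/vdb'
```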
def get_timestamp(self, ip_address, dev_name=None, mount_path='/mnt',
@@ -1242,14 +1289,16 @@
# Default the directory from which to read the timestamp file to /tmp
# and only use the mount_path as the target directory if we mounted
# dev_name to mount_path.
- target_dir = '/tmp'
+ target_dir = CONF.scenario.target_dir
if dev_name is not None:
+ mount_path = os.path.join(mount_path, dev_name)
+ ssh_client.mkdir(mount_path)
ssh_client.mount(dev_name, mount_path)
target_dir = mount_path
timestamp = ssh_client.exec_command('sudo cat %s/timestamp'
% target_dir)
if dev_name is not None:
- ssh_client.exec_command('sudo umount %s' % mount_path)
+ ssh_client.umount(mount_path)
return timestamp
def get_server_ip(self, server, **kwargs):
@@ -1524,8 +1573,8 @@
floating_ip = (self.floating_ips_client.
show_floatingip(floatingip_id)['floatingip'])
if status == floating_ip['status']:
- LOG.info("FloatingIP: {fp} is at status: {st}"
- .format(fp=floating_ip, st=status))
+ LOG.info("FloatingIP: %s is at status: %s",
+ floating_ip, status)
return status == floating_ip['status']
if not test_utils.call_until_true(refresh,
diff --git a/tempest/scenario/test_instances_with_cinder_volumes.py b/tempest/scenario/test_instances_with_cinder_volumes.py
new file mode 100644
index 0000000..0ddbec1
--- /dev/null
+++ b/tempest/scenario/test_instances_with_cinder_volumes.py
@@ -0,0 +1,218 @@
+# Copyright 2024 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+from oslo_log import log as logging
+
+from tempest.common import utils
+from tempest.common import waiters
+from tempest import config
+from tempest.lib import decorators
+from tempest.lib import exceptions
+from tempest.scenario import manager
+
+
+CONF = config.CONF
+LOG = logging.getLogger(__name__)
+
+
+class TestInstancesWithCinderVolumes(manager.ScenarioTest):
+    """Scenario test for instances with Cinder volumes.
+
+ Tests are below:
+ * test_instances_with_cinder_volumes_on_all_compute_nodes
+ """
+
+ compute_min_microversion = '2.60'
+
+ @decorators.idempotent_id('d0e3c1a3-4b0a-4b0e-8b0a-4b0e8b0a4b0e')
+ @decorators.attr(type=['slow', 'multinode'])
+ @utils.services('compute', 'volume', 'image', 'network')
+ def test_instances_with_cinder_volumes_on_all_compute_nodes(self):
+ """Test instances with cinder volumes launches on all compute nodes
+
+ Steps:
+ 1. Create an image
+ 2. Create a keypair
+ 3. Create a bootable volume from the image and of the given volume
+ type
+ 4. Boot an instance from the bootable volume on each available
+ compute node, up to CONF.compute.min_compute_nodes
+ 5. Create a volume using each volume_types_for_data_volume on all
+ available compute nodes, up to CONF.compute.min_compute_nodes.
+ Total number of volumes is equal to
+ compute nodes * len(volume_types_for_data_volume)
+ 6. Attach volumes to the instances
+ 7. Assign floating IP to all instances
+ 8. Configure security group for ssh access to all instances
+ 9. Confirm ssh access to all instances
+ 10. Run write test to all volumes through ssh connection per
+ instance
+ 11. Clean up the sources, an instance, volumes, keypair and image
+ """
+ boot_volume_type = (CONF.volume.volume_type or
+ self.create_volume_type()['name'])
+
+ # create an image
+ image = self.image_create()
+
+ # create keypair
+ keypair = self.create_keypair()
+
+ # check all available zones for booting instances
+ available_zone = \
+ self.os_admin.availability_zone_client.list_availability_zones(
+ detail=True)['availabilityZoneInfo']
+
+ hosts = []
+ for zone in available_zone:
+ if zone['zoneState']['available']:
+ for host in zone['hosts']:
+ if 'nova-compute' in zone['hosts'][host] and \
+ zone['hosts'][host]['nova-compute']['available'] and \
+ CONF.compute.target_hosts_to_avoid not in host:
+ hosts.append({'zone': zone['zoneName'],
+ 'host_name': host})
+
+        # fail if there are fewer hosts than the minimum number of instances
+ if len(hosts) < CONF.compute.min_compute_nodes:
+ raise exceptions.InvalidConfiguration(
+ "Host list %s is shorter than min_compute_nodes. " % hosts)
+
+ # get volume types
+ volume_types = []
+ if CONF.volume_feature_enabled.volume_types_for_data_volume:
+ types = CONF.volume_feature_enabled.volume_types_for_data_volume
+ volume_types = types.split(',')
+ else:
+ # no user specified volume types, create 2 default ones
+ volume_types.append(self.create_volume_type()['name'])
+ volume_types.append(self.create_volume_type()['name'])
+
+ hosts_to_boot_servers = hosts[:CONF.compute.min_compute_nodes]
+        LOG.debug("List of hosts selected to boot servers: %s",
+                  hosts_to_boot_servers)
+
+        # create volumes so that we don't need to wait for them to be created
+ # and save them in a list
+ created_volumes = []
+ for host in hosts_to_boot_servers:
+ for volume_type in volume_types:
+ created_volumes.append(
+ self.create_volume(volume_type=volume_type,
+ wait_until=None)
+ )
+
+ bootable_volumes = []
+ for host in hosts_to_boot_servers:
+ # create boot volume from image and of the given volume type
+ bootable_volumes.append(
+ self.create_volume(
+ imageRef=image, volume_type=boot_volume_type,
+ wait_until=None)
+ )
+
+ # boot server
+ servers = []
+
+ for bootable_volume in bootable_volumes:
+
+ # wait for bootable volumes to become available
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, bootable_volume['id'], 'available')
+
+ # create an instance from bootable volume
+ server = self.boot_instance_from_resource(
+ source_id=bootable_volume['id'],
+ source_type='volume',
+ keypair=keypair,
+ wait_until=None
+ )
+ servers.append(server)
+
+ start = 0
+ end = len(volume_types)
+ for server in servers:
+ attached_volumes = []
+
+ # wait for server to become active
+ waiters.wait_for_server_status(self.servers_client,
+ server['id'], 'ACTIVE')
+
+ # attach volumes to the instances
+ for volume in created_volumes[start:end]:
+
+ # wait for volume to become available
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, volume['id'], 'available')
+
+ attached_volume = self.nova_volume_attach(server, volume)
+ attached_volumes.append(attached_volume)
+ LOG.debug("Attached volume %s to server %s",
+ attached_volume['id'], server['id'])
+
+ # assign floating ip
+ floating_ip = None
+ if (CONF.network_feature_enabled.floating_ips and
+ CONF.network.floating_network_name):
+ fip = self.create_floating_ip(server)
+ floating_ip = self.associate_floating_ip(
+ fip, server)
+ ssh_ip = floating_ip['floating_ip_address']
+ else:
+ ssh_ip = self.get_server_ip(server)
+
+ # create security group
+ self.create_and_add_security_group_to_server(server)
+
+ # confirm ssh access
+ self.linux_client = self.get_remote_client(
+ ssh_ip, private_key=keypair['private_key'],
+ server=server
+ )
+
+ server_name = server['name'].split('-')[-1]
+
+ # run write test on all volumes
+ for volume in attached_volumes:
+
+            # volume['attachments'][0]['device'] is like /dev/vdb;
+            # strip the leading /dev/ (first 5 chars) to get the dev name
+ dev_name = volume['attachments'][0]['device'][5:]
+
+ mount_path = f"/mnt/{server_name}"
+
+ timestamp_before = self.create_timestamp(
+ ssh_ip, private_key=keypair['private_key'], server=server,
+ dev_name=dev_name, mount_path=mount_path,
+ )
+ timestamp_after = self.get_timestamp(
+ ssh_ip, private_key=keypair['private_key'], server=server,
+ dev_name=dev_name, mount_path=mount_path,
+ )
+ self.assertEqual(timestamp_before, timestamp_after)
+
+ # delete volume
+ self.nova_volume_detach(server, volume)
+ self.volumes_client.delete_volume(volume['id'])
+
+ if floating_ip:
+ # delete the floating IP, this should refresh the server
+ # addresses
+ self.disassociate_floating_ip(floating_ip)
+ waiters.wait_for_server_floating_ip(
+ self.servers_client, server, floating_ip,
+ wait_for_disassociate=True)
+
+ start += len(volume_types)
+ end += len(volume_types)
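The `start`/`end` window in the test above hands each server its own slice of `created_volumes`, `len(volume_types)` volumes at a time; a sketch of the partitioning:

```python
volume_types = ['typeA', 'typeB']
created_volumes = ['v1', 'v2', 'v3', 'v4']  # 2 servers x 2 types
servers = ['s1', 's2']

start, end = 0, len(volume_types)
assignments = {}
for server in servers:
    # each server gets the next len(volume_types) volumes
    assignments[server] = created_volumes[start:end]
    start += len(volume_types)
    end += len(volume_types)

assert assignments == {'s1': ['v1', 'v2'], 's2': ['v3', 'v4']}
```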
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 6372c6b..543be31 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -19,9 +19,7 @@
from tempest.common import utils
from tempest.common import waiters
from tempest import config
-from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
-from tempest.lib import exceptions
from tempest.scenario import manager
CONF = config.CONF
@@ -73,25 +71,6 @@
disks = self.linux_client.get_disks()
self.assertEqual(1, disks.count(CONF.compute.volume_device_name))
- def create_and_add_security_group_to_server(self, server):
- secgroup = self.create_security_group()
- self.servers_client.add_security_group(server['id'],
- name=secgroup['name'])
- self.addCleanup(self.servers_client.remove_security_group,
- server['id'], name=secgroup['name'])
-
- def wait_for_secgroup_add():
- body = (self.servers_client.show_server(server['id'])
- ['server'])
- return {'name': secgroup['name']} in body['security_groups']
-
- if not test_utils.call_until_true(wait_for_secgroup_add,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
- msg = ('Timed out waiting for adding security group %s to server '
- '%s' % (secgroup['id'], server['id']))
- raise exceptions.TimeoutException(msg)
-
@decorators.attr(type='slow')
@decorators.idempotent_id('bdbb5441-9204-419d-a225-b4fdbfb1a1a8')
@utils.services('compute', 'volume', 'image', 'network')
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index 882afff..f4ee98d 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -17,9 +17,11 @@
from oslo_log import log
from tempest.common import utils
+from tempest.common.utils.linux import remote_client
from tempest.common.utils import net_downtime
from tempest.common import waiters
from tempest import config
+from tempest.lib.common import api_version_request
from tempest.lib import decorators
from tempest.scenario import manager
@@ -28,25 +30,12 @@
LOG = log.getLogger(__name__)
-class TestNetworkAdvancedServerOps(manager.NetworkScenarioTest):
- """Check VM connectivity after some advanced instance operations executed:
-
- * Stop/Start an instance
- * Reboot an instance
- * Rebuild an instance
- * Pause/Unpause an instance
- * Suspend/Resume an instance
- * Resize an instance
- """
-
- @classmethod
- def setup_clients(cls):
- super(TestNetworkAdvancedServerOps, cls).setup_clients()
- cls.admin_servers_client = cls.os_admin.servers_client
+class BaseTestNetworkAdvancedServerOps(manager.NetworkScenarioTest):
+ """Base class for defining methods used in tests."""
@classmethod
def skip_checks(cls):
- super(TestNetworkAdvancedServerOps, cls).skip_checks()
+ super(BaseTestNetworkAdvancedServerOps, cls).skip_checks()
if not (CONF.network.project_networks_reachable or
CONF.network.public_network_id):
msg = ('Either project_networks_reachable must be "true", or '
@@ -56,26 +45,52 @@
raise cls.skipException("Floating ips are not available")
@classmethod
+ def setup_clients(cls):
+ super(BaseTestNetworkAdvancedServerOps, cls).setup_clients()
+ cls.admin_servers_client = cls.os_admin.servers_client
+ cls.sec_group_rules_client = \
+ cls.os_primary.security_group_rules_client
+ cls.sec_groups_client = cls.os_primary.security_groups_client
+ cls.keypairs_client = cls.os_primary.keypairs_client
+ cls.floating_ips_client = cls.os_primary.floating_ips_client
+ cls.servers_client = cls.os_primary.servers_client
+
+ @classmethod
def setup_credentials(cls):
# Create no network resources for these tests.
cls.set_network_resources()
- super(TestNetworkAdvancedServerOps, cls).setup_credentials()
+ super(BaseTestNetworkAdvancedServerOps, cls).setup_credentials()
- def _setup_server(self, keypair):
+ def _setup_server(self, keypair, host_spec=None):
security_groups = []
if utils.is_extension_enabled('security-group', 'network'):
- security_group = self.create_security_group()
+ sec_args = {
+ 'security_group_rules_client':
+ self.sec_group_rules_client,
+ 'security_groups_client':
+ self.sec_groups_client
+ }
+ security_group = self.create_security_group(**sec_args)
security_groups = [{'name': security_group['name']}]
network, _, _ = self.setup_network_subnet_with_router()
- server = self.create_server(
- networks=[{'uuid': network['id']}],
- key_name=keypair['name'],
- security_groups=security_groups)
+ server_args = {
+ 'networks': [{'uuid': network['id']}],
+ 'key_name': keypair['name'],
+ 'security_groups': security_groups,
+ }
+
+ if host_spec is not None:
+ server_args['host'] = host_spec
+ # by default, host can be specified by administrators only
+ server_args['clients'] = self.os_admin
+
+ server = self.create_server(**server_args)
return server
def _setup_network(self, server, keypair):
public_network_id = CONF.network.public_network_id
- floating_ip = self.create_floating_ip(server, public_network_id)
+ floating_ip = self.create_floating_ip(
+ server, public_network_id, client=self.floating_ips_client)
# Verify that we can indeed connect to the server before we mess with
# it's state
self._wait_server_status_and_check_network_connectivity(
@@ -107,6 +122,169 @@
self._check_network_connectivity(server, keypair, floating_ip,
username=username)
+ def _test_server_connectivity_resize(self, src_host=None):
+ resize_flavor = CONF.compute.flavor_ref_alt
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair, src_host)
+ if src_host:
+ server_host = self.get_host_for_server(server['id'])
+ self.assertEqual(server_host, src_host)
+ floating_ip = self._setup_network(server, keypair)
+ self.servers_client.resize_server(server['id'],
+ flavor_ref=resize_flavor)
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'VERIFY_RESIZE')
+ self.servers_client.confirm_resize_server(server['id'])
+ server = self.servers_client.show_server(server['id'])['server']
+ # Nova API > 2.46 no longer includes flavor.id, and schema check
+ # will cover whether 'id' should be in flavor
+ if server['flavor'].get('id'):
+ self.assertEqual(resize_flavor, server['flavor']['id'])
+ else:
+ flavor = self.flavors_client.show_flavor(resize_flavor)['flavor']
+ self.assertEqual(flavor['name'], server['flavor']['original_name'])
+ for key in ['ram', 'vcpus', 'disk']:
+ self.assertEqual(flavor[key], server['flavor'][key])
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+
+ def _test_server_connectivity_cold_migration(self, source_host=None,
+ dest_host=None):
+ keypair = self.create_keypair(client=self.keypairs_client)
+ server = self._setup_server(keypair, source_host)
+ floating_ip = self._setup_network(server, keypair)
+ src_host = self.get_host_for_server(server['id'])
+ if source_host:
+ self.assertEqual(src_host, source_host)
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+
+ self.admin_servers_client.migrate_server(
+ server['id'], host=dest_host)
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'VERIFY_RESIZE')
+ self.servers_client.confirm_resize_server(server['id'])
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+ dst_host = self.get_host_for_server(server['id'])
+ if dest_host:
+ self.assertEqual(dst_host, dest_host)
+ self.assertNotEqual(src_host, dst_host)
+
+ def _test_server_connectivity_live_migration(self, source_host=None,
+ dest_host=None,
+ migration=False):
+ keypair = self.create_keypair(client=self.keypairs_client)
+ server = self._setup_server(keypair, source_host)
+ floating_ip = self._setup_network(server, keypair)
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+
+ block_migration = (CONF.compute_feature_enabled.
+ block_migration_for_live_migration)
+ src_host = self.get_host_for_server(server['id'])
+ if source_host:
+ self.assertEqual(src_host, source_host)
+
+ downtime_meter = net_downtime.NetDowntimeMeter(
+ floating_ip['floating_ip_address'])
+ self.useFixture(downtime_meter)
+
+ metadata_downtime_meter = net_downtime.MetadataDowntimeMeter(
+ remote_client.RemoteClient(floating_ip['floating_ip_address'],
+ CONF.validation.image_ssh_user,
+ pkey=keypair['private_key']))
+ self.useFixture(metadata_downtime_meter)
+
+ migration_kwargs = {'host': None, 'block_migration': block_migration}
+
+        # check if the microversion is less than 2.25 because
+        # disk_over_commit is deprecated since compute API version 2.25;
+        # if min_microversion is None, it runs on a version < 2.25
+ min_v = api_version_request.APIVersionRequest(
+ CONF.compute.min_microversion)
+ api_v = api_version_request.APIVersionRequest('2.25')
+ if not migration and (CONF.compute.min_microversion is None or
+ min_v < api_v):
+ migration_kwargs['disk_over_commit'] = False
+
+ if dest_host:
+ migration_kwargs['host'] = dest_host
+
+ self.admin_servers_client.live_migrate_server(
+ server['id'], **migration_kwargs)
+ waiters.wait_for_server_status(self.servers_client,
+ server['id'], 'ACTIVE')
+
+ dst_host = self.get_host_for_server(server['id'])
+ if dest_host:
+ self.assertEqual(dst_host, dest_host)
+
+ self.assertNotEqual(src_host, dst_host, 'Server did not migrate')
+
+ # we first wait until the VM replies pings again, then check the
+ # network downtime
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+
+ downtime = downtime_meter.get_downtime()
+ self.assertIsNotNone(downtime)
+ LOG.debug("Downtime seconds measured with downtime_meter = %r",
+ downtime)
+ allowed_downtime = CONF.validation.allowed_network_downtime
+ self.assertLessEqual(
+ downtime, allowed_downtime,
+ "Downtime of {} seconds is higher than expected '{}'".format(
+ downtime, allowed_downtime))
+
+ metadata_downtime_results = metadata_downtime_meter.get_results()
+ self.assertGreater(metadata_downtime_results['successes'], 0)
+ LOG.debug("Metadata Downtime seconds measured = %r",
+ metadata_downtime_results['downtime'])
+ allowed_metadata_downtime = CONF.validation.allowed_metadata_downtime
+ metadata_downtime_failed = \
+ metadata_downtime_results['downtime']['FAILED']
+ self.assertLessEqual(
+ metadata_downtime_failed, allowed_metadata_downtime,
+ "Metadata downtime: {} seconds is higher than expected: {}".format(
+ metadata_downtime_failed, allowed_metadata_downtime))
+
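A downtime meter like the ones used above can derive downtime from sampled reachability; one simple sketch (the sampling scheme is hypothetical, not the `net_downtime` implementation):

```python
# Hypothetical ping log sampled every 0.5 s: True = reply received.
interval = 0.5
pings = [True, True, False, False, False, True, True]
downtime = pings.count(False) * interval

allowed_downtime = 5.0  # cf. CONF.validation.allowed_network_downtime
assert downtime == 1.5
assert downtime <= allowed_downtime
```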
+ def _test_server_connectivity_cold_migration_revert(self, source_host=None,
+ dest_host=None):
+ keypair = self.create_keypair(client=self.keypairs_client)
+ server = self._setup_server(keypair, source_host)
+ floating_ip = self._setup_network(server, keypair)
+ src_host = self.get_host_for_server(server['id'])
+ if source_host:
+ self.assertEqual(src_host, source_host)
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+
+ self.admin_servers_client.migrate_server(
+ server['id'], host=dest_host)
+ waiters.wait_for_server_status(self.servers_client, server['id'],
+ 'VERIFY_RESIZE')
+ if dest_host:
+ self.assertEqual(dest_host,
+ self.get_host_for_server(server['id']))
+ self.servers_client.revert_resize_server(server['id'])
+ self._wait_server_status_and_check_network_connectivity(
+ server, keypair, floating_ip)
+ dst_host = self.get_host_for_server(server['id'])
+
+ self.assertEqual(src_host, dst_host)
+
+
+class TestNetworkAdvancedServerOps(BaseTestNetworkAdvancedServerOps):
+ """Check VM connectivity after some advanced instance operations executed:
+
+ * Stop/Start an instance
+ * Reboot an instance
+ * Rebuild an instance
+ * Pause/Unpause an instance
+ * Suspend/Resume an instance
+ """
+
@decorators.idempotent_id('61f1aa9a-1573-410e-9054-afa557cab021')
@decorators.attr(type='slow')
@utils.services('compute', 'network')
@@ -190,27 +368,7 @@
@decorators.attr(type='slow')
@utils.services('compute', 'network')
def test_server_connectivity_resize(self):
- resize_flavor = CONF.compute.flavor_ref_alt
- keypair = self.create_keypair()
- server = self._setup_server(keypair)
- floating_ip = self._setup_network(server, keypair)
- self.servers_client.resize_server(server['id'],
- flavor_ref=resize_flavor)
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'VERIFY_RESIZE')
- self.servers_client.confirm_resize_server(server['id'])
- server = self.servers_client.show_server(server['id'])['server']
- # Nova API > 2.46 no longer includes flavor.id, and schema check
- # will cover whether 'id' should be in flavor
- if server['flavor'].get('id'):
- self.assertEqual(resize_flavor, server['flavor']['id'])
- else:
- flavor = self.flavors_client.show_flavor(resize_flavor)['flavor']
- self.assertEqual(flavor['name'], server['flavor']['original_name'])
- for key in ['ram', 'vcpus', 'disk']:
- self.assertEqual(flavor[key], server['flavor'][key])
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
+ self._test_server_connectivity_resize()
@decorators.idempotent_id('a4858f6c-401e-4155-9a49-d5cd053d1a2f')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
@@ -221,22 +379,7 @@
@decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_cold_migration(self):
- keypair = self.create_keypair()
- server = self._setup_server(keypair)
- floating_ip = self._setup_network(server, keypair)
- src_host = self.get_host_for_server(server['id'])
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
-
- self.admin_servers_client.migrate_server(server['id'])
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'VERIFY_RESIZE')
- self.servers_client.confirm_resize_server(server['id'])
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
- dst_host = self.get_host_for_server(server['id'])
-
- self.assertNotEqual(src_host, dst_host)
+ self._test_server_connectivity_cold_migration()
@decorators.idempotent_id('03fd1562-faad-11e7-9ea0-fa163e65f5ce')
@testtools.skipUnless(CONF.compute_feature_enabled.live_migration,
@@ -247,52 +390,7 @@
@decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_live_migration(self):
- keypair = self.create_keypair()
- server = self._setup_server(keypair)
- floating_ip = self._setup_network(server, keypair)
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
-
- block_migration = (CONF.compute_feature_enabled.
- block_migration_for_live_migration)
- old_host = self.get_host_for_server(server['id'])
-
- downtime_meter = net_downtime.NetDowntimeMeter(
- floating_ip['floating_ip_address'])
- self.useFixture(downtime_meter)
-
- migration_kwargs = {'host': None, 'block_migration': block_migration}
-
- # check if microversion is less than 2.25 because of
- # disk_over_commit is depracted since compute api version 2.25
- # if min_microversion is None, it runs on version < 2.25
- if (CONF.compute.min_microversion is None or
- CONF.compute.min_microversion < 2.25):
- migration_kwargs['disk_over_commit'] = False
-
- self.admin_servers_client.live_migrate_server(
- server['id'], **migration_kwargs)
-
- waiters.wait_for_server_status(self.servers_client,
- server['id'], 'ACTIVE')
-
- new_host = self.get_host_for_server(server['id'])
- self.assertNotEqual(old_host, new_host, 'Server did not migrate')
-
- # we first wait until the VM replies pings again, then check the
- # network downtime
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
-
- downtime = downtime_meter.get_downtime()
- self.assertIsNotNone(downtime)
- LOG.debug("Downtime seconds measured with downtime_meter = %r",
- downtime)
- allowed_downtime = CONF.validation.allowed_network_downtime
- self.assertLessEqual(
- downtime, allowed_downtime,
- "Downtime of {} seconds is higher than expected '{}'".format(
- downtime, allowed_downtime))
+ self._test_server_connectivity_live_migration()
@decorators.idempotent_id('25b188d7-0183-4b1e-a11d-15840c8e2fd6')
@testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
@@ -303,19 +401,95 @@
@decorators.attr(type=['slow', 'multinode'])
@utils.services('compute', 'network')
def test_server_connectivity_cold_migration_revert(self):
- keypair = self.create_keypair()
- server = self._setup_server(keypair)
- floating_ip = self._setup_network(server, keypair)
- src_host = self.get_host_for_server(server['id'])
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
+ self._test_server_connectivity_cold_migration_revert()
- self.admin_servers_client.migrate_server(server['id'])
- waiters.wait_for_server_status(self.servers_client, server['id'],
- 'VERIFY_RESIZE')
- self.servers_client.revert_resize_server(server['id'])
- self._wait_server_status_and_check_network_connectivity(
- server, keypair, floating_ip)
- dst_host = self.get_host_for_server(server['id'])
- self.assertEqual(src_host, dst_host)
+class TestNetworkAdvancedServerMigrationWithHost(
+ BaseTestNetworkAdvancedServerOps):
+
+ """Check VM connectivity when specifying source and destination hosts:
+
+ * Resize an instance created on the configured source host
+ * Migrate an instance created on the configured source host:
+ - Cold Migration
+ - Cold Migration with revert
+ - Live Migration
+ """
+ credentials = ['primary', 'admin']
+ compute_min_microversion = "2.74"
+
+ @classmethod
+ def skip_checks(cls):
+ super(TestNetworkAdvancedServerMigrationWithHost, cls).skip_checks()
+ if not (CONF.compute.migration_source_host or
+ CONF.compute.migration_dest_host):
+ raise cls.skipException("migration_source_host or "
+ "migration_dest_host is required")
+ if (CONF.compute.migration_source_host and
+ CONF.compute.migration_dest_host and
+ CONF.compute.migration_source_host ==
+ CONF.compute.migration_dest_host):
+ raise cls.skipException("migration_source_host and "
+ "migration_dest_host must be different")
+
+ @classmethod
+ def setup_clients(cls):
+ super(BaseTestNetworkAdvancedServerOps, cls).setup_clients()
+ cls.sec_group_rules_client = \
+ cls.os_admin.security_group_rules_client
+ cls.sec_groups_client = cls.os_admin.security_groups_client
+ cls.keypairs_client = cls.os_admin.keypairs_client
+ cls.floating_ips_client = cls.os_admin.floating_ips_client
+ cls.servers_client = cls.os_admin.servers_client
+ cls.admin_servers_client = cls.os_admin.servers_client
+
+ @decorators.idempotent_id('06e23934-79ae-11ee-b962-0242ac120002')
+ @testtools.skipUnless(CONF.compute_feature_enabled.resize,
+ 'Resize is not available.')
+ @decorators.attr(type='slow')
+ @utils.services('compute', 'network')
+ def test_server_connectivity_resize(self):
+ source_host = CONF.compute.migration_source_host
+ self._test_server_connectivity_resize(src_host=source_host)
+
+ @decorators.idempotent_id('14f0c9e6-79ae-11ee-b962-0242ac120002')
+ @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
+ 'Cold migration is not available.')
+ @testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
+ 'Less than 2 compute nodes, skipping multinode '
+ 'tests.')
+ @decorators.attr(type=['slow', 'multinode'])
+ @utils.services('compute', 'network')
+ def test_server_connectivity_cold_migration(self):
+ source_host = CONF.compute.migration_source_host
+ dest_host = CONF.compute.migration_dest_host
+ self._test_server_connectivity_cold_migration(
+ source_host=source_host, dest_host=dest_host)
+
+ @decorators.idempotent_id('1c13933e-79ae-11ee-b962-0242ac120002')
+ @testtools.skipUnless(CONF.compute_feature_enabled.live_migration,
+ 'Live migration is not available.')
+ @testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
+ 'Less than 2 compute nodes, skipping multinode '
+ 'tests.')
+ @decorators.attr(type=['slow', 'multinode'])
+ @utils.services('compute', 'network')
+ def test_server_connectivity_live_migration(self):
+ source_host = CONF.compute.migration_source_host
+ dest_host = CONF.compute.migration_dest_host
+ self._test_server_connectivity_live_migration(
+ source_host=source_host, dest_host=dest_host, migration=True)
+
+ @decorators.idempotent_id('2627789a-79ae-11ee-b962-0242ac120002')
+ @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
+ 'Cold migration is not available.')
+ @testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
+ 'Less than 2 compute nodes, skipping multinode '
+ 'tests.')
+ @decorators.attr(type=['slow', 'multinode'])
+ @utils.services('compute', 'network')
+ def test_server_connectivity_cold_migration_revert(self):
+ source_host = CONF.compute.migration_source_host
+ dest_host = CONF.compute.migration_dest_host
+ self._test_server_connectivity_cold_migration_revert(
+ source_host=source_host, dest_host=dest_host)
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 7b819e0..fb68e46 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -179,8 +179,7 @@
def _check_public_network_connectivity(
self, should_connect=True, msg=None,
should_check_floating_ip_status=True, mtu=None):
- """Verifies connectivty to a VM via public network and floating IP
-
+ """Verifies connectivity to a VM via public network and floating IP
and verifies floating IP has resource status is correct.
:param should_connect: bool. determines if connectivity check is
diff --git a/tempest/scenario/test_network_qos_placement.py b/tempest/scenario/test_network_qos_placement.py
index dbbc314..055dcb6 100644
--- a/tempest/scenario/test_network_qos_placement.py
+++ b/tempest/scenario/test_network_qos_placement.py
@@ -67,10 +67,10 @@
cls.networks_client = cls.os_admin.networks_client
cls.subnets_client = cls.os_admin.subnets_client
cls.ports_client = cls.os_primary.ports_client
- cls.routers_client = cls.os_adm.routers_client
+ cls.routers_client = cls.os_admin.routers_client
cls.qos_client = cls.os_admin.qos_client
cls.qos_min_bw_client = cls.os_admin.qos_min_bw_client
- cls.flavors_client = cls.os_adm.flavors_client
+ cls.flavors_client = cls.os_admin.flavors_client
cls.servers_client = cls.os_primary.servers_client
def _create_flavor_to_resize_to(self):
diff --git a/tempest/scenario/test_server_multinode.py b/tempest/scenario/test_server_multinode.py
index fe85234..556b925 100644
--- a/tempest/scenario/test_server_multinode.py
+++ b/tempest/scenario/test_server_multinode.py
@@ -48,7 +48,7 @@
for host in zone['hosts']:
if 'nova-compute' in zone['hosts'][host] and \
zone['hosts'][host]['nova-compute']['available'] and \
- not host.endswith('-ironic'):
+ CONF.compute.target_hosts_to_avoid not in host:
hosts.append({'zone': zone['zoneName'],
'host_name': host})
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index 92dbffb..e060b0f 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -137,7 +137,7 @@
# Make sure the machine ssh-able before attaching the volume
# Just a live machine is responding
- # for device attache/detach as expected
+ # for device attach/detach as expected
linux_client = self.get_remote_client(
ip_for_snapshot, private_key=keypair['private_key'],
server=server_from_snapshot)
diff --git a/tempest/scenario/test_unified_limits.py b/tempest/scenario/test_unified_limits.py
index 6e194f9..7e8f8b2 100644
--- a/tempest/scenario/test_unified_limits.py
+++ b/tempest/scenario/test_unified_limits.py
@@ -32,6 +32,13 @@
credentials = ['primary', 'system_admin']
@classmethod
+ def skip_checks(cls):
+ super(ImageQuotaTest, cls).skip_checks()
+ if not CONF.service_available.glance:
+ skip_msg = ("%s skipped as glance is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @classmethod
def resource_setup(cls):
super(ImageQuotaTest, cls).resource_setup()
diff --git a/tempest/scenario/test_volume_migrate_attached.py b/tempest/scenario/test_volume_migrate_attached.py
index 5005346..f34bfd6 100644
--- a/tempest/scenario/test_volume_migrate_attached.py
+++ b/tempest/scenario/test_volume_migrate_attached.py
@@ -96,10 +96,7 @@
waiters.wait_for_volume_retype(self.volumes_client,
volume_id, new_volume_type)
- @decorators.attr(type='slow')
- @decorators.idempotent_id('deadd2c2-beef-4dce-98be-f86765ff311b')
- @utils.services('compute', 'volume')
- def test_volume_retype_attached(self):
+ def _test_volume_retype_attached(self, dev_name=None):
LOG.info("Creating keypair and security group")
keypair = self.create_keypair()
security_group = self.create_security_group()
@@ -108,18 +105,30 @@
LOG.info("Creating Volume types")
source_type, dest_type = self._create_volume_types()
- # create an instance from volume
- LOG.info("Booting instance from volume")
- volume_id = self.create_volume(imageRef=CONF.compute.image_ref,
- volume_type=source_type['name'])['id']
+ if dev_name is None:
+ # create an instance from volume
+ LOG.info("Booting instance from volume")
+ volume_id = self.create_volume(
+ imageRef=CONF.compute.image_ref,
+ volume_type=source_type['name'])['id']
- instance = self._boot_instance_from_volume(volume_id, keypair,
- security_group)
+ instance = self._boot_instance_from_volume(volume_id, keypair,
+ security_group)
+ else:
+ LOG.info("Booting instance from image and attaching data volume")
+ key_name = keypair['name']
+ security_groups = [{'name': security_group['name']}]
+ instance = self.create_server(key_name=key_name,
+ security_groups=security_groups)
+ volume = self.create_volume(volume_type=source_type['name'])
+ volume_id = volume['id']
+ volume = self.nova_volume_attach(instance, volume)
# write content to volume on instance
LOG.info("Setting timestamp in instance %s", instance['id'])
ip_instance = self.get_server_ip(instance)
timestamp = self.create_timestamp(ip_instance,
+ dev_name=dev_name,
private_key=keypair['private_key'],
server=instance)
@@ -134,6 +143,7 @@
LOG.info("Getting timestamp in postmigrated instance %s",
instance['id'])
timestamp2 = self.get_timestamp(ip_instance,
+ dev_name=dev_name,
private_key=keypair['private_key'],
server=instance)
self.assertEqual(timestamp, timestamp2)
@@ -152,10 +162,35 @@
instance['id'])['volumeAttachments']
self.assertEqual(volume_id, attached_volumes[0]['id'])
+ # Reboot the instance and verify it boots successfully
+ LOG.info("Hard rebooting instance %s", instance['id'])
+ self.servers_client.reboot_server(instance['id'], type='HARD')
+ waiters.wait_for_server_status(
+ self.servers_client, instance['id'], 'ACTIVE')
+
+ # check the content of written file to verify the instance is working
+ # after being rebooted
+ LOG.info("Getting timestamp in postmigrated rebooted instance %s",
+ instance['id'])
+ timestamp2 = self.get_timestamp(ip_instance,
+ dev_name=dev_name,
+ private_key=keypair['private_key'],
+ server=instance)
+ self.assertEqual(timestamp, timestamp2)
+
@decorators.attr(type='slow')
- @decorators.idempotent_id('fe47b1ed-640e-4e3b-a090-200e25607362')
+ @decorators.idempotent_id('deadd2c2-beef-4dce-98be-f86765ff311b')
@utils.services('compute', 'volume')
- def test_volume_migrate_attached(self):
+ def test_volume_retype_attached(self):
+ self._test_volume_retype_attached()
+
+ @decorators.attr(type='slow')
+ @decorators.idempotent_id('122e070c-a5b2-470c-af2b-81e9dbefb9e8')
+ @utils.services('compute', 'volume')
+ def test_volume_retype_attached_data_volume(self):
+ self._test_volume_retype_attached(dev_name='vdb')
+
+ def _test_volume_migrate_attached(self, dev_name=None):
LOG.info("Creating keypair and security group")
keypair = self.create_keypair()
security_group = self.create_security_group()
@@ -163,16 +198,26 @@
LOG.info("Creating volume")
# Create a unique volume type to avoid using the backend default
migratable_type = self.create_volume_type()['name']
- volume_id = self.create_volume(imageRef=CONF.compute.image_ref,
- volume_type=migratable_type)['id']
- volume = self.admin_volumes_client.show_volume(volume_id)
- LOG.info("Booting instance from volume")
- instance = self._boot_instance_from_volume(volume_id, keypair,
- security_group)
+ if dev_name is None:
+ volume_id = self.create_volume(imageRef=CONF.compute.image_ref,
+ volume_type=migratable_type)['id']
+ LOG.info("Booting instance from volume")
+ instance = self._boot_instance_from_volume(volume_id, keypair,
+ security_group)
+ else:
+ LOG.info("Booting instance from image and attaching data volume")
+ key_name = keypair['name']
+ security_groups = [{'name': security_group['name']}]
+ instance = self.create_server(key_name=key_name,
+ security_groups=security_groups)
+ volume = self.create_volume(volume_type=migratable_type)
+ volume_id = volume['id']
+ volume = self.nova_volume_attach(instance, volume)
# Identify the source and destination hosts for the migration
- src_host = volume['volume']['os-vol-host-attr:host']
+ volume = self.admin_volumes_client.show_volume(volume_id)['volume']
+ src_host = volume['os-vol-host-attr:host']
# Select the first c-vol host that isn't hosting the volume as the dest
# host['host_name'] should take the format of host@backend.
@@ -186,6 +231,7 @@
ip_instance = self.get_server_ip(instance)
timestamp = self.create_timestamp(ip_instance,
+ dev_name=dev_name,
private_key=keypair['private_key'],
server=instance)
@@ -202,6 +248,7 @@
LOG.info("Getting timestamp in postmigrated instance %s",
instance['id'])
timestamp2 = self.get_timestamp(ip_instance,
+ dev_name=dev_name,
private_key=keypair['private_key'],
server=instance)
self.assertEqual(timestamp, timestamp2)
@@ -216,3 +263,31 @@
instance['id'])['volumeAttachments']
attached_volume_id = attached_volumes[0]['id']
self.assertEqual(volume_id, attached_volume_id)
+
+ # Reboot the instance and verify it boots successfully
+ LOG.info("Hard rebooting instance %s", instance['id'])
+ self.servers_client.reboot_server(instance['id'], type='HARD')
+ waiters.wait_for_server_status(
+ self.servers_client, instance['id'], 'ACTIVE')
+
+ # check the content of written file to verify the instance is working
+ # after being rebooted
+ LOG.info("Getting timestamp in postmigrated rebooted instance %s",
+ instance['id'])
+ timestamp2 = self.get_timestamp(ip_instance,
+ dev_name=dev_name,
+ private_key=keypair['private_key'],
+ server=instance)
+ self.assertEqual(timestamp, timestamp2)
+
+ @decorators.attr(type='slow')
+ @decorators.idempotent_id('fe47b1ed-640e-4e3b-a090-200e25607362')
+ @utils.services('compute', 'volume')
+ def test_volume_migrate_attached(self):
+ self._test_volume_migrate_attached()
+
+ @decorators.attr(type='slow')
+ @decorators.idempotent_id('1b8661cb-db93-4110-860b-201295027b78')
+ @utils.services('compute', 'volume')
+ def test_volume_migrate_attached_data_volume(self):
+ self._test_volume_migrate_attached(dev_name='vdb')
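Both refactored helpers above branch on ``dev_name``: with ``None`` the instance boots from the typed volume, otherwise it boots from an image and the volume is attached as a data disk, with the same ``dev_name`` threaded into the timestamp helpers so the file lands on the right device. A minimal sketch of that branching (names and the returned dict are illustrative):

```python
def boot_plan(dev_name=None):
    """Describe how the volume under test reaches the instance."""
    if dev_name is None:
        # boot-from-volume: the typed volume is the root disk
        return {"boot": "volume", "timestamp_dev": None}
    # boot-from-image: attach the typed volume as a data disk, e.g. 'vdb'
    return {"boot": "image", "attach_as": dev_name, "timestamp_dev": dev_name}
```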
diff --git a/tempest/serial_tests/README.rst b/tempest/serial_tests/README.rst
new file mode 100644
index 0000000..40a50f4
--- /dev/null
+++ b/tempest/serial_tests/README.rst
@@ -0,0 +1,61 @@
+.. _serial_tests_guide:
+
+Tempest Field Guide to Serial tests
+===================================
+
+
+What are these tests?
+---------------------
+
+Tempest can run tests serially as well as in parallel, depending on the
+configuration, which is fully up to the user. However, sometimes you need to
+make sure that tests do not interfere with each other via OpenStack
+resources while other tests run in parallel. Tempest creates separate
+projects for each test class to keep project-based resources separate between
+test cases.
+
+If your tests use resources outside of projects, e.g. host aggregates, then
+you might need to explicitly separate interfering test cases. If you only need
+to separate a small set of test cases from each other, you can use the
+``LockFixture``.
+
+However, in some cases, a small set of tests needs to be run serially. For
+example, some of the host aggregate and availability zone testing needs
+compute nodes without any running nova server to be able to move compute hosts
+between availability zones. But many tempest tests start one or more nova
+servers.
+
+
+Why are these tests in Tempest?
+-------------------------------
+
+This is one of Tempest's core purposes, testing the integration between
+projects.
+
+
+Scope of these tests
+--------------------
+
+The tests should always use the Tempest implementation of the OpenStack API,
+as we want to ensure that bugs aren't hidden by the official clients.
+
+Tests should be tagged with which services they exercise, as
+determined by which client libraries are used directly by the test.
+
+
+Example of a good test
+----------------------
+
+While we are looking for interactions of two or more services, be specific in
+your interactions. A giant "this is my data center" smoke test is hard to
+debug when it goes wrong.
+
+The tests that need to be run serially need to be marked with the
+``@serial`` class decorator. This will make sure that even if tempest is
+configured to run the tests in parallel, these tests will always be executed
+separately from the rest of the test cases.
+
+Please note that, for test ordering optimization reasons, test cases marked
+for ``@serial`` execution need to be put under the ``tempest/serial_tests``
+directory. This ensures that the serial tests block the parallel tests for
+the least amount of time.
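In code, marking a class for serial execution is a one-line class decorator. A simplified sketch of how such a marker can work (this mimics the idea only, not tempest's actual implementation; the test class name is hypothetical):

```python
def serial(cls):
    """Tag a test class so the scheduler runs it apart from parallel workers."""
    cls._serial = True  # marker later consumed by the test scheduler
    return cls


@serial
class TestHostAggregatesSerial:  # hypothetical serial test class
    pass
```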
diff --git a/tempest/test.py b/tempest/test.py
index 3360221..85a6c36 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -26,13 +26,11 @@
from tempest import clients
from tempest.common import credentials_factory as credentials
-from tempest.common import utils
from tempest import config
from tempest.lib.common import api_microversion_fixture
from tempest.lib.common import fixed_network
from tempest.lib.common import profiler
from tempest.lib.common import validation_resources as vr
-from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
LOG = logging.getLogger(__name__)
@@ -40,25 +38,6 @@
CONF = config.CONF
-attr = debtcollector.moves.moved_function(
- decorators.attr, 'attr', __name__,
- version='Pike', removal_version='?')
-
-
-services = debtcollector.moves.moved_function(
- utils.services, 'services', __name__,
- version='Pike', removal_version='?')
-
-
-requires_ext = debtcollector.moves.moved_function(
- utils.requires_ext, 'requires_ext', __name__,
- version='Pike', removal_version='?')
-
-
-is_extension_enabled = debtcollector.moves.moved_function(
- utils.is_extension_enabled, 'is_extension_enabled', __name__,
- version='Pike', removal_version='?')
-
at_exit_set = set()
@@ -661,7 +640,7 @@
then be run.
Cleanup functions are always called during the test class tearDown
- fixture, even if an exception occured during setUp or tearDown.
+ fixture, even if an exception occurred during setUp or tearDown.
"""
cls._class_cleanups.append((fn, arguments, keywordArguments))
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index 1d69d9d..f2e809b 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -58,7 +58,7 @@
help="Whether or not my service is available")
# Note: as long as the group is listed in get_opt_lists,
- # it will be possible to access its optins in the plugin code
+ # it will be possible to access its options in the plugin code
# via ("-" in the group name are replaces with "_"):
# CONF.my_service.<option_name>
my_service_group = cfg.OptGroup(name="my-service",
diff --git a/tempest/tests/README.rst b/tempest/tests/README.rst
index 0587e7b..081dd07 100644
--- a/tempest/tests/README.rst
+++ b/tempest/tests/README.rst
@@ -14,6 +14,7 @@
Why are these tests in Tempest?
-------------------------------
+
These tests exist to make sure that the mechanisms that we use inside of
Tempest are valid and remain functional. They are only here for self
validation of Tempest.
@@ -21,6 +22,7 @@
Scope of these tests
--------------------
+
Unit tests should not require an external service to be running or any extra
configuration to run. Any state that is required for a test should either be
mocked out or created in a temporary test directory. (see test_wrappers.py for
diff --git a/tempest/tests/cmd/test_cleanup.py b/tempest/tests/cmd/test_cleanup.py
index 69e735b..3efc9bd 100644
--- a/tempest/tests/cmd/test_cleanup.py
+++ b/tempest/tests/cmd/test_cleanup.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import json
from unittest import mock
from tempest.cmd import cleanup
@@ -20,12 +21,30 @@
class TestTempestCleanup(base.TestCase):
- def test_load_json(self):
+ def test_load_json_saved_state(self):
# instantiate "empty" TempestCleanup
c = cleanup.TempestCleanup(None, None, 'test')
test_saved_json = 'tempest/tests/cmd/test_saved_state_json.json'
+ with open(test_saved_json, 'r') as f:
+ test_saved_json_content = json.load(f)
# test if the file is loaded without any issues/exceptions
- c._load_json(test_saved_json)
+ c.options = mock.Mock()
+ c.options.init_saved_state = True
+ c._load_saved_state(test_saved_json)
+ self.assertEqual(c.json_data, test_saved_json_content)
+
+ def test_load_json_resource_list(self):
+ # instantiate "empty" TempestCleanup
+ c = cleanup.TempestCleanup(None, None, 'test')
+ test_resource_list = 'tempest/tests/cmd/test_resource_list.json'
+ with open(test_resource_list, 'r') as f:
+ test_resource_list_content = json.load(f)
+ # test if the file is loaded without any issues/exceptions
+ c.options = mock.Mock()
+ c.options.init_saved_state = False
+ c.options.resource_list = True
+ c._load_resource_list(test_resource_list)
+ self.assertEqual(c.resource_data, test_resource_list_content)
@mock.patch('tempest.cmd.cleanup.TempestCleanup.init')
@mock.patch('tempest.cmd.cleanup.TempestCleanup._cleanup')
diff --git a/tempest/tests/cmd/test_cleanup_services.py b/tempest/tests/cmd/test_cleanup_services.py
index 2301be6..7f8db9f 100644
--- a/tempest/tests/cmd/test_cleanup_services.py
+++ b/tempest/tests/cmd/test_cleanup_services.py
@@ -41,26 +41,35 @@
def test_base_service_init(self):
kwargs = {'data': {'data': 'test'},
'is_dry_run': False,
+ 'resource_list_json': {'resp': 'data'},
'saved_state_json': {'saved': 'data'},
'is_preserve': False,
+ 'is_resource_list': False,
'is_save_state': True,
+ 'prefix': 'tempest',
'tenant_id': 'project_id',
'got_exceptions': []}
base = cleanup_service.BaseService(kwargs)
self.assertEqual(base.data, kwargs['data'])
self.assertFalse(base.is_dry_run)
+ self.assertEqual(base.resource_list_json, kwargs['resource_list_json'])
self.assertEqual(base.saved_state_json, kwargs['saved_state_json'])
self.assertFalse(base.is_preserve)
+ self.assertFalse(base.is_resource_list)
self.assertTrue(base.is_save_state)
self.assertEqual(base.tenant_filter['project_id'], kwargs['tenant_id'])
self.assertEqual(base.got_exceptions, kwargs['got_exceptions'])
+ self.assertEqual(base.prefix, kwargs['prefix'])
def test_not_implemented_ex(self):
kwargs = {'data': {'data': 'test'},
'is_dry_run': False,
+ 'resource_list_json': {'resp': 'data'},
'saved_state_json': {'saved': 'data'},
'is_preserve': False,
+ 'is_resource_list': False,
'is_save_state': False,
+ 'prefix': 'tempest',
'tenant_id': 'project_id',
'got_exceptions': []}
base = self.TestException(kwargs)
@@ -178,26 +187,40 @@
"subnetpools": {'8acf64c1-43fc': 'saved-subnet-pool'},
"regions": {'RegionOne': {}}
}
+
+ resource_list = {
+ "keypairs": {'saved-key-pair': ""}
+ }
+
# Mocked methods
get_method = 'tempest.lib.common.rest_client.RestClient.get'
delete_method = 'tempest.lib.common.rest_client.RestClient.delete'
log_method = 'tempest.cmd.cleanup_service.LOG.exception'
+ filter_saved_state = 'tempest.cmd.cleanup_service.' \
+ 'BaseService._filter_out_ids_from_saved'
+ filter_resource_list = 'tempest.cmd.cleanup_service.' \
+ 'BaseService._filter_by_resource_list'
+ filter_prefix = 'tempest.cmd.cleanup_service.BaseService._filter_by_prefix'
# Override parameters
service_class = 'BaseService'
response = None
service_name = 'default'
def _create_cmd_service(self, service_type, is_save_state=False,
- is_preserve=False, is_dry_run=False):
+ is_preserve=False, is_dry_run=False,
+ prefix='', is_resource_list=False):
creds = fake_credentials.FakeKeystoneV3Credentials()
os = clients.Manager(creds)
return getattr(cleanup_service, service_type)(
os,
+ is_resource_list=is_resource_list,
is_save_state=is_save_state,
is_preserve=is_preserve,
is_dry_run=is_dry_run,
+ prefix=prefix,
project_id='b8e3ece07bb049138d224436756e3b57',
data={},
+ resource_list_json=self.resource_list,
saved_state_json=self.saved_state
)
@@ -261,6 +284,38 @@
self.assertNotIn(rsp['id'], self.conf_values.values())
self.assertNotIn(rsp['name'], self.conf_values.values())
+ def _test_prefix_opt_precedence(self, delete_mock):
+ serv = self._create_cmd_service(
+ self.service_class, is_resource_list=True, prefix='tempest')
+ _, fixtures = self.run_function_with_mocks(
+ serv.run,
+ delete_mock
+ )
+
+ # Check that prefix was used for filtering
+ fixtures[2].mock.assert_called_once()
+
+ # Check that neither saved_state.json nor resource list was
+ # used for filtering
+ fixtures[0].mock.assert_not_called()
+ fixtures[1].mock.assert_not_called()
+
+ def _test_resource_list_opt_precedence(self, delete_mock):
+ serv = self._create_cmd_service(
+ self.service_class, is_resource_list=True)
+ _, fixtures = self.run_function_with_mocks(
+ serv.run,
+ delete_mock
+ )
+
+ # Check that resource list was used for filtering
+ fixtures[1].mock.assert_called_once()
+
+ # Check that neither saved_state.json nor prefix was
+ # used for filtering
+ fixtures[0].mock.assert_not_called()
+ fixtures[2].mock.assert_not_called()
+
class TestSnapshotService(BaseCmdServiceTests):
@@ -315,6 +370,24 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestServerService(BaseCmdServiceTests):
@@ -373,6 +446,24 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestServerGroupService(BaseCmdServiceTests):
@@ -424,6 +515,26 @@
(self.validate_response, 'validate', None)
])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.validate_response, 'validate', None),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.validate_response, 'validate', None),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestKeyPairService(BaseCmdServiceTests):
@@ -488,6 +599,26 @@
(self.validate_response, 'validate', None)
])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.validate_response, 'validate', None),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.validate_response, 'validate', None),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestVolumeService(BaseCmdServiceTests):
@@ -537,6 +668,24 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestVolumeQuotaService(BaseCmdServiceTests):
@@ -756,6 +905,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkFloatingIpService(BaseCmdServiceTests):
@@ -818,6 +985,34 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ serv = self._create_cmd_service(
+ self.service_class, is_resource_list=True, prefix='tempest')
+ _, fixtures = self.run_function_with_mocks(
+ serv.run,
+ delete_mock
+ )
+
+ # cleanup returns []
+ fixtures[0].mock.assert_not_called()
+ fixtures[1].mock.assert_not_called()
+ fixtures[2].mock.assert_not_called()
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkRouterService(BaseCmdServiceTests):
@@ -932,6 +1127,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkMeteringLabelRuleService(BaseCmdServiceTests):
@@ -973,6 +1186,34 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ serv = self._create_cmd_service(
+ self.service_class, is_resource_list=True, prefix='tempest')
+ _, fixtures = self.run_function_with_mocks(
+ serv.run,
+ delete_mock
+ )
+
+ # cleanup returns []
+ fixtures[0].mock.assert_not_called()
+ fixtures[1].mock.assert_not_called()
+ fixtures[2].mock.assert_not_called()
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkMeteringLabelService(BaseCmdServiceTests):
@@ -1015,6 +1256,24 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkPortService(BaseCmdServiceTests):
@@ -1113,6 +1372,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkSecGroupService(BaseCmdServiceTests):
@@ -1191,6 +1468,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkSubnetService(BaseCmdServiceTests):
@@ -1267,6 +1562,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestNetworkSubnetPoolsService(BaseCmdServiceTests):
@@ -1335,6 +1648,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
# begin global services
class TestRegionService(BaseCmdServiceTests):
@@ -1387,6 +1718,34 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ serv = self._create_cmd_service(
+ self.service_class, is_resource_list=True, prefix='tempest')
+ _, fixtures = self.run_function_with_mocks(
+ serv.run,
+ delete_mock
+ )
+
+ # cleanup returns []
+ fixtures[0].mock.assert_not_called()
+ fixtures[1].mock.assert_not_called()
+ fixtures[2].mock.assert_not_called()
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestDomainService(BaseCmdServiceTests):
@@ -1440,6 +1799,26 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None),
+ (self.mock_update, 'update', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None),
+ (self.mock_update, 'update', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestProjectsService(BaseCmdServiceTests):
@@ -1513,6 +1892,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestImagesService(BaseCmdServiceTests):
@@ -1592,6 +1989,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestFlavorService(BaseCmdServiceTests):
@@ -1665,6 +2080,24 @@
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestRoleService(BaseCmdServiceTests):
@@ -1711,6 +2144,24 @@
def test_save_state(self):
self._test_saved_state_true([(self.get_method, self.response, 200)])
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
+
class TestUserService(BaseCmdServiceTests):
@@ -1777,3 +2228,21 @@
"password_expires_at": "1893-11-06T15:32:17.000000",
})
self._test_is_preserve_true([(self.get_method, self.response, 200)])
+
+ def test_prefix_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_prefix_opt_precedence(delete_mock)
+
+ def test_resource_list_opt_precedence(self):
+ delete_mock = [(self.filter_saved_state, [], None),
+ (self.filter_resource_list, [], None),
+ (self.filter_prefix, [], None),
+ (self.get_method, self.response, 200),
+ (self.delete_method, 'error', None),
+ (self.log_method, 'exception', None)]
+ self._test_resource_list_opt_precedence(delete_mock)
diff --git a/tempest/tests/cmd/test_resource_list.json b/tempest/tests/cmd/test_resource_list.json
new file mode 100644
index 0000000..dfbc790
--- /dev/null
+++ b/tempest/tests/cmd/test_resource_list.json
@@ -0,0 +1,11 @@
+{
+ "project": {
+ "ce4e7edf051c439d8b81c4bfe581c5ef": "test"
+ },
+ "keypairs": {
+ "tempest-keypair-1215039183": ""
+ },
+ "users": {
+ "74463c83f9d640fe84c4376527ceff26": "test"
+ }
+}
diff --git a/tempest/tests/cmd/test_run.py b/tempest/tests/cmd/test_run.py
index 3b5e901..b487c3f 100644
--- a/tempest/tests/cmd/test_run.py
+++ b/tempest/tests/cmd/test_run.py
@@ -225,6 +225,11 @@
'%s=%s' % (self.exclude_list, path),
'--regex', 'fail'], 1)
+ def test_tempest_run_with_slowest(self):
+ out, err = self.assertRunExit(['tempest', 'run', '--regex', 'passing',
+ '--slowest'], 0)
+ self.assertRegex(str(out), r'Test id\s+Runtime \(s\)')
+
class TestOldArgRunReturnCode(TestRunReturnCode):
"""A class for testing deprecated but still supported args.
@@ -363,6 +368,7 @@
parsed_args.state = None
parsed_args.list_tests = False
parsed_args.config_file = path
+ parsed_args.slowest = False
with mock.patch('stestr.commands.run_command') as m:
m.return_value = 0
@@ -393,6 +399,7 @@
parsed_args.state = None
parsed_args.list_tests = False
parsed_args.config_file = path
+ parsed_args.slowest = False
with mock.patch('stestr.commands.run_command') as m:
m.return_value = 0
@@ -409,6 +416,7 @@
parsed_args.state = None
parsed_args.list_tests = False
parsed_args.config_file = ''
+ parsed_args.slowest = False
with mock.patch('stestr.commands.run_command') as m:
m.return_value = 0
@@ -441,6 +449,7 @@
parsed_args.state = True
parsed_args.list_tests = False
parsed_args.config_file = ''
+ parsed_args.slowest = False
with mock.patch('stestr.commands.run_command') as m:
m.return_value = 0
@@ -460,6 +469,7 @@
parsed_args.state = True
parsed_args.list_tests = False
parsed_args.config_file = path
+ parsed_args.slowest = False
with mock.patch('stestr.commands.run_command') as m:
m.return_value = 0
diff --git a/tempest/tests/cmd/test_verify_tempest_config.py b/tempest/tests/cmd/test_verify_tempest_config.py
index fa43e58..3df9f19 100644
--- a/tempest/tests/cmd/test_verify_tempest_config.py
+++ b/tempest/tests/cmd/test_verify_tempest_config.py
@@ -498,7 +498,7 @@
return ('token',
{'serviceCatalog': [{'type': 'compute'},
{'type': 'image'},
- {'type': 'volumev3'},
+ {'type': 'block-storage'},
{'type': 'network'},
{'type': 'object-store'}]})
diff --git a/tempest/tests/common/test_credentials_factory.py b/tempest/tests/common/test_credentials_factory.py
index 374474d..154d8d1 100644
--- a/tempest/tests/common/test_credentials_factory.py
+++ b/tempest/tests/common/test_credentials_factory.py
@@ -37,7 +37,7 @@
fake_config.FakePrivate)
def test_get_dynamic_provider_params_creds_v2(self):
- expected_uri = 'EXPECTED_V2_URI'
+ expected_uri = 'http://v2.identity.example.com'
cfg.CONF.set_default('uri', expected_uri, group='identity')
admin_creds = fake_credentials.FakeCredentials()
params = cf.get_dynamic_provider_params('v2', admin_creds=admin_creds)
@@ -48,7 +48,7 @@
self.assertEqual(expected_params[key], params[key])
def test_get_dynamic_provider_params_creds_v3(self):
- expected_uri = 'EXPECTED_V3_URI'
+ expected_uri = 'http://v3.identity.example.com'
cfg.CONF.set_default('uri_v3', expected_uri, group='identity')
admin_creds = fake_credentials.FakeCredentials()
params = cf.get_dynamic_provider_params('v3', admin_creds=admin_creds)
@@ -76,14 +76,14 @@
fill_in=True, identity_version=expected_identity_version)
def test_get_preprov_provider_params_creds_v2(self):
- expected_uri = 'EXPECTED_V2_URI'
+ expected_uri = 'http://v2.identity.example.com'
cfg.CONF.set_default('uri', expected_uri, group='identity')
params = cf.get_preprov_provider_params('v2')
self.assertIn('identity_uri', params)
self.assertEqual(expected_uri, params['identity_uri'])
def test_get_preprov_provider_params_creds_v3(self):
- expected_uri = 'EXPECTED_V3_URI'
+ expected_uri = 'http://v3.identity.example.com'
cfg.CONF.set_default('uri_v3', expected_uri, group='identity')
params = cf.get_preprov_provider_params('v3')
self.assertIn('identity_uri', params)
@@ -237,7 +237,7 @@
@mock.patch('tempest.lib.auth.get_credentials')
def test_get_credentials_v2(self, mock_auth_get_credentials):
- expected_uri = 'V2_URI'
+ expected_uri = 'http://v2.identity.example.com'
expected_result = 'my_creds'
mock_auth_get_credentials.return_value = expected_result
cfg.CONF.set_default('uri', expected_uri, 'identity')
@@ -252,7 +252,7 @@
@mock.patch('tempest.lib.auth.get_credentials')
def test_get_credentials_v3_no_domain(self, mock_auth_get_credentials):
- expected_uri = 'V3_URI'
+ expected_uri = 'https://v3.identity.example.com'
expected_result = 'my_creds'
expected_domain = 'my_domain'
mock_auth_get_credentials.return_value = expected_result
@@ -272,7 +272,7 @@
@mock.patch('tempest.lib.auth.get_credentials')
def test_get_credentials_v3_domain(self, mock_auth_get_credentials):
- expected_uri = 'V3_URI'
+ expected_uri = 'https://v3.identity.example.com'
expected_result = 'my_creds'
expected_domain = 'my_domain'
mock_auth_get_credentials.return_value = expected_result
@@ -291,7 +291,7 @@
@mock.patch('tempest.lib.auth.get_credentials')
def test_get_credentials_v3_system(self, mock_auth_get_credentials):
- expected_uri = 'V3_URI'
+ expected_uri = 'https://v3.identity.example.com'
expected_result = 'my_creds'
mock_auth_get_credentials.return_value = expected_result
cfg.CONF.set_default('uri_v3', expected_uri, 'identity')
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index f194173..f7f2dc7 100755
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -884,6 +884,58 @@
waiters.wait_for_port_status, mock_client,
fake_port_id, fake_status)
+ def test_wait_for_server_ports_active(self):
+ """Test that the waiter returns the ports before the timeout"""
+
+ def is_active(port):
+ return port['status'] == 'ACTIVE'
+
+ def client_response(device_id):
+ """Mock client response, replies with partial status after one
+ call and final status after 2 calls
+ """
+ if mock_client.call_count >= 2:
+ return mock_ports_active
+ else:
+ mock_client.call_count += 1
+ if mock_client.call_count > 1:
+ return mock_ports_half_active
+ return mock_ports
+
+ mock_ports = {'ports': [{'id': '1234', 'status': 'DOWN'},
+ {'id': '5678', 'status': 'DOWN'}]}
+ mock_ports_half_active = {'ports': [{'id': '1234', 'status': 'ACTIVE'},
+ {'id': '5678', 'status': 'DOWN'}]}
+ mock_ports_active = {'ports': [{'id': '1234', 'status': 'ACTIVE'},
+ {'id': '5678', 'status': 'ACTIVE'}]}
+ mock_client = mock.Mock(
+ spec=ports_client.PortsClient,
+ build_timeout=30, build_interval=1,
+ list_ports=client_response)
+ fake_server_id = "9876"
+ self.assertEqual(mock_ports_active['ports'],
+ waiters.wait_for_server_ports_active(
+ mock_client, fake_server_id, is_active))
+
+ def test_wait_for_server_ports_active_timeout(self):
+ """Negative test - checking that a small 'build_timeout' and a static
+ port status of 'DOWN' in the mock will raise a timeout exception
+ """
+
+ def is_active(port):
+ return port['status'] == 'ACTIVE'
+
+ mock_ports = {'ports': [{'id': '1234', 'status': "DOWN"}]}
+ mock_client = mock.Mock(
+ spec=ports_client.PortsClient,
+ build_timeout=2, build_interval=1,
+ list_ports=lambda device_id: mock_ports)
+ fake_server_id = "9876"
+ self.assertRaises(lib_exc.TimeoutException,
+ waiters.wait_for_server_ports_active,
+ mock_client, fake_server_id, is_active)
+
class TestServerFloatingIPWaiters(base.TestCase):
diff --git a/tempest/tests/lib/common/test_dynamic_creds.py b/tempest/tests/lib/common/test_dynamic_creds.py
index d3d01c0..4c2ea30 100644
--- a/tempest/tests/lib/common/test_dynamic_creds.py
+++ b/tempest/tests/lib/common/test_dynamic_creds.py
@@ -104,6 +104,14 @@
(200, {'tenant': {'id': id, 'name': name}}))))
return tenant_fix
+ def _mock_domain_create(self, id, name):
+ domain_fix = self.useFixture(fixtures.MockPatchObject(
+ self.domains_client.DomainsClient,
+ 'create_domain',
+ return_value=(rest_client.ResponseBody
+ (200, {'domain': {'id': id, 'name': name}}))))
+ return domain_fix
+
def _mock_list_roles(self, id, name):
roles_fix = self.useFixture(fixtures.MockPatchObject(
self.roles_client.RolesClient,
@@ -143,7 +151,8 @@
{'id': '1', 'name': 'FakeRole'},
{'id': '2', 'name': 'member'},
{'id': '3', 'name': 'reader'},
- {'id': '4', 'name': 'admin'}]}))))
+ {'id': '4', 'name': 'manager'},
+ {'id': '5', 'name': 'admin'}]}))))
return roles_fix
def _mock_list_ec2_credentials(self, user_id, tenant_id):
@@ -999,6 +1008,7 @@
roles_client = v3_roles_client
tenants_client = v3_projects_client
users_client = v3_users_client
+ domains_client = domains_client
token_client_class = token_client.V3TokenClient
fake_response = fake_identity._fake_v3_response
tenants_client_class = tenants_client.ProjectsClient
@@ -1263,3 +1273,47 @@
"member role already exists, ignoring conflict.")
creds.creds_client.assign_user_role.assert_called_once_with(
mock.ANY, mock.ANY, 'member')
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_project_manager_creds(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_list_roles('1234', 'manager')
+ self._mock_user_create('1234', 'fake_manager_user')
+ self._mock_tenant_create('1234', 'fake_manager_tenant')
+
+ user_mock = mock.patch.object(self.roles_client.RolesClient,
+ 'create_user_role_on_project')
+ user_mock.start()
+ self.addCleanup(user_mock.stop)
+ with mock.patch.object(self.roles_client.RolesClient,
+ 'create_user_role_on_project') as user_mock:
+ manager_creds = creds.get_project_manager_creds()
+ user_mock.assert_has_calls([
+ mock.call('1234', '1234', '1234')])
+ self.assertEqual(manager_creds.username, 'fake_manager_user')
+ self.assertEqual(manager_creds.tenant_name, 'fake_manager_tenant')
+ # Verify IDs
+ self.assertEqual(manager_creds.tenant_id, '1234')
+ self.assertEqual(manager_creds.user_id, '1234')
+
+ @mock.patch('tempest.lib.common.rest_client.RestClient')
+ def test_domain_manager_creds(self, MockRestClient):
+ creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ self._mock_list_roles('1234', 'manager')
+ self._mock_user_create('1234', 'fake_manager_user')
+ self._mock_domain_create('1234', 'fake_manager_domain')
+
+ user_mock = mock.patch.object(self.roles_client.RolesClient,
+ 'create_user_role_on_domain')
+ user_mock.start()
+ self.addCleanup(user_mock.stop)
+ with mock.patch.object(self.roles_client.RolesClient,
+ 'create_user_role_on_domain') as user_mock:
+ manager_creds = creds.get_domain_manager_creds()
+ user_mock.assert_has_calls([
+ mock.call('1234', '1234', '1234')])
+ self.assertEqual(manager_creds.username, 'fake_manager_user')
+ self.assertEqual(manager_creds.domain_name, 'fake_manager_domain')
+ # Verify IDs
+ self.assertEqual(manager_creds.domain_id, '1234')
+ self.assertEqual(manager_creds.user_id, '1234')
diff --git a/tempest/tests/lib/common/test_preprov_creds.py b/tempest/tests/lib/common/test_preprov_creds.py
index f2131dc..5a36f71 100644
--- a/tempest/tests/lib/common/test_preprov_creds.py
+++ b/tempest/tests/lib/common/test_preprov_creds.py
@@ -77,7 +77,13 @@
{'username': 'test_admin2', 'project_name': 'test_tenant12',
'password': 'p', 'roles': [admin_role]},
{'username': 'test_admin3', 'project_name': 'test_tenant13',
- 'password': 'p', 'types': ['admin']}]
+ 'password': 'p', 'types': ['admin']},
+ {'username': 'test_project_manager1',
+ 'project_name': 'test_tenant14', 'password': 'p',
+ 'roles': ['manager']},
+ {'username': 'test_project_manager2',
+ 'tenant_name': 'test_tenant15', 'password': 'p',
+ 'roles': ['manager']}]
def setUp(self):
super(TestPreProvisionedCredentials, self).setUp()
@@ -319,7 +325,7 @@
calls = get_free_hash_mock.mock.mock_calls
self.assertEqual(len(calls), 1)
args = calls[0][1][0]
- self.assertEqual(len(args), 10)
+ self.assertEqual(len(args), 12)
for i in admin_hashes:
self.assertNotIn(i, args)
@@ -431,6 +437,26 @@
# Get one more
test_accounts_class.get_admin_creds()
+ def test_get_project_manager_creds(self):
+ test_accounts_class = preprov_creds.PreProvisionedCredentialProvider(
+ **self.fixed_params)
+ p_manager_creds = test_accounts_class.get_project_manager_creds()
+ self.assertNotIn('test_admin', p_manager_creds.username)
+ self.assertNotIn('test_user', p_manager_creds.username)
+ self.assertIn('test_project_manager', p_manager_creds.username)
+
+ def test_get_project_manager_creds_none_available(self):
+ admin_accounts = [x for x in self.test_accounts if 'test_admin'
+ in x['username']]
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.common.preprov_creds.read_accounts_yaml',
+ return_value=admin_accounts))
+ test_accounts_class = preprov_creds.PreProvisionedCredentialProvider(
+ **self.fixed_params)
+ with testtools.ExpectedException(lib_exc.InvalidCredentials):
+ # Get one more
+ test_accounts_class.get_project_manager_creds()
+
class TestPreProvisionedCredentialsV3(TestPreProvisionedCredentials):
@@ -480,4 +506,29 @@
{'username': 'test_admin2', 'project_name': 'test_project12',
'domain_name': 'domain', 'password': 'p', 'roles': [admin_role]},
{'username': 'test_admin3', 'project_name': 'test_tenant13',
- 'domain_name': 'domain', 'password': 'p', 'types': ['admin']}]
+ 'domain_name': 'domain', 'password': 'p', 'types': ['admin']},
+ {'username': 'test_project_manager1',
+ 'project_name': 'test_project14', 'domain_name': 'domain',
+ 'password': 'p', 'roles': ['manager']},
+ {'username': 'test_domain_manager1',
+ 'domain_name': 'domain', 'password': 'p', 'roles': ['manager']}]
+
+ def test_get_domain_manager_creds(self):
+ test_accounts_class = preprov_creds.PreProvisionedCredentialProvider(
+ **self.fixed_params)
+ d_manager_creds = test_accounts_class.get_domain_manager_creds()
+ self.assertNotIn('test_admin', d_manager_creds.username)
+ self.assertNotIn('test_user', d_manager_creds.username)
+ self.assertIn('test_domain_manager', d_manager_creds.username)
+
+ def test_get_domain_manager_creds_none_available(self):
+ admin_accounts = [x for x in self.test_accounts if 'test_admin'
+ in x['username']]
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.common.preprov_creds.read_accounts_yaml',
+ return_value=admin_accounts))
+ test_accounts_class = preprov_creds.PreProvisionedCredentialProvider(
+ **self.fixed_params)
+ with testtools.ExpectedException(lib_exc.InvalidCredentials):
+ # Get one more
+ test_accounts_class.get_domain_manager_creds()
diff --git a/tempest/tests/lib/common/test_rest_client.py b/tempest/tests/lib/common/test_rest_client.py
index 81a76e0..0d1660c 100644
--- a/tempest/tests/lib/common/test_rest_client.py
+++ b/tempest/tests/lib/common/test_rest_client.py
@@ -13,6 +13,8 @@
# under the License.
import copy
+from unittest import mock
+from unittest.mock import patch
import fixtures
import jsonschema
@@ -749,6 +751,131 @@
expected_code, read_code)
+class TestRecordResources(BaseRestClientTestClass):
+
+ def setUp(self):
+ self.fake_http = fake_http.fake_httplib2()
+ super(TestRecordResources, self).setUp()
+ self.rest_client.rec_rw_lock = mock.MagicMock()
+
+ def test_post_record_resources(self):
+ with patch('builtins.open', mock.mock_open(read_data=b'{}')):
+ self.rest_client.record_resources = True
+ __, return_dict = self.rest_client.post(self.url, {}, {})
+ self.assertEqual({}, return_dict['headers'])
+ self.assertEqual({}, return_dict['body'])
+
+ def test_resource_record_dict(self):
+ mock_resource_list_1 = mock.mock_open(read_data=b'{}')
+ mock_resource_list_2 = mock.mock_open(
+ read_data=b'{"projects": {"test-id": ""}}')
+
+ with patch('builtins.open') as mock_open_func:
+ mock_open_func.side_effect = [
+ mock_resource_list_1.return_value,
+ mock_resource_list_2.return_value
+ ]
+
+ test_dict_body = b'{"project": {"id": "test-id", "name": ""}}\n'
+ self.rest_client.resource_record(test_dict_body)
+
+ content = mock_resource_list_2().read()
+ resource_list_2 = json.loads(content)
+ test_resource_list_2 = {
+ "projects": {"test-id": ""}
+ }
+ self.assertEqual(resource_list_2, test_resource_list_2)
+
+ def test_resource_record_list(self):
+ mock_content_2 = b'{"users": {"test-uuid": "test-name"}}'
+ mock_content_3 = (
+ b'{"users": {"test-uuid": "test-name",'
+ b'"test-uuid2": "test-name2"}}'
+ )
+
+ mock_resource_list_1 = mock.mock_open(read_data=b'{}')
+ mock_resource_list_2 = mock.mock_open(read_data=mock_content_2)
+ mock_resource_list_3 = mock.mock_open(read_data=mock_content_3)
+
+ with patch('builtins.open') as mock_open_func:
+ mock_open_func.side_effect = [
+ mock_resource_list_1.return_value,
+ mock_resource_list_2.return_value,
+ mock_resource_list_3.return_value
+ ]
+
+ test_list_body = '''{
+ "user": [
+ {
+ "id": "test-uuid",
+ "name": "test-name"
+ },
+ {
+ "id": "test-uuid2",
+ "name": "test-name2"
+ }
+ ]
+ }'''
+ test_list_body = test_list_body.encode('utf-8')
+ self.rest_client.resource_record(test_list_body)
+
+ content_2 = mock_resource_list_2().read()
+ resource_list_2 = json.loads(content_2)
+
+ test_resource_list_2 = {
+ "users": {
+ "test-uuid": "test-name"
+ }
+ }
+ self.assertEqual(resource_list_2, test_resource_list_2)
+
+ content_3 = mock_resource_list_3().read()
+ resource_list_3 = json.loads(content_3)
+
+ test_resource_list_3 = {
+ "users": {
+ "test-uuid": "test-name",
+ "test-uuid2": "test-name2"
+ }
+ }
+ self.assertEqual(resource_list_3, test_resource_list_3)
+
+ def test_resource_update_id(self):
+ data = {}
+ res_dict = {'id': 'test-uuid', 'name': 'test-name'}
+
+ with patch('builtins.open', mock.mock_open(read_data=b'{}')):
+ self.rest_client.resource_update(data, 'user', res_dict)
+ result = {'users': {'test-uuid': 'test-name'}}
+ self.assertEqual(data, result)
+
+ def test_resource_update_name(self):
+ data = {'keypairs': {}}
+ res_dict = {'name': 'test-keypair'}
+
+ with patch('builtins.open', mock.mock_open(read_data=b'{}')):
+ self.rest_client.resource_update(data, 'keypair', res_dict)
+ result = {'keypairs': {'test-keypair': ""}}
+ self.assertEqual(data, result)
+
+ def test_resource_update_no_id(self):
+ data = {}
+ res_dict = {'type': 'test', 'description': 'example'}
+
+ with patch('builtins.open', mock.mock_open(read_data=b'{}')):
+ self.rest_client.resource_update(data, 'projects', res_dict)
+ result = {'projects': {}}
+ self.assertEqual(data, result)
+
+ def test_resource_update_not_dict(self):
+ data = {}
+ res_dict = 'test-string'
+
+ with patch('builtins.open', mock.mock_open(read_data=b'{}')):
+ self.rest_client.resource_update(data, 'user', res_dict)
+ self.assertEqual(data, {})
+
+
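The `resource_update` tests above encode a simple mapping: a resource dict with an `id` is recorded as `{id: name}`, a dict with only a `name` as `{name: ""}`, and a non-dict or id/name-less payload leaves the recorded entries untouched. A minimal sketch of that behavior (a hypothetical stand-in mirroring what the tests assert, not tempest's actual `RestClient.resource_update` implementation):

```python
def resource_update(data, resource, res_dict):
    """Record a created resource under a pluralized key in ``data``.

    Hypothetical helper; it only reproduces the behavior the unit tests
    above assert. The pluralization rule (append 's' unless the name
    already ends in 's') is inferred from the test inputs.
    """
    if not isinstance(res_dict, dict):
        return
    key = resource if resource.endswith('s') else resource + 's'
    entry = data.setdefault(key, {})
    if 'id' in res_dict:
        entry[res_dict['id']] = res_dict.get('name') or ''
    elif 'name' in res_dict:
        entry[res_dict['name']] = ''

data = {}
resource_update(data, 'user', {'id': 'test-uuid', 'name': 'test-name'})
assert data == {'users': {'test-uuid': 'test-name'}}
resource_update(data, 'keypair', {'name': 'test-keypair'})
assert data['keypairs'] == {'test-keypair': ''}
```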
class TestResponseBody(base.TestCase):
def test_str(self):
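The record-resources tests above lean on a mocking pattern worth noting: patching `builtins.open` with a `side_effect` list of `mock_open(...).return_value` handles, so each successive `open()` call yields a file object whose `read()` returns different data. A standalone illustration of the pattern (file name is arbitrary):

```python
from unittest import mock

# Two prepared file handles with different contents.
first = mock.mock_open(read_data='{}')
second = mock.mock_open(read_data='{"projects": {"test-id": ""}}')

with mock.patch('builtins.open') as mock_open_func:
    # Each call to open() consumes the next handle from the list.
    mock_open_func.side_effect = [first.return_value, second.return_value]
    with open('resource_list.json') as f:
        initial = f.read()
    with open('resource_list.json') as f:
        updated = f.read()

assert initial == '{}'
assert 'test-id' in updated
```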
diff --git a/tempest/tests/lib/common/utils/test_data_utils.py b/tempest/tests/lib/common/utils/test_data_utils.py
index a0267d0..06a7805 100644
--- a/tempest/tests/lib/common/utils/test_data_utils.py
+++ b/tempest/tests/lib/common/utils/test_data_utils.py
@@ -79,7 +79,7 @@
self.assertEqual(len(actual), 3)
self.assertRegex(actual, "[A-Za-z0-9~!@#%^&*_=+]{3}")
actual2 = data_utils.rand_password(2)
- # NOTE(masayukig): Originally, we checked that the acutal and actual2
+ # NOTE(masayukig): Originally, we checked that the actual and actual2
# are different each other. But only 3 letters can be the same value
# in a very rare case. So, we just check the length here, too,
# just in case.
diff --git a/tempest/tests/lib/services/base.py b/tempest/tests/lib/services/base.py
index 924f9f2..fd4bc17 100644
--- a/tempest/tests/lib/services/base.py
+++ b/tempest/tests/lib/services/base.py
@@ -54,7 +54,7 @@
``assert_called_once_with(foo='bar')`` is called.
* If mock_args='foo' then ``assert_called_once_with('foo')``
is called.
- :param resp_as_string: Whether response body is retruned as string.
+ :param resp_as_string: Whether response body is returned as string.
This is for service client methods which return ResponseBodyData
object.
:param kwargs: kwargs that are passed to function.
diff --git a/tempest/tests/lib/services/image/v2/test_images_client.py b/tempest/tests/lib/services/image/v2/test_images_client.py
index 27a50a9..01861a2 100644
--- a/tempest/tests/lib/services/image/v2/test_images_client.py
+++ b/tempest/tests/lib/services/image/v2/test_images_client.py
@@ -146,6 +146,36 @@
]
}
+ FAKE_DELETE_IMAGE_FROM_STORE = {
+ "id": "e485aab9-0907-4973-921c-bb6da8a8fcf8",
+ "name": u"\u2740(*\xb4\u25e2`*)\u2740",
+ "status": "active",
+ "visibility": "public",
+ "size": 2254249,
+ "checksum": "2cec138d7dae2aa59038ef8c9aec2390",
+ "tags": [
+ "fedora",
+ "beefy"
+ ],
+ "created_at": "2012-08-10T19:23:50Z",
+ "updated_at": "2012-08-12T11:11:33Z",
+ "self": "/v2/images/da3b75d9-3f4a-40e7-8a2c-bfab23927dea",
+ "file": "/v2/images/da3b75d9-3f4a-40e7-8a2c-bfab23927"
+ "dea/file",
+ "schema": "/v2/schemas/image",
+ "owner": None,
+ "min_ram": None,
+ "min_disk": None,
+ "disk_format": None,
+ "virtual_size": None,
+ "container_format": None,
+ "os_hash_algo": "sha512",
+ "os_hash_value": "ef7d1ed957ffafefb324d50ebc6685ed03d0e645d",
+ "os_hidden": False,
+ "protected": False,
+ "stores": ["store-1", "store-2"],
+ }
+
FAKE_TAG_NAME = "fake tag"
def setUp(self):
@@ -294,3 +324,12 @@
self.FAKE_SHOW_IMAGE_TASKS,
True,
image_id="e485aab9-0907-4973-921c-bb6da8a8fcf8")
+
+ def test_delete_image_from_store(self):
+ self.check_service_client_function(
+ self.client.delete_image_from_store,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ image_id=self.FAKE_DELETE_IMAGE_FROM_STORE["id"],
+ store_name=self.FAKE_DELETE_IMAGE_FROM_STORE["stores"][0],
+ status=204)
diff --git a/tempest/tests/lib/services/placement/test_placement_client.py b/tempest/tests/lib/services/placement/test_placement_client.py
index 1396a85..bb57bb0 100644
--- a/tempest/tests/lib/services/placement/test_placement_client.py
+++ b/tempest/tests/lib/services/placement/test_placement_client.py
@@ -87,3 +87,77 @@
def test_list_allocations_with_bytes_body(self):
self._test_list_allocations(bytes_body=True)
+
+ FAKE_ALL_TRAITS = {
+ "traits": [
+ "CUSTOM_HW_FPGA_CLASS1",
+ "CUSTOM_HW_FPGA_CLASS2",
+ "CUSTOM_HW_FPGA_CLASS3"
+ ]
+ }
+
+ FAKE_ASSOCIATED_TRAITS = {
+ "traits": [
+ "CUSTOM_HW_FPGA_CLASS1",
+ "CUSTOM_HW_FPGA_CLASS2"
+ ]
+ }
+
+ def test_list_traits(self):
+ self.check_service_client_function(
+ self.client.list_traits,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_ALL_TRAITS)
+
+ self.check_service_client_function(
+ self.client.list_traits,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_ASSOCIATED_TRAITS,
+ **{
+ "associated": "true"
+ })
+
+ self.check_service_client_function(
+ self.client.list_traits,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_ALL_TRAITS,
+ **{
+ "associated": "true",
+ "name": "startswith:CUSTOM_HW_FPGPA"
+ })
+
+ def test_show_traits(self):
+ self.check_service_client_function(
+ self.client.show_trait,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ 204, status=204,
+ name="CUSTOM_HW_FPGA_CLASS1")
+
+ self.check_service_client_function(
+ self.client.show_trait,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ 404, status=404,
+            # a trait with this name does not exist
+ name="CUSTOM_HW_FPGA_CLASS4")
+
+ def test_create_traits(self):
+ self.check_service_client_function(
+ self.client.create_trait,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ 204, status=204,
+            # try to create a trait with an existing name
+ name="CUSTOM_HW_FPGA_CLASS1")
+
+ self.check_service_client_function(
+ self.client.create_trait,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ 201, status=201,
+            # create a new trait
+ name="CUSTOM_HW_FPGA_CLASS4")
+
+ def test_delete_traits(self):
+ self.check_service_client_function(
+ self.client.delete_trait,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ 204, status=204,
+ name="CUSTOM_HW_FPGA_CLASS1")
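The status codes asserted above follow the Placement trait API semantics: `PUT /traits/{name}` answers 204 when the trait already exists and 201 when it is newly created, and `GET /traits/{name}` answers 204 for an existing trait and 404 otherwise. A toy in-memory model of those codes (illustrative only, not the placement client):

```python
def put_trait(store, name):
    """PUT /traits/{name}: 201 if created, 204 if it already existed."""
    if name in store:
        return 204
    store.add(name)
    return 201

def get_trait(store, name):
    """GET /traits/{name}: 204 if present, 404 otherwise."""
    return 204 if name in store else 404

store = {'CUSTOM_HW_FPGA_CLASS1'}
assert put_trait(store, 'CUSTOM_HW_FPGA_CLASS1') == 204  # already exists
assert put_trait(store, 'CUSTOM_HW_FPGA_CLASS4') == 201  # newly created
assert get_trait(store, 'CUSTOM_HW_FPGA_CLASS4') == 204
assert get_trait(store, 'CUSTOM_HW_FPGA_CLASS9') == 404
```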
diff --git a/tempest/tests/lib/services/placement/test_resource_providers_client.py b/tempest/tests/lib/services/placement/test_resource_providers_client.py
index 2871395..399f323 100644
--- a/tempest/tests/lib/services/placement/test_resource_providers_client.py
+++ b/tempest/tests/lib/services/placement/test_resource_providers_client.py
@@ -204,3 +204,40 @@
def test_show_resource_provider_usages_with_with_bytes_body(self):
self._test_list_resource_provider_inventories(bytes_body=True)
+
+ FAKE_ALL_RESOURCE_PROVIDER_TRAITS = {
+ "resource_provider_generation": 0,
+ "traits": [
+ "CUSTOM_HW_FPGA_CLASS1",
+ "CUSTOM_HW_FPGA_CLASS2"
+ ]
+ }
+ FAKE_NEW_RESOURCE_PROVIDER_TRAITS = {
+ "resource_provider_generation": 1,
+ "traits": [
+ "CUSTOM_HW_FPGA_CLASS1",
+ "CUSTOM_HW_FPGA_CLASS2"
+ ]
+ }
+
+ def test_list_resource_provider_traits(self):
+ self.check_service_client_function(
+ self.client.list_resource_provider_traits,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_ALL_RESOURCE_PROVIDER_TRAITS,
+ rp_uuid=self.FAKE_RESOURCE_PROVIDER_UUID)
+
+ def test_update_resource_provider_traits(self):
+ self.check_service_client_function(
+ self.client.update_resource_provider_traits,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ self.FAKE_NEW_RESOURCE_PROVIDER_TRAITS,
+ rp_uuid=self.FAKE_RESOURCE_PROVIDER_UUID,
+ **self.FAKE_NEW_RESOURCE_PROVIDER_TRAITS)
+
+ def test_delete_resource_provider_traits(self):
+ self.check_service_client_function(
+ self.client.delete_resource_provider_traits,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ self.FAKE_ALL_RESOURCE_PROVIDER_TRAITS, status=204,
+ rp_uuid=self.FAKE_RESOURCE_PROVIDER_UUID)
diff --git a/tempest/tests/lib/test_auth.py b/tempest/tests/lib/test_auth.py
index 3edb122..4e5ec48 100644
--- a/tempest/tests/lib/test_auth.py
+++ b/tempest/tests/lib/test_auth.py
@@ -17,6 +17,7 @@
import datetime
import fixtures
+from oslo_utils import timeutils
import testtools
from tempest.lib import auth
@@ -509,15 +510,15 @@
self._test_base_url_helper(expected, filters, ('token', auth_data))
def test_token_not_expired(self):
- expiry_data = datetime.datetime.utcnow() + datetime.timedelta(days=1)
+ expiry_data = timeutils.utcnow() + datetime.timedelta(days=1)
self._verify_expiry(expiry_data=expiry_data, should_be_expired=False)
def test_token_expired(self):
- expiry_data = datetime.datetime.utcnow() - datetime.timedelta(hours=1)
+ expiry_data = timeutils.utcnow() - datetime.timedelta(hours=1)
self._verify_expiry(expiry_data=expiry_data, should_be_expired=True)
def test_token_not_expired_to_be_renewed(self):
- expiry_data = (datetime.datetime.utcnow() +
+ expiry_data = (timeutils.utcnow() +
self.auth_provider.token_expiry_threshold / 2)
self._verify_expiry(expiry_data=expiry_data, should_be_expired=True)
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index 13870ba..0ba6ed3 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -162,7 +162,7 @@
client = ssh.Client('localhost', 'root', timeout=timeout)
# We need to mock LOG here because LOG.info() calls time.time()
- # in order to preprend a timestamp.
+ # in order to prepend a timestamp.
with mock.patch.object(ssh, 'LOG'):
self.assertRaises(exceptions.SSHTimeout,
client._get_ssh_connection)
diff --git a/tempest/tests/test_hacking.py b/tempest/tests/test_hacking.py
index 464e66a..3f603e8 100644
--- a/tempest/tests/test_hacking.py
+++ b/tempest/tests/test_hacking.py
@@ -51,25 +51,34 @@
def test_no_setup_teardown_class_for_tests(self):
self.assertTrue(checks.no_setup_teardown_class_for_tests(
- " def setUpClass(cls):", './tempest/tests/fake_test.py'))
+ " def setUpClass(cls):", './tempest/tests/fake_test.py', False))
self.assertIsNone(checks.no_setup_teardown_class_for_tests(
- " def setUpClass(cls): # noqa", './tempest/tests/fake_test.py'))
+ " def setUpClass(cls):", './tempest/tests/fake_test.py',
+ True))
self.assertTrue(checks.no_setup_teardown_class_for_tests(
- " def setUpClass(cls):", './tempest/api/fake_test.py'))
+ " def setUpClass(cls):", './tempest/api/fake_test.py',
+ False))
self.assertTrue(checks.no_setup_teardown_class_for_tests(
- " def setUpClass(cls):", './tempest/scenario/fake_test.py'))
+ " def setUpClass(cls):", './tempest/scenario/fake_test.py',
+ False))
self.assertFalse(checks.no_setup_teardown_class_for_tests(
- " def setUpClass(cls):", './tempest/test.py'))
+ " def setUpClass(cls):", './tempest/test.py',
+ False))
self.assertTrue(checks.no_setup_teardown_class_for_tests(
- " def tearDownClass(cls):", './tempest/tests/fake_test.py'))
+ " def tearDownClass(cls):", './tempest/tests/fake_test.py',
+ False))
self.assertIsNone(checks.no_setup_teardown_class_for_tests(
- " def tearDownClass(cls): # noqa", './tempest/tests/fake_test.py'))
+ " def tearDownClass(cls):", './tempest/tests/fake_test.py',
+ True))
self.assertTrue(checks.no_setup_teardown_class_for_tests(
- " def tearDownClass(cls):", './tempest/api/fake_test.py'))
+ " def tearDownClass(cls):", './tempest/api/fake_test.py',
+ False))
self.assertTrue(checks.no_setup_teardown_class_for_tests(
- " def tearDownClass(cls):", './tempest/scenario/fake_test.py'))
+ " def tearDownClass(cls):", './tempest/scenario/fake_test.py',
+ False))
self.assertFalse(checks.no_setup_teardown_class_for_tests(
- " def tearDownClass(cls):", './tempest/test.py'))
+ " def tearDownClass(cls):", './tempest/test.py',
+ False))
def test_import_no_clients_in_api_and_scenario_tests(self):
for client in checks.PYTHON_CLIENTS:
@@ -198,22 +207,26 @@
# arbitrarily many decorators. These insert decorators above the
# @decorators.attr(type=['negative']) decorator.
for decorator in other_decorators:
- self.assertIsNone(check(" %s" % decorator, filename))
+ self.assertFalse(
+ list(check(" %s" % decorator, filename)))
if with_negative_decorator:
- self.assertIsNone(
- check("@decorators.attr(type=['negative'])", filename))
+ self.assertFalse(
+ list(check("@decorators.attr(type=['negative'])", filename)))
if with_other_decorators:
# Include multiple decorators to verify that this check works with
# arbitrarily many decorators. These insert decorators between
# the test and the @decorators.attr(type=['negative']) decorator.
for decorator in other_decorators:
- self.assertIsNone(check(" %s" % decorator, filename))
- final_result = check(" def test_some_negative_case", filename)
+ self.assertFalse(
+ list(check(" %s" % decorator, filename)))
+ final_result = list(check(" def test_some_negative_case", filename))
if expected_success:
- self.assertIsNone(final_result)
+ self.assertFalse(final_result)
else:
- self.assertIsInstance(final_result, tuple)
- self.assertFalse(final_result[0])
+ self.assertEqual(1, len(final_result))
+ self.assertIsInstance(final_result[0], tuple)
+ self.assertEqual(0, final_result[0][0])
+ self.assertTrue(final_result[0][1])
def test_no_negatve_test_attribute_applied_to_negative_test(self):
# Check negative filename, negative decorator passes
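The rewritten assertions reflect that these hacking checks are now flake8-style generators: instead of returning `None` or a single tuple, a check yields zero or more `(offset, message)` tuples. A minimal check of that shape (a hypothetical rule, not one of tempest's actual checks):

```python
def no_log_warn(logical_line):
    """Yield (offset, message) for each violation, flake8-style."""
    pos = logical_line.find('LOG.warn(')
    if pos != -1:
        yield (pos, 'T999: use LOG.warning() instead of LOG.warn()')

clean = list(no_log_warn("LOG.warning('ok')"))
dirty = list(no_log_warn("LOG.warn('deprecated')"))
assert clean == []
assert dirty[0][0] == 0
assert dirty[0][1].startswith('T999')
```

This is why the tests now wrap each call in `list(...)` and assert on `final_result[0]` being a tuple rather than on the return value itself.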
diff --git a/tempest/tests/test_test.py b/tempest/tests/test_test.py
index 80825a4..7fb9bb3 100644
--- a/tempest/tests/test_test.py
+++ b/tempest/tests/test_test.py
@@ -303,7 +303,7 @@
# [0]: test, err, details [1] -> exc_info
# Type, Exception, traceback [1] -> MultipleException
found_exc = log[0][1][1]
- self.assertTrue(isinstance(found_exc, testtools.MultipleExceptions))
+ self.assertIsInstance(found_exc, testtools.MultipleExceptions)
self.assertEqual(2, len(found_exc.args))
# Each arg is exc_info - match messages and order
self.assertIn('mock3 resource', str(found_exc.args[0][1]))
@@ -332,7 +332,7 @@
# [0]: test, err, details [1] -> exc_info
# Type, Exception, traceback [1] -> RuntimeError
found_exc = log[0][1][1]
- self.assertTrue(isinstance(found_exc, RuntimeError))
+ self.assertIsInstance(found_exc, RuntimeError)
self.assertIn(BadResourceCleanup.__name__, str(found_exc))
def test_super_skip_checks_not_invoked(self):
diff --git a/test-requirements.txt b/test-requirements.txt
index 17fa9f1..bd4d772 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,8 +1,4 @@
-# The order of packages is significant, because pip processes them in the order
-# of appearance. Changing the order has an impact on the overall integration
-# process, which may cause wedges in the gate later.
-hacking>=3.0.1,<3.1.0;python_version>='3.5' # Apache-2.0
+hacking>=6.1.0,<6.2.0
coverage!=4.4,>=4.0 # Apache-2.0
oslotest>=3.2.0 # Apache-2.0
-pycodestyle>=2.0.0,<2.6.0 # MIT
-flake8-import-order==0.11 # LGPLv3
+flake8-import-order>=0.18.0,<0.19.0 # LGPLv3
diff --git a/tools/generate-tempest-plugins-list.py b/tools/generate-tempest-plugins-list.py
index 0b6b342..2e8ced5 100644
--- a/tools/generate-tempest-plugins-list.py
+++ b/tools/generate-tempest-plugins-list.py
@@ -75,7 +75,6 @@
'x/networking-l2gw-tempest-plugin'
'x/novajoin-tempest-plugin'
'x/ranger-tempest-plugin'
- 'x/tap-as-a-service-tempest-plugin'
'x/trio2o'
# No changes are merging in this
# https://review.opendev.org/q/project:x%252Fnetworking-fortinet
diff --git a/tox.ini b/tox.ini
index de81707..d9d2bad 100644
--- a/tox.ini
+++ b/tox.ini
@@ -154,7 +154,7 @@
sitepackages = {[tempestenv]sitepackages}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
-# But exlcude the extra tests mentioned in tools/tempest-extra-tests-list.txt
+# But exclude the extra tests mentioned in tools/tempest-extra-tests-list.txt
regex = '(^tempest\.scenario.*)|(^tempest\.serial_tests)|(?!.*\[.*\bslow\b.*\])(^tempest\.api)'
commands =
find . -type f -name "*.pyc" -delete
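The `regex` above combines three branches: all scenario tests, all serial tests, and API tests whose id does not carry a `[... slow ...]` tag (the negative lookahead). Its effect can be sanity-checked with Python's `re` module (the test ids below are made up):

```python
import re

pattern = re.compile(
    r'(^tempest\.scenario.*)|(^tempest\.serial_tests)'
    r'|(?!.*\[.*\bslow\b.*\])(^tempest\.api)')

# Scenario and serial tests always match, even when tagged slow.
assert pattern.search('tempest.scenario.test_network_basic_ops[slow]')
assert pattern.search('tempest.serial_tests.test_something')
# API tests match only when not tagged slow.
assert pattern.search('tempest.api.compute.test_servers')
assert not pattern.search('tempest.api.image.test_big_image[id,slow]')
```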
@@ -197,8 +197,8 @@
# tests listed in exclude-list file:
commands =
find . -type f -name "*.pyc" -delete
- tempest run --regex {[testenv:integrated-compute]regex1} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
- tempest run --combine --serial --regex {[testenv:integrated-compute]regex2} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --slowest --regex {[testenv:integrated-compute]regex1} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
+ tempest run --combine --serial --slowest --regex {[testenv:integrated-compute]regex2} --exclude-list ./tools/tempest-integrated-gate-compute-exclude-list.txt {posargs}
[testenv:integrated-placement]
envdir = .tox/tempest
@@ -359,8 +359,9 @@
sphinx-apidoc -f -o doc/source/tests/image tempest/api/image
sphinx-apidoc -f -o doc/source/tests/network tempest/api/network
sphinx-apidoc -f -o doc/source/tests/object_storage tempest/api/object_storage
- sphinx-apidoc -f -o doc/source/tests/scenario tempest/scenario
sphinx-apidoc -f -o doc/source/tests/volume tempest/api/volume
+ sphinx-apidoc -f -o doc/source/tests/scenario tempest/scenario
+ sphinx-apidoc -f -o doc/source/tests/serial_tests tempest/serial_tests
rm -rf doc/build
sphinx-build -W -b html doc/source doc/build/html
allowlist_externals =
@@ -377,8 +378,9 @@
sphinx-apidoc -f -o doc/source/tests/image tempest/api/image
sphinx-apidoc -f -o doc/source/tests/network tempest/api/network
sphinx-apidoc -f -o doc/source/tests/object_storage tempest/api/object_storage
- sphinx-apidoc -f -o doc/source/tests/scenario tempest/scenario
sphinx-apidoc -f -o doc/source/tests/volume tempest/api/volume
+ sphinx-apidoc -f -o doc/source/tests/scenario tempest/scenario
+ sphinx-apidoc -f -o doc/source/tests/serial_tests tempest/serial_tests
sphinx-build -W -b latex doc/source doc/build/pdf
make -C doc/build/pdf
@@ -409,7 +411,8 @@
# E129 skipped because it is too limiting when combined with other rules
# W504 skipped because it is overeager and unnecessary
# H405 skipped because it arbitrarily forces doctring "title" lines
-ignore = E125,E123,E129,W504,H405
+# I201 and I202 skipped because the rules do not allow a blank line between third-party modules and our own modules
+ignore = E125,E123,E129,W504,H405,I201,I202,T117
show-source = True
exclude = .git,.venv,.tox,dist,doc,*egg,build
enable-extensions = H106,H203,H904
diff --git a/zuul.d/base.yaml b/zuul.d/base.yaml
index 0ac893a..4de4111 100644
--- a/zuul.d/base.yaml
+++ b/zuul.d/base.yaml
@@ -12,6 +12,10 @@
timeout: 7200
roles: &base_roles
- zuul: opendev.org/openstack/devstack
+ failure-output:
+ # This matches stestr/tempest output when a test fails
+ # {1} tempest.api.test_blah [5.743446s] ... FAILED
+ - '\{\d+\} (.*?) \[[\d\.]+s\] \.\.\. FAILED'
vars: &base_vars
devstack_localrc:
IMAGE_URLS: http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img, http://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img
@@ -22,8 +26,12 @@
$TEMPEST_CONFIG:
compute:
min_compute_nodes: "{{ groups['compute'] | default(['controller']) | length }}"
+ service-clients:
+ http_timeout: 90
test_results_stage_name: test_results
zuul_copy_output:
+ '/var/log/openvswitch': logs
+ '/var/log/ovn': logs
'{{ devstack_base_dir }}/tempest/etc/tempest.conf': logs
'{{ devstack_base_dir }}/tempest/etc/accounts.yaml': logs
'{{ devstack_base_dir }}/tempest/tempest.log': logs
@@ -56,6 +64,10 @@
required-projects: *base_required-projects
timeout: 7200
roles: *base_roles
+ failure-output:
+ # This matches stestr/tempest output when a test fails
+ # {1} tempest.api.test_blah [5.743446s] ... FAILED
+ - '\{\d+\} (.*?) \[[\d\.]+s\] \.\.\. FAILED'
vars: *base_vars
run: playbooks/devstack-tempest-ipv6.yaml
post-run: playbooks/post-tempest.yaml
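The `failure-output` entries added above register a regex that Zuul uses to surface failing lines early in the console output; it targets the stestr result line quoted in the comment. A quick sanity check of that pattern:

```python
import re

FAILURE_RE = r'\{\d+\} (.*?) \[[\d\.]+s\] \.\.\. FAILED'

line = '{1} tempest.api.test_blah [5.743446s] ... FAILED'
match = re.search(FAILURE_RE, line)
assert match is not None
assert match.group(1) == 'tempest.api.test_blah'

# Passing tests do not trip the pattern.
assert re.search(FAILURE_RE, '{2} tempest.api.test_ok [0.5s] ... ok') is None
```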
diff --git a/zuul.d/integrated-gate.yaml b/zuul.d/integrated-gate.yaml
index 4b4306c..fb08297 100644
--- a/zuul.d/integrated-gate.yaml
+++ b/zuul.d/integrated-gate.yaml
@@ -8,6 +8,7 @@
Integration test that runs all tests.
Former name for this job was:
* legacy-periodic-tempest-dsvm-all-master
+ timeout: 10800
vars:
tox_envlist: all
tempest_test_regex: tempest
@@ -16,6 +17,13 @@
# TODO(gmann): Enable File injection tests once nova bug is fixed
# https://bugs.launchpad.net/nova/+bug/1882421
# ENABLE_FILE_INJECTION: true
+ run_tempest_cleanup: true
+ run_tempest_cleanup_resource_list: true
+ devstack_local_conf:
+ test-config:
+ $TEMPEST_CONFIG:
+ DEFAULT:
+ record_resources: true
- job:
name: tempest-ipv6-only
@@ -23,38 +31,12 @@
description: |
Integration test of IPv6-only deployments. This job runs
smoke and IPv6 relates tests only. Basic idea is to test
- whether OpenStack Services listen on IPv6 addrress or not.
+ whether OpenStack Services listen on IPv6 address or not.
timeout: 10800
vars:
tox_envlist: ipv6-only
- job:
- name: tempest-full
- parent: devstack-tempest
- description: |
- Base integration test with Neutron networking and py27.
- This job is supposed to run until stable/train setup only.
- If you are running it on stable/ussuri gate onwards for python2.7
- coverage then you need to do override-checkout with any stable
- branch less than or equal to stable/train.
- Former names for this job where:
- * legacy-tempest-dsvm-neutron-full
- * gate-tempest-dsvm-neutron-full-ubuntu-xenial
- vars:
- tox_envlist: full
- devstack_localrc:
- ENABLE_FILE_INJECTION: true
- ENABLE_VOLUME_MULTIATTACH: true
- USE_PYTHON3: False
- devstack_services:
- # NOTE(mriedem): Disable the cinder-backup service from tempest-full
- # since tempest-full is in the integrated-gate project template but
- # the backup tests do not really involve other services so they should
- # be run in some more cinder-specific job, especially because the
- # tests fail at a high rate (see bugs 1483434, 1813217, 1745168)
- c-bak: false
-
-- job:
name: tempest-extra-tests
parent: tempest-full-py3
description: |
@@ -62,6 +44,14 @@
tools/tempest-extra-tests-list.txt.
vars:
tox_envlist: extra-tests
+ run_tempest_cleanup: true
+ run_tempest_cleanup_resource_list: true
+ run_tempest_dry_cleanup: true
+ devstack_local_conf:
+ test-config:
+ $TEMPEST_CONFIG:
+ DEFAULT:
+ record_resources: true
- job:
name: tempest-full-py3
@@ -73,8 +63,12 @@
# this job definition is only for stable/xena onwards
# and separate job definition until stable/wallaby
branches:
- regex: ^stable/(stein|train|ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
+    # NOTE(sean-k-mooney): this job and its descendants frequently time out
+    # when run on rax-* providers with a timeout of 2 hours. Temporarily
+    # increase the timeout to 2.5 hours.
+ timeout: 9000
description: |
Base integration test with Neutron networking, horizon, swift enable,
and py3.
@@ -88,6 +82,8 @@
# end up 6 in upstream CI. Higher concurrency means high parallel
# requests to services and can cause more oom issues. To avoid the
# oom issue, setting the concurrency to 4 in this job.
+ # NOTE(sean-k-mooney): now that we use zswap we should be able to
+ # increase the concurrency to 6.
tempest_concurrency: 4
tox_envlist: integrated-full
devstack_localrc:
@@ -98,7 +94,7 @@
devstack_plugins:
neutron: https://opendev.org/openstack/neutron
devstack_services:
- # Enbale horizon so that we can run horizon test.
+ # Enable horizon so that we can run horizon test.
horizon: true
- job:
@@ -107,7 +103,7 @@
nodeset: devstack-single-node-centos-9-stream
# centos-9-stream is supported from yoga release onwards
branches:
- regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena)$
+ regex: ^.*/(victoria|wallaby|xena)$
negate: true
description: |
Base integration test on CentOS 9 stream
@@ -143,11 +139,18 @@
This job runs integration tests for compute. This is
subset of 'tempest-full-py3' job and run Nova, Neutron, Cinder (except backup tests)
and Glance related tests. This is meant to be run on Nova gate only.
+    # NOTE(sean-k-mooney): this job and its descendants frequently time out
+    # when run on rax-* providers; recent optimizations have reduced the
+    # runtime of the job but it still times out. Temporarily increase the
+    # timeout to 2.5 hours.
+ timeout: 9000
vars:
# NOTE(gmann): Default concurrency is higher (number of cpu -2) which
# end up 6 in upstream CI. Higher concurrency means high parallel
# requests to services and can cause more oom issues. To avoid the
# oom issue, setting the concurrency to 4 in this job.
+ # NOTE(sean-k-mooney): now that we use zswap we should be able to
+ # increase the concurrency to 6.
tempest_concurrency: 4
tox_envlist: integrated-compute
tempest_exclude_regex: ""
@@ -168,7 +171,7 @@
nodeset: devstack-single-node-centos-9-stream
# centos-9-stream is supported from yoga release onwards
branches:
- regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena)$
+ regex: ^.*/(victoria|wallaby|xena)$
negate: true
description: |
This job runs integration tests for compute. This is
@@ -231,7 +234,7 @@
tox_envlist: integrated-object-storage
devstack_localrc:
# NOTE(gmann): swift is not ready on python3 yet and devstack
- # install it on python2.7 only. But settting the USE_PYTHON3
+ # install it on python2.7 only. But setting the USE_PYTHON3
# for future once swift is ready on py3.
USE_PYTHON3: true
@@ -253,9 +256,9 @@
name: tempest-multinode-full-py3
parent: tempest-multinode-full-base
nodeset: openstack-two-node-jammy
- # This job runs on ubuntu Jammy and after stable/zed.
+ # This job runs on ubuntu Jammy and after unmaintained/zed.
branches:
- regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena|yoga|zed)$
+ regex: ^.*/(victoria|wallaby|xena|yoga|zed)$
negate: true
vars:
# NOTE(gmann): Default concurrency is higher (number of cpu -2) which
@@ -263,6 +266,7 @@
# requests to services and can cause more oom issues. To avoid the
# oom issue, setting the concurrency to 4 in this job.
tempest_concurrency: 4
+ tempest_set_src_dest_host: true
devstack_localrc:
USE_PYTHON3: true
devstack_plugins:
@@ -319,13 +323,14 @@
# till stable/wallaby, this job definition is only for stable/xena
# onwards and separate job definition until stable/wallaby
branches:
- regex: ^stable/(stein|train|ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
vars:
tox_envlist: slow
devstack_localrc:
CINDER_ENABLED_BACKENDS: lvm:lvmdriver-1,lvm:lvmdriver-2
ENABLE_VOLUME_MULTIATTACH: true
+ GLANCE_ENFORCE_IMAGE_FORMAT: false
devstack_plugins:
neutron: https://opendev.org/openstack/neutron
devstack_services:
@@ -354,18 +359,6 @@
TEMPEST_VOLUME_TYPE: volumev2
- job:
- name: tempest-centos8-stream-fips
- parent: devstack-tempest
- description: |
- Integration testing for a FIPS enabled Centos 8 system
- nodeset: devstack-single-node-centos-8-stream
- vars:
- tox_envlist: full
- configure_swap_size: 4096
- nslookup_target: 'opendev.org'
- enable_fips: True
-
-- job:
name: tempest-centos9-stream-fips
parent: devstack-tempest
description: |
@@ -398,15 +391,7 @@
This job runs the Tempest tests with scope and new defaults enabled.
vars:
devstack_localrc:
- # Enabeling the scope and new defaults for services.
- # NOTE: (gmann) We need to keep keystone scope check disable as
- # services (except ironic) does not support the system scope and
- # they need keystone to continue working with project scope. Until
- # Keystone policies are changed to work for both system as well as
- # for project scoped, we need to keep scope check disable for
- # keystone.
- # Nova, Glance, and Neutron have enabled the new defaults and scope
- # by default in devstack.
+ KEYSTONE_ENFORCE_SCOPE: true
CINDER_ENFORCE_SCOPE: true
PLACEMENT_ENFORCE_SCOPE: true
@@ -440,10 +425,16 @@
voting: false
branches:
- stable/2023.1
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+      # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+      # stable/2023.1 to stable/2024.1 upgrade and is supposed to run on
+      # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+      # on current master, 2025.1 (SLURP), grenade-skip-level-always is
+      # voting; it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
- tempest-integrated-networking
# Do not run it on ussuri until below issue is fixed
@@ -452,16 +443,22 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
gate:
jobs:
- grenade
- tempest-integrated-networking
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+      # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+      # stable/2023.1 to stable/2024.1 upgrade and is supposed to run on
+      # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+      # on current master, 2025.1 (SLURP), grenade-skip-level-always is
+      # voting; it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
@@ -469,7 +466,7 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
- project-template:
@@ -497,39 +494,42 @@
# (on SLURP as well as non SLURP release) so we are adding grenade-skip-level-always
# job in integrated gate and we do not need to update skip level job
# here until Nova change the decision.
- # This is added from 2023.2 relese cycle onwards so we need to use branch variant
+ # This is added from 2023.2 release cycle onwards so we need to use branch variant
# to make sure we do not run this job on older than 2023.2 gate.
- grenade-skip-level-always:
branches:
+ - ^.*/2023.2
+ - ^.*/2024.1
+ - ^.*/2024.2
- master
- tempest-integrated-compute
- # centos-8-stream is tested from wallaby -> yoga branches
- - tempest-integrated-compute-centos-8-stream:
- branches: ^stable/(wallaby|xena|yoga).*$
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
# and job is broken up to wallaby branch due to the issue
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
gate:
jobs:
- grenade-skip-level-always:
branches:
+ - ^.*/2023.2
+ - ^.*/2024.1
+ - ^.*/2024.2
- master
- tempest-integrated-compute
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
periodic-weekly:
jobs:
# centos-9-stream is tested from zed release onwards
- tempest-integrated-compute-centos-9-stream:
branches:
- regex: ^stable/(stein|train|ussuri|victoria|wallaby|xena|yoga)$
+ regex: ^.*/(victoria|wallaby|xena|yoga)$
negate: true
- project-template:
@@ -549,10 +549,16 @@
voting: false
branches:
- stable/2023.1
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+ # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+ # stable/2023.1 to stable/2024.1 upgrade. It is supposed to run on
+ # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+ # on current master 2025.1 (SLURP) grenade-skip-level-always is voting;
+ # it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
- tempest-integrated-placement
# Do not run it on ussuri until below issue is fixed
@@ -561,16 +567,22 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
gate:
jobs:
- grenade
- tempest-integrated-placement
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+ # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+ # stable/2023.1 to stable/2024.1 upgrade. It is supposed to run on
+ # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+ # on current master 2025.1 (SLURP) grenade-skip-level-always is voting;
+ # it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
# Do not run it on ussuri until below issue is fixed
# https://storyboard.openstack.org/#!/story/2010057
@@ -578,7 +590,7 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
- project-template:
@@ -598,10 +610,16 @@
voting: false
branches:
- stable/2023.1
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+ # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+ # stable/2023.1 to stable/2024.1 upgrade. It is supposed to run on
+ # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+ # on current master 2025.1 (SLURP) grenade-skip-level-always is voting;
+ # it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
@@ -610,15 +628,21 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
gate:
jobs:
- grenade
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+ # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+ # stable/2023.1 to stable/2024.1 upgrade. It is supposed to run on
+ # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+ # on current master 2025.1 (SLURP) grenade-skip-level-always is voting;
+ # it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
- tempest-integrated-storage
# Do not run it on ussuri until below issue is fixed
@@ -627,7 +651,7 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
- project-template:
@@ -640,10 +664,16 @@
check:
jobs:
- grenade
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+ # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+ # stable/2023.1 to stable/2024.1 upgrade. It is supposed to run on
+ # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+ # on current master 2025.1 (SLURP) grenade-skip-level-always is voting;
+ # it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
@@ -652,15 +682,21 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
gate:
jobs:
- grenade
- # on master (SLURP 2024.1) grenade-skip-level which test stable/2023.1
- # to stable/2024.1 upgrade is voting.
+ # on stable/2024.1 (SLURP) grenade-skip-level is voting; it tests the
+ # stable/2023.1 to stable/2024.1 upgrade. It is supposed to run on
+ # SLURP releases only.
- grenade-skip-level:
branches:
+ - ^.*/2024.1
+ # on current master 2025.1 (SLURP) grenade-skip-level-always is voting;
+ # it tests the stable/2024.1 to 2025.1 upgrade.
+ - grenade-skip-level-always:
+ branches:
- master
- tempest-integrated-object-storage
# Do not run it on ussuri until below issue is fixed
@@ -669,5 +705,5 @@
# described in https://review.opendev.org/872341
- openstacksdk-functional-devstack:
branches:
- regex: ^stable/(ussuri|victoria|wallaby)$
+ regex: ^.*/(victoria|wallaby)$
negate: true
diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml
index 3f32f9f..a7641a6 100644
--- a/zuul.d/project.yaml
+++ b/zuul.d/project.yaml
@@ -8,10 +8,10 @@
check:
jobs:
- openstack-tox-pep8
- - openstack-tox-py38
- openstack-tox-py39
- openstack-tox-py310
- openstack-tox-py311
+ - openstack-tox-py312
- tempest-full-py3:
# Define list of irrelevant files to use everywhere else
irrelevant-files: &tempest-irrelevant-files
@@ -37,9 +37,9 @@
# if things are working in latest and oldest it will work in between
# stable branches also. If anything is breaking we will be catching
# those in respective stable branch gate.
- - tempest-full-2023-2:
+ - tempest-full-2024-2:
irrelevant-files: *tempest-irrelevant-files
- - tempest-full-yoga:
+ - tempest-full-2023-2:
irrelevant-files: *tempest-irrelevant-files
- tempest-multinode-full-py3:
irrelevant-files: *tempest-irrelevant-files
@@ -103,15 +103,13 @@
- tempest-full-enforce-scope-new-defaults:
irrelevant-files: *tempest-irrelevant-files
- devstack-plugin-ceph-tempest-py3:
- # TODO(kopecmartin): make it voting once the below bug is fixed
- # https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1975648
- voting: false
+ timeout: 9000
irrelevant-files: *tempest-irrelevant-files
- neutron-ovs-grenade-multinode:
irrelevant-files: *tempest-irrelevant-files
- grenade:
irrelevant-files: *tempest-irrelevant-files
- - grenade-skip-level:
+ - grenade-skip-level-always:
irrelevant-files: *tempest-irrelevant-files
- neutron-ovs-tempest-dvr:
voting: false
@@ -128,10 +126,10 @@
gate:
jobs:
- openstack-tox-pep8
- - openstack-tox-py38
- openstack-tox-py39
- openstack-tox-py310
- openstack-tox-py311
+ - openstack-tox-py312
- tempest-slow-py3:
irrelevant-files: *tempest-irrelevant-files
- neutron-ovs-grenade-multinode:
@@ -142,7 +140,7 @@
irrelevant-files: *tempest-irrelevant-files
- grenade:
irrelevant-files: *tempest-irrelevant-files
- - grenade-skip-level:
+ - grenade-skip-level-always:
irrelevant-files: *tempest-irrelevant-files
- tempest-ipv6-only:
irrelevant-files: *tempest-irrelevant-files-3
@@ -169,9 +167,6 @@
irrelevant-files: *tempest-irrelevant-files
- tempest-all-rbac-old-defaults
- tempest-full-parallel
- - tempest-full-zed-extra-tests
- - tempest-full-yoga-extra-tests
- - tempest-full-enforce-scope-new-defaults-zed
- neutron-ovs-tempest-dvr-ha-multinode-full:
irrelevant-files: *tempest-irrelevant-files
- nova-tempest-v2-api:
@@ -190,18 +185,15 @@
irrelevant-files: *tempest-irrelevant-files
periodic-stable:
jobs:
+ - tempest-full-2024-2
+ - tempest-full-2024-1
- tempest-full-2023-2
- - tempest-full-2023-1
- - tempest-full-zed
- - tempest-full-yoga
+ - tempest-slow-2024-2
+ - tempest-slow-2024-1
- tempest-slow-2023-2
- - tempest-slow-2023-1
- - tempest-slow-zed
- - tempest-slow-yoga
+ - tempest-full-2024-2-extra-tests
+ - tempest-full-2024-1-extra-tests
- tempest-full-2023-2-extra-tests
- - tempest-full-2023-1-extra-tests
- - tempest-full-zed-extra-tests
- - tempest-full-yoga-extra-tests
periodic:
jobs:
- tempest-all
@@ -213,4 +205,3 @@
- tempest-centos9-stream-fips
- tempest-full-centos-9-stream
- tempest-full-test-account-no-admin-py3
- - tempest-full-enforce-scope-new-defaults-zed
diff --git a/zuul.d/stable-jobs.yaml b/zuul.d/stable-jobs.yaml
index 2fdc2af..efa771e 100644
--- a/zuul.d/stable-jobs.yaml
+++ b/zuul.d/stable-jobs.yaml
@@ -1,27 +1,33 @@
# NOTE(gmann): This file includes all stable release jobs definition.
- job:
+ name: tempest-full-2024-2
+ parent: tempest-full-py3
+ nodeset: openstack-single-node-jammy
+ override-checkout: stable/2024.2
+
+- job:
+ name: tempest-full-2024-1
+ parent: tempest-full-py3
+ nodeset: openstack-single-node-jammy
+ override-checkout: stable/2024.1
+
+- job:
name: tempest-full-2023-2
parent: tempest-full-py3
nodeset: openstack-single-node-jammy
override-checkout: stable/2023.2
- job:
- name: tempest-full-2023-1
- parent: tempest-full-py3
+ name: tempest-full-2024-2-extra-tests
+ parent: tempest-extra-tests
nodeset: openstack-single-node-jammy
- override-checkout: stable/2023.1
+ override-checkout: stable/2024.2
- job:
- name: tempest-full-zed
- parent: tempest-full-py3
- nodeset: openstack-single-node-focal
- override-checkout: stable/zed
-
-- job:
- name: tempest-full-yoga
- parent: tempest-full-py3
- nodeset: openstack-single-node-focal
- override-checkout: stable/yoga
+ name: tempest-full-2024-1-extra-tests
+ parent: tempest-extra-tests
+ nodeset: openstack-single-node-jammy
+ override-checkout: stable/2024.1
- job:
name: tempest-full-2023-2-extra-tests
@@ -30,22 +36,16 @@
override-checkout: stable/2023.2
- job:
- name: tempest-full-2023-1-extra-tests
- parent: tempest-extra-tests
- nodeset: openstack-single-node-jammy
- override-checkout: stable/2023.1
+ name: tempest-slow-2024-2
+ parent: tempest-slow-py3
+ nodeset: openstack-two-node-jammy
+ override-checkout: stable/2024.2
- job:
- name: tempest-full-zed-extra-tests
- parent: tempest-extra-tests
- nodeset: openstack-single-node-focal
- override-checkout: stable/zed
-
-- job:
- name: tempest-full-yoga-extra-tests
- parent: tempest-extra-tests
- nodeset: openstack-single-node-focal
- override-checkout: stable/yoga
+ name: tempest-slow-2024-1
+ parent: tempest-slow-py3
+ nodeset: openstack-two-node-jammy
+ override-checkout: stable/2024.1
- job:
name: tempest-slow-2023-2
@@ -54,38 +54,14 @@
override-checkout: stable/2023.2
- job:
- name: tempest-slow-2023-1
- parent: tempest-slow-py3
- nodeset: openstack-two-node-jammy
- override-checkout: stable/2023.1
-
-- job:
- name: tempest-full-enforce-scope-new-defaults-zed
- parent: tempest-full-enforce-scope-new-defaults
- nodeset: openstack-single-node-focal
- override-checkout: stable/zed
-
-- job:
- name: tempest-slow-zed
- parent: tempest-slow-py3
- nodeset: openstack-two-node-focal
- override-checkout: stable/zed
-
-- job:
- name: tempest-slow-yoga
- parent: tempest-slow-py3
- nodeset: openstack-two-node-focal
- override-checkout: stable/yoga
-
-- job:
name: tempest-full-py3
parent: devstack-tempest
# This job version is to use the 'full' tox env which
- # is available for stable/ussuri to stable/wallaby also.
+ # is available for unmaintained/victoria to unmaintained/xena also.
branches:
- - stable/ussuri
- - stable/victoria
- - stable/wallaby
+ - ^.*/victoria
+ - ^.*/wallaby
+ - ^.*/xena
description: |
Base integration test with Neutron networking, horizon, swift enable,
and py3.
@@ -96,6 +72,10 @@
- openstack/horizon
vars:
tox_envlist: full
+ tempest_exclude_regex: "\
+ (DHCPAgentSchedulersTestJSON)|\
+ (AttachVolumeMultiAttachTest)|\
+ (UpdateMultiattachVolumeNegativeTest)"
devstack_localrc:
USE_PYTHON3: true
FORCE_CONFIG_DRIVE: true
@@ -104,117 +84,47 @@
devstack_plugins:
neutron: https://opendev.org/openstack/neutron
devstack_services:
- # Enbale horizon so that we can run horizon test.
+ # Enable horizon so that we can run horizon test.
horizon: true
- job:
- name: tempest-full-py3
- parent: devstack-tempest
- # This job version is with swift disabled on py3
- # as swift was not ready on py3 until stable/train.
+ name: tempest-multinode-full-py3
+ parent: tempest-multinode-full
+ nodeset: openstack-two-node-focal
+ # This job runs on Focal and is supposed to run until unmaintained/zed.
branches:
- - stable/stein
- - stable/train
- description: |
- Base integration test with Neutron networking, swift disabled, and py3.
- Former names for this job where:
- * legacy-tempest-dsvm-py35
- * gate-tempest-dsvm-py35
- required-projects:
- - openstack/horizon
+ - ^.*/victoria
+ - ^.*/wallaby
+ - ^.*/xena
+ - ^.*/yoga
+ - ^.*/zed
+ vars:
+ devstack_localrc:
+ USE_PYTHON3: true
+ devstack_plugins:
+ neutron: https://opendev.org/openstack/neutron
+ devstack_services:
+ neutron-trunk: true
+ group-vars:
+ subnode:
+ devstack_localrc:
+ USE_PYTHON3: true
+
+- job:
+ name: tempest-multinode-full
+ parent: tempest-multinode-full-base
+ nodeset: openstack-two-node-focal
+ # This job runs on Focal and on Python 2. This is for unmaintained/victoria to unmaintained/xena.
+ branches:
+ - ^.*/victoria
+ - ^.*/wallaby
+ - ^.*/xena
vars:
tox_envlist: full
- devstack_localrc:
- USE_PYTHON3: true
- FORCE_CONFIG_DRIVE: true
- ENABLE_VOLUME_MULTIATTACH: true
- GLANCE_USE_IMPORT_WORKFLOW: True
- devstack_plugins:
- neutron: https://opendev.org/openstack/neutron
- devstack_local_conf:
- post-config:
- "/$NEUTRON_CORE_PLUGIN_CONF":
- ovs:
- bridge_mappings: public:br-ex
- resource_provider_bandwidths: br-ex:1000000:1000000
- test-config:
- $TEMPEST_CONFIG:
- network-feature-enabled:
- qos_placement_physnet: public
- devstack_services:
- # Enbale horizon so that we can run horizon test.
- horizon: true
- s-account: false
- s-container: false
- s-object: false
- s-proxy: false
- # without Swift, c-bak cannot run (in the Gate at least)
- # NOTE(mriedem): Disable the cinder-backup service from
- # tempest-full-py3 since tempest-full-py3 is in the integrated-gate-py3
- # project template but the backup tests do not really involve other
- # services so they should be run in some more cinder-specific job,
- # especially because the tests fail at a high rate (see bugs 1483434,
- # 1813217, 1745168)
- c-bak: false
- neutron-placement: true
- neutron-qos: true
-
-- job:
- name: tempest-multinode-full-py3
- parent: tempest-multinode-full
- nodeset: openstack-two-node-bionic
- # This job runs on Bionic.
- branches:
- - stable/stein
- - stable/train
- - stable/ussuri
- vars:
- devstack_localrc:
- USE_PYTHON3: true
- devstack_plugins:
- neutron: https://opendev.org/openstack/neutron
- devstack_services:
- neutron-trunk: true
- group-vars:
- subnode:
- devstack_localrc:
- USE_PYTHON3: true
-
-- job:
- name: tempest-multinode-full-py3
- parent: tempest-multinode-full
- nodeset: openstack-two-node-focal
- # This job runs on Focal and supposed to run until stable/zed.
- branches:
- - stable/victoria
- - stable/wallaby
- - stable/xena
- - stable/yoga
- - stable/zed
- vars:
- devstack_localrc:
- USE_PYTHON3: true
- devstack_plugins:
- neutron: https://opendev.org/openstack/neutron
- devstack_services:
- neutron-trunk: true
- group-vars:
- subnode:
- devstack_localrc:
- USE_PYTHON3: true
-
-- job:
- name: tempest-multinode-full
- parent: tempest-multinode-full-base
- nodeset: openstack-two-node-focal
- # This job runs on Focal and on python2. This is for stable/victoria to stable/zed.
- branches:
- - stable/victoria
- - stable/wallaby
- - stable/xena
- - stable/yoga
- - stable/zed
- vars:
+ tempest_exclude_regex: "\
+ (DHCPAgentSchedulersTestJSON)|\
+ (AttachVolumeMultiAttachTest)|\
+ (UpdateMultiattachVolumeNegativeTest)"
devstack_localrc:
USE_PYTHON3: False
group-vars:
@@ -225,14 +135,11 @@
- job:
name: tempest-multinode-full
parent: tempest-multinode-full-base
- nodeset: openstack-two-node-bionic
- # This job runs on Bionic and on python2. This is for stable/stein and stable/train.
- # This job is prepared to make sure all stable branches from stable/stein till stable/train
- # will keep running on bionic. This can be removed once stable/train is EOL.
+ nodeset: openstack-two-node-focal
+ # This job runs on Focal and on Python 2. This is for unmaintained/yoga to unmaintained/zed.
branches:
- - stable/stein
- - stable/train
- - stable/ussuri
+ - ^.*/yoga
+ - ^.*/zed
vars:
devstack_localrc:
USE_PYTHON3: False
@@ -244,35 +151,11 @@
- job:
name: tempest-slow-py3
parent: tempest-slow
- # This job version is with swift disabled on py3
- # as swift was not ready on py3 until stable/train.
- branches:
- - stable/stein
- - stable/train
- vars:
- devstack_localrc:
- USE_PYTHON3: true
- devstack_services:
- s-account: false
- s-container: false
- s-object: false
- s-proxy: false
- # without Swift, c-bak cannot run (in the Gate at least)
- c-bak: false
- group-vars:
- subnode:
- devstack_localrc:
- USE_PYTHON3: true
-
-- job:
- name: tempest-slow-py3
- parent: tempest-slow
# This job version is to use the 'slow-serial' tox env for
# the stable/ussuri to stable/wallaby testing.
branches:
- - stable/ussuri
- - stable/victoria
- - stable/wallaby
+ - ^.*/victoria
+ - ^.*/wallaby
vars:
tox_envlist: slow-serial
@@ -287,47 +170,6 @@
# This job is not used after stable/xena and can be
# removed once stable/xena is EOL.
branches:
- - stable/stein
- - stable/train
- - stable/ussuri
- - stable/victoria
- - stable/wallaby
- - stable/xena
-
-- job:
- name: tempest-integrated-compute-centos-8-stream
- parent: tempest-integrated-compute
- # TODO(gmann): Make this job non voting until bug#1957941 if fixed.
- voting: false
- nodeset: devstack-single-node-centos-8-stream
- branches:
- - stable/wallaby
- - stable/xena
- - stable/yoga
- description: |
- This job runs integration tests for compute. This is
- subset of 'tempest-full-py3' job and run Nova, Neutron, Cinder (except backup tests)
- and Glance related tests. This is meant to be run on Nova gate only.
- This version of the job also uses CentOS 8 stream.
- vars:
- # Required until bug/1949606 is resolved when using libvirt and QEMU
- # >=5.0.0 with a [libvirt]virt_type of qemu (TCG).
- configure_swap_size: 4096
-
-- job:
- name: tempest-full-py3-centos-8-stream
- parent: tempest-full-py3
- # TODO(gmann): Make this job non voting until bug#1957941 if fixed.
- voting: false
- branches:
- - stable/wallaby
- - stable/xena
- - stable/yoga
- nodeset: devstack-single-node-centos-8-stream
- description: |
- Base integration test with Neutron networking and py36 running
- on CentOS 8 stream
- vars:
- # Required until bug/1949606 is resolved when using libvirt and QEMU
- # >=5.0.0 with a [libvirt]virt_type of qemu (TCG).
- configure_swap_size: 4096
+ - ^.*/victoria
+ - ^.*/wallaby
+ - ^.*/xena
diff --git a/zuul.d/tempest-specific.yaml b/zuul.d/tempest-specific.yaml
index 10490b4..296682e 100644
--- a/zuul.d/tempest-specific.yaml
+++ b/zuul.d/tempest-specific.yaml
@@ -58,6 +58,8 @@
Base integration test with Neutron networking, IPv6 and py3.
vars:
tox_envlist: full
+ run_tempest_cleanup: true
+ run_tempest_cleanup_prefix: true
devstack_localrc:
USE_PYTHON3: true
FORCE_CONFIG_DRIVE: true