Merge "Create port with port_vnic_type and port_profile from config"
diff --git a/.zuul.yaml b/.zuul.yaml
index fd3aa2a..1cf67b0 100644
--- a/.zuul.yaml
+++ b/.zuul.yaml
@@ -101,7 +101,7 @@
- master
description: |
Base multinode integration test with Neutron networking and py27.
- Former names for this job where:
+ Former names for this job were:
* neutron-tempest-multinode-full
* legacy-tempest-dsvm-neutron-multinode-full
* gate-tempest-dsvm-neutron-multinode-full-ubuntu-xenial-nv
@@ -142,21 +142,26 @@
Base integration test with Neutron networking and py36.
voting: false
-# TODO(gmann): needs to migrate this to zuulv3
- job:
name: tempest-scenario-all
- parent: legacy-dsvm-base-multinode
+ parent: tempest-multinode-full
+ branches:
+ - master
description: |
- This job will run all scenario tests including slow tests
- with lvm multibackend setup. This job will not run any API tests.
- run: playbooks/tempest-scenario-multinode-lvm-multibackend/run.yaml
- post-run: playbooks/tempest-scenario-multinode-lvm-multibackend/post.yaml
+ This multinode integration job will run all scenario tests including slow
+ tests with lvm multibackend setup. This job will not run any API tests.
+
+ Former names for this job were:
+ * legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
+ * tempest-scenario-multinode-lvm-multibackend
timeout: 10800
- required-projects:
- - openstack-infra/devstack-gate
- - openstack/neutron
- - openstack/tempest
- nodeset: ubuntu-xenial-2-node
+ vars:
+ # 'all' is used for applying the custom regex below.
+ tox_envlist: all
+ devstack_localrc:
+ CINDER_ENABLED_BACKENDS: lvm:lvmdriver-1,lvm:lvmdriver-2
+ tempest_concurrency: 2
+ tempest_test_regex: (^tempest\.(scenario))
- job:
name: tempest-full-queens
@@ -277,7 +282,16 @@
- ^playbooks/
- ^roles/
- ^.zuul.yaml$
- - nova-multiattach
+ - nova-multiattach:
+ irrelevant-files:
+ - ^(test-|)requirements.txt$
+ - ^.*\.rst$
+ - ^doc/.*$
+ - ^etc/.*$
+ - ^releasenotes/.*$
+ - ^setup.cfg$
+ - ^tempest/hacking/.*$
+ - ^tempest/tests/.*$
- tempest-full-parallel:
irrelevant-files:
- ^(test-|)requirements.txt$
@@ -373,7 +387,16 @@
- ^tempest/tests/.*$
gate:
jobs:
- - nova-multiattach
+ - nova-multiattach:
+ irrelevant-files:
+ - ^(test-|)requirements.txt$
+ - ^.*\.rst$
+ - ^doc/.*$
+ - ^etc/.*$
+ - ^releasenotes/.*$
+ - ^setup.cfg$
+ - ^tempest/hacking/.*$
+ - ^tempest/tests/.*$
- tempest-scenario-all:
irrelevant-files:
- ^(test-|)requirements.txt$
diff --git a/REVIEWING.rst b/REVIEWING.rst
index a880181..8a1e152 100644
--- a/REVIEWING.rst
+++ b/REVIEWING.rst
@@ -99,6 +99,39 @@
scenario tests this is up to the reviewers discretion whether a docstring is
required or not.
+
+Test Removal and Refactoring
+----------------------------
+Make sure that any test that is renamed, relocated (e.g. moved to another
+class), or removed does not belong to the `interop`_ testing suite -- which
+includes a select suite of Tempest tests for the purposes of validating that
+OpenStack vendor clouds are interoperable -- or a project's `whitelist`_ or
+`blacklist`_ files.
+
+It is of critical importance that no interop, whitelist or blacklist test
+reference be broken by a patch set introduced to Tempest that renames,
+relocates or removes a referenced test.
+
+Please check for existing code which references Tempest tests with:
+http://codesearch.openstack.org/
+
+Interop
+^^^^^^^
+Make sure that modifications to an `interop`_ test are backwards-compatible.
+This means that code modifications to tests should not undermine the quality of
+the validation currently performed by the test or significantly alter the
+behavior of the test.
+
+Removal
+^^^^^^^
+Reference the :ref:`test-removal` guidelines for understanding best practices
+associated with test removal.
+
+.. _interop: https://www.openstack.org/brand/interop
+.. _whitelist: https://docs.openstack.org/tempest/latest/run.html#test-selection
+.. _blacklist: https://docs.openstack.org/tempest/latest/run.html#test-selection
+
+
Release Notes
-------------
Release notes are how we indicate to users and other consumers of Tempest what
@@ -113,16 +146,18 @@
.. _reno: https://docs.openstack.org/reno/latest/
+
Deprecated Code
---------------
Sometimes we have some bugs in deprecated code. Basically, we leave it. Because
we don't need to maintain it. However, if the bug is critical, we might need to
fix it. When it will happen, we will deal with it on a case-by-case basis.
+
When to approve
---------------
* Every patch needs two +2s before being approved.
-* Its ok to hold off on an approval until a subject matter expert reviews it
+* It's ok to hold off on an approval until a subject matter expert reviews it
* If a patch has already been approved but requires a trivial rebase to merge,
you do not have to wait for a second +2, since the patch has already had
two +2s.
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index d0d7320..2e5f706 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -172,7 +172,7 @@
resize test).
Using a smaller flavor is generally recommended. When larger flavors are used,
-the extra time required to bring up servers will likely affect total run time
+the extra time required to bring up servers will likely affect the total run time
and probably require tweaking timeout values to ensure tests have ample time to
finish.
@@ -207,7 +207,7 @@
The behavior of these options is a bit convoluted (which will likely be fixed in
future versions). You first need to specify ``img_dir``, which is the directory
-in which Tempest will look for the image files. First it will check if the
+in which Tempest will look for the image files. First, it will check if the
filename set for ``img_file`` could be found in ``img_dir``. If it is found then
the ``img_container_format`` and ``img_disk_format`` options are used to upload
that image to glance. However, if it is not found, Tempest will look for the
@@ -239,7 +239,7 @@
""""""""""""""""""""""""""""""""""
When Tempest creates servers for testing, some tests require being able to
connect those servers. Depending on the configuration of the cloud, the methods
-for doing this can be different. In certain configurations it is required to
+for doing this can be different. In certain configurations, it is required to
specify a single network with server create calls. Accordingly, Tempest provides
a few different methods for providing this information in configuration to try
and ensure that regardless of the cloud's configuration it'll still be able to
@@ -297,10 +297,10 @@
''''''''''''''''''''''''
With dynamic credentials enabled and using nova-network, your only option for
configuration is to either set a fixed network name or not. However, in most
-cases it shouldn't matter because nova-network should have no problem booting a
+cases, it shouldn't matter because nova-network should have no problem booting a
server with multiple networks. If this is not the case for your cloud then using
an accounts file is recommended because it provides the necessary flexibility to
-describe your configuration. Dynamic credentials is not able to dynamically
+describe your configuration. Dynamic credentials are not able to dynamically
allocate things as necessary if Neutron is not enabled.
With Neutron and dynamic credentials enabled there should not be any additional
@@ -352,7 +352,7 @@
OpenStack is really a constellation of several different projects which
are running together to create a cloud. However which projects you're running
is not set in stone, and which services are running is up to the deployer.
-Tempest however needs to know which services are available so it can figure
+Tempest, however, needs to know which services are available so it can figure
out which tests it is able to run and certain setup steps which differ based
on the available services.
@@ -390,8 +390,8 @@
.. note::
- Tempest does not serve all kinds of fancy URLs in the service catalog. The
- service catalog should be in a standard format (which is going to be
+ Tempest does not serve all kinds of fancy URLs in the service catalog.
+ The service catalog should be in a standard format (which is going to be
standardized at the Keystone level).
Tempest expects URLs in the Service catalog in the following format:
@@ -413,10 +413,10 @@
certain operations and features aren't supported depending on the configuration.
These features may or may not be discoverable from the API so the burden is
often on the user to figure out what is supported by the cloud they're talking
-to. Besides the obvious interoperability issues with this it also leaves
+to. Besides the obvious interoperability issues with this, it also leaves
Tempest in an interesting situation trying to figure out which tests are
expected to work. However, Tempest tests do not rely on dynamic API discovery
-for a feature (assuming one exists). Instead Tempest has to be explicitly
+for a feature (assuming one exists). Instead, Tempest has to be explicitly
configured as to which optional features are enabled. This is in order to
prevent bugs in the discovery mechanisms from masking failures.
@@ -432,8 +432,8 @@
^^^^^^^^^^^^^^
The service feature-enabled sections often contain an ``api-extensions`` option
(or in the case of Swift a ``discoverable_apis`` option). This is used to tell
-Tempest which api extensions (or configurable middleware) is used in your
+Tempest which API extensions (or configurable middleware) is used in your
deployment. It has two valid config states: either it contains a single value
-``all`` (which is the default) which means that every api extension is assumed
+``all`` (which is the default) which means that every API extension is assumed
to be enabled, or it is set to a list of each individual extension that is
enabled for that service.
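+
+For example, a deployment that only enables a couple of Neutron extensions
+might set something like the following (the section name is real, the values
+are purely illustrative)::
+
+    [network-feature-enabled]
+    api_extensions = router,quotas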
diff --git a/doc/source/library.rst b/doc/source/library.rst
index 14415ae..6a12c45 100644
--- a/doc/source/library.rst
+++ b/doc/source/library.rst
@@ -4,12 +4,12 @@
=============================
Tempest provides a stable library interface that provides external tools or
-test suites an interface for reusing pieces of tempest code. Any public
-interface that lives in tempest/lib in the tempest repo is treated as a stable
+test suites an interface for reusing pieces of Tempest code. Any public
+interface that lives in tempest/lib in the Tempest repo is treated as a stable
public interface and it should be safe to external consume that. Every effort
goes into maintaining backwards compatibility with any change.
The library is self contained and doesn't have any dependency on
-other tempest internals outside of lib (including no usage of tempest
+other Tempest internals outside of lib (including no usage of Tempest
configuration).
Stability
@@ -32,7 +32,7 @@
Making changes
''''''''''''''
When making changes to tempest/lib you have to be conscious of the effect of
-any changes on external consumers. If your proposed changeset will change the
+any changes on external consumers. If your proposed change set will change the
default behaviour of any interface, or make something which previously worked
not after your change, then it is not acceptable. Every effort needs to go into
preserving backwards compatibility in changes.
@@ -40,8 +40,8 @@
Reviewing
'''''''''
When reviewing a proposed change to tempest/lib code we need to be careful to
-ensure that we don't break backwards compatibility. For patches that change
-existing interfaces we have to be careful to make sure we don't break any
+ensure that we don't break backward compatibility. For patches that change
+existing interfaces, we have to be careful to make sure we don't break any
external consumers. Some common red flags are:
* a change to an existing API requires a change outside the library directory
@@ -52,7 +52,7 @@
'''''''
When adding a new interface to the library we need to at a minimum have unit
test coverage. A proposed change to add an interface to tempest/lib that
-doesn't have unit tests shouldn't be accepted. Ideally these unit tests will
+doesn't have unit tests shouldn't be accepted. Ideally, these unit tests will
provide sufficient coverage to ensure a stable interface moving forward.
Current Library APIs
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
index 6f6621d..9958792 100644
--- a/doc/source/plugin.rst
+++ b/doc/source/plugin.rst
@@ -5,24 +5,24 @@
=============================
Tempest has an external test plugin interface which enables anyone to integrate
-an external test suite as part of a tempest run. This will let any project
-leverage being run with the rest of the tempest suite while not requiring the
-tests live in the tempest tree.
+an external test suite as part of a Tempest run. This will let any project
+leverage being run with the rest of the Tempest suite while not requiring the
+tests live in the Tempest tree.
Creating a plugin
=================
Creating a plugin is fairly straightforward and doesn't require much additional
effort on top of creating a test suite using tempest.lib. One thing to note with
-doing this is that the interfaces exposed by tempest are not considered stable
-(with the exception of configuration variables which ever effort goes into
-ensuring backwards compatibility). You should not need to import anything from
-tempest itself except where explicitly noted.
+doing this is that the interfaces exposed by Tempest are not considered stable
+(with the exception of configuration variables, for which every effort goes into
+ensuring backward compatibility). You should not need to import anything from
+Tempest itself except where explicitly noted.
Stable Tempest APIs plugins may use
-----------------------------------
-As noted above, several tempest APIs are acceptable to use from plugins, while
+As noted above, several Tempest APIs are acceptable to use from plugins, while
others are not. A list of stable APIs available to plugins is provided below:
* tempest.lib.*
@@ -32,7 +32,7 @@
* tempest.clients
* tempest.test
-If there is an interface from tempest that you need to rely on in your plugin
+If there is an interface from Tempest that you need to rely on in your plugin
which is not listed above, it likely needs to be migrated to tempest.lib. In
that situation, file a bug, push a migration patch, etc. to expedite providing
the interface in a reliable manner.
@@ -62,7 +62,7 @@
-----------
Once you've created your plugin class you need to add an entry point to your
-project to enable tempest to find the plugin. The entry point must be added
+project to enable Tempest to find the plugin. The entry point must be added
to the "tempest.test_plugins" namespace.
If you are using pbr this is fairly straightforward, in the setup.cfg just add
@@ -77,9 +77,9 @@
Standalone Plugin vs In-repo Plugin
-----------------------------------
-Since all that's required for a plugin to be detected by tempest is a valid
+Since all that's required for a plugin to be detected by Tempest is a valid
setuptools entry point in the proper namespace there is no difference from the
-tempest perspective on either creating a separate python package to
+Tempest perspective on either creating a separate python package to
house the plugin or adding the code to an existing python project. However,
there are tradeoffs to consider when deciding which approach to take when
creating a new plugin.
@@ -91,9 +91,9 @@
single version of the test code across project release boundaries (see the
`Branchless Tempest Spec`_ for more details on this). It also greatly
simplifies the install time story for external users. Instead of having to
-install the right version of a project in the same python namespace as tempest
+install the right version of a project in the same python namespace as Tempest
they simply need to pip install the plugin in that namespace. It also means
-that users don't have to worry about inadvertently installing a tempest plugin
+that users don't have to worry about inadvertently installing a Tempest plugin
when they install another package.
.. _Branchless Tempest Spec: http://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/branchless-tempest.html
@@ -108,9 +108,9 @@
Plugin Class
============
-To provide tempest with all the required information it needs to be able to run
-your plugin you need to create a plugin class which tempest will load and call
-to get information when it needs. To simplify creating this tempest provides an
+To provide Tempest with all the required information it needs to run your
+plugin, you need to create a plugin class which Tempest will load and call to
+get information when it needs it. To simplify this, Tempest provides an
abstract class that should be used as the parent for your plugin. To use this
you would do something like the following:
@@ -147,7 +147,7 @@
services/
client.py
-That will mirror what people expect from tempest. The file
+That will mirror what people expect from Tempest. The file
* **config.py**: contains any plugin specific configuration variables
* **plugin.py**: contains the plugin class used for the entry point
@@ -156,14 +156,14 @@
* **services**: where the plugin specific service clients are
Additionally, when you're creating the plugin you likely want to follow all
-of the tempest developer and reviewer documentation to ensure that the tests
-being added in the plugin act and behave like the rest of tempest.
+of the Tempest developer and reviewer documentation to ensure that the tests
+being added in the plugin act and behave like the rest of Tempest.
Dealing with configuration options
----------------------------------
-Historically Tempest didn't provide external guarantees on its configuration
-options. However, with the introduction of the plugin interface this is no
+Historically, Tempest didn't provide external guarantees on its configuration
+options. However, with the introduction of the plugin interface, this is no
longer the case. An external plugin can rely on using any configuration option
coming from Tempest, there will be at least a full deprecation cycle for any
option before it's removed. However, just the options provided by Tempest
@@ -171,7 +171,7 @@
configuration options you should use the ``register_opts`` and
``get_opt_lists`` methods to pass them to Tempest when the plugin is loaded.
When adding configuration options the ``register_opts`` method gets passed the
-CONF object from tempest. This enables the plugin to add options to both
+CONF object from Tempest. This enables the plugin to add options to both
existing sections and also create new configuration sections for new options.
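+
+For illustration, a plugin's option registration could look roughly like the
+following sketch (the group and option names are hypothetical)::
+
+    from oslo_config import cfg
+
+    my_group = cfg.OptGroup(name='my_plugin', title='My plugin options')
+    MyPluginOpts = [cfg.BoolOpt('feature_enabled', default=True)]
+
+    class MyPlugin(plugins.TempestPlugin):
+        def register_opts(self, conf):
+            conf.register_group(my_group)
+            conf.register_opts(MyPluginOpts, group=my_group)
+
+        def get_opt_lists(self):
+            return [(my_group.name, MyPluginOpts)]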
Service Clients
@@ -325,23 +325,23 @@
Tempest will automatically discover any installed plugins when it is run. So by
just installing the python packages which contain your plugin you'll be using
-them with tempest, nothing else is really required.
+them with Tempest, nothing else is really required.
However, you should take care when installing plugins. By their very nature
-there are no guarantees when running tempest with plugins enabled about the
+there are no guarantees when running Tempest with plugins enabled about the
quality of the plugin. Additionally, while there is no limitation on running
-with multiple plugins it's worth noting that poorly written plugins might not
+with multiple plugins, it's worth noting that poorly written plugins might not
properly isolate their tests which could cause unexpected cross interactions
between plugins.
Notes for using plugins with virtualenvs
----------------------------------------
-When using a tempest inside a virtualenv (like when running under tox) you have
+When using Tempest inside a virtualenv (like when running under tox) you have
to ensure that the package that contains your plugin is either installed in the
venv too or that you have system site-packages enabled. The virtualenv will
-isolate the tempest install from the rest of your system so just installing the
-plugin package on your system and then running tempest inside a venv will not
+isolate the Tempest install from the rest of your system so just installing the
+plugin package on your system and then running Tempest inside a venv will not
work.
Tempest also exposes a tox job, all-plugin, which will setup a tox virtualenv
diff --git a/doc/source/test_removal.rst b/doc/source/test_removal.rst
index b22503b..e249bdd 100644
--- a/doc/source/test_removal.rst
+++ b/doc/source/test_removal.rst
@@ -1,3 +1,5 @@
+.. _test-removal:
+
Tempest Test Removal Procedure
==============================
@@ -91,7 +93,7 @@
#. paste the output table with numbers and the mysql command you ran to
generate it into the etherpad.
-Eventually a CLI interface will be created to make that a bit more friendly.
+Eventually, a CLI interface will be created to make that a bit more friendly.
Also a dashboard is in the works so we don't need to manually run the command.
The intent of the 2nd prong is to verify that moving the test into a project
@@ -102,21 +104,24 @@
blocking it from landing) and having the testing run in Tempest still has
value.
-However for the 3rd prong verification is a bit more subjective. The original
+However, for the 3rd prong, verification is a bit more subjective. The original
intent of this prong was mostly for refstack/defcore and also for things that
running on the stable branches. We don't want to remove any tests if that
would break our API consistency checking between releases, or something that
defcore/refstack is depending on being in Tempest. It's worth pointing out
-that if a test is used in defcore as part of interop testing then it will
+that if a test is used in `defcore`_ as part of `interop`_ testing then it will
probably have continuing value being in Tempest as part of the
integration/integrated tests in general. This is one area where some overlap
is expected between testing in projects and Tempest, which is not a bad thing.
+.. _defcore: https://wiki.openstack.org/wiki/Governance/InteropWG
+.. _interop: https://www.openstack.org/brand/interop
+
Discussing the 3rd prong
""""""""""""""""""""""""
There are 2 approaches to addressing the 3rd prong. Either it can be raised
-during a qa meeting during the Tempest discussion. Please put it on the agenda
+during a QA meeting during the Tempest discussion. Please put it on the agenda
well ahead of the scheduled meeting. Since the meeting time will be well known
ahead of time anyone who depends on the tests will have ample time beforehand
to outline any concerns on the before the meeting. To give ample time for
@@ -133,7 +138,7 @@
Exceptions to this procedure
----------------------------
-For the most part all Tempest test removals have to go through this procedure
+For the most part, all Tempest test removals have to go through this procedure
there are a couple of exceptions though:
#. The class of testing has been decided to be outside the scope of Tempest.
@@ -145,7 +150,7 @@
Such tests cannot live in Tempest because of the branchless nature of
Tempest. Such tests must still honor `prong #3`_.
-For the first exception type the only types of testing in tree which have been
+For the first exception type, the only types of testing in the tree which have been
declared out of scope at this point are:
* The CLI tests (which should be completely removed at this point)
@@ -154,7 +159,7 @@
* XML API Tests (which should be completely removed at this point)
* EC2 API/boto tests (which should be completely removed at this point)
-For tests that fit into this category the only criteria for removal is that
+For tests that fit into this category, the only criterion for removal is that
there is equivalent testing elsewhere.
Tempest Scope
@@ -187,7 +192,7 @@
This is because tests would not be able to know or control which API response
to expect, and thus would not be able to enforce a specific behavior.
-If a test exists in Tempest that would meet this criteria as consequence of a
-change, the test must be removed according to the procedure discussed into
+If a test exists in Tempest that would meet these criteria as a consequence of a
+change, the test must be removed according to the procedure discussed in
this document. The API change should not be merged until all conditions
required for test removal can be met.
diff --git a/doc/source/write_tests.rst b/doc/source/write_tests.rst
index fff2405..0a29b7b 100644
--- a/doc/source/write_tests.rst
+++ b/doc/source/write_tests.rst
@@ -4,7 +4,7 @@
##########################
This guide serves as a starting point for developers working on writing new
-Tempest tests. At a high level tests in Tempest are just tests that conform to
+Tempest tests. At a high level, tests in Tempest are just tests that conform to
the standard python `unit test`_ framework. But there are several aspects of
that are unique to Tempest and its role as an integration test suite running
against a real cloud.
diff --git a/playbooks/tempest-scenario-multinode-lvm-multibackend/post.yaml b/playbooks/tempest-scenario-multinode-lvm-multibackend/post.yaml
deleted file mode 100644
index e07f551..0000000
--- a/playbooks/tempest-scenario-multinode-lvm-multibackend/post.yaml
+++ /dev/null
@@ -1,15 +0,0 @@
-- hosts: primary
- tasks:
-
- - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
- synchronize:
- src: '{{ ansible_user_dir }}/workspace/'
- dest: '{{ zuul.executor.log_root }}'
- mode: pull
- copy_links: true
- verify_host: true
- rsync_opts:
- - --include=/logs/**
- - --include=*/
- - --exclude=*
- - --prune-empty-dirs
diff --git a/playbooks/tempest-scenario-multinode-lvm-multibackend/run.yaml b/playbooks/tempest-scenario-multinode-lvm-multibackend/run.yaml
deleted file mode 100644
index 57b4074..0000000
--- a/playbooks/tempest-scenario-multinode-lvm-multibackend/run.yaml
+++ /dev/null
@@ -1,65 +0,0 @@
-- hosts: primary
- name: Autoconverted job tempest-scenario-multinode-lvm-multibackend
- from old job gate-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend-ubuntu-xenial-nv
- tasks:
-
- - name: Ensure legacy workspace directory
- file:
- path: '{{ ansible_user_dir }}/workspace'
- state: directory
-
- - shell:
- cmd: |
- set -e
- set -x
- cat > clonemap.yaml << EOF
- clonemap:
- - name: openstack-infra/devstack-gate
- dest: devstack-gate
- EOF
- /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
- git://git.openstack.org \
- openstack-infra/devstack-gate
- executable: /bin/bash
- chdir: '{{ ansible_user_dir }}/workspace'
- environment: '{{ zuul | zuul_legacy_vars }}'
-
- - shell:
- cmd: |
- set -e
- set -x
- cat << 'EOF' >>"/tmp/dg-local.conf"
- [[local|localrc]]
- ENABLE_IDENTITY_V2=False
- TEMPEST_USE_TEST_ACCOUNTS=True
- # Enable lvm multiple backends to run multi backend slow scenario tests.
- # Note: multi backend experimental job exclude the slow scenario tests.
- CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2
-
- EOF
- executable: /bin/bash
- chdir: '{{ ansible_user_dir }}/workspace'
- environment: '{{ zuul | zuul_legacy_vars }}'
-
- - shell:
- cmd: |
- set -e
- set -x
- export PYTHONUNBUFFERED=true
- export DEVSTACK_GATE_TEMPEST=1
- # Run all scenario tests including slow tests with concurrency 2
- export DEVSTACK_GATE_TEMPEST_REGEX='(^tempest\.(scenario))'
- export TEMPEST_CONCURRENCY=2
- export DEVSTACK_GATE_NEUTRON=1
- export DEVSTACK_GATE_TLSPROXY=1
- export BRANCH_OVERRIDE=default
- if [ "$BRANCH_OVERRIDE" != "default" ] ; then
- export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
- fi
- export DEVSTACK_GATE_TOPOLOGY="multinode"
-
- cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
- ./safe-devstack-vm-gate-wrap.sh
- executable: /bin/bash
- chdir: '{{ ansible_user_dir }}/workspace'
- environment: '{{ zuul | zuul_legacy_vars }}'
diff --git a/releasenotes/notes/add-storyboard-in-skip-because-decorator-3e139aa8a4f7970f.yaml b/releasenotes/notes/add-storyboard-in-skip-because-decorator-3e139aa8a4f7970f.yaml
new file mode 100644
index 0000000..dd4a90b
--- /dev/null
+++ b/releasenotes/notes/add-storyboard-in-skip-because-decorator-3e139aa8a4f7970f.yaml
@@ -0,0 +1,17 @@
+---
+features:
+ - |
+ Add a new parameter called ``bug_type`` to
+ ``tempest.lib.decorators.related_bug`` and
+ ``tempest.lib.decorators.skip_because`` decorators, which accepts
+ 2 values:
+
+ * launchpad
+ * storyboard
+
+ This offers the possibility of tracking bugs related to tests using
+ launchpad or storyboard references. The default value is launchpad
+ for backward compatibility.
+
+ Passing in a non-digit ``bug`` value to either decorator will raise
+ an ``InvalidParam`` exception (previously ``ValueError``).
diff --git a/releasenotes/notes/omit_X-Subject-Token_from_log-1bf5fef88c80334b.yaml b/releasenotes/notes/omit_X-Subject-Token_from_log-1bf5fef88c80334b.yaml
new file mode 100644
index 0000000..51c8f79
--- /dev/null
+++ b/releasenotes/notes/omit_X-Subject-Token_from_log-1bf5fef88c80334b.yaml
@@ -0,0 +1,7 @@
+---
+security:
+ - |
+ The x-subject-token in a response header is omitted from the log,
+ but clients send the same token in a request header to the
+ Keystone API and that was not omitted. In this release,
+ it is omitted as well for security reasons.
diff --git a/releasenotes/notes/remove-allow_tenant_isolation-option-03f0d998eb498d44.yaml b/releasenotes/notes/remove-allow_tenant_isolation-option-03f0d998eb498d44.yaml
new file mode 100644
index 0000000..4f4516b
--- /dev/null
+++ b/releasenotes/notes/remove-allow_tenant_isolation-option-03f0d998eb498d44.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+ - |
+ Remove deprecated config option ``allow_tenant_isolation`` from
+ ``auth`` and ``compute`` groups. Use ``use_dynamic_credentials`` directly
+ instead of the removed option.
diff --git a/tempest/api/compute/admin/test_flavors_microversions.py b/tempest/api/compute/admin/test_flavors_microversions.py
index 9f014e6..31b9217 100644
--- a/tempest/api/compute/admin/test_flavors_microversions.py
+++ b/tempest/api/compute/admin/test_flavors_microversions.py
@@ -33,14 +33,14 @@
disk=10,
id=flavor_id)['id']
# Checking show API response schema
- self.flavors_client.show_flavor(new_flavor_id)['flavor']
+ self.flavors_client.show_flavor(new_flavor_id)
# Checking update API response schema
self.admin_flavors_client.update_flavor(new_flavor_id,
- description='new')['flavor']
+ description='new')
# Checking list details API response schema
- self.flavors_client.list_flavors(detail=True)['flavors']
+ self.flavors_client.list_flavors(detail=True)
# Checking list API response schema
- self.flavors_client.list_flavors()['flavors']
+ self.flavors_client.list_flavors()
class FlavorsV261TestJSON(FlavorsV255TestJSON):
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
index 72d09ed..2d7e1a7 100644
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ b/tempest/api/compute/admin/test_floating_ips_bulk.py
@@ -25,6 +25,8 @@
CONF = config.CONF
+# TODO(stephenfin): Remove this test class once the nova queens branch goes
+# into extended maintenance mode.
class FloatingIPsBulkAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Floating IPs Bulk APIs that require admin privileges.
@@ -32,6 +34,7 @@
content/ext-os-floating-ips-bulk.html
"""
max_microversion = '2.35'
+ depends_on_nova_network = True
@classmethod
def setup_clients(cls):
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index bc38144..8350f7c 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -66,7 +66,8 @@
kwargs = dict()
block_migration = getattr(self, 'block_migration', None)
if self.block_migration is None:
- kwargs['disk_over_commit'] = False
+ if self.is_requested_microversion_compatible('2.24'):
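+                # NOTE: disk_over_commit was removed from the live migration
+                # API in compute microversion 2.25, so it is only valid when
+                # the requested microversion is 2.24 or lower.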
+ kwargs['disk_over_commit'] = False
block_migration = (CONF.compute_feature_enabled.
block_migration_for_live_migration and
not volume_backed)
diff --git a/tempest/api/compute/admin/test_live_migration_negative.py b/tempest/api/compute/admin/test_live_migration_negative.py
index deabbc2..8327a3b 100644
--- a/tempest/api/compute/admin/test_live_migration_negative.py
+++ b/tempest/api/compute/admin/test_live_migration_negative.py
@@ -32,9 +32,10 @@
def _migrate_server_to(self, server_id, dest_host):
bmflm = CONF.compute_feature_enabled.block_migration_for_live_migration
- self.admin_servers_client.live_migrate_server(
- server_id, host=dest_host, block_migration=bmflm,
- disk_over_commit=False)
+ kwargs = dict(host=dest_host, block_migration=bmflm)
+ if self.is_requested_microversion_compatible('2.24'):
+ kwargs['disk_over_commit'] = False
+ self.admin_servers_client.live_migrate_server(server_id, **kwargs)
@decorators.attr(type=['negative'])
@decorators.idempotent_id('7fb7856e-ae92-44c9-861a-af62d7830bcb')
diff --git a/tempest/api/compute/admin/test_services_negative.py b/tempest/api/compute/admin/test_services_negative.py
index 993c8ec..d264829 100644
--- a/tempest/api/compute/admin/test_services_negative.py
+++ b/tempest/api/compute/admin/test_services_negative.py
@@ -74,8 +74,6 @@
@classmethod
def resource_setup(cls):
super(ServicesAdminNegativeV253TestJSON, cls).resource_setup()
- # Nova returns 400 if `binary` is not nova-compute.
- cls.binary = 'nova-compute'
cls.fake_service_id = data_utils.rand_uuid()
@decorators.attr(type=['negative'])
diff --git a/tempest/api/compute/servers/test_device_tagging.py b/tempest/api/compute/servers/test_device_tagging.py
index ff8ed61..40681cb 100644
--- a/tempest/api/compute/servers/test_device_tagging.py
+++ b/tempest/api/compute/servers/test_device_tagging.py
@@ -320,7 +320,9 @@
try:
self.assertEmpty(md_dict['devices'])
return True
- except Exception:
+ except AssertionError:
+ LOG.debug("Related bug 1775947. Devices dict is not empty: %s",
+ md_dict['devices'])
return False
@decorators.idempotent_id('3e41c782-2a89-4922-a9d2-9a188c4e7c7c')
@@ -380,5 +382,8 @@
waiters.wait_for_interface_detach(self.interfaces_client,
server['id'],
interface['port_id'])
- self.verify_metadata_from_api(server, ssh_client,
- self.verify_empty_devices)
+ # FIXME(mriedem): The assertion that the tagged devices are removed
+ # from the metadata for the server is being skipped until bug 1775947
+ # is fixed.
+ # self.verify_metadata_from_api(server, ssh_client,
+ # self.verify_empty_devices)
diff --git a/tempest/api/compute/servers/test_novnc.py b/tempest/api/compute/servers/test_novnc.py
index 1dfd0f9..3e44b56 100644
--- a/tempest/api/compute/servers/test_novnc.py
+++ b/tempest/api/compute/servers/test_novnc.py
@@ -58,6 +58,9 @@
def resource_setup(cls):
super(NoVNCConsoleTestJSON, cls).resource_setup()
cls.server = cls.create_test_server(wait_until="ACTIVE")
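+        # NOTE: the os-getVNCConsole server action was superseded by the
+        # remote-consoles API in compute microversion 2.6, so pick the
+        # client call based on the requested microversion.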
+ cls.use_get_remote_console = False
+ if not cls.is_requested_microversion_compatible('2.5'):
+ cls.use_get_remote_console = True
def _validate_novnc_html(self, vnc_url):
"""Verify we can connect to novnc and get back the javascript."""
@@ -170,8 +173,13 @@
@decorators.idempotent_id('c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc')
def test_novnc(self):
- body = self.client.get_vnc_console(self.server['id'],
- type='novnc')['console']
+ if self.use_get_remote_console:
+ body = self.client.get_remote_console(
+ self.server['id'], console_type='novnc',
+ protocol='vnc')['remote_console']
+ else:
+ body = self.client.get_vnc_console(self.server['id'],
+ type='novnc')['console']
self.assertEqual('novnc', body['type'])
# Do the initial HTTP Request to novncproxy to get the NoVNC JavaScript
self._validate_novnc_html(body['url'])
@@ -184,8 +192,13 @@
@decorators.idempotent_id('f9c79937-addc-4aaa-9e0e-841eef02aeb7')
def test_novnc_bad_token(self):
- body = self.client.get_vnc_console(self.server['id'],
- type='novnc')['console']
+ if self.use_get_remote_console:
+ body = self.client.get_remote_console(
+ self.server['id'], console_type='novnc',
+ protocol='vnc')['remote_console']
+ else:
+ body = self.client.get_vnc_console(self.server['id'],
+ type='novnc')['console']
self.assertEqual('novnc', body['type'])
# Do the WebSockify HTTP Request to novncproxy with a bad token
url = body['url'].replace('token=', 'token=bad')
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 350e8ba..961b2b7 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -527,10 +527,10 @@
def _get_output(self):
output = self.client.get_console_output(
- self.server_id, length=10)['output']
+ self.server_id, length=3)['output']
self.assertTrue(output, "Console output was empty.")
lines = len(output.split('\n'))
- self.assertEqual(lines, 10)
+ self.assertEqual(lines, 3)
@decorators.idempotent_id('4b8867e6-fffa-4d54-b1d1-6fdda57be2f3')
@testtools.skipUnless(CONF.compute_feature_enabled.console_output,
@@ -561,8 +561,8 @@
# NOTE: This test tries to get full length console log, and the
# length should be bigger than the one of test_get_console_output.
- self.assertGreater(lines, 10, "Cannot get enough console log "
- "length. (lines: %s)" % lines)
+ self.assertGreater(lines, 3, "Cannot get enough console log "
+ "length. (lines: %s)" % lines)
self.wait_for(_check_full_length_console_log)
@@ -690,8 +690,13 @@
# Get the VNC console of type 'novnc' and 'xvpvnc'
console_types = ['novnc', 'xvpvnc']
for console_type in console_types:
- body = self.client.get_vnc_console(self.server_id,
- type=console_type)['console']
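+            # Microversions up to 2.5 use the os-getVNCConsole action;
+            # later microversions must use the remote-consoles API.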
+ if self.is_requested_microversion_compatible('2.5'):
+ body = self.client.get_vnc_console(
+ self.server_id, type=console_type)['console']
+ else:
+ body = self.client.get_remote_console(
+ self.server_id, console_type=console_type,
+ protocol='vnc')['remote_console']
self.assertEqual(console_type, body['type'])
self.assertNotEqual('', body['url'])
self._validate_url(body['url'])
diff --git a/tempest/api/compute/servers/test_server_group.py b/tempest/api/compute/servers/test_server_group.py
index 5286c8f..1b7cb96 100644
--- a/tempest/api/compute/servers/test_server_group.py
+++ b/tempest/api/compute/servers/test_server_group.py
@@ -47,8 +47,16 @@
super(ServerGroupTestJSON, cls).resource_setup()
cls.policy = ['affinity']
- cls.created_server_group = cls.create_test_server_group(
- policy=cls.policy)
+ def setUp(self):
+ super(ServerGroupTestJSON, self).setUp()
+ # TODO(zhufl): After microversion 2.13 project_id and user_id are
+ # added to the body of server_group, and microversion is not used
+ # in resource_setup for now, so we should create server group in setUp
+ # in order to use the same microversion as in testcases till
+ # microversion support in resource_setup is fulfilled.
+ if not hasattr(self, 'created_server_group'):
+ self.__class__.created_server_group = \
+ self.create_test_server_group(policy=self.policy)
def _create_server_group(self, name, policy):
# create the test server-group with given policy
diff --git a/tempest/api/compute/servers/test_virtual_interfaces.py b/tempest/api/compute/servers/test_virtual_interfaces.py
index 5fb1711..f810ec5 100644
--- a/tempest/api/compute/servers/test_virtual_interfaces.py
+++ b/tempest/api/compute/servers/test_virtual_interfaces.py
@@ -28,6 +28,7 @@
# TODO(mriedem): Remove this test class once the nova queens branch goes into
# extended maintenance mode.
class VirtualInterfacesTestJSON(base.BaseV2ComputeTest):
+ max_microversion = '2.43'
depends_on_nova_network = True
diff --git a/tempest/api/compute/servers/test_virtual_interfaces_negative.py b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
index ec4d7a8..f6e8bc9 100644
--- a/tempest/api/compute/servers/test_virtual_interfaces_negative.py
+++ b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
@@ -23,6 +23,7 @@
# TODO(mriedem): Remove this test class once the nova queens branch goes into
# extended maintenance mode.
class VirtualInterfacesNegativeTestJSON(base.BaseV2ComputeTest):
+ max_microversion = '2.43'
depends_on_nova_network = True
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index 874aaa4..2cd8906 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -20,6 +20,10 @@
class EndPointsTestJSON(base.BaseIdentityV3AdminTest):
+ # NOTE: force_tenant_isolation is true in the base class by default but
+ # overridden to false here to allow test execution for clouds using the
+ # pre-provisioned credentials provider.
+ force_tenant_isolation = False
@classmethod
def setup_clients(cls):
diff --git a/tempest/api/identity/admin/v3/test_endpoints_negative.py b/tempest/api/identity/admin/v3/test_endpoints_negative.py
index d54e222..4c3eb1c 100644
--- a/tempest/api/identity/admin/v3/test_endpoints_negative.py
+++ b/tempest/api/identity/admin/v3/test_endpoints_negative.py
@@ -1,4 +1,3 @@
-
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
@@ -21,6 +20,10 @@
class EndpointsNegativeTestJSON(base.BaseIdentityV3AdminTest):
+ # NOTE: force_tenant_isolation is true in the base class by default but
+ # overridden to false here to allow test execution for clouds using the
+ # pre-provisioned credentials provider.
+ force_tenant_isolation = False
@classmethod
def setup_clients(cls):
diff --git a/tempest/api/identity/v2/test_users.py b/tempest/api/identity/v2/test_users.py
index 9c77fff..158dfb3 100644
--- a/tempest/api/identity/v2/test_users.py
+++ b/tempest/api/identity/v2/test_users.py
@@ -84,7 +84,7 @@
new_pass = data_utils.rand_password()
user_id = self.creds.user_id
- # to change password back. important for allow_tenant_isolation = false
+ # to change password back. important for use_dynamic_credentials=false
self.addCleanup(self._restore_password, user_id, old_pass, new_pass)
# user updates own password
diff --git a/tempest/api/identity/v3/test_users.py b/tempest/api/identity/v3/test_users.py
index 1f099df..6d6baca 100644
--- a/tempest/api/identity/v3/test_users.py
+++ b/tempest/api/identity/v3/test_users.py
@@ -82,7 +82,7 @@
old_token = self.non_admin_client.token
new_pass = data_utils.rand_password()
- # to change password back. important for allow_tenant_isolation = false
+ # to change password back. important for use_dynamic_credentials=false
self.addCleanup(self._restore_password, old_pass, new_pass)
# user updates own password
diff --git a/tempest/api/network/admin/test_external_network_extension.py b/tempest/api/network/admin/test_external_network_extension.py
index 49a9cdb..39d03e7 100644
--- a/tempest/api/network/admin/test_external_network_extension.py
+++ b/tempest/api/network/admin/test_external_network_extension.py
@@ -95,6 +95,7 @@
self.assertEqual(self.network['id'], show_net['id'])
self.assertFalse(show_net['router:external'])
+ @decorators.skip_because(bug="1749820")
@decorators.idempotent_id('82068503-2cf2-4ed4-b3be-ecb89432e4bb')
@testtools.skipUnless(CONF.network_feature_enabled.floating_ips,
'Floating ips are not availabled')
diff --git a/tempest/api/volume/admin/test_volume_services_negative.py b/tempest/api/volume/admin/test_volume_services_negative.py
index 6f3dbc6..3a863a1 100644
--- a/tempest/api/volume/admin/test_volume_services_negative.py
+++ b/tempest/api/volume/admin/test_volume_services_negative.py
@@ -23,10 +23,9 @@
@classmethod
def resource_setup(cls):
super(VolumeServicesNegativeTest, cls).resource_setup()
- cls.services = cls.admin_volume_services_client.list_services()[
- 'services']
- cls.host = cls.services[0]['host']
- cls.binary = cls.services[0]['binary']
+ services = cls.admin_volume_services_client.list_services()['services']
+ cls.host = services[0]['host']
+ cls.binary = services[0]['binary']
@decorators.attr(type='negative')
@decorators.idempotent_id('3246ce65-ba70-4159-aa3b-082c28e4b484')
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 5d339c4..ac9a9c7 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -33,7 +33,7 @@
def test_volume_extend(self):
# Extend Volume Test.
volume = self.create_volume(image_ref=self.image_ref)
- extend_size = volume['size'] + 1
+ extend_size = volume['size'] * 2
self.volumes_client.extend_volume(volume['id'],
new_size=extend_size)
waiters.wait_for_volume_resource_status(self.volumes_client,
@@ -48,7 +48,7 @@
volume = self.create_volume()
self.create_snapshot(volume['id'])
- extend_size = volume['size'] + 1
+ extend_size = volume['size'] * 2
self.volumes_client.extend_volume(volume['id'], new_size=extend_size)
waiters.wait_for_volume_resource_status(self.volumes_client,
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index f139283..866bd87 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -103,21 +103,24 @@
def test_create_volume_with_nonexistent_volume_type(self):
# Should not be able to create volume with non-existent volume type
self.assertRaises(lib_exc.NotFound, self.volumes_client.create_volume,
- size='1', volume_type=data_utils.rand_uuid())
+ size=CONF.volume.volume_size,
+ volume_type=data_utils.rand_uuid())
@decorators.attr(type=['negative'])
@decorators.idempotent_id('0c36f6ae-4604-4017-b0a9-34fdc63096f9')
def test_create_volume_with_nonexistent_snapshot_id(self):
# Should not be able to create volume with non-existent snapshot
self.assertRaises(lib_exc.NotFound, self.volumes_client.create_volume,
- size='1', snapshot_id=data_utils.rand_uuid())
+ size=CONF.volume.volume_size,
+ snapshot_id=data_utils.rand_uuid())
@decorators.attr(type=['negative'])
@decorators.idempotent_id('47c73e08-4be8-45bb-bfdf-0c4e79b88344')
def test_create_volume_with_nonexistent_source_volid(self):
# Should not be able to create volume with non-existent source volume
self.assertRaises(lib_exc.NotFound, self.volumes_client.create_volume,
- size='1', source_volid=data_utils.rand_uuid())
+ size=CONF.volume.volume_size,
+ source_volid=data_utils.rand_uuid())
@decorators.attr(type=['negative'])
@decorators.idempotent_id('0186422c-999a-480e-a026-6a665744c30c')
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
index ea5f036..0453c0a 100644
--- a/tempest/api/volume/test_volumes_snapshots_negative.py
+++ b/tempest/api/volume/test_volumes_snapshots_negative.py
@@ -50,7 +50,7 @@
@decorators.idempotent_id('677863d1-34f9-456d-b6ac-9924f667a7f4')
def test_volume_from_snapshot_decreasing_size(self):
# Creates a volume a snapshot passing a size different from the source
- src_size = CONF.volume.volume_size + 1
+ src_size = CONF.volume.volume_size * 2
src_vol = self.create_volume(size=src_size)
src_snap = self.create_snapshot(src_vol['id'])
@@ -58,7 +58,7 @@
# Destination volume smaller than source
self.assertRaises(lib_exc.BadRequest,
self.volumes_client.create_volume,
- size=src_size - 1,
+ size=CONF.volume.volume_size,
snapshot_id=src_snap['id'])
@decorators.attr(type=['negative'])
diff --git a/tempest/config.py b/tempest/config.py
index 1fb5c8e..cc0ba34 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -61,11 +61,7 @@
"users. This option requires that OpenStack Identity "
"API admin credentials are known. If false, isolated "
"test cases and parallel execution, can still be "
- "achieved configuring a list of test accounts",
- deprecated_opts=[cfg.DeprecatedOpt('allow_tenant_isolation',
- group='auth'),
- cfg.DeprecatedOpt('allow_tenant_isolation',
- group='compute')]),
+ "achieved configuring a list of test accounts"),
cfg.ListOpt('tempest_roles',
help="Roles to assign to all users created by tempest",
default=[]),
diff --git a/tempest/lib/api_schema/response/compute/v2_1/servers.py b/tempest/lib/api_schema/response/compute/v2_1/servers.py
index 2954de0..3979c65 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/servers.py
@@ -430,8 +430,9 @@
'traceback': {'type': ['string', 'null']}
},
'additionalProperties': False,
- 'required': ['event', 'start_time', 'finish_time', 'result',
- 'traceback']
+ # NOTE(zhufl): events.traceback can only be seen by admin users
+ # with default policy.json, so it shouldn't be a required field.
+ 'required': ['event', 'start_time', 'finish_time', 'result']
}
}
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index 22276d4..bc9cfe2 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -416,6 +416,8 @@
resp_body=None, extra=None):
if 'X-Auth-Token' in req_headers:
req_headers['X-Auth-Token'] = '<omitted>'
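+        # X-Subject-Token also carries a Keystone token when it appears in
+        # request headers (e.g. token validation calls), so mask it the
+        # same way as X-Auth-Token to keep credentials out of the logs.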
+ if 'X-Subject-Token' in req_headers:
+ req_headers['X-Subject-Token'] = '<omitted>'
# A shallow copy is sufficient
resp_log = resp.copy()
if 'x-subject-token' in resp_log:
diff --git a/tempest/lib/decorators.py b/tempest/lib/decorators.py
index e99dd24..b399aa0 100644
--- a/tempest/lib/decorators.py
+++ b/tempest/lib/decorators.py
@@ -19,39 +19,83 @@
import six
import testtools
+from tempest.lib import exceptions as lib_exc
+
LOG = logging.getLogger(__name__)
+_SUPPORTED_BUG_TYPES = {
+ 'launchpad': 'https://launchpad.net/bugs/%s',
+ 'storyboard': 'https://storyboard.openstack.org/#!/story/%s',
+}
+
+
+def _validate_bug_and_bug_type(bug, bug_type):
+ """Validates ``bug`` and ``bug_type`` values.
+
+ :param bug: bug number causing the test to skip (launchpad or storyboard)
+ :param bug_type: 'launchpad' or 'storyboard', default 'launchpad'
+ :raises: InvalidParam if ``bug`` is not a digit or ``bug_type`` is not
+ a valid value
+ """
+ if not bug.isdigit():
+ invalid_param = '%s must be a valid %s number' % (bug, bug_type)
+ raise lib_exc.InvalidParam(invalid_param=invalid_param)
+ if bug_type not in _SUPPORTED_BUG_TYPES:
+ invalid_param = 'bug_type "%s" must be one of: %s' % (
+ bug_type, ', '.join(_SUPPORTED_BUG_TYPES.keys()))
+ raise lib_exc.InvalidParam(invalid_param=invalid_param)
+
+
+def _get_bug_url(bug, bug_type='launchpad'):
+ """Get the bug URL based on the ``bug_type`` and ``bug``
+
+ :param bug: The launchpad/storyboard bug number causing the test
+ :param bug_type: 'launchpad' or 'storyboard', default 'launchpad'
+ :returns: Bug URL corresponding to ``bug_type`` value
+ """
+ _validate_bug_and_bug_type(bug, bug_type)
+ return _SUPPORTED_BUG_TYPES[bug_type] % bug
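+
+# Example usage of the ``bug_type`` parameter (illustrative only; the test
+# method and bug number below are hypothetical):
+#
+#     @decorators.skip_because(bug='2004127', bug_type='storyboard')
+#     def test_example(self):
+#         ...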
+
def skip_because(*args, **kwargs):
"""A decorator useful to skip tests hitting known bugs
- @param bug: bug number causing the test to skip
- @param condition: optional condition to be True for the skip to have place
+ ``bug`` must be a number and ``condition`` must be true for the test to
+ skip.
+
+ :param bug: bug number causing the test to skip (launchpad or storyboard)
+ :param bug_type: 'launchpad' or 'storyboard', default 'launchpad'
+ :param condition: optional condition to be True for the skip to have place
+ :raises: testtools.TestCase.skipException if ``condition`` is True and
+ ``bug`` is included
"""
def decorator(f):
@functools.wraps(f)
def wrapper(*func_args, **func_kwargs):
skip = False
+ msg = ''
if "condition" in kwargs:
if kwargs["condition"] is True:
skip = True
else:
skip = True
if "bug" in kwargs and skip is True:
- if not kwargs['bug'].isdigit():
- raise ValueError('bug must be a valid bug number')
- msg = "Skipped until Bug: %s is resolved." % kwargs["bug"]
+ bug = kwargs['bug']
+ bug_type = kwargs.get('bug_type', 'launchpad')
+ bug_url = _get_bug_url(bug, bug_type)
+ msg = "Skipped until bug: %s is resolved." % bug_url
raise testtools.TestCase.skipException(msg)
return f(*func_args, **func_kwargs)
return wrapper
return decorator
-def related_bug(bug, status_code=None):
- """A decorator useful to know solutions from launchpad bug reports
+def related_bug(bug, status_code=None, bug_type='launchpad'):
+ """A decorator useful to know solutions from launchpad/storyboard reports
- @param bug: The launchpad bug number causing the test
- @param status_code: The status code related to the bug report
+ :param bug: The launchpad/storyboard bug number causing the test
+ :param bug_type: 'launchpad' or 'storyboard', default 'launchpad'
+ :param status_code: The status code related to the bug report
"""
def decorator(f):
@functools.wraps(f)
@@ -61,9 +105,10 @@
except Exception as exc:
exc_status_code = getattr(exc, 'status_code', None)
if status_code is None or status_code == exc_status_code:
- LOG.error('Hints: This test was made for the bug %s. '
- 'The failure could be related to '
- 'https://launchpad.net/bugs/%s', bug, bug)
+ if bug:
+                        LOG.error('Hints: This test was made for the bug '
+ '%s. The failure could be related to '
+ '%s', bug, _get_bug_url(bug, bug_type))
raise exc
return wrapper
return decorator
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 1109bc8..9b8f3ad 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -547,7 +547,7 @@
volume['id'], 'available')
def ping_ip_address(self, ip_address, should_succeed=True,
- ping_timeout=None, mtu=None):
+ ping_timeout=None, mtu=None, server=None):
timeout = ping_timeout or CONF.validation.ping_timeout
cmd = ['ping', '-c1', '-w1']
@@ -581,12 +581,16 @@
'caller': caller, 'ip': ip_address, 'timeout': timeout,
'result': 'expected' if result else 'unexpected'
})
+ if server:
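+            # If the caller passed the server, dump its console output to
+            # make unexpected ping results easier to debug.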
+ self._log_console_output([server])
return result
def check_vm_connectivity(self, ip_address,
username=None,
private_key=None,
should_connect=True,
+ extra_msg="",
+ server=None,
mtu=None):
"""Check server connectivity
@@ -596,43 +600,36 @@
:param should_connect: True/False indicates positive/negative test
positive - attempt ping and ssh
negative - attempt ping and fail if succeed
+ :param extra_msg: Message to help with debugging if ``ping_ip_address``
+ fails
+ :param server: The server whose console to log for debugging
:param mtu: network MTU to use for connectivity validation
:raises: AssertError if the result of the connectivity check does
not match the value of the should_connect param
"""
+ LOG.debug('checking network connections to IP %s with user: %s',
+ ip_address, username)
if should_connect:
msg = "Timed out waiting for %s to become reachable" % ip_address
else:
msg = "ip address %s is reachable" % ip_address
+ if extra_msg:
+ msg = "%s\n%s" % (extra_msg, msg)
self.assertTrue(self.ping_ip_address(ip_address,
should_succeed=should_connect,
- mtu=mtu),
+ mtu=mtu, server=server),
msg=msg)
if should_connect:
# no need to check ssh for negative connectivity
- self.get_remote_client(ip_address, username, private_key)
-
- def check_public_network_connectivity(self, ip_address, username,
- private_key, should_connect=True,
- msg=None, servers=None, mtu=None):
- # The target login is assumed to have been configured for
- # key-based authentication by cloud-init.
- LOG.debug('checking network connections to IP %s with user: %s',
- ip_address, username)
- try:
- self.check_vm_connectivity(ip_address,
- username,
- private_key,
- should_connect=should_connect,
- mtu=mtu)
- except Exception:
- ex_msg = 'Public network connectivity check failed'
- if msg:
- ex_msg += ": " + msg
- LOG.exception(ex_msg)
- self._log_console_output(servers)
- raise
+ try:
+ self.get_remote_client(ip_address, username, private_key,
+ server=server)
+ except Exception:
+ if not extra_msg:
+ extra_msg = 'Failed to ssh to %s' % ip_address
+ LOG.exception(extra_msg)
+ raise
def create_floating_ip(self, thing, pool_name=None):
"""Create a floating IP and associates to a server on Nova"""
@@ -813,8 +810,13 @@
return subnet
def _get_server_port_id_and_ip4(self, server, ip_addr=None):
- ports = self.os_admin.ports_client.list_ports(
- device_id=server['id'], fixed_ip=ip_addr)['ports']
+ if ip_addr:
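+            # Neutron's port listing expects the fixed_ips filter in the
+            # form 'ip_address=<ip>' rather than a bare IP address.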
+ ports = self.os_admin.ports_client.list_ports(
+ device_id=server['id'],
+ fixed_ips='ip_address=%s' % ip_addr)['ports']
+ else:
+ ports = self.os_admin.ports_client.list_ports(
+ device_id=server['id'])['ports']
# A port can have more than one IP address in some cases.
# If the network is dual-stack (IPv4 + IPv6), this port is associated
# with 2 subnets
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index 87ce951..b0e4669 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -90,9 +90,10 @@
floating_ip_addr = floating_ip['floating_ip_address']
# Check FloatingIP status before checking the connectivity
self.check_floating_ip_status(floating_ip, 'ACTIVE')
- self.check_public_network_connectivity(floating_ip_addr, username,
- private_key, should_connect,
- servers=[server])
+ self.check_vm_connectivity(floating_ip_addr, username,
+ private_key, should_connect,
+ 'Public network connectivity check failed',
+ server)
def _wait_server_status_and_check_network_connectivity(self, server,
keypair,
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 8212e75..c1132cf 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -175,7 +175,7 @@
def _get_server_key(self, server):
return self.keypairs[server['key_name']]['private_key']
- def check_public_network_connectivity(
+ def _check_public_network_connectivity(
self, should_connect=True, msg=None,
should_check_floating_ip_status=True, mtu=None):
"""Verifies connectivty to a VM via public network and floating IP
@@ -199,13 +199,18 @@
if should_connect:
private_key = self._get_server_key(server)
floatingip_status = 'ACTIVE'
+
# Check FloatingIP Status before initiating a connection
if should_check_floating_ip_status:
self.check_floating_ip_status(floating_ip, floatingip_status)
- # call the common method in the parent class
- super(TestNetworkBasicOps, self).check_public_network_connectivity(
- ip_address, ssh_login, private_key, should_connect, msg,
- self.servers, mtu=mtu)
+
+ message = 'Public network connectivity check failed'
+ if msg:
+ message += '. Reason: %s' % msg
+
+ self.check_vm_connectivity(
+ ip_address, ssh_login, private_key, should_connect,
+ message, server, mtu=mtu)
def _disassociate_floating_ips(self):
floating_ip, _ = self.floating_ip_tuple
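To make the new failure text concrete, a short illustrative sketch of how the caller-supplied msg is folded into the message handed down to check_vm_connectivity (the msg value is an example taken from the tests below):

# Illustration only -- mirrors the string handling added above.
msg = 'after disassociate floating ip'
message = 'Public network connectivity check failed'
if msg:
    message += '. Reason: %s' % msg
# -> 'Public network connectivity check failed. Reason: after
#     disassociate floating ip'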
@@ -404,17 +409,17 @@
"""
self._setup_network_and_servers()
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
self._check_network_internal_connectivity(network=self.network)
self._check_network_external_connectivity()
self._disassociate_floating_ips()
- self.check_public_network_connectivity(should_connect=False,
- msg="after disassociate "
- "floating ip")
+ self._check_public_network_connectivity(should_connect=False,
+ msg="after disassociate "
+ "floating ip")
self._reassociate_floating_ips()
- self.check_public_network_connectivity(should_connect=True,
- msg="after re-associate "
- "floating ip")
+ self._check_public_network_connectivity(should_connect=True,
+ msg="after re-associate "
+ "floating ip")
@decorators.idempotent_id('b158ea55-472e-4086-8fa9-c64ac0c6c1d0')
@testtools.skipUnless(utils.is_extension_enabled('net-mtu', 'network'),
@@ -425,10 +430,10 @@
"""Validate that network MTU sized frames fit through."""
self._setup_network_and_servers()
# first check that connectivity works in general for the instance
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
# now that we checked general connectivity, test that full size frames
# can also pass between nodes
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=True, mtu=self.network['mtu'])
@decorators.idempotent_id('1546850e-fbaa-42f5-8b5f-03d8a6a95f15')
@@ -467,7 +472,7 @@
"""
self._setup_network_and_servers()
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
self._check_network_internal_connectivity(network=self.network)
self._check_network_external_connectivity()
self._create_new_network(create_gateway=True)
@@ -502,7 +507,7 @@
"""
self._setup_network_and_servers()
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
self._create_new_network()
self._hotplug_server()
self._check_network_internal_connectivity(network=self.new_net)
@@ -524,19 +529,19 @@
admin_state_up attribute of router to True
"""
self._setup_network_and_servers()
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=True, msg="before updating "
"admin_state_up of router to False")
self._update_router_admin_state(self.router, False)
# TODO(alokmaurya): Remove should_check_floating_ip_status=False check
# once bug 1396310 is fixed
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=False, msg="after updating "
"admin_state_up of router to False",
should_check_floating_ip_status=False)
self._update_router_admin_state(self.router, True)
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=True, msg="after updating "
"admin_state_up of router to True")
@@ -581,7 +586,7 @@
renew_timeout = CONF.network.build_timeout
self._setup_network_and_servers(dns_nameservers=[initial_dns_server])
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
floating_ip, server = self.floating_ip_tuple
ip_address = floating_ip['floating_ip_address']
@@ -656,20 +661,20 @@
private_key=private_key,
server=server2)
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=True, msg="before updating "
"admin_state_up of instance port to False")
self.check_remote_connectivity(ssh_client, dest=server_pip,
should_succeed=True)
self.ports_client.update_port(port_id, admin_state_up=False)
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=False, msg="after updating "
"admin_state_up of instance port to False",
should_check_floating_ip_status=False)
self.check_remote_connectivity(ssh_client, dest=server_pip,
should_succeed=False)
self.ports_client.update_port(port_id, admin_state_up=True)
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=True, msg="after updating "
"admin_state_up of instance port to True")
self.check_remote_connectivity(ssh_client, dest=server_pip,
@@ -766,7 +771,7 @@
msg = "Rescheduling test does not apply to distributed routers."
raise self.skipException(msg)
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
# remove resource from agents
hosting_agents = set(a["id"] for a in
@@ -783,7 +788,7 @@
'unscheduling router failed')
# verify resource is non-functional
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=False,
msg='after router unscheduling',
)
@@ -800,7 +805,7 @@
"target agent")
# verify resource is functional
- self.check_public_network_connectivity(
+ self._check_public_network_connectivity(
should_connect=True,
msg='After router rescheduling')
@@ -834,7 +839,7 @@
# Create server
self._setup_network_and_servers()
- self.check_public_network_connectivity(should_connect=True)
+ self._check_public_network_connectivity(should_connect=True)
self._create_new_network()
self._hotplug_server()
fip, server = self.floating_ip_tuple
diff --git a/tempest/tests/lib/services/volume/v2/test_volumes_client.py b/tempest/tests/lib/services/volume/v2/test_volumes_client.py
deleted file mode 100644
index d7b042e..0000000
--- a/tempest/tests/lib/services/volume/v2/test_volumes_client.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright 2017 FiberHome Telecommunication Technologies CO.,LTD
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.services.volume.v2 import volumes_client
-from tempest.tests.lib import fake_auth_provider
-from tempest.tests.lib.services import base
-
-
-class TestVolumesClient(base.BaseServiceTest):
-
- FAKE_VOLUME_METADATA_ITEM = {
- "meta": {
- "key1": "value1"
- }
- }
-
- FAKE_VOLUME_IMAGE_METADATA = {
- "metadata": {
- "container_format": "bare",
- "min_ram": "0",
- "disk_format": "raw",
- "image_name": "xly-ubuntu16-server",
- "image_id": "3e087b0c-10c5-4255-b147-6e8e9dbad6fc",
- "checksum": "008f5d22fe3cb825d714da79607a90f9",
- "min_disk": "0",
- "size": "8589934592"
- }
- }
-
- def setUp(self):
- super(TestVolumesClient, self).setUp()
- fake_auth = fake_auth_provider.FakeAuthProvider()
- self.client = volumes_client.VolumesClient(fake_auth,
- 'volume',
- 'regionOne')
-
- def _test_retype_volume(self, bytes_body=False):
- kwargs = {
- "new_type": "dedup-tier-replication",
- "migration_policy": "never"
- }
-
- self.check_service_client_function(
- self.client.retype_volume,
- 'tempest.lib.common.rest_client.RestClient.post',
- {},
- to_utf=bytes_body,
- status=202,
- volume_id="a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8",
- **kwargs
- )
-
- def _test_force_detach_volume(self, bytes_body=False):
- kwargs = {
- 'attachment_id': '6980e295-920f-412e-b189-05c50d605acd',
- 'connector': {
- 'initiator': 'iqn.2017-04.org.fake:01'
- }
- }
-
- self.check_service_client_function(
- self.client.force_detach_volume,
- 'tempest.lib.common.rest_client.RestClient.post',
- {},
- to_utf=bytes_body,
- status=202,
- volume_id="a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8",
- **kwargs
- )
-
- def _test_show_volume_metadata_item(self, bytes_body=False):
- self.check_service_client_function(
- self.client.show_volume_metadata_item,
- 'tempest.lib.common.rest_client.RestClient.get',
- self.FAKE_VOLUME_METADATA_ITEM,
- to_utf=bytes_body,
- volume_id="a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8",
- id="key1")
-
- def _test_show_volume_image_metadata(self, bytes_body=False):
- fake_volume_id = "a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8"
- self.check_service_client_function(
- self.client.show_volume_image_metadata,
- 'tempest.lib.common.rest_client.RestClient.post',
- self.FAKE_VOLUME_IMAGE_METADATA,
- to_utf=bytes_body,
- mock_args=['volumes/%s/action' % fake_volume_id,
- json.dumps({"os-show_image_metadata": {}})],
- volume_id=fake_volume_id)
-
- def test_force_detach_volume_with_str_body(self):
- self._test_force_detach_volume()
-
- def test_force_detach_volume_with_bytes_body(self):
- self._test_force_detach_volume(bytes_body=True)
-
- def test_show_volume_metadata_item_with_str_body(self):
- self._test_show_volume_metadata_item()
-
- def test_show_volume_metadata_item_with_bytes_body(self):
- self._test_show_volume_metadata_item(bytes_body=True)
-
- def test_show_volume_image_metadata_with_str_body(self):
- self._test_show_volume_image_metadata()
-
- def test_show_volume_image_metadata_with_bytes_body(self):
- self._test_show_volume_image_metadata(bytes_body=True)
-
- def test_retype_volume_with_str_body(self):
- self._test_retype_volume()
-
- def test_retype_volume_with_bytes_body(self):
- self._test_retype_volume(bytes_body=True)
diff --git a/tempest/tests/lib/services/volume/v3/test_volumes_client.py b/tempest/tests/lib/services/volume/v3/test_volumes_client.py
index a515fd3..1250536 100644
--- a/tempest/tests/lib/services/volume/v3/test_volumes_client.py
+++ b/tempest/tests/lib/services/volume/v3/test_volumes_client.py
@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+from oslo_serialization import jsonutils as json
+
from tempest.lib.services.volume.v3 import volumes_client
from tempest.tests.lib import fake_auth_provider
from tempest.tests.lib.services import base
@@ -27,6 +29,25 @@
}
}
+ FAKE_VOLUME_METADATA_ITEM = {
+ "meta": {
+ "key1": "value1"
+ }
+ }
+
+ FAKE_VOLUME_IMAGE_METADATA = {
+ "metadata": {
+ "container_format": "bare",
+ "min_ram": "0",
+ "disk_format": "raw",
+ "image_name": "xly-ubuntu16-server",
+ "image_id": "3e087b0c-10c5-4255-b147-6e8e9dbad6fc",
+ "checksum": "008f5d22fe3cb825d714da79607a90f9",
+ "min_disk": "0",
+ "size": "8589934592"
+ }
+ }
+
def setUp(self):
super(TestVolumesClient, self).setUp()
fake_auth = fake_auth_provider.FakeAuthProvider()
@@ -34,6 +55,60 @@
'volume',
'regionOne')
+ def _test_retype_volume(self, bytes_body=False):
+ kwargs = {
+ "new_type": "dedup-tier-replication",
+ "migration_policy": "never"
+ }
+
+ self.check_service_client_function(
+ self.client.retype_volume,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ {},
+ to_utf=bytes_body,
+ status=202,
+ volume_id="a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8",
+ **kwargs
+ )
+
+ def _test_force_detach_volume(self, bytes_body=False):
+ kwargs = {
+ 'attachment_id': '6980e295-920f-412e-b189-05c50d605acd',
+ 'connector': {
+ 'initiator': 'iqn.2017-04.org.fake:01'
+ }
+ }
+
+ self.check_service_client_function(
+ self.client.force_detach_volume,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ {},
+ to_utf=bytes_body,
+ status=202,
+ volume_id="a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8",
+ **kwargs
+ )
+
+ def _test_show_volume_metadata_item(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_volume_metadata_item,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_VOLUME_METADATA_ITEM,
+ to_utf=bytes_body,
+ volume_id="a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8",
+ id="key1")
+
+ def _test_show_volume_image_metadata(self, bytes_body=False):
+ fake_volume_id = "a3be971b-8de5-4bdf-bdb8-3d8eb0fb69f8"
+ self.check_service_client_function(
+ self.client.show_volume_image_metadata,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_VOLUME_IMAGE_METADATA,
+ to_utf=bytes_body,
+ mock_args=['volumes/%s/action' % fake_volume_id,
+ json.dumps({"os-show_image_metadata": {}})],
+ volume_id=fake_volume_id)
+
def _test_show_volume_summary(self, bytes_body=False):
self.check_service_client_function(
self.client.show_volume_summary,
@@ -41,6 +116,30 @@
self.FAKE_VOLUME_SUMMARY,
bytes_body)
+ def test_force_detach_volume_with_str_body(self):
+ self._test_force_detach_volume()
+
+ def test_force_detach_volume_with_bytes_body(self):
+ self._test_force_detach_volume(bytes_body=True)
+
+ def test_show_volume_metadata_item_with_str_body(self):
+ self._test_show_volume_metadata_item()
+
+ def test_show_volume_metadata_item_with_bytes_body(self):
+ self._test_show_volume_metadata_item(bytes_body=True)
+
+ def test_show_volume_image_metadata_with_str_body(self):
+ self._test_show_volume_image_metadata()
+
+ def test_show_volume_image_metadata_with_bytes_body(self):
+ self._test_show_volume_image_metadata(bytes_body=True)
+
+ def test_retype_volume_with_str_body(self):
+ self._test_retype_volume()
+
+ def test_retype_volume_with_bytes_body(self):
+ self._test_retype_volume(bytes_body=True)
+
def test_show_volume_summary_with_str_body(self):
self._test_show_volume_summary()
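Outside of these mocked unit tests, the migrated methods are called on a real v3 volumes client roughly as sketched below; the identifiers and type name are placeholders and the client setup is assumed:

# Hedged sketch; volume_id, attachment_id, initiator_iqn and the volume
# type name are placeholders, and client is a VolumesClient built with
# real credentials rather than the FakeAuthProvider used above.
client.retype_volume(volume_id,
                     new_type='dedup-tier-replication',
                     migration_policy='never')
client.force_detach_volume(volume_id,
                           attachment_id=attachment_id,
                           connector={'initiator': initiator_iqn})
meta_item = client.show_volume_metadata_item(volume_id, 'key1')['meta']
image_meta = client.show_volume_image_metadata(volume_id)['metadata']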
diff --git a/tempest/tests/lib/test_decorators.py b/tempest/tests/lib/test_decorators.py
index ed0eea3..0b1a599 100644
--- a/tempest/tests/lib/test_decorators.py
+++ b/tempest/tests/lib/test_decorators.py
@@ -19,6 +19,7 @@
from tempest.lib import base as test
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
from tempest.tests import base
@@ -62,21 +63,40 @@
t = TestFoo('test_bar')
if expected_to_skip:
- self.assertRaises(testtools.TestCase.skipException, t.test_bar)
+ e = self.assertRaises(testtools.TestCase.skipException, t.test_bar)
+ bug = decorator_args['bug']
+ bug_type = decorator_args.get('bug_type', 'launchpad')
+ self.assertRegex(
+ str(e),
+ r'Skipped until bug\: %s.*' % decorators._get_bug_url(
+ bug, bug_type)
+ )
else:
# assert that test_bar returned 0
self.assertEqual(TestFoo('test_bar').test_bar(), 0)
- def test_skip_because_bug(self):
+ def test_skip_because_launchpad_bug(self):
self._test_skip_because_helper(bug='12345')
- def test_skip_because_bug_and_condition_true(self):
+ def test_skip_because_launchpad_bug_and_condition_true(self):
self._test_skip_because_helper(bug='12348', condition=True)
- def test_skip_because_bug_and_condition_false(self):
+ def test_skip_because_launchpad_bug_and_condition_false(self):
self._test_skip_because_helper(expected_to_skip=False,
bug='12349', condition=False)
+ def test_skip_because_storyboard_bug(self):
+ self._test_skip_because_helper(bug='1992', bug_type='storyboard')
+
+ def test_skip_because_storyboard_bug_and_condition_true(self):
+ self._test_skip_because_helper(bug='1992', bug_type='storyboard',
+ condition=True)
+
+ def test_skip_because_storyboard_bug_and_condition_false(self):
+ self._test_skip_because_helper(expected_to_skip=False,
+ bug='1992', bug_type='storyboard',
+ condition=False)
+
def test_skip_because_bug_without_bug_never_skips(self):
"""Never skip without a bug parameter."""
self._test_skip_because_helper(expected_to_skip=False,
@@ -84,8 +104,8 @@
self._test_skip_because_helper(expected_to_skip=False)
def test_skip_because_invalid_bug_number(self):
- """Raise ValueError if with an invalid bug number"""
- self.assertRaises(ValueError, self._test_skip_because_helper,
+ """Raise InvalidParam if with an invalid bug number"""
+ self.assertRaises(lib_exc.InvalidParam, self._test_skip_because_helper,
bug='critical_bug')
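For context, a hedged sketch of how the extended skip_because decorator is applied in a test class, matching the parameters exercised above (the bug and story numbers are illustrative):

# Illustrative usage only; the bug/story numbers are placeholders.
import testtools

from tempest.lib import decorators


class TestExample(testtools.TestCase):

    @decorators.skip_because(bug='12345')  # launchpad is the default type
    def test_blocked_by_launchpad_bug(self):
        pass

    @decorators.skip_because(bug='1992', bug_type='storyboard')
    def test_blocked_by_storyboard_story(self):
        pass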
@@ -126,6 +146,13 @@
class TestRelatedBugDecorator(base.TestCase):
+
+ def _get_my_exception(self):
+ class MyException(Exception):
+ def __init__(self, status_code):
+ self.status_code = status_code
+ return MyException
+
def test_relatedbug_when_no_exception(self):
f = mock.Mock()
sentinel = object()
@@ -137,10 +164,9 @@
test_foo(sentinel)
f.assert_called_once_with(sentinel)
- def test_relatedbug_when_exception(self):
- class MyException(Exception):
- def __init__(self, status_code):
- self.status_code = status_code
+ def test_relatedbug_when_exception_with_launchpad_bug_type(self):
+ """Validate related_bug decorator with bug_type == 'launchpad'"""
+ MyException = self._get_my_exception()
def f(self):
raise MyException(status_code=500)
@@ -152,4 +178,53 @@
with mock.patch.object(decorators.LOG, 'error') as m_error:
self.assertRaises(MyException, test_foo, object())
- m_error.assert_called_once_with(mock.ANY, '1234', '1234')
+ m_error.assert_called_once_with(
+ mock.ANY, '1234', 'https://launchpad.net/bugs/1234')
+
+ def test_relatedbug_when_exception_with_storyboard_bug_type(self):
+ """Validate related_bug decorator with bug_type == 'storyboard'"""
+ MyException = self._get_my_exception()
+
+ def f(self):
+ raise MyException(status_code=500)
+
+ @decorators.related_bug(bug="1234", status_code=500,
+ bug_type='storyboard')
+ def test_foo(self):
+ f(self)
+
+ with mock.patch.object(decorators.LOG, 'error') as m_error:
+ self.assertRaises(MyException, test_foo, object())
+
+ m_error.assert_called_once_with(
+ mock.ANY, '1234', 'https://storyboard.openstack.org/#!/story/1234')
+
+ def test_relatedbug_when_exception_invalid_bug_type(self):
+ """Check related_bug decorator raises exc when bug_type is not valid"""
+ MyException = self._get_my_exception()
+
+ def f(self):
+ raise MyException(status_code=500)
+
+ @decorators.related_bug(bug="1234", status_code=500,
+ bug_type=mock.sentinel.invalid)
+ def test_foo(self):
+ f(self)
+
+ with mock.patch.object(decorators.LOG, 'error'):
+ self.assertRaises(lib_exc.InvalidParam, test_foo, object())
+
+ def test_relatedbug_when_exception_invalid_bug_number(self):
+ """Check related_bug decorator raises exc when bug_number != digit"""
+ MyException = self._get_my_exception()
+
+ def f(self):
+ raise MyException(status_code=500)
+
+ @decorators.related_bug(bug="not a digit", status_code=500,
+ bug_type='launchpad')
+ def test_foo(self):
+ f(self)
+
+ with mock.patch.object(decorators.LOG, 'error'):
+ self.assertRaises(lib_exc.InvalidParam, test_foo, object())
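Finally, a hedged sketch of related_bug usage consistent with the new assertions: when the wrapped test raises an exception whose status_code matches, the decorator logs the corresponding bug URL and re-raises rather than swallowing the error. The helper method and story number below are hypothetical:

# Sketch only; _call_flaky_api is a hypothetical helper and the story
# number is a placeholder.
from tempest.lib import decorators


@decorators.related_bug(bug='1234', status_code=500, bug_type='storyboard')
def test_resize_server(self):
    # If this raises an exception carrying status_code == 500, the
    # decorator logs https://storyboard.openstack.org/#!/story/1234
    # and the exception still propagates to the test runner.
    self._call_flaky_api()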