Merge "Revert "Skip unstable v6 scenario tests""
diff --git a/.coveragerc b/.coveragerc
index 51482d3..449e62c 100644
--- a/.coveragerc
+++ b/.coveragerc
@@ -1,4 +1,4 @@
[run]
branch = True
source = tempest
-omit = tempest/tests/*,tempest/scenario/test_*.py,tempest/api_schema/*,tempest/api/*
+omit = tempest/tests/*,tempest/scenario/test_*.py,tempest/api/*
diff --git a/HACKING.rst b/HACKING.rst
index 432db7d..a209b3f 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -166,8 +166,33 @@
suite, Tempest is not suitable for handling all negative test cases, as
the wide variety and complexity of negative tests can lead to long test
runs and knowledge of internal implementation details. The bulk of
-negative testing should be handled with project function tests. The
-exception to this rule is API tests used for interoperability testing.
+negative testing should be handled with project functional tests.
+All negative tests should be based on the `API-WG guideline`_ . Such negative
+tests can block any change from an accurate failure code to an invalid one.
+
+.. _API-WG guideline: https://github.com/openstack/api-wg/blob/master/guidelines/http.rst#failure-code-clarifications
+
+If you face a gray area which is not clarified by the above guideline, propose
+a new guideline to the API-WG. With a proposal to the API-WG we will be able to
+build a consensus across all OpenStack projects and improve the quality and
+consistency of all the APIs.
+
+In addition, we have some guidelines for additional negative tests.
+
+- About the BadRequest (HTTP 400) case: we can add a single negative test of
+  BadRequest for each resource and method (POST, PUT), as in the sketch below.
+  Please don't implement more negative tests on the same combination of
+  resource and method even if the API request parameters are different from
+  the existing test.
+- About the NotFound (HTTP 404) case: we can add a single negative test of
+  NotFound for each resource and method (GET, PUT, DELETE, HEAD).
+  Please don't implement more negative tests on the same combination
+  of resource and method.
+
+The above guidelines don't cover all cases and we will grow these guidelines
+organically over time. Patches outside of the above guidelines are left up to
+the reviewers' discretion, and if conflicts arise between reviewers, we will
+expand the guidelines based on our discussion and experience.
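To illustrate the guideline above, a minimal sketch of a single BadRequest
negative test could look like the following. The ``widgets`` resource, its
client, the base class and the idempotent id are hypothetical; only the shape
of the test follows the guideline.

.. code-block:: python

    from tempest.api.widgets import base  # hypothetical plugin/test package
    from tempest.lib import exceptions as lib_exc
    from tempest import test


    class WidgetsNegativeTestJSON(base.BaseWidgetsTest):
        """One BadRequest negative test per resource and method (here POST)."""

        @test.attr(type=['negative'])
        @test.idempotent_id('11111111-2222-3333-4444-555555555555')
        def test_create_widget_with_empty_name(self):
            # A single BadRequest (HTTP 400) check for POST on this resource;
            # further invalid-parameter variations should not be added.
            self.assertRaises(lib_exc.BadRequest,
                              self.widgets_client.create_widget,
                              name='')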
Test skips because of Known Bugs
--------------------------------
@@ -215,29 +240,6 @@
can be used to perform this. See AggregatesAdminTest in
tempest.api.compute.admin for an example of using locking.
-Stress Tests in Tempest
------------------------
-Any tempest test case can be flagged as a stress test. With this flag it will
-be automatically discovery and used in the stress test runs. The stress test
-framework itself is a facility to spawn and control worker processes in order
-to find race conditions (see ``tempest/stress/`` for more information). Please
-note that these stress tests can't be used for benchmarking purposes since they
-don't measure any performance characteristics.
-
-Example::
-
- @stresstest(class_setup_per='process')
- def test_this_and_that(self):
- ...
-
-This will flag the test ``test_this_and_that`` as a stress test. The parameter
-``class_setup_per`` gives control when the setUpClass function should be called.
-
-Good candidates for stress tests are:
-
-- Scenario tests
-- API tests that have a wide focus
-
Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are made
diff --git a/README.rst b/README.rst
index 53c7de5..fc4de5e 100644
--- a/README.rst
+++ b/README.rst
@@ -92,18 +92,18 @@
be done using the :ref:`tempest_run` command. This can be done by either
running::
- $ tempest run
+ $ tempest run
from the Tempest workspace directory. Or you can use the ``--workspace``
argument to run in the workspace you created regardless of your current
working directory. For example::
- $ tempest run --workspace cloud-01
+ $ tempest run --workspace cloud-01
There is also the option to use testr directly, or any `testr`_ based test
runner, like `ostestr`_. For example, from the workspace dir run::
- $ ostestr --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))'
+ $ ostestr --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))'
will run the same set of tests as the default gate jobs.
@@ -124,6 +124,9 @@
Release Versioning
------------------
+`Tempest Release Notes <http://docs.openstack.org/releasenotes/tempest>`_
+shows what changes have been released in each version.
+
Tempest's released versions are broken into 2 sets of information. Depending on
how you intend to consume tempest you might need
@@ -158,9 +161,9 @@
of the configuration.
To generate a new sample tempest.conf file, run the following
-command from the top level of the Tempest directory:
+command from the top level of the Tempest directory::
- tox -egenconfig
+ $ tox -egenconfig
The most important pieces that are needed are the user ids, openstack
endpoint, and basic flavors and images needed to run tests.
@@ -178,9 +181,8 @@
is OS_TEST_PATH=./tempest/test_discover which will only run test discover on the
Tempest suite.
-Alternatively, you can use the run_tests.sh script which will create a venv and
-run the unit tests. There are also the py27 and py34 tox jobs which will run
-the unit tests with the corresponding version of python.
+Alternatively, there are the py27 and py34 tox jobs which will run the unit
+tests with the corresponding version of python.
Python 2.6
----------
@@ -194,18 +196,18 @@
from a remote system running python 2.7. (or deploy a cloud guest in your cloud
that has python 2.7)
-Python 3.4
+Python 3.x
----------
Starting during the Liberty release development cycle work began on enabling
Tempest to run under both Python 2.7 and Python 3.4. Tempest strives to fully
-support running with Python 3.4. A gating unit test job was added to also run
-Tempest's unit tests under Python 3.4. This means that the Tempest code at
-least imports under Python 3.4 and things that have unit test coverage will
-work on Python 3.4. However, because large parts of Tempest are self-verifying
-there might be uncaught issues running on Python 3.4. So until there is a gating
-job which does a full Tempest run using Python 3.4 there isn't any guarantee
-that running Tempest under Python 3.4 is bug free.
+support running with Python 3.4 and newer. A gating unit test job was added to
+also run Tempest's unit tests under Python 3. This means that the Tempest
+code at least imports under Python 3.4 and things that have unit test coverage
+will work on Python 3.4. However, because large parts of Tempest are
+self-verifying there might be uncaught issues running on Python 3. So until
+there is a gating job which does a full Tempest run using Python 3 there
+isn't any guarantee that running Tempest under Python 3 is bug free.
Legacy run method
-----------------
@@ -224,7 +226,7 @@
$ cd $TEMPEST_ROOT_DIR
$ oslo-config-generator --config-file \
- etc/config-generator.tempest.conf \
+ tempest/cmd/config-generator.tempest.conf \
--output-file etc/tempest.conf
After that, open up the ``etc/tempest.conf`` file and edit the
@@ -255,11 +257,11 @@
and run the tests or use tox to do the same. Tox also contains several existing
job configurations. For example::
- $ tox -efull
+ $ tox -efull
which will run the same set of tests as the OpenStack gate. (it's exactly how
the gate invokes Tempest) Or::
- $ tox -esmoke
+ $ tox -esmoke
to run the tests tagged as smoke.
diff --git a/REVIEWING.rst b/REVIEWING.rst
index 676a217..cfe7f4c 100644
--- a/REVIEWING.rst
+++ b/REVIEWING.rst
@@ -13,6 +13,13 @@
it. Tests which aren't executed either because of configuration or skips should
not be accepted.
+If a new test is added that depends on a new config option (like a feature
+flag), the commit message must reference a change in DevStack or DevStack-Gate
+that enables the execution of this newly introduced test. This reference could
+either be a `Cross-Repository Dependency <http://docs.openstack.org/infra/
+manual/developers.html#cross-repository-dependencies>`_ or a simple link
+to a Gerrit review.
+
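For instance, the commit message of such a test change might end with a
cross-repository dependency footer (the Change-Id below is a hypothetical
placeholder)::

    Depends-On: I0123456789abcdef0123456789abcdef01234567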
Unit Tests
----------
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 127613d..2edaddb 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -14,6 +14,7 @@
import sys
import os
import subprocess
+import warnings
# Build the plugin registry
def build_plugin_registry(app):
@@ -140,9 +141,13 @@
# using the given strftime format.
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
-html_last_updated_fmt = subprocess.Popen(git_cmd,
- stdout=subprocess.PIPE).\
- communicate()[0]
+try:
+ html_last_updated_fmt = subprocess.Popen(git_cmd,
+ stdout=subprocess.PIPE).\
+ communicate()[0]
+except Exception:
+ warnings.warn('Cannot get last updated time from git repository. '
+ 'Not setting "html_last_updated_fmt".')
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index e4b104f..18269bf 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -61,10 +61,9 @@
Credential Provider Mechanisms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Tempest currently also has three different internal methods for providing
-authentication to tests: dynamic credentials, locking test accounts, and
-non-locking test accounts. Depending on which one is in use the configuration
-of Tempest is slightly different.
+Tempest currently has two different internal methods for providing authentication
+to tests: dynamic credentials and pre-provisioned credentials.
+Depending on which one is in use the configuration of Tempest is slightly different.
Dynamic Credentials
"""""""""""""""""""
@@ -96,7 +95,7 @@
accounts will be assigned a role on domain configured in
``default_credentials_domain_name``. This will make the accounts provisioned
usable in a cloud where domain scoped tokens are required by keystone for
-admin operations. Note that the the initial pre-provision admin accounts,
+admin operations. Note that the initial pre-provision admin accounts,
configured in tempest.conf, must have a role on the same domain as well, for
Dynamic Credentials to work.
@@ -135,7 +134,10 @@
It is worth pointing out that each set of credentials in the accounts.yaml
should have a unique project. This is required to provide proper isolation
to the tests using the credentials, and failure to do this will likely cause
-unexpected failures in some tests.
+unexpected failures in some tests. Also, ensure that the projects and users
+used do not have any pre-existing resources. Tempest assumes all tenants it
+uses are empty and may sporadically fail if there are unexpected resources
+present.
When the keystone in the target cloud requires domain scoped tokens to
perform admin actions, all pre-provisioned admin users must have a role
@@ -149,7 +151,7 @@
``admin_domain_scope`` as ``default_credentials_domain_name`` are configured
properly in tempest.conf.
-Pre-Provisioned Credentials are also know as accounts.yaml or accounts file.
+Pre-Provisioned Credentials are also known as accounts.yaml or accounts file.
Compute
-------
diff --git a/doc/source/field_guide/stress.rst b/doc/source/field_guide/stress.rst
deleted file mode 120000
index d39d0f8..0000000
--- a/doc/source/field_guide/stress.rst
+++ /dev/null
@@ -1 +0,0 @@
-../../../tempest/stress/README.rst
\ No newline at end of file
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f1ede06..896cd98 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -24,7 +24,6 @@
field_guide/index
field_guide/api
field_guide/scenario
- field_guide/stress
field_guide/unit_tests
=========
@@ -50,7 +49,6 @@
account_generator
cleanup
- javelin
subunit_describe_calls
workspace
run
diff --git a/doc/source/javelin.rst b/doc/source/javelin.rst
deleted file mode 100644
index 01090ca..0000000
--- a/doc/source/javelin.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-----------------------------------------------------------
-Javelin2 - How to check that resources survived an upgrade
-----------------------------------------------------------
-
-.. automodule:: tempest.cmd.javelin
diff --git a/doc/source/library.rst b/doc/source/library.rst
index 6a2fb83..29248d1 100644
--- a/doc/source/library.rst
+++ b/doc/source/library.rst
@@ -67,3 +67,4 @@
library/utils
library/api_microversion_testing
library/auth
+ library/clients
diff --git a/doc/source/library/clients.rst b/doc/source/library/clients.rst
new file mode 100644
index 0000000..086cfc9
--- /dev/null
+++ b/doc/source/library/clients.rst
@@ -0,0 +1,24 @@
+.. _clients:
+
+Service Clients Usage
+=====================
+
+Tests make requests against APIs using service clients. Service clients are
+specializations of the ``RestClient`` class. The service clients that cover the
+APIs exposed by a service should be grouped in a service clients module.
+A service clients module is a Python module where all service clients are
+defined. If major API versions are available, submodules should be defined,
+one for each version.
+
+The ``ClientsFactory`` class helps initialize all clients of a specific
+service client module from a set of shared parameters.
+
+The ``ServiceClients`` class provides a convenient way to get access to all
+available service clients initialized with a provided set of credentials.
+
+------------------
+The clients module
+------------------
+
+.. automodule:: tempest.lib.services.clients
+ :members:
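A usage sketch is shown below; ``my_creds`` and ``identity_uri`` are assumed to
be provided by the test's credential provider and configuration, and the
``compute`` attribute name assumes the compute service client module is
registered under that name.

.. code-block:: python

    from tempest.lib.services import clients

    # my_creds is an instance of tempest.lib.auth.Credentials and
    # identity_uri points at the keystone endpoint of the cloud under test.
    my_clients = clients.ServiceClients(my_creds, identity_uri)

    # Each registered service clients module is exposed as an attribute of
    # the ServiceClients instance, e.g. the compute clients:
    servers_client = my_clients.compute.ServersClient()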
diff --git a/doc/source/microversion_testing.rst b/doc/source/microversion_testing.rst
index 17059e4..dc73ef2 100644
--- a/doc/source/microversion_testing.rst
+++ b/doc/source/microversion_testing.rst
@@ -24,7 +24,9 @@
* max_microversion
Those should be defined under respective section of each service.
- For example::
+ For example:
+
+ .. code-block:: ini
[compute]
min_microversion = None
@@ -42,7 +44,9 @@
api_version_utils.check_skip_with_microversion function can be used
to automatically skip the tests which do not fall under configured
Microversion range.
-For example::
+For example:
+
+.. code-block:: python
class BaseTestCase1(api_version_utils.BaseMicroversionTest):
@@ -65,7 +69,9 @@
to send with API request.
api_version_utils.select_request_microversion function can be used
to select the appropriate Microversion which will be used for API request.
-For example::
+For example:
+
+.. code-block:: python
@classmethod
def resource_setup(cls):
@@ -87,7 +93,9 @@
Also Microversion header name needs to be defined on service clients which
should be constant because it is not supposed to be changed by project
as per API contract.
-For example::
+For example:
+
+.. code-block:: python
COMPUTE_MICROVERSION = None
@@ -96,7 +104,9 @@
Now test class can set the selected Microversion on required service clients
using fixture which can take care of resetting the same once tests is completed.
-For example::
+For example:
+
+.. code-block:: python
def setUp(self):
super(BaseTestCase1, self).setUp()
@@ -105,7 +115,9 @@
Service clients needs to add set Microversion in API request header which
can be done by overriding the get_headers() method of rest_client.
-For example::
+For example:
+
+.. code-block:: python
COMPUTE_MICROVERSION = None
@@ -136,7 +148,9 @@
For example:
-Below test is applicable for Microversion from 2.2 till 2.9::
+Below test is applicable for Microversion from 2.2 till 2.9:
+
+.. code-block:: python
class BaseTestCase1(api_version_utils.BaseMicroversionTest,
tempest.test.BaseTestCase):
@@ -150,7 +164,9 @@
[..]
-Below test is applicable for Microversion from 2.10 till latest::
+Below test is applicable for Microversion from 2.10 till latest:
+
+.. code-block:: python
class Test2(BaseTestCase1):
min_microversion = '2.10'
@@ -159,8 +175,6 @@
[..]
-
-
Notes about Compute Microversion Tests
""""""""""""""""""""""""""""""""""""""
@@ -217,3 +231,11 @@
* `2.20`_
.. _2.20: http://docs.openstack.org/developer/nova/api_microversion_history.html#id18
+
+ * `2.25`_
+
+ .. _2.25: http://docs.openstack.org/developer/nova/api_microversion_history.html#maximum-in-mitaka
+
+ * `2.37`_
+
+ .. _2.37: http://docs.openstack.org/developer/nova/api_microversion_history.html#id34
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
index 9640469..6b30825 100644
--- a/doc/source/plugin.rst
+++ b/doc/source/plugin.rst
@@ -61,7 +61,9 @@
to the "tempest.test_plugins" namespace.
If you are using pbr this is fairly straightforward, in the setup.cfg just add
-something like the following::
+something like the following:
+
+.. code-block:: ini
[entry_points]
tempest.test_plugins =
@@ -105,14 +107,17 @@
your plugin you need to create a plugin class which tempest will load and call
to get information when it needs. To simplify creating this tempest provides an
abstract class that should be used as the parent for your plugin. To use this
-you would do something like the following::
+you would do something like the following:
+
+.. code-block:: python
from tempest.test_discover import plugins
class MyPlugin(plugins.TempestPlugin):
-Then you need to ensure you locally define all of the methods in the abstract
-class, you can refer to the api doc below for a reference of what that entails.
+Then you need to ensure you locally define all of the mandatory methods in the
+abstract class; you can refer to the API doc below for a reference of what that
+entails.
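For instance, a rough sketch of a plugin class that locally defines the
mandatory methods could look like the following; the package name and test
directory are illustrative only.

.. code-block:: python

    import os

    from tempest.test_discover import plugins


    class MyPlugin(plugins.TempestPlugin):

        def load_tests(self):
            # Tell tempest where the plugin's tests live.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "plugin_tempest_tests/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            # Register any plugin specific configuration options on conf.
            pass

        def get_opt_lists(self):
            # Return a list of (group name, options) tuples used for sample
            # config generation; empty here for brevity.
            return []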
Abstract Plugin Class
---------------------
@@ -164,6 +169,152 @@
CONF object from tempest. This enables the plugin to add options to both
existing sections and also create new configuration sections for new options.
+Service Clients
+---------------
+
+If a plugin defines a service client, it is beneficial for it to implement the
+``get_service_clients`` method in the plugin class. All service clients which
+are exposed via this interface will be automatically configured and be
+available in any instance of the service clients class, defined in
+``tempest.lib.services.clients.ServiceClients``. In case multiple plugins are
+installed, all service clients from all plugins will be registered, making it
+easy to write tests which rely on multiple APIs whose service clients are in
+different plugins.
+
+Example implementation of ``get_service_clients``:
+
+.. code-block:: python
+
+ def get_service_clients(self):
+ # Example implementation with two service clients
+ my_service1_config = config.service_client_config('my_service')
+ params_my_service1 = {
+ 'name': 'my_service_v1',
+ 'service_version': 'my_service.v1',
+ 'module_path': 'plugin_tempest_tests.services.my_service.v1',
+ 'client_names': ['API1Client', 'API2Client'],
+ }
+        params_my_service1.update(my_service1_config)
+ my_service2_config = config.service_client_config('my_service')
+ params_my_service2 = {
+ 'name': 'my_service_v2',
+ 'service_version': 'my_service.v2',
+ 'module_path': 'plugin_tempest_tests.services.my_service.v2',
+ 'client_names': ['API1Client', 'API2Client'],
+ }
+ params_my_service2.update(my_service2_config)
+ return [params_my_service1, params_my_service2]
+
+Parameters:
+
+* **name**: Name of the attribute used to access the ``ClientsFactory`` from
+ the ``ServiceClients`` instance. See example below.
+* **service_version**: Tempest enforces a single implementation for each
+ service client. Available service clients are held in a ``ClientsRegistry``
+ singleton, and registered with ``service_version``, which means that
+ ``service_version`` must be unique and it should represent the service API
+ and version implemented by the service client.
+* **module_path**: Path to the service client module, relative to the root of
+  the plugin.
+* **client_names**: Name of the classes that implement service clients in the
+ service clients module.
+
+Example usage of the service clients in tests:
+
+.. code-block:: python
+
+ # my_creds is instance of tempest.lib.auth.Credentials
+ # identity_uri is v2 or v3 depending on the configuration
+ from tempest.lib.services import clients
+
+ my_clients = clients.ServiceClients(my_creds, identity_uri)
+ my_service1_api1_client = my_clients.my_service_v1.API1Client()
+ my_service2_api1_client = my_clients.my_service_v2.API1Client(my_args='any')
+
+Automatic configuration and registration of service clients imposes some extra
+constraints on the structure of the configuration options exposed by the
+plugin.
+
+First, ``service_version`` should be in the format `service_config[.version]`.
+The `.version` part is optional, and should only be used if there are multiple
+versions of the same API available. The `service_config` must match the name of
+a configuration options group defined by the plugin. Different versions of one
+API must share the same configuration group.
+
+Second, the configuration options group `service_config` must contain the
+following options (see the sketch after this list):
+
+* `catalog_type`: corresponds to `service` in the catalog
+* `endpoint_type`
+
+The following options will be honoured if defined, but they are not mandatory,
+as they do not necessarily apply to all service clients.
+
+* `region`: defaults to identity.region
+* `build_timeout`: defaults to compute.build_timeout
+* `build_interval`: defaults to compute.build_interval
+
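A minimal sketch of such a configuration group, as a plugin might define it
with oslo.config, is shown below; the group name and option values are
illustrative only.

.. code-block:: python

    from oslo_config import cfg

    my_service_group = cfg.OptGroup(name='my_service',
                                    title='My service options')

    MyServiceGroup = [
        cfg.StrOpt('catalog_type',
                   default='my_service',
                   help='Catalog type of the my_service service.'),
        cfg.StrOpt('endpoint_type',
                   default='publicURL',
                   choices=['public', 'admin', 'internal',
                            'publicURL', 'adminURL', 'internalURL'],
                   help='The endpoint type to use for my_service.'),
        cfg.StrOpt('region',
                   default='',
                   help='The region name to use. If empty, the value of '
                        'identity.region is used instead.'),
    ]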
+Third, the service client classes should inherit from ``RestClient``, should
+accept generic keyword arguments, and should pass those arguments to the
+``__init__`` method of ``RestClient``. Extra arguments can be added. For
+instance:
+
+.. code-block:: python
+
+ class MyAPIClient(rest_client.RestClient):
+
+ def __init__(self, auth_provider, service, region,
+ my_arg, my_arg2=True, **kwargs):
+ super(MyAPIClient, self).__init__(
+ auth_provider, service, region, **kwargs)
+            self.my_arg = my_arg
+            self.my_arg2 = my_arg2
+
+Finally, the service client should be structured in a Python module, so that all
+service client classes are importable from it. Each major API version should
+have its own module.
+
+The following folder and module structure is recommended for a single major
+API version::
+
+ plugin_dir/
+ services/
+ __init__.py
+ client_api_1.py
+ client_api_2.py
+
+The content of the __init__.py module should be:
+
+.. code-block:: python
+
+    from client_api_1 import API1Client
+    from client_api_2 import API2Client
+
+    __all__ = ['API1Client', 'API2Client']
+
+The following folder and module structure is recommended for multiple major
+API versions::
+
+ plugin_dir/
+ services/
+ v1/
+ __init__.py
+ client_api_1.py
+ client_api_2.py
+ v2/
+ __init__.py
+ client_api_1.py
+ client_api_2.py
+
+The content of each __init__.py module under vN should be:
+
+.. code-block:: python
+
+    from client_api_1 import API1Client
+    from client_api_2 import API2Client
+
+    __all__ = ['API1Client', 'API2Client']
+
Using Plugins
=============
diff --git a/etc/javelin-resources.yaml.sample b/etc/javelin-resources.yaml.sample
deleted file mode 100644
index 1565686..0000000
--- a/etc/javelin-resources.yaml.sample
+++ /dev/null
@@ -1,63 +0,0 @@
-tenants:
- - javelin
- - discuss
-
-users:
- - name: javelin
- pass: gungnir
- tenant: javelin
- - name: javelin2
- pass: gungnir2
- tenant: discuss
-
-secgroups:
- - name: secgroup1
- owner: javelin
- description: SecurityGroup1
- rules:
- - 'icmp -1 -1 0.0.0.0/0'
- - 'tcp 22 22 0.0.0.0/0'
- - name: secgroup2
- owner: javelin2
- description: SecurityGroup2
- rules:
- - 'tcp 80 80 0.0.0.0/0'
-
-images:
- - name: cirros1
- owner: javelin
- imgdir: images
- file: cirros.img
- container_format: bare
- disk_format: qcow2
- - name: cirros2
- owner: javelin2
- imgdir: files/images/cirros-0.3.2-x86_64-uec
- file: cirros-0.3.2-x86_64-blank.img
- container_format: ami
- disk_format: ami
- aki: cirros-0.3.2-x86_64-vmlinuz
- ari: cirros-0.3.2-x86_64-initrd
-
-networks:
- - name: network1
- owner: javelin
- - name: network2
- owner: javelin2
-
-subnets:
- - name: net1-subnet1
- range: 10.1.0.0/24
- network: network1
- owner: javelin
- - name: net2-subnet2
- range: 192.168.1.0/24
- network: network2
- owner: javelin2
-
-objects:
- - container: container1
- name: object1
- owner: javelin
- file: /etc/hosts
- swift_role: Member
diff --git a/etc/logging.conf.sample b/etc/logging.conf.sample
index 36cd324..c131b07 100644
--- a/etc/logging.conf.sample
+++ b/etc/logging.conf.sample
@@ -1,5 +1,5 @@
[loggers]
-keys=root,tempest_stress
+keys=root
[handlers]
keys=file,devel,syslog
@@ -11,11 +11,6 @@
level=DEBUG
handlers=file
-[logger_tempest_stress]
-level=DEBUG
-handlers=file,devel
-qualname=tempest.stress
-
[handler_file]
class=FileHandler
level=DEBUG
diff --git a/releasenotes/notes/Tempest-library-interface-0eb680b810139a50.yaml b/releasenotes/notes/10.0.0-Tempest-library-interface-0eb680b810139a50.yaml
similarity index 100%
rename from releasenotes/notes/Tempest-library-interface-0eb680b810139a50.yaml
rename to releasenotes/notes/10.0.0-Tempest-library-interface-0eb680b810139a50.yaml
diff --git a/releasenotes/notes/start-using-reno-ed9518126fd0e1a3.yaml b/releasenotes/notes/10.0.0-start-using-reno-ed9518126fd0e1a3.yaml
similarity index 100%
rename from releasenotes/notes/start-using-reno-ed9518126fd0e1a3.yaml
rename to releasenotes/notes/10.0.0-start-using-reno-ed9518126fd0e1a3.yaml
diff --git a/releasenotes/notes/api-microversion-testing-support-2ceddd2255670932.yaml b/releasenotes/notes/11.0.0-api-microversion-testing-support-2ceddd2255670932.yaml
similarity index 100%
rename from releasenotes/notes/api-microversion-testing-support-2ceddd2255670932.yaml
rename to releasenotes/notes/11.0.0-api-microversion-testing-support-2ceddd2255670932.yaml
diff --git a/releasenotes/notes/compute-microversion-support-e0b23f960f894b9b.yaml b/releasenotes/notes/11.0.0-compute-microversion-support-e0b23f960f894b9b.yaml
similarity index 100%
rename from releasenotes/notes/compute-microversion-support-e0b23f960f894b9b.yaml
rename to releasenotes/notes/11.0.0-compute-microversion-support-e0b23f960f894b9b.yaml
diff --git a/releasenotes/notes/add-network-versions-client-d90e8334e1443f5c.yaml b/releasenotes/notes/12.1.0-add-network-versions-client-d90e8334e1443f5c.yaml
similarity index 100%
rename from releasenotes/notes/add-network-versions-client-d90e8334e1443f5c.yaml
rename to releasenotes/notes/12.1.0-add-network-versions-client-d90e8334e1443f5c.yaml
diff --git a/releasenotes/notes/add-scope-to-auth-b5a82493ea89f41e.yaml b/releasenotes/notes/12.1.0-add-scope-to-auth-b5a82493ea89f41e.yaml
similarity index 100%
rename from releasenotes/notes/add-scope-to-auth-b5a82493ea89f41e.yaml
rename to releasenotes/notes/12.1.0-add-scope-to-auth-b5a82493ea89f41e.yaml
diff --git a/releasenotes/notes/add-tempest-run-3d0aaf69c2ca4115.yaml b/releasenotes/notes/12.1.0-add-tempest-run-3d0aaf69c2ca4115.yaml
similarity index 100%
rename from releasenotes/notes/add-tempest-run-3d0aaf69c2ca4115.yaml
rename to releasenotes/notes/12.1.0-add-tempest-run-3d0aaf69c2ca4115.yaml
diff --git a/releasenotes/notes/add-tempest-workspaces-228a2ba4690b5589.yaml b/releasenotes/notes/12.1.0-add-tempest-workspaces-228a2ba4690b5589.yaml
similarity index 100%
rename from releasenotes/notes/add-tempest-workspaces-228a2ba4690b5589.yaml
rename to releasenotes/notes/12.1.0-add-tempest-workspaces-228a2ba4690b5589.yaml
diff --git a/releasenotes/notes/12.1.0-add_subunit_describe_calls-5498a37e6cd66c4b.yaml b/releasenotes/notes/12.1.0-add_subunit_describe_calls-5498a37e6cd66c4b.yaml
new file mode 100644
index 0000000..092014e
--- /dev/null
+++ b/releasenotes/notes/12.1.0-add_subunit_describe_calls-5498a37e6cd66c4b.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+    Adds subunit-describe-calls, a parser for subunit streams to determine what
+    REST API calls are made inside of a test and in what order they are called.
+
+ * Input can be piped in or a file can be specified
+ * Output is shortened for stdout, the output file has more information
diff --git a/releasenotes/notes/bug-1486834-7ebca15836ae27a9.yaml b/releasenotes/notes/12.1.0-bug-1486834-7ebca15836ae27a9.yaml
similarity index 100%
rename from releasenotes/notes/bug-1486834-7ebca15836ae27a9.yaml
rename to releasenotes/notes/12.1.0-bug-1486834-7ebca15836ae27a9.yaml
diff --git a/releasenotes/notes/identity-clients-as-library-e663c6132fcac6c2.yaml b/releasenotes/notes/12.1.0-identity-clients-as-library-e663c6132fcac6c2.yaml
similarity index 100%
rename from releasenotes/notes/identity-clients-as-library-e663c6132fcac6c2.yaml
rename to releasenotes/notes/12.1.0-identity-clients-as-library-e663c6132fcac6c2.yaml
diff --git a/releasenotes/notes/image-clients-as-library-86d17caa26ce3961.yaml b/releasenotes/notes/12.1.0-image-clients-as-library-86d17caa26ce3961.yaml
similarity index 100%
rename from releasenotes/notes/image-clients-as-library-86d17caa26ce3961.yaml
rename to releasenotes/notes/12.1.0-image-clients-as-library-86d17caa26ce3961.yaml
diff --git a/releasenotes/notes/new-test-utils-module-adf34468c4d52719.yaml b/releasenotes/notes/12.1.0-new-test-utils-module-adf34468c4d52719.yaml
similarity index 100%
rename from releasenotes/notes/new-test-utils-module-adf34468c4d52719.yaml
rename to releasenotes/notes/12.1.0-new-test-utils-module-adf34468c4d52719.yaml
diff --git a/releasenotes/notes/remove-input-scenarios-functionality-01308e6d4307f580.yaml b/releasenotes/notes/12.1.0-remove-input-scenarios-functionality-01308e6d4307f580.yaml
similarity index 100%
rename from releasenotes/notes/remove-input-scenarios-functionality-01308e6d4307f580.yaml
rename to releasenotes/notes/12.1.0-remove-input-scenarios-functionality-01308e6d4307f580.yaml
diff --git a/releasenotes/notes/remove-integrated-horizon-bb57551c1e5f5be3.yaml b/releasenotes/notes/12.1.0-remove-integrated-horizon-bb57551c1e5f5be3.yaml
similarity index 100%
rename from releasenotes/notes/remove-integrated-horizon-bb57551c1e5f5be3.yaml
rename to releasenotes/notes/12.1.0-remove-integrated-horizon-bb57551c1e5f5be3.yaml
diff --git a/releasenotes/notes/remove-legacy-credential-providers-3d653ac3ba1ada2b.yaml b/releasenotes/notes/12.1.0-remove-legacy-credential-providers-3d653ac3ba1ada2b.yaml
similarity index 100%
rename from releasenotes/notes/remove-legacy-credential-providers-3d653ac3ba1ada2b.yaml
rename to releasenotes/notes/12.1.0-remove-legacy-credential-providers-3d653ac3ba1ada2b.yaml
diff --git a/releasenotes/notes/remove-trove-tests-666522e9113549f9.yaml b/releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml
similarity index 62%
rename from releasenotes/notes/remove-trove-tests-666522e9113549f9.yaml
rename to releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml
index 1157a4f..7a1fc36 100644
--- a/releasenotes/notes/remove-trove-tests-666522e9113549f9.yaml
+++ b/releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml
@@ -1,4 +1,4 @@
---
upgrade:
- All tests for the Trove project have been removed from tempest. They now
- live as a tempest plugin in the the trove project.
+ live as a tempest plugin in the trove project.
diff --git a/releasenotes/notes/routers-client-as-library-25a363379da351f6.yaml b/releasenotes/notes/12.1.0-routers-client-as-library-25a363379da351f6.yaml
similarity index 100%
rename from releasenotes/notes/routers-client-as-library-25a363379da351f6.yaml
rename to releasenotes/notes/12.1.0-routers-client-as-library-25a363379da351f6.yaml
diff --git a/releasenotes/notes/support-chunked-encoding-d71f53225f68edf3.yaml b/releasenotes/notes/12.1.0-support-chunked-encoding-d71f53225f68edf3.yaml
similarity index 100%
rename from releasenotes/notes/support-chunked-encoding-d71f53225f68edf3.yaml
rename to releasenotes/notes/12.1.0-support-chunked-encoding-d71f53225f68edf3.yaml
diff --git a/releasenotes/notes/tempest-init-global-config-dir-location-changes-12260255871d3a2b.yaml b/releasenotes/notes/12.1.0-tempest-init-global-config-dir-location-changes-12260255871d3a2b.yaml
similarity index 100%
rename from releasenotes/notes/tempest-init-global-config-dir-location-changes-12260255871d3a2b.yaml
rename to releasenotes/notes/12.1.0-tempest-init-global-config-dir-location-changes-12260255871d3a2b.yaml
diff --git a/releasenotes/notes/12.2.0-add-httptimeout-in-restclient-ax78061900e3f3d7.yaml b/releasenotes/notes/12.2.0-add-httptimeout-in-restclient-ax78061900e3f3d7.yaml
new file mode 100644
index 0000000..a360f8e
--- /dev/null
+++ b/releasenotes/notes/12.2.0-add-httptimeout-in-restclient-ax78061900e3f3d7.yaml
@@ -0,0 +1,7 @@
+---
+features:
+  - RestClient now supports setting a timeout in urllib3.poolmanager.
+    Clients will use CONF.service_clients.http_timeout as the timeout
+    value to wait for an HTTP response to a request.
+ - KeystoneAuthProvider will accept http_timeout and will use it in
+ get_credentials.
diff --git a/releasenotes/notes/add-new-identity-clients-3c3afd674a395bde.yaml b/releasenotes/notes/12.2.0-add-new-identity-clients-3c3afd674a395bde.yaml
similarity index 66%
rename from releasenotes/notes/add-new-identity-clients-3c3afd674a395bde.yaml
rename to releasenotes/notes/12.2.0-add-new-identity-clients-3c3afd674a395bde.yaml
index b8dcfce..3ec8b56 100644
--- a/releasenotes/notes/add-new-identity-clients-3c3afd674a395bde.yaml
+++ b/releasenotes/notes/12.2.0-add-new-identity-clients-3c3afd674a395bde.yaml
@@ -1,10 +1,13 @@
---
features:
- |
- Define identity service clients as libraries
+ Define identity service clients as libraries.
The following identity service clients are defined as library interface,
so the other projects can use these modules as stable libraries without
any maintenance changes.
* endpoints_client(v3)
* policies_client (v3)
+ * regions_client(v3)
+ * services_client(v3)
+ * projects_client(v3)
diff --git a/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml b/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml
new file mode 100644
index 0000000..53741da
--- /dev/null
+++ b/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml
@@ -0,0 +1,18 @@
+---
+features:
+ - The Tempest plugin interface contains a new optional method, which allows
+ plugins to declare and automatically register any service client defined
+ in the plugin.
+ - tempest.lib exposes a new stable interface, the clients module and
+    ServiceClients class, which provides a convenient way for plugin tests to
+ access service clients defined in Tempest as well as service clients
+ defined in all loaded plugins.
+ The new ServiceClients class only exposes for now the service clients
+    which are in tempest.lib, i.e. compute, network and image. The remaining
+ service clients (identity, volume and object-storage) will be added in
+ future updates.
+deprecations:
+ - The new clients module provides a stable alternative to tempest classes
+ manager.Manager and clients.Manager. manager.Manager only exists now
+    to smooth the transition of plugins to the new interface, but it will
+ be removed shortly without further notice.
diff --git a/releasenotes/notes/12.2.0-nova_cert_default-90eb7c1e3cde624a.yaml b/releasenotes/notes/12.2.0-nova_cert_default-90eb7c1e3cde624a.yaml
new file mode 100644
index 0000000..cfe97c5
--- /dev/null
+++ b/releasenotes/notes/12.2.0-nova_cert_default-90eb7c1e3cde624a.yaml
@@ -0,0 +1,8 @@
+---
+upgrade:
+
+ - The ``nova_cert`` option default is changed to ``False``. The nova
+    certification management APIs were a holdover from ec2, and are
+ not used by any other parts of nova. They are deprecated for
+ removal in nova after the newton release. This makes false a more
+ sensible default going forward.
\ No newline at end of file
diff --git a/releasenotes/notes/12.2.0-plugin-service-client-registration-00b19a2dd4935ba0.yaml b/releasenotes/notes/12.2.0-plugin-service-client-registration-00b19a2dd4935ba0.yaml
new file mode 100644
index 0000000..64f729a
--- /dev/null
+++ b/releasenotes/notes/12.2.0-plugin-service-client-registration-00b19a2dd4935ba0.yaml
@@ -0,0 +1,12 @@
+---
+features:
+ - A new optional interface `TempestPlugin.get_service_clients`
+ is available to plugins. It allows them to declare
+ any service client they implement. For now this is used by
+ tempest only, for auto-registration of service clients
+ in the new class `ServiceClients`.
+ - A new singleton class `clients.ClientsRegistry` is
+ available. It holds the service clients registration data
+ from all plugins. It is used by `ServiceClients` for
+ auto-registration of the service clients implemented
+ in plugins.
diff --git a/releasenotes/notes/12.2.0-remove-javelin-276f62d04f7e4a1d.yaml b/releasenotes/notes/12.2.0-remove-javelin-276f62d04f7e4a1d.yaml
new file mode 100644
index 0000000..8e893b8
--- /dev/null
+++ b/releasenotes/notes/12.2.0-remove-javelin-276f62d04f7e4a1d.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+ - The previously deprecated Javelin utility has been removed from Tempest.
+ As an alternative Ansible can be used to construct similar yaml workflows
+ to what Javelin used to provide.
diff --git a/releasenotes/notes/service_client_config-8a1d7b4de769c633.yaml b/releasenotes/notes/12.2.0-service_client_config-8a1d7b4de769c633.yaml
similarity index 100%
rename from releasenotes/notes/service_client_config-8a1d7b4de769c633.yaml
rename to releasenotes/notes/12.2.0-service_client_config-8a1d7b4de769c633.yaml
diff --git a/releasenotes/notes/12.2.0-volume-clients-as-library-9a3444dd63c134b3.yaml b/releasenotes/notes/12.2.0-volume-clients-as-library-9a3444dd63c134b3.yaml
new file mode 100644
index 0000000..cf504ad
--- /dev/null
+++ b/releasenotes/notes/12.2.0-volume-clients-as-library-9a3444dd63c134b3.yaml
@@ -0,0 +1,18 @@
+---
+features:
+ - |
+    Define volume service clients as libraries.
+ The following volume service clients are defined as library interface,
+ so the other projects can use these modules as stable libraries
+ without any maintenance changes.
+
+ * availability_zone_client(v1)
+ * availability_zone_client(v2)
+ * extensions_client(v1)
+ * extensions_client(v2)
+ * hosts_client(v1)
+ * hosts_client(v2)
+ * quotas_client(v1)
+ * quotas_client(v2)
+ * services_client(v1)
+ * services_client(v2)
diff --git a/releasenotes/notes/13.0.0-add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml b/releasenotes/notes/13.0.0-add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml
new file mode 100644
index 0000000..9e828f6
--- /dev/null
+++ b/releasenotes/notes/13.0.0-add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml
@@ -0,0 +1,15 @@
+---
+features:
+ - |
+ Define identity service clients as libraries.
+ Add new service clients to the library interface so the other projects can use these modules as stable libraries without
+ any maintenance changes.
+
+ * identity_client(v2)
+ * groups_client(v3)
+ * trusts_client(v3)
+ * users_client(v3)
+ * identity_client(v3)
+ * roles_client(v3)
+ * inherited_roles_client(v3)
+ * credentials_client(v3)
diff --git a/releasenotes/notes/13.0.0-add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml b/releasenotes/notes/13.0.0-add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
new file mode 100644
index 0000000..9cfce0d
--- /dev/null
+++ b/releasenotes/notes/13.0.0-add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
@@ -0,0 +1,16 @@
+---
+features:
+ - |
+ Define volume service clients as libraries.
+ The following volume service clients are defined as library interface,
+ so the other projects can use these modules as stable libraries without
+ any maintenance changes.
+
+ * backups_client
+ * encryption_types_client (v1)
+ * encryption_types_client (v2)
+ * qos_clients (v1)
+ * qos_clients (v2)
+ * snapshots_client (v1)
+ * snapshots_client (v2)
+
diff --git a/releasenotes/notes/13.0.0-deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml b/releasenotes/notes/13.0.0-deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml
new file mode 100644
index 0000000..0884cfa
--- /dev/null
+++ b/releasenotes/notes/13.0.0-deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml
@@ -0,0 +1,4 @@
+---
+deprecations:
+  - oslo.utils provides the same method get_ipv6_addr_by_EUI64, so the one
+    in tempest is deprecated in Newton and will be removed in Ocata.
diff --git a/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml b/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
new file mode 100644
index 0000000..543cf7b
--- /dev/null
+++ b/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
@@ -0,0 +1,5 @@
+---
+deprecations:
+ - The ``call_until_true`` function is moved from the ``tempest.test`` module
+    to the ``tempest.lib.common.utils.test_utils`` module. Backward
+    compatibility is preserved until Ocata.
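A minimal sketch of the new import path follows; the predicate and the timing
values are illustrative only.

.. code-block:: python

    from tempest.lib.common.utils import test_utils


    def resource_is_ready():
        # Hypothetical predicate; return True once the awaited condition holds.
        return True

    # Poll the predicate for up to 60 seconds, sleeping 5 seconds between calls.
    ready = test_utils.call_until_true(resource_is_ready, 60, 5)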
diff --git a/releasenotes/notes/13.0.0-start-of-newton-support-3ebb274f300f28eb.yaml b/releasenotes/notes/13.0.0-start-of-newton-support-3ebb274f300f28eb.yaml
new file mode 100644
index 0000000..b9b6fb5
--- /dev/null
+++ b/releasenotes/notes/13.0.0-start-of-newton-support-3ebb274f300f28eb.yaml
@@ -0,0 +1,13 @@
+---
+prelude: >
+ This release is marking the start of Newton release support in Tempest
+other:
+ - |
+ OpenStack releases supported at this time are **Liberty**, **Mitaka**,
+ and **Newton**.
+
+ The release under current development as of this tag is Ocata,
+ meaning that every Tempest commit is also tested against master during
+ the Ocata cycle. However, this does not necessarily mean that using
+    Tempest as of this tag will work against an Ocata (or future releases)
+ cloud.
diff --git a/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml b/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
new file mode 100644
index 0000000..20f310d
--- /dev/null
+++ b/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+  - The already deprecated tempest-cleanup standalone command has been
+    removed. The corresponding functionality can be accessed through
+ the unified `tempest` command (`tempest cleanup`).
diff --git a/releasenotes/notes/13.0.0-volume-clients-as-library-660811011be29d1a.yaml b/releasenotes/notes/13.0.0-volume-clients-as-library-660811011be29d1a.yaml
new file mode 100644
index 0000000..9e9eff6
--- /dev/null
+++ b/releasenotes/notes/13.0.0-volume-clients-as-library-660811011be29d1a.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Define the v1 and v2 types_client clients for the volume service as
+ library interfaces, allowing other projects to use these modules as
+ stable libraries without maintenance changes.
diff --git a/releasenotes/notes/13.1.0-volume-clients-as-library-309030c7a16e62ab.yaml b/releasenotes/notes/13.1.0-volume-clients-as-library-309030c7a16e62ab.yaml
new file mode 100644
index 0000000..056e199
--- /dev/null
+++ b/releasenotes/notes/13.1.0-volume-clients-as-library-309030c7a16e62ab.yaml
@@ -0,0 +1,10 @@
+---
+features:
+ - |
+ Define volume service clients as libraries.
+ The following volume service clients are defined as library interface,
+ so the other projects can use these modules as stable libraries without
+ any maintenance changes.
+
+ * volumes_client(v1)
+ * volumes_client(v2)
diff --git a/releasenotes/notes/add-cred-provider-abstract-class-to-lib-70ff513221f8a871.yaml b/releasenotes/notes/add-cred-provider-abstract-class-to-lib-70ff513221f8a871.yaml
new file mode 100644
index 0000000..6f7a411
--- /dev/null
+++ b/releasenotes/notes/add-cred-provider-abstract-class-to-lib-70ff513221f8a871.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - The cred_provider abstract class which serves as the basis for both
+ of tempest's cred providers, pre-provisioned credentials and dynamic
+ credentials, is now a library interface. This provides the common signature
+ required for building a credential provider.
diff --git a/releasenotes/notes/add-error-code-translation-to-versions-clients-acbc78292e24b014.yaml b/releasenotes/notes/add-error-code-translation-to-versions-clients-acbc78292e24b014.yaml
new file mode 100644
index 0000000..57bf47c
--- /dev/null
+++ b/releasenotes/notes/add-error-code-translation-to-versions-clients-acbc78292e24b014.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+  - Add an error translation to list_versions() of versions_client of both
+    compute and network. This can affect users who expect these clients to
+    return an error status code instead of raising an exception. Such code
+    needs to be changed to handle the exception, as the other clients do.
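A hedged sketch of that adjustment is shown below; the client setup is assumed,
and ``NotFound`` is only one example of the translated error responses.

.. code-block:: python

    from tempest.lib import exceptions as lib_exc

    try:
        versions = versions_client.list_versions()
    except lib_exc.NotFound:
        # Error responses are now raised as exceptions rather than being
        # returned as a status code on the response object.
        versions = None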
diff --git a/releasenotes/notes/add-ssh-port-parameter-to-client-6d16c374ac4456c1.yaml b/releasenotes/notes/add-ssh-port-parameter-to-client-6d16c374ac4456c1.yaml
new file mode 100644
index 0000000..b2ad199
--- /dev/null
+++ b/releasenotes/notes/add-ssh-port-parameter-to-client-6d16c374ac4456c1.yaml
@@ -0,0 +1,4 @@
+---
+features:
+  - A new optional parameter `port` for the ssh client (`tempest.lib.common.ssh.Client`)
+    to specify the destination port for a host. The default value is 22.
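A usage sketch of the new parameter follows; the address and credentials are
placeholders.

.. code-block:: python

    from tempest.lib.common import ssh

    # Connect to a guest whose SSH daemon listens on a non-default port.
    linux_client = ssh.Client('192.0.2.10', 'cirros', password='secret',
                              port=2222)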
diff --git a/releasenotes/notes/add_subunit_describe_calls-5498a37e6cd66c4b.yaml b/releasenotes/notes/add_subunit_describe_calls-5498a37e6cd66c4b.yaml
deleted file mode 100644
index b457ddd..0000000
--- a/releasenotes/notes/add_subunit_describe_calls-5498a37e6cd66c4b.yaml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-features:
- - Adds subunit-describe-calls. A parser for subunit streams to determine what
- REST API calls are made inside of a test and in what order they are called.
diff --git a/releasenotes/notes/deprecate-nova-api-extensions-df16b02485dae203.yaml b/releasenotes/notes/deprecate-nova-api-extensions-df16b02485dae203.yaml
new file mode 100644
index 0000000..c2d9a9b
--- /dev/null
+++ b/releasenotes/notes/deprecate-nova-api-extensions-df16b02485dae203.yaml
@@ -0,0 +1,7 @@
+---
+deprecations:
+ - The *api_extensions* config option in the *compute-feature-enabled* group is
+ now deprecated. This option will be removed from tempest when all the
+ OpenStack releases supported by tempest no longer support the API extensions
+ mechanism. This was removed from Nova during the Newton cycle, so this will
+ be removed at the Mitaka EOL.
diff --git a/releasenotes/notes/remo-stress-tests-81052b211ad95d2e.yaml b/releasenotes/notes/remo-stress-tests-81052b211ad95d2e.yaml
new file mode 100644
index 0000000..aa3a78e
--- /dev/null
+++ b/releasenotes/notes/remo-stress-tests-81052b211ad95d2e.yaml
@@ -0,0 +1,4 @@
+---
+upgrade:
+  - The stress test framework and all the stress tests have been removed.
+
diff --git a/releasenotes/notes/remove-sahara-tests-1532c47c7df80e3a.yaml b/releasenotes/notes/remove-sahara-tests-1532c47c7df80e3a.yaml
new file mode 100644
index 0000000..b541cf9
--- /dev/null
+++ b/releasenotes/notes/remove-sahara-tests-1532c47c7df80e3a.yaml
@@ -0,0 +1,4 @@
+---
+upgrade:
+ - All tests for the Sahara project have been removed from Tempest. They now
+ live as a Tempest plugin in the ``openstack/sahara-tests`` repository.
diff --git a/releasenotes/source/conf.py b/releasenotes/source/conf.py
index 4522a17..140263c 100644
--- a/releasenotes/source/conf.py
+++ b/releasenotes/source/conf.py
@@ -275,3 +275,6 @@
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
+
+# -- Options for Internationalization output ------------------------------
+locale_dirs = ['locale/']
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index 2c22408..8eac1d0 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -5,10 +5,11 @@
.. toctree::
:maxdepth: 1
+ unreleased
+ v13.0.0
v12.0.0
v11.0.0
v10.0.0
- unreleased
Indices and tables
==================
diff --git a/releasenotes/source/v13.0.0.rst b/releasenotes/source/v13.0.0.rst
new file mode 100644
index 0000000..39816e4
--- /dev/null
+++ b/releasenotes/source/v13.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v13.0.0 Release Notes
+=====================
+
+.. release-notes:: 13.0.0 Release Notes
+ :version: 13.0.0
diff --git a/requirements.txt b/requirements.txt
index 058ea00..9079a8d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,23 +2,24 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
-cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
+cliff>=2.2.0 # Apache-2.0
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
testtools>=1.4.0 # MIT
paramiko>=2.0 # LGPLv2.1+
-netaddr!=0.7.16,>=0.7.12 # BSD
+netaddr!=0.7.16,>=0.7.13 # BSD
testrepository>=0.0.18 # Apache-2.0/BSD
oslo.concurrency>=3.8.0 # Apache-2.0
-oslo.config>=3.12.0 # Apache-2.0
-oslo.i18n>=2.1.0 # Apache-2.0
-oslo.log>=1.14.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
+oslo.log>=3.11.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.17.0 # Apache-2.0
six>=1.9.0 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
-testscenarios>=0.4 # Apache-2.0/BSD
-PyYAML>=3.1.0 # MIT
-stevedore>=1.16.0 # Apache-2.0
-PrettyTable<0.8,>=0.7 # BSD
-os-testr>=0.7.0 # Apache-2.0
+PyYAML>=3.10.0 # MIT
+python-subunit>=0.0.18 # Apache-2.0/BSD
+stevedore>=1.17.1 # Apache-2.0
+PrettyTable<0.8,>=0.7.1 # BSD
+os-testr>=0.8.0 # Apache-2.0
urllib3>=1.15.1 # MIT
+debtcollector>=1.2.0 # Apache-2.0
+unittest2 # BSD
diff --git a/run_tempest.sh b/run_tempest.sh
index af01734..414146b 100755
--- a/run_tempest.sh
+++ b/run_tempest.sh
@@ -1,5 +1,7 @@
#!/usr/bin/env bash
+echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
+
function usage {
echo "Usage: $0 [OPTION]..."
echo "Run Tempest test suite"
diff --git a/run_tests.sh b/run_tests.sh
index 22314b6..a856bb4 100755
--- a/run_tests.sh
+++ b/run_tests.sh
@@ -17,6 +17,44 @@
echo " -- [TESTROPTIONS] After the first '--' you can pass arbitrary arguments to testr "
}
+function deprecation_warning {
+ cat <<EOF
+-------------------------------------------------------------------------
+WARNING: run_tests.sh is deprecated and this script will be removed after
+the Newton release. All tests should be run through testr/ostestr or tox.
+
+To run style checks:
+
+ tox -e pep8
+
+To run python 2.7 unit tests
+
+ tox -e py27
+
+To run unit tests and generate coverage report
+
+ tox -e cover
+
+To run a subset of any of these tests:
+
+ tox -e py27 someregex
+
+  e.g.: tox -e py27 test_servers
+
+Additional tox targets are available in tox.ini. For more information
+see:
+http://docs.openstack.org/project-team-guide/project-setup/python.html
+
+NOTE: if you want to use testr to run tests, you can instead use:
+
+ OS_TEST_PATH=./tempest/tests testr run
+
+Documentation on using testr directly can be found at
+http://testrepository.readthedocs.org/en/latest/MANUAL.html
+-------------------------------------------------------------------------
+EOF
+}
+
testrargs=""
just_pep8=0
venv=${VENV:-.venv}
@@ -32,6 +70,8 @@
config_file=""
update=0
+deprecation_warning
+
if ! options=$(getopt -o VNnfuctphd -l virtual-env,no-virtual-env,no-site-packages,force,update,serial,coverage,pep8,help,debug -- "$@")
then
# parse error
diff --git a/setup.cfg b/setup.cfg
index 2a3000d..96313fd 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -17,6 +17,7 @@
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
+ Programming Language :: Python :: 3.5
[files]
packages =
@@ -27,9 +28,6 @@
[entry_points]
console_scripts =
verify-tempest-config = tempest.cmd.verify_tempest_config:main
- javelin2 = tempest.cmd.javelin:main
- run-tempest-stress = tempest.cmd.run_stress:main
- tempest-cleanup = tempest.cmd.cleanup:main
tempest-account-generator = tempest.cmd.account_generator:main
tempest = tempest.cmd.main:main
skip-tracker = tempest.lib.cmd.skip_tracker:main
@@ -39,7 +37,6 @@
account-generator = tempest.cmd.account_generator:TempestAccountGenerator
init = tempest.cmd.init:TempestInit
cleanup = tempest.cmd.cleanup:TempestCleanup
- run-stress = tempest.cmd.run_stress:TempestRunStress
list-plugins = tempest.cmd.list_plugins:TempestListPlugins
verify-config = tempest.cmd.verify_tempest_config:TempestVerifyConfig
workspace = tempest.cmd.workspace:TempestWorkspace
diff --git a/tempest/README.rst b/tempest/README.rst
index 113b191..0feec41 100644
--- a/tempest/README.rst
+++ b/tempest/README.rst
@@ -15,7 +15,6 @@
| tempest/
| api/ - API tests
| scenario/ - complex scenario tests
-| stress/ - stress tests
Each of these directories contains different types of tests. What
belongs in each directory, the rules and examples for good tests, are
@@ -26,10 +25,9 @@
API tests are validation tests for the OpenStack API. They should not
use the existing python clients for OpenStack, but should instead use
-the tempest implementations of clients. This allows us to test both
-XML and JSON. Having raw clients also lets us pass invalid JSON and
-XML to the APIs and see the results, something we could not get with
-the native clients.
+the tempest implementations of clients. Having raw clients lets us
+pass invalid JSON to the APIs and see the results, something we could
+not get with the native clients.
When it makes sense, API testing should be moved closer to the
projects themselves, possibly as functional tests in their unit test
@@ -47,14 +45,6 @@
but should instead use the tempest implementations of clients.
-:ref:`stress_field_guide`
--------------------------
-
-Stress tests are designed to stress an OpenStack environment by running a high
-workload against it and seeing what breaks. The stress test framework runs
-several test jobs in parallel and can run any existing test in Tempest as a
-stress job.
-
:ref:`unit_tests_field_guide`
-----------------------------
diff --git a/tempest/api/baremetal/admin/base.py b/tempest/api/baremetal/admin/base.py
index f7891dd..ac5986c 100644
--- a/tempest/api/baremetal/admin/base.py
+++ b/tempest/api/baremetal/admin/base.py
@@ -96,10 +96,10 @@
@classmethod
@creates('chassis')
- def create_chassis(cls, description=None, expect_errors=False):
+ def create_chassis(cls, description=None):
"""Wrapper utility for creating test chassis.
- :param description: A description of the chassis. if not supplied,
+ :param description: A description of the chassis. If not supplied,
a random value will be generated.
:return: Created chassis.
@@ -114,6 +114,7 @@
memory_mb=4096):
"""Wrapper utility for creating test baremetal nodes.
+ :param chassis_id: The unique identifier of the chassis.
:param cpu_arch: CPU architecture of the node. Default: x86.
:param cpus: Number of CPUs. Default: 8.
:param local_gb: Disk size. Default: 10.
@@ -133,6 +134,7 @@
def create_port(cls, node_id, address, extra=None, uuid=None):
"""Wrapper utility for creating test ports.
+ :param node_id: The unique identifier of the node.
:param address: MAC address of the port.
:param extra: Meta data of the port. If not supplied, an empty
dictionary will be created.
@@ -150,7 +152,7 @@
def delete_chassis(cls, chassis_id):
"""Deletes a chassis having the specified UUID.
- :param uuid: The unique identifier of the chassis.
+ :param chassis_id: The unique identifier of the chassis.
:return: Server response.
"""
@@ -166,7 +168,7 @@
def delete_node(cls, node_id):
"""Deletes a node having the specified UUID.
- :param uuid: The unique identifier of the node.
+ :param node_id: The unique identifier of the node.
:return: Server response.
"""
@@ -182,7 +184,7 @@
def delete_port(cls, port_id):
"""Deletes a port having the specified UUID.
- :param uuid: The unique identifier of the port.
+ :param port_id: The unique identifier of the port.
:return: Server response.
"""
diff --git a/tempest/api/compute/admin/test_agents.py b/tempest/api/compute/admin/test_agents.py
index 4f48ad0..61359f1 100644
--- a/tempest/api/compute/admin/test_agents.py
+++ b/tempest/api/compute/admin/test_agents.py
@@ -16,7 +16,6 @@
from tempest.api.compute import base
from tempest.common.utils import data_utils
-from tempest.lib.common.utils import test_utils
from tempest import test
LOG = log.getLogger(__name__)
@@ -30,24 +29,16 @@
super(AgentsAdminTestJSON, cls).setup_clients()
cls.client = cls.os_adm.agents_client
- def setUp(self):
- super(AgentsAdminTestJSON, self).setUp()
- params = self._param_helper(
+ @classmethod
+ def resource_setup(cls):
+ super(AgentsAdminTestJSON, cls).resource_setup()
+ cls.params_agent = cls._param_helper(
hypervisor='common', os='linux', architecture='x86_64',
version='7.0', url='xxx://xxxx/xxx/xxx',
md5hash='add6bb58e139be103324d04d82d8f545')
- body = self.client.create_agent(**params)['agent']
- self.agent_id = body['agent_id']
- def tearDown(self):
- try:
- test_utils.call_and_ignore_notfound_exc(
- self.client.delete_agent, self.agent_id)
- except Exception:
- LOG.exception('Exception raised deleting agent %s', self.agent_id)
- super(AgentsAdminTestJSON, self).tearDown()
-
- def _param_helper(self, **kwargs):
+ @staticmethod
+ def _param_helper(**kwargs):
rand_key = 'architecture'
if rand_key in kwargs:
# NOTE: The rand_name is for avoiding agent conflicts.
@@ -71,33 +62,42 @@
@test.idempotent_id('dc9ffd51-1c50-4f0e-a820-ae6d2a568a9e')
def test_update_agent(self):
- # Update an agent.
+ # Create and update an agent.
+ body = self.client.create_agent(**self.params_agent)['agent']
+ self.addCleanup(self.client.delete_agent, body['agent_id'])
+ agent_id = body['agent_id']
params = self._param_helper(
version='8.0', url='xxx://xxxx/xxx/xxx2',
md5hash='add6bb58e139be103324d04d82d8f547')
- body = self.client.update_agent(self.agent_id, **params)['agent']
+ body = self.client.update_agent(agent_id, **params)['agent']
for expected_item, value in params.items():
self.assertEqual(value, body[expected_item])
@test.idempotent_id('470e0b89-386f-407b-91fd-819737d0b335')
def test_delete_agent(self):
- # Delete an agent.
- self.client.delete_agent(self.agent_id)
+ # Create an agent and delete it.
+ body = self.client.create_agent(**self.params_agent)['agent']
+ self.client.delete_agent(body['agent_id'])
# Verify the list doesn't contain the deleted agent.
agents = self.client.list_agents()['agents']
- self.assertNotIn(self.agent_id, map(lambda x: x['agent_id'], agents))
+ self.assertNotIn(body['agent_id'], map(lambda x: x['agent_id'],
+ agents))
@test.idempotent_id('6a326c69-654b-438a-80a3-34bcc454e138')
def test_list_agents(self):
- # List all agents.
+ # Create an agent and list all agents.
+ body = self.client.create_agent(**self.params_agent)['agent']
+ self.addCleanup(self.client.delete_agent, body['agent_id'])
agents = self.client.list_agents()['agents']
self.assertTrue(len(agents) > 0, 'Cannot get any agents.(%s)' % agents)
- self.assertIn(self.agent_id, map(lambda x: x['agent_id'], agents))
+ self.assertIn(body['agent_id'], map(lambda x: x['agent_id'], agents))
@test.idempotent_id('eabadde4-3cd7-4ec4-a4b5-5a936d2d4408')
def test_list_agents_with_filter(self):
- # List the agent builds by the filter.
+ # Create agents and list the agent builds by the filter.
+ body = self.client.create_agent(**self.params_agent)['agent']
+ self.addCleanup(self.client.delete_agent, body['agent_id'])
params = self._param_helper(
hypervisor='xen', os='linux', architecture='x86',
version='7.0', url='xxx://xxxx/xxx/xxx1',
@@ -110,4 +110,5 @@
['agents'])
self.assertTrue(len(agents) > 0, 'Cannot get any agents.(%s)' % agents)
self.assertIn(agent_id_xen, map(lambda x: x['agent_id'], agents))
- self.assertNotIn(self.agent_id, map(lambda x: x['agent_id'], agents))
+ self.assertNotIn(body['agent_id'], map(lambda x: x['agent_id'],
+ agents))
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/api/compute/admin/test_aggregates.py
index ac1bfee..667d30b 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/api/compute/admin/test_aggregates.py
@@ -25,8 +25,6 @@
class AggregatesAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Aggregates API that require admin privileges"""
- _host_key = 'OS-EXT-SRV-ATTR:host'
-
@classmethod
def setup_clients(cls):
super(AggregatesAdminTestJSON, cls).setup_clients()
@@ -217,10 +215,8 @@
self.client.add_host(aggregate['id'], host=self.host)
self.addCleanup(self.client.remove_host, aggregate['id'],
host=self.host)
- server_name = data_utils.rand_name('test_server')
admin_servers_client = self.os_adm.servers_client
- server = self.create_test_server(name=server_name,
- availability_zone=az_name,
+ server = self.create_test_server(availability_zone=az_name,
wait_until='ACTIVE')
body = admin_servers_client.show_server(server['id'])['server']
- self.assertEqual(self.host, body[self._host_key])
+ self.assertEqual(self.host, body['OS-EXT-SRV-ATTR:host'])
diff --git a/tempest/api/compute/admin/test_auto_allocate_network.py b/tempest/api/compute/admin/test_auto_allocate_network.py
new file mode 100644
index 0000000..ee8ed14
--- /dev/null
+++ b/tempest/api/compute/admin/test_auto_allocate_network.py
@@ -0,0 +1,211 @@
+# Copyright 2016 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_log import log
+
+from tempest.api.compute import base
+from tempest.common import compute
+from tempest.common import credentials_factory as credentials
+from tempest.common import waiters
+from tempest import config
+from tempest import exceptions
+from tempest.lib.common.utils import test_utils
+from tempest import test
+
+CONF = config.CONF
+LOG = log.getLogger(__name__)
+
+
+# NOTE(mriedem): This is in the admin directory only because it requires
+# force_tenant_isolation=True, but doesn't extend BaseV2ComputeAdminTest
+# because it doesn't actually use any admin credentials in the tests.
+class AutoAllocateNetworkTest(base.BaseV2ComputeTest):
+ """Tests auto-allocating networks with the v2.37 microversion.
+
+ These tests rely on Neutron being enabled. Also, the tenant must not have
+ any network resources available to it so we can make sure that Nova
+ calls Neutron to automatically allocate the network topology.
+ """
+
+ force_tenant_isolation = True
+
+ min_microversion = '2.37'
+ max_microversion = 'latest'
+
+ @classmethod
+ def skip_checks(cls):
+ super(AutoAllocateNetworkTest, cls).skip_checks()
+ identity_version = cls.get_identity_version()
+ if not credentials.is_admin_available(
+ identity_version=identity_version):
+ msg = "Missing Identity Admin API credentials in configuration."
+ raise cls.skipException(msg)
+ if not CONF.service_available.neutron:
+ raise cls.skipException('Neutron is required')
+ if not test.is_extension_enabled('auto-allocated-topology', 'network'):
+ raise cls.skipException(
+ 'auto-allocated-topology extension is not available')
+
+ @classmethod
+ def setup_credentials(cls):
+ # Do not create network resources for these tests.
+ cls.set_network_resources()
+ super(AutoAllocateNetworkTest, cls).setup_credentials()
+
+ @classmethod
+ def setup_clients(cls):
+ super(AutoAllocateNetworkTest, cls).setup_clients()
+ cls.networks_client = cls.os.networks_client
+ cls.routers_client = cls.os.routers_client
+ cls.subnets_client = cls.os.subnets_client
+ cls.ports_client = cls.os.ports_client
+
+ @classmethod
+ def resource_setup(cls):
+ super(AutoAllocateNetworkTest, cls).resource_setup()
+ # Sanity check that there are no networks available to the tenant.
+ # This is essentially what Nova does for getting available networks.
+ tenant_id = cls.networks_client.tenant_id
+ # (1) Retrieve non-public network list owned by the tenant.
+ search_opts = {'tenant_id': tenant_id, 'shared': False}
+ nets = cls.networks_client.list_networks(
+ **search_opts).get('networks', [])
+ if nets:
+ raise exceptions.TempestException(
+ 'Found tenant networks: %s' % nets)
+ # (2) Retrieve shared network list.
+ search_opts = {'shared': True}
+ nets = cls.networks_client.list_networks(
+ **search_opts).get('networks', [])
+ if nets:
+ raise exceptions.TempestException(
+ 'Found shared networks: %s' % nets)
+
+ @classmethod
+ def resource_cleanup(cls):
+ """Deletes any auto_allocated_network and it's associated resources."""
+
+ # Find the auto-allocated router for the tenant.
+ # This is a bit hacky since we don't have a great way to find the
+ # auto-allocated router given the private tenant network we have.
+ routers = cls.routers_client.list_routers().get('routers', [])
+ if len(routers) > 1:
+ # This indicates a race where nova is concurrently calling the
+ # neutron auto-allocated-topology API for multiple server builds
+ # at the same time (it's called from nova-compute when setting up
+ # networking for a server). Neutron will detect duplicates and
+ # automatically clean them up, but there is a window where the API
+ # can return multiple and we don't have a good way to filter those
+ # out right now, so we'll just handle them.
+ LOG.info('(%s) Found more than one router for tenant.',
+ test_utils.find_test_caller())
+
+ # Let's just blindly remove any networks, duplicate or otherwise, that
+ # the test might have created even though Neutron will cleanup
+ # duplicate resources automatically (so ignore 404s).
+ networks = cls.networks_client.list_networks().get('networks', [])
+
+ for router in routers:
+ # Disassociate the subnets from the router. Because of the race
+ # mentioned above the subnets might not be associated with the
+ # router so ignore any 404.
+ for network in networks:
+ for subnet_id in network['subnets']:
+ test_utils.call_and_ignore_notfound_exc(
+ cls.routers_client.remove_router_interface,
+ router['id'], subnet_id=subnet_id)
+
+ # Delete the router.
+ cls.routers_client.delete_router(router['id'])
+
+ for network in networks:
+ # Get and delete the ports for the given network.
+ ports = cls.ports_client.list_ports(
+ network_id=network['id']).get('ports', [])
+ for port in ports:
+ test_utils.call_and_ignore_notfound_exc(
+ cls.ports_client.delete_port, port['id'])
+
+ # Delete the subnets.
+ for subnet_id in network['subnets']:
+ test_utils.call_and_ignore_notfound_exc(
+ cls.subnets_client.delete_subnet, subnet_id)
+
+ # Delete the network.
+ test_utils.call_and_ignore_notfound_exc(
+ cls.networks_client.delete_network, network['id'])
+
+ @test.idempotent_id('5eb7b8fa-9c23-47a2-9d7d-02ed5809dd34')
+ def test_server_create_no_allocate(self):
+ """Tests that no networking is allocated for the server."""
+ # create the server with no networking
+ server, _ = compute.create_test_server(
+ self.os, networks='none', wait_until='ACTIVE')
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
+ # get the server ips
+ addresses = self.servers_client.list_addresses(
+ server['id'])['addresses']
+ # assert that there is no networking
+ self.assertEqual({}, addresses)
+
+ @test.idempotent_id('2e6cf129-9e28-4e8a-aaaa-045ea826b2a6')
+ def test_server_multi_create_auto_allocate(self):
+ """Tests that networking is auto-allocated for multiple servers."""
+
+ # Create multiple servers with auto networking to make sure the
+ # automatic network allocation is atomic. Using a minimum of three
+ # servers is essential for this scenario because:
+ #
+ # - First request sees no networks for the tenant so it auto-allocates
+ # one from Neutron, let's call that net1.
+ # - Second request sees no networks for the tenant so it auto-allocates
+ # one from Neutron. Neutron creates net2 but sees it's a duplicate
+ # so it queues net2 for deletion and returns net1 from the API and
+ # Nova uses that for the second server request.
+ # - Third request sees net1 and net2 for the tenant and fails with a
+ # NetworkAmbiguous 400 error.
+ _, servers = compute.create_test_server(
+ self.os, networks='auto', wait_until='ACTIVE',
+ min_count=3)
+ server_nets = set()
+ for server in servers:
+ self.addCleanup(waiters.wait_for_server_termination,
+ self.servers_client, server['id'])
+ self.addCleanup(self.servers_client.delete_server, server['id'])
+ # get the server ips
+ addresses = self.servers_client.list_addresses(
+ server['id'])['addresses']
+ # assert that there is networking (should only be one)
+ self.assertEqual(1, len(addresses))
+ server_nets.add(list(addresses.keys())[0])
+ # all servers should be on the same network
+ self.assertEqual(1, len(server_nets))
+
+ # List the networks for the tenant; we filter on admin_state_up=True
+ # because the auto-allocated-topology code in Neutron won't set that
+ # to True until the network is ready and is returned from the API.
+ # Duplicate networks created from a race should have
+ # admin_state_up=False.
+ search_opts = {'tenant_id': self.networks_client.tenant_id,
+ 'shared': False,
+ 'admin_state_up': True}
+ nets = self.networks_client.list_networks(
+ **search_opts).get('networks', [])
+ self.assertEqual(1, len(nets))
+ # verify the single private tenant network is the one that the servers
+ # are also using
+ self.assertIn(nets[0]['name'], server_nets)
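
The 'auto'/'none' strings used above come from the 2.37 compute microversion,
which lets the networks value of a server-create request be a literal string
instead of a list. A minimal single-server sketch reusing the same helper
(the function name below is only illustrative, not part of this change)::

    from tempest.common import compute

    def boot_with_auto_allocated_network(manager, servers_client):
        # With microversion 2.37, networks may be 'auto' or 'none' rather
        # than a list of network dicts.
        server, _ = compute.create_test_server(
            manager, networks='auto', wait_until='ACTIVE')
        # The auto-allocated network shows up in the server's addresses.
        return servers_client.list_addresses(server['id'])['addresses']
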
diff --git a/tempest/api/compute/admin/test_availability_zone.py b/tempest/api/compute/admin/test_availability_zone.py
index 5befa53..3470602 100644
--- a/tempest/api/compute/admin/test_availability_zone.py
+++ b/tempest/api/compute/admin/test_availability_zone.py
@@ -29,10 +29,10 @@
def test_get_availability_zone_list(self):
# List of availability zone
availability_zone = self.client.list_availability_zones()
- self.assertTrue(len(availability_zone['availabilityZoneInfo']) > 0)
+ self.assertGreater(len(availability_zone['availabilityZoneInfo']), 0)
@test.idempotent_id('ef726c58-530f-44c2-968c-c7bed22d5b8c')
def test_get_availability_zone_list_detail(self):
# List of availability zones and available services
availability_zone = self.client.list_availability_zones(detail=True)
- self.assertTrue(len(availability_zone['availabilityZoneInfo']) > 0)
+ self.assertGreater(len(availability_zone['availabilityZoneInfo']), 0)
diff --git a/tempest/api/compute/admin/test_flavors.py b/tempest/api/compute/admin/test_flavors.py
index 95e7ef1..fde5622 100644
--- a/tempest/api/compute/admin/test_flavors.py
+++ b/tempest/api/compute/admin/test_flavors.py
@@ -161,6 +161,7 @@
verify_flavor_response_extension(flavor)
# Check if flavor is present in list
+ flag = False
flavors = self.user_client.list_flavors(detail=True)['flavors']
for flavor in flavors:
if flavor['name'] == flavor_name:
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
index 456363c..e207aed 100644
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ b/tempest/api/compute/admin/test_floating_ips_bulk.py
@@ -17,7 +17,7 @@
from tempest.api.compute import base
from tempest import config
-from tempest import exceptions
+from tempest.lib import exceptions
from tempest import test
CONF = config.CONF
diff --git a/tempest/api/compute/admin/test_hosts.py b/tempest/api/compute/admin/test_hosts.py
index a9e9644..29e1eb8 100644
--- a/tempest/api/compute/admin/test_hosts.py
+++ b/tempest/api/compute/admin/test_hosts.py
@@ -36,7 +36,7 @@
hosts = self.client.list_hosts()['hosts']
host = hosts[0]
hosts = self.client.list_hosts(zone=host['zone'])['hosts']
- self.assertTrue(len(hosts) >= 1)
+ self.assertGreaterEqual(len(hosts), 1)
self.assertIn(host, hosts)
@test.idempotent_id('9af3c171-fbf4-4150-a624-22109733c2a6')
@@ -58,12 +58,12 @@
hosts = self.client.list_hosts()['hosts']
hosts = [host for host in hosts if host['service'] == 'compute']
- self.assertTrue(len(hosts) >= 1)
+ self.assertGreaterEqual(len(hosts), 1)
for host in hosts:
hostname = host['host_name']
resources = self.client.show_host(hostname)['host']
- self.assertTrue(len(resources) >= 1)
+ self.assertGreaterEqual(len(resources), 1)
host_resource = resources[0]['resource']
self.assertIsNotNone(host_resource)
self.assertIsNotNone(host_resource['cpu'])
diff --git a/tempest/api/compute/admin/test_hosts_negative.py b/tempest/api/compute/admin/test_hosts_negative.py
index 8366945..c270829 100644
--- a/tempest/api/compute/admin/test_hosts_negative.py
+++ b/tempest/api/compute/admin/test_hosts_negative.py
@@ -29,7 +29,7 @@
def _get_host_name(self):
hosts = self.client.list_hosts()['hosts']
- self.assertTrue(len(hosts) >= 1)
+ self.assertGreaterEqual(len(hosts), 1)
hostname = hosts[0]['host_name']
return hostname
diff --git a/tempest/api/compute/admin/test_hypervisor.py b/tempest/api/compute/admin/test_hypervisor.py
index 113ec40..92a9135 100644
--- a/tempest/api/compute/admin/test_hypervisor.py
+++ b/tempest/api/compute/admin/test_hypervisor.py
@@ -52,7 +52,7 @@
self.assertHypervisors(hypers)
details = self.client.show_hypervisor(hypers[0]['id'])['hypervisor']
- self.assertTrue(len(details) > 0)
+ self.assertGreater(len(details), 0)
self.assertEqual(details['hypervisor_hostname'],
hypers[0]['hypervisor_hostname'])
@@ -65,14 +65,14 @@
hostname = hypers[0]['hypervisor_hostname']
hypervisors = (self.client.list_servers_on_hypervisor(hostname)
['hypervisors'])
- self.assertTrue(len(hypervisors) > 0)
+ self.assertGreater(len(hypervisors), 0)
@test.idempotent_id('797e4f28-b6e0-454d-a548-80cc77c00816')
def test_get_hypervisor_stats(self):
# Verify the stats of the all hypervisor
stats = (self.client.show_hypervisor_statistics()
['hypervisor_statistics'])
- self.assertTrue(len(stats) > 0)
+ self.assertGreater(len(stats), 0)
@test.idempotent_id('91a50d7d-1c2b-4f24-b55a-a1fe20efca70')
def test_get_hypervisor_uptime(self):
diff --git a/tempest/api/compute/admin/test_hypervisor_negative.py b/tempest/api/compute/admin/test_hypervisor_negative.py
index 9c6df7f..220ea39 100644
--- a/tempest/api/compute/admin/test_hypervisor_negative.py
+++ b/tempest/api/compute/admin/test_hypervisor_negative.py
@@ -47,7 +47,7 @@
@test.idempotent_id('51e663d0-6b89-4817-a465-20aca0667d03')
def test_show_hypervisor_with_non_admin_user(self):
hypers = self._list_hypervisors()
- self.assertTrue(len(hypers) > 0)
+ self.assertGreater(len(hypers), 0)
self.assertRaises(
lib_exc.Forbidden,
@@ -58,7 +58,7 @@
@test.idempotent_id('2a0a3938-832e-4859-95bf-1c57c236b924')
def test_show_servers_with_non_admin_user(self):
hypers = self._list_hypervisors()
- self.assertTrue(len(hypers) > 0)
+ self.assertGreater(len(hypers), 0)
self.assertRaises(
lib_exc.Forbidden,
@@ -96,7 +96,7 @@
@test.idempotent_id('6c3461f9-c04c-4e2a-bebb-71dc9cb47df2')
def test_get_hypervisor_uptime_with_non_admin_user(self):
hypers = self._list_hypervisors()
- self.assertTrue(len(hypers) > 0)
+ self.assertGreater(len(hypers), 0)
self.assertRaises(
lib_exc.Forbidden,
@@ -133,7 +133,7 @@
@test.idempotent_id('5b6a6c79-5dc1-4fa5-9c58-9c8085948e74')
def test_search_hypervisor_with_non_admin_user(self):
hypers = self._list_hypervisors()
- self.assertTrue(len(hypers) > 0)
+ self.assertGreater(len(hypers), 0)
self.assertRaises(
lib_exc.Forbidden,
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 94635ff..72d5b18 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -26,7 +26,8 @@
class LiveBlockMigrationTestJSON(base.BaseV2ComputeAdminTest):
- _host_key = 'OS-EXT-SRV-ATTR:host'
+ max_microversion = '2.24'
+ block_migration = None
@classmethod
def skip_checks(cls):
@@ -61,15 +62,19 @@
return body
def _get_host_for_server(self, server_id):
- return self._get_server_details(server_id)[self._host_key]
+ return self._get_server_details(server_id)['OS-EXT-SRV-ATTR:host']
def _migrate_server_to(self, server_id, dest_host, volume_backed=False):
- block_migration = (CONF.compute_feature_enabled.
- block_migration_for_live_migration and
- not volume_backed)
+ kwargs = dict()
+ block_migration = getattr(self, 'block_migration', None)
+ if self.block_migration is None:
+ kwargs['disk_over_commit'] = False
+ block_migration = (CONF.compute_feature_enabled.
+ block_migration_for_live_migration and
+ not volume_backed)
body = self.admin_servers_client.live_migrate_server(
server_id, host=dest_host, block_migration=block_migration,
- disk_over_commit=False)
+ **kwargs)
return body
def _get_host_other_than(self, host):
@@ -77,14 +82,6 @@
if host != target_host:
return target_host
- def _volume_clean_up(self, server_id, volume_id):
- body = self.volumes_client.show_volume(volume_id)['volume']
- if body['status'] == 'in-use':
- self.servers_client.detach_volume(server_id, volume_id)
- waiters.wait_for_volume_status(self.volumes_client,
- volume_id, 'available')
- self.volumes_client.delete_volume(volume_id)
-
def _test_live_migration(self, state='ACTIVE', volume_backed=False):
"""Tests live migration between two hosts.
@@ -131,8 +128,7 @@
def test_live_block_migration_paused(self):
self._test_live_migration(state='PAUSED')
- @decorators.skip_because(bug="1549511",
- condition=CONF.service_available.neutron)
+ @decorators.skip_because(bug="1524898")
@test.idempotent_id('5071cf17-3004-4257-ae61-73a84e28badd')
@test.services('volume')
def test_volume_backed_live_migration(self):
@@ -146,24 +142,23 @@
block_migrate_cinder_iscsi,
'Block Live migration not configured for iSCSI')
def test_iscsi_volume(self):
- server_id = self.create_test_server(wait_until="ACTIVE")['id']
+ server = self.create_test_server(wait_until="ACTIVE")
+ server_id = server['id']
actual_host = self._get_host_for_server(server_id)
target_host = self._get_host_other_than(actual_host)
- volume = self.volumes_client.create_volume(
- display_name='test')['volume']
-
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
- self.addCleanup(self._volume_clean_up, server_id, volume['id'])
+ volume = self.create_volume()
# Attach the volume to the server
- self.servers_client.attach_volume(server_id, volumeId=volume['id'],
- device='/dev/xvdb')
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'in-use')
+ self.attach_volume(server, volume, device='/dev/xvdb')
self._migrate_server_to(server_id, target_host)
waiters.wait_for_server_status(self.servers_client,
server_id, 'ACTIVE')
self.assertEqual(target_host, self._get_host_for_server(server_id))
+
+
+class LiveAutoBlockMigrationV225TestJSON(LiveBlockMigrationTestJSON):
+ min_microversion = '2.25'
+ max_microversion = 'latest'
+ block_migration = 'auto'
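
Depending on which class runs it, the reworked _migrate_server_to above ends
up issuing one of two request shapes; roughly (a sketch with placeholder
variables, not literal Tempest code)::

    # Base class (block_migration is None, microversion capped at 2.24):
    # the flag comes from config and disk_over_commit is sent explicitly.
    admin_servers_client.live_migrate_server(
        server_id, host=dest_host,
        block_migration=(CONF.compute_feature_enabled.
                         block_migration_for_live_migration and
                         not volume_backed),
        disk_over_commit=False)

    # LiveAutoBlockMigrationV225TestJSON (microversion >= 2.25): Nova picks
    # the migration mode itself and disk_over_commit is no longer sent.
    admin_servers_client.live_migrate_server(
        server_id, host=dest_host, block_migration='auto')
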
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index 6113c04..4f075eb 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -31,6 +31,8 @@
super(MigrationsAdminTest, cls).setup_clients()
cls.client = cls.os_adm.migrations_client
cls.flavors_admin_client = cls.os_adm.flavors_client
+ cls.admin_hosts_client = cls.os_adm.hosts_client
+ cls.admin_servers_client = cls.os_adm.servers_client
@test.idempotent_id('75c0b83d-72a0-4cf8-a153-631e83e7d53f')
def test_list_migrations(self):
@@ -103,3 +105,42 @@
server = self.servers_client.show_server(server['id'])['server']
self.assertEqual(flavor['id'], server['flavor']['id'])
+
+ def _test_cold_migrate_server(self, revert=False):
+ if CONF.compute.min_compute_nodes < 2:
+ msg = "Less than 2 compute nodes, skipping multinode tests."
+ raise self.skipException(msg)
+
+ server = self.create_test_server(wait_until="ACTIVE")
+ src_host = self.admin_servers_client.show_server(
+ server['id'])['server']['OS-EXT-SRV-ATTR:host']
+
+ self.admin_servers_client.migrate_server(server['id'])
+
+ waiters.wait_for_server_status(self.servers_client,
+ server['id'], 'VERIFY_RESIZE')
+
+ if revert:
+ self.servers_client.revert_resize_server(server['id'])
+ assert_func = self.assertEqual
+ else:
+ self.servers_client.confirm_resize_server(server['id'])
+ assert_func = self.assertNotEqual
+
+ waiters.wait_for_server_status(self.servers_client,
+ server['id'], 'ACTIVE')
+ dst_host = self.admin_servers_client.show_server(
+ server['id'])['server']['OS-EXT-SRV-ATTR:host']
+ assert_func(src_host, dst_host)
+
+ @test.idempotent_id('4bf0be52-3b6f-4746-9a27-3143636fe30d')
+ @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
+ 'Cold migration not available.')
+ def test_cold_migration(self):
+ self._test_cold_migrate_server(revert=False)
+
+ @test.idempotent_id('caa1aa8b-f4ef-4374-be0d-95f001c2ac2d')
+ @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
+ 'Cold migration not available.')
+ def test_revert_cold_migration(self):
+ self._test_cold_migrate_server(revert=True)
diff --git a/tempest/api/compute/admin/test_quotas.py b/tempest/api/compute/admin/test_quotas.py
index b1f0755..7d97ce2 100644
--- a/tempest/api/compute/admin/test_quotas.py
+++ b/tempest/api/compute/admin/test_quotas.py
@@ -117,8 +117,6 @@
password=password,
project=project,
email=email)
- if 'user' in user:
- user = user['user']
user_id = user['id']
self.addCleanup(self.identity_utils.delete_user, user_id)
diff --git a/tempest/api/compute/admin/test_security_groups.py b/tempest/api/compute/admin/test_security_groups.py
index 58da8b9..e329869 100644
--- a/tempest/api/compute/admin/test_security_groups.py
+++ b/tempest/api/compute/admin/test_security_groups.py
@@ -15,11 +15,8 @@
from tempest.api.compute import base
from tempest.common.utils import data_utils
-from tempest import config
from tempest import test
-CONF = config.CONF
-
class SecurityGroupsTestAdminJSON(base.BaseV2ComputeAdminTest):
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
index 09253b0..efa55d5 100644
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -24,8 +24,6 @@
class ServersAdminTestJSON(base.BaseV2ComputeAdminTest):
"""Tests Servers API using admin privileges"""
- _host_key = 'OS-EXT-SRV-ATTR:host'
-
@classmethod
def setup_clients(cls):
super(ServersAdminTestJSON, cls).setup_clients()
@@ -37,23 +35,16 @@
def resource_setup(cls):
super(ServersAdminTestJSON, cls).resource_setup()
- cls.s1_name = data_utils.rand_name('server')
+ cls.s1_name = data_utils.rand_name(cls.__name__ + '-server')
server = cls.create_test_server(name=cls.s1_name,
wait_until='ACTIVE')
cls.s1_id = server['id']
- cls.s2_name = data_utils.rand_name('server')
+ cls.s2_name = data_utils.rand_name(cls.__name__ + '-server')
server = cls.create_test_server(name=cls.s2_name,
wait_until='ACTIVE')
cls.s2_id = server['id']
- @test.idempotent_id('51717b38-bdc1-458b-b636-1cf82d99f62f')
- def test_list_servers_by_admin(self):
- # Listing servers by admin user returns empty list by default
- body = self.client.list_servers(detail=True)
- servers = body['servers']
- self.assertEqual([], servers)
-
@test.idempotent_id('06f960bb-15bb-48dc-873d-f96e89be7870')
def test_list_servers_filter_by_error_status(self):
# Filter the list of servers by server error status
@@ -70,6 +61,26 @@
self.assertIn(self.s1_id, map(lambda x: x['id'], servers))
self.assertNotIn(self.s2_id, map(lambda x: x['id'], servers))
+ @test.idempotent_id('d56e9540-73ed-45e0-9b88-98fc419087eb')
+ def test_list_servers_detailed_filter_by_invalid_status(self):
+ params = {'status': 'invalid_status'}
+ body = self.client.list_servers(detail=True, **params)
+ servers = body['servers']
+ self.assertEqual([], servers)
+
+ @test.idempotent_id('51717b38-bdc1-458b-b636-1cf82d99f62f')
+ def test_list_servers_by_admin(self):
+ # Listing servers by admin user returns a list which doesn't
+ # contain other tenants' servers by default
+ body = self.client.list_servers(detail=True)
+ servers = body['servers']
+
+ # This case covers test environments which already contain
+ # servers before the tests run
+ servers_name = [server['name'] for server in servers]
+ self.assertNotIn(self.s1_name, servers_name)
+ self.assertNotIn(self.s2_name, servers_name)
+
@test.idempotent_id('9f5579ae-19b4-4985-a091-2a5d56106580')
def test_list_servers_by_admin_with_all_tenants(self):
# Listing servers by admin user with all tenants parameter
@@ -91,19 +102,31 @@
params = {'tenant_id': tenant_id}
body = self.client.list_servers(detail=True, **params)
servers = body['servers']
- self.assertEqual([], servers)
+ servers_name = map(lambda x: x['name'], servers)
+ self.assertNotIn(self.s1_name, servers_name)
+ self.assertNotIn(self.s2_name, servers_name)
- # List the admin tenant which has no servers
+ # List the primary tenant when all_tenants is specified
+ params = {'all_tenants': '', 'tenant_id': tenant_id}
+ body = self.client.list_servers(detail=True, **params)
+ servers = body['servers']
+ servers_name = map(lambda x: x['name'], servers)
+ self.assertIn(self.s1_name, servers_name)
+ self.assertIn(self.s2_name, servers_name)
+
+ # Listing the admin tenant shouldn't return other tenants' servers
admin_tenant_id = self.client.tenant_id
params = {'all_tenants': '', 'tenant_id': admin_tenant_id}
body = self.client.list_servers(detail=True, **params)
servers = body['servers']
- self.assertEqual([], servers)
+ servers_name = map(lambda x: x['name'], servers)
+ self.assertNotIn(self.s1_name, servers_name)
+ self.assertNotIn(self.s2_name, servers_name)
@test.idempotent_id('86c7a8f7-50cf-43a9-9bac-5b985317134f')
def test_list_servers_filter_by_exist_host(self):
# Filter the list of servers by existent host
- name = data_utils.rand_name('server')
+ name = data_utils.rand_name(self.__class__.__name__ + '-server')
network = self.get_tenant_network()
network_kwargs = fixed_network.set_networks_kwarg(network)
# We need to create the server as an admin, so we can't use
@@ -114,7 +137,7 @@
self.addCleanup(self.client.delete_server, test_server['id'])
server = self.client.show_server(test_server['id'])['server']
self.assertEqual(server['status'], 'ACTIVE')
- hostname = server[self._host_key]
+ hostname = server['OS-EXT-SRV-ATTR:host']
params = {'host': hostname}
body = self.client.list_servers(**params)
servers = body['servers']
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index 7437c14..23b16e7 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -34,13 +34,14 @@
cls.client = cls.os_adm.servers_client
cls.non_adm_client = cls.servers_client
cls.flavors_client = cls.os_adm.flavors_client
+ cls.quotas_client = cls.os_adm.quotas_client
@classmethod
def resource_setup(cls):
super(ServersAdminNegativeTestJSON, cls).resource_setup()
cls.tenant_id = cls.client.tenant_id
- cls.s1_name = data_utils.rand_name('server')
+ cls.s1_name = data_utils.rand_name(cls.__name__ + '-server')
server = cls.create_test_server(name=cls.s1_name,
wait_until='ACTIVE')
cls.s1_id = server['id']
@@ -64,11 +65,11 @@
self.useFixture(fixtures.LockFixture('compute_quotas'))
flavor_name = data_utils.rand_name("flavor")
flavor_id = self._get_unused_flavor_id()
- quota_set = (self.quotas_client.show_default_quota_set(self.tenant_id)
- ['quota_set'])
+ quota_set = self.quotas_client.show_quota_set(
+ self.tenant_id)['quota_set']
ram = int(quota_set['ram'])
if ram == -1:
- raise self.skipException("default ram quota set is -1,"
+ raise self.skipException("ram quota set is -1,"
" cannot test overlimit")
ram += 1
vcpus = 8
@@ -93,11 +94,11 @@
flavor_name = data_utils.rand_name("flavor")
flavor_id = self._get_unused_flavor_id()
ram = 512
- quota_set = (self.quotas_client.show_default_quota_set(self.tenant_id)
- ['quota_set'])
+ quota_set = self.quotas_client.show_quota_set(
+ self.tenant_id)['quota_set']
vcpus = int(quota_set['cores'])
if vcpus == -1:
- raise self.skipException("default cores quota set is -1,"
+ raise self.skipException("cores quota set is -1,"
" cannot test overlimit")
vcpus += 1
disk = 10
diff --git a/tempest/api/compute/admin/test_simple_tenant_usage.py b/tempest/api/compute/admin/test_simple_tenant_usage.py
index a4ed8dc..dbc22e0 100644
--- a/tempest/api/compute/admin/test_simple_tenant_usage.py
+++ b/tempest/api/compute/admin/test_simple_tenant_usage.py
@@ -16,6 +16,7 @@
import datetime
from tempest.api.compute import base
+from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as e
from tempest import test
@@ -59,8 +60,8 @@
return True
except e.InvalidHTTPResponseBody:
return False
- self.assertEqual(test.call_until_true(is_valid, duration, 1), True,
- "%s not return valid response in %s secs" % (
+ self.assertEqual(test_utils.call_until_true(is_valid, duration, 1),
+ True, "%s not return valid response in %s secs" % (
func.__name__, duration))
return self.resp
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
new file mode 100644
index 0000000..f603abd
--- /dev/null
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -0,0 +1,75 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.compute import base
+from tempest.common import waiters
+from tempest import config
+from tempest import test
+
+CONF = config.CONF
+
+
+class TestVolumeSwap(base.BaseV2ComputeAdminTest):
+ """The test suite for swapping of volume with admin user.
+
+ The following is the scenario outline:
+ 1. Create a volume "volume1" with non-admin.
+ 2. Create a volume "volume2" with non-admin.
+ 3. Boot an instance "instance1" with non-admin.
+ 4. Attach "volume1" to "instance1" with non-admin.
+ 5. Swap volume from "volume1" to "volume2" as admin.
+ 6. Check the swap volume is successful and "volume2"
+ is attached to "instance1" and "volume1" is in available state.
+ """
+
+ @classmethod
+ def skip_checks(cls):
+ super(TestVolumeSwap, cls).skip_checks()
+ if not CONF.compute_feature_enabled.swap_volume:
+ raise cls.skipException("Swapping volumes is not supported.")
+
+ @classmethod
+ def setup_clients(cls):
+ super(TestVolumeSwap, cls).setup_clients()
+ # We need the admin client for performing the update (swap) volume call
+ cls.servers_admin_client = cls.os_adm.servers_client
+
+ @test.idempotent_id('1769f00d-a693-4d67-a631-6a3496773813')
+ @test.services('volume')
+ def test_volume_swap(self):
+ # Create two volumes.
+ # NOTE(gmann): Volumes are created before server creation so that
+ # volumes cleanup can happen successfully irrespective of which volume
+ # is attached to server.
+ volume1 = self.create_volume()
+ volume2 = self.create_volume()
+ # Boot server
+ server = self.create_test_server(wait_until='ACTIVE')
+ # Attach "volume1" to server
+ self.attach_volume(server, volume1)
+ # Swap volume from "volume1" to "volume2"
+ self.servers_admin_client.update_attached_volume(
+ server['id'], volume1['id'], volumeId=volume2['id'])
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume1['id'], 'available')
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume2['id'], 'in-use')
+ self.addCleanup(self.servers_client.detach_volume,
+ server['id'], volume2['id'])
+ # Verify "volume2" is attached to the server
+ vol_attachments = self.servers_client.list_volume_attachments(
+ server['id'])['volumeAttachments']
+ self.assertEqual(1, len(vol_attachments))
+ self.assertIn(volume2['id'], vol_attachments[0]['volumeId'])
+
+ # TODO(mriedem): Test swapping back from volume2 to volume1 after
+ # nova bug 1490236 is fixed.
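
A detail worth noting in the swap call above: the second positional argument
is the volume that is currently attached (its id doubles as the attachment
id in the URL), while the volumeId parameter names the replacement volume.
Roughly::

    # PUT /servers/{server_id}/os-volume_attachments/{old volume id}
    # body: {"volumeAttachment": {"volumeId": "<new volume id>"}}
    self.servers_admin_client.update_attached_volume(
        server['id'], volume1['id'], volumeId=volume2['id'])
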
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 37aa5ac..b738e82 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -120,6 +120,7 @@
cls.images = []
cls.security_groups = []
cls.server_groups = []
+ cls.volumes = []
@classmethod
def resource_cleanup(cls):
@@ -127,6 +128,7 @@
cls.clear_servers()
cls.clear_security_groups()
cls.clear_server_groups()
+ cls.clear_volumes()
super(BaseV2ComputeTest, cls).resource_cleanup()
@classmethod
@@ -219,6 +221,8 @@
:param validatable: Whether the server will be pingable or sshable.
:param volume_backed: Whether the instance is volume backed or not.
"""
+ if 'name' not in kwargs:
+ kwargs['name'] = data_utils.rand_name(cls.__name__ + "-server")
tenant_network = cls.get_tenant_network()
body, servers = compute.create_test_server(
cls.os,
@@ -359,7 +363,7 @@
for address in addresses:
if address['version'] == CONF.validation.ip_version_for_ssh:
return address['addr']
- raise exceptions.ServerUnreachable()
+ raise exceptions.ServerUnreachable(server_id=server['id'])
else:
raise exceptions.InvalidConfiguration()
@@ -368,6 +372,66 @@
self.useFixture(api_microversion_fixture.APIMicroversionFixture(
self.request_microversion))
+ @classmethod
+ def create_volume(cls):
+ """Create a volume and wait for it to become 'available'.
+
+ :returns: The available volume.
+ """
+ vol_name = data_utils.rand_name(cls.__name__ + '-volume')
+ volume = cls.volumes_client.create_volume(
+ size=CONF.volume.volume_size, display_name=vol_name)['volume']
+ cls.volumes.append(volume)
+ waiters.wait_for_volume_status(cls.volumes_client,
+ volume['id'], 'available')
+ return volume
+
+ @classmethod
+ def clear_volumes(cls):
+ LOG.debug('Clearing volumes: %s', ','.join(
+ volume['id'] for volume in cls.volumes))
+ for volume in cls.volumes:
+ try:
+ test_utils.call_and_ignore_notfound_exc(
+ cls.volumes_client.delete_volume, volume['id'])
+ except Exception:
+ LOG.exception('Deleting volume %s failed', volume['id'])
+
+ for volume in cls.volumes:
+ try:
+ cls.volumes_client.wait_for_resource_deletion(volume['id'])
+ except Exception:
+ LOG.exception('Waiting for deletion of volume %s failed',
+ volume['id'])
+
+ def attach_volume(self, server, volume, device=None):
+ """Attaches volume to server and waits for 'in-use' volume status.
+
+ The volume will be detached when the test tears down.
+
+ :param server: The server to which the volume will be attached.
+ :param volume: The volume to attach.
+ :param device: Optional mountpoint for the attached volume. Note that
+ this is not guaranteed for all hypervisors and is not recommended.
+ """
+ attach_kwargs = dict(volumeId=volume['id'])
+ if device:
+ attach_kwargs['device'] = device
+ self.servers_client.attach_volume(
+ server['id'], **attach_kwargs)
+ # On teardown detach the volume and wait for it to be available. This
+ # is so we don't error out when trying to delete the volume during
+ # teardown.
+ self.addCleanup(waiters.wait_for_volume_status,
+ self.volumes_client, volume['id'], 'available')
+ # Ignore 404s on detach in case the server is deleted or the volume
+ # is already detached.
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.detach_volume,
+ server['id'], volume['id'])
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume['id'], 'in-use')
+
class BaseV2ComputeAdminTest(BaseV2ComputeTest):
"""Base test case class for Compute Admin API tests."""
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
deleted file mode 100644
index 83f8e19..0000000
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.compute import base
-from tempest.api_schema.request.compute.v2 import flavors
-from tempest import config
-from tempest import test
-
-
-CONF = config.CONF
-
-load_tests = test.NegativeAutoTest.load_tests
-
-
-@test.SimpleNegativeAutoTest
-class FlavorsListWithDetailsNegativeTestJSON(base.BaseV2ComputeTest,
- test.NegativeAutoTest):
- _service = CONF.compute.catalog_type
- _schema = flavors.flavor_list
-
-
-@test.SimpleNegativeAutoTest
-class FlavorDetailsNegativeTestJSON(base.BaseV2ComputeTest,
- test.NegativeAutoTest):
- _service = CONF.compute.catalog_type
- _schema = flavors.flavors_details
-
- @classmethod
- def resource_setup(cls):
- super(FlavorDetailsNegativeTestJSON, cls).resource_setup()
- cls.set_resource("flavor", cls.flavor_ref)
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions.py b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
index 3508ba9..fdf1e93 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.compute.floating_ips import base
-from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
@@ -112,8 +111,7 @@
# positive test:Association of an already associated floating IP
# to specific server should change the association of the Floating IP
# Create server so as to use for Multiple association
- new_name = data_utils.rand_name('floating_server')
- body = self.create_test_server(name=new_name)
+ body = self.create_test_server()
waiters.wait_for_server_status(self.servers_client,
body['id'], 'ACTIVE')
self.new_server_id = body['id']
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips.py b/tempest/api/compute/floating_ips/test_list_floating_ips.py
index 5738293..222bf18 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips.py
@@ -41,8 +41,8 @@
@classmethod
def resource_cleanup(cls):
- for i in range(3):
- cls.client.delete_floating_ip(cls.floating_ip_id[i])
+ for f_id in cls.floating_ip_id:
+ cls.client.delete_floating_ip(f_id)
super(FloatingIPDetailsTestJSON, cls).resource_cleanup()
@test.idempotent_id('16db31c3-fb85-40c9-bbe2-8cf7b67ff99f')
diff --git a/tempest/api/compute/images/test_image_metadata.py b/tempest/api/compute/images/test_image_metadata.py
index 6f80730..999233d 100644
--- a/tempest/api/compute/images/test_image_metadata.py
+++ b/tempest/api/compute/images/test_image_metadata.py
@@ -71,7 +71,7 @@
body = body['image'] if 'image' in body else body
cls.image_id = body['id']
cls.images.append(cls.image_id)
- image_file = six.StringIO(('*' * 1024))
+ image_file = six.BytesIO((b'*' * 1024))
if CONF.image_feature_enabled.api_v1:
cls.glance_client.update_image(cls.image_id, data=image_file)
else:
diff --git a/tempest/api/compute/images/test_images.py b/tempest/api/compute/images/test_images.py
index 150e8af..154d717 100644
--- a/tempest/api/compute/images/test_images.py
+++ b/tempest/api/compute/images/test_images.py
@@ -42,13 +42,14 @@
@test.idempotent_id('aa06b52b-2db5-4807-b218-9441f75d74e3')
def test_delete_saving_image(self):
- snapshot_name = data_utils.rand_name('test-snap')
server = self.create_test_server(wait_until='ACTIVE')
self.addCleanup(self.servers_client.delete_server, server['id'])
image = self.create_image_from_server(server['id'],
- name=snapshot_name,
wait_until='SAVING')
self.client.delete_image(image['id'])
+ msg = ('The image with ID {image_id} failed to be deleted'
+ .format(image_id=image['id']))
+ self.assertTrue(self.client.is_resource_deleted(image['id']), msg)
@test.idempotent_id('aaacd1d0-55a2-4ce8-818a-b5439df8adc9')
def test_create_image_from_stopped_server(self):
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index 7b978ab..6c417f1 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -19,6 +19,7 @@
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
+from tempest.lib.common.utils import test_utils
from tempest import test
CONF = config.CONF
@@ -27,26 +28,6 @@
class ImagesOneServerTestJSON(base.BaseV2ComputeTest):
- def tearDown(self):
- """Terminate test instances created after a test is executed."""
- self.server_check_teardown()
- super(ImagesOneServerTestJSON, self).tearDown()
-
- def setUp(self):
- # NOTE(afazekas): Normally we use the same server with all test cases,
- # but if it has an issue, we build a new one
- super(ImagesOneServerTestJSON, self).setUp()
- # Check if the server is in a clean state after test
- try:
- waiters.wait_for_server_status(self.servers_client,
- self.server_id, 'ACTIVE')
- except Exception:
- LOG.exception('server %s timed out to become ACTIVE. rebuilding'
- % self.server_id)
- # Rebuild server if cannot reach the ACTIVE state
- # Usually it means the server had a serious accident
- self.__class__.server_id = self.rebuild_server(self.server_id)
-
@classmethod
def skip_checks(cls):
super(ImagesOneServerTestJSON, cls).skip_checks()
@@ -74,6 +55,18 @@
flavor = self.flavors_client.show_flavor(flavor_id)['flavor']
return flavor['disk']
+ @classmethod
+ def _rebuild_server_when_fails(cls, server_id):
+ try:
+ waiters.wait_for_server_status(cls.servers_client,
+ server_id, 'ACTIVE')
+ except Exception:
+ LOG.exception('server %s timed out waiting to become ACTIVE; rebuilding'
+ % server_id)
+ # Rebuild server if cannot reach the ACTIVE state
+ # Usually it means the server had a serious accident
+ cls.server_id = cls.rebuild_server(server_id)
+
@test.idempotent_id('3731d080-d4c5-4872-b41a-64d0d0021314')
def test_create_delete_image(self):
@@ -83,6 +76,8 @@
body = self.client.create_image(self.server_id, name=name,
metadata=meta)
image_id = data_utils.parse_image_id(body.response['location'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.client.delete_image, image_id)
waiters.wait_for_image_status(self.client, image_id, 'ACTIVE')
# Verify the image was created correctly
@@ -103,6 +98,7 @@
# Verify the image was deleted correctly
self.client.delete_image(image_id)
self.client.wait_for_resource_deletion(image_id)
+ self.addCleanup(self._rebuild_server_when_fails, self.server_id)
@test.idempotent_id('3b7c6fe4-dfe7-477c-9243-b06359db51e6')
def test_create_image_specify_multibyte_character_image_name(self):
@@ -116,3 +112,4 @@
body = self.client.create_image(self.server_id, name=utf8_name)
image_id = data_utils.parse_image_id(body.response['location'])
self.addCleanup(self.client.delete_image, image_id)
+ self.addCleanup(self._rebuild_server_when_fails, self.server_id)
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index 9017461..a9c2f7a 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -23,7 +23,7 @@
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
-from tempest import exceptions
+from tempest.lib import exceptions
from tempest import test
CONF = config.CONF
@@ -60,7 +60,7 @@
def _create_image():
params = {
- 'name': data_utils.rand_name('image'),
+ 'name': data_utils.rand_name(cls.__name__ + '-image'),
'container_format': 'bare',
'disk_format': 'raw'
}
@@ -78,7 +78,7 @@
# Wait 1 second between creation and upload to ensure a delta
# between created_at and updated_at.
time.sleep(1)
- image_file = six.StringIO(('*' * 1024))
+ image_file = six.BytesIO((b'*' * 1024))
if CONF.image_feature_enabled.api_v1:
cls.glance_client.update_image(image_id, data=image_file)
else:
diff --git a/tempest/api/compute/security_groups/test_security_group_rules_negative.py b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
index 853ef31..4f53663 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules_negative.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
@@ -23,7 +23,8 @@
def not_existing_id():
- if CONF.service_available.neutron:
+ if (CONF.service_available.neutron and
+ test.is_extension_enabled('security-group', 'network')):
return data_utils.rand_uuid()
else:
return data_utils.rand_int_id(start=999)
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index f6353c8..755336f 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -94,8 +94,7 @@
# Create server and add the security group created
# above to the server we just created
- server_name = data_utils.rand_name('server')
- server = self.create_test_server(name=server_name)
+ server = self.create_test_server()
server_id = server['id']
waiters.wait_for_server_status(self.servers_client, server_id,
'ACTIVE')
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index 5125e2b..e6abf28 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -44,9 +44,11 @@
security_group_id.append(body[i]['id'])
# Generate a non-existent security group id
while True:
- non_exist_id = data_utils.rand_int_id(start=999)
- if self.neutron_available:
+ if (self.neutron_available and
+ test.is_extension_enabled('security-group', 'network')):
non_exist_id = data_utils.rand_uuid()
+ else:
+ non_exist_id = data_utils.rand_int_id(start=999)
if non_exist_id not in security_group_id:
break
return non_exist_id
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index d02f86f..a8c59ca 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -21,6 +21,7 @@
from tempest.common import waiters
from tempest import config
from tempest import exceptions
+from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -46,22 +47,20 @@
@classmethod
def setup_clients(cls):
super(AttachInterfacesTestJSON, cls).setup_clients()
- cls.client = cls.os.interfaces_client
cls.networks_client = cls.os.networks_client
cls.subnets_client = cls.os.subnets_client
cls.ports_client = cls.os.ports_client
- cls.servers_client = cls.servers_client
def wait_for_interface_status(self, server, port_id, status):
"""Waits for an interface to reach a given status."""
- body = (self.client.show_interface(server, port_id)
+ body = (self.interfaces_client.show_interface(server, port_id)
['interfaceAttachment'])
interface_status = body['port_state']
start = int(time.time())
while(interface_status != status):
time.sleep(self.build_interval)
- body = (self.client.show_interface(server, port_id)
+ body = (self.interfaces_client.show_interface(server, port_id)
['interfaceAttachment'])
interface_status = body['port_state']
@@ -118,7 +117,7 @@
def _create_server_get_interfaces(self):
server = self.create_test_server(wait_until='ACTIVE')
- ifs = (self.client.list_interfaces(server['id'])
+ ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
body = self.wait_for_interface_status(
server['id'], ifs[0]['port_id'], 'ACTIVE')
@@ -126,7 +125,7 @@
return server, ifs
def _test_create_interface(self, server):
- iface = (self.client.create_interface(server['id'])
+ iface = (self.interfaces_client.create_interface(server['id'])
['interfaceAttachment'])
iface = self.wait_for_interface_status(
server['id'], iface['port_id'], 'ACTIVE')
@@ -135,7 +134,7 @@
def _test_create_interface_by_network_id(self, server, ifs):
network_id = ifs[0]['net_id']
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], net_id=network_id)['interfaceAttachment']
iface = self.wait_for_interface_status(
server['id'], iface['port_id'], 'ACTIVE')
@@ -147,7 +146,7 @@
port = self.ports_client.create_port(network_id=network_id)
port_id = port['port']['id']
self.addCleanup(self.ports_client.delete_port, port_id)
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], port_id=port_id)['interfaceAttachment']
iface = self.wait_for_interface_status(
server['id'], iface['port_id'], 'ACTIVE')
@@ -164,7 +163,7 @@
1)
fixed_ips = [{'ip_address': ip_list[0]}]
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], net_id=network_id,
fixed_ips=fixed_ips)['interfaceAttachment']
self.addCleanup(self.ports_client.delete_port, iface['port_id'])
@@ -175,7 +174,7 @@
def _test_show_interface(self, server, ifs):
iface = ifs[0]
- _iface = self.client.show_interface(
+ _iface = self.interfaces_client.show_interface(
server['id'], iface['port_id'])['interfaceAttachment']
self._check_interface(iface, port_id=_iface['port_id'],
network_id=_iface['net_id'],
@@ -185,14 +184,14 @@
def _test_delete_interface(self, server, ifs):
# NOTE(danms): delete not the first or last, but one in the middle
iface = ifs[1]
- self.client.delete_interface(server['id'], iface['port_id'])
- _ifs = (self.client.list_interfaces(server['id'])
+ self.interfaces_client.delete_interface(server['id'], iface['port_id'])
+ _ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
start = int(time.time())
while len(ifs) == len(_ifs):
time.sleep(self.build_interval)
- _ifs = (self.client.list_interfaces(server['id'])
+ _ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
timed_out = int(time.time()) - start >= self.build_timeout
if len(ifs) == len(_ifs) and timed_out:
@@ -216,7 +215,7 @@
def test_create_list_show_delete_interfaces(self):
server, ifs = self._create_server_get_interfaces()
interface_count = len(ifs)
- self.assertTrue(interface_count > 0)
+ self.assertGreater(interface_count, 0)
self._check_interface(ifs[0])
try:
@@ -238,7 +237,7 @@
iface = self._test_create_interface_by_fixed_ips(server, ifs)
ifs.append(iface)
- _ifs = (self.client.list_interfaces(server['id'])
+ _ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
self._compare_iface_list(ifs, _ifs)
@@ -254,7 +253,7 @@
# Add and Remove the fixed IP to server.
server, ifs = self._create_server_get_interfaces()
interface_count = len(ifs)
- self.assertTrue(interface_count > 0)
+ self.assertGreater(interface_count, 0)
self._check_interface(ifs[0])
network_id = ifs[0]['net_id']
self.servers_client.add_fixed_ip(server['id'], networkId=network_id)
@@ -272,6 +271,7 @@
break
self.servers_client.remove_fixed_ip(server['id'], address=fixed_ip)
+ @decorators.skip_because(bug='1607714')
@test.idempotent_id('2f3a0127-95c7-4977-92d2-bc5aec602fb4')
def test_reassign_port_between_servers(self):
"""Tests the following:
@@ -300,11 +300,11 @@
for server in servers:
# attach the port to the server
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], port_id=port_id)['interfaceAttachment']
self._check_interface(iface, port_id=port_id)
# detach the port from the server; this is a cast in the compute
# API so we have to poll the port until the device_id is unset.
- self.client.delete_interface(server['id'], port_id)
+ self.interfaces_client.delete_interface(server['id'], port_id)
self.wait_for_port_detach(port_id)
diff --git a/tempest/api/compute/servers/test_availability_zone.py b/tempest/api/compute/servers/test_availability_zone.py
index 76da317..00df86b 100644
--- a/tempest/api/compute/servers/test_availability_zone.py
+++ b/tempest/api/compute/servers/test_availability_zone.py
@@ -29,4 +29,4 @@
def test_get_availability_zone_list_with_non_admin_user(self):
# List of availability zone with non-administrator user
availability_zone = self.client.list_availability_zones()
- self.assertTrue(len(availability_zone['availabilityZoneInfo']) > 0)
+ self.assertGreater(len(availability_zone['availabilityZoneInfo']), 0)
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index da9d548..a48c17b 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -48,7 +48,7 @@
cls.meta = {'hello': 'world'}
cls.accessIPv4 = '1.1.1.1'
cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2'
- cls.name = data_utils.rand_name('server')
+ cls.name = data_utils.rand_name(cls.__name__ + '-server')
cls.password = data_utils.rand_password()
disk_config = cls.disk_config
cls.server_initial = cls.create_test_server(
@@ -139,19 +139,15 @@
hostname = linux_client.get_hostname()
msg = ('Failed while verifying servername equals hostname. Expected '
'hostname "%s" but got "%s".' % (self.name, hostname))
- self.assertEqual(self.name, hostname, msg)
+ self.assertEqual(self.name.lower(), hostname, msg)
@test.idempotent_id('ed20d3fb-9d1f-4329-b160-543fbd5d9811')
+ @testtools.skipUnless(
+ test.is_scheduler_filter_enabled("ServerGroupAffinityFilter"),
+ 'ServerGroupAffinityFilter is not available.')
def test_create_server_with_scheduler_hint_group(self):
# Create a server with the scheduler hint "group".
- name = data_utils.rand_name('server_group')
- policies = ['affinity']
- body = self.server_groups_client.create_server_group(
- name=name, policies=policies)['server_group']
- group_id = body['id']
- self.addCleanup(self.server_groups_client.delete_server_group,
- group_id)
-
+ group_id = self.create_test_server_group()['id']
hints = {'group': group_id}
server = self.create_test_server(scheduler_hints=hints,
wait_until='ACTIVE')
@@ -268,37 +264,23 @@
flavor_base = self.flavors_client.show_flavor(
self.flavor_ref)['flavor']
- def create_flavor_with_extra_specs():
- flavor_with_eph_disk_name = data_utils.rand_name('eph_flavor')
+ def create_flavor_with_ephemeral(ephem_disk):
+ if ephem_disk > 0:
+ flavor_name = data_utils.rand_name('eph_flavor')
+ else:
+ flavor_name = data_utils.rand_name('no_eph_flavor')
flavor_with_eph_disk_id = data_utils.rand_int_id(start=1000)
ram = flavor_base['ram']
vcpus = flavor_base['vcpus']
disk = flavor_base['disk']
- # Create a flavor with extra specs
+ # Create a flavor with ephemeral disk
flavor = (self.flavor_client.
- create_flavor(name=flavor_with_eph_disk_name,
+ create_flavor(name=flavor_name,
ram=ram, vcpus=vcpus, disk=disk,
id=flavor_with_eph_disk_id,
- ephemeral=1))['flavor']
- self.addCleanup(flavor_clean_up, flavor['id'])
-
- return flavor['id']
-
- def create_flavor_without_extra_specs():
- flavor_no_eph_disk_name = data_utils.rand_name('no_eph_flavor')
- flavor_no_eph_disk_id = data_utils.rand_int_id(start=1000)
-
- ram = flavor_base['ram']
- vcpus = flavor_base['vcpus']
- disk = flavor_base['disk']
-
- # Create a flavor without extra specs
- flavor = (self.flavor_client.
- create_flavor(name=flavor_no_eph_disk_name,
- ram=ram, vcpus=vcpus, disk=disk,
- id=flavor_no_eph_disk_id))['flavor']
+ ephemeral=ephem_disk))['flavor']
self.addCleanup(flavor_clean_up, flavor['id'])
return flavor['id']
@@ -307,8 +289,8 @@
self.flavor_client.delete_flavor(flavor_id)
self.flavor_client.wait_for_resource_deletion(flavor_id)
- flavor_with_eph_disk_id = create_flavor_with_extra_specs()
- flavor_no_eph_disk_id = create_flavor_without_extra_specs()
+ flavor_with_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=1)
+ flavor_no_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=0)
admin_pass = self.image_ssh_password
@@ -318,7 +300,7 @@
adminPass=admin_pass,
flavor=flavor_no_eph_disk_id)
- # Get partition number of server without extra specs.
+ # Get partition number of server without ephemeral disk.
server_no_eph_disk = self.client.show_server(
server_no_eph_disk['id'])['server']
linux_client = remote_client.RemoteClient(
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 079465d..07f46c5 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -106,24 +106,15 @@
@test.services('volume')
def test_delete_server_while_in_attached_volume(self):
# Delete a server while a volume is attached to it
- volumes_client = self.volumes_extensions_client
device = '/dev/%s' % CONF.compute.volume_device_name
server = self.create_test_server(wait_until='ACTIVE')
- volume = (volumes_client.create_volume(size=CONF.volume.volume_size)
- ['volume'])
- self.addCleanup(volumes_client.delete_volume, volume['id'])
- waiters.wait_for_volume_status(volumes_client,
- volume['id'], 'available')
- self.client.attach_volume(server['id'],
- volumeId=volume['id'],
- device=device)
- waiters.wait_for_volume_status(volumes_client,
- volume['id'], 'in-use')
+ volume = self.create_volume()
+ self.attach_volume(server, volume, device=device)
self.client.delete_server(server['id'])
waiters.wait_for_server_termination(self.client, server['id'])
- waiters.wait_for_volume_status(volumes_client,
+ waiters.wait_for_volume_status(self.volumes_client,
volume['id'], 'available')
diff --git a/tempest/api/compute/servers/test_instance_actions.py b/tempest/api/compute/servers/test_instance_actions.py
index 1367629..a229df8 100644
--- a/tempest/api/compute/servers/test_instance_actions.py
+++ b/tempest/api/compute/servers/test_instance_actions.py
@@ -40,9 +40,9 @@
body = (self.client.list_instance_actions(self.server_id)
['instanceActions'])
- self.assertTrue(len(body) == 2, str(body))
- self.assertTrue(any([i for i in body if i['action'] == 'create']))
- self.assertTrue(any([i for i in body if i['action'] == 'reboot']))
+ self.assertEqual(len(body), 2, str(body))
+ self.assertEqual(sorted([i['action'] for i in body]),
+ ['create', 'reboot'])
@test.idempotent_id('aacc71ca-1d70-4aa5-bbf6-0ff71470e43c')
def test_get_instance_action(self):
@@ -51,3 +51,27 @@
self.server_id, self.request_id)['instanceAction']
self.assertEqual(self.server_id, body['instance_uuid'])
self.assertEqual('create', body['action'])
+
+
+class InstanceActionsV221TestJSON(base.BaseV2ComputeTest):
+
+ min_microversion = '2.21'
+ max_microversion = 'latest'
+
+ @classmethod
+ def setup_clients(cls):
+ super(InstanceActionsV221TestJSON, cls).setup_clients()
+ cls.client = cls.servers_client
+
+ @test.idempotent_id('0a0f85d4-10fa-41f6-bf80-a54fb4aa2ae1')
+ def test_get_list_deleted_instance_actions(self):
+
+ # List actions of the deleted server
+ server = self.create_test_server(wait_until='ACTIVE')
+ self.client.delete_server(server['id'])
+ waiters.wait_for_server_termination(self.client, server['id'])
+ body = (self.client.list_instance_actions(server['id'])
+ ['instanceActions'])
+ self.assertEqual(len(body), 2, str(body))
+ self.assertEqual(sorted([i['action'] for i in body]),
+ ['create', 'delete'])
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index c1fbb12..611d5a2 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -17,13 +17,10 @@
from tempest.common import fixed_network
from tempest.common.utils import data_utils
from tempest.common import waiters
-from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest import test
-CONF = config.CONF
-
class ListServerFiltersTestJSON(base.BaseV2ComputeTest):
@@ -122,8 +119,8 @@
self.assertNotIn(self.s3_name, map(lambda x: x['name'], servers))
@test.idempotent_id('ca78e20e-fddb-4ce6-b7f7-bcbf8605e66e')
- def test_list_servers_filter_by_server_status(self):
- # Filter the list of servers by server status
+ def test_list_servers_filter_by_active_status(self):
+ # Filter the list of servers by the active status
params = {'status': 'active'}
body = self.client.list_servers(**params)
servers = body['servers']
@@ -274,14 +271,9 @@
msg = 'fixed_network_name needs to be configured to run this test'
raise self.skipException(msg)
self.s1 = self.client.show_server(self.s1['id'])['server']
- for addr_spec in self.s1['addresses'][self.fixed_network_name]:
- ip = addr_spec['addr']
- if addr_spec['version'] == 4:
- params = {'ip': ip}
- break
- else:
- msg = "Skipped until bug 1450859 is resolved"
- raise self.skipException(msg)
+ # Get the first ip address regardless of ip version (v4 or v6)
+ addr_spec = self.s1['addresses'][self.fixed_network_name][0]
+ params = {'ip': addr_spec['addr']}
body = self.client.list_servers(**params)
servers = body['servers']
diff --git a/tempest/api/compute/servers/test_list_servers_negative.py b/tempest/api/compute/servers/test_list_servers_negative.py
index 357c907..fcd5a24 100644
--- a/tempest/api/compute/servers/test_list_servers_negative.py
+++ b/tempest/api/compute/servers/test_list_servers_negative.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
-
from tempest.api.compute import base
from tempest.common import waiters
from tempest.lib import exceptions as lib_exc
@@ -38,7 +36,7 @@
# tearDownClass method of the super-class.
cls.existing_fixtures = []
cls.deleted_fixtures = []
- for x in moves.xrange(2):
+ for x in range(2):
srv = cls.create_test_server(wait_until='ACTIVE')
cls.existing_fixtures.append(srv)
diff --git a/tempest/api/compute/servers/test_multiple_create.py b/tempest/api/compute/servers/test_multiple_create.py
index eb1beb1..9fc30f9 100644
--- a/tempest/api/compute/servers/test_multiple_create.py
+++ b/tempest/api/compute/servers/test_multiple_create.py
@@ -14,32 +14,16 @@
# under the License.
from tempest.api.compute import base
-from tempest.common.utils import data_utils
from tempest import test
class MultipleCreateTestJSON(base.BaseV2ComputeTest):
- _name = 'multiple-create-test'
-
- def _generate_name(self):
- return data_utils.rand_name(self._name)
-
- def _create_multiple_servers(self, name=None, wait_until=None, **kwargs):
- # NOTE: This is the right way to create_multiple servers and manage to
- # get the created servers into the servers list to be cleaned up after
- # all.
- kwargs['name'] = name if name else self._generate_name()
- if wait_until:
- kwargs['wait_until'] = wait_until
- body = self.create_test_server(**kwargs)
-
- return body
@test.idempotent_id('61e03386-89c3-449c-9bb1-a06f423fd9d1')
def test_multiple_create(self):
- body = self._create_multiple_servers(wait_until='ACTIVE',
- min_count=1,
- max_count=2)
+ body = self.create_test_server(wait_until='ACTIVE',
+ min_count=1,
+ max_count=2)
# NOTE(maurosr): do status response check and also make sure that
# reservation_id is not in the response body when the request send
# contains return_reservation_id=False
@@ -47,8 +31,8 @@
@test.idempotent_id('864777fb-2f1e-44e3-b5b9-3eb6fa84f2f7')
def test_multiple_create_with_reservation_return(self):
- body = self._create_multiple_servers(wait_until='ACTIVE',
- min_count=1,
- max_count=2,
- return_reservation_id=True)
+ body = self.create_test_server(wait_until='ACTIVE',
+ min_count=1,
+ max_count=2,
+ return_reservation_id=True)
self.assertIn('reservation_id', body)
diff --git a/tempest/api/compute/servers/test_multiple_create_negative.py b/tempest/api/compute/servers/test_multiple_create_negative.py
index e5b4f46..c4dbe23 100644
--- a/tempest/api/compute/servers/test_multiple_create_negative.py
+++ b/tempest/api/compute/servers/test_multiple_create_negative.py
@@ -22,13 +22,10 @@
class MultipleCreateNegativeTestJSON(base.BaseV2ComputeTest):
_name = 'multiple-create-test'
- def _generate_name(self):
- return data_utils.rand_name(self._name)
-
- def _create_multiple_servers(self, name=None, wait_until=None, **kwargs):
+ def _create_multiple_servers(self, **kwargs):
# This is the right way to create_multiple servers and manage to get
# the created servers into the servers list to be cleaned up after all.
- kwargs['name'] = kwargs.get('name', self._generate_name())
+ kwargs['name'] = kwargs.get('name', data_utils.rand_name(self._name))
body = self.create_test_server(**kwargs)
return body
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 21e7d10..9077801 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -24,7 +24,6 @@
from tempest.common.utils.linux import remote_client
from tempest.common import waiters
from tempest import config
-from tempest import exceptions
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -81,14 +80,19 @@
@testtools.skipUnless(CONF.compute_feature_enabled.change_password,
'Change password not available.')
def test_change_server_password(self):
+ # Since this test messes with the password and makes the
+ # server unreachable, it should create its own server
+ newserver = self.create_test_server(
+ validatable=True,
+ wait_until='ACTIVE')
# The server's password should be set to the provided password
new_password = 'Newpass1234'
- self.client.change_password(self.server_id, adminPass=new_password)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.client.change_password(newserver['id'], adminPass=new_password)
+ waiters.wait_for_server_status(self.client, newserver['id'], 'ACTIVE')
if CONF.validation.run_validation:
# Verify that the user can authenticate with the new password
- server = self.client.show_server(self.server_id)['server']
+ server = self.client.show_server(newserver['id'])['server']
linux_client = remote_client.RemoteClient(
self.get_server_ip(server),
self.ssh_user,
@@ -155,7 +159,7 @@
def test_rebuild_server(self):
# The server should be rebuilt using the provided image and data
meta = {'rebuild': 'server'}
- new_name = data_utils.rand_name('server')
+ new_name = data_utils.rand_name(self.__class__.__name__ + '-server')
password = 'rebuildPassw0rd'
rebuilt_server = self.client.rebuild_server(
self.server_id,
@@ -235,20 +239,10 @@
@test.services('volume')
def test_rebuild_server_with_volume_attached(self):
# create a new volume and attach it to the server
- volume = self.volumes_client.create_volume(
- size=CONF.volume.volume_size)
- volume = volume['volume']
- self.addCleanup(self.volumes_client.delete_volume, volume['id'])
- waiters.wait_for_volume_status(self.volumes_client, volume['id'],
- 'available')
+ volume = self.create_volume()
- self.client.attach_volume(self.server_id, volumeId=volume['id'])
- self.addCleanup(waiters.wait_for_volume_status, self.volumes_client,
- volume['id'], 'available')
- self.addCleanup(self.client.detach_volume,
- self.server_id, volume['id'])
- waiters.wait_for_volume_status(self.volumes_client, volume['id'],
- 'in-use')
+ server = self.client.show_server(self.server_id)['server']
+ self.attach_volume(server, volume)
# run general rebuild test
self.test_rebuild_server()
@@ -270,6 +264,9 @@
'SHUTOFF')
self.client.resize_server(self.server_id, self.flavor_ref_alt)
+ # NOTE(jlk): Explicitly delete the server to get a new one for later
+ # tests. Avoids resize down race issues.
+ self.addCleanup(self.delete_server, self.server_id)
waiters.wait_for_server_status(self.client, self.server_id,
'VERIFY_RESIZE')
@@ -285,10 +282,6 @@
# NOTE(mriedem): tearDown requires the server to be started.
self.client.start_server(self.server_id)
- # NOTE(jlk): Explicitly delete the server to get a new one for later
- # tests. Avoids resize down race issues.
- self.addCleanup(self.delete_server, self.server_id)
-
@test.idempotent_id('1499262a-9328-4eda-9068-db1ac57498d2')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@@ -309,6 +302,9 @@
# values after a resize is reverted
self.client.resize_server(self.server_id, self.flavor_ref_alt)
+ # NOTE(zhufl): Explicitly delete the server to get a new one for later
+ # tests. Avoids resize down race issues.
+ self.addCleanup(self.delete_server, self.server_id)
waiters.wait_for_server_status(self.client, self.server_id,
'VERIFY_RESIZE')
@@ -334,7 +330,7 @@
elif CONF.image_feature_enabled.api_v2:
glance_client = self.os.image_client_v2
else:
- raise exceptions.InvalidConfiguration(
+ raise lib_exc.InvalidConfiguration(
'Either api_v1 or api_v2 must be True in '
'[image-feature-enabled].')
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index 864f38f..d31b6f8 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -49,9 +49,9 @@
# We do not know the exact network configuration, but an instance
# should at least have a single public or private address
- self.assertTrue(len(addresses) >= 1)
+ self.assertGreaterEqual(len(addresses), 1)
for network_name, network_addresses in six.iteritems(addresses):
- self.assertTrue(len(network_addresses) >= 1)
+ self.assertGreaterEqual(len(network_addresses), 1)
for address in network_addresses:
self.assertTrue(address['addr'])
self.assertTrue(address['version'])
diff --git a/tempest/api/compute/servers/test_server_group.py b/tempest/api/compute/servers/test_server_group.py
index e32f6b0..bc49e7b 100644
--- a/tempest/api/compute/servers/test_server_group.py
+++ b/tempest/api/compute/servers/test_server_group.py
@@ -19,12 +19,13 @@
class ServerGroupTestJSON(base.BaseV2ComputeTest):
- """These tests check for the server-group APIs
+ """These tests check for the server-group APIs.
They create/delete server-groups with different policies.
policies = affinity/anti-affinity
It also adds the tests for list and get details of server-groups
"""
+
@classmethod
def skip_checks(cls):
super(ServerGroupTestJSON, cls).skip_checks()
@@ -40,12 +41,10 @@
@classmethod
def resource_setup(cls):
super(ServerGroupTestJSON, cls).resource_setup()
- server_group_name = data_utils.rand_name('server-group')
cls.policy = ['affinity']
cls.created_server_group = cls.create_test_server_group(
- server_group_name,
- cls.policy)
+ policy=cls.policy)
def _create_server_group(self, name, policy):
# create the test server-group with given policy
diff --git a/tempest/api/compute/servers/test_server_personality.py b/tempest/api/compute/servers/test_server_personality.py
index baa4f9a..e5ad7b4 100644
--- a/tempest/api/compute/servers/test_server_personality.py
+++ b/tempest/api/compute/servers/test_server_personality.py
@@ -13,7 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-import base64
+from oslo_serialization import base64
from tempest.api.compute import base
from tempest.common.utils.linux import remote_client
@@ -55,7 +55,7 @@
file_contents = 'This is a test file.'
file_path = '/test.txt'
personality = [{'path': file_path,
- 'contents': base64.b64encode(file_contents)}]
+ 'contents': base64.encode_as_text(file_contents)}]
password = data_utils.rand_password()
created_server = self.create_test_server(personality=personality,
adminPass=password,
@@ -79,7 +79,7 @@
server_id = server['id']
file_contents = 'Test server rebuild.'
personality = [{'path': 'rebuild.txt',
- 'contents': base64.b64encode(file_contents)}]
+ 'contents': base64.encode_as_text(file_contents)}]
rebuilt_server = self.client.rebuild_server(server_id,
self.image_ref_alt,
personality=personality)
@@ -100,7 +100,8 @@
for i in range(0, int(max_file_limit) + 1):
path = 'etc/test' + str(i) + '.txt'
personality.append({'path': path,
- 'contents': base64.b64encode(file_contents)})
+ 'contents': base64.encode_as_text(
+ file_contents)})
# A 403 Forbidden or 413 Overlimit (old behaviour) exception
# will be raised when out of quota
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
@@ -120,7 +121,7 @@
path = '/etc/test' + str(i) + '.txt'
person.append({
'path': path,
- 'contents': base64.b64encode(file_contents),
+ 'contents': base64.encode_as_text(file_contents),
})
password = data_utils.rand_password()
created_server = self.create_test_server(personality=person,
@@ -136,6 +137,6 @@
server=server,
servers_client=self.client)
for i in person:
- self.assertEqual(base64.b64decode(i['contents']),
+ self.assertEqual(base64.decode_as_text(i['contents']),
linux_client.exec_command(
'sudo cat %s' % i['path']))
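
The personality hunks above swap the stdlib base64 module for oslo_serialization's wrapper. A small hedged sketch of the difference (it assumes oslo.serialization is installed, which it is for Tempest): encode_as_text/decode_as_text accept and return text strings, so the personality payload stays a str on both Python 2 and 3, whereas base64.b64encode() requires bytes and returns bytes on Python 3.

    from oslo_serialization import base64

    file_contents = 'This is a test file.'

    # Returns text ('VGhpcyBpcyBhIHRlc3QgZmlsZS4='), suitable for a JSON body.
    encoded = base64.encode_as_text(file_contents)

    # Round-trips back to the original text for the later ssh-based check.
    decoded = base64.decode_as_text(encoded)
    assert decoded == file_contents
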
diff --git a/tempest/api/compute/servers/test_server_rescue_negative.py b/tempest/api/compute/servers/test_server_rescue_negative.py
index 8d63b6b..41b648c 100644
--- a/tempest/api/compute/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/servers/test_server_rescue_negative.py
@@ -60,20 +60,6 @@
waiters.wait_for_server_status(cls.servers_client,
cls.server_id, 'ACTIVE')
- def _create_volume(self):
- volume = self.volumes_extensions_client.create_volume(
- size=CONF.volume.volume_size, display_name=data_utils.rand_name(
- self.__class__.__name__ + '_volume'))['volume']
- self.addCleanup(self.delete_volume, volume['id'])
- waiters.wait_for_volume_status(self.volumes_extensions_client,
- volume['id'], 'available')
- return volume
-
- def _detach(self, server_id, volume_id):
- self.servers_client.detach_volume(server_id, volume_id)
- waiters.wait_for_volume_status(self.volumes_extensions_client,
- volume_id, 'available')
-
def _unrescue(self, server_id):
self.servers_client.unrescue_server(server_id)
waiters.wait_for_server_status(self.servers_client,
@@ -125,7 +111,7 @@
@test.services('volume')
@test.attr(type=['negative'])
def test_rescued_vm_attach_volume(self):
- volume = self._create_volume()
+ volume = self.create_volume()
# Rescue the server
self.servers_client.rescue_server(self.server_id,
@@ -145,14 +131,11 @@
@test.services('volume')
@test.attr(type=['negative'])
def test_rescued_vm_detach_volume(self):
- volume = self._create_volume()
+ volume = self.create_volume()
# Attach the volume to the server
- self.servers_client.attach_volume(self.server_id,
- volumeId=volume['id'],
- device='/dev/%s' % self.device)
- waiters.wait_for_volume_status(self.volumes_extensions_client,
- volume['id'], 'in-use')
+ server = self.servers_client.show_server(self.server_id)['server']
+ self.attach_volume(server, volume, device='/dev/%s' % self.device)
# Rescue the server
self.servers_client.rescue_server(self.server_id,
@@ -160,7 +143,6 @@
waiters.wait_for_server_status(self.servers_client,
self.server_id, 'RESCUE')
# addCleanup is a LIFO queue
- self.addCleanup(self._detach, self.server_id, volume['id'])
self.addCleanup(self._unrescue, self.server_id)
# Detach the volume from the server expecting failure
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index e91857a..5aeba4e 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -52,7 +52,8 @@
# Creating a server with a name that already exists is allowed
# TODO(sdague): clear out try, we do cleanup one layer up
- server_name = data_utils.rand_name('server')
+ server_name = data_utils.rand_name(
+ self.__class__.__name__ + '-server')
server = self.create_test_server(name=server_name,
wait_until='ACTIVE')
id1 = server['id']
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index 10ea31d..89be3f3 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -253,7 +253,8 @@
# Update name of a non-existent server
nonexistent_server = data_utils.rand_uuid()
- new_name = data_utils.rand_name('server') + '_updated'
+ new_name = data_utils.rand_name(
+ self.__class__.__name__ + '-server') + '_updated'
self.assertRaises(lib_exc.NotFound, self.client.update_server,
nonexistent_server, name=new_name)
@@ -301,7 +302,7 @@
# Pass a server ID that exceeds length limit to delete server
self.assertRaises(lib_exc.NotFound, self.client.delete_server,
- sys.maxint + 1)
+ sys.maxsize + 1)
@test.attr(type=['negative'])
@test.idempotent_id('c5fa6041-80cd-483b-aa6d-4e45f19d093c')
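
The sys.maxint change above is a Python 3 portability fix; a short sketch of the replacement (nothing Tempest-specific): sys.maxint no longer exists on Python 3, while sys.maxsize is available on both interpreters and still yields an ID that cannot belong to any real server.

    import sys

    # An obviously-invalid server id for the NotFound negative test:
    # 9223372036854775808 on a typical 64-bit build.
    bogus_server_id = sys.maxsize + 1
    print(bogus_server_id)
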
diff --git a/tempest/api/compute/test_live_block_migration_negative.py b/tempest/api/compute/test_live_block_migration_negative.py
index dc57396..ffd274f 100644
--- a/tempest/api/compute/test_live_block_migration_negative.py
+++ b/tempest/api/compute/test_live_block_migration_negative.py
@@ -24,8 +24,6 @@
class LiveBlockMigrationNegativeTestJSON(base.BaseV2ComputeAdminTest):
- _host_key = 'OS-EXT-SRV-ATTR:host'
-
@classmethod
def skip_checks(cls):
super(LiveBlockMigrationNegativeTestJSON, cls).skip_checks()
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 05c23ee..d4831b1 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -57,122 +57,102 @@
waiters.wait_for_volume_status(self.volumes_client,
volume_id, 'available')
- def _delete_volume(self):
- # Delete the created Volumes
- if self.volume:
- self.volumes_client.delete_volume(self.volume['id'])
- self.volumes_client.wait_for_resource_deletion(self.volume['id'])
- self.volume = None
-
- def _create_and_attach(self, shelve_server=False):
+ def _create_server(self):
# Start a server and wait for it to become ready
- self.admin_pass = self.image_ssh_password
- self.server = self.create_test_server(
+ server = self.create_test_server(
validatable=True,
wait_until='ACTIVE',
- adminPass=self.admin_pass)
+ adminPass=self.image_ssh_password)
# Record addresses so that we can ssh later
- self.server['addresses'] = self.servers_client.list_addresses(
- self.server['id'])['addresses']
+ server['addresses'] = self.servers_client.list_addresses(
+ server['id'])['addresses']
+ return server
+ def _create_and_attach_volume(self, server):
# Create a volume and wait for it to become ready
- self.volume = self.volumes_client.create_volume(
- size=CONF.volume.volume_size, display_name='test')['volume']
- self.addCleanup(self._delete_volume)
- waiters.wait_for_volume_status(self.volumes_client,
- self.volume['id'], 'available')
-
- if shelve_server:
- # NOTE(andreaf) If we are going to shelve a server, we should
- # check first whether the server is ssh-able. Otherwise we won't
- # be able to distinguish failures introduced by shelve from
- # pre-existing ones. Also it's good to wait for cloud-init to be
- # done and sshd server to be running before shelving to avoid
- # breaking the VM
- linux_client = remote_client.RemoteClient(
- self.get_server_ip(self.server),
- self.image_ssh_user,
- self.admin_pass,
- self.validation_resources['keypair']['private_key'])
- linux_client.validate_authentication()
- # If validation went ok, shelve the server
- compute.shelve_server(self.servers_client, self.server['id'])
+ volume = self.create_volume()
+ self.addCleanup(self.delete_volume, volume['id'])
# Attach the volume to the server
self.attachment = self.servers_client.attach_volume(
- self.server['id'],
- volumeId=self.volume['id'],
+ server['id'],
+ volumeId=volume['id'],
device='/dev/%s' % self.device)['volumeAttachment']
waiters.wait_for_volume_status(self.volumes_client,
- self.volume['id'], 'in-use')
+ volume['id'], 'in-use')
- self.addCleanup(self._detach, self.server['id'], self.volume['id'])
+ self.addCleanup(self._detach, server['id'], volume['id'])
+ return volume
@test.idempotent_id('52e9045a-e90d-4c0d-9087-79d657faffff')
- @testtools.skipUnless(CONF.validation.run_validation,
- 'SSH required for this test')
def test_attach_detach_volume(self):
# Stop and Start a server with an attached volume, ensuring that
# the volume remains attached.
- self._create_and_attach()
+ server = self._create_server()
+ volume = self._create_and_attach_volume(server)
- self.servers_client.stop_server(self.server['id'])
- waiters.wait_for_server_status(self.servers_client, self.server['id'],
+ self.servers_client.stop_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
'SHUTOFF')
- self.servers_client.start_server(self.server['id'])
- waiters.wait_for_server_status(self.servers_client, self.server['id'],
+ self.servers_client.start_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
'ACTIVE')
- linux_client = remote_client.RemoteClient(
- self.get_server_ip(self.server),
- self.image_ssh_user,
- self.admin_pass,
- self.validation_resources['keypair']['private_key'],
- server=self.server,
- servers_client=self.servers_client)
+ if CONF.validation.run_validation:
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server),
+ self.image_ssh_user,
+ self.image_ssh_password,
+ self.validation_resources['keypair']['private_key'],
+ server=server,
+ servers_client=self.servers_client)
- partitions = linux_client.get_partitions()
- self.assertIn(self.device, partitions)
+ partitions = linux_client.get_partitions()
+ device_name_to_match = ' ' + self.device + '\n'
+ self.assertIn(device_name_to_match, partitions)
- self._detach(self.server['id'], self.volume['id'])
+ self._detach(server['id'], volume['id'])
self.attachment = None
- self.servers_client.stop_server(self.server['id'])
- waiters.wait_for_server_status(self.servers_client, self.server['id'],
+ self.servers_client.stop_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
'SHUTOFF')
- self.servers_client.start_server(self.server['id'])
- waiters.wait_for_server_status(self.servers_client, self.server['id'],
+ self.servers_client.start_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client, server['id'],
'ACTIVE')
- linux_client = remote_client.RemoteClient(
- self.get_server_ip(self.server),
- self.image_ssh_user,
- self.admin_pass,
- self.validation_resources['keypair']['private_key'],
- server=self.server,
- servers_client=self.servers_client)
+ if CONF.validation.run_validation:
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server),
+ self.image_ssh_user,
+ self.image_ssh_password,
+ self.validation_resources['keypair']['private_key'],
+ server=server,
+ servers_client=self.servers_client)
- partitions = linux_client.get_partitions()
- self.assertNotIn(self.device, partitions)
+ partitions = linux_client.get_partitions()
+ self.assertNotIn(device_name_to_match, partitions)
@test.idempotent_id('7fa563fe-f0f7-43eb-9e22-a1ece036b513')
def test_list_get_volume_attachments(self):
# Create Server, Volume and attach that Volume to Server
- self._create_and_attach()
+ server = self._create_server()
+ volume = self._create_and_attach_volume(server)
+
# List Volume attachment of the server
body = self.servers_client.list_volume_attachments(
- self.server['id'])['volumeAttachments']
+ server['id'])['volumeAttachments']
self.assertEqual(1, len(body))
self.assertIn(self.attachment, body)
# Get Volume attachment of the server
body = self.servers_client.show_volume_attachment(
- self.server['id'],
+ server['id'],
self.attachment['id'])['volumeAttachment']
- self.assertEqual(self.server['id'], body['serverId'])
- self.assertEqual(self.volume['id'], body['volumeId'])
+ self.assertEqual(server['id'], body['serverId'])
+ self.assertEqual(volume['id'], body['volumeId'])
self.assertEqual(self.attachment['id'], body['id'])
@@ -180,46 +160,77 @@
"""Testing volume with shelved instance.
This test checks the attaching and detaching volumes from
- a shelved or shelved ofload instance.
+ a shelved or shelved offload instance.
"""
min_microversion = '2.20'
max_microversion = 'latest'
- def _unshelve_server_and_check_volumes(self, number_of_partition):
- # Unshelve the instance and check that there are expected volumes
- self.servers_client.unshelve_server(self.server['id'])
- waiters.wait_for_server_status(self.servers_client,
- self.server['id'],
- 'ACTIVE')
- linux_client = remote_client.RemoteClient(
- self.get_server_ip(self.server['id']),
- self.image_ssh_user,
- self.admin_pass,
- self.validation_resources['keypair']['private_key'],
- server=self.server,
- servers_client=self.servers_client)
+ def _count_volumes(self, server):
+ # Count number of volumes on an instance
+ volumes = 0
+ if CONF.validation.run_validation:
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server),
+ self.image_ssh_user,
+ self.image_ssh_password,
+ self.validation_resources['keypair']['private_key'],
+ server=server,
+ servers_client=self.servers_client)
- command = 'grep [vs]d /proc/partitions | wc -l'
- nb_partitions = linux_client.exec_command(command).strip()
- self.assertEqual(number_of_partition, nb_partitions)
+ command = 'grep -c -E [vs]d.$ /proc/partitions'
+ volumes = int(linux_client.exec_command(command).strip())
+ return volumes
+
+ def _shelve_server(self, server):
+ # NOTE(andreaf) If we are going to shelve a server, we should
+ # check first whether the server is ssh-able. Otherwise we
+ # won't be able to distinguish failures introduced by shelve
+ # from pre-existing ones. Also it's good to wait for cloud-init
+ # to be done and sshd server to be running before shelving to
+ # avoid breaking the VM
+ if CONF.validation.run_validation:
+ linux_client = remote_client.RemoteClient(
+ self.get_server_ip(server),
+ self.image_ssh_user,
+ self.image_ssh_password,
+ self.validation_resources['keypair']['private_key'],
+ server=server,
+ servers_client=self.servers_client)
+ linux_client.validate_authentication()
+
+ # If validation went ok, or it was skipped, shelve the server
+ compute.shelve_server(self.servers_client, server['id'])
+
+ def _unshelve_server_and_check_volumes(self, server, number_of_volumes):
+ # Unshelve the instance and check that there are expected volumes
+ self.servers_client.unshelve_server(server['id'])
+ waiters.wait_for_server_status(self.servers_client,
+ server['id'],
+ 'ACTIVE')
+ if CONF.validation.run_validation:
+ counted_volumes = self._count_volumes(server)
+ self.assertEqual(number_of_volumes, counted_volumes)
@test.idempotent_id('13a940b6-3474-4c3c-b03f-29b89112bfee')
@testtools.skipUnless(CONF.compute_feature_enabled.shelve,
'Shelve is not available.')
- @testtools.skipUnless(CONF.validation.run_validation,
- 'SSH required for this test')
def test_attach_volume_shelved_or_offload_server(self):
- self._create_and_attach(shelve_server=True)
+ # Create server, count number of volumes on it, shelve
+ # server and attach pre-created volume to shelved server
+ server = self._create_server()
+ num_vol = self._count_volumes(server)
+ self._shelve_server(server)
+ self._create_and_attach_volume(server)
- # Unshelve the instance and check that there are two volumes
- self._unshelve_server_and_check_volumes('2')
+ # Unshelve the instance and check that the attached volume exists
+ self._unshelve_server_and_check_volumes(server, num_vol + 1)
# Get Volume attachment of the server
volume_attachment = self.servers_client.show_volume_attachment(
- self.server['id'],
+ server['id'],
self.attachment['id'])['volumeAttachment']
- self.assertEqual(self.server['id'], volume_attachment['serverId'])
+ self.assertEqual(server['id'], volume_attachment['serverId'])
self.assertEqual(self.attachment['id'], volume_attachment['id'])
# Check the mountpoint is not None after unshelve server even in
# case of shelved_offloaded.
@@ -228,14 +239,18 @@
@test.idempotent_id('b54e86dd-a070-49c4-9c07-59ae6dae15aa')
@testtools.skipUnless(CONF.compute_feature_enabled.shelve,
'Shelve is not available.')
- @testtools.skipUnless(CONF.validation.run_validation,
- 'SSH required for this test')
def test_detach_volume_shelved_or_offload_server(self):
- self._create_and_attach(shelve_server=True)
+ # Create server, count number of volumes on it, shelve
+ # server and attach pre-created volume to shelved server
+ server = self._create_server()
+ num_vol = self._count_volumes(server)
+ self._shelve_server(server)
+ volume = self._create_and_attach_volume(server)
# Detach the volume
- self._detach(self.server['id'], self.volume['id'])
+ self._detach(server['id'], volume['id'])
self.attachment = None
- # Unshelve the instance and check that there is only one volume
- self._unshelve_server_and_check_volumes('1')
+ # Unshelve the instance and check that we have the expected number of
+ # volume(s)
+ self._unshelve_server_and_check_volumes(server, num_vol)
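
The shelved-volume tests above now count whole disks on the guest with 'grep -c -E [vs]d.$ /proc/partitions' instead of asserting a hard-coded partition count. A hedged local sketch of that counting logic (pure Python, no SSH; the sample /proc/partitions content below is made up for illustration):

    import re


    def count_vd_sd_disks(partitions_text):
        # Count lines whose last token looks like 'vda' or 'sdb' (a single
        # character after 'vd'/'sd'), i.e. whole disks but not partitions
        # such as 'vda1' -- the same filter as the grep expression above.
        return sum(1 for line in partitions_text.splitlines()
                   if re.search(r'[vs]d.$', line))


    sample = """major minor  #blocks  name

     253        0   41943040 vda
     253        1   41942016 vda1
       8       16    1048576 sdb
    """
    print(count_vd_sd_disks(sample))  # -> 2 (vda plus the attached sdb)
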
diff --git a/tempest/api/compute/volumes/test_attach_volume_negative.py b/tempest/api/compute/volumes/test_attach_volume_negative.py
new file mode 100644
index 0000000..b7fa0fe
--- /dev/null
+++ b/tempest/api/compute/volumes/test_attach_volume_negative.py
@@ -0,0 +1,41 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.compute import base
+from tempest import config
+from tempest.lib import exceptions as lib_exc
+from tempest import test
+
+CONF = config.CONF
+
+
+class AttachVolumeNegativeTest(base.BaseV2ComputeTest):
+
+ @classmethod
+ def skip_checks(cls):
+ super(AttachVolumeNegativeTest, cls).skip_checks()
+ if not CONF.service_available.cinder:
+ skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
+ raise cls.skipException(skip_msg)
+
+ @test.idempotent_id('a313b5cd-fbd0-49cc-94de-870e99f763c7')
+ def test_delete_attached_volume(self):
+ server = self.create_test_server(wait_until='ACTIVE')
+ volume = self.create_volume()
+
+ path = "/dev/%s" % CONF.compute.volume_device_name
+ self.attach_volume(server, volume, device=path)
+
+ self.assertRaises(lib_exc.BadRequest,
+ self.delete_volume, volume['id'])
diff --git a/tempest/api/compute/volumes/test_volume_snapshots.py b/tempest/api/compute/volumes/test_volume_snapshots.py
index f42d153..460c882 100644
--- a/tempest/api/compute/volumes/test_volume_snapshots.py
+++ b/tempest/api/compute/volumes/test_volume_snapshots.py
@@ -40,14 +40,10 @@
@test.idempotent_id('cd4ec87d-7825-450d-8040-6e2068f2da8f')
def test_volume_snapshot_create_get_list_delete(self):
- v_name = data_utils.rand_name('Volume')
- volume = self.volumes_client.create_volume(
- size=CONF.volume.volume_size,
- display_name=v_name)['volume']
+ volume = self.create_volume()
self.addCleanup(self.delete_volume, volume['id'])
- waiters.wait_for_volume_status(self.volumes_client, volume['id'],
- 'available')
- s_name = data_utils.rand_name('Snapshot')
+
+ s_name = data_utils.rand_name(self.__class__.__name__ + '-Snapshot')
# Create snapshot
snapshot = self.snapshots_client.create_snapshot(
volume_id=volume['id'],
diff --git a/tempest/api/compute/volumes/test_volumes_get.py b/tempest/api/compute/volumes/test_volumes_get.py
index 6074054..d599431 100644
--- a/tempest/api/compute/volumes/test_volumes_get.py
+++ b/tempest/api/compute/volumes/test_volumes_get.py
@@ -42,15 +42,14 @@
@test.idempotent_id('f10f25eb-9775-4d9d-9cbe-1cf54dae9d5f')
def test_volume_create_get_delete(self):
# CREATE, GET, DELETE Volume
- volume = None
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
# Create volume
volume = self.client.create_volume(size=CONF.volume.volume_size,
display_name=v_name,
metadata=metadata)['volume']
- self.addCleanup(self.delete_volume, volume['id'])
self.assertIn('id', volume)
+ self.addCleanup(self.delete_volume, volume['id'])
self.assertIn('displayName', volume)
self.assertEqual(volume['displayName'], v_name,
"The created volume name is not equal "
@@ -66,6 +65,10 @@
fetched_volume['displayName'],
'The fetched Volume is different '
'from the created Volume')
+ self.assertEqual(CONF.volume.volume_size,
+ fetched_volume['size'],
+ 'The fetched volume size is different '
+ 'from the created Volume')
self.assertEqual(volume['id'],
fetched_volume['id'],
'The fetched Volume is different '
diff --git a/tempest/api/compute/volumes/test_volumes_list.py b/tempest/api/compute/volumes/test_volumes_list.py
index f709c91..c60fcca 100644
--- a/tempest/api/compute/volumes/test_volumes_list.py
+++ b/tempest/api/compute/volumes/test_volumes_list.py
@@ -48,7 +48,7 @@
cls.volume_list = []
cls.volume_id_list = []
for i in range(3):
- v_name = data_utils.rand_name('volume')
+ v_name = data_utils.rand_name(cls.__name__ + '-volume')
metadata = {'Type': 'work'}
try:
volume = cls.client.create_volume(size=CONF.volume.volume_size,
diff --git a/tempest/api/compute/volumes/test_volumes_negative.py b/tempest/api/compute/volumes/test_volumes_negative.py
index 92f5ea8..5fe4cb3 100644
--- a/tempest/api/compute/volumes/test_volumes_negative.py
+++ b/tempest/api/compute/volumes/test_volumes_negative.py
@@ -59,7 +59,7 @@
def test_create_volume_with_invalid_size(self):
# Negative: Should not be able to create volume with invalid size
# in request
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='#$%', display_name=v_name, metadata=metadata)
@@ -69,7 +69,7 @@
def test_create_volume_with_out_passing_size(self):
# Negative: Should not be able to create volume without passing size
# in request
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='', display_name=v_name, metadata=metadata)
@@ -78,19 +78,12 @@
@test.idempotent_id('8cce995e-0a83-479a-b94d-e1e40b8a09d1')
def test_create_volume_with_size_zero(self):
# Negative: Should not be able to create volume with size zero
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='0', display_name=v_name, metadata=metadata)
@test.attr(type=['negative'])
- @test.idempotent_id('f01904f2-e975-4915-98ce-cb5fa27bde4f')
- def test_get_invalid_volume_id(self):
- # Negative: Should not be able to get volume with invalid id
- self.assertRaises(lib_exc.NotFound,
- self.client.show_volume, '#$%%&^&^')
-
- @test.attr(type=['negative'])
@test.idempotent_id('62bab09a-4c03-4617-8cca-8572bc94af9b')
def test_get_volume_without_passing_volume_id(self):
# Negative: Should not be able to get volume when empty ID is passed
diff --git a/tempest/api/data_processing/base.py b/tempest/api/data_processing/base.py
deleted file mode 100644
index d5ba76c..0000000
--- a/tempest/api/data_processing/base.py
+++ /dev/null
@@ -1,442 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from collections import OrderedDict
-import copy
-
-import six
-
-from tempest import config
-from tempest import exceptions
-from tempest.lib.common.utils import test_utils
-import tempest.test
-
-
-CONF = config.CONF
-
-"""Default templates.
-There should always be at least a master1 and a worker1 node
-group template."""
-BASE_VANILLA_DESC = {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['namenode', 'resourcemanager',
- 'hiveserver']
- },
- 'master2': {
- 'count': 1,
- 'node_processes': ['oozie', 'historyserver',
- 'secondarynamenode']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['datanode', 'nodemanager'],
- 'node_configs': {
- 'MapReduce': {
- 'yarn.app.mapreduce.am.resource.mb': 256,
- 'yarn.app.mapreduce.am.command-opts': '-Xmx256m'
- },
- 'YARN': {
- 'yarn.scheduler.minimum-allocation-mb': 256,
- 'yarn.scheduler.maximum-allocation-mb': 1024,
- 'yarn.nodemanager.vmem-check-enabled': False
- }
- }
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- }
- }
-}
-
-BASE_SPARK_DESC = {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['namenode', 'master']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['datanode', 'slave']
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- }
- }
-}
-
-BASE_CDH_DESC = {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['CLOUDERA_MANAGER']
- },
- 'master2': {
- 'count': 1,
- 'node_processes': ['HDFS_NAMENODE',
- 'YARN_RESOURCEMANAGER']
- },
- 'master3': {
- 'count': 1,
- 'node_processes': ['OOZIE_SERVER', 'YARN_JOBHISTORY',
- 'HDFS_SECONDARYNAMENODE',
- 'HIVE_METASTORE', 'HIVE_SERVER2']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['YARN_NODEMANAGER', 'HDFS_DATANODE']
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs_replication': 1
- }
- }
-}
-
-
-DEFAULT_TEMPLATES = {
- 'vanilla': OrderedDict([
- ('2.6.0', copy.deepcopy(BASE_VANILLA_DESC)),
- ('2.7.1', copy.deepcopy(BASE_VANILLA_DESC)),
- ('1.2.1', {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['namenode', 'jobtracker']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['datanode', 'tasktracker'],
- 'node_configs': {
- 'HDFS': {
- 'Data Node Heap Size': 1024
- },
- 'MapReduce': {
- 'Task Tracker Heap Size': 1024
- }
- }
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- },
- 'MapReduce': {
- 'mapred.map.tasks.speculative.execution': False,
- 'mapred.child.java.opts': '-Xmx500m'
- },
- 'general': {
- 'Enable Swift': False
- }
- }
- })
- ]),
- 'hdp': OrderedDict([
- ('2.0.6', {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['NAMENODE', 'SECONDARY_NAMENODE',
- 'ZOOKEEPER_SERVER', 'AMBARI_SERVER',
- 'HISTORYSERVER', 'RESOURCEMANAGER',
- 'GANGLIA_SERVER', 'NAGIOS_SERVER',
- 'OOZIE_SERVER']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['HDFS_CLIENT', 'DATANODE',
- 'YARN_CLIENT', 'ZOOKEEPER_CLIENT',
- 'MAPREDUCE2_CLIENT', 'NODEMANAGER',
- 'PIG', 'OOZIE_CLIENT']
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- }
- }
- })
- ]),
- 'spark': OrderedDict([
- ('1.0.0', copy.deepcopy(BASE_SPARK_DESC)),
- ('1.3.1', copy.deepcopy(BASE_SPARK_DESC))
- ]),
- 'cdh': OrderedDict([
- ('5.4.0', copy.deepcopy(BASE_CDH_DESC)),
- ('5.3.0', copy.deepcopy(BASE_CDH_DESC)),
- ('5', copy.deepcopy(BASE_CDH_DESC))
- ]),
-}
-
-
-class BaseDataProcessingTest(tempest.test.BaseTestCase):
-
- credentials = ['primary']
-
- @classmethod
- def skip_checks(cls):
- super(BaseDataProcessingTest, cls).skip_checks()
- if not CONF.service_available.sahara:
- raise cls.skipException('Sahara support is required')
- cls.default_plugin = cls._get_default_plugin()
-
- @classmethod
- def setup_clients(cls):
- super(BaseDataProcessingTest, cls).setup_clients()
- cls.client = cls.os.data_processing_client
-
- @classmethod
- def resource_setup(cls):
- super(BaseDataProcessingTest, cls).resource_setup()
-
- cls.default_version = cls._get_default_version()
- if cls.default_plugin is not None and cls.default_version is None:
- raise exceptions.InvalidConfiguration(
- message="No known Sahara plugin version was found")
- cls.flavor_ref = CONF.compute.flavor_ref
-
- # add lists for watched resources
- cls._node_group_templates = []
- cls._cluster_templates = []
- cls._data_sources = []
- cls._job_binary_internals = []
- cls._job_binaries = []
- cls._jobs = []
-
- @classmethod
- def resource_cleanup(cls):
- cls.cleanup_resources(getattr(cls, '_cluster_templates', []),
- cls.client.delete_cluster_template)
- cls.cleanup_resources(getattr(cls, '_node_group_templates', []),
- cls.client.delete_node_group_template)
- cls.cleanup_resources(getattr(cls, '_jobs', []), cls.client.delete_job)
- cls.cleanup_resources(getattr(cls, '_job_binaries', []),
- cls.client.delete_job_binary)
- cls.cleanup_resources(getattr(cls, '_job_binary_internals', []),
- cls.client.delete_job_binary_internal)
- cls.cleanup_resources(getattr(cls, '_data_sources', []),
- cls.client.delete_data_source)
- super(BaseDataProcessingTest, cls).resource_cleanup()
-
- @staticmethod
- def cleanup_resources(resource_id_list, method):
- for resource_id in resource_id_list:
- test_utils.call_and_ignore_notfound_exc(method, resource_id)
-
- @classmethod
- def create_node_group_template(cls, name, plugin_name, hadoop_version,
- node_processes, flavor_id,
- node_configs=None, **kwargs):
- """Creates watched node group template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_node_group_template(name, plugin_name,
- hadoop_version,
- node_processes,
- flavor_id,
- node_configs,
- **kwargs)
- resp_body = resp_body['node_group_template']
- # store id of created node group template
- cls._node_group_templates.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_cluster_template(cls, name, plugin_name, hadoop_version,
- node_groups, cluster_configs=None, **kwargs):
- """Creates watched cluster template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_cluster_template(name, plugin_name,
- hadoop_version,
- node_groups,
- cluster_configs,
- **kwargs)
- resp_body = resp_body['cluster_template']
- # store id of created cluster template
- cls._cluster_templates.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_data_source(cls, name, type, url, **kwargs):
- """Creates watched data source with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_data_source(name, type, url, **kwargs)
- resp_body = resp_body['data_source']
- # store id of created data source
- cls._data_sources.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_job_binary_internal(cls, name, data):
- """Creates watched job binary internal with specified params.
-
- It returns created object. All resources created in this method will
- be automatically removed in tearDownClass method.
- """
- resp_body = cls.client.create_job_binary_internal(name, data)
- resp_body = resp_body['job_binary_internal']
- # store id of created job binary internal
- cls._job_binary_internals.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_job_binary(cls, name, url, extra=None, **kwargs):
- """Creates watched job binary with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_job_binary(name, url, extra, **kwargs)
- resp_body = resp_body['job_binary']
- # store id of created job binary
- cls._job_binaries.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_job(cls, name, job_type, mains, libs=None, **kwargs):
- """Creates watched job with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_job(name,
- job_type, mains, libs, **kwargs)
- resp_body = resp_body['job']
- # store id of created job
- cls._jobs.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def _get_default_plugin(cls):
- """Returns the default plugin used for testing."""
- if len(CONF.data_processing_feature_enabled.plugins) == 0:
- return None
-
- for plugin in CONF.data_processing_feature_enabled.plugins:
- if plugin in DEFAULT_TEMPLATES:
- break
- else:
- plugin = ''
- return plugin
-
- @classmethod
- def _get_default_version(cls):
- """Returns the default plugin version used for testing.
-
- This is gathered separately from the plugin to allow
- the usage of plugin name in skip_checks. This method is
- rather invoked into resource_setup, which allows API calls
- and exceptions.
- """
- if not cls.default_plugin:
- return None
- plugin = cls.client.get_plugin(cls.default_plugin)['plugin']
-
- for version in DEFAULT_TEMPLATES[cls.default_plugin].keys():
- if version in plugin['versions']:
- break
- else:
- version = None
-
- return version
-
- @classmethod
- def get_node_group_template(cls, nodegroup='worker1'):
- """Returns a node group template for the default plugin."""
- try:
- plugin_data = (
- DEFAULT_TEMPLATES[cls.default_plugin][cls.default_version]
- )
- nodegroup_data = plugin_data['NODES'][nodegroup]
- node_group_template = {
- 'description': 'Test node group template',
- 'plugin_name': cls.default_plugin,
- 'hadoop_version': cls.default_version,
- 'node_processes': nodegroup_data['node_processes'],
- 'flavor_id': cls.flavor_ref,
- 'node_configs': nodegroup_data.get('node_configs', {}),
- }
- return node_group_template
- except (IndexError, KeyError):
- return None
-
- @classmethod
- def get_cluster_template(cls, node_group_template_ids=None):
- """Returns a cluster template for the default plugin.
-
- node_group_template_defined contains the type and ID of pre-defined
- node group templates that have to be used in the cluster template
- (instead of dynamically defining them with 'node_processes').
- """
- if node_group_template_ids is None:
- node_group_template_ids = {}
- try:
- plugin_data = (
- DEFAULT_TEMPLATES[cls.default_plugin][cls.default_version]
- )
-
- all_node_groups = []
- for ng_name, ng_data in six.iteritems(plugin_data['NODES']):
- node_group = {
- 'name': '%s-node' % (ng_name),
- 'flavor_id': cls.flavor_ref,
- 'count': ng_data['count']
- }
- if ng_name in node_group_template_ids.keys():
- # node group already defined, use it
- node_group['node_group_template_id'] = (
- node_group_template_ids[ng_name]
- )
- else:
- # node_processes list defined on-the-fly
- node_group['node_processes'] = ng_data['node_processes']
- if 'node_configs' in ng_data:
- node_group['node_configs'] = ng_data['node_configs']
- all_node_groups.append(node_group)
-
- cluster_template = {
- 'description': 'Test cluster template',
- 'plugin_name': cls.default_plugin,
- 'hadoop_version': cls.default_version,
- 'cluster_configs': plugin_data.get('cluster_configs', {}),
- 'node_groups': all_node_groups,
- }
- return cluster_template
- except (IndexError, KeyError):
- return None
diff --git a/tempest/api/data_processing/test_cluster_templates.py b/tempest/api/data_processing/test_cluster_templates.py
deleted file mode 100644
index dfd8e27..0000000
--- a/tempest/api/data_processing/test_cluster_templates.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import exceptions
-from tempest import test
-
-
-class ClusterTemplateTest(dp_base.BaseDataProcessingTest):
- # Link to the API documentation is http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.0.html#cluster-templates
-
- @classmethod
- def skip_checks(cls):
- super(ClusterTemplateTest, cls).skip_checks()
- if cls.default_plugin is None:
- raise cls.skipException("No Sahara plugins configured")
-
- @classmethod
- def resource_setup(cls):
- super(ClusterTemplateTest, cls).resource_setup()
-
- # pre-define a node group templates
- node_group_template_w = cls.get_node_group_template('worker1')
- if node_group_template_w is None:
- raise exceptions.InvalidConfiguration(
- message="No known Sahara plugin was found")
-
- node_group_template_w['name'] = data_utils.rand_name(
- 'sahara-ng-template')
- resp_body = cls.create_node_group_template(**node_group_template_w)
- node_group_template_id = resp_body['id']
- configured_node_group_templates = {'worker1': node_group_template_id}
-
- cls.full_cluster_template = cls.get_cluster_template(
- configured_node_group_templates)
-
- # create cls.cluster_template variable to use for comparison to cluster
- # template response body. The 'node_groups' field in the response body
- # has some extra info that post body does not have. The 'node_groups'
- # field in the response body is something like this
- #
- # 'node_groups': [
- # {
- # 'count': 3,
- # 'name': 'worker-node',
- # 'volume_mount_prefix': '/volumes/disk',
- # 'created_at': '2014-05-21 14:31:37',
- # 'updated_at': None,
- # 'floating_ip_pool': None,
- # ...
- # },
- # ...
- # ]
- cls.cluster_template = cls.full_cluster_template.copy()
- del cls.cluster_template['node_groups']
-
- def _create_cluster_template(self, template_name=None):
- """Creates Cluster Template with optional name specified.
-
- It creates template, ensures template name and response body.
- Returns id and name of created template.
- """
- if not template_name:
- # generate random name if it's not specified
- template_name = data_utils.rand_name('sahara-cluster-template')
-
- # create cluster template
- resp_body = self.create_cluster_template(template_name,
- **self.full_cluster_template)
-
- # ensure that template created successfully
- self.assertEqual(template_name, resp_body['name'])
- self.assertDictContainsSubset(self.cluster_template, resp_body)
-
- return resp_body['id'], template_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('3525f1f1-3f9c-407d-891a-a996237e728b')
- def test_cluster_template_create(self):
- self._create_cluster_template()
-
- @test.attr(type='smoke')
- @test.idempotent_id('7a161882-e430-4840-a1c6-1d928201fab2')
- def test_cluster_template_list(self):
- template_info = self._create_cluster_template()
-
- # check for cluster template in list
- templates = self.client.list_cluster_templates()['cluster_templates']
- templates_info = [(template['id'], template['name'])
- for template in templates]
- self.assertIn(template_info, templates_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('2b75fe22-f731-4b0f-84f1-89ab25f86637')
- def test_cluster_template_get(self):
- template_id, template_name = self._create_cluster_template()
-
- # check cluster template fetch by id
- template = self.client.get_cluster_template(template_id)
- template = template['cluster_template']
- self.assertEqual(template_name, template['name'])
- self.assertDictContainsSubset(self.cluster_template, template)
-
- @test.attr(type='smoke')
- @test.idempotent_id('ff1fd989-171c-4dd7-91fd-9fbc71b09675')
- def test_cluster_template_delete(self):
- template_id, _ = self._create_cluster_template()
-
- # delete the cluster template by id
- self.client.delete_cluster_template(template_id)
- # TODO(ylobankov): check that cluster template is really deleted
diff --git a/tempest/api/data_processing/test_data_sources.py b/tempest/api/data_processing/test_data_sources.py
deleted file mode 100644
index 67d09a0..0000000
--- a/tempest/api/data_processing/test_data_sources.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class DataSourceTest(dp_base.BaseDataProcessingTest):
- @classmethod
- def resource_setup(cls):
- super(DataSourceTest, cls).resource_setup()
- cls.swift_data_source_with_creds = {
- 'url': 'swift://sahara-container.sahara/input-source',
- 'description': 'Test data source',
- 'credentials': {
- 'user': cls.os.credentials.username,
- 'password': cls.os.credentials.password
- },
- 'type': 'swift'
- }
- cls.swift_data_source = cls.swift_data_source_with_creds.copy()
- del cls.swift_data_source['credentials']
-
- cls.local_hdfs_data_source = {
- 'url': 'input-source',
- 'description': 'Test data source',
- 'type': 'hdfs'
- }
-
- cls.external_hdfs_data_source = {
- 'url': 'hdfs://172.18.168.2:8020/usr/hadoop/input-source',
- 'description': 'Test data source',
- 'type': 'hdfs'
- }
-
- def _create_data_source(self, source_body, source_name=None):
- """Creates Data Source with optional name specified.
-
- It creates a link to input-source file (it may not exist), ensures
- source name and response body. Returns id and name of created source.
- """
- if not source_name:
- # generate random name if it's not specified
- source_name = data_utils.rand_name('sahara-data-source')
-
- # create data source
- resp_body = self.create_data_source(source_name, **source_body)
-
- # ensure that source created successfully
- self.assertEqual(source_name, resp_body['name'])
- if source_body['type'] == 'swift':
- source_body = self.swift_data_source
- self.assertDictContainsSubset(source_body, resp_body)
-
- return resp_body['id'], source_name
-
- def _list_data_sources(self, source_info):
- # check for data source in list
- sources = self.client.list_data_sources()['data_sources']
- sources_info = [(source['id'], source['name']) for source in sources]
- self.assertIn(source_info, sources_info)
-
- def _get_data_source(self, source_id, source_name, source_body):
- # check data source fetch by id
- source = self.client.get_data_source(source_id)['data_source']
- self.assertEqual(source_name, source['name'])
- self.assertDictContainsSubset(source_body, source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('9e0e836d-c372-4fca-91b7-b66c3e9646c8')
- def test_swift_data_source_create(self):
- self._create_data_source(self.swift_data_source_with_creds)
-
- @test.attr(type='smoke')
- @test.idempotent_id('3cb87a4a-0534-4b97-9edc-8bbc822b68a0')
- def test_swift_data_source_list(self):
- source_info = (
- self._create_data_source(self.swift_data_source_with_creds))
- self._list_data_sources(source_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('fc07409b-6477-4cb3-9168-e633c46b227f')
- def test_swift_data_source_get(self):
- source_id, source_name = (
- self._create_data_source(self.swift_data_source_with_creds))
- self._get_data_source(source_id, source_name, self.swift_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('df53669c-0cd1-4cf7-b408-4cf215d8beb8')
- def test_swift_data_source_delete(self):
- source_id, _ = (
- self._create_data_source(self.swift_data_source_with_creds))
-
- # delete the data source by id
- self.client.delete_data_source(source_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('88505d52-db01-4229-8f1d-a1137da5fe2d')
- def test_local_hdfs_data_source_create(self):
- self._create_data_source(self.local_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('81d7d42a-d7f6-4d9b-b38c-0801a4dfe3c2')
- def test_local_hdfs_data_source_list(self):
- source_info = self._create_data_source(self.local_hdfs_data_source)
- self._list_data_sources(source_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('ec0144c6-db1e-4169-bb06-7abae14a8443')
- def test_local_hdfs_data_source_get(self):
- source_id, source_name = (
- self._create_data_source(self.local_hdfs_data_source))
- self._get_data_source(
- source_id, source_name, self.local_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('e398308b-4230-4f86-ba10-9b0b60a59c8d')
- def test_local_hdfs_data_source_delete(self):
- source_id, _ = self._create_data_source(self.local_hdfs_data_source)
-
- # delete the data source by id
- self.client.delete_data_source(source_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('bfd91128-e642-4d95-a973-3e536962180c')
- def test_external_hdfs_data_source_create(self):
- self._create_data_source(self.external_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('92e2be72-f7ab-499d-ae01-fb9943c90d8e')
- def test_external_hdfs_data_source_list(self):
- source_info = self._create_data_source(self.external_hdfs_data_source)
- self._list_data_sources(source_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('a31edb1b-6bc6-4f42-871f-70cd243184ac')
- def test_external_hdfs_data_source_get(self):
- source_id, source_name = (
- self._create_data_source(self.external_hdfs_data_source))
- self._get_data_source(
- source_id, source_name, self.external_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('295924cd-a085-4b45-aea8-0707cdb2da7e')
- def test_external_hdfs_data_source_delete(self):
- source_id, _ = self._create_data_source(self.external_hdfs_data_source)
-
- # delete the data source by id
- self.client.delete_data_source(source_id)
diff --git a/tempest/api/data_processing/test_job_binaries.py b/tempest/api/data_processing/test_job_binaries.py
deleted file mode 100644
index a47ddbc..0000000
--- a/tempest/api/data_processing/test_job_binaries.py
+++ /dev/null
@@ -1,148 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class JobBinaryTest(dp_base.BaseDataProcessingTest):
- # Link to the API documentation is http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.1_EDP.html#job-binaries
-
- @classmethod
- def resource_setup(cls):
- super(JobBinaryTest, cls).resource_setup()
- cls.swift_job_binary_with_extra = {
- 'url': 'swift://sahara-container.sahara/example.jar',
- 'description': 'Test job binary',
- 'extra': {
- 'user': cls.os.credentials.username,
- 'password': cls.os.credentials.password
- }
- }
- # Create extra cls.swift_job_binary variable to use for comparison to
- # job binary response body because response body has no 'extra' field.
- cls.swift_job_binary = cls.swift_job_binary_with_extra.copy()
- del cls.swift_job_binary['extra']
-
- name = data_utils.rand_name('sahara-internal-job-binary')
- cls.job_binary_data = 'Some script may be data'
- job_binary_internal = (
- cls.create_job_binary_internal(name, cls.job_binary_data))
- cls.internal_db_job_binary = {
- 'url': 'internal-db://%s' % job_binary_internal['id'],
- 'description': 'Test job binary',
- }
-
- def _create_job_binary(self, binary_body, binary_name=None):
- """Creates Job Binary with optional name specified.
-
- It creates a link to data (jar, pig files, etc.), ensures job binary
- name and response body. Returns id and name of created job binary.
- Data may not exist when using Swift as data storage.
- In other cases data must exist in storage.
- """
- if not binary_name:
- # generate random name if it's not specified
- binary_name = data_utils.rand_name('sahara-job-binary')
-
- # create job binary
- resp_body = self.create_job_binary(binary_name, **binary_body)
-
- # ensure that binary created successfully
- self.assertEqual(binary_name, resp_body['name'])
- if 'swift' in binary_body['url']:
- binary_body = self.swift_job_binary
- self.assertDictContainsSubset(binary_body, resp_body)
-
- return resp_body['id'], binary_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('c00d43f8-4360-45f8-b280-af1a201b12d3')
- def test_swift_job_binary_create(self):
- self._create_job_binary(self.swift_job_binary_with_extra)
-
- @test.attr(type='smoke')
- @test.idempotent_id('f8809352-e79d-4748-9359-ce1efce89f2a')
- def test_swift_job_binary_list(self):
- binary_info = self._create_job_binary(self.swift_job_binary_with_extra)
-
- # check for job binary in list
- binaries = self.client.list_job_binaries()['binaries']
- binaries_info = [(binary['id'], binary['name']) for binary in binaries]
- self.assertIn(binary_info, binaries_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('2d4a670f-e8f1-413c-b5ac-50c1bfe9e1b1')
- def test_swift_job_binary_get(self):
- binary_id, binary_name = (
- self._create_job_binary(self.swift_job_binary_with_extra))
-
- # check job binary fetch by id
- binary = self.client.get_job_binary(binary_id)['job_binary']
- self.assertEqual(binary_name, binary['name'])
- self.assertDictContainsSubset(self.swift_job_binary, binary)
-
- @test.attr(type='smoke')
- @test.idempotent_id('9b0e8f38-04f3-4616-b399-cfa7eb2677ed')
- def test_swift_job_binary_delete(self):
- binary_id, _ = (
- self._create_job_binary(self.swift_job_binary_with_extra))
-
- # delete the job binary by id
- self.client.delete_job_binary(binary_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('63662f6d-8291-407e-a6fc-f654522ebab6')
- def test_internal_db_job_binary_create(self):
- self._create_job_binary(self.internal_db_job_binary)
-
- @test.attr(type='smoke')
- @test.idempotent_id('38731e7b-6d9d-4ffa-8fd1-193c453e88b1')
- def test_internal_db_job_binary_list(self):
- binary_info = self._create_job_binary(self.internal_db_job_binary)
-
- # check for job binary in list
- binaries = self.client.list_job_binaries()['binaries']
- binaries_info = [(binary['id'], binary['name']) for binary in binaries]
- self.assertIn(binary_info, binaries_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('1b32199b-c3f5-43e1-a37a-3797e57b7066')
- def test_internal_db_job_binary_get(self):
- binary_id, binary_name = (
- self._create_job_binary(self.internal_db_job_binary))
-
- # check job binary fetch by id
- binary = self.client.get_job_binary(binary_id)['job_binary']
- self.assertEqual(binary_name, binary['name'])
- self.assertDictContainsSubset(self.internal_db_job_binary, binary)
-
- @test.attr(type='smoke')
- @test.idempotent_id('3c42b0c3-3e03-46a5-adf0-df0650271a4e')
- def test_internal_db_job_binary_delete(self):
- binary_id, _ = self._create_job_binary(self.internal_db_job_binary)
-
- # delete the job binary by id
- self.client.delete_job_binary(binary_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('d5d47659-7e2c-4ea7-b292-5b3e559e8587')
- def test_job_binary_get_data(self):
- binary_id, _ = self._create_job_binary(self.internal_db_job_binary)
-
- # get data of job binary by id
- _, data = self.client.get_job_binary_data(binary_id)
- self.assertEqual(data, self.job_binary_data)
diff --git a/tempest/api/data_processing/test_job_binary_internals.py b/tempest/api/data_processing/test_job_binary_internals.py
deleted file mode 100644
index b4f0769..0000000
--- a/tempest/api/data_processing/test_job_binary_internals.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class JobBinaryInternalTest(dp_base.BaseDataProcessingTest):
- # Link to the API documentation is http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.1_EDP.html#job-binary-internals
-
- @classmethod
- def resource_setup(cls):
- super(JobBinaryInternalTest, cls).resource_setup()
- cls.job_binary_internal_data = 'Some script may be data'
-
- def _create_job_binary_internal(self, binary_name=None):
- """Creates Job Binary Internal with optional name specified.
-
- It puts data into Sahara database and ensures job binary internal name.
- Returns id and name of created job binary internal.
- """
- if not binary_name:
- # generate random name if it's not specified
- binary_name = data_utils.rand_name('sahara-job-binary-internal')
-
- # create job binary internal
- resp_body = (
- self.create_job_binary_internal(binary_name,
- self.job_binary_internal_data))
-
- # ensure that job binary internal created successfully
- self.assertEqual(binary_name, resp_body['name'])
-
- return resp_body['id'], binary_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('249c4dc2-946f-4939-83e6-212ddb6ea0be')
- def test_job_binary_internal_create(self):
- self._create_job_binary_internal()
-
- @test.attr(type='smoke')
- @test.idempotent_id('1e3c2ecd-5673-499d-babe-4fe2fcdf64ee')
- def test_job_binary_internal_list(self):
- binary_info = self._create_job_binary_internal()
-
- # check for job binary internal in list
- binaries = self.client.list_job_binary_internals()['binaries']
- binaries_info = [(binary['id'], binary['name']) for binary in binaries]
- self.assertIn(binary_info, binaries_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('a2046a53-386c-43ab-be35-df54b19db776')
- def test_job_binary_internal_get(self):
- binary_id, binary_name = self._create_job_binary_internal()
-
- # check job binary internal fetch by id
- binary = self.client.get_job_binary_internal(binary_id)
- self.assertEqual(binary_name, binary['job_binary_internal']['name'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('b3568c33-4eed-40d5-aae4-6ff3b2ac58f5')
- def test_job_binary_internal_delete(self):
- binary_id, _ = self._create_job_binary_internal()
-
- # delete the job binary internal by id
- self.client.delete_job_binary_internal(binary_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('8871f2b0-5782-4d66-9bb9-6f95bcb839ea')
- def test_job_binary_internal_get_data(self):
- binary_id, _ = self._create_job_binary_internal()
-
- # get data of job binary internal by id
- _, data = self.client.get_job_binary_internal_data(binary_id)
- self.assertEqual(data, self.job_binary_internal_data)
diff --git a/tempest/api/data_processing/test_jobs.py b/tempest/api/data_processing/test_jobs.py
deleted file mode 100644
index 8503320..0000000
--- a/tempest/api/data_processing/test_jobs.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class JobTest(dp_base.BaseDataProcessingTest):
- # NOTE: Link to the API documentation: http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.1_EDP.html#jobs
-
- @classmethod
- def resource_setup(cls):
- super(JobTest, cls).resource_setup()
- # create job binary
- job_binary = {
- 'name': data_utils.rand_name('sahara-job-binary'),
- 'url': 'swift://sahara-container.sahara/example.jar',
- 'description': 'Test job binary',
- 'extra': {
- 'user': cls.os.credentials.username,
- 'password': cls.os.credentials.password
- }
- }
- resp_body = cls.create_job_binary(**job_binary)
- job_binary_id = resp_body['id']
-
- cls.job = {
- 'job_type': 'Pig',
- 'mains': [job_binary_id]
- }
-
- def _create_job(self, job_name=None):
- """Creates Job with optional name specified.
-
- It creates job and ensures job name. Returns id and name of created
- job.
- """
- if not job_name:
- # generate random name if it's not specified
- job_name = data_utils.rand_name('sahara-job')
-
- # create job
- resp_body = self.create_job(job_name, **self.job)
-
- # ensure that job created successfully
- self.assertEqual(job_name, resp_body['name'])
-
- return resp_body['id'], job_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('8cf785ca-adf4-473d-8281-fb9a5efa3073')
- def test_job_create(self):
- self._create_job()
-
- @test.attr(type='smoke')
- @test.idempotent_id('41e253fe-b02a-41a0-b186-5ff1f0463ba3')
- def test_job_list(self):
- job_info = self._create_job()
-
- # check for job in list
- jobs = self.client.list_jobs()['jobs']
- jobs_info = [(job['id'], job['name']) for job in jobs]
- self.assertIn(job_info, jobs_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('3faf17fa-bc94-4a60-b1c3-79e53674c16c')
- def test_job_get(self):
- job_id, job_name = self._create_job()
-
- # check job fetch by id
- job = self.client.get_job(job_id)['job']
- self.assertEqual(job_name, job['name'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('dff85e62-7dda-4ad8-b1ee-850adecb0c6e')
- def test_job_delete(self):
- job_id, _ = self._create_job()
-
- # delete the job by id
- self.client.delete_job(job_id)
diff --git a/tempest/api/data_processing/test_node_group_templates.py b/tempest/api/data_processing/test_node_group_templates.py
deleted file mode 100644
index c2dae85..0000000
--- a/tempest/api/data_processing/test_node_group_templates.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class NodeGroupTemplateTest(dp_base.BaseDataProcessingTest):
-
- @classmethod
- def skip_checks(cls):
- super(NodeGroupTemplateTest, cls).skip_checks()
- if cls.default_plugin is None:
- raise cls.skipException("No Sahara plugins configured")
-
- def _create_node_group_template(self, template_name=None):
- """Creates Node Group Template with optional name specified.
-
- It creates template, ensures template name and response body.
- Returns id and name of created template.
- """
- self.node_group_template = self.get_node_group_template()
- self.assertIsNotNone(self.node_group_template,
- "No known Sahara plugin was found")
-
- if not template_name:
- # generate random name if it's not specified
- template_name = data_utils.rand_name('sahara-ng-template')
-
- # create node group template
- resp_body = self.create_node_group_template(template_name,
- **self.node_group_template)
-
- # ensure that template created successfully
- self.assertEqual(template_name, resp_body['name'])
- self.assertDictContainsSubset(self.node_group_template, resp_body)
-
- return resp_body['id'], template_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('63164051-e46d-4387-9741-302ef4791cbd')
- def test_node_group_template_create(self):
- self._create_node_group_template()
-
- @test.attr(type='smoke')
- @test.idempotent_id('eb39801d-2612-45e5-88b1-b5d70b329185')
- def test_node_group_template_list(self):
- template_info = self._create_node_group_template()
-
- # check for node group template in list
- templates = self.client.list_node_group_templates()
- templates = templates['node_group_templates']
- templates_info = [(template['id'], template['name'])
- for template in templates]
- self.assertIn(template_info, templates_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('6ee31539-a708-466f-9c26-4093ce09a836')
- def test_node_group_template_get(self):
- template_id, template_name = self._create_node_group_template()
-
- # check node group template fetch by id
- template = self.client.get_node_group_template(template_id)
- template = template['node_group_template']
- self.assertEqual(template_name, template['name'])
- self.assertDictContainsSubset(self.node_group_template, template)
-
- @test.attr(type='smoke')
- @test.idempotent_id('f4f5cb82-708d-4031-81c4-b0618a706a2f')
- def test_node_group_template_delete(self):
- template_id, _ = self._create_node_group_template()
-
- # delete the node group template by id
- self.client.delete_node_group_template(template_id)
diff --git a/tempest/api/data_processing/test_plugins.py b/tempest/api/data_processing/test_plugins.py
deleted file mode 100644
index 14594e4..0000000
--- a/tempest/api/data_processing/test_plugins.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest import config
-from tempest import test
-
-CONF = config.CONF
-
-
-class PluginsTest(dp_base.BaseDataProcessingTest):
- def _list_all_plugin_names(self):
- """Returns all enabled plugin names.
-
- It ensures main plugins availability.
- """
- plugins = self.client.list_plugins()['plugins']
- plugins_names = [plugin['name'] for plugin in plugins]
- for enabled_plugin in CONF.data_processing_feature_enabled.plugins:
- self.assertIn(enabled_plugin, plugins_names)
-
- return plugins_names
-
- @test.attr(type='smoke')
- @test.idempotent_id('01a005a3-426c-4c0b-9617-d09475403e09')
- def test_plugin_list(self):
- self._list_all_plugin_names()
-
- @test.attr(type='smoke')
- @test.idempotent_id('53cf6487-2cfb-4a6f-8671-97c542c6e901')
- def test_plugin_get(self):
- for plugin_name in self._list_all_plugin_names():
- plugin = self.client.get_plugin(plugin_name)['plugin']
- self.assertEqual(plugin_name, plugin['name'])
-
- for plugin_version in plugin['versions']:
- detailed_plugin = self.client.get_plugin(plugin_name,
- plugin_version)
- detailed_plugin = detailed_plugin['plugin']
- self.assertEqual(plugin_name, detailed_plugin['name'])
-
- # check that required image tags contains name and version
- image_tags = detailed_plugin['required_image_tags']
- self.assertIn(plugin_name, image_tags)
- self.assertIn(plugin_version, image_tags)
diff --git a/tempest/api/identity/admin/v2/test_roles.py b/tempest/api/identity/admin/v2/test_roles.py
index 380920f..d284aac 100644
--- a/tempest/api/identity/admin/v2/test_roles.py
+++ b/tempest/api/identity/admin/v2/test_roles.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
-
from tempest.api.identity import base
from tempest.common.utils import data_utils
from tempest.lib.common.utils import test_utils
@@ -27,7 +25,7 @@
def resource_setup(cls):
super(RolesTestJSON, cls).resource_setup()
cls.roles = list()
- for _ in moves.xrange(5):
+ for _ in range(5):
role_name = data_utils.rand_name(name='role')
role = cls.roles_client.create_role(name=role_name)['role']
cls.roles.append(role)
diff --git a/tempest/api/identity/admin/v2/test_services.py b/tempest/api/identity/admin/v2/test_services.py
index 94291f8..3ed51f0 100644
--- a/tempest/api/identity/admin/v2/test_services.py
+++ b/tempest/api/identity/admin/v2/test_services.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
-
from tempest.api.identity import base
from tempest.common.utils import data_utils
from tempest.lib import exceptions as lib_exc
@@ -84,7 +82,7 @@
def test_list_services(self):
# Create, List, Verify and Delete Services
services = []
- for _ in moves.xrange(3):
+ for _ in range(3):
name = data_utils.rand_name('service')
s_type = data_utils.rand_name('type')
description = data_utils.rand_name('description')
diff --git a/tempest/api/identity/admin/v2/test_tenants.py b/tempest/api/identity/admin/v2/test_tenants.py
index 4faf184..f4fad53 100644
--- a/tempest/api/identity/admin/v2/test_tenants.py
+++ b/tempest/api/identity/admin/v2/test_tenants.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
-
from tempest.api.identity import base
from tempest.common.utils import data_utils
from tempest.lib.common.utils import test_utils
@@ -27,7 +25,7 @@
def test_tenant_list_delete(self):
# Create several tenants and delete them
tenants = []
- for _ in moves.xrange(3):
+ for _ in range(3):
tenant_name = data_utils.rand_name(name='tenant-new')
tenant = self.tenants_client.create_tenant(
name=tenant_name)['tenant']
diff --git a/tempest/api/identity/admin/v2/test_users.py b/tempest/api/identity/admin/v2/test_users.py
index 8e63498..4a4b51a 100644
--- a/tempest/api/identity/admin/v2/test_users.py
+++ b/tempest/api/identity/admin/v2/test_users.py
@@ -234,4 +234,4 @@
# Validate the updated password through getting a token.
body = self.token_client.auth(user['name'], new_pass,
tenant['name'])
- self.assertTrue('id' in body['token'])
+ self.assertIn('id', body['token'])
diff --git a/tempest/api/identity/admin/v3/test_credentials.py b/tempest/api/identity/admin/v3/test_credentials.py
index 7c2e8e0..a0d8748 100644
--- a/tempest/api/identity/admin/v3/test_credentials.py
+++ b/tempest/api/identity/admin/v3/test_credentials.py
@@ -12,6 +12,7 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
+from oslo_serialization import jsonutils as json
from tempest.api.identity import base
from tempest.common.utils import data_utils
@@ -37,7 +38,7 @@
cls.projects.append(cls.project['id'])
cls.user_body = cls.users_client.create_user(
- u_name, description=u_desc, password=u_password,
+ name=u_name, description=u_desc, password=u_password,
email=u_email, project_id=cls.projects[0])['user']
@classmethod
@@ -70,6 +71,7 @@
update_body = self.creds_client.update_credential(
cred['id'], blob=blob, project_id=self.projects[1],
type='ec2')['credential']
+ update_body['blob'] = json.loads(update_body['blob'])
self.assertEqual(cred['id'], update_body['id'])
self.assertEqual(self.projects[1], update_body['project_id'])
self.assertEqual(self.user_body['id'], update_body['user_id'])
@@ -77,6 +79,7 @@
self.assertEqual(update_body['blob']['secret'], new_keys[1])
get_body = self.creds_client.show_credential(cred['id'])['credential']
+ get_body['blob'] = json.loads(get_body['blob'])
for value1 in self.creds_list[0]:
self.assertEqual(update_body[value1],
get_body[value1])
diff --git a/tempest/api/identity/admin/v3/test_default_project_id.py b/tempest/api/identity/admin/v3/test_default_project_id.py
index a540da7..59ffc19 100644
--- a/tempest/api/identity/admin/v3/test_default_project_id.py
+++ b/tempest/api/identity/admin/v3/test_default_project_id.py
@@ -54,7 +54,7 @@
# default project
user_name = data_utils.rand_name('user')
user_body = self.users_client.create_user(
- user_name,
+ name=user_name,
password=user_name,
domain_id=dom_id,
default_project_id=proj_id)['user']
@@ -69,7 +69,7 @@
admin_role_id = admin_role['id']
# grant the admin role to the user on his project
- self.roles_client.assign_user_role_on_project(proj_id, user_id,
+ self.roles_client.create_user_role_on_project(proj_id, user_id,
admin_role_id)
# create a new client with user's credentials (NOTE: unscoped token!)
diff --git a/tempest/api/identity/admin/v3/test_domains.py b/tempest/api/identity/admin/v3/test_domains.py
index 24a7a4e..cbf1439 100644
--- a/tempest/api/identity/admin/v3/test_domains.py
+++ b/tempest/api/identity/admin/v3/test_domains.py
@@ -44,11 +44,11 @@
super(DomainsTestJSON, cls).resource_cleanup()
@classmethod
- def _delete_domain(self, domain_id):
+ def _delete_domain(cls, domain_id):
# It is necessary to disable the domain before deleting,
# or else it would result in unauthorized error
- self.domains_client.update_domain(domain_id, enabled=False)
- self.domains_client.delete_domain(domain_id)
+ cls.domains_client.update_domain(domain_id, enabled=False)
+ cls.domains_client.delete_domain(domain_id)
@test.idempotent_id('8cf516ef-2114-48f1-907b-d32726c734d4')
def test_list_domains(self):
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 59fcec6..3cbcc1f 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -83,14 +83,16 @@
for i in range(3):
name = data_utils.rand_name('User')
password = data_utils.rand_password()
- user = self.users_client.create_user(name, password)['user']
+ user = self.users_client.create_user(name=name,
+ password=password)['user']
users.append(user)
self.addCleanup(self.users_client.delete_user, user['id'])
self.groups_client.add_group_user(group['id'], user['id'])
# list users in group
group_users = self.groups_client.list_group_users(group['id'])['users']
- self.assertEqual(sorted(users), sorted(group_users))
+ self.assertEqual(sorted(users, key=lambda k: k['name']),
+ sorted(group_users, key=lambda k: k['name']))
# check and delete user in group
for user in users:
self.groups_client.check_group_user_existence(
@@ -103,7 +105,8 @@
def test_list_user_groups(self):
# create a user
user = self.users_client.create_user(
- data_utils.rand_name('User'), data_utils.rand_password())['user']
+ name=data_utils.rand_name('User'),
+ password=data_utils.rand_password())['user']
self.addCleanup(self.users_client.delete_user, user['id'])
# create two groups, and add user into them
groups = []
@@ -116,7 +119,8 @@
self.groups_client.add_group_user(group['id'], user['id'])
# list groups which user belongs to
user_groups = self.users_client.list_user_groups(user['id'])['groups']
- self.assertEqual(sorted(groups), sorted(user_groups))
+ self.assertEqual(sorted(groups, key=lambda k: k['name']),
+ sorted(user_groups, key=lambda k: k['name']))
self.assertEqual(2, len(user_groups))
@test.idempotent_id('cc9a57a5-a9ed-4f2d-a29f-4f979a06ec71')
diff --git a/tempest/api/identity/admin/v3/test_inherits.py b/tempest/api/identity/admin/v3/test_inherits.py
index fe20349..955b6fb 100644
--- a/tempest/api/identity/admin/v3/test_inherits.py
+++ b/tempest/api/identity/admin/v3/test_inherits.py
@@ -12,11 +12,8 @@
from tempest.api.identity import base
from tempest.common.utils import data_utils
-from tempest import config
from tempest import test
-CONF = config.CONF
-
class BaseInheritsV3Test(base.BaseIdentityV3AdminTest):
@@ -44,7 +41,7 @@
name=data_utils.rand_name('group-'), project_id=cls.project['id'],
domain_id=cls.domain['id'])['group']
cls.user = cls.users_client.create_user(
- u_name, description=u_desc, password=u_password,
+ name=u_name, description=u_desc, password=u_password,
email=u_email, project_id=cls.project['id'],
domain_id=cls.domain['id'])['user']
@@ -71,10 +68,10 @@
name=data_utils.rand_name('Role'))['role']
self.addCleanup(self.roles_client.delete_role, src_role['id'])
# Assign role on domains user
- self.roles_client.assign_inherited_role_on_domains_user(
+ self.inherited_roles_client.create_inherited_role_on_domains_user(
self.domain['id'], self.user['id'], src_role['id'])
# list role on domains user
- roles = self.roles_client.\
+ roles = self.inherited_roles_client.\
list_inherited_project_role_for_user_on_domain(
self.domain['id'], self.user['id'])['roles']
@@ -83,10 +80,11 @@
src_role['id'])
# Check role on domains user
- self.roles_client.check_user_inherited_project_role_on_domain(
- self.domain['id'], self.user['id'], src_role['id'])
+ (self.inherited_roles_client.
+ check_user_inherited_project_role_on_domain(
+ self.domain['id'], self.user['id'], src_role['id']))
# Revoke role from domains user.
- self.roles_client.revoke_inherited_role_from_user_on_domain(
+ self.inherited_roles_client.delete_inherited_role_from_user_on_domain(
self.domain['id'], self.user['id'], src_role['id'])
@test.idempotent_id('c7a8dda2-be50-4fb4-9a9c-e830771078b1')
@@ -96,10 +94,10 @@
name=data_utils.rand_name('Role'))['role']
self.addCleanup(self.roles_client.delete_role, src_role['id'])
# Assign role on domains group
- self.roles_client.assign_inherited_role_on_domains_group(
+ self.inherited_roles_client.create_inherited_role_on_domains_group(
self.domain['id'], self.group['id'], src_role['id'])
# List role on domains group
- roles = self.roles_client.\
+ roles = self.inherited_roles_client.\
list_inherited_project_role_for_group_on_domain(
self.domain['id'], self.group['id'])['roles']
@@ -108,10 +106,11 @@
src_role['id'])
# Check role on domains group
- self.roles_client.check_group_inherited_project_role_on_domain(
- self.domain['id'], self.group['id'], src_role['id'])
+ (self.inherited_roles_client.
+ check_group_inherited_project_role_on_domain(
+ self.domain['id'], self.group['id'], src_role['id']))
# Revoke role from domains group
- self.roles_client.revoke_inherited_role_from_group_on_domain(
+ self.inherited_roles_client.delete_inherited_role_from_group_on_domain(
self.domain['id'], self.group['id'], src_role['id'])
@test.idempotent_id('18b70e45-7687-4b72-8277-b8f1a47d7591')
@@ -121,13 +120,14 @@
name=data_utils.rand_name('Role'))['role']
self.addCleanup(self.roles_client.delete_role, src_role['id'])
# Assign role on projects user
- self.roles_client.assign_inherited_role_on_projects_user(
+ self.inherited_roles_client.create_inherited_role_on_projects_user(
self.project['id'], self.user['id'], src_role['id'])
# Check role on projects user
- self.roles_client.check_user_has_flag_on_inherited_to_project(
- self.project['id'], self.user['id'], src_role['id'])
+ (self.inherited_roles_client.
+ check_user_has_flag_on_inherited_to_project(
+ self.project['id'], self.user['id'], src_role['id']))
# Revoke role from projects user
- self.roles_client.revoke_inherited_role_from_user_on_project(
+ self.inherited_roles_client.delete_inherited_role_from_user_on_project(
self.project['id'], self.user['id'], src_role['id'])
@test.idempotent_id('26021436-d5a4-4256-943c-ded01e0d4b45')
@@ -137,11 +137,98 @@
name=data_utils.rand_name('Role'))['role']
self.addCleanup(self.roles_client.delete_role, src_role['id'])
# Assign role on projects group
- self.roles_client.assign_inherited_role_on_projects_group(
+ self.inherited_roles_client.create_inherited_role_on_projects_group(
self.project['id'], self.group['id'], src_role['id'])
# Check role on projects group
- self.roles_client.check_group_has_flag_on_inherited_to_project(
- self.project['id'], self.group['id'], src_role['id'])
+ (self.inherited_roles_client.
+ check_group_has_flag_on_inherited_to_project(
+ self.project['id'], self.group['id'], src_role['id']))
# Revoke role from projects group
- self.roles_client.revoke_inherited_role_from_group_on_project(
- self.project['id'], self.group['id'], src_role['id'])
+ (self.inherited_roles_client.
+ delete_inherited_role_from_group_on_project(
+ self.project['id'], self.group['id'], src_role['id']))
+
+ @test.idempotent_id('3acf666e-5354-42ac-8e17-8b68893bcd36')
+ def test_inherit_assign_list_revoke_user_roles_on_domain(self):
+ # Create role
+ src_role = self.roles_client.create_role(
+ name=data_utils.rand_name('Role'))['role']
+ self.addCleanup(self.roles_client.delete_role, src_role['id'])
+
+ # Create a project hierarchy
+ leaf_project_name = data_utils.rand_name('project')
+ leaf_project = self.projects_client.create_project(
+ leaf_project_name, domain_id=self.domain['id'],
+ parent_id=self.project['id'])['project']
+ self.addCleanup(
+ self.projects_client.delete_project, leaf_project['id'])
+
+ # Assign role on domain
+ self.inherited_roles_client.create_inherited_role_on_domains_user(
+ self.domain['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the parent project
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ self.project['id'], self.user['id']))['role_assignments']
+ self.assertNotEmpty(assignments)
+
+ # List "effective" role assignments from user on the leaf project
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertNotEmpty(assignments)
+
+ # Revoke role from domain
+ self.inherited_roles_client.delete_inherited_role_from_user_on_domain(
+ self.domain['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the parent project
+ # should return an empty list
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ self.project['id'], self.user['id']))['role_assignments']
+ self.assertEmpty(assignments)
+
+ # List "effective" role assignments from user on the leaf project
+ # should return an empty list
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertEmpty(assignments)
+
+ @test.idempotent_id('9f02ccd9-9b57-46b4-8f77-dd5a736f3a06')
+ def test_inherit_assign_list_revoke_user_roles_on_project_tree(self):
+ # Create role
+ src_role = self.roles_client.create_role(
+ name=data_utils.rand_name('Role'))['role']
+ self.addCleanup(self.roles_client.delete_role, src_role['id'])
+
+ # Create a project hierarchy
+ leaf_project_name = data_utils.rand_name('project')
+ leaf_project = self.projects_client.create_project(
+ leaf_project_name, domain_id=self.domain['id'],
+ parent_id=self.project['id'])['project']
+ self.addCleanup(
+ self.projects_client.delete_project, leaf_project['id'])
+
+ # Assign role on parent project
+ self.inherited_roles_client.create_inherited_role_on_projects_user(
+ self.project['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the leaf project
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertNotEmpty(assignments)
+
+ # Revoke role from parent project
+ self.inherited_roles_client.delete_inherited_role_from_user_on_project(
+ self.project['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the leaf project
+ # should return an empty list
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertEmpty(assignments)
diff --git a/tempest/api/identity/admin/v3/test_list_users.py b/tempest/api/identity/admin/v3/test_list_users.py
index 9691ee8..99df559 100644
--- a/tempest/api/identity/admin/v3/test_list_users.py
+++ b/tempest/api/identity/admin/v3/test_list_users.py
@@ -25,7 +25,7 @@
# assert the response based on expected and not_expected
# expected: user expected in the list response
# not_expected: user, which should not be present in list response
- body = self.users_client.list_users(params)['users']
+ body = self.users_client.list_users(**params)['users']
self.assertIn(expected[key], map(lambda x: x[key], body))
self.assertNotIn(not_expected[key],
map(lambda x: x[key], body))
@@ -42,13 +42,13 @@
cls.users = list()
u1_name = data_utils.rand_name('test_user')
cls.domain_enabled_user = cls.users_client.create_user(
- u1_name, password=alt_password,
+ name=u1_name, password=alt_password,
email=cls.alt_email, domain_id=cls.domain['id'])['user']
cls.users.append(cls.domain_enabled_user)
# Create default not enabled user
u2_name = data_utils.rand_name('test_user')
cls.non_domain_enabled_user = cls.users_client.create_user(
- u2_name, password=alt_password,
+ name=u2_name, password=alt_password,
email=cls.alt_email, enabled=False)['user']
cls.users.append(cls.non_domain_enabled_user)
diff --git a/tempest/api/identity/admin/v3/test_projects.py b/tempest/api/identity/admin/v3/test_projects.py
index 60bb314..1137191 100644
--- a/tempest/api/identity/admin/v3/test_projects.py
+++ b/tempest/api/identity/admin/v3/test_projects.py
@@ -1,4 +1,4 @@
-# Copyright 2013 OpenStack, LLC
+# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -200,7 +200,7 @@
u_email = u_name + '@testmail.tm'
u_password = data_utils.rand_password()
user = self.users_client.create_user(
- u_name, description=u_desc, password=u_password,
+ name=u_name, description=u_desc, password=u_password,
email=u_email, project_id=project['id'])['user']
# Delete the User at the end of this method
self.addCleanup(self.users_client.delete_user, user['id'])
diff --git a/tempest/api/identity/admin/v3/test_projects_negative.py b/tempest/api/identity/admin/v3/test_projects_negative.py
index e661f42..c76b9ee 100644
--- a/tempest/api/identity/admin/v3/test_projects_negative.py
+++ b/tempest/api/identity/admin/v3/test_projects_negative.py
@@ -1,4 +1,4 @@
-# Copyright 2013 OpenStack, LLC
+# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index 2b77023..f5bf923 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -44,7 +44,7 @@
name=data_utils.rand_name('Group'), project_id=cls.project['id'],
domain_id=cls.domain['id'])['group']
cls.user_body = cls.users_client.create_user(
- u_name, description=u_desc, password=cls.u_password,
+ name=u_name, description=u_desc, password=cls.u_password,
email=u_email, project_id=cls.project['id'],
domain_id=cls.domain['id'])['user']
cls.role = cls.roles_client.create_role(
@@ -94,7 +94,7 @@
@test.idempotent_id('c6b80012-fe4a-498b-9ce8-eb391c05169f')
def test_grant_list_revoke_role_to_user_on_project(self):
- self.roles_client.assign_user_role_on_project(self.project['id'],
+ self.roles_client.create_user_role_on_project(self.project['id'],
self.user_body['id'],
self.role['id'])
@@ -115,7 +115,7 @@
@test.idempotent_id('6c9a2940-3625-43a3-ac02-5dcec62ef3bd')
def test_grant_list_revoke_role_to_user_on_domain(self):
- self.roles_client.assign_user_role_on_domain(
+ self.roles_client.create_user_role_on_domain(
self.domain['id'], self.user_body['id'], self.role['id'])
roles = self.roles_client.list_user_roles_on_domain(
@@ -136,7 +136,7 @@
@test.idempotent_id('cbf11737-1904-4690-9613-97bcbb3df1c4')
def test_grant_list_revoke_role_to_group_on_project(self):
# Grant role to group on project
- self.roles_client.assign_group_role_on_project(
+ self.roles_client.create_group_role_on_project(
self.project['id'], self.group_body['id'], self.role['id'])
# List group roles on project
roles = self.roles_client.list_group_roles_on_project(
@@ -170,7 +170,7 @@
@test.idempotent_id('4bf8a70b-e785-413a-ad53-9f91ce02faa7')
def test_grant_list_revoke_role_to_group_on_domain(self):
- self.roles_client.assign_group_role_on_domain(
+ self.roles_client.create_group_role_on_domain(
self.domain['id'], self.group_body['id'], self.role['id'])
roles = self.roles_client.list_group_roles_on_domain(
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 89cfd5b..8706cf7 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -32,7 +32,7 @@
u_email = '%s@testmail.tm' % u_name
u_password = data_utils.rand_password()
user = self.users_client.create_user(
- u_name, description=u_desc, password=u_password,
+ name=u_name, description=u_desc, password=u_password,
email=u_email)['user']
self.addCleanup(self.users_client.delete_user, user['id'])
# Perform Authentication
@@ -62,7 +62,7 @@
# Create a user.
user_name = data_utils.rand_name(name='user')
user_password = data_utils.rand_password()
- user = self.users_client.create_user(user_name,
+ user = self.users_client.create_user(name=user_name,
password=user_password)['user']
self.addCleanup(self.users_client.delete_user, user['id'])
@@ -83,11 +83,11 @@
self.addCleanup(self.roles_client.delete_role, role['id'])
# Grant the user the role on both projects.
- self.roles_client.assign_user_role_on_project(project1['id'],
+ self.roles_client.create_user_role_on_project(project1['id'],
user['id'],
role['id'])
- self.roles_client.assign_user_role_on_project(project2['id'],
+ self.roles_client.create_user_role_on_project(project2['id'],
user['id'],
role['id'])
diff --git a/tempest/api/identity/admin/v3/test_trusts.py b/tempest/api/identity/admin/v3/test_trusts.py
index 9c8f1f6..4e69de8 100644
--- a/tempest/api/identity/admin/v3/test_trusts.py
+++ b/tempest/api/identity/admin/v3/test_trusts.py
@@ -57,7 +57,7 @@
u_email = self.trustor_username + '@testmail.xx'
self.trustor_password = data_utils.rand_password()
user = self.users_client.create_user(
- self.trustor_username,
+ name=self.trustor_username,
description=u_desc,
password=self.trustor_password,
email=u_email,
@@ -77,11 +77,11 @@
self.not_delegated_role_id = role['id']
# Assign roles to trustor
- self.roles_client.assign_user_role_on_project(
+ self.roles_client.create_user_role_on_project(
self.trustor_project_id,
self.trustor_user_id,
self.delegated_role_id)
- self.roles_client.assign_user_role_on_project(
+ self.roles_client.create_user_role_on_project(
self.trustor_project_id,
self.trustor_user_id,
self.not_delegated_role_id)
diff --git a/tempest/api/identity/admin/v3/test_users.py b/tempest/api/identity/admin/v3/test_users.py
index f200095..fd2683e 100644
--- a/tempest/api/identity/admin/v3/test_users.py
+++ b/tempest/api/identity/admin/v3/test_users.py
@@ -31,7 +31,7 @@
u_email = u_name + '@testmail.tm'
u_password = data_utils.rand_password()
user = self.users_client.create_user(
- u_name, description=u_desc, password=u_password,
+ name=u_name, description=u_desc, password=u_password,
email=u_email, enabled=False)['user']
# Delete the User at the end of this method
self.addCleanup(self.users_client.delete_user, user['id'])
@@ -71,7 +71,7 @@
u_name = data_utils.rand_name('user')
original_password = data_utils.rand_password()
user = self.users_client.create_user(
- u_name, password=original_password)['user']
+ name=u_name, password=original_password)['user']
# Delete the User at the end all test methods
self.addCleanup(self.users_client.delete_user, user['id'])
# Update user with new password
@@ -107,7 +107,7 @@
u_email = u_name + '@testmail.tm'
u_password = data_utils.rand_password()
user_body = self.users_client.create_user(
- u_name, description=u_desc, password=u_password,
+ name=u_name, description=u_desc, password=u_password,
email=u_email, enabled=False, project_id=u_project['id'])['user']
# Delete the User at the end of this method
self.addCleanup(self.users_client.delete_user, user_body['id'])
@@ -130,7 +130,7 @@
self.addCleanup(
self.projects_client.delete_project, project_body['id'])
# Assigning roles to user on project
- self.roles_client.assign_user_role_on_project(project['id'],
+ self.roles_client.create_user_role_on_project(project['id'],
user['id'],
role['id'])
assigned_project_ids.append(project['id'])
diff --git a/tempest/api/identity/admin/v3/test_users_negative.py b/tempest/api/identity/admin/v3/test_users_negative.py
index 71e8bc5..5b0fc97 100644
--- a/tempest/api/identity/admin/v3/test_users_negative.py
+++ b/tempest/api/identity/admin/v3/test_users_negative.py
@@ -29,7 +29,7 @@
u_email = u_name + '@testmail.tm'
u_password = data_utils.rand_password()
self.assertRaises(lib_exc.NotFound, self.users_client.create_user,
- u_name, u_password,
+ name=u_name, password=u_password,
email=u_email,
domain_id=data_utils.rand_uuid_hex())
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index ce052e6..14bf4f8 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -13,19 +13,22 @@
# License for the specific language governing permissions and limitations
# under the License.
-from oslo_log import log as logging
-
from tempest.common.utils import data_utils
from tempest import config
import tempest.test
CONF = config.CONF
-LOG = logging.getLogger(__name__)
class BaseIdentityTest(tempest.test.BaseTestCase):
@classmethod
+ def setup_credentials(cls):
+ # Create no network resources for these test.
+ cls.set_network_resources()
+ super(BaseIdentityTest, cls).setup_credentials()
+
+ @classmethod
def disable_user(cls, user_name):
user = cls.get_user_by_name(user_name)
cls.users_client.update_user_enabled(user['id'], enabled=False)
@@ -39,7 +42,7 @@
def get_user_by_name(cls, name, domain_id=None):
if domain_id:
params = {'domain_id': domain_id}
- users = cls.users_client.list_users(params)['users']
+ users = cls.users_client.list_users(**params)['users']
else:
users = cls.users_client.list_users()['users']
user = [u for u in users if u['name'] == name]
@@ -122,10 +125,6 @@
super(BaseIdentityV2AdminTest, cls).resource_setup()
cls.projects_client = cls.tenants_client
- @classmethod
- def resource_cleanup(cls):
- super(BaseIdentityV2AdminTest, cls).resource_cleanup()
-
def setup_test_user(self, password=None):
"""Set up a test user."""
tenant = self.setup_test_tenant()
@@ -174,6 +173,7 @@
cls.users_client = cls.os_adm.users_v3_client
cls.trusts_client = cls.os_adm.trusts_client
cls.roles_client = cls.os_adm.roles_v3_client
+ cls.inherited_roles_client = cls.os_adm.inherited_roles_client
cls.token = cls.os_adm.token_v3_client
cls.endpoints_client = cls.os_adm.endpoints_v3_client
cls.regions_client = cls.os_adm.regions_client
@@ -182,6 +182,7 @@
cls.creds_client = cls.os_adm.credentials_client
cls.groups_client = cls.os_adm.groups_client
cls.projects_client = cls.os_adm.projects_client
+ cls.role_assignments = cls.os_admin.role_assignments_client
if CONF.identity.admin_domain_scope:
# NOTE(andreaf) When keystone policy requires it, the identity
# admin clients for these tests shall use 'domain' scoped tokens.
@@ -190,17 +191,9 @@
cls.os_adm.auth_provider.scope = 'domain'
@classmethod
- def resource_setup(cls):
- super(BaseIdentityV3AdminTest, cls).resource_setup()
-
- @classmethod
- def resource_cleanup(cls):
- super(BaseIdentityV3AdminTest, cls).resource_cleanup()
-
- @classmethod
def disable_user(cls, user_name, domain_id=None):
user = cls.get_user_by_name(user_name, domain_id)
- cls.users_client.update_user(user['id'], user_name, enabled=False)
+ cls.users_client.update_user(user['id'], name=user_name, enabled=False)
@classmethod
def create_domain(cls):
@@ -221,7 +214,7 @@
project = self.setup_test_project()
username = data_utils.rand_name('test_user')
email = username + '@testmail.tm'
- user = self._create_test_user(user_name=username, email=email,
+ user = self._create_test_user(name=username, email=email,
project_id=project['id'],
password=password)
return user
diff --git a/tempest/api/identity/v2/test_ec2_credentials.py b/tempest/api/identity/v2/test_ec2_credentials.py
index 3c379f0..8f493aa 100644
--- a/tempest/api/identity/v2/test_ec2_credentials.py
+++ b/tempest/api/identity/v2/test_ec2_credentials.py
@@ -51,7 +51,6 @@
def test_list_ec2_credentials(self):
"""Get the list of user ec2 credentials."""
created_creds = []
- fetched_creds = []
# create first ec2 credentials
creds1 = self.non_admin_users_client.create_user_ec2_credential(
self.creds.user_id,
diff --git a/tempest/api/identity/v2/test_users.py b/tempest/api/identity/v2/test_users.py
index 4833f9e..33d212c 100644
--- a/tempest/api/identity/v2/test_users.py
+++ b/tempest/api/identity/v2/test_users.py
@@ -44,6 +44,11 @@
# Clear auth restores the original credentials and deletes
# cached auth data
client.auth_provider.clear_auth()
+ # NOTE(lbragstad): Fernet tokens are not subsecond aware and
+ # Keystone should only be precise to the second. Sleep to ensure we
+ # are passing the second boundary before attempting to
+ # authenticate.
+ time.sleep(1)
client.auth_provider.set_auth()
old_pass = self.creds.password
diff --git a/tempest/api/identity/v3/test_users.py b/tempest/api/identity/v3/test_users.py
index c92e750..1a38f3a 100644
--- a/tempest/api/identity/v3/test_users.py
+++ b/tempest/api/identity/v3/test_users.py
@@ -44,6 +44,11 @@
# Clear auth restores the original credentials and deletes
# cached auth data
client.auth_provider.clear_auth()
+ # NOTE(lbragstad): Fernet tokens are not subsecond aware and
+ # Keystone should only be precise to the second. Sleep to ensure we
+ # are passing the second boundary before attempting to
+ # authenticate.
+ time.sleep(1)
client.auth_provider.set_auth()
old_pass = self.creds.password
diff --git a/tempest/api/image/admin/v2/test_images.py b/tempest/api/image/admin/v2/test_images.py
index 80da7a1..9844a67 100644
--- a/tempest/api/image/admin/v2/test_images.py
+++ b/tempest/api/image/admin/v2/test_images.py
@@ -13,7 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
+import six
import testtools
from tempest.api.image import base
@@ -34,27 +34,26 @@
def test_admin_deactivate_reactivate_image(self):
# Create image by non-admin tenant
image_name = data_utils.rand_name('image')
- body = self.client.create_image(name=image_name,
- container_format='bare',
- disk_format='raw',
- visibility='private')
- image_id = body['id']
- self.addCleanup(self.client.delete_image, image_id)
+ image = self.client.create_image(name=image_name,
+ container_format='bare',
+ disk_format='raw',
+ visibility='private')
+ self.addCleanup(self.client.delete_image, image['id'])
# upload an image file
content = data_utils.random_bytes()
- image_file = moves.cStringIO(content)
- self.client.store_image_file(image_id, image_file)
+ image_file = six.BytesIO(content)
+ self.client.store_image_file(image['id'], image_file)
# deactivate image
- self.admin_client.deactivate_image(image_id)
- body = self.client.show_image(image_id)
+ self.admin_client.deactivate_image(image['id'])
+ body = self.client.show_image(image['id'])
self.assertEqual("deactivated", body['status'])
# non-admin user unable to download deactivated image
self.assertRaises(lib_exc.Forbidden, self.client.show_image_file,
- image_id)
+ image['id'])
# reactivate image
- self.admin_client.reactivate_image(image_id)
- body = self.client.show_image(image_id)
+ self.admin_client.reactivate_image(image['id'])
+ body = self.client.show_image(image['id'])
self.assertEqual("active", body['status'])
# non-admin user able to download image after reactivation by admin
- body = self.client.show_image_file(image_id)
+ body = self.client.show_image_file(image['id'])
self.assertEqual(content, body.data)
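A minimal sketch (not part of the patch) of why six.BytesIO replaces
six.moves.cStringIO for the upload payload here and in the image tests below,
assuming the payload is raw bytes as data_utils.random_bytes() returns::

    import six

    content = b"\x00\x01\x02"            # random_bytes() returns bytes
    image_file = six.BytesIO(content)    # io.BytesIO on Python 2 and 3 alike
    assert image_file.read() == content
    # six.moves.cStringIO resolves to io.StringIO on Python 3, which accepts
    # only text and raises TypeError when handed a bytes payload.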
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index 6fd6ea6..1cc3fa2 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -12,7 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
+import six
from tempest.common import image as common_image
from tempest.common.utils import data_utils
@@ -60,7 +60,7 @@
"""Wrapper that returns a test image."""
if 'name' not in kwargs:
- name = data_utils.rand_name(cls.__name__ + "-instance")
+ name = data_utils.rand_name(cls.__name__ + "-image")
kwargs['name'] = name
params = cls._get_create_params(**kwargs)
@@ -118,13 +118,12 @@
cls.alt_tenant_id = cls.alt_image_member_client.tenant_id
def _create_image(self):
- image_file = moves.cStringIO(data_utils.random_bytes())
+ image_file = six.BytesIO(data_utils.random_bytes())
image = self.create_image(container_format='bare',
disk_format='raw',
is_public=False,
data=image_file)
- image_id = image['id']
- return image_id
+ return image['id']
class BaseV2ImageTest(BaseImageTest):
@@ -144,6 +143,18 @@
cls.resource_types_client = cls.os.resource_types_client
cls.schemas_client = cls.os.schemas_client
+ def create_namespace(cls, namespace_name=None, visibility='public',
+ description='Tempest', protected=False,
+ **kwargs):
+ if not namespace_name:
+ namespace_name = data_utils.rand_name('test-ns')
+ kwargs.setdefault('display_name', namespace_name)
+ namespace = cls.namespaces_client.create_namespace(
+ namespace=namespace_name, visibility=visibility,
+ description=description, protected=protected, **kwargs)
+ cls.addCleanup(cls.namespaces_client.delete_namespace, namespace_name)
+ return namespace
+
class BaseV2MemberImageTest(BaseV2ImageTest):
@@ -167,13 +178,12 @@
return image_ids
def _create_image(self):
- name = data_utils.rand_name('image')
+ name = data_utils.rand_name(self.__class__.__name__ + '-image')
image = self.client.create_image(name=name,
container_format='bare',
disk_format='raw')
- image_id = image['id']
- self.addCleanup(self.client.delete_image, image_id)
- return image_id
+ self.addCleanup(self.client.delete_image, image['id'])
+ return image['id']
class BaseV1ImageAdminTest(BaseImageTest):
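The new create_namespace() helper above generates a name when none is given
and registers its own delete cleanup. A hypothetical usage sketch, assuming a
subclass of BaseV2ImageTest similar to the metadef tests added later in this
change::

    class ExampleNamespaceTest(BaseV2ImageTest):    # hypothetical test class
        def test_show_created_namespace(self):      # hypothetical test
            # cleanup is already registered inside create_namespace()
            namespace = self.create_namespace()
            body = self.namespaces_client.show_namespace(
                namespace['namespace'])
            self.assertEqual(namespace['namespace'], body['namespace'])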
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
index e4fbbe3..7d52695 100644
--- a/tempest/api/image/v1/test_images.py
+++ b/tempest/api/image/v1/test_images.py
@@ -13,14 +13,14 @@
# License for the specific language governing permissions and limitations
# under the License.
-from six import moves
+import six
from tempest.api.image import base
from tempest.common import image as common_image
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
-from tempest import exceptions
+from tempest.lib import exceptions
from tempest import test
CONF = config.CONF
@@ -49,22 +49,21 @@
# Register, then upload an image
properties = {'prop1': 'val1'}
container_format, disk_format = get_container_and_disk_format()
- body = self.create_image(name='New Name',
- container_format=container_format,
- disk_format=disk_format,
- is_public=False,
- properties=properties)
- self.assertIn('id', body)
- image_id = body.get('id')
- self.assertEqual('New Name', body.get('name'))
- self.assertFalse(body.get('is_public'))
- self.assertEqual('queued', body.get('status'))
+ image = self.create_image(name='New Name',
+ container_format=container_format,
+ disk_format=disk_format,
+ is_public=False,
+ properties=properties)
+ self.assertIn('id', image)
+ self.assertEqual('New Name', image.get('name'))
+ self.assertFalse(image.get('is_public'))
+ self.assertEqual('queued', image.get('status'))
for key, val in properties.items():
- self.assertEqual(val, body.get('properties')[key])
+ self.assertEqual(val, image.get('properties')[key])
# Now try uploading an image file
- image_file = moves.cStringIO(data_utils.random_bytes())
- body = self.client.update_image(image_id, data=image_file)['image']
+ image_file = six.BytesIO(data_utils.random_bytes())
+ body = self.client.update_image(image['id'], data=image_file)['image']
self.assertIn('size', body)
self.assertEqual(1024, body.get('size'))
@@ -89,16 +88,15 @@
@test.idempotent_id('6d0e13a7-515b-460c-b91f-9f4793f09816')
def test_register_http_image(self):
container_format, disk_format = get_container_and_disk_format()
- body = self.create_image(name='New Http Image',
- container_format=container_format,
- disk_format=disk_format, is_public=False,
- copy_from=CONF.image.http_image)
- self.assertIn('id', body)
- image_id = body.get('id')
- self.assertEqual('New Http Image', body.get('name'))
- self.assertFalse(body.get('is_public'))
- waiters.wait_for_image_status(self.client, image_id, 'active')
- self.client.show_image(image_id)
+ image = self.create_image(name='New Http Image',
+ container_format=container_format,
+ disk_format=disk_format, is_public=False,
+ copy_from=CONF.image.http_image)
+ self.assertIn('id', image)
+ self.assertEqual('New Http Image', image.get('name'))
+ self.assertFalse(image.get('is_public'))
+ waiters.wait_for_image_status(self.client, image['id'], 'active')
+ self.client.show_image(image['id'])
@test.idempotent_id('05b19d55-140c-40d0-b36b-fafd774d421b')
def test_register_image_with_min_ram(self):
@@ -188,8 +186,7 @@
disk_format=disk_format,
is_public=False,
location=location)
- image_id = image['id']
- return image_id
+ return image['id']
@classmethod
def _create_standard_image(cls, name, container_format,
@@ -199,14 +196,13 @@
Note that the size of the new image is a random number between
1024 and 4096
"""
- image_file = moves.cStringIO(data_utils.random_bytes(size))
+ image_file = six.BytesIO(data_utils.random_bytes(size))
name = 'New Standard Image %s' % name
image = cls.create_image(name=name,
container_format=container_format,
disk_format=disk_format,
is_public=False, data=image_file)
- image_id = image['id']
- return image_id
+ return image['id']
@test.idempotent_id('246178ab-3b33-4212-9a4b-a7fe8261794d')
def test_index_no_params(self):
@@ -251,7 +247,7 @@
def test_index_min_size(self):
images_list = self.client.list_images(size_min=142)['images']
for image in images_list:
- self.assertTrue(image['size'] >= 142)
+ self.assertGreaterEqual(image['size'], 142)
result_set = set(map(lambda x: x['id'], images_list))
self.assertTrue(self.size142_set <= result_set)
self.assertFalse(self.size42_set <= result_set)
@@ -294,15 +290,14 @@
disk_format, size):
"""Create a new standard image and return newly-registered image-id"""
- image_file = moves.cStringIO(data_utils.random_bytes(size))
+ image_file = six.BytesIO(data_utils.random_bytes(size))
name = 'New Standard Image %s' % name
image = cls.create_image(name=name,
container_format=container_format,
disk_format=disk_format,
is_public=False, data=image_file,
properties={'key1': 'value1'})
- image_id = image['id']
- return image_id
+ return image['id']
@test.idempotent_id('01752c1c-0275-4de3-9e5b-876e44541928')
def test_list_image_metadata(self):
@@ -322,10 +317,7 @@
metadata['properties'].update(req_metadata)
headers = common_image.image_meta_to_headers(
properties=metadata['properties'])
- metadata = self.client.update_image(self.image_id,
- headers=headers)['image']
-
+ self.client.update_image(self.image_id, headers=headers)
resp = self.client.check_image(self.image_id)
resp_metadata = common_image.get_image_meta_from_headers(resp)
- expected = {'key1': 'alt1', 'key2': 'value2'}
- self.assertEqual(expected, resp_metadata['properties'])
+ self.assertEqual(req_metadata, resp_metadata['properties'])
diff --git a/tempest/api/image/v1/test_images_negative.py b/tempest/api/image/v1/test_images_negative.py
index 9e67c25..d8f103a 100644
--- a/tempest/api/image/v1/test_images_negative.py
+++ b/tempest/api/image/v1/test_images_negative.py
@@ -40,13 +40,6 @@
'x-image-meta-disk_format': 'wrong'})
@test.attr(type=['negative'])
- @test.idempotent_id('bb016f15-0820-4f27-a92d-09b2f67d2488')
- def test_delete_image_with_invalid_image_id(self):
- # An image should not be deleted with invalid image id
- self.assertRaises(lib_exc.NotFound, self.client.delete_image,
- '!@$%^&*()')
-
- @test.attr(type=['negative'])
@test.idempotent_id('ec652588-7e3c-4b67-a2f2-0fa96f57c8fc')
def test_delete_non_existent_image(self):
# Return an error while trying to delete a non-existent image
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 42a4352..6f8d239 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -16,7 +16,7 @@
import random
-from six import moves
+import six
from oslo_log import log as logging
from tempest.api.image import base
@@ -44,35 +44,34 @@
image_name = data_utils.rand_name('image')
container_format = CONF.image.container_formats[0]
disk_format = CONF.image.disk_formats[0]
- body = self.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private',
- ramdisk_id=uuid)
- self.assertIn('id', body)
- image_id = body.get('id')
- self.assertIn('name', body)
- self.assertEqual(image_name, body['name'])
- self.assertIn('visibility', body)
- self.assertEqual('private', body['visibility'])
- self.assertIn('status', body)
- self.assertEqual('queued', body['status'])
+ image = self.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private',
+ ramdisk_id=uuid)
+ self.assertIn('id', image)
+ self.assertIn('name', image)
+ self.assertEqual(image_name, image['name'])
+ self.assertIn('visibility', image)
+ self.assertEqual('private', image['visibility'])
+ self.assertIn('status', image)
+ self.assertEqual('queued', image['status'])
# Now try uploading an image file
file_content = data_utils.random_bytes()
- image_file = moves.cStringIO(file_content)
- self.client.store_image_file(image_id, image_file)
+ image_file = six.BytesIO(file_content)
+ self.client.store_image_file(image['id'], image_file)
# Now try to get image details
- body = self.client.show_image(image_id)
- self.assertEqual(image_id, body['id'])
+ body = self.client.show_image(image['id'])
+ self.assertEqual(image['id'], body['id'])
self.assertEqual(image_name, body['name'])
self.assertEqual(uuid, body['ramdisk_id'])
self.assertIn('size', body)
self.assertEqual(1024, body.get('size'))
# Now try get image file
- body = self.client.show_image_file(image_id)
+ body = self.client.show_image_file(image['id'])
self.assertEqual(file_content, body.data)
@test.attr(type='smoke')
@@ -84,20 +83,18 @@
image_name = data_utils.rand_name('image')
container_format = CONF.image.container_formats[0]
disk_format = CONF.image.disk_formats[0]
- body = self.client.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- image_id = body['id']
-
+ image = self.client.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
# Delete Image
- self.client.delete_image(image_id)
- self.client.wait_for_resource_deletion(image_id)
+ self.client.delete_image(image['id'])
+ self.client.wait_for_resource_deletion(image['id'])
# Verifying deletion
images = self.client.list_images()['images']
images_id = [item['id'] for item in images]
- self.assertNotIn(image_id, images_id)
+ self.assertNotIn(image['id'], images_id)
@test.attr(type='smoke')
@test.idempotent_id('f66891a7-a35c-41a8-b590-a065c2a1caa6')
@@ -108,27 +105,26 @@
image_name = data_utils.rand_name('image')
container_format = CONF.image.container_formats[0]
disk_format = CONF.image.disk_formats[0]
- body = self.client.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- self.addCleanup(self.client.delete_image, body['id'])
- self.assertEqual('queued', body['status'])
- image_id = body['id']
+ image = self.client.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
+ self.addCleanup(self.client.delete_image, image['id'])
+ self.assertEqual('queued', image['status'])
# Now try uploading an image file
- image_file = moves.cStringIO(data_utils.random_bytes())
- self.client.store_image_file(image_id, image_file)
+ image_file = six.BytesIO(data_utils.random_bytes())
+ self.client.store_image_file(image['id'], image_file)
# Update Image
new_image_name = data_utils.rand_name('new-image')
- body = self.client.update_image(image_id, [
+ body = self.client.update_image(image['id'], [
dict(replace='/name', value=new_image_name)])
# Verifying updating
- body = self.client.show_image(image_id)
- self.assertEqual(image_id, body['id'])
+ body = self.client.show_image(image['id'])
+ self.assertEqual(image['id'], body['id'])
self.assertEqual(new_image_name, body['name'])
@@ -160,16 +156,13 @@
1024 and 4096
"""
size = random.randint(1024, 4096)
- image_file = moves.cStringIO(data_utils.random_bytes(size))
- name = data_utils.rand_name('image')
- body = cls.create_image(name=name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- image_id = body['id']
- cls.client.store_image_file(image_id, data=image_file)
+ image_file = six.BytesIO(data_utils.random_bytes(size))
+ image = cls.create_image(container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
+ cls.client.store_image_file(image['id'], data=image_file)
- return image_id
+ return image['id']
def _list_by_param_value_and_assert(self, params):
"""Perform list action with given params and validates result."""
@@ -250,6 +243,16 @@
self.assertEqual(len(images_list), params['limit'],
"Failed to get images by limit")
+ @test.idempotent_id('e9a44b91-31c8-4b40-a332-e0a39ffb4dbb')
+ def test_list_image_param_owner(self):
+ # Test to get images by owner
+ image_id = self.created_images[0]
+ # Get image metadata
+ image = self.client.show_image(image_id)
+
+ params = {"owner": image['owner']}
+ self._list_by_param_value_and_assert(params)
+
@test.idempotent_id('622b925c-479f-4736-860d-adeaf13bc371')
def test_get_image_schema(self):
# Test to get image schema
diff --git a/tempest/api/image/v2/test_images_metadefs_namespaces.py b/tempest/api/image/v2/test_images_metadefs_namespaces.py
index 6fced00..a80a0cf 100644
--- a/tempest/api/image/v2/test_images_metadefs_namespaces.py
+++ b/tempest/api/image/v2/test_images_metadefs_namespaces.py
@@ -40,6 +40,10 @@
protected=True)
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self._cleanup_namespace, namespace_name)
+ # list namespaces
+        namespaces = self.namespaces_client.list_namespaces()['namespaces']
+        namespace_names = [ns['namespace'] for ns in namespaces]
+        self.assertIn(namespace_name, namespace_names)
# get namespace details
body = self.namespaces_client.show_namespace(namespace_name)
self.assertEqual(namespace_name, body['namespace'])
diff --git a/tempest/api/image/v2/test_images_metadefs_resource_types.py b/tempest/api/image/v2/test_images_metadefs_resource_types.py
new file mode 100644
index 0000000..3dd432b
--- /dev/null
+++ b/tempest/api/image/v2/test_images_metadefs_resource_types.py
@@ -0,0 +1,54 @@
+# Copyright 2016 Ericsson India Global Services Private Limited
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.image import base
+from tempest import test
+
+
+class MetadataResourceTypesTest(base.BaseV2ImageTest):
+ """Test the Metadata definition resource types basic functionality"""
+
+ @test.idempotent_id('6f358a4e-5ef0-11e6-a795-080027d0d606')
+ def test_basic_meta_def_resource_type_association(self):
+ # Get the available resource types and use one resource_type
+ body = self.resource_types_client.list_resource_types()
+ resource_name = body['resource_types'][0]['name']
+ # Create a namespace
+ namespace = self.create_namespace()
+ # Create resource type association
+ body = self.resource_types_client.create_resource_type_association(
+ namespace['namespace'], name=resource_name)
+ self.assertEqual(body['name'], resource_name)
+        # NOTE(raiesmh08): No addCleanup is registered here for the resource
+        # type dissociation on purpose: the association is only metadata on
+        # the namespace and is removed when the namespace itself is cleaned
+        # up at the test case level. Once the namespace is deleted, the
+        # resource type association disappears without any error or
+        # dependency issue.
+
+ # List resource type associations and validate creation
+ rs_type_associations = [
+ rs_type_association['name'] for rs_type_association in
+ self.resource_types_client.list_resource_type_association(
+ namespace['namespace'])['resource_type_associations']]
+ self.assertIn(resource_name, rs_type_associations)
+ # Delete resource type association
+ self.resource_types_client.delete_resource_type_association(
+ namespace['namespace'], resource_name)
+ # List resource type associations and validate deletion
+ rs_type_associations = [
+ rs_type_association['name'] for rs_type_association in
+ self.resource_types_client.list_resource_type_association(
+ namespace['namespace'])['resource_type_associations']]
+ self.assertNotIn(resource_name, rs_type_associations)
diff --git a/tempest/api/image/v2/test_images_negative.py b/tempest/api/image/v2/test_images_negative.py
index 14de8fd..cd1bca0 100644
--- a/tempest/api/image/v2/test_images_negative.py
+++ b/tempest/api/image/v2/test_images_negative.py
@@ -29,7 +29,7 @@
** get image with image_id=NULL
** get the deleted image
** delete non-existent image
- ** delete rimage with image_id=NULL
+ ** delete image with image_id=NULL
** delete the deleted image
"""
@@ -53,19 +53,19 @@
def test_get_delete_deleted_image(self):
# get and delete the deleted image
# create and delete image
- body = self.client.create_image(name='test',
- container_format='bare',
- disk_format='raw')
- image_id = body['id']
- self.client.delete_image(image_id)
- self.client.wait_for_resource_deletion(image_id)
+ image = self.client.create_image(name='test',
+ container_format='bare',
+ disk_format='raw')
+ self.client.delete_image(image['id'])
+ self.client.wait_for_resource_deletion(image['id'])
# get the deleted image
- self.assertRaises(lib_exc.NotFound, self.client.show_image, image_id)
+ self.assertRaises(lib_exc.NotFound,
+ self.client.show_image, image['id'])
# delete the deleted image
self.assertRaises(lib_exc.NotFound, self.client.delete_image,
- image_id)
+ image['id'])
@test.attr(type=['negative'])
@test.idempotent_id('6fe40f1c-57bd-4918-89cc-8500f850f3de')
diff --git a/tempest/api/image/v2/test_images_tags.py b/tempest/api/image/v2/test_images_tags.py
index 42a4b87..03f29bd 100644
--- a/tempest/api/image/v2/test_images_tags.py
+++ b/tempest/api/image/v2/test_images_tags.py
@@ -21,19 +21,18 @@
@test.idempotent_id('10407036-6059-4f95-a2cd-cbbbee7ed329')
def test_update_delete_tags_for_image(self):
- body = self.create_image(container_format='bare',
- disk_format='raw',
- visibility='private')
- image_id = body['id']
+ image = self.create_image(container_format='bare',
+ disk_format='raw',
+ visibility='private')
tag = data_utils.rand_name('tag')
- self.addCleanup(self.client.delete_image, image_id)
+ self.addCleanup(self.client.delete_image, image['id'])
# Creating image tag and verify it.
- self.client.add_image_tag(image_id, tag)
- body = self.client.show_image(image_id)
+ self.client.add_image_tag(image['id'], tag)
+ body = self.client.show_image(image['id'])
self.assertIn(tag, body['tags'])
# Deleting image tag and verify it.
- self.client.delete_image_tag(image_id, tag)
- body = self.client.show_image(image_id)
+ self.client.delete_image_tag(image['id'], tag)
+ body = self.client.show_image(image['id'])
self.assertNotIn(tag, body['tags'])
diff --git a/tempest/api/image/v2/test_images_tags_negative.py b/tempest/api/image/v2/test_images_tags_negative.py
index dd5650f..af4ffcf 100644
--- a/tempest/api/image/v2/test_images_tags_negative.py
+++ b/tempest/api/image/v2/test_images_tags_negative.py
@@ -33,12 +33,11 @@
@test.idempotent_id('39c023a2-325a-433a-9eea-649bf1414b19')
def test_delete_non_existing_tag(self):
# Delete non existing tag.
- body = self.create_image(container_format='bare',
- disk_format='raw',
- visibility='private'
- )
- image_id = body['id']
+ image = self.create_image(container_format='bare',
+ disk_format='raw',
+ visibility='private'
+ )
tag = data_utils.rand_name('non-exist-tag')
- self.addCleanup(self.client.delete_image, image_id)
+ self.addCleanup(self.client.delete_image, image['id'])
self.assertRaises(lib_exc.NotFound, self.client.delete_image_tag,
- image_id, tag)
+ image['id'], tag)
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index d2ab237..b3555b6 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -44,7 +44,7 @@
body = self.admin_networks_client.list_dhcp_agents_on_hosting_network(
self.network['id'])
agents = body['agents']
- self.assertIsNotNone(agents)
+ self.assertNotEmpty(agents, "no dhcp agent")
agent = agents[0]
self.assertTrue(self._check_network_in_dhcp_agent(
self.network['id'], agent))
diff --git a/tempest/api/network/admin/test_floating_ips_admin_actions.py b/tempest/api/network/admin/test_floating_ips_admin_actions.py
index baeaa0c..a32e7da 100644
--- a/tempest/api/network/admin/test_floating_ips_admin_actions.py
+++ b/tempest/api/network/admin/test_floating_ips_admin_actions.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.network import base
-from tempest.common.utils import data_utils
from tempest import config
from tempest import test
@@ -26,6 +25,13 @@
credentials = ['primary', 'alt', 'admin']
@classmethod
+ def skip_checks(cls):
+ super(FloatingIPAdminTestJSON, cls).skip_checks()
+ if not test.is_extension_enabled('router', 'network'):
+ msg = "router extension not enabled."
+ raise cls.skipException(msg)
+
+ @classmethod
def setup_clients(cls):
super(FloatingIPAdminTestJSON, cls).setup_clients()
cls.alt_floating_ips_client = cls.alt_manager.floating_ips_client
@@ -37,8 +43,7 @@
cls.floating_ip = cls.create_floatingip(cls.ext_net_id)
cls.network = cls.create_network()
cls.subnet = cls.create_subnet(cls.network)
- cls.router = cls.create_router(data_utils.rand_name('router-'),
- external_network_id=cls.ext_net_id)
+ cls.router = cls.create_router(external_network_id=cls.ext_net_id)
cls.create_router_interface(cls.router['id'], cls.subnet['id'])
cls.port = cls.create_port(cls.network)
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index b2cb003..c2ff038 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -13,9 +13,8 @@
# under the License.
from tempest.api.network import base
-from tempest.common.utils import data_utils
from tempest import config
-from tempest import exceptions
+from tempest.lib import exceptions
from tempest import test
CONF = config.CONF
@@ -66,35 +65,36 @@
else:
msg = "L3 Agent Scheduler enabled in conf, but L3 Agent not found"
raise exceptions.InvalidConfiguration(msg)
- cls.router = cls.create_router(data_utils.rand_name('router'))
- # NOTE(armax): If DVR is an available extension, and the created router
- # is indeed a distributed one, more resources need to be provisioned
- # in order to bind the router to the L3 agent.
- # That said, let's preserve the existing test logic, where the extra
- # query and setup steps are only required if the extension is available
- # and only if the router's default type is distributed.
- if test.is_extension_enabled('dvr', 'network'):
- cls.is_dvr_router = cls.admin_routers_client.show_router(
- cls.router['id'])['router'].get('distributed', False)
- if cls.is_dvr_router:
- cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
- cls.port = cls.create_port(cls.network)
- cls.routers_client.add_router_interface(
- cls.router['id'], port_id=cls.port['id'])
- # NOTE: Sometimes we have seen this test fail with dvr in,
- # multinode tests, since the dhcp port is not created before
- # the test gets executed and so the router is not scheduled
- # on the given agent. By adding the external gateway info to
- # the router, the router should be properly scheduled in the
- # dvr_snat node.
- # This is a temporary work around to prevent a race condition.
- external_gateway_info = {
- 'network_id': CONF.network.public_network_id,
- 'enable_snat': True}
- cls.admin_routers_client.update_router(
- cls.router['id'],
- external_gateway_info=external_gateway_info)
+ cls.router = cls.create_router()
+
+ if CONF.network.dvr_extra_resources:
+            # NOTE(armax): If DVR is an available extension, and the created
+            # router is indeed a distributed one, more resources need to be
+            # provisioned in order to bind the router to the L3 agent. This
+            # extra setup is required in the Liberty release or older and is
+            # no longer needed since the Mitaka release.
+ if test.is_extension_enabled('dvr', 'network'):
+ cls.is_dvr_router = cls.admin_routers_client.show_router(
+ cls.router['id'])['router'].get('distributed', False)
+ if cls.is_dvr_router:
+ cls.network = cls.create_network()
+ cls.subnet = cls.create_subnet(cls.network)
+ cls.port = cls.create_port(cls.network)
+ cls.routers_client.add_router_interface(
+ cls.router['id'], port_id=cls.port['id'])
+                # NOTE: Sometimes we have seen this test fail with dvr in
+                #       multinode tests, since the dhcp port is not created
+                #       before the test gets executed and so the router is not
+                #       scheduled on the given agent. By adding the external
+                #       gateway info to the router, the router should be
+                #       properly scheduled in the dvr_snat node. This is a
+                #       temporary workaround to prevent a race condition.
+ external_gateway_info = {
+ 'network_id': CONF.network.public_network_id,
+ 'enable_snat': True}
+ cls.admin_routers_client.update_router(
+ cls.router['id'],
+ external_gateway_info=external_gateway_info)
@classmethod
def resource_cleanup(cls):
diff --git a/tempest/api/network/admin/test_quotas.py b/tempest/api/network/admin/test_quotas.py
index 2ff31e0..3a264ff 100644
--- a/tempest/api/network/admin/test_quotas.py
+++ b/tempest/api/network/admin/test_quotas.py
@@ -87,5 +87,5 @@
@test.idempotent_id('2390f766-836d-40ef-9aeb-e810d78207fb')
def test_quotas(self):
- new_quotas = {'network': 0, 'security_group': 0}
+ new_quotas = {'network': 0, 'port': 0}
self._check_quotas(new_quotas)
diff --git a/tempest/api/network/base.py b/tempest/api/network/base.py
index 9e7c795..629926d 100644
--- a/tempest/api/network/base.py
+++ b/tempest/api/network/base.py
@@ -26,7 +26,7 @@
class BaseNetworkTest(tempest.test.BaseTestCase):
- """Base class for the Neutron tests
+ """Base class for the Neutron tests.
Per the Neutron API Guide, API v1.x was removed from the source code tree
(docs.openstack.org/api/openstack-network/2.0/content/Overview-d1e71.html)
@@ -137,7 +137,8 @@
@classmethod
def create_network(cls, network_name=None):
"""Wrapper utility that returns a test network."""
- network_name = network_name or data_utils.rand_name('test-network-')
+ network_name = network_name or data_utils.rand_name(
+ cls.__name__ + "-network")
body = cls.networks_client.create_network(name=network_name)
network = body['network']
@@ -148,7 +149,6 @@
def create_subnet(cls, network, gateway='', cidr=None, mask_bits=None,
ip_version=None, client=None, **kwargs):
"""Wrapper utility that returns a test subnet."""
-
# allow tests to use admin client
if not client:
client = cls.subnets_client
@@ -208,6 +208,9 @@
def create_router(cls, router_name=None, admin_state_up=False,
external_network_id=None, enable_snat=None,
**kwargs):
+ router_name = router_name or data_utils.rand_name(
+ cls.__name__ + "-router")
+
ext_gw_info = {}
if external_network_id:
ext_gw_info['network_id'] = external_network_id
@@ -270,7 +273,7 @@
"""Wrapper utility that returns a test metering label."""
body = cls.admin_metering_labels_client.create_metering_label(
description=description,
- name=data_utils.rand_name("metering-label"))
+ name=name)
metering_label = body['metering_label']
cls.metering_labels.append(metering_label)
return metering_label
diff --git a/tempest/api/network/base_routers.py b/tempest/api/network/base_routers.py
index 807257f..5fb5232 100644
--- a/tempest/api/network/base_routers.py
+++ b/tempest/api/network/base_routers.py
@@ -25,7 +25,7 @@
self.delete_router(router)
self.routers.remove(router)
- def _create_router(self, name, admin_state_up=False,
+ def _create_router(self, name=None, admin_state_up=False,
external_network_id=None, enable_snat=None):
# associate a cleanup with created routers to avoid quota limits
router = self.create_router(name, admin_state_up,
diff --git a/tempest/api/network/test_dhcp_ipv6.py b/tempest/api/network/test_dhcp_ipv6.py
index 77008ab..84c48ec 100644
--- a/tempest/api/network/test_dhcp_ipv6.py
+++ b/tempest/api/network/test_dhcp_ipv6.py
@@ -20,6 +20,7 @@
from tempest.api.network import base
from tempest.common.utils import data_utils
+from tempest.common.utils import net_info
from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -30,7 +31,7 @@
class NetworksTestDHCPv6(base.BaseNetworkTest):
_ip_version = 6
- """ Test DHCPv6 specific features using SLAAC, stateless and
+ """Test DHCPv6 specific features using SLAAC, stateless and
stateful settings for subnets. Also it shall check dual-stack
functionality (IPv4 + IPv6 together).
The tests include:
@@ -66,7 +67,7 @@
body = self.ports_client.list_ports()
ports = body['ports']
for port in ports:
- if (port['device_owner'].startswith('network:router_interface') and
+ if (net_info.is_router_interface_port(port) and
port['device_id'] in [r['id'] for r in self.routers]):
self.routers_client.remove_router_interface(port['device_id'],
port_id=port['id'])
@@ -347,9 +348,7 @@
def _create_subnet_router(self, kwargs):
subnet = self.create_subnet(self.network, **kwargs)
- router = self.create_router(
- router_name=data_utils.rand_name("routerv6-"),
- admin_state_up=True)
+ router = self.create_router(admin_state_up=True)
port = self.create_router_interface(router['id'],
subnet['id'])
body = self.ports_client.show_port(port['port_id'])
diff --git a/tempest/api/network/test_extensions.py b/tempest/api/network/test_extensions.py
index 2c981a1..84150b4 100644
--- a/tempest/api/network/test_extensions.py
+++ b/tempest/api/network/test_extensions.py
@@ -23,9 +23,9 @@
List all available extensions
- v2.0 of the Neutron API is assumed. It is also assumed that the following
- options are defined in the [network] section of etc/tempest.conf:
-
+    v2.0 of the Neutron API is assumed. It is also assumed that the
+    api-extensions option is defined in the [network-feature-enabled]
+    section of etc/tempest.conf.
"""
@test.attr(type='smoke')
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index 2abbf93..efe8982 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.network import base
-from tempest.common.utils import data_utils
from tempest.common.utils import net_utils
from tempest import config
from tempest import test
@@ -54,9 +53,8 @@
# Create network, subnet, router and add interface
cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
- cls.router = cls.create_router(data_utils.rand_name('router-'),
- external_network_id=cls.ext_net_id)
+ cls.subnet = cls.create_subnet(cls.network, enable_dhcp=False)
+ cls.router = cls.create_router(external_network_id=cls.ext_net_id)
cls.create_router_interface(cls.router['id'], cls.subnet['id'])
# Create two ports one each for Creation and Updating of floatingIP
for i in range(2):
@@ -156,8 +154,7 @@
self.assertEqual(created_floating_ip['router_id'], self.router['id'])
network2 = self.create_network()
subnet2 = self.create_subnet(network2)
- router2 = self.create_router(data_utils.rand_name('router-'),
- external_network_id=self.ext_net_id)
+ router2 = self.create_router(external_network_id=self.ext_net_id)
self.create_router_interface(router2['id'], subnet2['id'])
port_other_router = self.create_port(network2)
# Associate floating IP to the other port on another router
diff --git a/tempest/api/network/test_floating_ips_negative.py b/tempest/api/network/test_floating_ips_negative.py
index 963d99d..7ffc30f 100644
--- a/tempest/api/network/test_floating_ips_negative.py
+++ b/tempest/api/network/test_floating_ips_negative.py
@@ -15,7 +15,6 @@
# under the License.
from tempest.api.network import base
-from tempest.common.utils import data_utils
from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -45,7 +44,7 @@
# Create a network with a subnet connected to a router.
cls.network = cls.create_network()
cls.subnet = cls.create_subnet(cls.network)
- cls.router = cls.create_router(data_utils.rand_name('router'))
+ cls.router = cls.create_router()
cls.create_router_interface(cls.router['id'], cls.subnet['id'])
cls.port = cls.create_port(cls.network)
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index bf80ff5..dadaaba 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -127,7 +127,7 @@
def _get_allocation_pools_from_gateway(cls, ip_version):
"""Return allocation range for subnet of given gateway"""
gateway = cls._get_gateway_from_tempest_conf(ip_version)
- return [{'start': str(gateway + 2), 'end': str(gateway + 3)}]
+ return [{'start': str(gateway + 2), 'end': str(gateway + 6)}]
def subnet_dict(self, include_keys):
# Return a subnet dict which has include_keys and their corresponding
@@ -175,8 +175,7 @@
@test.idempotent_id('0e269138-0da6-4efc-a46d-578161e7b221')
def test_create_update_delete_network_subnet(self):
# Create a network
- name = data_utils.rand_name('network-')
- network = self.create_network(network_name=name)
+ network = self.create_network()
self.addCleanup(self._delete_network, network)
net_id = network['id']
self.assertEqual('ACTIVE', network['status'])
@@ -296,7 +295,7 @@
subnet_id)
# Since create_subnet adds the subnet to the delete list, and it is
- # is actually deleted here - this will create and issue, hence remove
+        # actually deleted here - this will create an issue, hence remove
# it from the list.
self.subnets.pop()
@@ -525,8 +524,7 @@
def test_create_delete_subnet_with_gw(self):
net = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
gateway = str(netaddr.IPAddress(net.first + 2))
- name = data_utils.rand_name('network-')
- network = self.create_network(network_name=name)
+ network = self.create_network()
subnet = self.create_subnet(network, gateway)
# Verifies Subnet GW in IPv6
self.assertEqual(subnet['gateway_ip'], gateway)
@@ -535,16 +533,14 @@
def test_create_delete_subnet_with_default_gw(self):
net = netaddr.IPNetwork(CONF.network.project_network_v6_cidr)
gateway_ip = str(netaddr.IPAddress(net.first + 1))
- name = data_utils.rand_name('network-')
- network = self.create_network(network_name=name)
+ network = self.create_network()
subnet = self.create_subnet(network)
# Verifies Subnet GW in IPv6
self.assertEqual(subnet['gateway_ip'], gateway_ip)
@test.idempotent_id('a9653883-b2a4-469b-8c3c-4518430a7e55')
def test_create_list_subnet_with_no_gw64_one_network(self):
- name = data_utils.rand_name('network-')
- network = self.create_network(name)
+ network = self.create_network()
ipv6_gateway = self.subnet_dict(['gateway'])['gateway']
subnet1 = self.create_subnet(network,
ip_version=6,
@@ -559,7 +555,7 @@
# Verifies Subnet GW is set in IPv6
self.assertEqual(subnet1['gateway_ip'], ipv6_gateway)
# Verifies Subnet GW is None in IPv4
- self.assertEqual(subnet2['gateway_ip'], None)
+ self.assertIsNone(subnet2['gateway_ip'])
# Verifies all 2 subnets in the same network
body = self.subnets_client.list_subnets()
subnets = [sub['id'] for sub in body['subnets']
diff --git a/tempest/api/network/test_ports.py b/tempest/api/network/test_ports.py
index caf7f14..15d289d 100644
--- a/tempest/api/network/test_ports.py
+++ b/tempest/api/network/test_ports.py
@@ -16,13 +16,14 @@
import socket
import netaddr
+import testtools
from tempest.api.network import base
from tempest.api.network import base_security_groups as sec_base
from tempest.common import custom_matchers
from tempest.common.utils import data_utils
from tempest import config
-from tempest import exceptions
+from tempest.lib import exceptions
from tempest import test
CONF = config.CONF
@@ -71,8 +72,7 @@
@test.idempotent_id('67f1b811-f8db-43e2-86bd-72c074d4a42c')
def test_create_bulk_port(self):
network1 = self.network
- name = data_utils.rand_name('network-')
- network2 = self.create_network(network_name=name)
+ network2 = self.create_network()
network_list = [network1['id'], network2['id']]
port_list = [{'network_id': net_id} for net_id in network_list]
body = self.ports_client.create_bulk_ports(ports=port_list)
@@ -199,7 +199,7 @@
self.addCleanup(self.networks_client.delete_network, network['id'])
subnet = self.create_subnet(network)
self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
- router = self.create_router(data_utils.rand_name('router-'))
+ router = self.create_router()
self.addCleanup(self.routers_client.delete_router, router['id'])
port = self.ports_client.create_port(network_id=network['id'])
# Add router interface to port created above
@@ -308,11 +308,17 @@
self.assertIn(security_group, port_show['security_groups'])
@test.idempotent_id('58091b66-4ff4-4cc1-a549-05d60c7acd1a')
+ @testtools.skipUnless(
+ test.is_extension_enabled('security-group', 'network'),
+ 'security-group extension not enabled.')
def test_update_port_with_security_group_and_extra_attributes(self):
self._update_port_with_security_groups(
[data_utils.rand_name('secgroup')])
@test.idempotent_id('edf6766d-3d40-4621-bc6e-2521a44c257d')
+ @testtools.skipUnless(
+ test.is_extension_enabled('security-group', 'network'),
+ 'security-group extension not enabled.')
def test_update_port_with_two_security_groups_and_extra_attributes(self):
self._update_port_with_security_groups(
[data_utils.rand_name('secgroup'),
@@ -337,6 +343,9 @@
@test.attr(type='smoke')
@test.idempotent_id('4179dcb9-1382-4ced-84fe-1b91c54f5735')
+ @testtools.skipUnless(
+ test.is_extension_enabled('security-group', 'network'),
+ 'security-group extension not enabled.')
def test_create_port_with_no_securitygroups(self):
network = self.create_network()
self.addCleanup(self.networks_client.delete_network, network['id'])
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index ba416e4..de2e71f 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -106,9 +106,8 @@
@test.requires_ext(extension='ext-gw-mode', service='network')
def test_create_router_with_default_snat_value(self):
# Create a router with default snat rule
- name = data_utils.rand_name('router')
router = self._create_router(
- name, external_network_id=CONF.network.public_network_id)
+ external_network_id=CONF.network.public_network_id)
self._verify_router_gateway(
router['id'], {'network_id': CONF.network.public_network_id,
'enable_snat': True})
@@ -136,7 +135,7 @@
def test_add_remove_router_interface_with_subnet_id(self):
network = self.create_network()
subnet = self.create_subnet(network)
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
# Add router interface with subnet id
interface = self.routers_client.add_router_interface(
router['id'], subnet_id=subnet['id'])
@@ -155,7 +154,7 @@
def test_add_remove_router_interface_with_port_id(self):
network = self.create_network()
self.create_subnet(network)
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
port_body = self.ports_client.create_port(
network_id=network['id'])
# add router interface to port created above
@@ -190,15 +189,18 @@
gw_port = list_body['ports'][0]
fixed_ips = gw_port['fixed_ips']
self.assertGreaterEqual(len(fixed_ips), 1)
+ # Assert that all of the IPs from the router gateway port
+ # are allocated from a valid public subnet.
public_net_body = self.admin_networks_client.show_network(
CONF.network.public_network_id)
- public_subnet_id = public_net_body['network']['subnets'][0]
- self.assertIn(public_subnet_id,
- map(lambda x: x['subnet_id'], fixed_ips))
+ public_subnet_ids = public_net_body['network']['subnets']
+ for fixed_ip in fixed_ips:
+ subnet_id = fixed_ip['subnet_id']
+ self.assertIn(subnet_id, public_subnet_ids)
@test.idempotent_id('6cc285d8-46bf-4f36-9b1a-783e3008ba79')
def test_update_router_set_gateway(self):
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
self.routers_client.update_router(
router['id'],
external_gateway_info={
@@ -212,7 +214,7 @@
@test.idempotent_id('b386c111-3b21-466d-880c-5e72b01e1a33')
@test.requires_ext(extension='ext-gw-mode', service='network')
def test_update_router_set_gateway_with_snat_explicit(self):
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
self.admin_routers_client.update_router(
router['id'],
external_gateway_info={
@@ -227,7 +229,7 @@
@test.idempotent_id('96536bc7-8262-4fb2-9967-5c46940fa279')
@test.requires_ext(extension='ext-gw-mode', service='network')
def test_update_router_set_gateway_without_snat(self):
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
self.admin_routers_client.update_router(
router['id'],
external_gateway_info={
@@ -242,7 +244,6 @@
@test.idempotent_id('ad81b7ee-4f81-407b-a19c-17e623f763e8')
def test_update_router_unset_gateway(self):
router = self._create_router(
- data_utils.rand_name('router-'),
external_network_id=CONF.network.public_network_id)
self.routers_client.update_router(router['id'],
external_gateway_info={})
@@ -257,7 +258,6 @@
@test.requires_ext(extension='ext-gw-mode', service='network')
def test_update_router_reset_gateway_without_snat(self):
router = self._create_router(
- data_utils.rand_name('router-'),
external_network_id=CONF.network.public_network_id)
self.admin_routers_client.update_router(
router['id'],
@@ -280,8 +280,7 @@
test_routes = []
routes_num = 4
# Create a router
- router = self._create_router(
- data_utils.rand_name('router-'), True)
+ router = self._create_router(admin_state_up=True)
self.addCleanup(
self._delete_extra_routes,
router['id'])
@@ -335,7 +334,7 @@
@test.idempotent_id('a8902683-c788-4246-95c7-ad9c6d63a4d9')
def test_update_router_admin_state(self):
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
self.assertFalse(router['admin_state_up'])
# Update router admin state
update_body = self.routers_client.update_router(router['id'],
@@ -354,7 +353,7 @@
subnet01 = self.create_subnet(network01)
sub02_cidr = netaddr.IPNetwork(self.tenant_cidr).next()
subnet02 = self.create_subnet(network02, cidr=sub02_cidr)
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
interface01 = self._add_router_interface_with_subnet_id(router['id'],
subnet01['id'])
self._verify_router_interface(router['id'], subnet01['id'],
@@ -368,7 +367,7 @@
def test_router_interface_port_update_with_fixed_ip(self):
network = self.create_network()
subnet = self.create_subnet(network)
- router = self._create_router(data_utils.rand_name('router-'))
+ router = self._create_router()
fixed_ip = [{'subnet_id': subnet['id']}]
interface = self._add_router_interface_with_subnet_id(router['id'],
subnet['id'])
@@ -414,7 +413,7 @@
@test.idempotent_id('644d7a4a-01a1-4b68-bb8d-0c0042cb1729')
def test_convert_centralized_router(self):
- router = self._create_router(data_utils.rand_name('router'))
+ router = self._create_router()
self.assertNotIn('distributed', router)
update_body = self.admin_routers_client.update_router(router['id'],
distributed=True)
diff --git a/tempest/api/network/test_routers_negative.py b/tempest/api/network/test_routers_negative.py
index cd9f6ad..b3983de 100644
--- a/tempest/api/network/test_routers_negative.py
+++ b/tempest/api/network/test_routers_negative.py
@@ -36,7 +36,7 @@
@classmethod
def resource_setup(cls):
super(RoutersNegativeTest, cls).resource_setup()
- cls.router = cls.create_router(data_utils.rand_name('router-'))
+ cls.router = cls.create_router()
cls.network = cls.create_network()
cls.subnet = cls.create_subnet(cls.network)
cls.tenant_cidr = (CONF.network.project_network_cidr
@@ -55,8 +55,7 @@
@test.attr(type=['negative'])
@test.idempotent_id('11836a18-0b15-4327-a50b-f0d9dc66bddd')
def test_router_add_gateway_net_not_external_returns_400(self):
- alt_network = self.create_network(
- network_name=data_utils.rand_name('router-negative-'))
+ alt_network = self.create_network()
sub_cidr = netaddr.IPNetwork(self.tenant_cidr).next()
self.create_subnet(alt_network, cidr=sub_cidr)
self.assertRaises(lib_exc.BadRequest,
@@ -128,14 +127,12 @@
@classmethod
def resource_setup(cls):
super(DvrRoutersNegativeTest, cls).resource_setup()
- cls.router = cls.create_router(data_utils.rand_name('router'))
+ cls.router = cls.create_router()
cls.network = cls.create_network()
cls.subnet = cls.create_subnet(cls.network)
@test.attr(type=['negative'])
@test.idempotent_id('4990b055-8fc7-48ab-bba7-aa28beaad0b9')
def test_router_create_tenant_distributed_returns_forbidden(self):
- self.assertRaises(lib_exc.Forbidden,
- self.create_router,
- data_utils.rand_name('router'),
+ self.assertRaises(lib_exc.Forbidden, self.create_router,
distributed=True)
diff --git a/tempest/api/network/test_security_groups.py b/tempest/api/network/test_security_groups.py
index 5312979..1031ab8 100644
--- a/tempest/api/network/test_security_groups.py
+++ b/tempest/api/network/test_security_groups.py
@@ -71,7 +71,7 @@
@test.attr(type='smoke')
@test.idempotent_id('e30abd17-fef9-4739-8617-dc26da88e686')
def test_list_security_groups(self):
- # Verify the that security group belonging to project exist in list
+        # Verify that the security group belonging to the project is listed
body = self.security_groups_client.list_security_groups()
security_groups = body['security_groups']
found = None
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index 97d9eed..1b1ffd1 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -15,6 +15,7 @@
from tempest.common import custom_matchers
from tempest import config
+from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc
import tempest.test
@@ -57,16 +58,51 @@
cls.container_client.auth_provider.clear_auth()
cls.account_client.auth_provider.clear_auth()
+ # make sure that discoverability is enabled and that the sections
+ # have not been disallowed by Swift
+ cls.policies = None
+
+ if CONF.object_storage_feature_enabled.discoverability:
+ _, body = cls.account_client.list_extensions()
+
+ if 'swift' in body and 'policies' in body['swift']:
+ cls.policies = body['swift']['policies']
+
+ cls.containers = []
+
@classmethod
- def delete_containers(cls, containers, container_client=None,
+ def create_container(cls):
+ # wrapper that returns a test container
+ container_name = data_utils.rand_name(name='TestContainer')
+ cls.container_client.create_container(container_name)
+ cls.containers.append(container_name)
+
+ return container_name
+
+ @classmethod
+ def create_object(cls, container_name, object_name=None,
+ data=None, metadata=None):
+ # wrapper that returns a test object
+ if object_name is None:
+ object_name = data_utils.rand_name(name='TestObject')
+ if data is None:
+ data = data_utils.arbitrary_string()
+ cls.object_client.create_object(container_name,
+ object_name,
+ data,
+ metadata=metadata)
+
+ return object_name, data
+
+ @classmethod
+ def delete_containers(cls, container_client=None,
object_client=None):
- """Remove given containers and all objects in them.
+ """Remove containers and all objects in them.
The containers should be visible from the container_client given.
Will not throw any error if the containers don't exist.
Will not check that object and container deletions succeed.
- :param containers: list of container names to remove
:param container_client: if None, use cls.container_client, this means
that the default testing user will be used (see 'username' in
'etc/tempest.conf')
@@ -76,9 +112,11 @@
container_client = cls.container_client
if object_client is None:
object_client = cls.object_client
- for cont in containers:
+ for cont in cls.containers:
try:
- objlist = container_client.list_all_container_objects(cont)
+ params = {'limit': 9999, 'format': 'json'}
+ resp, objlist = container_client.list_container_contents(
+ cont, params)
# delete every object in the container
for obj in objlist:
test_utils.call_and_ignore_notfound_exc(
@@ -91,5 +129,5 @@
"""Check the existence and the format of response headers"""
self.assertThat(resp, custom_matchers.ExistsAllResponseHeaders(
- target, method))
+ target, method, self.policies))
self.assertThat(resp, custom_matchers.AreAllWellFormatted())
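With the container bookkeeping now centralized in the base class above,
create_container() appends each new container to cls.containers and
delete_containers() removes everything on that list. A hypothetical subclass
sketch, assuming the base test class defined in this file (name assumed here
as BaseObjectTest), mirroring how the quota tests below are rewritten::

    class ExampleContainerTest(base.BaseObjectTest):   # hypothetical class
        @classmethod
        def resource_setup(cls):
            super(ExampleContainerTest, cls).resource_setup()
            cls.container_name = cls.create_container()

        @classmethod
        def resource_cleanup(cls):
            cls.delete_containers()
            super(ExampleContainerTest, cls).resource_cleanup()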
diff --git a/tempest/api/object_storage/test_account_bulk.py b/tempest/api/object_storage/test_account_bulk.py
index da4c80c..7292ee9 100644
--- a/tempest/api/object_storage/test_account_bulk.py
+++ b/tempest/api/object_storage/test_account_bulk.py
@@ -27,7 +27,7 @@
self.containers = []
def tearDown(self):
- self.delete_containers(self.containers)
+ self.delete_containers()
super(BulkTest, self).tearDown()
def _create_archive(self):
diff --git a/tempest/api/object_storage/test_account_quotas.py b/tempest/api/object_storage/test_account_quotas.py
index 0f6a330..fcbd6eb 100644
--- a/tempest/api/object_storage/test_account_quotas.py
+++ b/tempest/api/object_storage/test_account_quotas.py
@@ -34,8 +34,7 @@
@classmethod
def resource_setup(cls):
super(AccountQuotasTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name="TestContainer")
- cls.container_client.create_container(cls.container_name)
+ cls.container_name = cls.create_container()
# Retrieve a ResellerAdmin auth data and use it to set a quota
# on the client's account
@@ -73,8 +72,7 @@
@classmethod
def resource_cleanup(cls):
- if hasattr(cls, "container_name"):
- cls.delete_containers([cls.container_name])
+ cls.delete_containers()
super(AccountQuotasTest, cls).resource_cleanup()
@test.attr(type="smoke")
diff --git a/tempest/api/object_storage/test_account_quotas_negative.py b/tempest/api/object_storage/test_account_quotas_negative.py
index 546bb06..ae8dfcc 100644
--- a/tempest/api/object_storage/test_account_quotas_negative.py
+++ b/tempest/api/object_storage/test_account_quotas_negative.py
@@ -13,9 +13,7 @@
# under the License.
from tempest.api.object_storage import base
-from tempest.common.utils import data_utils
from tempest import config
-from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -36,8 +34,7 @@
@classmethod
def resource_setup(cls):
super(AccountQuotasNegativeTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name="TestContainer")
- cls.container_client.create_container(cls.container_name)
+ cls.container_name = cls.create_container()
# Retrieve a ResellerAdmin auth data and use it to set a quota
# on the client's account
@@ -74,8 +71,7 @@
@classmethod
def resource_cleanup(cls):
- if hasattr(cls, "container_name"):
- cls.delete_containers([cls.container_name])
+ cls.delete_containers()
super(AccountQuotasNegativeTest, cls).resource_cleanup()
@test.attr(type=["negative"])
@@ -93,14 +89,3 @@
self.assertRaises(lib_exc.Forbidden,
self.account_client.create_account_metadata,
{"Quota-Bytes": "100"})
-
- @test.attr(type=["negative"])
- @decorators.skip_because(bug="1310597")
- @test.idempotent_id('cf9e21f5-3aa4-41b1-9462-28ac550d8d3f')
- @test.requires_ext(extension='account_quotas', service='object')
- def test_upload_large_object(self):
- object_name = data_utils.rand_name(name="TestObject")
- data = data_utils.arbitrary_string(30)
- self.assertRaises(lib_exc.OverLimit,
- self.object_client.create_object,
- self.container_name, object_name, data)
diff --git a/tempest/api/object_storage/test_account_services.py b/tempest/api/object_storage/test_account_services.py
index 5983c1f..33e5852 100644
--- a/tempest/api/object_storage/test_account_services.py
+++ b/tempest/api/object_storage/test_account_services.py
@@ -15,7 +15,6 @@
import random
-from six import moves
import testtools
from tempest.api.object_storage import base
@@ -42,7 +41,7 @@
@classmethod
def resource_setup(cls):
super(AccountTest, cls).resource_setup()
- for i in moves.xrange(ord('a'), ord('f') + 1):
+ for i in range(ord('a'), ord('f') + 1):
name = data_utils.rand_name(name='%s-' % chr(i))
cls.container_client.create_container(name)
cls.containers.append(name)
@@ -50,7 +49,7 @@
@classmethod
def resource_cleanup(cls):
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(AccountTest, cls).resource_cleanup()
@test.attr(type='smoke')
@@ -78,7 +77,16 @@
# container request, the response does not contain 'accept-ranges'
# header. This is a special case, therefore the existence of response
# headers is checked without custom matcher.
- self.assertIn('content-length', resp)
+ #
+        # As the expected response is 204 No Content, the presence of a
+        # Content-Length header is intentionally not checked here. According
+        # to RFC 7230 a server MUST NOT send this header in such responses,
+        # so clients should not depend on it. However, the standard does not
+        # require clients to validate the server's behavior, and we rely on
+        # that to avoid rejecting implementations that violate it, such as
+        # Swift [1] or some versions of Ceph RadosGW [2].
+ # [1] https://bugs.launchpad.net/swift/+bug/1537811
+ # [2] http://tracker.ceph.com/issues/13582
self.assertIn('x-timestamp', resp)
self.assertIn('x-account-bytes-used', resp)
self.assertIn('x-account-container-count', resp)
@@ -131,7 +139,8 @@
@test.idempotent_id('5cfa4ab2-4373-48dd-a41f-a532b12b08b2')
def test_list_containers_with_limit(self):
# list containers one of them, half of them then all of them
- for limit in (1, self.containers_count / 2, self.containers_count):
+ for limit in (1, self.containers_count // 2,
+ self.containers_count):
params = {'limit': limit}
resp, container_list = \
self.account_client.list_account_containers(params=params)
@@ -152,12 +161,13 @@
self.assertEqual(len(container_list), 0)
- params = {'marker': self.containers[self.containers_count / 2]}
+ params = {'marker': self.containers[self.containers_count // 2]}
resp, container_list = \
self.account_client.list_account_containers(params=params)
self.assertHeaders(resp, 'Account', 'GET')
- self.assertEqual(len(container_list), self.containers_count / 2 - 1)
+ self.assertEqual(len(container_list),
+ self.containers_count // 2 - 1)
@test.idempotent_id('5ca164e4-7bde-43fa-bafb-913b53b9e786')
def test_list_containers_with_end_marker(self):
@@ -171,11 +181,11 @@
self.assertHeaders(resp, 'Account', 'GET')
self.assertEqual(len(container_list), 0)
- params = {'end_marker': self.containers[self.containers_count / 2]}
+ params = {'end_marker': self.containers[self.containers_count // 2]}
resp, container_list = \
self.account_client.list_account_containers(params=params)
self.assertHeaders(resp, 'Account', 'GET')
- self.assertEqual(len(container_list), self.containers_count / 2)
+ self.assertEqual(len(container_list), self.containers_count // 2)
@test.idempotent_id('ac8502c2-d4e4-4f68-85a6-40befea2ef5e')
def test_list_containers_with_marker_and_end_marker(self):
@@ -206,12 +216,12 @@
# list containers combining limit and end_marker param
limit = random.randint(1, self.containers_count)
params = {'limit': limit,
- 'end_marker': self.containers[self.containers_count / 2]}
+ 'end_marker': self.containers[self.containers_count // 2]}
resp, container_list = self.account_client.list_account_containers(
params=params)
self.assertHeaders(resp, 'Account', 'GET')
self.assertEqual(len(container_list),
- min(limit, self.containers_count / 2))
+ min(limit, self.containers_count // 2))
@test.idempotent_id('8cf98d9c-e3a0-4e44-971b-c87656fdddbd')
def test_list_containers_with_limit_and_marker_and_end_marker(self):
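The / to // conversions in the hunks above are part of the Python 3 migration; a minimal illustration of the difference (plain Python, illustrative only):

containers_count = 7
print(containers_count / 2)   # Python 3: 3.5 (a float); Python 2: 3
print(containers_count // 2)  # 3 on both versions and always an int,
                              # so it stays valid as a list index or limit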
diff --git a/tempest/api/object_storage/test_container_acl.py b/tempest/api/object_storage/test_container_acl.py
index c1b6711..ffdd1de 100644
--- a/tempest/api/object_storage/test_container_acl.py
+++ b/tempest/api/object_storage/test_container_acl.py
@@ -39,11 +39,10 @@
def setUp(self):
super(ObjectTestACLs, self).setUp()
- self.container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(self.container_name)
+ self.container_name = self.create_container()
def tearDown(self):
- self.delete_containers([self.container_name])
+ self.delete_containers()
super(ObjectTestACLs, self).tearDown()
@test.idempotent_id('a3270f3f-7640-4944-8448-c7ea783ea5b6')
diff --git a/tempest/api/object_storage/test_container_quotas.py b/tempest/api/object_storage/test_container_quotas.py
index 01e5389..8cbe441 100644
--- a/tempest/api/object_storage/test_container_quotas.py
+++ b/tempest/api/object_storage/test_container_quotas.py
@@ -15,11 +15,9 @@
from tempest.api.object_storage import base
from tempest.common.utils import data_utils
-from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
-CONF = config.CONF
QUOTA_BYTES = 10
QUOTA_COUNT = 3
@@ -38,8 +36,7 @@
Maximum object count of the container.
"""
super(ContainerQuotasTest, self).setUp()
- self.container_name = data_utils.rand_name(name="TestContainer")
- self.container_client.create_container(self.container_name)
+ self.container_name = self.create_container()
metadata = {"quota-bytes": str(QUOTA_BYTES),
"quota-count": str(QUOTA_COUNT), }
self.container_client.update_container_metadata(
@@ -47,7 +44,7 @@
def tearDown(self):
"""Cleans the container of any object after each test."""
- self.delete_containers([self.container_name])
+ self.delete_containers()
super(ContainerQuotasTest, self).tearDown()
@test.idempotent_id('9a0fb034-86af-4df0-86fa-f8bd7db21ae0')
@@ -71,7 +68,7 @@
@test.requires_ext(extension='container_quotas', service='object')
@test.attr(type="smoke")
def test_upload_large_object(self):
- """Attempts to upload an object lagger than the bytes quota."""
+ """Attempts to upload an object larger than the bytes quota."""
object_name = data_utils.rand_name(name="TestObject")
data = data_utils.arbitrary_string(QUOTA_BYTES + 1)
diff --git a/tempest/api/object_storage/test_container_services.py b/tempest/api/object_storage/test_container_services.py
index 9d043e5..dbe8b4a 100644
--- a/tempest/api/object_storage/test_container_services.py
+++ b/tempest/api/object_storage/test_container_services.py
@@ -19,33 +19,10 @@
class ContainerTest(base.BaseObjectTest):
- def setUp(self):
- super(ContainerTest, self).setUp()
- self.containers = []
-
def tearDown(self):
- self.delete_containers(self.containers)
+ self.delete_containers()
super(ContainerTest, self).tearDown()
- def _create_container(self):
- # setup container
- container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(container_name)
- self.containers.append(container_name)
-
- return container_name
-
- def _create_object(self, container_name, object_name=None):
- # setup object
- if object_name is None:
- object_name = data_utils.rand_name(name='TestObject')
- data = data_utils.arbitrary_string()
- self.object_client.create_object(container_name,
- object_name,
- data)
-
- return object_name
-
@test.attr(type='smoke')
@test.idempotent_id('92139d73-7819-4db1-85f8-3f2f22a8d91f')
def test_create_container(self):
@@ -86,7 +63,8 @@
# create container with metadata value
container_name = data_utils.rand_name(name='TestContainer')
- metadata = {'test-container-meta': 'Meta1'}
+ # metadata name using underscores should be converted to hyphens
+ metadata = {'test_container_meta': 'Meta1'}
resp, _ = self.container_client.create_container(
container_name,
metadata=metadata)
@@ -97,7 +75,7 @@
container_name)
self.assertIn('x-container-meta-test-container-meta', resp)
self.assertEqual(resp['x-container-meta-test-container-meta'],
- metadata['test-container-meta'])
+ metadata['test_container_meta'])
@test.idempotent_id('24d16451-1c0c-4e4f-b59c-9840a3aba40e')
def test_create_container_with_remove_metadata_key(self):
@@ -140,18 +118,17 @@
@test.idempotent_id('95d3a249-b702-4082-a2c4-14bb860cf06a')
def test_delete_container(self):
# create a container
- container_name = self._create_container()
+ container_name = self.create_container()
# delete container, success asserted within
resp, _ = self.container_client.delete_container(container_name)
self.assertHeaders(resp, 'Container', 'DELETE')
- self.containers.remove(container_name)
@test.attr(type='smoke')
@test.idempotent_id('312ff6bd-5290-497f-bda1-7c5fec6697ab')
def test_list_container_contents(self):
# get container contents list
- container_name = self._create_container()
- object_name = self._create_object(container_name)
+ container_name = self.create_container()
+ object_name, _ = self.create_object(container_name)
resp, object_list = self.container_client.list_container_contents(
container_name)
@@ -161,7 +138,7 @@
@test.idempotent_id('4646ac2d-9bfb-4c7d-a3c5-0f527402b3df')
def test_list_container_contents_with_no_object(self):
# get empty container contents list
- container_name = self._create_container()
+ container_name = self.create_container()
resp, object_list = self.container_client.list_container_contents(
container_name)
@@ -171,9 +148,9 @@
@test.idempotent_id('fe323a32-57b9-4704-a996-2e68f83b09bc')
def test_list_container_contents_with_delimiter(self):
# get container contents list using delimiter param
- container_name = self._create_container()
+ container_name = self.create_container()
object_name = data_utils.rand_name(name='TestObject/')
- self._create_object(container_name, object_name)
+ self.create_object(container_name, object_name)
params = {'delimiter': '/'}
resp, object_list = self.container_client.list_container_contents(
@@ -185,8 +162,8 @@
@test.idempotent_id('55b4fa5c-e12e-4ca9-8fcf-a79afe118522')
def test_list_container_contents_with_end_marker(self):
# get container contents list using end_marker param
- container_name = self._create_container()
- object_name = self._create_object(container_name)
+ container_name = self.create_container()
+ object_name, _ = self.create_object(container_name)
params = {'end_marker': 'ZzzzObject1234567890'}
resp, object_list = self.container_client.list_container_contents(
@@ -198,8 +175,8 @@
@test.idempotent_id('196f5034-6ab0-4032-9da9-a937bbb9fba9')
def test_list_container_contents_with_format_json(self):
# get container contents list using format_json param
- container_name = self._create_container()
- self._create_object(container_name)
+ container_name = self.create_container()
+ self.create_object(container_name)
params = {'format': 'json'}
resp, object_list = self.container_client.list_container_contents(
@@ -217,8 +194,8 @@
@test.idempotent_id('655a53ca-4d15-408c-a377-f4c6dbd0a1fa')
def test_list_container_contents_with_format_xml(self):
# get container contents list using format_xml param
- container_name = self._create_container()
- self._create_object(container_name)
+ container_name = self.create_container()
+ self.create_object(container_name)
params = {'format': 'xml'}
resp, object_list = self.container_client.list_container_contents(
@@ -241,8 +218,8 @@
@test.idempotent_id('297ec38b-2b61-4ff4-bcd1-7fa055e97b61')
def test_list_container_contents_with_limit(self):
# get container contents list using limit param
- container_name = self._create_container()
- object_name = self._create_object(container_name)
+ container_name = self.create_container()
+ object_name, _ = self.create_object(container_name)
params = {'limit': data_utils.rand_int_id(1, 10000)}
resp, object_list = self.container_client.list_container_contents(
@@ -254,8 +231,8 @@
@test.idempotent_id('c31ddc63-2a58-4f6b-b25c-94d2937e6867')
def test_list_container_contents_with_marker(self):
# get container contents list using marker param
- container_name = self._create_container()
- object_name = self._create_object(container_name)
+ container_name = self.create_container()
+ object_name, _ = self.create_object(container_name)
params = {'marker': 'AaaaObject1234567890'}
resp, object_list = self.container_client.list_container_contents(
@@ -267,9 +244,9 @@
@test.idempotent_id('58ca6cc9-6af0-408d-aaec-2a6a7b2f0df9')
def test_list_container_contents_with_path(self):
# get container contents list using path param
- container_name = self._create_container()
+ container_name = self.create_container()
object_name = data_utils.rand_name(name='Swift/TestObject')
- self._create_object(container_name, object_name)
+ self.create_object(container_name, object_name)
params = {'path': 'Swift'}
resp, object_list = self.container_client.list_container_contents(
@@ -281,8 +258,8 @@
@test.idempotent_id('77e742c7-caf2-4ec9-8aa4-f7d509a3344c')
def test_list_container_contents_with_prefix(self):
# get container contents list using prefix param
- container_name = self._create_container()
- object_name = self._create_object(container_name)
+ container_name = self.create_container()
+ object_name, _ = self.create_object(container_name)
prefix_key = object_name[0:8]
params = {'prefix': prefix_key}
@@ -296,7 +273,7 @@
@test.idempotent_id('96e68f0e-19ec-4aa2-86f3-adc6a45e14dd')
def test_list_container_metadata(self):
# List container metadata
- container_name = self._create_container()
+ container_name = self.create_container()
metadata = {'name': 'Pictures'}
self.container_client.update_container_metadata(
@@ -312,7 +289,7 @@
@test.idempotent_id('a2faf936-6b13-4f8d-92a2-c2278355821e')
def test_list_no_container_metadata(self):
# HEAD container without metadata
- container_name = self._create_container()
+ container_name = self.create_container()
resp, _ = self.container_client.list_container_metadata(
container_name)
@@ -345,7 +322,7 @@
@test.idempotent_id('2ae5f295-4bf1-4e04-bfad-21e54b62cec5')
def test_update_container_metadata_with_create_metadata(self):
# update container metadata using add metadata
- container_name = self._create_container()
+ container_name = self.create_container()
metadata = {'test-container-meta1': 'Meta1'}
resp, _ = self.container_client.update_container_metadata(
@@ -380,7 +357,7 @@
@test.idempotent_id('31f40a5f-6a52-4314-8794-cd89baed3040')
def test_update_container_metadata_with_create_metadata_key(self):
# update container metadata with a blank value of metadata
- container_name = self._create_container()
+ container_name = self.create_container()
metadata = {'test-container-meta1': ''}
resp, _ = self.container_client.update_container_metadata(
diff --git a/tempest/api/object_storage/test_container_services_negative.py b/tempest/api/object_storage/test_container_services_negative.py
new file mode 100644
index 0000000..7049db0
--- /dev/null
+++ b/tempest/api/object_storage/test_container_services_negative.py
@@ -0,0 +1,167 @@
+# Copyright 2016 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.object_storage import base
+from tempest.lib.common.utils import data_utils
+from tempest.lib import exceptions
+from tempest import test
+
+
+class ContainerNegativeTest(base.BaseObjectTest):
+
+ @classmethod
+ def resource_setup(cls):
+ super(ContainerNegativeTest, cls).resource_setup()
+
+ # use /info to get default constraints
+ _, body = cls.account_client.list_extensions()
+ cls.constraints = body['swift']
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('30686921-4bed-4764-a038-40d741ed4e78')
+ def test_create_container_name_exceeds_max_length(self):
+ # Attempts to create a container name that is longer than max
+ max_length = self.constraints['max_container_name_length']
+ # create a container with long name
+ container_name = data_utils.arbitrary_string(size=max_length + 1)
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name)
+ self.assertIn('Container name length of ' + str(max_length + 1) +
+ ' longer than ' + str(max_length), str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('41e645bf-2e68-4f84-bf7b-c71aa5cd76ce')
+ def test_create_container_metadata_name_exceeds_max_length(self):
+ # Attempts to create container with metadata name
+ # that is longer than max.
+ max_length = self.constraints['max_meta_name_length']
+ container_name = data_utils.rand_name(name='TestContainer')
+ metadata_name = data_utils.arbitrary_string(size=max_length + 1)
+ metadata = {metadata_name: 'penguin'}
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name, metadata=metadata)
+ self.assertIn('Metadata name too long', str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('81e36922-326b-4b7c-8155-3bbceecd7a82')
+ def test_create_container_metadata_value_exceeds_max_length(self):
+ # Attempts to create container with metadata value
+ # that is longer than max.
+ max_length = self.constraints['max_meta_value_length']
+ container_name = data_utils.rand_name(name='TestContainer')
+ metadata_value = data_utils.arbitrary_string(size=max_length + 1)
+ metadata = {'animal': metadata_value}
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name, metadata=metadata)
+ self.assertIn('Metadata value longer than ' + str(max_length), str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('ac666539-d566-4f02-8ceb-58e968dfb732')
+ def test_create_container_metadata_exceeds_overall_metadata_count(self):
+ # Attempts to create container with metadata that exceeds the
+ # default count
+ max_count = self.constraints['max_meta_count']
+ container_name = data_utils.rand_name(name='TestContainer')
+ metadata = {}
+ for i in range(max_count + 1):
+ metadata['animal-' + str(i)] = 'penguin'
+
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name, metadata=metadata)
+ self.assertIn('Too many metadata items; max ' + str(max_count),
+ str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('1a95ab2e-b712-4a98-8a4d-8ce21b7557d6')
+ def test_get_metadata_headers_with_invalid_container_name(self):
+ # Attempts to retrieve metadata headers with an invalid
+ # container name.
+ invalid_name = data_utils.rand_name(name="TestInvalidContainer")
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.list_container_metadata,
+ invalid_name)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('125a24fa-90a7-4cfc-b604-44e49d788390')
+ def test_update_metadata_with_nonexistent_container_name(self):
+ # Attempts to update metadata using a nonexistent container name.
+ nonexistent_name = data_utils.rand_name(
+ name="TestNonexistentContainer")
+ metadata = {'animal': 'penguin'}
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.update_container_metadata,
+ nonexistent_name, metadata)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('65387dbf-a0e2-4aac-9ddc-16eb3f1f69ba')
+ def test_delete_with_nonexistent_container_name(self):
+ # Attempts to delete metadata using a nonexistent container name.
+ nonexistent_name = data_utils.rand_name(
+ name="TestNonexistentContainer")
+ metadata = {'animal': 'penguin'}
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.delete_container_metadata,
+ nonexistent_name, metadata)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('14331d21-1e81-420a-beea-19cb5e5207f5')
+ def test_list_all_container_objects_with_nonexistent_container(self):
+ # Attempts to get a listing of all objects on a container
+ # that doesn't exist.
+ nonexistent_name = data_utils.rand_name(
+ name="TestNonexistentContainer")
+ params = {'limit': 9999, 'format': 'json'}
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.list_container_contents,
+ nonexistent_name, params)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('86b2ab08-92d5-493d-acd2-85f0c848819e')
+ def test_list_all_container_objects_on_deleted_container(self):
+ # Attempts to get a listing of all objects on a container
+ # that was deleted.
+ container_name = self.create_container()
+ # delete container
+ resp, _ = self.container_client.delete_container(container_name)
+ self.assertHeaders(resp, 'Container', 'DELETE')
+ params = {'limit': 9999, 'format': 'json'}
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.list_container_contents,
+ container_name, params)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('42da116e-1e8c-4c96-9e06-2f13884ed2b1')
+ def test_delete_non_empty_container(self):
+ # create a container and an object within it
+ # attempt to delete a container that isn't empty.
+ container_name = self.create_container()
+ self.addCleanup(self.container_client.delete_container,
+ container_name)
+ object_name, _ = self.create_object(container_name)
+ self.addCleanup(self.object_client.delete_object,
+ container_name, object_name)
+
+ ex = self.assertRaises(exceptions.Conflict,
+ self.container_client.delete_container,
+ container_name)
+ self.assertIn('An object with that identifier already exists',
+ str(ex))
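The constraints consumed by the negative tests above come from Swift's /info discovery document. A minimal standalone sketch of fetching them directly (this assumes the third-party requests package and a reachable endpoint URL, both illustrative assumptions; the tests themselves obtain the same data via account_client.list_extensions()):

import requests

def get_swift_constraints(swift_url):
    # swift_url is a hypothetical endpoint, e.g. 'http://127.0.0.1:8080'
    info = requests.get(swift_url + '/info').json()
    # The 'swift' section carries limits such as max_container_name_length,
    # max_meta_name_length, max_meta_value_length and max_meta_count.
    return info['swift']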
diff --git a/tempest/api/object_storage/test_container_staticweb.py b/tempest/api/object_storage/test_container_staticweb.py
index 5b3ce79..47ef0d3 100644
--- a/tempest/api/object_storage/test_container_staticweb.py
+++ b/tempest/api/object_storage/test_container_staticweb.py
@@ -24,18 +24,14 @@
@classmethod
def resource_setup(cls):
super(StaticWebTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name="TestContainer")
# This header should be posted on the container before every test
cls.headers_public_read_acl = {'Read': '.r:*,.rlistings'}
# Create test container and create one object in it
- cls.container_client.create_container(cls.container_name)
- cls.object_name = data_utils.rand_name(name="TestObject")
- cls.object_data = data_utils.arbitrary_string()
- cls.object_client.create_object(cls.container_name,
- cls.object_name,
- cls.object_data)
+ cls.container_name = cls.create_container()
+ cls.object_name, cls.object_data = cls.create_object(
+ cls.container_name)
cls.container_client.update_container_metadata(
cls.container_name,
@@ -44,8 +40,7 @@
@classmethod
def resource_cleanup(cls):
- if hasattr(cls, "container_name"):
- cls.delete_containers([cls.container_name])
+ cls.delete_containers()
super(StaticWebTest, cls).resource_cleanup()
@test.idempotent_id('c1f055ab-621d-4a6a-831f-846fcb578b8b')
diff --git a/tempest/api/object_storage/test_container_sync.py b/tempest/api/object_storage/test_container_sync.py
index 2a5cec6..e10b900 100644
--- a/tempest/api/object_storage/test_container_sync.py
+++ b/tempest/api/object_storage/test_container_sync.py
@@ -80,7 +80,7 @@
@classmethod
def resource_cleanup(cls):
for client in cls.clients.values():
- cls.delete_containers(cls.containers, client[0], client[1])
+ cls.delete_containers(client[0], client[1])
super(ContainerSyncTest, cls).resource_cleanup()
def _test_container_synchronization(self, make_headers):
diff --git a/tempest/api/object_storage/test_object_expiry.py b/tempest/api/object_storage/test_object_expiry.py
index 9db8bde..11acb31 100644
--- a/tempest/api/object_storage/test_object_expiry.py
+++ b/tempest/api/object_storage/test_object_expiry.py
@@ -16,7 +16,6 @@
import time
from tempest.api.object_storage import base
-from tempest.common.utils import data_utils
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -25,19 +24,17 @@
@classmethod
def resource_setup(cls):
super(ObjectExpiryTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name='TestContainer')
- cls.container_client.create_container(cls.container_name)
+ cls.container_name = cls.create_container()
def setUp(self):
super(ObjectExpiryTest, self).setUp()
# create object
- self.object_name = data_utils.rand_name(name='TestObject')
- resp, _ = self.object_client.create_object(self.container_name,
- self.object_name, '')
+ self.object_name, _ = self.create_object(
+ self.container_name)
@classmethod
def resource_cleanup(cls):
- cls.delete_containers([cls.container_name])
+ cls.delete_containers()
super(ObjectExpiryTest, cls).resource_cleanup()
def _test_object_expiry(self, metadata):
diff --git a/tempest/api/object_storage/test_object_formpost.py b/tempest/api/object_storage/test_object_formpost.py
index 356f560..102ec2f 100644
--- a/tempest/api/object_storage/test_object_formpost.py
+++ b/tempest/api/object_storage/test_object_formpost.py
@@ -31,12 +31,9 @@
@classmethod
def resource_setup(cls):
super(ObjectFormPostTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name='TestContainer')
+ cls.container_name = cls.create_container()
cls.object_name = data_utils.rand_name(name='ObjectTemp')
- cls.container_client.create_container(cls.container_name)
- cls.containers = [cls.container_name]
-
cls.key = 'Meta'
cls.metadata = {'Temp-URL-Key': cls.key}
cls.account_client.create_account_metadata(metadata=cls.metadata)
@@ -56,7 +53,7 @@
@classmethod
def resource_cleanup(cls):
cls.account_client.delete_account_metadata(metadata=cls.metadata)
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(ObjectFormPostTest, cls).resource_cleanup()
def get_multipart_form(self, expires=600):
diff --git a/tempest/api/object_storage/test_object_formpost_negative.py b/tempest/api/object_storage/test_object_formpost_negative.py
index cb13271..8ff5d82 100644
--- a/tempest/api/object_storage/test_object_formpost_negative.py
+++ b/tempest/api/object_storage/test_object_formpost_negative.py
@@ -32,12 +32,9 @@
@classmethod
def resource_setup(cls):
super(ObjectFormPostNegativeTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name='TestContainer')
+ cls.container_name = cls.create_container()
cls.object_name = data_utils.rand_name(name='ObjectTemp')
- cls.container_client.create_container(cls.container_name)
- cls.containers = [cls.container_name]
-
cls.key = 'Meta'
cls.metadata = {'Temp-URL-Key': cls.key}
cls.account_client.create_account_metadata(metadata=cls.metadata)
@@ -57,7 +54,7 @@
@classmethod
def resource_cleanup(cls):
cls.account_client.delete_account_metadata(metadata=cls.metadata)
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(ObjectFormPostNegativeTest, cls).resource_cleanup()
def get_multipart_form(self, expires=600):
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index a88e4f4..8736f9a 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -35,32 +35,21 @@
@classmethod
def resource_setup(cls):
super(ObjectTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name='TestContainer')
- cls.container_client.create_container(cls.container_name)
- cls.containers = [cls.container_name]
+ cls.container_name = cls.create_container()
@classmethod
def resource_cleanup(cls):
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(ObjectTest, cls).resource_cleanup()
- def _create_object(self, metadata=None):
- # setup object
- object_name = data_utils.rand_name(name='TestObject')
- data = data_utils.arbitrary_string()
- self.object_client.create_object(self.container_name,
- object_name, data, metadata=metadata)
-
- return object_name, data
-
def _upload_segments(self):
# create object
object_name = data_utils.rand_name(name='LObject')
data = data_utils.arbitrary_string()
segments = 10
- data_segments = [data + str(i) for i in six.moves.xrange(segments)]
+ data_segments = [data + str(i) for i in range(segments)]
# uploading segments
- for i in six.moves.xrange(segments):
+ for i in range(segments):
resp, _ = self.object_client.create_object_segments(
self.container_name, object_name, i, data_segments[i])
@@ -335,7 +324,7 @@
@test.idempotent_id('7a94c25d-66e6-434c-9c38-97d4e2c29945')
def test_update_object_metadata(self):
# update object metadata
- object_name, data = self._create_object()
+ object_name, _ = self.create_object(self.container_name)
metadata = {'X-Object-Meta-test-meta': 'Meta'}
resp, _ = self.object_client.update_object_metadata(
@@ -431,8 +420,8 @@
@test.idempotent_id('0dbbe89c-6811-4d84-a2df-eca2bdd40c0e')
def test_update_object_metadata_with_x_object_metakey(self):
- # update object metadata with a blenk value of metadata
- object_name, data = self._create_object()
+ # update object metadata with a blank value of metadata
+ object_name, _ = self.create_object(self.container_name)
update_metadata = {'X-Object-Meta-test-meta': ''}
resp, _ = self.object_client.update_object_metadata(
@@ -494,7 +483,7 @@
@test.idempotent_id('170fb90e-f5c3-4b1f-ae1b-a18810821172')
def test_list_no_object_metadata(self):
# get empty list of object metadata
- object_name, data = self._create_object()
+ object_name, _ = self.create_object(self.container_name)
resp, _ = self.object_client.list_object_metadata(
self.container_name,
@@ -548,7 +537,7 @@
# retrieve object's data (in response body)
# create object
- object_name, data = self._create_object()
+ object_name, data = self.create_object(self.container_name)
# get object
resp, body = self.object_client.get_object(self.container_name,
object_name)
@@ -701,7 +690,7 @@
@test.idempotent_id('0aa1201c-10aa-467a-bee7-63cbdd463152')
def test_get_object_with_if_unmodified_since(self):
# get object with if_unmodified_since
- object_name, data = self._create_object()
+ object_name, data = self.create_object(self.container_name)
time_now = time.time()
http_date = time.ctime(time_now + 86400)
@@ -716,7 +705,7 @@
@test.idempotent_id('94587078-475f-48f9-a40f-389c246e31cd')
def test_get_object_with_x_newest(self):
# get object with x_newest
- object_name, data = self._create_object()
+ object_name, data = self.create_object(self.container_name)
list_metadata = {'X-Newest': 'true'}
resp, body = self.object_client.get_object(
@@ -757,7 +746,7 @@
# change the content type of an existing object
# create object
- object_name, data = self._create_object()
+ object_name, _ = self.create_object(self.container_name)
# get the old content type
resp_tmp, _ = self.object_client.list_object_metadata(
self.container_name, object_name)
@@ -843,7 +832,8 @@
def test_copy_object_with_x_fresh_metadata(self):
# create source object
metadata = {'x-object-meta-src': 'src_value'}
- src_object_name, data = self._create_object(metadata)
+ src_object_name, data = self.create_object(self.container_name,
+ metadata=metadata)
# copy source object with x_fresh_metadata header
metadata = {'X-Fresh-Metadata': 'true'}
@@ -863,7 +853,8 @@
def test_copy_object_with_x_object_metakey(self):
# create source object
metadata = {'x-object-meta-src': 'src_value'}
- src_obj_name, data = self._create_object(metadata)
+ src_obj_name, data = self.create_object(self.container_name,
+ metadata=metadata)
# copy source object to destination with x-object-meta-key
metadata = {'x-object-meta-test': ''}
@@ -885,7 +876,8 @@
def test_copy_object_with_x_object_meta(self):
# create source object
metadata = {'x-object-meta-src': 'src_value'}
- src_obj_name, data = self._create_object(metadata)
+ src_obj_name, data = self.create_object(self.container_name,
+ metadata=metadata)
# copy source object to destination with object metadata
metadata = {'x-object-meta-test': 'value'}
@@ -909,9 +901,9 @@
object_name = data_utils.rand_name(name='LObject')
data = data_utils.arbitrary_string()
segments = 10
- data_segments = [data + str(i) for i in six.moves.xrange(segments)]
+ data_segments = [data + str(i) for i in range(segments)]
# uploading segments
- for i in six.moves.xrange(segments):
+ for i in range(segments):
resp, _ = self.object_client.create_object_segments(
self.container_name, object_name, i, data_segments[i])
# creating a manifest file
@@ -951,7 +943,7 @@
# Make a conditional request for an object using the If-None-Match
# header, it should get downloaded only if the local file is different,
# otherwise the response code should be 304 Not Modified
- object_name, data = self._create_object()
+ object_name, data = self.create_object(self.container_name)
# local copy is identical, no download
md5 = hashlib.md5(data).hexdigest()
headers = {'If-None-Match': md5}
@@ -962,10 +954,7 @@
# When the file is not downloaded from Swift server, response does
# not contain 'X-Timestamp' header. This is the special case, therefore
# the existence of response headers is checked without custom matcher.
- self.assertIn('content-type', resp)
- self.assertIn('x-trans-id', resp)
self.assertIn('date', resp)
- self.assertIn('accept-ranges', resp)
# Check only the format of common headers with custom matcher
self.assertThat(resp, custom_matchers.AreAllWellFormatted())
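The conditional download above relies on ETag matching; a small standard-library-only sketch of building the request header (illustrative, not the Tempest client API):

import hashlib

def conditional_get_headers(local_copy):
    # Swift's ETag for a non-segmented object is the MD5 hex digest of its
    # body, so sending it in If-None-Match yields 304 when nothing changed.
    return {'If-None-Match': hashlib.md5(local_copy).hexdigest()}

print(conditional_get_headers(b'cached object payload'))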
diff --git a/tempest/api/object_storage/test_object_slo.py b/tempest/api/object_storage/test_object_slo.py
index 867159b..e00bbab 100644
--- a/tempest/api/object_storage/test_object_slo.py
+++ b/tempest/api/object_storage/test_object_slo.py
@@ -30,8 +30,7 @@
def setUp(self):
super(ObjectSloTest, self).setUp()
- self.container_name = data_utils.rand_name(name='TestContainer')
- self.container_client.create_container(self.container_name)
+ self.container_name = self.create_container()
self.objects = []
def tearDown(self):
diff --git a/tempest/api/object_storage/test_object_temp_url.py b/tempest/api/object_storage/test_object_temp_url.py
index 3d28f6e..7287a2d 100644
--- a/tempest/api/object_storage/test_object_temp_url.py
+++ b/tempest/api/object_storage/test_object_temp_url.py
@@ -20,11 +20,8 @@
from tempest.api.object_storage import base
from tempest.common.utils import data_utils
-from tempest import config
from tempest import test
-CONF = config.CONF
-
class ObjectTempUrlTest(base.BaseObjectTest):
@@ -32,9 +29,7 @@
def resource_setup(cls):
super(ObjectTempUrlTest, cls).resource_setup()
# create a container
- cls.container_name = data_utils.rand_name(name='TestContainer')
- cls.container_client.create_container(cls.container_name)
- cls.containers = [cls.container_name]
+ cls.container_name = cls.create_container()
# update account metadata
cls.key = 'Meta'
@@ -44,11 +39,7 @@
cls.account_client.create_account_metadata(metadata=metadata)
# create an object
- cls.object_name = data_utils.rand_name(name='ObjectTemp')
- cls.content = data_utils.arbitrary_string(size=len(cls.object_name),
- base_text=cls.object_name)
- cls.object_client.create_object(cls.container_name,
- cls.object_name, cls.content)
+ cls.object_name, cls.content = cls.create_object(cls.container_name)
@classmethod
def resource_cleanup(cls):
@@ -56,7 +47,7 @@
cls.account_client.delete_account_metadata(
metadata=metadata)
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(ObjectTempUrlTest, cls).resource_cleanup()
diff --git a/tempest/api/object_storage/test_object_temp_url_negative.py b/tempest/api/object_storage/test_object_temp_url_negative.py
index 38fe697..577f3bd 100644
--- a/tempest/api/object_storage/test_object_temp_url_negative.py
+++ b/tempest/api/object_storage/test_object_temp_url_negative.py
@@ -33,9 +33,7 @@
def resource_setup(cls):
super(ObjectTempUrlNegativeTest, cls).resource_setup()
- cls.container_name = data_utils.rand_name(name='TestContainer')
- cls.container_client.create_container(cls.container_name)
- cls.containers = [cls.container_name]
+ cls.container_name = cls.create_container()
# update account metadata
cls.key = 'Meta'
@@ -49,7 +47,7 @@
resp, _ = cls.account_client.delete_account_metadata(
metadata=cls.metadata)
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(ObjectTempUrlNegativeTest, cls).resource_cleanup()
diff --git a/tempest/api/object_storage/test_object_version.py b/tempest/api/object_storage/test_object_version.py
index 24ec3f5..3f6623b 100644
--- a/tempest/api/object_storage/test_object_version.py
+++ b/tempest/api/object_storage/test_object_version.py
@@ -31,7 +31,7 @@
@classmethod
def resource_cleanup(cls):
- cls.delete_containers(cls.containers)
+ cls.delete_containers()
super(ContainerTest, cls).resource_cleanup()
def assertContainer(self, container, count, byte, versioned):
diff --git a/tempest/api/orchestration/stacks/templates/neutron_basic.yaml b/tempest/api/orchestration/stacks/templates/neutron_basic.yaml
index be33c94..ccb1b54 100644
--- a/tempest/api/orchestration/stacks/templates/neutron_basic.yaml
+++ b/tempest/api/orchestration/stacks/templates/neutron_basic.yaml
@@ -58,7 +58,7 @@
#!/bin/sh -v
SIGNAL_DATA='{"Status": "SUCCESS", "Reason": "SmokeServerNeutron created", "Data": "Application has completed configuration.", "UniqueId": "00000"}'
- while ! curl --fail -X PUT -H 'Content-Type:' --data-binary "$SIGNAL_DATA" \
+ while ! curl --insecure --fail -X PUT -H 'Content-Type:' --data-binary "$SIGNAL_DATA" \
'wait_handle' ; do sleep 3; done
params:
wait_handle: {get_resource: WaitHandleNeutron}
diff --git a/tempest/api/orchestration/stacks/test_environment.py b/tempest/api/orchestration/stacks/test_environment.py
index 9d2b425..f2ffbd7 100644
--- a/tempest/api/orchestration/stacks/test_environment.py
+++ b/tempest/api/orchestration/stacks/test_environment.py
@@ -12,13 +12,9 @@
from tempest.api.orchestration import base
from tempest.common.utils import data_utils
-from tempest import config
from tempest import test
-CONF = config.CONF
-
-
class StackEnvironmentTest(base.BaseOrchestrationTest):
@test.idempotent_id('37d4346b-1abd-4442-b7b1-2a4e5749a1e3')
diff --git a/tempest/api/orchestration/stacks/test_nova_keypair_resources.py b/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
index 0400e76..160bf6f 100644
--- a/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
+++ b/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
@@ -72,16 +72,16 @@
for outputs in stack['outputs']:
output_map[outputs['output_key']] = outputs['output_value']
# Test that first key generated public and private keys
- self.assertTrue('KeyPair_PublicKey' in output_map)
- self.assertTrue("Generated" in output_map['KeyPair_PublicKey'])
- self.assertTrue('KeyPair_PrivateKey' in output_map)
- self.assertTrue('-----BEGIN' in output_map['KeyPair_PrivateKey'])
+ self.assertIn('KeyPair_PublicKey', output_map)
+ self.assertIn("Generated", output_map['KeyPair_PublicKey'])
+ self.assertIn('KeyPair_PrivateKey', output_map)
+ self.assertIn('-----BEGIN', output_map['KeyPair_PrivateKey'])
# Test that second key generated public key, and private key is not
# in the output due to save_private_key = false
- self.assertTrue('KeyPairDontSavePrivate_PublicKey' in output_map)
- self.assertTrue('Generated' in
- output_map['KeyPairDontSavePrivate_PublicKey'])
- self.assertTrue(u'KeyPairDontSavePrivate_PrivateKey' in output_map)
+ self.assertIn('KeyPairDontSavePrivate_PublicKey', output_map)
+ self.assertIn('Generated',
+ output_map['KeyPairDontSavePrivate_PublicKey'])
+ self.assertIn(u'KeyPairDontSavePrivate_PrivateKey', output_map)
private_key = output_map['KeyPairDontSavePrivate_PrivateKey']
self.assertTrue(len(private_key) == 0)
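The assertTrue('x' in y) to assertIn('x', y) conversions above improve failure diagnostics; a quick illustration with plain unittest (illustrative only):

import unittest

class Demo(unittest.TestCase):
    def test_membership(self):
        output_map = {'KeyPair_PublicKey': 'Generated by the stack'}
        # On failure assertIn reports the missing key and the mapping,
        # while assertTrue(... in ...) only reports "False is not true".
        self.assertIn('KeyPair_PublicKey', output_map)

if __name__ == '__main__':
    unittest.main()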
diff --git a/tempest/api/orchestration/stacks/test_soft_conf.py b/tempest/api/orchestration/stacks/test_soft_conf.py
index aa0b46a..b660f6e 100644
--- a/tempest/api/orchestration/stacks/test_soft_conf.py
+++ b/tempest/api/orchestration/stacks/test_soft_conf.py
@@ -12,12 +12,9 @@
from tempest.api.orchestration import base
from tempest.common.utils import data_utils
-from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
-CONF = config.CONF
-
class TestSoftwareConfig(base.BaseOrchestrationTest):
diff --git a/tempest/api/volume/admin/test_backends_capabilities.py b/tempest/api/volume/admin/test_backends_capabilities.py
new file mode 100644
index 0000000..8a21853
--- /dev/null
+++ b/tempest/api/volume/admin/test_backends_capabilities.py
@@ -0,0 +1,79 @@
+# Copyright 2016 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import operator
+
+from tempest.api.volume import base
+from tempest import test
+
+
+class BackendsCapabilitiesAdminV2TestsJSON(base.BaseVolumeAdminTest):
+
+ CAPABILITIES = ('namespace',
+ 'vendor_name',
+ 'volume_backend_name',
+ 'pool_name',
+ 'driver_version',
+ 'storage_protocol',
+ 'display_name',
+ 'description',
+ 'visibility',
+ 'properties')
+
+ @classmethod
+ def resource_setup(cls):
+ super(BackendsCapabilitiesAdminV2TestsJSON, cls).resource_setup()
+ # Get the host list, in the format host@backend-name
+ cls.hosts = [
+ pool['name'] for pool in
+ cls.admin_volume_client.show_pools()['pools']
+ ]
+
+ @test.idempotent_id('3750af44-5ea2-4cd4-bc3e-56e7e6caf854')
+ def test_get_capabilities_backend(self):
+ # Test backend properties
+ backend = self.admin_volume_client.show_backend_capabilities(
+ self.hosts[0])
+
+ # Verify getting capabilities parameters from a backend
+ for key in self.CAPABILITIES:
+ self.assertIn(key, backend)
+
+ @test.idempotent_id('a9035743-d46a-47c5-9cb7-3c80ea16dea0')
+ def test_compare_volume_stats_values(self):
+ # Compare the values reported by show_backend_capabilities
+ # with the ones reported by show_pools
+ VOLUME_STATS = ('vendor_name',
+ 'volume_backend_name',
+ 'storage_protocol')
+
+ # Get list backend capabilities using show_pools
+ cinder_pools = [
+ pool['capabilities'] for pool in
+ self.admin_volume_client.show_pools(detail=True)['pools']
+ ]
+
+ # Get list backends capabilities using show_backend_capabilities
+ capabilities = [
+ self.admin_volume_client.show_backend_capabilities(
+ host=host) for host in self.hosts
+ ]
+
+ # Returns a tuple of VOLUME_STATS values (wrapped in list() for Python 3)
+ expected_list = list(map(operator.itemgetter(*VOLUME_STATS),
+ cinder_pools))
+ observed_list = list(map(operator.itemgetter(*VOLUME_STATS),
+ capabilities))
+ self.assertEqual(expected_list, observed_list)
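Wrapping map() in list() matters because under Python 3 map() returns a lazy iterator, and two distinct iterators never compare equal even when they yield the same items; a quick illustration:

a = map(str, [1, 2, 3])
b = map(str, [1, 2, 3])
print(a == b)                # False on Python 3: different iterator objects
print(list(a) == list(b))    # True once both sides are materialized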
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index 5615cf3..120dbb1 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -53,30 +53,30 @@
cls._create_type_and_volume(backend_name, True)
@classmethod
- def _create_type_and_volume(self, backend_name_key, with_prefix):
+ def _create_type_and_volume(cls, backend_name_key, with_prefix):
# Volume/Type creation
- type_name = data_utils.rand_name('Type')
- vol_name = data_utils.rand_name('Volume')
+ type_name = data_utils.rand_name(cls.__name__ + '-Type')
+ vol_name = data_utils.rand_name(cls.__name__ + '-Volume')
spec_key_with_prefix = "capabilities:volume_backend_name"
spec_key_without_prefix = "volume_backend_name"
if with_prefix:
extra_specs = {spec_key_with_prefix: backend_name_key}
else:
extra_specs = {spec_key_without_prefix: backend_name_key}
- self.type = self.create_volume_type(name=type_name,
- extra_specs=extra_specs)
+ cls.type = cls.create_volume_type(name=type_name,
+ extra_specs=extra_specs)
- params = {self.name_field: vol_name, 'volume_type': type_name}
-
- self.volume = self.admin_volume_client.create_volume(
+ params = {cls.name_field: vol_name, 'volume_type': type_name,
+ 'size': CONF.volume.volume_size}
+ cls.volume = cls.admin_volume_client.create_volume(
**params)['volume']
if with_prefix:
- self.volume_id_list_with_prefix.append(self.volume['id'])
+ cls.volume_id_list_with_prefix.append(cls.volume['id'])
else:
- self.volume_id_list_without_prefix.append(
- self.volume['id'])
- waiters.wait_for_volume_status(self.admin_volume_client,
- self.volume['id'], 'available')
+ cls.volume_id_list_without_prefix.append(
+ cls.volume['id'])
+ waiters.wait_for_volume_status(cls.admin_volume_client,
+ cls.volume['id'], 'available')
@classmethod
def resource_cleanup(cls):
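The self to cls rename above is cosmetic but clarifying: a @classmethod always receives the class object as its first argument, whatever the parameter is called. A tiny illustration (plain Python, not the test code):

class Example(object):
    volume_id_list = []

    @classmethod
    def register(cls, volume_id):
        # 'cls' is the class itself, so the list accumulates on the class,
        # much like the volume_id_list_* attributes used above.
        cls.volume_id_list.append(volume_id)
        return cls.volume_id_list

print(Example.register('vol-1'))  # ['vol-1']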
diff --git a/tempest/api/volume/admin/test_qos.py b/tempest/api/volume/admin/test_qos.py
index 9402668..9275d2b 100644
--- a/tempest/api/volume/admin/test_qos.py
+++ b/tempest/api/volume/admin/test_qos.py
@@ -14,6 +14,7 @@
from tempest.api.volume import base
from tempest.common.utils import data_utils as utils
+from tempest.common import waiters
from tempest import test
@@ -37,7 +38,7 @@
read_iops_sec='2000')
def _create_delete_test_qos_with_given_consumer(self, consumer):
- name = utils.rand_name('qos')
+ name = utils.rand_name(self.__class__.__name__ + '-qos')
qos = {'name': name, 'consumer': consumer}
body = self.create_test_qos_specs(name, consumer)
for key in ['name', 'consumer']:
@@ -54,16 +55,6 @@
self.admin_volume_qos_client.associate_qos(
self.created_qos['id'], vol_type_id)
- def _test_get_association_qos(self):
- body = self.admin_volume_qos_client.show_association_qos(
- self.created_qos['id'])['qos_associations']
-
- associations = []
- for association in body:
- associations.append(association['id'])
-
- return associations
-
@test.idempotent_id('7e15f883-4bef-49a9-95eb-f94209a1ced1')
def test_create_delete_qos_with_front_end_consumer(self):
"""Tests the creation and deletion of QoS specs
@@ -119,8 +110,9 @@
self.admin_volume_qos_client.unset_qos_key(self.created_qos['id'],
keys)
operation = 'qos-key-unset'
- self.admin_volume_qos_client.wait_for_qos_operations(
- self.created_qos['id'], operation, keys)
+ waiters.wait_for_qos_operations(self.admin_volume_qos_client,
+ self.created_qos['id'],
+ operation, keys)
body = self.admin_volume_qos_client.show_qos(
self.created_qos['id'])['qos_specs']
self.assertNotIn(keys[0], body['specs'])
@@ -145,8 +137,9 @@
self._test_associate_qos(vol_type[i]['id'])
# get the association of the qos-specs
- associations = self._test_get_association_qos()
-
+ body = self.admin_volume_qos_client.show_association_qos(
+ self.created_qos['id'])['qos_associations']
+ associations = [association['id'] for association in body]
for i in range(0, 3):
self.assertIn(vol_type[i]['id'], associations)
@@ -154,19 +147,16 @@
self.admin_volume_qos_client.disassociate_qos(
self.created_qos['id'], vol_type[0]['id'])
operation = 'disassociate'
- self.admin_volume_qos_client.wait_for_qos_operations(
- self.created_qos['id'], operation, vol_type[0]['id'])
- associations = self._test_get_association_qos()
- self.assertNotIn(vol_type[0]['id'], associations)
+ waiters.wait_for_qos_operations(self.admin_volume_qos_client,
+ self.created_qos['id'], operation,
+ vol_type[0]['id'])
# disassociate all volume-types from qos-specs
self.admin_volume_qos_client.disassociate_all_qos(
self.created_qos['id'])
operation = 'disassociate-all'
- self.admin_volume_qos_client.wait_for_qos_operations(
- self.created_qos['id'], operation)
- associations = self._test_get_association_qos()
- self.assertEmpty(associations)
+ waiters.wait_for_qos_operations(self.admin_volume_qos_client,
+ self.created_qos['id'], operation)
class QosSpecsV1TestJSON(QosSpecsV2TestJSON):
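The change above moves QoS polling out of the client and into tempest.common.waiters; the underlying pattern is a poll-until-condition loop. A generic, hedged sketch of that pattern (the real wait_for_qos_operations has its own signature and error handling):

import time

def wait_for(condition, timeout=60, interval=1):
    # Poll 'condition' (a callable returning True/False) until it holds
    # or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise RuntimeError('condition not met within %s seconds' % timeout)

wait_for(lambda: True)  # trivially satisfied example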
diff --git a/tempest/api/volume/admin/test_snapshots_actions.py b/tempest/api/volume/admin/test_snapshots_actions.py
index a17cc69..5af83b3 100644
--- a/tempest/api/volume/admin/test_snapshots_actions.py
+++ b/tempest/api/volume/admin/test_snapshots_actions.py
@@ -14,8 +14,6 @@
# under the License.
from tempest.api.volume import base
-from tempest.common.utils import data_utils
-from tempest.common import waiters
from tempest import config
from tempest import test
@@ -39,31 +37,10 @@
super(SnapshotsActionsV2Test, cls).resource_setup()
# Create a test shared volume for tests
- vol_name = data_utils.rand_name(cls.__name__ + '-Volume')
- cls.name_field = cls.special_fields['name_field']
- params = {cls.name_field: vol_name}
- cls.volume = cls.volumes_client.create_volume(**params)['volume']
- waiters.wait_for_volume_status(cls.volumes_client,
- cls.volume['id'], 'available')
+ cls.volume = cls.create_volume()
# Create a test shared snapshot for tests
- snap_name = data_utils.rand_name(cls.__name__ + '-Snapshot')
- params = {cls.name_field: snap_name}
- cls.snapshot = cls.client.create_snapshot(
- volume_id=cls.volume['id'], **params)['snapshot']
- waiters.wait_for_snapshot_status(cls.client,
- cls.snapshot['id'], 'available')
-
- @classmethod
- def resource_cleanup(cls):
- # Delete the test snapshot
- cls.client.delete_snapshot(cls.snapshot['id'])
- cls.client.wait_for_resource_deletion(cls.snapshot['id'])
-
- # Delete the test volume
- cls.delete_volume(cls.volumes_client, cls.volume['id'])
-
- super(SnapshotsActionsV2Test, cls).resource_cleanup()
+ cls.snapshot = cls.create_snapshot(volume_id=cls.volume['id'])
def tearDown(self):
# Set snapshot's status to available after test
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index ba17d9c..b47a5f0 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -18,7 +18,7 @@
from tempest.common import waiters
from tempest import test
-QUOTA_KEYS = ['gigabytes', 'snapshots', 'volumes']
+QUOTA_KEYS = ['gigabytes', 'snapshots', 'volumes', 'backups']
QUOTA_USAGE_KEYS = ['reserved', 'limit', 'in_use']
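An illustrative shape (values invented) of the usage-style quota_set payload that the assertions below iterate over, combining QUOTA_KEYS with QUOTA_USAGE_KEYS:

quota_set = {
    'gigabytes': {'limit': 1009, 'in_use': 1, 'reserved': 0},
    'volumes':   {'limit': 11,   'in_use': 1, 'reserved': 0},
    'snapshots': {'limit': 11,   'in_use': 0, 'reserved': 0},
    'backups':   {'limit': 11,   'in_use': 0, 'reserved': 0},
}
for key in ('gigabytes', 'snapshots', 'volumes', 'backups'):
    for usage_key in ('reserved', 'limit', 'in_use'):
        assert usage_key in quota_set[key]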
@@ -54,7 +54,8 @@
self.demo_tenant_id)['quota_set']
new_quota_set = {'gigabytes': 1009,
'volumes': 11,
- 'snapshots': 11}
+ 'snapshots': 11,
+ 'backups': 11}
# Update limits for all quota resources
quota_set = self.admin_quotas_client.update_quota_set(
@@ -73,8 +74,9 @@
@test.idempotent_id('18c51ae9-cb03-48fc-b234-14a19374dbed')
def test_show_quota_usage(self):
- quota_usage = self.admin_quotas_client.show_quota_usage(
- self.os_adm.credentials.tenant_id)['quota_set']
+ quota_usage = self.admin_quotas_client.show_quota_set(
+ self.os_adm.credentials.tenant_id,
+ params={'usage': True})['quota_set']
for key in QUOTA_KEYS:
self.assertIn(key, quota_usage)
for usage_key in QUOTA_USAGE_KEYS:
@@ -82,15 +84,15 @@
@test.idempotent_id('ae8b6091-48ad-4bfa-a188-bbf5cc02115f')
def test_quota_usage(self):
- quota_usage = self.admin_quotas_client.show_quota_usage(
- self.demo_tenant_id)['quota_set']
+ quota_usage = self.admin_quotas_client.show_quota_set(
+ self.demo_tenant_id, params={'usage': True})['quota_set']
volume = self.create_volume()
self.addCleanup(self.delete_volume,
self.admin_volume_client, volume['id'])
- new_quota_usage = self.admin_quotas_client.show_quota_usage(
- self.demo_tenant_id)['quota_set']
+ new_quota_usage = self.admin_quotas_client.show_quota_set(
+ self.demo_tenant_id, params={'usage': True})['quota_set']
self.assertEqual(quota_usage['volumes']['in_use'] + 1,
new_quota_usage['volumes']['in_use'])
@@ -128,11 +130,11 @@
self.admin_volume_client, volume['id'])
# List of tenants quota usage pre-transfer
- primary_quota = self.admin_quotas_client.show_quota_usage(
- self.demo_tenant_id)['quota_set']
+ primary_quota = self.admin_quotas_client.show_quota_set(
+ self.demo_tenant_id, params={'usage': True})['quota_set']
- alt_quota = self.admin_quotas_client.show_quota_usage(
- self.alt_client.tenant_id)['quota_set']
+ alt_quota = self.admin_quotas_client.show_quota_set(
+ self.alt_client.tenant_id, params={'usage': True})['quota_set']
# Creates a volume transfer
transfer = self.volumes_client.create_volume_transfer(
@@ -149,11 +151,11 @@
self.alt_client, volume['id'], 'available')
# List of tenants quota usage post transfer
- new_primary_quota = self.admin_quotas_client.show_quota_usage(
- self.demo_tenant_id)['quota_set']
+ new_primary_quota = self.admin_quotas_client.show_quota_set(
+ self.demo_tenant_id, params={'usage': True})['quota_set']
- new_alt_quota = self.admin_quotas_client.show_quota_usage(
- self.alt_client.tenant_id)['quota_set']
+ new_alt_quota = self.admin_quotas_client.show_quota_set(
+ self.alt_client.tenant_id, params={'usage': True})['quota_set']
# Verify tenants quota usage was updated
self.assertEqual(primary_quota['volumes']['in_use'] -
diff --git a/tempest/api/volume/admin/test_volume_quotas_negative.py b/tempest/api/volume/admin/test_volume_quotas_negative.py
index dde8915..c19b1c4 100644
--- a/tempest/api/volume/admin/test_volume_quotas_negative.py
+++ b/tempest/api/volume/admin/test_volume_quotas_negative.py
@@ -32,8 +32,7 @@
@classmethod
def resource_setup(cls):
super(BaseVolumeQuotasNegativeV2TestJSON, cls).resource_setup()
- cls.default_volume_size = cls.volumes_client.default_volume_size
- cls.shared_quota_set = {'gigabytes': 2 * cls.default_volume_size,
+ cls.shared_quota_set = {'gigabytes': 2 * CONF.volume.volume_size,
'volumes': 1}
# NOTE(gfidente): no need to restore original quota set
@@ -50,7 +49,8 @@
@test.idempotent_id('bf544854-d62a-47f2-a681-90f7a47d86b6')
def test_quota_volumes(self):
self.assertRaises(lib_exc.OverLimit,
- self.volumes_client.create_volume)
+ self.volumes_client.create_volume,
+ size=CONF.volume.volume_size)
@test.attr(type='negative')
@test.idempotent_id('2dc27eee-8659-4298-b900-169d71a91374')
@@ -61,13 +61,14 @@
self.addCleanup(self.admin_quotas_client.update_quota_set,
self.demo_tenant_id,
**self.shared_quota_set)
- new_quota_set = {'gigabytes': self.default_volume_size,
+ new_quota_set = {'gigabytes': CONF.volume.volume_size,
'volumes': 2, 'snapshots': 1}
self.admin_quotas_client.update_quota_set(
self.demo_tenant_id,
**new_quota_set)
self.assertRaises(lib_exc.OverLimit,
- self.volumes_client.create_volume)
+ self.volumes_client.create_volume,
+ size=CONF.volume.volume_size)
class VolumeQuotasNegativeV1TestJSON(BaseVolumeQuotasNegativeV2TestJSON):
diff --git a/tempest/api/volume/admin/test_volume_services.py b/tempest/api/volume/admin/test_volume_services.py
index 755365d..165874b 100644
--- a/tempest/api/volume/admin/test_volume_services.py
+++ b/tempest/api/volume/admin/test_volume_services.py
@@ -79,7 +79,7 @@
services = (self.admin_volume_services_client.list_services(
host=self.host_name, binary=self.binary_name))['services']
- self.assertEqual(1, len(services))
+ self.assertNotEqual(0, len(services))
self.assertEqual(self.host_name, _get_host(services[0]['host']))
self.assertEqual(self.binary_name, services[0]['binary'])
diff --git a/tempest/api/volume/admin/test_volume_snapshot_quotas_negative.py b/tempest/api/volume/admin/test_volume_snapshot_quotas_negative.py
index 1565a8c..09af7fe 100644
--- a/tempest/api/volume/admin/test_volume_snapshot_quotas_negative.py
+++ b/tempest/api/volume/admin/test_volume_snapshot_quotas_negative.py
@@ -38,7 +38,7 @@
@classmethod
def resource_setup(cls):
super(VolumeSnapshotQuotasNegativeV2TestJSON, cls).resource_setup()
- cls.default_volume_size = cls.volumes_client.default_volume_size
+ cls.default_volume_size = CONF.volume.volume_size
cls.shared_quota_set = {'gigabytes': 3 * cls.default_volume_size,
'volumes': 1, 'snapshots': 1}
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
index 27f6ccb..99f0a6b 100644
--- a/tempest/api/volume/admin/test_volume_types.py
+++ b/tempest/api/volume/admin/test_volume_types.py
@@ -35,7 +35,7 @@
def test_volume_crud_with_volume_type_and_extra_specs(self):
# Create/update/get/delete volume with volume_type and extra spec.
volume_types = list()
- vol_name = data_utils.rand_name("volume")
+ vol_name = data_utils.rand_name(self.__class__.__name__ + '-volume')
self.name_field = self.special_fields['name_field']
proto = CONF.volume.storage_protocol
vendor = CONF.volume.vendor_name
@@ -43,13 +43,12 @@
"vendor_name": vendor}
# Create two volume_types
for i in range(2):
- vol_type_name = data_utils.rand_name("volume-type")
vol_type = self.create_volume_type(
- name=vol_type_name,
extra_specs=extra_specs)
volume_types.append(vol_type)
params = {self.name_field: vol_name,
- 'volume_type': volume_types[0]['id']}
+ 'volume_type': volume_types[0]['id'],
+ 'size': CONF.volume.volume_size}
# Create volume
volume = self.volumes_client.create_volume(**params)['volume']
@@ -87,19 +86,22 @@
def test_volume_type_create_get_delete(self):
# Create/get volume type.
body = {}
- name = data_utils.rand_name("volume-type")
+ name = data_utils.rand_name(self.__class__.__name__ + '-volume-type')
+ description = data_utils.rand_name("volume-type-description")
proto = CONF.volume.storage_protocol
vendor = CONF.volume.vendor_name
extra_specs = {"storage_protocol": proto,
"vendor_name": vendor}
- body = self.create_volume_type(
- name=name,
- extra_specs=extra_specs)
+ body = self.create_volume_type(description=description, name=name,
+ extra_specs=extra_specs)
self.assertIn('id', body)
self.assertIn('name', body)
- self.assertEqual(body['name'], name,
+ self.assertEqual(name, body['name'],
"The created volume_type name is not equal "
"to the requested name")
+ self.assertEqual(description, body['description'],
+ "The created volume_type description is not "
+ "equal to the requested description")
self.assertTrue(body['id'] is not None,
"Field volume_type id is empty or not found.")
fetched_volume_type = self.admin_volume_types_client.show_volume_type(
@@ -119,11 +121,10 @@
# Create/get/delete encryption type.
provider = "LuksEncryptor"
control_location = "front-end"
- name = data_utils.rand_name("volume-type")
- body = self.create_volume_type(name=name)
+ body = self.create_volume_type()
# Create encryption type
encryption_type = \
- self.admin_volume_types_client.create_encryption_type(
+ self.admin_encryption_types_client.create_encryption_type(
body['id'], provider=provider,
control_location=control_location)['encryption']
self.assertIn('volume_type_id', encryption_type)
@@ -136,7 +137,7 @@
# Get encryption type
fetched_encryption_type = (
- self.admin_volume_types_client.show_encryption_type(
+ self.admin_encryption_types_client.show_encryption_type(
encryption_type['volume_type_id']))
self.assertEqual(provider,
fetched_encryption_type['provider'],
@@ -148,16 +149,35 @@
'different from the created encryption_type')
# Delete encryption type
- self.admin_volume_types_client.delete_encryption_type(
- encryption_type['volume_type_id'])
- resource = {"id": encryption_type['volume_type_id'],
- "type": "encryption-type"}
- self.admin_volume_types_client.wait_for_resource_deletion(resource)
+ type_id = encryption_type['volume_type_id']
+ self.admin_encryption_types_client.delete_encryption_type(type_id)
+ self.admin_encryption_types_client.wait_for_resource_deletion(type_id)
deleted_encryption_type = (
- self.admin_volume_types_client.show_encryption_type(
- encryption_type['volume_type_id']))
+ self.admin_encryption_types_client.show_encryption_type(type_id))
self.assertEmpty(deleted_encryption_type)
+ @test.idempotent_id('cf9f07c6-db9e-4462-a243-5933ad65e9c8')
+ def test_volume_type_update(self):
+ # Create volume type
+ volume_type = self.create_volume_type()
+
+ # New volume type details
+ name = data_utils.rand_name("volume-type")
+ description = data_utils.rand_name("volume-type-description")
+ is_public = not volume_type['is_public']
+
+ # Update volume type details
+ kwargs = {'name': name,
+ 'description': description,
+ 'is_public': is_public}
+ updated_vol_type = self.admin_volume_types_client.update_volume_type(
+ volume_type['id'], **kwargs)['volume_type']
+
+ # Verify volume type details were updated
+ self.assertEqual(name, updated_vol_type['name'])
+ self.assertEqual(description, updated_vol_type['description'])
+ self.assertEqual(is_public, updated_vol_type['is_public'])
+
class VolumeTypesV1Test(VolumeTypesV2Test):
_api_version = 1
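
The hunks above move encryption-type operations from the volume-types client
to a dedicated encryption-types client (wired up in tempest/api/volume/base.py
later in this diff). A minimal sketch of the resulting call flow, using only
names that appear in this patch::

    # Sketch only: create, fetch and delete an encryption type through
    # the dedicated admin client referenced by this patch.
    volume_type = self.create_volume_type()
    encryption = self.admin_encryption_types_client.create_encryption_type(
        volume_type['id'], provider="LuksEncryptor",
        control_location="front-end")['encryption']
    self.admin_encryption_types_client.show_encryption_type(
        encryption['volume_type_id'])
    self.admin_encryption_types_client.delete_encryption_type(
        encryption['volume_type_id'])
    self.admin_encryption_types_client.wait_for_resource_deletion(
        encryption['volume_type_id'])
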
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs.py b/tempest/api/volume/admin/test_volume_types_extra_specs.py
index 9e49b94..fdff2df 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs.py
@@ -14,7 +14,7 @@
# under the License.
from tempest.api.volume import base
-from tempest.common.utils import data_utils
+from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -23,8 +23,7 @@
@classmethod
def resource_setup(cls):
super(VolumeTypesExtraSpecsV2Test, cls).resource_setup()
- vol_type_name = data_utils.rand_name('Volume-type')
- cls.volume_type = cls.create_volume_type(name=vol_type_name)
+ cls.volume_type = cls.create_volume_type()
@test.idempotent_id('b42923e9-0452-4945-be5b-d362ae533e60')
def test_volume_type_extra_specs_list(self):
@@ -66,13 +65,18 @@
self.assertEqual(extra_specs, body,
"Volume type extra spec incorrectly created")
- self.admin_volume_types_client.show_volume_type_extra_specs(
+ body = self.admin_volume_types_client.show_volume_type_extra_specs(
self.volume_type['id'],
spec_key)
self.assertEqual(extra_specs, body,
"Volume type extra spec incorrectly fetched")
+
self.admin_volume_types_client.delete_volume_type_extra_specs(
self.volume_type['id'], spec_key)
+ self.assertRaises(
+ lib_exc.NotFound,
+ self.admin_volume_types_client.show_volume_type_extra_specs,
+ self.volume_type['id'], spec_key)
class VolumeTypesExtraSpecsV1Test(VolumeTypesExtraSpecsV2Test):
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
index 2193aa6..8040322 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
@@ -24,10 +24,8 @@
@classmethod
def resource_setup(cls):
super(ExtraSpecsNegativeV2Test, cls).resource_setup()
- vol_type_name = data_utils.rand_name('Volume-type')
cls.extra_specs = {"spec1": "val1"}
- cls.volume_type = cls.create_volume_type(name=vol_type_name,
- extra_specs=cls.extra_specs)
+ cls.volume_type = cls.create_volume_type(extra_specs=cls.extra_specs)
@test.idempotent_id('08961d20-5cbb-4910-ac0f-89ad6dbb2da1')
def test_update_no_body(self):
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index 5388f7f..e7a3f62 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -14,10 +14,11 @@
# under the License.
from tempest.api.volume import base
-from tempest.common.utils import data_utils as utils
-from tempest.common import waiters
+from tempest import config
from tempest import test
+CONF = config.CONF
+
class VolumesActionsV2Test(base.BaseVolumeAdminTest):
@@ -31,54 +32,28 @@
super(VolumesActionsV2Test, cls).resource_setup()
# Create a test shared volume for tests
- vol_name = utils.rand_name(cls.__name__ + '-Volume')
- cls.name_field = cls.special_fields['name_field']
- params = {cls.name_field: vol_name}
-
- cls.volume = cls.client.create_volume(**params)['volume']
- waiters.wait_for_volume_status(cls.client,
- cls.volume['id'], 'available')
-
- @classmethod
- def resource_cleanup(cls):
- # Delete the test volume
- cls.delete_volume(cls.client, cls.volume['id'])
-
- super(VolumesActionsV2Test, cls).resource_cleanup()
-
- def _reset_volume_status(self, volume_id, status):
- # Reset the volume status
- body = self.admin_volume_client.reset_volume_status(volume_id,
- status=status)
- return body
+ cls.volume = cls.create_volume()
def tearDown(self):
# Set volume's status to available after test
- self._reset_volume_status(self.volume['id'], status='available')
+ self.admin_volume_client.reset_volume_status(
+ self.volume['id'], status='available')
super(VolumesActionsV2Test, self).tearDown()
- def _create_temp_volume(self):
- # Create a temp volume for force delete tests
- vol_name = utils.rand_name('Volume')
- params = {self.name_field: vol_name}
- temp_volume = self.client.create_volume(**params)['volume']
- waiters.wait_for_volume_status(self.client,
- temp_volume['id'], 'available')
-
- return temp_volume
-
def _create_reset_and_force_delete_temp_volume(self, status=None):
# Create volume, reset volume status, and force delete temp volume
- temp_volume = self._create_temp_volume()
+ temp_volume = self.create_volume()
if status:
- self._reset_volume_status(temp_volume['id'], status)
+ self.admin_volume_client.reset_volume_status(
+ temp_volume['id'], status=status)
self.admin_volume_client.force_delete_volume(temp_volume['id'])
self.client.wait_for_resource_deletion(temp_volume['id'])
@test.idempotent_id('d063f96e-a2e0-4f34-8b8a-395c42de1845')
def test_volume_reset_status(self):
# test volume reset status : available->error->available
- self._reset_volume_status(self.volume['id'], 'error')
+ self.admin_volume_client.reset_volume_status(
+ self.volume['id'], status='error')
volume_get = self.admin_volume_client.show_volume(
self.volume['id'])['volume']
self.assertEqual('error', volume_get['status'])
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index b6dc488..73f1f8f 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -13,9 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-import base64
-import six
-
+from oslo_serialization import base64
from oslo_serialization import jsonutils as json
from tempest.api.volume import base
@@ -43,60 +41,20 @@
def _delete_backup(self, backup_id):
self.admin_backups_client.delete_backup(backup_id)
- self.admin_backups_client.wait_for_backup_deletion(backup_id)
+ self.admin_backups_client.wait_for_resource_deletion(backup_id)
def _decode_url(self, backup_url):
- return json.loads(base64.decodestring(backup_url))
+ return json.loads(base64.decode_as_text(backup_url))
def _encode_backup(self, backup):
retval = json.dumps(backup)
- if six.PY3:
- retval = retval.encode('utf-8')
- return base64.encodestring(retval)
+ return base64.encode_as_text(retval)
def _modify_backup_url(self, backup_url, changes):
backup = self._decode_url(backup_url)
backup.update(changes)
return self._encode_backup(backup)
- @test.idempotent_id('a66eb488-8ee1-47d4-8e9f-575a095728c6')
- def test_volume_backup_create_get_detailed_list_restore_delete(self):
- # Create backup
- backup_name = data_utils.rand_name('Backup')
- create_backup = self.admin_backups_client.create_backup
- backup = create_backup(volume_id=self.volume['id'],
- name=backup_name)['backup']
- self.addCleanup(self.admin_backups_client.delete_backup,
- backup['id'])
- self.assertEqual(backup_name, backup['name'])
- waiters.wait_for_volume_status(self.admin_volume_client,
- self.volume['id'], 'available')
- self.admin_backups_client.wait_for_backup_status(backup['id'],
- 'available')
-
- # Get a given backup
- backup = self.admin_backups_client.show_backup(backup['id'])['backup']
- self.assertEqual(backup_name, backup['name'])
-
- # Get all backups with detail
- backups = self.admin_backups_client.list_backups(
- detail=True)['backups']
- self.assertIn((backup['name'], backup['id']),
- [(m['name'], m['id']) for m in backups])
-
- # Restore backup
- restore = self.admin_backups_client.restore_backup(
- backup['id'])['restore']
-
- # Delete backup
- self.addCleanup(self.admin_volume_client.delete_volume,
- restore['volume_id'])
- self.assertEqual(backup['id'], restore['backup_id'])
- self.admin_backups_client.wait_for_backup_status(backup['id'],
- 'available')
- waiters.wait_for_volume_status(self.admin_volume_client,
- restore['volume_id'], 'available')
-
@test.idempotent_id('a99c54a1-dd80-4724-8a13-13bf58d4068d')
def test_volume_backup_export_import(self):
"""Test backup export import functionality.
@@ -105,13 +63,13 @@
be imported back in case of a DB loss.
"""
# Create backup
- backup_name = data_utils.rand_name('Backup')
+ backup_name = data_utils.rand_name(self.__class__.__name__ + '-Backup')
backup = (self.admin_backups_client.create_backup(
volume_id=self.volume['id'], name=backup_name)['backup'])
self.addCleanup(self._delete_backup, backup['id'])
self.assertEqual(backup_name, backup['name'])
- self.admin_backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ backup['id'], 'available')
# Export Backup
export_backup = (self.admin_backups_client.export_backup(backup['id'])
@@ -143,8 +101,8 @@
self.addCleanup(self._delete_backup, new_id)
self.assertIn("id", import_backup)
self.assertEqual(new_id, import_backup['id'])
- self.admin_backups_client.wait_for_backup_status(import_backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ import_backup['id'], 'available')
# Verify Import Backup
backups = self.admin_backups_client.list_backups(
@@ -163,8 +121,26 @@
# Verify if restored volume is there in volume list
volumes = self.admin_volume_client.list_volumes()['volumes']
self.assertIn(restore['volume_id'], [v['id'] for v in volumes])
- self.admin_backups_client.wait_for_backup_status(import_backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ import_backup['id'], 'available')
+
+ @test.idempotent_id('47a35425-a891-4e13-961c-c45deea21e94')
+ def test_volume_backup_reset_status(self):
+ # Create a backup
+ backup_name = data_utils.rand_name(
+ self.__class__.__name__ + '-Backup')
+ backup = self.admin_backups_client.create_backup(
+ volume_id=self.volume['id'], name=backup_name)['backup']
+ self.addCleanup(self.admin_backups_client.delete_backup,
+ backup['id'])
+ self.assertEqual(backup_name, backup['name'])
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ backup['id'], 'available')
+ # Reset backup status to error
+ self.admin_backups_client.reset_backup_status(backup_id=backup['id'],
+ status="error")
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ backup['id'], 'error')
class VolumesBackupsAdminV1Test(VolumesBackupsAdminV2Test):
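
The import change above swaps the stdlib base64/six pair for
oslo_serialization's text-safe helpers, so the backup-record round trip
behaves the same on Python 2 and 3. A rough sketch of what
_encode_backup/_decode_url now rely on (the dict values are illustrative)::

    from oslo_serialization import base64
    from oslo_serialization import jsonutils as json

    backup = {'id': 'fake-backup-id', 'volume_id': 'fake-volume-id'}
    backup_url = base64.encode_as_text(json.dumps(backup))
    assert json.loads(base64.decode_as_text(backup_url)) == backup
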
diff --git a/tempest/api/volume/v3/admin/__init__.py b/tempest/api/volume/admin/v2/__init__.py
similarity index 100%
rename from tempest/api/volume/v3/admin/__init__.py
rename to tempest/api/volume/admin/v2/__init__.py
diff --git a/tempest/api/volume/admin/test_volume_pools.py b/tempest/api/volume/admin/v2/test_volume_pools.py
similarity index 100%
rename from tempest/api/volume/admin/test_volume_pools.py
rename to tempest/api/volume/admin/v2/test_volume_pools.py
diff --git a/tempest/api/volume/admin/test_volume_type_access.py b/tempest/api/volume/admin/v2/test_volume_type_access.py
similarity index 93%
rename from tempest/api/volume/admin/test_volume_type_access.py
rename to tempest/api/volume/admin/v2/test_volume_type_access.py
index fac71a8..91ff5af 100644
--- a/tempest/api/volume/admin/test_volume_type_access.py
+++ b/tempest/api/volume/admin/v2/test_volume_type_access.py
@@ -17,9 +17,12 @@
from tempest.api.volume import base
from tempest.common import waiters
+from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
+CONF = config.CONF
+
class VolumeTypesAccessV2Test(base.BaseVolumeAdminTest):
@@ -38,7 +41,8 @@
# Try creating a volume from volume type in primary tenant
self.assertRaises(lib_exc.NotFound, self.volumes_client.create_volume,
- volume_type=volume_type['id'])
+ volume_type=volume_type['id'],
+ size=CONF.volume.volume_size)
# Adding volume type access for primary tenant
self.admin_volume_types_client.add_type_access(
@@ -49,7 +53,8 @@
# Creating a volume from primary tenant
volume = self.volumes_client.create_volume(
- volume_type=volume_type['id'])['volume']
+ volume_type=volume_type['id'],
+ size=CONF.volume.volume_size)['volume']
self.addCleanup(self.delete_volume, self.volumes_client, volume['id'])
waiters.wait_for_volume_status(self.volumes_client, volume['id'],
'available')
diff --git a/tempest/api/volume/admin/test_volumes_list.py b/tempest/api/volume/admin/v2/test_volumes_list.py
similarity index 94%
rename from tempest/api/volume/admin/test_volumes_list.py
rename to tempest/api/volume/admin/v2/test_volumes_list.py
index 70c16f3..4437803 100644
--- a/tempest/api/volume/admin/test_volumes_list.py
+++ b/tempest/api/volume/admin/v2/test_volumes_list.py
@@ -17,8 +17,11 @@
from tempest.api.volume import base
from tempest.common import waiters
+from tempest import config
from tempest import test
+CONF = config.CONF
+
class VolumesListAdminV2TestJSON(base.BaseVolumeAdminTest):
@@ -38,7 +41,8 @@
def test_volume_list_param_tenant(self):
# Test to list volumes from single tenant
# Create a volume in admin tenant
- adm_vol = self.admin_volume_client.create_volume()['volume']
+ adm_vol = self.admin_volume_client.create_volume(
+ size=CONF.volume.volume_size)['volume']
waiters.wait_for_volume_status(self.admin_volume_client,
adm_vol['id'], 'available')
self.addCleanup(self.admin_volume_client.delete_volume, adm_vol['id'])
diff --git a/tempest/api/volume/v3/admin/__init__.py b/tempest/api/volume/admin/v3/__init__.py
similarity index 100%
copy from tempest/api/volume/v3/admin/__init__.py
copy to tempest/api/volume/admin/v3/__init__.py
diff --git a/tempest/api/volume/v3/admin/test_user_messages.py b/tempest/api/volume/admin/v3/test_user_messages.py
old mode 100644
new mode 100755
similarity index 93%
rename from tempest/api/volume/v3/admin/test_user_messages.py
rename to tempest/api/volume/admin/v3/test_user_messages.py
index 9d59d1b..39a5dfa
--- a/tempest/api/volume/v3/admin/test_user_messages.py
+++ b/tempest/api/volume/admin/v3/test_user_messages.py
@@ -16,9 +16,12 @@
from tempest.api.volume.v3 import base
from tempest.common.utils import data_utils
from tempest.common import waiters
+from tempest import config
from tempest import exceptions
from tempest import test
+CONF = config.CONF
+
MESSAGE_KEYS = [
'created_at',
'event_id',
@@ -42,13 +45,15 @@
bad_vendor = data_utils.rand_name('vendor_name')
extra_specs = {'storage_protocol': bad_protocol,
'vendor_name': bad_vendor}
- vol_type_name = data_utils.rand_name('volume-type')
+ vol_type_name = data_utils.rand_name(
+ self.__class__.__name__ + '-volume-type')
bogus_type = self.admin_volume_types_client.create_volume_type(
name=vol_type_name,
extra_specs=extra_specs)['volume_type']
self.addCleanup(self.admin_volume_types_client.delete_volume_type,
bogus_type['id'])
- params = {'volume_type': bogus_type['id']}
+ params = {'volume_type': bogus_type['id'],
+ 'size': CONF.volume.volume_size}
volume = self.volumes_client.create_volume(**params)['volume']
self.addCleanup(self.delete_volume, self.volumes_client, volume['id'])
try:
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 087b9a8..2082f50 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -17,8 +17,8 @@
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
-from tempest import exceptions
from tempest.lib.common.utils import test_utils
+from tempest.lib import exceptions
import tempest.test
CONF = config.CONF
@@ -110,13 +110,15 @@
@classmethod
def create_volume(cls, **kwargs):
"""Wrapper utility that returns a test volume."""
- name = data_utils.rand_name('Volume')
+ if 'size' not in kwargs:
+ kwargs['size'] = CONF.volume.volume_size
name_field = cls.special_fields['name_field']
+ if name_field not in kwargs:
+ name = data_utils.rand_name(cls.__name__ + '-Volume')
+ kwargs[name_field] = name
- kwargs[name_field] = name
volume = cls.volumes_client.create_volume(**kwargs)['volume']
-
cls.volumes.append(volume)
waiters.wait_for_volume_status(cls.volumes_client,
volume['id'], 'available')
@@ -125,6 +127,11 @@
@classmethod
def create_snapshot(cls, volume_id=1, **kwargs):
"""Wrapper utility that returns a test snapshot."""
+ name_field = cls.special_fields['name_field']
+ if name_field not in kwargs:
+ name = data_utils.rand_name(cls.__name__ + '-Snapshot')
+ kwargs[name_field] = name
+
snapshot = cls.snapshots_client.create_snapshot(
volume_id=volume_id, **kwargs)['snapshot']
cls.snapshots.append(snapshot)
@@ -169,14 +176,23 @@
except Exception:
pass
- @classmethod
- def create_server(cls, name, **kwargs):
- tenant_network = cls.get_tenant_network()
+ def create_server(self, **kwargs):
+ name = kwargs.get(
+ 'name',
+ data_utils.rand_name(self.__class__.__name__ + '-instance'))
+
+ tenant_network = self.get_tenant_network()
body, _ = compute.create_test_server(
- cls.os,
+ self.os,
tenant_network=tenant_network,
name=name,
**kwargs)
+
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ waiters.wait_for_server_termination,
+ self.servers_client, body['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.delete_server, body['id'])
return body
@@ -198,6 +214,8 @@
cls.admin_hosts_client = cls.os_adm.volume_hosts_client
cls.admin_snapshots_client = cls.os_adm.snapshots_client
cls.admin_backups_client = cls.os_adm.backups_client
+ cls.admin_encryption_types_client = \
+ cls.os_adm.encryption_types_client
cls.admin_quotas_client = cls.os_adm.volume_quotas_client
elif cls._api_version == 2:
cls.admin_volume_qos_client = cls.os_adm.volume_qos_v2_client
@@ -208,6 +226,8 @@
cls.admin_hosts_client = cls.os_adm.volume_hosts_v2_client
cls.admin_snapshots_client = cls.os_adm.snapshots_v2_client
cls.admin_backups_client = cls.os_adm.backups_v2_client
+ cls.admin_encryption_types_client = \
+ cls.os_adm.encryption_types_v2_client
cls.admin_quotas_client = cls.os_adm.volume_quotas_v2_client
@classmethod
@@ -236,7 +256,7 @@
@classmethod
def create_volume_type(cls, name=None, **kwargs):
"""Create a test volume-type"""
- name = name or data_utils.rand_name('volume-type')
+ name = name or data_utils.rand_name(cls.__name__ + '-volume-type')
volume_type = cls.admin_volume_types_client.create_volume_type(
name=name, **kwargs)['volume_type']
cls.volume_types.append(volume_type['id'])
@@ -259,9 +279,6 @@
cls.admin_volume_types_client.delete_volume_type, vol_type)
for vol_type in cls.volume_types:
- # Resource dictionary uses for is_resource_deleted method,
- # to distinguish between volume-type to encryption-type.
- resource = {'id': vol_type, 'type': 'volume-type'}
test_utils.call_and_ignore_notfound_exc(
cls.admin_volume_types_client.wait_for_resource_deletion,
- resource)
+ vol_type)
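
With the base-class changes above, create_volume() and create_snapshot()
default the size to CONF.volume.volume_size and generate a class-prefixed
random name, so callers throughout this patch can drop those arguments.
A hedged usage sketch (explicit keyword arguments still override the
defaults)::

    # Inside a BaseVolumeTest subclass:
    volume = cls.create_volume()                  # size and name defaulted
    snapshot = cls.create_snapshot(volume['id'])  # name defaulted
    bigger = cls.create_volume(size=CONF.volume.volume_size + 1)
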
diff --git a/tempest/api/volume/test_availability_zone.py b/tempest/api/volume/test_availability_zone.py
index fe51375..ae4b8f9 100644
--- a/tempest/api/volume/test_availability_zone.py
+++ b/tempest/api/volume/test_availability_zone.py
@@ -30,7 +30,7 @@
# List of availability zone
availability_zone = (self.client.list_availability_zones()
['availabilityZoneInfo'])
- self.assertTrue(len(availability_zone) > 0)
+ self.assertGreater(len(availability_zone), 0)
class AvailabilityZoneV1TestJSON(AvailabilityZoneV2TestJSON):
diff --git a/tempest/api/volume/test_volume_metadata.py b/tempest/api/volume/test_volume_metadata.py
index e529538..ee1744d 100644
--- a/tempest/api/volume/test_volume_metadata.py
+++ b/tempest/api/volume/test_volume_metadata.py
@@ -26,11 +26,10 @@
super(VolumesV2MetadataTest, cls).resource_setup()
# Create a volume
cls.volume = cls.create_volume()
- cls.volume_id = cls.volume['id']
def tearDown(self):
# Update the metadata to {}
- self.volumes_client.update_volume_metadata(self.volume_id, {})
+ self.volumes_client.update_volume_metadata(self.volume['id'], {})
super(VolumesV2MetadataTest, self).tearDown()
@test.idempotent_id('6f5b125b-f664-44bf-910f-751591fe5769')
@@ -41,17 +40,17 @@
"key3": "value3",
"key4": "<value&special_chars>"}
- body = self.volumes_client.create_volume_metadata(self.volume_id,
+ body = self.volumes_client.create_volume_metadata(self.volume['id'],
metadata)['metadata']
# Get the metadata of the volume
body = self.volumes_client.show_volume_metadata(
- self.volume_id)['metadata']
+ self.volume['id'])['metadata']
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
# Delete one item metadata of the volume
self.volumes_client.delete_volume_metadata_item(
- self.volume_id, "key1")
+ self.volume['id'], "key1")
body = self.volumes_client.show_volume_metadata(
- self.volume_id)['metadata']
+ self.volume['id'])['metadata']
self.assertNotIn("key1", body)
del metadata["key1"]
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
@@ -68,17 +67,17 @@
# Create metadata for the volume
body = self.volumes_client.create_volume_metadata(
- self.volume_id, metadata)['metadata']
+ self.volume['id'], metadata)['metadata']
# Get the metadata of the volume
body = self.volumes_client.show_volume_metadata(
- self.volume_id)['metadata']
+ self.volume['id'])['metadata']
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
# Update metadata
body = self.volumes_client.update_volume_metadata(
- self.volume_id, update)['metadata']
+ self.volume['id'], update)['metadata']
# Get the metadata of the volume
body = self.volumes_client.show_volume_metadata(
- self.volume_id)['metadata']
+ self.volume['id'])['metadata']
self.assertEqual(update, body)
@test.idempotent_id('862261c5-8df4-475a-8c21-946e50e36a20')
@@ -93,14 +92,14 @@
"key3": "value3_update"}
# Create metadata for the volume
body = self.volumes_client.create_volume_metadata(
- self.volume_id, metadata)['metadata']
+ self.volume['id'], metadata)['metadata']
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
# Update metadata item
body = self.volumes_client.update_volume_metadata_item(
- self.volume_id, "key3", update_item)['meta']
+ self.volume['id'], "key3", update_item)['meta']
# Get the metadata of the volume
body = self.volumes_client.show_volume_metadata(
- self.volume_id)['metadata']
+ self.volume['id'])['metadata']
self.assertThat(body.items(), matchers.ContainsAll(expect.items()))
diff --git a/tempest/api/volume/test_volume_transfers.py b/tempest/api/volume/test_volume_transfers.py
index d138490..a8889e0 100644
--- a/tempest/api/volume/test_volume_transfers.py
+++ b/tempest/api/volume/test_volume_transfers.py
@@ -17,11 +17,8 @@
from tempest.api.volume import base
from tempest.common import waiters
-from tempest import config
from tempest import test
-CONF = config.CONF
-
class VolumesV2TransfersTest(base.BaseVolumeTest):
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 76cd36c..737ce5e 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -18,8 +18,8 @@
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
-from tempest import exceptions
from tempest.lib.common.utils import test_utils
+from tempest.lib import exceptions
from tempest import test
CONF = config.CONF
@@ -45,34 +45,19 @@
@classmethod
def resource_setup(cls):
super(VolumesV2ActionsTest, cls).resource_setup()
- # Create a test shared instance
- srv_name = data_utils.rand_name(cls.__name__ + '-Instance')
- cls.server = cls.create_server(
- name=srv_name,
- wait_until='ACTIVE')
# Create a test shared volume for attach/detach tests
cls.volume = cls.create_volume()
- waiters.wait_for_volume_status(cls.client,
- cls.volume['id'], 'available')
-
- @classmethod
- def resource_cleanup(cls):
- # Delete the test instance
- cls.servers_client.delete_server(cls.server['id'])
- waiters.wait_for_server_termination(cls.servers_client,
- cls.server['id'])
-
- super(VolumesV2ActionsTest, cls).resource_cleanup()
@test.idempotent_id('fff42874-7db5-4487-a8e1-ddda5fb5288d')
- @test.stresstest(class_setup_per='process')
@test.attr(type='smoke')
@test.services('compute')
def test_attach_detach_volume_to_instance(self):
+ # Create a server
+ server = self.create_server(wait_until='ACTIVE')
# Volume is attached and detached successfully from an instance
self.client.attach_volume(self.volume['id'],
- instance_uuid=self.server['id'],
+ instance_uuid=server['id'],
mountpoint='/dev/%s' %
CONF.compute.volume_device_name)
waiters.wait_for_volume_status(self.client,
@@ -96,29 +81,29 @@
self.assertEqual(bool_bootable, bool_flag)
@test.idempotent_id('9516a2c8-9135-488c-8dd6-5677a7e5f371')
- @test.stresstest(class_setup_per='process')
@test.services('compute')
def test_get_volume_attachment(self):
+ # Create a server
+ server = self.create_server(wait_until='ACTIVE')
# Verify that a volume's attachment information is retrieved
self.client.attach_volume(self.volume['id'],
- instance_uuid=self.server['id'],
+ instance_uuid=server['id'],
mountpoint='/dev/%s' %
CONF.compute.volume_device_name)
waiters.wait_for_volume_status(self.client,
self.volume['id'], 'in-use')
- # NOTE(gfidente): added in reverse order because functions will be
- # called in reverse order to the order they are added (LIFO)
self.addCleanup(waiters.wait_for_volume_status, self.client,
self.volume['id'],
'available')
self.addCleanup(self.client.detach_volume, self.volume['id'])
volume = self.client.show_volume(self.volume['id'])['volume']
self.assertIn('attachments', volume)
- attachment = self.client.get_attachment_from_volume(volume)
+ attachment = volume['attachments'][0]
+
self.assertEqual('/dev/%s' %
CONF.compute.volume_device_name,
attachment['device'])
- self.assertEqual(self.server['id'], attachment['server_id'])
+ self.assertEqual(server['id'], attachment['server_id'])
self.assertEqual(self.volume['id'], attachment['id'])
self.assertEqual(self.volume['id'], attachment['volume_id'])
@@ -129,7 +114,7 @@
# it is shared with the other tests. After it is uploaded in Glance,
# there is no way to delete it from Cinder, so we delete it from Glance
# using the Glance image_client and from Cinder via tearDownClass.
- image_name = data_utils.rand_name('Image')
+ image_name = data_utils.rand_name(self.__class__.__name__ + '-Image')
body = self.client.upload_volume(
self.volume['id'], image_name=image_name,
disk_format=CONF.volume.disk_format)['os-volume_upload_image']
@@ -159,24 +144,15 @@
@test.idempotent_id('fff74e1e-5bd3-4b33-9ea9-24c103bc3f59')
def test_volume_readonly_update(self):
- # Update volume readonly true
- readonly = True
- self.client.update_volume_readonly(self.volume['id'],
- readonly=readonly)
- # Get Volume information
- fetched_volume = self.client.show_volume(self.volume['id'])['volume']
- bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
- self.assertEqual(True, bool_flag)
-
- # Update volume readonly false
- readonly = False
- self.client.update_volume_readonly(self.volume['id'],
- readonly=readonly)
-
- # Get Volume information
- fetched_volume = self.client.show_volume(self.volume['id'])['volume']
- bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
- self.assertEqual(False, bool_flag)
+ for readonly in [True, False]:
+ # Update volume readonly
+ self.client.update_volume_readonly(self.volume['id'],
+ readonly=readonly)
+ # Get Volume information
+ fetched_volume = self.client.show_volume(
+ self.volume['id'])['volume']
+ bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
+ self.assertEqual(readonly, bool_flag)
class VolumesV1ActionsTest(VolumesV2ActionsTest):
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 87146db..141336f 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -30,11 +30,47 @@
if not CONF.volume_feature_enabled.backup:
raise cls.skipException("Cinder backup feature disabled")
- @classmethod
- def resource_setup(cls):
- super(VolumesBackupsV2Test, cls).resource_setup()
+ @test.idempotent_id('a66eb488-8ee1-47d4-8e9f-575a095728c6')
+ def test_volume_backup_create_get_detailed_list_restore_delete(self):
+ # Create backup
+ volume = self.create_volume()
+ self.addCleanup(self.volumes_client.delete_volume,
+ volume['id'])
+ backup_name = data_utils.rand_name(
+ self.__class__.__name__ + '-Backup')
+ create_backup = self.backups_client.create_backup
+ backup = create_backup(volume_id=volume['id'],
+ name=backup_name)['backup']
+ self.addCleanup(self.backups_client.delete_backup,
+ backup['id'])
+ self.assertEqual(backup_name, backup['name'])
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume['id'], 'available')
+ waiters.wait_for_backup_status(self.backups_client,
+ backup['id'], 'available')
- cls.volume = cls.create_volume()
+ # Get a given backup
+ backup = self.backups_client.show_backup(backup['id'])['backup']
+ self.assertEqual(backup_name, backup['name'])
+
+ # Get all backups with detail
+ backups = self.backups_client.list_backups(
+ detail=True)['backups']
+ self.assertIn((backup['name'], backup['id']),
+ [(m['name'], m['id']) for m in backups])
+
+ # Restore backup
+ restore = self.backups_client.restore_backup(
+ backup['id'])['restore']
+
+ # Delete backup
+ self.addCleanup(self.volumes_client.delete_volume,
+ restore['volume_id'])
+ self.assertEqual(backup['id'], restore['backup_id'])
+ waiters.wait_for_backup_status(self.backups_client,
+ backup['id'], 'available')
+ waiters.wait_for_volume_status(self.volumes_client,
+ restore['volume_id'], 'available')
@test.idempotent_id('07af8f6d-80af-44c9-a5dc-c8427b1b62e6')
@test.services('compute')
@@ -45,26 +81,28 @@
is "available" or "in-use".
"""
# Create a server
- server_name = data_utils.rand_name('instance')
- server = self.create_server(name=server_name, wait_until='ACTIVE')
- self.addCleanup(self.servers_client.delete_server, server['id'])
+ volume = self.create_volume()
+ self.addCleanup(self.volumes_client.delete_volume,
+ volume['id'])
+ server = self.create_server(wait_until='ACTIVE')
# Attach volume to instance
self.servers_client.attach_volume(server['id'],
- volumeId=self.volume['id'])
+ volumeId=volume['id'])
waiters.wait_for_volume_status(self.volumes_client,
- self.volume['id'], 'in-use')
+ volume['id'], 'in-use')
self.addCleanup(waiters.wait_for_volume_status, self.volumes_client,
- self.volume['id'], 'available')
+ volume['id'], 'available')
self.addCleanup(self.servers_client.detach_volume, server['id'],
- self.volume['id'])
+ volume['id'])
# Create backup using force flag
- backup_name = data_utils.rand_name('Backup')
+ backup_name = data_utils.rand_name(
+ self.__class__.__name__ + '-Backup')
backup = self.backups_client.create_backup(
- volume_id=self.volume['id'],
+ volume_id=volume['id'],
name=backup_name, force=True)['backup']
self.addCleanup(self.backups_client.delete_backup, backup['id'])
- self.backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.backups_client,
+ backup['id'], 'available')
self.assertEqual(backup_name, backup['name'])
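
Several hunks in this patch replace the backups client's own wait helper
with the shared waiters module, which takes the client as its first
argument. A minimal sketch of the new call pattern, using only names that
appear in this patch::

    from tempest.common import waiters

    backup = self.backups_client.create_backup(
        volume_id=volume['id'], name=backup_name)['backup']
    waiters.wait_for_backup_status(self.backups_client,
                                   backup['id'], 'available')
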
diff --git a/tempest/api/volume/test_volumes_clone.py b/tempest/api/volume/test_volumes_clone.py
index f38a068..7529dc2 100644
--- a/tempest/api/volume/test_volumes_clone.py
+++ b/tempest/api/volume/test_volumes_clone.py
@@ -23,6 +23,12 @@
class VolumesCloneTest(base.BaseVolumeTest):
+ @classmethod
+ def skip_checks(cls):
+ super(VolumesCloneTest, cls).skip_checks()
+ if not CONF.volume_feature_enabled.clone:
+ raise cls.skipException("Cinder volume clones are disabled")
+
@test.idempotent_id('9adae371-a257-43a5-9555-dc7c88e66e0e')
def test_create_from_volume(self):
# Creates a volume from another volume passing a size different from
diff --git a/tempest/api/volume/test_volumes_clone_negative.py b/tempest/api/volume/test_volumes_clone_negative.py
index ee51e00..d1bedb4 100644
--- a/tempest/api/volume/test_volumes_clone_negative.py
+++ b/tempest/api/volume/test_volumes_clone_negative.py
@@ -24,6 +24,12 @@
class VolumesCloneTest(base.BaseVolumeTest):
+ @classmethod
+ def skip_checks(cls):
+ super(VolumesCloneTest, cls).skip_checks()
+ if not CONF.volume_feature_enabled.clone:
+ raise cls.skipException("Cinder volume clones are disabled")
+
@test.idempotent_id('9adae371-a257-43a5-459a-dc7c88e66e0e')
def test_create_from_volume_decreasing_size(self):
# Creates a volume from another volume passing a size different from
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 1947779..7aea1c4 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -15,11 +15,8 @@
from tempest.api.volume import base
from tempest.common import waiters
-from tempest import config
from tempest import test
-CONF = config.CONF
-
class VolumesV2ExtendTest(base.BaseVolumeTest):
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
index e5fcdfe..07f799b 100644
--- a/tempest/api/volume/test_volumes_get.py
+++ b/tempest/api/volume/test_volumes_get.py
@@ -41,8 +41,7 @@
def _volume_create_get_update_delete(self, **kwargs):
# Create a volume, Get it's details and Delete the volume
- volume = {}
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'Test'}
# Create a volume
kwargs[self.name_field] = v_name
@@ -82,7 +81,8 @@
params = {self.name_field: v_name}
self.client.update_volume(volume['id'], **params)
# Test volume update when display_name is new
- new_v_name = data_utils.rand_name('new-Volume')
+ new_v_name = data_utils.rand_name(
+ self.__class__.__name__ + '-new-Volume')
new_desc = 'This is the new description of volume'
params = {self.name_field: new_v_name,
self.descrip_field: new_desc}
@@ -103,10 +103,10 @@
# Test volume create when display_name is none and display_description
# contains specific characters,
# then test volume update if display_name is duplicated
- new_volume = {}
new_v_desc = data_utils.rand_name('@#$%^* description')
params = {self.descrip_field: new_v_desc,
- 'availability_zone': volume['availability_zone']}
+ 'availability_zone': volume['availability_zone'],
+ 'size': CONF.volume.volume_size}
new_volume = self.client.create_volume(**params)['volume']
self.assertIn('id', new_volume)
self.addCleanup(self.delete_volume, self.client, new_volume['id'])
@@ -125,7 +125,7 @@
@test.attr(type='smoke')
@test.idempotent_id('27fb0e9f-fb64-41dd-8bdb-1ffa762f0d51')
def test_volume_create_get_update_delete(self):
- self._volume_create_get_update_delete()
+ self._volume_create_get_update_delete(size=CONF.volume.volume_size)
@test.attr(type='smoke')
@test.idempotent_id('54a01030-c7fc-447c-86ee-c1182beae638')
@@ -143,7 +143,8 @@
'Cinder volume clones are disabled')
def test_volume_create_get_update_delete_as_clone(self):
origin = self.create_volume()
- self._volume_create_get_update_delete(source_volid=origin['id'])
+ self._volume_create_get_update_delete(source_volid=origin['id'],
+ size=CONF.volume.volume_size)
class VolumesV1GetTest(VolumesV2GetTest):
diff --git a/tempest/api/volume/test_volumes_list.py b/tempest/api/volume/test_volumes_list.py
index f7176f4..40793ec 100644
--- a/tempest/api/volume/test_volumes_list.py
+++ b/tempest/api/volume/test_volumes_list.py
@@ -32,6 +32,11 @@
VOLUME_FIELDS = ('id', 'name')
def assertVolumesIn(self, fetched_list, expected_list, fields=None):
+ """Check out the list.
+
+ This function is aim at check out whether all of the volumes in
+ expected_list are in fetched_list.
+ """
if fields:
fieldsgetter = operator.itemgetter(*fields)
expected_list = map(fieldsgetter, expected_list)
@@ -58,23 +63,13 @@
def resource_setup(cls):
super(VolumesV2ListTestJSON, cls).resource_setup()
cls.name = cls.VOLUME_FIELDS[1]
-
# Create 3 test volumes
cls.volume_list = []
- cls.volume_id_list = []
cls.metadata = {'Type': 'work'}
for i in range(3):
volume = cls.create_volume(metadata=cls.metadata)
volume = cls.client.show_volume(volume['id'])['volume']
cls.volume_list.append(volume)
- cls.volume_id_list.append(volume['id'])
-
- @classmethod
- def resource_cleanup(cls):
- # Delete the created volumes
- for volid in cls.volume_id_list:
- cls.delete_volume(cls.client, volid)
- super(VolumesV2ListTestJSON, cls).resource_cleanup()
def _list_by_param_value_and_assert(self, params, with_detail=False):
"""list or list_details with given params and validates result"""
@@ -155,6 +150,28 @@
self.assertEqual('available', volume['status'])
self.assertVolumesIn(fetched_list, self.volume_list)
+ @test.idempotent_id('2016a942-3020-40d7-95ce-7613bf8407ce')
+ def test_volumes_list_by_bootable(self):
+ """Check out volumes.
+
+ This test function is aim at check out whether all of the volumes
+ in volume_list are not a bootable volume.
+ """
+ params = {'bootable': 'false'}
+ fetched_list = self.client.list_volumes(params=params)['volumes']
+ self._list_by_param_value_and_assert(params)
+ self.assertVolumesIn(fetched_list, self.volume_list,
+ fields=self.VOLUME_FIELDS)
+
+ @test.idempotent_id('2016a939-72ec-482a-bf49-d5ca06216b9f')
+ def test_volumes_list_details_by_bootable(self):
+ params = {'bootable': 'false'}
+ fetched_list = self.client.list_volumes(
+ detail=True, params=params)['volumes']
+ for volume in fetched_list:
+ self.assertEqual('false', volume['bootable'])
+ self.assertVolumesIn(fetched_list, self.volume_list)
+
@test.idempotent_id('c0cfa863-3020-40d7-b587-e35f597d5d87')
def test_volumes_list_by_availability_zone(self):
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index 77bfaf1..fda0dda 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -15,7 +15,6 @@
from tempest.api.volume import base
from tempest.common.utils import data_utils
-from tempest.common import waiters
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -56,7 +55,7 @@
def test_create_volume_with_invalid_size(self):
# Should not be able to create volume with invalid size
# in request
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='#$%', display_name=v_name, metadata=metadata)
@@ -66,7 +65,7 @@
def test_create_volume_with_out_passing_size(self):
# Should not be able to create volume without passing size
# in request
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='', display_name=v_name, metadata=metadata)
@@ -75,7 +74,7 @@
@test.idempotent_id('41331caa-eaf4-4001-869d-bc18c1869360')
def test_create_volume_with_size_zero(self):
# Should not be able to create volume with size zero
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='0', display_name=v_name, metadata=metadata)
@@ -84,7 +83,7 @@
@test.idempotent_id('8b472729-9eba-446e-a83b-916bdb34bef7')
def test_create_volume_with_size_negative(self):
# Should not be able to create volume with size negative
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
size='-1', display_name=v_name, metadata=metadata)
@@ -93,7 +92,7 @@
@test.idempotent_id('10254ed8-3849-454e-862e-3ab8e6aa01d2')
def test_create_volume_with_nonexistent_volume_type(self):
# Should not be able to create volume with non-existent volume type
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.NotFound, self.client.create_volume,
size='1', volume_type=data_utils.rand_uuid(),
@@ -103,7 +102,7 @@
@test.idempotent_id('0c36f6ae-4604-4017-b0a9-34fdc63096f9')
def test_create_volume_with_nonexistent_snapshot_id(self):
# Should not be able to create volume with non-existent snapshot
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.NotFound, self.client.create_volume,
size='1', snapshot_id=data_utils.rand_uuid(),
@@ -113,7 +112,7 @@
@test.idempotent_id('47c73e08-4be8-45bb-bfdf-0c4e79b88344')
def test_create_volume_with_nonexistent_source_volid(self):
# Should not be able to create volume with non-existent source volume
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.NotFound, self.client.create_volume,
size='1', source_volid=data_utils.rand_uuid(),
@@ -122,7 +121,7 @@
@test.attr(type=['negative'])
@test.idempotent_id('0186422c-999a-480e-a026-6a665744c30c')
def test_update_volume_with_nonexistent_volume_id(self):
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.NotFound, self.client.update_volume,
volume_id=data_utils.rand_uuid(),
@@ -132,7 +131,7 @@
@test.attr(type=['negative'])
@test.idempotent_id('e66e40d6-65e6-4e75-bdc7-636792fa152d')
def test_update_volume_with_invalid_volume_id(self):
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.NotFound, self.client.update_volume,
volume_id='#$%%&^&^', display_name=v_name,
@@ -141,7 +140,7 @@
@test.attr(type=['negative'])
@test.idempotent_id('72aeca85-57a5-4c1f-9057-f320f9ea575b')
def test_update_volume_with_empty_volume_id(self):
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
metadata = {'Type': 'work'}
self.assertRaises(lib_exc.NotFound, self.client.update_volume,
volume_id='', display_name=v_name,
@@ -177,13 +176,7 @@
@test.idempotent_id('f5e56b0a-5d02-43c1-a2a7-c9b792c2e3f6')
@test.services('compute')
def test_attach_volumes_with_nonexistent_volume_id(self):
- srv_name = data_utils.rand_name('Instance')
- server = self.create_server(
- name=srv_name,
- wait_until='ACTIVE')
- self.addCleanup(waiters.wait_for_server_termination,
- self.servers_client, server['id'])
- self.addCleanup(self.servers_client.delete_server, server['id'])
+ server = self.create_server(wait_until='ACTIVE')
self.assertRaises(lib_exc.NotFound,
self.client.attach_volume,
@@ -267,7 +260,7 @@
@test.attr(type=['negative'])
@test.idempotent_id('0f4aa809-8c7b-418f-8fb3-84c7a5dfc52f')
def test_list_volumes_with_nonexistent_name(self):
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
params = {self.name_field: v_name}
fetched_volume = self.client.list_volumes(params=params)['volumes']
self.assertEqual(0, len(fetched_volume))
@@ -275,7 +268,7 @@
@test.attr(type=['negative'])
@test.idempotent_id('9ca17820-a0e7-4cbd-a7fa-f4468735e359')
def test_list_volumes_detail_with_nonexistent_name(self):
- v_name = data_utils.rand_name('Volume')
+ v_name = data_utils.rand_name(self.__class__.__name__ + '-Volume')
params = {self.name_field: v_name}
fetched_volume = \
self.client.list_volumes(detail=True, params=params)['volumes']
@@ -299,4 +292,3 @@
class VolumesV1NegativeTest(VolumesV2NegativeTest):
_api_version = 1
- _name = 'display_name'
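
The rand_name changes throughout this patch prefix random resource names
with the test class name, which makes leaked resources easier to trace back
to the test that created them. A sketch of the difference (the numeric
suffixes are illustrative)::

    from tempest.common.utils import data_utils

    data_utils.rand_name('Volume')
    # e.g. 'Volume-1317992929' - hard to attribute to a test class
    data_utils.rand_name(self.__class__.__name__ + '-Volume')
    # e.g. 'VolumesV2NegativeTest-Volume-1317992929'
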
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index c7f1e6e..3c05d3e 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -14,7 +14,6 @@
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
-from tempest.lib import decorators
from tempest import test
CONF = config.CONF
@@ -32,54 +31,21 @@
def resource_setup(cls):
super(VolumesV2SnapshotTestJSON, cls).resource_setup()
cls.volume_origin = cls.create_volume()
-
cls.name_field = cls.special_fields['name_field']
cls.descrip_field = cls.special_fields['descrip_field']
- # Create 2 snapshots
- for _ in xrange(2):
- cls.create_snapshot(cls.volume_origin['id'])
- def _detach(self, volume_id):
- """Detach volume."""
- self.volumes_client.detach_volume(volume_id)
- waiters.wait_for_volume_status(self.volumes_client,
- volume_id, 'available')
-
- def _list_by_param_values_and_assert(self, with_detail=False, **params):
- """list or list_details with given params and validates result."""
-
- if with_detail:
- fetched_snap_list = self.snapshots_client.list_snapshots(
- detail=True, **params)['snapshots']
- else:
- fetched_snap_list = self.snapshots_client.list_snapshots(
- **params)['snapshots']
-
- # Validating params of fetched snapshots
- for snap in fetched_snap_list:
- for key in params:
- msg = "Failed to list snapshots %s by %s" % \
- ('details' if with_detail else '', key)
- self.assertEqual(params[key], snap[key], msg)
-
- def _list_snapshots_by_param_limit(self, limit, expected_elements):
- """list snapshots by limit param"""
- # Get snapshots list using limit parameter
- fetched_snap_list = self.snapshots_client.list_snapshots(
- limit=limit)['snapshots']
- # Validating filtered snapshots length equals to expected_elements
- self.assertEqual(expected_elements, len(fetched_snap_list))
+ def cleanup_snapshot(self, snapshot):
+ # Delete the snapshot
+ self.snapshots_client.delete_snapshot(snapshot['id'])
+ self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
+ self.snapshots.remove(snapshot)
@test.idempotent_id('b467b54c-07a4-446d-a1cf-651dedcc3ff1')
@test.services('compute')
def test_snapshot_create_with_volume_in_use(self):
# Create a snapshot when volume status is in-use
# Create a test instance
- server_name = data_utils.rand_name('instance')
- server = self.create_server(
- name=server_name,
- wait_until='ACTIVE')
- self.addCleanup(self.servers_client.delete_server, server['id'])
+ server = self.create_server(wait_until='ACTIVE')
self.servers_client.attach_volume(
server['id'], volumeId=self.volume_origin['id'],
device='/dev/%s' % CONF.compute.volume_device_name)
@@ -98,9 +64,7 @@
@test.idempotent_id('2a8abbe4-d871-46db-b049-c41f5af8216e')
def test_snapshot_create_get_list_update_delete(self):
# Create a snapshot
- s_name = data_utils.rand_name('snap')
- params = {self.name_field: s_name}
- snapshot = self.create_snapshot(self.volume_origin['id'], **params)
+ snapshot = self.create_snapshot(self.volume_origin['id'])
# Get the snap and check for some of its details
snap_get = self.snapshots_client.show_snapshot(
@@ -116,7 +80,8 @@
self.assertIn(tracking_data, snaps_data)
# Updates snapshot with new values
- new_s_name = data_utils.rand_name('new-snap')
+ new_s_name = data_utils.rand_name(
+ self.__class__.__name__ + '-new-snap')
new_desc = 'This is the new description of snapshot.'
params = {self.name_field: new_s_name,
self.descrip_field: new_desc}
@@ -134,48 +99,6 @@
# Delete the snapshot
self.cleanup_snapshot(snapshot)
- @test.idempotent_id('59f41f43-aebf-48a9-ab5d-d76340fab32b')
- def test_snapshots_list_with_params(self):
- """list snapshots with params."""
- # Create a snapshot
- display_name = data_utils.rand_name('snap')
- params = {self.name_field: display_name}
- snapshot = self.create_snapshot(self.volume_origin['id'], **params)
- self.addCleanup(self.cleanup_snapshot, snapshot)
-
- # Verify list snapshots by display_name filter
- params = {self.name_field: snapshot[self.name_field]}
- self._list_by_param_values_and_assert(**params)
-
- # Verify list snapshots by status filter
- params = {'status': 'available'}
- self._list_by_param_values_and_assert(**params)
-
- # Verify list snapshots by status and display name filter
- params = {'status': 'available',
- self.name_field: snapshot[self.name_field]}
- self._list_by_param_values_and_assert(**params)
-
- @test.idempotent_id('220a1022-1fcd-4a74-a7bd-6b859156cda2')
- def test_snapshots_list_details_with_params(self):
- """list snapshot details with params."""
- # Create a snapshot
- display_name = data_utils.rand_name('snap')
- params = {self.name_field: display_name}
- snapshot = self.create_snapshot(self.volume_origin['id'], **params)
- self.addCleanup(self.cleanup_snapshot, snapshot)
-
- # Verify list snapshot details by display_name filter
- params = {self.name_field: snapshot[self.name_field]}
- self._list_by_param_values_and_assert(with_detail=True, **params)
- # Verify list snapshot details by status filter
- params = {'status': 'available'}
- self._list_by_param_values_and_assert(with_detail=True, **params)
- # Verify list snapshot details by status and display name filter
- params = {'status': 'available',
- self.name_field: snapshot[self.name_field]}
- self._list_by_param_values_and_assert(with_detail=True, **params)
-
@test.idempotent_id('677863d1-3142-456d-b6ac-9924f667a7f4')
def test_volume_from_snapshot(self):
# Creates a volume a snapshot passing a size different from the source
@@ -192,31 +115,6 @@
self.assertEqual(volume['snapshot_id'], src_snap['id'])
self.assertEqual(int(volume['size']), src_size + 1)
- @test.idempotent_id('db4d8e0a-7a2e-41cc-a712-961f6844e896')
- def test_snapshot_list_param_limit(self):
- # List returns limited elements
- self._list_snapshots_by_param_limit(limit=1, expected_elements=1)
-
- @test.idempotent_id('a1427f61-420e-48a5-b6e3-0b394fa95400')
- def test_snapshot_list_param_limit_equals_infinite(self):
- # List returns all elements when request limit exceeded
- # snapshots number
- snap_list = self.snapshots_client.list_snapshots()['snapshots']
- self._list_snapshots_by_param_limit(limit=100000,
- expected_elements=len(snap_list))
-
- @decorators.skip_because(bug='1540893')
- @test.idempotent_id('e3b44b7f-ae87-45b5-8a8c-66110eb24d0a')
- def test_snapshot_list_param_limit_equals_zero(self):
- # List returns zero elements
- self._list_snapshots_by_param_limit(limit=0, expected_elements=0)
-
- def cleanup_snapshot(self, snapshot):
- # Delete the snapshot
- self.snapshots_client.delete_snapshot(snapshot['id'])
- self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
- self.snapshots.remove(snapshot)
-
class VolumesV1SnapshotTestJSON(VolumesV2SnapshotTestJSON):
_api_version = 1
diff --git a/tempest/api/volume/test_volumes_snapshots_list.py b/tempest/api/volume/test_volumes_snapshots_list.py
new file mode 100644
index 0000000..4416bef
--- /dev/null
+++ b/tempest/api/volume/test_volumes_snapshots_list.py
@@ -0,0 +1,116 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.volume import base
+from tempest import config
+from tempest.lib import decorators
+from tempest import test
+
+CONF = config.CONF
+
+
+class VolumesV2SnapshotListTestJSON(base.BaseVolumeTest):
+
+ @classmethod
+ def skip_checks(cls):
+ super(VolumesV2SnapshotListTestJSON, cls).skip_checks()
+ if not CONF.volume_feature_enabled.snapshot:
+ raise cls.skipException("Cinder volume snapshots are disabled")
+
+ @classmethod
+ def resource_setup(cls):
+ super(VolumesV2SnapshotListTestJSON, cls).resource_setup()
+ cls.volume_origin = cls.create_volume()
+ cls.name_field = cls.special_fields['name_field']
+ # Create snapshots with params
+ for _ in range(2):
+ cls.snapshot = cls.create_snapshot(cls.volume_origin['id'])
+
+ def _list_by_param_values_and_assert(self, with_detail=False, **params):
+ """list or list_details with given params and validates result."""
+
+ fetched_snap_list = self.snapshots_client.list_snapshots(
+ detail=with_detail, **params)['snapshots']
+
+ # Validating params of fetched snapshots
+ for snap in fetched_snap_list:
+ for key in params:
+ msg = "Failed to list snapshots %s by %s" % \
+ ('details' if with_detail else '', key)
+ self.assertEqual(params[key], snap[key], msg)
+
+ def _list_snapshots_by_param_limit(self, limit, expected_elements):
+ """list snapshots by limit param"""
+ # Get snapshots list using limit parameter
+ fetched_snap_list = self.snapshots_client.list_snapshots(
+ limit=limit)['snapshots']
+ # Validating filtered snapshots length equals to expected_elements
+ self.assertEqual(expected_elements, len(fetched_snap_list))
+
+ @test.idempotent_id('59f41f43-aebf-48a9-ab5d-d76340fab32b')
+ def test_snapshots_list_with_params(self):
+ """list snapshots with params."""
+ # Verify list snapshots by display_name filter
+ params = {self.name_field: self.snapshot[self.name_field]}
+ self._list_by_param_values_and_assert(**params)
+
+ # Verify list snapshots by status filter
+ params = {'status': 'available'}
+ self._list_by_param_values_and_assert(**params)
+
+ # Verify list snapshots by status and display name filter
+ params = {'status': 'available',
+ self.name_field: self.snapshot[self.name_field]}
+ self._list_by_param_values_and_assert(**params)
+
+ @test.idempotent_id('220a1022-1fcd-4a74-a7bd-6b859156cda2')
+ def test_snapshots_list_details_with_params(self):
+ """list snapshot details with params."""
+ # Verify list snapshot details by display_name filter
+ params = {self.name_field: self.snapshot[self.name_field]}
+ self._list_by_param_values_and_assert(with_detail=True, **params)
+ # Verify list snapshot details by status filter
+ params = {'status': 'available'}
+ self._list_by_param_values_and_assert(with_detail=True, **params)
+ # Verify list snapshot details by status and display name filter
+ params = {'status': 'available',
+ self.name_field: self.snapshot[self.name_field]}
+ self._list_by_param_values_and_assert(with_detail=True, **params)
+
+ @test.idempotent_id('db4d8e0a-7a2e-41cc-a712-961f6844e896')
+ def test_snapshot_list_param_limit(self):
+ # List returns limited elements
+ self._list_snapshots_by_param_limit(limit=1, expected_elements=1)
+
+ @test.idempotent_id('a1427f61-420e-48a5-b6e3-0b394fa95400')
+ def test_snapshot_list_param_limit_equals_infinite(self):
+        # List returns all elements when the requested limit exceeds
+        # the number of existing snapshots
+ snap_list = self.snapshots_client.list_snapshots()['snapshots']
+ self._list_snapshots_by_param_limit(limit=100000,
+ expected_elements=len(snap_list))
+
+ @decorators.skip_because(bug='1540893')
+ @test.idempotent_id('e3b44b7f-ae87-45b5-8a8c-66110eb24d0a')
+ def test_snapshot_list_param_limit_equals_zero(self):
+ # List returns zero elements
+ self._list_snapshots_by_param_limit(limit=0, expected_elements=0)
+
+ def cleanup_snapshot(self, snapshot):
+ # Delete the snapshot
+ self.snapshots_client.delete_snapshot(snapshot['id'])
+ self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
+ self.snapshots.remove(snapshot)
+
+
+class VolumesV1SnapshotLimitTestJSON(VolumesV2SnapshotListTestJSON):
+ _api_version = 1
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
index 2df9523..1f5bb0d 100644
--- a/tempest/api/volume/test_volumes_snapshots_negative.py
+++ b/tempest/api/volume/test_volumes_snapshots_negative.py
@@ -31,7 +31,7 @@
@test.idempotent_id('e3e466af-70ab-4f4b-a967-ab04e3532ea7')
def test_create_snapshot_with_nonexistent_volume_id(self):
# Create a snapshot with nonexistent volume id
- s_name = data_utils.rand_name('snap')
+ s_name = data_utils.rand_name(self.__class__.__name__ + '-snap')
self.assertRaises(lib_exc.NotFound,
self.snapshots_client.create_snapshot,
volume_id=data_utils.rand_uuid(),
@@ -41,7 +41,7 @@
@test.idempotent_id('bb9da53e-d335-4309-9c15-7e76fd5e4d6d')
def test_create_snapshot_without_passing_volume_id(self):
# Create a snapshot without passing volume id
- s_name = data_utils.rand_name('snap')
+ s_name = data_utils.rand_name(self.__class__.__name__ + '-snap')
self.assertRaises(lib_exc.NotFound,
self.snapshots_client.create_snapshot,
volume_id=None, display_name=s_name)
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index 5117e6c..60a35b0 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -42,22 +42,15 @@
super(VolumesV2ListTestJSON, cls).resource_setup()
# Create 3 test volumes
- cls.volume_list = []
- cls.volume_id_list = []
cls.metadata = {'Type': 'work'}
+ # NOTE(zhufl): When using pre-provisioned credentials, the project
+ # may have volumes other than those created below.
+ existing_volumes = cls.client.list_volumes()['volumes']
+ cls.volume_id_list = [vol['id'] for vol in existing_volumes]
for i in range(3):
volume = cls.create_volume(metadata=cls.metadata)
- volume = cls.client.show_volume(volume['id'])['volume']
- cls.volume_list.append(volume)
cls.volume_id_list.append(volume['id'])
- @classmethod
- def resource_cleanup(cls):
- # Delete the created volumes
- for volid in cls.volume_id_list:
- cls.delete_volume(cls.client, volid)
- super(VolumesV2ListTestJSON, cls).resource_cleanup()
-
@test.idempotent_id('2a7064eb-b9c3-429b-b888-33928fc5edd3')
def test_volume_list_details_with_multiple_params(self):
# List volumes detail using combined condition
@@ -174,9 +167,9 @@
# If cannot follow make sure it's because we have finished
else:
- self.assertListEqual([], remaining or [],
- 'No more pages reported, but still '
- 'missing ids %s' % remaining)
+ self.assertEqual([], remaining or [],
+ 'No more pages reported, but still '
+ 'missing ids %s' % remaining)
break
@test.idempotent_id('e9138a2c-f67b-4796-8efa-635c196d01de')
diff --git a/tempest/api_schema/__init__.py b/tempest/api_schema/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/api_schema/__init__.py
+++ /dev/null
diff --git a/tempest/api_schema/request/__init__.py b/tempest/api_schema/request/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/api_schema/request/__init__.py
+++ /dev/null
diff --git a/tempest/api_schema/request/compute/__init__.py b/tempest/api_schema/request/compute/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/api_schema/request/compute/__init__.py
+++ /dev/null
diff --git a/tempest/api_schema/request/compute/flavors.py b/tempest/api_schema/request/compute/flavors.py
deleted file mode 100644
index adaaf27..0000000
--- a/tempest/api_schema/request/compute/flavors.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# (c) 2014 Deutsche Telekom AG
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-common_flavor_details = {
- "name": "get-flavor-details",
- "http-method": "GET",
- "url": "flavors/%s",
- "resources": [
- {"name": "flavor", "expected_result": 404}
- ]
-}
-
-common_flavor_list = {
- "name": "list-flavors-with-detail",
- "http-method": "GET",
- "url": "flavors/detail",
- "json-schema": {
- "type": "object",
- "properties": {
- }
- }
-}
-
-common_admin_flavor_create = {
- "name": "flavor-create",
- "http-method": "POST",
- "admin_client": True,
- "url": "flavors",
- "default_result_code": 400,
- "json-schema": {
- "type": "object",
- "properties": {
- "flavor": {
- "type": "object",
- "properties": {
- "name": {"type": "string",
- "exclude_tests": ["gen_str_min_length"]},
- "ram": {"type": "integer", "minimum": 1},
- "vcpus": {"type": "integer", "minimum": 1},
- "disk": {"type": "integer"},
- "id": {"type": "integer",
- "exclude_tests": ["gen_none", "gen_string"]
- },
- }
- }
- }
- }
-}
diff --git a/tempest/api_schema/request/compute/v2/flavors.py b/tempest/api_schema/request/compute/v2/flavors.py
deleted file mode 100644
index bc459ad..0000000
--- a/tempest/api_schema/request/compute/v2/flavors.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# (c) 2014 Deutsche Telekom AG
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import copy
-
-from tempest.api_schema.request.compute import flavors
-
-flavors_details = copy.deepcopy(flavors.common_flavor_details)
-
-flavor_list = copy.deepcopy(flavors.common_flavor_list)
-
-flavor_create = copy.deepcopy(flavors.common_admin_flavor_create)
-
-flavor_list["json-schema"]["properties"] = {
- "minRam": {
- "type": "integer",
- "results": {
- "gen_none": 400,
- "gen_string": 400
- }
- },
- "minDisk": {
- "type": "integer",
- "results": {
- "gen_none": 400,
- "gen_string": 400
- }
- }
-}
diff --git a/tempest/clients.py b/tempest/clients.py
index fd010f2..be6bc02 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -22,12 +22,8 @@
from tempest import exceptions
from tempest.lib import auth
from tempest.lib import exceptions as lib_exc
-from tempest.lib.services import compute
-from tempest.lib.services import image
-from tempest.lib.services import network
-from tempest import service_clients
+from tempest.lib.services import clients
from tempest.services import baremetal
-from tempest.services import data_processing
from tempest.services import identity
from tempest.services import object_storage
from tempest.services import orchestration
@@ -37,12 +33,12 @@
LOG = logging.getLogger(__name__)
-class Manager(service_clients.ServiceClients):
+class Manager(clients.ServiceClients):
"""Top level manager for OpenStack tempest clients"""
default_params = config.service_client_config()
- # TODO(andreaf) This is only used by data_processing and baremetal clients,
+ # TODO(andreaf) This is only used by baremetal clients,
# and should be removed once they are out of Tempest
default_params_with_timeout_values = {
'build_interval': CONF.compute.build_interval,
@@ -87,12 +83,6 @@
build_interval=CONF.orchestration.build_interval,
build_timeout=CONF.orchestration.build_timeout,
**self.default_params)
- self.data_processing_client = data_processing.DataProcessingClient(
- self.auth_provider,
- CONF.data_processing.catalog_type,
- CONF.identity.region,
- endpoint_type=CONF.data_processing.endpoint_type,
- **self.default_params_with_timeout_values)
self.negative_client = negative_rest_client.NegativeRestClient(
self.auth_provider, service, **self.default_params)
@@ -102,17 +92,16 @@
This uses `config.service_client_config` for all services to collect
most configuration items needed to init the clients.
"""
- # NOTE(andreaf) Configuration items will be passed in future patches
- # into ClientFactory objects, but for now we update all the
- # _set_*_client methods to consume them so we can verify that the
- # configuration collected is correct
+ # NOTE(andreaf) Once all service clients in Tempest are migrated
+ # to tempest.lib, their configuration will be picked up from the
+ # registry, and this method will become redundant.
configuration = {}
- # Setup the parameters for all Tempest services.
+    # Set up the parameters for all Tempest services which are not in lib.
# NOTE(andreaf) Since client.py is an internal module of Tempest,
# it doesn't have to consider plugin configuration.
- for service in service_clients.tempest_modules():
+ for service in clients._tempest_internal_modules():
try:
# NOTE(andreaf) Use the unversioned service name to fetch
# the configuration since configuration is not versioned.
@@ -121,139 +110,90 @@
configuration[service_for_config] = (
config.service_client_config(service_for_config))
except lib_exc.UnknownServiceClient:
- LOG.warn(
+ LOG.warning(
'Could not load configuration for service %s' % service)
return configuration
def _set_network_clients(self):
- params = self.parameters['network']
- self.network_agents_client = network.AgentsClient(
- self.auth_provider, **params)
- self.network_extensions_client = network.ExtensionsClient(
- self.auth_provider, **params)
- self.networks_client = network.NetworksClient(
- self.auth_provider, **params)
- self.subnetpools_client = network.SubnetpoolsClient(
- self.auth_provider, **params)
- self.subnets_client = network.SubnetsClient(
- self.auth_provider, **params)
- self.ports_client = network.PortsClient(
- self.auth_provider, **params)
- self.network_quotas_client = network.QuotasClient(
- self.auth_provider, **params)
- self.floating_ips_client = network.FloatingIPsClient(
- self.auth_provider, **params)
- self.metering_labels_client = network.MeteringLabelsClient(
- self.auth_provider, **params)
- self.metering_label_rules_client = network.MeteringLabelRulesClient(
- self.auth_provider, **params)
- self.routers_client = network.RoutersClient(
- self.auth_provider, **params)
- self.security_group_rules_client = network.SecurityGroupRulesClient(
- self.auth_provider, **params)
- self.security_groups_client = network.SecurityGroupsClient(
- self.auth_provider, **params)
- self.network_versions_client = network.NetworkVersionsClient(
- self.auth_provider, **params)
+ self.network_agents_client = self.network.AgentsClient()
+ self.network_extensions_client = self.network.ExtensionsClient()
+ self.networks_client = self.network.NetworksClient()
+ self.subnetpools_client = self.network.SubnetpoolsClient()
+ self.subnets_client = self.network.SubnetsClient()
+ self.ports_client = self.network.PortsClient()
+ self.network_quotas_client = self.network.QuotasClient()
+ self.floating_ips_client = self.network.FloatingIPsClient()
+ self.metering_labels_client = self.network.MeteringLabelsClient()
+ self.metering_label_rules_client = (
+ self.network.MeteringLabelRulesClient())
+ self.routers_client = self.network.RoutersClient()
+ self.security_group_rules_client = (
+ self.network.SecurityGroupRulesClient())
+ self.security_groups_client = self.network.SecurityGroupsClient()
+ self.network_versions_client = self.network.NetworkVersionsClient()
def _set_image_clients(self):
if CONF.service_available.glance:
- params = self.parameters['image']
- self.image_client = image.v1.ImagesClient(
- self.auth_provider, **params)
- self.image_member_client = image.v1.ImageMembersClient(
- self.auth_provider, **params)
-
- self.image_client_v2 = image.v2.ImagesClient(
- self.auth_provider, **params)
- self.image_member_client_v2 = image.v2.ImageMembersClient(
- self.auth_provider, **params)
- self.namespaces_client = image.v2.NamespacesClient(
- self.auth_provider, **params)
- self.resource_types_client = image.v2.ResourceTypesClient(
- self.auth_provider, **params)
- self.schemas_client = image.v2.SchemasClient(
- self.auth_provider, **params)
+ self.image_client = self.image_v1.ImagesClient()
+ self.image_member_client = self.image_v1.ImageMembersClient()
+ self.image_client_v2 = self.image_v2.ImagesClient()
+ self.image_member_client_v2 = self.image_v2.ImageMembersClient()
+ self.namespaces_client = self.image_v2.NamespacesClient()
+ self.resource_types_client = self.image_v2.ResourceTypesClient()
+ self.schemas_client = self.image_v2.SchemasClient()
def _set_compute_clients(self):
- params = self.parameters['compute']
-
- self.agents_client = compute.AgentsClient(self.auth_provider, **params)
- self.compute_networks_client = compute.NetworksClient(
- self.auth_provider, **params)
- self.migrations_client = compute.MigrationsClient(self.auth_provider,
- **params)
+ self.agents_client = self.compute.AgentsClient()
+ self.compute_networks_client = self.compute.NetworksClient()
+ self.migrations_client = self.compute.MigrationsClient()
self.security_group_default_rules_client = (
- compute.SecurityGroupDefaultRulesClient(self.auth_provider,
- **params))
- self.certificates_client = compute.CertificatesClient(
- self.auth_provider, **params)
- self.servers_client = compute.ServersClient(
- self.auth_provider,
- enable_instance_password=CONF.compute_feature_enabled
- .enable_instance_password,
- **params)
- self.server_groups_client = compute.ServerGroupsClient(
- self.auth_provider, **params)
- self.limits_client = compute.LimitsClient(self.auth_provider, **params)
- self.compute_images_client = compute.ImagesClient(self.auth_provider,
- **params)
- self.keypairs_client = compute.KeyPairsClient(self.auth_provider,
- **params)
- self.quotas_client = compute.QuotasClient(self.auth_provider, **params)
- self.quota_classes_client = compute.QuotaClassesClient(
- self.auth_provider, **params)
- self.flavors_client = compute.FlavorsClient(self.auth_provider,
- **params)
- self.extensions_client = compute.ExtensionsClient(self.auth_provider,
- **params)
- self.floating_ip_pools_client = compute.FloatingIPPoolsClient(
- self.auth_provider, **params)
- self.floating_ips_bulk_client = compute.FloatingIPsBulkClient(
- self.auth_provider, **params)
- self.compute_floating_ips_client = compute.FloatingIPsClient(
- self.auth_provider, **params)
+ self.compute.SecurityGroupDefaultRulesClient())
+ self.certificates_client = self.compute.CertificatesClient()
+ eip = CONF.compute_feature_enabled.enable_instance_password
+ self.servers_client = self.compute.ServersClient(
+ enable_instance_password=eip)
+ self.server_groups_client = self.compute.ServerGroupsClient()
+ self.limits_client = self.compute.LimitsClient()
+ self.compute_images_client = self.compute.ImagesClient()
+ self.keypairs_client = self.compute.KeyPairsClient()
+ self.quotas_client = self.compute.QuotasClient()
+ self.quota_classes_client = self.compute.QuotaClassesClient()
+ self.flavors_client = self.compute.FlavorsClient()
+ self.extensions_client = self.compute.ExtensionsClient()
+ self.floating_ip_pools_client = self.compute.FloatingIPPoolsClient()
+ self.floating_ips_bulk_client = self.compute.FloatingIPsBulkClient()
+ self.compute_floating_ips_client = self.compute.FloatingIPsClient()
self.compute_security_group_rules_client = (
- compute.SecurityGroupRulesClient(self.auth_provider, **params))
- self.compute_security_groups_client = compute.SecurityGroupsClient(
- self.auth_provider, **params)
- self.interfaces_client = compute.InterfacesClient(self.auth_provider,
- **params)
- self.fixed_ips_client = compute.FixedIPsClient(self.auth_provider,
- **params)
- self.availability_zone_client = compute.AvailabilityZoneClient(
- self.auth_provider, **params)
- self.aggregates_client = compute.AggregatesClient(self.auth_provider,
- **params)
- self.services_client = compute.ServicesClient(self.auth_provider,
- **params)
- self.tenant_usages_client = compute.TenantUsagesClient(
- self.auth_provider, **params)
- self.hosts_client = compute.HostsClient(self.auth_provider, **params)
- self.hypervisor_client = compute.HypervisorClient(self.auth_provider,
- **params)
+ self.compute.SecurityGroupRulesClient())
+ self.compute_security_groups_client = (
+ self.compute.SecurityGroupsClient())
+ self.interfaces_client = self.compute.InterfacesClient()
+ self.fixed_ips_client = self.compute.FixedIPsClient()
+ self.availability_zone_client = self.compute.AvailabilityZoneClient()
+ self.aggregates_client = self.compute.AggregatesClient()
+ self.services_client = self.compute.ServicesClient()
+ self.tenant_usages_client = self.compute.TenantUsagesClient()
+ self.hosts_client = self.compute.HostsClient()
+ self.hypervisor_client = self.compute.HypervisorClient()
self.instance_usages_audit_log_client = (
- compute.InstanceUsagesAuditLogClient(self.auth_provider, **params))
- self.tenant_networks_client = compute.TenantNetworksClient(
- self.auth_provider, **params)
- self.baremetal_nodes_client = compute.BaremetalNodesClient(
- self.auth_provider, **params)
+ self.compute.InstanceUsagesAuditLogClient())
+ self.tenant_networks_client = self.compute.TenantNetworksClient()
+ self.baremetal_nodes_client = self.compute.BaremetalNodesClient()
# NOTE: The following client needs special timeout values because
# the API is a proxy for the other component.
- params_volume = copy.deepcopy(params)
- # Optional parameters
+ params_volume = {}
for _key in ('build_interval', 'build_timeout'):
_value = self.parameters['volume'].get(_key)
if _value:
params_volume[_key] = _value
- self.volumes_extensions_client = compute.VolumesClient(
- self.auth_provider, **params_volume)
- self.compute_versions_client = compute.VersionsClient(
- self.auth_provider, **params_volume)
- self.snapshots_extensions_client = compute.SnapshotsClient(
- self.auth_provider, **params_volume)
+ self.volumes_extensions_client = self.compute.VolumesClient(
+ **params_volume)
+ self.compute_versions_client = self.compute.VersionsClient(
+ **params_volume)
+ self.snapshots_extensions_client = self.compute.SnapshotsClient(
+ **params_volume)
def _set_identity_clients(self):
params = self.parameters['identity']
@@ -301,6 +241,10 @@
self.auth_provider, **params_v3)
self.roles_v3_client = identity.v3.RolesClient(self.auth_provider,
**params_v3)
+ self.inherited_roles_client = identity.v3.InheritedRolesClient(
+ self.auth_provider, **params_v3)
+ self.role_assignments_client = identity.v3.RoleAssignmentsClient(
+ self.auth_provider, **params_v3)
self.identity_services_v3_client = identity.v3.ServicesClient(
self.auth_provider, **params_v3)
self.policies_client = identity.v3.PoliciesClient(self.auth_provider,
@@ -323,14 +267,14 @@
CONF.identity.uri, **self.default_params)
else:
msg = 'Identity v2 API enabled, but no identity.uri set'
- raise exceptions.InvalidConfiguration(msg)
+ raise lib_exc.InvalidConfiguration(msg)
if CONF.identity_feature_enabled.api_v3:
if CONF.identity.uri_v3:
self.token_v3_client = identity.v3.V3TokenClient(
CONF.identity.uri_v3, **self.default_params)
else:
msg = 'Identity v3 API enabled, but no identity.uri_v3 set'
- raise exceptions.InvalidConfiguration(msg)
+ raise lib_exc.InvalidConfiguration(msg)
def _set_volume_clients(self):
# Mandatory parameters (always defined)
@@ -348,16 +292,18 @@
**params)
self.backups_v2_client = volume.v2.BackupsClient(self.auth_provider,
**params)
+ self.encryption_types_client = volume.v1.EncryptionTypesClient(
+ self.auth_provider, **params)
+ self.encryption_types_v2_client = volume.v2.EncryptionTypesClient(
+ self.auth_provider, **params)
self.snapshots_client = volume.v1.SnapshotsClient(self.auth_provider,
**params)
self.snapshots_v2_client = volume.v2.SnapshotsClient(
self.auth_provider, **params)
- self.volumes_client = volume.v1.VolumesClient(
- self.auth_provider, default_volume_size=CONF.volume.volume_size,
- **params)
- self.volumes_v2_client = volume.v2.VolumesClient(
- self.auth_provider, default_volume_size=CONF.volume.volume_size,
- **params)
+ self.volumes_client = volume.v1.VolumesClient(self.auth_provider,
+ **params)
+ self.volumes_v2_client = volume.v2.VolumesClient(self.auth_provider,
+ **params)
self.volume_messages_client = volume.v3.MessagesClient(
self.auth_provider, **params)
self.volume_types_client = volume.v1.TypesClient(self.auth_provider,
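The clients.py rework above replaces hand-built clients (an auth provider plus a
per-service ``params`` dict passed to every constructor) with per-service factory
attributes such as ``self.compute`` and ``self.network``. The following toy sketch,
with made-up names and no Tempest imports, only illustrates that factory pattern::

    class ToyServersClient(object):
        def __init__(self, auth, region, enable_instance_password=True):
            self.auth = auth
            self.region = region
            self.enable_instance_password = enable_instance_password


    class ToyServiceFactory(object):
        """Holds service-wide parameters once and hands out configured clients."""

        def __init__(self, auth, **params):
            self._auth = auth
            self._params = params

        def ServersClient(self, **kwargs):
            # Per-client overrides (e.g. enable_instance_password) are merged
            # with the parameters captured when the factory was created.
            merged = dict(self._params, **kwargs)
            return ToyServersClient(self._auth, **merged)


    compute = ToyServiceFactory('auth-provider', region='RegionOne')
    servers_client = compute.ServersClient(enable_instance_password=False)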
diff --git a/tempest/cmd/account_generator.py b/tempest/cmd/account_generator.py
index f9d7a9b..3d38e25 100755
--- a/tempest/cmd/account_generator.py
+++ b/tempest/cmd/account_generator.py
@@ -22,7 +22,7 @@
credentials for created users, so each user will be in separate tenant and
have the username, tenant_name, password and roles.
-**Usage:** ``tempest-account-generator [-h] [OPTIONS] accounts_file.yaml``.
+**Usage:** ``tempest account-generator [-h] [OPTIONS] accounts_file.yaml``.
Positional Arguments
--------------------
@@ -90,7 +90,7 @@
**-i VERSION**, **--identity-version VERSION** (Optional) Provisions accounts
using the specified version of the identity API. (default: '3').
-To see help on specific argument, please do: ``tempest-account-generator
+To see help on a specific argument, please do: ``tempest account-generator
[OPTIONS] <accounts_file.yaml> -h``.
"""
import argparse
@@ -144,6 +144,13 @@
identity_version=identity_version,
name=opts.tag,
network_resources=network_resources,
+ neutron_available=CONF.service_available.neutron,
+ create_networks=CONF.auth.create_isolated_networks,
+ identity_admin_role=CONF.identity.admin_role,
+ identity_admin_domain_scope=CONF.identity.admin_domain_scope,
+ project_network_cidr=CONF.network.project_network_cidr,
+ project_network_mask_bits=CONF.network.project_network_mask_bits,
+ public_network_id=CONF.network.public_network_id,
admin_creds=admin_creds,
**credentials_factory.get_dynamic_provider_params())
@@ -255,9 +262,9 @@
def get_options():
- usage_string = ('tempest-account-generator [-h] <ARG> ...\n\n'
+ usage_string = ('tempest account-generator [-h] <ARG> ...\n\n'
'To see help on specific argument, do:\n'
- 'tempest-account-generator <ARG> -h')
+ 'tempest account-generator <ARG> -h')
parser = argparse.ArgumentParser(
description=DESCRIPTION,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index 426571f..32b0ebb 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -18,6 +18,7 @@
from tempest.common import credentials_factory as credentials
from tempest.common import identity
+from tempest.common.utils import net_info
from tempest import config
from tempest import test
@@ -349,7 +350,8 @@
LOG.exception("Delete Volume Quotas exception.")
def dry_run(self):
- quotas = self.client.show_quota_usage(self.tenant_id)['quota_set']
+ quotas = self.client.show_quota_set(
+ self.tenant_id, params={'usage': True})['quota_set']
self.data['volume_quotas'] = quotas
@@ -462,7 +464,7 @@
rid = router['id']
ports = [port for port
in ports_client.list_ports(device_id=rid)['ports']
- if port["device_owner"] == "network:router_interface"]
+ if net_info.is_router_interface_port(port)]
for port in ports:
client.remove_router_interface(rid, port_id=port['id'])
client.delete_router(rid)
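Replacing the literal ``device_owner == "network:router_interface"`` comparison with
``net_info.is_router_interface_port()`` also catches router ports created with other
owners (for example DVR or HA router interfaces). A hedged sketch of such a predicate
(the real helper in ``tempest.common.utils.net_info`` may differ) is::

    import re

    # Matches e.g. "network:router_interface",
    # "network:router_interface_distributed" and
    # "network:ha_router_replicated_interface".
    _ROUTER_INTERFACE_OWNER = re.compile(r'^network:.*router.*interface')


    def is_router_interface_port(port):
        """Return True if the port's device_owner marks a router interface."""
        return bool(_ROUTER_INTERFACE_OWNER.match(port.get('device_owner', '')))


    assert is_router_interface_port({'device_owner': 'network:router_interface'})
    assert not is_router_interface_port({'device_owner': 'compute:nova'})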
diff --git a/tempest/cmd/init.py b/tempest/cmd/init.py
index e3788ab..99185d2 100644
--- a/tempest/cmd/init.py
+++ b/tempest/cmd/init.py
@@ -14,15 +14,15 @@
import os
import shutil
-import subprocess
import sys
from cliff import command
from oslo_config import generator
from oslo_log import log as logging
from six import moves
+from testrepository import commands
-from tempest.cmd.workspace import WorkspaceManager
+from tempest.cmd import workspace
LOG = logging.getLogger(__name__)
@@ -69,7 +69,10 @@
def get_parser(self, prog_name):
parser = super(TempestInit, self).get_parser(prog_name)
- parser.add_argument('dir', nargs='?', default=os.getcwd())
+ parser.add_argument('dir', nargs='?', default=os.getcwd(),
+ help="The path to the workspace directory. If you "
+ "omit this argument, the workspace directory is "
+ "your current directory")
parser.add_argument('--config-dir', '-c', default=None)
parser.add_argument('--show-global-config-dir', '-s',
action='store_true', dest='show_global_dir',
@@ -78,7 +81,7 @@
parser.add_argument('--name', help="The workspace name", default=None)
parser.add_argument('--workspace-path', default=None,
help="The path to the workspace file, the default "
- "is ~/.tempest/workspace")
+ "is ~/.tempest/workspace.yaml")
return parser
def generate_testr_conf(self, local_path):
@@ -89,18 +92,28 @@
with open(testr_conf_path, 'w+') as testr_conf_file:
testr_conf_file.write(testr_conf)
- def update_local_conf(self, conf_path, lock_dir, log_dir):
- config_parse = moves.configparser.SafeConfigParser()
+ def get_configparser(self, conf_path):
+ config_parse = moves.configparser.ConfigParser()
config_parse.optionxform = str
- with open(conf_path, 'a+') as conf_file:
- # Set local lock_dir in tempest conf
- if not config_parse.has_section('oslo_concurrency'):
- config_parse.add_section('oslo_concurrency')
- config_parse.set('oslo_concurrency', 'lock_path', lock_dir)
- # Set local log_dir in tempest conf
- config_parse.set('DEFAULT', 'log_dir', log_dir)
- # Set default log filename to tempest.log
- config_parse.set('DEFAULT', 'log_file', 'tempest.log')
+ # get any existing values if a config file already exists
+ if os.path.isfile(conf_path):
+ # use read() for Python 2 and 3 compatibility
+ config_parse.read(conf_path)
+ return config_parse
+
+ def update_local_conf(self, conf_path, lock_dir, log_dir):
+ config_parse = self.get_configparser(conf_path)
+ # Set local lock_dir in tempest conf
+ if not config_parse.has_section('oslo_concurrency'):
+ config_parse.add_section('oslo_concurrency')
+ config_parse.set('oslo_concurrency', 'lock_path', lock_dir)
+ # Set local log_dir in tempest conf
+ config_parse.set('DEFAULT', 'log_dir', log_dir)
+ # Set default log filename to tempest.log
+ config_parse.set('DEFAULT', 'log_file', 'tempest.log')
+
+ # write out a new file with the updated configurations
+ with open(conf_path, 'w+') as conf_file:
config_parse.write(conf_file)
def copy_config(self, etc_dir, config_dir):
@@ -154,15 +167,17 @@
self.generate_testr_conf(local_dir)
# setup local testr working dir
if not os.path.isdir(testr_dir):
- subprocess.call(['testr', 'init'], cwd=local_dir)
+ commands.run_argv(['testr', 'init', '-d', local_dir], sys.stdin,
+ sys.stdout, sys.stderr)
def take_action(self, parsed_args):
- workspace_manager = WorkspaceManager(parsed_args.workspace_path)
+ workspace_manager = workspace.WorkspaceManager(
+ parsed_args.workspace_path)
name = parsed_args.name or parsed_args.dir.split(os.path.sep)[-1]
- workspace_manager.register_new_workspace(
- name, parsed_args.dir, init=True)
config_dir = parsed_args.config_dir or get_tempest_default_config_dir()
if parsed_args.show_global_dir:
print("Global config dir is located at: %s" % config_dir)
sys.exit(0)
self.create_working_dir(parsed_args.dir, config_dir)
+ workspace_manager.register_new_workspace(
+ name, parsed_args.dir, init=True)
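The reworked ``update_local_conf()`` now loads any pre-existing ``tempest.conf``, sets
``lock_path``, ``log_dir`` and ``log_file`` on top of it, and rewrites the whole file.
A standalone sketch of that read-modify-write flow using only the standard library (the
path below is illustrative; the real code goes through ``six.moves.configparser`` for
Python 2/3 compatibility)::

    import os

    try:
        import configparser                      # Python 3
    except ImportError:
        import ConfigParser as configparser      # Python 2

    def update_local_conf(conf_path, lock_dir, log_dir):
        parser = configparser.ConfigParser()
        parser.optionxform = str
        if os.path.isfile(conf_path):
            parser.read(conf_path)               # keep existing settings
        if not parser.has_section('oslo_concurrency'):
            parser.add_section('oslo_concurrency')
        parser.set('oslo_concurrency', 'lock_path', lock_dir)
        parser.set('DEFAULT', 'log_dir', log_dir)
        parser.set('DEFAULT', 'log_file', 'tempest.log')
        with open(conf_path, 'w+') as conf_file:
            parser.write(conf_file)              # rewrite with merged values

    update_local_conf('tempest.conf', '/tmp/tempest_lock', '/tmp/tempest_logs')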
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
deleted file mode 100755
index a9e5167..0000000
--- a/tempest/cmd/javelin.py
+++ /dev/null
@@ -1,1131 +0,0 @@
-#!/usr/bin/env python
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Javelin is a tool for creating, verifying, and deleting a small set of
-resources in a declarative way.
-
-Javelin is meant to be used as a way to validate quickly that resources can
-survive an upgrade process.
-
-Authentication
---------------
-
-Javelin will be creating (and removing) users and tenants so it needs the admin
-credentials of your cloud to operate properly. The corresponding info can be
-given the usual way, either through CLI options or environment variables.
-
-You're probably familiar with these, but just in case::
-
- +----------+------------------+----------------------+
- | Param | CLI | Environment Variable |
- +----------+------------------+----------------------+
- | Username | --os-username | OS_USERNAME |
- | Password | --os-password | OS_PASSWORD |
- | Tenant | --os-tenant-name | OS_TENANT_NAME |
- +----------+------------------+----------------------+
-
-
-Runtime Arguments
------------------
-
-**-m/--mode**: (Required) Has to be one of 'check', 'create' or 'destroy'. It
-indicates which actions javelin is going to perform.
-
-**-r/--resources**: (Required) The path to a YAML file describing the resources
-used by Javelin.
-
-**-d/--devstack-base**: (Required) The path to the devstack repo used to
-retrieve artefacts (like images) that will be referenced in the resource files.
-
-**-c/--config-file**: (Optional) The path to a valid Tempest config file
-describing your cloud. Javelin may use this to determine if certain services
-are enabled and modify its behavior accordingly.
-
-
-Resource file
--------------
-
-The resource file is a valid YAML file describing the resources that will be
-created, checked and destroyed by javelin. Here's a canonical example of a
-resource file::
-
- tenants:
- - javelin
- - discuss
-
- users:
- - name: javelin
- pass: gungnir
- tenant: javelin
- - name: javelin2
- pass: gungnir2
- tenant: discuss
-
- # resources that we want to create
- images:
- - name: javelin_cirros
- owner: javelin
- file: cirros-0.3.2-x86_64-blank.img
- disk_format: ami
- container_format: ami
- aki: cirros-0.3.2-x86_64-vmlinuz
- ari: cirros-0.3.2-x86_64-initrd
-
- servers:
- - name: peltast
- owner: javelin
- flavor: m1.small
- image: javelin_cirros
- floating_ip_pool: public
- - name: hoplite
- owner: javelin
- flavor: m1.medium
- image: javelin_cirros
-
-
-An important piece of the resource definition is the *owner* field, which is
-the user (that we've created) that is the owner of that resource. All
-operations on that resource will happen as that regular user to ensure that
-admin level access does not mask issues.
-
-The check phase will act like a unit test, using well known assert methods to
-verify that the correct resources exist.
-
-"""
-
-import argparse
-import collections
-import datetime
-import os
-import sys
-import unittest
-
-import netaddr
-from oslo_log import log as logging
-import six
-import yaml
-
-from tempest.common import identity
-from tempest.common import waiters
-from tempest import config
-from tempest.lib import auth
-from tempest.lib import exceptions as lib_exc
-from tempest.lib.services.compute import flavors_client
-from tempest.lib.services.compute import floating_ips_client
-from tempest.lib.services.compute import security_group_rules_client
-from tempest.lib.services.compute import security_groups_client
-from tempest.lib.services.compute import servers_client
-from tempest.lib.services.identity.v2 import roles_client
-from tempest.lib.services.identity.v2 import tenants_client
-from tempest.lib.services.identity.v2 import users_client
-from tempest.lib.services.image.v2 import images_client
-from tempest.lib.services.network import networks_client
-from tempest.lib.services.network import ports_client
-from tempest.lib.services.network import routers_client
-from tempest.lib.services.network import subnets_client
-from tempest.services.identity.v2.json import identity_client
-from tempest.services.object_storage import container_client
-from tempest.services.object_storage import object_client
-from tempest.services.volume.v1.json import volumes_client
-
-CONF = config.CONF
-OPTS = {}
-USERS = {}
-RES = collections.defaultdict(list)
-
-LOG = None
-
-JAVELIN_START = datetime.datetime.utcnow()
-
-
-class OSClient(object):
- _creds = None
- identity = None
- servers = None
-
- def __init__(self, user, pw, tenant):
- default_params = {
- 'disable_ssl_certificate_validation':
- CONF.identity.disable_ssl_certificate_validation,
- 'ca_certs': CONF.identity.ca_certificates_file,
- 'trace_requests': CONF.debug.trace_requests
- }
- default_params_with_timeout_values = {
- 'build_interval': CONF.compute.build_interval,
- 'build_timeout': CONF.compute.build_timeout
- }
- default_params_with_timeout_values.update(default_params)
-
- compute_params = {
- 'service': CONF.compute.catalog_type,
- 'region': CONF.compute.region or CONF.identity.region,
- 'endpoint_type': CONF.compute.endpoint_type,
- 'build_interval': CONF.compute.build_interval,
- 'build_timeout': CONF.compute.build_timeout
- }
- compute_params.update(default_params)
-
- object_storage_params = {
- 'service': CONF.object_storage.catalog_type,
- 'region': CONF.object_storage.region or CONF.identity.region,
- 'endpoint_type': CONF.object_storage.endpoint_type
- }
- object_storage_params.update(default_params)
-
- _creds = auth.KeystoneV2Credentials(
- username=user,
- password=pw,
- tenant_name=tenant)
- auth_provider_params = {
- 'disable_ssl_certificate_validation':
- CONF.identity.disable_ssl_certificate_validation,
- 'ca_certs': CONF.identity.ca_certificates_file,
- 'trace_requests': CONF.debug.trace_requests
- }
- _auth = auth.KeystoneV2AuthProvider(
- _creds, CONF.identity.uri, **auth_provider_params)
- self.identity = identity_client.IdentityClient(
- _auth,
- CONF.identity.catalog_type,
- CONF.identity.region,
- endpoint_type='adminURL',
- **default_params_with_timeout_values)
- self.tenants = tenants_client.TenantsClient(
- _auth,
- CONF.identity.catalog_type,
- CONF.identity.region,
- endpoint_type='adminURL',
- **default_params_with_timeout_values)
- self.roles = roles_client.RolesClient(
- _auth,
- CONF.identity.catalog_type,
- CONF.identity.region,
- endpoint_type='adminURL',
- **default_params_with_timeout_values)
- self.users = users_client.UsersClient(
- _auth,
- CONF.identity.catalog_type,
- CONF.identity.region,
- endpoint_type='adminURL',
- **default_params_with_timeout_values)
- self.servers = servers_client.ServersClient(_auth,
- **compute_params)
- self.flavors = flavors_client.FlavorsClient(_auth,
- **compute_params)
- self.floating_ips = floating_ips_client.FloatingIPsClient(
- _auth, **compute_params)
- self.secgroups = security_groups_client.SecurityGroupsClient(
- _auth, **compute_params)
- self.secrules = security_group_rules_client.SecurityGroupRulesClient(
- _auth, **compute_params)
- self.objects = object_client.ObjectClient(_auth,
- **object_storage_params)
- self.containers = container_client.ContainerClient(
- _auth, **object_storage_params)
- self.images = images_client.ImagesClient(
- _auth,
- CONF.image.catalog_type,
- CONF.image.region or CONF.identity.region,
- endpoint_type=CONF.image.endpoint_type,
- build_interval=CONF.image.build_interval,
- build_timeout=CONF.image.build_timeout,
- **default_params)
- self.volumes = volumes_client.VolumesClient(
- _auth,
- CONF.volume.catalog_type,
- CONF.volume.region or CONF.identity.region,
- endpoint_type=CONF.volume.endpoint_type,
- build_interval=CONF.volume.build_interval,
- build_timeout=CONF.volume.build_timeout,
- **default_params)
- self.networks = networks_client.NetworksClient(
- _auth,
- CONF.network.catalog_type,
- CONF.network.region or CONF.identity.region,
- endpoint_type=CONF.network.endpoint_type,
- build_interval=CONF.network.build_interval,
- build_timeout=CONF.network.build_timeout,
- **default_params)
- self.ports = ports_client.PortsClient(
- _auth,
- CONF.network.catalog_type,
- CONF.network.region or CONF.identity.region,
- endpoint_type=CONF.network.endpoint_type,
- build_interval=CONF.network.build_interval,
- build_timeout=CONF.network.build_timeout,
- **default_params)
- self.routers = routers_client.RoutersClient(
- _auth,
- CONF.network.catalog_type,
- CONF.network.region or CONF.identity.region,
- endpoint_type=CONF.network.endpoint_type,
- build_interval=CONF.network.build_interval,
- build_timeout=CONF.network.build_timeout,
- **default_params)
- self.subnets = subnets_client.SubnetsClient(
- _auth,
- CONF.network.catalog_type,
- CONF.network.region or CONF.identity.region,
- endpoint_type=CONF.network.endpoint_type,
- build_interval=CONF.network.build_interval,
- build_timeout=CONF.network.build_timeout,
- **default_params)
-
-
-def load_resources(fname):
- """Load the expected resources from a yaml file."""
- return yaml.load(open(fname, 'r'))
-
-
-def keystone_admin():
- return OSClient(OPTS.os_username, OPTS.os_password, OPTS.os_tenant_name)
-
-
-def client_for_user(name):
- LOG.debug("Entering client_for_user")
- if name in USERS:
- user = USERS[name]
- LOG.debug("Created client for user %s" % user)
- return OSClient(user['name'], user['pass'], user['tenant'])
- else:
- LOG.error("%s not found in USERS: %s" % (name, USERS))
-
-
-###################
-#
-# TENANTS
-#
-###################
-
-
-def create_tenants(tenants):
- """Create tenants from resource definition.
-
- Don't create the tenants if they already exist.
- """
- admin = keystone_admin()
- body = admin.tenants.list_tenants()['tenants']
- existing = [x['name'] for x in body]
- for tenant in tenants:
- if tenant not in existing:
- admin.tenants.create_tenant(name=tenant)['tenant']
- else:
- LOG.warning("Tenant '%s' already exists in this environment"
- % tenant)
-
-
-def destroy_tenants(tenants):
- admin = keystone_admin()
- for tenant in tenants:
- tenant_id = identity.get_tenant_by_name(admin.tenant, tenant)['id']
- admin.tenants.delete_tenant(tenant_id)
-
-##############
-#
-# USERS
-#
-##############
-
-
-def _users_for_tenant(users, tenant):
- u_for_t = []
- for user in users:
- for n in user:
- if user[n]['tenant'] == tenant:
- u_for_t.append(user[n])
- return u_for_t
-
-
-def _tenants_from_users(users):
- tenants = set()
- for user in users:
- for n in user:
- tenants.add(user[n]['tenant'])
- return tenants
-
-
-def _assign_swift_role(user, swift_role):
- admin = keystone_admin()
- roles = admin.roles.list_roles()
- role = next(r for r in roles if r['name'] == swift_role)
- LOG.debug(USERS[user])
- try:
- admin.roles.create_user_role_on_project(
- USERS[user]['tenant_id'],
- USERS[user]['id'],
- role['id'])
- except lib_exc.Conflict:
- # don't care if it's already assigned
- pass
-
-
-def create_users(users):
- """Create tenants from resource definition.
-
- Don't create the tenants if they already exist.
- """
- global USERS
- LOG.info("Creating users")
- admin = keystone_admin()
- for u in users:
- try:
- tenant = identity.get_tenant_by_name(admin.tenants, u['tenant'])
- except lib_exc.NotFound:
- LOG.error("Tenant: %s - not found" % u['tenant'])
- continue
- try:
- identity.get_user_by_username(admin.tenants,
- tenant['id'], u['name'])
- LOG.warning("User '%s' already exists in this environment"
- % u['name'])
- except lib_exc.NotFound:
- admin.users.create_user(
- name=u['name'], password=u['pass'],
- tenantId=tenant['id'],
- email="%s@%s" % (u['name'], tenant['id']),
- enabled=True)
-
-
-def destroy_users(users):
- admin = keystone_admin()
- for user in users:
- tenant_id = identity.get_tenant_by_name(admin.tenants,
- user['tenant'])['id']
- user_id = identity.get_user_by_username(admin.tenants,
- tenant_id, user['name'])['id']
- admin.users.delete_user(user_id)
-
-
-def collect_users(users):
- global USERS
- LOG.info("Collecting users")
- admin = keystone_admin()
- for u in users:
- tenant = identity.get_tenant_by_name(admin.tenants, u['tenant'])
- u['tenant_id'] = tenant['id']
- USERS[u['name']] = u
- body = identity.get_user_by_username(admin.tenants,
- tenant['id'], u['name'])
- USERS[u['name']]['id'] = body['id']
-
-
-class JavelinCheck(unittest.TestCase):
- def __init__(self, users, resources):
- super(JavelinCheck, self).__init__()
- self.users = users
- self.res = resources
-
- def runTest(self, *args):
- pass
-
- def _ping_ip(self, ip_addr, count, namespace=None):
- if namespace is None:
- ping_cmd = "ping -c1 " + ip_addr
- else:
- ping_cmd = "sudo ip netns exec %s ping -c1 %s" % (namespace,
- ip_addr)
- for current in range(count):
- return_code = os.system(ping_cmd)
- if return_code is 0:
- break
- self.assertNotEqual(current, count - 1,
- "Server is not pingable at %s" % ip_addr)
-
- def check(self):
- self.check_users()
- self.check_objects()
- self.check_servers()
- self.check_volumes()
- self.check_secgroups()
-
- # validate neutron is enabled and ironic disabled:
- # Tenant network isolation is not supported when using ironic.
- # "admin" has set up a neutron flat network environment within a shared
- # fixed network for all tenants to use.
- # In this case, network/subnet/router creation can be skipped and the
- # server booted the same as nova network.
- if (CONF.service_available.neutron and
- not CONF.baremetal.driver_enabled):
- self.check_networking()
-
- def check_users(self):
- """Check that the users we expect to exist, do.
-
- We don't use the resource list for this because we need to validate
- that things like tenantId didn't drift across versions.
- """
- LOG.info("checking users")
- for name, user in six.iteritems(self.users):
- client = keystone_admin()
- found = client.users.show_user(user['id'])['user']
- self.assertEqual(found['name'], user['name'])
- self.assertEqual(found['tenantId'], user['tenant_id'])
-
- # also ensure we can auth with that user, and do something
- # on the cloud. We don't care about the results except that it
- # remains authorized.
- client = client_for_user(user['name'])
- client.servers.list_servers()
-
- def check_objects(self):
- """Check that the objects created are still there."""
- if not self.res.get('objects'):
- return
- LOG.info("checking objects")
- for obj in self.res['objects']:
- client = client_for_user(obj['owner'])
- r, contents = client.objects.get_object(
- obj['container'], obj['name'])
- source = _file_contents(obj['file'])
- self.assertEqual(contents, source)
-
- def check_servers(self):
- """Check that the servers are still up and running."""
- if not self.res.get('servers'):
- return
- LOG.info("checking servers")
- for server in self.res['servers']:
- client = client_for_user(server['owner'])
- found = _get_server_by_name(client, server['name'])
- self.assertIsNotNone(
- found,
- "Couldn't find expected server %s" % server['name'])
-
- found = client.servers.show_server(found['id'])['server']
- # validate neutron is enabled and ironic disabled:
- if (CONF.service_available.neutron and
- not CONF.baremetal.driver_enabled):
- _floating_is_alive = False
- for network_name, body in found['addresses'].items():
- for addr in body:
- ip = addr['addr']
- # Use floating IP, fixed IP or other type to
- # reach the server.
- # This is useful in multi-node environment.
- if CONF.validation.connect_method == 'floating':
- if addr.get('OS-EXT-IPS:type',
- 'floating') == 'floating':
- self._ping_ip(ip, 60)
- _floating_is_alive = True
- elif CONF.validation.connect_method == 'fixed':
- if addr.get('OS-EXT-IPS:type',
- 'fixed') == 'fixed':
- namespace = _get_router_namespace(client,
- network_name)
- self._ping_ip(ip, 60, namespace)
- else:
- self._ping_ip(ip, 60)
- # If CONF.validation.connect_method is floating, validate
- # that the floating IP is attached to the server and the
- # the server is pingable.
- if CONF.validation.connect_method == 'floating':
- self.assertTrue(_floating_is_alive,
- "Server %s has no floating IP." %
- server['name'])
- else:
- addr = found['addresses']['private'][0]['addr']
- self._ping_ip(addr, 60)
-
- def check_secgroups(self):
- """Check that the security groups still exist."""
- LOG.info("Checking security groups")
- for secgroup in self.res['secgroups']:
- client = client_for_user(secgroup['owner'])
- found = _get_resource_by_name(client.secgroups, 'security_groups',
- secgroup['name'])
- self.assertIsNotNone(
- found,
- "Couldn't find expected secgroup %s" % secgroup['name'])
-
- def check_volumes(self):
- """Check that the volumes are still there and attached."""
- if not self.res.get('volumes'):
- return
- LOG.info("checking volumes")
- for volume in self.res['volumes']:
- client = client_for_user(volume['owner'])
- vol_body = _get_volume_by_name(client, volume['name'])
- self.assertIsNotNone(
- vol_body,
- "Couldn't find expected volume %s" % volume['name'])
-
- # Verify that a volume's attachment retrieved
- server_id = _get_server_by_name(client, volume['server'])['id']
- attachment = client.volumes.get_attachment_from_volume(vol_body)
- self.assertEqual(vol_body['id'], attachment['volume_id'])
- self.assertEqual(server_id, attachment['server_id'])
-
- def check_networking(self):
- """Check that the networks are still there."""
- for res_type in ('networks', 'subnets', 'routers'):
- for res in self.res[res_type]:
- client = client_for_user(res['owner'])
- found = _get_resource_by_name(client.networks, res_type,
- res['name'])
- self.assertIsNotNone(
- found,
- "Couldn't find expected resource %s" % res['name'])
-
-
-#######################
-#
-# OBJECTS
-#
-#######################
-
-
-def _file_contents(fname):
- with open(fname, 'r') as f:
- return f.read()
-
-
-def create_objects(objects):
- if not objects:
- return
- LOG.info("Creating objects")
- for obj in objects:
- LOG.debug("Object %s" % obj)
- swift_role = obj.get('swift_role', 'Member')
- _assign_swift_role(obj['owner'], swift_role)
- client = client_for_user(obj['owner'])
- client.containers.create_container(obj['container'])
- client.objects.create_object(
- obj['container'], obj['name'],
- _file_contents(obj['file']))
-
-
-def destroy_objects(objects):
- for obj in objects:
- client = client_for_user(obj['owner'])
- r, body = client.objects.delete_object(obj['container'], obj['name'])
- if not (200 <= int(r['status']) < 299):
- raise ValueError("unable to destroy object: [%s] %s" % (r, body))
-
-
-#######################
-#
-# IMAGES
-#
-#######################
-
-
-def _resolve_image(image, imgtype):
- name = image[imgtype]
- fname = os.path.join(OPTS.devstack_base, image['imgdir'], name)
- return name, fname
-
-
-def _get_image_by_name(client, name):
- body = client.images.list_images()
- for image in body:
- if name == image['name']:
- return image
- return None
-
-
-def create_images(images):
- if not images:
- return
- LOG.info("Creating images")
- for image in images:
- client = client_for_user(image['owner'])
-
- # DEPRECATED: 'format' was used for ami images
- # Use 'disk_format' and 'container_format' instead
- if 'format' in image:
- LOG.warning("Deprecated: 'format' is deprecated for images "
- "description. Please use 'disk_format' and 'container_"
- "format' instead.")
- image['disk_format'] = image['format']
- image['container_format'] = image['format']
-
- # only upload a new image if the name isn't there
- if _get_image_by_name(client, image['name']):
- LOG.info("Image '%s' already exists" % image['name'])
- continue
-
- # special handling for 3 part image
- extras = {}
- if image['disk_format'] == 'ami':
- name, fname = _resolve_image(image, 'aki')
- aki = client.images.create_image(
- 'javelin_' + name, 'aki', 'aki')
- client.images.store_image_file(aki.get('id'), open(fname, 'r'))
- extras['kernel_id'] = aki.get('id')
-
- name, fname = _resolve_image(image, 'ari')
- ari = client.images.create_image(
- 'javelin_' + name, 'ari', 'ari')
- client.images.store_image_file(ari.get('id'), open(fname, 'r'))
- extras['ramdisk_id'] = ari.get('id')
-
- _, fname = _resolve_image(image, 'file')
- body = client.images.create_image(
- image['name'], image['container_format'],
- image['disk_format'], **extras)
- image_id = body.get('id')
- client.images.store_image_file(image_id, open(fname, 'r'))
-
-
-def destroy_images(images):
- if not images:
- return
- LOG.info("Destroying images")
- for image in images:
- client = client_for_user(image['owner'])
-
- response = _get_image_by_name(client, image['name'])
- if not response:
- LOG.info("Image '%s' does not exist" % image['name'])
- continue
- client.images.delete_image(response['id'])
-
-
-#######################
-#
-# NETWORKS
-#
-#######################
-
-def _get_router_namespace(client, network):
- network_id = _get_resource_by_name(client.networks,
- 'networks', network)['id']
- n_body = client.routers.list_routers()
- for router in n_body['routers']:
- router_id = router['id']
- r_body = client.ports.list_ports(device_id=router_id)
- for port in r_body['ports']:
- if port['network_id'] == network_id:
- return "qrouter-%s" % router_id
-
-
-def _get_resource_by_name(client, resource, name):
- get_resources = getattr(client, 'list_%s' % resource)
- if get_resources is None:
- raise AttributeError("client doesn't have method list_%s" % resource)
- # Until all tempest client methods are changed to return only one value,
- # we cannot assume they all have the same signature so we need to discard
- # the unused response first value it two values are being returned.
- body = get_resources()
- if isinstance(body, tuple):
- body = body[1]
- if isinstance(body, dict):
- body = body[resource]
- for res in body:
- if name == res['name']:
- return res
- raise ValueError('%s not found in %s resources' % (name, resource))
-
-
-def create_networks(networks):
- LOG.info("Creating networks")
- for network in networks:
- client = client_for_user(network['owner'])
-
- # only create a network if the name isn't here
- body = client.networks.list_networks()
- if any(item['name'] == network['name'] for item in body['networks']):
- LOG.warning("Duplicated network name: %s" % network['name'])
- continue
-
- client.networks.create_network(name=network['name'])
-
-
-def destroy_networks(networks):
- LOG.info("Destroying subnets")
- for network in networks:
- client = client_for_user(network['owner'])
- network_id = _get_resource_by_name(client.networks, 'networks',
- network['name'])['id']
- client.networks.delete_network(network_id)
-
-
-def create_subnets(subnets):
- LOG.info("Creating subnets")
- for subnet in subnets:
- client = client_for_user(subnet['owner'])
-
- network = _get_resource_by_name(client.networks, 'networks',
- subnet['network'])
- ip_version = netaddr.IPNetwork(subnet['range']).version
- # ensure we don't overlap with another subnet in the network
- try:
- client.networks.create_subnet(network_id=network['id'],
- cidr=subnet['range'],
- name=subnet['name'],
- ip_version=ip_version)
- except lib_exc.BadRequest as e:
- is_overlapping_cidr = 'overlaps with another subnet' in str(e)
- if not is_overlapping_cidr:
- raise
-
-
-def destroy_subnets(subnets):
- LOG.info("Destroying subnets")
- for subnet in subnets:
- client = client_for_user(subnet['owner'])
- subnet_id = _get_resource_by_name(client.subnets,
- 'subnets', subnet['name'])['id']
- client.subnets.delete_subnet(subnet_id)
-
-
-def create_routers(routers):
- LOG.info("Creating routers")
- for router in routers:
- client = client_for_user(router['owner'])
-
- # only create a router if the name isn't here
- body = client.routers.list_routers()
- if any(item['name'] == router['name'] for item in body['routers']):
- LOG.warning("Duplicated router name: %s" % router['name'])
- continue
-
- client.networks.create_router(name=router['name'])
-
-
-def destroy_routers(routers):
- LOG.info("Destroying routers")
- for router in routers:
- client = client_for_user(router['owner'])
- router_id = _get_resource_by_name(client.networks,
- 'routers', router['name'])['id']
- for subnet in router['subnet']:
- subnet_id = _get_resource_by_name(client.networks,
- 'subnets', subnet)['id']
- client.routers.remove_router_interface(router_id,
- subnet_id=subnet_id)
- client.routers.delete_router(router_id)
-
-
-def add_router_interface(routers):
- for router in routers:
- client = client_for_user(router['owner'])
- router_id = _get_resource_by_name(client.networks,
- 'routers', router['name'])['id']
-
- for subnet in router['subnet']:
- subnet_id = _get_resource_by_name(client.networks,
- 'subnets', subnet)['id']
- # connect routers to their subnets
- client.routers.add_router_interface(router_id,
- subnet_id=subnet_id)
- # connect routers to external network if set to "gateway"
- if router['gateway']:
- if CONF.network.public_network_id:
- ext_net = CONF.network.public_network_id
- client.routers.update_router(
- router_id, set_enable_snat=True,
- external_gateway_info={"network_id": ext_net})
- else:
- raise ValueError('public_network_id is not configured.')
-
-
-#######################
-#
-# SERVERS
-#
-#######################
-
-def _get_server_by_name(client, name):
- body = client.servers.list_servers()
- for server in body['servers']:
- if name == server['name']:
- return server
- return None
-
-
-def _get_flavor_by_name(client, name):
- body = client.flavors.list_flavors()['flavors']
- for flavor in body:
- if name == flavor['name']:
- return flavor
- return None
-
-
-def create_servers(servers):
- if not servers:
- return
- LOG.info("Creating servers")
- for server in servers:
- client = client_for_user(server['owner'])
-
- if _get_server_by_name(client, server['name']):
- LOG.info("Server '%s' already exists" % server['name'])
- continue
-
- image_id = _get_image_by_name(client, server['image'])['id']
- flavor_id = _get_flavor_by_name(client, server['flavor'])['id']
- # validate neutron is enabled and ironic disabled
- kwargs = dict()
- if (CONF.service_available.neutron and
- not CONF.baremetal.driver_enabled and server.get('networks')):
- get_net_id = lambda x: (_get_resource_by_name(
- client.networks, 'networks', x)['id'])
- kwargs['networks'] = [{'uuid': get_net_id(network)}
- for network in server['networks']]
- body = client.servers.create_server(
- name=server['name'], imageRef=image_id, flavorRef=flavor_id,
- **kwargs)['server']
- server_id = body['id']
- client.servers.wait_for_server_status(server_id, 'ACTIVE')
- # create security group(s) after server spawning
- for secgroup in server['secgroups']:
- client.servers.add_security_group(server_id, name=secgroup)
- if CONF.validation.connect_method == 'floating':
- floating_ip_pool = server.get('floating_ip_pool')
- floating_ip = client.floating_ips.create_floating_ip(
- pool_name=floating_ip_pool)['floating_ip']
- client.floating_ips.associate_floating_ip_to_server(
- floating_ip['ip'], server_id)
-
-
-def destroy_servers(servers):
- if not servers:
- return
- LOG.info("Destroying servers")
- for server in servers:
- client = client_for_user(server['owner'])
-
- response = _get_server_by_name(client, server['name'])
- if not response:
- LOG.info("Server '%s' does not exist" % server['name'])
- continue
-
- # TODO(EmilienM): disassociate floating IP from server and release it.
- client.servers.delete_server(response['id'])
- waiters.wait_for_server_termination(client.servers, response['id'],
- ignore_error=True)
-
-
-def create_secgroups(secgroups):
- LOG.info("Creating security groups")
- for secgroup in secgroups:
- client = client_for_user(secgroup['owner'])
-
- # only create a security group if the name isn't here
- # i.e. a security group may be used by another server
- # only create a router if the name isn't here
- body = client.secgroups.list_security_groups()['security_groups']
- if any(item['name'] == secgroup['name'] for item in body):
- LOG.warning("Security group '%s' already exists" %
- secgroup['name'])
- continue
-
- body = client.secgroups.create_security_group(
- name=secgroup['name'],
- description=secgroup['description'])['security_group']
- secgroup_id = body['id']
- # for each security group, create the rules
- for rule in secgroup['rules']:
- ip_proto, from_port, to_port, cidr = rule.split()
- client.secrules.create_security_group_rule(
- parent_group_id=secgroup_id, ip_protocol=ip_proto,
- from_port=from_port, to_port=to_port, cidr=cidr)
-
-
-def destroy_secgroups(secgroups):
- LOG.info("Destroying security groups")
- for secgroup in secgroups:
- client = client_for_user(secgroup['owner'])
- sg_id = _get_resource_by_name(client.secgroups,
- 'security_groups',
- secgroup['name'])
- # sg rules are deleted automatically
- client.secgroups.delete_security_group(sg_id['id'])
-
-
-#######################
-#
-# VOLUMES
-#
-#######################
-
-def _get_volume_by_name(client, name):
- body = client.volumes.list_volumes()['volumes']
- for volume in body:
- if name == volume['display_name']:
- return volume
- return None
-
-
-def create_volumes(volumes):
- if not volumes:
- return
- LOG.info("Creating volumes")
- for volume in volumes:
- client = client_for_user(volume['owner'])
-
- # only create a volume if the name isn't here
- if _get_volume_by_name(client, volume['name']):
- LOG.info("volume '%s' already exists" % volume['name'])
- continue
-
- size = volume['gb']
- v_name = volume['name']
- body = client.volumes.create_volume(size=size,
- display_name=v_name)['volume']
- waiters.wait_for_volume_status(client.volumes, body['id'], 'available')
-
-
-def destroy_volumes(volumes):
- for volume in volumes:
- client = client_for_user(volume['owner'])
- volume_id = _get_volume_by_name(client, volume['name'])['id']
- client.volumes.detach_volume(volume_id)
- client.volumes.delete_volume(volume_id)
-
-
-def attach_volumes(volumes):
- for volume in volumes:
- client = client_for_user(volume['owner'])
- server_id = _get_server_by_name(client, volume['server'])['id']
- volume_id = _get_volume_by_name(client, volume['name'])['id']
- device = volume['device']
- client.volumes.attach_volume(volume_id,
- instance_uuid=server_id,
- mountpoint=device)
-
-
-#######################
-#
-# MAIN LOGIC
-#
-#######################
-
-def create_resources():
- LOG.info("Creating Resources")
- # first create keystone level resources, and we need to be admin
- # for this.
- create_tenants(RES['tenants'])
- create_users(RES['users'])
- collect_users(RES['users'])
-
- # next create resources in a well known order
- create_objects(RES['objects'])
- create_images(RES['images'])
-
- # validate neutron is enabled and ironic is disabled
- if CONF.service_available.neutron and not CONF.baremetal.driver_enabled:
- create_networks(RES['networks'])
- create_subnets(RES['subnets'])
- create_routers(RES['routers'])
- add_router_interface(RES['routers'])
-
- create_secgroups(RES['secgroups'])
- create_volumes(RES['volumes'])
-
- # Only attempt attaching the volumes if servers are defined in the
- # resource file
- if 'servers' in RES:
- create_servers(RES['servers'])
- attach_volumes(RES['volumes'])
-
-
-def destroy_resources():
- LOG.info("Destroying Resources")
- # Destroy in inverse order of create
- destroy_servers(RES['servers'])
- destroy_images(RES['images'])
- destroy_objects(RES['objects'])
- destroy_volumes(RES['volumes'])
- if CONF.service_available.neutron and not CONF.baremetal.driver_enabled:
- destroy_routers(RES['routers'])
- destroy_subnets(RES['subnets'])
- destroy_networks(RES['networks'])
- destroy_secgroups(RES['secgroups'])
- destroy_users(RES['users'])
- destroy_tenants(RES['tenants'])
- LOG.warning("Destroy mode incomplete")
-
-
-def get_options():
- global OPTS
- parser = argparse.ArgumentParser(
- description='Create and validate a fixed set of OpenStack resources')
- parser.add_argument('-m', '--mode',
- metavar='<create|check|destroy>',
- required=True,
- help=('One of (create, check, destroy)'))
- parser.add_argument('-r', '--resources',
- required=True,
- metavar='resourcefile.yaml',
- help='Resources definition yaml file')
-
- parser.add_argument(
- '-d', '--devstack-base',
- required=True,
- metavar='/opt/stack/old',
- help='Devstack base directory for retrieving artifacts')
- parser.add_argument(
- '-c', '--config-file',
- metavar='/etc/tempest.conf',
- help='path to javelin2(tempest) config file')
-
- # auth bits, letting us also just source the devstack openrc
- parser.add_argument('--os-username',
- metavar='<auth-user-name>',
- default=os.environ.get('OS_USERNAME'),
- help=('Defaults to env[OS_USERNAME].'))
- parser.add_argument('--os-password',
- metavar='<auth-password>',
- default=os.environ.get('OS_PASSWORD'),
- help=('Defaults to env[OS_PASSWORD].'))
- parser.add_argument('--os-tenant-name',
- metavar='<auth-tenant-name>',
- default=os.environ.get('OS_TENANT_NAME'),
- help=('Defaults to env[OS_TENANT_NAME].'))
-
- OPTS = parser.parse_args()
- if OPTS.mode not in ('create', 'check', 'destroy'):
- print("ERROR: Unknown mode -m %s\n" % OPTS.mode)
- parser.print_help()
- sys.exit(1)
- if OPTS.config_file:
- config.CONF.set_config_path(OPTS.config_file)
-
-
-def setup_logging():
- global LOG
- logging.setup(CONF, __name__)
- LOG = logging.getLogger(__name__)
-
-
-def main():
- print("Javelin is deprecated and will be removed from Tempest in the "
- "future.")
- global RES
- get_options()
- setup_logging()
- RES.update(load_resources(OPTS.resources))
-
- if OPTS.mode == 'create':
- create_resources()
- # Make sure the resources we just created actually work
- checker = JavelinCheck(USERS, RES)
- checker.check()
- elif OPTS.mode == 'check':
- collect_users(RES['users'])
- checker = JavelinCheck(USERS, RES)
- checker.check()
- elif OPTS.mode == 'destroy':
- collect_users(RES['users'])
- destroy_resources()
- else:
- LOG.error('Unknown mode %s' % OPTS.mode)
- return 1
- LOG.info('javelin2 successfully finished')
- return 0
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/tempest/cmd/list_plugins.py b/tempest/cmd/list_plugins.py
index 36e45a5..86732da 100644
--- a/tempest/cmd/list_plugins.py
+++ b/tempest/cmd/list_plugins.py
@@ -21,7 +21,7 @@
from cliff import command
import prettytable
-from tempest.test_discover.plugins import TempestTestPluginManager
+from tempest.test_discover import plugins as plg
class TempestListPlugins(command.Command):
@@ -32,7 +32,7 @@
return 'List all tempest plugins'
def _list_plugins(self):
- plugins = TempestTestPluginManager()
+ plugins = plg.TempestTestPluginManager()
output = prettytable.PrettyTable(["Name", "EntryPoint"])
for plugin in plugins.ext_plugins.extensions:
diff --git a/tempest/cmd/resources.yaml b/tempest/cmd/resources.yaml
deleted file mode 100644
index 5c62ee3..0000000
--- a/tempest/cmd/resources.yaml
+++ /dev/null
@@ -1,95 +0,0 @@
-# This is a yaml description for the most basic definitions
-# of what should exist across the resource boundary. Perhaps
-# one day this will grow into a Heat resource template, but as
-# Heat isn't a known working element in the upgrades, we do
-# this much simpler thing for now.
-
-tenants:
- - javelin
- - discuss
-
-users:
- - name: javelin
- pass: gungnir
- tenant: javelin
- - name: javelin2
- pass: gungnir2
- tenant: discuss
-
-secgroups:
- - name: angon
- owner: javelin
- description: angon
- rules:
- - 'icmp -1 -1 0.0.0.0/0'
- - 'tcp 22 22 0.0.0.0/0'
- - name: baobab
- owner: javelin
- description: baobab
- rules:
- - 'tcp 80 80 0.0.0.0/0'
-
-# resources that we want to create
-images:
- - name: javelin_cirros
- owner: javelin
- imgdir: files/images/cirros-0.3.2-x86_64-uec
- file: cirros-0.3.2-x86_64-blank.img
- format: ami
- aki: cirros-0.3.2-x86_64-vmlinuz
- ari: cirros-0.3.2-x86_64-initrd
-volumes:
- - name: assegai
- server: peltast
- owner: javelin
- gb: 1
- device: /dev/vdb
- - name: pifpouf
- server: hoplite
- owner: javelin
- gb: 2
- device: /dev/vdb
-networks:
- - name: world1
- owner: javelin
- - name: world2
- owner: javelin
-subnets:
- - name: subnet1
- range: 10.1.0.0/24
- network: world1
- owner: javelin
- - name: subnet2
- range: 192.168.1.0/24
- network: world2
- owner: javelin
-routers:
- - name: connector
- owner: javelin
- gateway: true
- subnet:
- - subnet1
- - subnet2
-servers:
- - name: peltast
- owner: javelin
- flavor: m1.small
- image: javelin_cirros
- networks:
- - world1
- secgroups:
- - angon
- - baobab
- - name: hoplite
- owner: javelin
- flavor: m1.medium
- image: javelin_cirros
- networks:
- - world2
- secgroups:
- - angon
-objects:
- - container: jc1
- name: javelin1
- owner: javelin
- file: /etc/hosts
diff --git a/tempest/cmd/run.py b/tempest/cmd/run.py
index 1c0d9c4..5fa8b74 100644
--- a/tempest/cmd/run.py
+++ b/tempest/cmd/run.py
@@ -23,7 +23,7 @@
any tests that match on re.match() with the regex
* **--smoke**: Run all the tests tagged as smoke
-There are also the **--blacklist_file** and **--whitelist_file** options that
+There are also the **--blacklist-file** and **--whitelist-file** options that
let you pass a filepath to tempest run with the file format being a line
separated regex, with '#' used to signify the start of a comment on a line.
For example::
@@ -135,6 +135,12 @@
workspace_mgr = workspace.WorkspaceManager(
parsed_args.workspace_path)
path = workspace_mgr.get_workspace(parsed_args.workspace)
+ if not path:
+ sys.exit(
+ "The %r workspace isn't registered in "
+ "%r. Use 'tempest init' to "
+ "register the workspace." %
+ (parsed_args.workspace, workspace_mgr.path))
os.chdir(path)
# NOTE(mtreinish): tempest init should create a .testrepository dir
# but since workspaces can be imported let's sanity check and
@@ -191,11 +197,11 @@
help='A normal testr selection regex used to '
'specify a subset of tests to run')
list_selector = parser.add_mutually_exclusive_group()
- list_selector.add_argument('--whitelist_file',
+ list_selector.add_argument('--whitelist-file', '--whitelist_file',
help="Path to a whitelist file, this file "
"contains a separate regex on each "
"newline.")
- list_selector.add_argument('--blacklist_file',
+ list_selector.add_argument('--blacklist-file', '--blacklist_file',
help='Path to a blacklist file, this file '
'contains a separate regex exclude on '
'each newline')
@@ -264,8 +270,8 @@
run_thread = threading.Thread(target=run_argv_thread)
run_thread.start()
- returncodes['subunit-trace'] = subunit_trace.trace(subunit_r,
- sys.stdout)
+ returncodes['subunit-trace'] = subunit_trace.trace(
+ subunit_r, sys.stdout, post_fails=True, print_failures=True)
run_thread.join()
subunit_r.close()
# python version of pipefail
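The --blacklist-file/--whitelist-file format documented above is just one regex per line, with '#' starting a comment. A minimal sketch of how such a file could be consumed (load_regex_file and is_blacklisted are hypothetical helpers, not part of tempest run)::

    import re

    def load_regex_file(path):
        """Read a line-separated regex file, ignoring '#' comments."""
        regexes = []
        with open(path) as f:
            for line in f:
                # drop anything after a '#' and surrounding whitespace
                expr = line.split('#', 1)[0].strip()
                if expr:
                    regexes.append(re.compile(expr))
        return regexes

    def is_blacklisted(test_id, blacklist):
        # a test is excluded as soon as any blacklist regex matches it
        return any(r.search(test_id) for r in blacklist)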
diff --git a/tempest/cmd/run_stress.py b/tempest/cmd/run_stress.py
deleted file mode 100755
index 7502c23..0000000
--- a/tempest/cmd/run_stress.py
+++ /dev/null
@@ -1,172 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2013 Quanta Research Cambridge, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import inspect
-import sys
-try:
- from unittest import loader
-except ImportError:
- # unittest in python 2.6 does not contain loader, so uses unittest2
- from unittest2 import loader
-import traceback
-import warnings
-
-from cliff import command
-from oslo_log import log as logging
-from oslo_serialization import jsonutils as json
-from testtools import testsuite
-
-from tempest.stress import driver
-
-LOG = logging.getLogger(__name__)
-
-
-def discover_stress_tests(path="./", filter_attr=None, call_inherited=False):
- """Discovers all tempest tests and create action out of them"""
- LOG.info("Start test discovery")
- tests = []
- testloader = loader.TestLoader()
- list = testloader.discover(path)
- for func in (testsuite.iterate_tests(list)):
- attrs = []
- try:
- method_name = getattr(func, '_testMethodName')
- full_name = "%s.%s.%s" % (func.__module__,
- func.__class__.__name__,
- method_name)
- test_func = getattr(func, method_name)
- # NOTE(mkoderer): this contains a list of all type attributes
- attrs = getattr(test_func, "__testtools_attrs")
- except Exception:
- next
- if 'stress' in attrs:
- if filter_attr is not None and filter_attr not in attrs:
- continue
- class_setup_per = getattr(test_func, "st_class_setup_per")
-
- action = {'action':
- "tempest.stress.actions.unit_test.UnitTest",
- 'kwargs': {"test_method": full_name,
- "class_setup_per": class_setup_per
- }
- }
- if (not call_inherited and
- getattr(test_func, "st_allow_inheritance") is not True):
- class_structure = inspect.getmro(test_func.im_class)
- if test_func.__name__ not in class_structure[0].__dict__:
- continue
- tests.append(action)
- return tests
-
-
-class TempestRunStress(command.Command):
-
- @staticmethod
- def display_deprecation_warning():
- warnings.simplefilter('once', category=DeprecationWarning)
- warnings.warn(
- 'Stress tests are deprecated and will be removed from Tempest '
- 'in the Newton release.',
- DeprecationWarning)
- warnings.resetwarnings()
-
- def get_parser(self, prog_name):
- self.display_deprecation_warning()
- pa = super(TempestRunStress, self).get_parser(prog_name)
- pa = add_arguments(pa)
- return pa
-
- def take_action(self, pa):
- try:
- action(pa)
- except Exception:
- LOG.exception("Failure in the stress test framework")
- traceback.print_exc()
- raise
-
- def get_description(self):
- return 'Run tempest stress tests'
-
-
-def add_arguments(parser):
- parser.add_argument('-d', '--duration', default=300, type=int,
- help="Duration of test in secs")
- parser.add_argument('-s', '--serial', action='store_true',
- help="Trigger running tests serially")
- parser.add_argument('-S', '--stop', action='store_true',
- default=False, help="Stop on first error")
- parser.add_argument('-n', '--number', type=int,
- help="How often an action is executed for each "
- "process")
- group = parser.add_mutually_exclusive_group(required=True)
- group.add_argument('-a', '--all', action='store_true',
- help="Execute all stress tests")
- parser.add_argument('-T', '--type',
- help="Filters tests of a certain type (e.g. gate)")
- parser.add_argument('-i', '--call-inherited', action='store_true',
- default=False,
- help="Call also inherited function with stress "
- "attribute")
- group.add_argument('-t', "--tests", nargs='?',
- help="Name of the file with test description")
- return parser
-
-
-def action(ns):
- result = 0
- if not ns.all:
- tests = json.load(open(ns.tests, 'r'))
- else:
- tests = discover_stress_tests(filter_attr=ns.type,
- call_inherited=ns.call_inherited)
-
- if ns.serial:
- # Duration is total time
- duration = ns.duration / len(tests)
- for test in tests:
- step_result = driver.stress_openstack([test],
- duration,
- ns.number,
- ns.stop)
- # NOTE(mkoderer): we just save the last result code
- if (step_result != 0):
- result = step_result
- if ns.stop:
- return result
- else:
- result = driver.stress_openstack(tests,
- ns.duration,
- ns.number,
- ns.stop)
- return result
-
-
-def main():
- TempestRunStress.display_deprecation_warning()
- parser = argparse.ArgumentParser(description='Run stress tests')
- pa = add_arguments(parser)
- ns = pa.parse_args()
- return action(ns)
-
-
-if __name__ == "__main__":
- try:
- sys.exit(main())
- except Exception:
- LOG.exception("Failure in the stress test framework")
- traceback.print_exc()
- sys.exit(1)
diff --git a/tempest/cmd/subunit_describe_calls.py b/tempest/cmd/subunit_describe_calls.py
index da7f426..0f868a9 100644
--- a/tempest/cmd/subunit_describe_calls.py
+++ b/tempest/cmd/subunit_describe_calls.py
@@ -21,13 +21,14 @@
Runtime Arguments
-----------------
-**--subunit, -s**: (Required) The path to the subunit file being parsed
+**--subunit, -s**: (Optional) The path to the subunit file being parsed,
+defaults to stdin
**--non-subunit-name, -n**: (Optional) The file_name that the logs are being
stored in
-**--output-file, -o**: (Required) The path where the JSON output will be
-written to
+**--output-file, -o**: (Optional) The path where the JSON output will be
+written to. This contains more information than is present in stdout.
**--ports, -p**: (Optional) The path to a JSON file describing the ports being
used by different services
@@ -35,13 +36,14 @@
Usage
-----
-subunit-describe-calls will take in a file path via the --subunit parameter
-which contains either a subunit v1 or v2 stream. This is then parsed checking
-for details contained in the file_bytes of the --non-subunit-name parameter
-(the default is pythonlogging which is what Tempest uses to store logs). By
-default the OpenStack Kilo release port defaults (http://bit.ly/22jpF5P)
-are used unless a file is provided via the --ports option. The resulting output
-is dumped in JSON output to the path provided in the --output-file option.
+subunit-describe-calls will read a subunit v1 or v2 stream either from stdin
+or from a file path passed via the --subunit parameter. The stream is then
+parsed, checking for details contained in the file_bytes of the
+--non-subunit-name parameter (the default is pythonlogging, which is what
+Tempest uses to store logs). By default the OpenStack Kilo release port
+defaults (http://bit.ly/22jpF5P) are used unless a file is provided via the
+--ports option. The resulting output is dumped as JSON to the path provided
+in the --output-file option.
Ports file JSON structure
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -64,7 +66,11 @@
"verb": "HTTP Verb",
"service": "Name of the service",
"url": "A shortened version of the URL called",
- "status_code": "The status code of the response"
+ "status_code": "The status code of the response",
+ "request_headers": "The headers of the request",
+ "request_body": "The body of the request",
+ "response_headers": "The headers of the response",
+ "response_body": "The body of the response"
}
]
}
@@ -75,6 +81,7 @@
import json
import os
import re
+import sys
import subunit
import testtools
@@ -91,6 +98,9 @@
'(?P<verb>\w*) (?P<url>.*) .*')
port_re = re.compile(r'.*:(?P<port>\d+).*')
path_re = re.compile(r'http[s]?://[^/]*/(?P<path>.*)')
+ request_re = re.compile(r'.* Request - Headers: (?P<headers>.*)')
+ response_re = re.compile(r'.* Response - Headers: (?P<headers>.*)')
+ body_re = re.compile(r'.*Body: (?P<body>.*)')
# Based on mitaka defaults:
# http://docs.openstack.org/mitaka/config-reference/
@@ -151,15 +161,46 @@
calls = []
for _, detail in details.items():
+ in_request = False
+ in_response = False
+ current_call = {}
for line in detail.as_text().split("\n"):
- match = self.url_re.match(line)
- if match is not None:
- calls.append({
- "name": match.group("name"),
- "verb": match.group("verb"),
- "status_code": match.group("code"),
- "service": self.get_service(match.group("url")),
- "url": self.url_path(match.group("url"))})
+ url_match = self.url_re.match(line)
+ request_match = self.request_re.match(line)
+ response_match = self.response_re.match(line)
+ body_match = self.body_re.match(line)
+
+ if url_match is not None:
+ if current_call != {}:
+ calls.append(current_call.copy())
+ current_call = {}
+ in_request, in_response = False, False
+ current_call.update({
+ "name": url_match.group("name"),
+ "verb": url_match.group("verb"),
+ "status_code": url_match.group("code"),
+ "service": self.get_service(url_match.group("url")),
+ "url": self.url_path(url_match.group("url"))})
+ elif request_match is not None:
+ in_request, in_response = True, False
+ current_call.update(
+ {"request_headers": request_match.group("headers")})
+ elif in_request and body_match is not None:
+ in_request = False
+ current_call.update(
+ {"request_body": body_match.group(
+ "body")})
+ elif response_match is not None:
+ in_request, in_response = False, True
+ current_call.update(
+ {"response_headers": response_match.group(
+ "headers")})
+ elif in_response and body_match is not None:
+ in_response = False
+ current_call.update(
+ {"response_body": body_match.group("body")})
+ if current_call != {}:
+ calls.append(current_call.copy())
return calls
@@ -203,11 +244,12 @@
desc = "Outputs all HTTP calls a given test made that were logged."
super(ArgumentParser, self).__init__(description=desc)
- self.prog = "Argument Parser"
+ self.prog = "subunit-describe-calls"
self.add_argument(
- "-s", "--subunit", metavar="<subunit file>", required=True,
- default=None, help="The path to the subunit output file.")
+ "-s", "--subunit", metavar="<subunit file>",
+ nargs="?", type=argparse.FileType('rb'), default=sys.stdin,
+ help="The path to the subunit output file.")
self.add_argument(
"-n", "--non-subunit-name", metavar="<non subunit name>",
@@ -216,19 +258,18 @@
self.add_argument(
"-o", "--output-file", metavar="<output file>", default=None,
- help="The output file name for the json.", required=True)
+ help="The output file name for the json.")
self.add_argument(
"-p", "--ports", metavar="<ports file>", default=None,
help="A JSON file describing the ports for each service.")
-def parse(subunit_file, non_subunit_name, ports):
+def parse(stream, non_subunit_name, ports):
if ports is not None and os.path.exists(ports):
ports = json.loads(open(ports).read())
url_parser = UrlParser(ports)
- stream = open(subunit_file, 'rb')
suite = subunit.ByteStreamToStreamResult(
stream, non_subunit_name=non_subunit_name)
result = testtools.StreamToExtendedDecorator(url_parser)
@@ -248,8 +289,21 @@
def output(url_parser, output_file):
- with open(output_file, "w") as outfile:
- outfile.write(json.dumps(url_parser.test_logs))
+ if output_file is not None:
+ with open(output_file, "w") as outfile:
+ outfile.write(json.dumps(url_parser.test_logs))
+ return
+
+ for test_name, items in url_parser.test_logs.iteritems():
+ sys.stdout.write('{0}\n'.format(test_name))
+ if not items:
+ sys.stdout.write('\n')
+ continue
+ for item in items:
+ sys.stdout.write('\t- {0} {1} request for {2} to {3}\n'.format(
+ item.get('status_code'), item.get('verb'),
+ item.get('service'), item.get('url')))
+ sys.stdout.write('\n')
def entry_point():
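The request/response extraction added above is a small state machine: a 'Body:' line is ambiguous on its own, so the in_request/in_response flags remember whether the most recent headers belonged to the request or the response. A standalone sketch of the same pairing logic over a plain list of log lines, reusing the patch's regexes::

    import re

    request_re = re.compile(r'.* Request - Headers: (?P<headers>.*)')
    response_re = re.compile(r'.* Response - Headers: (?P<headers>.*)')
    body_re = re.compile(r'.*Body: (?P<body>.*)')

    def pair_bodies(lines):
        """Attach each 'Body:' line to the request or response seen before it."""
        call, in_request, in_response = {}, False, False
        for line in lines:
            req = request_re.match(line)
            resp = response_re.match(line)
            body = body_re.match(line)
            if req is not None:
                in_request, in_response = True, False
                call['request_headers'] = req.group('headers')
            elif resp is not None:
                in_request, in_response = False, True
                call['response_headers'] = resp.group('headers')
            elif in_request and body is not None:
                in_request = False
                call['request_body'] = body.group('body')
            elif in_response and body is not None:
                in_response = False
                call['response_body'] = body.group('body')
        return call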
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index 77b88f9..381f3df 100644
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -46,7 +46,7 @@
conf_dir = os.environ.get('TEMPEST_CONFIG_DIR', default_config_dir)
conf_file = os.environ.get('TEMPEST_CONFIG', default_config_file)
path = os.path.join(conf_dir, conf_file)
- fd = open(path, 'rw')
+ fd = open(path, 'r+')
return fd
@@ -147,6 +147,10 @@
contains_version('v2.', versions)):
print_and_or_update('api_v2', 'volume-feature-enabled',
not CONF.volume_feature_enabled.api_v2, update)
+ if (CONF.volume_feature_enabled.api_v3 !=
+ contains_version('v3.', versions)):
+ print_and_or_update('api_v3', 'volume-feature-enabled',
+ not CONF.volume_feature_enabled.api_v3, update)
def verify_api_versions(os, service, update):
@@ -282,7 +286,6 @@
'object_storage': 'swift',
'compute': 'nova',
'orchestration': 'heat',
- 'data_processing': 'sahara',
'baremetal': 'ironic',
'identity': 'keystone',
}
@@ -366,10 +369,9 @@
replace = opts.replace_ext
global CONF_PARSER
- outfile = sys.stdout
if update:
conf_file = _get_config_file()
- CONF_PARSER = moves.configparser.SafeConfigParser()
+ CONF_PARSER = moves.configparser.ConfigParser()
CONF_PARSER.optionxform = str
CONF_PARSER.readfp(conf_file)
diff --git a/tempest/cmd/workspace.py b/tempest/cmd/workspace.py
index b36cf4e..3c58648 100644
--- a/tempest/cmd/workspace.py
+++ b/tempest/cmd/workspace.py
@@ -72,7 +72,10 @@
@lockutils.synchronized('workspaces', external=True)
def get_workspace(self, name):
- """Returns the workspace that has the given name"""
+ """Returns the workspace that has the given name
+
+ If the workspace isn't registered then `None` is returned.
+ """
self._populate()
return self.workspaces.get(name)
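With get_workspace() now returning None for unknown names, callers are expected to check the result before using it, as tempest run does above. A minimal sketch of that check (resolve_workspace is a hypothetical helper, not tempest API)::

    from tempest.cmd import workspace

    def resolve_workspace(name, workspace_path=None):
        mgr = workspace.WorkspaceManager(workspace_path)
        path = mgr.get_workspace(name)
        if not path:
            raise SystemExit(
                "The %r workspace isn't registered in %r. Use 'tempest init' "
                "to register the workspace." % (name, mgr.path))
        return path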
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index a2edcdc..318eb10 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -101,13 +101,14 @@
wait_until = 'ACTIVE'
if volume_backed:
- volume_name = data_utils.rand_name('volume')
+ volume_name = data_utils.rand_name(__name__ + '-volume')
volumes_client = clients.volumes_v2_client
if CONF.volume_feature_enabled.api_v1:
volumes_client = clients.volumes_client
volume = volumes_client.create_volume(
display_name=volume_name,
- imageRef=image_id)
+ imageRef=image_id,
+ size=CONF.volume.volume_size)
waiters.wait_for_volume_status(volumes_client,
volume['volume']['id'], 'available')
@@ -128,7 +129,6 @@
**kwargs)
# handle the case of multiple servers
- servers = []
if multiple_create_request:
# Get servers created which name match with name param.
body_servers = clients.servers_client.list_servers()
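The volume-backed path above now passes an explicit size so the bootable volume matches CONF.volume.volume_size. A reduced sketch of that flow, assuming a v2 volumes client is already wired up (bootable_volume is an illustrative helper, not tempest API)::

    from tempest.common import waiters
    from tempest.lib.common.utils import data_utils

    def bootable_volume(volumes_client, image_id, size):
        # create a volume from the image and wait until it is usable
        name = data_utils.rand_name(__name__ + '-volume')
        volume = volumes_client.create_volume(display_name=name,
                                              imageRef=image_id,
                                              size=size)
        waiters.wait_for_volume_status(volumes_client,
                                       volume['volume']['id'], 'available')
        return volume['volume']['id']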
diff --git a/tempest/common/cred_client.py b/tempest/common/cred_client.py
index 2ca9f40..ad968f1 100644
--- a/tempest/common/cred_client.py
+++ b/tempest/common/cred_client.py
@@ -17,7 +17,7 @@
from tempest.lib import auth
from tempest.lib import exceptions as lib_exc
-from tempest.services.identity.v2.json import identity_client as v2_identity
+from tempest.lib.services.identity.v2 import identity_client as v2_identity
LOG = logging.getLogger(__name__)
@@ -40,8 +40,10 @@
self.roles_client = roles_client
def create_user(self, username, password, project, email):
- params = self._create_user_params(username, password,
- project['id'], email)
+ params = {'name': username,
+ 'password': password,
+ self.project_id_param: project['id'],
+ 'email': email}
user = self.users_client.create_user(**params)
if 'user' in user:
user = user['user']
@@ -72,7 +74,9 @@
msg = 'No "%s" role found' % role_name
raise lib_exc.NotFound(msg)
try:
- self._assign_user_role(project, user, role)
+ self.roles_client.create_user_role_on_project(project['id'],
+ user['id'],
+ role['id'])
except lib_exc.Conflict:
LOG.debug("Role %s already assigned on project %s for user %s" % (
role['id'], project['id'], user['id']))
@@ -94,6 +98,7 @@
class V2CredsClient(CredsClient):
+ project_id_param = 'tenantId'
def __init__(self, identity_client, projects_client, users_client,
roles_client):
@@ -102,13 +107,6 @@
users_client,
roles_client)
- def _create_user_params(self, username, password, project_id, email):
- params = {'name': username,
- 'password': password,
- 'tenantId': project_id,
- 'email': email}
- return params
-
def create_project(self, name, description):
tenant = self.projects_client.create_tenant(
name=name, description=description)['tenant']
@@ -128,13 +126,9 @@
tenant_name=project['name'], tenant_id=project['id'],
password=password)
- def _assign_user_role(self, project, user, role):
- self.roles_client.create_user_role_on_project(project['id'],
- user['id'],
- role['id'])
-
class V3CredsClient(CredsClient):
+ project_id_param = 'project_id'
def __init__(self, identity_client, projects_client, users_client,
roles_client, domains_client, domain_name):
@@ -152,13 +146,6 @@
msg = "Requested domain %s could not be found" % domain_name
raise lib_exc.InvalidCredentials(msg)
- def _create_user_params(self, username, password, project_id, email):
- params = {'user_name': username,
- 'password': password,
- 'project_id': project_id,
- 'email': email}
- return params
-
def create_project(self, name, description):
project = self.projects_client.create_project(
name=name, description=description,
@@ -186,11 +173,6 @@
domain_id=self.creds_domain['id'],
domain_name=self.creds_domain['name'])
- def _assign_user_role(self, project, user, role):
- self.roles_client.assign_user_role_on_project(project['id'],
- user['id'],
- role['id'])
-
def assign_user_role_on_domain(self, user, role_name, domain=None):
"""Assign the specified role on a domain
@@ -208,7 +190,7 @@
msg = 'No "%s" role found' % role_name
raise lib_exc.NotFound(msg)
try:
- self.roles_client.assign_user_role_on_domain(
+ self.roles_client.create_user_role_on_domain(
domain['id'], user['id'], role['id'])
except lib_exc.Conflict:
LOG.debug("Role %s already assigned on domain %s for user %s",
diff --git a/tempest/common/credentials_factory.py b/tempest/common/credentials_factory.py
index c22afc1..5634958 100644
--- a/tempest/common/credentials_factory.py
+++ b/tempest/common/credentials_factory.py
@@ -17,8 +17,8 @@
from tempest.common import dynamic_creds
from tempest.common import preprov_creds
from tempest import config
-from tempest import exceptions
from tempest.lib import auth
+from tempest.lib import exceptions
CONF = config.CONF
@@ -80,6 +80,16 @@
network_resources=network_resources,
identity_version=identity_version,
admin_creds=admin_creds,
+ identity_admin_domain_scope=CONF.identity.admin_domain_scope,
+ identity_admin_role=CONF.identity.admin_role,
+ extra_roles=CONF.auth.tempest_roles,
+ neutron_available=CONF.service_available.neutron,
+ project_network_cidr=CONF.network.project_network_cidr,
+ project_network_mask_bits=CONF.network.project_network_mask_bits,
+ public_network_id=CONF.network.public_network_id,
+ create_networks=(CONF.auth.create_isolated_networks and not
+ CONF.baremetal.driver_enabled),
+ resource_prefix=CONF.resources_prefix,
**get_dynamic_provider_params())
else:
if CONF.auth.test_accounts_file:
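Since dynamic_creds no longer reads CONF itself, the factory becomes the single place where configuration is turned into constructor arguments. A simplified sketch of that plumbing, using only the options shown above and omitting whatever get_dynamic_provider_params() adds (build_dynamic_provider is an illustrative wrapper)::

    from tempest.common import dynamic_creds
    from tempest import config

    CONF = config.CONF

    def build_dynamic_provider(name, identity_version, admin_creds=None):
        return dynamic_creds.DynamicCredentialProvider(
            name=name,
            identity_version=identity_version,
            admin_creds=admin_creds,
            identity_admin_domain_scope=CONF.identity.admin_domain_scope,
            identity_admin_role=CONF.identity.admin_role,
            extra_roles=CONF.auth.tempest_roles,
            neutron_available=CONF.service_available.neutron,
            project_network_cidr=CONF.network.project_network_cidr,
            project_network_mask_bits=CONF.network.project_network_mask_bits,
            public_network_id=CONF.network.public_network_id,
            create_networks=(CONF.auth.create_isolated_networks and not
                             CONF.baremetal.driver_enabled),
            resource_prefix=CONF.resources_prefix)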
diff --git a/tempest/common/custom_matchers.py b/tempest/common/custom_matchers.py
index 8ba33ed..b6ff241 100644
--- a/tempest/common/custom_matchers.py
+++ b/tempest/common/custom_matchers.py
@@ -28,7 +28,7 @@
checked in each test code.
"""
- def __init__(self, target, method):
+ def __init__(self, target, method, policies=None):
"""Initialization of ExistsAllResponseHeaders
param: target Account/Container/Object
@@ -36,14 +36,34 @@
"""
self.target = target
self.method = method
+ self.policies = policies or []
+
+ def _content_length_required(self, resp):
+ # Verify whether given HTTP response must contain content-length.
+ # Take into account the exceptions defined in RFC 7230.
+ if resp.status in range(100, 200) or resp.status == 204:
+ return False
+
+ return True
def match(self, actual):
"""Check headers
- param: actual HTTP response headers
+ param: actual HTTP response object containing headers and status
"""
- # Check common headers for all HTTP methods
- if 'content-length' not in actual:
+ # Check common headers for all HTTP methods.
+ #
+ # Please note that for 1xx and 204 responses Content-Length presence
+ # is intentionally not checked. According to RFC 7230 a server MUST
+ # NOT send the header in such responses. Thus, clients should not
+ # depend on this header. However, the standard does not require them
+ # to validate the server's behavior. We leverage that to not refuse
+ # any implementation violating it like Swift [1] or some versions of
+ # Ceph RadosGW [2].
+ # [1] https://bugs.launchpad.net/swift/+bug/1537811
+ # [2] http://tracker.ceph.com/issues/13582
+ if ('content-length' not in actual and
+ self._content_length_required(actual)):
return NonExistentHeader('content-length')
if 'content-type' not in actual:
return NonExistentHeader('content-type')
@@ -65,11 +85,63 @@
return NonExistentHeader('x-account-container-count')
if 'x-account-object-count' not in actual:
return NonExistentHeader('x-account-object-count')
+ if actual['x-account-container-count'] > 0:
+ acct_header = "x-account-storage-policy-"
+ matched_policy_count = 0
+
+ # Loop through the policies and look for account
+ # usage data. There should be at least 1 set
+ for policy in self.policies:
+ front_header = acct_header + policy['name'].lower()
+
+ usage_policies = [
+ front_header + '-bytes-used',
+ front_header + '-object-count',
+ front_header + '-container-count'
+ ]
+
+ # There should be 3 usage values for a given storage
+ # policy in an account: bytes used, object count, and
+ # container count
+ policy_hdrs = sum(1 for use_hdr in usage_policies
+ if use_hdr in actual)
+
+ # If there are less than 3 headers here then 1 is
+ # missing, let's figure out which one and report
+ if policy_hdrs == 3:
+ matched_policy_count = matched_policy_count + 1
+ else:
+ if policy_hdrs > 0 and policy_hdrs < 3:
+ for use_hdr in usage_policies:
+ if use_hdr not in actual:
+ return NonExistentHeader(use_hdr)
+
+ # Only flag an error if actual policies have been read and
+ # no usage has been found
+ if self.policies and matched_policy_count == 0:
+ return GenericError("No storage policy usage headers")
+
elif self.target == 'Container':
if 'x-container-bytes-used' not in actual:
return NonExistentHeader('x-container-bytes-used')
if 'x-container-object-count' not in actual:
return NonExistentHeader('x-container-object-count')
+ if 'x-storage-policy' not in actual:
+ return NonExistentHeader('x-storage-policy')
+ else:
+ policy_name = actual['x-storage-policy']
+
+ # loop through the policies and ensure that
+ # the value in the container header matches
+ # one of the storage policies
+ for policy in self.policies:
+ if policy['name'] == policy_name:
+ break
+ else:
+ # Ensure that there are actual policies stored
+ if self.policies:
+ return InvalidHeaderValue('x-storage-policy',
+ policy_name)
elif self.target == 'Object':
if 'etag' not in actual:
return NonExistentHeader('etag')
@@ -95,6 +167,19 @@
return None
+class GenericError(object):
+ """Informs an error message of a generic error during header evaluation"""
+
+ def __init__(self, body):
+ self.body = body
+
+ def describe(self):
+ return "%s" % self.body
+
+ def get_details(self):
+ return {}
+
+
class NonExistentHeader(object):
"""Informs an error message in the case of missing a certain header"""
@@ -108,6 +193,20 @@
return {}
+class InvalidHeaderValue(object):
+ """Informs an error message when a header contains a bad value"""
+
+ def __init__(self, header, value):
+ self.header = header
+ self.value = value
+
+ def describe(self):
+ return "InvalidValue (%s, %s)" % (self.header, self.value)
+
+ def get_details(self):
+ return {}
+
+
class AreAllWellFormatted(object):
"""Specific matcher to check the correctness of formats of values
diff --git a/tempest/common/dynamic_creds.py b/tempest/common/dynamic_creds.py
index c9b9db1..5c12fd8 100644
--- a/tempest/common/dynamic_creds.py
+++ b/tempest/common/dynamic_creds.py
@@ -18,20 +18,22 @@
from tempest import clients
from tempest.common import cred_client
-from tempest.common import cred_provider
-from tempest.common.utils import data_utils
-from tempest import config
-from tempest import exceptions
+from tempest.lib.common import cred_provider
+from tempest.lib.common.utils import data_utils
from tempest.lib import exceptions as lib_exc
-CONF = config.CONF
LOG = logging.getLogger(__name__)
class DynamicCredentialProvider(cred_provider.CredentialProvider):
def __init__(self, identity_version, name=None, network_resources=None,
- credentials_domain=None, admin_role=None, admin_creds=None):
+ credentials_domain=None, admin_role=None, admin_creds=None,
+ identity_admin_domain_scope=False,
+ identity_admin_role='admin', extra_roles=None,
+ neutron_available=False, create_networks=True,
+ project_network_cidr=None, project_network_mask_bits=None,
+ public_network_id=None, resource_prefix=None):
"""Creates credentials dynamically for tests
A credential provider that, based on an initial set of
@@ -48,6 +50,23 @@
:param dict network_resources: network resources to be created for
the created credentials
:param Credentials admin_creds: initial admin credentials
+ :param bool identity_admin_domain_scope: Set to true if admin should be
+ scoped to the domain. By
+ default this is False and the
+ admin role is scoped to the
+ project.
+ :param str identity_admin_role: The role name to use for admin
+ :param list extra_roles: A list of strings for extra roles that should
+ be assigned to all created users
+ :param bool neutron_available: Whether we are running in an environment
+ with neutron
+ :param bool create_networks: Whether dynamic project networks should be
+ created or not
+ :param project_network_cidr: The CIDR to use for created project
+ networks
+ :param project_network_mask_bits: The network mask bits to use for
+ created project networks
+ :param public_network_id: The id for the public network to use
"""
super(DynamicCredentialProvider, self).__init__(
identity_version=identity_version, admin_role=admin_role,
@@ -56,7 +75,16 @@
self.network_resources = network_resources
self._creds = {}
self.ports = []
+ self.resource_prefix = resource_prefix or ''
+ self.neutron_available = neutron_available
+ self.create_networks = create_networks
+ self.project_network_cidr = project_network_cidr
+ self.project_network_mask_bits = project_network_mask_bits
+ self.public_network_id = public_network_id
self.default_admin_creds = admin_creds
+ self.identity_admin_domain_scope = identity_admin_domain_scope
+ self.identity_admin_role = identity_admin_role or 'admin'
+ self.extra_roles = extra_roles or []
(self.identity_admin_client,
self.tenants_admin_client,
self.users_admin_client,
@@ -98,7 +126,7 @@
else:
# We use a dedicated client manager for identity client in case we
# need a different token scope for them.
- scope = 'domain' if CONF.identity.admin_domain_scope else 'project'
+ scope = 'domain' if self.identity_admin_domain_scope else 'project'
identity_os = clients.Manager(self.default_admin_creds,
scope=scope)
return (identity_os.identity_v3_client,
@@ -124,7 +152,7 @@
"""
root = self.name
- project_name = data_utils.rand_name(root)
+ project_name = data_utils.rand_name(root, prefix=self.resource_prefix)
project_desc = project_name + "-desc"
project = self.creds_client.create_project(
name=project_name, description=project_desc)
@@ -133,21 +161,20 @@
# having the same ID in both makes it easier to match them and debug.
username = project_name
user_password = data_utils.rand_password()
- email = data_utils.rand_name(root) + "@example.com"
+ email = data_utils.rand_name(
+ root, prefix=self.resource_prefix) + "@example.com"
user = self.creds_client.create_user(
username, user_password, project, email)
- if 'user' in user:
- user = user['user']
role_assigned = False
if admin:
self.creds_client.assign_user_role(user, project, self.admin_role)
role_assigned = True
if (self.identity_version == 'v3' and
- CONF.identity.admin_domain_scope):
+ self.identity_admin_domain_scope):
self.creds_client.assign_user_role_on_domain(
- user, CONF.identity.admin_role)
+ user, self.identity_admin_role)
# Add roles specified in config file
- for conf_role in CONF.auth.tempest_roles:
+ for conf_role in self.extra_roles:
self.creds_client.assign_user_role(user, project, conf_role)
role_assigned = True
# Add roles requested by caller
@@ -191,26 +218,27 @@
if self.network_resources['router']:
if (not self.network_resources['subnet'] or
not self.network_resources['network']):
- raise exceptions.InvalidConfiguration(
+ raise lib_exc.InvalidConfiguration(
'A router requires a subnet and network')
elif self.network_resources['subnet']:
if not self.network_resources['network']:
- raise exceptions.InvalidConfiguration(
+ raise lib_exc.InvalidConfiguration(
'A subnet requires a network')
elif self.network_resources['dhcp']:
- raise exceptions.InvalidConfiguration('DHCP requires a subnet')
+ raise lib_exc.InvalidConfiguration('DHCP requires a subnet')
- data_utils.rand_name_root = data_utils.rand_name(self.name)
+ rand_name_root = data_utils.rand_name(
+ self.name, prefix=self.resource_prefix)
if not self.network_resources or self.network_resources['network']:
- network_name = data_utils.rand_name_root + "-network"
+ network_name = rand_name_root + "-network"
network = self._create_network(network_name, tenant_id)
try:
if not self.network_resources or self.network_resources['subnet']:
- subnet_name = data_utils.rand_name_root + "-subnet"
+ subnet_name = rand_name_root + "-subnet"
subnet = self._create_subnet(subnet_name, tenant_id,
network['id'])
if not self.network_resources or self.network_resources['router']:
- router_name = data_utils.rand_name_root + "-router"
+ router_name = rand_name_root + "-router"
router = self._create_router(router_name, tenant_id)
self._add_router_interface(router['id'], subnet['id'])
except Exception:
@@ -236,8 +264,8 @@
return resp_body['network']
def _create_subnet(self, subnet_name, tenant_id, network_id):
- base_cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
- mask_bits = CONF.network.project_network_mask_bits
+ base_cidr = netaddr.IPNetwork(self.project_network_cidr)
+ mask_bits = self.project_network_mask_bits
for subnet_cidr in base_cidr.subnet(mask_bits):
try:
if self.network_resources:
@@ -266,7 +294,7 @@
def _create_router(self, router_name, tenant_id):
external_net_id = dict(
- network_id=CONF.network.public_network_id)
+ network_id=self.public_network_id)
resp_body = self.routers_admin_client.create_router(
name=router_name,
external_gateway_info=external_net_id,
@@ -290,9 +318,8 @@
# Maintained until tests are ported
LOG.info("Acquired dynamic creds:\n credentials: %s"
% credentials)
- if (CONF.service_available.neutron and
- not CONF.baremetal.driver_enabled and
- CONF.auth.create_isolated_networks):
+ if (self.neutron_available and
+ self.create_networks):
network, subnet, router = self._create_network_resources(
credentials.tenant_id)
credentials.set_resources(network=network, subnet=subnet,
@@ -401,9 +428,18 @@
except lib_exc.NotFound:
LOG.warning("user with name: %s not found for delete" %
creds.username)
+ # NOTE(zhufl): Only when neutron's security_group ext is
+ # enabled, _cleanup_default_secgroup will not raise error. But
+ # here cannot use test.is_extension_enabled for it will cause
+ # "circular dependency". So here just use try...except to
+ # ensure tenant deletion without big changes.
try:
- if CONF.service_available.neutron:
+ if self.neutron_available:
self._cleanup_default_secgroup(creds.tenant_id)
+ except lib_exc.NotFound:
+ LOG.warning("failed to cleanup tenant %s's secgroup" %
+ creds.tenant_name)
+ try:
self.creds_client.delete_project(creds.tenant_id)
except lib_exc.NotFound:
LOG.warning("tenant with name: %s not found for delete" %
diff --git a/tempest/common/preprov_creds.py b/tempest/common/preprov_creds.py
index 5992d24..5e23696 100644
--- a/tempest/common/preprov_creds.py
+++ b/tempest/common/preprov_creds.py
@@ -21,10 +21,10 @@
import yaml
from tempest import clients
-from tempest.common import cred_provider
from tempest.common import fixed_network
from tempest import exceptions
from tempest.lib import auth
+from tempest.lib.common import cred_provider
from tempest.lib import exceptions as lib_exc
LOG = logging.getLogger(__name__)
@@ -35,7 +35,7 @@
with open(path, 'r') as yaml_file:
accounts = yaml.load(yaml_file)
except IOError:
- raise exceptions.InvalidConfiguration(
+ raise lib_exc.InvalidConfiguration(
'The path for the test accounts file: %s '
'could not be found' % path)
return accounts
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 7cb9ebe..9ec217f 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -19,7 +19,6 @@
from oslo_log import log as logging
from tempest import config
-from tempest import exceptions
from tempest.lib.common import ssh
from tempest.lib.common.utils import test_utils
import tempest.lib.exceptions
@@ -218,8 +217,8 @@
supported_clients = ['udhcpc', 'dhclient']
dhcp_client = CONF.scenario.dhcp_client
if dhcp_client not in supported_clients:
- raise exceptions.InvalidConfiguration('%s DHCP client unsupported'
- % dhcp_client)
+ raise tempest.lib.exceptions.InvalidConfiguration(
+ '%s DHCP client unsupported' % dhcp_client)
if dhcp_client == 'udhcpc' and not fixed_ip:
raise ValueError("need to set 'fixed_ip' for udhcpc client")
return getattr(self, '_renew_lease_' + dhcp_client)(fixed_ip=fixed_ip)
diff --git a/tempest/services/volume/v2/json/admin/hosts_client.py b/tempest/common/utils/net_info.py
similarity index 64%
rename from tempest/services/volume/v2/json/admin/hosts_client.py
rename to tempest/common/utils/net_info.py
index e092c6a..9b0a083 100644
--- a/tempest/services/volume/v2/json/admin/hosts_client.py
+++ b/tempest/common/utils/net_info.py
@@ -1,4 +1,3 @@
-# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -12,10 +11,15 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
+import re
-from tempest.services.volume.base.admin import base_hosts_client
+RE_OWNER = re.compile('^network:.*router_.*interface.*')
-class HostsClient(base_hosts_client.BaseHostsClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
+def _is_owner_router_interface(owner):
+ return bool(RE_OWNER.match(owner))
+
+
+def is_router_interface_port(port):
+ """Based on the port attributes determines is it a router interface."""
+ return _is_owner_router_interface(port['device_owner'])
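is_router_interface_port() matches on the port's device_owner attribute, so it covers the various router interface owners Neutron reports (legacy, DVR, HA) without listing them. A small usage sketch; the device_owner strings below are examples, not an exhaustive list::

    from tempest.common.utils import net_info

    ports = [
        {'device_owner': 'network:router_interface'},
        {'device_owner': 'network:ha_router_replicated_interface'},
        {'device_owner': 'compute:nova'},
    ]

    router_ports = [p for p in ports if net_info.is_router_interface_port(p)]
    # the first two ports match RE_OWNER, the compute port does not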
diff --git a/tempest/common/utils/net_utils.py b/tempest/common/utils/net_utils.py
index d98fb32..f0d3da3 100644
--- a/tempest/common/utils/net_utils.py
+++ b/tempest/common/utils/net_utils.py
@@ -12,7 +12,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import itertools
import netaddr
from tempest.lib import exceptions as lib_exc
@@ -38,11 +37,17 @@
for fixed_ip in port.get('fixed_ips'):
alloc_set.add(fixed_ip['ip_address'])
+ # exclude gateway_ip of subnet
+ gateway_ip = subnet['subnet']['gateway_ip']
+ if gateway_ip:
+ alloc_set.add(gateway_ip)
+
av_set = subnet_set - alloc_set
- ip_list = [str(ip) for ip in itertools.islice(av_set, count)]
-
- if len(ip_list) != count:
- msg = "Insufficient IP addresses available"
- raise lib_exc.BadRequest(message=msg)
-
- return ip_list
+ addrs = []
+ for cidr in reversed(av_set.iter_cidrs()):
+ for ip in reversed(cidr):
+ addrs.append(str(ip))
+ if len(addrs) == count:
+ return addrs
+ msg = "Insufficient IP addresses available"
+ raise lib_exc.BadRequest(message=msg)
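The rewritten allocation above subtracts the in-use addresses (now including the subnet's gateway_ip) from the subnet and then walks the free set from the top down. A standalone sketch of that approach with netaddr, assuming the allocated addresses are already collected into a list::

    import netaddr

    def pick_unused_ips(subnet_cidr, allocated, gateway_ip, count):
        subnet_set = netaddr.IPSet([netaddr.IPNetwork(subnet_cidr)])
        alloc_set = netaddr.IPSet(allocated)
        if gateway_ip:
            alloc_set.add(gateway_ip)

        addrs = []
        # walk the highest addresses first, as the patch does
        for cidr in reversed((subnet_set - alloc_set).iter_cidrs()):
            for ip in reversed(cidr):
                addrs.append(str(ip))
                if len(addrs) == count:
                    return addrs
        raise ValueError("Insufficient IP addresses available")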
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index df08e30..c1942d6 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -53,9 +53,7 @@
return
# NOTE(afazekas): The instance is in "ready for action state"
# when no task in progress
- # NOTE(afazekas): Converted to string because of the XML
- # responses
- if str(task_state) == "None":
+ if task_state is None:
# without state api extension 3 sec usually enough
time.sleep(CONF.compute.ready_wait)
return
@@ -141,7 +139,7 @@
while int(time.time()) - start < client.build_timeout:
image = show_image(image_id)
# Compute image client returns response wrapped in 'image' element
- # which is not case with Glance image client.
+ # which is not the case with Glance image client.
if 'image' in image:
image = image['image']
@@ -212,6 +210,27 @@
raise exceptions.TimeoutException(message)
+def wait_for_backup_status(client, backup_id, status):
+ """Waits for a Backup to reach a given status."""
+ body = client.show_backup(backup_id)['backup']
+ backup_status = body['status']
+ start = int(time.time())
+
+ while backup_status != status:
+ time.sleep(client.build_interval)
+ body = client.show_backup(backup_id)['backup']
+ backup_status = body['status']
+ if backup_status == 'error' and backup_status != status:
+ raise lib_exc.VolumeBackupException(backup_id=backup_id)
+
+ if int(time.time()) - start >= client.build_timeout:
+ message = ('Volume backup %s failed to reach %s status '
+ '(current %s) within the required time (%s s).' %
+ (backup_id, status, backup_status,
+ client.build_timeout))
+ raise exceptions.TimeoutException(message)
+
+
def wait_for_bm_node_status(client, node_id, attr, status):
"""Waits for a baremetal node attribute to reach given status.
@@ -239,3 +258,35 @@
if caller:
message = '(%s) %s' % (caller, message)
raise exceptions.TimeoutException(message)
+
+
+def wait_for_qos_operations(client, qos_id, operation, args=None):
+ """Waits for a qos operations to be completed.
+
+ NOTE : operation value is required for wait_for_qos_operations()
+ operation = 'qos-key' / 'disassociate' / 'disassociate-all'
+ args = keys[] when operation = 'qos-key'
+ args = volume-type-id disassociated when operation = 'disassociate'
+ args = None when operation = 'disassociate-all'
+ """
+ start_time = int(time.time())
+ while True:
+ if operation == 'qos-key-unset':
+ body = client.show_qos(qos_id)['qos_specs']
+ if not any(key in body['specs'] for key in args):
+ return
+ elif operation == 'disassociate':
+ body = client.show_association_qos(qos_id)['qos_associations']
+ if not any(args in body[i]['id'] for i in range(0, len(body))):
+ return
+ elif operation == 'disassociate-all':
+ body = client.show_association_qos(qos_id)['qos_associations']
+ if not body:
+ return
+ else:
+ msg = (" operation value is either not defined or incorrect.")
+ raise lib_exc.UnprocessableEntity(msg)
+
+ if int(time.time()) - start_time >= client.build_timeout:
+ raise exceptions.TimeoutException
+ time.sleep(client.build_interval)
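wait_for_backup_status follows the usual tempest waiter shape: poll the show call, fail fast on an error state, and give up after client.build_timeout. A generic sketch of that shape, with RuntimeError standing in for the tempest exception classes::

    import time

    def wait_for_status(show_resource, resource_id, status,
                        build_interval, build_timeout):
        start = int(time.time())
        current = show_resource(resource_id)['status']
        while current != status:
            time.sleep(build_interval)
            current = show_resource(resource_id)['status']
            if current == 'error' and status != 'error':
                raise RuntimeError('%s went into error state' % resource_id)
            if int(time.time()) - start >= build_timeout:
                raise RuntimeError(
                    '%s failed to reach %s status (current %s) within %ss' %
                    (resource_id, status, current, build_timeout))
        return current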
diff --git a/tempest/config.py b/tempest/config.py
index 0c2b913..bc9215c 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -26,6 +26,7 @@
import testtools
from tempest.lib import exceptions
+from tempest.lib.services import clients
from tempest.test_discover import plugins
@@ -173,6 +174,16 @@
"a domain scoped token to use admin APIs")
]
+service_clients_group = cfg.OptGroup(name='service-clients',
+ title="Service Clients Options")
+
+ServiceClientsGroup = [
+ cfg.IntOpt('http_timeout',
+ default=60,
+ help='Timeout in seconds to wait for the http request to '
+ 'return'),
+]
+
identity_feature_group = cfg.OptGroup(name='identity-feature-enabled',
title='Enabled Identity Features')
@@ -234,7 +245,7 @@
"projects. If multiple networks are available for a "
"project, this is the network which will be used for "
"creating servers if tempest does not create a network or "
- "s network is not specified elsewhere. It may be used for "
+ "a network is not specified elsewhere. It may be used for "
"ssh validation only if floating IPs are disabled."),
cfg.StrOpt('catalog_type',
default='compute',
@@ -293,6 +304,12 @@
title="Enabled Compute Service Features")
ComputeFeaturesGroup = [
+ # NOTE(mriedem): This is a feature toggle for bug 1175464 which is fixed in
+ # mitaka and newton. This option can be removed after liberty-eol.
+ cfg.BoolOpt('allow_port_security_disabled',
+ default=False,
+ help='Does the test environment support creating ports in a '
+ 'network where port security is disabled?'),
cfg.BoolOpt('disk_config',
default=True,
help="If false, skip disk config tests"),
@@ -301,7 +318,13 @@
help='A list of enabled compute extensions with a special '
'entry all which indicates every extension is enabled. '
'Each extension should be specified with alias name. '
- 'Empty list indicates all extensions are disabled'),
+ 'Empty list indicates all extensions are disabled',
+ deprecated_for_removal=True,
+ deprecated_reason='The Nova extensions API and mechanism '
+ 'is deprecated. This option will be '
+ 'removed when all releases supported '
+ 'by tempest no longer contain the Nova '
+ 'extensions API and mechanism.'),
cfg.BoolOpt('change_password',
default=False,
help="Does the test environment support changing the admin "
@@ -322,10 +345,12 @@
cfg.BoolOpt('suspend',
default=True,
help="Does the test environment support suspend/resume?"),
+ cfg.BoolOpt('cold_migration',
+ default=True,
+ help="Does the test environment support cold migration?"),
cfg.BoolOpt('live_migration',
default=True,
- help="Does the test environment support live migration "
- "available?"),
+ help="Does the test environment support live migration?"),
cfg.BoolOpt('metadata_service',
default=True,
help="Does the test environment support metadata service? "
@@ -359,7 +384,8 @@
default=True,
help='Enables returning of the instance password by the '
'relevant server API calls such as create, rebuild '
- 'or rescue.'),
+ 'or rescue. This configuration value should be the same as '
+ 'nova.conf: DEFAULT.enable_instance_password'),
cfg.BoolOpt('interface_attach',
default=True,
help='Does the test environment support dynamic network '
@@ -369,8 +395,9 @@
help='Does the test environment support creating snapshot '
'images of running instances?'),
cfg.BoolOpt('nova_cert',
- default=True,
- help='Does the test environment have the nova cert running?'),
+ default=False,
+ help='Does the test environment have the nova cert running?',
+ deprecated_for_removal=True),
cfg.BoolOpt('personality',
default=False,
help='Does the test environment support server personality'),
@@ -391,7 +418,10 @@
"list indicates all filters are disabled. The full "
"available list of filters is in nova.conf: "
"DEFAULT.scheduler_available_filters"),
-
+ cfg.BoolOpt('swap_volume',
+ default=False,
+ help='Does the test environment support in-place swapping of '
+ 'volumes attached to a server instance?'),
]
@@ -491,7 +521,7 @@
default=False,
help="Whether project networks can be reached directly from "
"the test client. This must be set to True when the "
- "'fixed' ssh_connect_method is selected."),
+ "'fixed' connect_method is selected."),
cfg.StrOpt('public_network_id',
default="",
help="Id of the public network that provides external "
@@ -527,6 +557,15 @@
default=["1.0.0.0/16", "2.0.0.0/16"],
help="List of ip pools"
" for subnetpools creation"),
+ # TODO(ylobankov): Delete this option once the Liberty release is EOL.
+ cfg.BoolOpt('dvr_extra_resources',
+ default=True,
+ help="Whether or not to create internal network, subnet, "
+ "port and add network interface to distributed router "
+ "in L3 agent scheduler test. Extra resources need to be "
+ "provisioned in order to bind router to L3 agent in the "
+ "Liberty release or older, and are not required since "
+ "the Mitaka release.")
]
network_feature_group = cfg.OptGroup(name='network-feature-enabled',
@@ -552,6 +591,9 @@
default=True,
help="Does the test environment support changing"
" port admin state"),
+ cfg.BoolOpt('port_security',
+ default=False,
+ help="Does the test environment support port security?"),
]
validation_group = cfg.OptGroup(name='validation',
@@ -642,7 +684,7 @@
cfg.StrOpt('network_for_ssh',
default='public',
help="Network used for SSH connections. Ignored if "
- "connect_method=floating or run_validation=false.",
+ "connect_method=floating.",
deprecated_opts=[cfg.DeprecatedOpt('network_for_ssh',
group='compute')]),
]
@@ -857,72 +899,6 @@
help="Value must match heat configuration of the same name."),
]
-data_processing_group = cfg.OptGroup(name="data-processing",
- title="Data Processing options")
-
-DataProcessingGroup = [
- cfg.StrOpt('catalog_type',
- default='data-processing',
- deprecated_group="data_processing",
- help="Catalog type of the data processing service."),
- cfg.StrOpt('endpoint_type',
- default='publicURL',
- choices=['public', 'admin', 'internal',
- 'publicURL', 'adminURL', 'internalURL'],
- deprecated_group="data_processing",
- help="The endpoint type to use for the data processing "
- "service."),
-]
-
-
-data_processing_feature_group = cfg.OptGroup(
- name="data-processing-feature-enabled",
- title="Enabled Data Processing features")
-
-DataProcessingFeaturesGroup = [
- cfg.ListOpt('plugins',
- default=["vanilla", "cdh"],
- deprecated_group="data_processing-feature-enabled",
- help="List of enabled data processing plugins")
-]
-
-stress_group = cfg.OptGroup(name='stress', title='Stress Test Options')
-
-StressGroup = [
- cfg.StrOpt('nova_logdir',
- help='Directory containing log files on the compute nodes'),
- cfg.IntOpt('max_instances',
- default=16,
- help='Maximum number of instances to create during test.'),
- cfg.StrOpt('controller',
- help='Controller host.'),
- # new stress options
- cfg.StrOpt('target_controller',
- help='Controller host.'),
- cfg.StrOpt('target_ssh_user',
- help='ssh user.'),
- cfg.StrOpt('target_private_key_path',
- help='Path to private key.'),
- cfg.StrOpt('target_logfiles',
- help='regexp for list of log files.'),
- cfg.IntOpt('log_check_interval',
- default=60,
- help='time (in seconds) between log file error checks.'),
- cfg.IntOpt('default_thread_number_per_action',
- default=4,
- help='The number of threads created while stress test.'),
- cfg.BoolOpt('leave_dirty_stack',
- default=False,
- help='Prevent the cleaning (tearDownClass()) between'
- ' each stress test run if an exception occurs'
- ' during this run.'),
- cfg.BoolOpt('full_clean_stack',
- default=False,
- help='Allows a full cleaning process after a stress test.'
- ' Caution : this cleanup will remove every objects of'
- ' every project.')
-]
-
scenario_group = cfg.OptGroup(name='scenario', title='Scenario Test Options')
@@ -1119,6 +1095,7 @@
(compute_group, ComputeGroup),
(compute_features_group, ComputeFeaturesGroup),
(identity_group, IdentityGroup),
+ (service_clients_group, ServiceClientsGroup),
(identity_feature_group, IdentityFeatureGroup),
(image_group, ImageGroup),
(image_feature_group, ImageFeaturesGroup),
@@ -1130,9 +1107,6 @@
(object_storage_group, ObjectStoreGroup),
(object_storage_feature_group, ObjectStoreFeaturesGroup),
(orchestration_group, OrchestrationGroup),
- (data_processing_group, DataProcessingGroup),
- (data_processing_feature_group, DataProcessingFeaturesGroup),
- (stress_group, StressGroup),
(scenario_group, ScenarioGroup),
(service_available_group, ServiceAvailableGroup),
(debug_group, DebugGroup),
@@ -1184,6 +1158,7 @@
self.compute = _CONF.compute
self.compute_feature_enabled = _CONF['compute-feature-enabled']
self.identity = _CONF.identity
+ self.service_clients = _CONF['service-clients']
self.identity_feature_enabled = _CONF['identity-feature-enabled']
self.image = _CONF.image
self.image_feature_enabled = _CONF['image-feature-enabled']
@@ -1196,10 +1171,6 @@
self.object_storage_feature_enabled = _CONF[
'object-storage-feature-enabled']
self.orchestration = _CONF.orchestration
- self.data_processing = _CONF['data-processing']
- self.data_processing_feature_enabled = _CONF[
- 'data-processing-feature-enabled']
- self.stress = _CONF.stress
self.scenario = _CONF.scenario
self.service_available = _CONF.service_available
self.debug = _CONF.debug
@@ -1275,6 +1246,15 @@
lockutils.set_defaults(lock_dir)
self._config = TempestConfigPrivate(config_path=self._path)
+ # Pushing tempest internal service client configuration to the
+ # service clients register. Doing this in the config module ensures
+ # that the configuration is available by the time we register the
+ # service clients.
+ # NOTE(andreaf) This has to be done at the time the first
+ # attribute is accessed, to ensure all plugins have been already
+ # loaded, options registered, and _config is set.
+ _register_tempest_service_clients()
+
return getattr(self._config, attr)
def set_config_path(self, path):
@@ -1372,6 +1352,7 @@
* `disable_ssl_certificate_validation`
* `ca_certs`
* `trace_requests`
+ * `http_timeout`
The dict returned by this does not fit a few service clients:
@@ -1393,7 +1374,8 @@
'disable_ssl_certificate_validation':
CONF.identity.disable_ssl_certificate_validation,
'ca_certs': CONF.identity.ca_certificates_file,
- 'trace_requests': CONF.debug.trace_requests
+ 'trace_requests': CONF.debug.trace_requests,
+ 'http_timeout': CONF.service_clients.http_timeout
}
if service_client_name is None:
@@ -1432,3 +1414,29 @@
# Set service
_parameters['service'] = getattr(options, 'catalog_type')
return _parameters
+
+
+def _register_tempest_service_clients():
+ # Register tempest's own service clients using the same mechanism used
+ # for external plugins.
+ # The configuration data is pushed to the registry so that automatic
+ # configuration of tempest's own service clients is possible both for
+ # tempest as well as for the plugins.
+ service_clients = clients.tempest_modules()
+ registry = clients.ClientsRegistry()
+ all_clients = []
+ for service_client in service_clients:
+ module = service_clients[service_client]
+ configs = service_client.split('.')[0]
+ service_client_data = dict(
+ name=service_client.replace('.', '_'),
+ service_version=service_client,
+ module_path=module.__name__,
+ client_names=module.__all__,
+ **service_client_config(configs)
+ )
+ all_clients.append(service_client_data)
+ # NOTE(andreaf) Internal service clients do not actually belong
+ # to a plugin, so using '__tempest__' to indicate a virtual plugin
+ # which holds internal service clients.
+ registry.register_service_client('__tempest__', all_clients)
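
The block above registers Tempest's in-tree service clients under the '__tempest__' virtual plugin; an external plugin would feed the same registry in the same shape. A minimal sketch, in which the plugin name, module path and client class are invented purely for illustration:

    from tempest.lib.services import clients

    # Hypothetical plugin registration mirroring _register_tempest_service_clients
    registry = clients.ClientsRegistry()
    registry.register_service_client('my_plugin', [dict(
        name='fancy_v1',                  # attribute name exposed by ServiceClients
        service_version='fancy.v1',       # must be unique across Tempest and plugins
        module_path='my_plugin.services.fancy.v1',
        client_names=['FancyClient'],     # classes exported by that module
        region='RegionOne')])             # extra kwargs passed to every client
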
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index f534f30..727d54e 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -17,10 +17,6 @@
from tempest.lib import exceptions
-class InvalidConfiguration(exceptions.TempestException):
- message = "Invalid Configuration"
-
-
class InvalidServiceTag(exceptions.TempestException):
message = "Invalid service tag"
@@ -53,24 +49,21 @@
message = "Snapshot %(snapshot_id)s failed to build and is in ERROR status"
-class VolumeBackupException(exceptions.TempestException):
- message = "Volume backup %(backup_id)s failed and is in ERROR status"
-
-
class StackBuildErrorException(exceptions.TempestException):
message = ("Stack %(stack_identifier)s is in %(stack_status)s status "
"due to '%(stack_status_reason)s'")
class ServerUnreachable(exceptions.TempestException):
- message = "The server is not reachable via the configured network"
+ message = ("Server %(server_id)s is not reachable via "
+ "the configured network")
# NOTE(andreaf) This exception is added here to facilitate the migration
# of get_network_from_name and preprov_creds to tempest.lib, and it should
# be migrated along with them
class InvalidTestResource(exceptions.TempestException):
- message = "%(name) is not a valid %(type), or the name is ambiguous"
+ message = "%(name)s is not a valid %(type)s, or the name is ambiguous"
class RFCViolation(exceptions.RestClientException):
diff --git a/tempest/hacking/checks.py b/tempest/hacking/checks.py
index e2d6585..4123ae5 100644
--- a/tempest/hacking/checks.py
+++ b/tempest/hacking/checks.py
@@ -19,7 +19,7 @@
PYTHON_CLIENTS = ['cinder', 'glance', 'keystone', 'nova', 'swift', 'neutron',
- 'ironic', 'savanna', 'heat', 'sahara']
+ 'ironic', 'heat', 'sahara']
PYTHON_CLIENT_RE = re.compile('import (%s)client' % '|'.join(PYTHON_CLIENTS))
TEST_DEFINITION = re.compile(r'^\s*def test.*')
diff --git a/tempest/lib/api_schema/response/compute/v2_1/images.py b/tempest/lib/api_schema/response/compute/v2_1/images.py
index b0f1934..f65b9d8 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/images.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/images.py
@@ -19,11 +19,13 @@
image_links = copy.deepcopy(parameter_types.links)
image_links['items']['properties'].update({'type': {'type': 'string'}})
+image_status_enums = ['ACTIVE', 'SAVING', 'DELETED', 'ERROR', 'UNKNOWN']
+
common_image_schema = {
'type': 'object',
'properties': {
'id': {'type': 'string'},
- 'status': {'type': 'string'},
+ 'status': {'enum': image_status_enums},
'updated': {'type': 'string'},
'links': image_links,
'name': {'type': ['string', 'null']},
diff --git a/tempest/api/data_processing/__init__.py b/tempest/lib/api_schema/response/compute/v2_16/__init__.py
similarity index 100%
copy from tempest/api/data_processing/__init__.py
copy to tempest/lib/api_schema/response/compute/v2_16/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_16/servers.py b/tempest/lib/api_schema/response/compute/v2_16/servers.py
new file mode 100644
index 0000000..6868110
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_16/servers.py
@@ -0,0 +1,160 @@
+# Copyright 2014 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_1 import parameter_types
+from tempest.lib.api_schema.response.compute.v2_9 import servers
+
+# Compute microversion 2.16:
+# 1. New attributes in 'server' dict.
+# 'host_status'
+
+server_detail = {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'name': {'type': 'string'},
+ 'status': {'type': 'string'},
+ 'image': {'oneOf': [
+ {'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ 'required': ['id', 'links']},
+ {'type': ['string', 'null']}
+ ]},
+ 'flavor': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ 'required': ['id', 'links']
+ },
+ 'fault': {
+ 'type': 'object',
+ 'properties': {
+ 'code': {'type': 'integer'},
+ 'created': {'type': 'string'},
+ 'message': {'type': 'string'},
+ 'details': {'type': 'string'},
+ },
+ 'additionalProperties': False,
+ # NOTE(gmann): 'details' is not necessary to be present
+ # in the 'fault'. So it is not defined as 'required'.
+ 'required': ['code', 'created', 'message']
+ },
+ 'user_id': {'type': 'string'},
+ 'tenant_id': {'type': 'string'},
+ 'created': {'type': 'string'},
+ 'updated': {'type': 'string'},
+ 'progress': {'type': 'integer'},
+ 'metadata': {'type': 'object'},
+ 'links': parameter_types.links,
+ 'addresses': parameter_types.addresses,
+ 'hostId': {'type': 'string'},
+ 'OS-DCF:diskConfig': {'type': 'string'},
+ 'accessIPv4': parameter_types.access_ip_v4,
+ 'accessIPv6': parameter_types.access_ip_v6,
+ 'key_name': {'type': ['string', 'null']},
+ 'security_groups': {'type': 'array'},
+ 'OS-SRV-USG:launched_at': {'type': ['string', 'null']},
+ 'OS-SRV-USG:terminated_at': {'type': ['string', 'null']},
+ 'OS-EXT-AZ:availability_zone': {'type': 'string'},
+ 'OS-EXT-STS:task_state': {'type': ['string', 'null']},
+ 'OS-EXT-STS:vm_state': {'type': 'string'},
+ 'OS-EXT-STS:power_state': {'type': 'integer'},
+ 'OS-EXT-SRV-ATTR:host': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:instance_name': {'type': 'string'},
+ 'OS-EXT-SRV-ATTR:hypervisor_hostname': {'type': ['string', 'null']},
+ 'config_drive': {'type': 'string'},
+ 'os-extended-volumes:volumes_attached': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'delete_on_termination': {'type': 'boolean'}
+ },
+ 'additionalProperties': False,
+ },
+ },
+ 'OS-EXT-SRV-ATTR:reservation_id': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:launch_index': {'type': 'integer'},
+ 'OS-EXT-SRV-ATTR:kernel_id': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:ramdisk_id': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:hostname': {'type': 'string'},
+ 'OS-EXT-SRV-ATTR:root_device_name': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:user_data': {'type': ['string', 'null']},
+ 'locked': {'type': 'boolean'},
+ # NOTE(gmann): new attributes in version 2.16
+ 'host_status': {'type': 'string'}
+ },
+ 'additionalProperties': False,
+ # NOTE(gmann): 'progress' attribute is present in the response
+ # only when server's status is one of the progress statuses
+ # ("ACTIVE","BUILD", "REBUILD", "RESIZE","VERIFY_RESIZE")
+ # 'fault' attribute is present in the response
+ # only when server's status is one of the "ERROR", "DELETED".
+ # OS-DCF:diskConfig and accessIPv4/v6 are API
+ # extensions, and some environments return a response
+ # without these attributes. So these are not defined as 'required'.
+ 'required': ['id', 'name', 'status', 'image', 'flavor',
+ 'user_id', 'tenant_id', 'created', 'updated',
+ 'metadata', 'links', 'addresses', 'hostId']
+}
+
+server_detail['properties']['addresses']['patternProperties'][
+ '^[a-zA-Z0-9-_.]+$']['items']['properties'].update({
+ 'OS-EXT-IPS:type': {'type': 'string'},
+ 'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
+# NOTE(gmann): Update OS-EXT-IPS:type and OS-EXT-IPS-MAC:mac_addr
+# attributes in server address. Those are API extension,
+# and some environments return a response without
+# these attributes. So they are not 'required'.
+
+get_server = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'server': server_detail
+ },
+ 'additionalProperties': False,
+ 'required': ['server']
+ }
+}
+
+list_servers_detail = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'servers': {
+ 'type': 'array',
+ 'items': server_detail
+ },
+ 'servers_links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ # NOTE(gmann): servers_links attribute is not necessary to be
+ # present always. So it is not 'required'.
+ 'required': ['servers']
+ }
+}
+
+list_servers = copy.deepcopy(servers.list_servers)
diff --git a/tempest/lib/api_schema/response/compute/v2_19/servers.py b/tempest/lib/api_schema/response/compute/v2_19/servers.py
index 883839e..05cc32c 100644
--- a/tempest/lib/api_schema/response/compute/v2_19/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_19/servers.py
@@ -15,15 +15,18 @@
import copy
from tempest.lib.api_schema.response.compute.v2_1 import servers as serversv21
-from tempest.lib.api_schema.response.compute.v2_9 import servers as serversv29
+from tempest.lib.api_schema.response.compute.v2_16 import servers \
+ as serversv216
-get_server = copy.deepcopy(serversv29.get_server)
+list_servers = copy.deepcopy(serversv216.list_servers)
+
+get_server = copy.deepcopy(serversv216.get_server)
get_server['response_body']['properties']['server'][
'properties'].update({'description': {'type': ['string', 'null']}})
get_server['response_body']['properties']['server'][
'required'].append('description')
-list_servers_detail = copy.deepcopy(serversv29.list_servers_detail)
+list_servers_detail = copy.deepcopy(serversv216.list_servers_detail)
list_servers_detail['response_body']['properties']['servers']['items'][
'properties'].update({'description': {'type': ['string', 'null']}})
list_servers_detail['response_body']['properties']['servers']['items'][
diff --git a/tempest/api/data_processing/__init__.py b/tempest/lib/api_schema/response/compute/v2_23/__init__.py
similarity index 100%
copy from tempest/api/data_processing/__init__.py
copy to tempest/lib/api_schema/response/compute/v2_23/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_23/migrations.py b/tempest/lib/api_schema/response/compute/v2_23/migrations.py
new file mode 100644
index 0000000..3cd0f6e
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_23/migrations.py
@@ -0,0 +1,62 @@
+# Copyright 2014 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.api_schema.response.compute.v2_1 import parameter_types
+
+# Compute microversion 2.23:
+# New attributes in 'migrations' list.
+# 'migration_type'
+# 'links'
+
+list_migrations = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'migrations': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'integer'},
+ 'status': {'type': ['string', 'null']},
+ 'instance_uuid': {'type': ['string', 'null']},
+ 'source_node': {'type': ['string', 'null']},
+ 'source_compute': {'type': ['string', 'null']},
+ 'dest_node': {'type': ['string', 'null']},
+ 'dest_compute': {'type': ['string', 'null']},
+ 'dest_host': {'type': ['string', 'null']},
+ 'old_instance_type_id': {'type': ['integer', 'null']},
+ 'new_instance_type_id': {'type': ['integer', 'null']},
+ 'created_at': {'type': 'string'},
+ 'updated_at': {'type': ['string', 'null']},
+ # New attributes in version 2.23
+ 'migration_type': {'type': ['string', 'null']},
+ 'links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ 'required': [
+ 'id', 'status', 'instance_uuid', 'source_node',
+ 'source_compute', 'dest_node', 'dest_compute',
+ 'dest_host', 'old_instance_type_id',
+ 'new_instance_type_id', 'created_at', 'updated_at',
+ 'migration_type'
+ ]
+ }
+ }
+ },
+ 'additionalProperties': False,
+ 'required': ['migrations']
+ }
+}
diff --git a/tempest/api/data_processing/__init__.py b/tempest/lib/api_schema/response/compute/v2_26/__init__.py
similarity index 100%
copy from tempest/api/data_processing/__init__.py
copy to tempest/lib/api_schema/response/compute/v2_26/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_26/servers.py b/tempest/lib/api_schema/response/compute/v2_26/servers.py
new file mode 100644
index 0000000..bc5d18e
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_26/servers.py
@@ -0,0 +1,47 @@
+# Copyright 2016 IBM Corp.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_1 import servers as servers21
+from tempest.lib.api_schema.response.compute.v2_19 import servers as servers219
+
+# The 2.26 microversion changes the server GET and (detailed) LIST responses to
+# include the server 'tags' which is just a list of strings.
+
+tag_items = {
+ 'type': 'array',
+ 'maxItems': 50,
+ 'items': {
+ 'type': 'string',
+ 'pattern': '^[^,/]*$',
+ 'maxLength': 60
+ }
+}
+
+get_server = copy.deepcopy(servers219.get_server)
+get_server['response_body']['properties']['server'][
+ 'properties'].update({'tags': tag_items})
+get_server['response_body']['properties']['server'][
+ 'required'].append('tags')
+
+list_servers_detail = copy.deepcopy(servers219.list_servers_detail)
+list_servers_detail['response_body']['properties']['servers']['items'][
+ 'properties'].update({'tags': tag_items})
+list_servers_detail['response_body']['properties']['servers']['items'][
+ 'required'].append('tags')
+
+# list response schema wasn't changed for v2.26 so use v2.1
+
+list_servers = copy.deepcopy(servers21.list_servers)
diff --git a/tempest/api/data_processing/__init__.py b/tempest/lib/api_schema/response/compute/v2_3/__init__.py
similarity index 100%
copy from tempest/api/data_processing/__init__.py
copy to tempest/lib/api_schema/response/compute/v2_3/__init__.py
diff --git a/tempest/lib/api_schema/response/compute/v2_3/servers.py b/tempest/lib/api_schema/response/compute/v2_3/servers.py
new file mode 100644
index 0000000..ee16333
--- /dev/null
+++ b/tempest/lib/api_schema/response/compute/v2_3/servers.py
@@ -0,0 +1,166 @@
+# Copyright 2014 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+import copy
+
+from tempest.lib.api_schema.response.compute.v2_1 import parameter_types
+from tempest.lib.api_schema.response.compute.v2_1 import servers
+
+# Compute microversion 2.3:
+# 1. New attributes in 'os-extended-volumes:volumes_attached' dict.
+# 'delete_on_termination'
+# 2. New attributes in 'server' dict.
+# 'OS-EXT-SRV-ATTR:reservation_id'
+# 'OS-EXT-SRV-ATTR:launch_index'
+# 'OS-EXT-SRV-ATTR:kernel_id'
+# 'OS-EXT-SRV-ATTR:ramdisk_id'
+# 'OS-EXT-SRV-ATTR:hostname'
+# 'OS-EXT-SRV-ATTR:root_device_name'
+# 'OS-EXT-SRV-ATTR:user_data'
+
+server_detail = {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'name': {'type': 'string'},
+ 'status': {'type': 'string'},
+ 'image': {'oneOf': [
+ {'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ 'required': ['id', 'links']},
+ {'type': ['string', 'null']}
+ ]},
+ 'flavor': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ 'required': ['id', 'links']
+ },
+ 'fault': {
+ 'type': 'object',
+ 'properties': {
+ 'code': {'type': 'integer'},
+ 'created': {'type': 'string'},
+ 'message': {'type': 'string'},
+ 'details': {'type': 'string'},
+ },
+ 'additionalProperties': False,
+ # NOTE(gmann): 'details' is not necessary to be present
+ # in the 'fault'. So it is not defined as 'required'.
+ 'required': ['code', 'created', 'message']
+ },
+ 'user_id': {'type': 'string'},
+ 'tenant_id': {'type': 'string'},
+ 'created': {'type': 'string'},
+ 'updated': {'type': 'string'},
+ 'progress': {'type': 'integer'},
+ 'metadata': {'type': 'object'},
+ 'links': parameter_types.links,
+ 'addresses': parameter_types.addresses,
+ 'hostId': {'type': 'string'},
+ 'OS-DCF:diskConfig': {'type': 'string'},
+ 'accessIPv4': parameter_types.access_ip_v4,
+ 'accessIPv6': parameter_types.access_ip_v6,
+ 'key_name': {'type': ['string', 'null']},
+ 'security_groups': {'type': 'array'},
+ 'OS-SRV-USG:launched_at': {'type': ['string', 'null']},
+ 'OS-SRV-USG:terminated_at': {'type': ['string', 'null']},
+ 'OS-EXT-AZ:availability_zone': {'type': 'string'},
+ 'OS-EXT-STS:task_state': {'type': ['string', 'null']},
+ 'OS-EXT-STS:vm_state': {'type': 'string'},
+ 'OS-EXT-STS:power_state': {'type': 'integer'},
+ 'OS-EXT-SRV-ATTR:host': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:instance_name': {'type': 'string'},
+ 'OS-EXT-SRV-ATTR:hypervisor_hostname': {'type': ['string', 'null']},
+ 'config_drive': {'type': 'string'},
+ # NOTE(gmann): new attributes in version 2.3
+ 'os-extended-volumes:volumes_attached': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'delete_on_termination': {'type': 'boolean'}
+ },
+ 'additionalProperties': False,
+ },
+ },
+ 'OS-EXT-SRV-ATTR:reservation_id': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:launch_index': {'type': 'integer'},
+ 'OS-EXT-SRV-ATTR:kernel_id': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:ramdisk_id': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:hostname': {'type': 'string'},
+ 'OS-EXT-SRV-ATTR:root_device_name': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:user_data': {'type': ['string', 'null']},
+ },
+ 'additionalProperties': False,
+ # NOTE(gmann): 'progress' attribute is present in the response
+ # only when server's status is one of the progress statuses
+ # ("ACTIVE","BUILD", "REBUILD", "RESIZE","VERIFY_RESIZE")
+ # 'fault' attribute is present in the response
+ # only when server's status is one of the "ERROR", "DELETED".
+ # OS-DCF:diskConfig and accessIPv4/v6 are API
+ # extensions, and some environments return a response
+ # without these attributes. So these are not defined as 'required'.
+ 'required': ['id', 'name', 'status', 'image', 'flavor',
+ 'user_id', 'tenant_id', 'created', 'updated',
+ 'metadata', 'links', 'addresses', 'hostId']
+}
+
+server_detail['properties']['addresses']['patternProperties'][
+ '^[a-zA-Z0-9-_.]+$']['items']['properties'].update({
+ 'OS-EXT-IPS:type': {'type': 'string'},
+ 'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
+# NOTE(gmann): Update OS-EXT-IPS:type and OS-EXT-IPS-MAC:mac_addr
+# attributes in server address. Those are API extension,
+# and some environments return a response without
+# these attributes. So they are not 'required'.
+
+get_server = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'server': server_detail
+ },
+ 'additionalProperties': False,
+ 'required': ['server']
+ }
+}
+
+list_servers_detail = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'servers': {
+ 'type': 'array',
+ 'items': server_detail
+ },
+ 'servers_links': parameter_types.links
+ },
+ 'additionalProperties': False,
+ # NOTE(gmann): servers_links attribute is not necessary to be
+ # present always. So it is not 'required'.
+ 'required': ['servers']
+ }
+}
+
+list_servers = copy.deepcopy(servers.list_servers)
diff --git a/tempest/lib/api_schema/response/compute/v2_9/servers.py b/tempest/lib/api_schema/response/compute/v2_9/servers.py
index e9b7249..470190c 100644
--- a/tempest/lib/api_schema/response/compute/v2_9/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_9/servers.py
@@ -14,7 +14,9 @@
import copy
-from tempest.lib.api_schema.response.compute.v2_1 import servers
+from tempest.lib.api_schema.response.compute.v2_3 import servers
+
+list_servers = copy.deepcopy(servers.list_servers)
get_server = copy.deepcopy(servers.get_server)
get_server['response_body']['properties']['server'][
diff --git a/tempest/lib/auth.py b/tempest/lib/auth.py
index 54a7002..83aa405 100644
--- a/tempest/lib/auth.py
+++ b/tempest/lib/auth.py
@@ -260,11 +260,13 @@
def __init__(self, credentials, auth_url,
disable_ssl_certificate_validation=None,
- ca_certs=None, trace_requests=None, scope='project'):
+ ca_certs=None, trace_requests=None, scope='project',
+ http_timeout=None):
super(KeystoneAuthProvider, self).__init__(credentials, scope)
self.dscv = disable_ssl_certificate_validation
self.ca_certs = ca_certs
self.trace_requests = trace_requests
+ self.http_timeout = http_timeout
self.auth_url = auth_url
self.auth_client = self._auth_client(auth_url)
@@ -342,7 +344,8 @@
def _auth_client(self, auth_url):
return json_v2id.TokenClient(
auth_url, disable_ssl_certificate_validation=self.dscv,
- ca_certs=self.ca_certs, trace_requests=self.trace_requests)
+ ca_certs=self.ca_certs, trace_requests=self.trace_requests,
+ http_timeout=self.http_timeout)
def _auth_params(self):
"""Auth parameters to be passed to the token request
@@ -429,7 +432,8 @@
def _auth_client(self, auth_url):
return json_v3id.V3TokenClient(
auth_url, disable_ssl_certificate_validation=self.dscv,
- ca_certs=self.ca_certs, trace_requests=self.trace_requests)
+ ca_certs=self.ca_certs, trace_requests=self.trace_requests,
+ http_timeout=self.http_timeout)
def _auth_params(self):
"""Auth parameters to be passed to the token request
@@ -595,7 +599,7 @@
def get_credentials(auth_url, fill_in=True, identity_version='v2',
disable_ssl_certificate_validation=None, ca_certs=None,
- trace_requests=None, **kwargs):
+ trace_requests=None, http_timeout=None, **kwargs):
"""Builds a credentials object based on the configured auth_version
:param auth_url (string): Full URI of the OpenStack Identity API(Keystone)
@@ -611,6 +615,8 @@
:param ca_certs: CA certificate bundle for validation of certificates
in SSL API requests to the auth system
:param trace_requests: trace in log API requests to the auth system
+ :param http_timeout: timeout in seconds to wait for the http request to
+ return
:param kwargs (dict): Dict of credential key/value pairs
Examples:
@@ -634,7 +640,8 @@
dscv = disable_ssl_certificate_validation
auth_provider = auth_provider_class(
creds, auth_url, disable_ssl_certificate_validation=dscv,
- ca_certs=ca_certs, trace_requests=trace_requests)
+ ca_certs=ca_certs, trace_requests=trace_requests,
+ http_timeout=http_timeout)
creds = auth_provider.fill_credentials()
return creds
@@ -682,6 +689,10 @@
"""Credentials are equal if attributes in self.ATTRIBUTES are equal"""
return str(self) == str(other)
+ def __ne__(self, other):
+ """Contrary to the __eq__"""
+ return not self.__eq__(other)
+
def __getattr__(self, key):
# If an attribute is set, __getattr__ is not invoked
# If an attribute is not set, and it is a known one, return None
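
The new http_timeout parameter flows from get_credentials() down to the v2/v3 token clients. A rough sketch, assuming a reachable Keystone at a placeholder URI and made-up credentials:

    from tempest.lib import auth

    creds = auth.get_credentials('http://192.0.2.1:5000/v3',
                                 identity_version='v3',
                                 http_timeout=60,
                                 username='demo', password='secret',
                                 project_name='demo',
                                 user_domain_name='Default',
                                 project_domain_name='Default')
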
diff --git a/tempest/lib/base.py b/tempest/lib/base.py
index f687343..33a32ee 100644
--- a/tempest/lib/base.py
+++ b/tempest/lib/base.py
@@ -42,7 +42,7 @@
def setUp(self):
super(BaseTestCase, self).setUp()
if not self.setUpClassCalled:
- raise RuntimeError("setUpClass does not calls the super's"
+ raise RuntimeError("setUpClass does not call the super's "
"setUpClass in the "
+ self.__class__.__name__)
test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0)
diff --git a/tempest/lib/cmd/check_uuid.py b/tempest/lib/cmd/check_uuid.py
index be3aa49..1239ac5 100755
--- a/tempest/lib/cmd/check_uuid.py
+++ b/tempest/lib/cmd/check_uuid.py
@@ -69,7 +69,8 @@
lines[line_no - 1] = ''.join(('{%s:s}' % patch_id, lines[line_no - 1]))
self.source_files[filename] = self._quote('\n').join(lines)
- def _save_changes(self, filename, source):
+ @staticmethod
+ def _save_changes(filename, source):
print('%s fixed' % filename)
with open(filename, 'w') as f:
f.write(source)
diff --git a/tempest/common/cred_provider.py b/tempest/lib/common/cred_provider.py
similarity index 100%
rename from tempest/common/cred_provider.py
rename to tempest/lib/common/cred_provider.py
diff --git a/tempest/lib/common/http.py b/tempest/lib/common/http.py
index dffc5f9..86ea26e 100644
--- a/tempest/lib/common/http.py
+++ b/tempest/lib/common/http.py
@@ -18,7 +18,7 @@
class ClosingHttp(urllib3.poolmanager.PoolManager):
def __init__(self, disable_ssl_certificate_validation=False,
- ca_certs=None):
+ ca_certs=None, timeout=None):
kwargs = {}
if disable_ssl_certificate_validation:
@@ -29,6 +29,9 @@
kwargs['cert_reqs'] = 'CERT_REQUIRED'
kwargs['ca_certs'] = ca_certs
+ if timeout:
+ kwargs['timeout'] = timeout
+
super(ClosingHttp, self).__init__(**kwargs)
def request(self, url, method, *args, **kwargs):
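
Since ClosingHttp simply forwards the new timeout keyword to urllib3's PoolManager, it bounds every request made through the pool. A minimal sketch with a placeholder endpoint:

    from tempest.lib.common import http

    conn = http.ClosingHttp(timeout=30)   # seconds, applied per request
    resp, body = conn.request('http://192.0.2.1:5000/v3', 'GET')
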
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index 1b0f53a..2d2771f 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -64,8 +64,10 @@
certificate validation
:param str ca_certs: File containing the CA Bundle to use in verifying a
TLS server cert
- :param str trace_request: Regex to use for specifying logging the entirety
+ :param str trace_requests: Regex to use for specifying logging the entirety
of the request and response payload
+ :param str http_timeout: Timeout in seconds to wait for the http request to
+ return
"""
TYPE = "json"
@@ -78,7 +80,7 @@
endpoint_type='publicURL',
build_interval=1, build_timeout=60,
disable_ssl_certificate_validation=False, ca_certs=None,
- trace_requests='', name=None):
+ trace_requests='', name=None, http_timeout=None):
self.auth_provider = auth_provider
self.service = service
self.region = region
@@ -99,9 +101,13 @@
'vary', 'www-authenticate'))
dscv = disable_ssl_certificate_validation
self.http_obj = http.ClosingHttp(
- disable_ssl_certificate_validation=dscv, ca_certs=ca_certs)
+ disable_ssl_certificate_validation=dscv, ca_certs=ca_certs,
+ timeout=http_timeout)
def _get_type(self):
+ if self.TYPE != "json":
+ self.LOG.warning("Tempest has dropped XML support and the TYPE "
+ "became meaningless")
return self.TYPE
def get_headers(self, accept_type=None, send_type=None):
@@ -229,8 +235,8 @@
raise TypeError("'read_code' must be an int instead of (%s)"
% type(read_code))
- assert_msg = ("This function only allowed to use for HTTP status"
- "codes which explicitly defined in the RFC 7231 & 4918."
+ assert_msg = ("This function only allowed to use for HTTP status "
+ "codes which explicitly defined in the RFC 7231 & 4918. "
"{0} is not a defined Success Code!"
).format(expected_code)
if isinstance(expected_code, list):
@@ -397,19 +403,14 @@
else:
return text
- def _log_request_start(self, method, req_url, req_headers=None,
- req_body=None):
- if req_headers is None:
- req_headers = {}
+ def _log_request_start(self, method, req_url):
caller_name = test_utils.find_test_caller()
if self.trace_requests and re.search(self.trace_requests, caller_name):
self.LOG.debug('Starting Request (%s): %s %s' %
(caller_name, method, req_url))
- def _log_request_full(self, method, req_url, resp,
- secs="", req_headers=None,
- req_body=None, resp_body=None,
- caller_name=None, extra=None):
+ def _log_request_full(self, resp, req_headers=None, req_body=None,
+ resp_body=None, extra=None):
if 'X-Auth-Token' in req_headers:
req_headers['X-Auth-Token'] = '<omitted>'
# A shallow copy is sufficient
@@ -455,8 +456,8 @@
# Also look everything at DEBUG if you want to filter this
# out, don't run at debug.
if self.LOG.isEnabledFor(real_logging.DEBUG):
- self._log_request_full(method, req_url, resp, secs, req_headers,
- req_body, resp_body, caller_name, extra)
+ self._log_request_full(resp, req_headers, req_body,
+ resp_body, extra)
def _parse_resp(self, body):
try:
@@ -660,8 +661,7 @@
time.sleep(delay)
resp, resp_body = self._request(method, url,
headers=headers, body=body)
- self._error_checker(method, url, headers, body,
- resp, resp_body)
+ self._error_checker(resp, resp_body)
return resp, resp_body
def _get_retry_after_delay(self, resp):
@@ -709,8 +709,7 @@
raise ValueError("Failed to parse date %s" % val)
return time.mktime(parts)
- def _error_checker(self, method, url,
- headers, body, resp, resp_body):
+ def _error_checker(self, resp, resp_body):
# NOTE(mtreinish): Check for httplib response from glance_http. The
# object can't be used here because importing httplib breaks httplib2.
@@ -892,11 +891,11 @@
cls=JSONSCHEMA_VALIDATOR,
format_checker=FORMAT_CHECKER)
except jsonschema.ValidationError as ex:
- msg = ("HTTP response body is invalid (%s)") % ex
+ msg = ("HTTP response body is invalid (%s)" % ex)
raise exceptions.InvalidHTTPResponseBody(msg)
else:
if body:
- msg = ("HTTP response body should not exist (%s)") % body
+ msg = ("HTTP response body should not exist (%s)" % body)
raise exceptions.InvalidHTTPResponseBody(msg)
# Check the header of a response
@@ -907,7 +906,7 @@
cls=JSONSCHEMA_VALIDATOR,
format_checker=FORMAT_CHECKER)
except jsonschema.ValidationError as ex:
- msg = ("HTTP response header is invalid (%s)") % ex
+ msg = ("HTTP response header is invalid (%s)" % ex)
raise exceptions.InvalidHTTPResponseHeader(msg)
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index a831dbd..4226cd6 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -36,9 +36,11 @@
class Client(object):
def __init__(self, host, username, password=None, timeout=300, pkey=None,
- channel_timeout=10, look_for_keys=False, key_filename=None):
+ channel_timeout=10, look_for_keys=False, key_filename=None,
+ port=22):
self.host = host
self.username = username
+ self.port = port
self.password = password
if isinstance(pkey, six.string_types):
pkey = paramiko.RSAKey.from_private_key(
@@ -58,17 +60,17 @@
paramiko.AutoAddPolicy())
_start_time = time.time()
if self.pkey is not None:
- LOG.info("Creating ssh connection to '%s' as '%s'"
+ LOG.info("Creating ssh connection to '%s:%d' as '%s'"
" with public key authentication",
- self.host, self.username)
+ self.host, self.port, self.username)
else:
- LOG.info("Creating ssh connection to '%s' as '%s'"
+ LOG.info("Creating ssh connection to '%s:%d' as '%s'"
" with password %s",
- self.host, self.username, str(self.password))
+ self.host, self.port, self.username, str(self.password))
attempts = 0
while True:
try:
- ssh.connect(self.host, username=self.username,
+ ssh.connect(self.host, port=self.port, username=self.username,
password=self.password,
look_for_keys=self.look_for_keys,
key_filename=self.key_filename,
@@ -77,7 +79,7 @@
self.username, self.host)
return ssh
except (EOFError,
- socket.error,
+ socket.error, socket.timeout,
paramiko.SSHException) as e:
if self._is_timed_out(_start_time):
LOG.exception("Failed to establish authenticated ssh"
@@ -121,7 +123,6 @@
channel.fileno() # Register event pipe
channel.exec_command(cmd)
channel.shutdown_write()
- exit_status = channel.recv_exit_status()
# If the executing host is linux-based, poll the channel
if self._can_system_poll():
@@ -162,6 +163,8 @@
out_data = out_data.decode(encoding)
err_data = err_data.decode(encoding)
+ exit_status = channel.recv_exit_status()
+
if 0 != exit_status:
raise exceptions.SSHExecCommandFailed(
command=cmd, exit_status=exit_status,
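
With the new port argument the ssh Client is no longer pinned to port 22. A usage sketch with placeholder host and credentials:

    from tempest.lib.common import ssh

    client = ssh.Client('203.0.113.10', 'cirros', password='gocubsgo',
                        port=2222, timeout=120)
    output = client.exec_command('uname -a')
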
diff --git a/tempest/lib/common/utils/data_utils.py b/tempest/lib/common/utils/data_utils.py
index 93382c0..4095c77 100644
--- a/tempest/lib/common/utils/data_utils.py
+++ b/tempest/lib/common/utils/data_utils.py
@@ -19,6 +19,7 @@
import string
import uuid
+from debtcollector import removals
from oslo_utils import netutils
import six.moves
@@ -75,7 +76,7 @@
ascii_char = string.ascii_letters
digits = string.digits
digit = random.choice(string.digits)
- puncs = '~!@#$%^&*_=+'
+ puncs = '~!@#%^&*_=+'
punc = random.choice(puncs)
seed = ascii_char + digits + puncs
pre = upper + digit + punc
@@ -153,7 +154,7 @@
This generates a string with an arbitrary number of characters, generated
by looping the base_text string. If the size is smaller than the size of
- base_text, returning string is shrinked to the size.
+ base_text, the returned string is shrunk to that size.
:param int size: a returning characters size
:param str base_text: a string you want to repeat
:return: size string
@@ -171,10 +172,14 @@
:return: size randomly bytes
:rtype: string
"""
- return ''.join([chr(random.randint(0, 255))
+ return b''.join([six.int2byte(random.randint(0, 255))
for i in range(size)])
+@removals.remove(
+ message="use get_ipv6_addr_by_EUI64 from oslo_utils.netutils",
+ version="Newton",
+ removal_version="Ocata")
def get_ipv6_addr_by_EUI64(cidr, mac):
"""Generate a IPv6 addr by EUI-64 with CIDR and MAC
@@ -204,5 +209,5 @@
# Courtesy of http://stackoverflow.com/a/312464
def chunkify(sequence, chunksize):
"""Yield successive chunks from `sequence`."""
- for i in six.moves.xrange(0, len(sequence), chunksize):
+ for i in range(0, len(sequence), chunksize):
yield sequence[i:i + chunksize]
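
The deprecation message above points at the drop-in replacement in oslo.utils; a small sketch with example prefix and MAC values:

    from oslo_utils import netutils

    addr = netutils.get_ipv6_addr_by_EUI64('2001:db8::/64',
                                           'aa:bb:cc:dd:ee:ff')
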
diff --git a/tempest/lib/common/utils/test_utils.py b/tempest/lib/common/utils/test_utils.py
index 50a1a7d..3b28701 100644
--- a/tempest/lib/common/utils/test_utils.py
+++ b/tempest/lib/common/utils/test_utils.py
@@ -14,6 +14,7 @@
# under the License.
import inspect
import re
+import time
from oslo_log import log as logging
@@ -83,3 +84,24 @@
return func(*args, **kwargs)
except exceptions.NotFound:
pass
+
+
+def call_until_true(func, duration, sleep_for):
+ """Call the given function until it returns True (and return True)
+
+ or until the specified duration (in seconds) elapses (and return False).
+
+ :param func: A zero argument callable that returns True on success.
+ :param duration: The number of seconds for which to attempt a
+ successful call of the function.
+ :param sleep_for: The number of seconds to sleep after an unsuccessful
+ invocation of the function.
+ """
+ now = time.time()
+ timeout = now + duration
+ while now < timeout:
+ if func():
+ return True
+ time.sleep(sleep_for)
+ now = time.time()
+ return False
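
A short usage sketch of the new helper, with a toy predicate standing in for a real readiness check:

    import random

    from tempest.lib.common.utils import test_utils

    def resource_is_ready():
        # Placeholder for a real check, e.g. polling a server status
        return random.random() > 0.8

    # Retry every 2 seconds for at most 60 seconds
    if not test_utils.call_until_true(resource_is_ready, 60, 2):
        raise RuntimeError('resource never became ready')
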
diff --git a/tempest/lib/exceptions.py b/tempest/lib/exceptions.py
index 5ca78f9..a6c01bb 100644
--- a/tempest/lib/exceptions.py
+++ b/tempest/lib/exceptions.py
@@ -100,7 +100,7 @@
class OverLimit(ClientRestClientException):
- message = "Quota exceeded"
+ message = "Request entity is too large"
class ServerFault(ServerRestClientException):
@@ -149,6 +149,10 @@
message = "Unexpected response code received"
+class InvalidConfiguration(TempestException):
+ message = "Invalid Configuration"
+
+
class InvalidIdentityVersion(TempestException):
message = "Invalid version %(identity_version)s of the identity service"
@@ -229,3 +233,17 @@
class UnknownServiceClient(TempestException):
message = "Service clients named %(services)s are not known"
+
+
+class ServiceClientRegistrationException(TempestException):
+ message = ("Error registering module %(name)s in path %(module_path)s, "
+ "with service %(service_version)s and clients "
+ "%(client_names)s: %(detailed_error)s")
+
+
+class PluginRegistrationException(TempestException):
+ message = "Error registering plugin %(name)s: %(detailed_error)s"
+
+
+class VolumeBackupException(TempestException):
+ message = "Volume backup %(backup_id)s failed and is in ERROR status"
diff --git a/tempest/lib/services/clients.py b/tempest/lib/services/clients.py
new file mode 100644
index 0000000..adf666b
--- /dev/null
+++ b/tempest/lib/services/clients.py
@@ -0,0 +1,447 @@
+# Copyright 2012 OpenStack Foundation
+# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+import importlib
+import inspect
+import logging
+
+from tempest.lib import auth
+from tempest.lib.common.utils import misc
+from tempest.lib import exceptions
+from tempest.lib.services import compute
+from tempest.lib.services import image
+from tempest.lib.services import network
+
+
+LOG = logging.getLogger(__name__)
+
+
+def tempest_modules():
+ """Dict of service client modules available in Tempest.
+
+ Provides a dict of stable service modules available in Tempest, with
+ ``service_version`` as key, and the module object as value.
+ """
+ return {
+ 'compute': compute,
+ 'image.v1': image.v1,
+ 'image.v2': image.v2,
+ 'network': network
+ }
+
+
+def _tempest_internal_modules():
+ # Set of unstable service clients available in Tempest
+ # NOTE(andreaf) This list will exist only until the remaining clients
+ # are migrated to tempest.lib, and it will then be deleted without
+ # deprecation or advance notice
+ return set(['identity.v2', 'identity.v3', 'object-storage', 'volume.v1',
+ 'volume.v2', 'volume.v3'])
+
+
+def available_modules():
+ """Set of service client modules available in Tempest and plugins
+
+ Set of stable service clients from Tempest and service clients exposed
+ by plugins. This set of available modules can be used for automatic
+ configuration.
+
+ :raise PluginRegistrationException: if a plugin exposes a service_version
+ already defined by Tempest or another plugin.
+
+ Examples:
+
+ >>> from tempest import config
+ >>> params = {}
+ >>> for service_version in available_modules():
+ >>> service = service_version.split('.')[0]
+ >>> params[service] = config.service_client_config(service)
+ >>> service_clients = ServiceClients(creds, identity_uri,
+ >>> client_parameters=params)
+ """
+ extra_service_versions = set([])
+ _tempest_modules = set(tempest_modules())
+ plugin_services = ClientsRegistry().get_service_clients()
+ for plugin_name in plugin_services:
+ plug_service_versions = set([x['service_version'] for x in
+ plugin_services[plugin_name]])
+ # If a plugin exposes a duplicate service_version raise an exception
+ if plug_service_versions:
+ if not plug_service_versions.isdisjoint(extra_service_versions):
+ detailed_error = (
+ 'Plugin %s is trying to register a service %s already '
+ 'claimed by another one' % (plugin_name,
+ extra_service_versions &
+ plug_service_versions))
+ raise exceptions.PluginRegistrationException(
+ name=plugin_name, detailed_error=detailed_error)
+ # NOTE(andreaf) Once all tempest clients are stable, the following
+ # if will have to be removed.
+ if not plug_service_versions.isdisjoint(
+ _tempest_internal_modules()):
+ detailed_error = (
+ 'Plugin %s is trying to register a service %s already '
+ 'claimed by a Tempest one' % (plugin_name,
+ _tempest_internal_modules() &
+ plug_service_versions))
+ raise exceptions.PluginRegistrationException(
+ name=plugin_name, detailed_error=detailed_error)
+ extra_service_versions |= plug_service_versions
+ return _tempest_modules | extra_service_versions
+
+
+@misc.singleton
+class ClientsRegistry(object):
+ """Registry of all service clients available from plugins"""
+
+ def __init__(self):
+ self._service_clients = {}
+
+ def register_service_client(self, plugin_name, service_client_data):
+ if plugin_name in self._service_clients:
+ detailed_error = 'Clients for plugin %s already registered'
+ raise exceptions.PluginRegistrationException(
+ name=plugin_name,
+ detailed_error=detailed_error % plugin_name)
+ self._service_clients[plugin_name] = service_client_data
+
+ def get_service_clients(self):
+ return self._service_clients
+
+
+class ClientsFactory(object):
+ """Builds service clients for a service client module
+
+ This class implements the logic of feeding service client parameters
+ to service clients from a specific module. It allows setting the
+ parameters once and obtaining new instances of the clients without the
+ need of passing any parameter.
+
+ ClientsFactory can be used directly, or consumed via the `ServiceClients`
+ class, which manages the authorization part.
+ """
+
+ def __init__(self, module_path, client_names, auth_provider, **kwargs):
+ """Initialises the client factory
+
+ :param module_path: Path to module that includes all service clients.
+ All service client classes must be exposed by a single module.
+ If they are separated in different modules, defining __all__
+ in the root module can help, similar to what is done by service
+ clients in tempest.
+ :param client_names: List or set of names of the service client
+ classes.
+ :param auth_provider: The auth provider used to initialise client.
+ :param kwargs: Parameters to be passed to all clients. Parameters
+ values can be overwritten when clients are initialised, but
+ parameters cannot be deleted.
+ :raise ImportError: if the specified module_path cannot be imported
+
+ Example:
+
+ >>> # Get credentials and an auth_provider
+ >>> clients = ClientsFactory(
+ >>> module_path='my_service.my_service_clients',
+ >>> client_names=['ServiceClient1', 'ServiceClient2'],
+ >>> auth_provider=auth_provider,
+ >>> service='my_service',
+ >>> region='region1')
+ >>> my_client1 = clients.ServiceClient1()
+ >>> my_client1_region2 = clients.ServiceClient1(region='region2')
+
+ """
+ # Import the module. If it's not importable, the raised exception
+ # provides good enough information about what happened
+ _module = importlib.import_module(module_path)
+ # If any of the classes is not in the module we fail
+ for class_name in client_names:
+ # TODO(andreaf) This always passes all parameters to all clients.
+ # In the future, allow clients to specify the list of parameters
+ # they accept, based on a list of standard ones.
+
+ # Obtain the class
+ klass = self._get_class(_module, class_name)
+ final_kwargs = copy.copy(kwargs)
+
+ # Set the function as an attribute of the factory
+ setattr(self, class_name, self._get_partial_class(
+ klass, auth_provider, final_kwargs))
+
+ def _get_partial_class(self, klass, auth_provider, kwargs):
+
+ # Define a function that returns a new class instance by
+ # combining default kwargs with extra ones
+ def partial_class(alias=None, **later_kwargs):
+ """Returns a callable the initialises a service client
+
+ Builds a callable that accepts kwargs, which are passed through
+ to the __init__ of the service client, along with a set of defaults
+ set on the factory at __init__ time.
+ Original args in the service client can only be passed as kwargs.
+
+ It accepts one extra parameter 'alias' compared to the original
+ service client. When alias is provided, the returned callable will
+ also set an attribute called with a name defined in 'alias', which
+ contains the instance of the service client.
+
+ :param alias: str Name of the attribute set on the factory once
+ the callable is invoked which contains the initialised
+ service client. If None, no attribute is set.
+ :param later_kwargs: kwargs passed through to the service client
+ __init__ on top of defaults set at factory level.
+ """
+ kwargs.update(later_kwargs)
+ _client = klass(auth_provider=auth_provider, **kwargs)
+ if alias:
+ setattr(self, alias, _client)
+ return _client
+
+ return partial_class
+
+ @classmethod
+ def _get_class(cls, module, class_name):
+ klass = getattr(module, class_name, None)
+ if not klass:
+ msg = 'Invalid class name, %s is not found in %s'
+ raise AttributeError(msg % (class_name, module))
+ if not inspect.isclass(klass):
+ msg = 'Expected a class, got %s of type %s instead'
+ raise TypeError(msg % (klass, type(klass)))
+ return klass
+
+
+class ServiceClients(object):
+ """Service client provider class
+
+ The ServiceClients object provides a useful means for tests to access
+ service clients configured for a specified set of credentials.
+ It hides some of the complexity from the authorization and configuration
+ layers.
+
+ Examples:
+
+ >>> from tempest.lib.services import clients
+ >>> johndoe = cred_provider.get_creds_by_role(['johndoe'])
+ >>> johndoe_clients = clients.ServiceClients(johndoe,
+ >>> identity_uri)
+ >>> johndoe_servers = johndoe_clients.servers_client.list_servers()
+
+ """
+ # NOTE(andreaf) This class does not depend on tempest configuration
+ # and it's meant for direct consumption by external clients such as tempest
+ # plugins. Tempest provides a wrapper class, `clients.Manager`, that
+ # initialises this class using values from the tempest CONF object. The wrapper
+ # class should only be used by tests hosted in Tempest.
+
+ def __init__(self, credentials, identity_uri, region=None, scope='project',
+ disable_ssl_certificate_validation=True, ca_certs=None,
+ trace_requests='', client_parameters=None):
+ """Service Clients provider
+
+ Instantiate a `ServiceClients` object, from a set of credentials and an
+ identity URI. The identity version is inferred from the credentials
+ object. Optionally auth scope can be provided.
+
+ A few parameters can be given a value which is applied as default
+ for all service clients: region, dscv, ca_certs, trace_requests.
+
+ Parameters dscv, ca_certs and trace_requests all apply to the auth
+ provider as well as any service clients provided by this manager.
+
+ Any other client parameter must be set via client_parameters.
+ The list of available parameters is defined in the service clients
+ interfaces. For reference, most clients will accept 'region',
+ 'service', 'endpoint_type', 'build_timeout' and 'build_interval', which
+ are all inherited from RestClient.
+
+ The `config` module in Tempest exposes a helper function
+ `service_client_config` that can be used to extract from configuration
+ a dictionary ready to be injected in kwargs.
+
+ Exceptions are:
+ - Token clients for 'identity' have a very different interface
+ - Volume client for 'volume' accepts 'default_volume_size'
+ - Servers client from 'compute' accepts 'enable_instance_password'
+
+ Examples:
+
+ >>> identity_params = config.service_client_config('identity')
+ >>> params = {
+ >>> 'identity': identity_params,
+ >>> 'compute': {'region': 'region2'}}
+ >>> manager = lib_manager.Manager(
+ >>> my_creds, identity_uri, client_parameters=params)
+
+ :param credentials: An instance of `auth.Credentials`
+ :param identity_uri: URI of the identity API. This should be a
+ mandatory parameter, and it will be soon.
+ :param region: Default value of region for service clients.
+ :param scope: default scope for tokens produced by the auth provider
+ :param disable_ssl_certificate_validation: Applies to auth and to all
+ service clients.
+ :param ca_certs: Applies to auth and to all service clients.
+ :param trace_requests: Applies to auth and to all service clients.
+ :param client_parameters: Dictionary with parameters for service
+ clients. Keys of the dictionary are the service client service
+ name, as declared in `service_clients.available_modules()` except
+ for the version. Values are dictionaries of parameters that are
+ going to be passed to all clients in the service client module.
+
+ Examples:
+
+ >>> params_service_x = {'param_name': 'param_value'}
+ >>> client_parameters = { 'service_x': params_service_x }
+
+ >>> params_service_y = config.service_client_config('service_y')
+ >>> client_parameters['service_y'] = params_service_y
+
+ """
+ self._registered_services = set([])
+ self.credentials = credentials
+ self.identity_uri = identity_uri
+ if not identity_uri:
+ raise exceptions.InvalidCredentials(
+ 'ServiceClients requires a non-empty identity_uri.')
+ self.region = region
+ # Check if passed or default credentials are valid
+ if not self.credentials.is_valid():
+ raise exceptions.InvalidCredentials()
+ # Get the identity classes matching the provided credentials
+ # TODO(andreaf) Define a new interface in Credentials to get
+ # the API version from an instance
+ identity = [(k, auth.IDENTITY_VERSION[k][1]) for k in
+ auth.IDENTITY_VERSION.keys() if
+ isinstance(self.credentials, auth.IDENTITY_VERSION[k][0])]
+ # Zero matches or more than one are both not valid.
+ if len(identity) != 1:
+ raise exceptions.InvalidCredentials()
+ self.auth_version, auth_provider_class = identity[0]
+ self.dscv = disable_ssl_certificate_validation
+ self.ca_certs = ca_certs
+ self.trace_requests = trace_requests
+ # Creates an auth provider for the credentials
+ self.auth_provider = auth_provider_class(
+ self.credentials, self.identity_uri, scope=scope,
+ disable_ssl_certificate_validation=self.dscv,
+ ca_certs=self.ca_certs, trace_requests=self.trace_requests)
+ # Setup some defaults for client parameters of registered services
+ client_parameters = client_parameters or {}
+ self.parameters = {}
+ # Parameters are provided for unversioned services
+ all_modules = available_modules() | _tempest_internal_modules()
+ unversioned_services = set(
+ [x.split('.')[0] for x in all_modules])
+ for service in unversioned_services:
+ self.parameters[service] = self._setup_parameters(
+ client_parameters.pop(service, {}))
+ # Check that no client parameters were supplied for unregistered clients
+ if client_parameters:
+ raise exceptions.UnknownServiceClient(
+ services=list(client_parameters.keys()))
+
+ # Register service clients from the registry (__tempest__ and plugins)
+ clients_registry = ClientsRegistry()
+ plugin_service_clients = clients_registry.get_service_clients()
+ for plugin in plugin_service_clients:
+ service_clients = plugin_service_clients[plugin]
+ # Each plugin returns a list of service client parameters
+ for service_client in service_clients:
+ # NOTE(andreaf) If a plugin cannot register, stop the
+ # registration process, log some details to help
+ # troubleshooting, and re-raise
+ try:
+ self.register_service_client_module(**service_client)
+ except Exception:
+ LOG.exception(
+ 'Failed to register service client from plugin %s '
+ 'with parameters %s' % (plugin, service_client))
+ raise
+
+ def register_service_client_module(self, name, service_version,
+ module_path, client_names, **kwargs):
+ """Register a service client module
+
+ Initiates a client factory for the specified module, using this
+ class auth_provider, and accessible via a `name` attribute in the
+ service client.
+
+ :param name: Name used to access the client
+ :param service_version: Name of the service complete with version.
+ Used to track registered services. When a plugin implements it,
+ it can be used by other plugins to obtain their configuration.
+ :param module_path: Path to module that includes all service clients.
+ All service client classes must be exposed by a single module.
+ If they are separated in different modules, defining __all__
+ in the root module can help, similar to what is done by service
+ clients in tempest.
+ :param client_names: List or set of names of service client classes.
+ :param kwargs: Extra optional parameters to be passed to all clients.
+ ServiceClient provides defaults for region, dscv, ca_certs and
+ trace_requests.
+ :raise ServiceClientRegistrationException: if the provided name is
+ already in use or if service_version is already registered.
+ :raise ImportError: if module_path cannot be imported.
+ """
+ if hasattr(self, name):
+ using_name = getattr(self, name)
+ detailed_error = 'Module name already in use: %s' % using_name
+ raise exceptions.ServiceClientRegistrationException(
+ name=name, service_version=service_version,
+ module_path=module_path, client_names=client_names,
+ detailed_error=detailed_error)
+ if service_version in self.registered_services:
+ detailed_error = 'Service %s already registered.' % service_version
+ raise exceptions.ServiceClientRegistrationException(
+ name=name, service_version=service_version,
+ module_path=module_path, client_names=client_names,
+ detailed_error=detailed_error)
+ params = dict(region=self.region,
+ disable_ssl_certificate_validation=self.dscv,
+ ca_certs=self.ca_certs,
+ trace_requests=self.trace_requests)
+ params.update(kwargs)
+ # Instantiate the client factory
+ _factory = ClientsFactory(module_path=module_path,
+ client_names=client_names,
+ auth_provider=self.auth_provider,
+ **params)
+ # Adds the client factory to the service_client
+ setattr(self, name, _factory)
+ # Add the name of the new service in self.SERVICES for discovery
+ self._registered_services.add(service_version)
+
+ @property
+ def registered_services(self):
+ # NOTE(andreaf) Once all tempest modules are stable this needs to
+ # be updated to remove _tempest_internal_modules
+ return self._registered_services | _tempest_internal_modules()
+
+ def _setup_parameters(self, parameters):
+ """Setup default values for client parameters
+
+ Region by default is the region passed as an __init__ parameter.
+ Checks that no parameter for an unknown service is provided.
+ """
+ _parameters = {}
+ # Use region from __init__
+ if self.region:
+ _parameters['region'] = self.region
+ # Update defaults with specified parameters
+ _parameters.update(parameters)
+ # If any parameter is left, parameters for an unknown service were
+ # provided as input. Fail rather than ignore silently.
+ return _parameters
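
Putting the pieces together, a plugin-style consumer could wire a hypothetical client module into a ServiceClients instance roughly as follows (assuming `creds` is a valid auth.Credentials object, and noting that the identity URI, module path and class name are invented here):

    from tempest.lib.services import clients

    manager = clients.ServiceClients(creds, 'http://192.0.2.1:5000/v3')
    manager.register_service_client_module(
        name='fancy_v1', service_version='fancy.v1',
        module_path='my_plugin.services.fancy.v1',
        client_names=['FancyClient'])
    fancy_client = manager.fancy_v1.FancyClient()
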
diff --git a/tempest/lib/services/compute/agents_client.py b/tempest/lib/services/compute/agents_client.py
index 6d3a817..3f05d3b 100644
--- a/tempest/lib/services/compute/agents_client.py
+++ b/tempest/lib/services/compute/agents_client.py
@@ -24,7 +24,11 @@
"""Tests Agents API"""
def list_agents(self, **params):
- """List all agent builds."""
+ """List all agent builds.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listbuilds
+ """
url = 'os-agents'
if params:
url += '?%s' % urllib.urlencode(params)
@@ -46,7 +50,11 @@
return rest_client.ResponseBody(resp, body)
def delete_agent(self, agent_id):
- """Delete an existing agent build."""
+ """Delete an existing agent build.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteBuild
+ """
resp, body = self.delete("os-agents/%s" % agent_id)
self.validate_response(schema.delete_agent, resp, body)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/compute/flavors_client.py b/tempest/lib/services/compute/flavors_client.py
index 5be8272..ae1700c 100644
--- a/tempest/lib/services/compute/flavors_client.py
+++ b/tempest/lib/services/compute/flavors_client.py
@@ -28,6 +28,11 @@
class FlavorsClient(base_compute_client.BaseComputeClient):
def list_flavors(self, detail=False, **params):
+ """Lists flavors.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listFlavors
+ """
url = 'flavors'
_schema = schema.list_flavors
@@ -43,6 +48,11 @@
return rest_client.ResponseBody(resp, body)
def show_flavor(self, flavor_id):
+ """Shows details for a flavor.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showFlavor
+ """
resp, body = self.get("flavors/%s" % flavor_id)
body = json.loads(body)
self.validate_response(schema.create_get_flavor_details, resp, body)
@@ -67,7 +77,11 @@
return rest_client.ResponseBody(resp, body)
def delete_flavor(self, flavor_id):
- """Delete the given flavor."""
+ """Delete the given flavor.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteFlavor
+ """
resp, body = self.delete("flavors/{0}".format(flavor_id))
self.validate_response(schema.delete_flavor, resp, body)
return rest_client.ResponseBody(resp, body)
@@ -102,7 +116,11 @@
return rest_client.ResponseBody(resp, body)
def list_flavor_extra_specs(self, flavor_id):
- """Get extra Specs details of the mentioned flavor."""
+ """Get extra Specs details of the mentioned flavor.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listFlavorExtraSpecs
+ """
resp, body = self.get('flavors/%s/os-extra_specs' % flavor_id)
body = json.loads(body)
self.validate_response(schema_extra_specs.set_get_flavor_extra_specs,
@@ -110,7 +128,11 @@
return rest_client.ResponseBody(resp, body)
def show_flavor_extra_spec(self, flavor_id, key):
- """Get extra Specs key-value of the mentioned flavor and key."""
+ """Get extra Specs key-value of the mentioned flavor and key.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showFlavorExtraSpec
+ """
resp, body = self.get('flavors/%s/os-extra_specs/%s' % (flavor_id,
key))
body = json.loads(body)
@@ -136,14 +158,22 @@
def unset_flavor_extra_spec(self, flavor_id, key): # noqa
# NOTE: This noqa is for passing T111 check and we cannot rename
# to keep backwards compatibility.
- """Unset extra Specs from the mentioned flavor."""
+ """Unset extra Specs from the mentioned flavor.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteFlavorExtraSpec
+ """
resp, body = self.delete('flavors/%s/os-extra_specs/%s' %
(flavor_id, key))
self.validate_response(schema.unset_flavor_extra_specs, resp, body)
return rest_client.ResponseBody(resp, body)
def list_flavor_access(self, flavor_id):
- """Get flavor access information given the flavor id."""
+ """Get flavor access information given the flavor id.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listFlavorAccess
+ """
resp, body = self.get('flavors/%s/os-flavor-access' % flavor_id)
body = json.loads(body)
self.validate_response(schema_access.add_remove_list_flavor_access,
@@ -151,7 +181,11 @@
return rest_client.ResponseBody(resp, body)
def add_flavor_access(self, flavor_id, tenant_id):
- """Add flavor access for the specified tenant."""
+ """Add flavor access for the specified tenant.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#addFlavorAccess
+ """
post_body = {
'addTenantAccess': {
'tenant': tenant_id
@@ -165,7 +199,11 @@
return rest_client.ResponseBody(resp, body)
def remove_flavor_access(self, flavor_id, tenant_id):
- """Remove flavor access from the specified tenant."""
+ """Remove flavor access from the specified tenant.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#removeFlavorAccess
+ """
post_body = {
'removeTenantAccess': {
'tenant': tenant_id
diff --git a/tempest/lib/services/compute/floating_ips_client.py b/tempest/lib/services/compute/floating_ips_client.py
index 03e4894..6922c48 100644
--- a/tempest/lib/services/compute/floating_ips_client.py
+++ b/tempest/lib/services/compute/floating_ips_client.py
@@ -25,7 +25,11 @@
class FloatingIPsClient(base_compute_client.BaseComputeClient):
def list_floating_ips(self, **params):
- """Returns a list of all floating IPs filtered by any parameters."""
+ """Returns a list of all floating IPs filtered by any parameters.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listfloatingipsObject
+ """
url = 'os-floating-ips'
if params:
url += '?%s' % urllib.urlencode(params)
@@ -36,7 +40,11 @@
return rest_client.ResponseBody(resp, body)
def show_floating_ip(self, floating_ip_id):
- """Get the details of a floating IP."""
+ """Get the details of a floating IP.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showFloatingIP
+ """
url = "os-floating-ips/%s" % floating_ip_id
resp, body = self.get(url)
body = json.loads(body)
@@ -57,7 +65,11 @@
return rest_client.ResponseBody(resp, body)
def delete_floating_ip(self, floating_ip_id):
- """Deletes the provided floating IP from the project."""
+ """Deletes the provided floating IP from the project.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteFloatingIP
+ """
url = "os-floating-ips/%s" % floating_ip_id
resp, body = self.delete(url)
self.validate_response(schema.add_remove_floating_ip, resp, body)
diff --git a/tempest/lib/services/compute/hosts_client.py b/tempest/lib/services/compute/hosts_client.py
index 16b5edd..1b93b00 100644
--- a/tempest/lib/services/compute/hosts_client.py
+++ b/tempest/lib/services/compute/hosts_client.py
@@ -45,8 +45,9 @@
def update_host(self, hostname, **kwargs):
"""Update a host.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#enablehost
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#enablehost
"""
request_body = {
diff --git a/tempest/lib/services/compute/images_client.py b/tempest/lib/services/compute/images_client.py
index da8a61e..e937c13 100644
--- a/tempest/lib/services/compute/images_client.py
+++ b/tempest/lib/services/compute/images_client.py
@@ -27,8 +27,9 @@
def create_image(self, server_id, **kwargs):
"""Create an image of the original server.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#createImage
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#createImage
"""
post_body = {'createImage': kwargs}
@@ -41,8 +42,9 @@
def list_images(self, detail=False, **params):
"""Return a list of all images filtered by any parameter.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#listImages
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#listImages
"""
url = 'images'
_schema = schema.list_images
@@ -61,7 +63,6 @@
def show_image(self, image_id):
"""Return the details of a single image."""
resp, body = self.get("images/%s" % image_id)
- self.expected_success(200, resp.status)
body = json.loads(body)
self.validate_response(schema.get_image, resp, body)
return rest_client.ResponseBody(resp, body)
@@ -82,8 +83,9 @@
def set_image_metadata(self, image_id, meta):
"""Set the metadata for an image.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#createImageMetadata
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#createImageMetadata
"""
post_body = json.dumps({'metadata': meta})
resp, body = self.put('images/%s/metadata' % image_id, post_body)
@@ -94,8 +96,9 @@
def update_image_metadata(self, image_id, meta):
"""Update the metadata for an image.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#updateImageMetadata
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#updateImageMetadata
"""
post_body = json.dumps({'metadata': meta})
resp, body = self.post('images/%s/metadata' % image_id, post_body)
@@ -113,8 +116,9 @@
def set_image_metadata_item(self, image_id, key, meta):
"""Set the value for a specific image metadata key.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#setImageMetadataItem
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#setImageMetadataItem
"""
post_body = json.dumps({'meta': meta})
resp, body = self.put('images/%s/metadata/%s' % (image_id, key),
diff --git a/tempest/lib/services/compute/interfaces_client.py b/tempest/lib/services/compute/interfaces_client.py
index 80192a1..37157a4 100644
--- a/tempest/lib/services/compute/interfaces_client.py
+++ b/tempest/lib/services/compute/interfaces_client.py
@@ -31,8 +31,9 @@
def create_interface(self, server_id, **kwargs):
"""Create an interface.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#createAttachInterface
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#createAttachInterface
"""
post_body = {'interfaceAttachment': kwargs}
post_body = json.dumps(post_body)
diff --git a/tempest/lib/services/compute/keypairs_client.py b/tempest/lib/services/compute/keypairs_client.py
index 7b8e6b2..c3f1781 100644
--- a/tempest/lib/services/compute/keypairs_client.py
+++ b/tempest/lib/services/compute/keypairs_client.py
@@ -28,6 +28,12 @@
{'min': '2.2', 'max': None, 'schema': schemav22}]
def list_keypairs(self, **params):
+ """Lists keypairs that are associated with the account.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#listKeypairs
+ """
url = 'os-keypairs'
if params:
url += '?%s' % urllib.urlencode(params)
@@ -38,6 +44,12 @@
return rest_client.ResponseBody(resp, body)
def show_keypair(self, keypair_name, **params):
+ """Shows details for a keypair that is associated with the account.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#showKeypair
+ """
url = "os-keypairs/%s" % keypair_name
if params:
url += '?%s' % urllib.urlencode(params)
@@ -50,8 +62,9 @@
def create_keypair(self, **kwargs):
"""Create a keypair.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#createKeypair
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#createKeypair
"""
post_body = json.dumps({'keypair': kwargs})
resp, body = self.post("os-keypairs", body=post_body)
@@ -61,6 +74,12 @@
return rest_client.ResponseBody(resp, body)
def delete_keypair(self, keypair_name, **params):
+ """Deletes a keypair.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#deleteKeypair
+ """
url = "os-keypairs/%s" % keypair_name
if params:
url += '?%s' % urllib.urlencode(params)
diff --git a/tempest/lib/services/compute/migrations_client.py b/tempest/lib/services/compute/migrations_client.py
index 62246d3..375cbda 100644
--- a/tempest/lib/services/compute/migrations_client.py
+++ b/tempest/lib/services/compute/migrations_client.py
@@ -16,17 +16,23 @@
from six.moves.urllib import parse as urllib
from tempest.lib.api_schema.response.compute.v2_1 import migrations as schema
+from tempest.lib.api_schema.response.compute.v2_23 import migrations \
+ as schemav223
from tempest.lib.common import rest_client
from tempest.lib.services.compute import base_compute_client
class MigrationsClient(base_compute_client.BaseComputeClient):
+ schema_versions_info = [
+ {'min': None, 'max': '2.22', 'schema': schema},
+ {'min': '2.23', 'max': None, 'schema': schemav223}]
def list_migrations(self, **params):
"""List all migrations.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#listMigrations
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#listMigrations
"""
url = 'os-migrations'
@@ -35,5 +41,6 @@
resp, body = self.get(url)
body = json.loads(body)
+ schema = self.get_schema(self.schema_versions_info)
self.validate_response(schema.list_migrations, resp, body)
return rest_client.ResponseBody(resp, body)
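The ``schema_versions_info`` table added above is the microversion-to-schema mapping consumed by ``self.get_schema``. The selection idea, sketched here for illustration only (this is not tempest's ``get_schema`` implementation), is to pick the schema whose ``min``/``max`` range contains the requested microversion::

    # Illustration of matching a microversion against [min, max] ranges; None
    # means the range is open on that side.
    def pick_schema(schema_versions_info, microversion):
        def as_tuple(version):
            return tuple(int(part) for part in version.split('.'))

        for entry in schema_versions_info:
            low_ok = (entry['min'] is None or
                      as_tuple(entry['min']) <= as_tuple(microversion))
            high_ok = (entry['max'] is None or
                       as_tuple(microversion) <= as_tuple(entry['max']))
            if low_ok and high_ok:
                return entry['schema']
        raise ValueError('no schema for microversion %s' % microversion)

    # With the MigrationsClient table above, '2.22' keeps the v2_1 schema and
    # '2.23' selects the v2_23 schema module.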
diff --git a/tempest/lib/services/compute/quota_classes_client.py b/tempest/lib/services/compute/quota_classes_client.py
index 9dc04ad..523a306 100644
--- a/tempest/lib/services/compute/quota_classes_client.py
+++ b/tempest/lib/services/compute/quota_classes_client.py
@@ -35,8 +35,9 @@
def update_quota_class_set(self, quota_class_id, **kwargs):
"""Update the quota class's limits for one or more resources.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#updatequota
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#updatequota
"""
post_body = json.dumps({'quota_class_set': kwargs})
diff --git a/tempest/lib/services/compute/quotas_client.py b/tempest/lib/services/compute/quotas_client.py
index 6d41f4b..a2b0397 100644
--- a/tempest/lib/services/compute/quotas_client.py
+++ b/tempest/lib/services/compute/quotas_client.py
@@ -45,8 +45,9 @@
def update_quota_set(self, tenant_id, user_id=None, **kwargs):
"""Updates the tenant's quota limits for one or more resources.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#updateQuota
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#updateQuota
"""
post_body = json.dumps({'quota_set': kwargs})
diff --git a/tempest/lib/services/compute/security_groups_client.py b/tempest/lib/services/compute/security_groups_client.py
index 6b9c7e1..386c214 100644
--- a/tempest/lib/services/compute/security_groups_client.py
+++ b/tempest/lib/services/compute/security_groups_client.py
@@ -26,7 +26,11 @@
class SecurityGroupsClient(base_compute_client.BaseComputeClient):
def list_security_groups(self, **params):
- """List all security groups for a user."""
+ """List all security groups for a user.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listSecGroups
+ """
url = 'os-security-groups'
if params:
@@ -38,7 +42,11 @@
return rest_client.ResponseBody(resp, body)
def show_security_group(self, security_group_id):
- """Get the details of a Security Group."""
+ """Get the details of a Security Group.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showSecGroup
+ """
url = "os-security-groups/%s" % security_group_id
resp, body = self.get(url)
body = json.loads(body)
@@ -71,7 +79,11 @@
return rest_client.ResponseBody(resp, body)
def delete_security_group(self, security_group_id):
- """Delete the provided Security Group."""
+ """Delete the provided Security Group.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteSecGroup
+ """
resp, body = self.delete(
'os-security-groups/%s' % security_group_id)
self.validate_response(schema.delete_security_group, resp, body)
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
index 0d31ac7..d5902e1 100644
--- a/tempest/lib/services/compute/servers_client.py
+++ b/tempest/lib/services/compute/servers_client.py
@@ -20,7 +20,10 @@
from six.moves.urllib import parse as urllib
from tempest.lib.api_schema.response.compute.v2_1 import servers as schema
+from tempest.lib.api_schema.response.compute.v2_16 import servers as schemav216
from tempest.lib.api_schema.response.compute.v2_19 import servers as schemav219
+from tempest.lib.api_schema.response.compute.v2_26 import servers as schemav226
+from tempest.lib.api_schema.response.compute.v2_3 import servers as schemav23
from tempest.lib.api_schema.response.compute.v2_9 import servers as schemav29
from tempest.lib.common import rest_client
from tempest.lib.services.compute import base_compute_client
@@ -28,9 +31,12 @@
class ServersClient(base_compute_client.BaseComputeClient):
schema_versions_info = [
- {'min': None, 'max': '2.8', 'schema': schema},
- {'min': '2.9', 'max': '2.18', 'schema': schemav29},
- {'min': '2.19', 'max': None, 'schema': schemav219}]
+ {'min': None, 'max': '2.2', 'schema': schema},
+ {'min': '2.3', 'max': '2.8', 'schema': schemav23},
+ {'min': '2.9', 'max': '2.15', 'schema': schemav29},
+ {'min': '2.16', 'max': '2.18', 'schema': schemav216},
+ {'min': '2.19', 'max': '2.25', 'schema': schemav219},
+ {'min': '2.26', 'max': None, 'schema': schemav226}]
def __init__(self, auth_provider, service, region,
enable_instance_password=True, **kwargs):
@@ -41,8 +47,13 @@
def create_server(self, **kwargs):
"""Create server.
- Available params: see http://developer.openstack.org/
- api-ref-compute-v2.1.html#createServer
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/compute/#create-server
+
+ :param name: Server name
+ :param imageRef: Image reference (UUID)
+ :param flavorRef: Flavor reference (UUID or full URL)
Most parameters except the following are passed to the API without
any changes.
@@ -99,7 +110,11 @@
return rest_client.ResponseBody(resp, body)
def show_server(self, server_id):
- """Get server details."""
+ """Get server details.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showServer
+ """
resp, body = self.get("servers/%s" % server_id)
body = json.loads(body)
schema = self.get_schema(self.schema_versions_info)
@@ -107,7 +122,11 @@
return rest_client.ResponseBody(resp, body)
def delete_server(self, server_id):
- """Delete server."""
+ """Delete server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteServer
+ """
resp, body = self.delete("servers/%s" % server_id)
self.validate_response(schema.delete_server, resp, body)
return rest_client.ResponseBody(resp, body)
@@ -137,7 +156,11 @@
return rest_client.ResponseBody(resp, body)
def list_addresses(self, server_id):
- """Lists all addresses for a server."""
+ """Lists all addresses for a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#list-ips
+ """
resp, body = self.get("servers/%s/ips" % server_id)
body = json.loads(body)
self.validate_response(schema.list_addresses, resp, body)
@@ -260,12 +283,22 @@
return self.action(server_id, 'revertResize', **kwargs)
def list_server_metadata(self, server_id):
+ """Lists all metadata for a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listServerMetadata
+ """
resp, body = self.get("servers/%s/metadata" % server_id)
body = json.loads(body)
self.validate_response(schema.list_server_metadata, resp, body)
return rest_client.ResponseBody(resp, body)
def set_server_metadata(self, server_id, meta, no_metadata_field=False):
+ """Sets one or more metadata items for a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#createServerMetadata
+ """
if no_metadata_field:
post_body = ""
else:
@@ -277,6 +310,11 @@
return rest_client.ResponseBody(resp, body)
def update_server_metadata(self, server_id, meta):
+ """Updates one or more metadata items for a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#updateServerMetadata
+ """
post_body = json.dumps({'metadata': meta})
resp, body = self.post('servers/%s/metadata' % server_id,
post_body)
@@ -286,6 +324,11 @@
return rest_client.ResponseBody(resp, body)
def show_server_metadata_item(self, server_id, key):
+ """Shows details for a metadata item, by key, for a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showServerMetadataItem
+ """
resp, body = self.get("servers/%s/metadata/%s" % (server_id, key))
body = json.loads(body)
self.validate_response(schema.set_show_server_metadata_item,
@@ -293,6 +336,11 @@
return rest_client.ResponseBody(resp, body)
def set_server_metadata_item(self, server_id, key, meta):
+ """Sets a metadata item, by key, for a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#setServerMetadataItem
+ """
post_body = json.dumps({'meta': meta})
resp, body = self.put('servers/%s/metadata/%s' % (server_id, key),
post_body)
@@ -302,6 +350,11 @@
return rest_client.ResponseBody(resp, body)
def delete_server_metadata_item(self, server_id, key):
+ """Deletes a metadata item, by key, from a server.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteServerMetadataItem
+ """
resp, body = self.delete("servers/%s/metadata/%s" %
(server_id, key))
self.validate_response(schema.delete_server_metadata_item,
@@ -309,9 +362,19 @@
return rest_client.ResponseBody(resp, body)
def stop_server(self, server_id, **kwargs):
+ """Stops a running server and changes its status to SHUTOFF.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#stop
+ """
return self.action(server_id, 'os-stop', **kwargs)
def start_server(self, server_id, **kwargs):
+ """Starts a stopped server and changes its status to ACTIVE.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#start
+ """
return self.action(server_id, 'os-start', **kwargs)
def attach_volume(self, server_id, **kwargs):
@@ -337,14 +400,23 @@
return rest_client.ResponseBody(resp, body)
def detach_volume(self, server_id, volume_id): # noqa
- """Detaches a volume from a server instance."""
+ """Detaches a volume from a server instance.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteVolumeAttachment
+ """
resp, body = self.delete('servers/%s/os-volume_attachments/%s' %
(server_id, volume_id))
self.validate_response(schema.detach_volume, resp, body)
return rest_client.ResponseBody(resp, body)
def show_volume_attachment(self, server_id, volume_id):
- """Return details about the given volume attachment."""
+ """Return details about the given volume attachment.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#getVolumeAttachmentDetails
+ """
resp, body = self.get('servers/%s/os-volume_attachments/%s' % (
server_id, volume_id))
body = json.loads(body)
@@ -352,7 +424,11 @@
return rest_client.ResponseBody(resp, body)
def list_volume_attachments(self, server_id):
- """Returns the list of volume attachments for a given instance."""
+ """Returns the list of volume attachments for a given instance.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listVolumeAttachments
+ """
resp, body = self.get('servers/%s/os-volume_attachments' % (
server_id))
body = json.loads(body)
@@ -362,7 +438,8 @@
def add_security_group(self, server_id, **kwargs):
"""Add a security group to the server.
- Available params: TODO
+ Available params: http://developer.openstack.org/
+ api-ref-compute-v2.1.html#addSecurityGroup
"""
# TODO(oomichi): The api-site doesn't contain this API description.
# So the above should be changed to the api-site link after
@@ -373,7 +450,8 @@
def remove_security_group(self, server_id, **kwargs):
"""Remove a security group from the server.
- Available params: TODO
+ Available params: http://developer.openstack.org/
+ api-ref-compute-v2.1.html#removeSecurityGroup
"""
# TODO(oomichi): The api-site doesn't contain this API description.
# So the above should be changed to the api-site link after
@@ -503,7 +581,11 @@
return self.action(server_id, 'rescue', schema.rescue_server, **kwargs)
def unrescue_server(self, server_id):
- """Unrescue the provided server."""
+ """Unrescue the provided server.
+
+ Available params: http://developer.openstack.org/
+ api-ref-compute-v2.1.html#unrescue
+ """
return self.action(server_id, 'unrescue')
def show_server_diagnostics(self, server_id):
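Example (a minimal sketch of driving the server lifecycle methods documented above; ``servers_client`` is assumed to be an already initialised ``ServersClient`` and the image/flavor references are placeholders)::

    # Create, inspect, stop and delete a server through the documented calls.
    body = servers_client.create_server(
        name='doc-example-server',
        imageRef='70a599e0-31e7-49b7-b260-868f441e862b',  # placeholder image UUID
        flavorRef='1')                                    # placeholder flavor ref
    server_id = body['server']['id']

    servers_client.show_server(server_id)    # GET servers/<id>
    servers_client.stop_server(server_id)    # POST servers/<id>/action (os-stop)
    servers_client.delete_server(server_id)  # DELETE servers/<id>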
diff --git a/tempest/lib/services/compute/services_client.py b/tempest/lib/services/compute/services_client.py
index a190e5f..b6dbe28 100644
--- a/tempest/lib/services/compute/services_client.py
+++ b/tempest/lib/services/compute/services_client.py
@@ -25,6 +25,11 @@
class ServicesClient(base_compute_client.BaseComputeClient):
def list_services(self, **params):
+ """Lists all running Compute services for a tenant.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listServices
+ """
url = 'os-services'
if params:
url += '?%s' % urllib.urlencode(params)
diff --git a/tempest/lib/services/compute/versions_client.py b/tempest/lib/services/compute/versions_client.py
index eb4e7e9..b2052c3 100644
--- a/tempest/lib/services/compute/versions_client.py
+++ b/tempest/lib/services/compute/versions_client.py
@@ -40,6 +40,7 @@
def list_versions(self):
version_url = self._get_base_version_url()
resp, body = self.raw_request(version_url, 'GET')
+ self._error_checker(resp, body)
body = json.loads(body)
self.validate_response(schema.list_versions, resp, body)
return rest_client.ResponseBody(resp, body)
@@ -56,6 +57,7 @@
# we need a token for this request
resp, body = self.raw_request(version_url, 'GET',
{'X-Auth-Token': self.token})
+ self._error_checker(resp, body)
body = json.loads(body)
self.validate_response(schema.get_one_version, resp, body)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/compute/volumes_client.py b/tempest/lib/services/compute/volumes_client.py
index 41d9af2..2787779 100644
--- a/tempest/lib/services/compute/volumes_client.py
+++ b/tempest/lib/services/compute/volumes_client.py
@@ -25,7 +25,11 @@
class VolumesClient(base_compute_client.BaseComputeClient):
def list_volumes(self, detail=False, **params):
- """List all the volumes created."""
+ """List all the volumes created.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#listVolumes
+ """
url = 'os-volumes'
if detail:
@@ -39,7 +43,11 @@
return rest_client.ResponseBody(resp, body)
def show_volume(self, volume_id):
- """Return the details of a single volume."""
+ """Return the details of a single volume.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#showVolume
+ """
url = "os-volumes/%s" % volume_id
resp, body = self.get(url)
body = json.loads(body)
@@ -59,7 +67,11 @@
return rest_client.ResponseBody(resp, body)
def delete_volume(self, volume_id):
- """Delete the Specified Volume."""
+ """Delete the Specified Volume.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-compute-v2.1.html#deleteVolume
+ """
resp, body = self.delete("os-volumes/%s" % volume_id)
self.validate_response(schema.delete_volume, resp, body)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/identity/v2/__init__.py b/tempest/lib/services/identity/v2/__init__.py
index e69de29..b7d3c74 100644
--- a/tempest/lib/services/identity/v2/__init__.py
+++ b/tempest/lib/services/identity/v2/__init__.py
@@ -0,0 +1,24 @@
+# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+from tempest.lib.services.identity.v2.endpoints_client import EndpointsClient
+from tempest.lib.services.identity.v2.identity_client import IdentityClient
+from tempest.lib.services.identity.v2.roles_client import RolesClient
+from tempest.lib.services.identity.v2.services_client import ServicesClient
+from tempest.lib.services.identity.v2.tenants_client import TenantsClient
+from tempest.lib.services.identity.v2.token_client import TokenClient
+from tempest.lib.services.identity.v2.users_client import UsersClient
+
+__all__ = ['EndpointsClient', 'IdentityClient', 'RolesClient',
+ 'ServicesClient', 'TenantsClient', 'TokenClient', 'UsersClient']
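This ``__all__`` list is what makes the single-module contract described for ``module_path``/``client_names`` workable: every exported class can be resolved by name from one import. A rough sketch of that lookup (illustrative only, not the actual ``ClientsFactory`` code)::

    import importlib

    def load_clients(module_path, client_names):
        # Resolve each exported client class by name from the single module.
        module = importlib.import_module(module_path)
        return {name: getattr(module, name) for name in client_names}

    # e.g. load_clients('tempest.lib.services.identity.v2',
    #                   ['RolesClient', 'UsersClient'])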
diff --git a/tempest/lib/services/identity/v2/endpoints_client.py b/tempest/lib/services/identity/v2/endpoints_client.py
index f7b265d..770e8ae 100644
--- a/tempest/lib/services/identity/v2/endpoints_client.py
+++ b/tempest/lib/services/identity/v2/endpoints_client.py
@@ -23,8 +23,9 @@
def create_endpoint(self, **kwargs):
"""Create an endpoint for service.
- Available params: http://developer.openstack.org/
- api-ref-identity-v2-ext.html#createEndpoint
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#createEndpoint
"""
post_body = json.dumps({'endpoint': kwargs})
diff --git a/tempest/services/identity/v2/json/identity_client.py b/tempest/lib/services/identity/v2/identity_client.py
similarity index 100%
rename from tempest/services/identity/v2/json/identity_client.py
rename to tempest/lib/services/identity/v2/identity_client.py
diff --git a/tempest/lib/services/identity/v2/roles_client.py b/tempest/lib/services/identity/v2/roles_client.py
index 15c8834..635d013 100644
--- a/tempest/lib/services/identity/v2/roles_client.py
+++ b/tempest/lib/services/identity/v2/roles_client.py
@@ -22,8 +22,9 @@
def create_role(self, **kwargs):
"""Create a role.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#createRole
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#createRole
"""
post_body = json.dumps({'role': kwargs})
resp, body = self.post('OS-KSADM/roles', post_body)
@@ -34,12 +35,11 @@
def show_role(self, role_id_or_name):
"""Get a role by its id or name.
- Available params: see
- http://developer.openstack.org/
- api-ref-identity-v2-ext.html#showRoleByID
- OR
- http://developer.openstack.org/
- api-ref-identity-v2-ext.html#showRoleByName
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#showRoleByID
+ OR
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#showRoleByName
"""
resp, body = self.get('OS-KSADM/roles/%s' % role_id_or_name)
self.expected_success(200, resp.status)
@@ -49,8 +49,9 @@
def list_roles(self, **params):
"""Returns roles.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#listRoles
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#listRoles
"""
url = 'OS-KSADM/roles'
if params:
@@ -63,19 +64,20 @@
def delete_role(self, role_id):
"""Delete a role.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#deleteRole
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#deleteRole
"""
- resp, body = self.delete('OS-KSADM/roles/%s' % str(role_id))
+ resp, body = self.delete('OS-KSADM/roles/%s' % role_id)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
def create_user_role_on_project(self, tenant_id, user_id, role_id):
"""Add roles to a user on a tenant.
- Available params: see
- http://developer.openstack.org/
- api-ref-identity-v2-ext.html#grantRoleToUserOnTenant
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#grantRoleToUserOnTenant
"""
resp, body = self.put('/tenants/%s/users/%s/roles/OS-KSADM/%s' %
(tenant_id, user_id, role_id), "")
@@ -97,9 +99,9 @@
def delete_role_from_user_on_project(self, tenant_id, user_id, role_id):
"""Removes a role assignment for a user on a tenant.
- Available params: see
- http://developer.openstack.org/
- api-ref-identity-v2-ext.html#revokeRoleFromUserOnTenant
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#revokeRoleFromUserOnTenant
"""
resp, body = self.delete('/tenants/%s/users/%s/roles/OS-KSADM/%s' %
(tenant_id, user_id, role_id))
diff --git a/tempest/lib/services/identity/v2/services_client.py b/tempest/lib/services/identity/v2/services_client.py
old mode 100755
new mode 100644
index c26d419..b3f94aa
--- a/tempest/lib/services/identity/v2/services_client.py
+++ b/tempest/lib/services/identity/v2/services_client.py
@@ -24,8 +24,9 @@
def create_service(self, **kwargs):
"""Create a service.
- Available params: see http://developer.openstack.org/api-ref/identity/
- v2-ext/?expanded=#create-service-admin-extension
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#create-service-admin-extension
"""
post_body = json.dumps({'OS-KSADM:service': kwargs})
resp, body = self.post('/OS-KSADM/services', post_body)
@@ -44,8 +45,9 @@
def list_services(self, **params):
"""List Service - Returns Services.
- Available params: see http://developer.openstack.org/api-ref/identity/
- v2-ext/?expanded=#list-services-admin-extension
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#list-services-admin-extension
"""
url = '/OS-KSADM/services'
if params:
diff --git a/tempest/lib/services/identity/v2/tenants_client.py b/tempest/lib/services/identity/v2/tenants_client.py
index 77ddaa5..b687332 100644
--- a/tempest/lib/services/identity/v2/tenants_client.py
+++ b/tempest/lib/services/identity/v2/tenants_client.py
@@ -24,8 +24,9 @@
def create_tenant(self, **kwargs):
"""Create a tenant
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#createTenant
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#create-tenant
"""
post_body = json.dumps({'tenant': kwargs})
resp, body = self.post('tenants', post_body)
@@ -36,8 +37,9 @@
def delete_tenant(self, tenant_id):
"""Delete a tenant.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#deleteTenant
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#deleteTenant
"""
resp, body = self.delete('tenants/%s' % str(tenant_id))
self.expected_success(204, resp.status)
@@ -46,9 +48,9 @@
def show_tenant(self, tenant_id):
"""Get tenant details.
- Available params: see
- http://developer.openstack.org/
- api-ref-identity-v2-ext.html#admin-showTenantById
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#admin-showTenantById
"""
resp, body = self.get('tenants/%s' % str(tenant_id))
self.expected_success(200, resp.status)
@@ -58,8 +60,9 @@
def list_tenants(self, **params):
"""Returns tenants.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#admin-listTenants
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#list-tenants-admin-endpoint
"""
url = 'tenants'
if params:
@@ -72,8 +75,9 @@
def update_tenant(self, tenant_id, **kwargs):
"""Updates a tenant.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#updateTenant
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#update-tenant
"""
if 'id' not in kwargs:
kwargs['id'] = tenant_id
@@ -86,8 +90,9 @@
def list_tenant_users(self, tenant_id, **params):
"""List users for a Tenant.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#listUsersForTenant
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#list-users-on-a-tenant
"""
url = '/tenants/%s/users' % tenant_id
if params:
diff --git a/tempest/lib/services/identity/v2/token_client.py b/tempest/lib/services/identity/v2/token_client.py
index 5716027..a5d7c86 100644
--- a/tempest/lib/services/identity/v2/token_client.py
+++ b/tempest/lib/services/identity/v2/token_client.py
@@ -22,11 +22,11 @@
class TokenClient(rest_client.RestClient):
def __init__(self, auth_url, disable_ssl_certificate_validation=None,
- ca_certs=None, trace_requests=None):
+ ca_certs=None, trace_requests=None, **kwargs):
dscv = disable_ssl_certificate_validation
super(TokenClient, self).__init__(
None, None, None, disable_ssl_certificate_validation=dscv,
- ca_certs=ca_certs, trace_requests=trace_requests)
+ ca_certs=ca_certs, trace_requests=trace_requests, **kwargs)
if auth_url is None:
raise exceptions.IdentityError("Couldn't determine auth_url")
diff --git a/tempest/lib/services/identity/v2/users_client.py b/tempest/lib/services/identity/v2/users_client.py
index 4ea17f9..f20fdc4 100644
--- a/tempest/lib/services/identity/v2/users_client.py
+++ b/tempest/lib/services/identity/v2/users_client.py
@@ -22,8 +22,9 @@
def create_user(self, **kwargs):
"""Create a user.
- Available params: see http://developer.openstack.org/
- api-ref-identity-admin-v2.html#admin-createUser
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#create-user-admin-endpoint
"""
post_body = json.dumps({'user': kwargs})
resp, body = self.post('users', post_body)
@@ -34,8 +35,9 @@
def update_user(self, user_id, **kwargs):
"""Updates a user.
- Available params: see http://developer.openstack.org/
- api-ref-identity-admin-v2.html#admin-updateUser
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#update-user-admin-endpoint
"""
put_body = json.dumps({'user': kwargs})
resp, body = self.put('users/%s' % user_id, put_body)
@@ -46,8 +48,9 @@
def show_user(self, user_id):
"""GET a user.
- Available params: see http://developer.openstack.org/
- api-ref-identity-admin-v2.html#admin-showUser
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-admin-v2.html#admin-showUser
"""
resp, body = self.get("users/%s" % user_id)
self.expected_success(200, resp.status)
@@ -57,8 +60,9 @@
def delete_user(self, user_id):
"""Delete a user.
- Available params: see http://developer.openstack.org/
- api-ref-identity-admin-v2.html#admin-deleteUser
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-admin-v2.html#admin-deleteUser
"""
resp, body = self.delete("users/%s" % user_id)
self.expected_success(204, resp.status)
@@ -67,8 +71,9 @@
def list_users(self, **params):
"""Get the list of users.
- Available params: see http://developer.openstack.org/
- api-ref-identity-admin-v2.html#admin-listUsers
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v2-admin/index.html#list-users-admin-endpoint
"""
url = "users"
if params:
@@ -81,8 +86,9 @@
def update_user_enabled(self, user_id, **kwargs):
"""Enables or disables a user.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v2-ext.html#enableUser
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-identity-v2-ext.html#enableUser
"""
# NOTE: The URL (users/<id>/enabled) is different from the api-site
# one (users/<id>/OS-KSADM/enabled) , but they are the same API
diff --git a/tempest/services/identity/v3/json/credentials_client.py b/tempest/lib/services/identity/v3/credentials_client.py
similarity index 60%
rename from tempest/services/identity/v3/json/credentials_client.py
rename to tempest/lib/services/identity/v3/credentials_client.py
index 6ab94d0..6e5fd31 100644
--- a/tempest/services/identity/v3/json/credentials_client.py
+++ b/tempest/lib/services/identity/v3/credentials_client.py
@@ -14,10 +14,11 @@
# under the License.
"""
-http://developer.openstack.org/api-ref-identity-v3.html#credentials-v3
+http://developer.openstack.org/api-ref/identity/v3/index.html#credentials
"""
from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
from tempest.lib.common import rest_client
@@ -28,46 +29,63 @@
def create_credential(self, **kwargs):
"""Creates a credential.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createCredential
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-credential
"""
post_body = json.dumps({'credential': kwargs})
resp, body = self.post('credentials', post_body)
self.expected_success(201, resp.status)
body = json.loads(body)
- body['credential']['blob'] = json.loads(body['credential']['blob'])
return rest_client.ResponseBody(resp, body)
def update_credential(self, credential_id, **kwargs):
"""Updates a credential.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateCredential
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-credential
"""
post_body = json.dumps({'credential': kwargs})
resp, body = self.patch('credentials/%s' % credential_id, post_body)
self.expected_success(200, resp.status)
body = json.loads(body)
- body['credential']['blob'] = json.loads(body['credential']['blob'])
return rest_client.ResponseBody(resp, body)
def show_credential(self, credential_id):
- """To GET Details of a credential."""
+ """To GET Details of a credential.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#show-credential-details
+ """
resp, body = self.get('credentials/%s' % credential_id)
self.expected_success(200, resp.status)
body = json.loads(body)
- body['credential']['blob'] = json.loads(body['credential']['blob'])
return rest_client.ResponseBody(resp, body)
- def list_credentials(self):
- """Lists out all the available credentials."""
- resp, body = self.get('credentials')
+ def list_credentials(self, **params):
+ """Lists out all the available credentials.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-credentials
+ """
+ url = 'credentials'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
def delete_credential(self, credential_id):
- """Deletes a credential."""
+ """Deletes a credential.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#delete-credential
+ """
resp, body = self.delete('credentials/%s' % credential_id)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
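The ``list_credentials`` change above follows the same filtering pattern used throughout these clients: optional keyword arguments are appended as a query string. A small sketch of just that URL-building step (the helper name is made up for illustration)::

    from six.moves.urllib import parse as urllib

    def build_url(base, **params):
        # Optional filters become a query string, e.g. 'credentials?user_id=u1'
        url = base
        if params:
            url += '?%s' % urllib.urlencode(params)
        return url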
diff --git a/tempest/lib/services/identity/v3/endpoints_client.py b/tempest/lib/services/identity/v3/endpoints_client.py
index db30508..c4c0d8d 100644
--- a/tempest/lib/services/identity/v3/endpoints_client.py
+++ b/tempest/lib/services/identity/v3/endpoints_client.py
@@ -35,8 +35,9 @@
def create_endpoint(self, **kwargs):
"""Create endpoint.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createEndpoint
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-endpoint
"""
post_body = json.dumps({'endpoint': kwargs})
resp, body = self.post('endpoints', post_body)
@@ -47,8 +48,9 @@
def update_endpoint(self, endpoint_id, **kwargs):
"""Updates an endpoint with given parameters.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateEndpoint
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-endpoint
"""
post_body = json.dumps({'endpoint': kwargs})
resp, body = self.patch('endpoints/%s' % endpoint_id, post_body)
diff --git a/tempest/services/identity/v3/json/groups_client.py b/tempest/lib/services/identity/v3/groups_client.py
similarity index 70%
rename from tempest/services/identity/v3/json/groups_client.py
rename to tempest/lib/services/identity/v3/groups_client.py
index 1a495f8..5e68939 100644
--- a/tempest/services/identity/v3/json/groups_client.py
+++ b/tempest/lib/services/identity/v3/groups_client.py
@@ -18,6 +18,7 @@
"""
from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
from tempest.lib.common import rest_client
@@ -28,8 +29,9 @@
def create_group(self, **kwargs):
"""Creates a group.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createGroup
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-group
"""
post_body = json.dumps({'group': kwargs})
resp, body = self.post('groups', post_body)
@@ -44,9 +46,17 @@
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
- def list_groups(self):
- """Lists the groups."""
- resp, body = self.get('groups')
+ def list_groups(self, **params):
+ """Lists the groups.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-groups
+ """
+ url = 'groups'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
@@ -54,8 +64,9 @@
def update_group(self, group_id, **kwargs):
"""Updates a group.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateGroup
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-group
"""
post_body = json.dumps({'group': kwargs})
resp, body = self.patch('groups/%s' % group_id, post_body)
@@ -65,7 +76,7 @@
def delete_group(self, group_id):
"""Delete a group."""
- resp, body = self.delete('groups/%s' % str(group_id))
+ resp, body = self.delete('groups/%s' % group_id)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -76,9 +87,17 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
- def list_group_users(self, group_id):
- """List users in group."""
- resp, body = self.get('groups/%s/users' % group_id)
+ def list_group_users(self, group_id, **params):
+ """List users in group.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-users-in-group
+ """
+ url = 'groups/%s/users' % group_id
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/services/identity/v3/json/identity_client.py b/tempest/lib/services/identity/v3/identity_client.py
similarity index 100%
rename from tempest/services/identity/v3/json/identity_client.py
rename to tempest/lib/services/identity/v3/identity_client.py
diff --git a/tempest/lib/services/identity/v3/inherited_roles_client.py b/tempest/lib/services/identity/v3/inherited_roles_client.py
new file mode 100644
index 0000000..691c7fd
--- /dev/null
+++ b/tempest/lib/services/identity/v3/inherited_roles_client.py
@@ -0,0 +1,151 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.common import rest_client
+
+
+class InheritedRolesClient(rest_client.RestClient):
+ api_version = "v3"
+
+ def create_inherited_role_on_domains_user(
+ self, domain_id, user_id, role_id):
+ """Assigns a role to a user on projects owned by a domain."""
+ resp, body = self.put(
+ "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
+ % (domain_id, user_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_inherited_role_from_user_on_domain(
+ self, domain_id, user_id, role_id):
+ """Revokes an inherited project role from a user on a domain."""
+ resp, body = self.delete(
+ "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
+ % (domain_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_inherited_project_role_for_user_on_domain(
+ self, domain_id, user_id):
+ """Lists the inherited project roles on a domain for a user."""
+ resp, body = self.get(
+ "OS-INHERIT/domains/%s/users/%s/roles/inherited_to_projects"
+ % (domain_id, user_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_user_inherited_project_role_on_domain(
+ self, domain_id, user_id, role_id):
+ """Checks whether a user has an inherited project role on a domain."""
+ resp, body = self.head(
+ "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
+ % (domain_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def create_inherited_role_on_domains_group(
+ self, domain_id, group_id, role_id):
+ """Assigns a role to a group on projects owned by a domain."""
+ resp, body = self.put(
+ "OS-INHERIT/domains/%s/groups/%s/roles/%s/inherited_to_projects"
+ % (domain_id, group_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_inherited_role_from_group_on_domain(
+ self, domain_id, group_id, role_id):
+ """Revokes an inherited project role from a group on a domain."""
+ resp, body = self.delete(
+ "OS-INHERIT/domains/%s/groups/%s/roles/%s/inherited_to_projects"
+ % (domain_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_inherited_project_role_for_group_on_domain(
+ self, domain_id, group_id):
+ """Lists the inherited project roles on a domain for a group."""
+ resp, body = self.get(
+ "OS-INHERIT/domains/%s/groups/%s/roles/inherited_to_projects"
+ % (domain_id, group_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_group_inherited_project_role_on_domain(
+ self, domain_id, group_id, role_id):
+ """Checks whether a group has an inherited project role on a domain."""
+ resp, body = self.head(
+ "OS-INHERIT/domains/%s/groups/%s/roles/%s/inherited_to_projects"
+ % (domain_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def create_inherited_role_on_projects_user(
+ self, project_id, user_id, role_id):
+ """Assigns a role to a user on projects in a subtree."""
+ resp, body = self.put(
+ "OS-INHERIT/projects/%s/users/%s/roles/%s/inherited_to_projects"
+ % (project_id, user_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_inherited_role_from_user_on_project(
+ self, project_id, user_id, role_id):
+ """Revokes an inherited role from a user on a project."""
+ resp, body = self.delete(
+ "OS-INHERIT/projects/%s/users/%s/roles/%s/inherited_to_projects"
+ % (project_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_user_has_flag_on_inherited_to_project(
+ self, project_id, user_id, role_id):
+ """Checks whether a user has a role assignment"""
+ """with the inherited_to_projects flag on a project."""
+ resp, body = self.head(
+ "OS-INHERIT/projects/%s/users/%s/roles/%s/inherited_to_projects"
+ % (project_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def create_inherited_role_on_projects_group(
+ self, project_id, group_id, role_id):
+ """Assigns a role to a group on projects in a subtree."""
+ resp, body = self.put(
+ "OS-INHERIT/projects/%s/groups/%s/roles/%s/inherited_to_projects"
+ % (project_id, group_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_inherited_role_from_group_on_project(
+ self, project_id, group_id, role_id):
+ """Revokes an inherited role from a group on a project."""
+ resp, body = self.delete(
+ "OS-INHERIT/projects/%s/groups/%s/roles/%s/inherited_to_projects"
+ % (project_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_group_has_flag_on_inherited_to_project(
+ self, project_id, group_id, role_id):
+ """Checks whether a group has a role assignment"""
+ """with the inherited_to_projects flag on a project."""
+ resp, body = self.head(
+ "OS-INHERIT/projects/%s/groups/%s/roles/%s/inherited_to_projects"
+ % (project_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
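Example (a minimal sketch of exercising the new OS-INHERIT calls defined above; the ``InheritedRolesClient`` instance and the domain/user/role IDs are assumed to already exist)::

    def exercise_inherited_roles(client, domain_id, user_id, role_id):
        # Grant, verify, list and finally revoke an inherited project role
        # for a user on a domain.
        client.create_inherited_role_on_domains_user(
            domain_id, user_id, role_id)
        client.check_user_inherited_project_role_on_domain(
            domain_id, user_id, role_id)
        roles = client.list_inherited_project_role_for_user_on_domain(
            domain_id, user_id)
        client.delete_inherited_role_from_user_on_domain(
            domain_id, user_id, role_id)
        return roles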
diff --git a/tempest/lib/services/identity/v3/policies_client.py b/tempest/lib/services/identity/v3/policies_client.py
index f28db9a..0282745 100644
--- a/tempest/lib/services/identity/v3/policies_client.py
+++ b/tempest/lib/services/identity/v3/policies_client.py
@@ -28,8 +28,9 @@
def create_policy(self, **kwargs):
"""Creates a Policy.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createPolicy
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-policy
"""
post_body = json.dumps({'policy': kwargs})
resp, body = self.post('policies', post_body)
@@ -55,8 +56,9 @@
def update_policy(self, policy_id, **kwargs):
"""Updates a policy.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updatePolicy
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-policy
"""
post_body = json.dumps({'policy': kwargs})
url = 'policies/%s' % policy_id
diff --git a/tempest/services/identity/v3/json/projects_client.py b/tempest/lib/services/identity/v3/projects_client.py
similarity index 84%
rename from tempest/services/identity/v3/json/projects_client.py
rename to tempest/lib/services/identity/v3/projects_client.py
index 97e43df..20787da 100644
--- a/tempest/services/identity/v3/json/projects_client.py
+++ b/tempest/lib/services/identity/v3/projects_client.py
@@ -25,8 +25,9 @@
def create_project(self, name, **kwargs):
"""Create a Project.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createProject
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-project
"""
# Include the project name to the kwargs parameters
@@ -49,8 +50,9 @@
def update_project(self, project_id, **kwargs):
"""Update a Project.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateProject
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-project
"""
post_body = json.dumps({'project': kwargs})
@@ -68,6 +70,6 @@
def delete_project(self, project_id):
"""Delete a project."""
- resp, body = self.delete('projects/%s' % str(project_id))
+ resp, body = self.delete('projects/%s' % project_id)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/services/identity/v3/json/regions_client.py b/tempest/lib/services/identity/v3/regions_client.py
similarity index 86%
rename from tempest/services/identity/v3/json/regions_client.py
rename to tempest/lib/services/identity/v3/regions_client.py
index 90dd9d7..33c754a 100644
--- a/tempest/services/identity/v3/json/regions_client.py
+++ b/tempest/lib/services/identity/v3/regions_client.py
@@ -29,11 +29,9 @@
def create_region(self, region_id=None, **kwargs):
"""Create region.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createRegion
-
- see http://developer.openstack.org/
- api-ref-identity-v3.html#createRegionWithID
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-region
"""
if region_id is not None:
method = self.put
@@ -50,8 +48,9 @@
def update_region(self, region_id, **kwargs):
"""Updates a region.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateRegion
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-region
"""
post_body = json.dumps({'region': kwargs})
resp, body = self.patch('regions/%s' % region_id, post_body)
diff --git a/tempest/lib/services/identity/v3/roles_client.py b/tempest/lib/services/identity/v3/roles_client.py
new file mode 100644
index 0000000..f1339dd
--- /dev/null
+++ b/tempest/lib/services/identity/v3/roles_client.py
@@ -0,0 +1,192 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class RolesClient(rest_client.RestClient):
+ api_version = "v3"
+
+ def create_role(self, **kwargs):
+ """Create a Role.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-role
+ """
+ post_body = json.dumps({'role': kwargs})
+ resp, body = self.post('roles', post_body)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def show_role(self, role_id):
+ """GET a Role."""
+ resp, body = self.get('roles/%s' % role_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_roles(self, **params):
+ """Get the list of Roles."""
+
+ url = 'roles'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_role(self, role_id, **kwargs):
+ """Update a Role.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-role
+ """
+ post_body = json.dumps({'role': kwargs})
+ resp, body = self.patch('roles/%s' % role_id, post_body)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_role(self, role_id):
+ """Delete a role."""
+ resp, body = self.delete('roles/%s' % role_id)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_user_role_on_project(self, project_id, user_id, role_id):
+ """Add roles to a user on a project."""
+ resp, body = self.put('projects/%s/users/%s/roles/%s' %
+ (project_id, user_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_user_role_on_domain(self, domain_id, user_id, role_id):
+ """Add roles to a user on a domain."""
+ resp, body = self.put('domains/%s/users/%s/roles/%s' %
+ (domain_id, user_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_user_roles_on_project(self, project_id, user_id):
+ """list roles of a user on a project."""
+ resp, body = self.get('projects/%s/users/%s/roles' %
+ (project_id, user_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_user_roles_on_domain(self, domain_id, user_id):
+ """list roles of a user on a domain."""
+ resp, body = self.get('domains/%s/users/%s/roles' %
+ (domain_id, user_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_role_from_user_on_project(self, project_id, user_id, role_id):
+ """Delete role of a user on a project."""
+ resp, body = self.delete('projects/%s/users/%s/roles/%s' %
+ (project_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_role_from_user_on_domain(self, domain_id, user_id, role_id):
+ """Delete role of a user on a domain."""
+ resp, body = self.delete('domains/%s/users/%s/roles/%s' %
+ (domain_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_user_role_existence_on_project(self, project_id,
+ user_id, role_id):
+ """Check role of a user on a project."""
+ resp, body = self.head('projects/%s/users/%s/roles/%s' %
+ (project_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def check_user_role_existence_on_domain(self, domain_id,
+ user_id, role_id):
+ """Check role of a user on a domain."""
+ resp, body = self.head('domains/%s/users/%s/roles/%s' %
+ (domain_id, user_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def create_group_role_on_project(self, project_id, group_id, role_id):
+ """Add roles to a group on a project."""
+ resp, body = self.put('projects/%s/groups/%s/roles/%s' %
+ (project_id, group_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_group_role_on_domain(self, domain_id, group_id, role_id):
+ """Add roles to a group on a domain."""
+ resp, body = self.put('domains/%s/groups/%s/roles/%s' %
+ (domain_id, group_id, role_id), None)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_group_roles_on_project(self, project_id, group_id):
+ """list roles of a group on a project."""
+ resp, body = self.get('projects/%s/groups/%s/roles' %
+ (project_id, group_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_group_roles_on_domain(self, domain_id, group_id):
+ """list roles of a group on a domain."""
+ resp, body = self.get('domains/%s/groups/%s/roles' %
+ (domain_id, group_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_role_from_group_on_project(self, project_id, group_id, role_id):
+ """Delete role of a group on a project."""
+ resp, body = self.delete('projects/%s/groups/%s/roles/%s' %
+ (project_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_role_from_group_on_domain(self, domain_id, group_id, role_id):
+ """Delete role of a group on a domain."""
+ resp, body = self.delete('domains/%s/groups/%s/roles/%s' %
+ (domain_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_role_from_group_on_project_existence(self, project_id,
+ group_id, role_id):
+ """Check role of a group on a project."""
+ resp, body = self.head('projects/%s/groups/%s/roles/%s' %
+ (project_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def check_role_from_group_on_domain_existence(self, domain_id,
+ group_id, role_id):
+ """Check role of a group on a domain."""
+ resp, body = self.head('domains/%s/groups/%s/roles/%s' %
+ (domain_id, group_id, role_id))
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
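An illustrative round trip with the new ``RolesClient`` (a sketch only; the authenticated client instance and the project/user IDs are assumptions)::

    role = roles_client.create_role(name='reader')['role']
    roles_client.create_user_role_on_project(project_id, user_id, role['id'])
    roles_client.check_user_role_existence_on_project(
        project_id, user_id, role['id'])
    roles_client.delete_role_from_user_on_project(
        project_id, user_id, role['id'])
    roles_client.delete_role(role['id'])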
diff --git a/tempest/services/identity/v3/json/services_client.py b/tempest/lib/services/identity/v3/services_client.py
similarity index 71%
rename from tempest/services/identity/v3/json/services_client.py
rename to tempest/lib/services/identity/v3/services_client.py
index e863016..14c81cc 100644
--- a/tempest/services/identity/v3/json/services_client.py
+++ b/tempest/lib/services/identity/v3/services_client.py
@@ -18,6 +18,7 @@
"""
from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
from tempest.lib.common import rest_client
@@ -28,8 +29,9 @@
def update_service(self, service_id, **kwargs):
"""Updates a service.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateService
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#update-service
"""
patch_body = json.dumps({'service': kwargs})
resp, body = self.patch('services/%s' % service_id, patch_body)
@@ -48,8 +50,9 @@
def create_service(self, **kwargs):
"""Creates a service.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createService
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#create-service
"""
body = json.dumps({'service': kwargs})
resp, body = self.post("services", body)
@@ -57,13 +60,22 @@
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
- def delete_service(self, serv_id):
- url = "services/" + serv_id
+ def delete_service(self, service_id):
+ url = "services/" + service_id
resp, body = self.delete(url)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
- def list_services(self):
+ def list_services(self, **params):
+ """List services.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-services
+ """
+ url = 'services'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
-        resp, body = self.get('services')
+        resp, body = self.get(url)
self.expected_success(200, resp.status)
body = json.loads(body)
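With the signature change above, callers can filter the listing through query parameters, e.g. (illustrative only; the ``type`` filter is an assumption, not defined by this patch)::

    services_client.list_services()
    services_client.list_services(type='identity')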
diff --git a/tempest/lib/services/identity/v3/token_client.py b/tempest/lib/services/identity/v3/token_client.py
index 964d43f..c1f7e7b 100644
--- a/tempest/lib/services/identity/v3/token_client.py
+++ b/tempest/lib/services/identity/v3/token_client.py
@@ -22,11 +22,11 @@
class V3TokenClient(rest_client.RestClient):
def __init__(self, auth_url, disable_ssl_certificate_validation=None,
- ca_certs=None, trace_requests=None):
+ ca_certs=None, trace_requests=None, **kwargs):
dscv = disable_ssl_certificate_validation
super(V3TokenClient, self).__init__(
None, None, None, disable_ssl_certificate_validation=dscv,
- ca_certs=ca_certs, trace_requests=trace_requests)
+ ca_certs=ca_certs, trace_requests=trace_requests, **kwargs)
if auth_url is None:
raise exceptions.IdentityError("Couldn't determine auth_url")
diff --git a/tempest/services/identity/v3/json/trusts_client.py b/tempest/lib/services/identity/v3/trusts_client.py
similarity index 80%
rename from tempest/services/identity/v3/json/trusts_client.py
rename to tempest/lib/services/identity/v3/trusts_client.py
index dedee05..d113905 100644
--- a/tempest/services/identity/v3/json/trusts_client.py
+++ b/tempest/lib/services/identity/v3/trusts_client.py
@@ -13,6 +13,7 @@
# limitations under the License.
from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
from tempest.lib.common import rest_client
@@ -23,8 +24,9 @@
def create_trust(self, **kwargs):
"""Creates a trust.
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3-ext.html#createTrust
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3-ext/index.html#create-trust
"""
post_body = json.dumps({'trust': kwargs})
resp, body = self.post('OS-TRUST/trusts', post_body)
@@ -38,16 +40,17 @@
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
- def list_trusts(self, trustor_user_id=None, trustee_user_id=None):
- """GET trusts."""
- if trustor_user_id:
- resp, body = self.get("OS-TRUST/trusts?trustor_user_id=%s"
- % trustor_user_id)
- elif trustee_user_id:
- resp, body = self.get("OS-TRUST/trusts?trustee_user_id=%s"
- % trustee_user_id)
- else:
- resp, body = self.get("OS-TRUST/trusts")
+ def list_trusts(self, **params):
+ """Returns trusts
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3-ext/index.html#list-trusts
+ """
+ url = "OS-TRUST/trusts/"
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
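The rewritten ``list_trusts`` builds its query string from arbitrary keyword arguments instead of the two hard-coded filters, so the previous calls translate directly (sketch; the client instance and IDs are assumed)::

    trusts_client.list_trusts()
    trusts_client.list_trusts(trustor_user_id=trustor_user_id)
    trusts_client.list_trusts(trustee_user_id=trustee_user_id)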
diff --git a/tempest/lib/services/identity/v3/users_client.py b/tempest/lib/services/identity/v3/users_client.py
new file mode 100644
index 0000000..e99a971
--- /dev/null
+++ b/tempest/lib/services/identity/v3/users_client.py
@@ -0,0 +1,120 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+
+
+class UsersClient(rest_client.RestClient):
+ api_version = "v3"
+
+ def create_user(self, **kwargs):
+ """Creates a user.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#create-user
+ """
+ post_body = json.dumps({'user': kwargs})
+ resp, body = self.post('users', post_body)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_user(self, user_id, **kwargs):
+ """Updates a user.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#update-user
+ """
+ if 'id' not in kwargs:
+ kwargs['id'] = user_id
+ post_body = json.dumps({'user': kwargs})
+ resp, body = self.patch('users/%s' % user_id, post_body)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_user_password(self, user_id, **kwargs):
+ """Update a user password
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/index.html#change-password-for-user
+ """
+ update_user = json.dumps({'user': kwargs})
+ resp, _ = self.post('users/%s/password' % user_id, update_user)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
+
+ def list_user_projects(self, user_id, **params):
+ """Lists the projects on which a user has roles assigned.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-projects-for-user
+ """
+ url = 'users/%s/projects' % user_id
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_users(self, **params):
+ """Get the list of users.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-users
+ """
+ url = 'users'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def show_user(self, user_id):
+ """GET a user."""
+ resp, body = self.get("users/%s" % user_id)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_user(self, user_id):
+ """Deletes a User."""
+ resp, body = self.delete("users/%s" % user_id)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_user_groups(self, user_id, **params):
+ """Lists groups which a user belongs to.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/identity/v3/#list-groups-to-which-a-user-belongs
+ """
+ url = 'users/%s/groups' % user_id
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
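A hedged usage sketch of the new ``UsersClient`` (the client instance and the attribute values are illustrative only)::

    user = users_client.create_user(name='demo', password='secretpass')['user']
    users_client.update_user(user['id'], description='updated by tempest')
    users_client.list_users(name='demo')
    users_client.delete_user(user['id'])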
diff --git a/tempest/lib/services/image/v1/image_members_client.py b/tempest/lib/services/image/v1/image_members_client.py
index e7fa0c9..2318087 100644
--- a/tempest/lib/services/image/v1/image_members_client.py
+++ b/tempest/lib/services/image/v1/image_members_client.py
@@ -29,8 +29,9 @@
def list_shared_images(self, tenant_id):
"""List image memberships for the given tenant.
- Available params: see http://developer.openstack.org/
- api-ref-image-v1.html#listSharedImages-v1
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v1/#list-shared-images
"""
url = 'shared-images/%s' % tenant_id
@@ -42,8 +43,9 @@
def create_image_member(self, image_id, member_id, **kwargs):
"""Add a member to an image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v1.html#addMember-v1
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v1/#add-member-to-image
"""
url = 'images/%s/members/%s' % (image_id, member_id)
body = json.dumps({'member': kwargs})
@@ -54,8 +56,9 @@
def delete_image_member(self, image_id, member_id):
"""Removes a membership from the image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v1.html#removeMember-v1
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v1/#remove-member
"""
url = 'images/%s/members/%s' % (image_id, member_id)
resp, __ = self.delete(url)
diff --git a/tempest/lib/services/image/v1/images_client.py b/tempest/lib/services/image/v1/images_client.py
index 0db98f8..e67a547 100644
--- a/tempest/lib/services/image/v1/images_client.py
+++ b/tempest/lib/services/image/v1/images_client.py
@@ -34,8 +34,7 @@
data = iter(functools.partial(data.read, CHUNKSIZE), b'')
resp, body = self.request('POST', 'images',
headers=headers, body=data, chunked=True)
- self._error_checker('POST', 'images', headers, data, resp,
- body)
+ self._error_checker(resp, body)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
@@ -47,8 +46,7 @@
url = 'images/%s' % image_id
resp, body = self.request('PUT', url, headers=headers,
body=data, chunked=True)
- self._error_checker('PUT', url, headers, data,
- resp, body)
+ self._error_checker(resp, body)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
@@ -61,8 +59,9 @@
def create_image(self, data=None, headers=None):
"""Create an image.
- Available params: http://developer.openstack.org/
- api-ref-image-v1.html#createImage-v1
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-image-v1.html#createImage-v1
"""
if headers is None:
headers = {}
@@ -78,8 +77,9 @@
def update_image(self, image_id, data=None, headers=None):
"""Update an image.
- Available params: http://developer.openstack.org/
- api-ref-image-v1.html#updateImage-v1
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-image-v1.html#updateImage-v1
"""
if headers is None:
headers = {}
@@ -102,8 +102,9 @@
def list_images(self, detail=False, **kwargs):
"""Return a list of all images filtered by input parameters.
- Available params: see http://developer.openstack.org/
- api-ref-image-v1.html#listImage-v1
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v1/#list-images
Most parameters except the following are passed to the API without
any changes.
diff --git a/tempest/lib/services/image/v2/image_members_client.py b/tempest/lib/services/image/v2/image_members_client.py
index d0ab165..e5118a8 100644
--- a/tempest/lib/services/image/v2/image_members_client.py
+++ b/tempest/lib/services/image/v2/image_members_client.py
@@ -21,8 +21,9 @@
def list_image_members(self, image_id):
"""List image members.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#listImageMembers-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#list-image-members
"""
url = 'images/%s/members' % image_id
resp, body = self.get(url)
@@ -33,8 +34,9 @@
def create_image_member(self, image_id, **kwargs):
"""Create an image member.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#createImageMember-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#create-image-member
"""
url = 'images/%s/members' % image_id
data = json.dumps(kwargs)
@@ -46,8 +48,9 @@
def update_image_member(self, image_id, member_id, **kwargs):
"""Update an image member.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#updateImageMember-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#update-image-member
"""
url = 'images/%s/members/%s' % (image_id, member_id)
data = json.dumps(kwargs)
@@ -59,8 +62,9 @@
def show_image_member(self, image_id, member_id):
"""Show an image member.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#showImageMember-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#show-image-member-details
"""
url = 'images/%s/members/%s' % (image_id, member_id)
resp, body = self.get(url)
@@ -70,8 +74,9 @@
def delete_image_member(self, image_id, member_id):
"""Delete an image member.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#deleteImageMember-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#delete-image-member
"""
url = 'images/%s/members/%s' % (image_id, member_id)
resp, _ = self.delete(url)
diff --git a/tempest/lib/services/image/v2/images_client.py b/tempest/lib/services/image/v2/images_client.py
index 996ce94..bcdae44 100644
--- a/tempest/lib/services/image/v2/images_client.py
+++ b/tempest/lib/services/image/v2/images_client.py
@@ -30,8 +30,9 @@
def update_image(self, image_id, patch):
"""Update an image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#updateImage-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/index.html#update-an-image
"""
data = json.dumps(patch)
headers = {"Content-Type": "application/openstack-images-v2.0"
@@ -44,8 +45,9 @@
def create_image(self, **kwargs):
"""Create an image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#createImage-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/index.html#create-an-image
"""
data = json.dumps(kwargs)
resp, body = self.post('images', data)
@@ -56,9 +58,10 @@
def deactivate_image(self, image_id):
"""Deactivate image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#deactivateImage-v2
- """
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#deactivate-image
+ """
url = 'images/%s/actions/deactivate' % image_id
resp, body = self.post(url, None)
self.expected_success(204, resp.status)
@@ -67,9 +70,10 @@
def reactivate_image(self, image_id):
"""Reactivate image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#reactivateImage-v2
- """
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#reactivate-image
+ """
url = 'images/%s/actions/reactivate' % image_id
resp, body = self.post(url, None)
self.expected_success(204, resp.status)
@@ -78,8 +82,9 @@
def delete_image(self, image_id):
"""Delete image.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#deleteImage-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#delete-an-image
"""
url = 'images/%s' % image_id
resp, _ = self.delete(url)
@@ -89,8 +94,9 @@
def list_images(self, params=None):
"""List images.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#listImages-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#show-images
"""
url = 'images'
@@ -105,8 +111,9 @@
def show_image(self, image_id):
"""Show image details.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#showImage-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#show-image-details
"""
url = 'images/%s' % image_id
resp, body = self.get(url)
@@ -129,8 +136,9 @@
def store_image_file(self, image_id, data):
"""Upload binary image data.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#storeImageFile-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#upload-binary-image-data
"""
url = 'images/%s/file' % image_id
@@ -147,8 +155,9 @@
def show_image_file(self, image_id):
"""Download binary image data.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#showImageFile-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#download-binary-image-data
"""
url = 'images/%s/file' % image_id
resp, body = self.get(url)
@@ -158,8 +167,9 @@
def add_image_tag(self, image_id, tag):
"""Add an image tag.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#addImageTag-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#add-image-tag
"""
url = 'images/%s/tags/%s' % (image_id, tag)
resp, body = self.put(url, body=None)
@@ -169,8 +179,9 @@
def delete_image_tag(self, image_id, tag):
"""Delete an image tag.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#deleteImageTag-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/#delete-image-tag
"""
url = 'images/%s/tags/%s' % (image_id, tag)
resp, _ = self.delete(url)
diff --git a/tempest/lib/services/image/v2/namespaces_client.py b/tempest/lib/services/image/v2/namespaces_client.py
index 5bd096d..b00de89 100644
--- a/tempest/lib/services/image/v2/namespaces_client.py
+++ b/tempest/lib/services/image/v2/namespaces_client.py
@@ -24,8 +24,9 @@
def create_namespace(self, **kwargs):
"""Create a namespace.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#createNamespace-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#create-namespace
"""
data = json.dumps(kwargs)
resp, body = self.post('metadefs/namespaces', data)
@@ -33,7 +34,26 @@
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
+ def list_namespaces(self):
+ """List namespaces
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-namespaces
+ """
+ url = 'metadefs/namespaces'
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
def show_namespace(self, namespace):
+ """Show namespace details.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#get-namespace-details
+ """
url = 'metadefs/namespaces/%s' % namespace
resp, body = self.get(url)
self.expected_success(200, resp.status)
@@ -43,8 +63,9 @@
def update_namespace(self, namespace, **kwargs):
"""Update a namespace.
- Available params: see http://developer.openstack.org/
- api-ref-image-v2.html#updateNamespace-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#update-namespace
"""
# NOTE: On Glance API, we need to pass namespace on both URI
# and a request body.
@@ -60,8 +81,9 @@
def delete_namespace(self, namespace):
"""Delete a namespace.
- Available params: http://developer.openstack.org/
- api-ref-image-v2.html#deleteNamespace-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#delete-namespace
"""
url = 'metadefs/namespaces/%s' % namespace
resp, _ = self.delete(url)
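A short sketch of the namespace calls, including the newly added ``list_namespaces`` (the namespace name is illustrative)::

    namespaces_client.list_namespaces()
    namespaces_client.show_namespace('OS::Compute::Hypervisor')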
diff --git a/tempest/lib/services/image/v2/resource_types_client.py b/tempest/lib/services/image/v2/resource_types_client.py
index 1349c63..1b6889f 100644
--- a/tempest/lib/services/image/v2/resource_types_client.py
+++ b/tempest/lib/services/image/v2/resource_types_client.py
@@ -22,8 +22,54 @@
api_version = "v2"
def list_resource_types(self):
+ """Lists all resource types.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-types
+ """
url = 'metadefs/resource_types'
resp, body = self.get(url)
self.expected_success(200, resp.status)
body = json.loads(body)
return rest_client.ResponseBody(resp, body)
+
+ def create_resource_type_association(self, namespace_id, **kwargs):
+ """Creates a resource type association in given namespace.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#create-resource-type-association
+ """
+ url = 'metadefs/namespaces/%s/resource_types' % namespace_id
+ data = json.dumps(kwargs)
+ resp, body = self.post(url, data)
+ self.expected_success(201, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_resource_type_association(self, namespace_id):
+ """Lists resource type associations in given namespace.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-type-associations
+ """
+ url = 'metadefs/namespaces/%s/resource_types' % namespace_id
+ resp, body = self.get(url)
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_resource_type_association(self, namespace_id, resource_name):
+ """Removes resource type association in given namespace.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#remove-resource-type-association
+ """
+ url = 'metadefs/namespaces/%s/resource_types/%s' % (namespace_id,
+ resource_name)
+ resp, _ = self.delete(url)
+ self.expected_success(204, resp.status)
+ return rest_client.ResponseBody(resp)
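An illustrative sequence for the new resource-type association methods (the namespace and the 'OS::Nova::Flavor' resource-type name are assumptions for the example)::

    resource_types_client.create_resource_type_association(
        namespace, name='OS::Nova::Flavor')
    resource_types_client.list_resource_type_association(namespace)
    resource_types_client.delete_resource_type_association(
        namespace, 'OS::Nova::Flavor')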
diff --git a/tempest/lib/services/network/agents_client.py b/tempest/lib/services/network/agents_client.py
index c5d4c66..9bdf090 100644
--- a/tempest/lib/services/network/agents_client.py
+++ b/tempest/lib/services/network/agents_client.py
@@ -44,7 +44,7 @@
# link to api-site.
# LP: https://bugs.launchpad.net/openstack-api-site/+bug/1526670
uri = '/agents/%s/l3-routers' % agent_id
- return self.create_resource(uri, kwargs)
+ return self.create_resource(uri, kwargs, expect_empty_body=True)
def delete_router_from_l3_agent(self, agent_id, router_id):
uri = '/agents/%s/l3-routers/%s' % (agent_id, router_id)
@@ -65,4 +65,4 @@
# link to api-site.
# LP: https://bugs.launchpad.net/openstack-api-site/+bug/1526212
uri = '/agents/%s/dhcp-networks' % agent_id
- return self.create_resource(uri, kwargs)
+ return self.create_resource(uri, kwargs, expect_empty_body=True)
diff --git a/tempest/lib/services/network/base.py b/tempest/lib/services/network/base.py
index a6ada04..b6f9c91 100644
--- a/tempest/lib/services/network/base.py
+++ b/tempest/lib/services/network/base.py
@@ -54,18 +54,30 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def create_resource(self, uri, post_data):
+ def create_resource(self, uri, post_data, expect_empty_body=False):
req_uri = self.uri_prefix + uri
req_post_data = json.dumps(post_data)
resp, body = self.post(req_uri, req_post_data)
- body = json.loads(body)
+        # NOTE: The HTTP RFC allows a POST response to carry either a valid
+        # non-empty body or an empty body. If a non-empty body is expected,
+        # we decode it as JSON; otherwise we return None as the body.
+ if not expect_empty_body:
+ body = json.loads(body)
+ else:
+ body = None
self.expected_success(201, resp.status)
return rest_client.ResponseBody(resp, body)
- def update_resource(self, uri, post_data):
+ def update_resource(self, uri, post_data, expect_empty_body=False):
req_uri = self.uri_prefix + uri
req_post_data = json.dumps(post_data)
resp, body = self.put(req_uri, req_post_data)
- body = json.loads(body)
+        # NOTE: The HTTP RFC allows a PUT response to carry either a valid
+        # non-empty body or an empty body. If a non-empty body is expected,
+        # we decode it as JSON; otherwise we return None as the body.
+ if not expect_empty_body:
+ body = json.loads(body)
+ else:
+ body = None
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
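With ``expect_empty_body=True`` the helpers skip JSON decoding and return ``None`` as the body, which is what the L3-agent and DHCP-agent scheduling calls above rely on (hypothetical caller, mirroring ``agents_client``)::

    body = self.create_resource(uri, post_data, expect_empty_body=True)
    # body is an empty ResponseBody; the HTTP response itself is still
    # available as body.response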
diff --git a/tempest/lib/services/network/floating_ips_client.py b/tempest/lib/services/network/floating_ips_client.py
old mode 100755
new mode 100644
index f6cc0ff..2bb18e0
--- a/tempest/lib/services/network/floating_ips_client.py
+++ b/tempest/lib/services/network/floating_ips_client.py
@@ -21,8 +21,9 @@
If you specify port information, associates the floating IP with an
internal port.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#createFloatingIp
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-floating-ip
"""
uri = '/floatingips'
post_data = {'floatingip': kwargs}
@@ -31,8 +32,9 @@
def update_floatingip(self, floatingip_id, **kwargs):
"""Updates a floating IP and its association with an internal port.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#updateFloatingIp
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-floating-ip
"""
uri = '/floatingips/%s' % floatingip_id
post_data = {'floatingip': kwargs}
@@ -41,8 +43,9 @@
def show_floatingip(self, floatingip_id, **fields):
"""Shows details for a floating IP.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#showFloatingIp
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-floating-ip-details
"""
uri = '/floatingips/%s' % floatingip_id
return self.show_resource(uri, **fields)
@@ -54,8 +57,9 @@
def list_floatingips(self, **filters):
"""Lists floating IPs.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#listFloatingIps
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-floating-ips
"""
uri = '/floatingips'
return self.list_resources(uri, **filters)
diff --git a/tempest/lib/services/network/metering_labels_client.py b/tempest/lib/services/network/metering_labels_client.py
old mode 100755
new mode 100644
index 12a5834..411da1f
--- a/tempest/lib/services/network/metering_labels_client.py
+++ b/tempest/lib/services/network/metering_labels_client.py
@@ -18,9 +18,9 @@
def create_metering_label(self, **kwargs):
"""Creates an L3 metering label.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#
- createMeteringLabel
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-metering-label
"""
uri = '/metering/metering-labels'
post_data = {'metering_label': kwargs}
@@ -29,8 +29,9 @@
def show_metering_label(self, metering_label_id, **fields):
"""Shows details for a metering label.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#showMeteringLabel
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-metering-label-details
"""
uri = '/metering/metering-labels/%s' % metering_label_id
return self.show_resource(uri, **fields)
@@ -38,9 +39,9 @@
def delete_metering_label(self, metering_label_id):
"""Deletes an L3 metering label.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#
- deleteMeteringLabel
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#delete-metering-label
"""
uri = '/metering/metering-labels/%s' % metering_label_id
return self.delete_resource(uri)
@@ -48,9 +49,9 @@
def list_metering_labels(self, **filters):
"""Lists all L3 metering labels that belong to the tenant.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#
- listMeteringLabels
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-metering-labels
"""
uri = '/metering/metering-labels'
return self.list_resources(uri, **filters)
diff --git a/tempest/lib/services/network/networks_client.py b/tempest/lib/services/network/networks_client.py
old mode 100755
new mode 100644
index 7d75bf7..77d4823
--- a/tempest/lib/services/network/networks_client.py
+++ b/tempest/lib/services/network/networks_client.py
@@ -18,8 +18,9 @@
def create_network(self, **kwargs):
"""Creates a network.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#createNetwork
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-network
"""
uri = '/networks'
post_data = {'network': kwargs}
@@ -28,8 +29,9 @@
def update_network(self, network_id, **kwargs):
"""Updates a network.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#updateNetwork
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-network
"""
uri = '/networks/%s' % network_id
post_data = {'network': kwargs}
@@ -38,8 +40,9 @@
def show_network(self, network_id, **fields):
"""Shows details for a network.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#showNetwork
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-network-details
"""
uri = '/networks/%s' % network_id
return self.show_resource(uri, **fields)
@@ -51,8 +54,9 @@
def list_networks(self, **filters):
"""Lists networks to which the tenant has access.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#listNetworks
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-networks
"""
uri = '/networks'
return self.list_resources(uri, **filters)
@@ -60,8 +64,9 @@
def create_bulk_networks(self, **kwargs):
"""Create multiple networks in a single request.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#bulkCreateNetwork
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#bulk-create-networks
"""
uri = '/networks'
return self.create_resource(uri, kwargs)
diff --git a/tempest/lib/services/network/ports_client.py b/tempest/lib/services/network/ports_client.py
old mode 100755
new mode 100644
index 71f1103..93138b9
--- a/tempest/lib/services/network/ports_client.py
+++ b/tempest/lib/services/network/ports_client.py
@@ -19,8 +19,9 @@
def create_port(self, **kwargs):
"""Creates a port on a network.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#createPort
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-port
"""
uri = '/ports'
post_data = {'port': kwargs}
@@ -29,8 +30,9 @@
def update_port(self, port_id, **kwargs):
"""Updates a port.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#updatePort
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-port
"""
uri = '/ports/%s' % port_id
post_data = {'port': kwargs}
@@ -39,8 +41,9 @@
def show_port(self, port_id, **fields):
"""Shows details for a port.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#showPort
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-port-details
"""
uri = '/ports/%s' % port_id
return self.show_resource(uri, **fields)
@@ -48,8 +51,9 @@
def delete_port(self, port_id):
"""Deletes a port.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#removePort
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#delete-port
"""
uri = '/ports/%s' % port_id
return self.delete_resource(uri)
@@ -57,8 +61,9 @@
def list_ports(self, **filters):
"""Lists ports to which the tenant has access.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#listPorts
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-ports
"""
uri = '/ports'
return self.list_resources(uri, **filters)
@@ -66,8 +71,9 @@
def create_bulk_ports(self, **kwargs):
"""Create multiple ports in a single request.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#bulkCreatePorts
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=#bulk-create-ports
"""
uri = '/ports'
return self.create_resource(uri, kwargs)
diff --git a/tempest/lib/services/network/routers_client.py b/tempest/lib/services/network/routers_client.py
old mode 100755
new mode 100644
index 23e9c4e..19b7627
--- a/tempest/lib/services/network/routers_client.py
+++ b/tempest/lib/services/network/routers_client.py
@@ -18,8 +18,9 @@
def create_router(self, **kwargs):
"""Create a router.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#createRouter
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-router
"""
post_body = {'router': kwargs}
uri = '/routers'
@@ -28,8 +29,9 @@
def update_router(self, router_id, **kwargs):
"""Updates a logical router.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#updateRouter
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-router
"""
uri = '/routers/%s' % router_id
update_body = {'router': kwargs}
@@ -38,8 +40,9 @@
def show_router(self, router_id, **fields):
"""Shows details for a router.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#showRouter
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-router-details
"""
uri = '/routers/%s' % router_id
return self.show_resource(uri, **fields)
@@ -51,8 +54,9 @@
def list_routers(self, **filters):
"""Lists logical routers.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#listRouters
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-routers
"""
uri = '/routers'
return self.list_resources(uri, **filters)
@@ -60,9 +64,9 @@
def add_router_interface(self, router_id, **kwargs):
"""Add router interface.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#
- addRouterInterface
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#add-interface-to-router
"""
uri = '/routers/%s/add_router_interface' % router_id
return self.update_resource(uri, kwargs)
@@ -70,9 +74,9 @@
def remove_router_interface(self, router_id, **kwargs):
"""Remove router interface.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#
- deleteRouterInterface
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#remove-interface-from-router
"""
uri = '/routers/%s/remove_router_interface' % router_id
return self.update_resource(uri, kwargs)
diff --git a/tempest/lib/services/network/security_group_rules_client.py b/tempest/lib/services/network/security_group_rules_client.py
old mode 100755
new mode 100644
index 6cd01e1..d2bc4a9
--- a/tempest/lib/services/network/security_group_rules_client.py
+++ b/tempest/lib/services/network/security_group_rules_client.py
@@ -18,9 +18,9 @@
def create_security_group_rule(self, **kwargs):
"""Creates an OpenStack Networking security group rule.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#
- createSecGroupRule
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-security-group-rule
"""
uri = '/security-group-rules'
post_data = {'security_group_rule': kwargs}
@@ -29,8 +29,9 @@
def show_security_group_rule(self, security_group_rule_id, **fields):
"""Shows detailed information for a security group rule.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#showSecGroupRule
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-security-group-rule
"""
uri = '/security-group-rules/%s' % security_group_rule_id
return self.show_resource(uri, **fields)
@@ -42,8 +43,9 @@
def list_security_group_rules(self, **filters):
"""Lists a summary of all OpenStack Networking security group rules.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#listSecGroupRules
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-security-group-rules
"""
uri = '/security-group-rules'
return self.list_resources(uri, **filters)
diff --git a/tempest/lib/services/network/security_groups_client.py b/tempest/lib/services/network/security_groups_client.py
old mode 100755
new mode 100644
index 5c89a6f..1f30216
--- a/tempest/lib/services/network/security_groups_client.py
+++ b/tempest/lib/services/network/security_groups_client.py
@@ -18,8 +18,9 @@
def create_security_group(self, **kwargs):
"""Creates an OpenStack Networking security group.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#createSecGroup
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-security-group
"""
uri = '/security-groups'
post_data = {'security_group': kwargs}
@@ -28,8 +29,9 @@
def update_security_group(self, security_group_id, **kwargs):
"""Updates a security group.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#updateSecGroup
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-security-group
"""
uri = '/security-groups/%s' % security_group_id
post_data = {'security_group': kwargs}
@@ -38,8 +40,9 @@
def show_security_group(self, security_group_id, **fields):
"""Shows details for a security group.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#showSecGroup
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-security-group
"""
uri = '/security-groups/%s' % security_group_id
return self.show_resource(uri, **fields)
@@ -47,8 +50,9 @@
def delete_security_group(self, security_group_id):
"""Deletes an OpenStack Networking security group.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#deleteSecGroup
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#delete-security-group
"""
uri = '/security-groups/%s' % security_group_id
return self.delete_resource(uri)
@@ -56,8 +60,9 @@
def list_security_groups(self, **filters):
"""Lists OpenStack Networking security groups.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#listSecGroups
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-security-groups
"""
uri = '/security-groups'
return self.list_resources(uri, **filters)
diff --git a/tempest/lib/services/network/subnetpools_client.py b/tempest/lib/services/network/subnetpools_client.py
old mode 100755
new mode 100644
index f0a66a0..7e77e30
--- a/tempest/lib/services/network/subnetpools_client.py
+++ b/tempest/lib/services/network/subnetpools_client.py
@@ -20,8 +20,9 @@
def list_subnetpools(self, **filters):
"""Lists subnet pools to which the tenant has access.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#listSubnetPools
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-subnet-pools
"""
uri = '/subnetpools'
return self.list_resources(uri, **filters)
@@ -29,8 +30,9 @@
def create_subnetpool(self, **kwargs):
"""Creates a subnet pool.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#createSubnetPool
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-subnet-pool
"""
uri = '/subnetpools'
post_data = {'subnetpool': kwargs}
@@ -39,8 +41,9 @@
def show_subnetpool(self, subnetpool_id, **fields):
"""Shows information for a subnet pool.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#showSubnetPool
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-subnet-pool
"""
uri = '/subnetpools/%s' % subnetpool_id
return self.show_resource(uri, **fields)
@@ -48,8 +51,9 @@
def update_subnetpool(self, subnetpool_id, **kwargs):
"""Updates a subnet pool.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2-ext.html#updateSubnetPool
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-subnet-pool
"""
uri = '/subnetpools/%s' % subnetpool_id
post_data = {'subnetpool': kwargs}
diff --git a/tempest/lib/services/network/subnets_client.py b/tempest/lib/services/network/subnets_client.py
old mode 100755
new mode 100644
index 0fde3ee..b843f84
--- a/tempest/lib/services/network/subnets_client.py
+++ b/tempest/lib/services/network/subnets_client.py
@@ -18,8 +18,9 @@
def create_subnet(self, **kwargs):
"""Creates a subnet on a network.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#createSubnet
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#create-subnet
"""
uri = '/subnets'
post_data = {'subnet': kwargs}
@@ -28,8 +29,9 @@
def update_subnet(self, subnet_id, **kwargs):
"""Updates a subnet.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#updateSubnet
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#update-subnet
"""
uri = '/subnets/%s' % subnet_id
post_data = {'subnet': kwargs}
@@ -38,8 +40,9 @@
def show_subnet(self, subnet_id, **fields):
"""Shows details for a subnet.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#showSubnet
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#show-subnet-details
"""
uri = '/subnets/%s' % subnet_id
return self.show_resource(uri, **fields)
@@ -51,8 +54,9 @@
def list_subnets(self, **filters):
"""Lists subnets to which the tenant has access.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#listSubnets
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#list-subnets
"""
uri = '/subnets'
return self.list_resources(uri, **filters)
@@ -60,8 +64,9 @@
def create_bulk_subnets(self, **kwargs):
"""Create multiple subnets in a single request.
- Available params: see http://developer.openstack.org/
- api-ref-networking-v2.html#bulkCreateSubnet
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/networking/v2/index.html#bulk-create-subnet
"""
uri = '/subnets'
return self.create_resource(uri, kwargs)
diff --git a/tempest/lib/services/network/versions_client.py b/tempest/lib/services/network/versions_client.py
index 0202927..a9c3bbf 100644
--- a/tempest/lib/services/network/versions_client.py
+++ b/tempest/lib/services/network/versions_client.py
@@ -35,6 +35,7 @@
start = time.time()
self._log_request_start('GET', version_url)
response, body = self.raw_request(version_url, 'GET')
+ self._error_checker(response, body)
end = time.time()
self._log_request('GET', version_url, response,
secs=(end - start), resp_body=body)
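
The version listing builds its request by hand with ``raw_request`` instead of
going through ``RestClient.request``, so until now an error status from the
unversioned endpoint was returned as an ordinary body. Feeding the response
through ``_error_checker`` makes such failures raise the usual ``tempest.lib``
exceptions. A minimal sketch of the effect on a caller, assuming the
surrounding method is the client's ``list_versions`` and using an illustrative
exception type::

    from tempest.lib import exceptions as lib_exc

    try:
        versions = network_versions_client.list_versions()
    except lib_exc.ServerFault:
        # With _error_checker in place, a 5xx answer from the endpoint now
        # surfaces as an exception instead of being parsed as a body.
        raise
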
diff --git a/tempest/api/data_processing/__init__.py b/tempest/lib/services/volume/__init__.py
similarity index 100%
rename from tempest/api/data_processing/__init__.py
rename to tempest/lib/services/volume/__init__.py
diff --git a/tempest/api/data_processing/__init__.py b/tempest/lib/services/volume/v1/__init__.py
similarity index 100%
copy from tempest/api/data_processing/__init__.py
copy to tempest/lib/services/volume/v1/__init__.py
diff --git a/tempest/services/volume/base/base_availability_zone_client.py b/tempest/lib/services/volume/v1/availability_zone_client.py
similarity index 90%
rename from tempest/services/volume/base/base_availability_zone_client.py
rename to tempest/lib/services/volume/v1/availability_zone_client.py
index 1c2deba..be4f539 100644
--- a/tempest/services/volume/base/base_availability_zone_client.py
+++ b/tempest/lib/services/volume/v1/availability_zone_client.py
@@ -18,7 +18,8 @@
from tempest.lib.common import rest_client
-class BaseAvailabilityZoneClient(rest_client.RestClient):
+class AvailabilityZoneClient(rest_client.RestClient):
+ """Volume V1 availability zone client."""
def list_availability_zones(self):
resp, body = self.get('os-availability-zone')
diff --git a/tempest/services/volume/base/base_backups_client.py b/tempest/lib/services/volume/v1/backups_client.py
similarity index 65%
copy from tempest/services/volume/base/base_backups_client.py
copy to tempest/lib/services/volume/v1/backups_client.py
index 3842d66..2728c67 100644
--- a/tempest/services/volume/base/base_backups_client.py
+++ b/tempest/lib/services/volume/v1/backups_client.py
@@ -13,17 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_serialization import jsonutils as json
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
-class BaseBackupsClient(rest_client.RestClient):
- """Client class to send CRUD Volume backup API requests"""
+class BackupsClient(rest_client.RestClient):
+ """Volume V1 Backups client"""
+ api_version = "v1"
def create_backup(self, **kwargs):
"""Creates a backup of volume.
@@ -51,13 +49,13 @@
def delete_backup(self, backup_id):
"""Delete a backup of volume."""
- resp, body = self.delete('backups/%s' % (str(backup_id)))
+ resp, body = self.delete('backups/%s' % backup_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def show_backup(self, backup_id):
"""Returns the details of a single backup."""
- url = "backups/%s" % str(backup_id)
+ url = "backups/%s" % backup_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -89,34 +87,16 @@
self.expected_success(201, resp.status)
return rest_client.ResponseBody(resp, body)
- def wait_for_backup_status(self, backup_id, status):
- """Waits for a Backup to reach a given status."""
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- start = int(time.time())
+ def reset_backup_status(self, backup_id, status):
+ """Reset the specified backup's status."""
+ post_body = json.dumps({'os-reset_status': {"status": status}})
+ resp, body = self.post('backups/%s/action' % backup_id, post_body)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
- while backup_status != status:
- time.sleep(self.build_interval)
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- if backup_status == 'error':
- raise exceptions.VolumeBackupException(backup_id=backup_id)
-
- if int(time.time()) - start >= self.build_timeout:
- message = ('Volume backup %s failed to reach %s status '
- '(current %s) within the required time (%s s).' %
- (backup_id, status, backup_status,
- self.build_timeout))
- raise exceptions.TimeoutException(message)
-
- def wait_for_backup_deletion(self, backup_id):
- """Waits for backup deletion"""
- start_time = int(time.time())
- while True:
- try:
- self.show_backup(backup_id)
- except lib_exc.NotFound:
- return
- if int(time.time()) - start_time >= self.build_timeout:
- raise exceptions.TimeoutException
- time.sleep(self.build_interval)
+ def is_resource_deleted(self, id):
+ try:
+ self.show_backup(id)
+ except lib_exc.NotFound:
+ return True
+ return False
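
With the hand-rolled ``wait_for_backup_status``/``wait_for_backup_deletion``
loops removed, waiting moves out of the client: deletion waits go through the
generic ``wait_for_resource_deletion`` helper on ``rest_client.RestClient``
(which polls the new ``is_resource_deleted``), and status waits are left to
the callers. A minimal sketch of the replacement, assuming an already
wired-up client instance::

    from tempest.lib.common.utils import test_utils

    # Deletion: the base class polls is_resource_deleted() until it is True.
    backups_client.delete_backup(backup_id)
    backups_client.wait_for_resource_deletion(backup_id)

    # Status: poll show_backup() with the generic helper.
    def _backup_available():
        body = backups_client.show_backup(backup_id)['backup']
        return body['status'] == 'available'

    if not test_utils.call_until_true(_backup_available,
                                      backups_client.build_timeout,
                                      backups_client.build_interval):
        raise RuntimeError('backup %s never became available' % backup_id)
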
diff --git a/tempest/lib/services/volume/v1/encryption_types_client.py b/tempest/lib/services/volume/v1/encryption_types_client.py
new file mode 100755
index 0000000..067b4e8
--- /dev/null
+++ b/tempest/lib/services/volume/v1/encryption_types_client.py
@@ -0,0 +1,68 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.common import rest_client
+from tempest.lib import exceptions as lib_exc
+
+
+class EncryptionTypesClient(rest_client.RestClient):
+
+ def is_resource_deleted(self, id):
+ try:
+ body = self.show_encryption_type(id)
+ if not body:
+ return True
+ except lib_exc.NotFound:
+ return True
+ return False
+
+ @property
+ def resource_type(self):
+ """Returns the primary type of resource this client works with."""
+ return 'encryption-type'
+
+ def show_encryption_type(self, volume_type_id):
+ """Get the volume encryption type for the specified volume type.
+
+ volume_type_id: Id of volume_type.
+ """
+ url = "/types/%s/encryption" % volume_type_id
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_encryption_type(self, volume_type_id, **kwargs):
+ """Create encryption type.
+
+        TODO: The api-site doesn't currently describe this API.
+        Once the api-site is fixed, add the link to it here as well.

+ """
+ url = "/types/%s/encryption" % volume_type_id
+ post_body = json.dumps({'encryption': kwargs})
+ resp, body = self.post(url, post_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_encryption_type(self, volume_type_id):
+ """Delete the encryption type for the specified volume-type."""
+ resp, body = self.delete(
+ "/types/%s/encryption/provider" % volume_type_id)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
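
A brief usage sketch for the new client; the provider and control-location
values are illustrative and the client instance is assumed to be set up
elsewhere::

    created = encryption_types_client.create_encryption_type(
        volume_type_id,
        provider='nova.volume.encryptors.luks.LuksEncryptor',
        control_location='front-end')['encryption']
    shown = encryption_types_client.show_encryption_type(volume_type_id)
    # fields such as 'provider' are available directly on the returned body
    encryption_types_client.delete_encryption_type(volume_type_id)
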
diff --git a/tempest/services/volume/base/base_extensions_client.py b/tempest/lib/services/volume/v1/extensions_client.py
similarity index 91%
rename from tempest/services/volume/base/base_extensions_client.py
rename to tempest/lib/services/volume/v1/extensions_client.py
index b90fe94..7b849a8 100644
--- a/tempest/services/volume/base/base_extensions_client.py
+++ b/tempest/lib/services/volume/v1/extensions_client.py
@@ -18,7 +18,8 @@
from tempest.lib.common import rest_client
-class BaseExtensionsClient(rest_client.RestClient):
+class ExtensionsClient(rest_client.RestClient):
+ """Volume V1 extensions client."""
def list_extensions(self):
url = 'extensions'
diff --git a/tempest/services/volume/base/admin/base_hosts_client.py b/tempest/lib/services/volume/v1/hosts_client.py
similarity index 90%
rename from tempest/services/volume/base/admin/base_hosts_client.py
rename to tempest/lib/services/volume/v1/hosts_client.py
index 382e9a8..56ba12c 100644
--- a/tempest/services/volume/base/admin/base_hosts_client.py
+++ b/tempest/lib/services/volume/v1/hosts_client.py
@@ -19,8 +19,8 @@
from tempest.lib.common import rest_client
-class BaseHostsClient(rest_client.RestClient):
- """Client class to send CRUD Volume Hosts API requests"""
+class HostsClient(rest_client.RestClient):
+ """Client class to send CRUD Volume Host API V1 requests"""
def list_hosts(self, **params):
"""Lists all hosts."""
diff --git a/tempest/services/volume/base/base_qos_client.py b/tempest/lib/services/volume/v1/qos_client.py
similarity index 69%
rename from tempest/services/volume/base/base_qos_client.py
rename to tempest/lib/services/volume/v1/qos_client.py
index 2d9f02a..65ae274 100644
--- a/tempest/services/volume/base/base_qos_client.py
+++ b/tempest/lib/services/volume/v1/qos_client.py
@@ -12,17 +12,19 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_serialization import jsonutils as json
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
-class BaseQosSpecsClient(rest_client.RestClient):
- """Client class to send CRUD QoS API requests"""
+class QosSpecsClient(rest_client.RestClient):
+ """Volume V1 QoS client.
+
+ Client class to send CRUD QoS API requests
+ """
+
+ api_version = "v1"
def is_resource_deleted(self, qos_id):
try:
@@ -36,37 +38,6 @@
"""Returns the primary type of resource this client works with."""
return 'qos'
- def wait_for_qos_operations(self, qos_id, operation, args=None):
- """Waits for a qos operations to be completed.
-
- NOTE : operation value is required for wait_for_qos_operations()
- operation = 'qos-key' / 'disassociate' / 'disassociate-all'
- args = keys[] when operation = 'qos-key'
- args = volume-type-id disassociated when operation = 'disassociate'
- args = None when operation = 'disassociate-all'
- """
- start_time = int(time.time())
- while True:
- if operation == 'qos-key-unset':
- body = self.show_qos(qos_id)['qos_specs']
- if not any(key in body['specs'] for key in args):
- return
- elif operation == 'disassociate':
- body = self.show_association_qos(qos_id)['qos_associations']
- if not any(args in body[i]['id'] for i in range(0, len(body))):
- return
- elif operation == 'disassociate-all':
- body = self.show_association_qos(qos_id)['qos_associations']
- if not body:
- return
- else:
- msg = (" operation value is either not defined or incorrect.")
- raise lib_exc.UnprocessableEntity(msg)
-
- if int(time.time()) - start_time >= self.build_timeout:
- raise exceptions.TimeoutException
- time.sleep(self.build_interval)
-
def create_qos(self, **kwargs):
"""Create a QoS Specification.
@@ -82,7 +53,7 @@
def delete_qos(self, qos_id, force=False):
"""Delete the specified QoS specification."""
resp, body = self.delete(
- "qos-specs/%s?force=%s" % (str(qos_id), force))
+ "qos-specs/%s?force=%s" % (qos_id, force))
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -96,7 +67,7 @@
def show_qos(self, qos_id):
"""Get the specified QoS specification."""
- url = "qos-specs/%s" % str(qos_id)
+ url = "qos-specs/%s" % qos_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -128,7 +99,7 @@
def associate_qos(self, qos_id, vol_type_id):
"""Associate the specified QoS with specified volume-type."""
- url = "qos-specs/%s/associate" % str(qos_id)
+ url = "qos-specs/%s/associate" % qos_id
url += "?vol_type_id=%s" % vol_type_id
resp, body = self.get(url)
self.expected_success(202, resp.status)
@@ -136,7 +107,7 @@
def show_association_qos(self, qos_id):
"""Get the association of the specified QoS specification."""
- url = "qos-specs/%s/associations" % str(qos_id)
+ url = "qos-specs/%s/associations" % qos_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -144,7 +115,7 @@
def disassociate_qos(self, qos_id, vol_type_id):
"""Disassociate the specified QoS with specified volume-type."""
- url = "qos-specs/%s/disassociate" % str(qos_id)
+ url = "qos-specs/%s/disassociate" % qos_id
url += "?vol_type_id=%s" % vol_type_id
resp, body = self.get(url)
self.expected_success(202, resp.status)
@@ -152,7 +123,7 @@
def disassociate_all_qos(self, qos_id):
"""Disassociate the specified QoS with all associations."""
- url = "qos-specs/%s/disassociate_all" % str(qos_id)
+ url = "qos-specs/%s/disassociate_all" % qos_id
resp, body = self.get(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
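
The removed ``wait_for_qos_operations`` polling loop becomes the callers'
responsibility; it can be rebuilt with the generic polling helper in
``tempest.lib``. A sketch for the ``disassociate-all`` case, assuming a
wired-up client instance::

    from tempest.lib.common.utils import test_utils

    qos_client.disassociate_all_qos(qos_id)

    def _no_associations():
        body = qos_client.show_association_qos(qos_id)['qos_associations']
        return not body

    test_utils.call_until_true(_no_associations,
                               qos_client.build_timeout,
                               qos_client.build_interval)
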
diff --git a/tempest/services/volume/base/admin/base_quotas_client.py b/tempest/lib/services/volume/v1/quotas_client.py
similarity index 82%
rename from tempest/services/volume/base/admin/base_quotas_client.py
rename to tempest/lib/services/volume/v1/quotas_client.py
index 83816f2..678fd82 100644
--- a/tempest/services/volume/base/admin/base_quotas_client.py
+++ b/tempest/lib/services/volume/v1/quotas_client.py
@@ -18,10 +18,8 @@
from tempest.lib.common import rest_client
-class BaseQuotasClient(rest_client.RestClient):
- """Client class to send CRUD Volume Quotas API requests"""
-
- TYPE = "json"
+class QuotasClient(rest_client.RestClient):
+ """Client class to send CRUD Volume Quotas API V1 requests"""
def show_default_quota_set(self, tenant_id):
"""List the default volume quota set for a tenant."""
@@ -44,17 +42,12 @@
body = jsonutils.loads(body)
return rest_client.ResponseBody(resp, body)
- def show_quota_usage(self, tenant_id):
- """List the quota set for a tenant."""
-
- body = self.show_quota_set(tenant_id, params={'usage': True})
- return body
-
def update_quota_set(self, tenant_id, **kwargs):
"""Updates quota set
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#updateQuotas-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-blockstorage-v1.html#updateQuota
"""
put_body = jsonutils.dumps({'quota_set': kwargs})
resp, body = self.put('os-quota-sets/%s' % tenant_id, put_body)
diff --git a/tempest/services/volume/base/admin/base_services_client.py b/tempest/lib/services/volume/v1/services_client.py
similarity index 91%
rename from tempest/services/volume/base/admin/base_services_client.py
rename to tempest/lib/services/volume/v1/services_client.py
index 861eb92..d438a34 100644
--- a/tempest/services/volume/base/admin/base_services_client.py
+++ b/tempest/lib/services/volume/v1/services_client.py
@@ -19,7 +19,8 @@
from tempest.lib.common import rest_client
-class BaseServicesClient(rest_client.RestClient):
+class ServicesClient(rest_client.RestClient):
+ """Volume V1 volume services client"""
def list_services(self, **params):
url = 'os-services'
diff --git a/tempest/services/volume/base/base_snapshots_client.py b/tempest/lib/services/volume/v1/snapshots_client.py
old mode 100755
new mode 100644
similarity index 85%
copy from tempest/services/volume/base/base_snapshots_client.py
copy to tempest/lib/services/volume/v1/snapshots_client.py
index 7a8e12b..1881078
--- a/tempest/services/volume/base/base_snapshots_client.py
+++ b/tempest/lib/services/volume/v1/snapshots_client.py
@@ -17,8 +17,8 @@
from tempest.lib import exceptions as lib_exc
-class BaseSnapshotsClient(rest_client.RestClient):
- """Base Client class to send CRUD Volume API requests."""
+class SnapshotsClient(rest_client.RestClient):
+ """Client class to send CRUD Volume V1 API requests."""
create_resp = 200
@@ -26,7 +26,7 @@
"""List all the snapshot.
Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#listSnapshots
+ api-ref-blockstorage-v1.html#listSnapshots
"""
url = 'snapshots'
if detail:
@@ -43,9 +43,9 @@
"""Returns the details of a single snapshot.
Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#showSnapshot
+ api-ref-blockstorage-v1.html#showSnapshot
"""
- url = "snapshots/%s" % str(snapshot_id)
+ url = "snapshots/%s" % snapshot_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -55,7 +55,7 @@
"""Creates a new snapshot.
Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#createSnapshot
+ api-ref-blockstorage-v1.html#createSnapshot
"""
post_body = json.dumps({'snapshot': kwargs})
resp, body = self.post('snapshots', post_body)
@@ -63,25 +63,13 @@
self.expected_success(self.create_resp, resp.status)
return rest_client.ResponseBody(resp, body)
- def update_snapshot(self, snapshot_id, **kwargs):
- """Updates a snapshot.
-
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#updateSnapshot
- """
- put_body = json.dumps({'snapshot': kwargs})
- resp, body = self.put('snapshots/%s' % snapshot_id, put_body)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def delete_snapshot(self, snapshot_id):
"""Delete Snapshot.
Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#deleteSnapshot
+ api-ref-blockstorage-v1.html#deleteSnapshot
"""
- resp, body = self.delete("snapshots/%s" % str(snapshot_id))
+ resp, body = self.delete("snapshots/%s" % snapshot_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -112,7 +100,7 @@
# Bug https://bugs.launchpad.net/openstack-api-site/+bug/1532645
post_body = json.dumps({'os-update_snapshot_status': kwargs})
- url = 'snapshots/%s/action' % str(snapshot_id)
+ url = 'snapshots/%s/action' % snapshot_id
resp, body = self.post(url, post_body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -120,20 +108,33 @@
def create_snapshot_metadata(self, snapshot_id, metadata):
"""Create metadata for the snapshot."""
put_body = json.dumps({'metadata': metadata})
- url = "snapshots/%s/metadata" % str(snapshot_id)
+ url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.post(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
+ def update_snapshot(self, snapshot_id, **kwargs):
+ """Updates a snapshot.
+
+        Available params: see http://developer.openstack.org/
+                              api-ref-blockstorage-v1.html#updateSnapshot
+ """
+ put_body = json.dumps({'snapshot': kwargs})
+ resp, body = self.put('snapshots/%s' % snapshot_id, put_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
def show_snapshot_metadata(self, snapshot_id):
"""Get metadata of the snapshot.
Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#
+ api-ref-blockstorage-v1.html#
showSnapshotMetadata
"""
- url = "snapshots/%s/metadata" % str(snapshot_id)
+ url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -143,11 +144,11 @@
"""Update metadata for the snapshot.
Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#
+ api-ref-blockstorage-v1.html#
updateSnapshotMetadata
"""
put_body = json.dumps(kwargs)
- url = "snapshots/%s/metadata" % str(snapshot_id)
+ url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -160,7 +161,7 @@
# link to api-site.
# LP: https://bugs.launchpad.net/openstack-api-site/+bug/1529064
put_body = json.dumps(kwargs)
- url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
+ url = "snapshots/%s/metadata/%s" % (snapshot_id, id)
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -168,7 +169,7 @@
def delete_snapshot_metadata_item(self, snapshot_id, id):
"""Delete metadata item for the snapshot."""
- url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
+ url = "snapshots/%s/metadata/%s" % (snapshot_id, id)
resp, body = self.delete(url)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/lib/services/volume/v1/types_client.py b/tempest/lib/services/volume/v1/types_client.py
new file mode 100644
index 0000000..dce728d
--- /dev/null
+++ b/tempest/lib/services/volume/v1/types_client.py
@@ -0,0 +1,160 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_serialization import jsonutils as json
+from six.moves.urllib import parse as urllib
+
+from tempest.lib.common import rest_client
+from tempest.lib import exceptions as lib_exc
+
+
+class TypesClient(rest_client.RestClient):
+ """Client class to send CRUD Volume Types API requests"""
+
+ def is_resource_deleted(self, id):
+ try:
+ self.show_volume_type(id)
+ except lib_exc.NotFound:
+ return True
+ return False
+
+ @property
+ def resource_type(self):
+ """Returns the primary type of resource this client works with."""
+ return 'volume-type'
+
+ def list_volume_types(self, **params):
+ """List all the volume_types created.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v1.html#listVolumeTypes
+ """
+ url = 'types'
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def show_volume_type(self, volume_type_id):
+ """Returns the details of a single volume_type.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v1.html#showVolumeType
+ """
+ url = "types/%s" % volume_type_id
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_volume_type(self, **kwargs):
+ """Create volume type.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v1.html#createVolumeType
+ """
+ post_body = json.dumps({'volume_type': kwargs})
+ resp, body = self.post('types', post_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_volume_type(self, volume_type_id):
+ """Deletes the Specified Volume_type.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v1.html#deleteVolumeType
+ """
+ resp, body = self.delete("types/%s" % volume_type_id)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_volume_types_extra_specs(self, volume_type_id, **params):
+ """List all the volume_types extra specs created.
+
+        TODO: The api-site doesn't currently describe this API.
+        Once the api-site is fixed, add the link to it here as well.
+ """
+ url = 'types/%s/extra_specs' % volume_type_id
+ if params:
+ url += '?%s' % urllib.urlencode(params)
+
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def show_volume_type_extra_specs(self, volume_type_id, extra_specs_name):
+ """Returns the details of a single volume_type extra spec."""
+ url = "types/%s/extra_specs/%s" % (volume_type_id, extra_specs_name)
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_volume_type_extra_specs(self, volume_type_id, extra_specs):
+ """Creates a new Volume_type extra spec.
+
+ volume_type_id: Id of volume_type.
+ extra_specs: A dictionary of values to be used as extra_specs.
+ """
+ url = "types/%s/extra_specs" % volume_type_id
+ post_body = json.dumps({'extra_specs': extra_specs})
+ resp, body = self.post(url, post_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_volume_type_extra_specs(self, volume_type_id, extra_spec_name):
+ """Deletes the Specified Volume_type extra spec."""
+ resp, body = self.delete("types/%s/extra_specs/%s" % (
+ volume_type_id, extra_spec_name))
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_volume_type(self, volume_type_id, **kwargs):
+ """Updates volume type name, description, and/or is_public.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v2.html#updateVolumeType
+ """
+ put_body = json.dumps({'volume_type': kwargs})
+ resp, body = self.put('types/%s' % volume_type_id, put_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_volume_type_extra_specs(self, volume_type_id, extra_spec_name,
+ extra_specs):
+ """Update a volume_type extra spec.
+
+ volume_type_id: Id of volume_type.
+ extra_spec_name: Name of the extra spec to be updated.
+        extra_specs: A dictionary with extra_spec_name as the key and the
+        updated value as its value.
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v2.html#
+ updateVolumeTypeExtraSpecs
+ """
+ url = "types/%s/extra_specs/%s" % (volume_type_id, extra_spec_name)
+ put_body = json.dumps(extra_specs)
+ resp, body = self.put(url, put_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
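
A brief usage sketch for the extra-spec calls defined above (the spec key and
values are illustrative)::

    vol_type = types_client.create_volume_type(name='fast')['volume_type']
    types_client.create_volume_type_extra_specs(
        vol_type['id'], {'volume_backend_name': 'lvm'})
    specs = types_client.list_volume_types_extra_specs(
        vol_type['id'])['extra_specs']
    types_client.update_volume_type_extra_specs(
        vol_type['id'], 'volume_backend_name',
        {'volume_backend_name': 'ceph'})
    types_client.delete_volume_type_extra_specs(
        vol_type['id'], 'volume_backend_name')
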
diff --git a/tempest/services/volume/base/base_volumes_client.py b/tempest/lib/services/volume/v1/volumes_client.py
old mode 100755
new mode 100644
similarity index 77%
copy from tempest/services/volume/base/base_volumes_client.py
copy to tempest/lib/services/volume/v1/volumes_client.py
index d694c53..3df8da4
--- a/tempest/services/volume/base/base_volumes_client.py
+++ b/tempest/lib/services/volume/v1/volumes_client.py
@@ -21,21 +21,9 @@
from tempest.lib import exceptions as lib_exc
-class BaseVolumesClient(rest_client.RestClient):
+class VolumesClient(rest_client.RestClient):
"""Base client class to send CRUD Volume API requests"""
- create_resp = 200
-
- def __init__(self, auth_provider, service, region,
- default_volume_size=1, **kwargs):
- super(BaseVolumesClient, self).__init__(
- auth_provider, service, region, **kwargs)
- self.default_volume_size = default_volume_size
-
- def get_attachment_from_volume(self, volume):
- """Return the element 'attachment' from input volumes."""
- return volume['attachments'][0]
-
def _prepare_params(self, params):
"""Prepares params for use in get or _ext_get methods.
@@ -62,20 +50,9 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def show_pools(self, detail=False):
- # List all the volumes pools (hosts)
- url = 'scheduler-stats/get_pools'
- if detail:
- url += '?detail=True'
-
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def show_volume(self, volume_id):
"""Returns the details of a single volume."""
- url = "volumes/%s" % str(volume_id)
+ url = "volumes/%s" % volume_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -87,12 +64,10 @@
Available params: see http://developer.openstack.org/
api-ref-blockstorage-v2.html#createVolume
"""
- if 'size' not in kwargs:
- kwargs['size'] = self.default_volume_size
post_body = json.dumps({'volume': kwargs})
resp, body = self.post('volumes', post_body)
body = json.loads(body)
- self.expected_success(self.create_resp, resp.status)
+ self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def update_volume(self, volume_id, **kwargs):
@@ -109,7 +84,7 @@
def delete_volume(self, volume_id):
"""Deletes the Specified Volume."""
- resp, body = self.delete("volumes/%s" % str(volume_id))
+ resp, body = self.delete("volumes/%s" % volume_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -201,22 +176,6 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
- def volume_begin_detaching(self, volume_id):
- """Volume Begin Detaching."""
- # ref cinder/api/contrib/volume_actions.py#L158
- post_body = json.dumps({'os-begin_detaching': {}})
- resp, body = self.post('volumes/%s/action' % volume_id, post_body)
- self.expected_success(202, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def volume_roll_detaching(self, volume_id):
- """Volume Roll Detaching."""
- # cinder/api/contrib/volume_actions.py#L170
- post_body = json.dumps({'os-roll_detaching': {}})
- resp, body = self.post('volumes/%s/action' % volume_id, post_body)
- self.expected_success(202, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def create_volume_transfer(self, **kwargs):
"""Create a volume transfer.
@@ -231,7 +190,7 @@
def show_volume_transfer(self, transfer_id):
"""Returns the details of a volume transfer."""
- url = "os-volume-transfer/%s" % str(transfer_id)
+ url = "os-volume-transfer/%s" % transfer_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -253,7 +212,7 @@
def delete_volume_transfer(self, transfer_id):
"""Delete a volume transfer."""
- resp, body = self.delete("os-volume-transfer/%s" % str(transfer_id))
+ resp, body = self.delete("os-volume-transfer/%s" % transfer_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -288,7 +247,7 @@
def create_volume_metadata(self, volume_id, metadata):
"""Create metadata for the volume."""
put_body = json.dumps({'metadata': metadata})
- url = "volumes/%s/metadata" % str(volume_id)
+ url = "volumes/%s/metadata" % volume_id
resp, body = self.post(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -296,7 +255,7 @@
def show_volume_metadata(self, volume_id):
"""Get metadata of the volume."""
- url = "volumes/%s/metadata" % str(volume_id)
+ url = "volumes/%s/metadata" % volume_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -305,7 +264,7 @@
def update_volume_metadata(self, volume_id, metadata):
"""Update metadata for the volume."""
put_body = json.dumps({'metadata': metadata})
- url = "volumes/%s/metadata" % str(volume_id)
+ url = "volumes/%s/metadata" % volume_id
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -314,7 +273,7 @@
def update_volume_metadata_item(self, volume_id, id, meta_item):
"""Update metadata item for the volume."""
put_body = json.dumps({'meta': meta_item})
- url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
+ url = "volumes/%s/metadata/%s" % (volume_id, id)
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -322,33 +281,11 @@
def delete_volume_metadata_item(self, volume_id, id):
"""Delete metadata item for the volume."""
- url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
+ url = "volumes/%s/metadata/%s" % (volume_id, id)
resp, body = self.delete(url)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def update_volume_image_metadata(self, volume_id, **kwargs):
- """Update image metadata for the volume.
-
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html
- #setVolumeimagemetadata
- """
- post_body = json.dumps({'os-set_image_metadata': {'metadata': kwargs}})
- url = "volumes/%s/action" % (volume_id)
- resp, body = self.post(url, post_body)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def delete_volume_image_metadata(self, volume_id, key_name):
- """Delete image metadata item for the volume."""
- post_body = json.dumps({'os-unset_image_metadata': {'key': key_name}})
- url = "volumes/%s/action" % (volume_id)
- resp, body = self.post(url, post_body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def retype_volume(self, volume_id, **kwargs):
"""Updates volume with new volume type."""
post_body = json.dumps({'os-retype': kwargs})
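
Dropping ``default_volume_size`` (and the per-client ``create_resp`` knob)
means the stable library client no longer guesses a size: callers pass it
explicitly, typically from configuration. A sketch with an assumed client
instance and a hard-coded size standing in for ``CONF.volume.volume_size``::

    volume = volumes_client.create_volume(
        size=1, display_name='tempest-volume')['volume']
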
diff --git a/tempest/api_schema/request/compute/v2/__init__.py b/tempest/lib/services/volume/v2/__init__.py
similarity index 100%
rename from tempest/api_schema/request/compute/v2/__init__.py
rename to tempest/lib/services/volume/v2/__init__.py
diff --git a/tempest/services/volume/base/base_availability_zone_client.py b/tempest/lib/services/volume/v2/availability_zone_client.py
similarity index 89%
copy from tempest/services/volume/base/base_availability_zone_client.py
copy to tempest/lib/services/volume/v2/availability_zone_client.py
index 1c2deba..bb4a357 100644
--- a/tempest/services/volume/base/base_availability_zone_client.py
+++ b/tempest/lib/services/volume/v2/availability_zone_client.py
@@ -1,4 +1,4 @@
-# Copyright 2014 NEC Corporation.
+# Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -18,7 +18,8 @@
from tempest.lib.common import rest_client
-class BaseAvailabilityZoneClient(rest_client.RestClient):
+class AvailabilityZoneClient(rest_client.RestClient):
+ api_version = "v2"
def list_availability_zones(self):
resp, body = self.get('os-availability-zone')
diff --git a/tempest/services/volume/base/base_backups_client.py b/tempest/lib/services/volume/v2/backups_client.py
similarity index 65%
rename from tempest/services/volume/base/base_backups_client.py
rename to tempest/lib/services/volume/v2/backups_client.py
index 3842d66..61f865d 100644
--- a/tempest/services/volume/base/base_backups_client.py
+++ b/tempest/lib/services/volume/v2/backups_client.py
@@ -13,17 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_serialization import jsonutils as json
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
-class BaseBackupsClient(rest_client.RestClient):
- """Client class to send CRUD Volume backup API requests"""
+class BackupsClient(rest_client.RestClient):
+ """Volume V2 Backups client"""
+ api_version = "v2"
def create_backup(self, **kwargs):
"""Creates a backup of volume.
@@ -51,13 +49,13 @@
def delete_backup(self, backup_id):
"""Delete a backup of volume."""
- resp, body = self.delete('backups/%s' % (str(backup_id)))
+ resp, body = self.delete('backups/%s' % backup_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def show_backup(self, backup_id):
"""Returns the details of a single backup."""
- url = "backups/%s" % str(backup_id)
+ url = "backups/%s" % backup_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -89,34 +87,16 @@
self.expected_success(201, resp.status)
return rest_client.ResponseBody(resp, body)
- def wait_for_backup_status(self, backup_id, status):
- """Waits for a Backup to reach a given status."""
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- start = int(time.time())
+ def reset_backup_status(self, backup_id, status):
+ """Reset the specified backup's status."""
+ post_body = json.dumps({'os-reset_status': {"status": status}})
+ resp, body = self.post('backups/%s/action' % backup_id, post_body)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
- while backup_status != status:
- time.sleep(self.build_interval)
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- if backup_status == 'error':
- raise exceptions.VolumeBackupException(backup_id=backup_id)
-
- if int(time.time()) - start >= self.build_timeout:
- message = ('Volume backup %s failed to reach %s status '
- '(current %s) within the required time (%s s).' %
- (backup_id, status, backup_status,
- self.build_timeout))
- raise exceptions.TimeoutException(message)
-
- def wait_for_backup_deletion(self, backup_id):
- """Waits for backup deletion"""
- start_time = int(time.time())
- while True:
- try:
- self.show_backup(backup_id)
- except lib_exc.NotFound:
- return
- if int(time.time()) - start_time >= self.build_timeout:
- raise exceptions.TimeoutException
- time.sleep(self.build_interval)
+ def is_resource_deleted(self, id):
+ try:
+ self.show_backup(id)
+ except lib_exc.NotFound:
+ return True
+ return False
diff --git a/tempest/lib/services/volume/v2/encryption_types_client.py b/tempest/lib/services/volume/v2/encryption_types_client.py
new file mode 100755
index 0000000..8b01f11
--- /dev/null
+++ b/tempest/lib/services/volume/v2/encryption_types_client.py
@@ -0,0 +1,69 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.common import rest_client
+from tempest.lib import exceptions as lib_exc
+
+
+class EncryptionTypesClient(rest_client.RestClient):
+ api_version = "v2"
+
+ def is_resource_deleted(self, id):
+ try:
+ body = self.show_encryption_type(id)
+ if not body:
+ return True
+ except lib_exc.NotFound:
+ return True
+ return False
+
+ @property
+ def resource_type(self):
+ """Returns the primary type of resource this client works with."""
+ return 'encryption-type'
+
+ def show_encryption_type(self, volume_type_id):
+ """Get the volume encryption type for the specified volume type.
+
+ volume_type_id: Id of volume_type.
+ """
+ url = "/types/%s/encryption" % volume_type_id
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def create_encryption_type(self, volume_type_id, **kwargs):
+ """Create encryption type.
+
+        TODO: The api-site doesn't currently describe this API.
+        Once the api-site is fixed, add the link to it here as well.
+ """
+ url = "/types/%s/encryption" % volume_type_id
+ post_body = json.dumps({'encryption': kwargs})
+ resp, body = self.post(url, post_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_encryption_type(self, volume_type_id):
+ """Delete the encryption type for the specified volume-type."""
+ resp, body = self.delete(
+ "/types/%s/encryption/provider" % volume_type_id)
+ self.expected_success(202, resp.status)
+ return rest_client.ResponseBody(resp, body)
diff --git a/tempest/services/volume/base/base_extensions_client.py b/tempest/lib/services/volume/v2/extensions_client.py
similarity index 86%
copy from tempest/services/volume/base/base_extensions_client.py
copy to tempest/lib/services/volume/v2/extensions_client.py
index b90fe94..09279d5 100644
--- a/tempest/services/volume/base/base_extensions_client.py
+++ b/tempest/lib/services/volume/v2/extensions_client.py
@@ -1,4 +1,4 @@
-# Copyright 2012 OpenStack Foundation
+# Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -18,7 +18,9 @@
from tempest.lib.common import rest_client
-class BaseExtensionsClient(rest_client.RestClient):
+class ExtensionsClient(rest_client.RestClient):
+ """Volume V2 extensions client."""
+ api_version = "v2"
def list_extensions(self):
url = 'extensions'
diff --git a/tempest/services/volume/base/admin/base_hosts_client.py b/tempest/lib/services/volume/v2/hosts_client.py
similarity index 86%
copy from tempest/services/volume/base/admin/base_hosts_client.py
copy to tempest/lib/services/volume/v2/hosts_client.py
index 382e9a8..dd7c482 100644
--- a/tempest/services/volume/base/admin/base_hosts_client.py
+++ b/tempest/lib/services/volume/v2/hosts_client.py
@@ -1,4 +1,4 @@
-# Copyright 2013 OpenStack Foundation
+# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -19,8 +19,9 @@
from tempest.lib.common import rest_client
-class BaseHostsClient(rest_client.RestClient):
- """Client class to send CRUD Volume Hosts API requests"""
+class HostsClient(rest_client.RestClient):
+ """Client class to send CRUD Volume V2 API requests"""
+ api_version = "v2"
def list_hosts(self, **params):
"""Lists all hosts."""
diff --git a/tempest/services/volume/base/base_qos_client.py b/tempest/lib/services/volume/v2/qos_client.py
similarity index 64%
copy from tempest/services/volume/base/base_qos_client.py
copy to tempest/lib/services/volume/v2/qos_client.py
index 2d9f02a..40d4a3f 100644
--- a/tempest/services/volume/base/base_qos_client.py
+++ b/tempest/lib/services/volume/v2/qos_client.py
@@ -12,17 +12,19 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_serialization import jsonutils as json
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
-class BaseQosSpecsClient(rest_client.RestClient):
- """Client class to send CRUD QoS API requests"""
+class QosSpecsClient(rest_client.RestClient):
+ """Volume V2 QoS client.
+
+ Client class to send CRUD QoS API requests
+ """
+
+ api_version = "v2"
def is_resource_deleted(self, qos_id):
try:
@@ -36,42 +38,14 @@
"""Returns the primary type of resource this client works with."""
return 'qos'
- def wait_for_qos_operations(self, qos_id, operation, args=None):
- """Waits for a qos operations to be completed.
-
- NOTE : operation value is required for wait_for_qos_operations()
- operation = 'qos-key' / 'disassociate' / 'disassociate-all'
- args = keys[] when operation = 'qos-key'
- args = volume-type-id disassociated when operation = 'disassociate'
- args = None when operation = 'disassociate-all'
- """
- start_time = int(time.time())
- while True:
- if operation == 'qos-key-unset':
- body = self.show_qos(qos_id)['qos_specs']
- if not any(key in body['specs'] for key in args):
- return
- elif operation == 'disassociate':
- body = self.show_association_qos(qos_id)['qos_associations']
- if not any(args in body[i]['id'] for i in range(0, len(body))):
- return
- elif operation == 'disassociate-all':
- body = self.show_association_qos(qos_id)['qos_associations']
- if not body:
- return
- else:
- msg = (" operation value is either not defined or incorrect.")
- raise lib_exc.UnprocessableEntity(msg)
-
- if int(time.time()) - start_time >= self.build_timeout:
- raise exceptions.TimeoutException
- time.sleep(self.build_interval)
-
def create_qos(self, **kwargs):
"""Create a QoS Specification.
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#createQoSSpec
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/block-storage/v2/index.html
+ ?expanded=create-qos-specification-detail
+ #quality-of-service-qos-specifications-qos-specs
"""
post_body = json.dumps({'qos_specs': kwargs})
resp, body = self.post('qos-specs', post_body)
@@ -82,7 +56,7 @@
def delete_qos(self, qos_id, force=False):
"""Delete the specified QoS specification."""
resp, body = self.delete(
- "qos-specs/%s?force=%s" % (str(qos_id), force))
+ "qos-specs/%s?force=%s" % (qos_id, force))
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -96,7 +70,7 @@
def show_qos(self, qos_id):
"""Get the specified QoS specification."""
- url = "qos-specs/%s" % str(qos_id)
+ url = "qos-specs/%s" % qos_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -105,8 +79,11 @@
def set_qos_key(self, qos_id, **kwargs):
"""Set the specified keys/values of QoS specification.
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#setQoSKey
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/block-storage/v2/index.html
+ ?expanded=set-keys-in-qos-specification-detail
+ #quality-of-service-qos-specifications-qos-specs
"""
put_body = json.dumps({"qos_specs": kwargs})
resp, body = self.put('qos-specs/%s' % qos_id, put_body)
@@ -119,7 +96,11 @@
:param keys: keys to delete from the QoS specification.
- TODO(jordanP): Add a link once LP #1524877 is fixed.
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/block-storage/v2/index.html
+ ?expanded=unset-keys-in-qos-specification-detail
+ #quality-of-service-qos-specifications-qos-specs
"""
put_body = json.dumps({'keys': keys})
resp, body = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
@@ -128,7 +109,7 @@
def associate_qos(self, qos_id, vol_type_id):
"""Associate the specified QoS with specified volume-type."""
- url = "qos-specs/%s/associate" % str(qos_id)
+ url = "qos-specs/%s/associate" % qos_id
url += "?vol_type_id=%s" % vol_type_id
resp, body = self.get(url)
self.expected_success(202, resp.status)
@@ -136,7 +117,7 @@
def show_association_qos(self, qos_id):
"""Get the association of the specified QoS specification."""
- url = "qos-specs/%s/associations" % str(qos_id)
+ url = "qos-specs/%s/associations" % qos_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -144,7 +125,7 @@
def disassociate_qos(self, qos_id, vol_type_id):
"""Disassociate the specified QoS with specified volume-type."""
- url = "qos-specs/%s/disassociate" % str(qos_id)
+ url = "qos-specs/%s/disassociate" % qos_id
url += "?vol_type_id=%s" % vol_type_id
resp, body = self.get(url)
self.expected_success(202, resp.status)
@@ -152,7 +133,7 @@
def disassociate_all_qos(self, qos_id):
"""Disassociate the specified QoS with all associations."""
- url = "qos-specs/%s/disassociate_all" % str(qos_id)
+ url = "qos-specs/%s/disassociate_all" % qos_id
resp, body = self.get(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/services/volume/base/admin/base_quotas_client.py b/tempest/lib/services/volume/v2/quotas_client.py
similarity index 79%
copy from tempest/services/volume/base/admin/base_quotas_client.py
copy to tempest/lib/services/volume/v2/quotas_client.py
index 83816f2..430957d 100644
--- a/tempest/services/volume/base/admin/base_quotas_client.py
+++ b/tempest/lib/services/volume/v2/quotas_client.py
@@ -1,4 +1,5 @@
-# Copyright (C) 2014 eNovance SAS <licensing@enovance.com>
+# Copyright 2014 OpenStack Foundation
+# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -18,10 +19,9 @@
from tempest.lib.common import rest_client
-class BaseQuotasClient(rest_client.RestClient):
- """Client class to send CRUD Volume Quotas API requests"""
-
- TYPE = "json"
+class QuotasClient(rest_client.RestClient):
+ """Client class to send CRUD Volume Quotas API V2 requests"""
+ api_version = "v2"
def show_default_quota_set(self, tenant_id):
"""List the default volume quota set for a tenant."""
@@ -44,17 +44,12 @@
body = jsonutils.loads(body)
return rest_client.ResponseBody(resp, body)
- def show_quota_usage(self, tenant_id):
- """List the quota set for a tenant."""
-
- body = self.show_quota_set(tenant_id, params={'usage': True})
- return body
-
def update_quota_set(self, tenant_id, **kwargs):
"""Updates quota set
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#updateQuotas-v2
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-blockstorage-v2.html#updateQuota
"""
put_body = jsonutils.dumps({'quota_set': kwargs})
resp, body = self.put('os-quota-sets/%s' % tenant_id, put_body)
diff --git a/tempest/services/volume/base/admin/base_services_client.py b/tempest/lib/services/volume/v2/services_client.py
similarity index 85%
copy from tempest/services/volume/base/admin/base_services_client.py
copy to tempest/lib/services/volume/v2/services_client.py
index 861eb92..bc55469 100644
--- a/tempest/services/volume/base/admin/base_services_client.py
+++ b/tempest/lib/services/volume/v2/services_client.py
@@ -1,4 +1,4 @@
-# Copyright 2014 NEC Corporation
+# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -19,7 +19,9 @@
from tempest.lib.common import rest_client
-class BaseServicesClient(rest_client.RestClient):
+class ServicesClient(rest_client.RestClient):
+ """Client class to send CRUD Volume V2 API requests"""
+ api_version = "v2"
def list_services(self, **params):
url = 'os-services'
diff --git a/tempest/services/volume/base/base_snapshots_client.py b/tempest/lib/services/volume/v2/snapshots_client.py
old mode 100755
new mode 100644
similarity index 91%
rename from tempest/services/volume/base/base_snapshots_client.py
rename to tempest/lib/services/volume/v2/snapshots_client.py
index 7a8e12b..c84e557
--- a/tempest/services/volume/base/base_snapshots_client.py
+++ b/tempest/lib/services/volume/v2/snapshots_client.py
@@ -17,10 +17,10 @@
from tempest.lib import exceptions as lib_exc
-class BaseSnapshotsClient(rest_client.RestClient):
- """Base Client class to send CRUD Volume API requests."""
-
- create_resp = 200
+class SnapshotsClient(rest_client.RestClient):
+ """Client class to send CRUD Volume V2 API requests."""
+ api_version = "v2"
+ create_resp = 202
def list_snapshots(self, detail=False, **params):
"""List all the snapshot.
@@ -45,7 +45,7 @@
Available params: see http://developer.openstack.org/
api-ref-blockstorage-v2.html#showSnapshot
"""
- url = "snapshots/%s" % str(snapshot_id)
+ url = "snapshots/%s" % snapshot_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -81,7 +81,7 @@
Available params: see http://developer.openstack.org/
api-ref-blockstorage-v2.html#deleteSnapshot
"""
- resp, body = self.delete("snapshots/%s" % str(snapshot_id))
+ resp, body = self.delete("snapshots/%s" % snapshot_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -112,7 +112,7 @@
# Bug https://bugs.launchpad.net/openstack-api-site/+bug/1532645
post_body = json.dumps({'os-update_snapshot_status': kwargs})
- url = 'snapshots/%s/action' % str(snapshot_id)
+ url = 'snapshots/%s/action' % snapshot_id
resp, body = self.post(url, post_body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -120,7 +120,7 @@
def create_snapshot_metadata(self, snapshot_id, metadata):
"""Create metadata for the snapshot."""
put_body = json.dumps({'metadata': metadata})
- url = "snapshots/%s/metadata" % str(snapshot_id)
+ url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.post(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -133,7 +133,7 @@
api-ref-blockstorage-v2.html#
showSnapshotMetadata
"""
- url = "snapshots/%s/metadata" % str(snapshot_id)
+ url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -147,7 +147,7 @@
updateSnapshotMetadata
"""
put_body = json.dumps(kwargs)
- url = "snapshots/%s/metadata" % str(snapshot_id)
+ url = "snapshots/%s/metadata" % snapshot_id
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -160,7 +160,7 @@
# link to api-site.
# LP: https://bugs.launchpad.net/openstack-api-site/+bug/1529064
put_body = json.dumps(kwargs)
- url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
+ url = "snapshots/%s/metadata/%s" % (snapshot_id, id)
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -168,7 +168,7 @@
def delete_snapshot_metadata_item(self, snapshot_id, id):
"""Delete metadata item for the snapshot."""
- url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
+ url = "snapshots/%s/metadata/%s" % (snapshot_id, id)
resp, body = self.delete(url)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
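
Apart from ``api_version``, the practical difference from the v1 copy is
``create_resp = 202``: the v2 API answers snapshot creation with 202 Accepted
where v1 answered 200. The knob is consumed exactly as in the ``create_snapshot``
body shown in the v1 hunk above::

    resp, body = self.post('snapshots', post_body)
    body = json.loads(body)
    # 200 for the v1 client, 202 for this v2 client
    self.expected_success(self.create_resp, resp.status)
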
diff --git a/tempest/services/volume/base/admin/base_types_client.py b/tempest/lib/services/volume/v2/types_client.py
old mode 100755
new mode 100644
similarity index 65%
rename from tempest/services/volume/base/admin/base_types_client.py
rename to tempest/lib/services/volume/v2/types_client.py
index afca752..d399e99
--- a/tempest/services/volume/base/admin/base_types_client.py
+++ b/tempest/lib/services/volume/v2/types_client.py
@@ -20,24 +20,13 @@
from tempest.lib import exceptions as lib_exc
-class BaseTypesClient(rest_client.RestClient):
- """Client class to send CRUD Volume Types API requests"""
+class TypesClient(rest_client.RestClient):
+ """Client class to send CRUD Volume V2 API requests"""
+ api_version = "v2"
- def is_resource_deleted(self, resource):
- # to use this method self.resource must be defined to respective value
- # Resource is a dictionary containing resource id and type
- # Resource : {"id" : resource_id
- # "type": resource_type}
+ def is_resource_deleted(self, id):
try:
- if resource['type'] == "volume-type":
- self.show_volume_type(resource['id'])
- elif resource['type'] == "encryption-type":
- body = self.show_encryption_type(resource['id'])
- if not body:
- return True
- else:
- msg = (" resource value is either not defined or incorrect.")
- raise lib_exc.UnprocessableEntity(msg)
+ self.show_volume_type(id)
except lib_exc.NotFound:
return True
return False
@@ -45,7 +34,7 @@
@property
def resource_type(self):
"""Returns the primary type of resource this client works with."""
- return 'volume-type/encryption-type'
+ return 'volume-type'
def list_volume_types(self, **params):
"""List all the volume_types created.
@@ -62,13 +51,13 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def show_volume_type(self, volume_id):
+ def show_volume_type(self, volume_type_id):
"""Returns the details of a single volume_type.
Available params: see http://developer.openstack.org/
api-ref-blockstorage-v2.html#showVolumeType
"""
- url = "types/%s" % str(volume_id)
+ url = "types/%s" % volume_type_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -86,24 +75,24 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def delete_volume_type(self, volume_id):
+ def delete_volume_type(self, volume_type_id):
"""Deletes the Specified Volume_type.
Available params: see http://developer.openstack.org/
api-ref-blockstorage-v2.html#deleteVolumeType
"""
- resp, body = self.delete("types/%s" % str(volume_id))
+ resp, body = self.delete("types/%s" % volume_type_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
- def list_volume_types_extra_specs(self, vol_type_id, **params):
+ def list_volume_types_extra_specs(self, volume_type_id, **params):
"""List all the volume_types extra specs created.
TODO: Current api-site doesn't contain this API description.
After fixing the api-site, we need to fix here also for putting
the link to api-site.
"""
- url = 'types/%s/extra_specs' % str(vol_type_id)
+ url = 'types/%s/extra_specs' % volume_type_id
if params:
url += '?%s' % urllib.urlencode(params)
@@ -112,40 +101,51 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def show_volume_type_extra_specs(self, vol_type_id, extra_specs_name):
+ def show_volume_type_extra_specs(self, volume_type_id, extra_specs_name):
"""Returns the details of a single volume_type extra spec."""
- url = "types/%s/extra_specs/%s" % (str(vol_type_id),
- str(extra_specs_name))
+ url = "types/%s/extra_specs/%s" % (volume_type_id, extra_specs_name)
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def create_volume_type_extra_specs(self, vol_type_id, extra_specs):
+ def create_volume_type_extra_specs(self, volume_type_id, extra_specs):
"""Creates a new Volume_type extra spec.
- vol_type_id: Id of volume_type.
+ volume_type_id: Id of volume_type.
extra_specs: A dictionary of values to be used as extra_specs.
"""
- url = "types/%s/extra_specs" % str(vol_type_id)
+ url = "types/%s/extra_specs" % volume_type_id
post_body = json.dumps({'extra_specs': extra_specs})
resp, body = self.post(url, post_body)
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def delete_volume_type_extra_specs(self, vol_id, extra_spec_name):
+ def delete_volume_type_extra_specs(self, volume_type_id, extra_spec_name):
"""Deletes the Specified Volume_type extra spec."""
resp, body = self.delete("types/%s/extra_specs/%s" % (
- (str(vol_id)), str(extra_spec_name)))
+ volume_type_id, extra_spec_name))
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
- def update_volume_type_extra_specs(self, vol_type_id, extra_spec_name,
+ def update_volume_type(self, volume_type_id, **kwargs):
+ """Updates volume type name, description, and/or is_public.
+
+ Available params: see http://developer.openstack.org/
+ api-ref-blockstorage-v2.html#updateVolumeType
+ """
+ put_body = json.dumps({'volume_type': kwargs})
+ resp, body = self.put('types/%s' % volume_type_id, put_body)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_volume_type_extra_specs(self, volume_type_id, extra_spec_name,
extra_specs):
"""Update a volume_type extra spec.
- vol_type_id: Id of volume_type.
+ volume_type_id: Id of volume_type.
extra_spec_name: Name of the extra spec to be updated.
extra_spec: A dictionary of with key as extra_spec_name and the
updated value.
@@ -153,46 +153,13 @@
api-ref-blockstorage-v2.html#
updateVolumeTypeExtraSpecs
"""
- url = "types/%s/extra_specs/%s" % (str(vol_type_id),
- str(extra_spec_name))
+ url = "types/%s/extra_specs/%s" % (volume_type_id, extra_spec_name)
put_body = json.dumps(extra_specs)
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def show_encryption_type(self, vol_type_id):
- """Get the volume encryption type for the specified volume type.
-
- vol_type_id: Id of volume_type.
- """
- url = "/types/%s/encryption" % str(vol_type_id)
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def create_encryption_type(self, vol_type_id, **kwargs):
- """Create encryption type.
-
- TODO: Current api-site doesn't contain this API description.
- After fixing the api-site, we need to fix here also for putting
- the link to api-site.
- """
- url = "/types/%s/encryption" % str(vol_type_id)
- post_body = json.dumps({'encryption': kwargs})
- resp, body = self.post(url, post_body)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def delete_encryption_type(self, vol_type_id):
- """Delete the encryption type for the specified volume-type."""
- resp, body = self.delete(
- "/types/%s/encryption/provider" % str(vol_type_id))
- self.expected_success(202, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def add_type_access(self, volume_type_id, **kwargs):
"""Adds volume type access for the given project.
@@ -201,7 +168,7 @@
#createVolumeTypeAccessExt
"""
post_body = json.dumps({'addProjectAccess': kwargs})
- url = 'types/%s/action' % (volume_type_id)
+ url = 'types/%s/action' % volume_type_id
resp, body = self.post(url, post_body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -214,7 +181,7 @@
#removeVolumeTypeAccessExt
"""
post_body = json.dumps({'removeProjectAccess': kwargs})
- url = 'types/%s/action' % (volume_type_id)
+ url = 'types/%s/action' % volume_type_id
resp, body = self.post(url, post_body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -226,7 +193,7 @@
api-ref-blockstorage-v2.html#
listVolumeTypeAccessExt
"""
- url = 'types/%s/os-volume-type-access' % (volume_type_id)
+ url = 'types/%s/os-volume-type-access' % volume_type_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
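The simplified is_resource_deleted above only has to catch NotFound from show_volume_type, which is exactly the shape a generic wait-for-deletion loop expects. A minimal, self-contained sketch of that pairing (hypothetical names, not the Tempest RestClient implementation)::

    import time


    class NotFound(Exception):
        """Stand-in for tempest.lib.exceptions.NotFound."""


    class FakeTypesClient(object):
        """Pretends the volume type disappears after a few polls."""

        def __init__(self, deleted_after_calls=3):
            self._calls = 0
            self._deleted_after = deleted_after_calls

        def show_volume_type(self, volume_type_id):
            self._calls += 1
            if self._calls >= self._deleted_after:
                raise NotFound(volume_type_id)
            return {'volume_type': {'id': volume_type_id}}

        def is_resource_deleted(self, volume_type_id):
            # Same structure as the patched method: NotFound means gone.
            try:
                self.show_volume_type(volume_type_id)
            except NotFound:
                return True
            return False


    def wait_for_resource_deletion(client, resource_id, timeout=10, interval=0):
        start = time.time()
        while not client.is_resource_deleted(resource_id):
            if time.time() - start > timeout:
                raise RuntimeError('%s not deleted within %ss'
                                   % (resource_id, timeout))
            time.sleep(interval)


    wait_for_resource_deletion(FakeTypesClient(), 'vt-1234')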
diff --git a/tempest/services/volume/base/base_volumes_client.py b/tempest/lib/services/volume/v2/volumes_client.py
old mode 100755
new mode 100644
similarity index 86%
rename from tempest/services/volume/base/base_volumes_client.py
rename to tempest/lib/services/volume/v2/volumes_client.py
index d694c53..b1930e1
--- a/tempest/services/volume/base/base_volumes_client.py
+++ b/tempest/lib/services/volume/v2/volumes_client.py
@@ -21,20 +21,9 @@
from tempest.lib import exceptions as lib_exc
-class BaseVolumesClient(rest_client.RestClient):
- """Base client class to send CRUD Volume API requests"""
-
- create_resp = 200
-
- def __init__(self, auth_provider, service, region,
- default_volume_size=1, **kwargs):
- super(BaseVolumesClient, self).__init__(
- auth_provider, service, region, **kwargs)
- self.default_volume_size = default_volume_size
-
- def get_attachment_from_volume(self, volume):
- """Return the element 'attachment' from input volumes."""
- return volume['attachments'][0]
+class VolumesClient(rest_client.RestClient):
+ """Client class to send CRUD Volume V2 API requests"""
+ api_version = "v2"
def _prepare_params(self, params):
"""Prepares params for use in get or _ext_get methods.
@@ -62,20 +51,9 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def show_pools(self, detail=False):
- # List all the volumes pools (hosts)
- url = 'scheduler-stats/get_pools'
- if detail:
- url += '?detail=True'
-
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def show_volume(self, volume_id):
"""Returns the details of a single volume."""
- url = "volumes/%s" % str(volume_id)
+ url = "volumes/%s" % volume_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -87,12 +65,10 @@
Available params: see http://developer.openstack.org/
api-ref-blockstorage-v2.html#createVolume
"""
- if 'size' not in kwargs:
- kwargs['size'] = self.default_volume_size
post_body = json.dumps({'volume': kwargs})
resp, body = self.post('volumes', post_body)
body = json.loads(body)
- self.expected_success(self.create_resp, resp.status)
+ self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def update_volume(self, volume_id, **kwargs):
@@ -109,7 +85,7 @@
def delete_volume(self, volume_id):
"""Deletes the Specified Volume."""
- resp, body = self.delete("volumes/%s" % str(volume_id))
+ resp, body = self.delete("volumes/%s" % volume_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -201,22 +177,6 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
- def volume_begin_detaching(self, volume_id):
- """Volume Begin Detaching."""
- # ref cinder/api/contrib/volume_actions.py#L158
- post_body = json.dumps({'os-begin_detaching': {}})
- resp, body = self.post('volumes/%s/action' % volume_id, post_body)
- self.expected_success(202, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def volume_roll_detaching(self, volume_id):
- """Volume Roll Detaching."""
- # cinder/api/contrib/volume_actions.py#L170
- post_body = json.dumps({'os-roll_detaching': {}})
- resp, body = self.post('volumes/%s/action' % volume_id, post_body)
- self.expected_success(202, resp.status)
- return rest_client.ResponseBody(resp, body)
-
def create_volume_transfer(self, **kwargs):
"""Create a volume transfer.
@@ -231,7 +191,7 @@
def show_volume_transfer(self, transfer_id):
"""Returns the details of a volume transfer."""
- url = "os-volume-transfer/%s" % str(transfer_id)
+ url = "os-volume-transfer/%s" % transfer_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -253,7 +213,7 @@
def delete_volume_transfer(self, transfer_id):
"""Delete a volume transfer."""
- resp, body = self.delete("os-volume-transfer/%s" % str(transfer_id))
+ resp, body = self.delete("os-volume-transfer/%s" % transfer_id)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
@@ -288,7 +248,7 @@
def create_volume_metadata(self, volume_id, metadata):
"""Create metadata for the volume."""
put_body = json.dumps({'metadata': metadata})
- url = "volumes/%s/metadata" % str(volume_id)
+ url = "volumes/%s/metadata" % volume_id
resp, body = self.post(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -296,7 +256,7 @@
def show_volume_metadata(self, volume_id):
"""Get metadata of the volume."""
- url = "volumes/%s/metadata" % str(volume_id)
+ url = "volumes/%s/metadata" % volume_id
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -305,7 +265,7 @@
def update_volume_metadata(self, volume_id, metadata):
"""Update metadata for the volume."""
put_body = json.dumps({'metadata': metadata})
- url = "volumes/%s/metadata" % str(volume_id)
+ url = "volumes/%s/metadata" % volume_id
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -314,7 +274,7 @@
def update_volume_metadata_item(self, volume_id, id, meta_item):
"""Update metadata item for the volume."""
put_body = json.dumps({'meta': meta_item})
- url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
+ url = "volumes/%s/metadata/%s" % (volume_id, id)
resp, body = self.put(url, put_body)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -322,11 +282,17 @@
def delete_volume_metadata_item(self, volume_id, id):
"""Delete metadata item for the volume."""
- url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
+ url = "volumes/%s/metadata/%s" % (volume_id, id)
resp, body = self.delete(url)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
+ def retype_volume(self, volume_id, **kwargs):
+ """Updates volume with new volume type."""
+ post_body = json.dumps({'os-retype': kwargs})
+ resp, body = self.post('volumes/%s/action' % volume_id, post_body)
+ self.expected_success(202, resp.status)
+
def update_volume_image_metadata(self, volume_id, **kwargs):
"""Update image metadata for the volume.
@@ -349,8 +315,26 @@
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
- def retype_volume(self, volume_id, **kwargs):
- """Updates volume with new volume type."""
- post_body = json.dumps({'os-retype': kwargs})
- resp, body = self.post('volumes/%s/action' % volume_id, post_body)
- self.expected_success(202, resp.status)
+ def show_pools(self, detail=False):
+ # List all the volume pools (hosts)
+ url = 'scheduler-stats/get_pools'
+ if detail:
+ url += '?detail=True'
+
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
+
+ def show_backend_capabilities(self, host):
+ """Shows capabilities for a storage back end.
+
+ Output params: see http://developer.openstack.org/
+ api-ref-blockstorage-v2.html
+ #showBackendCapabilities
+ """
+ url = 'capabilities/%s' % host
+ resp, body = self.get(url)
+ body = json.loads(body)
+ self.expected_success(200, resp.status)
+ return rest_client.ResponseBody(resp, body)
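With the default_volume_size plumbing removed, create_volume serializes exactly what it is given, so tests have to pass size themselves. A hedged sketch of the request payloads involved (hypothetical helper functions; only the JSON shapes are taken from the client code above)::

    import json


    def build_create_volume_body(**kwargs):
        # create_volume wraps the caller's kwargs under a 'volume' key;
        # no default size is injected any more.
        return json.dumps({'volume': kwargs})


    def build_retype_body(**kwargs):
        # retype_volume posts an 'os-retype' action to volumes/<id>/action.
        return json.dumps({'os-retype': kwargs})


    print(build_create_volume_body(size=1, display_name='vol-demo'))
    print(build_retype_body(new_type='fast', migration_policy='on-demand'))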
diff --git a/tempest/manager.py b/tempest/manager.py
index 3d495b6..e3174d4 100644
--- a/tempest/manager.py
+++ b/tempest/manager.py
@@ -15,15 +15,15 @@
from oslo_log import log as logging
-from tempest import clients
+from tempest import clients as tempest_clients
from tempest import config
-from tempest import service_clients
+from tempest.lib.services import clients
CONF = config.CONF
LOG = logging.getLogger(__name__)
-class Manager(service_clients.ServiceClients):
+class Manager(clients.ServiceClients):
"""Service client manager class for backward compatibility
The former manager.Manager is not a stable interface in Tempest,
@@ -37,7 +37,7 @@
"soon as the client manager becomes available in tempest.lib.")
LOG.warning(msg)
dscv = CONF.identity.disable_ssl_certificate_validation
- _, uri = clients.get_auth_provider_class(credentials)
+ _, uri = tempest_clients.get_auth_provider_class(credentials)
super(Manager, self).__init__(
credentials=credentials, scope=scope,
identity_uri=uri,
@@ -58,5 +58,5 @@
"as such it should not imported directly. It will be removed as "
"the client manager becomes available in tempest.lib.")
LOG.warning(msg)
- return clients.get_auth_provider(credentials=credentials,
- pre_auth=pre_auth, scope=scope)
+ return tempest_clients.get_auth_provider(credentials=credentials,
+ pre_auth=pre_auth, scope=scope)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index f889c44..ab388c2 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -57,7 +57,7 @@
elif CONF.image_feature_enabled.api_v2:
cls.image_client = cls.manager.image_client_v2
else:
- raise exceptions.InvalidConfiguration(
+ raise lib_exc.InvalidConfiguration(
'Either api_v1 or api_v2 must be True in '
'[image-feature-enabled].')
# Compute image client
@@ -80,70 +80,33 @@
cls.security_group_rules_client = (
cls.manager.security_group_rules_client)
- if CONF.volume_feature_enabled.api_v1:
- cls.volumes_client = cls.manager.volumes_client
- cls.snapshots_client = cls.manager.snapshots_client
- else:
+ if CONF.volume_feature_enabled.api_v2:
cls.volumes_client = cls.manager.volumes_v2_client
cls.snapshots_client = cls.manager.snapshots_v2_client
-
- # ## Methods to handle sync and async deletes
-
- def setUp(self):
- super(ScenarioTest, self).setUp()
- self.cleanup_waits = []
- # NOTE(mtreinish) This is safe to do in setUp instead of setUp class
- # because scenario tests in the same test class should not share
- # resources. If resources were shared between test cases then it
- # should be a single scenario test instead of multiples.
-
- # NOTE(yfried): this list is cleaned at the end of test_methods and
- # not at the end of the class
- self.addCleanup(self._wait_for_cleanups)
-
- def addCleanup_with_wait(self, waiter_callable, thing_id, thing_id_param,
- cleanup_callable, cleanup_args=None,
- cleanup_kwargs=None, waiter_client=None):
- """Adds wait for async resource deletion at the end of cleanups
-
- @param waiter_callable: callable to wait for the resource to delete
- with the following waiter_client if specified.
- @param thing_id: the id of the resource to be cleaned-up
- @param thing_id_param: the name of the id param in the waiter
- @param cleanup_callable: method to load pass to self.addCleanup with
- the following *cleanup_args, **cleanup_kwargs.
- usually a delete method.
- """
- if cleanup_args is None:
- cleanup_args = []
- if cleanup_kwargs is None:
- cleanup_kwargs = {}
- self.addCleanup(cleanup_callable, *cleanup_args, **cleanup_kwargs)
- wait_dict = {
- 'waiter_callable': waiter_callable,
- thing_id_param: thing_id
- }
- if waiter_client:
- wait_dict['client'] = waiter_client
- self.cleanup_waits.append(wait_dict)
-
- def _wait_for_cleanups(self):
- # To handle async delete actions, a list of waits is added
- # which will be iterated over as the last step of clearing the
- # cleanup queue. That way all the delete calls are made up front
- # and the tests won't succeed unless the deletes are eventually
- # successful. This is the same basic approach used in the api tests to
- # limit cleanup execution time except here it is multi-resource,
- # because of the nature of the scenario tests.
- for wait in self.cleanup_waits:
- waiter_callable = wait.pop('waiter_callable')
- waiter_callable(**wait)
+ else:
+ cls.volumes_client = cls.manager.volumes_client
+ cls.snapshots_client = cls.manager.snapshots_client
# ## Test functions library
#
# The create_[resource] functions only return body and discard the
# resp part which is not used in scenario tests
+ def _create_port(self, network_id, client=None, namestart='port-quotatest',
+ **kwargs):
+ if not client:
+ client = self.ports_client
+ name = data_utils.rand_name(namestart)
+ result = client.create_port(
+ name=name,
+ network_id=network_id,
+ **kwargs)
+ self.assertIsNotNone(result, 'Unable to allocate port')
+ port = result['port']
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ client.delete_port, port['id'])
+ return port
+
def create_keypair(self, client=None):
if not client:
client = self.keypairs_client
@@ -155,7 +118,7 @@
def create_server(self, name=None, image_id=None, flavor=None,
validatable=False, wait_until=None,
- wait_on_delete=True, clients=None, **kwargs):
+ clients=None, **kwargs):
"""Wrapper utility that returns a test server.
This wrapper utility calls the common create test server and
@@ -183,7 +146,7 @@
# every network
if vnic_type:
ports = []
- networks = []
+
create_port_body = {'binding:vnic_type': vnic_type,
'namestart': 'port-smoke'}
if kwargs:
@@ -204,25 +167,30 @@
if security_groups_ids:
create_port_body[
'security_groups'] = security_groups_ids
- networks = kwargs.pop('networks')
+ networks = kwargs.pop('networks', [])
+ else:
+ networks = []
# If there are no networks passed to us we look up
- # for the project's private networks and create a port
- # if there is only one private network. The same behaviour
- # as we would expect when passing the call to the clients
- # with no networks
+ # the project's private networks and create a port.
+ # This is the same behaviour as we would expect when passing
+ # the call to the clients with no networks.
if not networks:
networks = clients.networks_client.list_networks(
- filters={'router:external': False})
- self.assertEqual(1, len(networks),
- "There is more than one"
- " network for the tenant")
+ **{'router:external': False, 'fields': 'id'})['networks']
+
+ # It's net['uuid'] if networks come from kwargs
+ # and net['id'] if they come from
+ # clients.networks_client.list_networks
for net in networks:
- net_id = net['uuid']
- port = self._create_port(network_id=net_id,
- client=clients.ports_client,
- **create_port_body)
- ports.append({'port': port['id']})
+ net_id = net.get('uuid', net.get('id'))
+ if 'port' not in net:
+ port = self._create_port(network_id=net_id,
+ client=clients.ports_client,
+ **create_port_body)
+ ports.append({'port': port['id']})
+ else:
+ ports.append({'port': net['port']})
if ports:
kwargs['networks'] = ports
self.ports = ports
@@ -236,31 +204,24 @@
name=name, flavor=flavor,
image_id=image_id, **kwargs)
- # TODO(jlanoux) Move wait_on_delete in compute.py
- if wait_on_delete:
- self.addCleanup(waiters.wait_for_server_termination,
- clients.servers_client,
- body['id'])
-
- self.addCleanup_with_wait(
- waiter_callable=waiters.wait_for_server_termination,
- thing_id=body['id'], thing_id_param='server_id',
- cleanup_callable=test_utils.call_and_ignore_notfound_exc,
- cleanup_args=[clients.servers_client.delete_server, body['id']],
- waiter_client=clients.servers_client)
+ self.addCleanup(waiters.wait_for_server_termination,
+ clients.servers_client, body['id'])
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ clients.servers_client.delete_server, body['id'])
server = clients.servers_client.show_server(body['id'])['server']
return server
def create_volume(self, size=None, name=None, snapshot_id=None,
imageRef=None, volume_type=None):
+ if size is None:
+ size = CONF.volume.volume_size
if name is None:
- name = data_utils.rand_name(self.__class__.__name__)
+ name = data_utils.rand_name(self.__class__.__name__ + "-volume")
kwargs = {'display_name': name,
'snapshot_id': snapshot_id,
'imageRef': imageRef,
- 'volume_type': volume_type}
- if size is not None:
- kwargs.update({'size': size})
+ 'volume_type': volume_type,
+ 'size': size}
volume = self.volumes_client.create_volume(**kwargs)['volume']
self.addCleanup(self.volumes_client.wait_for_resource_deletion,
@@ -456,16 +417,17 @@
# Compute client
_images_client = self.compute_images_client
if name is None:
- name = data_utils.rand_name('scenario-snapshot')
+ name = data_utils.rand_name(self.__class__.__name__ + '-snapshot')
LOG.debug("Creating a snapshot image for server: %s", server['name'])
image = _images_client.create_image(server['id'], name=name)
image_id = image.response['location'].split('images/')[1]
waiters.wait_for_image_status(_image_client, image_id, 'active')
- self.addCleanup_with_wait(
- waiter_callable=_image_client.wait_for_resource_deletion,
- thing_id=image_id, thing_id_param='id',
- cleanup_callable=test_utils.call_and_ignore_notfound_exc,
- cleanup_args=[_image_client.delete_image, image_id])
+
+ self.addCleanup(_image_client.wait_for_resource_deletion,
+ image_id)
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ _image_client.delete_image, image_id)
+
if CONF.image_feature_enabled.api_v1:
# In glance v1 the additional properties are stored in the headers.
resp = _image_client.check_image(image_id)
@@ -552,7 +514,7 @@
'should_succeed':
'reachable' if should_succeed else 'unreachable'
})
- result = tempest.test.call_until_true(ping, timeout, 1)
+ result = test_utils.call_until_true(ping, timeout, 1)
LOG.debug('%(caller)s finishes ping %(ip)s in %(timeout)s sec and the '
'ping result is %(result)s' % {
'caller': caller, 'ip': ip_address, 'timeout': timeout,
@@ -660,13 +622,24 @@
# method is creating the floating IP there.
return self.create_floating_ip(server)['ip']
elif CONF.validation.connect_method == 'fixed':
- addresses = server['addresses'][CONF.validation.network_for_ssh]
+ # Determine the network name to look for based on config or creds
+ # provider network resources.
+ if CONF.validation.network_for_ssh:
+ addresses = server['addresses'][
+ CONF.validation.network_for_ssh]
+ else:
+ creds_provider = self._get_credentials_provider()
+ net_creds = creds_provider.get_primary_creds()
+ network = getattr(net_creds, 'network', None)
+ addresses = (server['addresses'][network['name']]
+ if network else [])
for address in addresses:
- if address['version'] == CONF.validation.ip_version_for_ssh:
+ if (address['version'] == CONF.validation.ip_version_for_ssh
+ and address['OS-EXT-IPS:type'] == 'fixed'):
return address['addr']
- raise exceptions.ServerUnreachable()
+ raise exceptions.ServerUnreachable(server_id=server['id'])
else:
- raise exceptions.InvalidConfiguration()
+ raise lib_exc.InvalidConfiguration()
class NetworkScenarioTest(ScenarioTest):
@@ -696,7 +669,8 @@
def _create_network(self, networks_client=None,
routers_client=None, tenant_id=None,
- namestart='network-smoke-'):
+ namestart='network-smoke-',
+ port_security_enabled=True):
if not networks_client:
networks_client = self.networks_client
if not routers_client:
@@ -704,7 +678,12 @@
if not tenant_id:
tenant_id = networks_client.tenant_id
name = data_utils.rand_name(namestart)
- result = networks_client.create_network(name=name, tenant_id=tenant_id)
+ network_kwargs = dict(name=name, tenant_id=tenant_id)
+ # Neutron disables port security by default so we have to check the
+ # config before trying to create the network with port_security_enabled
+ if CONF.network_feature_enabled.port_security:
+ network_kwargs['port_security_enabled'] = port_security_enabled
+ result = networks_client.create_network(**network_kwargs)
network = result['network']
self.assertEqual(network['name'], name)
@@ -808,21 +787,6 @@
return subnet
- def _create_port(self, network_id, client=None, namestart='port-quotatest',
- **kwargs):
- if not client:
- client = self.ports_client
- name = data_utils.rand_name(namestart)
- result = client.create_port(
- name=name,
- network_id=network_id,
- **kwargs)
- self.assertIsNotNone(result, 'Unable to allocate port')
- port = result['port']
- self.addCleanup(test_utils.call_and_ignore_notfound_exc,
- client.delete_port, port['id'])
- return port
-
def _get_server_port_id_and_ip4(self, server, ip_addr=None):
ports = self._list_ports(device_id=server['id'], fixed_ip=ip_addr)
# A port can have more then one IP address in some cases.
@@ -832,6 +796,7 @@
# NOTE(vsaienko) With Ironic, instances live on separate hardware
# servers. Neutron does not bind ports for Ironic instances, as a
# result the port remains in the DOWN state.
+ # TODO(vsaienko) remove once bug: #1599836 is resolved.
if CONF.service_available.ironic:
p_status.append('DOWN')
port_map = [(p["id"], fxip["ip_address"])
@@ -910,9 +875,9 @@
show_floatingip(floatingip_id)['floatingip'])
return status == result['status']
- tempest.test.call_until_true(refresh,
- CONF.network.build_timeout,
- CONF.network.build_interval)
+ test_utils.call_until_true(refresh,
+ CONF.network.build_timeout,
+ CONF.network.build_interval)
floating_ip = self.floating_ips_client.show_floatingip(
floatingip_id)['floatingip']
self.assertEqual(status, floating_ip['status'],
@@ -967,9 +932,9 @@
return not should_succeed
return should_succeed
- return tempest.test.call_until_true(ping_remote,
- CONF.validation.ping_timeout,
- 1)
+ return test_utils.call_until_true(ping_remote,
+ CONF.validation.ping_timeout,
+ 1)
def _create_security_group(self, security_group_rules_client=None,
tenant_id=None,
@@ -1029,7 +994,7 @@
def _default_security_group(self, client=None, tenant_id=None):
"""Get default secgroup for given tenant_id.
- :returns: DeletableSecurityGroup -- default secgroup for given tenant
+ :returns: default secgroup for given tenant
"""
if client is None:
client = self.security_groups_client
@@ -1193,7 +1158,8 @@
def create_networks(self, networks_client=None,
routers_client=None, subnets_client=None,
- tenant_id=None, dns_nameservers=None):
+ tenant_id=None, dns_nameservers=None,
+ port_security_enabled=True):
"""Create a network with a subnet connected to a router.
The baremetal driver is a special case since all nodes are
@@ -1211,7 +1177,7 @@
# https://blueprints.launchpad.net/tempest/+spec/test-accounts
if not CONF.compute.fixed_network_name:
m = 'fixed_network_name must be specified in config'
- raise exceptions.InvalidConfiguration(m)
+ raise lib_exc.InvalidConfiguration(m)
network = self._get_network_by_name(
CONF.compute.fixed_network_name)
router = None
@@ -1219,7 +1185,8 @@
else:
network = self._create_network(
networks_client=networks_client,
- tenant_id=tenant_id)
+ tenant_id=tenant_id,
+ port_security_enabled=port_security_enabled)
router = self._get_router(client=routers_client,
tenant_id=tenant_id)
subnet_kwargs = dict(network=network,
@@ -1302,7 +1269,7 @@
return True
return False
- if not tempest.test.call_until_true(
+ if not test_utils.call_until_true(
check_state, timeout, interval):
msg = ("Timed out waiting for node %s to reach %s state(s) %s" %
(node_id, state_attr, target_states))
@@ -1326,7 +1293,7 @@
self.get_node, instance_id=instance_id)
return node is not None
- if not tempest.test.call_until_true(
+ if not test_utils.call_until_true(
_get_node, CONF.baremetal.association_timeout, 1):
msg = ('Timed out waiting to get Ironic node by instance id %s'
% instance_id)
@@ -1396,10 +1363,14 @@
@classmethod
def setup_clients(cls):
super(EncryptionScenarioTest, cls).setup_clients()
- if CONF.volume_feature_enabled.api_v1:
- cls.admin_volume_types_client = cls.os_adm.volume_types_client
- else:
+ if CONF.volume_feature_enabled.api_v2:
cls.admin_volume_types_client = cls.os_adm.volume_types_v2_client
+ cls.admin_encryption_types_client =\
+ cls.os_adm.encryption_types_v2_client
+ else:
+ cls.admin_volume_types_client = cls.os_adm.volume_types_client
+ cls.admin_encryption_types_client =\
+ cls.os_adm.encryption_types_client
def create_volume_type(self, client=None, name=None):
if not client:
@@ -1418,7 +1389,7 @@
key_size=None, cipher=None,
control_location=None):
if not client:
- client = self.admin_volume_types_client
+ client = self.admin_encryption_types_client
if not type_id:
volume_type = self.create_volume_type()
type_id = volume_type['id']
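The addCleanup_with_wait plumbing is replaced by two plain addCleanup calls, which relies on testtools/unittest running cleanups last-in, first-out: the waiter is registered first, so the delete call runs before the wait for termination. A minimal sketch of that ordering with plain unittest (names illustrative)::

    import unittest


    class CleanupOrderExample(unittest.TestCase):
        events = []

        def test_register_cleanups(self):
            # Registered first -> runs last: the wait-for-termination step.
            self.addCleanup(self.events.append, 'wait_for_termination')
            # Registered second -> runs first: the delete call.
            self.addCleanup(self.events.append, 'delete_server')


    suite = unittest.defaultTestLoader.loadTestsFromTestCase(CleanupOrderExample)
    unittest.TextTestRunner(verbosity=0).run(suite)
    # LIFO ordering: the delete happens before the wait.
    assert CleanupOrderExample.events == ['delete_server', 'wait_for_termination']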
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 086b82d..8de3561 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -36,7 +36,6 @@
super(TestAggregatesBasicOps, cls).setup_clients()
# Use admin client by default
cls.manager = cls.admin_manager
- super(TestAggregatesBasicOps, cls).resource_setup()
cls.aggregates_client = cls.manager.aggregates_client
cls.hosts_client = cls.manager.hosts_client
@@ -53,7 +52,7 @@
def _get_host_name(self):
hosts = self.hosts_client.list_hosts()['hosts']
- self.assertTrue(len(hosts) >= 1)
+ self.assertGreaterEqual(len(hosts), 1)
computes = [x for x in hosts if x['service'] == 'compute']
return computes[0]['host_name']
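assertGreaterEqual is preferred over assertTrue(len(hosts) >= 1) because its failure message reports both operands instead of a bare "False is not true". A quick, self-contained illustration::

    import unittest


    class Demo(unittest.TestCase):
        def runTest(self):
            hosts = []      # pretend list_hosts() returned nothing
            self.assertGreaterEqual(len(hosts), 1)


    result = unittest.TestResult()
    Demo().run(result)
    print(result.failures[0][1].splitlines()[-1])
    # AssertionError: 0 not greater than or equal to 1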
diff --git a/tempest/scenario/test_baremetal_basic_ops.py b/tempest/scenario/test_baremetal_basic_ops.py
index 655d19d..45c38f6 100644
--- a/tempest/scenario/test_baremetal_basic_ops.py
+++ b/tempest/scenario/test_baremetal_basic_ops.py
@@ -15,17 +15,14 @@
from oslo_log import log as logging
-from tempest import config
from tempest.scenario import manager
from tempest import test
-CONF = config.CONF
-
LOG = logging.getLogger(__name__)
class BaremetalBasicOps(manager.BaremetalScenarioTest):
- """This smoke test tests the pxe_ssh Ironic driver.
+ """This test tests the pxe_ssh Ironic driver.
It follows this basic set of operations:
* Creates a keypair
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index dcd77ad..1659ebe 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -53,7 +53,7 @@
volume_type = self.create_volume_type(name=volume_type)
self.create_encryption_type(type_id=volume_type['id'],
provider=encryption_provider,
- key_size=512,
+ key_size=256,
cipher='aes-xts-plain64',
control_location='front-end')
return self.create_volume(volume_type=volume_type['name'])
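For context, with an aes-xts-plain64 cipher the supplied key material is split between two AES keys, so key_size=256 yields AES-128 in XTS mode. The payload sent by the encryption-types client keeps the same shape as the relocated create_encryption_type shown earlier; a hedged sketch (illustrative helper and values, not the Tempest client itself)::

    import json


    def build_encryption_type_body(**kwargs):
        # Same wrapping as create_encryption_type: kwargs go under 'encryption'.
        return json.dumps({'encryption': kwargs})


    print(build_encryption_type_body(provider='luks',
                                     cipher='aes-xts-plain64',
                                     key_size=256,
                                     control_location='front-end'))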
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index f7c7434..3ac6759 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -17,6 +17,7 @@
from tempest.common import waiters
from tempest import config
from tempest import exceptions
+from tempest.lib.common.utils import test_utils
from tempest.scenario import manager
from tempest import test
@@ -46,12 +47,6 @@
10. Check SSH connection to instance after reboot
"""
-
- def nova_list(self):
- servers = self.servers_client.list_servers()
- # The list servers in the compute client is inconsistent...
- return servers['servers']
-
def nova_show(self, server):
got_server = (self.servers_client.show_server(server['id'])
['server'])
@@ -88,13 +83,20 @@
['server'])
return {'name': secgroup['name']} in body['security_groups']
- if not test.call_until_true(wait_for_secgroup_add,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
+ if not test_utils.call_until_true(wait_for_secgroup_add,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
msg = ('Timed out waiting for adding security group %s to server '
'%s' % (secgroup['id'], server['id']))
raise exceptions.TimeoutException(msg)
+ def _get_floating_ip_in_server_addresses(self, floating_ip, server):
+ for network_name, addresses in server['addresses'].items():
+ for address in addresses:
+ if (address['OS-EXT-IPS:type'] == 'floating' and
+ address['addr'] == floating_ip['ip']):
+ return address
+
@test.idempotent_id('bdbb5441-9204-419d-a225-b4fdbfb1a1a8')
@test.services('compute', 'volume', 'image', 'network')
def test_minimum_basic_scenario(self):
@@ -104,7 +106,7 @@
server = self.create_server(image_id=image,
key_name=keypair['name'],
wait_until='ACTIVE')
- servers = self.nova_list()
+ servers = self.servers_client.list_servers()['servers']
self.assertIn(server['id'], [x['id'] for x in servers])
self.nova_show(server)
@@ -120,6 +122,16 @@
self.cinder_show(volume)
floating_ip = self.create_floating_ip(server)
+ # fetch the server again to make sure the addresses were refreshed
+ # after associating the floating IP
+ server = self.servers_client.show_server(server['id'])['server']
+ address = self._get_floating_ip_in_server_addresses(
+ floating_ip, server)
+ self.assertIsNotNone(
+ address,
+ "Failed to find floating IP '%s' in server addresses: %s" %
+ (floating_ip['ip'], server['addresses']))
+
self.create_and_add_security_group_to_server(server)
# check that we can SSH to the server before reboot
@@ -134,3 +146,21 @@
floating_ip['ip'], private_key=keypair['private_key'])
self.check_partitions()
+
+ # delete the floating IP, this should refresh the server addresses
+ self.compute_floating_ips_client.delete_floating_ip(floating_ip['id'])
+
+ def is_floating_ip_detached_from_server():
+ server_info = self.servers_client.show_server(
+ server['id'])['server']
+ address = self._get_floating_ip_in_server_addresses(
+ floating_ip, server_info)
+ return (not address)
+
+ if not test_utils.call_until_true(
+ is_floating_ip_detached_from_server,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
+ msg = ("Floating IP '%s' should not be in server addresses: %s" %
+ (floating_ip['ip'], server['addresses']))
+ raise exceptions.TimeoutException(msg)
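The polling above uses test_utils.call_until_true from tempest.lib instead of the old tempest.test helper. A generic re-implementation of the idiom, included only to show the contract (repeatedly call a predicate until it returns True or the timeout expires); this is not the tempest.lib code itself::

    import time


    def call_until_true(func, duration, sleep_for):
        """Return True as soon as func() does, False once `duration` elapses."""
        deadline = time.time() + duration
        while time.time() < deadline:
            if func():
                return True
            time.sleep(sleep_for)
        return False


    polls = {'left': 3}


    def floating_ip_detached():
        # Stand-in predicate: pretend the address disappears after 3 polls.
        polls['left'] -= 1
        return polls['left'] <= 0


    assert call_until_true(floating_ip_detached, duration=5, sleep_for=0)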
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index e4b699e..60b030d 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -15,7 +15,6 @@
import testtools
-from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
from tempest.scenario import manager
@@ -50,28 +49,28 @@
cls.set_network_resources()
super(TestNetworkAdvancedServerOps, cls).setup_credentials()
- def _setup_network_and_servers(self):
- keypair = self.create_keypair()
+ def _setup_server(self, keypair):
security_groups = []
if test.is_extension_enabled('security-group', 'network'):
security_group = self._create_security_group()
security_groups = [{'name': security_group['name']}]
network, subnet, router = self.create_networks()
- public_network_id = CONF.network.public_network_id
- server_name = data_utils.rand_name('server-smoke')
server = self.create_server(
- name=server_name,
networks=[{'uuid': network['id']}],
key_name=keypair['name'],
security_groups=security_groups,
wait_until='ACTIVE')
+ return server
+
+ def _setup_network(self, server, keypair):
+ public_network_id = CONF.network.public_network_id
floating_ip = self.create_floating_ip(server, public_network_id)
# Verify that we can indeed connect to the server before we mess with
# it's state
self._wait_server_status_and_check_network_connectivity(
server, keypair, floating_ip)
- return server, keypair, floating_ip
+ return floating_ip
def _check_network_connectivity(self, server, keypair, floating_ip,
should_connect=True):
@@ -96,10 +95,11 @@
self._check_network_connectivity(server, keypair, floating_ip)
@test.idempotent_id('61f1aa9a-1573-410e-9054-afa557cab021')
- @test.stresstest(class_setup_per='process')
@test.services('compute', 'network')
def test_server_connectivity_stop_start(self):
- server, keypair, floating_ip = self._setup_network_and_servers()
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair)
+ floating_ip = self._setup_network(server, keypair)
self.servers_client.stop_server(server['id'])
waiters.wait_for_server_status(self.servers_client, server['id'],
'SHUTOFF')
@@ -112,7 +112,9 @@
@test.idempotent_id('7b6860c2-afa3-4846-9522-adeb38dfbe08')
@test.services('compute', 'network')
def test_server_connectivity_reboot(self):
- server, keypair, floating_ip = self._setup_network_and_servers()
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair)
+ floating_ip = self._setup_network(server, keypair)
self.servers_client.reboot_server(server['id'], type='SOFT')
self._wait_server_status_and_check_network_connectivity(
server, keypair, floating_ip)
@@ -120,7 +122,9 @@
@test.idempotent_id('88a529c2-1daa-4c85-9aec-d541ba3eb699')
@test.services('compute', 'network')
def test_server_connectivity_rebuild(self):
- server, keypair, floating_ip = self._setup_network_and_servers()
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair)
+ floating_ip = self._setup_network(server, keypair)
image_ref_alt = CONF.compute.image_ref_alt
self.servers_client.rebuild_server(server['id'],
image_ref=image_ref_alt)
@@ -132,7 +136,9 @@
'Pause is not available.')
@test.services('compute', 'network')
def test_server_connectivity_pause_unpause(self):
- server, keypair, floating_ip = self._setup_network_and_servers()
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair)
+ floating_ip = self._setup_network(server, keypair)
self.servers_client.pause_server(server['id'])
waiters.wait_for_server_status(self.servers_client, server['id'],
'PAUSED')
@@ -147,7 +153,9 @@
'Suspend is not available.')
@test.services('compute', 'network')
def test_server_connectivity_suspend_resume(self):
- server, keypair, floating_ip = self._setup_network_and_servers()
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair)
+ floating_ip = self._setup_network(server, keypair)
self.servers_client.suspend_server(server['id'])
waiters.wait_for_server_status(self.servers_client, server['id'],
'SUSPENDED')
@@ -166,7 +174,9 @@
if resize_flavor == CONF.compute.flavor_ref:
msg = "Skipping test - flavor_ref and flavor_ref_alt are identical"
raise self.skipException(msg)
- server, keypair, floating_ip = self._setup_network_and_servers()
+ keypair = self.create_keypair()
+ server = self._setup_server(keypair)
+ floating_ip = self._setup_network(server, keypair)
self.servers_client.resize_server(server['id'],
flavor_ref=resize_flavor)
waiters.wait_for_server_status(self.servers_client, server['id'],
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 9c48080..a295b6a 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -24,6 +24,7 @@
from tempest import config
from tempest import exceptions
from tempest.lib.common.utils import test_utils
+from tempest.lib import decorators
from tempest.scenario import manager
from tempest import test
@@ -262,8 +263,9 @@
if port['id'] != old_port['id']]
return len(self.new_port_list) == 1
- if not test.call_until_true(check_ports, CONF.network.build_timeout,
- CONF.network.build_interval):
+ if not test_utils.call_until_true(
+ check_ports, CONF.network.build_timeout,
+ CONF.network.build_interval):
raise exceptions.TimeoutException(
"No new port attached to the server in time (%s sec)! "
"Old port: %s. Number of new ports: %d" % (
@@ -276,8 +278,9 @@
self.diff_list = [n for n in new_nic_list if n not in old_nic_list]
return len(self.diff_list) == 1
- if not test.call_until_true(check_new_nic, CONF.network.build_timeout,
- CONF.network.build_interval):
+ if not test_utils.call_until_true(
+ check_new_nic, CONF.network.build_timeout,
+ CONF.network.build_interval):
raise exceptions.TimeoutException("Interface not visible on the "
"guest after %s sec"
% CONF.network.build_timeout)
@@ -410,6 +413,7 @@
@test.idempotent_id('1546850e-fbaa-42f5-8b5f-03d8a6a95f15')
@testtools.skipIf(CONF.baremetal.driver_enabled,
'Baremetal relies on a shared physical network.')
+ @decorators.skip_because(bug="1610994")
@test.services('compute', 'network')
def test_connectivity_between_vms_on_different_networks(self):
"""Test connectivity between VMs on different networks
@@ -469,11 +473,11 @@
def test_hotplug_nic(self):
"""Test hotplug network interface
- 1. create a new network, with no gateway (to prevent overwriting VM's
- gateway)
- 2. connect VM to new network
- 3. set static ip and bring new nic up
- 4. check VM can ping new network dhcp port
+ 1. Create a network and a VM.
+ 2. Check connectivity to the VM via a public network.
+ 3. Create a new network, with no gateway.
+ 4. Bring up a new interface.
+ 5. Check that the VM can reach the new network.
"""
self._setup_network_and_servers()
@@ -591,9 +595,9 @@
return False
return True
- self.assertTrue(test.call_until_true(check_new_dns_server,
- renew_timeout,
- renew_delay),
+ self.assertTrue(test_utils.call_until_true(check_new_dns_server,
+ renew_timeout,
+ renew_delay),
msg="DHCP renewal failed to fetch "
"new DNS nameservers")
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index 59ebb7a..496f07e 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -186,10 +186,10 @@
srv2_v6_addr_assigned = functools.partial(
guest_has_address, sshv4_2, ips_from_api_2['6'][i])
- self.assertTrue(test.call_until_true(srv1_v6_addr_assigned,
+ self.assertTrue(test_utils.call_until_true(srv1_v6_addr_assigned,
CONF.validation.ping_timeout, 1))
- self.assertTrue(test.call_until_true(srv2_v6_addr_assigned,
+ self.assertTrue(test_utils.call_until_true(srv2_v6_addr_assigned,
CONF.validation.ping_timeout, 1))
self._check_connectivity(sshv4_1, ips_from_api_2['4'])
diff --git a/tempest/scenario/test_object_storage_basic_ops.py b/tempest/scenario/test_object_storage_basic_ops.py
index 63ffa0b..9ac1e30 100644
--- a/tempest/scenario/test_object_storage_basic_ops.py
+++ b/tempest/scenario/test_object_storage_basic_ops.py
@@ -13,12 +13,9 @@
# License for the specific language governing permissions and limitations
# under the License.
-from tempest import config
from tempest.scenario import manager
from tempest import test
-CONF = config.CONF
-
class TestObjectStorageBasicOps(manager.ObjectStorageScenarioTest):
"""Test swift basic ops.
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 86185c8..32f5d9f 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -13,9 +13,11 @@
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
+import testtools
from tempest import clients
from tempest.common.utils import data_utils
+from tempest.common.utils import net_info
from tempest import config
from tempest.scenario import manager
from tempest import test
@@ -246,17 +248,11 @@
myport = (tenant.router['id'], tenant.subnet['id'])
router_ports = [(i['device_id'], i['fixed_ips'][0]['subnet_id']) for i
in self._list_ports()
- if self._is_router_port(i)]
+ if net_info.is_router_interface_port(i)]
self.assertIn(myport, router_ports)
- def _is_router_port(self, port):
- """Return True if port is a router interface."""
- # NOTE(armando-migliaccio): match device owner for both centralized
- # and distributed routers; 'device_owner' is "" by default.
- return port['device_owner'].startswith('network:router_interface')
-
- def _create_server(self, name, tenant, security_groups=None, **kwargs):
+ def _create_server(self, name, tenant, security_groups, **kwargs):
"""Creates a server and assigns it to security group.
If multi-host is enabled, Ensures servers are created on different
@@ -264,8 +260,6 @@
as scheduler_hints on creation.
Validates servers are created as requested, using admin client.
"""
- if security_groups is None:
- security_groups = [tenant.security_groups['default']]
security_groups_names = [{'name': s['name']} for s in security_groups]
if self.multi_node:
kwargs["scheduler_hints"] = {'different_host': self.servers}
@@ -277,9 +271,10 @@
wait_until='ACTIVE',
clients=tenant.manager,
**kwargs)
- self.assertEqual(
- sorted([s['name'] for s in security_groups]),
- sorted([s['name'] for s in server['security_groups']]))
+ if 'security_groups' in server:
+ self.assertEqual(
+ sorted([s['name'] for s in security_groups]),
+ sorted([s['name'] for s in server['security_groups']]))
# Verify servers are on different compute nodes
if self.multi_node:
@@ -303,7 +298,8 @@
num=i
)
name = data_utils.rand_name(name)
- server = self._create_server(name, tenant)
+ server = self._create_server(name, tenant,
+ [tenant.security_groups['default']])
tenant.servers.append(server)
def _set_access_point(self, tenant):
@@ -326,11 +322,12 @@
client=tenant.manager.floating_ips_client)
self.floating_ips.setdefault(server['id'], floating_ip)
- def _create_tenant_network(self, tenant):
+ def _create_tenant_network(self, tenant, port_security_enabled=True):
network, subnet, router = self.create_networks(
networks_client=tenant.manager.networks_client,
routers_client=tenant.manager.routers_client,
- subnets_client=tenant.manager.subnets_client)
+ subnets_client=tenant.manager.subnets_client,
+ port_security_enabled=port_security_enabled)
tenant.set_network(network, subnet, router)
def _deploy_tenant(self, tenant_or_id):
@@ -533,7 +530,8 @@
tenant=new_tenant.creds.tenant_name
)
name = data_utils.rand_name(name)
- server = self._create_server(name, new_tenant)
+ server = self._create_server(name, new_tenant,
+ [new_tenant.security_groups['default']])
# Check connectivity failure with default security group
try:
@@ -599,7 +597,8 @@
tenant=new_tenant.creds.tenant_name
)
name = data_utils.rand_name(name)
- server = self._create_server(name, new_tenant)
+ server = self._create_server(name, new_tenant,
+ [new_tenant.security_groups['default']])
access_point_ssh = self._connect_to_access_point(new_tenant)
server_id = server['id']
@@ -624,3 +623,32 @@
for tenant in self.tenants.values():
self._log_console_output(servers=tenant.servers)
raise
+
+ @test.requires_ext(service='network', extension='port-security')
+ @test.idempotent_id('13ccf253-e5ad-424b-9c4a-97b88a026699')
+ @testtools.skipUnless(
+ CONF.compute_feature_enabled.allow_port_security_disabled,
+ 'Port security must be enabled.')
+ # TODO(mriedem): We shouldn't actually need to check this since neutron
+ # disables the port_security extension by default, but the problem is nova
+ # assumes port_security_enabled=True if it's not set on the network
+ # resource, which will mean nova may attempt to apply a security group on
+ # a port on that network which would fail. This is really a bug in nova.
+ @testtools.skipUnless(
+ CONF.network_feature_enabled.port_security,
+ 'Port security must be enabled.')
+ @test.services('compute', 'network')
+ def test_boot_into_disabled_port_security_network_without_secgroup(self):
+ tenant = self.primary_tenant
+ self._create_tenant_network(tenant, port_security_enabled=False)
+ self.assertFalse(tenant.network['port_security_enabled'])
+ name = data_utils.rand_name('server-smoke')
+ sec_groups = []
+ server = self._create_server(name, tenant, sec_groups)
+ server_id = server['id']
+ ports = self._list_ports(device_id=server_id)
+ self.assertEqual(1, len(ports))
+ for port in ports:
+ self.assertEmpty(port['security_groups'],
+ "Neutron shouldn't even use it's default sec "
+ "group.")
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index 60dca3d..e031ff7 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -18,6 +18,7 @@
from tempest import config
from tempest import exceptions
+from tempest.lib.common.utils import test_utils
from tempest.scenario import manager
from tempest import test
@@ -70,9 +71,9 @@
self.assertEqual(self.fip, result, msg)
return 'Verification is successful!'
- if not test.call_until_true(exec_cmd_and_verify_output,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
+ if not test_utils.call_until_true(exec_cmd_and_verify_output,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
raise exceptions.TimeoutException('Timed out while waiting to '
'verify metadata on server. '
'%s is empty.' % md_url)
diff --git a/tempest/scenario/test_server_multinode.py b/tempest/scenario/test_server_multinode.py
index 0cf72c3..170d220 100644
--- a/tempest/scenario/test_server_multinode.py
+++ b/tempest/scenario/test_server_multinode.py
@@ -15,7 +15,7 @@
from tempest import config
-from tempest import exceptions
+from tempest.lib import exceptions
from tempest.scenario import manager
from tempest import test
@@ -42,15 +42,22 @@
# this is needed so that we can use the availability_zone:host
# scheduler hint, which is admin_only by default
cls.servers_client = cls.admin_manager.servers_client
- super(TestServerMultinode, cls).resource_setup()
@test.idempotent_id('9cecbe35-b9d4-48da-a37e-7ce70aa43d30')
@test.attr(type='smoke')
@test.services('compute', 'network')
def test_schedule_to_all_nodes(self):
- host_client = self.manager.hosts_client
- hosts = host_client.list_hosts()['hosts']
- hosts = [x for x in hosts if x['service'] == 'compute']
+ available_zone = \
+ self.os_adm.availability_zone_client.list_availability_zones(
+ detail=True)['availabilityZoneInfo']
+ hosts = []
+ for zone in available_zone:
+ if zone['zoneState']['available']:
+ for host in zone['hosts']:
+ if 'nova-compute' in zone['hosts'][host] and \
+ zone['hosts'][host]['nova-compute']['available']:
+ hosts.append({'zone': zone['zoneName'],
+ 'host_name': host})
# ensure we have at least as many compute hosts as we expect
if len(hosts) < CONF.compute.min_compute_nodes:
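The host discovery now flattens the detailed availability-zone listing into (zone, host) pairs, keeping only hosts with an available nova-compute service. The same traversal run against a hand-written sample payload (the structure mirrors 'availabilityZoneInfo'; the data is made up)::

    sample_zones = [
        {'zoneName': 'nova',
         'zoneState': {'available': True},
         'hosts': {
             'compute-1': {'nova-compute': {'available': True, 'active': True}},
             'compute-2': {'nova-compute': {'available': False, 'active': True}},
             'ctrl-1': {'nova-conductor': {'available': True, 'active': True}},
         }},
        {'zoneName': 'internal', 'zoneState': {'available': False}, 'hosts': {}},
    ]

    hosts = []
    for zone in sample_zones:
        if not zone['zoneState']['available']:
            continue
        for host, services in zone['hosts'].items():
            # Only hosts running an available nova-compute service count.
            if services.get('nova-compute', {}).get('available'):
                hosts.append({'zone': zone['zoneName'], 'host_name': host})

    print(hosts)
    # [{'zone': 'nova', 'host_name': 'compute-1'}]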
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index 4b9c61c..7f04b0d 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -53,25 +53,12 @@
security_group = self._create_security_group()
security_groups = [{'name': security_group['name']}]
- if boot_from_volume:
- volume = self.create_volume(size=CONF.volume.volume_size,
- imageRef=CONF.compute.image_ref)
- bd_map = [{
- 'device_name': 'vda',
- 'volume_id': volume['id'],
- 'delete_on_termination': '0'}]
-
- server = self.create_server(
- key_name=keypair['name'],
- security_groups=security_groups,
- block_device_mapping=bd_map,
- wait_until='ACTIVE')
- else:
- server = self.create_server(
- image_id=CONF.compute.image_ref,
- key_name=keypair['name'],
- security_groups=security_groups,
- wait_until='ACTIVE')
+ server = self.create_server(
+ image_id=CONF.compute.image_ref,
+ key_name=keypair['name'],
+ security_groups=security_groups,
+ wait_until='ACTIVE',
+ volume_backed=boot_from_volume)
instance_ip = self.get_server_ip(server)
timestamp = self.create_timestamp(instance_ip,
diff --git a/tempest/scenario/test_snapshot_pattern.py b/tempest/scenario/test_snapshot_pattern.py
index d6528a3..47c6e8d 100644
--- a/tempest/scenario/test_snapshot_pattern.py
+++ b/tempest/scenario/test_snapshot_pattern.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import testtools
-
from tempest import config
from tempest.scenario import manager
from tempest import test
@@ -33,9 +31,13 @@
"""
+ @classmethod
+ def skip_checks(cls):
+ super(TestSnapshotPattern, cls).skip_checks()
+ if not CONF.compute_feature_enabled.snapshot:
+ raise cls.skipException("Snapshotting is not available.")
+
@test.idempotent_id('608e604b-1d63-4a82-8e3e-91bc665c90b4')
- @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
- 'Snapshotting is not available.')
@test.services('compute', 'network', 'image')
def test_snapshot_pattern(self):
# prepare for booting an instance
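Moving the feature check from a per-test skipUnless decorator into skip_checks() skips the whole class before any setup work runs. A minimal analogue with plain unittest and a hypothetical feature flag (Tempest's skip_checks hook fills the role setUpClass plays here)::

    import unittest

    SNAPSHOT_ENABLED = False    # stand-in for CONF.compute_feature_enabled.snapshot


    class TestSnapshotPatternSketch(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # Raising SkipTest here skips every test in the class up front.
            if not SNAPSHOT_ENABLED:
                raise unittest.SkipTest("Snapshotting is not available.")

        def test_snapshot_pattern(self):
            self.fail("never reached while the feature flag is off")


    suite = unittest.defaultTestLoader.loadTestsFromTestCase(
        TestSnapshotPatternSketch)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    assert not result.failures and not result.errors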
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index e7223c7..5fd934c 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -22,6 +22,7 @@
from tempest.common import waiters
from tempest import config
from tempest import exceptions
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest.scenario import manager
@@ -89,9 +90,9 @@
LOG.debug("Partitions:%s" % part)
return CONF.compute.volume_device_name in part
- if not test.call_until_true(_func,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
+ if not test_utils.call_until_true(_func,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
raise exceptions.TimeoutException
@decorators.skip_because(bug="1205344")
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 25d825a..db5e009 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -24,18 +24,6 @@
class TestVolumeBootPattern(manager.ScenarioTest):
- """This test case attempts to reproduce the following steps:
-
- * Create in Cinder some bootable volume importing a Glance image
- * Boot an instance from the bootable volume
- * Write content to the volume
- * Delete an instance and Boot a new instance from the volume
- * Check written content in the instance
- * Create a volume snapshot while the instance is running
- * Boot an additional instance from the new snapshot based volume
- * Check written content in the instance booted from snapshot
- """
-
# Boot from volume scenario is quite slow, and needs extra
# breathing room to get through deletes in the time allotted.
TIMEOUT_SCALING_FACTOR = 2
@@ -48,7 +36,8 @@
def _create_volume_from_image(self):
img_uuid = CONF.compute.image_ref
- vol_name = data_utils.rand_name('volume-origin')
+ vol_name = data_utils.rand_name(
+ self.__class__.__name__ + '-volume-origin')
return self.create_volume(name=vol_name, imageRef=img_uuid)
def _get_bdm(self, vol_id, delete_on_termination=False):
@@ -79,7 +68,8 @@
**create_kwargs)
def _create_snapshot_from_volume(self, vol_id):
- snap_name = data_utils.rand_name('snapshot')
+ snap_name = data_utils.rand_name(
+ self.__class__.__name__ + '-snapshot')
snap = self.snapshots_client.create_snapshot(
volume_id=vol_id,
force=True,
@@ -98,10 +88,6 @@
return snap
- def _create_volume_from_snapshot(self, snap_id):
- vol_name = data_utils.rand_name('volume')
- return self.create_volume(name=vol_name, snapshot_id=snap_id)
-
def _delete_server(self, server):
self.servers_client.delete_server(server['id'])
waiters.wait_for_server_termination(self.servers_client, server['id'])
@@ -110,6 +96,19 @@
@test.attr(type='smoke')
@test.services('compute', 'volume', 'image')
def test_volume_boot_pattern(self):
+
+ """This test case attempts to reproduce the following steps:
+
+     * Create a bootable volume in Cinder from a Glance image
+     * Boot an instance from the bootable volume
+     * Write content to the volume
+     * Delete the instance and boot a new instance from the volume
+     * Check the written content in the instance
+     * Create a volume snapshot while the instance is running
+     * Boot an additional instance from the new snapshot-based volume
+     * Check the written content in the instance booted from the snapshot
+ """
+
LOG.info("Creating keypair and security group")
keypair = self.create_keypair()
security_group = self._create_security_group()
@@ -149,7 +148,7 @@
# create a 3rd instance from snapshot
LOG.info("Creating third instance from snapshot: %s" % snapshot['id'])
- volume = self._create_volume_from_snapshot(snapshot['id'])
+ volume = self.create_volume(snapshot_id=snapshot['id'])
server_from_snapshot = (
self._boot_instance_from_volume(volume['id'],
keypair, security_group))
@@ -170,8 +169,7 @@
instance = self._boot_instance_from_volume(volume_origin['id'],
delete_on_termination=True)
# create EBS image
- name = data_utils.rand_name('image')
- image = self.create_server_snapshot(instance, name=name)
+ image = self.create_server_snapshot(instance)
# delete instance
self._delete_server(instance)
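
The ``rand_name`` changes above prefix generated resource names with the test class name, which makes leaked volumes and snapshots easy to attribute. A rough standalone sketch of the naming pattern (``rand_name`` here is an illustrative stand-in for ``data_utils.rand_name``, which appends a random suffix in the same spirit)::

    import random
    import string


    def rand_name(name=''):
        """Illustrative stand-in: append a short random tail to ``name``."""
        suffix = ''.join(random.choice(string.ascii_lowercase + string.digits)
                         for _ in range(8))
        return '%s-%s' % (name, suffix) if name else suffix


    class VolumeBootPatternSketch(object):

        def _volume_origin_name(self):
            # Prefixing with the class name makes leaked resources easy to
            # trace back to the test that created them.
            return rand_name(self.__class__.__name__ + '-volume-origin')


    print(VolumeBootPatternSketch()._volume_origin_name())
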
diff --git a/tempest/service_clients.py b/tempest/service_clients.py
deleted file mode 100644
index 386e621..0000000
--- a/tempest/service_clients.py
+++ /dev/null
@@ -1,178 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.lib import auth
-from tempest.lib import exceptions
-
-
-def tempest_modules():
- """List of service client modules available in Tempest.
-
- Provides a list of service modules available Tempest.
- """
- return set(['compute', 'identity.v2', 'identity.v3', 'image.v1',
- 'image.v2', 'network', 'object-storage', 'volume.v1',
- 'volume.v2', 'volume.v3'])
-
-
-def available_modules():
- """List of service client modules available in Tempest and plugins"""
- # TODO(andreaf) For now this returns only tempest_modules
- return tempest_modules()
-
-
-class ServiceClients(object):
- """Service client provider class
-
- The ServiceClients object provides a useful means for tests to access
- service clients configured for a specified set of credentials.
- It hides some of the complexity from the authorization and configuration
- layers.
-
- Examples:
-
- >>> from tempest import service_clients
- >>> johndoe = cred_provider.get_creds_by_role(['johndoe'])
- >>> johndoe_clients = service_clients.ServiceClients(johndoe,
- >>> identity_uri)
- >>> johndoe_servers = johndoe_clients.servers_client.list_servers()
-
- """
- # NOTE(andreaf) This class does not depend on tempest configuration
- # and its meant for direct consumption by external clients such as tempest
- # plugins. Tempest provides a wrapper class, `clients.Manager`, that
- # initialises this class using values from tempest CONF object. The wrapper
- # class should only be used by tests hosted in Tempest.
-
- def __init__(self, credentials, identity_uri, region=None, scope='project',
- disable_ssl_certificate_validation=True, ca_certs=None,
- trace_requests='', client_parameters=None):
- """Service Clients provider
-
- Instantiate a `ServiceClients` object, from a set of credentials and an
- identity URI. The identity version is inferred from the credentials
- object. Optionally auth scope can be provided.
-
- A few parameters can be given a value which is applied as default
- for all service clients: region, dscv, ca_certs, trace_requests.
-
- Parameters dscv, ca_certs and trace_requests all apply to the auth
- provider as well as any service clients provided by this manager.
-
- Any other client parameter must be set via client_parameters.
- The list of available parameters is defined in the service clients
- interfaces. For reference, most clients will accept 'region',
- 'service', 'endpoint_type', 'build_timeout' and 'build_interval', which
- are all inherited from RestClient.
-
- The `config` module in Tempest exposes an helper function
- `service_client_config` that can be used to extract from configuration
- a dictionary ready to be injected in kwargs.
-
- Exceptions are:
- - Token clients for 'identity' have a very different interface
- - Volume client for 'volume' accepts 'default_volume_size'
- - Servers client from 'compute' accepts 'enable_instance_password'
-
- Examples:
-
- >>> identity_params = config.service_client_config('identity')
- >>> params = {
- >>> 'identity': identity_params,
- >>> 'compute': {'region': 'region2'}}
- >>> manager = lib_manager.Manager(
- >>> my_creds, identity_uri, client_parameters=params)
-
- :param credentials: An instance of `auth.Credentials`
- :param identity_uri: URI of the identity API. This should be a
- mandatory parameter, and it will so soon.
- :param region: Default value of region for service clients.
- :param scope: default scope for tokens produced by the auth provider
- :param disable_ssl_certificate_validation Applies to auth and to all
- service clients.
- :param ca_certs Applies to auth and to all service clients.
- :param trace_requests Applies to auth and to all service clients.
- :param client_parameters Dictionary with parameters for service
- clients. Keys of the dictionary are the service client service
- name, as declared in `service_clients.available_modules()` except
- for the version. Values are dictionaries of parameters that are
- going to be passed to all clients in the service client module.
-
- Examples:
-
- >>> params_service_x = {'param_name': 'param_value'}
- >>> client_parameters = { 'service_x': params_service_x }
-
- >>> params_service_y = config.service_client_config('service_y')
- >>> client_parameters['service_y'] = params_service_y
-
- """
- self.credentials = credentials
- self.identity_uri = identity_uri
- if not identity_uri:
- raise exceptions.InvalidCredentials(
- 'ServiceClients requires a non-empty identity_uri.')
- self.region = region
- # Check if passed or default credentials are valid
- if not self.credentials.is_valid():
- raise exceptions.InvalidCredentials()
- # Get the identity classes matching the provided credentials
- # TODO(andreaf) Define a new interface in Credentials to get
- # the API version from an instance
- identity = [(k, auth.IDENTITY_VERSION[k][1]) for k in
- auth.IDENTITY_VERSION.keys() if
- isinstance(self.credentials, auth.IDENTITY_VERSION[k][0])]
- # Zero matches or more than one are both not valid.
- if len(identity) != 1:
- raise exceptions.InvalidCredentials()
- self.auth_version, auth_provider_class = identity[0]
- self.dscv = disable_ssl_certificate_validation
- self.ca_certs = ca_certs
- self.trace_requests = trace_requests
- # Creates an auth provider for the credentials
- self.auth_provider = auth_provider_class(
- self.credentials, self.identity_uri, scope=scope,
- disable_ssl_certificate_validation=self.dscv,
- ca_certs=self.ca_certs, trace_requests=self.trace_requests)
- # Setup some defaults for client parameters of registered services
- client_parameters = client_parameters or {}
- self.parameters = {}
- # Parameters are provided for unversioned services
- unversioned_services = set(
- [x.split('.')[0] for x in available_modules()])
- for service in unversioned_services:
- self.parameters[service] = self._setup_parameters(
- client_parameters.pop(service, {}))
- # Check that no client parameters was supplied for unregistered clients
- if client_parameters:
- raise exceptions.UnknownServiceClient(
- services=list(client_parameters.keys()))
-
- def _setup_parameters(self, parameters):
- """Setup default values for client parameters
-
- Region by default is the region passed as an __init__ parameter.
- Checks that no parameter for an unknown service is provided.
- """
- _parameters = {}
- # Use region from __init__
- if self.region:
- _parameters['region'] = self.region
- # Update defaults with specified parameters
- _parameters.update(parameters)
- # If any parameter is left, parameters for an unknown service were
- # provided as input. Fail rather than ignore silently.
- return _parameters
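
The removed ``_setup_parameters`` helper only merged a manager-wide default region into each service's client parameters, with explicit per-client values winning. A minimal standalone sketch of that defaulting logic (the function name is illustrative)::

    def setup_parameters(default_region, parameters):
        """Merge a default region with per-client parameters.

        Explicit client parameters take precedence over the default.
        """
        merged = {}
        if default_region:
            merged['region'] = default_region
        merged.update(parameters)
        return merged


    print(setup_parameters('regionOne', {}))
    # {'region': 'regionOne'}
    print(setup_parameters('regionOne', {'region': 'region2',
                                         'build_timeout': 300}))
    # {'region': 'region2', 'build_timeout': 300}
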
diff --git a/tempest/services/baremetal/v1/json/baremetal_client.py b/tempest/services/baremetal/v1/json/baremetal_client.py
old mode 100755
new mode 100644
index ede0d90..7405871
--- a/tempest/services/baremetal/v1/json/baremetal_client.py
+++ b/tempest/services/baremetal/v1/json/baremetal_client.py
@@ -84,7 +84,7 @@
def show_node_by_instance_uuid(self, instance_uuid):
"""Gets a node associated with given instance uuid.
- :param uuid: Unique identifier of the node in UUID format.
+ :param instance_uuid: Unique identifier of the instance in UUID format.
:return: Serialized node as a dictionary.
"""
@@ -138,6 +138,7 @@
def create_node(self, chassis_id=None, **kwargs):
"""Create a baremetal node with the specified parameters.
+ :param chassis_id: The unique identifier of the chassis.
:param cpu_arch: CPU architecture of the node. Default: x86_64.
:param cpus: Number of CPUs. Default: 8.
:param local_gb: Disk size. Default: 1024.
@@ -269,7 +270,7 @@
"""Set power state of the specified node.
:param node_uuid: The unique identifier of the node.
- :state: desired state to set (on/off/reboot).
+ :param state: desired state to set (on/off/reboot).
"""
target = {'target': state}
@@ -280,7 +281,7 @@
def validate_driver_interface(self, node_uuid):
"""Get all driver interfaces of a specific node.
- :param uuid: Unique identifier of the node in UUID format.
+ :param node_uuid: Unique identifier of the node in UUID format.
"""
diff --git a/tempest/services/data_processing/__init__.py b/tempest/services/data_processing/__init__.py
deleted file mode 100644
index c49bc5c..0000000
--- a/tempest/services/data_processing/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-from tempest.services.data_processing.v1_1.data_processing_client import \
- DataProcessingClient
-
-__all__ = ['DataProcessingClient']
diff --git a/tempest/services/data_processing/v1_1/__init__.py b/tempest/services/data_processing/v1_1/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/services/data_processing/v1_1/__init__.py
+++ /dev/null
diff --git a/tempest/services/data_processing/v1_1/data_processing_client.py b/tempest/services/data_processing/v1_1/data_processing_client.py
deleted file mode 100644
index c74672f..0000000
--- a/tempest/services/data_processing/v1_1/data_processing_client.py
+++ /dev/null
@@ -1,280 +0,0 @@
-# Copyright (c) 2013 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.common import rest_client
-
-
-class DataProcessingClient(rest_client.RestClient):
-
- def _request_and_check_resp(self, request_func, uri, resp_status):
- """Make a request and check response status code.
-
- It returns a ResponseBody.
- """
- resp, body = request_func(uri)
- self.expected_success(resp_status, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def _request_and_check_resp_data(self, request_func, uri, resp_status):
- """Make a request and check response status code.
-
- It returns pair: resp and response data.
- """
- resp, body = request_func(uri)
- self.expected_success(resp_status, resp.status)
- return resp, body
-
- def _request_check_and_parse_resp(self, request_func, uri,
- resp_status, *args, **kwargs):
- """Make a request, check response status code and parse response body.
-
- It returns a ResponseBody.
- """
- headers = {'Content-Type': 'application/json'}
- resp, body = request_func(uri, headers=headers, *args, **kwargs)
- self.expected_success(resp_status, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_node_group_templates(self):
- """List all node group templates for a user."""
-
- uri = 'node-group-templates'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_node_group_template(self, tmpl_id):
- """Returns the details of a single node group template."""
-
- uri = 'node-group-templates/%s' % tmpl_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_node_group_template(self, name, plugin_name, hadoop_version,
- node_processes, flavor_id,
- node_configs=None, **kwargs):
- """Creates node group template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'node-group-templates'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'plugin_name': plugin_name,
- 'hadoop_version': hadoop_version,
- 'node_processes': node_processes,
- 'flavor_id': flavor_id,
- 'node_configs': node_configs or dict(),
- })
- return self._request_check_and_parse_resp(self.post, uri, 202,
- body=json.dumps(body))
-
- def delete_node_group_template(self, tmpl_id):
- """Deletes the specified node group template by id."""
-
- uri = 'node-group-templates/%s' % tmpl_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def list_plugins(self):
- """List all enabled plugins."""
-
- uri = 'plugins'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_plugin(self, plugin_name, plugin_version=None):
- """Returns the details of a single plugin."""
-
- uri = 'plugins/%s' % plugin_name
- if plugin_version:
- uri += '/%s' % plugin_version
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def list_cluster_templates(self):
- """List all cluster templates for a user."""
-
- uri = 'cluster-templates'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_cluster_template(self, tmpl_id):
- """Returns the details of a single cluster template."""
-
- uri = 'cluster-templates/%s' % tmpl_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_cluster_template(self, name, plugin_name, hadoop_version,
- node_groups, cluster_configs=None,
- **kwargs):
- """Creates cluster template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'cluster-templates'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'plugin_name': plugin_name,
- 'hadoop_version': hadoop_version,
- 'node_groups': node_groups,
- 'cluster_configs': cluster_configs or dict(),
- })
- return self._request_check_and_parse_resp(self.post, uri, 202,
- body=json.dumps(body))
-
- def delete_cluster_template(self, tmpl_id):
- """Deletes the specified cluster template by id."""
-
- uri = 'cluster-templates/%s' % tmpl_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def list_data_sources(self):
- """List all data sources for a user."""
-
- uri = 'data-sources'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_data_source(self, source_id):
- """Returns the details of a single data source."""
-
- uri = 'data-sources/%s' % source_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_data_source(self, name, data_source_type, url, **kwargs):
- """Creates data source with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'data-sources'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'type': data_source_type,
- 'url': url
- })
- return self._request_check_and_parse_resp(self.post, uri,
- 202, body=json.dumps(body))
-
- def delete_data_source(self, source_id):
- """Deletes the specified data source by id."""
-
- uri = 'data-sources/%s' % source_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def list_job_binary_internals(self):
- """List all job binary internals for a user."""
-
- uri = 'job-binary-internals'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_job_binary_internal(self, job_binary_id):
- """Returns the details of a single job binary internal."""
-
- uri = 'job-binary-internals/%s' % job_binary_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_job_binary_internal(self, name, data):
- """Creates job binary internal with specified params."""
-
- uri = 'job-binary-internals/%s' % name
- return self._request_check_and_parse_resp(self.put, uri, 202, data)
-
- def delete_job_binary_internal(self, job_binary_id):
- """Deletes the specified job binary internal by id."""
-
- uri = 'job-binary-internals/%s' % job_binary_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def get_job_binary_internal_data(self, job_binary_id):
- """Returns data of a single job binary internal."""
-
- uri = 'job-binary-internals/%s/data' % job_binary_id
- return self._request_and_check_resp_data(self.get, uri, 200)
-
- def list_job_binaries(self):
- """List all job binaries for a user."""
-
- uri = 'job-binaries'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_job_binary(self, job_binary_id):
- """Returns the details of a single job binary."""
-
- uri = 'job-binaries/%s' % job_binary_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_job_binary(self, name, url, extra=None, **kwargs):
- """Creates job binary with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'job-binaries'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'url': url,
- 'extra': extra or dict(),
- })
- return self._request_check_and_parse_resp(self.post, uri,
- 202, body=json.dumps(body))
-
- def delete_job_binary(self, job_binary_id):
- """Deletes the specified job binary by id."""
-
- uri = 'job-binaries/%s' % job_binary_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def get_job_binary_data(self, job_binary_id):
- """Returns data of a single job binary."""
-
- uri = 'job-binaries/%s/data' % job_binary_id
- return self._request_and_check_resp_data(self.get, uri, 200)
-
- def list_jobs(self):
- """List all jobs for a user."""
-
- uri = 'jobs'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_job(self, job_id):
- """Returns the details of a single job."""
-
- uri = 'jobs/%s' % job_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_job(self, name, job_type, mains, libs=None, **kwargs):
- """Creates job with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'jobs'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'type': job_type,
- 'mains': mains,
- 'libs': libs or list(),
- })
- return self._request_check_and_parse_resp(self.post, uri,
- 202, body=json.dumps(body))
-
- def delete_job(self, job_id):
- """Deletes the specified job by id."""
-
- uri = 'jobs/%s' % job_id
- return self._request_and_check_resp(self.delete, uri, 204)
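
The deleted Sahara client is built around one pattern: issue a request, assert on the expected status code, then JSON-decode the body. A rough standalone sketch of that pattern (names are illustrative, and the fake response object exists only for the demo)::

    import json


    class FakeResp(object):
        def __init__(self, status):
            self.status = status


    def request_and_check(request_func, uri, expected_status):
        """Issue a request, check the status code, and parse the JSON body."""
        resp, body = request_func(uri)
        if resp.status != expected_status:
            raise AssertionError("expected %s, got %s"
                                 % (expected_status, resp.status))
        return json.loads(body)


    print(request_and_check(lambda uri: (FakeResp(200), '{"plugins": []}'),
                            'plugins', 200))
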
diff --git a/tempest/services/identity/__init__.py b/tempest/services/identity/__init__.py
index 0e24926..53c223f 100644
--- a/tempest/services/identity/__init__.py
+++ b/tempest/services/identity/__init__.py
@@ -12,7 +12,7 @@
# License for the specific language governing permissions and limitations under
# the License.
-from tempest.services.identity import v2
+from tempest.lib.services.identity import v2
from tempest.services.identity import v3
__all__ = ['v2', 'v3']
diff --git a/tempest/services/identity/v2/__init__.py b/tempest/services/identity/v2/__init__.py
deleted file mode 100644
index ac2a874..0000000
--- a/tempest/services/identity/v2/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-from tempest.lib.services.identity.v2.endpoints_client import EndpointsClient
-from tempest.lib.services.identity.v2.roles_client import RolesClient
-from tempest.lib.services.identity.v2.services_client import ServicesClient
-from tempest.lib.services.identity.v2.tenants_client import TenantsClient
-from tempest.lib.services.identity.v2.token_client import TokenClient
-from tempest.lib.services.identity.v2.users_client import UsersClient
-from tempest.services.identity.v2.json.identity_client import IdentityClient
-
-__all__ = ['EndpointsClient', 'TokenClient', 'IdentityClient', 'RolesClient',
- 'ServicesClient', 'TenantsClient', 'UsersClient']
diff --git a/tempest/services/identity/v2/json/__init__.py b/tempest/services/identity/v2/json/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/services/identity/v2/json/__init__.py
+++ /dev/null
diff --git a/tempest/services/identity/v3/__init__.py b/tempest/services/identity/v3/__init__.py
index 6ad8ef2..9b40b77 100644
--- a/tempest/services/identity/v3/__init__.py
+++ b/tempest/services/identity/v3/__init__.py
@@ -12,22 +12,27 @@
# License for the specific language governing permissions and limitations under
# the License.
-from tempest.lib.services.identity.v3.endpoints_client import EndPointsClient
-from tempest.lib.services.identity.v3.policies_client import PoliciesClient
-from tempest.lib.services.identity.v3.token_client import V3TokenClient
-from tempest.services.identity.v3.json.credentials_client import \
+from tempest.lib.services.identity.v3.credentials_client import \
CredentialsClient
+from tempest.lib.services.identity.v3.endpoints_client import EndPointsClient
+from tempest.lib.services.identity.v3.groups_client import GroupsClient
+from tempest.lib.services.identity.v3.identity_client import IdentityClient
+from tempest.lib.services.identity.v3.inherited_roles_client import \
+ InheritedRolesClient
+from tempest.lib.services.identity.v3.policies_client import PoliciesClient
+from tempest.lib.services.identity.v3.projects_client import ProjectsClient
+from tempest.lib.services.identity.v3.regions_client import RegionsClient
+from tempest.lib.services.identity.v3.roles_client import RolesClient
+from tempest.lib.services.identity.v3.services_client import ServicesClient
+from tempest.lib.services.identity.v3.token_client import V3TokenClient
+from tempest.lib.services.identity.v3.trusts_client import TrustsClient
+from tempest.lib.services.identity.v3.users_client import UsersClient
from tempest.services.identity.v3.json.domains_client import DomainsClient
-from tempest.services.identity.v3.json.groups_client import GroupsClient
-from tempest.services.identity.v3.json.identity_client import IdentityClient
-from tempest.services.identity.v3.json.projects_client import ProjectsClient
-from tempest.services.identity.v3.json.regions_client import RegionsClient
-from tempest.services.identity.v3.json.roles_client import RolesClient
-from tempest.services.identity.v3.json.services_client import ServicesClient
-from tempest.services.identity.v3.json.trusts_client import TrustsClient
-from tempest.services.identity.v3.json.users_clients import UsersClient
+from tempest.services.identity.v3.json.role_assignments_client import \
+ RoleAssignmentsClient
-__all__ = ['EndPointsClient', 'PoliciesClient', 'V3TokenClient',
- 'CredentialsClient', 'DomainsClient', 'GroupsClient',
- 'IdentityClient', 'ProjectsClient', 'RegionsClient', 'RolesClient',
- 'ServicesClient', 'TrustsClient', 'UsersClient', ]
+__all__ = ['CredentialsClient', 'EndPointsClient', 'GroupsClient',
+ 'IdentityClient', 'InheritedRolesClient', 'PoliciesClient',
+ 'ProjectsClient', 'RegionsClient', 'RoleAssignmentsClient',
+ 'RolesClient', 'ServicesClient', 'V3TokenClient', 'TrustsClient',
+ 'UsersClient', 'DomainsClient', ]
diff --git a/tempest/services/identity/v3/json/domains_client.py b/tempest/services/identity/v3/json/domains_client.py
index d129a0a..fe929a5 100644
--- a/tempest/services/identity/v3/json/domains_client.py
+++ b/tempest/services/identity/v3/json/domains_client.py
@@ -38,7 +38,7 @@
def delete_domain(self, domain_id):
"""Deletes a domain."""
- resp, body = self.delete('domains/%s' % str(domain_id))
+ resp, body = self.delete('domains/%s' % domain_id)
self.expected_success(204, resp.status)
return rest_client.ResponseBody(resp, body)
diff --git a/tempest/services/identity/v3/json/role_assignments_client.py b/tempest/services/identity/v3/json/role_assignments_client.py
new file mode 100644
index 0000000..9fd7736
--- /dev/null
+++ b/tempest/services/identity/v3/json/role_assignments_client.py
@@ -0,0 +1,31 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.common import rest_client
+
+
+class RoleAssignmentsClient(rest_client.RestClient):
+ api_version = "v3"
+
+ def list_user_project_effective_assignments(
+ self, project_id, user_id):
+ """List the effective role assignments for a user in a project."""
+ resp, body = self.get(
+ "role_assignments?scope.project.id=%s&user.id=%s&effective" %
+ (project_id, user_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
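
The new ``RoleAssignmentsClient`` issues a single GET with scope, user and ``effective`` filters. A standalone sketch of the query it builds and of pulling role ids out of a response-shaped body (the body layout shown is an assumption about the Keystone v3 format, not taken from this change)::

    import json


    def effective_assignments_path(project_id, user_id):
        # Same filters as list_user_project_effective_assignments above.
        return ("role_assignments?scope.project.id=%s&user.id=%s&effective"
                % (project_id, user_id))


    def role_ids(body):
        # Assumed body shape:
        # {"role_assignments": [{"role": {"id": ...}}, ...]}
        return [a["role"]["id"] for a in body.get("role_assignments", [])]


    sample = json.loads('{"role_assignments": [{"role": {"id": "r1"}}]}')
    print(effective_assignments_path("proj-1", "user-1"))
    print(role_ids(sample))
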
diff --git a/tempest/services/identity/v3/json/roles_client.py b/tempest/services/identity/v3/json/roles_client.py
deleted file mode 100644
index bdb0490..0000000
--- a/tempest/services/identity/v3/json/roles_client.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# Copyright 2016 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.common import rest_client
-
-
-class RolesClient(rest_client.RestClient):
- api_version = "v3"
-
- def create_role(self, **kwargs):
- """Create a Role.
-
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#createRole
- """
- post_body = json.dumps({'role': kwargs})
- resp, body = self.post('roles', post_body)
- self.expected_success(201, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def show_role(self, role_id):
- """GET a Role."""
- resp, body = self.get('roles/%s' % str(role_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_roles(self):
- """Get the list of Roles."""
- resp, body = self.get("roles")
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def update_role(self, role_id, **kwargs):
- """Update a Role.
-
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateRole
- """
- post_body = json.dumps({'role': kwargs})
- resp, body = self.patch('roles/%s' % str(role_id), post_body)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def delete_role(self, role_id):
- """Delete a role."""
- resp, body = self.delete('roles/%s' % str(role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def assign_user_role_on_project(self, project_id, user_id, role_id):
- """Add roles to a user on a project."""
- resp, body = self.put('projects/%s/users/%s/roles/%s' %
- (project_id, user_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def assign_user_role_on_domain(self, domain_id, user_id, role_id):
- """Add roles to a user on a domain."""
- resp, body = self.put('domains/%s/users/%s/roles/%s' %
- (domain_id, user_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def list_user_roles_on_project(self, project_id, user_id):
- """list roles of a user on a project."""
- resp, body = self.get('projects/%s/users/%s/roles' %
- (project_id, user_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_user_roles_on_domain(self, domain_id, user_id):
- """list roles of a user on a domain."""
- resp, body = self.get('domains/%s/users/%s/roles' %
- (domain_id, user_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def delete_role_from_user_on_project(self, project_id, user_id, role_id):
- """Delete role of a user on a project."""
- resp, body = self.delete('projects/%s/users/%s/roles/%s' %
- (project_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def delete_role_from_user_on_domain(self, domain_id, user_id, role_id):
- """Delete role of a user on a domain."""
- resp, body = self.delete('domains/%s/users/%s/roles/%s' %
- (domain_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def check_user_role_existence_on_project(self, project_id,
- user_id, role_id):
- """Check role of a user on a project."""
- resp, body = self.head('projects/%s/users/%s/roles/%s' %
- (project_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def check_user_role_existence_on_domain(self, domain_id,
- user_id, role_id):
- """Check role of a user on a domain."""
- resp, body = self.head('domains/%s/users/%s/roles/%s' %
- (domain_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def assign_group_role_on_project(self, project_id, group_id, role_id):
- """Add roles to a user on a project."""
- resp, body = self.put('projects/%s/groups/%s/roles/%s' %
- (project_id, group_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def assign_group_role_on_domain(self, domain_id, group_id, role_id):
- """Add roles to a user on a domain."""
- resp, body = self.put('domains/%s/groups/%s/roles/%s' %
- (domain_id, group_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def list_group_roles_on_project(self, project_id, group_id):
- """list roles of a user on a project."""
- resp, body = self.get('projects/%s/groups/%s/roles' %
- (project_id, group_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_group_roles_on_domain(self, domain_id, group_id):
- """list roles of a user on a domain."""
- resp, body = self.get('domains/%s/groups/%s/roles' %
- (domain_id, group_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def delete_role_from_group_on_project(self, project_id, group_id, role_id):
- """Delete role of a user on a project."""
- resp, body = self.delete('projects/%s/groups/%s/roles/%s' %
- (project_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def delete_role_from_group_on_domain(self, domain_id, group_id, role_id):
- """Delete role of a user on a domain."""
- resp, body = self.delete('domains/%s/groups/%s/roles/%s' %
- (domain_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def check_role_from_group_on_project_existence(self, project_id,
- group_id, role_id):
- """Check role of a user on a project."""
- resp, body = self.head('projects/%s/groups/%s/roles/%s' %
- (project_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def check_role_from_group_on_domain_existence(self, domain_id,
- group_id, role_id):
- """Check role of a user on a domain."""
- resp, body = self.head('domains/%s/groups/%s/roles/%s' %
- (domain_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def assign_inherited_role_on_domains_user(
- self, domain_id, user_id, role_id):
- """Assigns a role to a user on projects owned by a domain."""
- resp, body = self.put(
- "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
- % (domain_id, user_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def revoke_inherited_role_from_user_on_domain(
- self, domain_id, user_id, role_id):
- """Revokes an inherited project role from a user on a domain."""
- resp, body = self.delete(
- "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
- % (domain_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def list_inherited_project_role_for_user_on_domain(
- self, domain_id, user_id):
- """Lists the inherited project roles on a domain for a user."""
- resp, body = self.get(
- "OS-INHERIT/domains/%s/users/%s/roles/inherited_to_projects"
- % (domain_id, user_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def check_user_inherited_project_role_on_domain(
- self, domain_id, user_id, role_id):
- """Checks whether a user has an inherited project role on a domain."""
- resp, body = self.head(
- "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
- % (domain_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def assign_inherited_role_on_domains_group(
- self, domain_id, group_id, role_id):
- """Assigns a role to a group on projects owned by a domain."""
- resp, body = self.put(
- "OS-INHERIT/domains/%s/groups/%s/roles/%s/inherited_to_projects"
- % (domain_id, group_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def revoke_inherited_role_from_group_on_domain(
- self, domain_id, group_id, role_id):
- """Revokes an inherited project role from a group on a domain."""
- resp, body = self.delete(
- "OS-INHERIT/domains/%s/groups/%s/roles/%s/inherited_to_projects"
- % (domain_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def list_inherited_project_role_for_group_on_domain(
- self, domain_id, group_id):
- """Lists the inherited project roles on a domain for a group."""
- resp, body = self.get(
- "OS-INHERIT/domains/%s/groups/%s/roles/inherited_to_projects"
- % (domain_id, group_id))
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def check_group_inherited_project_role_on_domain(
- self, domain_id, group_id, role_id):
- """Checks whether a group has an inherited project role on a domain."""
- resp, body = self.head(
- "OS-INHERIT/domains/%s/groups/%s/roles/%s/inherited_to_projects"
- % (domain_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def assign_inherited_role_on_projects_user(
- self, project_id, user_id, role_id):
- """Assigns a role to a user on projects in a subtree."""
- resp, body = self.put(
- "OS-INHERIT/projects/%s/users/%s/roles/%s/inherited_to_projects"
- % (project_id, user_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def revoke_inherited_role_from_user_on_project(
- self, project_id, user_id, role_id):
- """Revokes an inherited role from a user on a project."""
- resp, body = self.delete(
- "OS-INHERIT/projects/%s/users/%s/roles/%s/inherited_to_projects"
- % (project_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def check_user_has_flag_on_inherited_to_project(
- self, project_id, user_id, role_id):
- """Checks whether a user has a role assignment"""
- """with the inherited_to_projects flag on a project."""
- resp, body = self.head(
- "OS-INHERIT/projects/%s/users/%s/roles/%s/inherited_to_projects"
- % (project_id, user_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def assign_inherited_role_on_projects_group(
- self, project_id, group_id, role_id):
- """Assigns a role to a group on projects in a subtree."""
- resp, body = self.put(
- "OS-INHERIT/projects/%s/groups/%s/roles/%s/inherited_to_projects"
- % (project_id, group_id, role_id), None)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def revoke_inherited_role_from_group_on_project(
- self, project_id, group_id, role_id):
- """Revokes an inherited role from a group on a project."""
- resp, body = self.delete(
- "OS-INHERIT/projects/%s/groups/%s/roles/%s/inherited_to_projects"
- % (project_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def check_group_has_flag_on_inherited_to_project(
- self, project_id, group_id, role_id):
- """Checks whether a group has a role assignment"""
- """with the inherited_to_projects flag on a project."""
- resp, body = self.head(
- "OS-INHERIT/projects/%s/groups/%s/roles/%s/inherited_to_projects"
- % (project_id, group_id, role_id))
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
diff --git a/tempest/services/identity/v3/json/users_clients.py b/tempest/services/identity/v3/json/users_clients.py
deleted file mode 100644
index 73bd343..0000000
--- a/tempest/services/identity/v3/json/users_clients.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright 2016 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from oslo_serialization import jsonutils as json
-from six.moves.urllib import parse as urllib
-
-from tempest.lib.common import rest_client
-
-
-class UsersClient(rest_client.RestClient):
- api_version = "v3"
-
- def create_user(self, user_name, password=None, project_id=None,
- email=None, domain_id='default', **kwargs):
- """Creates a user."""
- en = kwargs.get('enabled', True)
- description = kwargs.get('description', None)
- default_project_id = kwargs.get('default_project_id')
- post_body = {
- 'project_id': project_id,
- 'default_project_id': default_project_id,
- 'description': description,
- 'domain_id': domain_id,
- 'email': email,
- 'enabled': en,
- 'name': user_name,
- 'password': password
- }
- post_body = json.dumps({'user': post_body})
- resp, body = self.post('users', post_body)
- self.expected_success(201, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def update_user(self, user_id, name, **kwargs):
- """Updates a user.
-
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#updateUser
- """
- body = self.show_user(user_id)['user']
- email = kwargs.get('email', body['email'])
- en = kwargs.get('enabled', body['enabled'])
- project_id = kwargs.get('project_id', body['project_id'])
- if 'default_project_id' in body.keys():
- default_project_id = kwargs.get('default_project_id',
- body['default_project_id'])
- else:
- default_project_id = kwargs.get('default_project_id')
- description = kwargs.get('description', body['description'])
- domain_id = kwargs.get('domain_id', body['domain_id'])
- post_body = {
- 'name': name,
- 'email': email,
- 'enabled': en,
- 'project_id': project_id,
- 'default_project_id': default_project_id,
- 'id': user_id,
- 'domain_id': domain_id,
- 'description': description
- }
- post_body = json.dumps({'user': post_body})
- resp, body = self.patch('users/%s' % user_id, post_body)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def update_user_password(self, user_id, **kwargs):
- """Update a user password
-
- Available params: see http://developer.openstack.org/
- api-ref-identity-v3.html#changeUserPassword
- """
- update_user = json.dumps({'user': kwargs})
- resp, _ = self.post('users/%s/password' % user_id, update_user)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp)
-
- def list_user_projects(self, user_id):
- """Lists the projects on which a user has roles assigned."""
- resp, body = self.get('users/%s/projects' % user_id)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_users(self, params=None):
- """Get the list of users."""
- url = 'users'
- if params:
- url += '?%s' % urllib.urlencode(params)
- resp, body = self.get(url)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def show_user(self, user_id):
- """GET a user."""
- resp, body = self.get("users/%s" % user_id)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def delete_user(self, user_id):
- """Deletes a User."""
- resp, body = self.delete("users/%s" % user_id)
- self.expected_success(204, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def list_user_groups(self, user_id):
- """Lists groups which a user belongs to."""
- resp, body = self.get('users/%s/groups' % user_id)
- self.expected_success(200, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
diff --git a/tempest/services/object_storage/container_client.py b/tempest/services/object_storage/container_client.py
index 5a26bfc..2509156 100644
--- a/tempest/services/object_storage/container_client.py
+++ b/tempest/services/object_storage/container_client.py
@@ -96,28 +96,6 @@
self.expected_success(204, resp.status)
return resp, body
- def list_all_container_objects(self, container, params=None):
- """Returns complete list of all objects in the container
-
- even if item count is beyond 10,000 item listing limit.
- Does not require any parameters aside from container name.
- """
- # TODO(dwalleck): Rewrite using json format to avoid newlines at end of
- # obj names. Set limit to API limit - 1 (max returned items = 9999)
- limit = 9999
- if params is not None:
- if 'limit' in params:
- limit = params['limit']
-
- if 'marker' in params:
- limit = params['marker']
-
- resp, objlist = self.list_container_contents(
- container,
- params={'limit': limit, 'format': 'json'})
- self.expected_success(200, resp.status)
- return objlist
-
def list_container_contents(self, container, params=None):
"""List the objects in a container, given the container name
diff --git a/tempest/services/object_storage/object_client.py b/tempest/services/object_storage/object_client.py
index 33dba6e..9445e34 100644
--- a/tempest/services/object_storage/object_client.py
+++ b/tempest/services/object_storage/object_client.py
@@ -42,12 +42,6 @@
self.expected_success(201, resp.status)
return resp, body
- def update_object(self, container, object_name, data):
- """Upload data to replace current storage object."""
- resp, body = self.create_object(container, object_name, data)
- self.expected_success(201, resp.status)
- return resp, body
-
def delete_object(self, container, object_name, params=None):
"""Delete storage object."""
url = "%s/%s" % (str(container), str(object_name))
@@ -170,7 +164,7 @@
chunked=True
)
- self._error_checker('PUT', None, headers, contents, resp, body)
+ self._error_checker(resp, body)
self.expected_success(201, resp.status)
return resp.status, resp.reason, resp
@@ -201,7 +195,6 @@
# Read the 100 status prior to sending the data
response = conn.response_class(conn.sock,
- strict=conn.strict,
method=conn._method)
_, status, _ = response._read_status()
@@ -237,37 +230,3 @@
conn = httplib.HTTPConnection(parsed_url.netloc)
return conn
-
-
-def put_object_connection(base_url, container, name, contents=None,
- chunk_size=65536, headers=None, query_string=None):
- """Helper function to make connection to put object with httplib
-
- :param base_url: base_url of an object client
- :param container: container name that the object is in
- :param name: object name to put
- :param contents: a string or a file like object to read object data
- from; if None, a zero-byte put will be done
- :param chunk_size: chunk size of data to write; it defaults to 65536;
- used only if the contents object has a 'read'
- method, eg. file-like objects, ignored otherwise
- :param headers: additional headers to include in the request, if any
- :param query_string: if set will be appended with '?' to generated path
- """
- parsed = urlparse.urlparse(base_url)
-
- path = str(parsed.path) + "/"
- path += "%s/%s" % (str(container), str(name))
-
- conn = create_connection(parsed)
-
- if query_string:
- path += '?' + query_string
- if headers:
- headers = dict(headers)
- else:
- headers = {}
-
- conn.request('PUT', path, contents, headers)
-
- return conn
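
The removed ``put_object_connection`` helper simply opened a raw connection and issued a PUT against ``container/name``. A Python 3 standard-library sketch that mirrors it (chunked uploads and the six/py2 compatibility layer are left out)::

    import http.client
    from urllib import parse as urlparse


    def put_object_connection(base_url, container, name, contents=None,
                              headers=None, query_string=None):
        """Open an HTTP(S) connection and PUT an object.

        Returns the connection so the caller can read the response.
        """
        parsed = urlparse.urlparse(base_url)
        path = "%s/%s/%s" % (parsed.path, container, name)
        if query_string:
            path += "?" + query_string
        conn_cls = (http.client.HTTPSConnection if parsed.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parsed.netloc)
        conn.request("PUT", path, contents, headers or {})
        return conn
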
diff --git a/tempest/services/volume/base/admin/__init__.py b/tempest/services/volume/base/admin/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/services/volume/base/admin/__init__.py
+++ /dev/null
diff --git a/tempest/services/volume/v1/__init__.py b/tempest/services/volume/v1/__init__.py
index 6bdb8c4..7fb3ed3 100644
--- a/tempest/services/volume/v1/__init__.py
+++ b/tempest/services/volume/v1/__init__.py
@@ -12,19 +12,22 @@
# License for the specific language governing permissions and limitations under
# the License.
-from tempest.services.volume.v1.json.admin.hosts_client import HostsClient
-from tempest.services.volume.v1.json.admin.quotas_client import QuotasClient
-from tempest.services.volume.v1.json.admin.services_client import \
- ServicesClient
-from tempest.services.volume.v1.json.admin.types_client import TypesClient
-from tempest.services.volume.v1.json.availability_zone_client import \
+from tempest.lib.services.volume.v1.availability_zone_client import \
AvailabilityZoneClient
-from tempest.services.volume.v1.json.backups_client import BackupsClient
-from tempest.services.volume.v1.json.extensions_client import ExtensionsClient
-from tempest.services.volume.v1.json.qos_client import QosSpecsClient
-from tempest.services.volume.v1.json.snapshots_client import SnapshotsClient
-from tempest.services.volume.v1.json.volumes_client import VolumesClient
+from tempest.lib.services.volume.v1.backups_client import BackupsClient
+from tempest.lib.services.volume.v1.encryption_types_client import \
+ EncryptionTypesClient
+from tempest.lib.services.volume.v1.extensions_client import ExtensionsClient
+from tempest.lib.services.volume.v1.hosts_client import HostsClient
+from tempest.lib.services.volume.v1.qos_client import QosSpecsClient
+from tempest.lib.services.volume.v1.quotas_client import QuotasClient
+from tempest.lib.services.volume.v1.services_client import ServicesClient
+from tempest.lib.services.volume.v1.snapshots_client import SnapshotsClient
+from tempest.lib.services.volume.v1.types_client import TypesClient
+from tempest.lib.services.volume.v1.volumes_client import VolumesClient
-__all__ = ['HostsClient', 'QuotasClient', 'ServicesClient', 'TypesClient',
- 'AvailabilityZoneClient', 'BackupsClient', 'ExtensionsClient',
- 'QosSpecsClient', 'SnapshotsClient', 'VolumesClient']
+__all__ = ['AvailabilityZoneClient', 'EncryptionTypesClient',
+ 'ExtensionsClient', 'HostsClient', 'QuotasClient',
+ 'QosSpecsClient', 'ServicesClient',
+ 'SnapshotsClient', 'TypesClient', 'BackupsClient',
+ 'VolumesClient', ]
diff --git a/tempest/services/volume/v1/json/admin/__init__.py b/tempest/services/volume/v1/json/admin/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/services/volume/v1/json/admin/__init__.py
+++ /dev/null
diff --git a/tempest/services/volume/v1/json/admin/hosts_client.py b/tempest/services/volume/v1/json/admin/hosts_client.py
deleted file mode 100644
index 3b52968..0000000
--- a/tempest/services/volume/v1/json/admin/hosts_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_hosts_client
-
-
-class HostsClient(base_hosts_client.BaseHostsClient):
- """Client class to send CRUD Volume Host API V1 requests"""
diff --git a/tempest/services/volume/v1/json/admin/quotas_client.py b/tempest/services/volume/v1/json/admin/quotas_client.py
deleted file mode 100644
index 27fc301..0000000
--- a/tempest/services/volume/v1/json/admin/quotas_client.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (C) 2014 eNovance SAS <licensing@enovance.com>
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_quotas_client
-
-
-class QuotasClient(base_quotas_client.BaseQuotasClient):
- """Client class to send CRUD Volume Type API V1 requests"""
diff --git a/tempest/services/volume/v1/json/admin/services_client.py b/tempest/services/volume/v1/json/admin/services_client.py
deleted file mode 100644
index 2bffd55..0000000
--- a/tempest/services/volume/v1/json/admin/services_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2014 NEC Corporation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_services_client
-
-
-class ServicesClient(base_services_client.BaseServicesClient):
- """Volume V1 volume services client"""
diff --git a/tempest/services/volume/v1/json/admin/types_client.py b/tempest/services/volume/v1/json/admin/types_client.py
deleted file mode 100644
index 0e84296..0000000
--- a/tempest/services/volume/v1/json/admin/types_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_types_client
-
-
-class TypesClient(base_types_client.BaseTypesClient):
- """Volume V1 Volume Types client"""
diff --git a/tempest/services/volume/v1/json/availability_zone_client.py b/tempest/services/volume/v1/json/availability_zone_client.py
deleted file mode 100644
index 3a27027..0000000
--- a/tempest/services/volume/v1/json/availability_zone_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2014 NEC Corporation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_availability_zone_client
-
-
-class AvailabilityZoneClient(
- base_availability_zone_client.BaseAvailabilityZoneClient):
- """Volume V1 availability zone client."""
diff --git a/tempest/services/volume/v1/json/backups_client.py b/tempest/services/volume/v1/json/backups_client.py
deleted file mode 100644
index ac6db6a..0000000
--- a/tempest/services/volume/v1/json/backups_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2014 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_backups_client
-
-
-class BackupsClient(base_backups_client.BaseBackupsClient):
- """Volume V1 Backups client"""
diff --git a/tempest/services/volume/v1/json/extensions_client.py b/tempest/services/volume/v1/json/extensions_client.py
deleted file mode 100644
index f99d0f5..0000000
--- a/tempest/services/volume/v1/json/extensions_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_extensions_client
-
-
-class ExtensionsClient(base_extensions_client.BaseExtensionsClient):
- """Volume V1 extensions client."""
diff --git a/tempest/services/volume/v1/json/qos_client.py b/tempest/services/volume/v1/json/qos_client.py
deleted file mode 100644
index b2b2195..0000000
--- a/tempest/services/volume/v1/json/qos_client.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_qos_client
-
-
-class QosSpecsClient(base_qos_client.BaseQosSpecsClient):
- """Volume V1 QoS client."""
diff --git a/tempest/services/volume/v1/json/snapshots_client.py b/tempest/services/volume/v1/json/snapshots_client.py
deleted file mode 100644
index b039c2b..0000000
--- a/tempest/services/volume/v1/json/snapshots_client.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_snapshots_client
-
-
-class SnapshotsClient(base_snapshots_client.BaseSnapshotsClient):
- """Client class to send CRUD Volume V1 API requests."""
diff --git a/tempest/services/volume/v1/json/volumes_client.py b/tempest/services/volume/v1/json/volumes_client.py
deleted file mode 100644
index 7782043..0000000
--- a/tempest/services/volume/v1/json/volumes_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_volumes_client
-
-
-class VolumesClient(base_volumes_client.BaseVolumesClient):
- """Client class to send CRUD Volume V1 API requests"""
diff --git a/tempest/services/volume/v2/__init__.py b/tempest/services/volume/v2/__init__.py
index c75b0e5..8edaf2a 100644
--- a/tempest/services/volume/v2/__init__.py
+++ b/tempest/services/volume/v2/__init__.py
@@ -12,19 +12,21 @@
# License for the specific language governing permissions and limitations under
# the License.
-from tempest.services.volume.v2.json.admin.hosts_client import HostsClient
-from tempest.services.volume.v2.json.admin.quotas_client import QuotasClient
-from tempest.services.volume.v2.json.admin.services_client import \
- ServicesClient
-from tempest.services.volume.v2.json.admin.types_client import TypesClient
-from tempest.services.volume.v2.json.availability_zone_client import \
+from tempest.lib.services.volume.v2.availability_zone_client import \
AvailabilityZoneClient
-from tempest.services.volume.v2.json.backups_client import BackupsClient
-from tempest.services.volume.v2.json.extensions_client import ExtensionsClient
-from tempest.services.volume.v2.json.qos_client import QosSpecsClient
-from tempest.services.volume.v2.json.snapshots_client import SnapshotsClient
-from tempest.services.volume.v2.json.volumes_client import VolumesClient
+from tempest.lib.services.volume.v2.backups_client import BackupsClient
+from tempest.lib.services.volume.v2.encryption_types_client import \
+ EncryptionTypesClient
+from tempest.lib.services.volume.v2.extensions_client import ExtensionsClient
+from tempest.lib.services.volume.v2.hosts_client import HostsClient
+from tempest.lib.services.volume.v2.qos_client import QosSpecsClient
+from tempest.lib.services.volume.v2.quotas_client import QuotasClient
+from tempest.lib.services.volume.v2.services_client import ServicesClient
+from tempest.lib.services.volume.v2.snapshots_client import SnapshotsClient
+from tempest.lib.services.volume.v2.types_client import TypesClient
+from tempest.lib.services.volume.v2.volumes_client import VolumesClient
-__all__ = ['HostsClient', 'QuotasClient', 'ServicesClient', 'TypesClient',
- 'AvailabilityZoneClient', 'BackupsClient', 'ExtensionsClient',
- 'QosSpecsClient', 'SnapshotsClient', 'VolumesClient']
+__all__ = ['AvailabilityZoneClient', 'BackupsClient', 'EncryptionTypesClient',
+ 'ExtensionsClient', 'HostsClient', 'QosSpecsClient', 'QuotasClient',
+ 'ServicesClient', 'SnapshotsClient', 'TypesClient',
+ 'VolumesClient', ]
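The hunk above re-points the volume v2 client aliases at the stable tempest.lib implementations, so the old and new import paths resolve to the same classes. A minimal sketch of what that means for a consumer (assuming tempest is installed; the module and class names are taken directly from the hunk above)::

    # Hypothetical check: the alias re-exported by tempest.services.volume.v2
    # is the very same class object as the tempest.lib implementation.
    from tempest.lib.services.volume.v2.volumes_client import VolumesClient
    from tempest.services.volume import v2

    assert v2.VolumesClient is VolumesClient
    print(sorted(v2.__all__))  # now also lists the EncryptionTypesClient entry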
diff --git a/tempest/services/volume/v2/json/admin/__init__.py b/tempest/services/volume/v2/json/admin/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/services/volume/v2/json/admin/__init__.py
+++ /dev/null
diff --git a/tempest/services/volume/v2/json/admin/quotas_client.py b/tempest/services/volume/v2/json/admin/quotas_client.py
deleted file mode 100644
index 11e0e22..0000000
--- a/tempest/services/volume/v2/json/admin/quotas_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2014 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_quotas_client
-
-
-class QuotasClient(base_quotas_client.BaseQuotasClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/admin/services_client.py b/tempest/services/volume/v2/json/admin/services_client.py
deleted file mode 100644
index db19ba9..0000000
--- a/tempest/services/volume/v2/json/admin/services_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2014 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_services_client
-
-
-class ServicesClient(base_services_client.BaseServicesClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/admin/types_client.py b/tempest/services/volume/v2/json/admin/types_client.py
deleted file mode 100644
index ecf5131..0000000
--- a/tempest/services/volume/v2/json/admin/types_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base.admin import base_types_client
-
-
-class TypesClient(base_types_client.BaseTypesClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/availability_zone_client.py b/tempest/services/volume/v2/json/availability_zone_client.py
deleted file mode 100644
index 905ebdc..0000000
--- a/tempest/services/volume/v2/json/availability_zone_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2014 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_availability_zone_client
-
-
-class AvailabilityZoneClient(
- base_availability_zone_client.BaseAvailabilityZoneClient):
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/backups_client.py b/tempest/services/volume/v2/json/backups_client.py
deleted file mode 100644
index 78bab82..0000000
--- a/tempest/services/volume/v2/json/backups_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2014 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_backups_client
-
-
-class BackupsClient(base_backups_client.BaseBackupsClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/extensions_client.py b/tempest/services/volume/v2/json/extensions_client.py
deleted file mode 100644
index 245906f..0000000
--- a/tempest/services/volume/v2/json/extensions_client.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright 2014 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_extensions_client
-
-
-class ExtensionsClient(base_extensions_client.BaseExtensionsClient):
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/qos_client.py b/tempest/services/volume/v2/json/qos_client.py
deleted file mode 100644
index 3c0f74f..0000000
--- a/tempest/services/volume/v2/json/qos_client.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_qos_client
-
-
-class QosSpecsClient(base_qos_client.BaseQosSpecsClient):
- api_version = "v2"
diff --git a/tempest/services/volume/v2/json/snapshots_client.py b/tempest/services/volume/v2/json/snapshots_client.py
deleted file mode 100644
index a2d415f..0000000
--- a/tempest/services/volume/v2/json/snapshots_client.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_snapshots_client
-
-
-class SnapshotsClient(base_snapshots_client.BaseSnapshotsClient):
- """Client class to send CRUD Volume V2 API requests."""
- api_version = "v2"
- create_resp = 202
diff --git a/tempest/services/volume/v2/json/volumes_client.py b/tempest/services/volume/v2/json/volumes_client.py
deleted file mode 100644
index b7d9dfb..0000000
--- a/tempest/services/volume/v2/json/volumes_client.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_volumes_client
-
-
-class VolumesClient(base_volumes_client.BaseVolumesClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
- create_resp = 202
diff --git a/tempest/stress/README.rst b/tempest/stress/README.rst
deleted file mode 100644
index 33842fd..0000000
--- a/tempest/stress/README.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. _stress_field_guide:
-
-Tempest Field Guide to Stress Tests
-===================================
-
-OpenStack is a distributed, asynchronous system that is prone to race condition
-bugs. These bugs will not be easily found during
-functional testing but will be encountered by users in large deployments in a
-way that is hard to debug. The stress test tries to cause these bugs to happen
-in a more controlled environment.
-
-
-Environment
------------
-This particular framework assumes your working Nova cluster understands Nova
-API 2.0. The stress tests can read the logs from the cluster. To enable this
-you have to provide the hostname to call 'nova-manage' and
-the private key and user name for ssh to the cluster in the
-[stress] section of tempest.conf. You also need to provide the
-location of the log files:
-
- target_logfiles = "regexp to all log files to be checked for errors"
- target_private_key_path = "private ssh key for controller and log file nodes"
- target_ssh_user = "username for controller and log file nodes"
- target_controller = "hostname or ip of controller node (for nova-manage)"
- log_check_interval = "time between checking logs for errors (default 60s)"
-
-To activate logging on your console please make sure that you activate `use_stderr`
-in tempest.conf or use the default `logging.conf.sample` file.
-
-Running default stress test set
--------------------------------
-
-The stress test framework can automatically discover tests inside the tempest
-test suite. All tests flagged with the `@stresstest` decorator will be executed.
-In order to use this discovery you have to install the tempest CLI, be in the
-tempest root directory and execute the following:
-
- tempest run-stress -a -d 30
-
-Running the sample test
------------------------
-
-To test installation, do the following:
-
- tempest run-stress -t tempest/stress/etc/server-create-destroy-test.json -d 30
-
-This sample test tries to create a few VMs and then delete them.
-
-
-Additional Tools
-----------------
-
-Sometimes the tests don't finish, or there are failures. In these
-cases, you may want to clean out the nova cluster. We have provided
-some scripts to do this in the ``tools`` subdirectory.
-You can use the following script to destroy any keypairs,
-floating ips, and servers:
-
-tempest/stress/tools/cleanup.py
diff --git a/tempest/stress/__init__.py b/tempest/stress/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/stress/__init__.py
+++ /dev/null
diff --git a/tempest/stress/actions/__init__.py b/tempest/stress/actions/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/stress/actions/__init__.py
+++ /dev/null
diff --git a/tempest/stress/actions/server_create_destroy.py b/tempest/stress/actions/server_create_destroy.py
deleted file mode 100644
index 44b6f62..0000000
--- a/tempest/stress/actions/server_create_destroy.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Copyright 2013 Quanta Research Cambridge, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from tempest.common.utils import data_utils
-from tempest.common import waiters
-from tempest import config
-import tempest.stress.stressaction as stressaction
-
-CONF = config.CONF
-
-
-class ServerCreateDestroyTest(stressaction.StressAction):
-
- def setUp(self, **kwargs):
- self.image = CONF.compute.image_ref
- self.flavor = CONF.compute.flavor_ref
-
- def run(self):
- name = data_utils.rand_name("instance")
- self.logger.info("creating %s" % name)
- server = self.manager.servers_client.create_server(
- name=name, imageRef=self.image, flavorRef=self.flavor)['server']
- server_id = server['id']
- waiters.wait_for_server_status(self.manager.servers_client, server_id,
- 'ACTIVE')
- self.logger.info("created %s" % server_id)
- self.logger.info("deleting %s" % name)
- self.manager.servers_client.delete_server(server_id)
- waiters.wait_for_server_termination(self.manager.servers_client,
- server_id)
- self.logger.info("deleted %s" % server_id)
diff --git a/tempest/stress/actions/ssh_floating.py b/tempest/stress/actions/ssh_floating.py
deleted file mode 100644
index 4f8c6bd..0000000
--- a/tempest/stress/actions/ssh_floating.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import socket
-import subprocess
-
-from tempest.common.utils import data_utils
-from tempest.common import waiters
-from tempest import config
-import tempest.stress.stressaction as stressaction
-import tempest.test
-
-CONF = config.CONF
-
-
-class FloatingStress(stressaction.StressAction):
-
- # from the scenario manager
- def ping_ip_address(self, ip_address):
- cmd = ['ping', '-c1', '-w1', ip_address]
-
- proc = subprocess.Popen(cmd,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- proc.communicate()
- success = proc.returncode == 0
- return success
-
- def tcp_connect_scan(self, addr, port):
- # like tcp
- s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- try:
- s.connect((addr, port))
- except socket.error as exc:
- self.logger.info("%s(%s): %s", self.server_id, self.floating['ip'],
- str(exc))
- return False
- self.logger.info("%s(%s): Connected :)", self.server_id,
- self.floating['ip'])
- s.close()
- return True
-
- def check_port_ssh(self):
- def func():
- return self.tcp_connect_scan(self.floating['ip'], 22)
- if not tempest.test.call_until_true(func, self.check_timeout,
- self.check_interval):
- raise RuntimeError("Cannot connect to the ssh port.")
-
- def check_icmp_echo(self):
- self.logger.info("%s(%s): Pinging..",
- self.server_id, self.floating['ip'])
-
- def func():
- return self.ping_ip_address(self.floating['ip'])
- if not tempest.test.call_until_true(func, self.check_timeout,
- self.check_interval):
- raise RuntimeError("%s(%s): Cannot ping the machine.",
- self.server_id, self.floating['ip'])
- self.logger.info("%s(%s): pong :)",
- self.server_id, self.floating['ip'])
-
- def _create_vm(self):
- self.name = name = data_utils.rand_name("instance")
- servers_client = self.manager.servers_client
- self.logger.info("creating %s" % name)
- vm_args = self.vm_extra_args.copy()
- vm_args['security_groups'] = [self.sec_grp]
- server = servers_client.create_server(name=name, imageRef=self.image,
- flavorRef=self.flavor,
- **vm_args)['server']
- self.server_id = server['id']
- if self.wait_after_vm_create:
- waiters.wait_for_server_status(self.manager.servers_client,
- self.server_id, 'ACTIVE')
-
- def _destroy_vm(self):
- self.logger.info("deleting %s" % self.server_id)
- self.manager.servers_client.delete_server(self.server_id)
- waiters.wait_for_server_termination(self.manager.servers_client,
- self.server_id)
- self.logger.info("deleted %s" % self.server_id)
-
- def _create_sec_group(self):
- sec_grp_cli = self.manager.compute_security_groups_client
- s_name = data_utils.rand_name('sec_grp')
- s_description = data_utils.rand_name('desc')
- self.sec_grp = sec_grp_cli.create_security_group(
- name=s_name, description=s_description)['security_group']
- create_rule = sec_grp_cli.create_security_group_rule
- create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
- from_port=22, to_port=22)
- create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
- from_port=-1, to_port=-1)
-
- def _destroy_sec_grp(self):
- sec_grp_cli = self.manager.compute_security_groups_client
- sec_grp_cli.delete_security_group(self.sec_grp['id'])
-
- def _create_floating_ip(self):
- floating_cli = self.manager.compute_floating_ips_client
- self.floating = (floating_cli.create_floating_ip(self.floating_pool)
- ['floating_ip'])
-
- def _destroy_floating_ip(self):
- cli = self.manager.compute_floating_ips_client
- cli.delete_floating_ip(self.floating['id'])
- cli.wait_for_resource_deletion(self.floating['id'])
- self.logger.info("Deleted Floating IP %s", str(self.floating['ip']))
-
- def setUp(self, **kwargs):
- self.image = CONF.compute.image_ref
- self.flavor = CONF.compute.flavor_ref
- self.vm_extra_args = kwargs.get('vm_extra_args', {})
- self.wait_after_vm_create = kwargs.get('wait_after_vm_create',
- True)
- self.new_vm = kwargs.get('new_vm', False)
- self.new_sec_grp = kwargs.get('new_sec_group', False)
- self.new_floating = kwargs.get('new_floating', False)
- self.reboot = kwargs.get('reboot', False)
- self.floating_pool = kwargs.get('floating_pool', None)
- self.verify = kwargs.get('verify', ('check_port_ssh',
- 'check_icmp_echo'))
- self.check_timeout = kwargs.get('check_timeout', 120)
- self.check_interval = kwargs.get('check_interval', 1)
- self.wait_for_disassociate = kwargs.get('wait_for_disassociate',
- True)
-
- # allocate floating
- if not self.new_floating:
- self._create_floating_ip()
- # add security group
- if not self.new_sec_grp:
- self._create_sec_group()
- # create vm
- if not self.new_vm:
- self._create_vm()
-
- def wait_disassociate(self):
- cli = self.manager.compute_floating_ips_client
-
- def func():
- floating = (cli.show_floating_ip(self.floating['id'])
- ['floating_ip'])
- return floating['instance_id'] is None
-
- if not tempest.test.call_until_true(func, self.check_timeout,
- self.check_interval):
- raise RuntimeError("IP disassociate timeout!")
-
- def run_core(self):
- cli = self.manager.compute_floating_ips_client
- cli.associate_floating_ip_to_server(self.floating['ip'],
- self.server_id)
- for method in self.verify:
- m = getattr(self, method)
- m()
- cli.disassociate_floating_ip_from_server(self.floating['ip'],
- self.server_id)
- if self.wait_for_disassociate:
- self.wait_disassociate()
-
- def run(self):
- if self.new_sec_grp:
- self._create_sec_group()
- if self.new_floating:
- self._create_floating_ip()
- if self.new_vm:
- self._create_vm()
- if self.reboot:
- self.manager.servers_client.reboot(self.server_id, 'HARD')
- waiters.wait_for_server_status(self.manager.servers_client,
- self.server_id, 'ACTIVE')
-
- self.run_core()
-
- if self.new_vm:
- self._destroy_vm()
- if self.new_floating:
- self._destroy_floating_ip()
- if self.new_sec_grp:
- self._destroy_sec_grp()
-
- def tearDown(self):
- if not self.new_vm:
- self._destroy_vm()
- if not self.new_floating:
- self._destroy_floating_ip()
- if not self.new_sec_grp:
- self._destroy_sec_grp()
diff --git a/tempest/stress/actions/unit_test.py b/tempest/stress/actions/unit_test.py
deleted file mode 100644
index e016c61..0000000
--- a/tempest/stress/actions/unit_test.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_log import log as logging
-from oslo_utils import importutils
-
-from tempest import config
-import tempest.stress.stressaction as stressaction
-
-CONF = config.CONF
-
-
-class SetUpClassRunTime(object):
-
- process = 'process'
- action = 'action'
- application = 'application'
-
- allowed = set((process, action, application))
-
- @classmethod
- def validate(cls, name):
- if name not in cls.allowed:
- raise KeyError("\'%s\' not a valid option" % name)
-
-
-class UnitTest(stressaction.StressAction):
- """This is a special action for running existing unittests as stress test.
-
- You need to pass ``test_method`` and ``class_setup_per``
- using ``kwargs`` in the JSON descriptor;
- ``test_method`` should be the fully qualified name of a unittest,
- ``class_setup_per`` should be one of:
- ``application``: once in the stress job lifetime
- ``process``: once in the worker process lifetime
- ``action``: on each action
- Not all combinations work in every case.
- """
-
- def setUp(self, **kwargs):
- method = kwargs['test_method'].split('.')
- self.test_method = method.pop()
- self.klass = importutils.import_class('.'.join(method))
- self.logger = logging.getLogger('.'.join(method))
- # valid options are 'process', 'application' , 'action'
- self.class_setup_per = kwargs.get('class_setup_per',
- SetUpClassRunTime.process)
- SetUpClassRunTime.validate(self.class_setup_per)
-
- if self.class_setup_per == SetUpClassRunTime.application:
- self.klass.setUpClass()
- self.setupclass_called = False
-
- @property
- def action(self):
- if self.test_method:
- return self.test_method
- return super(UnitTest, self).action
-
- def run_core(self):
- res = self.klass(self.test_method).run()
- if res.errors:
- raise RuntimeError(res.errors)
-
- def run(self):
- if self.class_setup_per != SetUpClassRunTime.application:
- if (self.class_setup_per == SetUpClassRunTime.action
- or self.setupclass_called is False):
- self.klass.setUpClass()
- self.setupclass_called = True
-
- try:
- self.run_core()
- finally:
- if (CONF.stress.leave_dirty_stack is False
- and self.class_setup_per == SetUpClassRunTime.action):
- self.klass.tearDownClass()
- else:
- self.run_core()
-
- def tearDown(self):
- if self.class_setup_per != SetUpClassRunTime.action:
- self.klass.tearDownClass()
diff --git a/tempest/stress/actions/volume_attach_delete.py b/tempest/stress/actions/volume_attach_delete.py
deleted file mode 100644
index 847f342..0000000
--- a/tempest/stress/actions/volume_attach_delete.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# (c) 2013 Deutsche Telekom AG
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from tempest.common.utils import data_utils
-from tempest.common import waiters
-from tempest import config
-import tempest.stress.stressaction as stressaction
-
-CONF = config.CONF
-
-
-class VolumeAttachDeleteTest(stressaction.StressAction):
-
- def setUp(self, **kwargs):
- self.image = CONF.compute.image_ref
- self.flavor = CONF.compute.flavor_ref
-
- def run(self):
- # Step 1: create volume
- name = data_utils.rand_name("volume")
- self.logger.info("creating volume: %s" % name)
- volume = self.manager.volumes_client.create_volume(
- display_name=name)['volume']
- self.manager.volumes_client.wait_for_volume_status(volume['id'],
- 'available')
- self.logger.info("created volume: %s" % volume['id'])
-
- # Step 2: create vm instance
- vm_name = data_utils.rand_name("instance")
- self.logger.info("creating vm: %s" % vm_name)
- server = self.manager.servers_client.create_server(
- name=vm_name, imageRef=self.image, flavorRef=self.flavor)['server']
- server_id = server['id']
- waiters.wait_for_server_status(self.manager.servers_client, server_id,
- 'ACTIVE')
- self.logger.info("created vm %s" % server_id)
-
- # Step 3: attach volume to vm
- self.logger.info("attach volume (%s) to vm %s" %
- (volume['id'], server_id))
- self.manager.servers_client.attach_volume(server_id,
- volumeId=volume['id'],
- device='/dev/vdc')
- self.manager.volumes_client.wait_for_volume_status(volume['id'],
- 'in-use')
- self.logger.info("volume (%s) attached to vm %s" %
- (volume['id'], server_id))
-
- # Step 4: delete vm
- self.logger.info("deleting vm: %s" % vm_name)
- self.manager.servers_client.delete_server(server_id)
- waiters.wait_for_server_termination(self.manager.servers_client,
- server_id)
- self.logger.info("deleted vm: %s" % server_id)
-
- # Step 5: delete volume
- self.logger.info("deleting volume: %s" % volume['id'])
- self.manager.volumes_client.delete_volume(volume['id'])
- self.manager.volumes_client.wait_for_resource_deletion(volume['id'])
- self.logger.info("deleted volume: %s" % volume['id'])
diff --git a/tempest/stress/actions/volume_attach_verify.py b/tempest/stress/actions/volume_attach_verify.py
deleted file mode 100644
index 8bbbfc4..0000000
--- a/tempest/stress/actions/volume_attach_verify.py
+++ /dev/null
@@ -1,232 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import re
-
-from tempest.common.utils import data_utils
-from tempest.common.utils.linux import remote_client
-from tempest.common import waiters
-from tempest import config
-import tempest.stress.stressaction as stressaction
-import tempest.test
-
-CONF = config.CONF
-
-
-class VolumeVerifyStress(stressaction.StressAction):
-
- def _create_keypair(self):
- keyname = data_utils.rand_name("key")
- self.key = (self.manager.keypairs_client.create_keypair(name=keyname)
- ['keypair'])
-
- def _delete_keypair(self):
- self.manager.keypairs_client.delete_keypair(self.key['name'])
-
- def _create_vm(self):
- self.name = name = data_utils.rand_name("instance")
- servers_client = self.manager.servers_client
- self.logger.info("creating %s" % name)
- vm_args = self.vm_extra_args.copy()
- vm_args['security_groups'] = [self.sec_grp]
- vm_args['key_name'] = self.key['name']
- server = servers_client.create_server(name=name, imageRef=self.image,
- flavorRef=self.flavor,
- **vm_args)['server']
- self.server_id = server['id']
- waiters.wait_for_server_status(self.manager.servers_client,
- self.server_id, 'ACTIVE')
-
- def _destroy_vm(self):
- self.logger.info("deleting server: %s" % self.server_id)
- self.manager.servers_client.delete_server(self.server_id)
- waiters.wait_for_server_termination(self.manager.servers_client,
- self.server_id)
- self.logger.info("deleted server: %s" % self.server_id)
-
- def _create_sec_group(self):
- sec_grp_cli = self.manager.compute_security_groups_client
- s_name = data_utils.rand_name('sec_grp')
- s_description = data_utils.rand_name('desc')
- self.sec_grp = sec_grp_cli.create_security_group(
- name=s_name, description=s_description)['security_group']
- create_rule = sec_grp_cli.create_security_group_rule
- create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
- from_port=22, to_port=22)
- create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
- from_port=-1, to_port=-1)
-
- def _destroy_sec_grp(self):
- sec_grp_cli = self.manager.compute_security_groups_client
- sec_grp_cli.delete_security_group(self.sec_grp['id'])
-
- def _create_floating_ip(self):
- floating_cli = self.manager.compute_floating_ips_client
- self.floating = (floating_cli.create_floating_ip(self.floating_pool)
- ['floating_ip'])
-
- def _destroy_floating_ip(self):
- cli = self.manager.compute_floating_ips_client
- cli.delete_floating_ip(self.floating['id'])
- cli.wait_for_resource_deletion(self.floating['id'])
- self.logger.info("Deleted Floating IP %s", str(self.floating['ip']))
-
- def _create_volume(self):
- name = data_utils.rand_name("volume")
- self.logger.info("creating volume: %s" % name)
- volumes_client = self.manager.volumes_client
- self.volume = volumes_client.create_volume(
- display_name=name)['volume']
- volumes_client.wait_for_volume_status(self.volume['id'],
- 'available')
- self.logger.info("created volume: %s" % self.volume['id'])
-
- def _delete_volume(self):
- self.logger.info("deleting volume: %s" % self.volume['id'])
- volumes_client = self.manager.volumes_client
- volumes_client.delete_volume(self.volume['id'])
- volumes_client.wait_for_resource_deletion(self.volume['id'])
- self.logger.info("deleted volume: %s" % self.volume['id'])
-
- def _wait_disassociate(self):
- cli = self.manager.compute_floating_ips_client
-
- def func():
- floating = (cli.show_floating_ip(self.floating['id'])
- ['floating_ip'])
- return floating['instance_id'] is None
-
- if not tempest.test.call_until_true(func, CONF.compute.build_timeout,
- CONF.compute.build_interval):
- raise RuntimeError("IP disassociate timeout!")
-
- def new_server_ops(self):
- self._create_vm()
- cli = self.manager.compute_floating_ips_client
- cli.associate_floating_ip_to_server(self.floating['ip'],
- self.server_id)
- if self.ssh_test_before_attach and self.enable_ssh_verify:
- self.logger.info("Scanning for block devices via ssh on %s"
- % self.server_id)
- self.part_wait(self.detach_match_count)
-
- def setUp(self, **kwargs):
- """Note able configuration combinations:
-
- Closest options to the test_stamp_pattern:
- new_server = True
- new_volume = True
- enable_ssh_verify = True
- ssh_test_before_attach = False
- Just attaching:
- new_server = False
- new_volume = False
- enable_ssh_verify = True
- ssh_test_before_attach = True
- Mostly API load by repeated attachment:
- new_server = False
- new_volume = False
- enable_ssh_verify = False
- ssh_test_before_attach = False
- Minimal Nova load, but cinder load not decreased:
- new_server = False
- new_volume = True
- enable_ssh_verify = True
- ssh_test_before_attach = True
- """
- self.image = CONF.compute.image_ref
- self.flavor = CONF.compute.flavor_ref
- self.vm_extra_args = kwargs.get('vm_extra_args', {})
- self.floating_pool = kwargs.get('floating_pool', None)
- self.new_volume = kwargs.get('new_volume', True)
- self.new_server = kwargs.get('new_server', False)
- self.enable_ssh_verify = kwargs.get('enable_ssh_verify', True)
- self.ssh_test_before_attach = kwargs.get('ssh_test_before_attach',
- False)
- self.part_line_re = re.compile(kwargs.get('part_line_re', '.*vd.*'))
- self.detach_match_count = kwargs.get('detach_match_count', 1)
- self.attach_match_count = kwargs.get('attach_match_count', 2)
- self.part_name = kwargs.get('part_name', '/dev/vdc')
-
- self._create_floating_ip()
- self._create_sec_group()
- self._create_keypair()
- private_key = self.key['private_key']
- username = CONF.validation.image_ssh_user
- self.remote_client = remote_client.RemoteClient(self.floating['ip'],
- username,
- pkey=private_key)
- if not self.new_volume:
- self._create_volume()
- if not self.new_server:
- self.new_server_ops()
-
- # now we just test that the number of partitions has increased or decreased
- def part_wait(self, num_match):
- def _part_state():
- self.partitions = self.remote_client.get_partitions().split('\n')
- matching = 0
- for part_line in self.partitions[1:]:
- if self.part_line_re.match(part_line):
- matching += 1
- return matching == num_match
- if tempest.test.call_until_true(_part_state,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
- return
- else:
- raise RuntimeError("Unexpected partitions: %s",
- str(self.partitions))
-
- def run(self):
- if self.new_server:
- self.new_server_ops()
- if self.new_volume:
- self._create_volume()
- servers_client = self.manager.servers_client
- self.logger.info("attach volume (%s) to vm %s" %
- (self.volume['id'], self.server_id))
- servers_client.attach_volume(self.server_id,
- volumeId=self.volume['id'],
- device=self.part_name)
- self.manager.volumes_client.wait_for_volume_status(self.volume['id'],
- 'in-use')
- if self.enable_ssh_verify:
- self.logger.info("Scanning for new block device on %s"
- % self.server_id)
- self.part_wait(self.attach_match_count)
-
- servers_client.detach_volume(self.server_id,
- self.volume['id'])
- self.manager.volumes_client.wait_for_volume_status(self.volume['id'],
- 'available')
- if self.enable_ssh_verify:
- self.logger.info("Scanning for block device disappearance on %s"
- % self.server_id)
- self.part_wait(self.detach_match_count)
- if self.new_volume:
- self._delete_volume()
- if self.new_server:
- self._destroy_vm()
-
- def tearDown(self):
- cli = self.manager.compute_floating_ips_client
- cli.disassociate_floating_ip_from_server(self.floating['ip'],
- self.server_id)
- self._wait_disassociate()
- if not self.new_server:
- self._destroy_vm()
- self._delete_keypair()
- self._destroy_floating_ip()
- self._destroy_sec_grp()
- if not self.new_volume:
- self._delete_volume()
diff --git a/tempest/stress/actions/volume_create_delete.py b/tempest/stress/actions/volume_create_delete.py
deleted file mode 100644
index 3986748..0000000
--- a/tempest/stress/actions/volume_create_delete.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from tempest.common.utils import data_utils
-import tempest.stress.stressaction as stressaction
-
-
-class VolumeCreateDeleteTest(stressaction.StressAction):
-
- def run(self):
- name = data_utils.rand_name("volume")
- self.logger.info("creating %s" % name)
- volumes_client = self.manager.volumes_client
- volume = volumes_client.create_volume(display_name=name)['volume']
- vol_id = volume['id']
- volumes_client.wait_for_volume_status(vol_id, 'available')
- self.logger.info("created %s" % volume['id'])
- self.logger.info("deleting %s" % name)
- volumes_client.delete_volume(vol_id)
- volumes_client.wait_for_resource_deletion(vol_id)
- self.logger.info("deleted %s" % vol_id)
diff --git a/tempest/stress/cleanup.py b/tempest/stress/cleanup.py
deleted file mode 100644
index 3b0a937..0000000
--- a/tempest/stress/cleanup.py
+++ /dev/null
@@ -1,118 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2013 Quanta Research Cambridge, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from oslo_log import log as logging
-
-from tempest.common import credentials_factory as credentials
-from tempest.common import waiters
-
-LOG = logging.getLogger(__name__)
-
-
-def cleanup():
- admin_manager = credentials.AdminManager()
-
- body = admin_manager.servers_client.list_servers(all_tenants=True)
- LOG.info("Cleanup::remove %s servers" % len(body['servers']))
- for s in body['servers']:
- try:
- admin_manager.servers_client.delete_server(s['id'])
- except Exception:
- pass
-
- for s in body['servers']:
- try:
- waiters.wait_for_server_termination(admin_manager.servers_client,
- s['id'])
- except Exception:
- pass
-
- keypairs = admin_manager.keypairs_client.list_keypairs()['keypairs']
- LOG.info("Cleanup::remove %s keypairs" % len(keypairs))
- for k in keypairs:
- try:
- admin_manager.keypairs_client.delete_keypair(k['name'])
- except Exception:
- pass
-
- secgrp_client = admin_manager.compute_security_groups_client
- secgrp = (secgrp_client.list_security_groups(all_tenants=True)
- ['security_groups'])
- secgrp_del = [grp for grp in secgrp if grp['name'] != 'default']
- LOG.info("Cleanup::remove %s Security Group" % len(secgrp_del))
- for g in secgrp_del:
- try:
- secgrp_client.delete_security_group(g['id'])
- except Exception:
- pass
-
- admin_floating_ips_client = admin_manager.compute_floating_ips_client
- floating_ips = (admin_floating_ips_client.list_floating_ips()
- ['floating_ips'])
- LOG.info("Cleanup::remove %s floating ips" % len(floating_ips))
- for f in floating_ips:
- try:
- admin_floating_ips_client.delete_floating_ip(f['id'])
- except Exception:
- pass
-
- users = admin_manager.users_client.list_users()['users']
- LOG.info("Cleanup::remove %s users" % len(users))
- for user in users:
- if user['name'].startswith("stress_user"):
- admin_manager.users_client.delete_user(user['id'])
- tenants = admin_manager.tenants_client.list_tenants()['tenants']
- LOG.info("Cleanup::remove %s tenants" % len(tenants))
- for tenant in tenants:
- if tenant['name'].startswith("stress_tenant"):
- admin_manager.tenants_client.delete_tenant(tenant['id'])
-
- # We have to delete snapshots first or
- # volume deletion may block
-
- snaps = admin_manager.snapshots_client.list_snapshots(
- all_tenants=True)['snapshots']
- LOG.info("Cleanup::remove %s snapshots" % len(snaps))
- for v in snaps:
- try:
- waiters.wait_for_snapshot_status(
- admin_manager.snapshots_client, v['id'], 'available')
- admin_manager.snapshots_client.delete_snapshot(v['id'])
- except Exception:
- pass
-
- for v in snaps:
- try:
- admin_manager.snapshots_client.wait_for_resource_deletion(v['id'])
- except Exception:
- pass
-
- vols = admin_manager.volumes_client.list_volumes(
- params={"all_tenants": True})
- LOG.info("Cleanup::remove %s volumes" % len(vols))
- for v in vols:
- try:
- waiters.wait_for_volume_status(
- admin_manager.volumes_client, v['id'], 'available')
- admin_manager.volumes_client.delete_volume(v['id'])
- except Exception:
- pass
-
- for v in vols:
- try:
- admin_manager.volumes_client.wait_for_resource_deletion(v['id'])
- except Exception:
- pass
diff --git a/tempest/stress/driver.py b/tempest/stress/driver.py
deleted file mode 100644
index 925d765..0000000
--- a/tempest/stress/driver.py
+++ /dev/null
@@ -1,266 +0,0 @@
-# Copyright 2013 Quanta Research Cambridge, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import multiprocessing
-import os
-import signal
-import time
-
-from oslo_log import log as logging
-from oslo_utils import importutils
-import six
-from six import moves
-
-
-from tempest import clients
-from tempest.common import cred_client
-from tempest.common import credentials_factory as credentials
-from tempest.common.utils import data_utils
-from tempest import config
-from tempest import exceptions
-from tempest.lib.common import ssh
-from tempest.stress import cleanup
-
-CONF = config.CONF
-
-LOG = logging.getLogger(__name__)
-processes = []
-
-
-def do_ssh(command, host, ssh_user, ssh_key=None):
- ssh_client = ssh.Client(host, ssh_user, key_filename=ssh_key)
- try:
- return ssh_client.exec_command(command)
- except exceptions.SSHExecCommandFailed:
- LOG.error('do_ssh raise exception. command:%s, host:%s.'
- % (command, host))
- return None
-
-
-def _get_compute_nodes(controller, ssh_user, ssh_key=None):
- """Returns a list of active compute nodes.
-
- List is generated by running nova-manage on the controller.
- """
- nodes = []
- cmd = "nova-manage service list | grep ^nova-compute"
- output = do_ssh(cmd, controller, ssh_user, ssh_key)
- if not output:
- return nodes
- # For example: nova-compute xg11eth0 nova enabled :-) 2011-10-31 18:57:46
- # This is fragile but there is, at present, no other way to get this info.
- for line in output.split('\n'):
- words = line.split()
- if len(words) > 0 and words[4] == ":-)":
- nodes.append(words[1])
- return nodes
-
-
-def _has_error_in_logs(logfiles, nodes, ssh_user, ssh_key=None,
- stop_on_error=False):
- """Detect errors in nova log files on the controller and compute nodes."""
- grep = 'egrep "ERROR|TRACE" %s' % logfiles
- ret = False
- for node in nodes:
- errors = do_ssh(grep, node, ssh_user, ssh_key)
- if len(errors) > 0:
- LOG.error('%s: %s' % (node, errors))
- ret = True
- if stop_on_error:
- break
- return ret
-
-
-def sigchld_handler(signalnum, frame):
- """Signal handler (only active if stop_on_error is True)."""
- for process in processes:
- if (not process['process'].is_alive() and
- process['process'].exitcode != 0):
- signal.signal(signalnum, signal.SIG_DFL)
- terminate_all_processes()
- break
-
-
-def terminate_all_processes(check_interval=20):
- """Goes through the process list and terminates all child processes."""
- LOG.info("Stopping all processes.")
- for process in processes:
- if process['process'].is_alive():
- try:
- process['process'].terminate()
- except Exception:
- pass
- time.sleep(check_interval)
- for process in processes:
- if process['process'].is_alive():
- try:
- pid = process['process'].pid
- LOG.warning("Process %d hangs. Send SIGKILL." % pid)
- os.kill(pid, signal.SIGKILL)
- except Exception:
- pass
- process['process'].join()
-
-
-def stress_openstack(tests, duration, max_runs=None, stop_on_error=False):
- """Workload driver. Executes an action function against a nova-cluster."""
- admin_manager = credentials.AdminManager()
-
- ssh_user = CONF.stress.target_ssh_user
- ssh_key = CONF.stress.target_private_key_path
- logfiles = CONF.stress.target_logfiles
- log_check_interval = int(CONF.stress.log_check_interval)
- default_thread_num = int(CONF.stress.default_thread_number_per_action)
- if logfiles:
- controller = CONF.stress.target_controller
- computes = _get_compute_nodes(controller, ssh_user, ssh_key)
- for node in computes:
- do_ssh("rm -f %s" % logfiles, node, ssh_user, ssh_key)
- skip = False
- for test in tests:
- for service in test.get('required_services', []):
- if not CONF.service_available.get(service):
- skip = True
- break
- if skip:
- break
- # TODO(andreaf) This has to be reworked to use the credential
- # provider interface. For now only tests marked as 'use_admin' will
- # work.
- if test.get('use_admin', False):
- manager = admin_manager
- else:
- raise NotImplementedError('Non admin tests are not supported')
- for p_number in moves.xrange(test.get('threads', default_thread_num)):
- if test.get('use_isolated_tenants', False):
- username = data_utils.rand_name("stress_user")
- tenant_name = data_utils.rand_name("stress_tenant")
- password = "pass"
- if CONF.identity.auth_version == 'v2':
- identity_client = admin_manager.identity_client
- projects_client = admin_manager.tenants_client
- roles_client = admin_manager.roles_client
- users_client = admin_manager.users_client
- domains_client = None
- else:
- identity_client = admin_manager.identity_v3_client
- projects_client = admin_manager.projects_client
- roles_client = admin_manager.roles_v3_client
- users_client = admin_manager.users_v3_client
- domains_client = admin_manager.domains_client
- domain = (identity_client.auth_provider.credentials.
- get('project_domain_name', 'Default'))
- credentials_client = cred_client.get_creds_client(
- identity_client, projects_client, users_client,
- roles_client, domains_client, project_domain_name=domain)
- project = credentials_client.create_project(
- name=tenant_name, description=tenant_name)
- user = credentials_client.create_user(username, password,
- project, "email")
- # Add roles specified in config file
- for conf_role in CONF.auth.tempest_roles:
- credentials_client.assign_user_role(user, project,
- conf_role)
- creds = credentials_client.get_credentials(user, project,
- password)
- manager = clients.Manager(credentials=creds)
-
- test_obj = importutils.import_class(test['action'])
- test_run = test_obj(manager, max_runs, stop_on_error)
-
- kwargs = test.get('kwargs', {})
- test_run.setUp(**dict(six.iteritems(kwargs)))
-
- LOG.debug("calling Target Object %s" %
- test_run.__class__.__name__)
-
- mp_manager = multiprocessing.Manager()
- shared_statistic = mp_manager.dict()
- shared_statistic['runs'] = 0
- shared_statistic['fails'] = 0
-
- p = multiprocessing.Process(target=test_run.execute,
- args=(shared_statistic,))
-
- process = {'process': p,
- 'p_number': p_number,
- 'action': test_run.action,
- 'statistic': shared_statistic}
-
- processes.append(process)
- p.start()
- if stop_on_error:
- # NOTE(mkoderer): only the parent should register the handler
- signal.signal(signal.SIGCHLD, sigchld_handler)
- end_time = time.time() + duration
- had_errors = False
- try:
- while True:
- if max_runs is None:
- remaining = end_time - time.time()
- if remaining <= 0:
- break
- else:
- remaining = log_check_interval
- all_proc_term = True
- for process in processes:
- if process['process'].is_alive():
- all_proc_term = False
- break
- if all_proc_term:
- break
-
- time.sleep(min(remaining, log_check_interval))
- if stop_on_error:
- if any([True for proc in processes
- if proc['statistic']['fails'] > 0]):
- break
-
- if not logfiles:
- continue
- if _has_error_in_logs(logfiles, computes, ssh_user, ssh_key,
- stop_on_error):
- had_errors = True
- break
- except KeyboardInterrupt:
- LOG.warning("Interrupted, going to print statistics and exit ...")
-
- if stop_on_error:
- signal.signal(signal.SIGCHLD, signal.SIG_DFL)
- terminate_all_processes()
-
- sum_fails = 0
- sum_runs = 0
-
- LOG.info("Statistics (per process):")
- for process in processes:
- if process['statistic']['fails'] > 0:
- had_errors = True
- sum_runs += process['statistic']['runs']
- sum_fails += process['statistic']['fails']
- print("Process %d (%s): Run %d actions (%d failed)" % (
- process['p_number'],
- process['action'],
- process['statistic']['runs'],
- process['statistic']['fails']))
- print("Summary:")
- print("Run %d actions (%d failed)" % (sum_runs, sum_fails))
-
- if not had_errors and CONF.stress.full_clean_stack:
- LOG.info("cleaning up")
- cleanup.cleanup()
- if had_errors:
- return 1
- else:
- return 0
diff --git a/tempest/stress/etc/sample-unit-test.json b/tempest/stress/etc/sample-unit-test.json
deleted file mode 100644
index 54433d5..0000000
--- a/tempest/stress/etc/sample-unit-test.json
+++ /dev/null
@@ -1,8 +0,0 @@
-[{"action": "tempest.stress.actions.unit_test.UnitTest",
- "threads": 8,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {"test_method": "tempest.cli.simple_read_only.test_glance.SimpleReadOnlyGlanceClientTest.test_glance_fake_action",
- "class_setup_per": "process"}
- }
-]
diff --git a/tempest/stress/etc/server-create-destroy-test.json b/tempest/stress/etc/server-create-destroy-test.json
deleted file mode 100644
index bbb5352..0000000
--- a/tempest/stress/etc/server-create-destroy-test.json
+++ /dev/null
@@ -1,7 +0,0 @@
-[{"action": "tempest.stress.actions.server_create_destroy.ServerCreateDestroyTest",
- "threads": 8,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {}
- }
-]
diff --git a/tempest/stress/etc/ssh_floating.json b/tempest/stress/etc/ssh_floating.json
deleted file mode 100644
index c502e96..0000000
--- a/tempest/stress/etc/ssh_floating.json
+++ /dev/null
@@ -1,16 +0,0 @@
-[{"action": "tempest.stress.actions.ssh_floating.FloatingStress",
- "threads": 8,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {"vm_extra_args": {},
- "new_vm": true,
- "new_sec_group": true,
- "new_floating": true,
- "verify": ["check_icmp_echo", "check_port_ssh"],
- "check_timeout": 120,
- "check_interval": 1,
- "wait_after_vm_create": true,
- "wait_for_disassociate": true,
- "reboot": false}
-}
-]
diff --git a/tempest/stress/etc/stress-tox-job.json b/tempest/stress/etc/stress-tox-job.json
deleted file mode 100644
index bfa448d..0000000
--- a/tempest/stress/etc/stress-tox-job.json
+++ /dev/null
@@ -1,28 +0,0 @@
-[{"action": "tempest.stress.actions.server_create_destroy.ServerCreateDestroyTest",
- "threads": 8,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {}
- },
- {"action": "tempest.stress.actions.volume_create_delete.VolumeCreateDeleteTest",
- "threads": 4,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {}
- },
- {"action": "tempest.stress.actions.volume_attach_delete.VolumeAttachDeleteTest",
- "threads": 2,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {}
- },
- {"action": "tempest.stress.actions.unit_test.UnitTest",
- "threads": 4,
- "use_admin": true,
- "use_isolated_tenants": true,
- "required_services": ["neutron"],
- "kwargs": {"test_method": "tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_stop_start",
- "class_setup_per": "process"}
- }
-]
-
diff --git a/tempest/stress/etc/volume-attach-delete-test.json b/tempest/stress/etc/volume-attach-delete-test.json
deleted file mode 100644
index d468967..0000000
--- a/tempest/stress/etc/volume-attach-delete-test.json
+++ /dev/null
@@ -1,7 +0,0 @@
-[{"action": "tempest.stress.actions.volume_attach_delete.VolumeAttachDeleteTest",
- "threads": 4,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {}
- }
-]
diff --git a/tempest/stress/etc/volume-attach-verify.json b/tempest/stress/etc/volume-attach-verify.json
deleted file mode 100644
index d8c96fd..0000000
--- a/tempest/stress/etc/volume-attach-verify.json
+++ /dev/null
@@ -1,11 +0,0 @@
-[{"action": "tempest.stress.actions.volume_attach_verify.VolumeVerifyStress",
- "threads": 1,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {"vm_extra_args": {},
- "new_volume": true,
- "new_server": false,
- "ssh_test_before_attach": false,
- "enable_ssh_verify": true}
-}
-]
diff --git a/tempest/stress/etc/volume-create-delete-test.json b/tempest/stress/etc/volume-create-delete-test.json
deleted file mode 100644
index a60cde6..0000000
--- a/tempest/stress/etc/volume-create-delete-test.json
+++ /dev/null
@@ -1,7 +0,0 @@
-[{"action": "tempest.stress.actions.volume_create_delete.VolumeCreateDeleteTest",
- "threads": 4,
- "use_admin": true,
- "use_isolated_tenants": true,
- "kwargs": {}
- }
-]
diff --git a/tempest/stress/stressaction.py b/tempest/stress/stressaction.py
deleted file mode 100644
index cf0a08a..0000000
--- a/tempest/stress/stressaction.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import abc
-import signal
-import sys
-
-import six
-
-from oslo_log import log as logging
-
-
-@six.add_metaclass(abc.ABCMeta)
-class StressAction(object):
-
- def __init__(self, manager, max_runs=None, stop_on_error=False):
- full_cname = self.__module__ + "." + self.__class__.__name__
- self.logger = logging.getLogger(full_cname)
- self.manager = manager
- self.max_runs = max_runs
- self.stop_on_error = stop_on_error
-
- def _shutdown_handler(self, signal, frame):
- try:
- self.tearDown()
- except Exception:
- self.logger.exception("Error while tearDown")
- sys.exit(0)
-
- @property
- def action(self):
- """This methods returns the action.
-
- Overload this if you create a stress test wrapper.
- """
- return self.__class__.__name__
-
- def setUp(self, **kwargs):
- """Initialize test structures/resources
-
- This method is called before "run" method to help the test
- initialize any structures. kwargs contains arguments passed
- in from the configuration json file.
-
- setUp doesn't count against the time duration.
- """
- self.logger.debug("setUp")
-
- def tearDown(self):
- """Cleanup test structures/resources
-
- This method is called to do any cleanup after the test is complete.
- """
- self.logger.debug("tearDown")
-
- def execute(self, shared_statistic):
- """This is the main execution entry point called by the driver.
-
- We register a signal handler to allow us to tearDown gracefully,
- and then exit. We also keep track of how many runs we do.
- """
- signal.signal(signal.SIGHUP, self._shutdown_handler)
- signal.signal(signal.SIGTERM, self._shutdown_handler)
-
- while self.max_runs is None or (shared_statistic['runs'] <
- self.max_runs):
- self.logger.debug("Trigger new run (run %d)" %
- shared_statistic['runs'])
- try:
- self.run()
- except Exception:
- shared_statistic['fails'] += 1
- self.logger.exception("Failure in run")
- finally:
- shared_statistic['runs'] += 1
- if self.stop_on_error and (shared_statistic['fails'] > 1):
- self.logger.warning("Stop process due to"
- "\"stop-on-error\" argument")
- self.tearDown()
- sys.exit(1)
-
- @abc.abstractmethod
- def run(self):
- """This method is where the stress test code runs."""
- return
diff --git a/tempest/stress/tools/cleanup.py b/tempest/stress/tools/cleanup.py
deleted file mode 100755
index 3885ba0..0000000
--- a/tempest/stress/tools/cleanup.py
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2013 Quanta Research Cambridge, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from tempest.stress import cleanup
-
-cleanup.cleanup()
diff --git a/tempest/test.py b/tempest/test.py
index 97ab25c..cc9410f 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -16,28 +16,22 @@
import atexit
import functools
import os
-import re
import sys
-import time
+import debtcollector.moves
import fixtures
from oslo_log import log as logging
-from oslo_serialization import jsonutils as json
-from oslo_utils import importutils
import six
-from six.moves import urllib
-import testscenarios
import testtools
from tempest import clients
from tempest.common import cred_client
from tempest.common import credentials_factory as credentials
from tempest.common import fixed_network
-import tempest.common.generator.valid_generator as valid
import tempest.common.validation_resources as vresources
from tempest import config
from tempest import exceptions
-from tempest.lib.common.utils import data_utils
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -108,32 +102,6 @@
return decorator
-def stresstest(**kwargs):
- """Add stress test decorator
-
- For all functions with this decorator a attr stress will be
- set automatically.
-
- @param class_setup_per: allowed values are application, process, action
- ``application``: once in the stress job lifetime
- ``process``: once in the worker process lifetime
- ``action``: on each action
- @param allow_inheritance: allows inheritance of this attribute
- """
- def decorator(f):
- if 'class_setup_per' in kwargs:
- setattr(f, "st_class_setup_per", kwargs['class_setup_per'])
- else:
- setattr(f, "st_class_setup_per", 'process')
- if 'allow_inheritance' in kwargs:
- setattr(f, "st_allow_inheritance", kwargs['allow_inheritance'])
- else:
- setattr(f, "st_allow_inheritance", False)
- attr(type='stress')(f)
- return f
- return decorator
-
-
def requires_ext(**kwargs):
"""A decorator to skip tests if an extension is not enabled
@@ -648,240 +616,6 @@
self.assertTrue(len(list) > 0, msg)
-class NegativeAutoTest(BaseTestCase):
-
- _resources = {}
-
- @classmethod
- def setUpClass(cls):
- super(NegativeAutoTest, cls).setUpClass()
- os = cls.get_client_manager(credential_type='primary')
- cls.client = os.negative_client
-
- @staticmethod
- def load_tests(*args):
- """Wrapper for testscenarios
-
- To set the mandatory scenarios variable only in case a real test
- loader is in place. Will be automatically called in case the variable
- "load_tests" is set.
- """
- if getattr(args[0], 'suiteClass', None) is not None:
- loader, standard_tests, pattern = args
- else:
- standard_tests, module, loader = args
- for test in testtools.iterate_tests(standard_tests):
- schema = getattr(test, '_schema', None)
- if schema is not None:
- setattr(test, 'scenarios',
- NegativeAutoTest.generate_scenario(schema))
- return testscenarios.load_tests_apply_scenarios(*args)
-
- @staticmethod
- def generate_scenario(description):
- """Generates the test scenario list for a given description.
-
- :param description: A file or dictionary with the following entries:
- name (required) name for the api
- http-method (required) one of HEAD,GET,PUT,POST,PATCH,DELETE
- url (required) the url to be appended to the catalog url with '%s'
- for each resource mentioned
- resources: (optional) A list of resource names such as "server",
- "flavor", etc. with an element for each '%s' in the url. This
- method will call self.get_resource for each element when
- constructing the positive test case template so negative
- subclasses are expected to return valid resource ids when
- appropriate.
- json-schema (optional) A valid json schema that will be used to
- create invalid data for the api calls. For "GET" and "HEAD",
- the data is used to generate query strings appended to the url,
- otherwise for the body of the http call.
- """
- LOG.debug(description)
- generator = importutils.import_class(
- CONF.negative.test_generator)()
- generator.validate_schema(description)
- schema = description.get("json-schema", None)
- resources = description.get("resources", [])
- scenario_list = []
- expected_result = None
- for resource in resources:
- if isinstance(resource, dict):
- expected_result = resource['expected_result']
- resource = resource['name']
- LOG.debug("Add resource to test %s" % resource)
- scn_name = "inv_res_%s" % (resource)
- scenario_list.append((scn_name, {
- "resource": (resource, data_utils.rand_uuid()),
- "expected_result": expected_result
- }))
- if schema is not None:
- for scenario in generator.generate_scenarios(schema):
- scenario_list.append((scenario['_negtest_name'],
- scenario))
- LOG.debug(scenario_list)
- return scenario_list
-
- def execute(self, description):
- """Execute a http call
-
- Execute a http call on an api that are expected to
- result in client errors. First it uses invalid resources that are part
- of the url, and then invalid data for queries and http request bodies.
-
- :param description: A json file or dictionary with the following
- entries:
- name (required) name for the api
- http-method (required) one of HEAD,GET,PUT,POST,PATCH,DELETE
- url (required) the url to be appended to the catalog url with '%s'
- for each resource mentioned
- resources: (optional) A list of resource names such as "server",
- "flavor", etc. with an element for each '%s' in the url. This
- method will call self.get_resource for each element when
- constructing the positive test case template so negative
- subclasses are expected to return valid resource ids when
- appropriate.
- json-schema (optional) A valid json schema that will be used to
- create invalid data for the api calls. For "GET" and "HEAD",
- the data is used to generate query strings appended to the url,
- otherwise for the body of the http call.
-
- """
- LOG.info("Executing %s" % description["name"])
- LOG.debug(description)
- generator = importutils.import_class(
- CONF.negative.test_generator)()
- schema = description.get("json-schema", None)
- method = description["http-method"]
- url = description["url"]
- expected_result = None
- if "default_result_code" in description:
- expected_result = description["default_result_code"]
-
- resources = [self.get_resource(r) for
- r in description.get("resources", [])]
-
- if hasattr(self, "resource"):
- # Note(mkoderer): The resources list already contains an invalid
- # entry (see get_resource).
- # We just send a valid json-schema with it
- valid_schema = None
- if schema:
- valid_schema = \
- valid.ValidTestGenerator().generate_valid(schema)
- new_url, body = self._http_arguments(valid_schema, url, method)
- elif hasattr(self, "_negtest_name"):
- schema_under_test = \
- valid.ValidTestGenerator().generate_valid(schema)
- local_expected_result = \
- generator.generate_payload(self, schema_under_test)
- if local_expected_result is not None:
- expected_result = local_expected_result
- new_url, body = \
- self._http_arguments(schema_under_test, url, method)
- else:
- raise Exception("testscenarios are not active. Please make sure "
- "that your test runner supports the load_tests "
- "mechanism")
-
- if "admin_client" in description and description["admin_client"]:
- if not credentials.is_admin_available(
- identity_version=self.get_identity_version()):
- msg = ("Missing Identity Admin API credentials in"
- "configuration.")
- raise self.skipException(msg)
- creds = self.credentials_provider.get_admin_creds()
- os_adm = clients.Manager(credentials=creds)
- client = os_adm.negative_client
- else:
- client = self.client
- resp, resp_body = client.send_request(method, new_url,
- resources, body=body)
- self._check_negative_response(expected_result, resp.status, resp_body)
-
- def _http_arguments(self, json_dict, url, method):
- LOG.debug("dict: %s url: %s method: %s" % (json_dict, url, method))
- if not json_dict:
- return url, None
- elif method in ["GET", "HEAD", "PUT", "DELETE"]:
- return "%s?%s" % (url, urllib.parse.urlencode(json_dict)), None
- else:
- return url, json.dumps(json_dict)
-
- def _check_negative_response(self, expected_result, result, body):
- self.assertTrue(result >= 400 and result < 500 and result != 413,
- "Expected client error, got %s:%s" %
- (result, body))
- self.assertTrue(expected_result is None or expected_result == result,
- "Expected %s, got %s:%s" %
- (expected_result, result, body))
-
- @classmethod
- def set_resource(cls, name, resource):
- """Register a resource for a test
-
- This function can be used in setUpClass context to register a resource
- for a test.
-
- :param name: The name of the kind of resource such as "flavor", "role",
- etc.
- :resource: The id of the resource
- """
- cls._resources[name] = resource
-
- def get_resource(self, name):
- """Return a valid uuid for a type of resource.
-
- If a real resource is needed as part of a url then this method should
- return one. Otherwise it can return None.
-
- :param name: The name of the kind of resource such as "flavor", "role",
- etc.
- """
- if isinstance(name, dict):
- name = name['name']
- if hasattr(self, "resource") and self.resource[0] == name:
- LOG.debug("Return invalid resource (%s) value: %s" %
- (self.resource[0], self.resource[1]))
- return self.resource[1]
- if name in self._resources:
- return self._resources[name]
- return None
-
-
-def SimpleNegativeAutoTest(klass):
- """This decorator registers a test function on basis of the class name."""
- @attr(type=['negative'])
- def generic_test(self):
- if hasattr(self, '_schema'):
- self.execute(self._schema)
-
- cn = klass.__name__
- cn = cn.replace('JSON', '')
- cn = cn.replace('Test', '')
- # NOTE(mkoderer): replaces uppercase chars inside the class name with '_'
- lower_cn = re.sub('(?<!^)(?=[A-Z])', '_', cn).lower()
- func_name = 'test_%s' % lower_cn
- setattr(klass, func_name, generic_test)
- return klass
-
-
-def call_until_true(func, duration, sleep_for):
- """Call the given function until it returns True (and return True)
-
- or until the specified duration (in seconds) elapses (and return False).
-
- :param func: A zero argument callable that returns True on success.
- :param duration: The number of seconds for which to attempt a
- successful call of the function.
- :param sleep_for: The number of seconds to sleep after an unsuccessful
- invocation of the function.
- """
- now = time.time()
- timeout = now + duration
- while now < timeout:
- if func():
- return True
- time.sleep(sleep_for)
- now = time.time()
- return False
+call_until_true = debtcollector.moves.moved_function(
+ test_utils.call_until_true, 'call_until_true', __name__,
+ version='Newton', removal_version='Ocata')
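With this hunk, ``tempest.test.call_until_true`` becomes a deprecated alias (slated for removal in Ocata) that forwards to the identical helper in the stable interface. A minimal usage sketch, assuming only what the diff shows (the ``(func, duration, sleep_for)`` signature); ``server_is_active`` is a hypothetical predicate::

    from tempest.lib.common.utils import test_utils

    def server_is_active():
        # Hypothetical zero-argument predicate; a real test would query the
        # server status through a service client here.
        return True

    # Poll the predicate for up to 60 seconds, sleeping 2 seconds between
    # attempts; returns True on the first truthy result, False on timeout.
    reached = test_utils.call_until_true(server_is_active, 60, 2)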
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index d604b28..f8d5d9d 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -19,6 +19,7 @@
import stevedore
from tempest.lib.common.utils import misc
+from tempest.lib.services import clients
LOG = logging.getLogger(__name__)
@@ -62,6 +63,54 @@
"""
return []
+ def get_service_clients(self):
+ """Get a list of the service clients for registration
+
+ If the plugin implements service clients for one or more APIs, it
+ may return their details via this method so that they are registered
+ automatically in any ServiceClients object instantiated by tests.
+ The default implementation returns an empty list.
+
+ :return: list of dictionaries. Each element of the list represents
+ the service client for an API. Each dict must define all
+ parameters required for the invocation of
+ `service_clients.ServiceClients.register_service_client_module`.
+ :rtype: list
+
+ Example:
+
+ >>> # Example implementation with one service client
+ >>> myservice_config = config.service_client_config('myservice')
+ >>> params = {
+ >>> 'name': 'myservice',
+ >>> 'service_version': 'myservice',
+ >>> 'module_path': 'myservice_tempest_tests.services',
+ >>> 'client_names': ['API1Client', 'API2Client'],
+ >>> }
+ >>> params.update(myservice_config)
+ >>> return [params]
+
+ >>> # Example implementation with two service clients
+ >>> foo1_config = config.service_client_config('foo')
+ >>> params_foo1 = {
+ >>> 'name': 'foo_v1',
+ >>> 'service_version': 'foo.v1',
+ >>> 'module_path': 'bar_tempest_tests.services.foo.v1',
+ >>> 'client_names': ['API1Client', 'API2Client'],
+ >>> }
+ >>> params_foo1.update(foo1_config)
+ >>> foo2_config = config.service_client_config('foo')
+ >>> params_foo2 = {
+ >>> 'name': 'foo_v2',
+ >>> 'service_version': 'foo.v2',
+ >>> 'module_path': 'bar_tempest_tests.services.foo.v2',
+ >>> 'client_names': ['API1Client', 'API2Client'],
+ >>> }
+ >>> params_foo2.update(foo2_config)
+ >>> return [params_foo1, params_foo2]
+ """
+ return []
+
@misc.singleton
class TempestTestPluginManager(object):
@@ -75,6 +124,7 @@
'tempest.test_plugins', invoke_on_load=True,
propagate_map_exceptions=True,
on_load_failure_callback=self.failure_hook)
+ self._register_service_clients()
@staticmethod
def failure_hook(_, ep, err):
@@ -102,3 +152,15 @@
if opt_list:
plugin_options.extend(opt_list)
return plugin_options
+
+ def _register_service_clients(self):
+ registry = clients.ClientsRegistry()
+ for plug in self.ext_plugins:
+ try:
+ service_clients = plug.obj.get_service_clients()
+ if service_clients:
+ registry.register_service_client(
+ plug.name, service_clients)
+ except Exception:
+ LOG.exception('Plugin %s raised an exception trying to run '
+ 'get_service_clients' % plug.name)
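For illustration, a minimal sketch (not part of this diff) of a plugin exposing a service client through the new hook. The service name, module path and client names are placeholders taken from the docstring example above, and the other abstract ``TempestPlugin`` methods are omitted::

    from tempest import config
    from tempest.test_discover import plugins

    class MyServicePlugin(plugins.TempestPlugin):
        def get_service_clients(self):
            # Placeholder values mirroring the docstring example; the returned
            # list is picked up by _register_service_clients() when the plugin
            # is loaded by the TempestTestPluginManager.
            params = {
                'name': 'myservice',
                'service_version': 'myservice',
                'module_path': 'myservice_tempest_tests.services',
                'client_names': ['API1Client', 'API2Client'],
            }
            params.update(config.service_client_config('myservice'))
            return [params]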
diff --git a/tempest/tests/cmd/test_account_generator.py b/tempest/tests/cmd/test_account_generator.py
old mode 100755
new mode 100644
diff --git a/tempest/tests/cmd/test_javelin.py b/tempest/tests/cmd/test_javelin.py
deleted file mode 100644
index 5ec9720..0000000
--- a/tempest/tests/cmd/test_javelin.py
+++ /dev/null
@@ -1,422 +0,0 @@
-#!/usr/bin/env python
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import mock
-from oslotest import mockpatch
-
-from tempest.cmd import javelin
-from tempest.lib import exceptions as lib_exc
-from tempest.tests import base
-
-
-class JavelinUnitTest(base.TestCase):
-
- def setUp(self):
- super(JavelinUnitTest, self).setUp()
- javelin.LOG = mock.MagicMock()
- self.fake_client = mock.MagicMock()
- self.fake_object = mock.MagicMock()
-
- def test_load_resources(self):
- with mock.patch('six.moves.builtins.open', mock.mock_open(),
- create=True) as open_mock:
- with mock.patch('yaml.load', mock.MagicMock(),
- create=True) as load_mock:
- javelin.load_resources(self.fake_object)
- load_mock.assert_called_once_with(open_mock(self.fake_object))
-
- def test_keystone_admin(self):
- self.useFixture(mockpatch.PatchObject(javelin, "OSClient"))
- javelin.OPTS = self.fake_object
- javelin.keystone_admin()
- javelin.OSClient.assert_called_once_with(
- self.fake_object.os_username,
- self.fake_object.os_password,
- self.fake_object.os_tenant_name)
-
- def test_client_for_user(self):
- fake_user = mock.MagicMock()
- javelin.USERS = {fake_user['name']: fake_user}
- self.useFixture(mockpatch.PatchObject(javelin, "OSClient"))
- javelin.client_for_user(fake_user['name'])
- javelin.OSClient.assert_called_once_with(
- fake_user['name'], fake_user['pass'], fake_user['tenant'])
-
- def test_client_for_non_existing_user(self):
- fake_non_existing_user = self.fake_object
- fake_user = mock.MagicMock()
- javelin.USERS = {fake_user['name']: fake_user}
- self.useFixture(mockpatch.PatchObject(javelin, "OSClient"))
- javelin.client_for_user(fake_non_existing_user['name'])
- self.assertFalse(javelin.OSClient.called)
-
- def test_attach_volumes(self):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
-
- self.useFixture(mockpatch.PatchObject(
- javelin, "_get_volume_by_name",
- return_value=self.fake_object.volume))
-
- self.useFixture(mockpatch.PatchObject(
- javelin, "_get_server_by_name",
- return_value=self.fake_object.server))
-
- javelin.attach_volumes([self.fake_object])
-
- mocked_function = self.fake_client.volumes.attach_volume
- mocked_function.assert_called_once_with(
- self.fake_object.volume['id'],
- instance_uuid=self.fake_object.server['id'],
- mountpoint=self.fake_object['device'])
-
-
-class TestCreateResources(JavelinUnitTest):
- def test_create_tenants(self):
-
- self.fake_client.tenants.list_tenants.return_value = {'tenants': []}
- self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
- return_value=self.fake_client))
-
- javelin.create_tenants([self.fake_object['name']])
-
- mocked_function = self.fake_client.tenants.create_tenant
- mocked_function.assert_called_once_with(name=self.fake_object['name'])
-
- def test_create_duplicate_tenant(self):
- self.fake_client.tenants.list_tenants.return_value = {'tenants': [
- {'name': self.fake_object['name']}]}
- self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
- return_value=self.fake_client))
-
- javelin.create_tenants([self.fake_object['name']])
-
- mocked_function = self.fake_client.tenants.create_tenant
- self.assertFalse(mocked_function.called)
-
- def test_create_users(self):
- self.useFixture(mockpatch.Patch(
- 'tempest.common.identity.get_tenant_by_name',
- return_value=self.fake_object['tenant']))
- self.useFixture(mockpatch.Patch(
- 'tempest.common.identity.get_user_by_username',
- side_effect=lib_exc.NotFound("user is not found")))
- self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
- return_value=self.fake_client))
-
- javelin.create_users([self.fake_object])
-
- fake_tenant_id = self.fake_object['tenant']['id']
- fake_email = "%s@%s" % (self.fake_object['user'], fake_tenant_id)
- mocked_function = self.fake_client.users.create_user
- mocked_function.assert_called_once_with(
- name=self.fake_object['name'],
- password=self.fake_object['password'],
- tenantId=fake_tenant_id,
- email=fake_email,
- enabled=True)
-
- def test_create_user_missing_tenant(self):
- self.useFixture(mockpatch.Patch(
- 'tempest.common.identity.get_tenant_by_name',
- side_effect=lib_exc.NotFound("tenant is not found")))
- self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
- return_value=self.fake_client))
-
- javelin.create_users([self.fake_object])
-
- mocked_function = self.fake_client.users.create_user
- self.assertFalse(mocked_function.called)
-
- def test_create_objects(self):
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(javelin, "_assign_swift_role"))
- self.useFixture(mockpatch.PatchObject(javelin, "_file_contents",
- return_value=self.fake_object.content))
-
- javelin.create_objects([self.fake_object])
-
- mocked_function = self.fake_client.containers.create_container
- mocked_function.assert_called_once_with(self.fake_object['container'])
- mocked_function = self.fake_client.objects.create_object
- mocked_function.assert_called_once_with(self.fake_object['container'],
- self.fake_object['name'],
- self.fake_object.content)
-
- def test_create_images(self):
- self.fake_client.images.create_image.return_value = \
- self.fake_object['body']
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(javelin, "_get_image_by_name",
- return_value=[]))
- self.useFixture(mockpatch.PatchObject(javelin, "_resolve_image",
- return_value=(None, None)))
-
- with mock.patch('six.moves.builtins.open', mock.mock_open(),
- create=True) as open_mock:
- javelin.create_images([self.fake_object])
-
- mocked_function = self.fake_client.images.create_image
- mocked_function.assert_called_once_with(self.fake_object['name'],
- self.fake_object['format'],
- self.fake_object['format'])
-
- mocked_function = self.fake_client.images.store_image_file
- fake_image_id = self.fake_object['body'].get('id')
- mocked_function.assert_called_once_with(fake_image_id, open_mock())
-
- def test_create_networks(self):
- self.fake_client.networks.list_networks.return_value = {
- 'networks': []}
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
-
- javelin.create_networks([self.fake_object])
-
- mocked_function = self.fake_client.networks.create_network
- mocked_function.assert_called_once_with(name=self.fake_object['name'])
-
- def test_create_subnet(self):
-
- fake_network = self.fake_object['network']
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(javelin, "_get_resource_by_name",
- return_value=fake_network))
-
- fake_netaddr = mock.MagicMock()
- self.useFixture(mockpatch.PatchObject(javelin, "netaddr",
- return_value=fake_netaddr))
- fake_version = javelin.netaddr.IPNetwork().version
-
- javelin.create_subnets([self.fake_object])
-
- mocked_function = self.fake_client.networks.create_subnet
- mocked_function.assert_called_once_with(network_id=fake_network['id'],
- cidr=self.fake_object['range'],
- name=self.fake_object['name'],
- ip_version=fake_version)
-
- @mock.patch("tempest.common.waiters.wait_for_volume_status")
- def test_create_volumes(self, mock_wait_for_volume_status):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(javelin, "_get_volume_by_name",
- return_value=None))
- self.fake_client.volumes.create_volume.return_value = \
- self.fake_object.body
-
- javelin.create_volumes([self.fake_object])
-
- mocked_function = self.fake_client.volumes.create_volume
- mocked_function.assert_called_once_with(
- size=self.fake_object['gb'],
- display_name=self.fake_object['name'])
- mock_wait_for_volume_status.assert_called_once_with(
- self.fake_client.volumes, self.fake_object.body['volume']['id'],
- 'available')
-
- @mock.patch("tempest.common.waiters.wait_for_volume_status")
- def test_create_volume_existing(self, mock_wait_for_volume_status):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(javelin, "_get_volume_by_name",
- return_value=self.fake_object))
- self.fake_client.volumes.create_volume.return_value = \
- self.fake_object.body
-
- javelin.create_volumes([self.fake_object])
-
- mocked_function = self.fake_client.volumes.create_volume
- self.assertFalse(mocked_function.called)
- self.assertFalse(mock_wait_for_volume_status.called)
-
- def test_create_router(self):
-
- self.fake_client.routers.list_routers.return_value = {'routers': []}
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
-
- javelin.create_routers([self.fake_object])
-
- mocked_function = self.fake_client.networks.create_router
- mocked_function.assert_called_once_with(name=self.fake_object['name'])
-
- def test_create_router_existing(self):
- self.fake_client.routers.list_routers.return_value = {
- 'routers': [self.fake_object]}
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
-
- javelin.create_routers([self.fake_object])
-
- mocked_function = self.fake_client.networks.create_router
- self.assertFalse(mocked_function.called)
-
- def test_create_secgroup(self):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.fake_client.secgroups.list_security_groups.return_value = (
- {'security_groups': []})
- self.fake_client.secgroups.create_security_group.return_value = \
- {'security_group': {'id': self.fake_object['secgroup_id']}}
-
- javelin.create_secgroups([self.fake_object])
-
- mocked_function = self.fake_client.secgroups.create_security_group
- mocked_function.assert_called_once_with(
- name=self.fake_object['name'],
- description=self.fake_object['description'])
-
-
-class TestDestroyResources(JavelinUnitTest):
-
- def test_destroy_tenants(self):
-
- fake_tenant = self.fake_object['tenant']
- fake_auth = self.fake_client
- self.useFixture(mockpatch.Patch(
- 'tempest.common.identity.get_tenant_by_name',
- return_value=fake_tenant))
- self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
- return_value=fake_auth))
- javelin.destroy_tenants([fake_tenant])
-
- mocked_function = fake_auth.tenants.delete_tenant
- mocked_function.assert_called_once_with(fake_tenant['id'])
-
- def test_destroy_users(self):
-
- fake_user = self.fake_object['user']
- fake_tenant = self.fake_object['tenant']
-
- fake_auth = self.fake_client
- fake_auth.tenants.list_tenants.return_value = \
- {'tenants': [fake_tenant]}
- fake_auth.users.list_users.return_value = {'users': [fake_user]}
-
- self.useFixture(mockpatch.Patch(
- 'tempest.common.identity.get_user_by_username',
- return_value=fake_user))
- self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
- return_value=fake_auth))
-
- javelin.destroy_users([fake_user])
-
- mocked_function = fake_auth.users.delete_user
- mocked_function.assert_called_once_with(fake_user['id'])
-
- def test_destroy_objects(self):
-
- self.fake_client.objects.delete_object.return_value = \
- {'status': "200"}, ""
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- javelin.destroy_objects([self.fake_object])
-
- mocked_function = self.fake_client.objects.delete_object
- mocked_function.asswert_called_once(self.fake_object['container'],
- self.fake_object['name'])
-
- def test_destroy_images(self):
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(javelin, "_get_image_by_name",
- return_value=self.fake_object['image']))
-
- javelin.destroy_images([self.fake_object])
-
- mocked_function = self.fake_client.images.delete_image
- mocked_function.assert_called_once_with(
- self.fake_object['image']['id'])
-
- def test_destroy_networks(self):
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- self.useFixture(mockpatch.PatchObject(
- javelin, "_get_resource_by_name",
- return_value=self.fake_object['resource']))
-
- javelin.destroy_networks([self.fake_object])
-
- mocked_function = self.fake_client.networks.delete_network
- mocked_function.assert_called_once_with(
- self.fake_object['resource']['id'])
-
- def test_destroy_volumes(self):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
-
- self.useFixture(mockpatch.PatchObject(
- javelin, "_get_volume_by_name",
- return_value=self.fake_object.volume))
-
- javelin.destroy_volumes([self.fake_object])
-
- mocked_function = self.fake_client.volumes.detach_volume
- mocked_function.assert_called_once_with(self.fake_object.volume['id'])
- mocked_function = self.fake_client.volumes.delete_volume
- mocked_function.assert_called_once_with(self.fake_object.volume['id'])
-
- def test_destroy_subnets(self):
-
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- fake_subnet_id = self.fake_object['subnet_id']
- self.useFixture(mockpatch.PatchObject(javelin, "_get_resource_by_name",
- return_value={
- 'id': fake_subnet_id}))
-
- javelin.destroy_subnets([self.fake_object])
-
- mocked_function = self.fake_client.subnets.delete_subnet
- mocked_function.assert_called_once_with(fake_subnet_id)
-
- def test_destroy_routers(self):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
-
- # this function is used on 2 different occasions in the code
- def _fake_get_resource_by_name(*args):
- if args[1] == "routers":
- return {"id": self.fake_object['router_id']}
- elif args[1] == "subnets":
- return {"id": self.fake_object['subnet_id']}
- javelin._get_resource_by_name = _fake_get_resource_by_name
-
- javelin.destroy_routers([self.fake_object])
-
- mocked_function = self.fake_client.routers.delete_router
- mocked_function.assert_called_once_with(
- self.fake_object['router_id'])
-
- def test_destroy_secgroup(self):
- self.useFixture(mockpatch.PatchObject(javelin, "client_for_user",
- return_value=self.fake_client))
- fake_secgroup = {'id': self.fake_object['id']}
- self.useFixture(mockpatch.PatchObject(javelin, "_get_resource_by_name",
- return_value=fake_secgroup))
-
- javelin.destroy_secgroups([self.fake_object])
-
- mocked_function = self.fake_client.secgroups.delete_security_group
- mocked_function.assert_called_once_with(self.fake_object['id'])
diff --git a/tempest/tests/cmd/test_run.py b/tempest/tests/cmd/test_run.py
index 772391f..7ac347d 100644
--- a/tempest/tests/cmd/test_run.py
+++ b/tempest/tests/cmd/test_run.py
@@ -18,6 +18,7 @@
import subprocess
import tempfile
+import fixtures
import mock
from tempest.cmd import run
@@ -122,3 +123,32 @@
# too.
subprocess.call(['git', 'init'], stderr=DEVNULL)
self.assertRunExit(['tempest', 'run'], 1)
+
+
+class TestTakeAction(base.TestCase):
+ def test_workspace_not_registered(self):
+ class Exception_(Exception):
+ pass
+
+ m_exit = self.useFixture(fixtures.MockPatch('sys.exit')).mock
+ # make sys.exit raise so the code neither continues past it nor exits
+ m_exit.side_effect = Exception_
+
+ workspace = self.getUniqueString()
+
+ tempest_run = run.TempestRun(app=mock.Mock(), app_args=mock.Mock())
+ parsed_args = mock.Mock()
+ parsed_args.config_file = []
+
+ # Override $HOME so that the empty workspace gets created in a temp dir.
+ self.useFixture(fixtures.TempHomeDir())
+
+ # Force use of the temporary home directory.
+ parsed_args.workspace_path = None
+
+ # Simulate --workspace argument.
+ parsed_args.workspace = workspace
+
+ self.assertRaises(Exception_, tempest_run.take_action, parsed_args)
+ exit_msg = m_exit.call_args[0][0]
+ self.assertIn(workspace, exit_msg)
diff --git a/tempest/tests/cmd/test_subunit_describe_calls.py b/tempest/tests/cmd/test_subunit_describe_calls.py
index 43b417a..1c24c37 100644
--- a/tempest/tests/cmd/test_subunit_describe_calls.py
+++ b/tempest/tests/cmd/test_subunit_describe_calls.py
@@ -38,46 +38,159 @@
os.path.dirname(os.path.abspath(__file__)),
'sample_streams/calls.subunit')
parser = subunit_describe_calls.parse(
- subunit_file, "pythonlogging", None)
+ open(subunit_file), "pythonlogging", None)
expected_result = {
- 'bar': [{'name': 'AgentsAdminTestJSON:setUp',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents',
- 'verb': 'POST'},
- {'name': 'AgentsAdminTestJSON:test_create_agent',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents',
- 'verb': 'POST'},
- {'name': 'AgentsAdminTestJSON:tearDown',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents/1',
- 'verb': 'DELETE'},
- {'name': 'AgentsAdminTestJSON:_run_cleanups',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents/2',
- 'verb': 'DELETE'}],
- 'foo': [{'name': 'AgentsAdminTestJSON:setUp',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents',
- 'verb': 'POST'},
- {'name': 'AgentsAdminTestJSON:test_delete_agent',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents/3',
- 'verb': 'DELETE'},
- {'name': 'AgentsAdminTestJSON:test_delete_agent',
- 'service': 'Nova',
- 'status_code': '200',
- 'url': 'v2.1/<id>/os-agents',
- 'verb': 'GET'},
- {'name': 'AgentsAdminTestJSON:tearDown',
- 'service': 'Nova',
- 'status_code': '404',
- 'url': 'v2.1/<id>/os-agents/3',
- 'verb': 'DELETE'}]}
+ 'bar': [{
+ 'name': 'AgentsAdminTestJSON:setUp',
+ 'request_body': '{"agent": {"url": "xxx://xxxx/xxx/xxx", '
+ '"hypervisor": "common", "md5hash": '
+ '"add6bb58e139be103324d04d82d8f545", "version": "7.0", '
+ '"architecture": "tempest-x86_64-424013832", "os": "linux"}}',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_body': '{"agent": {"url": "xxx://xxxx/xxx/xxx", '
+ '"hypervisor": "common", "md5hash": '
+ '"add6bb58e139be103324d04d82d8f545", "version": "7.0", '
+ '"architecture": "tempest-x86_64-424013832", "os": "linux", '
+ '"agent_id": 1}}',
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'203', 'x-compute-request-id': "
+ "'req-25ddaae2-0ef1-40d1-8228-59bd64a7e75b', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:00 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents',
+ 'verb': 'POST'}, {
+ 'name': 'AgentsAdminTestJSON:test_create_agent',
+ 'request_body': '{"agent": {"url": "xxx://xxxx/xxx/xxx", '
+ '"hypervisor": "kvm", "md5hash": '
+ '"add6bb58e139be103324d04d82d8f545", "version": "7.0", '
+ '"architecture": "tempest-x86-252246646", "os": "win"}}',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_body': '{"agent": {"url": "xxx://xxxx/xxx/xxx", '
+ '"hypervisor": "kvm", "md5hash": '
+ '"add6bb58e139be103324d04d82d8f545", "version": "7.0", '
+ '"architecture": "tempest-x86-252246646", "os": "win", '
+ '"agent_id": 2}}',
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'195', 'x-compute-request-id': "
+ "'req-b4136f06-c015-4e7e-995f-c43831e3ecce', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:00 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents',
+ 'verb': 'POST'}, {
+ 'name': 'AgentsAdminTestJSON:tearDown',
+ 'request_body': 'None',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_body': '',
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'0', 'x-compute-request-id': "
+ "'req-ee905fd6-a5b5-4da4-8c37-5363cb25bd9d', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:00 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents/1',
+ 'verb': 'DELETE'}, {
+ 'name': 'AgentsAdminTestJSON:_run_cleanups',
+ 'request_body': 'None',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'0', 'x-compute-request-id': "
+ "'req-e912cac0-63e0-4679-a68a-b6d18ddca074', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:00 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents/2',
+ 'verb': 'DELETE'}],
+ 'foo': [{
+ 'name': 'AgentsAdminTestJSON:setUp',
+ 'request_body': '{"agent": {"url": "xxx://xxxx/xxx/xxx", '
+ '"hypervisor": "common", "md5hash": '
+ '"add6bb58e139be103324d04d82d8f545", "version": "7.0", '
+ '"architecture": "tempest-x86_64-948635295", "os": "linux"}}',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_body': '{"agent": {"url": "xxx://xxxx/xxx/xxx", '
+ '"hypervisor": "common", "md5hash": '
+ '"add6bb58e139be103324d04d82d8f545", "version": "7.0", '
+ '"architecture": "tempest-x86_64-948635295", "os": "linux", '
+ '"agent_id": 3}}',
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'203', 'x-compute-request-id': "
+ "'req-ccd2116d-04b1-4ffe-ae32-fb623f68bf1c', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:01 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents',
+ 'verb': 'POST'}, {
+ 'name': 'AgentsAdminTestJSON:test_delete_agent',
+ 'request_body': 'None',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_body': '',
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'0', 'x-compute-request-id': "
+ "'req-6e7fa28f-ae61-4388-9a78-947c58bc0588', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:01 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents/3',
+ 'verb': 'DELETE'}, {
+ 'name': 'AgentsAdminTestJSON:test_delete_agent',
+ 'request_body': 'None',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_body': '{"agents": []}',
+ 'response_headers': "{'status': '200', 'content-length': "
+ "'14', 'content-location': "
+ "'http://23.253.76.97:8774/v2.1/"
+ "cf6b1933fe5b476fbbabb876f6d1b924/os-agents', "
+ "'x-compute-request-id': "
+ "'req-e41aa9b4-41a6-4138-ae04-220b768eb644', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:01 GMT', 'content-type': "
+ "'application/json'}",
+ 'service': 'Nova',
+ 'status_code': '200',
+ 'url': 'v2.1/<id>/os-agents',
+ 'verb': 'GET'}, {
+ 'name': 'AgentsAdminTestJSON:tearDown',
+ 'request_body': 'None',
+ 'request_headers': "{'Content-Type': 'application/json', "
+ "'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}",
+ 'response_headers': "{'status': '404', 'content-length': "
+ "'82', 'x-compute-request-id': "
+ "'req-e297aeea-91cf-4f26-b49c-8f46b1b7a926', 'vary': "
+ "'X-OpenStack-Nova-API-Version', 'connection': 'close', "
+ "'x-openstack-nova-api-version': '2.1', 'date': "
+ "'Tue, 02 Feb 2016 03:27:02 GMT', 'content-type': "
+ "'application/json; charset=UTF-8'}",
+ 'service': 'Nova',
+ 'status_code': '404',
+ 'url': 'v2.1/<id>/os-agents/3',
+ 'verb': 'DELETE'}]}
+
self.assertEqual(expected_result, parser.test_logs)
diff --git a/tempest/tests/cmd/test_tempest_init.py b/tempest/tests/cmd/test_tempest_init.py
index 031bf4d..79510be 100644
--- a/tempest/tests/cmd/test_tempest_init.py
+++ b/tempest/tests/cmd/test_tempest_init.py
@@ -45,6 +45,7 @@
init_cmd = init.TempestInit(None, None)
local_sample_conf_file = os.path.join(etc_dir_path,
'tempest.conf.sample')
+
# Verify no sample config file exist
self.assertFalse(os.path.isfile(local_sample_conf_file))
init_cmd.generate_sample_config(local_dir.path)
@@ -53,6 +54,52 @@
self.assertTrue(os.path.isfile(local_sample_conf_file))
self.assertGreater(os.path.getsize(local_sample_conf_file), 0)
+ def test_update_local_conf(self):
+ local_dir = self.useFixture(fixtures.TempDir())
+ etc_dir_path = os.path.join(local_dir.path, 'etc/')
+ os.mkdir(etc_dir_path)
+ lock_dir = os.path.join(local_dir.path, 'tempest_lock')
+ config_path = os.path.join(etc_dir_path, 'tempest.conf')
+ log_dir = os.path.join(local_dir.path, 'logs')
+
+ init_cmd = init.TempestInit(None, None)
+
+ # Generate the config file
+ init_cmd.generate_sample_config(local_dir.path)
+
+ # Create a conf file with populated values
+ config_parser_pre = init_cmd.get_configparser(config_path)
+ with open(config_path, 'w+') as conf_file:
+ # create the same section init will check for and add values to
+ config_parser_pre.add_section('oslo_concurrency')
+ config_parser_pre.set('oslo_concurrency', 'TEST', local_dir.path)
+ # create a new section
+ config_parser_pre.add_section('TEST')
+ config_parser_pre.set('TEST', 'foo', "bar")
+ config_parser_pre.write(conf_file)
+
+ # Update the config file the same way tempest init does
+ init_cmd.update_local_conf(config_path, lock_dir, log_dir)
+
+ # parse the new config file to verify it
+ config_parser_post = init_cmd.get_configparser(config_path)
+
+ # check that our value in oslo_concurrency wasn't overwritten
+ self.assertTrue(config_parser_post.has_section('oslo_concurrency'))
+ self.assertEqual(config_parser_post.get('oslo_concurrency', 'TEST'),
+ local_dir.path)
+ # check that the lock directory was set correctly
+ self.assertEqual(config_parser_post.get('oslo_concurrency',
+ 'lock_path'), lock_dir)
+
+ # check that our new section still exists and wasn't modified
+ self.assertTrue(config_parser_post.has_section('TEST'))
+ self.assertEqual(config_parser_post.get('TEST', 'foo'), 'bar')
+
+ # check that the DEFAULT values are correct
+ # NOTE(auggy): has_section ignores DEFAULT
+ self.assertEqual(config_parser_post.get('DEFAULT', 'log_dir'), log_dir)
+
def test_create_working_dir_with_existing_local_dir_non_empty(self):
fake_local_dir = self.useFixture(fixtures.TempDir())
fake_local_conf_dir = self.useFixture(fixtures.TempDir())
@@ -90,3 +137,18 @@
self.assertTrue(os.path.isfile(fake_file_moved))
self.assertTrue(os.path.isfile(local_conf_file))
self.assertTrue(os.path.isfile(local_testr_conf))
+
+ def test_take_action_fails(self):
+ class ParsedArgs(object):
+ workspace_dir = self.useFixture(fixtures.TempDir()).path
+ workspace_path = os.path.join(workspace_dir, 'workspace.yaml')
+ name = 'test'
+ dir_base = self.useFixture(fixtures.TempDir()).path
+ dir = os.path.join(dir_base, 'foo', 'bar')
+ config_dir = self.useFixture(fixtures.TempDir()).path
+ show_global_dir = False
+ pa = ParsedArgs()
+ init_cmd = init.TempestInit(None, None)
+ self.assertRaises(OSError, init_cmd.take_action, pa)
+ # trying again must raise the same error, not "workspace already exists"
+ self.assertRaises(OSError, init_cmd.take_action, pa)
diff --git a/tempest/tests/cmd/test_verify_tempest_config.py b/tempest/tests/cmd/test_verify_tempest_config.py
index 70cbf87..00b4542 100644
--- a/tempest/tests/cmd/test_verify_tempest_config.py
+++ b/tempest/tests/cmd/test_verify_tempest_config.py
@@ -188,34 +188,54 @@
False, True)
@mock.patch('tempest.lib.common.http.ClosingHttp.request')
- def test_verify_cinder_api_versions_no_v2(self, mock_request):
+ def test_verify_cinder_api_versions_no_v3(self, mock_request):
self.useFixture(mockpatch.PatchObject(
verify_tempest_config, '_get_unversioned_endpoint',
return_value='http://fake_endpoint:5000'))
- fake_resp = {'versions': [{'id': 'v1.0'}]}
+ fake_resp = {'versions': [{'id': 'v1.0'}, {'id': 'v2.0'}]}
fake_resp = json.dumps(fake_resp)
mock_request.return_value = (None, fake_resp)
fake_os = mock.MagicMock()
with mock.patch.object(verify_tempest_config,
'print_and_or_update') as print_mock:
verify_tempest_config.verify_cinder_api_versions(fake_os, True)
- print_mock.assert_called_once_with('api_v2', 'volume-feature-enabled',
- False, True)
+ print_mock.assert_not_called()
+
+ @mock.patch('tempest.lib.common.http.ClosingHttp.request')
+ def test_verify_cinder_api_versions_no_v2(self, mock_request):
+ self.useFixture(mockpatch.PatchObject(
+ verify_tempest_config, '_get_unversioned_endpoint',
+ return_value='http://fake_endpoint:5000'))
+ fake_resp = {'versions': [{'id': 'v1.0'}, {'id': 'v3.0'}]}
+ fake_resp = json.dumps(fake_resp)
+ mock_request.return_value = (None, fake_resp)
+ fake_os = mock.MagicMock()
+ with mock.patch.object(verify_tempest_config,
+ 'print_and_or_update') as print_mock:
+ verify_tempest_config.verify_cinder_api_versions(fake_os, True)
+ print_mock.assert_any_call('api_v2', 'volume-feature-enabled',
+ False, True)
+ print_mock.assert_any_call('api_v3', 'volume-feature-enabled',
+ True, True)
+ self.assertEqual(2, print_mock.call_count)
@mock.patch('tempest.lib.common.http.ClosingHttp.request')
def test_verify_cinder_api_versions_no_v1(self, mock_request):
self.useFixture(mockpatch.PatchObject(
verify_tempest_config, '_get_unversioned_endpoint',
return_value='http://fake_endpoint:5000'))
- fake_resp = {'versions': [{'id': 'v2.0'}]}
+ fake_resp = {'versions': [{'id': 'v2.0'}, {'id': 'v3.0'}]}
fake_resp = json.dumps(fake_resp)
mock_request.return_value = (None, fake_resp)
fake_os = mock.MagicMock()
with mock.patch.object(verify_tempest_config,
'print_and_or_update') as print_mock:
verify_tempest_config.verify_cinder_api_versions(fake_os, True)
- print_mock.assert_called_once_with('api_v1', 'volume-feature-enabled',
- False, True)
+ print_mock.assert_any_call('api_v1', 'volume-feature-enabled',
+ False, True)
+ print_mock.assert_any_call('api_v3', 'volume-feature-enabled',
+ True, True)
+ self.assertEqual(2, print_mock.call_count)
def test_verify_glance_version_no_v2_with_v1_1(self):
def fake_get_versions():
diff --git a/tempest/tests/cmd/test_workspace.py b/tempest/tests/cmd/test_workspace.py
index 2639d93..6ca4d42 100644
--- a/tempest/tests/cmd/test_workspace.py
+++ b/tempest/tests/cmd/test_workspace.py
@@ -17,7 +17,7 @@
import subprocess
import tempfile
-from tempest.cmd.workspace import WorkspaceManager
+from tempest.cmd import workspace
from tempest.lib.common.utils import data_utils
from tempest.tests import base
@@ -31,7 +31,8 @@
store_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, store_dir, ignore_errors=True)
self.store_file = os.path.join(store_dir, 'workspace.yaml')
- self.workspace_manager = WorkspaceManager(path=self.store_file)
+ self.workspace_manager = workspace.WorkspaceManager(
+ path=self.store_file)
self.workspace_manager.register_new_workspace(self.name, self.path)
@@ -92,7 +93,8 @@
store_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, store_dir, ignore_errors=True)
self.store_file = os.path.join(store_dir, 'workspace.yaml')
- self.workspace_manager = WorkspaceManager(path=self.store_file)
+ self.workspace_manager = workspace.WorkspaceManager(
+ path=self.store_file)
self.workspace_manager.register_new_workspace(self.name, self.path)
def test_workspace_manager_get(self):
diff --git a/tempest/tests/common/test_custom_matchers.py b/tempest/tests/common/test_custom_matchers.py
index 2656a47..07867fc 100644
--- a/tempest/tests/common/test_custom_matchers.py
+++ b/tempest/tests/common/test_custom_matchers.py
@@ -16,11 +16,47 @@
from tempest.common import custom_matchers
from tempest.tests import base
-from testtools.tests.matchers import helpers
+
+# Stolen from testtools/testtools/tests/matchers/helpers.py
+class TestMatchersInterface(object):
+
+ def test_matches_match(self):
+ matcher = self.matches_matcher
+ matches = self.matches_matches
+ mismatches = self.matches_mismatches
+ for candidate in matches:
+ self.assertEqual(None, matcher.match(candidate))
+ for candidate in mismatches:
+ mismatch = matcher.match(candidate)
+ self.assertNotEqual(None, mismatch)
+ self.assertNotEqual(None, getattr(mismatch, 'describe', None))
+
+ def test__str__(self):
+ # [(expected, object to __str__)].
+ from testtools.matchers._doctest import DocTestMatches
+ examples = self.str_examples
+ for expected, matcher in examples:
+ self.assertThat(matcher, DocTestMatches(expected))
+
+ def test_describe_difference(self):
+ # [(expected, matchee, matcher), ...]
+ examples = self.describe_examples
+ for difference, matchee, matcher in examples:
+ mismatch = matcher.match(matchee)
+ self.assertEqual(difference, mismatch.describe())
+
+ def test_mismatch_details(self):
+ # The mismatch object must provide get_details, which must return a
+ # dictionary mapping names to Content objects.
+ examples = self.describe_examples
+ for difference, matchee, matcher in examples:
+ mismatch = matcher.match(matchee)
+ details = mismatch.get_details()
+ self.assertEqual(dict(details), details)
class TestMatchesDictExceptForKeys(base.TestCase,
- helpers.TestMatchersInterface):
+ TestMatchersInterface):
matches_matcher = custom_matchers.MatchesDictExceptForKeys(
{'a': 1, 'b': 2, 'c': 3, 'd': 4}, ['c', 'd'])
diff --git a/tempest/tests/common/test_dynamic_creds.py b/tempest/tests/common/test_dynamic_creds.py
index b7cc05d..a90ca8a 100644
--- a/tempest/tests/common/test_dynamic_creds.py
+++ b/tempest/tests/common/test_dynamic_creds.py
@@ -19,23 +19,23 @@
from tempest.common import credentials_factory as credentials
from tempest.common import dynamic_creds
from tempest import config
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
+from tempest.lib.services.identity.v2 import identity_client as v2_iden_client
from tempest.lib.services.identity.v2 import roles_client as v2_roles_client
from tempest.lib.services.identity.v2 import tenants_client as \
v2_tenants_client
from tempest.lib.services.identity.v2 import token_client as v2_token_client
from tempest.lib.services.identity.v2 import users_client as v2_users_client
-from tempest.lib.services.identity.v3 import token_client as v3_token_client
-from tempest.lib.services.network import routers_client
-from tempest.services.identity.v2.json import identity_client as v2_iden_client
-from tempest.services.identity.v3.json import domains_client
-from tempest.services.identity.v3.json import identity_client as v3_iden_client
-from tempest.services.identity.v3.json import projects_client as \
+from tempest.lib.services.identity.v3 import identity_client as v3_iden_client
+from tempest.lib.services.identity.v3 import projects_client as \
v3_projects_client
-from tempest.services.identity.v3.json import roles_client as v3_roles_client
-from tempest.services.identity.v3.json import users_clients as v3_users_client
+from tempest.lib.services.identity.v3 import roles_client as v3_roles_client
+from tempest.lib.services.identity.v3 import token_client as v3_token_client
+from tempest.lib.services.identity.v3 import users_client as \
+ v3_users_client
+from tempest.lib.services.network import routers_client
+from tempest.services.identity.v3.json import domains_client
from tempest.tests import base
from tempest.tests import fake_config
from tempest.tests.lib import fake_http
@@ -55,7 +55,6 @@
users_client = v2_users_client
token_client_class = token_client.TokenClient
fake_response = fake_identity._fake_v2_response
- assign_role_on_project = 'create_user_role_on_project'
tenants_client_class = tenants_client.TenantsClient
delete_tenant = 'delete_tenant'
@@ -125,7 +124,7 @@
def _mock_assign_user_role(self):
tenant_fix = self.useFixture(mockpatch.PatchObject(
self.roles_client.RolesClient,
- self.assign_role_on_project,
+ 'create_user_role_on_project',
return_value=(rest_client.ResponseBody
(200, {}))))
return tenant_fix
@@ -176,7 +175,6 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_primary_creds(self, MockRestClient):
- cfg.CONF.set_default('neutron', False, 'service_available')
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
@@ -191,18 +189,17 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_admin_creds(self, MockRestClient):
- cfg.CONF.set_default('neutron', False, 'service_available')
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_list_roles('1234', 'admin')
self._mock_user_create('1234', 'fake_admin_user')
self._mock_tenant_create('1234', 'fake_admin_tenant')
user_mock = mock.patch.object(self.roles_client.RolesClient,
- self.assign_role_on_project)
+ 'create_user_role_on_project')
user_mock.start()
self.addCleanup(user_mock.stop)
with mock.patch.object(self.roles_client.RolesClient,
- self.assign_role_on_project) as user_mock:
+ 'create_user_role_on_project') as user_mock:
admin_creds = creds.get_admin_creds()
user_mock.assert_has_calls([
mock.call('1234', '1234', '1234')])
@@ -214,18 +211,17 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_role_creds(self, MockRestClient):
- cfg.CONF.set_default('neutron', False, 'service_available')
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_list_2_roles()
self._mock_user_create('1234', 'fake_role_user')
self._mock_tenant_create('1234', 'fake_role_tenant')
user_mock = mock.patch.object(self.roles_client.RolesClient,
- self.assign_role_on_project)
+ 'create_user_role_on_project')
user_mock.start()
self.addCleanup(user_mock.stop)
with mock.patch.object(self.roles_client.RolesClient,
- self.assign_role_on_project) as user_mock:
+ 'create_user_role_on_project') as user_mock:
role_creds = creds.get_creds_by_roles(
roles=['role1', 'role2'])
calls = user_mock.mock_calls
@@ -243,7 +239,6 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_all_cred_cleanup(self, MockRestClient):
- cfg.CONF.set_default('neutron', False, 'service_available')
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
@@ -281,7 +276,6 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_alt_creds(self, MockRestClient):
- cfg.CONF.set_default('neutron', False, 'service_available')
creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
@@ -296,8 +290,10 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_no_network_creation_with_config_set(self, MockRestClient):
- cfg.CONF.set_default('create_isolated_networks', False, group='auth')
- creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True, create_networks=False,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
+ **self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
@@ -325,7 +321,10 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_network_creation(self, MockRestClient):
- creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
+ **self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
@@ -356,7 +355,10 @@
"description": args['name'],
"security_group_rules": [],
"id": "sg-%s" % args['tenant_id']}]}
- creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
+ **self.fixed_params)
# Create primary tenant and network
self._mock_assign_user_role()
self._mock_list_role()
@@ -460,7 +462,10 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_network_alt_creation(self, MockRestClient):
- creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
+ **self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
self._mock_user_create('1234', 'fake_alt_user')
@@ -485,7 +490,10 @@
@mock.patch('tempest.lib.common.rest_client.RestClient')
def test_network_admin_creation(self, MockRestClient):
- creds = dynamic_creds.DynamicCredentialProvider(**self.fixed_params)
+ creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
+ **self.fixed_params)
self._mock_assign_user_role()
self._mock_user_create('1234', 'fake_admin_user')
self._mock_tenant_create('1234', 'fake_admin_tenant')
@@ -517,6 +525,8 @@
'dhcp': False,
}
creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
network_resources=net_dict,
**self.fixed_params)
self._mock_assign_user_role()
@@ -553,13 +563,15 @@
'dhcp': False,
}
creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
network_resources=net_dict,
**self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
self._mock_tenant_create('1234', 'fake_prim_tenant')
- self.assertRaises(exceptions.InvalidConfiguration,
+ self.assertRaises(lib_exc.InvalidConfiguration,
creds.get_primary_creds)
@mock.patch('tempest.lib.common.rest_client.RestClient')
@@ -571,13 +583,15 @@
'dhcp': False,
}
creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
network_resources=net_dict,
**self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
self._mock_tenant_create('1234', 'fake_prim_tenant')
- self.assertRaises(exceptions.InvalidConfiguration,
+ self.assertRaises(lib_exc.InvalidConfiguration,
creds.get_primary_creds)
@mock.patch('tempest.lib.common.rest_client.RestClient')
@@ -589,13 +603,15 @@
'dhcp': True,
}
creds = dynamic_creds.DynamicCredentialProvider(
+ neutron_available=True,
+ project_network_cidr='10.100.0.0/16', project_network_mask_bits=28,
network_resources=net_dict,
**self.fixed_params)
self._mock_assign_user_role()
self._mock_list_role()
self._mock_user_create('1234', 'fake_prim_user')
self._mock_tenant_create('1234', 'fake_prim_tenant')
- self.assertRaises(exceptions.InvalidConfiguration,
+ self.assertRaises(lib_exc.InvalidConfiguration,
creds.get_primary_creds)
@@ -612,7 +628,6 @@
users_client = v3_users_client
token_client_class = token_client.V3TokenClient
fake_response = fake_identity._fake_v3_response
- assign_role_on_project = 'assign_user_role_on_project'
tenants_client_class = tenants_client.ProjectsClient
delete_tenant = 'delete_project'
@@ -624,7 +639,7 @@
return_value=dict(domains=[dict(id='default',
name='Default')])))
self.patchobject(self.roles_client.RolesClient,
- 'assign_user_role_on_domain')
+ 'create_user_role_on_domain')
def _mock_list_ec2_credentials(self, user_id, tenant_id):
pass
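
For context, these tests now patch the role-assignment call by its literal method name ('create_user_role_on_project') rather than indirecting through a per-class attribute. A self-contained sketch of that mock.patch.object pattern, with FakeRolesClient as a stand-in for the real roles client:

    from unittest import mock

    class FakeRolesClient(object):
        # Stand-in for the real roles client; the unit tests never want
        # this method to run for real.
        def create_user_role_on_project(self, project_id, user_id, role_id):
            raise RuntimeError('should be patched out in unit tests')

    with mock.patch.object(FakeRolesClient,
                           'create_user_role_on_project') as user_mock:
        FakeRolesClient().create_user_role_on_project('1234', '1234', '1234')

    user_mock.assert_has_calls([mock.call('1234', '1234', '1234')])
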
diff --git a/tempest/tests/common/test_preprov_creds.py b/tempest/tests/common/test_preprov_creds.py
index 13d4713..f824b6c 100644
--- a/tempest/tests/common/test_preprov_creds.py
+++ b/tempest/tests/common/test_preprov_creds.py
@@ -23,10 +23,10 @@
import shutil
import six
-from tempest.common import cred_provider
from tempest.common import preprov_creds
from tempest import config
from tempest.lib import auth
+from tempest.lib.common import cred_provider
from tempest.lib import exceptions as lib_exc
from tempest.tests import base
from tempest.tests import fake_config
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index a56f837..a826337 100644
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -18,7 +18,7 @@
from tempest.common import waiters
from tempest import exceptions
-from tempest.services.volume.base import base_volumes_client
+from tempest.lib.services.volume.v2 import volumes_client
from tempest.tests import base
import tempest.tests.utils as utils
@@ -57,7 +57,7 @@
def test_wait_for_volume_status_error_restoring(self, mock_sleep):
# Tests that the wait method raises VolumeRestoreErrorException if
# the volume status is 'error_restoring'.
- client = mock.Mock(spec=base_volumes_client.BaseVolumesClient,
+ client = mock.Mock(spec=volumes_client.VolumesClient,
build_interval=1)
volume1 = {'volume': {'status': 'restoring-backup'}}
volume2 = {'volume': {'status': 'error_restoring'}}
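
For context, the swapped spec matters because mock.Mock(spec=SomeClass) limits the fake to SomeClass's attribute surface, so the spec has to name a client class that still exists. A small illustration with a hypothetical stand-in class:

    from unittest import mock

    class VolumesClientLike(object):
        # Hypothetical stand-in; only the attributes named here are
        # allowed on the speced mock.
        build_interval = 1

        def show_volume(self, volume_id):
            pass

    client = mock.Mock(spec=VolumesClientLike, build_interval=1)
    client.show_volume('vol-1')      # fine: part of the spec
    try:
        client.delete_everything()   # not on the spec -> AttributeError
    except AttributeError:
        pass
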
diff --git a/tempest/tests/fake_auth_provider.py b/tempest/tests/fake_auth_provider.py
deleted file mode 100644
index 769f6a6..0000000
--- a/tempest/tests/fake_auth_provider.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright 2014 Hewlett-Packard Development Company, L.P.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-class FakeAuthProvider(object):
-
- def auth_request(self, method, url, headers=None, body=None, filters=None):
- return url, headers, body
-
- def get_token(self):
- return "faketoken"
-
- def base_url(self, filters, auth_data=None):
- return "https://example.com"
diff --git a/tempest/tests/fake_tempest_plugin.py b/tempest/tests/fake_tempest_plugin.py
index f718d0b..56aae1e 100644
--- a/tempest/tests/fake_tempest_plugin.py
+++ b/tempest/tests/fake_tempest_plugin.py
@@ -18,6 +18,7 @@
class FakePlugin(plugins.TempestPlugin):
expected_load_test = ["my/test/path", "/home/dir"]
+ expected_service_clients = [{'foo': 'bar'}]
def load_tests(self):
return self.expected_load_test
@@ -28,6 +29,9 @@
def get_opt_lists(self):
return []
+ def get_service_clients(self):
+ return self.expected_service_clients
+
class FakeStevedoreObj(object):
obj = FakePlugin()
@@ -38,3 +42,26 @@
def __init__(self, name='Test1'):
self._name = name
+
+
+class FakePluginNoServiceClients(plugins.TempestPlugin):
+
+ def load_tests(self):
+ return []
+
+ def register_opts(self, conf):
+ return
+
+ def get_opt_lists(self):
+ return []
+
+
+class FakeStevedoreObjNoServiceClients(object):
+ obj = FakePluginNoServiceClients()
+
+ @property
+ def name(self):
+ return self._name
+
+ def __init__(self, name='Test2'):
+ self._name = name
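
For context, a hypothetical consumer-side sketch (not tempest's actual plugin loader) of why both fakes are useful: one plugin advertises service clients, the other does not implement get_service_clients() at all, and a registry has to tolerate both:

    def collect_service_clients(plugins):
        registry = []
        for plugin in plugins:
            getter = getattr(plugin, 'get_service_clients', None)
            if getter is None:
                # Older plugins may not implement the interface at all.
                continue
            registry.extend(getter())
        return registry

    class WithClients(object):
        def get_service_clients(self):
            return [{'foo': 'bar'}]

    class WithoutClients(object):
        pass

    assert collect_service_clients(
        [WithClients(), WithoutClients()]) == [{'foo': 'bar'}]
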
diff --git a/tempest/tests/lib/common/utils/test_data_utils.py b/tempest/tests/lib/common/utils/test_data_utils.py
index 399c4af..4446e5c 100644
--- a/tempest/tests/lib/common/utils/test_data_utils.py
+++ b/tempest/tests/lib/common/utils/test_data_utils.py
@@ -59,7 +59,7 @@
def test_rand_password(self):
actual = data_utils.rand_password()
self.assertIsInstance(actual, str)
- self.assertRegex(actual, "[A-Za-z0-9~!@#$%^&*_=+]{15,}")
+ self.assertRegex(actual, "[A-Za-z0-9~!@#%^&*_=+]{15,}")
actual2 = data_utils.rand_password()
self.assertNotEqual(actual, actual2)
@@ -67,7 +67,7 @@
actual = data_utils.rand_password(8)
self.assertIsInstance(actual, str)
self.assertEqual(len(actual), 8)
- self.assertRegex(actual, "[A-Za-z0-9~!@#$%^&*_=+]{8}")
+ self.assertRegex(actual, "[A-Za-z0-9~!@#%^&*_=+]{8}")
actual2 = data_utils.rand_password(8)
self.assertNotEqual(actual, actual2)
@@ -75,7 +75,7 @@
actual = data_utils.rand_password(2)
self.assertIsInstance(actual, str)
self.assertEqual(len(actual), 3)
- self.assertRegex(actual, "[A-Za-z0-9~!@#$%^&*_=+]{3}")
+ self.assertRegex(actual, "[A-Za-z0-9~!@#%^&*_=+]{3}")
actual2 = data_utils.rand_password(2)
self.assertNotEqual(actual, actual2)
@@ -125,13 +125,13 @@
def test_random_bytes(self):
actual = data_utils.random_bytes() # default size=1024
- self.assertIsInstance(actual, str)
- self.assertRegex(actual, "^[\x00-\xFF]{1024}")
+ self.assertIsInstance(actual, bytes)
+ self.assertEqual(1024, len(actual))
actual2 = data_utils.random_bytes()
self.assertNotEqual(actual, actual2)
actual = data_utils.random_bytes(size=2048)
- self.assertRegex(actual, "^[\x00-\xFF]{2048}")
+ self.assertEqual(2048, len(actual))
def test_get_ipv6_addr_by_EUI64(self):
actual = data_utils.get_ipv6_addr_by_EUI64('2001:db8::',
diff --git a/tempest/tests/lib/common/utils/test_test_utils.py b/tempest/tests/lib/common/utils/test_test_utils.py
index 919e219..29c5684 100644
--- a/tempest/tests/lib/common/utils/test_test_utils.py
+++ b/tempest/tests/lib/common/utils/test_test_utils.py
@@ -17,6 +17,7 @@
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions
from tempest.tests import base
+from tempest.tests import utils
class TestTestUtils(base.TestCase):
@@ -76,3 +77,27 @@
self.assertEqual(
42, test_utils.call_and_ignore_notfound_exc(m, *args, **kwargs))
m.assert_called_once_with(*args, **kwargs)
+
+ @mock.patch('time.sleep')
+ @mock.patch('time.time')
+ def test_call_until_true_when_f_never_returns_true(self, m_time, m_sleep):
+ timeout = 42 # The value doesn't matter as we mock time.time()
+ sleep = 60 # The value doesn't matter as we mock time.sleep()
+ m_time.side_effect = utils.generate_timeout_series(timeout)
+ self.assertEqual(
+ False, test_utils.call_until_true(lambda: False, timeout, sleep)
+ )
+ m_sleep.call_args_list = [mock.call(sleep)] * 2
+ m_time.call_args_list = [mock.call()] * 2
+
+ @mock.patch('time.sleep')
+ @mock.patch('time.time')
+ def test_call_until_true_when_f_returns_true(self, m_time, m_sleep):
+ timeout = 42 # The value doesn't matter as we mock time.time()
+ sleep = 60 # The value doesn't matter as we mock time.sleep()
+ m_time.return_value = 0
+ self.assertEqual(
+ True, test_utils.call_until_true(lambda: True, timeout, sleep)
+ )
+ self.assertEqual(0, m_sleep.call_count)
+ self.assertEqual(1, m_time.call_count)
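
For context, a self-contained sketch of how mocking time.time() and time.sleep() drives a call_until_true-style wait loop with no real delay; wait_until_true below is a stand-in, not tempest's implementation, and assert_not_called()/assert_has_calls() give a hard check on the call pattern when that matters:

    import itertools
    import time
    from unittest import mock

    def wait_until_true(func, duration, sleep_for):
        # Stand-in for a call_until_true-style helper.
        deadline = time.time() + duration
        while time.time() < deadline:
            if func():
                return True
            time.sleep(sleep_for)
        return False

    # time.time() returns 0 once (to set the deadline), then a value past
    # the deadline, so the loop gives up without ever really sleeping.
    fake_clock = itertools.chain([0], itertools.repeat(43))
    with mock.patch('time.sleep') as m_sleep, \
            mock.patch('time.time', side_effect=fake_clock):
        assert wait_until_true(lambda: False, 42, 60) is False
        m_sleep.assert_not_called()
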
diff --git a/tempest/tests/lib/fake_auth_provider.py b/tempest/tests/lib/fake_auth_provider.py
index fa8ab47..e4582f8 100644
--- a/tempest/tests/lib/fake_auth_provider.py
+++ b/tempest/tests/lib/fake_auth_provider.py
@@ -27,6 +27,9 @@
def base_url(self, filters, auth_data=None):
return self.fake_base_url or "https://example.com"
+ def get_token(self):
+ return "faketoken"
+
class FakeCredentials(object):
diff --git a/tempest/tests/lib/services/identity/v2/test_identity_client.py b/tempest/tests/lib/services/identity/v2/test_identity_client.py
new file mode 100644
index 0000000..96d50d7
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v2/test_identity_client.py
@@ -0,0 +1,175 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.identity.v2 import identity_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestIdentityClient(base.BaseServiceTest):
+ FAKE_TOKEN = {
+ "tokens": {
+ "id": "cbc36478b0bd8e67e89",
+ "name": "FakeToken",
+ "type": "token",
+ }
+ }
+
+ FAKE_API_INFO = {
+ "name": "API_info",
+ "type": "API",
+ "description": "test_description"
+ }
+
+ FAKE_LIST_EXTENSIONS = {
+ "extensions": {
+ "values": [
+ {
+ "updated": "2013-07-07T12:00:0-00:00",
+ "name": "OpenStack S3 API",
+ "links": [
+ {
+ "href": "https://github.com/openstack/" +
+ "identity-api",
+ "type": "text/html",
+ "rel": "describedby"
+ }
+ ],
+ "namespace": "http://docs.openstack.org/identity/" +
+ "api/ext/s3tokens/v1.0",
+ "alias": "s3tokens",
+ "description": "OpenStack S3 API."
+ },
+ {
+ "updated": "2013-12-17T12:00:0-00:00",
+ "name": "OpenStack Federation APIs",
+ "links": [
+ {
+ "href": "https://github.com/openstack/" +
+ "identity-api",
+ "type": "text/html",
+ "rel": "describedby"
+ }
+ ],
+ "namespace": "http://docs.openstack.org/identity/" +
+ "api/ext/OS-FEDERATION/v1.0",
+ "alias": "OS-FEDERATION",
+ "description": "OpenStack Identity Providers Mechanism."
+ },
+ {
+ "updated": "2014-01-20T12:00:0-00:00",
+ "name": "OpenStack Simple Certificate API",
+ "links": [
+ {
+ "href": "https://github.com/openstack/" +
+ "identity-api",
+ "type": "text/html",
+ "rel": "describedby"
+ }
+ ],
+ "namespace": "http://docs.openstack.org/identity/api/" +
+ "ext/OS-SIMPLE-CERT/v1.0",
+ "alias": "OS-SIMPLE-CERT",
+ "description": "OpenStack simple certificate extension"
+ },
+ {
+ "updated": "2013-07-07T12:00:0-00:00",
+ "name": "OpenStack OAUTH1 API",
+ "links": [
+ {
+ "href": "https://github.com/openstack/" +
+ "identity-api",
+ "type": "text/html",
+ "rel": "describedby"
+ }
+ ],
+ "namespace": "http://docs.openstack.org/identity/" +
+ "api/ext/OS-OAUTH1/v1.0",
+ "alias": "OS-OAUTH1",
+ "description": "OpenStack OAuth Delegated Auth Mechanism."
+ },
+ {
+ "updated": "2013-07-07T12:00:0-00:00",
+ "name": "OpenStack EC2 API",
+ "links": [
+ {
+ "href": "https://github.com/openstack/" +
+ "identity-api",
+ "type": "text/html",
+ "rel": "describedby"
+ }
+ ],
+ "namespace": "http://docs.openstack.org/identity/api/" +
+ "ext/OS-EC2/v1.0",
+ "alias": "OS-EC2",
+ "description": "OpenStack EC2 Credentials backend."
+ }
+ ]
+ }
+ }
+
+ def setUp(self):
+ super(TestIdentityClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = identity_client.IdentityClient(fake_auth,
+ 'identity',
+ 'regionOne')
+
+ def _test_show_api_description(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_api_description,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_API_INFO,
+ bytes_body)
+
+ def _test_list_extensions(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_extensions,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_EXTENSIONS,
+ bytes_body)
+
+ def _test_show_token(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_token,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_TOKEN,
+ bytes_body,
+ token_id="cbc36478b0bd8e67e89")
+
+ def test_show_api_description_with_str_body(self):
+ self._test_show_api_description()
+
+ def test_show_api_description_with_bytes_body(self):
+ self._test_show_api_description(bytes_body=True)
+
+ def test_show_list_extensions_with_str_body(self):
+ self._test_list_extensions()
+
+ def test_show_list_extensions_with_bytes_body(self):
+ self._test_list_extensions(bytes_body=True)
+
+ def test_show_token_with_str_body(self):
+ self._test_show_token()
+
+ def test_show_token_with_bytes_body(self):
+ self._test_show_token(bytes_body=True)
+
+ def test_delete_token(self):
+ self.check_service_client_function(
+ self.client.delete_token,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ token_id="cbc36478b0bd8e67e89",
+ status=204)
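
For context, the new service-client tests above (and the ones that follow) all lean on the same check_service_client_function pattern: patch the named RestClient verb to return a canned status and body, call the client method with keyword arguments, and compare the decoded result against the FAKE_* fixture. A rough, self-contained sketch of the idea (FakeRest and FakeServiceClient are stand-ins, not tempest code):

    import json
    from unittest import mock

    class FakeRest(object):
        # Stand-in for the low-level REST layer that the tests patch.
        def get(self, url):
            raise RuntimeError('no real HTTP in unit tests')

    class FakeServiceClient(object):
        # Stand-in service client: issue a GET, decode the JSON body.
        def __init__(self, rest):
            self.rest = rest

        def show_thing(self, thing_id):
            _resp, body = self.rest.get('/things/%s' % thing_id)
            return json.loads(body)

    expected = {'thing': {'id': '1'}}
    with mock.patch.object(FakeRest, 'get',
                           return_value=(mock.Mock(status=200),
                                         json.dumps(expected))):
        assert FakeServiceClient(FakeRest()).show_thing('1') == expected
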
diff --git a/tempest/tests/lib/services/identity/v3/test_credentials_client.py b/tempest/tests/lib/services/identity/v3/test_credentials_client.py
new file mode 100644
index 0000000..29d7496
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_credentials_client.py
@@ -0,0 +1,179 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.identity.v3 import credentials_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestCredentialsClient(base.BaseServiceTest):
+ FAKE_CREATE_CREDENTIAL = {
+ "credential": {
+ "blob": "{\"access\":\"181920\",\"secret\":\"secretKey\"}",
+ "project_id": "731fc6f265cd486d900f16e84c5cb594",
+ "type": "ec2",
+ "user_id": "bb5476fd12884539b41d5a88f838d773"
+ }
+ }
+
+ FAKE_INFO_CREDENTIAL = {
+ "credential": {
+ "user_id": "bb5476fd12884539b41d5a88f838d773",
+ "links": {
+ "self": "http://example.com/identity/v3/credentials/" +
+ "207e9b76935efc03804d3dd6ab52d22e9b22a0711e4" +
+ "ada4ff8b76165a07311d7"
+ },
+ "blob": "{\"access\": \"a42a27755ce6442596b049bd7dd8a563\"," +
+ " \"secret\": \"71faf1d40bb24c82b479b1c6fbbd9f0c\"}",
+ "project_id": "6e01855f345f4c59812999b5e459137d",
+ "type": "ec2",
+ "id": "207e9b76935efc03804d3dd6ab52d22e9b22a0711e4ada4f"
+ }
+ }
+
+ FAKE_LIST_CREDENTIALS = {
+ "credentials": [
+ {
+ "user_id": "bb5476fd12884539b41d5a88f838d773",
+ "links": {
+ "self": "http://example.com/identity/v3/credentials/" +
+ "207e9b76935efc03804d3dd6ab52d22e9b22a0711e4" +
+ "ada4ff8b76165a07311d7"
+ },
+ "blob": "{\"access\": \"a42a27755ce6442596b049bd7dd8a563\"," +
+ " \"secret\": \"71faf1d40bb24c82b479b1c6fbbd9f0c\"," +
+ " \"trust_id\": null}",
+ "project_id": "6e01855f345f4c59812999b5e459137d",
+ "type": "ec2",
+ "id": "207e9b76935efc03804d3dd6ab52d22e9b22a0711e4ada4f"
+ },
+ {
+ "user_id": "6f556708d04b4ea6bc72d7df2296b71a",
+ "links": {
+ "self": "http://example.com/identity/v3/credentials/" +
+ "2441494e52ab6d594a34d74586075cb299489bdd1e9" +
+ "389e3ab06467a4f460609"
+ },
+ "blob": "{\"access\": \"7da79ff0aa364e1396f067e352b9b79a\"," +
+ " \"secret\": \"7a18d68ba8834b799d396f3ff6f1e98c\"," +
+ " \"trust_id\": null}",
+ "project_id": "1a1d14690f3c4ec5bf5f321c5fde3c16",
+ "type": "ec2",
+ "id": "2441494e52ab6d594a34d74586075cb299489bdd1e9389e3"
+ },
+ {
+ "user_id": "c14107e65d5c4a7f8894fc4b3fc209ff",
+ "links": {
+ "self": "http://example.com/identity/v3/credentials/" +
+ "3397b204b5f04c495bcdc8f34c8a39996f280f91726" +
+ "58241873e15f070ec79d7"
+ },
+ "blob": "{\"access\": \"db9c58a558534a10a070110de4f9f20c\"," +
+ " \"secret\": \"973e790b88db447ba6f93bca02bc745b\"," +
+ " \"trust_id\": null}",
+ "project_id": "7396e43183db40dcbf40dd727637b548",
+ "type": "ec2",
+ "id": "3397b204b5f04c495bcdc8f34c8a39996f280f9172658241"
+ },
+ {
+ "user_id": "bb5476fd12884539b41d5a88f838d773",
+ "links": {
+ "self": "http://example.com/identity/v3/credentials/" +
+ "7ef4faa904ae7b8b4ddc7bad15b05ee359dad7d7a9b" +
+ "82861d4ad92fdbbb2eb4e"
+ },
+ "blob": "{\"access\": \"7d7559359b57419eb5f5f5dcd65ab57d\"," +
+ " \"secret\": \"570652bcf8c2483c86eb29e9734eed3c\"," +
+ " \"trust_id\": null}",
+ "project_id": "731fc6f265cd486d900f16e84c5cb594",
+ "type": "ec2",
+ "id": "7ef4faa904ae7b8b4ddc7bad15b05ee359dad7d7a9b82861"
+ },
+ ],
+ "links": {
+ "self": "http://example.com/identity/v3/credentials",
+ "previous": None,
+ "next": None
+ }
+ }
+
+ def setUp(self):
+ super(TestCredentialsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = credentials_client.CredentialsClient(fake_auth,
+ 'identity',
+ 'regionOne')
+
+ def _test_create_credential(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_credential,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_CREDENTIAL,
+ bytes_body, status=201)
+
+ def _test_show_credential(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_credential,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_INFO_CREDENTIAL,
+ bytes_body,
+ credential_id="207e9b76935efc03804d3dd6ab52d22e9b22a0711e4ada4f")
+
+ def _test_update_credential(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_credential,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_INFO_CREDENTIAL,
+ bytes_body,
+ credential_id="207e9b76935efc03804d3dd6ab52d22e9b22a0711e4ada4f")
+
+ def _test_list_credentials(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_credentials,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_CREDENTIALS,
+ bytes_body)
+
+ def test_create_credential_with_str_body(self):
+ self._test_create_credential()
+
+ def test_create_credential_with_bytes_body(self):
+ self._test_create_credential(bytes_body=True)
+
+ def test_show_credential_with_str_body(self):
+ self._test_show_credential()
+
+ def test_show_credential_with_bytes_body(self):
+ self._test_show_credential(bytes_body=True)
+
+ def test_update_credential_with_str_body(self):
+ self._test_update_credential()
+
+ def test_update_credential_with_bytes_body(self):
+ self._test_update_credential(bytes_body=True)
+
+ def test_list_credentials_with_str_body(self):
+ self._test_list_credentials()
+
+ def test_list_credentials_with_bytes_body(self):
+ self._test_list_credentials(bytes_body=True)
+
+ def test_delete_credential(self):
+ self.check_service_client_function(
+ self.client.delete_credential,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ credential_id="207e9b76935efc03804d3dd6ab52d22e9b22a0711e4ada4f",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_groups_client.py b/tempest/tests/lib/services/identity/v3/test_groups_client.py
new file mode 100644
index 0000000..38cf3ae
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_groups_client.py
@@ -0,0 +1,213 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from tempest.lib.services.identity.v3 import groups_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestGroupsClient(base.BaseServiceTest):
+ FAKE_CREATE_GROUP = {
+ 'group': {
+ 'description': 'Tempest Group Description',
+ 'domain_id': 'TempestDomain',
+ 'name': 'Tempest Group',
+ }
+ }
+
+ FAKE_GROUP_INFO = {
+ 'group': {
+ 'description': 'Tempest Group Description',
+ 'domain_id': 'TempestDomain',
+ 'id': '6e13e2068cf9466e98950595baf6bb35',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups/' +
+ '6e13e2068cf9466e98950595baf6bb35'
+ },
+ 'name': 'Tempest Group',
+ }
+ }
+
+ FAKE_GROUP_LIST = {
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups',
+ 'previous': None,
+ 'next': None,
+ },
+ 'groups': [
+ {
+ 'description': 'Tempest Group One Description',
+ 'domain_id': 'TempestDomain',
+ 'id': '1c92f3453ed34291a074b87493455b8f',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups/' +
+ '1c92f3453ed34291a074b87493455b8f'
+ },
+ 'name': 'Tempest Group One',
+ },
+ {
+ 'description': 'Tempest Group Two Description',
+ 'domain_id': 'TempestDomain',
+ 'id': 'ce9e7dafed3b4877a7d4466ed730a9ee',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups/' +
+ 'ce9e7dafed3b4877a7d4466ed730a9ee'
+ },
+ 'name': 'Tempest Group Two',
+ },
+ ]
+ }
+
+ FAKE_USER_LIST = {
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups/' +
+ '6e13e2068cf9466e98950595baf6bb35/users',
+ 'previous': None,
+ 'next': None,
+ },
+ 'users': [
+ {
+ 'domain_id': 'TempestDomain',
+ 'description': 'Tempest Test User One Description',
+ 'enabled': True,
+ 'id': '642688fa65a84217b86cef3c063de2b9',
+ 'name': 'TempestUserOne',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/users/' +
+ '642688fa65a84217b86cef3c063de2b9'
+ }
+ },
+ {
+ 'domain_id': 'TempestDomain',
+ 'description': 'Tempest Test User Two Description',
+ 'enabled': True,
+ 'id': '1048ead6f8ef4a859b44ffbce3ac0b52',
+ 'name': 'TempestUserTwo',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/users/' +
+ '1048ead6f8ef4a859b44ffbce3ac0b52'
+ }
+ },
+ ]
+ }
+
+ def setUp(self):
+ super(TestGroupsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = groups_client.GroupsClient(fake_auth, 'identity',
+ 'regionOne')
+
+ def _test_create_group(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_group,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_GROUP,
+ bytes_body,
+ status=201,
+ )
+
+ def _test_show_group(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_group,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_GROUP_INFO,
+ bytes_body,
+ group_id='6e13e2068cf9466e98950595baf6bb35',
+ )
+
+ def _test_list_groups(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_groups,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_GROUP_LIST,
+ bytes_body,
+ )
+
+ def _test_update_group(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_group,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_GROUP_INFO,
+ bytes_body,
+ group_id='6e13e2068cf9466e98950595baf6bb35',
+ name='NewName',
+ )
+
+ def _test_list_users_in_group(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_group_users,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_USER_LIST,
+ bytes_body,
+ group_id='6e13e2068cf9466e98950595baf6bb35',
+ )
+
+ def test_create_group_with_string_body(self):
+ self._test_create_group()
+
+ def test_create_group_with_bytes_body(self):
+ self._test_create_group(bytes_body=True)
+
+ def test_show_group_with_string_body(self):
+ self._test_show_group()
+
+ def test_show_group_with_bytes_body(self):
+ self._test_show_group(bytes_body=True)
+
+ def test_list_groups_with_string_body(self):
+ self._test_list_groups()
+
+ def test_list_groups_with_bytes_body(self):
+ self._test_list_groups(bytes_body=True)
+
+ def test_update_group_with_string_body(self):
+ self._test_update_group()
+
+ def test_update_group_with_bytes_body(self):
+ self._test_update_group(bytes_body=True)
+
+ def test_list_users_in_group_with_string_body(self):
+ self._test_list_users_in_group()
+
+ def test_list_users_in_group_with_bytes_body(self):
+ self._test_list_users_in_group(bytes_body=True)
+
+ def test_delete_group(self):
+ self.check_service_client_function(
+ self.client.delete_group,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ group_id='6e13e2068cf9466e98950595baf6bb35',
+ status=204,
+ )
+
+ def test_add_user_to_group(self):
+ self.check_service_client_function(
+ self.client.add_group_user,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ status=204,
+ group_id='6e13e2068cf9466e98950595baf6bb35',
+ user_id='642688fa65a84217b86cef3c063de2b9',
+ )
+
+ def test_check_user_in_group(self):
+ self.check_service_client_function(
+ self.client.check_group_user_existence,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ status=204,
+ group_id='6e13e2068cf9466e98950595baf6bb35',
+ user_id='642688fa65a84217b86cef3c063de2b9',
+ )
diff --git a/tempest/tests/lib/services/identity/v3/test_identity_client.py b/tempest/tests/lib/services/identity/v3/test_identity_client.py
new file mode 100644
index 0000000..9eaaaaf
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_identity_client.py
@@ -0,0 +1,75 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.identity.v3 import identity_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestIdentityClient(base.BaseServiceTest):
+ FAKE_TOKEN = {
+ "tokens": {
+ "id": "cbc36478b0bd8e67e89",
+ "name": "FakeToken",
+ "type": "token",
+ }
+ }
+
+ FAKE_API_INFO = {
+ "name": "API_info",
+ "type": "API",
+ "description": "test_description"
+ }
+
+ def setUp(self):
+ super(TestIdentityClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = identity_client.IdentityClient(fake_auth,
+ 'identity',
+ 'regionOne')
+
+ def _test_show_api_description(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_api_description,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_API_INFO,
+ bytes_body)
+
+ def _test_show_token(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_token,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_TOKEN,
+ bytes_body,
+ resp_token="cbc36478b0bd8e67e89")
+
+ def test_show_api_description_with_str_body(self):
+ self._test_show_api_description()
+
+ def test_show_api_description_with_bytes_body(self):
+ self._test_show_api_description(bytes_body=True)
+
+ def test_show_token_with_str_body(self):
+ self._test_show_token()
+
+ def test_show_token_with_bytes_body(self):
+ self._test_show_token(bytes_body=True)
+
+ def test_delete_token(self):
+ self.check_service_client_function(
+ self.client.delete_token,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ resp_token="cbc36478b0bd8e67e89",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_inherited_roles_client.py b/tempest/tests/lib/services/identity/v3/test_inherited_roles_client.py
new file mode 100644
index 0000000..9da3cce
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_inherited_roles_client.py
@@ -0,0 +1,220 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.identity.v3 import inherited_roles_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestInheritedRolesClient(base.BaseServiceTest):
+ FAKE_LIST_INHERITED_ROLES = {
+ "roles": [
+ {
+ "id": "1",
+ "name": "test",
+ "links": "example.com"
+ },
+ {
+ "id": "2",
+ "name": "test2",
+ "links": "example.com"
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestInheritedRolesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = inherited_roles_client.InheritedRolesClient(
+ fake_auth, 'identity', 'regionOne')
+
+ def _test_create_inherited_role_on_domains_user(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_inherited_role_on_domains_user,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_list_inherited_project_role_for_user_on_domain(
+ self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_inherited_project_role_for_user_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_INHERITED_ROLES,
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123")
+
+ def _test_create_inherited_role_on_domains_group(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_inherited_role_on_domains_group,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_list_inherited_project_role_for_group_on_domain(
+ self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_inherited_project_role_for_group_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_INHERITED_ROLES,
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123")
+
+ def _test_create_inherited_role_on_projects_user(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_inherited_role_on_projects_user,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_create_inherited_role_on_projects_group(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_inherited_role_on_projects_group,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_create_inherited_role_on_domains_user_with_str_body(self):
+ self._test_create_inherited_role_on_domains_user()
+
+ def test_create_inherited_role_on_domains_user_with_bytes_body(self):
+ self._test_create_inherited_role_on_domains_user(bytes_body=True)
+
+ def test_create_inherited_role_on_domains_group_with_str_body(self):
+ self._test_create_inherited_role_on_domains_group()
+
+ def test_create_inherited_role_on_domains_group_with_bytes_body(self):
+ self._test_create_inherited_role_on_domains_group(bytes_body=True)
+
+ def test_create_inherited_role_on_projects_user_with_str_body(self):
+ self._test_create_inherited_role_on_projects_user()
+
+ def test_create_inherited_role_on_projects_group_with_bytes_body(self):
+ self._test_create_inherited_role_on_projects_group(bytes_body=True)
+
+ def test_list_inherited_project_role_for_user_on_domain_with_str_body(
+ self):
+ self._test_list_inherited_project_role_for_user_on_domain()
+
+ def test_list_inherited_project_role_for_user_on_domain_with_bytes_body(
+ self):
+ self._test_list_inherited_project_role_for_user_on_domain(
+ bytes_body=True)
+
+ def test_list_inherited_project_role_for_group_on_domain_with_str_body(
+ self):
+ self._test_list_inherited_project_role_for_group_on_domain()
+
+ def test_list_inherited_project_role_for_group_on_domain_with_bytes_body(
+ self):
+ self._test_list_inherited_project_role_for_group_on_domain(
+ bytes_body=True)
+
+ def test_delete_inherited_role_from_user_on_domain(self):
+ self.check_service_client_function(
+ self.client.delete_inherited_role_from_user_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_user_inherited_project_role_on_domain(self):
+ self.check_service_client_function(
+ self.client.check_user_inherited_project_role_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_delete_inherited_role_from_group_on_domain(self):
+ self.check_service_client_function(
+ self.client.delete_inherited_role_from_group_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_group_inherited_project_role_on_domain(self):
+ self.check_service_client_function(
+ self.client.check_group_inherited_project_role_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_delete_inherited_role_from_user_on_project(self):
+ self.check_service_client_function(
+ self.client.delete_inherited_role_from_user_on_project,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_user_has_flag_on_inherited_to_project(self):
+ self.check_service_client_function(
+ self.client.check_user_has_flag_on_inherited_to_project,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_delete_inherited_role_from_group_on_project(self):
+ self.check_service_client_function(
+ self.client.delete_inherited_role_from_group_on_project,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_group_has_flag_on_inherited_to_project(self):
+ self.check_service_client_function(
+ self.client.check_group_has_flag_on_inherited_to_project,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_projects_client.py b/tempest/tests/lib/services/identity/v3/test_projects_client.py
new file mode 100644
index 0000000..6ffbcde
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_projects_client.py
@@ -0,0 +1,178 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from tempest.lib.services.identity.v3 import projects_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestProjectsClient(base.BaseServiceTest):
+ FAKE_CREATE_PROJECT = {
+ "project": {
+ "description": "My new project",
+ "domain_id": "default",
+ "enabled": True,
+ "is_domain": False,
+ "name": "myNewProject"
+ }
+ }
+
+ FAKE_PROJECT_INFO = {
+ "project": {
+ "is_domain": False,
+ "description": None,
+ "domain_id": "default",
+ "enabled": True,
+ "id": "0c4e939acacf4376bdcd1129f1a054ad",
+ "links": {
+ "self": "http://example.com/identity/v3/projects/0c4e" +
+ "939acacf4376bdcd1129f1a054ad"
+ },
+ "name": "admin",
+ "parent_id": "default"
+ }
+ }
+
+ FAKE_LIST_PROJECTS = {
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/projects"
+ },
+ "projects": [
+ {
+ "is_domain": False,
+ "description": None,
+ "domain_id": "default",
+ "enabled": True,
+ "id": "0c4e939acacf4376bdcd1129f1a054ad",
+ "links": {
+ "self": "http://example.com/identity/v3/projects" +
+ "/0c4e939acacf4376bdcd1129f1a054ad"
+ },
+ "name": "admin",
+ "parent_id": None
+ },
+ {
+ "is_domain": False,
+ "description": None,
+ "domain_id": "default",
+ "enabled": True,
+ "id": "0cbd49cbf76d405d9c86562e1d579bd3",
+ "links": {
+ "self": "http://example.com/identity/v3/projects" +
+ "/0cbd49cbf76d405d9c86562e1d579bd3"
+ },
+ "name": "demo",
+ "parent_id": None
+ },
+ {
+ "is_domain": False,
+ "description": None,
+ "domain_id": "default",
+ "enabled": True,
+ "id": "2db68fed84324f29bb73130c6c2094fb",
+ "links": {
+ "self": "http://example.com/identity/v3/projects" +
+ "/2db68fed84324f29bb73130c6c2094fb"
+ },
+ "name": "swifttenanttest2",
+ "parent_id": None
+ },
+ {
+ "is_domain": False,
+ "description": None,
+ "domain_id": "default",
+ "enabled": True,
+ "id": "3d594eb0f04741069dbbb521635b21c7",
+ "links": {
+ "self": "http://example.com/identity/v3/projects" +
+ "/3d594eb0f04741069dbbb521635b21c7"
+ },
+ "name": "service",
+ "parent_id": None
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestProjectsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = projects_client.ProjectsClient(fake_auth,
+ 'identity',
+ 'regionOne')
+
+ def _test_create_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_project,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_PROJECT,
+ bytes_body,
+ name=self.FAKE_CREATE_PROJECT["project"]["name"],
+ status=201)
+
+ def _test_show_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_project,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_PROJECT_INFO,
+ bytes_body,
+ project_id="0c4e939acacf4376bdcd1129f1a054ad")
+
+ def _test_list_projects(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_projects,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_PROJECTS,
+ bytes_body)
+
+ def _test_update_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_project,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_PROJECT_INFO,
+ bytes_body,
+ project_id="0c4e939acacf4376bdcd1129f1a054ad")
+
+ def test_create_project_with_str_body(self):
+ self._test_create_project()
+
+ def test_create_project_with_bytes_body(self):
+ self._test_create_project(bytes_body=True)
+
+ def test_show_project_with_str_body(self):
+ self._test_show_project()
+
+ def test_show_project_with_bytes_body(self):
+ self._test_show_project(bytes_body=True)
+
+ def test_list_projects_with_str_body(self):
+ self._test_list_projects()
+
+ def test_list_projects_with_bytes_body(self):
+ self._test_list_projects(bytes_body=True)
+
+ def test_update_project_with_str_body(self):
+ self._test_update_project()
+
+ def test_update_project_with_bytes_body(self):
+ self._test_update_project(bytes_body=True)
+
+ def test_delete_project(self):
+ self.check_service_client_function(
+ self.client.delete_project,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ project_id="0c4e939acacf4376bdcd1129f1a054ad",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_regions_client.py b/tempest/tests/lib/services/identity/v3/test_regions_client.py
new file mode 100644
index 0000000..a2cb86f
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_regions_client.py
@@ -0,0 +1,125 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from tempest.lib.services.identity.v3 import regions_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestRegionsClient(base.BaseServiceTest):
+ FAKE_CREATE_REGION = {
+ "region": {
+ "description": "My subregion",
+ "id": "RegionOneSubRegion",
+ "parent_region_id": "RegionOne"
+ }
+ }
+
+ FAKE_REGION_INFO = {
+ "region": {
+ "description": "My subregion 3",
+ "id": "RegionThree",
+ "links": {
+ "self": "http://example.com/identity/v3/regions/RegionThree"
+ },
+ "parent_region_id": "RegionOne"
+ }
+ }
+
+ FAKE_LIST_REGIONS = {
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/regions"
+ },
+ "regions": [
+ {
+ "description": "",
+ "id": "RegionOne",
+ "links": {
+ "self": "http://example.com/identity/v3/regions/RegionOne"
+ },
+ "parent_region_id": None
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestRegionsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = regions_client.RegionsClient(fake_auth, 'identity',
+ 'regionOne')
+
+ def _test_create_region(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_region,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_REGION,
+ bytes_body,
+ status=201)
+
+ def _test_show_region(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_region,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_REGION_INFO,
+ bytes_body,
+ region_id="RegionThree")
+
+ def _test_list_regions(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_regions,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_REGIONS,
+ bytes_body)
+
+ def _test_update_region(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_region,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_REGION_INFO,
+ bytes_body,
+ region_id="RegionThree")
+
+ def test_create_region_with_str_body(self):
+ self._test_create_region()
+
+ def test_create_region_with_bytes_body(self):
+ self._test_create_region(bytes_body=True)
+
+ def test_show_region_with_str_body(self):
+ self._test_show_region()
+
+ def test_show_region_with_bytes_body(self):
+ self._test_show_region(bytes_body=True)
+
+ def test_list_regions_with_str_body(self):
+ self._test_list_regions()
+
+ def test_list_regions_with_bytes_body(self):
+ self._test_list_regions(bytes_body=True)
+
+ def test_update_region_with_str_body(self):
+ self._test_update_region()
+
+ def test_update_region_with_bytes_body(self):
+ self._test_update_region(bytes_body=True)
+
+ def test_delete_region(self):
+ self.check_service_client_function(
+ self.client.delete_region,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ region_id="RegionThree",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_roles_client.py b/tempest/tests/lib/services/identity/v3/test_roles_client.py
new file mode 100644
index 0000000..bad1ef9
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_roles_client.py
@@ -0,0 +1,313 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.identity.v3 import roles_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestRolesClient(base.BaseServiceTest):
+ FAKE_ROLE_INFO = {
+ "role": {
+ "domain_id": "1",
+ "id": "1",
+ "name": "test",
+ "links": "example.com"
+ }
+ }
+
+ FAKE_LIST_ROLES = {
+ "roles": [
+ {
+ "domain_id": "1",
+ "id": "1",
+ "name": "test",
+ "links": "example.com"
+ },
+ {
+ "domain_id": "2",
+ "id": "2",
+ "name": "test2",
+ "links": "example.com"
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestRolesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = roles_client.RolesClient(fake_auth,
+ 'identity', 'regionOne')
+
+ def _test_create_role(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_role,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_ROLE_INFO,
+ bytes_body,
+ domain_id="1",
+ name="test",
+ status=201)
+
+ def _test_show_role(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_role,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_ROLE_INFO,
+ bytes_body,
+ role_id="1")
+
+ def _test_list_roles(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_roles,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body)
+
+ def _test_update_role(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_role,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_ROLE_INFO,
+ bytes_body,
+ role_id="1",
+ name="test")
+
+ def _test_create_user_role_on_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_user_role_on_project,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_create_user_role_on_domain(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_user_role_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_list_user_roles_on_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_user_roles_on_project,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body,
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123")
+
+ def _test_list_user_roles_on_domain(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_user_roles_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123")
+
+ def _test_create_group_role_on_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_group_role_on_project,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_create_group_role_on_domain(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_group_role_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def _test_list_group_roles_on_project(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_group_roles_on_project,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body,
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123")
+
+ def _test_list_group_roles_on_domain(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_group_roles_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_ROLES,
+ bytes_body,
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123")
+
+ def test_create_role_with_str_body(self):
+ self._test_create_role()
+
+ def test_create_role_with_bytes_body(self):
+ self._test_create_role(bytes_body=True)
+
+ def test_show_role_with_str_body(self):
+ self._test_show_role()
+
+ def test_show_role_with_bytes_body(self):
+ self._test_show_role(bytes_body=True)
+
+ def test_list_roles_with_str_body(self):
+ self._test_list_roles()
+
+ def test_list_roles_with_bytes_body(self):
+ self._test_list_roles(bytes_body=True)
+
+ def test_update_role_with_str_body(self):
+ self._test_update_role()
+
+ def test_update_role_with_bytes_body(self):
+ self._test_update_role(bytes_body=True)
+
+ def test_delete_role(self):
+ self.check_service_client_function(
+ self.client.delete_role,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ role_id="1",
+ status=204)
+
+ def test_create_user_role_on_project_with_str_body(self):
+ self._test_create_user_role_on_project()
+
+ def test_create_user_role_on_project_with_bytes_body(self):
+ self._test_create_user_role_on_project(bytes_body=True)
+
+ def test_create_user_role_on_domain_with_str_body(self):
+ self._test_create_user_role_on_domain()
+
+ def test_create_user_role_on_domain_with_bytes_body(self):
+ self._test_create_user_role_on_domain(bytes_body=True)
+
+ def test_create_group_role_on_domain_with_str_body(self):
+ self._test_create_group_role_on_domain()
+
+ def test_create_group_role_on_domain_with_bytes_body(self):
+ self._test_create_group_role_on_domain(bytes_body=True)
+
+ def test_list_user_roles_on_project_with_str_body(self):
+ self._test_list_user_roles_on_project()
+
+ def test_list_user_roles_on_project_with_bytes_body(self):
+ self._test_list_user_roles_on_project(bytes_body=True)
+
+ def test_list_user_roles_on_domain_with_str_body(self):
+ self._test_list_user_roles_on_domain()
+
+ def test_list_user_roles_on_domain_with_bytes_body(self):
+ self._test_list_user_roles_on_domain(bytes_body=True)
+
+ def test_list_group_roles_on_domain_with_str_body(self):
+ self._test_list_group_roles_on_domain()
+
+ def test_list_group_roles_on_domain_with_bytes_body(self):
+ self._test_list_group_roles_on_domain(bytes_body=True)
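+
+    # Project-scoped group role coverage, mirroring the domain-scoped tests
+    # above (the corresponding helpers are defined earlier in this class).
+    def test_create_group_role_on_project_with_str_body(self):
+        self._test_create_group_role_on_project()
+
+    def test_create_group_role_on_project_with_bytes_body(self):
+        self._test_create_group_role_on_project(bytes_body=True)
+
+    def test_list_group_roles_on_project_with_str_body(self):
+        self._test_list_group_roles_on_project()
+
+    def test_list_group_roles_on_project_with_bytes_body(self):
+        self._test_list_group_roles_on_project(bytes_body=True)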
+
+ def test_delete_role_from_user_on_project(self):
+ self.check_service_client_function(
+ self.client.delete_role_from_user_on_project,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_delete_role_from_user_on_domain(self):
+ self.check_service_client_function(
+ self.client.delete_role_from_user_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_delete_role_from_group_on_project(self):
+ self.check_service_client_function(
+ self.client.delete_role_from_group_on_project,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_delete_role_from_group_on_domain(self):
+ self.check_service_client_function(
+ self.client.delete_role_from_group_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_user_role_existence_on_project(self):
+ self.check_service_client_function(
+ self.client.check_user_role_existence_on_project,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_user_role_existence_on_domain(self):
+ self.check_service_client_function(
+ self.client.check_user_role_existence_on_domain,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ user_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_role_from_group_on_project_existence(self):
+ self.check_service_client_function(
+ self.client.check_role_from_group_on_project_existence,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ project_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
+
+ def test_check_role_from_group_on_domain_existence(self):
+ self.check_service_client_function(
+ self.client.check_role_from_group_on_domain_existence,
+ 'tempest.lib.common.rest_client.RestClient.head',
+ {},
+ domain_id="b344506af7644f6794d9cb316600b020",
+ group_id="123",
+ role_id="1234",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_services_client.py b/tempest/tests/lib/services/identity/v3/test_services_client.py
new file mode 100644
index 0000000..f87fcce
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_services_client.py
@@ -0,0 +1,149 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from tempest.lib.services.identity.v3 import services_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestServicesClient(base.BaseServiceTest):
+ FAKE_CREATE_SERVICE = {
+ "service": {
+ "type": "compute",
+ "name": "compute2",
+ "description": "Compute service 2"
+ }
+ }
+
+ FAKE_SERVICE_INFO = {
+ "service": {
+ "description": "Keystone Identity Service",
+ "enabled": True,
+ "id": "686766",
+ "links": {
+ "self": "http://example.com/identity/v3/services/686766"
+ },
+ "name": "keystone",
+ "type": "identity"
+ }
+ }
+
+ FAKE_LIST_SERVICES = {
+ "links": {
+ "next": None,
+ "previous": None,
+ "self": "http://example.com/identity/v3/services"
+ },
+ "services": [
+ {
+ "description": "Nova Compute Service",
+ "enabled": True,
+ "id": "1999c3",
+ "links": {
+ "self": "http://example.com/identity/v3/services/1999c3"
+ },
+ "name": "nova",
+ "type": "compute"
+ },
+ {
+ "description": "Cinder Volume Service V2",
+ "enabled": True,
+ "id": "392166",
+ "links": {
+ "self": "http://example.com/identity/v3/services/392166"
+ },
+ "name": "cinderv2",
+ "type": "volumev2"
+ },
+ {
+ "description": "Neutron Service",
+ "enabled": True,
+ "id": "4fe41a",
+ "links": {
+ "self": "http://example.com/identity/v3/services/4fe41a"
+ },
+ "name": "neutron",
+ "type": "network"
+ }
+ ]
+ }
+
+ def setUp(self):
+ super(TestServicesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = services_client.ServicesClient(fake_auth, 'identity',
+ 'regionOne')
+
+ def _test_create_service(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_service,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_SERVICE,
+ bytes_body,
+ status=201)
+
+ def _test_show_service(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_service,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_SERVICE_INFO,
+ bytes_body,
+ service_id="686766")
+
+ def _test_list_services(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_services,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_SERVICES,
+ bytes_body)
+
+ def _test_update_service(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_service,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_SERVICE_INFO,
+ bytes_body,
+ service_id="686766")
+
+ def test_create_service_with_str_body(self):
+ self._test_create_service()
+
+ def test_create_service_with_bytes_body(self):
+ self._test_create_service(bytes_body=True)
+
+ def test_show_service_with_str_body(self):
+ self._test_show_service()
+
+ def test_show_service_with_bytes_body(self):
+ self._test_show_service(bytes_body=True)
+
+ def test_list_services_with_str_body(self):
+ self._test_list_services()
+
+ def test_list_services_with_bytes_body(self):
+ self._test_list_services(bytes_body=True)
+
+ def test_update_service_with_str_body(self):
+ self._test_update_service()
+
+ def test_update_service_with_bytes_body(self):
+ self._test_update_service(bytes_body=True)
+
+ def test_delete_service(self):
+ self.check_service_client_function(
+ self.client.delete_service,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ service_id="686766",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_trusts_client.py b/tempest/tests/lib/services/identity/v3/test_trusts_client.py
new file mode 100644
index 0000000..a1ca020
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_trusts_client.py
@@ -0,0 +1,150 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from tempest.lib.services.identity.v3 import trusts_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestTrustsClient(base.BaseServiceTest):
+ FAKE_CREATE_TRUST = {
+ "trust": {
+ "expires_at": "2013-02-27T18:30:59.999999Z",
+ "impersonation": True,
+ "allow_redelegation": True,
+ "project_id": "ddef321",
+ "roles": [
+ {
+ "name": "member"
+ }
+ ],
+ "trustee_user_id": "86c0d5",
+ "trustor_user_id": "a0fdfd"
+ }
+ }
+
+ FAKE_LIST_TRUSTS = {
+ "trusts": [
+ {
+ "id": "1ff900",
+ "expires_at":
+ "2013-02-27T18:30:59.999999Z",
+ "impersonation": True,
+ "links": {
+ "self":
+ "http://example.com/identity/v3/OS-TRUST/trusts/1ff900"
+ },
+ "project_id": "0f1233",
+ "trustee_user_id": "86c0d5",
+ "trustor_user_id": "a0fdfd"
+ },
+ {
+ "id": "f4513a",
+ "impersonation": False,
+ "links": {
+ "self":
+ "http://example.com/identity/v3/OS-TRUST/trusts/f45513a"
+ },
+ "project_id": "0f1233",
+ "trustee_user_id": "86c0d5",
+ "trustor_user_id": "3cd2ce"
+ }
+ ]
+ }
+
+ FAKE_TRUST_INFO = {
+ "trust": {
+ "id": "987fe8",
+ "expires_at": "2013-02-27T18:30:59.999999Z",
+ "impersonation": True,
+ "links": {
+ "self":
+ "http://example.com/identity/v3/OS-TRUST/trusts/987fe8"
+ },
+ "roles": [
+ {
+ "id": "ed7b78",
+ "links": {
+ "self":
+ "http://example.com/identity/v3/roles/ed7b78"
+ },
+ "name": "member"
+ }
+ ],
+ "roles_links": {
+ "next": None,
+ "previous": None,
+ "self":
+ "http://example.com/identity/v3/OS-TRUST/trusts/1ff900/roles"
+ },
+ "project_id": "0f1233",
+ "trustee_user_id": "be34d1",
+ "trustor_user_id": "56ae32"
+ }
+ }
+
+ def setUp(self):
+ super(TestTrustsClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = trusts_client.TrustsClient(fake_auth, 'identity',
+ 'regionOne')
+
+ def _test_create_trust(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_trust,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_TRUST,
+ bytes_body,
+ status=201)
+
+ def _test_show_trust(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_trust,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_TRUST_INFO,
+ bytes_body,
+ trust_id="1ff900")
+
+ def _test_list_trusts(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_trusts,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_LIST_TRUSTS,
+ bytes_body)
+
+ def test_create_trust_with_str_body(self):
+ self._test_create_trust()
+
+ def test_create_trust_with_bytes_body(self):
+ self._test_create_trust(bytes_body=True)
+
+ def test_show_trust_with_str_body(self):
+ self._test_show_trust()
+
+ def test_show_trust_with_bytes_body(self):
+ self._test_show_trust(bytes_body=True)
+
+ def test_list_trusts_with_str_body(self):
+ self._test_list_trusts()
+
+ def test_list_trusts_with_bytes_body(self):
+ self._test_list_trusts(bytes_body=True)
+
+ def test_delete_trust(self):
+ self.check_service_client_function(
+ self.client.delete_trust,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ trust_id="1ff900",
+ status=204)
diff --git a/tempest/tests/lib/services/identity/v3/test_users_client.py b/tempest/tests/lib/services/identity/v3/test_users_client.py
new file mode 100644
index 0000000..5b572f5
--- /dev/null
+++ b/tempest/tests/lib/services/identity/v3/test_users_client.py
@@ -0,0 +1,205 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from tempest.lib.services.identity.v3 import users_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestUsersClient(base.BaseServiceTest):
+ FAKE_CREATE_USER = {
+ 'user': {
+ 'default_project_id': '95f8c3f8e7b54409a418fc30717f9ae0',
+ 'domain_id': '8347b31afc3545c4b311cb4cce788a08',
+ 'enabled': True,
+ 'name': 'Tempest User',
+ 'password': 'TempestPassword',
+ }
+ }
+
+ FAKE_USER_INFO = {
+ 'user': {
+ 'default_project_id': '95f8c3f8e7b54409a418fc30717f9ae0',
+ 'domain_id': '8347b31afc3545c4b311cb4cce788a08',
+ 'enabled': True,
+ 'id': '817fb3c23fd7465ba6d7fe1b1320121d',
+ 'links': {
+ 'self': 'http://example.com/identity',
+ },
+ 'name': 'Tempest User',
+ 'password_expires_at': '2016-11-06T15:32:17.000000',
+ }
+ }
+
+ FAKE_USER_LIST = {
+ 'links': {
+ 'next': None,
+ 'previous': None,
+ 'self': 'http://example.com/identity/v3/users',
+ },
+ 'users': [
+ {
+ 'domain_id': 'TempestDomain',
+ 'enabled': True,
+ 'id': '817fb3c23fd7465ba6d7fe1b1320121d',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/users/' +
+ '817fb3c23fd7465ba6d7fe1b1320121d',
+ },
+ 'name': 'Tempest User',
+ 'password_expires_at': '2016-11-06T15:32:17.000000',
+ },
+ {
+ 'domain_id': 'TempestDomain',
+ 'enabled': True,
+ 'id': 'bdbfb1e2f1344be197e90a778379cca1',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/users/' +
+ 'bdbfb1e2f1344be197e90a778379cca1',
+ },
+ 'name': 'Tempest User',
+ 'password_expires_at': None,
+ },
+ ]
+ }
+
+ FAKE_GROUP_LIST = {
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups',
+ 'previous': None,
+ 'next': None,
+ },
+ 'groups': [
+ {
+ 'description': 'Tempest Group One Description',
+ 'domain_id': 'TempestDomain',
+ 'id': '1c92f3453ed34291a074b87493455b8f',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups/' +
+ '1c92f3453ed34291a074b87493455b8f'
+ },
+ 'name': 'Tempest Group One',
+ },
+ {
+ 'description': 'Tempest Group Two Description',
+ 'domain_id': 'TempestDomain',
+ 'id': 'ce9e7dafed3b4877a7d4466ed730a9ee',
+ 'links': {
+ 'self': 'http://example.com/identity/v3/groups/' +
+ 'ce9e7dafed3b4877a7d4466ed730a9ee'
+ },
+ 'name': 'Tempest Group Two',
+ },
+ ]
+ }
+
+ def setUp(self):
+ super(TestUsersClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = users_client.UsersClient(fake_auth, 'identity',
+ 'regionOne')
+
+ def _test_create_user(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_user,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_USER,
+ bytes_body,
+ status=201,
+ )
+
+ def _test_show_user(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_user,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_USER_INFO,
+ bytes_body,
+ user_id='817fb3c23fd7465ba6d7fe1b1320121d',
+ )
+
+ def _test_list_users(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_users,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_USER_LIST,
+ bytes_body,
+ )
+
+ def _test_update_user(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_user,
+ 'tempest.lib.common.rest_client.RestClient.patch',
+ self.FAKE_USER_INFO,
+ bytes_body,
+ user_id='817fb3c23fd7465ba6d7fe1b1320121d',
+ name='NewName',
+ )
+
+ def _test_list_user_groups(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_user_groups,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_GROUP_LIST,
+ bytes_body,
+ user_id='817fb3c23fd7465ba6d7fe1b1320121d',
+ )
+
+ def test_create_user_with_string_body(self):
+ self._test_create_user()
+
+ def test_create_user_with_bytes_body(self):
+ self._test_create_user(bytes_body=True)
+
+ def test_show_user_with_string_body(self):
+ self._test_show_user()
+
+ def test_show_user_with_bytes_body(self):
+ self._test_show_user(bytes_body=True)
+
+ def test_list_users_with_string_body(self):
+ self._test_list_users()
+
+ def test_list_users_with_bytes_body(self):
+ self._test_list_users(bytes_body=True)
+
+ def test_update_user_with_string_body(self):
+ self._test_update_user()
+
+ def test_update_user_with_bytes_body(self):
+ self._test_update_user(bytes_body=True)
+
+ def test_list_user_groups_with_string_body(self):
+ self._test_list_user_groups()
+
+ def test_list_user_groups_with_bytes_body(self):
+ self._test_list_user_groups(bytes_body=True)
+
+ def test_delete_user(self):
+ self.check_service_client_function(
+ self.client.delete_user,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ user_id='817fb3c23fd7465ba6d7fe1b1320121d',
+ status=204,
+ )
+
+ def test_change_user_password(self):
+ self.check_service_client_function(
+ self.client.update_user_password,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ {},
+ status=204,
+ user_id='817fb3c23fd7465ba6d7fe1b1320121d',
+ password='NewTempestPassword',
+ original_password='OldTempestPassword')
diff --git a/tempest/tests/lib/services/network/test_versions_client.py b/tempest/tests/lib/services/network/test_versions_client.py
index 715176b..ae52c8a 100644
--- a/tempest/tests/lib/services/network/test_versions_client.py
+++ b/tempest/tests/lib/services/network/test_versions_client.py
@@ -14,7 +14,7 @@
import copy
-from tempest.lib.services.network.versions_client import NetworkVersionsClient
+from tempest.lib.services.network import versions_client
from tempest.tests.lib import fake_auth_provider
from tempest.tests.lib.services import base
@@ -59,7 +59,7 @@
super(TestNetworkVersionsClient, self).setUp()
fake_auth = fake_auth_provider.FakeAuthProvider()
self.versions_client = (
- NetworkVersionsClient
+ versions_client.NetworkVersionsClient
(fake_auth, 'compute', 'regionOne'))
def _test_versions_client(self, bytes_body=False):
diff --git a/tempest/tests/lib/services/test_clients.py b/tempest/tests/lib/services/test_clients.py
new file mode 100644
index 0000000..5db932c
--- /dev/null
+++ b/tempest/tests/lib/services/test_clients.py
@@ -0,0 +1,370 @@
+# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may not
+# use this file except in compliance with the License. You may obtain a copy of
+# the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations under
+# the License.
+
+import fixtures
+import mock
+import testtools
+import types
+
+from tempest.lib import auth
+from tempest.lib import exceptions
+from tempest.lib.services import clients
+from tempest.tests import base
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib import fake_credentials
+
+
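+# Matcher used by the factory tests below to assert that an object exposes an
+# attribute with the given name.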
+has_attribute = testtools.matchers.MatchesPredicateWithParams(
+ lambda x, y: hasattr(x, y), '{0} does not have an attribute {1}')
+
+
+class TestClientsFactory(base.TestCase):
+
+ def setUp(self):
+ super(TestClientsFactory, self).setUp()
+ self.classes = []
+
+ def _setup_fake_module(self, class_names=None, extra_dict=None):
+ class_names = class_names or []
+ fake_module = types.ModuleType('fake_service_client')
+ _dict = {}
+ # Add fake classes to the fake module
+ for name in class_names:
+ _dict[name] = type(name, (object,), {})
+ # Store it for assertions
+ self.classes.append(_dict[name])
+ if extra_dict:
+ _dict[extra_dict] = extra_dict
+ fake_module.__dict__.update(_dict)
+ fixture_importlib = self.useFixture(fixtures.MockPatch(
+ 'importlib.import_module', return_value=fake_module))
+ return fixture_importlib.mock
+
+ def test___init___one_class(self):
+ fake_partial = 'fake_partial'
+ partial_mock = self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.ClientsFactory._get_partial_class',
+ return_value=fake_partial)).mock
+ class_names = ['FakeServiceClient1']
+ mock_importlib = self._setup_fake_module(class_names=class_names)
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ params = {'k1': 'v1', 'k2': 'v2'}
+ factory = clients.ClientsFactory('fake_path', class_names,
+ auth_provider, **params)
+ # Assert module has been imported
+ mock_importlib.assert_called_once_with('fake_path')
+ # All attributes have been created
+ for client in class_names:
+ self.assertThat(factory, has_attribute(client))
+        # Partial has been invoked correctly
+ partial_mock.assert_called_once_with(
+ self.classes[0], auth_provider, params)
+ # Get the clients
+ for name in class_names:
+ self.assertEqual(fake_partial, getattr(factory, name))
+
+ def test___init___two_classes(self):
+ fake_partial = 'fake_partial'
+ partial_mock = self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.ClientsFactory._get_partial_class',
+ return_value=fake_partial)).mock
+ class_names = ['FakeServiceClient1', 'FakeServiceClient2']
+ mock_importlib = self._setup_fake_module(class_names=class_names)
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ params = {'k1': 'v1', 'k2': 'v2'}
+ factory = clients.ClientsFactory('fake_path', class_names,
+ auth_provider, **params)
+ # Assert module has been imported
+ mock_importlib.assert_called_once_with('fake_path')
+ # All attributes have been created
+ for client in class_names:
+ self.assertThat(factory, has_attribute(client))
+        # Partial has been invoked the right number of times
+        self.assertEqual(len(class_names), partial_mock.call_count)
+ # Get the clients
+ for name in class_names:
+ self.assertEqual(fake_partial, getattr(factory, name))
+
+ def test___init___no_module(self):
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ class_names = ['FakeServiceClient1', 'FakeServiceClient2']
+ with testtools.ExpectedException(ImportError, '.*fake_module.*'):
+ clients.ClientsFactory('fake_module', class_names,
+ auth_provider)
+
+ def test___init___not_a_class(self):
+ class_names = ['FakeServiceClient1', 'FakeServiceClient2']
+ extended_class_names = class_names + ['not_really_a_class']
+ self._setup_fake_module(
+ class_names=class_names, extra_dict='not_really_a_class')
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ expected_msg = '.*not_really_a_class.*str.*'
+ with testtools.ExpectedException(TypeError, expected_msg):
+ clients.ClientsFactory('fake_module', extended_class_names,
+ auth_provider)
+
+ def test___init___class_not_found(self):
+ class_names = ['FakeServiceClient1', 'FakeServiceClient2']
+ extended_class_names = class_names + ['not_really_a_class']
+ self._setup_fake_module(class_names=class_names)
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ expected_msg = '.*not_really_a_class.*fake_service_client.*'
+ with testtools.ExpectedException(AttributeError, expected_msg):
+ clients.ClientsFactory('fake_module', extended_class_names,
+ auth_provider)
+
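+    # The tests below assume _get_partial_class returns a lazy wrapper: the
+    # client class is only instantiated when the partial is called, and any
+    # kwargs passed at call time override those captured at factory creation.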
+ def test__get_partial_class_no_later_kwargs(self):
+ expected_fake_client = 'not_really_a_client'
+ self._setup_fake_module(class_names=[])
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ params = {'k1': 'v1', 'k2': 'v2'}
+ factory = clients.ClientsFactory(
+ 'fake_path', [], auth_provider, **params)
+ klass_mock = mock.Mock(return_value=expected_fake_client)
+ partial = factory._get_partial_class(klass_mock, auth_provider, params)
+        # Class has not been initialised yet
+ klass_mock.assert_not_called()
+ # Use partial and assert on parameters
+ client = partial()
+ self.assertEqual(expected_fake_client, client)
+ klass_mock.assert_called_once_with(auth_provider=auth_provider,
+ **params)
+
+ def test__get_partial_class_later_kwargs(self):
+ expected_fake_client = 'not_really_a_client'
+ self._setup_fake_module(class_names=[])
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ params = {'k1': 'v1', 'k2': 'v2'}
+ later_params = {'k2': 'v4', 'k3': 'v3'}
+ factory = clients.ClientsFactory(
+ 'fake_path', [], auth_provider, **params)
+ klass_mock = mock.Mock(return_value=expected_fake_client)
+ partial = factory._get_partial_class(klass_mock, auth_provider, params)
+        # Class has not been initialised yet
+ klass_mock.assert_not_called()
+ # Use partial and assert on parameters
+ client = partial(**later_params)
+ params.update(later_params)
+ self.assertEqual(expected_fake_client, client)
+ klass_mock.assert_called_once_with(auth_provider=auth_provider,
+ **params)
+
+ def test__get_partial_class_with_alias(self):
+ expected_fake_client = 'not_really_a_client'
+ client_alias = 'fake_client'
+ self._setup_fake_module(class_names=[])
+ auth_provider = fake_auth_provider.FakeAuthProvider()
+ params = {'k1': 'v1', 'k2': 'v2'}
+ later_params = {'k2': 'v4', 'k3': 'v3'}
+ factory = clients.ClientsFactory(
+ 'fake_path', [], auth_provider, **params)
+ klass_mock = mock.Mock(return_value=expected_fake_client)
+ partial = factory._get_partial_class(klass_mock, auth_provider, params)
+        # Class has not been initialised yet
+ klass_mock.assert_not_called()
+ # Use partial and assert on parameters
+ client = partial(alias=client_alias, **later_params)
+ params.update(later_params)
+ self.assertEqual(expected_fake_client, client)
+ klass_mock.assert_called_once_with(auth_provider=auth_provider,
+ **params)
+ self.assertThat(factory, has_attribute(client_alias))
+ self.assertEqual(expected_fake_client, getattr(factory, client_alias))
+
+
+class TestServiceClients(base.TestCase):
+
+ def setUp(self):
+ super(TestServiceClients, self).setUp()
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.tempest_modules', return_value={}))
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients._tempest_internal_modules',
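+    # Each helper below drives one RolesClient call through
+    # check_service_client_function, which in these unit tests stubs the
+    # named RestClient method with the canned response above and verifies
+    # the client returns it with the expected status.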
+ return_value=set(['fake_service1'])))
+
+ def test___init___creds_v2_uri(self):
+ # Verify that no API request is made, since no mock
+ # is required to run the test successfully
+ creds = fake_credentials.FakeKeystoneV2Credentials()
+ uri = 'fake_uri'
+ _manager = clients.ServiceClients(creds, identity_uri=uri)
+ self.assertIsInstance(_manager.auth_provider,
+ auth.KeystoneV2AuthProvider)
+
+ def test___init___creds_v3_uri(self):
+ # Verify that no API request is made, since no mock
+ # is required to run the test successfully
+ creds = fake_credentials.FakeKeystoneV3Credentials()
+ uri = 'fake_uri'
+ _manager = clients.ServiceClients(creds, identity_uri=uri)
+ self.assertIsInstance(_manager.auth_provider,
+ auth.KeystoneV3AuthProvider)
+
+ def test___init___base_creds_uri(self):
+ creds = fake_credentials.FakeCredentials()
+ uri = 'fake_uri'
+ with testtools.ExpectedException(exceptions.InvalidCredentials):
+ clients.ServiceClients(creds, identity_uri=uri)
+
+ def test___init___invalid_creds_uri(self):
+ creds = fake_credentials.FakeKeystoneV2Credentials()
+ delattr(creds, 'username')
+ uri = 'fake_uri'
+ with testtools.ExpectedException(exceptions.InvalidCredentials):
+ clients.ServiceClients(creds, identity_uri=uri)
+
+ def test___init___creds_uri_none(self):
+ creds = fake_credentials.FakeKeystoneV2Credentials()
+ msg = ("Invalid Credentials\nDetails: ServiceClients requires a "
+ "non-empty")
+ with testtools.ExpectedException(exceptions.InvalidCredentials,
+ value_re=msg):
+ clients.ServiceClients(creds, None)
+
+ def test___init___creds_uri_params(self):
+ creds = fake_credentials.FakeKeystoneV2Credentials()
+        expected_params = {'fake_param1': 'fake_value1',
+                           'fake_param2': 'fake_value2'}
+        params = {'fake_service1': expected_params}
+        uri = 'fake_uri'
+        _manager = clients.ServiceClients(creds, identity_uri=uri,
+                                          client_parameters=params)
+        self.assertIn('fake_service1', _manager.parameters)
+        for _key in expected_params:
+            self.assertIn(_key, _manager.parameters['fake_service1'].keys())
+            self.assertEqual(expected_params[_key],
+                             _manager.parameters['fake_service1'].get(_key))
+
+ def test___init___creds_uri_params_unknown_services(self):
+ creds = fake_credentials.FakeKeystoneV2Credentials()
+ fake_params = {'fake_param1': 'fake_value1'}
+ params = {'unknown_service1': fake_params,
+ 'unknown_service2': fake_params}
+ uri = 'fake_uri'
+ msg = "(?=.*{0})(?=.*{1})".format(*list(params.keys()))
+ with testtools.ExpectedException(
+ exceptions.UnknownServiceClient, value_re=msg):
+ clients.ServiceClients(creds, identity_uri=uri,
+ client_parameters=params)
+
+ def _get_manager(self, init_region='fake_region'):
+ # Get a manager to invoke _setup_parameters on
+ creds = fake_credentials.FakeKeystoneV2Credentials()
+ return clients.ServiceClients(creds, identity_uri='fake_uri',
+ region=init_region)
+
+ def test__setup_parameters_none_no_region(self):
+ kwargs = {}
+ _manager = self._get_manager(init_region=None)
+ _params = _manager._setup_parameters(kwargs)
+ self.assertNotIn('region', _params)
+
+ def test__setup_parameters_none(self):
+ kwargs = {}
+ _manager = self._get_manager()
+ _params = _manager._setup_parameters(kwargs)
+ self.assertIn('region', _params)
+ self.assertEqual('fake_region', _params['region'])
+
+ def test__setup_parameters_all(self):
+ expected_params = {'region': 'fake_region1',
+ 'catalog_type': 'fake_service2_mod',
+ 'fake_param1': 'fake_value1',
+ 'fake_param2': 'fake_value2'}
+ _manager = self._get_manager()
+ _params = _manager._setup_parameters(expected_params)
+ for _key in _params.keys():
+ self.assertEqual(expected_params[_key],
+ _params[_key])
+
+ def test_register_service_client_module(self):
+ expected_params = {'fake_param1': 'fake_value1',
+ 'fake_param2': 'fake_value2'}
+ _manager = self._get_manager(init_region='fake_region_default')
+        # Mock after the _manager is set up to preserve the call count
+ factory_mock = self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.ClientsFactory')).mock
+ _manager.register_service_client_module(
+ name='fake_module',
+ service_version='fake_service',
+ module_path='fake.path.to.module',
+ client_names=[],
+ **expected_params)
+ self.assertThat(_manager, has_attribute('fake_module'))
+ # Assert called once, without check for exact parameters
+ self.assertTrue(factory_mock.called)
+ self.assertEqual(1, factory_mock.call_count)
+        # Assert expected params are present with their expected values
+ actual_kwargs = factory_mock.call_args[1]
+ self.assertIn('region', actual_kwargs)
+ self.assertEqual('fake_region_default', actual_kwargs['region'])
+ for param in expected_params:
+ self.assertIn(param, actual_kwargs)
+ self.assertEqual(expected_params[param], actual_kwargs[param])
+ # Assert the new service is registered
+ self.assertIn('fake_service', _manager._registered_services)
+
+ def test_register_service_client_module_override_default(self):
+ new_region = 'new_region'
+ expected_params = {'fake_param1': 'fake_value1',
+ 'fake_param2': 'fake_value2',
+ 'region': new_region}
+ _manager = self._get_manager(init_region='fake_region_default')
+        # Mock after the _manager is set up to preserve the call count
+ factory_mock = self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.ClientsFactory')).mock
+ _manager.register_service_client_module(
+ name='fake_module',
+ service_version='fake_service',
+ module_path='fake.path.to.module',
+ client_names=[],
+ **expected_params)
+ self.assertThat(_manager, has_attribute('fake_module'))
+ # Assert called once, without check for exact parameters
+ self.assertTrue(factory_mock.called)
+ self.assertEqual(1, factory_mock.call_count)
+        # Assert expected params are present with their expected values
+ actual_kwargs = factory_mock.call_args[1]
+ self.assertIn('region', actual_kwargs)
+ self.assertEqual(new_region, actual_kwargs['region'])
+ for param in expected_params:
+ self.assertIn(param, actual_kwargs)
+ self.assertEqual(expected_params[param], actual_kwargs[param])
+ # Assert the new service is registered
+ self.assertIn('fake_service', _manager._registered_services)
+
+ def test_register_service_client_module_duplicate_name(self):
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.ClientsFactory')).mock
+ _manager = self._get_manager()
+ name_owner = 'this_is_a_string'
+ setattr(_manager, 'fake_module', name_owner)
+ expected_error = '.*' + name_owner
+ with testtools.ExpectedException(
+ exceptions.ServiceClientRegistrationException, expected_error):
+ _manager.register_service_client_module(
+ name='fake_module', module_path='fake.path.to.module',
+ service_version='fake_service', client_names=[])
+
+ def test_register_service_client_module_duplicate_service(self):
+ self.useFixture(fixtures.MockPatch(
+ 'tempest.lib.services.clients.ClientsFactory')).mock
+ _manager = self._get_manager()
+ duplicate_service = 'fake_service1'
+ expected_error = '.*' + duplicate_service
+ with testtools.ExpectedException(
+ exceptions.ServiceClientRegistrationException, expected_error):
+ _manager.register_service_client_module(
+ name='fake_module', module_path='fake.path.to.module',
+ service_version=duplicate_service, client_names=[])
diff --git a/tempest/tests/stress/__init__.py b/tempest/tests/lib/services/volume/__init__.py
similarity index 100%
rename from tempest/tests/stress/__init__.py
rename to tempest/tests/lib/services/volume/__init__.py
diff --git a/tempest/tests/stress/__init__.py b/tempest/tests/lib/services/volume/v1/__init__.py
similarity index 100%
copy from tempest/tests/stress/__init__.py
copy to tempest/tests/lib/services/volume/v1/__init__.py
diff --git a/tempest/tests/lib/services/volume/v1/test_encryption_types_client.py b/tempest/tests/lib/services/volume/v1/test_encryption_types_client.py
new file mode 100644
index 0000000..585904e
--- /dev/null
+++ b/tempest/tests/lib/services/volume/v1/test_encryption_types_client.py
@@ -0,0 +1,86 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.volume.v1 import encryption_types_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestEncryptionTypesClient(base.BaseServiceTest):
+ FAKE_CREATE_ENCRYPTION_TYPE = {
+ "encryption": {
+ "id": "cbc36478b0bd8e67e89",
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ FAKE_INFO_ENCRYPTION_TYPE = {
+ "encryption": {
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "description": "test_description",
+ "volume_type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ def setUp(self):
+ super(TestEncryptionTypesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = encryption_types_client.EncryptionTypesClient(fake_auth,
+ 'volume',
+ 'regionOne'
+ )
+
+ def _test_create_encryption(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def _test_show_encryption_type(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_INFO_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def test_create_encryption_type_with_str_body(self):
+ self._test_create_encryption()
+
+ def test_create_encryption_type_with_bytes_body(self):
+ self._test_create_encryption(bytes_body=True)
+
+ def test_show_encryption_type_with_str_body(self):
+ self._test_show_encryption_type()
+
+ def test_show_encryption_type_with_bytes_body(self):
+ self._test_show_encryption_type(bytes_body=True)
+
+ def test_delete_encryption_type(self):
+ self.check_service_client_function(
+ self.client.delete_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ volume_type_id="cbc36478b0bd8e67e89",
+ status=202)
diff --git a/tempest/tests/stress/__init__.py b/tempest/tests/lib/services/volume/v2/__init__.py
similarity index 100%
copy from tempest/tests/stress/__init__.py
copy to tempest/tests/lib/services/volume/v2/__init__.py
diff --git a/tempest/tests/lib/services/volume/v2/test_encryption_types_client.py b/tempest/tests/lib/services/volume/v2/test_encryption_types_client.py
new file mode 100644
index 0000000..d029091
--- /dev/null
+++ b/tempest/tests/lib/services/volume/v2/test_encryption_types_client.py
@@ -0,0 +1,86 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.volume.v2 import encryption_types_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestEncryptionTypesClient(base.BaseServiceTest):
+ FAKE_CREATE_ENCRYPTION_TYPE = {
+ "encryption": {
+ "id": "cbc36478b0bd8e67e89",
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ FAKE_INFO_ENCRYPTION_TYPE = {
+ "encryption": {
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "description": "test_description",
+ "volume_type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ def setUp(self):
+ super(TestEncryptionTypesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = encryption_types_client.EncryptionTypesClient(fake_auth,
+ 'volume',
+ 'regionOne'
+ )
+
+ def _test_create_encryption(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def _test_show_encryption_type(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_INFO_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def test_create_encryption_type_with_str_body(self):
+ self._test_create_encryption()
+
+ def test_create_encryption_type_with_bytes_body(self):
+ self._test_create_encryption(bytes_body=True)
+
+ def test_show_encryption_type_with_str_body(self):
+ self._test_show_encryption_type()
+
+ def test_show_encryption_type_with_bytes_body(self):
+ self._test_show_encryption_type(bytes_body=True)
+
+ def test_delete_encryption_type(self):
+ self.check_service_client_function(
+ self.client.delete_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ volume_type_id="cbc36478b0bd8e67e89",
+ status=202)
diff --git a/tempest/tests/lib/test_auth.py b/tempest/tests/lib/test_auth.py
index 12590a3..6da7e41 100644
--- a/tempest/tests/lib/test_auth.py
+++ b/tempest/tests/lib/test_auth.py
@@ -244,7 +244,7 @@
# The original headers were empty
self.assertNotEqual(url, self.target_url)
self.assertIsNone(headers)
- self.assertEqual(body, None)
+ self.assertIsNone(body)
def _test_request_with_alt_part_without_alt_data_no_change(self, body):
"""Test empty alternate auth data with no effect
diff --git a/tempest/tests/lib/test_rest_client.py b/tempest/tests/lib/test_rest_client.py
index 057f57b..e6cf047 100644
--- a/tempest/tests/lib/test_rest_client.py
+++ b/tempest/tests/lib/test_rest_client.py
@@ -296,10 +296,6 @@
status=int(r_code),
body=json.dumps(resp_body))
data = {
- "method": "fake_method",
- "url": "fake_url",
- "headers": "fake_headers",
- "body": "fake_body",
"resp": resp,
"resp_body": json.dumps(resp_body)
}
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index b07f6bc..8a0a84c 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -69,6 +69,7 @@
mock.sentinel.aa)
expected_connect = [mock.call(
'localhost',
+ port=22,
username='root',
pkey=None,
key_filename=None,
diff --git a/tempest/tests/negative/test_negative_auto_test.py b/tempest/tests/negative/test_negative_auto_test.py
deleted file mode 100644
index 44ce567..0000000
--- a/tempest/tests/negative/test_negative_auto_test.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright 2014 Deutsche Telekom AG
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest import config
-import tempest.test as test
-from tempest.tests import base
-from tempest.tests import fake_config
-
-
-class TestNegativeAutoTest(base.TestCase):
- # Fake entries
- _service = 'compute'
-
- fake_input_desc = {"name": "list-flavors-with-detail",
- "http-method": "GET",
- "url": "flavors/detail",
- "json-schema": {"type": "object",
- "properties":
- {"minRam": {"type": "integer"},
- "minDisk": {"type": "integer"}}
- },
- "resources": ["flavor", "volume", "image"]
- }
-
- def setUp(self):
- super(TestNegativeAutoTest, self).setUp()
- self.useFixture(fake_config.ConfigFixture())
- self.patchobject(config, 'TempestConfigPrivate',
- fake_config.FakePrivate)
-
- def _check_prop_entries(self, result, entry):
- entries = [a for a in result if entry in a[0]]
- self.assertIsNotNone(entries)
- self.assertGreater(len(entries), 1)
- for entry in entries:
- self.assertIsNotNone(entry[1]['_negtest_name'])
-
- def _check_resource_entries(self, result, entry):
- entries = [a for a in result if entry in a[0]]
- self.assertIsNotNone(entries)
- self.assertIs(len(entries), 3)
- for entry in entries:
- self.assertIsNotNone(entry[1]['resource'])
-
- def test_generate_scenario(self):
- scenarios = test.NegativeAutoTest.\
- generate_scenario(self.fake_input_desc)
- self.assertIsInstance(scenarios, list)
- for scenario in scenarios:
- self.assertIsInstance(scenario, tuple)
- self.assertIsInstance(scenario[0], str)
- self.assertIsInstance(scenario[1], dict)
- self._check_prop_entries(scenarios, "minRam")
- self._check_prop_entries(scenarios, "minDisk")
- self._check_resource_entries(scenarios, "inv_res")
diff --git a/tempest/tests/negative/test_negative_generators.py b/tempest/tests/negative/test_negative_generators.py
index 78fd80d..7e1ee2c 100644
--- a/tempest/tests/negative/test_negative_generators.py
+++ b/tempest/tests/negative/test_negative_generators.py
@@ -107,7 +107,7 @@
def _validate_result(self, valid_schema, invalid_schema):
for k, v in six.iteritems(valid_schema):
- self.assertTrue(k in invalid_schema)
+ self.assertIn(k, invalid_schema)
def test_generator_mandatory_functions(self):
for data_type in self.types:
@@ -146,5 +146,5 @@
schema_under_test = copy.copy(valid_schema)
expected_result = \
self.generator.generate_payload(test, schema_under_test)
- self.assertEqual(expected_result, None)
+ self.assertIsNone(expected_result)
self._validate_result(valid_schema, schema_under_test)
diff --git a/tempest/tests/services/object_storage/test_object_client.py b/tempest/tests/services/object_storage/test_object_client.py
index cd8c8f1..cc1dc1a 100644
--- a/tempest/tests/services/object_storage/test_object_client.py
+++ b/tempest/tests/services/object_storage/test_object_client.py
@@ -20,7 +20,7 @@
from tempest.lib import exceptions
from tempest.services.object_storage import object_client
from tempest.tests import base
-from tempest.tests import fake_auth_provider
+from tempest.tests.lib import fake_auth_provider
class TestObjectClient(base.TestCase):
diff --git a/tempest/tests/stress/test_stress.py b/tempest/tests/stress/test_stress.py
deleted file mode 100644
index dfe0291..0000000
--- a/tempest/tests/stress/test_stress.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright 2013 Deutsche Telekom AG
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import shlex
-import subprocess
-
-from oslo_log import log as logging
-from tempest.lib import exceptions
-from tempest.tests import base
-
-LOG = logging.getLogger(__name__)
-
-
-class StressFrameworkTest(base.TestCase):
- """Basic test for the stress test framework."""
-
- def _cmd(self, cmd, param):
- """Executes specified command."""
- cmd = ' '.join([cmd, param])
- LOG.info("running: '%s'" % cmd)
- cmd_str = cmd
- cmd = shlex.split(cmd)
- result = ''
- result_err = ''
- try:
- stdout = subprocess.PIPE
- stderr = subprocess.PIPE
- proc = subprocess.Popen(
- cmd, stdout=stdout, stderr=stderr)
- result, result_err = proc.communicate()
- if proc.returncode != 0:
- LOG.debug('error of %s:\n%s' % (cmd_str, result_err))
- raise exceptions.CommandFailed(proc.returncode,
- cmd,
- result)
- finally:
- LOG.debug('output of %s:\n%s' % (cmd_str, result))
- return proc.returncode
-
- def test_help_function(self):
- result = self._cmd("python", "-m tempest.cmd.run_stress -h")
- self.assertEqual(0, result)
diff --git a/tempest/tests/stress/test_stressaction.py b/tempest/tests/stress/test_stressaction.py
deleted file mode 100644
index 1a1bb67..0000000
--- a/tempest/tests/stress/test_stressaction.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2013 Deutsche Telekom AG
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import tempest.stress.stressaction as stressaction
-import tempest.test
-
-
-class FakeStressAction(stressaction.StressAction):
- def __init__(self, manager, max_runs=None, stop_on_error=False):
- super(self.__class__, self).__init__(manager, max_runs, stop_on_error)
- self._run_called = False
-
- def run(self):
- self._run_called = True
-
- @property
- def run_called(self):
- return self._run_called
-
-
-class FakeStressActionFailing(stressaction.StressAction):
- def run(self):
- raise Exception('FakeStressActionFailing raise exception')
-
-
-class TestStressAction(tempest.test.BaseTestCase):
- def _bulid_stats_dict(self, runs=0, fails=0):
- return {'runs': runs, 'fails': fails}
-
- def testStressTestRun(self):
- stressAction = FakeStressAction(manager=None, max_runs=1)
- stats = self._bulid_stats_dict()
- stressAction.execute(stats)
- self.assertTrue(stressAction.run_called)
- self.assertEqual(stats['runs'], 1)
- self.assertEqual(stats['fails'], 0)
-
- def testStressMaxTestRuns(self):
- stressAction = FakeStressAction(manager=None, max_runs=500)
- stats = self._bulid_stats_dict(runs=499)
- stressAction.execute(stats)
- self.assertTrue(stressAction.run_called)
- self.assertEqual(stats['runs'], 500)
- self.assertEqual(stats['fails'], 0)
-
- def testStressTestRunWithException(self):
- stressAction = FakeStressActionFailing(manager=None, max_runs=1)
- stats = self._bulid_stats_dict()
- stressAction.execute(stats)
- self.assertEqual(stats['runs'], 1)
- self.assertEqual(stats['fails'], 1)
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index 8c5d861..ae2f2a3 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -12,7 +12,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import mock
from oslo_config import cfg
from oslotest import mockpatch
import testtools
@@ -154,36 +153,6 @@
continue
-class TestStressDecorator(BaseDecoratorsTest):
- def _test_stresstest_helper(self, expected_frequency='process',
- expected_inheritance=False,
- **decorator_args):
- @test.stresstest(**decorator_args)
- def foo():
- pass
- self.assertEqual(getattr(foo, 'st_class_setup_per'),
- expected_frequency)
- self.assertEqual(getattr(foo, 'st_allow_inheritance'),
- expected_inheritance)
- self.assertEqual(set(['stress']), getattr(foo, '__testtools_attrs'))
-
- def test_stresstest_decorator_default(self):
- self._test_stresstest_helper()
-
- def test_stresstest_decorator_class_setup_frequency(self):
- self._test_stresstest_helper('process', class_setup_per='process')
-
- def test_stresstest_decorator_class_setup_frequency_non_default(self):
- self._test_stresstest_helper(expected_frequency='application',
- class_setup_per='application')
-
- def test_stresstest_decorator_set_frequency_and_inheritance(self):
- self._test_stresstest_helper(expected_frequency='application',
- expected_inheritance=True,
- class_setup_per='application',
- allow_inheritance=True)
-
-
class TestRequiresExtDecorator(BaseDecoratorsTest):
def setUp(self):
super(TestRequiresExtDecorator, self).setUp()
@@ -232,22 +201,6 @@
service='bad_service')
-class TestSimpleNegativeDecorator(BaseDecoratorsTest):
- @test.SimpleNegativeAutoTest
- class FakeNegativeJSONTest(test.NegativeAutoTest):
- _schema = {}
-
- def test_testfunc_exist(self):
- self.assertIn("test_fake_negative", dir(self.FakeNegativeJSONTest))
-
- @mock.patch('tempest.test.NegativeAutoTest.execute')
- def test_testfunc_calls_execute(self, mock):
- obj = self.FakeNegativeJSONTest("test_fake_negative")
- self.assertIn("test_fake_negative", dir(obj))
- obj.test_fake_negative()
- mock.assert_called_once_with(self.FakeNegativeJSONTest._schema)
-
-
class TestConfigDecorators(BaseDecoratorsTest):
def setUp(self):
super(TestConfigDecorators, self).setUp()
diff --git a/tempest/tests/test_negative_rest_client.py b/tempest/tests/test_negative_rest_client.py
index 9d9c20f..05f9f3e 100644
--- a/tempest/tests/test_negative_rest_client.py
+++ b/tempest/tests/test_negative_rest_client.py
@@ -21,8 +21,8 @@
from tempest.common import negative_rest_client
from tempest import config
from tempest.tests import base
-from tempest.tests import fake_auth_provider
from tempest.tests import fake_config
+from tempest.tests.lib import fake_auth_provider
class TestNegativeRestClient(base.TestCase):
diff --git a/tempest/tests/test_service_clients.py b/tempest/tests/test_service_clients.py
deleted file mode 100644
index a559086..0000000
--- a/tempest/tests/test_service_clients.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-import fixtures
-import testtools
-
-from tempest.lib import auth
-from tempest.lib import exceptions
-from tempest import service_clients
-from tempest.tests import base
-from tempest.tests.lib import fake_credentials
-
-
-class TestServiceClients(base.TestCase):
-
- def setUp(self):
- super(TestServiceClients, self).setUp()
- self.useFixture(fixtures.MockPatch(
- 'tempest.service_clients.tempest_modules',
- return_value=set(['fake_service1', 'fake_service2'])))
-
- def test___init___creds_v2_uri(self):
- # Verify that no API request is made, since no mock
- # is required to run the test successfully
- creds = fake_credentials.FakeKeystoneV2Credentials()
- uri = 'fake_uri'
- _manager = service_clients.ServiceClients(creds, identity_uri=uri)
- self.assertIsInstance(_manager.auth_provider,
- auth.KeystoneV2AuthProvider)
-
- def test___init___creds_v3_uri(self):
- # Verify that no API request is made, since no mock
- # is required to run the test successfully
- creds = fake_credentials.FakeKeystoneV3Credentials()
- uri = 'fake_uri'
- _manager = service_clients.ServiceClients(creds, identity_uri=uri)
- self.assertIsInstance(_manager.auth_provider,
- auth.KeystoneV3AuthProvider)
-
- def test___init___base_creds_uri(self):
- creds = fake_credentials.FakeCredentials()
- uri = 'fake_uri'
- with testtools.ExpectedException(exceptions.InvalidCredentials):
- service_clients.ServiceClients(creds, identity_uri=uri)
-
- def test___init___invalid_creds_uri(self):
- creds = fake_credentials.FakeKeystoneV2Credentials()
- delattr(creds, 'username')
- uri = 'fake_uri'
- with testtools.ExpectedException(exceptions.InvalidCredentials):
- service_clients.ServiceClients(creds, identity_uri=uri)
-
- def test___init___creds_uri_none(self):
- creds = fake_credentials.FakeKeystoneV2Credentials()
- msg = ("Invalid Credentials\nDetails: ServiceClients requires a "
- "non-empty")
- with testtools.ExpectedException(exceptions.InvalidCredentials,
- value_re=msg):
- service_clients.ServiceClients(creds, None)
-
- def test___init___creds_uri_params(self):
- creds = fake_credentials.FakeKeystoneV2Credentials()
- expeted_params = {'fake_param1': 'fake_value1',
- 'fake_param2': 'fake_value2'}
- params = {'fake_service1': expeted_params}
- uri = 'fake_uri'
- _manager = service_clients.ServiceClients(creds, identity_uri=uri,
- client_parameters=params)
- self.assertIn('fake_service1', _manager.parameters)
- for _key in expeted_params:
- self.assertIn(_key, _manager.parameters['fake_service1'].keys())
- self.assertEqual(expeted_params[_key],
- _manager.parameters['fake_service1'].get(_key))
-
- def test___init___creds_uri_params_unknown_services(self):
- creds = fake_credentials.FakeKeystoneV2Credentials()
- fake_params = {'fake_param1': 'fake_value1'}
- params = {'unknown_service1': fake_params,
- 'unknown_service2': fake_params}
- uri = 'fake_uri'
- msg = "(?=.*{0})(?=.*{1})".format(*list(params.keys()))
- with testtools.ExpectedException(
- exceptions.UnknownServiceClient, value_re=msg):
- service_clients.ServiceClients(creds, identity_uri=uri,
- client_parameters=params)
-
- def _get_manager(self, init_region='fake_region'):
- # Get a manager to invoke _setup_parameters on
- creds = fake_credentials.FakeKeystoneV2Credentials()
- return service_clients.ServiceClients(creds, identity_uri='fake_uri',
- region=init_region)
-
- def test__setup_parameters_none_no_region(self):
- kwargs = {}
- _manager = self._get_manager(init_region=None)
- _params = _manager._setup_parameters(kwargs)
- self.assertNotIn('region', _params)
-
- def test__setup_parameters_none(self):
- kwargs = {}
- _manager = self._get_manager()
- _params = _manager._setup_parameters(kwargs)
- self.assertIn('region', _params)
- self.assertEqual('fake_region', _params['region'])
-
- def test__setup_parameters_all(self):
- expected_params = {'region': 'fake_region1',
- 'catalog_type': 'fake_service2_mod',
- 'fake_param1': 'fake_value1',
- 'fake_param2': 'fake_value2'}
- _manager = self._get_manager()
- _params = _manager._setup_parameters(expected_params)
- for _key in _params.keys():
- self.assertEqual(expected_params[_key],
- _params[_key])
diff --git a/tempest/tests/test_tempest_plugin.py b/tempest/tests/test_tempest_plugin.py
index c07e98c..13e2499 100644
--- a/tempest/tests/test_tempest_plugin.py
+++ b/tempest/tests/test_tempest_plugin.py
@@ -13,6 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
+from tempest.lib.services import clients
from tempest.test_discover import plugins
from tempest.tests import base
from tempest.tests import fake_tempest_plugin as fake_plugin
@@ -42,3 +43,37 @@
result['fake01'])
self.assertEqual(fake_plugin.FakePlugin.expected_load_test,
result['fake02'])
+
+ def test__register_service_clients_with_one_plugin(self):
+ registry = clients.ClientsRegistry()
+ manager = plugins.TempestTestPluginManager()
+ fake_obj = fake_plugin.FakeStevedoreObj()
+ manager.ext_plugins = [fake_obj]
+ manager._register_service_clients()
+ expected_result = fake_plugin.FakePlugin.expected_service_clients
+ registered_clients = registry.get_service_clients()
+ self.assertIn(fake_obj.name, registered_clients)
+ self.assertEqual(expected_result, registered_clients[fake_obj.name])
+
+ def test__get_service_clients_with_two_plugins(self):
+ registry = clients.ClientsRegistry()
+ manager = plugins.TempestTestPluginManager()
+ obj1 = fake_plugin.FakeStevedoreObj('fake01')
+ obj2 = fake_plugin.FakeStevedoreObj('fake02')
+ manager.ext_plugins = [obj1, obj2]
+ manager._register_service_clients()
+ expected_result = fake_plugin.FakePlugin.expected_service_clients
+ registered_clients = registry.get_service_clients()
+ self.assertIn('fake01', registered_clients)
+ self.assertIn('fake02', registered_clients)
+ self.assertEqual(expected_result, registered_clients['fake01'])
+ self.assertEqual(expected_result, registered_clients['fake02'])
+
+ def test__register_service_clients_one_plugin_no_service_clients(self):
+ registry = clients.ClientsRegistry()
+ manager = plugins.TempestTestPluginManager()
+ fake_obj = fake_plugin.FakeStevedoreObjNoServiceClients()
+ manager.ext_plugins = [fake_obj]
+ manager._register_service_clients()
+ registered_clients = registry.get_service_clients()
+ self.assertNotIn(fake_obj.name, registered_clients)
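
The new tests above exercise the plugin/service-client wiring: each loaded plugin advertises its service clients, and the manager records those advertisements in a shared registry (``tempest.lib.services.clients.ClientsRegistry`` in the imports added above), skipping plugins that advertise nothing. As a rough, standalone illustration of that registration pattern -- the names ``SimpleClientsRegistry``, ``ExamplePlugin`` and the descriptor keys below are assumptions made for this sketch, not Tempest's actual interface::

    # Illustrative sketch only -- not Tempest code. It mirrors the pattern the
    # tests above exercise: plugins advertise service clients, and a manager
    # walks its loaded plugins and records each advertisement in a registry
    # keyed by plugin name.

    class SimpleClientsRegistry(object):
        """Toy registry keyed by plugin name (assumed to be broadly similar
        in shape to how the tests look clients up by plugin name)."""
        _clients = {}

        def register_service_client(self, plugin_name, client_descriptions):
            # Record (or overwrite) the clients advertised by one plugin.
            self._clients[plugin_name] = client_descriptions

        def get_service_clients(self):
            # Return a copy so callers cannot mutate the registry directly.
            return dict(self._clients)


    class ExamplePlugin(object):
        """Stand-in for a plugin object; the attribute and key names here
        are illustrative assumptions."""
        name = 'example01'

        def get_service_clients(self):
            return [{'name': 'example_client',
                     'module_path': 'example.services.client',
                     'client_names': ['ExampleClient']}]


    def register_service_clients(plugins, registry):
        """Mirror of the behaviour under test: plugins that advertise no
        service clients are skipped; everything else is registered under
        the plugin's name."""
        for plugin in plugins:
            descriptions = plugin.get_service_clients()
            if descriptions:
                registry.register_service_client(plugin.name, descriptions)


    if __name__ == '__main__':
        registry = SimpleClientsRegistry()
        register_service_clients([ExamplePlugin()], registry)
        assert 'example01' in registry.get_service_clients()
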
diff --git a/test-requirements.txt b/test-requirements.txt
index 04c3d6d..53efa46 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -3,9 +3,8 @@
# process, which may cause wedges in the gate later.
hacking<0.12,>=0.11.0 # Apache-2.0
# needed for doc build
-sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
-python-subunit>=0.0.18 # Apache-2.0/BSD
-oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
+oslosphinx>=4.7.0 # Apache-2.0
reno>=1.8.0 # Apache2
mock>=2.0 # BSD
coverage>=3.6 # Apache-2.0
diff --git a/tools/generate-tempest-plugins-list.py b/tools/generate-tempest-plugins-list.py
index 03dbd9b..03e838e 100644
--- a/tools/generate-tempest-plugins-list.py
+++ b/tools/generate-tempest-plugins-list.py
@@ -48,6 +48,8 @@
def has_tempest_plugin(proj):
+ if proj.startswith('openstack/deb-'):
+ return False
r = requests.get(
"https://git.openstack.org/cgit/%s/plain/setup.cfg" % proj)
p = re.compile('^tempest\.test_plugins', re.M)
diff --git a/tools/pretty_tox.sh b/tools/pretty_tox.sh
index fb4e6d5..0b83b91 100755
--- a/tools/pretty_tox.sh
+++ b/tools/pretty_tox.sh
@@ -1,5 +1,7 @@
#!/usr/bin/env bash
+echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
+
set -o pipefail
TESTRARGS=$1
diff --git a/tools/pretty_tox_serial.sh b/tools/pretty_tox_serial.sh
index e0fca0f..1f8204e 100755
--- a/tools/pretty_tox_serial.sh
+++ b/tools/pretty_tox_serial.sh
@@ -1,5 +1,7 @@
#!/usr/bin/env bash
+echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
+
set -o pipefail
TESTRARGS=$@
diff --git a/tox.ini b/tox.ini
index cff222d..82dba92 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = pep8,py34,py27
+envlist = pep8,py35,py34,py27,pip-check-reqs
minversion = 2.3.1
skipsdist = True
@@ -16,6 +16,7 @@
setenv =
VIRTUAL_ENV={envdir}
OS_TEST_PATH=./tempest/tests
+ PYTHONWARNINGS=default::DeprecationWarning
passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH OS_TEST_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
usedevelop = True
install_command = pip install -U {opts} {packages}
@@ -25,7 +26,7 @@
-r{toxinidir}/test-requirements.txt
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox.sh '{posargs}'
+ ostestr {posargs}
[testenv:genconfig]
commands = oslo-config-generator --config-file tempest/cmd/config-generator.tempest.conf
@@ -44,7 +45,7 @@
deps = {[tempestenv]deps}
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox.sh '{posargs}'
+ tempest run --regex {posargs}
[testenv:ostestr]
sitepackages = {[tempestenv]sitepackages}
@@ -66,7 +67,7 @@
deps = {[tempestenv]deps}
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox.sh '{posargs}'
+ tempest run --regex {posargs}
[testenv:full]
envdir = .tox/tempest
@@ -77,7 +78,7 @@
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox.sh '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|thirdparty)) {posargs}'
+ tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
[testenv:full-serial]
envdir = .tox/tempest
@@ -88,7 +89,7 @@
# See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox_serial.sh '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|thirdparty)) {posargs}'
+ tempest run --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
[testenv:smoke]
envdir = .tox/tempest
@@ -97,7 +98,7 @@
deps = {[tempestenv]deps}
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox.sh '\[.*\bsmoke\b.*\] {posargs}'
+ tempest run --regex '\[.*\bsmoke\b.*\]' {posargs}
[testenv:smoke-serial]
envdir = .tox/tempest
@@ -109,15 +110,7 @@
# job would fail if we moved it to parallel.
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox_serial.sh '\[.*\bsmoke\b.*\] {posargs}'
-
-[testenv:stress]
-envdir = .tox/tempest
-sitepackages = {[tempestenv]sitepackages}
-setenv = {[tempestenv]setenv}
-deps = {[tempestenv]deps}
-commands =
- run-tempest-stress {posargs}
+ tempest run --serial --regex '\[.*\bsmoke\b.*\]' {posargs}
[testenv:venv]
commands = {posargs}
@@ -153,7 +146,18 @@
# Skipped because of new hacking 0.9: H405
ignore = E125,E123,E129
show-source = True
-exclude = .git,.venv,.tox,dist,doc,openstack,*egg
+exclude = .git,.venv,.tox,dist,doc,*egg
[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
+
+[testenv:pip-check-reqs]
+# Do not install test-requirements as that will pollute the virtualenv for
+# determining missing packages.
+# This also means that pip-check-reqs must be installed separately, outside
+# of the requirements.txt files
+deps = pip_check_reqs
+ -r{toxinidir}/requirements.txt
+commands=
+ pip-extra-reqs -d --ignore-file=tempest/tests/* tempest
+ pip-missing-reqs -d --ignore-file=tempest/tests/* tempest