Merge "Correct a misleading docstring"
diff --git a/HACKING.rst b/HACKING.rst
index 432db7d..7ab420b 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -166,8 +166,33 @@
suite, Tempest is not suitable for handling all negative test cases, as
the wide variety and complexity of negative tests can lead to long test
runs and knowledge of internal implementation details. The bulk of
-negative testing should be handled with project function tests. The
-exception to this rule is API tests used for interoperability testing.
+negative testing should be handled with project function tests.
+All negative tests should be based on the `API-WG guideline`_. Such negative
+tests can block changes that turn an accurate failure code into an invalid one.
+
+.. _API-WG guideline: https://github.com/openstack/api-wg/blob/master/guidelines/http.rst#failure-code-clarifications
+
+If you face a gray area that is not clarified by the above guideline, propose
+a new guideline to the API-WG. Such a proposal lets us build a consensus
+across all OpenStack projects and improve the quality and consistency of all
+the APIs.
+
+In addition, we have the following guidelines for additional negative tests:
+
+- About the BadRequest (HTTP 400) case: we can add a single negative test of
+  BadRequest for each resource and method (POST, PUT); see the sketch after
+  this list. Please don't implement more negative tests on the same
+  combination of resource and method, even if the API request parameters
+  differ from the existing test.
+- About the NotFound (HTTP 404) case: we can add a single negative test of
+  NotFound for each resource and method (GET, PUT, DELETE, HEAD).
+  Please don't implement more negative tests on the same combination
+  of resource and method.
+
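+As an illustration only, a single BadRequest test and a single NotFound test
+for the server resource could look like the following sketch (the class name,
+test names, and the invalid values are hypothetical; real tests also carry a
+unique ``@test.idempotent_id`` decorator):
+
+.. code-block:: python
+
+    from tempest.api.compute import base
+    from tempest.common.utils import data_utils
+    from tempest.lib import exceptions as lib_exc
+    from tempest import test
+
+
+    class ServersNegativeSketch(base.BaseV2ComputeTest):
+
+        @test.attr(type=['negative'])
+        def test_create_server_with_invalid_flavor(self):
+            # The single BadRequest (HTTP 400) test for POST on servers.
+            self.assertRaises(lib_exc.BadRequest,
+                              self.create_test_server,
+                              flavor='invalid-flavor-id')
+
+        @test.attr(type=['negative'])
+        def test_show_nonexistent_server(self):
+            # The single NotFound (HTTP 404) test for GET on servers.
+            self.assertRaises(lib_exc.NotFound,
+                              self.servers_client.show_server,
+                              data_utils.rand_uuid())
+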
+The above guidelines don't cover all cases, and we will grow them organically
+over time. Patches outside the above guidelines are left to the reviewers'
+discretion; if conflicts arise between reviewers, we will expand the
+guidelines based on our discussion and experience.
Test skips because of Known Bugs
--------------------------------
diff --git a/doc/source/microversion_testing.rst b/doc/source/microversion_testing.rst
index bff18f8..dc73ef2 100644
--- a/doc/source/microversion_testing.rst
+++ b/doc/source/microversion_testing.rst
@@ -24,7 +24,9 @@
* max_microversion
Those should be defined under respective section of each service.
- For example::
+ For example:
+
+ .. code-block:: ini
[compute]
min_microversion = None
@@ -42,7 +44,9 @@
api_version_utils.check_skip_with_microversion function can be used
to automatically skip the tests which do not fall under configured
Microversion range.
-For example::
+For example:
+
+.. code-block:: python
class BaseTestCase1(api_version_utils.BaseMicroversionTest):
@@ -65,7 +69,9 @@
to send with API request.
api_version_utils.select_request_microversion function can be used
to select the appropriate Microversion which will be used for API request.
-For example::
+For example:
+
+.. code-block:: python
@classmethod
def resource_setup(cls):
@@ -87,7 +93,9 @@
Also Microversion header name needs to be defined on service clients which
should be constant because it is not supposed to be changed by project
as per API contract.
-For example::
+For example:
+
+.. code-block:: python
COMPUTE_MICROVERSION = None
@@ -96,7 +104,9 @@
Now test class can set the selected Microversion on required service clients
using fixture which can take care of resetting the same once tests is completed.
-For example::
+For example:
+
+.. code-block:: python
def setUp(self):
super(BaseTestCase1, self).setUp()
@@ -105,7 +115,9 @@
Service clients needs to add set Microversion in API request header which
can be done by overriding the get_headers() method of rest_client.
-For example::
+For example:
+
+.. code-block:: python
COMPUTE_MICROVERSION = None
@@ -136,7 +148,9 @@
For example:
-Below test is applicable for Microversion from 2.2 till 2.9::
+Below test is applicable for Microversion from 2.2 till 2.9:
+
+.. code-block:: python
class BaseTestCase1(api_version_utils.BaseMicroversionTest,
tempest.test.BaseTestCase):
@@ -150,7 +164,9 @@
[..]
-Below test is applicable for Microversion from 2.10 till latest::
+Below test is applicable for Microversion from 2.10 till latest:
+
+.. code-block:: python
class Test2(BaseTestCase1):
min_microversion = '2.10'
@@ -159,8 +175,6 @@
[..]
-
-
Notes about Compute Microversion Tests
""""""""""""""""""""""""""""""""""""""
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
index 285ad5d..6b30825 100644
--- a/doc/source/plugin.rst
+++ b/doc/source/plugin.rst
@@ -61,7 +61,9 @@
to the "tempest.test_plugins" namespace.
If you are using pbr this is fairly straightforward, in the setup.cfg just add
-something like the following::
+something like the following:
+
+.. code-block:: ini
[entry_points]
tempest.test_plugins =
@@ -105,7 +107,9 @@
your plugin you need to create a plugin class which tempest will load and call
to get information when it needs. To simplify creating this tempest provides an
abstract class that should be used as the parent for your plugin. To use this
-you would do something like the following::
+you would do something like the following:
+
+.. code-block:: python
from tempest.test_discover import plugins
@@ -177,7 +181,9 @@
easy to write tests which rely on multiple APIs whose service clients are in
different plugins.
-Example implementation of ``get_service_clients``::
+Example implementation of ``get_service_clients``:
+
+.. code-block:: python
def get_service_clients(self):
# Example implementation with two service clients
@@ -213,7 +219,9 @@
* **client_names**: Name of the classes that implement service clients in the
service clients module.
-Example usage of the service clients in tests::
+Example usage of the service clients in tests:
+
+.. code-block:: python
# my_creds is instance of tempest.lib.auth.Credentials
# identity_uri is v2 or v3 depending on the configuration
@@ -249,7 +257,9 @@
Third the service client classes should inherit from ``RestClient``, should
accept generic keyword arguments, and should pass those arguments to the
``__init__`` method of ``RestClient``. Extra arguments can be added. For
-instance::
+instance:
+
+.. code-block:: python
class MyAPIClient(rest_client.RestClient):
@@ -273,7 +283,9 @@
client_api_1.py
client_api_2.py
-The content of __init__.py module should be::
+The content of __init__.py module should be:
+
+.. code-block:: python
from client_api_1.py import API1Client
from client_api_2.py import API2Client
@@ -294,7 +306,9 @@
client_api_1.py
client_api_2.py
-The content each of __init__.py module under vN should be::
+The content each of __init__.py module under vN should be:
+
+.. code-block:: python
from client_api_1.py import API1Client
from client_api_2.py import API2Client
diff --git a/releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml b/releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml
index 1157a4f..7a1fc36 100644
--- a/releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml
+++ b/releasenotes/notes/12.1.0-remove-trove-tests-666522e9113549f9.yaml
@@ -1,4 +1,4 @@
---
upgrade:
- All tests for the Trove project have been removed from tempest. They now
- live as a tempest plugin in the the trove project.
+ live as a tempest plugin in the trove project.
diff --git a/releasenotes/notes/add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml b/releasenotes/notes/13.0.0-add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml
similarity index 100%
rename from releasenotes/notes/add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml
rename to releasenotes/notes/13.0.0-add-new-identity-clients-as-library-5f7ndha733nwdsn9.yaml
diff --git a/releasenotes/notes/add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml b/releasenotes/notes/13.0.0-add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
similarity index 87%
rename from releasenotes/notes/add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
rename to releasenotes/notes/13.0.0-add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
index 1ef2b0d..9cfce0d 100644
--- a/releasenotes/notes/add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
+++ b/releasenotes/notes/13.0.0-add-volume-clients-as-a-library-d05b6bc35e66c6ef.yaml
@@ -6,7 +6,9 @@
so the other projects can use these modules as stable libraries without
any maintenance changes.
+ * backups_client
* encryption_types_client (v1)
+ * encryption_types_client (v2)
* qos_clients (v1)
* qos_clients (v2)
* snapshots_client (v1)
diff --git a/releasenotes/notes/deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml b/releasenotes/notes/13.0.0-deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml
rename to releasenotes/notes/13.0.0-deprecate-get_ipv6_addr_by_EUI64-4673f07677289cf6.yaml
diff --git a/releasenotes/notes/move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml b/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
similarity index 100%
rename from releasenotes/notes/move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
rename to releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
diff --git a/releasenotes/notes/13.0.0-start-of-newton-support-3ebb274f300f28eb.yaml b/releasenotes/notes/13.0.0-start-of-newton-support-3ebb274f300f28eb.yaml
new file mode 100644
index 0000000..b9b6fb5
--- /dev/null
+++ b/releasenotes/notes/13.0.0-start-of-newton-support-3ebb274f300f28eb.yaml
@@ -0,0 +1,13 @@
+---
+prelude: >
+    This release marks the start of Newton release support in Tempest.
+other:
+ - |
+ OpenStack releases supported at this time are **Liberty**, **Mitaka**,
+ and **Newton**.
+
+ The release under current development as of this tag is Ocata,
+ meaning that every Tempest commit is also tested against master during
+ the Ocata cycle. However, this does not necessarily mean that using
+    Tempest as of this tag will work against an Ocata (or future release)
+    cloud.
diff --git a/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml b/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
new file mode 100644
index 0000000..20f310d
--- /dev/null
+++ b/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+  - The already deprecated tempest-cleanup standalone command has been
+    removed. The corresponding functionality can be accessed through
+ the unified `tempest` command (`tempest cleanup`).
diff --git a/releasenotes/notes/12.3.0-volume-clients-as-library-660811011be29d1a.yaml b/releasenotes/notes/13.0.0-volume-clients-as-library-660811011be29d1a.yaml
similarity index 100%
rename from releasenotes/notes/12.3.0-volume-clients-as-library-660811011be29d1a.yaml
rename to releasenotes/notes/13.0.0-volume-clients-as-library-660811011be29d1a.yaml
diff --git a/releasenotes/notes/add-ssh-port-parameter-to-client-6d16c374ac4456c1.yaml b/releasenotes/notes/add-ssh-port-parameter-to-client-6d16c374ac4456c1.yaml
new file mode 100644
index 0000000..b2ad199
--- /dev/null
+++ b/releasenotes/notes/add-ssh-port-parameter-to-client-6d16c374ac4456c1.yaml
@@ -0,0 +1,4 @@
+---
+features:
+  - A new optional parameter `port` has been added to the ssh client
+    (`tempest.lib.common.ssh.Client`) to specify the destination port for a
+    host. The default value is 22.
diff --git a/releasenotes/notes/remove-sahara-tests-1532c47c7df80e3a.yaml b/releasenotes/notes/remove-sahara-tests-1532c47c7df80e3a.yaml
new file mode 100644
index 0000000..b541cf9
--- /dev/null
+++ b/releasenotes/notes/remove-sahara-tests-1532c47c7df80e3a.yaml
@@ -0,0 +1,4 @@
+---
+upgrade:
+ - All tests for the Sahara project have been removed from Tempest. They now
+ live as a Tempest plugin in the ``openstack/sahara-tests`` repository.
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index 0ec0e94..8eac1d0 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,7 @@
:maxdepth: 1
unreleased
+ v13.0.0
v12.0.0
v11.0.0
v10.0.0
diff --git a/releasenotes/source/v13.0.0.rst b/releasenotes/source/v13.0.0.rst
new file mode 100644
index 0000000..39816e4
--- /dev/null
+++ b/releasenotes/source/v13.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v13.0.0 Release Notes
+=====================
+
+.. release-notes:: 13.0.0 Release Notes
+ :version: 13.0.0
diff --git a/requirements.txt b/requirements.txt
index 4655b9f..4af8bb3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,7 +2,7 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
-cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
+cliff>=2.2.0 # Apache-2.0
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
testtools>=1.4.0 # MIT
paramiko>=2.0 # LGPLv2.1+
@@ -16,11 +16,11 @@
six>=1.9.0 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
-PyYAML>=3.1.0 # MIT
+PyYAML>=3.10.0 # MIT
python-subunit>=0.0.18 # Apache-2.0/BSD
-stevedore>=1.16.0 # Apache-2.0
-PrettyTable<0.8,>=0.7 # BSD
-os-testr>=0.7.0 # Apache-2.0
+stevedore>=1.17.1 # Apache-2.0
+PrettyTable<0.8,>=0.7.1 # BSD
+os-testr>=0.8.0 # Apache-2.0
urllib3>=1.15.1 # MIT
debtcollector>=1.2.0 # Apache-2.0
-unittest2 # BSD
+unittest2 # BSD
diff --git a/setup.cfg b/setup.cfg
index 50bf891..28e17ef 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -29,7 +29,6 @@
console_scripts =
verify-tempest-config = tempest.cmd.verify_tempest_config:main
run-tempest-stress = tempest.cmd.run_stress:main
- tempest-cleanup = tempest.cmd.cleanup:main
tempest-account-generator = tempest.cmd.account_generator:main
tempest = tempest.cmd.main:main
skip-tracker = tempest.lib.cmd.skip_tracker:main
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 18a6afc..4e9bb88 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -82,14 +82,6 @@
if host != target_host:
return target_host
- def _volume_clean_up(self, server_id, volume_id):
- body = self.volumes_client.show_volume(volume_id)['volume']
- if body['status'] == 'in-use':
- self.servers_client.detach_volume(server_id, volume_id)
- waiters.wait_for_volume_status(self.volumes_client,
- volume_id, 'available')
- self.volumes_client.delete_volume(volume_id)
-
def _test_live_migration(self, state='ACTIVE', volume_backed=False):
"""Tests live migration between two hosts.
@@ -151,22 +143,15 @@
block_migrate_cinder_iscsi,
'Block Live migration not configured for iSCSI')
def test_iscsi_volume(self):
- server_id = self.create_test_server(wait_until="ACTIVE")['id']
+ server = self.create_test_server(wait_until="ACTIVE")
+ server_id = server['id']
actual_host = self._get_host_for_server(server_id)
target_host = self._get_host_other_than(actual_host)
- volume = self.volumes_client.create_volume(
- size=CONF.volume.volume_size, display_name='test')['volume']
-
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
- self.addCleanup(self._volume_clean_up, server_id, volume['id'])
+ volume = self.create_volume()
# Attach the volume to the server
- self.servers_client.attach_volume(server_id, volumeId=volume['id'],
- device='/dev/xvdb')
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'in-use')
+ self.attach_volume(server, volume, device='/dev/xvdb')
self._migrate_server_to(server_id, target_host)
waiters.wait_for_server_status(self.servers_client,
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index 62dbfe4..4f075eb 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -106,10 +106,7 @@
server = self.servers_client.show_server(server['id'])['server']
self.assertEqual(flavor['id'], server['flavor']['id'])
- @test.idempotent_id('4bf0be52-3b6f-4746-9a27-3143636fe30d')
- @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
- 'Cold migration not available.')
- def test_cold_migration(self):
+ def _test_cold_migrate_server(self, revert=False):
if CONF.compute.min_compute_nodes < 2:
msg = "Less than 2 compute nodes, skipping multinode tests."
raise self.skipException(msg)
@@ -123,10 +120,27 @@
waiters.wait_for_server_status(self.servers_client,
server['id'], 'VERIFY_RESIZE')
- self.servers_client.confirm_resize_server(server['id'])
+ if revert:
+ self.servers_client.revert_resize_server(server['id'])
+ assert_func = self.assertEqual
+ else:
+ self.servers_client.confirm_resize_server(server['id'])
+ assert_func = self.assertNotEqual
+
waiters.wait_for_server_status(self.servers_client,
server['id'], 'ACTIVE')
dst_host = self.admin_servers_client.show_server(
server['id'])['server']['OS-EXT-SRV-ATTR:host']
-        self.assertNotEqual(src_host, dst_host)
+        assert_func(src_host, dst_host)
+ @test.idempotent_id('4bf0be52-3b6f-4746-9a27-3143636fe30d')
+ @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
+ 'Cold migration not available.')
+ def test_cold_migration(self):
+ self._test_cold_migrate_server(revert=False)
+
+ @test.idempotent_id('caa1aa8b-f4ef-4374-be0d-95f001c2ac2d')
+ @testtools.skipUnless(CONF.compute_feature_enabled.cold_migration,
+ 'Cold migration not available.')
+ def test_revert_cold_migration(self):
+ self._test_cold_migrate_server(revert=True)
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
old mode 100755
new mode 100644
index c9ffcca..efa55d5
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -106,6 +106,14 @@
self.assertNotIn(self.s1_name, servers_name)
self.assertNotIn(self.s2_name, servers_name)
+        # List the primary tenant's servers with all_tenants specified
+ params = {'all_tenants': '', 'tenant_id': tenant_id}
+ body = self.client.list_servers(detail=True, **params)
+ servers = body['servers']
+ servers_name = map(lambda x: x['name'], servers)
+ self.assertIn(self.s1_name, servers_name)
+ self.assertIn(self.s2_name, servers_name)
+
# List the admin tenant shouldn't get servers created by other tenants
admin_tenant_id = self.client.tenant_id
params = {'all_tenants': '', 'tenant_id': admin_tenant_id}
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
new file mode 100644
index 0000000..f603abd
--- /dev/null
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -0,0 +1,75 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.compute import base
+from tempest.common import waiters
+from tempest import config
+from tempest import test
+
+CONF = config.CONF
+
+
+class TestVolumeSwap(base.BaseV2ComputeAdminTest):
+ """The test suite for swapping of volume with admin user.
+
+ The following is the scenario outline:
+ 1. Create a volume "volume1" with non-admin.
+ 2. Create a volume "volume2" with non-admin.
+ 3. Boot an instance "instance1" with non-admin.
+ 4. Attach "volume1" to "instance1" with non-admin.
+ 5. Swap volume from "volume1" to "volume2" as admin.
+ 6. Check the swap volume is successful and "volume2"
+ is attached to "instance1" and "volume1" is in available state.
+ """
+
+ @classmethod
+ def skip_checks(cls):
+ super(TestVolumeSwap, cls).skip_checks()
+ if not CONF.compute_feature_enabled.swap_volume:
+ raise cls.skipException("Swapping volumes is not supported.")
+
+ @classmethod
+ def setup_clients(cls):
+ super(TestVolumeSwap, cls).setup_clients()
+ # We need the admin client for performing the update (swap) volume call
+ cls.servers_admin_client = cls.os_adm.servers_client
+
+ @test.idempotent_id('1769f00d-a693-4d67-a631-6a3496773813')
+ @test.services('volume')
+ def test_volume_swap(self):
+ # Create two volumes.
+ # NOTE(gmann): Volumes are created before server creation so that
+ # volumes cleanup can happen successfully irrespective of which volume
+ # is attached to server.
+ volume1 = self.create_volume()
+ volume2 = self.create_volume()
+ # Boot server
+ server = self.create_test_server(wait_until='ACTIVE')
+ # Attach "volume1" to server
+ self.attach_volume(server, volume1)
+ # Swap volume from "volume1" to "volume2"
+ self.servers_admin_client.update_attached_volume(
+ server['id'], volume1['id'], volumeId=volume2['id'])
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume1['id'], 'available')
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume2['id'], 'in-use')
+ self.addCleanup(self.servers_client.detach_volume,
+ server['id'], volume2['id'])
+ # Verify "volume2" is attached to the server
+ vol_attachments = self.servers_client.list_volume_attachments(
+ server['id'])['volumeAttachments']
+ self.assertEqual(1, len(vol_attachments))
+ self.assertIn(volume2['id'], vol_attachments[0]['volumeId'])
+
+ # TODO(mriedem): Test swapping back from volume2 to volume1 after
+ # nova bug 1490236 is fixed.
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 27afff3..b738e82 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -120,6 +120,7 @@
cls.images = []
cls.security_groups = []
cls.server_groups = []
+ cls.volumes = []
@classmethod
def resource_cleanup(cls):
@@ -127,6 +128,7 @@
cls.clear_servers()
cls.clear_security_groups()
cls.clear_server_groups()
+ cls.clear_volumes()
super(BaseV2ComputeTest, cls).resource_cleanup()
@classmethod
@@ -370,6 +372,66 @@
self.useFixture(api_microversion_fixture.APIMicroversionFixture(
self.request_microversion))
+ @classmethod
+ def create_volume(cls):
+ """Create a volume and wait for it to become 'available'.
+
+ :returns: The available volume.
+ """
+ vol_name = data_utils.rand_name(cls.__name__ + '-volume')
+ volume = cls.volumes_client.create_volume(
+ size=CONF.volume.volume_size, display_name=vol_name)['volume']
+ cls.volumes.append(volume)
+ waiters.wait_for_volume_status(cls.volumes_client,
+ volume['id'], 'available')
+ return volume
+
+ @classmethod
+ def clear_volumes(cls):
+ LOG.debug('Clearing volumes: %s', ','.join(
+ volume['id'] for volume in cls.volumes))
+ for volume in cls.volumes:
+ try:
+ test_utils.call_and_ignore_notfound_exc(
+ cls.volumes_client.delete_volume, volume['id'])
+ except Exception:
+ LOG.exception('Deleting volume %s failed', volume['id'])
+
+ for volume in cls.volumes:
+ try:
+ cls.volumes_client.wait_for_resource_deletion(volume['id'])
+ except Exception:
+ LOG.exception('Waiting for deletion of volume %s failed',
+ volume['id'])
+
+ def attach_volume(self, server, volume, device=None):
+ """Attaches volume to server and waits for 'in-use' volume status.
+
+ The volume will be detached when the test tears down.
+
+ :param server: The server to which the volume will be attached.
+ :param volume: The volume to attach.
+ :param device: Optional mountpoint for the attached volume. Note that
+ this is not guaranteed for all hypervisors and is not recommended.
+ """
+ attach_kwargs = dict(volumeId=volume['id'])
+ if device:
+ attach_kwargs['device'] = device
+ self.servers_client.attach_volume(
+ server['id'], **attach_kwargs)
+ # On teardown detach the volume and wait for it to be available. This
+ # is so we don't error out when trying to delete the volume during
+ # teardown.
+ self.addCleanup(waiters.wait_for_volume_status,
+ self.volumes_client, volume['id'], 'available')
+ # Ignore 404s on detach in case the server is deleted or the volume
+ # is already detached.
+ self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+ self.servers_client.detach_volume,
+ server['id'], volume['id'])
+ waiters.wait_for_volume_status(self.volumes_client,
+ volume['id'], 'in-use')
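+    # A typical usage pattern of these helpers in a test method is:
+    #
+    #     server = self.create_test_server(wait_until='ACTIVE')
+    #     volume = self.create_volume()
+    #     self.attach_volume(server, volume)
+    #
+    # The attached volume is detached again on test teardown, and the volume
+    # itself is deleted during resource_cleanup() via clear_volumes().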
+
class BaseV2ComputeAdminTest(BaseV2ComputeTest):
"""Base test case class for Compute Admin API tests."""
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index 7c12bf9..b936b23 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -47,22 +47,20 @@
@classmethod
def setup_clients(cls):
super(AttachInterfacesTestJSON, cls).setup_clients()
- cls.client = cls.os.interfaces_client
cls.networks_client = cls.os.networks_client
cls.subnets_client = cls.os.subnets_client
cls.ports_client = cls.os.ports_client
- cls.servers_client = cls.servers_client
def wait_for_interface_status(self, server, port_id, status):
"""Waits for an interface to reach a given status."""
- body = (self.client.show_interface(server, port_id)
+ body = (self.interfaces_client.show_interface(server, port_id)
['interfaceAttachment'])
interface_status = body['port_state']
start = int(time.time())
while(interface_status != status):
time.sleep(self.build_interval)
- body = (self.client.show_interface(server, port_id)
+ body = (self.interfaces_client.show_interface(server, port_id)
['interfaceAttachment'])
interface_status = body['port_state']
@@ -119,7 +117,7 @@
def _create_server_get_interfaces(self):
server = self.create_test_server(wait_until='ACTIVE')
- ifs = (self.client.list_interfaces(server['id'])
+ ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
body = self.wait_for_interface_status(
server['id'], ifs[0]['port_id'], 'ACTIVE')
@@ -127,7 +125,7 @@
return server, ifs
def _test_create_interface(self, server):
- iface = (self.client.create_interface(server['id'])
+ iface = (self.interfaces_client.create_interface(server['id'])
['interfaceAttachment'])
iface = self.wait_for_interface_status(
server['id'], iface['port_id'], 'ACTIVE')
@@ -136,7 +134,7 @@
def _test_create_interface_by_network_id(self, server, ifs):
network_id = ifs[0]['net_id']
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], net_id=network_id)['interfaceAttachment']
iface = self.wait_for_interface_status(
server['id'], iface['port_id'], 'ACTIVE')
@@ -148,7 +146,7 @@
port = self.ports_client.create_port(network_id=network_id)
port_id = port['port']['id']
self.addCleanup(self.ports_client.delete_port, port_id)
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], port_id=port_id)['interfaceAttachment']
iface = self.wait_for_interface_status(
server['id'], iface['port_id'], 'ACTIVE')
@@ -165,7 +163,7 @@
1)
fixed_ips = [{'ip_address': ip_list[0]}]
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], net_id=network_id,
fixed_ips=fixed_ips)['interfaceAttachment']
self.addCleanup(self.ports_client.delete_port, iface['port_id'])
@@ -176,7 +174,7 @@
def _test_show_interface(self, server, ifs):
iface = ifs[0]
- _iface = self.client.show_interface(
+ _iface = self.interfaces_client.show_interface(
server['id'], iface['port_id'])['interfaceAttachment']
self._check_interface(iface, port_id=_iface['port_id'],
network_id=_iface['net_id'],
@@ -186,14 +184,14 @@
def _test_delete_interface(self, server, ifs):
# NOTE(danms): delete not the first or last, but one in the middle
iface = ifs[1]
- self.client.delete_interface(server['id'], iface['port_id'])
- _ifs = (self.client.list_interfaces(server['id'])
+ self.interfaces_client.delete_interface(server['id'], iface['port_id'])
+ _ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
start = int(time.time())
while len(ifs) == len(_ifs):
time.sleep(self.build_interval)
- _ifs = (self.client.list_interfaces(server['id'])
+ _ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
timed_out = int(time.time()) - start >= self.build_timeout
if len(ifs) == len(_ifs) and timed_out:
@@ -239,7 +237,7 @@
iface = self._test_create_interface_by_fixed_ips(server, ifs)
ifs.append(iface)
- _ifs = (self.client.list_interfaces(server['id'])
+ _ifs = (self.interfaces_client.list_interfaces(server['id'])
['interfaceAttachments'])
self._compare_iface_list(ifs, _ifs)
@@ -302,11 +300,11 @@
for server in servers:
# attach the port to the server
- iface = self.client.create_interface(
+ iface = self.interfaces_client.create_interface(
server['id'], port_id=port_id)['interfaceAttachment']
self._check_interface(iface, port_id=port_id)
# detach the port from the server; this is a cast in the compute
# API so we have to poll the port until the device_id is unset.
- self.client.delete_interface(server['id'], port_id)
+ self.interfaces_client.delete_interface(server['id'], port_id)
self.wait_for_port_detach(port_id)
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 079465d..07f46c5 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -106,24 +106,15 @@
@test.services('volume')
def test_delete_server_while_in_attached_volume(self):
# Delete a server while a volume is attached to it
- volumes_client = self.volumes_extensions_client
device = '/dev/%s' % CONF.compute.volume_device_name
server = self.create_test_server(wait_until='ACTIVE')
- volume = (volumes_client.create_volume(size=CONF.volume.volume_size)
- ['volume'])
- self.addCleanup(volumes_client.delete_volume, volume['id'])
- waiters.wait_for_volume_status(volumes_client,
- volume['id'], 'available')
- self.client.attach_volume(server['id'],
- volumeId=volume['id'],
- device=device)
- waiters.wait_for_volume_status(volumes_client,
- volume['id'], 'in-use')
+ volume = self.create_volume()
+ self.attach_volume(server, volume, device=device)
self.client.delete_server(server['id'])
waiters.wait_for_server_termination(self.client, server['id'])
- waiters.wait_for_volume_status(volumes_client,
+ waiters.wait_for_volume_status(self.volumes_client,
volume['id'], 'available')
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
old mode 100755
new mode 100644
index 062e920..788dd8a
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -81,14 +81,19 @@
@testtools.skipUnless(CONF.compute_feature_enabled.change_password,
'Change password not available.')
def test_change_server_password(self):
+ # Since this test messes with the password and makes the
+ # server unreachable, it should create its own server
+ newserver = self.create_test_server(
+ validatable=True,
+ wait_until='ACTIVE')
# The server's password should be set to the provided password
new_password = 'Newpass1234'
- self.client.change_password(self.server_id, adminPass=new_password)
- waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+ self.client.change_password(newserver['id'], adminPass=new_password)
+ waiters.wait_for_server_status(self.client, newserver['id'], 'ACTIVE')
if CONF.validation.run_validation:
# Verify that the user can authenticate with the new password
- server = self.client.show_server(self.server_id)['server']
+ server = self.client.show_server(newserver['id'])['server']
linux_client = remote_client.RemoteClient(
self.get_server_ip(server),
self.ssh_user,
@@ -235,20 +240,10 @@
@test.services('volume')
def test_rebuild_server_with_volume_attached(self):
# create a new volume and attach it to the server
- volume = self.volumes_client.create_volume(
- size=CONF.volume.volume_size)
- volume = volume['volume']
- self.addCleanup(self.volumes_client.delete_volume, volume['id'])
- waiters.wait_for_volume_status(self.volumes_client, volume['id'],
- 'available')
+ volume = self.create_volume()
- self.client.attach_volume(self.server_id, volumeId=volume['id'])
- self.addCleanup(waiters.wait_for_volume_status, self.volumes_client,
- volume['id'], 'available')
- self.addCleanup(self.client.detach_volume,
- self.server_id, volume['id'])
- waiters.wait_for_volume_status(self.volumes_client, volume['id'],
- 'in-use')
+ server = self.client.show_server(self.server_id)['server']
+ self.attach_volume(server, volume)
# run general rebuild test
self.test_rebuild_server()
diff --git a/tempest/api/compute/servers/test_server_rescue_negative.py b/tempest/api/compute/servers/test_server_rescue_negative.py
index 8d63b6b..41b648c 100644
--- a/tempest/api/compute/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/servers/test_server_rescue_negative.py
@@ -60,20 +60,6 @@
waiters.wait_for_server_status(cls.servers_client,
cls.server_id, 'ACTIVE')
- def _create_volume(self):
- volume = self.volumes_extensions_client.create_volume(
- size=CONF.volume.volume_size, display_name=data_utils.rand_name(
- self.__class__.__name__ + '_volume'))['volume']
- self.addCleanup(self.delete_volume, volume['id'])
- waiters.wait_for_volume_status(self.volumes_extensions_client,
- volume['id'], 'available')
- return volume
-
- def _detach(self, server_id, volume_id):
- self.servers_client.detach_volume(server_id, volume_id)
- waiters.wait_for_volume_status(self.volumes_extensions_client,
- volume_id, 'available')
-
def _unrescue(self, server_id):
self.servers_client.unrescue_server(server_id)
waiters.wait_for_server_status(self.servers_client,
@@ -125,7 +111,7 @@
@test.services('volume')
@test.attr(type=['negative'])
def test_rescued_vm_attach_volume(self):
- volume = self._create_volume()
+ volume = self.create_volume()
# Rescue the server
self.servers_client.rescue_server(self.server_id,
@@ -145,14 +131,11 @@
@test.services('volume')
@test.attr(type=['negative'])
def test_rescued_vm_detach_volume(self):
- volume = self._create_volume()
+ volume = self.create_volume()
# Attach the volume to the server
- self.servers_client.attach_volume(self.server_id,
- volumeId=volume['id'],
- device='/dev/%s' % self.device)
- waiters.wait_for_volume_status(self.volumes_extensions_client,
- volume['id'], 'in-use')
+ server = self.servers_client.show_server(self.server_id)['server']
+ self.attach_volume(server, volume, device='/dev/%s' % self.device)
# Rescue the server
self.servers_client.rescue_server(self.server_id,
@@ -160,7 +143,6 @@
waiters.wait_for_server_status(self.servers_client,
self.server_id, 'RESCUE')
# addCleanup is a LIFO queue
- self.addCleanup(self._detach, self.server_id, volume['id'])
self.addCleanup(self._unrescue, self.server_id)
# Detach the volume from the server expecting failure
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/volumes/test_volume_snapshots.py b/tempest/api/compute/volumes/test_volume_snapshots.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/volumes/test_volumes_get.py b/tempest/api/compute/volumes/test_volumes_get.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/volumes/test_volumes_list.py b/tempest/api/compute/volumes/test_volumes_list.py
old mode 100755
new mode 100644
diff --git a/tempest/api/compute/volumes/test_volumes_negative.py b/tempest/api/compute/volumes/test_volumes_negative.py
old mode 100755
new mode 100644
index 7ecad12..5fe4cb3
--- a/tempest/api/compute/volumes/test_volumes_negative.py
+++ b/tempest/api/compute/volumes/test_volumes_negative.py
@@ -84,13 +84,6 @@
size='0', display_name=v_name, metadata=metadata)
@test.attr(type=['negative'])
- @test.idempotent_id('f01904f2-e975-4915-98ce-cb5fa27bde4f')
- def test_get_invalid_volume_id(self):
- # Negative: Should not be able to get volume with invalid id
- self.assertRaises(lib_exc.NotFound,
- self.client.show_volume, '#$%%&^&^')
-
- @test.attr(type=['negative'])
@test.idempotent_id('62bab09a-4c03-4617-8cca-8572bc94af9b')
def test_get_volume_without_passing_volume_id(self):
# Negative: Should not be able to get volume when empty ID is passed
diff --git a/tempest/api/data_processing/base.py b/tempest/api/data_processing/base.py
deleted file mode 100644
index c8506ae..0000000
--- a/tempest/api/data_processing/base.py
+++ /dev/null
@@ -1,442 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-import copy
-
-import six
-
-from tempest import config
-from tempest import exceptions
-from tempest.lib.common.utils import test_utils
-import tempest.test
-
-
-CONF = config.CONF
-
-"""Default templates.
-There should always be at least a master1 and a worker1 node
-group template."""
-BASE_VANILLA_DESC = {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['namenode', 'resourcemanager',
- 'hiveserver']
- },
- 'master2': {
- 'count': 1,
- 'node_processes': ['oozie', 'historyserver',
- 'secondarynamenode']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['datanode', 'nodemanager'],
- 'node_configs': {
- 'MapReduce': {
- 'yarn.app.mapreduce.am.resource.mb': 256,
- 'yarn.app.mapreduce.am.command-opts': '-Xmx256m'
- },
- 'YARN': {
- 'yarn.scheduler.minimum-allocation-mb': 256,
- 'yarn.scheduler.maximum-allocation-mb': 1024,
- 'yarn.nodemanager.vmem-check-enabled': False
- }
- }
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- }
- }
-}
-
-BASE_SPARK_DESC = {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['namenode', 'master']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['datanode', 'slave']
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- }
- }
-}
-
-BASE_CDH_DESC = {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['CLOUDERA_MANAGER']
- },
- 'master2': {
- 'count': 1,
- 'node_processes': ['HDFS_NAMENODE',
- 'YARN_RESOURCEMANAGER']
- },
- 'master3': {
- 'count': 1,
- 'node_processes': ['OOZIE_SERVER', 'YARN_JOBHISTORY',
- 'HDFS_SECONDARYNAMENODE',
- 'HIVE_METASTORE', 'HIVE_SERVER2']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['YARN_NODEMANAGER', 'HDFS_DATANODE']
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs_replication': 1
- }
- }
-}
-
-
-DEFAULT_TEMPLATES = {
- 'vanilla': collections.OrderedDict([
- ('2.6.0', copy.deepcopy(BASE_VANILLA_DESC)),
- ('2.7.1', copy.deepcopy(BASE_VANILLA_DESC)),
- ('1.2.1', {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['namenode', 'jobtracker']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['datanode', 'tasktracker'],
- 'node_configs': {
- 'HDFS': {
- 'Data Node Heap Size': 1024
- },
- 'MapReduce': {
- 'Task Tracker Heap Size': 1024
- }
- }
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- },
- 'MapReduce': {
- 'mapred.map.tasks.speculative.execution': False,
- 'mapred.child.java.opts': '-Xmx500m'
- },
- 'general': {
- 'Enable Swift': False
- }
- }
- })
- ]),
- 'hdp': collections.OrderedDict([
- ('2.0.6', {
- 'NODES': {
- 'master1': {
- 'count': 1,
- 'node_processes': ['NAMENODE', 'SECONDARY_NAMENODE',
- 'ZOOKEEPER_SERVER', 'AMBARI_SERVER',
- 'HISTORYSERVER', 'RESOURCEMANAGER',
- 'GANGLIA_SERVER', 'NAGIOS_SERVER',
- 'OOZIE_SERVER']
- },
- 'worker1': {
- 'count': 1,
- 'node_processes': ['HDFS_CLIENT', 'DATANODE',
- 'YARN_CLIENT', 'ZOOKEEPER_CLIENT',
- 'MAPREDUCE2_CLIENT', 'NODEMANAGER',
- 'PIG', 'OOZIE_CLIENT']
- }
- },
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 1
- }
- }
- })
- ]),
- 'spark': collections.OrderedDict([
- ('1.0.0', copy.deepcopy(BASE_SPARK_DESC)),
- ('1.3.1', copy.deepcopy(BASE_SPARK_DESC))
- ]),
- 'cdh': collections.OrderedDict([
- ('5.4.0', copy.deepcopy(BASE_CDH_DESC)),
- ('5.3.0', copy.deepcopy(BASE_CDH_DESC)),
- ('5', copy.deepcopy(BASE_CDH_DESC))
- ]),
-}
-
-
-class BaseDataProcessingTest(tempest.test.BaseTestCase):
-
- credentials = ['primary']
-
- @classmethod
- def skip_checks(cls):
- super(BaseDataProcessingTest, cls).skip_checks()
- if not CONF.service_available.sahara:
- raise cls.skipException('Sahara support is required')
- cls.default_plugin = cls._get_default_plugin()
-
- @classmethod
- def setup_clients(cls):
- super(BaseDataProcessingTest, cls).setup_clients()
- cls.client = cls.os.data_processing_client
-
- @classmethod
- def resource_setup(cls):
- super(BaseDataProcessingTest, cls).resource_setup()
-
- cls.default_version = cls._get_default_version()
- if cls.default_plugin is not None and cls.default_version is None:
- raise exceptions.InvalidConfiguration(
- message="No known Sahara plugin version was found")
- cls.flavor_ref = CONF.compute.flavor_ref
-
- # add lists for watched resources
- cls._node_group_templates = []
- cls._cluster_templates = []
- cls._data_sources = []
- cls._job_binary_internals = []
- cls._job_binaries = []
- cls._jobs = []
-
- @classmethod
- def resource_cleanup(cls):
- cls.cleanup_resources(getattr(cls, '_cluster_templates', []),
- cls.client.delete_cluster_template)
- cls.cleanup_resources(getattr(cls, '_node_group_templates', []),
- cls.client.delete_node_group_template)
- cls.cleanup_resources(getattr(cls, '_jobs', []), cls.client.delete_job)
- cls.cleanup_resources(getattr(cls, '_job_binaries', []),
- cls.client.delete_job_binary)
- cls.cleanup_resources(getattr(cls, '_job_binary_internals', []),
- cls.client.delete_job_binary_internal)
- cls.cleanup_resources(getattr(cls, '_data_sources', []),
- cls.client.delete_data_source)
- super(BaseDataProcessingTest, cls).resource_cleanup()
-
- @staticmethod
- def cleanup_resources(resource_id_list, method):
- for resource_id in resource_id_list:
- test_utils.call_and_ignore_notfound_exc(method, resource_id)
-
- @classmethod
- def create_node_group_template(cls, name, plugin_name, hadoop_version,
- node_processes, flavor_id,
- node_configs=None, **kwargs):
- """Creates watched node group template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_node_group_template(name, plugin_name,
- hadoop_version,
- node_processes,
- flavor_id,
- node_configs,
- **kwargs)
- resp_body = resp_body['node_group_template']
- # store id of created node group template
- cls._node_group_templates.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_cluster_template(cls, name, plugin_name, hadoop_version,
- node_groups, cluster_configs=None, **kwargs):
- """Creates watched cluster template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_cluster_template(name, plugin_name,
- hadoop_version,
- node_groups,
- cluster_configs,
- **kwargs)
- resp_body = resp_body['cluster_template']
- # store id of created cluster template
- cls._cluster_templates.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_data_source(cls, name, type, url, **kwargs):
- """Creates watched data source with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_data_source(name, type, url, **kwargs)
- resp_body = resp_body['data_source']
- # store id of created data source
- cls._data_sources.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_job_binary_internal(cls, name, data):
- """Creates watched job binary internal with specified params.
-
- It returns created object. All resources created in this method will
- be automatically removed in tearDownClass method.
- """
- resp_body = cls.client.create_job_binary_internal(name, data)
- resp_body = resp_body['job_binary_internal']
- # store id of created job binary internal
- cls._job_binary_internals.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_job_binary(cls, name, url, extra=None, **kwargs):
- """Creates watched job binary with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_job_binary(name, url, extra, **kwargs)
- resp_body = resp_body['job_binary']
- # store id of created job binary
- cls._job_binaries.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def create_job(cls, name, job_type, mains, libs=None, **kwargs):
- """Creates watched job with specified params.
-
- It supports passing additional params using kwargs and returns created
- object. All resources created in this method will be automatically
- removed in tearDownClass method.
- """
- resp_body = cls.client.create_job(name,
- job_type, mains, libs, **kwargs)
- resp_body = resp_body['job']
- # store id of created job
- cls._jobs.append(resp_body['id'])
-
- return resp_body
-
- @classmethod
- def _get_default_plugin(cls):
- """Returns the default plugin used for testing."""
- if len(CONF.data_processing_feature_enabled.plugins) == 0:
- return None
-
- for plugin in CONF.data_processing_feature_enabled.plugins:
- if plugin in DEFAULT_TEMPLATES:
- break
- else:
- plugin = ''
- return plugin
-
- @classmethod
- def _get_default_version(cls):
- """Returns the default plugin version used for testing.
-
- This is gathered separately from the plugin to allow
- the usage of plugin name in skip_checks. This method is
- rather invoked into resource_setup, which allows API calls
- and exceptions.
- """
- if not cls.default_plugin:
- return None
- plugin = cls.client.get_plugin(cls.default_plugin)['plugin']
-
- for version in DEFAULT_TEMPLATES[cls.default_plugin].keys():
- if version in plugin['versions']:
- break
- else:
- version = None
-
- return version
-
- @classmethod
- def get_node_group_template(cls, nodegroup='worker1'):
- """Returns a node group template for the default plugin."""
- try:
- plugin_data = (
- DEFAULT_TEMPLATES[cls.default_plugin][cls.default_version]
- )
- nodegroup_data = plugin_data['NODES'][nodegroup]
- node_group_template = {
- 'description': 'Test node group template',
- 'plugin_name': cls.default_plugin,
- 'hadoop_version': cls.default_version,
- 'node_processes': nodegroup_data['node_processes'],
- 'flavor_id': cls.flavor_ref,
- 'node_configs': nodegroup_data.get('node_configs', {}),
- }
- return node_group_template
- except (IndexError, KeyError):
- return None
-
- @classmethod
- def get_cluster_template(cls, node_group_template_ids=None):
- """Returns a cluster template for the default plugin.
-
- node_group_template_defined contains the type and ID of pre-defined
- node group templates that have to be used in the cluster template
- (instead of dynamically defining them with 'node_processes').
- """
- if node_group_template_ids is None:
- node_group_template_ids = {}
- try:
- plugin_data = (
- DEFAULT_TEMPLATES[cls.default_plugin][cls.default_version]
- )
-
- all_node_groups = []
- for ng_name, ng_data in six.iteritems(plugin_data['NODES']):
- node_group = {
- 'name': '%s-node' % (ng_name),
- 'flavor_id': cls.flavor_ref,
- 'count': ng_data['count']
- }
- if ng_name in node_group_template_ids.keys():
- # node group already defined, use it
- node_group['node_group_template_id'] = (
- node_group_template_ids[ng_name]
- )
- else:
- # node_processes list defined on-the-fly
- node_group['node_processes'] = ng_data['node_processes']
- if 'node_configs' in ng_data:
- node_group['node_configs'] = ng_data['node_configs']
- all_node_groups.append(node_group)
-
- cluster_template = {
- 'description': 'Test cluster template',
- 'plugin_name': cls.default_plugin,
- 'hadoop_version': cls.default_version,
- 'cluster_configs': plugin_data.get('cluster_configs', {}),
- 'node_groups': all_node_groups,
- }
- return cluster_template
- except (IndexError, KeyError):
- return None
diff --git a/tempest/api/data_processing/test_cluster_templates.py b/tempest/api/data_processing/test_cluster_templates.py
deleted file mode 100644
index dfd8e27..0000000
--- a/tempest/api/data_processing/test_cluster_templates.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import exceptions
-from tempest import test
-
-
-class ClusterTemplateTest(dp_base.BaseDataProcessingTest):
- # Link to the API documentation is http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.0.html#cluster-templates
-
- @classmethod
- def skip_checks(cls):
- super(ClusterTemplateTest, cls).skip_checks()
- if cls.default_plugin is None:
- raise cls.skipException("No Sahara plugins configured")
-
- @classmethod
- def resource_setup(cls):
- super(ClusterTemplateTest, cls).resource_setup()
-
- # pre-define a node group templates
- node_group_template_w = cls.get_node_group_template('worker1')
- if node_group_template_w is None:
- raise exceptions.InvalidConfiguration(
- message="No known Sahara plugin was found")
-
- node_group_template_w['name'] = data_utils.rand_name(
- 'sahara-ng-template')
- resp_body = cls.create_node_group_template(**node_group_template_w)
- node_group_template_id = resp_body['id']
- configured_node_group_templates = {'worker1': node_group_template_id}
-
- cls.full_cluster_template = cls.get_cluster_template(
- configured_node_group_templates)
-
- # create cls.cluster_template variable to use for comparison to cluster
- # template response body. The 'node_groups' field in the response body
- # has some extra info that post body does not have. The 'node_groups'
- # field in the response body is something like this
- #
- # 'node_groups': [
- # {
- # 'count': 3,
- # 'name': 'worker-node',
- # 'volume_mount_prefix': '/volumes/disk',
- # 'created_at': '2014-05-21 14:31:37',
- # 'updated_at': None,
- # 'floating_ip_pool': None,
- # ...
- # },
- # ...
- # ]
- cls.cluster_template = cls.full_cluster_template.copy()
- del cls.cluster_template['node_groups']
-
- def _create_cluster_template(self, template_name=None):
- """Creates Cluster Template with optional name specified.
-
- It creates template, ensures template name and response body.
- Returns id and name of created template.
- """
- if not template_name:
- # generate random name if it's not specified
- template_name = data_utils.rand_name('sahara-cluster-template')
-
- # create cluster template
- resp_body = self.create_cluster_template(template_name,
- **self.full_cluster_template)
-
- # ensure that template created successfully
- self.assertEqual(template_name, resp_body['name'])
- self.assertDictContainsSubset(self.cluster_template, resp_body)
-
- return resp_body['id'], template_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('3525f1f1-3f9c-407d-891a-a996237e728b')
- def test_cluster_template_create(self):
- self._create_cluster_template()
-
- @test.attr(type='smoke')
- @test.idempotent_id('7a161882-e430-4840-a1c6-1d928201fab2')
- def test_cluster_template_list(self):
- template_info = self._create_cluster_template()
-
- # check for cluster template in list
- templates = self.client.list_cluster_templates()['cluster_templates']
- templates_info = [(template['id'], template['name'])
- for template in templates]
- self.assertIn(template_info, templates_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('2b75fe22-f731-4b0f-84f1-89ab25f86637')
- def test_cluster_template_get(self):
- template_id, template_name = self._create_cluster_template()
-
- # check cluster template fetch by id
- template = self.client.get_cluster_template(template_id)
- template = template['cluster_template']
- self.assertEqual(template_name, template['name'])
- self.assertDictContainsSubset(self.cluster_template, template)
-
- @test.attr(type='smoke')
- @test.idempotent_id('ff1fd989-171c-4dd7-91fd-9fbc71b09675')
- def test_cluster_template_delete(self):
- template_id, _ = self._create_cluster_template()
-
- # delete the cluster template by id
- self.client.delete_cluster_template(template_id)
- # TODO(ylobankov): check that cluster template is really deleted
diff --git a/tempest/api/data_processing/test_data_sources.py b/tempest/api/data_processing/test_data_sources.py
deleted file mode 100644
index 67d09a0..0000000
--- a/tempest/api/data_processing/test_data_sources.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class DataSourceTest(dp_base.BaseDataProcessingTest):
- @classmethod
- def resource_setup(cls):
- super(DataSourceTest, cls).resource_setup()
- cls.swift_data_source_with_creds = {
- 'url': 'swift://sahara-container.sahara/input-source',
- 'description': 'Test data source',
- 'credentials': {
- 'user': cls.os.credentials.username,
- 'password': cls.os.credentials.password
- },
- 'type': 'swift'
- }
- cls.swift_data_source = cls.swift_data_source_with_creds.copy()
- del cls.swift_data_source['credentials']
-
- cls.local_hdfs_data_source = {
- 'url': 'input-source',
- 'description': 'Test data source',
- 'type': 'hdfs'
- }
-
- cls.external_hdfs_data_source = {
- 'url': 'hdfs://172.18.168.2:8020/usr/hadoop/input-source',
- 'description': 'Test data source',
- 'type': 'hdfs'
- }
-
- def _create_data_source(self, source_body, source_name=None):
- """Creates Data Source with optional name specified.
-
- It creates a link to input-source file (it may not exist), ensures
- source name and response body. Returns id and name of created source.
- """
- if not source_name:
- # generate random name if it's not specified
- source_name = data_utils.rand_name('sahara-data-source')
-
- # create data source
- resp_body = self.create_data_source(source_name, **source_body)
-
- # ensure that source created successfully
- self.assertEqual(source_name, resp_body['name'])
- if source_body['type'] == 'swift':
- source_body = self.swift_data_source
- self.assertDictContainsSubset(source_body, resp_body)
-
- return resp_body['id'], source_name
-
- def _list_data_sources(self, source_info):
- # check for data source in list
- sources = self.client.list_data_sources()['data_sources']
- sources_info = [(source['id'], source['name']) for source in sources]
- self.assertIn(source_info, sources_info)
-
- def _get_data_source(self, source_id, source_name, source_body):
- # check data source fetch by id
- source = self.client.get_data_source(source_id)['data_source']
- self.assertEqual(source_name, source['name'])
- self.assertDictContainsSubset(source_body, source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('9e0e836d-c372-4fca-91b7-b66c3e9646c8')
- def test_swift_data_source_create(self):
- self._create_data_source(self.swift_data_source_with_creds)
-
- @test.attr(type='smoke')
- @test.idempotent_id('3cb87a4a-0534-4b97-9edc-8bbc822b68a0')
- def test_swift_data_source_list(self):
- source_info = (
- self._create_data_source(self.swift_data_source_with_creds))
- self._list_data_sources(source_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('fc07409b-6477-4cb3-9168-e633c46b227f')
- def test_swift_data_source_get(self):
- source_id, source_name = (
- self._create_data_source(self.swift_data_source_with_creds))
- self._get_data_source(source_id, source_name, self.swift_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('df53669c-0cd1-4cf7-b408-4cf215d8beb8')
- def test_swift_data_source_delete(self):
- source_id, _ = (
- self._create_data_source(self.swift_data_source_with_creds))
-
- # delete the data source by id
- self.client.delete_data_source(source_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('88505d52-db01-4229-8f1d-a1137da5fe2d')
- def test_local_hdfs_data_source_create(self):
- self._create_data_source(self.local_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('81d7d42a-d7f6-4d9b-b38c-0801a4dfe3c2')
- def test_local_hdfs_data_source_list(self):
- source_info = self._create_data_source(self.local_hdfs_data_source)
- self._list_data_sources(source_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('ec0144c6-db1e-4169-bb06-7abae14a8443')
- def test_local_hdfs_data_source_get(self):
- source_id, source_name = (
- self._create_data_source(self.local_hdfs_data_source))
- self._get_data_source(
- source_id, source_name, self.local_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('e398308b-4230-4f86-ba10-9b0b60a59c8d')
- def test_local_hdfs_data_source_delete(self):
- source_id, _ = self._create_data_source(self.local_hdfs_data_source)
-
- # delete the data source by id
- self.client.delete_data_source(source_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('bfd91128-e642-4d95-a973-3e536962180c')
- def test_external_hdfs_data_source_create(self):
- self._create_data_source(self.external_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('92e2be72-f7ab-499d-ae01-fb9943c90d8e')
- def test_external_hdfs_data_source_list(self):
- source_info = self._create_data_source(self.external_hdfs_data_source)
- self._list_data_sources(source_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('a31edb1b-6bc6-4f42-871f-70cd243184ac')
- def test_external_hdfs_data_source_get(self):
- source_id, source_name = (
- self._create_data_source(self.external_hdfs_data_source))
- self._get_data_source(
- source_id, source_name, self.external_hdfs_data_source)
-
- @test.attr(type='smoke')
- @test.idempotent_id('295924cd-a085-4b45-aea8-0707cdb2da7e')
- def test_external_hdfs_data_source_delete(self):
- source_id, _ = self._create_data_source(self.external_hdfs_data_source)
-
- # delete the data source by id
- self.client.delete_data_source(source_id)
diff --git a/tempest/api/data_processing/test_job_binaries.py b/tempest/api/data_processing/test_job_binaries.py
deleted file mode 100644
index a47ddbc..0000000
--- a/tempest/api/data_processing/test_job_binaries.py
+++ /dev/null
@@ -1,148 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class JobBinaryTest(dp_base.BaseDataProcessingTest):
- # Link to the API documentation is http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.1_EDP.html#job-binaries
-
- @classmethod
- def resource_setup(cls):
- super(JobBinaryTest, cls).resource_setup()
- cls.swift_job_binary_with_extra = {
- 'url': 'swift://sahara-container.sahara/example.jar',
- 'description': 'Test job binary',
- 'extra': {
- 'user': cls.os.credentials.username,
- 'password': cls.os.credentials.password
- }
- }
- # Create extra cls.swift_job_binary variable to use for comparison to
- # job binary response body because response body has no 'extra' field.
- cls.swift_job_binary = cls.swift_job_binary_with_extra.copy()
- del cls.swift_job_binary['extra']
-
- name = data_utils.rand_name('sahara-internal-job-binary')
- cls.job_binary_data = 'Some script may be data'
- job_binary_internal = (
- cls.create_job_binary_internal(name, cls.job_binary_data))
- cls.internal_db_job_binary = {
- 'url': 'internal-db://%s' % job_binary_internal['id'],
- 'description': 'Test job binary',
- }
-
- def _create_job_binary(self, binary_body, binary_name=None):
- """Creates Job Binary with optional name specified.
-
- It creates a link to data (jar, pig files, etc.), ensures job binary
- name and response body. Returns id and name of created job binary.
- Data may not exist when using Swift as data storage.
- In other cases data must exist in storage.
- """
- if not binary_name:
- # generate random name if it's not specified
- binary_name = data_utils.rand_name('sahara-job-binary')
-
- # create job binary
- resp_body = self.create_job_binary(binary_name, **binary_body)
-
- # ensure that binary created successfully
- self.assertEqual(binary_name, resp_body['name'])
- if 'swift' in binary_body['url']:
- binary_body = self.swift_job_binary
- self.assertDictContainsSubset(binary_body, resp_body)
-
- return resp_body['id'], binary_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('c00d43f8-4360-45f8-b280-af1a201b12d3')
- def test_swift_job_binary_create(self):
- self._create_job_binary(self.swift_job_binary_with_extra)
-
- @test.attr(type='smoke')
- @test.idempotent_id('f8809352-e79d-4748-9359-ce1efce89f2a')
- def test_swift_job_binary_list(self):
- binary_info = self._create_job_binary(self.swift_job_binary_with_extra)
-
- # check for job binary in list
- binaries = self.client.list_job_binaries()['binaries']
- binaries_info = [(binary['id'], binary['name']) for binary in binaries]
- self.assertIn(binary_info, binaries_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('2d4a670f-e8f1-413c-b5ac-50c1bfe9e1b1')
- def test_swift_job_binary_get(self):
- binary_id, binary_name = (
- self._create_job_binary(self.swift_job_binary_with_extra))
-
- # check job binary fetch by id
- binary = self.client.get_job_binary(binary_id)['job_binary']
- self.assertEqual(binary_name, binary['name'])
- self.assertDictContainsSubset(self.swift_job_binary, binary)
-
- @test.attr(type='smoke')
- @test.idempotent_id('9b0e8f38-04f3-4616-b399-cfa7eb2677ed')
- def test_swift_job_binary_delete(self):
- binary_id, _ = (
- self._create_job_binary(self.swift_job_binary_with_extra))
-
- # delete the job binary by id
- self.client.delete_job_binary(binary_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('63662f6d-8291-407e-a6fc-f654522ebab6')
- def test_internal_db_job_binary_create(self):
- self._create_job_binary(self.internal_db_job_binary)
-
- @test.attr(type='smoke')
- @test.idempotent_id('38731e7b-6d9d-4ffa-8fd1-193c453e88b1')
- def test_internal_db_job_binary_list(self):
- binary_info = self._create_job_binary(self.internal_db_job_binary)
-
- # check for job binary in list
- binaries = self.client.list_job_binaries()['binaries']
- binaries_info = [(binary['id'], binary['name']) for binary in binaries]
- self.assertIn(binary_info, binaries_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('1b32199b-c3f5-43e1-a37a-3797e57b7066')
- def test_internal_db_job_binary_get(self):
- binary_id, binary_name = (
- self._create_job_binary(self.internal_db_job_binary))
-
- # check job binary fetch by id
- binary = self.client.get_job_binary(binary_id)['job_binary']
- self.assertEqual(binary_name, binary['name'])
- self.assertDictContainsSubset(self.internal_db_job_binary, binary)
-
- @test.attr(type='smoke')
- @test.idempotent_id('3c42b0c3-3e03-46a5-adf0-df0650271a4e')
- def test_internal_db_job_binary_delete(self):
- binary_id, _ = self._create_job_binary(self.internal_db_job_binary)
-
- # delete the job binary by id
- self.client.delete_job_binary(binary_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('d5d47659-7e2c-4ea7-b292-5b3e559e8587')
- def test_job_binary_get_data(self):
- binary_id, _ = self._create_job_binary(self.internal_db_job_binary)
-
- # get data of job binary by id
- _, data = self.client.get_job_binary_data(binary_id)
- self.assertEqual(data, self.job_binary_data)
diff --git a/tempest/api/data_processing/test_job_binary_internals.py b/tempest/api/data_processing/test_job_binary_internals.py
deleted file mode 100644
index b4f0769..0000000
--- a/tempest/api/data_processing/test_job_binary_internals.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class JobBinaryInternalTest(dp_base.BaseDataProcessingTest):
- # Link to the API documentation is http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.1_EDP.html#job-binary-internals
-
- @classmethod
- def resource_setup(cls):
- super(JobBinaryInternalTest, cls).resource_setup()
- cls.job_binary_internal_data = 'Some script may be data'
-
- def _create_job_binary_internal(self, binary_name=None):
- """Creates Job Binary Internal with optional name specified.
-
- It puts data into Sahara database and ensures job binary internal name.
- Returns id and name of created job binary internal.
- """
- if not binary_name:
- # generate random name if it's not specified
- binary_name = data_utils.rand_name('sahara-job-binary-internal')
-
- # create job binary internal
- resp_body = (
- self.create_job_binary_internal(binary_name,
- self.job_binary_internal_data))
-
- # ensure that job binary internal created successfully
- self.assertEqual(binary_name, resp_body['name'])
-
- return resp_body['id'], binary_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('249c4dc2-946f-4939-83e6-212ddb6ea0be')
- def test_job_binary_internal_create(self):
- self._create_job_binary_internal()
-
- @test.attr(type='smoke')
- @test.idempotent_id('1e3c2ecd-5673-499d-babe-4fe2fcdf64ee')
- def test_job_binary_internal_list(self):
- binary_info = self._create_job_binary_internal()
-
- # check for job binary internal in list
- binaries = self.client.list_job_binary_internals()['binaries']
- binaries_info = [(binary['id'], binary['name']) for binary in binaries]
- self.assertIn(binary_info, binaries_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('a2046a53-386c-43ab-be35-df54b19db776')
- def test_job_binary_internal_get(self):
- binary_id, binary_name = self._create_job_binary_internal()
-
- # check job binary internal fetch by id
- binary = self.client.get_job_binary_internal(binary_id)
- self.assertEqual(binary_name, binary['job_binary_internal']['name'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('b3568c33-4eed-40d5-aae4-6ff3b2ac58f5')
- def test_job_binary_internal_delete(self):
- binary_id, _ = self._create_job_binary_internal()
-
- # delete the job binary internal by id
- self.client.delete_job_binary_internal(binary_id)
-
- @test.attr(type='smoke')
- @test.idempotent_id('8871f2b0-5782-4d66-9bb9-6f95bcb839ea')
- def test_job_binary_internal_get_data(self):
- binary_id, _ = self._create_job_binary_internal()
-
- # get data of job binary internal by id
- _, data = self.client.get_job_binary_internal_data(binary_id)
- self.assertEqual(data, self.job_binary_internal_data)
diff --git a/tempest/api/data_processing/test_jobs.py b/tempest/api/data_processing/test_jobs.py
deleted file mode 100644
index 8503320..0000000
--- a/tempest/api/data_processing/test_jobs.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class JobTest(dp_base.BaseDataProcessingTest):
- # NOTE: Link to the API documentation: http://docs.openstack.org/developer/
- # sahara/restapi/rest_api_v1.1_EDP.html#jobs
-
- @classmethod
- def resource_setup(cls):
- super(JobTest, cls).resource_setup()
- # create job binary
- job_binary = {
- 'name': data_utils.rand_name('sahara-job-binary'),
- 'url': 'swift://sahara-container.sahara/example.jar',
- 'description': 'Test job binary',
- 'extra': {
- 'user': cls.os.credentials.username,
- 'password': cls.os.credentials.password
- }
- }
- resp_body = cls.create_job_binary(**job_binary)
- job_binary_id = resp_body['id']
-
- cls.job = {
- 'job_type': 'Pig',
- 'mains': [job_binary_id]
- }
-
- def _create_job(self, job_name=None):
- """Creates Job with optional name specified.
-
- It creates job and ensures job name. Returns id and name of created
- job.
- """
- if not job_name:
- # generate random name if it's not specified
- job_name = data_utils.rand_name('sahara-job')
-
- # create job
- resp_body = self.create_job(job_name, **self.job)
-
- # ensure that job created successfully
- self.assertEqual(job_name, resp_body['name'])
-
- return resp_body['id'], job_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('8cf785ca-adf4-473d-8281-fb9a5efa3073')
- def test_job_create(self):
- self._create_job()
-
- @test.attr(type='smoke')
- @test.idempotent_id('41e253fe-b02a-41a0-b186-5ff1f0463ba3')
- def test_job_list(self):
- job_info = self._create_job()
-
- # check for job in list
- jobs = self.client.list_jobs()['jobs']
- jobs_info = [(job['id'], job['name']) for job in jobs]
- self.assertIn(job_info, jobs_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('3faf17fa-bc94-4a60-b1c3-79e53674c16c')
- def test_job_get(self):
- job_id, job_name = self._create_job()
-
- # check job fetch by id
- job = self.client.get_job(job_id)['job']
- self.assertEqual(job_name, job['name'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('dff85e62-7dda-4ad8-b1ee-850adecb0c6e')
- def test_job_delete(self):
- job_id, _ = self._create_job()
-
- # delete the job by id
- self.client.delete_job(job_id)
diff --git a/tempest/api/data_processing/test_node_group_templates.py b/tempest/api/data_processing/test_node_group_templates.py
deleted file mode 100644
index c2dae85..0000000
--- a/tempest/api/data_processing/test_node_group_templates.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-class NodeGroupTemplateTest(dp_base.BaseDataProcessingTest):
-
- @classmethod
- def skip_checks(cls):
- super(NodeGroupTemplateTest, cls).skip_checks()
- if cls.default_plugin is None:
- raise cls.skipException("No Sahara plugins configured")
-
- def _create_node_group_template(self, template_name=None):
- """Creates Node Group Template with optional name specified.
-
- It creates template, ensures template name and response body.
- Returns id and name of created template.
- """
- self.node_group_template = self.get_node_group_template()
- self.assertIsNotNone(self.node_group_template,
- "No known Sahara plugin was found")
-
- if not template_name:
- # generate random name if it's not specified
- template_name = data_utils.rand_name('sahara-ng-template')
-
- # create node group template
- resp_body = self.create_node_group_template(template_name,
- **self.node_group_template)
-
- # ensure that template created successfully
- self.assertEqual(template_name, resp_body['name'])
- self.assertDictContainsSubset(self.node_group_template, resp_body)
-
- return resp_body['id'], template_name
-
- @test.attr(type='smoke')
- @test.idempotent_id('63164051-e46d-4387-9741-302ef4791cbd')
- def test_node_group_template_create(self):
- self._create_node_group_template()
-
- @test.attr(type='smoke')
- @test.idempotent_id('eb39801d-2612-45e5-88b1-b5d70b329185')
- def test_node_group_template_list(self):
- template_info = self._create_node_group_template()
-
- # check for node group template in list
- templates = self.client.list_node_group_templates()
- templates = templates['node_group_templates']
- templates_info = [(template['id'], template['name'])
- for template in templates]
- self.assertIn(template_info, templates_info)
-
- @test.attr(type='smoke')
- @test.idempotent_id('6ee31539-a708-466f-9c26-4093ce09a836')
- def test_node_group_template_get(self):
- template_id, template_name = self._create_node_group_template()
-
- # check node group template fetch by id
- template = self.client.get_node_group_template(template_id)
- template = template['node_group_template']
- self.assertEqual(template_name, template['name'])
- self.assertDictContainsSubset(self.node_group_template, template)
-
- @test.attr(type='smoke')
- @test.idempotent_id('f4f5cb82-708d-4031-81c4-b0618a706a2f')
- def test_node_group_template_delete(self):
- template_id, _ = self._create_node_group_template()
-
- # delete the node group template by id
- self.client.delete_node_group_template(template_id)
diff --git a/tempest/api/data_processing/test_plugins.py b/tempest/api/data_processing/test_plugins.py
deleted file mode 100644
index 14594e4..0000000
--- a/tempest/api/data_processing/test_plugins.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) 2014 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api.data_processing import base as dp_base
-from tempest import config
-from tempest import test
-
-CONF = config.CONF
-
-
-class PluginsTest(dp_base.BaseDataProcessingTest):
- def _list_all_plugin_names(self):
- """Returns all enabled plugin names.
-
- It ensures main plugins availability.
- """
- plugins = self.client.list_plugins()['plugins']
- plugins_names = [plugin['name'] for plugin in plugins]
- for enabled_plugin in CONF.data_processing_feature_enabled.plugins:
- self.assertIn(enabled_plugin, plugins_names)
-
- return plugins_names
-
- @test.attr(type='smoke')
- @test.idempotent_id('01a005a3-426c-4c0b-9617-d09475403e09')
- def test_plugin_list(self):
- self._list_all_plugin_names()
-
- @test.attr(type='smoke')
- @test.idempotent_id('53cf6487-2cfb-4a6f-8671-97c542c6e901')
- def test_plugin_get(self):
- for plugin_name in self._list_all_plugin_names():
- plugin = self.client.get_plugin(plugin_name)['plugin']
- self.assertEqual(plugin_name, plugin['name'])
-
- for plugin_version in plugin['versions']:
- detailed_plugin = self.client.get_plugin(plugin_name,
- plugin_version)
- detailed_plugin = detailed_plugin['plugin']
- self.assertEqual(plugin_name, detailed_plugin['name'])
-
- # check that required image tags contains name and version
- image_tags = detailed_plugin['required_image_tags']
- self.assertIn(plugin_name, image_tags)
- self.assertIn(plugin_version, image_tags)
diff --git a/tempest/api/identity/admin/v3/test_inherits.py b/tempest/api/identity/admin/v3/test_inherits.py
index 373d44b..955b6fb 100644
--- a/tempest/api/identity/admin/v3/test_inherits.py
+++ b/tempest/api/identity/admin/v3/test_inherits.py
@@ -147,3 +147,88 @@
(self.inherited_roles_client.
delete_inherited_role_from_group_on_project(
self.project['id'], self.group['id'], src_role['id']))
+
+ @test.idempotent_id('3acf666e-5354-42ac-8e17-8b68893bcd36')
+ def test_inherit_assign_list_revoke_user_roles_on_domain(self):
+ # Create role
+ src_role = self.roles_client.create_role(
+ name=data_utils.rand_name('Role'))['role']
+ self.addCleanup(self.roles_client.delete_role, src_role['id'])
+
+ # Create a project hierarchy
+ leaf_project_name = data_utils.rand_name('project')
+ leaf_project = self.projects_client.create_project(
+ leaf_project_name, domain_id=self.domain['id'],
+ parent_id=self.project['id'])['project']
+ self.addCleanup(
+ self.projects_client.delete_project, leaf_project['id'])
+
+ # Assign role on domain
+ self.inherited_roles_client.create_inherited_role_on_domains_user(
+ self.domain['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the parent project
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ self.project['id'], self.user['id']))['role_assignments']
+ self.assertNotEmpty(assignments)
+
+ # List "effective" role assignments from user on the leaf project
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertNotEmpty(assignments)
+
+ # Revoke role from domain
+ self.inherited_roles_client.delete_inherited_role_from_user_on_domain(
+ self.domain['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the parent project
+ # should return an empty list
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ self.project['id'], self.user['id']))['role_assignments']
+ self.assertEmpty(assignments)
+
+ # List "effective" role assignments from user on the leaf project
+ # should return an empty list
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertEmpty(assignments)
+
+ @test.idempotent_id('9f02ccd9-9b57-46b4-8f77-dd5a736f3a06')
+ def test_inherit_assign_list_revoke_user_roles_on_project_tree(self):
+ # Create role
+ src_role = self.roles_client.create_role(
+ name=data_utils.rand_name('Role'))['role']
+ self.addCleanup(self.roles_client.delete_role, src_role['id'])
+
+ # Create a project hierarchy
+ leaf_project_name = data_utils.rand_name('project')
+ leaf_project = self.projects_client.create_project(
+ leaf_project_name, domain_id=self.domain['id'],
+ parent_id=self.project['id'])['project']
+ self.addCleanup(
+ self.projects_client.delete_project, leaf_project['id'])
+
+ # Assign role on parent project
+ self.inherited_roles_client.create_inherited_role_on_projects_user(
+ self.project['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the leaf project
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertNotEmpty(assignments)
+
+ # Revoke role from parent project
+ self.inherited_roles_client.delete_inherited_role_from_user_on_project(
+ self.project['id'], self.user['id'], src_role['id'])
+
+ # List "effective" role assignments from user on the leaf project
+ # should return an empty list
+ assignments = (
+ self.role_assignments.list_user_project_effective_assignments(
+ leaf_project['id'], self.user['id']))['role_assignments']
+ self.assertEmpty(assignments)
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index f5e4943..14bf4f8 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -182,6 +182,7 @@
cls.creds_client = cls.os_adm.credentials_client
cls.groups_client = cls.os_adm.groups_client
cls.projects_client = cls.os_adm.projects_client
+ cls.role_assignments = cls.os_admin.role_assignments_client
if CONF.identity.admin_domain_scope:
# NOTE(andreaf) When keystone policy requires it, the identity
# admin clients for these tests shall use 'domain' scoped tokens.
diff --git a/tempest/api/image/admin/v2/test_images.py b/tempest/api/image/admin/v2/test_images.py
index c719b7a..9844a67 100644
--- a/tempest/api/image/admin/v2/test_images.py
+++ b/tempest/api/image/admin/v2/test_images.py
@@ -34,27 +34,26 @@
def test_admin_deactivate_reactivate_image(self):
# Create image by non-admin tenant
image_name = data_utils.rand_name('image')
- body = self.client.create_image(name=image_name,
- container_format='bare',
- disk_format='raw',
- visibility='private')
- image_id = body['id']
- self.addCleanup(self.client.delete_image, image_id)
+ image = self.client.create_image(name=image_name,
+ container_format='bare',
+ disk_format='raw',
+ visibility='private')
+ self.addCleanup(self.client.delete_image, image['id'])
# upload an image file
content = data_utils.random_bytes()
image_file = six.BytesIO(content)
- self.client.store_image_file(image_id, image_file)
+ self.client.store_image_file(image['id'], image_file)
# deactivate image
- self.admin_client.deactivate_image(image_id)
- body = self.client.show_image(image_id)
+ self.admin_client.deactivate_image(image['id'])
+ body = self.client.show_image(image['id'])
self.assertEqual("deactivated", body['status'])
# non-admin user unable to download deactivated image
self.assertRaises(lib_exc.Forbidden, self.client.show_image_file,
- image_id)
+ image['id'])
# reactivate image
- self.admin_client.reactivate_image(image_id)
- body = self.client.show_image(image_id)
+ self.admin_client.reactivate_image(image['id'])
+ body = self.client.show_image(image['id'])
self.assertEqual("active", body['status'])
# non-admin user able to download image after reactivation by admin
- body = self.client.show_image_file(image_id)
+ body = self.client.show_image_file(image['id'])
self.assertEqual(content, body.data)
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
old mode 100755
new mode 100644
index f74f97b..26b88b0
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -123,8 +123,7 @@
disk_format='raw',
is_public=False,
data=image_file)
- image_id = image['id']
- return image_id
+ return image['id']
class BaseV2ImageTest(BaseImageTest):
@@ -183,9 +182,8 @@
image = self.client.create_image(name=name,
container_format='bare',
disk_format='raw')
- image_id = image['id']
- self.addCleanup(self.client.delete_image, image_id)
- return image_id
+ self.addCleanup(self.client.delete_image, image['id'])
+ return image['id']
class BaseV1ImageAdminTest(BaseImageTest):
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
index 712b34b..695efb5 100644
--- a/tempest/api/image/v1/test_images.py
+++ b/tempest/api/image/v1/test_images.py
@@ -49,22 +49,21 @@
# Register, then upload an image
properties = {'prop1': 'val1'}
container_format, disk_format = get_container_and_disk_format()
- body = self.create_image(name='New Name',
- container_format=container_format,
- disk_format=disk_format,
- is_public=False,
- properties=properties)
- self.assertIn('id', body)
- image_id = body.get('id')
- self.assertEqual('New Name', body.get('name'))
- self.assertFalse(body.get('is_public'))
- self.assertEqual('queued', body.get('status'))
+ image = self.create_image(name='New Name',
+ container_format=container_format,
+ disk_format=disk_format,
+ is_public=False,
+ properties=properties)
+ self.assertIn('id', image)
+ self.assertEqual('New Name', image.get('name'))
+ self.assertFalse(image.get('is_public'))
+ self.assertEqual('queued', image.get('status'))
for key, val in properties.items():
- self.assertEqual(val, body.get('properties')[key])
+ self.assertEqual(val, image.get('properties')[key])
# Now try uploading an image file
image_file = six.BytesIO(data_utils.random_bytes())
- body = self.client.update_image(image_id, data=image_file)['image']
+ body = self.client.update_image(image['id'], data=image_file)['image']
self.assertIn('size', body)
self.assertEqual(1024, body.get('size'))
@@ -89,16 +88,15 @@
@test.idempotent_id('6d0e13a7-515b-460c-b91f-9f4793f09816')
def test_register_http_image(self):
container_format, disk_format = get_container_and_disk_format()
- body = self.create_image(name='New Http Image',
- container_format=container_format,
- disk_format=disk_format, is_public=False,
- copy_from=CONF.image.http_image)
- self.assertIn('id', body)
- image_id = body.get('id')
- self.assertEqual('New Http Image', body.get('name'))
- self.assertFalse(body.get('is_public'))
- waiters.wait_for_image_status(self.client, image_id, 'active')
- self.client.show_image(image_id)
+ image = self.create_image(name='New Http Image',
+ container_format=container_format,
+ disk_format=disk_format, is_public=False,
+ copy_from=CONF.image.http_image)
+ self.assertIn('id', image)
+ self.assertEqual('New Http Image', image.get('name'))
+ self.assertFalse(image.get('is_public'))
+ waiters.wait_for_image_status(self.client, image['id'], 'active')
+ self.client.show_image(image['id'])
@test.idempotent_id('05b19d55-140c-40d0-b36b-fafd774d421b')
def test_register_image_with_min_ram(self):
@@ -188,8 +186,7 @@
disk_format=disk_format,
is_public=False,
location=location)
- image_id = image['id']
- return image_id
+ return image['id']
@classmethod
def _create_standard_image(cls, name, container_format,
@@ -205,8 +202,7 @@
container_format=container_format,
disk_format=disk_format,
is_public=False, data=image_file)
- image_id = image['id']
- return image_id
+ return image['id']
@test.idempotent_id('246178ab-3b33-4212-9a4b-a7fe8261794d')
def test_index_no_params(self):
@@ -301,8 +297,7 @@
disk_format=disk_format,
is_public=False, data=image_file,
properties={'key1': 'value1'})
- image_id = image['id']
- return image_id
+ return image['id']
@test.idempotent_id('01752c1c-0275-4de3-9e5b-876e44541928')
def test_list_image_metadata(self):
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 443e332..aff8a78 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -44,35 +44,34 @@
image_name = data_utils.rand_name('image')
container_format = CONF.image.container_formats[0]
disk_format = CONF.image.disk_formats[0]
- body = self.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private',
- ramdisk_id=uuid)
- self.assertIn('id', body)
- image_id = body.get('id')
- self.assertIn('name', body)
- self.assertEqual(image_name, body['name'])
- self.assertIn('visibility', body)
- self.assertEqual('private', body['visibility'])
- self.assertIn('status', body)
- self.assertEqual('queued', body['status'])
+ image = self.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private',
+ ramdisk_id=uuid)
+ self.assertIn('id', image)
+ self.assertIn('name', image)
+ self.assertEqual(image_name, image['name'])
+ self.assertIn('visibility', image)
+ self.assertEqual('private', image['visibility'])
+ self.assertIn('status', image)
+ self.assertEqual('queued', image['status'])
# Now try uploading an image file
file_content = data_utils.random_bytes()
image_file = six.BytesIO(file_content)
- self.client.store_image_file(image_id, image_file)
+ self.client.store_image_file(image['id'], image_file)
# Now try to get image details
- body = self.client.show_image(image_id)
- self.assertEqual(image_id, body['id'])
+ body = self.client.show_image(image['id'])
+ self.assertEqual(image['id'], body['id'])
self.assertEqual(image_name, body['name'])
self.assertEqual(uuid, body['ramdisk_id'])
self.assertIn('size', body)
self.assertEqual(1024, body.get('size'))
# Now try get image file
- body = self.client.show_image_file(image_id)
+ body = self.client.show_image_file(image['id'])
self.assertEqual(file_content, body.data)
@test.attr(type='smoke')
@@ -84,20 +83,18 @@
image_name = data_utils.rand_name('image')
container_format = CONF.image.container_formats[0]
disk_format = CONF.image.disk_formats[0]
- body = self.client.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- image_id = body['id']
-
+ image = self.client.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
# Delete Image
- self.client.delete_image(image_id)
- self.client.wait_for_resource_deletion(image_id)
+ self.client.delete_image(image['id'])
+ self.client.wait_for_resource_deletion(image['id'])
# Verifying deletion
images = self.client.list_images()['images']
images_id = [item['id'] for item in images]
- self.assertNotIn(image_id, images_id)
+ self.assertNotIn(image['id'], images_id)
@test.attr(type='smoke')
@test.idempotent_id('f66891a7-a35c-41a8-b590-a065c2a1caa6')
@@ -108,27 +105,26 @@
image_name = data_utils.rand_name('image')
container_format = CONF.image.container_formats[0]
disk_format = CONF.image.disk_formats[0]
- body = self.client.create_image(name=image_name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- self.addCleanup(self.client.delete_image, body['id'])
- self.assertEqual('queued', body['status'])
- image_id = body['id']
+ image = self.client.create_image(name=image_name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
+ self.addCleanup(self.client.delete_image, image['id'])
+ self.assertEqual('queued', image['status'])
# Now try uploading an image file
image_file = six.BytesIO(data_utils.random_bytes())
- self.client.store_image_file(image_id, image_file)
+ self.client.store_image_file(image['id'], image_file)
# Update Image
new_image_name = data_utils.rand_name('new-image')
- body = self.client.update_image(image_id, [
+ body = self.client.update_image(image['id'], [
dict(replace='/name', value=new_image_name)])
# Verifying updating
- body = self.client.show_image(image_id)
- self.assertEqual(image_id, body['id'])
+ body = self.client.show_image(image['id'])
+ self.assertEqual(image['id'], body['id'])
self.assertEqual(new_image_name, body['name'])
@@ -162,14 +158,13 @@
size = random.randint(1024, 4096)
image_file = six.BytesIO(data_utils.random_bytes(size))
name = data_utils.rand_name('image')
- body = cls.create_image(name=name,
- container_format=container_format,
- disk_format=disk_format,
- visibility='private')
- image_id = body['id']
- cls.client.store_image_file(image_id, data=image_file)
+ image = cls.create_image(name=name,
+ container_format=container_format,
+ disk_format=disk_format,
+ visibility='private')
+ cls.client.store_image_file(image['id'], data=image_file)
- return image_id
+ return image['id']
def _list_by_param_value_and_assert(self, params):
"""Perform list action with given params and validates result."""
@@ -250,6 +245,16 @@
self.assertEqual(len(images_list), params['limit'],
"Failed to get images by limit")
+ @test.idempotent_id('e9a44b91-31c8-4b40-a332-e0a39ffb4dbb')
+ def test_list_image_param_owner(self):
+ # Test to get images by owner
+ image_id = self.created_images[0]
+        # Get the image details to find its owner
+ image = self.client.show_image(image_id)
+
+ params = {"owner": image['owner']}
+ self._list_by_param_value_and_assert(params)
+
@test.idempotent_id('622b925c-479f-4736-860d-adeaf13bc371')
def test_get_image_schema(self):
# Test to get image schema
diff --git a/tempest/api/image/v2/test_images_negative.py b/tempest/api/image/v2/test_images_negative.py
index f60fb0c..cd1bca0 100644
--- a/tempest/api/image/v2/test_images_negative.py
+++ b/tempest/api/image/v2/test_images_negative.py
@@ -53,19 +53,19 @@
def test_get_delete_deleted_image(self):
# get and delete the deleted image
# create and delete image
- body = self.client.create_image(name='test',
- container_format='bare',
- disk_format='raw')
- image_id = body['id']
- self.client.delete_image(image_id)
- self.client.wait_for_resource_deletion(image_id)
+ image = self.client.create_image(name='test',
+ container_format='bare',
+ disk_format='raw')
+ self.client.delete_image(image['id'])
+ self.client.wait_for_resource_deletion(image['id'])
# get the deleted image
- self.assertRaises(lib_exc.NotFound, self.client.show_image, image_id)
+ self.assertRaises(lib_exc.NotFound,
+ self.client.show_image, image['id'])
# delete the deleted image
self.assertRaises(lib_exc.NotFound, self.client.delete_image,
- image_id)
+ image['id'])
@test.attr(type=['negative'])
@test.idempotent_id('6fe40f1c-57bd-4918-89cc-8500f850f3de')
diff --git a/tempest/api/image/v2/test_images_tags.py b/tempest/api/image/v2/test_images_tags.py
index 42a4b87..03f29bd 100644
--- a/tempest/api/image/v2/test_images_tags.py
+++ b/tempest/api/image/v2/test_images_tags.py
@@ -21,19 +21,18 @@
@test.idempotent_id('10407036-6059-4f95-a2cd-cbbbee7ed329')
def test_update_delete_tags_for_image(self):
- body = self.create_image(container_format='bare',
- disk_format='raw',
- visibility='private')
- image_id = body['id']
+ image = self.create_image(container_format='bare',
+ disk_format='raw',
+ visibility='private')
tag = data_utils.rand_name('tag')
- self.addCleanup(self.client.delete_image, image_id)
+ self.addCleanup(self.client.delete_image, image['id'])
# Creating image tag and verify it.
- self.client.add_image_tag(image_id, tag)
- body = self.client.show_image(image_id)
+ self.client.add_image_tag(image['id'], tag)
+ body = self.client.show_image(image['id'])
self.assertIn(tag, body['tags'])
# Deleting image tag and verify it.
- self.client.delete_image_tag(image_id, tag)
- body = self.client.show_image(image_id)
+ self.client.delete_image_tag(image['id'], tag)
+ body = self.client.show_image(image['id'])
self.assertNotIn(tag, body['tags'])
diff --git a/tempest/api/image/v2/test_images_tags_negative.py b/tempest/api/image/v2/test_images_tags_negative.py
index dd5650f..af4ffcf 100644
--- a/tempest/api/image/v2/test_images_tags_negative.py
+++ b/tempest/api/image/v2/test_images_tags_negative.py
@@ -33,12 +33,11 @@
@test.idempotent_id('39c023a2-325a-433a-9eea-649bf1414b19')
def test_delete_non_existing_tag(self):
# Delete non existing tag.
- body = self.create_image(container_format='bare',
- disk_format='raw',
- visibility='private'
- )
- image_id = body['id']
+ image = self.create_image(container_format='bare',
+ disk_format='raw',
+ visibility='private'
+ )
tag = data_utils.rand_name('non-exist-tag')
- self.addCleanup(self.client.delete_image, image_id)
+ self.addCleanup(self.client.delete_image, image['id'])
self.assertRaises(lib_exc.NotFound, self.client.delete_image_tag,
- image_id, tag)
+ image['id'], tag)
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index b2cb003..d2e1492 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -67,34 +67,35 @@
msg = "L3 Agent Scheduler enabled in conf, but L3 Agent not found"
raise exceptions.InvalidConfiguration(msg)
cls.router = cls.create_router(data_utils.rand_name('router'))
- # NOTE(armax): If DVR is an available extension, and the created router
- # is indeed a distributed one, more resources need to be provisioned
- # in order to bind the router to the L3 agent.
- # That said, let's preserve the existing test logic, where the extra
- # query and setup steps are only required if the extension is available
- # and only if the router's default type is distributed.
- if test.is_extension_enabled('dvr', 'network'):
- cls.is_dvr_router = cls.admin_routers_client.show_router(
- cls.router['id'])['router'].get('distributed', False)
- if cls.is_dvr_router:
- cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
- cls.port = cls.create_port(cls.network)
- cls.routers_client.add_router_interface(
- cls.router['id'], port_id=cls.port['id'])
- # NOTE: Sometimes we have seen this test fail with dvr in,
- # multinode tests, since the dhcp port is not created before
- # the test gets executed and so the router is not scheduled
- # on the given agent. By adding the external gateway info to
- # the router, the router should be properly scheduled in the
- # dvr_snat node.
- # This is a temporary work around to prevent a race condition.
- external_gateway_info = {
- 'network_id': CONF.network.public_network_id,
- 'enable_snat': True}
- cls.admin_routers_client.update_router(
- cls.router['id'],
- external_gateway_info=external_gateway_info)
+
+ if CONF.network.dvr_extra_resources:
+ # NOTE(armax): If DVR is an available extension, and the created
+            # router is indeed a distributed one, more resources need to be
+            # provisioned in order to bind the router to the L3 agent. That
+            # extra provisioning is required in the Liberty release or older
+            # and is no longer needed since the Mitaka release.
+ if test.is_extension_enabled('dvr', 'network'):
+ cls.is_dvr_router = cls.admin_routers_client.show_router(
+ cls.router['id'])['router'].get('distributed', False)
+ if cls.is_dvr_router:
+ cls.network = cls.create_network()
+ cls.subnet = cls.create_subnet(cls.network)
+ cls.port = cls.create_port(cls.network)
+ cls.routers_client.add_router_interface(
+ cls.router['id'], port_id=cls.port['id'])
+                    # NOTE: Sometimes we have seen this test fail with dvr in
+                    # multinode tests, since the dhcp port is not created
+ # before the test gets executed and so the router is not
+ # scheduled on the given agent. By adding the external
+ # gateway info to the router, the router should be properly
+ # scheduled in the dvr_snat node. This is a temporary work
+ # around to prevent a race condition.
+ external_gateway_info = {
+ 'network_id': CONF.network.public_network_id,
+ 'enable_snat': True}
+ cls.admin_routers_client.update_router(
+ cls.router['id'],
+ external_gateway_info=external_gateway_info)
@classmethod
def resource_cleanup(cls):
diff --git a/tempest/api/network/test_dhcp_ipv6.py b/tempest/api/network/test_dhcp_ipv6.py
index 77008ab..4bc4262 100644
--- a/tempest/api/network/test_dhcp_ipv6.py
+++ b/tempest/api/network/test_dhcp_ipv6.py
@@ -20,6 +20,7 @@
from tempest.api.network import base
from tempest.common.utils import data_utils
+from tempest.common.utils import net_info
from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -30,7 +31,7 @@
class NetworksTestDHCPv6(base.BaseNetworkTest):
_ip_version = 6
- """ Test DHCPv6 specific features using SLAAC, stateless and
+ """Test DHCPv6 specific features using SLAAC, stateless and
stateful settings for subnets. Also it shall check dual-stack
functionality (IPv4 + IPv6 together).
The tests include:
@@ -66,7 +67,7 @@
body = self.ports_client.list_ports()
ports = body['ports']
for port in ports:
- if (port['device_owner'].startswith('network:router_interface') and
+ if (net_info.is_router_interface_port(port) and
port['device_id'] in [r['id'] for r in self.routers]):
self.routers_client.remove_router_interface(port['device_id'],
port_id=port['id'])
diff --git a/tempest/api/object_storage/test_container_services_negative.py b/tempest/api/object_storage/test_container_services_negative.py
new file mode 100644
index 0000000..ed99eb2
--- /dev/null
+++ b/tempest/api/object_storage/test_container_services_negative.py
@@ -0,0 +1,167 @@
+# Copyright 2016 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.object_storage import base
+from tempest.lib.common.utils import data_utils
+from tempest.lib import exceptions
+from tempest import test
+
+
+class ContainerNegativeTest(base.BaseObjectTest):
+
+ @classmethod
+ def resource_setup(cls):
+ super(ContainerNegativeTest, cls).resource_setup()
+
+ # use /info to get default constraints
+ _, body = cls.account_client.list_extensions()
+ cls.constraints = body['swift']
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('30686921-4bed-4764-a038-40d741ed4e78')
+ def test_create_container_name_exceeds_max_length(self):
+        # Attempts to create a container whose name is longer than the max
+        max_length = self.constraints['max_container_name_length']
+        # create a container with a long name
+ container_name = data_utils.arbitrary_string(size=max_length + 1)
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name)
+ self.assertIn('Container name length of ' + str(max_length + 1) +
+ ' longer than ' + str(max_length), str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('41e645bf-2e68-4f84-bf7b-c71aa5cd76ce')
+ def test_create_container_metadata_name_exceeds_max_length(self):
+        # Attempts to create a container with a metadata name
+        # that is longer than the max.
+ max_length = self.constraints['max_meta_name_length']
+ container_name = data_utils.rand_name(name='TestContainer')
+ metadata_name = data_utils.arbitrary_string(size=max_length + 1)
+ metadata = {metadata_name: 'penguin'}
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name, metadata=metadata)
+ self.assertIn('Metadata name too long', str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('81e36922-326b-4b7c-8155-3bbceecd7a82')
+ def test_create_container_metadata_value_exceeds_max_length(self):
+        # Attempts to create a container with a metadata value
+        # that is longer than the max.
+ max_length = self.constraints['max_meta_value_length']
+ container_name = data_utils.rand_name(name='TestContainer')
+ metadata_value = data_utils.arbitrary_string(size=max_length + 1)
+ metadata = {'animal': metadata_value}
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name, metadata=metadata)
+ self.assertIn('Metadata value longer than ' + str(max_length), str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('ac666539-d566-4f02-8ceb-58e968dfb732')
+ def test_create_container_metadata_exceeds_overall_metadata_count(self):
+        # Attempts to create a container with more metadata items
+        # than the default max count allows.
+ max_count = self.constraints['max_meta_count']
+ container_name = data_utils.rand_name(name='TestContainer')
+ metadata = {}
+ for i in range(max_count + 1):
+ metadata['animal-' + str(i)] = 'penguin'
+
+ ex = self.assertRaises(exceptions.BadRequest,
+ self.container_client.create_container,
+ container_name, metadata=metadata)
+ self.assertIn('Too many metadata items; max ' + str(max_count),
+ str(ex))
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('1a95ab2e-b712-4a98-8a4d-8ce21b7557d6')
+ def test_get_metadata_headers_with_invalid_container_name(self):
+ # Attempts to retrieve metadata headers with an invalid
+ # container name.
+ invalid_name = data_utils.rand_name(name="TestInvalidContainer")
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.list_container_metadata,
+ invalid_name)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('125a24fa-90a7-4cfc-b604-44e49d788390')
+ def test_update_metadata_with_nonexistent_container_name(self):
+ # Attempts to update metadata using a nonexistent container name.
+ nonexistent_name = data_utils.rand_name(
+ name="TestNonexistentContainer")
+ metadata = {'animal': 'penguin'}
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.update_container_metadata,
+ nonexistent_name, metadata)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('65387dbf-a0e2-4aac-9ddc-16eb3f1f69ba')
+ def test_delete_with_nonexistent_container_name(self):
+ # Attempts to delete metadata using a nonexistent container name.
+ nonexistent_name = data_utils.rand_name(
+ name="TestNonexistentContainer")
+ metadata = {'animal': 'penguin'}
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.delete_container_metadata,
+ nonexistent_name, metadata)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('14331d21-1e81-420a-beea-19cb5e5207f5')
+ def test_list_all_container_objects_with_nonexistent_container(self):
+ # Attempts to get a listing of all objects on a container
+ # that doesn't exist.
+ nonexistent_name = data_utils.rand_name(
+ name="TestNonexistentContainer")
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.list_all_container_objects,
+ nonexistent_name)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('86b2ab08-92d5-493d-acd2-85f0c848819e')
+ def test_list_all_container_objects_on_deleted_container(self):
+ # Attempts to get a listing of all objects on a container
+ # that was deleted.
+ container_name = self.create_container()
+ # delete container
+ resp, _ = self.container_client.delete_container(container_name)
+ self.assertHeaders(resp, 'Container', 'DELETE')
+
+ self.assertRaises(exceptions.NotFound,
+ self.container_client.list_all_container_objects,
+ container_name)
+
+ @test.attr(type=["negative"])
+ @test.idempotent_id('42da116e-1e8c-4c96-9e06-2f13884ed2b1')
+ def test_delete_non_empty_container(self):
+        # Create a container and an object within it, then attempt to
+        # delete the container while it is still not empty.
+ container_name = self.create_container()
+ self.addCleanup(self.container_client.delete_container,
+ container_name)
+ object_name, _ = self.create_object(container_name)
+ self.addCleanup(self.object_client.delete_object,
+ container_name, object_name)
+
+ ex = self.assertRaises(exceptions.Conflict,
+ self.container_client.delete_container,
+ container_name)
+ self.assertIn('An object with that identifier already exists',
+ str(ex))
diff --git a/tempest/api/volume/admin/test_qos.py b/tempest/api/volume/admin/test_qos.py
old mode 100755
new mode 100644
index 98139e7..9f2d453
--- a/tempest/api/volume/admin/test_qos.py
+++ b/tempest/api/volume/admin/test_qos.py
@@ -14,6 +14,7 @@
from tempest.api.volume import base
from tempest.common.utils import data_utils as utils
+from tempest.common import waiters
from tempest import test
@@ -119,7 +120,9 @@
self.admin_volume_qos_client.unset_qos_key(self.created_qos['id'],
keys)
operation = 'qos-key-unset'
- self.wait_for_qos_operations(self.created_qos['id'], operation, keys)
+ waiters.wait_for_qos_operations(self.admin_volume_qos_client,
+ self.created_qos['id'],
+ operation, keys)
body = self.admin_volume_qos_client.show_qos(
self.created_qos['id'])['qos_specs']
self.assertNotIn(keys[0], body['specs'])
@@ -153,8 +156,9 @@
self.admin_volume_qos_client.disassociate_qos(
self.created_qos['id'], vol_type[0]['id'])
operation = 'disassociate'
- self.wait_for_qos_operations(self.created_qos['id'],
- operation, vol_type[0]['id'])
+ waiters.wait_for_qos_operations(self.admin_volume_qos_client,
+ self.created_qos['id'], operation,
+ vol_type[0]['id'])
associations = self._test_get_association_qos()
self.assertNotIn(vol_type[0]['id'], associations)
@@ -162,7 +166,8 @@
self.admin_volume_qos_client.disassociate_all_qos(
self.created_qos['id'])
operation = 'disassociate-all'
- self.wait_for_qos_operations(self.created_qos['id'], operation)
+ waiters.wait_for_qos_operations(self.admin_volume_qos_client,
+ self.created_qos['id'], operation)
associations = self._test_get_association_qos()
self.assertEmpty(associations)
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index fe105e8..b47a5f0 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -18,7 +18,7 @@
from tempest.common import waiters
from tempest import test
-QUOTA_KEYS = ['gigabytes', 'snapshots', 'volumes']
+QUOTA_KEYS = ['gigabytes', 'snapshots', 'volumes', 'backups']
QUOTA_USAGE_KEYS = ['reserved', 'limit', 'in_use']
@@ -54,7 +54,8 @@
self.demo_tenant_id)['quota_set']
new_quota_set = {'gigabytes': 1009,
'volumes': 11,
- 'snapshots': 11}
+ 'snapshots': 11,
+ 'backups': 11}
# Update limits for all quota resources
quota_set = self.admin_quotas_client.update_quota_set(
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs.py b/tempest/api/volume/admin/test_volume_types_extra_specs.py
old mode 100755
new mode 100644
index d50ba27..8b7ceff
--- a/tempest/api/volume/admin/test_volume_types_extra_specs.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs.py
@@ -15,6 +15,7 @@
from tempest.api.volume import base
from tempest.common.utils import data_utils
+from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -66,13 +67,18 @@
self.assertEqual(extra_specs, body,
"Volume type extra spec incorrectly created")
- self.admin_volume_types_client.show_volume_type_extra_specs(
+ body = self.admin_volume_types_client.show_volume_type_extra_specs(
self.volume_type['id'],
spec_key)
self.assertEqual(extra_specs, body,
"Volume type extra spec incorrectly fetched")
+
self.admin_volume_types_client.delete_volume_type_extra_specs(
self.volume_type['id'], spec_key)
+ self.assertRaises(
+ lib_exc.NotFound,
+ self.admin_volume_types_client.show_volume_type_extra_specs,
+ self.volume_type['id'], spec_key)
class VolumeTypesExtraSpecsV1Test(VolumeTypesExtraSpecsV2Test):
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
old mode 100755
new mode 100644
index a26052c..73f1f8f
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -68,8 +68,8 @@
volume_id=self.volume['id'], name=backup_name)['backup'])
self.addCleanup(self._delete_backup, backup['id'])
self.assertEqual(backup_name, backup['name'])
- self.admin_backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ backup['id'], 'available')
# Export Backup
export_backup = (self.admin_backups_client.export_backup(backup['id'])
@@ -101,8 +101,8 @@
self.addCleanup(self._delete_backup, new_id)
self.assertIn("id", import_backup)
self.assertEqual(new_id, import_backup['id'])
- self.admin_backups_client.wait_for_backup_status(import_backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ import_backup['id'], 'available')
# Verify Import Backup
backups = self.admin_backups_client.list_backups(
@@ -121,8 +121,8 @@
# Verify if restored volume is there in volume list
volumes = self.admin_volume_client.list_volumes()['volumes']
self.assertIn(restore['volume_id'], [v['id'] for v in volumes])
- self.admin_backups_client.wait_for_backup_status(import_backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ import_backup['id'], 'available')
@test.idempotent_id('47a35425-a891-4e13-961c-c45deea21e94')
def test_volume_backup_reset_status(self):
@@ -134,13 +134,13 @@
self.addCleanup(self.admin_backups_client.delete_backup,
backup['id'])
self.assertEqual(backup_name, backup['name'])
- self.admin_backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ backup['id'], 'available')
# Reset backup status to error
self.admin_backups_client.reset_backup_status(backup_id=backup['id'],
status="error")
- self.admin_backups_client.wait_for_backup_status(backup['id'],
- 'error')
+ waiters.wait_for_backup_status(self.admin_backups_client,
+ backup['id'], 'error')
class VolumesBackupsAdminV1Test(VolumesBackupsAdminV2Test):
diff --git a/tempest/api/volume/v3/admin/__init__.py b/tempest/api/volume/admin/v2/__init__.py
similarity index 100%
rename from tempest/api/volume/v3/admin/__init__.py
rename to tempest/api/volume/admin/v2/__init__.py
diff --git a/tempest/api/volume/admin/test_volume_pools.py b/tempest/api/volume/admin/v2/test_volume_pools.py
similarity index 100%
rename from tempest/api/volume/admin/test_volume_pools.py
rename to tempest/api/volume/admin/v2/test_volume_pools.py
diff --git a/tempest/api/volume/admin/test_volume_type_access.py b/tempest/api/volume/admin/v2/test_volume_type_access.py
similarity index 100%
rename from tempest/api/volume/admin/test_volume_type_access.py
rename to tempest/api/volume/admin/v2/test_volume_type_access.py
diff --git a/tempest/api/volume/admin/test_volumes_list.py b/tempest/api/volume/admin/v2/test_volumes_list.py
similarity index 100%
rename from tempest/api/volume/admin/test_volumes_list.py
rename to tempest/api/volume/admin/v2/test_volumes_list.py
diff --git a/tempest/api/volume/v3/admin/__init__.py b/tempest/api/volume/admin/v3/__init__.py
similarity index 100%
copy from tempest/api/volume/v3/admin/__init__.py
copy to tempest/api/volume/admin/v3/__init__.py
diff --git a/tempest/api/volume/v3/admin/test_user_messages.py b/tempest/api/volume/admin/v3/test_user_messages.py
similarity index 100%
rename from tempest/api/volume/v3/admin/test_user_messages.py
rename to tempest/api/volume/admin/v3/test_user_messages.py
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index ada55f7..b49a126 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -13,15 +13,12 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from tempest.common import compute
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
from tempest import exceptions
from tempest.lib.common.utils import test_utils
-from tempest.lib import exceptions as lib_exc
import tempest.test
CONF = config.CONF
@@ -277,35 +274,3 @@
test_utils.call_and_ignore_notfound_exc(
cls.admin_volume_types_client.wait_for_resource_deletion,
vol_type)
-
- def wait_for_qos_operations(self, qos_id, operation, args=None):
- """Waits for a qos operations to be completed.
-
- NOTE : operation value is required for wait_for_qos_operations()
- operation = 'qos-key' / 'disassociate' / 'disassociate-all'
- args = keys[] when operation = 'qos-key'
- args = volume-type-id disassociated when operation = 'disassociate'
- args = None when operation = 'disassociate-all'
- """
- start_time = int(time.time())
- client = self.admin_volume_qos_client
- while True:
- if operation == 'qos-key-unset':
- body = client.show_qos(qos_id)['qos_specs']
- if not any(key in body['specs'] for key in args):
- return
- elif operation == 'disassociate':
- body = client.show_association_qos(qos_id)['qos_associations']
- if not any(args in body[i]['id'] for i in range(0, len(body))):
- return
- elif operation == 'disassociate-all':
- body = client.show_association_qos(qos_id)['qos_associations']
- if not body:
- return
- else:
- msg = (" operation value is either not defined or incorrect.")
- raise lib_exc.UnprocessableEntity(msg)
-
- if int(time.time()) - start_time >= self.build_timeout:
- raise exceptions.TimeoutException
- time.sleep(self.build_interval)
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
old mode 100755
new mode 100644
index b80a4a4..5586e02
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -48,8 +48,6 @@
# Create a test shared volume for attach/detach tests
cls.volume = cls.create_volume()
- waiters.wait_for_volume_status(cls.client,
- cls.volume['id'], 'available')
@test.idempotent_id('fff42874-7db5-4487-a8e1-ddda5fb5288d')
@test.stresstest(class_setup_per='process')
@@ -102,8 +100,6 @@
CONF.compute.volume_device_name)
waiters.wait_for_volume_status(self.client,
self.volume['id'], 'in-use')
- # NOTE(gfidente): added in reverse order because functions will be
- # called in reverse order to the order they are added (LIFO)
self.addCleanup(waiters.wait_for_volume_status, self.client,
self.volume['id'],
'available')
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
old mode 100755
new mode 100644
index 86076b7..867e520
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -46,8 +46,8 @@
self.assertEqual(backup_name, backup['name'])
waiters.wait_for_volume_status(self.volumes_client,
volume['id'], 'available')
- self.backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.backups_client,
+ backup['id'], 'available')
# Get a given backup
backup = self.backups_client.show_backup(backup['id'])['backup']
@@ -67,8 +67,8 @@
self.addCleanup(self.volumes_client.delete_volume,
restore['volume_id'])
self.assertEqual(backup['id'], restore['backup_id'])
- self.backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.backups_client,
+ backup['id'], 'available')
waiters.wait_for_volume_status(self.volumes_client,
restore['volume_id'], 'available')
@@ -103,8 +103,8 @@
volume_id=volume['id'],
name=backup_name, force=True)['backup']
self.addCleanup(self.backups_client.delete_backup, backup['id'])
- self.backups_client.wait_for_backup_status(backup['id'],
- 'available')
+ waiters.wait_for_backup_status(self.backups_client,
+ backup['id'], 'available')
self.assertEqual(backup_name, backup['name'])
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
old mode 100755
new mode 100644
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
old mode 100755
new mode 100644
diff --git a/tempest/clients.py b/tempest/clients.py
index edc34bd..6cb6980 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -24,7 +24,6 @@
from tempest.lib import exceptions as lib_exc
from tempest.lib.services import clients
from tempest.services import baremetal
-from tempest.services import data_processing
from tempest.services import identity
from tempest.services import object_storage
from tempest.services import orchestration
@@ -39,7 +38,7 @@
default_params = config.service_client_config()
- # TODO(andreaf) This is only used by data_processing and baremetal clients,
+ # TODO(andreaf) This is only used by baremetal clients,
# and should be removed once they are out of Tempest
default_params_with_timeout_values = {
'build_interval': CONF.compute.build_interval,
@@ -84,12 +83,6 @@
build_interval=CONF.orchestration.build_interval,
build_timeout=CONF.orchestration.build_timeout,
**self.default_params)
- self.data_processing_client = data_processing.DataProcessingClient(
- self.auth_provider,
- CONF.data_processing.catalog_type,
- CONF.identity.region,
- endpoint_type=CONF.data_processing.endpoint_type,
- **self.default_params_with_timeout_values)
self.negative_client = negative_rest_client.NegativeRestClient(
self.auth_provider, service, **self.default_params)
@@ -250,6 +243,8 @@
**params_v3)
self.inherited_roles_client = identity.v3.InheritedRolesClient(
self.auth_provider, **params_v3)
+ self.role_assignments_client = identity.v3.RoleAssignmentsClient(
+ self.auth_provider, **params_v3)
self.identity_services_v3_client = identity.v3.ServicesClient(
self.auth_provider, **params_v3)
self.policies_client = identity.v3.PoliciesClient(self.auth_provider,
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index 9758061..32b0ebb 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -18,6 +18,7 @@
from tempest.common import credentials_factory as credentials
from tempest.common import identity
+from tempest.common.utils import net_info
from tempest import config
from tempest import test
@@ -463,7 +464,7 @@
rid = router['id']
ports = [port for port
in ports_client.list_ports(device_id=rid)['ports']
- if port["device_owner"] == "network:router_interface"]
+ if net_info.is_router_interface_port(port)]
for port in ports:
client.remove_router_interface(rid, port_id=port['id'])
client.delete_router(rid)
diff --git a/tempest/cmd/init.py b/tempest/cmd/init.py
index f577d9b..baa36a2 100644
--- a/tempest/cmd/init.py
+++ b/tempest/cmd/init.py
@@ -173,10 +173,10 @@
workspace_manager = workspace.WorkspaceManager(
parsed_args.workspace_path)
name = parsed_args.name or parsed_args.dir.split(os.path.sep)[-1]
- workspace_manager.register_new_workspace(
- name, parsed_args.dir, init=True)
config_dir = parsed_args.config_dir or get_tempest_default_config_dir()
if parsed_args.show_global_dir:
print("Global config dir is located at: %s" % config_dir)
sys.exit(0)
self.create_working_dir(parsed_args.dir, config_dir)
+ workspace_manager.register_new_workspace(
+ name, parsed_args.dir, init=True)
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index b2e72c5..381f3df 100644
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -286,7 +286,6 @@
'object_storage': 'swift',
'compute': 'nova',
'orchestration': 'heat',
- 'data_processing': 'sahara',
'baremetal': 'ironic',
'identity': 'keystone',
}
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index 8e9f0b0..318eb10 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -129,7 +129,6 @@
**kwargs)
# handle the case of multiple servers
- servers = []
if multiple_create_request:
# Get servers created which name match with name param.
body_servers = clients.servers_client.list_servers()
diff --git a/tempest/services/volume/v1/json/backups_client.py b/tempest/common/utils/net_info.py
similarity index 64%
rename from tempest/services/volume/v1/json/backups_client.py
rename to tempest/common/utils/net_info.py
index ac6db6a..9b0a083 100644
--- a/tempest/services/volume/v1/json/backups_client.py
+++ b/tempest/common/utils/net_info.py
@@ -1,4 +1,3 @@
-# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -12,9 +11,15 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
+import re
-from tempest.services.volume.base import base_backups_client
+RE_OWNER = re.compile('^network:.*router_.*interface.*')
-class BackupsClient(base_backups_client.BaseBackupsClient):
- """Volume V1 Backups client"""
+def _is_owner_router_interface(owner):
+ return bool(RE_OWNER.match(owner))
+
+
+def is_router_interface_port(port):
+ """Based on the port attributes determines is it a router interface."""
+ return _is_owner_router_interface(port['device_owner'])
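A minimal usage sketch of the new ``net_info`` helper (the port dicts below are hypothetical; real callers pass Neutron port dicts, as in the cleanup_service.py and test_security_groups_basic_ops.py hunks later in this change):

.. code-block:: python

    from tempest.common.utils import net_info

    # Both centralized and distributed router interface owners match RE_OWNER.
    assert net_info.is_router_interface_port(
        {'device_owner': 'network:router_interface'})
    assert net_info.is_router_interface_port(
        {'device_owner': 'network:router_interface_distributed'})
    assert not net_info.is_router_interface_port(
        {'device_owner': 'compute:nova'})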
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 9d307ee..fa951b5 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -210,6 +210,27 @@
raise exceptions.TimeoutException(message)
+def wait_for_backup_status(client, backup_id, status):
+ """Waits for a Backup to reach a given status."""
+ body = client.show_backup(backup_id)['backup']
+ backup_status = body['status']
+ start = int(time.time())
+
+ while backup_status != status:
+ time.sleep(client.build_interval)
+ body = client.show_backup(backup_id)['backup']
+ backup_status = body['status']
+ if backup_status == 'error' and backup_status != status:
+ raise exceptions.VolumeBackupException(backup_id=backup_id)
+
+ if int(time.time()) - start >= client.build_timeout:
+ message = ('Volume backup %s failed to reach %s status '
+ '(current %s) within the required time (%s s).' %
+ (backup_id, status, backup_status,
+ client.build_timeout))
+ raise exceptions.TimeoutException(message)
+
+
def wait_for_bm_node_status(client, node_id, attr, status):
"""Waits for a baremetal node attribute to reach given status.
@@ -237,3 +258,35 @@
if caller:
message = '(%s) %s' % (caller, message)
raise exceptions.TimeoutException(message)
+
+
+def wait_for_qos_operations(client, qos_id, operation, args=None):
+ """Waits for a qos operations to be completed.
+
+ NOTE : operation value is required for wait_for_qos_operations()
+ operation = 'qos-key' / 'disassociate' / 'disassociate-all'
+ args = keys[] when operation = 'qos-key'
+ args = volume-type-id disassociated when operation = 'disassociate'
+ args = None when operation = 'disassociate-all'
+ """
+ start_time = int(time.time())
+ while True:
+ if operation == 'qos-key-unset':
+ body = client.show_qos(qos_id)['qos_specs']
+ if not any(key in body['specs'] for key in args):
+ return
+ elif operation == 'disassociate':
+ body = client.show_association_qos(qos_id)['qos_associations']
+ if not any(args in body[i]['id'] for i in range(0, len(body))):
+ return
+ elif operation == 'disassociate-all':
+ body = client.show_association_qos(qos_id)['qos_associations']
+ if not body:
+ return
+ else:
+ msg = (" operation value is either not defined or incorrect.")
+ raise lib_exc.UnprocessableEntity(msg)
+
+ if int(time.time()) - start_time >= client.build_timeout:
+ raise exceptions.TimeoutException
+ time.sleep(client.build_interval)
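A sketch of the call pattern tests use for the relocated waiters; the client and resource variables are placeholders, and the real call sites are the ones updated in the volume test hunks above:

.. code-block:: python

    from tempest.common import waiters

    # The waiters now take the service client as their first argument.
    waiters.wait_for_backup_status(backups_client, backup['id'], 'available')
    waiters.wait_for_qos_operations(admin_volume_qos_client,
                                    qos['id'], 'disassociate-all')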
diff --git a/tempest/config.py b/tempest/config.py
index b6fca7e..98cfb40 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -414,7 +414,10 @@
"list indicates all filters are disabled. The full "
"available list of filters is in nova.conf: "
"DEFAULT.scheduler_available_filters"),
-
+ cfg.BoolOpt('swap_volume',
+ default=False,
+ help='Does the test environment support in-place swapping of '
+ 'volumes attached to a server instance?'),
]
@@ -550,6 +553,15 @@
default=["1.0.0.0/16", "2.0.0.0/16"],
help="List of ip pools"
" for subnetpools creation"),
+ # TODO(ylobankov): Delete this option once the Liberty release is EOL.
+ cfg.BoolOpt('dvr_extra_resources',
+ default=True,
+ help="Whether or not to create internal network, subnet, "
+ "port and add network interface to distributed router "
+ "in L3 agent scheduler test. Extra resources need to be "
+ "provisioned in order to bind router to L3 agent in the "
+ "Liberty release or older, and are not required since "
+ "the Mitaka release.")
]
network_feature_group = cfg.OptGroup(name='network-feature-enabled',
@@ -883,34 +895,6 @@
help="Value must match heat configuration of the same name."),
]
-data_processing_group = cfg.OptGroup(name="data-processing",
- title="Data Processing options")
-
-DataProcessingGroup = [
- cfg.StrOpt('catalog_type',
- default='data-processing',
- deprecated_group="data_processing",
- help="Catalog type of the data processing service."),
- cfg.StrOpt('endpoint_type',
- default='publicURL',
- choices=['public', 'admin', 'internal',
- 'publicURL', 'adminURL', 'internalURL'],
- deprecated_group="data_processing",
- help="The endpoint type to use for the data processing "
- "service."),
-]
-
-
-data_processing_feature_group = cfg.OptGroup(
- name="data-processing-feature-enabled",
- title="Enabled Data Processing features")
-
-DataProcessingFeaturesGroup = [
- cfg.ListOpt('plugins',
- default=["vanilla", "cdh"],
- deprecated_group="data_processing-feature-enabled",
- help="List of enabled data processing plugins")
-]
stress_group = cfg.OptGroup(name='stress', title='Stress Test Options')
@@ -1157,8 +1141,6 @@
(object_storage_group, ObjectStoreGroup),
(object_storage_feature_group, ObjectStoreFeaturesGroup),
(orchestration_group, OrchestrationGroup),
- (data_processing_group, DataProcessingGroup),
- (data_processing_feature_group, DataProcessingFeaturesGroup),
(stress_group, StressGroup),
(scenario_group, ScenarioGroup),
(service_available_group, ServiceAvailableGroup),
@@ -1224,9 +1206,6 @@
self.object_storage_feature_enabled = _CONF[
'object-storage-feature-enabled']
self.orchestration = _CONF.orchestration
- self.data_processing = _CONF['data-processing']
- self.data_processing_feature_enabled = _CONF[
- 'data-processing-feature-enabled']
self.stress = _CONF.stress
self.scenario = _CONF.scenario
self.service_available = _CONF.service_available
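A tempest.conf sketch of the two options introduced above; the group names are inferred from the surrounding option lists (scheduler filters for the compute feature group, subnet pools for the network group) and should be checked against the generated sample config:

.. code-block:: ini

    [compute-feature-enabled]
    # Whether the environment supports in-place swap of attached volumes.
    swap_volume = False

    [network]
    # Only needed when testing against Liberty or older L3 agents.
    dvr_extra_resources = True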
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index 272f6e3..da32693 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -53,10 +53,6 @@
message = "Snapshot %(snapshot_id)s failed to build and is in ERROR status"
-class VolumeBackupException(exceptions.TempestException):
- message = "Volume backup %(backup_id)s failed and is in ERROR status"
-
-
class StackBuildErrorException(exceptions.TempestException):
message = ("Stack %(stack_identifier)s is in %(stack_status)s status "
"due to '%(stack_status_reason)s'")
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index c13f41a..4226cd6 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -36,9 +36,11 @@
class Client(object):
def __init__(self, host, username, password=None, timeout=300, pkey=None,
- channel_timeout=10, look_for_keys=False, key_filename=None):
+ channel_timeout=10, look_for_keys=False, key_filename=None,
+ port=22):
self.host = host
self.username = username
+ self.port = port
self.password = password
if isinstance(pkey, six.string_types):
pkey = paramiko.RSAKey.from_private_key(
@@ -58,17 +60,17 @@
paramiko.AutoAddPolicy())
_start_time = time.time()
if self.pkey is not None:
- LOG.info("Creating ssh connection to '%s' as '%s'"
+ LOG.info("Creating ssh connection to '%s:%d' as '%s'"
" with public key authentication",
- self.host, self.username)
+ self.host, self.port, self.username)
else:
- LOG.info("Creating ssh connection to '%s' as '%s'"
+ LOG.info("Creating ssh connection to '%s:%d' as '%s'"
" with password %s",
- self.host, self.username, str(self.password))
+ self.host, self.port, self.username, str(self.password))
attempts = 0
while True:
try:
- ssh.connect(self.host, username=self.username,
+ ssh.connect(self.host, port=self.port, username=self.username,
password=self.password,
look_for_keys=self.look_for_keys,
key_filename=self.key_filename,
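A minimal sketch of the new ``port`` argument on the ssh client; the host, credentials and port below are placeholders:

.. code-block:: python

    from tempest.lib.common import ssh

    # Existing callers are unaffected: port defaults to 22.
    client = ssh.Client('192.0.2.10', 'cirros', password='secret', port=2222)
    output = client.exec_command('uname -a')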
diff --git a/tempest/lib/exceptions.py b/tempest/lib/exceptions.py
index e3f25e6..a5c6b1b 100644
--- a/tempest/lib/exceptions.py
+++ b/tempest/lib/exceptions.py
@@ -239,3 +239,7 @@
class PluginRegistrationException(TempestException):
message = "Error registering plugin %(name)s: %(detailed_error)s"
+
+
+class VolumeBackupException(TempestException):
+ message = "Volume backup %(backup_id)s failed and is in ERROR status"
diff --git a/tempest/lib/services/compute/agents_client.py b/tempest/lib/services/compute/agents_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/flavors_client.py b/tempest/lib/services/compute/flavors_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/floating_ips_client.py b/tempest/lib/services/compute/floating_ips_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/keypairs_client.py b/tempest/lib/services/compute/keypairs_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/security_groups_client.py b/tempest/lib/services/compute/security_groups_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/services_client.py b/tempest/lib/services/compute/services_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/compute/volumes_client.py b/tempest/lib/services/compute/volumes_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/identity/v2/services_client.py b/tempest/lib/services/identity/v2/services_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/floating_ips_client.py b/tempest/lib/services/network/floating_ips_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/metering_labels_client.py b/tempest/lib/services/network/metering_labels_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/networks_client.py b/tempest/lib/services/network/networks_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/ports_client.py b/tempest/lib/services/network/ports_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/routers_client.py b/tempest/lib/services/network/routers_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/security_group_rules_client.py b/tempest/lib/services/network/security_group_rules_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/security_groups_client.py b/tempest/lib/services/network/security_groups_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/subnetpools_client.py b/tempest/lib/services/network/subnetpools_client.py
old mode 100755
new mode 100644
diff --git a/tempest/lib/services/network/subnets_client.py b/tempest/lib/services/network/subnets_client.py
old mode 100755
new mode 100644
diff --git a/tempest/services/volume/base/base_backups_client.py b/tempest/lib/services/volume/v1/backups_client.py
similarity index 76%
copy from tempest/services/volume/base/base_backups_client.py
copy to tempest/lib/services/volume/v1/backups_client.py
index a57e628..2728c67 100644
--- a/tempest/services/volume/base/base_backups_client.py
+++ b/tempest/lib/services/volume/v1/backups_client.py
@@ -13,17 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_serialization import jsonutils as json
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
-class BaseBackupsClient(rest_client.RestClient):
- """Client class to send CRUD Volume backup API requests"""
+class BackupsClient(rest_client.RestClient):
+ """Volume V1 Backups client"""
+ api_version = "v1"
def create_backup(self, **kwargs):
"""Creates a backup of volume.
@@ -96,26 +94,6 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
- def wait_for_backup_status(self, backup_id, status):
- """Waits for a Backup to reach a given status."""
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- start = int(time.time())
-
- while backup_status != status:
- time.sleep(self.build_interval)
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- if backup_status == 'error' and backup_status != status:
- raise exceptions.VolumeBackupException(backup_id=backup_id)
-
- if int(time.time()) - start >= self.build_timeout:
- message = ('Volume backup %s failed to reach %s status '
- '(current %s) within the required time (%s s).' %
- (backup_id, status, backup_status,
- self.build_timeout))
- raise exceptions.TimeoutException(message)
-
def is_resource_deleted(self, id):
try:
self.show_backup(id)
diff --git a/tempest/services/volume/base/base_backups_client.py b/tempest/lib/services/volume/v2/backups_client.py
similarity index 76%
rename from tempest/services/volume/base/base_backups_client.py
rename to tempest/lib/services/volume/v2/backups_client.py
index a57e628..61f865d 100644
--- a/tempest/services/volume/base/base_backups_client.py
+++ b/tempest/lib/services/volume/v2/backups_client.py
@@ -13,17 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_serialization import jsonutils as json
-from tempest import exceptions
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
-class BaseBackupsClient(rest_client.RestClient):
- """Client class to send CRUD Volume backup API requests"""
+class BackupsClient(rest_client.RestClient):
+ """Volume V2 Backups client"""
+ api_version = "v2"
def create_backup(self, **kwargs):
"""Creates a backup of volume.
@@ -96,26 +94,6 @@
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
- def wait_for_backup_status(self, backup_id, status):
- """Waits for a Backup to reach a given status."""
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- start = int(time.time())
-
- while backup_status != status:
- time.sleep(self.build_interval)
- body = self.show_backup(backup_id)['backup']
- backup_status = body['status']
- if backup_status == 'error' and backup_status != status:
- raise exceptions.VolumeBackupException(backup_id=backup_id)
-
- if int(time.time()) - start >= self.build_timeout:
- message = ('Volume backup %s failed to reach %s status '
- '(current %s) within the required time (%s s).' %
- (backup_id, status, backup_status,
- self.build_timeout))
- raise exceptions.TimeoutException(message)
-
def is_resource_deleted(self, id):
try:
self.show_backup(id)
diff --git a/tempest/services/volume/v2/json/encryption_types_client.py b/tempest/lib/services/volume/v2/encryption_types_client.py
similarity index 100%
rename from tempest/services/volume/v2/json/encryption_types_client.py
rename to tempest/lib/services/volume/v2/encryption_types_client.py
diff --git a/tempest/lib/services/volume/v2/qos_client.py b/tempest/lib/services/volume/v2/qos_client.py
index 5fac00f..40d4a3f 100644
--- a/tempest/lib/services/volume/v2/qos_client.py
+++ b/tempest/lib/services/volume/v2/qos_client.py
@@ -41,8 +41,11 @@
def create_qos(self, **kwargs):
"""Create a QoS Specification.
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#createQoSSpec
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/block-storage/v2/index.html
+ ?expanded=create-qos-specification-detail
+ #quality-of-service-qos-specifications-qos-specs
"""
post_body = json.dumps({'qos_specs': kwargs})
resp, body = self.post('qos-specs', post_body)
@@ -76,8 +79,11 @@
def set_qos_key(self, qos_id, **kwargs):
"""Set the specified keys/values of QoS specification.
- Available params: see http://developer.openstack.org/
- api-ref-blockstorage-v2.html#setQoSKey
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/block-storage/v2/index.html
+ ?expanded=set-keys-in-qos-specification-detail
+ #quality-of-service-qos-specifications-qos-specs
"""
put_body = json.dumps({"qos_specs": kwargs})
resp, body = self.put('qos-specs/%s' % qos_id, put_body)
@@ -90,7 +96,11 @@
:param keys: keys to delete from the QoS specification.
- TODO(jordanP): Add a link once LP #1524877 is fixed.
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref/block-storage/v2/index.html
+ ?expanded=unset-keys-in-qos-specification-detail
+ #quality-of-service-qos-specifications-qos-specs
"""
put_body = json.dumps({'keys': keys})
resp, body = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 519dbec..a295b6a 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -473,11 +473,11 @@
def test_hotplug_nic(self):
"""Test hotplug network interface
- 1. create a new network, with no gateway (to prevent overwriting VM's
- gateway)
- 2. connect VM to new network
- 3. set static ip and bring new nic up
- 4. check VM can ping new network dhcp port
+ 1. Create a network and a VM.
+ 2. Check connectivity to the VM via a public network.
+ 3. Create a new network, with no gateway.
+        4. Bring up a new interface.
+        5. Check that the VM can reach the new network.
"""
self._setup_network_and_servers()
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 2c16be8..32f5d9f 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -17,6 +17,7 @@
from tempest import clients
from tempest.common.utils import data_utils
+from tempest.common.utils import net_info
from tempest import config
from tempest.scenario import manager
from tempest import test
@@ -247,16 +248,10 @@
myport = (tenant.router['id'], tenant.subnet['id'])
router_ports = [(i['device_id'], i['fixed_ips'][0]['subnet_id']) for i
in self._list_ports()
- if self._is_router_port(i)]
+ if net_info.is_router_interface_port(i)]
self.assertIn(myport, router_ports)
- def _is_router_port(self, port):
- """Return True if port is a router interface."""
- # NOTE(armando-migliaccio): match device owner for both centralized
- # and distributed routers; 'device_owner' is "" by default.
- return port['device_owner'].startswith('network:router_interface')
-
def _create_server(self, name, tenant, security_groups, **kwargs):
"""Creates a server and assigns it to security group.
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
old mode 100755
new mode 100644
index 3f6d9c4..44ad136
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -24,18 +24,6 @@
class TestVolumeBootPattern(manager.ScenarioTest):
- """This test case attempts to reproduce the following steps:
-
- * Create in Cinder some bootable volume importing a Glance image
- * Boot an instance from the bootable volume
- * Write content to the volume
- * Delete an instance and Boot a new instance from the volume
- * Check written content in the instance
- * Create a volume snapshot while the instance is running
- * Boot an additional instance from the new snapshot based volume
- * Check written content in the instance booted from snapshot
- """
-
# Boot from volume scenario is quite slow, and needs extra
# breathing room to get through deletes in the time allotted.
TIMEOUT_SCALING_FACTOR = 2
@@ -113,6 +101,19 @@
@test.attr(type='smoke')
@test.services('compute', 'volume', 'image')
def test_volume_boot_pattern(self):
+
+ """This test case attempts to reproduce the following steps:
+
+ * Create in Cinder some bootable volume importing a Glance image
+ * Boot an instance from the bootable volume
+ * Write content to the volume
+ * Delete an instance and Boot a new instance from the volume
+ * Check written content in the instance
+ * Create a volume snapshot while the instance is running
+ * Boot an additional instance from the new snapshot based volume
+ * Check written content in the instance booted from snapshot
+ """
+
LOG.info("Creating keypair and security group")
keypair = self.create_keypair()
security_group = self._create_security_group()
diff --git a/tempest/services/baremetal/v1/json/baremetal_client.py b/tempest/services/baremetal/v1/json/baremetal_client.py
old mode 100755
new mode 100644
diff --git a/tempest/services/data_processing/__init__.py b/tempest/services/data_processing/__init__.py
deleted file mode 100644
index c49bc5c..0000000
--- a/tempest/services/data_processing/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) 2016 Hewlett-Packard Enterprise Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-from tempest.services.data_processing.v1_1.data_processing_client import \
- DataProcessingClient
-
-__all__ = ['DataProcessingClient']
diff --git a/tempest/services/data_processing/v1_1/data_processing_client.py b/tempest/services/data_processing/v1_1/data_processing_client.py
deleted file mode 100644
index c74672f..0000000
--- a/tempest/services/data_processing/v1_1/data_processing_client.py
+++ /dev/null
@@ -1,280 +0,0 @@
-# Copyright (c) 2013 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_serialization import jsonutils as json
-
-from tempest.lib.common import rest_client
-
-
-class DataProcessingClient(rest_client.RestClient):
-
- def _request_and_check_resp(self, request_func, uri, resp_status):
- """Make a request and check response status code.
-
- It returns a ResponseBody.
- """
- resp, body = request_func(uri)
- self.expected_success(resp_status, resp.status)
- return rest_client.ResponseBody(resp, body)
-
- def _request_and_check_resp_data(self, request_func, uri, resp_status):
- """Make a request and check response status code.
-
- It returns pair: resp and response data.
- """
- resp, body = request_func(uri)
- self.expected_success(resp_status, resp.status)
- return resp, body
-
- def _request_check_and_parse_resp(self, request_func, uri,
- resp_status, *args, **kwargs):
- """Make a request, check response status code and parse response body.
-
- It returns a ResponseBody.
- """
- headers = {'Content-Type': 'application/json'}
- resp, body = request_func(uri, headers=headers, *args, **kwargs)
- self.expected_success(resp_status, resp.status)
- body = json.loads(body)
- return rest_client.ResponseBody(resp, body)
-
- def list_node_group_templates(self):
- """List all node group templates for a user."""
-
- uri = 'node-group-templates'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_node_group_template(self, tmpl_id):
- """Returns the details of a single node group template."""
-
- uri = 'node-group-templates/%s' % tmpl_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_node_group_template(self, name, plugin_name, hadoop_version,
- node_processes, flavor_id,
- node_configs=None, **kwargs):
- """Creates node group template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'node-group-templates'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'plugin_name': plugin_name,
- 'hadoop_version': hadoop_version,
- 'node_processes': node_processes,
- 'flavor_id': flavor_id,
- 'node_configs': node_configs or dict(),
- })
- return self._request_check_and_parse_resp(self.post, uri, 202,
- body=json.dumps(body))
-
- def delete_node_group_template(self, tmpl_id):
- """Deletes the specified node group template by id."""
-
- uri = 'node-group-templates/%s' % tmpl_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def list_plugins(self):
- """List all enabled plugins."""
-
- uri = 'plugins'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_plugin(self, plugin_name, plugin_version=None):
- """Returns the details of a single plugin."""
-
- uri = 'plugins/%s' % plugin_name
- if plugin_version:
- uri += '/%s' % plugin_version
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def list_cluster_templates(self):
- """List all cluster templates for a user."""
-
- uri = 'cluster-templates'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_cluster_template(self, tmpl_id):
- """Returns the details of a single cluster template."""
-
- uri = 'cluster-templates/%s' % tmpl_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_cluster_template(self, name, plugin_name, hadoop_version,
- node_groups, cluster_configs=None,
- **kwargs):
- """Creates cluster template with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'cluster-templates'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'plugin_name': plugin_name,
- 'hadoop_version': hadoop_version,
- 'node_groups': node_groups,
- 'cluster_configs': cluster_configs or dict(),
- })
- return self._request_check_and_parse_resp(self.post, uri, 202,
- body=json.dumps(body))
-
- def delete_cluster_template(self, tmpl_id):
- """Deletes the specified cluster template by id."""
-
- uri = 'cluster-templates/%s' % tmpl_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def list_data_sources(self):
- """List all data sources for a user."""
-
- uri = 'data-sources'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_data_source(self, source_id):
- """Returns the details of a single data source."""
-
- uri = 'data-sources/%s' % source_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_data_source(self, name, data_source_type, url, **kwargs):
- """Creates data source with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'data-sources'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'type': data_source_type,
- 'url': url
- })
- return self._request_check_and_parse_resp(self.post, uri,
- 202, body=json.dumps(body))
-
- def delete_data_source(self, source_id):
- """Deletes the specified data source by id."""
-
- uri = 'data-sources/%s' % source_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def list_job_binary_internals(self):
- """List all job binary internals for a user."""
-
- uri = 'job-binary-internals'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_job_binary_internal(self, job_binary_id):
- """Returns the details of a single job binary internal."""
-
- uri = 'job-binary-internals/%s' % job_binary_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_job_binary_internal(self, name, data):
- """Creates job binary internal with specified params."""
-
- uri = 'job-binary-internals/%s' % name
- return self._request_check_and_parse_resp(self.put, uri, 202, data)
-
- def delete_job_binary_internal(self, job_binary_id):
- """Deletes the specified job binary internal by id."""
-
- uri = 'job-binary-internals/%s' % job_binary_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def get_job_binary_internal_data(self, job_binary_id):
- """Returns data of a single job binary internal."""
-
- uri = 'job-binary-internals/%s/data' % job_binary_id
- return self._request_and_check_resp_data(self.get, uri, 200)
-
- def list_job_binaries(self):
- """List all job binaries for a user."""
-
- uri = 'job-binaries'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_job_binary(self, job_binary_id):
- """Returns the details of a single job binary."""
-
- uri = 'job-binaries/%s' % job_binary_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_job_binary(self, name, url, extra=None, **kwargs):
- """Creates job binary with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'job-binaries'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'url': url,
- 'extra': extra or dict(),
- })
- return self._request_check_and_parse_resp(self.post, uri,
- 202, body=json.dumps(body))
-
- def delete_job_binary(self, job_binary_id):
- """Deletes the specified job binary by id."""
-
- uri = 'job-binaries/%s' % job_binary_id
- return self._request_and_check_resp(self.delete, uri, 204)
-
- def get_job_binary_data(self, job_binary_id):
- """Returns data of a single job binary."""
-
- uri = 'job-binaries/%s/data' % job_binary_id
- return self._request_and_check_resp_data(self.get, uri, 200)
-
- def list_jobs(self):
- """List all jobs for a user."""
-
- uri = 'jobs'
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def get_job(self, job_id):
- """Returns the details of a single job."""
-
- uri = 'jobs/%s' % job_id
- return self._request_check_and_parse_resp(self.get, uri, 200)
-
- def create_job(self, name, job_type, mains, libs=None, **kwargs):
- """Creates job with specified params.
-
- It supports passing additional params using kwargs and returns created
- object.
- """
- uri = 'jobs'
- body = kwargs.copy()
- body.update({
- 'name': name,
- 'type': job_type,
- 'mains': mains,
- 'libs': libs or list(),
- })
- return self._request_check_and_parse_resp(self.post, uri,
- 202, body=json.dumps(body))
-
- def delete_job(self, job_id):
- """Deletes the specified job by id."""
-
- uri = 'jobs/%s' % job_id
- return self._request_and_check_resp(self.delete, uri, 204)
diff --git a/tempest/services/identity/v3/__init__.py b/tempest/services/identity/v3/__init__.py
index 3f5c3d5..9b40b77 100644
--- a/tempest/services/identity/v3/__init__.py
+++ b/tempest/services/identity/v3/__init__.py
@@ -28,8 +28,11 @@
from tempest.lib.services.identity.v3.trusts_client import TrustsClient
from tempest.lib.services.identity.v3.users_client import UsersClient
from tempest.services.identity.v3.json.domains_client import DomainsClient
+from tempest.services.identity.v3.json.role_assignments_client import \
+ RoleAssignmentsClient
__all__ = ['CredentialsClient', 'EndPointsClient', 'GroupsClient',
'IdentityClient', 'InheritedRolesClient', 'PoliciesClient',
- 'ProjectsClient', 'RegionsClient', 'RolesClient', 'ServicesClient',
- 'V3TokenClient', 'TrustsClient', 'UsersClient', 'DomainsClient', ]
+ 'ProjectsClient', 'RegionsClient', 'RoleAssignmentsClient',
+ 'RolesClient', 'ServicesClient', 'V3TokenClient', 'TrustsClient',
+ 'UsersClient', 'DomainsClient', ]
diff --git a/tempest/services/identity/v3/json/role_assignments_client.py b/tempest/services/identity/v3/json/role_assignments_client.py
new file mode 100644
index 0000000..9fd7736
--- /dev/null
+++ b/tempest/services/identity/v3/json/role_assignments_client.py
@@ -0,0 +1,31 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.common import rest_client
+
+
+class RoleAssignmentsClient(rest_client.RestClient):
+ api_version = "v3"
+
+ def list_user_project_effective_assignments(
+ self, project_id, user_id):
+ """List the effective role assignments for a user in a project."""
+ resp, body = self.get(
+ "role_assignments?scope.project.id=%s&user.id=%s&effective" %
+ (project_id, user_id))
+ self.expected_success(200, resp.status)
+ body = json.loads(body)
+ return rest_client.ResponseBody(resp, body)
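A usage sketch for the new client; the IDs are placeholders, the client is exposed as ``role_assignments_client`` via the clients.py hunk above, and the ``role_assignments`` response key follows the Keystone v3 API:

.. code-block:: python

    body = role_assignments_client.list_user_project_effective_assignments(
        project_id, user_id)
    role_ids = [a['role']['id'] for a in body['role_assignments']]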
diff --git a/tempest/services/volume/base/base_volumes_client.py b/tempest/services/volume/base/base_volumes_client.py
old mode 100755
new mode 100644
diff --git a/tempest/services/volume/v1/__init__.py b/tempest/services/volume/v1/__init__.py
index e386faf..376ab72 100644
--- a/tempest/services/volume/v1/__init__.py
+++ b/tempest/services/volume/v1/__init__.py
@@ -14,6 +14,7 @@
from tempest.lib.services.volume.v1.availability_zone_client import \
AvailabilityZoneClient
+from tempest.lib.services.volume.v1.backups_client import BackupsClient
from tempest.lib.services.volume.v1.encryption_types_client import \
EncryptionTypesClient
from tempest.lib.services.volume.v1.extensions_client import ExtensionsClient
@@ -23,7 +24,6 @@
from tempest.lib.services.volume.v1.services_client import ServicesClient
from tempest.lib.services.volume.v1.snapshots_client import SnapshotsClient
from tempest.lib.services.volume.v1.types_client import TypesClient
-from tempest.services.volume.v1.json.backups_client import BackupsClient
from tempest.services.volume.v1.json.volumes_client import VolumesClient
__all__ = ['AvailabilityZoneClient', 'EncryptionTypesClient',
diff --git a/tempest/services/volume/v2/__init__.py b/tempest/services/volume/v2/__init__.py
index b63e6f2..5774977 100644
--- a/tempest/services/volume/v2/__init__.py
+++ b/tempest/services/volume/v2/__init__.py
@@ -14,6 +14,9 @@
from tempest.lib.services.volume.v2.availability_zone_client import \
AvailabilityZoneClient
+from tempest.lib.services.volume.v2.backups_client import BackupsClient
+from tempest.lib.services.volume.v2.encryption_types_client import \
+ EncryptionTypesClient
from tempest.lib.services.volume.v2.extensions_client import ExtensionsClient
from tempest.lib.services.volume.v2.hosts_client import HostsClient
from tempest.lib.services.volume.v2.qos_client import QosSpecsClient
@@ -21,12 +24,9 @@
from tempest.lib.services.volume.v2.services_client import ServicesClient
from tempest.lib.services.volume.v2.snapshots_client import SnapshotsClient
from tempest.lib.services.volume.v2.types_client import TypesClient
-from tempest.services.volume.v2.json.backups_client import BackupsClient
-from tempest.services.volume.v2.json.encryption_types_client import \
- EncryptionTypesClient
from tempest.services.volume.v2.json.volumes_client import VolumesClient
-__all__ = ['AvailabilityZoneClient', 'ExtensionsClient', 'HostsClient',
- 'QosSpecsClient', 'QuotasClient', 'ServicesClient',
- 'SnapshotsClient', 'TypesClient', 'BackupsClient',
- 'EncryptionTypesClient', 'VolumesClient', ]
+__all__ = ['AvailabilityZoneClient', 'BackupsClient', 'EncryptionTypesClient',
+ 'ExtensionsClient', 'HostsClient', 'QosSpecsClient', 'QuotasClient',
+ 'ServicesClient', 'SnapshotsClient', 'TypesClient',
+ 'VolumesClient', ]
diff --git a/tempest/services/volume/v2/json/backups_client.py b/tempest/services/volume/v2/json/backups_client.py
deleted file mode 100644
index 78bab82..0000000
--- a/tempest/services/volume/v2/json/backups_client.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2014 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.services.volume.base import base_backups_client
-
-
-class BackupsClient(base_backups_client.BaseBackupsClient):
- """Client class to send CRUD Volume V2 API requests"""
- api_version = "v2"
diff --git a/tempest/stress/README.rst b/tempest/stress/README.rst
index 33842fd..f22c9ce 100644
--- a/tempest/stress/README.rst
+++ b/tempest/stress/README.rst
@@ -19,11 +19,13 @@
[stress] section of tempest.conf. You also need to provide the
location of the log files:
- target_logfiles = "regexp to all log files to be checked for errors"
- target_private_key_path = "private ssh key for controller and log file nodes"
- target_ssh_user = "username for controller and log file nodes"
- target_controller = "hostname or ip of controller node (for nova-manage)
- log_check_interval = "time between checking logs for errors (default 60s)"
+ .. code-block:: ini
+
+ target_logfiles = "regexp to all log files to be checked for errors"
+ target_private_key_path = "private ssh key for controller and log file nodes"
+ target_ssh_user = "username for controller and log file nodes"
+     target_controller = "hostname or ip of controller node (for nova-manage)"
+ log_check_interval = "time between checking logs for errors (default 60s)"
To activate logging on your console please make sure that you activate `use_stderr`
in tempest.conf or use the default `logging.conf.sample` file.
@@ -36,14 +38,14 @@
In order to use this discovery you have to install tempest CLI, be in the
tempest root directory and execute the following:
- tempest run-stress -a -d 30
+ tempest run-stress -a -d 30
Running the sample test
-----------------------
To test installation, do the following:
- tempest run-stress -t tempest/stress/etc/server-create-destroy-test.json -d 30
+ tempest run-stress -t tempest/stress/etc/server-create-destroy-test.json -d 30
This sample test tries to create a few VMs and kill a few VMs.
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index eb50126..f8d5d9d 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -157,8 +157,10 @@
registry = clients.ClientsRegistry()
for plug in self.ext_plugins:
try:
- registry.register_service_client(
- plug.name, plug.obj.get_service_clients())
+ service_clients = plug.obj.get_service_clients()
+ if service_clients:
+ registry.register_service_client(
+ plug.name, service_clients)
except Exception:
LOG.exception('Plugin %s raised an exception trying to run '
'get_service_clients' % plug.name)
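A hypothetical plugin sketch illustrating the new behaviour: a plugin whose ``get_service_clients`` returns an empty list is now skipped instead of being registered with an empty client list (the other abstract ``TempestPlugin`` methods are omitted for brevity):

.. code-block:: python

    from tempest.test_discover import plugins


    class NoClientsPlugin(plugins.TempestPlugin):
        """Hypothetical plugin with no service clients to register."""

        def get_service_clients(self):
            # With the change above, returning an empty list means no
            # registry.register_service_client() call is made at all.
            return []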
diff --git a/tempest/tests/cmd/test_account_generator.py b/tempest/tests/cmd/test_account_generator.py
old mode 100755
new mode 100644
diff --git a/tempest/tests/cmd/test_tempest_init.py b/tempest/tests/cmd/test_tempest_init.py
index 2844371..79510be 100644
--- a/tempest/tests/cmd/test_tempest_init.py
+++ b/tempest/tests/cmd/test_tempest_init.py
@@ -137,3 +137,18 @@
self.assertTrue(os.path.isfile(fake_file_moved))
self.assertTrue(os.path.isfile(local_conf_file))
self.assertTrue(os.path.isfile(local_testr_conf))
+
+ def test_take_action_fails(self):
+ class ParsedArgs(object):
+ workspace_dir = self.useFixture(fixtures.TempDir()).path
+ workspace_path = os.path.join(workspace_dir, 'workspace.yaml')
+ name = 'test'
+ dir_base = self.useFixture(fixtures.TempDir()).path
+ dir = os.path.join(dir_base, 'foo', 'bar')
+ config_dir = self.useFixture(fixtures.TempDir()).path
+ show_global_dir = False
+ pa = ParsedArgs()
+ init_cmd = init.TempestInit(None, None)
+ self.assertRaises(OSError, init_cmd.take_action, pa)
+        # A second call should raise the same error, not "already exists"
+ self.assertRaises(OSError, init_cmd.take_action, pa)
diff --git a/tempest/api/data_processing/__init__.py b/tempest/tests/lib/services/volume/__init__.py
similarity index 100%
rename from tempest/api/data_processing/__init__.py
rename to tempest/tests/lib/services/volume/__init__.py
diff --git a/tempest/services/data_processing/v1_1/__init__.py b/tempest/tests/lib/services/volume/v1/__init__.py
similarity index 100%
rename from tempest/services/data_processing/v1_1/__init__.py
rename to tempest/tests/lib/services/volume/v1/__init__.py
diff --git a/tempest/tests/lib/services/volume/v1/test_encryption_types_client.py b/tempest/tests/lib/services/volume/v1/test_encryption_types_client.py
new file mode 100644
index 0000000..585904e
--- /dev/null
+++ b/tempest/tests/lib/services/volume/v1/test_encryption_types_client.py
@@ -0,0 +1,86 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.volume.v1 import encryption_types_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestEncryptionTypesClient(base.BaseServiceTest):
+ FAKE_CREATE_ENCRYPTION_TYPE = {
+ "encryption": {
+ "id": "cbc36478b0bd8e67e89",
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ FAKE_INFO_ENCRYPTION_TYPE = {
+ "encryption": {
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "description": "test_description",
+ "volume_type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ def setUp(self):
+ super(TestEncryptionTypesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+ self.client = encryption_types_client.EncryptionTypesClient(fake_auth,
+ 'volume',
+ 'regionOne'
+ )
+
+ def _test_create_encryption(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def _test_show_encryption_type(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_INFO_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def test_create_encryption_type_with_str_body(self):
+ self._test_create_encryption()
+
+ def test_create_encryption_type_with_bytes_body(self):
+ self._test_create_encryption(bytes_body=True)
+
+ def test_show_encryption_type_with_str_body(self):
+ self._test_show_encryption_type()
+
+ def test_show_encryption_type_with_bytes_body(self):
+ self._test_show_encryption_type(bytes_body=True)
+
+ def test_delete_encryption_type(self):
+ self.check_service_client_function(
+ self.client.delete_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ volume_type_id="cbc36478b0bd8e67e89",
+ status=202)
diff --git a/tempest/api/data_processing/__init__.py b/tempest/tests/lib/services/volume/v2/__init__.py
similarity index 100%
copy from tempest/api/data_processing/__init__.py
copy to tempest/tests/lib/services/volume/v2/__init__.py
diff --git a/tempest/tests/lib/services/volume/v2/test_encryption_types_client.py b/tempest/tests/lib/services/volume/v2/test_encryption_types_client.py
new file mode 100644
index 0000000..d029091
--- /dev/null
+++ b/tempest/tests/lib/services/volume/v2/test_encryption_types_client.py
@@ -0,0 +1,86 @@
+# Copyright 2016 NEC Corporation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.lib.services.volume.v2 import encryption_types_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestEncryptionTypesClient(base.BaseServiceTest):
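+    # Canned response bodies for the mocked RestClient calls below; the
+    # field values are arbitrary test fixtures, not data from a real cloud.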
+ FAKE_CREATE_ENCRYPTION_TYPE = {
+ "encryption": {
+ "id": "cbc36478b0bd8e67e89",
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ FAKE_INFO_ENCRYPTION_TYPE = {
+ "encryption": {
+ "name": "FakeEncryptionType",
+ "type": "fakeType",
+ "description": "test_description",
+ "volume_type": "fakeType",
+ "provider": "LuksEncryptor",
+ "cipher": "aes-xts-plain64",
+ "key_size": "512",
+ "control_location": "front-end"
+ }
+ }
+
+ def setUp(self):
+ super(TestEncryptionTypesClient, self).setUp()
+ fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = encryption_types_client.EncryptionTypesClient(
+            fake_auth,
+            'volume',
+            'regionOne')
+
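+    # check_service_client_function patches the named RestClient method so
+    # it returns the fake body (serialized to str or bytes per bytes_body)
+    # and asserts the service client method returns the expected result.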
+ def _test_create_encryption(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.create_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.post',
+ self.FAKE_CREATE_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def _test_show_encryption_type(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.show_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ self.FAKE_INFO_ENCRYPTION_TYPE,
+ bytes_body, volume_type_id="cbc36478b0bd8e67e89")
+
+ def test_create_encryption_type_with_str_body(self):
+ self._test_create_encryption()
+
+ def test_create_encryption_type_with_bytes_body(self):
+ self._test_create_encryption(bytes_body=True)
+
+ def test_show_encryption_type_with_str_body(self):
+ self._test_show_encryption_type()
+
+ def test_show_encryption_type_with_bytes_body(self):
+ self._test_show_encryption_type(bytes_body=True)
+
+ def test_delete_encryption_type(self):
+ self.check_service_client_function(
+ self.client.delete_encryption_type,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ volume_type_id="cbc36478b0bd8e67e89",
+ status=202)
diff --git a/tempest/tests/lib/test_ssh.py b/tempest/tests/lib/test_ssh.py
index b07f6bc..8a0a84c 100644
--- a/tempest/tests/lib/test_ssh.py
+++ b/tempest/tests/lib/test_ssh.py
@@ -69,6 +69,7 @@
mock.sentinel.aa)
expected_connect = [mock.call(
'localhost',
+ port=22,
username='root',
pkey=None,
key_filename=None,
diff --git a/tempest/tests/test_tempest_plugin.py b/tempest/tests/test_tempest_plugin.py
index dd50125..13e2499 100644
--- a/tempest/tests/test_tempest_plugin.py
+++ b/tempest/tests/test_tempest_plugin.py
@@ -75,7 +75,5 @@
fake_obj = fake_plugin.FakeStevedoreObjNoServiceClients()
manager.ext_plugins = [fake_obj]
manager._register_service_clients()
- expected_result = []
registered_clients = registry.get_service_clients()
- self.assertIn(fake_obj.name, registered_clients)
- self.assertEqual(expected_result, registered_clients[fake_obj.name])
+ self.assertNotIn(fake_obj.name, registered_clients)
diff --git a/test-requirements.txt b/test-requirements.txt
index 567cf20..53efa46 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -3,8 +3,7 @@
# process, which may cause wedges in the gate later.
hacking<0.12,>=0.11.0 # Apache-2.0
# needed for doc build
-sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
-python-subunit>=0.0.18 # Apache-2.0/BSD
+sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
oslosphinx>=4.7.0 # Apache-2.0
reno>=1.8.0 # Apache2
mock>=2.0 # BSD
diff --git a/tox.ini b/tox.ini
index 7096e60..02eef78 100644
--- a/tox.ini
+++ b/tox.ini
@@ -26,7 +26,7 @@
-r{toxinidir}/test-requirements.txt
commands =
find . -type f -name "*.pyc" -delete
- bash tools/pretty_tox.sh '{posargs}'
+ ostestr {posargs}
[testenv:genconfig]
commands = oslo-config-generator --config-file tempest/cmd/config-generator.tempest.conf