Merge "Add section to the config guide on lock_path"
diff --git a/HACKING.rst b/HACKING.rst
index 81a7c2c..04b5eb6 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -312,3 +312,57 @@
* Boot an additional instance from the new snapshot based volume
* Check written content in the instance booted from snapshot
"""
+
+Branchless Tempest Considerations
+---------------------------------
+
+Starting with the OpenStack Icehouse release, Tempest no longer has any stable
+branches. This is to better ensure API consistency between releases, because
+the API behavior should not change from one release to the next. It also means
+that the stable branches are gated by the Tempest master branch, so proposed
+commits to Tempest must work against both master and all the currently
+supported stable branches of the projects. As such, there are a few special
+considerations that have to be accounted for when pushing new changes to
+Tempest.
+
+1. New Tests for new features
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When adding tests for new features that were not in previous releases of the
+projects, the new test has to be properly skipped with a feature flag. This
+can be as simple as using the @test.requires_ext() decorator to check whether
+the required extension (or discoverable optional API) is enabled, or it may
+require adding a new config option to the appropriate section. If there isn't
+a method of selecting the new **feature** from the config file, then there
+won't be a mechanism to disable the test against older stable releases, and
+the new test won't be able to merge.
+
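+For example, a new test might be skipped in either of the following ways. This
+is only an illustrative sketch; the ``some_new_feature`` option and the test
+class below are hypothetical, not real Tempest code::
+
+    import testtools
+
+    from tempest.api.compute import base
+    from tempest import config
+    from tempest import test
+
+    CONF = config.CONF
+
+
+    class NewFeatureTest(base.BaseV2ComputeTest):
+
+        # Skip unless the required API extension is enabled
+        @test.requires_ext(extension='os-some-extension', service='compute')
+        def test_new_extension_behavior(self):
+            pass
+
+        # Skip unless the (hypothetical) feature flag is set in tempest.conf
+        @testtools.skipUnless(CONF.compute_feature_enabled.some_new_feature,
+                              'some_new_feature is not enabled')
+        def test_new_feature_behavior(self):
+            pass
+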
+2. Bug fix on core project needing Tempest changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When trying to land a bug fix which changes a tested API, you'll have to use
+the following procedure::
+
+    - Propose the change to the project and get a +2 on it, even with the
+      Tempest test failing
+    - Propose a skip on Tempest, which will only be approved after the
+      corresponding change in the project has a +2
+    - Land the project change in master and all open stable branches
+      (if required)
+    - Land the changed test in Tempest
+
+Otherwise the bug fix won't be able to land in the project.
+
+3. New Tests for existing features
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If a test is being added for a feature that exists in all the current releases
+of the projects, then the only concern is that the API behavior is the same
+across all the versions of the project being tested. If the behavior is not
+consistent, the test will not be able to merge.
+
+API Stability
+-------------
+
+For new tests being added to Tempest, the assumption is that the API being
+tested is considered stable and adheres to the OpenStack API stability
+guidelines. If an API is still considered experimental or in development,
+then it should not be tested by Tempest until it is considered stable.
diff --git a/README.rst b/README.rst
index 7af0025..9aaea24 100644
--- a/README.rst
+++ b/README.rst
@@ -59,50 +59,49 @@
will have a configuration file already set up to work with your
devstack installation.
-Tempest is not tied to any single test runner, but testr is the most commonly
-used tool. After setting up your configuration file, you can execute
-the set of Tempest tests by using ``testr`` ::
+Tempest is not tied to any single test runner, but `testr`_ is the most
+commonly used tool. Note that the nosetests test runner is **not** recommended
+for running Tempest.
+
+After setting up your configuration file, you can execute the set of Tempest
+tests by using ``testr`` ::
$> testr run --parallel
-To run one single test ::
+.. _testr: http://testrepository.readthedocs.org/en/latest/MANUAL.html
- $> testr run --parallel tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
+To run a single test serially ::
+
+ $> testr run tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
Alternatively, you can use the run_tempest.sh script which will create a venv
-and run the tests or use tox to do the same.
+and run the tests, or use tox to do the same. Tox also contains several
+existing job configurations. For example::
+
+ $> tox -efull
+
+which will run the same set of tests as the OpenStack gate (this is exactly
+how the gate invokes Tempest). Or::
+
+ $> tox -esmoke
+
+to run the tests tagged as smoke.
+
Configuration
-------------
Detailed configuration of tempest is beyond the scope of this
-document. The etc/tempest.conf.sample attempts to be a self
-documenting version of the configuration.
+document; see :ref:`tempest-configuration` for more details on configuring
+Tempest. The etc/tempest.conf.sample file attempts to be a self-documenting
+version of the configuration.
-To generate the sample tempest.conf file, run the following
+To generate a new sample tempest.conf file, run the following
command from the top level of the tempest directory:
tox -egenconfig
The most important pieces that are needed are the user ids, openstack
-endpoints, and basic flavors and images needed to run tests.
-
-Common Issues
--------------
-
-Tempest was originally designed to primarily run against a full OpenStack
-deployment. Due to that focus, some issues may occur when running Tempest
-against devstack.
-
-Running Tempest, especially in parallel, against a devstack instance may
-cause requests to be rate limited, which will cause unexpected failures.
-Given the number of requests Tempest can make against a cluster, rate limiting
-should be disabled for all test accounts.
-
-Additionally, devstack only provides a single image which Nova can use.
-For the moment, the best solution is to provide the same image uuid for
-both image_ref and image_ref_alt. Tempest will skip tests as needed if it
-detects that both images are the same.
+endpoint, and basic flavors and images needed to run tests.
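+
+As a rough illustration only, a minimal tempest.conf might look something like
+the following (all values are placeholders and the option names can vary
+between releases; see the generated sample config for the authoritative
+list)::
+
+    [identity]
+    uri = http://<keystone host>:5000/v2.0/
+    username = demo
+    password = <password>
+    tenant_name = demo
+    admin_username = admin
+    admin_password = <password>
+    admin_tenant_name = admin
+
+    [compute]
+    image_ref = <image uuid>
+    image_ref_alt = <alternate image uuid>
+    flavor_ref = 1
+    flavor_ref_alt = 2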
Unit Tests
----------
@@ -132,57 +131,3 @@
on an earlier release with python 2.6 you can easily run tempest against it
from a remote system running python 2.7. (or deploy a cloud guest in your cloud
that has python 2.7)
-
-Branchless Tempest Considerations
----------------------------------
-
-Starting with the OpenStack Icehouse release Tempest no longer has any stable
-branches. This is to better ensure API consistency between releases because
-the API behavior should not change between releases. This means that the stable
-branches are also gated by the Tempest master branch, which also means that
-proposed commits to Tempest must work against both the master and all the
-currently supported stable branches of the projects. As such there are a few
-special considerations that have to be accounted for when pushing new changes
-to tempest.
-
-1. New Tests for new features
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-When adding tests for new features that were not in previous releases of the
-projects the new test has to be properly skipped with a feature flag. Whether
-this is just as simple as using the @test.requires_ext() decorator to check
-if the required extension (or discoverable optional API) is enabled or adding
-a new config option to the appropriate section. If there isn't a method of
-selecting the new **feature** from the config file then there won't be a
-mechanism to disable the test with older stable releases and the new test won't
-be able to merge.
-
-2. Bug fix on core project needing Tempest changes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-When trying to land a bug fix which changes a tested API you'll have to use the
-following procedure::
-
- - Propose change to the project, get a +2 on the change even with failing
- - Propose skip on Tempest which will only be approved after the
- corresponding change in the project has a +2 on change
- - Land project change in master and all open stable branches (if required)
- - Land changed test in Tempest
-
-Otherwise the bug fix won't be able to land in the project.
-
-3. New Tests for existing features
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If a test is being added for a feature that exists in all the current releases
-of the projects then the only concern is that the API behavior is the same
-across all the versions of the project being tested. If the behavior is not
-consistent the test will not be able to merge.
-
-API Stability
--------------
-
-For new tests being added to Tempest the assumption is that the API being
-tested is considered stable and adheres to the OpenStack API stability
-guidelines. If an API is still considered experimental or in development then
-it should not be tested by Tempest until it is considered stable.
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 5b13619..15369de 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -1,3 +1,5 @@
+.. _tempest-configuration:
+
Tempest Configuration Guide
===========================
diff --git a/requirements.txt b/requirements.txt
index f6e30ce..56796d8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,7 +12,6 @@
python-ceilometerclient>=1.0.6
python-glanceclient>=0.15.0
python-keystoneclient>=1.1.0
-python-neutronclient>=2.3.11,<3
python-cinderclient>=1.1.0
python-heatclient>=0.3.0
python-ironicclient>=0.2.1
diff --git a/tempest/api/compute/admin/test_baremetal_nodes.py b/tempest/api/compute/admin/test_baremetal_nodes.py
index 1381f80..64099c3 100644
--- a/tempest/api/compute/admin/test_baremetal_nodes.py
+++ b/tempest/api/compute/admin/test_baremetal_nodes.py
@@ -31,14 +31,26 @@
skip_msg = ('%s skipped as Ironic is not available' % cls.__name__)
raise cls.skipException(skip_msg)
cls.client = cls.os_adm.baremetal_nodes_client
+ cls.ironic_client = cls.os_adm.baremetal_client
- @test.attr(type='smoke')
+ @test.attr(type=['smoke', 'baremetal'])
@test.idempotent_id('e475aa6e-416d-4fa4-b3af-28d5e84250fb')
- def test_list_baremetal_nodes(self):
- # List all baremetal nodes.
- baremetal_nodes = self.client.list_baremetal_nodes()
- self.assertNotEmpty(baremetal_nodes, "No baremetal nodes found.")
+ def test_list_get_baremetal_nodes(self):
+ # Create some test nodes in Ironic directly
+ test_nodes = []
+ for i in range(0, 3):
+ _, node = self.ironic_client.create_node()
+ test_nodes.append(node)
+ self.addCleanup(self.ironic_client.delete_node, node['uuid'])
- for node in baremetal_nodes:
- baremetal_node = self.client.get_baremetal_node(node['id'])
- self.assertEqual(node['id'], baremetal_node['id'])
+ # List all baremetal nodes and ensure our created test nodes are
+ # listed
+ bm_node_ids = set([n['id'] for n in
+ self.client.list_baremetal_nodes()])
+ test_node_ids = set([n['uuid'] for n in test_nodes])
+ self.assertTrue(test_node_ids.issubset(bm_node_ids))
+
+ # Test getting each individually
+ for node in test_nodes:
+ baremetal_node = self.client.get_baremetal_node(node['uuid'])
+ self.assertEqual(node['uuid'], baremetal_node['id'])
diff --git a/tempest/api/compute/admin/test_flavors_negative.py b/tempest/api/compute/admin/test_flavors_negative.py
deleted file mode 100644
index c7eb9ae..0000000
--- a/tempest/api/compute/admin/test_flavors_negative.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import uuid
-
-from tempest_lib.common.utils import data_utils
-from tempest_lib import exceptions as lib_exc
-
-from tempest.api.compute import base
-from tempest.api_schema.request.compute.v2 import flavors
-from tempest import config
-from tempest import test
-
-
-CONF = config.CONF
-
-load_tests = test.NegativeAutoTest.load_tests
-
-
-class FlavorsAdminNegativeTestJSON(base.BaseV2ComputeAdminTest):
-
- """
- Tests Flavors API Create and Delete that require admin privileges
- """
-
- @classmethod
- def skip_checks(cls):
- super(FlavorsAdminNegativeTestJSON, cls).skip_checks()
- if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
- msg = "OS-FLV-EXT-DATA extension not enabled."
- raise cls.skipException(msg)
-
- @classmethod
- def setup_clients(cls):
- super(FlavorsAdminNegativeTestJSON, cls).setup_clients()
- cls.client = cls.os_adm.flavors_client
- cls.user_client = cls.os.flavors_client
-
- @classmethod
- def resource_setup(cls):
- super(FlavorsAdminNegativeTestJSON, cls).resource_setup()
- cls.flavor_name_prefix = 'test_flavor_'
- cls.ram = 512
- cls.vcpus = 1
- cls.disk = 10
- cls.ephemeral = 10
- cls.swap = 1024
- cls.rxtx = 2
-
- @test.attr(type=['negative', 'gate'])
- @test.idempotent_id('404451c0-c1ae-4448-8d50-d74f26f93ec8')
- def test_get_flavor_details_for_deleted_flavor(self):
- # Delete a flavor and ensure it is not listed
- # Create a test flavor
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-
- # no need to specify flavor_id, we can get the flavor_id from a
- # response of create_flavor() call.
- flavor = self.client.create_flavor(flavor_name,
- self.ram,
- self.vcpus, self.disk,
- None,
- ephemeral=self.ephemeral,
- swap=self.swap,
- rxtx=self.rxtx)
- # Delete the flavor
- new_flavor_id = flavor['id']
- self.client.delete_flavor(new_flavor_id)
-
- # Deleted flavors can be seen via detailed GET
- flavor = self.client.get_flavor_details(new_flavor_id)
- self.assertEqual(flavor['name'], flavor_name)
-
- # Deleted flavors should not show up in a list however
- flavors = self.client.list_flavors_with_detail()
- flag = True
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- flag = False
- self.assertTrue(flag)
-
- @test.attr(type=['negative', 'gate'])
- @test.idempotent_id('6f56e7b7-7500-4d0c-9913-880ca1efed87')
- def test_create_flavor_as_user(self):
- # only admin user can create a flavor
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = str(uuid.uuid4())
-
- self.assertRaises(lib_exc.Forbidden,
- self.user_client.create_flavor,
- flavor_name, self.ram, self.vcpus, self.disk,
- new_flavor_id, ephemeral=self.ephemeral,
- swap=self.swap, rxtx=self.rxtx)
-
- @test.attr(type=['negative', 'gate'])
- @test.idempotent_id('a9a6dc02-8c14-4e05-a1ca-3468d4214882')
- def test_delete_flavor_as_user(self):
- # only admin user can delete a flavor
- self.assertRaises(lib_exc.Forbidden,
- self.user_client.delete_flavor,
- self.flavor_ref_alt)
-
-
-@test.SimpleNegativeAutoTest
-class FlavorCreateNegativeTestJSON(base.BaseV2ComputeAdminTest,
- test.NegativeAutoTest):
- _service = CONF.compute.catalog_type
- _schema = flavors.flavor_create
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index a694fb5..5c10f30 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -185,7 +185,7 @@
def test_list_servers_detailed_filter_by_image(self):
# Filter the detailed list of servers by image
params = {'image': self.image_ref}
- resp, body = self.client.list_servers_with_detail(params)
+ body = self.client.list_servers_with_detail(params)
servers = body['servers']
self.assertIn(self.s1['id'], map(lambda x: x['id'], servers))
diff --git a/tempest/api/identity/admin/v3/test_default_project_id.py b/tempest/api/identity/admin/v3/test_default_project_id.py
index f1cc530..9841cc8 100644
--- a/tempest/api/identity/admin/v3/test_default_project_id.py
+++ b/tempest/api/identity/admin/v3/test_default_project_id.py
@@ -66,7 +66,7 @@
"doesn't have domain id " + dom_id)
# get roles and find the admin role
- admin_role = self.get_role_by_name("admin")
+ admin_role = self.get_role_by_name(CONF.identity.admin_role)
admin_role_id = admin_role['id']
# grant the admin role to the user on his project
@@ -76,7 +76,7 @@
# create a new client with user's credentials (NOTE: unscoped token!)
creds = auth.KeystoneV3Credentials(username=user_name,
password=user_name,
- domain_name=dom_name)
+ user_domain_name=dom_name)
auth_provider = manager.get_auth_provider(creds)
creds = auth_provider.fill_credentials()
admin_client = clients.Manager(credentials=creds)
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index 0611393..b5b1d7b 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -144,11 +144,11 @@
self.client.add_group_user(self.group_body['id'], self.user_body['id'])
self.addCleanup(self.client.delete_group_user,
self.group_body['id'], self.user_body['id'])
- body = self.token.auth(user=self.user_body['id'],
+ body = self.token.auth(user_id=self.user_body['id'],
password=self.u_password,
- user_domain=self.domain['name'],
- project=self.project['name'],
- project_domain=self.domain['name'])
+ user_domain_name=self.domain['name'],
+ project_name=self.project['name'],
+ project_domain_name=self.domain['name'])
roles = body['token']['roles']
self.assertEqual(len(roles), 1)
self.assertEqual(roles[0]['id'], self.role['id'])
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 5cc498f..7358ce9 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -36,7 +36,8 @@
email=u_email)
self.addCleanup(self.client.delete_user, user['id'])
# Perform Authentication
- resp = self.token.auth(user['id'], u_password).response
+ resp = self.token.auth(user_id=user['id'],
+ password=u_password).response
subject_token = resp['x-subject-token']
# Perform GET Token
token_details = self.client.get_token(subject_token)
@@ -87,7 +88,7 @@
role['id'])
# Get an unscoped token.
- token_auth = self.token.auth(user=user['id'],
+ token_auth = self.token.auth(user_id=user['id'],
password=user_password)
token_id = token_auth.response['x-subject-token']
@@ -110,8 +111,8 @@
# Use the unscoped token to get a scoped token.
token_auth = self.token.auth(token=token_id,
- project=project1_name,
- project_domain='Default')
+ project_name=project1_name,
+ project_domain_name='Default')
token1_id = token_auth.response['x-subject-token']
self.assertEqual(orig_expires_at, token_auth['token']['expires_at'],
@@ -140,8 +141,8 @@
# Now get another scoped token using the unscoped token.
token_auth = self.token.auth(token=token_id,
- project=project2_name,
- project_domain='Default')
+ project_name=project2_name,
+ project_domain_name='Default')
self.assertEqual(project2['id'],
token_auth['token']['project']['id'])
diff --git a/tempest/api/identity/admin/v3/test_users.py b/tempest/api/identity/admin/v3/test_users.py
index f29e72a..9d9f61c 100644
--- a/tempest/api/identity/admin/v3/test_users.py
+++ b/tempest/api/identity/admin/v3/test_users.py
@@ -79,7 +79,8 @@
new_password = data_utils.rand_name('pass1')
self.client.update_user_password(user['id'], new_password,
original_password)
- resp = self.token.auth(user['id'], new_password).response
+ resp = self.token.auth(user_id=user['id'],
+ password=new_password).response
subject_token = resp['x-subject-token']
# Perform GET Token to verify and confirm password is updated
token_details = self.client.get_token(subject_token)
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index 7a64de3..86d90f6 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -95,7 +95,8 @@
self.assertEqual(quota_usage['volumes']['in_use'] + 1,
new_quota_usage['volumes']['in_use'])
- self.assertEqual(quota_usage['gigabytes']['in_use'] + 1,
+ self.assertEqual(quota_usage['gigabytes']['in_use'] +
+ volume["size"],
new_quota_usage['gigabytes']['in_use'])
@test.attr(type='gate')
diff --git a/tempest/api/volume/admin/test_volume_quotas_negative.py b/tempest/api/volume/admin/test_volume_quotas_negative.py
index 98b7143..d7287f0 100644
--- a/tempest/api/volume/admin/test_volume_quotas_negative.py
+++ b/tempest/api/volume/admin/test_volume_quotas_negative.py
@@ -31,7 +31,9 @@
@classmethod
def resource_setup(cls):
super(BaseVolumeQuotasNegativeV2TestJSON, cls).resource_setup()
- cls.shared_quota_set = {'gigabytes': 3, 'volumes': 1, 'snapshots': 1}
+ cls.default_volume_size = cls.volumes_client.default_volume_size
+ cls.shared_quota_set = {'gigabytes': 3 * cls.default_volume_size,
+ 'volumes': 1, 'snapshots': 1}
# NOTE(gfidente): no need to restore original quota set
# after the tests as they only work with tenant isolation.
@@ -67,14 +69,16 @@
self.demo_tenant_id,
**self.shared_quota_set)
- new_quota_set = {'gigabytes': 2, 'volumes': 2, 'snapshots': 1}
+ new_quota_set = {'gigabytes': 2 * self.default_volume_size,
+ 'volumes': 2, 'snapshots': 1}
self.quotas_client.update_quota_set(
self.demo_tenant_id,
**new_quota_set)
self.assertRaises(lib_exc.OverLimit,
self.volumes_client.create_volume)
- new_quota_set = {'gigabytes': 2, 'volumes': 1, 'snapshots': 2}
+ new_quota_set = {'gigabytes': 2 * self.default_volume_size,
+ 'volumes': 1, 'snapshots': 2}
self.quotas_client.update_quota_set(
self.demo_tenant_id,
**self.shared_quota_set)
diff --git a/tempest/api_schema/response/compute/hypervisors.py b/tempest/api_schema/response/compute/hypervisors.py
index 273b579..fc3b828 100644
--- a/tempest/api_schema/response/compute/hypervisors.py
+++ b/tempest/api_schema/response/compute/hypervisors.py
@@ -56,6 +56,8 @@
'items': {
'type': 'object',
'properties': {
+ 'status': {'type': 'string'},
+ 'state': {'type': 'string'},
'cpu_info': {'type': 'string'},
'current_workload': {'type': 'integer'},
'disk_available_least': {'type': ['integer', 'null']},
@@ -85,6 +87,9 @@
'vcpus': {'type': 'integer'},
'vcpus_used': {'type': 'integer'}
},
+            # NOTE: 'status' and 'state' appear in the response only
+            # when the os-hypervisor-status extension is loaded, so they
+            # are not defined as 'required'.
'required': ['cpu_info', 'current_workload',
'disk_available_least', 'host_ip',
'free_disk_gb', 'free_ram_mb',
@@ -108,6 +113,8 @@
'hypervisor': {
'type': 'object',
'properties': {
+ 'status': {'type': 'string'},
+ 'state': {'type': 'string'},
'cpu_info': {'type': 'string'},
'current_workload': {'type': 'integer'},
'disk_available_least': {'type': ['integer', 'null']},
@@ -137,6 +144,9 @@
'vcpus': {'type': 'integer'},
'vcpus_used': {'type': 'integer'}
},
+            # NOTE: 'status' and 'state' appear in the response only
+            # when the os-hypervisor-status extension is loaded, so they
+            # are not defined as 'required'.
'required': ['cpu_info', 'current_workload',
'disk_available_least', 'host_ip',
'free_disk_gb', 'free_ram_mb',
@@ -184,9 +194,14 @@
'hypervisor': {
'type': 'object',
'properties': {
+ 'status': {'type': 'string'},
+ 'state': {'type': 'string'},
'id': {'type': ['integer', 'string']},
'hypervisor_hostname': {'type': 'string'},
},
+            # NOTE: 'status' and 'state' appear in the response only
+            # when the os-hypervisor-status extension is loaded, so they
+            # are not defined as 'required'.
'required': ['id', 'hypervisor_hostname']
}
},
diff --git a/tempest/api_schema/response/compute/servers.py b/tempest/api_schema/response/compute/servers.py
index f9c957b..3950173 100644
--- a/tempest/api_schema/response/compute/servers.py
+++ b/tempest/api_schema/response/compute/servers.py
@@ -71,6 +71,18 @@
},
'required': ['id', 'links']
},
+ 'fault': {
+ 'type': 'object',
+ 'properties': {
+ 'code': {'type': 'integer'},
+ 'created': {'type': 'string'},
+ 'message': {'type': 'string'},
+ 'details': {'type': 'string'},
+ },
+            # NOTE(gmann): 'details' is not always present in 'fault',
+            # so it is not defined as 'required'.
+ 'required': ['code', 'created', 'message']
+ },
'user_id': {'type': 'string'},
'tenant_id': {'type': 'string'},
'created': {'type': 'string'},
@@ -83,7 +95,9 @@
# NOTE(GMann): 'progress' attribute is present in the response
# only when server's status is one of the progress statuses
# ("ACTIVE","BUILD", "REBUILD", "RESIZE","VERIFY_RESIZE")
- # So it is not defined as 'required'.
+        # The 'fault' attribute is present in the response
+        # only when the server's status is "ERROR" or "DELETED".
+        # So they are not defined as 'required'.
'required': ['id', 'name', 'status', 'image', 'flavor',
'user_id', 'tenant_id', 'created', 'updated',
'metadata', 'links', 'addresses']
@@ -144,8 +158,11 @@
},
'required': ['id', 'links', 'name']
}
- }
+ },
+ 'servers_links': parameter_types.links
},
+    # NOTE(gmann): the servers_links attribute is not always present,
+    # so it is not defined as 'required'.
'required': ['servers']
}
}
diff --git a/tempest/api_schema/response/compute/v2/images.py b/tempest/api_schema/response/compute/v2/images.py
index 2317e6b..21dc9ab 100644
--- a/tempest/api_schema/response/compute/v2/images.py
+++ b/tempest/api_schema/response/compute/v2/images.py
@@ -40,11 +40,12 @@
},
'required': ['id', 'links']
},
- 'OS-EXT-IMG-SIZE:size': {'type': 'integer'}
+ 'OS-EXT-IMG-SIZE:size': {'type': 'integer'},
+ 'OS-DCF:diskConfig': {'type': 'string'}
},
# 'server' attributes only comes in response body if image is
- # associated with any server. 'OS-EXT-IMG-SIZE:size' is API
- # extension, So those are not defined as 'required'.
+ # associated with any server. 'OS-EXT-IMG-SIZE:size' & 'OS-DCF:diskConfig'
+    # are API extensions, so those are not defined as 'required'.
'required': ['id', 'status', 'updated', 'links', 'name',
'created', 'minDisk', 'minRam', 'progress',
'metadata']
@@ -77,8 +78,11 @@
},
'required': ['id', 'links', 'name']
}
- }
+ },
+ 'images_links': parameter_types.links
},
+    # NOTE(gmann): the images_links attribute is not always present,
+    # so it is not defined as 'required'.
'required': ['images']
}
}
@@ -131,8 +135,11 @@
'images': {
'type': 'array',
'items': common_image_schema
- }
+ },
+ 'images_links': parameter_types.links
},
+    # NOTE(gmann): the images_links attribute is not always present,
+    # so it is not defined as 'required'.
'required': ['images']
}
}
diff --git a/tempest/api_schema/response/compute/v2/servers.py b/tempest/api_schema/response/compute/v2/servers.py
index 83dbb4f..ebee697 100644
--- a/tempest/api_schema/response/compute/v2/servers.py
+++ b/tempest/api_schema/response/compute/v2/servers.py
@@ -296,15 +296,34 @@
list_servers_detail = copy.deepcopy(servers.base_list_servers_detail)
list_servers_detail['response_body']['properties']['servers']['items'][
'properties'].update({
+ 'key_name': {'type': ['string', 'null']},
'hostId': {'type': 'string'},
'OS-DCF:diskConfig': {'type': 'string'},
'security_groups': {'type': 'array'},
+
+ # NOTE: Non-admin users also can see "OS-SRV-USG" and "OS-EXT-AZ"
+ # attributes.
+ 'OS-SRV-USG:launched_at': {'type': ['string', 'null']},
+ 'OS-SRV-USG:terminated_at': {'type': ['string', 'null']},
+ 'OS-EXT-AZ:availability_zone': {'type': 'string'},
+
+ # NOTE: Admin users only can see "OS-EXT-STS" and "OS-EXT-SRV-ATTR"
+ # attributes.
+ 'OS-EXT-STS:task_state': {'type': ['string', 'null']},
+ 'OS-EXT-STS:vm_state': {'type': 'string'},
+ 'OS-EXT-STS:power_state': {'type': 'integer'},
+ 'OS-EXT-SRV-ATTR:host': {'type': ['string', 'null']},
+ 'OS-EXT-SRV-ATTR:instance_name': {'type': 'string'},
+ 'OS-EXT-SRV-ATTR:hypervisor_hostname': {'type': ['string', 'null']},
+ 'os-extended-volumes:volumes_attached': {'type': 'array'},
'accessIPv4': parameter_types.access_ip_v4,
- 'accessIPv6': parameter_types.access_ip_v6
+ 'accessIPv6': parameter_types.access_ip_v6,
+ 'config_drive': {'type': 'string'}
})
-# NOTE(GMann): OS-DCF:diskConfig, security_groups and accessIPv4/v6
-# are API extensions, and some environments return a response
-# without these attributes. So they are not 'required'.
+# NOTE(GMann): OS-SRV-USG, OS-EXT-AZ, OS-EXT-STS, OS-EXT-SRV-ATTR,
+# os-extended-volumes, OS-DCF and accessIPv4/v6 are API
+# extensions, and some environments return a response without
+# these attributes. So they are not 'required'.
list_servers_detail['response_body']['properties']['servers']['items'][
'required'].append('hostId')
# NOTE(gmann): Update OS-EXT-IPS:type and OS-EXT-IPS-MAC:mac_addr
@@ -316,12 +335,14 @@
'items']['properties'].update({
'OS-EXT-IPS:type': {'type': 'string'},
'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
-
+# Defining the 'servers_links' attribute for the V2 server schema
+list_servers_detail['response_body'][
+ 'properties'].update({'servers_links': parameter_types.links})
+# NOTE(gmann): the servers_links attribute is not always present,
+# so it is not defined as 'required'.
rebuild_server = copy.deepcopy(update_server)
rebuild_server['status_code'] = [202]
-del rebuild_server['response_body']['properties']['server'][
- 'properties']['OS-DCF:diskConfig']
rebuild_server_with_admin_pass = copy.deepcopy(rebuild_server)
rebuild_server_with_admin_pass['response_body']['properties']['server'][
diff --git a/tempest/auth.py b/tempest/auth.py
index d7f9adb..113ad69 100644
--- a/tempest/auth.py
+++ b/tempest/auth.py
@@ -20,9 +20,9 @@
import re
import urlparse
+from oslo_log import log as logging
import six
-from oslo_log import log as logging
from tempest.services.identity.v2.json import token_client as json_v2id
from tempest.services.identity.v3.json import token_client as json_v3id
@@ -328,11 +328,17 @@
def _auth_params(self):
return dict(
- user=self.credentials.username,
+ user_id=self.credentials.user_id,
+ username=self.credentials.username,
password=self.credentials.password,
- project=self.credentials.tenant_name,
- user_domain=self.credentials.user_domain_name,
- project_domain=self.credentials.project_domain_name,
+ project_id=self.credentials.project_id,
+ project_name=self.credentials.project_name,
+ user_domain_id=self.credentials.user_domain_id,
+ user_domain_name=self.credentials.user_domain_name,
+ project_domain_id=self.credentials.project_domain_id,
+ project_domain_name=self.credentials.project_domain_name,
+ domain_id=self.credentials.domain_id,
+ domain_name=self.credentials.domain_name,
auth_data=True)
def _fill_credentials(self, auth_data_body):
@@ -439,7 +445,9 @@
return identity_version in IDENTITY_VERSION
-def get_credentials(auth_url, fill_in=True, identity_version='v2', **kwargs):
+def get_credentials(auth_url, fill_in=True, identity_version='v2',
+ disable_ssl_certificate_validation=None, ca_certs=None,
+ trace_requests=None, **kwargs):
"""
Builds a credentials object based on the configured auth_version
@@ -451,6 +459,11 @@
by invoking ``is_valid()``
:param identity_version (string): identity API version is used to
select the matching auth provider and credentials class
+    :param disable_ssl_certificate_validation: whether to disable SSL
+        certificate validation in SSL API requests to the auth system
+ :param ca_certs: CA certificate bundle for validation of certificates
+ in SSL API requests to the auth system
+ :param trace_requests: trace in log API requests to the auth system
:param kwargs (dict): Dict of credential key/value pairs
Examples:
@@ -471,7 +484,10 @@
creds = credential_class(**kwargs)
# Fill in the credentials fields that were not specified
if fill_in:
- auth_provider = auth_provider_class(creds, auth_url)
+ dsvm = disable_ssl_certificate_validation
+ auth_provider = auth_provider_class(
+ creds, auth_url, disable_ssl_certificate_validation=dsvm,
+ ca_certs=ca_certs, trace_requests=trace_requests)
creds = auth_provider.fill_credentials()
return creds
@@ -569,7 +585,7 @@
Credentials suitable for the Keystone Identity V3 API
"""
- ATTRIBUTES = ['domain_name', 'password', 'tenant_name', 'username',
+ ATTRIBUTES = ['domain_id', 'domain_name', 'password', 'username',
'project_domain_id', 'project_domain_name', 'project_id',
'project_name', 'tenant_id', 'tenant_name', 'user_domain_id',
'user_domain_name', 'user_id']
@@ -615,6 +631,8 @@
- None
- Project id (optional domain)
- Project name and its domain id/name
+ - Domain id
+ - Domain name
"""
valid_user_domain = any(
[self.user_domain_id is not None,
@@ -625,11 +643,16 @@
valid_user = any(
[self.user_id is not None,
self.username is not None and valid_user_domain])
- valid_project = any(
+ valid_project_scope = any(
[self.project_name is None and self.project_id is None,
self.project_id is not None,
self.project_name is not None and valid_project_domain])
- return all([self.password is not None, valid_user, valid_project])
+ valid_domain_scope = any(
+ [self.domain_id is None and self.domain_name is None,
+ self.domain_id or self.domain_name])
+ return all([self.password is not None,
+ valid_user,
+ valid_project_scope and valid_domain_scope])
IDENTITY_VERSION = {'v2': (KeystoneV2Credentials, KeystoneV2AuthProvider),
diff --git a/tempest/cli/simple_read_only/network/__init__.py b/tempest/cli/simple_read_only/network/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/cli/simple_read_only/network/__init__.py
+++ /dev/null
diff --git a/tempest/cli/simple_read_only/network/test_neutron.py b/tempest/cli/simple_read_only/network/test_neutron.py
deleted file mode 100644
index e8b3554..0000000
--- a/tempest/cli/simple_read_only/network/test_neutron.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import re
-
-from oslo_log import log as logging
-from tempest_lib import exceptions
-
-from tempest import cli
-from tempest import config
-from tempest import test
-
-CONF = config.CONF
-
-LOG = logging.getLogger(__name__)
-
-
-class SimpleReadOnlyNeutronClientTest(cli.ClientTestBase):
- """Basic, read-only tests for Neutron CLI client.
-
- Checks return values and output of read-only commands.
- These tests do not presume any content, nor do they create
- their own. They only verify the structure of output if present.
- """
-
- @classmethod
- def resource_setup(cls):
- if (not CONF.service_available.neutron):
- msg = "Skipping all Neutron cli tests because it is not available"
- raise cls.skipException(msg)
- super(SimpleReadOnlyNeutronClientTest, cls).resource_setup()
-
- def neutron(self, *args, **kwargs):
- return self.clients.neutron(*args,
- endpoint_type=CONF.network.endpoint_type,
- **kwargs)
-
- @test.attr(type='smoke')
- @test.idempotent_id('84dd7190-2b98-4709-8e2c-3c1d25b9e7d2')
- def test_neutron_fake_action(self):
- self.assertRaises(exceptions.CommandFailed,
- self.neutron,
- 'this-does-not-exist')
-
- @test.attr(type='smoke')
- @test.idempotent_id('c598c337-313a-45ac-bf27-d6b4124a9e5b')
- def test_neutron_net_list(self):
- net_list = self.parser.listing(self.neutron('net-list'))
- self.assertTableStruct(net_list, ['id', 'name', 'subnets'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('3e172b04-2e3b-4fcf-922d-99d5c803779f')
- def test_neutron_ext_list(self):
- ext = self.parser.listing(self.neutron('ext-list'))
- self.assertTableStruct(ext, ['alias', 'name'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('2e0de814-52d6-4f81-be17-fe327072fc23')
- @test.requires_ext(extension='dhcp_agent_scheduler', service='network')
- def test_neutron_dhcp_agent_list_hosting_net(self):
- self.neutron('dhcp-agent-list-hosting-net',
- params=CONF.compute.fixed_network_name)
-
- @test.attr(type='smoke')
- @test.idempotent_id('8524a24a-3895-40a5-8c9d-49d4459cdda4')
- @test.requires_ext(extension='agent', service='network')
- def test_neutron_agent_list(self):
- agents = self.parser.listing(self.neutron('agent-list'))
- field_names = ['id', 'agent_type', 'host', 'alive', 'admin_state_up']
- self.assertTableStruct(agents, field_names)
-
- @test.attr(type='smoke')
- @test.idempotent_id('97c3ef92-7303-45f1-80db-b6622f176782')
- @test.requires_ext(extension='router', service='network')
- def test_neutron_floatingip_list(self):
- self.neutron('floatingip-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('823e0fee-404c-49a7-8bf3-d2f0383cc649')
- @test.requires_ext(extension='metering', service='network')
- def test_neutron_meter_label_list(self):
- self.neutron('meter-label-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('7fb76098-01f6-417f-b9c7-e630ba3f394b')
- @test.requires_ext(extension='metering', service='network')
- def test_neutron_meter_label_rule_list(self):
- self.neutron('meter-label-rule-list')
-
- @test.requires_ext(extension='lbaas_agent_scheduler', service='network')
- def _test_neutron_lbaas_command(self, command):
- try:
- self.neutron(command)
- except exceptions.CommandFailed as e:
- if '404 Not Found' not in e.stderr:
- self.fail('%s: Unexpected failure.' % command)
-
- @test.attr(type='smoke')
- @test.idempotent_id('396d1d87-fd0c-4716-9ff0-f1baa54c6c61')
- def test_neutron_lb_healthmonitor_list(self):
- self._test_neutron_lbaas_command('lb-healthmonitor-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('f41fa54d-5cd8-4f2c-bb4e-13abc72dccb6')
- def test_neutron_lb_member_list(self):
- self._test_neutron_lbaas_command('lb-member-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('3ec04885-7573-4cce-b086-5722c0b00d85')
- def test_neutron_lb_pool_list(self):
- self._test_neutron_lbaas_command('lb-pool-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('1ab530e0-ec87-498f-baf2-85f6635a2ad9')
- def test_neutron_lb_vip_list(self):
- self._test_neutron_lbaas_command('lb-vip-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('e92f7362-4009-4b37-afee-f469105b24e7')
- @test.requires_ext(extension='external-net', service='network')
- def test_neutron_net_external_list(self):
- net_ext_list = self.parser.listing(self.neutron('net-external-list'))
- self.assertTableStruct(net_ext_list, ['id', 'name', 'subnets'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('ed840980-7c84-4b6e-b280-f13c5848a0e9')
- def test_neutron_port_list(self):
- port_list = self.parser.listing(self.neutron('port-list'))
- self.assertTableStruct(port_list, ['id', 'name', 'mac_address',
- 'fixed_ips'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('dded0dfa-f2ac-4c1f-bc90-69fd06dd7132')
- @test.requires_ext(extension='quotas', service='network')
- def test_neutron_quota_list(self):
- self.neutron('quota-list')
-
- @test.attr(type='smoke')
- @test.idempotent_id('927fca1e-4397-42a2-ba47-d738299466de')
- @test.requires_ext(extension='router', service='network')
- def test_neutron_router_list(self):
- router_list = self.parser.listing(self.neutron('router-list'))
- self.assertTableStruct(router_list, ['id', 'name',
- 'external_gateway_info'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('e2e3d2d5-1aee-499d-84d9-37382dcf26ff')
- @test.requires_ext(extension='security-group', service='network')
- def test_neutron_security_group_list(self):
- security_grp = self.parser.listing(self.neutron('security-group-list'))
- self.assertTableStruct(security_grp, ['id', 'name', 'description'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('288602c2-8b59-44cd-8c5d-1ec916a114d3')
- @test.requires_ext(extension='security-group', service='network')
- def test_neutron_security_group_rule_list(self):
- security_grp = self.parser.listing(self.neutron
- ('security-group-rule-list'))
- self.assertTableStruct(security_grp, ['id', 'security_group',
- 'direction', 'protocol',
- 'remote_ip_prefix',
- 'remote_group'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('2a874a08-b9c9-4f0f-82ef-8cadb15bbd5d')
- def test_neutron_subnet_list(self):
- subnet_list = self.parser.listing(self.neutron('subnet-list'))
- self.assertTableStruct(subnet_list, ['id', 'name', 'cidr',
- 'allocation_pools'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('048e1ec3-cf6c-4066-b262-2028e03ce825')
- @test.requires_ext(extension='vpnaas', service='network')
- def test_neutron_vpn_ikepolicy_list(self):
- ikepolicy = self.parser.listing(self.neutron('vpn-ikepolicy-list'))
- self.assertTableStruct(ikepolicy, ['id', 'name',
- 'auth_algorithm',
- 'encryption_algorithm',
- 'ike_version', 'pfs'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('bb8902b7-b2e6-49fd-b9bd-a26dd99732df')
- @test.requires_ext(extension='vpnaas', service='network')
- def test_neutron_vpn_ipsecpolicy_list(self):
- ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
- self.assertTableStruct(ipsecpolicy, ['id', 'name',
- 'auth_algorithm',
- 'encryption_algorithm',
- 'pfs'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('c0f33f9a-0ba9-4177-bcd5-dce34b81d523')
- @test.requires_ext(extension='vpnaas', service='network')
- def test_neutron_vpn_service_list(self):
- vpn_list = self.parser.listing(self.neutron('vpn-service-list'))
- self.assertTableStruct(vpn_list, ['id', 'name',
- 'router_id', 'status'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('bb142f8a-e568-405f-b1b7-4cb458de7971')
- @test.requires_ext(extension='vpnaas', service='network')
- def test_neutron_ipsec_site_connection_list(self):
- ipsec_site = self.parser.listing(self.neutron
- ('ipsec-site-connection-list'))
- self.assertTableStruct(ipsec_site, ['id', 'name',
- 'peer_address',
- 'peer_cidrs',
- 'route_mode',
- 'auth_mode', 'status'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('89baff14-8cb7-4ad8-9c24-b0278711170b')
- @test.requires_ext(extension='fwaas', service='network')
- def test_neutron_firewall_list(self):
- firewall_list = self.parser.listing(self.neutron
- ('firewall-list'))
- self.assertTableStruct(firewall_list, ['id', 'name',
- 'firewall_policy_id'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('996e418a-2a51-4018-9602-478ca8053e61')
- @test.requires_ext(extension='fwaas', service='network')
- def test_neutron_firewall_policy_list(self):
- firewall_policy = self.parser.listing(self.neutron
- ('firewall-policy-list'))
- self.assertTableStruct(firewall_policy, ['id', 'name',
- 'firewall_rules'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('d4638dd6-98d4-4400-a920-26572de1a6fc')
- @test.requires_ext(extension='fwaas', service='network')
- def test_neutron_firewall_rule_list(self):
- firewall_rule = self.parser.listing(self.neutron
- ('firewall-rule-list'))
- self.assertTableStruct(firewall_rule, ['id', 'name',
- 'firewall_policy_id',
- 'summary', 'enabled'])
-
- @test.attr(type='smoke')
- @test.idempotent_id('1c4551e1-e3f3-4af2-8a40-c3f551e4a536')
- def test_neutron_help(self):
- help_text = self.neutron('help')
- lines = help_text.split('\n')
- self.assertFirstLineStartsWith(lines, 'usage: neutron')
-
- commands = []
- cmds_start = lines.index('Commands for API v2.0:')
- command_pattern = re.compile('^ {2}([a-z0-9\-\_]+)')
- for line in lines[cmds_start:]:
- match = command_pattern.match(line)
- if match:
- commands.append(match.group(1))
- commands = set(commands)
- wanted_commands = set(('net-create', 'subnet-list', 'port-delete',
- 'router-show', 'agent-update', 'help'))
- self.assertFalse(wanted_commands - commands)
-
- # Optional arguments:
-
- @test.attr(type='smoke')
- @test.idempotent_id('381e6fe3-cddc-47c9-a773-70ddb2f79a91')
- def test_neutron_version(self):
- self.neutron('', flags='--version')
-
- @test.attr(type='smoke')
- @test.idempotent_id('bcad0e07-da8c-4c7c-8ab6-499e5d7ab8cb')
- def test_neutron_debug_net_list(self):
- self.neutron('net-list', flags='--debug')
-
- @test.attr(type='smoke')
- @test.idempotent_id('3e42d78e-65e5-4e8f-8c29-ca7be8feebb4')
- def test_neutron_quiet_net_list(self):
- self.neutron('net-list', flags='--quiet')
diff --git a/tempest/clients.py b/tempest/clients.py
index 9bd5738..c75bef5 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -229,13 +229,14 @@
self.negative_client = negative_rest_client.NegativeRestClient(
self.auth_provider, service)
- # TODO(andreaf) EC2 client still do their auth, v2 only
- ec2_client_args = (self.credentials.username,
- self.credentials.password,
- CONF.identity.uri,
- self.credentials.tenant_name)
- self.ec2api_client = botoclients.APIClientEC2(*ec2_client_args)
- self.s3_client = botoclients.ObjectClientS3(*ec2_client_args)
+ # Generating EC2 credentials in tempest is only supported
+ # with identity v2
+ if CONF.identity_feature_enabled.api_v2 and \
+ CONF.identity.auth_version == 'v2':
+            # EC2 and S3 clients, if used, will check configured AWS
+            # credentials and generate new ones if needed
+ self.ec2api_client = botoclients.APIClientEC2(self.identity_client)
+ self.s3_client = botoclients.ObjectClientS3(self.identity_client)
def _set_compute_clients(self):
params = {
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index 0735eeb..8f238a5 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -118,6 +118,7 @@
import tempest.auth
from tempest import config
from tempest.services.compute.json import flavors_client
+from tempest.services.compute.json import floating_ips_client
from tempest.services.compute.json import security_groups_client
from tempest.services.compute.json import servers_client
from tempest.services.identity.v2.json import identity_client
@@ -194,6 +195,8 @@
**compute_params)
self.flavors = flavors_client.FlavorsClientJSON(_auth,
**compute_params)
+ self.floating_ips = floating_ips_client.FloatingIPsClientJSON(
+ _auth, **compute_params)
self.secgroups = security_groups_client.SecurityGroupsClientJSON(
_auth, **compute_params)
self.objects = object_client.ObjectClient(_auth,
@@ -451,15 +454,31 @@
# validate neutron is enabled and ironic disabled:
if (CONF.service_available.neutron and
not CONF.baremetal.driver_enabled):
+ _floating_is_alive = False
for network_name, body in found['addresses'].items():
for addr in body:
ip = addr['addr']
- if addr.get('OS-EXT-IPS:type', 'fixed') == 'fixed':
+                    # If use_floatingip_for_ssh is set to True, it's
+                    # assumed you want to use the floating IP to reach the
+                    # server; fall back to the fixed IP, then other types.
+                    # This is useful in a multi-node environment.
+ if CONF.compute.use_floatingip_for_ssh:
+ if addr.get('OS-EXT-IPS:type',
+ 'floating') == 'floating':
+ self._ping_ip(ip, 60)
+ _floating_is_alive = True
+ elif addr.get('OS-EXT-IPS:type', 'fixed') == 'fixed':
namespace = _get_router_namespace(client,
network_name)
self._ping_ip(ip, 60, namespace)
else:
self._ping_ip(ip, 60)
+            # If use_floatingip_for_ssh is set to True, validate that a
+            # floating IP was found and that the ping worked.
+ if CONF.compute.use_floatingip_for_ssh:
+ self.assertTrue(_floating_is_alive,
+ "Server %s has no floating IP." %
+ server['name'])
else:
addr = found['addresses']['private'][0]['addr']
self._ping_ip(addr, 60)
@@ -838,6 +857,10 @@
# create to security group(s) after server spawning
for secgroup in server['secgroups']:
client.servers.add_security_group(server_id, secgroup)
+ if CONF.compute.use_floatingip_for_ssh:
+ floating_ip = client.floating_ips.create_floating_ip()
+ client.floating_ips.associate_floating_ip_to_server(
+ floating_ip['ip'], server_id)
def destroy_servers(servers):
@@ -847,13 +870,32 @@
for server in servers:
client = client_for_user(server['owner'])
- response = _get_server_by_name(client, server['name'])
- if not response:
+ res = _get_server_by_name(client, server['name'])
+ if not res:
LOG.info("Server '%s' does not exist" % server['name'])
continue
+ res = client.servers.get_server(res['id'])
- client.servers.delete_server(response['id'])
- client.servers.wait_for_server_termination(response['id'],
+        # We iterate over all interfaces until we find a floating IP
+        # and stop looping after dropping it.
+ def _find_first_floating():
+ if (CONF.service_available.neutron and
+ not CONF.baremetal.driver_enabled and
+ CONF.compute.use_floatingip_for_ssh):
+                for body in res['addresses'].values():
+ for addr in body:
+ ip = addr['addr']
+ if addr.get('OS-EXT-IPS:type',
+ 'floating') == 'floating':
+ (client.floating_ips.
+ disassociate_floating_ip_from_server(
+ ip, res['id']))
+ client.floating_ips.delete_floating_ip(ip)
+ return
+
+ _find_first_floating()
+ client.servers.delete_server(res['id'])
+ client.servers.wait_for_server_termination(res['id'],
ignore_error=True)
diff --git a/tempest/cmd/run_stress.py b/tempest/cmd/run_stress.py
index 2bed355..06b338d 100755
--- a/tempest/cmd/run_stress.py
+++ b/tempest/cmd/run_stress.py
@@ -24,9 +24,9 @@
# unittest in python 2.6 does not contain loader, so uses unittest2
from unittest2 import loader
+from oslo_log import log as logging
from testtools import testsuite
-from oslo_log import log as logging
from tempest.stress import driver
LOG = logging.getLogger(__name__)
diff --git a/tempest/common/cred_provider.py b/tempest/common/cred_provider.py
index 6be1b6b..bff9a0a 100644
--- a/tempest/common/cred_provider.py
+++ b/tempest/common/cred_provider.py
@@ -31,6 +31,13 @@
'alt_user': ('identity', 'alt')
}
+DEFAULT_PARAMS = {
+ 'disable_ssl_certificate_validation':
+ CONF.identity.disable_ssl_certificate_validation,
+ 'ca_certs': CONF.identity.ca_certificates_file,
+ 'trace_requests': CONF.debug.trace_requests
+}
+
# Read credentials from configuration, builds a Credentials object
# based on the specified or configured version
@@ -46,7 +53,7 @@
if identity_version == 'v3':
conf_attributes.append('domain_name')
# Read the parts of credentials from config
- params = {}
+ params = DEFAULT_PARAMS.copy()
section, prefix = CREDENTIAL_TYPES[credential_type]
for attr in conf_attributes:
_section = getattr(CONF, section)
@@ -69,6 +76,7 @@
# Wrapper around auth.get_credentials to use the configured identity version
# is none is specified
def get_credentials(fill_in=True, identity_version=None, **kwargs):
+ params = dict(DEFAULT_PARAMS, **kwargs)
identity_version = identity_version or CONF.identity.auth_version
# In case of "v3" add the domain from config if not specified
if identity_version == 'v3':
@@ -82,7 +90,7 @@
return auth.get_credentials(auth_url,
fill_in=fill_in,
identity_version=identity_version,
- **kwargs)
+ **params)
@six.add_metaclass(abc.ABCMeta)
diff --git a/tempest/common/credentials.py b/tempest/common/credentials.py
index 3794b66..2f7fb73 100644
--- a/tempest/common/credentials.py
+++ b/tempest/common/credentials.py
@@ -58,7 +58,8 @@
is_admin = False
else:
try:
- cred_provider.get_configured_credentials('identity_admin')
+ cred_provider.get_configured_credentials('identity_admin',
+ fill_in=False)
except exceptions.InvalidConfiguration:
is_admin = False
return is_admin
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 16d9d12..af7b683 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -213,11 +213,15 @@
self.floating_ip_tuple = Floating_IP_tuple(
floating_ip, server)
- def _create_new_network(self):
+ def _create_new_network(self, create_gateway=False):
self.new_net = self._create_network(tenant_id=self.tenant_id)
- self.new_subnet = self._create_subnet(
- network=self.new_net,
- gateway_ip=None)
+ if create_gateway:
+ self.new_subnet = self._create_subnet(
+ network=self.new_net)
+ else:
+ self.new_subnet = self._create_subnet(
+ network=self.new_net,
+ gateway_ip=None)
def _hotplug_server(self):
old_floating_ip, server = self.floating_ip_tuple
@@ -277,7 +281,8 @@
ipatxt = ssh_client.get_ip_list()
return reg.findall(ipatxt)
- def _check_network_internal_connectivity(self, network):
+ def _check_network_internal_connectivity(self, network,
+ should_connect=True):
"""
via ssh check VM internal connectivity:
- ping internal gateway and DHCP port, implying in-tenant connectivity
@@ -291,7 +296,9 @@
network_id=network.id)
if p['device_owner'].startswith('network'))
- self._check_server_connectivity(floating_ip, internal_ips)
+ self._check_server_connectivity(floating_ip,
+ internal_ips,
+ should_connect)
def _check_network_external_connectivity(self):
"""
@@ -311,17 +318,22 @@
self._check_server_connectivity(self.floating_ip_tuple.floating_ip,
external_ips)
- def _check_server_connectivity(self, floating_ip, address_list):
+ def _check_server_connectivity(self, floating_ip, address_list,
+ should_connect=True):
ip_address = floating_ip.floating_ip_address
private_key = self._get_server_key(self.floating_ip_tuple.server)
ssh_source = self._ssh_to_server(ip_address, private_key)
for remote_ip in address_list:
+ if should_connect:
+                msg = ("Timed out waiting for "
+                       "%s to become reachable" % remote_ip)
+ else:
+ msg = "ip address %s is reachable" % remote_ip
try:
- self.assertTrue(self._check_remote_connectivity(ssh_source,
- remote_ip),
- "Timed out waiting for %s to become "
- "reachable" % remote_ip)
+ self.assertTrue(self._check_remote_connectivity
+ (ssh_source, remote_ip, should_connect),
+ msg)
except Exception:
LOG.exception("Unable to access {dest} via ssh to "
"floating-ip {src}".format(dest=remote_ip,
@@ -380,6 +392,52 @@
msg="after re-associate "
"floating ip")
+ @test.idempotent_id('1546850e-fbaa-42f5-8b5f-03d8a6a95f15')
+ @testtools.skipIf(CONF.baremetal.driver_enabled,
+ 'Baremetal relies on a shared physical network.')
+ @test.attr(type='smoke')
+ @test.services('compute', 'network')
+ def test_connectivity_between_vms_on_different_networks(self):
+ """
+ For a freshly-booted VM with an IP address ("port") on a given
+ network:
+
+ - the Tempest host can ping the IP address.
+
+ - the Tempest host can ssh into the VM via the IP address and
+ successfully execute the following:
+
+ - ping an external IP address, implying external connectivity.
+
+ - ping an external hostname, implying that dns is correctly
+ configured.
+
+ - ping an internal IP address, implying connectivity to another
+ VM on the same network.
+
+        - Create another network on the same tenant with a subnet, and
+          create a VM on the new network.
+
+        - Pinging the new VM from the previous VM fails since the new
+          network is not attached to the router yet.
+
+        - Attach the new network to the router; pinging the new VM from
+          the previous VM now succeeds.
+
+ """
+ self._setup_network_and_servers()
+ self.check_public_network_connectivity(should_connect=True)
+ self._check_network_internal_connectivity(network=self.network)
+ self._check_network_external_connectivity()
+ self._create_new_network(create_gateway=True)
+ name = data_utils.rand_name('server-smoke')
+ self._create_server(name, self.new_net)
+ self._check_network_internal_connectivity(network=self.new_net,
+ should_connect=False)
+ self.new_subnet.add_to_router(self.router.id)
+ self._check_network_internal_connectivity(network=self.new_net,
+ should_connect=True)
+
@test.idempotent_id('c5adff73-e961-41f1-b4a9-343614f18cfa')
@testtools.skipUnless(CONF.compute_feature_enabled.interface_attach,
'NIC hotplug not available')
diff --git a/tempest/services/baremetal/v1/json/baremetal_client.py b/tempest/services/baremetal/v1/json/baremetal_client.py
index 09b6cd1..0c319f6 100644
--- a/tempest/services/baremetal/v1/json/baremetal_client.py
+++ b/tempest/services/baremetal/v1/json/baremetal_client.py
@@ -131,7 +131,7 @@
return self._show_request('drivers', driver_name)
@base.handle_errors
- def create_node(self, chassis_id, **kwargs):
+ def create_node(self, chassis_id=None, **kwargs):
"""
Create a baremetal node with the specified parameters.
diff --git a/tempest/services/botoclients.py b/tempest/services/botoclients.py
index 1cbdb0c..6a1af6c 100644
--- a/tempest/services/botoclients.py
+++ b/tempest/services/botoclients.py
@@ -20,7 +20,6 @@
import urlparse
from tempest import config
-from tempest import exceptions
import boto
import boto.ec2
@@ -33,41 +32,15 @@
ALLOWED_METHODS = set()
- def __init__(self, username=None, password=None,
- auth_url=None, tenant_name=None,
- *args, **kwargs):
- # FIXME(andreaf) replace credentials and auth_url with auth_provider
+ def __init__(self, identity_client):
+ self.identity_client = identity_client
- insecure_ssl = CONF.identity.disable_ssl_certificate_validation
self.ca_cert = CONF.identity.ca_certificates_file
-
self.connection_timeout = str(CONF.boto.http_socket_timeout)
self.num_retries = str(CONF.boto.num_retries)
self.build_timeout = CONF.boto.build_timeout
- self.ks_cred = {"username": username,
- "password": password,
- "auth_url": auth_url,
- "tenant_name": tenant_name,
- "insecure": insecure_ssl,
- "cacert": self.ca_cert}
- def _keystone_aws_get(self):
- # FIXME(andreaf) Move EC2 credentials to AuthProvider
- import keystoneclient.v2_0.client
-
- keystone = keystoneclient.v2_0.client.Client(**self.ks_cred)
- ec2_cred_list = keystone.ec2.list(keystone.auth_user_id)
- ec2_cred = None
- for cred in ec2_cred_list:
- if cred.tenant_id == keystone.auth_tenant_id:
- ec2_cred = cred
- break
- else:
- ec2_cred = keystone.ec2.create(keystone.auth_user_id,
- keystone.auth_tenant_id)
- if not all((ec2_cred, ec2_cred.access, ec2_cred.secret)):
- raise lib_exc.NotFound("Unable to get access and secret keys")
- return ec2_cred
+ self.connection_data = {}
def _config_boto_timeout(self, timeout, retries):
try:
@@ -105,33 +78,47 @@
def get_connection(self):
self._config_boto_timeout(self.connection_timeout, self.num_retries)
self._config_boto_ca_certificates_file(self.ca_cert)
- if not all((self.connection_data["aws_access_key_id"],
- self.connection_data["aws_secret_access_key"])):
- if all([self.ks_cred.get('auth_url'),
- self.ks_cred.get('username'),
- self.ks_cred.get('tenant_name'),
- self.ks_cred.get('password')]):
- ec2_cred = self._keystone_aws_get()
- self.connection_data["aws_access_key_id"] = \
- ec2_cred.access
- self.connection_data["aws_secret_access_key"] = \
- ec2_cred.secret
- else:
- raise exceptions.InvalidConfiguration(
- "Unable to get access and secret keys")
+
+ ec2_client_args = {'aws_access_key_id': CONF.boto.aws_access,
+ 'aws_secret_access_key': CONF.boto.aws_secret}
+ if not all(ec2_client_args.values()):
+ ec2_client_args = self.get_aws_credentials(self.identity_client)
+
+ self.connection_data.update(ec2_client_args)
return self.connect_method(**self.connection_data)
+ def get_aws_credentials(self, identity_client):
+ """
+        Obtain existing AWS credentials, or create new ones
+ :param identity_client: identity client with embedded credentials
+ :return: EC2 credentials
+ """
+ ec2_cred_list = identity_client.list_user_ec2_credentials(
+ identity_client.user_id)
+ for cred in ec2_cred_list:
+ if cred['tenant_id'] == identity_client.tenant_id:
+ ec2_cred = cred
+ break
+ else:
+ ec2_cred = identity_client.create_user_ec2_credentials(
+ identity_client.user_id, identity_client.tenant_id)
+ if not all((ec2_cred, ec2_cred['access'], ec2_cred['secret'])):
+ raise lib_exc.NotFound("Unable to get access and secret keys")
+ else:
+ ec2_cred_aws = {}
+ ec2_cred_aws['aws_access_key_id'] = ec2_cred['access']
+ ec2_cred_aws['aws_secret_access_key'] = ec2_cred['secret']
+ return ec2_cred_aws
+
class APIClientEC2(BotoClientBase):
def connect_method(self, *args, **kwargs):
return boto.connect_ec2(*args, **kwargs)
- def __init__(self, *args, **kwargs):
- super(APIClientEC2, self).__init__(*args, **kwargs)
+ def __init__(self, identity_client):
+ super(APIClientEC2, self).__init__(identity_client)
insecure_ssl = CONF.identity.disable_ssl_certificate_validation
- aws_access = CONF.boto.aws_access
- aws_secret = CONF.boto.aws_secret
purl = urlparse.urlparse(CONF.boto.ec2_url)
region_name = CONF.compute.region
@@ -147,14 +134,12 @@
port = 443
else:
port = int(port)
- self.connection_data = {"aws_access_key_id": aws_access,
- "aws_secret_access_key": aws_secret,
- "is_secure": purl.scheme == "https",
- "validate_certs": not insecure_ssl,
- "region": region,
- "host": purl.hostname,
- "port": port,
- "path": purl.path}
+ self.connection_data.update({"is_secure": purl.scheme == "https",
+ "validate_certs": not insecure_ssl,
+ "region": region,
+ "host": purl.hostname,
+ "port": port,
+ "path": purl.path})
ALLOWED_METHODS = set(('create_key_pair', 'get_key_pair',
'delete_key_pair', 'import_key_pair',
@@ -207,11 +192,9 @@
def connect_method(self, *args, **kwargs):
return boto.connect_s3(*args, **kwargs)
- def __init__(self, *args, **kwargs):
- super(ObjectClientS3, self).__init__(*args, **kwargs)
+ def __init__(self, identity_client):
+ super(ObjectClientS3, self).__init__(identity_client)
insecure_ssl = CONF.identity.disable_ssl_certificate_validation
- aws_access = CONF.boto.aws_access
- aws_secret = CONF.boto.aws_secret
purl = urlparse.urlparse(CONF.boto.s3_url)
port = purl.port
if port is None:
@@ -221,14 +204,12 @@
port = 443
else:
port = int(port)
- self.connection_data = {"aws_access_key_id": aws_access,
- "aws_secret_access_key": aws_secret,
- "is_secure": purl.scheme == "https",
- "validate_certs": not insecure_ssl,
- "host": purl.hostname,
- "port": port,
- "calling_format": boto.s3.connection.
- OrdinaryCallingFormat()}
+ self.connection_data.update({"is_secure": purl.scheme == "https",
+ "validate_certs": not insecure_ssl,
+ "host": purl.hostname,
+ "port": port,
+ "calling_format": boto.s3.connection.
+ OrdinaryCallingFormat()})
ALLOWED_METHODS = set(('create_bucket', 'delete_bucket', 'generate_url',
'get_all_buckets', 'get_bucket', 'delete_key',
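
With this refactoring ``BotoClientBase`` no longer talks to Keystone directly: it
first tries the static ``CONF.boto.aws_access``/``aws_secret`` options and only
falls back to the identity client when those are unset. A minimal sketch of that
fallback, mirroring ``get_aws_credentials()`` above (the identity client wiring
and its ``user_id``/``tenant_id`` attributes are assumed)::

    def resolve_ec2_credentials(identity_client, aws_access=None,
                                aws_secret=None):
        # Prefer statically configured keys (CONF.boto.aws_access/aws_secret).
        if aws_access and aws_secret:
            return {'aws_access_key_id': aws_access,
                    'aws_secret_access_key': aws_secret}
        # Otherwise reuse an existing EC2 credential for this tenant, or
        # create a fresh one through the identity client.
        creds = identity_client.list_user_ec2_credentials(
            identity_client.user_id)
        for cred in creds:
            if cred['tenant_id'] == identity_client.tenant_id:
                break
        else:
            cred = identity_client.create_user_ec2_credentials(
                identity_client.user_id, identity_client.tenant_id)
        return {'aws_access_key_id': cred['access'],
                'aws_secret_access_key': cred['secret']}
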
diff --git a/tempest/services/identity/v2/json/identity_client.py b/tempest/services/identity/v2/json/identity_client.py
index 6c4a6b4..039f9bb 100644
--- a/tempest/services/identity/v2/json/identity_client.py
+++ b/tempest/services/identity/v2/json/identity_client.py
@@ -269,3 +269,15 @@
body = json.loads(body)
return service_client.ResponseBodyList(resp,
body['extensions']['values'])
+
+ def create_user_ec2_credentials(self, user_id, tenant_id):
+ post_body = json.dumps({'tenant_id': tenant_id})
+ resp, body = self.post('/users/%s/credentials/OS-EC2' % user_id,
+ post_body)
+ self.expected_success(200, resp.status)
+ return service_client.ResponseBody(resp, self._parse_resp(body))
+
+ def list_user_ec2_credentials(self, user_id):
+ resp, body = self.get('/users/%s/credentials/OS-EC2' % user_id)
+ self.expected_success(200, resp.status)
+ return service_client.ResponseBodyList(resp, self._parse_resp(body))
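
The two helpers above wrap Keystone v2's OS-EC2 extension
(``/users/{user_id}/credentials/OS-EC2``). A hypothetical caller, assuming an
``IdentityClientJSON`` instance whose ``user_id`` and ``tenant_id`` are already
populated::

    # Hypothetical usage; the client instance construction is elided.
    ec2_creds = identity_client.list_user_ec2_credentials(
        identity_client.user_id)
    if not ec2_creds:
        new_cred = identity_client.create_user_ec2_credentials(
            identity_client.user_id, identity_client.tenant_id)
        access, secret = new_cred['access'], new_cred['secret']
    else:
        access, secret = ec2_creds[0]['access'], ec2_creds[0]['secret']
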
diff --git a/tempest/services/identity/v3/json/token_client.py b/tempest/services/identity/v3/json/token_client.py
index b0824a7..3e37403 100644
--- a/tempest/services/identity/v3/json/token_client.py
+++ b/tempest/services/identity/v3/json/token_client.py
@@ -37,22 +37,30 @@
self.auth_url = auth_url
- def auth(self, user=None, password=None, project=None, user_type='id',
- user_domain=None, project_domain=None, token=None):
+ def auth(self, user_id=None, username=None, password=None, project_id=None,
+ project_name=None, user_domain_id=None, user_domain_name=None,
+ project_domain_id=None, project_domain_name=None, domain_id=None,
+ domain_name=None, token=None):
"""
- :param user: user id or name, as specified in user_type
- :param user_domain: the user domain
- :param project_domain: the project domain
+ :param user_id: user id
+ :param username: user name
+ :param user_domain_id: the user domain id
+ :param user_domain_name: the user domain name
+ :param project_domain_id: the project domain id
+ :param project_domain_name: the project domain name
+ :param domain_id: a domain id to scope to
+ :param domain_name: a domain name to scope to
+ :param project_id: a project id to scope to
+ :param project_name: a project name to scope to
:param token: a token to re-scope.
- Accepts different combinations of credentials. Restrictions:
- - project and domain are only name (no id)
+ Accepts different combinations of credentials.
        Sample valid combinations:
- token
- - token, project, project_domain
+ - token, project_name, project_domain_id
- user_id, password
- - username, password, user_domain
- - username, password, project, user_domain, project_domain
+ - username, password, user_domain_id
+ - username, password, project_name, user_domain_id, project_domain_id
Validation is left to the server side.
"""
creds = {
@@ -68,25 +76,45 @@
id_obj['token'] = {
'id': token
}
- if user and password:
+
+ if (user_id or username) and password:
id_obj['methods'].append('password')
id_obj['password'] = {
'user': {
'password': password,
}
}
- if user_type == 'id':
- id_obj['password']['user']['id'] = user
+ if user_id:
+ id_obj['password']['user']['id'] = user_id
else:
- id_obj['password']['user']['name'] = user
- if user_domain is not None:
- _domain = dict(name=user_domain)
+ id_obj['password']['user']['name'] = username
+
+ _domain = None
+ if user_domain_id is not None:
+ _domain = dict(id=user_domain_id)
+ elif user_domain_name is not None:
+ _domain = dict(name=user_domain_name)
+ if _domain:
id_obj['password']['user']['domain'] = _domain
- if project is not None:
- _domain = dict(name=project_domain)
- _project = dict(name=project, domain=_domain)
- scope = dict(project=_project)
- creds['auth']['scope'] = scope
+
+ if (project_id or project_name):
+ _project = dict()
+
+ if project_id:
+ _project['id'] = project_id
+ elif project_name:
+ _project['name'] = project_name
+
+ if project_domain_id is not None:
+ _project['domain'] = {'id': project_domain_id}
+ elif project_domain_name is not None:
+ _project['domain'] = {'name': project_domain_name}
+
+ creds['auth']['scope'] = dict(project=_project)
+ elif domain_id:
+ creds['auth']['scope'] = dict(domain={'id': domain_id})
+ elif domain_name:
+ creds['auth']['scope'] = dict(domain={'name': domain_name})
body = json.dumps(creds)
resp, body = self.post(self.auth_url, body=body)
@@ -120,15 +148,22 @@
return resp, json.loads(resp_body)
- def get_token(self, user, password, project=None, project_domain='Default',
- user_domain='Default', auth_data=False):
+ def get_token(self, **kwargs):
"""
- :param user: username
Returns (token id, token data) for supplied credentials
"""
- body = self.auth(user, password, project, user_type='name',
- user_domain=user_domain,
- project_domain=project_domain)
+
+ auth_data = kwargs.pop('auth_data', False)
+
+ if not (kwargs.get('user_domain_id') or
+ kwargs.get('user_domain_name')):
+ kwargs['user_domain_name'] = 'Default'
+
+ if not (kwargs.get('project_domain_id') or
+ kwargs.get('project_domain_name')):
+ kwargs['project_domain_name'] = 'Default'
+
+ body = self.auth(**kwargs)
token = body.response.get('x-subject-token')
if auth_data:
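
The expanded ``auth()`` signature takes explicit id/name keywords instead of a
``user_type`` switch. A few illustrative calls matching the combinations listed
in the docstring (the token client instance and all credential values are
placeholders)::

    # Placeholder values; token_client is a v3 token client instance.
    token_client.auth(user_id='1234abcd', password='secret')
    token_client.auth(username='alice', password='secret',
                      user_domain_id='default')
    token_client.auth(username='alice', password='secret',
                      project_name='demo', user_domain_id='default',
                      project_domain_id='default')
    # get_token() accepts the same keywords and defaults both domain names
    # to 'Default' when no domain id/name is supplied.
    token = token_client.get_token(username='alice', password='secret',
                                   project_name='demo')
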
diff --git a/tempest/test.py b/tempest/test.py
index d6858a3..7039f4c 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -94,7 +94,8 @@
'object_storage': CONF.service_available.swift,
'dashboard': CONF.service_available.horizon,
'telemetry': CONF.service_available.ceilometer,
- 'data_processing': CONF.service_available.sahara
+ 'data_processing': CONF.service_available.sahara,
+ 'database': CONF.service_available.trove
}
return service_list
@@ -108,7 +109,7 @@
def decorator(f):
services = ['compute', 'image', 'baremetal', 'volume', 'orchestration',
'network', 'identity', 'object_storage', 'dashboard',
- 'telemetry', 'data_processing']
+ 'telemetry', 'data_processing', 'database']
for service in args:
if service not in services:
raise exceptions.InvalidServiceTag('%s is not a valid '
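
Registering ``database`` in the service list lets Trove tests declare their
dependency and be skipped automatically when ``CONF.service_available.trove``
is false. A hypothetical test using the new tag (the base class and flavors
client names are illustrative only)::

    from tempest import test

    class TroveSmokeTest(base.BaseTestCase):  # base class name is illustrative

        @test.services('database')  # skipped when trove is unavailable
        def test_list_db_flavors(self):
            # The flavors client attribute is hypothetical; only the
            # 'database' service tag comes from the change above.
            flavors = self.database_flavors_client.list_db_flavors()
            self.assertTrue(len(flavors) > 0)
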
diff --git a/tempest/tests/fake_credentials.py b/tempest/tests/fake_credentials.py
index 48f67d2..649d51d 100644
--- a/tempest/tests/fake_credentials.py
+++ b/tempest/tests/fake_credentials.py
@@ -43,7 +43,8 @@
username='fake_username',
password='fake_password',
user_domain_name='fake_domain_name',
- project_name='fake_tenant_name'
+ project_name='fake_tenant_name',
+ project_domain_name='fake_domain_name'
)
super(FakeKeystoneV3Credentials, self).__init__(**creds)
diff --git a/tempest/tests/test_tenant_isolation.py b/tempest/tests/test_tenant_isolation.py
index 15ff0ff..7ab3f1e 100644
--- a/tempest/tests/test_tenant_isolation.py
+++ b/tempest/tests/test_tenant_isolation.py
@@ -41,6 +41,7 @@
fake_identity._fake_v2_response)
cfg.CONF.set_default('operator_role', 'FakeRole',
group='object-storage')
+ self._mock_list_ec2_credentials('fake_user_id', 'fake_tenant_id')
def test_tempest_client(self):
iso_creds = isolated_creds.IsolatedCreds('test class')
@@ -102,6 +103,18 @@
(200, [{'id': '1', 'name': 'FakeRole'}]))))
return roles_fix
+ def _mock_list_ec2_credentials(self, user_id, tenant_id):
+ ec2_creds_fix = self.useFixture(mockpatch.PatchObject(
+ json_iden_client.IdentityClientJSON,
+ 'list_user_ec2_credentials',
+ return_value=(service_client.ResponseBodyList
+ (200, [{'access': 'fake_access',
+ 'secret': 'fake_secret',
+ 'tenant_id': tenant_id,
+ 'user_id': user_id,
+ 'trust_id': None}]))))
+ return ec2_creds_fix
+
def _mock_network_create(self, iso_creds, id, name):
net_fix = self.useFixture(mockpatch.PatchObject(
iso_creds.network_admin_client,
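
Because isolated credential setup now lists EC2 credentials through the
identity client, the unit tests stub that call with a fixture. The same
``mockpatch`` pattern would apply if ``create_user_ec2_credentials`` ever
needed stubbing as well (hypothetical sketch, mirroring
``_mock_list_ec2_credentials`` above)::

    def _mock_create_ec2_credentials(self, user_id, tenant_id):
        # Hypothetical companion fixture; not part of the change above.
        return self.useFixture(mockpatch.PatchObject(
            json_iden_client.IdentityClientJSON,
            'create_user_ec2_credentials',
            return_value=service_client.ResponseBody(
                200, {'access': 'fake_access',
                      'secret': 'fake_secret',
                      'tenant_id': tenant_id,
                      'user_id': user_id,
                      'trust_id': None})))
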
diff --git a/tempest/thirdparty/boto/test.py b/tempest/thirdparty/boto/test.py
index b5d3f8b..90d0838 100644
--- a/tempest/thirdparty/boto/test.py
+++ b/tempest/thirdparty/boto/test.py
@@ -27,6 +27,8 @@
from oslo_log import log as logging
import six
+from tempest_lib import exceptions as lib_exc
+
import tempest.clients
from tempest.common.utils import file_utils
from tempest import config
@@ -65,6 +67,8 @@
if not secret_matcher.match(connection_data["aws_secret_access_key"]):
raise Exception("Invalid AWS secret Key")
raise Exception("Unknown (Authentication?) Error")
+ # NOTE(andreaf) Setting up an extra manager here is redundant,
+ # and should be removed.
openstack = tempest.clients.Manager()
try:
if urlparse.urlparse(CONF.boto.ec2_url).hostname is None:
@@ -77,7 +81,7 @@
raise Exception("EC2 target does not looks EC2 service")
_cred_sub_check(ec2client.connection_data)
- except keystoneclient.exceptions.Unauthorized:
+ except lib_exc.Unauthorized:
EC2_CAN_CONNECT_ERROR = "AWS credentials not set," +\
" failed to get them even by keystoneclient"
except Exception as exc:
@@ -199,6 +203,9 @@
super(BotoTestCase, cls).skip_checks()
if not CONF.compute_feature_enabled.ec2_api:
raise cls.skipException("The EC2 API is not available")
+ if not CONF.identity_feature_enabled.api_v2 or \
+ not CONF.identity.auth_version == 'v2':
+ raise cls.skipException("Identity v2 is not available")
@classmethod
def setup_credentials(cls):