Merge "Adjust registration of ami image in boto tests"
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 4fe2e9f..b6e00ce 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -40,6 +40,24 @@
Eventually the config options for providing credentials to tempest will be
deprecated and removed in favor of the accounts.yaml file.
+Keystone Connection Info
+^^^^^^^^^^^^^^^^^^^^^^^^
+In order for tempest to be able to talk to your OpenStack deployment you need
+to provide it with information about how it communicates with keystone.
+This involves configuring the following options in the identity section:
+
+ #. auth_version
+ #. uri
+ #. uri_v3
+
+The *auth_version* option tells tempest whether it should use keystone's v2 or
+v3 api (except for the identity api tests, which always test a specific
+version). The two uri options tell tempest the url of the keystone endpoint:
+the *uri* option is used for keystone v2 requests and *uri_v3* is used for
+keystone v3. Make sure that whichever version you set for *auth_version* has
+its corresponding uri option defined.
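+For example, a minimal identity section might look like the following (the
+urls below are placeholders for your keystone endpoint)::
+
+  [identity]
+  auth_version = v2
+  uri = http://172.16.0.10:5000/v2.0/
+  uri_v3 = http://172.16.0.10:5000/v3/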
+
+
Credential Provider Mechanisms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -65,8 +83,16 @@
This is also currently the default credential provider enabled by tempest,
due to its common use and ease of configuration.
-Locking Test Accounts
-"""""""""""""""""""""
+It is worth pointing out that, depending on your cloud configuration, you might
+need to assign a role to each of the users created by Tempest's tenant
+isolation. This can be set using the *tempest_roles* option. It takes a list of
+role names, each of which will be assigned to every user created by tenant
+isolation. This option has no effect when tempest is not configured to use
+tenant isolation.
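+As a minimal sketch, assuming the *tempest_roles* option lives in the auth
+section (check the sample config for the exact section name) and using a
+hypothetical role name::
+
+  [auth]
+  tempest_roles = Member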
+
+
+Locking Test Accounts (aka accounts.yaml or accounts file)
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
For a long time using tenant isolation was the only method available if you
wanted to enable parallel execution of tempest tests. However this was
insufficient for certain use cases because of the admin credentials requirement
@@ -77,11 +103,6 @@
accounts.yaml before executing any of its tests so that each class is isolated
like in tenant isolation.
-Currently, this mechanism has some limitations, mostly around networking. The
-locking test accounts provider will only work with a single flat network as
-the default for each tenant/project. If another network configuration is used
-in your cloud you might face unexpected failures.
-
To enable and use locking test accounts you need to do a few things:
#. Create an accounts.yaml file which contains the set of pre-existing
@@ -94,20 +115,20 @@
#. Provide tempest with the location of your accounts.yaml file with the
test_accounts_file option in the auth section
+It is worth pointing out that each set of credentials in the accounts.yaml
+should have a unique tenant. This is required to provide proper isolation
+to the tests using the credentials, and failure to do this will likely cause
+unexpected failures in some tests.
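+As an example of the last step above, assuming the accounts file has been
+placed at /etc/tempest/accounts.yaml (the path is only an illustration)::
+
+  [auth]
+  test_accounts_file = /etc/tempest/accounts.yaml
+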
-Non-locking test accounts
-"""""""""""""""""""""""""
-When tempest was refactored to allow for locking test accounts, the original
-non-tenant isolated case was converted to support the new accounts.yaml file.
-This mechanism is the non-locking test accounts provider. It only makes sense
-to use it if parallel execution isn't needed. If the role restrictions were too
-limiting with the locking accounts provider and tenant isolation is not wanted
-then you can use the non-locking test accounts credential provider without the
-accounts.yaml file.
-To use the non-locking test accounts provider you have 2 ways to configure it.
-First you can specify the sets of credentials in the configuration file like
-detailed above with following 9 options in the identity section:
+Non-locking test accounts (aka credentials config options)
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+When Tempest was refactored to allow for locking test accounts, the original
+non-tenant isolated case was converted to internally work similarly to the
+accounts.yaml file. This mechanism was then called the non-locking test accounts
+provider. To use the non-locking test accounts provider you can specify the sets
+of credentials in the configuration file, as detailed above, with the
+following 9 options in the identity section:
#. username
#. password
@@ -119,7 +140,241 @@
#. alt_password
#. alt_tenant_name
-The only restriction with using the traditional config options for credentials
-is that if a test requires specific roles on accounts these tests can not be
-run. This is because the config options do not give sufficient flexibility to
-describe the roles assigned to a user for running the tests.
+It only makes sense to use this provider if parallel execution isn't needed,
+since tempest won't be able to properly isolate tests when using it.
+Additionally, the traditional config options for credentials cannot provide
+credentials to tests which require specific roles on accounts. This is because
+the config options do not give sufficient flexibility to describe the roles
+assigned to a user for running the tests. There are additional limitations with
+regard to network configuration when using this credential provider mechanism;
+see the `Networking`_ section below.
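+As a minimal sketch of this approach, the identity section could look like the
+following (all values are placeholders; *tenant_name* and *alt_username* are
+assumed to be among the remaining options of the 9)::
+
+  [identity]
+  username = demo
+  tenant_name = demo
+  password = secretpass
+  alt_username = alt_demo
+  alt_tenant_name = alt_demo
+  alt_password = secretpass2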
+
+Compute
+-------
+
+Flavors
+^^^^^^^
+For tempest to be able to create servers you need to specify flavors that it
+can use to boot the servers with. There are 2 options in the tempest config
+for doing this:
+
+ #. flavor_ref
+ #. flavor_ref_alt
+
+Both of these options are in the compute section of the config file and take
+a flavor id (not the name) from nova. The *flavor_ref* option is what will be
+used for booting almost all of the guests; *flavor_ref_alt* is only used in
+tests where 2 different-sized servers are required (for example, a resize
+test).
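+For example (the flavor ids below are placeholders)::
+
+  [compute]
+  flavor_ref = 42
+  flavor_ref_alt = 84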
+
+Using a smaller flavor is generally recommended; when larger flavors are used,
+the extra time required to bring up servers will likely affect total run time
+and probably require tweaking timeout values to ensure tests have ample time to
+finish.
+
+Images
+^^^^^^
+Just like with flavors, tempest needs to know which images to use for booting
+servers. There are 2 options in the compute section just like with flavors:
+
+ #. image_ref
+ #. image_ref_alt
+
+Both options expect an image id (not name) from nova. The *image_ref* option is
+what will be used for booting the majority of servers in tempest.
+*image_ref_alt* is used for tests that require 2 images, such as rebuild. If 2
+images are not available you can set both options to the same image_ref and
+those tests will be skipped.
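+For example (the image ids below are placeholder uuids)::
+
+  [compute]
+  image_ref = 11111111-1111-1111-1111-111111111111
+  image_ref_alt = 22222222-2222-2222-2222-222222222222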
+
+There are also options in the scenario section for images:
+
+ #. img_file
+ #. img_dir
+ #. aki_img_file
+ #. ari_img_file
+ #. ami_img_file
+ #. img_container_format
+ #. img_disk_format
+
+However, unlike the other image options, these are used for a very small subset
+of scenario tests which upload an image. These options are used to tell tempest
+where an image file is located and to describe its metadata for when it is
+uploaded.
+
+The behavior of these options is a bit convoluted (which will likely be fixed
+in future versions). You first need to specify *img_dir*, which is the
+directory in which tempest will look for the image files. Tempest will first
+check whether the filename set for *img_file* can be found in *img_dir*. If it
+is found, then the *img_container_format* and *img_disk_format* options are
+used to upload that image to glance. However, if it is not found, tempest will
+look for the 3 uec image file name options as a fallback. If none of them are
+found, the tests requiring an image to upload will fail.
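+As an illustrative sketch, a configuration using a single qcow2 image might
+look like the following (the directory, filename, and formats are
+placeholders)::
+
+  [scenario]
+  img_dir = /opt/stack/images
+  img_file = cirros-disk.img
+  img_container_format = bare
+  img_disk_format = qcow2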
+
+It is worth pointing out that using `cirros`_ is a very good choice for running
+tempest. It is what is used for upstream testing; cirros images boot quickly
+and have a small footprint.
+
+.. _cirros: https://launchpad.net/cirros
+
+Networking
+----------
+OpenStack has a myriad of possible networking configurations, and depending on
+which of the 2 network backends, nova-network or neutron, you are using, things
+can vary drastically. Due to this complexity Tempest has to provide a certain
+level of flexibility in its configuration to ensure it will work against any
+cloud. This ends up causing a large number of permutations in Tempest's config
+around network configuration.
+
+
+Enabling Remote Access to Created Servers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+When Tempest creates servers for testing, some tests require being able to
+connect to those servers. Depending on the configuration of the cloud, the
+methods for doing this can be different. In certain configurations it is
+required to specify a single network with server create calls. Accordingly,
+Tempest provides a few different methods for providing this information in
+configuration to try and ensure that regardless of the cloud's configuration it
+will still be able to run. This section covers the different methods of
+configuring Tempest to provide a network when creating servers.
+
+Fixed Network Name
+""""""""""""""""""
+This is the simplest method of specifying how networks should be used. You can
+just specify a single network name/label to use for all server creations. The
+limitation with this is that all tenants/projects and users must be able to see
+that network name/label in a network list and be able to use it.
+
+If no network name is assigned in the config file and none of the below
+alternatives are used, then Tempest will not specify a network on server
+creations, which depending on the cloud configuration might prevent them from
+booting.
+
+To set a fixed network name simply do:
+
+ #. Set the fixed_network_name option in the compute group
+
+In the case that the configured fixed network name cannot be found by a user
+network list call, it will be treated as if one was not provided, except that a
+warning will be logged stating that it couldn't be found.
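+For example (the network name is a placeholder for a network visible to all
+test tenants)::
+
+  [compute]
+  fixed_network_name = private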
+
+
+Accounts File
+"""""""""""""
+If you are using an accounts file to provide credentials for running Tempest
+then you can leverage it to also specify which network should be used with
+server creations on a per tenant/project and user pair basis. This provides
+the necessary flexibility to work with more intricate networking configurations
+by enabling the user to specify exactly which network to use for which
+tenants/projects. You can refer to the accounts.yaml sample file included in
+the tempest repo for the syntax around specifying networks in the file.
+
+However, specifying a network is not required when using an accounts file. If
+one is not specified you can use a fixed network name to specify the network to
+use when creating servers just as without an accounts file. However, any network
+specified in the accounts file will take precedence over the fixed network name
+provided. If no network is provided in the accounts file and a fixed network
+name is not set then no network will be included in create server requests.
+
+If a fixed network is provided and the accounts.yaml file also contains networks
+this has the benefit of enabling a couple more tests which require a static
+network to perform operations like server lists with a network filter. If a
+fixed network name is not provided these tests are skipped. Additionally, if a
+fixed network name is provided it will serve as a fallback in case of a
+misconfiguration or a missing network in the accounts file.
+
+
+With Tenant Isolation
+"""""""""""""""""""""
+With tenant isolation enabled and nova-network as the backend, nothing changes.
+Your only option for configuration is to either set a fixed network name or
+not.
+However, in most cases it shouldn't matter because nova-network should have no
+problem booting a server with multiple networks. If this is not the case for
+your cloud then using an accounts file is recommended because it provides the
+necessary flexibility to describe your configuration. Tenant isolation is not
+able to dynamically allocate things as necessary if neutron is not enabled.
+
+With neutron and tenant isolation enabled there should not be any additional
+configuration necessary to enable Tempest to create servers with working
+networking, assuming you have properly configured the network section to work
+for your cloud. Tempest will dynamically create the neutron resources necessary
+to enable using servers with that network. Also, just as with the accounts
+file, if you specify a fixed network name while using neutron and tenant
+isolation it will enable running tests which require a static network and it
+will additionally be used as a fallback for server creation. However, unlike
+accounts.yaml this should never be triggered.
+
+Configuring Available Services
+------------------------------
+OpenStack is really a constellation of several different projects which run
+together to create a cloud. However, which projects you're running is not set
+in stone; which services are running is up to the deployer. Tempest needs to
+know which services are available so it can figure out which tests it is able
+to run, as well as certain setup steps which differ based on the available
+services.
+
+The *service_available* section of the config file is used to set which
+services are available. It contains a boolean option for each service (except
+for keystone, which is a hard requirement); set it to True if the service is
+available or False if it is not.
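+As a sketch, a deployment running neutron but without swift might set the
+following (only a few of the boolean options are shown, and the option names
+are assumed to match the service names in the sample config)::
+
+  [service_available]
+  neutron = True
+  swift = False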
+
+Service Catalog
+^^^^^^^^^^^^^^^
+Each project which has its own REST API contains an entry in the service
+catalog. Like most things in OpenStack this is also completely configurable.
+However, for tempest to be able to figure out the endpoints to which it should
+send REST API calls for each service, it needs to know how that project is
+defined in the service catalog. There are 3 options in each service section to
+accomplish this:
+
+ #. catalog_type
+ #. endpoint_type
+ #. region
+
+Setting *catalog_type* and *endpoint_type* should normally give Tempest enough
+information to determine which endpoint it should pull from the service
+catalog to use for talking to that particular service. However, if your cloud
+has multiple regions available and you need to specify a particular one to use
+a service, you can set the *region* option in that service's section.
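+For example, to have Tempest talk to cinder over the internal endpoint of a
+particular region, the volume section might look like the following (the
+catalog_type value and region name are illustrative assumptions)::
+
+  [volume]
+  catalog_type = volume
+  endpoint_type = internalURL
+  region = RegionOne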
+
+It should also be noted that the default values for these options are set to
+what devstack uses (which is a de facto standard for service catalog entries).
+So often nothing actually needs to be set on these options to enable
+communication to a particular service. It is only if you are either not using
+the same *catalog_type* as devstack or you want Tempest to talk to a different
+endpoint type instead of publicURL for a service that these need to be changed.
+
+
+Service feature configuration
+-----------------------------
+
+OpenStack provides its deployers a myriad of different configuration options
+to enable anyone deploying it to create a cloud tailor-made for any individual
+use case. It provides options for several different backend types, databases,
+message queues, etc. However, the downside to this configurability is that
+certain operations and features aren't supported depending on the configuration.
+These features may or may not be discoverable from the API so the burden is
+often on the user to figure out what the cloud they're talking to supports.
+Besides the obvious interoperability issues with this it also leaves Tempest
+in an interesting situation trying to figure out which tests are expected to
+work. However, Tempest tests do not rely on dynamic api discovery for a feature
+(assuming one exists). Instead Tempest has to be explicitly configured as to
+which optional features are enabled. This is in order to prevent bugs in the
+discovery mechanisms from masking failures.
+
+The service feature-enabled config sections are how Tempest addresses the
+optional feature question. Each service that has tests for optional features
+contains one of these sections. The only options in these sections are boolean
+options named after the optional features. If an option is set to false, any
+test which depends on that functionality will be skipped. For a complete list
+of all these options refer to the sample config file.
+
+
+API Extensions
+^^^^^^^^^^^^^^
+The service feature-enabled sections often contain an *api-extensions* option
+(or, in the case of swift, a *discoverable_apis* option). This is used to tell
+tempest which api extensions (or configurable middleware) are used in your
+deployment. It has 2 valid config states: either it contains the single value
+"all" (which is the default), meaning that every api extension is assumed to be
+enabled, or it is set to a list of each individual extension that is enabled
+for that service.
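+For example, to declare that only a couple of neutron extensions are enabled,
+a configuration could look like the following (the section name and extension
+aliases are illustrative; check the sample config for the exact names)::
+
+  [network-feature-enabled]
+  api_extensions = security-group,router
+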
diff --git a/etc/accounts.yaml.sample b/etc/accounts.yaml.sample
index 64ff8a7..3f57eb7 100644
--- a/etc/accounts.yaml.sample
+++ b/etc/accounts.yaml.sample
@@ -1,4 +1,7 @@
# The number of accounts required can be estimated as CONCURRENCY x 2
+# It is expected that each user provided here will be in a different tenant.
+# This is required to provide isolation between tests when running in parallel
+#
# Valid fields for credentials are defined in the descendants of
# auth.Credentials - see KeystoneV[2|3]Credentials.CONF_ATTRIBUTES
@@ -28,8 +31,13 @@
- 'reseller_admin'
- 'operator'
+# Networks can be specified to tell tempest which network it should use when
+# creating servers with an account
+
- username: 'admin_user_1'
tenant_name: 'admin_tenant_1'
password: 'test_password'
types:
- 'admin'
+ resources:
+ network: 'public'
diff --git a/etc/tempest.conf.sample b/etc/tempest.conf.sample
index 175f0d9..2a72635 100644
--- a/etc/tempest.conf.sample
+++ b/etc/tempest.conf.sample
@@ -253,10 +253,6 @@
# image. (string value)
#image_alt_ssh_user = root
-# Password used to authenticate to an instance using the alternate
-# image. (string value)
-#image_alt_ssh_password = password
-
# Time in seconds between build status checks. (integer value)
#build_interval = 1
@@ -269,16 +265,16 @@
#run_ssh = false
# Auth method used for authenticate to the instance. Valid choices
-# are: keypair, configured, adminpass. keypair: start the servers with
-# an ssh keypair. configured: use the configured user and password.
-# adminpass: use the injected adminPass. disabled: avoid using ssh
-# when it is an option. (string value)
+# are: keypair, configured, adminpass and disabled. Keypair: start the
+# servers with a ssh keypair. Configured: use the configured user and
+# password. Adminpass: use the injected adminPass. Disabled: avoid
+# using ssh when it is an option. (string value)
#ssh_auth_method = keypair
# How to connect to the instance? fixed: using the first ip belongs
-# the fixed network floating: creating and using a floating ip (string
-# value)
-#ssh_connect_method = fixed
+# the fixed network floating: creating and using a floating ip.
+# (string value)
+#ssh_connect_method = floating
# User name used to authenticate to an instance. (string value)
#ssh_user = root
@@ -286,6 +282,14 @@
# Timeout in seconds to wait for ping to succeed. (integer value)
#ping_timeout = 120
+# The packet size for ping packets originating from remote linux hosts
+# (integer value)
+#ping_size = 56
+
+# The number of ping packets originating from remote linux hosts
+# (integer value)
+#ping_count = 1
+
# Timeout in seconds to wait for authentication to succeed. (integer
# value)
#ssh_timeout = 300
@@ -301,7 +305,8 @@
# Name of the fixed network that is visible to all test tenants. If
# multiple networks are available for a tenant this is the network
# which will be used for creating servers if tempest does not create a
-# network or a network is not specified elsewhere (string value)
+# network or a network is not specified elsewhere. It may be used for
+# ssh validation only if floating IPs are disabled. (string value)
#fixed_network_name = <None>
# Network used for SSH connections. Ignored if
@@ -326,10 +331,6 @@
# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
#endpoint_type = publicURL
-# Path to a private key file for SSH access to remote hosts (string
-# value)
-#path_to_private_key = <None>
-
# Expected device name when a volume is attached to an instance
# (string value)
#volume_device_name = vdb
@@ -746,14 +747,19 @@
# The mask bits for tenant ipv6 subnets (integer value)
#tenant_network_v6_mask_bits = 64
-# Whether tenant network connectivity should be evaluated directly
-# (boolean value)
+# Whether tenant networks can be reached directly from the test
+# client. This must be set to True when the 'fixed' ssh_connect_method
+# is selected. (boolean value)
#tenant_networks_reachable = false
# Id of the public network that provides external connectivity (string
# value)
#public_network_id =
+# Default floating network name. Used to allocate floating IPs when
+# neutron is enabled. (string value)
+#floating_network_name = <None>
+
# Id of the public router that provides external connectivity. This
# should only be used when Neutron's 'allow_overlapping_ips' is set to
# 'False' in neutron.conf. usually not needed past 'Grizzly' release
@@ -796,6 +802,10 @@
# attributes ipv6_ra_mode and ipv6_address_mode (boolean value)
#ipv6_subnet_attributes = false
+# Does the test environment support changing port admin state (boolean
+# value)
+#port_admin_state_change = true
+
[object-storage]
@@ -1071,6 +1081,38 @@
#too_slow_to_test = true
+[validation]
+
+#
+# From tempest.config
+#
+
+# Default IP type used for validation: -fixed: uses the first IP
+# belonging to the fixed network -floating: creates and uses a
+# floating IP (string value)
+# Allowed values: fixed, floating
+#connect_method = floating
+
+# Default authentication method to the instance. Only ssh via keypair
+# is supported for now. Additional methods will be handled in a
+# separate spec. (string value)
+# Allowed values: keypair
+#auth_method = keypair
+
+# Default IP version for ssh connections. (integer value)
+#ip_version_for_ssh = 4
+
+# Timeout in seconds to wait for ping to succeed. (integer value)
+#ping_timeout = 120
+
+# Timeout in seconds to wait for the TCP connection to be successful.
+# (integer value)
+#connect_timeout = 60
+
+# Timeout in seconds to wait for the ssh banner. (integer value)
+#ssh_timeout = 300
+
+
[volume]
#
diff --git a/requirements.txt b/requirements.txt
index 35b5144..0d7fc0d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,8 +12,6 @@
python-glanceclient>=0.15.0
python-cinderclient>=1.1.0
python-heatclient>=0.3.0
-python-saharaclient>=0.8.0
-python-swiftclient>=2.2.0
testrepository>=0.0.18
oslo.concurrency>=1.8.0,<1.9.0 # Apache-2.0
oslo.config>=1.9.3,<1.10.0 # Apache-2.0
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/api/compute/admin/test_aggregates.py
index 3a34a2e..c4cb11a 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/api/compute/admin/test_aggregates.py
@@ -102,7 +102,7 @@
aggregate = self.client.create_aggregate(name=aggregate_name)
self.addCleanup(self.client.delete_aggregate, aggregate['id'])
- body = self.client.get_aggregate(aggregate['id'])
+ body = self.client.show_aggregate(aggregate['id'])
self.assertEqual(aggregate['name'], body['name'])
self.assertEqual(aggregate['availability_zone'],
body['availability_zone'])
@@ -114,7 +114,7 @@
self.assertEqual(meta, body["metadata"])
# verify the metadata has been set
- body = self.client.get_aggregate(aggregate['id'])
+ body = self.client.show_aggregate(aggregate['id'])
self.assertEqual(meta, body["metadata"])
@test.attr(type='gate')
@@ -198,7 +198,7 @@
self.client.add_host(aggregate['id'], self.host)
self.addCleanup(self.client.remove_host, aggregate['id'], self.host)
- body = self.client.get_aggregate(aggregate['id'])
+ body = self.client.show_aggregate(aggregate['id'])
self.assertEqual(aggregate_name, body['name'])
self.assertIsNone(body['availability_zone'])
self.assertIn(self.host, body['hosts'])
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index f6d6ad3..882986c 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -110,7 +110,7 @@
self.addCleanup(self.client.delete_aggregate, aggregate['id'])
self.assertRaises(lib_exc.Forbidden,
- self.user_client.get_aggregate,
+ self.user_client.show_aggregate,
aggregate['id'])
@test.attr(type=['negative', 'gate'])
@@ -125,7 +125,7 @@
def test_aggregate_get_details_with_invalid_id(self):
# Get aggregate details with invalid id should raise exceptions.
self.assertRaises(lib_exc.NotFound,
- self.client.get_aggregate, -1)
+ self.client.show_aggregate, -1)
@test.attr(type=['negative', 'gate'])
@test.idempotent_id('0ef07828-12b4-45ba-87cc-41425faf5711')
diff --git a/tempest/api/compute/admin/test_availability_zone.py b/tempest/api/compute/admin/test_availability_zone.py
index eadc15a..1ec171b 100644
--- a/tempest/api/compute/admin/test_availability_zone.py
+++ b/tempest/api/compute/admin/test_availability_zone.py
@@ -32,12 +32,12 @@
@test.idempotent_id('d3431479-8a09-4f76-aa2d-26dc580cb27c')
def test_get_availability_zone_list(self):
# List of availability zone
- availability_zone = self.client.get_availability_zone_list()
+ availability_zone = self.client.list_availability_zones()
self.assertTrue(len(availability_zone) > 0)
@test.attr(type='gate')
@test.idempotent_id('ef726c58-530f-44c2-968c-c7bed22d5b8c')
def test_get_availability_zone_list_detail(self):
# List of availability zones and available services
- availability_zone = self.client.get_availability_zone_list_detail()
+ availability_zone = self.client.list_availability_zones(detail=True)
self.assertTrue(len(availability_zone) > 0)
diff --git a/tempest/api/compute/admin/test_availability_zone_negative.py b/tempest/api/compute/admin/test_availability_zone_negative.py
index d6e577e..e9de628 100644
--- a/tempest/api/compute/admin/test_availability_zone_negative.py
+++ b/tempest/api/compute/admin/test_availability_zone_negative.py
@@ -36,4 +36,4 @@
# non-administrator user
self.assertRaises(
lib_exc.Forbidden,
- self.non_adm_client.get_availability_zone_list_detail)
+ self.non_adm_client.list_availability_zones, detail=True)
diff --git a/tempest/api/compute/admin/test_baremetal_nodes.py b/tempest/api/compute/admin/test_baremetal_nodes.py
index 64099c3..9b88938 100644
--- a/tempest/api/compute/admin/test_baremetal_nodes.py
+++ b/tempest/api/compute/admin/test_baremetal_nodes.py
@@ -52,5 +52,5 @@
# Test getting each individually
for node in test_nodes:
- baremetal_node = self.client.get_baremetal_node(node['uuid'])
+ baremetal_node = self.client.show_baremetal_node(node['uuid'])
self.assertEqual(node['uuid'], baremetal_node['id'])
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 4995209..9f1a548 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -352,10 +352,10 @@
@classmethod
def skip_checks(cls):
+ super(BaseComputeAdminTest, cls).skip_checks()
if not credentials.is_admin_available():
msg = ("Missing Identity Admin API credentials in configuration.")
raise cls.skipException(msg)
- super(BaseComputeAdminTest, cls).skip_checks()
@classmethod
def setup_credentials(cls):
diff --git a/tempest/api/compute/certificates/test_certificates.py b/tempest/api/compute/certificates/test_certificates.py
index 2be201a..4fe87ad 100644
--- a/tempest/api/compute/certificates/test_certificates.py
+++ b/tempest/api/compute/certificates/test_certificates.py
@@ -33,6 +33,6 @@
@test.idempotent_id('3ac273d0-92d2-4632-bdfc-afbc21d4606c')
def test_get_root_certificate(self):
# get the root certificate
- body = self.certificates_client.get_certificate('root')
+ body = self.certificates_client.show_certificate('root')
self.assertIn('data', body)
self.assertIn('private_key', body)
diff --git a/tempest/api/compute/images/test_image_metadata.py b/tempest/api/compute/images/test_image_metadata.py
index ab21ad7..52d47dd 100644
--- a/tempest/api/compute/images/test_image_metadata.py
+++ b/tempest/api/compute/images/test_image_metadata.py
@@ -13,7 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-import StringIO
+import six
from tempest_lib.common.utils import data_utils
@@ -51,7 +51,7 @@
is_public=False)
cls.image_id = body['id']
cls.images.append(cls.image_id)
- image_file = StringIO.StringIO(('*' * 1024))
+ image_file = six.StringIO(('*' * 1024))
cls.glance_client.update_image(cls.image_id, data=image_file)
cls.client.wait_for_image_status(cls.image_id, 'ACTIVE')
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index 2c6d2df..430ca35 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -13,10 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
-import StringIO
import time
from oslo_log import log as logging
+import six
from tempest_lib.common.utils import data_utils
import testtools
@@ -59,7 +59,7 @@
# Wait 1 second between creation and upload to ensure a delta
# between created_at and updated_at.
time.sleep(1)
- image_file = StringIO.StringIO(('*' * 1024))
+ image_file = six.StringIO(('*' * 1024))
cls.glance_client.update_image(image_id, data=image_file)
cls.client.wait_for_image_status(image_id, 'ACTIVE')
body = cls.client.get_image(image_id)
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index e069f6e..3a6b42d 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -156,7 +156,7 @@
@test.idempotent_id('00579617-fe04-4e1c-9d08-ca7467d2e34b')
@testtools.skipIf(CONF.service_available.neutron,
- "Neutron not check the security_group_id")
+ "Neutron does not check the security group ID")
@test.attr(type=['negative', 'smoke'])
@test.services('network')
def test_update_security_group_with_invalid_sg_id(self):
@@ -171,7 +171,7 @@
@test.idempotent_id('cda8d8b4-59f8-4087-821d-20cf5a03b3b1')
@testtools.skipIf(CONF.service_available.neutron,
- "Neutron not check the security_group_name")
+ "Neutron does not check the security group name")
@test.attr(type=['negative', 'smoke'])
@test.services('network')
def test_update_security_group_with_invalid_sg_name(self):
@@ -187,7 +187,7 @@
@test.idempotent_id('97d12b1c-a610-4194-93f1-ba859e718b45')
@testtools.skipIf(CONF.service_available.neutron,
- "Neutron not check the security_group_description")
+ "Neutron does not check the security group description")
@test.attr(type=['negative', 'smoke'])
@test.services('network')
def test_update_security_group_with_invalid_sg_des(self):
diff --git a/tempest/api/compute/servers/test_availability_zone.py b/tempest/api/compute/servers/test_availability_zone.py
index f3650ac..8d3f31c 100644
--- a/tempest/api/compute/servers/test_availability_zone.py
+++ b/tempest/api/compute/servers/test_availability_zone.py
@@ -32,5 +32,5 @@
@test.idempotent_id('a8333aa2-205c-449f-a828-d38c2489bf25')
def test_get_availability_zone_list_with_non_admin_user(self):
# List of availability zone with non-administrator user
- availability_zone = self.client.get_availability_zone_list()
+ availability_zone = self.client.list_availability_zones()
self.assertTrue(len(availability_zone) > 0)
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index f33204d..eccd600 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -69,7 +69,10 @@
network = cls.get_tenant_network()
if network:
- cls.fixed_network_name = network['name']
+ if network.get('name'):
+ cls.fixed_network_name = network['name']
+ else:
+ cls.fixed_network_name = None
else:
cls.fixed_network_name = None
network_kwargs = fixed_network.set_networks_kwarg(network)
diff --git a/tempest/api/compute/test_authorization.py b/tempest/api/compute/test_authorization.py
index f9ee75b..2baf608 100644
--- a/tempest/api/compute/test_authorization.py
+++ b/tempest/api/compute/test_authorization.py
@@ -13,7 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-import StringIO
+import six
from oslo_log import log as logging
from tempest_lib.common.utils import data_utils
@@ -75,7 +75,7 @@
disk_format='raw',
is_public=False)
image_id = body['id']
- image_file = StringIO.StringIO(('*' * 1024))
+ image_file = six.StringIO(('*' * 1024))
body = cls.glance_client.update_image(image_id, data=image_file)
cls.glance_client.wait_for_image_status(image_id, 'active')
cls.image = cls.images_client.get_image(image_id)
diff --git a/tempest/api/compute/test_extensions.py b/tempest/api/compute/test_extensions.py
index 5b14071..09927fc 100644
--- a/tempest/api/compute/test_extensions.py
+++ b/tempest/api/compute/test_extensions.py
@@ -50,5 +50,5 @@
@test.attr(type='gate')
def test_get_extension(self):
# get the specified extensions
- extension = self.extensions_client.get_extension('os-consoles')
+ extension = self.extensions_client.show_extension('os-consoles')
self.assertEqual('os-consoles', extension['alias'])
diff --git a/tempest/api/data_processing/base.py b/tempest/api/data_processing/base.py
index d91fbaa..5a903b7 100644
--- a/tempest/api/data_processing/base.py
+++ b/tempest/api/data_processing/base.py
@@ -12,14 +12,216 @@
# License for the specific language governing permissions and limitations
# under the License.
+from collections import OrderedDict
+
+import six
from tempest_lib import exceptions as lib_exc
from tempest import config
+from tempest import exceptions
import tempest.test
CONF = config.CONF
+"""Default templates.
+There should always be at least a master1 and a worker1 node
+group template."""
+DEFAULT_TEMPLATES = {
+ 'vanilla': OrderedDict([
+ ('2.6.0', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['namenode', 'resourcemanager',
+ 'hiveserver']
+ },
+ 'master2': {
+ 'count': 1,
+ 'node_processes': ['oozie', 'historyserver',
+ 'secondarynamenode']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['datanode', 'nodemanager'],
+ 'node_configs': {
+ 'MapReduce': {
+ 'yarn.app.mapreduce.am.resource.mb': 256,
+ 'yarn.app.mapreduce.am.command-opts': '-Xmx256m'
+ },
+ 'YARN': {
+ 'yarn.scheduler.minimum-allocation-mb': 256,
+ 'yarn.scheduler.maximum-allocation-mb': 1024,
+ 'yarn.nodemanager.vmem-check-enabled': False
+ }
+ }
+ }
+ },
+ 'cluster_configs': {
+ 'HDFS': {
+ 'dfs.replication': 1
+ }
+ }
+ }),
+ ('1.2.1', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['namenode', 'jobtracker']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['datanode', 'tasktracker'],
+ 'node_configs': {
+ 'HDFS': {
+ 'Data Node Heap Size': 1024
+ },
+ 'MapReduce': {
+ 'Task Tracker Heap Size': 1024
+ }
+ }
+ }
+ },
+ 'cluster_configs': {
+ 'HDFS': {
+ 'dfs.replication': 1
+ },
+ 'MapReduce': {
+ 'mapred.map.tasks.speculative.execution': False,
+ 'mapred.child.java.opts': '-Xmx500m'
+ },
+ 'general': {
+ 'Enable Swift': False
+ }
+ }
+ })
+ ]),
+ 'hdp': OrderedDict([
+ ('2.0.6', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['NAMENODE', 'SECONDARY_NAMENODE',
+ 'ZOOKEEPER_SERVER', 'AMBARI_SERVER',
+ 'HISTORYSERVER', 'RESOURCEMANAGER',
+ 'GANGLIA_SERVER', 'NAGIOS_SERVER',
+ 'OOZIE_SERVER']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['HDFS_CLIENT', 'DATANODE',
+ 'YARN_CLIENT', 'ZOOKEEPER_CLIENT',
+ 'MAPREDUCE2_CLIENT', 'NODEMANAGER',
+ 'PIG', 'OOZIE_CLIENT']
+ }
+ },
+ 'cluster_configs': {
+ 'HDFS': {
+ 'dfs.replication': 1
+ }
+ }
+ })
+ ]),
+ 'spark': OrderedDict([
+ ('1.0.0', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['namenode', 'master']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['datanode', 'slave']
+ }
+ },
+ 'cluster_configs': {
+ 'HDFS': {
+ 'dfs.replication': 1
+ }
+ }
+ })
+ ]),
+ 'cdh': OrderedDict([
+ ('5.3.0', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['CLOUDERA_MANAGER']
+ },
+ 'master2': {
+ 'count': 1,
+ 'node_processes': ['HDFS_NAMENODE',
+ 'YARN_RESOURCEMANAGER']
+ },
+ 'master3': {
+ 'count': 1,
+ 'node_processes': ['OOZIE_SERVER', 'YARN_JOBHISTORY',
+ 'HDFS_SECONDARYNAMENODE',
+ 'HIVE_METASTORE', 'HIVE_SERVER2']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['YARN_NODEMANAGER', 'HDFS_DATANODE']
+ }
+ },
+ 'cluster_configs': {
+ 'HDFS': {
+ 'dfs_replication': 1
+ }
+ }
+ }),
+ ('5', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['CLOUDERA_MANAGER']
+ },
+ 'master2': {
+ 'count': 1,
+ 'node_processes': ['HDFS_NAMENODE',
+ 'YARN_RESOURCEMANAGER']
+ },
+ 'master3': {
+ 'count': 1,
+ 'node_processes': ['OOZIE_SERVER', 'YARN_JOBHISTORY',
+ 'HDFS_SECONDARYNAMENODE',
+ 'HIVE_METASTORE', 'HIVE_SERVER2']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['YARN_NODEMANAGER', 'HDFS_DATANODE']
+ }
+ },
+ 'cluster_configs': {
+ 'HDFS': {
+ 'dfs_replication': 1
+ }
+ }
+ })
+ ]),
+ 'mapr': OrderedDict([
+ ('4.0.1.mrv2', {
+ 'NODES': {
+ 'master1': {
+ 'count': 1,
+ 'node_processes': ['CLDB', 'FileServer', 'ZooKeeper',
+ 'NodeManager', 'ResourceManager',
+ 'HistoryServer', 'Oozie']
+ },
+ 'worker1': {
+ 'count': 1,
+ 'node_processes': ['FileServer', 'NodeManager', 'Pig']
+ }
+ },
+ 'cluster_configs': {
+ 'Hive': {
+ 'Hive Version': '0.13',
+ }
+ }
+ })
+ ]),
+}
+
class BaseDataProcessingTest(tempest.test.BaseTestCase):
@@ -28,6 +230,7 @@
super(BaseDataProcessingTest, cls).skip_checks()
if not CONF.service_available.sahara:
raise cls.skipException('Sahara support is required')
+ cls.default_plugin = cls._get_default_plugin()
@classmethod
def setup_credentials(cls):
@@ -43,6 +246,10 @@
def resource_setup(cls):
super(BaseDataProcessingTest, cls).resource_setup()
+ cls.default_version = cls._get_default_version()
+ if cls.default_plugin is not None and cls.default_version is None:
+ raise exceptions.InvalidConfiguration(
+ message="No known Sahara plugin version was found")
cls.flavor_ref = CONF.compute.flavor_ref
# add lists for watched resources
@@ -172,3 +379,100 @@
cls._jobs.append(resp_body['id'])
return resp_body
+
+ @classmethod
+ def _get_default_plugin(cls):
+ """Returns the default plugin used for testing."""
+ if len(CONF.data_processing_feature_enabled.plugins) == 0:
+ return None
+
+ for plugin in CONF.data_processing_feature_enabled.plugins:
+ if plugin in DEFAULT_TEMPLATES.keys():
+ break
+ else:
+ plugin = ''
+ return plugin
+
+ @classmethod
+ def _get_default_version(cls):
+ """Returns the default plugin version used for testing.
+ This is gathered separately from the plugin to allow
+ the usage of plugin name in skip_checks. This method is
+        instead invoked from resource_setup, which allows API calls
+ and exceptions.
+ """
+ if not cls.default_plugin:
+ return None
+ plugin = cls.client.get_plugin(cls.default_plugin)
+
+ for version in DEFAULT_TEMPLATES[cls.default_plugin].keys():
+ if version in plugin['versions']:
+ break
+ else:
+ version = None
+
+ return version
+
+ @classmethod
+ def get_node_group_template(cls, nodegroup='worker1'):
+ """Returns a node group template for the default plugin."""
+ try:
+ plugin_data = (
+ DEFAULT_TEMPLATES[cls.default_plugin][cls.default_version]
+ )
+ nodegroup_data = plugin_data['NODES'][nodegroup]
+ node_group_template = {
+ 'description': 'Test node group template',
+ 'plugin_name': cls.default_plugin,
+ 'hadoop_version': cls.default_version,
+ 'node_processes': nodegroup_data['node_processes'],
+ 'flavor_id': cls.flavor_ref,
+ 'node_configs': nodegroup_data.get('node_configs', {}),
+ }
+ return node_group_template
+ except (IndexError, KeyError):
+ return None
+
+ @classmethod
+ def get_cluster_template(cls, node_group_template_ids=None):
+ """Returns a cluster template for the default plugin.
+        node_group_template_ids contains the type and ID of pre-defined
+ node group templates that have to be used in the cluster template
+ (instead of dynamically defining them with 'node_processes').
+ """
+ if node_group_template_ids is None:
+ node_group_template_ids = {}
+ try:
+ plugin_data = (
+ DEFAULT_TEMPLATES[cls.default_plugin][cls.default_version]
+ )
+
+ all_node_groups = []
+ for ng_name, ng_data in six.iteritems(plugin_data['NODES']):
+ node_group = {
+ 'name': '%s-node' % (ng_name),
+ 'flavor_id': cls.flavor_ref,
+ 'count': ng_data['count']
+ }
+ if ng_name in node_group_template_ids.keys():
+ # node group already defined, use it
+ node_group['node_group_template_id'] = (
+ node_group_template_ids[ng_name]
+ )
+ else:
+ # node_processes list defined on-the-fly
+ node_group['node_processes'] = ng_data['node_processes']
+ if 'node_configs' in ng_data:
+ node_group['node_configs'] = ng_data['node_configs']
+ all_node_groups.append(node_group)
+
+ cluster_template = {
+ 'description': 'Test cluster template',
+ 'plugin_name': cls.default_plugin,
+ 'hadoop_version': cls.default_version,
+ 'cluster_configs': plugin_data.get('cluster_configs', {}),
+ 'node_groups': all_node_groups,
+ }
+ return cluster_template
+ except (IndexError, KeyError):
+ return None
diff --git a/tempest/api/data_processing/test_cluster_templates.py b/tempest/api/data_processing/test_cluster_templates.py
index 8a63c3f..cebf493 100644
--- a/tempest/api/data_processing/test_cluster_templates.py
+++ b/tempest/api/data_processing/test_cluster_templates.py
@@ -15,6 +15,7 @@
from tempest_lib.common.utils import data_utils
from tempest.api.data_processing import base as dp_base
+from tempest import exceptions
from tempest import test
@@ -23,55 +24,30 @@
sahara/restapi/rest_api_v1.0.html#cluster-templates
"""
@classmethod
+ def skip_checks(cls):
+ super(ClusterTemplateTest, cls).skip_checks()
+ if cls.default_plugin is None:
+ raise cls.skipException("No Sahara plugins configured")
+
+ @classmethod
def resource_setup(cls):
super(ClusterTemplateTest, cls).resource_setup()
- # create node group template
- node_group_template = {
- 'name': data_utils.rand_name('sahara-ng-template'),
- 'description': 'Test node group template',
- 'plugin_name': 'vanilla',
- 'hadoop_version': '1.2.1',
- 'node_processes': ['datanode'],
- 'flavor_id': cls.flavor_ref,
- 'node_configs': {
- 'HDFS': {
- 'Data Node Heap Size': 1024
- }
- }
- }
- resp_body = cls.create_node_group_template(**node_group_template)
- node_group_template_id = resp_body['id']
- cls.full_cluster_template = {
- 'description': 'Test cluster template',
- 'plugin_name': 'vanilla',
- 'hadoop_version': '1.2.1',
- 'cluster_configs': {
- 'HDFS': {
- 'dfs.replication': 2
- },
- 'MapReduce': {
- 'mapred.map.tasks.speculative.execution': False,
- 'mapred.child.java.opts': '-Xmx500m'
- },
- 'general': {
- 'Enable Swift': False
- }
- },
- 'node_groups': [
- {
- 'name': 'master-node',
- 'flavor_id': cls.flavor_ref,
- 'node_processes': ['namenode'],
- 'count': 1
- },
- {
- 'name': 'worker-node',
- 'node_group_template_id': node_group_template_id,
- 'count': 3
- }
- ]
- }
+        # pre-define a node group template
+ node_group_template_w = cls.get_node_group_template('worker1')
+ if node_group_template_w is None:
+ raise exceptions.InvalidConfiguration(
+ message="No known Sahara plugin was found")
+
+ node_group_template_w['name'] = data_utils.rand_name(
+ 'sahara-ng-template')
+ resp_body = cls.create_node_group_template(**node_group_template_w)
+ node_group_template_id = resp_body['id']
+ configured_node_group_templates = {'worker1': node_group_template_id}
+
+ cls.full_cluster_template = cls.get_cluster_template(
+ configured_node_group_templates)
+
# create cls.cluster_template variable to use for comparison to cluster
# template response body. The 'node_groups' field in the response body
# has some extra info that post body does not have. The 'node_groups'
diff --git a/tempest/api/data_processing/test_node_group_templates.py b/tempest/api/data_processing/test_node_group_templates.py
index d7381f4..4068027 100644
--- a/tempest/api/data_processing/test_node_group_templates.py
+++ b/tempest/api/data_processing/test_node_group_templates.py
@@ -19,27 +19,16 @@
class NodeGroupTemplateTest(dp_base.BaseDataProcessingTest):
+
+ @classmethod
+ def skip_checks(cls):
+ super(NodeGroupTemplateTest, cls).skip_checks()
+ if cls.default_plugin is None:
+ raise cls.skipException("No Sahara plugins configured")
+
@classmethod
def resource_setup(cls):
super(NodeGroupTemplateTest, cls).resource_setup()
- cls.node_group_template = {
- 'description': 'Test node group template',
- 'plugin_name': 'vanilla',
- 'hadoop_version': '1.2.1',
- 'node_processes': [
- 'datanode',
- 'tasktracker'
- ],
- 'flavor_id': cls.flavor_ref,
- 'node_configs': {
- 'HDFS': {
- 'Data Node Heap Size': 1024
- },
- 'MapReduce': {
- 'Task Tracker Heap Size': 1024
- }
- }
- }
def _create_node_group_template(self, template_name=None):
"""Creates Node Group Template with optional name specified.
@@ -47,6 +36,10 @@
It creates template, ensures template name and response body.
Returns id and name of created template.
"""
+ self.node_group_template = self.get_node_group_template()
+ self.assertIsNotNone(self.node_group_template,
+ "No known Sahara plugin was found")
+
if not template_name:
# generate random name if it's not specified
template_name = data_utils.rand_name('sahara-ng-template')
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index d513b0c..74044dc 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -12,9 +12,8 @@
# License for the specific language governing permissions and limitations
# under the License.
-import cStringIO as StringIO
-
from oslo_log import log as logging
+from six import moves
from tempest_lib.common.utils import data_utils
from tempest_lib import exceptions as lib_exc
@@ -113,7 +112,7 @@
cls.alt_tenant_id = cls.alt_img_cli.tenant_id
def _create_image(self):
- image_file = StringIO.StringIO(data_utils.random_bytes())
+ image_file = moves.cStringIO(data_utils.random_bytes())
image = self.create_image(container_format='bare',
disk_format='raw',
is_public=False,
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
index bd672c9..49e167b 100644
--- a/tempest/api/image/v1/test_images.py
+++ b/tempest/api/image/v1/test_images.py
@@ -13,8 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-import cStringIO as StringIO
-
+from six import moves
from tempest_lib.common.utils import data_utils
from tempest.api.image import base
@@ -46,7 +45,7 @@
self.assertEqual(val, body.get('properties')[key])
# Now try uploading an image file
- image_file = StringIO.StringIO(data_utils.random_bytes())
+ image_file = moves.cStringIO(data_utils.random_bytes())
body = self.client.update_image(image_id, data=image_file)
self.assertIn('size', body)
self.assertEqual(1024, body.get('size'))
@@ -161,7 +160,7 @@
image. Note that the size of the new image is a random number between
1024 and 4096
"""
- image_file = StringIO.StringIO(data_utils.random_bytes(size))
+ image_file = moves.cStringIO(data_utils.random_bytes(size))
name = 'New Standard Image %s' % name
image = cls.create_image(name=name,
container_format=container_format,
@@ -257,7 +256,7 @@
Create a new standard image and return the ID of the newly-registered
image.
"""
- image_file = StringIO.StringIO(data_utils.random_bytes(size))
+ image_file = moves.cStringIO(data_utils.random_bytes(size))
name = 'New Standard Image %s' % name
image = cls.create_image(name=name,
container_format=container_format,
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index a00296c..ef0b5f1 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -14,9 +14,9 @@
# License for the specific language governing permissions and limitations
# under the License.
-import cStringIO as StringIO
import random
+from six import moves
from tempest_lib.common.utils import data_utils
from tempest.api.image import base
@@ -55,7 +55,7 @@
# Now try uploading an image file
file_content = data_utils.random_bytes()
- image_file = StringIO.StringIO(file_content)
+ image_file = moves.cStringIO(file_content)
self.client.store_image(image_id, image_file)
# Now try to get image details
@@ -108,7 +108,7 @@
image_id = body['id']
# Now try uploading an image file
- image_file = StringIO.StringIO(data_utils.random_bytes())
+ image_file = moves.cStringIO(data_utils.random_bytes())
self.client.store_image(image_id, image_file)
# Update Image
@@ -149,7 +149,7 @@
1024 and 4096
"""
size = random.randint(1024, 4096)
- image_file = StringIO.StringIO(data_utils.random_bytes(size))
+ image_file = moves.cStringIO(data_utils.random_bytes(size))
name = data_utils.rand_name('image')
body = cls.create_image(name=name,
container_format=container_format,
diff --git a/tempest/api/messaging/base.py b/tempest/api/messaging/base.py
index b3ed941..c4214f2 100644
--- a/tempest/api/messaging/base.py
+++ b/tempest/api/messaging/base.py
@@ -71,7 +71,7 @@
@classmethod
def check_queue_exists(cls, queue_name):
"""Wrapper utility that checks the existence of a test queue."""
- resp, body = cls.client.get_queue(queue_name)
+ resp, body = cls.client.show_queue(queue_name)
return resp, body
@classmethod
@@ -89,13 +89,13 @@
@classmethod
def get_queue_stats(cls, queue_name):
"""Wrapper utility that returns the queue stats."""
- resp, body = cls.client.get_queue_stats(queue_name)
+ resp, body = cls.client.show_queue_stats(queue_name)
return resp, body
@classmethod
def get_queue_metadata(cls, queue_name):
"""Wrapper utility that gets a queue metadata."""
- resp, body = cls.client.get_queue_metadata(queue_name)
+ resp, body = cls.client.show_queue_metadata(queue_name)
return resp, body
@classmethod
@@ -121,14 +121,14 @@
@classmethod
def get_single_message(cls, message_uri):
"""Wrapper utility that gets a single message."""
- resp, body = cls.client.get_single_message(message_uri)
+ resp, body = cls.client.show_single_message(message_uri)
return resp, body
@classmethod
def get_multiple_messages(cls, message_uri):
"""Wrapper utility that gets multiple messages."""
- resp, body = cls.client.get_multiple_messages(message_uri)
+ resp, body = cls.client.show_multiple_messages(message_uri)
return resp, body
diff --git a/tempest/api/messaging/test_messages.py b/tempest/api/messaging/test_messages.py
index f982f59..c8640b3 100644
--- a/tempest/api/messaging/test_messages.py
+++ b/tempest/api/messaging/test_messages.py
@@ -49,7 +49,7 @@
# Get on the posted messages
message_uri = resp['location']
- resp, _ = self.client.get_multiple_messages(message_uri)
+ resp, _ = self.client.show_multiple_messages(message_uri)
# The test has an assertion here, because the response cannot be 204
# in this case (the client allows 200 or 204 for this API call).
self.assertEqual('200', resp['status'])
@@ -74,7 +74,7 @@
message_uri = body['resources'][0]
# Get posted message
- resp, _ = self.client.get_single_message(message_uri)
+ resp, _ = self.client.show_single_message(message_uri)
# The test has an assertion here, because the response cannot be 204
# in this case (the client allows 200 or 204 for this API call).
self.assertEqual('200', resp['status'])
@@ -87,7 +87,7 @@
message_uri = resp['location']
# Get posted messages
- resp, _ = self.client.get_multiple_messages(message_uri)
+ resp, _ = self.client.show_multiple_messages(message_uri)
# The test has an assertion here, because the response cannot be 204
# in this case (the client allows 200 or 204 for this API call).
self.assertEqual('200', resp['status'])
@@ -103,7 +103,7 @@
self.client.delete_messages(message_uri)
message_uri = message_uri.replace('/messages/', '/messages?ids=')
- resp, _ = self.client.get_multiple_messages(message_uri)
+ resp, _ = self.client.show_multiple_messages(message_uri)
# The test has an assertion here, because the response has to be 204
# in this case (the client allows 200 or 204 for this API call).
self.assertEqual('204', resp['status'])
@@ -117,7 +117,7 @@
# Delete multiple messages
self.client.delete_messages(message_uri)
- resp, _ = self.client.get_multiple_messages(message_uri)
+ resp, _ = self.client.show_multiple_messages(message_uri)
# The test has an assertion here, because the response has to be 204
# in this case (the client allows 200 or 204 for this API call).
self.assertEqual('204', resp['status'])
diff --git a/tempest/api/messaging/test_queues.py b/tempest/api/messaging/test_queues.py
index c444e0b..2dac346 100644
--- a/tempest/api/messaging/test_queues.py
+++ b/tempest/api/messaging/test_queues.py
@@ -44,7 +44,7 @@
self.delete_queue(queue_name)
self.assertRaises(lib_exc.NotFound,
- self.client.get_queue,
+ self.client.show_queue,
queue_name)
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index cf0b5e3..fca57c6 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -65,6 +65,21 @@
msg = "L3 Agent Scheduler enabled in conf, but L3 Agent not found"
raise exceptions.InvalidConfiguration(msg)
cls.router = cls.create_router(data_utils.rand_name('router'))
+ # NOTE(armax): If DVR is an available extension, and the created router
+ # is indeed a distributed one, more resources need to be provisioned
+ # in order to bind the router to the L3 agent.
+ # That said, let's preserve the existing test logic, where the extra
+ # query and setup steps are only required if the extension is available
+ # and only if the router's default type is distributed.
+ if test.is_extension_enabled('dvr', 'network'):
+ is_dvr_router = cls.admin_client.show_router(
+ cls.router['id'])['router'].get('distributed', False)
+ if is_dvr_router:
+ cls.network = cls.create_network()
+ cls.create_subnet(cls.network)
+ cls.port = cls.create_port(cls.network)
+ cls.client.add_router_interface_with_port_id(
+ cls.router['id'], cls.port['id'])
@test.attr(type='smoke')
@test.idempotent_id('b7ce6e89-e837-4ded-9b78-9ed3c9c6a45a')
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index 2091eb5..5797e7f 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -13,7 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import cStringIO as StringIO
import hashlib
import random
import re
@@ -21,6 +20,7 @@
import zlib
import six
+from six import moves
from tempest_lib.common.utils import data_utils
from tempest.api.object_storage import base
@@ -216,7 +216,7 @@
status, _, resp_headers = self.object_client.put_object_with_chunk(
container=self.container_name,
name=object_name,
- contents=StringIO.StringIO(data),
+ contents=moves.cStringIO(data),
chunk_size=512)
self.assertHeaders(resp_headers, 'Object', 'PUT')
diff --git a/tempest/api/orchestration/base.py b/tempest/api/orchestration/base.py
index 59fdec0..d4b107e 100644
--- a/tempest/api/orchestration/base.py
+++ b/tempest/api/orchestration/base.py
@@ -199,5 +199,5 @@
for r in resources)
def get_stack_output(self, stack_identifier, output_key):
- body = self.client.get_stack(stack_identifier)
+ body = self.client.show_stack(stack_identifier)
return self.stack_output(body, output_key)
diff --git a/tempest/api/orchestration/stacks/test_neutron_resources.py b/tempest/api/orchestration/stacks/test_neutron_resources.py
index bcf091a..81e6e82 100644
--- a/tempest/api/orchestration/stacks/test_neutron_resources.py
+++ b/tempest/api/orchestration/stacks/test_neutron_resources.py
@@ -81,8 +81,8 @@
# attempt to log the server console to help with debugging
# the cause of the server not signalling the waitcondition
# to heat.
- body = cls.client.get_resource(cls.stack_identifier,
- 'Server')
+ body = cls.client.show_resource(cls.stack_identifier,
+ 'Server')
server_id = body['physical_resource_id']
LOG.debug('Console output for %s', server_id)
output = cls.servers_client.get_console_output(
diff --git a/tempest/api/orchestration/stacks/test_non_empty_stack.py b/tempest/api/orchestration/stacks/test_non_empty_stack.py
index 9c5a6d5..5f96de3 100644
--- a/tempest/api/orchestration/stacks/test_non_empty_stack.py
+++ b/tempest/api/orchestration/stacks/test_non_empty_stack.py
@@ -66,7 +66,7 @@
@test.idempotent_id('992f96e3-41ee-4ff6-91c7-bcfb670c0919')
def test_stack_show(self):
"""Getting details about created stack should be possible."""
- stack = self.client.get_stack(self.stack_name)
+ stack = self.client.show_stack(self.stack_name)
self.assertIsInstance(stack, dict)
self.assert_fields_in_dict(stack, 'stack_name', 'id', 'links',
'parameters', 'outputs', 'disable_rollback',
@@ -105,8 +105,8 @@
@test.idempotent_id('2aba03b3-392f-4237-900b-1f5a5e9bd962')
def test_show_resource(self):
"""Getting details about created resource should be possible."""
- resource = self.client.get_resource(self.stack_identifier,
- self.resource_name)
+ resource = self.client.show_resource(self.stack_identifier,
+ self.resource_name)
self.assertIsInstance(resource, dict)
self.assert_fields_in_dict(resource, 'resource_name', 'description',
'links', 'logical_resource_id',
diff --git a/tempest/api/orchestration/stacks/test_nova_keypair_resources.py b/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
index 28ef5a5..acdd4c7 100644
--- a/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
+++ b/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
@@ -73,7 +73,7 @@
@test.attr(type='gate')
@test.idempotent_id('8d77dec7-91fd-45a6-943d-5abd45e338a4')
def test_stack_keypairs_output(self):
- stack = self.client.get_stack(self.stack_name)
+ stack = self.client.show_stack(self.stack_name)
self.assertIsInstance(stack, dict)
output_map = {}
diff --git a/tempest/api/orchestration/stacks/test_resource_types.py b/tempest/api/orchestration/stacks/test_resource_types.py
index 32b0b8e..8f15f9c 100644
--- a/tempest/api/orchestration/stacks/test_resource_types.py
+++ b/tempest/api/orchestration/stacks/test_resource_types.py
@@ -32,7 +32,7 @@
self.assertNotEmpty(resource_types)
for resource_type in resource_types:
- type_schema = self.client.get_resource_type(resource_type)
+ type_schema = self.client.show_resource_type(resource_type)
self.assert_fields_in_dict(type_schema, 'properties',
'attributes', 'resource_type')
self.assertEqual(resource_type, type_schema['resource_type'])
@@ -41,7 +41,7 @@
@test.idempotent_id('8401821d-65fe-4d43-9fa3-57d5ce3a35c7')
def test_resource_type_template(self):
"""Verify it is possible to get template about resource types."""
- type_template = self.client.get_resource_type_template(
+ type_template = self.client.show_resource_type_template(
'OS::Nova::Server')
self.assert_fields_in_dict(
type_template,
diff --git a/tempest/api/orchestration/stacks/test_soft_conf.py b/tempest/api/orchestration/stacks/test_soft_conf.py
index 649bf47..13f0a6c 100644
--- a/tempest/api/orchestration/stacks/test_soft_conf.py
+++ b/tempest/api/orchestration/stacks/test_soft_conf.py
@@ -71,28 +71,28 @@
self.client.delete_software_deploy(deploy_id)
# Testing that it is really gone
self.assertRaises(
- lib_exc.NotFound, self.client.get_software_deploy,
+ lib_exc.NotFound, self.client.show_software_deployment,
self.deployment_id)
def _config_delete(self, config_id):
self.client.delete_software_config(config_id)
# Testing that it is really gone
self.assertRaises(
- lib_exc.NotFound, self.client.get_software_config, config_id)
+ lib_exc.NotFound, self.client.show_software_config, config_id)
@test.attr(type='smoke')
@test.idempotent_id('136162ed-9445-4b9c-b7fc-306af8b5da99')
def test_get_software_config(self):
"""Testing software config get."""
for conf in self.configs:
- api_config = self.client.get_software_config(conf['id'])
+ api_config = self.client.show_software_config(conf['id'])
self._validate_config(conf, api_config)
@test.attr(type='smoke')
@test.idempotent_id('1275c835-c967-4a2c-8d5d-ad533447ed91')
def test_get_deployment_list(self):
"""Getting a list of all deployments"""
- deploy_list = self.client.get_software_deploy_list()
+ deploy_list = self.client.list_software_deployments()
deploy_ids = [deploy['id'] for deploy in
deploy_list['software_deployments']]
self.assertIn(self.deployment_id, deploy_ids)
@@ -101,12 +101,13 @@
@test.idempotent_id('fe7cd9f9-54b1-429c-a3b7-7df8451db913')
def test_get_deployment_metadata(self):
"""Testing deployment metadata get"""
- metadata = self.client.get_software_deploy_meta(self.server_id)
+ metadata = self.client.show_software_deployment_metadata(
+ self.server_id)
conf_ids = [conf['id'] for conf in metadata['metadata']]
self.assertIn(self.configs[0]['id'], conf_ids)
def _validate_deployment(self, action, status, reason, config_id):
- deployment = self.client.get_software_deploy(self.deployment_id)
+ deployment = self.client.show_software_deployment(self.deployment_id)
self.assertEqual(action, deployment['software_deployment']['action'])
self.assertEqual(status, deployment['software_deployment']['status'])
self.assertEqual(reason,
@@ -131,7 +132,8 @@
@test.idempotent_id('2ac43ab3-34f2-415d-be2e-eabb4d14ee32')
def test_software_deployment_update_no_metadata_change(self):
"""Testing software deployment update without metadata change."""
- metadata = self.client.get_software_deploy_meta(self.server_id)
+ metadata = self.client.show_software_deployment_metadata(
+ self.server_id)
# Updating values without changing the configuration ID
new_action = 'ACTION_1'
new_status = 'STATUS_1'
@@ -145,7 +147,8 @@
new_reason, self.configs[0]['id'])
# Metadata should not be changed at this point
- test_metadata = self.client.get_software_deploy_meta(self.server_id)
+ test_metadata = self.client.show_software_deployment_metadata(
+ self.server_id)
for key in metadata['metadata'][0]:
self.assertEqual(
metadata['metadata'][0][key],
@@ -155,7 +158,8 @@
@test.idempotent_id('92c48944-d79d-4595-a840-8e1a581c1a72')
def test_software_deployment_update_with_metadata_change(self):
"""Testing software deployment update with metadata change."""
- metadata = self.client.get_software_deploy_meta(self.server_id)
+ metadata = self.client.show_software_deployment_metadata(
+ self.server_id)
self.client.update_software_deploy(
self.deployment_id, self.server_id, self.configs[1]['id'],
self.action, self.status, self.input_values,
@@ -163,7 +167,8 @@
self._validate_deployment(self.action, self.status,
self.status_reason, self.configs[1]['id'])
# Metadata should now be changed
- new_metadata = self.client.get_software_deploy_meta(self.server_id)
+ new_metadata = self.client.show_software_deployment_metadata(
+ self.server_id)
# It's enough to test the ID in this case
meta_id = metadata['metadata'][0]['id']
test_id = new_metadata['metadata'][0]['id']
diff --git a/tempest/api/orchestration/stacks/test_stacks.py b/tempest/api/orchestration/stacks/test_stacks.py
index 147f456..9ce8ebeb 100644
--- a/tempest/api/orchestration/stacks/test_stacks.py
+++ b/tempest/api/orchestration/stacks/test_stacks.py
@@ -52,15 +52,15 @@
self.assertIn(stack_id, list_ids)
# fetch the stack
- stack = self.client.get_stack(stack_identifier)
+ stack = self.client.show_stack(stack_identifier)
self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
# fetch the stack by name
- stack = self.client.get_stack(stack_name)
+ stack = self.client.show_stack(stack_name)
self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
# fetch the stack by id
- stack = self.client.get_stack(stack_id)
+ stack = self.client.show_stack(stack_id)
self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
# delete the stack
diff --git a/tempest/api/orchestration/stacks/test_volumes.py b/tempest/api/orchestration/stacks/test_volumes.py
index 5f03e16..2b1ec12 100644
--- a/tempest/api/orchestration/stacks/test_volumes.py
+++ b/tempest/api/orchestration/stacks/test_volumes.py
@@ -34,7 +34,7 @@
def _cinder_verify(self, volume_id, template):
self.assertIsNotNone(volume_id)
- volume = self.volumes_client.get_volume(volume_id)
+ volume = self.volumes_client.show_volume(volume_id)
self.assertEqual('available', volume.get('status'))
self.assertEqual(template['resources']['volume']['properties'][
'size'], volume.get('size'))
@@ -76,7 +76,7 @@
self.client.delete_stack(stack_identifier)
self.client.wait_for_stack_status(stack_identifier, 'DELETE_COMPLETE')
self.assertRaises(lib_exc.NotFound,
- self.volumes_client.get_volume,
+ self.volumes_client.show_volume,
volume_id)
def _cleanup_volume(self, volume_id):
diff --git a/tempest/api/telemetry/test_telemetry_alarming_api.py b/tempest/api/telemetry/test_telemetry_alarming_api.py
index 8bc97e8..d106b28 100644
--- a/tempest/api/telemetry/test_telemetry_alarming_api.py
+++ b/tempest/api/telemetry/test_telemetry_alarming_api.py
@@ -67,13 +67,13 @@
self.assertEqual(alarm_name, body['name'])
self.assertDictContainsSubset(new_rule, body['threshold_rule'])
# Get and verify details of an alarm after update
- body = self.telemetry_client.get_alarm(alarm_id)
+ body = self.telemetry_client.show_alarm(alarm_id)
self.assertEqual(alarm_name, body['name'])
self.assertDictContainsSubset(new_rule, body['threshold_rule'])
# Delete alarm and verify if deleted
self.telemetry_client.delete_alarm(alarm_id)
self.assertRaises(lib_exc.NotFound,
- self.telemetry_client.get_alarm, alarm_id)
+ self.telemetry_client.show_alarm, alarm_id)
@test.attr(type="gate")
@test.idempotent_id('aca49486-70bb-4016-87e0-f6131374f741')
@@ -87,7 +87,7 @@
new_state)
self.assertEqual(new_state, state.data)
# Get alarm state and verify
- state = self.telemetry_client.alarm_get_state(alarm['alarm_id'])
+ state = self.telemetry_client.show_alarm_state(alarm['alarm_id'])
self.assertEqual(new_state, state.data)
@test.attr(type="gate")
@@ -106,4 +106,4 @@
# Verify alarm delete
self.telemetry_client.delete_alarm(alarm_id)
self.assertRaises(lib_exc.NotFound,
- self.telemetry_client.get_alarm, alarm_id)
+ self.telemetry_client.show_alarm, alarm_id)
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index ad5eb7d..db2d143 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -139,7 +139,7 @@
# the multi backend feature has been enabled
# if multi-backend is enabled: os-vol-attr:host should be like:
# host@backend_name
- volume = self.admin_volume_client.get_volume(volume_id)
+ volume = self.admin_volume_client.show_volume(volume_id)
volume1_host = volume['os-vol-host-attr:host']
msg = ("multi-backend reporting incorrect values for volume %s" %
@@ -150,10 +150,10 @@
# this test checks that the two volumes created at setUp don't
# belong to the same backend (if they are, then the
# volume backend distinction is not working properly)
- volume = self.admin_volume_client.get_volume(volume1_id)
+ volume = self.admin_volume_client.show_volume(volume1_id)
volume1_host = volume['os-vol-host-attr:host']
- volume = self.admin_volume_client.get_volume(volume2_id)
+ volume = self.admin_volume_client.show_volume(volume2_id)
volume2_host = volume['os-vol-host-attr:host']
msg = ("volumes %s and %s were created in the same backend" %
diff --git a/tempest/api/volume/admin/test_snapshots_actions.py b/tempest/api/volume/admin/test_snapshots_actions.py
index db026c1..d6e3f3e 100644
--- a/tempest/api/volume/admin/test_snapshots_actions.py
+++ b/tempest/api/volume/admin/test_snapshots_actions.py
@@ -89,7 +89,7 @@
self.admin_snapshots_client.\
reset_snapshot_status(self.snapshot['id'], status)
snapshot_get \
- = self.admin_snapshots_client.get_snapshot(self.snapshot['id'])
+ = self.admin_snapshots_client.show_snapshot(self.snapshot['id'])
self.assertEqual(status, snapshot_get['status'])
@test.attr(type='gate')
@@ -107,7 +107,7 @@
self.client.update_snapshot_status(self.snapshot['id'],
status, progress)
snapshot_get \
- = self.admin_snapshots_client.get_snapshot(self.snapshot['id'])
+ = self.admin_snapshots_client.show_snapshot(self.snapshot['id'])
self.assertEqual(status, snapshot_get['status'])
self.assertEqual(progress, snapshot_get[progress_alias])
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index 86d90f6..3ec3219 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -32,14 +32,14 @@
@test.attr(type='gate')
@test.idempotent_id('59eada70-403c-4cef-a2a3-a8ce2f1b07a0')
def test_list_quotas(self):
- quotas = self.quotas_client.get_quota_set(self.demo_tenant_id)
+ quotas = self.quotas_client.show_quota_set(self.demo_tenant_id)
for key in QUOTA_KEYS:
self.assertIn(key, quotas)
@test.attr(type='gate')
@test.idempotent_id('2be020a2-5fdd-423d-8d35-a7ffbc36e9f7')
def test_list_default_quotas(self):
- quotas = self.quotas_client.get_default_quota_set(
+ quotas = self.quotas_client.show_default_quota_set(
self.demo_tenant_id)
for key in QUOTA_KEYS:
self.assertIn(key, quotas)
@@ -48,7 +48,7 @@
@test.idempotent_id('3d45c99e-cc42-4424-a56e-5cbd212b63a6')
def test_update_all_quota_resources_for_tenant(self):
# Admin can update all the resource quota limits for a tenant
- default_quota_set = self.quotas_client.get_default_quota_set(
+ default_quota_set = self.quotas_client.show_default_quota_set(
self.demo_tenant_id)
new_quota_set = {'gigabytes': 1009,
'volumes': 11,
@@ -72,7 +72,7 @@
@test.attr(type='gate')
@test.idempotent_id('18c51ae9-cb03-48fc-b234-14a19374dbed')
def test_show_quota_usage(self):
- quota_usage = self.quotas_client.get_quota_usage(
+ quota_usage = self.quotas_client.show_quota_usage(
self.os_adm.credentials.tenant_id)
for key in QUOTA_KEYS:
self.assertIn(key, quota_usage)
@@ -82,14 +82,14 @@
@test.attr(type='gate')
@test.idempotent_id('ae8b6091-48ad-4bfa-a188-bbf5cc02115f')
def test_quota_usage(self):
- quota_usage = self.quotas_client.get_quota_usage(
+ quota_usage = self.quotas_client.show_quota_usage(
self.demo_tenant_id)
volume = self.create_volume()
self.addCleanup(self.admin_volume_client.delete_volume,
volume['id'])
- new_quota_usage = self.quotas_client.get_quota_usage(
+ new_quota_usage = self.quotas_client.show_quota_usage(
self.demo_tenant_id)
self.assertEqual(quota_usage['volumes']['in_use'] + 1,
@@ -108,7 +108,7 @@
tenant = identity_client.create_tenant(tenant_name)
tenant_id = tenant['id']
self.addCleanup(identity_client.delete_tenant, tenant_id)
- quota_set_default = self.quotas_client.get_default_quota_set(
+ quota_set_default = self.quotas_client.show_default_quota_set(
tenant_id)
volume_default = quota_set_default['volumes']
@@ -116,7 +116,7 @@
volumes=(int(volume_default) + 5))
self.quotas_client.delete_quota_set(tenant_id)
- quota_set_new = self.quotas_client.get_quota_set(tenant_id)
+ quota_set_new = self.quotas_client.show_quota_set(tenant_id)
self.assertEqual(volume_default, quota_set_new['volumes'])
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
index 681a48a..048b02c 100644
--- a/tempest/api/volume/admin/test_volume_types.py
+++ b/tempest/api/volume/admin/test_volume_types.py
@@ -77,7 +77,7 @@
self.volumes_client.wait_for_volume_status(volume['id'], 'available')
# Get volume details and Verify
- fetched_volume = self.volumes_client.get_volume(volume['id'])
+ fetched_volume = self.volumes_client.show_volume(volume['id'])
self.assertEqual(volume_types[1]['name'],
fetched_volume['volume_type'],
'The fetched Volume type is different '
@@ -110,7 +110,7 @@
"to the requested name")
self.assertTrue(body['id'] is not None,
"Field volume_type id is empty or not found.")
- fetched_volume_type = self.volume_types_client.get_volume_type(
+ fetched_volume_type = self.volume_types_client.show_volume_type(
body['id'])
self.assertEqual(name, fetched_volume_type['name'],
'The fetched Volume_type is different '
@@ -146,7 +146,7 @@
# Get encryption type
fetched_encryption_type = (
- self.volume_types_client.get_encryption_type(
+ self.volume_types_client.show_encryption_type(
encryption_type['volume_type_id']))
self.assertEqual(provider,
fetched_encryption_type['provider'],
@@ -164,7 +164,7 @@
"type": "encryption-type"}
self.volume_types_client.wait_for_resource_deletion(resource)
deleted_encryption_type = (
- self.volume_types_client.get_encryption_type(
+ self.volume_types_client.show_encryption_type(
encryption_type['volume_type_id']))
self.assertEmpty(deleted_encryption_type)
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs.py b/tempest/api/volume/admin/test_volume_types_extra_specs.py
index f382a67..0f4dbe5 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs.py
@@ -77,7 +77,7 @@
self.assertEqual(extra_specs, body,
"Volume type extra spec incorrectly created")
- self.volume_types_client.get_volume_type_extra_specs(
+ self.volume_types_client.show_volume_type_extra_specs(
self.volume_type['id'],
extra_specs.keys()[0])
self.assertEqual(extra_specs, body,
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
index 7775025..e861c5f 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
@@ -137,7 +137,7 @@
extra_specs = {"spec1": "val1"}
self.assertRaises(
lib_exc.NotFound,
- self.volume_types_client.get_volume_type_extra_specs,
+ self.volume_types_client.show_volume_type_extra_specs,
str(uuid.uuid4()), extra_specs.keys()[0])
@test.attr(type='gate')
@@ -147,7 +147,7 @@
# id.
self.assertRaises(
lib_exc.NotFound,
- self.volume_types_client.get_volume_type_extra_specs,
+ self.volume_types_client.show_volume_type_extra_specs,
self.volume_type['id'], str(uuid.uuid4()))
diff --git a/tempest/api/volume/admin/test_volume_types_negative.py b/tempest/api/volume/admin/test_volume_types_negative.py
index d2bf777..d9be337 100644
--- a/tempest/api/volume/admin/test_volume_types_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_negative.py
@@ -45,7 +45,7 @@
def test_get_nonexistent_type_id(self):
# Should not be able to get volume type with nonexistent type id.
self.assertRaises(lib_exc.NotFound,
- self.volume_types_client.get_volume_type,
+ self.volume_types_client.show_volume_type,
str(uuid.uuid4()))
@test.attr(type='gate')
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index 1b69549..feb46a3 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -79,7 +79,7 @@
def test_volume_reset_status(self):
# test volume reset status : available->error->available
self._reset_volume_status(self.volume['id'], 'error')
- volume_get = self.admin_volume_client.get_volume(
+ volume_get = self.admin_volume_client.show_volume(
self.volume['id'])
self.assertEqual('error', volume_get['status'])
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index 6fd2a5e..2d830c8 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -55,11 +55,11 @@
'available')
# Get a given backup
- backup = self.backups_adm_client.get_backup(backup['id'])
+ backup = self.backups_adm_client.show_backup(backup['id'])
self.assertEqual(backup_name, backup['name'])
# Get all backups with detail
- backups = self.backups_adm_client.list_backups_with_detail()
+ backups = self.backups_adm_client.list_backups(detail=True)
self.assertIn((backup['name'], backup['id']),
[(m['name'], m['id']) for m in backups])
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 1f76b1c..28676b0 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -18,6 +18,7 @@
from tempest_lib import exceptions as lib_exc
from tempest import clients
+from tempest.common import credentials
from tempest.common import fixed_network
from tempest import config
from tempest import exceptions
@@ -175,14 +176,17 @@
"""Base test case class for all Volume Admin API tests."""
@classmethod
+ def skip_checks(cls):
+ super(BaseVolumeAdminTest, cls).skip_checks()
+ if not credentials.is_admin_available():
+ msg = ("Missing Identity Admin API credentials in configuration.")
+ raise cls.skipException(msg)
+
+ @classmethod
def setup_credentials(cls):
super(BaseVolumeAdminTest, cls).setup_credentials()
- try:
- cls.adm_creds = cls.isolated_creds.get_admin_creds()
- cls.os_adm = clients.Manager(credentials=cls.adm_creds)
- except NotImplementedError:
- msg = "Missing Volume Admin API credentials in configuration."
- raise cls.skipException(msg)
+ cls.adm_creds = cls.isolated_creds.get_admin_creds()
+ cls.os_adm = clients.Manager(credentials=cls.adm_creds)
@classmethod
def setup_clients(cls):
diff --git a/tempest/api/volume/test_availability_zone.py b/tempest/api/volume/test_availability_zone.py
index e63cfcd..d544821 100644
--- a/tempest/api/volume/test_availability_zone.py
+++ b/tempest/api/volume/test_availability_zone.py
@@ -32,7 +32,7 @@
@test.idempotent_id('01f1ae88-eba9-4c6b-a011-6f7ace06b725')
def test_get_availability_zone_list(self):
# List of availability zone
- availability_zone = self.client.get_availability_zone_list()
+ availability_zone = self.client.list_availability_zones()
self.assertTrue(len(availability_zone) > 0)
diff --git a/tempest/api/volume/test_qos.py b/tempest/api/volume/test_qos.py
index f806790..edece79 100644
--- a/tempest/api/volume/test_qos.py
+++ b/tempest/api/volume/test_qos.py
@@ -64,7 +64,7 @@
self.created_qos['id'], vol_type_id)
def _test_get_association_qos(self):
- body = self.volume_qos_client.get_association_qos(
+ body = self.volume_qos_client.show_association_qos(
self.created_qos['id'])
associations = []
@@ -102,7 +102,7 @@
@test.idempotent_id('7aa214cc-ac1a-4397-931f-3bb2e83bb0fd')
def test_get_qos(self):
"""Tests the detail of a given qos-specs"""
- body = self.volume_qos_client.get_qos(self.created_qos['id'])
+ body = self.volume_qos_client.show_qos(self.created_qos['id'])
self.assertEqual(self.qos_name, body['name'])
self.assertEqual(self.qos_consumer, body['consumer'])
@@ -121,7 +121,7 @@
body = self.volume_qos_client.set_qos_key(self.created_qos['id'],
iops_bytes='500')
self.assertEqual(args, body)
- body = self.volume_qos_client.get_qos(self.created_qos['id'])
+ body = self.volume_qos_client.show_qos(self.created_qos['id'])
self.assertEqual(args['iops_bytes'], body['specs']['iops_bytes'])
# test the deletion of a specs key from qos-specs
@@ -130,7 +130,7 @@
operation = 'qos-key-unset'
self.volume_qos_client.wait_for_qos_operations(self.created_qos['id'],
operation, keys)
- body = self.volume_qos_client.get_qos(self.created_qos['id'])
+ body = self.volume_qos_client.show_qos(self.created_qos['id'])
self.assertNotIn(keys[0], body['specs'])
@test.attr(type='smoke')
diff --git a/tempest/api/volume/test_snapshot_metadata.py b/tempest/api/volume/test_snapshot_metadata.py
index d4efc2a..536648d 100644
--- a/tempest/api/volume/test_snapshot_metadata.py
+++ b/tempest/api/volume/test_snapshot_metadata.py
@@ -50,12 +50,12 @@
body = self.client.create_snapshot_metadata(self.snapshot_id,
metadata)
# Get the metadata of the snapshot
- body = self.client.get_snapshot_metadata(self.snapshot_id)
+ body = self.client.show_snapshot_metadata(self.snapshot_id)
self.assertEqual(metadata, body)
# Delete one item metadata of the snapshot
self.client.delete_snapshot_metadata_item(
self.snapshot_id, "key1")
- body = self.client.get_snapshot_metadata(self.snapshot_id)
+ body = self.client.show_snapshot_metadata(self.snapshot_id)
self.assertEqual(expected, body)
@test.attr(type='gate')
@@ -71,13 +71,13 @@
body = self.client.create_snapshot_metadata(self.snapshot_id,
metadata)
# Get the metadata of the snapshot
- body = self.client.get_snapshot_metadata(self.snapshot_id)
+ body = self.client.show_snapshot_metadata(self.snapshot_id)
self.assertEqual(metadata, body)
# Update metadata item
body = self.client.update_snapshot_metadata(
self.snapshot_id, update)
# Get the metadata of the snapshot
- body = self.client.get_snapshot_metadata(self.snapshot_id)
+ body = self.client.show_snapshot_metadata(self.snapshot_id)
self.assertEqual(update, body)
@test.attr(type='gate')
@@ -95,13 +95,13 @@
body = self.client.create_snapshot_metadata(self.snapshot_id,
metadata)
# Get the metadata of the snapshot
- body = self.client.get_snapshot_metadata(self.snapshot_id)
+ body = self.client.show_snapshot_metadata(self.snapshot_id)
self.assertEqual(metadata, body)
# Update metadata item
body = self.client.update_snapshot_metadata_item(
self.snapshot_id, "key3", update_item)
# Get the metadata of the snapshot
- body = self.client.get_snapshot_metadata(self.snapshot_id)
+ body = self.client.show_snapshot_metadata(self.snapshot_id)
self.assertEqual(expect, body)
diff --git a/tempest/api/volume/test_volume_metadata.py b/tempest/api/volume/test_volume_metadata.py
index e601349..a0e1161 100644
--- a/tempest/api/volume/test_volume_metadata.py
+++ b/tempest/api/volume/test_volume_metadata.py
@@ -45,12 +45,12 @@
body = self.volumes_client.create_volume_metadata(self.volume_id,
metadata)
# Get the metadata of the volume
- body = self.volumes_client.get_volume_metadata(self.volume_id)
+ body = self.volumes_client.show_volume_metadata(self.volume_id)
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
# Delete one item metadata of the volume
self.volumes_client.delete_volume_metadata_item(
self.volume_id, "key1")
- body = self.volumes_client.get_volume_metadata(self.volume_id)
+ body = self.volumes_client.show_volume_metadata(self.volume_id)
self.assertNotIn("key1", body)
del metadata["key1"]
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
@@ -70,13 +70,13 @@
body = self.volumes_client.create_volume_metadata(
self.volume_id, metadata)
# Get the metadata of the volume
- body = self.volumes_client.get_volume_metadata(self.volume_id)
+ body = self.volumes_client.show_volume_metadata(self.volume_id)
self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
# Update metadata
body = self.volumes_client.update_volume_metadata(
self.volume_id, update)
# Get the metadata of the volume
- body = self.volumes_client.get_volume_metadata(self.volume_id)
+ body = self.volumes_client.show_volume_metadata(self.volume_id)
self.assertThat(body.items(), matchers.ContainsAll(update.items()))
@test.attr(type='gate')
@@ -98,7 +98,7 @@
body = self.volumes_client.update_volume_metadata_item(
self.volume_id, "key3", update_item)
# Get the metadata of the volume
- body = self.volumes_client.get_volume_metadata(self.volume_id)
+ body = self.volumes_client.show_volume_metadata(self.volume_id)
self.assertThat(body.items(), matchers.ContainsAll(expect.items()))
diff --git a/tempest/api/volume/test_volume_transfers.py b/tempest/api/volume/test_volume_transfers.py
index 40947df..4acab39 100644
--- a/tempest/api/volume/test_volume_transfers.py
+++ b/tempest/api/volume/test_volume_transfers.py
@@ -71,7 +71,7 @@
'awaiting-transfer')
# Get a volume transfer
- body = self.client.get_volume_transfer(transfer_id)
+ body = self.client.show_volume_transfer(transfer_id)
self.assertEqual(volume['id'], body['volume_id'])
# List volume transfers, the result should be greater than
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 1872ec7..fecb98b 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -86,7 +86,7 @@
self.volume['id'],
'available')
self.addCleanup(self.client.detach_volume, self.volume['id'])
- volume = self.client.get_volume(self.volume['id'])
+ volume = self.client.show_volume(self.volume['id'])
self.assertIn('attachments', volume)
attachment = self.client.get_attachment_from_volume(volume)
self.assertEqual(mountpoint, attachment['device'])
@@ -117,12 +117,12 @@
# Mark volume as reserved.
body = self.client.reserve_volume(self.volume['id'])
# To get the volume info
- body = self.client.get_volume(self.volume['id'])
+ body = self.client.show_volume(self.volume['id'])
self.assertIn('attaching', body['status'])
# Unmark volume as reserved.
body = self.client.unreserve_volume(self.volume['id'])
# To get the volume info
- body = self.client.get_volume(self.volume['id'])
+ body = self.client.show_volume(self.volume['id'])
self.assertIn('available', body['status'])
def _is_true(self, val):
@@ -136,7 +136,7 @@
self.client.update_volume_readonly(self.volume['id'],
readonly)
# Get Volume information
- fetched_volume = self.client.get_volume(self.volume['id'])
+ fetched_volume = self.client.show_volume(self.volume['id'])
bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
self.assertEqual(True, bool_flag)
@@ -145,7 +145,7 @@
self.client.update_volume_readonly(self.volume['id'], readonly)
# Get Volume information
- fetched_volume = self.client.get_volume(self.volume['id'])
+ fetched_volume = self.client.show_volume(self.volume['id'])
bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
self.assertEqual(False, bool_flag)
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 35c12bc..38bb748 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -35,7 +35,7 @@
extend_size = int(self.volume['size']) + 1
self.client.extend_volume(self.volume['id'], extend_size)
self.client.wait_for_volume_status(self.volume['id'], 'available')
- volume = self.client.get_volume(self.volume['id'])
+ volume = self.client.show_volume(self.volume['id'])
self.assertEqual(int(volume['size']), extend_size)
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
index 1fa1d5f..1027f48 100644
--- a/tempest/api/volume/test_volumes_get.py
+++ b/tempest/api/volume/test_volumes_get.py
@@ -60,7 +60,7 @@
self.assertTrue(volume['id'] is not None,
"Field volume id is empty or not found.")
# Get Volume information
- fetched_volume = self.client.get_volume(volume['id'])
+ fetched_volume = self.client.show_volume(volume['id'])
self.assertEqual(v_name,
fetched_volume[self.name_field],
'The fetched Volume name is different '
@@ -92,8 +92,8 @@
# Assert response body for update_volume method
self.assertEqual(new_v_name, update_volume[self.name_field])
self.assertEqual(new_desc, update_volume[self.descrip_field])
- # Assert response body for get_volume method
- updated_volume = self.client.get_volume(volume['id'])
+ # Assert response body for show_volume method
+ updated_volume = self.client.show_volume(volume['id'])
self.assertEqual(volume['id'], updated_volume['id'])
self.assertEqual(new_v_name, updated_volume[self.name_field])
self.assertEqual(new_desc, updated_volume[self.descrip_field])
diff --git a/tempest/api/volume/test_volumes_list.py b/tempest/api/volume/test_volumes_list.py
index 29e3324..1c7b1c8 100644
--- a/tempest/api/volume/test_volumes_list.py
+++ b/tempest/api/volume/test_volumes_list.py
@@ -70,7 +70,7 @@
cls.metadata = {'Type': 'work'}
for i in range(3):
volume = cls.create_volume(metadata=cls.metadata)
- volume = cls.client.get_volume(volume['id'])
+ volume = cls.client.show_volume(volume['id'])
cls.volume_list.append(volume)
cls.volume_id_list.append(volume['id'])
@@ -89,7 +89,7 @@
"""
if with_detail:
fetched_vol_list = \
- self.client.list_volumes_with_detail(params=params)
+ self.client.list_volumes(detail=True, params=params)
else:
fetched_vol_list = self.client.list_volumes(params=params)
@@ -125,7 +125,7 @@
def test_volume_list_with_details(self):
# Get a list of Volumes with details
# Fetch all Volumes
- fetched_list = self.client.list_volumes_with_detail()
+ fetched_list = self.client.list_volumes(detail=True)
self.assertVolumesIn(fetched_list, self.volume_list)
@test.attr(type='gate')
@@ -133,7 +133,7 @@
def test_volume_list_by_name(self):
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
params = {self.name: volume[self.name]}
- fetched_vol = self.client.list_volumes(params)
+ fetched_vol = self.client.list_volumes(params=params)
self.assertEqual(1, len(fetched_vol), str(fetched_vol))
self.assertEqual(fetched_vol[0][self.name],
volume[self.name])
@@ -143,7 +143,7 @@
def test_volume_list_details_by_name(self):
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
params = {self.name: volume[self.name]}
- fetched_vol = self.client.list_volumes_with_detail(params)
+ fetched_vol = self.client.list_volumes(detail=True, params=params)
self.assertEqual(1, len(fetched_vol), str(fetched_vol))
self.assertEqual(fetched_vol[0][self.name],
volume[self.name])
@@ -152,7 +152,7 @@
@test.idempotent_id('39654e13-734c-4dab-95ce-7613bf8407ce')
def test_volumes_list_by_status(self):
params = {'status': 'available'}
- fetched_list = self.client.list_volumes(params)
+ fetched_list = self.client.list_volumes(params=params)
self._list_by_param_value_and_assert(params)
self.assertVolumesIn(fetched_list, self.volume_list,
fields=self.VOLUME_FIELDS)
@@ -161,7 +161,7 @@
@test.idempotent_id('2943f712-71ec-482a-bf49-d5ca06216b9f')
def test_volumes_list_details_by_status(self):
params = {'status': 'available'}
- fetched_list = self.client.list_volumes_with_detail(params)
+ fetched_list = self.client.list_volumes(detail=True, params=params)
for volume in fetched_list:
self.assertEqual('available', volume['status'])
self.assertVolumesIn(fetched_list, self.volume_list)
@@ -172,7 +172,7 @@
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
zone = volume['availability_zone']
params = {'availability_zone': zone}
- fetched_list = self.client.list_volumes(params)
+ fetched_list = self.client.list_volumes(params=params)
self._list_by_param_value_and_assert(params)
self.assertVolumesIn(fetched_list, self.volume_list,
fields=self.VOLUME_FIELDS)
@@ -183,7 +183,7 @@
volume = self.volume_list[data_utils.rand_int_id(0, 2)]
zone = volume['availability_zone']
params = {'availability_zone': zone}
- fetched_list = self.client.list_volumes_with_detail(params)
+ fetched_list = self.client.list_volumes(detail=True, params=params)
for volume in fetched_list:
self.assertEqual(zone, volume['availability_zone'])
self.assertVolumesIn(fetched_list, self.volume_list)
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index a47e964..aba245a 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -43,7 +43,7 @@
@test.idempotent_id('f131c586-9448-44a4-a8b0-54ca838aa43e')
def test_volume_get_nonexistent_volume_id(self):
# Should not be able to get a non-existent volume
- self.assertRaises(lib_exc.NotFound, self.client.get_volume,
+ self.assertRaises(lib_exc.NotFound, self.client.show_volume,
str(uuid.uuid4()))
@test.attr(type=['negative', 'gate'])
@@ -152,14 +152,14 @@
@test.idempotent_id('30799cfd-7ee4-446c-b66c-45b383ed211b')
def test_get_invalid_volume_id(self):
# Should not be able to get volume with invalid id
- self.assertRaises(lib_exc.NotFound, self.client.get_volume,
+ self.assertRaises(lib_exc.NotFound, self.client.show_volume,
'#$%%&^&^')
@test.attr(type=['negative', 'gate'])
@test.idempotent_id('c6c3db06-29ad-4e91-beb0-2ab195fe49e3')
def test_get_volume_without_passing_volume_id(self):
# Should not be able to get volume when empty ID is passed
- self.assertRaises(lib_exc.NotFound, self.client.get_volume, '')
+ self.assertRaises(lib_exc.NotFound, self.client.show_volume, '')
@test.attr(type=['negative', 'gate'])
@test.idempotent_id('1f035827-7c32-4019-9240-b4ec2dbd9dfd')
@@ -266,7 +266,7 @@
def test_list_volumes_with_nonexistent_name(self):
v_name = data_utils.rand_name('Volume')
params = {self.name_field: v_name}
- fetched_volume = self.client.list_volumes(params)
+ fetched_volume = self.client.list_volumes(params=params)
self.assertEqual(0, len(fetched_volume))
@test.attr(type=['negative', 'gate'])
@@ -275,14 +275,14 @@
v_name = data_utils.rand_name('Volume')
params = {self.name_field: v_name}
fetched_volume = \
- self.client.list_volumes_with_detail(params)
+ self.client.list_volumes(detail=True, params=params)
self.assertEqual(0, len(fetched_volume))
@test.attr(type=['negative', 'gate'])
@test.idempotent_id('143b279b-7522-466b-81be-34a87d564a7c')
def test_list_volumes_with_invalid_status(self):
params = {'status': 'null'}
- fetched_volume = self.client.list_volumes(params)
+ fetched_volume = self.client.list_volumes(params=params)
self.assertEqual(0, len(fetched_volume))
@test.attr(type=['negative', 'gate'])
@@ -290,7 +290,7 @@
def test_list_volumes_detail_with_invalid_status(self):
params = {'status': 'null'}
fetched_volume = \
- self.client.list_volumes_with_detail(params)
+ self.client.list_volumes(detail=True, params=params)
self.assertEqual(0, len(fetched_volume))
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index b277390..2c15f92 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -50,7 +50,7 @@
if with_detail:
fetched_snap_list = \
self.snapshots_client.\
- list_snapshots_with_detail(params=params)
+ list_snapshots(detail=True, params=params)
else:
fetched_snap_list = \
self.snapshots_client.list_snapshots(params=params)
@@ -98,7 +98,7 @@
snapshot = self.create_snapshot(self.volume_origin['id'], **params)
# Get the snap and check for some of its details
- snap_get = self.snapshots_client.get_snapshot(snapshot['id'])
+ snap_get = self.snapshots_client.show_snapshot(snapshot['id'])
self.assertEqual(self.volume_origin['id'],
snap_get['volume_id'],
"Referred volume origin mismatch")
@@ -119,9 +119,9 @@
# Assert response body for update_snapshot method
self.assertEqual(new_s_name, update_snapshot[self.name_field])
self.assertEqual(new_desc, update_snapshot[self.descrip_field])
- # Assert response body for get_snapshot method
+ # Assert response body for show_snapshot method
updated_snapshot = \
- self.snapshots_client.get_snapshot(snapshot['id'])
+ self.snapshots_client.show_snapshot(snapshot['id'])
self.assertEqual(new_s_name, updated_snapshot[self.name_field])
self.assertEqual(new_desc, updated_snapshot[self.descrip_field])
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index f6b52a9..04ea361 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -45,7 +45,7 @@
cls.metadata = {'Type': 'work'}
for i in range(3):
volume = cls.create_volume(metadata=cls.metadata)
- volume = cls.client.get_volume(volume['id'])
+ volume = cls.client.show_volume(volume['id'])
cls.volume_list.append(volume)
cls.volume_id_list.append(volume['id'])
@@ -70,7 +70,8 @@
'sort_dir': sort_dir,
'sort_key': sort_key
}
- fetched_volume = self.client.list_volumes_with_detail(params)
+ fetched_volume = self.client.list_volumes(detail=True,
+ params=params)
self.assertEqual(limit, len(fetched_volume),
"The count of volumes is %s, expected:%s " %
(len(fetched_volume), limit))
diff --git a/tempest/api_schema/response/compute/flavors.py b/tempest/api_schema/response/compute/flavors.py
deleted file mode 100644
index 65f2c28..0000000
--- a/tempest/api_schema/response/compute/flavors.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest.api_schema.response.compute import parameter_types
-
-list_flavors = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'flavors': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'name': {'type': 'string'},
- 'links': parameter_types.links,
- 'id': {'type': 'string'}
- },
- 'required': ['name', 'links', 'id']
- }
- },
- 'flavors_links': parameter_types.links
- },
- # NOTE(gmann): flavors_links attribute is not necessary
- # to be present always So it is not 'required'.
- 'required': ['flavors']
- }
-}
-
-common_flavor_info = {
- 'type': 'object',
- 'properties': {
- 'name': {'type': 'string'},
- 'links': parameter_types.links,
- 'ram': {'type': 'integer'},
- 'vcpus': {'type': 'integer'},
- 'swap': {'type': 'integer'},
- 'disk': {'type': 'integer'},
- 'id': {'type': 'string'}
- },
- 'required': ['name', 'links', 'ram', 'vcpus',
- 'swap', 'disk', 'id']
-}
-
-common_flavor_list_details = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'flavors': {
- 'type': 'array',
- 'items': common_flavor_info
- }
- },
- 'required': ['flavors']
- }
-}
-
-common_flavor_details = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'flavor': common_flavor_info
- },
- 'required': ['flavor']
- }
-}
diff --git a/tempest/api_schema/response/compute/hosts.py b/tempest/api_schema/response/compute/hosts.py
deleted file mode 100644
index 2596c27..0000000
--- a/tempest/api_schema/response/compute/hosts.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-common_start_up_body = {
- 'type': 'object',
- 'properties': {
- 'host': {'type': 'string'},
- 'power_action': {'enum': ['startup']}
- },
- 'required': ['host', 'power_action']
-}
-
-list_hosts = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'hosts': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'host_name': {'type': 'string'},
- 'service': {'type': 'string'},
- 'zone': {'type': 'string'}
- },
- 'required': ['host_name', 'service', 'zone']
- }
- }
- },
- 'required': ['hosts']
- }
-}
-
-show_host_detail = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'host': {
- 'type': 'array',
- 'item': {
- 'type': 'object',
- 'properties': {
- 'resource': {
- 'type': 'object',
- 'properties': {
- 'cpu': {'type': 'integer'},
- 'disk_gb': {'type': 'integer'},
- 'host': {'type': 'string'},
- 'memory_mb': {'type': 'integer'},
- 'project': {'type': 'string'}
- },
- 'required': ['cpu', 'disk_gb', 'host',
- 'memory_mb', 'project']
- }
- },
- 'required': ['resource']
- }
- }
- },
- 'required': ['host']
- }
-}
-
-update_host_common = {
- 'type': 'object',
- 'properties': {
- 'host': {'type': 'string'},
- 'maintenance_mode': {'enum': ['on_maintenance', 'off_maintenance']},
- 'status': {'enum': ['enabled', 'disabled']}
- },
- 'required': ['host', 'maintenance_mode', 'status']
-}
diff --git a/tempest/api_schema/response/compute/hypervisors.py b/tempest/api_schema/response/compute/hypervisors.py
deleted file mode 100644
index d6f2bd1..0000000
--- a/tempest/api_schema/response/compute/hypervisors.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-hypervisor_statistics = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'hypervisor_statistics': {
- 'type': 'object',
- 'properties': {
- 'count': {'type': 'integer'},
- 'current_workload': {'type': 'integer'},
- 'disk_available_least': {'type': ['integer', 'null']},
- 'free_disk_gb': {'type': 'integer'},
- 'free_ram_mb': {'type': 'integer'},
- 'local_gb': {'type': 'integer'},
- 'local_gb_used': {'type': 'integer'},
- 'memory_mb': {'type': 'integer'},
- 'memory_mb_used': {'type': 'integer'},
- 'running_vms': {'type': 'integer'},
- 'vcpus': {'type': 'integer'},
- 'vcpus_used': {'type': 'integer'}
- },
- 'required': ['count', 'current_workload',
- 'disk_available_least', 'free_disk_gb',
- 'free_ram_mb', 'local_gb', 'local_gb_used',
- 'memory_mb', 'memory_mb_used', 'running_vms',
- 'vcpus', 'vcpus_used']
- }
- },
- 'required': ['hypervisor_statistics']
- }
-}
-
-common_list_hypervisors_detail = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'hypervisors': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'status': {'type': 'string'},
- 'state': {'type': 'string'},
- 'cpu_info': {'type': 'string'},
- 'current_workload': {'type': 'integer'},
- 'disk_available_least': {'type': ['integer', 'null']},
- 'host_ip': {
- 'type': 'string',
- 'format': 'ip-address'
- },
- 'free_disk_gb': {'type': 'integer'},
- 'free_ram_mb': {'type': 'integer'},
- 'hypervisor_hostname': {'type': 'string'},
- 'hypervisor_type': {'type': 'string'},
- 'hypervisor_version': {'type': 'integer'},
- 'id': {'type': ['integer', 'string']},
- 'local_gb': {'type': 'integer'},
- 'local_gb_used': {'type': 'integer'},
- 'memory_mb': {'type': 'integer'},
- 'memory_mb_used': {'type': 'integer'},
- 'running_vms': {'type': 'integer'},
- 'service': {
- 'type': 'object',
- 'properties': {
- 'host': {'type': 'string'},
- 'id': {'type': ['integer', 'string']},
- 'disabled_reason': {'type': ['string', 'null']}
- },
- # NOTE(gmann): 'disabled_reason' is updated in
- # 'service' dict if 'os-hypervisor-status'
- # extension is loaded. So this is not required.
- 'required': ['host', 'id']
- },
- 'vcpus': {'type': 'integer'},
- 'vcpus_used': {'type': 'integer'}
- },
- # NOTE: When loading os-hypervisor-status extension,
- # a response contains status and state. So these params
- # should not be required.
- 'required': ['cpu_info', 'current_workload',
- 'disk_available_least', 'host_ip',
- 'free_disk_gb', 'free_ram_mb',
- 'hypervisor_hostname', 'hypervisor_type',
- 'hypervisor_version', 'id', 'local_gb',
- 'local_gb_used', 'memory_mb',
- 'memory_mb_used', 'running_vms', 'service',
- 'vcpus', 'vcpus_used']
- }
- }
- },
- 'required': ['hypervisors']
- }
-}
-
-common_show_hypervisor = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'hypervisor': {
- 'type': 'object',
- 'properties': {
- 'status': {'type': 'string'},
- 'state': {'type': 'string'},
- 'cpu_info': {'type': 'string'},
- 'current_workload': {'type': 'integer'},
- 'disk_available_least': {'type': ['integer', 'null']},
- 'host_ip': {
- 'type': 'string',
- 'format': 'ip-address'
- },
- 'free_disk_gb': {'type': 'integer'},
- 'free_ram_mb': {'type': 'integer'},
- 'hypervisor_hostname': {'type': 'string'},
- 'hypervisor_type': {'type': 'string'},
- 'hypervisor_version': {'type': 'integer'},
- 'id': {'type': ['integer', 'string']},
- 'local_gb': {'type': 'integer'},
- 'local_gb_used': {'type': 'integer'},
- 'memory_mb': {'type': 'integer'},
- 'memory_mb_used': {'type': 'integer'},
- 'running_vms': {'type': 'integer'},
- 'service': {
- 'type': 'object',
- 'properties': {
- 'host': {'type': 'string'},
- 'id': {'type': ['integer', 'string']},
- 'disabled_reason': {'type': ['string', 'null']}
- },
- # NOTE: 'disabled_reason' is updated in 'service'
- # dict if os-hypervisor-status' extension is loaded.
- # So this is not required.
- 'required': ['host', 'id']
- },
- 'vcpus': {'type': 'integer'},
- 'vcpus_used': {'type': 'integer'}
- },
- # NOTE: When loading os-hypervisor-status extension,
- # a response contains status and state. So these params
- # should not be required.
- 'required': ['cpu_info', 'current_workload',
- 'disk_available_least', 'host_ip',
- 'free_disk_gb', 'free_ram_mb',
- 'hypervisor_hostname', 'hypervisor_type',
- 'hypervisor_version', 'id', 'local_gb',
- 'local_gb_used', 'memory_mb', 'memory_mb_used',
- 'running_vms', 'service', 'vcpus', 'vcpus_used']
- }
- },
- 'required': ['hypervisor']
- }
-}
-
-common_hypervisors_detail = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'hypervisors': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'status': {'type': 'string'},
- 'state': {'type': 'string'},
- 'id': {'type': ['integer', 'string']},
- 'hypervisor_hostname': {'type': 'string'}
- },
- # NOTE: When loading os-hypervisor-status extension,
- # a response contains status and state. So these params
- # should not be required.
- 'required': ['id', 'hypervisor_hostname']
- }
- }
- },
- 'required': ['hypervisors']
- }
-}
-
-common_hypervisors_info = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'hypervisor': {
- 'type': 'object',
- 'properties': {
- 'status': {'type': 'string'},
- 'state': {'type': 'string'},
- 'id': {'type': ['integer', 'string']},
- 'hypervisor_hostname': {'type': 'string'},
- },
- # NOTE: When loading os-hypervisor-status extension,
- # a response contains status and state. So these params
- # should not be required.
- 'required': ['id', 'hypervisor_hostname']
- }
- },
- 'required': ['hypervisor']
- }
-}
-
-
-hypervisor_uptime = copy.deepcopy(common_hypervisors_info)
-hypervisor_uptime['response_body']['properties']['hypervisor'][
- 'properties']['uptime'] = {'type': 'string'}
-hypervisor_uptime['response_body']['properties']['hypervisor'][
- 'required'] = ['id', 'hypervisor_hostname', 'uptime']
diff --git a/tempest/api_schema/response/compute/keypairs.py b/tempest/api_schema/response/compute/keypairs.py
deleted file mode 100644
index 2ae410c..0000000
--- a/tempest/api_schema/response/compute/keypairs.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-list_keypairs = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'keypairs': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'keypair': {
- 'type': 'object',
- 'properties': {
- 'public_key': {'type': 'string'},
- 'name': {'type': 'string'},
- 'fingerprint': {'type': 'string'}
- },
- 'required': ['public_key', 'name', 'fingerprint']
- }
- },
- 'required': ['keypair']
- }
- }
- },
- 'required': ['keypairs']
- }
-}
-
-create_keypair = {
- 'type': 'object',
- 'properties': {
- 'keypair': {
- 'type': 'object',
- 'properties': {
- 'fingerprint': {'type': 'string'},
- 'name': {'type': 'string'},
- 'public_key': {'type': 'string'},
- 'user_id': {'type': 'string'},
- 'private_key': {'type': 'string'}
- },
- # When create keypair API is being called with 'Public key'
- # (Importing keypair) then, response body does not contain
- # 'private_key' So it is not defined as 'required'
- 'required': ['fingerprint', 'name', 'public_key', 'user_id']
- }
- },
- 'required': ['keypair']
-}
diff --git a/tempest/api_schema/response/compute/quotas.py b/tempest/api_schema/response/compute/quotas.py
deleted file mode 100644
index 863104c..0000000
--- a/tempest/api_schema/response/compute/quotas.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-common_quota_set = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'quota_set': {
- 'type': 'object',
- 'properties': {
- 'instances': {'type': 'integer'},
- 'cores': {'type': 'integer'},
- 'ram': {'type': 'integer'},
- 'floating_ips': {'type': 'integer'},
- 'fixed_ips': {'type': 'integer'},
- 'metadata_items': {'type': 'integer'},
- 'key_pairs': {'type': 'integer'},
- 'security_groups': {'type': 'integer'},
- 'security_group_rules': {'type': 'integer'},
- 'server_group_members': {'type': 'integer'},
- 'server_groups': {'type': 'integer'},
- },
- # NOTE: server_group_members and server_groups are represented
- # when enabling quota_server_group extension. So they should
- # not be required.
- 'required': ['instances', 'cores', 'ram',
- 'floating_ips', 'fixed_ips',
- 'metadata_items', 'key_pairs',
- 'security_groups', 'security_group_rules']
- }
- },
- 'required': ['quota_set']
- }
-}
diff --git a/tempest/api_schema/response/compute/servers.py b/tempest/api_schema/response/compute/servers.py
deleted file mode 100644
index 3950173..0000000
--- a/tempest/api_schema/response/compute/servers.py
+++ /dev/null
@@ -1,238 +0,0 @@
-# Copyright 2014 NEC Corporation. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-from tempest.api_schema.response.compute import parameter_types
-
-get_password = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'password': {'type': 'string'}
- },
- 'required': ['password']
- }
-}
-
-get_vnc_console = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'console': {
- 'type': 'object',
- 'properties': {
- 'type': {'type': 'string'},
- 'url': {
- 'type': 'string',
- 'format': 'uri'
- }
- },
- 'required': ['type', 'url']
- }
- },
- 'required': ['console']
- }
-}
-
-common_show_server = {
- 'type': 'object',
- 'properties': {
- 'id': {'type': 'string'},
- 'name': {'type': 'string'},
- 'status': {'type': 'string'},
- 'image': {'oneOf': [
- {'type': 'object',
- 'properties': {
- 'id': {'type': 'string'},
- 'links': parameter_types.links
- },
- 'required': ['id', 'links']},
- {'type': ['string', 'null']}
- ]},
- 'flavor': {
- 'type': 'object',
- 'properties': {
- 'id': {'type': 'string'},
- 'links': parameter_types.links
- },
- 'required': ['id', 'links']
- },
- 'fault': {
- 'type': 'object',
- 'properties': {
- 'code': {'type': 'integer'},
- 'created': {'type': 'string'},
- 'message': {'type': 'string'},
- 'details': {'type': 'string'},
- },
- # NOTE(gmann): 'details' is not necessary to be present
- # in the 'fault'. So it is not defined as 'required'.
- 'required': ['code', 'created', 'message']
- },
- 'user_id': {'type': 'string'},
- 'tenant_id': {'type': 'string'},
- 'created': {'type': 'string'},
- 'updated': {'type': 'string'},
- 'progress': {'type': 'integer'},
- 'metadata': {'type': 'object'},
- 'links': parameter_types.links,
- 'addresses': parameter_types.addresses,
- },
- # NOTE(GMann): 'progress' attribute is present in the response
- # only when server's status is one of the progress statuses
- # ("ACTIVE","BUILD", "REBUILD", "RESIZE","VERIFY_RESIZE")
- # 'fault' attribute is present in the response
- # only when server's status is one of the "ERROR", "DELETED".
- # So they are not defined as 'required'.
- 'required': ['id', 'name', 'status', 'image', 'flavor',
- 'user_id', 'tenant_id', 'created', 'updated',
- 'metadata', 'links', 'addresses']
-}
-
-base_update_get_server = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'server': common_show_server
- },
- 'required': ['server']
- }
-}
-
-delete_server = {
- 'status_code': [204],
-}
-
-set_server_metadata = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'metadata': {
- 'type': 'object',
- 'patternProperties': {
- '^.+$': {'type': 'string'}
- }
- }
- },
- 'required': ['metadata']
- }
-}
-
-list_server_metadata = copy.deepcopy(set_server_metadata)
-
-update_server_metadata = copy.deepcopy(set_server_metadata)
-
-delete_server_metadata_item = {
- 'status_code': [204]
-}
-
-list_servers = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'servers': {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'id': {'type': 'string'},
- 'links': parameter_types.links,
- 'name': {'type': 'string'}
- },
- 'required': ['id', 'links', 'name']
- }
- },
- 'servers_links': parameter_types.links
- },
- # NOTE(gmann): servers_links attribute is not necessary to be
- # present always So it is not 'required'.
- 'required': ['servers']
- }
-}
-
-server_actions_common_schema = {
- 'status_code': [202]
-}
-
-server_actions_delete_password = {
- 'status_code': [204]
-}
-
-get_console_output = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'output': {'type': 'string'}
- },
- 'required': ['output']
- }
-}
-
-common_instance_actions = {
- 'type': 'object',
- 'properties': {
- 'action': {'type': 'string'},
- 'request_id': {'type': 'string'},
- 'user_id': {'type': 'string'},
- 'project_id': {'type': 'string'},
- 'start_time': {'type': 'string'},
- 'message': {'type': ['string', 'null']}
- },
- 'required': ['action', 'request_id', 'user_id', 'project_id',
- 'start_time', 'message']
-}
-
-instance_action_events = {
- 'type': 'array',
- 'items': {
- 'type': 'object',
- 'properties': {
- 'event': {'type': 'string'},
- 'start_time': {'type': 'string'},
- 'finish_time': {'type': 'string'},
- 'result': {'type': 'string'},
- 'traceback': {'type': ['string', 'null']}
- },
- 'required': ['event', 'start_time', 'finish_time', 'result',
- 'traceback']
- }
-}
-
-common_get_instance_action = copy.deepcopy(common_instance_actions)
-
-common_get_instance_action['properties'].update({
- 'events': instance_action_events})
-# 'events' does not come in response body always so it is not
-# defined as 'required'
-
-base_list_servers_detail = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'servers': {
- 'type': 'array',
- 'items': common_show_server
- }
- },
- 'required': ['servers']
- }
-}
diff --git a/tempest/api_schema/response/compute/v2_1/flavors.py b/tempest/api_schema/response/compute/v2_1/flavors.py
index 76c4cee..725d17a 100644
--- a/tempest/api_schema/response/compute/v2_1/flavors.py
+++ b/tempest/api_schema/response/compute/v2_1/flavors.py
@@ -12,52 +12,86 @@
# License for the specific language governing permissions and limitations
# under the License.
-import copy
-
-from tempest.api_schema.response.compute import flavors
from tempest.api_schema.response.compute import parameter_types
-list_flavors_details = copy.deepcopy(flavors.common_flavor_list_details)
+list_flavors = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'flavors': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'name': {'type': 'string'},
+ 'links': parameter_types.links,
+ 'id': {'type': 'string'}
+ },
+ 'required': ['name', 'links', 'id']
+ }
+ },
+ 'flavors_links': parameter_types.links
+ },
+ # NOTE(gmann): the flavors_links attribute is not always present
+ # in the response, so it is not 'required'.
+ 'required': ['flavors']
+ }
+}
-# 'swap' attributes comes as integer value but if it is empty it comes as "".
-# So defining type of as string and integer.
-list_flavors_details['response_body']['properties']['flavors']['items'][
- 'properties']['swap'] = {'type': ['string', 'integer']}
+common_flavor_info = {
+ 'type': 'object',
+ 'properties': {
+ 'name': {'type': 'string'},
+ 'links': parameter_types.links,
+ 'ram': {'type': 'integer'},
+ 'vcpus': {'type': 'integer'},
+ # The 'swap' attribute comes back as an integer, but when it is
+ # empty it comes back as "". So its type is both string and integer.
+ 'swap': {'type': ['integer', 'string']},
+ 'disk': {'type': 'integer'},
+ 'id': {'type': 'string'},
+ 'OS-FLV-DISABLED:disabled': {'type': 'boolean'},
+ 'os-flavor-access:is_public': {'type': 'boolean'},
+ 'rxtx_factor': {'type': 'number'},
+ 'OS-FLV-EXT-DATA:ephemeral': {'type': 'integer'}
+ },
+ # 'OS-FLV-DISABLED', 'os-flavor-access', 'rxtx_factor' and
+ # 'OS-FLV-EXT-DATA' are API extensions. So they are not 'required'.
+ 'required': ['name', 'links', 'ram', 'vcpus', 'swap', 'disk', 'id']
+}
-# Defining 'flavors_links' attributes for V2 flavor schema
-list_flavors_details['response_body'][
- 'properties'].update({'flavors_links': parameter_types.links})
-# NOTE(gmann): flavors_links attribute is not necessary to be
-# present always So it is not 'required'.
-
-# Defining extra attributes for V2 flavor schema
-list_flavors_details['response_body']['properties']['flavors']['items'][
- 'properties'].update({'OS-FLV-DISABLED:disabled': {'type': 'boolean'},
- 'os-flavor-access:is_public': {'type': 'boolean'},
- 'rxtx_factor': {'type': 'number'},
- 'OS-FLV-EXT-DATA:ephemeral': {'type': 'integer'}})
-# 'OS-FLV-DISABLED', 'os-flavor-access', 'rxtx_factor' and 'OS-FLV-EXT-DATA'
-# are API extensions. So they are not 'required'.
+list_flavors_details = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'flavors': {
+ 'type': 'array',
+ 'items': common_flavor_info
+ },
+ # NOTE(gmann): the flavors_links attribute is not always present
+ # in the response, so it is not 'required'.
+ 'flavors_links': parameter_types.links
+ },
+ 'required': ['flavors']
+ }
+}
unset_flavor_extra_specs = {
'status_code': [200]
}
-create_get_flavor_details = copy.deepcopy(flavors.common_flavor_details)
-
-# 'swap' attributes comes as integer value but if it is empty it comes as "".
-# So defining type of as string and integer.
-create_get_flavor_details['response_body']['properties']['flavor'][
- 'properties']['swap'] = {'type': ['string', 'integer']}
-
-# Defining extra attributes for V2 flavor schema
-create_get_flavor_details['response_body']['properties']['flavor'][
- 'properties'].update({'OS-FLV-DISABLED:disabled': {'type': 'boolean'},
- 'os-flavor-access:is_public': {'type': 'boolean'},
- 'rxtx_factor': {'type': 'number'},
- 'OS-FLV-EXT-DATA:ephemeral': {'type': 'integer'}})
-# 'OS-FLV-DISABLED', 'os-flavor-access', 'rxtx_factor' and 'OS-FLV-EXT-DATA'
-# are API extensions. So they are not 'required'.
+create_get_flavor_details = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'flavor': common_flavor_info
+ },
+ 'required': ['flavor']
+ }
+}
delete_flavor = {
'status_code': [202]
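
The schema dicts introduced in this file are plain JSON Schema fragments wrapped together with the expected status code, so they can be exercised on their own. The following sketch is not part of this patch; it validates a hand-written flavor body against the new create_get_flavor_details schema with the third-party jsonschema package, and the sample body values are invented for illustration.

    # Illustrative only: validate a sample body against the rewritten schema.
    import jsonschema

    from tempest.api_schema.response.compute.v2_1 import flavors

    sample_body = {
        'flavor': {
            'name': 'm1.tiny',
            'links': [{'href': 'http://compute.example.com/flavors/1',
                       'rel': 'self'}],
            'ram': 512,
            'vcpus': 1,
            'swap': '',   # empty swap is returned as "" by some deployments
            'disk': 1,
            'id': '1',
        }
    }

    # The 'response_body' key holds the JSON Schema document itself.
    jsonschema.validate(sample_body,
                        flavors.create_get_flavor_details['response_body'])
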
diff --git a/tempest/api_schema/response/compute/v2_1/hosts.py b/tempest/api_schema/response/compute/v2_1/hosts.py
index 0944792..72d5a07 100644
--- a/tempest/api_schema/response/compute/v2_1/hosts.py
+++ b/tempest/api_schema/response/compute/v2_1/hosts.py
@@ -14,12 +14,70 @@
import copy
-from tempest.api_schema.response.compute import hosts
+list_hosts = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'hosts': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'host_name': {'type': 'string'},
+ 'service': {'type': 'string'},
+ 'zone': {'type': 'string'}
+ },
+ 'required': ['host_name', 'service', 'zone']
+ }
+ }
+ },
+ 'required': ['hosts']
+ }
+}
+
+get_host_detail = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'host': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'resource': {
+ 'type': 'object',
+ 'properties': {
+ 'cpu': {'type': 'integer'},
+ 'disk_gb': {'type': 'integer'},
+ 'host': {'type': 'string'},
+ 'memory_mb': {'type': 'integer'},
+ 'project': {'type': 'string'}
+ },
+ 'required': ['cpu', 'disk_gb', 'host',
+ 'memory_mb', 'project']
+ }
+ },
+ 'required': ['resource']
+ }
+ }
+ },
+ 'required': ['host']
+ }
+}
startup_host = {
'status_code': [200],
- 'response_body': hosts.common_start_up_body
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'host': {'type': 'string'},
+ 'power_action': {'enum': ['startup']}
+ },
+ 'required': ['host', 'power_action']
+ }
}
# The 'power_action' attribute of 'shutdown_host' API is 'shutdown'
@@ -38,5 +96,14 @@
update_host = {
'status_code': [200],
- 'response_body': hosts.update_host_common
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'host': {'type': 'string'},
+ 'maintenance_mode': {'enum': ['on_maintenance',
+ 'off_maintenance']},
+ 'status': {'enum': ['enabled', 'disabled']}
+ },
+ 'required': ['host', 'maintenance_mode', 'status']
+ }
}
diff --git a/tempest/api_schema/response/compute/v2_1/hypervisors.py b/tempest/api_schema/response/compute/v2_1/hypervisors.py
index cbb7698..3efa46b 100644
--- a/tempest/api_schema/response/compute/v2_1/hypervisors.py
+++ b/tempest/api_schema/response/compute/v2_1/hypervisors.py
@@ -14,13 +14,163 @@
import copy
-from tempest.api_schema.response.compute import hypervisors
+get_hypervisor_statistics = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'hypervisor_statistics': {
+ 'type': 'object',
+ 'properties': {
+ 'count': {'type': 'integer'},
+ 'current_workload': {'type': 'integer'},
+ 'disk_available_least': {'type': ['integer', 'null']},
+ 'free_disk_gb': {'type': 'integer'},
+ 'free_ram_mb': {'type': 'integer'},
+ 'local_gb': {'type': 'integer'},
+ 'local_gb_used': {'type': 'integer'},
+ 'memory_mb': {'type': 'integer'},
+ 'memory_mb_used': {'type': 'integer'},
+ 'running_vms': {'type': 'integer'},
+ 'vcpus': {'type': 'integer'},
+ 'vcpus_used': {'type': 'integer'}
+ },
+ 'required': ['count', 'current_workload',
+ 'disk_available_least', 'free_disk_gb',
+ 'free_ram_mb', 'local_gb', 'local_gb_used',
+ 'memory_mb', 'memory_mb_used', 'running_vms',
+ 'vcpus', 'vcpus_used']
+ }
+ },
+ 'required': ['hypervisor_statistics']
+ }
+}
-hypervisors_servers = copy.deepcopy(hypervisors.common_hypervisors_detail)
+hypervisor_detail = {
+ 'type': 'object',
+ 'properties': {
+ 'status': {'type': 'string'},
+ 'state': {'type': 'string'},
+ 'cpu_info': {'type': 'string'},
+ 'current_workload': {'type': 'integer'},
+ 'disk_available_least': {'type': ['integer', 'null']},
+ 'host_ip': {
+ 'type': 'string',
+ 'format': 'ip-address'
+ },
+ 'free_disk_gb': {'type': 'integer'},
+ 'free_ram_mb': {'type': 'integer'},
+ 'hypervisor_hostname': {'type': 'string'},
+ 'hypervisor_type': {'type': 'string'},
+ 'hypervisor_version': {'type': 'integer'},
+ 'id': {'type': ['integer', 'string']},
+ 'local_gb': {'type': 'integer'},
+ 'local_gb_used': {'type': 'integer'},
+ 'memory_mb': {'type': 'integer'},
+ 'memory_mb_used': {'type': 'integer'},
+ 'running_vms': {'type': 'integer'},
+ 'service': {
+ 'type': 'object',
+ 'properties': {
+ 'host': {'type': 'string'},
+ 'id': {'type': ['integer', 'string']},
+ 'disabled_reason': {'type': ['string', 'null']}
+ },
+ 'required': ['host', 'id']
+ },
+ 'vcpus': {'type': 'integer'},
+ 'vcpus_used': {'type': 'integer'}
+ },
+ # NOTE: 'status' and 'state' appear in the response only when the
+ # os-hypervisor-status extension is loaded. So these params
+ # should not be required.
+ 'required': ['cpu_info', 'current_workload',
+ 'disk_available_least', 'host_ip',
+ 'free_disk_gb', 'free_ram_mb',
+ 'hypervisor_hostname', 'hypervisor_type',
+ 'hypervisor_version', 'id', 'local_gb',
+ 'local_gb_used', 'memory_mb', 'memory_mb_used',
+ 'running_vms', 'service', 'vcpus', 'vcpus_used']
+}
-# Defining extra attributes for V3 show hypervisor schema
-hypervisors_servers['response_body']['properties']['hypervisors']['items'][
+list_hypervisors_detail = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'hypervisors': {
+ 'type': 'array',
+ 'items': hypervisor_detail
+ }
+ },
+ 'required': ['hypervisors']
+ }
+}
+
+get_hypervisor = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'hypervisor': hypervisor_detail
+ },
+ 'required': ['hypervisor']
+ }
+}
+
+list_search_hypervisors = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'hypervisors': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'status': {'type': 'string'},
+ 'state': {'type': 'string'},
+ 'id': {'type': ['integer', 'string']},
+ 'hypervisor_hostname': {'type': 'string'}
+ },
+ # NOTE: 'status' and 'state' appear in the response only when the
+ # os-hypervisor-status extension is loaded. So these params
+ # should not be required.
+ 'required': ['id', 'hypervisor_hostname']
+ }
+ }
+ },
+ 'required': ['hypervisors']
+ }
+}
+
+get_hypervisor_uptime = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'hypervisor': {
+ 'type': 'object',
+ 'properties': {
+ 'status': {'type': 'string'},
+ 'state': {'type': 'string'},
+ 'id': {'type': ['integer', 'string']},
+ 'hypervisor_hostname': {'type': 'string'},
+ 'uptime': {'type': 'string'}
+ },
+ # NOTE: 'status' and 'state' appear in the response only when the
+ # os-hypervisor-status extension is loaded. So these params
+ # should not be required.
+ 'required': ['id', 'hypervisor_hostname', 'uptime']
+ }
+ },
+ 'required': ['hypervisor']
+ }
+}
+
+get_hypervisors_servers = copy.deepcopy(list_search_hypervisors)
+get_hypervisors_servers['response_body']['properties']['hypervisors']['items'][
'properties']['servers'] = {
'type': 'array',
'items': {
diff --git a/tempest/api_schema/response/compute/v2_1/keypairs.py b/tempest/api_schema/response/compute/v2_1/keypairs.py
index ec26fa0..ceae6cf 100644
--- a/tempest/api_schema/response/compute/v2_1/keypairs.py
+++ b/tempest/api_schema/response/compute/v2_1/keypairs.py
@@ -12,8 +12,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-from tempest.api_schema.response.compute import keypairs
-
get_keypair = {
'status_code': [200],
'response_body': {
@@ -47,9 +45,56 @@
create_keypair = {
'status_code': [200],
- 'response_body': keypairs.create_keypair
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'keypair': {
+ 'type': 'object',
+ 'properties': {
+ 'fingerprint': {'type': 'string'},
+ 'name': {'type': 'string'},
+ 'public_key': {'type': 'string'},
+ 'user_id': {'type': 'string'},
+ 'private_key': {'type': 'string'}
+ },
+ # When the create keypair API is called with a 'public_key'
+ # (importing a keypair), the response body does not contain
+ # 'private_key', so it is not defined as 'required'.
+ 'required': ['fingerprint', 'name', 'public_key', 'user_id']
+ }
+ },
+ 'required': ['keypair']
+ }
}
delete_keypair = {
'status_code': [202],
}
+
+list_keypairs = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'keypairs': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'keypair': {
+ 'type': 'object',
+ 'properties': {
+ 'public_key': {'type': 'string'},
+ 'name': {'type': 'string'},
+ 'fingerprint': {'type': 'string'}
+ },
+ 'required': ['public_key', 'name', 'fingerprint']
+ }
+ },
+ 'required': ['keypair']
+ }
+ }
+ },
+ 'required': ['keypairs']
+ }
+}
diff --git a/tempest/api_schema/response/compute/v2_1/quota_classes.py b/tempest/api_schema/response/compute/v2_1/quota_classes.py
index a7374df..a0cdaf5 100644
--- a/tempest/api_schema/response/compute/v2_1/quota_classes.py
+++ b/tempest/api_schema/response/compute/v2_1/quota_classes.py
@@ -20,12 +20,12 @@
# NOTE(mriedem): os-quota-class-sets responses are the same as os-quota-sets
# except for the key in the response body is quota_class_set instead of
# quota_set, so update this copy of the schema from os-quota-sets.
-quota_set = copy.deepcopy(quotas.quota_set)
-quota_set['response_body']['properties']['quota_class_set'] = (
- quota_set['response_body']['properties'].pop('quota_set'))
-quota_set['response_body']['required'] = ['quota_class_set']
+get_quota_class_set = copy.deepcopy(quotas.get_quota_set)
+get_quota_class_set['response_body']['properties']['quota_class_set'] = (
+ get_quota_class_set['response_body']['properties'].pop('quota_set'))
+get_quota_class_set['response_body']['required'] = ['quota_class_set']
-quota_set_update = copy.deepcopy(quotas.quota_set_update)
-quota_set_update['response_body']['properties']['quota_class_set'] = (
- quota_set_update['response_body']['properties'].pop('quota_set'))
-quota_set_update['response_body']['required'] = ['quota_class_set']
+update_quota_class_set = copy.deepcopy(quotas.update_quota_set)
+update_quota_class_set['response_body']['properties']['quota_class_set'] = (
+ update_quota_class_set['response_body']['properties'].pop('quota_set'))
+update_quota_class_set['response_body']['required'] = ['quota_class_set']
diff --git a/tempest/api_schema/response/compute/v2_1/quotas.py b/tempest/api_schema/response/compute/v2_1/quotas.py
index 630b227..9141f7e 100644
--- a/tempest/api_schema/response/compute/v2_1/quotas.py
+++ b/tempest/api_schema/response/compute/v2_1/quotas.py
@@ -14,34 +14,49 @@
import copy
-from tempest.api_schema.response.compute import quotas
+update_quota_set = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'quota_set': {
+ 'type': 'object',
+ 'properties': {
+ 'instances': {'type': 'integer'},
+ 'cores': {'type': 'integer'},
+ 'ram': {'type': 'integer'},
+ 'floating_ips': {'type': 'integer'},
+ 'fixed_ips': {'type': 'integer'},
+ 'metadata_items': {'type': 'integer'},
+ 'key_pairs': {'type': 'integer'},
+ 'security_groups': {'type': 'integer'},
+ 'security_group_rules': {'type': 'integer'},
+ 'server_group_members': {'type': 'integer'},
+ 'server_groups': {'type': 'integer'},
+ 'injected_files': {'type': 'integer'},
+ 'injected_file_content_bytes': {'type': 'integer'},
+ 'injected_file_path_bytes': {'type': 'integer'}
+ },
+ # NOTE: server_group_members and server_groups appear only when
+ # the quota_server_group extension is enabled. So they should
+ # not be required.
+ 'required': ['instances', 'cores', 'ram',
+ 'floating_ips', 'fixed_ips',
+ 'metadata_items', 'key_pairs',
+ 'security_groups', 'security_group_rules',
+ 'injected_files', 'injected_file_content_bytes',
+ 'injected_file_path_bytes']
+ }
+ },
+ 'required': ['quota_set']
+ }
+}
-quota_set = copy.deepcopy(quotas.common_quota_set)
-quota_set['response_body']['properties']['quota_set']['properties'][
+get_quota_set = copy.deepcopy(update_quota_set)
+get_quota_set['response_body']['properties']['quota_set']['properties'][
'id'] = {'type': 'string'}
-quota_set['response_body']['properties']['quota_set']['properties'][
- 'injected_files'] = {'type': 'integer'}
-quota_set['response_body']['properties']['quota_set']['properties'][
- 'injected_file_content_bytes'] = {'type': 'integer'}
-quota_set['response_body']['properties']['quota_set']['properties'][
- 'injected_file_path_bytes'] = {'type': 'integer'}
-quota_set['response_body']['properties']['quota_set']['required'].extend([
- 'id',
- 'injected_files',
- 'injected_file_content_bytes',
- 'injected_file_path_bytes'])
-
-quota_set_update = copy.deepcopy(quotas.common_quota_set)
-quota_set_update['response_body']['properties']['quota_set']['properties'][
- 'injected_files'] = {'type': 'integer'}
-quota_set_update['response_body']['properties']['quota_set']['properties'][
- 'injected_file_content_bytes'] = {'type': 'integer'}
-quota_set_update['response_body']['properties']['quota_set']['properties'][
- 'injected_file_path_bytes'] = {'type': 'integer'}
-quota_set_update['response_body']['properties']['quota_set'][
- 'required'].extend(['injected_files',
- 'injected_file_content_bytes',
- 'injected_file_path_bytes'])
+get_quota_set['response_body']['properties']['quota_set']['required'].extend([
+ 'id'])
delete_quota = {
'status_code': [202]
diff --git a/tempest/api_schema/response/compute/v2_1/servers.py b/tempest/api_schema/response/compute/v2_1/servers.py
index ebee697..726f9b1 100644
--- a/tempest/api_schema/response/compute/v2_1/servers.py
+++ b/tempest/api_schema/response/compute/v2_1/servers.py
@@ -15,7 +15,6 @@
import copy
from tempest.api_schema.response.compute import parameter_types
-from tempest.api_schema.response.compute import servers
create_server = {
'status_code': [202],
@@ -46,24 +45,110 @@
create_server_with_admin_pass['response_body']['properties']['server'][
'required'].append('adminPass')
-update_server = copy.deepcopy(servers.base_update_get_server)
-update_server['response_body']['properties']['server']['properties'].update({
- 'hostId': {'type': 'string'},
- 'OS-DCF:diskConfig': {'type': 'string'},
- 'accessIPv4': parameter_types.access_ip_v4,
- 'accessIPv6': parameter_types.access_ip_v6
-})
-update_server['response_body']['properties']['server']['required'].append(
- # NOTE: OS-DCF:diskConfig and accessIPv4/v6 are API
- # extensions, and some environments return a response
- # without these attributes. So they are not 'required'.
- 'hostId'
-)
+list_servers = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'servers': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links,
+ 'name': {'type': 'string'}
+ },
+ 'required': ['id', 'links', 'name']
+ }
+ },
+ 'servers_links': parameter_types.links
+ },
+ # NOTE(gmann): the servers_links attribute is not always present
+ # in the response, so it is not 'required'.
+ 'required': ['servers']
+ }
+}
-get_server = copy.deepcopy(servers.base_update_get_server)
-get_server['response_body']['properties']['server']['properties'].update({
+delete_server = {
+ 'status_code': [204],
+}
+
+common_show_server = {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'name': {'type': 'string'},
+ 'status': {'type': 'string'},
+ 'image': {'oneOf': [
+ {'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links
+ },
+ 'required': ['id', 'links']},
+ {'type': ['string', 'null']}
+ ]},
+ 'flavor': {
+ 'type': 'object',
+ 'properties': {
+ 'id': {'type': 'string'},
+ 'links': parameter_types.links
+ },
+ 'required': ['id', 'links']
+ },
+ 'fault': {
+ 'type': 'object',
+ 'properties': {
+ 'code': {'type': 'integer'},
+ 'created': {'type': 'string'},
+ 'message': {'type': 'string'},
+ 'details': {'type': 'string'},
+ },
+ # NOTE(gmann): 'details' is not always present in 'fault',
+ # so it is not defined as 'required'.
+ 'required': ['code', 'created', 'message']
+ },
+ 'user_id': {'type': 'string'},
+ 'tenant_id': {'type': 'string'},
+ 'created': {'type': 'string'},
+ 'updated': {'type': 'string'},
+ 'progress': {'type': 'integer'},
+ 'metadata': {'type': 'object'},
+ 'links': parameter_types.links,
+ 'addresses': parameter_types.addresses,
+ 'hostId': {'type': 'string'},
+ 'OS-DCF:diskConfig': {'type': 'string'},
+ 'accessIPv4': parameter_types.access_ip_v4,
+ 'accessIPv6': parameter_types.access_ip_v6
+ },
+ # NOTE(GMann): 'progress' attribute is present in the response
+ # only when server's status is one of the progress statuses
+ # ("ACTIVE","BUILD", "REBUILD", "RESIZE","VERIFY_RESIZE")
+ # 'fault' attribute is present in the response
+ # only when server's status is one of the "ERROR", "DELETED".
+ # OS-DCF:diskConfig and accessIPv4/v6 are API
+ # extensions, and some environments return a response
+ # without these attributes. So these are not defined as 'required'.
+ 'required': ['id', 'name', 'status', 'image', 'flavor',
+ 'user_id', 'tenant_id', 'created', 'updated',
+ 'metadata', 'links', 'addresses', 'hostId']
+}
+
+update_server = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'server': common_show_server
+ },
+ 'required': ['server']
+ }
+}
+
+server_detail = copy.deepcopy(common_show_server)
+server_detail['properties'].update({
'key_name': {'type': ['string', 'null']},
- 'hostId': {'type': 'string'},
'security_groups': {'type': 'array'},
# NOTE: Non-admin users also can see "OS-SRV-USG" and "OS-EXT-AZ"
@@ -81,27 +166,64 @@
'OS-EXT-SRV-ATTR:instance_name': {'type': 'string'},
'OS-EXT-SRV-ATTR:hypervisor_hostname': {'type': ['string', 'null']},
'os-extended-volumes:volumes_attached': {'type': 'array'},
- 'OS-DCF:diskConfig': {'type': 'string'},
- 'accessIPv4': parameter_types.access_ip_v4,
- 'accessIPv6': parameter_types.access_ip_v6,
'config_drive': {'type': 'string'}
})
-get_server['response_body']['properties']['server']['required'].append(
- # NOTE: OS-SRV-USG, OS-EXT-AZ, OS-EXT-STS, OS-EXT-SRV-ATTR,
- # os-extended-volumes, OS-DCF and accessIPv4/v6 are API
- # extension, and some environments return a response without
- # these attributes. So they are not 'required'.
- 'hostId'
-)
+server_detail['properties']['addresses']['patternProperties'][
+ '^[a-zA-Z0-9-_.]+$']['items']['properties'].update({
+ 'OS-EXT-IPS:type': {'type': 'string'},
+ 'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
# NOTE(gmann): Update OS-EXT-IPS:type and OS-EXT-IPS-MAC:mac_addr
# attributes in server address. Those are API extension,
# and some environments return a response without
# these attributes. So they are not 'required'.
-get_server['response_body']['properties']['server']['properties'][
- 'addresses']['patternProperties']['^[a-zA-Z0-9-_.]+$']['items'][
- 'properties'].update({
- 'OS-EXT-IPS:type': {'type': 'string'},
- 'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
+
+get_server = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'server': server_detail
+ },
+ 'required': ['server']
+ }
+}
+
+list_servers_detail = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'servers': {
+ 'type': 'array',
+ 'items': server_detail
+ },
+ 'servers_links': parameter_types.links
+ },
+ # NOTE(gmann): the servers_links attribute is not always present
+ # in the response, so it is not 'required'.
+ 'required': ['servers']
+ }
+}
+
+rebuild_server = copy.deepcopy(update_server)
+rebuild_server['status_code'] = [202]
+
+rebuild_server_with_admin_pass = copy.deepcopy(rebuild_server)
+rebuild_server_with_admin_pass['response_body']['properties']['server'][
+ 'properties'].update({'adminPass': {'type': 'string'}})
+rebuild_server_with_admin_pass['response_body']['properties']['server'][
+ 'required'].append('adminPass')
+
+rescue_server = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'adminPass': {'type': 'string'}
+ },
+ 'required': ['adminPass']
+ }
+}
list_virtual_interfaces = {
'status_code': [200],
@@ -174,30 +296,11 @@
'volumeAttachments']['items']['properties'].update(
{'serverId': {'type': 'string'}})
-set_get_server_metadata_item = {
- 'status_code': [200],
- 'response_body': {
- 'type': 'object',
- 'properties': {
- 'meta': {
- 'type': 'object',
- 'patternProperties': {
- '^.+$': {'type': 'string'}
- }
- }
- },
- 'required': ['meta']
- }
-}
-
list_addresses_by_network = {
'status_code': [200],
'response_body': parameter_types.addresses
}
-server_actions_confirm_resize = copy.deepcopy(
- servers.server_actions_delete_password)
-
list_addresses = {
'status_code': [200],
'response_body': {
@@ -258,10 +361,36 @@
}
}
-instance_actions_object = copy.deepcopy(servers.common_instance_actions)
-instance_actions_object[
- 'properties'].update({'instance_uuid': {'type': 'string'}})
-instance_actions_object['required'].extend(['instance_uuid'])
+instance_actions = {
+ 'type': 'object',
+ 'properties': {
+ 'action': {'type': 'string'},
+ 'request_id': {'type': 'string'},
+ 'user_id': {'type': 'string'},
+ 'project_id': {'type': 'string'},
+ 'start_time': {'type': 'string'},
+ 'message': {'type': ['string', 'null']},
+ 'instance_uuid': {'type': 'string'}
+ },
+ 'required': ['action', 'request_id', 'user_id', 'project_id',
+ 'start_time', 'message', 'instance_uuid']
+}
+
+instance_action_events = {
+ 'type': 'array',
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'event': {'type': 'string'},
+ 'start_time': {'type': 'string'},
+ 'finish_time': {'type': 'string'},
+ 'result': {'type': 'string'},
+ 'traceback': {'type': ['string', 'null']}
+ },
+ 'required': ['event', 'start_time', 'finish_time', 'result',
+ 'traceback']
+ }
+}
list_instance_actions = {
'status_code': [200],
@@ -270,93 +399,120 @@
'properties': {
'instanceActions': {
'type': 'array',
- 'items': instance_actions_object
+ 'items': instance_actions
}
},
'required': ['instanceActions']
}
}
-get_instance_actions_object = copy.deepcopy(servers.common_get_instance_action)
-get_instance_actions_object[
- 'properties'].update({'instance_uuid': {'type': 'string'}})
-get_instance_actions_object['required'].extend(['instance_uuid'])
+instance_actions_with_events = copy.deepcopy(instance_actions)
+instance_actions_with_events['properties'].update({
+ 'events': instance_action_events})
+# 'events' does not always come in the response body, so it is not
+# defined as 'required'.
get_instance_action = {
'status_code': [200],
'response_body': {
'type': 'object',
'properties': {
- 'instanceAction': get_instance_actions_object
+ 'instanceAction': instance_actions_with_events
},
'required': ['instanceAction']
}
}
-list_servers_detail = copy.deepcopy(servers.base_list_servers_detail)
-list_servers_detail['response_body']['properties']['servers']['items'][
- 'properties'].update({
- 'key_name': {'type': ['string', 'null']},
- 'hostId': {'type': 'string'},
- 'OS-DCF:diskConfig': {'type': 'string'},
- 'security_groups': {'type': 'array'},
-
- # NOTE: Non-admin users also can see "OS-SRV-USG" and "OS-EXT-AZ"
- # attributes.
- 'OS-SRV-USG:launched_at': {'type': ['string', 'null']},
- 'OS-SRV-USG:terminated_at': {'type': ['string', 'null']},
- 'OS-EXT-AZ:availability_zone': {'type': 'string'},
-
- # NOTE: Admin users only can see "OS-EXT-STS" and "OS-EXT-SRV-ATTR"
- # attributes.
- 'OS-EXT-STS:task_state': {'type': ['string', 'null']},
- 'OS-EXT-STS:vm_state': {'type': 'string'},
- 'OS-EXT-STS:power_state': {'type': 'integer'},
- 'OS-EXT-SRV-ATTR:host': {'type': ['string', 'null']},
- 'OS-EXT-SRV-ATTR:instance_name': {'type': 'string'},
- 'OS-EXT-SRV-ATTR:hypervisor_hostname': {'type': ['string', 'null']},
- 'os-extended-volumes:volumes_attached': {'type': 'array'},
- 'accessIPv4': parameter_types.access_ip_v4,
- 'accessIPv6': parameter_types.access_ip_v6,
- 'config_drive': {'type': 'string'}
- })
-# NOTE(GMann): OS-SRV-USG, OS-EXT-AZ, OS-EXT-STS, OS-EXT-SRV-ATTR,
-# os-extended-volumes, OS-DCF and accessIPv4/v6 are API
-# extensions, and some environments return a response without
-# these attributes. So they are not 'required'.
-list_servers_detail['response_body']['properties']['servers']['items'][
- 'required'].append('hostId')
-# NOTE(gmann): Update OS-EXT-IPS:type and OS-EXT-IPS-MAC:mac_addr
-# attributes in server address. Those are API extension,
-# and some environments return a response without
-# these attributes. So they are not 'required'.
-list_servers_detail['response_body']['properties']['servers']['items'][
- 'properties']['addresses']['patternProperties']['^[a-zA-Z0-9-_.]+$'][
- 'items']['properties'].update({
- 'OS-EXT-IPS:type': {'type': 'string'},
- 'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
-# Defining 'servers_links' attributes for V2 server schema
-list_servers_detail['response_body'][
- 'properties'].update({'servers_links': parameter_types.links})
-# NOTE(gmann): servers_links attribute is not necessary to be
-# present always So it is not 'required'.
-
-rebuild_server = copy.deepcopy(update_server)
-rebuild_server['status_code'] = [202]
-
-rebuild_server_with_admin_pass = copy.deepcopy(rebuild_server)
-rebuild_server_with_admin_pass['response_body']['properties']['server'][
- 'properties'].update({'adminPass': {'type': 'string'}})
-rebuild_server_with_admin_pass['response_body']['properties']['server'][
- 'required'].append('adminPass')
-
-rescue_server = {
+get_password = {
'status_code': [200],
'response_body': {
'type': 'object',
'properties': {
- 'adminPass': {'type': 'string'}
+ 'password': {'type': 'string'}
},
- 'required': ['adminPass']
+ 'required': ['password']
}
}
+
+get_vnc_console = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'console': {
+ 'type': 'object',
+ 'properties': {
+ 'type': {'type': 'string'},
+ 'url': {
+ 'type': 'string',
+ 'format': 'uri'
+ }
+ },
+ 'required': ['type', 'url']
+ }
+ },
+ 'required': ['console']
+ }
+}
+
+get_console_output = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'output': {'type': 'string'}
+ },
+ 'required': ['output']
+ }
+}
+
+set_server_metadata = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'metadata': {
+ 'type': 'object',
+ 'patternProperties': {
+ '^.+$': {'type': 'string'}
+ }
+ }
+ },
+ 'required': ['metadata']
+ }
+}
+
+list_server_metadata = copy.deepcopy(set_server_metadata)
+
+update_server_metadata = copy.deepcopy(set_server_metadata)
+
+delete_server_metadata_item = {
+ 'status_code': [204]
+}
+
+set_get_server_metadata_item = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'meta': {
+ 'type': 'object',
+ 'patternProperties': {
+ '^.+$': {'type': 'string'}
+ }
+ }
+ },
+ 'required': ['meta']
+ }
+}
+
+server_actions_common_schema = {
+ 'status_code': [202]
+}
+
+server_actions_delete_password = {
+ 'status_code': [204]
+}
+
+server_actions_confirm_resize = copy.deepcopy(
+ server_actions_delete_password)
diff --git a/tempest/api_schema/response/compute/version.py b/tempest/api_schema/response/compute/version.py
index 32c6d96..6579c63 100644
--- a/tempest/api_schema/response/compute/version.py
+++ b/tempest/api_schema/response/compute/version.py
@@ -45,8 +45,12 @@
}
},
'status': {'type': 'string'},
- 'updated': {'type': 'string', 'format': 'date-time'}
+ 'updated': {'type': 'string', 'format': 'date-time'},
+ 'version': {'type': 'string'},
+ 'min_version': {'type': 'string'}
},
+ # NOTE: version and min_version have been added since Kilo,
+ # so they should not be required.
'required': ['id', 'links', 'media-types', 'status', 'updated']
}
},
diff --git a/tempest/cli/simple_read_only/data_processing/__init__.py b/tempest/cli/simple_read_only/data_processing/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/cli/simple_read_only/data_processing/__init__.py
+++ /dev/null
diff --git a/tempest/cli/simple_read_only/data_processing/test_sahara.py b/tempest/cli/simple_read_only/data_processing/test_sahara.py
deleted file mode 100644
index 153dbd2..0000000
--- a/tempest/cli/simple_read_only/data_processing/test_sahara.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# Copyright (c) 2013 Mirantis Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-import re
-
-from tempest_lib import exceptions
-import testtools
-
-from tempest import cli
-from tempest import config
-from tempest import test
-
-CONF = config.CONF
-
-LOG = logging.getLogger(__name__)
-
-
-class SimpleReadOnlySaharaClientTest(cli.ClientTestBase):
- """Basic, read-only tests for Sahara CLI client.
-
- Checks return values and output of read-only commands.
- These tests do not presume any content, nor do they create
- their own. They only verify the structure of output if present.
- """
-
- @classmethod
- def resource_setup(cls):
- if not CONF.service_available.sahara:
- msg = "Skipping all Sahara cli tests because it is not available"
- raise cls.skipException(msg)
- super(SimpleReadOnlySaharaClientTest, cls).resource_setup()
-
- def sahara(self, *args, **kwargs):
- return self.clients.sahara(
- *args, endpoint_type=CONF.data_processing.endpoint_type, **kwargs)
-
- @test.attr(type='negative')
- @test.idempotent_id('c8809259-710f-43f9-b452-54b2be3115a9')
- def test_sahara_fake_action(self):
- self.assertRaises(exceptions.CommandFailed,
- self.sahara,
- 'this-does-not-exist')
-
- @test.idempotent_id('39afe90c-0fd8-456e-89e2-da6de9680fff')
- def test_sahara_plugins_list(self):
- plugins = self.parser.listing(self.sahara('plugin-list'))
- self.assertTableStruct(plugins, [
- 'name',
- 'versions',
- 'title'
- ])
-
- @test.idempotent_id('3eb36fd8-bb06-4004-9e90-84ddf4dbcf5b')
- @testtools.skipUnless(CONF.data_processing_feature_enabled.plugins,
- 'No plugins defined')
- def test_sahara_plugins_show(self):
- name_param = '--name %s' % \
- (CONF.data_processing_feature_enabled.plugins[0])
- result = self.sahara('plugin-show', params=name_param)
- plugin = self.parser.listing(result)
- self.assertTableStruct(plugin, [
- 'Property',
- 'Value'
- ])
-
- @test.idempotent_id('502b684b-3d41-4619-aa6c-4db3465ae79d')
- def test_sahara_node_group_template_list(self):
- result = self.sahara('node-group-template-list')
- node_group_templates = self.parser.listing(result)
- self.assertTableStruct(node_group_templates, [
- 'name',
- 'id',
- 'plugin_name',
- 'node_processes',
- 'description'
- ])
-
- @test.idempotent_id('6c36fe4d-3b88-4b0d-b702-2a051db7dae7')
- def test_sahara_cluster_template_list(self):
- result = self.sahara('cluster-template-list')
- cluster_templates = self.parser.listing(result)
- self.assertTableStruct(cluster_templates, [
- 'name',
- 'id',
- 'plugin_name',
- 'node_groups',
- 'description'
- ])
-
- @test.idempotent_id('b951949d-b9a6-49db-add5-8a18ac533810')
- def test_sahara_cluster_list(self):
- result = self.sahara('cluster-list')
- clusters = self.parser.listing(result)
- self.assertTableStruct(clusters, [
- 'name',
- 'id',
- 'status',
- 'node_count'
- ])
-
- @test.idempotent_id('dbc83a8c-15b6-4aa8-b274-5896577397e1')
- def test_sahara_data_source_list(self):
- result = self.sahara('data-source-list')
- data_sources = self.parser.listing(result)
- self.assertTableStruct(data_sources, [
- 'name',
- 'id',
- 'type',
- 'description'
- ])
-
- @test.idempotent_id('a8f77e05-d4bf-45c3-8245-57835d0de37b')
- def test_sahara_job_binary_data_list(self):
- result = self.sahara('job-binary-data-list')
- job_binary_data_list = self.parser.listing(result)
- self.assertTableStruct(job_binary_data_list, [
- 'id',
- 'name'
- ])
-
- @test.idempotent_id('a8f4d0f3-fa1c-49ce-b73f-d624d89dc381')
- def test_sahara_job_binary_list(self):
- result = self.sahara('job-binary-list')
- job_binaries = self.parser.listing(result)
- self.assertTableStruct(job_binaries, [
- 'id',
- 'name',
- 'description'
- ])
-
- @test.idempotent_id('91164ca4-d049-49e0-a52a-686b408196ff')
- def test_sahara_job_template_list(self):
- result = self.sahara('job-template-list')
- job_templates = self.parser.listing(result)
- self.assertTableStruct(job_templates, [
- 'id',
- 'name',
- 'description'
- ])
-
- @test.idempotent_id('6829c251-a8b6-449d-af86-7dd98b69a7ce')
- def test_sahara_job_list(self):
- result = self.sahara('job-list')
- jobs = self.parser.listing(result)
- self.assertTableStruct(jobs, [
- 'id',
- 'cluster_id',
- 'status'
- ])
-
- @test.idempotent_id('e4bd5d3b-474b-4b7a-82ab-f6bb0bc89faf')
- def test_sahara_bash_completion(self):
- self.sahara('bash-completion')
-
- # Optional arguments
- @test.idempotent_id('699c14e5-632e-46b8-91e5-6bff8c8307e5')
- def test_sahara_help(self):
- help_text = self.sahara('help')
- lines = help_text.split('\n')
- self.assertFirstLineStartsWith(lines, 'usage: sahara')
-
- commands = []
- cmds_start = lines.index('Positional arguments:')
- cmds_end = lines.index('Optional arguments:')
- command_pattern = re.compile('^ {4}([a-z0-9\-\_]+)')
- for line in lines[cmds_start:cmds_end]:
- match = command_pattern.match(line)
- if match:
- commands.append(match.group(1))
- commands = set(commands)
- wanted_commands = set(('cluster-create', 'data-source-create',
- 'image-unregister', 'job-binary-create',
- 'plugin-list', 'job-binary-create', 'help'))
- self.assertFalse(wanted_commands - commands)
-
- @test.idempotent_id('84a18ea6-6379-4024-af6b-0e938f60dfc2')
- def test_sahara_version(self):
- version = self.sahara('', flags='--version')
- self.assertTrue(re.search('[0-9.]+', version))
diff --git a/tempest/cli/simple_read_only/object_storage/__init__.py b/tempest/cli/simple_read_only/object_storage/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/cli/simple_read_only/object_storage/__init__.py
+++ /dev/null
diff --git a/tempest/cli/simple_read_only/object_storage/test_swift.py b/tempest/cli/simple_read_only/object_storage/test_swift.py
deleted file mode 100644
index 7201eab..0000000
--- a/tempest/cli/simple_read_only/object_storage/test_swift.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright 2014 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import re
-
-from tempest_lib import exceptions
-
-from tempest import cli
-from tempest import config
-from tempest import test
-
-CONF = config.CONF
-
-
-class SimpleReadOnlySwiftClientTest(cli.ClientTestBase):
- """Basic, read-only tests for Swift CLI client.
-
- Checks return values and output of read-only commands.
- These tests do not presume any content, nor do they create
- their own. They only verify the structure of output if present.
- """
-
- @classmethod
- def resource_setup(cls):
- if not CONF.service_available.swift:
- msg = ("%s skipped as Swift is not available" % cls.__name__)
- raise cls.skipException(msg)
- super(SimpleReadOnlySwiftClientTest, cls).resource_setup()
-
- def swift(self, *args, **kwargs):
- return self.clients.swift(
- *args, endpoint_type=CONF.object_storage.endpoint_type, **kwargs)
-
- @test.idempotent_id('74360cdc-e7ec-493f-8a87-2b65f4d54aa3')
- def test_swift_fake_action(self):
- self.assertRaises(exceptions.CommandFailed,
- self.swift,
- 'this-does-not-exist')
-
- @test.idempotent_id('809ec373-828e-4279-8df6-9d4db81c7909')
- def test_swift_list(self):
- self.swift('list')
-
- @test.idempotent_id('325d5fe4-e5ab-4f52-aec4-357533f24fa1')
- def test_swift_stat(self):
- output = self.swift('stat')
- entries = ['Account', 'Containers', 'Objects', 'Bytes', 'Content-Type',
- 'X-Timestamp', 'X-Trans-Id']
- for entry in entries:
- self.assertTrue(entry in output)
-
- @test.idempotent_id('af1483e1-dafd-4552-a39b-b9d337df808b')
- def test_swift_capabilities(self):
- output = self.swift('capabilities')
- entries = ['account_listing_limit', 'container_listing_limit',
- 'max_file_size', 'Additional middleware']
- for entry in entries:
- self.assertTrue(entry in output)
-
- @test.idempotent_id('29c83a64-8eb7-418c-a39b-c70cefa5b695')
- def test_swift_help(self):
- help_text = self.swift('', flags='--help')
- lines = help_text.split('\n')
- self.assertFirstLineStartsWith(lines, 'Usage: swift')
-
- commands = []
- cmds_start = lines.index('Positional arguments:')
- cmds_end = lines.index('Examples:')
- command_pattern = re.compile('^ {4}([a-z0-9\-\_]+)')
- for line in lines[cmds_start:cmds_end]:
- match = command_pattern.match(line)
- if match:
- commands.append(match.group(1))
- commands = set(commands)
- wanted_commands = set(('stat', 'list', 'delete',
- 'download', 'post', 'upload'))
- self.assertFalse(wanted_commands - commands)
-
- # Optional arguments:
-
- @test.idempotent_id('2026be82-4e53-4414-a828-f1c894b8cf0f')
- def test_swift_version(self):
- self.swift('', flags='--version')
-
- @test.idempotent_id('0ae6172e-3df7-42b8-a987-d42609ada6ed')
- def test_swift_debug_list(self):
- self.swift('list', flags='--debug')
-
- @test.idempotent_id('1bdf5dd0-7df5-446c-a124-2b0703a5d199')
- def test_swift_retries_list(self):
- self.swift('list', flags='--retries 3')
-
- @test.idempotent_id('64eae749-8fbd-4d85-bc7f-f706d3581c6f')
- def test_swift_region_list(self):
- region = CONF.object_storage.region
- if not region:
- region = CONF.identity.region
- self.swift('list', flags='--os-region-name ' + region)
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index 05aaabe..f84771f 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -77,7 +77,8 @@
- name: javelin_cirros
owner: javelin
file: cirros-0.3.2-x86_64-blank.img
- format: ami
+ disk_format: ami
+ container_format: ami
aki: cirros-0.3.2-x86_64-vmlinuz
ari: cirros-0.3.2-x86_64-initrd
@@ -86,6 +87,7 @@
owner: javelin
flavor: m1.small
image: javelin_cirros
+ floating_ip_pool: public
- name: hoplite
owner: javelin
flavor: m1.medium
@@ -306,10 +308,10 @@
return tenants
-def _assign_swift_role(user):
+def _assign_swift_role(user, swift_role):
admin = keystone_admin()
roles = admin.identity.list_roles()
- role = next(r for r in roles if r['name'] == 'Member')
+ role = next(r for r in roles if r['name'] == swift_role)
LOG.debug(USERS[user])
try:
admin.identity.assign_user_role(
@@ -583,7 +585,8 @@
LOG.info("Creating objects")
for obj in objects:
LOG.debug("Object %s" % obj)
- _assign_swift_role(obj['owner'])
+ swift_role = obj.get('swift_role', 'Member')
+ _assign_swift_role(obj['owner'], swift_role)
client = client_for_user(obj['owner'])
client.containers.create_container(obj['container'])
client.objects.create_object(
@@ -627,6 +630,15 @@
for image in images:
client = client_for_user(image['owner'])
+ # DEPRECATED: 'format' was used for ami images
+ # Use 'disk_format' and 'container_format' instead
+ if 'format' in image:
+ LOG.warning("Deprecated: 'format' is deprecated in the image "
+ "description. Please use 'disk_format' and 'container_"
+ "format' instead.")
+ image['disk_format'] = image['format']
+ image['container_format'] = image['format']
+
# only upload a new image if the name isn't there
if _get_image_by_name(client, image['name']):
LOG.info("Image '%s' already exists" % image['name'])
@@ -634,7 +646,7 @@
# special handling for 3 part image
extras = {}
- if image['format'] == 'ami':
+ if image['disk_format'] == 'ami':
name, fname = _resolve_image(image, 'aki')
aki = client.images.create_image(
'javelin_' + name, 'aki', 'aki')
@@ -649,7 +661,8 @@
_, fname = _resolve_image(image, 'file')
body = client.images.create_image(
- image['name'], image['format'], image['format'], **extras)
+ image['name'], image['container_format'],
+ image['disk_format'], **extras)
image_id = body.get('id')
client.images.store_image(image_id, open(fname, 'r'))
@@ -858,7 +871,9 @@
for secgroup in server['secgroups']:
client.servers.add_security_group(server_id, secgroup)
if CONF.compute.use_floatingip_for_ssh:
- floating_ip = client.floating_ips.create_floating_ip()
+ floating_ip_pool = server.get('floating_ip_pool')
+ floating_ip = client.floating_ips.create_floating_ip(
+ pool_name=floating_ip_pool)
client.floating_ips.associate_floating_ip_to_server(
floating_ip['ip'], server_id)
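
Resource descriptions consumed by javelin gain two optional knobs in this change: servers may name a 'floating_ip_pool', and swift objects may name a 'swift_role' (defaulting to 'Member'). A rough sketch of how such entries could look once parsed follows; keys other than the ones shown in this diff are invented for illustration.

    # Illustrative only: parsed javelin resource entries using the new keys.
    server = {
        'name': 'peltast',
        'owner': 'javelin',
        'flavor': 'm1.small',
        'image': 'javelin_cirros',
        'floating_ip_pool': 'public',      # new optional key
    }
    swift_object = {
        'owner': 'javelin',
        'container': 'javelin_container',  # container name is hypothetical
        'swift_role': 'SwiftOperator',     # new optional key, default 'Member'
    }
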
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index 909de96..b61f286 100755
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -24,6 +24,7 @@
from six import moves
from tempest import clients
+from tempest.common import credentials
from tempest import config
@@ -253,10 +254,13 @@
'database': 'trove'
}
# Get catalog list for endpoints to use for validation
- endpoints = os.endpoints_client.list_endpoints()
- for endpoint in endpoints:
- service = os.service_client.get_service(endpoint['service_id'])
- services.append(service['type'])
+ _token, auth_data = os.auth_provider.get_auth()
+ if os.auth_version == 'v2':
+ catalog_key = 'serviceCatalog'
+ else:
+ catalog_key = 'catalog'
+ for entry in auth_data[catalog_key]:
+ services.append(entry['type'])
# Pull all catalog types from config file and compare against endpoint list
for cfgname in dir(CONF._config):
cfg = getattr(CONF, cfgname)
@@ -330,7 +334,8 @@
CONF_PARSER = moves.configparser.SafeConfigParser()
CONF_PARSER.optionxform = str
CONF_PARSER.readfp(conf_file)
- os = clients.AdminManager()
+ icreds = credentials.get_isolated_credentials('verify_tempest_config')
+ os = clients.Manager(icreds.get_primary_creds())
services = check_service_availability(os, update)
results = {}
for service in ['nova', 'cinder', 'neutron', 'swift']:
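
For context on the catalog lookup introduced above: a keystone v2 token carries the service catalog under the 'serviceCatalog' key, while a v3 token uses 'catalog', and in both cases each entry exposes a 'type'. A minimal sketch of that data shape and of the extraction loop, with invented values, is:

    # Illustrative only: the shape of the auth data iterated over above.
    auth_data_v2 = {
        'serviceCatalog': [
            {'type': 'compute', 'name': 'nova', 'endpoints': []},
            {'type': 'image', 'name': 'glance', 'endpoints': []},
        ]
    }
    auth_data_v3 = {
        'catalog': [
            {'type': 'compute', 'name': 'nova', 'endpoints': []},
            {'type': 'image', 'name': 'glance', 'endpoints': []},
        ]
    }

    def service_types(auth_data, auth_version):
        # Mirrors the loop added above.
        catalog_key = 'serviceCatalog' if auth_version == 'v2' else 'catalog'
        return [entry['type'] for entry in auth_data[catalog_key]]

    assert service_types(auth_data_v2, 'v2') == ['compute', 'image']
    assert service_types(auth_data_v3, 'v3') == ['compute', 'image']
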
diff --git a/tempest/common/accounts.py b/tempest/common/accounts.py
index 6d376d6..93c8bcf 100644
--- a/tempest/common/accounts.py
+++ b/tempest/common/accounts.py
@@ -19,7 +19,9 @@
from oslo_log import log as logging
import yaml
+from tempest import clients
from tempest.common import cred_provider
+from tempest.common import fixed_network
from tempest import config
from tempest import exceptions
@@ -60,15 +62,18 @@
@classmethod
def get_hash_dict(cls, accounts):
- hash_dict = {'roles': {}, 'creds': {}}
+ hash_dict = {'roles': {}, 'creds': {}, 'networks': {}}
# Loop over the accounts read from the yaml file
for account in accounts:
roles = []
types = []
+ resources = []
if 'roles' in account:
roles = account.pop('roles')
if 'types' in account:
types = account.pop('types')
+ if 'resources' in account:
+ resources = account.pop('resources')
temp_hash = hashlib.md5()
temp_hash.update(str(account))
temp_hash_key = temp_hash.hexdigest()
@@ -91,6 +96,13 @@
CONF.object_storage.reseller_admin_role,
temp_hash_key,
hash_dict)
+ # Populate the network subdict
+ for resource in resources:
+ if resource == 'network':
+ hash_dict['networks'][temp_hash_key] = resources[resource]
+ else:
+ LOG.warning('Unknown resource type %s, ignoring this field'
+ % resource)
return hash_dict
def is_multi_user(self):
@@ -168,13 +180,21 @@
useable_hashes = hashes
return useable_hashes
+ def _sanitize_creds(self, creds):
+ temp_creds = creds.copy()
+ temp_creds.pop('password')
+ return temp_creds
+
def _get_creds(self, roles=None):
if self.use_default_creds:
raise exceptions.InvalidConfiguration(
"Account file %s doesn't exist" % CONF.auth.test_accounts_file)
useable_hashes = self._get_match_hash_list(roles)
free_hash = self._get_free_hash(useable_hashes)
- return self.hash_dict['creds'][free_hash]
+ clean_creds = self._sanitize_creds(
+ self.hash_dict['creds'][free_hash])
+ LOG.info('%s allocated creds:\n%s' % (self.name, clean_creds))
+ return self._wrap_creds_with_network(free_hash)
@lockutils.synchronized('test_accounts_io', external=True)
def remove_hash(self, hash_string):
@@ -190,32 +210,37 @@
def get_hash(self, creds):
for _hash in self.hash_dict['creds']:
# Comparing on the attributes that are expected in the YAML
- if all([getattr(creds, k) == self.hash_dict['creds'][_hash][k] for
- k in creds.get_init_attributes()]):
+ init_attributes = creds.get_init_attributes()
+ hash_attributes = self.hash_dict['creds'][_hash].copy()
+ if ('user_domain_name' in init_attributes and 'user_domain_name'
+ not in hash_attributes):
+ # Allow for the case of domain_name populated from config
+ domain_name = CONF.identity.admin_domain_name
+ hash_attributes['user_domain_name'] = domain_name
+ if all([getattr(creds, k) == hash_attributes[k] for
+ k in init_attributes]):
return _hash
raise AttributeError('Invalid credentials %s' % creds)
def remove_credentials(self, creds):
_hash = self.get_hash(creds)
+ clean_creds = self._sanitize_creds(self.hash_dict['creds'][_hash])
self.remove_hash(_hash)
+ LOG.info("%s returned allocated creds:\n%s" % (self.name, clean_creds))
def get_primary_creds(self):
if self.isolated_creds.get('primary'):
return self.isolated_creds.get('primary')
- creds = self._get_creds()
- primary_credential = cred_provider.get_credentials(
- identity_version=self.identity_version, **creds)
- self.isolated_creds['primary'] = primary_credential
- return primary_credential
+ net_creds = self._get_creds()
+ self.isolated_creds['primary'] = net_creds
+ return net_creds
def get_alt_creds(self):
if self.isolated_creds.get('alt'):
return self.isolated_creds.get('alt')
- creds = self._get_creds()
- alt_credential = cred_provider.get_credentials(
- identity_version=self.identity_version, **creds)
- self.isolated_creds['alt'] = alt_credential
- return alt_credential
+ net_creds = self._get_creds()
+ self.isolated_creds['alt'] = net_creds
+ return net_creds
def get_creds_by_roles(self, roles, force_new=False):
roles = list(set(roles))
@@ -228,11 +253,9 @@
elif exist_creds and force_new:
new_index = str(roles) + '-' + str(len(self.isolated_creds))
self.isolated_creds[new_index] = exist_creds
- creds = self._get_creds(roles=roles)
- role_credential = cred_provider.get_credentials(
- identity_version=self.identity_version, **creds)
- self.isolated_creds[str(roles)] = role_credential
- return role_credential
+ net_creds = self._get_creds(roles=roles)
+ self.isolated_creds[str(roles)] = net_creds
+ return net_creds
def clear_isolated_creds(self):
for creds in self.isolated_creds.values():
@@ -252,6 +275,19 @@
def admin_available(self):
return self.is_role_available(CONF.identity.admin_role)
+ def _wrap_creds_with_network(self, hash):
+ creds_dict = self.hash_dict['creds'][hash]
+ credential = cred_provider.get_credentials(
+ identity_version=self.identity_version, **creds_dict)
+ net_creds = cred_provider.TestResources(credential)
+ net_clients = clients.Manager(credentials=credential)
+ compute_network_client = net_clients.networks_client
+ net_name = self.hash_dict['networks'].get(hash, None)
+ network = fixed_network.get_network_from_name(
+ net_name, compute_network_client)
+ net_creds.set_resources(network=network)
+ return net_creds
+
class NotLockingAccounts(Accounts):
"""Credentials provider which always returns the first and second
@@ -282,8 +318,9 @@
return self.isolated_creds.get('primary')
primary_credential = cred_provider.get_configured_credentials(
credential_type='user', identity_version=self.identity_version)
- self.isolated_creds['primary'] = primary_credential
- return primary_credential
+ self.isolated_creds['primary'] = cred_provider.TestResources(
+ primary_credential)
+ return self.isolated_creds['primary']
def get_alt_creds(self):
if self.isolated_creds.get('alt'):
@@ -291,8 +328,9 @@
alt_credential = cred_provider.get_configured_credentials(
credential_type='alt_user',
identity_version=self.identity_version)
- self.isolated_creds['alt'] = alt_credential
- return alt_credential
+ self.isolated_creds['alt'] = cred_provider.TestResources(
+ alt_credential)
+ return self.isolated_creds['alt']
def clear_isolated_creds(self):
self.isolated_creds = {}
@@ -300,8 +338,8 @@
def get_admin_creds(self):
creds = cred_provider.get_configured_credentials(
"identity_admin", fill_in=False)
- self.isolated_creds['admin'] = creds
- return creds
+ self.isolated_creds['admin'] = cred_provider.TestResources(creds)
+ return self.isolated_creds['admin']
def get_creds_by_roles(self, roles, force_new=False):
msg = "Credentials being specified through the config file can not be"\
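
The 'resources' handling added to get_hash_dict above lets an accounts.yaml entry pin a pre-created network to a set of credentials. A minimal sketch of one such entry, as it looks after yaml.safe_load, is shown below; all values are invented for illustration.

    # Illustrative only: one parsed accounts.yaml entry with the optional
    # 'resources' block honored by this patch. Names are hypothetical.
    account = {
        'username': 'user_1',
        'tenant_name': 'tenant_1',
        'password': 'secret',
        'roles': ['Member'],
        'resources': {
            # Only the 'network' resource type is recognized; any other key
            # is logged as unknown and ignored.
            'network': 'tempest-fixed-net',
        },
    }
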
diff --git a/tempest/common/cred_provider.py b/tempest/common/cred_provider.py
index 9630d1c..3223027 100644
--- a/tempest/common/cred_provider.py
+++ b/tempest/common/cred_provider.py
@@ -84,6 +84,8 @@
domain_fields = set(x for x in auth.KeystoneV3Credentials.ATTRIBUTES
if 'domain' in x)
if not domain_fields.intersection(kwargs.keys()):
+ # TODO(andreaf) It might be better here to use a dedicated config
+ # option such as CONF.auth.tenant_isolation_domain_name
params['user_domain_name'] = CONF.identity.admin_domain_name
auth_url = CONF.identity.uri_v3
else:
@@ -147,3 +149,25 @@
@abc.abstractmethod
def is_role_available(self, role):
return
+
+
+class TestResources(object):
+ """Readonly Credentials, with network resources added."""
+
+ def __init__(self, credentials):
+ self._credentials = credentials
+ self.network = None
+ self.subnet = None
+ self.router = None
+
+ def __getattr__(self, item):
+ return getattr(self._credentials, item)
+
+ def set_resources(self, **kwargs):
+ for key in kwargs.keys():
+ if hasattr(self, key):
+ setattr(self, key, kwargs[key])
+
+ @property
+ def credentials(self):
+ return self._credentials
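
The new TestResources wrapper above delegates unknown attributes to the wrapped credentials while exposing any attached resources directly. A small usage sketch follows, using a stand-in credentials object since building real Credentials requires tempest configuration; the attribute and network values are invented.

    # Illustrative only: how a provider attaches a network to credentials,
    # mirroring _wrap_creds_with_network() in accounts.py.
    from tempest.common import cred_provider

    class FakeCreds(object):
        username = 'user_1'
        tenant_name = 'tenant_1'

    net_creds = cred_provider.TestResources(FakeCreds())
    net_creds.set_resources(network={'id': 'net-uuid',
                                     'name': 'tempest-fixed-net'})

    print(net_creds.username)           # delegated to the wrapped credentials
    print(net_creds.network['name'])    # attached resource, read directly
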
diff --git a/tempest/common/fixed_network.py b/tempest/common/fixed_network.py
index b533898..1557474 100644
--- a/tempest/common/fixed_network.py
+++ b/tempest/common/fixed_network.py
@@ -13,17 +13,70 @@
import copy
from oslo_log import log as logging
+from tempest_lib.common.utils import misc as misc_utils
from tempest_lib import exceptions as lib_exc
-from tempest.common import isolated_creds
from tempest import config
-from tempest import exceptions
CONF = config.CONF
LOG = logging.getLogger(__name__)
+def get_network_from_name(name, compute_networks_client):
+ """Get a full network dict from just a network name
+
+ :param str name: the name of the network to use
+ :param NetworksClientJSON compute_networks_client: The network client
+ object to use for making the network lists api request
+    :return: The full dictionary for the network in question, unless the
+         network for the supplied name cannot be found, in which case a dict
+         with just the name will be returned.
+ :rtype: dict
+ """
+ caller = misc_utils.find_test_caller()
+ if not name:
+ network = {'name': name}
+ else:
+ try:
+ resp = compute_networks_client.list_networks(name=name)
+ if isinstance(resp, list):
+ networks = resp
+ elif isinstance(resp, dict):
+ networks = resp['networks']
+ else:
+ raise lib_exc.NotFound()
+ if len(networks) > 0:
+ network = networks[0]
+ else:
+ msg = "Network with name: %s not found" % name
+ if caller:
+ LOG.warn('(%s) %s' % (caller, msg))
+ else:
+ LOG.warn(msg)
+ raise lib_exc.NotFound()
+            # To be consistent with network isolation, add name if only
+            # label is available
+ name = network.get('name', network.get('label'))
+ if name:
+ network['name'] = name
+ else:
+ raise lib_exc.NotFound()
+ except lib_exc.NotFound:
+ # In case of nova network, if the fixed_network_name is not
+ # owned by the tenant, and the network client is not an admin
+ # one, list_networks will not find it
+ msg = ('Unable to find network %s. '
+ 'Starting instance without specifying a network.' %
+ name)
+ if caller:
+ LOG.info('(%s) %s' % (caller, msg))
+ else:
+ LOG.info(msg)
+ network = {'name': name}
+ return network
+
+
def get_tenant_network(creds_provider, compute_networks_client):
"""Get a network usable by the primary tenant
@@ -35,42 +88,25 @@
is the network to be used, and it's not visible to the tenant
:return a dict with 'id' and 'name' of the network
"""
+ caller = misc_utils.find_test_caller()
fixed_network_name = CONF.compute.fixed_network_name
- network = None
- # NOTE(andreaf) get_primary_network will always be available once
- # bp test-accounts-continued is implemented
- if (isinstance(creds_provider, isolated_creds.IsolatedCreds) and
- (CONF.service_available.neutron and
- not CONF.service_available.ironic)):
- network = creds_provider.get_primary_network()
- else:
+ net_creds = creds_provider.get_primary_creds()
+ network = getattr(net_creds, 'network', None)
+ if not network or not network.get('name'):
if fixed_network_name:
- try:
- resp = compute_networks_client.list_networks(
- name=fixed_network_name)
- if isinstance(resp, list):
- networks = resp
- elif isinstance(resp, dict):
- networks = resp['networks']
- else:
- raise lib_exc.NotFound()
- if len(networks) > 0:
- network = networks[0]
- else:
- msg = "Configured fixed_network_name not found"
- raise exceptions.InvalidConfiguration(msg)
- # To be consistent with network isolation, add name is only
- # label is available
- network['name'] = network.get('name', network.get('label'))
- except lib_exc.NotFound:
- # In case of nova network, if the fixed_network_name is not
- # owned by the tenant, and the network client is not an admin
- # one, list_networks will not find it
- LOG.info('Unable to find network %s. '
- 'Starting instance without specifying a network.' %
- fixed_network_name)
- network = {'name': fixed_network_name}
- LOG.info('Found network %s available for tenant' % network)
+ msg = ('No valid network provided or created, defaulting to '
+ 'fixed_network_name')
+ if caller:
+ LOG.debug('(%s) %s' % (caller, msg))
+ else:
+ LOG.debug(msg)
+ network = get_network_from_name(fixed_network_name,
+ compute_networks_client)
+ msg = ('Found network %s available for tenant' % network)
+ if caller:
+ LOG.info('(%s) %s' % (caller, msg))
+ else:
+ LOG.info(msg)
return network
@@ -86,5 +122,9 @@
return params
if network:
- params.update({"networks": [{'uuid': network['id']}]})
+ if 'id' in network.keys():
+ params.update({"networks": [{'uuid': network['id']}]})
+ else:
+            LOG.warn('The provided network dict: %s was invalid and did not '
+                     'contain an id' % network)
return params
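
Taken together, the new fixed_network helpers resolve a configured network name into a full network dict, fall back to a name-only dict when the lookup fails, and only add a 'networks' entry to the server create kwargs when an id is actually present. A rough sketch of the calling pattern, with a hypothetical fake client standing in for the compute networks client; the values in the comments are illustrative:

    from tempest.common import fixed_network

    class FakeNetworksClient(object):
        # Hypothetical stand-in for NetworksClientJSON; only list_networks is
        # exercised by get_network_from_name.
        def list_networks(self, name=None):
            return {'networks': [{'id': 'net-123', 'label': name}]}

    client = FakeNetworksClient()
    network = fixed_network.get_network_from_name('private', client)
    # -> {'id': 'net-123', 'label': 'private', 'name': 'private'}

    create_kwargs = fixed_network.set_networks_kwarg(network, {'min_count': 1})
    # -> {'min_count': 1, 'networks': [{'uuid': 'net-123'}]}
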
diff --git a/tempest/common/glance_http.py b/tempest/common/glance_http.py
index c6b8ba3..ee07e73 100644
--- a/tempest/common/glance_http.py
+++ b/tempest/common/glance_http.py
@@ -22,13 +22,13 @@
import posixpath
import re
import socket
-import StringIO
import struct
import urlparse
import OpenSSL
from oslo_log import log as logging
+import six
from six import moves
from tempest_lib import exceptions as lib_exc
@@ -129,7 +129,7 @@
# Read body into string if it isn't obviously image data
if resp.getheader('content-type', None) != 'application/octet-stream':
body_str = ''.join([body_chunk for body_chunk in body_iter])
- body_iter = StringIO.StringIO(body_str)
+ body_iter = six.StringIO(body_str)
self._log_response(resp, None)
else:
self._log_response(resp, body_iter)
diff --git a/tempest/common/isolated_creds.py b/tempest/common/isolated_creds.py
index 22fc9c3..1f85872 100644
--- a/tempest/common/isolated_creds.py
+++ b/tempest/common/isolated_creds.py
@@ -142,7 +142,6 @@
network_resources)
self.network_resources = network_resources
self.isolated_creds = {}
- self.isolated_net_resources = {}
self.ports = []
self.password = password
self.default_admin_creds = cred_provider.get_configured_credentials(
@@ -207,7 +206,8 @@
if roles:
for role in roles:
self.creds_client.assign_user_role(user, project, role)
- return self.creds_client.get_credentials(user, project, self.password)
+ creds = self.creds_client.get_credentials(user, project, self.password)
+ return cred_provider.TestResources(creds)
def _create_network_resources(self, tenant_id):
network = None
@@ -297,33 +297,6 @@
self.network_admin_client.add_router_interface_with_subnet_id(
router_id, subnet_id)
- def get_primary_network(self):
- return self.isolated_net_resources.get('primary')[0]
-
- def get_primary_subnet(self):
- return self.isolated_net_resources.get('primary')[1]
-
- def get_primary_router(self):
- return self.isolated_net_resources.get('primary')[2]
-
- def get_admin_network(self):
- return self.isolated_net_resources.get('admin')[0]
-
- def get_admin_subnet(self):
- return self.isolated_net_resources.get('admin')[1]
-
- def get_admin_router(self):
- return self.isolated_net_resources.get('admin')[2]
-
- def get_alt_network(self):
- return self.isolated_net_resources.get('alt')[0]
-
- def get_alt_subnet(self):
- return self.isolated_net_resources.get('alt')[1]
-
- def get_alt_router(self):
- return self.isolated_net_resources.get('alt')[2]
-
def get_credentials(self, credential_type):
if self.isolated_creds.get(str(credential_type)):
credentials = self.isolated_creds[str(credential_type)]
@@ -341,8 +314,8 @@
not CONF.baremetal.driver_enabled):
network, subnet, router = self._create_network_resources(
credentials.tenant_id)
- self.isolated_net_resources[str(credential_type)] = (
- network, subnet, router,)
+ credentials.set_resources(network=network, subnet=subnet,
+ router=router)
LOG.info("Created isolated network resources for : \n"
+ " credentials: %s" % credentials)
return credentials
@@ -368,12 +341,6 @@
new_index = str(roles) + '-' + str(len(self.isolated_creds))
self.isolated_creds[new_index] = exist_creds
del self.isolated_creds[str(roles)]
- # Handle isolated neutron resouces if they exist too
- if CONF.service_available.neutron:
- exist_net = self.isolated_net_resources.get(str(roles))
- if exist_net:
- self.isolated_net_resources[new_index] = exist_net
- del self.isolated_net_resources[str(roles)]
return self.get_credentials(roles)
def _clear_isolated_router(self, router_id, router_name):
@@ -414,27 +381,33 @@
def _clear_isolated_net_resources(self):
net_client = self.network_admin_client
- for cred in self.isolated_net_resources:
- network, subnet, router = self.isolated_net_resources.get(cred)
+ for cred in self.isolated_creds:
+ creds = self.isolated_creds.get(cred)
+ if (not creds or not any([creds.router, creds.network,
+ creds.subnet])):
+ continue
LOG.debug("Clearing network: %(network)s, "
"subnet: %(subnet)s, router: %(router)s",
- {'network': network, 'subnet': subnet, 'router': router})
+ {'network': creds.network, 'subnet': creds.subnet,
+ 'router': creds.router})
if (not self.network_resources or
- self.network_resources.get('router')):
+ (self.network_resources.get('router') and creds.subnet)):
try:
net_client.remove_router_interface_with_subnet_id(
- router['id'], subnet['id'])
+ creds.router['id'], creds.subnet['id'])
except lib_exc.NotFound:
LOG.warn('router with name: %s not found for delete' %
- router['name'])
- self._clear_isolated_router(router['id'], router['name'])
+ creds.router['name'])
+ self._clear_isolated_router(creds.router['id'],
+ creds.router['name'])
if (not self.network_resources or
self.network_resources.get('subnet')):
- self._clear_isolated_subnet(subnet['id'], subnet['name'])
+ self._clear_isolated_subnet(creds.subnet['id'],
+ creds.subnet['name'])
if (not self.network_resources or
self.network_resources.get('network')):
- self._clear_isolated_network(network['id'], network['name'])
- self.isolated_net_resources = {}
+ self._clear_isolated_network(creds.network['id'],
+ creds.network['name'])
def clear_isolated_creds(self):
if not self.isolated_creds:
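
With isolated_net_resources gone, the network, subnet and router created for a credential set now live on the TestResources object itself, so callers read them as attributes of the credentials instead of asking the provider for a separate tuple. A small sketch of the new access pattern, assuming creds_provider is an IsolatedCreds instance:

    def primary_network_resources(creds_provider):
        # get_primary_creds() now returns a TestResources object; the network
        # attributes are None unless neutron resources were actually created.
        creds = creds_provider.get_primary_creds()
        return creds.network, creds.subnet, creds.router
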
diff --git a/tempest/common/ssh.py b/tempest/common/ssh.py
index fe67ff8..d0e484c 100644
--- a/tempest/common/ssh.py
+++ b/tempest/common/ssh.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-
-import cStringIO
import select
import socket
import time
@@ -22,6 +20,7 @@
from oslo_log import log as logging
import six
+from six import moves
from tempest import exceptions
@@ -43,7 +42,7 @@
self.password = password
if isinstance(pkey, six.string_types):
pkey = paramiko.RSAKey.from_private_key(
- cStringIO.StringIO(str(pkey)))
+ moves.cStringIO(str(pkey)))
self.pkey = pkey
self.look_for_keys = look_for_keys
self.key_filename = key_filename
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index b19faef..29fb493 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -87,10 +87,11 @@
cmd = 'sudo sh -c "echo \\"%s\\" >/dev/console"' % message
return self.exec_command(cmd)
- def ping_host(self, host):
+ def ping_host(self, host, count=CONF.compute.ping_count,
+ size=CONF.compute.ping_size):
addr = netaddr.IPAddress(host)
cmd = 'ping6' if addr.version == 6 else 'ping'
- cmd += ' -c1 -w1 {0}'.format(host)
+ cmd += ' -c{0} -w{0} -s{1} {2}'.format(count, size, host)
return self.exec_command(cmd)
def get_mac_address(self):
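
ping_host now builds its command line from the new compute.ping_count and compute.ping_size options instead of a hard-coded '-c1 -w1'. A standalone sketch of the command construction; note that the count value is reused as the -w deadline, matching the format string above:

    import netaddr

    def build_ping_command(host, count=1, size=56):
        # Mirrors RemoteClient.ping_host: pick ping or ping6 by address
        # family, then apply the count (also used as the -w deadline) and
        # the packet size.
        addr = netaddr.IPAddress(host)
        cmd = 'ping6' if addr.version == 6 else 'ping'
        cmd += ' -c{0} -w{0} -s{1} {2}'.format(count, size, host)
        return cmd

    # build_ping_command('192.0.2.10', count=3, size=100)
    # -> 'ping -c3 -w3 -s100 192.0.2.10'
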
diff --git a/tempest/config.py b/tempest/config.py
index 3725f58..bcbe41f 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -187,10 +187,6 @@
default="root",
help="User name used to authenticate to an instance using "
"the alternate image."),
- cfg.StrOpt('image_alt_ssh_password',
- default="password",
- help="Password used to authenticate to an instance using "
- "the alternate image."),
cfg.IntOpt('build_interval',
default=1,
help="Time in seconds between build status checks."),
@@ -205,16 +201,17 @@
cfg.StrOpt('ssh_auth_method',
default='keypair',
help="Auth method used for authenticate to the instance. "
- "Valid choices are: keypair, configured, adminpass. "
- "keypair: start the servers with an ssh keypair. "
- "configured: use the configured user and password. "
- "adminpass: use the injected adminPass. "
- "disabled: avoid using ssh when it is an option."),
+ "Valid choices are: keypair, configured, adminpass "
+ "and disabled. "
+ "Keypair: start the servers with a ssh keypair. "
+ "Configured: use the configured user and password. "
+ "Adminpass: use the injected adminPass. "
+ "Disabled: avoid using ssh when it is an option."),
cfg.StrOpt('ssh_connect_method',
- default='fixed',
+ default='floating',
help="How to connect to the instance? "
"fixed: using the first ip belongs the fixed network "
- "floating: creating and using a floating ip"),
+ "floating: creating and using a floating ip."),
cfg.StrOpt('ssh_user',
default='root',
help="User name used to authenticate to an instance."),
@@ -222,6 +219,14 @@
default=120,
help="Timeout in seconds to wait for ping to "
"succeed."),
+ cfg.IntOpt('ping_size',
+ default=56,
+ help="The packet size for ping packets originating "
+ "from remote linux hosts"),
+ cfg.IntOpt('ping_count',
+ default=1,
+ help="The number of ping packets originating from remote "
+ "linux hosts"),
cfg.IntOpt('ssh_timeout',
default=300,
help="Timeout in seconds to wait for authentication to "
@@ -239,7 +244,8 @@
"tenants. If multiple networks are available for a tenant"
" this is the network which will be used for creating "
"servers if tempest does not create a network or a "
- "network is not specified elsewhere"),
+ "network is not specified elsewhere. It may be used for "
+ "ssh validation only if floating IPs are disabled."),
cfg.StrOpt('network_for_ssh',
default='public',
help="Network used for SSH connections. Ignored if "
@@ -264,9 +270,6 @@
choices=['public', 'admin', 'internal',
'publicURL', 'adminURL', 'internalURL'],
help="The endpoint type to use for the compute service."),
- cfg.StrOpt('path_to_private_key',
- help="Path to a private key file for SSH access to remote "
- "hosts"),
cfg.StrOpt('volume_device_name',
default='vdb',
help="Expected device name when a volume is attached to "
@@ -449,12 +452,16 @@
help="The mask bits for tenant ipv6 subnets"),
cfg.BoolOpt('tenant_networks_reachable',
default=False,
- help="Whether tenant network connectivity should be "
- "evaluated directly"),
+ help="Whether tenant networks can be reached directly from "
+ "the test client. This must be set to True when the "
+ "'fixed' ssh_connect_method is selected."),
cfg.StrOpt('public_network_id',
default="",
help="Id of the public network that provides external "
"connectivity"),
+ cfg.StrOpt('floating_network_name',
+ help="Default floating network name. Used to allocate floating "
+ "IPs when neutron is enabled."),
cfg.StrOpt('public_router_id',
default="",
help="Id of the public router that provides external "
@@ -499,6 +506,10 @@
"the extended IPv6 attributes ipv6_ra_mode "
"and ipv6_address_mode"
),
+ cfg.BoolOpt('port_admin_state_change',
+ default=True,
+ help="Does the test environment support changing"
+ " port admin state"),
]
messaging_group = cfg.OptGroup(name='messaging',
@@ -536,6 +547,37 @@
help='The maximum grace period for a claim'),
]
+validation_group = cfg.OptGroup(name='validation',
+ title='SSH Validation options')
+
+ValidationGroup = [
+ cfg.StrOpt('connect_method',
+ default='floating',
+ choices=['fixed', 'floating'],
+ help='Default IP type used for validation: '
+ '-fixed: uses the first IP belonging to the fixed network '
+ '-floating: creates and uses a floating IP'),
+ cfg.StrOpt('auth_method',
+ default='keypair',
+ choices=['keypair'],
+ help='Default authentication method to the instance. '
+ 'Only ssh via keypair is supported for now. '
+ 'Additional methods will be handled in a separate spec.'),
+ cfg.IntOpt('ip_version_for_ssh',
+ default=4,
+ help='Default IP version for ssh connections.'),
+ cfg.IntOpt('ping_timeout',
+ default=120,
+ help='Timeout in seconds to wait for ping to succeed.'),
+ cfg.IntOpt('connect_timeout',
+ default=60,
+ help='Timeout in seconds to wait for the TCP connection to be '
+ 'successful.'),
+ cfg.IntOpt('ssh_timeout',
+ default=300,
+ help='Timeout in seconds to wait for the ssh banner.'),
+]
+
volume_group = cfg.OptGroup(name='volume',
title='Block Storage Options')
@@ -1088,6 +1130,7 @@
(network_group, NetworkGroup),
(network_feature_group, NetworkFeaturesGroup),
(messaging_group, MessagingGroup),
+ (validation_group, ValidationGroup),
(volume_group, VolumeGroup),
(volume_feature_group, VolumeFeaturesGroup),
(object_storage_group, ObjectStoreGroup),
@@ -1148,6 +1191,7 @@
self.image_feature_enabled = _CONF['image-feature-enabled']
self.network = _CONF.network
self.network_feature_enabled = _CONF['network-feature-enabled']
+ self.validation = _CONF.validation
self.volume = _CONF.volume
self.volume_feature_enabled = _CONF['volume-feature-enabled']
self.object_storage = _CONF['object-storage']
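
The new [validation] group collects the ssh-validation settings (connect method, auth method, IP version and timeouts) in one place and is exposed on the config object as CONF.validation. A hedged sketch of how code might read these options once the group is registered; the attribute names follow the option definitions above and the inline values are the defaults:

    from tempest import config

    CONF = config.CONF

    def validation_settings():
        # Defaults come from the ValidationGroup definitions above.
        return {
            'connect_method': CONF.validation.connect_method,    # 'floating'
            'auth_method': CONF.validation.auth_method,          # 'keypair'
            'ip_version': CONF.validation.ip_version_for_ssh,    # 4
            'ping_timeout': CONF.validation.ping_timeout,        # 120
            'connect_timeout': CONF.validation.connect_timeout,  # 60
            'ssh_timeout': CONF.validation.ssh_timeout,          # 300
        }
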
diff --git a/tempest/manager.py b/tempest/manager.py
index a256f25..025ce65 100644
--- a/tempest/manager.py
+++ b/tempest/manager.py
@@ -46,8 +46,14 @@
# Check if passed or default credentials are valid
if not self.credentials.is_valid():
raise exceptions.InvalidCredentials()
+ # Tenant isolation creates TestResources, but Accounts and some tests
+        # create Credentials
+ if isinstance(credentials, cred_provider.TestResources):
+ creds = self.credentials.credentials
+ else:
+ creds = self.credentials
# Creates an auth provider for the credentials
- self.auth_provider = get_auth_provider(self.credentials)
+ self.auth_provider = get_auth_provider(creds)
# FIXME(andreaf) unused
self.client_attr_names = []
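
Because tenant isolation now hands back TestResources while the accounts providers and some tests still pass bare Credentials, the auth provider has to be built from the wrapped credentials when they are present. A minimal sketch of that unwrapping check:

    from tempest.common import cred_provider

    def unwrap_credentials(credentials):
        # TestResources exposes the original object through its .credentials
        # property; plain Credentials objects are returned unchanged.
        if isinstance(credentials, cred_provider.TestResources):
            return credentials.credentials
        return credentials
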
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index ce640b1..cc152d2 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -186,7 +186,8 @@
if create_kwargs is None:
create_kwargs = {}
network = self.get_tenant_network()
- fixed_network.set_networks_kwarg(network, create_kwargs)
+ create_kwargs = fixed_network.set_networks_kwarg(network,
+ create_kwargs)
LOG.debug("Creating a server (name: %s, image: %s, flavor: %s)",
name, image, flavor)
@@ -234,7 +235,7 @@
self.volumes_client.wait_for_volume_status(volume['id'], 'available')
# The volume retrieved on creation has a non-up-to-date status.
# Retrieval after it becomes active ensures correct details.
- volume = self.volumes_client.get_volume(volume['id'])
+ volume = self.volumes_client.show_volume(volume['id'])
return volume
def _create_loginable_secgroup_rule(self, secgroup_id=None):
@@ -308,15 +309,28 @@
if isinstance(server_or_ip, six.string_types):
ip = server_or_ip
else:
- addr = server_or_ip['addresses'][CONF.compute.network_for_ssh][0]
- ip = addr['addr']
+ addrs = server_or_ip['addresses'][CONF.compute.network_for_ssh]
+ try:
+ ip = (addr['addr'] for addr in addrs if
+ netaddr.valid_ipv4(addr['addr'])).next()
+ except StopIteration:
+ raise lib_exc.NotFound("No IPv4 addresses to use for SSH to "
+ "remote server.")
if username is None:
username = CONF.scenario.ssh_user
- if private_key is None:
- private_key = self.keypair['private_key']
+ # Set this with 'keypair' or others to log in with keypair or
+ # username/password.
+ if CONF.compute.ssh_auth_method == 'keypair':
+ password = None
+ if private_key is None:
+ private_key = self.keypair['private_key']
+ else:
+ password = CONF.compute.image_ssh_password
+ private_key = None
linux_client = remote_client.RemoteClient(ip, username,
- pkey=private_key)
+ pkey=private_key,
+ password=password)
try:
linux_client.validate_authentication()
except Exception:
@@ -425,14 +439,14 @@
self.assertEqual(self.volume['id'], volume['id'])
self.volumes_client.wait_for_volume_status(volume['id'], 'in-use')
# Refresh the volume after the attachment
- self.volume = self.volumes_client.get_volume(volume['id'])
+ self.volume = self.volumes_client.show_volume(volume['id'])
def nova_volume_detach(self):
self.servers_client.detach_volume(self.server['id'], self.volume['id'])
self.volumes_client.wait_for_volume_status(self.volume['id'],
'available')
- volume = self.volumes_client.get_volume(self.volume['id'])
+ volume = self.volumes_client.show_volume(self.volume['id'])
self.assertEqual('available', volume['status'])
def rebuild_server(self, server_id, image=None,
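
get_remote_client now scans the configured network's address list for the first IPv4 entry instead of taking whatever address happens to be first, and chooses keypair or password auth from compute.ssh_auth_method. A standalone sketch of the address selection, written with next() rather than the generator's .next() method so it also runs on Python 3; the address-list shape is the nova 'addresses' format assumed by the code above, and ValueError stands in for the lib_exc.NotFound raised by the real code:

    import netaddr

    def first_ipv4_address(addresses):
        # addresses is assumed to look like:
        # [{'addr': 'fd00::5'}, {'addr': '10.0.0.5'}]
        try:
            return next(addr['addr'] for addr in addresses
                        if netaddr.valid_ipv4(addr['addr']))
        except StopIteration:
            raise ValueError("No IPv4 addresses to use for SSH to "
                             "remote server.")

    # first_ipv4_address([{'addr': 'fd00::5'}, {'addr': '10.0.0.5'}])
    # -> '10.0.0.5'
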
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 92e6c74..b1d3418 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -16,6 +16,7 @@
from oslo_log import log as logging
from tempest_lib.common.utils import data_utils
+from tempest.common import credentials
from tempest.common import tempest_fixtures as fixtures
from tempest.scenario import manager
from tempest import test
@@ -34,6 +35,13 @@
Deletes aggregate
"""
@classmethod
+ def skip_checks(cls):
+ super(TestAggregatesBasicOps, cls).skip_checks()
+ if not credentials.is_admin_available():
+ msg = ("Missing Identity Admin API credentials in configuration.")
+ raise cls.skipException(msg)
+
+ @classmethod
def setup_clients(cls):
super(TestAggregatesBasicOps, cls).setup_clients()
cls.aggregates_client = cls.manager.aggregates_client
@@ -72,7 +80,7 @@
def _check_aggregate_details(self, aggregate, aggregate_name, azone,
hosts, metadata):
- aggregate = self.aggregates_client.get_aggregate(aggregate['id'])
+ aggregate = self.aggregates_client.show_aggregate(aggregate['id'])
self.assertEqual(aggregate_name, aggregate['name'])
self.assertEqual(azone, aggregate['availability_zone'])
self.assertEqual(hosts, aggregate['hosts'])
diff --git a/tempest/scenario/test_load_balancer_basic.py b/tempest/scenario/test_load_balancer_basic.py
index 0d17048..8f37d74 100644
--- a/tempest/scenario/test_load_balancer_basic.py
+++ b/tempest/scenario/test_load_balancer_basic.py
@@ -185,7 +185,7 @@
# Start netcat
start_server = ('while true; do '
'sudo nc -ll -p %(port)s -e sh /tmp/%(script)s; '
- 'done &')
+ 'done > /dev/null &')
cmd = start_server % {'port': self.port1,
'script': 'script1'}
ssh_client.exec_command(cmd)
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index c780464..45923ce 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -73,7 +73,7 @@
self.assertIn(self.volume['id'], [x['id'] for x in volumes])
def cinder_show(self):
- volume = self.volumes_client.get_volume(self.volume['id'])
+ volume = self.volumes_client.show_volume(self.volume['id'])
self.assertEqual(self.volume, volume)
def nova_volume_attach(self):
@@ -83,7 +83,7 @@
self.assertEqual(self.volume['id'], volume['id'])
self.volumes_client.wait_for_volume_status(volume['id'], 'in-use')
# Refresh the volume after the attachment
- self.volume = self.volumes_client.get_volume(volume['id'])
+ self.volume = self.volumes_client.show_volume(volume['id'])
def nova_reboot(self):
self.servers_client.reboot(self.server['id'], 'SOFT')
@@ -99,7 +99,7 @@
self.volumes_client.wait_for_volume_status(self.volume['id'],
'available')
- volume = self.volumes_client.get_volume(self.volume['id'])
+ volume = self.volumes_client.show_volume(self.volume['id'])
self.assertEqual('available', volume['status'])
def create_and_add_security_group(self):
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 8353048..b97ad0b 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -318,11 +318,15 @@
LOG.info(msg)
return
- subnet = self._list_subnets(
- network_id=CONF.network.public_network_id)
- self.assertEqual(1, len(subnet), "Found %d subnets" % len(subnet))
+ # We ping the external IP from the instance using its floating IP
+ # which is always IPv4, so we must only test connectivity to
+ # external IPv4 IPs if the external network is dualstack.
+ v4_subnets = [s for s in self._list_subnets(
+ network_id=CONF.network.public_network_id) if s['ip_version'] == 4]
+ self.assertEqual(1, len(v4_subnets),
+ "Found %d IPv4 subnets" % len(v4_subnets))
- external_ips = [subnet[0]['gateway_ip']]
+ external_ips = [v4_subnets[0]['gateway_ip']]
self._check_server_connectivity(self.floating_ip_tuple.floating_ip,
external_ips)
@@ -586,6 +590,9 @@
@testtools.skipIf(CONF.baremetal.driver_enabled,
'admin_state of instance ports cannot be altered '
'for baremetal nodes')
+ @testtools.skipUnless(CONF.network_feature_enabled.port_admin_state_change,
+ "Changing a port's admin state is not supported "
+ "by the test environment")
@test.attr(type='smoke')
@test.services('compute', 'network')
def test_update_instance_port_admin_state(self):
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 4fab38b..1ecc212 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -125,6 +125,10 @@
if CONF.baremetal.driver_enabled:
msg = ('Not currently supported by baremetal.')
raise cls.skipException(msg)
+ if CONF.network.port_vnic_type in ['direct', 'macvtap']:
+ msg = ('Not currently supported when using vnic_type'
+ ' direct or macvtap')
+ raise cls.skipException(msg)
if not (CONF.network.tenant_networks_reachable or
CONF.network.public_network_id):
msg = ('Either tenant_networks_reachable must be "true", or '
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index 056159e..53b471a 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -85,7 +85,7 @@
def cleaner():
self.snapshots_client.delete_snapshot(snapshot['id'])
try:
- while self.snapshots_client.get_snapshot(snapshot['id']):
+ while self.snapshots_client.show_snapshot(snapshot['id']):
time.sleep(1)
except lib_exc.NotFound:
pass
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 8fa2df5..5bc24ea 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -41,6 +41,8 @@
super(TestVolumeBootPattern, cls).skip_checks()
if not CONF.volume_feature_enabled.snapshot:
raise cls.skipException("Cinder volume snapshots are disabled")
+ if CONF.volume.storage_protocol == 'ceph':
+ raise cls.skipException('Skip until bug 1439371 is fixed.')
def _create_volume_from_image(self):
img_uuid = CONF.compute.image_ref
diff --git a/tempest/services/compute/json/aggregates_client.py b/tempest/services/compute/json/aggregates_client.py
index 36a347b..6c02b63 100644
--- a/tempest/services/compute/json/aggregates_client.py
+++ b/tempest/services/compute/json/aggregates_client.py
@@ -30,7 +30,7 @@
self.validate_response(schema.list_aggregates, resp, body)
return service_client.ResponseBodyList(resp, body['aggregates'])
- def get_aggregate(self, aggregate_id):
+ def show_aggregate(self, aggregate_id):
"""Get details of the given aggregate."""
resp, body = self.get("os-aggregates/%s" % str(aggregate_id))
body = json.loads(body)
@@ -67,7 +67,7 @@
def is_resource_deleted(self, id):
try:
- self.get_aggregate(id)
+ self.show_aggregate(id)
except lib_exc.NotFound:
return True
return False
diff --git a/tempest/services/compute/json/availability_zone_client.py b/tempest/services/compute/json/availability_zone_client.py
index 6c50398..925d79f 100644
--- a/tempest/services/compute/json/availability_zone_client.py
+++ b/tempest/services/compute/json/availability_zone_client.py
@@ -22,17 +22,15 @@
class AvailabilityZoneClientJSON(service_client.ServiceClient):
- def get_availability_zone_list(self):
- resp, body = self.get('os-availability-zone')
- body = json.loads(body)
- self.validate_response(schema.list_availability_zone_list, resp, body)
- return service_client.ResponseBodyList(resp,
- body['availabilityZoneInfo'])
+ def list_availability_zones(self, detail=False):
+ url = 'os-availability-zone'
+ schema_list = schema.list_availability_zone_list
+ if detail:
+ url += '/detail'
+ schema_list = schema.list_availability_zone_list_detail
- def get_availability_zone_list_detail(self):
- resp, body = self.get('os-availability-zone/detail')
+ resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(schema.list_availability_zone_list_detail, resp,
- body)
+ self.validate_response(schema_list, resp, body)
return service_client.ResponseBodyList(resp,
body['availabilityZoneInfo'])
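
list_availability_zones folds the old detail variant into one method behind a boolean flag, switching both the URL suffix and the response schema; the same detail-flag pattern is applied to the volume backups and snapshots clients later in this change. A quick sketch of the URL selection it performs:

    def availability_zone_url(detail=False):
        # Mirrors the client above: one endpoint serves both forms, with
        # '/detail' appended when detailed output is requested.
        url = 'os-availability-zone'
        if detail:
            url += '/detail'
        return url

    # availability_zone_url()            -> 'os-availability-zone'
    # availability_zone_url(detail=True) -> 'os-availability-zone/detail'
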
diff --git a/tempest/services/compute/json/baremetal_nodes_client.py b/tempest/services/compute/json/baremetal_nodes_client.py
index d8bbadd..e4a4e88 100644
--- a/tempest/services/compute/json/baremetal_nodes_client.py
+++ b/tempest/services/compute/json/baremetal_nodes_client.py
@@ -34,7 +34,7 @@
self.validate_response(schema.list_baremetal_nodes, resp, body)
return service_client.ResponseBodyList(resp, body['nodes'])
- def get_baremetal_node(self, baremetal_node_id):
+ def show_baremetal_node(self, baremetal_node_id):
"""Returns the details of a single baremetal node."""
url = 'os-baremetal-nodes/%s' % baremetal_node_id
resp, body = self.get(url)
diff --git a/tempest/services/compute/json/certificates_client.py b/tempest/services/compute/json/certificates_client.py
index e6b72bb..752a48e 100644
--- a/tempest/services/compute/json/certificates_client.py
+++ b/tempest/services/compute/json/certificates_client.py
@@ -21,7 +21,7 @@
class CertificatesClientJSON(service_client.ServiceClient):
- def get_certificate(self, id):
+ def show_certificate(self, id):
url = "os-certificates/%s" % (id)
resp, body = self.get(url)
body = json.loads(body)
diff --git a/tempest/services/compute/json/extensions_client.py b/tempest/services/compute/json/extensions_client.py
index 5c69085..265b381 100644
--- a/tempest/services/compute/json/extensions_client.py
+++ b/tempest/services/compute/json/extensions_client.py
@@ -33,7 +33,7 @@
exts = extensions['extensions']
return any([e for e in exts if e['name'] == extension])
- def get_extension(self, extension_alias):
+ def show_extension(self, extension_alias):
resp, body = self.get('extensions/%s' % extension_alias)
body = json.loads(body)
return service_client.ResponseBody(resp, body['extension'])
diff --git a/tempest/services/compute/json/flavors_client.py b/tempest/services/compute/json/flavors_client.py
index 25b1869..2de43cf 100644
--- a/tempest/services/compute/json/flavors_client.py
+++ b/tempest/services/compute/json/flavors_client.py
@@ -16,11 +16,10 @@
import json
import urllib
-from tempest.api_schema.response.compute import flavors as common_schema
from tempest.api_schema.response.compute import flavors_access as schema_access
from tempest.api_schema.response.compute import flavors_extra_specs \
as schema_extra_specs
-from tempest.api_schema.response.compute.v2_1 import flavors as v2schema
+from tempest.api_schema.response.compute.v2_1 import flavors as schema
from tempest.common import service_client
@@ -33,7 +32,7 @@
resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(common_schema.list_flavors, resp, body)
+ self.validate_response(schema.list_flavors, resp, body)
return service_client.ResponseBodyList(resp, body['flavors'])
def list_flavors_with_detail(self, params=None):
@@ -43,13 +42,13 @@
resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(v2schema.list_flavors_details, resp, body)
+ self.validate_response(schema.list_flavors_details, resp, body)
return service_client.ResponseBodyList(resp, body['flavors'])
def get_flavor_details(self, flavor_id):
resp, body = self.get("flavors/%s" % str(flavor_id))
body = json.loads(body)
- self.validate_response(v2schema.create_get_flavor_details, resp, body)
+ self.validate_response(schema.create_get_flavor_details, resp, body)
return service_client.ResponseBody(resp, body['flavor'])
def create_flavor(self, name, ram, vcpus, disk, flavor_id, **kwargs):
@@ -73,13 +72,13 @@
resp, body = self.post('flavors', post_body)
body = json.loads(body)
- self.validate_response(v2schema.create_get_flavor_details, resp, body)
+ self.validate_response(schema.create_get_flavor_details, resp, body)
return service_client.ResponseBody(resp, body['flavor'])
def delete_flavor(self, flavor_id):
"""Deletes the given flavor."""
resp, body = self.delete("flavors/{0}".format(flavor_id))
- self.validate_response(v2schema.delete_flavor, resp, body)
+ self.validate_response(schema.delete_flavor, resp, body)
return service_client.ResponseBody(resp, body)
def is_resource_deleted(self, id):
@@ -137,7 +136,7 @@
"""Unsets extra Specs from the mentioned flavor."""
resp, body = self.delete('flavors/%s/os-extra_specs/%s' %
(str(flavor_id), key))
- self.validate_response(v2schema.unset_flavor_extra_specs, resp, body)
+ self.validate_response(schema.unset_flavor_extra_specs, resp, body)
return service_client.ResponseBody(resp, body)
def list_flavor_access(self, flavor_id):
diff --git a/tempest/services/compute/json/hosts_client.py b/tempest/services/compute/json/hosts_client.py
index de925a9..088e695 100644
--- a/tempest/services/compute/json/hosts_client.py
+++ b/tempest/services/compute/json/hosts_client.py
@@ -15,8 +15,7 @@
import json
import urllib
-from tempest.api_schema.response.compute import hosts as schema
-from tempest.api_schema.response.compute.v2_1 import hosts as v2_schema
+from tempest.api_schema.response.compute.v2_1 import hosts as schema
from tempest.common import service_client
@@ -39,7 +38,7 @@
resp, body = self.get("os-hosts/%s" % str(hostname))
body = json.loads(body)
- self.validate_response(schema.show_host_detail, resp, body)
+ self.validate_response(schema.get_host_detail, resp, body)
return service_client.ResponseBodyList(resp, body['host'])
def update_host(self, hostname, **kwargs):
@@ -54,7 +53,7 @@
resp, body = self.put("os-hosts/%s" % str(hostname), request_body)
body = json.loads(body)
- self.validate_response(v2_schema.update_host, resp, body)
+ self.validate_response(schema.update_host, resp, body)
return service_client.ResponseBody(resp, body)
def startup_host(self, hostname):
@@ -62,7 +61,7 @@
resp, body = self.get("os-hosts/%s/startup" % str(hostname))
body = json.loads(body)
- self.validate_response(v2_schema.startup_host, resp, body)
+ self.validate_response(schema.startup_host, resp, body)
return service_client.ResponseBody(resp, body['host'])
def shutdown_host(self, hostname):
@@ -70,7 +69,7 @@
resp, body = self.get("os-hosts/%s/shutdown" % str(hostname))
body = json.loads(body)
- self.validate_response(v2_schema.shutdown_host, resp, body)
+ self.validate_response(schema.shutdown_host, resp, body)
return service_client.ResponseBody(resp, body['host'])
def reboot_host(self, hostname):
@@ -78,5 +77,5 @@
resp, body = self.get("os-hosts/%s/reboot" % str(hostname))
body = json.loads(body)
- self.validate_response(v2_schema.reboot_host, resp, body)
+ self.validate_response(schema.reboot_host, resp, body)
return service_client.ResponseBody(resp, body['host'])
diff --git a/tempest/services/compute/json/hypervisor_client.py b/tempest/services/compute/json/hypervisor_client.py
index bf4bc7f..49ac266 100644
--- a/tempest/services/compute/json/hypervisor_client.py
+++ b/tempest/services/compute/json/hypervisor_client.py
@@ -15,8 +15,7 @@
import json
-from tempest.api_schema.response.compute import hypervisors as common_schema
-from tempest.api_schema.response.compute.v2_1 import hypervisors as v2schema
+from tempest.api_schema.response.compute.v2_1 import hypervisors as schema
from tempest.common import service_client
@@ -26,51 +25,47 @@
"""List hypervisors information."""
resp, body = self.get('os-hypervisors')
body = json.loads(body)
- self.validate_response(common_schema.common_hypervisors_detail,
- resp, body)
+ self.validate_response(schema.list_search_hypervisors, resp, body)
return service_client.ResponseBodyList(resp, body['hypervisors'])
def get_hypervisor_list_details(self):
"""Show detailed hypervisors information."""
resp, body = self.get('os-hypervisors/detail')
body = json.loads(body)
- self.validate_response(common_schema.common_list_hypervisors_detail,
- resp, body)
+ self.validate_response(schema.list_hypervisors_detail, resp, body)
return service_client.ResponseBodyList(resp, body['hypervisors'])
def get_hypervisor_show_details(self, hyper_id):
"""Display the details of the specified hypervisor."""
resp, body = self.get('os-hypervisors/%s' % hyper_id)
body = json.loads(body)
- self.validate_response(common_schema.common_show_hypervisor,
- resp, body)
+ self.validate_response(schema.get_hypervisor, resp, body)
return service_client.ResponseBody(resp, body['hypervisor'])
def get_hypervisor_servers(self, hyper_name):
"""List instances belonging to the specified hypervisor."""
resp, body = self.get('os-hypervisors/%s/servers' % hyper_name)
body = json.loads(body)
- self.validate_response(v2schema.hypervisors_servers, resp, body)
+ self.validate_response(schema.get_hypervisors_servers, resp, body)
return service_client.ResponseBodyList(resp, body['hypervisors'])
def get_hypervisor_stats(self):
"""Get hypervisor statistics over all compute nodes."""
resp, body = self.get('os-hypervisors/statistics')
body = json.loads(body)
- self.validate_response(common_schema.hypervisor_statistics, resp, body)
+ self.validate_response(schema.get_hypervisor_statistics, resp, body)
return service_client.ResponseBody(resp, body['hypervisor_statistics'])
def get_hypervisor_uptime(self, hyper_id):
"""Display the uptime of the specified hypervisor."""
resp, body = self.get('os-hypervisors/%s/uptime' % hyper_id)
body = json.loads(body)
- self.validate_response(common_schema.hypervisor_uptime, resp, body)
+ self.validate_response(schema.get_hypervisor_uptime, resp, body)
return service_client.ResponseBody(resp, body['hypervisor'])
def search_hypervisor(self, hyper_name):
"""Search specified hypervisor."""
resp, body = self.get('os-hypervisors/%s/search' % hyper_name)
body = json.loads(body)
- self.validate_response(common_schema.common_hypervisors_detail,
- resp, body)
+ self.validate_response(schema.list_search_hypervisors, resp, body)
return service_client.ResponseBodyList(resp, body['hypervisors'])
diff --git a/tempest/services/compute/json/interfaces_client.py b/tempest/services/compute/json/interfaces_client.py
index c3bfa99..223e90b 100644
--- a/tempest/services/compute/json/interfaces_client.py
+++ b/tempest/services/compute/json/interfaces_client.py
@@ -16,8 +16,8 @@
import json
import time
-from tempest.api_schema.response.compute import servers as servers_schema
from tempest.api_schema.response.compute.v2_1 import interfaces as schema
+from tempest.api_schema.response.compute.v2_1 import servers as servers_schema
from tempest.common import service_client
from tempest import exceptions
diff --git a/tempest/services/compute/json/keypairs_client.py b/tempest/services/compute/json/keypairs_client.py
index 722aefa..7fe335b 100644
--- a/tempest/services/compute/json/keypairs_client.py
+++ b/tempest/services/compute/json/keypairs_client.py
@@ -15,7 +15,6 @@
import json
-from tempest.api_schema.response.compute import keypairs as common_schema
from tempest.api_schema.response.compute.v2_1 import keypairs as schema
from tempest.common import service_client
@@ -30,7 +29,7 @@
# servers, etc. A bug?
# For now we shall adhere to the spec, but the spec for keypairs
# is yet to be found
- self.validate_response(common_schema.list_keypairs, resp, body)
+ self.validate_response(schema.list_keypairs, resp, body)
return service_client.ResponseBodyList(resp, body['keypairs'])
def get_keypair(self, key_name):
diff --git a/tempest/services/compute/json/quotas_client.py b/tempest/services/compute/json/quotas_client.py
index 89f4acd..6e38c47 100644
--- a/tempest/services/compute/json/quotas_client.py
+++ b/tempest/services/compute/json/quotas_client.py
@@ -31,7 +31,7 @@
url += '?user_id=%s' % str(user_id)
resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(schema.quota_set, resp, body)
+ self.validate_response(schema.get_quota_set, resp, body)
return service_client.ResponseBody(resp, body['quota_set'])
def get_default_quota_set(self, tenant_id):
@@ -40,7 +40,7 @@
url = 'os-quota-sets/%s/defaults' % str(tenant_id)
resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(schema.quota_set, resp, body)
+ self.validate_response(schema.get_quota_set, resp, body)
return service_client.ResponseBody(resp, body['quota_set'])
def update_quota_set(self, tenant_id, user_id=None,
@@ -105,7 +105,7 @@
post_body)
body = json.loads(body)
- self.validate_response(schema.quota_set_update, resp, body)
+ self.validate_response(schema.update_quota_set, resp, body)
return service_client.ResponseBody(resp, body['quota_set'])
def delete_quota_set(self, tenant_id):
@@ -123,7 +123,7 @@
url = 'os-quota-class-sets/%s' % str(quota_class_id)
resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(classes_schema.quota_set, resp, body)
+ self.validate_response(classes_schema.get_quota_class_set, resp, body)
return service_client.ResponseBody(resp, body['quota_class_set'])
def update_quota_class_set(self, quota_class_id, **kwargs):
@@ -136,5 +136,6 @@
post_body)
body = json.loads(body)
- self.validate_response(classes_schema.quota_set_update, resp, body)
+ self.validate_response(classes_schema.update_quota_class_set,
+ resp, body)
return service_client.ResponseBody(resp, body['quota_class_set'])
diff --git a/tempest/services/compute/json/servers_client.py b/tempest/services/compute/json/servers_client.py
index bd27668..c9ba2c3 100644
--- a/tempest/services/compute/json/servers_client.py
+++ b/tempest/services/compute/json/servers_client.py
@@ -20,7 +20,6 @@
from tempest_lib import exceptions as lib_exc
-from tempest.api_schema.response.compute import servers as common_schema
from tempest.api_schema.response.compute.v2_1 import servers as schema
from tempest.common import service_client
from tempest.common import waiters
@@ -147,7 +146,7 @@
def delete_server(self, server_id):
"""Deletes the given server."""
resp, body = self.delete("servers/%s" % str(server_id))
- self.validate_response(common_schema.delete_server, resp, body)
+ self.validate_response(schema.delete_server, resp, body)
return service_client.ResponseBody(resp, body)
def list_servers(self, params=None):
@@ -159,7 +158,7 @@
resp, body = self.get(url)
body = json.loads(body)
- self.validate_response(common_schema.list_servers, resp, body)
+ self.validate_response(schema.list_servers, resp, body)
return service_client.ResponseBody(resp, body)
def list_servers_with_detail(self, params=None):
@@ -216,7 +215,7 @@
return service_client.ResponseBody(resp, body)
def action(self, server_id, action_name, response_key,
- schema=common_schema.server_actions_common_schema,
+ schema=schema.server_actions_common_schema,
response_class=service_client.ResponseBody, **kwargs):
post_body = json.dumps({action_name: kwargs})
resp, body = self.post('servers/%s/action' % str(server_id),
@@ -253,7 +252,7 @@
resp, body = self.get("servers/%s/os-server-password" %
str(server_id))
body = json.loads(body)
- self.validate_response(common_schema.get_password, resp, body)
+ self.validate_response(schema.get_password, resp, body)
return service_client.ResponseBody(resp, body)
def delete_password(self, server_id):
@@ -264,7 +263,7 @@
"""
resp, body = self.delete("servers/%s/os-server-password" %
str(server_id))
- self.validate_response(common_schema.server_actions_delete_password,
+ self.validate_response(schema.server_actions_delete_password,
resp, body)
return service_client.ResponseBody(resp, body)
@@ -306,7 +305,7 @@
def list_server_metadata(self, server_id):
resp, body = self.get("servers/%s/metadata" % str(server_id))
body = json.loads(body)
- self.validate_response(common_schema.list_server_metadata, resp, body)
+ self.validate_response(schema.list_server_metadata, resp, body)
return service_client.ResponseBody(resp, body['metadata'])
def set_server_metadata(self, server_id, meta, no_metadata_field=False):
@@ -317,7 +316,7 @@
resp, body = self.put('servers/%s/metadata' % str(server_id),
post_body)
body = json.loads(body)
- self.validate_response(common_schema.set_server_metadata, resp, body)
+ self.validate_response(schema.set_server_metadata, resp, body)
return service_client.ResponseBody(resp, body['metadata'])
def update_server_metadata(self, server_id, meta):
@@ -325,7 +324,7 @@
resp, body = self.post('servers/%s/metadata' % str(server_id),
post_body)
body = json.loads(body)
- self.validate_response(common_schema.update_server_metadata,
+ self.validate_response(schema.update_server_metadata,
resp, body)
return service_client.ResponseBody(resp, body['metadata'])
@@ -348,7 +347,7 @@
def delete_server_metadata_item(self, server_id, key):
resp, body = self.delete("servers/%s/metadata/%s" %
(str(server_id), key))
- self.validate_response(common_schema.delete_server_metadata_item,
+ self.validate_response(schema.delete_server_metadata_item,
resp, body)
return service_client.ResponseBody(resp, body)
@@ -415,7 +414,7 @@
req_body = json.dumps({'os-migrateLive': migrate_params})
resp, body = self.post("servers/%s/action" % str(server_id), req_body)
- self.validate_response(common_schema.server_actions_common_schema,
+ self.validate_response(schema.server_actions_common_schema,
resp, body)
return service_client.ResponseBody(resp, body)
@@ -466,7 +465,7 @@
def get_console_output(self, server_id, length):
kwargs = {'length': length} if length else {}
return self.action(server_id, 'os-getConsoleOutput', 'output',
- common_schema.get_console_output,
+ schema.get_console_output,
response_class=service_client.ResponseBodyData,
**kwargs)
@@ -531,7 +530,7 @@
def get_vnc_console(self, server_id, console_type):
"""Get URL of VNC console."""
return self.action(server_id, "os-getVNCConsole",
- "console", common_schema.get_vnc_console,
+ "console", schema.get_vnc_console,
type=console_type)
def create_server_group(self, name, policies):
diff --git a/tempest/services/messaging/json/messaging_client.py b/tempest/services/messaging/json/messaging_client.py
index 36444a9..483ba93 100644
--- a/tempest/services/messaging/json/messaging_client.py
+++ b/tempest/services/messaging/json/messaging_client.py
@@ -58,7 +58,7 @@
self.expected_success(201, resp.status)
return resp, body
- def get_queue(self, queue_name):
+ def show_queue(self, queue_name):
uri = '{0}/queues/{1}'.format(self.uri_prefix, queue_name)
resp, body = self.get(uri)
self.expected_success(204, resp.status)
@@ -76,14 +76,14 @@
self.expected_success(204, resp.status)
return resp, body
- def get_queue_stats(self, queue_name):
+ def show_queue_stats(self, queue_name):
uri = '{0}/queues/{1}/stats'.format(self.uri_prefix, queue_name)
resp, body = self.get(uri)
body = json.loads(body)
self.validate_response(queues_schema.queue_stats, resp, body)
return resp, body
- def get_queue_metadata(self, queue_name):
+ def show_queue_metadata(self, queue_name):
uri = '{0}/queues/{1}/metadata'.format(self.uri_prefix, queue_name)
resp, body = self.get(uri)
self.expected_success(200, resp.status)
@@ -117,7 +117,7 @@
return resp, body
- def get_single_message(self, message_uri):
+ def show_single_message(self, message_uri):
resp, body = self.get(message_uri, extra_headers=True,
headers=self.headers)
if resp['status'] != '204':
@@ -126,7 +126,7 @@
body)
return resp, body
- def get_multiple_messages(self, message_uri):
+ def show_multiple_messages(self, message_uri):
resp, body = self.get(message_uri, extra_headers=True,
headers=self.headers)
diff --git a/tempest/services/orchestration/json/orchestration_client.py b/tempest/services/orchestration/json/orchestration_client.py
index 1a4c5d9..debf39b 100644
--- a/tempest/services/orchestration/json/orchestration_client.py
+++ b/tempest/services/orchestration/json/orchestration_client.py
@@ -105,7 +105,7 @@
headers['X-Auth-User'] = self.user
return headers, body
- def get_stack(self, stack_identifier):
+ def show_stack(self, stack_identifier):
"""Returns the details of a single stack."""
url = "stacks/%s" % stack_identifier
resp, body = self.get(url)
@@ -137,7 +137,7 @@
body = json.loads(body)
return service_client.ResponseBodyList(resp, body['resources'])
- def get_resource(self, stack_identifier, resource_name):
+ def show_resource(self, stack_identifier, resource_name):
"""Returns the details of a single resource."""
url = "stacks/%s/resources/%s" % (stack_identifier, resource_name)
resp, body = self.get(url)
@@ -159,7 +159,7 @@
while True:
try:
- body = self.get_resource(
+ body = self.show_resource(
stack_identifier, resource_name)
except lib_exc.NotFound:
# ignore this, as the resource may not have
@@ -195,7 +195,7 @@
while True:
try:
- body = self.get_stack(stack_identifier)
+ body = self.show_stack(stack_identifier)
except lib_exc.NotFound:
if status == 'DELETE_COMPLETE':
return
@@ -295,14 +295,14 @@
body = json.loads(body)
return service_client.ResponseBodyList(resp, body['resource_types'])
- def get_resource_type(self, resource_type_name):
+ def show_resource_type(self, resource_type_name):
"""Return the schema of a resource type."""
url = 'resource_types/%s' % resource_type_name
resp, body = self.get(url)
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, json.loads(body))
- def get_resource_type_template(self, resource_type_name):
+ def show_resource_type_template(self, resource_type_name):
"""Return the template of a resource type."""
url = 'resource_types/%s/template' % resource_type_name
resp, body = self.get(url)
@@ -320,7 +320,7 @@
body = json.loads(body)
return service_client.ResponseBody(resp, body)
- def get_software_config(self, conf_id):
+ def show_software_config(self, conf_id):
"""Returns a software configuration resource."""
url = 'software_configs/%s' % str(conf_id)
resp, body = self.get(url)
@@ -365,7 +365,7 @@
body = json.loads(body)
return service_client.ResponseBody(resp, body)
- def get_software_deploy_list(self):
+ def list_software_deployments(self):
"""Returns a list of all deployments."""
url = 'software_deployments'
resp, body = self.get(url)
@@ -373,7 +373,7 @@
body = json.loads(body)
return service_client.ResponseBody(resp, body)
- def get_software_deploy(self, deploy_id):
+ def show_software_deployment(self, deploy_id):
"""Returns a specific software deployment."""
url = 'software_deployments/%s' % str(deploy_id)
resp, body = self.get(url)
@@ -381,7 +381,7 @@
body = json.loads(body)
return service_client.ResponseBody(resp, body)
- def get_software_deploy_meta(self, server_id):
+ def show_software_deployment_metadata(self, server_id):
"""Return a config metadata for a specific server."""
url = 'software_deployments/metadata/%s' % server_id
resp, body = self.get(url)
diff --git a/tempest/services/telemetry/json/telemetry_client.py b/tempest/services/telemetry/json/telemetry_client.py
index 36c123b..0c01908 100644
--- a/tempest/services/telemetry/json/telemetry_client.py
+++ b/tempest/services/telemetry/json/telemetry_client.py
@@ -50,7 +50,7 @@
body = self.deserialize(body)
return service_client.ResponseBody(resp, body)
- def helper_list(self, uri, query=None, period=None):
+ def _helper_list(self, uri, query=None, period=None):
uri_dict = {}
if query:
uri_dict = {'q.field': query[0],
@@ -67,32 +67,32 @@
def list_resources(self, query=None):
uri = '%s/resources' % self.uri_prefix
- return self.helper_list(uri, query)
+ return self._helper_list(uri, query)
def list_meters(self, query=None):
uri = '%s/meters' % self.uri_prefix
- return self.helper_list(uri, query)
+ return self._helper_list(uri, query)
def list_alarms(self, query=None):
uri = '%s/alarms' % self.uri_prefix
- return self.helper_list(uri, query)
+ return self._helper_list(uri, query)
def list_statistics(self, meter, period=None, query=None):
uri = "%s/meters/%s/statistics" % (self.uri_prefix, meter)
- return self.helper_list(uri, query, period)
+ return self._helper_list(uri, query, period)
def list_samples(self, meter_id, query=None):
uri = '%s/meters/%s' % (self.uri_prefix, meter_id)
- return self.helper_list(uri, query)
+ return self._helper_list(uri, query)
- def get_resource(self, resource_id):
+ def show_resource(self, resource_id):
uri = '%s/resources/%s' % (self.uri_prefix, resource_id)
resp, body = self.get(uri)
self.expected_success(200, resp.status)
body = self.deserialize(body)
return service_client.ResponseBody(resp, body)
- def get_alarm(self, alarm_id):
+ def show_alarm(self, alarm_id):
uri = '%s/alarms/%s' % (self.uri_prefix, alarm_id)
resp, body = self.get(uri)
self.expected_success(200, resp.status)
@@ -123,7 +123,7 @@
body = self.deserialize(body)
return service_client.ResponseBody(resp, body)
- def alarm_get_state(self, alarm_id):
+ def show_alarm_state(self, alarm_id):
uri = "%s/alarms/%s/state" % (self.uri_prefix, alarm_id)
resp, body = self.get(uri)
self.expected_success(200, resp.status)
diff --git a/tempest/services/volume/json/admin/volume_quotas_client.py b/tempest/services/volume/json/admin/volume_quotas_client.py
index abd36c1..5092afc 100644
--- a/tempest/services/volume/json/admin/volume_quotas_client.py
+++ b/tempest/services/volume/json/admin/volume_quotas_client.py
@@ -26,7 +26,7 @@
TYPE = "json"
- def get_default_quota_set(self, tenant_id):
+ def show_default_quota_set(self, tenant_id):
"""List the default volume quota set for a tenant."""
url = 'os-quota-sets/%s/defaults' % tenant_id
@@ -34,7 +34,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, self._parse_resp(body))
- def get_quota_set(self, tenant_id, params=None):
+ def show_quota_set(self, tenant_id, params=None):
"""List the quota set for a tenant."""
url = 'os-quota-sets/%s' % tenant_id
@@ -45,10 +45,10 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, self._parse_resp(body))
- def get_quota_usage(self, tenant_id):
+ def show_quota_usage(self, tenant_id):
"""List the quota set for a tenant."""
- body = self.get_quota_set(tenant_id, params={'usage': True})
+ body = self.show_quota_set(tenant_id, params={'usage': True})
return body
def update_quota_set(self, tenant_id, gigabytes=None, volumes=None,
diff --git a/tempest/services/volume/json/admin/volume_types_client.py b/tempest/services/volume/json/admin/volume_types_client.py
index c905155..9366984 100644
--- a/tempest/services/volume/json/admin/volume_types_client.py
+++ b/tempest/services/volume/json/admin/volume_types_client.py
@@ -33,9 +33,9 @@
# "type": resource_type}
try:
if resource['type'] == "volume-type":
- self.get_volume_type(resource['id'])
+ self.show_volume_type(resource['id'])
elif resource['type'] == "encryption-type":
- body = self.get_encryption_type(resource['id'])
+ body = self.show_encryption_type(resource['id'])
if not body:
return True
else:
@@ -61,7 +61,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBodyList(resp, body['volume_types'])
- def get_volume_type(self, volume_id):
+ def show_volume_type(self, volume_id):
"""Returns the details of a single volume_type."""
url = "types/%s" % str(volume_id)
resp, body = self.get(url)
@@ -104,7 +104,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, body['extra_specs'])
- def get_volume_type_extra_specs(self, vol_type_id, extra_spec_name):
+ def show_volume_type_extra_specs(self, vol_type_id, extra_spec_name):
"""Returns the details of a single volume_type extra spec."""
url = "types/%s/extra_specs/%s" % (str(vol_type_id),
str(extra_spec_name))
@@ -150,7 +150,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, body)
- def get_encryption_type(self, vol_type_id):
+ def show_encryption_type(self, vol_type_id):
"""
Get the volume encryption type for the specified volume type.
vol_type_id: Id of volume_type.
diff --git a/tempest/services/volume/json/availability_zone_client.py b/tempest/services/volume/json/availability_zone_client.py
index bb5e39b..dc0388f 100644
--- a/tempest/services/volume/json/availability_zone_client.py
+++ b/tempest/services/volume/json/availability_zone_client.py
@@ -20,7 +20,7 @@
class BaseVolumeAvailabilityZoneClientJSON(service_client.ServiceClient):
- def get_availability_zone_list(self):
+ def list_availability_zones(self):
resp, body = self.get('os-availability-zone')
body = json.loads(body)
self.expected_success(200, resp.status)
diff --git a/tempest/services/volume/json/backups_client.py b/tempest/services/volume/json/backups_client.py
index dad5aff..83ec182 100644
--- a/tempest/services/volume/json/backups_client.py
+++ b/tempest/services/volume/json/backups_client.py
@@ -56,7 +56,7 @@
self.expected_success(202, resp.status)
return service_client.ResponseBody(resp, body)
- def get_backup(self, backup_id):
+ def show_backup(self, backup_id):
"""Returns the details of a single backup."""
url = "backups/%s" % str(backup_id)
resp, body = self.get(url)
@@ -64,9 +64,11 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, body['backup'])
- def list_backups_with_detail(self):
+ def list_backups(self, detail=False):
"""Information for all the tenant's backups."""
- url = "backups/detail"
+ url = "backups"
+ if detail:
+ url += "/detail"
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
@@ -74,13 +76,13 @@
def wait_for_backup_status(self, backup_id, status):
"""Waits for a Backup to reach a given status."""
- body = self.get_backup(backup_id)
+ body = self.show_backup(backup_id)
backup_status = body['status']
start = int(time.time())
while backup_status != status:
time.sleep(self.build_interval)
- body = self.get_backup(backup_id)
+ body = self.show_backup(backup_id)
backup_status = body['status']
if backup_status == 'error':
raise exceptions.VolumeBackupException(backup_id=backup_id)
diff --git a/tempest/services/volume/json/qos_client.py b/tempest/services/volume/json/qos_client.py
index 14ff506..e9d3777 100644
--- a/tempest/services/volume/json/qos_client.py
+++ b/tempest/services/volume/json/qos_client.py
@@ -26,7 +26,7 @@
def is_resource_deleted(self, qos_id):
try:
- self.get_qos(qos_id)
+ self.show_qos(qos_id)
except lib_exc.NotFound:
return True
return False
@@ -48,15 +48,15 @@
start_time = int(time.time())
while True:
if operation == 'qos-key-unset':
- body = self.get_qos(qos_id)
+ body = self.show_qos(qos_id)
if not any(key in body['specs'] for key in args):
return
elif operation == 'disassociate':
- body = self.get_association_qos(qos_id)
+ body = self.show_association_qos(qos_id)
if not any(args in body[i]['id'] for i in range(0, len(body))):
return
elif operation == 'disassociate-all':
- body = self.get_association_qos(qos_id)
+ body = self.show_association_qos(qos_id)
if not body:
return
else:
@@ -96,7 +96,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBodyList(resp, body['qos_specs'])
- def get_qos(self, qos_id):
+ def show_qos(self, qos_id):
"""Get the specified QoS specification."""
url = "qos-specs/%s" % str(qos_id)
resp, body = self.get(url)
@@ -133,7 +133,7 @@
self.expected_success(202, resp.status)
return service_client.ResponseBody(resp, body)
- def get_association_qos(self, qos_id):
+ def show_association_qos(self, qos_id):
"""Get the association of the specified QoS specification."""
url = "qos-specs/%s/associations" % str(qos_id)
resp, body = self.get(url)
diff --git a/tempest/services/volume/json/snapshots_client.py b/tempest/services/volume/json/snapshots_client.py
index 9f88085..2140c62 100644
--- a/tempest/services/volume/json/snapshots_client.py
+++ b/tempest/services/volume/json/snapshots_client.py
@@ -29,29 +29,20 @@
create_resp = 200
- def list_snapshots(self, params=None):
+ def list_snapshots(self, detail=False, params=None):
"""List all the snapshot."""
url = 'snapshots'
+ if detail:
+ url += '/detail'
if params:
- url += '?%s' % urllib.urlencode(params)
+ url += '?%s' % urllib.urlencode(params)
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
return service_client.ResponseBodyList(resp, body['snapshots'])
- def list_snapshots_with_detail(self, params=None):
- """List the details of all snapshots."""
- url = 'snapshots/detail'
- if params:
- url += '?%s' % urllib.urlencode(params)
-
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return service_client.ResponseBodyList(resp, body['snapshots'])
-
- def get_snapshot(self, snapshot_id):
+ def show_snapshot(self, snapshot_id):
"""Returns the details of a single snapshot."""
url = "snapshots/%s" % str(snapshot_id)
resp, body = self.get(url)
@@ -85,7 +76,7 @@
# NOTE(afazekas): just for the wait function
def _get_snapshot_status(self, snapshot_id):
- body = self.get_snapshot(snapshot_id)
+ body = self.show_snapshot(snapshot_id)
status = body['status']
# NOTE(afazekas): snapshot can reach an "error"
# state in a "normal" lifecycle
@@ -128,7 +119,7 @@
def is_resource_deleted(self, id):
try:
- self.get_snapshot(id)
+ self.show_snapshot(id)
except lib_exc.NotFound:
return True
return False
@@ -166,7 +157,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, body['metadata'])
- def get_snapshot_metadata(self, snapshot_id):
+ def show_snapshot_metadata(self, snapshot_id):
"""Get metadata of the snapshot."""
url = "snapshots/%s/metadata" % str(snapshot_id)
resp, body = self.get(url)
diff --git a/tempest/services/volume/json/volumes_client.py b/tempest/services/volume/json/volumes_client.py
index 059664c..a82291a 100644
--- a/tempest/services/volume/json/volumes_client.py
+++ b/tempest/services/volume/json/volumes_client.py
@@ -40,29 +40,20 @@
"""Return the element 'attachment' from input volumes."""
return volume['attachments'][0]
- def list_volumes(self, params=None):
+ def list_volumes(self, detail=False, params=None):
"""List all the volumes created."""
url = 'volumes'
+ if detail:
+ url += '/detail'
if params:
- url += '?%s' % urllib.urlencode(params)
+ url += '?%s' % urllib.urlencode(params)
resp, body = self.get(url)
body = json.loads(body)
self.expected_success(200, resp.status)
return service_client.ResponseBodyList(resp, body['volumes'])
- def list_volumes_with_detail(self, params=None):
- """List the details of all volumes."""
- url = 'volumes/detail'
- if params:
- url += '?%s' % urllib.urlencode(params)
-
- resp, body = self.get(url)
- body = json.loads(body)
- self.expected_success(200, resp.status)
- return service_client.ResponseBodyList(resp, body['volumes'])
-
- def get_volume(self, volume_id):
+ def show_volume(self, volume_id):
"""Returns the details of a single volume."""
url = "volumes/%s" % str(volume_id)
resp, body = self.get(url)
@@ -161,13 +152,13 @@
def wait_for_volume_status(self, volume_id, status):
"""Waits for a Volume to reach a given status."""
- body = self.get_volume(volume_id)
+ body = self.show_volume(volume_id)
volume_status = body['status']
start = int(time.time())
while volume_status != status:
time.sleep(self.build_interval)
- body = self.get_volume(volume_id)
+ body = self.show_volume(volume_id)
volume_status = body['status']
if volume_status == 'error':
raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
@@ -183,7 +174,7 @@
def is_resource_deleted(self, id):
try:
- self.get_volume(id)
+ self.show_volume(id)
except lib_exc.NotFound:
return True
return False
@@ -240,7 +231,7 @@
self.expected_success(202, resp.status)
return service_client.ResponseBody(resp, body['transfer'])
- def get_volume_transfer(self, transfer_id):
+ def show_volume_transfer(self, transfer_id):
"""Returns the details of a volume transfer."""
url = "os-volume-transfer/%s" % str(transfer_id)
resp, body = self.get(url)
@@ -303,7 +294,7 @@
self.expected_success(200, resp.status)
return service_client.ResponseBody(resp, body['metadata'])
- def get_volume_metadata(self, volume_id):
+ def show_volume_metadata(self, volume_id):
"""Get metadata of the volume."""
url = "volumes/%s/metadata" % str(volume_id)
resp, body = self.get(url)
diff --git a/tempest/stress/cleanup.py b/tempest/stress/cleanup.py
index d0b1be1..29c4401 100644
--- a/tempest/stress/cleanup.py
+++ b/tempest/stress/cleanup.py
@@ -80,7 +80,7 @@
# volume deletion may block
_, snaps = admin_manager.snapshots_client.\
- list_snapshots({"all_tenants": True})
+ list_snapshots(params={"all_tenants": True})
LOG.info("Cleanup::remove %s snapshots" % len(snaps))
for v in snaps:
try:
@@ -96,7 +96,8 @@
except Exception:
pass
- vols = admin_manager.volumes_client.list_volumes({"all_tenants": True})
+ vols = admin_manager.volumes_client.list_volumes(
+ params={"all_tenants": True})
LOG.info("Cleanup::remove %s volumes" % len(vols))
for v in vols:
try:
diff --git a/tempest/test.py b/tempest/test.py
index da936b4..d57b1d8 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -445,7 +445,12 @@
# Make sure isolated_creds exists and get a network client
networks_client = cls.get_client_manager().networks_client
isolated_creds = getattr(cls, 'isolated_creds', None)
- if credentials.is_admin_available():
+ # In case of nova network, isolated tenants are not able to list the
+ # network configured in fixed_network_name, even if they can use it
+ # for their servers, so use an admin network client to validate
+ # the network name
+ if (not CONF.service_available.neutron and
+ credentials.is_admin_available()):
admin_creds = isolated_creds.get_admin_creds()
networks_client = clients.Manager(admin_creds).networks_client
return fixed_network.get_tenant_network(isolated_creds,
@@ -467,8 +472,6 @@
super(NegativeAutoTest, cls).setUpClass()
os = cls.get_client_manager()
cls.client = os.negative_client
- os_admin = clients.AdminManager(service=cls._service)
- cls.admin_client = os_admin.negative_client
@staticmethod
def load_tests(*args):
@@ -596,7 +599,13 @@
"mechanism")
if "admin_client" in description and description["admin_client"]:
- client = self.admin_client
+ if not credentials.is_admin_available():
+ msg = ("Missing Identity Admin API credentials in"
+ "configuration.")
+ raise self.skipException(msg)
+ creds = self.isolated_creds.get_admin_creds()
+ os_adm = clients.Manager(credentials=creds)
+ client = os_adm.negative_client
else:
client = self.client
resp, resp_body = client.send_request(method, new_url,
diff --git a/tempest/tests/common/test_accounts.py b/tempest/tests/common/test_accounts.py
index 6371e49..b4048ba 100644
--- a/tempest/tests/common/test_accounts.py
+++ b/tempest/tests/common/test_accounts.py
@@ -23,11 +23,13 @@
from tempest import auth
from tempest.common import accounts
+from tempest.common import cred_provider
from tempest import config
from tempest import exceptions
from tempest.services.identity.v2.json import token_client
from tempest.tests import base
from tempest.tests import fake_config
+from tempest.tests import fake_http
from tempest.tests import fake_identity
@@ -37,6 +39,9 @@
super(TestAccount, self).setUp()
self.useFixture(fake_config.ConfigFixture())
self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
+ self.fake_http = fake_http.fake_httplib2(return_type=200)
+ self.stubs.Set(token_client.TokenClientJSON, 'raw_request',
+ fake_identity._fake_v2_response)
self.useFixture(lockutils_fixtures.ExternalLockFixture())
self.test_accounts = [
{'username': 'test_user1', 'tenant_name': 'test_tenant1',
@@ -64,7 +69,7 @@
{'username': 'test_user12', 'tenant_name': 'test_tenant12',
'password': 'p', 'roles': [cfg.CONF.identity.admin_role]},
]
- self.useFixture(mockpatch.Patch(
+ self.accounts_mock = self.useFixture(mockpatch.Patch(
'tempest.common.accounts.read_accounts_yaml',
return_value=self.test_accounts))
cfg.CONF.set_default('test_accounts_file', 'fake_path', group='auth')
@@ -275,6 +280,31 @@
for i in admin_hashes:
self.assertNotIn(i, args)
+ def test_networks_returned_with_creds(self):
+ test_accounts = [
+ {'username': 'test_user13', 'tenant_name': 'test_tenant13',
+ 'password': 'p', 'resources': {'network': 'network-1'}},
+ {'username': 'test_user14', 'tenant_name': 'test_tenant14',
+ 'password': 'p', 'roles': ['role-7', 'role-11'],
+ 'resources': {'network': 'network-2'}}]
+ # Clear previous mock using self.test_accounts
+ self.accounts_mock.cleanUp()
+ self.useFixture(mockpatch.Patch(
+ 'tempest.common.accounts.read_accounts_yaml',
+ return_value=test_accounts))
+ test_accounts_class = accounts.Accounts('v2', 'test_name')
+ with mock.patch('tempest.services.compute.json.networks_client.'
+ 'NetworksClientJSON.list_networks',
+ return_value=[{'name': 'network-2', 'id': 'fake-id'}]):
+ creds = test_accounts_class.get_creds_by_roles(['role-7'])
+ self.assertTrue(isinstance(creds, cred_provider.TestResources))
+ network = creds.network
+ self.assertIsNotNone(network)
+ self.assertIn('name', network)
+ self.assertIn('id', network)
+ self.assertEqual('fake-id', network['id'])
+ self.assertEqual('network-2', network['name'])
+
class TestNotLockingAccount(base.TestCase):
diff --git a/tempest/tests/common/utils/linux/test_remote_client.py b/tempest/tests/common/utils/linux/test_remote_client.py
index 40b7b32..d6377e6 100644
--- a/tempest/tests/common/utils/linux/test_remote_client.py
+++ b/tempest/tests/common/utils/linux/test_remote_client.py
@@ -100,15 +100,17 @@
self._assert_exec_called_with('cut -f1 -d. /proc/uptime')
def test_ping_host(self):
- ping_response = """PING localhost (127.0.0.1) 56(84) bytes of data.
-64 bytes from localhost (127.0.0.1): icmp_req=1 ttl=64 time=0.048 ms
+ ping_response = """PING localhost (127.0.0.1) 70(98) bytes of data.
+78 bytes from localhost (127.0.0.1): icmp_req=1 ttl=64 time=0.048 ms
+78 bytes from localhost (127.0.0.1): icmp_req=2 ttl=64 time=0.048 ms
--- localhost ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
+2 packets transmitted, 2 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms"""
self.ssh_mock.mock.exec_command.return_value = ping_response
- self.assertEqual(self.conn.ping_host('127.0.0.1'), ping_response)
- self._assert_exec_called_with('ping -c1 -w1 127.0.0.1')
+ self.assertEqual(self.conn.ping_host('127.0.0.1', count=2, size=70),
+ ping_response)
+ self._assert_exec_called_with('ping -c2 -w2 -s70 127.0.0.1')
def test_get_mac_address(self):
macs = """0a:0b:0c:0d:0e:0f
diff --git a/tempest/tests/test_ssh.py b/tempest/tests/test_ssh.py
index 27cd6b5..aaacaab 100644
--- a/tempest/tests/test_ssh.py
+++ b/tempest/tests/test_ssh.py
@@ -29,7 +29,7 @@
def test_pkey_calls_paramiko_RSAKey(self):
with contextlib.nested(
mock.patch('paramiko.RSAKey.from_private_key'),
- mock.patch('cStringIO.StringIO')) as (rsa_mock, cs_mock):
+ mock.patch('six.moves.cStringIO')) as (rsa_mock, cs_mock):
cs_mock.return_value = mock.sentinel.csio
pkey = 'mykey'
ssh.Client('localhost', 'root', pkey=pkey)
diff --git a/tempest/tests/test_tenant_isolation.py b/tempest/tests/test_tenant_isolation.py
index 82cbde9..fd8718f 100644
--- a/tempest/tests/test_tenant_isolation.py
+++ b/tempest/tests/test_tenant_isolation.py
@@ -278,11 +278,11 @@
router_interface_mock = self.patch(
'tempest.services.network.json.network_client.NetworkClientJSON.'
'add_router_interface_with_subnet_id')
- iso_creds.get_primary_creds()
+ primary_creds = iso_creds.get_primary_creds()
router_interface_mock.called_once_with('1234', '1234')
- network = iso_creds.get_primary_network()
- subnet = iso_creds.get_primary_subnet()
- router = iso_creds.get_primary_router()
+ network = primary_creds.network
+ subnet = primary_creds.subnet
+ router = primary_creds.router
self.assertEqual(network['id'], '1234')
self.assertEqual(network['name'], 'fake_net')
self.assertEqual(subnet['id'], '1234')
@@ -427,11 +427,11 @@
router_interface_mock = self.patch(
'tempest.services.network.json.network_client.NetworkClientJSON.'
'add_router_interface_with_subnet_id')
- iso_creds.get_alt_creds()
+ alt_creds = iso_creds.get_alt_creds()
router_interface_mock.called_once_with('1234', '1234')
- network = iso_creds.get_alt_network()
- subnet = iso_creds.get_alt_subnet()
- router = iso_creds.get_alt_router()
+ network = alt_creds.network
+ subnet = alt_creds.subnet
+ router = alt_creds.router
self.assertEqual(network['id'], '1234')
self.assertEqual(network['name'], 'fake_alt_net')
self.assertEqual(subnet['id'], '1234')
@@ -453,11 +453,11 @@
'tempest.services.network.json.network_client.NetworkClientJSON.'
'add_router_interface_with_subnet_id')
self._mock_list_roles('123456', 'admin')
- iso_creds.get_admin_creds()
+ admin_creds = iso_creds.get_admin_creds()
router_interface_mock.called_once_with('1234', '1234')
- network = iso_creds.get_admin_network()
- subnet = iso_creds.get_admin_subnet()
- router = iso_creds.get_admin_router()
+ network = admin_creds.network
+ subnet = admin_creds.subnet
+ router = admin_creds.router
self.assertEqual(network['id'], '1234')
self.assertEqual(network['name'], 'fake_admin_net')
self.assertEqual(subnet['id'], '1234')
@@ -490,13 +490,13 @@
'delete_router')
router_mock = router.start()
- iso_creds.get_primary_creds()
+ primary_creds = iso_creds.get_primary_creds()
self.assertEqual(net_mock.mock_calls, [])
self.assertEqual(subnet_mock.mock_calls, [])
self.assertEqual(router_mock.mock_calls, [])
- network = iso_creds.get_primary_network()
- subnet = iso_creds.get_primary_subnet()
- router = iso_creds.get_primary_router()
+ network = primary_creds.network
+ subnet = primary_creds.subnet
+ router = primary_creds.router
self.assertIsNone(network)
self.assertIsNone(subnet)
self.assertIsNone(router)
diff --git a/tempest/tests/test_wrappers.py b/tempest/tests/test_wrappers.py
index ae7860d..a4ef699 100644
--- a/tempest/tests/test_wrappers.py
+++ b/tempest/tests/test_wrappers.py
@@ -14,10 +14,11 @@
import os
import shutil
-import StringIO
import subprocess
import tempfile
+import six
+
from tempest.tests import base
DEVNULL = open(os.devnull, 'wb')
@@ -50,8 +51,8 @@
shutil.copy('tools/pretty_tox_serial.sh',
os.path.join(self.directory, 'pretty_tox_serial.sh'))
- self.stdout = StringIO.StringIO()
- self.stderr = StringIO.StringIO()
+ self.stdout = six.StringIO()
+ self.stderr = six.StringIO()
# Change directory, run wrapper and check result
self.addCleanup(os.chdir, os.path.abspath(os.curdir))
os.chdir(self.directory)
diff --git a/tempest/thirdparty/boto/test_ec2_network.py b/tempest/thirdparty/boto/test_ec2_network.py
deleted file mode 100644
index ce20156..0000000
--- a/tempest/thirdparty/boto/test_ec2_network.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from tempest import test
-from tempest.thirdparty.boto import test as boto_test
-
-
-class EC2NetworkTest(boto_test.BotoTestCase):
-
- @classmethod
- def setup_clients(cls):
- super(EC2NetworkTest, cls).setup_clients()
- cls.ec2_client = cls.os.ec2api_client
-
- # Note(afazekas): these tests for things duable without an instance
- @test.idempotent_id('48b912af-9403-4b4f-aa69-fa76d690a81f')
- def test_disassociate_not_associated_floating_ip(self):
- # EC2 disassociate not associated floating ip
- ec2_codes = self.ec2_error_code
- address = self.ec2_client.allocate_address()
- public_ip = address.public_ip
- rcuk = self.addResourceCleanUp(self.ec2_client.release_address,
- public_ip)
- addresses_get = self.ec2_client.get_all_addresses(
- addresses=(public_ip,))
- self.assertEqual(len(addresses_get), 1)
- self.assertEqual(addresses_get[0].public_ip, public_ip)
- self.assertBotoError(ec2_codes.client.InvalidAssociationID.NotFound,
- address.disassociate)
- self.ec2_client.release_address(public_ip)
- self.assertAddressReleasedWait(address)
- self.cancelResourceCleanUp(rcuk)