Merge "Move unset_flavor_extra_specs to extra specs file"
diff --git a/HACKING.rst b/HACKING.rst
index 1c084f8..bb55ac5 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -419,34 +419,3 @@
 tested is considered stable and adheres to the OpenStack API stability
 guidelines. If an API is still considered experimental or in development then
 it should not be tested by Tempest until it is considered stable.
-
-Stable Support Policy
----------------------
-
-Since the `Extended Maintenance policy`_ for stable branches was adopted
-OpenStack projects will keep stable branches around after a "stable" or
-"maintained" period for a phase of indeterminate length called "Extended
-Maintenance". Prior to this resolution Tempest supported all stable branches
-which were supported upstream. This policy does not scale under the new model
-as Tempest would be responsible for gating proposed changes against an ever
-increasing number of branches. Therefore due to resource constraints, Tempest
-will only provide support for branches in the "Maintained" phase from the
-documented `Support Phases`_. When a branch moves from the *Maintained* to the
-*Extended Maintenance* phase, Tempest will tag the removal of support for that
-branch as it has in the past when a branch goes end of life.
-
-The expectation for *Extended Maintenance* phase branches is that they will continue
-running Tempest during that phase of support. Since the REST APIs are stable
-interfaces across release boundaries, branches in these phases should run
-Tempest from master as long as possible. But, because we won't be actively
-testing branches in these phases, it's possible that we'll introduce changes to
-Tempest on master which will break support on *Extended Maintenance* phase
-branches. When this happens the expectation for those branches is to either
-switch to running Tempest from a tag with support for the branch, or blacklist
-a newly introduced test (if that is the cause of the issue). Tempest will not
-be creating stable branches to support *Extended Maintenance* phase branches, as
-the burden is on the *Extended Maintenance* phase branche maintainers, not the Tempest
-project, to support that branch.
-
-.. _Extended Maintenance policy: https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
-.. _Support Phases: https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f562850..fecf98a 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -80,6 +80,14 @@
 
    library
 
+Support Policy
+--------------
+
+.. toctree::
+   :maxdepth: 2
+
+   stable_branch_support_policy
+
 Indices and tables
 ==================
 
diff --git a/doc/source/stable_branch_support_policy.rst b/doc/source/stable_branch_support_policy.rst
new file mode 100644
index 0000000..87e3ad1
--- /dev/null
+++ b/doc/source/stable_branch_support_policy.rst
@@ -0,0 +1,30 @@
+Stable Branch Support Policy
+============================
+
+Since the `Extended Maintenance policy`_ for stable branches was adopted,
+OpenStack projects will keep stable branches around after a "stable" or
+"maintained" period, for a phase of indeterminate length called "Extended
+Maintenance". Prior to this resolution, Tempest supported all stable
+branches which were supported upstream. This policy does not scale under
+the new model, as Tempest would be responsible for gating proposed changes
+against an ever-increasing number of branches. Therefore, due to resource
+constraints, Tempest will only provide support for branches in the
+"Maintained" phase from the documented `Support Phases`_. When a branch moves
+from *Maintained* to *Extended Maintenance*, Tempest will tag the removal of
+support for that branch, as it has in the past when a branch went end of life.
+
+The expectation for *Extended Maintenance* phase branches is that they will
+continue running Tempest during that phase of support. Since the REST APIs
+are stable interfaces across release boundaries, branches in these phases
+should run Tempest from master as long as possible. However, because we
+won't be actively testing branches in these phases, it's possible that we'll
+introduce changes to Tempest on master which will break support on
+*Extended Maintenance* phase branches. When this happens, those branches are
+expected to either switch to running Tempest from a tag with support for the
+branch, or blacklist a newly introduced test (if that is the cause of the
+issue). Tempest will not be creating stable branches to support
+*Extended Maintenance* phase branches; the burden of supporting those
+branches is on their maintainers, not the Tempest project.
+
+.. _Extended Maintenance policy: https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
+.. _Support Phases: https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
diff --git a/releasenotes/notes/add-extra-apis-to-volume-v3-services-client-bf9b235cf5a611fe.yaml b/releasenotes/notes/add-extra-apis-to-volume-v3-services-client-bf9b235cf5a611fe.yaml
new file mode 100644
index 0000000..03d0ae8
--- /dev/null
+++ b/releasenotes/notes/add-extra-apis-to-volume-v3-services-client-bf9b235cf5a611fe.yaml
@@ -0,0 +1,6 @@
+---
+features:
+  - |
+    Add ``enable_service``, ``disable_service``, ``disable_log_reason``,
+    ``freeze_host`` and ``thaw_host`` API endpoints to volume v3
+    ``services_client``.
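
For context, a minimal sketch of how the new endpoints can be driven from an
admin test, assuming ``self.admin_volume_services_client`` is an initialized
``ServicesClient`` as in ``BaseVolumeAdminTest`` (host/binary values are
illustrative):

    services = self.admin_volume_services_client.list_services()['services']
    host, binary = services[0]['host'], services[0]['binary']

    # Disable scheduling for the service with a logged reason, then
    # re-enable it.
    self.admin_volume_services_client.disable_log_reason(
        host=host, binary=binary, disabled_reason='maintenance')
    self.admin_volume_services_client.enable_service(host=host, binary=binary)

    # Freeze and thaw the backend host (meaningful for cinder-volume).
    self.admin_volume_services_client.freeze_host(host=host)
    self.admin_volume_services_client.thaw_host(host=host)
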
diff --git a/releasenotes/notes/add-port-profile-config-option-2610b2fa67027960.yaml b/releasenotes/notes/add-port-profile-config-option-2610b2fa67027960.yaml
index b54ee8b..19d47d1 100644
--- a/releasenotes/notes/add-port-profile-config-option-2610b2fa67027960.yaml
+++ b/releasenotes/notes/add-port-profile-config-option-2610b2fa67027960.yaml
@@ -1,11 +1,9 @@
 ---
-prelude: >
-    When using OVS HW offload feature we need to create
-    Neutron port with a certain capability. This is done
-    by creating Neutron port with binding profile. To be
-    able to test this we need profile capability support
-    in Tempest as well.
 features:
   - A new config option 'port_profile' is added to the section
     'network' to specify capabilities of the port.
-    By default this is set to {}.
+    By default this is set to {}. When using the OVS HW
+    offload feature we need to create a Neutron port with
+    a certain capability, which is done by creating the
+    port with a binding profile. To be able to test this,
+    Tempest needs profile capability support as well.
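
For context, a hedged sketch of consuming the option from a test; the
``binding:profile`` attribute is the Neutron port field that carries such
capabilities (names follow Tempest conventions, values are illustrative):

    # Create a port carrying the configured binding profile.
    port = self.ports_client.create_port(
        network_id=network['id'],
        **{'binding:profile': CONF.network.port_profile})['port']
    self.addCleanup(self.ports_client.delete_port, port['id'])
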
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 9fc5af0..350e8ba 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -369,6 +369,42 @@
         server = self.client.show_server(self.server_id)['server']
         self.assertEqual(self.flavor_ref, server['flavor']['id'])
 
+    @decorators.idempotent_id('fbbf075f-a812-4022-bc5c-ccb8047eef12')
+    @decorators.related_bug('1737599')
+    @testtools.skipUnless(CONF.compute_feature_enabled.resize,
+                          'Resize not available.')
+    @utils.services('volume')
+    def test_resize_server_revert_with_volume_attached(self):
+        # Tests attaching a volume to a server instance and then resizing
+        # the instance. Once the instance is resized, revert the resize which
+        # should move the instance and volume attachment back to the original
+        # compute host.
+
+        # Create a blank volume and attach it to the server created in setUp.
+        volume = self.create_volume()
+        server = self.client.show_server(self.server_id)['server']
+        self.attach_volume(server, volume)
+        # Now resize the server with the blank volume attached.
+        self.client.resize_server(self.server_id, self.flavor_ref_alt)
+        # Explicitly delete the server to get a new one for later
+        # tests. Avoids resize down race issues.
+        self.addCleanup(self.delete_server, self.server_id)
+        waiters.wait_for_server_status(
+            self.client, self.server_id, 'VERIFY_RESIZE')
+        # Now revert the resize which should move the instance and its volume
+        # attachment back to the original source compute host.
+        self.client.revert_resize_server(self.server_id)
+        waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
+        # Make sure everything still looks OK.
+        server = self.client.show_server(self.server_id)['server']
+        # The flavor id is not returned in the server response after
+        # microversion 2.46 so handle that gracefully.
+        if server['flavor'].get('id'):
+            self.assertEqual(self.flavor_ref, server['flavor']['id'])
+        attached_volumes = server['os-extended-volumes:volumes_attached']
+        self.assertEqual(1, len(attached_volumes))
+        self.assertEqual(volume['id'], attached_volumes[0]['id'])
+
     @decorators.idempotent_id('b963d4f1-94b3-4c40-9e97-7b583f46e470')
     @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
                           'Snapshotting not available, backup not possible.')
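
Note on the flavor check above: starting with compute API microversion 2.46
the server response embeds flavor details instead of an id, hence the guard
on ``server['flavor'].get('id')``. Roughly (illustrative values):

    # < 2.46:  server['flavor'] == {'id': '42', 'links': [...]}
    # >= 2.46: server['flavor'] == {'original_name': 'm1.small', 'vcpus': 1,
    #                               'ram': 2048, 'disk': 20, ...}  # no 'id'
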
diff --git a/tempest/api/identity/admin/v3/test_domains.py b/tempest/api/identity/admin/v3/test_domains.py
index 97a1f36..72b6be4 100644
--- a/tempest/api/identity/admin/v3/test_domains.py
+++ b/tempest/api/identity/admin/v3/test_domains.py
@@ -121,11 +121,7 @@
         # Create a domain with a user and a group in it
         domain = self.setup_test_domain()
         user = self.create_test_user(domain_id=domain['id'])
-        group = self.groups_client.create_group(
-            name=data_utils.rand_name('group'),
-            domain_id=domain['id'])['group']
-        self.addCleanup(test_utils.call_and_ignore_notfound_exc,
-                        self.groups_client.delete_group, group['id'])
+        group = self.setup_test_group(domain_id=domain['id'])
         # Delete the domain
         self.delete_domain(domain['id'])
         # Check the domain, its users and groups are gone
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 507810b..37ce266 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -30,50 +30,46 @@
 
     @decorators.idempotent_id('2e80343b-6c81-4ac3-88c7-452f3e9d5129')
     def test_group_create_update_get(self):
+        # Verify group creation works.
         name = data_utils.rand_name('Group')
         description = data_utils.rand_name('Description')
-        group = self.groups_client.create_group(
-            name=name, domain_id=self.domain['id'],
-            description=description)['group']
-        self.addCleanup(self.groups_client.delete_group, group['id'])
+        group = self.setup_test_group(name=name, domain_id=self.domain['id'],
+                                      description=description)
         self.assertEqual(group['name'], name)
         self.assertEqual(group['description'], description)
+        self.assertEqual(self.domain['id'], group['domain_id'])
 
-        new_name = data_utils.rand_name('UpdateGroup')
-        new_desc = data_utils.rand_name('UpdateDescription')
+        # Verify updating name and description works.
+        first_name_update = data_utils.rand_name('UpdateGroup')
+        first_desc_update = data_utils.rand_name('UpdateDescription')
         updated_group = self.groups_client.update_group(
-            group['id'], name=new_name, description=new_desc)['group']
-        self.assertEqual(updated_group['name'], new_name)
-        self.assertEqual(updated_group['description'], new_desc)
+            group['id'], name=first_name_update,
+            description=first_desc_update)['group']
+        self.assertEqual(updated_group['name'], first_name_update)
+        self.assertEqual(updated_group['description'], first_desc_update)
 
+        # Verify that the updated values are reflected after performing show.
         new_group = self.groups_client.show_group(group['id'])['group']
         self.assertEqual(group['id'], new_group['id'])
-        self.assertEqual(new_name, new_group['name'])
-        self.assertEqual(new_desc, new_group['description'])
+        self.assertEqual(first_name_update, new_group['name'])
+        self.assertEqual(first_desc_update, new_group['description'])
 
-    @decorators.idempotent_id('b66eb441-b08a-4a6d-81ab-fef71baeb26c')
-    def test_group_update_with_few_fields(self):
-        name = data_utils.rand_name('Group')
-        old_description = data_utils.rand_name('Description')
-        group = self.groups_client.create_group(
-            name=name, domain_id=self.domain['id'],
-            description=old_description)['group']
-        self.addCleanup(self.groups_client.delete_group, group['id'])
-
-        new_name = data_utils.rand_name('UpdateGroup')
+        # Verify that updating a single field for a group (name) leaves the
+        # other fields (description, domain_id) unchanged.
+        second_name_update = data_utils.rand_name(
+            self.__class__.__name__ + 'UpdateGroup')
         updated_group = self.groups_client.update_group(
-            group['id'], name=new_name)['group']
-        self.assertEqual(new_name, updated_group['name'])
-        # Verify that 'description' is not being updated or deleted.
-        self.assertEqual(old_description, updated_group['description'])
+            group['id'], name=second_name_update)['group']
+        self.assertEqual(second_name_update, updated_group['name'])
+        # Verify that 'description' and 'domain_id' were not updated or
+        # deleted.
+        self.assertEqual(first_desc_update, updated_group['description'])
+        self.assertEqual(self.domain['id'], updated_group['domain_id'])
 
     @decorators.attr(type='smoke')
     @decorators.idempotent_id('1598521a-2f36-4606-8df9-30772bd51339')
     def test_group_users_add_list_delete(self):
-        name = data_utils.rand_name('Group')
-        group = self.groups_client.create_group(
-            name=name, domain_id=self.domain['id'])['group']
-        self.addCleanup(self.groups_client.delete_group, group['id'])
+        group = self.setup_test_group(domain_id=self.domain['id'])
         # add user into group
         users = []
         for _ in range(3):
@@ -100,11 +96,8 @@
         # create two groups, and add user into them
         groups = []
         for _ in range(2):
-            name = data_utils.rand_name('Group')
-            group = self.groups_client.create_group(
-                name=name, domain_id=self.domain['id'])['group']
+            group = self.setup_test_group(domain_id=self.domain['id'])
             groups.append(group)
-            self.addCleanup(self.groups_client.delete_group, group['id'])
             self.groups_client.add_group_user(group['id'], user['id'])
         # list groups which user belongs to
         user_groups = self.users_client.list_user_groups(user['id'])['groups']
@@ -118,12 +111,7 @@
         group_ids = list()
         fetched_ids = list()
         for _ in range(3):
-            name = data_utils.rand_name('Group')
-            description = data_utils.rand_name('Description')
-            group = self.groups_client.create_group(
-                name=name, domain_id=self.domain['id'],
-                description=description)['group']
-            self.addCleanup(self.groups_client.delete_group, group['id'])
+            group = self.setup_test_group(domain_id=self.domain['id'])
             group_ids.append(group['id'])
         # List and Verify Groups
         # When domain specific drivers are enabled the operations
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 0845407..532f0d7 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -201,10 +201,7 @@
             role_id = self.setup_test_role()['id']
 
             # Create a group.
-            group_name = data_utils.rand_name('Group')
-            group_id = self.groups_client.create_group(
-                name=group_name, domain_id=domain_id)['group']['id']
-            self.addCleanup(self.groups_client.delete_group, group_id)
+            group_id = self.setup_test_group(domain_id=domain_id)['id']
 
             # Add the alt user to the group.
             self.groups_client.add_group_user(group_id, alt_user_id)
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 68f2c07..282343c 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -292,6 +292,20 @@
             self.delete_domain, domain['id'])
         return domain
 
+    def setup_test_group(self, **kwargs):
+        """Set up a test group."""
+        if 'name' not in kwargs:
+            kwargs['name'] = data_utils.rand_name(
+                self.__class__.__name__ + '_test_group')
+        if 'description' not in kwargs:
+            kwargs['description'] = data_utils.rand_name(
+                self.__class__.__name__ + '_test_description')
+        group = self.groups_client.create_group(**kwargs)['group']
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.groups_client.delete_group, group['id'])
+        return group
+
 
 class BaseApplicationCredentialsV3Test(BaseIdentityV3Test):
 
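
Because the helper's cleanup goes through ``call_and_ignore_notfound_exc``,
it stays safe when a group disappears before teardown, e.g. via the cascading
domain delete exercised in ``test_domains.py`` above. A usage sketch:

    domain = self.setup_test_domain()
    group = self.setup_test_group(domain_id=domain['id'])
    self.delete_domain(domain['id'])
    # The group is deleted along with its domain; cleanup ignores the 404.
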
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index ef4a23a..b4bb88e 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -15,6 +15,7 @@
 
 from tempest.api.network import base
 from tempest.common import utils
+from tempest.common.utils import data_utils
 from tempest.common.utils import net_utils
 from tempest import config
 from tempest.lib import decorators
@@ -158,11 +159,21 @@
         self.addCleanup(self.floating_ips_client.delete_floatingip,
                         created_floating_ip['id'])
         self.assertEqual(created_floating_ip['router_id'], self.router['id'])
-        network2 = self.create_network()
+        network_name = data_utils.rand_name(self.__class__.__name__)
+        network2 = self.networks_client.create_network(
+            name=network_name)['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network2['id'])
         subnet2 = self.create_subnet(network2)
+        self.addCleanup(self.subnets_client.delete_subnet, subnet2['id'])
         router2 = self.create_router(external_network_id=self.ext_net_id)
+        self.addCleanup(self.routers_client.delete_router, router2['id'])
         self.create_router_interface(router2['id'], subnet2['id'])
+        self.addCleanup(self.routers_client.remove_router_interface,
+                        router2['id'], subnet_id=subnet2['id'])
         port_other_router = self.create_port(network2)
+        self.addCleanup(self.ports_client.delete_port,
+                        port_other_router['id'])
         # Associate floating IP to the other port on another router
         floating_ip = self.floating_ips_client.update_floatingip(
             created_floating_ip['id'],
diff --git a/tempest/api/network/test_ports.py b/tempest/api/network/test_ports.py
index 5168423..246a5c3 100644
--- a/tempest/api/network/test_ports.py
+++ b/tempest/api/network/test_ports.py
@@ -52,6 +52,21 @@
         ports_list = body['ports']
         self.assertFalse(port_id in [n['id'] for n in ports_list])
 
+    def _create_subnet(self, network, gateway='', cidr=None,
+                       **kwargs):
+        subnet = self.create_subnet(network, gateway, cidr, **kwargs)
+        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
+        return subnet
+
+    def _create_network(self, network_name=None, **kwargs):
+        network_name = network_name or data_utils.rand_name(
+            self.__class__.__name__)
+        network = self.networks_client.create_network(
+            name=network_name, **kwargs)['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network['id'])
+        return network
+
     @decorators.attr(type='smoke')
     @decorators.idempotent_id('c72c1c0c-2193-4aca-aaa4-b1442640f51c')
     def test_create_update_delete_port(self):
@@ -73,7 +88,7 @@
     @decorators.idempotent_id('67f1b811-f8db-43e2-86bd-72c074d4a42c')
     def test_create_bulk_port(self):
         network1 = self.network
-        network2 = self.create_network()
+        network2 = self._create_network()
         network_list = [network1['id'], network2['id']]
         port_list = [{'network_id': net_id} for net_id in network_list]
         body = self.ports_client.create_bulk_ports(ports=port_list)
@@ -90,7 +105,7 @@
     @decorators.attr(type='smoke')
     @decorators.idempotent_id('0435f278-40ae-48cb-a404-b8a087bc09b1')
     def test_create_port_in_allowed_allocation_pools(self):
-        network = self.create_network()
+        network = self._create_network()
         net_id = network['id']
         address = self.cidr
         address.prefixlen = self.mask_bits
@@ -100,10 +115,9 @@
             raise exceptions.InvalidConfiguration(msg)
         allocation_pools = {'allocation_pools': [{'start': str(address[2]),
                                                   'end': str(address[-2])}]}
-        subnet = self.create_subnet(network, cidr=address,
-                                    mask_bits=address.prefixlen,
-                                    **allocation_pools)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
+        self._create_subnet(network, cidr=address,
+                            mask_bits=address.prefixlen,
+                            **allocation_pools)
         body = self.ports_client.create_port(network_id=net_id)
         self.addCleanup(self.ports_client.delete_port, body['port']['id'])
         port = body['port']
@@ -153,9 +167,8 @@
     @decorators.idempotent_id('e7fe260b-1e79-4dd3-86d9-bec6a7959fc5')
     def test_port_list_filter_by_ip(self):
         # Create network and subnet
-        network = self.create_network()
-        subnet = self.create_subnet(network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
+        network = self._create_network()
+        self._create_subnet(network)
         # Create two ports
         port_1 = self.ports_client.create_port(network_id=network['id'])
         self.addCleanup(self.ports_client.delete_port, port_1['port']['id'])
@@ -187,10 +200,8 @@
         'ip-substring-filtering extension not enabled.')
     def test_port_list_filter_by_ip_substr(self):
         # Create network and subnet
-        network = self.create_network()
-        subnet = self.create_subnet(network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
-
+        network = self._create_network()
+        subnet = self._create_subnet(network)
         # Get two IP addresses
         ip_address_1 = None
         ip_address_2 = None
@@ -261,10 +272,8 @@
     @decorators.idempotent_id('5ad01ed0-0e6e-4c5d-8194-232801b15c72')
     def test_port_list_filter_by_router_id(self):
         # Create a router
-        network = self.create_network()
-        self.addCleanup(self.networks_client.delete_network, network['id'])
-        subnet = self.create_subnet(network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
+        network = self._create_network()
+        self._create_subnet(network)
         router = self.create_router()
         self.addCleanup(self.routers_client.delete_router, router['id'])
         port = self.ports_client.create_port(network_id=network['id'])
@@ -294,12 +303,9 @@
     @decorators.idempotent_id('63aeadd4-3b49-427f-a3b1-19ca81f06270')
     def test_create_update_port_with_second_ip(self):
         # Create a network with two subnets
-        network = self.create_network()
-        self.addCleanup(self.networks_client.delete_network, network['id'])
-        subnet_1 = self.create_subnet(network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet_1['id'])
-        subnet_2 = self.create_subnet(network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet_2['id'])
+        network = self._create_network()
+        subnet_1 = self._create_subnet(network)
+        subnet_2 = self._create_subnet(network)
         fixed_ip_1 = [{'subnet_id': subnet_1['id']}]
         fixed_ip_2 = [{'subnet_id': subnet_2['id']}]
 
@@ -323,8 +329,7 @@
         self.assertEqual(2, len(port['fixed_ips']))
 
     def _update_port_with_security_groups(self, security_groups_names):
-        subnet_1 = self.create_subnet(self.network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet_1['id'])
+        subnet_1 = self._create_subnet(self.network)
         fixed_ip_1 = [{'subnet_id': subnet_1['id']}]
 
         security_groups_list = list()
@@ -413,10 +418,8 @@
         utils.is_extension_enabled('security-group', 'network'),
         'security-group extension not enabled.')
     def test_create_port_with_no_securitygroups(self):
-        network = self.create_network()
-        self.addCleanup(self.networks_client.delete_network, network['id'])
-        subnet = self.create_subnet(network)
-        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
+        network = self._create_network()
+        self._create_subnet(network)
         port = self.create_port(network, security_groups=[])
         self.addCleanup(self.ports_client.delete_port, port['id'])
         self.assertIsNotNone(port['security_groups'])
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index abbb779..3ff12e4 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -39,6 +39,11 @@
         self.addCleanup(self._cleanup_router, router)
         return router
 
+    def _create_subnet(self, network, gateway='', cidr=None):
+        subnet = self.create_subnet(network, gateway, cidr)
+        self.addCleanup(self.subnets_client.delete_subnet, subnet['id'])
+        return subnet
+
     def _add_router_interface_with_subnet_id(self, router_id, subnet_id):
         interface = self.routers_client.add_router_interface(
             router_id, subnet_id=subnet_id)
@@ -65,12 +70,12 @@
                           'The public_network_id option must be specified.')
     def test_create_show_list_update_delete_router(self):
         # Create a router
-        name = data_utils.rand_name(self.__class__.__name__ + '-router')
+        router_name = data_utils.rand_name(self.__class__.__name__ + '-router')
         router = self._create_router(
-            name=name,
+            name=router_name,
             admin_state_up=False,
             external_network_id=CONF.network.public_network_id)
-        self.assertEqual(router['name'], name)
+        self.assertEqual(router['name'], router_name)
         self.assertEqual(router['admin_state_up'], False)
         self.assertEqual(
             router['external_gateway_info']['network_id'],
@@ -97,8 +102,12 @@
     @decorators.attr(type='smoke')
     @decorators.idempotent_id('b42e6e39-2e37-49cc-a6f4-8467e940900a')
     def test_add_remove_router_interface_with_subnet_id(self):
-        network = self.create_network()
-        subnet = self.create_subnet(network)
+        network_name = data_utils.rand_name(self.__class__.__name__)
+        network = self.networks_client.create_network(
+            name=network_name)['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network['id'])
+        subnet = self._create_subnet(network)
         router = self._create_router()
         # Add router interface with subnet id
         interface = self.routers_client.add_router_interface(
@@ -116,8 +125,12 @@
     @decorators.attr(type='smoke')
     @decorators.idempotent_id('2b7d2f37-6748-4d78-92e5-1d590234f0d5')
     def test_add_remove_router_interface_with_port_id(self):
-        network = self.create_network()
-        self.create_subnet(network)
+        network_name = data_utils.rand_name(self.__class__.__name__)
+        network = self.networks_client.create_network(
+            name=network_name)['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network['id'])
+        self._create_subnet(network)
         router = self._create_router()
         port_body = self.ports_client.create_port(
             network_id=network['id'])
@@ -183,13 +196,18 @@
         # Update router extra route, second ip of the range is
         # used as next hop
         for i in range(routes_num):
-            network = self.create_network()
+            network_name = data_utils.rand_name(self.__class__.__name__)
+            network = self.networks_client.create_network(
+                name=network_name)['network']
+            self.addCleanup(self.networks_client.delete_network,
+                            network['id'])
             subnet = self.create_subnet(network, cidr=next_cidr)
             next_cidr = next_cidr.next()
 
             # Add router interface with subnet id
             self.create_router_interface(router['id'], subnet['id'])
-
+            self.addCleanup(self._remove_router_interface_with_subnet_id,
+                            router['id'], subnet['id'])
             cidr = netaddr.IPNetwork(subnet['cidr'])
             next_hop = str(cidr[2])
             destination = str(subnet['cidr'])
@@ -242,13 +260,18 @@
     @decorators.attr(type='smoke')
     @decorators.idempotent_id('802c73c9-c937-4cef-824b-2191e24a6aab')
     def test_add_multiple_router_interfaces(self):
-        network01 = self.create_network(
-            network_name=data_utils.rand_name('router-network01-'))
-        network02 = self.create_network(
-            network_name=data_utils.rand_name('router-network02-'))
-        subnet01 = self.create_subnet(network01)
+        network_name = data_utils.rand_name(self.__class__.__name__)
+        network01 = self.networks_client.create_network(
+            name=network_name)['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network01['id'])
+        network02 = self.networks_client.create_network(
+            name=data_utils.rand_name(self.__class__.__name__))['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network02['id'])
+        subnet01 = self._create_subnet(network01)
         sub02_cidr = self.cidr.next()
-        subnet02 = self.create_subnet(network02, cidr=sub02_cidr)
+        subnet02 = self._create_subnet(network02, cidr=sub02_cidr)
         router = self._create_router()
         interface01 = self._add_router_interface_with_subnet_id(router['id'],
                                                                 subnet01['id'])
@@ -261,8 +284,12 @@
 
     @decorators.idempotent_id('96522edf-b4b5-45d9-8443-fa11c26e6eff')
     def test_router_interface_port_update_with_fixed_ip(self):
-        network = self.create_network()
-        subnet = self.create_subnet(network)
+        network_name = data_utils.rand_name(self.__class__.__name__)
+        network = self.networks_client.create_network(
+            name=network_name)['network']
+        self.addCleanup(self.networks_client.delete_network,
+                        network['id'])
+        subnet = self._create_subnet(network)
         router = self._create_router()
         fixed_ip = [{'subnet_id': subnet['id']}]
         interface = self._add_router_interface_with_subnet_id(router['id'],
diff --git a/tempest/api/volume/admin/test_group_snapshots.py b/tempest/api/volume/admin/test_group_snapshots.py
index 45f4caa..731a055 100644
--- a/tempest/api/volume/admin/test_group_snapshots.py
+++ b/tempest/api/volume/admin/test_group_snapshots.py
@@ -157,6 +157,57 @@
         waiters.wait_for_volume_resource_status(
             self.groups_client, grp2['id'], 'available')
 
+    @decorators.idempotent_id('7d7fc000-0b4c-4376-a372-544116d2e127')
+    @decorators.related_bug('1739031')
+    def test_delete_group_snapshots_following_updated_volumes(self):
+        volume_type = self.create_volume_type()
+
+        group_type = self.create_group_type()
+
+        # Create a volume group
+        grp = self.create_group(group_type=group_type['id'],
+                                volume_types=[volume_type['id']])
+
+        # Note: When dealing with consistency groups, all volumes must
+        # reside on the same backend. Adding volumes to the same consistency
+        # group from multiple backends isn't supported. To ensure all
+        # volumes share the same backend, all volumes must share the same
+        # volume-type and group id.
+        volume_list = []
+        for _ in range(2):
+            volume = self.create_volume(volume_type=volume_type['id'],
+                                        group_id=grp['id'])
+            volume_list.append(volume['id'])
+
+        for vol in volume_list:
+            self.groups_client.update_group(grp['id'],
+                                            remove_volumes=vol)
+            waiters.wait_for_volume_resource_status(
+                self.groups_client, grp['id'], 'available')
+
+            self.groups_client.update_group(grp['id'],
+                                            add_volumes=vol)
+            waiters.wait_for_volume_resource_status(
+                self.groups_client, grp['id'], 'available')
+
+        # Verify the created volumes are associated with consistency group
+        vols = self.volumes_client.list_volumes(detail=True)['volumes']
+        grp_vols = [v for v in vols if v['group_id'] == grp['id']]
+        self.assertEqual(2, len(grp_vols))
+
+        # Create a group snapshot
+        group_snapshot = self._create_group_snapshot(group_id=grp['id'])
+        snapshots = self.snapshots_client.list_snapshots(
+            detail=True)['snapshots']
+
+        for snap in snapshots:
+            if snap['volume_id'] in volume_list:
+                waiters.wait_for_volume_resource_status(
+                    self.snapshots_client, snap['id'], 'available')
+
+        # Delete the group snapshot
+        self._delete_group_snapshot(group_snapshot)
+
 
 class GroupSnapshotsV319Test(BaseGroupSnapshotsTest):
     _api_version = 3
diff --git a/tempest/api/volume/admin/test_volume_retype_with_migration.py b/tempest/api/volume/admin/test_volume_retype.py
similarity index 67%
rename from tempest/api/volume/admin/test_volume_retype_with_migration.py
rename to tempest/api/volume/admin/test_volume_retype.py
index 025c1be..1c56eb2 100644
--- a/tempest/api/volume/admin/test_volume_retype_with_migration.py
+++ b/tempest/api/volume/admin/test_volume_retype.py
@@ -10,6 +10,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import abc
 
 from oslo_log import log as logging
 
@@ -23,31 +24,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class VolumeRetypeWithMigrationTest(base.BaseVolumeAdminTest):
-
-    @classmethod
-    def skip_checks(cls):
-        super(VolumeRetypeWithMigrationTest, cls).skip_checks()
-
-        if not CONF.volume_feature_enabled.multi_backend:
-            raise cls.skipException("Cinder multi-backend feature disabled.")
-
-        if len(set(CONF.volume.backend_names)) < 2:
-            raise cls.skipException("Requires at least two different "
-                                    "backend names")
-
-    @classmethod
-    def resource_setup(cls):
-        super(VolumeRetypeWithMigrationTest, cls).resource_setup()
-        # read backend name from a list.
-        backend_src = CONF.volume.backend_names[0]
-        backend_dst = CONF.volume.backend_names[1]
-
-        extra_specs_src = {"volume_backend_name": backend_src}
-        extra_specs_dst = {"volume_backend_name": backend_dst}
-
-        cls.src_vol_type = cls.create_volume_type(extra_specs=extra_specs_src)
-        cls.dst_vol_type = cls.create_volume_type(extra_specs=extra_specs_dst)
+class VolumeRetypeTest(base.BaseVolumeAdminTest):
 
     def _wait_for_internal_volume_cleanup(self, vol):
         # When retyping a volume, Cinder creates an internal volume in the
@@ -70,43 +47,11 @@
                     fetched_vol['id'])
                 break
 
-    def _retype_volume(self, volume):
-        keys_with_no_change = ('id', 'size', 'description', 'name', 'user_id',
-                               'os-vol-tenant-attr:tenant_id')
-        keys_with_change = ('volume_type', 'os-vol-host-attr:host')
+    @abc.abstractmethod
+    def _verify_migration(self, source_vol, dest_vol):
+        pass
 
-        volume_source = self.admin_volume_client.show_volume(
-            volume['id'])['volume']
-
-        self.volumes_client.retype_volume(
-            volume['id'],
-            new_type=self.dst_vol_type['name'],
-            migration_policy='on-demand')
-        self.addCleanup(self._wait_for_internal_volume_cleanup, volume)
-        waiters.wait_for_volume_retype(self.volumes_client, volume['id'],
-                                       self.dst_vol_type['name'])
-
-        volume_dest = self.admin_volume_client.show_volume(
-            volume['id'])['volume']
-
-        # Check the volume information after the migration.
-        self.assertEqual('success',
-                         volume_dest['os-vol-mig-status-attr:migstat'])
-        self.assertEqual('success', volume_dest['migration_status'])
-
-        for key in keys_with_no_change:
-            self.assertEqual(volume_source[key], volume_dest[key])
-
-        for key in keys_with_change:
-            self.assertNotEqual(volume_source[key], volume_dest[key])
-
-    @decorators.idempotent_id('a1a41f3f-9dad-493e-9f09-3ff197d477cd')
-    def test_available_volume_retype_with_migration(self):
-        src_vol = self.create_volume(volume_type=self.src_vol_type['name'])
-        self._retype_volume(src_vol)
-
-    @decorators.idempotent_id('d0d9554f-e7a5-4104-8973-f35b27ccb60d')
-    def test_volume_from_snapshot_retype_with_migration(self):
+    def _create_volume_from_snapshot(self):
         # Create a volume in the first backend
         src_vol = self.create_volume(volume_type=self.src_vol_type['name'])
 
@@ -121,5 +66,115 @@
         self.snapshots_client.delete_snapshot(snapshot['id'])
         self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
 
+        return src_vol
+
+    def _retype_volume(self, volume, migration_policy):
+
+        volume_source = self.admin_volume_client.show_volume(
+            volume['id'])['volume']
+
+        self.volumes_client.retype_volume(
+            volume['id'],
+            new_type=self.dst_vol_type['name'],
+            migration_policy=migration_policy)
+        self.addCleanup(self._wait_for_internal_volume_cleanup, volume)
+        waiters.wait_for_volume_retype(self.volumes_client, volume['id'],
+                                       self.dst_vol_type['name'])
+
+        volume_dest = self.admin_volume_client.show_volume(
+            volume['id'])['volume']
+
+        self._verify_migration(volume_source, volume_dest)
+
+
+class VolumeRetypeWithMigrationTest(VolumeRetypeTest):
+
+    @classmethod
+    def skip_checks(cls):
+        super(VolumeRetypeWithMigrationTest, cls).skip_checks()
+
+        if not CONF.volume_feature_enabled.multi_backend:
+            raise cls.skipException("Cinder multi-backend feature disabled.")
+
+        if len(set(CONF.volume.backend_names)) < 2:
+            raise cls.skipException("Requires at least two different "
+                                    "backend names")
+
+    @classmethod
+    def resource_setup(cls):
+        super(VolumeRetypeWithMigrationTest, cls).resource_setup()
+        # read backend name from a list.
+        backend_src = CONF.volume.backend_names[0]
+        backend_dst = CONF.volume.backend_names[1]
+
+        extra_specs_src = {"volume_backend_name": backend_src}
+        extra_specs_dst = {"volume_backend_name": backend_dst}
+
+        cls.src_vol_type = cls.create_volume_type(extra_specs=extra_specs_src)
+        cls.dst_vol_type = cls.create_volume_type(extra_specs=extra_specs_dst)
+
+    def _verify_migration(self, volume_source, volume_dest):
+
+        keys_with_no_change = ('id', 'size', 'description', 'name',
+                               'user_id', 'os-vol-tenant-attr:tenant_id')
+        keys_with_change = ('volume_type', 'os-vol-host-attr:host')
+
+        # Check the volume information after the migration.
+        self.assertEqual('success',
+                         volume_dest['os-vol-mig-status-attr:migstat'])
+        self.assertEqual('success', volume_dest['migration_status'])
+
+        for key in keys_with_no_change:
+            self.assertEqual(volume_source[key], volume_dest[key])
+
+        for key in keys_with_change:
+            self.assertNotEqual(volume_source[key], volume_dest[key])
+
+        self.assertEqual(volume_dest['volume_type'], self.dst_vol_type['name'])
+
+    @decorators.idempotent_id('a1a41f3f-9dad-493e-9f09-3ff197d477cd')
+    def test_available_volume_retype_with_migration(self):
+        src_vol = self.create_volume(volume_type=self.src_vol_type['name'])
+        self._retype_volume(src_vol, migration_policy='on-demand')
+
+    @decorators.idempotent_id('d0d9554f-e7a5-4104-8973-f35b27ccb60d')
+    def test_volume_from_snapshot_retype_with_migration(self):
+        src_vol = self._create_volume_from_snapshot()
+
         # Migrate the volume from snapshot to the second backend
-        self._retype_volume(src_vol)
+        self._retype_volume(src_vol, migration_policy='on-demand')
+
+
+class VolumeRetypeWithoutMigrationTest(VolumeRetypeTest):
+
+    @classmethod
+    def resource_setup(cls):
+        super(VolumeRetypeWithoutMigrationTest, cls).resource_setup()
+        cls.src_vol_type = cls.create_volume_type('volume-type-1')
+        cls.dst_vol_type = cls.create_volume_type('volume-type-2')
+
+    def _verify_migration(self, volume_source, volume_dest):
+
+        keys_with_no_change = ('id', 'size', 'description', 'name',
+                               'user_id', 'os-vol-tenant-attr:tenant_id',
+                               'os-vol-host-attr:host')
+        keys_with_change = ('volume_type',)
+
+        # Check the volume information after the retype
+        self.assertIsNone(volume_dest['os-vol-mig-status-attr:migstat'])
+        self.assertIsNone(volume_dest['migration_status'])
+
+        for key in keys_with_no_change:
+            self.assertEqual(volume_source[key], volume_dest[key])
+
+        for key in keys_with_change:
+            self.assertNotEqual(volume_source[key], volume_dest[key])
+
+        self.assertEqual(volume_dest['volume_type'], self.dst_vol_type['name'])
+
+    @decorators.idempotent_id('b90412ee-465d-46e9-b249-ec84a47d5f25')
+    def test_available_volume_retype(self):
+        src_vol = self.create_volume(volume_type=self.src_vol_type['name'])
+
+        # Retype the volume without migration
+        self._retype_volume(src_vol, migration_policy='never')
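
For reference, the behavioral split the two ``_verify_migration``
implementations encode (a condensed sketch mirroring ``_retype_volume``):

    # migration_policy='never':     retype in place; 'os-vol-host-attr:host'
    #                               is unchanged and migration status is None.
    # migration_policy='on-demand': Cinder may migrate between backends; the
    #                               host changes and both migration status
    #                               fields read 'success'.
    self.volumes_client.retype_volume(volume['id'],
                                      new_type=self.dst_vol_type['name'],
                                      migration_policy='never')
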
diff --git a/tempest/api/volume/admin/test_volume_services_negative.py b/tempest/api/volume/admin/test_volume_services_negative.py
new file mode 100644
index 0000000..6f3dbc6
--- /dev/null
+++ b/tempest/api/volume/admin/test_volume_services_negative.py
@@ -0,0 +1,65 @@
+# Copyright 2018 FiberHome Telecommunication Technologies CO.,LTD
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.volume import base
+from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
+
+
+class VolumeServicesNegativeTest(base.BaseVolumeAdminTest):
+
+    @classmethod
+    def resource_setup(cls):
+        super(VolumeServicesNegativeTest, cls).resource_setup()
+        cls.services = cls.admin_volume_services_client.list_services()[
+            'services']
+        cls.host = cls.services[0]['host']
+        cls.binary = cls.services[0]['binary']
+
+    @decorators.attr(type='negative')
+    @decorators.idempotent_id('3246ce65-ba70-4159-aa3b-082c28e4b484')
+    def test_enable_service_with_invalid_host(self):
+        self.assertRaises(lib_exc.NotFound,
+                          self.admin_volume_services_client.enable_service,
+                          host='invalid_host', binary=self.binary)
+
+    @decorators.attr(type='negative')
+    @decorators.idempotent_id('c571f179-c6e6-4c50-a0ab-368b628a8ac1')
+    def test_disable_service_with_invalid_binary(self):
+        self.assertRaises(lib_exc.NotFound,
+                          self.admin_volume_services_client.disable_service,
+                          host=self.host, binary='invalid_binary')
+
+    @decorators.attr(type='negative')
+    @decorators.idempotent_id('77767b36-5e8f-4c68-a0b5-2308cc21ec64')
+    def test_disable_log_reason_with_no_reason(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.admin_volume_services_client.disable_log_reason,
+                          host=self.host, binary=self.binary,
+                          disabled_reason=None)
+
+    @decorators.attr(type='negative')
+    @decorators.idempotent_id('712bfab8-1f44-4eb5-a632-fa70bf78f05e')
+    def test_freeze_host_with_invalid_host(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.admin_volume_services_client.freeze_host,
+                          host='invalid_host')
+
+    @decorators.attr(type='negative')
+    @decorators.idempotent_id('7c6287c9-d655-47e1-9a11-76f6657a6dce')
+    def test_thaw_host_with_invalid_host(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.admin_volume_services_client.thaw_host,
+                          host='invalid_host')
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index c179c35..45060d0 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -60,6 +60,8 @@
         # Create backup
         backup_name = data_utils.rand_name(self.__class__.__name__ + '-Backup')
         backup = self.create_backup(volume_id=volume['id'], name=backup_name)
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
         self.assertEqual(backup_name, backup['name'])
 
         # Export Backup
@@ -126,6 +128,8 @@
         backup_name = data_utils.rand_name(
             self.__class__.__name__ + '-Backup')
         backup = self.create_backup(volume_id=volume['id'], name=backup_name)
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
         self.assertEqual(backup_name, backup['name'])
         # Reset backup status to error
         self.admin_backups_client.reset_backup_status(backup_id=backup['id'],
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 07cfad5..c178272 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -117,6 +117,8 @@
             self.__class__.__name__ + '-Backup')
         backup = self.create_backup(volume_id=volume['id'],
                                     name=backup_name, force=True)
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'in-use')
         self.assertEqual(backup_name, backup['name'])
 
     @decorators.idempotent_id('2a8ba340-dff2-4511-9db7-646f07156b15')
@@ -132,6 +134,8 @@
 
         # Create a backup
         backup = self.create_backup(volume_id=volume['id'])
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Restore the backup
         restored_volume_id = self.restore_backup(backup['id'])['volume_id']
@@ -160,6 +164,8 @@
         # Create volume and backup
         volume = self.create_volume()
         backup = self.create_backup(volume_id=volume['id'])
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Update backup and assert response body for update_backup method
         update_kwargs = {
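
The waits added here (and in the admin and scenario backup tests) address the
same race: while Cinder takes a backup, the source volume passes through
``backing-up`` before returning to its prior status, so asserting or cleaning
up too early can fail. The assumed pattern:

    backup = self.create_backup(volume_id=volume['id'])
    # available -> backing-up -> available (or in-use -> backing-up -> in-use
    # for a forced backup of an attached volume).
    waiters.wait_for_volume_resource_status(self.volumes_client,
                                            volume['id'], 'available')
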
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 52114bc..93638b8 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -15,6 +15,7 @@
 
 from tempest.api.volume import base
 from tempest.common import utils
+from tempest.common import waiters
 from tempest import config
 from tempest.lib.common.utils import data_utils
 from tempest.lib import decorators
@@ -163,6 +164,8 @@
 
         backup = self.create_backup(volume_id=self.volume_origin['id'],
                                     snapshot_id=snapshot['id'])
+        waiters.wait_for_volume_resource_status(self.snapshots_client,
+                                                snapshot['id'], 'available')
         backup_info = self.backups_client.show_backup(backup['id'])['backup']
         self.assertEqual(self.volume_origin['id'], backup_info['volume_id'])
         self.assertEqual(snapshot['id'], backup_info['snapshot_id'])
diff --git a/tempest/lib/services/volume/v3/services_client.py b/tempest/lib/services/volume/v3/services_client.py
index 09036a4..22155a9 100644
--- a/tempest/lib/services/volume/v3/services_client.py
+++ b/tempest/lib/services/volume/v3/services_client.py
@@ -20,9 +20,15 @@
 
 
 class ServicesClient(rest_client.RestClient):
-    """Client class to send CRUD Volume API requests"""
+    """Client class to send CRUD Volume Services API requests"""
 
     def list_services(self, **params):
+        """List all Cinder services.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/block-storage/v3/#list-all-cinder-services
+        """
         url = 'os-services'
         if params:
             url += '?%s' % urllib.urlencode(params)
@@ -31,3 +37,66 @@
         body = json.loads(body)
         self.expected_success(200, resp.status)
         return rest_client.ResponseBody(resp, body)
+
+    def enable_service(self, **kwargs):
+        """Enable service on a host.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/block-storage/v3/#enable-a-cinder-service
+        """
+        put_body = json.dumps(kwargs)
+        resp, body = self.put('os-services/enable', put_body)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return rest_client.ResponseBody(resp, body)
+
+    def disable_service(self, **kwargs):
+        """Disable service on a host.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/block-storage/v3/#disable-a-cinder-service
+        """
+        put_body = json.dumps(kwargs)
+        resp, body = self.put('os-services/disable', put_body)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return rest_client.ResponseBody(resp, body)
+
+    def disable_log_reason(self, **kwargs):
+        """Disable scheduling for a volume service and log disabled reason.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/block-storage/v3/#log-disabled-cinder-service-information
+        """
+        put_body = json.dumps(kwargs)
+        resp, body = self.put('os-services/disable-log-reason', put_body)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return rest_client.ResponseBody(resp, body)
+
+    def freeze_host(self, **kwargs):
+        """Freeze a Cinder backend host.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/block-storage/v3/#freeze-a-cinder-backend-host
+        """
+        put_body = json.dumps(kwargs)
+        resp, _ = self.put('os-services/freeze', put_body)
+        self.expected_success(200, resp.status)
+        return rest_client.ResponseBody(resp)
+
+    def thaw_host(self, **kwargs):
+        """Thaw a Cinder backend host.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/block-storage/v3/#thaw-a-cinder-backend-host
+        """
+        put_body = json.dumps(kwargs)
+        resp, _ = self.put('os-services/thaw', put_body)
+        self.expected_success(200, resp.status)
+        return rest_client.ResponseBody(resp)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 9965fe5..145dcf1 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -443,7 +443,9 @@
                                        disk_format=img_disk_format,
                                        properties=img_properties)
         except IOError:
-            LOG.debug("A qcow2 image was not found. Try to get a uec image.")
+            LOG.warning(
+                "A(n) %s image was not found. Retrying with uec image.",
+                img_disk_format)
             kernel = self._image_create('scenario-aki', 'aki', aki_img_path)
             ramdisk = self._image_create('scenario-ari', 'ari', ari_img_path)
             properties = {'kernel_id': kernel, 'ramdisk_id': ramdisk}
diff --git a/tempest/scenario/test_volume_backup_restore.py b/tempest/scenario/test_volume_backup_restore.py
index c23b564..8a8c54e 100644
--- a/tempest/scenario/test_volume_backup_restore.py
+++ b/tempest/scenario/test_volume_backup_restore.py
@@ -14,6 +14,7 @@
 #    under the License.
 
 from tempest.common import utils
+from tempest.common import waiters
 from tempest import config
 from tempest.lib import decorators
 from tempest.scenario import manager
@@ -56,6 +57,8 @@
 
         # Create a backup
         backup = self.create_backup(volume_id=volume['id'])
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Restore the backup
         restored_volume_id = self.restore_backup(backup['id'])['volume_id']
diff --git a/tempest/tests/cmd/test_workspace.py b/tempest/tests/cmd/test_workspace.py
index a1c8c53..3ed8a10 100644
--- a/tempest/tests/cmd/test_workspace.py
+++ b/tempest/tests/cmd/test_workspace.py
@@ -17,6 +17,11 @@
 import subprocess
 import tempfile
 
+from mock import patch
+try:
+    from StringIO import StringIO
+except ImportError:
+    from io import StringIO
 from tempest.cmd import workspace
 from tempest.lib.common.utils import data_utils
 from tempest.tests import base
@@ -140,3 +145,42 @@
         self.addCleanup(shutil.rmtree, path, ignore_errors=True)
         self.workspace_manager.register_new_workspace(name, path)
         self.assertIsNotNone(self.workspace_manager.get_workspace(name))
+
+    def test_workspace_name_not_exists(self):
+        nonexistent_name = data_utils.rand_uuid()
+        with patch('sys.stdout', new_callable=StringIO) as mock_stdout:
+            ex = self.assertRaises(SystemExit,
+                                   self.workspace_manager._name_exists,
+                                   nonexistent_name)
+        self.assertEqual(1, ex.code)
+        self.assertEqual(mock_stdout.getvalue(),
+                         "A workspace was not found with name: %s\n" %
+                         nonexistent_name)
+
+    def test_workspace_name_already_exists(self):
+        duplicate_name = self.name
+        with patch('sys.stdout', new_callable=StringIO) as mock_stdout:
+            ex = self.assertRaises(
+                SystemExit,
+                self.workspace_manager._workspace_name_exists,
+                duplicate_name)
+        self.assertEqual(1, ex.code)
+        self.assertEqual(mock_stdout.getvalue(),
+                         "A workspace already exists with name: %s.\n"
+                         % duplicate_name)
+
+    def test_workspace_manager_path_not_exist(self):
+        fake_path = "fake_path"
+        with patch('sys.stdout', new_callable=StringIO) as mock_stdout:
+            ex = self.assertRaises(SystemExit,
+                                   self.workspace_manager._validate_path,
+                                   fake_path)
+        self.assertEqual(1, ex.code)
+        self.assertEqual(mock_stdout.getvalue(),
+                         "Path does not exist.\n")
+
+    def test_workspace_manager_list_workspaces(self):
+        listed = self.workspace_manager.list_workspaces()
+        self.assertEqual(1, len(listed))
+        self.assertIn(self.name, listed)
+        self.assertEqual(self.path, listed.get(self.name))
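
All three SystemExit tests above share one capture pattern: swap sys.stdout for a StringIO, invoke the failing path, then assert on both the exit code and the printed message. Stripped of Tempest specifics (stand-in function; stdlib unittest.mock instead of the mock package), the pattern is:

    import sys
    from io import StringIO
    from unittest.mock import patch

    def fail_loudly():
        # Stand-in for a helper that prints an error and exits non-zero.
        print("something went wrong")
        sys.exit(1)

    with patch('sys.stdout', new_callable=StringIO) as mock_stdout:
        try:
            fail_loudly()
        except SystemExit as ex:
            assert ex.code == 1
    assert mock_stdout.getvalue() == "something went wrong\n"
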
diff --git a/tempest/tests/lib/services/volume/v3/test_services_client.py b/tempest/tests/lib/services/volume/v3/test_services_client.py
new file mode 100644
index 0000000..f65228f
--- /dev/null
+++ b/tempest/tests/lib/services/volume/v3/test_services_client.py
@@ -0,0 +1,214 @@
+# Copyright 2018 FiberHome Telecommunication Technologies CO.,LTD
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import copy
+
+import mock
+from oslo_serialization import jsonutils as json
+
+from tempest.lib.services.volume.v3 import services_client
+from tempest.tests.lib import fake_auth_provider
+from tempest.tests.lib.services import base
+
+
+class TestServicesClient(base.BaseServiceTest):
+
+    FAKE_SERVICE_LIST = {
+        "services": [
+            {
+                "status": "enabled",
+                "binary": "cinder-backup",
+                "zone": "nova",
+                "state": "up",
+                "updated_at": "2017-07-20T07:20:17.000000",
+                "host": "fake-host",
+                "disabled_reason": None
+            },
+            {
+                "status": "enabled",
+                "binary": "cinder-scheduler",
+                "zone": "nova",
+                "state": "up",
+                "updated_at": "2017-07-20T07:20:24.000000",
+                "host": "fake-host",
+                "disabled_reason": None
+            },
+            {
+                "status": "enabled",
+                "binary": "cinder-volume",
+                "zone": "nova",
+                "frozen": False,
+                "state": "up",
+                "updated_at": "2017-07-20T07:20:20.000000",
+                "host": "fake-host@lvm",
+                "replication_status": "disabled",
+                "active_backend_id": None,
+                "disabled_reason": None
+            }
+        ]
+    }
+
+    FAKE_SERVICE_REQUEST = {
+        "host": "fake-host",
+        "binary": "cinder-volume"
+    }
+
+    FAKE_SERVICE_RESPONSE = {
+        "disabled": False,
+        "status": "enabled",
+        "host": "fake-host@lvm",
+        "service": "",
+        "binary": "cinder-volume",
+        "disabled_reason": None
+    }
+
+    def setUp(self):
+        super(TestServicesClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = services_client.ServicesClient(fake_auth,
+                                                     'volume',
+                                                     'regionOne')
+
+    def _test_list_services(self, bytes_body=False,
+                            mock_args='os-services', **params):
+        self.check_service_client_function(
+            self.client.list_services,
+            'tempest.lib.common.rest_client.RestClient.get',
+            self.FAKE_SERVICE_LIST,
+            to_utf=bytes_body,
+            mock_args=[mock_args],
+            **params)
+
+    def _test_enable_service(self, bytes_body=False):
+        resp_body = self.FAKE_SERVICE_RESPONSE
+        kwargs = self.FAKE_SERVICE_REQUEST
+        payload = json.dumps(kwargs, sort_keys=True)
+        json_dumps = json.dumps
+
+        # NOTE: Use sort_keys for json.dumps so that the expected and actual
+        # payloads are guaranteed to be identical for the mock_args assertion.
+        with mock.patch.object(services_client.json, 'dumps') as mock_dumps:
+            mock_dumps.side_effect = lambda d: json_dumps(d, sort_keys=True)
+
+            self.check_service_client_function(
+                self.client.enable_service,
+                'tempest.lib.common.rest_client.RestClient.put',
+                resp_body,
+                to_utf=bytes_body,
+                mock_args=['os-services/enable', payload],
+                **kwargs)
+
+    def _test_disable_service(self, bytes_body=False):
+        resp_body = copy.deepcopy(self.FAKE_SERVICE_RESPONSE)
+        resp_body.pop('disabled_reason')
+        resp_body['disabled'] = True
+        resp_body['status'] = 'disabled'
+        kwargs = self.FAKE_SERVICE_REQUEST
+        payload = json.dumps(kwargs, sort_keys=True)
+        json_dumps = json.dumps
+
+        # NOTE: Use sort_keys for json.dumps so that the expected and actual
+        # payloads are guaranteed to be identical for the mock_args assertion.
+        with mock.patch.object(services_client.json, 'dumps') as mock_dumps:
+            mock_dumps.side_effect = lambda d: json_dumps(d, sort_keys=True)
+
+            self.check_service_client_function(
+                self.client.disable_service,
+                'tempest.lib.common.rest_client.RestClient.put',
+                resp_body,
+                to_utf=bytes_body,
+                mock_args=['os-services/disable', payload],
+                **kwargs)
+
+    def _test_disable_log_reason(self, bytes_body=False):
+        resp_body = copy.deepcopy(self.FAKE_SERVICE_RESPONSE)
+        resp_body['disabled_reason'] = "disabled for test"
+        resp_body['disabled'] = True
+        resp_body['status'] = 'disabled'
+        kwargs = copy.deepcopy(self.FAKE_SERVICE_REQUEST)
+        kwargs.update({"disabled_reason": "disabled for test"})
+        payload = json.dumps(kwargs, sort_keys=True)
+        json_dumps = json.dumps
+
+        # NOTE: Use sort_keys for json.dumps so that the expected and actual
+        # payloads are guaranteed to be identical for the mock_args assertion.
+        with mock.patch.object(services_client.json, 'dumps') as mock_dumps:
+            mock_dumps.side_effect = lambda d: json_dumps(d, sort_keys=True)
+
+            self.check_service_client_function(
+                self.client.disable_log_reason,
+                'tempest.lib.common.rest_client.RestClient.put',
+                resp_body,
+                to_utf=bytes_body,
+                mock_args=['os-services/disable-log-reason', payload],
+                **kwargs)
+
+    def _test_freeze_host(self, bytes_body=False):
+        kwargs = {'host': 'host1@lvm'}
+        self.check_service_client_function(
+            self.client.freeze_host,
+            'tempest.lib.common.rest_client.RestClient.put',
+            {},
+            bytes_body,
+            **kwargs)
+
+    def _test_thaw_host(self, bytes_body=False):
+        kwargs = {'host': 'host1@lvm'}
+        self.check_service_client_function(
+            self.client.thaw_host,
+            'tempest.lib.common.rest_client.RestClient.put',
+            {},
+            bytes_body,
+            **kwargs)
+
+    def test_list_services_with_str_body(self):
+        self._test_list_services()
+
+    def test_list_services_with_bytes_body(self):
+        self._test_list_services(bytes_body=True)
+
+    def test_list_services_with_params(self):
+        mock_args = 'os-services?host=fake-host'
+        self._test_list_services(mock_args=mock_args, host='fake-host')
+
+    def test_enable_service_with_str_body(self):
+        self._test_enable_service()
+
+    def test_enable_service_with_bytes_body(self):
+        self._test_enable_service(bytes_body=True)
+
+    def test_disable_service_with_str_body(self):
+        self._test_disable_service()
+
+    def test_disable_service_with_bytes_body(self):
+        self._test_disable_service(bytes_body=True)
+
+    def test_disable_log_reason_with_str_body(self):
+        self._test_disable_log_reason()
+
+    def test_disable_log_reason_with_bytes_body(self):
+        self._test_disable_log_reason(bytes_body=True)
+
+    def test_freeze_host_with_str_body(self):
+        self._test_freeze_host()
+
+    def test_freeze_host_with_bytes_body(self):
+        self._test_freeze_host(bytes_body=True)
+
+    def test_thaw_host_with_str_body(self):
+        self._test_thaw_host()
+
+    def test_thaw_host_with_bytes_body(self):
+        self._test_thaw_host(bytes_body=True)
diff --git a/tox.ini b/tox.ini
index da0233a..de4f1b7 100644
--- a/tox.ini
+++ b/tox.ini
@@ -19,7 +19,7 @@
     OS_STDOUT_CAPTURE=1
     OS_STDERR_CAPTURE=1
     OS_TEST_TIMEOUT=160
-    PYTHONWARNINGS=default::DeprecationWarning
+    PYTHONWARNINGS=default::DeprecationWarning,ignore::DeprecationWarning:distutils,ignore::DeprecationWarning:site
 passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION GENERATE_TEMPEST_PLUGIN_LIST
 usedevelop = True
 install_command = pip install {opts} {packages}
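
Each comma-separated PYTHONWARNINGS entry uses Python's action:message:category:module:lineno filter syntax, so the new value keeps DeprecationWarning visible by default while silencing the noisy ones raised from distutils and site. A roughly equivalent in-process version (module names are matched as regexes against the warning's origin module):

    import warnings

    # Filters added later are inserted at the front and win on a match,
    # mirroring how later PYTHONWARNINGS entries take precedence.
    warnings.filterwarnings('default', category=DeprecationWarning)
    warnings.filterwarnings('ignore', category=DeprecationWarning,
                            module='distutils')
    warnings.filterwarnings('ignore', category=DeprecationWarning,
                            module='site')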