Merge "Fix arguments for method expected_success"
diff --git a/HACKING.rst b/HACKING.rst
index 8652971..29d5bf4 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -12,6 +12,7 @@
 - [T104] Scenario tests require a services decorator
 - [T105] Unit tests cannot use setUpClass
 - [T106] vim configuration should not be kept in source files.
+- [N322] Method's default argument shouldn't be mutable
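+
+  A short illustration of the N322 check (a generic sketch, not taken from the
+  Tempest code itself)::
+
+      # Flagged by N322: the default list object is shared across calls
+      def add_server(name, servers=[]):
+          servers.append(name)
+          return servers
+
+      # Preferred: default to None and create a fresh list inside the method
+      def add_server(name, servers=None):
+          if servers is None:
+              servers = []
+          servers.append(name)
+          return servers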
 
 Test Data/Configuration
 -----------------------
@@ -226,3 +227,48 @@
 
 2. The unit tests cannot use setUpClass, instead fixtures and testresources
    should be used for shared state between tests.
+
+
+.. _TestDocumentation:
+
+Test Documentation
+------------------
+All tests being added are required to include inline documentation in the form
+of docstrings to explain what is being tested. In API tests for a new API, a
+class level docstring linking to the API reference doc should be added. If one
+doesn't exist, a TODO comment should be put in its place indicating that the
+reference needs to be added. For individual API test cases a method level
+docstring should be used to explain the functionality being tested if the test
+name isn't descriptive enough. For example::
+
+    def test_get_role_by_id(self):
+        """Get a role by its id."""
+
+The docstring there is superfluous and shouldn't be added, but for a method
+like::
+
+    def test_volume_backup_create_get_detailed_list_restore_delete(self):
+        pass
+
+a docstring would be useful because, while the test title is fairly
+descriptive, the operations being performed are complex enough that a bit more
+explanation will help people figure out the intent of the test.
+
+For scenario tests a class level docstring describing the steps in the scenario
+is required. If there is more than one test case in the class, individual
+docstrings for the workflow in each test method can be used instead. A good
+example of this would be::
+
+    class TestVolumeBootPattern(manager.OfficialClientTest):
+    """
+    This test case attempts to reproduce the following steps:
+
+     * Create in Cinder some bootable volume importing a Glance image
+     * Boot an instance from the bootable volume
+     * Write content to the volume
+     * Delete an instance and Boot a new instance from the volume
+     * Check written content in the instance
+     * Create a volume snapshot while the instance is running
+     * Boot an additional instance from the new snapshot based volume
+     * Check written content in the instance booted from snapshot
+    """
diff --git a/REVIEWING.rst b/REVIEWING.rst
new file mode 100644
index 0000000..74bd2ad
--- /dev/null
+++ b/REVIEWING.rst
@@ -0,0 +1,69 @@
+Reviewing Tempest Code
+======================
+
+To start read the `OpenStack Common Review Checklist
+<https://wiki.openstack.org/wiki/ReviewChecklist#Common_Review_Checklist>`_
+
+
+Ensuring code is executed
+-------------------------
+
+Any new test or change to an existing test has to be verified in the gate. This means
+that the first thing to check with any change is that a gate job actually runs
+it. Tests which aren't executed either because of configuration or skips should
+not be accepted.
+
+
+Unit Tests
+----------
+
+For any change that adds new functionality to either common code or an
+out-of-band tool, unit tests are required. This is to ensure we don't introduce
+future regressions and to test conditions which we may not hit in the gate runs.
+Tests and service clients aren't required to have unit tests since they should
+be self-verifying by running them in the gate.
+
+
+API Stability
+-------------
+Tests should only be added for published stable APIs. If a patch contains
+tests for an API which hasn't been marked as stable, or for an API which
+doesn't conform to the `API stability guidelines
+<https://wiki.openstack.org/wiki/Governance/Approved/APIStability>`_ then it
+should not be approved.
+
+
+Reject Copy and Paste Test Code
+-------------------------------
+When creating new tests that are similar to existing tests it is tempting to
+simply copy the code and make a few modifications. This increases code size and
+the maintenance burden. Such changes should not be approved if it is easy to
+abstract the duplicated code into a function or method.
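+
+As a rough sketch (the class and method names here are only illustrative), a
+check repeated verbatim across several test classes could instead live in a
+shared base class::
+
+    class BaseExampleTest(test.BaseTestCase):
+
+        def _assertExpected(self, expected, actual):
+            # Shared helper: verify expected keys/values appear in the response
+            # (assumes: import six; from tempest import test)
+            for key, value in six.iteritems(expected):
+                if key not in ('created_at', 'updated_at'):
+                    self.assertIn(key, actual)
+                    self.assertEqual(value, actual[key])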
+
+
+Being explicit
+--------------
+When tests are being added that depend on a configurable feature or extension,
+polling the API to discover that it is enabled should not be done. This will
+just result in bugs being masked because the test can be skipped automatically.
+Instead the config file should be used to determine whether a test should be
+skipped or not. Do not approve changes that depend on an API call to determine
+whether to skip or not.
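+
+A minimal sketch of the config driven approach (assuming the usual
+``CONF = config.CONF`` module global; the test class and the specific option
+name are only illustrative)::
+
+    @classmethod
+    def resource_setup(cls):
+        super(ExampleFeatureTest, cls).resource_setup()
+        # Skip based on the deployment's config, not on an API call
+        if not CONF.compute_feature_enabled.resize:
+            raise cls.skipException("Resize is not enabled")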
+
+
+Test Documentation
+------------------
+When a new test is being added refer to the :ref:`TestDocumentation` section in
+HACKING.rst to see if the requirements are being met. With the exception of a
+class level docstring linking to the API ref doc in the API tests and a
+docstring for scenario tests, whether a docstring is required is left to the
+reviewer's discretion.
+
+
+When to approve
+---------------
+ * Every patch needs two +2s before being approved.
+ * It's OK to hold off on an approval until a subject matter expert reviews it.
+ * If a patch has already been approved but requires a trivial rebase to merge,
+   you do not have to wait for a second +2, since the patch has already had
+   two +2s.
diff --git a/doc/source/REVIEWING.rst b/doc/source/REVIEWING.rst
new file mode 120000
index 0000000..841e042
--- /dev/null
+++ b/doc/source/REVIEWING.rst
@@ -0,0 +1 @@
+../../REVIEWING.rst
\ No newline at end of file
diff --git a/doc/source/cleanup.rst b/doc/source/cleanup.rst
new file mode 100644
index 0000000..acd016c
--- /dev/null
+++ b/doc/source/cleanup.rst
@@ -0,0 +1,5 @@
+--------------------------------
+Post Tempest Run Cleanup Utility
+--------------------------------
+
+.. automodule:: tempest.cmd.cleanup
\ No newline at end of file
diff --git a/doc/source/conf.py b/doc/source/conf.py
index bd4e553..daa293c 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -27,7 +27,6 @@
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = ['sphinx.ext.autodoc',
-              'sphinx.ext.intersphinx',
               'sphinx.ext.todo',
               'sphinx.ext.viewcode',
               'oslosphinx'
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 25bc900..bc4fc46 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -9,6 +9,7 @@
 
    overview
    HACKING
+   REVIEWING
 
 ------------
 Field Guides
@@ -28,6 +29,15 @@
    field_guide/thirdparty
    field_guide/unit_tests
 
+---------------------
+Command Documentation
+---------------------
+
+.. toctree::
+   :maxdepth: 1
+
+   cleanup
+
 ==================
 Indices and tables
 ==================
diff --git a/etc/accounts.yaml.sample b/etc/accounts.yaml.sample
new file mode 100644
index 0000000..54fdcad
--- /dev/null
+++ b/etc/accounts.yaml.sample
@@ -0,0 +1,11 @@
+# The number of accounts required can be estimated as CONCURRENCY x 2
+# Valid fields for credentials are defined in the descendants of
+# auth.Credentials - see KeystoneV[2|3]Credentials.CONF_ATTRIBUTES
+
+- username: 'user_1'
+  tenant_name: 'test_tenant_1'
+  password: 'test_password'
+
+- username: 'user_2'
+  tenant_name: 'test_tenant_2'
+  password: 'test_password'
diff --git a/etc/schemas/compute/admin/flavor_create.json b/etc/schemas/compute/admin/flavor_create.json
deleted file mode 100644
index 0a3e7b3..0000000
--- a/etc/schemas/compute/admin/flavor_create.json
+++ /dev/null
@@ -1,20 +0,0 @@
-{
-    "name": "flavor-create",
-    "http-method": "POST",
-    "admin_client": true,
-    "url": "flavors",
-    "default_result_code": 400,
-    "json-schema": {
-        "type": "object",
-        "properties": {
-            "name": { "type": "string"},
-            "ram": { "type": "integer", "minimum": 1},
-            "vcpus": { "type": "integer", "minimum": 1},
-            "disk": { "type": "integer"},
-            "id": { "type": "integer"},
-            "swap": { "type": "integer"},
-            "rxtx_factor": { "type": "integer"},
-            "OS-FLV-EXT-DATA:ephemeral": { "type": "integer"}
-        }
-    }
-}
diff --git a/etc/schemas/compute/flavors/flavor_details.json b/etc/schemas/compute/flavors/flavor_details.json
deleted file mode 100644
index c16075c..0000000
--- a/etc/schemas/compute/flavors/flavor_details.json
+++ /dev/null
@@ -1,8 +0,0 @@
-{
-    "name": "get-flavor-details",
-    "http-method": "GET",
-    "url": "flavors/%s",
-    "resources": [
-        {"name": "flavor", "expected_result": 404}
-    ]
-}
diff --git a/etc/schemas/compute/flavors/flavor_details_v3.json b/etc/schemas/compute/flavors/flavor_details_v3.json
deleted file mode 100644
index d1c1077..0000000
--- a/etc/schemas/compute/flavors/flavor_details_v3.json
+++ /dev/null
@@ -1,6 +0,0 @@
-{
-    "name": "get-flavor-details",
-    "http-method": "GET",
-    "url": "flavors/%s",
-    "resources": ["flavor"]
-}
diff --git a/etc/schemas/compute/flavors/flavors_list.json b/etc/schemas/compute/flavors/flavors_list.json
deleted file mode 100644
index eb8383b..0000000
--- a/etc/schemas/compute/flavors/flavors_list.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
-    "name": "list-flavors-with-detail",
-    "http-method": "GET",
-    "url": "flavors/detail",
-    "json-schema": {
-        "type": "object",
-        "properties": {
-            "minRam": {
-                "type": "integer",
-                "results": {
-                    "gen_none": 400,
-                    "gen_string": 400
-                }
-            },
-            "minDisk": {
-                "type": "integer",
-                "results": {
-                    "gen_none": 400,
-                    "gen_string": 400
-                }
-            }
-        }
-    }
-}
diff --git a/etc/schemas/compute/flavors/flavors_list_v3.json b/etc/schemas/compute/flavors/flavors_list_v3.json
deleted file mode 100644
index d5388b3..0000000
--- a/etc/schemas/compute/flavors/flavors_list_v3.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
-    "name": "list-flavors-with-detail",
-    "http-method": "GET",
-    "url": "flavors/detail",
-    "json-schema": {
-        "type": "object",
-        "properties": {
-            "min_ram": {
-                "type": "integer",
-                "results": {
-                    "gen_none": 400,
-                    "gen_string": 400
-                }
-            },
-            "min_disk": {
-                "type": "integer",
-                "results": {
-                    "gen_none": 400,
-                    "gen_string": 400
-                }
-            }
-        }
-    }
-}
diff --git a/etc/schemas/compute/servers/get_console_output.json b/etc/schemas/compute/servers/get_console_output.json
deleted file mode 100644
index 8d974ba..0000000
--- a/etc/schemas/compute/servers/get_console_output.json
+++ /dev/null
@@ -1,23 +0,0 @@
-{
-    "name": "get-console-output",
-    "http-method": "POST",
-    "url": "servers/%s/action",
-    "resources": [
-        {"name":"server", "expected_result": 404}
-    ],
-    "json-schema": {
-        "type": "object",
-        "properties": {
-            "os-getConsoleOutput": {
-                "type": "object",
-                "properties": {
-                    "length": {
-                        "type": ["integer", "string"],
-                        "minimum": 0
-                    }
-                }
-            }
-        },
-        "additionalProperties": false
-    }
-}
diff --git a/etc/tempest.conf.sample b/etc/tempest.conf.sample
index 96ef11a..dfcbaba 100644
--- a/etc/tempest.conf.sample
+++ b/etc/tempest.conf.sample
@@ -105,6 +105,17 @@
 #syslog_log_facility=LOG_USER
 
 
+[auth]
+
+#
+# Options defined in tempest.config
+#
+
+# Path to the yaml file that contains the list of credentials
+# to use for running tests (string value)
+#test_accounts_file=etc/accounts.yaml
+
+
 [baremetal]
 
 #
@@ -386,6 +397,9 @@
 # If false, skip all nova v3 tests. (boolean value)
 #api_v3=false
 
+# If false skip all v2 api tests with xml (boolean value)
+#xml_api_v2=true
+
 # If false, skip disk config tests (boolean value)
 #disk_config=true
 
@@ -405,6 +419,10 @@
 # password? (boolean value)
 #change_password=false
 
+# Does the test environment support obtaining instance serial
+# console output? (boolean value)
+#console_output=true
+
 # Does the test environment support resizing? (boolean value)
 #resize=false
 
@@ -456,6 +474,10 @@
 # attachment? (boolean value)
 #interface_attach=true
 
+# Does the test environment support creating snapshot images
+# of running instances? (boolean value)
+#snapshot=true
+
 
 [dashboard]
 
@@ -687,6 +709,42 @@
 #ssh_user_regex=[["^.*[Cc]irros.*$", "root"]]
 
 
+[messaging]
+
+#
+# Options defined in tempest.config
+#
+
+# Catalog type of the Messaging service. (string value)
+#catalog_type=messaging
+
+# The maximum number of queue records per page when listing
+# queues (integer value)
+#max_queues_per_page=20
+
+# The maximum metadata size for a queue (integer value)
+#max_queue_metadata=65536
+
+# The maximum number of queue message per page when listing
+# (or) posting messages (integer value)
+#max_messages_per_page=20
+
+# The maximum size of a message body (integer value)
+#max_message_size=262144
+
+# The maximum number of messages per claim (integer value)
+#max_messages_per_claim=20
+
+# The maximum ttl for a message (integer value)
+#max_message_ttl=1209600
+
+# The maximum ttl for a claim (integer value)
+#max_claim_ttl=43200
+
+# The maximum grace period for a claim (integer value)
+#max_claim_grace=43200
+
+
 [negative]
 
 #
@@ -725,10 +783,10 @@
 
 # The cidr block to allocate tenant ipv6 subnets from (string
 # value)
-#tenant_network_v6_cidr=2003::/64
+#tenant_network_v6_cidr=2003::/48
 
 # The mask bits for tenant ipv6 subnets (integer value)
-#tenant_network_v6_mask_bits=96
+#tenant_network_v6_mask_bits=64
 
 # Whether tenant network connectivity should be evaluated
 # directly (boolean value)
@@ -821,6 +879,15 @@
 # expected to be enabled (list value)
 #discoverable_apis=all
 
+# Execute (old style) container-sync tests (boolean value)
+#container_sync=true
+
+# Execute object-versioning tests (boolean value)
+#object_versioning=true
+
+# Execute discoverability tests (boolean value)
+#discoverability=true
+
 
 [orchestration]
 
@@ -866,42 +933,6 @@
 #max_resources_per_stack=1000
 
 
-[queuing]
-
-#
-# Options defined in tempest.config
-#
-
-# Catalog type of the Queuing service. (string value)
-#catalog_type=queuing
-
-# The maximum number of queue records per page when listing
-# queues (integer value)
-#max_queues_per_page=20
-
-# The maximum metadata size for a queue (integer value)
-#max_queue_metadata=65536
-
-# The maximum number of queue message per page when listing
-# (or) posting messages (integer value)
-#max_messages_per_page=20
-
-# The maximum size of a message body (integer value)
-#max_message_size=262144
-
-# The maximum number of messages per claim (integer value)
-#max_messages_per_claim=20
-
-# The maximum ttl for a message (integer value)
-#max_message_ttl=1209600
-
-# The maximum ttl for a claim (integer value)
-#max_claim_ttl=43200
-
-# The maximum grace period for a claim (integer value)
-#max_claim_grace=43200
-
-
 [scenario]
 
 #
@@ -911,8 +942,15 @@
 # Directory containing image files (string value)
 #img_dir=/opt/stack/new/devstack/files/images/cirros-0.3.1-x86_64-uec
 
-# QCOW2 image file name (string value)
-#qcow2_img_file=cirros-0.3.1-x86_64-disk.img
+# Image file name (string value)
+# Deprecated group/name - [DEFAULT]/qcow2_img_file
+#img_file=cirros-0.3.1-x86_64-disk.img
+
+# Image disk format (string value)
+#img_disk_format=qcow2
+
+# Image container format (string value)
+#img_container_format=bare
 
 # AMI image file name (string value)
 #ami_img_file=cirros-0.3.1-x86_64-blank.img
@@ -981,9 +1019,9 @@
 # value)
 #trove=false
 
-# Whether or not Marconi is expected to be available (boolean
+# Whether or not Zaqar is expected to be available (boolean
 # value)
-#marconi=false
+#zaqar=false
 
 
 [stress]
diff --git a/requirements.txt b/requirements.txt
index 9a3b74d..708ede3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,25 +1,28 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
 pbr>=0.6,!=0.7,<1.0
 anyjson>=0.3.3
 httplib2>=0.7.5
 jsonschema>=2.0.0,<3.0.0
 testtools>=0.9.34
 lxml>=2.3
-boto>=2.12.0,!=2.13.0
+boto>=2.32.1
 paramiko>=1.13.0
-netaddr>=0.7.6
+netaddr>=0.7.12
 python-ceilometerclient>=1.0.6
-python-glanceclient>=0.13.1
-python-keystoneclient>=0.9.0
-python-novaclient>=2.17.0
-python-neutronclient>=2.3.5,<3
-python-cinderclient>=1.0.7
+python-glanceclient>=0.14.0
+python-keystoneclient>=0.10.0
+python-novaclient>=2.18.0
+python-neutronclient>=2.3.6,<3
+python-cinderclient>=1.1.0
 python-heatclient>=0.2.9
-python-ironicclient
-python-saharaclient>=0.6.0
-python-swiftclient>=2.0.2
+python-ironicclient>=0.2.1
+python-saharaclient>=0.7.3
+python-swiftclient>=2.2.0
 testresources>=0.2.4
 testrepository>=0.0.18
-oslo.config>=1.2.1
+oslo.config>=1.4.0  # Apache-2.0
 six>=1.7.0
 iso8601>=0.1.9
 fixtures>=0.3.14
diff --git a/setup.cfg b/setup.cfg
index 5c62710..2e25ace 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -22,6 +22,7 @@
     verify-tempest-config = tempest.cmd.verify_tempest_config:main
     javelin2 = tempest.cmd.javelin:main
     run-tempest-stress = tempest.cmd.run_stress:main
+    tempest-cleanup = tempest.cmd.cleanup:main
 
 [build_sphinx]
 all_files = 1
diff --git a/tempest/api/baremetal/README.rst b/tempest/api/baremetal/README.rst
new file mode 100644
index 0000000..759c937
--- /dev/null
+++ b/tempest/api/baremetal/README.rst
@@ -0,0 +1,25 @@
+Tempest Field Guide to Baremetal API tests
+==========================================
+
+
+What are these tests?
+---------------------
+
+These tests stress the OpenStack baremetal provisioning API provided by
+Ironic.
+
+
+Why are these tests in tempest?
+-------------------------------
+
+The purpose of these tests is to exercise the various APIs provided by Ironic
+for managing baremetal nodes.
+
+
+Scope of these tests
+--------------------
+
+The baremetal API tests perform basic CRUD operations on the Ironic node
+inventory. They do not actually perform hardware provisioning. It is important
+to note that all Ironic API actions are admin operations meant to be used
+either by cloud operators or other OpenStack services (e.g., Nova).
diff --git a/tempest/api/queuing/__init__.py b/tempest/api/baremetal/admin/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/api/baremetal/admin/__init__.py
diff --git a/tempest/api/baremetal/base.py b/tempest/api/baremetal/admin/base.py
similarity index 91%
rename from tempest/api/baremetal/base.py
rename to tempest/api/baremetal/admin/base.py
index 62edd10..3b12b8e 100644
--- a/tempest/api/baremetal/base.py
+++ b/tempest/api/baremetal/admin/base.py
@@ -28,6 +28,10 @@
 # which has no external dependencies.
 SUPPORTED_DRIVERS = ['fake']
 
+# NOTE(jroll): resources must be deleted in a specific order, this list
+# defines the resource types to clean up, and the correct order.
+RESOURCE_TYPES = ['port', 'node', 'chassis']
+
 
 def creates(resource):
     """Decorator that adds resources to the appropriate cleanup list."""
@@ -49,8 +53,8 @@
     """Base class for Baremetal API tests."""
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseBaremetalTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseBaremetalTest, cls).resource_setup()
 
         if not CONF.service_available.ironic:
             skip_msg = ('%s skipped as Ironic is not available' % cls.__name__)
@@ -66,21 +70,22 @@
         mgr = clients.AdminManager()
         cls.client = mgr.baremetal_client
         cls.power_timeout = CONF.baremetal.power_timeout
-        cls.created_objects = {'chassis': set(),
-                               'port': set(),
-                               'node': set()}
+        cls.created_objects = {}
+        for resource in RESOURCE_TYPES:
+            cls.created_objects[resource] = set()
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         """Ensure that all created objects get destroyed."""
 
         try:
-            for resource, uuids in cls.created_objects.iteritems():
+            for resource in RESOURCE_TYPES:
+                uuids = cls.created_objects[resource]
                 delete_method = getattr(cls.client, 'delete_%s' % resource)
                 for u in uuids:
                     delete_method(u, ignore_errors=exc.NotFound)
         finally:
-            super(BaseBaremetalTest, cls).tearDownClass()
+            super(BaseBaremetalTest, cls).resource_cleanup()
 
     @classmethod
     @creates('chassis')
diff --git a/tempest/api/baremetal/test_api_discovery.py b/tempest/api/baremetal/admin/test_api_discovery.py
similarity index 78%
rename from tempest/api/baremetal/test_api_discovery.py
rename to tempest/api/baremetal/admin/test_api_discovery.py
index bee10b9..09788f2 100644
--- a/tempest/api/baremetal/test_api_discovery.py
+++ b/tempest/api/baremetal/admin/test_api_discovery.py
@@ -10,7 +10,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.baremetal import base
+from tempest.api.baremetal.admin import base
 from tempest import test
 
 
@@ -19,10 +19,8 @@
 
     @test.attr(type='smoke')
     def test_api_versions(self):
-        resp, descr = self.client.get_api_description()
-        self.assertEqual('200', resp['status'])
+        _, descr = self.client.get_api_description()
         expected_versions = ('v1',)
-
         versions = [version['id'] for version in descr['versions']]
 
         for v in expected_versions:
@@ -30,16 +28,13 @@
 
     @test.attr(type='smoke')
     def test_default_version(self):
-        resp, descr = self.client.get_api_description()
-        self.assertEqual('200', resp['status'])
+        _, descr = self.client.get_api_description()
         default_version = descr['default_version']
-
         self.assertEqual(default_version['id'], 'v1')
 
     @test.attr(type='smoke')
     def test_version_1_resources(self):
-        resp, descr = self.client.get_version_description(version='v1')
-        self.assertEqual('200', resp['status'])
+        _, descr = self.client.get_version_description(version='v1')
         expected_resources = ('nodes', 'chassis',
                               'ports', 'links', 'media_types')
 
diff --git a/tempest/api/baremetal/test_chassis.py b/tempest/api/baremetal/admin/test_chassis.py
similarity index 71%
rename from tempest/api/baremetal/test_chassis.py
rename to tempest/api/baremetal/admin/test_chassis.py
index 4ab86c2..6f83412 100644
--- a/tempest/api/baremetal/test_chassis.py
+++ b/tempest/api/baremetal/admin/test_chassis.py
@@ -11,7 +11,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.baremetal import base
+from tempest.api.baremetal.admin import base
 from tempest.common.utils import data_utils
 from tempest import exceptions as exc
 from tempest import test
@@ -21,8 +21,8 @@
     """Tests for chassis."""
 
     @classmethod
-    def setUpClass(cls):
-        super(TestChassis, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestChassis, cls).resource_setup()
         _, cls.chassis = cls.create_chassis()
 
     def _assertExpected(self, expected, actual):
@@ -35,8 +35,7 @@
     @test.attr(type='smoke')
     def test_create_chassis(self):
         descr = data_utils.rand_name('test-chassis-')
-        resp, chassis = self.create_chassis(description=descr)
-        self.assertEqual('201', resp['status'])
+        _, chassis = self.create_chassis(description=descr)
         self.assertEqual(chassis['description'], descr)
 
     @test.attr(type='smoke')
@@ -44,40 +43,41 @@
         # Use a unicode string for testing:
         # 'We ♡ OpenStack in Ukraine'
         descr = u'В Україні ♡ OpenStack!'
-        resp, chassis = self.create_chassis(description=descr)
-        self.assertEqual('201', resp['status'])
+        _, chassis = self.create_chassis(description=descr)
         self.assertEqual(chassis['description'], descr)
 
     @test.attr(type='smoke')
     def test_show_chassis(self):
-        resp, chassis = self.client.show_chassis(self.chassis['uuid'])
-        self.assertEqual('200', resp['status'])
+        _, chassis = self.client.show_chassis(self.chassis['uuid'])
         self._assertExpected(self.chassis, chassis)
 
     @test.attr(type="smoke")
     def test_list_chassis(self):
-        resp, body = self.client.list_chassis()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_chassis()
         self.assertIn(self.chassis['uuid'],
                       [i['uuid'] for i in body['chassis']])
 
     @test.attr(type='smoke')
     def test_delete_chassis(self):
-        resp, body = self.create_chassis()
+        _, body = self.create_chassis()
         uuid = body['uuid']
 
-        resp = self.delete_chassis(uuid)
-        self.assertEqual('204', resp['status'])
+        self.delete_chassis(uuid)
         self.assertRaises(exc.NotFound, self.client.show_chassis, uuid)
 
     @test.attr(type='smoke')
     def test_update_chassis(self):
-        resp, body = self.create_chassis()
+        _, body = self.create_chassis()
         uuid = body['uuid']
 
         new_description = data_utils.rand_name('new-description-')
-        resp, body = (self.client.update_chassis(uuid,
-                      description=new_description))
-        self.assertEqual('200', resp['status'])
-        resp, chassis = self.client.show_chassis(uuid)
+        _, body = (self.client.update_chassis(uuid,
+                   description=new_description))
+        _, chassis = self.client.show_chassis(uuid)
         self.assertEqual(chassis['description'], new_description)
+
+    @test.attr(type='smoke')
+    def test_chassis_node_list(self):
+        _, node = self.create_node(self.chassis['uuid'])
+        _, body = self.client.list_chassis_nodes(self.chassis['uuid'])
+        self.assertIn(node['uuid'], [n['uuid'] for n in body['nodes']])
diff --git a/tempest/api/baremetal/test_drivers.py b/tempest/api/baremetal/admin/test_drivers.py
similarity index 76%
rename from tempest/api/baremetal/test_drivers.py
rename to tempest/api/baremetal/admin/test_drivers.py
index 852b6ab..9d99295 100644
--- a/tempest/api/baremetal/test_drivers.py
+++ b/tempest/api/baremetal/admin/test_drivers.py
@@ -12,7 +12,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.baremetal import base
+from tempest.api.baremetal.admin import base
 from tempest import config
 from tempest import test
 
@@ -22,20 +22,17 @@
 class TestDrivers(base.BaseBaremetalTest):
     """Tests for drivers."""
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(TestDrivers, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestDrivers, cls).resource_setup()
         cls.driver_name = CONF.baremetal.driver
 
     @test.attr(type="smoke")
     def test_list_drivers(self):
-        resp, drivers = self.client.list_drivers()
-        self.assertEqual('200', resp['status'])
+        _, drivers = self.client.list_drivers()
         self.assertIn(self.driver_name,
                       [d['name'] for d in drivers['drivers']])
 
     @test.attr(type="smoke")
     def test_show_driver(self):
-        resp, driver = self.client.show_driver(self.driver_name)
-        self.assertEqual('200', resp['status'])
+        _, driver = self.client.show_driver(self.driver_name)
         self.assertEqual(self.driver_name, driver['name'])
diff --git a/tempest/api/baremetal/admin/test_nodes.py b/tempest/api/baremetal/admin/test_nodes.py
new file mode 100644
index 0000000..8ccd36b
--- /dev/null
+++ b/tempest/api/baremetal/admin/test_nodes.py
@@ -0,0 +1,170 @@
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import six
+
+from tempest.api.baremetal.admin import base
+from tempest.common.utils import data_utils
+from tempest.common import waiters
+from tempest import exceptions as exc
+from tempest import test
+
+
+class TestNodes(base.BaseBaremetalTest):
+    '''Tests for baremetal nodes.'''
+
+    def setUp(self):
+        super(TestNodes, self).setUp()
+
+        _, self.chassis = self.create_chassis()
+        _, self.node = self.create_node(self.chassis['uuid'])
+
+    def _assertExpected(self, expected, actual):
+        # Check if not expected keys/values exists in actual response body
+        for key, value in six.iteritems(expected):
+            if key not in ('created_at', 'updated_at'):
+                self.assertIn(key, actual)
+                self.assertEqual(value, actual[key])
+
+    def _associate_node_with_instance(self):
+        self.client.set_node_power_state(self.node['uuid'], 'power off')
+        waiters.wait_for_bm_node_status(self.client, self.node['uuid'],
+                                        'power_state', 'power off')
+        instance_uuid = data_utils.rand_uuid()
+        self.client.update_node(self.node['uuid'],
+                                instance_uuid=instance_uuid)
+        self.addCleanup(self.client.update_node,
+                        uuid=self.node['uuid'], instance_uuid=None)
+        return instance_uuid
+
+    @test.attr(type='smoke')
+    def test_create_node(self):
+        params = {'cpu_arch': 'x86_64',
+                  'cpu_num': '12',
+                  'storage': '10240',
+                  'memory': '1024'}
+
+        _, body = self.create_node(self.chassis['uuid'], **params)
+        self._assertExpected(params, body['properties'])
+
+    @test.attr(type='smoke')
+    def test_delete_node(self):
+        _, node = self.create_node(self.chassis['uuid'])
+
+        self.delete_node(node['uuid'])
+
+        self.assertRaises(exc.NotFound, self.client.show_node, node['uuid'])
+
+    @test.attr(type='smoke')
+    def test_show_node(self):
+        _, loaded_node = self.client.show_node(self.node['uuid'])
+        self._assertExpected(self.node, loaded_node)
+
+    @test.attr(type='smoke')
+    def test_list_nodes(self):
+        _, body = self.client.list_nodes()
+        self.assertIn(self.node['uuid'],
+                      [i['uuid'] for i in body['nodes']])
+
+    @test.attr(type='smoke')
+    def test_list_nodes_association(self):
+        _, body = self.client.list_nodes(associated=True)
+        self.assertNotIn(self.node['uuid'],
+                         [n['uuid'] for n in body['nodes']])
+
+        self._associate_node_with_instance()
+
+        _, body = self.client.list_nodes(associated=True)
+        self.assertIn(self.node['uuid'], [n['uuid'] for n in body['nodes']])
+
+        _, body = self.client.list_nodes(associated=False)
+        self.assertNotIn(self.node['uuid'], [n['uuid'] for n in body['nodes']])
+
+    @test.attr(type='smoke')
+    def test_node_port_list(self):
+        _, port = self.create_port(self.node['uuid'],
+                                   data_utils.rand_mac_address())
+        _, body = self.client.list_node_ports(self.node['uuid'])
+        self.assertIn(port['uuid'],
+                      [p['uuid'] for p in body['ports']])
+
+    @test.attr(type='smoke')
+    def test_node_port_list_no_ports(self):
+        _, node = self.create_node(self.chassis['uuid'])
+        _, body = self.client.list_node_ports(node['uuid'])
+        self.assertEmpty(body['ports'])
+
+    @test.attr(type='smoke')
+    def test_update_node(self):
+        props = {'cpu_arch': 'x86_64',
+                 'cpu_num': '12',
+                 'storage': '10',
+                 'memory': '128'}
+
+        _, node = self.create_node(self.chassis['uuid'], **props)
+
+        new_p = {'cpu_arch': 'x86',
+                 'cpu_num': '1',
+                 'storage': '10000',
+                 'memory': '12300'}
+
+        _, body = self.client.update_node(node['uuid'], properties=new_p)
+        _, node = self.client.show_node(node['uuid'])
+        self._assertExpected(new_p, node['properties'])
+
+    @test.attr(type='smoke')
+    def test_validate_driver_interface(self):
+        _, body = self.client.validate_driver_interface(self.node['uuid'])
+        core_interfaces = ['power', 'deploy']
+        for interface in core_interfaces:
+            self.assertIn(interface, body)
+
+    @test.attr(type='smoke')
+    def test_set_node_boot_device(self):
+        body = self.client.set_node_boot_device(self.node['uuid'], 'pxe')
+        # No content
+        self.assertEqual('', body)
+
+    @test.attr(type='smoke')
+    def test_get_node_boot_device(self):
+        body = self.client.get_node_boot_device(self.node['uuid'])
+        self.assertIn('boot_device', body)
+        self.assertIn('persistent', body)
+        self.assertTrue(isinstance(body['boot_device'], six.string_types))
+        self.assertTrue(isinstance(body['persistent'], bool))
+
+    @test.attr(type='smoke')
+    def test_get_node_supported_boot_devices(self):
+        body = self.client.get_node_supported_boot_devices(self.node['uuid'])
+        self.assertIn('supported_boot_devices', body)
+        self.assertTrue(isinstance(body['supported_boot_devices'], list))
+
+    @test.attr(type='smoke')
+    def test_get_console(self):
+        _, body = self.client.get_console(self.node['uuid'])
+        con_info = ['console_enabled', 'console_info']
+        for key in con_info:
+            self.assertIn(key, body)
+
+    @test.attr(type='smoke')
+    def test_set_console_mode(self):
+        self.client.set_console_mode(self.node['uuid'], True)
+
+        _, body = self.client.get_console(self.node['uuid'])
+        self.assertEqual(True, body['console_enabled'])
+
+    @test.attr(type='smoke')
+    def test_get_node_by_instance_uuid(self):
+        instance_uuid = self._associate_node_with_instance()
+        _, body = self.client.show_node_by_instance_uuid(instance_uuid)
+        self.assertEqual(len(body['nodes']), 1)
+        self.assertIn(self.node['uuid'], [n['uuid'] for n in body['nodes']])
diff --git a/tempest/api/baremetal/test_nodestates.py b/tempest/api/baremetal/admin/test_nodestates.py
similarity index 71%
rename from tempest/api/baremetal/test_nodestates.py
rename to tempest/api/baremetal/admin/test_nodestates.py
index 3044bc6..f4f8054 100644
--- a/tempest/api/baremetal/test_nodestates.py
+++ b/tempest/api/baremetal/admin/test_nodestates.py
@@ -12,7 +12,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.baremetal import base
+from tempest.api.baremetal.admin import base
 from tempest import exceptions
 from tempest.openstack.common import timeutils
 from tempest import test
@@ -22,10 +22,10 @@
     """Tests for baremetal NodeStates."""
 
     @classmethod
-    def setUpClass(cls):
-        super(TestNodeStates, cls).setUpClass()
-        resp, cls.chassis = cls.create_chassis()
-        resp, cls.node = cls.create_node(cls.chassis['uuid'])
+    def resource_setup(cls):
+        super(TestNodeStates, cls).resource_setup()
+        _, cls.chassis = cls.create_chassis()
+        _, cls.node = cls.create_node(cls.chassis['uuid'])
 
     def _validate_power_state(self, node_uuid, power_state):
         # Validate that power state is set within timeout
@@ -34,8 +34,7 @@
         start = timeutils.utcnow()
         while timeutils.delta_seconds(
                 start, timeutils.utcnow()) < self.power_timeout:
-            resp, node = self.client.show_node(node_uuid)
-            self.assertEqual(200, resp.status)
+            _, node = self.client.show_node(node_uuid)
             if node['power_state'] == power_state:
                 return
         message = ('Failed to set power state within '
@@ -44,20 +43,16 @@
 
     @test.attr(type='smoke')
     def test_list_nodestates(self):
-        resp, nodestates = self.client.list_nodestates(self.node['uuid'])
-        self.assertEqual('200', resp['status'])
+        _, nodestates = self.client.list_nodestates(self.node['uuid'])
         for key in nodestates:
             self.assertEqual(nodestates[key], self.node[key])
 
     @test.attr(type='smoke')
     def test_set_node_power_state(self):
-        resp, node = self.create_node(self.chassis['uuid'])
-        self.assertEqual('201', resp['status'])
+        _, node = self.create_node(self.chassis['uuid'])
         states = ["power on", "rebooting", "power off"]
         for state in states:
             # Set power state
-            resp, _ = self.client.set_node_power_state(node['uuid'],
-                                                       state)
-            self.assertEqual('202', resp['status'])
+            self.client.set_node_power_state(node['uuid'], state)
             # Check power state after state is set
             self._validate_power_state(node['uuid'], state)
diff --git a/tempest/api/baremetal/test_ports.py b/tempest/api/baremetal/admin/test_ports.py
similarity index 67%
rename from tempest/api/baremetal/test_ports.py
rename to tempest/api/baremetal/admin/test_ports.py
index 4ac7e29..b3f9b7f 100644
--- a/tempest/api/baremetal/test_ports.py
+++ b/tempest/api/baremetal/admin/test_ports.py
@@ -10,7 +10,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.baremetal import base
+from tempest.api.baremetal.admin import base
 from tempest.common.utils import data_utils
 from tempest import exceptions as exc
 from tempest import test
@@ -39,12 +39,10 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address)
 
-        resp, body = self.client.show_port(port['uuid'])
+        _, body = self.client.show_port(port['uuid'])
 
-        self.assertEqual(200, resp.status)
         self._assertExpected(port, body)
 
     @test.attr(type='smoke')
@@ -53,12 +51,10 @@
         address = data_utils.rand_mac_address()
         uuid = data_utils.rand_uuid()
 
-        resp, port = self.create_port(node_id=node_id,
-                                      address=address, uuid=uuid)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id,
+                                   address=address, uuid=uuid)
 
-        resp, body = self.client.show_port(uuid)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(uuid)
         self._assertExpected(port, body)
 
     @test.attr(type='smoke')
@@ -67,44 +63,37 @@
         address = data_utils.rand_mac_address()
         extra = {'key': 'value'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
 
-        resp, body = self.client.show_port(port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(port['uuid'])
         self._assertExpected(port, body)
 
     @test.attr(type='smoke')
     def test_delete_port(self):
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address)
 
-        resp = self.delete_port(port['uuid'])
+        self.delete_port(port['uuid'])
 
-        self.assertEqual(204, resp.status)
         self.assertRaises(exc.NotFound, self.client.show_port, port['uuid'])
 
     @test.attr(type='smoke')
     def test_show_port(self):
-        resp, port = self.client.show_port(self.port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, port = self.client.show_port(self.port['uuid'])
         self._assertExpected(self.port, port)
 
     @test.attr(type='smoke')
     def test_show_port_with_links(self):
-        resp, port = self.client.show_port(self.port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, port = self.client.show_port(self.port['uuid'])
         self.assertIn('links', port.keys())
         self.assertEqual(2, len(port['links']))
         self.assertIn(port['uuid'], port['links'][0]['href'])
 
     @test.attr(type='smoke')
     def test_list_ports(self):
-        resp, body = self.client.list_ports()
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_ports()
         self.assertIn(self.port['uuid'],
                       [i['uuid'] for i in body['ports']])
         # Verify self links.
@@ -114,8 +103,7 @@
 
     @test.attr(type='smoke')
     def test_list_with_limit(self):
-        resp, body = self.client.list_ports(limit=3)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_ports(limit=3)
 
         next_marker = body['ports'][-1]['uuid']
         self.assertIn(next_marker, body['next'])
@@ -128,8 +116,7 @@
                              address=data_utils.rand_mac_address())
             [1]['uuid'] for i in range(0, 5)]
 
-        resp, body = self.client.list_ports_detail()
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_ports_detail()
 
         ports_dict = dict((port['uuid'], port) for port in body['ports']
                           if port['uuid'] in uuids)
@@ -153,8 +140,7 @@
             self.create_port(node_id=node_id,
                              address=data_utils.rand_mac_address())
 
-        resp, body = self.client.list_ports_detail(address=address)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_ports_detail(address=address)
         self.assertEqual(1, len(body['ports']))
         self.assertEqual(address, body['ports'][0]['address'])
 
@@ -164,9 +150,8 @@
         address = data_utils.rand_mac_address()
         extra = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
 
         new_address = data_utils.rand_mac_address()
         new_extra = {'key1': 'new-value1', 'key2': 'new-value2',
@@ -185,11 +170,9 @@
                   'op': 'replace',
                   'value': new_extra['key3']}]
 
-        resp, _ = self.client.update_port(port['uuid'], patch)
-        self.assertEqual(200, resp.status)
+        self.client.update_port(port['uuid'], patch)
 
-        resp, body = self.client.show_port(port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(port['uuid'])
         self.assertEqual(new_address, body['address'])
         self.assertEqual(new_extra, body['extra'])
 
@@ -199,26 +182,21 @@
         address = data_utils.rand_mac_address()
         extra = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
 
         # Removing one item from the collection
-        resp, _ = self.client.update_port(port['uuid'],
-                                          [{'path': '/extra/key2',
-                                           'op': 'remove'}])
-        self.assertEqual(200, resp.status)
+        self.client.update_port(port['uuid'],
+                                [{'path': '/extra/key2',
+                                 'op': 'remove'}])
         extra.pop('key2')
-        resp, body = self.client.show_port(port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(port['uuid'])
         self.assertEqual(extra, body['extra'])
 
         # Removing the collection
-        resp, _ = self.client.update_port(port['uuid'], [{'path': '/extra',
-                                                         'op': 'remove'}])
-        self.assertEqual(200, resp.status)
-        resp, body = self.client.show_port(port['uuid'])
-        self.assertEqual(200, resp.status)
+        self.client.update_port(port['uuid'], [{'path': '/extra',
+                                               'op': 'remove'}])
+        _, body = self.client.show_port(port['uuid'])
         self.assertEqual({}, body['extra'])
 
         # Assert nothing else was changed
@@ -230,8 +208,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address)
 
         extra = {'key1': 'value1', 'key2': 'value2'}
 
@@ -242,11 +219,9 @@
                   'op': 'add',
                   'value': extra['key2']}]
 
-        resp, _ = self.client.update_port(port['uuid'], patch)
-        self.assertEqual(200, resp.status)
+        self.client.update_port(port['uuid'], patch)
 
-        resp, body = self.client.show_port(port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(port['uuid'])
         self.assertEqual(extra, body['extra'])
 
     @test.attr(type='smoke')
@@ -255,9 +230,8 @@
         address = data_utils.rand_mac_address()
         extra = {'key1': 'value1', 'key2': 'value2'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual(201, resp.status)
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
 
         new_address = data_utils.rand_mac_address()
         new_extra = {'key1': 'new-value1', 'key3': 'new-value3'}
@@ -274,10 +248,8 @@
                   'op': 'add',
                   'value': new_extra['key3']}]
 
-        resp, _ = self.client.update_port(port['uuid'], patch)
-        self.assertEqual(200, resp.status)
+        self.client.update_port(port['uuid'], patch)
 
-        resp, body = self.client.show_port(port['uuid'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(port['uuid'])
         self.assertEqual(new_address, body['address'])
         self.assertEqual(new_extra, body['extra'])
diff --git a/tempest/api/baremetal/test_ports_negative.py b/tempest/api/baremetal/admin/test_ports_negative.py
similarity index 83%
rename from tempest/api/baremetal/test_ports_negative.py
rename to tempest/api/baremetal/admin/test_ports_negative.py
index 3e77a5f..ead3799 100644
--- a/tempest/api/baremetal/test_ports_negative.py
+++ b/tempest/api/baremetal/admin/test_ports_negative.py
@@ -10,7 +10,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.baremetal import base
+from tempest.api.baremetal.admin import base
 from tempest.common.utils import data_utils
 from tempest import exceptions as exc
 from tempest import test
@@ -22,11 +22,8 @@
     def setUp(self):
         super(TestPortsNegative, self).setUp()
 
-        resp, self.chassis = self.create_chassis()
-        self.assertEqual('201', resp['status'])
-
-        resp, self.node = self.create_node(self.chassis['uuid'])
-        self.assertEqual('201', resp['status'])
+        _, self.chassis = self.create_chassis()
+        _, self.node = self.create_node(self.chassis['uuid'])
 
     @test.attr(type=['negative', 'smoke'])
     def test_create_port_malformed_mac(self):
@@ -137,13 +134,11 @@
         address = data_utils.rand_mac_address()
         extra = {'key': 'value'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
         port_id = port['uuid']
 
-        resp, body = self.client.delete_port(port_id)
-        self.assertEqual('204', resp['status'])
+        _, body = self.client.delete_port(port_id)
 
         patch = [{'path': '/extra/key',
                   'op': 'replace',
@@ -169,8 +164,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         self.assertRaises(exc.BadRequest, self.client.update_port, port_id,
@@ -182,8 +176,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         self.assertRaises(exc.BadRequest, self.client.update_port, port_id,
@@ -196,8 +189,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         self.assertRaises(exc.BadRequest, self.client.update_port, port_id,
@@ -209,8 +201,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         patch = [{'path': '/node_uuid',
@@ -225,11 +216,9 @@
         address1 = data_utils.rand_mac_address()
         address2 = data_utils.rand_mac_address()
 
-        resp, port1 = self.create_port(node_id=node_id, address=address1)
-        self.assertEqual('201', resp['status'])
+        _, port1 = self.create_port(node_id=node_id, address=address1)
 
-        resp, port2 = self.create_port(node_id=node_id, address=address2)
-        self.assertEqual('201', resp['status'])
+        _, port2 = self.create_port(node_id=node_id, address=address2)
         port_id = port2['uuid']
 
         patch = [{'path': '/address',
@@ -243,8 +232,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         patch = [{'path': '/node_uuid',
@@ -258,8 +246,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         patch = [{'path': '/address',
@@ -275,9 +262,8 @@
         address = data_utils.rand_mac_address()
         extra = {'key': 'value'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
         port_id = port['uuid']
 
         patch = [{'path': '/extra/key',
@@ -291,8 +277,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         patch = [{'path': '/extra',
@@ -307,8 +292,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         patch = [{'path': '/nonexistent', ' op': 'replace', 'value': 'value'}]
@@ -321,8 +305,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         self.assertRaises(exc.BadRequest, self.client.update_port, port_id,
@@ -333,8 +316,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         self.assertRaises(exc.BadRequest, self.client.update_port, port_id,
@@ -345,8 +327,7 @@
         node_id = self.node['uuid']
         address = data_utils.rand_mac_address()
 
-        resp, port = self.create_port(node_id=node_id, address=address)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address)
         port_id = port['uuid']
 
         self.assertRaises(exc.BadRequest, self.client.update_port, port_id,
@@ -366,9 +347,8 @@
         address = data_utils.rand_mac_address()
         extra = {'key1': 'value1', 'key2': 'value2'}
 
-        resp, port = self.create_port(node_id=node_id, address=address,
-                                      extra=extra)
-        self.assertEqual('201', resp['status'])
+        _, port = self.create_port(node_id=node_id, address=address,
+                                   extra=extra)
         port_id = port['uuid']
 
         new_address = data_utils.rand_mac_address()
@@ -393,7 +373,6 @@
                           patch)
 
         # patch should not be applied
-        resp, body = self.client.show_port(port_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.show_port(port_id)
         self.assertEqual(address, body['address'])
         self.assertEqual(extra, body['extra'])
diff --git a/tempest/api/baremetal/test_nodes.py b/tempest/api/baremetal/test_nodes.py
deleted file mode 100644
index 1572840..0000000
--- a/tempest/api/baremetal/test_nodes.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import six
-
-from tempest.api.baremetal import base
-from tempest import exceptions as exc
-from tempest import test
-
-
-class TestNodes(base.BaseBaremetalTest):
-    '''Tests for baremetal nodes.'''
-
-    def setUp(self):
-        super(TestNodes, self).setUp()
-
-        _, self.chassis = self.create_chassis()
-        _, self.node = self.create_node(self.chassis['uuid'])
-
-    def _assertExpected(self, expected, actual):
-        # Check if not expected keys/values exists in actual response body
-        for key, value in six.iteritems(expected):
-            if key not in ('created_at', 'updated_at'):
-                self.assertIn(key, actual)
-                self.assertEqual(value, actual[key])
-
-    @test.attr(type='smoke')
-    def test_create_node(self):
-        params = {'cpu_arch': 'x86_64',
-                  'cpu_num': '12',
-                  'storage': '10240',
-                  'memory': '1024'}
-
-        resp, body = self.create_node(self.chassis['uuid'], **params)
-        self.assertEqual('201', resp['status'])
-        self._assertExpected(params, body['properties'])
-
-    @test.attr(type='smoke')
-    def test_delete_node(self):
-        resp, node = self.create_node(self.chassis['uuid'])
-        self.assertEqual('201', resp['status'])
-
-        resp = self.delete_node(node['uuid'])
-
-        self.assertEqual(resp['status'], '204')
-        self.assertRaises(exc.NotFound, self.client.show_node, node['uuid'])
-
-    @test.attr(type='smoke')
-    def test_show_node(self):
-        resp, loaded_node = self.client.show_node(self.node['uuid'])
-        self.assertEqual('200', resp['status'])
-        self._assertExpected(self.node, loaded_node)
-
-    @test.attr(type='smoke')
-    def test_list_nodes(self):
-        resp, body = self.client.list_nodes()
-        self.assertEqual('200', resp['status'])
-        self.assertIn(self.node['uuid'],
-                      [i['uuid'] for i in body['nodes']])
-
-    @test.attr(type='smoke')
-    def test_update_node(self):
-        props = {'cpu_arch': 'x86_64',
-                 'cpu_num': '12',
-                 'storage': '10',
-                 'memory': '128'}
-
-        resp, node = self.create_node(self.chassis['uuid'], **props)
-        self.assertEqual('201', resp['status'])
-
-        new_p = {'cpu_arch': 'x86',
-                 'cpu_num': '1',
-                 'storage': '10000',
-                 'memory': '12300'}
-
-        resp, body = self.client.update_node(node['uuid'], properties=new_p)
-        self.assertEqual('200', resp['status'])
-        resp, node = self.client.show_node(node['uuid'])
-        self.assertEqual('200', resp['status'])
-        self._assertExpected(new_p, node['properties'])
-
-    @test.attr(type='smoke')
-    def test_validate_driver_interface(self):
-        resp, body = self.client.validate_driver_interface(self.node['uuid'])
-        self.assertEqual('200', resp['status'])
-        core_interfaces = ['power', 'deploy']
-        for interface in core_interfaces:
-            self.assertIn(interface, body)
diff --git a/tempest/api/compute/admin/test_agents.py b/tempest/api/compute/admin/test_agents.py
index 4808601..3bdcfd6 100644
--- a/tempest/api/compute/admin/test_agents.py
+++ b/tempest/api/compute/admin/test_agents.py
@@ -27,8 +27,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AgentsAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AgentsAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.agents_client
 
     def setUp(self):
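
The hunk above shows the recurring pattern in this series: the unittest setUpClass/tearDownClass hooks are replaced by Tempest's resource_setup/resource_cleanup class methods, with the super() calls updated to match. A minimal sketch of the shape most of the following files take (the class name here is illustrative)::

    from tempest.api.compute import base


    class ExampleAdminTestJSON(base.BaseV2ComputeAdminTest):

        @classmethod
        def resource_setup(cls):
            # Replaces setUpClass: shared clients and resources are created
            # once per class through the base class hook.
            super(ExampleAdminTestJSON, cls).resource_setup()
            cls.client = cls.os_adm.agents_client

        @classmethod
        def resource_cleanup(cls):
            # Replaces tearDownClass: release class-level resources, then
            # delegate to the base class.
            super(ExampleAdminTestJSON, cls).resource_cleanup()
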
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/api/compute/admin/test_aggregates.py
index c2376c9..f33089c 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/api/compute/admin/test_aggregates.py
@@ -29,8 +29,8 @@
     _host_key = 'OS-EXT-SRV-ATTR:host'
 
     @classmethod
-    def setUpClass(cls):
-        super(AggregatesAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AggregatesAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.aggregates_client
         cls.aggregate_name_prefix = 'test_aggregate_'
         cls.az_name_prefix = 'test_az_'
@@ -145,7 +145,7 @@
         self.assertEqual(200, resp.status)
         self.assertIn((aggregate_id, new_aggregate_name, new_az_name),
                       map(lambda x:
-                         (x['id'], x['name'], x['availability_zone']),
+                          (x['id'], x['name'], x['availability_zone']),
                           aggregates))
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index 690f2ab..ef6752b 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -27,8 +27,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AggregatesAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AggregatesAdminNegativeTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.aggregates_client
         cls.user_client = cls.aggregates_client
         cls.aggregate_name_prefix = 'test_aggregate_'
diff --git a/tempest/api/compute/admin/test_availability_zone.py b/tempest/api/compute/admin/test_availability_zone.py
index 3a6de36..0a040d7 100644
--- a/tempest/api/compute/admin/test_availability_zone.py
+++ b/tempest/api/compute/admin/test_availability_zone.py
@@ -24,8 +24,8 @@
     _api_version = 3
 
     @classmethod
-    def setUpClass(cls):
-        super(AZAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(AZAdminV3Test, cls).resource_setup()
         cls.client = cls.availability_zone_admin_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/admin/test_availability_zone_negative.py b/tempest/api/compute/admin/test_availability_zone_negative.py
index ce97491..ea157b3 100644
--- a/tempest/api/compute/admin/test_availability_zone_negative.py
+++ b/tempest/api/compute/admin/test_availability_zone_negative.py
@@ -24,8 +24,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AZAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AZAdminNegativeTestJSON, cls).resource_setup()
         cls.non_adm_client = cls.availability_zone_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/compute/admin/test_fixed_ips.py b/tempest/api/compute/admin/test_fixed_ips.py
index 939f1a1..e7f269d 100644
--- a/tempest/api/compute/admin/test_fixed_ips.py
+++ b/tempest/api/compute/admin/test_fixed_ips.py
@@ -23,8 +23,8 @@
 class FixedIPsTestJson(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(FixedIPsTestJson, cls).setUpClass()
+    def resource_setup(cls):
+        super(FixedIPsTestJson, cls).resource_setup()
         if CONF.service_available.neutron:
             msg = ("%s skipped as neutron is available" % cls.__name__)
             raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_fixed_ips_negative.py b/tempest/api/compute/admin/test_fixed_ips_negative.py
index 1caa246..90be820 100644
--- a/tempest/api/compute/admin/test_fixed_ips_negative.py
+++ b/tempest/api/compute/admin/test_fixed_ips_negative.py
@@ -23,8 +23,8 @@
 class FixedIPsNegativeTestJson(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(FixedIPsNegativeTestJson, cls).setUpClass()
+    def resource_setup(cls):
+        super(FixedIPsNegativeTestJson, cls).resource_setup()
         if CONF.service_available.neutron:
             msg = ("%s skipped as neutron is available" % cls.__name__)
             raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors.py b/tempest/api/compute/admin/test_flavors.py
index 18866e5..d365f3a 100644
--- a/tempest/api/compute/admin/test_flavors.py
+++ b/tempest/api/compute/admin/test_flavors.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAdminTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
             msg = "OS-FLV-EXT-DATA extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index f2554ea..176a134 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -26,8 +26,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAccessTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAccessTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
             msg = "OS-FLV-EXT-DATA extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors_access_negative.py b/tempest/api/compute/admin/test_flavors_access_negative.py
index b636ccd..9cc2a92 100644
--- a/tempest/api/compute/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/admin/test_flavors_access_negative.py
@@ -29,8 +29,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAccessNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAccessNegativeTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
             msg = "OS-FLV-EXT-DATA extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs.py b/tempest/api/compute/admin/test_flavors_extra_specs.py
index 56daf96..c05abe2 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs.py
@@ -27,8 +27,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsExtraSpecsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsExtraSpecsTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
             msg = "OS-FLV-EXT-DATA extension not enabled."
             raise cls.skipException(msg)
@@ -51,10 +51,10 @@
                                                     swap=swap, rxtx=rxtx)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         resp, body = cls.client.delete_flavor(cls.flavor['id'])
         cls.client.wait_for_resource_deletion(cls.flavor['id'])
-        super(FlavorsExtraSpecsTestJSON, cls).tearDownClass()
+        super(FlavorsExtraSpecsTestJSON, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_flavor_set_get_update_show_unset_keys(self):
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
index 1e5695f..30adf73 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsExtraSpecsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsExtraSpecsNegativeTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
             msg = "OS-FLV-EXT-DATA extension not enabled."
             raise cls.skipException(msg)
@@ -52,10 +52,10 @@
                                                     swap=swap, rxtx=rxtx)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         resp, body = cls.client.delete_flavor(cls.flavor['id'])
         cls.client.wait_for_resource_deletion(cls.flavor['id'])
-        super(FlavorsExtraSpecsNegativeTestJSON, cls).tearDownClass()
+        super(FlavorsExtraSpecsNegativeTestJSON, cls).resource_cleanup()
 
     @test.attr(type=['negative', 'gate'])
     def test_flavor_non_admin_set_keys(self):
diff --git a/tempest/api/compute/admin/test_flavors_negative.py b/tempest/api/compute/admin/test_flavors_negative.py
index 9e4412f..3389aee 100644
--- a/tempest/api/compute/admin/test_flavors_negative.py
+++ b/tempest/api/compute/admin/test_flavors_negative.py
@@ -16,6 +16,7 @@
 import uuid
 
 from tempest.api.compute import base
+from tempest.api_schema.request.compute.v2 import flavors
 from tempest.common.utils import data_utils
 from tempest import exceptions
 from tempest import test
@@ -30,8 +31,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAdminNegativeTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
             msg = "OS-FLV-EXT-DATA extension not enabled."
             raise cls.skipException(msg)
@@ -106,4 +107,4 @@
                                    test.NegativeAutoTest):
     _interface = 'json'
     _service = 'compute'
-    _schema_file = 'compute/admin/flavor_create.json'
+    _schema = flavors.flavor_create
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
index 16c2810..c1263ea 100644
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ b/tempest/api/compute/admin/test_floating_ips_bulk.py
@@ -31,9 +31,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(FloatingIPsBulkAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPsBulkAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.floating_ips_client
         cls.ip_range = CONF.compute.floating_ip_range
         cls.verify_unallocated_floating_ip_range(cls.ip_range)
diff --git a/tempest/api/compute/admin/test_hosts.py b/tempest/api/compute/admin/test_hosts.py
index e612566..bcae492 100644
--- a/tempest/api/compute/admin/test_hosts.py
+++ b/tempest/api/compute/admin/test_hosts.py
@@ -24,8 +24,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HostsAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(HostsAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.hosts_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/admin/test_hosts_negative.py b/tempest/api/compute/admin/test_hosts_negative.py
index 0f26e84..4111aba 100644
--- a/tempest/api/compute/admin/test_hosts_negative.py
+++ b/tempest/api/compute/admin/test_hosts_negative.py
@@ -25,8 +25,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HostsAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(HostsAdminNegativeTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.hosts_client
         cls.non_admin_client = cls.os.hosts_client
 
diff --git a/tempest/api/compute/admin/test_hypervisor.py b/tempest/api/compute/admin/test_hypervisor.py
index 85b26a1..c51d0a5 100644
--- a/tempest/api/compute/admin/test_hypervisor.py
+++ b/tempest/api/compute/admin/test_hypervisor.py
@@ -24,8 +24,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HypervisorAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(HypervisorAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.hypervisor_client
 
     def _list_hypervisors(self):
diff --git a/tempest/api/compute/admin/test_hypervisor_negative.py b/tempest/api/compute/admin/test_hypervisor_negative.py
index 4ba8d30..d3804e8 100644
--- a/tempest/api/compute/admin/test_hypervisor_negative.py
+++ b/tempest/api/compute/admin/test_hypervisor_negative.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HypervisorAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(HypervisorAdminNegativeTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.hypervisor_client
         cls.non_adm_client = cls.hypervisor_client
 
diff --git a/tempest/api/compute/admin/test_instance_usage_audit_log.py b/tempest/api/compute/admin/test_instance_usage_audit_log.py
index 055a177..91f0b02 100644
--- a/tempest/api/compute/admin/test_instance_usage_audit_log.py
+++ b/tempest/api/compute/admin/test_instance_usage_audit_log.py
@@ -23,8 +23,8 @@
 class InstanceUsageAuditLogTestJSON(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceUsageAuditLogTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceUsageAuditLogTestJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.instance_usages_audit_log_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/admin/test_instance_usage_audit_log_negative.py b/tempest/api/compute/admin/test_instance_usage_audit_log_negative.py
index 6a5fc96..1af340d 100644
--- a/tempest/api/compute/admin/test_instance_usage_audit_log_negative.py
+++ b/tempest/api/compute/admin/test_instance_usage_audit_log_negative.py
@@ -24,8 +24,8 @@
 class InstanceUsageAuditLogNegativeTestJSON(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceUsageAuditLogNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceUsageAuditLogNegativeTestJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.instance_usages_audit_log_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index 514f1fa..7ba05ef 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -24,8 +24,8 @@
 class MigrationsAdminTest(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(MigrationsAdminTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(MigrationsAdminTest, cls).resource_setup()
         cls.client = cls.os_adm.migrations_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/admin/test_networks.py b/tempest/api/compute/admin/test_networks.py
new file mode 100644
index 0000000..75f3199
--- /dev/null
+++ b/tempest/api/compute/admin/test_networks.py
@@ -0,0 +1,52 @@
+# Copyright 2014 Hewlett-Packard Development Company, L.P.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.compute import base
+from tempest import config
+
+CONF = config.CONF
+
+
+class NetworksTest(base.BaseComputeAdminTest):
+    """
+    Tests Nova Networks API that usually requires admin privileges.
+    API docs:
+    http://developer.openstack.org/api-ref-compute-v2-ext.html#ext-os-networks
+    """
+
+    _api_version = 2
+
+    @classmethod
+    def resource_setup(cls):
+        super(NetworksTest, cls).resource_setup()
+        cls.client = cls.os_adm.networks_client
+
+    def test_get_network(self):
+        resp, networks = self.client.list_networks()
+        configured_network = [x for x in networks if x['label'] ==
+                              CONF.compute.fixed_network_name]
+        self.assertEqual(1, len(configured_network),
+                         "{0} networks with label {1}".format(
+                             len(configured_network),
+                             CONF.compute.fixed_network_name))
+        configured_network = configured_network[0]
+        _, network = self.client.get_network(configured_network['id'])
+        self.assertEqual(configured_network['label'], network['label'])
+
+    def test_list_all_networks(self):
+        _, networks = self.client.list_networks()
+        # Check the configured network is in the list
+        configured_network = CONF.compute.fixed_network_name
+        self.assertIn(configured_network, [x['label'] for x in networks])
diff --git a/tempest/api/compute/admin/test_quotas.py b/tempest/api/compute/admin/test_quotas.py
index d27d78b..701e1c2 100644
--- a/tempest/api/compute/admin/test_quotas.py
+++ b/tempest/api/compute/admin/test_quotas.py
@@ -34,8 +34,8 @@
         super(QuotasAdminTestJSON, self).setUp()
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasAdminTestJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.quotas_client
 
         # NOTE(afazekas): these test cases should always create and use a new
@@ -57,9 +57,9 @@
         resp, quota_set = self.adm_client.get_default_quota_set(
             self.demo_tenant_id)
         self.assertEqual(200, resp.status)
-        self.assertEqual(sorted(expected_quota_set),
-                         sorted(quota_set.keys()))
         self.assertEqual(quota_set['id'], self.demo_tenant_id)
+        for quota in expected_quota_set:
+            self.assertIn(quota, quota_set.keys())
 
     @test.attr(type='gate')
     def test_update_all_quota_resources_for_tenant(self):
@@ -79,10 +79,18 @@
             **new_quota_set)
 
         default_quota_set.pop('id')
+        # NOTE(PhilDay) The following is safe as we're not updating these
+        # two quota values yet.  Once the Nova change to add these is merged
+        # and the client updated to support them this can be removed
+        if 'server_groups' in default_quota_set:
+            default_quota_set.pop('server_groups')
+        if 'server_group_members' in default_quota_set:
+            default_quota_set.pop('server_group_members')
         self.addCleanup(self.adm_client.update_quota_set,
                         self.demo_tenant_id, **default_quota_set)
         self.assertEqual(200, resp.status)
-        self.assertEqual(new_quota_set, quota_set)
+        for quota in new_quota_set:
+            self.assertIn(quota, quota_set.keys())
 
     # TODO(afazekas): merge these test cases
     @test.attr(type='gate')
@@ -159,8 +167,8 @@
         super(QuotaClassesAdminTestJSON, self).setUp()
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotaClassesAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotaClassesAdminTestJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.quota_classes_client
 
     def _restore_default_quotas(self, original_defaults):
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 599b058..a9ed7ce 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -25,8 +25,8 @@
     force_tenant_isolation = True
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasAdminNegativeTestJSON, cls).resource_setup()
         cls.client = cls.os.quotas_client
         cls.adm_client = cls.os_adm.quotas_client
         cls.sg_client = cls.security_groups_client
@@ -44,7 +44,6 @@
 
 # TODO(afazekas): Add a dedicated tenant to the skipped quota tests
     # it can be moved into the setUpClass as well
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_create_server_when_cpu_quota_is_full(self):
         # Disallow server creation when tenant's vcpu quota is full
@@ -58,9 +57,9 @@
 
         self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
                         cores=default_vcpu_quota)
-        self.assertRaises(exceptions.Unauthorized, self.create_test_server)
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
+                          self.create_test_server)
 
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_create_server_when_memory_quota_is_full(self):
         # Disallow server creation when tenant's memory quota is full
@@ -74,9 +73,9 @@
 
         self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
                         ram=default_mem_quota)
-        self.assertRaises(exceptions.Unauthorized, self.create_test_server)
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
+                          self.create_test_server)
 
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_create_server_when_instances_quota_is_full(self):
         # Once instances quota limit is reached, disallow server creation
@@ -89,11 +88,13 @@
                                          instances=instances_quota)
         self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
                         instances=default_instances_quota)
-        self.assertRaises(exceptions.Unauthorized, self.create_test_server)
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
+                          self.create_test_server)
 
     @test.skip_because(bug="1186354",
                        condition=CONF.service_available.neutron)
     @test.attr(type='gate')
+    @test.services('network')
     def test_security_groups_exceed_limit(self):
         # Negative test: Creation Security Groups over limit should FAIL
 
@@ -120,6 +121,7 @@
     @test.skip_because(bug="1186354",
                        condition=CONF.service_available.neutron)
     @test.attr(type=['negative', 'gate'])
+    @test.services('network')
     def test_security_groups_rules_exceed_limit(self):
         # Negative test: Creation of Security Group Rules should FAIL
         # when we reach limit maxSecurityGroupRules
diff --git a/tempest/api/compute/admin/test_security_group_default_rules.py b/tempest/api/compute/admin/test_security_group_default_rules.py
index 07408a8..4c0bd47 100644
--- a/tempest/api/compute/admin/test_security_group_default_rules.py
+++ b/tempest/api/compute/admin/test_security_group_default_rules.py
@@ -30,11 +30,10 @@
     @testtools.skipIf(CONF.service_available.neutron,
                       "Skip as this functionality is not yet "
                       "implemented in Neutron. Related Bug#1311500")
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         # A network and a subnet will be created for these tests
         cls.set_network_resources(network=True, subnet=True)
-        super(SecurityGroupDefaultRulesTest, cls).setUpClass()
+        super(SecurityGroupDefaultRulesTest, cls).resource_setup()
         cls.adm_client = cls.os_adm.security_group_default_rules_client
 
     def _create_security_group_default_rules(self, ip_protocol='tcp',
@@ -55,7 +54,7 @@
     @test.attr(type='smoke')
     def test_create_delete_security_group_default_rules(self):
         # Create and delete Security Group default rule
-        ip_protocols = {'tcp', 'udp', 'icmp'}
+        ip_protocols = ['tcp', 'udp', 'icmp']
         for ip_protocol in ip_protocols:
             rule = self._create_security_group_default_rules(ip_protocol)
             # Delete Security Group default rule
diff --git a/tempest/api/compute/admin/test_security_groups.py b/tempest/api/compute/admin/test_security_groups.py
index 004ce8f..40ae236 100644
--- a/tempest/api/compute/admin/test_security_groups.py
+++ b/tempest/api/compute/admin/test_security_groups.py
@@ -26,8 +26,8 @@
 class SecurityGroupsTestAdminJSON(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(SecurityGroupsTestAdminJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(SecurityGroupsTestAdminJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.security_groups_client
         cls.client = cls.security_groups_client
 
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
index 49af645..47aaee3 100644
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -26,9 +26,8 @@
     _host_key = 'OS-EXT-SRV-ATTR:host'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ServersAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.servers_client
         cls.non_admin_client = cls.servers_client
         cls.flavors_client = cls.os_adm.flavors_client
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index f4d010e..9aa489c 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -17,6 +17,7 @@
 import testtools
 
 from tempest.api.compute import base
+from tempest.common import tempest_fixtures as fixtures
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import exceptions
@@ -32,8 +33,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ServersAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersAdminNegativeTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.servers_client
         cls.non_adm_client = cls.servers_client
         cls.flavors_client = cls.os_adm.flavors_client
@@ -54,11 +55,12 @@
             flavor_id = data_utils.rand_int_id(start=1000)
         return flavor_id
 
-    @test.skip_because(bug="1298131")
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize not available.')
     @test.attr(type=['negative', 'gate'])
     def test_resize_server_using_overlimit_ram(self):
+        # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
+        self.useFixture(fixtures.LockFixture('compute_quotas'))
         flavor_name = data_utils.rand_name("flavor-")
         flavor_id = self._get_unused_flavor_id()
         resp, quota_set = self.quotas_client.get_default_quota_set(
@@ -70,16 +72,17 @@
                                                              ram, vcpus, disk,
                                                              flavor_id)
         self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
-        self.assertRaises(exceptions.Unauthorized,
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
                           self.client.resize,
                           self.servers[0]['id'],
                           flavor_ref['id'])
 
-    @test.skip_because(bug="1298131")
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize not available.')
     @test.attr(type=['negative', 'gate'])
     def test_resize_server_using_overlimit_vcpus(self):
+        # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
+        self.useFixture(fixtures.LockFixture('compute_quotas'))
         flavor_name = data_utils.rand_name("flavor-")
         flavor_id = self._get_unused_flavor_id()
         ram = 512
@@ -91,7 +94,7 @@
                                                              ram, vcpus, disk,
                                                              flavor_id)
         self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
-        self.assertRaises(exceptions.Unauthorized,
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
                           self.client.resize,
                           self.servers[0]['id'],
                           flavor_ref['id'])
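
Two idioms recur in the quota negative tests above: serialising quota-touching tests with a lock fixture so they do not race with the os-quota-class-sets tests, and passing a tuple to assertRaises so that either exception the client may raise satisfies the test. A condensed sketch (the test name and body are illustrative)::

    from tempest.api.compute import base
    from tempest.common import tempest_fixtures as fixtures
    from tempest import exceptions
    from tempest import test


    class QuotaNegativeExample(base.BaseV2ComputeAdminTest):

        @test.attr(type=['negative', 'gate'])
        def test_create_server_when_quota_is_full(self):
            # Take the shared lock before manipulating tenant quotas.
            self.useFixture(fixtures.LockFixture('compute_quotas'))
            # Either exception type raised by the client passes the test.
            self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
                              self.create_test_server)
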
diff --git a/tempest/api/compute/admin/test_services.py b/tempest/api/compute/admin/test_services.py
index 2feb825..76153e7 100644
--- a/tempest/api/compute/admin/test_services.py
+++ b/tempest/api/compute/admin/test_services.py
@@ -25,8 +25,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ServicesAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServicesAdminTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.services_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/admin/test_services_negative.py b/tempest/api/compute/admin/test_services_negative.py
index c78d70d..5331097 100644
--- a/tempest/api/compute/admin/test_services_negative.py
+++ b/tempest/api/compute/admin/test_services_negative.py
@@ -24,8 +24,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ServicesAdminNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServicesAdminNegativeTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.services_client
         cls.non_admin_client = cls.services_client
 
diff --git a/tempest/api/compute/admin/test_simple_tenant_usage.py b/tempest/api/compute/admin/test_simple_tenant_usage.py
index f3a81d1..5d596ba 100644
--- a/tempest/api/compute/admin/test_simple_tenant_usage.py
+++ b/tempest/api/compute/admin/test_simple_tenant_usage.py
@@ -23,8 +23,8 @@
 class TenantUsagesTestJSON(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(TenantUsagesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(TenantUsagesTestJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.tenant_usages_client
         cls.client = cls.os.tenant_usages_client
         cls.tenant_id = cls.client.tenant_id
diff --git a/tempest/api/compute/admin/test_simple_tenant_usage_negative.py b/tempest/api/compute/admin/test_simple_tenant_usage_negative.py
index 1031b24..5e2c593 100644
--- a/tempest/api/compute/admin/test_simple_tenant_usage_negative.py
+++ b/tempest/api/compute/admin/test_simple_tenant_usage_negative.py
@@ -23,8 +23,8 @@
 class TenantUsagesNegativeTestJSON(base.BaseV2ComputeAdminTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(TenantUsagesNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(TenantUsagesNegativeTestJSON, cls).resource_setup()
         cls.adm_client = cls.os_adm.tenant_usages_client
         cls.client = cls.os.tenant_usages_client
         cls.identity_client = cls._get_identity_admin_client()
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index a3295eb..6c93d33 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -19,6 +19,7 @@
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import exceptions
+from tempest.openstack.common import excutils
 from tempest.openstack.common import log as logging
 import tempest.test
 
@@ -34,9 +35,12 @@
     force_tenant_isolation = False
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources()
-        super(BaseComputeTest, cls).setUpClass()
+        super(BaseComputeTest, cls).resource_setup()
+        if getattr(cls, '_interface', None) == 'xml' and cls._api_version == 2:
+            if not CONF.compute_feature_enabled.xml_api_v2:
+                raise cls.skipException('XML API is not enabled')
 
         # TODO(andreaf) WE should care also for the alt_manager here
         # but only once client lazy load in the manager is done
@@ -141,14 +145,19 @@
         for server in cls.servers:
             try:
                 cls.servers_client.delete_server(server['id'])
-            except Exception:
+            except exceptions.NotFound:
+                # Something else already cleaned up the server, nothing to be
+                # worried about
                 pass
+            except Exception:
+                LOG.exception('Deleting server %s failed' % server['id'])
 
         for server in cls.servers:
             try:
                 cls.servers_client.wait_for_server_termination(server['id'])
             except Exception:
-                pass
+                LOG.exception('Waiting for deletion of server %s failed'
+                              % server['id'])
 
     @classmethod
     def server_check_teardown(cls):
@@ -208,13 +217,12 @@
                               server_group_id)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.clear_images()
         cls.clear_servers()
         cls.clear_security_groups()
-        cls.clear_isolated_creds()
         cls.clear_server_groups()
-        super(BaseComputeTest, cls).tearDownClass()
+        super(BaseComputeTest, cls).resource_cleanup()
 
     @classmethod
     def create_test_server(cls, **kwargs):
@@ -240,15 +248,16 @@
                 try:
                     cls.servers_client.wait_for_server_status(
                         server['id'], kwargs['wait_until'])
-                except Exception as ex:
-                    if ('preserve_server_on_error' not in kwargs
-                        or kwargs['preserve_server_on_error'] is False):
-                        for server in servers:
-                            try:
-                                cls.servers_client.delete_server(server['id'])
-                            except Exception:
-                                pass
-                    raise ex
+                except Exception:
+                    with excutils.save_and_reraise_exception():
+                        if ('preserve_server_on_error' not in kwargs
+                            or kwargs['preserve_server_on_error'] is False):
+                            for server in servers:
+                                try:
+                                    cls.servers_client.delete_server(
+                                        server['id'])
+                                except Exception:
+                                    pass
 
         cls.servers.extend(servers)
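
The rewritten error path above uses oslo's save_and_reraise_exception context manager: best-effort cleanup runs inside the except block, and the caught exception is re-raised with its traceback intact when the block exits. A minimal illustration of the pattern (the helper name is a placeholder)::

    from tempest.openstack.common import excutils


    def wait_or_clean_up(servers_client, server_id, status):
        try:
            servers_client.wait_for_server_status(server_id, status)
        except Exception:
            # Delete the server on failure, then let the context manager
            # re-raise the original exception automatically.
            with excutils.save_and_reraise_exception():
                try:
                    servers_client.delete_server(server_id)
                except Exception:
                    pass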
 
@@ -268,10 +277,10 @@
         return resp, body
 
     @classmethod
-    def create_test_server_group(cls, name="", policy=[]):
+    def create_test_server_group(cls, name="", policy=None):
         if not name:
             name = data_utils.rand_name(cls.__name__ + "-Server-Group")
-        if not policy:
+        if policy is None:
             policy = ['affinity']
         resp, body = cls.servers_client.create_server_group(name, policy)
         cls.server_groups.append(body['id'])
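
The signature change above replaces a mutable default argument with None plus an in-body default: a list literal in a def line is created once and shared across every call. A small standalone illustration of the difference::

    def append_policy_buggy(policy=[]):
        # The same list object is reused on every call made without an
        # explicit argument, so appended values accumulate.
        policy.append('affinity')
        return policy


    def append_policy_fixed(policy=None):
        # A fresh list is created on each call when no argument is given.
        if policy is None:
            policy = []
        policy.append('affinity')
        return policy


    assert append_policy_buggy() == ['affinity']
    assert append_policy_buggy() == ['affinity', 'affinity']  # state leaked
    assert append_policy_fixed() == ['affinity']
    assert append_policy_fixed() == ['affinity']
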
@@ -379,8 +388,8 @@
     _interface = "json"
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseComputeAdminTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseComputeAdminTest, cls).resource_setup()
         if (CONF.compute.allow_tenant_isolation or
             cls.force_tenant_isolation is True):
             creds = cls.isolated_creds.get_admin_creds()
diff --git a/tempest/api/compute/flavors/test_flavors.py b/tempest/api/compute/flavors/test_flavors.py
index c1c2d05..7beef23 100644
--- a/tempest/api/compute/flavors/test_flavors.py
+++ b/tempest/api/compute/flavors/test_flavors.py
@@ -24,8 +24,8 @@
     _min_ram = 'min_ram'
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsV3Test, cls).resource_setup()
         cls.client = cls.flavors_client
 
     @test.attr(type='smoke')
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index 1638f2d..cae1ac4 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -13,8 +13,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-
 from tempest.api.compute import base
+from tempest.api_schema.request.compute.v2 import flavors
 from tempest import test
 
 
@@ -25,16 +25,16 @@
 class FlavorsListWithDetailsNegativeTestJSON(base.BaseV2ComputeTest,
                                              test.NegativeAutoTest):
     _service = 'compute'
-    _schema_file = 'compute/flavors/flavors_list.json'
+    _schema = flavors.flavor_list
 
 
 @test.SimpleNegativeAutoTest
 class FlavorDetailsNegativeTestJSON(base.BaseV2ComputeTest,
                                     test.NegativeAutoTest):
     _service = 'compute'
-    _schema_file = 'compute/flavors/flavor_details.json'
+    _schema = flavors.flavors_details
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorDetailsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorDetailsNegativeTestJSON, cls).resource_setup()
         cls.set_resource("flavor", cls.flavor_ref)
diff --git a/tempest/api/compute/flavors/test_flavors_negative_xml.py b/tempest/api/compute/flavors/test_flavors_negative_xml.py
index bf73c0e..299b18a 100644
--- a/tempest/api/compute/flavors/test_flavors_negative_xml.py
+++ b/tempest/api/compute/flavors/test_flavors_negative_xml.py
@@ -24,8 +24,8 @@
     _interface = 'xml'
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsNegativeTestXML, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsNegativeTestXML, cls).resource_setup()
         cls.client = cls.flavors_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/compute/floating_ips/base.py b/tempest/api/compute/floating_ips/base.py
index fd76e62..19b6a50 100644
--- a/tempest/api/compute/floating_ips/base.py
+++ b/tempest/api/compute/floating_ips/base.py
@@ -19,8 +19,8 @@
 class BaseFloatingIPsTest(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # Floating IP actions might need a full network configuration
         cls.set_network_resources(network=True, subnet=True,
                                   router=True, dhcp=True)
-        super(BaseFloatingIPsTest, cls).setUpClass()
+        super(BaseFloatingIPsTest, cls).resource_setup()
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions.py b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
index 1eae66f..ba66ab9 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
@@ -24,9 +24,8 @@
     floating_ip = None
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(FloatingIPsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPsTestJSON, cls).resource_setup()
         cls.client = cls.floating_ips_client
         cls.floating_ip_id = None
 
@@ -39,11 +38,11 @@
         cls.floating_ip = body['ip']
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Deleting the floating IP which is created in this method
         if cls.floating_ip_id:
             resp, body = cls.client.delete_floating_ip(cls.floating_ip_id)
-        super(FloatingIPsTestJSON, cls).tearDownClass()
+        super(FloatingIPsTestJSON, cls).resource_cleanup()
 
     def _try_delete_floating_ip(self, floating_ip_id):
         # delete floating ip, if it exists
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py b/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
index 042a19a..104d130 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions_negative.py
@@ -28,8 +28,8 @@
     server_id = None
 
     @classmethod
-    def setUpClass(cls):
-        super(FloatingIPsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPsNegativeTestJSON, cls).resource_setup()
         cls.client = cls.floating_ips_client
 
         # Server creation
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips.py b/tempest/api/compute/floating_ips/test_list_floating_ips.py
index a6878d9..cb93177 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips.py
@@ -20,8 +20,8 @@
 class FloatingIPDetailsTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(FloatingIPDetailsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPDetailsTestJSON, cls).resource_setup()
         cls.client = cls.floating_ips_client
         cls.floating_ip = []
         cls.floating_ip_id = []
@@ -31,10 +31,10 @@
             cls.floating_ip_id.append(body['id'])
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for i in range(3):
             cls.client.delete_floating_ip(cls.floating_ip_id[i])
-        super(FloatingIPDetailsTestJSON, cls).tearDownClass()
+        super(FloatingIPDetailsTestJSON, cls).resource_cleanup()
 
     @test.attr(type='gate')
     @test.services('network')
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py b/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
index b11ef5b..08819c2 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips_negative.py
@@ -27,8 +27,8 @@
 class FloatingIPDetailsNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(FloatingIPDetailsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPDetailsNegativeTestJSON, cls).resource_setup()
         cls.client = cls.floating_ips_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/compute/images/test_image_metadata.py b/tempest/api/compute/images/test_image_metadata.py
index 9036726..1fa591f 100644
--- a/tempest/api/compute/images/test_image_metadata.py
+++ b/tempest/api/compute/images/test_image_metadata.py
@@ -26,9 +26,8 @@
 class ImagesMetadataTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ImagesMetadataTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesMetadataTestJSON, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
diff --git a/tempest/api/compute/images/test_image_metadata_negative.py b/tempest/api/compute/images/test_image_metadata_negative.py
index 15bb66a..7f0bc4e 100644
--- a/tempest/api/compute/images/test_image_metadata_negative.py
+++ b/tempest/api/compute/images/test_image_metadata_negative.py
@@ -22,8 +22,8 @@
 class ImagesMetadataTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesMetadataTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesMetadataTestJSON, cls).resource_setup()
         cls.client = cls.images_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/compute/images/test_images.py b/tempest/api/compute/images/test_images.py
index 29df2b0..68f793a 100644
--- a/tempest/api/compute/images/test_images.py
+++ b/tempest/api/compute/images/test_images.py
@@ -23,11 +23,17 @@
 class ImagesTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesTestJSON, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
+
+        if not CONF.compute_feature_enabled.snapshot:
+            skip_msg = ("%s skipped as instance snapshotting is not supported"
+                        % cls.__name__)
+            raise cls.skipException(skip_msg)
+
         cls.client = cls.images_client
         cls.servers_client = cls.servers_client
 
diff --git a/tempest/api/compute/images/test_images_negative.py b/tempest/api/compute/images/test_images_negative.py
index 6163f4d..e406374 100644
--- a/tempest/api/compute/images/test_images_negative.py
+++ b/tempest/api/compute/images/test_images_negative.py
@@ -24,11 +24,17 @@
 class ImagesNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesNegativeTestJSON, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
+
+        if not CONF.compute_feature_enabled.snapshot:
+            skip_msg = ("%s skipped as instance snapshotting is not supported"
+                        % cls.__name__)
+            raise cls.skipException(skip_msg)
+
         cls.client = cls.images_client
         cls.servers_client = cls.servers_client
 
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index c81cec5..c0b6730 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -47,19 +47,20 @@
             self.__class__.server_id = self.rebuild_server(self.server_id)
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesOneServerTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesOneServerTestJSON, cls).resource_setup()
         cls.client = cls.images_client
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
 
-        try:
-            resp, server = cls.create_test_server(wait_until='ACTIVE')
-            cls.server_id = server['id']
-        except Exception:
-            cls.tearDownClass()
-            raise
+        if not CONF.compute_feature_enabled.snapshot:
+            skip_msg = ("%s skipped as instance snapshotting is not supported"
+                        % cls.__name__)
+            raise cls.skipException(skip_msg)
+
+        resp, server = cls.create_test_server(wait_until='ACTIVE')
+        cls.server_id = server['id']
 
     def _get_default_flavor_disk_size(self, flavor_id):
         resp, flavor = self.flavors_client.get_flavor_details(flavor_id)
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 9c4ab00..dc3d6bc 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -55,35 +55,23 @@
         self.__class__.server_id = self.rebuild_server(self.server_id)
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesOneServerNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesOneServerNegativeTestJSON, cls).resource_setup()
         cls.client = cls.images_client
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
 
-        try:
-            resp, server = cls.create_test_server(wait_until='ACTIVE')
-            cls.server_id = server['id']
-        except Exception:
-            cls.tearDownClass()
-            raise
+        if not CONF.compute_feature_enabled.snapshot:
+            skip_msg = ("%s skipped as instance snapshotting is not supported"
+                        % cls.__name__)
+            raise cls.skipException(skip_msg)
+
+        resp, server = cls.create_test_server(wait_until='ACTIVE')
+        cls.server_id = server['id']
 
         cls.image_ids = []
 
-    @test.skip_because(bug="1006725")
-    @test.attr(type=['negative', 'gate'])
-    def test_create_image_specify_multibyte_character_image_name(self):
-        if self.__class__._interface == "xml":
-            raise self.skipException("Not testable in XML")
-        # invalid multibyte sequence from:
-        # http://stackoverflow.com/questions/1301402/
-        #     example-invalid-utf8-string
-        invalid_name = data_utils.rand_name(u'\xc3\x28')
-        self.assertRaises(exceptions.BadRequest,
-                          self.client.create_image, self.server_id,
-                          invalid_name)
-
     @test.attr(type=['negative', 'gate'])
     def test_create_image_specify_invalid_metadata(self):
         # Return an error when creating image with invalid metadata
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index f9350e1..30a99dd 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -16,6 +16,8 @@
 import StringIO
 import time
 
+import testtools
+
 from tempest.api.compute import base
 from tempest.common.utils import data_utils
 from tempest import config
@@ -30,8 +32,8 @@
 class ListImageFiltersTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ListImageFiltersTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListImageFiltersTestJSON, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
@@ -63,34 +65,32 @@
         cls.image3 = _create_image()
         cls.image3_id = cls.image3['id']
 
+        if not CONF.compute_feature_enabled.snapshot:
+            return
+
         # Create instances and snapshots via nova
-        try:
-            resp, cls.server1 = cls.create_test_server()
-            resp, cls.server2 = cls.create_test_server(wait_until='ACTIVE')
-            # NOTE(sdague) this is faster than doing the sync wait_util on both
-            cls.servers_client.wait_for_server_status(cls.server1['id'],
-                                                      'ACTIVE')
+        resp, cls.server1 = cls.create_test_server()
+        resp, cls.server2 = cls.create_test_server(wait_until='ACTIVE')
+        # NOTE(sdague) this is faster than doing the sync wait_util on both
+        cls.servers_client.wait_for_server_status(cls.server1['id'],
+                                                  'ACTIVE')
 
-            # Create images to be used in the filter tests
-            resp, cls.snapshot1 = cls.create_image_from_server(
-                cls.server1['id'], wait_until='ACTIVE')
-            cls.snapshot1_id = cls.snapshot1['id']
+        # Create images to be used in the filter tests
+        resp, cls.snapshot1 = cls.create_image_from_server(
+            cls.server1['id'], wait_until='ACTIVE')
+        cls.snapshot1_id = cls.snapshot1['id']
 
-            # Servers have a hidden property for when they are being imaged
-            # Performing back-to-back create image calls on a single
-            # server will sometimes cause failures
-            resp, cls.snapshot3 = cls.create_image_from_server(
-                cls.server2['id'], wait_until='ACTIVE')
-            cls.snapshot3_id = cls.snapshot3['id']
+        # Servers have a hidden property for when they are being imaged
+        # Performing back-to-back create image calls on a single
+        # server will sometimes cause failures
+        resp, cls.snapshot3 = cls.create_image_from_server(
+            cls.server2['id'], wait_until='ACTIVE')
+        cls.snapshot3_id = cls.snapshot3['id']
 
-            # Wait for the server to be active after the image upload
-            resp, cls.snapshot2 = cls.create_image_from_server(
-                cls.server1['id'], wait_until='ACTIVE')
-            cls.snapshot2_id = cls.snapshot2['id']
-        except Exception:
-            LOG.exception('setUpClass failed')
-            cls.tearDownClass()
-            raise
+        # Wait for the server to be active after the image upload
+        resp, cls.snapshot2 = cls.create_image_from_server(
+            cls.server1['id'], wait_until='ACTIVE')
+        cls.snapshot2_id = cls.snapshot2['id']
 
     @test.attr(type='gate')
     def test_list_images_filter_by_status(self):
@@ -114,6 +114,8 @@
         self.assertFalse(any([i for i in images if i['id'] == self.image2_id]))
         self.assertFalse(any([i for i in images if i['id'] == self.image3_id]))
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @test.attr(type='gate')
     def test_list_images_filter_by_server_id(self):
         # The images should contain images filtered by server id
@@ -129,6 +131,8 @@
         self.assertFalse(any([i for i in images
                               if i['id'] == self.snapshot3_id]))
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @test.attr(type='gate')
     def test_list_images_filter_by_server_ref(self):
         # The list of servers should be filtered by server ref
@@ -146,6 +150,8 @@
             self.assertTrue(any([i for i in images
                                  if i['id'] == self.snapshot3_id]))
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @test.attr(type='gate')
     def test_list_images_filter_by_type(self):
         # The list of servers should be filtered by image type
@@ -211,6 +217,8 @@
         resp, images = self.client.list_images_with_detail(params)
         self.assertEqual(1, len(images))
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @test.attr(type='gate')
     def test_list_images_with_detail_filter_by_server_ref(self):
         # Detailed list of servers should be filtered by server ref
@@ -228,6 +236,8 @@
             self.assertTrue(any([i for i in images
                                  if i['id'] == self.snapshot3_id]))
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @test.attr(type='gate')
     def test_list_images_with_detail_filter_by_type(self):
         # The detailed list of servers should be filtered by image type
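
The hunks above, and the rest of this change, all follow the same pattern:
class-level fixture code moves out of setUpClass/tearDownClass and into
resource_setup/resource_cleanup, and the per-class try/except blocks (and the
@test.safe_setup decorator) disappear because the base test class now owns
that error handling. A minimal sketch of the idea, assuming a base class along
these lines (an illustration only, not Tempest's actual implementation)::

    import testtools


    class BaseTestCase(testtools.TestCase):

        @classmethod
        def setUpClass(cls):
            super(BaseTestCase, cls).setUpClass()
            try:
                # Subclasses override resource_setup() instead of setUpClass()
                cls.resource_setup()
            except Exception:
                # Clean up whatever was created before re-raising, so a
                # half-finished setup does not leak resources.
                cls.resource_cleanup()
                raise

        @classmethod
        def tearDownClass(cls):
            cls.resource_cleanup()
            super(BaseTestCase, cls).tearDownClass()

        @classmethod
        def resource_setup(cls):
            pass

        @classmethod
        def resource_cleanup(cls):
            pass

With the error handling centralized this way, individual test classes no
longer need their own try/except around setup, which is exactly what the
ListImageFiltersTestJSON hunk above removes.
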
diff --git a/tempest/api/compute/images/test_list_image_filters_negative.py b/tempest/api/compute/images/test_list_image_filters_negative.py
index 80d59a7..53a21a0 100644
--- a/tempest/api/compute/images/test_list_image_filters_negative.py
+++ b/tempest/api/compute/images/test_list_image_filters_negative.py
@@ -24,8 +24,8 @@
 class ListImageFiltersNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ListImageFiltersNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListImageFiltersNegativeTestJSON, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
diff --git a/tempest/api/compute/images/test_list_images.py b/tempest/api/compute/images/test_list_images.py
index eba331f..eceac82 100644
--- a/tempest/api/compute/images/test_list_images.py
+++ b/tempest/api/compute/images/test_list_images.py
@@ -23,8 +23,8 @@
 class ListImagesTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ListImagesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListImagesTestJSON, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
diff --git a/tempest/api/compute/keypairs/test_keypairs.py b/tempest/api/compute/keypairs/test_keypairs.py
index 01979c0..2f0febf 100644
--- a/tempest/api/compute/keypairs/test_keypairs.py
+++ b/tempest/api/compute/keypairs/test_keypairs.py
@@ -23,8 +23,8 @@
     _api_version = 3
 
     @classmethod
-    def setUpClass(cls):
-        super(KeyPairsV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(KeyPairsV3Test, cls).resource_setup()
         cls.client = cls.keypairs_client
 
     def _delete_keypair(self, keypair_name):
diff --git a/tempest/api/compute/keypairs/test_keypairs_negative.py b/tempest/api/compute/keypairs/test_keypairs_negative.py
index a91a9c2..0da449b 100644
--- a/tempest/api/compute/keypairs/test_keypairs_negative.py
+++ b/tempest/api/compute/keypairs/test_keypairs_negative.py
@@ -23,8 +23,8 @@
 class KeyPairsNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(KeyPairsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(KeyPairsNegativeTestJSON, cls).resource_setup()
         cls.client = cls.keypairs_client
 
     def _create_keypair(self, keypair_name, pub_key=None):
diff --git a/tempest/api/compute/limits/test_absolute_limits.py b/tempest/api/compute/limits/test_absolute_limits.py
index d64fd57..bac1a39 100644
--- a/tempest/api/compute/limits/test_absolute_limits.py
+++ b/tempest/api/compute/limits/test_absolute_limits.py
@@ -20,8 +20,8 @@
 class AbsoluteLimitsTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(AbsoluteLimitsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AbsoluteLimitsTestJSON, cls).resource_setup()
         cls.client = cls.limits_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/limits/test_absolute_limits_negative.py b/tempest/api/compute/limits/test_absolute_limits_negative.py
index b2e2981..2b41ea0 100644
--- a/tempest/api/compute/limits/test_absolute_limits_negative.py
+++ b/tempest/api/compute/limits/test_absolute_limits_negative.py
@@ -21,8 +21,8 @@
 class AbsoluteLimitsNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(AbsoluteLimitsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AbsoluteLimitsNegativeTestJSON, cls).resource_setup()
         cls.client = cls.limits_client
         cls.server_client = cls.servers_client
 
diff --git a/tempest/api/compute/security_groups/base.py b/tempest/api/compute/security_groups/base.py
index 6838ce1..05cad9a 100644
--- a/tempest/api/compute/security_groups/base.py
+++ b/tempest/api/compute/security_groups/base.py
@@ -19,7 +19,7 @@
 class BaseSecurityGroupsTest(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # A network and a subnet will be created for these tests
         cls.set_network_resources(network=True, subnet=True)
-        super(BaseSecurityGroupsTest, cls).setUpClass()
+        super(BaseSecurityGroupsTest, cls).resource_setup()
diff --git a/tempest/api/compute/security_groups/test_security_group_rules.py b/tempest/api/compute/security_groups/test_security_group_rules.py
index a1808dc..901c377 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules.py
@@ -23,11 +23,18 @@
 class SecurityGroupRulesTestJSON(base.BaseSecurityGroupsTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(SecurityGroupRulesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(SecurityGroupRulesTestJSON, cls).resource_setup()
         cls.client = cls.security_groups_client
         cls.neutron_available = CONF.service_available.neutron
 
+    @classmethod
+    def setUpClass(cls):
+        super(SecurityGroupRulesTestJSON, cls).setUpClass()
+        cls.ip_protocol = 'tcp'
+        cls.from_port = 22
+        cls.to_port = 22
+
     @test.attr(type='smoke')
     @test.services('network')
     def test_security_group_rules_create(self):
@@ -37,14 +44,11 @@
         resp, security_group = self.create_security_group()
         securitygroup_id = security_group['id']
         # Adding rules to the created Security Group
-        ip_protocol = 'tcp'
-        from_port = 22
-        to_port = 22
         resp, rule = \
             self.client.create_security_group_rule(securitygroup_id,
-                                                   ip_protocol,
-                                                   from_port,
-                                                   to_port)
+                                                   self.ip_protocol,
+                                                   self.from_port,
+                                                   self.to_port)
         self.addCleanup(self.client.delete_security_group_rule, rule['id'])
         self.assertEqual(200, resp.status)
 
@@ -65,16 +69,13 @@
         secgroup2 = security_group['id']
         # Adding rules to the created Security Group with optional arguments
         parent_group_id = secgroup1
-        ip_protocol = 'tcp'
-        from_port = 22
-        to_port = 22
         cidr = '10.2.3.124/24'
         group_id = secgroup2
         resp, rule = \
             self.client.create_security_group_rule(parent_group_id,
-                                                   ip_protocol,
-                                                   from_port,
-                                                   to_port,
+                                                   self.ip_protocol,
+                                                   self.from_port,
+                                                   self.to_port,
                                                    cidr=cidr,
                                                    group_id=group_id)
         self.assertEqual(200, resp.status)
@@ -89,13 +90,11 @@
         securitygroup_id = security_group['id']
 
         # Add a first rule to the created Security Group
-        ip_protocol1 = 'tcp'
-        from_port1 = 22
-        to_port1 = 22
         resp, rule = \
             self.client.create_security_group_rule(securitygroup_id,
-                                                   ip_protocol1,
-                                                   from_port1, to_port1)
+                                                   self.ip_protocol,
+                                                   self.from_port,
+                                                   self.to_port)
         rule1_id = rule['id']
 
         # Add a second rule to the created Security Group
@@ -127,14 +126,11 @@
         resp, security_group = self.create_security_group()
         sg2_id = security_group['id']
         # Adding rules to the Group1
-        ip_protocol = 'tcp'
-        from_port = 22
-        to_port = 22
         resp, rule = \
             self.client.create_security_group_rule(sg1_id,
-                                                   ip_protocol,
-                                                   from_port,
-                                                   to_port,
+                                                   self.ip_protocol,
+                                                   self.from_port,
+                                                   self.to_port,
                                                    group_id=sg2_id)
 
         self.assertEqual(200, resp.status)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules_negative.py b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
index cfa839a..7850909 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules_negative.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
@@ -32,8 +32,8 @@
 class SecurityGroupRulesNegativeTestJSON(base.BaseSecurityGroupsTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(SecurityGroupRulesNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(SecurityGroupRulesNegativeTestJSON, cls).resource_setup()
         cls.client = cls.security_groups_client
 
     @test.attr(type=['negative', 'smoke'])
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index 860aebc..82dd4f0 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -22,8 +22,8 @@
 class SecurityGroupsTestJSON(base.BaseSecurityGroupsTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(SecurityGroupsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(SecurityGroupsTestJSON, cls).resource_setup()
         cls.client = cls.security_groups_client
 
     @test.attr(type='smoke')
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index a9cca55..3101052 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -27,8 +27,8 @@
 class SecurityGroupsNegativeTestJSON(base.BaseSecurityGroupsTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(SecurityGroupsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(SecurityGroupsNegativeTestJSON, cls).resource_setup()
         cls.client = cls.security_groups_client
         cls.neutron_available = CONF.service_available.neutron
 
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index d1192c0..d62d19f 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -26,14 +26,14 @@
 class AttachInterfacesTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.neutron:
             raise cls.skipException("Neutron is required")
         if not CONF.compute_feature_enabled.interface_attach:
             raise cls.skipException("Interface attachment is not available.")
         # This test class requires network and subnet
         cls.set_network_resources(network=True, subnet=True)
-        super(AttachInterfacesTestJSON, cls).setUpClass()
+        super(AttachInterfacesTestJSON, cls).resource_setup()
         cls.client = cls.os.interfaces_client
 
     def _check_interface(self, iface, port_id=None, network_id=None,
diff --git a/tempest/api/compute/servers/test_availability_zone.py b/tempest/api/compute/servers/test_availability_zone.py
index cf9837f..44bd7d3 100644
--- a/tempest/api/compute/servers/test_availability_zone.py
+++ b/tempest/api/compute/servers/test_availability_zone.py
@@ -24,8 +24,8 @@
     _api_version = 3
 
     @classmethod
-    def setUpClass(cls):
-        super(AZV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(AZV3Test, cls).resource_setup()
         cls.client = cls.availability_zone_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index 279dc51..5df8d82 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -31,9 +31,9 @@
     disk_config = 'AUTO'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(ServersTestJSON, cls).setUpClass()
+        super(ServersTestJSON, cls).resource_setup()
         cls.meta = {'hello': 'world'}
         cls.accessIPv4 = '1.1.1.1'
         cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2'
@@ -129,9 +129,9 @@
     disk_config = 'AUTO'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(ServersWithSpecificFlavorTestJSON, cls).setUpClass()
+        super(ServersWithSpecificFlavorTestJSON, cls).resource_setup()
         cls.flavor_client = cls.os_adm.flavors_client
         cls.client = cls.servers_client
 
@@ -214,11 +214,11 @@
     disk_config = 'MANUAL'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.disk_config:
             msg = "DiskConfig extension not enabled."
             raise cls.skipException(msg)
-        super(ServersTestManualDisk, cls).setUpClass()
+        super(ServersTestManualDisk, cls).resource_setup()
 
 
 class ServersTestXML(ServersTestJSON):
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 9c8271f..6a5da58 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -28,8 +28,8 @@
     # for preventing "Quota exceeded for instances"
 
     @classmethod
-    def setUpClass(cls):
-        super(DeleteServersTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(DeleteServersTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
     @test.attr(type='gate')
@@ -70,6 +70,18 @@
         self.assertEqual('204', resp['status'])
         self.client.wait_for_server_termination(server['id'])
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
+                          'Suspend is not available.')
+    @test.attr(type='gate')
+    def test_delete_server_while_in_suspended_state(self):
+        # Delete a server while its VM state is Suspended
+        _, server = self.create_test_server(wait_until='ACTIVE')
+        self.client.suspend_server(server['id'])
+        self.client.wait_for_server_status(server['id'], 'SUSPENDED')
+        resp, _ = self.client.delete_server(server['id'])
+        self.assertEqual('204', resp['status'])
+        self.client.wait_for_server_termination(server['id'])
+
     @testtools.skipUnless(CONF.compute_feature_enabled.shelve,
                           'Shelve is not available.')
     @test.attr(type='gate')
@@ -130,8 +142,8 @@
     # for preventing "Quota exceeded for instances".
 
     @classmethod
-    def setUpClass(cls):
-        super(DeleteServersAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(DeleteServersAdminTestJSON, cls).resource_setup()
         cls.non_admin_client = cls.servers_client
         cls.admin_client = cls.os_adm.servers_client
 
diff --git a/tempest/api/compute/servers/test_disk_config.py b/tempest/api/compute/servers/test_disk_config.py
index 332358c..51f2eb4 100644
--- a/tempest/api/compute/servers/test_disk_config.py
+++ b/tempest/api/compute/servers/test_disk_config.py
@@ -25,11 +25,11 @@
 class ServerDiskConfigTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.disk_config:
             msg = "DiskConfig extension not enabled."
             raise cls.skipException(msg)
-        super(ServerDiskConfigTestJSON, cls).setUpClass()
+        super(ServerDiskConfigTestJSON, cls).resource_setup()
         cls.client = cls.os.servers_client
         resp, server = cls.create_test_server(wait_until='ACTIVE')
         cls.server_id = server['id']
diff --git a/tempest/api/compute/servers/test_instance_actions.py b/tempest/api/compute/servers/test_instance_actions.py
index dd31165..d11ce25 100644
--- a/tempest/api/compute/servers/test_instance_actions.py
+++ b/tempest/api/compute/servers/test_instance_actions.py
@@ -20,8 +20,8 @@
 class InstanceActionsTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceActionsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceActionsTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         resp, server = cls.create_test_server(wait_until='ACTIVE')
         cls.request_id = resp['x-compute-request-id']
diff --git a/tempest/api/compute/servers/test_instance_actions_negative.py b/tempest/api/compute/servers/test_instance_actions_negative.py
index e67b69d..c706ad5 100644
--- a/tempest/api/compute/servers/test_instance_actions_negative.py
+++ b/tempest/api/compute/servers/test_instance_actions_negative.py
@@ -22,8 +22,8 @@
 class InstanceActionsNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceActionsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceActionsNegativeTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         resp, server = cls.create_test_server(wait_until='ACTIVE')
         cls.server_id = server['id']
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index 9d39c9f..98fe387 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -26,10 +26,9 @@
 class ListServerFiltersTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources(network=True, subnet=True, dhcp=True)
-        super(ListServerFiltersTestJSON, cls).setUpClass()
+        super(ListServerFiltersTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
         # Check to see if the alternate image ref actually exists...
@@ -234,6 +233,30 @@
         self.assertNotIn(self.s3_name, map(lambda x: x['name'], servers))
 
     @test.attr(type='gate')
+    def test_list_servers_filtered_by_name_regex(self):
+        # A list of regexes, each of which should match s1, s2 and s3
+        regexes = [r'^.*\-instance\-[0-9]+$', r'^.*\-instance\-.*$']
+        for regex in regexes:
+            params = {'name': regex}
+            resp, body = self.client.list_servers(params)
+            servers = body['servers']
+
+            self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
+            self.assertIn(self.s2_name, map(lambda x: x['name'], servers))
+            self.assertIn(self.s3_name, map(lambda x: x['name'], servers))
+
+        # Take a substring of the first server's name and filter by it
+        part_name = self.s1_name[-10:]
+
+        params = {'name': part_name}
+        resp, body = self.client.list_servers(params)
+        servers = body['servers']
+
+        self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
+        self.assertNotIn(self.s2_name, map(lambda x: x['name'], servers))
+        self.assertNotIn(self.s3_name, map(lambda x: x['name'], servers))
+
+    @test.attr(type='gate')
     def test_list_servers_filtered_by_ip(self):
         # Filter servers by ip
         # Here should be listed 1 server
diff --git a/tempest/api/compute/servers/test_list_servers_negative.py b/tempest/api/compute/servers/test_list_servers_negative.py
index 28d64fb..f4d8dda 100644
--- a/tempest/api/compute/servers/test_list_servers_negative.py
+++ b/tempest/api/compute/servers/test_list_servers_negative.py
@@ -24,9 +24,8 @@
     force_tenant_isolation = True
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ListServersNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListServersNegativeTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
         # The following servers are created for use
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index ee525e7..3aacf2a 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -15,9 +15,9 @@
 
 import base64
 import logging
+import urlparse
 
 import testtools
-import urlparse
 
 from tempest.api.compute import base
 from tempest.common.utils import data_utils
@@ -52,9 +52,9 @@
         super(ServerActionsTestJSON, self).tearDown()
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(ServerActionsTestJSON, cls).setUpClass()
+        super(ServerActionsTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         cls.server_id = cls.rebuild_server(None)
 
@@ -75,9 +75,7 @@
                                                       new_password)
             linux_client.validate_authentication()
 
-    @test.attr(type='smoke')
-    def test_reboot_server_hard(self):
-        # The server should be power cycled
+    def _test_reboot_server(self, reboot_type):
         if self.run_ssh:
             # Get the time the server was last rebooted,
             resp, server = self.client.get_server(self.server_id)
@@ -85,7 +83,7 @@
                                                       self.password)
             boot_time = linux_client.get_boot_time()
 
-        resp, body = self.client.reboot(self.server_id, 'HARD')
+        resp, body = self.client.reboot(self.server_id, reboot_type)
         self.assertEqual(202, resp.status)
         self.client.wait_for_server_status(self.server_id, 'ACTIVE')
 
@@ -97,28 +95,16 @@
             self.assertTrue(new_boot_time > boot_time,
                             '%s > %s' % (new_boot_time, boot_time))
 
+    @test.attr(type='smoke')
+    def test_reboot_server_hard(self):
+        # The server should be power cycled
+        self._test_reboot_server('HARD')
+
     @test.skip_because(bug="1014647")
     @test.attr(type='smoke')
     def test_reboot_server_soft(self):
         # The server should be signaled to reboot gracefully
-        if self.run_ssh:
-            # Get the time the server was last rebooted,
-            resp, server = self.client.get_server(self.server_id)
-            linux_client = remote_client.RemoteClient(server, self.ssh_user,
-                                                      self.password)
-            boot_time = linux_client.get_boot_time()
-
-        resp, body = self.client.reboot(self.server_id, 'SOFT')
-        self.assertEqual(202, resp.status)
-        self.client.wait_for_server_status(self.server_id, 'ACTIVE')
-
-        if self.run_ssh:
-            # Log in and verify the boot time has changed
-            linux_client = remote_client.RemoteClient(server, self.ssh_user,
-                                                      self.password)
-            new_boot_time = linux_client.get_boot_time()
-            self.assertTrue(new_boot_time > boot_time,
-                            '%s > %s' % (new_boot_time, boot_time))
+        self._test_reboot_server('SOFT')
 
     @test.attr(type='smoke')
     def test_rebuild_server(self):
@@ -256,7 +242,10 @@
         resp, server = self.client.get_server(self.server_id)
         self.assertEqual(previous_flavor_ref, server['flavor']['id'])
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting not available, backup not possible.')
     @test.attr(type='gate')
+    @test.services('image')
     def test_create_backup(self):
         # Positive test:create backup successfully and rotate backups correctly
         # create the first and the second backup
@@ -348,6 +337,8 @@
         lines = len(output.split('\n'))
         self.assertEqual(lines, 10)
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
+                          'Console output not supported.')
     @test.attr(type='gate')
     def test_get_console_output(self):
         # Positive test:Should be able to GET the console output
@@ -364,6 +355,8 @@
 
         self.wait_for(self._get_output)
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
+                          'Console output not supported.')
     @test.attr(type='gate')
     def test_get_console_output_server_id_in_shutoff_status(self):
         # Positive test:Should be able to GET the console output
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index 846bf3e..6c29f51 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -23,10 +23,10 @@
 class ServerAddressesTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # This test module might use a network and a subnet
         cls.set_network_resources(network=True, subnet=True)
-        super(ServerAddressesTestJSON, cls).setUpClass()
+        super(ServerAddressesTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
         resp, cls.server = cls.create_test_server(wait_until='ACTIVE')
diff --git a/tempest/api/compute/servers/test_server_addresses_negative.py b/tempest/api/compute/servers/test_server_addresses_negative.py
index e190161..c7e4c89 100644
--- a/tempest/api/compute/servers/test_server_addresses_negative.py
+++ b/tempest/api/compute/servers/test_server_addresses_negative.py
@@ -21,9 +21,9 @@
 class ServerAddressesNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources(network=True, subnet=True)
-        super(ServerAddressesNegativeTestJSON, cls).setUpClass()
+        super(ServerAddressesNegativeTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
         resp, cls.server = cls.create_test_server(wait_until='ACTIVE')
diff --git a/tempest/api/compute/servers/test_server_group.py b/tempest/api/compute/servers/test_server_group.py
index f1ef5d5..0af19c0 100644
--- a/tempest/api/compute/servers/test_server_group.py
+++ b/tempest/api/compute/servers/test_server_group.py
@@ -26,9 +26,8 @@
     It also adds the tests for list and get details of server-groups
     """
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ServerGroupTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerGroupTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('os-server-groups', 'compute'):
             msg = "os-server-groups extension is not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/compute/servers/test_server_metadata.py b/tempest/api/compute/servers/test_server_metadata.py
index 01ff6b9..c265352 100644
--- a/tempest/api/compute/servers/test_server_metadata.py
+++ b/tempest/api/compute/servers/test_server_metadata.py
@@ -20,8 +20,8 @@
 class ServerMetadataTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerMetadataTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerMetadataTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         cls.quotas = cls.quotas_client
         resp, server = cls.create_test_server(meta={}, wait_until='ACTIVE')
diff --git a/tempest/api/compute/servers/test_server_metadata_negative.py b/tempest/api/compute/servers/test_server_metadata_negative.py
index fbda401..497b94b 100644
--- a/tempest/api/compute/servers/test_server_metadata_negative.py
+++ b/tempest/api/compute/servers/test_server_metadata_negative.py
@@ -22,8 +22,8 @@
 class ServerMetadataNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerMetadataNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerMetadataNegativeTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         cls.quotas = cls.quotas_client
         cls.tenant_id = cls.client.tenant_id
diff --git a/tempest/api/compute/servers/test_server_password.py b/tempest/api/compute/servers/test_server_password.py
index 50c881a..aba9bb6 100644
--- a/tempest/api/compute/servers/test_server_password.py
+++ b/tempest/api/compute/servers/test_server_password.py
@@ -21,8 +21,8 @@
 class ServerPasswordTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerPasswordTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerPasswordTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         resp, cls.server = cls.create_test_server(wait_until="ACTIVE")
 
diff --git a/tempest/api/compute/servers/test_server_personality.py b/tempest/api/compute/servers/test_server_personality.py
index 6cc463d..effb52f 100644
--- a/tempest/api/compute/servers/test_server_personality.py
+++ b/tempest/api/compute/servers/test_server_personality.py
@@ -23,8 +23,8 @@
 class ServerPersonalityTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerPersonalityTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerPersonalityTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         cls.user_client = cls.limits_client
 
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index ab98d88..a984ade 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -24,14 +24,13 @@
 class ServerRescueTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.rescue:
             msg = "Server rescue not available."
             raise cls.skipException(msg)
 
         cls.set_network_resources(network=True, subnet=True, router=True)
-        super(ServerRescueTestJSON, cls).setUpClass()
+        super(ServerRescueTestJSON, cls).resource_setup()
 
         # Floating IP creation
         resp, body = cls.floating_ips_client.create_floating_ip()
@@ -54,7 +53,6 @@
 
         # Server for positive tests
         resp, server = cls.create_test_server(wait_until='BUILD')
-        resp, resc_server = cls.create_test_server(wait_until='ACTIVE')
         cls.server_id = server['id']
         cls.password = server['adminPass']
         cls.servers_client.wait_for_server_status(cls.server_id, 'ACTIVE')
@@ -63,13 +61,14 @@
         super(ServerRescueTestJSON, self).setUp()
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Deleting the floating IP which is created in this method
         cls.floating_ips_client.delete_floating_ip(cls.floating_ip_id)
-        cls.delete_volume(cls.volume['id'])
+        if getattr(cls, 'volume', None):
+            cls.delete_volume(cls.volume['id'])
         resp, cls.sg = cls.security_groups_client.delete_security_group(
             cls.sg_id)
-        super(ServerRescueTestJSON, cls).tearDownClass()
+        super(ServerRescueTestJSON, cls).resource_cleanup()
 
     def tearDown(self):
         super(ServerRescueTestJSON, self).tearDown()
diff --git a/tempest/api/compute/servers/test_server_rescue_negative.py b/tempest/api/compute/servers/test_server_rescue_negative.py
index b35e55c..0d29968 100644
--- a/tempest/api/compute/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/servers/test_server_rescue_negative.py
@@ -26,15 +26,14 @@
 class ServerRescueNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.rescue:
             msg = "Server rescue not available."
             raise cls.skipException(msg)
 
         cls.set_network_resources(network=True, subnet=True, router=True)
-        super(ServerRescueNegativeTestJSON, cls).setUpClass()
-        cls.device = 'vdf'
+        super(ServerRescueNegativeTestJSON, cls).resource_setup()
+        cls.device = CONF.compute.volume_device_name
 
         # Create a volume and wait for it to become ready for attach
         resp, cls.volume = cls.volumes_extensions_client.create_volume(
@@ -56,9 +55,10 @@
         cls.servers_client.wait_for_server_status(cls.server_id, 'ACTIVE')
 
     @classmethod
-    def tearDownClass(cls):
-        cls.delete_volume(cls.volume['id'])
-        super(ServerRescueNegativeTestJSON, cls).tearDownClass()
+    def resource_cleanup(cls):
+        if getattr(cls, 'volume', None):
+            cls.delete_volume(cls.volume['id'])
+        super(ServerRescueNegativeTestJSON, cls).resource_cleanup()
 
     def _detach(self, server_id, volume_id):
         self.servers_client.detach_volume(server_id, volume_id)
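
The resource_cleanup() bodies in the two rescue test classes above now probe
for the volume with getattr() before deleting it. Because cleanup runs even
when resource_setup() fails part-way through, it can no longer assume every
class attribute was created. A minimal sketch of the guarded-cleanup shape
(the class name here is illustrative only)::

    @classmethod
    def resource_cleanup(cls):
        # 'volume' is only set if resource_setup() got far enough to create
        # it, so check before deleting rather than raising AttributeError.
        if getattr(cls, 'volume', None):
            cls.delete_volume(cls.volume['id'])
        super(MyRescueTest, cls).resource_cleanup()
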
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index 936b871..d501839 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -21,8 +21,8 @@
 class ServersTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServersTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
     def tearDown(self):
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index 792b523..b86ee06 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -42,8 +42,8 @@
         super(ServersNegativeTestJSON, self).tearDown()
 
     @classmethod
-    def setUpClass(cls):
-        super(ServersNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersNegativeTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         if CONF.compute.allow_tenant_isolation:
             cls.alt_os = clients.Manager(cls.isolated_creds.get_alt_creds())
@@ -404,13 +404,6 @@
                           nonexistent_server)
 
     @test.attr(type=['negative', 'gate'])
-    def test_force_delete_server_invalid_state(self):
-        # we can only force-delete a server in 'soft-delete' state
-        self.assertRaises(exceptions.Conflict,
-                          self.client.force_delete_server,
-                          self.server_id)
-
-    @test.attr(type=['negative', 'gate'])
     def test_restore_nonexistent_server_id(self):
         # restore-delete a non existent server
         nonexistent_server = data_utils.rand_uuid()
diff --git a/tempest/api/compute/servers/test_servers_negative_new.py b/tempest/api/compute/servers/test_servers_negative_new.py
index 43ddb3a..7fc2d4f 100644
--- a/tempest/api/compute/servers/test_servers_negative_new.py
+++ b/tempest/api/compute/servers/test_servers_negative_new.py
@@ -15,6 +15,7 @@
 
 
 from tempest.api.compute import base
+from tempest.api_schema.request.compute.v2 import servers
 from tempest import test
 
 
@@ -25,10 +26,10 @@
 class GetConsoleOutputNegativeTestJSON(base.BaseV2ComputeTest,
                                        test.NegativeAutoTest):
     _service = 'compute'
-    _schema_file = 'compute/servers/get_console_output.json'
+    _schema = servers.get_console_output
 
     @classmethod
-    def setUpClass(cls):
-        super(GetConsoleOutputNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(GetConsoleOutputNegativeTestJSON, cls).resource_setup()
         _resp, server = cls.create_test_server()
         cls.set_resource("server", server['id'])
diff --git a/tempest/api/compute/servers/test_virtual_interfaces.py b/tempest/api/compute/servers/test_virtual_interfaces.py
index 421ba8b..f205761 100644
--- a/tempest/api/compute/servers/test_virtual_interfaces.py
+++ b/tempest/api/compute/servers/test_virtual_interfaces.py
@@ -25,10 +25,10 @@
 class VirtualInterfacesTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # This test needs a network and a subnet
         cls.set_network_resources(network=True, subnet=True)
-        super(VirtualInterfacesTestJSON, cls).setUpClass()
+        super(VirtualInterfacesTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
         resp, server = cls.create_test_server(wait_until='ACTIVE')
         cls.server_id = server['id']
diff --git a/tempest/api/compute/servers/test_virtual_interfaces_negative.py b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
index bcb2686..1f4a20e 100644
--- a/tempest/api/compute/servers/test_virtual_interfaces_negative.py
+++ b/tempest/api/compute/servers/test_virtual_interfaces_negative.py
@@ -23,10 +23,10 @@
 class VirtualInterfacesNegativeTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # For this test no network resources are needed
         cls.set_network_resources()
-        super(VirtualInterfacesNegativeTestJSON, cls).setUpClass()
+        super(VirtualInterfacesNegativeTestJSON, cls).resource_setup()
         cls.client = cls.servers_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/compute/test_authorization.py b/tempest/api/compute/test_authorization.py
index fb8ded3..015d9f5 100644
--- a/tempest/api/compute/test_authorization.py
+++ b/tempest/api/compute/test_authorization.py
@@ -30,12 +30,12 @@
 
 class AuthorizationTestJSON(base.BaseV2ComputeTest):
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.glance:
             raise cls.skipException('Glance is not available.')
         # No network resources required for this test
         cls.set_network_resources()
-        super(AuthorizationTestJSON, cls).setUpClass()
+        super(AuthorizationTestJSON, cls).resource_setup()
         if not cls.multi_user:
             msg = "Need >1 user"
             raise cls.skipException(msg)
@@ -62,9 +62,9 @@
 
         name = data_utils.rand_name('image')
         resp, body = cls.glance_client.create_image(name=name,
-                                                   container_format='bare',
-                                                   disk_format='raw',
-                                                   is_public=False)
+                                                    container_format='bare',
+                                                    disk_format='raw',
+                                                    is_public=False)
         image_id = body['id']
         image_file = StringIO.StringIO(('*' * 1024))
         resp, body = cls.glance_client.update_image(image_id, data=image_file)
@@ -88,12 +88,12 @@
             parent_group_id, ip_protocol, from_port, to_port)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         if cls.multi_user:
             cls.images_client.delete_image(cls.image['id'])
             cls.keypairs_client.delete_keypair(cls.keypairname)
             cls.security_client.delete_security_group(cls.security_group['id'])
-        super(AuthorizationTestJSON, cls).tearDownClass()
+        super(AuthorizationTestJSON, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_get_server_for_alt_account_fails(self):
diff --git a/tempest/api/compute/test_live_block_migration.py b/tempest/api/compute/test_live_block_migration.py
index 93ff4b0..86b8395 100644
--- a/tempest/api/compute/test_live_block_migration.py
+++ b/tempest/api/compute/test_live_block_migration.py
@@ -27,8 +27,8 @@
     _host_key = 'OS-EXT-SRV-ATTR:host'
 
     @classmethod
-    def setUpClass(cls):
-        super(LiveBlockMigrationTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(LiveBlockMigrationTestJSON, cls).resource_setup()
 
         cls.admin_hosts_client = cls.os_adm.hosts_client
         cls.admin_servers_client = cls.os_adm.servers_client
diff --git a/tempest/api/compute/test_live_block_migration_negative.py b/tempest/api/compute/test_live_block_migration_negative.py
index c10818e..95eea19 100644
--- a/tempest/api/compute/test_live_block_migration_negative.py
+++ b/tempest/api/compute/test_live_block_migration_negative.py
@@ -27,8 +27,8 @@
     _host_key = 'OS-EXT-SRV-ATTR:host'
 
     @classmethod
-    def setUpClass(cls):
-        super(LiveBlockMigrationNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(LiveBlockMigrationNegativeTestJSON, cls).resource_setup()
         if not CONF.compute_feature_enabled.live_migration:
             raise cls.skipException("Live migration is not enabled")
         cls.admin_hosts_client = cls.os_adm.hosts_client
diff --git a/tempest/api/compute/test_quotas.py b/tempest/api/compute/test_quotas.py
index eeff3ce..e66b652 100644
--- a/tempest/api/compute/test_quotas.py
+++ b/tempest/api/compute/test_quotas.py
@@ -26,8 +26,8 @@
         super(QuotasTestJSON, self).setUp()
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasTestJSON, cls).resource_setup()
         cls.client = cls.quotas_client
         cls.tenant_id = cls.client.tenant_id
         cls.user_id = cls.client.user_id
@@ -45,17 +45,17 @@
         expected_quota_set = self.default_quota_set | set(['id'])
         resp, quota_set = self.client.get_quota_set(self.tenant_id)
         self.assertEqual(200, resp.status)
-        self.assertEqual(sorted(expected_quota_set),
-                         sorted(quota_set.keys()))
         self.assertEqual(quota_set['id'], self.tenant_id)
+        for quota in expected_quota_set:
+            self.assertIn(quota, quota_set.keys())
 
         # get the quota set using user id
         resp, quota_set = self.client.get_quota_set(self.tenant_id,
                                                     self.user_id)
         self.assertEqual(200, resp.status)
-        self.assertEqual(sorted(expected_quota_set),
-                         sorted(quota_set.keys()))
         self.assertEqual(quota_set['id'], self.tenant_id)
+        for quota in expected_quota_set:
+            self.assertIn(quota, quota_set.keys())
 
     @test.attr(type='smoke')
     def test_get_default_quotas(self):
@@ -63,9 +63,9 @@
         expected_quota_set = self.default_quota_set | set(['id'])
         resp, quota_set = self.client.get_default_quota_set(self.tenant_id)
         self.assertEqual(200, resp.status)
-        self.assertEqual(sorted(expected_quota_set),
-                         sorted(quota_set.keys()))
         self.assertEqual(quota_set['id'], self.tenant_id)
+        for quota in expected_quota_set:
+            self.assertIn(quota, quota_set.keys())
 
     @test.attr(type='smoke')
     def test_compare_tenant_quotas_with_default_quotas(self):
diff --git a/tempest/api/compute/v3/admin/test_agents.py b/tempest/api/compute/v3/admin/test_agents.py
index 9d01b71..b7c0011 100644
--- a/tempest/api/compute/v3/admin/test_agents.py
+++ b/tempest/api/compute/v3/admin/test_agents.py
@@ -23,8 +23,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AgentsAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(AgentsAdminV3Test, cls).resource_setup()
         cls.client = cls.agents_admin_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/v3/admin/test_aggregates.py b/tempest/api/compute/v3/admin/test_aggregates.py
index e5ec08b..1beeb13 100644
--- a/tempest/api/compute/v3/admin/test_aggregates.py
+++ b/tempest/api/compute/v3/admin/test_aggregates.py
@@ -28,8 +28,8 @@
     _host_key = 'os-extended-server-attributes:host'
 
     @classmethod
-    def setUpClass(cls):
-        super(AggregatesAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(AggregatesAdminV3Test, cls).resource_setup()
         cls.client = cls.aggregates_admin_client
         cls.aggregate_name_prefix = 'test_aggregate_'
         cls.az_name_prefix = 'test_az_'
@@ -135,7 +135,7 @@
         self.assertEqual(200, resp.status)
         self.assertIn((aggregate_id, new_aggregate_name, new_az_name),
                       map(lambda x:
-                         (x['id'], x['name'], x['availability_zone']),
+                          (x['id'], x['name'], x['availability_zone']),
                           aggregates))
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/v3/admin/test_aggregates_negative.py b/tempest/api/compute/v3/admin/test_aggregates_negative.py
index 1505f74..093963f 100644
--- a/tempest/api/compute/v3/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/v3/admin/test_aggregates_negative.py
@@ -27,8 +27,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AggregatesAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(AggregatesAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.aggregates_admin_client
         cls.user_client = cls.aggregates_client
         cls.aggregate_name_prefix = 'test_aggregate_'
diff --git a/tempest/api/compute/v3/admin/test_availability_zone_negative.py b/tempest/api/compute/v3/admin/test_availability_zone_negative.py
index b012e65..56cdd6c 100644
--- a/tempest/api/compute/v3/admin/test_availability_zone_negative.py
+++ b/tempest/api/compute/v3/admin/test_availability_zone_negative.py
@@ -25,8 +25,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AZAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(AZAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.availability_zone_admin_client
         cls.non_adm_client = cls.availability_zone_client
 
diff --git a/tempest/api/compute/v3/admin/test_flavors.py b/tempest/api/compute/v3/admin/test_flavors.py
index 09d76b8..f307907 100644
--- a/tempest/api/compute/v3/admin/test_flavors.py
+++ b/tempest/api/compute/v3/admin/test_flavors.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAdminV3Test, cls).resource_setup()
 
         cls.client = cls.flavors_admin_client
         cls.user_client = cls.flavors_client
diff --git a/tempest/api/compute/v3/admin/test_flavors_access.py b/tempest/api/compute/v3/admin/test_flavors_access.py
index c641bf6..c79e591 100644
--- a/tempest/api/compute/v3/admin/test_flavors_access.py
+++ b/tempest/api/compute/v3/admin/test_flavors_access.py
@@ -26,8 +26,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAccessV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAccessV3Test, cls).resource_setup()
 
         cls.client = cls.flavors_admin_client
         admin_client = cls._get_identity_admin_client()
@@ -38,7 +38,6 @@
         cls.vcpus = 1
         cls.disk = 10
 
-    @test.skip_because(bug='1265416')
     @test.attr(type='gate')
     def test_flavor_access_list_with_private_flavor(self):
         # Test to list flavor access successfully by querying private flavor
@@ -58,7 +57,6 @@
         self.assertEqual(str(new_flavor_id), str(first_flavor['flavor_id']))
         self.assertEqual(self.adm_tenant_id, first_flavor['tenant_id'])
 
-    @test.skip_because(bug='1265416')
     @test.attr(type='gate')
     def test_flavor_access_add_remove(self):
         # Test to add and remove flavor access to a given tenant.
diff --git a/tempest/api/compute/v3/admin/test_flavors_access_negative.py b/tempest/api/compute/v3/admin/test_flavors_access_negative.py
index 02ecb24..87e8cbf 100644
--- a/tempest/api/compute/v3/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/v3/admin/test_flavors_access_negative.py
@@ -29,8 +29,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAccessNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAccessNegativeV3Test, cls).resource_setup()
 
         cls.client = cls.flavors_admin_client
         cls.tenant_id = cls.client.tenant_id
@@ -55,7 +55,6 @@
                           self.client.list_flavor_access,
                           new_flavor_id)
 
-    @test.skip_because(bug='1265416')
     @test.attr(type=['negative', 'gate'])
     def test_flavor_non_admin_add(self):
         # Test to add flavor access as a user without admin privileges.
@@ -72,7 +71,6 @@
                           new_flavor['id'],
                           self.tenant_id)
 
-    @test.skip_because(bug='1265416')
     @test.attr(type=['negative', 'gate'])
     def test_flavor_non_admin_remove(self):
         # Test to remove flavor access as a user without admin privileges.
@@ -93,7 +91,6 @@
                           new_flavor['id'],
                           self.tenant_id)
 
-    @test.skip_because(bug='1265416')
     @test.attr(type=['negative', 'gate'])
     def test_add_flavor_access_duplicate(self):
         # Create a new flavor.
@@ -118,7 +115,6 @@
                           new_flavor['id'],
                           self.tenant_id)
 
-    @test.skip_because(bug='1265416')
     @test.attr(type=['negative', 'gate'])
     def test_remove_flavor_access_not_found(self):
         # Create a new flavor.
diff --git a/tempest/api/compute/v3/admin/test_flavors_extra_specs.py b/tempest/api/compute/v3/admin/test_flavors_extra_specs.py
index 29cd8db..24844b1 100644
--- a/tempest/api/compute/v3/admin/test_flavors_extra_specs.py
+++ b/tempest/api/compute/v3/admin/test_flavors_extra_specs.py
@@ -27,8 +27,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsExtraSpecsV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsExtraSpecsV3Test, cls).resource_setup()
 
         cls.client = cls.flavors_admin_client
         flavor_name = data_utils.rand_name('test_flavor')
@@ -48,10 +48,10 @@
                                                     swap=swap, rxtx=rxtx)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         resp, body = cls.client.delete_flavor(cls.flavor['id'])
         cls.client.wait_for_resource_deletion(cls.flavor['id'])
-        super(FlavorsExtraSpecsV3Test, cls).tearDownClass()
+        super(FlavorsExtraSpecsV3Test, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_flavor_set_get_update_show_unset_keys(self):
diff --git a/tempest/api/compute/v3/admin/test_flavors_extra_specs_negative.py b/tempest/api/compute/v3/admin/test_flavors_extra_specs_negative.py
index e9c04a3..5fcd7a4 100644
--- a/tempest/api/compute/v3/admin/test_flavors_extra_specs_negative.py
+++ b/tempest/api/compute/v3/admin/test_flavors_extra_specs_negative.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsExtraSpecsNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsExtraSpecsNegativeV3Test, cls).resource_setup()
 
         cls.client = cls.flavors_admin_client
         flavor_name = data_utils.rand_name('test_flavor')
@@ -49,10 +49,10 @@
                                                     swap=swap, rxtx=rxtx)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         resp, body = cls.client.delete_flavor(cls.flavor['id'])
         cls.client.wait_for_resource_deletion(cls.flavor['id'])
-        super(FlavorsExtraSpecsNegativeV3Test, cls).tearDownClass()
+        super(FlavorsExtraSpecsNegativeV3Test, cls).resource_cleanup()
 
     @test.attr(type=['negative', 'gate'])
     def test_flavor_non_admin_set_keys(self):
diff --git a/tempest/api/compute/v3/admin/test_flavors_negative.py b/tempest/api/compute/v3/admin/test_flavors_negative.py
index 6d3308e..426d13e 100644
--- a/tempest/api/compute/v3/admin/test_flavors_negative.py
+++ b/tempest/api/compute/v3/admin/test_flavors_negative.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorsAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorsAdminNegativeV3Test, cls).resource_setup()
 
         cls.client = cls.flavors_admin_client
         cls.user_client = cls.flavors_client
diff --git a/tempest/api/compute/v3/admin/test_hosts.py b/tempest/api/compute/v3/admin/test_hosts.py
index 8cb1f23..898a704 100644
--- a/tempest/api/compute/v3/admin/test_hosts.py
+++ b/tempest/api/compute/v3/admin/test_hosts.py
@@ -24,8 +24,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HostsAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(HostsAdminV3Test, cls).resource_setup()
         cls.client = cls.hosts_admin_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/v3/admin/test_hosts_negative.py b/tempest/api/compute/v3/admin/test_hosts_negative.py
index 79cd97f..2b82baa 100644
--- a/tempest/api/compute/v3/admin/test_hosts_negative.py
+++ b/tempest/api/compute/v3/admin/test_hosts_negative.py
@@ -25,8 +25,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HostsAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(HostsAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.hosts_admin_client
         cls.non_admin_client = cls.hosts_client
 
diff --git a/tempest/api/compute/v3/admin/test_hypervisor.py b/tempest/api/compute/v3/admin/test_hypervisor.py
index 9a23789..831e20f 100644
--- a/tempest/api/compute/v3/admin/test_hypervisor.py
+++ b/tempest/api/compute/v3/admin/test_hypervisor.py
@@ -24,8 +24,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HypervisorAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(HypervisorAdminV3Test, cls).resource_setup()
         cls.client = cls.hypervisor_admin_client
 
     def _list_hypervisors(self):
diff --git a/tempest/api/compute/v3/admin/test_hypervisor_negative.py b/tempest/api/compute/v3/admin/test_hypervisor_negative.py
index ae4df15..df23b46 100644
--- a/tempest/api/compute/v3/admin/test_hypervisor_negative.py
+++ b/tempest/api/compute/v3/admin/test_hypervisor_negative.py
@@ -28,8 +28,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(HypervisorAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(HypervisorAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.hypervisor_admin_client
         cls.non_adm_client = cls.hypervisor_client
 
diff --git a/tempest/api/compute/v3/admin/test_quotas.py b/tempest/api/compute/v3/admin/test_quotas.py
index 19c31fe..3dad45c 100644
--- a/tempest/api/compute/v3/admin/test_quotas.py
+++ b/tempest/api/compute/v3/admin/test_quotas.py
@@ -25,8 +25,8 @@
     force_tenant_isolation = True
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasAdminV3Test, cls).resource_setup()
         cls.client = cls.quotas_client
         cls.adm_client = cls.quotas_admin_client
 
diff --git a/tempest/api/compute/v3/admin/test_quotas_negative.py b/tempest/api/compute/v3/admin/test_quotas_negative.py
index 7739f09..86abcab 100644
--- a/tempest/api/compute/v3/admin/test_quotas_negative.py
+++ b/tempest/api/compute/v3/admin/test_quotas_negative.py
@@ -23,8 +23,8 @@
     force_tenant_isolation = True
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.quotas_client
         cls.adm_client = cls.quotas_admin_client
 
@@ -34,7 +34,6 @@
 
     # TODO(afazekas): Add dedicated tenant to the skipped quota tests
     # it can be moved into the setUpClass as well
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_create_server_when_cpu_quota_is_full(self):
         # Disallow server creation when tenant's vcpu quota is full
@@ -48,9 +47,9 @@
 
         self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
                         cores=default_vcpu_quota)
-        self.assertRaises(exceptions.Unauthorized, self.create_test_server)
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
+                          self.create_test_server)
 
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_create_server_when_memory_quota_is_full(self):
         # Disallow server creation when tenant's memory quota is full
@@ -64,7 +63,8 @@
 
         self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
                         ram=default_mem_quota)
-        self.assertRaises(exceptions.Unauthorized, self.create_test_server)
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
+                          self.create_test_server)
 
     @test.attr(type=['negative', 'gate'])
     def test_update_quota_normal_user(self):
@@ -73,7 +73,6 @@
                           self.demo_tenant_id,
                           ram=0)
 
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_create_server_when_instances_quota_is_full(self):
         # Once instances quota limit is reached, disallow server creation
@@ -86,4 +85,5 @@
                                          instances=instances_quota)
         self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
                         instances=default_instances_quota)
-        self.assertRaises(exceptions.Unauthorized, self.create_test_server)
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
+                          self.create_test_server)
diff --git a/tempest/api/compute/v3/admin/test_servers.py b/tempest/api/compute/v3/admin/test_servers.py
index 366cfc6..36ea7ba 100644
--- a/tempest/api/compute/v3/admin/test_servers.py
+++ b/tempest/api/compute/v3/admin/test_servers.py
@@ -26,9 +26,8 @@
     _host_key = 'os-extended-server-attributes:host'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ServersAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersAdminV3Test, cls).resource_setup()
         cls.client = cls.servers_admin_client
         cls.non_admin_client = cls.servers_client
         cls.flavors_client = cls.flavors_admin_client
@@ -51,7 +50,6 @@
         self.assertEqual('200', resp['status'])
         self.assertEqual([], servers)
 
-    @test.skip_because(bug='1265416')
     @test.attr(type='gate')
     def test_list_servers_by_admin_with_all_tenants(self):
         # Listing servers by admin user with all tenants parameter
diff --git a/tempest/api/compute/v3/admin/test_servers_negative.py b/tempest/api/compute/v3/admin/test_servers_negative.py
index 5eb6395..f561ed3 100644
--- a/tempest/api/compute/v3/admin/test_servers_negative.py
+++ b/tempest/api/compute/v3/admin/test_servers_negative.py
@@ -17,6 +17,7 @@
 import testtools
 
 from tempest.api.compute import base
+from tempest.common import tempest_fixtures as fixtures
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import exceptions
@@ -32,8 +33,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ServersAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.servers_admin_client
         cls.non_adm_client = cls.servers_client
         cls.flavors_client = cls.flavors_admin_client
@@ -54,9 +55,10 @@
             flavor_id = data_utils.rand_int_id(start=1000)
         return flavor_id
 
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_resize_server_using_overlimit_ram(self):
+        # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
+        self.useFixture(fixtures.LockFixture('compute_quotas'))
         flavor_name = data_utils.rand_name("flavor-")
         flavor_id = self._get_unused_flavor_id()
         resp, quota_set = self.quotas_client.get_default_quota_set(
@@ -68,14 +70,15 @@
                                                              ram, vcpus, disk,
                                                              flavor_id)
         self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
-        self.assertRaises(exceptions.Unauthorized,
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
                           self.client.resize,
                           self.servers[0]['id'],
                           flavor_ref['id'])
 
-    @test.skip_because(bug="1298131")
     @test.attr(type=['negative', 'gate'])
     def test_resize_server_using_overlimit_vcpus(self):
+        # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
+        self.useFixture(fixtures.LockFixture('compute_quotas'))
         flavor_name = data_utils.rand_name("flavor-")
         flavor_id = self._get_unused_flavor_id()
         ram = 512
@@ -87,7 +90,7 @@
                                                              ram, vcpus, disk,
                                                              flavor_id)
         self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
-        self.assertRaises(exceptions.Unauthorized,
+        self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
                           self.client.resize,
                           self.servers[0]['id'],
                           flavor_ref['id'])
diff --git a/tempest/api/compute/v3/admin/test_services.py b/tempest/api/compute/v3/admin/test_services.py
index e6efb70..f1c3b9a 100644
--- a/tempest/api/compute/v3/admin/test_services.py
+++ b/tempest/api/compute/v3/admin/test_services.py
@@ -25,8 +25,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ServicesAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServicesAdminV3Test, cls).resource_setup()
         cls.client = cls.services_admin_client
 
     @test.attr(type='gate')
diff --git a/tempest/api/compute/v3/admin/test_services_negative.py b/tempest/api/compute/v3/admin/test_services_negative.py
index 6ac78d4..1f9f2b1 100644
--- a/tempest/api/compute/v3/admin/test_services_negative.py
+++ b/tempest/api/compute/v3/admin/test_services_negative.py
@@ -26,8 +26,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ServicesAdminNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServicesAdminNegativeV3Test, cls).resource_setup()
         cls.client = cls.services_admin_client
         cls.non_admin_client = cls.services_client
 
diff --git a/tempest/api/compute/v3/flavors/test_flavors_negative.py b/tempest/api/compute/v3/flavors/test_flavors_negative.py
index 657e2cd..2dd7b71 100644
--- a/tempest/api/compute/v3/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/v3/flavors/test_flavors_negative.py
@@ -14,6 +14,7 @@
 #    under the License.
 
 from tempest.api.compute import base
+from tempest.api_schema.request.compute.v3 import flavors
 from tempest import test
 
 
@@ -24,16 +25,16 @@
 class FlavorsListNegativeV3Test(base.BaseV3ComputeTest,
                                 test.NegativeAutoTest):
     _service = 'computev3'
-    _schema_file = 'compute/flavors/flavors_list_v3.json'
+    _schema = flavors.flavor_list
 
 
 @test.SimpleNegativeAutoTest
 class FlavorDetailsNegativeV3Test(base.BaseV3ComputeTest,
                                   test.NegativeAutoTest):
     _service = 'computev3'
-    _schema_file = 'compute/flavors/flavor_details_v3.json'
+    _schema = flavors.flavors_details
 
     @classmethod
-    def setUpClass(cls):
-        super(FlavorDetailsNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(FlavorDetailsNegativeV3Test, cls).resource_setup()
         cls.set_resource("flavor", cls.flavor_ref)
diff --git a/tempest/api/compute/v3/images/test_images.py b/tempest/api/compute/v3/images/test_images.py
index bb81626..a234a22 100644
--- a/tempest/api/compute/v3/images/test_images.py
+++ b/tempest/api/compute/v3/images/test_images.py
@@ -23,8 +23,8 @@
 class ImagesV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesV3Test, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
diff --git a/tempest/api/compute/v3/images/test_images_negative.py b/tempest/api/compute/v3/images/test_images_negative.py
index 0705bdc..83e9436 100644
--- a/tempest/api/compute/v3/images/test_images_negative.py
+++ b/tempest/api/compute/v3/images/test_images_negative.py
@@ -24,8 +24,8 @@
 class ImagesNegativeV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesNegativeV3Test, cls).resource_setup()
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
diff --git a/tempest/api/compute/v3/images/test_images_oneserver.py b/tempest/api/compute/v3/images/test_images_oneserver.py
index 795437b..87e730c 100644
--- a/tempest/api/compute/v3/images/test_images_oneserver.py
+++ b/tempest/api/compute/v3/images/test_images_oneserver.py
@@ -47,19 +47,15 @@
         super(ImagesOneServerV3Test, self).tearDown()
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesOneServerV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesOneServerV3Test, cls).resource_setup()
         cls.client = cls.images_client
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
 
-        try:
-            resp, server = cls.create_test_server(wait_until='ACTIVE')
-            cls.server_id = server['id']
-        except Exception:
-            cls.tearDownClass()
-            raise
+        resp, server = cls.create_test_server(wait_until='ACTIVE')
+        cls.server_id = server['id']
 
     def _get_default_flavor_disk_size(self, flavor_id):
         resp, flavor = self.flavors_client.get_flavor_details(flavor_id)
diff --git a/tempest/api/compute/v3/images/test_images_oneserver_negative.py b/tempest/api/compute/v3/images/test_images_oneserver_negative.py
index eed81c6..5892cd7 100644
--- a/tempest/api/compute/v3/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/v3/images/test_images_oneserver_negative.py
@@ -55,19 +55,15 @@
         self.__class__.server_id = self.rebuild_server(self.server_id)
 
     @classmethod
-    def setUpClass(cls):
-        super(ImagesOneServerNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ImagesOneServerNegativeV3Test, cls).resource_setup()
         cls.client = cls.images_client
         if not CONF.service_available.glance:
             skip_msg = ("%s skipped as glance is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
 
-        try:
-            resp, server = cls.create_test_server(wait_until='ACTIVE')
-            cls.server_id = server['id']
-        except Exception:
-            cls.tearDownClass()
-            raise
+        resp, server = cls.create_test_server(wait_until='ACTIVE')
+        cls.server_id = server['id']
 
         cls.image_ids = []
 
diff --git a/tempest/api/compute/v3/keypairs/test_keypairs_negative.py b/tempest/api/compute/v3/keypairs/test_keypairs_negative.py
index e426b85..1f7206a 100644
--- a/tempest/api/compute/v3/keypairs/test_keypairs_negative.py
+++ b/tempest/api/compute/v3/keypairs/test_keypairs_negative.py
@@ -23,8 +23,8 @@
 class KeyPairsNegativeV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(KeyPairsNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(KeyPairsNegativeV3Test, cls).resource_setup()
         cls.client = cls.keypairs_client
 
     def _create_keypair(self, keypair_name, pub_key=None):
diff --git a/tempest/api/compute/v3/servers/test_attach_interfaces.py b/tempest/api/compute/v3/servers/test_attach_interfaces.py
index c2cf7e0..d4d4fca 100644
--- a/tempest/api/compute/v3/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/v3/servers/test_attach_interfaces.py
@@ -26,14 +26,14 @@
 class AttachInterfacesV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.neutron:
             raise cls.skipException("Neutron is required")
         if not CONF.compute_feature_enabled.interface_attach:
             raise cls.skipException("Interface attachment is not available.")
         # This test class requires network and subnet
         cls.set_network_resources(network=True, subnet=True)
-        super(AttachInterfacesV3Test, cls).setUpClass()
+        super(AttachInterfacesV3Test, cls).resource_setup()
         cls.client = cls.interfaces_client
 
     def _check_interface(self, iface, port_id=None, network_id=None,
diff --git a/tempest/api/compute/v3/servers/test_attach_volume.py b/tempest/api/compute/v3/servers/test_attach_volume.py
index e994c7f..76b5549 100644
--- a/tempest/api/compute/v3/servers/test_attach_volume.py
+++ b/tempest/api/compute/v3/servers/test_attach_volume.py
@@ -32,9 +32,9 @@
         self.attached = False
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(AttachVolumeV3Test, cls).setUpClass()
+        super(AttachVolumeV3Test, cls).resource_setup()
         cls.device = CONF.compute.volume_device_name
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
diff --git a/tempest/api/compute/v3/servers/test_create_server.py b/tempest/api/compute/v3/servers/test_create_server.py
index c59fe91..bcd6176 100644
--- a/tempest/api/compute/v3/servers/test_create_server.py
+++ b/tempest/api/compute/v3/servers/test_create_server.py
@@ -31,9 +31,9 @@
     disk_config = 'AUTO'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(ServersV3Test, cls).setUpClass()
+        super(ServersV3Test, cls).resource_setup()
         cls.meta = {'hello': 'world'}
         cls.accessIPv4 = '1.1.1.1'
         cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2'
@@ -108,9 +108,9 @@
     disk_config = 'AUTO'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(ServersWithSpecificFlavorV3Test, cls).setUpClass()
+        super(ServersWithSpecificFlavorV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         cls.flavor_client = cls.flavors_admin_client
 
@@ -192,8 +192,8 @@
     disk_config = 'MANUAL'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.disk_config:
             msg = "DiskConfig extension not enabled."
             raise cls.skipException(msg)
-        super(ServersV3TestManualDisk, cls).setUpClass()
+        super(ServersV3TestManualDisk, cls).resource_setup()
diff --git a/tempest/api/compute/v3/servers/test_delete_server.py b/tempest/api/compute/v3/servers/test_delete_server.py
index e2b47ee..ab10b4c 100644
--- a/tempest/api/compute/v3/servers/test_delete_server.py
+++ b/tempest/api/compute/v3/servers/test_delete_server.py
@@ -26,8 +26,8 @@
     # for preventing "Quota exceeded for instances".
 
     @classmethod
-    def setUpClass(cls):
-        super(DeleteServersV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(DeleteServersV3Test, cls).resource_setup()
         cls.client = cls.servers_client
 
     @test.attr(type='gate')
@@ -128,8 +128,8 @@
     # for preventing "Quota exceeded for instances".
 
     @classmethod
-    def setUpClass(cls):
-        super(DeleteServersAdminV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(DeleteServersAdminV3Test, cls).resource_setup()
         cls.non_admin_client = cls.servers_client
         cls.admin_client = cls.servers_admin_client
 
diff --git a/tempest/api/compute/v3/servers/test_instance_actions.py b/tempest/api/compute/v3/servers/test_instance_actions.py
index 4c2dcbe..227f6cd 100644
--- a/tempest/api/compute/v3/servers/test_instance_actions.py
+++ b/tempest/api/compute/v3/servers/test_instance_actions.py
@@ -20,8 +20,8 @@
 class InstanceActionsV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceActionsV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceActionsV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         resp, server = cls.create_test_server(wait_until='ACTIVE')
         cls.resp = resp
diff --git a/tempest/api/compute/v3/servers/test_instance_actions_negative.py b/tempest/api/compute/v3/servers/test_instance_actions_negative.py
index 0b2c6f9..b9d4be2 100644
--- a/tempest/api/compute/v3/servers/test_instance_actions_negative.py
+++ b/tempest/api/compute/v3/servers/test_instance_actions_negative.py
@@ -22,8 +22,8 @@
 class InstanceActionsNegativeV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceActionsNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceActionsNegativeV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         resp, server = cls.create_test_server(wait_until='ACTIVE')
         cls.server_id = server['id']
diff --git a/tempest/api/compute/v3/servers/test_list_server_filters.py b/tempest/api/compute/v3/servers/test_list_server_filters.py
index 778b033..209d293 100644
--- a/tempest/api/compute/v3/servers/test_list_server_filters.py
+++ b/tempest/api/compute/v3/servers/test_list_server_filters.py
@@ -26,10 +26,9 @@
 class ListServerFiltersV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources(network=True, subnet=True, dhcp=True)
-        super(ListServerFiltersV3Test, cls).setUpClass()
+        super(ListServerFiltersV3Test, cls).resource_setup()
         cls.client = cls.servers_client
 
         # Check to see if the alternate image ref actually exists...
diff --git a/tempest/api/compute/v3/servers/test_list_servers_negative.py b/tempest/api/compute/v3/servers/test_list_servers_negative.py
index 18e5c67..67e1155 100644
--- a/tempest/api/compute/v3/servers/test_list_servers_negative.py
+++ b/tempest/api/compute/v3/servers/test_list_servers_negative.py
@@ -26,9 +26,8 @@
     force_tenant_isolation = True
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ListServersNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListServersNegativeV3Test, cls).resource_setup()
         cls.client = cls.servers_client
 
         # The following servers are created for use
diff --git a/tempest/api/compute/v3/servers/test_server_actions.py b/tempest/api/compute/v3/servers/test_server_actions.py
index 4404043..a4e8dba 100644
--- a/tempest/api/compute/v3/servers/test_server_actions.py
+++ b/tempest/api/compute/v3/servers/test_server_actions.py
@@ -14,9 +14,9 @@
 #    under the License.
 
 import logging
+import urlparse
 
 import testtools
-import urlparse
 
 from tempest.api.compute import base
 from tempest.common.utils import data_utils
@@ -51,9 +51,9 @@
         super(ServerActionsV3Test, self).tearDown()
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(ServerActionsV3Test, cls).setUpClass()
+        super(ServerActionsV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         cls.server_id = cls.rebuild_server(None)
 
@@ -250,6 +250,8 @@
         resp, server = self.client.get_server(self.server_id)
         self.assertEqual(previous_flavor_ref, server['flavor']['id'])
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting not available, backup not possible.')
     @test.attr(type='gate')
     def test_create_backup(self):
         # Positive test:create backup successfully and rotate backups correctly
@@ -339,6 +341,8 @@
         lines = len(output.split('\n'))
         self.assertEqual(lines, 10)
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
+                          'Console output not supported.')
     @test.attr(type='gate')
     def test_get_console_output(self):
         # Positive test:Should be able to GET the console output
@@ -355,6 +359,8 @@
 
         self.wait_for(self._get_output)
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.console_output,
+                          'Console output not supported.')
     @test.attr(type='gate')
     def test_get_console_output_server_id_in_shutoff_status(self):
         # Positive test:Should be able to GET the console output
diff --git a/tempest/api/compute/v3/servers/test_server_addresses.py b/tempest/api/compute/v3/servers/test_server_addresses.py
index efd7500..0590146 100644
--- a/tempest/api/compute/v3/servers/test_server_addresses.py
+++ b/tempest/api/compute/v3/servers/test_server_addresses.py
@@ -23,10 +23,10 @@
 class ServerAddressesV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # This test module might use a network and a subnet
         cls.set_network_resources(network=True, subnet=True)
-        super(ServerAddressesV3Test, cls).setUpClass()
+        super(ServerAddressesV3Test, cls).resource_setup()
         cls.client = cls.servers_client
 
         resp, cls.server = cls.create_test_server(wait_until='ACTIVE')
diff --git a/tempest/api/compute/v3/servers/test_server_addresses_negative.py b/tempest/api/compute/v3/servers/test_server_addresses_negative.py
index 8a9877b..7a1b6fc 100644
--- a/tempest/api/compute/v3/servers/test_server_addresses_negative.py
+++ b/tempest/api/compute/v3/servers/test_server_addresses_negative.py
@@ -23,10 +23,10 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # This test module might use a network and a subnet
         cls.set_network_resources(network=True, subnet=True)
-        super(ServerAddressesV3NegativeTest, cls).setUpClass()
+        super(ServerAddressesV3NegativeTest, cls).resource_setup()
         cls.client = cls.servers_client
 
         resp, cls.server = cls.create_test_server(wait_until='ACTIVE')
diff --git a/tempest/api/compute/v3/servers/test_server_metadata.py b/tempest/api/compute/v3/servers/test_server_metadata.py
index c5443ee..ccdfbad 100644
--- a/tempest/api/compute/v3/servers/test_server_metadata.py
+++ b/tempest/api/compute/v3/servers/test_server_metadata.py
@@ -20,8 +20,8 @@
 class ServerMetadataV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerMetadataV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerMetadataV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         cls.quotas = cls.quotas_client
         resp, server = cls.create_test_server(meta={}, wait_until='ACTIVE')
diff --git a/tempest/api/compute/v3/servers/test_server_metadata_negative.py b/tempest/api/compute/v3/servers/test_server_metadata_negative.py
index f746be3..036b126 100644
--- a/tempest/api/compute/v3/servers/test_server_metadata_negative.py
+++ b/tempest/api/compute/v3/servers/test_server_metadata_negative.py
@@ -21,8 +21,8 @@
 class ServerMetadataV3NegativeTest(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerMetadataV3NegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerMetadataV3NegativeTest, cls).resource_setup()
         cls.client = cls.servers_client
         cls.quotas = cls.quotas_client
         cls.tenant_id = cls.client.tenant_id
diff --git a/tempest/api/compute/v3/servers/test_server_password.py b/tempest/api/compute/v3/servers/test_server_password.py
index fc0b145..bb0e310 100644
--- a/tempest/api/compute/v3/servers/test_server_password.py
+++ b/tempest/api/compute/v3/servers/test_server_password.py
@@ -21,8 +21,8 @@
 class ServerPasswordV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServerPasswordV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServerPasswordV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         resp, cls.server = cls.create_test_server(wait_until="ACTIVE")
 
diff --git a/tempest/api/compute/v3/servers/test_server_rescue.py b/tempest/api/compute/v3/servers/test_server_rescue.py
index da58f26..ae21a7e 100644
--- a/tempest/api/compute/v3/servers/test_server_rescue.py
+++ b/tempest/api/compute/v3/servers/test_server_rescue.py
@@ -23,11 +23,11 @@
 class ServerRescueV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.rescue:
             msg = "Server rescue not available."
             raise cls.skipException(msg)
-        super(ServerRescueV3Test, cls).setUpClass()
+        super(ServerRescueV3Test, cls).resource_setup()
 
         # Server for positive tests
         resp, server = cls.create_test_server(wait_until='BUILD')
diff --git a/tempest/api/compute/v3/servers/test_server_rescue_negative.py b/tempest/api/compute/v3/servers/test_server_rescue_negative.py
index 5eb6c9a..db26298 100644
--- a/tempest/api/compute/v3/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/v3/servers/test_server_rescue_negative.py
@@ -26,14 +26,13 @@
 class ServerRescueNegativeV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.compute_feature_enabled.rescue:
             msg = "Server rescue not available."
             raise cls.skipException(msg)
 
-        super(ServerRescueNegativeV3Test, cls).setUpClass()
-        cls.device = 'vdf'
+        super(ServerRescueNegativeV3Test, cls).resource_setup()
+        cls.device = CONF.compute.volume_device_name
 
         # Create a volume and wait for it to become ready for attach
         resp, cls.volume = cls.volumes_client.create_volume(
@@ -55,9 +54,10 @@
         cls.servers_client.wait_for_server_status(cls.server_id, 'ACTIVE')
 
     @classmethod
-    def tearDownClass(cls):
-        cls.delete_volume(cls.volume['id'])
-        super(ServerRescueNegativeV3Test, cls).tearDownClass()
+    def resource_cleanup(cls):
+        if hasattr(cls, 'volume'):
+            cls.delete_volume(cls.volume['id'])
+        super(ServerRescueNegativeV3Test, cls).resource_cleanup()
 
     def _detach(self, server_id, volume_id):
         self.servers_client.detach_volume(server_id, volume_id)
diff --git a/tempest/api/compute/v3/servers/test_servers.py b/tempest/api/compute/v3/servers/test_servers.py
index 426ee8d..e09f4a8 100644
--- a/tempest/api/compute/v3/servers/test_servers.py
+++ b/tempest/api/compute/v3/servers/test_servers.py
@@ -21,8 +21,8 @@
 class ServersV3Test(base.BaseV3ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ServersV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersV3Test, cls).resource_setup()
         cls.client = cls.servers_client
 
     def tearDown(self):
diff --git a/tempest/api/compute/v3/servers/test_servers_negative.py b/tempest/api/compute/v3/servers/test_servers_negative.py
index f8ff7c8..30ac0ac 100644
--- a/tempest/api/compute/v3/servers/test_servers_negative.py
+++ b/tempest/api/compute/v3/servers/test_servers_negative.py
@@ -42,8 +42,8 @@
             super(ServersNegativeV3Test, self).tearDown()
 
     @classmethod
-    def setUpClass(cls):
-        super(ServersNegativeV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServersNegativeV3Test, cls).resource_setup()
         cls.client = cls.servers_client
         if CONF.compute.allow_tenant_isolation:
             cls.alt_os = clients.Manager(cls.isolated_creds.get_alt_creds())
diff --git a/tempest/api/compute/v3/test_live_block_migration.py b/tempest/api/compute/v3/test_live_block_migration.py
index 6ca37e6..d6231b7 100644
--- a/tempest/api/compute/v3/test_live_block_migration.py
+++ b/tempest/api/compute/v3/test_live_block_migration.py
@@ -26,8 +26,8 @@
     _host_key = 'os-extended-server-attributes:host'
 
     @classmethod
-    def setUpClass(cls):
-        super(LiveBlockMigrationV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(LiveBlockMigrationV3Test, cls).resource_setup()
 
         cls.admin_hosts_client = cls.hosts_admin_client
         cls.admin_servers_client = cls.servers_admin_client
diff --git a/tempest/api/compute/v3/test_live_block_migration_negative.py b/tempest/api/compute/v3/test_live_block_migration_negative.py
index b4ec505..93127f3 100644
--- a/tempest/api/compute/v3/test_live_block_migration_negative.py
+++ b/tempest/api/compute/v3/test_live_block_migration_negative.py
@@ -27,8 +27,8 @@
     _host_key = 'os-extended-server-attributes:host'
 
     @classmethod
-    def setUpClass(cls):
-        super(LiveBlockMigrationV3NegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(LiveBlockMigrationV3NegativeTest, cls).resource_setup()
         if not CONF.compute_feature_enabled.live_migration:
             raise cls.skipException("Live migration is not enabled")
 
diff --git a/tempest/api/compute/v3/test_quotas.py b/tempest/api/compute/v3/test_quotas.py
index ecf70cf..f6d8b3f 100644
--- a/tempest/api/compute/v3/test_quotas.py
+++ b/tempest/api/compute/v3/test_quotas.py
@@ -26,8 +26,8 @@
         super(QuotasV3Test, self).setUp()
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasV3Test, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasV3Test, cls).resource_setup()
         cls.client = cls.quotas_client
         cls.tenant_id = cls.client.tenant_id
         cls.user_id = cls.client.user_id
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 5a64544..484c34d 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -32,9 +32,9 @@
         self.attached = False
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.prepare_instance_network()
-        super(AttachVolumeTestJSON, cls).setUpClass()
+        super(AttachVolumeTestJSON, cls).resource_setup()
         cls.device = CONF.compute.volume_device_name
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
diff --git a/tempest/api/compute/volumes/test_volumes_get.py b/tempest/api/compute/volumes/test_volumes_get.py
index c3d6ba6..4f77fa7 100644
--- a/tempest/api/compute/volumes/test_volumes_get.py
+++ b/tempest/api/compute/volumes/test_volumes_get.py
@@ -13,11 +13,13 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from testtools import matchers
+
 from tempest.api.compute import base
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import test
-from testtools import matchers
+
 
 CONF = config.CONF
 
@@ -25,8 +27,8 @@
 class VolumesGetTestJSON(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesGetTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesGetTestJSON, cls).resource_setup()
         cls.client = cls.volumes_extensions_client
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
diff --git a/tempest/api/compute/volumes/test_volumes_list.py b/tempest/api/compute/volumes/test_volumes_list.py
index 25a8547..dc54c67 100644
--- a/tempest/api/compute/volumes/test_volumes_list.py
+++ b/tempest/api/compute/volumes/test_volumes_list.py
@@ -32,8 +32,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesTestJSON, cls).resource_setup()
         cls.client = cls.volumes_extensions_client
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
@@ -69,11 +69,11 @@
                 raise
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Delete the created Volumes
         for volume in cls.volume_list:
             cls.delete_volume(volume['id'])
-        super(VolumesTestJSON, cls).tearDownClass()
+        super(VolumesTestJSON, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_volume_list(self):
diff --git a/tempest/api/compute/volumes/test_volumes_negative.py b/tempest/api/compute/volumes/test_volumes_negative.py
index 5dfbad7..ad94ea7 100644
--- a/tempest/api/compute/volumes/test_volumes_negative.py
+++ b/tempest/api/compute/volumes/test_volumes_negative.py
@@ -27,8 +27,8 @@
 class VolumesNegativeTest(base.BaseV2ComputeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesNegativeTest, cls).resource_setup()
         cls.client = cls.volumes_extensions_client
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
diff --git a/tempest/api/data_processing/base.py b/tempest/api/data_processing/base.py
index 65085b9..2ec1017 100644
--- a/tempest/api/data_processing/base.py
+++ b/tempest/api/data_processing/base.py
@@ -24,8 +24,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseDataProcessingTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseDataProcessingTest, cls).resource_setup()
         if not CONF.service_available.sahara:
             raise cls.skipException('Sahara support is required')
 
@@ -43,7 +43,7 @@
         cls._jobs = []
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.cleanup_resources(getattr(cls, '_cluster_templates', []),
                               cls.client.delete_cluster_template)
         cls.cleanup_resources(getattr(cls, '_node_group_templates', []),
@@ -56,7 +56,7 @@
         cls.cleanup_resources(getattr(cls, '_data_sources', []),
                               cls.client.delete_data_source)
         cls.clear_isolated_creds()
-        super(BaseDataProcessingTest, cls).tearDownClass()
+        super(BaseDataProcessingTest, cls).resource_cleanup()
 
     @staticmethod
     def cleanup_resources(resource_id_list, method):
diff --git a/tempest/api/data_processing/test_cluster_templates.py b/tempest/api/data_processing/test_cluster_templates.py
index ff67c1c..537f90c 100644
--- a/tempest/api/data_processing/test_cluster_templates.py
+++ b/tempest/api/data_processing/test_cluster_templates.py
@@ -22,9 +22,8 @@
     sahara/restapi/rest_api_v1.0.html#cluster-templates
     """
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ClusterTemplateTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ClusterTemplateTest, cls).resource_setup()
         # create node group template
         node_group_template = {
             'name': data_utils.rand_name('sahara-ng-template'),
diff --git a/tempest/api/data_processing/test_data_sources.py b/tempest/api/data_processing/test_data_sources.py
index aae56c4..3650751 100644
--- a/tempest/api/data_processing/test_data_sources.py
+++ b/tempest/api/data_processing/test_data_sources.py
@@ -19,8 +19,8 @@
 
 class DataSourceTest(dp_base.BaseDataProcessingTest):
     @classmethod
-    def setUpClass(cls):
-        super(DataSourceTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(DataSourceTest, cls).resource_setup()
         cls.swift_data_source_with_creds = {
             'url': 'swift://sahara-container.sahara/input-source',
             'description': 'Test data source',
diff --git a/tempest/api/data_processing/test_job_binaries.py b/tempest/api/data_processing/test_job_binaries.py
index 15ee145..d006991 100644
--- a/tempest/api/data_processing/test_job_binaries.py
+++ b/tempest/api/data_processing/test_job_binaries.py
@@ -22,9 +22,8 @@
     sahara/restapi/rest_api_v1.1_EDP.html#job-binaries
     """
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(JobBinaryTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(JobBinaryTest, cls).resource_setup()
         cls.swift_job_binary_with_extra = {
             'url': 'swift://sahara-container.sahara/example.jar',
             'description': 'Test job binary',
diff --git a/tempest/api/data_processing/test_job_binary_internals.py b/tempest/api/data_processing/test_job_binary_internals.py
index 45e1140..7e99867 100644
--- a/tempest/api/data_processing/test_job_binary_internals.py
+++ b/tempest/api/data_processing/test_job_binary_internals.py
@@ -22,8 +22,8 @@
     sahara/restapi/rest_api_v1.1_EDP.html#job-binary-internals
     """
     @classmethod
-    def setUpClass(cls):
-        super(JobBinaryInternalTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(JobBinaryInternalTest, cls).resource_setup()
         cls.job_binary_internal_data = 'Some script may be data'
 
     def _create_job_binary_internal(self, binary_name=None):
diff --git a/tempest/api/data_processing/test_jobs.py b/tempest/api/data_processing/test_jobs.py
index 8591dbd..5af2eef 100644
--- a/tempest/api/data_processing/test_jobs.py
+++ b/tempest/api/data_processing/test_jobs.py
@@ -22,9 +22,8 @@
     sahara/restapi/rest_api_v1.1_EDP.html#jobs
     """
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(JobTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(JobTest, cls).resource_setup()
         # create job binary
         job_binary = {
             'name': data_utils.rand_name('sahara-job-binary'),
diff --git a/tempest/api/data_processing/test_node_group_templates.py b/tempest/api/data_processing/test_node_group_templates.py
index c2c0075..f3f59fc 100644
--- a/tempest/api/data_processing/test_node_group_templates.py
+++ b/tempest/api/data_processing/test_node_group_templates.py
@@ -19,8 +19,8 @@
 
 class NodeGroupTemplateTest(dp_base.BaseDataProcessingTest):
     @classmethod
-    def setUpClass(cls):
-        super(NodeGroupTemplateTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(NodeGroupTemplateTest, cls).resource_setup()
         cls.node_group_template = {
             'description': 'Test node group template',
             'plugin_name': 'vanilla',
diff --git a/tempest/api/database/base.py b/tempest/api/database/base.py
index b68c84a..c9f16ca 100644
--- a/tempest/api/database/base.py
+++ b/tempest/api/database/base.py
@@ -25,11 +25,10 @@
     """Base test case class for all Database API tests."""
 
     _interface = 'json'
-    force_tenant_isolation = False
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseDatabaseTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseDatabaseTest, cls).resource_setup()
         if not CONF.service_available.trove:
             skip_msg = ("%s skipped as trove is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
diff --git a/tempest/api/database/flavors/test_flavors.py b/tempest/api/database/flavors/test_flavors.py
index 64d71b9..aed1abe 100644
--- a/tempest/api/database/flavors/test_flavors.py
+++ b/tempest/api/database/flavors/test_flavors.py
@@ -20,15 +20,14 @@
 class DatabaseFlavorsTest(base.BaseDatabaseTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(DatabaseFlavorsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(DatabaseFlavorsTest, cls).resource_setup()
         cls.client = cls.database_flavors_client
 
     @test.attr(type='smoke')
     def test_get_db_flavor(self):
         # The expected flavor details should be returned
-        resp, flavor = self.client.get_db_flavor_details(self.db_flavor_ref)
-        self.assertEqual(200, resp.status)
+        _, flavor = self.client.get_db_flavor_details(self.db_flavor_ref)
         self.assertEqual(self.db_flavor_ref, str(flavor['id']))
         self.assertIn('ram', flavor)
         self.assertIn('links', flavor)
@@ -36,11 +35,9 @@
 
     @test.attr(type='smoke')
     def test_list_db_flavors(self):
-        resp, flavor = self.client.get_db_flavor_details(self.db_flavor_ref)
-        self.assertEqual(200, resp.status)
+        _, flavor = self.client.get_db_flavor_details(self.db_flavor_ref)
         # List of all flavors should contain the expected flavor
-        resp, flavors = self.client.list_db_flavors()
-        self.assertEqual(200, resp.status)
+        _, flavors = self.client.list_db_flavors()
         self.assertIn(flavor, flavors)
 
     def _check_values(self, names, db_flavor, os_flavor, in_db=True):
@@ -55,18 +52,16 @@
                 self.assertNotIn(name, db_flavor)
 
     @test.attr(type='smoke')
+    @test.services('compute')
     def test_compare_db_flavors_with_os(self):
-        resp, db_flavors = self.client.list_db_flavors()
-        self.assertEqual(200, resp.status)
-        resp, os_flavors = self.os_flavors_client.list_flavors_with_detail()
-        self.assertEqual(200, resp.status)
+        _, db_flavors = self.client.list_db_flavors()
+        _, os_flavors = self.os_flavors_client.list_flavors_with_detail()
         self.assertEqual(len(os_flavors), len(db_flavors),
                          "OS flavors %s do not match DB flavors %s" %
                          (os_flavors, db_flavors))
         for os_flavor in os_flavors:
-            resp, db_flavor =\
+            _, db_flavor =\
                 self.client.get_db_flavor_details(os_flavor['id'])
-            self.assertEqual(200, resp.status)
             self._check_values(['id', 'name', 'ram'], db_flavor, os_flavor)
             self._check_values(['disk', 'vcpus', 'swap'], db_flavor, os_flavor,
                                in_db=False)
diff --git a/tempest/api/database/flavors/test_flavors_negative.py b/tempest/api/database/flavors/test_flavors_negative.py
index 202dc48..9f14cce 100644
--- a/tempest/api/database/flavors/test_flavors_negative.py
+++ b/tempest/api/database/flavors/test_flavors_negative.py
@@ -21,8 +21,8 @@
 class DatabaseFlavorsNegativeTest(base.BaseDatabaseTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(DatabaseFlavorsNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(DatabaseFlavorsNegativeTest, cls).resource_setup()
         cls.client = cls.database_flavors_client
 
     @test.attr(type=['negative', 'gate'])
diff --git a/tempest/api/database/versions/test_versions.py b/tempest/api/database/versions/test_versions.py
index 6101f47..80fcecf 100644
--- a/tempest/api/database/versions/test_versions.py
+++ b/tempest/api/database/versions/test_versions.py
@@ -21,14 +21,13 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(DatabaseVersionsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(DatabaseVersionsTest, cls).resource_setup()
         cls.client = cls.database_versions_client
 
     @test.attr(type='smoke')
     def test_list_db_versions(self):
-        resp, versions = self.client.list_db_versions()
-        self.assertEqual(200, resp.status)
+        _, versions = self.client.list_db_versions()
         self.assertTrue(len(versions) > 0, "No database versions found")
         # List of all versions should contain the current version, and there
         # should only be one 'current' version
diff --git a/tempest/api/identity/admin/test_roles.py b/tempest/api/identity/admin/test_roles.py
index 492d56f..d87d5c1 100644
--- a/tempest/api/identity/admin/test_roles.py
+++ b/tempest/api/identity/admin/test_roles.py
@@ -24,9 +24,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(RolesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(RolesTestJSON, cls).resource_setup()
         for _ in moves.xrange(5):
             role_name = data_utils.rand_name(name='role-')
             _, role = cls.client.create_role(role_name)
diff --git a/tempest/api/identity/admin/test_tokens.py b/tempest/api/identity/admin/test_tokens.py
index e1db008..2c5fb74 100644
--- a/tempest/api/identity/admin/test_tokens.py
+++ b/tempest/api/identity/admin/test_tokens.py
@@ -35,10 +35,9 @@
                                           tenant['id'], '')
         self.data.users.append(user)
         # then get a token for the user
-        rsp, body = self.token_client.auth(user_name,
-                                           user_password,
-                                           tenant['name'])
-        self.assertEqual(rsp['status'], '200')
+        _, body = self.token_client.auth(user_name,
+                                         user_password,
+                                         tenant['name'])
         self.assertEqual(body['token']['tenant']['name'],
                          tenant['name'])
         # Perform GET Token
@@ -89,15 +88,13 @@
                                      role['id'])
 
         # Get an unscoped token.
-        resp, body = self.token_client.auth(user_name, user_password)
-        self.assertEqual(200, resp.status)
+        _, body = self.token_client.auth(user_name, user_password)
 
         token_id = body['token']['id']
 
         # Use the unscoped token to get a token scoped to tenant1
-        resp, body = self.token_client.auth_token(token_id,
-                                                  tenant=tenant1_name)
-        self.assertEqual(200, resp.status)
+        _, body = self.token_client.auth_token(token_id,
+                                               tenant=tenant1_name)
 
         scoped_token_id = body['token']['id']
 
@@ -105,9 +102,8 @@
         self.client.delete_token(scoped_token_id)
 
         # Use the unscoped token to get a token scoped to tenant2
-        resp, body = self.token_client.auth_token(token_id,
-                                                  tenant=tenant2_name)
-        self.assertEqual(200, resp.status)
+        _, body = self.token_client.auth_token(token_id,
+                                               tenant=tenant2_name)
 
 
 class TokensTestXML(TokensTestJSON):
diff --git a/tempest/api/identity/admin/test_users.py b/tempest/api/identity/admin/test_users.py
index 5838da3..66a1737 100644
--- a/tempest/api/identity/admin/test_users.py
+++ b/tempest/api/identity/admin/test_users.py
@@ -24,8 +24,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(UsersTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(UsersTestJSON, cls).resource_setup()
         cls.alt_user = data_utils.rand_name('test_user_')
         cls.alt_password = data_utils.rand_name('pass_')
         cls.alt_email = cls.alt_user + '@testmail.tm'
@@ -97,10 +97,9 @@
         self.token_client.auth(self.data.test_user, self.data.test_password,
                                self.data.test_tenant)
         # Re-auth
-        resp, body = self.token_client.auth(self.data.test_user,
-                                            self.data.test_password,
-                                            self.data.test_tenant)
-        self.assertEqual('200', resp['status'])
+        self.token_client.auth(self.data.test_user,
+                               self.data.test_password,
+                               self.data.test_tenant)
 
     @test.attr(type='gate')
     def test_authentication_request_without_token(self):
@@ -113,10 +112,9 @@
         # Delete the token from database
         self.client.delete_token(token)
         # Re-auth
-        resp, body = self.token_client.auth(self.data.test_user,
-                                            self.data.test_password,
-                                            self.data.test_tenant)
-        self.assertEqual('200', resp['status'])
+        self.token_client.auth(self.data.test_user,
+                               self.data.test_password,
+                               self.data.test_tenant)
         self.client.auth_provider.clear_auth()
 
     @test.attr(type='smoke')
@@ -205,9 +203,8 @@
 
         # Validate the updated password
         # Get a token
-        resp, body = self.token_client.auth(self.data.test_user, new_pass,
-                                            self.data.test_tenant)
-        self.assertEqual('200', resp['status'])
+        _, body = self.token_client.auth(self.data.test_user, new_pass,
+                                         self.data.test_tenant)
         self.assertTrue('id' in body['token'])
 
 
diff --git a/tempest/api/identity/admin/test_users_negative.py b/tempest/api/identity/admin/test_users_negative.py
index a584a7b..bad2b89 100644
--- a/tempest/api/identity/admin/test_users_negative.py
+++ b/tempest/api/identity/admin/test_users_negative.py
@@ -25,8 +25,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(UsersNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(UsersNegativeTestJSON, cls).resource_setup()
         cls.alt_user = data_utils.rand_name('test_user_')
         cls.alt_password = data_utils.rand_name('pass_')
         cls.alt_email = cls.alt_user + '@testmail.tm'
diff --git a/tempest/api/identity/admin/v3/test_credentials.py b/tempest/api/identity/admin/v3/test_credentials.py
index d40e0f3..7a0edb0 100644
--- a/tempest/api/identity/admin/v3/test_credentials.py
+++ b/tempest/api/identity/admin/v3/test_credentials.py
@@ -22,9 +22,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(CredentialsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(CredentialsTestJSON, cls).resource_setup()
         cls.projects = list()
         cls.creds_list = [['project_id', 'user_id', 'id'],
                           ['access', 'secret']]
@@ -43,25 +42,23 @@
             email=u_email, project_id=cls.projects[0])
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.client.delete_user(cls.user_body['id'])
         for p in cls.projects:
             cls.client.delete_project(p)
-        super(CredentialsTestJSON, cls).tearDownClass()
+        super(CredentialsTestJSON, cls).resource_cleanup()
 
     def _delete_credential(self, cred_id):
-        resp, body = self.creds_client.delete_credential(cred_id)
-        self.assertEqual(resp['status'], '204')
+        self.creds_client.delete_credential(cred_id)
 
     @test.attr(type='smoke')
     def test_credentials_create_get_update_delete(self):
         keys = [data_utils.rand_name('Access-'),
                 data_utils.rand_name('Secret-')]
-        resp, cred = self.creds_client.create_credential(
+        _, cred = self.creds_client.create_credential(
             keys[0], keys[1], self.user_body['id'],
             self.projects[0])
         self.addCleanup(self._delete_credential, cred['id'])
-        self.assertEqual(resp['status'], '201')
         for value1 in self.creds_list[0]:
             self.assertIn(value1, cred)
         for value2 in self.creds_list[1]:
@@ -69,18 +66,16 @@
 
         new_keys = [data_utils.rand_name('NewAccess-'),
                     data_utils.rand_name('NewSecret-')]
-        resp, update_body = self.creds_client.update_credential(
+        _, update_body = self.creds_client.update_credential(
             cred['id'], access_key=new_keys[0], secret_key=new_keys[1],
             project_id=self.projects[1])
-        self.assertEqual(resp['status'], '200')
         self.assertEqual(cred['id'], update_body['id'])
         self.assertEqual(self.projects[1], update_body['project_id'])
         self.assertEqual(self.user_body['id'], update_body['user_id'])
         self.assertEqual(update_body['blob']['access'], new_keys[0])
         self.assertEqual(update_body['blob']['secret'], new_keys[1])
 
-        resp, get_body = self.creds_client.get_credential(cred['id'])
-        self.assertEqual(resp['status'], '200')
+        _, get_body = self.creds_client.get_credential(cred['id'])
         for value1 in self.creds_list[0]:
             self.assertEqual(update_body[value1],
                              get_body[value1])
@@ -94,16 +89,14 @@
         fetched_cred_ids = list()
 
         for i in range(2):
-            resp, cred = self.creds_client.create_credential(
+            _, cred = self.creds_client.create_credential(
                 data_utils.rand_name('Access-'),
                 data_utils.rand_name('Secret-'),
                 self.user_body['id'], self.projects[0])
-            self.assertEqual(resp['status'], '201')
             created_cred_ids.append(cred['id'])
             self.addCleanup(self._delete_credential, cred['id'])
 
-        resp, creds = self.creds_client.list_credentials()
-        self.assertEqual(resp['status'], '200')
+        _, creds = self.creds_client.list_credentials()
 
         for i in creds:
             fetched_cred_ids.append(i['id'])
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index 6beb8f2..f1f1eb6 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -22,9 +22,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(EndPointsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(EndPointsTestJSON, cls).resource_setup()
         cls.identity_client = cls.client
         cls.client = cls.endpoints_client
         cls.service_ids = list()
@@ -47,19 +46,18 @@
             cls.setup_endpoints.append(endpoint)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for e in cls.setup_endpoints:
             cls.client.delete_endpoint(e['id'])
         for s in cls.service_ids:
             cls.service_client.delete_service(s)
-        super(EndPointsTestJSON, cls).tearDownClass()
+        super(EndPointsTestJSON, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_list_endpoints(self):
         # Get a list of endpoints
-        resp, fetched_endpoints = self.client.list_endpoints()
+        _, fetched_endpoints = self.client.list_endpoints()
         # Asserting LIST endpoints
-        self.assertEqual(resp['status'], '200')
         missing_endpoints =\
             [e for e in self.setup_endpoints if e not in fetched_endpoints]
         self.assertEqual(0, len(missing_endpoints),
@@ -71,11 +69,10 @@
         region = data_utils.rand_name('region')
         url = data_utils.rand_url()
         interface = 'public'
-        resp, endpoint =\
+        _, endpoint =\
             self.client.create_endpoint(self.service_id, interface, url,
                                         region=region, enabled=True)
         # Asserting Create Endpoint response body
-        self.assertEqual(resp['status'], '201')
         self.assertIn('id', endpoint)
         self.assertEqual(region, endpoint['region'])
         self.assertEqual(url, endpoint['url'])
@@ -84,8 +81,7 @@
         fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
         self.assertIn(endpoint['id'], fetched_endpoints_id)
         # Deleting the endpoint created in this method
-        resp, body = self.client.delete_endpoint(endpoint['id'])
-        self.assertEqual(resp['status'], '204')
+        _, body = self.client.delete_endpoint(endpoint['id'])
         self.assertEqual(body, '')
         # Checking whether endpoint is deleted successfully
         resp, fetched_endpoints = self.client.list_endpoints()
@@ -116,12 +112,11 @@
         region2 = data_utils.rand_name('region')
         url2 = data_utils.rand_url()
         interface2 = 'internal'
-        resp, endpoint = \
+        _, endpoint = \
             self.client.update_endpoint(endpoint_for_update['id'],
                                         service_id=service2['id'],
                                         interface=interface2, url=url2,
                                         region=region2, enabled=False)
-        self.assertEqual(resp['status'], '200')
         # Asserting if the attributes of endpoint are updated
         self.assertEqual(service2['id'], endpoint['service_id'])
         self.assertEqual(interface2, endpoint['interface'])
diff --git a/tempest/api/identity/admin/v3/test_endpoints_negative.py b/tempest/api/identity/admin/v3/test_endpoints_negative.py
index d728b1d..b987d12 100644
--- a/tempest/api/identity/admin/v3/test_endpoints_negative.py
+++ b/tempest/api/identity/admin/v3/test_endpoints_negative.py
@@ -25,8 +25,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(EndpointsNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(EndpointsNegativeTestJSON, cls).resource_setup()
         cls.identity_client = cls.client
         cls.client = cls.endpoints_client
         cls.service_ids = list()
@@ -40,10 +40,10 @@
         cls.service_ids.append(cls.service_id)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for s in cls.service_ids:
             cls.service_client.delete_service(s)
-        super(EndpointsNegativeTestJSON, cls).tearDownClass()
+        super(EndpointsNegativeTestJSON, cls).resource_cleanup()
 
     @test.attr(type=['negative', 'gate'])
     def test_create_with_enabled_False(self):
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 4d2cc46..987a9d5 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -22,8 +22,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(GroupsV3TestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(GroupsV3TestJSON, cls).resource_setup()
 
     @test.attr(type='smoke')
     def test_group_create_update_get(self):
diff --git a/tempest/api/identity/admin/v3/test_list_projects.py b/tempest/api/identity/admin/v3/test_list_projects.py
index a3944e2..be06c7f 100644
--- a/tempest/api/identity/admin/v3/test_list_projects.py
+++ b/tempest/api/identity/admin/v3/test_list_projects.py
@@ -22,8 +22,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(ListProjectsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListProjectsTestJSON, cls).resource_setup()
         cls.project_ids = list()
         cls.data.setup_test_domain()
         # Create project with domain
diff --git a/tempest/api/identity/admin/v3/test_list_users.py b/tempest/api/identity/admin/v3/test_list_users.py
index 497c5ea..903ad5c 100644
--- a/tempest/api/identity/admin/v3/test_list_users.py
+++ b/tempest/api/identity/admin/v3/test_list_users.py
@@ -32,8 +32,8 @@
                          map(lambda x: x[key], body))
 
     @classmethod
-    def setUpClass(cls):
-        super(UsersV3TestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(UsersV3TestJSON, cls).resource_setup()
         alt_user = data_utils.rand_name('test_user')
         alt_password = data_utils.rand_name('pass')
         cls.alt_email = alt_user + '@testmail.tm'
diff --git a/tempest/api/identity/admin/v3/test_policies.py b/tempest/api/identity/admin/v3/test_policies.py
index 0e79440..65c5230 100644
--- a/tempest/api/identity/admin/v3/test_policies.py
+++ b/tempest/api/identity/admin/v3/test_policies.py
@@ -22,8 +22,7 @@
     _interface = 'json'
 
     def _delete_policy(self, policy_id):
-        resp, _ = self.policy_client.delete_policy(policy_id)
-        self.assertEqual(204, resp.status)
+        self.policy_client.delete_policy(policy_id)
 
     @test.attr(type='smoke')
     def test_list_policies(self):
@@ -39,8 +38,7 @@
             self.addCleanup(self._delete_policy, policy['id'])
             policy_ids.append(policy['id'])
         # List and Verify Policies
-        resp, body = self.policy_client.list_policies()
-        self.assertEqual(resp['status'], '200')
+        _, body = self.policy_client.list_policies()
         for p in body:
             fetched_ids.append(p['id'])
         missing_pols = [p for p in policy_ids if p not in fetched_ids]
@@ -51,7 +49,7 @@
         # Test to update policy
         blob = data_utils.rand_name('BlobName-')
         policy_type = data_utils.rand_name('PolicyType-')
-        resp, policy = self.policy_client.create_policy(blob, policy_type)
+        _, policy = self.policy_client.create_policy(blob, policy_type)
         self.addCleanup(self._delete_policy, policy['id'])
         self.assertIn('id', policy)
         self.assertIn('type', policy)
@@ -59,15 +57,13 @@
         self.assertIsNotNone(policy['id'])
         self.assertEqual(blob, policy['blob'])
         self.assertEqual(policy_type, policy['type'])
-        resp, fetched_policy = self.policy_client.get_policy(policy['id'])
-        self.assertEqual(resp['status'], '200')
         # Update policy
         update_type = data_utils.rand_name('UpdatedPolicyType-')
-        resp, data = self.policy_client.update_policy(
+        _, data = self.policy_client.update_policy(
             policy['id'], type=update_type)
         self.assertIn('type', data)
         # Assertion for updated value with fetched value
-        resp, fetched_policy = self.policy_client.get_policy(policy['id'])
+        _, fetched_policy = self.policy_client.get_policy(policy['id'])
         self.assertIn('id', fetched_policy)
         self.assertIn('blob', fetched_policy)
         self.assertIn('type', fetched_policy)
diff --git a/tempest/api/identity/admin/v3/test_regions.py b/tempest/api/identity/admin/v3/test_regions.py
index c8b034f..c5d5824 100644
--- a/tempest/api/identity/admin/v3/test_regions.py
+++ b/tempest/api/identity/admin/v3/test_regions.py
@@ -23,9 +23,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(RegionsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(RegionsTestJSON, cls).resource_setup()
         cls.setup_regions = list()
         cls.client = cls.region_client
         for i in range(2):
@@ -34,40 +33,36 @@
             cls.setup_regions.append(region)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for r in cls.setup_regions:
             cls.client.delete_region(r['id'])
-        super(RegionsTestJSON, cls).tearDownClass()
+        super(RegionsTestJSON, cls).resource_cleanup()
 
     def _delete_region(self, region_id):
-        resp, _ = self.client.delete_region(region_id)
-        self.assertEqual(204, resp.status)
+        self.client.delete_region(region_id)
         self.assertRaises(exceptions.NotFound,
                           self.client.get_region, region_id)
 
     @test.attr(type='gate')
     def test_create_update_get_delete_region(self):
         r_description = data_utils.rand_name('description-')
-        resp, region = self.client.create_region(
+        _, region = self.client.create_region(
             r_description, parent_region_id=self.setup_regions[0]['id'])
-        self.assertEqual(201, resp.status)
         self.addCleanup(self._delete_region, region['id'])
         self.assertEqual(r_description, region['description'])
         self.assertEqual(self.setup_regions[0]['id'],
                          region['parent_region_id'])
         # Update region with new description and parent ID
         r_alt_description = data_utils.rand_name('description-')
-        resp, region = self.client.update_region(
+        _, region = self.client.update_region(
             region['id'],
             description=r_alt_description,
             parent_region_id=self.setup_regions[1]['id'])
-        self.assertEqual(200, resp.status)
         self.assertEqual(r_alt_description, region['description'])
         self.assertEqual(self.setup_regions[1]['id'],
                          region['parent_region_id'])
         # Get the details of region
-        resp, region = self.client.get_region(region['id'])
-        self.assertEqual(200, resp.status)
+        _, region = self.client.get_region(region['id'])
         self.assertEqual(r_alt_description, region['description'])
         self.assertEqual(self.setup_regions[1]['id'],
                          region['parent_region_id'])
@@ -77,19 +72,17 @@
         # Create a region with a specific id
         r_region_id = data_utils.rand_uuid()
         r_description = data_utils.rand_name('description-')
-        resp, region = self.client.create_region(
+        _, region = self.client.create_region(
             r_description, unique_region_id=r_region_id)
         self.addCleanup(self._delete_region, region['id'])
         # Asserting Create Region with specific id response body
-        self.assertEqual(201, resp.status)
         self.assertEqual(r_region_id, region['id'])
         self.assertEqual(r_description, region['description'])
 
     @test.attr(type='gate')
     def test_list_regions(self):
         # Get a list of regions
-        resp, fetched_regions = self.client.list_regions()
-        self.assertEqual(200, resp.status)
+        _, fetched_regions = self.client.list_regions()
         missing_regions =\
             [e for e in self.setup_regions if e not in fetched_regions]
         # Asserting List Regions response
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index 2e732fe..5e14a04 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -22,9 +22,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(RolesV3TestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(RolesV3TestJSON, cls).resource_setup()
         for _ in range(3):
             role_name = data_utils.rand_name(name='role-')
             _, role = cls.client.create_role(role_name)
@@ -52,7 +51,7 @@
             data_utils.rand_name('Role-'))
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.client.delete_role(cls.role['id'])
         cls.client.delete_group(cls.group_body['id'])
         cls.client.delete_user(cls.user_body['id'])
@@ -61,7 +60,7 @@
         # before deleting, or else it would result in an unauthorized error
         cls.client.update_domain(cls.domain['id'], enabled=False)
         cls.client.delete_domain(cls.domain['id'])
-        super(RolesV3TestJSON, cls).tearDownClass()
+        super(RolesV3TestJSON, cls).resource_cleanup()
 
     def _list_assertions(self, body, fetched_role_ids, role_id):
         self.assertEqual(len(body), 1)
@@ -141,11 +140,10 @@
         self.client.add_group_user(self.group_body['id'], self.user_body['id'])
         self.addCleanup(self.client.delete_group_user,
                         self.group_body['id'], self.user_body['id'])
-        resp, body = self.token.auth(self.user_body['id'], self.u_password,
-                                     self.project['name'],
-                                     domain=self.domain['name'])
+        _, body = self.token.auth(self.user_body['id'], self.u_password,
+                                  self.project['name'],
+                                  domain=self.domain['name'])
         roles = body['token']['roles']
-        self.assertEqual(resp['status'], '201')
         self.assertEqual(len(roles), 1)
         self.assertEqual(roles[0]['id'], self.role['id'])
         # Revoke role to group on project
diff --git a/tempest/api/identity/admin/v3/test_services.py b/tempest/api/identity/admin/v3/test_services.py
index f6078da..7e21cc3 100644
--- a/tempest/api/identity/admin/v3/test_services.py
+++ b/tempest/api/identity/admin/v3/test_services.py
@@ -13,41 +13,84 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-
 from tempest.api.identity import base
 from tempest.common.utils import data_utils
+from tempest import exceptions
 from tempest import test
 
 
 class ServicesTestJSON(base.BaseIdentityV3AdminTest):
     _interface = 'json'
 
-    @test.attr(type='gate')
-    def test_update_service(self):
-        # Update description attribute of service
-        name = data_utils.rand_name('service-')
-        serv_type = data_utils.rand_name('type--')
-        desc = data_utils.rand_name('description-')
-        _, body = self.service_client.create_service(name, serv_type,
-                                                     description=desc)
-        # Deleting the service created in this method
-        self.addCleanup(self.service_client.delete_service, body['id'])
+    def _del_service(self, service_id):
+        # Used for deleting the services created in this class
+        self.service_client.delete_service(service_id)
+        # Checking whether service is deleted successfully
+        self.assertRaises(exceptions.NotFound, self.service_client.get_service,
+                          service_id)
 
-        s_id = body['id']
-        resp1_desc = body['description']
+    @test.attr(type='smoke')
+    def test_create_update_get_service(self):
+        # Creating a Service
+        name = data_utils.rand_name('service')
+        serv_type = data_utils.rand_name('type')
+        desc = data_utils.rand_name('description')
+        _, create_service = self.service_client.create_service(
+            serv_type, name=name, description=desc)
+        self.addCleanup(self._del_service, create_service['id'])
+        self.assertIsNotNone(create_service['id'])
 
-        s_desc2 = data_utils.rand_name('desc2-')
-        _, body = self.service_client.update_service(
+        # Verifying response body of create service
+        expected_data = {'name': name, 'type': serv_type, 'description': desc}
+        self.assertDictContainsSubset(expected_data, create_service)
+
+        # Update description
+        s_id = create_service['id']
+        resp1_desc = create_service['description']
+        s_desc2 = data_utils.rand_name('desc2')
+        _, update_service = self.service_client.update_service(
             s_id, description=s_desc2)
-        resp2_desc = body['description']
+        resp2_desc = update_service['description']
+
         self.assertNotEqual(resp1_desc, resp2_desc)
 
         # Get service
-        _, body = self.service_client.get_service(s_id)
-        resp3_desc = body['description']
+        _, fetched_service = self.service_client.get_service(s_id)
+        resp3_desc = fetched_service['description']
 
-        self.assertNotEqual(resp1_desc, resp3_desc)
         self.assertEqual(resp2_desc, resp3_desc)
+        self.assertDictContainsSubset(update_service, fetched_service)
+
+    @test.attr(type='smoke')
+    def test_create_service_without_description(self):
+        # Create a service only with name and type
+        name = data_utils.rand_name('service')
+        serv_type = data_utils.rand_name('type')
+        _, service = self.service_client.create_service(
+            serv_type, name=name)
+        self.addCleanup(self.service_client.delete_service, service['id'])
+        self.assertIn('id', service)
+        expected_data = {'name': name, 'type': serv_type}
+        self.assertDictContainsSubset(expected_data, service)
+
+    @test.attr(type='smoke')
+    def test_list_services(self):
+        # Create, List, Verify and Delete Services
+        service_ids = list()
+        for _ in range(3):
+            name = data_utils.rand_name('service')
+            serv_type = data_utils.rand_name('type')
+            _, create_service = self.service_client.create_service(
+                serv_type, name=name)
+            self.addCleanup(self.service_client.delete_service,
+                            create_service['id'])
+            service_ids.append(create_service['id'])
+
+        # List and Verify Services
+        _, services = self.service_client.list_services()
+        fetched_ids = [service['id'] for service in services]
+        found = [s for s in fetched_ids if s in service_ids]
+        self.assertEqual(len(found), len(service_ids))
 
 
 class ServicesTestXML(ServicesTestJSON):
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index bd08614..230e09f 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -35,8 +35,7 @@
             email=u_email)
         self.addCleanup(self.client.delete_user, user['id'])
         # Perform Authentication
-        resp, body = self.token.auth(user['id'], u_password)
-        self.assertEqual(201, resp.status)
+        resp, _ = self.token.auth(user['id'], u_password)
         subject_token = resp['x-subject-token']
         # Perform GET Token
         _, token_details = self.client.get_token(subject_token)
@@ -48,7 +47,6 @@
         self.assertRaises(exceptions.NotFound, self.client.get_token,
                           subject_token)
 
-    @test.skip_because(bug="1351026")
     @test.attr(type='gate')
     def test_rescope_token(self):
         """Rescope a token.
@@ -89,7 +87,6 @@
         # Get an unscoped token.
         resp, token_auth = self.token.auth(user=user['id'],
                                            password=user_password)
-        self.assertEqual(201, resp.status)
 
         token_id = resp['x-subject-token']
         orig_expires_at = token_auth['token']['expires_at']
@@ -114,7 +111,6 @@
                                            tenant=project1_name,
                                            domain='Default')
         token1_id = resp['x-subject-token']
-        self.assertEqual(201, resp.status)
 
         self.assertEqual(orig_expires_at, token_auth['token']['expires_at'],
                          'Expiration time should match original token')
@@ -141,10 +137,9 @@
         self.client.delete_token(token1_id)
 
         # Now get another scoped token using the unscoped token.
-        resp, token_auth = self.token.auth(token=token_id,
-                                           tenant=project2_name,
-                                           domain='Default')
-        self.assertEqual(201, resp.status)
+        _, token_auth = self.token.auth(token=token_id,
+                                        tenant=project2_name,
+                                        domain='Default')
 
         self.assertEqual(project2['id'],
                          token_auth['token']['project']['id'])
diff --git a/tempest/api/identity/admin/v3/test_trusts.py b/tempest/api/identity/admin/v3/test_trusts.py
index 1561a6e..886c808 100644
--- a/tempest/api/identity/admin/v3/test_trusts.py
+++ b/tempest/api/identity/admin/v3/test_trusts.py
@@ -12,6 +12,7 @@
 
 import datetime
 import re
+
 from tempest.api.identity import base
 from tempest import auth
 from tempest import clients
diff --git a/tempest/api/identity/admin/v3/test_users.py b/tempest/api/identity/admin/v3/test_users.py
index 3c25819..898bcd0 100644
--- a/tempest/api/identity/admin/v3/test_users.py
+++ b/tempest/api/identity/admin/v3/test_users.py
@@ -77,8 +77,7 @@
         new_password = data_utils.rand_name('pass1')
         self.client.update_user_password(user['id'], new_password,
                                          original_password)
-        resp, body = self.token.auth(user['id'], new_password)
-        self.assertEqual(201, resp.status)
+        resp, _ = self.token.auth(user['id'], new_password)
         subject_token = resp['x-subject-token']
         # Perform GET Token to verify and confirm password is updated
         _, token_details = self.client.get_token(subject_token)
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 8eb7d33..a225f12 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -18,16 +18,19 @@
 from tempest import clients
 from tempest.common.utils import data_utils
 from tempest import config
+from tempest import exceptions
+from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
+LOG = logging.getLogger(__name__)
 
 
 class BaseIdentityAdminTest(tempest.test.BaseTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseIdentityAdminTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseIdentityAdminTest, cls).resource_setup()
         cls.os_adm = clients.AdminManager(interface=cls._interface)
         cls.os = clients.Manager(interface=cls._interface)
 
@@ -69,10 +72,10 @@
 class BaseIdentityV2AdminTest(BaseIdentityAdminTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.identity_feature_enabled.api_v2:
             raise cls.skipException("Identity api v2 is not enabled")
-        super(BaseIdentityV2AdminTest, cls).setUpClass()
+        super(BaseIdentityV2AdminTest, cls).resource_setup()
         cls.client = cls.os_adm.identity_client
         cls.token_client = cls.os_adm.token_client
         if not cls.client.has_admin_extensions():
@@ -81,18 +84,18 @@
         cls.non_admin_client = cls.os.identity_client
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.data.teardown_all()
-        super(BaseIdentityV2AdminTest, cls).tearDownClass()
+        super(BaseIdentityV2AdminTest, cls).resource_cleanup()
 
 
 class BaseIdentityV3AdminTest(BaseIdentityAdminTest):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.identity_feature_enabled.api_v3:
             raise cls.skipException("Identity api v3 is not enabled")
-        super(BaseIdentityV3AdminTest, cls).setUpClass()
+        super(BaseIdentityV3AdminTest, cls).resource_setup()
         cls.client = cls.os_adm.identity_v3_client
         cls.token = cls.os_adm.token_v3_client
         cls.endpoints_client = cls.os_adm.endpoints_client
@@ -105,9 +108,9 @@
         cls.non_admin_client = cls.os.identity_v3_client
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.data.teardown_all()
-        super(BaseIdentityV3AdminTest, cls).tearDownClass()
+        super(BaseIdentityV3AdminTest, cls).resource_cleanup()
 
 
 class DataGenerator(object):
@@ -195,19 +198,36 @@
                 description=self.test_description)
             self.domains.append(self.domain)
 
+        @staticmethod
+        def _try_wrapper(func, item, **kwargs):
+            try:
+                if kwargs:
+                    func(item['id'], **kwargs)
+                else:
+                    func(item['id'])
+            except exceptions.NotFound:
+                pass
+            except Exception:
+                LOG.exception("Unexpected exception occurred in %s deletion."
+                              " But ignored here." % item['id'])
+
         def teardown_all(self):
+            # NOTE(masayukig): The v3 client doesn't have the v2 methods
+            # (e.g. delete_tenant), so we need to check for resource
+            # existence before using the client methods.
             for user in self.users:
-                self.client.delete_user(user['id'])
+                self._try_wrapper(self.client.delete_user, user)
             for tenant in self.tenants:
-                self.client.delete_tenant(tenant['id'])
+                self._try_wrapper(self.client.delete_tenant, tenant)
             for role in self.roles:
-                self.client.delete_role(role['id'])
+                self._try_wrapper(self.client.delete_role, role)
             for v3_user in self.v3_users:
-                self.client.delete_user(v3_user['id'])
+                self._try_wrapper(self.client.delete_user, v3_user)
             for v3_project in self.projects:
-                self.client.delete_project(v3_project['id'])
+                self._try_wrapper(self.client.delete_project, v3_project)
             for v3_role in self.v3_roles:
-                self.client.delete_role(v3_role['id'])
+                self._try_wrapper(self.client.delete_role, v3_role)
             for domain in self.domains:
-                self.client.update_domain(domain['id'], enabled=False)
-                self.client.delete_domain(domain['id'])
+                self._try_wrapper(self.client.update_domain, domain,
+                                  enabled=False)
+                self._try_wrapper(self.client.delete_domain, domain)
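The _try_wrapper added to DataGenerator.teardown_all above makes cleanup best-effort: resources that are already gone (NotFound) are skipped silently, and any other failure is logged rather than aborting the remaining deletions. A standalone sketch of the same idea, with a hypothetical client object and resource lists (none of these names come from Tempest)::

    import logging

    LOG = logging.getLogger(__name__)


    class NotFound(Exception):
        """Stand-in for tempest.exceptions.NotFound."""


    def try_delete(func, item, **kwargs):
        # Best-effort cleanup: a missing resource is fine, anything else is
        # logged and swallowed so the remaining resources still get cleaned up.
        try:
            if kwargs:
                func(item['id'], **kwargs)
            else:
                func(item['id'])
        except NotFound:
            pass
        except Exception:
            LOG.exception("Unexpected exception while deleting %s; ignored.",
                          item['id'])


    def teardown_all(client, users, domains):
        for user in users:
            try_delete(client.delete_user, user)
        for domain in domains:
            # Domains must be disabled before they can be deleted.
            try_delete(client.update_domain, domain, enabled=False)
            try_delete(client.delete_domain, domain)
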
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index c875b2f..08767e3 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -31,9 +31,9 @@
     """Base test class for Image API tests."""
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources()
-        super(BaseImageTest, cls).setUpClass()
+        super(BaseImageTest, cls).resource_setup()
         cls.created_images = []
         cls._interface = 'json'
         cls.isolated_creds = isolated_creds.IsolatedCreds(
@@ -47,7 +47,7 @@
             cls.os = clients.Manager()
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for image_id in cls.created_images:
             try:
                 cls.client.delete_image(image_id)
@@ -57,7 +57,7 @@
         for image_id in cls.created_images:
                 cls.client.wait_for_resource_deletion(image_id)
         cls.isolated_creds.clear_isolated_creds()
-        super(BaseImageTest, cls).tearDownClass()
+        super(BaseImageTest, cls).resource_cleanup()
 
     @classmethod
     def create_image(cls, **kwargs):
@@ -79,8 +79,8 @@
 class BaseV1ImageTest(BaseImageTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseV1ImageTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseV1ImageTest, cls).resource_setup()
         cls.client = cls.os.image_client
         if not CONF.image_feature_enabled.api_v1:
             msg = "Glance API v1 not supported"
@@ -89,8 +89,8 @@
 
 class BaseV1ImageMembersTest(BaseV1ImageTest):
     @classmethod
-    def setUpClass(cls):
-        super(BaseV1ImageMembersTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseV1ImageMembersTest, cls).resource_setup()
         if CONF.compute.allow_tenant_isolation:
             cls.os_alt = clients.Manager(cls.isolated_creds.get_alt_creds())
         else:
@@ -113,8 +113,8 @@
 class BaseV2ImageTest(BaseImageTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseV2ImageTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseV2ImageTest, cls).resource_setup()
         cls.client = cls.os.image_client_v2
         if not CONF.image_feature_enabled.api_v2:
             msg = "Glance API v2 not supported"
@@ -124,8 +124,8 @@
 class BaseV2MemberImageTest(BaseV2ImageTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseV2MemberImageTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseV2MemberImageTest, cls).resource_setup()
         if CONF.compute.allow_tenant_isolation:
             creds = cls.isolated_creds.get_alt_creds()
             cls.os_alt = clients.Manager(creds)
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
index bf55b89..38a623a 100644
--- a/tempest/api/image/v1/test_images.py
+++ b/tempest/api/image/v1/test_images.py
@@ -106,9 +106,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ListImagesTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListImagesTest, cls).resource_setup()
         # We add a few images here to test the listing functionality of
         # the images API
         img1 = cls._create_remote_image('one', 'bare', 'raw')
@@ -235,8 +234,7 @@
 
 class ListSnapshotImagesTest(base.BaseV1ImageTest):
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         # This test class only uses nova v3 api to create snapshot
         # as the similar test which uses nova v2 api already exists
         # in nova v2 compute images api tests.
@@ -246,7 +244,7 @@
             skip_msg = ("%s skipped as nova v3 api is not available" %
                         cls.__name__)
             raise cls.skipException(skip_msg)
-        super(ListSnapshotImagesTest, cls).setUpClass()
+        super(ListSnapshotImagesTest, cls).resource_setup()
         cls.servers_client = cls.os.servers_v3_client
         cls.servers = []
         # We add a few images here to test the listing functionality of
@@ -265,10 +263,10 @@
         cls.client.wait_for_image_status(image['id'], 'active')
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for server in getattr(cls, "servers", []):
             cls.servers_client.delete_server(server['id'])
-        super(ListSnapshotImagesTest, cls).tearDownClass()
+        super(ListSnapshotImagesTest, cls).resource_cleanup()
 
     @classmethod
     def _create_snapshot(cls, name, image_id, flavor, **kwargs):
@@ -329,8 +327,8 @@
 
 class UpdateImageMetaTest(base.BaseV1ImageTest):
     @classmethod
-    def setUpClass(cls):
-        super(UpdateImageMetaTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(UpdateImageMetaTest, cls).resource_setup()
         cls.image_id = cls._create_standard_image('1', 'ami', 'ami', 42)
 
     @classmethod
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index a974ebb..7e018e5 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -125,9 +125,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ListImagesTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ListImagesTest, cls).resource_setup()
         # We add a few images here to test the listing functionality of
         # the images API
         cls._create_standard_image('bare', 'raw')
diff --git a/tempest/api/queuing/__init__.py b/tempest/api/messaging/__init__.py
similarity index 100%
rename from tempest/api/queuing/__init__.py
rename to tempest/api/messaging/__init__.py
diff --git a/tempest/api/queuing/base.py b/tempest/api/messaging/base.py
similarity index 76%
rename from tempest/api/queuing/base.py
rename to tempest/api/messaging/base.py
index f4ff7f1..58511a9 100644
--- a/tempest/api/queuing/base.py
+++ b/tempest/api/messaging/base.py
@@ -23,25 +23,25 @@
 LOG = logging.getLogger(__name__)
 
 
-class BaseQueuingTest(test.BaseTestCase):
+class BaseMessagingTest(test.BaseTestCase):
 
     """
-    Base class for the Queuing tests that use the Tempest Marconi REST client
+    Base class for the Messaging tests that use the Tempest Zaqar REST client
 
     It is assumed that the following option is defined in the
     [service_available] section of etc/tempest.conf
 
-        queuing as True
+        messaging as True
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseQueuingTest, cls).setUpClass()
-        if not CONF.service_available.marconi:
-            raise cls.skipException("Marconi support is required")
+    def resource_setup(cls):
+        super(BaseMessagingTest, cls).resource_setup()
+        if not CONF.service_available.zaqar:
+            raise cls.skipException("Zaqar support is required")
         os = cls.get_client_manager()
-        cls.queuing_cfg = CONF.queuing
-        cls.client = os.queuing_client
+        cls.messaging_cfg = CONF.messaging
+        cls.client = os.messaging_client
 
     @classmethod
     def create_queue(cls, queue_name):
@@ -93,42 +93,42 @@
 
     @classmethod
     def post_messages(cls, queue_name, rbody):
-        '''Wrapper utility that posts messages to a queue.'''
+        """Wrapper utility that posts messages to a queue."""
         resp, body = cls.client.post_messages(queue_name, rbody)
 
         return resp, body
 
     @classmethod
     def list_messages(cls, queue_name):
-        '''Wrapper utility that lists the messages in a queue.'''
+        """Wrapper utility that lists the messages in a queue."""
         resp, body = cls.client.list_messages(queue_name)
 
         return resp, body
 
     @classmethod
     def get_single_message(cls, message_uri):
-        '''Wrapper utility that gets a single message.'''
+        """Wrapper utility that gets a single message."""
         resp, body = cls.client.get_single_message(message_uri)
 
         return resp, body
 
     @classmethod
     def get_multiple_messages(cls, message_uri):
-        '''Wrapper utility that gets multiple messages.'''
+        """Wrapper utility that gets multiple messages."""
         resp, body = cls.client.get_multiple_messages(message_uri)
 
         return resp, body
 
     @classmethod
     def delete_messages(cls, message_uri):
-        '''Wrapper utility that deletes messages.'''
+        """Wrapper utility that deletes messages."""
         resp, body = cls.client.delete_messages(message_uri)
 
         return resp, body
 
     @classmethod
     def post_claims(cls, queue_name, rbody, url_params=False):
-        '''Wrapper utility that claims messages.'''
+        """Wrapper utility that claims messages."""
         resp, body = cls.client.post_claims(
             queue_name, rbody, url_params=False)
 
@@ -136,33 +136,34 @@
 
     @classmethod
     def query_claim(cls, claim_uri):
-        '''Wrapper utility that gets a claim.'''
+        """Wrapper utility that gets a claim."""
         resp, body = cls.client.query_claim(claim_uri)
 
         return resp, body
 
     @classmethod
     def update_claim(cls, claim_uri, rbody):
-        '''Wrapper utility that updates a claim.'''
+        """Wrapper utility that updates a claim."""
         resp, body = cls.client.update_claim(claim_uri, rbody)
 
         return resp, body
 
     @classmethod
     def release_claim(cls, claim_uri):
-        '''Wrapper utility that deletes a claim.'''
+        """Wrapper utility that deletes a claim."""
         resp, body = cls.client.release_claim(claim_uri)
 
         return resp, body
 
     @classmethod
     def generate_message_body(cls, repeat=1):
-        '''Wrapper utility that sets the metadata of a queue.'''
-        message_ttl = data_utils.rand_int_id(start=60,
-                                             end=CONF.queuing.max_message_ttl)
+        """Wrapper utility that sets the metadata of a queue."""
+        message_ttl = data_utils.\
+            rand_int_id(start=60, end=CONF.messaging.max_message_ttl)
 
-        key = data_utils.arbitrary_string(size=20, base_text='QueuingKey')
-        value = data_utils.arbitrary_string(size=20, base_text='QueuingValue')
+        key = data_utils.arbitrary_string(size=20, base_text='MessagingKey')
+        value = data_utils.arbitrary_string(size=20,
+                                            base_text='MessagingValue')
         message_body = {key: value}
 
         rbody = ([{'body': message_body, 'ttl': message_ttl}] * repeat)
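The renamed BaseMessagingTest keeps thin classmethod wrappers around the Zaqar client (create_queue, post_messages, list_messages, delete_queue, generate_message_body, and so on). A rough sketch of how a test class built on it strings them together, assuming the resource_setup/resource_cleanup hooks introduced elsewhere in this change; the queue name and the final assertion are illustrative::

    from tempest.api.messaging import base
    from tempest.common.utils import data_utils
    from tempest import test


    class ExampleMessagingTest(base.BaseMessagingTest):
        _interface = 'json'

        @classmethod
        def resource_setup(cls):
            super(ExampleMessagingTest, cls).resource_setup()
            cls.queue_name = data_utils.rand_name('Queues-Test')
            cls.create_queue(cls.queue_name)

        @classmethod
        def resource_cleanup(cls):
            cls.delete_queue(cls.queue_name)
            super(ExampleMessagingTest, cls).resource_cleanup()

        @test.attr(type='smoke')
        def test_post_and_list_messages(self):
            # Build a [{'body': ..., 'ttl': ...}] payload and post it.
            rbody = self.generate_message_body(repeat=2)
            self.post_messages(self.queue_name, rbody)
            _, body = self.list_messages(self.queue_name)
            # The exact body structure depends on the Zaqar API version;
            # here we only check that something came back.
            self.assertTrue(body)
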
diff --git a/tempest/api/queuing/test_claims.py b/tempest/api/messaging/test_claims.py
similarity index 87%
rename from tempest/api/queuing/test_claims.py
rename to tempest/api/messaging/test_claims.py
index a306623..1b004dd 100644
--- a/tempest/api/queuing/test_claims.py
+++ b/tempest/api/messaging/test_claims.py
@@ -16,7 +16,7 @@
 import logging
 import urlparse
 
-from tempest.api.queuing import base
+from tempest.api.messaging import base
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import test
@@ -26,12 +26,12 @@
 CONF = config.CONF
 
 
-class TestClaims(base.BaseQueuingTest):
+class TestClaims(base.BaseMessagingTest):
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(TestClaims, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestClaims, cls).resource_setup()
         cls.queue_name = data_utils.rand_name('Queues-Test')
         # Create Queue
         cls.create_queue(cls.queue_name)
@@ -44,9 +44,9 @@
 
         # Post Claim
         claim_ttl = data_utils.rand_int_id(start=60,
-                                           end=CONF.queuing.max_claim_ttl)
-        claim_grace = data_utils.rand_int_id(start=60,
-                                             end=CONF.queuing.max_claim_grace)
+                                           end=CONF.messaging.max_claim_ttl)
+        claim_grace = data_utils.\
+            rand_int_id(start=60, end=CONF.messaging.max_claim_grace)
         claim_body = {"ttl": claim_ttl, "grace": claim_grace}
         resp, body = self.client.post_claims(queue_name=self.queue_name,
                                              rbody=claim_body)
@@ -90,7 +90,7 @@
 
         # Update Claim
         claim_ttl = data_utils.rand_int_id(start=60,
-                                           end=CONF.queuing.max_claim_ttl)
+                                           end=CONF.messaging.max_claim_ttl)
         update_rbody = {"ttl": claim_ttl}
 
         self.client.update_claim(claim_uri, rbody=update_rbody)
@@ -118,6 +118,6 @@
         self.client.delete_messages(message_uri)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.delete_queue(cls.queue_name)
-        super(TestClaims, cls).tearDownClass()
+        super(TestClaims, cls).resource_cleanup()
diff --git a/tempest/api/queuing/test_messages.py b/tempest/api/messaging/test_messages.py
similarity index 92%
rename from tempest/api/queuing/test_messages.py
rename to tempest/api/messaging/test_messages.py
index 9546c91..3c27ac2 100644
--- a/tempest/api/queuing/test_messages.py
+++ b/tempest/api/messaging/test_messages.py
@@ -15,7 +15,7 @@
 
 import logging
 
-from tempest.api.queuing import base
+from tempest.api.messaging import base
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import test
@@ -25,17 +25,17 @@
 CONF = config.CONF
 
 
-class TestMessages(base.BaseQueuingTest):
+class TestMessages(base.BaseMessagingTest):
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(TestMessages, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestMessages, cls).resource_setup()
         cls.queue_name = data_utils.rand_name('Queues-Test')
         # Create Queue
         cls.client.create_queue(cls.queue_name)
 
-    def _post_messages(self, repeat=CONF.queuing.max_messages_per_page):
+    def _post_messages(self, repeat=CONF.messaging.max_messages_per_page):
         message_body = self.generate_message_body(repeat=repeat)
         resp, body = self.post_messages(queue_name=self.queue_name,
                                         rbody=message_body)
@@ -117,6 +117,6 @@
         self.assertEqual('204', resp['status'])
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.delete_queue(cls.queue_name)
-        super(TestMessages, cls).tearDownClass()
+        super(TestMessages, cls).resource_cleanup()
diff --git a/tempest/api/queuing/test_queues.py b/tempest/api/messaging/test_queues.py
similarity index 93%
rename from tempest/api/queuing/test_queues.py
rename to tempest/api/messaging/test_queues.py
index e43178a..ab099ff 100644
--- a/tempest/api/queuing/test_queues.py
+++ b/tempest/api/messaging/test_queues.py
@@ -14,10 +14,11 @@
 # limitations under the License.
 
 import logging
+
 from six import moves
 from testtools import matchers
 
-from tempest.api.queuing import base
+from tempest.api.messaging import base
 from tempest.common.utils import data_utils
 from tempest import test
 
@@ -25,7 +26,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestQueues(base.BaseQueuingTest):
+class TestQueues(base.BaseMessagingTest):
 
     @test.attr(type='smoke')
     def test_create_queue(self):
@@ -39,12 +40,12 @@
         self.assertEqual('', body)
 
 
-class TestManageQueue(base.BaseQueuingTest):
+class TestManageQueue(base.BaseMessagingTest):
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(TestManageQueue, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestManageQueue, cls).resource_setup()
         cls.queues = list()
         for _ in moves.xrange(5):
             queue_name = data_utils.rand_name('Queues-Test')
@@ -124,7 +125,7 @@
         self.assertThat(body, matchers.Equals(req_body))
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for queue_name in cls.queues:
             cls.client.delete_queue(queue_name)
-        super(TestManageQueue, cls).tearDownClass()
+        super(TestManageQueue, cls).resource_cleanup()
diff --git a/tempest/api/network/admin/test_agent_management.py b/tempest/api/network/admin/test_agent_management.py
index b848994..0d27afa 100644
--- a/tempest/api/network/admin/test_agent_management.py
+++ b/tempest/api/network/admin/test_agent_management.py
@@ -21,8 +21,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(AgentManagementTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AgentManagementTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('agent', 'network'):
             msg = "agent extension not enabled."
             raise cls.skipException(msg)
@@ -32,8 +32,7 @@
 
     @test.attr(type='smoke')
     def test_list_agent(self):
-        resp, body = self.admin_client.list_agents()
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_agents()
         agents = body['agents']
        # Heartbeats must be excluded from comparison
         self.agent.pop('heartbeat_timestamp', None)
@@ -45,15 +44,13 @@
 
     @test.attr(type=['smoke'])
     def test_list_agents_non_admin(self):
-        resp, body = self.client.list_agents()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_agents()
         self.assertEqual(len(body["agents"]), 0)
 
     @test.attr(type='smoke')
     def test_show_agent(self):
-        resp, body = self.admin_client.show_agent(self.agent['id'])
+        _, body = self.admin_client.show_agent(self.agent['id'])
         agent = body['agent']
-        self.assertEqual('200', resp['status'])
         self.assertEqual(agent['id'], self.agent['id'])
 
     @test.attr(type='smoke')
@@ -62,10 +59,9 @@
         # Try to update the 'admin_state_up' to the original
         # one to avoid the negative effect.
         agent_status = {'admin_state_up': origin_status}
-        resp, body = self.admin_client.update_agent(agent_id=self.agent['id'],
-                                                    agent_info=agent_status)
+        _, body = self.admin_client.update_agent(agent_id=self.agent['id'],
+                                                 agent_info=agent_status)
         updated_status = body['agent']['admin_state_up']
-        self.assertEqual('200', resp['status'])
         self.assertEqual(origin_status, updated_status)
 
     @test.attr(type='smoke')
@@ -73,10 +69,8 @@
         self.useFixture(fixtures.LockFixture('agent_description'))
         description = 'description for update agent.'
         agent_description = {'description': description}
-        resp, body = self.admin_client.update_agent(
-            agent_id=self.agent['id'],
-            agent_info=agent_description)
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.update_agent(agent_id=self.agent['id'],
+                                                 agent_info=agent_description)
         self.addCleanup(self._restore_agent)
         updated_description = body['agent']['description']
         self.assertEqual(updated_description, description)
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index 25e1cc0..78f211d 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -20,9 +20,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(DHCPAgentSchedulersTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(DHCPAgentSchedulersTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('dhcp_agent_scheduler', 'network'):
             msg = "dhcp_agent_scheduler extension not enabled."
             raise cls.skipException(msg)
@@ -35,9 +34,8 @@
 
     @test.attr(type='smoke')
     def test_list_dhcp_agent_hosting_network(self):
-        resp, body = self.admin_client.list_dhcp_agent_hosting_network(
+        _, body = self.admin_client.list_dhcp_agent_hosting_network(
             self.network['id'])
-        self.assertEqual(resp['status'], '200')
 
     @test.attr(type='smoke')
     def test_list_networks_hosted_by_one_dhcp(self):
@@ -51,9 +49,8 @@
 
     def _check_network_in_dhcp_agent(self, network_id, agent):
         network_ids = []
-        resp, body = self.admin_client.list_networks_hosted_by_one_dhcp_agent(
+        _, body = self.admin_client.list_networks_hosted_by_one_dhcp_agent(
             agent['id'])
-        self.assertEqual(resp['status'], '200')
         networks = body['networks']
         for network in networks:
             network_ids.append(network['id'])
@@ -85,17 +82,15 @@
             self._remove_network_from_dhcp_agent(network_id, agent)
 
     def _remove_network_from_dhcp_agent(self, network_id, agent):
-        resp, body = self.admin_client.remove_network_from_dhcp_agent(
+        _, body = self.admin_client.remove_network_from_dhcp_agent(
             agent_id=agent['id'],
             network_id=network_id)
-        self.assertEqual(resp['status'], '204')
         self.assertFalse(self._check_network_in_dhcp_agent(
             network_id, agent))
 
     def _add_dhcp_agent_to_network(self, network_id, agent):
-        resp, body = self.admin_client.add_dhcp_agent_to_network(
-            agent['id'], network_id)
-        self.assertEqual(resp['status'], '201')
+        _, body = self.admin_client.add_dhcp_agent_to_network(agent['id'],
+                                                              network_id)
         self.assertTrue(self._check_network_in_dhcp_agent(
             network_id, agent))
 
diff --git a/tempest/api/network/admin/test_external_network_extension.py b/tempest/api/network/admin/test_external_network_extension.py
index c7fde77..2e58dae 100644
--- a/tempest/api/network/admin/test_external_network_extension.py
+++ b/tempest/api/network/admin/test_external_network_extension.py
@@ -18,17 +18,16 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(ExternalNetworksTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ExternalNetworksTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
 
     def _create_network(self, external=True):
         post_body = {'name': data_utils.rand_name('network-')}
         if external:
             post_body['router:external'] = external
-        resp, body = self.admin_client.create_network(**post_body)
+        _, body = self.admin_client.create_network(**post_body)
         network = body['network']
-        self.assertEqual('201', resp['status'])
         self.addCleanup(self.admin_client.delete_network, network['id'])
         return network
 
@@ -46,9 +45,8 @@
         network = self._create_network(external=False)
         self.assertFalse(network.get('router:external', False))
         update_body = {'router:external': True}
-        resp, body = self.admin_client.update_network(network['id'],
-                                                      **update_body)
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.update_network(network['id'],
+                                                   **update_body)
         updated_network = body['network']
         # Verify that router:external parameter was updated
         self.assertTrue(updated_network['router:external'])
@@ -59,8 +57,7 @@
         # List networks as a normal user and confirm the external
         # network extension attribute is returned for those networks
         # that were created as external
-        resp, body = self.client.list_networks()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_networks()
         networks_list = [net['id'] for net in body['networks']]
         self.assertIn(external_network['id'], networks_list)
         self.assertIn(self.network['id'], networks_list)
@@ -75,14 +72,12 @@
         external_network = self._create_network()
         # Show an external network as a normal user and confirm the
         # external network extension attribute is returned.
-        resp, body = self.client.show_network(external_network['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_network(external_network['id'])
         show_ext_net = body['network']
         self.assertEqual(external_network['name'], show_ext_net['name'])
         self.assertEqual(external_network['id'], show_ext_net['id'])
         self.assertTrue(show_ext_net['router:external'])
-        resp, body = self.client.show_network(self.network['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_network(self.network['id'])
         show_net = body['network']
         # Verify with show that router:external is False for network
         self.assertEqual(self.network['name'], show_net['name'])
diff --git a/tempest/api/network/admin/test_floating_ips_admin_actions.py b/tempest/api/network/admin/test_floating_ips_admin_actions.py
index 5728432..46c5e76 100644
--- a/tempest/api/network/admin/test_floating_ips_admin_actions.py
+++ b/tempest/api/network/admin/test_floating_ips_admin_actions.py
@@ -26,8 +26,8 @@
     force_tenant_isolation = True
 
     @classmethod
-    def setUpClass(cls):
-        super(FloatingIPAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPAdminTestJSON, cls).resource_setup()
         cls.ext_net_id = CONF.network.public_network_id
         cls.floating_ip = cls.create_floatingip(cls.ext_net_id)
         cls.alt_manager = clients.Manager(cls.isolated_creds.get_alt_creds())
@@ -36,21 +36,18 @@
     @test.attr(type='smoke')
     def test_list_floating_ips_from_admin_and_nonadmin(self):
         # Create floating ip from admin user
-        resp, floating_ip_admin = self.admin_client.create_floatingip(
+        _, floating_ip_admin = self.admin_client.create_floatingip(
             floating_network_id=self.ext_net_id)
-        self.assertEqual('201', resp['status'])
         self.addCleanup(self.admin_client.delete_floatingip,
                         floating_ip_admin['floatingip']['id'])
         # Create floating ip from alt user
-        resp, body = self.alt_client.create_floatingip(
+        _, body = self.alt_client.create_floatingip(
             floating_network_id=self.ext_net_id)
-        self.assertEqual('201', resp['status'])
         floating_ip_alt = body['floatingip']
         self.addCleanup(self.alt_client.delete_floatingip,
                         floating_ip_alt['id'])
         # List floating ips from admin
-        resp, body = self.admin_client.list_floatingips()
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_floatingips()
         floating_ip_ids_admin = [f['id'] for f in body['floatingips']]
         # Check that admin sees all floating ips
         self.assertIn(self.floating_ip['id'], floating_ip_ids_admin)
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index 3b05f42..567af24 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -34,8 +34,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(L3AgentSchedulerTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(L3AgentSchedulerTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('l3_agent_scheduler', 'network'):
             msg = "L3 Agent Scheduler Extension not enabled."
             raise cls.skipException(msg)
@@ -52,9 +52,7 @@
 
     @test.attr(type='smoke')
     def test_list_routers_on_l3_agent(self):
-        resp, body = self.admin_client.list_routers_on_l3_agent(
-            self.agent['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_routers_on_l3_agent(self.agent['id'])
 
     @test.attr(type='smoke')
     def test_add_list_remove_router_on_l3_agent(self):
@@ -62,21 +60,20 @@
         name = data_utils.rand_name('router1-')
         resp, router = self.client.create_router(name)
         self.addCleanup(self.client.delete_router, router['router']['id'])
-        resp, body = self.admin_client.add_router_to_l3_agent(
-            self.agent['id'], router['router']['id'])
-        self.assertEqual('201', resp['status'])
-        resp, body = self.admin_client.list_l3_agents_hosting_router(
+        _, body = self.admin_client.add_router_to_l3_agent(
+            self.agent['id'],
             router['router']['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_l3_agents_hosting_router(
+            router['router']['id'])
         for agent in body['agents']:
             l3_agent_ids.append(agent['id'])
             self.assertIn('agent_type', agent)
             self.assertEqual('L3 agent', agent['agent_type'])
         self.assertIn(self.agent['id'], l3_agent_ids)
         del l3_agent_ids[:]
-        resp, body = self.admin_client.remove_router_from_l3_agent(
-            self.agent['id'], router['router']['id'])
-        self.assertEqual('204', resp['status'])
+        _, body = self.admin_client.remove_router_from_l3_agent(
+            self.agent['id'],
+            router['router']['id'])
         # NOTE(afazekas): The deletion is not asserted because neutron
         # is not forbidden to reschedule the router to the same agent
 
diff --git a/tempest/api/network/admin/test_lbaas_agent_scheduler.py b/tempest/api/network/admin/test_lbaas_agent_scheduler.py
index 675c62d..1476f30 100644
--- a/tempest/api/network/admin/test_lbaas_agent_scheduler.py
+++ b/tempest/api/network/admin/test_lbaas_agent_scheduler.py
@@ -35,9 +35,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(LBaaSAgentSchedulerTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(LBaaSAgentSchedulerTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('lbaas_agent_scheduler', 'network'):
             msg = "LBaaS Agent Scheduler Extension not enabled."
             raise cls.skipException(msg)
@@ -50,17 +49,15 @@
     @test.attr(type='smoke')
     def test_list_pools_on_lbaas_agent(self):
         found = False
-        resp, body = self.admin_client.list_agents(
+        _, body = self.admin_client.list_agents(
             agent_type="Loadbalancer agent")
-        self.assertEqual('200', resp['status'])
         agents = body['agents']
         for a in agents:
             msg = 'Load Balancer agent expected'
             self.assertEqual(a['agent_type'], 'Loadbalancer agent', msg)
-            resp, body = (
+            _, body = (
                 self.admin_client.list_pools_hosted_by_one_lbaas_agent(
                     a['id']))
-            self.assertEqual('200', resp['status'])
             pools = body['pools']
             if self.pool['id'] in [p['id'] for p in pools]:
                 found = True
@@ -69,9 +66,8 @@
 
     @test.attr(type='smoke')
     def test_show_lbaas_agent_hosting_pool(self):
-        resp, body = self.admin_client.show_lbaas_agent_hosting_pool(
+        _, body = self.admin_client.show_lbaas_agent_hosting_pool(
             self.pool['id'])
-        self.assertEqual('200', resp['status'])
         self.assertEqual('Loadbalancer agent', body['agent']['agent_type'])
 
 
diff --git a/tempest/api/network/admin/test_load_balancer_admin_actions.py b/tempest/api/network/admin/test_load_balancer_admin_actions.py
index fe4fc60..6d115e8 100644
--- a/tempest/api/network/admin/test_load_balancer_admin_actions.py
+++ b/tempest/api/network/admin/test_load_balancer_admin_actions.py
@@ -29,9 +29,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(LoadBalancerAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(LoadBalancerAdminTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('lbaas', 'network'):
             msg = "lbaas extension not enabled."
             raise cls.skipException(msg)
@@ -47,56 +46,54 @@
     @test.attr(type='smoke')
     def test_create_vip_as_admin_for_another_tenant(self):
         name = data_utils.rand_name('vip-')
-        resp, body = self.admin_client.create_pool(
-            name=data_utils.rand_name('pool-'), lb_method="ROUND_ROBIN",
-            protocol="HTTP", subnet_id=self.subnet['id'],
+        _, body = self.admin_client.create_pool(
+            name=data_utils.rand_name('pool-'),
+            lb_method="ROUND_ROBIN",
+            protocol="HTTP",
+            subnet_id=self.subnet['id'],
             tenant_id=self.tenant_id)
-        self.assertEqual('201', resp['status'])
         pool = body['pool']
         self.addCleanup(self.admin_client.delete_pool, pool['id'])
-        resp, body = self.admin_client.create_vip(name=name,
-                                                  protocol="HTTP",
-                                                  protocol_port=80,
-                                                  subnet_id=self.subnet['id'],
-                                                  pool_id=pool['id'],
-                                                  tenant_id=self.tenant_id)
-        self.assertEqual('201', resp['status'])
+        _, body = self.admin_client.create_vip(name=name,
+                                               protocol="HTTP",
+                                               protocol_port=80,
+                                               subnet_id=self.subnet['id'],
+                                               pool_id=pool['id'],
+                                               tenant_id=self.tenant_id)
         vip = body['vip']
         self.addCleanup(self.admin_client.delete_vip, vip['id'])
         self.assertIsNotNone(vip['id'])
         self.assertEqual(self.tenant_id, vip['tenant_id'])
-        resp, body = self.client.show_vip(vip['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_vip(vip['id'])
         show_vip = body['vip']
         self.assertEqual(vip['id'], show_vip['id'])
         self.assertEqual(vip['name'], show_vip['name'])
 
     @test.attr(type='smoke')
     def test_create_health_monitor_as_admin_for_another_tenant(self):
-        resp, body = (
+        _, body = (
             self.admin_client.create_health_monitor(delay=4,
                                                     max_retries=3,
                                                     type="TCP",
                                                     timeout=1,
                                                     tenant_id=self.tenant_id))
-        self.assertEqual('201', resp['status'])
         health_monitor = body['health_monitor']
         self.addCleanup(self.admin_client.delete_health_monitor,
                         health_monitor['id'])
         self.assertIsNotNone(health_monitor['id'])
         self.assertEqual(self.tenant_id, health_monitor['tenant_id'])
-        resp, body = self.client.show_health_monitor(health_monitor['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_health_monitor(health_monitor['id'])
         show_health_monitor = body['health_monitor']
         self.assertEqual(health_monitor['id'], show_health_monitor['id'])
 
     @test.attr(type='smoke')
     def test_create_pool_from_admin_user_other_tenant(self):
-        resp, body = self.admin_client.create_pool(
-            name=data_utils.rand_name('pool-'), lb_method="ROUND_ROBIN",
-            protocol="HTTP", subnet_id=self.subnet['id'],
+        _, body = self.admin_client.create_pool(
+            name=data_utils.rand_name('pool-'),
+            lb_method="ROUND_ROBIN",
+            protocol="HTTP",
+            subnet_id=self.subnet['id'],
             tenant_id=self.tenant_id)
-        self.assertEqual('201', resp['status'])
         pool = body['pool']
         self.addCleanup(self.admin_client.delete_pool, pool['id'])
         self.assertIsNotNone(pool['id'])
@@ -104,10 +101,10 @@
 
     @test.attr(type='smoke')
     def test_create_member_from_admin_user_other_tenant(self):
-        resp, body = self.admin_client.create_member(
-            address="10.0.9.47", protocol_port=80, pool_id=self.pool['id'],
-            tenant_id=self.tenant_id)
-        self.assertEqual('201', resp['status'])
+        _, body = self.admin_client.create_member(address="10.0.9.47",
+                                                  protocol_port=80,
+                                                  pool_id=self.pool['id'],
+                                                  tenant_id=self.tenant_id)
         member = body['member']
         self.addCleanup(self.admin_client.delete_member, member['id'])
         self.assertIsNotNone(member['id'])
diff --git a/tempest/api/network/admin/test_quotas.py b/tempest/api/network/admin/test_quotas.py
index 9fa54b1..72aef36 100644
--- a/tempest/api/network/admin/test_quotas.py
+++ b/tempest/api/network/admin/test_quotas.py
@@ -39,8 +39,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(QuotasTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(QuotasTest, cls).resource_setup()
         if not test.is_extension_enabled('quotas', 'network'):
             msg = "quotas extension not enabled."
             raise cls.skipException(msg)
@@ -57,16 +57,14 @@
         self.addCleanup(self.identity_admin_client.delete_tenant, tenant_id)
 
         # Change quotas for tenant
-        resp, quota_set = self.admin_client.update_quotas(tenant_id,
-                                                          **new_quotas)
-        self.assertEqual('200', resp['status'])
+        _, quota_set = self.admin_client.update_quotas(tenant_id,
+                                                       **new_quotas)
         self.addCleanup(self.admin_client.reset_quotas, tenant_id)
         for key, value in new_quotas.iteritems():
             self.assertEqual(value, quota_set[key])
 
         # Confirm our tenant is listed among tenants with non default quotas
-        resp, non_default_quotas = self.admin_client.list_quotas()
-        self.assertEqual('200', resp['status'])
+        _, non_default_quotas = self.admin_client.list_quotas()
         found = False
         for qs in non_default_quotas['quotas']:
             if qs['tenant_id'] == tenant_id:
@@ -74,17 +72,14 @@
         self.assertTrue(found)
 
         # Confirm from API quotas were changed as requested for tenant
-        resp, quota_set = self.admin_client.show_quotas(tenant_id)
+        _, quota_set = self.admin_client.show_quotas(tenant_id)
         quota_set = quota_set['quota']
-        self.assertEqual('200', resp['status'])
         for key, value in new_quotas.iteritems():
             self.assertEqual(value, quota_set[key])
 
         # Reset quotas to default and confirm
-        resp, body = self.admin_client.reset_quotas(tenant_id)
-        self.assertEqual('204', resp['status'])
-        resp, non_default_quotas = self.admin_client.list_quotas()
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.reset_quotas(tenant_id)
+        _, non_default_quotas = self.admin_client.list_quotas()
         for q in non_default_quotas['quotas']:
             self.assertNotEqual(tenant_id, q['tenant_id'])
 
diff --git a/tempest/api/network/base.py b/tempest/api/network/base.py
index d75339c..834c010 100644
--- a/tempest/api/network/base.py
+++ b/tempest/api/network/base.py
@@ -49,16 +49,17 @@
         neutron as True
     """
 
+    _interface = 'json'
     force_tenant_isolation = False
 
     # Default to ipv4.
     _ip_version = 4
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # Create no network resources for these tests.
         cls.set_network_resources()
-        super(BaseNetworkTest, cls).setUpClass()
+        super(BaseNetworkTest, cls).resource_setup()
         if not CONF.service_available.neutron:
             raise cls.skipException("Neutron support is required")
 
@@ -84,7 +85,7 @@
         cls.ipsecpolicies = []
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         if CONF.service_available.neutron:
             # Clean up ipsec policies
             for ipsecpolicy in cls.ipsecpolicies:
@@ -137,7 +138,7 @@
             for network in cls.networks:
                 cls.client.delete_network(network['id'])
             cls.clear_isolated_creds()
-        super(BaseNetworkTest, cls).tearDownClass()
+        super(BaseNetworkTest, cls).resource_cleanup()
 
     @classmethod
     def create_network(cls, network_name=None):
@@ -362,8 +363,8 @@
 class BaseAdminNetworkTest(BaseNetworkTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseAdminNetworkTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseAdminNetworkTest, cls).resource_setup()
         admin_username = CONF.compute_admin.username
         admin_password = CONF.compute_admin.password
         admin_tenant = CONF.compute_admin.tenant_name
diff --git a/tempest/api/network/base_routers.py b/tempest/api/network/base_routers.py
index 1303bcf..38985a0 100644
--- a/tempest/api/network/base_routers.py
+++ b/tempest/api/network/base_routers.py
@@ -22,38 +22,33 @@
     # require admin credentials by default
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseRouterTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseRouterTest, cls).resource_setup()
 
     def _delete_router(self, router_id):
-        resp, _ = self.client.delete_router(router_id)
-        self.assertEqual(204, resp.status)
+        self.client.delete_router(router_id)
         # Asserting that the router is not found in the list
         # after deletion
-        resp, list_body = self.client.list_routers()
-        self.assertEqual('200', resp['status'])
+        _, list_body = self.client.list_routers()
         routers_list = list()
         for router in list_body['routers']:
             routers_list.append(router['id'])
         self.assertNotIn(router_id, routers_list)
 
     def _add_router_interface_with_subnet_id(self, router_id, subnet_id):
-        resp, interface = self.client.add_router_interface_with_subnet_id(
+        _, interface = self.client.add_router_interface_with_subnet_id(
             router_id, subnet_id)
-        self.assertEqual('200', resp['status'])
         self.addCleanup(self._remove_router_interface_with_subnet_id,
                         router_id, subnet_id)
         self.assertEqual(subnet_id, interface['subnet_id'])
         return interface
 
     def _remove_router_interface_with_subnet_id(self, router_id, subnet_id):
-        resp, body = self.client.remove_router_interface_with_subnet_id(
+        _, body = self.client.remove_router_interface_with_subnet_id(
             router_id, subnet_id)
-        self.assertEqual('200', resp['status'])
         self.assertEqual(subnet_id, body['subnet_id'])
 
     def _remove_router_interface_with_port_id(self, router_id, port_id):
-        resp, body = self.client.remove_router_interface_with_port_id(
-            router_id, port_id)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.remove_router_interface_with_port_id(router_id,
+                                                                   port_id)
         self.assertEqual(port_id, body['port_id'])
diff --git a/tempest/api/network/base_security_groups.py b/tempest/api/network/base_security_groups.py
index 90be454..622ed01 100644
--- a/tempest/api/network/base_security_groups.py
+++ b/tempest/api/network/base_security_groups.py
@@ -20,38 +20,33 @@
 class BaseSecGroupTest(base.BaseNetworkTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseSecGroupTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseSecGroupTest, cls).resource_setup()
 
     def _create_security_group(self):
         # Create a security group
         name = data_utils.rand_name('secgroup-')
-        resp, group_create_body = self.client.create_security_group(name=name)
-        self.assertEqual('201', resp['status'])
+        _, group_create_body = self.client.create_security_group(name=name)
         self.addCleanup(self._delete_security_group,
                         group_create_body['security_group']['id'])
         self.assertEqual(group_create_body['security_group']['name'], name)
         return group_create_body, name
 
     def _delete_security_group(self, secgroup_id):
-        resp, _ = self.client.delete_security_group(secgroup_id)
-        self.assertEqual(204, resp.status)
+        self.client.delete_security_group(secgroup_id)
         # Asserting that the security group is not found in the list
         # after deletion
-        resp, list_body = self.client.list_security_groups()
-        self.assertEqual('200', resp['status'])
+        _, list_body = self.client.list_security_groups()
         secgroup_list = list()
         for secgroup in list_body['security_groups']:
             secgroup_list.append(secgroup['id'])
         self.assertNotIn(secgroup_id, secgroup_list)
 
     def _delete_security_group_rule(self, rule_id):
-        resp, _ = self.client.delete_security_group_rule(rule_id)
-        self.assertEqual(204, resp.status)
+        self.client.delete_security_group_rule(rule_id)
         # Asserting that the security group rule is not found in the list
         # after deletion
-        resp, list_body = self.client.list_security_group_rules()
-        self.assertEqual('200', resp['status'])
+        _, list_body = self.client.list_security_group_rules()
         rules_list = list()
         for rule in list_body['security_group_rules']:
             rules_list.append(rule['id'])
diff --git a/tempest/api/network/test_allowed_address_pair.py b/tempest/api/network/test_allowed_address_pair.py
index c897716..c085c39 100644
--- a/tempest/api/network/test_allowed_address_pair.py
+++ b/tempest/api/network/test_allowed_address_pair.py
@@ -37,9 +37,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(AllowedAddressPairTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AllowedAddressPairTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('allowed-address-pairs', 'network'):
             msg = "Allowed Address Pairs extension not enabled."
             raise cls.skipException(msg)
@@ -54,16 +53,14 @@
         # Create port with allowed address pair attribute
         allowed_address_pairs = [{'ip_address': self.ip_address,
                                   'mac_address': self.mac_address}]
-        resp, body = self.client.create_port(
+        _, body = self.client.create_port(
             network_id=self.network['id'],
             allowed_address_pairs=allowed_address_pairs)
-        self.assertEqual('201', resp['status'])
         port_id = body['port']['id']
         self.addCleanup(self.client.delete_port, port_id)
 
         # Confirm port was created with allowed address pair attribute
-        resp, body = self.client.list_ports()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_ports()
         ports = body['ports']
         port = [p for p in ports if p['id'] == port_id]
         msg = 'Created port not found in list of ports returned by Neutron'
@@ -73,21 +70,18 @@
     @test.attr(type='smoke')
     def test_update_port_with_address_pair(self):
         # Create a port without allowed address pair
-        resp, body = self.client.create_port(network_id=self.network['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_port(network_id=self.network['id'])
         port_id = body['port']['id']
         self.addCleanup(self.client.delete_port, port_id)
 
         # Confirm port is created
-        resp, body = self.client.show_port(port_id)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_port(port_id)
 
         # Update allowed address pair attribute of port
         allowed_address_pairs = [{'ip_address': self.ip_address,
                                   'mac_address': self.mac_address}]
-        resp, body = self.client.update_port(port_id,
-                          allowed_address_pairs=allowed_address_pairs)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_port(
+            port_id, allowed_address_pairs=allowed_address_pairs)
         newport = body['port']
         self._confirm_allowed_address_pair(newport, self.ip_address)
 
diff --git a/tempest/api/network/test_extensions.py b/tempest/api/network/test_extensions.py
index 529f8e9..715136c 100644
--- a/tempest/api/network/test_extensions.py
+++ b/tempest/api/network/test_extensions.py
@@ -33,8 +33,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(ExtensionsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ExtensionsTestJSON, cls).resource_setup()
 
     @test.attr(type='smoke')
     def test_list_show_extensions(self):
@@ -47,16 +47,14 @@
         expected_alias = [ext for ext in expected_alias if
                           test.is_extension_enabled(ext, 'network')]
         actual_alias = list()
-        resp, extensions = self.client.list_extensions()
-        self.assertEqual('200', resp['status'])
+        _, extensions = self.client.list_extensions()
         list_extensions = extensions['extensions']
         # Show and verify the details of the available extensions
         for ext in list_extensions:
             ext_name = ext['name']
             ext_alias = ext['alias']
             actual_alias.append(ext['alias'])
-            resp, ext_details = self.client.show_extension(ext_alias)
-            self.assertEqual('200', resp['status'])
+            _, ext_details = self.client.show_extension(ext_alias)
             ext_details = ext_details['extension']
 
             self.assertIsNotNone(ext_details)
diff --git a/tempest/api/network/test_extra_dhcp_options.py b/tempest/api/network/test_extra_dhcp_options.py
index 371c651..86da9b7 100644
--- a/tempest/api/network/test_extra_dhcp_options.py
+++ b/tempest/api/network/test_extra_dhcp_options.py
@@ -36,9 +36,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ExtraDHCPOptionsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ExtraDHCPOptionsTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('extra_dhcp_opt', 'network'):
             msg = "Extra DHCP Options extension not enabled."
             raise cls.skipException(msg)
@@ -54,16 +53,13 @@
             {'opt_value': '123.123.123.123', 'opt_name': 'tftp-server'},
             {'opt_value': '123.123.123.45', 'opt_name': 'server-ip-address'}
         ]
-        resp, body = self.client.create_port(
-            network_id=self.network['id'],
-            extra_dhcp_opts=extra_dhcp_opts)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_port(network_id=self.network['id'],
+                                          extra_dhcp_opts=extra_dhcp_opts)
         port_id = body['port']['id']
         self.addCleanup(self.client.delete_port, port_id)
 
         # Confirm port created has Extra DHCP Options
-        resp, body = self.client.list_ports()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_ports()
         ports = body['ports']
         port = [p for p in ports if p['id'] == port_id]
         self.assertTrue(port)
@@ -78,13 +74,11 @@
             {'opt_value': '123.123.123.45', 'opt_name': 'server-ip-address'}
         ]
         name = data_utils.rand_name('new-port-name')
-        resp, body = self.client.update_port(
-            self.port['id'], name=name, extra_dhcp_opts=extra_dhcp_opts)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_port(self.port['id'], name=name,
+                                          extra_dhcp_opts=extra_dhcp_opts)
 
         # Confirm extra dhcp options were added to the port
-        resp, body = self.client.show_port(self.port['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_port(self.port['id'])
         self._confirm_extra_dhcp_options(body['port'], extra_dhcp_opts)
 
     def _confirm_extra_dhcp_options(self, port, extra_dhcp_opts):
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index 2463654..52672ea 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -46,9 +46,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(FloatingIPTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FloatingIPTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
@@ -68,9 +67,9 @@
     @test.attr(type='smoke')
     def test_create_list_show_update_delete_floating_ip(self):
         # Creates a floating IP
-        resp, body = self.client.create_floatingip(
-            floating_network_id=self.ext_net_id, port_id=self.ports[0]['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_floatingip(
+            floating_network_id=self.ext_net_id,
+            port_id=self.ports[0]['id'])
         created_floating_ip = body['floatingip']
         self.addCleanup(self.client.delete_floatingip,
                         created_floating_ip['id'])
@@ -83,9 +82,7 @@
         self.assertIn(created_floating_ip['fixed_ip_address'],
                       [ip['ip_address'] for ip in self.ports[0]['fixed_ips']])
         # Verifies the details of a floating_ip
-        resp, floating_ip = self.client.show_floatingip(
-            created_floating_ip['id'])
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.show_floatingip(created_floating_ip['id'])
         shown_floating_ip = floating_ip['floatingip']
         self.assertEqual(shown_floating_ip['id'], created_floating_ip['id'])
         self.assertEqual(shown_floating_ip['floating_network_id'],
@@ -97,16 +94,15 @@
         self.assertEqual(shown_floating_ip['port_id'], self.ports[0]['id'])
 
         # Verify the floating ip exists in the list of all floating_ips
-        resp, floating_ips = self.client.list_floatingips()
-        self.assertEqual('200', resp['status'])
+        _, floating_ips = self.client.list_floatingips()
         floatingip_id_list = list()
         for f in floating_ips['floatingips']:
             floatingip_id_list.append(f['id'])
         self.assertIn(created_floating_ip['id'], floatingip_id_list)
         # Associate floating IP to the other port
-        resp, floating_ip = self.client.update_floatingip(
-            created_floating_ip['id'], port_id=self.ports[1]['id'])
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.update_floatingip(
+            created_floating_ip['id'],
+            port_id=self.ports[1]['id'])
         updated_floating_ip = floating_ip['floatingip']
         self.assertEqual(updated_floating_ip['port_id'], self.ports[1]['id'])
         self.assertEqual(updated_floating_ip['fixed_ip_address'],
@@ -114,9 +110,9 @@
         self.assertEqual(updated_floating_ip['router_id'], self.router['id'])
 
         # Disassociate floating IP from the port
-        resp, floating_ip = self.client.update_floatingip(
-            created_floating_ip['id'], port_id=None)
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.update_floatingip(
+            created_floating_ip['id'],
+            port_id=None)
         updated_floating_ip = floating_ip['floatingip']
         self.assertIsNone(updated_floating_ip['port_id'])
         self.assertIsNone(updated_floating_ip['fixed_ip_address'])
@@ -125,24 +121,21 @@
     @test.attr(type='smoke')
     def test_floating_ip_delete_port(self):
         # Create a floating IP
-        resp, body = self.client.create_floatingip(
+        _, body = self.client.create_floatingip(
             floating_network_id=self.ext_net_id)
-        self.assertEqual('201', resp['status'])
         created_floating_ip = body['floatingip']
         self.addCleanup(self.client.delete_floatingip,
                         created_floating_ip['id'])
         # Create a port
         resp, port = self.client.create_port(network_id=self.network['id'])
         created_port = port['port']
-        resp, floating_ip = self.client.update_floatingip(
-            created_floating_ip['id'], port_id=created_port['id'])
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.update_floatingip(
+            created_floating_ip['id'],
+            port_id=created_port['id'])
         # Delete port
         self.client.delete_port(created_port['id'])
         # Verifies the details of the floating_ip
-        resp, floating_ip = self.client.show_floatingip(
-            created_floating_ip['id'])
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.show_floatingip(created_floating_ip['id'])
         shown_floating_ip = floating_ip['floatingip']
         # Confirm the fields are back to None
         self.assertEqual(shown_floating_ip['id'], created_floating_ip['id'])
@@ -153,9 +146,9 @@
     @test.attr(type='smoke')
     def test_floating_ip_update_different_router(self):
         # Associate a floating IP to a port on a router
-        resp, body = self.client.create_floatingip(
-            floating_network_id=self.ext_net_id, port_id=self.ports[1]['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_floatingip(
+            floating_network_id=self.ext_net_id,
+            port_id=self.ports[1]['id'])
         created_floating_ip = body['floatingip']
         self.addCleanup(self.client.delete_floatingip,
                         created_floating_ip['id'])
@@ -167,9 +160,9 @@
         self.create_router_interface(router2['id'], subnet2['id'])
         port_other_router = self.create_port(network2)
         # Associate floating IP to the other port on another router
-        resp, floating_ip = self.client.update_floatingip(
-            created_floating_ip['id'], port_id=port_other_router['id'])
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.update_floatingip(
+            created_floating_ip['id'],
+            port_id=port_other_router['id'])
         updated_floating_ip = floating_ip['floatingip']
         self.assertEqual(updated_floating_ip['router_id'], router2['id'])
         self.assertEqual(updated_floating_ip['port_id'],
@@ -178,20 +171,19 @@
 
     @test.attr(type='smoke')
     def test_create_floating_ip_specifying_a_fixed_ip_address(self):
-        resp, body = self.client.create_floatingip(
+        _, body = self.client.create_floatingip(
             floating_network_id=self.ext_net_id,
             port_id=self.ports[1]['id'],
             fixed_ip_address=self.ports[1]['fixed_ips'][0]['ip_address'])
-        self.assertEqual('201', resp['status'])
         created_floating_ip = body['floatingip']
         self.addCleanup(self.client.delete_floatingip,
                         created_floating_ip['id'])
         self.assertIsNotNone(created_floating_ip['id'])
         self.assertEqual(created_floating_ip['fixed_ip_address'],
                          self.ports[1]['fixed_ips'][0]['ip_address'])
-        resp, floating_ip = self.client.update_floatingip(
-            created_floating_ip['id'], port_id=None)
-        self.assertEqual('200', resp['status'])
+        _, floating_ip = self.client.update_floatingip(
+            created_floating_ip['id'],
+            port_id=None)
         self.assertIsNone(floating_ip['floatingip']['port_id'])
 
     @test.attr(type='smoke')
@@ -201,25 +193,23 @@
         list_ips = [str(ip) for ip in ips[-3:-1]]
         fixed_ips = [{'ip_address': list_ips[0]}, {'ip_address': list_ips[1]}]
         # Create port
-        resp, body = self.client.create_port(network_id=self.network['id'],
-                                             fixed_ips=fixed_ips)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_port(network_id=self.network['id'],
+                                          fixed_ips=fixed_ips)
         port = body['port']
         self.addCleanup(self.client.delete_port, port['id'])
         # Create floating ip
-        resp, body = self.client.create_floatingip(
-            floating_network_id=self.ext_net_id, port_id=port['id'],
+        _, body = self.client.create_floatingip(
+            floating_network_id=self.ext_net_id,
+            port_id=port['id'],
             fixed_ip_address=list_ips[0])
-        self.assertEqual('201', resp['status'])
         floating_ip = body['floatingip']
         self.addCleanup(self.client.delete_floatingip, floating_ip['id'])
         self.assertIsNotNone(floating_ip['id'])
         self.assertEqual(floating_ip['fixed_ip_address'], list_ips[0])
         # Update floating ip
-        resp, body = self.client.update_floatingip(
-            floating_ip['id'], port_id=port['id'],
-            fixed_ip_address=list_ips[1])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_floatingip(floating_ip['id'],
+                                                port_id=port['id'],
+                                                fixed_ip_address=list_ips[1])
         update_floating_ip = body['floatingip']
         self.assertEqual(update_floating_ip['fixed_ip_address'],
                          list_ips[1])
diff --git a/tempest/api/network/test_fwaas_extensions.py b/tempest/api/network/test_fwaas_extensions.py
index 555cbda..11588d6 100644
--- a/tempest/api/network/test_fwaas_extensions.py
+++ b/tempest/api/network/test_fwaas_extensions.py
@@ -46,8 +46,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(FWaaSExtensionTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(FWaaSExtensionTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('fwaas', 'network'):
             msg = "FWaaS Extension not enabled."
             raise cls.skipException(msg)
@@ -72,23 +72,23 @@
 
         self.client.wait_for_resource_deletion('firewall', fw_id)
 
-    def _wait_for_active(self, fw_id):
+    def _wait_until_ready(self, fw_id):
+        target_states = ('ACTIVE', 'CREATED')
+
         def _wait():
-            resp, firewall = self.client.show_firewall(fw_id)
-            self.assertEqual('200', resp['status'])
+            _, firewall = self.client.show_firewall(fw_id)
             firewall = firewall['firewall']
-            return firewall['status'] == 'ACTIVE'
+            return firewall['status'] in target_states
 
         if not test.call_until_true(_wait, CONF.network.build_timeout,
                                     CONF.network.build_interval):
-            m = 'Timed out waiting for firewall %s to become ACTIVE.' % fw_id
+            m = ("Timed out waiting for firewall %s to reach %s state(s)" %
+                 (fw_id, target_states))
             raise exceptions.TimeoutException(m)
 
-    @test.attr(type='smoke')
     def test_list_firewall_rules(self):
         # List firewall rules
-        resp, fw_rules = self.client.list_firewall_rules()
-        self.assertEqual('200', resp['status'])
+        _, fw_rules = self.client.list_firewall_rules()
         fw_rules = fw_rules['firewall_rules']
         self.assertIn((self.fw_rule['id'],
                        self.fw_rule['name'],
@@ -103,42 +103,34 @@
                         m['ip_version'],
                         m['enabled']) for m in fw_rules])
 
-    @test.attr(type='smoke')
     def test_create_update_delete_firewall_rule(self):
         # Create firewall rule
-        resp, body = self.client.create_firewall_rule(
+        _, body = self.client.create_firewall_rule(
             name=data_utils.rand_name("fw-rule"),
             action="allow",
             protocol="tcp")
-        self.assertEqual('201', resp['status'])
         fw_rule_id = body['firewall_rule']['id']
 
         # Update firewall rule
-        resp, body = self.client.update_firewall_rule(fw_rule_id,
-                                                      shared=True)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_firewall_rule(fw_rule_id,
+                                                   shared=True)
         self.assertTrue(body["firewall_rule"]['shared'])
 
         # Delete firewall rule
-        resp, _ = self.client.delete_firewall_rule(fw_rule_id)
-        self.assertEqual('204', resp['status'])
+        self.client.delete_firewall_rule(fw_rule_id)
         # Confirm deletion
         resp, fw_rules = self.client.list_firewall_rules()
         self.assertNotIn(fw_rule_id,
                          [m['id'] for m in fw_rules['firewall_rules']])
 
-    @test.attr(type='smoke')
     def test_show_firewall_rule(self):
         # show a created firewall rule
-        resp, fw_rule = self.client.show_firewall_rule(self.fw_rule['id'])
-        self.assertEqual('200', resp['status'])
+        _, fw_rule = self.client.show_firewall_rule(self.fw_rule['id'])
         for key, value in fw_rule['firewall_rule'].iteritems():
             self.assertEqual(self.fw_rule[key], value)
 
-    @test.attr(type='smoke')
     def test_list_firewall_policies(self):
-        resp, fw_policies = self.client.list_firewall_policies()
-        self.assertEqual('200', resp['status'])
+        _, fw_policies = self.client.list_firewall_policies()
         fw_policies = fw_policies['firewall_policies']
         self.assertIn((self.fw_policy['id'],
                        self.fw_policy['name'],
@@ -147,43 +139,35 @@
                         m['name'],
                         m['firewall_rules']) for m in fw_policies])
 
-    @test.attr(type='smoke')
     def test_create_update_delete_firewall_policy(self):
         # Create firewall policy
-        resp, body = self.client.create_firewall_policy(
+        _, body = self.client.create_firewall_policy(
             name=data_utils.rand_name("fw-policy"))
-        self.assertEqual('201', resp['status'])
         fw_policy_id = body['firewall_policy']['id']
         self.addCleanup(self._try_delete_policy, fw_policy_id)
 
         # Update firewall policy
-        resp, body = self.client.update_firewall_policy(fw_policy_id,
-                                                        shared=True,
-                                                        name="updated_policy")
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_firewall_policy(fw_policy_id,
+                                                     shared=True,
+                                                     name="updated_policy")
         updated_fw_policy = body["firewall_policy"]
         self.assertTrue(updated_fw_policy['shared'])
         self.assertEqual("updated_policy", updated_fw_policy['name'])
 
         # Delete firewall policy
-        resp, _ = self.client.delete_firewall_policy(fw_policy_id)
-        self.assertEqual('204', resp['status'])
+        self.client.delete_firewall_policy(fw_policy_id)
         # Confirm deletion
         resp, fw_policies = self.client.list_firewall_policies()
         fw_policies = fw_policies['firewall_policies']
         self.assertNotIn(fw_policy_id, [m['id'] for m in fw_policies])
 
-    @test.attr(type='smoke')
     def test_show_firewall_policy(self):
         # show a created firewall policy
-        resp, fw_policy = self.client.show_firewall_policy(
-            self.fw_policy['id'])
-        self.assertEqual('200', resp['status'])
+        _, fw_policy = self.client.show_firewall_policy(self.fw_policy['id'])
         fw_policy = fw_policy['firewall_policy']
         for key, value in fw_policy.iteritems():
             self.assertEqual(self.fw_policy[key], value)
 
-    @test.attr(type='smoke')
     def test_create_show_delete_firewall(self):
         # Create tenant network resources required for an ACTIVE firewall
         network = self.create_network()
@@ -195,19 +179,18 @@
             router['id'], subnet['id'])
 
         # Create firewall
-        resp, body = self.client.create_firewall(
+        _, body = self.client.create_firewall(
             name=data_utils.rand_name("firewall"),
             firewall_policy_id=self.fw_policy['id'])
-        self.assertEqual('201', resp['status'])
         created_firewall = body['firewall']
         firewall_id = created_firewall['id']
         self.addCleanup(self._try_delete_firewall, firewall_id)
 
-        self._wait_for_active(firewall_id)
+        # Wait for the firewall resource to become ready
+        self._wait_until_ready(firewall_id)
 
         # show a created firewall
-        resp, firewall = self.client.show_firewall(firewall_id)
-        self.assertEqual('200', resp['status'])
+        _, firewall = self.client.show_firewall(firewall_id)
         firewall = firewall['firewall']
 
         for key, value in firewall.iteritems():
@@ -216,8 +199,7 @@
             self.assertEqual(created_firewall[key], value)
 
         # list firewall
-        resp, firewalls = self.client.list_firewalls()
-        self.assertEqual('200', resp['status'])
+        _, firewalls = self.client.list_firewalls()
         firewalls = firewalls['firewalls']
         self.assertIn((created_firewall['id'],
                        created_firewall['name'],
@@ -227,8 +209,7 @@
                         m['firewall_policy_id']) for m in firewalls])
 
         # Delete firewall
-        resp, _ = self.client.delete_firewall(firewall_id)
-        self.assertEqual('204', resp['status'])
+        self.client.delete_firewall(firewall_id)
 
 
 class FWaaSExtensionTestXML(FWaaSExtensionTestJSON):
diff --git a/tempest/api/network/test_load_balancer.py b/tempest/api/network/test_load_balancer.py
index db24e0d..baa8cad 100644
--- a/tempest/api/network/test_load_balancer.py
+++ b/tempest/api/network/test_load_balancer.py
@@ -38,9 +38,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(LoadBalancerTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(LoadBalancerTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('lbaas', 'network'):
             msg = "lbaas extension not enabled."
             raise cls.skipException(msg)
@@ -67,34 +66,31 @@
         delete_obj = getattr(self.client, 'delete_' + obj_name)
         list_objs = getattr(self.client, 'list_' + obj_name + 's')
 
-        resp, body = create_obj(**kwargs)
-        self.assertEqual('201', resp['status'])
+        _, body = create_obj(**kwargs)
         obj = body[obj_name]
         self.addCleanup(delete_obj, obj['id'])
         for key, value in obj.iteritems():
             # It is not relevant to filter by all arguments, which is why
             # there is a list of attributes to exclude
             if key not in attr_exceptions:
-                resp, body = list_objs(**{key: value})
-                self.assertEqual('200', resp['status'])
+                _, body = list_objs(**{key: value})
                 objs = [v[key] for v in body[obj_name + 's']]
                 self.assertIn(value, objs)
 
     @test.attr(type='smoke')
     def test_list_vips(self):
         # Verify the vIP exists in the list of all vIPs
-        resp, body = self.client.list_vips()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_vips()
         vips = body['vips']
         self.assertIn(self.vip['id'], [v['id'] for v in vips])
 
     @test.attr(type='smoke')
     def test_list_vips_with_filter(self):
         name = data_utils.rand_name('vip-')
-        resp, body = self.client.create_pool(
-            name=data_utils.rand_name("pool-"), lb_method="ROUND_ROBIN",
-            protocol="HTTPS", subnet_id=self.subnet['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_pool(name=data_utils.rand_name("pool-"),
+                                          lb_method="ROUND_ROBIN",
+                                          protocol="HTTPS",
+                                          subnet_id=self.subnet['id'])
         pool = body['pool']
         self.addCleanup(self.client.delete_pool, pool['id'])
         attr_exceptions = ['status', 'session_persistence',
@@ -116,18 +112,16 @@
             protocol='HTTP',
             subnet_id=self.subnet['id'])
         pool = body['pool']
-        resp, body = self.client.create_vip(name=name,
-                                            protocol="HTTP",
-                                            protocol_port=80,
-                                            subnet_id=self.subnet['id'],
-                                            pool_id=pool['id'],
-                                            address=address)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_vip(name=name,
+                                         protocol="HTTP",
+                                         protocol_port=80,
+                                         subnet_id=self.subnet['id'],
+                                         pool_id=pool['id'],
+                                         address=address)
         vip = body['vip']
         vip_id = vip['id']
         # Confirm VIP's address correctness with a show
-        resp, body = self.client.show_vip(vip_id)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_vip(vip_id)
         vip = body['vip']
         self.assertEqual(address, vip['address'])
         # Verification of vip update
@@ -136,13 +130,12 @@
         persistence_type = "HTTP_COOKIE"
         update_data = {"session_persistence": {
             "type": persistence_type}}
-        resp, body = self.client.update_vip(vip_id,
-                                            name=new_name,
-                                            description=new_description,
-                                            connection_limit=10,
-                                            admin_state_up=False,
-                                            **update_data)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_vip(vip_id,
+                                         name=new_name,
+                                         description=new_description,
+                                         connection_limit=10,
+                                         admin_state_up=False,
+                                         **update_data)
         updated_vip = body['vip']
         self.assertEqual(new_name, updated_vip['name'])
         self.assertEqual(new_description, updated_vip['description'])
@@ -150,30 +143,24 @@
         self.assertFalse(updated_vip['admin_state_up'])
         self.assertEqual(persistence_type,
                          updated_vip['session_persistence']['type'])
-        # Verification of vip delete
-        resp, body = self.client.delete_vip(vip['id'])
-        self.assertEqual('204', resp['status'])
+        self.client.delete_vip(vip['id'])
         self.client.wait_for_resource_deletion('vip', vip['id'])
         # Verification of pool update
         new_name = "New_pool"
-        resp, body = self.client.update_pool(pool['id'],
-                                             name=new_name,
-                                             description="new_description",
-                                             lb_method='LEAST_CONNECTIONS')
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_pool(pool['id'],
+                                          name=new_name,
+                                          description="new_description",
+                                          lb_method='LEAST_CONNECTIONS')
         updated_pool = body['pool']
         self.assertEqual(new_name, updated_pool['name'])
         self.assertEqual('new_description', updated_pool['description'])
         self.assertEqual('LEAST_CONNECTIONS', updated_pool['lb_method'])
-        # Verification of pool delete
-        resp, body = self.client.delete_pool(pool['id'])
-        self.assertEqual('204', resp['status'])
+        self.client.delete_pool(pool['id'])
 
     @test.attr(type='smoke')
     def test_show_vip(self):
         # Verifies the details of a vip
-        resp, body = self.client.show_vip(self.vip['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_vip(self.vip['id'])
         vip = body['vip']
         for key, value in vip.iteritems():
             # 'status' should not be confirmed in api tests
@@ -183,17 +170,14 @@
     @test.attr(type='smoke')
     def test_show_pool(self):
         # Here we need a new pool without any dependency on vips
-        resp, body = self.client.create_pool(
-            name=data_utils.rand_name("pool-"),
-            lb_method='ROUND_ROBIN',
-            protocol='HTTP',
-            subnet_id=self.subnet['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_pool(name=data_utils.rand_name("pool-"),
+                                          lb_method='ROUND_ROBIN',
+                                          protocol='HTTP',
+                                          subnet_id=self.subnet['id'])
         pool = body['pool']
         self.addCleanup(self.client.delete_pool, pool['id'])
         # Verifies the details of a pool
-        resp, body = self.client.show_pool(pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_pool(pool['id'])
         shown_pool = body['pool']
         for key, value in pool.iteritems():
             # 'status' should not be confirmed in api tests
@@ -203,8 +187,7 @@
     @test.attr(type='smoke')
     def test_list_pools(self):
         # Verify the pool exists in the list of all pools
-        resp, body = self.client.list_pools()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_pools()
         pools = body['pools']
         self.assertIn(self.pool['id'], [p['id'] for p in pools])
 
@@ -222,8 +205,7 @@
     @test.attr(type='smoke')
     def test_list_members(self):
         # Verify the member exists in the list of all members
-        resp, body = self.client.list_members()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_members()
         members = body['members']
         self.assertIn(self.member['id'], [m['id'] for m in members])
 
@@ -237,26 +219,22 @@
     @test.attr(type='smoke')
     def test_create_update_delete_member(self):
         # Creates a member
-        resp, body = self.client.create_member(address="10.0.9.47",
-                                               protocol_port=80,
-                                               pool_id=self.pool['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_member(address="10.0.9.47",
+                                            protocol_port=80,
+                                            pool_id=self.pool['id'])
         member = body['member']
         # Verification of member update
-        resp, body = self.client.update_member(member['id'],
-                                               admin_state_up=False)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_member(member['id'],
+                                            admin_state_up=False)
         updated_member = body['member']
         self.assertFalse(updated_member['admin_state_up'])
         # Verification of member delete
-        resp, body = self.client.delete_member(member['id'])
-        self.assertEqual('204', resp['status'])
+        self.client.delete_member(member['id'])
 
     @test.attr(type='smoke')
     def test_show_member(self):
         # Verifies the details of a member
-        resp, body = self.client.show_member(self.member['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_member(self.member['id'])
         member = body['member']
         for key, value in member.iteritems():
             # 'status' should not be confirmed in api tests
@@ -266,8 +244,7 @@
     @test.attr(type='smoke')
     def test_list_health_monitors(self):
         # Verify the health monitor exists in the list of all health monitors
-        resp, body = self.client.list_health_monitors()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_health_monitors()
         health_monitors = body['health_monitors']
         self.assertIn(self.health_monitor['id'],
                       [h['id'] for h in health_monitors])
@@ -282,31 +259,27 @@
     @test.attr(type='smoke')
     def test_create_update_delete_health_monitor(self):
         # Creates a health_monitor
-        resp, body = self.client.create_health_monitor(delay=4,
-                                                       max_retries=3,
-                                                       type="TCP",
-                                                       timeout=1)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_health_monitor(delay=4,
+                                                    max_retries=3,
+                                                    type="TCP",
+                                                    timeout=1)
         health_monitor = body['health_monitor']
         # Verification of health_monitor update
-        resp, body = (self.client.update_health_monitor
-                     (health_monitor['id'],
-                      admin_state_up=False))
-        self.assertEqual('200', resp['status'])
+        _, body = (self.client.update_health_monitor
+                   (health_monitor['id'],
+                    admin_state_up=False))
         updated_health_monitor = body['health_monitor']
         self.assertFalse(updated_health_monitor['admin_state_up'])
         # Verification of health_monitor delete
-        resp, body = self.client.delete_health_monitor(health_monitor['id'])
-        self.assertEqual('204', resp['status'])
+        _, body = self.client.delete_health_monitor(health_monitor['id'])
 
     @test.attr(type='smoke')
     def test_create_health_monitor_http_type(self):
         hm_type = "HTTP"
-        resp, body = self.client.create_health_monitor(delay=4,
-                                                       max_retries=3,
-                                                       type=hm_type,
-                                                       timeout=1)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_health_monitor(delay=4,
+                                                    max_retries=3,
+                                                    type=hm_type,
+                                                    timeout=1)
         health_monitor = body['health_monitor']
         self.addCleanup(self.client.delete_health_monitor,
                         health_monitor['id'])
@@ -314,20 +287,18 @@
 
     @test.attr(type='smoke')
     def test_update_health_monitor_http_method(self):
-        resp, body = self.client.create_health_monitor(delay=4,
-                                                       max_retries=3,
-                                                       type="HTTP",
-                                                       timeout=1)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_health_monitor(delay=4,
+                                                    max_retries=3,
+                                                    type="HTTP",
+                                                    timeout=1)
         health_monitor = body['health_monitor']
         self.addCleanup(self.client.delete_health_monitor,
                         health_monitor['id'])
-        resp, body = (self.client.update_health_monitor
-                     (health_monitor['id'],
-                      http_method="POST",
-                      url_path="/home/user",
-                      expected_codes="290"))
-        self.assertEqual('200', resp['status'])
+        _, body = (self.client.update_health_monitor
+                   (health_monitor['id'],
+                    http_method="POST",
+                    url_path="/home/user",
+                    expected_codes="290"))
         updated_health_monitor = body['health_monitor']
         self.assertEqual("POST", updated_health_monitor['http_method'])
         self.assertEqual("/home/user", updated_health_monitor['url_path'])
@@ -336,8 +307,7 @@
     @test.attr(type='smoke')
     def test_show_health_monitor(self):
         # Verifies the details of a health_monitor
-        resp, body = self.client.show_health_monitor(self.health_monitor['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_health_monitor(self.health_monitor['id'])
         health_monitor = body['health_monitor']
         for key, value in health_monitor.iteritems():
             # 'status' should not be confirmed in api tests
@@ -347,9 +317,8 @@
     @test.attr(type='smoke')
     def test_associate_disassociate_health_monitor_with_pool(self):
         # Verify that a health monitor can be associated with a pool
-        resp, body = (self.client.associate_health_monitor_with_pool
-                     (self.health_monitor['id'], self.pool['id']))
-        self.assertEqual('201', resp['status'])
+        _, body = (self.client.associate_health_monitor_with_pool
+                   (self.health_monitor['id'], self.pool['id']))
         resp, body = self.client.show_health_monitor(
             self.health_monitor['id'])
         health_monitor = body['health_monitor']
@@ -359,10 +328,9 @@
                       [p['pool_id'] for p in health_monitor['pools']])
         self.assertIn(health_monitor['id'], pool['health_monitors'])
         # Verify that a health monitor can be disassociated from a pool
-        resp, body = (self.client.disassociate_health_monitor_with_pool
-                     (self.health_monitor['id'], self.pool['id']))
-        self.assertEqual('204', resp['status'])
-        resp, body = self.client.show_pool(self.pool['id'])
+        (self.client.disassociate_health_monitor_with_pool
+            (self.health_monitor['id'], self.pool['id']))
+        _, body = self.client.show_pool(self.pool['id'])
         pool = body['pool']
         resp, body = self.client.show_health_monitor(
             self.health_monitor['id'])
@@ -374,8 +342,7 @@
     @test.attr(type='smoke')
     def test_get_lb_pool_stats(self):
         # Verify the details of pool stats
-        resp, body = self.client.list_lb_pool_stats(self.pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_lb_pool_stats(self.pool['id'])
         stats = body['stats']
         self.assertIn("bytes_in", stats)
         self.assertIn("total_connections", stats)
@@ -384,52 +351,41 @@
 
     @test.attr(type='smoke')
     def test_update_list_of_health_monitors_associated_with_pool(self):
-        resp, _ = (self.client.associate_health_monitor_with_pool
-                   (self.health_monitor['id'], self.pool['id']))
-        self.assertEqual('201', resp['status'])
-        resp, _ = self.client.update_health_monitor(
+        (self.client.associate_health_monitor_with_pool
+            (self.health_monitor['id'], self.pool['id']))
+        self.client.update_health_monitor(
             self.health_monitor['id'], admin_state_up=False)
-        self.assertEqual('200', resp['status'])
-        resp, body = self.client.show_pool(self.pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_pool(self.pool['id'])
         health_monitors = body['pool']['health_monitors']
         for health_monitor_id in health_monitors:
-            resp, body = self.client.show_health_monitor(health_monitor_id)
-            self.assertEqual('200', resp['status'])
+            _, body = self.client.show_health_monitor(health_monitor_id)
             self.assertFalse(body['health_monitor']['admin_state_up'])
-        resp, _ = (self.client.disassociate_health_monitor_with_pool
-                   (self.health_monitor['id'], self.pool['id']))
-        self.assertEqual('204', resp['status'])
+            (self.client.disassociate_health_monitor_with_pool
+                (self.health_monitor['id'], self.pool['id']))
 
     @test.attr(type='smoke')
     def test_update_admin_state_up_of_pool(self):
-        resp, _ = self.client.update_pool(self.pool['id'],
-                                          admin_state_up=False)
-        self.assertEqual('200', resp['status'])
-        resp, body = self.client.show_pool(self.pool['id'])
-        self.assertEqual('200', resp['status'])
+        self.client.update_pool(self.pool['id'],
+                                admin_state_up=False)
+        _, body = self.client.show_pool(self.pool['id'])
         pool = body['pool']
         self.assertFalse(pool['admin_state_up'])
 
     @test.attr(type='smoke')
     def test_show_vip_associated_with_pool(self):
-        resp, body = self.client.show_pool(self.pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_pool(self.pool['id'])
         pool = body['pool']
-        resp, body = self.client.show_vip(pool['vip_id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_vip(pool['vip_id'])
         vip = body['vip']
         self.assertEqual(self.vip['name'], vip['name'])
         self.assertEqual(self.vip['id'], vip['id'])
 
     @test.attr(type='smoke')
     def test_show_members_associated_with_pool(self):
-        resp, body = self.client.show_pool(self.pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_pool(self.pool['id'])
         members = body['pool']['members']
         for member_id in members:
-            resp, body = self.client.show_member(member_id)
-            self.assertEqual('200', resp['status'])
+            _, body = self.client.show_member(member_id)
             self.assertIsNotNone(body['member']['status'])
             self.assertEqual(member_id, body['member']['id'])
             self.assertIsNotNone(body['member']['admin_state_up'])
@@ -437,34 +393,28 @@
     @test.attr(type='smoke')
     def test_update_pool_related_to_member(self):
         # Create new pool
-        resp, body = self.client.create_pool(
-            name=data_utils.rand_name("pool-"),
-            lb_method='ROUND_ROBIN',
-            protocol='HTTP',
-            subnet_id=self.subnet['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_pool(name=data_utils.rand_name("pool-"),
+                                          lb_method='ROUND_ROBIN',
+                                          protocol='HTTP',
+                                          subnet_id=self.subnet['id'])
         new_pool = body['pool']
         self.addCleanup(self.client.delete_pool, new_pool['id'])
         # Update member with new pool's id
-        resp, body = self.client.update_member(self.member['id'],
-                                               pool_id=new_pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_member(self.member['id'],
+                                            pool_id=new_pool['id'])
         # Confirm with show that the pool_id changed
         resp, body = self.client.show_member(self.member['id'])
         member = body['member']
         self.assertEqual(member['pool_id'], new_pool['id'])
         # Update member with the old pool id; this is needed for cleanup
-        resp, body = self.client.update_member(self.member['id'],
-                                               pool_id=self.pool['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_member(self.member['id'],
+                                            pool_id=self.pool['id'])
 
     @test.attr(type='smoke')
     def test_update_member_weight(self):
-        resp, _ = self.client.update_member(self.member['id'],
-                                            weight=2)
-        self.assertEqual('200', resp['status'])
-        resp, body = self.client.show_member(self.member['id'])
-        self.assertEqual('200', resp['status'])
+        self.client.update_member(self.member['id'],
+                                  weight=2)
+        _, body = self.client.show_member(self.member['id'])
         member = body['member']
         self.assertEqual(2, member['weight'])
 
diff --git a/tempest/api/network/test_metering_extensions.py b/tempest/api/network/test_metering_extensions.py
index 08ccbfe..2cfb841 100644
--- a/tempest/api/network/test_metering_extensions.py
+++ b/tempest/api/network/test_metering_extensions.py
@@ -35,29 +35,23 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(MeteringJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(MeteringJSON, cls).resource_setup()
         if not test.is_extension_enabled('metering', 'network'):
             msg = "metering extension not enabled."
             raise cls.skipException(msg)
         description = "metering label created by tempest"
         name = data_utils.rand_name("metering-label")
-        try:
-            cls.metering_label = cls.create_metering_label(name, description)
-            remote_ip_prefix = "10.0.0.0/24"
-            direction = "ingress"
-            cls.metering_label_rule = cls.create_metering_label_rule(
-                remote_ip_prefix, direction,
-                metering_label_id=cls.metering_label['id'])
-        except Exception:
-            LOG.exception('setUpClass failed')
-            cls.tearDownClass()
-            raise
+        cls.metering_label = cls.create_metering_label(name, description)
+        remote_ip_prefix = "10.0.0.0/24"
+        direction = "ingress"
+        cls.metering_label_rule = cls.create_metering_label_rule(
+            remote_ip_prefix, direction,
+            metering_label_id=cls.metering_label['id'])
 
     def _delete_metering_label(self, metering_label_id):
         # Deletes a label and verifies if it is deleted or not
-        resp, body = self.admin_client.delete_metering_label(metering_label_id)
-        self.assertEqual(204, resp.status)
+        _, body = self.admin_client.delete_metering_label(metering_label_id)
         # Asserting that the label is not found in list after deletion
         resp, labels = (self.admin_client.list_metering_labels(
                         id=metering_label_id))
@@ -65,9 +59,8 @@
 
     def _delete_metering_label_rule(self, metering_label_rule_id):
         # Deletes a rule and verifies if it is deleted or not
-        resp, body = (self.admin_client.delete_metering_label_rule(
-                      metering_label_rule_id))
-        self.assertEqual(204, resp.status)
+        _, body = (self.admin_client.delete_metering_label_rule(
+                   metering_label_rule_id))
         # Asserting that the rule is not found in list after deletion
         resp, rules = (self.admin_client.list_metering_label_rules(
                        id=metering_label_rule_id))
@@ -76,8 +69,7 @@
     @test.attr(type='smoke')
     def test_list_metering_labels(self):
         # Verify label filtering
-        resp, body = self.admin_client.list_metering_labels(id=33)
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_metering_labels(id=33)
         metering_labels = body['metering_labels']
         self.assertEqual(0, len(metering_labels))
 
@@ -86,9 +78,8 @@
         # Creates a label
         name = data_utils.rand_name('metering-label-')
         description = "label created by tempest"
-        resp, body = (self.admin_client.create_metering_label(name=name,
-                      description=description))
-        self.assertEqual('201', resp['status'])
+        _, body = (self.admin_client.create_metering_label(name=name,
+                   description=description))
         metering_label = body['metering_label']
         self.addCleanup(self._delete_metering_label,
                         metering_label['id'])
@@ -101,9 +92,8 @@
     @test.attr(type='smoke')
     def test_show_metering_label(self):
         # Verifies the details of a label
-        resp, body = (self.admin_client.show_metering_label(
-                      self.metering_label['id']))
-        self.assertEqual('200', resp['status'])
+        _, body = (self.admin_client.show_metering_label(
+                   self.metering_label['id']))
         metering_label = body['metering_label']
         self.assertEqual(self.metering_label['id'], metering_label['id'])
         self.assertEqual(self.metering_label['tenant_id'],
@@ -115,19 +105,17 @@
     @test.attr(type='smoke')
     def test_list_metering_label_rules(self):
         # Verify rule filtering
-        resp, body = self.admin_client.list_metering_label_rules(id=33)
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_metering_label_rules(id=33)
         metering_label_rules = body['metering_label_rules']
         self.assertEqual(0, len(metering_label_rules))
 
     @test.attr(type='smoke')
     def test_create_delete_metering_label_rule_with_filters(self):
         # Creates a rule
-        resp, body = (self.admin_client.create_metering_label_rule(
-                      remote_ip_prefix="10.0.1.0/24",
-                      direction="ingress",
-                      metering_label_id=self.metering_label['id']))
-        self.assertEqual('201', resp['status'])
+        _, body = (self.admin_client.create_metering_label_rule(
+                   remote_ip_prefix="10.0.1.0/24",
+                   direction="ingress",
+                   metering_label_id=self.metering_label['id']))
         metering_label_rule = body['metering_label_rule']
         self.addCleanup(self._delete_metering_label_rule,
                         metering_label_rule['id'])
@@ -140,9 +128,8 @@
     @test.attr(type='smoke')
     def test_show_metering_label_rule(self):
         # Verifies the details of a rule
-        resp, body = (self.admin_client.show_metering_label_rule(
-                      self.metering_label_rule['id']))
-        self.assertEqual('200', resp['status'])
+        _, body = (self.admin_client.show_metering_label_rule(
+                   self.metering_label_rule['id']))
         metering_label_rule = body['metering_label_rule']
         self.assertEqual(self.metering_label_rule['id'],
                          metering_label_rule['id'])
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index ac3a072..986a2c8 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -17,6 +17,7 @@
 import testtools
 
 from tempest.api.network import base
+from tempest.common import custom_matchers
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import exceptions
@@ -59,26 +60,105 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(NetworksTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(NetworksTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.name = cls.network['name']
         cls.subnet = cls.create_subnet(cls.network)
         cls.cidr = cls.subnet['cidr']
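+        # Pre-computed per-IP-version subnet attributes (gateway, allocation
+        # pools, DNS servers and host routes) used by the subnet tests below.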
+        cls._subnet_data = {6: {'gateway':
+                                str(cls._get_gateway_from_tempest_conf(6)),
+                                'allocation_pools':
+                                cls._get_allocation_pools_from_gateway(6),
+                                'dns_nameservers': ['2001:4860:4860::8844',
+                                                    '2001:4860:4860::8888'],
+                                'host_routes': [{'destination': '2001::/64',
+                                                 'nexthop': '2003::1'}],
+                                'new_host_routes': [{'destination':
+                                                     '2001::/64',
+                                                     'nexthop': '2005::1'}],
+                                'new_dns_nameservers':
+                                ['2001:4860:4860::7744',
+                                 '2001:4860:4860::7888']},
+                            4: {'gateway':
+                                str(cls._get_gateway_from_tempest_conf(4)),
+                                'allocation_pools':
+                                cls._get_allocation_pools_from_gateway(4),
+                                'dns_nameservers': ['8.8.4.4', '8.8.8.8'],
+                                'host_routes': [{'destination': '10.20.0.0/32',
+                                                 'nexthop': '10.100.1.1'}],
+                                'new_host_routes': [{'destination':
+                                                     '10.20.0.0/32',
+                                                     'nexthop':
+                                                     '10.100.1.2'}],
+                                'new_dns_nameservers': ['7.8.8.8', '7.8.4.4']}}
+
+    @classmethod
+    def _get_gateway_from_tempest_conf(cls, ip_version):
+        """Return first subnet gateway for configured CIDR """
+        if ip_version == 4:
+            cidr = netaddr.IPNetwork(CONF.network.tenant_network_cidr)
+            mask_bits = CONF.network.tenant_network_mask_bits
+        elif ip_version == 6:
+            cidr = netaddr.IPNetwork(CONF.network.tenant_network_v6_cidr)
+            mask_bits = CONF.network.tenant_network_v6_mask_bits
+
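+        # Use the first host address (network address + 1) of the configured
+        # network as the gateway.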
+        if mask_bits >= cidr.prefixlen:
+            return netaddr.IPAddress(cidr) + 1
+        else:
+            for subnet in cidr.subnet(mask_bits):
+                return netaddr.IPAddress(subnet) + 1
+
+    @classmethod
+    def _get_allocation_pools_from_gateway(cls, ip_version):
+        """Return allocation range for subnet of given gateway"""
+        gateway = cls._get_gateway_from_tempest_conf(ip_version)
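+        # Reserve a small two-address pool (gateway + 2 .. gateway + 3).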
+        return [{'start': str(gateway + 2), 'end': str(gateway + 3)}]
+
+    def subnet_dict(self, include_keys):
+        """Return a subnet dict which has include_keys and their corresponding
+           values from self._subnet_data
+        """
+        return dict((key, self._subnet_data[self._ip_version][key])
+                    for key in include_keys)
+
+    def _compare_resource_attrs(self, actual, expected):
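+        # Compare only the keys present in both dicts; any other key is
+        # excluded from the match.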
+        exclude_keys = set(actual).symmetric_difference(expected)
+        self.assertThat(actual, custom_matchers.MatchesDictExceptForKeys(
+                        expected, exclude_keys))
+
+    def _create_verify_delete_subnet(self, cidr=None, mask_bits=None,
+                                     **kwargs):
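+        # Create a network and subnet with the given attributes, verify that
+        # the subnet returned matches them, then delete the network.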
+        network = self.create_network()
+        net_id = network['id']
+        gateway = kwargs.pop('gateway', None)
+        subnet = self.create_subnet(network, gateway, cidr, mask_bits,
+                                    **kwargs)
+        compare_args_full = dict(gateway_ip=gateway, cidr=cidr,
+                                 mask_bits=mask_bits, **kwargs)
+        compare_args = dict((k, v) for k, v in compare_args_full.iteritems()
+                            if v is not None)
+
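+        # dns_nameservers may be returned in any order, so compare them
+        # sorted and drop them before the strict dict comparison below.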
+        if 'dns_nameservers' in set(subnet).intersection(compare_args):
+            self.assertEqual(sorted(compare_args['dns_nameservers']),
+                             sorted(subnet['dns_nameservers']))
+            del subnet['dns_nameservers'], compare_args['dns_nameservers']
+
+        self._compare_resource_attrs(subnet, compare_args)
+        self.client.delete_network(net_id)
+        self.networks.pop()
+        self.subnets.pop()
 
     @test.attr(type='smoke')
     def test_create_update_delete_network_subnet(self):
         # Create a network
         name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
-        network = body['network']
+        network = self.create_network(network_name=name)
         net_id = network['id']
+        self.assertEqual('ACTIVE', network['status'])
         # Verify network update
         new_name = "New_network"
-        resp, body = self.client.update_network(net_id, name=new_name)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_network(net_id, name=new_name)
         updated_net = body['network']
         self.assertEqual(updated_net['name'], new_name)
         # Find a cidr that is not in use yet and create a subnet with it
@@ -86,23 +166,14 @@
         subnet_id = subnet['id']
         # Verify subnet update
         new_name = "New_subnet"
-        resp, body = self.client.update_subnet(subnet_id, name=new_name)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_subnet(subnet_id, name=new_name)
         updated_subnet = body['subnet']
         self.assertEqual(updated_subnet['name'], new_name)
-        # Delete subnet and network
-        resp, body = self.client.delete_subnet(subnet_id)
-        self.assertEqual('204', resp['status'])
-        # Remove subnet from cleanup list
-        self.subnets.pop()
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
 
     @test.attr(type='smoke')
     def test_show_network(self):
         # Verify the details of a network
-        resp, body = self.client.show_network(self.network['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_network(self.network['id'])
         network = body['network']
         for key in ['id', 'name']:
             self.assertEqual(network[key], self.network[key])
@@ -111,9 +182,8 @@
     def test_show_network_fields(self):
         # Verify specific fields of a network
         fields = ['id', 'name']
-        resp, body = self.client.show_network(self.network['id'],
-                                              fields=fields)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_network(self.network['id'],
+                                           fields=fields)
         network = body['network']
         self.assertEqual(sorted(network.keys()), sorted(fields))
         for field_name in fields:
@@ -122,8 +192,7 @@
     @test.attr(type='smoke')
     def test_list_networks(self):
         # Verify the network exists in the list of all networks
-        resp, body = self.client.list_networks()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_networks()
         networks = [network['id'] for network in body['networks']
                     if network['id'] == self.network['id']]
         self.assertNotEmpty(networks, "Created network not found in the list")
@@ -132,8 +201,7 @@
     def test_list_networks_fields(self):
         # Verify specific fields of the networks
         fields = ['id', 'name']
-        resp, body = self.client.list_networks(fields=fields)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_networks(fields=fields)
         networks = body['networks']
         self.assertNotEmpty(networks, "Network list returned is empty")
         for network in networks:
@@ -142,8 +210,7 @@
     @test.attr(type='smoke')
     def test_show_subnet(self):
         # Verify the details of a subnet
-        resp, body = self.client.show_subnet(self.subnet['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_subnet(self.subnet['id'])
         subnet = body['subnet']
         self.assertNotEmpty(subnet, "Subnet returned has no fields")
         for key in ['id', 'cidr']:
@@ -154,9 +221,8 @@
     def test_show_subnet_fields(self):
         # Verify specific fields of a subnet
         fields = ['id', 'network_id']
-        resp, body = self.client.show_subnet(self.subnet['id'],
-                                             fields=fields)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_subnet(self.subnet['id'],
+                                          fields=fields)
         subnet = body['subnet']
         self.assertEqual(sorted(subnet.keys()), sorted(fields))
         for field_name in fields:
@@ -165,8 +231,7 @@
     @test.attr(type='smoke')
     def test_list_subnets(self):
         # Verify the subnet exists in the list of all subnets
-        resp, body = self.client.list_subnets()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_subnets()
         subnets = [subnet['id'] for subnet in body['subnets']
                    if subnet['id'] == self.subnet['id']]
         self.assertNotEmpty(subnets, "Created subnet not found in the list")
@@ -175,8 +240,7 @@
     def test_list_subnets_fields(self):
         # Verify specific fields of subnets
         fields = ['id', 'network_id']
-        resp, body = self.client.list_subnets(fields=fields)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_subnets(fields=fields)
         subnets = body['subnets']
         self.assertNotEmpty(subnets, "Subnet list returned is empty")
         for subnet in subnets:
@@ -194,8 +258,7 @@
     def test_delete_network_with_subnet(self):
         # Creates a network
         name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_network(name=name)
         network = body['network']
         net_id = network['id']
         self.addCleanup(self._try_delete_network, net_id)
@@ -205,8 +268,7 @@
         subnet_id = subnet['id']
 
         # Delete network while the subnet still exists
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
+        _, body = self.client.delete_network(net_id)
 
         # Verify that the subnet got automatically deleted.
         self.assertRaises(exceptions.NotFound, self.client.show_subnet,
@@ -219,36 +281,65 @@
 
     @test.attr(type='smoke')
     def test_create_delete_subnet_with_gw(self):
-        gateway = '10.100.0.13'
-        name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
-        network = body['network']
-        net_id = network['id']
-        subnet = self.create_subnet(network, gateway)
-        # Verifies Subnet GW in IPv4
-        self.assertEqual(subnet['gateway_ip'], gateway)
-        # Delete network and subnet
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
-        self.subnets.pop()
+        self._create_verify_delete_subnet(
+            **self.subnet_dict(['gateway']))
 
     @test.attr(type='smoke')
-    def test_create_delete_subnet_without_gw(self):
-        net = netaddr.IPNetwork(CONF.network.tenant_network_cidr)
-        gateway_ip = str(netaddr.IPAddress(net.first + 1))
-        name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
-        network = body['network']
-        net_id = network['id']
-        subnet = self.create_subnet(network)
-        # Verifies Subnet GW in IPv4
-        self.assertEqual(subnet['gateway_ip'], gateway_ip)
-        # Delete network and subnet
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
-        self.subnets.pop()
+    def test_create_delete_subnet_with_allocation_pools(self):
+        self._create_verify_delete_subnet(
+            **self.subnet_dict(['allocation_pools']))
+
+    @test.attr(type='smoke')
+    def test_create_delete_subnet_with_gw_and_allocation_pools(self):
+        self._create_verify_delete_subnet(**self.subnet_dict(
+            ['gateway', 'allocation_pools']))
+
+    @test.attr(type='smoke')
+    def test_create_delete_subnet_with_host_routes_and_dns_nameservers(self):
+        self._create_verify_delete_subnet(
+            **self.subnet_dict(['host_routes', 'dns_nameservers']))
+
+    @test.attr(type='smoke')
+    def test_create_delete_subnet_with_dhcp_enabled(self):
+        self._create_verify_delete_subnet(enable_dhcp=True)
+
+    @test.attr(type='smoke')
+    def test_update_subnet_gw_dns_host_routes_dhcp(self):
+        network = self.create_network()
+
+        subnet = self.create_subnet(
+            network, **self.subnet_dict(['gateway', 'host_routes',
+                                        'dns_nameservers',
+                                         'allocation_pools']))
+        subnet_id = subnet['id']
+        new_gateway = str(netaddr.IPAddress(
+                          self._subnet_data[self._ip_version]['gateway']) + 1)
+        # Verify subnet update
+        new_host_routes = self._subnet_data[self._ip_version][
+            'new_host_routes']
+
+        new_dns_nameservers = self._subnet_data[self._ip_version][
+            'new_dns_nameservers']
+        kwargs = {'host_routes': new_host_routes,
+                  'dns_nameservers': new_dns_nameservers,
+                  'gateway_ip': new_gateway, 'enable_dhcp': True}
+
+        new_name = "New_subnet"
+        _, body = self.client.update_subnet(subnet_id, name=new_name,
+                                            **kwargs)
+        updated_subnet = body['subnet']
+        kwargs['name'] = new_name
+        self.assertEqual(sorted(updated_subnet['dns_nameservers']),
+                         sorted(kwargs['dns_nameservers']))
+        del subnet['dns_nameservers'], kwargs['dns_nameservers']
+
+        self._compare_resource_attrs(updated_subnet, kwargs)
+
+    @test.attr(type='smoke')
+    def test_create_delete_subnet_all_attributes(self):
+        self._create_verify_delete_subnet(
+            enable_dhcp=True,
+            **self.subnet_dict(['gateway', 'host_routes', 'dns_nameservers']))
 
 
 class NetworksTestXML(NetworksTestJSON):
@@ -279,8 +370,7 @@
 
     def _delete_networks(self, created_networks):
         for n in created_networks:
-            resp, body = self.client.delete_network(n['id'])
-            self.assertEqual(204, resp.status)
+            self.client.delete_network(n['id'])
         # Asserting that the networks are not found in the list after deletion
         resp, body = self.client.list_networks()
         networks_list = [network['id'] for network in body['networks']]
@@ -289,8 +379,7 @@
 
     def _delete_subnets(self, created_subnets):
         for n in created_subnets:
-            resp, body = self.client.delete_subnet(n['id'])
-            self.assertEqual(204, resp.status)
+            self.client.delete_subnet(n['id'])
         # Asserting that the subnets are not found in the list after deletion
         resp, body = self.client.list_subnets()
         subnets_list = [subnet['id'] for subnet in body['subnets']]
@@ -299,8 +388,7 @@
 
     def _delete_ports(self, created_ports):
         for n in created_ports:
-            resp, body = self.client.delete_port(n['id'])
-            self.assertEqual(204, resp.status)
+            self.client.delete_port(n['id'])
         # Asserting that the ports are not found in the list after deletion
         resp, body = self.client.list_ports()
         ports_list = [port['id'] for port in body['ports']]
@@ -312,9 +400,8 @@
         # Creates 2 networks in one request
         network_names = [data_utils.rand_name('network-'),
                          data_utils.rand_name('network-')]
-        resp, body = self.client.create_bulk_network(2, network_names)
+        _, body = self.client.create_bulk_network(network_names)
         created_networks = body['networks']
-        self.assertEqual('201', resp['status'])
         self.addCleanup(self._delete_networks, created_networks)
         # Asserting that the networks are found in the list after creation
         resp, body = self.client.list_networks()
@@ -344,10 +431,9 @@
             }
             subnets_list.append(p1)
         del subnets_list[1]['name']
-        resp, body = self.client.create_bulk_subnet(subnets_list)
+        _, body = self.client.create_bulk_subnet(subnets_list)
         created_subnets = body['subnets']
         self.addCleanup(self._delete_subnets, created_subnets)
-        self.assertEqual('201', resp['status'])
         # Asserting that the subnets are found in the list after creation
         resp, body = self.client.list_subnets()
         subnets_list = [subnet['id'] for subnet in body['subnets']]
@@ -370,10 +456,9 @@
             }
             port_list.append(p1)
         del port_list[1]['name']
-        resp, body = self.client.create_bulk_port(port_list)
+        _, body = self.client.create_bulk_port(port_list)
         created_ports = body['ports']
         self.addCleanup(self._delete_ports, created_ports)
-        self.assertEqual('201', resp['status'])
         # Asserting that the ports are found in the list after creation
         resp, body = self.client.list_ports()
         ports_list = [port['id'] for port in body['ports']]
@@ -390,65 +475,41 @@
     _ip_version = 6
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.network_feature_enabled.ipv6:
             skip_msg = "IPv6 Tests are disabled."
             raise cls.skipException(skip_msg)
-        super(NetworksIpV6TestJSON, cls).setUpClass()
+        super(NetworksIpV6TestJSON, cls).resource_setup()
 
     @test.attr(type='smoke')
     def test_create_delete_subnet_with_gw(self):
-        gateway = '2003::2'
+        net = netaddr.IPNetwork(CONF.network.tenant_network_v6_cidr)
+        gateway = str(netaddr.IPAddress(net.first + 2))
         name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
-        network = body['network']
-        net_id = network['id']
+        network = self.create_network(network_name=name)
         subnet = self.create_subnet(network, gateway)
         # Verifies Subnet GW in IPv6
         self.assertEqual(subnet['gateway_ip'], gateway)
-        # Delete network and subnet
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
-        self.subnets.pop()
 
     @test.attr(type='smoke')
     def test_create_delete_subnet_without_gw(self):
+        net = netaddr.IPNetwork(CONF.network.tenant_network_v6_cidr)
+        gateway_ip = str(netaddr.IPAddress(net.first + 1))
         name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
-        network = body['network']
-        net_id = network['id']
+        network = self.create_network(network_name=name)
         subnet = self.create_subnet(network)
         # Verifies Subnet GW in IPv6
-        self.assertEqual(subnet['gateway_ip'], '2003::1')
-        # Delete network and subnet
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
-        self.subnets.pop()
+        self.assertEqual(subnet['gateway_ip'], gateway_ip)
 
     @testtools.skipUnless(CONF.network_feature_enabled.ipv6_subnet_attributes,
                           "IPv6 extended attributes for subnets not "
                           "available")
     @test.attr(type='smoke')
     def test_create_delete_subnet_with_v6_attributes(self):
-        name = data_utils.rand_name('network-')
-        resp, body = self.client.create_network(name=name)
-        self.assertEqual('201', resp['status'])
-        network = body['network']
-        net_id = network['id']
-        subnet = self.create_subnet(network,
-                                    gateway='fe80::1',
-                                    ipv6_ra_mode='slaac',
-                                    ipv6_address_mode='slaac')
-        # Verifies Subnet GW in IPv6
-        self.assertEqual(subnet['gateway_ip'], 'fe80::1')
-        self.assertEqual(subnet['ipv6_ra_mode'], 'slaac')
-        self.assertEqual(subnet['ipv6_address_mode'], 'slaac')
-        # Delete network and subnet
-        resp, body = self.client.delete_network(net_id)
-        self.assertEqual('204', resp['status'])
-        self.subnets.pop()
+        self._create_verify_delete_subnet(
+            gateway=self._subnet_data[self._ip_version]['gateway'],
+            ipv6_ra_mode='slaac',
+            ipv6_address_mode='slaac')
 
 
 class NetworksIpV6TestXML(NetworksIpV6TestJSON):
diff --git a/tempest/api/network/test_ports.py b/tempest/api/network/test_ports.py
index e6e6ea1..cdd3a29 100644
--- a/tempest/api/network/test_ports.py
+++ b/tempest/api/network/test_ports.py
@@ -16,6 +16,7 @@
 import socket
 
 from tempest.api.network import base
+from tempest.common import custom_matchers
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest import test
@@ -37,36 +38,30 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(PortsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(PortsTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.port = cls.create_port(cls.network)
 
     def _delete_port(self, port_id):
-        resp, body = self.client.delete_port(port_id)
-        self.assertEqual('204', resp['status'])
-        resp, body = self.client.list_ports()
-        self.assertEqual('200', resp['status'])
+        self.client.delete_port(port_id)
+        _, body = self.client.list_ports()
         ports_list = body['ports']
         self.assertFalse(port_id in [n['id'] for n in ports_list])
 
     @test.attr(type='smoke')
     def test_create_update_delete_port(self):
         # Verify port creation
-        resp, body = self.client.create_port(network_id=self.network['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_port(network_id=self.network['id'])
         port = body['port']
         # Schedule port deletion with verification upon test completion
         self.addCleanup(self._delete_port, port['id'])
         self.assertTrue(port['admin_state_up'])
         # Verify port update
         new_name = "New_Port"
-        resp, body = self.client.update_port(
-            port['id'],
-            name=new_name,
-            admin_state_up=False)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_port(port['id'],
+                                          name=new_name,
+                                          admin_state_up=False)
         updated_port = body['port']
         self.assertEqual(updated_port['name'], new_name)
         self.assertFalse(updated_port['admin_state_up'])
@@ -74,30 +69,22 @@
     @test.attr(type='smoke')
     def test_show_port(self):
         # Verify the details of port
-        resp, body = self.client.show_port(self.port['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_port(self.port['id'])
         port = body['port']
         self.assertIn('id', port)
-        self.assertEqual(port['id'], self.port['id'])
-        self.assertEqual(self.port['admin_state_up'], port['admin_state_up'])
-        self.assertEqual(self.port['device_id'], port['device_id'])
-        self.assertEqual(self.port['device_owner'], port['device_owner'])
-        self.assertEqual(self.port['mac_address'], port['mac_address'])
-        self.assertEqual(self.port['name'], port['name'])
-        self.assertEqual(self.port['security_groups'],
-                         port['security_groups'])
-        self.assertEqual(self.port['network_id'], port['network_id'])
-        self.assertEqual(self.port['security_groups'],
-                         port['security_groups'])
-        self.assertEqual(port['fixed_ips'], [])
+        # TODO(Santosh): This is a temporary workaround to compare create_port
+        # and show_port dict elements. Remove this once the extra_dhcp_opts
+        # issue gets fixed in neutron (bug 1365341).
+        self.assertThat(self.port,
+                        custom_matchers.MatchesDictExceptForKeys
+                        (port, excluded_keys=['extra_dhcp_opts']))
 
     @test.attr(type='smoke')
     def test_show_port_fields(self):
         # Verify specific fields of a port
         fields = ['id', 'mac_address']
-        resp, body = self.client.show_port(self.port['id'],
-                                           fields=fields)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_port(self.port['id'],
+                                        fields=fields)
         port = body['port']
         self.assertEqual(sorted(port.keys()), sorted(fields))
         for field_name in fields:
@@ -106,8 +93,7 @@
     @test.attr(type='smoke')
     def test_list_ports(self):
         # Verify the port exists in the list of all ports
-        resp, body = self.client.list_ports()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_ports()
         ports = [port['id'] for port in body['ports']
                  if port['id'] == self.port['id']]
         self.assertNotEmpty(ports, "Created port not found in the list")
@@ -125,9 +111,7 @@
         self.addCleanup(self.client.remove_router_interface_with_port_id,
                         router['id'], port['port']['id'])
         # List ports filtered by router_id
-        resp, port_list = self.client.list_ports(
-            device_id=router['id'])
-        self.assertEqual('200', resp['status'])
+        _, port_list = self.client.list_ports(device_id=router['id'])
         ports = port_list['ports']
         self.assertEqual(len(ports), 1)
         self.assertEqual(ports[0]['id'], port['port']['id'])
@@ -137,8 +121,7 @@
     def test_list_ports_fields(self):
         # Verify specific fields of ports
         fields = ['id', 'mac_address']
-        resp, body = self.client.list_ports(fields=fields)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_ports(fields=fields)
         ports = body['ports']
         self.assertNotEmpty(ports, "Port list returned is empty")
         # Asserting the fields returned are correct
@@ -177,9 +160,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(PortsAdminExtendedAttrsTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(PortsAdminExtendedAttrsTestJSON, cls).resource_setup()
         cls.identity_client = cls._get_identity_admin_client()
         cls.tenant = cls.identity_client.get_tenant_by_name(
             CONF.identity.tenant_name)
@@ -190,8 +172,7 @@
     def test_create_port_binding_ext_attr(self):
         post_body = {"network_id": self.network['id'],
                      "binding:host_id": self.host_id}
-        resp, body = self.admin_client.create_port(**post_body)
-        self.assertEqual('201', resp['status'])
+        _, body = self.admin_client.create_port(**post_body)
         port = body['port']
         self.addCleanup(self.admin_client.delete_port, port['id'])
         host_id = port['binding:host_id']
@@ -201,13 +182,11 @@
     @test.attr(type='smoke')
     def test_update_port_binding_ext_attr(self):
         post_body = {"network_id": self.network['id']}
-        resp, body = self.admin_client.create_port(**post_body)
-        self.assertEqual('201', resp['status'])
+        _, body = self.admin_client.create_port(**post_body)
         port = body['port']
         self.addCleanup(self.admin_client.delete_port, port['id'])
         update_body = {"binding:host_id": self.host_id}
-        resp, body = self.admin_client.update_port(port['id'], **update_body)
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.update_port(port['id'], **update_body)
         updated_port = body['port']
         host_id = updated_port['binding:host_id']
         self.assertIsNotNone(host_id)
@@ -217,21 +196,18 @@
     def test_list_ports_binding_ext_attr(self):
         # Create a new port
         post_body = {"network_id": self.network['id']}
-        resp, body = self.admin_client.create_port(**post_body)
-        self.assertEqual('201', resp['status'])
+        _, body = self.admin_client.create_port(**post_body)
         port = body['port']
         self.addCleanup(self.admin_client.delete_port, port['id'])
 
         # Update the port's binding attributes so that it is now 'bound'
         # to a host
         update_body = {"binding:host_id": self.host_id}
-        resp, _ = self.admin_client.update_port(port['id'], **update_body)
-        self.assertEqual('200', resp['status'])
+        self.admin_client.update_port(port['id'], **update_body)
 
         # List all ports, ensure new port is part of list and its binding
         # attributes are set and accurate
-        resp, body = self.admin_client.list_ports()
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.list_ports()
         ports_list = body['ports']
         pids_list = [p['id'] for p in ports_list]
         self.assertIn(port['id'], pids_list)
@@ -243,13 +219,10 @@
 
     @test.attr(type='smoke')
     def test_show_port_binding_ext_attr(self):
-        resp, body = self.admin_client.create_port(
-            network_id=self.network['id'])
-        self.assertEqual('201', resp['status'])
+        _, body = self.admin_client.create_port(network_id=self.network['id'])
         port = body['port']
         self.addCleanup(self.admin_client.delete_port, port['id'])
-        resp, body = self.admin_client.show_port(port['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.admin_client.show_port(port['id'])
         show_port = body['port']
         self.assertEqual(port['binding:host_id'],
                          show_port['binding:host_id'])
@@ -269,10 +242,9 @@
     _tenant_network_mask_bits = CONF.network.tenant_network_v6_mask_bits
 
     @classmethod
-    def setUpClass(cls):
-        super(PortsIpV6TestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(PortsIpV6TestJSON, cls).resource_setup()
         if not CONF.network_feature_enabled.ipv6:
-            cls.tearDownClass()
             skip_msg = "IPv6 Tests are disabled."
             raise cls.skipException(skip_msg)
 
@@ -287,13 +259,12 @@
     _tenant_network_mask_bits = CONF.network.tenant_network_v6_mask_bits
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.network_feature_enabled.ipv6:
             skip_msg = "IPv6 Tests are disabled."
             raise cls.skipException(skip_msg)
-        super(PortsAdminExtendedAttrsIpV6TestJSON, cls).setUpClass()
+        super(PortsAdminExtendedAttrsIpV6TestJSON, cls).resource_setup()
 
 
-class PortsAdminExtendedAttrsIpV6TestXML(
-    PortsAdminExtendedAttrsIpV6TestJSON):
+class PortsAdminExtendedAttrsIpV6TestXML(PortsAdminExtendedAttrsIpV6TestJSON):
     _interface = 'xml'
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index d38633f..f3f25ac 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -28,8 +28,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(RoutersTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(RoutersTest, cls).resource_setup()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
@@ -54,11 +54,10 @@
         # NOTE(salv-orlando): Do not invoke self.create_router
         # as we need to check the response code
         name = data_utils.rand_name('router-')
-        resp, create_body = self.client.create_router(
+        _, create_body = self.client.create_router(
             name, external_gateway_info={
                 "network_id": CONF.network.public_network_id},
             admin_state_up=False)
-        self.assertEqual('201', resp['status'])
         self.addCleanup(self._delete_router, create_body['router']['id'])
         self.assertEqual(create_body['router']['name'], name)
         self.assertEqual(
@@ -66,26 +65,22 @@
             CONF.network.public_network_id)
         self.assertEqual(create_body['router']['admin_state_up'], False)
         # Show details of the created router
-        resp, show_body = self.client.show_router(
-            create_body['router']['id'])
-        self.assertEqual('200', resp['status'])
+        _, show_body = self.client.show_router(create_body['router']['id'])
         self.assertEqual(show_body['router']['name'], name)
         self.assertEqual(
             show_body['router']['external_gateway_info']['network_id'],
             CONF.network.public_network_id)
         self.assertEqual(show_body['router']['admin_state_up'], False)
         # List routers and verify if created router is there in response
-        resp, list_body = self.client.list_routers()
-        self.assertEqual('200', resp['status'])
+        _, list_body = self.client.list_routers()
         routers_list = list()
         for router in list_body['routers']:
             routers_list.append(router['id'])
         self.assertIn(create_body['router']['id'], routers_list)
         # Update the name of router and verify if it is updated
         updated_name = 'updated ' + name
-        resp, update_body = self.client.update_router(
-            create_body['router']['id'], name=updated_name)
-        self.assertEqual('200', resp['status'])
+        _, update_body = self.client.update_router(create_body['router']['id'],
+                                                   name=updated_name)
         self.assertEqual(update_body['router']['name'], updated_name)
         resp, show_body = self.client.show_router(
             create_body['router']['id'])
@@ -97,28 +92,54 @@
         test_tenant = data_utils.rand_name('test_tenant_')
         test_description = data_utils.rand_name('desc_')
         _, tenant = self.identity_admin_client.create_tenant(
-            name=test_tenant,
-            description=test_description)
+            name=test_tenant, description=test_description)
         tenant_id = tenant['id']
         self.addCleanup(self.identity_admin_client.delete_tenant, tenant_id)
 
         name = data_utils.rand_name('router-')
-        resp, create_body = self.admin_client.create_router(
-            name, tenant_id=tenant_id)
-        self.assertEqual('201', resp['status'])
+        _, create_body = self.admin_client.create_router(name,
+                                                         tenant_id=tenant_id)
         self.addCleanup(self.admin_client.delete_router,
                         create_body['router']['id'])
         self.assertEqual(tenant_id, create_body['router']['tenant_id'])
 
+    @test.requires_ext(extension='ext-gw-mode', service='network')
+    @test.attr(type='smoke')
+    def test_create_router_with_default_snat_value(self):
+        # Create a router with default snat rule
+        name = data_utils.rand_name('router')
+        router = self._create_router(
+            name, external_network_id=CONF.network.public_network_id)
+        self._verify_router_gateway(
+            router['id'], {'network_id': CONF.network.public_network_id,
+                           'enable_snat': True})
+
+    @test.requires_ext(extension='ext-gw-mode', service='network')
+    @test.attr(type='smoke')
+    def test_create_router_with_snat_explicit(self):
+        name = data_utils.rand_name('snat-router')
+        # Create a router enabling snat attributes
+        enable_snat_states = [False, True]
+        for enable_snat in enable_snat_states:
+            external_gateway_info = {
+                'network_id': CONF.network.public_network_id,
+                'enable_snat': enable_snat}
+            _, create_body = self.admin_client.create_router(
+                name, external_gateway_info=external_gateway_info)
+            self.addCleanup(self.admin_client.delete_router,
+                            create_body['router']['id'])
+            # Verify snat attributes after router creation
+            self._verify_router_gateway(create_body['router']['id'],
+                                        exp_ext_gw_info=external_gateway_info)
+
     @test.attr(type='smoke')
     def test_add_remove_router_interface_with_subnet_id(self):
         network = self.create_network()
         subnet = self.create_subnet(network)
         router = self._create_router(data_utils.rand_name('router-'))
         # Add router interface with subnet id
-        resp, interface = self.client.add_router_interface_with_subnet_id(
+        _, interface = self.client.add_router_interface_with_subnet_id(
             router['id'], subnet['id'])
-        self.assertEqual('200', resp['status'])
         self.addCleanup(self._remove_router_interface_with_subnet_id,
                         router['id'], subnet['id'])
         self.assertIn('subnet_id', interface.keys())
@@ -137,9 +158,8 @@
         resp, port_body = self.client.create_port(
             network_id=network['id'])
         # add router interface to port created above
-        resp, interface = self.client.add_router_interface_with_port_id(
+        _, interface = self.client.add_router_interface_with_port_id(
             router['id'], port_body['port']['id'])
-        self.assertEqual('200', resp['status'])
         self.addCleanup(self._remove_router_interface_with_port_id,
                         router['id'], port_body['port']['id'])
         self.assertIn('subnet_id', interface.keys())
@@ -151,8 +171,7 @@
                          router['id'])
 
     def _verify_router_gateway(self, router_id, exp_ext_gw_info=None):
-        resp, show_body = self.client.show_router(router_id)
-        self.assertEqual('200', resp['status'])
+        _, show_body = self.admin_client.show_router(router_id)
         actual_ext_gw_info = show_body['router']['external_gateway_info']
         if exp_ext_gw_info is None:
             self.assertIsNone(actual_ext_gw_info)
@@ -182,8 +201,7 @@
             external_gateway_info={
                 'network_id': CONF.network.public_network_id})
         # Verify operation - router
-        resp, show_body = self.client.show_router(router['id'])
-        self.assertEqual('200', resp['status'])
+        _, show_body = self.client.show_router(router['id'])
         self._verify_router_gateway(
             router['id'],
             {'network_id': CONF.network.public_network_id})
@@ -267,16 +285,14 @@
         cidr = netaddr.IPNetwork(self.subnet['cidr'])
         next_hop = str(cidr[2])
         destination = str(self.subnet['cidr'])
-        resp, extra_route = self.client.update_extra_routes(
-            self.router['id'], next_hop, destination)
-        self.assertEqual('200', resp['status'])
+        _, extra_route = self.client.update_extra_routes(self.router['id'],
+                                                         next_hop, destination)
         self.assertEqual(1, len(extra_route['router']['routes']))
         self.assertEqual(destination,
                          extra_route['router']['routes'][0]['destination'])
         self.assertEqual(next_hop,
                          extra_route['router']['routes'][0]['nexthop'])
-        resp, show_body = self.client.show_router(self.router['id'])
-        self.assertEqual('200', resp['status'])
+        _, show_body = self.client.show_router(self.router['id'])
         self.assertEqual(destination,
                          show_body['router']['routes'][0]['destination'])
         self.assertEqual(next_hop,
@@ -290,12 +306,10 @@
         self.router = self._create_router(data_utils.rand_name('router-'))
         self.assertFalse(self.router['admin_state_up'])
         # Update router admin state
-        resp, update_body = self.client.update_router(self.router['id'],
-                                                      admin_state_up=True)
-        self.assertEqual('200', resp['status'])
+        _, update_body = self.client.update_router(self.router['id'],
+                                                   admin_state_up=True)
         self.assertTrue(update_body['router']['admin_state_up'])
-        resp, show_body = self.client.show_router(self.router['id'])
-        self.assertEqual('200', resp['status'])
+        _, show_body = self.client.show_router(self.router['id'])
         self.assertTrue(show_body['router']['admin_state_up'])
 
     @test.attr(type='smoke')
@@ -318,8 +332,7 @@
                                       interface02['port_id'])
 
     def _verify_router_interface(self, router_id, subnet_id, port_id):
-        resp, show_port_body = self.client.show_port(port_id)
-        self.assertEqual('200', resp['status'])
+        _, show_port_body = self.client.show_port(port_id)
         interface_port = show_port_body['port']
         self.assertEqual(router_id, interface_port['device_id'])
         self.assertEqual(subnet_id,
diff --git a/tempest/api/network/test_routers_negative.py b/tempest/api/network/test_routers_negative.py
index feee51b..4c226af 100644
--- a/tempest/api/network/test_routers_negative.py
+++ b/tempest/api/network/test_routers_negative.py
@@ -28,9 +28,8 @@
     _interface = 'json'
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(RoutersNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(RoutersNegativeTest, cls).resource_setup()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/network/test_security_groups.py b/tempest/api/network/test_security_groups.py
index b98cea1..9764b4d 100644
--- a/tempest/api/network/test_security_groups.py
+++ b/tempest/api/network/test_security_groups.py
@@ -24,8 +24,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(SecGroupTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(SecGroupTest, cls).resource_setup()
         if not test.is_extension_enabled('security-group', 'network'):
             msg = "security-group extension not enabled."
             raise cls.skipException(msg)
@@ -33,8 +33,7 @@
     @test.attr(type='smoke')
     def test_list_security_groups(self):
         # Verify that the security group belonging to the tenant exists in the list
-        resp, body = self.client.list_security_groups()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_security_groups()
         security_groups = body['security_groups']
         found = None
         for n in security_groups:
@@ -48,8 +47,7 @@
         group_create_body, name = self._create_security_group()
 
         # List security groups and verify if created group is there in response
-        resp, list_body = self.client.list_security_groups()
-        self.assertEqual('200', resp['status'])
+        _, list_body = self.client.list_security_groups()
         secgroup_list = list()
         for secgroup in list_body['security_groups']:
             secgroup_list.append(secgroup['id'])
@@ -57,12 +55,11 @@
         # Update the security group
         new_name = data_utils.rand_name('security-')
         new_description = data_utils.rand_name('security-description')
-        resp, update_body = self.client.update_security_group(
+        _, update_body = self.client.update_security_group(
             group_create_body['security_group']['id'],
             name=new_name,
             description=new_description)
         # Verify if security group is updated
-        self.assertEqual('200', resp['status'])
         self.assertEqual(update_body['security_group']['name'], new_name)
         self.assertEqual(update_body['security_group']['description'],
                          new_description)
@@ -80,18 +77,16 @@
         # Create rules for each protocol
         protocols = ['tcp', 'udp', 'icmp']
         for protocol in protocols:
-            resp, rule_create_body = self.client.create_security_group_rule(
+            _, rule_create_body = self.client.create_security_group_rule(
                 security_group_id=group_create_body['security_group']['id'],
                 protocol=protocol,
                 direction='ingress'
             )
-            self.assertEqual('201', resp['status'])
 
             # Show details of the created security rule
-            resp, show_rule_body = self.client.show_security_group_rule(
+            _, show_rule_body = self.client.show_security_group_rule(
                 rule_create_body['security_group_rule']['id']
             )
-            self.assertEqual('200', resp['status'])
             create_dict = rule_create_body['security_group_rule']
             for key, value in six.iteritems(create_dict):
                 self.assertEqual(value,
@@ -99,8 +94,7 @@
                                  "%s does not match." % key)
 
             # List rules and verify created rule is in response
-            resp, rule_list_body = self.client.list_security_group_rules()
-            self.assertEqual('200', resp['status'])
+            _, rule_list_body = self.client.list_security_group_rules()
             rule_list = [rule['id']
                          for rule in rule_list_body['security_group_rules']]
             self.assertIn(rule_create_body['security_group_rule']['id'],
@@ -117,7 +111,7 @@
         protocol = 'tcp'
         port_range_min = 77
         port_range_max = 77
-        resp, rule_create_body = self.client.create_security_group_rule(
+        _, rule_create_body = self.client.create_security_group_rule(
             security_group_id=group_create_body['security_group']['id'],
             direction=direction,
             protocol=protocol,
@@ -125,7 +119,6 @@
             port_range_max=port_range_max
         )
 
-        self.assertEqual('201', resp['status'])
         sec_group_rule = rule_create_body['security_group_rule']
 
         self.assertEqual(sec_group_rule['direction'], direction)
diff --git a/tempest/api/network/test_security_groups_negative.py b/tempest/api/network/test_security_groups_negative.py
index 53c9d12..9c6c267 100644
--- a/tempest/api/network/test_security_groups_negative.py
+++ b/tempest/api/network/test_security_groups_negative.py
@@ -24,8 +24,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(NegativeSecGroupTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(NegativeSecGroupTest, cls).resource_setup()
         if not test.is_extension_enabled('security-group', 'network'):
             msg = "security-group extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/network/test_service_type_management.py b/tempest/api/network/test_service_type_management.py
index d272c47..302069f 100644
--- a/tempest/api/network/test_service_type_management.py
+++ b/tempest/api/network/test_service_type_management.py
@@ -18,16 +18,15 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(ServiceTypeManagementTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(ServiceTypeManagementTestJSON, cls).resource_setup()
         if not test.is_extension_enabled('service-type', 'network'):
             msg = "Neutron Service Type Management not enabled."
             raise cls.skipException(msg)
 
     @test.attr(type='smoke')
     def test_service_provider_list(self):
-        resp, body = self.client.list_service_providers()
-        self.assertEqual(resp['status'], '200')
+        _, body = self.client.list_service_providers()
         self.assertIsInstance(body['service_providers'], list)
 
 
diff --git a/tempest/api/network/test_vpnaas_extensions.py b/tempest/api/network/test_vpnaas_extensions.py
index 0cc3f19..c61bf41 100644
--- a/tempest/api/network/test_vpnaas_extensions.py
+++ b/tempest/api/network/test_vpnaas_extensions.py
@@ -34,12 +34,11 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not test.is_extension_enabled('vpnaas', 'network'):
             msg = "vpnaas extension not enabled."
             raise cls.skipException(msg)
-        super(VPNaaSTestJSON, cls).setUpClass()
+        super(VPNaaSTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.subnet = cls.create_subnet(cls.network)
         cls.router = cls.create_router(
@@ -61,8 +60,7 @@
         for ike in all_ike['ikepolicies']:
             ike_list.append(ike['id'])
         if ike_policy_id in ike_list:
-            resp, _ = self.client.delete_ikepolicy(ike_policy_id)
-            self.assertEqual(204, resp.status)
+            self.client.delete_ikepolicy(ike_policy_id)
             # Asserting that the policy is not found in list after deletion
             resp, ikepolicies = self.client.list_ikepolicies()
             ike_id_list = list()
@@ -85,8 +83,7 @@
             self.assertEqual(value, actual[key])
 
     def _delete_vpn_service(self, vpn_service_id):
-        resp, _ = self.client.delete_vpnservice(vpn_service_id)
-        self.assertEqual('204', resp['status'])
+        self.client.delete_vpnservice(vpn_service_id)
         # Verify the vpn service is not present in the list after deletion
         _, body = self.client.list_vpnservices()
         vpn_services = [vs['id'] for vs in body['vpnservices']]
@@ -107,9 +104,8 @@
         tenant_id = self._get_tenant_id()
         # Create IPSec policy for the newly created tenant
         name = data_utils.rand_name('ipsec-policy')
-        resp, body = (self.admin_client.
-                      create_ipsecpolicy(name=name, tenant_id=tenant_id))
-        self.assertEqual('201', resp['status'])
+        _, body = (self.admin_client.
+                   create_ipsecpolicy(name=name, tenant_id=tenant_id))
         ipsecpolicy = body['ipsecpolicy']
         self.assertIsNotNone(ipsecpolicy['id'])
         self.addCleanup(self.admin_client.delete_ipsecpolicy,
@@ -126,13 +122,12 @@
 
         # Create vpn service for the newly created tenant
         name = data_utils.rand_name('vpn-service')
-        resp, body = self.admin_client.create_vpnservice(
+        _, body = self.admin_client.create_vpnservice(
             subnet_id=self.subnet['id'],
             router_id=self.router['id'],
             name=name,
             admin_state_up=True,
             tenant_id=tenant_id)
-        self.assertEqual('201', resp['status'])
         vpnservice = body['vpnservice']
         self.assertIsNotNone(vpnservice['id'])
         self.addCleanup(self.admin_client.delete_vpnservice, vpnservice['id'])
@@ -148,12 +143,11 @@
 
         # Create IKE policy for the newly created tenant
         name = data_utils.rand_name('ike-policy')
-        resp, body = (self.admin_client.
-                      create_ikepolicy(name=name, ike_version="v1",
-                                       encryption_algorithm="aes-128",
-                                       auth_algorithm="sha1",
-                                       tenant_id=tenant_id))
-        self.assertEqual('201', resp['status'])
+        _, body = (self.admin_client.
+                   create_ikepolicy(name=name, ike_version="v1",
+                                    encryption_algorithm="aes-128",
+                                    auth_algorithm="sha1",
+                                    tenant_id=tenant_id))
         ikepolicy = body['ikepolicy']
         self.assertIsNotNone(ikepolicy['id'])
         self.addCleanup(self.admin_client.delete_ikepolicy, ikepolicy['id'])
@@ -166,8 +160,7 @@
     @test.attr(type='smoke')
     def test_list_vpn_services(self):
         # Verify the VPN service exists in the list of all VPN services
-        resp, body = self.client.list_vpnservices()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_vpnservices()
         vpnservices = body['vpnservices']
         self.assertIn(self.vpnservice['id'], [v['id'] for v in vpnservices])
 
@@ -175,11 +168,10 @@
     def test_create_update_delete_vpn_service(self):
         # Creates a VPN service and sets up deletion
         name = data_utils.rand_name('vpn-service')
-        resp, body = self.client.create_vpnservice(subnet_id=self.subnet['id'],
-                                                   router_id=self.router['id'],
-                                                   name=name,
-                                                   admin_state_up=True)
-        self.assertEqual('201', resp['status'])
+        _, body = self.client.create_vpnservice(subnet_id=self.subnet['id'],
+                                                router_id=self.router['id'],
+                                                name=name,
+                                                admin_state_up=True)
         vpnservice = body['vpnservice']
         self.addCleanup(self._delete_vpn_service, vpnservice['id'])
         # Assert if created vpnservices are not found in vpnservices list
@@ -196,8 +188,7 @@
     @test.attr(type='smoke')
     def test_show_vpn_service(self):
         # Verifies the details of a vpn service
-        resp, body = self.client.show_vpnservice(self.vpnservice['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_vpnservice(self.vpnservice['id'])
         vpnservice = body['vpnservice']
         self.assertEqual(self.vpnservice['id'], vpnservice['id'])
         self.assertEqual(self.vpnservice['name'], vpnservice['name'])
@@ -213,8 +204,7 @@
     @test.attr(type='smoke')
     def test_list_ike_policies(self):
         # Verify the ike policy exists in the list of all IKE policies
-        resp, body = self.client.list_ikepolicies()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_ikepolicies()
         ikepolicies = body['ikepolicies']
         self.assertIn(self.ikepolicy['id'], [i['id'] for i in ikepolicies])
 
@@ -222,12 +212,11 @@
     def test_create_update_delete_ike_policy(self):
         # Creates an IKE policy
         name = data_utils.rand_name('ike-policy')
-        resp, body = (self.client.create_ikepolicy(
-                      name=name,
-                      ike_version="v1",
-                      encryption_algorithm="aes-128",
-                      auth_algorithm="sha1"))
-        self.assertEqual('201', resp['status'])
+        _, body = (self.client.create_ikepolicy(
+                   name=name,
+                   ike_version="v1",
+                   encryption_algorithm="aes-128",
+                   auth_algorithm="sha1"))
         ikepolicy = body['ikepolicy']
         self.assertIsNotNone(ikepolicy['id'])
         self.addCleanup(self._delete_ike_policy, ikepolicy['id'])
@@ -239,8 +228,7 @@
                    'ike_version': "v2",
                    'pfs': "group14",
                    'lifetime': {'units': "seconds", 'value': 2000}}
-        resp, _ = self.client.update_ikepolicy(ikepolicy['id'], **new_ike)
-        self.assertEqual('200', resp['status'])
+        self.client.update_ikepolicy(ikepolicy['id'], **new_ike)
         # Confirm that update was successful by verifying using 'show'
         _, body = self.client.show_ikepolicy(ikepolicy['id'])
         ike_policy = body['ikepolicy']
@@ -249,8 +237,7 @@
             self.assertEqual(value, ike_policy[key])
 
         # Verification of ike policy delete
-        resp, _ = self.client.delete_ikepolicy(ikepolicy['id'])
-        self.assertEqual('204', resp['status'])
+        self.client.delete_ikepolicy(ikepolicy['id'])
         _, body = self.client.list_ikepolicies()
         ikepolicies = [ikp['id'] for ikp in body['ikepolicies']]
         self.assertNotIn(ike_policy['id'], ikepolicies)
@@ -258,8 +245,7 @@
     @test.attr(type='smoke')
     def test_show_ike_policy(self):
         # Verifies the details of an IKE policy
-        resp, body = self.client.show_ikepolicy(self.ikepolicy['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_ikepolicy(self.ikepolicy['id'])
         ikepolicy = body['ikepolicy']
         self.assertEqual(self.ikepolicy['id'], ikepolicy['id'])
         self.assertEqual(self.ikepolicy['name'], ikepolicy['name'])
@@ -281,8 +267,7 @@
     @test.attr(type='smoke')
     def test_list_ipsec_policies(self):
         # Verify the ipsec policy exists in the list of all ipsec policies
-        resp, body = self.client.list_ipsecpolicies()
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.list_ipsecpolicies()
         ipsecpolicies = body['ipsecpolicies']
         self.assertIn(self.ipsecpolicy['id'], [i['id'] for i in ipsecpolicies])
 
@@ -293,8 +278,7 @@
                              'pfs': 'group5',
                              'encryption_algorithm': "aes-128",
                              'auth_algorithm': 'sha1'}
-        resp, resp_body = self.client.create_ipsecpolicy(**ipsec_policy_body)
-        self.assertEqual('201', resp['status'])
+        _, resp_body = self.client.create_ipsecpolicy(**ipsec_policy_body)
         ipsecpolicy = resp_body['ipsecpolicy']
         self.addCleanup(self._delete_ipsec_policy, ipsecpolicy['id'])
         self._assertExpected(ipsec_policy_body, ipsecpolicy)
@@ -304,22 +288,19 @@
                      'name': data_utils.rand_name("New-IPSec"),
                      'encryption_algorithm': "aes-256",
                      'lifetime': {'units': "seconds", 'value': '2000'}}
-        resp, body = self.client.update_ipsecpolicy(ipsecpolicy['id'],
-                                                    **new_ipsec)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.update_ipsecpolicy(ipsecpolicy['id'],
+                                                 **new_ipsec)
         updated_ipsec_policy = body['ipsecpolicy']
         self._assertExpected(new_ipsec, updated_ipsec_policy)
         # Verification of ipsec policy delete
-        resp, _ = self.client.delete_ipsecpolicy(ipsecpolicy['id'])
-        self.assertEqual('204', resp['status'])
+        self.client.delete_ipsecpolicy(ipsecpolicy['id'])
         self.assertRaises(exceptions.NotFound,
                           self.client.delete_ipsecpolicy, ipsecpolicy['id'])
 
     @test.attr(type='smoke')
     def test_show_ipsec_policy(self):
         # Verifies the details of an ipsec policy
-        resp, body = self.client.show_ipsecpolicy(self.ipsecpolicy['id'])
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.show_ipsecpolicy(self.ipsecpolicy['id'])
         ipsecpolicy = body['ipsecpolicy']
         self._assertExpected(self.ipsecpolicy, ipsecpolicy)
 
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index ccc0067..6a5fd3d 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -28,9 +28,9 @@
 class BaseObjectTest(tempest.test.BaseTestCase):
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources()
-        super(BaseObjectTest, cls).setUpClass()
+        super(BaseObjectTest, cls).resource_setup()
         if not CONF.service_available.swift:
             skip_msg = ("%s skipped as swift is not available" % cls.__name__)
             raise cls.skipException(skip_msg)
@@ -69,12 +69,13 @@
         cls.object_client_alt.auth_provider.clear_auth()
         cls.container_client_alt.auth_provider.clear_auth()
 
-        cls.data = base.DataGenerator(cls.identity_admin_client)
+        cls.data = SwiftDataGenerator(cls.identity_admin_client)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
+        cls.data.teardown_all()
         cls.isolated_creds.clear_isolated_creds()
-        super(BaseObjectTest, cls).tearDownClass()
+        super(BaseObjectTest, cls).resource_cleanup()
 
     @classmethod
     def delete_containers(cls, containers, container_client=None,
@@ -116,3 +117,28 @@
         self.assertThat(resp, custom_matchers.ExistsAllResponseHeaders(
                         target, method))
         self.assertThat(resp, custom_matchers.AreAllWellFormatted())
+
+
+class SwiftDataGenerator(base.DataGenerator):
+
+    def setup_test_user(self, reseller=False):
+        super(SwiftDataGenerator, self).setup_test_user()
+        if reseller:
+            role_name = CONF.object_storage.reseller_admin_role
+        else:
+            role_name = CONF.object_storage.operator_role
+        role_id = self._get_role_id(role_name)
+        self._assign_role(role_id)
+
+    def _get_role_id(self, role_name):
+        try:
+            _, roles = self.client.list_roles()
+            return next(r['id'] for r in roles if r['name'] == role_name)
+        except StopIteration:
+            msg = "Role name '%s' is not found" % role_name
+            raise exceptions.NotFound(msg)
+
+    def _assign_role(self, role_id):
+        self.client.assign_user_role(self.tenant['id'],
+                                     self.user['id'],
+                                     role_id)
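
SwiftDataGenerator._get_role_id above resolves a role name to its id with next() over a generator expression and turns the resulting StopIteration into a NotFound error. The same idiom in standalone form, with made-up sample data::

    # Standalone illustration of the role-lookup idiom used in
    # SwiftDataGenerator._get_role_id; the roles list is made-up sample data.

    class NotFound(Exception):
        pass


    def get_role_id(roles, role_name):
        try:
            return next(r['id'] for r in roles if r['name'] == role_name)
        except StopIteration:
            raise NotFound("Role name '%s' is not found" % role_name)


    roles = [{'id': '1', 'name': 'Member'},
             {'id': '2', 'name': 'ResellerAdmin'}]
    assert get_role_id(roles, 'ResellerAdmin') == '2'
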
diff --git a/tempest/api/object_storage/test_account_bulk.py b/tempest/api/object_storage/test_account_bulk.py
index a94c883..743f1aa 100644
--- a/tempest/api/object_storage/test_account_bulk.py
+++ b/tempest/api/object_storage/test_account_bulk.py
@@ -50,16 +50,27 @@
 
         return tarpath.name, container_name, object_name
 
-    @test.attr(type='gate')
-    def test_extract_archive(self):
-        # Test bulk operation of file upload with an archived file
-        filepath, container_name, object_name = self._create_archive()
-
+    def _upload_archive(self, filepath):
+        # upload an archived file
         params = {'extract-archive': 'tar'}
         with open(filepath) as fh:
             mydata = fh.read()
             resp, body = self.account_client.create_account(data=mydata,
                                                             params=params)
+        return resp, body
+
+    def _check_contents_deleted(self, container_name):
+        param = {'format': 'txt'}
+        resp, body = self.account_client.list_account_containers(param)
+        self.assertHeaders(resp, 'Account', 'GET')
+        self.assertNotIn(container_name, body)
+
+    @test.attr(type='gate')
+    @test.requires_ext(extension='bulk', service='object')
+    def test_extract_archive(self):
+        # Test bulk operation of file upload with an archived file
+        filepath, container_name, object_name = self._create_archive()
+        resp, _ = self._upload_archive(filepath)
 
         self.containers.append(container_name)
 
@@ -95,23 +106,17 @@
         self.assertIn(object_name, [c['name'] for c in contents_list])
 
     @test.attr(type='gate')
+    @test.requires_ext(extension='bulk', service='object')
     def test_bulk_delete(self):
         # Test bulk operation of deleting multiple files
         filepath, container_name, object_name = self._create_archive()
-
-        params = {'extract-archive': 'tar'}
-        with open(filepath) as fh:
-            mydata = fh.read()
-            resp, body = self.account_client.create_account(data=mydata,
-                                                            params=params)
+        self._upload_archive(filepath)
 
         data = '%s/%s\n%s' % (container_name, object_name, container_name)
         params = {'bulk-delete': ''}
         resp, body = self.account_client.delete_account(data=data,
                                                         params=params)
 
-        self.assertIn(int(resp['status']), test.HTTP_SUCCESS)
-
         # When deleting multiple files using the bulk operation, the response
         # does not contain 'content-length' header. This is a special case,
         # therefore the existence of response headers is checked without
@@ -124,11 +129,33 @@
         # Check only the format of common headers with custom matcher
         self.assertThat(resp, custom_matchers.AreAllWellFormatted())
 
-        # Check if a container is deleted
-        param = {'format': 'txt'}
-        resp, body = self.account_client.list_account_containers(param)
+        # Check if uploaded contents are completely deleted
+        self._check_contents_deleted(container_name)
 
-        self.assertIn(int(resp['status']), test.HTTP_SUCCESS)
-        self.assertHeaders(resp, 'Account', 'GET')
+    @test.attr(type='gate')
+    @test.requires_ext(extension='bulk', service='object')
+    def test_bulk_delete_by_POST(self):
+        # Test bulk operation of deleting multiple files using POST
+        filepath, container_name, object_name = self._create_archive()
+        self._upload_archive(filepath)
 
-        self.assertNotIn(container_name, body)
+        data = '%s/%s\n%s' % (container_name, object_name, container_name)
+        params = {'bulk-delete': ''}
+
+        resp, body = self.account_client.create_account_metadata(
+            {}, data=data, params=params)
+
+        # When deleting multiple files using the bulk operation, the response
+        # does not contain 'content-length' header. This is a special case,
+        # therefore the existence of response headers is checked without
+        # custom matcher.
+        self.assertIn('transfer-encoding', resp)
+        self.assertIn('content-type', resp)
+        self.assertIn('x-trans-id', resp)
+        self.assertIn('date', resp)
+
+        # Check only the format of common headers with custom matcher
+        self.assertThat(resp, custom_matchers.AreAllWellFormatted())
+
+        # Check if uploaded contents are completely deleted
+        self._check_contents_deleted(container_name)
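
Both bulk-delete tests above build the request body in the same format: one container/object entry per line, followed by the bare container name so the container can be removed once it is empty. A tiny helper (the name is made up) showing that payload format::

    # Hypothetical helper showing the newline-separated payload format used
    # by the bulk-delete tests above: "<container>/<object>" entries first,
    # then the bare container name so the (now empty) container goes too.

    def build_bulk_delete_payload(container_name, object_names):
        entries = ['%s/%s' % (container_name, name) for name in object_names]
        entries.append(container_name)
        return '\n'.join(entries)


    assert build_bulk_delete_payload('cont', ['obj']) == 'cont/obj\ncont'
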
diff --git a/tempest/api/object_storage/test_account_quotas.py b/tempest/api/object_storage/test_account_quotas.py
index 19e3068..97e9195 100644
--- a/tempest/api/object_storage/test_account_quotas.py
+++ b/tempest/api/object_storage/test_account_quotas.py
@@ -18,7 +18,6 @@
 from tempest import clients
 from tempest.common.utils import data_utils
 from tempest import config
-from tempest import exceptions
 from tempest import test
 
 CONF = config.CONF
@@ -27,38 +26,15 @@
 class AccountQuotasTest(base.BaseObjectTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(AccountQuotasTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(AccountQuotasTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name="TestContainer")
         cls.container_client.create_container(cls.container_name)
 
-        cls.data.setup_test_user()
+        cls.data.setup_test_user(reseller=True)
 
         cls.os_reselleradmin = clients.Manager(cls.data.test_credentials)
 
-        # Retrieve the ResellerAdmin role id
-        reseller_role_id = None
-        try:
-            _, roles = cls.os_admin.identity_client.list_roles()
-            reseller_role_id = next(r['id'] for r in roles if r['name']
-                                    == CONF.object_storage.reseller_admin_role)
-        except StopIteration:
-            msg = "No ResellerAdmin role found"
-            raise exceptions.NotFound(msg)
-
-        # Retrieve the ResellerAdmin user id
-        reseller_user_id = cls.data.test_credentials.user_id
-
-        # Retrieve the ResellerAdmin tenant id
-        reseller_tenant_id = cls.data.test_credentials.tenant_id
-
-        # Assign the newly created user the appropriate ResellerAdmin role
-        cls.os_admin.identity_client.assign_user_role(
-            reseller_tenant_id,
-            reseller_user_id,
-            reseller_role_id)
-
         # Retrieve a ResellerAdmin auth data and use it to set a quota
         # on the client's account
         cls.reselleradmin_auth_data = \
@@ -94,11 +70,10 @@
         super(AccountQuotasTest, self).tearDown()
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         if hasattr(cls, "container_name"):
             cls.delete_containers([cls.container_name])
-        cls.data.teardown_all()
-        super(AccountQuotasTest, cls).tearDownClass()
+        super(AccountQuotasTest, cls).resource_cleanup()
 
     @test.attr(type="smoke")
     @test.requires_ext(extension='account_quotas', service='object')
diff --git a/tempest/api/object_storage/test_account_quotas_negative.py b/tempest/api/object_storage/test_account_quotas_negative.py
index 6afd381..6c1fb5a 100644
--- a/tempest/api/object_storage/test_account_quotas_negative.py
+++ b/tempest/api/object_storage/test_account_quotas_negative.py
@@ -27,38 +27,15 @@
 class AccountQuotasNegativeTest(base.BaseObjectTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(AccountQuotasNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(AccountQuotasNegativeTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name="TestContainer")
         cls.container_client.create_container(cls.container_name)
 
-        cls.data.setup_test_user()
+        cls.data.setup_test_user(reseller=True)
 
         cls.os_reselleradmin = clients.Manager(cls.data.test_credentials)
 
-        # Retrieve the ResellerAdmin role id
-        reseller_role_id = None
-        try:
-            _, roles = cls.os_admin.identity_client.list_roles()
-            reseller_role_id = next(r['id'] for r in roles if r['name']
-                                    == CONF.object_storage.reseller_admin_role)
-        except StopIteration:
-            msg = "No ResellerAdmin role found"
-            raise exceptions.NotFound(msg)
-
-        # Retrieve the ResellerAdmin tenant id
-        reseller_user_id = cls.data.test_credentials.user_id
-
-        # Retrieve the ResellerAdmin tenant id
-        reseller_tenant_id = cls.data.test_credentials.tenant_id
-
-        # Assign the newly created user the appropriate ResellerAdmin role
-        cls.os_admin.identity_client.assign_user_role(
-            reseller_tenant_id,
-            reseller_user_id,
-            reseller_role_id)
-
         # Retrieve a ResellerAdmin auth data and use it to set a quota
         # on the client's account
         cls.reselleradmin_auth_data = \
@@ -93,11 +70,10 @@
         super(AccountQuotasNegativeTest, self).tearDown()
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         if hasattr(cls, "container_name"):
             cls.delete_containers([cls.container_name])
-        cls.data.teardown_all()
-        super(AccountQuotasNegativeTest, cls).tearDownClass()
+        super(AccountQuotasNegativeTest, cls).resource_cleanup()
 
     @test.attr(type=["negative", "smoke"])
     @test.requires_ext(extension='account_quotas', service='object')
diff --git a/tempest/api/object_storage/test_account_services.py b/tempest/api/object_storage/test_account_services.py
index d615374..a0436ee 100644
--- a/tempest/api/object_storage/test_account_services.py
+++ b/tempest/api/object_storage/test_account_services.py
@@ -14,15 +14,14 @@
 #    under the License.
 
 import random
-
 from six import moves
+import testtools
 
 from tempest.api.object_storage import base
 from tempest import clients
 from tempest.common import custom_matchers
 from tempest.common.utils import data_utils
 from tempest import config
-from tempest import exceptions
 from tempest import test
 
 CONF = config.CONF
@@ -33,9 +32,8 @@
     containers = []
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(AccountTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(AccountTest, cls).resource_setup()
         for i in moves.xrange(ord('a'), ord('f') + 1):
             name = data_utils.rand_name(name='%s-' % chr(i))
             cls.container_client.create_container(name)
@@ -43,10 +41,9 @@
         cls.containers_count = len(cls.containers)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.delete_containers(cls.containers)
-        cls.data.teardown_all()
-        super(AccountTest, cls).tearDownClass()
+        super(AccountTest, cls).resource_cleanup()
 
     @test.attr(type='smoke')
     def test_list_containers(self):
@@ -66,35 +63,7 @@
         # the base user of this instance.
         self.data.setup_test_user()
 
-        os_test_user = clients.Manager(
-            self.data.test_credentials)
-
-        # Retrieve the id of an operator role of object storage
-        test_role_id = None
-        swift_role = CONF.object_storage.operator_role
-        try:
-            _, roles = self.os_admin.identity_client.list_roles()
-            test_role_id = next(r['id'] for r in roles if r['name']
-                                == swift_role)
-        except StopIteration:
-            msg = "%s role found" % swift_role
-            raise exceptions.NotFound(msg)
-
-        # Retrieve the test_user id
-        _, users = self.os_admin.identity_client.get_users()
-        test_user_id = next(usr['id'] for usr in users if usr['name']
-                            == self.data.test_user)
-
-        # Retrieve the test_tenant id
-        _, tenants = self.os_admin.identity_client.list_tenants()
-        test_tenant_id = next(tnt['id'] for tnt in tenants if tnt['name']
-                              == self.data.test_tenant)
-
-        # Assign the newly created user the appropriate operator role
-        self.os_admin.identity_client.assign_user_role(
-            test_tenant_id,
-            test_user_id,
-            test_role_id)
+        os_test_user = clients.Manager(self.data.test_credentials)
 
         resp, container_list = \
             os_test_user.account_client.list_account_containers()
@@ -148,6 +117,9 @@
         self.assertEqual(container_list.find(".//bytes").tag, 'bytes')
 
     @test.attr(type='smoke')
+    @testtools.skipIf(
+        not CONF.object_storage_feature_enabled.discoverability,
+        'Discoverability function is disabled')
     def test_list_extensions(self):
         resp, extensions = self.account_client.list_extensions()
 
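
The skip added here, like the container-sync and object-versioning skips later in this change, gates a test on a configuration flag with testtools.skipIf instead of skipping from inside the test body. A self-contained sketch of the pattern; the feature-flag object is a stand-in for Tempest's CONF, which would normally come from tempest.config::

    # Sketch of a config-gated skip. FakeFeatureFlags stands in for the
    # object_storage_feature_enabled section of Tempest's CONF.
    import testtools


    class FakeFeatureFlags(object):
        discoverability = False


    CONF_LIKE = FakeFeatureFlags()


    class DiscoverabilitySketchTest(testtools.TestCase):

        @testtools.skipIf(not CONF_LIKE.discoverability,
                          'Discoverability function is disabled')
        def test_list_extensions(self):
            # a real test would GET /info from the object-storage endpoint
            self.assertTrue(True)

Note that the skip condition is evaluated when the class is defined, so the configuration has to be loaded before the test module is imported.
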
diff --git a/tempest/api/object_storage/test_account_services_negative.py b/tempest/api/object_storage/test_account_services_negative.py
index 490672d..e4c46e2 100644
--- a/tempest/api/object_storage/test_account_services_negative.py
+++ b/tempest/api/object_storage/test_account_services_negative.py
@@ -47,5 +47,3 @@
         self.assertRaises(exceptions.Unauthorized,
                           self.custom_account_client.list_account_containers,
                           params=params)
-        # delete the user which was created
-        self.data.teardown_all()
diff --git a/tempest/api/object_storage/test_container_acl.py b/tempest/api/object_storage/test_container_acl.py
index fc51504..e816a9f 100644
--- a/tempest/api/object_storage/test_container_acl.py
+++ b/tempest/api/object_storage/test_container_acl.py
@@ -21,17 +21,12 @@
 
 class ObjectTestACLs(base.BaseObjectTest):
     @classmethod
-    def setUpClass(cls):
-        super(ObjectTestACLs, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectTestACLs, cls).resource_setup()
         cls.data.setup_test_user()
         test_os = clients.Manager(cls.data.test_credentials)
         cls.test_auth_data = test_os.auth_provider.auth_data
 
-    @classmethod
-    def tearDownClass(cls):
-        cls.data.teardown_all()
-        super(ObjectTestACLs, cls).tearDownClass()
-
     def setUp(self):
         super(ObjectTestACLs, self).setUp()
         self.container_name = data_utils.rand_name(name='TestContainer')
diff --git a/tempest/api/object_storage/test_container_acl_negative.py b/tempest/api/object_storage/test_container_acl_negative.py
index ca53876..9b49db3 100644
--- a/tempest/api/object_storage/test_container_acl_negative.py
+++ b/tempest/api/object_storage/test_container_acl_negative.py
@@ -23,17 +23,12 @@
 
 class ObjectACLsNegativeTest(base.BaseObjectTest):
     @classmethod
-    def setUpClass(cls):
-        super(ObjectACLsNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectACLsNegativeTest, cls).resource_setup()
         cls.data.setup_test_user()
         test_os = clients.Manager(cls.data.test_credentials)
         cls.test_auth_data = test_os.auth_provider.auth_data
 
-    @classmethod
-    def tearDownClass(cls):
-        cls.data.teardown_all()
-        super(ObjectACLsNegativeTest, cls).tearDownClass()
-
     def setUp(self):
         super(ObjectACLsNegativeTest, self).setUp()
         self.container_name = data_utils.rand_name(name='TestContainer')
diff --git a/tempest/api/object_storage/test_container_staticweb.py b/tempest/api/object_storage/test_container_staticweb.py
index 581c6d9..966a08d 100644
--- a/tempest/api/object_storage/test_container_staticweb.py
+++ b/tempest/api/object_storage/test_container_staticweb.py
@@ -23,9 +23,8 @@
 class StaticWebTest(base.BaseObjectTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(StaticWebTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(StaticWebTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name="TestContainer")
 
         # This header should be posted on the container before every test
@@ -45,11 +44,10 @@
             metadata_prefix="X-Container-")
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         if hasattr(cls, "container_name"):
             cls.delete_containers([cls.container_name])
-        cls.data.teardown_all()
-        super(StaticWebTest, cls).tearDownClass()
+        super(StaticWebTest, cls).resource_cleanup()
 
     @test.requires_ext(extension='staticweb', service='object')
     @test.attr('gate')
diff --git a/tempest/api/object_storage/test_container_sync.py b/tempest/api/object_storage/test_container_sync.py
index 5f46d01..aebcb5c 100644
--- a/tempest/api/object_storage/test_container_sync.py
+++ b/tempest/api/object_storage/test_container_sync.py
@@ -13,6 +13,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import testtools
 import time
 import urlparse
 
@@ -34,9 +35,8 @@
     clients = {}
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ContainerSyncTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ContainerSyncTest, cls).resource_setup()
         cls.containers = []
         cls.objects = []
 
@@ -61,13 +61,16 @@
             cls.containers.append(cont_name)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for client in cls.clients.values():
             cls.delete_containers(cls.containers, client[0], client[1])
-        super(ContainerSyncTest, cls).tearDownClass()
+        super(ContainerSyncTest, cls).resource_cleanup()
 
     @test.attr(type='slow')
     @test.skip_because(bug='1317133')
+    @testtools.skipIf(
+        not CONF.object_storage_feature_enabled.container_sync,
+        'Old-style container sync function is disabled')
     def test_container_synchronization(self):
         # container to container synchronization
         # to allow/accept sync requests to/from other accounts
diff --git a/tempest/api/object_storage/test_crossdomain.py b/tempest/api/object_storage/test_crossdomain.py
index d1541b9..f6d1fb9 100644
--- a/tempest/api/object_storage/test_crossdomain.py
+++ b/tempest/api/object_storage/test_crossdomain.py
@@ -15,7 +15,6 @@
 # under the License.
 
 from tempest.api.object_storage import base
-from tempest import clients
 from tempest.common import custom_matchers
 from tempest import test
 
@@ -23,13 +22,8 @@
 class CrossdomainTest(base.BaseObjectTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(CrossdomainTest, cls).setUpClass()
-        # creates a test user. The test user will set its base_url to the Swift
-        # endpoint and test the healthcheck feature.
-        cls.data.setup_test_user()
-
-        cls.os_test_user = clients.Manager(cls.data.test_credentials)
+    def resource_setup(cls):
+        super(CrossdomainTest, cls).resource_setup()
 
         cls.xml_start = '<?xml version="1.0"?>\n' \
                         '<!DOCTYPE cross-domain-policy SYSTEM ' \
@@ -38,29 +32,16 @@
 
         cls.xml_end = "</cross-domain-policy>"
 
-    @classmethod
-    def tearDownClass(cls):
-        cls.data.teardown_all()
-        super(CrossdomainTest, cls).tearDownClass()
-
     def setUp(self):
         super(CrossdomainTest, self).setUp()
 
-        client = self.os_test_user.account_client
         # Turning http://.../v1/foobar into http://.../
-        client.skip_path()
-
-    def tearDown(self):
-        # clear the base_url for subsequent requests
-        self.os_test_user.account_client.reset_path()
-
-        super(CrossdomainTest, self).tearDown()
+        self.account_client.skip_path()
 
     @test.attr('gate')
     @test.requires_ext(extension='crossdomain', service='object')
     def test_get_crossdomain_policy(self):
-        resp, body = self.os_test_user.account_client.get("crossdomain.xml",
-                                                          {})
+        resp, body = self.account_client.get("crossdomain.xml", {})
 
         self.assertIn(int(resp['status']), test.HTTP_SUCCESS)
         self.assertTrue(body.startswith(self.xml_start) and
diff --git a/tempest/api/object_storage/test_healthcheck.py b/tempest/api/object_storage/test_healthcheck.py
index e27c7ef..a1138e6 100644
--- a/tempest/api/object_storage/test_healthcheck.py
+++ b/tempest/api/object_storage/test_healthcheck.py
@@ -23,8 +23,8 @@
 class HealthcheckTest(base.BaseObjectTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(HealthcheckTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(HealthcheckTest, cls).resource_setup()
 
     def setUp(self):
         super(HealthcheckTest, self).setUp()
diff --git a/tempest/api/object_storage/test_object_expiry.py b/tempest/api/object_storage/test_object_expiry.py
index 73b4f3b..8cec2fc 100644
--- a/tempest/api/object_storage/test_object_expiry.py
+++ b/tempest/api/object_storage/test_object_expiry.py
@@ -23,8 +23,8 @@
 
 class ObjectExpiryTest(base.BaseObjectTest):
     @classmethod
-    def setUpClass(cls):
-        super(ObjectExpiryTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectExpiryTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name='TestContainer')
         cls.container_client.create_container(cls.container_name)
 
@@ -36,9 +36,9 @@
                                                    self.object_name, '')
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.delete_containers([cls.container_name])
-        super(ObjectExpiryTest, cls).tearDownClass()
+        super(ObjectExpiryTest, cls).resource_cleanup()
 
     def _test_object_expiry(self, metadata):
         # update object metadata
diff --git a/tempest/api/object_storage/test_object_formpost.py b/tempest/api/object_storage/test_object_formpost.py
index dc5585e..05c8ff2 100644
--- a/tempest/api/object_storage/test_object_formpost.py
+++ b/tempest/api/object_storage/test_object_formpost.py
@@ -30,9 +30,8 @@
     containers = []
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ObjectFormPostTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectFormPostTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name='TestContainer')
         cls.object_name = data_utils.rand_name(name='ObjectTemp')
 
@@ -56,11 +55,10 @@
             self.key)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.account_client.delete_account_metadata(metadata=cls.metadata)
         cls.delete_containers(cls.containers)
-        cls.data.teardown_all()
-        super(ObjectFormPostTest, cls).tearDownClass()
+        super(ObjectFormPostTest, cls).resource_cleanup()
 
     def get_multipart_form(self, expires=600):
         path = "%s/%s/%s" % (
diff --git a/tempest/api/object_storage/test_object_formpost_negative.py b/tempest/api/object_storage/test_object_formpost_negative.py
index 878bf6d..32f5917 100644
--- a/tempest/api/object_storage/test_object_formpost_negative.py
+++ b/tempest/api/object_storage/test_object_formpost_negative.py
@@ -30,9 +30,8 @@
     containers = []
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ObjectFormPostNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectFormPostNegativeTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name='TestContainer')
         cls.object_name = data_utils.rand_name(name='ObjectTemp')
 
@@ -56,11 +55,10 @@
             self.key)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.account_client.delete_account_metadata(metadata=cls.metadata)
         cls.delete_containers(cls.containers)
-        cls.data.teardown_all()
-        super(ObjectFormPostNegativeTest, cls).tearDownClass()
+        super(ObjectFormPostNegativeTest, cls).resource_cleanup()
 
     def get_multipart_form(self, expires=600):
         path = "%s/%s/%s" % (
diff --git a/tempest/api/object_storage/test_object_services.py b/tempest/api/object_storage/test_object_services.py
index b21aa44..56ab1fb 100644
--- a/tempest/api/object_storage/test_object_services.py
+++ b/tempest/api/object_storage/test_object_services.py
@@ -17,10 +17,11 @@
 import hashlib
 import random
 import re
-import six
 import time
 import zlib
 
+import six
+
 from tempest.api.object_storage import base
 from tempest.common import custom_matchers
 from tempest.common.utils import data_utils
@@ -29,16 +30,16 @@
 
 class ObjectTest(base.BaseObjectTest):
     @classmethod
-    def setUpClass(cls):
-        super(ObjectTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectTest, cls).resource_setup()
         cls.container_name = data_utils.rand_name(name='TestContainer')
         cls.container_client.create_container(cls.container_name)
         cls.containers = [cls.container_name]
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.delete_containers(cls.containers)
-        super(ObjectTest, cls).tearDownClass()
+        super(ObjectTest, cls).resource_cleanup()
 
     def _create_object(self, metadata=None):
         # setup object
diff --git a/tempest/api/object_storage/test_object_slo.py b/tempest/api/object_storage/test_object_slo.py
index 0443a80..159ad5c 100644
--- a/tempest/api/object_storage/test_object_slo.py
+++ b/tempest/api/object_storage/test_object_slo.py
@@ -59,23 +59,23 @@
         object_name_base_1 = object_name + '_01'
         object_name_base_2 = object_name + '_02'
         data_size = MIN_SEGMENT_SIZE
-        self.data = data_utils.arbitrary_string(data_size)
+        self.content = data_utils.arbitrary_string(data_size)
         self._create_object(self.container_name,
                             object_name_base_1,
-                            self.data)
+                            self.content)
         self._create_object(self.container_name,
                             object_name_base_2,
-                            self.data)
+                            self.content)
 
         path_object_1 = '/%s/%s' % (self.container_name,
                                     object_name_base_1)
         path_object_2 = '/%s/%s' % (self.container_name,
                                     object_name_base_2)
         data_manifest = [{'path': path_object_1,
-                          'etag': hashlib.md5(self.data).hexdigest(),
+                          'etag': hashlib.md5(self.content).hexdigest(),
                           'size_bytes': data_size},
                          {'path': path_object_2,
-                          'etag': hashlib.md5(self.data).hexdigest(),
+                          'etag': hashlib.md5(self.content).hexdigest(),
                           'size_bytes': data_size}]
 
         return json.dumps(data_manifest)
@@ -147,7 +147,7 @@
         self.assertIn(int(resp['status']), test.HTTP_SUCCESS)
         self._assertHeadersSLO(resp, 'GET')
 
-        sum_data = self.data + self.data
+        sum_data = self.content + self.content
         self.assertEqual(body, sum_data)
 
     @test.attr(type='gate')
diff --git a/tempest/api/object_storage/test_object_temp_url.py b/tempest/api/object_storage/test_object_temp_url.py
index c597255..e70bd9a 100644
--- a/tempest/api/object_storage/test_object_temp_url.py
+++ b/tempest/api/object_storage/test_object_temp_url.py
@@ -30,8 +30,8 @@
 class ObjectTempUrlTest(base.BaseObjectTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(ObjectTempUrlTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectTempUrlTest, cls).resource_setup()
         # create a container
         cls.container_name = data_utils.rand_name(name='TestContainer')
         cls.container_client.create_container(cls.container_name)
@@ -52,16 +52,14 @@
                                         cls.object_name, cls.content)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         for metadata in cls.metadatas:
             cls.account_client.delete_account_metadata(
                 metadata=metadata)
 
         cls.delete_containers(cls.containers)
 
-        # delete the user setup created
-        cls.data.teardown_all()
-        super(ObjectTempUrlTest, cls).tearDownClass()
+        super(ObjectTempUrlTest, cls).resource_cleanup()
 
     def setUp(self):
         super(ObjectTempUrlTest, self).setUp()
@@ -185,3 +183,20 @@
         resp, body = self.object_client.head(url)
         self.assertIn(int(resp['status']), test.HTTP_SUCCESS)
         self.assertHeaders(resp, 'Object', 'HEAD')
+
+    @test.attr(type='gate')
+    @test.requires_ext(extension='tempurl', service='object')
+    def test_get_object_using_temp_url_with_inline_query_parameter(self):
+        expires = self._get_expiry_date()
+
+        # get a temp URL for the created object
+        url = self._get_temp_url(self.container_name, self.object_name, "GET",
+                                 expires, self.key)
+        url = url + '&inline'
+
+        # trying to get object using temp url within expiry time
+        resp, body = self.object_client.get(url)
+        self.assertIn(int(resp['status']), test.HTTP_SUCCESS)
+        self.assertHeaders(resp, 'Object', 'GET')
+        self.assertEqual(body, self.content)
+        self.assertEqual(resp['content-disposition'], 'inline')
diff --git a/tempest/api/object_storage/test_object_temp_url_negative.py b/tempest/api/object_storage/test_object_temp_url_negative.py
index 7d26433..b752348 100644
--- a/tempest/api/object_storage/test_object_temp_url_negative.py
+++ b/tempest/api/object_storage/test_object_temp_url_negative.py
@@ -31,9 +31,8 @@
     containers = []
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(ObjectTempUrlNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ObjectTempUrlNegativeTest, cls).resource_setup()
 
         cls.container_name = data_utils.rand_name(name='TestContainer')
         cls.container_client.create_container(cls.container_name)
@@ -47,15 +46,13 @@
             cls.account_client.list_account_metadata()
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         resp, _ = cls.account_client.delete_account_metadata(
             metadata=cls.metadata)
 
         cls.delete_containers(cls.containers)
 
-        # delete the user setup created
-        cls.data.teardown_all()
-        super(ObjectTempUrlNegativeTest, cls).tearDownClass()
+        super(ObjectTempUrlNegativeTest, cls).resource_cleanup()
 
     def setUp(self):
         super(ObjectTempUrlNegativeTest, self).setUp()
@@ -69,10 +66,10 @@
 
         # create object
         self.object_name = data_utils.rand_name(name='ObjectTemp')
-        self.data = data_utils.arbitrary_string(size=len(self.object_name),
-                                                base_text=self.object_name)
+        self.content = data_utils.arbitrary_string(size=len(self.object_name),
+                                                   base_text=self.object_name)
         self.object_client.create_object(self.container_name,
-                                         self.object_name, self.data)
+                                         self.object_name, self.content)
 
     def _get_expiry_date(self, expiration_time=1000):
         return int(time.time() + expiration_time)
diff --git a/tempest/api/object_storage/test_object_version.py b/tempest/api/object_storage/test_object_version.py
index 8d2ff9b..5fe4fc8 100644
--- a/tempest/api/object_storage/test_object_version.py
+++ b/tempest/api/object_storage/test_object_version.py
@@ -13,21 +13,26 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import testtools
+
 from tempest.api.object_storage import base
 from tempest.common.utils import data_utils
+from tempest import config
 from tempest import test
 
+CONF = config.CONF
+
 
 class ContainerTest(base.BaseObjectTest):
     @classmethod
-    def setUpClass(cls):
-        super(ContainerTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ContainerTest, cls).resource_setup()
         cls.containers = []
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.delete_containers(cls.containers)
-        super(ContainerTest, cls).tearDownClass()
+        super(ContainerTest, cls).resource_cleanup()
 
     def assertContainer(self, container, count, byte, versioned):
         resp, _ = self.container_client.list_container_metadata(container)
@@ -41,6 +46,9 @@
         self.assertEqual(header_value, versioned)
 
     @test.attr(type='smoke')
+    @testtools.skipIf(
+        not CONF.object_storage_feature_enabled.object_versioning,
+        'Object-versioning is disabled')
     def test_versioned_container(self):
         # create container
         vers_container_name = data_utils.rand_name(name='TestVersionContainer')
diff --git a/tempest/api/orchestration/base.py b/tempest/api/orchestration/base.py
index 531df2d..5a586fc 100644
--- a/tempest/api/orchestration/base.py
+++ b/tempest/api/orchestration/base.py
@@ -11,6 +11,7 @@
 #    under the License.
 
 import os.path
+
 import yaml
 
 from tempest import clients
@@ -29,8 +30,8 @@
     """Base test case class for all Orchestration API tests."""
 
     @classmethod
-    def setUpClass(cls):
-        super(BaseOrchestrationTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseOrchestrationTest, cls).resource_setup()
         cls.os = clients.Manager()
         if not CONF.service_available.heat:
             raise cls.skipException("Heat support is required")
@@ -50,7 +51,7 @@
 
     @classmethod
     def _get_default_network(cls):
-        __, networks = cls.network_client.list_networks()
+        _, networks = cls.network_client.list_networks()
         for net in networks['networks']:
             if net['name'] == CONF.compute.fixed_network_name:
                 return net
@@ -63,8 +64,10 @@
         return admin_client
 
     @classmethod
-    def create_stack(cls, stack_name, template_data, parameters={},
+    def create_stack(cls, stack_name, template_data, parameters=None,
                      environment=None, files=None):
+        if parameters is None:
+            parameters = {}
         resp, body = cls.client.create_stack(
             stack_name,
             template=template_data,
@@ -85,13 +88,16 @@
                 pass
 
         for stack_identifier in cls.stacks:
-            cls.client.wait_for_stack_status(
-                stack_identifier, 'DELETE_COMPLETE')
+            try:
+                cls.client.wait_for_stack_status(
+                    stack_identifier, 'DELETE_COMPLETE')
+            except exceptions.NotFound:
+                pass
 
     @classmethod
     def _create_keypair(cls, name_start='keypair-heat-'):
         kp_name = data_utils.rand_name(name_start)
-        __, body = cls.keypairs_client.create_keypair(kp_name)
+        _, body = cls.keypairs_client.create_keypair(kp_name)
         cls.keypairs.append(kp_name)
         return body
 
@@ -107,9 +113,9 @@
     def _create_image(cls, name_start='image-heat-', container_format='bare',
                       disk_format='iso'):
         image_name = data_utils.rand_name(name_start)
-        __, body = cls.images_v2_client.create_image(image_name,
-                                                     container_format,
-                                                     disk_format)
+        _, body = cls.images_v2_client.create_image(image_name,
+                                                    container_format,
+                                                    disk_format)
         image_id = body['id']
         cls.images.append(image_id)
         return body
@@ -140,11 +146,11 @@
             return yaml.safe_load(f)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls._clear_stacks()
         cls._clear_keypairs()
         cls._clear_images()
-        super(BaseOrchestrationTest, cls).tearDownClass()
+        super(BaseOrchestrationTest, cls).resource_cleanup()
 
     @staticmethod
     def stack_output(stack, output_key):
@@ -158,8 +164,7 @@
 
     def list_resources(self, stack_identifier):
         """Get a dict mapping of resource names to types."""
-        resp, resources = self.client.list_resources(stack_identifier)
-        self.assertEqual('200', resp['status'])
+        _, resources = self.client.list_resources(stack_identifier)
         self.assertIsInstance(resources, list)
         for res in resources:
             self.assert_fields_in_dict(res, 'logical_resource_id',
@@ -170,6 +175,5 @@
                     for r in resources)
 
     def get_stack_output(self, stack_identifier, output_key):
-        resp, body = self.client.get_stack(stack_identifier)
-        self.assertEqual('200', resp['status'])
+        _, body = self.client.get_stack(stack_identifier)
         return self.stack_output(body, output_key)
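
create_stack() above replaces the mutable default parameters={} with parameters=None plus an explicit None check, because a default value is created once at function definition time and then shared by every call. A toy demonstration of why the shared default leaks state::

    # Why create_stack() switched away from a mutable default argument:
    # the default list/dict is built once and shared across calls.

    def leaky(value, seen=[]):        # anti-pattern: shared default list
        seen.append(value)
        return seen


    def safe(value, seen=None):       # pattern now used by create_stack()
        if seen is None:
            seen = []
        seen.append(value)
        return seen


    assert leaky(1) == [1]
    assert leaky(2) == [1, 2]         # remembers the previous call
    assert safe(1) == [1]
    assert safe(2) == [2]             # each call gets a fresh list
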
diff --git a/tempest/api/orchestration/stacks/test_neutron_resources.py b/tempest/api/orchestration/stacks/test_neutron_resources.py
index 27c6196..f1a4f85 100644
--- a/tempest/api/orchestration/stacks/test_neutron_resources.py
+++ b/tempest/api/orchestration/stacks/test_neutron_resources.py
@@ -12,6 +12,7 @@
 
 
 import logging
+
 import netaddr
 
 from tempest.api.orchestration import base
@@ -29,14 +30,14 @@
 class NeutronResourcesTestJSON(base.BaseOrchestrationTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(NeutronResourcesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(NeutronResourcesTestJSON, cls).resource_setup()
         if not CONF.orchestration.image_ref:
             raise cls.skipException("No image available to test")
         os = clients.Manager()
         if not CONF.service_available.neutron:
             raise cls.skipException("Neutron support is required")
+        cls.neutron_basic_template = cls.load_template('neutron_basic')
         cls.network_client = os.network_client
         cls.stack_name = data_utils.rand_name('heat')
         template = cls.read_template('neutron_basic')
@@ -66,16 +67,17 @@
             cls.client.wait_for_stack_status(cls.stack_id, 'CREATE_COMPLETE')
             _, resources = cls.client.list_resources(cls.stack_identifier)
         except exceptions.TimeoutException as e:
-            # attempt to log the server console to help with debugging
-            # the cause of the server not signalling the waitcondition
-            # to heat.
-            resp, body = cls.client.get_resource(cls.stack_identifier,
-                                                 'Server')
-            server_id = body['physical_resource_id']
-            LOG.debug('Console output for %s', server_id)
-            resp, output = cls.servers_client.get_console_output(
-                server_id, None)
-            LOG.debug(output)
+            if CONF.compute_feature_enabled.console_output:
+                # attempt to log the server console to help with debugging
+                # the cause of the server not signalling the waitcondition
+                # to heat.
+                _, body = cls.client.get_resource(cls.stack_identifier,
+                                                  'Server')
+                server_id = body['physical_resource_id']
+                LOG.debug('Console output for %s', server_id)
+                _, output = cls.servers_client.get_console_output(
+                    server_id, None)
+                LOG.debug(output)
             raise e
 
         cls.test_resources = {}
@@ -85,10 +87,14 @@
     @test.attr(type='slow')
     def test_created_resources(self):
         """Verifies created neutron resources."""
-        resources = [('Network', 'OS::Neutron::Net'),
-                     ('Subnet', 'OS::Neutron::Subnet'),
-                     ('RouterInterface', 'OS::Neutron::RouterInterface'),
-                     ('Server', 'OS::Nova::Server')]
+        resources = [('Network', self.neutron_basic_template['resources'][
+                      'Network']['type']),
+                     ('Subnet', self.neutron_basic_template['resources'][
+                      'Subnet']['type']),
+                     ('RouterInterface', self.neutron_basic_template[
+                      'resources']['RouterInterface']['type']),
+                     ('Server', self.neutron_basic_template['resources'][
+                      'Server']['type'])]
         for resource_name, resource_type in resources:
             resource = self.test_resources.get(resource_name, None)
             self.assertIsInstance(resource, dict)
@@ -97,52 +103,56 @@
             self.assertEqual('CREATE_COMPLETE', resource['resource_status'])
 
     @test.attr(type='slow')
+    @test.services('network')
     def test_created_network(self):
         """Verifies created network."""
         network_id = self.test_resources.get('Network')['physical_resource_id']
-        resp, body = self.network_client.show_network(network_id)
-        self.assertEqual('200', resp['status'])
+        _, body = self.network_client.show_network(network_id)
         network = body['network']
         self.assertIsInstance(network, dict)
         self.assertEqual(network_id, network['id'])
-        self.assertEqual('NewNetwork', network['name'])
+        self.assertEqual(self.neutron_basic_template['resources'][
+            'Network']['properties']['name'], network['name'])
 
     @test.attr(type='slow')
+    @test.services('network')
     def test_created_subnet(self):
         """Verifies created subnet."""
         subnet_id = self.test_resources.get('Subnet')['physical_resource_id']
-        resp, body = self.network_client.show_subnet(subnet_id)
-        self.assertEqual('200', resp['status'])
+        _, body = self.network_client.show_subnet(subnet_id)
         subnet = body['subnet']
         network_id = self.test_resources.get('Network')['physical_resource_id']
         self.assertEqual(subnet_id, subnet['id'])
         self.assertEqual(network_id, subnet['network_id'])
-        self.assertEqual('NewSubnet', subnet['name'])
+        self.assertEqual(self.neutron_basic_template['resources'][
+            'Subnet']['properties']['name'], subnet['name'])
         self.assertEqual(sorted(CONF.network.dns_servers),
                          sorted(subnet['dns_nameservers']))
-        self.assertEqual(4, subnet['ip_version'])
+        self.assertEqual(self.neutron_basic_template['resources'][
+            'Subnet']['properties']['ip_version'], subnet['ip_version'])
         self.assertEqual(str(self.subnet_cidr), subnet['cidr'])
 
     @test.attr(type='slow')
+    @test.services('network')
     def test_created_router(self):
         """Verifies created router."""
         router_id = self.test_resources.get('Router')['physical_resource_id']
-        resp, body = self.network_client.show_router(router_id)
-        self.assertEqual('200', resp['status'])
+        _, body = self.network_client.show_router(router_id)
         router = body['router']
-        self.assertEqual('NewRouter', router['name'])
+        self.assertEqual(self.neutron_basic_template['resources'][
+            'Router']['properties']['name'], router['name'])
         self.assertEqual(self.external_network_id,
                          router['external_gateway_info']['network_id'])
         self.assertEqual(True, router['admin_state_up'])
 
     @test.attr(type='slow')
+    @test.services('network')
     def test_created_router_interface(self):
         """Verifies created router interface."""
         router_id = self.test_resources.get('Router')['physical_resource_id']
         network_id = self.test_resources.get('Network')['physical_resource_id']
         subnet_id = self.test_resources.get('Subnet')['physical_resource_id']
-        resp, body = self.network_client.list_ports()
-        self.assertEqual('200', resp['status'])
+        _, body = self.network_client.list_ports()
         ports = body['ports']
         router_ports = filter(lambda port: port['device_id'] ==
                               router_id, ports)
@@ -159,13 +169,14 @@
                          router_interface_ip)
 
     @test.attr(type='slow')
+    @test.services('compute', 'network')
     def test_created_server(self):
         """Verifies created sever."""
         server_id = self.test_resources.get('Server')['physical_resource_id']
-        resp, server = self.servers_client.get_server(server_id)
-        self.assertEqual('200', resp['status'])
+        _, server = self.servers_client.get_server(server_id)
         self.assertEqual(self.keypair_name, server['key_name'])
         self.assertEqual('ACTIVE', server['status'])
-        network = server['addresses']['NewNetwork'][0]
+        network = server['addresses'][self.neutron_basic_template['resources'][
+                                      'Network']['properties']['name']][0]
         self.assertEqual(4, network['version'])
         self.assertIn(netaddr.IPAddress(network['addr']), self.subnet_cidr)
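
The assertions in this file now read expected names and types from the parsed
``neutron_basic`` template rather than hard-coding values such as
``'NewNetwork'`` or ``'OS::Neutron::Net'``. A toy sketch of that lookup
pattern, using an illustrative template dict instead of the real YAML::

    neutron_basic_template = {
        'resources': {
            'Network': {
                'type': 'OS::Neutron::Net',
                'properties': {'name': 'NewNetwork'},
            },
        },
    }

    resources = neutron_basic_template['resources']
    assert resources['Network']['type'] == 'OS::Neutron::Net'
    assert resources['Network']['properties']['name'] == 'NewNetwork'
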
diff --git a/tempest/api/orchestration/stacks/test_non_empty_stack.py b/tempest/api/orchestration/stacks/test_non_empty_stack.py
index a97c561..759cbbe 100644
--- a/tempest/api/orchestration/stacks/test_non_empty_stack.py
+++ b/tempest/api/orchestration/stacks/test_non_empty_stack.py
@@ -25,8 +25,8 @@
 class StacksTestJSON(base.BaseOrchestrationTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(StacksTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(StacksTestJSON, cls).resource_setup()
         cls.stack_name = data_utils.rand_name('heat')
         template = cls.read_template('non_empty_stack')
         image_id = (CONF.orchestration.image_ref or
@@ -45,8 +45,7 @@
         cls.client.wait_for_stack_status(cls.stack_id, 'CREATE_COMPLETE')
 
     def _list_stacks(self, expected_num=None, **filter_kwargs):
-        resp, stacks = self.client.list_stacks(params=filter_kwargs)
-        self.assertEqual('200', resp['status'])
+        _, stacks = self.client.list_stacks(params=filter_kwargs)
         self.assertIsInstance(stacks, list)
         if expected_num is not None:
             self.assertEqual(expected_num, len(stacks))
@@ -62,8 +61,7 @@
     @test.attr(type='gate')
     def test_stack_show(self):
         """Getting details about created stack should be possible."""
-        resp, stack = self.client.get_stack(self.stack_name)
-        self.assertEqual('200', resp['status'])
+        _, stack = self.client.get_stack(self.stack_name)
         self.assertIsInstance(stack, dict)
         self.assert_fields_in_dict(stack, 'stack_name', 'id', 'links',
                                    'parameters', 'outputs', 'disable_rollback',
@@ -82,12 +80,10 @@
     @test.attr(type='gate')
     def test_suspend_resume_stack(self):
         """Suspend and resume a stack."""
-        resp, suspend_stack = self.client.suspend_stack(self.stack_identifier)
-        self.assertEqual('200', resp['status'])
+        _, suspend_stack = self.client.suspend_stack(self.stack_identifier)
         self.client.wait_for_stack_status(self.stack_identifier,
                                           'SUSPEND_COMPLETE')
-        resp, resume_stack = self.client.resume_stack(self.stack_identifier)
-        self.assertEqual('200', resp['status'])
+        _, resume_stack = self.client.resume_stack(self.stack_identifier)
         self.client.wait_for_stack_status(self.stack_identifier,
                                           'RESUME_COMPLETE')
 
@@ -101,8 +97,8 @@
     @test.attr(type='gate')
     def test_show_resource(self):
         """Getting details about created resource should be possible."""
-        resp, resource = self.client.get_resource(self.stack_identifier,
-                                                  self.resource_name)
+        _, resource = self.client.get_resource(self.stack_identifier,
+                                               self.resource_name)
         self.assertIsInstance(resource, dict)
         self.assert_fields_in_dict(resource, 'resource_name', 'description',
                                    'links', 'logical_resource_id',
@@ -115,18 +111,16 @@
     @test.attr(type='gate')
     def test_resource_metadata(self):
         """Getting metadata for created resources should be possible."""
-        resp, metadata = self.client.show_resource_metadata(
+        _, metadata = self.client.show_resource_metadata(
             self.stack_identifier,
             self.resource_name)
-        self.assertEqual('200', resp['status'])
         self.assertIsInstance(metadata, dict)
         self.assertEqual(['Tom', 'Stinky'], metadata.get('kittens', None))
 
     @test.attr(type='gate')
     def test_list_events(self):
         """Getting list of created events for the stack should be possible."""
-        resp, events = self.client.list_events(self.stack_identifier)
-        self.assertEqual('200', resp['status'])
+        _, events = self.client.list_events(self.stack_identifier)
         self.assertIsInstance(events, list)
 
         for event in events:
@@ -141,14 +135,13 @@
     @test.attr(type='gate')
     def test_show_event(self):
         """Getting details about an event should be possible."""
-        resp, events = self.client.list_resource_events(self.stack_identifier,
-                                                        self.resource_name)
+        _, events = self.client.list_resource_events(self.stack_identifier,
+                                                     self.resource_name)
         self.assertNotEqual([], events)
         events.sort(key=lambda event: event['event_time'])
         event_id = events[0]['id']
-        resp, event = self.client.show_event(self.stack_identifier,
-                                             self.resource_name, event_id)
-        self.assertEqual('200', resp['status'])
+        _, event = self.client.show_event(self.stack_identifier,
+                                          self.resource_name, event_id)
         self.assertIsInstance(event, dict)
         self.assert_fields_in_dict(event, 'resource_name', 'event_time',
                                    'links', 'logical_resource_id',
diff --git a/tempest/api/orchestration/stacks/test_nova_keypair_resources.py b/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
index e3ffdaf..1da340c 100644
--- a/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
+++ b/tempest/api/orchestration/stacks/test_nova_keypair_resources.py
@@ -27,8 +27,8 @@
     _type = 'type'
 
     @classmethod
-    def setUpClass(cls):
-        super(NovaKeyPairResourcesYAMLTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(NovaKeyPairResourcesYAMLTest, cls).resource_setup()
         cls.stack_name = data_utils.rand_name('heat')
         template = cls.read_template('nova_keypair', ext=cls._tpl_type)
 
@@ -56,10 +56,10 @@
                                                    ext=self._tpl_type)
         resources = [('KeyPairSavePrivate',
                       nova_keypair_template[self._resource][
-                      'KeyPairSavePrivate'][self._type]),
+                          'KeyPairSavePrivate'][self._type]),
                      ('KeyPairDontSavePrivate',
                       nova_keypair_template[self._resource][
-                      'KeyPairDontSavePrivate'][self._type])]
+                          'KeyPairDontSavePrivate'][self._type])]
 
         for resource_name, resource_type in resources:
             resource = self.test_resources.get(resource_name, None)
@@ -70,8 +70,7 @@
 
     @test.attr(type='gate')
     def test_stack_keypairs_output(self):
-        resp, stack = self.client.get_stack(self.stack_name)
-        self.assertEqual('200', resp['status'])
+        _, stack = self.client.get_stack(self.stack_name)
         self.assertIsInstance(stack, dict)
 
         output_map = {}
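
A pattern repeated throughout this change is dropping the per-call status
assertions (``self.assertEqual('200', resp['status'])``) and unpacking the
response as ``_, body``; response validation is assumed to happen inside the
service client itself. A toy sketch of that idea, not the actual Tempest
client code::

    class InvalidHTTPResponse(Exception):
        pass

    class ToyStackClient(object):
        def get_stack(self, name):
            resp = {'status': '200'}    # pretend HTTP response metadata
            body = {'stack_name': name, 'stack_status': 'CREATE_COMPLETE'}
            if resp['status'] not in ('200', '202', '204'):
                # validation now lives in the client, not in each test
                raise InvalidHTTPResponse(resp)
            return resp, body

    _, stack = ToyStackClient().get_stack('heat-demo')
    assert stack['stack_status'] == 'CREATE_COMPLETE'
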
diff --git a/tempest/api/orchestration/stacks/test_resource_types.py b/tempest/api/orchestration/stacks/test_resource_types.py
new file mode 100644
index 0000000..e204894
--- /dev/null
+++ b/tempest/api/orchestration/stacks/test_resource_types.py
@@ -0,0 +1,44 @@
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.orchestration import base
+from tempest import test
+
+
+class ResourceTypesTest(base.BaseOrchestrationTest):
+
+    @test.attr(type='smoke')
+    def test_resource_type_list(self):
+        """Verify it is possible to list resource types."""
+        resource_types = self.client.list_resource_types()
+        self.assertIsInstance(resource_types, list)
+        self.assertIn('OS::Nova::Server', resource_types)
+
+    @test.attr(type='smoke')
+    def test_resource_type_show(self):
+        """Verify it is possible to get schema about resource types."""
+        resource_types = self.client.list_resource_types()
+        self.assertNotEmpty(resource_types)
+
+        for resource_type in resource_types:
+            type_schema = self.client.get_resource_type(resource_type)
+            self.assert_fields_in_dict(type_schema, 'properties',
+                                       'attributes', 'resource_type')
+            self.assertEqual(resource_type, type_schema['resource_type'])
+
+    @test.attr(type='smoke')
+    def test_resource_type_template(self):
+        """Verify it is possible to get template about resource types."""
+        type_template = self.client.get_resource_type_template(
+            'OS::Nova::Server')
+        self.assert_fields_in_dict(type_template, 'Outputs',
+                                   'Parameters', 'Resources')
diff --git a/tempest/api/orchestration/stacks/test_stacks.py b/tempest/api/orchestration/stacks/test_stacks.py
index d5e66e8..d7fbd65 100644
--- a/tempest/api/orchestration/stacks/test_stacks.py
+++ b/tempest/api/orchestration/stacks/test_stacks.py
@@ -23,13 +23,12 @@
     empty_template = "HeatTemplateFormatVersion: '2012-12-12'\n"
 
     @classmethod
-    def setUpClass(cls):
-        super(StacksTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(StacksTestJSON, cls).resource_setup()
 
     @test.attr(type='smoke')
     def test_stack_list_responds(self):
-        resp, stacks = self.client.list_stacks()
-        self.assertEqual('200', resp['status'])
+        _, stacks = self.client.list_stacks()
         self.assertIsInstance(stacks, list)
 
     @test.attr(type='smoke')
@@ -45,23 +44,22 @@
         self.client.wait_for_stack_status(stack_identifier, 'CREATE_COMPLETE')
 
         # check for stack in list
-        resp, stacks = self.client.list_stacks()
+        _, stacks = self.client.list_stacks()
         list_ids = list([stack['id'] for stack in stacks])
         self.assertIn(stack_id, list_ids)
 
         # fetch the stack
-        resp, stack = self.client.get_stack(stack_identifier)
+        _, stack = self.client.get_stack(stack_identifier)
         self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
 
         # fetch the stack by name
-        resp, stack = self.client.get_stack(stack_name)
+        _, stack = self.client.get_stack(stack_name)
         self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
 
         # fetch the stack by id
-        resp, stack = self.client.get_stack(stack_id)
+        _, stack = self.client.get_stack(stack_id)
         self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
 
         # delete the stack
-        resp = self.client.delete_stack(stack_identifier)
-        self.assertEqual('204', resp[0]['status'])
+        self.client.delete_stack(stack_identifier)
         self.client.wait_for_stack_status(stack_identifier, 'DELETE_COMPLETE')
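
The ``setUpClass``/``tearDownClass`` methods in these test classes are being
renamed to ``resource_setup``/``resource_cleanup`` while keeping the same body
and the same ``super()`` chaining. A minimal sketch of the resulting shape,
assuming a base class that exposes those two hooks (class names here are
illustrative only)::

    class FakeBaseTest(object):
        @classmethod
        def resource_setup(cls):
            pass

        @classmethod
        def resource_cleanup(cls):
            pass

    class ExampleTest(FakeBaseTest):
        @classmethod
        def resource_setup(cls):                       # was setUpClass
            super(ExampleTest, cls).resource_setup()
            cls.shared = ['fixture']                   # class-level fixtures

        @classmethod
        def resource_cleanup(cls):                     # was tearDownClass
            cls.shared = None                          # release fixtures
            super(ExampleTest, cls).resource_cleanup()
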
diff --git a/tempest/api/orchestration/stacks/test_swift_resources.py b/tempest/api/orchestration/stacks/test_swift_resources.py
index adab8c3..307468e 100644
--- a/tempest/api/orchestration/stacks/test_swift_resources.py
+++ b/tempest/api/orchestration/stacks/test_swift_resources.py
@@ -26,9 +26,8 @@
 
 class SwiftResourcesTestJSON(base.BaseOrchestrationTest):
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(SwiftResourcesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(SwiftResourcesTestJSON, cls).resource_setup()
         cls.stack_name = data_utils.rand_name('heat')
         template = cls.read_template('swift_basic')
         os = clients.Manager()
@@ -61,15 +60,16 @@
             self.assertEqual(resource_name, resource['logical_resource_id'])
             self.assertEqual('CREATE_COMPLETE', resource['resource_status'])
 
+    @test.services('object_storage')
     def test_created_containers(self):
         params = {'format': 'json'}
-        resp, container_list = \
+        _, container_list = \
             self.account_client.list_account_containers(params=params)
-        self.assertEqual('200', resp['status'])
-        self.assertEqual(2, len(container_list))
-        for cont in container_list:
-            self.assertTrue(cont['name'].startswith(self.stack_name))
+        created_containers = [cont for cont in container_list
+                              if cont['name'].startswith(self.stack_name)]
+        self.assertEqual(2, len(created_containers))
 
+    @test.services('object_storage')
     def test_acl(self):
         acl_headers = ('x-container-meta-web-index', 'x-container-read')
 
@@ -86,6 +86,7 @@
         for h in acl_headers:
             self.assertIn(h, headers)
 
+    @test.services('object_storage')
     def test_metadata(self):
         swift_basic_template = self.load_template('swift_basic')
         metadatas = swift_basic_template['resources']['SwiftContainerWebsite'][
diff --git a/tempest/api/orchestration/stacks/test_templates.py b/tempest/api/orchestration/stacks/test_templates.py
index 74950a9..262c576 100644
--- a/tempest/api/orchestration/stacks/test_templates.py
+++ b/tempest/api/orchestration/stacks/test_templates.py
@@ -26,9 +26,8 @@
 """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(TemplateYAMLTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(TemplateYAMLTestJSON, cls).resource_setup()
         cls.stack_name = data_utils.rand_name('heat')
         cls.stack_identifier = cls.create_stack(cls.stack_name, cls.template)
         cls.client.wait_for_stack_status(cls.stack_identifier,
@@ -39,15 +38,13 @@
     @test.attr(type='gate')
     def test_show_template(self):
         """Getting template used to create the stack."""
-        resp, template = self.client.show_template(self.stack_identifier)
-        self.assertEqual('200', resp['status'])
+        _, template = self.client.show_template(self.stack_identifier)
 
     @test.attr(type='gate')
     def test_validate_template(self):
         """Validating template passing it content."""
-        resp, parameters = self.client.validate_template(self.template,
-                                                         self.parameters)
-        self.assertEqual('200', resp['status'])
+        _, parameters = self.client.validate_template(self.template,
+                                                      self.parameters)
 
 
 class TemplateAWSTestJSON(TemplateYAMLTestJSON):
diff --git a/tempest/api/orchestration/stacks/test_templates_negative.py b/tempest/api/orchestration/stacks/test_templates_negative.py
index b325104..9082107 100644
--- a/tempest/api/orchestration/stacks/test_templates_negative.py
+++ b/tempest/api/orchestration/stacks/test_templates_negative.py
@@ -30,8 +30,8 @@
     invalid_template_url = 'http://www.example.com/template.yaml'
 
     @classmethod
-    def setUpClass(cls):
-        super(TemplateYAMLNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(TemplateYAMLNegativeTestJSON, cls).resource_setup()
         cls.parameters = {}
 
     @test.attr(type=['gate', 'negative'])
diff --git a/tempest/api/orchestration/stacks/test_update.py b/tempest/api/orchestration/stacks/test_update.py
deleted file mode 100644
index a9a43b6..0000000
--- a/tempest/api/orchestration/stacks/test_update.py
+++ /dev/null
@@ -1,84 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import logging
-
-from tempest.api.orchestration import base
-from tempest.common.utils import data_utils
-from tempest import test
-
-
-LOG = logging.getLogger(__name__)
-
-
-class UpdateStackTestJSON(base.BaseOrchestrationTest):
-    _interface = 'json'
-
-    template = '''
-heat_template_version: 2013-05-23
-resources:
-  random1:
-    type: OS::Heat::RandomString
-'''
-    update_template = '''
-heat_template_version: 2013-05-23
-resources:
-  random1:
-    type: OS::Heat::RandomString
-  random2:
-    type: OS::Heat::RandomString
-'''
-
-    def update_stack(self, stack_identifier, template):
-        stack_name = stack_identifier.split('/')[0]
-        resp = self.client.update_stack(
-            stack_identifier=stack_identifier,
-            name=stack_name,
-            template=template)
-        self.assertEqual('202', resp[0]['status'])
-        self.client.wait_for_stack_status(stack_identifier, 'UPDATE_COMPLETE')
-
-    @test.attr(type='gate')
-    def test_stack_update_nochange(self):
-        stack_name = data_utils.rand_name('heat')
-        stack_identifier = self.create_stack(stack_name, self.template)
-        self.client.wait_for_stack_status(stack_identifier, 'CREATE_COMPLETE')
-        expected_resources = {'random1': 'OS::Heat::RandomString'}
-        self.assertEqual(expected_resources,
-                         self.list_resources(stack_identifier))
-
-        # Update with no changes, resources should be unchanged
-        self.update_stack(stack_identifier, self.template)
-        self.assertEqual(expected_resources,
-                         self.list_resources(stack_identifier))
-
-    @test.attr(type='gate')
-    @test.skip_because(bug='1308682')
-    def test_stack_update_add_remove(self):
-        stack_name = data_utils.rand_name('heat')
-        stack_identifier = self.create_stack(stack_name, self.template)
-        self.client.wait_for_stack_status(stack_identifier, 'CREATE_COMPLETE')
-        initial_resources = {'random1': 'OS::Heat::RandomString'}
-        self.assertEqual(initial_resources,
-                         self.list_resources(stack_identifier))
-
-        # Add one resource via a stack update
-        self.update_stack(stack_identifier, self.update_template)
-        updated_resources = {'random1': 'OS::Heat::RandomString',
-                             'random2': 'OS::Heat::RandomString'}
-        self.assertEqual(updated_resources,
-                         self.list_resources(stack_identifier))
-
-        # Then remove it by updating with the original template
-        self.update_stack(stack_identifier, self.template)
-        self.assertEqual(initial_resources,
-                         self.list_resources(stack_identifier))
diff --git a/tempest/api/orchestration/stacks/test_volumes.py b/tempest/api/orchestration/stacks/test_volumes.py
index d422752..f47078c 100644
--- a/tempest/api/orchestration/stacks/test_volumes.py
+++ b/tempest/api/orchestration/stacks/test_volumes.py
@@ -26,15 +26,14 @@
 class CinderResourcesTest(base.BaseOrchestrationTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(CinderResourcesTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(CinderResourcesTest, cls).resource_setup()
         if not CONF.service_available.cinder:
             raise cls.skipException('Cinder support is required')
 
     def _cinder_verify(self, volume_id, template):
         self.assertIsNotNone(volume_id)
-        resp, volume = self.volumes_client.get_volume(volume_id)
-        self.assertEqual(200, resp.status)
+        _, volume = self.volumes_client.get_volume(volume_id)
         self.assertEqual('available', volume.get('status'))
         self.assertEqual(template['resources']['volume']['properties'][
             'size'], volume.get('size'))
@@ -55,6 +54,7 @@
             'name'], self.get_stack_output(stack_identifier, 'display_name'))
 
     @test.attr(type='gate')
+    @test.services('volume')
     def test_cinder_volume_create_delete(self):
         """Create and delete a volume via OS::Cinder::Volume."""
         stack_name = data_utils.rand_name('heat')
@@ -79,11 +79,11 @@
 
     def _cleanup_volume(self, volume_id):
         """Cleanup the volume direct with cinder."""
-        resp = self.volumes_client.delete_volume(volume_id)
-        self.assertEqual(202, resp[0].status)
+        self.volumes_client.delete_volume(volume_id)
         self.volumes_client.wait_for_resource_deletion(volume_id)
 
     @test.attr(type='gate')
+    @test.services('volume')
     def test_cinder_volume_create_delete_retain(self):
         """Ensure the 'Retain' deletion policy is respected."""
         stack_name = data_utils.rand_name('heat')
diff --git a/tempest/api/telemetry/base.py b/tempest/api/telemetry/base.py
index b5b2bb1..769c201 100644
--- a/tempest/api/telemetry/base.py
+++ b/tempest/api/telemetry/base.py
@@ -26,10 +26,11 @@
     """Base test case class for all Telemetry API tests."""
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.ceilometer:
             raise cls.skipException("Ceilometer support is required")
-        super(BaseTelemetryTest, cls).setUpClass()
+        cls.set_network_resources()
+        super(BaseTelemetryTest, cls).resource_setup()
         os = cls.get_client_manager()
         cls.telemetry_client = os.telemetry_client
         cls.servers_client = os.servers_client
@@ -83,12 +84,12 @@
                 pass
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.cleanup_resources(cls.telemetry_client.delete_alarm, cls.alarm_ids)
         cls.cleanup_resources(cls.servers_client.delete_server, cls.server_ids)
         cls.cleanup_resources(cls.image_client.delete_image, cls.image_ids)
         cls.clear_isolated_creds()
-        super(BaseTelemetryTest, cls).tearDownClass()
+        super(BaseTelemetryTest, cls).resource_cleanup()
 
     def await_samples(self, metric, query):
         """
diff --git a/tempest/api/telemetry/test_telemetry_alarming_api.py b/tempest/api/telemetry/test_telemetry_alarming_api.py
index 95758e8..b45d545 100644
--- a/tempest/api/telemetry/test_telemetry_alarming_api.py
+++ b/tempest/api/telemetry/test_telemetry_alarming_api.py
@@ -20,8 +20,8 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(TelemetryAlarmingAPITestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(TelemetryAlarmingAPITestJSON, cls).resource_setup()
         cls.rule = {'meter_name': 'cpu_util',
                     'comparison_operator': 'gt',
                     'threshold': 80.0,
diff --git a/tempest/api/telemetry/test_telemetry_notification_api.py b/tempest/api/telemetry/test_telemetry_notification_api.py
index 9b15c51..3782b70 100644
--- a/tempest/api/telemetry/test_telemetry_notification_api.py
+++ b/tempest/api/telemetry/test_telemetry_notification_api.py
@@ -23,11 +23,11 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if CONF.telemetry.too_slow_to_test:
             raise cls.skipException("Ceilometer feature for fast work mysql "
                                     "is disabled")
-        super(TelemetryNotificationAPITestJSON, cls).setUpClass()
+        super(TelemetryNotificationAPITestJSON, cls).resource_setup()
 
     @test.attr(type="gate")
     @testtools.skipIf(not CONF.service_available.nova,
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index d451517..db2aab5 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -25,9 +25,8 @@
     _interface = "json"
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumeMultiBackendTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumeMultiBackendTest, cls).resource_setup()
         if not CONF.volume_feature_enabled.multi_backend:
             raise cls.skipException("Cinder multi-backend feature disabled")
 
@@ -61,22 +60,22 @@
             extra_specs = {spec_key_with_prefix: backend_name_key}
         else:
             extra_specs = {spec_key_without_prefix: backend_name_key}
-        resp, self.type = self.client.create_volume_type(
+        _, self.type = self.client.create_volume_type(
             type_name, extra_specs=extra_specs)
         self.volume_type_id_list.append(self.type['id'])
 
-        resp, self.volume = self.volume_client.create_volume(
+        _, self.volume = self.volume_client.create_volume(
             size=1, display_name=vol_name, volume_type=type_name)
-        self.volume_client.wait_for_volume_status(
-            self.volume['id'], 'available')
         if with_prefix:
             self.volume_id_list_with_prefix.append(self.volume['id'])
         else:
             self.volume_id_list_without_prefix.append(
                 self.volume['id'])
+        self.volume_client.wait_for_volume_status(
+            self.volume['id'], 'available')
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # volumes deletion
         vid_prefix = getattr(cls, 'volume_id_list_with_prefix', [])
         for volume_id in vid_prefix:
@@ -93,7 +92,7 @@
         for volume_type_id in volume_type_id_list:
             cls.client.delete_volume_type(volume_type_id)
 
-        super(VolumeMultiBackendTest, cls).tearDownClass()
+        super(VolumeMultiBackendTest, cls).resource_cleanup()
 
     @test.attr(type='smoke')
     def test_backend_name_reporting(self):
@@ -130,8 +129,7 @@
         # the multi backend feature has been enabled
         # if multi-backend is enabled: os-vol-attr:host should be like:
         # host@backend_name
-        resp, volume = self.volume_client.get_volume(volume_id)
-        self.assertEqual(200, resp.status)
+        _, volume = self.volume_client.get_volume(volume_id)
 
         volume1_host = volume['os-vol-host-attr:host']
         msg = ("multi-backend reporting incorrect values for volume %s" %
@@ -142,10 +140,10 @@
         # this test checks that the two volumes created at setUp don't
         # belong to the same backend (if they do, then the
         # volume backend distinction is not working properly)
-        resp, volume = self.volume_client.get_volume(volume1_id)
+        _, volume = self.volume_client.get_volume(volume1_id)
         volume1_host = volume['os-vol-host-attr:host']
 
-        resp, volume = self.volume_client.get_volume(volume2_id)
+        _, volume = self.volume_client.get_volume(volume2_id)
         volume2_host = volume['os-vol-host-attr:host']
 
         msg = ("volumes %s and %s were created in the same backend" %
diff --git a/tempest/api/volume/admin/test_snapshots_actions.py b/tempest/api/volume/admin/test_snapshots_actions.py
index 594c703..720734b 100644
--- a/tempest/api/volume/admin/test_snapshots_actions.py
+++ b/tempest/api/volume/admin/test_snapshots_actions.py
@@ -22,9 +22,8 @@
     _interface = "json"
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(SnapshotsActionsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(SnapshotsActionsTest, cls).resource_setup()
         cls.client = cls.snapshots_client
 
         # Create admin volume client
@@ -32,21 +31,21 @@
 
         # Create a test shared volume for tests
         vol_name = data_utils.rand_name(cls.__name__ + '-Volume-')
-        resp_vol, cls.volume = \
+        _, cls.volume = \
             cls.volumes_client.create_volume(size=1, display_name=vol_name)
         cls.volumes_client.wait_for_volume_status(cls.volume['id'],
                                                   'available')
 
         # Create a test shared snapshot for tests
         snap_name = data_utils.rand_name(cls.__name__ + '-Snapshot-')
-        resp_snap, cls.snapshot = \
+        _, cls.snapshot = \
             cls.client.create_snapshot(cls.volume['id'],
                                        display_name=snap_name)
         cls.client.wait_for_snapshot_status(cls.snapshot['id'],
                                             'available')
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Delete the test snapshot
         cls.client.delete_snapshot(cls.snapshot['id'])
         cls.client.wait_for_resource_deletion(cls.snapshot['id'])
@@ -55,7 +54,7 @@
         cls.volumes_client.delete_volume(cls.volume['id'])
         cls.volumes_client.wait_for_resource_deletion(cls.volume['id'])
 
-        super(SnapshotsActionsTest, cls).tearDownClass()
+        super(SnapshotsActionsTest, cls).resource_cleanup()
 
     def tearDown(self):
         # Set snapshot's status to available after test
@@ -70,12 +69,10 @@
         # and force delete temp snapshot
         temp_snapshot = self.create_snapshot(self.volume['id'])
         if status:
-            resp, body = self.admin_snapshots_client.\
+            _, body = self.admin_snapshots_client.\
                 reset_snapshot_status(temp_snapshot['id'], status)
-            self.assertEqual(202, resp.status)
-        resp_delete, volume_delete = self.admin_snapshots_client.\
+        _, volume_delete = self.admin_snapshots_client.\
             force_delete_snapshot(temp_snapshot['id'])
-        self.assertEqual(202, resp_delete.status)
         self.client.wait_for_resource_deletion(temp_snapshot['id'])
 
     def _get_progress_alias(self):
@@ -85,12 +82,10 @@
     def test_reset_snapshot_status(self):
         # Reset snapshot status to creating
         status = 'creating'
-        resp, body = self.admin_snapshots_client.\
+        _, body = self.admin_snapshots_client.\
             reset_snapshot_status(self.snapshot['id'], status)
-        self.assertEqual(202, resp.status)
-        resp_get, snapshot_get \
+        _, snapshot_get \
             = self.admin_snapshots_client.get_snapshot(self.snapshot['id'])
-        self.assertEqual(200, resp_get.status)
         self.assertEqual(status, snapshot_get['status'])
 
     @test.attr(type='gate')
@@ -104,12 +99,10 @@
         progress = '80%'
         status = 'error'
         progress_alias = self._get_progress_alias()
-        resp, body = self.client.update_snapshot_status(self.snapshot['id'],
-                                                        status, progress)
-        self.assertEqual(202, resp.status)
-        resp_get, snapshot_get \
+        _, body = self.client.update_snapshot_status(self.snapshot['id'],
+                                                     status, progress)
+        _, snapshot_get \
             = self.admin_snapshots_client.get_snapshot(self.snapshot['id'])
-        self.assertEqual(200, resp_get.status)
         self.assertEqual(status, snapshot_get['status'])
         self.assertEqual(progress, snapshot_get[progress_alias])
 
diff --git a/tempest/api/volume/admin/test_volume_hosts.py b/tempest/api/volume/admin/test_volume_hosts.py
index 01ba915..017363d 100644
--- a/tempest/api/volume/admin/test_volume_hosts.py
+++ b/tempest/api/volume/admin/test_volume_hosts.py
@@ -22,8 +22,7 @@
 
     @test.attr(type='gate')
     def test_list_hosts(self):
-        resp, hosts = self.hosts_client.list_hosts()
-        self.assertEqual(200, resp.status)
+        _, hosts = self.hosts_client.list_hosts()
         self.assertTrue(len(hosts) >= 2, "No. of hosts is < 2, "
                         "response of list hosts is: %s" % hosts)
 
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index ecd8836..7e24fa4 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -27,37 +27,35 @@
     force_tenant_isolation = True
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumeQuotasAdminTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumeQuotasAdminTestJSON, cls).resource_setup()
         cls.admin_volume_client = cls.os_adm.volumes_client
         cls.demo_tenant_id = cls.isolated_creds.get_primary_creds().tenant_id
 
     @test.attr(type='gate')
     def test_list_quotas(self):
-        resp, quotas = self.quotas_client.get_quota_set(self.demo_tenant_id)
-        self.assertEqual(200, resp.status)
+        _, quotas = self.quotas_client.get_quota_set(self.demo_tenant_id)
         for key in QUOTA_KEYS:
             self.assertIn(key, quotas)
 
     @test.attr(type='gate')
     def test_list_default_quotas(self):
-        resp, quotas = self.quotas_client.get_default_quota_set(
+        _, quotas = self.quotas_client.get_default_quota_set(
             self.demo_tenant_id)
-        self.assertEqual(200, resp.status)
         for key in QUOTA_KEYS:
             self.assertIn(key, quotas)
 
     @test.attr(type='gate')
     def test_update_all_quota_resources_for_tenant(self):
         # Admin can update all the resource quota limits for a tenant
-        resp, default_quota_set = self.quotas_client.get_default_quota_set(
+        _, default_quota_set = self.quotas_client.get_default_quota_set(
             self.demo_tenant_id)
         new_quota_set = {'gigabytes': 1009,
                          'volumes': 11,
                          'snapshots': 11}
 
         # Update limits for all quota resources
-        resp, quota_set = self.quotas_client.update_quota_set(
+        _, quota_set = self.quotas_client.update_quota_set(
             self.demo_tenant_id,
             **new_quota_set)
 
@@ -66,7 +64,6 @@
             if k in QUOTA_KEYS)
         self.addCleanup(self.quotas_client.update_quota_set,
                         self.demo_tenant_id, **cleanup_quota_set)
-        self.assertEqual(200, resp.status)
         # test that the specific values we set are actually in
         # the final result. There is nothing here that ensures there
         # would be no other values in there.
@@ -74,8 +71,7 @@
 
     @test.attr(type='gate')
     def test_show_quota_usage(self):
-        resp, quota_usage = self.quotas_client.get_quota_usage(self.adm_tenant)
-        self.assertEqual(200, resp.status)
+        _, quota_usage = self.quotas_client.get_quota_usage(self.adm_tenant)
         for key in QUOTA_KEYS:
             self.assertIn(key, quota_usage)
             for usage_key in QUOTA_USAGE_KEYS:
@@ -83,17 +79,16 @@
 
     @test.attr(type='gate')
     def test_quota_usage(self):
-        resp, quota_usage = self.quotas_client.get_quota_usage(
+        _, quota_usage = self.quotas_client.get_quota_usage(
             self.demo_tenant_id)
 
         volume = self.create_volume(size=1)
         self.addCleanup(self.admin_volume_client.delete_volume,
                         volume['id'])
 
-        resp, new_quota_usage = self.quotas_client.get_quota_usage(
+        _, new_quota_usage = self.quotas_client.get_quota_usage(
             self.demo_tenant_id)
 
-        self.assertEqual(200, resp.status)
         self.assertEqual(quota_usage['volumes']['in_use'] + 1,
                          new_quota_usage['volumes']['in_use'])
 
@@ -115,9 +110,7 @@
         self.quotas_client.update_quota_set(tenant_id,
                                             volumes=(int(volume_default) + 5))
 
-        resp, _ = self.quotas_client.delete_quota_set(tenant_id)
-        self.assertEqual(200, resp.status)
-
+        self.quotas_client.delete_quota_set(tenant_id)
         _, quota_set_new = self.quotas_client.get_quota_set(tenant_id)
         self.assertEqual(volume_default, quota_set_new['volumes'])
 
diff --git a/tempest/api/volume/admin/test_volume_quotas_negative.py b/tempest/api/volume/admin/test_volume_quotas_negative.py
index ab88b90..60a0adb 100644
--- a/tempest/api/volume/admin/test_volume_quotas_negative.py
+++ b/tempest/api/volume/admin/test_volume_quotas_negative.py
@@ -23,16 +23,15 @@
     force_tenant_isolation = True
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumeQuotasNegativeTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumeQuotasNegativeTestJSON, cls).resource_setup()
         demo_user = cls.isolated_creds.get_primary_creds()
         cls.demo_tenant_id = demo_user.tenant_id
         cls.shared_quota_set = {'gigabytes': 3, 'volumes': 1, 'snapshots': 1}
 
         # NOTE(gfidente): no need to restore original quota set
         # after the tests as they only work with tenant isolation.
-        resp, quota_set = cls.quotas_client.update_quota_set(
+        _, quota_set = cls.quotas_client.update_quota_set(
             cls.demo_tenant_id,
             **cls.shared_quota_set)
 
@@ -63,7 +62,7 @@
                         **self.shared_quota_set)
 
         new_quota_set = {'gigabytes': 2, 'volumes': 2, 'snapshots': 1}
-        resp, quota_set = self.quotas_client.update_quota_set(
+        _, quota_set = self.quotas_client.update_quota_set(
             self.demo_tenant_id,
             **new_quota_set)
         self.assertRaises(exceptions.OverLimit,
@@ -71,7 +70,7 @@
                           size=1)
 
         new_quota_set = {'gigabytes': 2, 'volumes': 1, 'snapshots': 2}
-        resp, quota_set = self.quotas_client.update_quota_set(
+        _, quota_set = self.quotas_client.update_quota_set(
             self.demo_tenant_id,
             **self.shared_quota_set)
         self.assertRaises(exceptions.OverLimit,
diff --git a/tempest/api/volume/admin/test_volume_services.py b/tempest/api/volume/admin/test_volume_services.py
index 012c231..7820148 100644
--- a/tempest/api/volume/admin/test_volume_services.py
+++ b/tempest/api/volume/admin/test_volume_services.py
@@ -25,24 +25,22 @@
     _interface = "json"
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesServicesTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesServicesTestJSON, cls).resource_setup()
         cls.client = cls.os_adm.volume_services_client
-        resp, cls.services = cls.client.list_services()
+        _, cls.services = cls.client.list_services()
         cls.host_name = cls.services[0]['host']
         cls.binary_name = cls.services[0]['binary']
 
     @test.attr(type='gate')
     def test_list_services(self):
-        resp, services = self.client.list_services()
-        self.assertEqual(200, resp.status)
+        _, services = self.client.list_services()
         self.assertNotEqual(0, len(services))
 
     @test.attr(type='gate')
     def test_get_service_by_service_binary_name(self):
         params = {'binary': self.binary_name}
-        resp, services = self.client.list_services(params)
-        self.assertEqual(200, resp.status)
+        _, services = self.client.list_services(params)
         self.assertNotEqual(0, len(services))
         for service in services:
             self.assertEqual(self.binary_name, service['binary'])
@@ -53,7 +51,7 @@
                             service['host'] == self.host_name]
         params = {'host': self.host_name}
 
-        resp, services = self.client.list_services(params)
+        _, services = self.client.list_services(params)
 
         # we could have a periodic job check-in between the 2 service
         # lookups, so only compare binary lists.
@@ -67,8 +65,7 @@
     def test_get_service_by_service_and_host_name(self):
         params = {'host': self.host_name, 'binary': self.binary_name}
 
-        resp, services = self.client.list_services(params)
-        self.assertEqual(200, resp.status)
+        _, services = self.client.list_services(params)
         self.assertEqual(1, len(services))
         self.assertEqual(self.host_name, services[0]['host'])
         self.assertEqual(self.binary_name, services[0]['binary'])
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
index 3b8c214..070d38f 100644
--- a/tempest/api/volume/admin/test_volume_types.py
+++ b/tempest/api/volume/admin/test_volume_types.py
@@ -25,19 +25,16 @@
     _interface = "json"
 
     def _delete_volume(self, volume_id):
-        resp, _ = self.volumes_client.delete_volume(volume_id)
-        self.assertEqual(202, resp.status)
+        self.volumes_client.delete_volume(volume_id)
         self.volumes_client.wait_for_resource_deletion(volume_id)
 
     def _delete_volume_type(self, volume_type_id):
-        resp, _ = self.client.delete_volume_type(volume_type_id)
-        self.assertEqual(202, resp.status)
+        self.client.delete_volume_type(volume_type_id)
 
     @test.attr(type='smoke')
     def test_volume_type_list(self):
         # List Volume types.
-        resp, body = self.client.list_volume_types()
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_volume_types()
         self.assertIsInstance(body, list)
 
     @test.attr(type='smoke')
@@ -51,17 +48,15 @@
         extra_specs = {"storage_protocol": proto,
                        "vendor_name": vendor}
         body = {}
-        resp, body = self.client.create_volume_type(
+        _, body = self.client.create_volume_type(
             vol_type_name,
             extra_specs=extra_specs)
-        self.assertEqual(200, resp.status)
         self.assertIn('id', body)
         self.addCleanup(self._delete_volume_type, body['id'])
         self.assertIn('name', body)
-        resp, volume = self.volumes_client.create_volume(
+        _, volume = self.volumes_client.create_volume(
             size=1, display_name=vol_name,
             volume_type=vol_type_name)
-        self.assertEqual(200, resp.status)
         self.assertIn('id', volume)
         self.addCleanup(self._delete_volume, volume['id'])
         self.assertIn('display_name', volume)
@@ -72,8 +67,7 @@
                         "Field volume id is empty or not found.")
         self.volumes_client.wait_for_volume_status(volume['id'],
                                                    'available')
-        resp, fetched_volume = self.volumes_client.get_volume(volume['id'])
-        self.assertEqual(200, resp.status)
+        _, fetched_volume = self.volumes_client.get_volume(volume['id'])
         self.assertEqual(vol_name, fetched_volume['display_name'],
                          'The fetched Volume is different '
                          'from the created Volume')
@@ -93,10 +87,9 @@
         vendor = CONF.volume.vendor_name
         extra_specs = {"storage_protocol": proto,
                        "vendor_name": vendor}
-        resp, body = self.client.create_volume_type(
+        _, body = self.client.create_volume_type(
             name,
             extra_specs=extra_specs)
-        self.assertEqual(200, resp.status)
         self.assertIn('id', body)
         self.addCleanup(self._delete_volume_type, body['id'])
         self.assertIn('name', body)
@@ -105,8 +98,7 @@
                          "to the requested name")
         self.assertTrue(body['id'] is not None,
                         "Field volume_type id is empty or not found.")
-        resp, fetched_volume_type = self.client.get_volume_type(body['id'])
-        self.assertEqual(200, resp.status)
+        _, fetched_volume_type = self.client.get_volume_type(body['id'])
         self.assertEqual(name, fetched_volume_type['name'],
                          'The fetched Volume_type is different '
                          'from the created Volume_type')
@@ -123,15 +115,13 @@
         provider = "LuksEncryptor"
         control_location = "front-end"
         name = data_utils.rand_name("volume-type-")
-        resp, body = self.client.create_volume_type(name)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.create_volume_type(name)
         self.addCleanup(self._delete_volume_type, body['id'])
 
         # Create encryption type
-        resp, encryption_type = self.client.create_encryption_type(
+        _, encryption_type = self.client.create_encryption_type(
             body['id'], provider=provider,
             control_location=control_location)
-        self.assertEqual(200, resp.status)
         self.assertIn('volume_type_id', encryption_type)
         self.assertEqual(provider, encryption_type['provider'],
                          "The created encryption_type provider is not equal "
@@ -141,9 +131,8 @@
                          "equal to the requested control_location")
 
         # Get encryption type
-        resp, fetched_encryption_type = self.client.get_encryption_type(
+        _, fetched_encryption_type = self.client.get_encryption_type(
             encryption_type['volume_type_id'])
-        self.assertEqual(200, resp.status)
         self.assertEqual(provider,
                          fetched_encryption_type['provider'],
                          'The fetched encryption_type provider is different '
@@ -154,13 +143,11 @@
                          'different from the created encryption_type')
 
         # Delete encryption type
-        resp, _ = self.client.delete_encryption_type(
+        self.client.delete_encryption_type(
             encryption_type['volume_type_id'])
-        self.assertEqual(202, resp.status)
         resource = {"id": encryption_type['volume_type_id'],
                     "type": "encryption-type"}
         self.client.wait_for_resource_deletion(resource)
-        resp, deleted_encryption_type = self.client.get_encryption_type(
+        _, deleted_encryption_type = self.client.get_encryption_type(
             encryption_type['volume_type_id'])
-        self.assertEqual(200, resp.status)
         self.assertEmpty(deleted_encryption_type)
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs.py b/tempest/api/volume/admin/test_volume_types_extra_specs.py
index 06a0b34..2d72dd2 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs.py
@@ -22,28 +22,26 @@
     _interface = "json"
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumeTypesExtraSpecsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumeTypesExtraSpecsTest, cls).resource_setup()
         vol_type_name = data_utils.rand_name('Volume-type-')
-        resp, cls.volume_type = cls.client.create_volume_type(vol_type_name)
+        _, cls.volume_type = cls.client.create_volume_type(vol_type_name)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.client.delete_volume_type(cls.volume_type['id'])
-        super(VolumeTypesExtraSpecsTest, cls).tearDownClass()
+        super(VolumeTypesExtraSpecsTest, cls).resource_cleanup()
 
     @test.attr(type='smoke')
     def test_volume_type_extra_specs_list(self):
         # List Volume types extra specs.
         extra_specs = {"spec1": "val1"}
-        resp, body = self.client.create_volume_type_extra_specs(
+        _, body = self.client.create_volume_type_extra_specs(
             self.volume_type['id'], extra_specs)
-        self.assertEqual(200, resp.status)
         self.assertEqual(extra_specs, body,
                          "Volume type extra spec incorrectly created")
-        resp, body = self.client.list_volume_types_extra_specs(
+        _, body = self.client.list_volume_types_extra_specs(
             self.volume_type['id'])
-        self.assertEqual(200, resp.status)
         self.assertIsInstance(body, dict)
         self.assertIn('spec1', body)
 
@@ -51,18 +49,16 @@
     def test_volume_type_extra_specs_update(self):
         # Update volume type extra specs
         extra_specs = {"spec2": "val1"}
-        resp, body = self.client.create_volume_type_extra_specs(
+        _, body = self.client.create_volume_type_extra_specs(
             self.volume_type['id'], extra_specs)
-        self.assertEqual(200, resp.status)
         self.assertEqual(extra_specs, body,
                          "Volume type extra spec incorrectly created")
 
         extra_spec = {"spec2": "val2"}
-        resp, body = self.client.update_volume_type_extra_specs(
+        _, body = self.client.update_volume_type_extra_specs(
             self.volume_type['id'],
             extra_spec.keys()[0],
             extra_spec)
-        self.assertEqual(200, resp.status)
         self.assertIn('spec2', body)
         self.assertEqual(extra_spec['spec2'], body['spec2'],
                          "Volume type extra spec incorrectly updated")
@@ -71,21 +67,18 @@
     def test_volume_type_extra_spec_create_get_delete(self):
         # Create/Get/Delete volume type extra spec.
         extra_specs = {"spec3": "val1"}
-        resp, body = self.client.create_volume_type_extra_specs(
+        _, body = self.client.create_volume_type_extra_specs(
             self.volume_type['id'],
             extra_specs)
-        self.assertEqual(200, resp.status)
         self.assertEqual(extra_specs, body,
                          "Volume type extra spec incorrectly created")
 
-        resp, _ = self.client.get_volume_type_extra_specs(
+        self.client.get_volume_type_extra_specs(
             self.volume_type['id'],
             extra_specs.keys()[0])
-        self.assertEqual(200, resp.status)
         self.assertEqual(extra_specs, body,
                          "Volume type extra spec incorrectly fetched")
 
-        resp, _ = self.client.delete_volume_type_extra_specs(
+        self.client.delete_volume_type_extra_specs(
             self.volume_type['id'],
             extra_specs.keys()[0])
-        self.assertEqual(202, resp.status)
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
index da421dc..f3eee00 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
@@ -25,18 +25,18 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
-        super(ExtraSpecsNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ExtraSpecsNegativeTest, cls).resource_setup()
         vol_type_name = data_utils.rand_name('Volume-type-')
         cls.extra_specs = {"spec1": "val1"}
-        resp, cls.volume_type = cls.client.create_volume_type(
+        _, cls.volume_type = cls.client.create_volume_type(
             vol_type_name,
             extra_specs=cls.extra_specs)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.client.delete_volume_type(cls.volume_type['id'])
-        super(ExtraSpecsNegativeTest, cls).tearDownClass()
+        super(ExtraSpecsNegativeTest, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_update_no_body(self):
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index 008f739..f85718b 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -22,9 +22,8 @@
     _interface = "json"
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesActionsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesActionsTest, cls).resource_setup()
         cls.client = cls.volumes_client
 
         # Create admin volume client
@@ -33,23 +32,23 @@
         # Create a test shared volume for tests
         vol_name = utils.rand_name(cls.__name__ + '-Volume-')
 
-        resp, cls.volume = cls.client.create_volume(size=1,
-                                                    display_name=vol_name)
+        _, cls.volume = cls.client.create_volume(size=1,
+                                                 display_name=vol_name)
         cls.client.wait_for_volume_status(cls.volume['id'], 'available')
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Delete the test volume
         cls.client.delete_volume(cls.volume['id'])
         cls.client.wait_for_resource_deletion(cls.volume['id'])
 
-        super(VolumesActionsTest, cls).tearDownClass()
+        super(VolumesActionsTest, cls).resource_cleanup()
 
     def _reset_volume_status(self, volume_id, status):
         # Reset the volume status
-        resp, body = self.admin_volume_client.reset_volume_status(volume_id,
-                                                                  status)
-        return resp, body
+        _, body = self.admin_volume_client.reset_volume_status(volume_id,
+                                                               status)
+        return _, body
 
     def tearDown(self):
         # Set volume's status to available after test
@@ -59,8 +58,8 @@
     def _create_temp_volume(self):
         # Create a temp volume for force delete tests
         vol_name = utils.rand_name('Volume')
-        resp, temp_volume = self.client.create_volume(size=1,
-                                                      display_name=vol_name)
+        _, temp_volume = self.client.create_volume(size=1,
+                                                   display_name=vol_name)
         self.client.wait_for_volume_status(temp_volume['id'], 'available')
 
         return temp_volume
@@ -69,19 +68,16 @@
         # Create volume, reset volume status, and force delete temp volume
         temp_volume = self._create_temp_volume()
         if status:
-            resp, body = self._reset_volume_status(temp_volume['id'], status)
-            self.assertEqual(202, resp.status)
-        resp_delete, volume_delete = self.admin_volume_client.\
+            _, body = self._reset_volume_status(temp_volume['id'], status)
+        _, volume_delete = self.admin_volume_client.\
             force_delete_volume(temp_volume['id'])
-        self.assertEqual(202, resp_delete.status)
         self.client.wait_for_resource_deletion(temp_volume['id'])
 
     @test.attr(type='gate')
     def test_volume_reset_status(self):
         # test volume reset status : available->error->available
-        resp, body = self._reset_volume_status(self.volume['id'], 'error')
-        self.assertEqual(202, resp.status)
-        resp_get, volume_get = self.admin_volume_client.get_volume(
+        _, body = self._reset_volume_status(self.volume['id'], 'error')
+        _, volume_get = self.admin_volume_client.get_volume(
             self.volume['id'])
         self.assertEqual('error', volume_get['status'])
 
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index f9fbe18..8b90b07 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -27,9 +27,8 @@
     _interface = "json"
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesBackupsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesBackupsTest, cls).resource_setup()
 
         if not CONF.volume_feature_enabled.backup:
             raise cls.skipException("Cinder backup feature disabled")
@@ -43,9 +42,8 @@
         # Create backup
         backup_name = data_utils.rand_name('Backup')
         create_backup = self.backups_adm_client.create_backup
-        resp, backup = create_backup(self.volume['id'],
-                                     name=backup_name)
-        self.assertEqual(202, resp.status)
+        _, backup = create_backup(self.volume['id'],
+                                  name=backup_name)
         self.addCleanup(self.backups_adm_client.delete_backup,
                         backup['id'])
         self.assertEqual(backup_name, backup['name'])
@@ -55,19 +53,16 @@
                                                        'available')
 
         # Get a given backup
-        resp, backup = self.backups_adm_client.get_backup(backup['id'])
-        self.assertEqual(200, resp.status)
+        _, backup = self.backups_adm_client.get_backup(backup['id'])
         self.assertEqual(backup_name, backup['name'])
 
         # Get all backups with detail
-        resp, backups = self.backups_adm_client.list_backups_with_detail()
-        self.assertEqual(200, resp.status)
+        _, backups = self.backups_adm_client.list_backups_with_detail()
         self.assertIn((backup['name'], backup['id']),
                       [(m['name'], m['id']) for m in backups])
 
         # Restore backup
-        resp, restore = self.backups_adm_client.restore_backup(backup['id'])
-        self.assertEqual(202, resp.status)
+        _, restore = self.backups_adm_client.restore_backup(backup['id'])
 
         # Delete backup
         self.addCleanup(self.volumes_adm_client.delete_volume,
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index b7de767..7f5361d 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -32,9 +32,9 @@
     _interface = 'json'
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources()
-        super(BaseVolumeTest, cls).setUpClass()
+        super(BaseVolumeTest, cls).resource_setup()
 
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
@@ -63,32 +63,31 @@
                 cls.os.volume_availability_zone_client)
-            # Special fields and resp code for cinder v1
+            # Special fields for cinder v1
             cls.special_fields = {'name_field': 'display_name',
-                                  'descrip_field': 'display_description',
-                                  'create_resp': 200}
+                                  'descrip_field': 'display_description'}
 
         elif cls._api_version == 2:
             if not CONF.volume_feature_enabled.api_v2:
                 msg = "Volume API v2 is disabled"
                 raise cls.skipException(msg)
+            cls.snapshots_client = cls.os.snapshots_v2_client
             cls.volumes_client = cls.os.volumes_v2_client
             cls.volumes_extension_client = cls.os.volumes_v2_extension_client
             cls.availability_zone_client = (
                 cls.os.volume_v2_availability_zone_client)
-            # Special fields and resp code for cinder v2
+            # Special fields for cinder v2
             cls.special_fields = {'name_field': 'name',
-                                  'descrip_field': 'description',
-                                  'create_resp': 202}
+                                  'descrip_field': 'description'}
 
         else:
             msg = ("Invalid Cinder API version (%s)" % cls._api_version)
             raise exceptions.InvalidConfiguration(message=msg)
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         cls.clear_snapshots()
         cls.clear_volumes()
         cls.clear_isolated_creds()
-        super(BaseVolumeTest, cls).tearDownClass()
+        super(BaseVolumeTest, cls).resource_cleanup()
 
     @classmethod
     def create_volume(cls, size=1, **kwargs):
@@ -96,11 +95,9 @@
         name = data_utils.rand_name('Volume')
 
         name_field = cls.special_fields['name_field']
-        expect_status = cls.special_fields['create_resp']
 
         kwargs[name_field] = name
-        resp, volume = cls.volumes_client.create_volume(size, **kwargs)
-        assert expect_status == resp.status
+        _, volume = cls.volumes_client.create_volume(size, **kwargs)
 
         cls.volumes.append(volume)
         cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
@@ -109,9 +106,8 @@
     @classmethod
     def create_snapshot(cls, volume_id=1, **kwargs):
         """Wrapper utility that returns a test snapshot."""
-        resp, snapshot = cls.snapshots_client.create_snapshot(volume_id,
-                                                              **kwargs)
-        assert 200 == resp.status
+        _, snapshot = cls.snapshots_client.create_snapshot(volume_id,
+                                                           **kwargs)
         cls.snapshots.append(snapshot)
         cls.snapshots_client.wait_for_snapshot_status(snapshot['id'],
                                                       'available')
@@ -153,11 +149,11 @@
     _api_version = 1
 
 
-class BaseVolumeV1AdminTest(BaseVolumeV1Test):
+class BaseVolumeAdminTest(BaseVolumeTest):
     """Base test case class for all Volume Admin API tests."""
     @classmethod
-    def setUpClass(cls):
-        super(BaseVolumeV1AdminTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(BaseVolumeAdminTest, cls).resource_setup()
         cls.adm_user = CONF.identity.admin_username
         cls.adm_pass = CONF.identity.admin_password
         cls.adm_tenant = CONF.identity.admin_tenant_name
@@ -165,11 +161,62 @@
             msg = ("Missing Volume Admin API credentials "
                    "in configuration.")
             raise cls.skipException(msg)
+
         if CONF.compute.allow_tenant_isolation:
             cls.os_adm = clients.Manager(cls.isolated_creds.get_admin_creds(),
                                          interface=cls._interface)
         else:
             cls.os_adm = clients.AdminManager(interface=cls._interface)
+
+        cls.qos_specs = []
+
         cls.client = cls.os_adm.volume_types_client
         cls.hosts_client = cls.os_adm.volume_hosts_client
         cls.quotas_client = cls.os_adm.volume_quotas_client
+        cls.volume_types_client = cls.os_adm.volume_types_client
+
+        if cls._api_version == 1:
+            if not CONF.volume_feature_enabled.api_v1:
+                msg = "Volume API v1 is disabled"
+                raise cls.skipException(msg)
+            cls.volume_qos_client = cls.os_adm.volume_qos_client
+        elif cls._api_version == 2:
+            if not CONF.volume_feature_enabled.api_v2:
+                msg = "Volume API v2 is disabled"
+                raise cls.skipException(msg)
+            cls.volume_qos_client = cls.os_adm.volume_qos_v2_client
+
+    @classmethod
+    def resource_cleanup(cls):
+        cls.clear_qos_specs()
+        super(BaseVolumeAdminTest, cls).resource_cleanup()
+
+    @classmethod
+    def create_test_qos_specs(cls, name=None, consumer=None, **kwargs):
+        """create a test Qos-Specs."""
+        name = name or data_utils.rand_name(cls.__name__ + '-QoS')
+        consumer = consumer or 'front-end'
+        _, qos_specs = cls.volume_qos_client.create_qos(name, consumer,
+                                                        **kwargs)
+        cls.qos_specs.append(qos_specs['id'])
+        return qos_specs
+
+    @classmethod
+    def clear_qos_specs(cls):
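+        # Request deletion of every qos-specs created during the tests.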
+        for qos_id in cls.qos_specs:
+            try:
+                cls.volume_qos_client.delete_qos(qos_id)
+            except exceptions.NotFound:
+                # The qos_specs may have already been deleted which is OK.
+                pass
+
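+        # Then wait for each deletion to actually complete.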
+        for qos_id in cls.qos_specs:
+            try:
+                cls.volume_qos_client.wait_for_resource_deletion(qos_id)
+            except exceptions.NotFound:
+                # The qos_specs may have already been deleted which is OK.
+                pass
+
+
+class BaseVolumeV1AdminTest(BaseVolumeAdminTest):
+    _api_version = 1
diff --git a/tempest/api/volume/test_availability_zone.py b/tempest/api/volume/test_availability_zone.py
index 25b7b85..648bd8b 100644
--- a/tempest/api/volume/test_availability_zone.py
+++ b/tempest/api/volume/test_availability_zone.py
@@ -24,15 +24,14 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(AvailabilityZoneV2TestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(AvailabilityZoneV2TestJSON, cls).resource_setup()
         cls.client = cls.availability_zone_client
 
     @test.attr(type='gate')
     def test_get_availability_zone_list(self):
         # List of availability zone
-        resp, availability_zone = self.client.get_availability_zone_list()
-        self.assertEqual(200, resp.status)
+        _, availability_zone = self.client.get_availability_zone_list()
         self.assertTrue(len(availability_zone) > 0)
 
 
diff --git a/tempest/api/volume/test_extensions.py b/tempest/api/volume/test_extensions.py
index ff00dd1..4fc6ee4 100644
--- a/tempest/api/volume/test_extensions.py
+++ b/tempest/api/volume/test_extensions.py
@@ -30,8 +30,7 @@
     @test.attr(type='gate')
     def test_list_extensions(self):
         # List of all extensions
-        resp, extensions = self.volumes_extension_client.list_extensions()
-        self.assertEqual(200, resp.status)
+        _, extensions = self.volumes_extension_client.list_extensions()
         if len(CONF.volume_feature_enabled.api_extensions) == 0:
             raise self.skipException('There are not any extensions configured')
         extension_list = [extension.get('alias') for extension in extensions]
diff --git a/tempest/api/volume/test_qos.py b/tempest/api/volume/test_qos.py
new file mode 100644
index 0000000..a719b79
--- /dev/null
+++ b/tempest/api/volume/test_qos.py
@@ -0,0 +1,175 @@
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.volume import base
+from tempest.common.utils import data_utils as utils
+from tempest import test
+
+
+class QosSpecsV2TestJSON(base.BaseVolumeAdminTest):
+    """Test the Cinder QoS-specs.
+
+    Tests for create, list, delete, show, associate,
+    disassociate, set/unset key V2 APIs.
+    """
+
+    @classmethod
+    def resource_setup(cls):
+        super(QosSpecsV2TestJSON, cls).resource_setup()
+        # The admin QoS client is set up in the base class; create a shared
+        # test qos-specs for use by the tests below.
+        cls.qos_name = utils.rand_name(cls.__name__ + '-QoS')
+        cls.qos_consumer = 'front-end'
+
+        cls.created_qos = cls.create_test_qos_specs(cls.qos_name,
+                                                    cls.qos_consumer,
+                                                    read_iops_sec='2000')
+
+    def _create_delete_test_qos_with_given_consumer(self, consumer):
+        name = utils.rand_name('qos')
+        qos = {'name': name, 'consumer': consumer}
+        body = self.create_test_qos_specs(name, consumer)
+        for key in ['name', 'consumer']:
+            self.assertEqual(qos[key], body[key])
+
+        self.volume_qos_client.delete_qos(body['id'])
+        self.volume_qos_client.wait_for_resource_deletion(body['id'])
+
+        # validate the deletion
+        _, list_qos = self.volume_qos_client.list_qos()
+        self.assertNotIn(body, list_qos)
+
+    def _create_test_volume_type(self):
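+        # Create a volume-type and schedule its deletion via addCleanup.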
+        vol_type_name = utils.rand_name("volume-type")
+        _, vol_type = self.volume_types_client.create_volume_type(
+            vol_type_name)
+        self.addCleanup(self.volume_types_client.delete_volume_type,
+                        vol_type['id'])
+        return vol_type
+
+    def _test_associate_qos(self, vol_type_id):
+        self.volume_qos_client.associate_qos(
+            self.created_qos['id'], vol_type_id)
+
+    def _test_get_association_qos(self):
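+        # Return the ids of the volume-types currently associated with
+        # the shared qos-specs.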
+        _, body = self.volume_qos_client.get_association_qos(
+            self.created_qos['id'])
+
+        associations = []
+        for association in body:
+            associations.append(association['id'])
+
+        return associations
+
+    def test_create_delete_qos_with_front_end_consumer(self):
+        """Tests the creation and deletion of QoS specs
+
+        The consumer is set to 'front-end'.
+        """
+        self._create_delete_test_qos_with_given_consumer('front-end')
+
+    def test_create_delete_qos_with_back_end_consumer(self):
+        """Tests the creation and deletion of QoS specs
+
+        The consumer is set to 'back-end'.
+        """
+        self._create_delete_test_qos_with_given_consumer('back-end')
+
+    @test.attr(type='smoke')
+    def test_create_delete_qos_with_both_consumer(self):
+        """Tests the creation and deletion of QoS specs
+
+        The consumer is set to 'both' (front-end and back-end).
+        """
+        self._create_delete_test_qos_with_given_consumer('both')
+
+    @test.attr(type='smoke')
+    def test_get_qos(self):
+        """Tests the detail of a given qos-specs"""
+        _, body = self.volume_qos_client.get_qos(self.created_qos['id'])
+        self.assertEqual(self.qos_name, body['name'])
+        self.assertEqual(self.qos_consumer, body['consumer'])
+
+    @test.attr(type='smoke')
+    def test_list_qos(self):
+        """Tests the list of all qos-specs"""
+        _, body = self.volume_qos_client.list_qos()
+        self.assertIn(self.created_qos, body)
+
+    @test.attr(type='smoke')
+    def test_set_unset_qos_key(self):
+        """Test the addition of a specs key to qos-specs"""
+        args = {'iops_bytes': '500'}
+        _, body = self.volume_qos_client.set_qos_key(self.created_qos['id'],
+                                                     iops_bytes='500')
+        self.assertEqual(args, body)
+        _, body = self.volume_qos_client.get_qos(self.created_qos['id'])
+        self.assertEqual(args['iops_bytes'], body['specs']['iops_bytes'])
+
+        # test the deletion of a specs key from qos-specs
+        keys = ['iops_bytes']
+        self.volume_qos_client.unset_qos_key(self.created_qos['id'], keys)
+        operation = 'qos-key-unset'
+        self.volume_qos_client.wait_for_qos_operations(self.created_qos['id'],
+                                                       operation, keys)
+        _, body = self.volume_qos_client.get_qos(self.created_qos['id'])
+        self.assertNotIn(keys[0], body['specs'])
+
+    @test.attr(type='smoke')
+    def test_associate_disassociate_qos(self):
+        """Test the following operations :
+
+        1. associate_qos
+        2. get_association_qos
+        3. disassociate_qos
+        4. disassociate_all_qos
+        """
+
+        # create three test volume-types
+        vol_type = []
+        for _ in range(0, 3):
+            vol_type.append(self._create_test_volume_type())
+
+        # associate the qos-specs with volume-types
+        for i in range(0, 3):
+            self._test_associate_qos(vol_type[i]['id'])
+
+        # get the association of the qos-specs
+        associations = self._test_get_association_qos()
+
+        for i in range(0, 3):
+            self.assertIn(vol_type[i]['id'], associations)
+
+        # disassociate one volume-type from the qos-specs
+        self.volume_qos_client.disassociate_qos(
+            self.created_qos['id'], vol_type[0]['id'])
+        operation = 'disassociate'
+        self.volume_qos_client.wait_for_qos_operations(self.created_qos['id'],
+                                                       operation,
+                                                       vol_type[0]['id'])
+        associations = self._test_get_association_qos()
+        self.assertNotIn(vol_type[0]['id'], associations)
+
+        # disassociate all volume-types from qos-specs
+        self.volume_qos_client.disassociate_all_qos(
+            self.created_qos['id'])
+        operation = 'disassociate-all'
+        self.volume_qos_client.wait_for_qos_operations(self.created_qos['id'],
+                                                       operation)
+        associations = self._test_get_association_qos()
+        self.assertEmpty(associations)
+
+
+class QosSpecsV1TestJSON(QosSpecsV2TestJSON):
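+    # Run the same QoS-specs tests against the volume v1 API.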
+    _api_version = 1
diff --git a/tempest/api/volume/test_snapshot_metadata.py b/tempest/api/volume/test_snapshot_metadata.py
index d2c4ab7..777d3de 100644
--- a/tempest/api/volume/test_snapshot_metadata.py
+++ b/tempest/api/volume/test_snapshot_metadata.py
@@ -17,13 +17,11 @@
 from tempest import test
 
 
-class SnapshotMetadataTest(base.BaseVolumeV1Test):
-    _interface = "json"
+class SnapshotV2MetadataTestJSON(base.BaseVolumeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(SnapshotMetadataTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(SnapshotV2MetadataTestJSON, cls).resource_setup()
         cls.client = cls.snapshots_client
         # Create a volume
         cls.volume = cls.create_volume()
@@ -34,7 +32,7 @@
     def tearDown(self):
         # Update the metadata to {}
         self.client.update_snapshot_metadata(self.snapshot_id, {})
-        super(SnapshotMetadataTest, self).tearDown()
+        super(SnapshotV2MetadataTestJSON, self).tearDown()
 
     @test.attr(type='gate')
     def test_create_get_delete_snapshot_metadata(self):
@@ -44,19 +42,15 @@
                     "key3": "value3"}
         expected = {"key2": "value2",
                     "key3": "value3"}
-        resp, body = self.client.create_snapshot_metadata(self.snapshot_id,
-                                                          metadata)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.create_snapshot_metadata(self.snapshot_id,
+                                                       metadata)
         # Get the metadata of the snapshot
-        resp, body = self.client.get_snapshot_metadata(self.snapshot_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_snapshot_metadata(self.snapshot_id)
         self.assertEqual(metadata, body)
         # Delete one item metadata of the snapshot
-        resp, body = self.client.delete_snapshot_metadata_item(
-            self.snapshot_id,
-            "key1")
-        self.assertEqual(200, resp.status)
-        resp, body = self.client.get_snapshot_metadata(self.snapshot_id)
+        self.client.delete_snapshot_metadata_item(
+            self.snapshot_id, "key1")
+        _, body = self.client.get_snapshot_metadata(self.snapshot_id)
         self.assertEqual(expected, body)
 
     @test.attr(type='gate')
@@ -68,21 +62,16 @@
         update = {"key3": "value3_update",
                   "key4": "value4"}
         # Create metadata for the snapshot
-        resp, body = self.client.create_snapshot_metadata(self.snapshot_id,
-                                                          metadata)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.create_snapshot_metadata(self.snapshot_id,
+                                                       metadata)
         # Get the metadata of the snapshot
-        resp, body = self.client.get_snapshot_metadata(self.snapshot_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_snapshot_metadata(self.snapshot_id)
         self.assertEqual(metadata, body)
         # Update metadata item
-        resp, body = self.client.update_snapshot_metadata(
-            self.snapshot_id,
-            update)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.update_snapshot_metadata(
+            self.snapshot_id, update)
         # Get the metadata of the snapshot
-        resp, body = self.client.get_snapshot_metadata(self.snapshot_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_snapshot_metadata(self.snapshot_id)
         self.assertEqual(update, body)
 
     @test.attr(type='gate')
@@ -96,23 +85,26 @@
                   "key2": "value2",
                   "key3": "value3_update"}
         # Create metadata for the snapshot
-        resp, body = self.client.create_snapshot_metadata(self.snapshot_id,
-                                                          metadata)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.create_snapshot_metadata(self.snapshot_id,
+                                                       metadata)
         # Get the metadata of the snapshot
-        resp, body = self.client.get_snapshot_metadata(self.snapshot_id)
+        _, body = self.client.get_snapshot_metadata(self.snapshot_id)
         self.assertEqual(metadata, body)
         # Update metadata item
-        resp, body = self.client.update_snapshot_metadata_item(
-            self.snapshot_id,
-            "key3",
-            update_item)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.update_snapshot_metadata_item(
+            self.snapshot_id, "key3", update_item)
         # Get the metadata of the snapshot
-        resp, body = self.client.get_snapshot_metadata(self.snapshot_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_snapshot_metadata(self.snapshot_id)
         self.assertEqual(expect, body)
 
 
-class SnapshotMetadataTestXML(SnapshotMetadataTest):
+class SnapshotV2MetadataTestXML(SnapshotV2MetadataTestJSON):
+    _interface = "xml"
+
+
+class SnapshotV1MetadataTestJSON(SnapshotV2MetadataTestJSON):
+    _api_version = 1
+
+
+class SnapshotV1MetadataTestXML(SnapshotV1MetadataTestJSON):
     _interface = "xml"
diff --git a/tempest/api/volume/test_volume_metadata.py b/tempest/api/volume/test_volume_metadata.py
index 0505f19..2ec8667 100644
--- a/tempest/api/volume/test_volume_metadata.py
+++ b/tempest/api/volume/test_volume_metadata.py
@@ -22,9 +22,8 @@
 class VolumesV2MetadataTest(base.BaseVolumeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesV2MetadataTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2MetadataTest, cls).resource_setup()
         # Create a volume
         cls.volume = cls.create_volume()
         cls.volume_id = cls.volume['id']
@@ -42,19 +41,15 @@
                     "key3": "value3",
                     "key4": "<value&special_chars>"}
 
-        rsp, body = self.volumes_client.create_volume_metadata(self.volume_id,
-                                                               metadata)
-        self.assertEqual(200, rsp.status)
+        _, body = self.volumes_client.create_volume_metadata(self.volume_id,
+                                                             metadata)
         # Get the metadata of the volume
-        resp, body = self.volumes_client.get_volume_metadata(self.volume_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.get_volume_metadata(self.volume_id)
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
         # Delete one item metadata of the volume
-        rsp, body = self.volumes_client.delete_volume_metadata_item(
-            self.volume_id,
-            "key1")
-        self.assertEqual(200, rsp.status)
-        resp, body = self.volumes_client.get_volume_metadata(self.volume_id)
+        self.volumes_client.delete_volume_metadata_item(
+            self.volume_id, "key1")
+        _, body = self.volumes_client.get_volume_metadata(self.volume_id)
         self.assertNotIn("key1", body)
         del metadata["key1"]
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
@@ -70,22 +65,16 @@
                   "key1": "value1_update"}
 
         # Create metadata for the volume
-        resp, body = self.volumes_client.create_volume_metadata(
-            self.volume_id,
-            metadata)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.create_volume_metadata(
+            self.volume_id, metadata)
         # Get the metadata of the volume
-        resp, body = self.volumes_client.get_volume_metadata(self.volume_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.get_volume_metadata(self.volume_id)
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
         # Update metadata
-        resp, body = self.volumes_client.update_volume_metadata(
-            self.volume_id,
-            update)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.update_volume_metadata(
+            self.volume_id, update)
         # Get the metadata of the volume
-        resp, body = self.volumes_client.get_volume_metadata(self.volume_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.get_volume_metadata(self.volume_id)
         self.assertThat(body.items(), matchers.ContainsAll(update.items()))
 
     @test.attr(type='gate')
@@ -99,20 +88,14 @@
                   "key2": "value2",
                   "key3": "value3_update"}
         # Create metadata for the volume
-        resp, body = self.volumes_client.create_volume_metadata(
-            self.volume_id,
-            metadata)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.create_volume_metadata(
+            self.volume_id, metadata)
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
         # Update metadata item
-        resp, body = self.volumes_client.update_volume_metadata_item(
-            self.volume_id,
-            "key3",
-            update_item)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.update_volume_metadata_item(
+            self.volume_id, "key3", update_item)
         # Get the metadata of the volume
-        resp, body = self.volumes_client.get_volume_metadata(self.volume_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.volumes_client.get_volume_metadata(self.volume_id)
         self.assertThat(body.items(), matchers.ContainsAll(expect.items()))
 
 
diff --git a/tempest/api/volume/test_volume_transfers.py b/tempest/api/volume/test_volume_transfers.py
index bf61222..90ac9c1 100644
--- a/tempest/api/volume/test_volume_transfers.py
+++ b/tempest/api/volume/test_volume_transfers.py
@@ -26,8 +26,8 @@
 class VolumesV2TransfersTest(base.BaseVolumeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesV2TransfersTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2TransfersTest, cls).resource_setup()
 
         # Add another tenant to test volume-transfer
         if CONF.compute.allow_tenant_isolation:
@@ -47,8 +47,7 @@
 
     def _delete_volume(self, volume_id):
         # Delete the specified volume using admin creds
-        resp, _ = self.adm_client.delete_volume(volume_id)
-        self.assertEqual(202, resp.status)
+        self.adm_client.delete_volume(volume_id)
         self.adm_client.wait_for_resource_deletion(volume_id)
 
     @test.attr(type='gate')
@@ -58,28 +57,24 @@
         self.addCleanup(self._delete_volume, volume['id'])
 
         # Create a volume transfer
-        resp, transfer = self.client.create_volume_transfer(volume['id'])
-        self.assertEqual(202, resp.status)
+        _, transfer = self.client.create_volume_transfer(volume['id'])
         transfer_id = transfer['id']
         auth_key = transfer['auth_key']
         self.client.wait_for_volume_status(volume['id'],
                                            'awaiting-transfer')
 
         # Get a volume transfer
-        resp, body = self.client.get_volume_transfer(transfer_id)
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_volume_transfer(transfer_id)
         self.assertEqual(volume['id'], body['volume_id'])
 
         # List volume transfers, the result should be greater than
         # or equal to 1
-        resp, body = self.client.list_volume_transfers()
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_volume_transfers()
         self.assertThat(len(body), matchers.GreaterThan(0))
 
         # Accept a volume transfer by alt_tenant
-        resp, body = self.alt_client.accept_volume_transfer(transfer_id,
-                                                            auth_key)
-        self.assertEqual(202, resp.status)
+        _, body = self.alt_client.accept_volume_transfer(transfer_id,
+                                                         auth_key)
         self.alt_client.wait_for_volume_status(volume['id'], 'available')
 
     def test_create_list_delete_volume_transfer(self):
@@ -88,15 +83,13 @@
         self.addCleanup(self._delete_volume, volume['id'])
 
         # Create a volume transfer
-        resp, body = self.client.create_volume_transfer(volume['id'])
-        self.assertEqual(202, resp.status)
+        _, body = self.client.create_volume_transfer(volume['id'])
         transfer_id = body['id']
         self.client.wait_for_volume_status(volume['id'],
                                            'awaiting-transfer')
 
         # List all volume transfers (looking for the one we created)
-        resp, body = self.client.list_volume_transfers()
-        self.assertEqual(200, resp.status)
+        _, body = self.client.list_volume_transfers()
         for transfer in body:
             if volume['id'] == transfer['volume_id']:
                 break
@@ -104,8 +97,7 @@
             self.fail('Transfer not found for volume %s' % volume['id'])
 
         # Delete a volume transfer
-        resp, body = self.client.delete_volume_transfer(transfer_id)
-        self.assertEqual(202, resp.status)
+        self.client.delete_volume_transfer(transfer_id)
         self.client.wait_for_volume_status(volume['id'], 'available')
 
 
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 6fef564..a9bc70a 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -24,9 +24,8 @@
 class VolumesV2ActionsTest(base.BaseVolumeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesV2ActionsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2ActionsTest, cls).resource_setup()
         cls.client = cls.volumes_client
         cls.image_client = cls.os.image_client
 
@@ -40,13 +39,17 @@
         # Create a test shared volume for attach/detach tests
         cls.volume = cls.create_volume()
 
+    def _delete_image_with_wait(self, image_id):
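+        # Delete the image and wait until Glance reports it removed.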
+        self.image_client.delete_image(image_id)
+        self.image_client.wait_for_resource_deletion(image_id)
+
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Delete the test instance
         cls.servers_client.delete_server(cls.server['id'])
         cls.servers_client.wait_for_server_termination(cls.server['id'])
 
-        super(VolumesV2ActionsTest, cls).tearDownClass()
+        super(VolumesV2ActionsTest, cls).resource_cleanup()
 
     @test.stresstest(class_setup_per='process')
     @test.attr(type='smoke')
@@ -54,13 +57,11 @@
     def test_attach_detach_volume_to_instance(self):
         # Volume is attached and detached successfully from an instance
         mountpoint = '/dev/vdc'
-        resp, body = self.client.attach_volume(self.volume['id'],
-                                               self.server['id'],
-                                               mountpoint)
-        self.assertEqual(202, resp.status)
+        _, body = self.client.attach_volume(self.volume['id'],
+                                            self.server['id'],
+                                            mountpoint)
         self.client.wait_for_volume_status(self.volume['id'], 'in-use')
-        resp, body = self.client.detach_volume(self.volume['id'])
-        self.assertEqual(202, resp.status)
+        _, body = self.client.detach_volume(self.volume['id'])
         self.client.wait_for_volume_status(self.volume['id'], 'available')
 
     @test.stresstest(class_setup_per='process')
@@ -69,10 +70,9 @@
     def test_get_volume_attachment(self):
         # Verify that a volume's attachment information is retrieved
         mountpoint = '/dev/vdc'
-        resp, body = self.client.attach_volume(self.volume['id'],
-                                               self.server['id'],
-                                               mountpoint)
-        self.assertEqual(202, resp.status)
+        _, body = self.client.attach_volume(self.volume['id'],
+                                            self.server['id'],
+                                            mountpoint)
         self.client.wait_for_volume_status(self.volume['id'], 'in-use')
         # NOTE(gfidente): added in reverse order because functions will be
         # called in reverse order to the order they are added (LIFO)
@@ -80,8 +80,7 @@
                         self.volume['id'],
                         'available')
         self.addCleanup(self.client.detach_volume, self.volume['id'])
-        resp, volume = self.client.get_volume(self.volume['id'])
-        self.assertEqual(200, resp.status)
+        _, volume = self.client.get_volume(self.volume['id'])
         self.assertIn('attachments', volume)
         attachment = self.client.get_attachment_from_volume(volume)
         self.assertEqual(mountpoint, attachment['device'])
@@ -97,41 +96,25 @@
         # there is no way to delete it from Cinder, so we delete it from Glance
         # using the Glance image_client and from Cinder via tearDownClass.
         image_name = data_utils.rand_name('Image-')
-        resp, body = self.client.upload_volume(self.volume['id'],
-                                               image_name,
-                                               CONF.volume.disk_format)
+        _, body = self.client.upload_volume(self.volume['id'],
+                                            image_name,
+                                            CONF.volume.disk_format)
         image_id = body["image_id"]
         self.addCleanup(self.image_client.delete_image, image_id)
-        self.assertEqual(202, resp.status)
         self.image_client.wait_for_image_status(image_id, 'active')
         self.client.wait_for_volume_status(self.volume['id'], 'available')
 
     @test.attr(type='gate')
-    def test_volume_extend(self):
-        # Extend Volume Test.
-        extend_size = int(self.volume['size']) + 1
-        resp, body = self.client.extend_volume(self.volume['id'], extend_size)
-        self.assertEqual(202, resp.status)
-        self.client.wait_for_volume_status(self.volume['id'], 'available')
-        resp, volume = self.client.get_volume(self.volume['id'])
-        self.assertEqual(200, resp.status)
-        self.assertEqual(int(volume['size']), extend_size)
-
-    @test.attr(type='gate')
     def test_reserve_unreserve_volume(self):
         # Mark volume as reserved.
-        resp, body = self.client.reserve_volume(self.volume['id'])
-        self.assertEqual(202, resp.status)
+        _, body = self.client.reserve_volume(self.volume['id'])
         # To get the volume info
-        resp, body = self.client.get_volume(self.volume['id'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_volume(self.volume['id'])
         self.assertIn('attaching', body['status'])
         # Unmark volume as reserved.
-        resp, body = self.client.unreserve_volume(self.volume['id'])
-        self.assertEqual(202, resp.status)
+        _, body = self.client.unreserve_volume(self.volume['id'])
         # To get the volume info
-        resp, body = self.client.get_volume(self.volume['id'])
-        self.assertEqual(200, resp.status)
+        _, body = self.client.get_volume(self.volume['id'])
         self.assertIn('available', body['status'])
 
     def _is_true(self, val):
@@ -141,26 +124,21 @@
     def test_volume_readonly_update(self):
         # Update volume readonly true
         readonly = True
-        resp, body = self.client.update_volume_readonly(self.volume['id'],
-                                                        readonly)
-        self.assertEqual(202, resp.status)
-
+        _, body = self.client.update_volume_readonly(self.volume['id'],
+                                                     readonly)
         # Get Volume information
-        resp, fetched_volume = self.client.get_volume(self.volume['id'])
+        _, fetched_volume = self.client.get_volume(self.volume['id'])
         bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
-        self.assertEqual(200, resp.status)
         self.assertEqual(True, bool_flag)
 
         # Update volume readonly false
         readonly = False
-        resp, body = self.client.update_volume_readonly(self.volume['id'],
-                                                        readonly)
-        self.assertEqual(202, resp.status)
+        _, body = self.client.update_volume_readonly(self.volume['id'],
+                                                     readonly)
 
         # Get Volume information
-        resp, fetched_volume = self.client.get_volume(self.volume['id'])
+        _, fetched_volume = self.client.get_volume(self.volume['id'])
         bool_flag = self._is_true(fetched_volume['metadata']['readonly'])
-        self.assertEqual(200, resp.status)
         self.assertEqual(False, bool_flag)
 
 
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
new file mode 100644
index 0000000..edd497c
--- /dev/null
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -0,0 +1,50 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.volume import base
+from tempest import config
+from tempest import test
+
+CONF = config.CONF
+
+
+class VolumesV2ExtendTest(base.BaseVolumeTest):
+
+    @classmethod
+    def resource_setup(cls):
+        super(VolumesV2ExtendTest, cls).resource_setup()
+        cls.client = cls.volumes_client
+
+    @test.attr(type='gate')
+    def test_volume_extend(self):
+        # Extend Volume Test.
+        self.volume = self.create_volume()
+        extend_size = int(self.volume['size']) + 1
+        _, body = self.client.extend_volume(self.volume['id'], extend_size)
+        self.client.wait_for_volume_status(self.volume['id'], 'available')
+        _, volume = self.client.get_volume(self.volume['id'])
+        self.assertEqual(int(volume['size']), extend_size)
+
+
+class VolumesV2ExtendTestXML(VolumesV2ExtendTest):
+    _interface = "xml"
+
+
+class VolumesV1ExtendTest(VolumesV2ExtendTest):
+    _api_version = 1
+
+
+class VolumesV1ExtendTestXML(VolumesV1ExtendTest):
+    _interface = "xml"
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
index 82208aa..033beb4 100644
--- a/tempest/api/volume/test_volumes_get.py
+++ b/tempest/api/volume/test_volumes_get.py
@@ -26,17 +26,15 @@
 class VolumesV2GetTest(base.BaseVolumeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesV2GetTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2GetTest, cls).resource_setup()
         cls.client = cls.volumes_client
 
         cls.name_field = cls.special_fields['name_field']
         cls.descrip_field = cls.special_fields['descrip_field']
-        cls.create_resp = cls.special_fields['create_resp']
 
     def _delete_volume(self, volume_id):
-        resp, _ = self.client.delete_volume(volume_id)
-        self.assertEqual(202, resp.status)
+        self.client.delete_volume(volume_id)
         self.client.wait_for_resource_deletion(volume_id)
 
     def _is_true(self, val):
@@ -56,8 +54,7 @@
         # Create a volume
         kwargs[self.name_field] = v_name
         kwargs['metadata'] = metadata
-        resp, volume = self.client.create_volume(**kwargs)
-        self.assertEqual(self.create_resp, resp.status)
+        _, volume = self.client.create_volume(**kwargs)
         self.assertIn('id', volume)
         self.addCleanup(self._delete_volume, volume['id'])
         self.client.wait_for_volume_status(volume['id'], 'available')
@@ -68,8 +65,7 @@
         self.assertTrue(volume['id'] is not None,
                         "Field volume id is empty or not found.")
         # Get Volume information
-        resp, fetched_volume = self.client.get_volume(volume['id'])
-        self.assertEqual(200, resp.status)
+        _, fetched_volume = self.client.get_volume(volume['id'])
         self.assertEqual(v_name,
                          fetched_volume[self.name_field],
                          'The fetched Volume name is different '
@@ -94,21 +90,18 @@
         # Update Volume
         # Test volume update when display_name is same with original value
         params = {self.name_field: v_name}
-        resp, update_volume = self.client.update_volume(volume['id'], **params)
-        self.assertEqual(200, resp.status)
+        _, update_volume = self.client.update_volume(volume['id'], **params)
         # Test volume update when display_name is new
         new_v_name = data_utils.rand_name('new-Volume')
         new_desc = 'This is the new description of volume'
         params = {self.name_field: new_v_name,
                   self.descrip_field: new_desc}
-        resp, update_volume = self.client.update_volume(volume['id'], **params)
+        _, update_volume = self.client.update_volume(volume['id'], **params)
         # Assert response body for update_volume method
-        self.assertEqual(200, resp.status)
         self.assertEqual(new_v_name, update_volume[self.name_field])
         self.assertEqual(new_desc, update_volume[self.descrip_field])
         # Assert response body for get_volume method
-        resp, updated_volume = self.client.get_volume(volume['id'])
-        self.assertEqual(200, resp.status)
+        _, updated_volume = self.client.get_volume(volume['id'])
         self.assertEqual(volume['id'], updated_volume['id'])
         self.assertEqual(new_v_name, updated_volume[self.name_field])
         self.assertEqual(new_desc, updated_volume[self.descrip_field])
@@ -123,17 +116,15 @@
         new_v_desc = data_utils.rand_name('@#$%^* description')
         params = {self.descrip_field: new_v_desc,
                   'availability_zone': volume['availability_zone']}
-        resp, new_volume = self.client.create_volume(size=1, **params)
-        self.assertEqual(self.create_resp, resp.status)
+        _, new_volume = self.client.create_volume(size=1, **params)
         self.assertIn('id', new_volume)
         self.addCleanup(self._delete_volume, new_volume['id'])
         self.client.wait_for_volume_status(new_volume['id'], 'available')
 
         params = {self.name_field: volume[self.name_field],
                   self.descrip_field: volume[self.descrip_field]}
-        resp, update_volume = self.client.update_volume(new_volume['id'],
-                                                        **params)
-        self.assertEqual(200, resp.status)
+        _, update_volume = self.client.update_volume(new_volume['id'],
+                                                     **params)
 
         # NOTE(jdg): Revert back to strict true/false checking
         # after fix for bug #1227837 merges
diff --git a/tempest/api/volume/test_volumes_list.py b/tempest/api/volume/test_volumes_list.py
index b8a2faa..016e9ab 100644
--- a/tempest/api/volume/test_volumes_list.py
+++ b/tempest/api/volume/test_volumes_list.py
@@ -15,11 +15,12 @@
 #    under the License.
 import operator
 
+from testtools import matchers
+
 from tempest.api.volume import base
 from tempest.common.utils import data_utils
 from tempest.openstack.common import log as logging
 from tempest import test
-from testtools import matchers
 
 LOG = logging.getLogger(__name__)
 
@@ -54,9 +55,8 @@
                              [str_vol(v) for v in fetched_list]))
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesV2ListTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2ListTestJSON, cls).resource_setup()
         cls.client = cls.volumes_client
         cls.name = cls.VOLUME_FIELDS[1]
 
@@ -66,17 +66,17 @@
         cls.metadata = {'Type': 'work'}
         for i in range(3):
             volume = cls.create_volume(metadata=cls.metadata)
-            resp, volume = cls.client.get_volume(volume['id'])
+            _, volume = cls.client.get_volume(volume['id'])
             cls.volume_list.append(volume)
             cls.volume_id_list.append(volume['id'])
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Delete the created volumes
         for volid in cls.volume_id_list:
-            resp, _ = cls.client.delete_volume(volid)
+            cls.client.delete_volume(volid)
             cls.client.wait_for_resource_deletion(volid)
-        super(VolumesV2ListTestJSON, cls).tearDownClass()
+        super(VolumesV2ListTestJSON, cls).resource_cleanup()
 
     def _list_by_param_value_and_assert(self, params, with_detail=False):
         """
@@ -84,12 +84,11 @@
         and validates result.
         """
         if with_detail:
-            resp, fetched_vol_list = \
+            _, fetched_vol_list = \
                 self.client.list_volumes_with_detail(params=params)
         else:
-            resp, fetched_vol_list = self.client.list_volumes(params=params)
+            _, fetched_vol_list = self.client.list_volumes(params=params)
 
-        self.assertEqual(200, resp.status)
         # Validating params of fetched volumes
         # In v2, only list detail view includes items in params.
         # In v1, list view and list detail view are same. So the
@@ -112,8 +111,7 @@
     def test_volume_list(self):
         # Get a list of Volumes
         # Fetch all volumes
-        resp, fetched_list = self.client.list_volumes()
-        self.assertEqual(200, resp.status)
+        _, fetched_list = self.client.list_volumes()
         self.assertVolumesIn(fetched_list, self.volume_list,
                              fields=self.VOLUME_FIELDS)
 
@@ -121,16 +119,14 @@
     def test_volume_list_with_details(self):
         # Get a list of Volumes with details
         # Fetch all Volumes
-        resp, fetched_list = self.client.list_volumes_with_detail()
-        self.assertEqual(200, resp.status)
+        _, fetched_list = self.client.list_volumes_with_detail()
         self.assertVolumesIn(fetched_list, self.volume_list)
 
     @test.attr(type='gate')
     def test_volume_list_by_name(self):
         volume = self.volume_list[data_utils.rand_int_id(0, 2)]
         params = {self.name: volume[self.name]}
-        resp, fetched_vol = self.client.list_volumes(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_vol = self.client.list_volumes(params)
         self.assertEqual(1, len(fetched_vol), str(fetched_vol))
         self.assertEqual(fetched_vol[0][self.name],
                          volume[self.name])
@@ -139,8 +135,7 @@
     def test_volume_list_details_by_name(self):
         volume = self.volume_list[data_utils.rand_int_id(0, 2)]
         params = {self.name: volume[self.name]}
-        resp, fetched_vol = self.client.list_volumes_with_detail(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_vol = self.client.list_volumes_with_detail(params)
         self.assertEqual(1, len(fetched_vol), str(fetched_vol))
         self.assertEqual(fetched_vol[0][self.name],
                          volume[self.name])
@@ -148,8 +143,7 @@
     @test.attr(type='gate')
     def test_volumes_list_by_status(self):
         params = {'status': 'available'}
-        resp, fetched_list = self.client.list_volumes(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_list = self.client.list_volumes(params)
         self._list_by_param_value_and_assert(params)
         self.assertVolumesIn(fetched_list, self.volume_list,
                              fields=self.VOLUME_FIELDS)
@@ -157,8 +151,7 @@
     @test.attr(type='gate')
     def test_volumes_list_details_by_status(self):
         params = {'status': 'available'}
-        resp, fetched_list = self.client.list_volumes_with_detail(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_list = self.client.list_volumes_with_detail(params)
         for volume in fetched_list:
             self.assertEqual('available', volume['status'])
         self.assertVolumesIn(fetched_list, self.volume_list)
@@ -168,8 +161,7 @@
         volume = self.volume_list[data_utils.rand_int_id(0, 2)]
         zone = volume['availability_zone']
         params = {'availability_zone': zone}
-        resp, fetched_list = self.client.list_volumes(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_list = self.client.list_volumes(params)
         self._list_by_param_value_and_assert(params)
         self.assertVolumesIn(fetched_list, self.volume_list,
                              fields=self.VOLUME_FIELDS)
@@ -179,8 +171,7 @@
         volume = self.volume_list[data_utils.rand_int_id(0, 2)]
         zone = volume['availability_zone']
         params = {'availability_zone': zone}
-        resp, fetched_list = self.client.list_volumes_with_detail(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_list = self.client.list_volumes_with_detail(params)
         for volume in fetched_list:
             self.assertEqual(zone, volume['availability_zone'])
         self.assertVolumesIn(fetched_list, self.volume_list)
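
Throughout this change the tests stop asserting response status codes themselves: the response object is discarded with "_" and the assertEqual(200, resp.status) lines are removed, on the expectation that the service clients verify the status internally. A minimal sketch of a client method that makes the per-test assertion redundant (the class name and transport callable are illustrative, not Tempest's real client)::

    # Illustrative only: a client that checks the HTTP status itself, so
    # callers may safely ignore the first element of the returned tuple.
    class VolumesClientSketch(object):

        def __init__(self, transport):
            # transport is any callable returning (status_code, parsed_body)
            self._transport = transport

        def list_volumes(self, params=None):
            status, body = self._transport('GET', 'volumes', params=params)
            if status != 200:
                raise AssertionError('volume list returned %s' % status)
            return status, body['volumes']
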
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index 8bd4c88..2b43c63 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -24,9 +24,8 @@
 class VolumesV2NegativeTest(base.BaseVolumeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesV2NegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2NegativeTest, cls).resource_setup()
         cls.client = cls.volumes_client
 
         cls.name_field = cls.special_fields['name_field']
@@ -225,44 +224,38 @@
     @test.attr(type=['negative', 'gate'])
     def test_reserve_volume_with_negative_volume_status(self):
         # Mark volume as reserved.
-        resp, body = self.client.reserve_volume(self.volume['id'])
-        self.assertEqual(202, resp.status)
+        _, body = self.client.reserve_volume(self.volume['id'])
         # Mark volume which is marked as reserved before
         self.assertRaises(exceptions.BadRequest,
                           self.client.reserve_volume,
                           self.volume['id'])
         # Unmark volume as reserved.
-        resp, body = self.client.unreserve_volume(self.volume['id'])
-        self.assertEqual(202, resp.status)
+        _, body = self.client.unreserve_volume(self.volume['id'])
 
     @test.attr(type=['negative', 'gate'])
     def test_list_volumes_with_nonexistent_name(self):
         v_name = data_utils.rand_name('Volume-')
         params = {self.name_field: v_name}
-        resp, fetched_volume = self.client.list_volumes(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_volume = self.client.list_volumes(params)
         self.assertEqual(0, len(fetched_volume))
 
     @test.attr(type=['negative', 'gate'])
     def test_list_volumes_detail_with_nonexistent_name(self):
         v_name = data_utils.rand_name('Volume-')
         params = {self.name_field: v_name}
-        resp, fetched_volume = self.client.list_volumes_with_detail(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_volume = self.client.list_volumes_with_detail(params)
         self.assertEqual(0, len(fetched_volume))
 
     @test.attr(type=['negative', 'gate'])
     def test_list_volumes_with_invalid_status(self):
         params = {'status': 'null'}
-        resp, fetched_volume = self.client.list_volumes(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_volume = self.client.list_volumes(params)
         self.assertEqual(0, len(fetched_volume))
 
     @test.attr(type=['negative', 'gate'])
     def test_list_volumes_detail_with_invalid_status(self):
         params = {'status': 'null'}
-        resp, fetched_volume = self.client.list_volumes_with_detail(params)
-        self.assertEqual(200, resp.status)
+        _, fetched_volume = self.client.list_volumes_with_detail(params)
         self.assertEqual(0, len(fetched_volume))
 
 
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 26316d2..78df1df 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -20,21 +20,18 @@
 CONF = config.CONF
 
 
-class VolumesSnapshotTest(base.BaseVolumeV1Test):
-    _interface = "json"
+class VolumesV2SnapshotTestJSON(base.BaseVolumeTest):
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesSnapshotTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2SnapshotTestJSON, cls).resource_setup()
         cls.volume_origin = cls.create_volume()
 
         if not CONF.volume_feature_enabled.snapshot:
             raise cls.skipException("Cinder volume snapshots are disabled")
 
-    @classmethod
-    def tearDownClass(cls):
-        super(VolumesSnapshotTest, cls).tearDownClass()
+        cls.name_field = cls.special_fields['name_field']
+        cls.descrip_field = cls.special_fields['descrip_field']
 
     def _detach(self, volume_id):
         """Detach volume."""
@@ -47,14 +44,13 @@
         and validates result.
         """
         if with_detail:
-            resp, fetched_snap_list = \
+            _, fetched_snap_list = \
                 self.snapshots_client.\
                 list_snapshots_with_detail(params=params)
         else:
-            resp, fetched_snap_list = \
+            _, fetched_snap_list = \
                 self.snapshots_client.list_snapshots(params=params)
 
-        self.assertEqual(200, resp.status)
         # Validating params of fetched snapshots
         for snap in fetched_snap_list:
             for key in params:
@@ -74,9 +70,8 @@
         self.addCleanup(self.servers_client.delete_server, server['id'])
         self.servers_client.wait_for_server_status(server['id'], 'ACTIVE')
         mountpoint = '/dev/%s' % CONF.compute.volume_device_name
-        resp, body = self.volumes_client.attach_volume(
+        _, body = self.volumes_client.attach_volume(
             self.volume_origin['id'], server['id'], mountpoint)
-        self.assertEqual(202, resp.status)
         self.volumes_client.wait_for_volume_status(self.volume_origin['id'],
                                                    'in-use')
         self.addCleanup(self._detach, self.volume_origin['id'])
@@ -85,7 +80,6 @@
                                         force=True)
         # Delete the snapshot
         self.snapshots_client.delete_snapshot(snapshot['id'])
-        self.assertEqual(202, resp.status)
         self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
         self.snapshots.remove(snapshot)
 
@@ -93,44 +87,39 @@
     def test_snapshot_create_get_list_update_delete(self):
         # Create a snapshot
         s_name = data_utils.rand_name('snap')
-        snapshot = self.create_snapshot(self.volume_origin['id'],
-                                        display_name=s_name)
+        params = {self.name_field: s_name}
+        snapshot = self.create_snapshot(self.volume_origin['id'], **params)
 
         # Get the snap and check for some of its details
-        resp, snap_get = self.snapshots_client.get_snapshot(snapshot['id'])
-        self.assertEqual(200, resp.status)
+        _, snap_get = self.snapshots_client.get_snapshot(snapshot['id'])
         self.assertEqual(self.volume_origin['id'],
                          snap_get['volume_id'],
                          "Referred volume origin mismatch")
 
         # Compare also with the output from the list action
-        tracking_data = (snapshot['id'], snapshot['display_name'])
-        resp, snaps_list = self.snapshots_client.list_snapshots()
-        self.assertEqual(200, resp.status)
-        snaps_data = [(f['id'], f['display_name']) for f in snaps_list]
+        tracking_data = (snapshot['id'], snapshot[self.name_field])
+        _, snaps_list = self.snapshots_client.list_snapshots()
+        snaps_data = [(f['id'], f[self.name_field]) for f in snaps_list]
         self.assertIn(tracking_data, snaps_data)
 
         # Updates snapshot with new values
         new_s_name = data_utils.rand_name('new-snap')
         new_desc = 'This is the new description of snapshot.'
-        resp, update_snapshot = \
-            self.snapshots_client.update_snapshot(snapshot['id'],
-                                                  display_name=new_s_name,
-                                                  display_description=new_desc)
+        params = {self.name_field: new_s_name,
+                  self.descrip_field: new_desc}
+        _, update_snapshot = \
+            self.snapshots_client.update_snapshot(snapshot['id'], **params)
         # Assert response body for update_snapshot method
-        self.assertEqual(200, resp.status)
-        self.assertEqual(new_s_name, update_snapshot['display_name'])
-        self.assertEqual(new_desc, update_snapshot['display_description'])
+        self.assertEqual(new_s_name, update_snapshot[self.name_field])
+        self.assertEqual(new_desc, update_snapshot[self.descrip_field])
         # Assert response body for get_snapshot method
-        resp, updated_snapshot = \
+        _, updated_snapshot = \
             self.snapshots_client.get_snapshot(snapshot['id'])
-        self.assertEqual(200, resp.status)
-        self.assertEqual(new_s_name, updated_snapshot['display_name'])
-        self.assertEqual(new_desc, updated_snapshot['display_description'])
+        self.assertEqual(new_s_name, updated_snapshot[self.name_field])
+        self.assertEqual(new_desc, updated_snapshot[self.descrip_field])
 
         # Delete the snapshot
         self.snapshots_client.delete_snapshot(snapshot['id'])
-        self.assertEqual(200, resp.status)
         self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
         self.snapshots.remove(snapshot)
 
@@ -139,11 +128,11 @@
         """list snapshots with params."""
         # Create a snapshot
         display_name = data_utils.rand_name('snap')
-        snapshot = self.create_snapshot(self.volume_origin['id'],
-                                        display_name=display_name)
+        params = {self.name_field: display_name}
+        snapshot = self.create_snapshot(self.volume_origin['id'], **params)
 
         # Verify list snapshots by display_name filter
-        params = {'display_name': snapshot['display_name']}
+        params = {self.name_field: snapshot[self.name_field]}
         self._list_by_param_values_and_assert(params)
 
         # Verify list snapshots by status filter
@@ -152,7 +141,7 @@
 
         # Verify list snapshots by status and display name filter
         params = {'status': 'available',
-                  'display_name': snapshot['display_name']}
+                  self.name_field: snapshot[self.name_field]}
         self._list_by_param_values_and_assert(params)
 
     @test.attr(type='gate')
@@ -160,35 +149,42 @@
         """list snapshot details with params."""
         # Create a snapshot
         display_name = data_utils.rand_name('snap')
-        snapshot = self.create_snapshot(self.volume_origin['id'],
-                                        display_name=display_name)
+        params = {self.name_field: display_name}
+        snapshot = self.create_snapshot(self.volume_origin['id'], **params)
 
         # Verify list snapshot details by display_name filter
-        params = {'display_name': snapshot['display_name']}
+        params = {self.name_field: snapshot[self.name_field]}
         self._list_by_param_values_and_assert(params, with_detail=True)
         # Verify list snapshot details by status filter
         params = {'status': 'available'}
         self._list_by_param_values_and_assert(params, with_detail=True)
         # Verify list snapshot details by status and display name filter
         params = {'status': 'available',
-                  'display_name': snapshot['display_name']}
+                  self.name_field: snapshot[self.name_field]}
         self._list_by_param_values_and_assert(params, with_detail=True)
 
     @test.attr(type='gate')
     def test_volume_from_snapshot(self):
         # Create a temporary snap using wrapper method from base, then
-        # create a snap based volume, check resp code and deletes it
+        # create a snap-based volume and delete it
         snapshot = self.create_snapshot(self.volume_origin['id'])
         # NOTE(gfidente): size is required also when passing snapshot_id
-        resp, volume = self.volumes_client.create_volume(
+        _, volume = self.volumes_client.create_volume(
             size=1,
             snapshot_id=snapshot['id'])
-        self.assertEqual(200, resp.status)
         self.volumes_client.wait_for_volume_status(volume['id'], 'available')
         self.volumes_client.delete_volume(volume['id'])
         self.volumes_client.wait_for_resource_deletion(volume['id'])
         self.clear_snapshots()
 
 
-class VolumesSnapshotTestXML(VolumesSnapshotTest):
+class VolumesV2SnapshotTestXML(VolumesV2SnapshotTestJSON):
+    _interface = "xml"
+
+
+class VolumesV1SnapshotTestJSON(VolumesV2SnapshotTestJSON):
+    _api_version = 1
+
+
+class VolumesV1SnapshotTestXML(VolumesV1SnapshotTestJSON):
     _interface = "xml"
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
index 61aa307..75a62a8 100644
--- a/tempest/api/volume/test_volumes_snapshots_negative.py
+++ b/tempest/api/volume/test_volumes_snapshots_negative.py
@@ -21,12 +21,11 @@
 CONF = config.CONF
 
 
-class VolumesSnapshotNegativeTest(base.BaseVolumeV1Test):
-    _interface = "json"
+class VolumesV2SnapshotNegativeTestJSON(base.BaseVolumeTest):
 
     @classmethod
-    def setUpClass(cls):
-        super(VolumesSnapshotNegativeTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2SnapshotNegativeTestJSON, cls).resource_setup()
 
         if not CONF.volume_feature_enabled.snapshot:
             raise cls.skipException("Cinder volume snapshots are disabled")
@@ -48,5 +47,13 @@
                           None, display_name=s_name)
 
 
-class VolumesSnapshotNegativeTestXML(VolumesSnapshotNegativeTest):
+class VolumesV2SnapshotNegativeTestXML(VolumesV2SnapshotNegativeTestJSON):
+    _interface = "xml"
+
+
+class VolumesV1SnapshotNegativeTestJSON(VolumesV2SnapshotNegativeTestJSON):
+    _api_version = 1
+
+
+class VolumesV1SnapshotNegativeTestXML(VolumesV1SnapshotNegativeTestJSON):
     _interface = "xml"
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index 7ca8599..cc56873 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -31,9 +31,8 @@
     """
 
     @classmethod
-    @test.safe_setup
-    def setUpClass(cls):
-        super(VolumesV2ListTestJSON, cls).setUpClass()
+    def resource_setup(cls):
+        super(VolumesV2ListTestJSON, cls).resource_setup()
         cls.client = cls.volumes_client
 
         # Create 3 test volumes
@@ -42,17 +41,17 @@
         cls.metadata = {'Type': 'work'}
         for i in range(3):
             volume = cls.create_volume(metadata=cls.metadata)
-            resp, volume = cls.client.get_volume(volume['id'])
+            _, volume = cls.client.get_volume(volume['id'])
             cls.volume_list.append(volume)
             cls.volume_id_list.append(volume['id'])
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         # Delete the created volumes
         for volid in cls.volume_id_list:
-            resp, _ = cls.client.delete_volume(volid)
+            cls.client.delete_volume(volid)
             cls.client.wait_for_resource_deletion(volid)
-        super(VolumesV2ListTestJSON, cls).tearDownClass()
+        super(VolumesV2ListTestJSON, cls).resource_cleanup()
 
     @test.attr(type='gate')
     def test_volume_list_details_with_multiple_params(self):
@@ -66,8 +65,7 @@
                       'sort_dir': sort_dir,
                       'sort_key': sort_key
                       }
-            resp, fetched_volume = self.client.list_volumes_with_detail(params)
-            self.assertEqual(200, resp.status)
+            _, fetched_volume = self.client.list_volumes_with_detail(params)
             self.assertEqual(limit, len(fetched_volume),
                              "The count of volumes is %s, expected:%s " %
                              (len(fetched_volume), limit))
diff --git a/tempest/api_schema/response/queuing/__init__.py b/tempest/api_schema/request/__init__.py
similarity index 100%
copy from tempest/api_schema/response/queuing/__init__.py
copy to tempest/api_schema/request/__init__.py
diff --git a/tempest/api_schema/response/queuing/__init__.py b/tempest/api_schema/request/compute/__init__.py
similarity index 100%
copy from tempest/api_schema/response/queuing/__init__.py
copy to tempest/api_schema/request/compute/__init__.py
diff --git a/tempest/api_schema/request/compute/flavors.py b/tempest/api_schema/request/compute/flavors.py
new file mode 100644
index 0000000..8fe9e3a
--- /dev/null
+++ b/tempest/api_schema/request/compute/flavors.py
@@ -0,0 +1,53 @@
+# (c) 2014 Deutsche Telekom AG
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+common_flavor_details = {
+    "name": "get-flavor-details",
+    "http-method": "GET",
+    "url": "flavors/%s",
+    "resources": [
+        {"name": "flavor", "expected_result": 404}
+    ]
+}
+
+common_flavor_list = {
+    "name": "list-flavors-with-detail",
+    "http-method": "GET",
+    "url": "flavors/detail",
+    "json-schema": {
+        "type": "object",
+        "properties": {
+        }
+    }
+}
+
+common_admin_flavor_create = {
+    "name": "flavor-create",
+    "http-method": "POST",
+    "admin_client": True,
+    "url": "flavors",
+    "default_result_code": 400,
+    "json-schema": {
+        "type": "object",
+        "properties": {
+            "name": {"type": "string"},
+            "ram": {"type": "integer", "minimum": 1},
+            "vcpus": {"type": "integer", "minimum": 1},
+            "disk": {"type": "integer"},
+            "id": {"type": "integer"},
+            "swap": {"type": "integer"},
+            "rxtx_factor": {"type": "integer"},
+            "OS-FLV-EXT-DATA:ephemeral": {"type": "integer"}
+        }
+    }
+}
diff --git a/tempest/api_schema/request/compute/servers.py b/tempest/api_schema/request/compute/servers.py
new file mode 100644
index 0000000..731649c
--- /dev/null
+++ b/tempest/api_schema/request/compute/servers.py
@@ -0,0 +1,36 @@
+# (c) 2014 Deutsche Telekom AG
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+common_get_console_output = {
+    "name": "get-console-output",
+    "http-method": "POST",
+    "url": "servers/%s/action",
+    "resources": [
+        {"name": "server", "expected_result": 404}
+    ],
+    "json-schema": {
+        "type": "object",
+        "properties": {
+            "os-getConsoleOutput": {
+                "type": "object",
+                "properties": {
+                    "length": {
+                        "type": ["integer", "string"],
+                        "minimum": 0
+                    }
+                }
+            }
+        },
+        "additionalProperties": False
+    }
+}
diff --git a/tempest/api_schema/response/queuing/__init__.py b/tempest/api_schema/request/compute/v2/__init__.py
similarity index 100%
copy from tempest/api_schema/response/queuing/__init__.py
copy to tempest/api_schema/request/compute/v2/__init__.py
diff --git a/tempest/api_schema/request/compute/v2/flavors.py b/tempest/api_schema/request/compute/v2/flavors.py
new file mode 100644
index 0000000..bc459ad
--- /dev/null
+++ b/tempest/api_schema/request/compute/v2/flavors.py
@@ -0,0 +1,39 @@
+# (c) 2014 Deutsche Telekom AG
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+import copy
+
+from tempest.api_schema.request.compute import flavors
+
+flavors_details = copy.deepcopy(flavors.common_flavor_details)
+
+flavor_list = copy.deepcopy(flavors.common_flavor_list)
+
+flavor_create = copy.deepcopy(flavors.common_admin_flavor_create)
+
+flavor_list["json-schema"]["properties"] = {
+    "minRam": {
+        "type": "integer",
+        "results": {
+            "gen_none": 400,
+            "gen_string": 400
+        }
+    },
+    "minDisk": {
+        "type": "integer",
+        "results": {
+            "gen_none": 400,
+            "gen_string": 400
+        }
+    }
+}
diff --git a/tempest/api_schema/request/compute/v2/servers.py b/tempest/api_schema/request/compute/v2/servers.py
new file mode 100644
index 0000000..c9002ed
--- /dev/null
+++ b/tempest/api_schema/request/compute/v2/servers.py
@@ -0,0 +1,18 @@
+# (c) 2014 Deutsche Telekom AG
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+import copy
+
+from tempest.api_schema.request.compute import servers
+
+get_console_output = copy.deepcopy(servers.common_get_console_output)
diff --git a/tempest/api_schema/response/queuing/__init__.py b/tempest/api_schema/request/compute/v3/__init__.py
similarity index 100%
copy from tempest/api_schema/response/queuing/__init__.py
copy to tempest/api_schema/request/compute/v3/__init__.py
diff --git a/tempest/api_schema/request/compute/v3/flavors.py b/tempest/api_schema/request/compute/v3/flavors.py
new file mode 100644
index 0000000..b913aca
--- /dev/null
+++ b/tempest/api_schema/request/compute/v3/flavors.py
@@ -0,0 +1,37 @@
+# (c) 2014 Deutsche Telekom AG
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+
+import copy
+
+from tempest.api_schema.request.compute import flavors
+
+flavors_details = copy.deepcopy(flavors.common_flavor_details)
+
+flavor_list = copy.deepcopy(flavors.common_flavor_list)
+
+flavor_list["json-schema"]["properties"] = {
+    "min_ram": {
+        "type": "integer",
+        "results": {
+            "gen_none": 400,
+            "gen_string": 400
+        }
+    },
+    "min_disk": {
+        "type": "integer",
+        "results": {
+            "gen_none": 400,
+            "gen_string": 400
+        }
+    }
+}
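
Each of the new modules under tempest/api_schema/request/compute describes an API call as a plain dict: a name, the http-method, a url template whose %s placeholder is filled from the listed resources, optional flags such as admin_client, and a json-schema for the body or query parameters. A hedged sketch of how a consumer might turn such a description into an HTTP call; the requests-based helper is purely illustrative and is not Tempest's actual machinery::

    # Illustrative only: issuing a call from a request description dict.
    import json

    import requests  # assumed available; Tempest uses its own rest client

    def send_described_request(base_url, description, resource_id=None,
                               body=None):
        url = description['url']
        if '%s' in url:  # e.g. "flavors/%s" needs a concrete resource id
            url = url % resource_id
        data = json.dumps(body) if body is not None else None
        return requests.request(description['http-method'],
                                '%s/%s' % (base_url, url),
                                data=data,
                                headers={'Content-Type': 'application/json'})
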
diff --git a/tempest/api_schema/response/compute/servers.py b/tempest/api_schema/response/compute/servers.py
index d6c2ddb..f9c957b 100644
--- a/tempest/api_schema/response/compute/servers.py
+++ b/tempest/api_schema/response/compute/servers.py
@@ -54,14 +54,15 @@
         'id': {'type': 'string'},
         'name': {'type': 'string'},
         'status': {'type': 'string'},
-        'image': {
-            'type': 'object',
-            'properties': {
-                'id': {'type': 'string'},
-                'links': parameter_types.links
-            },
-            'required': ['id', 'links']
-        },
+        'image': {'oneOf': [
+            {'type': 'object',
+                'properties': {
+                    'id': {'type': 'string'},
+                    'links': parameter_types.links
+                },
+                'required': ['id', 'links']},
+            {'type': ['string', 'null']}
+        ]},
         'flavor': {
             'type': 'object',
             'properties': {
diff --git a/tempest/api_schema/response/compute/v2/hypervisors.py b/tempest/api_schema/response/compute/v2/hypervisors.py
index 7ad81c0..cbb7698 100644
--- a/tempest/api_schema/response/compute/v2/hypervisors.py
+++ b/tempest/api_schema/response/compute/v2/hypervisors.py
@@ -13,8 +13,10 @@
 #    under the License.
 
 import copy
+
 from tempest.api_schema.response.compute import hypervisors
 
+
 hypervisors_servers = copy.deepcopy(hypervisors.common_hypervisors_detail)
 
 # Defining extra attributes for V3 show hypervisor schema
@@ -24,11 +26,7 @@
         'items': {
             'type': 'object',
             'properties': {
-                # NOTE: Now the type of 'id' is integer,
-                # but here allows 'string' also because we
-                # will be able to change it to 'uuid' in
-                # the future.
-                'id': {'type': ['integer', 'string']},
+                'uuid': {'type': 'string'},
                 'name': {'type': 'string'}
             }
         }
diff --git a/tempest/api_schema/response/compute/v2/security_group_default_rule.py b/tempest/api_schema/response/compute/v2/security_group_default_rule.py
new file mode 100644
index 0000000..9246ab8
--- /dev/null
+++ b/tempest/api_schema/response/compute/v2/security_group_default_rule.py
@@ -0,0 +1,61 @@
+# Copyright 2014 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+common_security_group_default_rule_info = {
+    'type': 'object',
+    'properties': {
+        'from_port': {'type': 'integer'},
+        'id': {'type': 'integer'},
+        'ip_protocol': {'type': 'string'},
+        'ip_range': {
+            'type': 'object',
+            'properties': {
+                'cidr': {'type': 'string'}
+            },
+            'required': ['cidr'],
+        },
+        'to_port': {'type': 'integer'},
+    },
+    'required': ['from_port', 'id', 'ip_protocol', 'ip_range', 'to_port'],
+}
+
+create_get_security_group_default_rule = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'security_group_default_rule':
+                common_security_group_default_rule_info
+        },
+        'required': ['security_group_default_rule']
+    }
+}
+
+delete_security_group_default_rule = {
+    'status_code': [204]
+}
+
+list_security_group_default_rules = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'security_group_default_rules': {
+                'type': 'array',
+                'items': common_security_group_default_rule_info
+            }
+        },
+        'required': ['security_group_default_rules']
+    }
+}
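
This file follows the convention used across tempest/api_schema/response: each entry lists the allowed status_code values plus a JSON Schema for the response_body. A minimal sketch of checking a response against one of these dicts with the jsonschema library; the helper is illustrative only, since the service clients are the ones expected to apply these schemas::

    # Illustrative only: validating a response against a schema dict above.
    import jsonschema

    def check_response(schema, status, body):
        if status not in schema['status_code']:
            raise AssertionError('unexpected status %s' % status)
        if 'response_body' in schema:
            jsonschema.validate(body, schema['response_body'])

    # check_response(delete_security_group_default_rule, 204, None) passes,
    # while a 400 status or a rule missing 'ip_range' raises an error.
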
diff --git a/tempest/api_schema/response/compute/v3/hypervisors.py b/tempest/api_schema/response/compute/v3/hypervisors.py
index 9a9a9f7..b36ae7e 100644
--- a/tempest/api_schema/response/compute/v3/hypervisors.py
+++ b/tempest/api_schema/response/compute/v3/hypervisors.py
@@ -13,8 +13,10 @@
 #    under the License.
 
 import copy
+
 from tempest.api_schema.response.compute import hypervisors
 
+
 list_hypervisors_detail = copy.deepcopy(
     hypervisors.common_list_hypervisors_detail)
 # Defining extra attributes for V3 show hypervisor schema
diff --git a/tempest/api_schema/response/compute/v3/servers.py b/tempest/api_schema/response/compute/v3/servers.py
index 230de5f..d0edd44 100644
--- a/tempest/api_schema/response/compute/v3/servers.py
+++ b/tempest/api_schema/response/compute/v3/servers.py
@@ -55,9 +55,7 @@
         'mac_addr': {'type': 'string'}
     })
 addresses_v3['patternProperties']['^[a-zA-Z0-9-_.]+$']['items'][
-    'required'].extend(
-        ['type', 'mac_addr']
-    )
+    'required'].extend(['type', 'mac_addr'])
 
 update_server = copy.deepcopy(servers.base_update_get_server)
 update_server['response_body']['properties']['server']['properties'].update({
diff --git a/tempest/api_schema/response/queuing/__init__.py b/tempest/api_schema/response/messaging/__init__.py
similarity index 100%
rename from tempest/api_schema/response/queuing/__init__.py
rename to tempest/api_schema/response/messaging/__init__.py
diff --git a/tempest/api_schema/response/queuing/v1/__init__.py b/tempest/api_schema/response/messaging/v1/__init__.py
similarity index 100%
rename from tempest/api_schema/response/queuing/v1/__init__.py
rename to tempest/api_schema/response/messaging/v1/__init__.py
diff --git a/tempest/api_schema/response/queuing/v1/queues.py b/tempest/api_schema/response/messaging/v1/queues.py
similarity index 100%
rename from tempest/api_schema/response/queuing/v1/queues.py
rename to tempest/api_schema/response/messaging/v1/queues.py
diff --git a/tempest/auth.py b/tempest/auth.py
index 5df6224..b1ead29 100644
--- a/tempest/auth.py
+++ b/tempest/auth.py
@@ -18,16 +18,17 @@
 import datetime
 import exceptions
 import re
-import six
 import urlparse
 
+import six
+
 from tempest import config
+from tempest.openstack.common import log as logging
 from tempest.services.identity.json import identity_client as json_id
 from tempest.services.identity.v3.json import identity_client as json_v3id
 from tempest.services.identity.v3.xml import identity_client as xml_v3id
 from tempest.services.identity.xml import identity_client as xml_id
 
-from tempest.openstack.common import log as logging
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
@@ -39,11 +40,9 @@
     Provide authentication
     """
 
-    def __init__(self, credentials, client_type='tempest',
-                 interface=None):
+    def __init__(self, credentials, interface=None):
         """
         :param credentials: credentials for authentication
-        :param client_type: 'tempest' or 'official'
         :param interface: 'json' or 'xml'. Applicable for tempest client only
         """
         credentials = self._convert_credentials(credentials)
@@ -51,9 +50,8 @@
             self.credentials = credentials
         else:
             raise TypeError("Invalid credentials")
-        self.client_type = client_type
         self.interface = interface
-        if self.client_type == 'tempest' and self.interface is None:
+        if self.interface is None:
             self.interface = 'json'
         self.cache = None
         self.alt_auth_data = None
@@ -67,11 +65,10 @@
             return credentials
 
     def __str__(self):
-        return "Creds :{creds}, client type: {client_type}, interface: " \
-               "{interface}, cached auth data: {cache}".format(
-                   creds=self.credentials, client_type=self.client_type,
-                   interface=self.interface, cache=self.cache
-               )
+        return "Creds :{creds}, interface: {interface}, " \
+               "cached auth data: {cache}".format(
+                   creds=self.credentials, interface=self.interface,
+                   cache=self.cache)
 
     @abc.abstractmethod
     def _decorate_request(self, filters, method, url, headers=None, body=None,
@@ -207,9 +204,8 @@
 
     token_expiry_threshold = datetime.timedelta(seconds=60)
 
-    def __init__(self, credentials, client_type='tempest', interface=None):
-        super(KeystoneAuthProvider, self).__init__(credentials, client_type,
-                                                   interface)
+    def __init__(self, credentials, interface=None):
+        super(KeystoneAuthProvider, self).__init__(credentials, interface)
         self.auth_client = self._auth_client()
 
     def _decorate_request(self, filters, method, url, headers=None, body=None,
@@ -243,15 +239,12 @@
 
     def _get_auth(self):
         # Bypasses the cache
-        if self.client_type == 'tempest':
-            auth_func = getattr(self.auth_client, 'get_token')
-            auth_params = self._auth_params()
+        auth_func = getattr(self.auth_client, 'get_token')
+        auth_params = self._auth_params()
 
-            # returns token, auth_data
-            token, auth_data = auth_func(**auth_params)
-            return token, auth_data
-        else:
-            raise NotImplementedError
+        # returns token, auth_data
+        token, auth_data = auth_func(**auth_params)
+        return token, auth_data
 
     def get_token(self):
         return self.auth_data[0]
@@ -262,23 +255,17 @@
     EXPIRY_DATE_FORMAT = '%Y-%m-%dT%H:%M:%SZ'
 
     def _auth_client(self):
-        if self.client_type == 'tempest':
-            if self.interface == 'json':
-                return json_id.TokenClientJSON()
-            else:
-                return xml_id.TokenClientXML()
+        if self.interface == 'json':
+            return json_id.TokenClientJSON()
         else:
-            raise NotImplementedError
+            return xml_id.TokenClientXML()
 
     def _auth_params(self):
-        if self.client_type == 'tempest':
-            return dict(
-                user=self.credentials.username,
-                password=self.credentials.password,
-                tenant=self.credentials.tenant_name,
-                auth_data=True)
-        else:
-            raise NotImplementedError
+        return dict(
+            user=self.credentials.username,
+            password=self.credentials.password,
+            tenant=self.credentials.tenant_name,
+            auth_data=True)
 
     def _fill_credentials(self, auth_data_body):
         tenant = auth_data_body['token']['tenant']
@@ -349,24 +336,18 @@
     EXPIRY_DATE_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
 
     def _auth_client(self):
-        if self.client_type == 'tempest':
-            if self.interface == 'json':
-                return json_v3id.V3TokenClientJSON()
-            else:
-                return xml_v3id.V3TokenClientXML()
+        if self.interface == 'json':
+            return json_v3id.V3TokenClientJSON()
         else:
-            raise NotImplementedError
+            return xml_v3id.V3TokenClientXML()
 
     def _auth_params(self):
-        if self.client_type == 'tempest':
-            return dict(
-                user=self.credentials.username,
-                password=self.credentials.password,
-                tenant=self.credentials.tenant_name,
-                domain=self.credentials.user_domain_name,
-                auth_data=True)
-        else:
-            raise NotImplementedError
+        return dict(
+            user=self.credentials.username,
+            password=self.credentials.password,
+            tenant=self.credentials.tenant_name,
+            domain=self.credentials.user_domain_name,
+            auth_data=True)
 
     def _fill_credentials(self, auth_data_body):
         # project or domain, depending on the scope
diff --git a/tempest/cli/__init__.py b/tempest/cli/__init__.py
index 02f8c05..ca6d7fe 100644
--- a/tempest/cli/__init__.py
+++ b/tempest/cli/__init__.py
@@ -13,14 +13,18 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import functools
 import os
 import shlex
 import subprocess
 
+import testtools
+
 import tempest.cli.output_parser
 from tempest import config
 from tempest import exceptions
 from tempest.openstack.common import log as logging
+from tempest.openstack.common import versionutils
 import tempest.test
 
 
@@ -29,13 +33,72 @@
 CONF = config.CONF
 
 
+def execute(cmd, action, flags='', params='', fail_ok=False,
+            merge_stderr=False):
+    """Executes specified command for the given action."""
+    cmd = ' '.join([os.path.join(CONF.cli.cli_dir, cmd),
+                    flags, action, params])
+    LOG.info("running: '%s'" % cmd)
+    cmd = shlex.split(cmd.encode('utf-8'))
+    result = ''
+    result_err = ''
+    stdout = subprocess.PIPE
+    stderr = subprocess.STDOUT if merge_stderr else subprocess.PIPE
+    proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
+    result, result_err = proc.communicate()
+    if not fail_ok and proc.returncode != 0:
+        raise exceptions.CommandFailed(proc.returncode,
+                                       cmd,
+                                       result,
+                                       result_err)
+    return result
+
+
+def check_client_version(client, version):
+    """Checks if the client's version is compatible with the given version
+
+    @param client: The client to check.
+    @param version: The version to compare against.
+    @return: True if the client version is compatible with the given version
+             parameter, False otherwise.
+    """
+    current_version = execute(client, '', params='--version',
+                              merge_stderr=True)
+
+    if not current_version.strip():
+        raise exceptions.TempestException('"%s --version" output was empty' %
+                                          client)
+
+    return versionutils.is_compatible(version, current_version,
+                                      same_major=False)
+
+
+def min_client_version(*args, **kwargs):
+    """A decorator to skip tests if the client used isn't of the right version.
+
+    @param client: The client command to run. For python-novaclient, this is
+                   'nova', for python-cinderclient this is 'cinder', etc.
+    @param version: The minimum version required to run the CLI test.
+    """
+    def decorator(func):
+        @functools.wraps(func)
+        def wrapper(*func_args, **func_kwargs):
+            if not check_client_version(kwargs['client'], kwargs['version']):
+                msg = "requires %s client version >= %s" % (kwargs['client'],
+                                                            kwargs['version'])
+                raise testtools.TestCase.skipException(msg)
+            return func(*func_args, **func_kwargs)
+        return wrapper
+    return decorator
+
+
 class ClientTestBase(tempest.test.BaseTestCase):
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.cli.enabled:
             msg = "cli testing disabled"
             raise cls.skipException(msg)
-        super(ClientTestBase, cls).setUpClass()
+        super(ClientTestBase, cls).resource_setup()
 
     def __init__(self, *args, **kwargs):
         self.parser = tempest.cli.output_parser
@@ -50,7 +113,7 @@
     def nova_manage(self, action, flags='', params='', fail_ok=False,
                     merge_stderr=False):
         """Executes nova-manage command for the given action."""
-        return self.cmd(
+        return execute(
             'nova-manage', action, flags, params, fail_ok, merge_stderr)
 
     def keystone(self, action, flags='', params='', admin=True, fail_ok=False):
@@ -114,28 +177,7 @@
                   CONF.identity.admin_password,
                   CONF.identity.uri))
         flags = creds + ' ' + flags
-        return self.cmd(cmd, action, flags, params, fail_ok, merge_stderr)
-
-    def cmd(self, cmd, action, flags='', params='', fail_ok=False,
-            merge_stderr=False):
-        """Executes specified command for the given action."""
-        cmd = ' '.join([os.path.join(CONF.cli.cli_dir, cmd),
-                        flags, action, params])
-        LOG.info("running: '%s'" % cmd)
-        cmd = shlex.split(cmd.encode('utf-8'))
-        result = ''
-        result_err = ''
-        stdout = subprocess.PIPE
-        stderr = subprocess.STDOUT if merge_stderr else subprocess.PIPE
-        proc = subprocess.Popen(
-            cmd, stdout=stdout, stderr=stderr)
-        result, result_err = proc.communicate()
-        if not fail_ok and proc.returncode != 0:
-            raise exceptions.CommandFailed(proc.returncode,
-                                           cmd,
-                                           result,
-                                           result_err)
-        return result
+        return execute(cmd, action, flags, params, fail_ok, merge_stderr)
 
     def assertTableStruct(self, items, field_names):
         """Verify that all items has keys listed in field_names."""
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/compute/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/compute/__init__.py
diff --git a/tempest/cli/simple_read_only/test_nova.py b/tempest/cli/simple_read_only/compute/test_nova.py
similarity index 97%
rename from tempest/cli/simple_read_only/test_nova.py
rename to tempest/cli/simple_read_only/compute/test_nova.py
index 7085cc9..6e5e077 100644
--- a/tempest/cli/simple_read_only/test_nova.py
+++ b/tempest/cli/simple_read_only/compute/test_nova.py
@@ -41,11 +41,11 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.nova:
             msg = ("%s skipped as Nova is not available" % cls.__name__)
             raise cls.skipException(msg)
-        super(SimpleReadOnlyNovaClientTest, cls).setUpClass()
+        super(SimpleReadOnlyNovaClientTest, cls).resource_setup()
 
     def test_admin_fake_action(self):
         self.assertRaises(exceptions.CommandFailed,
@@ -144,6 +144,7 @@
     def test_admin_secgroup_list_rules(self):
         self.nova('secgroup-list-rules')
 
+    @tempest.cli.min_client_version(client='nova', version='2.18')
     def test_admin_server_group_list(self):
         self.nova('server-group-list')
 
diff --git a/tempest/cli/simple_read_only/test_nova_manage.py b/tempest/cli/simple_read_only/compute/test_nova_manage.py
similarity index 86%
rename from tempest/cli/simple_read_only/test_nova_manage.py
rename to tempest/cli/simple_read_only/compute/test_nova_manage.py
index dae0cf8..cff543f 100644
--- a/tempest/cli/simple_read_only/test_nova_manage.py
+++ b/tempest/cli/simple_read_only/compute/test_nova_manage.py
@@ -36,7 +36,7 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.nova:
             msg = ("%s skipped as Nova is not available" % cls.__name__)
             raise cls.skipException(msg)
@@ -44,7 +44,7 @@
             msg = ("%s skipped as *-manage commands not available"
                    % cls.__name__)
             raise cls.skipException(msg)
-        super(SimpleReadOnlyNovaManageTest, cls).setUpClass()
+        super(SimpleReadOnlyNovaManageTest, cls).resource_setup()
 
     def test_admin_fake_action(self):
         self.assertRaises(exceptions.CommandFailed,
@@ -65,24 +65,17 @@
                          self.nova_manage('', '--version', merge_stderr=True))
 
     def test_debug_flag(self):
-        self.assertNotEqual("", self.nova_manage('flavor list',
+        self.assertNotEqual("", self.nova_manage('service list',
                             '--debug'))
 
     def test_verbose_flag(self):
-        self.assertNotEqual("", self.nova_manage('flavor list',
+        self.assertNotEqual("", self.nova_manage('service list',
                             '--verbose'))
 
     # test actions
     def test_version(self):
         self.assertNotEqual("", self.nova_manage('version'))
 
-    def test_flavor_list(self):
-        self.assertNotEqual("", self.nova_manage('flavor list'))
-
-    def test_db_archive_deleted_rows(self):
-        # make sure command doesn't error out
-        self.nova_manage('db archive_deleted_rows 50')
-
     def test_db_sync(self):
         # make sure command doesn't error out
         self.nova_manage('db sync')
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/data_processing/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/data_processing/__init__.py
diff --git a/tempest/cli/simple_read_only/test_sahara.py b/tempest/cli/simple_read_only/data_processing/test_sahara.py
similarity index 97%
rename from tempest/cli/simple_read_only/test_sahara.py
rename to tempest/cli/simple_read_only/data_processing/test_sahara.py
index 2c6e0e2..751a4ad 100644
--- a/tempest/cli/simple_read_only/test_sahara.py
+++ b/tempest/cli/simple_read_only/data_processing/test_sahara.py
@@ -34,11 +34,11 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.sahara:
             msg = "Skipping all Sahara cli tests because it is not available"
             raise cls.skipException(msg)
-        super(SimpleReadOnlySaharaClientTest, cls).setUpClass()
+        super(SimpleReadOnlySaharaClientTest, cls).resource_setup()
 
     @test.attr(type='negative')
     def test_sahara_fake_action(self):
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/identity/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/identity/__init__.py
diff --git a/tempest/cli/simple_read_only/test_keystone.py b/tempest/cli/simple_read_only/identity/test_keystone.py
similarity index 100%
rename from tempest/cli/simple_read_only/test_keystone.py
rename to tempest/cli/simple_read_only/identity/test_keystone.py
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/image/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/image/__init__.py
diff --git a/tempest/cli/simple_read_only/test_glance.py b/tempest/cli/simple_read_only/image/test_glance.py
similarity index 96%
rename from tempest/cli/simple_read_only/test_glance.py
rename to tempest/cli/simple_read_only/image/test_glance.py
index 2fd8212..a9cbadb 100644
--- a/tempest/cli/simple_read_only/test_glance.py
+++ b/tempest/cli/simple_read_only/image/test_glance.py
@@ -34,11 +34,11 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.glance:
             msg = ("%s skipped as Glance is not available" % cls.__name__)
             raise cls.skipException(msg)
-        super(SimpleReadOnlyGlanceClientTest, cls).setUpClass()
+        super(SimpleReadOnlyGlanceClientTest, cls).resource_setup()
 
     def test_glance_fake_action(self):
         self.assertRaises(exceptions.CommandFailed,
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/network/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/network/__init__.py
diff --git a/tempest/cli/simple_read_only/test_neutron.py b/tempest/cli/simple_read_only/network/test_neutron.py
similarity index 98%
rename from tempest/cli/simple_read_only/test_neutron.py
rename to tempest/cli/simple_read_only/network/test_neutron.py
index 87f6b67..f9f8906 100644
--- a/tempest/cli/simple_read_only/test_neutron.py
+++ b/tempest/cli/simple_read_only/network/test_neutron.py
@@ -35,11 +35,11 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if (not CONF.service_available.neutron):
             msg = "Skipping all Neutron cli tests because it is not available"
             raise cls.skipException(msg)
-        super(SimpleReadOnlyNeutronClientTest, cls).setUpClass()
+        super(SimpleReadOnlyNeutronClientTest, cls).resource_setup()
 
     @test.attr(type='smoke')
     def test_neutron_fake_action(self):
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/object_storage/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/object_storage/__init__.py
diff --git a/tempest/cli/simple_read_only/test_swift.py b/tempest/cli/simple_read_only/object_storage/test_swift.py
similarity index 96%
rename from tempest/cli/simple_read_only/test_swift.py
rename to tempest/cli/simple_read_only/object_storage/test_swift.py
index 069a384..a162660 100644
--- a/tempest/cli/simple_read_only/test_swift.py
+++ b/tempest/cli/simple_read_only/object_storage/test_swift.py
@@ -31,11 +31,11 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.swift:
             msg = ("%s skipped as Swift is not available" % cls.__name__)
             raise cls.skipException(msg)
-        super(SimpleReadOnlySwiftClientTest, cls).setUpClass()
+        super(SimpleReadOnlySwiftClientTest, cls).resource_setup()
 
     def test_swift_fake_action(self):
         self.assertRaises(exceptions.CommandFailed,
diff --git a/tempest/services/queuing/json/__init__.py b/tempest/cli/simple_read_only/orchestration/__init__.py
similarity index 100%
copy from tempest/services/queuing/json/__init__.py
copy to tempest/cli/simple_read_only/orchestration/__init__.py
diff --git a/tempest/cli/simple_read_only/test_heat.py b/tempest/cli/simple_read_only/orchestration/test_heat.py
similarity index 84%
rename from tempest/cli/simple_read_only/test_heat.py
rename to tempest/cli/simple_read_only/orchestration/test_heat.py
index 7a952fc..7d7f8c9 100644
--- a/tempest/cli/simple_read_only/test_heat.py
+++ b/tempest/cli/simple_read_only/orchestration/test_heat.py
@@ -12,6 +12,7 @@
 
 import json
 import os
+
 import yaml
 
 import tempest.cli
@@ -31,12 +32,15 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if (not CONF.service_available.heat):
             msg = ("Skipping all Heat cli tests because it is "
                    "not available")
             raise cls.skipException(msg)
-        super(SimpleReadOnlyHeatClientTest, cls).setUpClass()
+        super(SimpleReadOnlyHeatClientTest, cls).resource_setup()
+        cls.heat_template_path = os.path.join(os.path.dirname(
+            os.path.dirname(os.path.realpath(__file__))),
+            'heat_templates/heat_minimal.yaml')
 
     def test_heat_stack_list(self):
         self.heat('stack-list')
@@ -55,7 +59,7 @@
 
     def test_heat_resource_template_fmt_arg_long_json(self):
         ret = self.heat('resource-template --format json OS::Nova::Server')
-        self.assertIn('"Type": "OS::Nova::Server",', ret)
+        self.assertIn('"Type": "OS::Nova::Server"', ret)
         self.assertIsInstance(json.loads(ret), dict)
 
     def test_heat_resource_type_list(self):
@@ -69,22 +73,19 @@
         self.assertIsInstance(json.loads(rsrc_schema), dict)
 
     def test_heat_template_validate_yaml(self):
-        filepath = os.path.join(os.path.dirname(os.path.realpath(__file__)),
-                                'heat_templates/heat_minimal.yaml')
-        ret = self.heat('template-validate -f %s' % filepath)
+        ret = self.heat('template-validate -f %s' % self.heat_template_path)
         # On success template-validate returns a json representation
         # of the template parameters
         self.assertIsInstance(json.loads(ret), dict)
 
     def test_heat_template_validate_hot(self):
-        filepath = os.path.join(os.path.dirname(os.path.realpath(__file__)),
-                                'heat_templates/heat_minimal_hot.yaml')
-        ret = self.heat('template-validate -f %s' % filepath)
+        ret = self.heat('template-validate -f %s' % self.heat_template_path)
         self.assertIsInstance(json.loads(ret), dict)
 
     def test_heat_help(self):
         self.heat('help')
 
+    @tempest.cli.min_client_version(client='heat', version='0.2.7')
     def test_heat_bash_completion(self):
         self.heat('bash-completion')
 
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/telemetry/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/telemetry/__init__.py
diff --git a/tempest/cli/simple_read_only/test_ceilometer.py b/tempest/cli/simple_read_only/telemetry/test_ceilometer.py
similarity index 94%
rename from tempest/cli/simple_read_only/test_ceilometer.py
rename to tempest/cli/simple_read_only/telemetry/test_ceilometer.py
index 1d2822d..45b793b 100644
--- a/tempest/cli/simple_read_only/test_ceilometer.py
+++ b/tempest/cli/simple_read_only/telemetry/test_ceilometer.py
@@ -32,12 +32,12 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if (not CONF.service_available.ceilometer):
             msg = ("Skipping all Ceilometer cli tests because it is "
                    "not available")
             raise cls.skipException(msg)
-        super(SimpleReadOnlyCeilometerClientTest, cls).setUpClass()
+        super(SimpleReadOnlyCeilometerClientTest, cls).resource_setup()
 
     def test_ceilometer_meter_list(self):
         self.ceilometer('meter-list')
diff --git a/tempest/api/queuing/__init__.py b/tempest/cli/simple_read_only/volume/__init__.py
similarity index 100%
copy from tempest/api/queuing/__init__.py
copy to tempest/cli/simple_read_only/volume/__init__.py
diff --git a/tempest/cli/simple_read_only/test_cinder.py b/tempest/cli/simple_read_only/volume/test_cinder.py
similarity index 95%
rename from tempest/cli/simple_read_only/test_cinder.py
rename to tempest/cli/simple_read_only/volume/test_cinder.py
index 04971c1..45f6c41 100644
--- a/tempest/cli/simple_read_only/test_cinder.py
+++ b/tempest/cli/simple_read_only/volume/test_cinder.py
@@ -15,6 +15,7 @@
 
 import logging
 import re
+
 import testtools
 
 from tempest import cli
@@ -34,11 +35,11 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         if not CONF.service_available.cinder:
             msg = ("%s skipped as Cinder is not available" % cls.__name__)
             raise cls.skipException(msg)
-        super(SimpleReadOnlyCinderClientTest, cls).setUpClass()
+        super(SimpleReadOnlyCinderClientTest, cls).resource_setup()
 
     def test_cinder_fake_action(self):
         self.assertRaises(exceptions.CommandFailed,
@@ -120,8 +121,12 @@
         self.assertTableStruct(zone_list, ['Name', 'Status'])
 
     def test_cinder_endpoints(self):
-        endpoints = self.parser.listing(self.cinder('endpoints'))
-        self.assertTableStruct(endpoints, ['nova', 'Value'])
+        out = self.cinder('endpoints')
+        tables = self.parser.tables(out)
+        for table in tables:
+            headers = table['headers']
+            self.assertTrue(2 >= len(headers))
+            self.assertEqual('Value', headers[1])
 
     def test_cinder_service_list(self):
         service_list = self.parser.listing(self.cinder('service-list'))
diff --git a/tempest/clients.py b/tempest/clients.py
index 0edcdf4..2d07852 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -13,9 +13,6 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import keystoneclient.exceptions
-import keystoneclient.v2_0.client
-
 from tempest import auth
 from tempest.common import rest_client
 from tempest import config
@@ -50,6 +47,7 @@
 from tempest.services.compute.json.limits_client import LimitsClientJSON
 from tempest.services.compute.json.migrations_client import \
     MigrationsClientJSON
+from tempest.services.compute.json.networks_client import NetworksClientJSON
 from tempest.services.compute.json.quotas_client import QuotaClassesClientJSON
 from tempest.services.compute.json.quotas_client import QuotasClientJSON
 from tempest.services.compute.json.security_group_default_rules_client import \
@@ -150,6 +148,8 @@
 from tempest.services.identity.xml.identity_client import TokenClientXML
 from tempest.services.image.v1.json.image_client import ImageClientJSON
 from tempest.services.image.v2.json.image_client import ImageClientV2JSON
+from tempest.services.messaging.json.messaging_client import \
+    MessagingClientJSON
 from tempest.services.network.json.network_client import NetworkClientJSON
 from tempest.services.network.xml.network_client import NetworkClientXML
 from tempest.services.object_storage.account_client import AccountClient
@@ -161,7 +161,6 @@
     ObjectClientCustomizedHeader
 from tempest.services.orchestration.json.orchestration_client import \
     OrchestrationClient
-from tempest.services.queuing.json.queuing_client import QueuingClientJSON
 from tempest.services.telemetry.json.telemetry_client import \
     TelemetryClientJSON
 from tempest.services.telemetry.xml.telemetry_client import \
@@ -179,17 +178,23 @@
 from tempest.services.volume.json.backups_client import BackupsClientJSON
 from tempest.services.volume.json.extensions_client import \
     ExtensionsClientJSON as VolumeExtensionClientJSON
+from tempest.services.volume.json.qos_client import QosSpecsClientJSON
 from tempest.services.volume.json.snapshots_client import SnapshotsClientJSON
 from tempest.services.volume.json.volumes_client import VolumesClientJSON
 from tempest.services.volume.v2.json.availability_zone_client import \
     VolumeV2AvailabilityZoneClientJSON
 from tempest.services.volume.v2.json.extensions_client import \
     ExtensionsV2ClientJSON as VolumeV2ExtensionClientJSON
+from tempest.services.volume.v2.json.qos_client import QosSpecsV2ClientJSON
+from tempest.services.volume.v2.json.snapshots_client import \
+    SnapshotsV2ClientJSON
 from tempest.services.volume.v2.json.volumes_client import VolumesV2ClientJSON
 from tempest.services.volume.v2.xml.availability_zone_client import \
     VolumeV2AvailabilityZoneClientXML
 from tempest.services.volume.v2.xml.extensions_client import \
     ExtensionsV2ClientXML as VolumeV2ExtensionClientXML
+from tempest.services.volume.v2.xml.snapshots_client import \
+    SnapshotsV2ClientXML
 from tempest.services.volume.v2.xml.volumes_client import VolumesV2ClientXML
 from tempest.services.volume.xml.admin.volume_hosts_client import \
     VolumeHostsClientXML
@@ -220,7 +225,6 @@
     def __init__(self, credentials=None, interface='json', service=None):
         # Set interface and client type first
         self.interface = interface
-        self.client_type = 'tempest'
         # super cares for credentials validation
         super(Manager, self).__init__(credentials=credentials)
 
@@ -242,6 +246,7 @@
                 self.auth_provider)
             self.backups_client = BackupsClientXML(self.auth_provider)
             self.snapshots_client = SnapshotsClientXML(self.auth_provider)
+            self.snapshots_v2_client = SnapshotsV2ClientXML(self.auth_provider)
             self.volumes_client = VolumesClientXML(self.auth_provider)
             self.volumes_v2_client = VolumesV2ClientXML(self.auth_provider)
             self.volume_types_client = VolumeTypesClientXML(
@@ -321,6 +326,8 @@
                 self.auth_provider)
             self.backups_client = BackupsClientJSON(self.auth_provider)
             self.snapshots_client = SnapshotsClientJSON(self.auth_provider)
+            self.snapshots_v2_client = SnapshotsV2ClientJSON(
+                self.auth_provider)
             self.volumes_client = VolumesClientJSON(self.auth_provider)
             self.volumes_v2_client = VolumesV2ClientJSON(self.auth_provider)
             self.volume_types_client = VolumeTypesClientJSON(
@@ -381,7 +388,7 @@
                 self.auth_provider)
             self.database_versions_client = DatabaseVersionsClientJSON(
                 self.auth_provider)
-            self.queuing_client = QueuingClientJSON(self.auth_provider)
+            self.messaging_client = MessagingClientJSON(self.auth_provider)
             if CONF.service_available.ceilometer:
                 self.telemetry_client = TelemetryClientJSON(
                     self.auth_provider)
@@ -426,6 +433,14 @@
         self.migrations_client = MigrationsClientJSON(self.auth_provider)
         self.security_group_default_rules_client = (
             SecurityGroupDefaultRulesClientJSON(self.auth_provider))
+        self.networks_client = NetworksClientJSON(self.auth_provider)
+        # NOTE: XML clients are not implemented for QoS specs, so the
+        # QoS clients are set up here. Once XML clients are implemented,
+        # they should be moved into their respective if/else branches above.
+        # Bug: 1312553
+        self.volume_qos_client = QosSpecsClientJSON(self.auth_provider)
+        self.volume_qos_v2_client = QosSpecsV2ClientJSON(
+            self.auth_provider)
 
 
 class AltManager(Manager):
@@ -469,290 +484,3 @@
             credentials=auth.get_default_credentials('compute_admin'),
             interface=interface,
             service=service)
-
-
-class OfficialClientManager(manager.Manager):
-    """
-    Manager that provides access to the official python clients for
-    calling various OpenStack APIs.
-    """
-
-    NOVACLIENT_VERSION = '2'
-    CINDERCLIENT_VERSION = '1'
-    HEATCLIENT_VERSION = '1'
-    IRONICCLIENT_VERSION = '1'
-    SAHARACLIENT_VERSION = '1.1'
-    CEILOMETERCLIENT_VERSION = '2'
-
-    def __init__(self, credentials):
-        # FIXME(andreaf) Auth provider for client_type 'official' is
-        # not implemented yet, setting to 'tempest' for now.
-        self.client_type = 'tempest'
-        self.interface = None
-        # super cares for credentials validation
-        super(OfficialClientManager, self).__init__(credentials=credentials)
-        self.baremetal_client = self._get_baremetal_client()
-        self.compute_client = self._get_compute_client(credentials)
-        self.identity_client = self._get_identity_client(credentials)
-        self.image_client = self._get_image_client()
-        self.network_client = self._get_network_client()
-        self.volume_client = self._get_volume_client(credentials)
-        self.object_storage_client = self._get_object_storage_client(
-            credentials)
-        self.orchestration_client = self._get_orchestration_client(
-            credentials)
-        self.data_processing_client = self._get_data_processing_client(
-            credentials)
-        self.ceilometer_client = self._get_ceilometer_client(
-            credentials)
-
-    def _get_roles(self):
-        admin_credentials = auth.get_default_credentials('identity_admin')
-        keystone_admin = self._get_identity_client(admin_credentials)
-
-        username = self.credentials.username
-        tenant_name = self.credentials.tenant_name
-        user_id = keystone_admin.users.find(name=username).id
-        tenant_id = keystone_admin.tenants.find(name=tenant_name).id
-
-        roles = keystone_admin.roles.roles_for_user(
-            user=user_id, tenant=tenant_id)
-
-        return [r.name for r in roles]
-
-    def _get_compute_client(self, credentials):
-        # Novaclient will not execute operations for anyone but the
-        # identified user, so a new client needs to be created for
-        # each user that operations need to be performed for.
-        if not CONF.service_available.nova:
-            return None
-        import novaclient.client
-
-        auth_url = CONF.identity.uri
-        dscv = CONF.identity.disable_ssl_certificate_validation
-        region = CONF.identity.region
-
-        client_args = (credentials.username, credentials.password,
-                       credentials.tenant_name, auth_url)
-
-        # Create our default Nova client to use in testing
-        service_type = CONF.compute.catalog_type
-        endpoint_type = CONF.compute.endpoint_type
-        return novaclient.client.Client(self.NOVACLIENT_VERSION,
-                                        *client_args,
-                                        service_type=service_type,
-                                        endpoint_type=endpoint_type,
-                                        region_name=region,
-                                        no_cache=True,
-                                        insecure=dscv,
-                                        http_log_debug=True)
-
-    def _get_image_client(self):
-        if not CONF.service_available.glance:
-            return None
-        import glanceclient
-        token = self.identity_client.auth_token
-        region = CONF.identity.region
-        endpoint_type = CONF.image.endpoint_type
-        endpoint = self.identity_client.service_catalog.url_for(
-            attr='region', filter_value=region,
-            service_type=CONF.image.catalog_type, endpoint_type=endpoint_type)
-        dscv = CONF.identity.disable_ssl_certificate_validation
-        return glanceclient.Client('1', endpoint=endpoint, token=token,
-                                   insecure=dscv)
-
-    def _get_volume_client(self, credentials):
-        if not CONF.service_available.cinder:
-            return None
-        import cinderclient.client
-        auth_url = CONF.identity.uri
-        region = CONF.identity.region
-        endpoint_type = CONF.volume.endpoint_type
-        dscv = CONF.identity.disable_ssl_certificate_validation
-        return cinderclient.client.Client(self.CINDERCLIENT_VERSION,
-                                          credentials.username,
-                                          credentials.password,
-                                          credentials.tenant_name,
-                                          auth_url,
-                                          region_name=region,
-                                          endpoint_type=endpoint_type,
-                                          insecure=dscv,
-                                          http_log_debug=True)
-
-    def _get_object_storage_client(self, credentials):
-        if not CONF.service_available.swift:
-            return None
-        import swiftclient
-        auth_url = CONF.identity.uri
-        # add current tenant to swift operator role group.
-        admin_credentials = auth.get_default_credentials('identity_admin')
-        keystone_admin = self._get_identity_client(admin_credentials)
-
-        # enable test user to operate swift by adding operator role to him.
-        roles = keystone_admin.roles.list()
-        operator_role = CONF.object_storage.operator_role
-        member_role = [role for role in roles if role.name == operator_role][0]
-        # NOTE(maurosr): This is surrounded in the try-except block cause
-        # neutron tests doesn't have tenant isolation.
-        try:
-            keystone_admin.roles.add_user_role(self.identity_client.user_id,
-                                               member_role.id,
-                                               self.identity_client.tenant_id)
-        except keystoneclient.exceptions.Conflict:
-            pass
-
-        endpoint_type = CONF.object_storage.endpoint_type
-        os_options = {'endpoint_type': endpoint_type}
-        return swiftclient.Connection(auth_url, credentials.username,
-                                      credentials.password,
-                                      tenant_name=credentials.tenant_name,
-                                      auth_version='2',
-                                      os_options=os_options)
-
-    def _get_orchestration_client(self, credentials):
-        if not CONF.service_available.heat:
-            return None
-        import heatclient.client
-
-        keystone = self._get_identity_client(credentials)
-        region = CONF.identity.region
-        endpoint_type = CONF.orchestration.endpoint_type
-        token = keystone.auth_token
-        service_type = CONF.orchestration.catalog_type
-        try:
-            endpoint = keystone.service_catalog.url_for(
-                attr='region',
-                filter_value=region,
-                service_type=service_type,
-                endpoint_type=endpoint_type)
-        except keystoneclient.exceptions.EndpointNotFound:
-            return None
-        else:
-            return heatclient.client.Client(self.HEATCLIENT_VERSION,
-                                            endpoint,
-                                            token=token,
-                                            username=credentials.username,
-                                            password=credentials.password)
-
-    def _get_identity_client(self, credentials):
-        # This identity client is not intended to check the security
-        # of the identity service, so use admin credentials by default.
-
-        auth_url = CONF.identity.uri
-        dscv = CONF.identity.disable_ssl_certificate_validation
-
-        return keystoneclient.v2_0.client.Client(
-            username=credentials.username,
-            password=credentials.password,
-            tenant_name=credentials.tenant_name,
-            auth_url=auth_url,
-            insecure=dscv)
-
-    def _get_baremetal_client(self):
-        # ironic client is currently intended to by used by admin users
-        if not CONF.service_available.ironic:
-            return None
-        import ironicclient.client
-        roles = self._get_roles()
-        if CONF.identity.admin_role not in roles:
-            return None
-
-        auth_url = CONF.identity.uri
-        api_version = self.IRONICCLIENT_VERSION
-        insecure = CONF.identity.disable_ssl_certificate_validation
-        service_type = CONF.baremetal.catalog_type
-        endpoint_type = CONF.baremetal.endpoint_type
-        creds = {
-            'os_username': self.credentials.username,
-            'os_password': self.credentials.password,
-            'os_tenant_name': self.credentials.tenant_name
-        }
-
-        try:
-            return ironicclient.client.get_client(
-                api_version=api_version,
-                os_auth_url=auth_url,
-                insecure=insecure,
-                os_service_type=service_type,
-                os_endpoint_type=endpoint_type,
-                **creds)
-        except keystoneclient.exceptions.EndpointNotFound:
-            return None
-
-    def _get_network_client(self):
-        # The intended configuration is for the network client to have
-        # admin privileges and indicate for whom resources are being
-        # created via a 'tenant_id' parameter.  This will often be
-        # preferable to authenticating as a specific user because
-        # working with certain resources (public routers and networks)
-        # often requires admin privileges anyway.
-        if not CONF.service_available.neutron:
-            return None
-        import neutronclient.v2_0.client
-
-        credentials = auth.get_default_credentials('identity_admin')
-
-        auth_url = CONF.identity.uri
-        dscv = CONF.identity.disable_ssl_certificate_validation
-        endpoint_type = CONF.network.endpoint_type
-
-        return neutronclient.v2_0.client.Client(
-            username=credentials.username,
-            password=credentials.password,
-            tenant_name=credentials.tenant_name,
-            endpoint_type=endpoint_type,
-            auth_url=auth_url,
-            insecure=dscv)
-
-    def _get_data_processing_client(self, credentials):
-        if not CONF.service_available.sahara:
-            # Sahara isn't available
-            return None
-
-        import saharaclient.client
-
-        endpoint_type = CONF.data_processing.endpoint_type
-        catalog_type = CONF.data_processing.catalog_type
-        auth_url = CONF.identity.uri
-
-        client = saharaclient.client.Client(
-            self.SAHARACLIENT_VERSION,
-            credentials.username,
-            credentials.password,
-            project_name=credentials.tenant_name,
-            endpoint_type=endpoint_type,
-            service_type=catalog_type,
-            auth_url=auth_url)
-
-        return client
-
-    def _get_ceilometer_client(self, credentials):
-        if not CONF.service_available.ceilometer:
-            return None
-
-        import ceilometerclient.client
-
-        keystone = self._get_identity_client(credentials)
-        region = CONF.identity.region
-
-        endpoint_type = CONF.telemetry.endpoint_type
-        service_type = CONF.telemetry.catalog_type
-        auth_url = CONF.identity.uri
-
-        try:
-            keystone.service_catalog.url_for(
-                attr='region',
-                filter_value=region,
-                service_type=service_type,
-                endpoint_type=endpoint_type)
-        except keystoneclient.exceptions.EndpointNotFound:
-            return None
-        else:
-            return ceilometerclient.client.get_client(
-                self.CEILOMETERCLIENT_VERSION,
-                os_username=credentials.username,
-                os_password=credentials.password,
-                os_tenant_name=credentials.tenant_name,
-                os_auth_url=auth_url,
-                os_service_type=service_type,
-                os_endpoint_type=endpoint_type)
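
The clients.py changes above register several new service clients on the
Manager (Nova networks, messaging, volume QoS specs, and v2 snapshots) and
remove the official-client manager. A minimal sketch of reaching the new
attributes from a manager, assuming valid admin credentials in tempest.conf::

    from tempest import clients

    # AdminManager builds a Manager from the admin credentials in tempest.conf.
    mgr = clients.AdminManager()

    # The clients added by this change are plain attributes on the manager.
    qos_specs = mgr.volume_qos_client       # volume QoS specs (JSON only)
    snapshots_v2 = mgr.snapshots_v2_client  # Cinder v2 snapshots
    nova_networks = mgr.networks_client     # Nova networks
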
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
new file mode 100644
index 0000000..9ae3dfb
--- /dev/null
+++ b/tempest/cmd/cleanup.py
@@ -0,0 +1,301 @@
+#!/usr/bin/env python
+#
+# Copyright 2014 Dell Inc.
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+# @author: David Paterson
+
+"""
+Utility for cleaning up environment after Tempest run
+
+Runtime Arguments
+-----------------
+
+--init-saved-state: Before you can execute cleanup you must initialize
+the saved state by running it with the --init-saved-state flag
+(creating ./saved_state.json), which protects your deployment from
+cleanup deleting objects you want to keep.  Typically you would run
+cleanup with --init-saved-state prior to a tempest run. If this is not
+the case, saved_state.json must be edited to remove the objects you want
+cleanup to delete.
+
+--dry-run: Creates a report (dry_run.json) of the tenants that will be
+cleaned up (in the "_tenants_to_clean" array), and the global objects
+that will be removed (tenants, users, flavors and images).  Once
+cleanup is executed in normal mode, running it again with --dry-run
+should yield an empty report.
+
+**NOTE**: The _tenants_to_clean array in dry_run.json lists the
+tenants whose child objects cleanup will loop through and delete, not
+tenants that will be deleted themselves. This may differ from the tenants
+array because, by default, cleanup cleans the tempest and alternate
+tempest tenants without deleting the tenants themselves.
+
+**Normal mode**: running with no arguments, cleanup queries your deployment
+and builds a list of objects to delete, after filtering out the objects
+found in saved_state.json and applying the
+--preserve-tempest-conf-objects and
+--delete-tempest-conf-objects flags.
+
+By default the tempest and alternate tempest users and tenants are not
+deleted and the admin user specified in tempest.conf is never deleted.
+
+Please run with --help to see full list of options.
+"""
+import argparse
+import json
+import sys
+
+from tempest import auth
+from tempest import clients
+from tempest.cmd import cleanup_service
+from tempest import config
+from tempest.openstack.common import log as logging
+
+SAVED_STATE_JSON = "saved_state.json"
+DRY_RUN_JSON = "dry_run.json"
+LOG = logging.getLogger(__name__)
+CONF = config.CONF
+
+
+class Cleanup(object):
+
+    def __init__(self):
+        self.admin_mgr = clients.AdminManager()
+        self.dry_run_data = {}
+        self.json_data = {}
+        self._init_options()
+
+        self.admin_id = ""
+        self.admin_role_id = ""
+        self.admin_tenant_id = ""
+        self._init_admin_ids()
+
+        self.admin_role_added = []
+
+        # available services
+        self.tenant_services = cleanup_service.get_tenant_cleanup_services()
+        self.global_services = cleanup_service.get_global_cleanup_services()
+        cleanup_service.init_conf()
+
+    def run(self):
+        opts = self.options
+        if opts.init_saved_state:
+            self._init_state()
+            return
+
+        self._load_json()
+        self._cleanup()
+
+    def _cleanup(self):
+        LOG.debug("Begin cleanup")
+        is_dry_run = self.options.dry_run
+        is_preserve = self.options.preserve_tempest_conf_objects
+        is_save_state = False
+
+        if is_dry_run:
+            self.dry_run_data["_tenants_to_clean"] = {}
+            f = open(DRY_RUN_JSON, 'w+')
+
+        admin_mgr = self.admin_mgr
+        # Always clean up the tempest and alt tempest tenants unless
+        # they are in the saved state json; therefore is_preserve is False.
+        kwargs = {'data': self.dry_run_data,
+                  'is_dry_run': is_dry_run,
+                  'saved_state_json': self.json_data,
+                  'is_preserve': False,
+                  'is_save_state': is_save_state}
+        tenant_service = cleanup_service.TenantService(admin_mgr, **kwargs)
+        tenants = tenant_service.list()
+        LOG.debug("Process %s tenants" % len(tenants))
+
+        # Loop through list of tenants and clean them up.
+        for tenant in tenants:
+            self._add_admin(tenant['id'])
+            self._clean_tenant(tenant)
+
+        kwargs = {'data': self.dry_run_data,
+                  'is_dry_run': is_dry_run,
+                  'saved_state_json': self.json_data,
+                  'is_preserve': is_preserve,
+                  'is_save_state': is_save_state}
+        for service in self.global_services:
+            svc = service(admin_mgr, **kwargs)
+            svc.run()
+
+        if is_dry_run:
+            f.write(json.dumps(self.dry_run_data, sort_keys=True,
+                               indent=2, separators=(',', ': ')))
+            f.close()
+
+        self._remove_admin_user_roles()
+
+    def _remove_admin_user_roles(self):
+        tenant_ids = self.admin_role_added
+        LOG.debug("Removing admin user roles where needed for tenants: %s"
+                  % tenant_ids)
+        for tenant_id in tenant_ids:
+            self._remove_admin_role(tenant_id)
+
+    def _clean_tenant(self, tenant):
+        LOG.debug("Cleaning tenant:  %s " % tenant['name'])
+        is_dry_run = self.options.dry_run
+        dry_run_data = self.dry_run_data
+        is_preserve = self.options.preserve_tempest_conf_objects
+        tenant_id = tenant['id']
+        tenant_name = tenant['name']
+        tenant_data = None
+        if is_dry_run:
+            tenant_data = dry_run_data["_tenants_to_clean"][tenant_id] = {}
+            tenant_data['name'] = tenant_name
+
+        kwargs = {"username": CONF.identity.admin_username,
+                  "password": CONF.identity.admin_password,
+                  "tenant_name": tenant['name']}
+        mgr = clients.Manager(credentials=auth.get_credentials(**kwargs))
+        kwargs = {'data': tenant_data,
+                  'is_dry_run': is_dry_run,
+                  'saved_state_json': None,
+                  'is_preserve': is_preserve,
+                  'is_save_state': False,
+                  'tenant_id': tenant_id}
+        for service in self.tenant_services:
+            svc = service(mgr, **kwargs)
+            svc.run()
+
+    def _init_admin_ids(self):
+        id_cl = self.admin_mgr.identity_client
+
+        tenant = id_cl.get_tenant_by_name(CONF.identity.admin_tenant_name)
+        self.admin_tenant_id = tenant['id']
+
+        user = id_cl.get_user_by_username(self.admin_tenant_id,
+                                          CONF.identity.admin_username)
+        self.admin_id = user['id']
+
+        _, roles = id_cl.list_roles()
+        for role in roles:
+            if role['name'] == CONF.identity.admin_role:
+                self.admin_role_id = role['id']
+                break
+
+    def _init_options(self):
+        parser = argparse.ArgumentParser(
+            description='Cleanup after tempest run')
+        parser.add_argument('--init-saved-state', action="store_true",
+                            dest='init_saved_state', default=False,
+                            help="Creates JSON file: " + SAVED_STATE_JSON +
+                            ", representing the current state of your "
+                            "deployment,  specifically objects types "
+                            "Tempest creates and destroys during a run. "
+                            "You must run with this flag prior to "
+                            "executing cleanup.")
+        parser.add_argument('--preserve-tempest-conf-objects',
+                            action="store_true",
+                            dest='preserve_tempest_conf_objects',
+                            default=True, help="Do not delete the "
+                            "tempest and alternate tempest users and "
+                            "tenants, so they may be used for future "
+                            "tempest runs. By default this is argument "
+                            "is true.")
+        parser.add_argument('--delete-tempest-conf-objects',
+                            action="store_false",
+                            dest='preserve_tempest_conf_objects',
+                            default=False,
+                            help="Delete the tempest and "
+                            "alternate tempest users and tenants.")
+        parser.add_argument('--dry-run', action="store_true",
+                            dest='dry_run', default=False,
+                            help="Generate JSON file:" + DRY_RUN_JSON +
+                            ", that reports the objects that would have "
+                            "been deleted had a full cleanup been run.")
+
+        self.options = parser.parse_args()
+
+    def _add_admin(self, tenant_id):
+        id_cl = self.admin_mgr.identity_client
+        needs_role = True
+        _, roles = id_cl.list_user_roles(tenant_id, self.admin_id)
+        for role in roles:
+            if role['id'] == self.admin_role_id:
+                needs_role = False
+                LOG.debug("User already had admin privilege for this tenant")
+        if needs_role:
+            LOG.debug("Adding admin priviledge for : %s" % tenant_id)
+            id_cl.assign_user_role(tenant_id, self.admin_id,
+                                   self.admin_role_id)
+            self.admin_role_added.append(tenant_id)
+
+    def _remove_admin_role(self, tenant_id):
+        LOG.debug("Remove admin user role for tenant: %s" % tenant_id)
+        # Must initialize AdminManager for each user role
+        # Otherwise authentication exception is thrown, weird
+        id_cl = clients.AdminManager().identity_client
+        if (self._tenant_exists(tenant_id)):
+            try:
+                id_cl.remove_user_role(tenant_id, self.admin_id,
+                                       self.admin_role_id)
+            except Exception as ex:
+                LOG.exception("Failed removing role from tenant which still"
+                              "exists, exception: %s" % ex)
+
+    def _tenant_exists(self, tenant_id):
+        id_cl = self.admin_mgr.identity_client
+        try:
+            t = id_cl.get_tenant(tenant_id)
+            LOG.debug("Tenant is: %s" % str(t))
+            return True
+        except Exception as ex:
+            LOG.debug("Tenant no longer exists? %s" % ex)
+            return False
+
+    def _init_state(self):
+        LOG.debug("Initializing saved state.")
+        data = {}
+        admin_mgr = self.admin_mgr
+        kwargs = {'data': data,
+                  'is_dry_run': False,
+                  'saved_state_json': data,
+                  'is_preserve': False,
+                  'is_save_state': True}
+        for service in self.global_services:
+            svc = service(admin_mgr, **kwargs)
+            svc.run()
+
+        f = open(SAVED_STATE_JSON, 'w+')
+        f.write(json.dumps(data,
+                           sort_keys=True, indent=2, separators=(',', ': ')))
+        f.close()
+
+    def _load_json(self):
+        try:
+            json_file = open(SAVED_STATE_JSON)
+            self.json_data = json.load(json_file)
+            json_file.close()
+        except IOError as ex:
+            LOG.exception("Failed loading saved state, please be sure you"
+                          " have first run cleanup with --init-saved-state "
+                          "flag prior to running tempest. Exception: %s" % ex)
+            sys.exit(ex)
+        except Exception as ex:
+            LOG.exception("Exception parsing saved state json : %s" % ex)
+            sys.exit(ex)
+
+
+def main():
+    cleanup = Cleanup()
+    cleanup.run()
+    LOG.info('Cleanup finished!')
+    return 0
+
+if __name__ == "__main__":
+    sys.exit(main())
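
The docstring at the top of cleanup.py describes the intended workflow:
snapshot the deployment with --init-saved-state, run Tempest, optionally
--dry-run, then clean up. A minimal sketch of inspecting the resulting
saved_state.json, assuming it was produced by --init-saved-state and follows
the id-to-name layout written by the save_state() methods in
cleanup_service.py below::

    import json

    with open('saved_state.json') as f:
        state = json.load(f)

    # Each global service stores a {resource_id: name} map under its own key.
    for kind in ('flavors', 'images', 'users', 'roles', 'tenants'):
        preserved = state.get(kind, {})
        print("%d %s are protected from cleanup" % (len(preserved), kind))
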
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
new file mode 100644
index 0000000..f5f0db3
--- /dev/null
+++ b/tempest/cmd/cleanup_service.py
@@ -0,0 +1,1066 @@
+#!/usr/bin/env python
+
+# Copyright 2014 Dell Inc.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+'''
+Created on Sep 3, 2014
+
+@author: David_Paterson
+'''
+from tempest import config
+from tempest.openstack.common import log as logging
+from tempest import test
+
+LOG = logging.getLogger(__name__)
+CONF = config.CONF
+
+CONF_USERS = None
+CONF_TENANTS = None
+CONF_PUB_NETWORK = None
+CONF_PRIV_NETWORK_NAME = None
+CONF_PUB_ROUTER = None
+CONF_FLAVORS = None
+CONF_IMAGES = None
+
+IS_CEILOMETER = None
+IS_CINDER = None
+IS_GLANCE = None
+IS_HEAT = None
+IS_NEUTRON = None
+IS_NOVA = None
+
+
+def init_conf():
+    global CONF_USERS
+    global CONF_TENANTS
+    global CONF_PUB_NETWORK
+    global CONF_PRIV_NETWORK_NAME
+    global CONF_PUB_ROUTER
+    global CONF_FLAVORS
+    global CONF_IMAGES
+
+    global IS_CEILOMETER
+    global IS_CINDER
+    global IS_GLANCE
+    global IS_HEAT
+    global IS_NEUTRON
+    global IS_NOVA
+
+    CONF_USERS = [CONF.identity.admin_username, CONF.identity.username,
+                  CONF.identity.alt_username]
+    CONF_TENANTS = [CONF.identity.admin_tenant_name,
+                    CONF.identity.tenant_name,
+                    CONF.identity.alt_tenant_name]
+    CONF_PUB_NETWORK = CONF.network.public_network_id
+    CONF_PRIV_NETWORK_NAME = CONF.compute.fixed_network_name
+    CONF_PUB_ROUTER = CONF.network.public_router_id
+    CONF_FLAVORS = [CONF.compute.flavor_ref, CONF.compute.flavor_ref_alt]
+    CONF_IMAGES = [CONF.compute.image_ref, CONF.compute.image_ref_alt]
+
+    IS_CEILOMETER = CONF.service_available.ceilometer
+    IS_CINDER = CONF.service_available.cinder
+    IS_GLANCE = CONF.service_available.glance
+    IS_HEAT = CONF.service_available.heat
+    IS_NEUTRON = CONF.service_available.neutron
+    IS_NOVA = CONF.service_available.nova
+
+
+class BaseService(object):
+    def __init__(self, kwargs):
+        self.client = None
+        for key, value in kwargs.items():
+            setattr(self, key, value)
+
+    def _filter_by_tenant_id(self, item_list):
+        if (item_list is None
+                or len(item_list) == 0
+                or not hasattr(self, 'tenant_id')
+                or self.tenant_id is None
+                or 'tenant_id' not in item_list[0]):
+            return item_list
+
+        _filtered_list = []
+        for item in item_list:
+            if item['tenant_id'] == self.tenant_id:
+                _filtered_list.append(item)
+        return _filtered_list
+
+    def list(self):
+        pass
+
+    def delete(self):
+        pass
+
+    def dry_run(self):
+        pass
+
+    def save_state(self):
+        pass
+
+    def run(self):
+        if self.is_dry_run:
+            self.dry_run()
+        elif self.is_save_state:
+            self.save_state()
+        else:
+            self.delete()
+
+
+class SnapshotService(BaseService):
+
+    def __init__(self, manager, **kwargs):
+        super(SnapshotService, self).__init__(kwargs)
+        self.client = manager.snapshots_client
+
+    def list(self):
+        client = self.client
+        __, snaps = client.list_snapshots()
+        LOG.debug("List count, %s Snapshots" % len(snaps))
+        return snaps
+
+    def delete(self):
+        snaps = self.list()
+        client = self.client
+        for snap in snaps:
+            try:
+                client.delete_snapshot(snap['id'])
+            except Exception as e:
+                LOG.exception("Delete Snapshot exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        snaps = self.list()
+        self.data['snapshots'] = snaps
+
+
+class ServerService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(ServerService, self).__init__(kwargs)
+        self.client = manager.servers_client
+
+    def list(self):
+        client = self.client
+        _, servers_body = client.list_servers()
+        servers = servers_body['servers']
+        LOG.debug("List count, %s Servers" % len(servers))
+        return servers
+
+    def delete(self):
+        client = self.client
+        servers = self.list()
+        for server in servers:
+            try:
+                client.delete_server(server['id'])
+            except Exception as e:
+                LOG.exception("Delete Server exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        servers = self.list()
+        self.data['servers'] = servers
+
+
+class ServerGroupService(ServerService):
+
+    def list(self):
+        client = self.client
+        _, sgs = client.list_server_groups()
+        LOG.debug("List count, %s Server Groups" % len(sgs))
+        return sgs
+
+    def delete(self):
+        client = self.client
+        sgs = self.list()
+        for sg in sgs:
+            try:
+                client.delete_server_group(sg['id'])
+            except Exception as e:
+                LOG.exception("Delete Server Group exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        sgs = self.list()
+        self.data['server_groups'] = sgs
+
+
+class StackService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(StackService, self).__init__(kwargs)
+        self.client = manager.orchestration_client
+
+    def list(self):
+        client = self.client
+        _, stacks = client.list_stacks()
+        LOG.debug("List count, %s Stacks" % len(stacks))
+        return stacks
+
+    def delete(self):
+        client = self.client
+        stacks = self.list()
+        for stack in stacks:
+            try:
+                client.delete_stack(stack['id'])
+            except Exception as e:
+                LOG.exception("Delete Stack exception: %s " % e)
+                pass
+
+    def dry_run(self):
+        stacks = self.list()
+        self.data['stacks'] = stacks
+
+
+class KeyPairService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(KeyPairService, self).__init__(kwargs)
+        self.client = manager.keypairs_client
+
+    def list(self):
+        client = self.client
+        _, keypairs = client.list_keypairs()
+        LOG.debug("List count, %s Keypairs" % len(keypairs))
+        return keypairs
+
+    def delete(self):
+        client = self.client
+        keypairs = self.list()
+        for k in keypairs:
+            try:
+                name = k['keypair']['name']
+                client.delete_keypair(name)
+            except Exception as e:
+                LOG.exception("Delete Keypairs exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        keypairs = self.list()
+        self.data['keypairs'] = keypairs
+
+
+class SecurityGroupService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(SecurityGroupService, self).__init__(kwargs)
+        self.client = manager.security_groups_client
+
+    def list(self):
+        client = self.client
+        _, secgrps = client.list_security_groups()
+        secgrp_del = [grp for grp in secgrps if grp['name'] != 'default']
+        LOG.debug("List count, %s Security Groups" % len(secgrp_del))
+        return secgrp_del
+
+    def delete(self):
+        client = self.client
+        secgrp_del = self.list()
+        for g in secgrp_del:
+            try:
+                client.delete_security_group(g['id'])
+            except Exception as e:
+                LOG.exception("Delete Security Groups exception: %s" % e)
+
+    def dry_run(self):
+        secgrp_del = self.list()
+        self.data['security_groups'] = secgrp_del
+
+
+class FloatingIpService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(FloatingIpService, self).__init__(kwargs)
+        self.client = manager.floating_ips_client
+
+    def list(self):
+        client = self.client
+        _, floating_ips = client.list_floating_ips()
+        LOG.debug("List count, %s Floating IPs" % len(floating_ips))
+        return floating_ips
+
+    def delete(self):
+        client = self.client
+        floating_ips = self.list()
+        for f in floating_ips:
+            try:
+                client.delete_floating_ip(f['id'])
+            except Exception as e:
+                LOG.exception("Delete Floating IPs exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        floating_ips = self.list()
+        self.data['floating_ips'] = floating_ips
+
+
+class VolumeService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(VolumeService, self).__init__(kwargs)
+        self.client = manager.volumes_client
+
+    def list(self):
+        client = self.client
+        _, vols = client.list_volumes()
+        LOG.debug("List count, %s Volumes" % len(vols))
+        return vols
+
+    def delete(self):
+        client = self.client
+        vols = self.list()
+        for v in vols:
+            try:
+                client.delete_volume(v['id'])
+            except Exception as e:
+                LOG.exception("Delete Volume exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        vols = self.list()
+        self.data['volumes'] = vols
+
+
+# Begin network service classes
+class NetworkService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(NetworkService, self).__init__(kwargs)
+        self.client = manager.network_client
+
+    def list(self):
+        client = self.client
+        _, networks = client.list_networks()
+        networks = self._filter_by_tenant_id(networks['networks'])
+        # filter out networks declared in tempest.conf
+        if self.is_preserve:
+            networks = [network for network in networks
+                        if (network['name'] != CONF_PRIV_NETWORK_NAME
+                            and network['id'] != CONF_PUB_NETWORK)]
+        LOG.debug("List count, %s Networks" % networks)
+        return networks
+
+    def delete(self):
+        client = self.client
+        networks = self.list()
+        for n in networks:
+            try:
+                client.delete_network(n['id'])
+            except Exception as e:
+                LOG.exception("Delete Network exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        networks = self.list()
+        self.data['networks'] = networks
+
+
+class NetworkIpSecPolicyService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, ipsecpols = client.list_ipsecpolicies()
+        ipsecpols = ipsecpols['ipsecpolicies']
+        ipsecpols = self._filter_by_tenant_id(ipsecpols)
+        LOG.debug("List count, %s IP Security Policies" % len(ipsecpols))
+        return ipsecpols
+
+    def delete(self):
+        client = self.client
+        ipsecpols = self.list()
+        for ipsecpol in ipsecpols:
+            try:
+                client.delete_ipsecpolicy(ipsecpol['id'])
+            except Exception as e:
+                LOG.exception("Delete IP Securty Policy exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        ipsecpols = self.list()
+        self.data['ip_security_policies'] = ipsecpols
+
+
+class NetworkFwPolicyService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, fwpols = client.list_firewall_policies()
+        fwpols = fwpols['firewall_policies']
+        fwpols = self._filter_by_tenant_id(fwpols)
+        LOG.debug("List count, %s Firewall Policies" % len(fwpols))
+        return fwpols
+
+    def delete(self):
+        client = self.client
+        fwpols = self.list()
+        for fwpol in fwpols:
+            try:
+                client.delete_firewall_policy(fwpol['id'])
+            except Exception as e:
+                LOG.exception("Delete Firewall Policy exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        fwpols = self.list()
+        self.data['firewall_policies'] = fwpols
+
+
+class NetworkFwRulesService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, fwrules = client.list_firewall_rules()
+        fwrules = fwrules['firewall_rules']
+        fwrules = self._filter_by_tenant_id(fwrules)
+        LOG.debug("List count, %s Firewall Rules" % len(fwrules))
+        return fwrules
+
+    def delete(self):
+        client = self.client
+        fwrules = self.list()
+        for fwrule in fwrules:
+            try:
+                client.delete_firewall_rule(fwrule['id'])
+            except Exception as e:
+                LOG.exception("Delete Firewall Rule exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        fwrules = self.list()
+        self.data['firewall_rules'] = fwrules
+
+
+class NetworkIkePolicyService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, ikepols = client.list_ikepolicies()
+        ikepols = ikepols['ikepolicies']
+        ikepols = self._filter_by_tenant_id(ikepols)
+        LOG.debug("List count, %s IKE Policies" % len(ikepols))
+        return ikepols
+
+    def delete(self):
+        client = self.client
+        ikepols = self.list()
+        for ikepol in ikepols:
+            try:
+                client.delete_ikepolicy(ikepol['id'])
+            except Exception as e:
+                LOG.exception("Delete IKE Policy exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        ikepols = self.list()
+        self.data['ike_policies'] = ikepols
+
+
+class NetworkVpnServiceService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, vpnsrvs = client.list_vpnservices()
+        vpnsrvs = vpnsrvs['vpnservices']
+        vpnsrvs = self._filter_by_tenant_id(vpnsrvs)
+        LOG.debug("List count, %s VPN Services" % len(vpnsrvs))
+        return vpnsrvs
+
+    def delete(self):
+        client = self.client
+        vpnsrvs = self.list()
+        for vpnsrv in vpnsrvs:
+            try:
+                client.delete_vpnservice(vpnsrv['id'])
+            except Exception as e:
+                LOG.exception("Delete VPN Service exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        vpnsrvs = self.list()
+        self.data['vpn_services'] = vpnsrvs
+
+
+class NetworkFloatingIpService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, flips = client.list_floatingips()
+        flips = flips['floatingips']
+        flips = self._filter_by_tenant_id(flips)
+        LOG.debug("List count, %s Network Floating IPs" % len(flips))
+        return flips
+
+    def delete(self):
+        client = self.client
+        flips = self.list()
+        for flip in flips:
+            try:
+                client.delete_floatingip(flip['id'])
+            except Exception as e:
+                LOG.exception("Delete Network Floating IP exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        flips = self.list()
+        self.data['floating_ips'] = flips
+
+
+class NetworkRouterService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, routers = client.list_routers()
+        routers = routers['routers']
+        routers = self._filter_by_tenant_id(routers)
+        if self.is_preserve:
+            routers = [router for router in routers
+                       if router['id'] != CONF_PUB_ROUTER]
+
+        LOG.debug("List count, %s Routers" % len(routers))
+        return routers
+
+    def delete(self):
+        client = self.client
+        routers = self.list()
+        for router in routers:
+            try:
+                rid = router['id']
+                _, ports = client.list_router_interfaces(rid)
+                ports = ports['ports']
+                for port in ports:
+                    subid = port['fixed_ips'][0]['subnet_id']
+                    client.remove_router_interface_with_subnet_id(rid, subid)
+                client.delete_router(rid)
+            except Exception as e:
+                LOG.exception("Delete Router exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        routers = self.list()
+        self.data['routers'] = routers
+
+
+class NetworkHealthMonitorService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, hms = client.list_health_monitors()
+        hms = hms['health_monitors']
+        hms = self._filter_by_tenant_id(hms)
+        LOG.debug("List count, %s Health Monitors" % len(hms))
+        return hms
+
+    def delete(self):
+        client = self.client
+        hms = self.list()
+        for hm in hms:
+            try:
+                client.delete_health_monitor(hm['id'])
+            except Exception as e:
+                LOG.exception("Delete Health Monitor exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        hms = self.list()
+        self.data['health_monitors'] = hms
+
+
+class NetworkMemberService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, members = client.list_members()
+        members = members['members']
+        members = self._filter_by_tenant_id(members)
+        LOG.debug("List count, %s Members" % len(members))
+        return members
+
+    def delete(self):
+        client = self.client
+        members = self.list()
+        for member in members:
+            try:
+                client.delete_member(member['id'])
+            except Exception as e:
+                LOG.exception("Delete Member exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        members = self.list()
+        self.data['members'] = members
+
+
+class NetworkVipService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, vips = client.list_vips()
+        vips = vips['vips']
+        vips = self._filter_by_tenant_id(vips)
+        LOG.debug("List count, %s VIPs" % len(vips))
+        return vips
+
+    def delete(self):
+        client = self.client
+        vips = self.list()
+        for vip in vips:
+            try:
+                client.delete_vip(vip['id'])
+            except Exception as e:
+                LOG.exception("Delete VIP exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        vips = self.list()
+        self.data['vips'] = vips
+
+
+class NetworkPoolService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, pools = client.list_pools()
+        pools = pools['pools']
+        pools = self._filter_by_tenant_id(pools)
+        LOG.debug("List count, %s Pools" % len(pools))
+        return pools
+
+    def delete(self):
+        client = self.client
+        pools = self.list()
+        for pool in pools:
+            try:
+                client.delete_pool(pool['id'])
+            except Exception as e:
+                LOG.exception("Delete Pool exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        pools = self.list()
+        self.data['pools'] = pools
+
+
+class NetworMeteringLabelRuleService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, rules = client.list_metering_label_rules()
+        rules = rules['metering_label_rules']
+        rules = self._filter_by_tenant_id(rules)
+        LOG.debug("List count, %s Metering Label Rules" % len(rules))
+        return rules
+
+    def delete(self):
+        client = self.client
+        rules = self.list()
+        for rule in rules:
+            try:
+                client.delete_metering_label_rule(rule['id'])
+            except Exception as e:
+                LOG.exception("Delete Metering Label Rule exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        rules = self.list()
+        self.data['rules'] = rules
+
+
+class NetworMeteringLabelService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, labels = client.list_metering_labels()
+        labels = labels['metering_labels']
+        labels = self._filter_by_tenant_id(labels)
+        LOG.debug("List count, %s Metering Labels" % len(labels))
+        return labels
+
+    def delete(self):
+        client = self.client
+        labels = self.list()
+        for label in labels:
+            try:
+                client.delete_metering_label(label['id'])
+            except Exception as e:
+                LOG.exception("Delete Metering Label exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        labels = self.list()
+        self.data['labels'] = labels
+
+
+class NetworkPortService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, ports = client.list_ports()
+        ports = ports['ports']
+        ports = self._filter_by_tenant_id(ports)
+        LOG.debug("List count, %s Ports" % len(ports))
+        return ports
+
+    def delete(self):
+        client = self.client
+        ports = self.list()
+        for port in ports:
+            try:
+                client.delete_port(port['id'])
+            except Exception as e:
+                LOG.exception("Delete Port exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        ports = self.list()
+        self.data['ports'] = ports
+
+
+class NetworkSubnetService(NetworkService):
+
+    def list(self):
+        client = self.client
+        _, subnets = client.list_subnets()
+        subnets = subnets['subnets']
+        subnets = self._filter_by_tenant_id(subnets)
+        LOG.debug("List count, %s Subnets" % len(subnets))
+        return subnets
+
+    def delete(self):
+        client = self.client
+        subnets = self.list()
+        for subnet in subnets:
+            try:
+                client.delete_subnet(subnet['id'])
+            except Exception as e:
+                LOG.exception("Delete Subnet exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        subnets = self.list()
+        self.data['subnets'] = subnets
+
+
+# Telemetry services
+class TelemetryAlarmService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(TelemetryAlarmService, self).__init__(kwargs)
+        self.client = manager.telemetry_client
+
+    def list(self):
+        client = self.client
+        _, alarms = client.list_alarms()
+        LOG.debug("List count, %s Alarms" % len(alarms))
+        return alarms
+
+    def delete(self):
+        client = self.client
+        alarms = self.list()
+        for alarm in alarms:
+            try:
+                client.delete_alarm(alarm['id'])
+            except Exception as e:
+                LOG.exception("Delete Alarms exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        alarms = self.list()
+        self.data['alarms'] = alarms
+
+
+# begin global services
+class FlavorService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(FlavorService, self).__init__(kwargs)
+        self.client = manager.flavors_client
+
+    def list(self):
+        client = self.client
+        _, flavors = client.list_flavors({"is_public": None})
+        if not self.is_save_state:
+            # recreate list removing saved flavors
+            flavors = [flavor for flavor in flavors if flavor['id']
+                       not in self.saved_state_json['flavors'].keys()]
+
+        if self.is_preserve:
+            flavors = [flavor for flavor in flavors
+                       if flavor['id'] not in CONF_FLAVORS]
+        LOG.debug("List count, %s Flavors after reconcile" % len(flavors))
+        return flavors
+
+    def delete(self):
+        client = self.client
+        flavors = self.list()
+        for flavor in flavors:
+            try:
+                client.delete_flavor(flavor['id'])
+            except Exception as e:
+                LOG.exception("Delete Flavor exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        flavors = self.list()
+        self.data['flavors'] = flavors
+
+    def save_state(self):
+        flavors = self.list()
+        flavor_data = self.data['flavors'] = {}
+        for flavor in flavors:
+            flavor_data[flavor['id']] = flavor['name']
+
+
+class ImageService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(ImageService, self).__init__(kwargs)
+        self.client = manager.images_client
+
+    def list(self):
+        client = self.client
+        _, images = client.list_images({"all_tenants": True})
+        if not self.is_save_state:
+            images = [image for image in images if image['id']
+                      not in self.saved_state_json['images'].keys()]
+        if self.is_preserve:
+            images = [image for image in images
+                      if image['id'] not in CONF_IMAGES]
+        LOG.debug("List count, %s Images after reconcile" % len(images))
+        return images
+
+    def delete(self):
+        client = self.client
+        images = self.list()
+        for image in images:
+            try:
+                client.delete_image(image['id'])
+            except Exception as e:
+                LOG.exception("Delete Image exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        images = self.list()
+        self.data['images'] = images
+
+    def save_state(self):
+        images = self.list()
+        image_data = self.data['images'] = {}
+        for image in images:
+            image_data[image['id']] = image['name']
+
+
+class IdentityService(BaseService):
+    def __init__(self, manager, **kwargs):
+        super(IdentityService, self).__init__(kwargs)
+        self.client = manager.identity_client
+
+
+class UserService(IdentityService):
+
+    def list(self):
+        client = self.client
+        _, users = client.get_users()
+
+        if not self.is_save_state:
+            users = [user for user in users if user['id']
+                     not in self.saved_state_json['users'].keys()]
+
+        if self.is_preserve:
+            users = [user for user in users if user['name']
+                     not in CONF_USERS]
+
+        elif not self.is_save_state:  # Never delete admin user
+            users = [user for user in users if user['name'] !=
+                     CONF.identity.admin_username]
+
+        LOG.debug("List count, %s Users after reconcile" % len(users))
+        return users
+
+    def delete(self):
+        client = self.client
+        users = self.list()
+        for user in users:
+            try:
+                client.delete_user(user['id'])
+            except Exception as e:
+                LOG.exception("Delete User exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        users = self.list()
+        self.data['users'] = users
+
+    def save_state(self):
+        users = self.list()
+        user_data = self.data['users'] = {}
+        for user in users:
+            user_data[user['id']] = user['name']
+
+
+class RoleService(IdentityService):
+
+    def list(self):
+        client = self.client
+        try:
+            _, roles = client.list_roles()
+            # reconcile roles with saved state and never list admin role
+            if not self.is_save_state:
+                roles = [role for role in roles if
+                         (role['id'] not in
+                          self.saved_state_json['roles'].keys()
+                          and role['name'] != CONF.identity.admin_role)]
+                LOG.debug("List count, %s Roles after reconcile" % len(roles))
+            return roles
+        except Exception as ex:
+            LOG.exception("Cannot retrieve Roles, exception: %s" % ex)
+            return []
+
+    def delete(self):
+        client = self.client
+        roles = self.list()
+        for role in roles:
+            try:
+                client.delete_role(role['id'])
+            except Exception as e:
+                LOG.exception("Delete Role exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        roles = self.list()
+        self.data['roles'] = roles
+
+    def save_state(self):
+        roles = self.list()
+        role_data = self.data['roles'] = {}
+        for role in roles:
+            role_data[role['id']] = role['name']
+
+
+class TenantService(IdentityService):
+
+    def list(self):
+        client = self.client
+        _, tenants = client.list_tenants()
+        if not self.is_save_state:
+            tenants = [tenant for tenant in tenants if (tenant['id']
+                       not in self.saved_state_json['tenants'].keys()
+                       and tenant['name'] != CONF.identity.admin_tenant_name)]
+
+        if self.is_preserve:
+            tenants = [tenant for tenant in tenants if tenant['name']
+                       not in CONF_TENANTS]
+
+        LOG.debug("List count, %s Tenants after reconcile" % len(tenants))
+        return tenants
+
+    def delete(self):
+        client = self.client
+        tenants = self.list()
+        for tenant in tenants:
+            try:
+                client.delete_tenant(tenant['id'])
+            except Exception as e:
+                LOG.exception("Delete Tenant exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        tenants = self.list()
+        self.data['tenants'] = tenants
+
+    def save_state(self):
+        tenants = self.list()
+        tenant_data = self.data['tenants'] = {}
+        for tenant in tenants:
+            tenant_data[tenant['id']] = tenant['name']
+
+
+class DomainService(BaseService):
+
+    def __init__(self, manager, **kwargs):
+        super(DomainService, self).__init__(kwargs)
+        self.client = manager.identity_v3_client
+
+    def list(self):
+        client = self.client
+        _, domains = client.list_domains()
+        if not self.is_save_state:
+            domains = [domain for domain in domains if domain['id']
+                       not in self.saved_state_json['domains'].keys()]
+
+        LOG.debug("List count, %s Domains after reconcile" % len(domains))
+        return domains
+
+    def delete(self):
+        client = self.client
+        domains = self.list()
+        for domain in domains:
+            try:
+                client.update_domain(domain['id'], enabled=False)
+                client.delete_domain(domain['id'])
+            except Exception as e:
+                LOG.exception("Delete Domain exception: %s" % e)
+                pass
+
+    def dry_run(self):
+        domains = self.list()
+        self.data['domains'] = domains
+
+    def save_state(self):
+        domains = self.list()
+        domain_data = self.data['domains'] = {}
+        for domain in domains:
+            domain_data[domain['id']] = domain['name']
+
+
+def get_tenant_cleanup_services():
+    tenant_services = []
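+    # Note (editorial, an assumption about the caller): services appear to be
+    # appended in rough dependency order (e.g. routers, ports and subnets
+    # before NetworkService itself), so cleaning them up in list order removes
+    # dependent resources first.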
+
+    if IS_CEILOMETER:
+        tenant_services.append(TelemetryAlarmService)
+    if IS_NOVA:
+        tenant_services.append(ServerService)
+        tenant_services.append(KeyPairService)
+        tenant_services.append(SecurityGroupService)
+        tenant_services.append(ServerGroupService)
+        if not IS_NEUTRON:
+            tenant_services.append(FloatingIpService)
+    if IS_HEAT:
+        tenant_services.append(StackService)
+    if IS_NEUTRON:
+        if test.is_extension_enabled('vpnaas', 'network'):
+            tenant_services.append(NetworkIpSecPolicyService)
+            tenant_services.append(NetworkIkePolicyService)
+            tenant_services.append(NetworkVpnServiceService)
+        if test.is_extension_enabled('fwaas', 'network'):
+            tenant_services.append(NetworkFwPolicyService)
+            tenant_services.append(NetworkFwRulesService)
+        if test.is_extension_enabled('lbaas', 'network'):
+            tenant_services.append(NetworkHealthMonitorService)
+            tenant_services.append(NetworkMemberService)
+            tenant_services.append(NetworkVipService)
+            tenant_services.append(NetworkPoolService)
+        if test.is_extension_enabled('metering', 'network'):
+            tenant_services.append(NetworMeteringLabelRuleService)
+            tenant_services.append(NetworMeteringLabelService)
+        tenant_services.append(NetworkRouterService)
+        tenant_services.append(NetworkFloatingIpService)
+        tenant_services.append(NetworkPortService)
+        tenant_services.append(NetworkSubnetService)
+        tenant_services.append(NetworkService)
+    if IS_CINDER:
+        tenant_services.append(SnapshotService)
+        tenant_services.append(VolumeService)
+    return tenant_services
+
+
+def get_global_cleanup_services():
+    global_services = []
+    if IS_NOVA:
+        global_services.append(FlavorService)
+    if IS_GLANCE:
+        global_services.append(ImageService)
+    global_services.append(UserService)
+    global_services.append(TenantService)
+    global_services.append(DomainService)
+    global_services.append(RoleService)
+    return global_services
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index 3616a82..3c41dd9 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -19,23 +19,26 @@
 
 """
 
-import logging
+import argparse
+import datetime
 import os
 import sys
 import unittest
-import yaml
 
-import argparse
+import yaml
 
 import tempest.auth
 from tempest import config
 from tempest import exceptions
+from tempest.openstack.common import log as logging
+from tempest.openstack.common import timeutils
 from tempest.services.compute.json import flavors_client
 from tempest.services.compute.json import servers_client
 from tempest.services.identity.json import identity_client
 from tempest.services.image.v2.json import image_client
 from tempest.services.object_storage import container_client
 from tempest.services.object_storage import object_client
+from tempest.services.telemetry.json import telemetry_client
 from tempest.services.volume.json import volumes_client
 
 OPTS = {}
@@ -44,6 +47,8 @@
 
 LOG = None
 
+JAVELIN_START = datetime.datetime.utcnow()
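+# Captured at import time; in check mode the telemetry verification asserts
+# that the oldest sample predates this javelin run.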
+
 
 class OSClient(object):
     _creds = None
@@ -62,6 +67,7 @@
         self.containers = container_client.ContainerClient(_auth)
         self.images = image_client.ImageClientV2JSON(_auth)
         self.flavors = flavors_client.FlavorsClientJSON(_auth)
+        self.telemetry = telemetry_client.TelemetryClientJSON(_auth)
         self.volumes = volumes_client.VolumesClientJSON(_auth)
 
 
@@ -104,6 +110,13 @@
         else:
             LOG.warn("Tenant '%s' already exists in this environment" % tenant)
 
+
+def destroy_tenants(tenants):
+    admin = keystone_admin()
+    for tenant in tenants:
+        tenant_id = admin.identity.get_tenant_by_name(tenant)['id']
+        r, body = admin.identity.delete_tenant(tenant_id)
+
 ##############
 #
 # USERS
@@ -168,6 +181,15 @@
                 enabled=True)
 
 
+def destroy_users(users):
+    admin = keystone_admin()
+    for user in users:
+        tenant_id = admin.identity.get_tenant_by_name(user['tenant'])['id']
+        user_id = admin.identity.get_user_by_username(tenant_id,
+                                                      user['name'])['id']
+        r, body = admin.identity.delete_user(user_id)
+
+
 def collect_users(users):
     global USERS
     LOG.info("Collecting users")
@@ -193,9 +215,8 @@
         self.check_users()
         self.check_objects()
         self.check_servers()
-        # TODO(sdague): Volumes not yet working, bring it back once the
-        # code is self testing.
-        # self.check_volumes()
+        self.check_volumes()
+        self.check_telemetry()
 
     def check_users(self):
         """Check that the users we expect to exist, do.
@@ -249,8 +270,28 @@
                 if return_code is 0:
                     break
             self.assertNotEqual(count, 59,
-                               "Server %s is not pingable at %s" % (
-                               server['name'], addr))
+                                "Server %s is not pingable at %s" % (
+                                    server['name'], addr))
+
+    def check_telemetry(self):
+        """Check that ceilometer provides a sane sample.
+
+        Confirm that there is at least one sample and that it has the
+        expected metadata.
+
+        If in check mode confirm that the oldest sample available is from
+        before the upgrade.
+        """
+        LOG.info("checking telemetry")
+        for server in self.res['servers']:
+            client = client_for_user(server['owner'])
+            response, body = client.telemetry.list_samples(
+                'instance',
+                query=('metadata.display_name', 'eq', server['name'])
+            )
+            self.assertEqual(response.status, 200)
+            self.assertTrue(len(body) >= 1, 'expecting at least one sample')
+            self._confirm_telemetry_sample(server, body[-1])
 
     def check_volumes(self):
         """Check that the volumes are still there and attached."""
@@ -259,17 +300,37 @@
         LOG.info("checking volumes")
         for volume in self.res['volumes']:
             client = client_for_user(volume['owner'])
-            found = _get_volume_by_name(client, volume['name'])
+            vol_body = _get_volume_by_name(client, volume['name'])
             self.assertIsNotNone(
-                found,
+                vol_body,
                 "Couldn't find expected volume %s" % volume['name'])
 
             # Verify that a volume's attachment retrieved
             server_id = _get_server_by_name(client, volume['server'])['id']
-            attachment = self.client.get_attachment_from_volume(volume)
-            self.assertEqual(volume['id'], attachment['volume_id'])
+            attachment = client.volumes.get_attachment_from_volume(vol_body)
+            self.assertEqual(vol_body['id'], attachment['volume_id'])
             self.assertEqual(server_id, attachment['server_id'])
 
+    def _confirm_telemetry_sample(self, server, sample):
+        """Check this sample matches the expected resource metadata."""
+        # Confirm display_name
+        self.assertEqual(server['name'],
+                         sample['resource_metadata']['display_name'])
+        # Confirm instance_type of flavor
+        flavor = sample['resource_metadata'].get(
+            'flavor.name',
+            sample['resource_metadata'].get('instance_type')
+        )
+        self.assertEqual(server['flavor'], flavor)
+        # Confirm the oldest sample was created before upgrade.
+        if OPTS.mode == 'check':
+            oldest_timestamp = timeutils.normalize_time(
+                timeutils.parse_isotime(sample['timestamp']))
+            self.assertTrue(
+                oldest_timestamp < JAVELIN_START,
+                'timestamp should come before start of second javelin run'
+            )
+
 
 #######################
 #
@@ -296,6 +357,15 @@
             obj['container'], obj['name'],
             _file_contents(obj['file']))
 
+
+def destroy_objects(objects):
+    for obj in objects:
+        client = client_for_user(obj['owner'])
+        r, body = client.objects.delete_object(obj['container'], obj['name'])
+        if not (200 <= int(r['status']) < 299):
+            raise ValueError("unable to destroy object: [%s] %s" % (r, body))
+
+
 #######################
 #
 # IMAGES
@@ -401,7 +471,7 @@
         image_id = _get_image_by_name(client, server['image'])['id']
         flavor_id = _get_flavor_by_name(client, server['flavor'])['id']
         resp, body = client.servers.create_server(server['name'], image_id,
-                                                 flavor_id)
+                                                  flavor_id)
         server_id = body['id']
         client.servers.wait_for_server_status(server_id, 'ACTIVE')
 
@@ -420,7 +490,7 @@
 
         client.servers.delete_server(response['id'])
         client.servers.wait_for_server_termination(response['id'],
-                ignore_error=True)
+                                                   ignore_error=True)
 
 
 #######################
@@ -431,8 +501,8 @@
 
 def _get_volume_by_name(client, name):
     r, body = client.volumes.list_volumes()
-    for volume in body['volumes']:
-        if name == volume['name']:
+    for volume in body:
+        if name == volume['display_name']:
             return volume
     return None
 
@@ -442,19 +512,32 @@
         client = client_for_user(volume['owner'])
 
         # only create a volume if the name isn't here
-        r, body = client.volumes.list_volumes()
-        if any(item['name'] == volume['name'] for item in body):
+        if _get_volume_by_name(client, volume['name']):
+            LOG.info("volume '%s' already exists" % volume['name'])
             continue
 
-        client.volumes.create_volume(volume['name'], volume['size'])
+        size = volume['gb']
+        v_name = volume['name']
+        resp, body = client.volumes.create_volume(size=size,
+                                                  display_name=v_name)
+        client.volumes.wait_for_volume_status(body['id'], 'available')
+
+
+def destroy_volumes(volumes):
+    for volume in volumes:
+        client = client_for_user(volume['owner'])
+        volume_id = _get_volume_by_name(client, volume['name'])['id']
+        client.volumes.detach_volume(volume_id)
+        client.volumes.delete_volume(volume_id)
 
 
 def attach_volumes(volumes):
     for volume in volumes:
         client = client_for_user(volume['owner'])
-
         server_id = _get_server_by_name(client, volume['server'])['id']
-        client.volumes.attach_volume(volume['name'], server_id)
+        volume_id = _get_volume_by_name(client, volume['name'])['id']
+        device = volume['device']
+        client.volumes.attach_volume(volume_id, server_id, device)
 
 
 #######################
@@ -475,27 +558,19 @@
     create_objects(RES['objects'])
     create_images(RES['images'])
     create_servers(RES['servers'])
-    # TODO(sdague): volumes definition doesn't work yet, bring it
-    # back once we're actually executing the code
-    # create_volumes(RES['volumes'])
-    # attach_volumes(RES['volumes'])
+    create_volumes(RES['volumes'])
+    attach_volumes(RES['volumes'])
 
 
 def destroy_resources():
     LOG.info("Destroying Resources")
     # Destroy in inverse order of create
-
-    # Future
-    # detach_volumes
-    # destroy_volumes
-
     destroy_servers(RES['servers'])
     destroy_images(RES['images'])
-    # destroy_objects
-
-    # destroy_users
-    # destroy_tenants
-
+    destroy_objects(RES['objects'])
+    destroy_volumes(RES['volumes'])
+    destroy_users(RES['users'])
+    destroy_tenants(RES['tenants'])
     LOG.warn("Destroy mode incomplete")
 
 
@@ -545,21 +620,10 @@
         config.CONF.set_config_path(OPTS.config_file)
 
 
-def setup_logging(debug=True):
+def setup_logging():
     global LOG
+    logging.setup(__name__)
     LOG = logging.getLogger(__name__)
-    if debug:
-        LOG.setLevel(logging.DEBUG)
-    else:
-        LOG.setLevel(logging.INFO)
-
-    ch = logging.StreamHandler(sys.stdout)
-    ch.setLevel(logging.DEBUG)
-    formatter = logging.Formatter(
-        datefmt='%Y-%m-%d %H:%M:%S',
-        fmt='%(asctime)s.%(msecs).03d - %(levelname)s - %(message)s')
-    ch.setFormatter(formatter)
-    LOG.addHandler(ch)
 
 
 def main():
diff --git a/tempest/cmd/resources.yaml b/tempest/cmd/resources.yaml
index 3450e1f..19ee6d5 100644
--- a/tempest/cmd/resources.yaml
+++ b/tempest/cmd/resources.yaml
@@ -36,11 +36,13 @@
   - name: assegai
     server: peltast
     owner: javelin
-    size: 1
+    gb: 1
+    device: /dev/vdb
   - name: pifpouf
     server: hoplite
     owner: javelin
-    size: 2
+    gb: 2
+    device: /dev/vdb
 servers:
   - name: peltast
     owner: javelin
diff --git a/tempest/cmd/run_stress.py b/tempest/cmd/run_stress.py
index 07f3f66..a3f185c 100755
--- a/tempest/cmd/run_stress.py
+++ b/tempest/cmd/run_stress.py
@@ -18,13 +18,14 @@
 import inspect
 import json
 import sys
-from testtools import testsuite
 try:
     from unittest import loader
 except ImportError:
     # unittest in python 2.6 does not contain loader, so uses unittest2
     from unittest2 import loader
 
+from testtools import testsuite
+
 from tempest.openstack.common import log as logging
 from tempest.stress import driver
 
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index 673da4f..5046bff 100755
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -247,7 +247,7 @@
         'data_processing': 'sahara',
         'baremetal': 'ironic',
         'identity': 'keystone',
-        'queuing': 'marconi',
+        'messaging': 'zaqar',
         'database': 'trove'
     }
     # Get catalog list for endpoints to use for validation
@@ -269,7 +269,7 @@
                 if getattr(CONF.service_available, codename_match[cfgname]):
                     print('Endpoint type %s not found either disable service '
                           '%s or fix the catalog_type in the config file' % (
-                          catalog_type, codename_match[cfgname]))
+                              catalog_type, codename_match[cfgname]))
                     if update:
                         change_option(codename_match[cfgname],
                                       'service_available', False)
@@ -278,7 +278,7 @@
                                codename_match[cfgname]):
                     print('Endpoint type %s is available, service %s should be'
                           ' set as available in the config file.' % (
-                          catalog_type, codename_match[cfgname]))
+                              catalog_type, codename_match[cfgname]))
                     if update:
                         change_option(codename_match[cfgname],
                                       'service_available', True)
diff --git a/tempest/common/accounts.py b/tempest/common/accounts.py
new file mode 100644
index 0000000..7423c17
--- /dev/null
+++ b/tempest/common/accounts.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import hashlib
+import os
+
+import yaml
+
+from tempest import auth
+from tempest.common import cred_provider
+from tempest import config
+from tempest import exceptions
+from tempest.openstack.common import lockutils
+from tempest.openstack.common import log as logging
+
+CONF = config.CONF
+LOG = logging.getLogger(__name__)
+
+
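+# The accounts file is expected to be a YAML list of credential dicts whose
+# keys match credential attributes; for example (values are illustrative):
+#
+#   - username: user_1
+#     tenant_name: test_tenant_1
+#     password: secretpass
+#   - username: user_2
+#     tenant_name: test_tenant_2
+#     password: secretpass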
+def read_accounts_yaml(path):
+    yaml_file = open(path, 'r')
+    accounts = yaml.load(yaml_file)
+    return accounts
+
+
+class Accounts(cred_provider.CredentialProvider):
+
+    def __init__(self, name):
+        super(Accounts, self).__init__(name)
+        if os.path.isfile(CONF.auth.test_accounts_file):
+            accounts = read_accounts_yaml(CONF.auth.test_accounts_file)
+            self.use_default_creds = False
+        else:
+            accounts = {}
+            self.use_default_creds = True
+        self.hash_dict = self.get_hash_dict(accounts)
+        self.accounts_dir = os.path.join(CONF.lock_path, 'test_accounts')
+        self.isolated_creds = {}
+
+    @classmethod
+    def get_hash_dict(cls, accounts):
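+        # Map md5(str(account)) -> account dict; the hex digest also serves
+        # as the name of the per-account lock file under accounts_dir.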
+        hash_dict = {}
+        for account in accounts:
+            temp_hash = hashlib.md5()
+            temp_hash.update(str(account))
+            hash_dict[temp_hash.hexdigest()] = account
+        return hash_dict
+
+    def is_multi_user(self):
+        return len(self.hash_dict) > 1
+
+    def _create_hash_file(self, hash_string):
+        path = os.path.join(self.accounts_dir, hash_string)
+        if not os.path.isfile(path):
+            open(path, 'w').close()
+            return True
+        return False
+
+    @lockutils.synchronized('test_accounts_io', external=True)
+    def _get_free_hash(self, hashes):
+        if not os.path.isdir(self.accounts_dir):
+            os.mkdir(self.accounts_dir)
+            # Create File from first hash (since none are in use)
+            self._create_hash_file(hashes[0])
+            return hashes[0]
+        for _hash in hashes:
+            res = self._create_hash_file(_hash)
+            if res:
+                return _hash
+        msg = 'Insufficient number of users provided'
+        raise exceptions.InvalidConfiguration(msg)
+
+    def _get_creds(self):
+        if self.use_default_creds:
+            raise exceptions.InvalidConfiguration(
+                "Account file %s doesn't exist" % CONF.auth.test_accounts_file)
+        free_hash = self._get_free_hash(self.hash_dict.keys())
+        return self.hash_dict[free_hash]
+
+    @lockutils.synchronized('test_accounts_io', external=True)
+    def remove_hash(self, hash_string):
+        hash_path = os.path.join(self.accounts_dir, hash_string)
+        if not os.path.isfile(hash_path):
+            LOG.warning('Expected an account lock file %s to remove, but '
+                        'one did not exist' % hash_path)
+        else:
+            os.remove(hash_path)
+            if not os.listdir(self.accounts_dir):
+                os.rmdir(self.accounts_dir)
+
+    def get_hash(self, creds):
+        for _hash in self.hash_dict:
+            # Comparing on the attributes that are expected in the YAML
+            if all([getattr(creds, k) == self.hash_dict[_hash][k] for k in
+                    creds.CONF_ATTRIBUTES]):
+                return _hash
+        raise AttributeError('Invalid credentials %s' % creds)
+
+    def remove_credentials(self, creds):
+        _hash = self.get_hash(creds)
+        self.remove_hash(_hash)
+
+    def get_primary_creds(self):
+        if self.isolated_creds.get('primary'):
+            return self.isolated_creds.get('primary')
+        creds = self._get_creds()
+        primary_credential = auth.get_credentials(**creds)
+        self.isolated_creds['primary'] = primary_credential
+        return primary_credential
+
+    def get_alt_creds(self):
+        if self.isolated_creds.get('alt'):
+            return self.isolated_creds.get('alt')
+        creds = self._get_creds()
+        alt_credential = auth.get_credentials(**creds)
+        self.isolated_creds['alt'] = alt_credential
+        return alt_credential
+
+    def clear_isolated_creds(self):
+        for creds in self.isolated_creds.values():
+            self.remove_credentials(creds)
+
+    def get_admin_creds(self):
+        msg = ('If admin credentials are available tenant_isolation should be'
+               ' used instead')
+        raise NotImplementedError(msg)
+
+
+class NotLockingAccounts(Accounts):
+    """Credentials provider which always returns the first and second
+    configured accounts as primary and alt users.
+    This credential provider can be used in case of serial test execution
+    to preserve the current behaviour of the serial tempest run.
+    """
+
+    def get_creds(self, id):
+        try:
+            # No need to sort the dict as within the same python process
+            # the HASH seed won't change, so subsequent calls to keys()
+            # will return the same result
+            _hash = self.hash_dict.keys()[id]
+        except IndexError:
+            msg = 'Insufficient number of users provided'
+            raise exceptions.InvalidConfiguration(msg)
+        return self.hash_dict[_hash]
+
+    def get_primary_creds(self):
+        if self.isolated_creds.get('primary'):
+            return self.isolated_creds.get('primary')
+        if not self.use_default_creds:
+            creds = self.get_creds(0)
+            primary_credential = auth.get_credentials(**creds)
+        else:
+            primary_credential = auth.get_default_credentials('user')
+        self.isolated_creds['primary'] = primary_credential
+        return primary_credential
+
+    def get_alt_creds(self):
+        if self.isolated_creds.get('alt'):
+            return self.isolated_creds.get('alt')
+        if not self.use_default_creds:
+            creds = self.get_creds(1)
+            alt_credential = auth.get_credentials(**creds)
+        else:
+            alt_credential = auth.get_default_credentials('alt_user')
+        self.isolated_creds['alt'] = alt_credential
+        return alt_credential
+
+    def clear_isolated_creds(self):
+        self.isolated_creds = {}
+
+    def get_admin_creds(self):
+        return auth.get_default_credentials("identity_admin", fill_in=False)
diff --git a/tempest/common/cred_provider.py b/tempest/common/cred_provider.py
index dc4f049..56d34a5 100644
--- a/tempest/common/cred_provider.py
+++ b/tempest/common/cred_provider.py
@@ -12,6 +12,7 @@
 #    limitations under the License.
 
 import abc
+
 import six
 
 from tempest import config
@@ -23,8 +24,8 @@
 
 @six.add_metaclass(abc.ABCMeta)
 class CredentialProvider(object):
-    def __init__(self, name, tempest_client=True, interface='json',
-                 password='pass', network_resources=None):
+    def __init__(self, name, interface='json', password='pass',
+                 network_resources=None):
         self.name = name
 
     @abc.abstractmethod
diff --git a/tempest/common/custom_matchers.py b/tempest/common/custom_matchers.py
index 996c365..39e3a67 100644
--- a/tempest/common/custom_matchers.py
+++ b/tempest/common/custom_matchers.py
@@ -14,6 +14,8 @@
 
 import re
 
+from testtools import helpers
+
 
 class ExistsAllResponseHeaders(object):
     """
@@ -172,3 +174,65 @@
 
     def get_details(self):
         return {}
+
+
+class MatchesDictExceptForKeys(object):
+    """Matches two dictionaries. Verifies all items are equals except for those
+    identified by a list of keys.
+    """
+
+    def __init__(self, expected, excluded_keys=None):
+        self.expected = expected
+        self.excluded_keys = excluded_keys if excluded_keys is not None else []
+
+    def match(self, actual):
+        filtered_expected = helpers.dict_subtract(self.expected,
+                                                  self.excluded_keys)
+        filtered_actual = helpers.dict_subtract(actual,
+                                                self.excluded_keys)
+        if filtered_actual != filtered_expected:
+            return DictMismatch(filtered_expected, filtered_actual)
+
+
+class DictMismatch(object):
+    """Mismatch between two dicts describes deltas"""
+
+    def __init__(self, expected, actual):
+        self.expected = expected
+        self.actual = actual
+        self.intersect = set(self.expected) & set(self.actual)
+        self.symmetric_diff = set(self.expected) ^ set(self.actual)
+
+    def _format_dict(self, dict_to_format):
+        # Ensure the error string dict is printed in a set order
+        # NOTE(mtreinish): needed to ensure a deterministic error msg for
+        # testing. Otherwise the error message will be dependent on the
+        # dict ordering.
+        dict_string = "{"
+        for key in sorted(dict_to_format):
+            dict_string += "'%s': %s, " % (key, dict_to_format[key])
+        dict_string = dict_string[:-2] + '}'
+        return dict_string
+
+    def describe(self):
+        msg = ""
+        if self.symmetric_diff:
+            only_expected = helpers.dict_subtract(self.expected, self.actual)
+            only_actual = helpers.dict_subtract(self.actual, self.expected)
+            if only_expected:
+                msg += "Only in expected:\n  %s\n" % self._format_dict(
+                    only_expected)
+            if only_actual:
+                msg += "Only in actual:\n  %s\n" % self._format_dict(
+                    only_actual)
+        diff_set = set(o for o in self.intersect if
+                       self.expected[o] != self.actual[o])
+        if diff_set:
+            msg += "Differences:\n"
+            for o in diff_set:
+                msg += "  %s: expected %s, actual %s\n" % (
+                    o, self.expected[o], self.actual[o])
+        return msg
+
+    def get_details(self):
+        return {}
diff --git a/tempest/common/debug.py b/tempest/common/debug.py
index 228be7a..16e5ffe 100644
--- a/tempest/common/debug.py
+++ b/tempest/common/debug.py
@@ -14,7 +14,6 @@
 
 from tempest.common import commands
 from tempest import config
-
 from tempest.openstack.common import log as logging
 
 CONF = config.CONF
diff --git a/tempest/common/generator/base_generator.py b/tempest/common/generator/base_generator.py
index 57b98f7..0398af1 100644
--- a/tempest/common/generator/base_generator.py
+++ b/tempest/common/generator/base_generator.py
@@ -13,6 +13,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import functools
+
 import jsonschema
 
 from tempest.openstack.common import log as logging
@@ -39,6 +41,7 @@
     """
     Decorator for simple generators that return one value
     """
+    @functools.wraps(fn)
     def wrapped(self, schema):
         result = fn(self, schema)
         if result is not None:
diff --git a/tempest/common/generator/valid_generator.py b/tempest/common/generator/valid_generator.py
index a99bbc0..0d7b398 100644
--- a/tempest/common/generator/valid_generator.py
+++ b/tempest/common/generator/valid_generator.py
@@ -24,7 +24,7 @@
     @base.generator_type("string")
     @base.simple_generator
     def generate_valid_string(self, schema):
-        size = schema.get("minLength", 0)
+        size = schema.get("minLength", 1)
         # TODO(dkr mko): handle format and pattern
         return "x" * size
 
diff --git a/tempest/common/glance_http.py b/tempest/common/glance_http.py
index 55aca5a..5f35c85 100644
--- a/tempest/common/glance_http.py
+++ b/tempest/common/glance_http.py
@@ -19,15 +19,17 @@
 import hashlib
 import httplib
 import json
-import OpenSSL
 import posixpath
 import re
-from six import moves
 import socket
 import StringIO
 import struct
 import urlparse
 
+
+import OpenSSL
+from six import moves
+
 from tempest import exceptions as exc
 from tempest.openstack.common import log as logging
 
diff --git a/tempest/common/isolated_creds.py b/tempest/common/isolated_creds.py
index 05d758f..b2edfee 100644
--- a/tempest/common/isolated_creds.py
+++ b/tempest/common/isolated_creds.py
@@ -28,15 +28,14 @@
 
 class IsolatedCreds(cred_provider.CredentialProvider):
 
-    def __init__(self, name, tempest_client=True, interface='json',
-                 password='pass', network_resources=None):
-        super(IsolatedCreds, self).__init__(name, tempest_client, interface,
-                                            password, network_resources)
+    def __init__(self, name, interface='json', password='pass',
+                 network_resources=None):
+        super(IsolatedCreds, self).__init__(name, interface, password,
+                                            network_resources)
         self.network_resources = network_resources
         self.isolated_creds = {}
         self.isolated_net_resources = {}
         self.ports = []
-        self.tempest_client = tempest_client
         self.interface = interface
         self.password = password
         self.identity_admin_client, self.network_admin_client = (
@@ -49,94 +48,50 @@
             identity
             network
         """
-        if self.tempest_client:
-            os = clients.AdminManager(interface=self.interface)
-        else:
-            os = clients.OfficialClientManager(
-                auth.get_default_credentials('identity_admin')
-            )
+        os = clients.AdminManager(interface=self.interface)
         return os.identity_client, os.network_client
 
     def _create_tenant(self, name, description):
-        if self.tempest_client:
-            _, tenant = self.identity_admin_client.create_tenant(
-                name=name, description=description)
-        else:
-            tenant = self.identity_admin_client.tenants.create(
-                name,
-                description=description)
+        _, tenant = self.identity_admin_client.create_tenant(
+            name=name, description=description)
         return tenant
 
     def _get_tenant_by_name(self, name):
-        if self.tempest_client:
-            _, tenant = self.identity_admin_client.get_tenant_by_name(name)
-        else:
-            tenants = self.identity_admin_client.tenants.list()
-            for ten in tenants:
-                if ten['name'] == name:
-                    tenant = ten
-                    break
-            else:
-                raise exceptions.NotFound('No such tenant')
+        _, tenant = self.identity_admin_client.get_tenant_by_name(name)
         return tenant
 
     def _create_user(self, username, password, tenant, email):
-        if self.tempest_client:
-            _, user = self.identity_admin_client.create_user(username,
-                                                             password,
-                                                             tenant['id'],
-                                                             email)
-        else:
-            user = self.identity_admin_client.users.create(username, password,
-                                                           email,
-                                                           tenant_id=tenant.id)
+        _, user = self.identity_admin_client.create_user(
+            username, password, tenant['id'], email)
         return user
 
     def _get_user(self, tenant, username):
-        if self.tempest_client:
-            _, user = self.identity_admin_client.get_user_by_username(
-                tenant['id'],
-                username)
-        else:
-            user = self.identity_admin_client.users.get(username)
+        _, user = self.identity_admin_client.get_user_by_username(
+            tenant['id'], username)
         return user
 
     def _list_roles(self):
-        if self.tempest_client:
-            _, roles = self.identity_admin_client.list_roles()
-        else:
-            roles = self.identity_admin_client.roles.list()
+        _, roles = self.identity_admin_client.list_roles()
         return roles
 
     def _assign_user_role(self, tenant, user, role_name):
         role = None
         try:
             roles = self._list_roles()
-            if self.tempest_client:
-                role = next(r for r in roles if r['name'] == role_name)
-            else:
-                role = next(r for r in roles if r.name == role_name)
+            role = next(r for r in roles if r['name'] == role_name)
         except StopIteration:
             msg = 'No "%s" role found' % role_name
             raise exceptions.NotFound(msg)
-        if self.tempest_client:
-            self.identity_admin_client.assign_user_role(tenant['id'],
-                                                        user['id'], role['id'])
-        else:
-            self.identity_admin_client.roles.add_user_role(user.id, role.id,
-                                                           tenant.id)
+        self.identity_admin_client.assign_user_role(tenant['id'], user['id'],
+                                                    role['id'])
 
     def _delete_user(self, user):
-        if self.tempest_client:
-            self.identity_admin_client.delete_user(user)
-        else:
-            self.identity_admin_client.users.delete(user)
+        self.identity_admin_client.delete_user(user)
 
     def _delete_tenant(self, tenant):
-        if self.tempest_client:
-            self.identity_admin_client.delete_tenant(tenant)
-        else:
-            self.identity_admin_client.tenants.delete(tenant)
+        if CONF.service_available.neutron:
+            self._cleanup_default_secgroup(tenant)
+        self.identity_admin_client.delete_tenant(tenant)
 
     def _create_creds(self, suffix="", admin=False):
         """Create random credentials under the following schema.
@@ -162,23 +117,19 @@
         email = data_utils.rand_name(root) + suffix + "@example.com"
         user = self._create_user(username, self.password,
                                  tenant, email)
-        # NOTE(andrey-mp): user needs this role to create containers in swift
-        swift_operator_role = CONF.object_storage.operator_role
-        self._assign_user_role(tenant, user, swift_operator_role)
+        if CONF.service_available.swift:
+            # NOTE(andrey-mp): user needs this role to create containers
+            # in swift
+            swift_operator_role = CONF.object_storage.operator_role
+            self._assign_user_role(tenant, user, swift_operator_role)
         if admin:
             self._assign_user_role(tenant, user, CONF.identity.admin_role)
         return self._get_credentials(user, tenant)
 
     def _get_credentials(self, user, tenant):
-        if self.tempest_client:
-            user_get = user.get
-            tenant_get = tenant.get
-        else:
-            user_get = user.__dict__.get
-            tenant_get = tenant.__dict__.get
         return auth.get_credentials(
-            username=user_get('name'), user_id=user_get('id'),
-            tenant_name=tenant_get('name'), tenant_id=tenant_get('id'),
+            username=user['name'], user_id=user['id'],
+            tenant_name=tenant['name'], tenant_id=tenant['id'],
             password=self.password)
 
     def _create_network_resources(self, tenant_id):
@@ -223,43 +174,30 @@
         return network, subnet, router
 
     def _create_network(self, name, tenant_id):
-        if self.tempest_client:
-            resp, resp_body = self.network_admin_client.create_network(
-                name=name, tenant_id=tenant_id)
-        else:
-            body = {'network': {'tenant_id': tenant_id, 'name': name}}
-            resp_body = self.network_admin_client.create_network(body)
+        _, resp_body = self.network_admin_client.create_network(
+            name=name, tenant_id=tenant_id)
         return resp_body['network']
 
     def _create_subnet(self, subnet_name, tenant_id, network_id):
-        if not self.tempest_client:
-            body = {'subnet': {'name': subnet_name, 'tenant_id': tenant_id,
-                               'network_id': network_id, 'ip_version': 4}}
-            if self.network_resources:
-                body['enable_dhcp'] = self.network_resources['dhcp']
         base_cidr = netaddr.IPNetwork(CONF.network.tenant_network_cidr)
         mask_bits = CONF.network.tenant_network_mask_bits
         for subnet_cidr in base_cidr.subnet(mask_bits):
             try:
-                if self.tempest_client:
-                    if self.network_resources:
-                        resp, resp_body = self.network_admin_client.\
-                            create_subnet(
-                                network_id=network_id, cidr=str(subnet_cidr),
-                                name=subnet_name,
-                                tenant_id=tenant_id,
-                                enable_dhcp=self.network_resources['dhcp'],
-                                ip_version=4)
-                    else:
-                        resp, resp_body = self.network_admin_client.\
-                            create_subnet(network_id=network_id,
-                                          cidr=str(subnet_cidr),
-                                          name=subnet_name,
-                                          tenant_id=tenant_id,
-                                          ip_version=4)
+                if self.network_resources:
+                    _, resp_body = self.network_admin_client.\
+                        create_subnet(
+                            network_id=network_id, cidr=str(subnet_cidr),
+                            name=subnet_name,
+                            tenant_id=tenant_id,
+                            enable_dhcp=self.network_resources['dhcp'],
+                            ip_version=4)
                 else:
-                    body['subnet']['cidr'] = str(subnet_cidr)
-                    resp_body = self.network_admin_client.create_subnet(body)
+                    _, resp_body = self.network_admin_client.\
+                        create_subnet(network_id=network_id,
+                                      cidr=str(subnet_cidr),
+                                      name=subnet_name,
+                                      tenant_id=tenant_id,
+                                      ip_version=4)
                 break
             except exceptions.BadRequest as e:
                 if 'overlaps with another subnet' not in str(e):
@@ -273,25 +211,15 @@
     def _create_router(self, router_name, tenant_id):
         external_net_id = dict(
             network_id=CONF.network.public_network_id)
-        if self.tempest_client:
-            resp, resp_body = self.network_admin_client.create_router(
-                router_name,
-                external_gateway_info=external_net_id,
-                tenant_id=tenant_id)
-        else:
-            body = {'router': {'name': router_name, 'tenant_id': tenant_id,
-                               'external_gateway_info': external_net_id,
-                               'admin_state_up': True}}
-            resp_body = self.network_admin_client.create_router(body)
+        _, resp_body = self.network_admin_client.create_router(
+            router_name,
+            external_gateway_info=external_net_id,
+            tenant_id=tenant_id)
         return resp_body['router']
 
     def _add_router_interface(self, router_id, subnet_id):
-        if self.tempest_client:
-            self.network_admin_client.add_router_interface_with_subnet_id(
-                router_id, subnet_id)
-        else:
-            body = {'subnet_id': subnet_id}
-            self.network_admin_client.add_interface_router(router_id, body)
+        self.network_admin_client.add_router_interface_with_subnet_id(
+            router_id, subnet_id)
 
     def get_primary_network(self):
         return self.isolated_net_resources.get('primary')[0]
@@ -373,30 +301,17 @@
             LOG.warn('network with name: %s not found for delete' %
                      network_name)
 
-    def _cleanup_ports(self, network_id):
-        # TODO(mlavalle) This method will be removed once patch
-        # https://review.openstack.org/#/c/46563/ merges in Neutron
-        if not self.ports:
-            if self.tempest_client:
-                resp, resp_body = self.network_admin_client.list_ports()
-            else:
-                resp_body = self.network_admin_client.list_ports()
-            self.ports = resp_body['ports']
-        ports_to_delete = [
-            port
-            for port in self.ports
-            if (port['network_id'] == network_id and
-                port['device_owner'] != 'network:router_interface' and
-                port['device_owner'] != 'network:dhcp')
-        ]
-        for port in ports_to_delete:
+    def _cleanup_default_secgroup(self, tenant):
+        net_client = self.network_admin_client
+        _, resp_body = net_client.list_security_groups(tenant_id=tenant,
+                                                       name="default")
+        secgroups_to_delete = resp_body['security_groups']
+        for secgroup in secgroups_to_delete:
             try:
-                LOG.info('Cleaning up port id %s, name %s' %
-                         (port['id'], port['name']))
-                self.network_admin_client.delete_port(port['id'])
+                net_client.delete_security_group(secgroup['id'])
             except exceptions.NotFound:
-                LOG.warn('Port id: %s, name %s not found for clean-up' %
-                         (port['id'], port['name']))
+                LOG.warn('Security group %s, id %s not found for clean-up' %
+                         (secgroup['name'], secgroup['id']))
 
     def _clear_isolated_net_resources(self):
         net_client = self.network_admin_client
@@ -408,22 +323,13 @@
             if (not self.network_resources or
                 self.network_resources.get('router')):
                 try:
-                    if self.tempest_client:
-                        net_client.remove_router_interface_with_subnet_id(
-                            router['id'], subnet['id'])
-                    else:
-                        body = {'subnet_id': subnet['id']}
-                        net_client.remove_interface_router(router['id'], body)
+                    net_client.remove_router_interface_with_subnet_id(
+                        router['id'], subnet['id'])
                 except exceptions.NotFound:
                     LOG.warn('router with name: %s not found for delete' %
                              router['name'])
                 self._clear_isolated_router(router['id'], router['name'])
             if (not self.network_resources or
-                self.network_resources.get('network')):
-                # TODO(mlavalle) This method call will be removed once patch
-                # https://review.openstack.org/#/c/46563/ merges in Neutron
-                self._cleanup_ports(network['id'])
-            if (not self.network_resources or
                 self.network_resources.get('subnet')):
                 self._clear_isolated_subnet(subnet['id'], subnet['name'])
             if (not self.network_resources or
diff --git a/tempest/common/rest_client.py b/tempest/common/rest_client.py
index 9e0f4d3..00fe8d2 100644
--- a/tempest/common/rest_client.py
+++ b/tempest/common/rest_client.py
@@ -16,12 +16,12 @@
 
 import collections
 import json
-from lxml import etree
 import re
-import string
 import time
 
 import jsonschema
+from lxml import etree
+import six
 
 from tempest.common import http
 from tempest.common.utils import misc as misc_utils
@@ -40,6 +40,19 @@
 HTTP_SUCCESS = (200, 201, 202, 203, 204, 205, 206)
 
 
+# convert a structure into a string safely
+def safe_body(body, maxlen=2048):
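+    # Coerce the body to unicode and cap its length for logging; binary
+    # payloads that cannot be decoded are replaced with a short marker.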
+    try:
+        text = six.text_type(body)
+    except UnicodeDecodeError:
+        # if this isn't actually text, return a marker instead of the body
+        return "<BinaryData: removed>"
+    if len(text) > maxlen:
+        return text[:maxlen]
+    else:
+        return text
+
+
 class RestClient(object):
 
     TYPE = "json"
@@ -209,8 +222,9 @@
             pattern = """Unexpected http success status code {0},
                          The expected status code is {1}"""
             if ((not isinstance(expected_code, list) and
-                (read_code != expected_code)) or (isinstance(expected_code,
-                list) and (read_code not in expected_code))):
+                 (read_code != expected_code)) or
+                (isinstance(expected_code, list) and
+                 (read_code not in expected_code))):
                 details = pattern.format(read_code, expected_code)
                 raise exceptions.InvalidHttpSuccessCode(details)
 
@@ -247,17 +261,46 @@
                 return resp[i]
         return ""
 
-    def _log_request_start(self, method, req_url, req_headers={},
+    def _log_request_start(self, method, req_url, req_headers=None,
                            req_body=None):
+        if req_headers is None:
+            req_headers = {}
         caller_name = misc_utils.find_test_caller()
         trace_regex = CONF.debug.trace_requests
         if trace_regex and re.search(trace_regex, caller_name):
             self.LOG.debug('Starting Request (%s): %s %s' %
                            (caller_name, method, req_url))
 
+    def _log_request_full(self, method, req_url, resp,
+                          secs="", req_headers=None,
+                          req_body=None, resp_body=None,
+                          caller_name=None, extra=None):
+        if 'X-Auth-Token' in req_headers:
+            req_headers['X-Auth-Token'] = '<omitted>'
+        log_fmt = """Request (%s): %s %s %s%s
+    Request - Headers: %s
+        Body: %s
+    Response - Headers: %s
+        Body: %s"""
+
+        self.LOG.debug(
+            log_fmt % (
+                caller_name,
+                resp['status'],
+                method,
+                req_url,
+                secs,
+                str(req_headers),
+                safe_body(req_body),
+                str(resp),
+                safe_body(resp_body)),
+            extra=extra)
+
     def _log_request(self, method, req_url, resp,
-                     secs="", req_headers={},
+                     secs="", req_headers=None,
                      req_body=None, resp_body=None):
+        if req_headers is None:
+            req_headers = {}
         # if we have the request id, put it in the right part of the log
         extra = dict(request_id=self._get_request_id(resp))
         # NOTE(sdague): while we still have 6 callers to this function
@@ -276,32 +319,10 @@
                 secs),
             extra=extra)
 
-        # We intentionally duplicate the info content because in a parallel
-        # world this is important to match
-        trace_regex = CONF.debug.trace_requests
-        if trace_regex and re.search(trace_regex, caller_name):
-            if 'X-Auth-Token' in req_headers:
-                req_headers['X-Auth-Token'] = '<omitted>'
-            log_fmt = """Request (%s): %s %s %s%s
-    Request - Headers: %s
-        Body: %s
-    Response - Headers: %s
-        Body: %s"""
-
-            self.LOG.debug(
-                log_fmt % (
-                    caller_name,
-                    resp['status'],
-                    method,
-                    req_url,
-                    secs,
-                    str(req_headers),
-                    filter(lambda x: x in string.printable,
-                           str(req_body)[:2048]),
-                    str(resp),
-                    filter(lambda x: x in string.printable,
-                           str(resp_body)[:2048])),
-                extra=extra)
+        # Everything is also logged at DEBUG; if you want to filter this
+        # out, don't run at debug.
+        self._log_request_full(method, req_url, resp, secs, req_headers,
+                               req_body, resp_body, caller_name, extra)
 
     def _parse_resp(self, body):
         if self._get_type() is "json":
@@ -547,7 +568,13 @@
             if self.is_resource_deleted(id):
                 return
             if int(time.time()) - start_time >= self.build_timeout:
-                raise exceptions.TimeoutException
+                message = ('Failed to delete resource %(id)s within the '
+                           'required time (%(timeout)s s).' %
+                           {'id': id, 'timeout': self.build_timeout})
+                caller = misc_utils.find_test_caller()
+                if caller:
+                    message = '(%s) %s' % (caller, message)
+                raise exceptions.TimeoutException(message)
             time.sleep(self.build_interval)
 
     def is_resource_deleted(self, id):
diff --git a/tempest/common/ssh.py b/tempest/common/ssh.py
index 531887c..c06ce3b 100644
--- a/tempest/common/ssh.py
+++ b/tempest/common/ssh.py
@@ -16,11 +16,12 @@
 
 import cStringIO
 import select
-import six
 import socket
 import time
 import warnings
 
+import six
+
 from tempest import exceptions
 from tempest.openstack.common import log as logging
 
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 57a14a2..89904b2 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -11,9 +11,10 @@
 #    under the License.
 
 import re
-import six
 import time
 
+import six
+
 from tempest.common import ssh
 from tempest import config
 from tempest import exceptions
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index d242c14..52568cb 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -74,7 +74,7 @@
         if (server_status == 'ERROR') and raise_on_error:
             if 'fault' in body:
                 raise exceptions.BuildErrorException(body['fault'],
-                                                    server_id=server_id)
+                                                     server_id=server_id)
             else:
                 raise exceptions.BuildErrorException(server_id=server_id)
 
@@ -131,3 +131,31 @@
             if caller:
                 message = '(%s) %s' % (caller, message)
             raise exceptions.TimeoutException(message)
+
+
+def wait_for_bm_node_status(client, node_id, attr, status):
+    """Waits for a baremetal node attribute to reach given status.
+
+    The client should have a show_node(node_uuid) method to get the node.
+    """
+    _, node = client.show_node(node_id)
+    start = int(time.time())
+
+    while node[attr] != status:
+        time.sleep(client.build_interval)
+        _, node = client.show_node(node_id)
+        if node[attr] == status:
+            return
+
+        if int(time.time()) - start >= client.build_timeout:
+            message = ('Node %(node_id)s failed to reach %(attr)s=%(status)s '
+                       'within the required time (%(timeout)s s).' %
+                       {'node_id': node_id,
+                        'attr': attr,
+                        'status': status,
+                        'timeout': client.build_timeout})
+            message += ' Current state of %s: %s.' % (attr, node[attr])
+            caller = misc_utils.find_test_caller()
+            if caller:
+                message = '(%s) %s' % (caller, message)
+            raise exceptions.TimeoutException(message)
diff --git a/tempest/common/xml_utils.py b/tempest/common/xml_utils.py
index b1bf789..7d460a4 100644
--- a/tempest/common/xml_utils.py
+++ b/tempest/common/xml_utils.py
@@ -14,6 +14,7 @@
 #    under the License.
 
 import collections
+import copy
 
 XMLNS_11 = "http://docs.openstack.org/compute/api/v1.1"
 XMLNS_V3 = "http://docs.openstack.org/compute/api/v1.1"
@@ -78,16 +79,19 @@
 
 class Document(Element):
     def __init__(self, *args, **kwargs):
-        if 'version' not in kwargs:
-            kwargs['version'] = '1.0'
-        if 'encoding' not in kwargs:
-            kwargs['encoding'] = 'UTF-8'
         Element.__init__(self, '?xml', *args, **kwargs)
 
     def __str__(self):
-        args = " ".join(['%s="%s"' %
-                        (k, v if v is not None else "")
-                        for k, v in self._attrs.items()])
+        attrs = copy.copy(self._attrs)
+        # pop the required standard attrs out and render in required
+        # order.
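+        # so the prolog renders as, e.g.,
+        #   <?xml version="1.0" encoding="UTF-8"?>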
+        vers = attrs.pop('version', '1.0')
+        enc = attrs.pop('encoding', 'UTF-8')
+        args = 'version="%s" encoding="%s"' % (vers, enc)
+        if attrs:
+            args = " ".join([args] + ['%s="%s"' %
+                            (k, v if v is not None else "")
+                            for k, v in attrs.items()])
         string = '<?xml %s?>\n' % args
         for element in self._elements:
             string += str(element)
diff --git a/tempest/config.py b/tempest/config.py
index 7d195bf..174a895 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -29,6 +29,18 @@
         conf.register_opt(opt, group=opt_group.name)
 
 
+auth_group = cfg.OptGroup(name='auth',
+                          title="Options for authentication and credentials")
+
+
+AuthGroup = [
+    cfg.StrOpt('test_accounts_file',
+               default='etc/accounts.yaml',
+               help="Path to the yaml file that contains the list of "
+                    "credentials to use for running tests"),
+]
+
+
 identity_group = cfg.OptGroup(name='identity',
                               title="Keystone Configuration Options")
 
@@ -40,7 +52,6 @@
                 default=False,
                 help="Set to True if using self-signed SSL certificates."),
     cfg.StrOpt('uri',
-               default=None,
                help="Full URI of the OpenStack Identity API (Keystone), v2"),
     cfg.StrOpt('uri_v3',
                help='Full URI of the OpenStack Identity API (Keystone), v3'),
@@ -60,52 +71,40 @@
                         'publicURL', 'adminURL', 'internalURL'],
                help="The endpoint type to use for the identity service."),
     cfg.StrOpt('username',
-               default=None,
                help="Username to use for Nova API requests."),
     cfg.StrOpt('tenant_name',
-               default=None,
                help="Tenant name to use for Nova API requests."),
     cfg.StrOpt('admin_role',
                default='admin',
                help="Role required to administrate keystone."),
     cfg.StrOpt('password',
-               default=None,
                help="API key to use when authenticating.",
                secret=True),
     cfg.StrOpt('domain_name',
-               default=None,
                help="Domain name for authentication (Keystone V3)."
                     "The same domain applies to user and project"),
     cfg.StrOpt('alt_username',
-               default=None,
                help="Username of alternate user to use for Nova API "
                     "requests."),
     cfg.StrOpt('alt_tenant_name',
-               default=None,
                help="Alternate user's Tenant name to use for Nova API "
                     "requests."),
     cfg.StrOpt('alt_password',
-               default=None,
                help="API key to use when authenticating as alternate user.",
                secret=True),
     cfg.StrOpt('alt_domain_name',
-               default=None,
                help="Alternate domain name for authentication (Keystone V3)."
                     "The same domain applies to user and project"),
     cfg.StrOpt('admin_username',
-               default=None,
                help="Administrative Username to use for "
                     "Keystone API requests."),
     cfg.StrOpt('admin_tenant_name',
-               default=None,
                help="Administrative Tenant name to use for Keystone API "
                     "requests."),
     cfg.StrOpt('admin_password',
-               default=None,
                help="API key to use when authenticating as admin.",
                secret=True),
     cfg.StrOpt('admin_domain_name',
-               default=None,
                help="Admin domain name for authentication (Keystone V3)."
                     "The same domain applies to user and project"),
 ]
@@ -234,7 +233,6 @@
                default='computev3',
                help="Catalog type of the Compute v3 service."),
     cfg.StrOpt('path_to_private_key',
-               default=None,
                help="Path to a private key file for SSH access to remote "
                     "hosts"),
     cfg.StrOpt('volume_device_name',
@@ -261,6 +259,9 @@
     cfg.BoolOpt('api_v3',
                 default=False,
                 help="If false, skip all nova v3 tests."),
+    cfg.BoolOpt('xml_api_v2',
+                default=True,
+                help="If false skip all v2 api tests with xml"),
     cfg.BoolOpt('disk_config',
                 default=True,
                 help="If false, skip disk config tests"),
@@ -280,6 +281,10 @@
                 default=False,
                 help="Does the test environment support changing the admin "
                      "password?"),
+    cfg.BoolOpt('console_output',
+                default=True,
+                help="Does the test environment support obtaining instance "
+                     "serial console output?"),
     cfg.BoolOpt('resize',
                 default=False,
                 help="Does the test environment support resizing?"),
@@ -328,7 +333,11 @@
     cfg.BoolOpt('interface_attach',
                 default=True,
                 help='Does the test environment support dynamic network '
-                     'interface attachment?')
+                     'interface attachment?'),
+    cfg.BoolOpt('snapshot',
+                default=True,
+                help='Does the test environment support creating snapshot '
+                     'images of running instances?')
 ]
 
 
@@ -337,18 +346,14 @@
 
 ComputeAdminGroup = [
     cfg.StrOpt('username',
-               default=None,
                help="Administrative Username to use for Nova API requests."),
     cfg.StrOpt('tenant_name',
-               default=None,
                help="Administrative Tenant name to use for Nova API "
                     "requests."),
     cfg.StrOpt('password',
-               default=None,
                help="API key to use when authenticating as admin.",
                secret=True),
     cfg.StrOpt('domain_name',
-               default=None,
                help="Domain name for authentication as admin (Keystone V3)."
                     "The same domain applies to user and project"),
 ]
@@ -414,10 +419,10 @@
                default=28,
                help="The mask bits for tenant ipv4 subnets"),
     cfg.StrOpt('tenant_network_v6_cidr',
-               default="2003::/64",
+               default="2003::/48",
                help="The cidr block to allocate tenant ipv6 subnets from"),
     cfg.IntOpt('tenant_network_v6_mask_bits',
-               default=96,
+               default=64,
                help="The mask bits for tenant ipv6 subnets"),
     cfg.BoolOpt('tenant_networks_reachable',
                 default=False,
@@ -465,13 +470,13 @@
                 )
 ]
 
-queuing_group = cfg.OptGroup(name='queuing',
-                             title='Queuing Service')
+messaging_group = cfg.OptGroup(name='messaging',
+                               title='Messaging Service')
 
-QueuingGroup = [
+MessagingGroup = [
     cfg.StrOpt('catalog_type',
-               default='queuing',
-               help='Catalog type of the Queuing service.'),
+               default='messaging',
+               help='Catalog type of the Messaging service.'),
     cfg.IntOpt('max_queues_per_page',
                default=20,
                help='The maximum number of queue records per page when '
@@ -617,6 +622,15 @@
                 help="A list of the enabled optional discoverable apis. "
                      "A single entry, all, indicates that all of these "
                      "features are expected to be enabled"),
+    cfg.BoolOpt('container_sync',
+                default=True,
+                help="Execute (old style) container-sync tests"),
+    cfg.BoolOpt('object_versioning',
+                default=True,
+                help="Execute object-versioning tests"),
+    cfg.BoolOpt('discoverability',
+                default=True,
+                help="Execute discoverability tests"),
 ]
 
 database_group = cfg.OptGroup(name='database',
@@ -669,11 +683,9 @@
                help="Instance type for tests. Needs to be big enough for a "
                     "full OS plus the test workload"),
     cfg.StrOpt('image_ref',
-               default=None,
                help="Name of heat-cfntools enabled image to use when "
                     "launching test instances."),
     cfg.StrOpt('keypair_name',
-               default=None,
                help="Name of existing keypair to launch servers with."),
     cfg.IntOpt('max_template_size',
                default=524288,
@@ -742,11 +754,9 @@
                default="http://localhost:8080",
                help="S3 URL"),
     cfg.StrOpt('aws_secret',
-               default=None,
                help="AWS Secret Key",
                secret=True),
     cfg.StrOpt('aws_access',
-               default=None,
                help="AWS Access Key"),
     cfg.StrOpt('aws_zone',
                default="nova",
@@ -785,26 +795,20 @@
 
 StressGroup = [
     cfg.StrOpt('nova_logdir',
-               default=None,
                help='Directory containing log files on the compute nodes'),
     cfg.IntOpt('max_instances',
                default=16,
                help='Maximum number of instances to create during test.'),
     cfg.StrOpt('controller',
-               default=None,
                help='Controller host.'),
     # new stress options
     cfg.StrOpt('target_controller',
-               default=None,
                help='Controller host.'),
     cfg.StrOpt('target_ssh_user',
-               default=None,
                help='ssh user.'),
     cfg.StrOpt('target_private_key_path',
-               default=None,
                help='Path to private key.'),
     cfg.StrOpt('target_logfiles',
-               default=None,
                help='regexp for list of log files.'),
     cfg.IntOpt('log_check_interval',
                default=60,
@@ -832,9 +836,15 @@
                default='/opt/stack/new/devstack/files/images/'
                'cirros-0.3.1-x86_64-uec',
                help='Directory containing image files'),
-    cfg.StrOpt('qcow2_img_file',
+    cfg.StrOpt('img_file', deprecated_name='qcow2_img_file',
                default='cirros-0.3.1-x86_64-disk.img',
-               help='QCOW2 image file name'),
+               help='Image file name'),
+    cfg.StrOpt('img_disk_format',
+               default='qcow2',
+               help='Image disk format'),
+    cfg.StrOpt('img_container_format',
+               default='bare',
+               help='Image container format'),
     cfg.StrOpt('ami_img_file',
                default='cirros-0.3.1-x86_64-blank.img',
                help='AMI image file name'),
@@ -892,9 +902,9 @@
     cfg.BoolOpt('trove',
                 default=False,
                 help="Whether or not Trove is expected to be available"),
-    cfg.BoolOpt('marconi',
+    cfg.BoolOpt('zaqar',
                 default=False,
-                help="Whether or not Marconi is expected to be available"),
+                help="Whether or not Zaqar is expected to be available"),
 ]
 
 debug_group = cfg.OptGroup(name="debug",
@@ -1012,6 +1022,7 @@
 
 
 def register_opts():
+    register_opt_group(cfg.CONF, auth_group, AuthGroup)
     register_opt_group(cfg.CONF, compute_group, ComputeGroup)
     register_opt_group(cfg.CONF, compute_features_group,
                        ComputeFeaturesGroup)
@@ -1023,7 +1034,7 @@
     register_opt_group(cfg.CONF, network_group, NetworkGroup)
     register_opt_group(cfg.CONF, network_feature_group,
                        NetworkFeaturesGroup)
-    register_opt_group(cfg.CONF, queuing_group, QueuingGroup)
+    register_opt_group(cfg.CONF, messaging_group, MessagingGroup)
     register_opt_group(cfg.CONF, volume_group, VolumeGroup)
     register_opt_group(cfg.CONF, volume_feature_group,
                        VolumeFeaturesGroup)
@@ -1059,7 +1070,12 @@
 
     DEFAULT_CONFIG_FILE = "tempest.conf"
 
+    def __getattr__(self, attr):
+        # Handles config options from the default group
+        return getattr(cfg.CONF, attr)
+
     def _set_attrs(self):
+        self.auth = cfg.CONF.auth
         self.compute = cfg.CONF.compute
         self.compute_feature_enabled = cfg.CONF['compute-feature-enabled']
         self.identity = cfg.CONF.identity
@@ -1075,7 +1091,7 @@
             'object-storage-feature-enabled']
         self.database = cfg.CONF.database
         self.orchestration = cfg.CONF.orchestration
-        self.queuing = cfg.CONF.queuing
+        self.messaging = cfg.CONF.messaging
         self.telemetry = cfg.CONF.telemetry
         self.dashboard = cfg.CONF.dashboard
         self.data_processing = cfg.CONF.data_processing
@@ -1125,8 +1141,10 @@
         # to remove an issue with the config file up to date checker.
         if parse_conf:
             config_files.append(path)
-
-        cfg.CONF([], project='tempest', default_config_files=config_files)
+        if os.path.isfile(path):
+            cfg.CONF([], project='tempest', default_config_files=config_files)
+        else:
+            cfg.CONF([], project='tempest')
         logging.setup('tempest')
         LOG = logging.getLogger('tempest')
         LOG.info("Using tempest config file %s" % path)
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index 9d443cc..cc31fad 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -223,5 +223,8 @@
 
     def __str__(self):
         return ("Command '%s' returned non-zero exit status %d.\n"
-        "stdout:\n%s\n"
-        "stderr:\n%s" % (self.cmd, self.returncode, self.stdout, self.stderr))
+                "stdout:\n%s\n"
+                "stderr:\n%s" % (self.cmd,
+                                 self.returncode,
+                                 self.stdout,
+                                 self.stderr))
diff --git a/tempest/hacking/checks.py b/tempest/hacking/checks.py
index 93329bc..55cc89b 100644
--- a/tempest/hacking/checks.py
+++ b/tempest/hacking/checks.py
@@ -20,13 +20,14 @@
 
 PYTHON_CLIENTS = ['cinder', 'glance', 'keystone', 'nova', 'swift', 'neutron',
                   'trove', 'ironic', 'savanna', 'heat', 'ceilometer',
-                  'marconi', 'sahara']
+                  'zaqar', 'sahara']
 
 PYTHON_CLIENT_RE = re.compile('import (%s)client' % '|'.join(PYTHON_CLIENTS))
 TEST_DEFINITION = re.compile(r'^\s*def test.*')
 SETUPCLASS_DEFINITION = re.compile(r'^\s*def setUpClass')
 SCENARIO_DECORATOR = re.compile(r'\s*@.*services\((.*)\)')
 VI_HEADER_RE = re.compile(r"^#\s+vim?:.+")
+mutable_default_args = re.compile(r"^\s*def .+\((.+=\{\}|.+=\[\])")
 
 
 def import_no_clients_in_api(physical_line, filename):
@@ -105,18 +106,14 @@
                             "T107: service tag should not be in path")
 
 
-def no_official_client_manager_in_api_tests(physical_line, filename):
-    """Check that the OfficialClientManager isn't used in the api tests
+def no_mutable_default_args(logical_line):
+    """Check that mutable object isn't used as default argument
 
-    The api tests should not use the official clients.
-
-    T108: Can not use OfficialClientManager in the API tests
+    N322: Method's default argument shouldn't be mutable
     """
-    if 'tempest/api' in filename:
-        if 'OfficialClientManager' in physical_line:
-            return (physical_line.find('OfficialClientManager'),
-                    'T108: OfficialClientManager can not be used in the api '
-                    'tests')
+    msg = "N322: Method's default argument shouldn't be mutable!"
+    if mutable_default_args.match(logical_line):
+        yield (0, msg)
 
 
 def factory(register):
@@ -125,4 +122,4 @@
     register(no_setupclass_for_unit_tests)
     register(no_vi_headers)
     register(service_tags_not_in_module_path)
-    register(no_official_client_manager_in_api_tests)
+    register(no_mutable_default_args)
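The new check is a flake8-style generator that yields an offset and message
for an offending logical line; a small sketch of the expected behaviour::

    from tempest.hacking import checks

    bad = "def add_thing(item, bag=[]):"
    good = "def add_thing(item, bag=None):"
    assert list(checks.no_mutable_default_args(bad))       # N322 reported
    assert not list(checks.no_mutable_default_args(good))  # nothing yielded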
diff --git a/tempest/manager.py b/tempest/manager.py
index fb2842f..538b619 100644
--- a/tempest/manager.py
+++ b/tempest/manager.py
@@ -51,18 +51,17 @@
         self.client_attr_names = []
 
     @classmethod
-    def get_auth_provider_class(cls, auth_version):
-        if auth_version == 'v2':
-            return auth.KeystoneV2AuthProvider
-        else:
+    def get_auth_provider_class(cls, credentials):
+        if isinstance(credentials, auth.KeystoneV3Credentials):
             return auth.KeystoneV3AuthProvider
+        else:
+            return auth.KeystoneV2AuthProvider
 
     def get_auth_provider(self, credentials):
         if credentials is None:
             raise exceptions.InvalidCredentials(
                 'Credentials must be specified')
-        auth_provider_class = self.get_auth_provider_class(self.auth_version)
+        auth_provider_class = self.get_auth_provider_class(credentials)
         return auth_provider_class(
-            client_type=getattr(self, 'client_type', None),
             interface=getattr(self, 'interface', None),
             credentials=credentials)
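Provider selection now keys off the type of the credentials object rather
than a tracked auth_version string. A minimal sketch, assuming the class in
tempest/manager.py is named ``Manager`` and that empty credentials are
acceptable for illustration purposes::

    from tempest import auth
    from tempest import manager

    creds = auth.KeystoneV3Credentials()   # illustrative, empty credentials
    provider_cls = manager.Manager.get_auth_provider_class(creds)
    assert provider_cls is auth.KeystoneV3AuthProvider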
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 3cfc698..79207cd 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -16,19 +16,11 @@
 
 import logging
 import os
-import re
-import six
 import subprocess
-import time
 
-from cinderclient import exceptions as cinder_exceptions
-import glanceclient
-from heatclient import exc as heat_exceptions
 import netaddr
-from neutronclient.common import exceptions as exc
-from novaclient import exceptions as nova_exceptions
+import six
 
-from tempest.api.network import common as net_common
 from tempest import auth
 from tempest import clients
 from tempest.common import debug
@@ -38,7 +30,7 @@
 from tempest import config
 from tempest import exceptions
 from tempest.openstack.common import log
-from tempest.openstack.common import timeutils
+from tempest.services.network import resources as net_resources
 import tempest.test
 
 CONF = config.CONF
@@ -54,68 +46,37 @@
 
 
 class ScenarioTest(tempest.test.BaseTestCase):
+    """Base class for scenario tests. Uses tempest own clients. """
 
     @classmethod
-    def setUpClass(cls):
-        super(ScenarioTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(ScenarioTest, cls).resource_setup()
+        # Using tempest client for isolated credentials as well
         cls.isolated_creds = isolated_creds.IsolatedCreds(
-            cls.__name__, tempest_client=True,
-            network_resources=cls.network_resources)
+            cls.__name__, network_resources=cls.network_resources)
         cls.manager = clients.Manager(
             credentials=cls.credentials()
         )
-
-    @classmethod
-    def _get_credentials(cls, get_creds, ctype):
-        if CONF.compute.allow_tenant_isolation:
-            creds = get_creds()
-        else:
-            creds = auth.get_default_credentials(ctype)
-        return creds
-
-    @classmethod
-    def credentials(cls):
-        return cls._get_credentials(cls.isolated_creds.get_primary_creds,
-                                    'user')
-
-
-class OfficialClientTest(tempest.test.BaseTestCase):
-    """
-    Official Client test base class for scenario testing.
-
-    Official Client tests are tests that have the following characteristics:
-
-     * Test basic operations of an API, typically in an order that
-       a regular user would perform those operations
-     * Test only the correct inputs and action paths -- no fuzz or
-       random input data is sent, only valid inputs.
-     * Use only the default client tool for calling an API
-    """
-
-    @classmethod
-    def setUpClass(cls):
-        super(OfficialClientTest, cls).setUpClass()
-        cls.isolated_creds = isolated_creds.IsolatedCreds(
-            cls.__name__, tempest_client=False,
-            network_resources=cls.network_resources)
-
-        cls.manager = clients.OfficialClientManager(
-            credentials=cls.credentials())
-        cls.compute_client = cls.manager.compute_client
+        cls.admin_manager = clients.Manager(cls.admin_credentials())
+        # Clients (in alphabetical order)
+        cls.flavors_client = cls.manager.flavors_client
+        cls.floating_ips_client = cls.manager.floating_ips_client
+        # Glance image client v1
         cls.image_client = cls.manager.image_client
-        cls.baremetal_client = cls.manager.baremetal_client
-        cls.identity_client = cls.manager.identity_client
+        # Compute image client
+        cls.images_client = cls.manager.images_client
+        cls.keypairs_client = cls.manager.keypairs_client
+        cls.networks_client = cls.admin_manager.networks_client
+        # Nova security groups client
+        cls.security_groups_client = cls.manager.security_groups_client
+        cls.servers_client = cls.manager.servers_client
+        cls.volumes_client = cls.manager.volumes_client
+        cls.snapshots_client = cls.manager.snapshots_client
+        cls.interface_client = cls.manager.interfaces_client
+        # Neutron network client
         cls.network_client = cls.manager.network_client
-        cls.volume_client = cls.manager.volume_client
-        cls.object_storage_client = cls.manager.object_storage_client
+        # Heat client
         cls.orchestration_client = cls.manager.orchestration_client
-        cls.data_processing_client = cls.manager.data_processing_client
-        cls.ceilometer_client = cls.manager.ceilometer_client
-
-    @classmethod
-    def tearDownClass(cls):
-        cls.isolated_creds.clear_isolated_creds()
-        super(OfficialClientTest, cls).tearDownClass()
 
     @classmethod
     def _get_credentials(cls, get_creds, ctype):
@@ -140,8 +101,10 @@
         return cls._get_credentials(cls.isolated_creds.get_admin_creds,
                                     'identity_admin')
 
+    # ## Methods to handle sync and async deletes
+
     def setUp(self):
-        super(OfficialClientTest, self).setUp()
+        super(ScenarioTest, self).setUp()
         self.cleanup_waits = []
         # NOTE(mtreinish) This is safe to do in setUp instead of setUp class
         # because scenario tests in the same test class should not share
@@ -152,31 +115,43 @@
         # not at the end of the class
         self.addCleanup(self._wait_for_cleanups)
 
-    @staticmethod
-    def not_found_exception(exception):
-        """
-        @return: True if exception is of NotFound type
-        """
-        NOT_FOUND_LIST = ['NotFound', 'HTTPNotFound']
-        return (exception.__class__.__name__ in NOT_FOUND_LIST
-                or
-                hasattr(exception, 'status_code') and
-                exception.status_code == 404)
-
-    def delete_wrapper(self, thing):
+    def delete_wrapper(self, delete_thing, *args, **kwargs):
         """Ignores NotFound exceptions for delete operations.
 
-        @param thing: object with delete() method.
-            OpenStack resources are assumed to have a delete() method which
-            destroys the resource
-        """
+        @param delete_thing: delete method of a resource. The method will be
+            executed as delete_thing(*args, **kwargs)
 
+        """
         try:
-            thing.delete()
-        except Exception as e:
+            # Tempest clients return dicts, so there is no common delete
+            # method available. Using a callable instead
+            delete_thing(*args, **kwargs)
+        except exceptions.NotFound:
             # If the resource is already missing, mission accomplished.
-            if not self.not_found_exception(e):
-                raise
+            pass
+
+    def addCleanup_with_wait(self, waiter_callable, thing_id, thing_id_param,
+                             cleanup_callable, cleanup_args=None,
+                             cleanup_kwargs=None, ignore_error=True):
+        """Adds wait for async resource deletion at the end of cleanups
+
+        @param waiter_callable: callable to wait for the resource to delete
+        @param thing_id: the id of the resource to be cleaned-up
+        @param thing_id_param: the name of the id param in the waiter
+        @param cleanup_callable: method to pass to self.addCleanup with
+            the following *cleanup_args, **cleanup_kwargs.
+            Usually a delete method.
+        """
+        if cleanup_args is None:
+            cleanup_args = []
+        if cleanup_kwargs is None:
+            cleanup_kwargs = {}
+        self.addCleanup(cleanup_callable, *cleanup_args, **cleanup_kwargs)
+        wait_dict = {
+            'waiter_callable': waiter_callable,
+            thing_id_param: thing_id
+        }
+        self.cleanup_waits.append(wait_dict)
 
     def _wait_for_cleanups(self):
         """To handle async delete actions, a list of waits is added
@@ -188,127 +163,99 @@
         because of the nature of the scenario tests.
         """
         for wait in self.cleanup_waits:
-            self.delete_timeout(**wait)
+            waiter_callable = wait.pop('waiter_callable')
+            waiter_callable(**wait)
 
-    def addCleanup_with_wait(self, things, thing_id,
-                             error_status='ERROR',
-                             exc_type=nova_exceptions.NotFound,
-                             cleanup_callable=None, cleanup_args=[],
-                             cleanup_kwargs={}):
-        """Adds wait for ansyc resource deletion at the end of cleanups
+    # ## Test functions library
+    #
+    # The create_[resource] functions only return body and discard the
+    # resp part which is not used in scenario tests
 
-        @param things: type of the resource to delete
-        @param thing_id:
-        @param error_status: see manager.delete_timeout()
-        @param exc_type: see manager.delete_timeout()
-        @param cleanup_callable: method to load pass to self.addCleanup with
-            the following *cleanup_args, **cleanup_kwargs.
-            usually a delete method. if not used, will try to use:
-            things.delete(thing_id)
+    def create_keypair(self, client=None):
+        if not client:
+            client = self.keypairs_client
+        name = data_utils.rand_name(self.__class__.__name__)
+        # We don't need to create a keypair by pubkey in scenario
+        resp, body = client.create_keypair(name)
+        self.addCleanup(client.delete_keypair, name)
+        return body
+
+    def create_server(self, name=None, image=None, flavor=None,
+                      wait_on_boot=True, wait_on_delete=True,
+                      create_kwargs=None):
+        """Creates VM instance.
+
+        @param image: image from which to create the instance
+        @param wait_on_boot: wait for status ACTIVE before continuing
+        @param wait_on_delete: force synchronous delete on cleanup
+        @param create_kwargs: additional details for instance creation
+        @return: server dict
         """
-        if cleanup_callable is None:
-            LOG.debug("no delete method passed. using {rclass}.delete({id}) as"
-                      " default".format(rclass=things, id=thing_id))
-            self.addCleanup(things.delete, thing_id)
+        if name is None:
+            name = data_utils.rand_name(self.__class__.__name__)
+        if image is None:
+            image = CONF.compute.image_ref
+        if flavor is None:
+            flavor = CONF.compute.flavor_ref
+        if create_kwargs is None:
+            create_kwargs = {}
+
+        LOG.debug("Creating a server (name: %s, image: %s, flavor: %s)",
+                  name, image, flavor)
+        _, server = self.servers_client.create_server(name, image, flavor,
+                                                      **create_kwargs)
+        if wait_on_delete:
+            self.addCleanup(self.servers_client.wait_for_server_termination,
+                            server['id'])
+        self.addCleanup_with_wait(
+            waiter_callable=self.servers_client.wait_for_server_termination,
+            thing_id=server['id'], thing_id_param='server_id',
+            cleanup_callable=self.delete_wrapper,
+            cleanup_args=[self.servers_client.delete_server, server['id']])
+        if wait_on_boot:
+            self.servers_client.wait_for_server_status(server_id=server['id'],
+                                                       status='ACTIVE')
+        # The instance retrieved on creation is missing network
+        # details, necessitating retrieval after it becomes active to
+        # ensure correct details.
+        _, server = self.servers_client.get_server(server['id'])
+        self.assertEqual(server['name'], name)
+        return server
+
+    def create_volume(self, size=1, name=None, snapshot_id=None,
+                      imageRef=None, volume_type=None, wait_on_delete=True):
+        if name is None:
+            name = data_utils.rand_name(self.__class__.__name__)
+        _, volume = self.volumes_client.create_volume(
+            size=size, display_name=name, snapshot_id=snapshot_id,
+            imageRef=imageRef, volume_type=volume_type)
+
+        if wait_on_delete:
+            self.addCleanup(self.volumes_client.wait_for_resource_deletion,
+                            volume['id'])
+            self.addCleanup(self.delete_wrapper,
+                            self.volumes_client.delete_volume, volume['id'])
         else:
-            self.addCleanup(cleanup_callable, *cleanup_args, **cleanup_kwargs)
-        wait_dict = {
-            'things': things,
-            'thing_id': thing_id,
-            'error_status': error_status,
-            'not_found_exception': exc_type,
-        }
-        self.cleanup_waits.append(wait_dict)
+            self.addCleanup_with_wait(
+                waiter_callable=self.volumes_client.wait_for_resource_deletion,
+                thing_id=volume['id'], thing_id_param='id',
+                cleanup_callable=self.delete_wrapper,
+                cleanup_args=[self.volumes_client.delete_volume, volume['id']])
 
-    def status_timeout(self, things, thing_id, expected_status,
-                       error_status='ERROR',
-                       not_found_exception=nova_exceptions.NotFound):
-        """
-        Given a thing and an expected status, do a loop, sleeping
-        for a configurable amount of time, checking for the
-        expected status to show. At any time, if the returned
-        status of the thing is ERROR, fail out.
-        """
-        self._status_timeout(things, thing_id,
-                             expected_status=expected_status,
-                             error_status=error_status,
-                             not_found_exception=not_found_exception)
+        self.assertEqual(name, volume['display_name'])
+        self.volumes_client.wait_for_volume_status(volume['id'], 'available')
+        # The volume retrieved on creation has a non-up-to-date status.
+        # Retrieval after it becomes active ensures correct details.
+        _, volume = self.volumes_client.get_volume(volume['id'])
+        return volume
 
-    def delete_timeout(self, things, thing_id,
-                       error_status='ERROR',
-                       not_found_exception=nova_exceptions.NotFound):
-        """
-        Given a thing, do a loop, sleeping
-        for a configurable amount of time, checking for the
-        deleted status to show. At any time, if the returned
-        status of the thing is ERROR, fail out.
-        """
-        self._status_timeout(things,
-                             thing_id,
-                             allow_notfound=True,
-                             error_status=error_status,
-                             not_found_exception=not_found_exception)
-
-    def _status_timeout(self,
-                        things,
-                        thing_id,
-                        expected_status=None,
-                        allow_notfound=False,
-                        error_status='ERROR',
-                        not_found_exception=nova_exceptions.NotFound):
-
-        log_status = expected_status if expected_status else ''
-        if allow_notfound:
-            log_status += ' or NotFound' if log_status != '' else 'NotFound'
-
-        def check_status():
-            # python-novaclient has resources available to its client
-            # that all implement a get() method taking an identifier
-            # for the singular resource to retrieve.
-            try:
-                thing = things.get(thing_id)
-            except not_found_exception:
-                if allow_notfound:
-                    return True
-                raise
-            except Exception as e:
-                if allow_notfound and self.not_found_exception(e):
-                    return True
-                raise
-
-            new_status = thing.status
-
-            # Some components are reporting error status in lower case
-            # so case sensitive comparisons can really mess things
-            # up.
-            if new_status.lower() == error_status.lower():
-                message = ("%s failed to get to expected status (%s). "
-                           "In %s state.") % (thing, expected_status,
-                                              new_status)
-                raise exceptions.BuildErrorException(message,
-                                                     server_id=thing_id)
-            elif new_status == expected_status and expected_status is not None:
-                return True  # All good.
-            LOG.debug("Waiting for %s to get to %s status. "
-                      "Currently in %s status",
-                      thing, log_status, new_status)
-        if not tempest.test.call_until_true(
-            check_status,
-            CONF.compute.build_timeout,
-            CONF.compute.build_interval):
-            message = ("Timed out waiting for thing %s "
-                       "to become %s") % (thing_id, log_status)
-            raise exceptions.TimeoutException(message)
-
-    def _create_loginable_secgroup_rule_nova(self, client=None,
-                                             secgroup_id=None):
-        if client is None:
-            client = self.compute_client
+    def _create_loginable_secgroup_rule(self, secgroup_id=None):
+        _client = self.security_groups_client
         if secgroup_id is None:
-            sgs = client.security_groups.list()
+            _, sgs = _client.list_security_groups()
             for sg in sgs:
-                if sg.name == 'default':
-                    secgroup_id = sg.id
+                if sg['name'] == 'default':
+                    secgroup_id = sg['id']
 
         # These rules are intended to permit inbound ssh and icmp
         # traffic from all sources, so no group_id is provided.
@@ -317,14 +264,14 @@
         rulesets = [
             {
                 # ssh
-                'ip_protocol': 'tcp',
+                'ip_proto': 'tcp',
                 'from_port': 22,
                 'to_port': 22,
                 'cidr': '0.0.0.0/0',
             },
             {
                 # ping
-                'ip_protocol': 'icmp',
+                'ip_proto': 'icmp',
                 'from_port': -1,
                 'to_port': -1,
                 'cidr': '0.0.0.0/0',
@@ -332,152 +279,42 @@
         ]
         rules = list()
         for ruleset in rulesets:
-            sg_rule = client.security_group_rules.create(secgroup_id,
-                                                         **ruleset)
-            self.addCleanup(self.delete_wrapper, sg_rule)
+            _, sg_rule = _client.create_security_group_rule(secgroup_id,
+                                                            **ruleset)
+            self.addCleanup(self.delete_wrapper,
+                            _client.delete_security_group_rule,
+                            sg_rule['id'])
             rules.append(sg_rule)
         return rules
 
-    def _create_security_group_nova(self, client=None,
-                                    namestart='secgroup-smoke-'):
-        if client is None:
-            client = self.compute_client
+    def _create_security_group(self):
         # Create security group
-        sg_name = data_utils.rand_name(namestart)
+        sg_name = data_utils.rand_name(self.__class__.__name__)
         sg_desc = sg_name + " description"
-        secgroup = client.security_groups.create(sg_name, sg_desc)
-        self.assertEqual(secgroup.name, sg_name)
-        self.assertEqual(secgroup.description, sg_desc)
-        self.addCleanup(self.delete_wrapper, secgroup)
+        _, secgroup = self.security_groups_client.create_security_group(
+            sg_name, sg_desc)
+        self.assertEqual(secgroup['name'], sg_name)
+        self.assertEqual(secgroup['description'], sg_desc)
+        self.addCleanup(self.delete_wrapper,
+                        self.security_groups_client.delete_security_group,
+                        secgroup['id'])
 
         # Add rules to the security group
-        self._create_loginable_secgroup_rule_nova(client, secgroup.id)
+        self._create_loginable_secgroup_rule(secgroup['id'])
 
         return secgroup
 
-    def create_server(self, client=None, name=None, image=None, flavor=None,
-                      wait_on_boot=True, wait_on_delete=True,
-                      create_kwargs={}):
-        """Creates VM instance.
-
-        @param client: compute client to create the instance
-        @param image: image from which to create the instance
-        @param wait_on_boot: wait for status ACTIVE before continue
-        @param wait_on_delete: force synchronous delete on cleanup
-        @param create_kwargs: additional details for instance creation
-        @return: client.server object
-        """
-        if client is None:
-            client = self.compute_client
-        if name is None:
-            name = data_utils.rand_name('scenario-server-')
-        if image is None:
-            image = CONF.compute.image_ref
-        if flavor is None:
-            flavor = CONF.compute.flavor_ref
-
-        fixed_network_name = CONF.compute.fixed_network_name
-        if 'nics' not in create_kwargs and fixed_network_name:
-            networks = client.networks.list()
-            # If several networks found, set the NetID on which to connect the
-            # server to avoid the following error "Multiple possible networks
-            # found, use a Network ID to be more specific."
-            # See Tempest #1250866
-            if len(networks) > 1:
-                for network in networks:
-                    if network.label == fixed_network_name:
-                        create_kwargs['nics'] = [{'net-id': network.id}]
-                        break
-                # If we didn't find the network we were looking for :
-                else:
-                    msg = ("The network on which the NIC of the server must "
-                           "be connected can not be found : "
-                           "fixed_network_name=%s. Starting instance without "
-                           "specifying a network.") % fixed_network_name
-                    LOG.info(msg)
-
-        LOG.debug("Creating a server (name: %s, image: %s, flavor: %s)",
-                  name, image, flavor)
-        server = client.servers.create(name, image, flavor, **create_kwargs)
-        self.assertEqual(server.name, name)
-        if wait_on_delete:
-            self.addCleanup(self.delete_timeout,
-                            self.compute_client.servers,
-                            server.id)
-        self.addCleanup_with_wait(self.compute_client.servers, server.id,
-                                  cleanup_callable=self.delete_wrapper,
-                                  cleanup_args=[server])
-        if wait_on_boot:
-            self.status_timeout(client.servers, server.id, 'ACTIVE')
-        # The instance retrieved on creation is missing network
-        # details, necessitating retrieval after it becomes active to
-        # ensure correct details.
-        server = client.servers.get(server.id)
-        LOG.debug("Created server: %s", server)
-        return server
-
-    def create_volume(self, client=None, size=1, name=None,
-                      snapshot_id=None, imageRef=None, volume_type=None,
-                      wait_on_delete=True):
-        if client is None:
-            client = self.volume_client
-        if name is None:
-            name = data_utils.rand_name('scenario-volume-')
-        LOG.debug("Creating a volume (size: %s, name: %s)", size, name)
-        volume = client.volumes.create(size=size, display_name=name,
-                                       snapshot_id=snapshot_id,
-                                       imageRef=imageRef,
-                                       volume_type=volume_type)
-        if wait_on_delete:
-            self.addCleanup(self.delete_timeout,
-                            self.volume_client.volumes,
-                            volume.id)
-        self.addCleanup_with_wait(self.volume_client.volumes, volume.id,
-                                  exc_type=cinder_exceptions.NotFound)
-        self.assertEqual(name, volume.display_name)
-        self.status_timeout(client.volumes, volume.id, 'available')
-        LOG.debug("Created volume: %s", volume)
-        return volume
-
-    def create_server_snapshot(self, server, compute_client=None,
-                               image_client=None, name=None):
-        if compute_client is None:
-            compute_client = self.compute_client
-        if image_client is None:
-            image_client = self.image_client
-        if name is None:
-            name = data_utils.rand_name('scenario-snapshot-')
-        LOG.debug("Creating a snapshot image for server: %s", server.name)
-        image_id = compute_client.servers.create_image(server, name)
-        self.addCleanup_with_wait(self.image_client.images, image_id,
-                                  exc_type=glanceclient.exc.HTTPNotFound)
-        self.status_timeout(image_client.images, image_id, 'active')
-        snapshot_image = image_client.images.get(image_id)
-        self.assertEqual(name, snapshot_image.name)
-        LOG.debug("Created snapshot image %s for server %s",
-                  snapshot_image.name, server.name)
-        return snapshot_image
-
-    def create_keypair(self, client=None, name=None):
-        if client is None:
-            client = self.compute_client
-        if name is None:
-            name = data_utils.rand_name('scenario-keypair-')
-        keypair = client.keypairs.create(name)
-        self.assertEqual(keypair.name, name)
-        self.addCleanup(self.delete_wrapper, keypair)
-        return keypair
-
     def get_remote_client(self, server_or_ip, username=None, private_key=None):
         if isinstance(server_or_ip, six.string_types):
             ip = server_or_ip
         else:
-            network_name_for_ssh = CONF.compute.network_for_ssh
-            ip = server_or_ip.networks[network_name_for_ssh][0]
+            addr = server_or_ip['addresses'][CONF.compute.network_for_ssh][0]
+            ip = addr['addr']
+
         if username is None:
             username = CONF.scenario.ssh_user
         if private_key is None:
-            private_key = self.keypair.private_key
+            private_key = self.keypair['private_key']
         linux_client = remote_client.RemoteClient(ip, username,
                                                   pkey=private_key)
         try:
@@ -489,19 +326,9 @@
 
         return linux_client
 
-    def _log_console_output(self, servers=None):
-        if not servers:
-            servers = self.compute_client.servers.list()
-        for server in servers:
-            LOG.debug('Console output for %s', server.id)
-            LOG.debug(server.get_console_output())
-
-    def wait_for_volume_status(self, status):
-        volume_id = self.volume.id
-        self.status_timeout(
-            self.volume_client.volumes, volume_id, status)
-
-    def _image_create(self, name, fmt, path, properties={}):
+    def _image_create(self, name, fmt, path, properties=None):
+        if properties is None:
+            properties = {}
         name = data_utils.rand_name('%s-' % name)
         image_file = open(path, 'rb')
         self.addCleanup(image_file.close)
@@ -512,26 +339,29 @@
             'is_public': 'False',
         }
         params.update(properties)
-        image = self.image_client.images.create(**params)
-        self.addCleanup(self.image_client.images.delete, image)
-        self.assertEqual("queued", image.status)
-        image.update(data=image_file)
-        return image.id
+        _, image = self.image_client.create_image(**params)
+        self.addCleanup(self.image_client.delete_image, image['id'])
+        self.assertEqual("queued", image['status'])
+        self.image_client.update_image(image['id'], data=image_file)
+        return image['id']
 
     def glance_image_create(self):
-        qcow2_img_path = (CONF.scenario.img_dir + "/" +
-                          CONF.scenario.qcow2_img_file)
+        img_path = CONF.scenario.img_dir + "/" + CONF.scenario.img_file
         aki_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.aki_img_file
         ari_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.ari_img_file
         ami_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.ami_img_file
-        LOG.debug("paths: img: %s, ami: %s, ari: %s, aki: %s"
-                  % (qcow2_img_path, ami_img_path, ari_img_path, aki_img_path))
+        img_container_format = CONF.scenario.img_container_format
+        img_disk_format = CONF.scenario.img_disk_format
+        LOG.debug("paths: img: %s, container_fomat: %s, disk_format: %s, "
+                  "ami: %s, ari: %s, aki: %s" %
+                  (img_path, img_container_format, img_disk_format,
+                   ami_img_path, ari_img_path, aki_img_path))
         try:
             self.image = self._image_create('scenario-img',
-                                            'bare',
-                                            qcow2_img_path,
+                                            img_container_format,
+                                            img_path,
                                             properties={'disk_format':
-                                                        'qcow2'})
+                                                        img_disk_format})
         except IOError:
             LOG.debug("A qcow2 image was not found. Try to get a uec image.")
             kernel = self._image_create('scenario-aki', 'aki', aki_img_path)
@@ -544,300 +374,157 @@
                                             properties=properties)
         LOG.debug("image:%s" % self.image)
 
+    def _log_console_output(self, servers=None):
+        if not CONF.compute_feature_enabled.console_output:
+            LOG.debug('Console output not supported, cannot log')
+            return
+        if not servers:
+            _, servers = self.servers_client.list_servers()
+            servers = servers['servers']
+        for server in servers:
+            LOG.debug('Console output for %s', server['id'])
+            LOG.debug(self.servers_client.get_console_output(server['id'],
+                                                             length=None))
 
-# power/provision states as of icehouse
-class BaremetalPowerStates(object):
-    """Possible power states of an Ironic node."""
-    POWER_ON = 'power on'
-    POWER_OFF = 'power off'
-    REBOOT = 'rebooting'
-    SUSPEND = 'suspended'
-
-
-class BaremetalProvisionStates(object):
-    """Possible provision states of an Ironic node."""
-    NOSTATE = None
-    INIT = 'initializing'
-    ACTIVE = 'active'
-    BUILDING = 'building'
-    DEPLOYWAIT = 'wait call-back'
-    DEPLOYING = 'deploying'
-    DEPLOYFAIL = 'deploy failed'
-    DEPLOYDONE = 'deploy complete'
-    DELETING = 'deleting'
-    DELETED = 'deleted'
-    ERROR = 'error'
-
-
-class BaremetalScenarioTest(OfficialClientTest):
-    @classmethod
-    def setUpClass(cls):
-        super(BaremetalScenarioTest, cls).setUpClass()
-
-        if (not CONF.service_available.ironic or
-           not CONF.baremetal.driver_enabled):
-            msg = 'Ironic not available or Ironic compute driver not enabled'
-            raise cls.skipException(msg)
-
-        # use an admin client manager for baremetal client
-        admin_creds = cls.admin_credentials()
-        manager = clients.OfficialClientManager(credentials=admin_creds)
-        cls.baremetal_client = manager.baremetal_client
-
-        # allow any issues obtaining the node list to raise early
-        cls.baremetal_client.node.list()
-
-    def _node_state_timeout(self, node_id, state_attr,
-                            target_states, timeout=10, interval=1):
-        if not isinstance(target_states, list):
-            target_states = [target_states]
-
-        def check_state():
-            node = self.get_node(node_id=node_id)
-            if getattr(node, state_attr) in target_states:
-                return True
-            return False
-
-        if not tempest.test.call_until_true(
-            check_state, timeout, interval):
-            msg = ("Timed out waiting for node %s to reach %s state(s) %s" %
-                   (node_id, state_attr, target_states))
-            raise exceptions.TimeoutException(msg)
-
-    def wait_provisioning_state(self, node_id, state, timeout):
-        self._node_state_timeout(
-            node_id=node_id, state_attr='provision_state',
-            target_states=state, timeout=timeout)
-
-    def wait_power_state(self, node_id, state):
-        self._node_state_timeout(
-            node_id=node_id, state_attr='power_state',
-            target_states=state, timeout=CONF.baremetal.power_timeout)
-
-    def wait_node(self, instance_id):
-        """Waits for a node to be associated with instance_id."""
-        from ironicclient import exc as ironic_exceptions
-
-        def _get_node():
-            node = None
-            try:
-                node = self.get_node(instance_id=instance_id)
-            except ironic_exceptions.HTTPNotFound:
-                pass
-            return node is not None
-
-        if not tempest.test.call_until_true(
-            _get_node, CONF.baremetal.association_timeout, 1):
-            msg = ('Timed out waiting to get Ironic node by instance id %s'
-                   % instance_id)
-            raise exceptions.TimeoutException(msg)
-
-    def get_node(self, node_id=None, instance_id=None):
-        if node_id:
-            return self.baremetal_client.node.get(node_id)
-        elif instance_id:
-            return self.baremetal_client.node.get_by_instance_uuid(instance_id)
-
-    def get_ports(self, node_id):
-        ports = []
-        for port in self.baremetal_client.node.list_ports(node_id):
-            ports.append(self.baremetal_client.port.get(port.uuid))
-        return ports
-
-    def add_keypair(self):
-        self.keypair = self.create_keypair()
-
-    def verify_connectivity(self, ip=None):
-        if ip:
-            dest = self.get_remote_client(ip)
-        else:
-            dest = self.get_remote_client(self.instance)
-        dest.validate_authentication()
-
-    def boot_instance(self):
-        create_kwargs = {
-            'key_name': self.keypair.id
-        }
-        self.instance = self.create_server(
-            wait_on_boot=False, create_kwargs=create_kwargs)
-
-        self.addCleanup_with_wait(self.compute_client.servers,
-                                  self.instance.id,
-                                  cleanup_callable=self.delete_wrapper,
-                                  cleanup_args=[self.instance])
-
-        self.wait_node(self.instance.id)
-        self.node = self.get_node(instance_id=self.instance.id)
-
-        self.wait_power_state(self.node.uuid, BaremetalPowerStates.POWER_ON)
-
-        self.wait_provisioning_state(
-            self.node.uuid,
-            [BaremetalProvisionStates.DEPLOYWAIT,
-             BaremetalProvisionStates.ACTIVE],
-            timeout=15)
-
-        self.wait_provisioning_state(self.node.uuid,
-                                     BaremetalProvisionStates.ACTIVE,
-                                     timeout=CONF.baremetal.active_timeout)
-
-        self.status_timeout(
-            self.compute_client.servers, self.instance.id, 'ACTIVE')
-
-        self.node = self.get_node(instance_id=self.instance.id)
-        self.instance = self.compute_client.servers.get(self.instance.id)
-
-    def terminate_instance(self):
-        self.instance.delete()
-        self.wait_power_state(self.node.uuid, BaremetalPowerStates.POWER_OFF)
-        self.wait_provisioning_state(
-            self.node.uuid,
-            BaremetalProvisionStates.NOSTATE,
-            timeout=CONF.baremetal.unprovision_timeout)
-
-
-class EncryptionScenarioTest(OfficialClientTest):
-    """
-    Base class for encryption scenario tests
-    """
-
-    @classmethod
-    def setUpClass(cls):
-        super(EncryptionScenarioTest, cls).setUpClass()
-
-        # use admin credentials to create encrypted volume types
-        admin_creds = cls.admin_credentials()
-        manager = clients.OfficialClientManager(credentials=admin_creds)
-        cls.admin_volume_client = manager.volume_client
-
-    def _wait_for_volume_status(self, status):
-        self.status_timeout(
-            self.volume_client.volumes, self.volume.id, status)
-
-    def nova_boot(self):
-        self.keypair = self.create_keypair()
-        create_kwargs = {'key_name': self.keypair.name}
-        self.server = self.create_server(self.compute_client,
-                                         image=self.image,
-                                         create_kwargs=create_kwargs)
-
-    def create_volume_type(self, client=None, name=None):
-        if not client:
-            client = self.admin_volume_client
-        if not name:
-            name = 'generic'
-        randomized_name = data_utils.rand_name('scenario-type-' + name + '-')
-        LOG.debug("Creating a volume type: %s", randomized_name)
-        volume_type = client.volume_types.create(randomized_name)
-        self.addCleanup(client.volume_types.delete, volume_type.id)
-        return volume_type
-
-    def create_encryption_type(self, client=None, type_id=None, provider=None,
-                               key_size=None, cipher=None,
-                               control_location=None):
-        if not client:
-            client = self.admin_volume_client
-        if not type_id:
-            volume_type = self.create_volume_type()
-            type_id = volume_type.id
-        LOG.debug("Creating an encryption type for volume type: %s", type_id)
-        client.volume_encryption_types.create(type_id,
-                                              {'provider': provider,
-                                               'key_size': key_size,
-                                               'cipher': cipher,
-                                               'control_location':
-                                               control_location})
+    def create_server_snapshot(self, server, name=None):
+        # Glance client
+        _image_client = self.image_client
+        # Compute client
+        _images_client = self.images_client
+        if name is None:
+            name = data_utils.rand_name('scenario-snapshot-')
+        LOG.debug("Creating a snapshot image for server: %s", server['name'])
+        resp, image = _images_client.create_image(server['id'], name)
+        image_id = resp['location'].split('images/')[1]
+        _image_client.wait_for_image_status(image_id, 'active')
+        self.addCleanup_with_wait(
+            waiter_callable=_image_client.wait_for_resource_deletion,
+            thing_id=image_id, thing_id_param='id',
+            cleanup_callable=self.delete_wrapper,
+            cleanup_args=[_image_client.delete_image, image_id])
+        _, snapshot_image = _image_client.get_image_meta(image_id)
+        image_name = snapshot_image['name']
+        self.assertEqual(name, image_name)
+        LOG.debug("Created snapshot image %s for server %s",
+                  image_name, server['name'])
+        return snapshot_image
 
     def nova_volume_attach(self):
-        attach_volume_client = self.compute_client.volumes.create_server_volume
-        volume = attach_volume_client(self.server.id,
-                                      self.volume.id,
-                                      '/dev/vdb')
-        self.assertEqual(self.volume.id, volume.id)
-        self._wait_for_volume_status('in-use')
+        # TODO(andreaf) Device should be here: CONF.compute.volume_device_name
+        _, volume_attachment = self.servers_client.attach_volume(
+            self.server['id'], self.volume['id'], '/dev/vdb')
+        volume = volume_attachment['volumeAttachment']
+        self.assertEqual(self.volume['id'], volume['id'])
+        self.volumes_client.wait_for_volume_status(volume['id'], 'in-use')
+        # Refresh the volume after the attachment
+        _, self.volume = self.volumes_client.get_volume(volume['id'])
 
     def nova_volume_detach(self):
-        detach_volume_client = self.compute_client.volumes.delete_server_volume
-        detach_volume_client(self.server.id, self.volume.id)
-        self._wait_for_volume_status('available')
+        self.servers_client.detach_volume(self.server['id'], self.volume['id'])
+        self.volumes_client.wait_for_volume_status(self.volume['id'],
+                                                   'available')
 
-        volume = self.volume_client.volumes.get(self.volume.id)
-        self.assertEqual('available', volume.status)
+        _, volume = self.volumes_client.get_volume(self.volume['id'])
+        self.assertEqual('available', volume['status'])
+
+    def rebuild_server(self, server_id, image=None,
+                       preserve_ephemeral=False, wait=True,
+                       rebuild_kwargs=None):
+        if image is None:
+            image = CONF.compute.image_ref
+
+        rebuild_kwargs = rebuild_kwargs or {}
+
+        LOG.debug("Rebuilding server (id: %s, image: %s, preserve eph: %s)",
+                  server_id, image, preserve_ephemeral)
+        self.servers_client.rebuild(server_id=server_id, image_ref=image,
+                                    preserve_ephemeral=preserve_ephemeral,
+                                    **rebuild_kwargs)
+        if wait:
+            self.servers_client.wait_for_server_status(server_id, 'ACTIVE')
+
+    def ping_ip_address(self, ip_address, should_succeed=True):
+        cmd = ['ping', '-c1', '-w1', ip_address]
+
+        def ping():
+            proc = subprocess.Popen(cmd,
+                                    stdout=subprocess.PIPE,
+                                    stderr=subprocess.PIPE)
+            proc.communicate()
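+            # A zero return code means the ping succeeded; compare the result
+            # with the expected outcome.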
+            return (proc.returncode == 0) == should_succeed
+
+        return tempest.test.call_until_true(
+            ping, CONF.compute.ping_timeout, 1)
 
 
-class NetworkScenarioTest(OfficialClientTest):
-    """
-    Base class for network scenario tests
+class NetworkScenarioTest(ScenarioTest):
+    """Base class for network scenario tests.
+    This class provides helpers for network scenario tests that use the
+    neutron API. Helpers from the ancestor class which use the nova network
+    API are overridden with the neutron API.
+
+    This class also enforces the use of Neutron instead of nova-network.
+    Subclassed tests will be skipped if Neutron is not enabled.
+
     """
 
     @classmethod
     def check_preconditions(cls):
-        if (CONF.service_available.neutron):
-            cls.enabled = True
-            # verify that neutron_available is telling the truth
-            try:
-                cls.network_client.list_networks()
-            except exc.EndpointNotFound:
-                cls.enabled = False
-                raise
-        else:
-            cls.enabled = False
-            msg = 'Neutron not available'
-            raise cls.skipException(msg)
+        if not CONF.service_available.neutron:
+            raise cls.skipException('Neutron not available')
 
     @classmethod
-    def setUpClass(cls):
-        super(NetworkScenarioTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(NetworkScenarioTest, cls).resource_setup()
         cls.tenant_id = cls.manager.identity_client.tenant_id
+        cls.check_preconditions()
 
-    def _create_network(self, tenant_id, namestart='network-smoke-'):
+    def _create_network(self, client=None, tenant_id=None,
+                        namestart='network-smoke-'):
+        if not client:
+            client = self.network_client
+        if not tenant_id:
+            tenant_id = client.rest_client.tenant_id
         name = data_utils.rand_name(namestart)
-        body = dict(
-            network=dict(
-                name=name,
-                tenant_id=tenant_id,
-            ),
-        )
-        result = self.network_client.create_network(body=body)
-        network = net_common.DeletableNetwork(client=self.network_client,
-                                              **result['network'])
+        _, result = client.create_network(name=name, tenant_id=tenant_id)
+        network = net_resources.DeletableNetwork(client=client,
+                                                 **result['network'])
         self.assertEqual(network.name, name)
-        self.addCleanup(self.delete_wrapper, network)
+        self.addCleanup(self.delete_wrapper, network.delete)
         return network
 
-    def _list_networks(self, **kwargs):
-        nets = self.network_client.list_networks(**kwargs)
-        return nets['networks']
+    def _list_networks(self, *args, **kwargs):
+        """List networks using admin creds """
+        return self._admin_lister('networks')(*args, **kwargs)
 
-    def _list_subnets(self, **kwargs):
-        subnets = self.network_client.list_subnets(**kwargs)
-        return subnets['subnets']
+    def _list_subnets(self, *args, **kwargs):
+        """List subnets using admin creds """
+        return self._admin_lister('subnets')(*args, **kwargs)
 
-    def _list_routers(self, **kwargs):
-        routers = self.network_client.list_routers(**kwargs)
-        return routers['routers']
+    def _list_routers(self, *args, **kwargs):
+        """List routers using admin creds """
+        return self._admin_lister('routers')(*args, **kwargs)
 
-    def _list_ports(self, **kwargs):
-        ports = self.network_client.list_ports(**kwargs)
-        return ports['ports']
+    def _list_ports(self, *args, **kwargs):
+        """List ports using admin creds """
+        return self._admin_lister('ports')(*args, **kwargs)
 
-    def _get_tenant_own_network_num(self, tenant_id):
-        nets = self._list_networks(tenant_id=tenant_id)
-        return len(nets)
+    def _admin_lister(self, resource_type):
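+        # Return a callable that lists the given resource type using the
+        # admin network client and unwraps the response body.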
+        def temp(*args, **kwargs):
+            temp_method = self.admin_manager.network_client.__getattr__(
+                'list_%s' % resource_type)
+            _, resource_list = temp_method(*args, **kwargs)
+            return resource_list[resource_type]
+        return temp
 
-    def _get_tenant_own_subnet_num(self, tenant_id):
-        subnets = self._list_subnets(tenant_id=tenant_id)
-        return len(subnets)
-
-    def _get_tenant_own_port_num(self, tenant_id):
-        ports = self._list_ports(tenant_id=tenant_id)
-        return len(ports)
-
-    def _create_subnet(self, network, namestart='subnet-smoke-', **kwargs):
+    def _create_subnet(self, network, client=None, namestart='subnet-smoke',
+                       **kwargs):
         """
         Create a subnet for the given network within the cidr block
         configured for tenant networks.
         """
+        if not client:
+            client = self.network_client
 
         def cidr_in_use(cidr, tenant_id):
             """
@@ -852,69 +539,73 @@
         # Repeatedly attempt subnet creation with sequential cidr
         # blocks until an unallocated block is found.
         for subnet_cidr in tenant_cidr.subnet(
-            CONF.network.tenant_network_mask_bits):
+                CONF.network.tenant_network_mask_bits):
             str_cidr = str(subnet_cidr)
             if cidr_in_use(str_cidr, tenant_id=network.tenant_id):
                 continue
 
-            body = dict(
-                subnet=dict(
-                    name=data_utils.rand_name(namestart),
-                    ip_version=4,
-                    network_id=network.id,
-                    tenant_id=network.tenant_id,
-                    cidr=str_cidr,
-                ),
+            subnet = dict(
+                name=data_utils.rand_name(namestart),
+                ip_version=4,
+                network_id=network.id,
+                tenant_id=network.tenant_id,
+                cidr=str_cidr,
+                **kwargs
             )
-            body['subnet'].update(kwargs)
             try:
-                result = self.network_client.create_subnet(body=body)
+                _, result = client.create_subnet(**subnet)
                 break
-            except exc.NeutronClientException as e:
+            except exceptions.Conflict as e:
                 is_overlapping_cidr = 'overlaps with another subnet' in str(e)
                 if not is_overlapping_cidr:
                     raise
         self.assertIsNotNone(result, 'Unable to allocate tenant network')
-        subnet = net_common.DeletableSubnet(client=self.network_client,
-                                            **result['subnet'])
+        subnet = net_resources.DeletableSubnet(client=client,
+                                               **result['subnet'])
         self.assertEqual(subnet.cidr, str_cidr)
-        self.addCleanup(self.delete_wrapper, subnet)
+        self.addCleanup(self.delete_wrapper, subnet.delete)
         return subnet
 
-    def _create_port(self, network, namestart='port-quotatest-'):
+    def _create_port(self, network, client=None, namestart='port-quotatest'):
+        if not client:
+            client = self.network_client
         name = data_utils.rand_name(namestart)
-        body = dict(
-            port=dict(name=name,
-                      network_id=network.id,
-                      tenant_id=network.tenant_id))
-        result = self.network_client.create_port(body=body)
+        _, result = client.create_port(
+            name=name,
+            network_id=network.id,
+            tenant_id=network.tenant_id)
         self.assertIsNotNone(result, 'Unable to allocate port')
-        port = net_common.DeletablePort(client=self.network_client,
-                                        **result['port'])
-        self.addCleanup(self.delete_wrapper, port)
+        port = net_resources.DeletablePort(client=client,
+                                           **result['port'])
+        self.addCleanup(self.delete_wrapper, port.delete)
         return port
 
     def _get_server_port_id(self, server, ip_addr=None):
-        ports = self._list_ports(device_id=server.id, fixed_ip=ip_addr)
+        ports = self._list_ports(device_id=server['id'],
+                                 fixed_ip=ip_addr)
         self.assertEqual(len(ports), 1,
                          "Unable to determine which port to target.")
         return ports[0]['id']
 
-    def _create_floating_ip(self, thing, external_network_id, port_id=None):
+    def _get_network_by_name(self, network_name):
+        net = self._list_networks(name=network_name)
+        return net_resources.AttributeDict(net[0])
+
+    def _create_floating_ip(self, thing, external_network_id, port_id=None,
+                            client=None):
+        if not client:
+            client = self.network_client
         if not port_id:
             port_id = self._get_server_port_id(thing)
-        body = dict(
-            floatingip=dict(
-                floating_network_id=external_network_id,
-                port_id=port_id,
-                tenant_id=thing.tenant_id,
-            )
+        _, result = client.create_floatingip(
+            floating_network_id=external_network_id,
+            port_id=port_id,
+            tenant_id=thing['tenant_id']
         )
-        result = self.network_client.create_floatingip(body=body)
-        floating_ip = net_common.DeletableFloatingIp(
-            client=self.network_client,
+        floating_ip = net_resources.DeletableFloatingIp(
+            client=client,
             **result['floatingip'])
-        self.addCleanup(self.delete_wrapper, floating_ip)
+        self.addCleanup(self.delete_wrapper, floating_ip.delete)
         return floating_ip
 
     def _associate_floating_ip(self, floating_ip, server):
@@ -931,71 +622,6 @@
         self.assertIsNone(floating_ip.port_id)
         return floating_ip
 
-    def _ping_ip_address(self, ip_address, should_succeed=True):
-        cmd = ['ping', '-c1', '-w1', ip_address]
-
-        def ping():
-            proc = subprocess.Popen(cmd,
-                                    stdout=subprocess.PIPE,
-                                    stderr=subprocess.PIPE)
-            proc.wait()
-            return (proc.returncode == 0) == should_succeed
-
-        return tempest.test.call_until_true(
-            ping, CONF.compute.ping_timeout, 1)
-
-    def _create_pool(self, lb_method, protocol, subnet_id):
-        """Wrapper utility that returns a test pool."""
-        name = data_utils.rand_name('pool-')
-        body = {
-            "pool": {
-                "protocol": protocol,
-                "name": name,
-                "subnet_id": subnet_id,
-                "lb_method": lb_method
-            }
-        }
-        resp = self.network_client.create_pool(body=body)
-        pool = net_common.DeletablePool(client=self.network_client,
-                                        **resp['pool'])
-        self.assertEqual(pool['name'], name)
-        self.addCleanup(self.delete_wrapper, pool)
-        return pool
-
-    def _create_member(self, address, protocol_port, pool_id):
-        """Wrapper utility that returns a test member."""
-        body = {
-            "member": {
-                "protocol_port": protocol_port,
-                "pool_id": pool_id,
-                "address": address
-            }
-        }
-        resp = self.network_client.create_member(body)
-        member = net_common.DeletableMember(client=self.network_client,
-                                            **resp['member'])
-        self.addCleanup(self.delete_wrapper, member)
-        return member
-
-    def _create_vip(self, protocol, protocol_port, subnet_id, pool_id):
-        """Wrapper utility that returns a test vip."""
-        name = data_utils.rand_name('vip-')
-        body = {
-            "vip": {
-                "protocol": protocol,
-                "name": name,
-                "subnet_id": subnet_id,
-                "pool_id": pool_id,
-                "protocol_port": protocol_port
-            }
-        }
-        resp = self.network_client.create_vip(body)
-        vip = net_common.DeletableVip(client=self.network_client,
-                                      **resp['vip'])
-        self.assertEqual(vip['name'], name)
-        self.addCleanup(self.delete_wrapper, vip)
-        return vip
-
     def _check_vm_connectivity(self, ip_address,
                                username=None,
                                private_key=None,
@@ -1015,8 +641,8 @@
             msg = "Timed out waiting for %s to become reachable" % ip_address
         else:
             msg = "ip address %s is reachable" % ip_address
-        self.assertTrue(self._ping_ip_address(ip_address,
-                                              should_succeed=should_connect),
+        self.assertTrue(self.ping_ip_address(ip_address,
+                                             should_succeed=should_connect),
                         msg=msg)
         if should_connect:
             # no need to check ssh for negative connectivity
@@ -1057,7 +683,7 @@
         # The target login is assumed to have been configured for
         # key-based authentication by cloud-init.
         try:
-            for net_name, ip_addresses in server.networks.iteritems():
+            for net_name, ip_addresses in server['networks'].iteritems():
                 for ip_address in ip_addresses:
                     self._check_vm_connectivity(ip_address,
                                                 username,
@@ -1085,7 +711,8 @@
             try:
                 source.ping_host(dest)
             except exceptions.SSHExecCommandFailed:
-                LOG.exception('Failed to ping host via ssh connection')
+                LOG.warn('Failed to ping IP %s via an ssh connection '
+                         'from %s.', dest, source.ssh_client.host)
                 return not should_succeed
             return should_succeed
 
@@ -1093,23 +720,25 @@
                                             CONF.compute.ping_timeout,
                                             1)
 
-    def _create_security_group_neutron(self, tenant_id, client=None,
-                                       namestart='secgroup-smoke-'):
+    def _create_security_group(self, client=None, tenant_id=None,
+                               namestart='secgroup-smoke'):
         if client is None:
             client = self.network_client
+        if tenant_id is None:
+            tenant_id = client.rest_client.tenant_id
         secgroup = self._create_empty_security_group(namestart=namestart,
                                                      client=client,
                                                      tenant_id=tenant_id)
 
         # Add rules to the security group
-        rules = self._create_loginable_secgroup_rule_neutron(secgroup=secgroup)
+        rules = self._create_loginable_secgroup_rule(secgroup=secgroup)
         for rule in rules:
             self.assertEqual(tenant_id, rule.tenant_id)
             self.assertEqual(secgroup.id, rule.security_group_id)
         return secgroup
 
-    def _create_empty_security_group(self, tenant_id, client=None,
-                                     namestart='secgroup-smoke-'):
+    def _create_empty_security_group(self, client=None, tenant_id=None,
+                                     namestart='secgroup-smoke'):
         """Create a security group without rules.
 
         Default rules will be created:
@@ -1121,43 +750,43 @@
         """
         if client is None:
             client = self.network_client
+        if not tenant_id:
+            tenant_id = client.rest_client.tenant_id
         sg_name = data_utils.rand_name(namestart)
         sg_desc = sg_name + " description"
         sg_dict = dict(name=sg_name,
                        description=sg_desc)
         sg_dict['tenant_id'] = tenant_id
-        body = dict(security_group=sg_dict)
-        result = client.create_security_group(body=body)
-        secgroup = net_common.DeletableSecurityGroup(
+        _, result = client.create_security_group(**sg_dict)
+        secgroup = net_resources.DeletableSecurityGroup(
             client=client,
             **result['security_group']
         )
         self.assertEqual(secgroup.name, sg_name)
         self.assertEqual(tenant_id, secgroup.tenant_id)
         self.assertEqual(secgroup.description, sg_desc)
-        self.addCleanup(self.delete_wrapper, secgroup)
+        self.addCleanup(self.delete_wrapper, secgroup.delete)
         return secgroup
 
-    def _default_security_group(self, tenant_id, client=None):
+    def _default_security_group(self, client=None, tenant_id=None):
         """Get default secgroup for given tenant_id.
 
         :returns: DeletableSecurityGroup -- default secgroup for given tenant
         """
         if client is None:
             client = self.network_client
+        if not tenant_id:
+            tenant_id = client.rest_client.tenant_id
         sgs = [
             sg for sg in client.list_security_groups().values()[0]
             if sg['tenant_id'] == tenant_id and sg['name'] == 'default'
         ]
         msg = "No default security group for tenant %s." % (tenant_id)
         self.assertTrue(len(sgs) > 0, msg)
-        if len(sgs) > 1:
-            msg = "Found %d default security groups" % len(sgs)
-            raise exc.NeutronClientNoUniqueMatch(msg=msg)
-        return net_common.DeletableSecurityGroup(client=client,
-                                                 **sgs[0])
+        return net_resources.DeletableSecurityGroup(client=client,
+                                                    **sgs[0])
 
-    def _create_security_group_rule(self, client=None, secgroup=None,
+    def _create_security_group_rule(self, secgroup=None, client=None,
                                     tenant_id=None, **kwargs):
         """Create a rule from a dictionary of rule parameters.
 
@@ -1165,8 +794,6 @@
         default secgroup in tenant_id.
 
         :param secgroup: type DeletableSecurityGroup.
-        :param secgroup_id: search for secgroup by id
-            default -- choose default secgroup for given tenant_id
         :param tenant_id: if secgroup not passed -- the tenant in which to
             search for default secgroup
         :param kwargs: a dictionary containing rule parameters:
@@ -1180,28 +807,28 @@
         """
         if client is None:
             client = self.network_client
+        if not tenant_id:
+            tenant_id = client.rest_client.tenant_id
         if secgroup is None:
-            secgroup = self._default_security_group(tenant_id)
+            secgroup = self._default_security_group(client=client,
+                                                    tenant_id=tenant_id)
 
         ruleset = dict(security_group_id=secgroup.id,
-                       tenant_id=secgroup.tenant_id,
-                       )
+                       tenant_id=secgroup.tenant_id)
         ruleset.update(kwargs)
 
-        body = dict(security_group_rule=dict(ruleset))
-        sg_rule = client.create_security_group_rule(body=body)
-        sg_rule = net_common.DeletableSecurityGroupRule(
+        _, sg_rule = client.create_security_group_rule(**ruleset)
+        sg_rule = net_resources.DeletableSecurityGroupRule(
             client=client,
             **sg_rule['security_group_rule']
         )
-        self.addCleanup(self.delete_wrapper, sg_rule)
+        self.addCleanup(self.delete_wrapper, sg_rule.delete)
         self.assertEqual(secgroup.tenant_id, sg_rule.tenant_id)
         self.assertEqual(secgroup.id, sg_rule.security_group_id)
 
         return sg_rule
 
-    def _create_loginable_secgroup_rule_neutron(self, client=None,
-                                                secgroup=None):
+    def _create_loginable_secgroup_rule(self, client=None, secgroup=None):
         """These rules are intended to permit inbound ssh and icmp
         traffic from all sources, so no group_id is provided.
         Setting a group_id would only permit traffic from ports
@@ -1229,10 +856,10 @@
                 try:
                     sg_rule = self._create_security_group_rule(
                         client=client, secgroup=secgroup, **ruleset)
-                except exc.NeutronClientException as ex:
+                except exceptions.Conflict as ex:
                     # if rule already exist - skip rule and continue
-                    if not (ex.status_code is 409 and 'Security group rule'
-                            ' already exists' in ex.message):
+                    msg = 'Security group rule already exists'
+                    if msg not in ex._error_string:
                         raise ex
                 else:
                     self.assertEqual(r_direction, sg_rule.direction)
@@ -1240,25 +867,48 @@
 
         return rules
 
+    def _create_pool(self, lb_method, protocol, subnet_id):
+        """Wrapper utility that returns a test pool."""
+        client = self.network_client
+        name = data_utils.rand_name('pool')
+        _, resp_pool = client.create_pool(protocol=protocol, name=name,
+                                          subnet_id=subnet_id,
+                                          lb_method=lb_method)
+        pool = net_resources.DeletablePool(client=client, **resp_pool['pool'])
+        self.assertEqual(pool['name'], name)
+        self.addCleanup(self.delete_wrapper, pool.delete)
+        return pool
+
+    def _create_member(self, address, protocol_port, pool_id):
+        """Wrapper utility that returns a test member."""
+        client = self.network_client
+        _, resp_member = client.create_member(protocol_port=protocol_port,
+                                              pool_id=pool_id,
+                                              address=address)
+        member = net_resources.DeletableMember(client=client,
+                                               **resp_member['member'])
+        self.addCleanup(self.delete_wrapper, member.delete)
+        return member
+
+    def _create_vip(self, protocol, protocol_port, subnet_id, pool_id):
+        """Wrapper utility that returns a test vip."""
+        client = self.network_client
+        name = data_utils.rand_name('vip')
+        _, resp_vip = client.create_vip(protocol=protocol, name=name,
+                                        subnet_id=subnet_id, pool_id=pool_id,
+                                        protocol_port=protocol_port)
+        vip = net_resources.DeletableVip(client=client, **resp_vip['vip'])
+        self.assertEqual(vip['name'], name)
+        self.addCleanup(self.delete_wrapper, vip.delete)
+        return vip
+
     def _ssh_to_server(self, server, private_key):
         ssh_login = CONF.compute.image_ssh_user
         return self.get_remote_client(server,
                                       username=ssh_login,
                                       private_key=private_key)
 
-    def _show_quota_network(self, tenant_id):
-        quota = self.network_client.show_quota(tenant_id)
-        return quota['quota']['network']
-
-    def _show_quota_subnet(self, tenant_id):
-        quota = self.network_client.show_quota(tenant_id)
-        return quota['quota']['subnet']
-
-    def _show_quota_port(self, tenant_id):
-        quota = self.network_client.show_quota(tenant_id)
-        return quota['quota']['port']
-
-    def _get_router(self, tenant_id):
+    def _get_router(self, client=None, tenant_id=None):
         """Retrieve a router for the given tenant id.
 
         If a public router has been configured, it will be returned.
@@ -1267,57 +917,272 @@
         network has, a tenant router will be created and returned that
         routes traffic to the public network.
         """
+        if not client:
+            client = self.network_client
+        if not tenant_id:
+            tenant_id = client.rest_client.tenant_id
         router_id = CONF.network.public_router_id
         network_id = CONF.network.public_network_id
         if router_id:
-            result = self.network_client.show_router(router_id)
-            return net_common.AttributeDict(**result['router'])
+            result = client.show_router(router_id)
+            return net_resources.AttributeDict(**result['router'])
         elif network_id:
-            router = self._create_router(tenant_id)
-            router.add_gateway(network_id)
+            router = self._create_router(client, tenant_id)
+            router.set_gateway(network_id)
             return router
         else:
             raise Exception("Neither of 'public_router_id' or "
                             "'public_network_id' has been defined.")
 
-    def _create_router(self, tenant_id, namestart='router-smoke-'):
+    def _create_router(self, client=None, tenant_id=None,
+                       namestart='router-smoke'):
+        if not client:
+            client = self.network_client
+        if not tenant_id:
+            tenant_id = client.rest_client.tenant_id
         name = data_utils.rand_name(namestart)
-        body = dict(
-            router=dict(
-                name=name,
-                admin_state_up=True,
-                tenant_id=tenant_id,
-            ),
-        )
-        result = self.network_client.create_router(body=body)
-        router = net_common.DeletableRouter(client=self.network_client,
-                                            **result['router'])
+        _, result = client.create_router(name=name,
+                                         admin_state_up=True,
+                                         tenant_id=tenant_id)
+        router = net_resources.DeletableRouter(client=client,
+                                               **result['router'])
         self.assertEqual(router.name, name)
-        self.addCleanup(self.delete_wrapper, router)
+        self.addCleanup(self.delete_wrapper, router.delete)
         return router
 
-    def _create_networks(self, tenant_id=None):
+    def create_networks(self, client=None, tenant_id=None):
         """Create a network with a subnet connected to a router.
 
+        The baremetal driver is a special case since all nodes are
+        on the same shared network.
+
         :returns: network, subnet, router
         """
-        if tenant_id is None:
-            tenant_id = self.tenant_id
-        network = self._create_network(tenant_id)
-        router = self._get_router(tenant_id)
-        subnet = self._create_subnet(network)
-        subnet.add_to_router(router.id)
+        if CONF.baremetal.driver_enabled:
+            # NOTE(Shrews): This exception is for environments where tenant
+            # credential isolation is available, but network separation is
+            # not (the current baremetal case). Likely can be removed when
+            # test account mgmt is reworked:
+            # https://blueprints.launchpad.net/tempest/+spec/test-accounts
+            network = self._get_network_by_name(
+                CONF.compute.fixed_network_name)
+            router = None
+            subnet = None
+        else:
+            network = self._create_network(client=client, tenant_id=tenant_id)
+            router = self._get_router(client=client, tenant_id=tenant_id)
+            subnet = self._create_subnet(network=network, client=client)
+            subnet.add_to_router(router.id)
         return network, subnet, router
 
 
-class OrchestrationScenarioTest(OfficialClientTest):
+# power/provision states as of icehouse
+class BaremetalPowerStates(object):
+    """Possible power states of an Ironic node."""
+    POWER_ON = 'power on'
+    POWER_OFF = 'power off'
+    REBOOT = 'rebooting'
+    SUSPEND = 'suspended'
+
+
+class BaremetalProvisionStates(object):
+    """Possible provision states of an Ironic node."""
+    NOSTATE = None
+    INIT = 'initializing'
+    ACTIVE = 'active'
+    BUILDING = 'building'
+    DEPLOYWAIT = 'wait call-back'
+    DEPLOYING = 'deploying'
+    DEPLOYFAIL = 'deploy failed'
+    DEPLOYDONE = 'deploy complete'
+    DELETING = 'deleting'
+    DELETED = 'deleted'
+    ERROR = 'error'
+
+
+class BaremetalScenarioTest(ScenarioTest):
+    @classmethod
+    def resource_setup(cls):
+        super(BaremetalScenarioTest, cls).resource_setup()
+
+        if (not CONF.service_available.ironic or
+           not CONF.baremetal.driver_enabled):
+            msg = 'Ironic not available or Ironic compute driver not enabled'
+            raise cls.skipException(msg)
+
+        # use an admin client manager for baremetal client
+        manager = clients.Manager(
+            credentials=cls.admin_credentials()
+        )
+        cls.baremetal_client = manager.baremetal_client
+
+        # allow any issues obtaining the node list to raise early
+        cls.baremetal_client.list_nodes()
+
+    def _node_state_timeout(self, node_id, state_attr,
+                            target_states, timeout=10, interval=1):
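+        # Poll the node until state_attr reaches one of target_states or the
+        # timeout expires.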
+        if not isinstance(target_states, list):
+            target_states = [target_states]
+
+        def check_state():
+            node = self.get_node(node_id=node_id)
+            if node.get(state_attr) in target_states:
+                return True
+            return False
+
+        if not tempest.test.call_until_true(
+            check_state, timeout, interval):
+            msg = ("Timed out waiting for node %s to reach %s state(s) %s" %
+                   (node_id, state_attr, target_states))
+            raise exceptions.TimeoutException(msg)
+
+    def wait_provisioning_state(self, node_id, state, timeout):
+        self._node_state_timeout(
+            node_id=node_id, state_attr='provision_state',
+            target_states=state, timeout=timeout)
+
+    def wait_power_state(self, node_id, state):
+        self._node_state_timeout(
+            node_id=node_id, state_attr='power_state',
+            target_states=state, timeout=CONF.baremetal.power_timeout)
+
+    def wait_node(self, instance_id):
+        """Waits for a node to be associated with instance_id."""
+
+        def _get_node():
+            node = None
+            try:
+                node = self.get_node(instance_id=instance_id)
+            except exceptions.NotFound:
+                pass
+            return node is not None
+
+        if not tempest.test.call_until_true(
+            _get_node, CONF.baremetal.association_timeout, 1):
+            msg = ('Timed out waiting to get Ironic node by instance id %s'
+                   % instance_id)
+            raise exceptions.TimeoutException(msg)
+
+    def get_node(self, node_id=None, instance_id=None):
+        if node_id:
+            _, body = self.baremetal_client.show_node(node_id)
+            return body
+        elif instance_id:
+            _, body = self.baremetal_client.show_node_by_instance_uuid(
+                instance_id)
+            if body['nodes']:
+                return body['nodes'][0]
+
+    def get_ports(self, node_uuid):
+        ports = []
+        _, body = self.baremetal_client.list_node_ports(node_uuid)
+        for port in body['ports']:
+            _, p = self.baremetal_client.show_port(port['uuid'])
+            ports.append(p)
+        return ports
+
+    def add_keypair(self):
+        self.keypair = self.create_keypair()
+
+    def verify_connectivity(self, ip=None):
+        if ip:
+            dest = self.get_remote_client(ip)
+        else:
+            dest = self.get_remote_client(self.instance)
+        dest.validate_authentication()
+
+    def boot_instance(self):
+        create_kwargs = {
+            'key_name': self.keypair['name']
+        }
+        self.instance = self.create_server(
+            wait_on_boot=False, create_kwargs=create_kwargs)
+
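+        # The nova instance maps to an Ironic node; wait for the association
+        # before checking power and provision states.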
+        self.wait_node(self.instance['id'])
+        self.node = self.get_node(instance_id=self.instance['id'])
+
+        self.wait_power_state(self.node['uuid'], BaremetalPowerStates.POWER_ON)
+
+        self.wait_provisioning_state(
+            self.node['uuid'],
+            [BaremetalProvisionStates.DEPLOYWAIT,
+             BaremetalProvisionStates.ACTIVE],
+            timeout=15)
+
+        self.wait_provisioning_state(self.node['uuid'],
+                                     BaremetalProvisionStates.ACTIVE,
+                                     timeout=CONF.baremetal.active_timeout)
+
+        self.servers_client.wait_for_server_status(self.instance['id'],
+                                                   'ACTIVE')
+        self.node = self.get_node(instance_id=self.instance['id'])
+        _, self.instance = self.servers_client.get_server(self.instance['id'])
+
+    def terminate_instance(self):
+        self.servers_client.delete_server(self.instance['id'])
+        self.wait_power_state(self.node['uuid'],
+                              BaremetalPowerStates.POWER_OFF)
+        self.wait_provisioning_state(
+            self.node['uuid'],
+            BaremetalProvisionStates.NOSTATE,
+            timeout=CONF.baremetal.unprovision_timeout)
+
+
+class EncryptionScenarioTest(ScenarioTest):
+    """
+    Base class for encryption scenario tests
+    """
+
+    @classmethod
+    def resource_setup(cls):
+        super(EncryptionScenarioTest, cls).resource_setup()
+        cls.admin_volume_types_client = cls.admin_manager.volume_types_client
+
+    def _wait_for_volume_status(self, status):
+        self.status_timeout(
+            self.volume_client.volumes, self.volume.id, status)
+
+    def nova_boot(self):
+        self.keypair = self.create_keypair()
+        create_kwargs = {'key_name': self.keypair['name']}
+        self.server = self.create_server(image=self.image,
+                                         create_kwargs=create_kwargs)
+
+    def create_volume_type(self, client=None, name=None):
+        if not client:
+            client = self.admin_volume_types_client
+        if not name:
+            name = 'generic'
+        randomized_name = data_utils.rand_name('scenario-type-' + name + '-')
+        LOG.debug("Creating a volume type: %s", randomized_name)
+        _, body = client.create_volume_type(
+            randomized_name)
+        self.assertIn('id', body)
+        self.addCleanup(client.delete_volume_type, body['id'])
+        return body
+
+    def create_encryption_type(self, client=None, type_id=None, provider=None,
+                               key_size=None, cipher=None,
+                               control_location=None):
+        if not client:
+            client = self.admin_volume_types_client
+        if not type_id:
+            volume_type = self.create_volume_type()
+            type_id = volume_type['id']
+        LOG.debug("Creating an encryption type for volume type: %s", type_id)
+        client.create_encryption_type(
+            type_id, provider=provider, key_size=key_size, cipher=cipher,
+            control_location=control_location)
+
+
+class OrchestrationScenarioTest(ScenarioTest):
     """
     Base class for orchestration scenario tests
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(OrchestrationScenarioTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(OrchestrationScenarioTest, cls).resource_setup()
         if not CONF.service_available.heat:
             raise cls.skipException("Heat support is required")
 
@@ -1340,102 +1205,96 @@
 
     @classmethod
     def _get_default_network(cls):
-        networks = cls.network_client.list_networks()
-        for net in networks['networks']:
-            if net['name'] == CONF.compute.fixed_network_name:
+        _, networks = cls.networks_client.list_networks()
+        for net in networks:
+            if net['label'] == CONF.compute.fixed_network_name:
                 return net
 
     @staticmethod
     def _stack_output(stack, output_key):
         """Return a stack output value for a given key."""
-        return next((o['output_value'] for o in stack.outputs
+        return next((o['output_value'] for o in stack['outputs']
                     if o['output_key'] == output_key), None)
 
-    def _ping_ip_address(self, ip_address, should_succeed=True):
-        cmd = ['ping', '-c1', '-w1', ip_address]
 
-        def ping():
-            proc = subprocess.Popen(cmd,
-                                    stdout=subprocess.PIPE,
-                                    stderr=subprocess.PIPE)
-            proc.wait()
-            return (proc.returncode == 0) == should_succeed
+class SwiftScenarioTest(ScenarioTest):
+    """
+    Provide a harness for Swift scenario tests.
 
-        return tempest.test.call_until_true(
-            ping, CONF.orchestration.build_timeout, 1)
+    Subclasses implement the tests that use the methods provided by this
+    class.
+    """
 
-    def _wait_for_resource_status(self, stack_identifier, resource_name,
-                                  status, failure_pattern='^.*_FAILED$'):
-        """Waits for a Resource to reach a given status."""
-        fail_regexp = re.compile(failure_pattern)
-        build_timeout = CONF.orchestration.build_timeout
-        build_interval = CONF.orchestration.build_interval
+    @classmethod
+    def resource_setup(cls):
+        cls.set_network_resources()
+        super(SwiftScenarioTest, cls).resource_setup()
+        if not CONF.service_available.swift:
+            skip_msg = ("%s skipped as swift is not available" %
+                        cls.__name__)
+            raise cls.skipException(skip_msg)
+        # Clients for Swift
+        cls.account_client = cls.manager.account_client
+        cls.container_client = cls.manager.container_client
+        cls.object_client = cls.manager.object_client
 
-        start = timeutils.utcnow()
-        while timeutils.delta_seconds(start,
-                                      timeutils.utcnow()) < build_timeout:
-            try:
-                res = self.client.resources.get(
-                    stack_identifier, resource_name)
-            except heat_exceptions.HTTPNotFound:
-                # ignore this, as the resource may not have
-                # been created yet
-                pass
-            else:
-                if res.resource_status == status:
-                    return
-                if fail_regexp.search(res.resource_status):
-                    raise exceptions.StackResourceBuildErrorException(
-                        resource_name=res.resource_name,
-                        stack_identifier=stack_identifier,
-                        resource_status=res.resource_status,
-                        resource_status_reason=res.resource_status_reason)
-            time.sleep(build_interval)
+    def get_swift_stat(self):
+        """get swift status for our user account."""
+        self.account_client.list_account_containers()
+        LOG.debug('Swift status information obtained successfully')
 
-        message = ('Resource %s failed to reach %s status within '
-                   'the required time (%s s).' %
-                   (res.resource_name, status, build_timeout))
-        raise exceptions.TimeoutException(message)
+    def create_container(self, container_name=None):
+        name = container_name or data_utils.rand_name(
+            'swift-scenario-container')
+        self.container_client.create_container(name)
+        # look up the container to ensure it was created
+        self.list_and_check_container_objects(name)
+        LOG.debug('Container %s created' % (name))
+        return name
 
-    def _wait_for_stack_status(self, stack_identifier, status,
-                               failure_pattern='^.*_FAILED$'):
+    def delete_container(self, container_name):
+        self.container_client.delete_container(container_name)
+        LOG.debug('Container %s deleted' % (container_name))
+
+    def upload_object_to_container(self, container_name, obj_name=None):
+        obj_name = obj_name or data_utils.rand_name('swift-scenario-object')
+        obj_data = data_utils.arbitrary_string()
+        self.object_client.create_object(container_name, obj_name, obj_data)
+        return obj_name, obj_data
+
+    def delete_object(self, container_name, filename):
+        self.object_client.delete_object(container_name, filename)
+        self.list_and_check_container_objects(container_name,
+                                              not_present_obj=[filename])
+
+    def list_and_check_container_objects(self, container_name,
+                                         present_obj=None,
+                                         not_present_obj=None):
         """
-        Waits for a Stack to reach a given status.
-
-        Note this compares the full $action_$status, e.g
-        CREATE_COMPLETE, not just COMPLETE which is exposed
-        via the status property of Stack in heatclient
+        List objects for a given container and assert which are present and
+        which are not.
         """
-        fail_regexp = re.compile(failure_pattern)
-        build_timeout = CONF.orchestration.build_timeout
-        build_interval = CONF.orchestration.build_interval
+        if present_obj is None:
+            present_obj = []
+        if not_present_obj is None:
+            not_present_obj = []
+        _, object_list = self.container_client.list_container_contents(
+            container_name)
+        if present_obj:
+            for obj in present_obj:
+                self.assertIn(obj, object_list)
+        if not_present_obj:
+            for obj in not_present_obj:
+                self.assertNotIn(obj, object_list)
 
-        start = timeutils.utcnow()
-        while timeutils.delta_seconds(start,
-                                      timeutils.utcnow()) < build_timeout:
-            try:
-                stack = self.client.stacks.get(stack_identifier)
-            except heat_exceptions.HTTPNotFound:
-                # ignore this, as the stackource may not have
-                # been created yet
-                pass
-            else:
-                if stack.stack_status == status:
-                    return
-                if fail_regexp.search(stack.stack_status):
-                    raise exceptions.StackBuildErrorException(
-                        stack_identifier=stack_identifier,
-                        stack_status=stack.stack_status,
-                        stack_status_reason=stack.stack_status_reason)
-            time.sleep(build_interval)
+    def change_container_acl(self, container_name, acl):
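+        # Set the container read ACL via the X-Container-Read metadata header
+        # and verify that it was applied.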
+        metadata_param = {'metadata_prefix': 'x-container-',
+                          'metadata': {'read': acl}}
+        self.container_client.update_container_metadata(container_name,
+                                                        **metadata_param)
+        resp, _ = self.container_client.list_container_metadata(container_name)
+        self.assertEqual(resp['x-container-read'], acl)
 
-        message = ('Stack %s failed to reach %s status within '
-                   'the required time (%s s).' %
-                   (stack.stack_name, status, build_timeout))
-        raise exceptions.TimeoutException(message)
-
-    def _stack_delete(self, stack_identifier):
-        try:
-            self.client.stacks.delete(stack_identifier)
-        except heat_exceptions.HTTPNotFound:
-            pass
+    def download_and_verify(self, container_name, obj_name, expected_data):
+        _, obj = self.object_client.get_object(container_name, obj_name)
+        self.assertEqual(obj, expected_data)
diff --git a/tempest/scenario/orchestration/test_autoscaling.py b/tempest/scenario/orchestration/test_autoscaling.py
deleted file mode 100644
index aa7b6f8..0000000
--- a/tempest/scenario/orchestration/test_autoscaling.py
+++ /dev/null
@@ -1,124 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import heatclient.exc as heat_exceptions
-import time
-
-from tempest import config
-from tempest.scenario import manager
-from tempest import test
-
-CONF = config.CONF
-
-
-class AutoScalingTest(manager.OrchestrationScenarioTest):
-
-    def setUp(self):
-        super(AutoScalingTest, self).setUp()
-        if not CONF.orchestration.image_ref:
-            raise self.skipException("No image available to test")
-        self.client = self.orchestration_client
-
-    def assign_keypair(self):
-        self.stack_name = self._stack_rand_name()
-        if CONF.orchestration.keypair_name:
-            self.keypair_name = CONF.orchestration.keypair_name
-        else:
-            self.keypair = self.create_keypair()
-            self.keypair_name = self.keypair.id
-
-    def launch_stack(self):
-        net = self._get_default_network()
-        self.parameters = {
-            'KeyName': self.keypair_name,
-            'InstanceType': CONF.orchestration.instance_type,
-            'ImageId': CONF.orchestration.image_ref,
-            'StackStart': str(time.time()),
-            'Subnet': net['subnets'][0]
-        }
-
-        # create the stack
-        self.template = self._load_template(__file__, 'test_autoscaling.yaml')
-        self.client.stacks.create(
-            stack_name=self.stack_name,
-            template=self.template,
-            parameters=self.parameters)
-
-        self.stack = self.client.stacks.get(self.stack_name)
-        self.stack_identifier = '%s/%s' % (self.stack_name, self.stack.id)
-
-        # if a keypair was set, do not delete the stack on exit to allow
-        # for manual post-mortums
-        if not CONF.orchestration.keypair_name:
-            self.addCleanup(self.client.stacks.delete, self.stack)
-
-    @test.skip_because(bug="1257575")
-    @test.attr(type='slow')
-    @test.services('orchestration', 'compute')
-    def test_scale_up_then_down(self):
-
-        self.assign_keypair()
-        self.launch_stack()
-
-        sid = self.stack_identifier
-        timeout = CONF.orchestration.build_timeout
-        interval = 10
-
-        self.assertEqual('CREATE', self.stack.action)
-        # wait for create to complete.
-        self.status_timeout(self.client.stacks, sid, 'COMPLETE',
-                            error_status='FAILED')
-
-        self.stack.get()
-        self.assertEqual('CREATE_COMPLETE', self.stack.stack_status)
-
-        # the resource SmokeServerGroup is implemented as a nested
-        # stack, so servers can be counted by counting the resources
-        # inside that nested stack
-        resource = self.client.resources.get(sid, 'SmokeServerGroup')
-        nested_stack_id = resource.physical_resource_id
-
-        def server_count():
-            # the number of servers is the number of resources
-            # in the nested stack
-            self.server_count = len(
-                self.client.resources.list(nested_stack_id))
-            return self.server_count
-
-        def assertScale(from_servers, to_servers):
-            test.call_until_true(lambda: server_count() == to_servers,
-                                 timeout, interval)
-            self.assertEqual(to_servers, self.server_count,
-                             'Failed scaling from %d to %d servers. '
-                             'Current server count: %s' % (
-                                 from_servers, to_servers,
-                                 self.server_count))
-
-        # he marched them up to the top of the hill
-        assertScale(1, 2)
-        assertScale(2, 3)
-
-        # and he marched them down again
-        assertScale(3, 2)
-        assertScale(2, 1)
-
-        # delete stack on completion
-        self.stack.delete()
-        self.status_timeout(self.client.stacks, sid, 'COMPLETE',
-                            error_status='FAILED',
-                            not_found_exception=heat_exceptions.NotFound)
-
-        try:
-            self.stack.get()
-            self.assertEqual('DELETE_COMPLETE', self.stack.stack_status)
-        except heat_exceptions.NotFound:
-            pass
diff --git a/tempest/scenario/orchestration/test_autoscaling.yaml b/tempest/scenario/orchestration/test_autoscaling.yaml
deleted file mode 100644
index 4651284..0000000
--- a/tempest/scenario/orchestration/test_autoscaling.yaml
+++ /dev/null
@@ -1,185 +0,0 @@
-HeatTemplateFormatVersion: '2012-12-12'
-Description: |
-  Template which tests autoscaling and load balancing
-Parameters:
-  KeyName:
-    Type: String
-  InstanceType:
-    Type: String
-  ImageId:
-    Type: String
-  Subnet:
-    Type: String
-  StackStart:
-    Description: Epoch seconds when the stack was launched
-    Type: Number
-  ConsumeStartSeconds:
-    Description: Seconds after invocation when memory should be consumed
-    Type: Number
-    Default: '60'
-  ConsumeStopSeconds:
-    Description: Seconds after StackStart when memory should be released
-    Type: Number
-    Default: '420'
-  ScaleUpThreshold:
-    Description: Memory percentage threshold to scale up on
-    Type: String
-    Default: '70'
-  ScaleDownThreshold:
-    Description: Memory percentage threshold to scale down on
-    Type: String
-    Default: '60'
-  ConsumeMemoryLimit:
-    Description: Memory percentage threshold to consume
-    Type: Number
-    Default: '71'
-Resources:
-  SmokeServerGroup:
-    Type: AWS::AutoScaling::AutoScalingGroup
-    Properties:
-      AvailabilityZones: {'Fn::GetAZs': ''}
-      LaunchConfigurationName: {Ref: LaunchConfig}
-      MinSize: '1'
-      MaxSize: '3'
-      VPCZoneIdentifier: [{Ref: Subnet}]
-  SmokeServerScaleUpPolicy:
-    Type: AWS::AutoScaling::ScalingPolicy
-    Properties:
-      AdjustmentType: ChangeInCapacity
-      AutoScalingGroupName: {Ref: SmokeServerGroup}
-      Cooldown: '60'
-      ScalingAdjustment: '1'
-  SmokeServerScaleDownPolicy:
-    Type: AWS::AutoScaling::ScalingPolicy
-    Properties:
-      AdjustmentType: ChangeInCapacity
-      AutoScalingGroupName: {Ref: SmokeServerGroup}
-      Cooldown: '60'
-      ScalingAdjustment: '-1'
-  MEMAlarmHigh:
-    Type: AWS::CloudWatch::Alarm
-    Properties:
-      AlarmDescription: Scale-up if MEM > ScaleUpThreshold% for 10 seconds
-      MetricName: MemoryUtilization
-      Namespace: system/linux
-      Statistic: Average
-      Period: '10'
-      EvaluationPeriods: '1'
-      Threshold: {Ref: ScaleUpThreshold}
-      AlarmActions: [{Ref: SmokeServerScaleUpPolicy}]
-      Dimensions:
-      - Name: AutoScalingGroupName
-        Value: {Ref: SmokeServerGroup}
-      ComparisonOperator: GreaterThanThreshold
-  MEMAlarmLow:
-    Type: AWS::CloudWatch::Alarm
-    Properties:
-      AlarmDescription: Scale-down if MEM < ScaleDownThreshold% for 10 seconds
-      MetricName: MemoryUtilization
-      Namespace: system/linux
-      Statistic: Average
-      Period: '10'
-      EvaluationPeriods: '1'
-      Threshold: {Ref: ScaleDownThreshold}
-      AlarmActions: [{Ref: SmokeServerScaleDownPolicy}]
-      Dimensions:
-      - Name: AutoScalingGroupName
-        Value: {Ref: SmokeServerGroup}
-      ComparisonOperator: LessThanThreshold
-  CfnUser:
-    Type: AWS::IAM::User
-  SmokeKeys:
-    Type: AWS::IAM::AccessKey
-    Properties:
-      UserName: {Ref: CfnUser}
-  SmokeSecurityGroup:
-    Type: AWS::EC2::SecurityGroup
-    Properties:
-      GroupDescription: Standard firewall rules
-      SecurityGroupIngress:
-      - {IpProtocol: tcp, FromPort: '22', ToPort: '22', CidrIp: 0.0.0.0/0}
-      - {IpProtocol: tcp, FromPort: '80', ToPort: '80', CidrIp: 0.0.0.0/0}
-  LaunchConfig:
-    Type: AWS::AutoScaling::LaunchConfiguration
-    Metadata:
-      AWS::CloudFormation::Init:
-        config:
-          files:
-            /etc/cfn/cfn-credentials:
-              content:
-                Fn::Replace:
-                - $AWSAccessKeyId: {Ref: SmokeKeys}
-                  $AWSSecretKey: {'Fn::GetAtt': [SmokeKeys, SecretAccessKey]}
-                - |
-                  AWSAccessKeyId=$AWSAccessKeyId
-                  AWSSecretKey=$AWSSecretKey
-              mode: '000400'
-              owner: root
-              group: root
-            /root/watch_loop:
-              content:
-                Fn::Replace:
-                - _hi_: {Ref: MEMAlarmHigh}
-                  _lo_: {Ref: MEMAlarmLow}
-                - |
-                  #!/bin/bash
-                  while :
-                  do
-                    /opt/aws/bin/cfn-push-stats --watch _hi_ --mem-util
-                    /opt/aws/bin/cfn-push-stats --watch _lo_ --mem-util
-                    sleep 4
-                  done
-              mode: '000700'
-              owner: root
-              group: root
-            /root/consume_memory:
-              content:
-                Fn::Replace:
-                - StackStart: {Ref: StackStart}
-                  ConsumeStopSeconds: {Ref: ConsumeStopSeconds}
-                  ConsumeStartSeconds: {Ref: ConsumeStartSeconds}
-                  ConsumeMemoryLimit: {Ref: ConsumeMemoryLimit}
-                - |
-                  #!/usr/bin/env python
-                  import psutil
-                  import time
-                  import datetime
-                  import sys
-                  a = []
-                  sleep_until_consume = ConsumeStartSeconds
-                  stack_start = StackStart
-                  consume_stop_time = stack_start + ConsumeStopSeconds
-                  memory_limit = ConsumeMemoryLimit
-                  if sleep_until_consume > 0:
-                      sys.stdout.flush()
-                      time.sleep(sleep_until_consume)
-                  while psutil.virtual_memory().percent < memory_limit:
-                      sys.stdout.flush()
-                      a.append(' ' * 10**5)
-                      time.sleep(0.1)
-                  sleep_until_exit = consume_stop_time - time.time()
-                  if sleep_until_exit > 0:
-                      time.sleep(sleep_until_exit)
-              mode: '000700'
-              owner: root
-              group: root
-    Properties:
-      ImageId: {Ref: ImageId}
-      InstanceType: {Ref: InstanceType}
-      KeyName: {Ref: KeyName}
-      SecurityGroups: [{Ref: SmokeSecurityGroup}]
-      UserData:
-        Fn::Base64:
-          Fn::Replace:
-          - ConsumeStopSeconds: {Ref: ConsumeStopSeconds}
-            ConsumeStartSeconds: {Ref: ConsumeStartSeconds}
-            ConsumeMemoryLimit: {Ref: ConsumeMemoryLimit}
-          - |
-            #!/bin/bash -v
-            /opt/aws/bin/cfn-init
-            # report on memory consumption every 4 seconds
-            /root/watch_loop &
-            # wait ConsumeStartSeconds then ramp up memory consumption
-            # until it is over ConsumeMemoryLimit%
-            # then exits ConsumeStopSeconds seconds after stack launch
-            /root/consume_memory > /root/consume_memory.log &
diff --git a/tempest/scenario/orchestration/test_server_cfn_init.py b/tempest/scenario/orchestration/test_server_cfn_init.py
index 36e6126..abda1f8 100644
--- a/tempest/scenario/orchestration/test_server_cfn_init.py
+++ b/tempest/scenario/orchestration/test_server_cfn_init.py
@@ -24,6 +24,7 @@
 
 class CfnInitScenarioTest(manager.OrchestrationScenarioTest):
 
+    @test.skip_because(bug="1374175")
     def setUp(self):
         super(CfnInitScenarioTest, self).setUp()
         if not CONF.orchestration.image_ref:
@@ -38,7 +39,7 @@
             self.keypair_name = CONF.orchestration.keypair_name
         else:
             self.keypair = self.create_keypair()
-            self.keypair_name = self.keypair.id
+            self.keypair_name = self.keypair['name']
 
     def launch_stack(self):
         net = self._get_default_network()
@@ -52,40 +53,45 @@
 
         # create the stack
         self.template = self._load_template(__file__, self.template_name)
-        self.client.stacks.create(
-            stack_name=self.stack_name,
+        _, stack = self.client.create_stack(
+            name=self.stack_name,
             template=self.template,
             parameters=self.parameters)
+        stack = stack['stack']
 
-        self.stack = self.client.stacks.get(self.stack_name)
-        self.stack_identifier = '%s/%s' % (self.stack_name, self.stack.id)
-        self.addCleanup(self._stack_delete, self.stack_identifier)
+        _, self.stack = self.client.get_stack(stack['id'])
+        self.stack_identifier = '%s/%s' % (self.stack_name, self.stack['id'])
+        self.addCleanup(self.delete_wrapper,
+                        self.orchestration_client.delete_stack,
+                        self.stack_identifier)
 
     def check_stack(self):
         sid = self.stack_identifier
-        self._wait_for_resource_status(
+        self.client.wait_for_resource_status(
             sid, 'WaitHandle', 'CREATE_COMPLETE')
-        self._wait_for_resource_status(
+        self.client.wait_for_resource_status(
             sid, 'SmokeSecurityGroup', 'CREATE_COMPLETE')
-        self._wait_for_resource_status(
+        self.client.wait_for_resource_status(
             sid, 'SmokeKeys', 'CREATE_COMPLETE')
-        self._wait_for_resource_status(
+        self.client.wait_for_resource_status(
             sid, 'CfnUser', 'CREATE_COMPLETE')
-        self._wait_for_resource_status(
+        self.client.wait_for_resource_status(
             sid, 'SmokeServer', 'CREATE_COMPLETE')
 
-        server_resource = self.client.resources.get(sid, 'SmokeServer')
-        server_id = server_resource.physical_resource_id
-        server = self.compute_client.servers.get(server_id)
-        server_ip = server.networks[CONF.compute.network_for_ssh][0]
+        _, server_resource = self.client.get_resource(sid, 'SmokeServer')
+        server_id = server_resource['physical_resource_id']
+        _, server = self.servers_client.get_server(server_id)
+        server_ip =\
+            server['addresses'][CONF.compute.network_for_ssh][0]['addr']
 
-        if not self._ping_ip_address(server_ip):
+        if not self.ping_ip_address(server_ip):
             self._log_console_output(servers=[server])
             self.fail(
-                "Timed out waiting for %s to become reachable" % server_ip)
+                "(CfnInitScenarioTest:test_server_cfn_init) Timed out waiting "
+                "for %s to become reachable" % server_ip)
 
         try:
-            self._wait_for_resource_status(
+            self.client.wait_for_resource_status(
                 sid, 'WaitCondition', 'CREATE_COMPLETE')
         except (exceptions.StackResourceBuildErrorException,
                 exceptions.TimeoutException) as e:
@@ -96,9 +102,9 @@
             # logs to be compared
             self._log_console_output(servers=[server])
 
-        self._wait_for_stack_status(sid, 'CREATE_COMPLETE')
+        self.client.wait_for_stack_status(sid, 'CREATE_COMPLETE')
 
-        stack = self.client.stacks.get(sid)
+        _, stack = self.client.get_stack(sid)
 
         # This is an assert of great significance, as it means the following
         # has happened:
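
The hunks above convert the CfnInit scenario from python-heatclient objects to
Tempest's own REST clients, which return a ``(response, body)`` tuple and plain
dictionaries instead of attribute-style objects. A minimal, self-contained
sketch of that calling convention (``FakeStacksClient`` is purely illustrative,
not a real Tempest class)::

    class FakeStacksClient(object):
        """Toy stand-in for a Tempest REST client returning (resp, body)."""

        def create_stack(self, name, **kwargs):
            # A real client would POST to Heat and parse the JSON response.
            return {'status': 201}, {'stack': {'id': 'abc123'}}

        def get_stack(self, stack_id):
            return {'status': 200}, {'id': stack_id,
                                     'stack_status': 'CREATE_COMPLETE'}

    client = FakeStacksClient()
    _, body = client.create_stack(name='cfn-init-smoke')
    stack = body['stack']                    # create_stack nests the stack
    _, stack = client.get_stack(stack['id'])
    stack_identifier = 'cfn-init-smoke/%s' % stack['id']
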
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 0059619..75769ce 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -23,7 +23,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestAggregatesBasicOps(manager.OfficialClientTest):
+class TestAggregatesBasicOps(manager.ScenarioTest):
     """
     Creates an aggregate within an availability zone
     Adds a host to the aggregate
@@ -33,74 +33,67 @@
     Deletes aggregate
     """
     @classmethod
+    def resource_setup(cls):
+        super(TestAggregatesBasicOps, cls).resource_setup()
+        cls.aggregates_client = cls.manager.aggregates_client
+        cls.hosts_client = cls.manager.hosts_client
+
+    @classmethod
     def credentials(cls):
         return cls.admin_credentials()
 
     def _create_aggregate(self, **kwargs):
-        aggregate = self.compute_client.aggregates.create(**kwargs)
+        _, aggregate = self.aggregates_client.create_aggregate(**kwargs)
+        self.addCleanup(self._delete_aggregate, aggregate)
         aggregate_name = kwargs['name']
         availability_zone = kwargs['availability_zone']
-        self.assertEqual(aggregate.name, aggregate_name)
-        self.assertEqual(aggregate.availability_zone, availability_zone)
-        self.addCleanup(self._delete_aggregate, aggregate)
-        LOG.debug("Aggregate %s created." % (aggregate.name))
+        self.assertEqual(aggregate['name'], aggregate_name)
+        self.assertEqual(aggregate['availability_zone'], availability_zone)
         return aggregate
 
     def _delete_aggregate(self, aggregate):
-        self.compute_client.aggregates.delete(aggregate.id)
-        LOG.debug("Aggregate %s deleted. " % (aggregate.name))
+        self.aggregates_client.delete_aggregate(aggregate['id'])
 
     def _get_host_name(self):
-        hosts = self.compute_client.hosts.list()
+        _, hosts = self.hosts_client.list_hosts()
         self.assertTrue(len(hosts) >= 1)
-        computes = [x for x in hosts if x.service == 'compute']
-        return computes[0].host_name
+        computes = [x for x in hosts if x['service'] == 'compute']
+        return computes[0]['host_name']
 
-    def _add_host(self, aggregate_name, host):
-        aggregate = self.compute_client.aggregates.add_host(aggregate_name,
-                                                            host)
-        self.addCleanup(self._remove_host, aggregate, host)
-        self.assertIn(host, aggregate.hosts)
-        LOG.debug("Host %s added to Aggregate %s." % (host, aggregate.name))
+    def _add_host(self, aggregate_id, host):
+        _, aggregate = self.aggregates_client.add_host(aggregate_id, host)
+        self.addCleanup(self._remove_host, aggregate['id'], host)
+        self.assertIn(host, aggregate['hosts'])
 
-    def _remove_host(self, aggregate_name, host):
-        aggregate = self.compute_client.aggregates.remove_host(aggregate_name,
-                                                               host)
-        self.assertNotIn(host, aggregate.hosts)
-        LOG.debug("Host %s removed to Aggregate %s." % (host, aggregate.name))
+    def _remove_host(self, aggregate_id, host):
+        _, aggregate = self.aggregates_client.remove_host(aggregate_id, host)
+        self.assertNotIn(host, aggregate['hosts'])
 
     def _check_aggregate_details(self, aggregate, aggregate_name, azone,
                                  hosts, metadata):
-        aggregate = self.compute_client.aggregates.get(aggregate.id)
-        self.assertEqual(aggregate_name, aggregate.name)
-        self.assertEqual(azone, aggregate.availability_zone)
-        self.assertEqual(aggregate.hosts, hosts)
+        _, aggregate = self.aggregates_client.get_aggregate(aggregate['id'])
+        self.assertEqual(aggregate_name, aggregate['name'])
+        self.assertEqual(azone, aggregate['availability_zone'])
+        self.assertEqual(hosts, aggregate['hosts'])
         for meta_key in metadata.keys():
-            self.assertIn(meta_key, aggregate.metadata)
-            self.assertEqual(metadata[meta_key], aggregate.metadata[meta_key])
-        LOG.debug("Aggregate %s details match." % aggregate.name)
+            self.assertIn(meta_key, aggregate['metadata'])
+            self.assertEqual(metadata[meta_key],
+                             aggregate['metadata'][meta_key])
 
     def _set_aggregate_metadata(self, aggregate, meta):
-        aggregate = self.compute_client.aggregates.set_metadata(aggregate.id,
-                                                                meta)
+        _, aggregate = self.aggregates_client.set_metadata(aggregate['id'],
+                                                           meta)
 
         for key, value in meta.items():
-            self.assertEqual(meta[key], aggregate.metadata[key])
-        LOG.debug("Aggregate %s metadata updated successfully." %
-                  aggregate.name)
+            self.assertEqual(meta[key], aggregate['metadata'][key])
 
     def _update_aggregate(self, aggregate, aggregate_name,
                           availability_zone):
-        values = {}
-        if aggregate_name:
-            values.update({'name': aggregate_name})
-        if availability_zone:
-            values.update({'availability_zone': availability_zone})
-        if values.keys():
-            aggregate = self.compute_client.aggregates.update(aggregate.id,
-                                                              values)
-            for key, values in values.items():
-                self.assertEqual(getattr(aggregate, key), values)
+        _, aggregate = self.aggregates_client.update_aggregate(
+            aggregate['id'], name=aggregate_name,
+            availability_zone=availability_zone)
+        self.assertEqual(aggregate['name'], aggregate_name)
+        self.assertEqual(aggregate['availability_zone'], availability_zone)
         return aggregate
 
     @test.services('compute')
@@ -115,16 +108,17 @@
         self._set_aggregate_metadata(aggregate, metadata)
 
         host = self._get_host_name()
-        self._add_host(aggregate, host)
+        self._add_host(aggregate['id'], host)
         self._check_aggregate_details(aggregate, aggregate_name, az, [host],
                                       metadata)
 
         aggregate_name = data_utils.rand_name('renamed-aggregate-scenario')
-        aggregate = self._update_aggregate(aggregate, aggregate_name, None)
+        # Update only the name. The az must be specified again, otherwise
+        # the tempest client would send None in the PUT body
+        aggregate = self._update_aggregate(aggregate, aggregate_name, az)
 
-        additional_metadata = {'foo': 'bar'}
-        self._set_aggregate_metadata(aggregate, additional_metadata)
+        new_metadata = {'foo': 'bar'}
+        self._set_aggregate_metadata(aggregate, new_metadata)
 
-        metadata.update(additional_metadata)
-        self._check_aggregate_details(aggregate, aggregate.name, az, [host],
-                                      metadata)
+        self._check_aggregate_details(aggregate, aggregate['name'], az,
+                                      [host], new_metadata)
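
The inline comment in the rename step above notes that the availability zone
has to be passed again even though only the name changes. A standalone
illustration of the pitfall it guards against, assuming a serializer that
always puts both keys into the request body (``build_update_body`` is
hypothetical, not the real Tempest client code)::

    def build_update_body(name=None, availability_zone=None):
        # Both keys always end up in the body, so an omitted availability
        # zone is sent as None and would clear it on the server side.
        return {'aggregate': {'name': name,
                              'availability_zone': availability_zone}}

    print(build_update_body(name='renamed-aggregate-scenario'))
    # availability_zone is None, i.e. the az would be wiped
    print(build_update_body(name='renamed-aggregate-scenario',
                            availability_zone='nova'))
    # both fields preserved, matching what the test now sends
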
diff --git a/tempest/scenario/test_baremetal_basic_ops.py b/tempest/scenario/test_baremetal_basic_ops.py
index f197c15..35571c6 100644
--- a/tempest/scenario/test_baremetal_basic_ops.py
+++ b/tempest/scenario/test_baremetal_basic_ops.py
@@ -23,7 +23,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class BaremetalBasicOpsPXESSH(manager.BaremetalScenarioTest):
+class BaremetalBasicOps(manager.BaremetalScenarioTest):
     """
     This smoke test tests the pxe_ssh Ironic driver.  It follows this basic
     set of operations:
@@ -35,28 +35,121 @@
         * Verifies SSH connectivity using created keypair via fixed IP
         * Associates a floating ip
         * Verifies SSH connectivity using created keypair via floating IP
+        * Verifies instance rebuild with ephemeral partition preservation
         * Deletes instance
         * Monitors the associated Ironic node for power and
           expected state transitions
     """
+    def rebuild_instance(self, preserve_ephemeral=False):
+        self.rebuild_server(server_id=self.instance['id'],
+                            preserve_ephemeral=preserve_ephemeral,
+                            wait=False)
+
+        node = self.get_node(instance_id=self.instance['id'])
+
+        # We should remain on the same node
+        self.assertEqual(self.node['uuid'], node['uuid'])
+        self.node = node
+
+        self.servers_client.wait_for_server_status(
+            server_id=self.instance['id'],
+            status='REBUILD',
+            ready_wait=False)
+        self.servers_client.wait_for_server_status(
+            server_id=self.instance['id'],
+            status='ACTIVE')
+
+    def create_remote_file(self, client, filename):
+        """Create a file on the remote client connection.
+
+        After creating the file, force a filesystem sync. Otherwise,
+        if we issue a rebuild too quickly, the file may not exist.
+        """
+        client.exec_command('sudo touch ' + filename)
+        client.exec_command('sync')
+
+    def verify_partition(self, client, label, mount, gib_size):
+        """Verify a labeled partition's mount point and size."""
+        LOG.info("Looking for partition %s mounted on %s" % (label, mount))
+
+        # Validate we have a device with the given partition label
+        cmd = "/sbin/blkid | grep '%s' | cut -d':' -f1" % label
+        device = client.exec_command(cmd).rstrip('\n')
+        LOG.debug("Partition device is %s" % device)
+        self.assertNotEqual('', device)
+
+        # Validate the mount point for the device
+        cmd = "mount | grep '%s' | cut -d' ' -f3" % device
+        actual_mount = client.exec_command(cmd).rstrip('\n')
+        LOG.debug("Partition mount point is %s" % actual_mount)
+        self.assertEqual(actual_mount, mount)
+
+        # Validate the partition size matches what we expect
+        numbers = '0123456789'
+        devnum = device.replace('/dev/', '')
+        cmd = "cat /sys/block/%s/%s/size" % (devnum.rstrip(numbers), devnum)
+        num_bytes = client.exec_command(cmd).rstrip('\n')
+        num_bytes = int(num_bytes) * 512
+        actual_gib_size = num_bytes / (1024 * 1024 * 1024)
+        LOG.debug("Partition size is %d GiB" % actual_gib_size)
+        self.assertEqual(actual_gib_size, gib_size)
+
+    def get_flavor_ephemeral_size(self):
+        """Returns size of the ephemeral partition in GiB."""
+        f_id = self.instance['flavor']['id']
+        _, flavor = self.flavors_client.get_flavor_details(f_id)
+        ephemeral = flavor.get('OS-FLV-EXT-DATA:ephemeral')
+        if not ephemeral or ephemeral == 'N/A':
+            return None
+        return int(ephemeral)
+
     def add_floating_ip(self):
-        floating_ip = self.compute_client.floating_ips.create()
-        self.instance.add_floating_ip(floating_ip)
-        return floating_ip.ip
+        _, floating_ip = self.floating_ips_client.create_floating_ip()
+        self.floating_ips_client.associate_floating_ip_to_server(
+            floating_ip['ip'], self.instance['id'])
+        return floating_ip['ip']
 
     def validate_ports(self):
-        for port in self.get_ports(self.node.uuid):
-            n_port_id = port.extra['vif_port_id']
-            n_port = self.network_client.show_port(n_port_id)['port']
-            self.assertEqual(n_port['device_id'], self.instance.id)
-            self.assertEqual(n_port['mac_address'], port.address)
+        for port in self.get_ports(self.node['uuid']):
+            n_port_id = port['extra']['vif_port_id']
+            _, body = self.network_client.show_port(n_port_id)
+            n_port = body['port']
+            self.assertEqual(n_port['device_id'], self.instance['id'])
+            self.assertEqual(n_port['mac_address'], port['address'])
 
     @test.services('baremetal', 'compute', 'image', 'network')
     def test_baremetal_server_ops(self):
+        test_filename = '/mnt/rebuild_test.txt'
         self.add_keypair()
         self.boot_instance()
         self.validate_ports()
         self.verify_connectivity()
         floating_ip = self.add_floating_ip()
         self.verify_connectivity(ip=floating_ip)
+
+        vm_client = self.get_remote_client(self.instance)
+
+        # We expect the ephemeral partition to be mounted on /mnt and to have
+        # the same size as our flavor definition.
+        eph_size = self.get_flavor_ephemeral_size()
+        self.assertIsNotNone(eph_size)
+        if eph_size > 0:
+            preserve_ephemeral = True
+
+            self.verify_partition(vm_client, 'ephemeral0', '/mnt', eph_size)
+            # Create the test file
+            self.create_remote_file(vm_client, test_filename)
+        else:
+            preserve_ephemeral = False
+
+        # Rebuild and preserve the ephemeral partition if it exists
+        self.rebuild_instance(preserve_ephemeral)
+        self.verify_connectivity()
+
+        # Check that we maintained our data
+        if eph_size > 0:
+            vm_client = self.get_remote_client(self.instance)
+            self.verify_partition(vm_client, 'ephemeral0', '/mnt', eph_size)
+            vm_client.exec_command('ls ' + test_filename)
+
         self.terminate_instance()
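
``verify_partition()`` above derives the partition size from the sysfs sector
count. A standalone worked example of that arithmetic with made-up values (the
device path and sector count are illustrative only)::

    numbers = '0123456789'
    device = '/dev/vda1'                   # illustrative device path
    devnum = device.replace('/dev/', '')   # 'vda1'
    parent = devnum.rstrip(numbers)        # 'vda' -> /sys/block/vda/vda1/size
    sectors = 20971520                     # sysfs 'size' is in 512-byte sectors
    num_bytes = sectors * 512
    gib_size = num_bytes // (1024 * 1024 * 1024)
    assert parent == 'vda'
    assert gib_size == 10                  # i.e. a 10 GiB ephemeral partition
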
diff --git a/tempest/scenario/test_dashboard_basic_ops.py b/tempest/scenario/test_dashboard_basic_ops.py
index 4fcc70a..f218fb2 100644
--- a/tempest/scenario/test_dashboard_basic_ops.py
+++ b/tempest/scenario/test_dashboard_basic_ops.py
@@ -34,9 +34,9 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources()
-        super(TestDashboardBasicOps, cls).setUpClass()
+        super(TestDashboardBasicOps, cls).resource_setup()
 
         if not CONF.service_available.horizon:
             raise cls.skipException("Horizon support is required")
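
This hunk is the same mechanical change applied throughout this series:
class-level fixtures move from unittest's ``setUpClass`` into a
``resource_setup`` classmethod that the base test class invokes itself. A
minimal sketch of the pattern (class names are hypothetical)::

    class BaseScenario(object):
        @classmethod
        def resource_setup(cls):
            # The real base class calls this from its own setUpClass, so
            # subclasses no longer override setUpClass directly.
            cls.shared_resources = []

    class DashboardLikeScenario(BaseScenario):
        @classmethod
        def resource_setup(cls):
            super(DashboardLikeScenario, cls).resource_setup()
            cls.shared_resources.append('network')

    DashboardLikeScenario.resource_setup()
    assert DashboardLikeScenario.shared_resources == ['network']
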
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index 366cd93..ac2ef8a 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -37,12 +37,12 @@
 
     def create_encrypted_volume(self, encryption_provider):
         volume_type = self.create_volume_type(name='luks')
-        self.create_encryption_type(type_id=volume_type.id,
+        self.create_encryption_type(type_id=volume_type['id'],
                                     provider=encryption_provider,
                                     key_size=512,
                                     cipher='aes-xts-plain64',
                                     control_location='front-end')
-        self.volume = self.create_volume(volume_type=volume_type.name)
+        self.volume = self.create_volume(volume_type=volume_type['name'])
 
     def attach_detach_volume(self):
         self.nova_volume_attach()
diff --git a/tempest/scenario/test_large_ops.py b/tempest/scenario/test_large_ops.py
index 15cf13b..b111939 100644
--- a/tempest/scenario/test_large_ops.py
+++ b/tempest/scenario/test_large_ops.py
@@ -25,7 +25,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestLargeOpsScenario(manager.NetworkScenarioTest):
+class TestLargeOpsScenario(manager.ScenarioTest):
 
     """
     Test large operations.
@@ -38,40 +38,46 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
+        if CONF.scenario.large_ops_number < 1:
+            raise cls.skipException("large_ops_number not set to multiple "
+                                    "instances")
         cls.set_network_resources()
-        super(TestLargeOpsScenario, cls).setUpClass()
+        super(TestLargeOpsScenario, cls).resource_setup()
 
     def _wait_for_server_status(self, status):
         for server in self.servers:
-            self.status_timeout(
-                self.compute_client.servers, server.id, status)
+            self.servers_client.wait_for_server_status(server['id'], status)
 
     def nova_boot(self):
         name = data_utils.rand_name('scenario-server-')
-        client = self.compute_client
         flavor_id = CONF.compute.flavor_ref
-        secgroup = self._create_security_group_nova()
-        self.servers = client.servers.create(
-            name=name, image=self.image,
-            flavor=flavor_id,
+        secgroup = self._create_security_group()
+        self.servers_client.create_server(
+            name,
+            self.image,
+            flavor_id,
             min_count=CONF.scenario.large_ops_number,
-            security_groups=[secgroup.name])
+            security_groups=[secgroup])
         # needed because of bug 1199788
-        self.servers = [x for x in client.servers.list() if name in x.name]
+        params = {'name': name}
+        _, server_list = self.servers_client.list_servers(params)
+        self.servers = server_list['servers']
         for server in self.servers:
             # after deleting all servers - wait for all servers to clear
             # before cleanup continues
-            self.addCleanup(self.delete_timeout,
-                            self.compute_client.servers,
-                            server.id)
+            self.addCleanup(self.servers_client.wait_for_server_termination,
+                            server['id'])
         for server in self.servers:
-            self.addCleanup_with_wait(self.compute_client.servers, server.id)
+            self.addCleanup_with_wait(
+                waiter_callable=(self.servers_client.
+                                 wait_for_server_termination),
+                thing_id=server['id'], thing_id_param='server_id',
+                cleanup_callable=self.delete_wrapper,
+                cleanup_args=[self.servers_client.delete_server, server['id']])
         self._wait_for_server_status('ACTIVE')
 
     def _large_ops_scenario(self):
-        if CONF.scenario.large_ops_number < 1:
-            return
         self.glance_image_create()
         self.nova_boot()
 
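
The reordered cleanups in ``nova_boot`` above rely on ``addCleanup`` running in
last-in, first-out order: the termination waits are registered first so that at
teardown they execute after the deletes registered by ``addCleanup_with_wait``.
A tiny standalone demonstration of that LIFO behaviour using plain ``unittest``
(the event names are made up)::

    import unittest

    events = []

    class CleanupOrder(unittest.TestCase):
        def test_lifo(self):
            # Registered first -> runs last (like wait_for_server_termination)
            self.addCleanup(events.append, 'wait_for_server_termination')
            # Registered second -> runs first (like the delete cleanup)
            self.addCleanup(events.append, 'delete_server')

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(CleanupOrder)
    unittest.TextTestRunner(verbosity=0).run(suite)
    assert events == ['delete_server', 'wait_for_server_termination']
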
diff --git a/tempest/scenario/test_load_balancer_basic.py b/tempest/scenario/test_load_balancer_basic.py
index 8191984..9e404c8 100644
--- a/tempest/scenario/test_load_balancer_basic.py
+++ b/tempest/scenario/test_load_balancer_basic.py
@@ -18,11 +18,11 @@
 import time
 import urllib2
 
-from tempest.api.network import common as net_common
 from tempest.common import commands
 from tempest import config
 from tempest import exceptions
 from tempest.scenario import manager
+from tempest.services.network import resources as net_resources
 from tempest import test
 
 config = config.CONF
@@ -38,9 +38,8 @@
     2. SSH to the instance and start two servers
     3. Create a load balancer with two members and with ROUND_ROBIN algorithm
        associate the VIP with a floating ip
-    4. Send 10 requests to the floating ip and check that they are shared
-       between the two servers and that both of them get equal portions
-    of the requests
+    4. Send NUM requests to the floating ip and check that they are shared
+       between the two servers.
     """
 
     @classmethod
@@ -58,8 +57,8 @@
             raise cls.skipException(msg)
 
     @classmethod
-    def setUpClass(cls):
-        super(TestLoadBalancerBasic, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestLoadBalancerBasic, cls).resource_setup()
         cls.check_preconditions()
         cls.servers_keypairs = {}
         cls.members = []
@@ -67,15 +66,45 @@
         cls.server_ips = {}
         cls.port1 = 80
         cls.port2 = 88
+        cls.num = 50
 
     def setUp(self):
         super(TestLoadBalancerBasic, self).setUp()
         self.server_ips = {}
         self.server_fixed_ips = {}
-        self._create_security_group()
+        self._create_security_group_for_test()
+        self._set_net_and_subnet()
 
-    def _create_security_group(self):
-        self.security_group = self._create_security_group_neutron(
+    def _set_net_and_subnet(self):
+        """
+        Query and set appropriate network and subnet attributes to be used
+        for the test.  Existing tenant networks are used if they are found.
+        The configured private network and associated subnet are used as a
+        fallback in the absence of tenant networking.
+        """
+        try:
+            tenant_net = self._list_networks(tenant_id=self.tenant_id)[0]
+        except IndexError:
+            tenant_net = None
+
+        if tenant_net:
+            tenant_subnet = self._list_subnets(tenant_id=self.tenant_id)[0]
+            self.subnet = net_resources.DeletableSubnet(
+                client=self.network_client,
+                **tenant_subnet)
+            self.network = tenant_net
+        else:
+            self.network = self._get_network_by_name(
+                config.compute.fixed_network_name)
+            # TODO(adam_g): We are assuming that the first subnet associated
+            # with the fixed network is the one we want.  In the future, we
+            # should instead pull a subnet id from config, which is set by
+            # devstack/admin/etc.
+            subnet = self._list_subnets(network_id=self.network['id'])[0]
+            self.subnet = net_resources.AttributeDict(subnet)
+
+    def _create_security_group_for_test(self):
+        self.security_group = self._create_security_group(
             tenant_id=self.tenant_id)
         self._create_security_group_rules_for_port(self.port1)
         self._create_security_group_rules_for_port(self.port2)
@@ -88,35 +117,35 @@
             'port_range_max': port,
         }
         self._create_security_group_rule(
-            client=self.network_client,
             secgroup=self.security_group,
             tenant_id=self.tenant_id,
             **rule)
 
     def _create_server(self, name):
-        keypair = self.create_keypair(name='keypair-%s' % name)
-        security_groups = [self.security_group.name]
-        net = self._list_networks(tenant_id=self.tenant_id)[0]
+        keypair = self.create_keypair()
+        security_groups = [self.security_group]
         create_kwargs = {
             'nics': [
-                {'net-id': net['id']},
+                {'net-id': self.network['id']},
             ],
-            'key_name': keypair.name,
+            'key_name': keypair['name'],
             'security_groups': security_groups,
         }
-        server = self.create_server(name=name,
-                                    create_kwargs=create_kwargs)
-        self.servers_keypairs[server.id] = keypair
+        net_name = self.network['name']
+        server = self.create_server(name=name, create_kwargs=create_kwargs)
+        self.servers_keypairs[server['id']] = keypair
         if (config.network.public_network_id and not
                 config.network.tenant_networks_reachable):
             public_network_id = config.network.public_network_id
             floating_ip = self._create_floating_ip(
                 server, public_network_id)
             self.floating_ips[floating_ip] = server
-            self.server_ips[server.id] = floating_ip.floating_ip_address
+            self.server_ips[server['id']] = floating_ip.floating_ip_address
         else:
-            self.server_ips[server.id] = server.networks[net['name']][0]
-        self.server_fixed_ips[server.id] = server.networks[net['name']][0]
+            self.server_ips[server['id']] =\
+                server['addresses'][net_name][0]['addr']
+        self.server_fixed_ips[server['id']] =\
+            server['addresses'][net_name][0]['addr']
         self.assertTrue(self.servers_keypairs)
         return server
 
@@ -132,10 +161,9 @@
         1. SSH to the instance
         2. Start two http backends listening on ports 80 and 88 respectively
         """
-
         for server_id, ip in self.server_ips.iteritems():
-            private_key = self.servers_keypairs[server_id].private_key
-            server_name = self.compute_client.servers.get(server_id).name
+            private_key = self.servers_keypairs[server_id]['private_key']
+            server_name = self.servers_client.get_server(server_id)[1]['name']
             username = config.scenario.ssh_user
             ssh_client = self.get_remote_client(
                 server_or_ip=ip,
@@ -196,10 +224,6 @@
 
     def _create_pool(self):
         """Create a pool with ROUND_ROBIN algorithm."""
-        # get tenant subnet and verify there's only one
-        subnet = self._list_subnets(tenant_id=self.tenant_id)[0]
-        self.subnet = net_common.DeletableSubnet(client=self.network_client,
-                                                 **subnet)
         self.pool = super(TestLoadBalancerBasic, self)._create_pool(
             lb_method='ROUND_ROBIN',
             protocol='HTTP',
@@ -245,11 +269,7 @@
                                     protocol_port=80,
                                     subnet_id=self.subnet.id,
                                     pool_id=self.pool.id)
-        self.status_timeout(NeutronRetriever(self.network_client,
-                                             self.network_client.vip_path,
-                                             net_common.DeletableVip),
-                            self.vip.id,
-                            expected_status='ACTIVE')
+        self.vip.wait_for_status('ACTIVE')
         if (config.network.public_network_id and not
                 config.network.tenant_networks_reachable):
             self._assign_floating_ip_to_vip(self.vip)
@@ -262,60 +282,30 @@
         # vip port - see https://bugs.launchpad.net/neutron/+bug/1163569
         # However the linuxbridge-agent does, and it is necessary to add a
         # security group with a rule that allows tcp port 80 to the vip port.
-        body = {'port': {'security_groups': [self.security_group.id]}}
-        self.network_client.update_port(self.vip.port_id, body)
+        self.network_client.update_port(
+            self.vip.port_id, security_groups=[self.security_group.id])
 
     def _check_load_balancing(self):
         """
-        1. Send 10 requests on the floating ip associated with the VIP
-        2. Check that the requests are shared between
-           the two servers and that both of them get equal portions
-           of the requests
+        1. Send NUM requests on the floating ip associated with the VIP
+        2. Check that the requests are shared between the two servers
         """
 
         self._check_connection(self.vip_ip)
-        self._send_requests(self.vip_ip, set(["server1", "server2"]))
+        self._send_requests(self.vip_ip, ["server1", "server2"])
 
-    def _send_requests(self, vip_ip, expected, num_req=10):
-        count = 0
-        while count < num_req:
-            resp = []
-            for i in range(len(self.members)):
-                resp.append(
-                    urllib2.urlopen(
-                        "http://{0}/".format(vip_ip)).read())
-            count += 1
-            self.assertEqual(expected,
-                             set(resp))
+    def _send_requests(self, vip_ip, servers):
+        counters = dict.fromkeys(servers, 0)
+        for i in range(self.num):
+            server = urllib2.urlopen("http://{0}/".format(vip_ip)).read()
+            counters[server] += 1
+        # Assert that each member of the pool gets balanced at least once
+        for member, counter in counters.iteritems():
+            self.assertGreater(counter, 0, 'Member %s never balanced' % member)
 
-    @test.attr(type='smoke')
     @test.services('compute', 'network')
     def test_load_balancer_basic(self):
         self._create_server('server1')
         self._start_servers()
         self._create_load_balancer()
         self._check_load_balancing()
-
-
-class NeutronRetriever(object):
-    """
-    Helper class to make possible handling neutron objects returned by GET
-    requests as attribute dicts.
-
-    Whet get() method is called, the returned dictionary is wrapped into
-    a corresponding DeletableResource class which provides attribute access
-    to dictionary values.
-
-    Usage:
-        This retriever is used to allow using status_timeout from
-        tempest.manager with Neutron objects.
-    """
-
-    def __init__(self, network_client, path, resource):
-        self.network_client = network_client
-        self.path = path
-        self.resource = resource
-
-    def get(self, thing_id):
-        obj = self.network_client.get(self.path % thing_id)
-        return self.resource(client=self.network_client, **obj.values()[0])
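
The rewritten ``_send_requests`` above no longer demands a perfectly even
round-robin split; it only asserts that every pool member answered at least
once. A self-contained sketch of that check with made-up responses::

    responses = ['server1', 'server2', 'server1', 'server1', 'server2']
    counters = dict.fromkeys(['server1', 'server2'], 0)
    for name in responses:
        counters[name] += 1
    for member, counter in counters.items():
        # mirrors assertGreater(counter, 0, ...) in the test
        assert counter > 0, 'Member %s never balanced' % member
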
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 29fdc74..8a8e387 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -13,6 +13,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from tempest.common import custom_matchers
 from tempest.common import debug
 from tempest import config
 from tempest.openstack.common import log as logging
@@ -24,7 +25,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestMinimumBasicScenario(manager.OfficialClientTest):
+class TestMinimumBasicScenario(manager.ScenarioTest):
 
     """
     This is a basic minimum scenario test.
@@ -38,61 +39,69 @@
     """
 
     def _wait_for_server_status(self, status):
-        server_id = self.server.id
-        self.status_timeout(
-            self.compute_client.servers, server_id, status)
+        server_id = self.server['id']
+        # Raise on error defaults to True, which is consistent with the
+        # behavior of the original scenario test helper
+        self.servers_client.wait_for_server_status(server_id, status)
 
     def nova_keypair_add(self):
         self.keypair = self.create_keypair()
 
     def nova_boot(self):
-        create_kwargs = {'key_name': self.keypair.name}
+        create_kwargs = {'key_name': self.keypair['name']}
         self.server = self.create_server(image=self.image,
                                          create_kwargs=create_kwargs)
 
     def nova_list(self):
-        servers = self.compute_client.servers.list()
-        LOG.debug("server_list:%s" % servers)
-        self.assertIn(self.server, servers)
+        _, servers = self.servers_client.list_servers()
+        # The compute client nests the server list under a 'servers' key
+        servers = servers['servers']
+        self.assertIn(self.server['id'], [x['id'] for x in servers])
 
     def nova_show(self):
-        got_server = self.compute_client.servers.get(self.server)
-        LOG.debug("got server:%s" % got_server)
-        self.assertEqual(self.server, got_server)
+        _, got_server = self.servers_client.get_server(self.server['id'])
+        self.assertThat(
+            self.server, custom_matchers.MatchesDictExceptForKeys(
+                got_server, excluded_keys=['OS-EXT-AZ:availability_zone']))
 
     def cinder_create(self):
         self.volume = self.create_volume()
 
     def cinder_list(self):
-        volumes = self.volume_client.volumes.list()
-        self.assertIn(self.volume, volumes)
+        _, volumes = self.volumes_client.list_volumes()
+        self.assertIn(self.volume['id'], [x['id'] for x in volumes])
 
     def cinder_show(self):
-        volume = self.volume_client.volumes.get(self.volume.id)
+        _, volume = self.volumes_client.get_volume(self.volume['id'])
         self.assertEqual(self.volume, volume)
 
     def nova_volume_attach(self):
-        attach_volume_client = self.compute_client.volumes.create_server_volume
-        volume = attach_volume_client(self.server.id,
-                                      self.volume.id,
-                                      '/dev/vdb')
-        self.assertEqual(self.volume.id, volume.id)
-        self.wait_for_volume_status('in-use')
+        volume_device_path = '/dev/' + CONF.compute.volume_device_name
+        _, volume_attachment = self.servers_client.attach_volume(
+            self.server['id'], self.volume['id'], volume_device_path)
+        volume = volume_attachment['volumeAttachment']
+        self.assertEqual(self.volume['id'], volume['id'])
+        self.volumes_client.wait_for_volume_status(volume['id'], 'in-use')
+        # Refresh the volume after the attachment
+        _, self.volume = self.volumes_client.get_volume(volume['id'])
 
     def nova_reboot(self):
-        self.server.reboot()
+        self.servers_client.reboot(self.server['id'], 'SOFT')
         self._wait_for_server_status('ACTIVE')
 
     def nova_floating_ip_create(self):
-        self.floating_ip = self.compute_client.floating_ips.create()
-        self.addCleanup(self.delete_wrapper, self.floating_ip)
+        _, self.floating_ip = self.floating_ips_client.create_floating_ip()
+        self.addCleanup(self.delete_wrapper,
+                        self.floating_ips_client.delete_floating_ip,
+                        self.floating_ip['id'])
 
     def nova_floating_ip_add(self):
-        self.server.add_floating_ip(self.floating_ip)
+        self.floating_ips_client.associate_floating_ip_to_server(
+            self.floating_ip['ip'], self.server['id'])
 
     def ssh_to_server(self):
         try:
-            self.linux_client = self.get_remote_client(self.floating_ip.ip)
+            self.linux_client = self.get_remote_client(self.floating_ip['ip'])
         except Exception as e:
             LOG.exception('ssh to server failed')
             self._log_console_output()
@@ -102,21 +111,24 @@
             raise
 
     def check_partitions(self):
+        # NOTE(andreaf) The device name may differ between guest OSes
         partitions = self.linux_client.get_partitions()
-        self.assertEqual(1, partitions.count('vdb'))
+        self.assertEqual(1, partitions.count(CONF.compute.volume_device_name))
 
     def nova_volume_detach(self):
-        detach_volume_client = self.compute_client.volumes.delete_server_volume
-        detach_volume_client(self.server.id, self.volume.id)
-        self.wait_for_volume_status('available')
+        self.servers_client.detach_volume(self.server['id'], self.volume['id'])
+        self.volumes_client.wait_for_volume_status(self.volume['id'],
+                                                   'available')
 
-        volume = self.volume_client.volumes.get(self.volume.id)
-        self.assertEqual('available', volume.status)
+        _, volume = self.volumes_client.get_volume(self.volume['id'])
+        self.assertEqual('available', volume['status'])
 
     def create_and_add_security_group(self):
-        secgroup = self._create_security_group_nova()
-        self.server.add_security_group(secgroup.name)
-        self.addCleanup(self.server.remove_security_group, secgroup.name)
+        secgroup = self._create_security_group()
+        self.servers_client.add_security_group(self.server['id'],
+                                               secgroup['name'])
+        self.addCleanup(self.servers_client.remove_security_group,
+                        self.server['id'], secgroup['name'])
 
     @test.services('compute', 'volume', 'image', 'network')
     def test_minimum_basic_scenario(self):
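
``nova_show`` now compares the booted server against the GET response with
``custom_matchers.MatchesDictExceptForKeys``, ignoring the availability-zone
field. Roughly, the comparison amounts to the following simplified stand-in
(not the real matcher implementation)::

    def dicts_match_except(expected, observed, excluded_keys=()):
        def strip(d):
            return dict((k, v) for k, v in d.items()
                        if k not in excluded_keys)
        return strip(expected) == strip(observed)

    a = {'id': '42', 'name': 'vm', 'OS-EXT-AZ:availability_zone': 'nova'}
    b = {'id': '42', 'name': 'vm', 'OS-EXT-AZ:availability_zone': 'zone2'}
    assert dicts_match_except(a, b,
                              excluded_keys=['OS-EXT-AZ:availability_zone'])
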
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index 431de9a..58a028f 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -40,9 +40,8 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(TestNetworkAdvancedServerOps, cls).setUpClass()
-        cls.check_preconditions()
+    def check_preconditions(cls):
+        super(TestNetworkAdvancedServerOps, cls).check_preconditions()
         if not (CONF.network.tenant_networks_reachable
                 or CONF.network.public_network_id):
             msg = ('Either tenant_networks_reachable must be "true", or '
@@ -50,35 +49,39 @@
             cls.enabled = False
             raise cls.skipException(msg)
 
-    def setUp(self):
-        super(TestNetworkAdvancedServerOps, self).setUp()
-        key_name = data_utils.rand_name('keypair-smoke-')
-        self.keypair = self.create_keypair(name=key_name)
-        security_group =\
-            self._create_security_group_neutron(tenant_id=self.tenant_id)
-        network = self._create_network(self.tenant_id)
-        router = self._get_router(self.tenant_id)
-        subnet = self._create_subnet(network)
-        subnet.add_to_router(router.id)
+    @classmethod
+    def resource_setup(cls):
+        # Create no network resources for these tests.
+        cls.set_network_resources()
+        super(TestNetworkAdvancedServerOps, cls).resource_setup()
+
+    def _setup_network_and_servers(self):
+        self.keypair = self.create_keypair()
+        security_group = self._create_security_group()
+        network, subnet, router = self.create_networks()
         public_network_id = CONF.network.public_network_id
         create_kwargs = {
-            'nics': [
-                {'net-id': network.id},
+            'networks': [
+                {'uuid': network.id},
             ],
-            'key_name': self.keypair.name,
-            'security_groups': [security_group.name],
+            'key_name': self.keypair['name'],
+            'security_groups': [security_group],
         }
-        server_name = data_utils.rand_name('server-smoke-%d-')
+        server_name = data_utils.rand_name('server-smoke')
         self.server = self.create_server(name=server_name,
                                          create_kwargs=create_kwargs)
         self.floating_ip = self._create_floating_ip(self.server,
                                                     public_network_id)
+        # Verify that we can indeed connect to the server before we mess with
+        # its state
+        self._wait_server_status_and_check_network_connectivity()
 
     def _check_network_connectivity(self, should_connect=True):
         username = CONF.compute.image_ssh_user
-        private_key = self.keypair.private_key
+        private_key = self.keypair['private_key']
         self._check_tenant_network_connectivity(
-            self.server, username, private_key, should_connect=should_connect,
+            self.server, username, private_key,
+            should_connect=should_connect,
             servers_for_debug=[self.server])
         floating_ip = self.floating_ip.floating_ip_address
         self._check_public_network_connectivity(floating_ip, username,
@@ -86,52 +89,58 @@
                                                 servers=[self.server])
 
     def _wait_server_status_and_check_network_connectivity(self):
-        self.status_timeout(self.compute_client.servers, self.server.id,
-                            'ACTIVE')
+        self.servers_client.wait_for_server_status(self.server['id'], 'ACTIVE')
         self._check_network_connectivity()
 
+    @test.skip_because(bug="1323658")
     @test.services('compute', 'network')
     def test_server_connectivity_stop_start(self):
-        self.server.stop()
-        self.status_timeout(self.compute_client.servers, self.server.id,
-                            'SHUTOFF')
+        self._setup_network_and_servers()
+        self.servers_client.stop(self.server['id'])
+        self.servers_client.wait_for_server_status(self.server['id'],
+                                                   'SHUTOFF')
         self._check_network_connectivity(should_connect=False)
-        self.server.start()
+        self.servers_client.start(self.server['id'])
         self._wait_server_status_and_check_network_connectivity()
 
     @test.services('compute', 'network')
     def test_server_connectivity_reboot(self):
-        self.server.reboot()
+        self._setup_network_and_servers()
+        self.servers_client.reboot(self.server['id'], reboot_type='SOFT')
         self._wait_server_status_and_check_network_connectivity()
 
     @test.services('compute', 'network')
     def test_server_connectivity_rebuild(self):
+        self._setup_network_and_servers()
         image_ref_alt = CONF.compute.image_ref_alt
-        self.server.rebuild(image_ref_alt)
+        self.servers_client.rebuild(self.server['id'],
+                                    image_ref=image_ref_alt)
         self._wait_server_status_and_check_network_connectivity()
 
     @testtools.skipUnless(CONF.compute_feature_enabled.pause,
                           'Pause is not available.')
     @test.services('compute', 'network')
     def test_server_connectivity_pause_unpause(self):
-        self.server.pause()
-        self.status_timeout(self.compute_client.servers, self.server.id,
-                            'PAUSED')
+        self._setup_network_and_servers()
+        self.servers_client.pause_server(self.server['id'])
+        self.servers_client.wait_for_server_status(self.server['id'], 'PAUSED')
         self._check_network_connectivity(should_connect=False)
-        self.server.unpause()
+        self.servers_client.unpause_server(self.server['id'])
         self._wait_server_status_and_check_network_connectivity()
 
     @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
                           'Suspend is not available.')
     @test.services('compute', 'network')
     def test_server_connectivity_suspend_resume(self):
-        self.server.suspend()
-        self.status_timeout(self.compute_client.servers, self.server.id,
-                            'SUSPENDED')
+        self._setup_network_and_servers()
+        self.servers_client.suspend_server(self.server['id'])
+        self.servers_client.wait_for_server_status(self.server['id'],
+                                                   'SUSPENDED')
         self._check_network_connectivity(should_connect=False)
-        self.server.resume()
+        self.servers_client.resume_server(self.server['id'])
         self._wait_server_status_and_check_network_connectivity()
 
+    @test.skip_because(bug="1323658")
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize is not available.')
     @test.services('compute', 'network')
@@ -140,9 +149,9 @@
         if resize_flavor == CONF.compute.flavor_ref:
             msg = "Skipping test - flavor_ref and flavor_ref_alt are identical"
             raise self.skipException(msg)
-        resize_flavor = CONF.compute.flavor_ref_alt
-        self.server.resize(resize_flavor)
-        self.status_timeout(self.compute_client.servers, self.server.id,
-                            'VERIFY_RESIZE')
-        self.server.confirm_resize()
+        self._setup_network_and_servers()
+        self.servers_client.resize(self.server['id'], flavor_ref=resize_flavor)
+        self.servers_client.wait_for_server_status(self.server['id'],
+                                                   'VERIFY_RESIZE')
+        self.servers_client.confirm_resize(self.server['id'])
         self._wait_server_status_and_check_network_connectivity()
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 8c7af3d..de60745 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -15,14 +15,16 @@
 #    under the License.
 import collections
 import re
+
 import testtools
 
-from tempest.api.network import common as net_common
 from tempest.common import debug
 from tempest.common.utils import data_utils
 from tempest import config
+from tempest import exceptions
 from tempest.openstack.common import log as logging
 from tempest.scenario import manager
+from tempest.services.network import resources as net_resources
 from tempest import test
 
 CONF = config.CONF
@@ -86,29 +88,31 @@
             raise cls.skipException(msg)
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # Create no network resources for these tests.
         cls.set_network_resources()
-        super(TestNetworkBasicOps, cls).setUpClass()
+        super(TestNetworkBasicOps, cls).resource_setup()
         for ext in ['router', 'security-group']:
             if not test.is_extension_enabled(ext, 'network'):
                 msg = "%s extension not enabled." % ext
                 raise cls.skipException(msg)
-        cls.check_preconditions()
 
     def setUp(self):
         super(TestNetworkBasicOps, self).setUp()
+        self.keypairs = {}
+        self.servers = []
+
+    def _setup_network_and_servers(self):
         self.security_group = \
-            self._create_security_group_neutron(tenant_id=self.tenant_id)
-        self.network, self.subnet, self.router = self._create_networks()
+            self._create_security_group(tenant_id=self.tenant_id)
+        self.network, self.subnet, self.router = self.create_networks()
         self.check_networks()
-        self.servers = {}
+
         name = data_utils.rand_name('server-smoke')
-        serv_dict = self._create_server(name, self.network)
-        self.servers[serv_dict['server']] = serv_dict['keypair']
+        server = self._create_server(name, self.network)
         self._check_tenant_network_connectivity()
 
-        self._create_and_associate_floating_ips()
+        self._create_and_associate_floating_ips(server)
 
     def check_networks(self):
         """
@@ -122,47 +126,53 @@
         self.assertIn(self.network.name, seen_names)
         self.assertIn(self.network.id, seen_ids)
 
-        seen_subnets = self._list_subnets()
-        seen_net_ids = [n['network_id'] for n in seen_subnets]
-        seen_subnet_ids = [n['id'] for n in seen_subnets]
-        self.assertIn(self.network.id, seen_net_ids)
-        self.assertIn(self.subnet.id, seen_subnet_ids)
+        if self.subnet:
+            seen_subnets = self._list_subnets()
+            seen_net_ids = [n['network_id'] for n in seen_subnets]
+            seen_subnet_ids = [n['id'] for n in seen_subnets]
+            self.assertIn(self.network.id, seen_net_ids)
+            self.assertIn(self.subnet.id, seen_subnet_ids)
 
-        seen_routers = self._list_routers()
-        seen_router_ids = [n['id'] for n in seen_routers]
-        seen_router_names = [n['name'] for n in seen_routers]
-        self.assertIn(self.router.name,
-                      seen_router_names)
-        self.assertIn(self.router.id,
-                      seen_router_ids)
+        if self.router:
+            seen_routers = self._list_routers()
+            seen_router_ids = [n['id'] for n in seen_routers]
+            seen_router_names = [n['name'] for n in seen_routers]
+            self.assertIn(self.router.name,
+                          seen_router_names)
+            self.assertIn(self.router.id,
+                          seen_router_ids)
 
     def _create_server(self, name, network):
-        keypair = self.create_keypair(name='keypair-%s' % name)
-        security_groups = [self.security_group.name]
+        keypair = self.create_keypair()
+        self.keypairs[keypair['name']] = keypair
+        security_groups = [self.security_group]
         create_kwargs = {
-            'nics': [
-                {'net-id': network.id},
+            'networks': [
+                {'uuid': network.id},
             ],
-            'key_name': keypair.name,
+            'key_name': keypair['name'],
             'security_groups': security_groups,
         }
         server = self.create_server(name=name, create_kwargs=create_kwargs)
-        return dict(server=server, keypair=keypair)
+        self.servers.append(server)
+        return server
+
+    def _get_server_key(self, server):
+        return self.keypairs[server['key_name']]['private_key']
 
     def _check_tenant_network_connectivity(self):
         ssh_login = CONF.compute.image_ssh_user
-        for server, key in self.servers.iteritems():
+        for server in self.servers:
             # call the common method in the parent class
             super(TestNetworkBasicOps, self).\
                 _check_tenant_network_connectivity(
-                    server, ssh_login, key.private_key,
-                    servers_for_debug=self.servers.keys())
+                    server, ssh_login, self._get_server_key(server),
+                    servers_for_debug=self.servers)
 
-    def _create_and_associate_floating_ips(self):
+    def _create_and_associate_floating_ips(self, server):
         public_network_id = CONF.network.public_network_id
-        for server in self.servers.keys():
-            floating_ip = self._create_floating_ip(server, public_network_id)
-            self.floating_ip_tuple = Floating_IP_tuple(floating_ip, server)
+        floating_ip = self._create_floating_ip(server, public_network_id)
+        self.floating_ip_tuple = Floating_IP_tuple(floating_ip, server)
 
     def _check_public_network_connectivity(self, should_connect=True,
                                            msg=None):
@@ -171,11 +181,11 @@
         ip_address = floating_ip.floating_ip_address
         private_key = None
         if should_connect:
-            private_key = self.servers[server].private_key
+            private_key = self._get_server_key(server)
         # call the common method in the parent class
         super(TestNetworkBasicOps, self)._check_public_network_connectivity(
             ip_address, ssh_login, private_key, should_connect, msg,
-            self.servers.keys())
+            self.servers)
 
     def _disassociate_floating_ips(self):
         floating_ip, server = self.floating_ip_tuple
@@ -187,14 +197,13 @@
         floating_ip, server = self.floating_ip_tuple
         name = data_utils.rand_name('new_server-smoke-')
         # create a new server for the floating ip
-        serv_dict = self._create_server(name, self.network)
-        self.servers[serv_dict['server']] = serv_dict['keypair']
-        self._associate_floating_ip(floating_ip, serv_dict['server'])
+        server = self._create_server(name, self.network)
+        self._associate_floating_ip(floating_ip, server)
         self.floating_ip_tuple = Floating_IP_tuple(
-            floating_ip, serv_dict['server'])
+            floating_ip, server)
 
     def _create_new_network(self):
-        self.new_net = self._create_network(self.tenant_id)
+        self.new_net = self._create_network(tenant_id=self.tenant_id)
         self.new_subnet = self._create_subnet(
             network=self.new_net,
             gateway_ip=None)
@@ -202,42 +211,50 @@
     def _hotplug_server(self):
         old_floating_ip, server = self.floating_ip_tuple
         ip_address = old_floating_ip.floating_ip_address
-        private_key = self.servers[server].private_key
+        private_key = self._get_server_key(server)
         ssh_client = self.get_remote_client(ip_address,
                                             private_key=private_key)
         old_nic_list = self._get_server_nics(ssh_client)
         # get a port from a list of one item
-        port_list = self._list_ports(device_id=server.id)
+        port_list = self._list_ports(device_id=server['id'])
         self.assertEqual(1, len(port_list))
         old_port = port_list[0]
-        self.compute_client.servers.interface_attach(server=server,
-                                                     net_id=self.new_net.id,
-                                                     port_id=None,
-                                                     fixed_ip=None)
-        # move server to the head of the cleanup list
-        self.addCleanup(self.delete_timeout,
-                        self.compute_client.servers,
-                        server.id)
-        self.addCleanup(self.delete_wrapper, server)
+        _, interface = self.interface_client.create_interface(
+            server=server['id'],
+            network_id=self.new_net.id)
+        self.addCleanup(self.network_client.wait_for_resource_deletion,
+                        'port',
+                        interface['port_id'])
+        self.addCleanup(self.delete_wrapper,
+                        self.interface_client.delete_interface,
+                        server['id'], interface['port_id'])
 
         def check_ports():
-            port_list = [port for port in
-                         self._list_ports(device_id=server.id)
-                         if port != old_port]
-            return len(port_list) == 1
+            self.new_port_list = [port for port in
+                                  self._list_ports(device_id=server['id'])
+                                  if port != old_port]
+            return len(self.new_port_list) == 1
 
-        test.call_until_true(check_ports, 60, 1)
-        new_port_list = [p for p in
-                         self._list_ports(device_id=server.id)
-                         if p != old_port]
-        self.assertEqual(1, len(new_port_list))
-        new_port = new_port_list[0]
-        new_port = net_common.DeletablePort(client=self.network_client,
-                                            **new_port)
-        new_nic_list = self._get_server_nics(ssh_client)
-        diff_list = [n for n in new_nic_list if n not in old_nic_list]
-        self.assertEqual(1, len(diff_list))
-        num, new_nic = diff_list[0]
+        if not test.call_until_true(check_ports, CONF.network.build_timeout,
+                                    CONF.network.build_interval):
+            raise exceptions.TimeoutException("No new port attached to the "
+                                              "server in time (%s sec) !"
+                                              % CONF.network.build_timeout)
+        new_port = net_resources.DeletablePort(client=self.network_client,
+                                               **self.new_port_list[0])
+
+        def check_new_nic():
+            new_nic_list = self._get_server_nics(ssh_client)
+            self.diff_list = [n for n in new_nic_list if n not in old_nic_list]
+            return len(self.diff_list) == 1
+
+        if not test.call_until_true(check_new_nic, CONF.network.build_timeout,
+                                    CONF.network.build_interval):
+            raise exceptions.TimeoutException("Interface not visible on the "
+                                              "guest after %s sec"
+                                              % CONF.network.build_timeout)
+
+        num, new_nic = self.diff_list[0]
         ssh_client.assign_static_ip(nic=new_nic,
                                     addr=new_port.fixed_ips[0]['ip_address'])
         ssh_client.turn_nic_on(nic=new_nic)
@@ -257,7 +274,7 @@
         # get internal ports' ips:
         # get all network ports in the new network
         internal_ips = (p['fixed_ips'][0]['ip_address'] for p in
-                        self._list_ports(tenant_id=server.tenant_id,
+                        self._list_ports(tenant_id=server['tenant_id'],
                                          network_id=network.id)
                         if p['device_owner'].startswith('network'))
 
@@ -273,8 +290,8 @@
             LOG.info(msg)
             return
 
-        subnet = self.network_client.list_subnets(
-            network_id=CONF.network.public_network_id)['subnets']
+        subnet = self._list_subnets(
+            network_id=CONF.network.public_network_id)
         self.assertEqual(1, len(subnet), "Found %d subnets" % len(subnet))
 
         external_ips = [subnet[0]['gateway_ip']]
@@ -283,7 +300,7 @@
 
     def _check_server_connectivity(self, floating_ip, address_list):
         ip_address = floating_ip.floating_ip_address
-        private_key = self.servers[self.floating_ip_tuple.server].private_key
+        private_key = self._get_server_key(self.floating_ip_tuple.server)
         ssh_source = self._ssh_to_server(ip_address, private_key)
 
         for remote_ip in address_list:
@@ -335,6 +352,7 @@
 
 
         """
+        self._setup_network_and_servers()
         self._check_public_network_connectivity(should_connect=True)
         self._check_network_internal_connectivity(network=self.network)
         self._check_network_external_connectivity()
@@ -360,7 +378,7 @@
         4. check VM can ping new network dhcp port
 
         """
-
+        self._setup_network_and_servers()
         self._check_public_network_connectivity(should_connect=True)
         self._create_new_network()
         self._hotplug_server()
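
Reviewer note: the hot-plug hunk above replaces the old fixed 60-second port check with two polling loops driven by CONF.network.build_timeout and CONF.network.build_interval, raising exceptions.TimeoutException when the condition never becomes true. A minimal sketch of that polling idiom, assuming a call_until_true(func, duration, sleep_for) helper with the semantics of tempest.test.call_until_true (the sketch raises a plain RuntimeError to stay self-contained)::

    import time

    def call_until_true(func, duration, sleep_for):
        # Call func every sleep_for seconds until it returns True or
        # duration seconds have elapsed; report whether it ever succeeded.
        deadline = time.time() + duration
        while time.time() < deadline:
            if func():
                return True
            time.sleep(sleep_for)
        return False

    # Usage mirroring check_ports()/check_new_nic() above.
    def wait_for(condition, timeout, interval, what):
        if not call_until_true(condition, timeout, interval):
            raise RuntimeError("%s not ready after %s sec" % (what, timeout))
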
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 8058b3d..188dea8 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -99,7 +99,7 @@
         """
 
         def __init__(self, credentials):
-            self.manager = clients.OfficialClientManager(credentials)
+            self.manager = clients.Manager(credentials)
             # Credentials from manager are filled with both names and IDs
             self.creds = self.manager.credentials
             self.network = None
@@ -113,13 +113,18 @@
             self.subnet = subnet
             self.router = router
 
-        def _get_tenant_credentials(self):
-            # FIXME(andreaf) Unused method
-            return self.creds
-
     @classmethod
     def check_preconditions(cls):
+        if CONF.baremetal.driver_enabled:
+            msg = 'Not currently supported by baremetal.'
+            cls.enabled = False
+            raise cls.skipException(msg)
         super(TestSecurityGroupsBasicOps, cls).check_preconditions()
+        # need alt_creds here to check preconditions
+        cls.alt_creds = cls.alt_credentials()
+        cls.alt_manager = clients.Manager(cls.alt_creds)
+        # Credentials from the manager are filled with both IDs and Names
+        cls.alt_creds = cls.alt_manager.credentials
         if (cls.alt_creds is None) or \
                 (cls.tenant_id is cls.alt_creds.tenant_id):
             msg = 'No alt_tenant defined'
@@ -133,15 +138,10 @@
             raise cls.skipException(msg)
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         # Create no network resources for these tests.
         cls.set_network_resources()
-        super(TestSecurityGroupsBasicOps, cls).setUpClass()
-        cls.alt_creds = cls.alt_credentials()
-        cls.alt_manager = clients.OfficialClientManager(cls.alt_creds)
-        # Credentials from the manager are filled with both IDs and Names
-        cls.alt_creds = cls.alt_manager.credentials
-        cls.check_preconditions()
+        super(TestSecurityGroupsBasicOps, cls).resource_setup()
         # TODO(mnewby) Consider looking up entities as needed instead
         # of storing them as collections on the class.
         cls.floating_ips = {}
@@ -162,21 +162,22 @@
         self._verify_network_details(self.primary_tenant)
         self._verify_mac_addr(self.primary_tenant)
 
-    def _create_tenant_keypairs(self, tenant_id):
-        keypair = self.create_keypair(
-            name=data_utils.rand_name('keypair-smoke-'))
-        self.tenants[tenant_id].keypair = keypair
+    def _create_tenant_keypairs(self, tenant):
+        keypair = self.create_keypair(tenant.manager.keypairs_client)
+        tenant.keypair = keypair
 
     def _create_tenant_security_groups(self, tenant):
         access_sg = self._create_empty_security_group(
             namestart='secgroup_access-',
-            tenant_id=tenant.creds.tenant_id
+            tenant_id=tenant.creds.tenant_id,
+            client=tenant.manager.network_client
         )
 
         # don't use default secgroup since it allows in-tenant traffic
         def_sg = self._create_empty_security_group(
             namestart='secgroup_general-',
-            tenant_id=tenant.creds.tenant_id
+            tenant_id=tenant.creds.tenant_id,
+            client=tenant.manager.network_client
         )
         tenant.security_groups.update(access=access_sg, default=def_sg)
         ssh_rule = dict(
@@ -185,7 +186,9 @@
             port_range_max=22,
             direction='ingress',
         )
-        self._create_security_group_rule(secgroup=access_sg, **ssh_rule)
+        self._create_security_group_rule(secgroup=access_sg,
+                                         client=tenant.manager.network_client,
+                                         **ssh_rule)
 
     def _verify_network_details(self, tenant):
         # Checks that we see the newly created network/subnet/router via
@@ -212,28 +215,33 @@
 
         myport = (tenant.router.id, tenant.subnet.id)
         router_ports = [(i['device_id'], i['fixed_ips'][0]['subnet_id']) for i
-                        in self.network_client.list_ports()['ports']
-                        if i['device_owner'] == 'network:router_interface']
+                        in self._list_ports()
+                        if self._is_router_port(i)]
 
         self.assertIn(myport, router_ports)
 
+    def _is_router_port(self, port):
+        """Return True if port is a router interface."""
+        # NOTE(armando-migliaccio): match device owner for both centralized
+        # and distributed routers; 'device_owner' is "" by default.
+        return port['device_owner'].startswith('network:router_interface')
+
     def _create_server(self, name, tenant, security_groups=None):
         """
         creates a server and assigns it to a security group
         """
         self._set_compute_context(tenant)
         if security_groups is None:
-            security_groups = [tenant.security_groups['default'].name]
+            security_groups = [tenant.security_groups['default']]
         create_kwargs = {
-            'nics': [
-                {'net-id': tenant.network.id},
+            'networks': [
+                {'uuid': tenant.network.id},
             ],
-            'key_name': tenant.keypair.name,
+            'key_name': tenant.keypair['name'],
             'security_groups': security_groups,
             'tenant_id': tenant.creds.tenant_id
         }
-        server = self.create_server(name=name, create_kwargs=create_kwargs)
-        return server
+        return self.create_server(name=name, create_kwargs=create_kwargs)
 
     def _create_tenant_servers(self, tenant, num=1):
         for i in range(num):
@@ -251,27 +259,30 @@
         in order to access tenant internal network
         workaround ip namespace
         """
-        secgroups = [sg.name for sg in tenant.security_groups.values()]
+        secgroups = tenant.security_groups.values()
         name = 'server-{tenant}-access_point-'.format(
             tenant=tenant.creds.tenant_name)
         name = data_utils.rand_name(name)
         server = self._create_server(name, tenant,
                                      security_groups=secgroups)
         tenant.access_point = server
-        self._assign_floating_ips(server)
+        self._assign_floating_ips(tenant, server)
 
-    def _assign_floating_ips(self, server):
+    def _assign_floating_ips(self, tenant, server):
         public_network_id = CONF.network.public_network_id
-        floating_ip = self._create_floating_ip(server, public_network_id)
-        self.floating_ips.setdefault(server, floating_ip)
+        floating_ip = self._create_floating_ip(
+            server, public_network_id,
+            client=tenant.manager.network_client)
+        self.floating_ips.setdefault(server['id'], floating_ip)
 
     def _create_tenant_network(self, tenant):
-        network, subnet, router = self._create_networks(tenant.creds.tenant_id)
+        network, subnet, router = self.create_networks(
+            client=tenant.manager.network_client)
         tenant.set_network(network, subnet, router)
 
     def _set_compute_context(self, tenant):
-        self.compute_client = tenant.manager.compute_client
-        return self.compute_client
+        self.servers_client = tenant.manager.servers_client
+        return self.servers_client
 
     def _deploy_tenant(self, tenant_or_id):
         """
@@ -284,12 +295,10 @@
         """
         if not isinstance(tenant_or_id, self.TenantProperties):
             tenant = self.tenants[tenant_or_id]
-            tenant_id = tenant_or_id
         else:
             tenant = tenant_or_id
-            tenant_id = tenant.creds.tenant_id
         self._set_compute_context(tenant)
-        self._create_tenant_keypairs(tenant_id)
+        self._create_tenant_keypairs(tenant)
         self._create_tenant_network(tenant)
         self._create_tenant_security_groups(tenant)
         self._set_access_point(tenant)
@@ -299,12 +308,12 @@
         returns the ip (floating/internal) of a server
         """
         if floating:
-            server_ip = self.floating_ips[server].floating_ip_address
+            server_ip = self.floating_ips[server['id']].floating_ip_address
         else:
             server_ip = None
-            network_name = self.tenants[server.tenant_id].network.name
-            if network_name in server.networks:
-                server_ip = server.networks[network_name][0]
+            network_name = self.tenants[server['tenant_id']].network.name
+            if network_name in server['addresses']:
+                server_ip = server['addresses'][network_name][0]['addr']
         return server_ip
 
     def _connect_to_access_point(self, tenant):
@@ -312,8 +321,8 @@
         create ssh connection to tenant access point
         """
         access_point_ssh = \
-            self.floating_ips[tenant.access_point].floating_ip_address
-        private_key = tenant.keypair.private_key
+            self.floating_ips[tenant.access_point['id']].floating_ip_address
+        private_key = tenant.keypair['private_key']
         access_point_ssh = self._ssh_to_server(access_point_ssh,
                                                private_key=private_key)
         return access_point_ssh
@@ -377,6 +386,7 @@
         )
         self._create_security_group_rule(
             secgroup=dest_tenant.security_groups['default'],
+            client=dest_tenant.manager.network_client,
             **ruleset
         )
         access_point_ssh = self._connect_to_access_point(source_tenant)
@@ -390,6 +400,7 @@
         # allow reverse traffic and check
         self._create_security_group_rule(
             secgroup=source_tenant.security_groups['default'],
+            client=source_tenant.manager.network_client,
             **ruleset
         )
 
@@ -408,8 +419,7 @@
         mac_addr = mac_addr.strip().lower()
         # Get the fixed_ips and mac_address fields of all ports. Select
         # only those two columns to reduce the size of the response.
-        port_list = self.network_client.list_ports(
-            fields=['fixed_ips', 'mac_address'])['ports']
+        port_list = self._list_ports(fields=['fixed_ips', 'mac_address'])
         port_detail_list = [
             (port['fixed_ips'][0]['subnet_id'],
              port['fixed_ips'][0]['ip_address'],
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 5a1dc04..c53e22b 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -25,7 +25,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestServerAdvancedOps(manager.OfficialClientTest):
+class TestServerAdvancedOps(manager.ScenarioTest):
 
     """
     This test case stresses some advanced server instance operations:
@@ -35,9 +35,9 @@
     """
 
     @classmethod
-    def setUpClass(cls):
+    def resource_setup(cls):
         cls.set_network_resources()
-        super(TestServerAdvancedOps, cls).setUpClass()
+        super(TestServerAdvancedOps, cls).resource_setup()
 
         if CONF.compute.flavor_ref_alt == CONF.compute.flavor_ref:
             msg = "Skipping test - flavor_ref and flavor_ref_alt are identical"
@@ -49,19 +49,19 @@
     def test_resize_server_confirm(self):
         # We create an instance for use in this test
         instance = self.create_server()
-        instance_id = instance.id
+        instance_id = instance['id']
         resize_flavor = CONF.compute.flavor_ref_alt
         LOG.debug("Resizing instance %s from flavor %s to flavor %s",
-                  instance.id, instance.flavor, resize_flavor)
-        instance.resize(resize_flavor)
-        self.status_timeout(self.compute_client.servers, instance_id,
-                            'VERIFY_RESIZE')
+                  instance['id'], instance['flavor']['id'], resize_flavor)
+        self.servers_client.resize(instance_id, resize_flavor)
+        self.servers_client.wait_for_server_status(instance_id,
+                                                   'VERIFY_RESIZE')
 
         LOG.debug("Confirming resize of instance %s", instance_id)
-        instance.confirm_resize()
+        self.servers_client.confirm_resize(instance_id)
 
-        self.status_timeout(
-            self.compute_client.servers, instance_id, 'ACTIVE')
+        self.servers_client.wait_for_server_status(instance_id,
+                                                   'ACTIVE')
 
     @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
                           'Suspend is not available.')
@@ -69,24 +69,27 @@
     def test_server_sequence_suspend_resume(self):
         # We create an instance for use in this test
         instance = self.create_server()
-        instance_id = instance.id
+        instance_id = instance['id']
         LOG.debug("Suspending instance %s. Current status: %s",
-                  instance_id, instance.status)
-        instance.suspend()
-        self.status_timeout(self.compute_client.servers, instance_id,
-                            'SUSPENDED')
+                  instance_id, instance['status'])
+        self.servers_client.suspend_server(instance_id)
+        self.servers_client.wait_for_server_status(instance_id,
+                                                   'SUSPENDED')
+        _, fetched_instance = self.servers_client.get_server(instance_id)
         LOG.debug("Resuming instance %s. Current status: %s",
-                  instance_id, instance.status)
-        instance.resume()
-        self.status_timeout(self.compute_client.servers, instance_id,
-                            'ACTIVE')
+                  instance_id, fetched_instance['status'])
+        self.servers_client.resume_server(instance_id)
+        self.servers_client.wait_for_server_status(instance_id,
+                                                   'ACTIVE')
+        _, fetched_instance = self.servers_client.get_server(instance_id)
         LOG.debug("Suspending instance %s. Current status: %s",
-                  instance_id, instance.status)
-        instance.suspend()
-        self.status_timeout(self.compute_client.servers, instance_id,
-                            'SUSPENDED')
+                  instance_id, fetched_instance['status'])
+        self.servers_client.suspend_server(instance_id)
+        self.servers_client.wait_for_server_status(instance_id,
+                                                   'SUSPENDED')
+        _, fetched_instance = self.servers_client.get_server(instance_id)
         LOG.debug("Resuming instance %s. Current status: %s",
-                  instance_id, instance.status)
-        instance.resume()
-        self.status_timeout(self.compute_client.servers, instance_id,
-                            'ACTIVE')
+                  instance_id, fetched_instance['status'])
+        self.servers_client.resume_server(instance_id)
+        self.servers_client.wait_for_server_status(instance_id,
+                                                   'ACTIVE')
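
As elsewhere in this patch, the novaclient Server object is replaced here by the raw JSON body returned by servers_client, so attribute access becomes dictionary access and state changes go through explicit client calls plus wait_for_server_status. An illustrative shape of the body the hunks above assume (only the fields the test reads, with placeholder values)::

    instance = {
        'id': '<server uuid>',           # instance['id'] replaces instance.id
        'status': 'ACTIVE',              # instance['status'] replaces instance.status
        'flavor': {'id': '<flavor id>'}  # hence instance['flavor']['id'] in the log call
    }
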
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index 38686d9..eb636f7 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -26,7 +26,7 @@
 load_tests = test_utils.load_tests_input_scenario_utils
 
 
-class TestServerBasicOps(manager.OfficialClientTest):
+class TestServerBasicOps(manager.ScenarioTest):
 
     """
     This smoke test case follows this basic set of operations:
@@ -35,8 +35,7 @@
      * Create a security group to control network access in instance
      * Add simple permissive rules to the security group
      * Launch an instance
-     * Pause/unpause the instance
-     * Suspend/resume the instance
+     * Perform ssh to the instance
      * Terminate the instance
     """
 
@@ -69,9 +68,9 @@
 
     def boot_instance(self):
         # Create server with image and flavor from input scenario
-        security_groups = [self.security_group.name]
+        security_groups = [self.security_group]
         create_kwargs = {
-            'key_name': self.keypair.id,
+            'key_name': self.keypair['name'],
             'security_groups': security_groups
         }
         self.instance = self.create_server(image=self.image_ref,
@@ -81,16 +80,19 @@
     def verify_ssh(self):
         if self.run_ssh:
             # Obtain a floating IP
-            floating_ip = self.compute_client.floating_ips.create()
-            self.addCleanup(self.delete_wrapper, floating_ip)
+            _, floating_ip = self.floating_ips_client.create_floating_ip()
+            self.addCleanup(self.delete_wrapper,
+                            self.floating_ips_client.delete_floating_ip,
+                            floating_ip['id'])
             # Attach a floating IP
-            self.instance.add_floating_ip(floating_ip)
+            self.floating_ips_client.associate_floating_ip_to_server(
+                floating_ip['ip'], self.instance['id'])
             # Check ssh
             try:
                 self.get_remote_client(
-                    server_or_ip=floating_ip.ip,
+                    server_or_ip=floating_ip['ip'],
                     username=self.image_utils.ssh_user(self.image_ref),
-                    private_key=self.keypair.private_key)
+                    private_key=self.keypair['private_key'])
             except Exception:
                 LOG.exception('ssh to server failed')
                 self._log_console_output()
@@ -99,7 +101,7 @@
     @test.services('compute', 'network')
     def test_server_basicops(self):
         self.add_keypair()
-        self.security_group = self._create_security_group_nova()
+        self.security_group = self._create_security_group()
         self.boot_instance()
         self.verify_ssh()
-        self.instance.delete()
+        self.servers_client.delete_server(self.instance['id'])
diff --git a/tempest/scenario/test_snapshot_pattern.py b/tempest/scenario/test_snapshot_pattern.py
index 7dd662d..dc32edc 100644
--- a/tempest/scenario/test_snapshot_pattern.py
+++ b/tempest/scenario/test_snapshot_pattern.py
@@ -13,6 +13,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import testtools
+
 from tempest import config
 from tempest.openstack.common import log
 from tempest.scenario import manager
@@ -23,7 +25,7 @@
 LOG = log.getLogger(__name__)
 
 
-class TestSnapshotPattern(manager.OfficialClientTest):
+class TestSnapshotPattern(manager.ScenarioTest):
     """
     This test is for snapshotting an instance and booting with it.
     The following is the scenario outline:
@@ -35,9 +37,9 @@
     """
 
     def _boot_image(self, image_id):
-        security_groups = [self.security_group.name]
+        security_groups = [self.security_group]
         create_kwargs = {
-            'key_name': self.keypair.name,
+            'key_name': self.keypair['name'],
             'security_groups': security_groups
         }
         return self.create_server(image=image_id, create_kwargs=create_kwargs)
@@ -64,25 +66,30 @@
         self.assertEqual(self.timestamp, got_timestamp)
 
     def _create_floating_ip(self):
-        floating_ip = self.compute_client.floating_ips.create()
-        self.addCleanup(self.delete_wrapper, floating_ip)
+        _, floating_ip = self.floating_ips_client.create_floating_ip()
+        self.addCleanup(self.delete_wrapper,
+                        self.floating_ips_client.delete_floating_ip,
+                        floating_ip['id'])
         return floating_ip
 
     def _set_floating_ip_to_server(self, server, floating_ip):
-        server.add_floating_ip(floating_ip)
+        self.floating_ips_client.associate_floating_ip_to_server(
+            floating_ip['ip'], server['id'])
 
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @test.services('compute', 'network', 'image')
     def test_snapshot_pattern(self):
         # prepare for booting an instance
         self._add_keypair()
-        self.security_group = self._create_security_group_nova()
+        self.security_group = self._create_security_group()
 
         # boot an instance and create a timestamp file in it
         server = self._boot_image(CONF.compute.image_ref)
         if CONF.compute.use_floatingip_for_ssh:
             fip_for_server = self._create_floating_ip()
             self._set_floating_ip_to_server(server, fip_for_server)
-            self._write_timestamp(fip_for_server.ip)
+            self._write_timestamp(fip_for_server['ip'])
         else:
             self._write_timestamp(server)
 
@@ -90,13 +97,13 @@
         snapshot_image = self.create_server_snapshot(server=server)
 
         # boot a second instance from the snapshot
-        server_from_snapshot = self._boot_image(snapshot_image.id)
+        server_from_snapshot = self._boot_image(snapshot_image['id'])
 
         # check the existence of the timestamp file in the second instance
         if CONF.compute.use_floatingip_for_ssh:
             fip_for_snapshot = self._create_floating_ip()
             self._set_floating_ip_to_server(server_from_snapshot,
                                             fip_for_snapshot)
-            self._check_timestamp(fip_for_snapshot.ip)
+            self._check_timestamp(fip_for_snapshot['ip'])
         else:
             self._check_timestamp(server_from_snapshot)
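
The cleanup registrations above also change shape: instead of handing delete_wrapper an object that knows how to delete itself, the tests now pass the delete method and its arguments. That presumes a delete_wrapper roughly like the sketch below; this is an assumption about the ScenarioTest base class, not code carried in this patch::

    from tempest import exceptions  # NotFound, as used elsewhere in tempest

    def delete_wrapper(delete_thing, *args, **kwargs):
        # Call the delete method and ignore NotFound so a cleanup never
        # fails a test that already removed the resource itself.
        try:
            delete_thing(*args, **kwargs)
        except exceptions.NotFound:
            pass

    # registration style used above:
    # self.addCleanup(self.delete_wrapper,
    #                 self.floating_ips_client.delete_floating_ip,
    #                 floating_ip['id'])
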
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index be27024..8ea2814 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -15,7 +15,7 @@
 
 import time
 
-from cinderclient import exceptions as cinder_exceptions
+import testtools
 
 from tempest.common.utils import data_utils
 from tempest import config
@@ -29,7 +29,7 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestStampPattern(manager.OfficialClientTest):
+class TestStampPattern(manager.ScenarioTest):
     """
     This test is for snapshotting an instance/volume and attaching the volume
     created from snapshot to the instance booted from snapshot.
@@ -51,20 +51,20 @@
     """
 
     @classmethod
-    def setUpClass(cls):
-        super(TestStampPattern, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestStampPattern, cls).resource_setup()
 
         if not CONF.volume_feature_enabled.snapshot:
             raise cls.skipException("Cinder volume snapshots are disabled")
 
     def _wait_for_volume_snapshot_status(self, volume_snapshot, status):
-        self.status_timeout(self.volume_client.volume_snapshots,
-                            volume_snapshot.id, status)
+        self.snapshots_client.wait_for_snapshot_status(volume_snapshot['id'],
+                                                       status)
 
     def _boot_image(self, image_id):
-        security_groups = [self.security_group.name]
+        security_groups = [self.security_group]
         create_kwargs = {
-            'key_name': self.keypair.name,
+            'key_name': self.keypair['name'],
             'security_groups': security_groups
         }
         return self.create_server(image=image_id, create_kwargs=create_kwargs)
@@ -73,53 +73,54 @@
         self.keypair = self.create_keypair()
 
     def _create_floating_ip(self):
-        floating_ip = self.compute_client.floating_ips.create()
-        self.addCleanup(self.delete_wrapper, floating_ip)
+        _, floating_ip = self.floating_ips_client.create_floating_ip()
+        self.addCleanup(self.delete_wrapper,
+                        self.floating_ips_client.delete_floating_ip,
+                        floating_ip['id'])
         return floating_ip
 
     def _add_floating_ip(self, server, floating_ip):
-        server.add_floating_ip(floating_ip)
+        self.floating_ips_client.associate_floating_ip_to_server(
+            floating_ip['ip'], server['id'])
 
     def _ssh_to_server(self, server_or_ip):
         return self.get_remote_client(server_or_ip)
 
     def _create_volume_snapshot(self, volume):
         snapshot_name = data_utils.rand_name('scenario-snapshot-')
-        volume_snapshots = self.volume_client.volume_snapshots
-        snapshot = volume_snapshots.create(
-            volume.id, display_name=snapshot_name)
+        _, snapshot = self.snapshots_client.create_snapshot(
+            volume['id'], display_name=snapshot_name)
 
         def cleaner():
-            volume_snapshots.delete(snapshot)
+            self.snapshots_client.delete_snapshot(snapshot['id'])
             try:
-                while volume_snapshots.get(snapshot.id):
+                while self.snapshots_client.get_snapshot(snapshot['id']):
                     time.sleep(1)
-            except cinder_exceptions.NotFound:
+            except exceptions.NotFound:
                 pass
         self.addCleanup(cleaner)
         self._wait_for_volume_status(volume, 'available')
-        self._wait_for_volume_snapshot_status(snapshot, 'available')
-        self.assertEqual(snapshot_name, snapshot.display_name)
+        self.snapshots_client.wait_for_snapshot_status(snapshot['id'],
+                                                       'available')
+        self.assertEqual(snapshot_name, snapshot['display_name'])
         return snapshot
 
     def _wait_for_volume_status(self, volume, status):
-        self.status_timeout(
-            self.volume_client.volumes, volume.id, status)
+        self.volumes_client.wait_for_volume_status(volume['id'], status)
 
     def _create_volume(self, snapshot_id=None):
         return self.create_volume(snapshot_id=snapshot_id)
 
     def _attach_volume(self, server, volume):
-        attach_volume_client = self.compute_client.volumes.create_server_volume
-        attached_volume = attach_volume_client(server.id,
-                                               volume.id,
-                                               '/dev/vdb')
-        self.assertEqual(volume.id, attached_volume.id)
+        # TODO(andreaf) we should use the device from config instead of vdb
+        _, attached_volume = self.servers_client.attach_volume(
+            server['id'], volume['id'], device='/dev/vdb')
+        attached_volume = attached_volume['volumeAttachment']
+        self.assertEqual(volume['id'], attached_volume['id'])
         self._wait_for_volume_status(attached_volume, 'in-use')
 
     def _detach_volume(self, server, volume):
-        detach_volume_client = self.compute_client.volumes.delete_server_volume
-        detach_volume_client(server.id, volume.id)
+        self.servers_client.detach_volume(server['id'], volume['id'])
         self._wait_for_volume_status(volume, 'available')
 
     def _wait_for_volume_available_on_the_system(self, server_or_ip):
@@ -150,11 +151,13 @@
         self.assertEqual(self.timestamp, got_timestamp)
 
     @tempest.test.skip_because(bug="1205344")
+    @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
+                          'Snapshotting is not available.')
     @tempest.test.services('compute', 'network', 'volume', 'image')
     def test_stamp_pattern(self):
         # prepare for booting an instance
         self._add_keypair()
-        self.security_group = self._create_security_group_nova()
+        self.security_group = self._create_security_group()
 
         # boot an instance and create a timestamp file in it
         volume = self._create_volume()
@@ -164,7 +167,7 @@
         if CONF.compute.use_floatingip_for_ssh:
             floating_ip_for_server = self._create_floating_ip()
             self._add_floating_ip(server, floating_ip_for_server)
-            ip_for_server = floating_ip_for_server.ip
+            ip_for_server = floating_ip_for_server['ip']
         else:
             ip_for_server = server
 
@@ -181,17 +184,17 @@
 
         # create second volume from the snapshot(volume2)
         volume_from_snapshot = self._create_volume(
-            snapshot_id=volume_snapshot.id)
+            snapshot_id=volume_snapshot['id'])
 
         # boot second instance from the snapshot(instance2)
-        server_from_snapshot = self._boot_image(snapshot_image.id)
+        server_from_snapshot = self._boot_image(snapshot_image['id'])
 
         # create and add floating IP to server_from_snapshot
         if CONF.compute.use_floatingip_for_ssh:
             floating_ip_for_snapshot = self._create_floating_ip()
             self._add_floating_ip(server_from_snapshot,
                                   floating_ip_for_snapshot)
-            ip_for_snapshot = floating_ip_for_snapshot.ip
+            ip_for_snapshot = floating_ip_for_snapshot['ip']
         else:
             ip_for_snapshot = server_from_snapshot
 
diff --git a/tempest/scenario/test_swift_basic_ops.py b/tempest/scenario/test_swift_basic_ops.py
index 86e0867..9e0fee0 100644
--- a/tempest/scenario/test_swift_basic_ops.py
+++ b/tempest/scenario/test_swift_basic_ops.py
@@ -13,8 +13,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-
-from tempest.common.utils import data_utils
+from tempest.common import http
 from tempest import config
 from tempest.openstack.common import log as logging
 from tempest.scenario import manager
@@ -25,80 +24,49 @@
 LOG = logging.getLogger(__name__)
 
 
-class TestSwiftBasicOps(manager.OfficialClientTest):
+class TestSwiftBasicOps(manager.SwiftScenarioTest):
     """
-    Test swift with the follow operations:
+    Test swift basic ops.
      * get swift stat.
      * create container.
      * upload a file to the created container.
      * list container's objects and assure that the uploaded file is present.
+     * download the object and check the content
      * delete object from container.
      * list container's objects and assure that the deleted file is gone.
      * delete a container.
      * list containers and assure that the deleted container is gone.
+     * change the ACL of the container and verify anonymous read access works
     """
 
-    @classmethod
-    def setUpClass(cls):
-        cls.set_network_resources()
-        super(TestSwiftBasicOps, cls).setUpClass()
-        if not CONF.service_available.swift:
-            skip_msg = ("%s skipped as swift is not available" %
-                        cls.__name__)
-            raise cls.skipException(skip_msg)
-
-    def _get_swift_stat(self):
-        """get swift status for our user account."""
-        self.object_storage_client.get_account()
-        LOG.debug('Swift status information obtained successfully')
-
-    def _create_container(self, container_name=None):
-        name = container_name or data_utils.rand_name(
-            'swift-scenario-container')
-        self.object_storage_client.put_container(name)
-        # look for the container to assure it is created
-        self._list_and_check_container_objects(name)
-        LOG.debug('Container %s created' % (name))
-        return name
-
-    def _delete_container(self, container_name):
-        self.object_storage_client.delete_container(container_name)
-        LOG.debug('Container %s deleted' % (container_name))
-
-    def _upload_object_to_container(self, container_name, obj_name=None):
-        obj_name = obj_name or data_utils.rand_name('swift-scenario-object')
-        self.object_storage_client.put_object(container_name, obj_name,
-                                              data_utils.rand_name('obj_data'),
-                                              content_type='text/plain')
-        return obj_name
-
-    def _delete_object(self, container_name, filename):
-        self.object_storage_client.delete_object(container_name, filename)
-        self._list_and_check_container_objects(container_name,
-                                               not_present_obj=[filename])
-
-    def _list_and_check_container_objects(self, container_name, present_obj=[],
-                                          not_present_obj=[]):
-        """
-        List objects for a given container and assert which are present and
-        which are not.
-        """
-        meta, response = self.object_storage_client.get_container(
-            container_name)
-        # create a list with file name only
-        object_list = [obj['name'] for obj in response]
-        if present_obj:
-            for obj in present_obj:
-                self.assertIn(obj, object_list)
-        if not_present_obj:
-            for obj in not_present_obj:
-                self.assertNotIn(obj, object_list)
-
     @test.services('object_storage')
     def test_swift_basic_ops(self):
-        self._get_swift_stat()
-        container_name = self._create_container()
-        obj_name = self._upload_object_to_container(container_name)
-        self._list_and_check_container_objects(container_name, [obj_name])
-        self._delete_object(container_name, obj_name)
-        self._delete_container(container_name)
+        self.get_swift_stat()
+        container_name = self.create_container()
+        obj_name, obj_data = self.upload_object_to_container(container_name)
+        self.list_and_check_container_objects(container_name, [obj_name])
+        self.download_and_verify(container_name, obj_name, obj_data)
+        self.delete_object(container_name, obj_name)
+        self.delete_container(container_name)
+
+    @test.services('object_storage')
+    def test_swift_acl_anonymous_download(self):
+        """This test will cover below steps:
+        1. Create container
+        2. Upload object to the new container
+        3. Change the ACL of the container
+        4. Check that the object can be downloaded by an anonymous user
+        5. Delete the object and container
+        """
+        container_name = self.create_container()
+        obj_name, _ = self.upload_object_to_container(container_name)
+        obj_url = '%s/%s/%s' % (self.object_client.base_url,
+                                container_name, obj_name)
+        http_client = http.ClosingHttp()
+        resp, _ = http_client.request(obj_url, 'GET')
+        self.assertEqual(resp.status, 401)
+        self.change_container_acl(container_name, '.r:*')
+        resp, _ = http_client.request(obj_url, 'GET')
+        self.assertEqual(resp.status, 200)
+        self.delete_object(container_name, obj_name)
+        self.delete_container(container_name)
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index bf5d1f6..a20db5c 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -10,8 +10,6 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from cinderclient import exceptions as cinder_exc
-
 from tempest.common.utils import data_utils
 from tempest import config
 from tempest.openstack.common import log
@@ -23,7 +21,7 @@
 LOG = log.getLogger(__name__)
 
 
-class TestVolumeBootPattern(manager.OfficialClientTest):
+class TestVolumeBootPattern(manager.ScenarioTest):
 
     """
     This test case attempts to reproduce the following steps:
@@ -38,8 +36,8 @@
      * Check written content in the instance booted from snapshot
     """
     @classmethod
-    def setUpClass(cls):
-        super(TestVolumeBootPattern, cls).setUpClass()
+    def resource_setup(cls):
+        super(TestVolumeBootPattern, cls).resource_setup()
 
         if not CONF.volume_feature_enabled.snapshot:
             raise cls.skipException("Cinder volume snapshots are disabled")
@@ -54,28 +52,32 @@
         # dev_name=id:type:size:delete_on_terminate
         # where type needs to be "snap" if the server is booted
         # from a snapshot, size instead can be safely left empty
-        bd_map = {
-            'vda': vol_id + ':::0'
-        }
-        security_groups = [self.security_group.name]
+        bd_map = [{
+            'device_name': 'vda',
+            'volume_id': vol_id,
+            'delete_on_termination': '0'}]
+        self.security_group = self._create_security_group()
+        security_groups = [{'name': self.security_group['name']}]
         create_kwargs = {
             'block_device_mapping': bd_map,
-            'key_name': keypair.name,
+            'key_name': keypair['name'],
             'security_groups': security_groups
         }
         return self.create_server(image='', create_kwargs=create_kwargs)
 
     def _create_snapshot_from_volume(self, vol_id):
-        volume_snapshots = self.volume_client.volume_snapshots
         snap_name = data_utils.rand_name('snapshot')
-        snap = volume_snapshots.create(volume_id=vol_id,
-                                       force=True,
-                                       display_name=snap_name)
-        self.addCleanup_with_wait(self.volume_client.volume_snapshots, snap.id,
-                                  exc_type=cinder_exc.NotFound)
-        self.status_timeout(volume_snapshots,
-                            snap.id,
-                            'available')
+        _, snap = self.snapshots_client.create_snapshot(
+            volume_id=vol_id,
+            force=True,
+            display_name=snap_name)
+        self.addCleanup_with_wait(
+            waiter_callable=self.snapshots_client.wait_for_resource_deletion,
+            thing_id=snap['id'], thing_id_param='id',
+            cleanup_callable=self.delete_wrapper,
+            cleanup_args=[self.snapshots_client.delete_snapshot, snap['id']])
+        self.snapshots_client.wait_for_snapshot_status(snap['id'], 'available')
+        self.assertEqual(snap_name, snap['display_name'])
         return snap
 
     def _create_volume_from_snapshot(self, snap_id):
@@ -85,27 +87,26 @@
     def _stop_instances(self, instances):
         # NOTE(gfidente): two loops so we do not wait for the status twice
         for i in instances:
-            self.compute_client.servers.stop(i)
+            self.servers_client.stop(i['id'])
         for i in instances:
-            self.status_timeout(self.compute_client.servers,
-                                i.id,
-                                'SHUTOFF')
+            self.servers_client.wait_for_server_status(i['id'], 'SHUTOFF')
 
     def _detach_volumes(self, volumes):
         # NOTE(gfidente): two loops so we do not wait for the status twice
         for v in volumes:
-            self.volume_client.volumes.detach(v)
+            self.volumes_client.detach_volume(v['id'])
         for v in volumes:
-            self.status_timeout(self.volume_client.volumes,
-                                v.id,
-                                'available')
+            self.volumes_client.wait_for_volume_status(v['id'], 'available')
 
     def _ssh_to_server(self, server, keypair):
         if CONF.compute.use_floatingip_for_ssh:
-            floating_ip = self.compute_client.floating_ips.create()
-            self.addCleanup(self.delete_wrapper, floating_ip)
-            server.add_floating_ip(floating_ip)
-            ip = floating_ip.ip
+            _, floating_ip = self.floating_ips_client.create_floating_ip()
+            self.addCleanup(self.delete_wrapper,
+                            self.floating_ips_client.delete_floating_ip,
+                            floating_ip['id'])
+            self.floating_ips_client.associate_floating_ip_to_server(
+                floating_ip['ip'], server['id'])
+            ip = floating_ip['ip']
         else:
             network_name_for_ssh = CONF.compute.network_for_ssh
             ip = server.networks[network_name_for_ssh][0]
@@ -113,10 +114,10 @@
         try:
             return self.get_remote_client(
                 ip,
-                private_key=keypair.private_key)
+                private_key=keypair['private_key'])
         except Exception:
             LOG.exception('ssh to server failed')
-            self._log_console_output()
+            self._log_console_output(servers=[server])
             raise
 
     def _get_content(self, ssh_client):
@@ -129,8 +130,8 @@
         return self._get_content(ssh_client)
 
     def _delete_server(self, server):
-        self.compute_client.servers.delete(server)
-        self.delete_timeout(self.compute_client.servers, server.id)
+        self.servers_client.delete_server(server['id'])
+        self.servers_client.wait_for_server_termination(server['id'])
 
     def _check_content_of_written_file(self, ssh_client, expected):
         actual = self._get_content(ssh_client)
@@ -139,11 +140,11 @@
     @test.services('compute', 'volume', 'image')
     def test_volume_boot_pattern(self):
         keypair = self.create_keypair()
-        self.security_group = self._create_security_group_nova()
+        self.security_group = self._create_security_group()
 
         # create an instance from volume
         volume_origin = self._create_volume_from_image()
-        instance_1st = self._boot_instance_from_volume(volume_origin.id,
+        instance_1st = self._boot_instance_from_volume(volume_origin['id'],
                                                        keypair)
 
         # write content to volume on instance
@@ -155,7 +156,7 @@
         self._delete_server(instance_1st)
 
         # create a 2nd instance from volume
-        instance_2nd = self._boot_instance_from_volume(volume_origin.id,
+        instance_2nd = self._boot_instance_from_volume(volume_origin['id'],
                                                        keypair)
 
         # check the content of written file
@@ -164,11 +165,11 @@
         self._check_content_of_written_file(ssh_client_for_instance_2nd, text)
 
         # snapshot a volume
-        snapshot = self._create_snapshot_from_volume(volume_origin.id)
+        snapshot = self._create_snapshot_from_volume(volume_origin['id'])
 
         # create a 3rd instance from snapshot
-        volume = self._create_volume_from_snapshot(snapshot.id)
-        instance_from_snapshot = self._boot_instance_from_volume(volume.id,
+        volume = self._create_volume_from_snapshot(snapshot['id'])
+        instance_from_snapshot = self._boot_instance_from_volume(volume['id'],
                                                                  keypair)
 
         # check the content of written file
@@ -186,10 +187,11 @@
         bdms = [{'uuid': vol_id, 'source_type': 'volume',
                  'destination_type': 'volume', 'boot_index': 0,
                  'delete_on_termination': False}]
-        security_groups = [self.security_group.name]
+        self.security_group = self._create_security_group()
+        security_groups = [{'name': self.security_group['name']}]
         create_kwargs = {
             'block_device_mapping_v2': bdms,
-            'key_name': keypair.name,
+            'key_name': keypair['name'],
             'security_groups': security_groups
         }
         return self.create_server(image='', create_kwargs=create_kwargs)
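
For reference, the three block device mapping spellings touched in this file describe the same boot volume: the legacy shorthand documented in the inline comment, the expanded v1 dict this patch switches to, and the block_device_mapping_v2 form used by the v2 test at the end of the file (values below are placeholders)::

    # legacy shorthand: dev_name=id:type:size:delete_on_terminate
    bd_map_shorthand = {'vda': '<volume id>' + ':::0'}

    # expanded v1 dict form used by _boot_instance_from_volume
    bd_map = [{'device_name': 'vda',
               'volume_id': '<volume id>',
               'delete_on_termination': '0'}]

    # block_device_mapping_v2 form used by the v2 variant
    bdms = [{'uuid': '<volume id>', 'source_type': 'volume',
             'destination_type': 'volume', 'boot_index': 0,
             'delete_on_termination': False}]
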
diff --git a/tempest/scenario/utils.py b/tempest/scenario/utils.py
index e2adb34..c20f20c 100644
--- a/tempest/scenario/utils.py
+++ b/tempest/scenario/utils.py
@@ -40,33 +40,33 @@
         self.non_ssh_image_pattern = \
             CONF.input_scenario.non_ssh_image_regex
         # Setup clients
-        ocm = clients.OfficialClientManager(
-            auth.get_default_credentials('user'))
-        self.client = ocm.compute_client
+        os = clients.Manager()
+        self.images_client = os.images_client
+        self.flavors_client = os.flavors_client
 
     def ssh_user(self, image_id):
-        _image = self.client.images.get(image_id)
+        _, _image = self.images_client.get_image(image_id)
         for regex, user in self.ssh_users:
             # First match wins
-            if re.match(regex, _image.name) is not None:
+            if re.match(regex, _image['name']) is not None:
                 return user
         else:
             return self.default_ssh_user
 
     def _is_sshable_image(self, image):
         return not re.search(pattern=self.non_ssh_image_pattern,
-                             string=str(image.name))
+                             string=str(image['name']))
 
     def is_sshable_image(self, image_id):
-        _image = self.client.images.get(image_id)
+        _, _image = self.images_client.get_image(image_id)
         return self._is_sshable_image(_image)
 
     def _is_flavor_enough(self, flavor, image):
-        return image.minDisk <= flavor.disk
+        return image['minDisk'] <= flavor['disk']
 
     def is_flavor_enough(self, flavor_id, image_id):
-        _image = self.client.images.get(image_id)
-        _flavor = self.client.flavors.get(flavor_id)
+        _, _image = self.images_client.get_image(image_id)
+        _, _flavor = self.flavors_client.get_flavor_details(flavor_id)
         return self._is_flavor_enough(_flavor, _image)
 
 
@@ -81,7 +81,7 @@
     load_tests = testscenarios.load_tests_apply_scenarios
 
 
-    class TestInputScenario(manager.OfficialClientTest):
+    class TestInputScenario(manager.ScenarioTest):
 
         scenario_utils = utils.InputScenarioUtils()
         scenario_flavor = scenario_utils.scenario_flavors
@@ -91,17 +91,18 @@
 
         def test_create_server_metadata(self):
             name = rand_name('instance')
-            _ = self.compute_client.servers.create(name=name,
-                                                   flavor=self.flavor_ref,
-                                                   image=self.image_ref)
+            self.servers_client.create_server(name=name,
+                                              flavor_ref=self.flavor_ref,
+                                              image_ref=self.image_ref)
     """
     validchars = "-_.{ascii}{digit}".format(ascii=string.ascii_letters,
                                             digit=string.digits)
 
     def __init__(self):
-        ocm = clients.OfficialClientManager(
+        os = clients.Manager(
             auth.get_default_credentials('user', fill_in=False))
-        self.client = ocm.compute_client
+        self.images_client = os.images_client
+        self.flavors_client = os.flavors_client
         self.image_pattern = CONF.input_scenario.image_regex
         self.flavor_pattern = CONF.input_scenario.flavor_regex
 
@@ -118,10 +119,11 @@
         if not CONF.service_available.glance:
             return []
         if not hasattr(self, '_scenario_images'):
-            images = self.client.images.list(detailed=False)
+            _, images = self.images_client.list_images()
             self._scenario_images = [
-                (self._normalize_name(i.name), dict(image_ref=i.id))
-                for i in images if re.search(self.image_pattern, str(i.name))
+                (self._normalize_name(i['name']), dict(image_ref=i['id']))
+                for i in images if re.search(self.image_pattern,
+                                             str(i['name']))
             ]
         return self._scenario_images
 
@@ -131,10 +133,11 @@
         :return: a scenario with name and uuid of flavors
         """
         if not hasattr(self, '_scenario_flavors'):
-            flavors = self.client.flavors.list(detailed=False)
+            _, flavors = self.flavors_client.list_flavors()
             self._scenario_flavors = [
-                (self._normalize_name(f.name), dict(flavor_ref=f.id))
-                for f in flavors if re.search(self.flavor_pattern, str(f.name))
+                (self._normalize_name(f['name']), dict(flavor_ref=f['id']))
+                for f in flavors if re.search(self.flavor_pattern,
+                                              str(f['name']))
             ]
         return self._scenario_flavors
 
diff --git a/tempest/services/baremetal/base.py b/tempest/services/baremetal/base.py
index f98ecff..4933300 100644
--- a/tempest/services/baremetal/base.py
+++ b/tempest/services/baremetal/base.py
@@ -95,9 +95,13 @@
                     for ch in get_change(value, path + '%s/' % name):
                         yield ch
                 else:
-                    yield {'path': path + name,
-                           'value': value,
-                           'op': 'replace'}
+                    if value is None:
+                        yield {'path': path + name,
+                               'op': 'remove'}
+                    else:
+                        yield {'path': path + name,
+                               'value': value,
+                               'op': 'replace'}
 
         patch = [ch for ch in get_change(kw)
                  if ch['path'].lstrip('/') in allowed_attributes]
@@ -119,6 +123,7 @@
             uri += "?%s" % urllib.urlencode(kwargs)
 
         resp, body = self.get(uri)
+        self.expected_success(200, resp['status'])
 
         return resp, self.deserialize(body)
 
@@ -135,6 +140,7 @@
         else:
             uri = self._get_uri(resource, uuid=uuid, permanent=permanent)
         resp, body = self.get(uri)
+        self.expected_success(200, resp['status'])
 
         return resp, self.deserialize(body)
 
@@ -153,6 +159,7 @@
         uri = self._get_uri(resource)
 
         resp, body = self.post(uri, body=body)
+        self.expected_success(201, resp['status'])
 
         return resp, self.deserialize(body)
 
@@ -168,6 +175,7 @@
         uri = self._get_uri(resource, uuid)
 
         resp, body = self.delete(uri)
+        self.expected_success(204, resp['status'])
         return resp, body
 
     def _patch_request(self, resource, uuid, patch_object):
@@ -184,6 +192,7 @@
         patch_body = json.dumps(patch_object)
 
         resp, body = self.patch(uri, body=patch_body)
+        self.expected_success(200, resp['status'])
         return resp, self.deserialize(body)
 
     @handle_errors
@@ -212,4 +221,5 @@
         put_body = json.dumps(put_object)
 
         resp, body = self.put(uri, body=put_body)
+        self.expected_success(202, resp['status'])
         return resp, body
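
The _make_patch change above means a keyword argument explicitly set to None now becomes a JSON PATCH 'remove' operation instead of a 'replace' carrying a null value. A standalone restatement of that logic, assuming (as in the existing helper) that paths are rooted at '/'::

    def make_patch(allowed_attributes, **kw):
        def get_change(value, path='/'):
            for name, nested in value.items():
                if isinstance(nested, dict):
                    for ch in get_change(nested, path + '%s/' % name):
                        yield ch
                elif nested is None:
                    # new behavior: ask the server to drop the attribute
                    yield {'path': path + name, 'op': 'remove'}
                else:
                    yield {'path': path + name, 'value': nested,
                           'op': 'replace'}

        return [ch for ch in get_change(kw)
                if ch['path'].lstrip('/') in allowed_attributes]

    # make_patch(('driver', 'instance_uuid'),
    #            driver='fake', instance_uuid=None)
    # -> [{'path': '/driver', 'value': 'fake', 'op': 'replace'},
    #     {'path': '/instance_uuid', 'op': 'remove'}]   (order may vary)
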
diff --git a/tempest/services/baremetal/v1/base_v1.py b/tempest/services/baremetal/v1/base_v1.py
index 9c753c2..9359808 100644
--- a/tempest/services/baremetal/v1/base_v1.py
+++ b/tempest/services/baremetal/v1/base_v1.py
@@ -27,9 +27,9 @@
         self.uri_prefix = 'v%s' % self.version
 
     @base.handle_errors
-    def list_nodes(self):
+    def list_nodes(self, **kwargs):
         """List all existing nodes."""
-        return self._list_request('nodes')
+        return self._list_request('nodes', **kwargs)
 
     @base.handle_errors
     def list_chassis(self):
@@ -37,11 +37,21 @@
         return self._list_request('chassis')
 
     @base.handle_errors
+    def list_chassis_nodes(self, chassis_uuid):
+        """List all nodes associated with a chassis."""
+        return self._list_request('/chassis/%s/nodes' % chassis_uuid)
+
+    @base.handle_errors
     def list_ports(self, **kwargs):
         """List all existing ports."""
         return self._list_request('ports', **kwargs)
 
     @base.handle_errors
+    def list_node_ports(self, uuid):
+        """List all ports associated with the node."""
+        return self._list_request('/nodes/%s/ports' % uuid)
+
+    @base.handle_errors
     def list_nodestates(self, uuid):
         """List all existing states."""
         return self._list_request('/nodes/%s/states' % uuid)
@@ -68,6 +78,21 @@
         return self._show_request('nodes', uuid)
 
     @base.handle_errors
+    def show_node_by_instance_uuid(self, instance_uuid):
+        """
+        Gets the node associated with the given instance uuid.
+
+        :param instance_uuid: UUID of the instance associated with the node.
+        :return: Serialized node as a dictionary.
+
+        """
+        uri = '/nodes/detail?instance_uuid=%s' % instance_uuid
+
+        return self._show_request('nodes',
+                                  uuid=None,
+                                  uri=uri)
+
+    @base.handle_errors
     def show_chassis(self, uuid):
         """
         Gets a specific chassis.
@@ -203,7 +228,8 @@
                            'properties/cpu_num',
                            'properties/storage',
                            'properties/memory',
-                           'driver')
+                           'driver',
+                           'instance_uuid')
 
         patch = self._make_patch(node_attributes, **kwargs)
 
@@ -264,3 +290,77 @@
                                                    postf='validate')
 
         return self._show_request('nodes', node_uuid, uri=uri)
+
+    @base.handle_errors
+    def set_node_boot_device(self, node_uuid, boot_device, persistent=False):
+        """
+        Set the boot device of the specified node.
+
+        :param node_uuid: The unique identifier of the node.
+        :param boot_device: The boot device name.
+        :param persistent: Boolean value. True if the boot device will
+                           persist to all future boots, False if not.
+                           Default: False.
+
+        """
+        request = {'boot_device': boot_device, 'persistent': persistent}
+        resp, body = self._put_request('nodes/%s/management/boot_device' %
+                                       node_uuid, request)
+        self.expected_success(204, resp.status)
+        return body
+
+    @base.handle_errors
+    def get_node_boot_device(self, node_uuid):
+        """
+        Get the current boot device of the specified node.
+
+        :param node_uuid: The unique identifier of the node.
+
+        """
+        path = 'nodes/%s/management/boot_device' % node_uuid
+        resp, body = self._list_request(path)
+        self.expected_success(200, resp.status)
+        return body
+
+    @base.handle_errors
+    def get_node_supported_boot_devices(self, node_uuid):
+        """
+        Get the supported boot devices of the specified node.
+
+        :param node_uuid: The unique identifier of the node.
+
+        """
+        path = 'nodes/%s/management/boot_device/supported' % node_uuid
+        resp, body = self._list_request(path)
+        self.expected_success(200, resp.status)
+        return body
+
+    @base.handle_errors
+    def get_console(self, node_uuid):
+        """
+        Get connection information about the console.
+
+        :param node_uuid: Unique identifier of the node in UUID format.
+
+        """
+
+        resp, body = self._show_request('nodes/states/console', node_uuid)
+        self.expected_success(200, resp.status)
+        return resp, body
+
+    @base.handle_errors
+    def set_console_mode(self, node_uuid, enabled):
+        """
+        Start or stop the node console.
+
+        :param node_uuid: Unique identifier of the node in UUID format.
+        :param enabled: Boolean value; whether to enable or disable the
+                        console.
+
+        """
+
+        enabled = {'enabled': enabled}
+        resp, body = self._put_request('nodes/%s/states/console' % node_uuid,
+                                       enabled)
+        self.expected_success(202, resp.status)
+        return resp, body
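
The new boot device and console helpers are thin wrappers over the generic
request methods. A minimal usage sketch, assuming an already-configured
``BaremetalClientJSON`` instance named ``client`` and an existing
``node_uuid`` (both hypothetical names)::

    # Hypothetical names; `client` and `node_uuid` are assumptions.
    client.set_node_boot_device(node_uuid, 'pxe', persistent=True)
    current = client.get_node_boot_device(node_uuid)
    supported = client.get_node_supported_boot_devices(node_uuid)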
diff --git a/tempest/services/compute/json/networks_client.py b/tempest/services/compute/json/networks_client.py
new file mode 100644
index 0000000..40eb1a6
--- /dev/null
+++ b/tempest/services/compute/json/networks_client.py
@@ -0,0 +1,40 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import json
+
+from tempest.common import rest_client
+from tempest import config
+
+CONF = config.CONF
+
+
+class NetworksClientJSON(rest_client.RestClient):
+
+    def __init__(self, auth_provider):
+        super(NetworksClientJSON, self).__init__(auth_provider)
+        self.service = CONF.compute.catalog_type
+
+    def list_networks(self):
+        resp, body = self.get("os-networks")
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return resp, body['networks']
+
+    def get_network(self, network_id):
+        resp, body = self.get("os-networks/%s" % str(network_id))
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return resp, body['network']
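
A rough sketch of exercising the new compute networks client; the
``auth_provider`` object is an assumption and would normally come from
tempest's credential machinery::

    from tempest.services.compute.json import networks_client

    # `auth_provider` is assumed to be supplied by tempest's auth setup.
    client = networks_client.NetworksClientJSON(auth_provider)
    _, networks = client.list_networks()
    if networks:
        _, network = client.get_network(networks[0]['id'])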
diff --git a/tempest/services/compute/json/security_group_default_rules_client.py b/tempest/services/compute/json/security_group_default_rules_client.py
index 6d29837..7743f9c 100644
--- a/tempest/services/compute/json/security_group_default_rules_client.py
+++ b/tempest/services/compute/json/security_group_default_rules_client.py
@@ -15,6 +15,8 @@
 
 import json
 
+from tempest.api_schema.response.compute.v2 import \
+    security_group_default_rule as schema
 from tempest.common import rest_client
 from tempest import config
 
@@ -46,8 +48,9 @@
         post_body = json.dumps({'security_group_default_rule': post_body})
         url = 'os-security-group-default-rules'
         resp, body = self.post(url, post_body)
-        self.expected_success(200, resp.status)
         body = json.loads(body)
+        self.validate_response(schema.create_get_security_group_default_rule,
+                               resp, body)
         return resp, body['security_group_default_rule']
 
     def delete_security_group_default_rule(self,
@@ -55,20 +58,23 @@
         """Deletes the provided Security Group default rule."""
         resp, body = self.delete('os-security-group-default-rules/%s' % str(
             security_group_default_rule_id))
-        self.expected_success(204, resp.status)
+        self.validate_response(schema.delete_security_group_default_rule,
+                               resp, body)
         return resp, body
 
     def list_security_group_default_rules(self):
         """List all Security Group default rules."""
         resp, body = self.get('os-security-group-default-rules')
-        self.expected_success(200, resp.status)
         body = json.loads(body)
+        self.validate_response(schema.list_security_group_default_rules,
+                               resp, body)
         return resp, body['security_group_default_rules']
 
     def get_security_group_default_rule(self, security_group_default_rule_id):
         """Return the details of provided Security Group default rule."""
         resp, body = self.get('os-security-group-default-rules/%s' % str(
             security_group_default_rule_id))
-        self.expected_success(200, resp.status)
         body = json.loads(body)
+        self.validate_response(schema.create_get_security_group_default_rule,
+                               resp, body)
         return resp, body['security_group_default_rule']
diff --git a/tempest/services/compute/json/servers_client.py b/tempest/services/compute/json/servers_client.py
index f44be29..947ba7a 100644
--- a/tempest/services/compute/json/servers_client.py
+++ b/tempest/services/compute/json/servers_client.py
@@ -58,6 +58,7 @@
         disk_config: Determines if user or admin controls disk configuration.
         return_reservation_id: Enable/Disable the return of reservation id
         block_device_mapping: Block device mapping for the server.
+        block_device_mapping_v2: Block device mapping V2 for the server.
         """
         post_body = {
             'name': name,
@@ -70,7 +71,8 @@
                        'availability_zone', 'accessIPv4', 'accessIPv6',
                        'min_count', 'max_count', ('metadata', 'meta'),
                        ('OS-DCF:diskConfig', 'disk_config'),
-                       'return_reservation_id', 'block_device_mapping']:
+                       'return_reservation_id', 'block_device_mapping',
+                       'block_device_mapping_v2']:
             if isinstance(option, tuple):
                 post_param = option[0]
                 key = option[1]
@@ -80,6 +82,7 @@
             value = kwargs.get(key)
             if value is not None:
                 post_body[post_param] = value
+
         post_body = {'server': post_body}
 
         if 'sched_hints' in kwargs:
@@ -172,11 +175,12 @@
         return resp, body
 
     def wait_for_server_status(self, server_id, status, extra_timeout=0,
-                               raise_on_error=True):
+                               raise_on_error=True, ready_wait=True):
         """Waits for a server to reach a given status."""
         return waiters.wait_for_server_status(self, server_id, status,
                                               extra_timeout=extra_timeout,
-                                              raise_on_error=raise_on_error)
+                                              raise_on_error=raise_on_error,
+                                              ready_wait=ready_wait)
 
     def wait_for_server_termination(self, server_id, ignore_error=False):
         """Waits for server to reach termination."""
diff --git a/tempest/services/compute/xml/floating_ips_client.py b/tempest/services/compute/xml/floating_ips_client.py
index fa4aa07..730e870 100644
--- a/tempest/services/compute/xml/floating_ips_client.py
+++ b/tempest/services/compute/xml/floating_ips_client.py
@@ -13,9 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from lxml import etree
 import urllib
 
+from lxml import etree
+
 from tempest.common import rest_client
 from tempest.common import xml_utils
 from tempest import config
diff --git a/tempest/services/compute/xml/hosts_client.py b/tempest/services/compute/xml/hosts_client.py
index 23a7dd6..ddb92b6 100644
--- a/tempest/services/compute/xml/hosts_client.py
+++ b/tempest/services/compute/xml/hosts_client.py
@@ -15,6 +15,7 @@
 import urllib
 
 from lxml import etree
+
 from tempest.common import rest_client
 from tempest.common import xml_utils
 from tempest import config
diff --git a/tempest/services/compute/xml/security_groups_client.py b/tempest/services/compute/xml/security_groups_client.py
index 9eccb90..56ac7ba 100644
--- a/tempest/services/compute/xml/security_groups_client.py
+++ b/tempest/services/compute/xml/security_groups_client.py
@@ -13,9 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from lxml import etree
 import urllib
 
+from lxml import etree
+
 from tempest.common import rest_client
 from tempest.common import xml_utils
 from tempest import config
diff --git a/tempest/services/database/json/flavors_client.py b/tempest/services/database/json/flavors_client.py
index 2ec0405..f276a45 100644
--- a/tempest/services/database/json/flavors_client.py
+++ b/tempest/services/database/json/flavors_client.py
@@ -33,8 +33,10 @@
             url += '?%s' % urllib.urlencode(params)
 
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         return resp, self._parse_resp(body)
 
     def get_db_flavor_details(self, db_flavor_id):
         resp, body = self.get("flavors/%s" % str(db_flavor_id))
+        self.expected_success(200, resp.status)
         return resp, self._parse_resp(body)
diff --git a/tempest/services/database/json/versions_client.py b/tempest/services/database/json/versions_client.py
index 0269c43..81c0e6c 100644
--- a/tempest/services/database/json/versions_client.py
+++ b/tempest/services/database/json/versions_client.py
@@ -35,4 +35,5 @@
             url += '?%s' % urllib.urlencode(params)
 
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         return resp, self._parse_resp(body)
diff --git a/tempest/services/identity/json/identity_client.py b/tempest/services/identity/json/identity_client.py
index ac65f81..e76c1bd 100644
--- a/tempest/services/identity/json/identity_client.py
+++ b/tempest/services/identity/json/identity_client.py
@@ -309,6 +309,7 @@
 
         body = json.dumps(creds)
         resp, body = self.post(self.auth_url, body=body)
+        self.expected_success(200, resp.status)
 
         return resp, body['access']
 
@@ -326,6 +327,7 @@
 
         body = json.dumps(creds)
         resp, body = self.post(self.auth_url, body=body)
+        self.expected_success(200, resp.status)
 
         return resp, body['access']
 
diff --git a/tempest/services/identity/v3/json/credentials_client.py b/tempest/services/identity/v3/json/credentials_client.py
index f795c7b..d424f4c 100644
--- a/tempest/services/identity/v3/json/credentials_client.py
+++ b/tempest/services/identity/v3/json/credentials_client.py
@@ -41,13 +41,14 @@
         }
         post_body = json.dumps({'credential': post_body})
         resp, body = self.post('credentials', post_body)
+        self.expected_success(201, resp.status)
         body = json.loads(body)
         body['credential']['blob'] = json.loads(body['credential']['blob'])
         return resp, body['credential']
 
     def update_credential(self, credential_id, **kwargs):
         """Updates a credential."""
-        resp, body = self.get_credential(credential_id)
+        _, body = self.get_credential(credential_id)
         cred_type = kwargs.get('type', body['type'])
         access_key = kwargs.get('access_key', body['blob']['access'])
         secret_key = kwargs.get('secret_key', body['blob']['secret'])
@@ -63,6 +64,7 @@
         }
         post_body = json.dumps({'credential': post_body})
         resp, body = self.patch('credentials/%s' % credential_id, post_body)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         body['credential']['blob'] = json.loads(body['credential']['blob'])
         return resp, body['credential']
@@ -70,6 +72,7 @@
     def get_credential(self, credential_id):
         """To GET Details of a credential."""
         resp, body = self.get('credentials/%s' % credential_id)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         body['credential']['blob'] = json.loads(body['credential']['blob'])
         return resp, body['credential']
@@ -77,10 +80,12 @@
     def list_credentials(self):
         """Lists out all the available credentials."""
         resp, body = self.get('credentials')
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['credentials']
 
     def delete_credential(self, credential_id):
         """Deletes a credential."""
         resp, body = self.delete('credentials/%s' % credential_id)
+        self.expected_success(204, resp.status)
         return resp, body
diff --git a/tempest/services/identity/v3/json/endpoints_client.py b/tempest/services/identity/v3/json/endpoints_client.py
index f7a894b..c3fedb2 100644
--- a/tempest/services/identity/v3/json/endpoints_client.py
+++ b/tempest/services/identity/v3/json/endpoints_client.py
@@ -32,6 +32,7 @@
     def list_endpoints(self):
         """GET endpoints."""
         resp, body = self.get('endpoints')
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['endpoints']
 
@@ -56,6 +57,7 @@
         }
         post_body = json.dumps({'endpoint': post_body})
         resp, body = self.post('endpoints', post_body)
+        self.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body['endpoint']
 
@@ -82,10 +84,12 @@
             post_body['enabled'] = enabled
         post_body = json.dumps({'endpoint': post_body})
         resp, body = self.patch('endpoints/%s' % endpoint_id, post_body)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['endpoint']
 
     def delete_endpoint(self, endpoint_id):
         """Delete endpoint."""
         resp_header, resp_body = self.delete('endpoints/%s' % endpoint_id)
+        self.expected_success(204, resp_header.status)
         return resp_header, resp_body
diff --git a/tempest/services/identity/v3/json/identity_client.py b/tempest/services/identity/v3/json/identity_client.py
index d57b931..df424ca 100644
--- a/tempest/services/identity/v3/json/identity_client.py
+++ b/tempest/services/identity/v3/json/identity_client.py
@@ -525,7 +525,7 @@
     def __init__(self):
         super(V3TokenClientJSON, self).__init__(None)
         auth_url = CONF.identity.uri_v3
-        if not auth_url and CONF.identity_feature_enabled.api_v3:
+        if not auth_url:
             raise exceptions.InvalidConfiguration('you must specify a v3 uri '
                                                   'if using the v3 identity '
                                                   'api')
@@ -588,6 +588,7 @@
 
         body = json.dumps(creds)
         resp, body = self.post(self.auth_url, body=body)
+        self.expected_success(201, resp.status)
         return resp, body
 
     def request(self, method, url, extra_headers=False, headers=None,
diff --git a/tempest/services/identity/v3/json/policy_client.py b/tempest/services/identity/v3/json/policy_client.py
index 3c90fa1..e093260 100644
--- a/tempest/services/identity/v3/json/policy_client.py
+++ b/tempest/services/identity/v3/json/policy_client.py
@@ -37,12 +37,14 @@
         }
         post_body = json.dumps({'policy': post_body})
         resp, body = self.post('policies', post_body)
+        self.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body['policy']
 
     def list_policies(self):
         """Lists the policies."""
         resp, body = self.get('policies')
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['policies']
 
@@ -50,12 +52,12 @@
         """Lists out the given policy."""
         url = 'policies/%s' % policy_id
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['policy']
 
     def update_policy(self, policy_id, **kwargs):
         """Updates a policy."""
-        resp, body = self.get_policy(policy_id)
         type = kwargs.get('type')
         post_body = {
             'type': type
@@ -63,10 +65,13 @@
         post_body = json.dumps({'policy': post_body})
         url = 'policies/%s' % policy_id
         resp, body = self.patch(url, post_body)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['policy']
 
     def delete_policy(self, policy_id):
         """Deletes the policy."""
         url = "policies/%s" % policy_id
-        return self.delete(url)
+        resp, body = self.delete(url)
+        self.expected_success(204, resp.status)
+        return resp, body
diff --git a/tempest/services/identity/v3/json/region_client.py b/tempest/services/identity/v3/json/region_client.py
index c078765..becea6b 100644
--- a/tempest/services/identity/v3/json/region_client.py
+++ b/tempest/services/identity/v3/json/region_client.py
@@ -43,6 +43,7 @@
                 'regions/%s' % kwargs.get('unique_region_id'), req_body)
         else:
             resp, body = self.post('regions', req_body)
+        self.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body['region']
 
@@ -55,6 +56,7 @@
             post_body['parent_region_id'] = kwargs.get('parent_region_id')
         post_body = json.dumps({'region': post_body})
         resp, body = self.patch('regions/%s' % region_id, post_body)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['region']
 
@@ -62,6 +64,7 @@
         """Get region."""
         url = 'regions/%s' % region_id
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['region']
 
@@ -71,10 +74,12 @@
         if params:
             url += '?%s' % urllib.urlencode(params)
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['regions']
 
     def delete_region(self, region_id):
         """Delete region."""
         resp, body = self.delete('regions/%s' % region_id)
+        self.expected_success(204, resp.status)
         return resp, body
diff --git a/tempest/services/identity/v3/json/service_client.py b/tempest/services/identity/v3/json/service_client.py
index 82e8aad..8e89957 100644
--- a/tempest/services/identity/v3/json/service_client.py
+++ b/tempest/services/identity/v3/json/service_client.py
@@ -57,10 +57,10 @@
     def create_service(self, serv_type, name=None, description=None,
                        enabled=True):
         body_dict = {
-            "name": name,
+            'name': name,
             'type': serv_type,
             'enabled': enabled,
-            "description": description,
+            'description': description,
         }
         body = json.dumps({'service': body_dict})
         resp, body = self.post("services", body)
@@ -73,3 +73,9 @@
         resp, body = self.delete(url)
         self.expected_success(204, resp.status)
         return resp, body
+
+    def list_services(self):
+        resp, body = self.get('services')
+        self.expected_success(200, resp.status)
+        body = json.loads(body)
+        return resp, body['services']
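
A minimal sketch of the new ``list_services`` call, assuming an initialized
v3 ``service_client`` (hypothetical name)::

    _, services = service_client.list_services()
    service_ids = [service['id'] for service in services]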
diff --git a/tempest/services/identity/v3/xml/credentials_client.py b/tempest/services/identity/v3/xml/credentials_client.py
index 3c44188..37513d0 100644
--- a/tempest/services/identity/v3/xml/credentials_client.py
+++ b/tempest/services/identity/v3/xml/credentials_client.py
@@ -60,13 +60,14 @@
                                     type=cred_type, user_id=user_id)
         credential.append(blob)
         resp, body = self.post('credentials', str(common.Document(credential)))
+        self.expected_success(201, resp.status)
         body = self._parse_body(etree.fromstring(body))
         body['blob'] = json.loads(body['blob'])
         return resp, body
 
     def update_credential(self, credential_id, **kwargs):
         """Updates a credential."""
-        resp, body = self.get_credential(credential_id)
+        _, body = self.get_credential(credential_id)
         cred_type = kwargs.get('type', body['type'])
         access_key = kwargs.get('access_key', body['blob']['access'])
         secret_key = kwargs.get('secret_key', body['blob']['secret'])
@@ -83,6 +84,7 @@
         credential.append(blob)
         resp, body = self.patch('credentials/%s' % credential_id,
                                 str(common.Document(credential)))
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         body['blob'] = json.loads(body['blob'])
         return resp, body
@@ -90,6 +92,7 @@
     def get_credential(self, credential_id):
         """To GET Details of a credential."""
         resp, body = self.get('credentials/%s' % credential_id)
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         body['blob'] = json.loads(body['blob'])
         return resp, body
@@ -97,10 +100,12 @@
     def list_credentials(self):
         """Lists out all the available credentials."""
         resp, body = self.get('credentials')
+        self.expected_success(200, resp.status)
         body = self._parse_creds(etree.fromstring(body))
         return resp, body
 
     def delete_credential(self, credential_id):
         """Deletes a credential."""
         resp, body = self.delete('credentials/%s' % credential_id)
+        self.expected_success(204, resp.status)
         return resp, body
diff --git a/tempest/services/identity/v3/xml/endpoints_client.py b/tempest/services/identity/v3/xml/endpoints_client.py
index 6490e34..892fb58 100644
--- a/tempest/services/identity/v3/xml/endpoints_client.py
+++ b/tempest/services/identity/v3/xml/endpoints_client.py
@@ -65,6 +65,7 @@
     def list_endpoints(self):
         """Get the list of endpoints."""
         resp, body = self.get("endpoints")
+        self.expected_success(200, resp.status)
         body = self._parse_array(etree.fromstring(body))
         return resp, body
 
@@ -90,6 +91,7 @@
                                          enabled=enabled)
         resp, body = self.post('endpoints',
                                str(common.Document(create_endpoint)))
+        self.expected_success(201, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
@@ -120,10 +122,12 @@
             endpoint.add_attr("enabled", str(enabled).lower())
 
         resp, body = self.patch('endpoints/%s' % str(endpoint_id), str(doc))
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
     def delete_endpoint(self, endpoint_id):
         """Delete endpoint."""
         resp_header, resp_body = self.delete('endpoints/%s' % endpoint_id)
+        self.expected_success(204, resp_header.status)
         return resp_header, resp_body
diff --git a/tempest/services/identity/v3/xml/identity_client.py b/tempest/services/identity/v3/xml/identity_client.py
index c2bd77e..5c43692 100644
--- a/tempest/services/identity/v3/xml/identity_client.py
+++ b/tempest/services/identity/v3/xml/identity_client.py
@@ -520,7 +520,7 @@
     def __init__(self):
         super(V3TokenClientXML, self).__init__(None)
         auth_url = CONF.identity.uri_v3
-        if not auth_url and CONF.identity_feature_enabled.api_v3:
+        if not auth_url:
             raise exceptions.InvalidConfiguration('you must specify a v3 uri '
                                                   'if using the v3 identity '
                                                   'api')
@@ -590,6 +590,7 @@
             auth.append(scope)
 
         resp, body = self.post(self.auth_url, body=str(common.Document(auth)))
+        self.expected_success(201, resp.status)
         return resp, body
 
     def request(self, method, url, extra_headers=False, headers=None,
diff --git a/tempest/services/identity/v3/xml/policy_client.py b/tempest/services/identity/v3/xml/policy_client.py
index 73d831b..41bbfe5 100644
--- a/tempest/services/identity/v3/xml/policy_client.py
+++ b/tempest/services/identity/v3/xml/policy_client.py
@@ -67,12 +67,14 @@
         create_policy = common.Element("policy", xmlns=XMLNS,
                                        blob=blob, type=type)
         resp, body = self.post('policies', str(common.Document(create_policy)))
+        self.expected_success(201, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
     def list_policies(self):
         """Lists the policies."""
         resp, body = self.get('policies')
+        self.expected_success(200, resp.status)
         body = self._parse_array(etree.fromstring(body))
         return resp, body
 
@@ -80,20 +82,23 @@
         """Lists out the given policy."""
         url = 'policies/%s' % policy_id
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
     def update_policy(self, policy_id, **kwargs):
         """Updates a policy."""
-        resp, body = self.get_policy(policy_id)
         type = kwargs.get('type')
         update_policy = common.Element("policy", xmlns=XMLNS, type=type)
         url = 'policies/%s' % policy_id
         resp, body = self.patch(url, str(common.Document(update_policy)))
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
     def delete_policy(self, policy_id):
         """Deletes the policy."""
         url = "policies/%s" % policy_id
-        return self.delete(url)
+        resp, body = self.delete(url)
+        self.expected_success(204, resp.status)
+        return resp, body
diff --git a/tempest/services/identity/v3/xml/region_client.py b/tempest/services/identity/v3/xml/region_client.py
index f854138..7669678 100644
--- a/tempest/services/identity/v3/xml/region_client.py
+++ b/tempest/services/identity/v3/xml/region_client.py
@@ -79,6 +79,7 @@
         else:
             resp, body = self.post('regions',
                                    str(common.Document(create_region)))
+        self.expected_success(201, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
@@ -95,6 +96,7 @@
 
         resp, body = self.patch('regions/%s' % str(region_id),
                                 str(common.Document(update_region)))
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
@@ -102,6 +104,7 @@
         """Get Region."""
         url = 'regions/%s' % region_id
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = self._parse_body(etree.fromstring(body))
         return resp, body
 
@@ -111,10 +114,12 @@
         if params:
             url += '?%s' % urllib.urlencode(params)
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = self._parse_array(etree.fromstring(body))
         return resp, body
 
     def delete_region(self, region_id):
         """Delete region."""
         resp, body = self.delete('regions/%s' % region_id)
+        self.expected_success(204, resp.status)
         return resp, body
diff --git a/tempest/services/identity/v3/xml/service_client.py b/tempest/services/identity/v3/xml/service_client.py
index 3beeb89..14adfac 100644
--- a/tempest/services/identity/v3/xml/service_client.py
+++ b/tempest/services/identity/v3/xml/service_client.py
@@ -37,6 +37,14 @@
         data = common.xml_to_json(body)
         return data
 
+    def _parse_array(self, node):
+        array = []
+        for child in node.getchildren():
+            tag_list = child.tag.split('}', 1)
+            if tag_list[1] == "service":
+                array.append(common.xml_to_json(child))
+        return array
+
     def update_service(self, service_id, **kwargs):
         """Updates a service_id."""
         resp, body = self.get_service(service_id)
@@ -79,3 +87,9 @@
         resp, body = self.delete(url)
         self.expected_success(204, resp.status)
         return resp, body
+
+    def list_services(self):
+        resp, body = self.get('services')
+        self.expected_success(200, resp.status)
+        body = self._parse_array(etree.fromstring(body))
+        return resp, body
diff --git a/tempest/services/identity/xml/identity_client.py b/tempest/services/identity/xml/identity_client.py
index 4ada46c..eaf9390 100644
--- a/tempest/services/identity/xml/identity_client.py
+++ b/tempest/services/identity/xml/identity_client.py
@@ -157,6 +157,7 @@
         auth = xml.Element('auth', **auth_kwargs)
         auth.append(passwordCreds)
         resp, body = self.post(self.auth_url, body=str(xml.Document(auth)))
+        self.expected_success(200, resp.status)
         return resp, body['access']
 
     def auth_token(self, token_id, tenant=None):
@@ -167,4 +168,5 @@
         auth = xml.Element('auth', **auth_kwargs)
         auth.append(tokenCreds)
         resp, body = self.post(self.auth_url, body=str(xml.Document(auth)))
+        self.expected_success(200, resp.status)
         return resp, body['access']
diff --git a/tempest/services/image/v1/json/image_client.py b/tempest/services/image/v1/json/image_client.py
index 4a7c163..bc5e04a 100644
--- a/tempest/services/image/v1/json/image_client.py
+++ b/tempest/services/image/v1/json/image_client.py
@@ -235,7 +235,7 @@
 
     def is_resource_deleted(self, id):
         try:
-            self.get_image(id)
+            self.get_image_meta(id)
         except exceptions.NotFound:
             return True
         return False
diff --git a/tempest/services/queuing/__init__.py b/tempest/services/messaging/__init__.py
similarity index 100%
rename from tempest/services/queuing/__init__.py
rename to tempest/services/messaging/__init__.py
diff --git a/tempest/services/queuing/json/__init__.py b/tempest/services/messaging/json/__init__.py
similarity index 100%
rename from tempest/services/queuing/json/__init__.py
rename to tempest/services/messaging/json/__init__.py
diff --git a/tempest/services/queuing/json/queuing_client.py b/tempest/services/messaging/json/messaging_client.py
similarity index 95%
rename from tempest/services/queuing/json/queuing_client.py
rename to tempest/services/messaging/json/messaging_client.py
index 14960ad..3e82399 100644
--- a/tempest/services/queuing/json/queuing_client.py
+++ b/tempest/services/messaging/json/messaging_client.py
@@ -16,7 +16,7 @@
 import json
 import urllib
 
-from tempest.api_schema.response.queuing.v1 import queues as queues_schema
+from tempest.api_schema.response.messaging.v1 import queues as queues_schema
 from tempest.common import rest_client
 from tempest.common.utils import data_utils
 from tempest import config
@@ -25,11 +25,11 @@
 CONF = config.CONF
 
 
-class QueuingClientJSON(rest_client.RestClient):
+class MessagingClientJSON(rest_client.RestClient):
 
     def __init__(self, auth_provider):
-        super(QueuingClientJSON, self).__init__(auth_provider)
-        self.service = CONF.queuing.catalog_type
+        super(MessagingClientJSON, self).__init__(auth_provider)
+        self.service = CONF.messaging.catalog_type
         self.version = '1'
         self.uri_prefix = 'v{0}'.format(self.version)
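
Callers of the renamed client need the updated import path; a sketch,
assuming ``auth_provider`` is obtained from tempest's credential setup::

    # Old path: tempest.services.queuing.json.queuing_client
    from tempest.services.messaging.json import messaging_client

    client = messaging_client.MessagingClientJSON(auth_provider)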
 
diff --git a/tempest/services/network/json/network_client.py b/tempest/services/network/json/network_client.py
index 8e53b8d..16a4f5c 100644
--- a/tempest/services/network/json/network_client.py
+++ b/tempest/services/network/json/network_client.py
@@ -54,12 +54,14 @@
         body = json.dumps(put_body)
         uri = '%s/quotas/%s' % (self.uri_prefix, tenant_id)
         resp, body = self.put(uri, body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['quota']
 
     def reset_quotas(self, tenant_id):
         uri = '%s/quotas/%s' % (self.uri_prefix, tenant_id)
         resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
         return resp, body
 
     def create_router(self, name, admin_state_up=True, **kwargs):
@@ -69,12 +71,14 @@
         body = json.dumps(post_body)
         uri = '%s/routers' % (self.uri_prefix)
         resp, body = self.post(uri, body)
+        self.rest_client.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body
 
     def _update_router(self, router_id, set_enable_snat, **kwargs):
         uri = '%s/routers/%s' % (self.uri_prefix, router_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         update_body = {}
         update_body['name'] = kwargs.get('name', body['router']['name'])
@@ -88,6 +92,7 @@
         update_body = dict(router=update_body)
         update_body = json.dumps(update_body)
         resp, body = self.put(uri, update_body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -110,37 +115,41 @@
 
     def add_router_interface_with_subnet_id(self, router_id, subnet_id):
         uri = '%s/routers/%s/add_router_interface' % (self.uri_prefix,
-              router_id)
+                                                      router_id)
         update_body = {"subnet_id": subnet_id}
         update_body = json.dumps(update_body)
         resp, body = self.put(uri, update_body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def add_router_interface_with_port_id(self, router_id, port_id):
         uri = '%s/routers/%s/add_router_interface' % (self.uri_prefix,
-              router_id)
+                                                      router_id)
         update_body = {"port_id": port_id}
         update_body = json.dumps(update_body)
         resp, body = self.put(uri, update_body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def remove_router_interface_with_subnet_id(self, router_id, subnet_id):
         uri = '%s/routers/%s/remove_router_interface' % (self.uri_prefix,
-              router_id)
+                                                         router_id)
         update_body = {"subnet_id": subnet_id}
         update_body = json.dumps(update_body)
         resp, body = self.put(uri, update_body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def remove_router_interface_with_port_id(self, router_id, port_id):
         uri = '%s/routers/%s/remove_router_interface' % (self.uri_prefix,
-              router_id)
+                                                         router_id)
         update_body = {"port_id": port_id}
         update_body = json.dumps(update_body)
         resp, body = self.put(uri, update_body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -155,6 +164,7 @@
         uri = '%s/lb/pools/%s/health_monitors' % (self.uri_prefix,
                                                   pool_id)
         resp, body = self.post(uri, body)
+        self.rest_client.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -163,11 +173,13 @@
         uri = '%s/lb/pools/%s/health_monitors/%s' % (self.uri_prefix, pool_id,
                                                      health_monitor_id)
         resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
         return resp, body
 
     def list_router_interfaces(self, uuid):
         uri = '%s/ports?device_id=%s' % (self.uri_prefix, uuid)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -180,12 +192,14 @@
         agent = {"agent": agent_info}
         body = json.dumps(agent)
         resp, body = self.put(uri, body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def list_pools_hosted_by_one_lbaas_agent(self, agent_id):
         uri = '%s/agents/%s/loadbalancer-pools' % (self.uri_prefix, agent_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -193,18 +207,21 @@
         uri = ('%s/lb/pools/%s/loadbalancer-agent' %
                (self.uri_prefix, pool_id))
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def list_routers_on_l3_agent(self, agent_id):
         uri = '%s/agents/%s/l3-routers' % (self.uri_prefix, agent_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def list_l3_agents_hosting_router(self, router_id):
         uri = '%s/routers/%s/l3-agents' % (self.uri_prefix, router_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -213,6 +230,7 @@
         post_body = {"router_id": router_id}
         body = json.dumps(post_body)
         resp, body = self.post(uri, body)
+        self.rest_client.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -220,17 +238,20 @@
         uri = '%s/agents/%s/l3-routers/%s' % (
             self.uri_prefix, agent_id, router_id)
         resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
         return resp, body
 
     def list_dhcp_agent_hosting_network(self, network_id):
         uri = '%s/networks/%s/dhcp-agents' % (self.uri_prefix, network_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def list_networks_hosted_by_one_dhcp_agent(self, agent_id):
         uri = '%s/agents/%s/dhcp-networks' % (self.uri_prefix, agent_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -238,6 +259,7 @@
         uri = '%s/agents/%s/dhcp-networks/%s' % (self.uri_prefix, agent_id,
                                                  network_id)
         resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
         return resp, body
 
     def create_ikepolicy(self, name, **kwargs):
@@ -251,6 +273,7 @@
         body = json.dumps(post_body)
         uri = '%s/vpn/ikepolicies' % (self.uri_prefix)
         resp, body = self.post(uri, body)
+        self.rest_client.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -264,6 +287,7 @@
         }
         body = json.dumps(put_body)
         resp, body = self.put(uri, body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -277,12 +301,14 @@
         }
         body = json.dumps(put_body)
         resp, body = self.put(uri, body)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
     def list_lb_pool_stats(self, pool_id):
         uri = '%s/lb/pools/%s/stats' % (self.uri_prefix, pool_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -291,5 +317,6 @@
         body = json.dumps(post_body)
         uri = '%s/agents/%s/dhcp-networks' % (self.uri_prefix, agent_id)
         resp, body = self.post(uri, body)
+        self.rest_client.expected_success(201, resp.status)
         body = json.loads(body)
         return resp, body
diff --git a/tempest/services/network/network_client_base.py b/tempest/services/network/network_client_base.py
index 4ee8302..5ad5f37 100644
--- a/tempest/services/network/network_client_base.py
+++ b/tempest/services/network/network_client_base.py
@@ -13,6 +13,7 @@
 import time
 import urllib
 
+from tempest.common.utils import misc
 from tempest import config
 from tempest import exceptions
 
@@ -111,6 +112,7 @@
                 uri += '?' + urllib.urlencode(filters, doseq=1)
             resp, body = self.get(uri)
             result = {plural_name: self.deserialize_list(body)}
+            self.rest_client.expected_success(200, resp.status)
             return resp, result
 
         return _list
@@ -119,7 +121,9 @@
         def _delete(resource_id):
             plural = self.pluralize(resource_name)
             uri = '%s/%s' % (self.get_uri(plural), resource_id)
-            return self.delete(uri)
+            resp, body = self.delete(uri)
+            self.rest_client.expected_success(204, resp.status)
+            return resp, body
 
         return _delete
 
@@ -134,6 +138,7 @@
                 uri += '?' + urllib.urlencode(fields, doseq=1)
             resp, body = self.get(uri)
             body = self.deserialize_single(body)
+            self.rest_client.expected_success(200, resp.status)
             return resp, body
 
         return _show
@@ -145,6 +150,7 @@
             post_data = self.serialize({resource_name: kwargs})
             resp, body = self.post(uri, post_data)
             body = self.deserialize_single(body)
+            self.rest_client.expected_success(201, resp.status)
             return resp, body
 
         return _create
@@ -156,6 +162,7 @@
             post_data = self.serialize({resource_name: kwargs})
             resp, body = self.put(uri, post_data)
             body = self.deserialize_single(body)
+            self.rest_client.expected_success(200, resp.status)
             return resp, body
 
         return _update
@@ -174,15 +181,14 @@
         raise AttributeError(name)
 
     # Common methods that are hard to automate
-    def create_bulk_network(self, count, names):
-        network_list = list()
-        for i in range(count):
-            network_list.append({'name': names[i]})
+    def create_bulk_network(self, names):
+        network_list = [{'name': name} for name in names]
         post_data = {'networks': network_list}
         body = self.serialize_list(post_data, "networks", "network")
         uri = self.get_uri("networks")
         resp, body = self.post(uri, body)
         body = {'networks': self.deserialize_list(body)}
+        self.rest_client.expected_success(201, resp.status)
         return resp, body
 
     def create_bulk_subnet(self, subnet_list):
@@ -191,6 +197,7 @@
         uri = self.get_uri('subnets')
         resp, body = self.post(uri, body)
         body = {'subnets': self.deserialize_list(body)}
+        self.rest_client.expected_success(201, resp.status)
         return resp, body
 
     def create_bulk_port(self, port_list):
@@ -199,6 +206,7 @@
         uri = self.get_uri('ports')
         resp, body = self.post(uri, body)
         body = {'ports': self.deserialize_list(body)}
+        self.rest_client.expected_success(201, resp.status)
         return resp, body
 
     def wait_for_resource_deletion(self, resource_type, id):
@@ -220,3 +228,39 @@
         except exceptions.NotFound:
             return True
         return False
+
+    def wait_for_resource_status(self, fetch, status, interval=None,
+                                 timeout=None):
+        """
+        @summary: Waits for a network resource to reach a status
+        @param fetch: the callable to be used to query the resource status
+        @type fetch: callable that takes no parameters and returns the resource
+        @param status: the status that the resource has to reach
+        @type status: String
+        @param interval: the number of seconds to wait between each status
+          query
+        @type interval: Integer
+        @param timeout: the maximum number of seconds to wait for the resource
+          to reach the desired status
+        @type timeout: Integer
+        """
+        if not interval:
+            interval = self.build_interval
+        if not timeout:
+            timeout = self.build_timeout
+        start_time = time.time()
+
+        while time.time() - start_time <= timeout:
+            resource = fetch()
+            if resource['status'] == status:
+                return
+            time.sleep(interval)
+
+        # At this point, the wait has timed out
+        message = 'Resource %s' % (str(resource))
+        message += ' failed to reach status %s' % status
+        message += ' within the required time %s' % timeout
+        caller = misc.find_test_caller()
+        if caller:
+            message = '(%s) %s' % (caller, message)
+        raise exceptions.TimeoutException(message)
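
The new ``wait_for_resource_status`` helper polls a zero-argument callable
that returns the resource dict; a usage sketch where ``net_client`` and
``vip_id`` are assumptions::

    # Hypothetical fetch callable returning a dict with a 'status' key.
    def fetch_vip():
        _, body = net_client.show_vip(vip_id)
        return body['vip']

    net_client.wait_for_resource_status(fetch_vip, 'ACTIVE', interval=2,
                                        timeout=120)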
diff --git a/tempest/api/network/common.py b/tempest/services/network/resources.py
similarity index 69%
rename from tempest/api/network/common.py
rename to tempest/services/network/resources.py
index 97e120f..2b182d0 100644
--- a/tempest/api/network/common.py
+++ b/tempest/services/network/resources.py
@@ -13,6 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import abc
+
+import six
+
 
 class AttributeDict(dict):
 
@@ -27,6 +31,7 @@
         return super(AttributeDict, self).__getattribute__(name)
 
 
+@six.add_metaclass(abc.ABCMeta)
 class DeletableResource(AttributeDict):
 
     """
@@ -42,11 +47,22 @@
         return '<%s id="%s" name="%s">' % (self.__class__.__name__,
                                            self.id, self.name)
 
+    @abc.abstractmethod
     def delete(self):
-        raise NotImplemented()
+        return
+
+    @abc.abstractmethod
+    def show(self):
+        return
 
     def __hash__(self):
-        return id(self)
+        return hash(self.id)
+
+    def wait_for_status(self, status):
+        if not hasattr(self, 'status'):
+            return
+
+        return self.client.wait_for_resource_status(self.show, status)
 
 
 class DeletableNetwork(DeletableResource):
@@ -62,42 +78,48 @@
         self._router_ids = set()
 
     def update(self, *args, **kwargs):
-        body = dict(subnet=dict(*args, **kwargs))
-        result = self.client.update_subnet(subnet=self.id, body=body)
+        _, result = self.client.update_subnet(subnet=self.id, *args, **kwargs)
         super(DeletableSubnet, self).update(**result['subnet'])
 
     def add_to_router(self, router_id):
         self._router_ids.add(router_id)
-        body = dict(subnet_id=self.id)
-        self.client.add_interface_router(router_id, body=body)
+        self.client.add_router_interface_with_subnet_id(router_id,
+                                                        subnet_id=self.id)
 
     def delete(self):
         for router_id in self._router_ids.copy():
-            body = dict(subnet_id=self.id)
-            self.client.remove_interface_router(router_id, body=body)
+            self.client.remove_router_interface_with_subnet_id(
+                router_id,
+                subnet_id=self.id)
             self._router_ids.remove(router_id)
         self.client.delete_subnet(self.id)
 
 
 class DeletableRouter(DeletableResource):
 
-    def add_gateway(self, network_id):
-        body = dict(network_id=network_id)
-        self.client.add_gateway_router(self.id, body=body)
+    def set_gateway(self, network_id):
+        return self.update(external_gateway_info=dict(network_id=network_id))
+
+    def unset_gateway(self):
+        return self.update(external_gateway_info=dict())
+
+    def update(self, *args, **kwargs):
+        _, result = self.client.update_router(self.id,
+                                              *args,
+                                              **kwargs)
+        return super(DeletableRouter, self).update(**result['router'])
 
     def delete(self):
-        self.client.remove_gateway_router(self.id)
+        self.unset_gateway()
         self.client.delete_router(self.id)
 
 
 class DeletableFloatingIp(DeletableResource):
 
     def update(self, *args, **kwargs):
-        result = self.client.update_floatingip(floatingip=self.id,
-                                               body=dict(
-                                                   floatingip=dict(*args,
-                                                                   **kwargs)
-                                               ))
+        _, result = self.client.update_floatingip(self.id,
+                                                  *args,
+                                                  **kwargs)
         super(DeletableFloatingIp, self).update(**result['floatingip'])
 
     def __repr__(self):
@@ -149,3 +171,8 @@
 
     def delete(self):
         self.client.delete_vip(self.id)
+
+    def show(self):
+        _, result = self.client.show_vip(self.id)
+        super(DeletableVip, self).update(**result['vip'])
+        return self
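
A sketch of driving the reworked deletable resources from a scenario test;
``router``, ``vip`` and ``network`` are assumed to be wrapper objects already
created by the scenario manager::

    # Assumed pre-existing DeletableRouter / DeletableVip / DeletableNetwork.
    router.set_gateway(network.id)
    vip.wait_for_status('ACTIVE')
    router.delete()  # delete() also unsets the gateway first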
diff --git a/tempest/services/network/xml/network_client.py b/tempest/services/network/xml/network_client.py
index 22cc948..17b1f8e 100644
--- a/tempest/services/network/xml/network_client.py
+++ b/tempest/services/network/xml/network_client.py
@@ -10,9 +10,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from lxml import etree
 import xml.etree.ElementTree as ET
 
+from lxml import etree
+
 from tempest.common import rest_client
 from tempest.common import xml_utils as common
 from tempest.services.network import network_client_base as client_base
@@ -102,17 +103,21 @@
         post_body.append(p1)
         resp, body = self.post(uri, str(common.Document(post_body)))
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
+        self.rest_client.expected_success(201, resp.status)
         return resp, body
 
     def disassociate_health_monitor_with_pool(self, health_monitor_id,
                                               pool_id):
         uri = '%s/lb/pools/%s/health_monitors/%s' % (self.uri_prefix, pool_id,
                                                      health_monitor_id)
-        return self.delete(uri)
+        resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
+        return resp, body
 
     def show_extension_details(self, ext_alias):
         uri = '%s/extensions/%s' % (self.uri_prefix, str(ext_alias))
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
@@ -122,6 +127,7 @@
         router.append(common.Element("name", name))
         common.deep_dict_to_xml(router, kwargs)
         resp, body = self.post(uri, str(common.Document(router)))
+        self.rest_client.expected_success(201, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
@@ -131,44 +137,50 @@
         for element, content in kwargs.iteritems():
             router.append(common.Element(element, content))
         resp, body = self.put(uri, str(common.Document(router)))
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def add_router_interface_with_subnet_id(self, router_id, subnet_id):
         uri = '%s/routers/%s/add_router_interface' % (self.uri_prefix,
-              router_id)
+                                                      router_id)
         subnet = common.Element("subnet_id", subnet_id)
         resp, body = self.put(uri, str(common.Document(subnet)))
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def add_router_interface_with_port_id(self, router_id, port_id):
         uri = '%s/routers/%s/add_router_interface' % (self.uri_prefix,
-              router_id)
+                                                      router_id)
         port = common.Element("port_id", port_id)
         resp, body = self.put(uri, str(common.Document(port)))
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def remove_router_interface_with_subnet_id(self, router_id, subnet_id):
         uri = '%s/routers/%s/remove_router_interface' % (self.uri_prefix,
-              router_id)
+                                                         router_id)
         subnet = common.Element("subnet_id", subnet_id)
         resp, body = self.put(uri, str(common.Document(subnet)))
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def remove_router_interface_with_port_id(self, router_id, port_id):
         uri = '%s/routers/%s/remove_router_interface' % (self.uri_prefix,
-              router_id)
+                                                         router_id)
         port = common.Element("port_id", port_id)
         resp, body = self.put(uri, str(common.Document(port)))
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def list_router_interfaces(self, uuid):
         uri = '%s/ports?device_id=%s' % (self.uri_prefix, uuid)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         ports = common.parse_array(etree.fromstring(body), self.PLURALS)
         ports = {"ports": ports}
         return resp, ports
@@ -180,12 +192,14 @@
             p = common.Element(key, value)
             agent.append(p)
         resp, body = self.put(uri, str(common.Document(agent)))
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def list_pools_hosted_by_one_lbaas_agent(self, agent_id):
         uri = '%s/agents/%s/loadbalancer-pools' % (self.uri_prefix, agent_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         pools = common.parse_array(etree.fromstring(body))
         body = {'pools': pools}
         return resp, body
@@ -194,12 +208,14 @@
         uri = ('%s/lb/pools/%s/loadbalancer-agent' %
                (self.uri_prefix, pool_id))
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
     def list_routers_on_l3_agent(self, agent_id):
         uri = '%s/agents/%s/l3-routers' % (self.uri_prefix, agent_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         routers = common.parse_array(etree.fromstring(body))
         body = {'routers': routers}
         return resp, body
@@ -207,6 +223,7 @@
     def list_l3_agents_hosting_router(self, router_id):
         uri = '%s/routers/%s/l3-agents' % (self.uri_prefix, router_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         agents = common.parse_array(etree.fromstring(body))
         body = {'agents': agents}
         return resp, body
@@ -215,6 +232,7 @@
         uri = '%s/agents/%s/l3-routers' % (self.uri_prefix, agent_id)
         router = (common.Element("router_id", router_id))
         resp, body = self.post(uri, str(common.Document(router)))
+        self.rest_client.expected_success(201, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
@@ -222,11 +240,13 @@
         uri = '%s/agents/%s/l3-routers/%s' % (
             self.uri_prefix, agent_id, router_id)
         resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
         return resp, body
 
     def list_dhcp_agent_hosting_network(self, network_id):
         uri = '%s/networks/%s/dhcp-agents' % (self.uri_prefix, network_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         agents = common.parse_array(etree.fromstring(body))
         body = {'agents': agents}
         return resp, body
@@ -234,6 +254,7 @@
     def list_networks_hosted_by_one_dhcp_agent(self, agent_id):
         uri = '%s/agents/%s/dhcp-networks' % (self.uri_prefix, agent_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         networks = common.parse_array(etree.fromstring(body))
         body = {'networks': networks}
         return resp, body
@@ -242,11 +263,13 @@
         uri = '%s/agents/%s/dhcp-networks/%s' % (self.uri_prefix, agent_id,
                                                  network_id)
         resp, body = self.delete(uri)
+        self.rest_client.expected_success(204, resp.status)
         return resp, body
 
     def list_lb_pool_stats(self, pool_id):
         uri = '%s/lb/pools/%s/stats' % (self.uri_prefix, pool_id)
         resp, body = self.get(uri)
+        self.rest_client.expected_success(200, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
@@ -254,6 +277,7 @@
         uri = '%s/agents/%s/dhcp-networks' % (self.uri_prefix, agent_id)
         network = common.Element("network_id", network_id)
         resp, body = self.post(uri, str(common.Document(network)))
+        self.rest_client.expected_success(201, resp.status)
         body = _root_tag_fetcher_and_xml_to_json_parse(body)
         return resp, body
 
diff --git a/tempest/services/object_storage/account_client.py b/tempest/services/object_storage/account_client.py
index a0506f2..4dc588f 100644
--- a/tempest/services/object_storage/account_client.py
+++ b/tempest/services/object_storage/account_client.py
@@ -15,12 +15,12 @@
 
 import json
 import urllib
+from xml.etree import ElementTree as etree
 
 from tempest.common import http
 from tempest.common import rest_client
 from tempest import config
 from tempest import exceptions
-from xml.etree import ElementTree as etree
 
 CONF = config.CONF
 
@@ -32,11 +32,15 @@
 
     def create_account(self, data=None,
                        params=None,
-                       metadata={},
-                       remove_metadata={},
+                       metadata=None,
+                       remove_metadata=None,
                        metadata_prefix='X-Account-Meta-',
                        remove_metadata_prefix='X-Remove-Account-Meta-'):
         """Create an account."""
+        if metadata is None:
+            metadata = {}
+        if remove_metadata is None:
+            remove_metadata = {}
         url = ''
         if params:
             url += '?%s' % urllib.urlencode(params)
@@ -54,8 +58,6 @@
         """Delete an account."""
         url = ''
         if params:
-            if 'bulk-delete' in params:
-                url += 'bulk-delete&'
             url = '?%s%s' % (url, urllib.urlencode(params))
 
         resp, body = self.delete(url, headers={}, body=data)
@@ -70,13 +72,19 @@
         return resp, body
 
     def create_account_metadata(self, metadata,
-                                metadata_prefix='X-Account-Meta-'):
+                                metadata_prefix='X-Account-Meta-',
+                                data=None, params=None):
         """Creates an account metadata entry."""
         headers = {}
-        for key in metadata:
-            headers[metadata_prefix + key] = metadata[key]
+        if metadata:
+            for key in metadata:
+                headers[metadata_prefix + key] = metadata[key]
 
-        resp, body = self.post('', headers=headers, body=None)
+        url = ''
+        if params:
+            url = '?%s%s' % (url, urllib.urlencode(params))
+
+        resp, body = self.post(url, headers=headers, body=data)
         return resp, body
 
     def delete_account_metadata(self, metadata,
diff --git a/tempest/services/object_storage/container_client.py b/tempest/services/object_storage/container_client.py
index 546b557..ffc1326 100644
--- a/tempest/services/object_storage/container_client.py
+++ b/tempest/services/object_storage/container_client.py
@@ -15,10 +15,10 @@
 
 import json
 import urllib
+from xml.etree import ElementTree as etree
 
 from tempest.common import rest_client
 from tempest import config
-from xml.etree import ElementTree as etree
 
 CONF = config.CONF
 
diff --git a/tempest/services/orchestration/json/orchestration_client.py b/tempest/services/orchestration/json/orchestration_client.py
index dd8d1b9..15306a0 100644
--- a/tempest/services/orchestration/json/orchestration_client.py
+++ b/tempest/services/orchestration/json/orchestration_client.py
@@ -41,12 +41,15 @@
             uri += '?%s' % urllib.urlencode(params)
 
         resp, body = self.get(uri)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['stacks']
 
-    def create_stack(self, name, disable_rollback=True, parameters={},
+    def create_stack(self, name, disable_rollback=True, parameters=None,
                      timeout_mins=60, template=None, template_url=None,
                      environment=None, files=None):
+        if parameters is None:
+            parameters = {}
         headers, body = self._prepare_update_create(
             name,
             disable_rollback,
@@ -58,11 +61,15 @@
             files)
         uri = 'stacks'
         resp, body = self.post(uri, headers=headers, body=body)
+        self.expected_success(201, resp.status)
+        body = json.loads(body)
         return resp, body
 
     def update_stack(self, stack_identifier, name, disable_rollback=True,
-                     parameters={}, timeout_mins=60, template=None,
+                     parameters=None, timeout_mins=60, template=None,
                      template_url=None, environment=None, files=None):
+        if parameters is None:
+            parameters = {}
         headers, body = self._prepare_update_create(
             name,
             disable_rollback,
@@ -74,12 +81,15 @@
 
         uri = "stacks/%s" % stack_identifier
         resp, body = self.put(uri, headers=headers, body=body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def _prepare_update_create(self, name, disable_rollback=True,
-                               parameters={}, timeout_mins=60,
+                               parameters=None, timeout_mins=60,
                                template=None, template_url=None,
                                environment=None, files=None):
+        if parameters is None:
+            parameters = {}
         post_body = {
             "stack_name": name,
             "disable_rollback": disable_rollback,
@@ -106,6 +116,7 @@
         """Returns the details of a single stack."""
         url = "stacks/%s" % stack_identifier
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['stack']
 
@@ -114,6 +125,7 @@
         url = 'stacks/%s/actions' % stack_identifier
         body = {'suspend': None}
         resp, body = self.post(url, json.dumps(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def resume_stack(self, stack_identifier):
@@ -121,12 +133,14 @@
         url = 'stacks/%s/actions' % stack_identifier
         body = {'resume': None}
         resp, body = self.post(url, json.dumps(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def list_resources(self, stack_identifier):
         """Returns the details of a single resource."""
         url = "stacks/%s/resources" % stack_identifier
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['resources']
 
@@ -134,12 +148,15 @@
         """Returns the details of a single resource."""
         url = "stacks/%s/resources/%s" % (stack_identifier, resource_name)
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['resource']
 
     def delete_stack(self, stack_identifier):
         """Deletes the specified Stack."""
-        return self.delete("stacks/%s" % str(stack_identifier))
+        resp, _ = self.delete("stacks/%s" % str(stack_identifier))
+        self.expected_success(204, resp.status)
+        return resp
 
     def wait_for_resource_status(self, stack_identifier, resource_name,
                                  status, failure_pattern='^.*_FAILED$'):
@@ -208,6 +225,7 @@
         url = ('stacks/{stack_identifier}/resources/{resource_name}'
                '/metadata'.format(**locals()))
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['metadata']
 
@@ -215,6 +233,7 @@
         """Returns list of all events for a stack."""
         url = 'stacks/{stack_identifier}/events'.format(**locals())
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['events']
 
@@ -223,6 +242,7 @@
         url = ('stacks/{stack_identifier}/resources/{resource_name}'
                '/events'.format(**locals()))
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['events']
 
@@ -231,6 +251,7 @@
         url = ('stacks/{stack_identifier}/resources/{resource_name}/events'
                '/{event_id}'.format(**locals()))
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body['event']
 
@@ -238,6 +259,7 @@
         """Returns the template for the stack."""
         url = ('stacks/{stack_identifier}/template'.format(**locals()))
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
@@ -245,25 +267,51 @@
         """Returns the validation request result."""
         post_body = json.dumps(post_body)
         resp, body = self.post('validate', post_body)
+        self.expected_success(200, resp.status)
         body = json.loads(body)
         return resp, body
 
-    def validate_template(self, template, parameters={}):
+    def validate_template(self, template, parameters=None):
         """Returns the validation result for a template with parameters."""
+        if parameters is None:
+            parameters = {}
         post_body = {
             'template': template,
             'parameters': parameters,
         }
         return self._validate_template(post_body)
 
-    def validate_template_url(self, template_url, parameters={}):
+    def validate_template_url(self, template_url, parameters=None):
         """Returns the validation result for a template with parameters."""
+        if parameters is None:
+            parameters = {}
         post_body = {
             'template_url': template_url,
             'parameters': parameters,
         }
         return self._validate_template(post_body)
 
+    def list_resource_types(self):
+        """List resource types."""
+        resp, body = self.get('resource_types')
+        self.expected_success(200, resp.status)
+        body = json.loads(body)
+        return body['resource_types']
+
+    def get_resource_type(self, resource_type_name):
+        """Return the schema of a resource type."""
+        url = 'resource_types/%s' % resource_type_name
+        resp, body = self.get(url)
+        self.expected_success(200, resp.status)
+        return json.loads(body)
+
+    def get_resource_type_template(self, resource_type_name):
+        """Return the template of a resource type."""
+        url = 'resource_types/%s/template' % resource_type_name
+        resp, body = self.get(url)
+        self.expected_success(200, resp.status)
+        return json.loads(body)
+
     def create_software_config(self, name=None, config=None, group=None,
                                inputs=None, outputs=None, options=None):
         headers, body = self._prep_software_config_create(
diff --git a/tempest/services/telemetry/telemetry_client_base.py b/tempest/services/telemetry/telemetry_client_base.py
index b45c239..a184a77 100644
--- a/tempest/services/telemetry/telemetry_client_base.py
+++ b/tempest/services/telemetry/telemetry_client_base.py
@@ -14,9 +14,10 @@
 #    under the License.
 
 import abc
-import six
 import urllib
 
+import six
+
 from tempest import config
 
 CONF = config.CONF
diff --git a/tempest/services/volume/json/admin/volume_hosts_client.py b/tempest/services/volume/json/admin/volume_hosts_client.py
index 84e4705..b3a22b5 100644
--- a/tempest/services/volume/json/admin/volume_hosts_client.py
+++ b/tempest/services/volume/json/admin/volume_hosts_client.py
@@ -43,4 +43,5 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['hosts']
diff --git a/tempest/services/volume/json/admin/volume_quotas_client.py b/tempest/services/volume/json/admin/volume_quotas_client.py
index 961c7da..90790e3 100644
--- a/tempest/services/volume/json/admin/volume_quotas_client.py
+++ b/tempest/services/volume/json/admin/volume_quotas_client.py
@@ -16,10 +16,9 @@
 
 import urllib
 
-from tempest.openstack.common import jsonutils
-
 from tempest.common import rest_client
 from tempest import config
+from tempest.openstack.common import jsonutils
 
 CONF = config.CONF
 
@@ -43,6 +42,7 @@
 
         url = 'os-quota-sets/%s/defaults' % tenant_id
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         return resp, self._parse_resp(body)
 
     def get_quota_set(self, tenant_id, params=None):
@@ -53,12 +53,14 @@
             url += '?%s' % urllib.urlencode(params)
 
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         return resp, self._parse_resp(body)
 
     def get_quota_usage(self, tenant_id):
         """List the quota set for a tenant."""
 
         resp, body = self.get_quota_set(tenant_id, params={'usage': True})
+        self.expected_success(200, resp.status)
         return resp, body
 
     def update_quota_set(self, tenant_id, gigabytes=None, volumes=None,
@@ -76,8 +78,10 @@
 
         post_body = jsonutils.dumps({'quota_set': post_body})
         resp, body = self.put('os-quota-sets/%s' % tenant_id, post_body)
+        self.expected_success(200, resp.status)
         return resp, self._parse_resp(body)
 
     def delete_quota_set(self, tenant_id):
         """Delete the tenant's quota set."""
-        return self.delete('os-quota-sets/%s' % tenant_id)
+        resp, body = self.delete('os-quota-sets/%s' % tenant_id)
+        self.expected_success(200, resp.status)
diff --git a/tempest/services/volume/json/admin/volume_services_client.py b/tempest/services/volume/json/admin/volume_services_client.py
index d43c04a..c9b8bcc 100644
--- a/tempest/services/volume/json/admin/volume_services_client.py
+++ b/tempest/services/volume/json/admin/volume_services_client.py
@@ -35,4 +35,5 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['services']
diff --git a/tempest/services/volume/json/admin/volume_types_client.py b/tempest/services/volume/json/admin/volume_types_client.py
index 65ecc67..44ef9fe 100644
--- a/tempest/services/volume/json/admin/volume_types_client.py
+++ b/tempest/services/volume/json/admin/volume_types_client.py
@@ -63,6 +63,7 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volume_types']
 
     def get_volume_type(self, volume_id):
@@ -70,6 +71,7 @@
         url = "types/%s" % str(volume_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volume_type']
 
     def create_volume_type(self, name, **kwargs):
@@ -87,11 +89,13 @@
         post_body = json.dumps({'volume_type': post_body})
         resp, body = self.post('types', post_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volume_type']
 
     def delete_volume_type(self, volume_id):
         """Deletes the Specified Volume_type."""
-        return self.delete("types/%s" % str(volume_id))
+        resp, body = self.delete("types/%s" % str(volume_id))
+        self.expected_success(202, resp.status)
 
     def list_volume_types_extra_specs(self, vol_type_id, params=None):
         """List all the volume_types extra specs created."""
@@ -101,6 +105,7 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['extra_specs']
 
     def get_volume_type_extra_specs(self, vol_type_id, extra_spec_name):
@@ -109,6 +114,7 @@
                                            str(extra_spec_name))
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body
 
     def create_volume_type_extra_specs(self, vol_type_id, extra_spec):
@@ -121,12 +127,14 @@
         post_body = json.dumps({'extra_specs': extra_spec})
         resp, body = self.post(url, post_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['extra_specs']
 
     def delete_volume_type_extra_specs(self, vol_id, extra_spec_name):
         """Deletes the Specified Volume_type extra spec."""
-        return self.delete("types/%s/extra_specs/%s" % ((str(vol_id)),
-                                                        str(extra_spec_name)))
+        resp, body = self.delete("types/%s/extra_specs/%s" % (
+            (str(vol_id)), str(extra_spec_name)))
+        self.expected_success(202, resp.status)
 
     def update_volume_type_extra_specs(self, vol_type_id, extra_spec_name,
                                        extra_spec):
@@ -142,6 +150,7 @@
         put_body = json.dumps(extra_spec)
         resp, body = self.put(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body
 
     def get_encryption_type(self, vol_type_id):
@@ -152,6 +161,7 @@
         url = "/types/%s/encryption" % str(vol_type_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body
 
     def create_encryption_type(self, vol_type_id, **kwargs):
@@ -170,8 +180,11 @@
         post_body = json.dumps({'encryption': post_body})
         resp, body = self.post(url, post_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['encryption']
 
     def delete_encryption_type(self, vol_type_id):
         """Delete the encryption type for the specified volume-type."""
-        return self.delete("/types/%s/encryption/provider" % str(vol_type_id))
+        resp, body = self.delete(
+            "/types/%s/encryption/provider" % str(vol_type_id))
+        self.expected_success(202, resp.status)
diff --git a/tempest/services/volume/json/availability_zone_client.py b/tempest/services/volume/json/availability_zone_client.py
index f2e7c5c..5ad2287 100644
--- a/tempest/services/volume/json/availability_zone_client.py
+++ b/tempest/services/volume/json/availability_zone_client.py
@@ -31,6 +31,7 @@
     def get_availability_zone_list(self):
         resp, body = self.get('os-availability-zone')
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['availabilityZoneInfo']
 
 
diff --git a/tempest/services/volume/json/backups_client.py b/tempest/services/volume/json/backups_client.py
index 183d06b..63fc646 100644
--- a/tempest/services/volume/json/backups_client.py
+++ b/tempest/services/volume/json/backups_client.py
@@ -47,6 +47,7 @@
         post_body = json.dumps({'backup': post_body})
         resp, body = self.post('backups', post_body)
         body = json.loads(body)
+        self.expected_success(202, resp.status)
         return resp, body['backup']
 
     def restore_backup(self, backup_id, volume_id=None):
@@ -55,11 +56,13 @@
         post_body = json.dumps({'restore': post_body})
         resp, body = self.post('backups/%s/restore' % (backup_id), post_body)
         body = json.loads(body)
+        self.expected_success(202, resp.status)
         return resp, body['restore']
 
     def delete_backup(self, backup_id):
         """Delete a backup of volume."""
         resp, body = self.delete('backups/%s' % (str(backup_id)))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def get_backup(self, backup_id):
@@ -67,6 +70,7 @@
         url = "backups/%s" % str(backup_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['backup']
 
     def list_backups_with_detail(self):
@@ -74,6 +78,7 @@
         url = "backups/detail"
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['backups']
 
     def wait_for_backup_status(self, backup_id, status):
diff --git a/tempest/services/volume/json/extensions_client.py b/tempest/services/volume/json/extensions_client.py
index e3ff00b..c84b186 100644
--- a/tempest/services/volume/json/extensions_client.py
+++ b/tempest/services/volume/json/extensions_client.py
@@ -31,6 +31,7 @@
         url = 'extensions'
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['extensions']
 
 
diff --git a/tempest/services/volume/json/qos_client.py b/tempest/services/volume/json/qos_client.py
new file mode 100644
index 0000000..6e0bee9
--- /dev/null
+++ b/tempest/services/volume/json/qos_client.py
@@ -0,0 +1,161 @@
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import json
+import time
+
+from tempest.common import rest_client
+from tempest import config
+from tempest import exceptions
+
+CONF = config.CONF
+
+
+class BaseQosSpecsClientJSON(rest_client.RestClient):
+    """Client class to send CRUD QoS API requests"""
+
+    def __init__(self, auth_provider):
+        super(BaseQosSpecsClientJSON, self).__init__(auth_provider)
+        self.service = CONF.volume.catalog_type
+        self.build_interval = CONF.volume.build_interval
+        self.build_timeout = CONF.volume.build_timeout
+
+    def is_resource_deleted(self, qos_id):
+        try:
+            self.get_qos(qos_id)
+        except exceptions.NotFound:
+            return True
+        return False
+
+    def wait_for_qos_operations(self, qos_id, operation, args=None):
+        """Waits for a qos operations to be completed.
+
+        NOTE : operation value is required for  wait_for_qos_operations()
+        operation = 'qos-key-unset' / 'disassociate' / 'disassociate-all'
+        args = list of keys when operation = 'qos-key-unset'
+        args = volume-type id to disassociate when operation = 'disassociate'
+        args = None when operation = 'disassociate-all'
+        """
+        start_time = int(time.time())
+        while True:
+            if operation == 'qos-key-unset':
+                resp, body = self.get_qos(qos_id)
+                self.expected_success(200, resp.status)
+                if not any(key in body['specs'] for key in args):
+                    return
+            elif operation == 'disassociate':
+                resp, body = self.get_association_qos(qos_id)
+                self.expected_success(200, resp.status)
+                if not any(args in body[i]['id'] for i in range(0, len(body))):
+                    return
+            elif operation == 'disassociate-all':
+                resp, body = self.get_association_qos(qos_id)
+                self.expected_success(200, resp.status)
+                if not body:
+                    return
+            else:
+                msg = (" operation value is either not defined or incorrect.")
+                raise exceptions.UnprocessableEntity(msg)
+
+            if int(time.time()) - start_time >= self.build_timeout:
+                raise exceptions.TimeoutException
+            time.sleep(self.build_interval)
+
+    def create_qos(self, name, consumer, **kwargs):
+        """Create a QoS Specification.
+
+        name : name of the QoS specification
+        consumer : consumer of QoS (front-end / back-end / both)
+        """
+        post_body = {'name': name, 'consumer': consumer}
+        post_body.update(kwargs)
+        post_body = json.dumps({'qos_specs': post_body})
+        resp, body = self.post('qos-specs', post_body)
+        self.expected_success(200, resp.status)
+        body = json.loads(body)
+        return resp, body['qos_specs']
+
+    def delete_qos(self, qos_id, force=False):
+        """Delete the specified QoS specification."""
+        resp, body = self.delete(
+            "qos-specs/%s?force=%s" % (str(qos_id), force))
+        self.expected_success(202, resp.status)
+
+    def list_qos(self):
+        """List all the QoS specifications created."""
+        url = 'qos-specs'
+        resp, body = self.get(url)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return resp, body['qos_specs']
+
+    def get_qos(self, qos_id):
+        """Get the specified QoS specification."""
+        url = "qos-specs/%s" % str(qos_id)
+        resp, body = self.get(url)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return resp, body['qos_specs']
+
+    def set_qos_key(self, qos_id, **kwargs):
+        """Set the specified keys/values of QoS specification.
+
+        kwargs : it is the dictionary of the key=value pairs to set
+        """
+        put_body = json.dumps({"qos_specs": kwargs})
+        resp, body = self.put('qos-specs/%s' % qos_id, put_body)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return resp, body['qos_specs']
+
+    def unset_qos_key(self, qos_id, keys):
+        """Unset the specified keys of QoS specification.
+
+        keys : it is the array of the keys to unset
+        """
+        put_body = json.dumps({'keys': keys})
+        resp, _ = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
+        self.expected_success(202, resp.status)
+
+    def associate_qos(self, qos_id, vol_type_id):
+        """Associate the specified QoS with specified volume-type."""
+        url = "qos-specs/%s/associate" % str(qos_id)
+        url += "?vol_type_id=%s" % vol_type_id
+        resp, _ = self.get(url)
+        self.expected_success(202, resp.status)
+
+    def get_association_qos(self, qos_id):
+        """Get the association of the specified QoS specification."""
+        url = "qos-specs/%s/associations" % str(qos_id)
+        resp, body = self.get(url)
+        body = json.loads(body)
+        self.expected_success(200, resp.status)
+        return resp, body['qos_associations']
+
+    def disassociate_qos(self, qos_id, vol_type_id):
+        """Disassociate the specified QoS with specified volume-type."""
+        url = "qos-specs/%s/disassociate" % str(qos_id)
+        url += "?vol_type_id=%s" % vol_type_id
+        resp, _ = self.get(url)
+        self.expected_success(202, resp.status)
+
+    def disassociate_all_qos(self, qos_id):
+        """Disassociate the specified QoS with all associations."""
+        url = "qos-specs/%s/disassociate_all" % str(qos_id)
+        resp, _ = self.get(url)
+        self.expected_success(202, resp.status)
+
+
+class QosSpecsClientJSON(BaseQosSpecsClientJSON):
+    """Volume V1 QoS client."""
diff --git a/tempest/services/volume/json/snapshots_client.py b/tempest/services/volume/json/snapshots_client.py
index 2dff63d..1f8065b 100644
--- a/tempest/services/volume/json/snapshots_client.py
+++ b/tempest/services/volume/json/snapshots_client.py
@@ -24,15 +24,16 @@
 LOG = logging.getLogger(__name__)
 
 
-class SnapshotsClientJSON(rest_client.RestClient):
-    """Client class to send CRUD Volume API requests."""
+class BaseSnapshotsClientJSON(rest_client.RestClient):
+    """Base Client class to send CRUD Volume API requests."""
 
     def __init__(self, auth_provider):
-        super(SnapshotsClientJSON, self).__init__(auth_provider)
+        super(BaseSnapshotsClientJSON, self).__init__(auth_provider)
 
         self.service = CONF.volume.catalog_type
         self.build_interval = CONF.volume.build_interval
         self.build_timeout = CONF.volume.build_timeout
+        self.create_resp = 200
 
     def list_snapshots(self, params=None):
         """List all the snapshot."""
@@ -42,6 +43,7 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['snapshots']
 
     def list_snapshots_with_detail(self, params=None):
@@ -52,6 +54,7 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['snapshots']
 
     def get_snapshot(self, snapshot_id):
@@ -59,6 +62,7 @@
         url = "snapshots/%s" % str(snapshot_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['snapshot']
 
     def create_snapshot(self, volume_id, **kwargs):
@@ -74,6 +78,7 @@
         post_body = json.dumps({'snapshot': post_body})
         resp, body = self.post('snapshots', post_body)
         body = json.loads(body)
+        self.expected_success(self.create_resp, resp.status)
         return resp, body['snapshot']
 
     def update_snapshot(self, snapshot_id, **kwargs):
@@ -81,6 +86,7 @@
         put_body = json.dumps({'snapshot': kwargs})
         resp, body = self.put('snapshots/%s' % snapshot_id, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['snapshot']
 
     # NOTE(afazekas): just for the wait function
@@ -122,7 +128,8 @@
 
     def delete_snapshot(self, snapshot_id):
         """Delete Snapshot."""
-        return self.delete("snapshots/%s" % str(snapshot_id))
+        resp, body = self.delete("snapshots/%s" % str(snapshot_id))
+        self.expected_success(202, resp.status)
 
     def is_resource_deleted(self, id):
         try:
@@ -135,6 +142,7 @@
         """Reset the specified snapshot's status."""
         post_body = json.dumps({'os-reset_status': {"status": status}})
         resp, body = self.post('snapshots/%s/action' % snapshot_id, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def update_snapshot_status(self, snapshot_id, status, progress):
@@ -146,6 +154,7 @@
         post_body = json.dumps({'os-update_snapshot_status': post_body})
         url = 'snapshots/%s/action' % str(snapshot_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def create_snapshot_metadata(self, snapshot_id, metadata):
@@ -154,6 +163,7 @@
         url = "snapshots/%s/metadata" % str(snapshot_id)
         resp, body = self.post(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['metadata']
 
     def get_snapshot_metadata(self, snapshot_id):
@@ -161,6 +171,7 @@
         url = "snapshots/%s/metadata" % str(snapshot_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['metadata']
 
     def update_snapshot_metadata(self, snapshot_id, metadata):
@@ -169,6 +180,7 @@
         url = "snapshots/%s/metadata" % str(snapshot_id)
         resp, body = self.put(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['metadata']
 
     def update_snapshot_metadata_item(self, snapshot_id, id, meta_item):
@@ -177,16 +189,22 @@
         url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
         resp, body = self.put(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['meta']
 
     def delete_snapshot_metadata_item(self, snapshot_id, id):
         """Delete metadata item for the snapshot."""
         url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
         resp, body = self.delete(url)
-        return resp, body
+        self.expected_success(200, resp.status)
 
     def force_delete_snapshot(self, snapshot_id):
         """Force Delete Snapshot."""
         post_body = json.dumps({'os-force_delete': {}})
         resp, body = self.post('snapshots/%s/action' % snapshot_id, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
+
+
+class SnapshotsClientJSON(BaseSnapshotsClientJSON):
+    """Client class to send CRUD Volume V1 API requests."""
diff --git a/tempest/services/volume/json/volumes_client.py b/tempest/services/volume/json/volumes_client.py
index 6c97497..c3a9269 100644
--- a/tempest/services/volume/json/volumes_client.py
+++ b/tempest/services/volume/json/volumes_client.py
@@ -35,6 +35,7 @@
         self.service = CONF.volume.catalog_type
         self.build_interval = CONF.volume.build_interval
         self.build_timeout = CONF.volume.build_timeout
+        self.create_resp = 200
 
     def get_attachment_from_volume(self, volume):
         """Return the element 'attachment' from input volumes."""
@@ -48,6 +49,7 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volumes']
 
     def list_volumes_with_detail(self, params=None):
@@ -58,6 +60,7 @@
 
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volumes']
 
     def get_volume(self, volume_id):
@@ -65,6 +68,7 @@
         url = "volumes/%s" % str(volume_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volume']
 
     def create_volume(self, size=None, **kwargs):
@@ -88,6 +92,7 @@
         post_body = json.dumps({'volume': post_body})
         resp, body = self.post('volumes', post_body)
         body = json.loads(body)
+        self.expected_success(self.create_resp, resp.status)
         return resp, body['volume']
 
     def update_volume(self, volume_id, **kwargs):
@@ -95,11 +100,13 @@
         put_body = json.dumps({'volume': kwargs})
         resp, body = self.put('volumes/%s' % volume_id, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['volume']
 
     def delete_volume(self, volume_id):
         """Deletes the Specified Volume."""
-        return self.delete("volumes/%s" % str(volume_id))
+        resp, body = self.delete("volumes/%s" % str(volume_id))
+        self.expected_success(202, resp.status)
 
     def upload_volume(self, volume_id, image_name, disk_format):
         """Uploads a volume in Glance."""
@@ -111,6 +118,7 @@
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
         body = json.loads(body)
+        self.expected_success(202, resp.status)
         return resp, body['os-volume_upload_image']
 
     def attach_volume(self, volume_id, instance_uuid, mountpoint):
@@ -122,6 +130,7 @@
         post_body = json.dumps({'os-attach': post_body})
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def detach_volume(self, volume_id):
@@ -130,6 +139,7 @@
         post_body = json.dumps({'os-detach': post_body})
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def reserve_volume(self, volume_id):
@@ -138,6 +148,7 @@
         post_body = json.dumps({'os-reserve': post_body})
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def unreserve_volume(self, volume_id):
@@ -146,6 +157,7 @@
         post_body = json.dumps({'os-unreserve': post_body})
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def wait_for_volume_status(self, volume_id, status):
@@ -183,24 +195,30 @@
         post_body = json.dumps({'os-extend': post_body})
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def reset_volume_status(self, volume_id, status):
         """Reset the Specified Volume's Status."""
         post_body = json.dumps({'os-reset_status': {"status": status}})
         resp, body = self.post('volumes/%s/action' % volume_id, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def volume_begin_detaching(self, volume_id):
         """Volume Begin Detaching."""
+        # ref cinder/api/contrib/volume_actions.py#L158
         post_body = json.dumps({'os-begin_detaching': {}})
         resp, body = self.post('volumes/%s/action' % volume_id, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def volume_roll_detaching(self, volume_id):
         """Volume Roll Detaching."""
+        # cinder/api/contrib/volume_actions.py#L170
         post_body = json.dumps({'os-roll_detaching': {}})
         resp, body = self.post('volumes/%s/action' % volume_id, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def create_volume_transfer(self, vol_id, display_name=None):
@@ -213,6 +231,7 @@
         post_body = json.dumps({'transfer': post_body})
         resp, body = self.post('os-volume-transfer', post_body)
         body = json.loads(body)
+        self.expected_success(202, resp.status)
         return resp, body['transfer']
 
     def get_volume_transfer(self, transfer_id):
@@ -220,6 +239,7 @@
         url = "os-volume-transfer/%s" % str(transfer_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['transfer']
 
     def list_volume_transfers(self, params=None):
@@ -229,11 +249,13 @@
             url += '?%s' % urllib.urlencode(params)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['transfers']
 
     def delete_volume_transfer(self, transfer_id):
         """Delete a volume transfer."""
-        return self.delete("os-volume-transfer/%s" % str(transfer_id))
+        resp, body = self.delete("os-volume-transfer/%s" % str(transfer_id))
+        self.expected_success(202, resp.status)
 
     def accept_volume_transfer(self, transfer_id, transfer_auth_key):
         """Accept a volume transfer."""
@@ -244,6 +266,7 @@
         post_body = json.dumps({'accept': post_body})
         resp, body = self.post(url, post_body)
         body = json.loads(body)
+        self.expected_success(202, resp.status)
         return resp, body['transfer']
 
     def update_volume_readonly(self, volume_id, readonly):
@@ -254,12 +277,14 @@
         post_body = json.dumps({'os-update_readonly_flag': post_body})
         url = 'volumes/%s/action' % (volume_id)
         resp, body = self.post(url, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def force_delete_volume(self, volume_id):
         """Force Delete Volume."""
         post_body = json.dumps({'os-force_delete': {}})
         resp, body = self.post('volumes/%s/action' % volume_id, post_body)
+        self.expected_success(202, resp.status)
         return resp, body
 
     def create_volume_metadata(self, volume_id, metadata):
@@ -268,6 +293,7 @@
         url = "volumes/%s/metadata" % str(volume_id)
         resp, body = self.post(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['metadata']
 
     def get_volume_metadata(self, volume_id):
@@ -275,6 +301,7 @@
         url = "volumes/%s/metadata" % str(volume_id)
         resp, body = self.get(url)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['metadata']
 
     def update_volume_metadata(self, volume_id, metadata):
@@ -283,6 +310,7 @@
         url = "volumes/%s/metadata" % str(volume_id)
         resp, body = self.put(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['metadata']
 
     def update_volume_metadata_item(self, volume_id, id, meta_item):
@@ -291,13 +319,14 @@
         url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
         resp, body = self.put(url, put_body)
         body = json.loads(body)
+        self.expected_success(200, resp.status)
         return resp, body['meta']
 
     def delete_volume_metadata_item(self, volume_id, id):
         """Delete metadata item for the volume."""
         url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
         resp, body = self.delete(url)
-        return resp, body
+        self.expected_success(200, resp.status)
 
 
 class VolumesClientJSON(BaseVolumesClientJSON):
diff --git a/tempest/services/volume/v2/json/qos_client.py b/tempest/services/volume/v2/json/qos_client.py
new file mode 100644
index 0000000..a734df8
--- /dev/null
+++ b/tempest/services/volume/v2/json/qos_client.py
@@ -0,0 +1,23 @@
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.services.volume.json import qos_client
+
+
+class QosSpecsV2ClientJSON(qos_client.BaseQosSpecsClientJSON):
+
+    def __init__(self, auth_provider):
+        super(QosSpecsV2ClientJSON, self).__init__(auth_provider)
+
+        self.api_version = "v2"
diff --git a/tempest/services/volume/v2/json/snapshots_client.py b/tempest/services/volume/v2/json/snapshots_client.py
new file mode 100644
index 0000000..553176b
--- /dev/null
+++ b/tempest/services/volume/v2/json/snapshots_client.py
@@ -0,0 +1,23 @@
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.services.volume.json import snapshots_client
+
+
+class SnapshotsV2ClientJSON(snapshots_client.BaseSnapshotsClientJSON):
+    """Client class to send CRUD Volume V2 API requests."""
+
+    def __init__(self, auth_provider):
+        super(SnapshotsV2ClientJSON, self).__init__(auth_provider)
+
+        self.api_version = "v2"
+        self.create_resp = 202
diff --git a/tempest/services/volume/v2/json/volumes_client.py b/tempest/services/volume/v2/json/volumes_client.py
index 1f16ead..ac4342e 100644
--- a/tempest/services/volume/v2/json/volumes_client.py
+++ b/tempest/services/volume/v2/json/volumes_client.py
@@ -25,3 +25,4 @@
         super(VolumesV2ClientJSON, self).__init__(auth_provider)
 
         self.api_version = "v2"
+        self.create_resp = 202
diff --git a/tempest/services/volume/v2/xml/snapshots_client.py b/tempest/services/volume/v2/xml/snapshots_client.py
new file mode 100644
index 0000000..b29d86c
--- /dev/null
+++ b/tempest/services/volume/v2/xml/snapshots_client.py
@@ -0,0 +1,23 @@
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.services.volume.xml import snapshots_client
+
+
+class SnapshotsV2ClientXML(snapshots_client.BaseSnapshotsClientXML):
+    """Client class to send CRUD Volume V2 API requests."""
+
+    def __init__(self, auth_provider):
+        super(SnapshotsV2ClientXML, self).__init__(auth_provider)
+
+        self.api_version = "v2"
+        self.create_resp = 202
diff --git a/tempest/services/volume/v2/xml/volumes_client.py b/tempest/services/volume/v2/xml/volumes_client.py
index c1bcf6e..b3133af 100644
--- a/tempest/services/volume/v2/xml/volumes_client.py
+++ b/tempest/services/volume/v2/xml/volumes_client.py
@@ -30,6 +30,7 @@
         super(VolumesV2ClientXML, self).__init__(auth_provider)
 
         self.api_version = "v2"
+        self.create_resp = 202
 
     def _parse_volume(self, body):
         vol = dict((attr, body.get(attr)) for attr in body.keys())
@@ -61,6 +62,7 @@
             volumes += [self._parse_volume(vol) for vol in list(body)]
         for v in volumes:
             v = self._check_if_bootable(v)
+        self.expected_success(200, resp.status)
         return resp, volumes
 
     def get_volume(self, volume_id):
@@ -69,4 +71,5 @@
         resp, body = self.get(url)
         body = self._parse_volume(etree.fromstring(body))
         body = self._check_if_bootable(body)
+        self.expected_success(200, resp.status)
         return resp, body
diff --git a/tempest/services/volume/xml/admin/volume_hosts_client.py b/tempest/services/volume/xml/admin/volume_hosts_client.py
index 967c7c2..98a7c58 100644
--- a/tempest/services/volume/xml/admin/volume_hosts_client.py
+++ b/tempest/services/volume/xml/admin/volume_hosts_client.py
@@ -69,5 +69,6 @@
             url += '?%s' % urllib.urlencode(params)
 
         resp, body = self.get(url)
+        self.expected_success(200, resp.status)
         body = self._parse_array(etree.fromstring(body))
         return resp, body
diff --git a/tempest/services/volume/xml/admin/volume_quotas_client.py b/tempest/services/volume/xml/admin/volume_quotas_client.py
index a38410b..acf9102 100644
--- a/tempest/services/volume/xml/admin/volume_quotas_client.py
+++ b/tempest/services/volume/xml/admin/volume_quotas_client.py
@@ -15,6 +15,7 @@
 #    under the License.
 
 import ast
+
 from lxml import etree
 
 from tempest.common import xml_utils as xml
@@ -47,6 +48,7 @@
         """List the quota set for a tenant."""
 
         resp, body = self.get_quota_set(tenant_id, params={'usage': True})
+        self.expected_success(200, resp.status)
         return resp, self._format_quota(body)
 
     def update_quota_set(self, tenant_id, gigabytes=None, volumes=None,
@@ -67,8 +69,10 @@
         resp, body = self.put('os-quota-sets/%s' % tenant_id,
                               str(xml.Document(element)))
         body = xml.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, self._format_quota(body)
 
     def delete_quota_set(self, tenant_id):
         """Delete the tenant's quota set."""
-        return self.delete('os-quota-sets/%s' % tenant_id)
+        resp, body = self.delete('os-quota-sets/%s' % tenant_id)
+        self.expected_success(200, resp.status)
diff --git a/tempest/services/volume/xml/admin/volume_services_client.py b/tempest/services/volume/xml/admin/volume_services_client.py
index 7bad16d..2ecb590 100644
--- a/tempest/services/volume/xml/admin/volume_services_client.py
+++ b/tempest/services/volume/xml/admin/volume_services_client.py
@@ -39,4 +39,5 @@
         resp, body = self.get(url)
         node = etree.fromstring(body)
         body = [xml_utils.xml_to_json(x) for x in node.getchildren()]
+        self.expected_success(200, resp.status)
         return resp, body
diff --git a/tempest/services/volume/xml/admin/volume_types_client.py b/tempest/services/volume/xml/admin/volume_types_client.py
index 90897ee..679d097 100644
--- a/tempest/services/volume/xml/admin/volume_types_client.py
+++ b/tempest/services/volume/xml/admin/volume_types_client.py
@@ -75,6 +75,7 @@
         if body is not None:
             volume_types += [self._parse_volume_type(vol)
                              for vol in list(body)]
+        self.expected_success(200, resp.status)
         return resp, volume_types
 
     def get_volume_type(self, type_id):
@@ -82,6 +83,7 @@
         url = "types/%s" % str(type_id)
         resp, body = self.get(url)
         body = etree.fromstring(body)
+        self.expected_success(200, resp.status)
         return resp, self._parse_volume_type(body)
 
     def create_volume_type(self, name, **kwargs):
@@ -107,11 +109,13 @@
 
         resp, body = self.post('types', str(common.Document(vol_type)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def delete_volume_type(self, type_id):
         """Deletes the Specified Volume_type."""
-        return self.delete("types/%s" % str(type_id))
+        resp, body = self.delete("types/%s" % str(type_id))
+        self.expected_success(202, resp.status)
 
     def list_volume_types_extra_specs(self, vol_type_id, params=None):
         """List all the volume_types extra specs created."""
@@ -126,6 +130,7 @@
         if body is not None:
             extra_specs += [self._parse_volume_type_extra_specs(spec)
                             for spec in list(body)]
+        self.expected_success(200, resp.status)
         return resp, extra_specs
 
     def get_volume_type_extra_specs(self, vol_type_id, extra_spec_name):
@@ -134,6 +139,7 @@
                                            str(extra_spec_name))
         resp, body = self.get(url)
         body = etree.fromstring(body)
+        self.expected_success(200, resp.status)
         return resp, self._parse_volume_type_extra_specs(body)
 
     def create_volume_type_extra_specs(self, vol_type_id, extra_spec):
@@ -158,12 +164,15 @@
 
         resp, body = self.post(url, str(common.Document(extra_specs)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def delete_volume_type_extra_specs(self, vol_id, extra_spec_name):
         """Deletes the Specified Volume_type extra spec."""
-        return self.delete("types/%s/extra_specs/%s" % ((str(vol_id)),
-                                                        str(extra_spec_name)))
+        resp, body = self.delete("types/%s/extra_specs/%s" % (
+            (str(vol_id)), str(extra_spec_name)))
+        self.expected_success(202, resp.status)
+        return resp, body
 
     def update_volume_type_extra_specs(self, vol_type_id, extra_spec_name,
                                        extra_spec):
@@ -187,6 +196,7 @@
 
         resp, body = self.put(url, str(common.Document(extra_specs)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def is_resource_deleted(self, id):
diff --git a/tempest/services/volume/xml/availability_zone_client.py b/tempest/services/volume/xml/availability_zone_client.py
index a883ef5..b956d3f 100644
--- a/tempest/services/volume/xml/availability_zone_client.py
+++ b/tempest/services/volume/xml/availability_zone_client.py
@@ -36,6 +36,7 @@
     def get_availability_zone_list(self):
         resp, body = self.get('os-availability-zone')
         availability_zone = self._parse_array(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, availability_zone
 
 
diff --git a/tempest/services/volume/xml/extensions_client.py b/tempest/services/volume/xml/extensions_client.py
index fe8b7cb..f2b2e02 100644
--- a/tempest/services/volume/xml/extensions_client.py
+++ b/tempest/services/volume/xml/extensions_client.py
@@ -39,6 +39,7 @@
         url = 'extensions'
         resp, body = self.get(url)
         body = self._parse_array(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
 
diff --git a/tempest/services/volume/xml/snapshots_client.py b/tempest/services/volume/xml/snapshots_client.py
index 4b1ba25..ce98eea 100644
--- a/tempest/services/volume/xml/snapshots_client.py
+++ b/tempest/services/volume/xml/snapshots_client.py
@@ -26,16 +26,17 @@
 LOG = logging.getLogger(__name__)
 
 
-class SnapshotsClientXML(rest_client.RestClient):
-    """Client class to send CRUD Volume API requests."""
+class BaseSnapshotsClientXML(rest_client.RestClient):
+    """Base Client class to send CRUD Volume API requests."""
     TYPE = "xml"
 
     def __init__(self, auth_provider):
-        super(SnapshotsClientXML, self).__init__(auth_provider)
+        super(BaseSnapshotsClientXML, self).__init__(auth_provider)
 
         self.service = CONF.volume.catalog_type
         self.build_interval = CONF.volume.build_interval
         self.build_timeout = CONF.volume.build_timeout
+        self.create_resp = 200
 
     def list_snapshots(self, params=None):
         """List all snapshot."""
@@ -49,6 +50,7 @@
         snapshots = []
         for snap in body:
             snapshots.append(common.xml_to_json(snap))
+        self.expected_success(200, resp.status)
         return resp, snapshots
 
     def list_snapshots_with_detail(self, params=None):
@@ -63,6 +65,7 @@
         snapshots = []
         for snap in body:
             snapshots.append(common.xml_to_json(snap))
+        self.expected_success(200, resp.status)
         return resp, snapshots
 
     def get_snapshot(self, snapshot_id):
@@ -70,6 +73,7 @@
         url = "snapshots/%s" % str(snapshot_id)
         resp, body = self.get(url)
         body = etree.fromstring(body)
+        self.expected_success(200, resp.status)
         return resp, common.xml_to_json(body)
 
     def create_snapshot(self, volume_id, **kwargs):
@@ -87,6 +91,7 @@
         resp, body = self.post('snapshots',
                                str(common.Document(snapshot)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(self.create_resp, resp.status)
         return resp, body
 
     def update_snapshot(self, snapshot_id, **kwargs):
@@ -96,6 +101,7 @@
         resp, body = self.put('snapshots/%s' % snapshot_id,
                               str(common.Document(put_body)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     # NOTE(afazekas): just for the wait function
@@ -137,7 +143,9 @@
 
     def delete_snapshot(self, snapshot_id):
         """Delete Snapshot."""
-        return self.delete("snapshots/%s" % str(snapshot_id))
+        resp, body = self.delete("snapshots/%s" % str(snapshot_id))
+        self.expected_success(202, resp.status)
+        return resp, body
 
     def is_resource_deleted(self, id):
         try:
@@ -153,6 +160,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def update_snapshot_status(self, snapshot_id, status, progress):
@@ -165,6 +173,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def _metadata_body(self, meta):
@@ -188,6 +197,7 @@
         resp, body = self.post('snapshots/%s/metadata' % snapshot_id,
                                str(common.Document(post_body)))
         body = self._parse_key_value(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def get_snapshot_metadata(self, snapshot_id):
@@ -195,6 +205,7 @@
         url = "snapshots/%s/metadata" % str(snapshot_id)
         resp, body = self.get(url)
         body = self._parse_key_value(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def update_snapshot_metadata(self, snapshot_id, metadata):
@@ -203,6 +214,7 @@
         url = "snapshots/%s/metadata" % str(snapshot_id)
         resp, body = self.put(url, str(common.Document(put_body)))
         body = self._parse_key_value(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def update_snapshot_metadata_item(self, snapshot_id, id, meta_item):
@@ -213,12 +225,15 @@
         url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
         resp, body = self.put(url, str(common.Document(put_body)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def delete_snapshot_metadata_item(self, snapshot_id, id):
         """Delete metadata item for the snapshot."""
         url = "snapshots/%s/metadata/%s" % (str(snapshot_id), str(id))
-        return self.delete(url)
+        resp, body = self.delete(url)
+        self.expected_success(200, resp.status)
+        return resp, body
 
     def force_delete_snapshot(self, snapshot_id):
         """Force Delete Snapshot."""
@@ -227,4 +242,9 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
+
+
+class SnapshotsClientXML(BaseSnapshotsClientXML):
+    """Client class to send CRUD Volume V1 API requests."""
diff --git a/tempest/services/volume/xml/volumes_client.py b/tempest/services/volume/xml/volumes_client.py
index 2d4a9e9..a8c1ae5 100644
--- a/tempest/services/volume/xml/volumes_client.py
+++ b/tempest/services/volume/xml/volumes_client.py
@@ -15,9 +15,9 @@
 
 import time
 import urllib
+from xml.sax import saxutils
 
 from lxml import etree
-from xml.sax import saxutils
 
 from tempest.common import rest_client
 from tempest.common import xml_utils as common
@@ -43,6 +43,7 @@
         self.service = CONF.volume.catalog_type
         self.build_interval = CONF.compute.build_interval
         self.build_timeout = CONF.compute.build_timeout
+        self.create_resp = 200
 
     def _translate_attributes_to_json(self, volume):
         volume_host_attr = '{' + VOLUME_HOST_NS + '}host'
@@ -114,6 +115,7 @@
         volumes = []
         if body is not None:
             volumes += [self._parse_volume(vol) for vol in list(body)]
+        self.expected_success(200, resp.status)
         return resp, volumes
 
     def list_volumes_with_detail(self, params=None):
@@ -128,6 +130,7 @@
         volumes = []
         if body is not None:
             volumes += [self._parse_volume(vol) for vol in list(body)]
+        self.expected_success(200, resp.status)
         return resp, volumes
 
     def get_volume(self, volume_id):
@@ -135,6 +138,7 @@
         url = "volumes/%s" % str(volume_id)
         resp, body = self.get(url)
         body = self._parse_volume(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def create_volume(self, size=None, **kwargs):
@@ -176,6 +180,7 @@
 
         resp, body = self.post('volumes', str(common.Document(volume)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(self.create_resp, resp.status)
         return resp, body
 
     def update_volume(self, volume_id, **kwargs):
@@ -185,11 +190,14 @@
         resp, body = self.put('volumes/%s' % volume_id,
                               str(common.Document(put_body)))
         body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def delete_volume(self, volume_id):
         """Deletes the Specified Volume."""
-        return self.delete("volumes/%s" % str(volume_id))
+        resp, body = self.delete("volumes/%s" % str(volume_id))
+        self.expected_success(202, resp.status)
+        return resp, body
 
     def wait_for_volume_status(self, volume_id, status):
         """Waits for a Volume to reach a given status."""
@@ -228,6 +236,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def detach_volume(self, volume_id):
@@ -237,6 +246,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def upload_volume(self, volume_id, image_name, disk_format):
@@ -247,6 +257,7 @@
         url = 'volumes/%s/action' % str(volume_id)
         resp, body = self.post(url, str(common.Document(post_body)))
         volume = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, volume
 
     def extend_volume(self, volume_id, extend_size):
@@ -257,6 +268,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def reset_volume_status(self, volume_id, status):
@@ -268,6 +280,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def volume_begin_detaching(self, volume_id):
@@ -295,6 +308,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def unreserve_volume(self, volume_id):
@@ -304,6 +318,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def create_volume_transfer(self, vol_id, display_name=None):
@@ -315,6 +330,7 @@
         resp, body = self.post('os-volume-transfer',
                                str(common.Document(post_body)))
         volume = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, volume
 
     def get_volume_transfer(self, transfer_id):
@@ -322,6 +338,7 @@
         url = "os-volume-transfer/%s" % str(transfer_id)
         resp, body = self.get(url)
         volume = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, volume
 
     def list_volume_transfers(self, params=None):
@@ -335,6 +352,7 @@
         volumes = []
         if body is not None:
             volumes += [self._parse_volume_transfer(vol) for vol in list(body)]
+        self.expected_success(200, resp.status)
         return resp, volumes
 
     def _parse_volume_transfer(self, body):
@@ -348,7 +366,9 @@
 
     def delete_volume_transfer(self, transfer_id):
         """Delete a volume transfer."""
-        return self.delete("os-volume-transfer/%s" % str(transfer_id))
+        resp, body = self.delete("os-volume-transfer/%s" % str(transfer_id))
+        self.expected_success(202, resp.status)
+        return resp, body
 
     def accept_volume_transfer(self, transfer_id, transfer_auth_key):
         """Accept a volume transfer."""
@@ -356,6 +376,7 @@
         url = 'os-volume-transfer/%s/accept' % transfer_id
         resp, body = self.post(url, str(common.Document(post_body)))
         volume = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, volume
 
     def update_volume_readonly(self, volume_id, readonly):
@@ -366,6 +387,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def force_delete_volume(self, volume_id):
@@ -375,6 +397,7 @@
         resp, body = self.post(url, str(common.Document(post_body)))
         if body:
             body = common.xml_to_json(etree.fromstring(body))
+        self.expected_success(202, resp.status)
         return resp, body
 
     def _metadata_body(self, meta):
@@ -399,6 +422,7 @@
         resp, body = self.post('volumes/%s/metadata' % volume_id,
                                str(common.Document(post_body)))
         body = self._parse_key_value(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def get_volume_metadata(self, volume_id):
@@ -406,6 +430,7 @@
         url = "volumes/%s/metadata" % str(volume_id)
         resp, body = self.get(url)
         body = self._parse_key_value(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def update_volume_metadata(self, volume_id, metadata):
@@ -414,6 +439,7 @@
         url = "volumes/%s/metadata" % str(volume_id)
         resp, body = self.put(url, str(common.Document(put_body)))
         body = self._parse_key_value(etree.fromstring(body))
+        self.expected_success(200, resp.status)
         return resp, body
 
     def update_volume_metadata_item(self, volume_id, id, meta_item):
@@ -423,13 +449,16 @@
             put_body.append(common.Text(v))
         url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
         resp, body = self.put(url, str(common.Document(put_body)))
+        self.expected_success(200, resp.status)
         body = common.xml_to_json(etree.fromstring(body))
         return resp, body
 
     def delete_volume_metadata_item(self, volume_id, id):
         """Delete metadata item for the volume."""
         url = "volumes/%s/metadata/%s" % (str(volume_id), str(id))
-        return self.delete(url)
+        resp, body = self.delete(url)
+        self.expected_success(200, resp.status)
+        return resp, body
 
 
 class VolumesClientXML(BaseVolumesClientXML):
diff --git a/tempest/stress/actions/ssh_floating.py b/tempest/stress/actions/ssh_floating.py
index 478cd07..d78112c 100644
--- a/tempest/stress/actions/ssh_floating.py
+++ b/tempest/stress/actions/ssh_floating.py
@@ -30,7 +30,7 @@
         proc = subprocess.Popen(cmd,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE)
-        proc.wait()
+        proc.communicate()
         success = proc.returncode == 0
         return success
 
diff --git a/tempest/stress/actions/volume_attach_delete.py b/tempest/stress/actions/volume_attach_delete.py
index b438f52..e0238d3 100644
--- a/tempest/stress/actions/volume_attach_delete.py
+++ b/tempest/stress/actions/volume_attach_delete.py
@@ -48,7 +48,7 @@
 
         # Step 3: attach volume to vm
         self.logger.info("attach volume (%s) to vm %s" %
-                        (volume['id'], server_id))
+                         (volume['id'], server_id))
         resp, body = self.manager.servers_client.attach_volume(server_id,
                                                                volume['id'],
                                                                '/dev/vdc')
diff --git a/tempest/stress/actions/volume_attach_verify.py b/tempest/stress/actions/volume_attach_verify.py
index a3ca0b7..0d3cb23 100644
--- a/tempest/stress/actions/volume_attach_verify.py
+++ b/tempest/stress/actions/volume_attach_verify.py
@@ -192,7 +192,7 @@
             self._create_volume()
         servers_client = self.manager.servers_client
         self.logger.info("attach volume (%s) to vm %s" %
-                        (self.volume['id'], self.server_id))
+                         (self.volume['id'], self.server_id))
         resp, body = servers_client.attach_volume(self.server_id,
                                                   self.volume['id'],
                                                   self.part_name)
diff --git a/tempest/stress/stressaction.py b/tempest/stress/stressaction.py
index f6770ab..286e022 100644
--- a/tempest/stress/stressaction.py
+++ b/tempest/stress/stressaction.py
@@ -12,12 +12,16 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import abc
 import signal
 import sys
 
+import six
+
 from tempest.openstack.common import log as logging
 
 
+@six.add_metaclass(abc.ABCMeta)
 class StressAction(object):
 
     def __init__(self, manager, max_runs=None, stop_on_error=False):
@@ -83,6 +87,7 @@
                     self.tearDown()
                     sys.exit(1)
 
+    @abc.abstractmethod
     def run(self):
         """This method is where the stress test code runs."""
-        raise NotImplemented()
+        return
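
With the abc.ABCMeta metaclass applied above, run() is now abstract and every
concrete stress action must implement it. A minimal sketch of such a subclass
(the class name and body are illustrative only)::

    from tempest.stress import stressaction


    class NoopAction(stressaction.StressAction):
        """Illustrative stress action; a real one would exercise an API."""

        def run(self):
            # Called once per iteration of the stress loop; do one unit of
            # work here and raise on failure so the runner can record it.
            pass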
diff --git a/tempest/test.py b/tempest/test.py
index 95ae23f..4a22b1b 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -69,14 +69,20 @@
 def safe_setup(f):
     """A decorator used to wrap the setUpClass for cleaning up resources
        when setUpClass failed.
-    """
 
+    Deprecated, see:
+    http://specs.openstack.org/openstack/qa-specs/specs/resource-cleanup.html
+    """
+    @functools.wraps(f)
     def decorator(cls):
             try:
                 f(cls)
             except Exception as se:
                 etype, value, trace = sys.exc_info()
-                LOG.exception("setUpClass failed: %s" % se)
+                if etype is cls.skipException:
+                    LOG.info("setUpClass skipped: %s:" % se)
+                else:
+                    LOG.exception("setUpClass failed: %s" % se)
                 try:
                     cls.tearDownClass()
                 except Exception as te:
@@ -89,12 +95,7 @@
     return decorator
 
 
-def services(*args, **kwargs):
-    """A decorator used to set an attr for each service used in a test case
-
-    This decorator applies a testtools attr for each service that gets
-    exercised by a test case.
-    """
+def get_service_list():
     service_list = {
         'compute': CONF.service_available.nova,
         'image': CONF.service_available.glance,
@@ -107,19 +108,32 @@
         'identity': True,
         'object_storage': CONF.service_available.swift,
         'dashboard': CONF.service_available.horizon,
-        'ceilometer': CONF.service_available.ceilometer,
+        'telemetry': CONF.service_available.ceilometer,
         'data_processing': CONF.service_available.sahara
     }
+    return service_list
 
+
+def services(*args, **kwargs):
+    """A decorator used to set an attr for each service used in a test case
+
+    This decorator applies a testtools attr for each service that gets
+    exercised by a test case.
+    """
     def decorator(f):
+        services = ['compute', 'image', 'baremetal', 'volume', 'orchestration',
+                    'network', 'identity', 'object_storage', 'dashboard',
+                    'telemetry', 'data_processing']
         for service in args:
-            if service not in service_list:
-                raise exceptions.InvalidServiceTag('%s is not a valid service'
-                                                   % service)
+            if service not in services:
+                raise exceptions.InvalidServiceTag('%s is not a valid '
+                                                   'service' % service)
         attr(type=list(args))(f)
 
         @functools.wraps(f)
         def wrapper(self, *func_args, **func_kwargs):
+            service_list = get_service_list()
+
             for service in args:
                 if not service_list[service]:
                     msg = 'Skipped because the %s service is not available' % (
@@ -268,15 +282,51 @@
 
     @classmethod
     def setUpClass(cls):
+        # It should never be overridden by descendants
         if hasattr(super(BaseTestCase, cls), 'setUpClass'):
             super(BaseTestCase, cls).setUpClass()
         cls.setUpClassCalled = True
+        # No test resource is allocated until here
+        try:
+            # TODO(andreaf) Split-up resource_setup in stages:
+            # skip checks, pre-hook, credentials, clients, resources, post-hook
+            cls.resource_setup()
+        except Exception:
+            etype, value, trace = sys.exc_info()
+            LOG.info("%s in resource setup. Invoking tearDownClass." % etype)
+            # Catch any exception in tearDown so we can re-raise the original
+            # exception at the end
+            try:
+                cls.tearDownClass()
+            except Exception as te:
+                LOG.exception("tearDownClass failed: %s" % te)
+            try:
+                raise etype(value), None, trace
+            finally:
+                del trace  # for avoiding circular refs
 
     @classmethod
     def tearDownClass(cls):
         at_exit_set.discard(cls)
+        # It should never be overridden by descendants
         if hasattr(super(BaseTestCase, cls), 'tearDownClass'):
             super(BaseTestCase, cls).tearDownClass()
+        try:
+            cls.resource_cleanup()
+        finally:
+            cls.clear_isolated_creds()
+
+    @classmethod
+    def resource_setup(cls):
+        """Class level setup steps for test cases.
+        Recommended order: skip checks, credentials, clients, resources.
+        """
+        pass
+
+    @classmethod
+    def resource_cleanup(cls):
+        """Class level resource cleanup for test cases. """
+        pass
 
     def setUp(self):
         super(BaseTestCase, self).setUp()
@@ -344,7 +394,7 @@
         """
         Clears isolated creds if set
         """
-        if getattr(cls, 'isolated_creds'):
+        if hasattr(cls, 'isolated_creds'):
             cls.isolated_creds.clear_isolated_creds()
 
     @classmethod
@@ -399,20 +449,6 @@
         cls.admin_client = os_admin.negative_client
 
     @staticmethod
-    def load_schema(file):
-        """
-        Loads a schema from a file on a specified location.
-
-        :param file: the file name
-        """
-        # NOTE(mkoderer): must be extended for xml support
-        fn = os.path.join(
-            os.path.abspath(os.path.dirname(os.path.dirname(__file__))),
-            "etc", "schemas", file)
-        LOG.debug("Open schema file: %s" % (fn))
-        return json.load(open(fn))
-
-    @staticmethod
     def load_tests(*args):
         """
         Wrapper for testscenarios to set the mandatory scenarios variable
@@ -425,17 +461,21 @@
             standard_tests, module, loader = args
         for test in testtools.iterate_tests(standard_tests):
             schema_file = getattr(test, '_schema_file', None)
+            schema = getattr(test, '_schema', None)
             if schema_file is not None:
                 setattr(test, 'scenarios',
                         NegativeAutoTest.generate_scenario(schema_file))
+            elif schema is not None:
+                setattr(test, 'scenarios',
+                        NegativeAutoTest.generate_scenario(schema))
         return testscenarios.load_tests_apply_scenarios(*args)
 
     @staticmethod
-    def generate_scenario(description_file):
+    def generate_scenario(description):
         """
         Generates the test scenario list for a given description.
 
-        :param description: A dictionary with the following entries:
+        :param description: A file or dictionary with the following entries:
             name (required) name for the api
             http-method (required) one of HEAD,GET,PUT,POST,PATCH,DELETE
             url (required) the url to be appended to the catalog url with '%s'
@@ -451,7 +491,6 @@
                 the data is used to generate query strings appended to the url,
                 otherwise for the body of the http call.
         """
-        description = NegativeAutoTest.load_schema(description_file)
         LOG.debug(description)
         generator = importutils.import_class(
             CONF.negative.test_generator)()
@@ -481,13 +520,14 @@
         LOG.debug(scenario_list)
         return scenario_list
 
-    def execute(self, description_file):
+    def execute(self, description):
         """
         Execute a http call on an api that are expected to
         result in client errors. First it uses invalid resources that are part
         of the url, and then invalid data for queries and http request bodies.
 
-        :param description: A dictionary with the following entries:
+        :param description: A json file or dictionary with the following
+        entries:
             name (required) name for the api
             http-method (required) one of HEAD,GET,PUT,POST,PATCH,DELETE
             url (required) the url to be appended to the catalog url with '%s'
@@ -504,7 +544,6 @@
                 otherwise for the body of the http call.
 
         """
-        description = NegativeAutoTest.load_schema(description_file)
         LOG.info("Executing %s" % description["name"])
         LOG.debug(description)
         method = description["http-method"]
@@ -594,7 +633,8 @@
     """
     @attr(type=['negative', 'gate'])
     def generic_test(self):
-        self.execute(self._schema_file)
+        if hasattr(self, '_schema'):
+            self.execute(self._schema)
 
     cn = klass.__name__
     cn = cn.replace('JSON', '')
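
With load_schema removed, generate_scenario() and execute() now take the
description directly, and negative test classes can set a _schema dictionary
instead of a _schema_file path. A sketch of such a class, using the same shape
of description exercised by the unit tests below (the class name is
illustrative, and a real test class would also configure its service clients)::

    import tempest.test as test


    @test.SimpleNegativeAutoTest
    class ListFlavorsNegativeTest(test.NegativeAutoTest):
        _schema = {
            "name": "list-flavors-with-detail",
            "http-method": "GET",
            "url": "flavors/detail",
            "json-schema": {
                "type": "object",
                "properties": {
                    "minRam": {"type": "integer"},
                    "minDisk": {"type": "integer"},
                },
            },
            "resources": ["flavor", "volume", "image"],
        }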
diff --git a/tempest/tests/base.py b/tempest/tests/base.py
index f4df3b9..27eb2c4 100644
--- a/tempest/tests/base.py
+++ b/tempest/tests/base.py
@@ -13,7 +13,6 @@
 #    under the License.
 
 import mock
-
 from oslotest import base
 from oslotest import moxstubout
 
diff --git a/tempest/tests/cli/test_cli.py b/tempest/tests/cli/test_cli.py
new file mode 100644
index 0000000..1fd5ccb
--- /dev/null
+++ b/tempest/tests/cli/test_cli.py
@@ -0,0 +1,59 @@
+# Copyright 2014 IBM Corp.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import mock
+import testtools
+
+from tempest import cli
+from tempest import exceptions
+from tempest.tests import base
+
+
+class TestMinClientVersion(base.TestCase):
+    """Tests for the min_client_version decorator.
+    """
+
+    def _test_min_version(self, required, installed, expect_skip):
+
+        @cli.min_client_version(client='nova', version=required)
+        def fake(self, expect_skip):
+            if expect_skip:
+                # If we got here, the decorator didn't raise a skipException as
+                # expected so we need to fail.
+                self.fail('Should not have gotten past the decorator.')
+
+        with mock.patch.object(cli, 'execute',
+                               return_value=installed) as mock_cmd:
+            if expect_skip:
+                self.assertRaises(testtools.TestCase.skipException, fake,
+                                  self, expect_skip)
+            else:
+                fake(self, expect_skip)
+            mock_cmd.assert_called_once_with('nova', '', params='--version',
+                                             merge_stderr=True)
+
+    def test_min_client_version(self):
+        # required, installed, expect_skip
+        cases = (('2.17.0', '2.17.0', False),
+                 ('2.17.0', '2.18.0', False),
+                 ('2.18.0', '2.17.0', True))
+
+        for case in cases:
+            self._test_min_version(*case)
+
+    @mock.patch.object(cli, 'execute', return_value=' ')
+    def test_check_client_version_empty_output(self, mock_execute):
+        # Tests that an exception is raised if the command output is empty.
+        self.assertRaises(exceptions.TempestException,
+                          cli.check_client_version, 'nova', '2.18.0')
diff --git a/tempest/tests/cmd/test_verify_tempest_config.py b/tempest/tests/cmd/test_verify_tempest_config.py
index d0140dd..a28684e 100644
--- a/tempest/tests/cmd/test_verify_tempest_config.py
+++ b/tempest/tests/cmd/test_verify_tempest_config.py
@@ -238,8 +238,8 @@
                                                           'neutron', {})
         self.assertIn('neutron', results)
         self.assertIn('extensions', results['neutron'])
-        self.assertEqual(['fake1', 'fake2', 'not_fake'],
-                         results['neutron']['extensions'])
+        self.assertEqual(sorted(['fake1', 'fake2', 'not_fake']),
+                         sorted(results['neutron']['extensions']))
 
     def test_verify_extensions_cinder(self):
         def fake_list_extensions():
@@ -277,8 +277,8 @@
                                                           'cinder', {})
         self.assertIn('cinder', results)
         self.assertIn('extensions', results['cinder'])
-        self.assertEqual(['fake1', 'fake2', 'not_fake'],
-                         results['cinder']['extensions'])
+        self.assertEqual(sorted(['fake1', 'fake2', 'not_fake']),
+                         sorted(results['cinder']['extensions']))
 
     def test_verify_extensions_nova(self):
         def fake_list_extensions():
@@ -316,8 +316,8 @@
                                                           'nova', {})
         self.assertIn('nova', results)
         self.assertIn('extensions', results['nova'])
-        self.assertEqual(['fake1', 'fake2', 'not_fake'],
-                         results['nova']['extensions'])
+        self.assertEqual(sorted(['fake1', 'fake2', 'not_fake']),
+                         sorted(results['nova']['extensions']))
 
     def test_verify_extensions_nova_v3(self):
         def fake_list_extensions():
@@ -355,8 +355,8 @@
                                                           'nova_v3', {})
         self.assertIn('nova_v3', results)
         self.assertIn('extensions', results['nova_v3'])
-        self.assertEqual(['fake1', 'fake2', 'not_fake'],
-                         results['nova_v3']['extensions'])
+        self.assertEqual(sorted(['fake1', 'fake2', 'not_fake']),
+                         sorted(results['nova_v3']['extensions']))
 
     def test_verify_extensions_swift(self):
         def fake_list_extensions():
@@ -395,5 +395,5 @@
                                                           'swift', {})
         self.assertIn('swift', results)
         self.assertIn('extensions', results['swift'])
-        self.assertEqual(['not_fake', 'fake1', 'fake2'],
-                         results['swift']['extensions'])
+        self.assertEqual(sorted(['not_fake', 'fake1', 'fake2']),
+                         sorted(results['swift']['extensions']))
diff --git a/tempest/tests/common/test_accounts.py b/tempest/tests/common/test_accounts.py
new file mode 100644
index 0000000..cf7ce65
--- /dev/null
+++ b/tempest/tests/common/test_accounts.py
@@ -0,0 +1,234 @@
+# Copyright 2014 Hewlett-Packard Development Company, L.P.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import hashlib
+import os
+import tempfile
+
+import mock
+from oslo.config import cfg
+from oslotest import mockpatch
+
+from tempest import auth
+from tempest.common import accounts
+from tempest.common import http
+from tempest import config
+from tempest import exceptions
+from tempest.tests import base
+from tempest.tests import fake_config
+from tempest.tests import fake_identity
+
+
+class TestAccount(base.TestCase):
+
+    def setUp(self):
+        super(TestAccount, self).setUp()
+        self.useFixture(fake_config.ConfigFixture())
+        self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
+        self.temp_dir = tempfile.mkdtemp()
+        cfg.CONF.set_default('lock_path', self.temp_dir)
+        self.addCleanup(os.rmdir, self.temp_dir)
+        self.test_accounts = [
+            {'username': 'test_user1', 'tenant_name': 'test_tenant1',
+             'password': 'p'},
+            {'username': 'test_user2', 'tenant_name': 'test_tenant2',
+             'password': 'p'},
+            {'username': 'test_user3', 'tenant_name': 'test_tenant3',
+             'password': 'p'},
+            {'username': 'test_user4', 'tenant_name': 'test_tenant4',
+             'password': 'p'},
+            {'username': 'test_user5', 'tenant_name': 'test_tenant5',
+             'password': 'p'},
+            {'username': 'test_user6', 'tenant_name': 'test_tenant6',
+             'password': 'p'},
+        ]
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.accounts.read_accounts_yaml',
+            return_value=self.test_accounts))
+        cfg.CONF.set_default('test_accounts_file', '', group='auth')
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
+
+    def _get_hash_list(self, accounts_list):
+        hash_list = []
+        for account in accounts_list:
+            hash = hashlib.md5()
+            hash.update(str(account))
+            hash_list.append(hash.hexdigest())
+        return hash_list
+
+    def test_get_hash(self):
+        self.stubs.Set(http.ClosingHttp, 'request',
+                       fake_identity._fake_v2_response)
+        test_account_class = accounts.Accounts('test_name')
+        hash_list = self._get_hash_list(self.test_accounts)
+        test_cred_dict = self.test_accounts[3]
+        test_creds = auth.get_credentials(**test_cred_dict)
+        results = test_account_class.get_hash(test_creds)
+        self.assertEqual(hash_list[3], results)
+
+    def test_get_hash_dict(self):
+        test_account_class = accounts.Accounts('test_name')
+        hash_dict = test_account_class.get_hash_dict(self.test_accounts)
+        hash_list = self._get_hash_list(self.test_accounts)
+        for hash in hash_list:
+            self.assertIn(hash, hash_dict.keys())
+            self.assertIn(hash_dict[hash], self.test_accounts)
+
+    def test_create_hash_file_previous_file(self):
+        # Emulate the lock existing on the filesystem
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
+        with mock.patch('__builtin__.open', mock.mock_open(), create=True):
+            test_account_class = accounts.Accounts('test_name')
+            res = test_account_class._create_hash_file('12345')
+        self.assertFalse(res, "_create_hash_file should return False if the "
+                         "pseudo-lock file already exists")
+
+    def test_create_hash_file_no_previous_file(self):
+        # Emulate the lock not existing on the filesystem
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=False))
+        with mock.patch('__builtin__.open', mock.mock_open(), create=True):
+            test_account_class = accounts.Accounts('test_name')
+            res = test_account_class._create_hash_file('12345')
+        self.assertTrue(res, "_create_hash_file should return True if the "
+                        "pseudo-lock doesn't already exist")
+
+    @mock.patch('tempest.openstack.common.lockutils.lock')
+    def test_get_free_hash_no_previous_accounts(self, lock_mock):
+        # Emulate no pre-existing lock
+        self.useFixture(mockpatch.Patch('os.path.isdir', return_value=False))
+        hash_list = self._get_hash_list(self.test_accounts)
+        mkdir_mock = self.useFixture(mockpatch.Patch('os.mkdir'))
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=False))
+        test_account_class = accounts.Accounts('test_name')
+        with mock.patch('__builtin__.open', mock.mock_open(),
+                        create=True) as open_mock:
+            test_account_class._get_free_hash(hash_list)
+            lock_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+                                     hash_list[0])
+            open_mock.assert_called_once_with(lock_path, 'w')
+        mkdir_path = os.path.join(accounts.CONF.lock_path, 'test_accounts')
+        mkdir_mock.mock.assert_called_once_with(mkdir_path)
+
+    @mock.patch('tempest.openstack.common.lockutils.lock')
+    def test_get_free_hash_no_free_accounts(self, lock_mock):
+        hash_list = self._get_hash_list(self.test_accounts)
+        # Emulate pre-existing lock dir
+        self.useFixture(mockpatch.Patch('os.path.isdir', return_value=True))
+        # Emulate that all locks in the list are in use
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
+        test_account_class = accounts.Accounts('test_name')
+        self.assertRaises(exceptions.InvalidConfiguration,
+                          test_account_class._get_free_hash, hash_list)
+
+    @mock.patch('tempest.openstack.common.lockutils.lock')
+    def test_get_free_hash_some_in_use_accounts(self, lock_mock):
+        # Emulate pre-existing lock dir
+        self.useFixture(mockpatch.Patch('os.path.isdir', return_value=True))
+        hash_list = self._get_hash_list(self.test_accounts)
+        test_account_class = accounts.Accounts('test_name')
+
+        def _fake_is_file(path):
+            # Fake isfile() to return that the path exists unless a specific
+            # hash is in the path
+            if hash_list[3] in path:
+                return False
+            return True
+
+        self.stubs.Set(os.path, 'isfile', _fake_is_file)
+        with mock.patch('__builtin__.open', mock.mock_open(),
+                        create=True) as open_mock:
+            test_account_class._get_free_hash(hash_list)
+            lock_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+                                     hash_list[3])
+            open_mock.assert_called_once_with(lock_path, 'w')
+
+    @mock.patch('tempest.openstack.common.lockutils.lock')
+    def test_remove_hash_last_account(self, lock_mock):
+        hash_list = self._get_hash_list(self.test_accounts)
+        # Pretend the pseudo-lock is there
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
+        # Pretend the lock dir is empty
+        self.useFixture(mockpatch.Patch('os.listdir', return_value=[]))
+        test_account_class = accounts.Accounts('test_name')
+        remove_mock = self.useFixture(mockpatch.Patch('os.remove'))
+        rmdir_mock = self.useFixture(mockpatch.Patch('os.rmdir'))
+        test_account_class.remove_hash(hash_list[2])
+        hash_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+                                 hash_list[2])
+        lock_path = os.path.join(accounts.CONF.lock_path, 'test_accounts')
+        remove_mock.mock.assert_called_once_with(hash_path)
+        rmdir_mock.mock.assert_called_once_with(lock_path)
+
+    @mock.patch('tempest.openstack.common.lockutils.lock')
+    def test_remove_hash_not_last_account(self, lock_mock):
+        hash_list = self._get_hash_list(self.test_accounts)
+        # Pretend the pseudo-lock is there
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
+        # Pretend the lock dir still contains other hash locks
+        self.useFixture(mockpatch.Patch('os.listdir', return_value=[
+            hash_list[1], hash_list[4]]))
+        test_account_class = accounts.Accounts('test_name')
+        remove_mock = self.useFixture(mockpatch.Patch('os.remove'))
+        rmdir_mock = self.useFixture(mockpatch.Patch('os.rmdir'))
+        test_account_class.remove_hash(hash_list[2])
+        hash_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+                                 hash_list[2])
+        remove_mock.mock.assert_called_once_with(hash_path)
+        rmdir_mock.mock.assert_not_called()
+
+    def test_is_multi_user(self):
+        test_accounts_class = accounts.Accounts('test_name')
+        self.assertTrue(test_accounts_class.is_multi_user())
+
+    def test_is_not_multi_user(self):
+        self.test_accounts = [self.test_accounts[0]]
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.accounts.read_accounts_yaml',
+            return_value=self.test_accounts))
+        test_accounts_class = accounts.Accounts('test_name')
+        self.assertFalse(test_accounts_class.is_multi_user())
+
+
+class TestNotLockingAccount(base.TestCase):
+
+    def setUp(self):
+        super(TestNotLockingAccount, self).setUp()
+        self.useFixture(fake_config.ConfigFixture())
+        self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
+        self.temp_dir = tempfile.mkdtemp()
+        cfg.CONF.set_default('lock_path', self.temp_dir)
+        self.addCleanup(os.rmdir, self.temp_dir)
+        self.test_accounts = [
+            {'username': 'test_user1', 'tenant_name': 'test_tenant1',
+             'password': 'p'},
+            {'username': 'test_user2', 'tenant_name': 'test_tenant2',
+             'password': 'p'},
+            {'username': 'test_user3', 'tenant_name': 'test_tenant3',
+             'password': 'p'},
+        ]
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.accounts.read_accounts_yaml',
+            return_value=self.test_accounts))
+        cfg.CONF.set_default('test_accounts_file', '', group='auth')
+        self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
+
+    def test_get_creds(self):
+        test_accounts_class = accounts.NotLockingAccounts('test_name')
+        for i in xrange(len(self.test_accounts)):
+            creds = test_accounts_class.get_creds(i)
+            msg = "Empty credentials returned for ID %s" % str(i)
+            self.assertIsNotNone(creds, msg)
+        self.assertRaises(exceptions.InvalidConfiguration,
+                          test_accounts_class.get_creds,
+                          id=len(self.test_accounts))
diff --git a/tempest/tests/common/test_custom_matchers.py b/tempest/tests/common/test_custom_matchers.py
new file mode 100644
index 0000000..57217e3
--- /dev/null
+++ b/tempest/tests/common/test_custom_matchers.py
@@ -0,0 +1,66 @@
+# Copyright 2014 Hewlett-Packard Development Company, L.P.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.common import custom_matchers
+from tempest.tests import base
+
+from testtools.tests.matchers import helpers
+
+
+class TestMatchesDictExceptForKeys(base.TestCase,
+                                   helpers.TestMatchersInterface):
+
+    matches_matcher = custom_matchers.MatchesDictExceptForKeys(
+        {'a': 1, 'b': 2, 'c': 3, 'd': 4}, ['c', 'd'])
+    matches_matches = [
+        {'a': 1, 'b': 2, 'c': 3, 'd': 4},
+        {'a': 1, 'b': 2, 'c': 5},
+        {'a': 1, 'b': 2},
+    ]
+    matches_mismatches = [
+        {},
+        {'foo': 1},
+        {'a': 1, 'b': 3},
+        {'a': 1, 'b': 2, 'foo': 1},
+        {'a': 1, 'b': None, 'foo': 1},
+    ]
+
+    str_examples = []
+    describe_examples = [
+        ("Only in expected:\n"
+         "  {'a': 1, 'b': 2}\n",
+         {},
+         matches_matcher),
+        ("Only in expected:\n"
+         "  {'a': 1, 'b': 2}\n"
+         "Only in actual:\n"
+         "  {'foo': 1}\n",
+         {'foo': 1},
+         matches_matcher),
+        ("Differences:\n"
+         "  b: expected 2, actual 3\n",
+         {'a': 1, 'b': 3},
+         matches_matcher),
+        ("Only in actual:\n"
+         "  {'foo': 1}\n",
+         {'a': 1, 'b': 2, 'foo': 1},
+         matches_matcher),
+        ("Only in actual:\n"
+         "  {'foo': 1}\n"
+         "Differences:\n"
+         "  b: expected 2, actual None\n",
+         {'a': 1, 'b': None, 'foo': 1},
+         matches_matcher)
+    ]
\ No newline at end of file
diff --git a/tempest/tests/fake_config.py b/tempest/tests/fake_config.py
index 536cbcf..9e56916 100644
--- a/tempest/tests/fake_config.py
+++ b/tempest/tests/fake_config.py
@@ -61,3 +61,4 @@
     def __init__(self, parse_conf=True, config_path=None):
         cfg.CONF([], default_config_files=[])
         self._set_attrs()
+        self.lock_path = cfg.CONF.lock_path
diff --git a/tempest/tests/fake_http.py b/tempest/tests/fake_http.py
index ce2b2c0..7d77484 100644
--- a/tempest/tests/fake_http.py
+++ b/tempest/tests/fake_http.py
@@ -13,6 +13,7 @@
 #    under the License.
 
 import copy
+
 import httplib2
 
 
diff --git a/tempest/tests/fake_identity.py b/tempest/tests/fake_identity.py
index 1900fc9..97098e1 100644
--- a/tempest/tests/fake_identity.py
+++ b/tempest/tests/fake_identity.py
@@ -13,9 +13,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import json
 
 import httplib2
-import json
 
 
 TOKEN = "fake_token"
diff --git a/tempest/tests/negative/test_negative_auto_test.py b/tempest/tests/negative/test_negative_auto_test.py
index 7a1909a..dddd083 100644
--- a/tempest/tests/negative/test_negative_auto_test.py
+++ b/tempest/tests/negative/test_negative_auto_test.py
@@ -13,8 +13,6 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import mock
-
 from tempest import config
 import tempest.test as test
 from tempest.tests import base
@@ -30,9 +28,9 @@
                        "http-method": "GET",
                        "url": "flavors/detail",
                        "json-schema": {"type": "object",
-                                      "properties":
-                                      {"minRam": {"type": "integer"},
-                                       "minDisk": {"type": "integer"}}
+                                       "properties":
+                                       {"minRam": {"type": "integer"},
+                                        "minDisk": {"type": "integer"}}
                                        },
                        "resources": ["flavor", "volume", "image"]
                        }
@@ -56,11 +54,9 @@
         for entry in entries:
             self.assertIsNotNone(entry[1]['resource'])
 
-    @mock.patch('tempest.test.NegativeAutoTest.load_schema')
-    def test_generate_scenario(self, open_mock):
-        open_mock.return_value = self.fake_input_desc
+    def test_generate_scenario(self):
         scenarios = test.NegativeAutoTest.\
-            generate_scenario(None)
+            generate_scenario(self.fake_input_desc)
 
         self.assertIsInstance(scenarios, list)
         for scenario in scenarios:
diff --git a/tempest/tests/test_commands.py b/tempest/tests/test_commands.py
index 1e2925b..2379741 100644
--- a/tempest/tests/test_commands.py
+++ b/tempest/tests/test_commands.py
@@ -12,9 +12,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import mock
 import subprocess
 
+import mock
+
 from tempest.common import commands
 from tempest.tests import base
 
diff --git a/tempest/tests/test_credentials.py b/tempest/tests/test_credentials.py
index 9da5f92..ea576c4 100644
--- a/tempest/tests/test_credentials.py
+++ b/tempest/tests/test_credentials.py
@@ -128,12 +128,22 @@
         creds = self._get_credentials()
         self.assertTrue(creds.is_valid())
 
-    def test_is_not_valid(self):
+    def _test_is_not_valid(self, ignore_key):
         creds = self._get_credentials()
         for attr in self.attributes.keys():
+            if attr == ignore_key:
+                continue
+            temp_attr = getattr(creds, attr)
             delattr(creds, attr)
             self.assertFalse(creds.is_valid(),
                              "Credentials should be invalid without %s" % attr)
+            setattr(creds, attr, temp_attr)
+
+    def test_is_not_valid(self):
+        # NOTE(mtreinish): A KeystoneV2 credential object is valid without
+        # a tenant_name. So skip that check. See tempest.auth for the valid
+        # credential requirements
+        self._test_is_not_valid('tenant_name')
 
     def test_default(self):
         self.useFixture(fixtures.LockFixture('auth_version'))
@@ -205,6 +215,12 @@
                     config_value = 'fake_' + attr
                 self.assertEqual(getattr(creds, attr), config_value)
 
+    def test_is_not_valid(self):
+        # NOTE(mtreinish) For a Keystone V3 credential object a project name
+        # is not required to be valid, so we skip that check. See tempest.auth
+        # for the valid credential requirements
+        self._test_is_not_valid('project_name')
+
     def test_synced_attributes(self):
         attributes = self.attributes
         # Create V3 credentials with tenant instead of project, and user_domain
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index 6b678f7..12104ec 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -237,7 +237,7 @@
 class TestSimpleNegativeDecorator(BaseDecoratorsTest):
     @test.SimpleNegativeAutoTest
     class FakeNegativeJSONTest(test.NegativeAutoTest):
-        _schema_file = 'fake/schemas/file.json'
+        _schema = {}
 
     def test_testfunc_exist(self):
         self.assertIn("test_fake_negative", dir(self.FakeNegativeJSONTest))
@@ -247,4 +247,4 @@
         obj = self.FakeNegativeJSONTest("test_fake_negative")
         self.assertIn("test_fake_negative", dir(obj))
         obj.test_fake_negative()
-        mock.assert_called_once_with(self.FakeNegativeJSONTest._schema_file)
+        mock.assert_called_once_with(self.FakeNegativeJSONTest._schema)
diff --git a/tempest/tests/test_glance_http.py b/tempest/tests/test_glance_http.py
index 88b8129..9a6c9de 100644
--- a/tempest/tests/test_glance_http.py
+++ b/tempest/tests/test_glance_http.py
@@ -15,9 +15,10 @@
 
 import httplib
 import json
+import socket
+
 import mock
 import six
-import socket
 
 from tempest.common import glance_http
 from tempest import exceptions
@@ -41,17 +42,17 @@
 
         self.fake_auth.base_url = mock.MagicMock(return_value=self.endpoint)
 
-        self.useFixture(mockpatch.PatchObject(httplib.HTTPConnection,
-                        'request',
-                        side_effect=self.fake_http.request(self.endpoint)[1]))
+        self.useFixture(mockpatch.PatchObject(
+            httplib.HTTPConnection,
+            'request',
+            side_effect=self.fake_http.request(self.endpoint)[1]))
         self.client = glance_http.HTTPClient(self.fake_auth, {})
 
     def _set_response_fixture(self, header, status, resp_body):
         resp = fake_http.fake_httplib(header, status=status,
                                       body=six.StringIO(resp_body))
         self.useFixture(mockpatch.PatchObject(httplib.HTTPConnection,
-                        'getresponse',
-                        return_value=resp))
+                        'getresponse', return_value=resp))
         return resp
 
     def test_json_request_without_content_type_header_in_response(self):
diff --git a/tempest/tests/test_hacking.py b/tempest/tests/test_hacking.py
index 52fdf7e..37ad18e 100644
--- a/tempest/tests/test_hacking.py
+++ b/tempest/tests/test_hacking.py
@@ -100,10 +100,15 @@
         self.assertFalse(checks.service_tags_not_in_module_path(
             "@test.services('compute')", './tempest/api/image/fake_test.py'))
 
-    def test_no_official_client_manager_in_api_tests(self):
-        self.assertTrue(checks.no_official_client_manager_in_api_tests(
-            "cls.official_client = clients.OfficialClientManager(credentials)",
-            "tempest/api/compute/base.py"))
-        self.assertFalse(checks.no_official_client_manager_in_api_tests(
-            "cls.official_client = clients.OfficialClientManager(credentials)",
-            "tempest/scenario/fake_test.py"))
+    def test_no_mutable_default_args(self):
+        self.assertEqual(1, len(list(checks.no_mutable_default_args(
+            " def function1(para={}):"))))
+
+        self.assertEqual(1, len(list(checks.no_mutable_default_args(
+            "def function2(para1, para2, para3=[])"))))
+
+        self.assertEqual(0, len(list(checks.no_mutable_default_args(
+            "defined = []"))))
+
+        self.assertEqual(0, len(list(checks.no_mutable_default_args(
+            "defined, undefined = [], {}"))))
diff --git a/tempest/tests/test_rest_client.py b/tempest/tests/test_rest_client.py
index a351bd5..5f55ca2 100644
--- a/tempest/tests/test_rest_client.py
+++ b/tempest/tests/test_rest_client.py
@@ -12,9 +12,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import httplib2
 import json
 
+import httplib2
 from oslotest import mockpatch
 
 from tempest.common import rest_client
diff --git a/tempest/tests/test_tenant_isolation.py b/tempest/tests/test_tenant_isolation.py
index bbc3d15..27c45c2 100644
--- a/tempest/tests/test_tenant_isolation.py
+++ b/tempest/tests/test_tenant_isolation.py
@@ -12,12 +12,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import keystoneclient.v2_0.client as keystoneclient
 import mock
-import neutronclient.v2_0.client as neutronclient
 from oslo.config import cfg
 
-from tempest import clients
 from tempest.common import http
 from tempest.common import isolated_creds
 from tempest import config
@@ -52,24 +49,6 @@
         self.assertTrue(isinstance(iso_creds.network_admin_client,
                                    json_network_client.NetworkClientJSON))
 
-    def test_official_client(self):
-        self.useFixture(mockpatch.PatchObject(keystoneclient.Client,
-                                              'authenticate'))
-        self.useFixture(mockpatch.PatchObject(clients.OfficialClientManager,
-                                              '_get_image_client'))
-        self.useFixture(mockpatch.PatchObject(clients.OfficialClientManager,
-                                              '_get_object_storage_client'))
-        self.useFixture(mockpatch.PatchObject(clients.OfficialClientManager,
-                                              '_get_orchestration_client'))
-        self.useFixture(mockpatch.PatchObject(clients.OfficialClientManager,
-                                              '_get_ceilometer_client'))
-        iso_creds = isolated_creds.IsolatedCreds('test class',
-                                                 tempest_client=False)
-        self.assertTrue(isinstance(iso_creds.identity_admin_client,
-                                   keystoneclient.Client))
-        self.assertTrue(isinstance(iso_creds.network_admin_client,
-                                   neutronclient.Client))
-
     def test_tempest_client_xml(self):
         iso_creds = isolated_creds.IsolatedCreds('test class', interface='xml')
         self.assertEqual(iso_creds.interface, 'xml')
@@ -272,6 +251,13 @@
 
     @mock.patch('tempest.common.rest_client.RestClient')
     def test_network_cleanup(self, MockRestClient):
+        def side_effect(**args):
+            return ({'status': 200},
+                    {"security_groups": [{"tenant_id": args['tenant_id'],
+                                          "name": args['name'],
+                                          "description": args['name'],
+                                          "security_group_rules": [],
+                                          "id": "sg-%s" % args['tenant_id']}]})
         iso_creds = isolated_creds.IsolatedCreds('test class',
                                                  password='fake_password')
         # Create primary tenant and network
@@ -335,11 +321,29 @@
         remove_router_interface_mock = self.patch(
             'tempest.services.network.json.network_client.NetworkClientJSON.'
             'remove_router_interface_with_subnet_id')
+        return_values = ({'status': 200}, {'ports': []})
         port_list_mock = mock.patch.object(iso_creds.network_admin_client,
-                                           'list_ports', return_value=(
-                                           {'status': 200}, {'ports': []}))
+                                           'list_ports',
+                                           return_value=return_values)
+
         port_list_mock.start()
+        secgroup_list_mock = mock.patch.object(iso_creds.network_admin_client,
+                                               'list_security_groups',
+                                               side_effect=side_effect)
+        secgroup_list_mock.start()
+
+        return_values = (fake_http.fake_httplib({}, status=204), {})
+        remove_secgroup_mock = self.patch(
+            'tempest.services.network.network_client_base.'
+            'NetworkClientBase.delete', return_value=return_values)
         iso_creds.clear_isolated_creds()
+        # Verify default security group delete
+        calls = remove_secgroup_mock.mock_calls
+        self.assertEqual(len(calls), 3)
+        args = map(lambda x: x[1][0], calls)
+        self.assertIn('v2.0/security-groups/sg-1234', args)
+        self.assertIn('v2.0/security-groups/sg-12345', args)
+        self.assertIn('v2.0/security-groups/sg-123456', args)
         # Verify remove router interface calls
         calls = remove_router_interface_mock.mock_calls
         self.assertEqual(len(calls), 3)
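
The new ``side_effect`` helper above is the standard ``mock`` idiom for deriving a fake response from the keyword arguments of each call, so every isolated tenant gets a matching fake security group back. A minimal, self-contained sketch of the same idiom (the client and response shapes here are illustrative, not Tempest's real network client API)::

    import mock

    def fake_list_security_groups(**kwargs):
        # Build the fake list response from whatever tenant_id the code
        # under test passes in, mirroring the side_effect in the diff above.
        return ({'status': 200},
                {'security_groups': [{'tenant_id': kwargs['tenant_id'],
                                      'id': 'sg-%s' % kwargs['tenant_id']}]})

    client = mock.Mock()
    client.list_security_groups.side_effect = fake_list_security_groups
    resp, body = client.list_security_groups(tenant_id='1234')
    assert body['security_groups'][0]['id'] == 'sg-1234'
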
diff --git a/tempest/tests/test_wrappers.py b/tempest/tests/test_wrappers.py
index bba4012..0fd41f9 100644
--- a/tempest/tests/test_wrappers.py
+++ b/tempest/tests/test_wrappers.py
@@ -62,21 +62,18 @@
         p = subprocess.Popen(
             "bash %s" % cmd, shell=True,
             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-        # wait in the general case is dangerous, however the amount of
-        # data coming back on those pipes is small enough it shouldn't be
-        # a problem.
-        p.wait()
+        out, err = p.communicate()
 
         self.assertEqual(
             p.returncode, expected,
-            "Stdout: %s; Stderr: %s" % (p.stdout, p.stderr))
+            "Stdout: %s; Stderr: %s" % (out, err))
 
     def test_pretty_tox(self):
         # Git init is required for the pbr testr command. pbr requires a git
         # version or an sdist to work. so make the test directory a git repo
         # too.
         subprocess.call(['git', 'init'], stderr=DEVNULL)
-        self.assertRunExit('pretty_tox.sh tests.passing', 0)
+        self.assertRunExit('pretty_tox.sh passing', 0)
 
     def test_pretty_tox_fails(self):
         # Git init is required for the pbr testr command. pbr requires a git
@@ -86,7 +83,7 @@
         self.assertRunExit('pretty_tox.sh', 1)
 
     def test_pretty_tox_serial(self):
-        self.assertRunExit('pretty_tox_serial.sh tests.passing', 0)
+        self.assertRunExit('pretty_tox_serial.sh passing', 0)
 
     def test_pretty_tox_serial_fails(self):
         self.assertRunExit('pretty_tox_serial.sh', 1)
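
Replacing ``p.wait()`` with ``p.communicate()`` does two things: it drains both pipes before waiting, so a chatty script can no longer fill the pipe buffer and deadlock, and it gives the assertion message the actual output rather than the file objects the old code interpolated. A minimal sketch of the pattern::

    import subprocess

    p = subprocess.Popen('echo hello; echo oops >&2', shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # communicate() reads stdout/stderr to EOF and then waits for the
    # process, so no amount of output can block the child.
    out, err = p.communicate()
    print(p.returncode, out.strip(), err.strip())
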
diff --git a/tempest/tests/test_xml_utils.py b/tempest/tests/test_xml_utils.py
new file mode 100644
index 0000000..53e31c4
--- /dev/null
+++ b/tempest/tests/test_xml_utils.py
@@ -0,0 +1,35 @@
+#
+# Copyright 2014 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.common import xml_utils
+from tempest.tests import base
+
+
+class TestDocumentXML(base.TestCase):
+    def test_xml_document_ordering_version_encoding(self):
+        expected = '<?xml version="1.0" encoding="UTF-8"?>'
+        xml_out = str(xml_utils.Document())
+        self.assertEqual(expected, xml_out.strip())
+
+        xml_out = str(xml_utils.Document(encoding='UTF-8', version='1.0'))
+        self.assertEqual(expected, xml_out.strip())
+
+        xml_out = str(xml_utils.Document(version='1.0', encoding='UTF-8'))
+        self.assertEqual(expected, xml_out.strip())
+
+    def test_xml_document_additional_attrs(self):
+        expected = '<?xml version="1.0" encoding="UTF-8" foo="bar"?>'
+        xml_out = str(xml_utils.Document(foo='bar'))
+        self.assertEqual(expected, xml_out.strip())
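
These new unit tests only pin down the XML declaration emitted by ``xml_utils.Document``: ``version`` and ``encoding`` always render first and in that order regardless of how the caller passes them, and any extra keyword arguments become additional declaration attributes. A rough stand-in for the behaviour being asserted (illustrative only, not Tempest's implementation)::

    class Document(object):
        def __init__(self, version='1.0', encoding='UTF-8', **attrs):
            self.version = version
            self.encoding = encoding
            self.attrs = attrs

        def __str__(self):
            # version and encoding come first, in a fixed order; any extra
            # attributes are appended after them.
            extra = ''.join(' %s="%s"' % item
                            for item in sorted(self.attrs.items()))
            return '<?xml version="%s" encoding="%s"%s?>\n' % (
                self.version, self.encoding, extra)
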
diff --git a/tempest/thirdparty/boto/test.py b/tempest/thirdparty/boto/test.py
index d3cbc4b..3496dce 100644
--- a/tempest/thirdparty/boto/test.py
+++ b/tempest/thirdparty/boto/test.py
@@ -17,7 +17,6 @@
 import logging as orig_logging
 import os
 import re
-import six
 import urlparse
 
 import boto
@@ -25,6 +24,7 @@
 from boto import exception
 from boto import s3
 import keystoneclient.exceptions
+import six
 
 import tempest.clients
 from tempest.common.utils import file_utils
@@ -108,8 +108,8 @@
     CODE_RE = '.*'  # regexp makes sense in group match
 
     def match(self, exc):
-        """:returns: Retruns with an error string if not matches,
-               returns with None when matches.
+        """:returns: Returns with an error string if it does not match,
+               returns with None when it matches.
         """
         if not isinstance(exc, exception.BotoServerError):
             return "%r not an BotoServerError instance" % exc
@@ -187,7 +187,7 @@
         if len(args):
             string += ", "
     string += ", ".join("=".join(map(str, (key, value)))
-              for (key, value) in kwargs.items())
+                        for (key, value) in kwargs.items())
     return string + ")"
 
 
@@ -195,8 +195,8 @@
     """Recommended to use as base class for boto related test."""
 
     @classmethod
-    def setUpClass(cls):
-        super(BotoTestCase, cls).setUpClass()
+    def resource_setup(cls):
+        super(BotoTestCase, cls).resource_setup()
         cls.conclusion = decision_maker()
         cls.os = cls.get_client_manager()
         # The trash contains cleanup functions and parameters in tuples
@@ -245,7 +245,7 @@
             raise self.failureException, "BotoServerError not raised"
 
     @classmethod
-    def tearDownClass(cls):
+    def resource_cleanup(cls):
         """Calls the callables added by addResourceCleanUp,
         when you overwrite this function don't forget to call this too.
         """
@@ -264,7 +264,7 @@
             finally:
                 del cls._resource_trash_bin[key]
         cls.clear_isolated_creds()
-        super(BotoTestCase, cls).tearDownClass()
+        super(BotoTestCase, cls).resource_cleanup()
         # NOTE(afazekas): let the super be called even on exceptions.
         # The real exceptions are already logged; if the super throws another,
         # it does not cause hidden issues.
@@ -485,7 +485,7 @@
 
     @classmethod
     def destroy_volume_wait(cls, volume):
-        """Delete volume, tryies to detach first.
+        """Delete volume, tries to detach first.
            Use just for teardown!
         """
         exc_num = 0
@@ -518,7 +518,7 @@
 
     @classmethod
     def destroy_snapshot_wait(cls, snapshot):
-        """delete snaphot, wait until not exists."""
+        """delete snapshot, wait until it ceases to exist."""
         snapshot.delete()
 
         def _update():
@@ -592,83 +592,83 @@
 
 
 for code in (('AccessDenied', 403),
-            ('AccountProblem', 403),
-            ('AmbiguousGrantByEmailAddress', 400),
-            ('BadDigest', 400),
-            ('BucketAlreadyExists', 409),
-            ('BucketAlreadyOwnedByYou', 409),
-            ('BucketNotEmpty', 409),
-            ('CredentialsNotSupported', 400),
-            ('CrossLocationLoggingProhibited', 403),
-            ('EntityTooSmall', 400),
-            ('EntityTooLarge', 400),
-            ('ExpiredToken', 400),
-            ('IllegalVersioningConfigurationException', 400),
-            ('IncompleteBody', 400),
-            ('IncorrectNumberOfFilesInPostRequest', 400),
-            ('InlineDataTooLarge', 400),
-            ('InvalidAccessKeyId', 403),
+             ('AccountProblem', 403),
+             ('AmbiguousGrantByEmailAddress', 400),
+             ('BadDigest', 400),
+             ('BucketAlreadyExists', 409),
+             ('BucketAlreadyOwnedByYou', 409),
+             ('BucketNotEmpty', 409),
+             ('CredentialsNotSupported', 400),
+             ('CrossLocationLoggingProhibited', 403),
+             ('EntityTooSmall', 400),
+             ('EntityTooLarge', 400),
+             ('ExpiredToken', 400),
+             ('IllegalVersioningConfigurationException', 400),
+             ('IncompleteBody', 400),
+             ('IncorrectNumberOfFilesInPostRequest', 400),
+             ('InlineDataTooLarge', 400),
+             ('InvalidAccessKeyId', 403),
              'InvalidAddressingHeader',
-            ('InvalidArgument', 400),
-            ('InvalidBucketName', 400),
-            ('InvalidBucketState', 409),
-            ('InvalidDigest', 400),
-            ('InvalidLocationConstraint', 400),
-            ('InvalidPart', 400),
-            ('InvalidPartOrder', 400),
-            ('InvalidPayer', 403),
-            ('InvalidPolicyDocument', 400),
-            ('InvalidRange', 416),
-            ('InvalidRequest', 400),
-            ('InvalidSecurity', 403),
-            ('InvalidSOAPRequest', 400),
-            ('InvalidStorageClass', 400),
-            ('InvalidTargetBucketForLogging', 400),
-            ('InvalidToken', 400),
-            ('InvalidURI', 400),
-            ('KeyTooLong', 400),
-            ('MalformedACLError', 400),
-            ('MalformedPOSTRequest', 400),
-            ('MalformedXML', 400),
-            ('MaxMessageLengthExceeded', 400),
-            ('MaxPostPreDataLengthExceededError', 400),
-            ('MetadataTooLarge', 400),
-            ('MethodNotAllowed', 405),
-            ('MissingAttachment'),
-            ('MissingContentLength', 411),
-            ('MissingRequestBodyError', 400),
-            ('MissingSecurityElement', 400),
-            ('MissingSecurityHeader', 400),
-            ('NoLoggingStatusForKey', 400),
-            ('NoSuchBucket', 404),
-            ('NoSuchKey', 404),
-            ('NoSuchLifecycleConfiguration', 404),
-            ('NoSuchUpload', 404),
-            ('NoSuchVersion', 404),
-            ('NotSignedUp', 403),
-            ('NotSuchBucketPolicy', 404),
-            ('OperationAborted', 409),
-            ('PermanentRedirect', 301),
-            ('PreconditionFailed', 412),
-            ('Redirect', 307),
-            ('RequestIsNotMultiPartContent', 400),
-            ('RequestTimeout', 400),
-            ('RequestTimeTooSkewed', 403),
-            ('RequestTorrentOfBucketError', 400),
-            ('SignatureDoesNotMatch', 403),
-            ('TemporaryRedirect', 307),
-            ('TokenRefreshRequired', 400),
-            ('TooManyBuckets', 400),
-            ('UnexpectedContent', 400),
-            ('UnresolvableGrantByEmailAddress', 400),
-            ('UserKeyMustBeSpecified', 400)):
+             ('InvalidArgument', 400),
+             ('InvalidBucketName', 400),
+             ('InvalidBucketState', 409),
+             ('InvalidDigest', 400),
+             ('InvalidLocationConstraint', 400),
+             ('InvalidPart', 400),
+             ('InvalidPartOrder', 400),
+             ('InvalidPayer', 403),
+             ('InvalidPolicyDocument', 400),
+             ('InvalidRange', 416),
+             ('InvalidRequest', 400),
+             ('InvalidSecurity', 403),
+             ('InvalidSOAPRequest', 400),
+             ('InvalidStorageClass', 400),
+             ('InvalidTargetBucketForLogging', 400),
+             ('InvalidToken', 400),
+             ('InvalidURI', 400),
+             ('KeyTooLong', 400),
+             ('MalformedACLError', 400),
+             ('MalformedPOSTRequest', 400),
+             ('MalformedXML', 400),
+             ('MaxMessageLengthExceeded', 400),
+             ('MaxPostPreDataLengthExceededError', 400),
+             ('MetadataTooLarge', 400),
+             ('MethodNotAllowed', 405),
+             ('MissingAttachment'),
+             ('MissingContentLength', 411),
+             ('MissingRequestBodyError', 400),
+             ('MissingSecurityElement', 400),
+             ('MissingSecurityHeader', 400),
+             ('NoLoggingStatusForKey', 400),
+             ('NoSuchBucket', 404),
+             ('NoSuchKey', 404),
+             ('NoSuchLifecycleConfiguration', 404),
+             ('NoSuchUpload', 404),
+             ('NoSuchVersion', 404),
+             ('NotSignedUp', 403),
+             ('NotSuchBucketPolicy', 404),
+             ('OperationAborted', 409),
+             ('PermanentRedirect', 301),
+             ('PreconditionFailed', 412),
+             ('Redirect', 307),
+             ('RequestIsNotMultiPartContent', 400),
+             ('RequestTimeout', 400),
+             ('RequestTimeTooSkewed', 403),
+             ('RequestTorrentOfBucketError', 400),
+             ('SignatureDoesNotMatch', 403),
+             ('TemporaryRedirect', 307),
+             ('TokenRefreshRequired', 400),
+             ('TooManyBuckets', 400),
+             ('UnexpectedContent', 400),
+             ('UnresolvableGrantByEmailAddress', 400),
+             ('UserKeyMustBeSpecified', 400)):
     _add_matcher_class(BotoTestCase.s3_error_code.client,
                        code, base=ClientError)
 
 
 for code in (('InternalError', 500),
-            ('NotImplemented', 501),
-            ('ServiceUnavailable', 503),
-            ('SlowDown', 503)):
+             ('NotImplemented', 501),
+             ('ServiceUnavailable', 503),
+             ('SlowDown', 503)):
     _add_matcher_class(BotoTestCase.s3_error_code.server,
                        code, base=ServerError)
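
Every boto test class below follows the same mechanical conversion from ``setUpClass``/``tearDownClass`` to ``resource_setup``/``resource_cleanup``; the base class now drives class-level resource handling through these hooks instead of the unittest ones. The pattern, condensed from the changes in this diff (``MyBotoTest`` is a placeholder name)::

    class MyBotoTest(boto_test.BotoTestCase):

        @classmethod
        def resource_setup(cls):
            # Chain up first: the base class creates the client manager and
            # the trash bin consumed by addResourceCleanUp().
            super(MyBotoTest, cls).resource_setup()
            cls.client = cls.os.ec2api_client

        @classmethod
        def resource_cleanup(cls):
            # Release anything created here, then chain up last so the base
            # class runs the queued cleanups and clears isolated creds.
            super(MyBotoTest, cls).resource_cleanup()
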
diff --git a/tempest/thirdparty/boto/test_ec2_instance_run.py b/tempest/thirdparty/boto/test_ec2_instance_run.py
index 2c68d6b..f3f11fd 100644
--- a/tempest/thirdparty/boto/test_ec2_instance_run.py
+++ b/tempest/thirdparty/boto/test_ec2_instance_run.py
@@ -13,8 +13,6 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from boto import exception
-
 from tempest.common.utils import data_utils
 from tempest.common.utils.linux import remote_client
 from tempest import config
@@ -32,8 +30,8 @@
 class InstanceRunTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(InstanceRunTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(InstanceRunTest, cls).resource_setup()
         if not cls.conclusion['A_I_IMAGES_READY']:
             raise cls.skipException("".join(("EC2 ", cls.__name__,
                                     ": requires ami/aki/ari manifest")))
@@ -82,6 +80,13 @@
                 raise exceptions.EC2RegisterImageException(
                     image_id=image["image_id"])
 
+    def _terminate_reservation(self, reservation, rcuk):
+        for instance in reservation.instances:
+            instance.terminate()
+        for instance in reservation.instances:
+            self.assertInstanceStateWait(instance, '_GONE')
+        self.cancelResourceCleanUp(rcuk)
+
     def test_run_idempotent_instances(self):
         # EC2 run instances idempotently
 
@@ -96,11 +101,6 @@
                                            reservation)
             return (reservation, rcuk)
 
-        def _terminate_reservation(reservation, rcuk):
-            for instance in reservation.instances:
-                instance.terminate()
-            self.cancelResourceCleanUp(rcuk)
-
         reservation_1, rcuk_1 = _run_instance('token_1')
         reservation_2, rcuk_2 = _run_instance('token_2')
         reservation_1a, rcuk_1a = _run_instance('token_1')
@@ -116,8 +116,8 @@
         # handled by rcuk1
         self.cancelResourceCleanUp(rcuk_1a)
 
-        _terminate_reservation(reservation_1, rcuk_1)
-        _terminate_reservation(reservation_2, rcuk_2)
+        self._terminate_reservation(reservation_1, rcuk_1)
+        self._terminate_reservation(reservation_2, rcuk_2)
 
     def test_run_stop_terminate_instance(self):
         # EC2 run, stop and terminate instance
@@ -139,9 +139,7 @@
             if instance.state != "stopped":
                 self.assertInstanceStateWait(instance, "stopped")
 
-        for instance in reservation.instances:
-            instance.terminate()
-        self.cancelResourceCleanUp(rcuk)
+        self._terminate_reservation(reservation, rcuk)
 
     def test_run_stop_terminate_instance_with_tags(self):
         # EC2 run, stop and terminate instance with tags
@@ -188,9 +186,7 @@
             if instance.state != "stopped":
                 self.assertInstanceStateWait(instance, "stopped")
 
-        for instance in reservation.instances:
-            instance.terminate()
-        self.cancelResourceCleanUp(rcuk)
+        self._terminate_reservation(reservation, rcuk)
 
     def test_run_terminate_instance(self):
         # EC2 run, terminate immediately
@@ -202,18 +198,7 @@
 
         for instance in reservation.instances:
             instance.terminate()
-        try:
-            instance.update(validate=True)
-        except ValueError:
-            pass
-        except exception.EC2ResponseError as exc:
-            if self.ec2_error_code.\
-                client.InvalidInstanceID.NotFound.match(exc) is None:
-                pass
-            else:
-                raise
-        else:
-            self.assertNotEqual(instance.state, "running")
+        self.assertInstanceStateWait(instance, '_GONE')
 
     def test_compute_with_volumes(self):
         # EC2 1. integration test (not strict)
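
``_terminate_reservation`` now also waits for each instance to reach the ``'_GONE'`` state before cancelling the cleanup handle, where the removed inline loops only issued the terminate call. The same wait-until-gone idea, written as a generic polling helper rather than Tempest's actual ``assertInstanceStateWait`` (names and exceptions here are illustrative)::

    import time

    def wait_until_gone(fetch, timeout=60, interval=2):
        """Poll fetch() until it raises LookupError, i.e. the resource is gone."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                fetch()
            except LookupError:
                return
            time.sleep(interval)
        raise AssertionError('resource still present after %s seconds' % timeout)
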
diff --git a/tempest/thirdparty/boto/test_ec2_keys.py b/tempest/thirdparty/boto/test_ec2_keys.py
index 698e3e1..c3e1e2a 100644
--- a/tempest/thirdparty/boto/test_ec2_keys.py
+++ b/tempest/thirdparty/boto/test_ec2_keys.py
@@ -26,8 +26,8 @@
 class EC2KeysTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(EC2KeysTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(EC2KeysTest, cls).resource_setup()
         cls.client = cls.os.ec2api_client
         cls.ec = cls.ec2_error_code
 
diff --git a/tempest/thirdparty/boto/test_ec2_network.py b/tempest/thirdparty/boto/test_ec2_network.py
index 792dde3..a75fb7b 100644
--- a/tempest/thirdparty/boto/test_ec2_network.py
+++ b/tempest/thirdparty/boto/test_ec2_network.py
@@ -20,8 +20,8 @@
 class EC2NetworkTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(EC2NetworkTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(EC2NetworkTest, cls).resource_setup()
         cls.client = cls.os.ec2api_client
 
     # Note(afazekas): these tests are for things doable without an instance
diff --git a/tempest/thirdparty/boto/test_ec2_security_groups.py b/tempest/thirdparty/boto/test_ec2_security_groups.py
index 7d9bdab..fb3d32b 100644
--- a/tempest/thirdparty/boto/test_ec2_security_groups.py
+++ b/tempest/thirdparty/boto/test_ec2_security_groups.py
@@ -20,8 +20,8 @@
 class EC2SecurityGroupTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(EC2SecurityGroupTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(EC2SecurityGroupTest, cls).resource_setup()
         cls.client = cls.os.ec2api_client
 
     def test_create_authorize_security_group(self):
diff --git a/tempest/thirdparty/boto/test_ec2_volumes.py b/tempest/thirdparty/boto/test_ec2_volumes.py
index b50c6b0..9cee8a4 100644
--- a/tempest/thirdparty/boto/test_ec2_volumes.py
+++ b/tempest/thirdparty/boto/test_ec2_volumes.py
@@ -29,8 +29,8 @@
 class EC2VolumesTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(EC2VolumesTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(EC2VolumesTest, cls).resource_setup()
 
         if not CONF.service_available.cinder:
             skip_msg = ("%s skipped as Cinder is not available" % cls.__name__)
diff --git a/tempest/thirdparty/boto/test_s3_buckets.py b/tempest/thirdparty/boto/test_s3_buckets.py
index 3a8dc89..342fc0e 100644
--- a/tempest/thirdparty/boto/test_s3_buckets.py
+++ b/tempest/thirdparty/boto/test_s3_buckets.py
@@ -14,18 +14,16 @@
 #    under the License.
 
 from tempest.common.utils import data_utils
-from tempest import test
 from tempest.thirdparty.boto import test as boto_test
 
 
 class S3BucketsTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(S3BucketsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(S3BucketsTest, cls).resource_setup()
         cls.client = cls.os.s3_client
 
-    @test.skip_because(bug="1076965")
     def test_create_and_get_delete_bucket(self):
         # S3 Create, get and delete bucket
         bucket_name = data_utils.rand_name("s3bucket-")
diff --git a/tempest/thirdparty/boto/test_s3_ec2_images.py b/tempest/thirdparty/boto/test_s3_ec2_images.py
index 389e25c..f5dec95 100644
--- a/tempest/thirdparty/boto/test_s3_ec2_images.py
+++ b/tempest/thirdparty/boto/test_s3_ec2_images.py
@@ -26,8 +26,8 @@
 class S3ImagesTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(S3ImagesTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(S3ImagesTest, cls).resource_setup()
         if not cls.conclusion['A_I_IMAGES_READY']:
             raise cls.skipException("".join(("EC2 ", cls.__name__,
                                     ": requires ami/aki/ari manifest")))
diff --git a/tempest/thirdparty/boto/test_s3_objects.py b/tempest/thirdparty/boto/test_s3_objects.py
index db3c1cf..43774c2 100644
--- a/tempest/thirdparty/boto/test_s3_objects.py
+++ b/tempest/thirdparty/boto/test_s3_objects.py
@@ -24,8 +24,8 @@
 class S3BucketsTest(boto_test.BotoTestCase):
 
     @classmethod
-    def setUpClass(cls):
-        super(S3BucketsTest, cls).setUpClass()
+    def resource_setup(cls):
+        super(S3BucketsTest, cls).resource_setup()
         cls.client = cls.os.s3_client
 
     def test_create_get_delete_object(self):
diff --git a/test-requirements.txt b/test-requirements.txt
index cd8154b..ba70259 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,10 +1,13 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
 hacking>=0.9.2,<0.10
 # needed for doc build
 sphinx>=1.1.2,!=1.2.0,<1.3
 python-subunit>=0.0.18
-oslosphinx
+oslosphinx>=2.2.0  # Apache-2.0
 mox>=0.5.3
 mock>=1.0
 coverage>=3.6
-oslotest
-stevedore>=0.14
+oslotest>=1.1.0  # Apache-2.0
+stevedore>=1.0.0  # Apache-2.0
diff --git a/tools/check_logs.py b/tools/check_logs.py
index bc4eaca..7cf9d85 100755
--- a/tools/check_logs.py
+++ b/tools/check_logs.py
@@ -22,11 +22,13 @@
 import StringIO
 import sys
 import urllib2
+
 import yaml
 
 
-is_grenade = (os.environ.get('DEVSTACK_GATE_GRENADE', "0") == "1" or
-              os.environ.get('DEVSTACK_GATE_GRENADE_FORWARD', "0") == "1")
+# DEVSTACK_GATE_GRENADE is either unset if grenade is not running
+# or a string describing what type of grenade run to perform.
+is_grenade = os.environ.get('DEVSTACK_GATE_GRENADE') is not None
 dump_all_errors = True
 
 # As logs are made clean, add to this set
@@ -78,7 +80,6 @@
 
 def scan_content(name, content, regexp, whitelist):
     had_errors = False
-    print_log_name = True
     for line in content:
         if not line.startswith("Stderr:") and regexp.match(line):
             whitelisted = False
@@ -89,13 +90,8 @@
                     whitelisted = True
                     break
             if not whitelisted or dump_all_errors:
-                if print_log_name:
-                    print("\nLog File Has Errors: %s" % name)
-                    print_log_name = False
                 if not whitelisted:
                     had_errors = True
-                    print("*** Not Whitelisted ***"),
-                print(line.rstrip())
     return had_errors
 
 
@@ -149,17 +145,21 @@
             whitelists = loaded
     logs_with_errors = process_files(files_to_process, urls_to_process,
                                      whitelists)
-    if logs_with_errors:
-        print("Logs have errors")
-    if is_grenade:
-        print("Currently not failing grenade runs with errors")
-        return 0
+
     failed = False
-    for log in logs_with_errors:
-        if log not in allowed_dirty:
-            print("Log: %s not allowed to have ERRORS or TRACES" % log)
-            failed = True
+    if logs_with_errors:
+        log_files = set(logs_with_errors)
+        for log in log_files:
+            msg = '%s log file has errors' % log
+            if log not in allowed_dirty:
+                msg += ' and is not allowed to have them'
+                failed = True
+            print(msg)
+        print("\nPlease check the respective log files to see the errors")
     if failed:
+        if is_grenade:
+            print("Currently not failing grenade runs with errors")
+            return 0
         return 1
     print("ok")
     return 0
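
The grenade check also changed meaning: ``DEVSTACK_GATE_GRENADE`` is no longer compared against the literal ``"1"`` but only tested for being set, because its value now names the kind of grenade run rather than acting as a boolean flag. The difference in behaviour, shown with a hypothetical value::

    import os

    os.environ['DEVSTACK_GATE_GRENADE'] = 'pullup'  # hypothetical gate setting
    old_style = os.environ.get('DEVSTACK_GATE_GRENADE', '0') == '1'   # False
    new_style = os.environ.get('DEVSTACK_GATE_GRENADE') is not None   # True
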
diff --git a/tools/colorizer.py b/tools/colorizer.py
index a3a0616..e7152f2 100755
--- a/tools/colorizer.py
+++ b/tools/colorizer.py
@@ -42,10 +42,10 @@
 """Display a subunit stream through a colorized unittest test runner."""
 
 import heapq
-import subunit
 import sys
 import unittest
 
+import subunit
 import testtools
 
 
diff --git a/tools/find_stack_traces.py b/tools/find_stack_traces.py
index c905976..4862d01 100755
--- a/tools/find_stack_traces.py
+++ b/tools/find_stack_traces.py
@@ -16,12 +16,13 @@
 #    under the License.
 
 import gzip
+import pprint
 import re
 import StringIO
 import sys
 import urllib2
 
-import pprint
+
 pp = pprint.PrettyPrinter()
 
 NOVA_TIMESTAMP = r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d"
diff --git a/tools/subunit-trace.py b/tools/subunit-trace.py
index 8ad59bb..57e58f2 100755
--- a/tools/subunit-trace.py
+++ b/tools/subunit-trace.py
@@ -23,7 +23,6 @@
 import re
 import sys
 
-import mimeparse
 import subunit
 import testtools
 
@@ -32,55 +31,6 @@
 RESULTS = {}
 
 
-class Starts(testtools.StreamResult):
-
-    def __init__(self, output):
-        super(Starts, self).__init__()
-        self._output = output
-
-    def startTestRun(self):
-        self._neednewline = False
-        self._emitted = set()
-
-    def status(self, test_id=None, test_status=None, test_tags=None,
-               runnable=True, file_name=None, file_bytes=None, eof=False,
-               mime_type=None, route_code=None, timestamp=None):
-        super(Starts, self).status(
-            test_id, test_status,
-            test_tags=test_tags, runnable=runnable, file_name=file_name,
-            file_bytes=file_bytes, eof=eof, mime_type=mime_type,
-            route_code=route_code, timestamp=timestamp)
-        if not test_id:
-            if not file_bytes:
-                return
-            if not mime_type or mime_type == 'test/plain;charset=utf8':
-                mime_type = 'text/plain; charset=utf-8'
-            primary, sub, parameters = mimeparse.parse_mime_type(mime_type)
-            content_type = testtools.content_type.ContentType(
-                primary, sub, parameters)
-            content = testtools.content.Content(
-                content_type, lambda: [file_bytes])
-            text = content.as_text()
-            if text and text[-1] not in '\r\n':
-                self._neednewline = True
-            self._output.write(text)
-        elif test_status == 'inprogress' and test_id not in self._emitted:
-            if self._neednewline:
-                self._neednewline = False
-                self._output.write('\n')
-            worker = ''
-            for tag in test_tags or ():
-                if tag.startswith('worker-'):
-                    worker = '(' + tag[7:] + ') '
-            if timestamp:
-                timestr = timestamp.isoformat()
-            else:
-                timestr = ''
-                self._output.write('%s: %s%s [start]\n' %
-                                   (timestr, worker, test_id))
-            self._emitted.add(test_id)
-
-
 def cleanup_test_name(name, strip_tags=True, strip_scenarios=False):
     """Clean up the test name for display.
 
@@ -274,17 +224,19 @@
     args = parse_args()
     stream = subunit.ByteStreamToStreamResult(
         sys.stdin, non_subunit_name='stdout')
-    starts = Starts(sys.stdout)
     outcomes = testtools.StreamToDict(
         functools.partial(show_outcome, sys.stdout,
                           print_failures=args.print_failures))
     summary = testtools.StreamSummary()
-    result = testtools.CopyStreamResult([starts, outcomes, summary])
+    result = testtools.CopyStreamResult([outcomes, summary])
     result.startTestRun()
     try:
         stream.run(result)
     finally:
         result.stopTestRun()
+    if count_tests('status', '.*') == 0:
+        print("The test run didn't actually run any tests")
+        return 1
     if args.post_fails:
         print_fails(sys.stdout)
     print_summary(sys.stdout)
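
Removing the ``Starts`` writer leaves summary collection as the only stream consumer, and the new ``count_tests('status', '.*')`` guard makes a run that executed zero tests exit non-zero instead of passing silently. Conceptually the guard is just a regex count over the recorded results; a simplified, self-contained version (the script's own ``count_tests`` works against the module-level ``RESULTS`` it accumulates, not a parameter)::

    import re

    def count_tests(key, value, results):
        # Count every recorded test whose given attribute matches the pattern.
        count = 0
        for test in results:
            if re.search(value, str(test.get(key, ''))):
                count += 1
        return count

    if count_tests('status', '.*', results=[]) == 0:
        print("The test run didn't actually run any tests")
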
diff --git a/tox.ini b/tox.ini
index 9c32121..cab59a8 100644
--- a/tox.ini
+++ b/tox.ini
@@ -6,33 +6,36 @@
 [testenv]
 setenv = VIRTUAL_ENV={envdir}
          OS_TEST_PATH=./tempest/test_discover
-         PYTHONHASHSEED=0
 usedevelop = True
 install_command = pip install -U {opts} {packages}
+whitelist_externals = bash
+
 
 [testenv:py26]
 setenv = OS_TEST_PATH=./tempest/tests
-         PYTHONHASHSEED=0
 commands = python setup.py test --slowest --testr-arg='tempest\.tests {posargs}'
 
 [testenv:py33]
 setenv = OS_TEST_PATH=./tempest/tests
+commands = python setup.py test --slowest --testr-arg='tempest\.tests {posargs}'
+
+[testenv:py34]
+setenv = OS_TEST_PATH=./tempest/tests
          PYTHONHASHSEED=0
 commands = python setup.py test --slowest --testr-arg='tempest\.tests {posargs}'
 
 [testenv:py27]
 setenv = OS_TEST_PATH=./tempest/tests
-         PYTHONHASHSEED=0
 commands = python setup.py test --slowest --testr-arg='tempest\.tests {posargs}'
 
 [testenv:cover]
 setenv = OS_TEST_PATH=./tempest/tests
-         PYTHONHASHSEED=0
 commands = python setup.py testr --coverage --testr-arg='tempest\.tests {posargs}'
+deps = -r{toxinidir}/requirements.txt
+       -r{toxinidir}/test-requirements.txt
 
 [testenv:all]
 sitepackages = True
-setenv = VIRTUAL_ENV={envdir}
 commands =
   bash tools/pretty_tox.sh '{posargs}'
 
@@ -95,6 +98,7 @@
        -r{toxinidir}/test-requirements.txt
 
 [testenv:pep8]
+setenv = PYTHONHASHSEED=0
 commands =
    flake8 {posargs}
    {toxinidir}/tools/config/check_uptodate.sh
@@ -109,7 +113,10 @@
 [flake8]
 # E125 is a won't fix until https://github.com/jcrocholl/pep8/issues/126 is resolved.  For further detail see https://review.openstack.org/#/c/36788/
 # H402 skipped because some docstrings aren't sentences
-# Skipped because of new hacking 0.9: H407,H405,H904,H305,E123,H307,E122,E129,E128
-ignore = E125,H402,H404,H407,H405,H904,H305,E123,H307,E122,E129,E128
+# E123 skipped because it is ignored by default in the default pep8
+# E129 skipped because it is too limiting when combined with other rules
+# H305 skipped because it is inconsistent between python versions
+# Skipped because of new hacking 0.9: H405,H904
+ignore = E125,H402,E123,E129,H404,H405,H904,H305
 show-source = True
 exclude = .git,.venv,.tox,dist,doc,openstack,*egg