Merge "Add test for the various config cases of is_admin_available()"
diff --git a/HACKING.rst b/HACKING.rst
index 81a7c2c..04b5eb6 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -312,3 +312,57 @@
          * Boot an additional instance from the new snapshot based volume
          * Check written content in the instance booted from snapshot
         """
+
+Branchless Tempest Considerations
+---------------------------------
+
+Starting with the OpenStack Icehouse release, Tempest no longer has any stable
+branches. This is to better ensure API consistency between releases, because
+the API behavior should not change between releases. It also means that the
+stable branches of the projects are gated by the Tempest master branch, so
+proposed commits to Tempest must work against both master and all the
+currently supported stable branches of the projects. As such, there are a few
+special considerations that have to be accounted for when pushing new changes
+to Tempest.
+
+1. New Tests for new features
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When adding tests for new features that were not present in previous releases
+of the projects, the new test has to be properly skipped with a feature flag.
+This can be as simple as using the @test.requires_ext() decorator to check
+whether the required extension (or discoverable optional API) is enabled, or it
+may require adding a new config option to the appropriate section. If there
+isn't a method of selecting the new **feature** from the config file then there
+won't be a mechanism to disable the test with older stable releases and the new
+test won't be able to merge.
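+
+For example, a test for a new optional compute API might be guarded roughly
+like this (a sketch only; the extension name, class, and test method are
+illustrative, not actual Tempest code)::
+
+    from tempest.api.compute import base
+    from tempest import test
+
+
+    class NewFeatureTestJSON(base.BaseV2ComputeTest):
+
+        @test.requires_ext(extension='os-new-feature', service='compute')
+        def test_new_feature_basic_behaviour(self):
+            # Automatically skipped on clouds (such as older stable branches)
+            # where the 'os-new-feature' extension is not enabled in the
+            # tempest configuration.
+            pass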
+
+2. Bug fix on core project needing Tempest changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When trying to land a bug fix which changes a tested API, you'll have to use
+the following procedure::
+
+    - Propose the change to the project and get a +2 on it, even with Tempest
+      failing
+    - Propose a skip on Tempest, which will only be approved after the
+      corresponding change in the project has a +2
+    - Land project change in master and all open stable branches (if required)
+    - Land changed test in Tempest
+
+Otherwise the bug fix won't be able to land in the project.
+
+3. New Tests for existing features
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If a test is being added for a feature that exists in all the current releases
+of the projects then the only concern is that the API behavior is the same
+across all the versions of the project being tested. If the behavior is not
+consistent the test will not be able to merge.
+
+API Stability
+-------------
+
+For new tests being added to Tempest the assumption is that the API being
+tested is considered stable and adheres to the OpenStack API stability
+guidelines. If an API is still considered experimental or in development then
+it should not be tested by Tempest until it is considered stable.
diff --git a/README.rst b/README.rst
index 7af0025..ba93712 100644
--- a/README.rst
+++ b/README.rst
@@ -54,60 +54,59 @@
 
 .. note::
 
-    If you have a running devstack environment, tempest will be
+    If you have a running devstack environment, Tempest will be
     automatically configured and placed in ``/opt/stack/tempest``. It
     will have a configuration file already set up to work with your
     devstack installation.
 
-Tempest is not tied to any single test runner, but testr is the most commonly
-used tool. After setting up your configuration file, you can execute
-the set of Tempest tests by using ``testr`` ::
+Tempest is not tied to any single test runner, but `testr`_ is the most
+commonly used tool. Note that the nosetests test runner is **not** recommended
+for running Tempest.
+
+After setting up your configuration file, you can execute the set of Tempest
+tests by using ``testr`` ::
 
     $> testr run --parallel
 
-To run one single test  ::
+.. _testr: http://testrepository.readthedocs.org/en/latest/MANUAL.html
 
-    $> testr run --parallel tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
+To run a single test serially ::
+
+    $> testr run tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
 
 Alternatively, you can use the run_tempest.sh script which will create a venv
-and run the tests or use tox to do the same.
+and run the tests or use tox to do the same. Tox also contains several existing
+job configurations. For example::
+
+   $> tox -efull
+
+which will run the same set of tests as the OpenStack gate (this is exactly how
+the gate invokes Tempest). Or::
+
+  $> tox -esmoke
+
+to run the tests tagged as smoke.
+
 
 Configuration
 -------------
 
-Detailed configuration of tempest is beyond the scope of this
-document. The etc/tempest.conf.sample attempts to be a self
-documenting version of the configuration.
+Detailed configuration of Tempest is beyond the scope of this document; see
+:ref:`tempest-configuration` for more details on configuring Tempest. The
+etc/tempest.conf.sample file attempts to be a self-documenting version of the
+configuration.
 
-To generate the sample tempest.conf file, run the following
-command from the top level of the tempest directory:
+You can generate a new sample tempest.conf file by running the following
+command from the top level of the Tempest directory:
 
   tox -egenconfig
 
 The most important pieces that are needed are the user ids, openstack
-endpoints, and basic flavors and images needed to run tests.
-
-Common Issues
--------------
-
-Tempest was originally designed to primarily run against a full OpenStack
-deployment. Due to that focus, some issues may occur when running Tempest
-against devstack.
-
-Running Tempest, especially in parallel, against a devstack instance may
-cause requests to be rate limited, which will cause unexpected failures.
-Given the number of requests Tempest can make against a cluster, rate limiting
-should be disabled for all test accounts.
-
-Additionally, devstack only provides a single image which Nova can use.
-For the moment, the best solution is to provide the same image uuid for
-both image_ref and image_ref_alt. Tempest will skip tests as needed if it
-detects that both images are the same.
+endpoint, and basic flavors and images needed to run tests.
 
 Unit Tests
 ----------
 
-Tempest also has a set of unit tests which test the tempest code itself. These
+Tempest also has a set of unit tests which test the Tempest code itself. These
 tests can be run by specifying the test discovery path::
 
     $> OS_TEST_PATH=./tempest/tests testr run --parallel
@@ -115,7 +114,7 @@
 By setting OS_TEST_PATH to ./tempest/tests it specifies that test discover
 should only be run on the unit test directory. The default value of OS_TEST_PATH
 is OS_TEST_PATH=./tempest/test_discover which will only run test discover on the
-tempest suite.
+Tempest suite.
 
 Alternatively, you can use the run_tests.sh script which will create a venv and
 run the unit tests. There are also the py26, py27, or py33 tox jobs which will
@@ -125,64 +124,10 @@
 ----------
 
 Starting in the kilo release the OpenStack services dropped all support for
-python 2.6. This change has been mirrored in tempest, starting after the
-tempest-2 tag. This means that proposed changes to tempest which only fix
+python 2.6. This change has been mirrored in Tempest, starting after the
+tempest-2 tag. This means that proposed changes to Tempest which only fix
 python 2.6 compatibility will be rejected, and moving forward more features not
-present in python 2.6 will be used. If you're running you're OpenStack services
-on an earlier release with python 2.6 you can easily run tempest against it
+present in python 2.6 will be used. If you're running your OpenStack services
+on an earlier release with python 2.6 you can easily run Tempest against it
 from a remote system running python 2.7. (or deploy a cloud guest in your cloud
 that has python 2.7)
-
-Branchless Tempest Considerations
----------------------------------
-
-Starting with the OpenStack Icehouse release Tempest no longer has any stable
-branches. This is to better ensure API consistency between releases because
-the API behavior should not change between releases. This means that the stable
-branches are also gated by the Tempest master branch, which also means that
-proposed commits to Tempest must work against both the master and all the
-currently supported stable branches of the projects. As such there are a few
-special considerations that have to be accounted for when pushing new changes
-to tempest.
-
-1. New Tests for new features
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-When adding tests for new features that were not in previous releases of the
-projects the new test has to be properly skipped with a feature flag. Whether
-this is just as simple as using the @test.requires_ext() decorator to check
-if the required extension (or discoverable optional API) is enabled or adding
-a new config option to the appropriate section. If there isn't a method of
-selecting the new **feature** from the config file then there won't be a
-mechanism to disable the test with older stable releases and the new test won't
-be able to merge.
-
-2. Bug fix on core project needing Tempest changes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-When trying to land a bug fix which changes a tested API you'll have to use the
-following procedure::
-
-    - Propose change to the project, get a +2 on the change even with failing
-    - Propose skip on Tempest which will only be approved after the
-      corresponding change in the project has a +2 on change
-    - Land project change in master and all open stable branches (if required)
-    - Land changed test in Tempest
-
-Otherwise the bug fix won't be able to land in the project.
-
-3. New Tests for existing features
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If a test is being added for a feature that exists in all the current releases
-of the projects then the only concern is that the API behavior is the same
-across all the versions of the project being tested. If the behavior is not
-consistent the test will not be able to merge.
-
-API Stability
--------------
-
-For new tests being added to Tempest the assumption is that the API being
-tested is considered stable and adheres to the OpenStack API stability
-guidelines. If an API is still considered experimental or in development then
-it should not be tested by Tempest until it is considered stable.
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index f772aa3..15369de 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -1,6 +1,23 @@
+.. _tempest-configuration:
+
 Tempest Configuration Guide
 ===========================
 
+This guide is a starting point for configuring Tempest. It aims to elaborate
+on and explain some of the mandatory and common configuration settings and how
+they are used together. The source of truth for each option is the sample
+config file, which explains the purpose of each individual option.
+
+Lock Path
+---------
+
+There are some tests and operations inside of Tempest that need to be
+externally locked when running in parallel, to prevent them from running at
+the same time. This is a mandatory step for configuring Tempest and is still
+needed even when running serially. All that is needed to do this is the
+following (as shown in the example below):
+
+ #. Set the lock_path option in the oslo_concurrency group
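+
+For example, the relevant snippet of a working tempest.conf might look like
+the following (the directory shown is just an illustration; any directory
+writable by the user running Tempest will do)::
+
+    [oslo_concurrency]
+    lock_path = /tmp/tempest-lock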
+
 Auth/Credentials
 ----------------
 
diff --git a/etc/tempest.conf.sample b/etc/tempest.conf.sample
index a2db4f4..7c5461b 100644
--- a/etc/tempest.conf.sample
+++ b/etc/tempest.conf.sample
@@ -1,17 +1,7 @@
 [DEFAULT]
 
 #
-# From tempest.config
-#
-
-# Whether to disable inter-process locks (boolean value)
-#disable_process_locking = false
-
-# Directory to use for lock files. (string value)
-#lock_path = <None>
-
-#
-# From tempest.config
+# From oslo.log
 #
 
 # Print debugging output (set logging level to DEBUG instead of
@@ -22,10 +12,6 @@
 # default WARNING level). (boolean value)
 #verbose = false
 
-#
-# From tempest.config
-#
-
 # The name of a logging configuration file. This file is appended to
 # any existing logging configuration files. For details about logging
 # configuration files, see the Python logging module documentation.
@@ -66,17 +52,9 @@
 # Syslog facility to receive log lines. (string value)
 #syslog_log_facility = LOG_USER
 
-#
-# From tempest.config
-#
-
 # Log output to standard error. (boolean value)
 #use_stderr = true
 
-#
-# From tempest.config
-#
-
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
@@ -92,7 +70,7 @@
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
 
 # List of logger=LEVEL pairs. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
@@ -138,6 +116,11 @@
 # Roles to assign to all users created by tempest (list value)
 #tempest_roles =
 
+# Only applicable when identity.auth_version is v3. Domain within which
+# isolated credentials are provisioned. The default "None" means that
+# the domain from the admin user is used instead. (string value)
+#tenant_isolation_domain_name = <None>
+
 
 [baremetal]
 
@@ -319,9 +302,11 @@
 # value)
 #ssh_channel_timeout = 60
 
-# Name of the fixed network that is visible to all test tenants.
-# (string value)
-#fixed_network_name = private
+# Name of the fixed network that is visible to all test tenants. If
+# multiple networks are available for a tenant, this is the network
+# which will be used for creating servers if tempest does not create a
+# network or a network is not specified elsewhere. (string value)
+#fixed_network_name = <None>
 
 # Network used for SSH connections. Ignored if
 # use_floatingip_for_ssh=true or run_ssh=false. (string value)
@@ -410,7 +395,8 @@
 #block_migration_for_live_migration = false
 
 # Does the test environment block migration support cinder iSCSI
-# volumes (boolean value)
+# volumes. Note, libvirt doesn't support this, see
+# https://bugs.launchpad.net/nova/+bug/1398999 (boolean value)
 #block_migrate_cinder_iscsi = false
 
 # Enable VNC console. This configuration value should be same as
@@ -444,6 +430,11 @@
 # Does the test environment have the ec2 api running? (boolean value)
 #ec2_api = true
 
+# Does Nova preserve preexisting ports from Neutron when deleting an
+# instance? This should be set to True if testing Kilo+ Nova. (boolean
+# value)
+#preserve_ports = false
+
 
 [dashboard]
 
@@ -895,6 +886,9 @@
 # Allowed values: public, admin, internal, publicURL, adminURL, internalURL
 #endpoint_type = publicURL
 
+# Role required for users to be able to manage stacks (string value)
+#stack_owner_role = heat_stack_owner
+
 # Time in seconds between build status checks. (integer value)
 #build_interval = 1
 
@@ -905,10 +899,6 @@
 # the test workload (string value)
 #instance_type = m1.micro
 
-# Name of heat-cfntools enabled image to use when launching test
-# instances. (string value)
-#image_ref = <None>
-
 # Name of existing keypair to launch servers with. (string value)
 #keypair_name = <None>
 
@@ -921,6 +911,24 @@
 #max_resources_per_stack = 1000
 
 
+[oslo_concurrency]
+
+#
+# From oslo.concurrency
+#
+
+# Enables or disables inter-process locks. (boolean value)
+# Deprecated group/name - [DEFAULT]/disable_process_locking
+#disable_process_locking = false
+
+# Directory to use for lock files.  For security, the specified
+# directory should only be writable by the user running the processes
+# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
+# If external locks are used, a lock path must be set. (string value)
+# Deprecated group/name - [DEFAULT]/lock_path
+#lock_path = <None>
+
+
 [scenario]
 
 #
diff --git a/openstack-common.conf b/openstack-common.conf
index 5ae2089..1920295 100644
--- a/openstack-common.conf
+++ b/openstack-common.conf
@@ -2,10 +2,6 @@
 
 # The list of modules to copy from openstack-common
 module=install_venv_common
-module=lockutils
-module=log
-module=importlib
-module=fixture
 module=versionutils
 
 # The base module to hold the copy of openstack.common
diff --git a/requirements.txt b/requirements.txt
index b14af9d..35b5144 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9,19 +9,20 @@
 boto>=2.32.1
 paramiko>=1.13.0
 netaddr>=0.7.12
-python-ceilometerclient>=1.0.6
 python-glanceclient>=0.15.0
-python-keystoneclient>=1.1.0
-python-neutronclient>=2.3.11,<3
 python-cinderclient>=1.1.0
 python-heatclient>=0.3.0
-python-ironicclient>=0.2.1
-python-saharaclient>=0.7.6
+python-saharaclient>=0.8.0
 python-swiftclient>=2.2.0
 testrepository>=0.0.18
-oslo.config>=1.6.0  # Apache-2.0
+oslo.concurrency>=1.8.0,<1.9.0         # Apache-2.0
+oslo.config>=1.9.3,<1.10.0  # Apache-2.0
+oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
+oslo.log>=1.0.0,<1.1.0  # Apache-2.0
+oslo.serialization>=1.4.0,<1.5.0               # Apache-2.0
+oslo.utils>=1.4.0,<1.5.0                       # Apache-2.0
 six>=1.9.0
 iso8601>=0.1.9
 fixtures>=0.3.14
 testscenarios>=0.4
-tempest-lib>=0.3.0
+tempest-lib>=0.4.0
diff --git a/tempest/api/baremetal/admin/base.py b/tempest/api/baremetal/admin/base.py
index 2834b2b..9aeea0a 100644
--- a/tempest/api/baremetal/admin/base.py
+++ b/tempest/api/baremetal/admin/base.py
@@ -16,6 +16,7 @@
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
+from tempest.common import credentials
 from tempest import config
 from tempest import test
 
@@ -69,7 +70,11 @@
     @classmethod
     def setup_credentials(cls):
         super(BaseBaremetalTest, cls).setup_credentials()
-        cls.mgr = clients.AdminManager()
+        if (not hasattr(cls, 'isolated_creds') or
+            not cls.isolated_creds.name == cls.__name__):
+            cls.isolated_creds = credentials.get_isolated_credentials(
+                name=cls.__name__, network_resources=cls.network_resources)
+        cls.mgr = clients.Manager(cls.isolated_creds.get_admin_creds())
 
     @classmethod
     def setup_clients(cls):
@@ -110,7 +115,7 @@
         :return: Created chassis.
 
         """
-        description = description or data_utils.rand_name('test-chassis-')
+        description = description or data_utils.rand_name('test-chassis')
         resp, body = cls.client.create_chassis(description=description)
         return resp, body
 
diff --git a/tempest/api/baremetal/admin/test_chassis.py b/tempest/api/baremetal/admin/test_chassis.py
index ef2113c..2011905 100644
--- a/tempest/api/baremetal/admin/test_chassis.py
+++ b/tempest/api/baremetal/admin/test_chassis.py
@@ -36,7 +36,7 @@
     @test.attr(type='smoke')
     @test.idempotent_id('7c5a2e09-699c-44be-89ed-2bc189992d42')
     def test_create_chassis(self):
-        descr = data_utils.rand_name('test-chassis-')
+        descr = data_utils.rand_name('test-chassis')
         _, chassis = self.create_chassis(description=descr)
         self.assertEqual(chassis['description'], descr)
 
@@ -77,7 +77,7 @@
         _, body = self.create_chassis()
         uuid = body['uuid']
 
-        new_description = data_utils.rand_name('new-description-')
+        new_description = data_utils.rand_name('new-description')
         _, body = (self.client.update_chassis(uuid,
                    description=new_description))
         _, chassis = self.client.show_chassis(uuid)
diff --git a/tempest/api/baremetal/admin/test_nodestates.py b/tempest/api/baremetal/admin/test_nodestates.py
index bcb8b78..e7b6081 100644
--- a/tempest/api/baremetal/admin/test_nodestates.py
+++ b/tempest/api/baremetal/admin/test_nodestates.py
@@ -12,9 +12,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_utils import timeutils
+
 from tempest.api.baremetal.admin import base
 from tempest import exceptions
-from tempest.openstack.common import timeutils
 from tempest import test
 
 
diff --git a/tempest/api/compute/admin/test_agents.py b/tempest/api/compute/admin/test_agents.py
index f801f8a..aa29b36 100644
--- a/tempest/api/compute/admin/test_agents.py
+++ b/tempest/api/compute/admin/test_agents.py
@@ -12,11 +12,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest.api.compute import base
-from tempest.openstack.common import log
 from tempest import test
 
 LOG = log.getLogger(__name__)
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/api/compute/admin/test_aggregates.py
index b5e969e..3a34a2e 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/api/compute/admin/test_aggregates.py
@@ -37,8 +37,8 @@
     @classmethod
     def resource_setup(cls):
         super(AggregatesAdminTestJSON, cls).resource_setup()
-        cls.aggregate_name_prefix = 'test_aggregate_'
-        cls.az_name_prefix = 'test_az_'
+        cls.aggregate_name_prefix = 'test_aggregate'
+        cls.az_name_prefix = 'test_az'
 
         hosts_all = cls.os_adm.hosts_client.list_hosts()
         hosts = map(lambda x: x['host_name'],
@@ -215,7 +215,7 @@
         self.addCleanup(self.client.delete_aggregate, aggregate['id'])
         self.client.add_host(aggregate['id'], self.host)
         self.addCleanup(self.client.remove_host, aggregate['id'], self.host)
-        server_name = data_utils.rand_name('test_server_')
+        server_name = data_utils.rand_name('test_server')
         admin_servers_client = self.os_adm.servers_client
         server = self.create_test_server(name=server_name,
                                          availability_zone=az_name,
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index 07c8c4e..f6d6ad3 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -36,8 +36,8 @@
     @classmethod
     def resource_setup(cls):
         super(AggregatesAdminNegativeTestJSON, cls).resource_setup()
-        cls.aggregate_name_prefix = 'test_aggregate_'
-        cls.az_name_prefix = 'test_az_'
+        cls.aggregate_name_prefix = 'test_aggregate'
+        cls.az_name_prefix = 'test_az'
 
         hosts_all = cls.os_adm.hosts_client.list_hosts()
         hosts = map(lambda x: x['host_name'],
@@ -134,7 +134,7 @@
         hosts_all = self.os_adm.hosts_client.list_hosts()
         hosts = map(lambda x: x['host_name'], hosts_all)
         while True:
-            non_exist_host = data_utils.rand_name('nonexist_host_')
+            non_exist_host = data_utils.rand_name('nonexist_host')
             if non_exist_host not in hosts:
                 break
 
@@ -189,7 +189,7 @@
     @test.attr(type=['negative', 'gate'])
     @test.idempotent_id('95d6a6fa-8da9-4426-84d0-eec0329f2e4d')
     def test_aggregate_remove_nonexistent_host(self):
-        non_exist_host = data_utils.rand_name('nonexist_host_')
+        non_exist_host = data_utils.rand_name('nonexist_host')
         aggregate_name = data_utils.rand_name(self.aggregate_name_prefix)
         aggregate = self.client.create_aggregate(name=aggregate_name)
         self.addCleanup(self.client.delete_aggregate, aggregate['id'])
diff --git a/tempest/api/compute/admin/test_baremetal_nodes.py b/tempest/api/compute/admin/test_baremetal_nodes.py
index 1381f80..64099c3 100644
--- a/tempest/api/compute/admin/test_baremetal_nodes.py
+++ b/tempest/api/compute/admin/test_baremetal_nodes.py
@@ -31,14 +31,26 @@
             skip_msg = ('%s skipped as Ironic is not available' % cls.__name__)
             raise cls.skipException(skip_msg)
         cls.client = cls.os_adm.baremetal_nodes_client
+        cls.ironic_client = cls.os_adm.baremetal_client
 
-    @test.attr(type='smoke')
+    @test.attr(type=['smoke', 'baremetal'])
     @test.idempotent_id('e475aa6e-416d-4fa4-b3af-28d5e84250fb')
-    def test_list_baremetal_nodes(self):
-        # List all baremetal nodes.
-        baremetal_nodes = self.client.list_baremetal_nodes()
-        self.assertNotEmpty(baremetal_nodes, "No baremetal nodes found.")
+    def test_list_get_baremetal_nodes(self):
+        # Create some test nodes in Ironic directly
+        test_nodes = []
+        for i in range(0, 3):
+            _, node = self.ironic_client.create_node()
+            test_nodes.append(node)
+            self.addCleanup(self.ironic_client.delete_node, node['uuid'])
 
-        for node in baremetal_nodes:
-            baremetal_node = self.client.get_baremetal_node(node['id'])
-            self.assertEqual(node['id'], baremetal_node['id'])
+        # List all baremetal nodes and ensure our created test nodes are
+        # listed
+        bm_node_ids = set([n['id'] for n in
+                           self.client.list_baremetal_nodes()])
+        test_node_ids = set([n['uuid'] for n in test_nodes])
+        self.assertTrue(test_node_ids.issubset(bm_node_ids))
+
+        # Test getting each individually
+        for node in test_nodes:
+            baremetal_node = self.client.get_baremetal_node(node['uuid'])
+            self.assertEqual(node['uuid'], baremetal_node['id'])
diff --git a/tempest/api/compute/admin/test_flavors_negative.py b/tempest/api/compute/admin/test_flavors_negative.py
deleted file mode 100644
index c7eb9ae..0000000
--- a/tempest/api/compute/admin/test_flavors_negative.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import uuid
-
-from tempest_lib.common.utils import data_utils
-from tempest_lib import exceptions as lib_exc
-
-from tempest.api.compute import base
-from tempest.api_schema.request.compute.v2 import flavors
-from tempest import config
-from tempest import test
-
-
-CONF = config.CONF
-
-load_tests = test.NegativeAutoTest.load_tests
-
-
-class FlavorsAdminNegativeTestJSON(base.BaseV2ComputeAdminTest):
-
-    """
-    Tests Flavors API Create and Delete that require admin privileges
-    """
-
-    @classmethod
-    def skip_checks(cls):
-        super(FlavorsAdminNegativeTestJSON, cls).skip_checks()
-        if not test.is_extension_enabled('OS-FLV-EXT-DATA', 'compute'):
-            msg = "OS-FLV-EXT-DATA extension not enabled."
-            raise cls.skipException(msg)
-
-    @classmethod
-    def setup_clients(cls):
-        super(FlavorsAdminNegativeTestJSON, cls).setup_clients()
-        cls.client = cls.os_adm.flavors_client
-        cls.user_client = cls.os.flavors_client
-
-    @classmethod
-    def resource_setup(cls):
-        super(FlavorsAdminNegativeTestJSON, cls).resource_setup()
-        cls.flavor_name_prefix = 'test_flavor_'
-        cls.ram = 512
-        cls.vcpus = 1
-        cls.disk = 10
-        cls.ephemeral = 10
-        cls.swap = 1024
-        cls.rxtx = 2
-
-    @test.attr(type=['negative', 'gate'])
-    @test.idempotent_id('404451c0-c1ae-4448-8d50-d74f26f93ec8')
-    def test_get_flavor_details_for_deleted_flavor(self):
-        # Delete a flavor and ensure it is not listed
-        # Create a test flavor
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-
-        # no need to specify flavor_id, we can get the flavor_id from a
-        # response of create_flavor() call.
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram,
-                                           self.vcpus, self.disk,
-                                           None,
-                                           ephemeral=self.ephemeral,
-                                           swap=self.swap,
-                                           rxtx=self.rxtx)
-        # Delete the flavor
-        new_flavor_id = flavor['id']
-        self.client.delete_flavor(new_flavor_id)
-
-        # Deleted flavors can be seen via detailed GET
-        flavor = self.client.get_flavor_details(new_flavor_id)
-        self.assertEqual(flavor['name'], flavor_name)
-
-        # Deleted flavors should not show up in a list however
-        flavors = self.client.list_flavors_with_detail()
-        flag = True
-        for flavor in flavors:
-            if flavor['name'] == flavor_name:
-                flag = False
-        self.assertTrue(flag)
-
-    @test.attr(type=['negative', 'gate'])
-    @test.idempotent_id('6f56e7b7-7500-4d0c-9913-880ca1efed87')
-    def test_create_flavor_as_user(self):
-        # only admin user can create a flavor
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = str(uuid.uuid4())
-
-        self.assertRaises(lib_exc.Forbidden,
-                          self.user_client.create_flavor,
-                          flavor_name, self.ram, self.vcpus, self.disk,
-                          new_flavor_id, ephemeral=self.ephemeral,
-                          swap=self.swap, rxtx=self.rxtx)
-
-    @test.attr(type=['negative', 'gate'])
-    @test.idempotent_id('a9a6dc02-8c14-4e05-a1ca-3468d4214882')
-    def test_delete_flavor_as_user(self):
-        # only admin user can delete a flavor
-        self.assertRaises(lib_exc.Forbidden,
-                          self.user_client.delete_flavor,
-                          self.flavor_ref_alt)
-
-
-@test.SimpleNegativeAutoTest
-class FlavorCreateNegativeTestJSON(base.BaseV2ComputeAdminTest,
-                                   test.NegativeAutoTest):
-    _service = CONF.compute.catalog_type
-    _schema = flavors.flavor_create
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
index 3c5f507..3c48d9e 100644
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ b/tempest/api/compute/admin/test_floating_ips_bulk.py
@@ -17,6 +17,7 @@
 
 from tempest.api.compute import base
 from tempest import config
+from tempest import exceptions
 from tempest import test
 
 CONF = config.CONF
@@ -51,7 +52,7 @@
                 msg = ("Configured unallocated floating IP range is already "
                        "allocated. Configure the correct unallocated range "
                        "as 'floating_ip_range'")
-                raise cls.skipException(msg)
+                raise exceptions.InvalidConfiguration(msg)
         return
 
     def _delete_floating_ips_bulk(self, ip_range):
diff --git a/tempest/api/compute/admin/test_networks.py b/tempest/api/compute/admin/test_networks.py
index c20d483..477dc61 100644
--- a/tempest/api/compute/admin/test_networks.py
+++ b/tempest/api/compute/admin/test_networks.py
@@ -37,12 +37,15 @@
     @test.idempotent_id('d206d211-8912-486f-86e2-a9d090d1f416')
     def test_get_network(self):
         networks = self.client.list_networks()
-        configured_network = [x for x in networks if x['label'] ==
-                              CONF.compute.fixed_network_name]
-        self.assertEqual(1, len(configured_network),
-                         "{0} networks with label {1}".format(
-                             len(configured_network),
-                             CONF.compute.fixed_network_name))
+        if CONF.compute.fixed_network_name:
+            configured_network = [x for x in networks if x['label'] ==
+                                  CONF.compute.fixed_network_name]
+            self.assertEqual(1, len(configured_network),
+                             "{0} networks with label {1}".format(
+                                 len(configured_network),
+                                 CONF.compute.fixed_network_name))
+        else:
+            configured_network = networks
         configured_network = configured_network[0]
         network = self.client.get_network(configured_network['id'])
         self.assertEqual(configured_network['label'], network['label'])
@@ -51,5 +54,9 @@
     def test_list_all_networks(self):
         networks = self.client.list_networks()
         # Check the configured network is in the list
-        configured_network = CONF.compute.fixed_network_name
-        self.assertIn(configured_network, [x['label'] for x in networks])
+        if CONF.compute.fixed_network_name:
+            configured_network = CONF.compute.fixed_network_name
+            self.assertIn(configured_network, [x['label'] for x in networks])
+        else:
+            network_name = map(lambda x: x['label'], networks)
+            self.assertGreaterEqual(len(network_name), 1)
diff --git a/tempest/api/compute/admin/test_quotas.py b/tempest/api/compute/admin/test_quotas.py
index 773f23e..7601b25 100644
--- a/tempest/api/compute/admin/test_quotas.py
+++ b/tempest/api/compute/admin/test_quotas.py
@@ -13,13 +13,13 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 import six
 from tempest_lib.common.utils import data_utils
 from testtools import matchers
 
 from tempest.api.compute import base
 from tempest.common import tempest_fixtures as fixtures
-from tempest.openstack.common import log as logging
 from tempest import test
 
 LOG = logging.getLogger(__name__)
@@ -101,7 +101,7 @@
     @test.idempotent_id('ce9e0815-8091-4abd-8345-7fe5b85faa1d')
     def test_get_updated_quotas(self):
         # Verify that GET shows the updated quota set of tenant
-        tenant_name = data_utils.rand_name('cpu_quota_tenant_')
+        tenant_name = data_utils.rand_name('cpu_quota_tenant')
         tenant_desc = tenant_name + '-desc'
         identity_client = self.os_adm.identity_client
         tenant = identity_client.create_tenant(name=tenant_name,
@@ -114,8 +114,8 @@
         self.assertEqual(5120, quota_set['ram'])
 
         # Verify that GET shows the updated quota set of user
-        user_name = data_utils.rand_name('cpu_quota_user_')
-        password = data_utils.rand_name('password-')
+        user_name = data_utils.rand_name('cpu_quota_user')
+        password = data_utils.rand_name('password')
         email = user_name + '@testmail.tm'
         user = identity_client.create_user(name=user_name,
                                            password=password,
@@ -135,7 +135,7 @@
     @test.idempotent_id('389d04f0-3a41-405f-9317-e5f86e3c44f0')
     def test_delete_quota(self):
         # Admin can delete the resource quota set for a tenant
-        tenant_name = data_utils.rand_name('ram_quota_tenant_')
+        tenant_name = data_utils.rand_name('ram_quota_tenant')
         tenant_desc = tenant_name + '-desc'
         identity_client = self.os_adm.identity_client
         tenant = identity_client.create_tenant(name=tenant_name,
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index caa329e..b74285d 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -151,8 +151,8 @@
                         self.demo_tenant_id,
                         security_group_rules=default_sg_rules_quota)
 
-        s_name = data_utils.rand_name('securitygroup-')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('securitygroup')
+        s_description = data_utils.rand_name('description')
         securitygroup =\
             self.sg_client.create_security_group(s_name, s_description)
         self.addCleanup(self.sg_client.delete_security_group,
diff --git a/tempest/api/compute/admin/test_security_groups.py b/tempest/api/compute/admin/test_security_groups.py
index 578f73b..95656e8 100644
--- a/tempest/api/compute/admin/test_security_groups.py
+++ b/tempest/api/compute/admin/test_security_groups.py
@@ -39,7 +39,7 @@
 
     @test.idempotent_id('49667619-5af9-4c63-ab5d-2cfdd1c8f7f1')
     @testtools.skipIf(CONF.service_available.neutron,
-                      "Skipped because neutron do not support all_tenants"
+                      "Skipped because neutron does not support all_tenants "
                       "search filter.")
     @test.attr(type='smoke')
     @test.services('network')
@@ -49,8 +49,8 @@
         security_group_list = []
         # Create two security groups for a non-admin tenant
         for i in range(2):
-            name = data_utils.rand_name('securitygroup-')
-            description = data_utils.rand_name('description-')
+            name = data_utils.rand_name('securitygroup')
+            description = data_utils.rand_name('description')
             securitygroup = (self.client
                              .create_security_group(name, description))
             self.addCleanup(self._delete_security_group,
@@ -60,8 +60,8 @@
         client_tenant_id = securitygroup['tenant_id']
         # Create two security groups for admin tenant
         for i in range(2):
-            name = data_utils.rand_name('securitygroup-')
-            description = data_utils.rand_name('description-')
+            name = data_utils.rand_name('securitygroup')
+            description = data_utils.rand_name('description')
             adm_securitygroup = (self.adm_client
                                  .create_security_group(name,
                                                         description))
diff --git a/tempest/api/compute/admin/test_servers.py b/tempest/api/compute/admin/test_servers.py
index c872184..ef3a029 100644
--- a/tempest/api/compute/admin/test_servers.py
+++ b/tempest/api/compute/admin/test_servers.py
@@ -16,6 +16,7 @@
 from tempest_lib import decorators
 
 from tempest.api.compute import base
+from tempest.common import fixed_network
 from tempest import test
 
 
@@ -112,7 +113,10 @@
         name = data_utils.rand_name('server')
         flavor = self.flavor_ref
         image_id = self.image_ref
-        test_server = self.client.create_server(name, image_id, flavor)
+        network = self.get_tenant_network()
+        network_kwargs = fixed_network.set_networks_kwarg(network)
+        test_server = self.client.create_server(name, image_id, flavor,
+                                                **network_kwargs)
         self.addCleanup(self.client.delete_server, test_server['id'])
         self.client.wait_for_server_status(test_server['id'], 'ACTIVE')
         server = self.client.get_server(test_server['id'])
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index edcb052..d7e62df 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -66,7 +66,7 @@
     def test_resize_server_using_overlimit_ram(self):
         # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
         self.useFixture(fixtures.LockFixture('compute_quotas'))
-        flavor_name = data_utils.rand_name("flavor-")
+        flavor_name = data_utils.rand_name("flavor")
         flavor_id = self._get_unused_flavor_id()
         quota_set = self.quotas_client.get_default_quota_set(self.tenant_id)
         ram = int(quota_set['ram']) + 1
@@ -88,7 +88,7 @@
     def test_resize_server_using_overlimit_vcpus(self):
         # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
         self.useFixture(fixtures.LockFixture('compute_quotas'))
-        flavor_name = data_utils.rand_name("flavor-")
+        flavor_name = data_utils.rand_name("flavor")
         flavor_id = self._get_unused_flavor_id()
         ram = 512
         quota_set = self.quotas_client.get_default_quota_set(self.tenant_id)
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 18401f0..4995209 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -15,15 +15,16 @@
 
 import time
 
+from oslo_log import log as logging
+from oslo_utils import excutils
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
 from tempest.common import credentials
+from tempest.common import fixed_network
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import excutils
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
@@ -212,6 +213,8 @@
         flavor = kwargs.get('flavor', cls.flavor_ref)
         image_id = kwargs.get('image_id', cls.image_ref)
 
+        kwargs = fixed_network.set_networks_kwarg(
+            cls.get_tenant_network(), kwargs) or {}
         body = cls.servers_client.create_server(
             name, image_id, flavor, **kwargs)
 
@@ -247,7 +250,7 @@
         if name is None:
             name = data_utils.rand_name(cls.__name__ + "-securitygroup")
         if description is None:
-            description = data_utils.rand_name('description-')
+            description = data_utils.rand_name('description')
         body = \
             cls.security_groups_client.create_security_group(name,
                                                              description)
diff --git a/tempest/api/compute/images/test_images.py b/tempest/api/compute/images/test_images.py
index 53d0e95..9591b38 100644
--- a/tempest/api/compute/images/test_images.py
+++ b/tempest/api/compute/images/test_images.py
@@ -43,7 +43,7 @@
     @test.attr(type='gate')
     @test.idempotent_id('aa06b52b-2db5-4807-b218-9441f75d74e3')
     def test_delete_saving_image(self):
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         server = self.create_test_server(wait_until='ACTIVE')
         self.addCleanup(self.servers_client.delete_server, server['id'])
         image = self.create_image_from_server(server['id'],
diff --git a/tempest/api/compute/images/test_images_negative.py b/tempest/api/compute/images/test_images_negative.py
index 10e468e..ad502ad 100644
--- a/tempest/api/compute/images/test_images_negative.py
+++ b/tempest/api/compute/images/test_images_negative.py
@@ -77,7 +77,7 @@
         self.servers_client.wait_for_server_status(server['id'],
                                                    'SHUTOFF')
         self.addCleanup(self.servers_client.delete_server, server['id'])
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         image = self.create_image_from_server(server['id'],
                                               name=snapshot_name,
                                               wait_until='ACTIVE',
@@ -89,7 +89,7 @@
     @test.idempotent_id('ec176029-73dc-4037-8d72-2e4ff60cf538')
     def test_create_image_specify_uuid_35_characters_or_less(self):
         # Return an error if Image ID passed is 35 characters or less
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 35)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
                           test_uuid, snapshot_name)
@@ -98,7 +98,7 @@
     @test.idempotent_id('36741560-510e-4cc2-8641-55fe4dfb2437')
     def test_create_image_specify_uuid_37_characters_or_more(self):
         # Return an error if Image ID passed is 37 characters or more
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 37)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
                           test_uuid, snapshot_name)
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index b5edc1d..1d26a00 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -13,11 +13,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.compute import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index f1de320..18ce211 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -14,12 +14,12 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest.api.compute import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
@@ -84,7 +84,7 @@
     @test.idempotent_id('55d1d38c-dd66-4933-9c8e-7d92aeb60ddc')
     def test_create_image_specify_invalid_metadata(self):
         # Return an error when creating image with invalid metadata
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         meta = {'': ''}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
                           self.server_id, snapshot_name, meta)
@@ -93,7 +93,7 @@
     @test.idempotent_id('3d24d11f-5366-4536-bd28-cff32b748eca')
     def test_create_image_specify_metadata_over_limits(self):
         # Return an error when creating image with meta data over 256 chars
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         meta = {'a' * 260: 'b' * 260}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
                           self.server_id, snapshot_name, meta)
@@ -104,7 +104,7 @@
         # Disallow creating another image when first image is being saved
 
         # Create first snapshot
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         body = self.client.create_image(self.server_id,
                                         snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
@@ -112,7 +112,7 @@
         self.addCleanup(self._reset_server)
 
         # Create second snapshot
-        alt_snapshot_name = data_utils.rand_name('test-snap-')
+        alt_snapshot_name = data_utils.rand_name('test-snap')
         self.assertRaises(lib_exc.Conflict, self.client.create_image,
                           self.server_id, alt_snapshot_name)
 
@@ -130,7 +130,7 @@
     def test_delete_image_that_is_not_yet_active(self):
         # Return an error while trying to delete an image what is creating
 
-        snapshot_name = data_utils.rand_name('test-snap-')
+        snapshot_name = data_utils.rand_name('test-snap')
         body = self.client.create_image(self.server_id, snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index f5a98ce..2c6d2df 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -16,12 +16,12 @@
 import StringIO
 import time
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 import testtools
 
 from tempest.api.compute import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/api/compute/keypairs/test_keypairs.py b/tempest/api/compute/keypairs/test_keypairs.py
index 6e59601..20247d0 100644
--- a/tempest/api/compute/keypairs/test_keypairs.py
+++ b/tempest/api/compute/keypairs/test_keypairs.py
@@ -43,7 +43,7 @@
         # Create 3 keypairs
         key_list = list()
         for i in range(3):
-            k_name = data_utils.rand_name('keypair-')
+            k_name = data_utils.rand_name('keypair')
             keypair = self._create_keypair(k_name)
             # Need to pop these keys so that our compare doesn't fail later,
             # as the keypair dicts from list API doesn't have them.
@@ -69,7 +69,7 @@
     @test.idempotent_id('6c1d3123-4519-4742-9194-622cb1714b7d')
     def test_keypair_create_delete(self):
         # Keypair should be created, verified and deleted
-        k_name = data_utils.rand_name('keypair-')
+        k_name = data_utils.rand_name('keypair')
         keypair = self._create_keypair(k_name)
         private_key = keypair['private_key']
         key_name = keypair['name']
@@ -83,7 +83,7 @@
     @test.idempotent_id('a4233d5d-52d8-47cc-9a25-e1864527e3df')
     def test_get_keypair_detail(self):
         # Keypair should be created, Got details by name and deleted
-        k_name = data_utils.rand_name('keypair-')
+        k_name = data_utils.rand_name('keypair')
         self._create_keypair(k_name)
         keypair_detail = self.client.get_keypair(k_name)
         self.assertIn('name', keypair_detail)
@@ -99,7 +99,7 @@
     @test.idempotent_id('39c90c6a-304a-49dd-95ec-2366129def05')
     def test_keypair_create_with_pub_key(self):
         # Keypair should be created with a given public key
-        k_name = data_utils.rand_name('keypair-')
+        k_name = data_utils.rand_name('keypair')
         pub_key = ("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCs"
                    "Ne3/1ILNCqFyfYWDeTKLD6jEXC2OQHLmietMWW+/vd"
                    "aZq7KZEwO0jhglaFjU1mpqq4Gz5RX156sCTNM9vRbw"
diff --git a/tempest/api/compute/keypairs/test_keypairs_negative.py b/tempest/api/compute/keypairs/test_keypairs_negative.py
index 7a1a5e3..6aa0939 100644
--- a/tempest/api/compute/keypairs/test_keypairs_negative.py
+++ b/tempest/api/compute/keypairs/test_keypairs_negative.py
@@ -36,7 +36,7 @@
     @test.idempotent_id('29cca892-46ae-4d48-bc32-8fe7e731eb81')
     def test_keypair_create_with_invalid_pub_key(self):
         # Keypair should not be created with a non RSA public key
-        k_name = data_utils.rand_name('keypair-')
+        k_name = data_utils.rand_name('keypair')
         pub_key = "ssh-rsa JUNK nova@ubuntu"
         self.assertRaises(lib_exc.BadRequest,
                           self._create_keypair, k_name, pub_key)
@@ -45,7 +45,7 @@
     @test.idempotent_id('7cc32e47-4c42-489d-9623-c5e2cb5a2fa5')
     def test_keypair_delete_nonexistent_key(self):
         # Non-existent key deletion should throw a proper error
-        k_name = data_utils.rand_name("keypair-non-existent-")
+        k_name = data_utils.rand_name("keypair-non-existent")
         self.assertRaises(lib_exc.NotFound, self.client.delete_keypair,
                           k_name)
 
@@ -53,7 +53,7 @@
     @test.idempotent_id('dade320e-69ca-42a9-ba4a-345300f127e0')
     def test_create_keypair_with_empty_public_key(self):
         # Keypair should not be created with an empty public key
-        k_name = data_utils.rand_name("keypair-")
+        k_name = data_utils.rand_name("keypair")
         pub_key = ' '
         self.assertRaises(lib_exc.BadRequest, self._create_keypair,
                           k_name, pub_key)
@@ -62,7 +62,7 @@
     @test.idempotent_id('fc100c19-2926-4b9c-8fdc-d0589ee2f9ff')
     def test_create_keypair_when_public_key_bits_exceeds_maximum(self):
         # Keypair should not be created when public key bits are too long
-        k_name = data_utils.rand_name("keypair-")
+        k_name = data_utils.rand_name("keypair")
         pub_key = 'ssh-rsa ' + 'A' * 2048 + ' openstack@ubuntu'
         self.assertRaises(lib_exc.BadRequest, self._create_keypair,
                           k_name, pub_key)
@@ -71,7 +71,7 @@
     @test.idempotent_id('0359a7f1-f002-4682-8073-0c91e4011b7c')
     def test_create_keypair_with_duplicate_name(self):
         # Keypairs with duplicate names should not be created
-        k_name = data_utils.rand_name('keypair-')
+        k_name = data_utils.rand_name('keypair')
         self.client.create_keypair(k_name)
         # Now try the same keyname to create another key
         self.assertRaises(lib_exc.Conflict, self._create_keypair,
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index 71ee16a..16e7acf 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -137,8 +137,8 @@
         self.assertIn('id', securitygroup)
         securitygroup_id = securitygroup['id']
         # Update the name and description
-        s_new_name = data_utils.rand_name('sg-hth-')
-        s_new_des = data_utils.rand_name('description-hth-')
+        s_new_name = data_utils.rand_name('sg-hth')
+        s_new_des = data_utils.rand_name('description-hth')
         self.client.update_security_group(securitygroup_id,
                                           name=s_new_name,
                                           description=s_new_des)
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index 11ea30b..e069f6e 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -69,7 +69,7 @@
     def test_security_group_create_with_invalid_group_name(self):
         # Negative test: Security Group should not be created with group name
         # as an empty string/with white spaces/chars more than 255
-        s_description = data_utils.rand_name('description-')
+        s_description = data_utils.rand_name('description')
         # Create Security Group with empty string as group name
         self.assertRaises(lib_exc.BadRequest,
                           self.client.create_security_group, "", s_description)
@@ -91,7 +91,7 @@
     def test_security_group_create_with_invalid_group_description(self):
         # Negative test:Security Group should not be created with description
         # as an empty string/with white spaces/chars more than 255
-        s_name = data_utils.rand_name('securitygroup-')
+        s_name = data_utils.rand_name('securitygroup')
         # Create Security Group with empty string as description
         self.assertRaises(lib_exc.BadRequest,
                           self.client.create_security_group, s_name, "")
@@ -112,8 +112,8 @@
     def test_security_group_create_with_duplicate_name(self):
         # Negative test:Security Group with duplicate name should not
         # be created
-        s_name = data_utils.rand_name('securitygroup-')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('securitygroup')
+        s_description = data_utils.rand_name('description')
         self.create_security_group(s_name, s_description)
         # Now try the Security Group with the same 'Name'
         self.assertRaises(lib_exc.BadRequest,
@@ -161,10 +161,10 @@
     @test.services('network')
     def test_update_security_group_with_invalid_sg_id(self):
         # Update security_group with invalid sg_id should fail
-        s_name = data_utils.rand_name('sg-')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('sg')
+        s_description = data_utils.rand_name('description')
         # Create a non int sg_id
-        sg_id_invalid = data_utils.rand_name('sg-')
+        sg_id_invalid = data_utils.rand_name('sg')
         self.assertRaises(lib_exc.BadRequest,
                           self.client.update_security_group, sg_id_invalid,
                           name=s_name, description=s_description)
@@ -207,8 +207,8 @@
     def test_update_non_existent_security_group(self):
         # Updating a non-existent Security Group should fail
         non_exist_id = self._generate_a_non_existent_security_group_id()
-        s_name = data_utils.rand_name('sg-')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('sg')
+        s_description = data_utils.rand_name('description')
         self.assertRaises(lib_exc.NotFound,
                           self.client.update_security_group,
                           non_exist_id, name=s_name,
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index 0702f3f..42a61da 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -13,13 +13,15 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import time
+
+from tempest_lib import exceptions as lib_exc
+
 from tempest.api.compute import base
 from tempest import config
 from tempest import exceptions
 from tempest import test
 
-import time
-
 CONF = config.CONF
 
 
@@ -125,8 +127,15 @@
         self.assertTrue(interface_count > 0)
         self._check_interface(ifs[0])
 
-        iface = self._test_create_interface(server)
-        ifs.append(iface)
+        try:
+            iface = self._test_create_interface(server)
+        except lib_exc.BadRequest as e:
+            msg = ('Multiple possible networks found, use a Network ID to be '
+                   'more specific.')
+            if not CONF.compute.fixed_network_name and e.message == msg:
+                raise
+        else:
+            ifs.append(iface)
 
         iface = self._test_create_interface_by_network_id(server, ifs)
         ifs.append(iface)
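
The new error handling above uses ``try``/``except``/``else``: the ``else``
branch runs only when the ``try`` body raised nothing, so the interface is
appended only on success, while the "multiple possible networks" error is
re-raised when no fixed network name is configured. A self-contained sketch of
that control flow (every name here is illustrative, not a Tempest API)::

    class BadRequest(Exception):
        """Stand-in for tempest_lib.exceptions.BadRequest."""

    def attach_interface(ambiguous):
        if ambiguous:
            raise BadRequest('Multiple possible networks found, use a '
                             'Network ID to be more specific.')
        return 'iface-1'

    def collect_interface(ambiguous, fixed_network_name):
        ifs = []
        try:
            iface = attach_interface(ambiguous)
        except BadRequest:
            # Mirror the hunk above: re-raise only when no fixed network
            # name is configured, otherwise tolerate the ambiguity error.
            if not fixed_network_name:
                raise
        else:
            ifs.append(iface)  # reached only when attach_interface() succeeded
        return ifs

    print(collect_interface(False, 'private'))  # ['iface-1']
    print(collect_interface(True, 'private'))   # [] (error tolerated)
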
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index a694fb5..f33204d 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -19,6 +19,7 @@
 
 from tempest.api.compute import base
 from tempest.api import utils
+from tempest.common import fixed_network
 from tempest import config
 from tempest import test
 
@@ -66,9 +67,16 @@
             raise RuntimeError("Image %s (image_ref_alt) was not found!" %
                                cls.image_ref_alt)
 
+        network = cls.get_tenant_network()
+        if network:
+            cls.fixed_network_name = network['name']
+        else:
+            cls.fixed_network_name = None
+        network_kwargs = fixed_network.set_networks_kwarg(network)
         cls.s1_name = data_utils.rand_name(cls.__name__ + '-instance')
         cls.s1 = cls.create_test_server(name=cls.s1_name,
-                                        wait_until='ACTIVE')
+                                        wait_until='ACTIVE',
+                                        **network_kwargs)
 
         cls.s2_name = data_utils.rand_name(cls.__name__ + '-instance')
         cls.s2 = cls.create_test_server(name=cls.s2_name,
@@ -80,12 +88,6 @@
                                         flavor=cls.flavor_ref_alt,
                                         wait_until='ACTIVE')
 
-        cls.fixed_network_name = CONF.compute.fixed_network_name
-        if CONF.service_available.neutron:
-            if hasattr(cls.isolated_creds, 'get_primary_network'):
-                network = cls.isolated_creds.get_primary_network()
-                cls.fixed_network_name = network['name']
-
     @test.idempotent_id('05e8a8e7-9659-459a-989d-92c2f501f4ba')
     @utils.skip_unless_attr('multiple_images', 'Only one image found')
     @test.attr(type='gate')
@@ -185,7 +187,7 @@
     def test_list_servers_detailed_filter_by_image(self):
         # Filter the detailed list of servers by image
         params = {'image': self.image_ref}
-        resp, body = self.client.list_servers_with_detail(params)
+        body = self.client.list_servers_with_detail(params)
         servers = body['servers']
 
         self.assertIn(self.s1['id'], map(lambda x: x['id'], servers))
@@ -284,6 +286,9 @@
     def test_list_servers_filtered_by_ip(self):
         # Filter servers by ip
         # Here should be listed 1 server
+        if not self.fixed_network_name:
+            msg = 'fixed_network_name needs to be configured to run this test'
+            raise self.skipException(msg)
         self.s1 = self.client.get_server(self.s1['id'])
         ip = self.s1['addresses'][self.fixed_network_name][0]['addr']
         params = {'ip': ip}
@@ -302,6 +307,9 @@
         # Filter servers by regex ip
         # List all servers filtered by part of ip address.
         # Here should be listed all servers
+        if not self.fixed_network_name:
+            msg = 'fixed_network_name needs to be configured to run this test'
+            raise self.skipException(msg)
         self.s1 = self.client.get_server(self.s1['id'])
         ip = self.s1['addresses'][self.fixed_network_name][0]['addr'][0:-3]
         params = {'ip': ip}
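
The setup change above stops reading ``CONF.compute.fixed_network_name``
directly: the network now comes from ``cls.get_tenant_network()`` and is passed
to ``create_test_server()`` through ``fixed_network.set_networks_kwarg()``, and
the two IP-filter tests skip themselves when no network name could be resolved.
A rough sketch of what such a helper is assumed to return (the real
implementation lives in ``tempest/common/fixed_network.py`` and may differ)::

    def set_networks_kwarg(network, kwargs=None):
        # Assumed behaviour: wrap the tenant network id in nova's 'networks'
        # create argument, leaving any caller-supplied 'networks' untouched.
        params = dict(kwargs or {})
        if network and 'networks' not in params:
            params['networks'] = [{'uuid': network['id']}]
        return params

    # With a tenant network the server lands on that network; without one the
    # kwargs stay empty and the scheduler picks (or fails on ambiguity).
    print(set_networks_kwarg({'id': 'net-uuid', 'name': 'private'}))
    print(set_networks_kwarg(None))
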
diff --git a/tempest/api/compute/test_authorization.py b/tempest/api/compute/test_authorization.py
index 6502e70..f9ee75b 100644
--- a/tempest/api/compute/test_authorization.py
+++ b/tempest/api/compute/test_authorization.py
@@ -15,13 +15,13 @@
 
 import StringIO
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest.api.compute import base
 from tempest import clients
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
@@ -214,7 +214,7 @@
         # A create keypair request should fail if the tenant id does not match
         # the current user
         # POST keypair with other user tenant
-        k_name = data_utils.rand_name('keypair-')
+        k_name = data_utils.rand_name('keypair')
         try:
             # Change the base URL to impersonate another user
             self.alt_keypairs_client.auth_provider.set_alt_auth_data(
@@ -269,7 +269,7 @@
         # A create security group request should fail if the tenant id does not
         # match the current user
         # POST security group with other user tenant
-        s_name = data_utils.rand_name('security-')
+        s_name = data_utils.rand_name('security')
         s_description = data_utils.rand_name('security')
         try:
             # Change the base URL to impersonate another user
diff --git a/tempest/api/compute/test_extensions.py b/tempest/api/compute/test_extensions.py
index 1063f90..5b14071 100644
--- a/tempest/api/compute/test_extensions.py
+++ b/tempest/api/compute/test_extensions.py
@@ -13,10 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 
 from tempest.api.compute import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
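
Several hunks in this change replace ``tempest.openstack.common.log`` (the
oslo-incubator copy) with the released ``oslo_log`` package. Only the import
path changes; module-level logger usage stays the same, roughly::

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)
    LOG.debug("loaded %d extensions", 3)  # illustrative call, not from the diff
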
diff --git a/tempest/api/compute/test_live_block_migration_negative.py b/tempest/api/compute/test_live_block_migration_negative.py
index e1d353f..b59e334 100644
--- a/tempest/api/compute/test_live_block_migration_negative.py
+++ b/tempest/api/compute/test_live_block_migration_negative.py
@@ -49,7 +49,7 @@
     @test.idempotent_id('7fb7856e-ae92-44c9-861a-af62d7830bcb')
     def test_invalid_host_for_migration(self):
         # Migrating to an invalid host should not change the status
-        target_host = data_utils.rand_name('host-')
+        target_host = data_utils.rand_name('host')
         server = self.create_test_server(wait_until="ACTIVE")
         server_id = server['id']
 
diff --git a/tempest/api/compute/volumes/test_volumes_negative.py b/tempest/api/compute/volumes/test_volumes_negative.py
index 50ce198..fb9f365 100644
--- a/tempest/api/compute/volumes/test_volumes_negative.py
+++ b/tempest/api/compute/volumes/test_volumes_negative.py
@@ -62,7 +62,7 @@
     def test_create_volume_with_invalid_size(self):
         # Negative: Should not be able to create volume with invalid size
         # in request
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='#$%', display_name=v_name, metadata=metadata)
@@ -72,7 +72,7 @@
     def test_create_volume_with_out_passing_size(self):
         # Negative: Should not be able to create volume without passing size
         # in request
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='', display_name=v_name, metadata=metadata)
@@ -81,7 +81,7 @@
     @test.idempotent_id('8cce995e-0a83-479a-b94d-e1e40b8a09d1')
     def test_create_volume_with_size_zero(self):
         # Negative: Should not be able to create volume with size zero
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='0', display_name=v_name, metadata=metadata)
diff --git a/tempest/api/data_processing/base.py b/tempest/api/data_processing/base.py
index 5992921..d91fbaa 100644
--- a/tempest/api/data_processing/base.py
+++ b/tempest/api/data_processing/base.py
@@ -66,7 +66,6 @@
                               cls.client.delete_job_binary_internal)
         cls.cleanup_resources(getattr(cls, '_data_sources', []),
                               cls.client.delete_data_source)
-        cls.clear_isolated_creds()
         super(BaseDataProcessingTest, cls).resource_cleanup()
 
     @staticmethod
diff --git a/tempest/api/database/base.py b/tempest/api/database/base.py
index 31c5d2a..1868f23 100644
--- a/tempest/api/database/base.py
+++ b/tempest/api/database/base.py
@@ -13,8 +13,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 from tempest import config
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
diff --git a/tempest/api/identity/__init__.py b/tempest/api/identity/__init__.py
index 9614b49..17efdcc 100644
--- a/tempest/api/identity/__init__.py
+++ b/tempest/api/identity/__init__.py
@@ -13,7 +13,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tempest/api/identity/admin/v2/test_roles.py b/tempest/api/identity/admin/v2/test_roles.py
index dd5164d..b3746f2 100644
--- a/tempest/api/identity/admin/v2/test_roles.py
+++ b/tempest/api/identity/admin/v2/test_roles.py
@@ -26,7 +26,7 @@
     def resource_setup(cls):
         super(RolesTestJSON, cls).resource_setup()
         for _ in moves.xrange(5):
-            role_name = data_utils.rand_name(name='role-')
+            role_name = data_utils.rand_name(name='role')
             role = cls.client.create_role(role_name)
             cls.data.roles.append(role)
 
@@ -58,7 +58,7 @@
     @test.idempotent_id('c62d909d-6c21-48c0-ae40-0a0760e6db5e')
     def test_role_create_delete(self):
         """Role should be created, verified, and deleted."""
-        role_name = data_utils.rand_name(name='role-test-')
+        role_name = data_utils.rand_name(name='role-test')
         body = self.client.create_role(role_name)
         self.assertEqual(role_name, body['name'])
 
diff --git a/tempest/api/identity/admin/v2/test_roles_negative.py b/tempest/api/identity/admin/v2/test_roles_negative.py
index 662d1ea..5e197ec 100644
--- a/tempest/api/identity/admin/v2/test_roles_negative.py
+++ b/tempest/api/identity/admin/v2/test_roles_negative.py
@@ -58,7 +58,7 @@
     @test.idempotent_id('585c8998-a8a4-4641-a5dd-abef7a8ced00')
     def test_create_role_by_unauthorized_user(self):
         # Non-administrator user should not be able to create role
-        role_name = data_utils.rand_name(name='role-')
+        role_name = data_utils.rand_name(name='role')
         self.assertRaises(lib_exc.Forbidden,
                           self.non_admin_client.create_role, role_name)
 
@@ -68,7 +68,7 @@
         # Request to create role without a valid token should fail
         token = self.client.auth_provider.get_token()
         self.client.delete_token(token)
-        role_name = data_utils.rand_name(name='role-')
+        role_name = data_utils.rand_name(name='role')
         self.assertRaises(lib_exc.Unauthorized,
                           self.client.create_role, role_name)
         self.client.auth_provider.clear_auth()
@@ -77,7 +77,7 @@
     @test.idempotent_id('c0cde2c8-81c1-4bb0-8fe2-cf615a3547a8')
     def test_role_create_duplicate(self):
         # Role names should be unique
-        role_name = data_utils.rand_name(name='role-dup-')
+        role_name = data_utils.rand_name(name='role-dup')
         body = self.client.create_role(role_name)
         role1_id = body.get('id')
         self.addCleanup(self.client.delete_role, role1_id)
@@ -88,7 +88,7 @@
     @test.idempotent_id('15347635-b5b1-4a87-a280-deb2bd6d865e')
     def test_delete_role_by_unauthorized_user(self):
         # Non-administrator user should not be able to delete role
-        role_name = data_utils.rand_name(name='role-')
+        role_name = data_utils.rand_name(name='role')
         body = self.client.create_role(role_name)
         self.data.roles.append(body)
         role_id = body.get('id')
@@ -99,7 +99,7 @@
     @test.idempotent_id('44b60b20-70de-4dac-beaf-a3fc2650a16b')
     def test_delete_role_request_without_token(self):
         # Request to delete role without a valid token should fail
-        role_name = data_utils.rand_name(name='role-')
+        role_name = data_utils.rand_name(name='role')
         body = self.client.create_role(role_name)
         self.data.roles.append(body)
         role_id = body.get('id')
diff --git a/tempest/api/identity/admin/v2/test_services.py b/tempest/api/identity/admin/v2/test_services.py
index 0759ec5..326c78b 100644
--- a/tempest/api/identity/admin/v2/test_services.py
+++ b/tempest/api/identity/admin/v2/test_services.py
@@ -35,9 +35,9 @@
     def test_create_get_delete_service(self):
         # GET Service
         # Creating a Service
-        name = data_utils.rand_name('service-')
-        type = data_utils.rand_name('type--')
-        description = data_utils.rand_name('description-')
+        name = data_utils.rand_name('service')
+        type = data_utils.rand_name('type')
+        description = data_utils.rand_name('description')
         service_data = self.client.create_service(
             name, type, description=description)
         self.assertFalse(service_data['id'] is None)
@@ -67,8 +67,8 @@
     @test.idempotent_id('5d3252c8-e555-494b-a6c8-e11d7335da42')
     def test_create_service_without_description(self):
         # Create a service only with name and type
-        name = data_utils.rand_name('service-')
-        type = data_utils.rand_name('type--')
+        name = data_utils.rand_name('service')
+        type = data_utils.rand_name('type')
         service = self.client.create_service(name, type)
         self.assertIn('id', service)
         self.addCleanup(self._del_service, service['id'])
@@ -83,9 +83,9 @@
         # Create, List, Verify and Delete Services
         services = []
         for _ in moves.xrange(3):
-            name = data_utils.rand_name('service-')
-            type = data_utils.rand_name('type--')
-            description = data_utils.rand_name('description-')
+            name = data_utils.rand_name('service')
+            type = data_utils.rand_name('type')
+            description = data_utils.rand_name('description')
             service = self.client.create_service(
                 name, type, description=description)
             services.append(service)
diff --git a/tempest/api/identity/admin/v2/test_tenant_negative.py b/tempest/api/identity/admin/v2/test_tenant_negative.py
index 8fd1f5a..3ee412b 100644
--- a/tempest/api/identity/admin/v2/test_tenant_negative.py
+++ b/tempest/api/identity/admin/v2/test_tenant_negative.py
@@ -44,7 +44,7 @@
     @test.idempotent_id('162ba316-f18b-4987-8c0c-fd9140cd63ed')
     def test_tenant_delete_by_unauthorized_user(self):
         # Non-administrator user should not be able to delete a tenant
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         tenant = self.client.create_tenant(tenant_name)
         self.data.tenants.append(tenant)
         self.assertRaises(lib_exc.Forbidden,
@@ -54,7 +54,7 @@
     @test.idempotent_id('e450db62-2e9d-418f-893a-54772d6386b1')
     def test_tenant_delete_request_without_token(self):
         # Request to delete a tenant without a valid token should fail
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         tenant = self.client.create_tenant(tenant_name)
         self.data.tenants.append(tenant)
         token = self.client.auth_provider.get_token()
@@ -74,7 +74,7 @@
     @test.idempotent_id('af16f44b-a849-46cb-9f13-a751c388f739')
     def test_tenant_create_duplicate(self):
         # Tenant names should be unique
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         body = self.client.create_tenant(tenant_name)
         tenant = body
         self.data.tenants.append(tenant)
@@ -89,7 +89,7 @@
     @test.idempotent_id('d26b278a-6389-4702-8d6e-5980d80137e0')
     def test_create_tenant_by_unauthorized_user(self):
         # Non-administrator user should not be authorized to create a tenant
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         self.assertRaises(lib_exc.Forbidden,
                           self.non_admin_client.create_tenant, tenant_name)
 
@@ -97,7 +97,7 @@
     @test.idempotent_id('a3ee9d7e-6920-4dd5-9321-d4b2b7f0a638')
     def test_create_tenant_request_without_token(self):
         # Create tenant request without a token should not be authorized
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         token = self.client.auth_provider.get_token()
         self.client.delete_token(token)
         self.assertRaises(lib_exc.Unauthorized, self.client.create_tenant,
@@ -130,7 +130,7 @@
     @test.idempotent_id('41704dc5-c5f7-4f79-abfa-76e6fedc570b')
     def test_tenant_update_by_unauthorized_user(self):
         # Non-administrator user should not be able to update a tenant
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         tenant = self.client.create_tenant(tenant_name)
         self.data.tenants.append(tenant)
         self.assertRaises(lib_exc.Forbidden,
@@ -140,7 +140,7 @@
     @test.idempotent_id('7a421573-72c7-4c22-a98e-ce539219c657')
     def test_tenant_update_request_without_token(self):
         # Request to update a tenant without a valid token should fail
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         tenant = self.client.create_tenant(tenant_name)
         self.data.tenants.append(tenant)
         token = self.client.auth_provider.get_token()
diff --git a/tempest/api/identity/admin/v2/test_tenants.py b/tempest/api/identity/admin/v2/test_tenants.py
index 0be25a9..5dba65a 100644
--- a/tempest/api/identity/admin/v2/test_tenants.py
+++ b/tempest/api/identity/admin/v2/test_tenants.py
@@ -49,8 +49,8 @@
     @test.idempotent_id('d25e9f24-1310-4d29-b61b-d91299c21d6d')
     def test_tenant_create_with_description(self):
         # Create tenant with a description
-        tenant_name = data_utils.rand_name(name='tenant-')
-        tenant_desc = data_utils.rand_name(name='desc-')
+        tenant_name = data_utils.rand_name(name='tenant')
+        tenant_desc = data_utils.rand_name(name='desc')
         body = self.client.create_tenant(tenant_name,
                                          description=tenant_desc)
         tenant = body
@@ -70,7 +70,7 @@
     @test.idempotent_id('670bdddc-1cd7-41c7-b8e2-751cfb67df50')
     def test_tenant_create_enabled(self):
         # Create a tenant that is enabled
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         body = self.client.create_tenant(tenant_name, enabled=True)
         tenant = body
         self.data.tenants.append(tenant)
@@ -87,7 +87,7 @@
     @test.idempotent_id('3be22093-b30f-499d-b772-38340e5e16fb')
     def test_tenant_create_not_enabled(self):
         # Create a tenant that is not enabled
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         body = self.client.create_tenant(tenant_name, enabled=False)
         tenant = body
         self.data.tenants.append(tenant)
@@ -106,7 +106,7 @@
     @test.idempotent_id('781f2266-d128-47f3-8bdb-f70970add238')
     def test_tenant_update_name(self):
         # Update name attribute of a tenant
-        t_name1 = data_utils.rand_name(name='tenant-')
+        t_name1 = data_utils.rand_name(name='tenant')
         body = self.client.create_tenant(t_name1)
         tenant = body
         self.data.tenants.append(tenant)
@@ -114,7 +114,7 @@
         t_id = body['id']
         resp1_name = body['name']
 
-        t_name2 = data_utils.rand_name(name='tenant2-')
+        t_name2 = data_utils.rand_name(name='tenant2')
         body = self.client.update_tenant(t_id, name=t_name2)
         resp2_name = body['name']
         self.assertNotEqual(resp1_name, resp2_name)
@@ -133,8 +133,8 @@
     @test.idempotent_id('859fcfe1-3a03-41ef-86f9-b19a47d1cd87')
     def test_tenant_update_desc(self):
         # Update description attribute of a tenant
-        t_name = data_utils.rand_name(name='tenant-')
-        t_desc = data_utils.rand_name(name='desc-')
+        t_name = data_utils.rand_name(name='tenant')
+        t_desc = data_utils.rand_name(name='desc')
         body = self.client.create_tenant(t_name, description=t_desc)
         tenant = body
         self.data.tenants.append(tenant)
@@ -142,7 +142,7 @@
         t_id = body['id']
         resp1_desc = body['description']
 
-        t_desc2 = data_utils.rand_name(name='desc2-')
+        t_desc2 = data_utils.rand_name(name='desc2')
         body = self.client.update_tenant(t_id, description=t_desc2)
         resp2_desc = body['description']
         self.assertNotEqual(resp1_desc, resp2_desc)
@@ -161,7 +161,7 @@
     @test.idempotent_id('8fc8981f-f12d-4c66-9972-2bdcf2bc2e1a')
     def test_tenant_update_enable(self):
         # Update the enabled attribute of a tenant
-        t_name = data_utils.rand_name(name='tenant-')
+        t_name = data_utils.rand_name(name='tenant')
         t_en = False
         body = self.client.create_tenant(t_name, enabled=t_en)
         tenant = body
diff --git a/tempest/api/identity/admin/v2/test_tokens.py b/tempest/api/identity/admin/v2/test_tokens.py
index 29ba1de..860e5af 100644
--- a/tempest/api/identity/admin/v2/test_tokens.py
+++ b/tempest/api/identity/admin/v2/test_tokens.py
@@ -25,10 +25,10 @@
     @test.idempotent_id('453ad4d5-e486-4b2f-be72-cffc8149e586')
     def test_create_get_delete_token(self):
         # get a token by username and password
-        user_name = data_utils.rand_name(name='user-')
-        user_password = data_utils.rand_name(name='pass-')
+        user_name = data_utils.rand_name(name='user')
+        user_password = data_utils.rand_name(name='pass')
         # first: create a tenant
-        tenant_name = data_utils.rand_name(name='tenant-')
+        tenant_name = data_utils.rand_name(name='tenant')
         tenant = self.client.create_tenant(tenant_name)
         self.data.tenants.append(tenant)
         # second: create a user
@@ -60,8 +60,8 @@
         """
 
         # Create a user.
-        user_name = data_utils.rand_name(name='user-')
-        user_password = data_utils.rand_name(name='pass-')
+        user_name = data_utils.rand_name(name='user')
+        user_password = data_utils.rand_name(name='pass')
         tenant_id = None  # No default tenant so will get unscoped token.
         email = ''
         user = self.client.create_user(user_name, user_password,
@@ -69,16 +69,16 @@
         self.data.users.append(user)
 
         # Create a couple tenants.
-        tenant1_name = data_utils.rand_name(name='tenant-')
+        tenant1_name = data_utils.rand_name(name='tenant')
         tenant1 = self.client.create_tenant(tenant1_name)
         self.data.tenants.append(tenant1)
 
-        tenant2_name = data_utils.rand_name(name='tenant-')
+        tenant2_name = data_utils.rand_name(name='tenant')
         tenant2 = self.client.create_tenant(tenant2_name)
         self.data.tenants.append(tenant2)
 
         # Create a role
-        role_name = data_utils.rand_name(name='role-')
+        role_name = data_utils.rand_name(name='role')
         role = self.client.create_role(role_name)
         self.data.roles.append(role)
 
diff --git a/tempest/api/identity/admin/v2/test_users.py b/tempest/api/identity/admin/v2/test_users.py
index 2ca5595..f47f985 100644
--- a/tempest/api/identity/admin/v2/test_users.py
+++ b/tempest/api/identity/admin/v2/test_users.py
@@ -25,8 +25,8 @@
     @classmethod
     def resource_setup(cls):
         super(UsersTestJSON, cls).resource_setup()
-        cls.alt_user = data_utils.rand_name('test_user_')
-        cls.alt_password = data_utils.rand_name('pass_')
+        cls.alt_user = data_utils.rand_name('test_user')
+        cls.alt_password = data_utils.rand_name('pass')
         cls.alt_email = cls.alt_user + '@testmail.tm'
 
     @test.attr(type='smoke')
@@ -45,7 +45,7 @@
     def test_create_user_with_enabled(self):
         # Create a user with enabled : False
         self.data.setup_test_tenant()
-        name = data_utils.rand_name('test_user_')
+        name = data_utils.rand_name('test_user')
         user = self.client.create_user(name, self.alt_password,
                                        self.data.tenant['id'],
                                        self.alt_email, enabled=False)
@@ -58,7 +58,7 @@
     @test.idempotent_id('39d05857-e8a5-4ed4-ba83-0b52d3ab97ee')
     def test_update_user(self):
         # Test case to check if updating of user attributes is successful.
-        test_user = data_utils.rand_name('test_user_')
+        test_user = data_utils.rand_name('test_user')
         self.data.setup_test_tenant()
         user = self.client.create_user(test_user, self.alt_password,
                                        self.data.tenant['id'],
@@ -66,7 +66,7 @@
         # Delete the User at the end of this method
         self.addCleanup(self.client.delete_user, user['id'])
         # Updating user details with new values
-        u_name2 = data_utils.rand_name('user2-')
+        u_name2 = data_utils.rand_name('user2')
         u_email2 = u_name2 + '@testmail.tm'
         update_user = self.client.update_user(user['id'], name=u_name2,
                                               email=u_email2,
@@ -85,7 +85,7 @@
     @test.idempotent_id('29ed26f4-a74e-4425-9a85-fdb49fa269d2')
     def test_delete_user(self):
         # Delete a user
-        test_user = data_utils.rand_name('test_user_')
+        test_user = data_utils.rand_name('test_user')
         self.data.setup_test_tenant()
         user = self.client.create_user(test_user, self.alt_password,
                                        self.data.tenant['id'],
@@ -139,14 +139,14 @@
         self.data.setup_test_tenant()
         user_ids = list()
         fetched_user_ids = list()
-        alt_tenant_user1 = data_utils.rand_name('tenant_user1_')
+        alt_tenant_user1 = data_utils.rand_name('tenant_user1')
         user1 = self.client.create_user(alt_tenant_user1, 'password1',
                                         self.data.tenant['id'],
                                         'user1@123')
         user_ids.append(user1['id'])
         self.data.users.append(user1)
 
-        alt_tenant_user2 = data_utils.rand_name('tenant_user2_')
+        alt_tenant_user2 = data_utils.rand_name('tenant_user2')
         user2 = self.client.create_user(alt_tenant_user2, 'password2',
                                         self.data.tenant['id'],
                                         'user2@123')
@@ -179,7 +179,7 @@
         role = self.client.assign_user_role(tenant['id'], user['id'],
                                             role['id'])
 
-        alt_user2 = data_utils.rand_name('second_user_')
+        alt_user2 = data_utils.rand_name('second_user')
         second_user = self.client.create_user(alt_user2, 'password1',
                                               self.data.tenant['id'],
                                               'user2@123')
@@ -205,7 +205,7 @@
         # Test case to check if updating of user password is successful.
         self.data.setup_test_user()
         # Updating the user with new password
-        new_pass = data_utils.rand_name('pass-')
+        new_pass = data_utils.rand_name('pass')
         update_user = self.client.update_user_password(
             self.data.user['id'], new_pass)
         self.assertEqual(update_user['id'], self.data.user['id'])
diff --git a/tempest/api/identity/admin/v2/test_users_negative.py b/tempest/api/identity/admin/v2/test_users_negative.py
index 387b714..dcf5565 100644
--- a/tempest/api/identity/admin/v2/test_users_negative.py
+++ b/tempest/api/identity/admin/v2/test_users_negative.py
@@ -27,8 +27,8 @@
     @classmethod
     def resource_setup(cls):
         super(UsersNegativeTestJSON, cls).resource_setup()
-        cls.alt_user = data_utils.rand_name('test_user_')
-        cls.alt_password = data_utils.rand_name('pass_')
+        cls.alt_user = data_utils.rand_name('test_user')
+        cls.alt_password = data_utils.rand_name('pass')
         cls.alt_email = cls.alt_user + '@testmail.tm'
 
     @test.attr(type=['negative', 'gate'])
@@ -97,7 +97,7 @@
     def test_create_user_with_enabled_non_bool(self):
         # Creating a user with a non-boolean 'enabled' value should fail
         self.data.setup_test_tenant()
-        name = data_utils.rand_name('test_user_')
+        name = data_utils.rand_name('test_user')
         self.assertRaises(lib_exc.BadRequest, self.client.create_user,
                           name, self.alt_password,
                           self.data.tenant['id'],
@@ -107,7 +107,7 @@
     @test.idempotent_id('3d07e294-27a0-4144-b780-a2a1bf6fee19')
     def test_update_user_for_non_existent_user(self):
         # Attempting to update a non-existent user should fail
-        user_name = data_utils.rand_name('user-')
+        user_name = data_utils.rand_name('user')
         non_existent_id = str(uuid.uuid4())
         self.assertRaises(lib_exc.NotFound, self.client.update_user,
                           non_existent_id, name=user_name)
diff --git a/tempest/api/identity/admin/v3/test_credentials.py b/tempest/api/identity/admin/v3/test_credentials.py
index c427615..9bad8e0 100644
--- a/tempest/api/identity/admin/v3/test_credentials.py
+++ b/tempest/api/identity/admin/v3/test_credentials.py
@@ -27,14 +27,14 @@
         cls.projects = list()
         cls.creds_list = [['project_id', 'user_id', 'id'],
                           ['access', 'secret']]
-        u_name = data_utils.rand_name('user-')
+        u_name = data_utils.rand_name('user')
         u_desc = '%s description' % u_name
         u_email = '%s@testmail.tm' % u_name
-        u_password = data_utils.rand_name('pass-')
+        u_password = data_utils.rand_name('pass')
         for i in range(2):
             cls.project = cls.client.create_project(
-                data_utils.rand_name('project-'),
-                description=data_utils.rand_name('project-desc-'))
+                data_utils.rand_name('project'),
+                description=data_utils.rand_name('project-desc'))
             cls.projects.append(cls.project['id'])
 
         cls.user_body = cls.client.create_user(
@@ -54,8 +54,8 @@
     @test.attr(type='smoke')
     @test.idempotent_id('7cd59bf9-bda4-4c72-9467-d21cab278355')
     def test_credentials_create_get_update_delete(self):
-        keys = [data_utils.rand_name('Access-'),
-                data_utils.rand_name('Secret-')]
+        keys = [data_utils.rand_name('Access'),
+                data_utils.rand_name('Secret')]
         cred = self.creds_client.create_credential(
             keys[0], keys[1], self.user_body['id'],
             self.projects[0])
@@ -65,8 +65,8 @@
         for value2 in self.creds_list[1]:
             self.assertIn(value2, cred['blob'])
 
-        new_keys = [data_utils.rand_name('NewAccess-'),
-                    data_utils.rand_name('NewSecret-')]
+        new_keys = [data_utils.rand_name('NewAccess'),
+                    data_utils.rand_name('NewSecret')]
         update_body = self.creds_client.update_credential(
             cred['id'], access_key=new_keys[0], secret_key=new_keys[1],
             project_id=self.projects[1])
@@ -92,8 +92,8 @@
 
         for i in range(2):
             cred = self.creds_client.create_credential(
-                data_utils.rand_name('Access-'),
-                data_utils.rand_name('Secret-'),
+                data_utils.rand_name('Access'),
+                data_utils.rand_name('Secret'),
                 self.user_body['id'], self.projects[0])
             created_cred_ids.append(cred['id'])
             self.addCleanup(self._delete_credential, cred['id'])
diff --git a/tempest/api/identity/admin/v3/test_default_project_id.py b/tempest/api/identity/admin/v3/test_default_project_id.py
index f1cc530..9841cc8 100644
--- a/tempest/api/identity/admin/v3/test_default_project_id.py
+++ b/tempest/api/identity/admin/v3/test_default_project_id.py
@@ -66,7 +66,7 @@
                          "doesn't have domain id " + dom_id)
 
         # get roles and find the admin role
-        admin_role = self.get_role_by_name("admin")
+        admin_role = self.get_role_by_name(CONF.identity.admin_role)
         admin_role_id = admin_role['id']
 
         # grant the admin role to the user on his project
@@ -76,7 +76,7 @@
         # create a new client with user's credentials (NOTE: unscoped token!)
         creds = auth.KeystoneV3Credentials(username=user_name,
                                            password=user_name,
-                                           domain_name=dom_name)
+                                           user_domain_name=dom_name)
         auth_provider = manager.get_auth_provider(creds)
         creds = auth_provider.fill_credentials()
         admin_client = clients.Manager(credentials=creds)
diff --git a/tempest/api/identity/admin/v3/test_domains.py b/tempest/api/identity/admin/v3/test_domains.py
index 1f6e651..c6d379c 100644
--- a/tempest/api/identity/admin/v3/test_domains.py
+++ b/tempest/api/identity/admin/v3/test_domains.py
@@ -35,8 +35,8 @@
         fetched_ids = list()
         for _ in range(3):
             domain = self.client.create_domain(
-                data_utils.rand_name('domain-'),
-                description=data_utils.rand_name('domain-desc-'))
+                data_utils.rand_name('domain'),
+                description=data_utils.rand_name('domain-desc'))
             # Delete the domain at the end of this method
             self.addCleanup(self._delete_domain, domain['id'])
             domain_ids.append(domain['id'])
@@ -50,8 +50,8 @@
     @test.attr(type='smoke')
     @test.idempotent_id('f2f5b44a-82e8-4dad-8084-0661ea3b18cf')
     def test_create_update_delete_domain(self):
-        d_name = data_utils.rand_name('domain-')
-        d_desc = data_utils.rand_name('domain-desc-')
+        d_name = data_utils.rand_name('domain')
+        d_desc = data_utils.rand_name('domain-desc')
         domain = self.client.create_domain(
             d_name, description=d_desc)
         self.addCleanup(self._delete_domain, domain['id'])
@@ -64,8 +64,8 @@
         self.assertEqual(d_name, domain['name'])
         self.assertEqual(d_desc, domain['description'])
         self.assertEqual(True, domain['enabled'])
-        new_desc = data_utils.rand_name('new-desc-')
-        new_name = data_utils.rand_name('new-name-')
+        new_desc = data_utils.rand_name('new-desc')
+        new_name = data_utils.rand_name('new-name')
 
         updated_domain = self.client.update_domain(
             domain['id'], name=new_name, description=new_desc)
@@ -88,8 +88,8 @@
     @test.idempotent_id('036df86e-bb5d-42c0-a7c2-66b9db3a6046')
     def test_create_domain_with_disabled_status(self):
         # Create domain with enabled status as false
-        d_name = data_utils.rand_name('domain-')
-        d_desc = data_utils.rand_name('domain-desc-')
+        d_name = data_utils.rand_name('domain')
+        d_desc = data_utils.rand_name('domain-desc')
         domain = self.client.create_domain(
             d_name, description=d_desc, enabled=False)
         self.addCleanup(self.client.delete_domain, domain['id'])
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index c683f59..2d8c04f 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -31,9 +31,9 @@
     def resource_setup(cls):
         super(EndPointsTestJSON, cls).resource_setup()
         cls.service_ids = list()
-        s_name = data_utils.rand_name('service-')
-        s_type = data_utils.rand_name('type--')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('service')
+        s_type = data_utils.rand_name('type')
+        s_description = data_utils.rand_name('description')
         cls.service_data =\
             cls.service_client.create_service(s_name, s_type,
                                               description=s_description)
@@ -107,9 +107,9 @@
                                         enabled=True)
         self.addCleanup(self.client.delete_endpoint, endpoint_for_update['id'])
         # Create another service so the endpoint can be updated with its ID
-        s_name = data_utils.rand_name('service-')
-        s_type = data_utils.rand_name('type--')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('service')
+        s_type = data_utils.rand_name('type')
+        s_description = data_utils.rand_name('description')
         service2 =\
             self.service_client.create_service(s_name, s_type,
                                                description=s_description)
diff --git a/tempest/api/identity/admin/v3/test_endpoints_negative.py b/tempest/api/identity/admin/v3/test_endpoints_negative.py
index e2b7edc..8b85566 100644
--- a/tempest/api/identity/admin/v3/test_endpoints_negative.py
+++ b/tempest/api/identity/admin/v3/test_endpoints_negative.py
@@ -33,9 +33,9 @@
     def resource_setup(cls):
         super(EndpointsNegativeTestJSON, cls).resource_setup()
         cls.service_ids = list()
-        s_name = data_utils.rand_name('service-')
-        s_type = data_utils.rand_name('type--')
-        s_description = data_utils.rand_name('description-')
+        s_name = data_utils.rand_name('service')
+        s_type = data_utils.rand_name('type')
+        s_description = data_utils.rand_name('description')
         cls.service_data = (
             cls.service_client.create_service(s_name, s_type,
                                               description=s_description))
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 98d1846..b366d1e 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -75,13 +75,13 @@
     def test_list_user_groups(self):
         # create a user
         user = self.client.create_user(
-            data_utils.rand_name('User-'),
-            password=data_utils.rand_name('Pass-'))
+            data_utils.rand_name('User'),
+            password=data_utils.rand_name('Pass'))
         self.addCleanup(self.client.delete_user, user['id'])
         # create two groups, and add user into them
         groups = []
         for i in range(2):
-            name = data_utils.rand_name('Group-')
+            name = data_utils.rand_name('Group')
             group = self.client.create_group(name)
             groups.append(group)
             self.addCleanup(self.client.delete_group, group['id'])
diff --git a/tempest/api/identity/admin/v3/test_policies.py b/tempest/api/identity/admin/v3/test_policies.py
index 63d2b0d..bad56f4 100644
--- a/tempest/api/identity/admin/v3/test_policies.py
+++ b/tempest/api/identity/admin/v3/test_policies.py
@@ -31,8 +31,8 @@
         policy_ids = list()
         fetched_ids = list()
         for _ in range(3):
-            blob = data_utils.rand_name('BlobName-')
-            policy_type = data_utils.rand_name('PolicyType-')
+            blob = data_utils.rand_name('BlobName')
+            policy_type = data_utils.rand_name('PolicyType')
             policy = self.policy_client.create_policy(blob,
                                                       policy_type)
             # Delete the Policy at the end of this method
@@ -49,8 +49,8 @@
     @test.idempotent_id('e544703a-2f03-4cf2-9b0f-350782fdb0d3')
     def test_create_update_delete_policy(self):
         # Test to update policy
-        blob = data_utils.rand_name('BlobName-')
-        policy_type = data_utils.rand_name('PolicyType-')
+        blob = data_utils.rand_name('BlobName')
+        policy_type = data_utils.rand_name('PolicyType')
         policy = self.policy_client.create_policy(blob, policy_type)
         self.addCleanup(self._delete_policy, policy['id'])
         self.assertIn('id', policy)
@@ -60,7 +60,7 @@
         self.assertEqual(blob, policy['blob'])
         self.assertEqual(policy_type, policy['type'])
         # Update policy
-        update_type = data_utils.rand_name('UpdatedPolicyType-')
+        update_type = data_utils.rand_name('UpdatedPolicyType')
         data = self.policy_client.update_policy(
             policy['id'], type=update_type)
         self.assertIn('type', data)
diff --git a/tempest/api/identity/admin/v3/test_projects.py b/tempest/api/identity/admin/v3/test_projects.py
index 69b1fb4..5f8b3d1 100644
--- a/tempest/api/identity/admin/v3/test_projects.py
+++ b/tempest/api/identity/admin/v3/test_projects.py
@@ -25,8 +25,8 @@
     @test.idempotent_id('0ecf465c-0dc4-4532-ab53-91ffeb74d12d')
     def test_project_create_with_description(self):
         # Create project with a description
-        project_name = data_utils.rand_name('project-')
-        project_desc = data_utils.rand_name('desc-')
+        project_name = data_utils.rand_name('project')
+        project_desc = data_utils.rand_name('desc')
         project = self.client.create_project(
             project_name, description=project_desc)
         self.data.projects.append(project)
@@ -59,7 +59,7 @@
     @test.idempotent_id('1f66dc76-50cc-4741-a200-af984509e480')
     def test_project_create_enabled(self):
         # Create a project that is enabled
-        project_name = data_utils.rand_name('project-')
+        project_name = data_utils.rand_name('project')
         project = self.client.create_project(
             project_name, enabled=True)
         self.data.projects.append(project)
@@ -74,7 +74,7 @@
     @test.idempotent_id('78f96a9c-e0e0-4ee6-a3ba-fbf6dfd03207')
     def test_project_create_not_enabled(self):
         # Create a project that is not enabled
-        project_name = data_utils.rand_name('project-')
+        project_name = data_utils.rand_name('project')
         project = self.client.create_project(
             project_name, enabled=False)
         self.data.projects.append(project)
@@ -90,13 +90,13 @@
     @test.idempotent_id('f608f368-048c-496b-ad63-d286c26dab6b')
     def test_project_update_name(self):
         # Update name attribute of a project
-        p_name1 = data_utils.rand_name('project-')
+        p_name1 = data_utils.rand_name('project')
         project = self.client.create_project(p_name1)
         self.data.projects.append(project)
 
         resp1_name = project['name']
 
-        p_name2 = data_utils.rand_name('project2-')
+        p_name2 = data_utils.rand_name('project2')
         body = self.client.update_project(project['id'], name=p_name2)
         resp2_name = body['name']
         self.assertNotEqual(resp1_name, resp2_name)
@@ -112,14 +112,14 @@
     @test.idempotent_id('f138b715-255e-4a7d-871d-351e1ef2e153')
     def test_project_update_desc(self):
         # Update description attribute of a project
-        p_name = data_utils.rand_name('project-')
-        p_desc = data_utils.rand_name('desc-')
+        p_name = data_utils.rand_name('project')
+        p_desc = data_utils.rand_name('desc')
         project = self.client.create_project(
             p_name, description=p_desc)
         self.data.projects.append(project)
         resp1_desc = project['description']
 
-        p_desc2 = data_utils.rand_name('desc2-')
+        p_desc2 = data_utils.rand_name('desc2')
         body = self.client.update_project(
             project['id'], description=p_desc2)
         resp2_desc = body['description']
@@ -136,7 +136,7 @@
     @test.idempotent_id('b6b25683-c97f-474d-a595-55d410b68100')
     def test_project_update_enable(self):
         # Update the enabled attribute of a project
-        p_name = data_utils.rand_name('project-')
+        p_name = data_utils.rand_name('project')
         p_en = False
         project = self.client.create_project(p_name, enabled=p_en)
         self.data.projects.append(project)
@@ -161,15 +161,15 @@
     def test_associate_user_to_project(self):
         # Associate a user to a project
         # Create a Project
-        p_name = data_utils.rand_name('project-')
+        p_name = data_utils.rand_name('project')
         project = self.client.create_project(p_name)
         self.data.projects.append(project)
 
         # Create a User
-        u_name = data_utils.rand_name('user-')
+        u_name = data_utils.rand_name('user')
         u_desc = u_name + 'description'
         u_email = u_name + '@testmail.tm'
-        u_password = data_utils.rand_name('pass-')
+        u_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             u_name, description=u_desc, password=u_password,
             email=u_email, project_id=project['id'])
diff --git a/tempest/api/identity/admin/v3/test_projects_negative.py b/tempest/api/identity/admin/v3/test_projects_negative.py
index 739bb35..2dda585 100644
--- a/tempest/api/identity/admin/v3/test_projects_negative.py
+++ b/tempest/api/identity/admin/v3/test_projects_negative.py
@@ -33,7 +33,7 @@
     @test.idempotent_id('874c3e84-d174-4348-a16b-8c01f599561b')
     def test_project_create_duplicate(self):
         # Project names should be unique
-        project_name = data_utils.rand_name('project-dup-')
+        project_name = data_utils.rand_name('project-dup')
         project = self.client.create_project(project_name)
         self.data.projects.append(project)
 
@@ -44,7 +44,7 @@
     @test.idempotent_id('8fba9de2-3e1f-4e77-812a-60cb68f8df13')
     def test_create_project_by_unauthorized_user(self):
         # Non-admin user should not be authorized to create a project
-        project_name = data_utils.rand_name('project-')
+        project_name = data_utils.rand_name('project')
         self.assertRaises(
             lib_exc.Forbidden, self.non_admin_client.create_project,
             project_name)
@@ -68,7 +68,7 @@
     @test.idempotent_id('8d68c012-89e0-4394-8d6b-ccd7196def97')
     def test_project_delete_by_unauthorized_user(self):
         # Non-admin user should not be able to delete a project
-        project_name = data_utils.rand_name('project-')
+        project_name = data_utils.rand_name('project')
         project = self.client.create_project(project_name)
         self.data.projects.append(project)
         self.assertRaises(
diff --git a/tempest/api/identity/admin/v3/test_regions.py b/tempest/api/identity/admin/v3/test_regions.py
index b5c337d..146cc65 100644
--- a/tempest/api/identity/admin/v3/test_regions.py
+++ b/tempest/api/identity/admin/v3/test_regions.py
@@ -32,7 +32,7 @@
         super(RegionsTestJSON, cls).resource_setup()
         cls.setup_regions = list()
         for i in range(2):
-            r_description = data_utils.rand_name('description-')
+            r_description = data_utils.rand_name('description')
             region = cls.client.create_region(r_description)
             cls.setup_regions.append(region)
 
@@ -50,7 +50,7 @@
     @test.attr(type='gate')
     @test.idempotent_id('56186092-82e4-43f2-b954-91013218ba42')
     def test_create_update_get_delete_region(self):
-        r_description = data_utils.rand_name('description-')
+        r_description = data_utils.rand_name('description')
         region = self.client.create_region(
             r_description, parent_region_id=self.setup_regions[0]['id'])
         self.addCleanup(self._delete_region, region['id'])
@@ -58,7 +58,7 @@
         self.assertEqual(self.setup_regions[0]['id'],
                          region['parent_region_id'])
         # Update region with new description and parent ID
-        r_alt_description = data_utils.rand_name('description-')
+        r_alt_description = data_utils.rand_name('description')
         region = self.client.update_region(
             region['id'],
             description=r_alt_description,
@@ -77,7 +77,7 @@
     def test_create_region_with_specific_id(self):
         # Create a region with a specific id
         r_region_id = data_utils.rand_uuid()
-        r_description = data_utils.rand_name('description-')
+        r_description = data_utils.rand_name('description')
         region = self.client.create_region(
             r_description, unique_region_id=r_region_id)
         self.addCleanup(self._delete_region, region['id'])
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index 0611393..8334c18 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -25,30 +25,30 @@
     def resource_setup(cls):
         super(RolesV3TestJSON, cls).resource_setup()
         for _ in range(3):
-            role_name = data_utils.rand_name(name='role-')
+            role_name = data_utils.rand_name(name='role')
             role = cls.client.create_role(role_name)
             cls.data.v3_roles.append(role)
         cls.fetched_role_ids = list()
-        u_name = data_utils.rand_name('user-')
+        u_name = data_utils.rand_name('user')
         u_desc = '%s description' % u_name
         u_email = '%s@testmail.tm' % u_name
-        cls.u_password = data_utils.rand_name('pass-')
+        cls.u_password = data_utils.rand_name('pass')
         cls.domain = cls.client.create_domain(
-            data_utils.rand_name('domain-'),
-            description=data_utils.rand_name('domain-desc-'))
+            data_utils.rand_name('domain'),
+            description=data_utils.rand_name('domain-desc'))
         cls.project = cls.client.create_project(
-            data_utils.rand_name('project-'),
-            description=data_utils.rand_name('project-desc-'),
+            data_utils.rand_name('project'),
+            description=data_utils.rand_name('project-desc'),
             domain_id=cls.domain['id'])
         cls.group_body = cls.client.create_group(
-            data_utils.rand_name('Group-'), project_id=cls.project['id'],
+            data_utils.rand_name('Group'), project_id=cls.project['id'],
             domain_id=cls.domain['id'])
         cls.user_body = cls.client.create_user(
             u_name, description=u_desc, password=cls.u_password,
             email=u_email, project_id=cls.project['id'],
             domain_id=cls.domain['id'])
         cls.role = cls.client.create_role(
-            data_utils.rand_name('Role-'))
+            data_utils.rand_name('Role'))
 
     @classmethod
     def resource_cleanup(cls):
@@ -69,13 +69,13 @@
     @test.attr(type='smoke')
     @test.idempotent_id('18afc6c0-46cf-4911-824e-9989cc056c3a')
     def test_role_create_update_get_list(self):
-        r_name = data_utils.rand_name('Role-')
+        r_name = data_utils.rand_name('Role')
         role = self.client.create_role(r_name)
         self.addCleanup(self.client.delete_role, role['id'])
         self.assertIn('name', role)
         self.assertEqual(role['name'], r_name)
 
-        new_name = data_utils.rand_name('NewRole-')
+        new_name = data_utils.rand_name('NewRole')
         updated_role = self.client.update_role(new_name, role['id'])
         self.assertIn('name', updated_role)
         self.assertIn('id', updated_role)
@@ -144,11 +144,11 @@
         self.client.add_group_user(self.group_body['id'], self.user_body['id'])
         self.addCleanup(self.client.delete_group_user,
                         self.group_body['id'], self.user_body['id'])
-        body = self.token.auth(user=self.user_body['id'],
+        body = self.token.auth(user_id=self.user_body['id'],
                                password=self.u_password,
-                               user_domain=self.domain['name'],
-                               project=self.project['name'],
-                               project_domain=self.domain['name'])
+                               user_domain_name=self.domain['name'],
+                               project_name=self.project['name'],
+                               project_domain_name=self.domain['name'])
         roles = body['token']['roles']
         self.assertEqual(len(roles), 1)
         self.assertEqual(roles[0]['id'], self.role['id'])
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 5cc498f..702e936 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -27,16 +27,17 @@
     def test_tokens(self):
         # Valid user's token is authenticated
         # Create a User
-        u_name = data_utils.rand_name('user-')
+        u_name = data_utils.rand_name('user')
         u_desc = '%s-description' % u_name
         u_email = '%s@testmail.tm' % u_name
-        u_password = data_utils.rand_name('pass-')
+        u_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             u_name, description=u_desc, password=u_password,
             email=u_email)
         self.addCleanup(self.client.delete_user, user['id'])
         # Perform Authentication
-        resp = self.token.auth(user['id'], u_password).response
+        resp = self.token.auth(user_id=user['id'],
+                               password=u_password).response
         subject_token = resp['x-subject-token']
         # Perform GET Token
         token_details = self.client.get_token(subject_token)
@@ -60,22 +61,22 @@
         """
 
         # Create a user.
-        user_name = data_utils.rand_name(name='user-')
-        user_password = data_utils.rand_name(name='pass-')
+        user_name = data_utils.rand_name(name='user')
+        user_password = data_utils.rand_name(name='pass')
         user = self.client.create_user(user_name, password=user_password)
         self.addCleanup(self.client.delete_user, user['id'])
 
         # Create a couple projects
-        project1_name = data_utils.rand_name(name='project-')
+        project1_name = data_utils.rand_name(name='project')
         project1 = self.client.create_project(project1_name)
         self.addCleanup(self.client.delete_project, project1['id'])
 
-        project2_name = data_utils.rand_name(name='project-')
+        project2_name = data_utils.rand_name(name='project')
         project2 = self.client.create_project(project2_name)
         self.addCleanup(self.client.delete_project, project2['id'])
 
         # Create a role
-        role_name = data_utils.rand_name(name='role-')
+        role_name = data_utils.rand_name(name='role')
         role = self.client.create_role(role_name)
         self.addCleanup(self.client.delete_role, role['id'])
 
@@ -87,7 +88,7 @@
                                      role['id'])
 
         # Get an unscoped token.
-        token_auth = self.token.auth(user=user['id'],
+        token_auth = self.token.auth(user_id=user['id'],
                                      password=user_password)
 
         token_id = token_auth.response['x-subject-token']
@@ -110,8 +111,8 @@
 
         # Use the unscoped token to get a scoped token.
         token_auth = self.token.auth(token=token_id,
-                                     project=project1_name,
-                                     project_domain='Default')
+                                     project_name=project1_name,
+                                     project_domain_name='Default')
         token1_id = token_auth.response['x-subject-token']
 
         self.assertEqual(orig_expires_at, token_auth['token']['expires_at'],
@@ -140,8 +141,8 @@
 
         # Now get another scoped token using the unscoped token.
         token_auth = self.token.auth(token=token_id,
-                                     project=project2_name,
-                                     project_domain='Default')
+                                     project_name=project2_name,
+                                     project_domain_name='Default')
 
         self.assertEqual(project2['id'],
                          token_auth['token']['project']['id'])
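
The identity v3 hunks above replace the old ambiguous ``auth()`` keywords
(``user``, ``project``, ``user_domain``, ``project_domain``) with explicit ones
(``user_id``, ``project_name``, ``user_domain_name``, ``project_domain_name``).
A minimal sketch of the resulting call pattern; ``get_scoped_token`` and its
arguments are illustrative only, and ``token_client`` is assumed to be a v3
token client (``token_v3_client``) taken from a tempest client manager::

    def get_scoped_token(token_client, user_id, password,
                         project_name, project_domain_name='Default'):
        # Unscoped authentication: keyword arguments only.
        unscoped = token_client.auth(user_id=user_id, password=password)
        token_id = unscoped.response['x-subject-token']
        # Re-scope the unscoped token to a project.
        scoped = token_client.auth(token=token_id,
                                   project_name=project_name,
                                   project_domain_name=project_domain_name)
        return scoped.response['x-subject-token']
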
diff --git a/tempest/api/identity/admin/v3/test_trusts.py b/tempest/api/identity/admin/v3/test_trusts.py
index d9346e9..cafd615 100644
--- a/tempest/api/identity/admin/v3/test_trusts.py
+++ b/tempest/api/identity/admin/v3/test_trusts.py
@@ -13,6 +13,7 @@
 import datetime
 import re
 
+from oslo_utils import timeutils
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
@@ -20,7 +21,6 @@
 from tempest import clients
 from tempest.common import cred_provider
 from tempest import config
-from tempest.openstack.common import timeutils
 from tempest import test
 
 CONF = config.CONF
@@ -52,10 +52,10 @@
         self.assertIsNotNone(self.trustor_project_id)
 
         # Create a trustor User
-        self.trustor_username = data_utils.rand_name('user-')
+        self.trustor_username = data_utils.rand_name('user')
         u_desc = self.trustor_username + 'description'
         u_email = self.trustor_username + '@testmail.xx'
-        self.trustor_password = data_utils.rand_name('pass-')
+        self.trustor_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             self.trustor_username,
             description=u_desc,
@@ -65,8 +65,8 @@
         self.trustor_user_id = user['id']
 
         # And two roles, one we'll delegate and one we won't
-        self.delegated_role = data_utils.rand_name('DelegatedRole-')
-        self.not_delegated_role = data_utils.rand_name('NotDelegatedRole-')
+        self.delegated_role = data_utils.rand_name('DelegatedRole')
+        self.not_delegated_role = data_utils.rand_name('NotDelegatedRole')
 
         role = self.client.create_role(self.delegated_role)
         self.delegated_role_id = role['id']
diff --git a/tempest/api/identity/admin/v3/test_users.py b/tempest/api/identity/admin/v3/test_users.py
index f29e72a..e4bfd6a 100644
--- a/tempest/api/identity/admin/v3/test_users.py
+++ b/tempest/api/identity/admin/v3/test_users.py
@@ -26,10 +26,10 @@
     def test_user_update(self):
         # Test case to check if updating of user attributes is successful.
         # Creating first user
-        u_name = data_utils.rand_name('user-')
+        u_name = data_utils.rand_name('user')
         u_desc = u_name + 'description'
         u_email = u_name + '@testmail.tm'
-        u_password = data_utils.rand_name('pass-')
+        u_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             u_name, description=u_desc, password=u_password,
             email=u_email, enabled=False)
@@ -37,12 +37,12 @@
         self.addCleanup(self.client.delete_user, user['id'])
         # Creating a second project for the update
         project = self.client.create_project(
-            data_utils.rand_name('project-'),
-            description=data_utils.rand_name('project-desc-'))
+            data_utils.rand_name('project'),
+            description=data_utils.rand_name('project-desc'))
         # Delete the Project at the end of this method
         self.addCleanup(self.client.delete_project, project['id'])
         # Updating user details with new values
-        u_name2 = data_utils.rand_name('user2-')
+        u_name2 = data_utils.rand_name('user2')
         u_email2 = u_name2 + '@testmail.tm'
         u_description2 = u_name2 + ' description'
         update_user = self.client.update_user(
@@ -79,7 +79,8 @@
         new_password = data_utils.rand_name('pass1')
         self.client.update_user_password(user['id'], new_password,
                                          original_password)
-        resp = self.token.auth(user['id'], new_password).response
+        resp = self.token.auth(user_id=user['id'],
+                               password=new_password).response
         subject_token = resp['x-subject-token']
         # Perform GET Token to verify and confirm password is updated
         token_details = self.client.get_token(subject_token)
@@ -94,15 +95,15 @@
         assigned_project_ids = list()
         fetched_project_ids = list()
         u_project = self.client.create_project(
-            data_utils.rand_name('project-'),
-            description=data_utils.rand_name('project-desc-'))
+            data_utils.rand_name('project'),
+            description=data_utils.rand_name('project-desc'))
         # Delete the Project at the end of this method
         self.addCleanup(self.client.delete_project, u_project['id'])
         # Create a user.
-        u_name = data_utils.rand_name('user-')
+        u_name = data_utils.rand_name('user')
         u_desc = u_name + 'description'
         u_email = u_name + '@testmail.tm'
-        u_password = data_utils.rand_name('pass-')
+        u_password = data_utils.rand_name('pass')
         user_body = self.client.create_user(
             u_name, description=u_desc, password=u_password,
             email=u_email, enabled=False, project_id=u_project['id'])
@@ -110,7 +111,7 @@
         self.addCleanup(self.client.delete_user, user_body['id'])
         # Creating Role
         role_body = self.client.create_role(
-            data_utils.rand_name('role-'))
+            data_utils.rand_name('role'))
         # Delete the Role at the end of this method
         self.addCleanup(self.client.delete_role, role_body['id'])
 
@@ -119,8 +120,8 @@
         for i in range(2):
             # Creating project so as to assign role
             project_body = self.client.create_project(
-                data_utils.rand_name('project-'),
-                description=data_utils.rand_name('project-desc-'))
+                data_utils.rand_name('project'),
+                description=data_utils.rand_name('project-desc'))
             project = self.client.get_project(project_body['id'])
             # Delete the Project at the end of this method
             self.addCleanup(self.client.delete_project, project_body['id'])
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 72a4cbd..b83da3e 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -13,26 +13,21 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
 from tempest.common import cred_provider
+from tempest.common import credentials
 from tempest import config
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
 
 
-class BaseIdentityAdminTest(tempest.test.BaseTestCase):
-
-    @classmethod
-    def setup_credentials(cls):
-        super(BaseIdentityAdminTest, cls).setup_credentials()
-        cls.os_adm = clients.AdminManager()
-        cls.os = clients.Manager()
+class BaseIdentityTest(tempest.test.BaseTestCase):
 
     @classmethod
     def disable_user(cls, user_name):
@@ -69,24 +64,55 @@
             return role[0]
 
 
-class BaseIdentityV2AdminTest(BaseIdentityAdminTest):
+class BaseIdentityV2Test(BaseIdentityTest):
+
+    @classmethod
+    def setup_credentials(cls):
+        super(BaseIdentityV2Test, cls).setup_credentials()
+        cls.os = cls.get_client_manager(identity_version='v2')
 
     @classmethod
     def skip_checks(cls):
-        super(BaseIdentityV2AdminTest, cls).skip_checks()
+        super(BaseIdentityV2Test, cls).skip_checks()
         if not CONF.identity_feature_enabled.api_v2:
             raise cls.skipException("Identity api v2 is not enabled")
 
     @classmethod
     def setup_clients(cls):
+        super(BaseIdentityV2Test, cls).setup_clients()
+        cls.non_admin_client = cls.os.identity_client
+        cls.non_admin_token_client = cls.os.token_client
+
+    @classmethod
+    def resource_setup(cls):
+        super(BaseIdentityV2Test, cls).resource_setup()
+
+    @classmethod
+    def resource_cleanup(cls):
+        super(BaseIdentityV2Test, cls).resource_cleanup()
+
+
+class BaseIdentityV2AdminTest(BaseIdentityV2Test):
+
+    @classmethod
+    def setup_credentials(cls):
+        super(BaseIdentityV2AdminTest, cls).setup_credentials()
+        cls.os_adm = clients.Manager(cls.isolated_creds.get_admin_creds())
+
+    @classmethod
+    def skip_checks(cls):
+        if not credentials.is_admin_available():
+            raise cls.skipException('v2 Admin auth disabled')
+        super(BaseIdentityV2AdminTest, cls).skip_checks()
+
+    @classmethod
+    def setup_clients(cls):
         super(BaseIdentityV2AdminTest, cls).setup_clients()
         cls.client = cls.os_adm.identity_client
         cls.token_client = cls.os_adm.token_client
         if not cls.client.has_admin_extensions():
             raise cls.skipException("Admin extensions disabled")
 
-        cls.non_admin_client = cls.os.identity_client
-
     @classmethod
     def resource_setup(cls):
         super(BaseIdentityV2AdminTest, cls).resource_setup()
@@ -98,27 +124,59 @@
         super(BaseIdentityV2AdminTest, cls).resource_cleanup()
 
 
-class BaseIdentityV3AdminTest(BaseIdentityAdminTest):
+class BaseIdentityV3Test(BaseIdentityTest):
+
+    @classmethod
+    def setup_credentials(cls):
+        super(BaseIdentityV3Test, cls).setup_credentials()
+        cls.os = cls.get_client_manager(identity_version='v3')
 
     @classmethod
     def skip_checks(cls):
-        super(BaseIdentityV3AdminTest, cls).skip_checks()
+        super(BaseIdentityV3Test, cls).skip_checks()
         if not CONF.identity_feature_enabled.api_v3:
             raise cls.skipException("Identity api v3 is not enabled")
 
     @classmethod
     def setup_clients(cls):
+        super(BaseIdentityV3Test, cls).setup_clients()
+        cls.non_admin_client = cls.os.identity_v3_client
+        cls.non_admin_token = cls.os.token_v3_client
+        cls.non_admin_endpoints_client = cls.os.endpoints_client
+        cls.non_admin_region_client = cls.os.region_client
+        cls.non_admin_service_client = cls.os.service_client
+        cls.non_admin_policy_client = cls.os.policy_client
+        cls.non_admin_creds_client = cls.os.credentials_client
+
+    @classmethod
+    def resource_cleanup(cls):
+        super(BaseIdentityV3Test, cls).resource_cleanup()
+
+
+class BaseIdentityV3AdminTest(BaseIdentityV3Test):
+
+    @classmethod
+    def setup_credentials(cls):
+        super(BaseIdentityV3AdminTest, cls).setup_credentials()
+        cls.os_adm = clients.Manager(cls.isolated_creds.get_admin_creds())
+
+    @classmethod
+    def skip_checks(cls):
+        if not credentials.is_admin_available():
+            raise cls.skipException('v3 Admin auth disabled')
+        super(BaseIdentityV3AdminTest, cls).skip_checks()
+
+    @classmethod
+    def setup_clients(cls):
         super(BaseIdentityV3AdminTest, cls).setup_clients()
         cls.client = cls.os_adm.identity_v3_client
         cls.token = cls.os_adm.token_v3_client
         cls.endpoints_client = cls.os_adm.endpoints_client
         cls.region_client = cls.os_adm.region_client
         cls.data = DataGenerator(cls.client)
-        cls.non_admin_client = cls.os.identity_v3_client
         cls.service_client = cls.os_adm.service_client
         cls.policy_client = cls.os_adm.policy_client
         cls.creds_client = cls.os_adm.credentials_client
-        cls.non_admin_client = cls.os.identity_v3_client
 
     @classmethod
     def resource_cleanup(cls):
@@ -171,8 +229,8 @@
         def setup_test_user(self):
             """Set up a test user."""
             self.setup_test_tenant()
-            self.test_user = data_utils.rand_name('test_user_')
-            self.test_password = data_utils.rand_name('pass_')
+            self.test_user = data_utils.rand_name('test_user')
+            self.test_password = data_utils.rand_name('pass')
             self.test_email = self.test_user + '@testmail.tm'
             self.user = self.client.create_user(self.test_user,
                                                 self.test_password,
@@ -182,8 +240,8 @@
 
         def setup_test_tenant(self):
             """Set up a test tenant."""
-            self.test_tenant = data_utils.rand_name('test_tenant_')
-            self.test_description = data_utils.rand_name('desc_')
+            self.test_tenant = data_utils.rand_name('test_tenant')
+            self.test_description = data_utils.rand_name('desc')
             self.tenant = self.client.create_tenant(
                 name=self.test_tenant,
                 description=self.test_description)
@@ -198,8 +256,8 @@
         def setup_test_v3_user(self):
             """Set up a test v3 user."""
             self.setup_test_project()
-            self.test_user = data_utils.rand_name('test_user_')
-            self.test_password = data_utils.rand_name('pass_')
+            self.test_user = data_utils.rand_name('test_user')
+            self.test_password = data_utils.rand_name('pass')
             self.test_email = self.test_user + '@testmail.tm'
             self.v3_user = self.client.create_user(
                 self.test_user,
@@ -210,8 +268,8 @@
 
         def setup_test_project(self):
             """Set up a test project."""
-            self.test_project = data_utils.rand_name('test_project_')
-            self.test_description = data_utils.rand_name('desc_')
+            self.test_project = data_utils.rand_name('test_project')
+            self.test_description = data_utils.rand_name('desc')
             self.project = self.client.create_project(
                 name=self.test_project,
                 description=self.test_description)
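
The base.py refactor above splits the old ``BaseIdentityAdminTest`` into
non-admin bases (``BaseIdentityV2Test``, ``BaseIdentityV3Test``) and admin
subclasses that gate on ``credentials.is_admin_available()``. A minimal
sketch, assuming a tempest checkout, of a non-admin v3 test written against
the new hierarchy; the class name and idempotent id are placeholders::

    from tempest.api.identity import base
    from tempest import test


    class NonAdminIdentityV3Sample(base.BaseIdentityV3Test):
        """Hypothetical test class; only the base-class wiring is real."""

        @test.idempotent_id('11111111-2222-3333-4444-555555555555')  # placeholder
        def test_non_admin_clients_are_available(self):
            # BaseIdentityV3Test.setup_clients() exposes non-admin clients,
            # so no admin credentials (and no is_admin_available() gate)
            # are needed here.
            self.assertIsNotNone(self.non_admin_client)
            self.assertIsNotNone(self.non_admin_token)
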
diff --git a/tempest/api_schema/response/compute/v2/__init__.py b/tempest/api/identity/v2/__init__.py
similarity index 100%
copy from tempest/api_schema/response/compute/v2/__init__.py
copy to tempest/api/identity/v2/__init__.py
diff --git a/tempest/api/identity/v2/test_tokens.py b/tempest/api/identity/v2/test_tokens.py
new file mode 100644
index 0000000..5a8afa0
--- /dev/null
+++ b/tempest/api/identity/v2/test_tokens.py
@@ -0,0 +1,38 @@
+# Copyright 2015 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.identity import base
+from tempest import test
+
+
+class TokensTest(base.BaseIdentityV2Test):
+
+    @test.idempotent_id('65ae3b78-91ff-467b-a705-f6678863b8ec')
+    def test_create_token(self):
+
+        token_client = self.non_admin_token_client
+
+        # get a token for the user
+        creds = self.os.credentials
+        username = creds.username
+        password = creds.password
+        tenant_name = creds.tenant_name
+
+        body = token_client.auth(username,
+                                 password,
+                                 tenant_name)
+
+        self.assertEqual(body['token']['tenant']['name'],
+                         tenant_name)
diff --git a/tempest/api_schema/response/compute/v2/__init__.py b/tempest/api/identity/v3/__init__.py
similarity index 100%
copy from tempest/api_schema/response/compute/v2/__init__.py
copy to tempest/api/identity/v3/__init__.py
diff --git a/tempest/api/identity/v3/test_tokens.py b/tempest/api/identity/v3/test_tokens.py
new file mode 100644
index 0000000..ab4a09f
--- /dev/null
+++ b/tempest/api/identity/v3/test_tokens.py
@@ -0,0 +1,33 @@
+# Copyright 2015 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.identity import base
+from tempest import test
+
+
+class TokensV3Test(base.BaseIdentityV3Test):
+
+    @test.idempotent_id('6f8e4436-fc96-4282-8122-e41df57197a9')
+    def test_create_token(self):
+
+        creds = self.os.credentials
+        user_id = creds.user_id
+        username = creds.username
+        password = creds.password
+        resp = self.non_admin_token.auth(user_id=user_id,
+                                         password=password)
+
+        subject_name = resp['token']['user']['name']
+        self.assertEqual(subject_name, username)
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index 58d0003..d513b0c 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -14,13 +14,13 @@
 
 import cStringIO as StringIO
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
 from tempest.common import credentials
 from tempest import config
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
@@ -61,7 +61,6 @@
 
         for image_id in cls.created_images:
                 cls.client.wait_for_resource_deletion(image_id)
-        cls.isolated_creds.clear_isolated_creds()
         super(BaseImageTest, cls).resource_cleanup()
 
     @classmethod
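
Several files in this change swap the incubated ``tempest.openstack.common.log``
module for the released ``oslo_log`` library; the logger interface itself is
unchanged. A minimal sketch of the new import, assuming ``oslo_log`` is
installed::

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)
    LOG.debug("logging now comes from the oslo_log library")
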
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 0997c9f..a00296c 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -150,7 +150,7 @@
         """
         size = random.randint(1024, 4096)
         image_file = StringIO.StringIO(data_utils.random_bytes(size))
-        name = data_utils.rand_name('image-')
+        name = data_utils.rand_name('image')
         body = cls.create_image(name=name,
                                 container_format=container_format,
                                 disk_format=disk_format,
diff --git a/tempest/api/image/v2/test_images_tags.py b/tempest/api/image/v2/test_images_tags.py
index bdb1679..8c71db7 100644
--- a/tempest/api/image/v2/test_images_tags.py
+++ b/tempest/api/image/v2/test_images_tags.py
@@ -27,7 +27,7 @@
                                  disk_format='raw',
                                  visibility='private')
         image_id = body['id']
-        tag = data_utils.rand_name('tag-')
+        tag = data_utils.rand_name('tag')
         self.addCleanup(self.client.delete_image, image_id)
 
         # Creating image tag and verify it.
diff --git a/tempest/api/image/v2/test_images_tags_negative.py b/tempest/api/image/v2/test_images_tags_negative.py
index 13ef27d..227d35f 100644
--- a/tempest/api/image/v2/test_images_tags_negative.py
+++ b/tempest/api/image/v2/test_images_tags_negative.py
@@ -27,7 +27,7 @@
     @test.idempotent_id('8cd30f82-6f9a-4c6e-8034-c1b51fba43d9')
     def test_update_tags_for_non_existing_image(self):
         # Update tag with non-existent image.
-        tag = data_utils.rand_name('tag-')
+        tag = data_utils.rand_name('tag')
         non_exist_image = str(uuid.uuid4())
         self.assertRaises(lib_exc.NotFound, self.client.add_image_tag,
                           non_exist_image, tag)
@@ -41,7 +41,7 @@
                                  visibility='private'
                                  )
         image_id = body['id']
-        tag = data_utils.rand_name('non-exist-tag-')
+        tag = data_utils.rand_name('non-exist-tag')
         self.addCleanup(self.client.delete_image, image_id)
         self.assertRaises(lib_exc.NotFound, self.client.delete_image_tag,
                           image_id, tag)
diff --git a/tempest/api/messaging/base.py b/tempest/api/messaging/base.py
index f193e32..b3ed941 100644
--- a/tempest/api/messaging/base.py
+++ b/tempest/api/messaging/base.py
@@ -13,10 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/api/network/admin/test_agent_management.py b/tempest/api/network/admin/test_agent_management.py
index d5454ec..be5bb1f 100644
--- a/tempest/api/network/admin/test_agent_management.py
+++ b/tempest/api/network/admin/test_agent_management.py
@@ -20,11 +20,15 @@
 class AgentManagementTestJSON(base.BaseAdminNetworkTest):
 
     @classmethod
-    def resource_setup(cls):
-        super(AgentManagementTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(AgentManagementTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('agent', 'network'):
             msg = "agent extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(AgentManagementTestJSON, cls).resource_setup()
         body = cls.admin_client.list_agents()
         agents = body['agents']
         cls.agent = agents[0]
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index 26f5e7a..3b94b82 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -19,11 +19,15 @@
 class DHCPAgentSchedulersTestJSON(base.BaseAdminNetworkTest):
 
     @classmethod
-    def resource_setup(cls):
-        super(DHCPAgentSchedulersTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(DHCPAgentSchedulersTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('dhcp_agent_scheduler', 'network'):
             msg = "dhcp_agent_scheduler extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(DHCPAgentSchedulersTestJSON, cls).resource_setup()
         # Create a network and make sure it will be hosted by a
         # dhcp agent: this is done by creating a regular port
         cls.network = cls.create_network()
diff --git a/tempest/api/network/admin/test_floating_ips_admin_actions.py b/tempest/api/network/admin/test_floating_ips_admin_actions.py
index ccf3980..ce3e319 100644
--- a/tempest/api/network/admin/test_floating_ips_admin_actions.py
+++ b/tempest/api/network/admin/test_floating_ips_admin_actions.py
@@ -24,16 +24,23 @@
 
 
 class FloatingIPAdminTestJSON(base.BaseAdminNetworkTest):
-
     force_tenant_isolation = True
 
     @classmethod
+    def setup_credentials(cls):
+        super(FloatingIPAdminTestJSON, cls).setup_credentials()
+        cls.alt_manager = clients.Manager(cls.isolated_creds.get_alt_creds())
+
+    @classmethod
+    def setup_clients(cls):
+        super(FloatingIPAdminTestJSON, cls).setup_clients()
+        cls.alt_client = cls.alt_manager.network_client
+
+    @classmethod
     def resource_setup(cls):
         super(FloatingIPAdminTestJSON, cls).resource_setup()
         cls.ext_net_id = CONF.network.public_network_id
         cls.floating_ip = cls.create_floatingip(cls.ext_net_id)
-        cls.alt_manager = clients.Manager(cls.isolated_creds.get_alt_creds())
-        cls.alt_client = cls.alt_manager.network_client
         cls.network = cls.create_network()
         cls.subnet = cls.create_subnet(cls.network)
         cls.router = cls.create_router(data_utils.rand_name('router-'),
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index 257289f..ad121b0 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -15,11 +15,11 @@
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.network import base
+from tempest import exceptions
 from tempest import test
 
 
 class L3AgentSchedulerTestJSON(base.BaseAdminNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -34,12 +34,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(L3AgentSchedulerTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(L3AgentSchedulerTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('l3_agent_scheduler', 'network'):
             msg = "L3 Agent Scheduler Extension not enabled."
             raise cls.skipException(msg)
-        # Trying to get agent details for L3 Agent
+
+    @classmethod
+    def resource_setup(cls):
+        super(L3AgentSchedulerTestJSON, cls).resource_setup()
         body = cls.admin_client.list_agents()
         agents = body['agents']
         for agent in agents:
@@ -47,8 +50,8 @@
                 cls.agent = agent
                 break
         else:
-            msg = "L3 Agent not found"
-            raise cls.skipException(msg)
+            msg = "L3 Agent Scheduler enabled in conf, but L3 Agent not found"
+            raise exceptions.InvalidConfiguration(msg)
 
     @test.attr(type='smoke')
     @test.idempotent_id('b7ce6e89-e837-4ded-9b78-9ed3c9c6a45a')
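
The scheduler hunks above also tighten what counts as a skip: a disabled
extension is still a skip in ``skip_checks()``, but an enabled extension with
no L3 agent registered is now raised as ``InvalidConfiguration`` from
``resource_setup()``. A minimal sketch of that split, assuming a tempest
checkout; the class name is a placeholder and the agent filtering is
simplified::

    from tempest.api.network import base
    from tempest import exceptions
    from tempest import test


    class L3SchedulerPatternSample(base.BaseAdminNetworkTest):
        """Hypothetical class; mirrors the skip-vs-fail split above."""

        @classmethod
        def skip_checks(cls):
            super(L3SchedulerPatternSample, cls).skip_checks()
            if not test.is_extension_enabled('l3_agent_scheduler', 'network'):
                # Only a disabled extension justifies a skip.
                raise cls.skipException("L3 Agent Scheduler Extension not "
                                        "enabled.")

        @classmethod
        def resource_setup(cls):
            super(L3SchedulerPatternSample, cls).resource_setup()
            agents = cls.admin_client.list_agents()['agents']
            if not agents:
                # The config claims the feature but the cloud cannot provide
                # it: fail as a broken configuration instead of skipping.
                raise exceptions.InvalidConfiguration(
                    "L3 Agent Scheduler enabled in conf, but L3 Agent not "
                    "found")
            cls.agent = agents[0]
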
diff --git a/tempest/api/network/admin/test_lbaas_agent_scheduler.py b/tempest/api/network/admin/test_lbaas_agent_scheduler.py
index 29b69c3..c4f117b 100644
--- a/tempest/api/network/admin/test_lbaas_agent_scheduler.py
+++ b/tempest/api/network/admin/test_lbaas_agent_scheduler.py
@@ -19,7 +19,6 @@
 
 
 class LBaaSAgentSchedulerTestJSON(base.BaseAdminNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -35,11 +34,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(LBaaSAgentSchedulerTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(LBaaSAgentSchedulerTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('lbaas_agent_scheduler', 'network'):
             msg = "LBaaS Agent Scheduler Extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(LBaaSAgentSchedulerTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.subnet = cls.create_subnet(cls.network)
         pool_name = data_utils.rand_name('pool-')
diff --git a/tempest/api/network/admin/test_load_balancer_admin_actions.py b/tempest/api/network/admin/test_load_balancer_admin_actions.py
index b49b57c..5a32119 100644
--- a/tempest/api/network/admin/test_load_balancer_admin_actions.py
+++ b/tempest/api/network/admin/test_load_balancer_admin_actions.py
@@ -20,7 +20,6 @@
 
 
 class LoadBalancerAdminTestJSON(base.BaseAdminNetworkTest):
-
     """
     Test admin actions for load balancer.
 
@@ -29,15 +28,28 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(LoadBalancerAdminTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(LoadBalancerAdminTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('lbaas', 'network'):
             msg = "lbaas extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def setup_credentials(cls):
+        super(LoadBalancerAdminTestJSON, cls).setup_credentials()
+        cls.manager = cls.get_client_manager()
+        cls.primary_creds = cls.isolated_creds.get_primary_creds()
+
+    @classmethod
+    def setup_clients(cls):
+        super(LoadBalancerAdminTestJSON, cls).setup_clients()
+        cls.client = cls.manager.network_client
+
+    @classmethod
+    def resource_setup(cls):
+        super(LoadBalancerAdminTestJSON, cls).resource_setup()
         cls.force_tenant_isolation = True
-        manager = cls.get_client_manager()
-        cls.client = manager.network_client
-        cls.tenant_id = cls.isolated_creds.get_primary_creds().tenant_id
+        cls.tenant_id = cls.primary_creds.tenant_id
         cls.network = cls.create_network()
         cls.subnet = cls.create_subnet(cls.network)
         cls.pool = cls.create_pool(data_utils.rand_name('pool-'),
diff --git a/tempest/api/network/admin/test_quotas.py b/tempest/api/network/admin/test_quotas.py
index 60552b9..275c0d1 100644
--- a/tempest/api/network/admin/test_quotas.py
+++ b/tempest/api/network/admin/test_quotas.py
@@ -20,7 +20,6 @@
 
 
 class QuotasTest(base.BaseAdminNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -38,11 +37,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(QuotasTest, cls).resource_setup()
+    def skip_checks(cls):
+        super(QuotasTest, cls).skip_checks()
         if not test.is_extension_enabled('quotas', 'network'):
             msg = "quotas extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def setup_clients(cls):
+        super(QuotasTest, cls).setup_clients()
         cls.identity_admin_client = cls.os_adm.identity_client
 
     def _check_quotas(self, new_quotas):
diff --git a/tempest/api/network/base.py b/tempest/api/network/base.py
index 270f5dd..09a5555 100644
--- a/tempest/api/network/base.py
+++ b/tempest/api/network/base.py
@@ -14,13 +14,14 @@
 #    under the License.
 
 import netaddr
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
+from tempest.common import credentials
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
@@ -56,19 +57,30 @@
     _ip_version = 4
 
     @classmethod
-    def resource_setup(cls):
-        # Create no network resources for these test.
-        cls.set_network_resources()
-        super(BaseNetworkTest, cls).resource_setup()
+    def skip_checks(cls):
+        super(BaseNetworkTest, cls).skip_checks()
         if not CONF.service_available.neutron:
             raise cls.skipException("Neutron support is required")
         if cls._ip_version == 6 and not CONF.network_feature_enabled.ipv6:
             raise cls.skipException("IPv6 Tests are disabled.")
 
-        os = cls.get_client_manager()
+    @classmethod
+    def setup_credentials(cls):
+        # Create no network resources for these tests.
+        cls.set_network_resources()
+        super(BaseNetworkTest, cls).setup_credentials()
+        cls.os = cls.get_client_manager()
+
+    @classmethod
+    def setup_clients(cls):
+        super(BaseNetworkTest, cls).setup_clients()
+        cls.client = cls.os.network_client
+
+    @classmethod
+    def resource_setup(cls):
+        super(BaseNetworkTest, cls).resource_setup()
 
         cls.network_cfg = CONF.network
-        cls.client = os.network_client
         cls.networks = []
         cls.subnets = []
         cls.ports = []
@@ -157,7 +169,6 @@
             for network in cls.networks:
                 cls._try_delete_resource(cls.client.delete_network,
                                          network['id'])
-            cls.clear_isolated_creds()
         super(BaseNetworkTest, cls).resource_cleanup()
 
     @classmethod
@@ -415,16 +426,22 @@
 class BaseAdminNetworkTest(BaseNetworkTest):
 
     @classmethod
-    def resource_setup(cls):
-        super(BaseAdminNetworkTest, cls).resource_setup()
-
-        try:
-            creds = cls.isolated_creds.get_admin_creds()
-            cls.os_adm = clients.Manager(credentials=creds)
-        except NotImplementedError:
+    def skip_checks(cls):
+        super(BaseAdminNetworkTest, cls).skip_checks()
+        if not credentials.is_admin_available():
             msg = ("Missing Administrative Network API credentials "
                    "in configuration.")
             raise cls.skipException(msg)
+
+    @classmethod
+    def setup_credentials(cls):
+        super(BaseAdminNetworkTest, cls).setup_credentials()
+        creds = cls.isolated_creds.get_admin_creds()
+        cls.os_adm = clients.Manager(credentials=creds)
+
+    @classmethod
+    def setup_clients(cls):
+        super(BaseAdminNetworkTest, cls).setup_clients()
         cls.admin_client = cls.os_adm.network_client
 
     @classmethod
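
The network base classes above split the old all-in-one ``resource_setup()``
into ``skip_checks()``, ``setup_credentials()``, ``setup_clients()`` and
``resource_setup()``, and ``BaseAdminNetworkTest`` now skips through
``credentials.is_admin_available()`` instead of catching
``NotImplementedError``. A minimal sketch of an admin network test written
against that split, assuming a tempest checkout; the class name and the quotas
check are illustrative::

    from tempest.api.network import base
    from tempest import test


    class AdminNetworkPatternSample(base.BaseAdminNetworkTest):
        """Hypothetical class; shows where each setup phase now lives."""

        @classmethod
        def skip_checks(cls):
            # BaseAdminNetworkTest.skip_checks() already bails out via
            # credentials.is_admin_available(); extension checks go here too.
            super(AdminNetworkPatternSample, cls).skip_checks()
            if not test.is_extension_enabled('quotas', 'network'):
                raise cls.skipException("quotas extension not enabled.")

        @classmethod
        def resource_setup(cls):
            # Only actual resource creation remains in resource_setup().
            super(AdminNetworkPatternSample, cls).resource_setup()
            cls.network = cls.create_network()
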
diff --git a/tempest/api/network/base_routers.py b/tempest/api/network/base_routers.py
index 1b580b0..aa4e200 100644
--- a/tempest/api/network/base_routers.py
+++ b/tempest/api/network/base_routers.py
@@ -21,10 +21,6 @@
     # as some router operations, such as enabling or disabling SNAT
     # require admin credentials by default
 
-    @classmethod
-    def resource_setup(cls):
-        super(BaseRouterTest, cls).resource_setup()
-
     def _delete_router(self, router_id):
         self.client.delete_router(router_id)
         # Asserting that the router is not found in the list
diff --git a/tempest/api/network/base_security_groups.py b/tempest/api/network/base_security_groups.py
index c704049..6699bf7 100644
--- a/tempest/api/network/base_security_groups.py
+++ b/tempest/api/network/base_security_groups.py
@@ -20,10 +20,6 @@
 
 class BaseSecGroupTest(base.BaseNetworkTest):
 
-    @classmethod
-    def resource_setup(cls):
-        super(BaseSecGroupTest, cls).resource_setup()
-
     def _create_security_group(self):
         # Create a security group
         name = data_utils.rand_name('secgroup-')
diff --git a/tempest/api/network/test_allowed_address_pair.py b/tempest/api/network/test_allowed_address_pair.py
index 99bd82c..d2db326 100644
--- a/tempest/api/network/test_allowed_address_pair.py
+++ b/tempest/api/network/test_allowed_address_pair.py
@@ -23,7 +23,6 @@
 
 
 class AllowedAddressPairTestJSON(base.BaseNetworkTest):
-
     """
     Tests the Neutron Allowed Address Pair API extension using the Tempest
     ReST client. The following API operations are tested with this extension:
@@ -41,11 +40,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(AllowedAddressPairTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(AllowedAddressPairTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('allowed-address-pairs', 'network'):
             msg = "Allowed Address Pairs extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(AllowedAddressPairTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.create_subnet(cls.network)
         port = cls.create_port(cls.network)
diff --git a/tempest/api/network/test_dhcp_ipv6.py b/tempest/api/network/test_dhcp_ipv6.py
index a10f749..253d779 100644
--- a/tempest/api/network/test_dhcp_ipv6.py
+++ b/tempest/api/network/test_dhcp_ipv6.py
@@ -41,6 +41,7 @@
 
     @classmethod
     def skip_checks(cls):
+        super(NetworksTestDHCPv6, cls).skip_checks()
         msg = None
         if not CONF.network_feature_enabled.ipv6:
             msg = "IPv6 is not enabled"
diff --git a/tempest/api/network/test_extensions.py b/tempest/api/network/test_extensions.py
index bce8efe..e9f1bf4 100644
--- a/tempest/api/network/test_extensions.py
+++ b/tempest/api/network/test_extensions.py
@@ -31,10 +31,6 @@
 
     """
 
-    @classmethod
-    def resource_setup(cls):
-        super(ExtensionsTestJSON, cls).resource_setup()
-
     @test.attr(type='smoke')
     @test.idempotent_id('ef28c7e6-e646-4979-9d67-deb207bc5564')
     def test_list_show_extensions(self):
diff --git a/tempest/api/network/test_extra_dhcp_options.py b/tempest/api/network/test_extra_dhcp_options.py
index 5060a48..1937028 100644
--- a/tempest/api/network/test_extra_dhcp_options.py
+++ b/tempest/api/network/test_extra_dhcp_options.py
@@ -20,7 +20,6 @@
 
 
 class ExtraDHCPOptionsTestJSON(base.BaseNetworkTest):
-
     """
     Tests the following operations with the Extra DHCP Options Neutron API
     extension:
@@ -36,11 +35,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(ExtraDHCPOptionsTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(ExtraDHCPOptionsTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('extra_dhcp_opt', 'network'):
             msg = "Extra DHCP Options extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(ExtraDHCPOptionsTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.subnet = cls.create_subnet(cls.network)
         cls.port = cls.create_port(cls.network)
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index 212013a..23223f6 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -24,7 +24,6 @@
 
 
 class FloatingIPTestJSON(base.BaseNetworkTest):
-
     """
     Tests the following operations in the Quantum API using the REST client for
     Neutron:
@@ -45,11 +44,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(FloatingIPTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(FloatingIPTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(FloatingIPTestJSON, cls).resource_setup()
         cls.ext_net_id = CONF.network.public_network_id
 
         # Create network, subnet, router and add interface
diff --git a/tempest/api/network/test_floating_ips_negative.py b/tempest/api/network/test_floating_ips_negative.py
index a7f806c..824034f 100644
--- a/tempest/api/network/test_floating_ips_negative.py
+++ b/tempest/api/network/test_floating_ips_negative.py
@@ -25,7 +25,6 @@
 
 
 class FloatingIPNegativeTestJSON(base.BaseNetworkTest):
-
     """
     Test the following negative operations for floating IPs:
 
@@ -35,11 +34,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(FloatingIPNegativeTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(FloatingIPNegativeTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(FloatingIPNegativeTestJSON, cls).resource_setup()
         cls.ext_net_id = CONF.network.public_network_id
         # Create a network with a subnet connected to a router.
         cls.network = cls.create_network()
diff --git a/tempest/api/network/test_fwaas_extensions.py b/tempest/api/network/test_fwaas_extensions.py
index e2b6ff1..cecf96d 100644
--- a/tempest/api/network/test_fwaas_extensions.py
+++ b/tempest/api/network/test_fwaas_extensions.py
@@ -24,7 +24,6 @@
 
 
 class FWaaSExtensionTestJSON(base.BaseNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -51,11 +50,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(FWaaSExtensionTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(FWaaSExtensionTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('fwaas', 'network'):
             msg = "FWaaS Extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(FWaaSExtensionTestJSON, cls).resource_setup()
         cls.fw_rule = cls.create_firewall_rule("allow", "tcp")
         cls.fw_policy = cls.create_firewall_policy()
 
diff --git a/tempest/api/network/test_load_balancer.py b/tempest/api/network/test_load_balancer.py
index 583f91a..8bd0f24 100644
--- a/tempest/api/network/test_load_balancer.py
+++ b/tempest/api/network/test_load_balancer.py
@@ -21,7 +21,6 @@
 
 
 class LoadBalancerTestJSON(base.BaseNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -39,11 +38,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(LoadBalancerTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(LoadBalancerTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('lbaas', 'network'):
             msg = "lbaas extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(LoadBalancerTestJSON, cls).resource_setup()
         cls.network = cls.create_network()
         cls.name = cls.network['name']
         cls.subnet = cls.create_subnet(cls.network)
diff --git a/tempest/api/network/test_metering_extensions.py b/tempest/api/network/test_metering_extensions.py
index 68aed27..7935e5b 100644
--- a/tempest/api/network/test_metering_extensions.py
+++ b/tempest/api/network/test_metering_extensions.py
@@ -12,10 +12,10 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.network import base
-from tempest.openstack.common import log as logging
 from tempest import test
 
 
@@ -23,7 +23,6 @@
 
 
 class MeteringTestJSON(base.BaseAdminNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -33,11 +32,15 @@
     """
 
     @classmethod
-    def resource_setup(cls):
-        super(MeteringTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(MeteringTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('metering', 'network'):
             msg = "metering extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(MeteringTestJSON, cls).resource_setup()
         description = "metering label created by tempest"
         name = data_utils.rand_name("metering-label")
         cls.metering_label = cls.create_metering_label(name, description)
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index 2e01a85..f85e8cf 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -27,7 +27,6 @@
 
 
 class NetworksTestJSON(base.BaseNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -417,7 +416,6 @@
 
 
 class BulkNetworkOpsTestJSON(base.BaseNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -604,11 +602,11 @@
 class NetworksIpV6TestAttrs(NetworksIpV6TestJSON):
 
     @classmethod
-    def resource_setup(cls):
+    def skip_checks(cls):
+        super(NetworksIpV6TestAttrs, cls).skip_checks()
         if not CONF.network_feature_enabled.ipv6_subnet_attributes:
             raise cls.skipException("IPv6 extended attributes for "
                                     "subnets not available")
-        super(NetworksIpV6TestAttrs, cls).resource_setup()
 
     @test.attr(type='smoke')
     @test.idempotent_id('da40cd1b-a833-4354-9a85-cd9b8a3b74ca')
diff --git a/tempest/api/network/test_ports.py b/tempest/api/network/test_ports.py
index 6fe955e..953b268 100644
--- a/tempest/api/network/test_ports.py
+++ b/tempest/api/network/test_ports.py
@@ -28,7 +28,6 @@
 
 
 class PortsTestJSON(sec_base.BaseSecGroupTest):
-
     """
     Test the following operations for ports:
 
@@ -315,13 +314,17 @@
 class PortsAdminExtendedAttrsTestJSON(base.BaseAdminNetworkTest):
 
     @classmethod
+    def setup_clients(cls):
+        super(PortsAdminExtendedAttrsTestJSON, cls).setup_clients()
+        cls.identity_client = cls.os_adm.identity_client
+
+    @classmethod
     def resource_setup(cls):
         super(PortsAdminExtendedAttrsTestJSON, cls).resource_setup()
-        cls.identity_client = cls._get_identity_admin_client()
-        cls.tenant = cls.identity_client.get_tenant_by_name(
-            CONF.identity.tenant_name)
         cls.network = cls.create_network()
         cls.host_id = socket.gethostname()
+        cls.tenant = cls.identity_client.get_tenant_by_name(
+            CONF.identity.tenant_name)
 
     @test.attr(type='smoke')
     @test.idempotent_id('8e8569c1-9ac7-44db-8bc1-f5fb2814f29b')
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index e9c9484..c6f3849 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -17,7 +17,6 @@
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.network import base_routers as base
-from tempest import clients
 from tempest import config
 from tempest import test
 
@@ -27,13 +26,20 @@
 class RoutersTest(base.BaseRouterTest):
 
     @classmethod
-    def resource_setup(cls):
-        super(RoutersTest, cls).resource_setup()
+    def skip_checks(cls):
+        super(RoutersTest, cls).skip_checks()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
-        admin_manager = clients.AdminManager()
-        cls.identity_admin_client = admin_manager.identity_client
+
+    @classmethod
+    def setup_clients(cls):
+        super(RoutersTest, cls).setup_clients()
+        cls.identity_admin_client = cls.os_adm.identity_client
+
+    @classmethod
+    def resource_setup(cls):
+        super(RoutersTest, cls).resource_setup()
         cls.tenant_cidr = (CONF.network.tenant_network_cidr
                            if cls._ip_version == 4 else
                            CONF.network.tenant_network_v6_cidr)
diff --git a/tempest/api/network/test_routers_negative.py b/tempest/api/network/test_routers_negative.py
index 9e7d574..ae17222 100644
--- a/tempest/api/network/test_routers_negative.py
+++ b/tempest/api/network/test_routers_negative.py
@@ -27,11 +27,15 @@
 class RoutersNegativeTest(base.BaseRouterTest):
 
     @classmethod
-    def resource_setup(cls):
-        super(RoutersNegativeTest, cls).resource_setup()
+    def skip_checks(cls):
+        super(RoutersNegativeTest, cls).skip_checks()
         if not test.is_extension_enabled('router', 'network'):
             msg = "router extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
+        super(RoutersNegativeTest, cls).resource_setup()
         cls.router = cls.create_router(data_utils.rand_name('router-'))
         cls.network = cls.create_network()
         cls.subnet = cls.create_subnet(cls.network)
diff --git a/tempest/api/network/test_security_groups.py b/tempest/api/network/test_security_groups.py
index 46dbeee..71e1beb 100644
--- a/tempest/api/network/test_security_groups.py
+++ b/tempest/api/network/test_security_groups.py
@@ -24,12 +24,11 @@
 
 
 class SecGroupTest(base.BaseSecGroupTest):
-
     _tenant_network_cidr = CONF.network.tenant_network_cidr
 
     @classmethod
-    def resource_setup(cls):
-        super(SecGroupTest, cls).resource_setup()
+    def skip_checks(cls):
+        super(SecGroupTest, cls).skip_checks()
         if not test.is_extension_enabled('security-group', 'network'):
             msg = "security-group extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/network/test_security_groups_negative.py b/tempest/api/network/test_security_groups_negative.py
index 97c0592..0c5f017 100644
--- a/tempest/api/network/test_security_groups_negative.py
+++ b/tempest/api/network/test_security_groups_negative.py
@@ -25,12 +25,11 @@
 
 
 class NegativeSecGroupTest(base.BaseSecGroupTest):
-
     _tenant_network_cidr = CONF.network.tenant_network_cidr
 
     @classmethod
-    def resource_setup(cls):
-        super(NegativeSecGroupTest, cls).resource_setup()
+    def skip_checks(cls):
+        super(NegativeSecGroupTest, cls).skip_checks()
         if not test.is_extension_enabled('security-group', 'network'):
             msg = "security-group extension not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/network/test_service_type_management.py b/tempest/api/network/test_service_type_management.py
index a1e4136..085ad73 100644
--- a/tempest/api/network/test_service_type_management.py
+++ b/tempest/api/network/test_service_type_management.py
@@ -19,8 +19,8 @@
 class ServiceTypeManagementTestJSON(base.BaseNetworkTest):
 
     @classmethod
-    def resource_setup(cls):
-        super(ServiceTypeManagementTestJSON, cls).resource_setup()
+    def skip_checks(cls):
+        super(ServiceTypeManagementTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('service-type', 'network'):
             msg = "Neutron Service Type Management not enabled."
             raise cls.skipException(msg)
diff --git a/tempest/api/network/test_vpnaas_extensions.py b/tempest/api/network/test_vpnaas_extensions.py
index ba30326..4ab69e0 100644
--- a/tempest/api/network/test_vpnaas_extensions.py
+++ b/tempest/api/network/test_vpnaas_extensions.py
@@ -24,7 +24,6 @@
 
 
 class VPNaaSTestJSON(base.BaseAdminNetworkTest):
-
     """
     Tests the following operations in the Neutron API using the REST client for
     Neutron:
@@ -34,10 +33,14 @@
     """
 
     @classmethod
-    def resource_setup(cls):
+    def skip_checks(cls):
+        super(VPNaaSTestJSON, cls).skip_checks()
         if not test.is_extension_enabled('vpnaas', 'network'):
             msg = "vpnaas extension not enabled."
             raise cls.skipException(msg)
+
+    @classmethod
+    def resource_setup(cls):
         super(VPNaaSTestJSON, cls).resource_setup()
         cls.ext_net_id = CONF.network.public_network_id
         cls.network = cls.create_network()
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index f75f4c8..c8697e1 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -67,11 +67,6 @@
         cls.account_client.auth_provider.clear_auth()
 
     @classmethod
-    def resource_cleanup(cls):
-        cls.isolated_creds.clear_isolated_creds()
-        super(BaseObjectTest, cls).resource_cleanup()
-
-    @classmethod
     def delete_containers(cls, containers, container_client=None,
                           object_client=None):
         """Remove given containers and all objects in them.
diff --git a/tempest/api/orchestration/base.py b/tempest/api/orchestration/base.py
index 08fddb5..59fdec0 100644
--- a/tempest/api/orchestration/base.py
+++ b/tempest/api/orchestration/base.py
@@ -12,13 +12,14 @@
 
 import os.path
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 import yaml
 
 from tempest import clients
+from tempest.common import credentials
 from tempest import config
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
@@ -38,7 +39,19 @@
     @classmethod
     def setup_credentials(cls):
         super(BaseOrchestrationTest, cls).setup_credentials()
-        cls.os = clients.Manager()
+        if (not hasattr(cls, 'isolated_creds') or
+            not cls.isolated_creds.name == cls.__name__):
+            cls.isolated_creds = credentials.get_isolated_credentials(
+                name=cls.__name__, network_resources=cls.network_resources)
+        stack_owner_role = CONF.orchestration.stack_owner_role
+        if not cls.isolated_creds.is_role_available(stack_owner_role):
+            skip_msg = ("%s skipped because the configured credential provider"
+                        " is not able to provide credentials with the %s role "
+                        "assigned." % (cls.__name__, stack_owner_role))
+            raise cls.skipException(skip_msg)
+        else:
+            cls.os = clients.Manager(cls.isolated_creds.get_creds_by_roles(
+                [stack_owner_role]))
 
     @classmethod
     def setup_clients(cls):
@@ -70,7 +83,7 @@
     @classmethod
     def _get_identity_admin_client(cls):
         """Returns an instance of the Identity Admin API client."""
-        manager = clients.AdminManager()
+        manager = clients.Manager(cls.isolated_creds.get_admin_creds())
         admin_client = manager.identity_client
         return admin_client
 
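
``BaseOrchestrationTest`` above no longer uses a bare ``clients.Manager()``;
it asks the credential provider for credentials carrying the configured
``stack_owner_role`` and skips when that role cannot be granted. A minimal
sketch of the same role-gated lookup, assuming a tempest checkout;
``stack_owner_manager`` is a hypothetical helper, not part of the change::

    from tempest import clients
    from tempest.common import credentials
    from tempest import config

    CONF = config.CONF


    def stack_owner_manager(name, network_resources=None):
        # Hypothetical helper mirroring setup_credentials() above.
        isolated_creds = credentials.get_isolated_credentials(
            name=name, network_resources=network_resources)
        role = CONF.orchestration.stack_owner_role
        if not isolated_creds.is_role_available(role):
            # The base class raises skipException here; a plain error keeps
            # this sketch independent of the test-case machinery.
            raise RuntimeError("credential provider cannot grant role "
                               "%s" % role)
        return clients.Manager(isolated_creds.get_creds_by_roles([role]))
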
diff --git a/tempest/api/orchestration/stacks/templates/neutron_basic.yaml b/tempest/api/orchestration/stacks/templates/neutron_basic.yaml
index 878ff68..be33c94 100644
--- a/tempest/api/orchestration/stacks/templates/neutron_basic.yaml
+++ b/tempest/api/orchestration/stacks/templates/neutron_basic.yaml
@@ -51,12 +51,14 @@
       key_name: {get_param: KeyName}
       networks:
       - network: {get_resource: Network}
+      user_data_format: RAW
       user_data:
         str_replace:
           template: |
-            #!/bin/bash -v
+            #!/bin/sh -v
 
-            while ! /opt/aws/bin/cfn-signal -e 0 -r "SmokeServerNeutron created" \
+            SIGNAL_DATA='{"Status": "SUCCESS", "Reason": "SmokeServerNeutron created", "Data": "Application has completed configuration.", "UniqueId": "00000"}'
+            while ! curl --fail -X PUT -H 'Content-Type:' --data-binary "$SIGNAL_DATA" \
             'wait_handle' ; do sleep 3; done
           params:
             wait_handle: {get_resource: WaitHandleNeutron}
diff --git a/tempest/api/orchestration/stacks/test_neutron_resources.py b/tempest/api/orchestration/stacks/test_neutron_resources.py
index 998e3d0..bcf091a 100644
--- a/tempest/api/orchestration/stacks/test_neutron_resources.py
+++ b/tempest/api/orchestration/stacks/test_neutron_resources.py
@@ -32,8 +32,6 @@
     @classmethod
     def skip_checks(cls):
         super(NeutronResourcesTestJSON, cls).skip_checks()
-        if not CONF.orchestration.image_ref:
-            raise cls.skipException("No image available to test")
         if not CONF.service_available.neutron:
             raise cls.skipException("Neutron support is required")
 
@@ -68,7 +66,7 @@
             parameters={
                 'KeyName': cls.keypair_name,
                 'InstanceType': CONF.orchestration.instance_type,
-                'ImageId': CONF.orchestration.image_ref,
+                'ImageId': CONF.compute.image_ref,
                 'ExternalNetworkId': cls.external_network_id,
                 'timeout': CONF.orchestration.build_timeout,
                 'DNSServers': CONF.network.dns_servers,
diff --git a/tempest/api/orchestration/stacks/test_non_empty_stack.py b/tempest/api/orchestration/stacks/test_non_empty_stack.py
index 01b1ef1..9c5a6d5 100644
--- a/tempest/api/orchestration/stacks/test_non_empty_stack.py
+++ b/tempest/api/orchestration/stacks/test_non_empty_stack.py
@@ -30,7 +30,7 @@
         super(StacksTestJSON, cls).resource_setup()
         cls.stack_name = data_utils.rand_name('heat')
         template = cls.read_template('non_empty_stack')
-        image_id = (CONF.orchestration.image_ref or
+        image_id = (CONF.compute.image_ref or
                     cls._create_image()['id'])
         flavor = CONF.orchestration.instance_type
         # create the stack
diff --git a/tempest/api/orchestration/stacks/test_soft_conf.py b/tempest/api/orchestration/stacks/test_soft_conf.py
index 697c6ee..649bf47 100644
--- a/tempest/api/orchestration/stacks/test_soft_conf.py
+++ b/tempest/api/orchestration/stacks/test_soft_conf.py
@@ -10,12 +10,12 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest.api.orchestration import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/api/orchestration/stacks/test_stacks.py b/tempest/api/orchestration/stacks/test_stacks.py
index 3e61de4..147f456 100644
--- a/tempest/api/orchestration/stacks/test_stacks.py
+++ b/tempest/api/orchestration/stacks/test_stacks.py
@@ -10,10 +10,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.orchestration import base
-from tempest.openstack.common import log as logging
 from tempest import test
 
 
diff --git a/tempest/api/telemetry/base.py b/tempest/api/telemetry/base.py
index fff04fb..ed719c2 100644
--- a/tempest/api/telemetry/base.py
+++ b/tempest/api/telemetry/base.py
@@ -12,12 +12,12 @@
 
 import time
 
+from oslo_utils import timeutils
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import timeutils
 import tempest.test
 
 CONF = config.CONF
@@ -101,7 +101,6 @@
         cls.cleanup_resources(cls.telemetry_client.delete_alarm, cls.alarm_ids)
         cls.cleanup_resources(cls.servers_client.delete_server, cls.server_ids)
         cls.cleanup_resources(cls.image_client.delete_image, cls.image_ids)
-        cls.clear_isolated_creds()
         super(BaseTelemetryTest, cls).resource_cleanup()
 
     def await_samples(self, metric, query):
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index 09ec075..ad5eb7d 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -10,11 +10,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.volume import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/api/volume/admin/test_snapshots_actions.py b/tempest/api/volume/admin/test_snapshots_actions.py
index cb55869..db026c1 100644
--- a/tempest/api/volume/admin/test_snapshots_actions.py
+++ b/tempest/api/volume/admin/test_snapshots_actions.py
@@ -31,7 +31,7 @@
         super(SnapshotsActionsV2Test, cls).resource_setup()
 
         # Create a test shared volume for tests
-        vol_name = data_utils.rand_name(cls.__name__ + '-Volume-')
+        vol_name = data_utils.rand_name(cls.__name__ + '-Volume')
         cls.name_field = cls.special_fields['name_field']
         params = {cls.name_field: vol_name}
         cls.volume = \
@@ -40,7 +40,7 @@
                                                   'available')
 
         # Create a test shared snapshot for tests
-        snap_name = data_utils.rand_name(cls.__name__ + '-Snapshot-')
+        snap_name = data_utils.rand_name(cls.__name__ + '-Snapshot')
         params = {cls.name_field: snap_name}
         cls.snapshot = \
             cls.client.create_snapshot(cls.volume['id'], **params)
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index 7a64de3..86d90f6 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -95,7 +95,8 @@
         self.assertEqual(quota_usage['volumes']['in_use'] + 1,
                          new_quota_usage['volumes']['in_use'])
 
-        self.assertEqual(quota_usage['gigabytes']['in_use'] + 1,
+        self.assertEqual(quota_usage['gigabytes']['in_use'] +
+                         volume["size"],
                          new_quota_usage['gigabytes']['in_use'])
 
     @test.attr(type='gate')
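
The usage check above now grows the expected gigabytes by the actual size of the created volume rather than assuming 1 GB. Illustrative arithmetic only, with made-up numbers::

    before = {'volumes': {'in_use': 2}, 'gigabytes': {'in_use': 20}}
    volume = {'size': 10}  # e.g. a deployment creating 10 GB test volumes

    expected_volumes = before['volumes']['in_use'] + 1                    # 3
    expected_gigabytes = before['gigabytes']['in_use'] + volume['size']   # 30
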
diff --git a/tempest/api/volume/admin/test_volume_quotas_negative.py b/tempest/api/volume/admin/test_volume_quotas_negative.py
index 98b7143..d7287f0 100644
--- a/tempest/api/volume/admin/test_volume_quotas_negative.py
+++ b/tempest/api/volume/admin/test_volume_quotas_negative.py
@@ -31,7 +31,9 @@
     @classmethod
     def resource_setup(cls):
         super(BaseVolumeQuotasNegativeV2TestJSON, cls).resource_setup()
-        cls.shared_quota_set = {'gigabytes': 3, 'volumes': 1, 'snapshots': 1}
+        cls.default_volume_size = cls.volumes_client.default_volume_size
+        cls.shared_quota_set = {'gigabytes': 3 * cls.default_volume_size,
+                                'volumes': 1, 'snapshots': 1}
 
         # NOTE(gfidente): no need to restore original quota set
         # after the tests as they only work with tenant isolation.
@@ -67,14 +69,16 @@
                         self.demo_tenant_id,
                         **self.shared_quota_set)
 
-        new_quota_set = {'gigabytes': 2, 'volumes': 2, 'snapshots': 1}
+        new_quota_set = {'gigabytes': 2 * self.default_volume_size,
+                         'volumes': 2, 'snapshots': 1}
         self.quotas_client.update_quota_set(
             self.demo_tenant_id,
             **new_quota_set)
         self.assertRaises(lib_exc.OverLimit,
                           self.volumes_client.create_volume)
 
-        new_quota_set = {'gigabytes': 2, 'volumes': 1, 'snapshots': 2}
+        new_quota_set = {'gigabytes': 2 * self.default_volume_size,
+                         'volumes': 1, 'snapshots': 2}
         self.quotas_client.update_quota_set(
             self.demo_tenant_id,
             **self.shared_quota_set)
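
Expressing the quotas as multiples of the volumes client's default volume size keeps the negative scenarios meaningful on clouds that create test volumes larger than 1 GB; the relative headroom stays the same whatever the size. For example, with a hypothetical default of 5 GB::

    default_volume_size = 5  # assumed deployment default, in GB

    shared_quota_set = {'gigabytes': 3 * default_volume_size,  # 15 GB
                        'volumes': 1, 'snapshots': 1}
    new_quota_set = {'gigabytes': 2 * default_volume_size,     # 10 GB
                     'volumes': 2, 'snapshots': 1}
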
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
index 4669e0e..681a48a 100644
--- a/tempest/api/volume/admin/test_volume_types.py
+++ b/tempest/api/volume/admin/test_volume_types.py
@@ -43,7 +43,7 @@
     def test_volume_crud_with_volume_type_and_extra_specs(self):
         # Create/update/get/delete volume with volume_type and extra spec.
         volume_types = list()
-        vol_name = data_utils.rand_name("volume-")
+        vol_name = data_utils.rand_name("volume")
         self.name_field = self.special_fields['name_field']
         proto = CONF.volume.storage_protocol
         vendor = CONF.volume.vendor_name
@@ -51,7 +51,7 @@
                        "vendor_name": vendor}
         # Create two volume_types
         for i in range(2):
-            vol_type_name = data_utils.rand_name("volume-type-")
+            vol_type_name = data_utils.rand_name("volume-type")
             vol_type = self.volume_types_client.create_volume_type(
                 vol_type_name,
                 extra_specs=extra_specs)
@@ -94,7 +94,7 @@
     def test_volume_type_create_get_delete(self):
         # Create/get volume type.
         body = {}
-        name = data_utils.rand_name("volume-type-")
+        name = data_utils.rand_name("volume-type")
         proto = CONF.volume.storage_protocol
         vendor = CONF.volume.vendor_name
         extra_specs = {"storage_protocol": proto,
@@ -128,7 +128,7 @@
         # Create/get/delete encryption type.
         provider = "LuksEncryptor"
         control_location = "front-end"
-        name = data_utils.rand_name("volume-type-")
+        name = data_utils.rand_name("volume-type")
         body = self.volume_types_client.create_volume_type(name)
         self.addCleanup(self._delete_volume_type, body['id'])
 
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs.py b/tempest/api/volume/admin/test_volume_types_extra_specs.py
index a1b80ce..f382a67 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs.py
@@ -24,7 +24,7 @@
     @classmethod
     def resource_setup(cls):
         super(VolumeTypesExtraSpecsV2Test, cls).resource_setup()
-        vol_type_name = data_utils.rand_name('Volume-type-')
+        vol_type_name = data_utils.rand_name('Volume-type')
         cls.volume_type = cls.volume_types_client.create_volume_type(
             vol_type_name)
 
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
index 1eed800..7775025 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
@@ -27,7 +27,7 @@
     @classmethod
     def resource_setup(cls):
         super(ExtraSpecsNegativeV2Test, cls).resource_setup()
-        vol_type_name = data_utils.rand_name('Volume-type-')
+        vol_type_name = data_utils.rand_name('Volume-type')
         cls.extra_specs = {"spec1": "val1"}
         cls.volume_type = cls.volume_types_client.create_volume_type(
             vol_type_name,
diff --git a/tempest/api/volume/admin/test_volumes_actions.py b/tempest/api/volume/admin/test_volumes_actions.py
index 29de04d..1b69549 100644
--- a/tempest/api/volume/admin/test_volumes_actions.py
+++ b/tempest/api/volume/admin/test_volumes_actions.py
@@ -31,7 +31,7 @@
         super(VolumesActionsV2Test, cls).resource_setup()
 
         # Create a test shared volume for tests
-        vol_name = utils.rand_name(cls.__name__ + '-Volume-')
+        vol_name = utils.rand_name(cls.__name__ + '-Volume')
         cls.name_field = cls.special_fields['name_field']
         params = {cls.name_field: vol_name}
 
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index 986e986..6fd2a5e 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -13,11 +13,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.volume import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index e5cff23..1f76b1c 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -13,13 +13,14 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
+from tempest.common import fixed_network
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 import tempest.test
 
 CONF = config.CONF
@@ -62,6 +63,7 @@
         super(BaseVolumeTest, cls).setup_clients()
 
         cls.servers_client = cls.os.servers_client
+        cls.networks_client = cls.os.networks_client
 
         if cls._api_version == 1:
             cls.snapshots_client = cls.os.snapshots_client
@@ -102,7 +104,6 @@
     def resource_cleanup(cls):
         cls.clear_snapshots()
         cls.clear_volumes()
-        cls.clear_isolated_creds()
         super(BaseVolumeTest, cls).resource_cleanup()
 
     @classmethod
@@ -160,6 +161,15 @@
             except Exception:
                 pass
 
+    @classmethod
+    def create_server(cls, name, **kwargs):
+        network = cls.get_tenant_network()
+        network_kwargs = fixed_network.set_networks_kwarg(network, kwargs)
+        return cls.servers_client.create_server(name,
+                                                cls.image_ref,
+                                                cls.flavor_ref,
+                                                **network_kwargs)
+
 
 class BaseVolumeAdminTest(BaseVolumeTest):
     """Base test case class for all Volume Admin API tests."""
diff --git a/tempest/api/volume/test_extensions.py b/tempest/api/volume/test_extensions.py
index e5fe3c8..e8ff5e0 100644
--- a/tempest/api/volume/test_extensions.py
+++ b/tempest/api/volume/test_extensions.py
@@ -13,10 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 
 from tempest.api.volume import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 7771300..1872ec7 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -35,10 +35,8 @@
         super(VolumesV2ActionsTest, cls).resource_setup()
 
         # Create a test shared instance
-        srv_name = data_utils.rand_name(cls.__name__ + '-Instance-')
-        cls.server = cls.servers_client.create_server(srv_name,
-                                                      cls.image_ref,
-                                                      cls.flavor_ref)
+        srv_name = data_utils.rand_name(cls.__name__ + '-Instance')
+        cls.server = cls.create_server(srv_name)
         cls.servers_client.wait_for_server_status(cls.server['id'], 'ACTIVE')
 
         # Create a test shared volume for attach/detach tests
@@ -104,7 +102,7 @@
         # it is shared with the other tests. After it is uploaded in Glance,
         # there is no way to delete it from Cinder, so we delete it from Glance
         # using the Glance image_client and from Cinder via tearDownClass.
-        image_name = data_utils.rand_name('Image-')
+        image_name = data_utils.rand_name('Image')
         body = self.client.upload_volume(self.volume['id'],
                                          image_name,
                                          CONF.volume.disk_format)
diff --git a/tempest/api/volume/test_volumes_list.py b/tempest/api/volume/test_volumes_list.py
index b5bf362..29e3324 100644
--- a/tempest/api/volume/test_volumes_list.py
+++ b/tempest/api/volume/test_volumes_list.py
@@ -15,11 +15,11 @@
 #    under the License.
 import operator
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from testtools import matchers
 
 from tempest.api.volume import base
-from tempest.openstack.common import log as logging
 from tempest import test
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index b59a313..a47e964 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -58,7 +58,7 @@
     def test_create_volume_with_invalid_size(self):
         # Should not be able to create volume with invalid size
         # in request
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='#$%', display_name=v_name, metadata=metadata)
@@ -68,7 +68,7 @@
     def test_create_volume_with_out_passing_size(self):
         # Should not be able to create volume without passing size
         # in request
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='', display_name=v_name, metadata=metadata)
@@ -77,7 +77,7 @@
     @test.idempotent_id('41331caa-eaf4-4001-869d-bc18c1869360')
     def test_create_volume_with_size_zero(self):
         # Should not be able to create volume with size zero
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='0', display_name=v_name, metadata=metadata)
@@ -86,7 +86,7 @@
     @test.idempotent_id('8b472729-9eba-446e-a83b-916bdb34bef7')
     def test_create_volume_with_size_negative(self):
         # Should not be able to create volume with size negative
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.BadRequest, self.client.create_volume,
                           size='-1', display_name=v_name, metadata=metadata)
@@ -95,7 +95,7 @@
     @test.idempotent_id('10254ed8-3849-454e-862e-3ab8e6aa01d2')
     def test_create_volume_with_nonexistent_volume_type(self):
         # Should not be able to create volume with non-existent volume type
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.NotFound, self.client.create_volume,
                           size='1', volume_type=str(uuid.uuid4()),
@@ -105,7 +105,7 @@
     @test.idempotent_id('0c36f6ae-4604-4017-b0a9-34fdc63096f9')
     def test_create_volume_with_nonexistent_snapshot_id(self):
         # Should not be able to create volume with non-existent snapshot
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.NotFound, self.client.create_volume,
                           size='1', snapshot_id=str(uuid.uuid4()),
@@ -115,7 +115,7 @@
     @test.idempotent_id('47c73e08-4be8-45bb-bfdf-0c4e79b88344')
     def test_create_volume_with_nonexistent_source_volid(self):
         # Should not be able to create volume with non-existent source volume
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.NotFound, self.client.create_volume,
                           size='1', source_volid=str(uuid.uuid4()),
@@ -124,7 +124,7 @@
     @test.attr(type=['negative', 'gate'])
     @test.idempotent_id('0186422c-999a-480e-a026-6a665744c30c')
     def test_update_volume_with_nonexistent_volume_id(self):
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.NotFound, self.client.update_volume,
                           volume_id=str(uuid.uuid4()), display_name=v_name,
@@ -133,7 +133,7 @@
     @test.attr(type=['negative', 'gate'])
     @test.idempotent_id('e66e40d6-65e6-4e75-bdc7-636792fa152d')
     def test_update_volume_with_invalid_volume_id(self):
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.NotFound, self.client.update_volume,
                           volume_id='#$%%&^&^', display_name=v_name,
@@ -142,7 +142,7 @@
     @test.attr(type=['negative', 'gate'])
     @test.idempotent_id('72aeca85-57a5-4c1f-9057-f320f9ea575b')
     def test_update_volume_with_empty_volume_id(self):
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         metadata = {'Type': 'work'}
         self.assertRaises(lib_exc.NotFound, self.client.update_volume,
                           volume_id='', display_name=v_name,
@@ -178,10 +178,8 @@
     @test.idempotent_id('f5e56b0a-5d02-43c1-a2a7-c9b792c2e3f6')
     @test.services('compute')
     def test_attach_volumes_with_nonexistent_volume_id(self):
-        srv_name = data_utils.rand_name('Instance-')
-        server = self.servers_client.create_server(srv_name,
-                                                   self.image_ref,
-                                                   self.flavor_ref)
+        srv_name = data_utils.rand_name('Instance')
+        server = self.create_server(srv_name)
         self.addCleanup(self.servers_client.delete_server, server['id'])
         self.servers_client.wait_for_server_status(server['id'], 'ACTIVE')
         self.assertRaises(lib_exc.NotFound,
@@ -266,7 +264,7 @@
     @test.attr(type=['negative', 'gate'])
     @test.idempotent_id('0f4aa809-8c7b-418f-8fb3-84c7a5dfc52f')
     def test_list_volumes_with_nonexistent_name(self):
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         params = {self.name_field: v_name}
         fetched_volume = self.client.list_volumes(params)
         self.assertEqual(0, len(fetched_volume))
@@ -274,7 +272,7 @@
     @test.attr(type=['negative', 'gate'])
     @test.idempotent_id('9ca17820-a0e7-4cbd-a7fa-f4468735e359')
     def test_list_volumes_detail_with_nonexistent_name(self):
-        v_name = data_utils.rand_name('Volume-')
+        v_name = data_utils.rand_name('Volume')
         params = {self.name_field: v_name}
         fetched_volume = \
             self.client.list_volumes_with_detail(params)
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 9a72e90..b277390 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -10,11 +10,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.api.volume import base
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 LOG = logging.getLogger(__name__)
@@ -68,10 +68,8 @@
     def test_snapshot_create_with_volume_in_use(self):
         # Create a snapshot when volume status is in-use
         # Create a test instance
-        server_name = data_utils.rand_name('instance-')
-        server = self.servers_client.create_server(server_name,
-                                                   self.image_ref,
-                                                   self.flavor_ref)
+        server_name = data_utils.rand_name('instance')
+        server = self.create_server(server_name)
         self.addCleanup(self.servers_client.delete_server, server['id'])
         self.servers_client.wait_for_server_status(server['id'], 'ACTIVE')
         mountpoint = '/dev/%s' % CONF.compute.volume_device_name
diff --git a/tempest/api_schema/response/compute/agents.py b/tempest/api_schema/response/compute/agents.py
deleted file mode 100644
index e5f3a8d..0000000
--- a/tempest/api_schema/response/compute/agents.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright 2014 NEC Corporation.  All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-list_agents = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'agents': {
-                'type': 'array',
-                'items': {
-                    'type': 'object',
-                    'properties': {
-                        'agent_id': {'type': 'integer'},
-                        'hypervisor': {'type': 'string'},
-                        'os': {'type': 'string'},
-                        'architecture': {'type': 'string'},
-                        'version': {'type': 'string'},
-                        'url': {'type': 'string', 'format': 'uri'},
-                        'md5hash': {'type': 'string'}
-                    },
-                    'required': ['agent_id', 'hypervisor', 'os',
-                                 'architecture', 'version', 'url', 'md5hash']
-                }
-            }
-        },
-        'required': ['agents']
-    }
-}
-
-common_create_agent = {
-    'type': 'object',
-    'properties': {
-        'agent': {
-            'type': 'object',
-            'properties': {
-                'agent_id': {'type': ['integer', 'string']},
-                'hypervisor': {'type': 'string'},
-                'os': {'type': 'string'},
-                'architecture': {'type': 'string'},
-                'version': {'type': 'string'},
-                'url': {'type': 'string', 'format': 'uri'},
-                'md5hash': {'type': 'string'}
-            },
-            'required': ['agent_id', 'hypervisor', 'os', 'architecture',
-                         'version', 'url', 'md5hash']
-        }
-    },
-    'required': ['agent']
-}
diff --git a/tempest/api_schema/response/compute/baremetal_nodes.py b/tempest/api_schema/response/compute/baremetal_nodes.py
index e82792c..82506e7 100644
--- a/tempest/api_schema/response/compute/baremetal_nodes.py
+++ b/tempest/api_schema/response/compute/baremetal_nodes.py
@@ -12,6 +12,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import copy
+
 node = {
     'type': 'object',
     'properties': {
@@ -41,7 +43,7 @@
     }
 }
 
-get_baremetal_node = {
+baremetal_node = {
     'status_code': [200],
     'response_body': {
         'type': 'object',
@@ -51,3 +53,8 @@
         'required': ['node']
     }
 }
+get_baremetal_node = copy.deepcopy(baremetal_node)
+get_baremetal_node['response_body']['properties']['node'][
+    'properties'].update({'instance_uuid': {'type': ['string', 'null']}})
+get_baremetal_node['response_body']['properties']['node'][
+    'required'].append('instance_uuid')
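
``get_baremetal_node`` is now derived from a shared ``baremetal_node`` definition with ``copy.deepcopy`` before ``instance_uuid`` is grafted on, so the base schema itself is never mutated. The same derive-by-deepcopy pattern recurs throughout the new ``v2_1`` schema modules below; a tiny standalone illustration::

    import copy

    base = {'properties': {'id': {'type': 'string'}}, 'required': ['id']}

    variant = copy.deepcopy(base)
    variant['properties']['instance_uuid'] = {'type': ['string', 'null']}
    variant['required'].append('instance_uuid')

    # The shared definition is left untouched.
    assert 'instance_uuid' not in base['properties']
    assert base['required'] == ['id']
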
diff --git a/tempest/api_schema/response/compute/hypervisors.py b/tempest/api_schema/response/compute/hypervisors.py
index 273b579..d6f2bd1 100644
--- a/tempest/api_schema/response/compute/hypervisors.py
+++ b/tempest/api_schema/response/compute/hypervisors.py
@@ -56,6 +56,8 @@
                 'items': {
                     'type': 'object',
                     'properties': {
+                        'status': {'type': 'string'},
+                        'state': {'type': 'string'},
                         'cpu_info': {'type': 'string'},
                         'current_workload': {'type': 'integer'},
                         'disk_available_least': {'type': ['integer', 'null']},
@@ -78,13 +80,20 @@
                             'type': 'object',
                             'properties': {
                                 'host': {'type': 'string'},
-                                'id': {'type': ['integer', 'string']}
+                                'id': {'type': ['integer', 'string']},
+                                'disabled_reason': {'type': ['string', 'null']}
                             },
+                            # NOTE(gmann): 'disabled_reason' is updated in
+                            # 'service' dict if 'os-hypervisor-status'
+                            # extension is loaded. So this is not required.
                             'required': ['host', 'id']
                         },
                         'vcpus': {'type': 'integer'},
                         'vcpus_used': {'type': 'integer'}
                     },
+                    # NOTE: 'status' and 'state' only appear in the response
+                    # when the 'os-hypervisor-status' extension is loaded,
+                    # so they are not marked as required.
                     'required': ['cpu_info', 'current_workload',
                                  'disk_available_least', 'host_ip',
                                  'free_disk_gb', 'free_ram_mb',
@@ -108,6 +117,8 @@
             'hypervisor': {
                 'type': 'object',
                 'properties': {
+                    'status': {'type': 'string'},
+                    'state': {'type': 'string'},
                     'cpu_info': {'type': 'string'},
                     'current_workload': {'type': 'integer'},
                     'disk_available_least': {'type': ['integer', 'null']},
@@ -130,13 +141,20 @@
                         'type': 'object',
                         'properties': {
                             'host': {'type': 'string'},
-                            'id': {'type': ['integer', 'string']}
+                            'id': {'type': ['integer', 'string']},
+                            'disabled_reason': {'type': ['string', 'null']}
                         },
+                        # NOTE: 'disabled_reason' is updated in 'service'
+                        # dict if the 'os-hypervisor-status' extension is loaded.
+                        # So this is not required.
                         'required': ['host', 'id']
                     },
                     'vcpus': {'type': 'integer'},
                     'vcpus_used': {'type': 'integer'}
                 },
+                # NOTE: 'status' and 'state' only appear in the response
+                # when the 'os-hypervisor-status' extension is loaded,
+                # so they are not marked as required.
                 'required': ['cpu_info', 'current_workload',
                              'disk_available_least', 'host_ip',
                              'free_disk_gb', 'free_ram_mb',
@@ -184,9 +202,14 @@
             'hypervisor': {
                 'type': 'object',
                 'properties': {
+                    'status': {'type': 'string'},
+                    'state': {'type': 'string'},
                     'id': {'type': ['integer', 'string']},
                     'hypervisor_hostname': {'type': 'string'},
                 },
+                # NOTE: 'status' and 'state' only appear in the response
+                # when the 'os-hypervisor-status' extension is loaded,
+                # so they are not marked as required.
                 'required': ['id', 'hypervisor_hostname']
             }
         },
diff --git a/tempest/api_schema/response/compute/parameter_types.py b/tempest/api_schema/response/compute/parameter_types.py
index 4a1dfdd..90d4c8f 100644
--- a/tempest/api_schema/response/compute/parameter_types.py
+++ b/tempest/api_schema/response/compute/parameter_types.py
@@ -65,3 +65,17 @@
         }
     }
 }
+
+response_header = {
+    'connection': {'type': 'string'},
+    'content-length': {'type': 'string'},
+    'content-type': {'type': 'string'},
+    'status': {'type': 'string'},
+    'x-compute-request-id': {'type': 'string'},
+    'vary': {'type': 'string'},
+    'x-openstack-nova-api-version': {'type': 'string'},
+    'date': {
+        'type': 'string',
+        'format': 'date-time'
+    }
+}
diff --git a/tempest/api_schema/response/compute/servers.py b/tempest/api_schema/response/compute/servers.py
index f9c957b..3950173 100644
--- a/tempest/api_schema/response/compute/servers.py
+++ b/tempest/api_schema/response/compute/servers.py
@@ -71,6 +71,18 @@
             },
             'required': ['id', 'links']
         },
+        'fault': {
+            'type': 'object',
+            'properties': {
+                'code': {'type': 'integer'},
+                'created': {'type': 'string'},
+                'message': {'type': 'string'},
+                'details': {'type': 'string'},
+            },
+            # NOTE(gmann): 'details' is not always present in 'fault',
+            # so it is not defined as 'required'.
+            'required': ['code', 'created', 'message']
+        },
         'user_id': {'type': 'string'},
         'tenant_id': {'type': 'string'},
         'created': {'type': 'string'},
@@ -83,7 +95,9 @@
     # NOTE(GMann): 'progress' attribute is present in the response
     # only when server's status is one of the progress statuses
     # ("ACTIVE","BUILD", "REBUILD", "RESIZE","VERIFY_RESIZE")
-    # So it is not defined as 'required'.
+    # 'fault' attribute is present in the response
+    # only when the server's status is "ERROR" or "DELETED".
+    # So they are not defined as 'required'.
     'required': ['id', 'name', 'status', 'image', 'flavor',
                  'user_id', 'tenant_id', 'created', 'updated',
                  'metadata', 'links', 'addresses']
@@ -144,8 +158,11 @@
                     },
                     'required': ['id', 'links', 'name']
                 }
-            }
+            },
+            'servers_links': parameter_types.links
         },
+        # NOTE(gmann): the servers_links attribute is not always present
+        # in the response, so it is not 'required'.
         'required': ['servers']
     }
 }
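
The new ``fault`` sub-schema only shows up for servers in the "ERROR" or "DELETED" states, and ``details`` may be omitted inside it. A quick way to see what the fragment accepts, as a sketch (it assumes the ``jsonschema`` library, which Tempest's response validation is built on, and uses made-up field values)::

    import jsonschema

    fault = {
        'type': 'object',
        'properties': {
            'code': {'type': 'integer'},
            'created': {'type': 'string'},
            'message': {'type': 'string'},
            'details': {'type': 'string'},
        },
        'required': ['code', 'created', 'message']
    }

    # Validates even though 'details' is absent, matching the NOTE above.
    jsonschema.validate({'code': 500,
                         'created': '2015-01-01T00:00:00Z',
                         'message': 'No valid host was found.'},
                        fault)
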
diff --git a/tempest/api_schema/response/compute/v2/agents.py b/tempest/api_schema/response/compute/v2/agents.py
deleted file mode 100644
index d827377..0000000
--- a/tempest/api_schema/response/compute/v2/agents.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright 2014 NEC Corporation.  All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-from tempest.api_schema.response.compute import agents
-
-create_agent = {
-    'status_code': [200],
-    'response_body': agents.common_create_agent
-}
-
-delete_agent = {
-    'status_code': [200]
-}
diff --git a/tempest/api_schema/response/compute/v2/aggregates.py b/tempest/api_schema/response/compute/v2/aggregates.py
deleted file mode 100644
index d87e4de..0000000
--- a/tempest/api_schema/response/compute/v2/aggregates.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright 2014 NEC Corporation.  All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import copy
-
-from tempest.api_schema.response.compute import aggregates
-
-delete_aggregate = {
-    'status_code': [200]
-}
-
-create_aggregate = copy.deepcopy(aggregates.common_create_aggregate)
-# V2 API's response status_code is 200
-create_aggregate['status_code'] = [200]
diff --git a/tempest/api_schema/response/compute/v2/certificates.py b/tempest/api_schema/response/compute/v2/certificates.py
deleted file mode 100644
index bda6075..0000000
--- a/tempest/api_schema/response/compute/v2/certificates.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright 2014 NEC Corporation.  All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import copy
-
-from tempest.api_schema.response.compute import certificates
-
-create_certificate = copy.deepcopy(certificates._common_schema)
diff --git a/tempest/api_schema/response/compute/v2/floating_ips.py b/tempest/api_schema/response/compute/v2/floating_ips.py
deleted file mode 100644
index 7250773..0000000
--- a/tempest/api_schema/response/compute/v2/floating_ips.py
+++ /dev/null
@@ -1,164 +0,0 @@
-# Copyright 2014 NEC Corporation.  All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-list_floating_ips = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'floating_ips': {
-                'type': 'array',
-                'items': {
-                    'type': 'object',
-                    'properties': {
-                        # NOTE: Now the type of 'id' is integer, but
-                        # here allows 'string' also because we will be
-                        # able to change it to 'uuid' in the future.
-                        'id': {'type': ['integer', 'string']},
-                        'pool': {'type': ['string', 'null']},
-                        'instance_id': {'type': ['string', 'null']},
-                        'ip': {
-                            'type': 'string',
-                            'format': 'ip-address'
-                        },
-                        'fixed_ip': {
-                            'type': ['string', 'null'],
-                            'format': 'ip-address'
-                        }
-                    },
-                    'required': ['id', 'pool', 'instance_id', 'ip', 'fixed_ip']
-                }
-            }
-        },
-        'required': ['floating_ips']
-    }
-}
-
-floating_ip = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'floating_ip': {
-                'type': 'object',
-                'properties': {
-                    # NOTE: Now the type of 'id' is integer, but here allows
-                    # 'string' also because we will be able to change it to
-                    # 'uuid' in the future.
-                    'id': {'type': ['integer', 'string']},
-                    'pool': {'type': ['string', 'null']},
-                    'instance_id': {'type': ['string', 'null']},
-                    'ip': {
-                        'type': 'string',
-                        'format': 'ip-address'
-                    },
-                    'fixed_ip': {
-                        'type': ['string', 'null'],
-                        'format': 'ip-address'
-                    }
-                },
-                'required': ['id', 'pool', 'instance_id', 'ip', 'fixed_ip']
-            }
-        },
-        'required': ['floating_ip']
-    }
-}
-
-floating_ip_pools = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'floating_ip_pools': {
-                'type': 'array',
-                'items': {
-                    'type': 'object',
-                    'properties': {
-                        'name': {'type': 'string'}
-                    },
-                    'required': ['name']
-                }
-            }
-        },
-        'required': ['floating_ip_pools']
-    }
-}
-
-add_remove_floating_ip = {
-    'status_code': [202]
-}
-
-create_floating_ips_bulk = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'floating_ips_bulk_create': {
-                'type': 'object',
-                'properties': {
-                    'interface': {'type': ['string', 'null']},
-                    'ip_range': {'type': 'string'},
-                    'pool': {'type': ['string', 'null']},
-                },
-                'required': ['interface', 'ip_range', 'pool']
-            }
-        },
-        'required': ['floating_ips_bulk_create']
-    }
-}
-
-delete_floating_ips_bulk = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'floating_ips_bulk_delete': {'type': 'string'}
-        },
-        'required': ['floating_ips_bulk_delete']
-    }
-}
-
-list_floating_ips_bulk = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'floating_ip_info': {
-                'type': 'array',
-                'items': {
-                    'type': 'object',
-                    'properties': {
-                        'address': {
-                            'type': 'string',
-                            'format': 'ip-address'
-                        },
-                        'instance_uuid': {'type': ['string', 'null']},
-                        'interface': {'type': ['string', 'null']},
-                        'pool': {'type': ['string', 'null']},
-                        'project_id': {'type': ['string', 'null']},
-                        'fixed_ip': {
-                            'type': ['string', 'null'],
-                            'format': 'ip-address'
-                        }
-                    },
-                    # NOTE: fixed_ip is introduced after JUNO release,
-                    # So it is not defined as 'required'.
-                    'required': ['address', 'instance_uuid', 'interface',
-                                 'pool', 'project_id']
-                }
-            }
-        },
-        'required': ['floating_ip_info']
-    }
-}
diff --git a/tempest/api_schema/response/compute/v2/interfaces.py b/tempest/api_schema/response/compute/v2/interfaces.py
deleted file mode 100644
index 64d161d..0000000
--- a/tempest/api_schema/response/compute/v2/interfaces.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright 2014 NEC Corporation.  All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-from tempest.api_schema.response.compute import interfaces as common_schema
-
-list_interfaces = {
-    'status_code': [200],
-    'response_body': {
-        'type': 'object',
-        'properties': {
-            'interfaceAttachments': {
-                'type': 'array',
-                'items': common_schema.interface_common_info
-            }
-        },
-        'required': ['interfaceAttachments']
-    }
-}
diff --git a/tempest/api_schema/response/compute/v2/__init__.py b/tempest/api_schema/response/compute/v2_1/__init__.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/__init__.py
rename to tempest/api_schema/response/compute/v2_1/__init__.py
diff --git a/tempest/api_schema/response/compute/v2_1/agents.py b/tempest/api_schema/response/compute/v2_1/agents.py
new file mode 100644
index 0000000..84c5fd3
--- /dev/null
+++ b/tempest/api_schema/response/compute/v2_1/agents.py
@@ -0,0 +1,57 @@
+# Copyright 2014 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+common_agent_info = {
+    'type': 'object',
+    'properties': {
+        'agent_id': {'type': ['integer', 'string']},
+        'hypervisor': {'type': 'string'},
+        'os': {'type': 'string'},
+        'architecture': {'type': 'string'},
+        'version': {'type': 'string'},
+        'url': {'type': 'string', 'format': 'uri'},
+        'md5hash': {'type': 'string'}
+    },
+    'required': ['agent_id', 'hypervisor', 'os', 'architecture',
+                 'version', 'url', 'md5hash']
+}
+
+list_agents = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'agents': {
+                'type': 'array',
+                'items': common_agent_info
+            }
+        },
+        'required': ['agents']
+    }
+}
+
+create_agent = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'agent': common_agent_info
+        },
+        'required': ['agent']
+    }
+}
+
+delete_agent = {
+    'status_code': [200]
+}
diff --git a/tempest/api_schema/response/compute/aggregates.py b/tempest/api_schema/response/compute/v2_1/aggregates.py
similarity index 67%
rename from tempest/api_schema/response/compute/aggregates.py
rename to tempest/api_schema/response/compute/v2_1/aggregates.py
index fc20885..c935592 100644
--- a/tempest/api_schema/response/compute/aggregates.py
+++ b/tempest/api_schema/response/compute/v2_1/aggregates.py
@@ -27,33 +27,15 @@
         'updated_at': {'type': ['string', 'null']}
     },
     'required': ['availability_zone', 'created_at', 'deleted',
-                 'deleted_at', 'id', 'name', 'updated_at']
+                 'deleted_at', 'id', 'name', 'updated_at'],
 }
 
-aggregate = copy.deepcopy(aggregate_for_create)
-aggregate['properties'].update({
+common_aggregate_info = copy.deepcopy(aggregate_for_create)
+common_aggregate_info['properties'].update({
     'hosts': {'type': 'array'},
     'metadata': {'type': 'object'}
 })
-aggregate['required'].extend(['hosts', 'metadata'])
-
-aggregate = {
-    'type': 'object',
-    'properties': {
-        'availability_zone': {'type': ['string', 'null']},
-        'created_at': {'type': 'string'},
-        'deleted': {'type': 'boolean'},
-        'deleted_at': {'type': ['string', 'null']},
-        'hosts': {'type': 'array'},
-        'id': {'type': 'integer'},
-        'metadata': {'type': 'object'},
-        'name': {'type': 'string'},
-        'updated_at': {'type': ['string', 'null']}
-    },
-    'required': ['availability_zone', 'created_at', 'deleted',
-                 'deleted_at', 'hosts', 'id', 'metadata',
-                 'name', 'updated_at']
-}
+common_aggregate_info['required'].extend(['hosts', 'metadata'])
 
 list_aggregates = {
     'status_code': [200],
@@ -62,10 +44,10 @@
         'properties': {
             'aggregates': {
                 'type': 'array',
-                'items': aggregate
+                'items': common_aggregate_info
             }
         },
-        'required': ['aggregates']
+        'required': ['aggregates'],
     }
 }
 
@@ -74,9 +56,9 @@
     'response_body': {
         'type': 'object',
         'properties': {
-            'aggregate': aggregate
+            'aggregate': common_aggregate_info
         },
-        'required': ['aggregate']
+        'required': ['aggregate'],
     }
 }
 
@@ -88,13 +70,18 @@
         'type': 'string'
     }
 
-common_create_aggregate = {
+delete_aggregate = {
+    'status_code': [200]
+}
+
+create_aggregate = {
+    'status_code': [200],
     'response_body': {
         'type': 'object',
         'properties': {
             'aggregate': aggregate_for_create
         },
-        'required': ['aggregate']
+        'required': ['aggregate'],
     }
 }
 
diff --git a/tempest/api_schema/response/compute/v2/availability_zone.py b/tempest/api_schema/response/compute/v2_1/availability_zone.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/availability_zone.py
rename to tempest/api_schema/response/compute/v2_1/availability_zone.py
diff --git a/tempest/api_schema/response/compute/certificates.py b/tempest/api_schema/response/compute/v2_1/certificates.py
similarity index 89%
rename from tempest/api_schema/response/compute/certificates.py
rename to tempest/api_schema/response/compute/v2_1/certificates.py
index caac2ab..35445d8 100644
--- a/tempest/api_schema/response/compute/certificates.py
+++ b/tempest/api_schema/response/compute/v2_1/certificates.py
@@ -25,13 +25,15 @@
                     'data': {'type': 'string'},
                     'private_key': {'type': 'string'},
                 },
-                'required': ['data', 'private_key'],
+                'required': ['data', 'private_key']
             }
         },
-        'required': ['certificate'],
+        'required': ['certificate']
     }
 }
 
 get_certificate = copy.deepcopy(_common_schema)
 get_certificate['response_body']['properties']['certificate'][
     'properties']['private_key'].update({'type': 'null'})
+
+create_certificate = copy.deepcopy(_common_schema)
diff --git a/tempest/api_schema/response/compute/v2/extensions.py b/tempest/api_schema/response/compute/v2_1/extensions.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/extensions.py
rename to tempest/api_schema/response/compute/v2_1/extensions.py
diff --git a/tempest/api_schema/response/compute/v2/fixed_ips.py b/tempest/api_schema/response/compute/v2_1/fixed_ips.py
similarity index 96%
rename from tempest/api_schema/response/compute/v2/fixed_ips.py
rename to tempest/api_schema/response/compute/v2_1/fixed_ips.py
index 446633f..13e70bf 100644
--- a/tempest/api_schema/response/compute/v2/fixed_ips.py
+++ b/tempest/api_schema/response/compute/v2_1/fixed_ips.py
@@ -12,7 +12,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-fixed_ips = {
+get_fixed_ip = {
     'status_code': [200],
     'response_body': {
         'type': 'object',
@@ -35,7 +35,7 @@
     }
 }
 
-fixed_ip_action = {
+reserve_fixed_ip = {
     'status_code': [202],
     'response_body': {'type': 'string'}
 }
diff --git a/tempest/api_schema/response/compute/v2/flavors.py b/tempest/api_schema/response/compute/v2_1/flavors.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/flavors.py
rename to tempest/api_schema/response/compute/v2_1/flavors.py
diff --git a/tempest/api_schema/response/compute/v2_1/floating_ips.py b/tempest/api_schema/response/compute/v2_1/floating_ips.py
new file mode 100644
index 0000000..7369bec
--- /dev/null
+++ b/tempest/api_schema/response/compute/v2_1/floating_ips.py
@@ -0,0 +1,148 @@
+# Copyright 2014 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+common_floating_ip_info = {
+    'type': 'object',
+    'properties': {
+        # NOTE: The type of 'id' is currently integer, but 'string' is
+        # also allowed here so that it can be changed to 'uuid' in the
+        # future.
+        'id': {'type': ['integer', 'string']},
+        'pool': {'type': ['string', 'null']},
+        'instance_id': {'type': ['string', 'null']},
+        'ip': {
+            'type': 'string',
+            'format': 'ip-address'
+        },
+        'fixed_ip': {
+            'type': ['string', 'null'],
+            'format': 'ip-address'
+        }
+    },
+    'required': ['id', 'pool', 'instance_id',
+                 'ip', 'fixed_ip'],
+
+}
+list_floating_ips = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'floating_ips': {
+                'type': 'array',
+                'items': common_floating_ip_info
+            },
+        },
+        'required': ['floating_ips'],
+    }
+}
+
+floating_ip = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'floating_ip': common_floating_ip_info
+        },
+        'required': ['floating_ip'],
+    }
+}
+
+floating_ip_pools = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'floating_ip_pools': {
+                'type': 'array',
+                'items': {
+                    'type': 'object',
+                    'properties': {
+                        'name': {'type': 'string'}
+                    },
+                    'required': ['name'],
+                }
+            }
+        },
+        'required': ['floating_ip_pools'],
+    }
+}
+
+add_remove_floating_ip = {
+    'status_code': [202]
+}
+
+create_floating_ips_bulk = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'floating_ips_bulk_create': {
+                'type': 'object',
+                'properties': {
+                    'interface': {'type': ['string', 'null']},
+                    'ip_range': {'type': 'string'},
+                    'pool': {'type': ['string', 'null']},
+                },
+                'required': ['interface', 'ip_range', 'pool'],
+            }
+        },
+        'required': ['floating_ips_bulk_create'],
+    }
+}
+
+delete_floating_ips_bulk = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'floating_ips_bulk_delete': {'type': 'string'}
+        },
+        'required': ['floating_ips_bulk_delete'],
+    }
+}
+
+list_floating_ips_bulk = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'floating_ip_info': {
+                'type': 'array',
+                'items': {
+                    'type': 'object',
+                    'properties': {
+                        'address': {
+                            'type': 'string',
+                            'format': 'ip-address'
+                        },
+                        'instance_uuid': {'type': ['string', 'null']},
+                        'interface': {'type': ['string', 'null']},
+                        'pool': {'type': ['string', 'null']},
+                        'project_id': {'type': ['string', 'null']},
+                        'fixed_ip': {
+                            'type': ['string', 'null'],
+                            'format': 'ip-address'
+                        }
+                    },
+                    # NOTE: fixed_ip was introduced after the Juno release,
+                    # so it is not defined as 'required'.
+                    'required': ['address', 'instance_uuid', 'interface',
+                                 'pool', 'project_id'],
+                }
+            }
+        },
+        'required': ['floating_ip_info'],
+    }
+}
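
The schemas above follow the usual Tempest pattern: a dict carrying the acceptable
'status_code' values plus a plain JSON Schema under 'response_body'. A minimal,
illustrative sketch of exercising such a schema with the standard jsonschema
library (this is not Tempest's actual response validator)::

    import jsonschema

    schema = {
        'status_code': [200],
        'response_body': {
            'type': 'object',
            'properties': {'floating_ips': {'type': 'array'}},
            'required': ['floating_ips'],
        }
    }

    def check_response(schema, status, body):
        # The status code must be one of the acceptable values ...
        assert status in schema['status_code']
        # ... and the decoded body must satisfy the JSON Schema.
        jsonschema.validate(body, schema['response_body'])

    check_response(schema, 200, {'floating_ips': []})
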
diff --git a/tempest/api_schema/response/compute/v2/hosts.py b/tempest/api_schema/response/compute/v2_1/hosts.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/hosts.py
rename to tempest/api_schema/response/compute/v2_1/hosts.py
diff --git a/tempest/api_schema/response/compute/v2/hypervisors.py b/tempest/api_schema/response/compute/v2_1/hypervisors.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/hypervisors.py
rename to tempest/api_schema/response/compute/v2_1/hypervisors.py
diff --git a/tempest/api_schema/response/compute/v2/images.py b/tempest/api_schema/response/compute/v2_1/images.py
similarity index 79%
rename from tempest/api_schema/response/compute/v2/images.py
rename to tempest/api_schema/response/compute/v2_1/images.py
index 2317e6b..3c0b80e 100644
--- a/tempest/api_schema/response/compute/v2/images.py
+++ b/tempest/api_schema/response/compute/v2_1/images.py
@@ -40,11 +40,12 @@
             },
             'required': ['id', 'links']
         },
-        'OS-EXT-IMG-SIZE:size': {'type': 'integer'}
+        'OS-EXT-IMG-SIZE:size': {'type': 'integer'},
+        'OS-DCF:diskConfig': {'type': 'string'}
     },
     # 'server' attributes only comes in response body if image is
-    # associated with any server. 'OS-EXT-IMG-SIZE:size' is API
-    # extension, So those are not defined as 'required'.
+    # associated with any server. 'OS-EXT-IMG-SIZE:size' & 'OS-DCF:diskConfig'
+    # are API extensions, so those are not defined as 'required'.
     'required': ['id', 'status', 'updated', 'links', 'name',
                  'created', 'minDisk', 'minRam', 'progress',
                  'metadata']
@@ -77,8 +78,11 @@
                     },
                     'required': ['id', 'links', 'name']
                 }
-            }
+            },
+            'images_links': parameter_types.links
         },
+        # NOTE(gmann): the images_links attribute is not always present,
+        # so it is not 'required'.
         'required': ['images']
     }
 }
@@ -87,15 +91,16 @@
     'status_code': [202],
     'response_header': {
         'type': 'object',
-        'properties': {
-            'location': {
-                'type': 'string',
-                'format': 'uri'
-            }
-        },
-        'required': ['location']
+        'properties': parameter_types.response_header
     }
 }
+create_image['response_header']['properties'].update(
+    {'location': {
+        'type': 'string',
+        'format': 'uri'}
+     }
+)
+create_image['response_header']['required'] = ['location']
 
 delete = {
     'status_code': [204]
@@ -131,8 +136,11 @@
             'images': {
                 'type': 'array',
                 'items': common_image_schema
-            }
+            },
+            'images_links': parameter_types.links
         },
+        # NOTE(gmann): the images_links attribute is not always present,
+        # so it is not 'required'.
         'required': ['images']
     }
 }
diff --git a/tempest/api_schema/response/compute/v2/instance_usage_audit_logs.py b/tempest/api_schema/response/compute/v2_1/instance_usage_audit_logs.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/instance_usage_audit_logs.py
rename to tempest/api_schema/response/compute/v2_1/instance_usage_audit_logs.py
diff --git a/tempest/api_schema/response/compute/interfaces.py b/tempest/api_schema/response/compute/v2_1/interfaces.py
similarity index 74%
rename from tempest/api_schema/response/compute/interfaces.py
rename to tempest/api_schema/response/compute/v2_1/interfaces.py
index fd53eb3..4de3309 100644
--- a/tempest/api_schema/response/compute/interfaces.py
+++ b/tempest/api_schema/response/compute/v2_1/interfaces.py
@@ -14,10 +14,6 @@
 
 from tempest.api_schema.response.compute import parameter_types
 
-delete_interface = {
-    'status_code': [202]
-}
-
 interface_common_info = {
     'type': 'object',
     'properties': {
@@ -45,3 +41,32 @@
     },
     'required': ['port_state', 'fixed_ips', 'port_id', 'net_id', 'mac_addr']
 }
+
+get_create_interfaces = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'interfaceAttachment': interface_common_info
+        },
+        'required': ['interfaceAttachment']
+    }
+}
+
+list_interfaces = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'interfaceAttachments': {
+                'type': 'array',
+                'items': interface_common_info
+            }
+        },
+        'required': ['interfaceAttachments']
+    }
+}
+
+delete_interface = {
+    'status_code': [202]
+}
diff --git a/tempest/api_schema/response/compute/v2/keypairs.py b/tempest/api_schema/response/compute/v2_1/keypairs.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/keypairs.py
rename to tempest/api_schema/response/compute/v2_1/keypairs.py
diff --git a/tempest/api_schema/response/compute/v2/limits.py b/tempest/api_schema/response/compute/v2_1/limits.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/limits.py
rename to tempest/api_schema/response/compute/v2_1/limits.py
diff --git a/tempest/api_schema/response/compute/v2/quota_classes.py b/tempest/api_schema/response/compute/v2_1/quota_classes.py
similarity index 95%
rename from tempest/api_schema/response/compute/v2/quota_classes.py
rename to tempest/api_schema/response/compute/v2_1/quota_classes.py
index 5474a89..a7374df 100644
--- a/tempest/api_schema/response/compute/v2/quota_classes.py
+++ b/tempest/api_schema/response/compute/v2_1/quota_classes.py
@@ -15,7 +15,7 @@
 
 import copy
 
-from tempest.api_schema.response.compute.v2 import quotas
+from tempest.api_schema.response.compute.v2_1 import quotas
 
 # NOTE(mriedem): os-quota-class-sets responses are the same as os-quota-sets
 # except for the key in the response body is quota_class_set instead of
diff --git a/tempest/api_schema/response/compute/v2/quotas.py b/tempest/api_schema/response/compute/v2_1/quotas.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/quotas.py
rename to tempest/api_schema/response/compute/v2_1/quotas.py
diff --git a/tempest/api_schema/response/compute/v2/security_group_default_rule.py b/tempest/api_schema/response/compute/v2_1/security_group_default_rule.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/security_group_default_rule.py
rename to tempest/api_schema/response/compute/v2_1/security_group_default_rule.py
diff --git a/tempest/api_schema/response/compute/v2/security_groups.py b/tempest/api_schema/response/compute/v2_1/security_groups.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/security_groups.py
rename to tempest/api_schema/response/compute/v2_1/security_groups.py
diff --git a/tempest/api_schema/response/compute/v2/servers.py b/tempest/api_schema/response/compute/v2_1/servers.py
similarity index 88%
rename from tempest/api_schema/response/compute/v2/servers.py
rename to tempest/api_schema/response/compute/v2_1/servers.py
index 83dbb4f..ebee697 100644
--- a/tempest/api_schema/response/compute/v2/servers.py
+++ b/tempest/api_schema/response/compute/v2_1/servers.py
@@ -296,15 +296,34 @@
 list_servers_detail = copy.deepcopy(servers.base_list_servers_detail)
 list_servers_detail['response_body']['properties']['servers']['items'][
     'properties'].update({
+        'key_name': {'type': ['string', 'null']},
         'hostId': {'type': 'string'},
         'OS-DCF:diskConfig': {'type': 'string'},
         'security_groups': {'type': 'array'},
+
+        # NOTE: Non-admin users can also see the "OS-SRV-USG" and "OS-EXT-AZ"
+        # attributes.
+        'OS-SRV-USG:launched_at': {'type': ['string', 'null']},
+        'OS-SRV-USG:terminated_at': {'type': ['string', 'null']},
+        'OS-EXT-AZ:availability_zone': {'type': 'string'},
+
+        # NOTE: Only admin users can see the "OS-EXT-STS" and "OS-EXT-SRV-ATTR"
+        # attributes.
+        'OS-EXT-STS:task_state': {'type': ['string', 'null']},
+        'OS-EXT-STS:vm_state': {'type': 'string'},
+        'OS-EXT-STS:power_state': {'type': 'integer'},
+        'OS-EXT-SRV-ATTR:host': {'type': ['string', 'null']},
+        'OS-EXT-SRV-ATTR:instance_name': {'type': 'string'},
+        'OS-EXT-SRV-ATTR:hypervisor_hostname': {'type': ['string', 'null']},
+        'os-extended-volumes:volumes_attached': {'type': 'array'},
         'accessIPv4': parameter_types.access_ip_v4,
-        'accessIPv6': parameter_types.access_ip_v6
+        'accessIPv6': parameter_types.access_ip_v6,
+        'config_drive': {'type': 'string'}
     })
-# NOTE(GMann): OS-DCF:diskConfig, security_groups and accessIPv4/v6
-# are API extensions, and some environments return a response
-# without these attributes. So they are not 'required'.
+# NOTE(GMann): OS-SRV-USG, OS-EXT-AZ, OS-EXT-STS, OS-EXT-SRV-ATTR,
+# os-extended-volumes, OS-DCF and accessIPv4/v6 are API
+# extensions, and some environments return a response without
+# these attributes. So they are not 'required'.
 list_servers_detail['response_body']['properties']['servers']['items'][
     'required'].append('hostId')
 # NOTE(gmann): Update OS-EXT-IPS:type and OS-EXT-IPS-MAC:mac_addr
@@ -316,12 +335,14 @@
     'items']['properties'].update({
         'OS-EXT-IPS:type': {'type': 'string'},
         'OS-EXT-IPS-MAC:mac_addr': parameter_types.mac_address})
-
+# Define the 'servers_links' attribute for the V2 server schema
+list_servers_detail['response_body'][
+    'properties'].update({'servers_links': parameter_types.links})
+# NOTE(gmann): the servers_links attribute is not always present,
+# so it is not 'required'.
 
 rebuild_server = copy.deepcopy(update_server)
 rebuild_server['status_code'] = [202]
-del rebuild_server['response_body']['properties']['server'][
-    'properties']['OS-DCF:diskConfig']
 
 rebuild_server_with_admin_pass = copy.deepcopy(rebuild_server)
 rebuild_server_with_admin_pass['response_body']['properties']['server'][
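
The list_servers_detail changes above rely on the copy-and-extend idiom: deep-copy
a shared base schema, then add extension attributes as optional properties without
appending them to 'required', so environments that lack the extension still
validate. A standalone sketch of the idiom (the names here are illustrative)::

    import copy

    base = {'response_body': {'type': 'object',
                              'properties': {'id': {'type': 'string'}},
                              'required': ['id']}}

    extended = copy.deepcopy(base)
    # Extension attributes are added as optional properties only.
    extended['response_body']['properties'].update(
        {'OS-DCF:diskConfig': {'type': 'string'}})
    # 'required' is untouched, so a response without the extension
    # attribute still passes validation.
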
diff --git a/tempest/api_schema/response/compute/v2/tenant_networks.py b/tempest/api_schema/response/compute/v2_1/tenant_networks.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/tenant_networks.py
rename to tempest/api_schema/response/compute/v2_1/tenant_networks.py
diff --git a/tempest/api_schema/response/compute/v2/tenant_usages.py b/tempest/api_schema/response/compute/v2_1/tenant_usages.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/tenant_usages.py
rename to tempest/api_schema/response/compute/v2_1/tenant_usages.py
diff --git a/tempest/api_schema/response/compute/v2/volumes.py b/tempest/api_schema/response/compute/v2_1/volumes.py
similarity index 100%
rename from tempest/api_schema/response/compute/v2/volumes.py
rename to tempest/api_schema/response/compute/v2_1/volumes.py
diff --git a/tempest/auth.py b/tempest/auth.py
index 9d8341c..113ad69 100644
--- a/tempest/auth.py
+++ b/tempest/auth.py
@@ -20,9 +20,9 @@
 import re
 import urlparse
 
+from oslo_log import log as logging
 import six
 
-from tempest.openstack.common import log as logging
 from tempest.services.identity.v2.json import token_client as json_v2id
 from tempest.services.identity.v3.json import token_client as json_v3id
 
@@ -328,11 +328,17 @@
 
     def _auth_params(self):
         return dict(
-            user=self.credentials.username,
+            user_id=self.credentials.user_id,
+            username=self.credentials.username,
             password=self.credentials.password,
-            project=self.credentials.tenant_name,
-            user_domain=self.credentials.user_domain_name,
-            project_domain=self.credentials.project_domain_name,
+            project_id=self.credentials.project_id,
+            project_name=self.credentials.project_name,
+            user_domain_id=self.credentials.user_domain_id,
+            user_domain_name=self.credentials.user_domain_name,
+            project_domain_id=self.credentials.project_domain_id,
+            project_domain_name=self.credentials.project_domain_name,
+            domain_id=self.credentials.domain_id,
+            domain_name=self.credentials.domain_name,
             auth_data=True)
 
     def _fill_credentials(self, auth_data_body):
@@ -439,7 +445,9 @@
     return identity_version in IDENTITY_VERSION
 
 
-def get_credentials(auth_url, fill_in=True, identity_version='v2', **kwargs):
+def get_credentials(auth_url, fill_in=True, identity_version='v2',
+                    disable_ssl_certificate_validation=None, ca_certs=None,
+                    trace_requests=None, **kwargs):
     """
     Builds a credentials object based on the configured auth_version
 
@@ -451,6 +459,11 @@
            by invoking ``is_valid()``
     :param identity_version (string): identity API version is used to
            select the matching auth provider and credentials class
+    :param disable_ssl_certificate_validation: whether to enforce SSL
+           certificate validation in SSL API requests to the auth system
+    :param ca_certs: CA certificate bundle for validation of certificates
+           in SSL API requests to the auth system
+    :param trace_requests: trace in log API requests to the auth system
     :param kwargs (dict): Dict of credential key/value pairs
 
     Examples:
@@ -471,7 +484,10 @@
     creds = credential_class(**kwargs)
     # Fill in the credentials fields that were not specified
     if fill_in:
-        auth_provider = auth_provider_class(creds, auth_url)
+        dsvm = disable_ssl_certificate_validation
+        auth_provider = auth_provider_class(
+            creds, auth_url, disable_ssl_certificate_validation=dsvm,
+            ca_certs=ca_certs, trace_requests=trace_requests)
         creds = auth_provider.fill_credentials()
     return creds
 
@@ -569,7 +585,7 @@
     Credentials suitable for the Keystone Identity V3 API
     """
 
-    ATTRIBUTES = ['domain_name', 'password', 'tenant_name', 'username',
+    ATTRIBUTES = ['domain_id', 'domain_name', 'password', 'username',
                   'project_domain_id', 'project_domain_name', 'project_id',
                   'project_name', 'tenant_id', 'tenant_name', 'user_domain_id',
                   'user_domain_name', 'user_id']
@@ -615,6 +631,8 @@
         - None
         - Project id (optional domain)
         - Project name and its domain id/name
+        - Domain id
+        - Domain name
         """
         valid_user_domain = any(
             [self.user_domain_id is not None,
@@ -625,11 +643,16 @@
         valid_user = any(
             [self.user_id is not None,
              self.username is not None and valid_user_domain])
-        valid_project = any(
+        valid_project_scope = any(
             [self.project_name is None and self.project_id is None,
              self.project_id is not None,
              self.project_name is not None and valid_project_domain])
-        return all([self.password is not None, valid_user, valid_project])
+        valid_domain_scope = any(
+            [self.domain_id is None and self.domain_name is None,
+             self.domain_id or self.domain_name])
+        return all([self.password is not None,
+                    valid_user,
+                    valid_project_scope and valid_domain_scope])
 
 
 IDENTITY_VERSION = {'v2': (KeystoneV2Credentials, KeystoneV2AuthProvider),
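
With the changes above, KeystoneV3Credentials accept a bare domain scope and
get_credentials() forwards the SSL-related options to the auth provider. A hedged
usage sketch; the endpoint and credential values are placeholders::

    from tempest import auth

    creds = auth.get_credentials(
        'https://keystone.example.com/v3',        # placeholder auth URL
        fill_in=False,                            # do not contact keystone
        identity_version='v3',
        disable_ssl_certificate_validation=True,  # new SSL-related option
        username='demo', password='secret',
        user_domain_name='Default',
        domain_name='Default')                    # domain-scoped credentials
    assert creds.is_valid()
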
diff --git a/tempest/cli/simple_read_only/image/test_glance.py b/tempest/cli/simple_read_only/image/test_glance.py
index 3d7126b..e38ca48 100644
--- a/tempest/cli/simple_read_only/image/test_glance.py
+++ b/tempest/cli/simple_read_only/image/test_glance.py
@@ -15,11 +15,11 @@
 
 import re
 
+from oslo_log import log as logging
 from tempest_lib import exceptions
 
 from tempest import cli
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/cli/simple_read_only/network/__init__.py b/tempest/cli/simple_read_only/network/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/cli/simple_read_only/network/__init__.py
+++ /dev/null
diff --git a/tempest/cli/simple_read_only/network/test_neutron.py b/tempest/cli/simple_read_only/network/test_neutron.py
deleted file mode 100644
index 8af8ada..0000000
--- a/tempest/cli/simple_read_only/network/test_neutron.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import re
-
-from tempest_lib import exceptions
-
-from tempest import cli
-from tempest import config
-from tempest.openstack.common import log as logging
-from tempest import test
-
-CONF = config.CONF
-
-LOG = logging.getLogger(__name__)
-
-
-class SimpleReadOnlyNeutronClientTest(cli.ClientTestBase):
-    """Basic, read-only tests for Neutron CLI client.
-
-    Checks return values and output of read-only commands.
-    These tests do not presume any content, nor do they create
-    their own. They only verify the structure of output if present.
-    """
-
-    @classmethod
-    def resource_setup(cls):
-        if (not CONF.service_available.neutron):
-            msg = "Skipping all Neutron cli tests because it is not available"
-            raise cls.skipException(msg)
-        super(SimpleReadOnlyNeutronClientTest, cls).resource_setup()
-
-    def neutron(self, *args, **kwargs):
-        return self.clients.neutron(*args,
-                                    endpoint_type=CONF.network.endpoint_type,
-                                    **kwargs)
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('84dd7190-2b98-4709-8e2c-3c1d25b9e7d2')
-    def test_neutron_fake_action(self):
-        self.assertRaises(exceptions.CommandFailed,
-                          self.neutron,
-                          'this-does-not-exist')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('c598c337-313a-45ac-bf27-d6b4124a9e5b')
-    def test_neutron_net_list(self):
-        net_list = self.parser.listing(self.neutron('net-list'))
-        self.assertTableStruct(net_list, ['id', 'name', 'subnets'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('3e172b04-2e3b-4fcf-922d-99d5c803779f')
-    def test_neutron_ext_list(self):
-        ext = self.parser.listing(self.neutron('ext-list'))
-        self.assertTableStruct(ext, ['alias', 'name'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('2e0de814-52d6-4f81-be17-fe327072fc23')
-    @test.requires_ext(extension='dhcp_agent_scheduler', service='network')
-    def test_neutron_dhcp_agent_list_hosting_net(self):
-        self.neutron('dhcp-agent-list-hosting-net',
-                     params=CONF.compute.fixed_network_name)
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('8524a24a-3895-40a5-8c9d-49d4459cdda4')
-    @test.requires_ext(extension='agent', service='network')
-    def test_neutron_agent_list(self):
-        agents = self.parser.listing(self.neutron('agent-list'))
-        field_names = ['id', 'agent_type', 'host', 'alive', 'admin_state_up']
-        self.assertTableStruct(agents, field_names)
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('97c3ef92-7303-45f1-80db-b6622f176782')
-    @test.requires_ext(extension='router', service='network')
-    def test_neutron_floatingip_list(self):
-        self.neutron('floatingip-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('823e0fee-404c-49a7-8bf3-d2f0383cc649')
-    @test.requires_ext(extension='metering', service='network')
-    def test_neutron_meter_label_list(self):
-        self.neutron('meter-label-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('7fb76098-01f6-417f-b9c7-e630ba3f394b')
-    @test.requires_ext(extension='metering', service='network')
-    def test_neutron_meter_label_rule_list(self):
-        self.neutron('meter-label-rule-list')
-
-    @test.requires_ext(extension='lbaas_agent_scheduler', service='network')
-    def _test_neutron_lbaas_command(self, command):
-        try:
-            self.neutron(command)
-        except exceptions.CommandFailed as e:
-            if '404 Not Found' not in e.stderr:
-                self.fail('%s: Unexpected failure.' % command)
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('396d1d87-fd0c-4716-9ff0-f1baa54c6c61')
-    def test_neutron_lb_healthmonitor_list(self):
-        self._test_neutron_lbaas_command('lb-healthmonitor-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('f41fa54d-5cd8-4f2c-bb4e-13abc72dccb6')
-    def test_neutron_lb_member_list(self):
-        self._test_neutron_lbaas_command('lb-member-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('3ec04885-7573-4cce-b086-5722c0b00d85')
-    def test_neutron_lb_pool_list(self):
-        self._test_neutron_lbaas_command('lb-pool-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('1ab530e0-ec87-498f-baf2-85f6635a2ad9')
-    def test_neutron_lb_vip_list(self):
-        self._test_neutron_lbaas_command('lb-vip-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('e92f7362-4009-4b37-afee-f469105b24e7')
-    @test.requires_ext(extension='external-net', service='network')
-    def test_neutron_net_external_list(self):
-        net_ext_list = self.parser.listing(self.neutron('net-external-list'))
-        self.assertTableStruct(net_ext_list, ['id', 'name', 'subnets'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('ed840980-7c84-4b6e-b280-f13c5848a0e9')
-    def test_neutron_port_list(self):
-        port_list = self.parser.listing(self.neutron('port-list'))
-        self.assertTableStruct(port_list, ['id', 'name', 'mac_address',
-                                           'fixed_ips'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('dded0dfa-f2ac-4c1f-bc90-69fd06dd7132')
-    @test.requires_ext(extension='quotas', service='network')
-    def test_neutron_quota_list(self):
-        self.neutron('quota-list')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('927fca1e-4397-42a2-ba47-d738299466de')
-    @test.requires_ext(extension='router', service='network')
-    def test_neutron_router_list(self):
-        router_list = self.parser.listing(self.neutron('router-list'))
-        self.assertTableStruct(router_list, ['id', 'name',
-                                             'external_gateway_info'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('e2e3d2d5-1aee-499d-84d9-37382dcf26ff')
-    @test.requires_ext(extension='security-group', service='network')
-    def test_neutron_security_group_list(self):
-        security_grp = self.parser.listing(self.neutron('security-group-list'))
-        self.assertTableStruct(security_grp, ['id', 'name', 'description'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('288602c2-8b59-44cd-8c5d-1ec916a114d3')
-    @test.requires_ext(extension='security-group', service='network')
-    def test_neutron_security_group_rule_list(self):
-        security_grp = self.parser.listing(self.neutron
-                                           ('security-group-rule-list'))
-        self.assertTableStruct(security_grp, ['id', 'security_group',
-                                              'direction', 'protocol',
-                                              'remote_ip_prefix',
-                                              'remote_group'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('2a874a08-b9c9-4f0f-82ef-8cadb15bbd5d')
-    def test_neutron_subnet_list(self):
-        subnet_list = self.parser.listing(self.neutron('subnet-list'))
-        self.assertTableStruct(subnet_list, ['id', 'name', 'cidr',
-                                             'allocation_pools'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('048e1ec3-cf6c-4066-b262-2028e03ce825')
-    @test.requires_ext(extension='vpnaas', service='network')
-    def test_neutron_vpn_ikepolicy_list(self):
-        ikepolicy = self.parser.listing(self.neutron('vpn-ikepolicy-list'))
-        self.assertTableStruct(ikepolicy, ['id', 'name',
-                                           'auth_algorithm',
-                                           'encryption_algorithm',
-                                           'ike_version', 'pfs'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('bb8902b7-b2e6-49fd-b9bd-a26dd99732df')
-    @test.requires_ext(extension='vpnaas', service='network')
-    def test_neutron_vpn_ipsecpolicy_list(self):
-        ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
-        self.assertTableStruct(ipsecpolicy, ['id', 'name',
-                                             'auth_algorithm',
-                                             'encryption_algorithm',
-                                             'pfs'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('c0f33f9a-0ba9-4177-bcd5-dce34b81d523')
-    @test.requires_ext(extension='vpnaas', service='network')
-    def test_neutron_vpn_service_list(self):
-        vpn_list = self.parser.listing(self.neutron('vpn-service-list'))
-        self.assertTableStruct(vpn_list, ['id', 'name',
-                                          'router_id', 'status'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('bb142f8a-e568-405f-b1b7-4cb458de7971')
-    @test.requires_ext(extension='vpnaas', service='network')
-    def test_neutron_ipsec_site_connection_list(self):
-        ipsec_site = self.parser.listing(self.neutron
-                                         ('ipsec-site-connection-list'))
-        self.assertTableStruct(ipsec_site, ['id', 'name',
-                                            'peer_address',
-                                            'peer_cidrs',
-                                            'route_mode',
-                                            'auth_mode', 'status'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('89baff14-8cb7-4ad8-9c24-b0278711170b')
-    @test.requires_ext(extension='fwaas', service='network')
-    def test_neutron_firewall_list(self):
-        firewall_list = self.parser.listing(self.neutron
-                                            ('firewall-list'))
-        self.assertTableStruct(firewall_list, ['id', 'name',
-                                               'firewall_policy_id'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('996e418a-2a51-4018-9602-478ca8053e61')
-    @test.requires_ext(extension='fwaas', service='network')
-    def test_neutron_firewall_policy_list(self):
-        firewall_policy = self.parser.listing(self.neutron
-                                              ('firewall-policy-list'))
-        self.assertTableStruct(firewall_policy, ['id', 'name',
-                                                 'firewall_rules'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('d4638dd6-98d4-4400-a920-26572de1a6fc')
-    @test.requires_ext(extension='fwaas', service='network')
-    def test_neutron_firewall_rule_list(self):
-        firewall_rule = self.parser.listing(self.neutron
-                                            ('firewall-rule-list'))
-        self.assertTableStruct(firewall_rule, ['id', 'name',
-                                               'firewall_policy_id',
-                                               'summary', 'enabled'])
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('1c4551e1-e3f3-4af2-8a40-c3f551e4a536')
-    def test_neutron_help(self):
-        help_text = self.neutron('help')
-        lines = help_text.split('\n')
-        self.assertFirstLineStartsWith(lines, 'usage: neutron')
-
-        commands = []
-        cmds_start = lines.index('Commands for API v2.0:')
-        command_pattern = re.compile('^ {2}([a-z0-9\-\_]+)')
-        for line in lines[cmds_start:]:
-            match = command_pattern.match(line)
-            if match:
-                commands.append(match.group(1))
-        commands = set(commands)
-        wanted_commands = set(('net-create', 'subnet-list', 'port-delete',
-                               'router-show', 'agent-update', 'help'))
-        self.assertFalse(wanted_commands - commands)
-
-    # Optional arguments:
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('381e6fe3-cddc-47c9-a773-70ddb2f79a91')
-    def test_neutron_version(self):
-        self.neutron('', flags='--version')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('bcad0e07-da8c-4c7c-8ab6-499e5d7ab8cb')
-    def test_neutron_debug_net_list(self):
-        self.neutron('net-list', flags='--debug')
-
-    @test.attr(type='smoke')
-    @test.idempotent_id('3e42d78e-65e5-4e8f-8c29-ca7be8feebb4')
-    def test_neutron_quiet_net_list(self):
-        self.neutron('net-list', flags='--quiet')
diff --git a/tempest/cli/simple_read_only/orchestration/test_heat.py b/tempest/cli/simple_read_only/orchestration/test_heat.py
index 7751e2c..8defe51 100644
--- a/tempest/cli/simple_read_only/orchestration/test_heat.py
+++ b/tempest/cli/simple_read_only/orchestration/test_heat.py
@@ -13,11 +13,11 @@
 import json
 import os
 
+from oslo_log import log as logging
 import yaml
 
 import tempest.cli
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 CONF = config.CONF
diff --git a/tempest/cli/simple_read_only/telemetry/__init__.py b/tempest/cli/simple_read_only/telemetry/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/cli/simple_read_only/telemetry/__init__.py
+++ /dev/null
diff --git a/tempest/cli/simple_read_only/telemetry/test_ceilometer.py b/tempest/cli/simple_read_only/telemetry/test_ceilometer.py
deleted file mode 100644
index 85db596..0000000
--- a/tempest/cli/simple_read_only/telemetry/test_ceilometer.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-from tempest import cli
-from tempest import config
-from tempest.openstack.common import log as logging
-from tempest import test
-
-CONF = config.CONF
-
-LOG = logging.getLogger(__name__)
-
-
-class SimpleReadOnlyCeilometerClientTest(cli.ClientTestBase):
-    """Basic, read-only tests for Ceilometer CLI client.
-
-    Checks return values and output of read-only commands.
-    These tests do not presume any content, nor do they create
-    their own. They only verify the structure of output if present.
-    """
-
-    @classmethod
-    def resource_setup(cls):
-        if (not CONF.service_available.ceilometer):
-            msg = ("Skipping all Ceilometer cli tests because it is "
-                   "not available")
-            raise cls.skipException(msg)
-        super(SimpleReadOnlyCeilometerClientTest, cls).resource_setup()
-
-    def ceilometer(self, *args, **kwargs):
-        return self.clients.ceilometer(
-            *args, endpoint_type=CONF.telemetry.endpoint_type, **kwargs)
-
-    @test.idempotent_id('ab717d43-a9c4-4dcf-bad8-c4777933a970')
-    def test_ceilometer_meter_list(self):
-        self.ceilometer('meter-list')
-
-    @test.attr(type='slow')
-    @test.idempotent_id('fe2e52a4-a99b-426e-a52d-d0bde50f3e4c')
-    def test_ceilometer_resource_list(self):
-        self.ceilometer('resource-list')
-
-    @test.idempotent_id('eede695c-f3bf-449f-a420-02f3cc426d52')
-    def test_ceilometermeter_alarm_list(self):
-        self.ceilometer('alarm-list')
-
-    @test.idempotent_id('0586bcc4-8e35-415f-8f23-77b590042684')
-    def test_ceilometer_version(self):
-        self.ceilometer('', flags='--version')
diff --git a/tempest/clients.py b/tempest/clients.py
index e5f41eb..e1b6eab 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -15,11 +15,12 @@
 
 import copy
 
+from oslo_log import log as logging
+
 from tempest.common import cred_provider
 from tempest.common import negative_rest_client
 from tempest import config
 from tempest import manager
-from tempest.openstack.common import log as logging
 from tempest.services.baremetal.v1.json.baremetal_client import \
     BaremetalClientJSON
 from tempest.services import botoclients
@@ -226,15 +227,16 @@
             endpoint_type=CONF.data_processing.endpoint_type,
             **self.default_params_with_timeout_values)
         self.negative_client = negative_rest_client.NegativeRestClient(
-            self.auth_provider, service)
+            self.auth_provider, service, **self.default_params)
 
-        # TODO(andreaf) EC2 client still do their auth, v2 only
-        ec2_client_args = (self.credentials.username,
-                           self.credentials.password,
-                           CONF.identity.uri,
-                           self.credentials.tenant_name)
-        self.ec2api_client = botoclients.APIClientEC2(*ec2_client_args)
-        self.s3_client = botoclients.ObjectClientS3(*ec2_client_args)
+        # Generating EC2 credentials in tempest is only supported
+        # with identity v2
+        if CONF.identity_feature_enabled.api_v2 and \
+                CONF.identity.auth_version == 'v2':
+            # EC2 and S3 clients, if used, will check configured AWS
+            # credentials and generate new ones if needed
+            self.ec2api_client = botoclients.APIClientEC2(self.identity_client)
+            self.s3_client = botoclients.ObjectClientS3(self.identity_client)
 
     def _set_compute_clients(self):
         params = {
diff --git a/tempest/cmd/cleanup.py b/tempest/cmd/cleanup.py
index 669f506..ed6716e 100755
--- a/tempest/cmd/cleanup.py
+++ b/tempest/cmd/cleanup.py
@@ -54,11 +54,12 @@
 import json
 import sys
 
+from oslo_log import log as logging
+
 from tempest import clients
 from tempest.cmd import cleanup_service
 from tempest.common import cred_provider
 from tempest import config
-from tempest.openstack.common import log as logging
 
 SAVED_STATE_JSON = "saved_state.json"
 DRY_RUN_JSON = "dry_run.json"
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index 7b217bb..1ad12eb 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -14,9 +14,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 from tempest import clients
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index 4c09bd2..e970249 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -110,14 +110,15 @@
 import unittest
 
 import netaddr
+from oslo_log import log as logging
+from oslo_utils import timeutils
 from tempest_lib import exceptions as lib_exc
 import yaml
 
 import tempest.auth
 from tempest import config
-from tempest.openstack.common import log as logging
-from tempest.openstack.common import timeutils
 from tempest.services.compute.json import flavors_client
+from tempest.services.compute.json import floating_ips_client
 from tempest.services.compute.json import security_groups_client
 from tempest.services.compute.json import servers_client
 from tempest.services.identity.v2.json import identity_client
@@ -194,6 +195,8 @@
                                                         **compute_params)
         self.flavors = flavors_client.FlavorsClientJSON(_auth,
                                                         **compute_params)
+        self.floating_ips = floating_ips_client.FloatingIPsClientJSON(
+            _auth, **compute_params)
         self.secgroups = security_groups_client.SecurityGroupsClientJSON(
             _auth, **compute_params)
         self.objects = object_client.ObjectClient(_auth,
@@ -451,15 +454,31 @@
             # validate neutron is enabled and ironic disabled:
             if (CONF.service_available.neutron and
                     not CONF.baremetal.driver_enabled):
+                _floating_is_alive = False
                 for network_name, body in found['addresses'].items():
                     for addr in body:
                         ip = addr['addr']
-                        if addr.get('OS-EXT-IPS:type', 'fixed') == 'fixed':
+                        # If use_floatingip_for_ssh is True, it is assumed
+                        # you want to use the floating IP to reach the server,
+                        # falling back to the fixed IP, then other types.
+                        # This is useful in multi-node environments.
+                        if CONF.compute.use_floatingip_for_ssh:
+                            if addr.get('OS-EXT-IPS:type',
+                                        'floating') == 'floating':
+                                self._ping_ip(ip, 60)
+                                _floating_is_alive = True
+                        elif addr.get('OS-EXT-IPS:type', 'fixed') == 'fixed':
                             namespace = _get_router_namespace(client,
                                                               network_name)
                             self._ping_ip(ip, 60, namespace)
                         else:
                             self._ping_ip(ip, 60)
+                # If use_floatingip_for_ssh is True, validate that a
+                # floating IP was found and the ping worked.
+                if CONF.compute.use_floatingip_for_ssh:
+                    self.assertTrue(_floating_is_alive,
+                                    "Server %s has no floating IP." %
+                                    server['name'])
             else:
                 addr = found['addresses']['private'][0]['addr']
                 self._ping_ip(addr, 60)
@@ -838,6 +857,10 @@
         # create to security group(s) after server spawning
         for secgroup in server['secgroups']:
             client.servers.add_security_group(server_id, secgroup)
+        if CONF.compute.use_floatingip_for_ssh:
+            floating_ip = client.floating_ips.create_floating_ip()
+            client.floating_ips.associate_floating_ip_to_server(
+                floating_ip['ip'], server_id)
 
 
 def destroy_servers(servers):
@@ -852,6 +875,7 @@
             LOG.info("Server '%s' does not exist" % server['name'])
             continue
 
+        # TODO(EmilienM): disassociate floating IP from server and release it.
         client.servers.delete_server(response['id'])
         client.servers.wait_for_server_termination(response['id'],
                                                    ignore_error=True)
@@ -1038,7 +1062,7 @@
 
 def setup_logging():
     global LOG
-    logging.setup(__name__)
+    logging.setup(CONF, __name__)
     LOG = logging.getLogger(__name__)
 
 
diff --git a/tempest/cmd/run_stress.py b/tempest/cmd/run_stress.py
index d21a441..06b338d 100755
--- a/tempest/cmd/run_stress.py
+++ b/tempest/cmd/run_stress.py
@@ -24,9 +24,9 @@
     # unittest in python 2.6 does not contain loader, so uses unittest2
     from unittest2 import loader
 
+from oslo_log import log as logging
 from testtools import testsuite
 
-from tempest.openstack.common import log as logging
 from tempest.stress import driver
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/cmd/verify_tempest_config.py b/tempest/cmd/verify_tempest_config.py
index 697965f..909de96 100755
--- a/tempest/cmd/verify_tempest_config.py
+++ b/tempest/cmd/verify_tempest_config.py
@@ -28,7 +28,6 @@
 
 
 CONF = config.CONF
-RAW_HTTP = httplib2.Http()
 CONF_PARSER = None
 
 
@@ -83,7 +82,11 @@
     }
     client_dict[service].skip_path()
     endpoint = _get_unversioned_endpoint(client_dict[service].base_url)
-    __, body = RAW_HTTP.request(endpoint, 'GET')
+    dscv = CONF.identity.disable_ssl_certificate_validation
+    ca_certs = CONF.identity.ca_certificates_file
+    raw_http = httplib2.Http(disable_ssl_certificate_validation=dscv,
+                             ca_certs=ca_certs)
+    __, body = raw_http.request(endpoint, 'GET')
     client_dict[service].reset_path()
     body = json.loads(body)
     if service == 'keystone':
diff --git a/tempest/common/accounts.py b/tempest/common/accounts.py
index 8766e7d..8e9a018 100644
--- a/tempest/common/accounts.py
+++ b/tempest/common/accounts.py
@@ -15,13 +15,13 @@
 import hashlib
 import os
 
+from oslo_concurrency import lockutils
+from oslo_log import log as logging
 import yaml
 
 from tempest.common import cred_provider
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import lockutils
-from tempest.openstack.common import log as logging
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
@@ -35,9 +35,9 @@
 
 class Accounts(cred_provider.CredentialProvider):
 
-    def __init__(self, name):
-        super(Accounts, self).__init__(name)
-        self.name = name
+    def __init__(self, identity_version=None, name=None):
+        super(Accounts, self).__init__(identity_version=identity_version,
+                                       name=name)
         if os.path.isfile(CONF.auth.test_accounts_file):
             accounts = read_accounts_yaml(CONF.auth.test_accounts_file)
             self.use_default_creds = False
@@ -45,7 +45,8 @@
             accounts = {}
             self.use_default_creds = True
         self.hash_dict = self.get_hash_dict(accounts)
-        self.accounts_dir = os.path.join(CONF.lock_path, 'test_accounts')
+        self.accounts_dir = os.path.join(lockutils.get_lock_path(CONF),
+                                         'test_accounts')
         self.isolated_creds = {}
 
     @classmethod
@@ -201,7 +202,8 @@
         if self.isolated_creds.get('primary'):
             return self.isolated_creds.get('primary')
         creds = self._get_creds()
-        primary_credential = cred_provider.get_credentials(**creds)
+        primary_credential = cred_provider.get_credentials(
+            identity_version=self.identity_version, **creds)
         self.isolated_creds['primary'] = primary_credential
         return primary_credential
 
@@ -209,7 +211,8 @@
         if self.isolated_creds.get('alt'):
             return self.isolated_creds.get('alt')
         creds = self._get_creds()
-        alt_credential = cred_provider.get_credentials(**creds)
+        alt_credential = cred_provider.get_credentials(
+            identity_version=self.identity_version, **creds)
         self.isolated_creds['alt'] = alt_credential
         return alt_credential
 
@@ -225,7 +228,8 @@
             new_index = str(roles) + '-' + str(len(self.isolated_creds))
             self.isolated_creds[new_index] = exist_creds
         creds = self._get_creds(roles=roles)
-        role_credential = cred_provider.get_credentials(**creds)
+        role_credential = cred_provider.get_credentials(
+            identity_version=self.identity_version, **creds)
         self.isolated_creds[str(roles)] = role_credential
         return role_credential
 
@@ -293,10 +297,11 @@
             return self.isolated_creds.get('primary')
         if not self.use_default_creds:
             creds = self.get_creds(0)
-            primary_credential = cred_provider.get_credentials(**creds)
+            primary_credential = cred_provider.get_credentials(
+                identity_version=self.identity_version, **creds)
         else:
             primary_credential = cred_provider.get_configured_credentials(
-                'user')
+                credential_type='user', identity_version=self.identity_version)
         self.isolated_creds['primary'] = primary_credential
         return primary_credential
 
@@ -305,10 +310,12 @@
             return self.isolated_creds.get('alt')
         if not self.use_default_creds:
             creds = self.get_creds(1)
-            alt_credential = cred_provider.get_credentials(**creds)
+            alt_credential = cred_provider.get_credentials(
+                identity_version=self.identity_version, **creds)
         else:
             alt_credential = cred_provider.get_configured_credentials(
-                'alt_user')
+                credential_type='alt_user',
+                identity_version=self.identity_version)
         self.isolated_creds['alt'] = alt_credential
         return alt_credential
 
diff --git a/tempest/common/commands.py b/tempest/common/commands.py
index e68c20e..392c9d0 100644
--- a/tempest/common/commands.py
+++ b/tempest/common/commands.py
@@ -15,7 +15,7 @@
 import shlex
 import subprocess
 
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tempest/common/cred_provider.py b/tempest/common/cred_provider.py
index ea628f6..9630d1c 100644
--- a/tempest/common/cred_provider.py
+++ b/tempest/common/cred_provider.py
@@ -14,12 +14,12 @@
 
 import abc
 
+from oslo_log import log as logging
 import six
 
 from tempest import auth
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
@@ -31,6 +31,13 @@
     'alt_user': ('identity', 'alt')
 }
 
+DEFAULT_PARAMS = {
+    'disable_ssl_certificate_validation':
+        CONF.identity.disable_ssl_certificate_validation,
+    'ca_certs': CONF.identity.ca_certificates_file,
+    'trace_requests': CONF.debug.trace_requests
+}
+
 
 # Read credentials from configuration, builds a Credentials object
 # based on the specified or configured version
@@ -46,7 +53,7 @@
     if identity_version == 'v3':
         conf_attributes.append('domain_name')
     # Read the parts of credentials from config
-    params = {}
+    params = DEFAULT_PARAMS.copy()
     section, prefix = CREDENTIAL_TYPES[credential_type]
     for attr in conf_attributes:
         _section = getattr(CONF, section)
@@ -56,7 +63,8 @@
             params[attr] = getattr(_section, prefix + "_" + attr)
     # Build and validate credentials. We are reading configured credentials,
     # so validate them even if fill_in is False
-    credentials = get_credentials(fill_in=fill_in, **params)
+    credentials = get_credentials(fill_in=fill_in,
+                                  identity_version=identity_version, **params)
     if not fill_in:
         if not credentials.is_valid():
             msg = ("The %s credentials are incorrectly set in the config file."
@@ -69,26 +77,44 @@
 # Wrapper around auth.get_credentials to use the configured identity version
 # if none is specified
 def get_credentials(fill_in=True, identity_version=None, **kwargs):
+    params = dict(DEFAULT_PARAMS, **kwargs)
     identity_version = identity_version or CONF.identity.auth_version
     # In case of "v3" add the domain from config if not specified
     if identity_version == 'v3':
         domain_fields = set(x for x in auth.KeystoneV3Credentials.ATTRIBUTES
                             if 'domain' in x)
         if not domain_fields.intersection(kwargs.keys()):
-            kwargs['user_domain_name'] = CONF.identity.admin_domain_name
+            params['user_domain_name'] = CONF.identity.admin_domain_name
         auth_url = CONF.identity.uri_v3
     else:
         auth_url = CONF.identity.uri
     return auth.get_credentials(auth_url,
                                 fill_in=fill_in,
                                 identity_version=identity_version,
-                                **kwargs)
+                                **params)
 
 
 @six.add_metaclass(abc.ABCMeta)
 class CredentialProvider(object):
-    def __init__(self, name, password='pass', network_resources=None):
-        self.name = name
+    def __init__(self, identity_version=None, name=None, password='pass',
+                 network_resources=None):
+        """A CredentialProvider supplies credentials to test classes.
+        :param identity_version If specified it will return credentials of the
+                                corresponding identity version, otherwise it
+                                uses auth_version from configuration
+        :param name Name of the calling test. Included in provisioned
+                    credentials when credentials are provisioned on the fly
+        :param password Used for provisioned credentials when credentials are
+                        provisioned on the fly
+        :param network_resources Network resources required for the credentials
+        """
+        # TODO(andreaf) name and password are tenant isolation specific, and
+        # could be removed from this abstract class
+        self.name = name or "test_creds"
+        self.identity_version = identity_version or CONF.identity.auth_version
+        if not auth.is_identity_version_supported(self.identity_version):
+            raise exceptions.InvalidIdentityVersion(
+                identity_version=self.identity_version)
 
     @abc.abstractmethod
     def get_primary_creds(self):
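
The new DEFAULT_PARAMS handling above uses the dict(defaults, **overrides) merge,
where explicit keyword arguments win over the config-derived defaults. A tiny
standalone illustration (the values are made up)::

    defaults = {'ca_certs': '/etc/ssl/certs/ca.pem', 'trace_requests': ''}
    overrides = {'trace_requests': 'compute'}

    merged = dict(defaults, **overrides)
    assert merged == {'ca_certs': '/etc/ssl/certs/ca.pem',
                      'trace_requests': 'compute'}
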
diff --git a/tempest/common/credentials.py b/tempest/common/credentials.py
index 2f7fb73..1ca0128 100644
--- a/tempest/common/credentials.py
+++ b/tempest/common/credentials.py
@@ -26,7 +26,8 @@
 # Dropping interface and password, as they are never used anyways
 # TODO(andreaf) Drop them from the CredentialsProvider interface completely
 def get_isolated_credentials(name, network_resources=None,
-                             force_tenant_isolation=False):
+                             force_tenant_isolation=False,
+                             identity_version=None):
     # If a test requires a new account to work, it can have it via forcing
     # tenant isolation. A new account will be produced only for that test.
     # In case admin credentials are not available for the account creation,
@@ -34,13 +35,16 @@
     if CONF.auth.allow_tenant_isolation or force_tenant_isolation:
         return isolated_creds.IsolatedCreds(
             name=name,
-            network_resources=network_resources)
+            network_resources=network_resources,
+            identity_version=identity_version)
     else:
         if CONF.auth.locking_credentials_provider:
             # Most params are not relevant for pre-created accounts
-            return accounts.Accounts(name=name)
+            return accounts.Accounts(name=name,
+                                     identity_version=identity_version)
         else:
-            return accounts.NotLockingAccounts(name=name)
+            return accounts.NotLockingAccounts(
+                name=name, identity_version=identity_version)
 
 
 # We want a helper function here to check and see if admin credentials
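
With the new identity_version parameter, a test can ask the credentials helper for
a provider bound to a specific identity API version. A hedged sketch; the test
name below is illustrative::

    from tempest.common import credentials

    # Returns IsolatedCreds, Accounts or NotLockingAccounts depending on the
    # configuration, all now aware of the requested identity version.
    provider = credentials.get_isolated_credentials(
        name='ServersTestJSON', identity_version='v3')
    primary_creds = provider.get_primary_creds()
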
diff --git a/tempest/common/fixed_network.py b/tempest/common/fixed_network.py
new file mode 100644
index 0000000..b06ddf2
--- /dev/null
+++ b/tempest/common/fixed_network.py
@@ -0,0 +1,89 @@
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import copy
+from oslo_log import log as logging
+
+from tempest_lib import exceptions as lib_exc
+
+from tempest import config
+from tempest import exceptions
+
+CONF = config.CONF
+
+LOG = logging.getLogger(__name__)
+
+
+def get_tenant_network(creds_provider, compute_networks_client):
+    """Get a network usable by the primary tenant
+
+    :param creds_provider: instance of credential provider
+    :param compute_networks_client: compute network client. We want to have the
+           compute network client so we can use a common approach for both
+           neutron and nova-network cases. If this is not an admin network
+           client, set_networks_kwarg might fail in case fixed_network_name
+           is the network to be used, and it's not visible to the tenant
+    :return a dict with 'id' and 'name' of the network
+    """
+    fixed_network_name = CONF.compute.fixed_network_name
+    network = None
+    # NOTE(andreaf) get_primary_network will always be available once
+    # bp test-accounts-continued is implemented
+    if (CONF.auth.allow_tenant_isolation and
+        (CONF.service_available.neutron and
+         not CONF.service_available.ironic)):
+        network = creds_provider.get_primary_network()
+    else:
+        if fixed_network_name:
+            try:
+                resp = compute_networks_client.list_networks(
+                    name=fixed_network_name)
+                if isinstance(resp, list):
+                    networks = resp
+                elif isinstance(resp, dict):
+                    networks = resp['networks']
+                else:
+                    raise lib_exc.NotFound()
+                if len(networks) > 0:
+                    network = networks[0]
+                else:
+                    msg = "Configured fixed_network_name not found"
+                    raise exceptions.InvalidConfiguration(msg)
+                # To be consistent with network isolation, add 'name' if only
+                # 'label' is available.
+                network['name'] = network.get('name', network.get('label'))
+            except lib_exc.NotFound:
+                # In case of nova network, if the fixed_network_name is not
+                # owned by the tenant, and the network client is not an admin
+                # one, list_networks will not find it
+                LOG.info('Unable to find network %s. '
+                         'Starting instance without specifying a network.' %
+                         fixed_network_name)
+                network = {'name': fixed_network_name}
+    LOG.info('Found network %s available for tenant' % network)
+    return network
+
+
+def set_networks_kwarg(network, kwargs=None):
+    """Set 'networks' kwargs for a server create if missing
+
+    :param network: dict of network to be used with 'id' and 'name'
+    :param kwargs: server create kwargs to be enhanced
+    :return: new dict of kwargs updated to include networks
+    """
+    params = copy.copy(kwargs) or {}
+    if kwargs and 'networks' in kwargs:
+        return params
+
+    if network:
+        params.update({"networks": [{'uuid': network['id']}]})
+    return params
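
A hedged sketch of how the two helpers above are meant to be combined when booting
a server; the client objects and the create_server signature are assumptions, not
part of this change::

    from tempest.common import fixed_network

    def boot_on_tenant_network(creds_provider, networks_client,
                               servers_client, image_id, flavor_id):
        # Resolve a network usable by the primary tenant (isolated network,
        # configured fixed_network_name, or none at all).
        network = fixed_network.get_tenant_network(creds_provider,
                                                   networks_client)
        # Merge it into the create kwargs unless 'networks' is already set.
        kwargs = fixed_network.set_networks_kwarg(network)
        return servers_client.create_server('example-server', image_id,
                                            flavor_id, **kwargs)
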
diff --git a/tempest/common/generator/base_generator.py b/tempest/common/generator/base_generator.py
index 3f405b1..f81f405 100644
--- a/tempest/common/generator/base_generator.py
+++ b/tempest/common/generator/base_generator.py
@@ -18,7 +18,7 @@
 
 import jsonschema
 
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tempest/common/generator/negative_generator.py b/tempest/common/generator/negative_generator.py
index 1d5ed43..17997a5 100644
--- a/tempest/common/generator/negative_generator.py
+++ b/tempest/common/generator/negative_generator.py
@@ -15,9 +15,10 @@
 
 import copy
 
+from oslo_log import log as logging
+
 import tempest.common.generator.base_generator as base
 import tempest.common.generator.valid_generator as valid
-from tempest.openstack.common import log as logging
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tempest/common/generator/valid_generator.py b/tempest/common/generator/valid_generator.py
index 7b80afc..0c63bf5 100644
--- a/tempest/common/generator/valid_generator.py
+++ b/tempest/common/generator/valid_generator.py
@@ -13,8 +13,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 import tempest.common.generator.base_generator as base
-from tempest.openstack.common import log as logging
 
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/common/glance_http.py b/tempest/common/glance_http.py
index dd1448a..c6b8ba3 100644
--- a/tempest/common/glance_http.py
+++ b/tempest/common/glance_http.py
@@ -28,11 +28,11 @@
 
 
 import OpenSSL
+from oslo_log import log as logging
 from six import moves
 from tempest_lib import exceptions as lib_exc
 
 from tempest import exceptions as exc
-from tempest.openstack.common import log as logging
 
 LOG = logging.getLogger(__name__)
 USER_AGENT = 'tempest'
diff --git a/tempest/common/isolated_creds.py b/tempest/common/isolated_creds.py
index ca2bd65..22fc9c3 100644
--- a/tempest/common/isolated_creds.py
+++ b/tempest/common/isolated_creds.py
@@ -12,7 +12,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import abc
 import netaddr
+from oslo_log import log as logging
+import six
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
@@ -20,23 +23,142 @@
 from tempest.common import cred_provider
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
+from tempest.services.identity.v2.json import identity_client as v2_identity
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
 
 
+@six.add_metaclass(abc.ABCMeta)
+class CredsClient(object):
+    """This class is a wrapper around the identity clients, to provide a
+     single interface for managing credentials in both v2 and v3 cases.
+     It's not bound to created credentials, only to a specific set of admin
+     credentials used for generating credentials.
+    """
+
+    def __init__(self, identity_client):
+        # The client implies version and credentials
+        self.identity_client = identity_client
+        self.credentials = self.identity_client.auth_provider.credentials
+
+    def create_user(self, username, password, project, email):
+        user = self.identity_client.create_user(
+            username, password, project['id'], email)
+        return user
+
+    @abc.abstractmethod
+    def create_project(self, name, description):
+        pass
+
+    def assign_user_role(self, user, project, role_name):
+        try:
+            roles = self._list_roles()
+            role = next(r for r in roles if r['name'] == role_name)
+        except StopIteration:
+            msg = 'No "%s" role found' % role_name
+            raise lib_exc.NotFound(msg)
+        try:
+            self.identity_client.assign_user_role(project['id'], user['id'],
+                                                  role['id'])
+        except lib_exc.Conflict:
+            LOG.debug("Role %s already assigned on project %s for user %s" % (
+                role['id'], project['id'], user['id']))
+
+    @abc.abstractmethod
+    def get_credentials(self, user, project, password):
+        pass
+
+    def delete_user(self, user_id):
+        self.identity_client.delete_user(user_id)
+
+    def _list_roles(self):
+        roles = self.identity_client.list_roles()
+        return roles
+
+
+class V2CredsClient(CredsClient):
+
+    def create_project(self, name, description):
+        tenant = self.identity_client.create_tenant(
+            name=name, description=description)
+        return tenant
+
+    def get_credentials(self, user, project, password):
+        return cred_provider.get_credentials(
+            identity_version='v2',
+            username=user['name'], user_id=user['id'],
+            tenant_name=project['name'], tenant_id=project['id'],
+            password=password)
+
+    def delete_project(self, project_id):
+        self.identity_client.delete_tenant(project_id)
+
+
+class V3CredsClient(CredsClient):
+
+    def __init__(self, identity_client, domain_name):
+        super(V3CredsClient, self).__init__(identity_client)
+        try:
+            # Domain names must be unique, in any case a list is returned,
+            # selecting the first (and only) element
+            self.creds_domain = self.identity_client.list_domains(
+                params={'name': domain_name})[0]
+        except lib_exc.NotFound:
+            # TODO(andrea) we could probably create the domain on the fly
+            msg = "Configured domain %s could not be found" % domain_name
+            raise exceptions.InvalidConfiguration(msg)
+
+    def create_project(self, name, description):
+        project = self.identity_client.create_project(
+            name=name, description=description,
+            domain_id=self.creds_domain['id'])
+        return project
+
+    def get_credentials(self, user, project, password):
+        return cred_provider.get_credentials(
+            identity_version='v3',
+            username=user['name'], user_id=user['id'],
+            project_name=project['name'], project_id=project['id'],
+            password=password,
+            project_domain_name=self.creds_domain['name'])
+
+    def delete_project(self, project_id):
+        self.identity_client.delete_project(project_id)
+
+
+def get_creds_client(identity_client, project_domain_name=None):
+    if isinstance(identity_client, v2_identity.IdentityClientJSON):
+        return V2CredsClient(identity_client)
+    else:
+        return V3CredsClient(identity_client, project_domain_name)
+
+
 class IsolatedCreds(cred_provider.CredentialProvider):
 
-    def __init__(self, name, password='pass', network_resources=None):
-        super(IsolatedCreds, self).__init__(name, password, network_resources)
+    def __init__(self, identity_version=None, name=None, password='pass',
+                 network_resources=None):
+        super(IsolatedCreds, self).__init__(identity_version, name, password,
+                                            network_resources)
         self.network_resources = network_resources
         self.isolated_creds = {}
         self.isolated_net_resources = {}
         self.ports = []
         self.password = password
+        self.default_admin_creds = cred_provider.get_configured_credentials(
+            'identity_admin', fill_in=True,
+            identity_version=self.identity_version)
         self.identity_admin_client, self.network_admin_client = (
             self._get_admin_clients())
+        # Domain where isolated credentials are provisioned (v3 only).
+        # Use that of the admin account if None is configured.
+        self.creds_domain_name = None
+        if self.identity_version == 'v3':
+            self.creds_domain_name = (
+                CONF.auth.tenant_isolation_domain_name or
+                self.default_admin_creds.project_domain_name)
+        self.creds_client = get_creds_client(
+            self.identity_admin_client, self.creds_domain_name)
 
     def _get_admin_clients(self):
         """
@@ -45,57 +167,11 @@
             identity
             network
         """
-        os = clients.AdminManager()
-        return os.identity_client, os.network_client
-
-    def _create_tenant(self, name, description):
-        tenant = self.identity_admin_client.create_tenant(
-            name=name, description=description)
-        return tenant
-
-    def _get_tenant_by_name(self, name):
-        tenant = self.identity_admin_client.get_tenant_by_name(name)
-        return tenant
-
-    def _create_user(self, username, password, tenant, email):
-        user = self.identity_admin_client.create_user(
-            username, password, tenant['id'], email)
-        return user
-
-    def _get_user(self, tenant, username):
-        user = self.identity_admin_client.get_user_by_username(
-            tenant['id'], username)
-        return user
-
-    def _list_roles(self):
-        roles = self.identity_admin_client.list_roles()
-        return roles
-
-    def _assign_user_role(self, tenant, user, role_name):
-        role = None
-        try:
-            roles = self._list_roles()
-            role = next(r for r in roles if r['name'] == role_name)
-        except StopIteration:
-            msg = 'No "%s" role found' % role_name
-            raise lib_exc.NotFound(msg)
-        try:
-            self.identity_admin_client.assign_user_role(tenant['id'],
-                                                        user['id'],
-                                                        role['id'])
-        except lib_exc.Conflict:
-            LOG.warning('Trying to add %s for user %s in tenant %s but they '
-                        ' were already granted that role' % (role_name,
-                                                             user['name'],
-                                                             tenant['name']))
-
-    def _delete_user(self, user):
-        self.identity_admin_client.delete_user(user)
-
-    def _delete_tenant(self, tenant):
-        if CONF.service_available.neutron:
-            self._cleanup_default_secgroup(tenant)
-        self.identity_admin_client.delete_tenant(tenant)
+        os = clients.Manager(self.default_admin_creds)
+        if self.identity_version == 'v2':
+            return os.identity_client, os.network_client
+        else:
+            return os.identity_v3_client, os.network_client
 
     def _create_creds(self, suffix="", admin=False, roles=None):
         """Create random credentials under the following schema.
@@ -112,31 +188,26 @@
         else:
             root = self.name
 
-        tenant_name = data_utils.rand_name(root) + suffix
-        tenant_desc = tenant_name + "-desc"
-        tenant = self._create_tenant(name=tenant_name,
-                                     description=tenant_desc)
+        project_name = data_utils.rand_name(root) + suffix
+        project_desc = project_name + "-desc"
+        project = self.creds_client.create_project(
+            name=project_name, description=project_desc)
 
         username = data_utils.rand_name(root) + suffix
         email = data_utils.rand_name(root) + suffix + "@example.com"
-        user = self._create_user(username, self.password,
-                                 tenant, email)
+        user = self.creds_client.create_user(
+            username, self.password, project, email)
         if admin:
-            self._assign_user_role(tenant, user, CONF.identity.admin_role)
+            self.creds_client.assign_user_role(user, project,
+                                               CONF.identity.admin_role)
         # Add roles specified in config file
         for conf_role in CONF.auth.tempest_roles:
-            self._assign_user_role(tenant, user, conf_role)
+            self.creds_client.assign_user_role(user, project, conf_role)
         # Add roles requested by caller
         if roles:
             for role in roles:
-                self._assign_user_role(tenant, user, role)
-        return self._get_credentials(user, tenant)
-
-    def _get_credentials(self, user, tenant):
-        return cred_provider.get_credentials(
-            username=user['name'], user_id=user['id'],
-            tenant_name=tenant['name'], tenant_id=tenant['id'],
-            password=self.password)
+                self.creds_client.assign_user_role(user, project, role)
+        return self.creds_client.get_credentials(user, project, self.password)
 
     def _create_network_resources(self, tenant_id):
         network = None
@@ -371,12 +442,14 @@
         self._clear_isolated_net_resources()
         for creds in self.isolated_creds.itervalues():
             try:
-                self._delete_user(creds.user_id)
+                self.creds_client.delete_user(creds.user_id)
             except lib_exc.NotFound:
                 LOG.warn("user with name: %s not found for delete" %
                          creds.username)
             try:
-                self._delete_tenant(creds.tenant_id)
+                if CONF.service_available.neutron:
+                    self._cleanup_default_secgroup(creds.tenant_id)
+                self.creds_client.delete_project(creds.tenant_id)
             except lib_exc.NotFound:
                 LOG.warn("tenant with name: %s not found for delete" %
                          creds.tenant_name)
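
The ``CredsClient`` hierarchy introduced above hides the keystone v2 "tenant"
versus v3 "project" naming behind a single interface, selected by a small
factory. A trimmed-down, self-contained sketch of that shape (the fake
identity client and its return values are illustrative only)::

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class CredsClientSketch(object):
        def __init__(self, identity_client):
            self.identity_client = identity_client

        @abc.abstractmethod
        def create_project(self, name, description):
            pass

    class V2Sketch(CredsClientSketch):
        def create_project(self, name, description):
            # Keystone v2 names the resource a "tenant" ...
            return self.identity_client.create_tenant(
                name=name, description=description)

    class V3Sketch(CredsClientSketch):
        def create_project(self, name, description):
            # ... while v3 uses "project"; callers never see the difference.
            return self.identity_client.create_project(
                name=name, description=description)

    class FakeV2Identity(object):
        def create_tenant(self, name, description):
            return {'id': 'v2-id', 'name': name}

    print(V2Sketch(FakeV2Identity()).create_project('demo', 'demo-desc'))
    # {'id': 'v2-id', 'name': 'demo'}
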
diff --git a/tempest/common/negative_rest_client.py b/tempest/common/negative_rest_client.py
index a02e494..abd8b31 100644
--- a/tempest/common/negative_rest_client.py
+++ b/tempest/common/negative_rest_client.py
@@ -25,25 +25,39 @@
     """
     Version of RestClient that does not raise exceptions.
     """
-    def __init__(self, auth_provider, service):
-        region = self._get_region(service)
-        super(NegativeRestClient, self).__init__(auth_provider,
-                                                 service, region)
+    def __init__(self, auth_provider, service,
+                 build_interval=None, build_timeout=None,
+                 disable_ssl_certificate_validation=None,
+                 ca_certs=None, trace_requests=None):
+        region, endpoint_type = self._get_region_and_endpoint_type(service)
+        super(NegativeRestClient, self).__init__(
+            auth_provider,
+            service,
+            region,
+            endpoint_type=endpoint_type,
+            build_interval=build_interval,
+            build_timeout=build_timeout,
+            disable_ssl_certificate_validation=(
+                disable_ssl_certificate_validation),
+            ca_certs=ca_certs,
+            trace_requests=trace_requests)
 
-    def _get_region(self, service):
+    def _get_region_and_endpoint_type(self, service):
         """
         Returns the region for a specific service
         """
         service_region = None
+        service_endpoint_type = None
         for cfgname in dir(CONF._config):
             # Find all config.FOO.catalog_type and assume FOO is a service.
             cfg = getattr(CONF, cfgname)
             catalog_type = getattr(cfg, 'catalog_type', None)
             if catalog_type == service:
                 service_region = getattr(cfg, 'region', None)
+                service_endpoint_type = getattr(cfg, 'endpoint_type', None)
         if not service_region:
             service_region = CONF.identity.region
-        return service_region
+        return service_region, service_endpoint_type
 
     def _error_checker(self, method, url,
                        headers, body, resp, resp_body):
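
The reworked lookup above scans every registered option group for one whose
``catalog_type`` matches the requested service, and only falls back to the
identity region when that group does not set one. A small stand-alone sketch
of that scan (group names and values are invented)::

    import collections

    # Minimal stand-ins for the option groups tempest registers; only
    # catalog_type, region and endpoint_type matter for this lookup.
    Group = collections.namedtuple(
        'Group', 'catalog_type region endpoint_type')

    conf_groups = {
        'compute': Group('compute', 'RegionOne', 'publicURL'),
        'volume': Group('volume', None, 'adminURL'),
    }

    def get_region_and_endpoint_type(service, identity_region='RegionOne'):
        region = endpoint_type = None
        for group in conf_groups.values():
            if group.catalog_type == service:
                region = group.region
                endpoint_type = group.endpoint_type
        return region or identity_region, endpoint_type

    print(get_region_and_endpoint_type('volume'))
    # ('RegionOne', 'adminURL'): region falls back, endpoint_type per-service
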
diff --git a/tempest/common/service_client.py b/tempest/common/service_client.py
index ad6610a..87e925d 100644
--- a/tempest/common/service_client.py
+++ b/tempest/common/service_client.py
@@ -14,10 +14,6 @@
 
 from tempest_lib.common import rest_client
 
-from tempest import config
-
-CONF = config.CONF
-
 
 class ServiceClient(rest_client.RestClient):
 
@@ -26,15 +22,11 @@
                  disable_ssl_certificate_validation=None, ca_certs=None,
                  trace_requests=None):
 
-        # TODO(oomichi): This params setting should be removed after all
-        # service clients pass these values, and we can make ServiceClient
-        # free from CONF values.
-        dscv = (disable_ssl_certificate_validation or
-                CONF.identity.disable_ssl_certificate_validation)
+        dscv = disable_ssl_certificate_validation
         params = {
             'disable_ssl_certificate_validation': dscv,
-            'ca_certs': ca_certs or CONF.identity.ca_certificates_file,
-            'trace_requests': trace_requests or CONF.debug.trace_requests
+            'ca_certs': ca_certs,
+            'trace_requests': trace_requests
         }
 
         if endpoint_type is not None:
diff --git a/tempest/common/ssh.py b/tempest/common/ssh.py
index c06ce3b..fe67ff8 100644
--- a/tempest/common/ssh.py
+++ b/tempest/common/ssh.py
@@ -20,10 +20,10 @@
 import time
 import warnings
 
+from oslo_log import log as logging
 import six
 
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 
 
 with warnings.catch_warnings():
diff --git a/tempest/common/tempest_fixtures.py b/tempest/common/tempest_fixtures.py
index b33f354..d416857 100644
--- a/tempest/common/tempest_fixtures.py
+++ b/tempest/common/tempest_fixtures.py
@@ -13,7 +13,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.openstack.common.fixture import lockutils
+from oslo_concurrency.fixture import lockutils
 
 
 class LockFixture(lockutils.LockFixture):
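
Usage of the fixture is unchanged by the move to ``oslo_concurrency``; a
minimal sketch of serializing tests on an external lock (the lock name is
made up, and a configured lock path is assumed)::

    from oslo_concurrency.fixture import lockutils
    import testtools

    class SerializedTests(testtools.TestCase):
        def test_shared_resource(self):
            # Taking the lock via a fixture means addCleanup callbacks run
            # while the lock is still held, so other tests using the same
            # lock name are fully serialized against this one.
            self.useFixture(lockutils.LockFixture('example-lock'))
            # ... exercise the shared resource here ...
            self.assertTrue(True)
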
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 1f1414f..b19faef 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -145,7 +145,7 @@
 
     def _renew_lease_dhclient(self, fixed_ip=None):
         """Renews DHCP lease via dhclient client. """
-        cmd = "sudo /sbin/dhclient -r && /sbin/dhclient"
+        cmd = "sudo /sbin/dhclient -r && sudo /sbin/dhclient"
         self.exec_command(cmd)
 
     def renew_lease(self, fixed_ip=None):
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 6d50b67..64ff7f2 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -13,11 +13,11 @@
 
 import time
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import misc as misc_utils
 
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
diff --git a/tempest/config.py b/tempest/config.py
index a66aa9b..78952f1 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -18,10 +18,9 @@
 import logging as std_logging
 import os
 
-from oslo.config import cfg
+from oslo_config import cfg
 
-from tempest.openstack.common import lockutils
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 
 
 def register_opt_group(conf, opt_group, options):
@@ -61,7 +60,13 @@
                      "number of concurrent test processes."),
     cfg.ListOpt('tempest_roles',
                 help="Roles to assign to all users created by tempest",
-                default=[])
+                default=[]),
+    cfg.StrOpt('tenant_isolation_domain_name',
+               default=None,
+               help="Only applicable when identity.auth_version is v3."
+                    "Domain within which isolated credentials are provisioned."
+                    "The default \"None\" means that the domain from the"
+                    "admin user is used instead.")
 ]
 
 identity_group = cfg.OptGroup(name='identity',
@@ -227,9 +232,11 @@
                help="Timeout in seconds to wait for output from ssh "
                     "channel."),
     cfg.StrOpt('fixed_network_name',
-               default='private',
                help="Name of the fixed network that is visible to all test "
-                    "tenants."),
+                    "tenants. If multiple networks are available for a tenant"
+                    " this is the network which will be used for creating "
+                    "servers if tempest does not create a network or a "
+                    "network is not specified elsewhere"),
     cfg.StrOpt('network_for_ssh',
                default='public',
                help="Network used for SSH connections. Ignored if "
@@ -320,7 +327,8 @@
     cfg.BoolOpt('block_migrate_cinder_iscsi',
                 default=False,
                 help="Does the test environment block migration support "
-                     "cinder iSCSI volumes"),
+                "cinder iSCSI volumes. Note, libvirt doesn't support this, "
+                "see https://bugs.launchpad.net/nova/+bug/1398999"),
     cfg.BoolOpt('vnc_console',
                 default=False,
                 help='Enable VNC console. This configuration value should '
@@ -352,7 +360,13 @@
                      'images of running instances?'),
     cfg.BoolOpt('ec2_api',
                 default=True,
-                help='Does the test environment have the ec2 api running?')
+                help='Does the test environment have the ec2 api running?'),
+    # TODO(mriedem): Remove preserve_ports once juno-eol happens.
+    cfg.BoolOpt('preserve_ports',
+                default=False,
+                help='Does Nova preserve preexisting ports from Neutron '
+                     'when deleting an instance? This should be set to True '
+                     'if testing Kilo+ Nova.')
 ]
 
 
@@ -691,6 +705,8 @@
                choices=['public', 'admin', 'internal',
                         'publicURL', 'adminURL', 'internalURL'],
                help="The endpoint type to use for the orchestration service."),
+    cfg.StrOpt('stack_owner_role', default='heat_stack_owner',
+               help='Role required for users to be able to manage stacks'),
     cfg.IntOpt('build_interval',
                default=1,
                help="Time in seconds between build status checks."),
@@ -701,9 +717,6 @@
                default='m1.micro',
                help="Instance type for tests. Needs to be big enough for a "
                     "full OS plus the test workload"),
-    cfg.StrOpt('image_ref',
-               help="Name of heat-cfntools enabled image to use when "
-                    "launching test instances."),
     cfg.StrOpt('keypair_name',
                help="Name of existing keypair to launch servers with."),
     cfg.IntOpt('max_template_size',
@@ -1105,16 +1118,7 @@
     The purpose of this is to allow tools like the Oslo sample config file
     generator to discover the options exposed to users.
     """
-    optlist = [(g.name, o) for g, o in _opts]
-
-    # NOTE(jgrimm): Can be removed once oslo-incubator/oslo changes happen.
-    optlist.append((None, lockutils.util_opts))
-    optlist.append((None, logging.common_cli_opts))
-    optlist.append((None, logging.logging_cli_opts))
-    optlist.append((None, logging.generic_log_opts))
-    optlist.append((None, logging.log_opts))
-
-    return optlist
+    return [(g.name, o) for g, o in _opts]
 
 
 # this should never be called outside of this class
@@ -1193,11 +1197,12 @@
         # to remove an issue with the config file up to date checker.
         if parse_conf:
             config_files.append(path)
+        logging.register_options(cfg.CONF)
         if os.path.isfile(path):
             cfg.CONF([], project='tempest', default_config_files=config_files)
         else:
             cfg.CONF([], project='tempest')
-        logging.setup('tempest')
+        logging.setup(cfg.CONF, 'tempest')
         LOG = logging.getLogger('tempest')
         LOG.info("Using tempest config file %s" % path)
         register_opts()
@@ -1211,16 +1216,14 @@
     _path = None
 
     _extra_log_defaults = [
-        'keystoneclient.session=INFO',
-        'paramiko.transport=INFO',
-        'requests.packages.urllib3.connectionpool=WARN'
+        ('paramiko.transport', std_logging.INFO),
+        ('requests.packages.urllib3.connectionpool', std_logging.WARN),
     ]
 
     def _fix_log_levels(self):
         """Tweak the oslo log defaults."""
-        for opt in logging.log_opts:
-            if opt.dest == 'default_log_levels':
-                opt.default.extend(self._extra_log_defaults)
+        for name, level in self._extra_log_defaults:
+            std_logging.getLogger(name).setLevel(level)
 
     def __getattr__(self, attr):
         if not self._config:
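
The config.py changes also switch logging over to oslo_log, which changes the
initialisation order: the logging options must be registered on the config
object before the config files are parsed, ``setup()`` now takes the config
object as its first argument, and per-logger default levels are tweaked
through the standard library. A hedged sketch of that sequence outside of
tempest (the logger name and log message are illustrative)::

    import logging as std_logging

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF

    # Register the oslo_log options *before* parsing config files so any
    # logging settings found there are recognised.
    logging.register_options(CONF)
    CONF([], project='tempest')

    # With oslo_log the setup call takes the config object explicitly.
    logging.setup(CONF, 'tempest')

    # Noisy third-party loggers are quietened via plain stdlib logging.
    std_logging.getLogger('paramiko.transport').setLevel(std_logging.INFO)

    LOG = logging.getLogger(__name__)
    LOG.info("logging configured via oslo_log")
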
diff --git a/tempest/openstack/common/_i18n.py b/tempest/openstack/common/_i18n.py
index fdc8327..5bbc77d 100644
--- a/tempest/openstack/common/_i18n.py
+++ b/tempest/openstack/common/_i18n.py
@@ -17,14 +17,14 @@
 """
 
 try:
-    import oslo.i18n
+    import oslo_i18n
 
     # NOTE(dhellmann): This reference to o-s-l-o will be replaced by the
     # application name when this module is synced into the separate
     # repository. It is OK to have more than one translation function
     # using the same domain, since there will still only be one message
     # catalog.
-    _translators = oslo.i18n.TranslatorFactory(domain='tempest')
+    _translators = oslo_i18n.TranslatorFactory(domain='tempest')
 
     # The primary translation function using the well-known name "_"
     _ = _translators.primary
@@ -40,6 +40,6 @@
     _LC = _translators.log_critical
 except ImportError:
     # NOTE(dims): Support for cases where a project wants to use
-    # code from tempest-incubator, but is not ready to be internationalized
+    # code from oslo-incubator, but is not ready to be internationalized
     # (like tempest)
     _ = _LI = _LW = _LE = _LC = lambda x: x
diff --git a/tempest/openstack/common/excutils.py b/tempest/openstack/common/excutils.py
deleted file mode 100644
index dc365da..0000000
--- a/tempest/openstack/common/excutils.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# Copyright 2012, Red Hat, Inc.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-Exception related utilities.
-"""
-
-import logging
-import sys
-import time
-import traceback
-
-import six
-
-from tempest.openstack.common.gettextutils import _
-
-
-class save_and_reraise_exception(object):
-    """Save current exception, run some code and then re-raise.
-
-    In some cases the exception context can be cleared, resulting in None
-    being attempted to be re-raised after an exception handler is run. This
-    can happen when eventlet switches greenthreads or when running an
-    exception handler, code raises and catches an exception. In both
-    cases the exception context will be cleared.
-
-    To work around this, we save the exception state, run handler code, and
-    then re-raise the original exception. If another exception occurs, the
-    saved exception is logged and the new exception is re-raised.
-
-    In some cases the caller may not want to re-raise the exception, and
-    for those circumstances this context provides a reraise flag that
-    can be used to suppress the exception.  For example::
-
-      except Exception:
-          with save_and_reraise_exception() as ctxt:
-              decide_if_need_reraise()
-              if not should_be_reraised:
-                  ctxt.reraise = False
-    """
-    def __init__(self):
-        self.reraise = True
-
-    def __enter__(self):
-        self.type_, self.value, self.tb, = sys.exc_info()
-        return self
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        if exc_type is not None:
-            logging.error(_('Original exception being dropped: %s'),
-                          traceback.format_exception(self.type_,
-                                                     self.value,
-                                                     self.tb))
-            return False
-        if self.reraise:
-            six.reraise(self.type_, self.value, self.tb)
-
-
-def forever_retry_uncaught_exceptions(infunc):
-    def inner_func(*args, **kwargs):
-        last_log_time = 0
-        last_exc_message = None
-        exc_count = 0
-        while True:
-            try:
-                return infunc(*args, **kwargs)
-            except Exception as exc:
-                this_exc_message = six.u(str(exc))
-                if this_exc_message == last_exc_message:
-                    exc_count += 1
-                else:
-                    exc_count = 1
-                # Do not log any more frequently than once a minute unless
-                # the exception message changes
-                cur_time = int(time.time())
-                if (cur_time - last_log_time > 60 or
-                        this_exc_message != last_exc_message):
-                    logging.exception(
-                        _('Unexpected exception occurred %d time(s)... '
-                          'retrying.') % exc_count)
-                    last_log_time = cur_time
-                    last_exc_message = this_exc_message
-                    exc_count = 0
-                # This should be a very rare event. In case it isn't, do
-                # a sleep.
-                time.sleep(1)
-    return inner_func
diff --git a/tempest/openstack/common/fileutils.py b/tempest/openstack/common/fileutils.py
deleted file mode 100644
index 1845ed2..0000000
--- a/tempest/openstack/common/fileutils.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-
-import contextlib
-import errno
-import os
-import tempfile
-
-from tempest.openstack.common import excutils
-from tempest.openstack.common.gettextutils import _
-from tempest.openstack.common import log as logging
-
-LOG = logging.getLogger(__name__)
-
-_FILE_CACHE = {}
-
-
-def ensure_tree(path):
-    """Create a directory (and any ancestor directories required)
-
-    :param path: Directory to create
-    """
-    try:
-        os.makedirs(path)
-    except OSError as exc:
-        if exc.errno == errno.EEXIST:
-            if not os.path.isdir(path):
-                raise
-        else:
-            raise
-
-
-def read_cached_file(filename, force_reload=False):
-    """Read from a file if it has been modified.
-
-    :param force_reload: Whether to reload the file.
-    :returns: A tuple with a boolean specifying if the data is fresh
-              or not.
-    """
-    global _FILE_CACHE
-
-    if force_reload and filename in _FILE_CACHE:
-        del _FILE_CACHE[filename]
-
-    reloaded = False
-    mtime = os.path.getmtime(filename)
-    cache_info = _FILE_CACHE.setdefault(filename, {})
-
-    if not cache_info or mtime > cache_info.get('mtime', 0):
-        LOG.debug(_("Reloading cached file %s") % filename)
-        with open(filename) as fap:
-            cache_info['data'] = fap.read()
-        cache_info['mtime'] = mtime
-        reloaded = True
-    return (reloaded, cache_info['data'])
-
-
-def delete_if_exists(path, remove=os.unlink):
-    """Delete a file, but ignore file not found error.
-
-    :param path: File to delete
-    :param remove: Optional function to remove passed path
-    """
-
-    try:
-        remove(path)
-    except OSError as e:
-        if e.errno != errno.ENOENT:
-            raise
-
-
-@contextlib.contextmanager
-def remove_path_on_error(path, remove=delete_if_exists):
-    """Protect code that wants to operate on PATH atomically.
-    Any exception will cause PATH to be removed.
-
-    :param path: File to work with
-    :param remove: Optional function to remove passed path
-    """
-
-    try:
-        yield
-    except Exception:
-        with excutils.save_and_reraise_exception():
-            remove(path)
-
-
-def file_open(*args, **kwargs):
-    """Open file
-
-    see built-in file() documentation for more details
-
-    Note: The reason this is kept in a separate module is to easily
-    be able to provide a stub module that doesn't alter system
-    state at all (for unit tests)
-    """
-    return file(*args, **kwargs)
-
-
-def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
-    """Create temporary file or use existing file.
-
-    This util is needed for creating temporary file with
-    specified content, suffix and prefix. If path is not None,
-    it will be used for writing content. If the path doesn't
-    exist it'll be created.
-
-    :param content: content for temporary file.
-    :param path: same as parameter 'dir' for mkstemp
-    :param suffix: same as parameter 'suffix' for mkstemp
-    :param prefix: same as parameter 'prefix' for mkstemp
-
-    For example: it can be used in database tests for creating
-    configuration files.
-    """
-    if path:
-        ensure_tree(path)
-
-    (fd, path) = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
-    try:
-        os.write(fd, content)
-    finally:
-        os.close(fd)
-    return path
diff --git a/tempest/openstack/common/fixture/__init__.py b/tempest/openstack/common/fixture/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/openstack/common/fixture/__init__.py
+++ /dev/null
diff --git a/tempest/openstack/common/fixture/config.py b/tempest/openstack/common/fixture/config.py
deleted file mode 100644
index 0bf90ff..0000000
--- a/tempest/openstack/common/fixture/config.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#
-# Copyright 2013 Mirantis, Inc.
-# Copyright 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-import fixtures
-from oslo.config import cfg
-import six
-
-
-class Config(fixtures.Fixture):
-    """Override some configuration values.
-
-    The keyword arguments are the names of configuration options to
-    override and their values.
-
-    If a group argument is supplied, the overrides are applied to
-    the specified configuration option group.
-
-    All overrides are automatically cleared at the end of the current
-    test by the reset() method, which is registered by addCleanup().
-    """
-
-    def __init__(self, conf=cfg.CONF):
-        self.conf = conf
-
-    def setUp(self):
-        super(Config, self).setUp()
-        self.addCleanup(self.conf.reset)
-
-    def config(self, **kw):
-        group = kw.pop('group', None)
-        for k, v in six.iteritems(kw):
-            self.conf.set_override(k, v, group)
diff --git a/tempest/openstack/common/fixture/lockutils.py b/tempest/openstack/common/fixture/lockutils.py
deleted file mode 100644
index 5936687..0000000
--- a/tempest/openstack/common/fixture/lockutils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import fixtures
-
-from tempest.openstack.common import lockutils
-
-
-class LockFixture(fixtures.Fixture):
-    """External locking fixture.
-
-    This fixture is basically an alternative to the synchronized decorator with
-    the external flag so that tearDowns and addCleanups will be included in
-    the lock context for locking between tests. The fixture is recommended to
-    be the first line in a test method, like so::
-
-        def test_method(self):
-            self.useFixture(LockFixture)
-                ...
-
-    or the first line in setUp if all the test methods in the class are
-    required to be serialized. Something like::
-
-        class TestCase(testtools.testcase):
-            def setUp(self):
-                self.useFixture(LockFixture)
-                super(TestCase, self).setUp()
-                    ...
-
-    This is because addCleanups are put on a LIFO queue that gets run after the
-    test method exits. (either by completing or raising an exception)
-    """
-    def __init__(self, name, lock_file_prefix=None):
-        self.mgr = lockutils.lock(name, lock_file_prefix, True)
-
-    def setUp(self):
-        super(LockFixture, self).setUp()
-        self.addCleanup(self.mgr.__exit__, None, None, None)
-        self.mgr.__enter__()
diff --git a/tempest/openstack/common/fixture/mockpatch.py b/tempest/openstack/common/fixture/mockpatch.py
deleted file mode 100644
index d7dcc11..0000000
--- a/tempest/openstack/common/fixture/mockpatch.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2013 Hewlett-Packard Development Company, L.P.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import fixtures
-import mock
-
-
-class PatchObject(fixtures.Fixture):
-    """Deal with code around mock."""
-
-    def __init__(self, obj, attr, new=mock.DEFAULT, **kwargs):
-        self.obj = obj
-        self.attr = attr
-        self.kwargs = kwargs
-        self.new = new
-
-    def setUp(self):
-        super(PatchObject, self).setUp()
-        _p = mock.patch.object(self.obj, self.attr, self.new, **self.kwargs)
-        self.mock = _p.start()
-        self.addCleanup(_p.stop)
-
-
-class Patch(fixtures.Fixture):
-
-    """Deal with code around mock.patch."""
-
-    def __init__(self, obj, **kwargs):
-        self.obj = obj
-        self.kwargs = kwargs
-
-    def setUp(self):
-        super(Patch, self).setUp()
-        _p = mock.patch(self.obj, **self.kwargs)
-        self.mock = _p.start()
-        self.addCleanup(_p.stop)
diff --git a/tempest/openstack/common/fixture/moxstubout.py b/tempest/openstack/common/fixture/moxstubout.py
deleted file mode 100644
index e8c031f..0000000
--- a/tempest/openstack/common/fixture/moxstubout.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2013 Hewlett-Packard Development Company, L.P.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import fixtures
-import mox
-
-
-class MoxStubout(fixtures.Fixture):
-    """Deal with code around mox and stubout as a fixture."""
-
-    def setUp(self):
-        super(MoxStubout, self).setUp()
-        # emulate some of the mox stuff, we can't use the metaclass
-        # because it screws with our generators
-        self.mox = mox.Mox()
-        self.stubs = self.mox.stubs
-        self.addCleanup(self.mox.UnsetStubs)
-        self.addCleanup(self.mox.VerifyAll)
diff --git a/tempest/openstack/common/gettextutils.py b/tempest/openstack/common/gettextutils.py
deleted file mode 100644
index 872d58e..0000000
--- a/tempest/openstack/common/gettextutils.py
+++ /dev/null
@@ -1,479 +0,0 @@
-# Copyright 2012 Red Hat, Inc.
-# Copyright 2013 IBM Corp.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-gettext for openstack-common modules.
-
-Usual usage in an openstack.common module:
-
-    from tempest.openstack.common.gettextutils import _
-"""
-
-import copy
-import gettext
-import locale
-from logging import handlers
-import os
-
-from babel import localedata
-import six
-
-_AVAILABLE_LANGUAGES = {}
-
-# FIXME(dhellmann): Remove this when moving to oslo.i18n.
-USE_LAZY = False
-
-
-class TranslatorFactory(object):
-    """Create translator functions
-    """
-
-    def __init__(self, domain, localedir=None):
-        """Establish a set of translation functions for the domain.
-
-        :param domain: Name of translation domain,
-                       specifying a message catalog.
-        :type domain: str
-        :param lazy: Delays translation until a message is emitted.
-                     Defaults to False.
-        :type lazy: Boolean
-        :param localedir: Directory with translation catalogs.
-        :type localedir: str
-        """
-        self.domain = domain
-        if localedir is None:
-            localedir = os.environ.get(domain.upper() + '_LOCALEDIR')
-        self.localedir = localedir
-
-    def _make_translation_func(self, domain=None):
-        """Return a new translation function ready for use.
-
-        Takes into account whether or not lazy translation is being
-        done.
-
-        The domain can be specified to override the default from the
-        factory, but the localedir from the factory is always used
-        because we assume the log-level translation catalogs are
-        installed in the same directory as the main application
-        catalog.
-
-        """
-        if domain is None:
-            domain = self.domain
-        t = gettext.translation(domain,
-                                localedir=self.localedir,
-                                fallback=True)
-        # Use the appropriate method of the translation object based
-        # on the python version.
-        m = t.gettext if six.PY3 else t.ugettext
-
-        def f(msg):
-            """oslo.i18n.gettextutils translation function."""
-            if USE_LAZY:
-                return Message(msg, domain=domain)
-            return m(msg)
-        return f
-
-    @property
-    def primary(self):
-        "The default translation function."
-        return self._make_translation_func()
-
-    def _make_log_translation_func(self, level):
-        return self._make_translation_func(self.domain + '-log-' + level)
-
-    @property
-    def log_info(self):
-        "Translate info-level log messages."
-        return self._make_log_translation_func('info')
-
-    @property
-    def log_warning(self):
-        "Translate warning-level log messages."
-        return self._make_log_translation_func('warning')
-
-    @property
-    def log_error(self):
-        "Translate error-level log messages."
-        return self._make_log_translation_func('error')
-
-    @property
-    def log_critical(self):
-        "Translate critical-level log messages."
-        return self._make_log_translation_func('critical')
-
-
-# NOTE(dhellmann): When this module moves out of the incubator into
-# oslo.i18n, these global variables can be moved to an integration
-# module within each application.
-
-# Create the global translation functions.
-_translators = TranslatorFactory('tempest')
-
-# The primary translation function using the well-known name "_"
-_ = _translators.primary
-
-# Translators for log levels.
-#
-# The abbreviated names are meant to reflect the usual use of a short
-# name like '_'. The "L" is for "log" and the other letter comes from
-# the level.
-_LI = _translators.log_info
-_LW = _translators.log_warning
-_LE = _translators.log_error
-_LC = _translators.log_critical
-
-# NOTE(dhellmann): End of globals that will move to the application's
-# integration module.
-
-
-def enable_lazy():
-    """Convenience function for configuring _() to use lazy gettext
-
-    Call this at the start of execution to enable the gettextutils._
-    function to use lazy gettext functionality. This is useful if
-    your project is importing _ directly instead of using the
-    gettextutils.install() way of importing the _ function.
-    """
-    global USE_LAZY
-    USE_LAZY = True
-
-
-def install(domain):
-    """Install a _() function using the given translation domain.
-
-    Given a translation domain, install a _() function using gettext's
-    install() function.
-
-    The main difference from gettext.install() is that we allow
-    overriding the default localedir (e.g. /usr/share/locale) using
-    a translation-domain-specific environment variable (e.g.
-    NOVA_LOCALEDIR).
-
-    Note that to enable lazy translation, enable_lazy must be
-    called.
-
-    :param domain: the translation domain
-    """
-    from six import moves
-    tf = TranslatorFactory(domain)
-    moves.builtins.__dict__['_'] = tf.primary
-
-
-class Message(six.text_type):
-    """A Message object is a unicode object that can be translated.
-
-    Translation of Message is done explicitly using the translate() method.
-    For all non-translation intents and purposes, a Message is simply unicode,
-    and can be treated as such.
-    """
-
-    def __new__(cls, msgid, msgtext=None, params=None,
-                domain='tempest', *args):
-        """Create a new Message object.
-
-        In order for translation to work gettext requires a message ID, this
-        msgid will be used as the base unicode text. It is also possible
-        for the msgid and the base unicode text to be different by passing
-        the msgtext parameter.
-        """
-        # If the base msgtext is not given, we use the default translation
-        # of the msgid (which is in English) just in case the system locale is
-        # not English, so that the base text will be in that locale by default.
-        if not msgtext:
-            msgtext = Message._translate_msgid(msgid, domain)
-        # We want to initialize the parent unicode with the actual object that
-        # would have been plain unicode if 'Message' was not enabled.
-        msg = super(Message, cls).__new__(cls, msgtext)
-        msg.msgid = msgid
-        msg.domain = domain
-        msg.params = params
-        return msg
-
-    def translate(self, desired_locale=None):
-        """Translate this message to the desired locale.
-
-        :param desired_locale: The desired locale to translate the message to,
-                               if no locale is provided the message will be
-                               translated to the system's default locale.
-
-        :returns: the translated message in unicode
-        """
-
-        translated_message = Message._translate_msgid(self.msgid,
-                                                      self.domain,
-                                                      desired_locale)
-        if self.params is None:
-            # No need for more translation
-            return translated_message
-
-        # This Message object may have been formatted with one or more
-        # Message objects as substitution arguments, given either as a single
-        # argument, part of a tuple, or as one or more values in a dictionary.
-        # When translating this Message we need to translate those Messages too
-        translated_params = _translate_args(self.params, desired_locale)
-
-        translated_message = translated_message % translated_params
-
-        return translated_message
-
-    @staticmethod
-    def _translate_msgid(msgid, domain, desired_locale=None):
-        if not desired_locale:
-            system_locale = locale.getdefaultlocale()
-            # If the system locale is not available to the runtime use English
-            if not system_locale[0]:
-                desired_locale = 'en_US'
-            else:
-                desired_locale = system_locale[0]
-
-        locale_dir = os.environ.get(domain.upper() + '_LOCALEDIR')
-        lang = gettext.translation(domain,
-                                   localedir=locale_dir,
-                                   languages=[desired_locale],
-                                   fallback=True)
-        if six.PY3:
-            translator = lang.gettext
-        else:
-            translator = lang.ugettext
-
-        translated_message = translator(msgid)
-        return translated_message
-
-    def __mod__(self, other):
-        # When we mod a Message we want the actual operation to be performed
-        # by the parent class (i.e. unicode()), the only thing  we do here is
-        # save the original msgid and the parameters in case of a translation
-        params = self._sanitize_mod_params(other)
-        unicode_mod = super(Message, self).__mod__(params)
-        modded = Message(self.msgid,
-                         msgtext=unicode_mod,
-                         params=params,
-                         domain=self.domain)
-        return modded
-
-    def _sanitize_mod_params(self, other):
-        """Sanitize the object being modded with this Message.
-
-        - Add support for modding 'None' so translation supports it
-        - Trim the modded object, which can be a large dictionary, to only
-        those keys that would actually be used in a translation
-        - Snapshot the object being modded, in case the message is
-        translated, it will be used as it was when the Message was created
-        """
-        if other is None:
-            params = (other,)
-        elif isinstance(other, dict):
-            # Merge the dictionaries
-            # Copy each item in case one does not support deep copy.
-            params = {}
-            if isinstance(self.params, dict):
-                for key, val in self.params.items():
-                    params[key] = self._copy_param(val)
-            for key, val in other.items():
-                params[key] = self._copy_param(val)
-        else:
-            params = self._copy_param(other)
-        return params
-
-    def _copy_param(self, param):
-        try:
-            return copy.deepcopy(param)
-        except Exception:
-            # Fallback to casting to unicode this will handle the
-            # python code-like objects that can't be deep-copied
-            return six.text_type(param)
-
-    def __add__(self, other):
-        msg = _('Message objects do not support addition.')
-        raise TypeError(msg)
-
-    def __radd__(self, other):
-        return self.__add__(other)
-
-    if six.PY2:
-        def __str__(self):
-            # NOTE(luisg): Logging in python 2.6 tries to str() log records,
-            # and it expects specifically a UnicodeError in order to proceed.
-            msg = _('Message objects do not support str() because they may '
-                    'contain non-ascii characters. '
-                    'Please use unicode() or translate() instead.')
-            raise UnicodeError(msg)
-
-
-def get_available_languages(domain):
-    """Lists the available languages for the given translation domain.
-
-    :param domain: the domain to get languages for
-    """
-    if domain in _AVAILABLE_LANGUAGES:
-        return copy.copy(_AVAILABLE_LANGUAGES[domain])
-
-    localedir = '%s_LOCALEDIR' % domain.upper()
-    find = lambda x: gettext.find(domain,
-                                  localedir=os.environ.get(localedir),
-                                  languages=[x])
-
-    # NOTE(mrodden): en_US should always be available (and first in case
-    # order matters) since our in-line message strings are en_US
-    language_list = ['en_US']
-    # NOTE(luisg): Babel <1.0 used a function called list(), which was
-    # renamed to locale_identifiers() in >=1.0, the requirements master list
-    # requires >=0.9.6, uncapped, so defensively work with both. We can remove
-    # this check when the master list updates to >=1.0, and update all projects
-    list_identifiers = (getattr(localedata, 'list', None) or
-                        getattr(localedata, 'locale_identifiers'))
-    locale_identifiers = list_identifiers()
-
-    for i in locale_identifiers:
-        if find(i) is not None:
-            language_list.append(i)
-
-    # NOTE(luisg): Babel>=1.0,<1.3 has a bug where some OpenStack supported
-    # locales (e.g. 'zh_CN', and 'zh_TW') aren't supported even though they
-    # are perfectly legitimate locales:
-    #     https://github.com/mitsuhiko/babel/issues/37
-    # In Babel 1.3 they fixed the bug and they support these locales, but
-    # they are still not explicitly "listed" by locale_identifiers().
-    # That is  why we add the locales here explicitly if necessary so that
-    # they are listed as supported.
-    aliases = {'zh': 'zh_CN',
-               'zh_Hant_HK': 'zh_HK',
-               'zh_Hant': 'zh_TW',
-               'fil': 'tl_PH'}
-    for (locale_, alias) in six.iteritems(aliases):
-        if locale_ in language_list and alias not in language_list:
-            language_list.append(alias)
-
-    _AVAILABLE_LANGUAGES[domain] = language_list
-    return copy.copy(language_list)
-
-
-def translate(obj, desired_locale=None):
-    """Gets the translated unicode representation of the given object.
-
-    If the object is not translatable it is returned as-is.
-    If the locale is None the object is translated to the system locale.
-
-    :param obj: the object to translate
-    :param desired_locale: the locale to translate the message to, if None the
-                           default system locale will be used
-    :returns: the translated object in unicode, or the original object if
-              it could not be translated
-    """
-    message = obj
-    if not isinstance(message, Message):
-        # If the object to translate is not already translatable,
-        # let's first get its unicode representation
-        message = six.text_type(obj)
-    if isinstance(message, Message):
-        # Even after unicoding() we still need to check if we are
-        # running with translatable unicode before translating
-        return message.translate(desired_locale)
-    return obj
-
-
-def _translate_args(args, desired_locale=None):
-    """Translates all the translatable elements of the given arguments object.
-
-    This method is used for translating the translatable values in method
-    arguments which include values of tuples or dictionaries.
-    If the object is not a tuple or a dictionary the object itself is
-    translated if it is translatable.
-
-    If the locale is None the object is translated to the system locale.
-
-    :param args: the args to translate
-    :param desired_locale: the locale to translate the args to, if None the
-                           default system locale will be used
-    :returns: a new args object with the translated contents of the original
-    """
-    if isinstance(args, tuple):
-        return tuple(translate(v, desired_locale) for v in args)
-    if isinstance(args, dict):
-        translated_dict = {}
-        for (k, v) in six.iteritems(args):
-            translated_v = translate(v, desired_locale)
-            translated_dict[k] = translated_v
-        return translated_dict
-    return translate(args, desired_locale)
-
-
-class TranslationHandler(handlers.MemoryHandler):
-    """Handler that translates records before logging them.
-
-    The TranslationHandler takes a locale and a target logging.Handler object
-    to forward LogRecord objects to after translating them. This handler
-    depends on Message objects being logged, instead of regular strings.
-
-    The handler can be configured declaratively in the logging.conf as follows:
-
-        [handlers]
-        keys = translatedlog, translator
-
-        [handler_translatedlog]
-        class = handlers.WatchedFileHandler
-        args = ('/var/log/api-localized.log',)
-        formatter = context
-
-        [handler_translator]
-        class = openstack.common.log.TranslationHandler
-        target = translatedlog
-        args = ('zh_CN',)
-
-    If the specified locale is not available in the system, the handler will
-    log in the default locale.
-    """
-
-    def __init__(self, locale=None, target=None):
-        """Initialize a TranslationHandler
-
-        :param locale: locale to use for translating messages
-        :param target: logging.Handler object to forward
-                       LogRecord objects to after translation
-        """
-        # NOTE(luisg): In order to allow this handler to be a wrapper for
-        # other handlers, such as a FileHandler, and still be able to
-        # configure it using logging.conf, this handler has to extend
-        # MemoryHandler because only the MemoryHandlers' logging.conf
-        # parsing is implemented such that it accepts a target handler.
-        handlers.MemoryHandler.__init__(self, capacity=0, target=target)
-        self.locale = locale
-
-    def setFormatter(self, fmt):
-        self.target.setFormatter(fmt)
-
-    def emit(self, record):
-        # We save the message from the original record to restore it
-        # after translation, so other handlers are not affected by this
-        original_msg = record.msg
-        original_args = record.args
-
-        try:
-            self._translate_and_log_record(record)
-        finally:
-            record.msg = original_msg
-            record.args = original_args
-
-    def _translate_and_log_record(self, record):
-        record.msg = translate(record.msg, self.locale)
-
-        # In addition to translating the message, we also need to translate
-        # arguments that were passed to the log method that were not part
-        # of the main message, e.g. log.info(_('Some message %s'), this_one)
-        record.args = _translate_args(record.args, self.locale)
-
-        self.target.emit(record)
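
The module removed above also exposed ``translate()`` and ``TranslationHandler`` as public entry points. A minimal sketch (not part of this change) of wiring the handler programmatically instead of through ``logging.conf``; the log path and locale are illustrative::

    import logging
    import logging.handlers

    from tempest.openstack.common import gettextutils

    # Forward translated records to a regular file handler.
    target = logging.handlers.WatchedFileHandler('/var/log/api-localized.log')
    handler = gettextutils.TranslationHandler(locale='zh_CN', target=target)
    logging.getLogger('tempest').addHandler(handler)
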
diff --git a/tempest/openstack/common/importutils.py b/tempest/openstack/common/importutils.py
deleted file mode 100644
index d5dd22f..0000000
--- a/tempest/openstack/common/importutils.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-Import related utilities and helper functions.
-"""
-
-import sys
-import traceback
-
-
-def import_class(import_str):
-    """Returns a class from a string including module and class."""
-    mod_str, _sep, class_str = import_str.rpartition('.')
-    __import__(mod_str)
-    try:
-        return getattr(sys.modules[mod_str], class_str)
-    except AttributeError:
-        raise ImportError('Class %s cannot be found (%s)' %
-                          (class_str,
-                           traceback.format_exception(*sys.exc_info())))
-
-
-def import_object(import_str, *args, **kwargs):
-    """Import a class and return an instance of it."""
-    return import_class(import_str)(*args, **kwargs)
-
-
-def import_object_ns(name_space, import_str, *args, **kwargs):
-    """Tries to import object from default namespace.
-
-    Imports a class and returns an instance of it, first by trying
-    to find the class in a default namespace, then falling back to
-    a full path if it is not found in the default namespace.
-    """
-    import_value = "%s.%s" % (name_space, import_str)
-    try:
-        return import_class(import_value)(*args, **kwargs)
-    except ImportError:
-        return import_class(import_str)(*args, **kwargs)
-
-
-def import_module(import_str):
-    """Import a module."""
-    __import__(import_str)
-    return sys.modules[import_str]
-
-
-def import_versioned_module(version, submodule=None):
-    module = 'tempest.v%s' % version
-    if submodule:
-        module = '.'.join((module, submodule))
-    return import_module(module)
-
-
-def try_import(import_str, default=None):
-    """Try to import a module and if it fails return default."""
-    try:
-        return import_module(import_str)
-    except ImportError:
-        return default
diff --git a/tempest/openstack/common/jsonutils.py b/tempest/openstack/common/jsonutils.py
deleted file mode 100644
index cb83557..0000000
--- a/tempest/openstack/common/jsonutils.py
+++ /dev/null
@@ -1,190 +0,0 @@
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2011 Justin Santa Barbara
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-'''
-JSON related utilities.
-
-This module provides a few things:
-
-    1) A handy function for getting an object down to something that can be
-    JSON serialized.  See to_primitive().
-
-    2) Wrappers around loads() and dumps().  The dumps() wrapper will
-    automatically use to_primitive() for you if needed.
-
-    3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson
-    is available.
-'''
-
-
-import codecs
-import datetime
-import functools
-import inspect
-import itertools
-import sys
-
-if sys.version_info < (2, 7):
-    # On Python <= 2.6, json module is not C boosted, so try to use
-    # simplejson module if available
-    try:
-        import simplejson as json
-    except ImportError:
-        import json
-else:
-    import json
-
-import six
-import six.moves.xmlrpc_client as xmlrpclib
-
-from tempest.openstack.common import gettextutils
-from tempest.openstack.common import importutils
-from tempest.openstack.common import strutils
-from tempest.openstack.common import timeutils
-
-netaddr = importutils.try_import("netaddr")
-
-_nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod,
-                     inspect.isfunction, inspect.isgeneratorfunction,
-                     inspect.isgenerator, inspect.istraceback, inspect.isframe,
-                     inspect.iscode, inspect.isbuiltin, inspect.isroutine,
-                     inspect.isabstract]
-
-_simple_types = (six.string_types + six.integer_types
-                 + (type(None), bool, float))
-
-
-def to_primitive(value, convert_instances=False, convert_datetime=True,
-                 level=0, max_depth=3):
-    """Convert a complex object into primitives.
-
-    Handy for JSON serialization. We can optionally handle instances,
-    but since this is a recursive function, we could have cyclical
-    data structures.
-
-    To handle cyclical data structures we could track the actual objects
-    visited in a set, but not all objects are hashable. Instead we just
-    track the depth of the object inspections and don't go too deep.
-
-    Therefore, convert_instances=True is lossy ... be aware.
-
-    """
-    # handle obvious types first - order of basic types determined by running
-    # full tests on nova project, resulting in the following counts:
-    # 572754 <type 'NoneType'>
-    # 460353 <type 'int'>
-    # 379632 <type 'unicode'>
-    # 274610 <type 'str'>
-    # 199918 <type 'dict'>
-    # 114200 <type 'datetime.datetime'>
-    #  51817 <type 'bool'>
-    #  26164 <type 'list'>
-    #   6491 <type 'float'>
-    #    283 <type 'tuple'>
-    #     19 <type 'long'>
-    if isinstance(value, _simple_types):
-        return value
-
-    if isinstance(value, datetime.datetime):
-        if convert_datetime:
-            return timeutils.strtime(value)
-        else:
-            return value
-
-    # value of itertools.count doesn't get caught by nasty_type_tests
-    # and results in infinite loop when list(value) is called.
-    if type(value) == itertools.count:
-        return six.text_type(value)
-
-    # FIXME(vish): Workaround for LP bug 852095. Without this workaround,
-    #              tests that raise an exception in a mocked method that
-    #              has a @wrap_exception with a notifier will fail. If
-    #              we up the dependency to 0.5.4 (when it is released) we
-    #              can remove this workaround.
-    if getattr(value, '__module__', None) == 'mox':
-        return 'mock'
-
-    if level > max_depth:
-        return '?'
-
-    # The try block may not be necessary after the class check above,
-    # but just in case ...
-    try:
-        recursive = functools.partial(to_primitive,
-                                      convert_instances=convert_instances,
-                                      convert_datetime=convert_datetime,
-                                      level=level,
-                                      max_depth=max_depth)
-        if isinstance(value, dict):
-            return dict((k, recursive(v)) for k, v in six.iteritems(value))
-        elif isinstance(value, (list, tuple)):
-            return [recursive(lv) for lv in value]
-
-        # It's not clear why xmlrpclib created their own DateTime type, but
-        # for our purposes, make it a datetime type which is explicitly
-        # handled
-        if isinstance(value, xmlrpclib.DateTime):
-            value = datetime.datetime(*tuple(value.timetuple())[:6])
-
-        if convert_datetime and isinstance(value, datetime.datetime):
-            return timeutils.strtime(value)
-        elif isinstance(value, gettextutils.Message):
-            return value.data
-        elif hasattr(value, 'iteritems'):
-            return recursive(dict(value.iteritems()), level=level + 1)
-        elif hasattr(value, '__iter__'):
-            return recursive(list(value))
-        elif convert_instances and hasattr(value, '__dict__'):
-            # Likely an instance of something. Watch for cycles.
-            # Ignore class member vars.
-            return recursive(value.__dict__, level=level + 1)
-        elif netaddr and isinstance(value, netaddr.IPAddress):
-            return six.text_type(value)
-        else:
-            if any(test(value) for test in _nasty_type_tests):
-                return six.text_type(value)
-            return value
-    except TypeError:
-        # Class objects are tricky since they may define something like
-        # __iter__ defined but it isn't callable as list().
-        return six.text_type(value)
-
-
-def dumps(value, default=to_primitive, **kwargs):
-    return json.dumps(value, default=default, **kwargs)
-
-
-def dump(obj, fp, *args, **kwargs):
-    return json.dump(obj, fp, *args, **kwargs)
-
-
-def loads(s, encoding='utf-8', **kwargs):
-    return json.loads(strutils.safe_decode(s, encoding), **kwargs)
-
-
-def load(fp, encoding='utf-8', **kwargs):
-    return json.load(codecs.getreader(encoding)(fp), **kwargs)
-
-
-try:
-    import anyjson
-except ImportError:
-    pass
-else:
-    anyjson._modules.append((__name__, 'dumps', TypeError,
-                                       'loads', ValueError, 'load'))
-    anyjson.force_implementation(__name__)
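
A short usage sketch for the wrappers removed above; the payload is illustrative. ``dumps()`` falls back to ``to_primitive()`` for values the stdlib encoder cannot handle, such as datetimes::

    import datetime

    from tempest.openstack.common import jsonutils

    payload = {'created': datetime.datetime(2014, 1, 1, 12, 0, 0),
               'tags': ('a', 'b')}
    text = jsonutils.dumps(payload)   # datetime converted via to_primitive()
    data = jsonutils.loads(text)
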
diff --git a/tempest/openstack/common/local.py b/tempest/openstack/common/local.py
deleted file mode 100644
index 0819d5b..0000000
--- a/tempest/openstack/common/local.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""Local storage of variables using weak references"""
-
-import threading
-import weakref
-
-
-class WeakLocal(threading.local):
-    def __getattribute__(self, attr):
-        rval = super(WeakLocal, self).__getattribute__(attr)
-        if rval:
-            # NOTE(mikal): this bit is confusing. What is stored is a weak
-            # reference, not the value itself. We therefore need to lookup
-            # the weak reference and return the inner value here.
-            rval = rval()
-        return rval
-
-    def __setattr__(self, attr, value):
-        value = weakref.ref(value)
-        return super(WeakLocal, self).__setattr__(attr, value)
-
-
-# NOTE(mikal): the name "store" should be deprecated in the future
-store = WeakLocal()
-
-# A "weak" store uses weak references and allows an object to fall out of scope
-# when it falls out of scope in the code that uses the thread local storage. A
-# "strong" store will hold a reference to the object so that it never falls out
-# of scope.
-weak_store = WeakLocal()
-strong_store = threading.local()
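
A minimal sketch of the weak versus strong stores defined above; ``RequestContext`` is only a stand-in class for illustration::

    from tempest.openstack.common import local

    class RequestContext(object):
        """Placeholder; any weak-referenceable object works here."""

    ctx = RequestContext()
    local.strong_store.context = ctx   # held for the lifetime of the thread
    local.weak_store.context = ctx     # held only while ctx stays referenced
    assert local.weak_store.context is ctx
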
diff --git a/tempest/openstack/common/lockutils.py b/tempest/openstack/common/lockutils.py
deleted file mode 100644
index 53cada1..0000000
--- a/tempest/openstack/common/lockutils.py
+++ /dev/null
@@ -1,303 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-
-import contextlib
-import errno
-import functools
-import os
-import shutil
-import subprocess
-import sys
-import tempfile
-import threading
-import time
-import weakref
-
-from oslo.config import cfg
-
-from tempest.openstack.common import fileutils
-from tempest.openstack.common.gettextutils import _
-from tempest.openstack.common import local
-from tempest.openstack.common import log as logging
-
-
-LOG = logging.getLogger(__name__)
-
-
-util_opts = [
-    cfg.BoolOpt('disable_process_locking', default=False,
-                help='Whether to disable inter-process locks'),
-    cfg.StrOpt('lock_path',
-               default=os.environ.get("TEMPEST_LOCK_PATH"),
-               help=('Directory to use for lock files.'))
-]
-
-
-CONF = cfg.CONF
-CONF.register_opts(util_opts)
-
-
-def set_defaults(lock_path):
-    cfg.set_defaults(util_opts, lock_path=lock_path)
-
-
-class _InterProcessLock(object):
-    """Lock implementation which allows multiple locks, working around
-    issues like bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and does
-    not require any cleanup. Since the lock is always held on a file
-    descriptor rather than outside of the process, the lock gets dropped
-    automatically if the process crashes, even if __exit__ is not executed.
-
-    There are no guarantees regarding usage by multiple green threads in a
-    single process here. This lock works only between processes. Exclusive
-    access between local threads should be achieved using the semaphores
-    in the @synchronized decorator.
-
-    Note these locks are released when the descriptor is closed, so it's not
-    safe to close the file descriptor while another green thread holds the
-    lock. Just opening and closing the lock file can break synchronisation,
-    so lock files must be accessed only using this abstraction.
-    """
-
-    def __init__(self, name):
-        self.lockfile = None
-        self.fname = name
-
-    def __enter__(self):
-        self.lockfile = open(self.fname, 'w')
-
-        while True:
-            try:
-                # Using non-blocking locks since green threads are not
-                # patched to deal with blocking locking calls.
-                # Also upon reading the MSDN docs for locking(), it seems
-                # to have a laughable 10 attempts "blocking" mechanism.
-                self.trylock()
-                return self
-            except IOError as e:
-                if e.errno in (errno.EACCES, errno.EAGAIN):
-                    # external locks synchronise things like iptables
-                    # updates - give it some time to prevent busy spinning
-                    time.sleep(0.01)
-                else:
-                    raise
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        try:
-            self.unlock()
-            self.lockfile.close()
-        except IOError:
-            LOG.exception(_("Could not release the acquired lock `%s`"),
-                          self.fname)
-
-    def trylock(self):
-        raise NotImplementedError()
-
-    def unlock(self):
-        raise NotImplementedError()
-
-
-class _WindowsLock(_InterProcessLock):
-    def trylock(self):
-        msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1)
-
-    def unlock(self):
-        msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1)
-
-
-class _PosixLock(_InterProcessLock):
-    def trylock(self):
-        fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
-
-    def unlock(self):
-        fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
-
-
-if os.name == 'nt':
-    import msvcrt
-    InterProcessLock = _WindowsLock
-else:
-    import fcntl
-    InterProcessLock = _PosixLock
-
-_semaphores = weakref.WeakValueDictionary()
-_semaphores_lock = threading.Lock()
-
-
-@contextlib.contextmanager
-def lock(name, lock_file_prefix=None, external=False, lock_path=None):
-    """Context based lock
-
-    This function yields a `threading.Semaphore` instance (if we don't use
-    eventlet.monkey_patch(), else `semaphore.Semaphore`) unless external is
-    True, in which case, it'll yield an InterProcessLock instance.
-
-    :param lock_file_prefix: The lock_file_prefix argument is used to provide
-      lock files on disk with a meaningful prefix.
-
-    :param external: The external keyword argument denotes whether this lock
-      should work across multiple processes. This means that if two different
-      workers both run a method decorated with @synchronized('mylock',
-      external=True), only one of them will execute at a time.
-
-    :param lock_path: The lock_path keyword argument is used to specify a
-      special location for external lock files to live. If nothing is set, then
-      CONF.lock_path is used as a default.
-    """
-    with _semaphores_lock:
-        try:
-            sem = _semaphores[name]
-        except KeyError:
-            sem = threading.Semaphore()
-            _semaphores[name] = sem
-
-    with sem:
-        LOG.debug(_('Got semaphore "%(lock)s"'), {'lock': name})
-
-        # NOTE(mikal): I know this looks odd
-        if not hasattr(local.strong_store, 'locks_held'):
-            local.strong_store.locks_held = []
-        local.strong_store.locks_held.append(name)
-
-        try:
-            if external and not CONF.disable_process_locking:
-                LOG.debug(_('Attempting to grab file lock "%(lock)s"'),
-                          {'lock': name})
-
-                # We need a copy of lock_path because it is non-local
-                local_lock_path = lock_path or CONF.lock_path
-                if not local_lock_path:
-                    raise cfg.RequiredOptError('lock_path')
-
-                if not os.path.exists(local_lock_path):
-                    fileutils.ensure_tree(local_lock_path)
-                    LOG.info(_('Created lock path: %s'), local_lock_path)
-
-                def add_prefix(name, prefix):
-                    if not prefix:
-                        return name
-                    sep = '' if prefix.endswith('-') else '-'
-                    return '%s%s%s' % (prefix, sep, name)
-
-                # NOTE(mikal): the lock name cannot contain directory
-                # separators
-                lock_file_name = add_prefix(name.replace(os.sep, '_'),
-                                            lock_file_prefix)
-
-                lock_file_path = os.path.join(local_lock_path, lock_file_name)
-
-                try:
-                    lock = InterProcessLock(lock_file_path)
-                    with lock as lock:
-                        LOG.debug(_('Got file lock "%(lock)s" at %(path)s'),
-                                  {'lock': name, 'path': lock_file_path})
-                        yield lock
-                finally:
-                    LOG.debug(_('Released file lock "%(lock)s" at %(path)s'),
-                              {'lock': name, 'path': lock_file_path})
-            else:
-                yield sem
-
-        finally:
-            local.strong_store.locks_held.remove(name)
-
-
-def synchronized(name, lock_file_prefix=None, external=False, lock_path=None):
-    """Synchronization decorator.
-
-    Decorating a method like so::
-
-        @synchronized('mylock')
-        def foo(self, *args):
-           ...
-
-    ensures that only one thread will execute the foo method at a time.
-
-    Different methods can share the same lock::
-
-        @synchronized('mylock')
-        def foo(self, *args):
-           ...
-
-        @synchronized('mylock')
-        def bar(self, *args):
-           ...
-
-    This way only one of either foo or bar can be executing at a time.
-    """
-
-    def wrap(f):
-        @functools.wraps(f)
-        def inner(*args, **kwargs):
-            try:
-                with lock(name, lock_file_prefix, external, lock_path):
-                    LOG.debug(_('Got semaphore / lock "%(function)s"'),
-                              {'function': f.__name__})
-                    return f(*args, **kwargs)
-            finally:
-                LOG.debug(_('Semaphore / lock released "%(function)s"'),
-                          {'function': f.__name__})
-        return inner
-    return wrap
-
-
-def synchronized_with_prefix(lock_file_prefix):
-    """Partial object generator for the synchronization decorator.
-
-    Redefine @synchronized in each project like so::
-
-        (in nova/utils.py)
-        from nova.openstack.common import lockutils
-
-        synchronized = lockutils.synchronized_with_prefix('nova-')
-
-
-        (in nova/foo.py)
-        from nova import utils
-
-        @utils.synchronized('mylock')
-        def bar(self, *args):
-           ...
-
-    The lock_file_prefix argument is used to provide lock files on disk with a
-    meaningful prefix.
-    """
-
-    return functools.partial(synchronized, lock_file_prefix=lock_file_prefix)
-
-
-def main(argv):
-    """Create a dir for locks and pass it to command from arguments
-
-    If you run this:
-    python -m openstack.common.lockutils python setup.py testr <etc>
-
-    a temporary directory will be created for all your locks and passed to all
-    your tests in an environment variable. The temporary dir will be deleted
-    afterwards and the return value will be preserved.
-    """
-
-    lock_dir = tempfile.mkdtemp()
-    os.environ["TEMPEST_LOCK_PATH"] = lock_dir
-    try:
-        ret_val = subprocess.call(argv[1:])
-    finally:
-        shutil.rmtree(lock_dir, ignore_errors=True)
-    return ret_val
-
-
-if __name__ == '__main__':
-    sys.exit(main(sys.argv))
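
The decorator form of the removed module is shown in its docstrings above; a brief sketch of the context-manager form, with an illustrative lock name and path::

    from tempest.openstack.common import lockutils

    # In-process critical section guarded by a named semaphore.
    with lockutils.lock('image-cache'):
        pass  # work that must not run concurrently within this process

    # Cross-process variant backed by a file lock under lock_path.
    with lockutils.lock('image-cache', lock_file_prefix='tempest-',
                        external=True, lock_path='/tmp/tempest-locks'):
        pass  # work that must not run concurrently across processes
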
diff --git a/tempest/openstack/common/log.py b/tempest/openstack/common/log.py
deleted file mode 100644
index 26cd6ad..0000000
--- a/tempest/openstack/common/log.py
+++ /dev/null
@@ -1,710 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""OpenStack logging handler.
-
-This module adds to logging functionality by adding the option to specify
-a context object when calling the various log methods.  If the context object
-is not specified, default formatting is used. Additionally, an instance uuid
-may be passed as part of the log message, which is intended to make it easier
-for admins to find messages related to a specific instance.
-
-It also allows setting of formatting information through conf.
-
-"""
-
-import inspect
-import itertools
-import logging
-import logging.config
-import logging.handlers
-import os
-import socket
-import sys
-import traceback
-
-from oslo.config import cfg
-from oslo.serialization import jsonutils
-from oslo.utils import importutils
-import six
-from six import moves
-
-_PY26 = sys.version_info[0:2] == (2, 6)
-
-from tempest.openstack.common._i18n import _
-from tempest.openstack.common import local
-
-
-_DEFAULT_LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
-
-
-common_cli_opts = [
-    cfg.BoolOpt('debug',
-                short='d',
-                default=False,
-                help='Print debugging output (set logging level to '
-                     'DEBUG instead of default WARNING level).'),
-    cfg.BoolOpt('verbose',
-                short='v',
-                default=False,
-                help='Print more verbose output (set logging level to '
-                     'INFO instead of default WARNING level).'),
-]
-
-logging_cli_opts = [
-    cfg.StrOpt('log-config-append',
-               metavar='PATH',
-               deprecated_name='log-config',
-               help='The name of a logging configuration file. This file '
-                    'is appended to any existing logging configuration '
-                    'files. For details about logging configuration files, '
-                    'see the Python logging module documentation.'),
-    cfg.StrOpt('log-format',
-               metavar='FORMAT',
-               help='DEPRECATED. '
-                    'A logging.Formatter log message format string which may '
-                    'use any of the available logging.LogRecord attributes. '
-                    'This option is deprecated.  Please use '
-                    'logging_context_format_string and '
-                    'logging_default_format_string instead.'),
-    cfg.StrOpt('log-date-format',
-               default=_DEFAULT_LOG_DATE_FORMAT,
-               metavar='DATE_FORMAT',
-               help='Format string for %%(asctime)s in log records. '
-                    'Default: %(default)s .'),
-    cfg.StrOpt('log-file',
-               metavar='PATH',
-               deprecated_name='logfile',
-               help='(Optional) Name of log file to output to. '
-                    'If no default is set, logging will go to stdout.'),
-    cfg.StrOpt('log-dir',
-               deprecated_name='logdir',
-               help='(Optional) The base directory used for relative '
-                    '--log-file paths.'),
-    cfg.BoolOpt('use-syslog',
-                default=False,
-                help='Use syslog for logging. '
-                     'Existing syslog format is DEPRECATED during I, '
-                     'and will change in J to honor RFC5424.'),
-    cfg.BoolOpt('use-syslog-rfc-format',
-                # TODO(bogdando) remove or use True after existing
-                #    syslog format deprecation in J
-                default=False,
-                help='(Optional) Enables or disables syslog rfc5424 format '
-                     'for logging. If enabled, prefixes the MSG part of the '
-                     'syslog message with APP-NAME (RFC5424). The '
-                     'format without the APP-NAME is deprecated in I, '
-                     'and will be removed in J.'),
-    cfg.StrOpt('syslog-log-facility',
-               default='LOG_USER',
-               help='Syslog facility to receive log lines.')
-]
-
-generic_log_opts = [
-    cfg.BoolOpt('use_stderr',
-                default=True,
-                help='Log output to standard error.')
-]
-
-DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
-                      'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
-                      'oslo.messaging=INFO', 'iso8601=WARN',
-                      'requests.packages.urllib3.connectionpool=WARN',
-                      'urllib3.connectionpool=WARN', 'websocket=WARN',
-                      "keystonemiddleware=WARN", "routes.middleware=WARN",
-                      "stevedore=WARN"]
-
-log_opts = [
-    cfg.StrOpt('logging_context_format_string',
-               default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
-                       '%(name)s [%(request_id)s %(user_identity)s] '
-                       '%(instance)s%(message)s',
-               help='Format string to use for log messages with context.'),
-    cfg.StrOpt('logging_default_format_string',
-               default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
-                       '%(name)s [-] %(instance)s%(message)s',
-               help='Format string to use for log messages without context.'),
-    cfg.StrOpt('logging_debug_format_suffix',
-               default='%(funcName)s %(pathname)s:%(lineno)d',
-               help='Data to append to log format when level is DEBUG.'),
-    cfg.StrOpt('logging_exception_prefix',
-               default='%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s '
-               '%(instance)s',
-               help='Prefix each line of exception output with this format.'),
-    cfg.ListOpt('default_log_levels',
-                default=DEFAULT_LOG_LEVELS,
-                help='List of logger=LEVEL pairs.'),
-    cfg.BoolOpt('publish_errors',
-                default=False,
-                help='Enables or disables publication of error events.'),
-    cfg.BoolOpt('fatal_deprecations',
-                default=False,
-                help='Enables or disables fatal status of deprecations.'),
-
-    # NOTE(mikal): there are two options here because sometimes we are handed
-    # a full instance (and could include more information), and other times we
-    # are just handed a UUID for the instance.
-    cfg.StrOpt('instance_format',
-               default='[instance: %(uuid)s] ',
-               help='The format for an instance that is passed with the log '
-                    'message.'),
-    cfg.StrOpt('instance_uuid_format',
-               default='[instance: %(uuid)s] ',
-               help='The format for an instance UUID that is passed with the '
-                    'log message.'),
-]
-
-CONF = cfg.CONF
-CONF.register_cli_opts(common_cli_opts)
-CONF.register_cli_opts(logging_cli_opts)
-CONF.register_opts(generic_log_opts)
-CONF.register_opts(log_opts)
-
-# our new audit level
-# NOTE(jkoelker) Since we synthesized an audit level, make the logging
-#                module aware of it so it acts like other levels.
-logging.AUDIT = logging.INFO + 1
-logging.addLevelName(logging.AUDIT, 'AUDIT')
-
-
-try:
-    NullHandler = logging.NullHandler
-except AttributeError:  # NOTE(jkoelker) NullHandler added in Python 2.7
-    class NullHandler(logging.Handler):
-        def handle(self, record):
-            pass
-
-        def emit(self, record):
-            pass
-
-        def createLock(self):
-            self.lock = None
-
-
-def _dictify_context(context):
-    if context is None:
-        return None
-    if not isinstance(context, dict) and getattr(context, 'to_dict', None):
-        context = context.to_dict()
-    return context
-
-
-def _get_binary_name():
-    return os.path.basename(inspect.stack()[-1][1])
-
-
-def _get_log_file_path(binary=None):
-    logfile = CONF.log_file
-    logdir = CONF.log_dir
-
-    if logfile and not logdir:
-        return logfile
-
-    if logfile and logdir:
-        return os.path.join(logdir, logfile)
-
-    if logdir:
-        binary = binary or _get_binary_name()
-        return '%s.log' % (os.path.join(logdir, binary),)
-
-    return None
-
-
-class BaseLoggerAdapter(logging.LoggerAdapter):
-
-    def audit(self, msg, *args, **kwargs):
-        self.log(logging.AUDIT, msg, *args, **kwargs)
-
-    def isEnabledFor(self, level):
-        if _PY26:
-            # This method was added in Python 2.7; replicate its logic here
-            # so that Python 2.6 has the same capability.
-            return self.logger.isEnabledFor(level)
-        else:
-            return super(BaseLoggerAdapter, self).isEnabledFor(level)
-
-
-class LazyAdapter(BaseLoggerAdapter):
-    def __init__(self, name='unknown', version='unknown'):
-        self._logger = None
-        self.extra = {}
-        self.name = name
-        self.version = version
-
-    @property
-    def logger(self):
-        if not self._logger:
-            self._logger = getLogger(self.name, self.version)
-            if six.PY3:
-                # In Python 3, the code fails because the 'manager' attribute
-                # cannot be found when using a LoggerAdapter as the
-                # underlying logger. Work around this issue.
-                self._logger.manager = self._logger.logger.manager
-        return self._logger
-
-
-class ContextAdapter(BaseLoggerAdapter):
-    warn = logging.LoggerAdapter.warning
-
-    def __init__(self, logger, project_name, version_string):
-        self.logger = logger
-        self.project = project_name
-        self.version = version_string
-        self._deprecated_messages_sent = dict()
-
-    @property
-    def handlers(self):
-        return self.logger.handlers
-
-    def deprecated(self, msg, *args, **kwargs):
-        """Call this method when a deprecated feature is used.
-
-        If the system is configured for fatal deprecations then the message
-        is logged at the 'critical' level and :class:`DeprecatedConfig` will
-        be raised.
-
-        Otherwise, the message will be logged (once) at the 'warn' level.
-
-        :raises: :class:`DeprecatedConfig` if the system is configured for
-                 fatal deprecations.
-
-        """
-        stdmsg = _("Deprecated: %s") % msg
-        if CONF.fatal_deprecations:
-            self.critical(stdmsg, *args, **kwargs)
-            raise DeprecatedConfig(msg=stdmsg)
-
-        # Using a list because a tuple with dict can't be stored in a set.
-        sent_args = self._deprecated_messages_sent.setdefault(msg, list())
-
-        if args in sent_args:
-            # Already logged this message, so don't log it again.
-            return
-
-        sent_args.append(args)
-        self.warn(stdmsg, *args, **kwargs)
-
-    def process(self, msg, kwargs):
-        # NOTE(jecarey): If msg is not unicode, coerce it into unicode
-        #                before it can get to the python logging and
-        #                possibly cause string encoding trouble
-        if not isinstance(msg, six.text_type):
-            msg = six.text_type(msg)
-
-        if 'extra' not in kwargs:
-            kwargs['extra'] = {}
-        extra = kwargs['extra']
-
-        context = kwargs.pop('context', None)
-        if not context:
-            context = getattr(local.store, 'context', None)
-        if context:
-            extra.update(_dictify_context(context))
-
-        instance = kwargs.pop('instance', None)
-        instance_uuid = (extra.get('instance_uuid') or
-                         kwargs.pop('instance_uuid', None))
-        instance_extra = ''
-        if instance:
-            instance_extra = CONF.instance_format % instance
-        elif instance_uuid:
-            instance_extra = (CONF.instance_uuid_format
-                              % {'uuid': instance_uuid})
-        extra['instance'] = instance_extra
-
-        extra.setdefault('user_identity', kwargs.pop('user_identity', None))
-
-        extra['project'] = self.project
-        extra['version'] = self.version
-        extra['extra'] = extra.copy()
-        return msg, kwargs
-
-
-class JSONFormatter(logging.Formatter):
-    def __init__(self, fmt=None, datefmt=None):
-        # NOTE(jkoelker) we ignore the fmt argument, but it's still there
-        #                since logging.config.fileConfig passes it.
-        self.datefmt = datefmt
-
-    def formatException(self, ei, strip_newlines=True):
-        lines = traceback.format_exception(*ei)
-        if strip_newlines:
-            lines = [moves.filter(
-                lambda x: x,
-                line.rstrip().splitlines()) for line in lines]
-            lines = list(itertools.chain(*lines))
-        return lines
-
-    def format(self, record):
-        message = {'message': record.getMessage(),
-                   'asctime': self.formatTime(record, self.datefmt),
-                   'name': record.name,
-                   'msg': record.msg,
-                   'args': record.args,
-                   'levelname': record.levelname,
-                   'levelno': record.levelno,
-                   'pathname': record.pathname,
-                   'filename': record.filename,
-                   'module': record.module,
-                   'lineno': record.lineno,
-                   'funcname': record.funcName,
-                   'created': record.created,
-                   'msecs': record.msecs,
-                   'relative_created': record.relativeCreated,
-                   'thread': record.thread,
-                   'thread_name': record.threadName,
-                   'process_name': record.processName,
-                   'process': record.process,
-                   'traceback': None}
-
-        if hasattr(record, 'extra'):
-            message['extra'] = record.extra
-
-        if record.exc_info:
-            message['traceback'] = self.formatException(record.exc_info)
-
-        return jsonutils.dumps(message)
-
-
-def _create_logging_excepthook(product_name):
-    def logging_excepthook(exc_type, value, tb):
-        extra = {'exc_info': (exc_type, value, tb)}
-        getLogger(product_name).critical(
-            "".join(traceback.format_exception_only(exc_type, value)),
-            **extra)
-    return logging_excepthook
-
-
-class LogConfigError(Exception):
-
-    message = _('Error loading logging config %(log_config)s: %(err_msg)s')
-
-    def __init__(self, log_config, err_msg):
-        self.log_config = log_config
-        self.err_msg = err_msg
-
-    def __str__(self):
-        return self.message % dict(log_config=self.log_config,
-                                   err_msg=self.err_msg)
-
-
-def _load_log_config(log_config_append):
-    try:
-        logging.config.fileConfig(log_config_append,
-                                  disable_existing_loggers=False)
-    except (moves.configparser.Error, KeyError) as exc:
-        raise LogConfigError(log_config_append, six.text_type(exc))
-
-
-def setup(product_name, version='unknown'):
-    """Setup logging."""
-    if CONF.log_config_append:
-        _load_log_config(CONF.log_config_append)
-    else:
-        _setup_logging_from_conf(product_name, version)
-    sys.excepthook = _create_logging_excepthook(product_name)
-
-
-def set_defaults(logging_context_format_string=None,
-                 default_log_levels=None):
-    # Just in case the caller is not setting the
-    # default_log_levels. This is insurance because
-    # we introduced the default_log_levels parameter
-    # later in a backwards-incompatible change
-    if default_log_levels is not None:
-        cfg.set_defaults(
-            log_opts,
-            default_log_levels=default_log_levels)
-    if logging_context_format_string is not None:
-        cfg.set_defaults(
-            log_opts,
-            logging_context_format_string=logging_context_format_string)
-
-
-def _find_facility_from_conf():
-    facility_names = logging.handlers.SysLogHandler.facility_names
-    facility = getattr(logging.handlers.SysLogHandler,
-                       CONF.syslog_log_facility,
-                       None)
-
-    if facility is None and CONF.syslog_log_facility in facility_names:
-        facility = facility_names.get(CONF.syslog_log_facility)
-
-    if facility is None:
-        valid_facilities = facility_names.keys()
-        consts = ['LOG_AUTH', 'LOG_AUTHPRIV', 'LOG_CRON', 'LOG_DAEMON',
-                  'LOG_FTP', 'LOG_KERN', 'LOG_LPR', 'LOG_MAIL', 'LOG_NEWS',
-                  'LOG_AUTH', 'LOG_SYSLOG', 'LOG_USER', 'LOG_UUCP',
-                  'LOG_LOCAL0', 'LOG_LOCAL1', 'LOG_LOCAL2', 'LOG_LOCAL3',
-                  'LOG_LOCAL4', 'LOG_LOCAL5', 'LOG_LOCAL6', 'LOG_LOCAL7']
-        valid_facilities.extend(consts)
-        raise TypeError(_('syslog facility must be one of: %s') %
-                        ', '.join("'%s'" % fac
-                                  for fac in valid_facilities))
-
-    return facility
-
-
-class RFCSysLogHandler(logging.handlers.SysLogHandler):
-    def __init__(self, *args, **kwargs):
-        self.binary_name = _get_binary_name()
-        # Do not use super() unless type(logging.handlers.SysLogHandler)
-        #  is 'type' (Python 2.7).
-        # Use old style calls, if the type is 'classobj' (Python 2.6)
-        logging.handlers.SysLogHandler.__init__(self, *args, **kwargs)
-
-    def format(self, record):
-        # Do not use super() unless type(logging.handlers.SysLogHandler)
-        #  is 'type' (Python 2.7).
-        # Use old style calls, if the type is 'classobj' (Python 2.6)
-        msg = logging.handlers.SysLogHandler.format(self, record)
-        msg = self.binary_name + ' ' + msg
-        return msg
-
-
-def _setup_logging_from_conf(project, version):
-    log_root = getLogger(None).logger
-    for handler in log_root.handlers:
-        log_root.removeHandler(handler)
-
-    logpath = _get_log_file_path()
-    if logpath:
-        filelog = logging.handlers.WatchedFileHandler(logpath)
-        log_root.addHandler(filelog)
-
-    if CONF.use_stderr:
-        streamlog = ColorHandler()
-        log_root.addHandler(streamlog)
-
-    elif not logpath:
-        # pass sys.stdout as a positional argument
-        # python2.6 calls the argument strm, in 2.7 it's stream
-        streamlog = logging.StreamHandler(sys.stdout)
-        log_root.addHandler(streamlog)
-
-    if CONF.publish_errors:
-        try:
-            handler = importutils.import_object(
-                "tempest.openstack.common.log_handler.PublishErrorsHandler",
-                logging.ERROR)
-        except ImportError:
-            handler = importutils.import_object(
-                "oslo.messaging.notify.log_handler.PublishErrorsHandler",
-                logging.ERROR)
-        log_root.addHandler(handler)
-
-    datefmt = CONF.log_date_format
-    for handler in log_root.handlers:
-        # NOTE(alaski): CONF.log_format overrides everything currently.  This
-        # should be deprecated in favor of context aware formatting.
-        if CONF.log_format:
-            handler.setFormatter(logging.Formatter(fmt=CONF.log_format,
-                                                   datefmt=datefmt))
-            log_root.info('Deprecated: log_format is now deprecated and will '
-                          'be removed in the next release')
-        else:
-            handler.setFormatter(ContextFormatter(project=project,
-                                                  version=version,
-                                                  datefmt=datefmt))
-
-    if CONF.debug:
-        log_root.setLevel(logging.DEBUG)
-    elif CONF.verbose:
-        log_root.setLevel(logging.INFO)
-    else:
-        log_root.setLevel(logging.WARNING)
-
-    for pair in CONF.default_log_levels:
-        mod, _sep, level_name = pair.partition('=')
-        logger = logging.getLogger(mod)
-        # NOTE(AAzza) in python2.6 Logger.setLevel doesn't convert string name
-        # to integer code.
-        if sys.version_info < (2, 7):
-            level = logging.getLevelName(level_name)
-            logger.setLevel(level)
-        else:
-            logger.setLevel(level_name)
-
-    if CONF.use_syslog:
-        try:
-            facility = _find_facility_from_conf()
-            # TODO(bogdando) use the format provided by RFCSysLogHandler
-            #   after existing syslog format deprecation in J
-            if CONF.use_syslog_rfc_format:
-                syslog = RFCSysLogHandler(facility=facility)
-            else:
-                syslog = logging.handlers.SysLogHandler(facility=facility)
-            log_root.addHandler(syslog)
-        except socket.error:
-            log_root.error('Unable to add syslog handler. Verify that syslog '
-                           'is running.')
-
-
-_loggers = {}
-
-
-def getLogger(name='unknown', version='unknown'):
-    if name not in _loggers:
-        _loggers[name] = ContextAdapter(logging.getLogger(name),
-                                        name,
-                                        version)
-    return _loggers[name]
-
-
-def getLazyLogger(name='unknown', version='unknown'):
-    """Returns lazy logger.
-
-    Creates a pass-through logger that does not create the real logger
-    until it is really needed and delegates all calls to the real logger
-    once it is created.
-    """
-    return LazyAdapter(name, version)
-
-
-class WritableLogger(object):
-    """A thin wrapper that responds to `write` and logs."""
-
-    def __init__(self, logger, level=logging.INFO):
-        self.logger = logger
-        self.level = level
-
-    def write(self, msg):
-        self.logger.log(self.level, msg.rstrip())
-
-
-class ContextFormatter(logging.Formatter):
-    """A context.RequestContext aware formatter configured through flags.
-
-    The flags used to set format strings are: logging_context_format_string
-    and logging_default_format_string.  You can also specify
-    logging_debug_format_suffix to append extra formatting if the log level is
-    debug.
-
-    For information about what variables are available for the formatter see:
-    http://docs.python.org/library/logging.html#formatter
-
-    If available, uses the context value stored in TLS - local.store.context
-
-    """
-
-    def __init__(self, *args, **kwargs):
-        """Initialize ContextFormatter instance
-
-        Takes additional keyword arguments which can be used in the message
-        format string.
-
-        :keyword project: project name
-        :type project: string
-        :keyword version: project version
-        :type version: string
-
-        """
-
-        self.project = kwargs.pop('project', 'unknown')
-        self.version = kwargs.pop('version', 'unknown')
-
-        logging.Formatter.__init__(self, *args, **kwargs)
-
-    def format(self, record):
-        """Uses contextstring if request_id is set, otherwise default."""
-
-        # NOTE(jecarey): If msg is not unicode, coerce it into unicode
-        #                before it can get to the python logging and
-        #                possibly cause string encoding trouble
-        if not isinstance(record.msg, six.text_type):
-            record.msg = six.text_type(record.msg)
-
-        # store project info
-        record.project = self.project
-        record.version = self.version
-
-        # store request info
-        context = getattr(local.store, 'context', None)
-        if context:
-            d = _dictify_context(context)
-            for k, v in d.items():
-                setattr(record, k, v)
-
-        # NOTE(sdague): default the fancier formatting params
-        # to an empty string so we don't throw an exception if
-        # they get used
-        for key in ('instance', 'color', 'user_identity'):
-            if key not in record.__dict__:
-                record.__dict__[key] = ''
-
-        if record.__dict__.get('request_id'):
-            fmt = CONF.logging_context_format_string
-        else:
-            fmt = CONF.logging_default_format_string
-
-        if (record.levelno == logging.DEBUG and
-                CONF.logging_debug_format_suffix):
-            fmt += " " + CONF.logging_debug_format_suffix
-
-        if sys.version_info < (3, 2):
-            self._fmt = fmt
-        else:
-            self._style = logging.PercentStyle(fmt)
-            self._fmt = self._style._fmt
-        # Cache this on the record, Logger will respect our formatted copy
-        if record.exc_info:
-            record.exc_text = self.formatException(record.exc_info, record)
-        return logging.Formatter.format(self, record)
-
-    def formatException(self, exc_info, record=None):
-        """Format exception output with CONF.logging_exception_prefix."""
-        if not record:
-            return logging.Formatter.formatException(self, exc_info)
-
-        stringbuffer = moves.StringIO()
-        traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
-                                  None, stringbuffer)
-        lines = stringbuffer.getvalue().split('\n')
-        stringbuffer.close()
-
-        if CONF.logging_exception_prefix.find('%(asctime)') != -1:
-            record.asctime = self.formatTime(record, self.datefmt)
-
-        formatted_lines = []
-        for line in lines:
-            pl = CONF.logging_exception_prefix % record.__dict__
-            fl = '%s%s' % (pl, line)
-            formatted_lines.append(fl)
-        return '\n'.join(formatted_lines)
-
-
-class ColorHandler(logging.StreamHandler):
-    LEVEL_COLORS = {
-        logging.DEBUG: '\033[00;32m',  # GREEN
-        logging.INFO: '\033[00;36m',  # CYAN
-        logging.AUDIT: '\033[01;36m',  # BOLD CYAN
-        logging.WARN: '\033[01;33m',  # BOLD YELLOW
-        logging.ERROR: '\033[01;31m',  # BOLD RED
-        logging.CRITICAL: '\033[01;31m',  # BOLD RED
-    }
-
-    def format(self, record):
-        record.color = self.LEVEL_COLORS[record.levelno]
-        return logging.StreamHandler.format(self, record)
-
-
-class DeprecatedConfig(Exception):
-    message = _("Fatal call to deprecated config: %(msg)s")
-
-    def __init__(self, msg):
-        super(Exception, self).__init__(self.message % dict(msg=msg))
diff --git a/tempest/openstack/common/strutils.py b/tempest/openstack/common/strutils.py
deleted file mode 100644
index 605cc02..0000000
--- a/tempest/openstack/common/strutils.py
+++ /dev/null
@@ -1,295 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-System-level utilities and helper functions.
-"""
-
-import math
-import re
-import sys
-import unicodedata
-
-import six
-
-from tempest.openstack.common.gettextutils import _
-
-
-UNIT_PREFIX_EXPONENT = {
-    'k': 1,
-    'K': 1,
-    'Ki': 1,
-    'M': 2,
-    'Mi': 2,
-    'G': 3,
-    'Gi': 3,
-    'T': 4,
-    'Ti': 4,
-}
-UNIT_SYSTEM_INFO = {
-    'IEC': (1024, re.compile(r'(^[-+]?\d*\.?\d+)([KMGT]i?)?(b|bit|B)$')),
-    'SI': (1000, re.compile(r'(^[-+]?\d*\.?\d+)([kMGT])?(b|bit|B)$')),
-}
-
-TRUE_STRINGS = ('1', 't', 'true', 'on', 'y', 'yes')
-FALSE_STRINGS = ('0', 'f', 'false', 'off', 'n', 'no')
-
-SLUGIFY_STRIP_RE = re.compile(r"[^\w\s-]")
-SLUGIFY_HYPHENATE_RE = re.compile(r"[-\s]+")
-
-
-# NOTE(flaper87): The following 3 globals are used by `mask_password`
-_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']
-
-# NOTE(ldbragst): Let's build a list of regex objects using the list of
-# _SANITIZE_KEYS we already have. This way, we only have to add the new key
-# to the list of _SANITIZE_KEYS and we can generate regular expressions
-# for XML and JSON automatically.
-_SANITIZE_PATTERNS = []
-_FORMAT_PATTERNS = [r'(%(key)s\s*[=]\s*[\"\']).*?([\"\'])',
-                    r'(<%(key)s>).*?(</%(key)s>)',
-                    r'([\"\']%(key)s[\"\']\s*:\s*[\"\']).*?([\"\'])',
-                    r'([\'"].*?%(key)s[\'"]\s*:\s*u?[\'"]).*?([\'"])',
-                    r'([\'"].*?%(key)s[\'"]\s*,\s*\'--?[A-z]+\'\s*,\s*u?[\'"])'
-                    '.*?([\'"])',
-                    r'(%(key)s\s*--?[A-z]+\s*)\S+(\s*)']
-
-for key in _SANITIZE_KEYS:
-    for pattern in _FORMAT_PATTERNS:
-        reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
-        _SANITIZE_PATTERNS.append(reg_ex)
-
-
-def int_from_bool_as_string(subject):
-    """Interpret a string as a boolean and return either 1 or 0.
-
-    Any string value in:
-
-        ('True', 'true', 'On', 'on', '1')
-
-    is interpreted as a boolean True.
-
-    Useful for JSON-decoded stuff and config file parsing
-    """
-    return bool_from_string(subject) and 1 or 0
-
-
-def bool_from_string(subject, strict=False, default=False):
-    """Interpret a string as a boolean.
-
-    A case-insensitive match is performed such that strings matching 't',
-    'true', 'on', 'y', 'yes', or '1' are considered True and, when
-    `strict=False`, anything else returns the value specified by 'default'.
-
-    Useful for JSON-decoded stuff and config file parsing.
-
-    If `strict=True`, unrecognized values, including None, will raise a
-    ValueError which is useful when parsing values passed in from an API call.
-    Strings yielding False are 'f', 'false', 'off', 'n', 'no', or '0'.
-    """
-    if not isinstance(subject, six.string_types):
-        subject = six.text_type(subject)
-
-    lowered = subject.strip().lower()
-
-    if lowered in TRUE_STRINGS:
-        return True
-    elif lowered in FALSE_STRINGS:
-        return False
-    elif strict:
-        acceptable = ', '.join(
-            "'%s'" % s for s in sorted(TRUE_STRINGS + FALSE_STRINGS))
-        msg = _("Unrecognized value '%(val)s', acceptable values are:"
-                " %(acceptable)s") % {'val': subject,
-                                      'acceptable': acceptable}
-        raise ValueError(msg)
-    else:
-        return default
-
-
-def safe_decode(text, incoming=None, errors='strict'):
-    """Decodes incoming text/bytes string using `incoming` if they're not
-       already unicode.
-
-    :param incoming: Text's current encoding
-    :param errors: Errors handling policy. See here for valid
-        values http://docs.python.org/2/library/codecs.html
-    :returns: text or a unicode `incoming` encoded
-                representation of it.
-    :raises TypeError: If text is not an instance of str
-    """
-    if not isinstance(text, (six.string_types, six.binary_type)):
-        raise TypeError("%s can't be decoded" % type(text))
-
-    if isinstance(text, six.text_type):
-        return text
-
-    if not incoming:
-        incoming = (sys.stdin.encoding or
-                    sys.getdefaultencoding())
-
-    try:
-        return text.decode(incoming, errors)
-    except UnicodeDecodeError:
-        # Note(flaper87) If we get here, it means that
-        # sys.stdin.encoding / sys.getdefaultencoding
-        # didn't return a suitable encoding to decode
-        # text. This happens mostly when global LANG
-        # var is not set correctly and there's no
-        # default encoding. In this case, most likely
-        # python will use ASCII or ANSI encoders as
-        # default encodings but they won't be capable
-        # of decoding non-ASCII characters.
-        #
-        # Also, UTF-8 is being used since it's an ASCII
-        # extension.
-        return text.decode('utf-8', errors)
-
-
-def safe_encode(text, incoming=None,
-                encoding='utf-8', errors='strict'):
-    """Encodes incoming text/bytes string using `encoding`.
-
-    If incoming is not specified, text is expected to be encoded with
-    current python's default encoding. (`sys.getdefaultencoding`)
-
-    :param incoming: Text's current encoding
-    :param encoding: Expected encoding for text (Default UTF-8)
-    :param errors: Errors handling policy. See here for valid
-        values http://docs.python.org/2/library/codecs.html
-    :returns: text or a bytestring `encoding` encoded
-                representation of it.
-    :raises TypeError: If text is not an instance of str
-    """
-    if not isinstance(text, (six.string_types, six.binary_type)):
-        raise TypeError("%s can't be encoded" % type(text))
-
-    if not incoming:
-        incoming = (sys.stdin.encoding or
-                    sys.getdefaultencoding())
-
-    if isinstance(text, six.text_type):
-        return text.encode(encoding, errors)
-    elif text and encoding != incoming:
-        # Decode text before encoding it with `encoding`
-        text = safe_decode(text, incoming, errors)
-        return text.encode(encoding, errors)
-    else:
-        return text
-
-
-def string_to_bytes(text, unit_system='IEC', return_int=False):
-    """Converts a string into an float representation of bytes.
-
-    The units supported for IEC ::
-
-        Kb(it), Kib(it), Mb(it), Mib(it), Gb(it), Gib(it), Tb(it), Tib(it)
-        KB, KiB, MB, MiB, GB, GiB, TB, TiB
-
-    The units supported for SI ::
-
-        kb(it), Mb(it), Gb(it), Tb(it)
-        kB, MB, GB, TB
-
-    Note that the SI unit system does not support capital letter 'K'
-
-    :param text: String input for bytes size conversion.
-    :param unit_system: Unit system for byte size conversion.
-    :param return_int: If True, returns integer representation of text
-                       in bytes. (default: decimal)
-    :returns: Numerical representation of text in bytes.
-    :raises ValueError: If text has an invalid value.
-
-    """
-    try:
-        base, reg_ex = UNIT_SYSTEM_INFO[unit_system]
-    except KeyError:
-        msg = _('Invalid unit system: "%s"') % unit_system
-        raise ValueError(msg)
-    match = reg_ex.match(text)
-    if match:
-        magnitude = float(match.group(1))
-        unit_prefix = match.group(2)
-        if match.group(3) in ['b', 'bit']:
-            magnitude /= 8
-    else:
-        msg = _('Invalid string format: %s') % text
-        raise ValueError(msg)
-    if not unit_prefix:
-        res = magnitude
-    else:
-        res = magnitude * pow(base, UNIT_PREFIX_EXPONENT[unit_prefix])
-    if return_int:
-        return int(math.ceil(res))
-    return res
-
-
-def to_slug(value, incoming=None, errors="strict"):
-    """Normalize string.
-
-    Convert to lowercase, remove non-word characters, and convert spaces
-    to hyphens.
-
-    Inspired by Django's `slugify` filter.
-
-    :param value: Text to slugify
-    :param incoming: Text's current encoding
-    :param errors: Errors handling policy. See here for valid
-        values http://docs.python.org/2/library/codecs.html
-    :returns: slugified unicode representation of `value`
-    :raises TypeError: If text is not an instance of str
-    """
-    value = safe_decode(value, incoming, errors)
-    # NOTE(aababilov): no need to use safe_(encode|decode) here:
-    # encodings are always "ascii", error handling is always "ignore"
-    # and types are always known (first: unicode; second: str)
-    value = unicodedata.normalize("NFKD", value).encode(
-        "ascii", "ignore").decode("ascii")
-    value = SLUGIFY_STRIP_RE.sub("", value).strip().lower()
-    return SLUGIFY_HYPHENATE_RE.sub("-", value)
-
-
-def mask_password(message, secret="***"):
-    """Replace password with 'secret' in message.
-
-    :param message: The string which includes security information.
-    :param secret: value with which to replace passwords.
-    :returns: The unicode value of message with the password fields masked.
-
-    For example:
-
-    >>> mask_password("'adminPass' : 'aaaaa'")
-    "'adminPass' : '***'"
-    >>> mask_password("'admin_pass' : 'aaaaa'")
-    "'admin_pass' : '***'"
-    >>> mask_password('"password" : "aaaaa"')
-    '"password" : "***"'
-    >>> mask_password("'original_password' : 'aaaaa'")
-    "'original_password' : '***'"
-    >>> mask_password("u'original_password' :   u'aaaaa'")
-    "u'original_password' :   u'***'"
-    """
-    message = six.text_type(message)
-
-    # NOTE(ldbragst): Check to see if anything in message contains any key
-    # specified in _SANITIZE_KEYS, if not then just return the message since
-    # we don't have to mask any passwords.
-    if not any(key in message for key in _SANITIZE_KEYS):
-        return message
-
-    secret = r'\g<1>' + secret + r'\g<2>'
-    for pattern in _SANITIZE_PATTERNS:
-        message = re.sub(pattern, secret, message)
-    return message
diff --git a/tempest/openstack/common/timeutils.py b/tempest/openstack/common/timeutils.py
deleted file mode 100644
index c48da95..0000000
--- a/tempest/openstack/common/timeutils.py
+++ /dev/null
@@ -1,210 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-Time related utilities and helper functions.
-"""
-
-import calendar
-import datetime
-import time
-
-import iso8601
-import six
-
-
-# ISO 8601 extended time format with microseconds
-_ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'
-_ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'
-PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND
-
-
-def isotime(at=None, subsecond=False):
-    """Stringify time in ISO 8601 format."""
-    if not at:
-        at = utcnow()
-    st = at.strftime(_ISO8601_TIME_FORMAT
-                     if not subsecond
-                     else _ISO8601_TIME_FORMAT_SUBSECOND)
-    tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
-    st += ('Z' if tz == 'UTC' else tz)
-    return st
-
-
-def parse_isotime(timestr):
-    """Parse time from ISO 8601 format."""
-    try:
-        return iso8601.parse_date(timestr)
-    except iso8601.ParseError as e:
-        raise ValueError(six.text_type(e))
-    except TypeError as e:
-        raise ValueError(six.text_type(e))
-
-
-def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
-    """Returns formatted utcnow."""
-    if not at:
-        at = utcnow()
-    return at.strftime(fmt)
-
-
-def parse_strtime(timestr, fmt=PERFECT_TIME_FORMAT):
-    """Turn a formatted time back into a datetime."""
-    return datetime.datetime.strptime(timestr, fmt)
-
-
-def normalize_time(timestamp):
-    """Normalize time in arbitrary timezone to UTC naive object."""
-    offset = timestamp.utcoffset()
-    if offset is None:
-        return timestamp
-    return timestamp.replace(tzinfo=None) - offset
-
-
-def is_older_than(before, seconds):
-    """Return True if before is older than seconds."""
-    if isinstance(before, six.string_types):
-        before = parse_strtime(before).replace(tzinfo=None)
-    else:
-        before = before.replace(tzinfo=None)
-
-    return utcnow() - before > datetime.timedelta(seconds=seconds)
-
-
-def is_newer_than(after, seconds):
-    """Return True if after is newer than seconds."""
-    if isinstance(after, six.string_types):
-        after = parse_strtime(after).replace(tzinfo=None)
-    else:
-        after = after.replace(tzinfo=None)
-
-    return after - utcnow() > datetime.timedelta(seconds=seconds)
-
-
-def utcnow_ts():
-    """Timestamp version of our utcnow function."""
-    if utcnow.override_time is None:
-        # NOTE(kgriffs): This is several times faster
-        # than going through calendar.timegm(...)
-        return int(time.time())
-
-    return calendar.timegm(utcnow().timetuple())
-
-
-def utcnow():
-    """Overridable version of utils.utcnow."""
-    if utcnow.override_time:
-        try:
-            return utcnow.override_time.pop(0)
-        except AttributeError:
-            return utcnow.override_time
-    return datetime.datetime.utcnow()
-
-
-def iso8601_from_timestamp(timestamp):
-    """Returns an iso8601 formatted date from timestamp."""
-    return isotime(datetime.datetime.utcfromtimestamp(timestamp))
-
-
-utcnow.override_time = None
-
-
-def set_time_override(override_time=None):
-    """Overrides utils.utcnow.
-
-    Make it return a constant time or a list thereof, one at a time.
-
-    :param override_time: datetime instance or list thereof. If not
-                          given, defaults to the current UTC time.
-    """
-    utcnow.override_time = override_time or datetime.datetime.utcnow()
-
-
-def advance_time_delta(timedelta):
-    """Advance overridden time using a datetime.timedelta."""
-    assert utcnow.override_time is not None
-    try:
-        for dt in utcnow.override_time:
-            dt += timedelta
-    except TypeError:
-        utcnow.override_time += timedelta
-
-
-def advance_time_seconds(seconds):
-    """Advance overridden time by seconds."""
-    advance_time_delta(datetime.timedelta(0, seconds))
-
-
-def clear_time_override():
-    """Remove the overridden time."""
-    utcnow.override_time = None
-
-
-def marshall_now(now=None):
-    """Make an rpc-safe datetime with microseconds.
-
-    Note: tzinfo is stripped, but not required for relative times.
-    """
-    if not now:
-        now = utcnow()
-    return dict(day=now.day, month=now.month, year=now.year, hour=now.hour,
-                minute=now.minute, second=now.second,
-                microsecond=now.microsecond)
-
-
-def unmarshall_time(tyme):
-    """Unmarshall a datetime dict."""
-    return datetime.datetime(day=tyme['day'],
-                             month=tyme['month'],
-                             year=tyme['year'],
-                             hour=tyme['hour'],
-                             minute=tyme['minute'],
-                             second=tyme['second'],
-                             microsecond=tyme['microsecond'])
-
-
-def delta_seconds(before, after):
-    """Return the difference between two timing objects.
-
-    Compute the difference in seconds between two date, time, or
-    datetime objects (as a float, to microsecond resolution).
-    """
-    delta = after - before
-    return total_seconds(delta)
-
-
-def total_seconds(delta):
-    """Return the total seconds of datetime.timedelta object.
-
-    Compute total seconds of datetime.timedelta, datetime.timedelta
-    doesn't have method total_seconds in Python2.6, calculate it manually.
-    """
-    try:
-        return delta.total_seconds()
-    except AttributeError:
-        return ((delta.days * 24 * 3600) + delta.seconds +
-                float(delta.microseconds) / (10 ** 6))
-
-
-def is_soon(dt, window):
-    """Determines if time is going to happen in the next window seconds.
-
-    :param dt: the time
-    :param window: minimum seconds to remain to consider the time not soon
-
-    :return: True if expiration is within the given duration
-    """
-    soon = (utcnow() + datetime.timedelta(seconds=window))
-    return normalize_time(dt) <= soon
diff --git a/tempest/openstack/common/versionutils.py b/tempest/openstack/common/versionutils.py
index 131046e..12d2e14 100644
--- a/tempest/openstack/common/versionutils.py
+++ b/tempest/openstack/common/versionutils.py
@@ -17,14 +17,34 @@
 Helpers for comparing version strings.
 """
 
+import copy
 import functools
-import pkg_resources
+import inspect
+import logging
 
-from tempest.openstack.common.gettextutils import _
-from tempest.openstack.common import log as logging
+from oslo_config import cfg
+import pkg_resources
+import six
+
+from tempest.openstack.common._i18n import _
+from oslo_log import log as logging
 
 
 LOG = logging.getLogger(__name__)
+CONF = cfg.CONF
+
+
+deprecated_opts = [
+    cfg.BoolOpt('fatal_deprecations',
+                default=False,
+                help='Enables or disables fatal status of deprecations.'),
+]
+
+
+def list_opts():
+    """Entry point for oslo.config-generator.
+    """
+    return [(None, copy.deepcopy(deprecated_opts))]
 
 
 class deprecated(object):
@@ -52,18 +72,38 @@
     >>> @deprecated(as_of=deprecated.ICEHOUSE, remove_in=+1)
     ... def c(): pass
 
+    4. Specifying that the deprecated functionality will not be removed:
+    >>> @deprecated(as_of=deprecated.ICEHOUSE, remove_in=0)
+    ... def d(): pass
+
+    5. Specifying a replacement; deprecated functionality will not be removed:
+    >>> @deprecated(as_of=deprecated.ICEHOUSE, in_favor_of='f()', remove_in=0)
+    ... def e(): pass
+
     """
 
+    # NOTE(morganfainberg): Bexar is used for unit test purposes, it is
+    # expected we maintain a gap between Bexar and Folsom in this list.
+    BEXAR = 'B'
     FOLSOM = 'F'
     GRIZZLY = 'G'
     HAVANA = 'H'
     ICEHOUSE = 'I'
+    JUNO = 'J'
+    KILO = 'K'
+    LIBERTY = 'L'
 
     _RELEASES = {
+        # NOTE(morganfainberg): Bexar is used for unit test purposes, it is
+        # expected we maintain a gap between Bexar and Folsom in this list.
+        'B': 'Bexar',
         'F': 'Folsom',
         'G': 'Grizzly',
         'H': 'Havana',
         'I': 'Icehouse',
+        'J': 'Juno',
+        'K': 'Kilo',
+        'L': 'Liberty',
     }
 
     _deprecated_msg_with_alternative = _(
@@ -74,6 +114,12 @@
         '%(what)s is deprecated as of %(as_of)s and may be '
         'removed in %(remove_in)s. It will not be superseded.')
 
+    _deprecated_msg_with_alternative_no_removal = _(
+        '%(what)s is deprecated as of %(as_of)s in favor of %(in_favor_of)s.')
+
+    _deprecated_msg_with_no_alternative_no_removal = _(
+        '%(what)s is deprecated as of %(as_of)s. It will not be superseded.')
+
     def __init__(self, as_of, in_favor_of=None, remove_in=2, what=None):
         """Initialize decorator
 
@@ -91,16 +137,34 @@
         self.remove_in = remove_in
         self.what = what
 
-    def __call__(self, func):
+    def __call__(self, func_or_cls):
         if not self.what:
-            self.what = func.__name__ + '()'
+            self.what = func_or_cls.__name__ + '()'
+        msg, details = self._build_message()
 
-        @functools.wraps(func)
-        def wrapped(*args, **kwargs):
-            msg, details = self._build_message()
-            LOG.deprecated(msg, details)
-            return func(*args, **kwargs)
-        return wrapped
+        if inspect.isfunction(func_or_cls):
+
+            @six.wraps(func_or_cls)
+            def wrapped(*args, **kwargs):
+                report_deprecated_feature(LOG, msg, details)
+                return func_or_cls(*args, **kwargs)
+            return wrapped
+        elif inspect.isclass(func_or_cls):
+            orig_init = func_or_cls.__init__
+
+            # TODO(tsufiev): change `functools` module to `six` as
+            # soon as six 1.7.4 (with fix for passing `assigned`
+            # argument to underlying `functools.wraps`) is released
+            # and added to the oslo-incubator requirements
+            @functools.wraps(orig_init, assigned=('__name__', '__doc__'))
+            def new_init(self, *args, **kwargs):
+                report_deprecated_feature(LOG, msg, details)
+                orig_init(self, *args, **kwargs)
+            func_or_cls.__init__ = new_init
+            return func_or_cls
+        else:
+            raise TypeError('deprecated can be used only with functions or '
+                            'classes')
 
     def _get_safe_to_remove_release(self, release):
         # TODO(dstanek): this method will have to be reimplemented once
@@ -119,9 +183,19 @@
 
         if self.in_favor_of:
             details['in_favor_of'] = self.in_favor_of
-            msg = self._deprecated_msg_with_alternative
+            if self.remove_in > 0:
+                msg = self._deprecated_msg_with_alternative
+            else:
+                # There are no plans to remove this function, but it is
+                # now deprecated.
+                msg = self._deprecated_msg_with_alternative_no_removal
         else:
-            msg = self._deprecated_msg_no_alternative
+            if self.remove_in > 0:
+                msg = self._deprecated_msg_no_alternative
+            else:
+                # There are no plans to remove this function, but it is
+                # now deprecated.
+                msg = self._deprecated_msg_with_no_alternative_no_removal
         return msg, details
 
 
@@ -146,3 +220,44 @@
         return False
 
     return current_parts >= requested_parts
+
+
+# Track the messages we have sent already. See
+# report_deprecated_feature().
+_deprecated_messages_sent = {}
+
+
+def report_deprecated_feature(logger, msg, *args, **kwargs):
+    """Call this function when a deprecated feature is used.
+
+    If the system is configured for fatal deprecations then the message
+    is logged at the 'critical' level and :class:`DeprecatedConfig` will
+    be raised.
+
+    Otherwise, the message will be logged (once) at the 'warn' level.
+
+    :raises: :class:`DeprecatedConfig` if the system is configured for
+             fatal deprecations.
+    """
+    stdmsg = _("Deprecated: %s") % msg
+    CONF.register_opts(deprecated_opts)
+    if CONF.fatal_deprecations:
+        logger.critical(stdmsg, *args, **kwargs)
+        raise DeprecatedConfig(msg=stdmsg)
+
+    # Using a list because a tuple with dict can't be stored in a set.
+    sent_args = _deprecated_messages_sent.setdefault(msg, list())
+
+    if args in sent_args:
+        # Already logged this message, so don't log it again.
+        return
+
+    sent_args.append(args)
+    logger.warn(stdmsg, *args, **kwargs)
+
+
+class DeprecatedConfig(Exception):
+    message = _("Fatal call to deprecated config: %(msg)s")
+
+    def __init__(self, msg):
+        super(Exception, self).__init__(self.message % dict(msg=msg))
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index ef1037c..bae8296 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -14,21 +14,20 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import os
 import subprocess
 
 import netaddr
+from oslo_log import log
 import six
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import clients
-from tempest.common import cred_provider
 from tempest.common import credentials
+from tempest.common import fixed_network
 from tempest.common.utils.linux import remote_client
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log
 from tempest.services.network import resources as net_resources
 import tempest.test
 
@@ -50,7 +49,6 @@
         cls.manager = clients.Manager(
             credentials=cls.credentials()
         )
-        cls.admin_manager = clients.Manager(cls.admin_credentials())
 
     @classmethod
     def setup_clients(cls):
@@ -63,7 +61,6 @@
         # Compute image client
         cls.images_client = cls.manager.images_client
         cls.keypairs_client = cls.manager.keypairs_client
-        cls.networks_client = cls.admin_manager.networks_client
         # Nova security groups client
         cls.security_groups_client = cls.manager.security_groups_client
         cls.servers_client = cls.manager.servers_client
@@ -188,6 +185,8 @@
             flavor = CONF.compute.flavor_ref
         if create_kwargs is None:
             create_kwargs = {}
+        network = self.get_tenant_network()
+        fixed_network.set_networks_kwarg(network, create_kwargs)
 
         LOG.debug("Creating a server (name: %s, image: %s, flavor: %s)",
                   name, image, flavor)
@@ -542,6 +541,14 @@
         super(NetworkScenarioTest, cls).skip_checks()
         if not CONF.service_available.neutron:
             raise cls.skipException('Neutron not available')
+        if not credentials.is_admin_available():
+            msg = ("Missing Identity Admin API credentials in configuration.")
+            raise cls.skipException(msg)
+
+    @classmethod
+    def setup_credentials(cls):
+        super(NetworkScenarioTest, cls).setup_credentials()
+        cls.admin_manager = clients.Manager(cls.admin_credentials())
 
     @classmethod
     def resource_setup(cls):
@@ -1283,9 +1290,17 @@
     """
 
     @classmethod
+    def skip_checks(cls):
+        super(EncryptionScenarioTest, cls).skip_checks()
+        if not credentials.is_admin_available():
+            msg = ("Missing Identity Admin API credentials in configuration.")
+            raise cls.skipException(msg)
+
+    @classmethod
     def setup_clients(cls):
         super(EncryptionScenarioTest, cls).setup_clients()
-        cls.admin_volume_types_client = cls.admin_manager.volume_types_client
+        admin_manager = clients.Manager(cls.admin_credentials())
+        cls.admin_volume_types_client = admin_manager.volume_types_client
 
     def _wait_for_volume_status(self, status):
         self.status_timeout(
@@ -1324,49 +1339,6 @@
             control_location=control_location)
 
 
-class OrchestrationScenarioTest(ScenarioTest):
-    """
-    Base class for orchestration scenario tests
-    """
-
-    @classmethod
-    def skip_checks(cls):
-        super(OrchestrationScenarioTest, cls).skip_checks()
-        if not CONF.service_available.heat:
-            raise cls.skipException("Heat support is required")
-
-    @classmethod
-    def credentials(cls):
-        admin_creds = cred_provider.get_configured_credentials(
-            'identity_admin')
-        creds = cred_provider.get_configured_credentials('user')
-        admin_creds.tenant_name = creds.tenant_name
-        return admin_creds
-
-    def _load_template(self, base_file, file_name):
-        filepath = os.path.join(os.path.dirname(os.path.realpath(base_file)),
-                                file_name)
-        with open(filepath) as f:
-            return f.read()
-
-    @classmethod
-    def _stack_rand_name(cls):
-        return data_utils.rand_name(cls.__name__ + '-')
-
-    @classmethod
-    def _get_default_network(cls):
-        networks = cls.networks_client.list_networks()
-        for net in networks:
-            if net['label'] == CONF.compute.fixed_network_name:
-                return net
-
-    @staticmethod
-    def _stack_output(stack, output_key):
-        """Return a stack output value for a given key."""
-        return next((o['output_value'] for o in stack['outputs']
-                    if o['output_key'] == output_key), None)
-
-
 class SwiftScenarioTest(ScenarioTest):
     """
     Provide harness to do Swift scenario tests.
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 4074e9b..92e6c74 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -13,10 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.common import tempest_fixtures as fixtures
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_baremetal_basic_ops.py b/tempest/scenario/test_baremetal_basic_ops.py
index 434d3df..612a5a2 100644
--- a/tempest/scenario/test_baremetal_basic_ops.py
+++ b/tempest/scenario/test_baremetal_basic_ops.py
@@ -13,8 +13,9 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from oslo_log import log as logging
+
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index eed3d0b..e6912d8 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -35,8 +35,8 @@
         self.glance_image_create()
         self.nova_boot()
 
-    def create_encrypted_volume(self, encryption_provider):
-        volume_type = self.create_volume_type(name='luks')
+    def create_encrypted_volume(self, encryption_provider, volume_type):
+        volume_type = self.create_volume_type(name=volume_type)
         self.create_encryption_type(type_id=volume_type['id'],
                                     provider=encryption_provider,
                                     key_size=512,
@@ -53,7 +53,8 @@
     def test_encrypted_cinder_volumes_luks(self):
         self.launch_instance()
         self.create_encrypted_volume('nova.volume.encryptors.'
-                                     'luks.LuksEncryptor')
+                                     'luks.LuksEncryptor',
+                                     volume_type='luks')
         self.attach_detach_volume()
 
     @test.idempotent_id('cbc752ed-b716-4717-910f-956cce965722')
@@ -61,5 +62,6 @@
     def test_encrypted_cinder_volumes_cryptsetup(self):
         self.launch_instance()
         self.create_encrypted_volume('nova.volume.encryptors.'
-                                     'cryptsetup.CryptsetupEncryptor')
+                                     'cryptsetup.CryptsetupEncryptor',
+                                     volume_type='cryptsetup')
         self.attach_detach_volume()
diff --git a/tempest/scenario/test_large_ops.py b/tempest/scenario/test_large_ops.py
index 2408109..145efe7 100644
--- a/tempest/scenario/test_large_ops.py
+++ b/tempest/scenario/test_large_ops.py
@@ -13,11 +13,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 63f74c4..c780464 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -13,10 +13,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 from tempest.common import custom_matchers
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index b4837a2..3d6abff 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -13,12 +13,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
-from tempest_lib import decorators
 import testtools
 
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
@@ -94,8 +93,8 @@
         self.servers_client.wait_for_server_status(self.server['id'], 'ACTIVE')
         self._check_network_connectivity()
 
-    @decorators.skip_because(bug="1323658")
     @test.idempotent_id('61f1aa9a-1573-410e-9054-afa557cab021')
+    @test.stresstest(class_setup_per='process')
     @test.services('compute', 'network')
     def test_server_connectivity_stop_start(self):
         self._setup_network_and_servers()
@@ -147,7 +146,6 @@
         self.servers_client.resume_server(self.server['id'])
         self._wait_server_status_and_check_network_connectivity()
 
-    @decorators.skip_because(bug="1323658")
     @test.idempotent_id('719eb59d-2f42-4b66-b8b1-bb1254473967')
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize is not available.')
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index c9aa1ab..bb19853 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -16,12 +16,12 @@
 import collections
 import re
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 import testtools
 
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest.services.network import resources as net_resources
 from tempest import test
@@ -101,13 +101,19 @@
         self.servers = []
 
     def _setup_network_and_servers(self, **kwargs):
+        boot_with_port = kwargs.pop('boot_with_port', False)
         self.security_group = \
             self._create_security_group(tenant_id=self.tenant_id)
         self.network, self.subnet, self.router = self.create_networks(**kwargs)
         self.check_networks()
 
+        self.port_id = None
+        if boot_with_port:
+            # create a port on the network and boot with that
+            self.port_id = self._create_port(self.network['id']).id
+
         name = data_utils.rand_name('server-smoke')
-        server = self._create_server(name, self.network)
+        server = self._create_server(name, self.network, self.port_id)
         self._check_tenant_network_connectivity()
 
         floating_ip = self.create_floating_ip(server)
@@ -141,7 +147,7 @@
             self.assertIn(self.router.id,
                           seen_router_ids)
 
-    def _create_server(self, name, network):
+    def _create_server(self, name, network, port_id=None):
         keypair = self.create_keypair()
         self.keypairs[keypair['name']] = keypair
         security_groups = [{'name': self.security_group['name']}]
@@ -152,6 +158,8 @@
             'key_name': keypair['name'],
             'security_groups': security_groups,
         }
+        if port_id is not None:
+            create_kwargs['networks'][0]['port'] = port_id
         server = self.create_server(name=name, create_kwargs=create_kwargs)
         self.servers.append(server)
         return server
@@ -213,11 +221,15 @@
         self.floating_ip_tuple = Floating_IP_tuple(
             floating_ip, server)
 
-    def _create_new_network(self):
+    def _create_new_network(self, create_gateway=False):
         self.new_net = self._create_network(tenant_id=self.tenant_id)
-        self.new_subnet = self._create_subnet(
-            network=self.new_net,
-            gateway_ip=None)
+        if create_gateway:
+            self.new_subnet = self._create_subnet(
+                network=self.new_net)
+        else:
+            self.new_subnet = self._create_subnet(
+                network=self.new_net,
+                gateway_ip=None)
 
     def _hotplug_server(self):
         old_floating_ip, server = self.floating_ip_tuple
@@ -277,7 +289,8 @@
         ipatxt = ssh_client.get_ip_list()
         return reg.findall(ipatxt)
 
-    def _check_network_internal_connectivity(self, network):
+    def _check_network_internal_connectivity(self, network,
+                                             should_connect=True):
         """
         via ssh check VM internal connectivity:
         - ping internal gateway and DHCP port, implying in-tenant connectivity
@@ -291,7 +304,9 @@
                                          network_id=network.id)
                         if p['device_owner'].startswith('network'))
 
-        self._check_server_connectivity(floating_ip, internal_ips)
+        self._check_server_connectivity(floating_ip,
+                                        internal_ips,
+                                        should_connect)
 
     def _check_network_external_connectivity(self):
         """
@@ -311,17 +326,22 @@
         self._check_server_connectivity(self.floating_ip_tuple.floating_ip,
                                         external_ips)
 
-    def _check_server_connectivity(self, floating_ip, address_list):
+    def _check_server_connectivity(self, floating_ip, address_list,
+                                   should_connect=True):
         ip_address = floating_ip.floating_ip_address
         private_key = self._get_server_key(self.floating_ip_tuple.server)
         ssh_source = self._ssh_to_server(ip_address, private_key)
 
         for remote_ip in address_list:
+            if should_connect:
+                msg = "Timed out waiting for "
+                "%s to become reachable" % remote_ip
+            else:
+                msg = "ip address %s is reachable" % remote_ip
             try:
-                self.assertTrue(self._check_remote_connectivity(ssh_source,
-                                                                remote_ip),
-                                "Timed out waiting for %s to become "
-                                "reachable" % remote_ip)
+                self.assertTrue(self._check_remote_connectivity
+                                (ssh_source, remote_ip, should_connect),
+                                msg)
             except Exception:
                 LOG.exception("Unable to access {dest} via ssh to "
                               "floating-ip {src}".format(dest=remote_ip,
@@ -380,6 +400,52 @@
                                                msg="after re-associate "
                                                    "floating ip")
 
+    @test.idempotent_id('1546850e-fbaa-42f5-8b5f-03d8a6a95f15')
+    @testtools.skipIf(CONF.baremetal.driver_enabled,
+                      'Baremetal relies on a shared physical network.')
+    @test.attr(type='smoke')
+    @test.services('compute', 'network')
+    def test_connectivity_between_vms_on_different_networks(self):
+        """
+        For a freshly-booted VM with an IP address ("port") on a given
+            network:
+
+        - the Tempest host can ping the IP address.
+
+        - the Tempest host can ssh into the VM via the IP address and
+         successfully execute the following:
+
+         - ping an external IP address, implying external connectivity.
+
+         - ping an external hostname, implying that dns is correctly
+           configured.
+
+         - ping an internal IP address, implying connectivity to another
+           VM on the same network.
+
+        - Create another network on the same tenant with a subnet, and create
+        a VM on the new network.
+
+         - Pinging the new VM from the previous VM fails because the new
+         network is not yet attached to the router.
+
+         - Attach the new network to the router; pinging the new VM from
+         the previous VM now succeeds.
+
+        """
+        self._setup_network_and_servers()
+        self.check_public_network_connectivity(should_connect=True)
+        self._check_network_internal_connectivity(network=self.network)
+        self._check_network_external_connectivity()
+        self._create_new_network(create_gateway=True)
+        name = data_utils.rand_name('server-smoke')
+        self._create_server(name, self.new_net)
+        self._check_network_internal_connectivity(network=self.new_net,
+                                                  should_connect=False)
+        self.new_subnet.add_to_router(self.router.id)
+        self._check_network_internal_connectivity(network=self.new_net,
+                                                  should_connect=True)
+
     @test.idempotent_id('c5adff73-e961-41f1-b4a9-343614f18cfa')
     @testtools.skipUnless(CONF.compute_feature_enabled.interface_attach,
                           'NIC hotplug not available')
@@ -547,3 +613,39 @@
         self.check_public_network_connectivity(
             should_connect=True, msg="after updating "
             "admin_state_up of instance port to True")
+
+    @test.idempotent_id('759462e1-8535-46b0-ab3a-33aa45c55aaa')
+    @testtools.skipUnless(CONF.compute_feature_enabled.preserve_ports,
+                          'Preserving ports on instance delete may not be '
+                          'supported in the version of Nova being tested.')
+    @test.attr(type='smoke')
+    @test.services('compute', 'network')
+    def test_preserve_preexisting_port(self):
+        """Tests that a pre-existing port provided on server boot is not
+        deleted if the server is deleted.
+
+        Nova should unbind the port from the instance on delete if the port was
+        not created by Nova as part of the boot request.
+        """
+        # Setup the network, create a port and boot the server from that port.
+        self._setup_network_and_servers(boot_with_port=True)
+        _, server = self.floating_ip_tuple
+        self.assertIsNotNone(self.port_id,
+                             'Server should have been created from a '
+                             'pre-existing port.')
+        # Assert the port is bound to the server.
+        port_list = self._list_ports(device_id=server['id'],
+                                     network_id=self.network['id'])
+        self.assertEqual(1, len(port_list),
+                         'There should only be one port created for '
+                         'server %s.' % server['id'])
+        self.assertEqual(self.port_id, port_list[0]['id'])
+        # Delete the server.
+        self.servers_client.delete_server(server['id'])
+        self.servers_client.wait_for_server_termination(server['id'])
+        # Assert the port still exists on the network but is unbound from
+        # the deleted server.
+        port = self.network_client.show_port(self.port_id)['port']
+        self.assertEqual(self.network['id'], port['network_id'])
+        self.assertEqual('', port['device_id'])
+        self.assertEqual('', port['device_owner'])
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index 7b2bdd5..16ff848 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -14,8 +14,10 @@
 #    under the License.
 import functools
 import netaddr
+
+from oslo_log import log as logging
+
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index bb6c9b1..cffb2fe 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -13,11 +13,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest import clients
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 8cbc388..f45f0c9 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -13,10 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 import testtools
 
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index b306b11..e093f43 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -13,8 +13,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest.scenario import utils as test_utils
 from tempest import test
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index 155ecbf..e674101 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -13,10 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log
 import testtools
 
 from tempest import config
-from tempest.openstack.common import log
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_snapshot_pattern.py b/tempest/scenario/test_snapshot_pattern.py
index 109d36b..1298faa 100644
--- a/tempest/scenario/test_snapshot_pattern.py
+++ b/tempest/scenario/test_snapshot_pattern.py
@@ -13,10 +13,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log
 import testtools
 
 from tempest import config
-from tempest.openstack.common import log
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index eaa6141..f7653e7 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -15,6 +15,7 @@
 
 import time
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 from tempest_lib import decorators
 from tempest_lib import exceptions as lib_exc
@@ -22,7 +23,6 @@
 
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 import tempest.test
diff --git a/tempest/scenario/test_swift_basic_ops.py b/tempest/scenario/test_swift_basic_ops.py
index b622c4a..69e0c4c 100644
--- a/tempest/scenario/test_swift_basic_ops.py
+++ b/tempest/scenario/test_swift_basic_ops.py
@@ -13,8 +13,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_swift_telemetry_middleware.py b/tempest/scenario/test_swift_telemetry_middleware.py
index a10168c..302ccbe 100644
--- a/tempest/scenario/test_swift_telemetry_middleware.py
+++ b/tempest/scenario/test_swift_telemetry_middleware.py
@@ -14,9 +14,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest.scenario import manager
 from tempest import test
 
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 3c5e88c..3e259b0 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -10,11 +10,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log
 from tempest_lib.common.utils import data_utils
 from tempest_lib import decorators
 
 from tempest import config
-from tempest.openstack.common import log
 from tempest.scenario import manager
 from tempest import test
 
@@ -119,7 +119,7 @@
         return ssh_client.exec_command('cat /tmp/text')
 
     def _write_text(self, ssh_client):
-        text = data_utils.rand_name('text-')
+        text = data_utils.rand_name('text')
         ssh_client.exec_command('echo "%s" > /tmp/text; sync' % (text))
 
         return self._get_content(ssh_client)
diff --git a/tempest/services/baremetal/v1/json/baremetal_client.py b/tempest/services/baremetal/v1/json/baremetal_client.py
index 09b6cd1..0c319f6 100644
--- a/tempest/services/baremetal/v1/json/baremetal_client.py
+++ b/tempest/services/baremetal/v1/json/baremetal_client.py
@@ -131,7 +131,7 @@
         return self._show_request('drivers', driver_name)
 
     @base.handle_errors
-    def create_node(self, chassis_id, **kwargs):
+    def create_node(self, chassis_id=None, **kwargs):
         """
         Create a baremetal node with the specified parameters.
 
diff --git a/tempest/services/botoclients.py b/tempest/services/botoclients.py
index 1cbdb0c..6a1af6c 100644
--- a/tempest/services/botoclients.py
+++ b/tempest/services/botoclients.py
@@ -20,7 +20,6 @@
 import urlparse
 
 from tempest import config
-from tempest import exceptions
 
 import boto
 import boto.ec2
@@ -33,41 +32,15 @@
 
     ALLOWED_METHODS = set()
 
-    def __init__(self, username=None, password=None,
-                 auth_url=None, tenant_name=None,
-                 *args, **kwargs):
-        # FIXME(andreaf) replace credentials and auth_url with auth_provider
+    def __init__(self, identity_client):
+        self.identity_client = identity_client
 
-        insecure_ssl = CONF.identity.disable_ssl_certificate_validation
         self.ca_cert = CONF.identity.ca_certificates_file
-
         self.connection_timeout = str(CONF.boto.http_socket_timeout)
         self.num_retries = str(CONF.boto.num_retries)
         self.build_timeout = CONF.boto.build_timeout
-        self.ks_cred = {"username": username,
-                        "password": password,
-                        "auth_url": auth_url,
-                        "tenant_name": tenant_name,
-                        "insecure": insecure_ssl,
-                        "cacert": self.ca_cert}
 
-    def _keystone_aws_get(self):
-        # FIXME(andreaf) Move EC2 credentials to AuthProvider
-        import keystoneclient.v2_0.client
-
-        keystone = keystoneclient.v2_0.client.Client(**self.ks_cred)
-        ec2_cred_list = keystone.ec2.list(keystone.auth_user_id)
-        ec2_cred = None
-        for cred in ec2_cred_list:
-            if cred.tenant_id == keystone.auth_tenant_id:
-                ec2_cred = cred
-                break
-        else:
-            ec2_cred = keystone.ec2.create(keystone.auth_user_id,
-                                           keystone.auth_tenant_id)
-        if not all((ec2_cred, ec2_cred.access, ec2_cred.secret)):
-            raise lib_exc.NotFound("Unable to get access and secret keys")
-        return ec2_cred
+        self.connection_data = {}
 
     def _config_boto_timeout(self, timeout, retries):
         try:
@@ -105,33 +78,47 @@
     def get_connection(self):
         self._config_boto_timeout(self.connection_timeout, self.num_retries)
         self._config_boto_ca_certificates_file(self.ca_cert)
-        if not all((self.connection_data["aws_access_key_id"],
-                   self.connection_data["aws_secret_access_key"])):
-            if all([self.ks_cred.get('auth_url'),
-                    self.ks_cred.get('username'),
-                    self.ks_cred.get('tenant_name'),
-                    self.ks_cred.get('password')]):
-                ec2_cred = self._keystone_aws_get()
-                self.connection_data["aws_access_key_id"] = \
-                    ec2_cred.access
-                self.connection_data["aws_secret_access_key"] = \
-                    ec2_cred.secret
-            else:
-                raise exceptions.InvalidConfiguration(
-                    "Unable to get access and secret keys")
+
+        ec2_client_args = {'aws_access_key_id': CONF.boto.aws_access,
+                           'aws_secret_access_key': CONF.boto.aws_secret}
+        if not all(ec2_client_args.values()):
+            ec2_client_args = self.get_aws_credentials(self.identity_client)
+
+        self.connection_data.update(ec2_client_args)
         return self.connect_method(**self.connection_data)
 
+    def get_aws_credentials(self, identity_client):
+        """
+        Obtain existing AWS credentials, or create new ones.
+        :param identity_client: identity client with embedded credentials
+        :return: EC2 credentials
+        """
+        ec2_cred_list = identity_client.list_user_ec2_credentials(
+            identity_client.user_id)
+        for cred in ec2_cred_list:
+            if cred['tenant_id'] == identity_client.tenant_id:
+                ec2_cred = cred
+                break
+        else:
+            ec2_cred = identity_client.create_user_ec2_credentials(
+                identity_client.user_id, identity_client.tenant_id)
+        if not all((ec2_cred, ec2_cred['access'], ec2_cred['secret'])):
+            raise lib_exc.NotFound("Unable to get access and secret keys")
+        else:
+            ec2_cred_aws = {}
+            ec2_cred_aws['aws_access_key_id'] = ec2_cred['access']
+            ec2_cred_aws['aws_secret_access_key'] = ec2_cred['secret']
+        return ec2_cred_aws
+
 
 class APIClientEC2(BotoClientBase):
 
     def connect_method(self, *args, **kwargs):
         return boto.connect_ec2(*args, **kwargs)
 
-    def __init__(self, *args, **kwargs):
-        super(APIClientEC2, self).__init__(*args, **kwargs)
+    def __init__(self, identity_client):
+        super(APIClientEC2, self).__init__(identity_client)
         insecure_ssl = CONF.identity.disable_ssl_certificate_validation
-        aws_access = CONF.boto.aws_access
-        aws_secret = CONF.boto.aws_secret
         purl = urlparse.urlparse(CONF.boto.ec2_url)
 
         region_name = CONF.compute.region
@@ -147,14 +134,12 @@
                 port = 443
         else:
             port = int(port)
-        self.connection_data = {"aws_access_key_id": aws_access,
-                                "aws_secret_access_key": aws_secret,
-                                "is_secure": purl.scheme == "https",
-                                "validate_certs": not insecure_ssl,
-                                "region": region,
-                                "host": purl.hostname,
-                                "port": port,
-                                "path": purl.path}
+        self.connection_data.update({"is_secure": purl.scheme == "https",
+                                     "validate_certs": not insecure_ssl,
+                                     "region": region,
+                                     "host": purl.hostname,
+                                     "port": port,
+                                     "path": purl.path})
 
     ALLOWED_METHODS = set(('create_key_pair', 'get_key_pair',
                            'delete_key_pair', 'import_key_pair',
@@ -207,11 +192,9 @@
     def connect_method(self, *args, **kwargs):
         return boto.connect_s3(*args, **kwargs)
 
-    def __init__(self, *args, **kwargs):
-        super(ObjectClientS3, self).__init__(*args, **kwargs)
+    def __init__(self, identity_client):
+        super(ObjectClientS3, self).__init__(identity_client)
         insecure_ssl = CONF.identity.disable_ssl_certificate_validation
-        aws_access = CONF.boto.aws_access
-        aws_secret = CONF.boto.aws_secret
         purl = urlparse.urlparse(CONF.boto.s3_url)
         port = purl.port
         if port is None:
@@ -221,14 +204,12 @@
                 port = 443
         else:
             port = int(port)
-        self.connection_data = {"aws_access_key_id": aws_access,
-                                "aws_secret_access_key": aws_secret,
-                                "is_secure": purl.scheme == "https",
-                                "validate_certs": not insecure_ssl,
-                                "host": purl.hostname,
-                                "port": port,
-                                "calling_format": boto.s3.connection.
-                                OrdinaryCallingFormat()}
+        self.connection_data.update({"is_secure": purl.scheme == "https",
+                                     "validate_certs": not insecure_ssl,
+                                     "host": purl.hostname,
+                                     "port": port,
+                                     "calling_format": boto.s3.connection.
+                                     OrdinaryCallingFormat()})
 
     ALLOWED_METHODS = set(('create_bucket', 'delete_bucket', 'generate_url',
                            'get_all_buckets', 'get_bucket', 'delete_key',
diff --git a/tempest/services/compute/json/agents_client.py b/tempest/services/compute/json/agents_client.py
index e17495f..403437d 100644
--- a/tempest/services/compute/json/agents_client.py
+++ b/tempest/services/compute/json/agents_client.py
@@ -15,8 +15,7 @@
 import json
 import urllib
 
-from tempest.api_schema.response.compute import agents as common_schema
-from tempest.api_schema.response.compute.v2 import agents as schema
+from tempest.api_schema.response.compute.v2_1 import agents as schema
 from tempest.common import service_client
 
 
@@ -32,7 +31,7 @@
             url += '?%s' % urllib.urlencode(params)
         resp, body = self.get(url)
         body = json.loads(body)
-        self.validate_response(common_schema.list_agents, resp, body)
+        self.validate_response(schema.list_agents, resp, body)
         return service_client.ResponseBodyList(resp, body['agents'])
 
     def create_agent(self, **kwargs):
diff --git a/tempest/services/compute/json/aggregates_client.py b/tempest/services/compute/json/aggregates_client.py
index 10955fd..36a347b 100644
--- a/tempest/services/compute/json/aggregates_client.py
+++ b/tempest/services/compute/json/aggregates_client.py
@@ -17,8 +17,7 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api_schema.response.compute import aggregates as schema
-from tempest.api_schema.response.compute.v2 import aggregates as v2_schema
+from tempest.api_schema.response.compute.v2_1 import aggregates as schema
 from tempest.common import service_client
 
 
@@ -44,7 +43,7 @@
         resp, body = self.post('os-aggregates', post_body)
 
         body = json.loads(body)
-        self.validate_response(v2_schema.create_aggregate, resp, body)
+        self.validate_response(schema.create_aggregate, resp, body)
         return service_client.ResponseBody(resp, body['aggregate'])
 
     def update_aggregate(self, aggregate_id, name, availability_zone=None):
@@ -63,7 +62,7 @@
     def delete_aggregate(self, aggregate_id):
         """Deletes the given aggregate."""
         resp, body = self.delete("os-aggregates/%s" % str(aggregate_id))
-        self.validate_response(v2_schema.delete_aggregate, resp, body)
+        self.validate_response(schema.delete_aggregate, resp, body)
         return service_client.ResponseBody(resp, body)
 
     def is_resource_deleted(self, id):
diff --git a/tempest/services/compute/json/availability_zone_client.py b/tempest/services/compute/json/availability_zone_client.py
index 343c412..b541a2c 100644
--- a/tempest/services/compute/json/availability_zone_client.py
+++ b/tempest/services/compute/json/availability_zone_client.py
@@ -15,7 +15,8 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import availability_zone as schema
+from tempest.api_schema.response.compute.v2_1 import availability_zone \
+    as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/certificates_client.py b/tempest/services/compute/json/certificates_client.py
index 4a30f1e..e6b72bb 100644
--- a/tempest/services/compute/json/certificates_client.py
+++ b/tempest/services/compute/json/certificates_client.py
@@ -15,8 +15,7 @@
 
 import json
 
-from tempest.api_schema.response.compute import certificates as schema
-from tempest.api_schema.response.compute.v2 import certificates as v2schema
+from tempest.api_schema.response.compute.v2_1 import certificates as schema
 from tempest.common import service_client
 
 
@@ -34,5 +33,5 @@
         url = "os-certificates"
         resp, body = self.post(url, None)
         body = json.loads(body)
-        self.validate_response(v2schema.create_certificate, resp, body)
+        self.validate_response(schema.create_certificate, resp, body)
         return service_client.ResponseBody(resp, body['certificate'])
diff --git a/tempest/services/compute/json/extensions_client.py b/tempest/services/compute/json/extensions_client.py
index 09561b3..5c69085 100644
--- a/tempest/services/compute/json/extensions_client.py
+++ b/tempest/services/compute/json/extensions_client.py
@@ -15,7 +15,7 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import extensions as schema
+from tempest.api_schema.response.compute.v2_1 import extensions as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/fixed_ips_client.py b/tempest/services/compute/json/fixed_ips_client.py
index 31cf5b2..7ba424f 100644
--- a/tempest/services/compute/json/fixed_ips_client.py
+++ b/tempest/services/compute/json/fixed_ips_client.py
@@ -15,7 +15,7 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import fixed_ips as schema
+from tempest.api_schema.response.compute.v2_1 import fixed_ips as schema
 from tempest.common import service_client
 
 
@@ -25,12 +25,12 @@
         url = "os-fixed-ips/%s" % (fixed_ip)
         resp, body = self.get(url)
         body = json.loads(body)
-        self.validate_response(schema.fixed_ips, resp, body)
+        self.validate_response(schema.get_fixed_ip, resp, body)
         return service_client.ResponseBody(resp, body['fixed_ip'])
 
     def reserve_fixed_ip(self, ip, body):
         """This reserves and unreserves fixed ips."""
         url = "os-fixed-ips/%s/action" % (ip)
         resp, body = self.post(url, json.dumps(body))
-        self.validate_response(schema.fixed_ip_action, resp, body)
+        self.validate_response(schema.reserve_fixed_ip, resp, body)
         return service_client.ResponseBody(resp)
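
A hedged usage sketch for the renamed validation points above; the action
body format follows the nova os-fixed-ips API, and the client instance and
IP address are illustrative::

    # Assumes `fixed_ips_client` is an instance of the client patched above
    # and '10.0.0.1' is a fixed IP known to nova.
    fixed_ips_client.reserve_fixed_ip('10.0.0.1', {"reserve": None})
    fixed_ips_client.reserve_fixed_ip('10.0.0.1', {"unreserve": None})
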
diff --git a/tempest/services/compute/json/flavors_client.py b/tempest/services/compute/json/flavors_client.py
index 433c325..25b1869 100644
--- a/tempest/services/compute/json/flavors_client.py
+++ b/tempest/services/compute/json/flavors_client.py
@@ -20,7 +20,7 @@
 from tempest.api_schema.response.compute import flavors_access as schema_access
 from tempest.api_schema.response.compute import flavors_extra_specs \
     as schema_extra_specs
-from tempest.api_schema.response.compute.v2 import flavors as v2schema
+from tempest.api_schema.response.compute.v2_1 import flavors as v2schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/floating_ips_client.py b/tempest/services/compute/json/floating_ips_client.py
index 0354ba4..5bad527 100644
--- a/tempest/services/compute/json/floating_ips_client.py
+++ b/tempest/services/compute/json/floating_ips_client.py
@@ -18,7 +18,7 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api_schema.response.compute.v2 import floating_ips as schema
+from tempest.api_schema.response.compute.v2_1 import floating_ips as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/hosts_client.py b/tempest/services/compute/json/hosts_client.py
index b06378b..de925a9 100644
--- a/tempest/services/compute/json/hosts_client.py
+++ b/tempest/services/compute/json/hosts_client.py
@@ -16,7 +16,7 @@
 import urllib
 
 from tempest.api_schema.response.compute import hosts as schema
-from tempest.api_schema.response.compute.v2 import hosts as v2_schema
+from tempest.api_schema.response.compute.v2_1 import hosts as v2_schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/hypervisor_client.py b/tempest/services/compute/json/hypervisor_client.py
index 380b5ce..bf4bc7f 100644
--- a/tempest/services/compute/json/hypervisor_client.py
+++ b/tempest/services/compute/json/hypervisor_client.py
@@ -16,7 +16,7 @@
 import json
 
 from tempest.api_schema.response.compute import hypervisors as common_schema
-from tempest.api_schema.response.compute.v2 import hypervisors as v2schema
+from tempest.api_schema.response.compute.v2_1 import hypervisors as v2schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/images_client.py b/tempest/services/compute/json/images_client.py
index 0ceb6d1..1223fef 100644
--- a/tempest/services/compute/json/images_client.py
+++ b/tempest/services/compute/json/images_client.py
@@ -18,7 +18,7 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api_schema.response.compute.v2 import images as schema
+from tempest.api_schema.response.compute.v2_1 import images as schema
 from tempest.common import service_client
 from tempest.common import waiters
 
diff --git a/tempest/services/compute/json/instance_usage_audit_log_client.py b/tempest/services/compute/json/instance_usage_audit_log_client.py
index 551d751..33ba76f 100644
--- a/tempest/services/compute/json/instance_usage_audit_log_client.py
+++ b/tempest/services/compute/json/instance_usage_audit_log_client.py
@@ -15,8 +15,8 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import instance_usage_audit_logs \
-    as schema
+from tempest.api_schema.response.compute.v2_1 import \
+    instance_usage_audit_logs as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/interfaces_client.py b/tempest/services/compute/json/interfaces_client.py
index 0c5516c..c3bfa99 100644
--- a/tempest/services/compute/json/interfaces_client.py
+++ b/tempest/services/compute/json/interfaces_client.py
@@ -16,9 +16,8 @@
 import json
 import time
 
-from tempest.api_schema.response.compute import interfaces as common_schema
 from tempest.api_schema.response.compute import servers as servers_schema
-from tempest.api_schema.response.compute.v2 import interfaces as schema
+from tempest.api_schema.response.compute.v2_1 import interfaces as schema
 from tempest.common import service_client
 from tempest import exceptions
 
@@ -46,17 +45,19 @@
         resp, body = self.post('servers/%s/os-interface' % server,
                                body=post_body)
         body = json.loads(body)
+        self.validate_response(schema.get_create_interfaces, resp, body)
         return service_client.ResponseBody(resp, body['interfaceAttachment'])
 
     def show_interface(self, server, port_id):
         resp, body = self.get('servers/%s/os-interface/%s' % (server, port_id))
         body = json.loads(body)
+        self.validate_response(schema.get_create_interfaces, resp, body)
         return service_client.ResponseBody(resp, body['interfaceAttachment'])
 
     def delete_interface(self, server, port_id):
         resp, body = self.delete('servers/%s/os-interface/%s' % (server,
                                                                  port_id))
-        self.validate_response(common_schema.delete_interface, resp, body)
+        self.validate_response(schema.delete_interface, resp, body)
         return service_client.ResponseBody(resp, body)
 
     def wait_for_interface_status(self, server, port_id, status):
diff --git a/tempest/services/compute/json/keypairs_client.py b/tempest/services/compute/json/keypairs_client.py
index 18729c3..722aefa 100644
--- a/tempest/services/compute/json/keypairs_client.py
+++ b/tempest/services/compute/json/keypairs_client.py
@@ -16,7 +16,7 @@
 import json
 
 from tempest.api_schema.response.compute import keypairs as common_schema
-from tempest.api_schema.response.compute.v2 import keypairs as schema
+from tempest.api_schema.response.compute.v2_1 import keypairs as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/limits_client.py b/tempest/services/compute/json/limits_client.py
index 8769906..d2aaec6 100644
--- a/tempest/services/compute/json/limits_client.py
+++ b/tempest/services/compute/json/limits_client.py
@@ -15,7 +15,7 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import limits as schema
+from tempest.api_schema.response.compute.v2_1 import limits as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/networks_client.py b/tempest/services/compute/json/networks_client.py
index ef1c058..0ae0920 100644
--- a/tempest/services/compute/json/networks_client.py
+++ b/tempest/services/compute/json/networks_client.py
@@ -20,11 +20,15 @@
 
 class NetworksClientJSON(service_client.ServiceClient):
 
-    def list_networks(self):
+    def list_networks(self, name=None):
         resp, body = self.get("os-networks")
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBodyList(resp, body['networks'])
+        if name:
+            networks = [n for n in body['networks'] if n['label'] == name]
+        else:
+            networks = body['networks']
+        return service_client.ResponseBodyList(resp, networks)
 
     def get_network(self, network_id):
         resp, body = self.get("os-networks/%s" % str(network_id))
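
The new ``name`` argument filters client-side on the nova-network ``label``
field. A minimal usage sketch, assuming a ``NetworksClientJSON`` instance
named ``networks_client`` and an existing network labelled ``private``::

    networks = networks_client.list_networks(name='private')
    if networks:
        # Each entry is the full nova-network dict; callers typically
        # consume the 'id' and 'label' keys.
        network_id = networks[0]['id']
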
diff --git a/tempest/services/compute/json/quotas_client.py b/tempest/services/compute/json/quotas_client.py
index ea0f423..89f4acd 100644
--- a/tempest/services/compute/json/quotas_client.py
+++ b/tempest/services/compute/json/quotas_client.py
@@ -15,9 +15,9 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2\
+from tempest.api_schema.response.compute.v2_1\
     import quota_classes as classes_schema
-from tempest.api_schema.response.compute.v2 import quotas as schema
+from tempest.api_schema.response.compute.v2_1 import quotas as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/security_group_default_rules_client.py b/tempest/services/compute/json/security_group_default_rules_client.py
index b370e00..3bf3263 100644
--- a/tempest/services/compute/json/security_group_default_rules_client.py
+++ b/tempest/services/compute/json/security_group_default_rules_client.py
@@ -15,7 +15,7 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import \
+from tempest.api_schema.response.compute.v2_1 import \
     security_group_default_rule as schema
 from tempest.common import service_client
 
diff --git a/tempest/services/compute/json/security_groups_client.py b/tempest/services/compute/json/security_groups_client.py
index 5aefa7b..d8c8d63 100644
--- a/tempest/services/compute/json/security_groups_client.py
+++ b/tempest/services/compute/json/security_groups_client.py
@@ -18,7 +18,7 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api_schema.response.compute.v2 import security_groups as schema
+from tempest.api_schema.response.compute.v2_1 import security_groups as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/servers_client.py b/tempest/services/compute/json/servers_client.py
index bd4fd0e..bd27668 100644
--- a/tempest/services/compute/json/servers_client.py
+++ b/tempest/services/compute/json/servers_client.py
@@ -21,7 +21,7 @@
 from tempest_lib import exceptions as lib_exc
 
 from tempest.api_schema.response.compute import servers as common_schema
-from tempest.api_schema.response.compute.v2 import servers as schema
+from tempest.api_schema.response.compute.v2_1 import servers as schema
 from tempest.common import service_client
 from tempest.common import waiters
 from tempest import exceptions
diff --git a/tempest/services/compute/json/tenant_networks_client.py b/tempest/services/compute/json/tenant_networks_client.py
index c86c817..11251f6 100644
--- a/tempest/services/compute/json/tenant_networks_client.py
+++ b/tempest/services/compute/json/tenant_networks_client.py
@@ -14,7 +14,7 @@
 
 import json
 
-from tempest.api_schema.response.compute.v2 import tenant_networks as schema
+from tempest.api_schema.response.compute.v2_1 import tenant_networks as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/tenant_usages_client.py b/tempest/services/compute/json/tenant_usages_client.py
index bbc1051..ff6e7a2 100644
--- a/tempest/services/compute/json/tenant_usages_client.py
+++ b/tempest/services/compute/json/tenant_usages_client.py
@@ -16,7 +16,7 @@
 import json
 import urllib
 
-from tempest.api_schema.response.compute.v2 import tenant_usages as schema
+from tempest.api_schema.response.compute.v2_1 import tenant_usages as schema
 from tempest.common import service_client
 
 
diff --git a/tempest/services/compute/json/volumes_extensions_client.py b/tempest/services/compute/json/volumes_extensions_client.py
index b2d5cf9..ba5921e 100644
--- a/tempest/services/compute/json/volumes_extensions_client.py
+++ b/tempest/services/compute/json/volumes_extensions_client.py
@@ -19,7 +19,7 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api_schema.response.compute.v2 import volumes as schema
+from tempest.api_schema.response.compute.v2_1 import volumes as schema
 from tempest.common import service_client
 from tempest import exceptions
 
diff --git a/tempest/services/identity/v2/json/identity_client.py b/tempest/services/identity/v2/json/identity_client.py
index 6c4a6b4..039f9bb 100644
--- a/tempest/services/identity/v2/json/identity_client.py
+++ b/tempest/services/identity/v2/json/identity_client.py
@@ -269,3 +269,15 @@
         body = json.loads(body)
         return service_client.ResponseBodyList(resp,
                                                body['extensions']['values'])
+
+    def create_user_ec2_credentials(self, user_id, tenant_id):
+        post_body = json.dumps({'tenant_id': tenant_id})
+        resp, body = self.post('/users/%s/credentials/OS-EC2' % user_id,
+                               post_body)
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBody(resp, self._parse_resp(body))
+
+    def list_user_ec2_credentials(self, user_id):
+        resp, body = self.get('/users/%s/credentials/OS-EC2' % user_id)
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBodyList(resp, self._parse_resp(body))
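
A short sketch of how the two new OS-EC2 calls could be exercised together;
``identity_client``, ``user_id`` and ``tenant_id`` are assumed to exist, and
the response keys follow the keystone v2 EC2 credential format::

    created = identity_client.create_user_ec2_credentials(user_id, tenant_id)
    access = created['access']      # EC2 access key issued by keystone

    listed = identity_client.list_user_ec2_credentials(user_id)
    assert any(cred['access'] == access for cred in listed)
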
diff --git a/tempest/services/identity/v3/json/identity_client.py b/tempest/services/identity/v3/json/identity_client.py
index be5aa80..bc90fd1 100644
--- a/tempest/services/identity/v3/json/identity_client.py
+++ b/tempest/services/identity/v3/json/identity_client.py
@@ -242,9 +242,12 @@
         self.expected_success(204, resp.status)
         return service_client.ResponseBody(resp, body)
 
-    def list_domains(self):
+    def list_domains(self, params=None):
         """List Domains."""
-        resp, body = self.get('domains')
+        url = 'domains'
+        if params:
+            url += '?%s' % urllib.urlencode(params)
+        resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
         return service_client.ResponseBodyList(resp, body['domains'])
diff --git a/tempest/services/identity/v3/json/token_client.py b/tempest/services/identity/v3/json/token_client.py
index b0824a7..3e37403 100644
--- a/tempest/services/identity/v3/json/token_client.py
+++ b/tempest/services/identity/v3/json/token_client.py
@@ -37,22 +37,30 @@
 
         self.auth_url = auth_url
 
-    def auth(self, user=None, password=None, project=None, user_type='id',
-             user_domain=None, project_domain=None, token=None):
+    def auth(self, user_id=None, username=None, password=None, project_id=None,
+             project_name=None, user_domain_id=None, user_domain_name=None,
+             project_domain_id=None, project_domain_name=None, domain_id=None,
+             domain_name=None, token=None):
         """
-        :param user: user id or name, as specified in user_type
-        :param user_domain: the user domain
-        :param project_domain: the project domain
+        :param user_id: user id
+        :param username: user name
+        :param user_domain_id: the user domain id
+        :param user_domain_name: the user domain name
+        :param project_domain_id: the project domain id
+        :param project_domain_name: the project domain name
+        :param domain_id: a domain id to scope to
+        :param domain_name: a domain name to scope to
+        :param project_id: a project id to scope to
+        :param project_name: a project name to scope to
         :param token: a token to re-scope.
 
-        Accepts different combinations of credentials. Restrictions:
-        - project and domain are only name (no id)
+        Accepts different combinations of credentials.
         Sample valid combinations:
         - token
-        - token, project, project_domain
+        - token, project_name, project_domain_id
         - user_id, password
-        - username, password, user_domain
-        - username, password, project, user_domain, project_domain
+        - username, password, user_domain_id
+        - username, password, project_name, user_domain_id, project_domain_id
         Validation is left to the server side.
         """
         creds = {
@@ -68,25 +76,45 @@
             id_obj['token'] = {
                 'id': token
             }
-        if user and password:
+
+        if (user_id or username) and password:
             id_obj['methods'].append('password')
             id_obj['password'] = {
                 'user': {
                     'password': password,
                 }
             }
-            if user_type == 'id':
-                id_obj['password']['user']['id'] = user
+            if user_id:
+                id_obj['password']['user']['id'] = user_id
             else:
-                id_obj['password']['user']['name'] = user
-            if user_domain is not None:
-                _domain = dict(name=user_domain)
+                id_obj['password']['user']['name'] = username
+
+            _domain = None
+            if user_domain_id is not None:
+                _domain = dict(id=user_domain_id)
+            elif user_domain_name is not None:
+                _domain = dict(name=user_domain_name)
+            if _domain:
                 id_obj['password']['user']['domain'] = _domain
-        if project is not None:
-            _domain = dict(name=project_domain)
-            _project = dict(name=project, domain=_domain)
-            scope = dict(project=_project)
-            creds['auth']['scope'] = scope
+
+        if (project_id or project_name):
+            _project = dict()
+
+            if project_id:
+                _project['id'] = project_id
+            elif project_name:
+                _project['name'] = project_name
+
+                if project_domain_id is not None:
+                    _project['domain'] = {'id': project_domain_id}
+                elif project_domain_name is not None:
+                    _project['domain'] = {'name': project_domain_name}
+
+            creds['auth']['scope'] = dict(project=_project)
+        elif domain_id:
+            creds['auth']['scope'] = dict(domain={'id': domain_id})
+        elif domain_name:
+            creds['auth']['scope'] = dict(domain={'name': domain_name})
 
         body = json.dumps(creds)
         resp, body = self.post(self.auth_url, body=body)
@@ -120,15 +148,22 @@
 
         return resp, json.loads(resp_body)
 
-    def get_token(self, user, password, project=None, project_domain='Default',
-                  user_domain='Default', auth_data=False):
+    def get_token(self, **kwargs):
         """
-        :param user: username
         Returns (token id, token data) for supplied credentials
         """
-        body = self.auth(user, password, project, user_type='name',
-                         user_domain=user_domain,
-                         project_domain=project_domain)
+
+        auth_data = kwargs.pop('auth_data', False)
+
+        if not (kwargs.get('user_domain_id') or
+                kwargs.get('user_domain_name')):
+            kwargs['user_domain_name'] = 'Default'
+
+        if not (kwargs.get('project_domain_id') or
+                kwargs.get('project_domain_name')):
+            kwargs['project_domain_name'] = 'Default'
+
+        body = self.auth(**kwargs)
 
         token = body.response.get('x-subject-token')
         if auth_data:
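
For reference, a sketch of the request body the reworked ``auth()`` assembles
for the common username/password case scoped to a project, with illustrative
values; ``get_token()`` now fills in ``user_domain_name`` and
``project_domain_name`` with ``Default`` when no domain is supplied::

    {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "demo",
                        "password": "secret",
                        "domain": {"name": "Default"}
                    }
                }
            },
            "scope": {
                "project": {
                    "name": "demo-project",
                    "domain": {"name": "Default"}
                }
            }
        }
    }
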
diff --git a/tempest/services/image/v1/json/image_client.py b/tempest/services/image/v1/json/image_client.py
index 01a9c54..ec7900b 100644
--- a/tempest/services/image/v1/json/image_client.py
+++ b/tempest/services/image/v1/json/image_client.py
@@ -20,13 +20,13 @@
 import time
 import urllib
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import misc as misc_utils
 from tempest_lib import exceptions as lib_exc
 
 from tempest.common import glance_http
 from tempest.common import service_client
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 
 LOG = logging.getLogger(__name__)
 
@@ -36,7 +36,7 @@
     def __init__(self, auth_provider, catalog_type, region, endpoint_type=None,
                  build_interval=None, build_timeout=None,
                  disable_ssl_certificate_validation=None,
-                 ca_certs=None, **kwargs):
+                 ca_certs=None, trace_requests=None):
         super(ImageClientJSON, self).__init__(
             auth_provider,
             catalog_type,
@@ -47,7 +47,7 @@
             disable_ssl_certificate_validation=(
                 disable_ssl_certificate_validation),
             ca_certs=ca_certs,
-            **kwargs)
+            trace_requests=trace_requests)
         self._http = None
         self.dscv = disable_ssl_certificate_validation
         self.ca_certs = ca_certs
diff --git a/tempest/services/image/v2/json/image_client.py b/tempest/services/image/v2/json/image_client.py
index e55a824..6b04144 100644
--- a/tempest/services/image/v2/json/image_client.py
+++ b/tempest/services/image/v2/json/image_client.py
@@ -28,7 +28,7 @@
     def __init__(self, auth_provider, catalog_type, region, endpoint_type=None,
                  build_interval=None, build_timeout=None,
                  disable_ssl_certificate_validation=None, ca_certs=None,
-                 **kwargs):
+                 trace_requests=None):
         super(ImageClientV2JSON, self).__init__(
             auth_provider,
             catalog_type,
@@ -39,7 +39,7 @@
             disable_ssl_certificate_validation=(
                 disable_ssl_certificate_validation),
             ca_certs=ca_certs,
-            **kwargs)
+            trace_requests=trace_requests)
         self._http = None
         self.dscv = disable_ssl_certificate_validation
         self.ca_certs = ca_certs
diff --git a/tempest/services/telemetry/json/telemetry_client.py b/tempest/services/telemetry/json/telemetry_client.py
index a249625..36c123b 100644
--- a/tempest/services/telemetry/json/telemetry_client.py
+++ b/tempest/services/telemetry/json/telemetry_client.py
@@ -15,8 +15,9 @@
 
 import urllib
 
+from oslo_serialization import jsonutils as json
+
 from tempest.common import service_client
-from tempest.openstack.common import jsonutils as json
 
 
 class TelemetryClientJSON(service_client.ServiceClient):
diff --git a/tempest/services/volume/json/admin/volume_quotas_client.py b/tempest/services/volume/json/admin/volume_quotas_client.py
index 616f8e4..abd36c1 100644
--- a/tempest/services/volume/json/admin/volume_quotas_client.py
+++ b/tempest/services/volume/json/admin/volume_quotas_client.py
@@ -14,8 +14,9 @@
 
 import urllib
 
+from oslo_serialization import jsonutils
+
 from tempest.common import service_client
-from tempest.openstack.common import jsonutils
 
 
 class BaseVolumeQuotasClientJSON(service_client.ServiceClient):
diff --git a/tempest/services/volume/json/snapshots_client.py b/tempest/services/volume/json/snapshots_client.py
index 8430b63..9f88085 100644
--- a/tempest/services/volume/json/snapshots_client.py
+++ b/tempest/services/volume/json/snapshots_client.py
@@ -14,11 +14,11 @@
 import time
 import urllib
 
+from oslo_log import log as logging
 from tempest_lib import exceptions as lib_exc
 
 from tempest.common import service_client
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/stress/actions/unit_test.py b/tempest/stress/actions/unit_test.py
index 2f1d28f..c376693 100644
--- a/tempest/stress/actions/unit_test.py
+++ b/tempest/stress/actions/unit_test.py
@@ -10,9 +10,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+from oslo_utils import importutils
+
 from tempest import config
-from tempest.openstack.common import importutils
-from tempest.openstack.common import log as logging
 import tempest.stress.stressaction as stressaction
 
 CONF = config.CONF
diff --git a/tempest/stress/actions/volume_attach_verify.py b/tempest/stress/actions/volume_attach_verify.py
index 0baf2de..c8d9f06 100644
--- a/tempest/stress/actions/volume_attach_verify.py
+++ b/tempest/stress/actions/volume_attach_verify.py
@@ -53,8 +53,8 @@
 
     def _create_sec_group(self):
         sec_grp_cli = self.manager.security_groups_client
-        s_name = data_utils.rand_name('sec_grp-')
-        s_description = data_utils.rand_name('desc-')
+        s_name = data_utils.rand_name('sec_grp')
+        s_description = data_utils.rand_name('desc')
         self.sec_grp = sec_grp_cli.create_security_group(s_name,
                                                          s_description)
         create_rule = sec_grp_cli.create_security_group_rule
diff --git a/tempest/stress/cleanup.py b/tempest/stress/cleanup.py
index 161d93f..d0b1be1 100644
--- a/tempest/stress/cleanup.py
+++ b/tempest/stress/cleanup.py
@@ -14,8 +14,9 @@
 #    See the License for the specific language governing permissions and
 #    limitations under the License.
 
+from oslo_log import log as logging
+
 from tempest import clients
-from tempest.openstack.common import log as logging
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tempest/stress/driver.py b/tempest/stress/driver.py
index e007a49..e84d627 100644
--- a/tempest/stress/driver.py
+++ b/tempest/stress/driver.py
@@ -17,6 +17,8 @@
 import signal
 import time
 
+from oslo_log import log as logging
+from oslo_utils import importutils
 from six import moves
 from tempest_lib.common.utils import data_utils
 
@@ -25,8 +27,6 @@
 from tempest.common import ssh
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import importutils
-from tempest.openstack.common import log as logging
 from tempest.stress import cleanup
 
 CONF = config.CONF
@@ -132,7 +132,14 @@
         computes = _get_compute_nodes(controller, ssh_user, ssh_key)
         for node in computes:
             do_ssh("rm -f %s" % logfiles, node, ssh_user, ssh_key)
+    skip = False
     for test in tests:
+        for service in test.get('required_services', []):
+            if not CONF.service_available.get(service):
+                skip = True
+                break
+        if skip:
+            break
         if test.get('use_admin', False):
             manager = admin_manager
         else:
diff --git a/tempest/stress/etc/stress-tox-job.json b/tempest/stress/etc/stress-tox-job.json
index dffc469..9cee316 100644
--- a/tempest/stress/etc/stress-tox-job.json
+++ b/tempest/stress/etc/stress-tox-job.json
@@ -15,5 +15,14 @@
   "use_admin": false,
   "use_isolated_tenants": false,
   "kwargs": {}
+  },
+  {"action": "tempest.stress.actions.unit_test.UnitTest",
+  "threads": 4,
+  "use_admin": false,
+  "use_isolated_tenants": false,
+  "required_services": ["neutron"],
+  "kwargs": {"test_method": "tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_stop_start",
+             "class_setup_per": "process"}
   }
 ]
+
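
The ``required_services`` key added to this job definition is consumed by the
driver change above: every listed service is checked against
``CONF.service_available`` and the entry is not dispatched when one of them is
disabled. A standalone sketch of that gate, with a plain dict standing in for
the config object::

    def services_available(test_entry, service_available):
        """Return True only if every required service is enabled.

        `service_available` maps service name to bool, standing in for
        CONF.service_available in this sketch.
        """
        return all(service_available.get(service, False)
                   for service in test_entry.get('required_services', []))

    entry = {"action": "tempest.stress.actions.unit_test.UnitTest",
             "required_services": ["neutron"]}
    assert services_available(entry, {"neutron": True})
    assert not services_available(entry, {"neutron": False})
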
diff --git a/tempest/stress/stressaction.py b/tempest/stress/stressaction.py
index 286e022..a3d0d17 100644
--- a/tempest/stress/stressaction.py
+++ b/tempest/stress/stressaction.py
@@ -18,7 +18,7 @@
 
 import six
 
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 
 
 @six.add_metaclass(abc.ABCMeta)
diff --git a/tempest/test.py b/tempest/test.py
index f04aff7..da936b4 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -24,17 +24,18 @@
 import uuid
 
 import fixtures
+from oslo_log import log as logging
+from oslo_utils import importutils
 import six
 import testscenarios
 import testtools
 
 from tempest import clients
 from tempest.common import credentials
+from tempest.common import fixed_network
 import tempest.common.generator.valid_generator as valid
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import importutils
-from tempest.openstack.common import log as logging
 
 LOG = logging.getLogger(__name__)
 
@@ -94,7 +95,8 @@
         'object_storage': CONF.service_available.swift,
         'dashboard': CONF.service_available.horizon,
         'telemetry': CONF.service_available.ceilometer,
-        'data_processing': CONF.service_available.sahara
+        'data_processing': CONF.service_available.sahara,
+        'database': CONF.service_available.trove
     }
     return service_list
 
@@ -108,7 +110,7 @@
     def decorator(f):
         services = ['compute', 'image', 'baremetal', 'volume', 'orchestration',
                     'network', 'identity', 'object_storage', 'dashboard',
-                    'telemetry', 'data_processing']
+                    'telemetry', 'data_processing', 'database']
         for service in args:
             if service not in services:
                 raise exceptions.InvalidServiceTag('%s is not a valid '
@@ -377,17 +379,19 @@
                                                    level=None))
 
     @classmethod
-    def get_client_manager(cls):
+    def get_client_manager(cls, identity_version=None):
         """
         Returns an OpenStack client manager
         """
         force_tenant_isolation = getattr(cls, 'force_tenant_isolation', None)
+        identity_version = identity_version or CONF.identity.auth_version
 
         if (not hasattr(cls, 'isolated_creds') or
             not cls.isolated_creds.name == cls.__name__):
             cls.isolated_creds = credentials.get_isolated_credentials(
                 name=cls.__name__, network_resources=cls.network_resources,
                 force_tenant_isolation=force_tenant_isolation,
+                identity_version=identity_version
             )
 
         creds = cls.isolated_creds.get_primary_creds()
@@ -432,6 +436,21 @@
                 'subnet': subnet,
                 'dhcp': dhcp}
 
+    @classmethod
+    def get_tenant_network(cls):
+        """Get the network to be used in testing
+
+        :return: network dict including 'id' and 'name'
+        """
+        # Make sure isolated_creds exists and get a network client
+        networks_client = cls.get_client_manager().networks_client
+        isolated_creds = getattr(cls, 'isolated_creds', None)
+        if credentials.is_admin_available():
+            admin_creds = isolated_creds.get_admin_creds()
+            networks_client = clients.Manager(admin_creds).networks_client
+        return fixed_network.get_tenant_network(isolated_creds,
+                                                networks_client)
+
     def assertEmpty(self, list, msg=None):
         self.assertTrue(len(list) == 0, msg)
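
A hedged sketch of how a test class built on the base class above might
consume the new ``get_tenant_network()`` helper; the test class name and the
commented-out server creation call are illustrative, not an exact Tempest
API::

    from tempest import test

    class TestBootOnTenantNetwork(test.BaseTestCase):  # hypothetical test
        def test_boot_server(self):
            network = self.get_tenant_network()
            kwargs = {}
            if network and network.get('id'):
                # Attach the server to the discovered tenant network.
                kwargs['networks'] = [{'uuid': network['id']}]
            # self.create_server(**kwargs)  # illustrative helper call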
 
diff --git a/tempest/tests/cmd/test_verify_tempest_config.py b/tempest/tests/cmd/test_verify_tempest_config.py
index 17e2550..b9afd5e 100644
--- a/tempest/tests/cmd/test_verify_tempest_config.py
+++ b/tempest/tests/cmd/test_verify_tempest_config.py
@@ -15,10 +15,10 @@
 import json
 
 import mock
+from oslotest import mockpatch
 
 from tempest.cmd import verify_tempest_config
 from tempest import config
-from tempest.openstack.common.fixture import mockpatch
 from tempest.tests import base
 from tempest.tests import fake_config
 
@@ -49,9 +49,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': {'values': [{'id': 'v2.0'}, {'id': 'v3.0'}]}}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         versions = verify_tempest_config._get_api_versions(fake_os, 'keystone')
         self.assertIn('v2.0', versions)
@@ -63,9 +62,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': [{'id': 'v1.0'}, {'id': 'v2.0'}]}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         versions = verify_tempest_config._get_api_versions(fake_os, 'cinder')
         self.assertIn('v1.0', versions)
@@ -77,9 +75,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': [{'id': 'v2.0'}, {'id': 'v3.0'}]}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         versions = verify_tempest_config._get_api_versions(fake_os, 'nova')
         self.assertIn('v2.0', versions)
@@ -109,9 +106,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': {'values': [{'id': 'v2.0'}]}}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         with mock.patch.object(verify_tempest_config,
                                'print_and_or_update') as print_mock:
@@ -126,9 +122,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': {'values': [{'id': 'v3.0'}]}}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         with mock.patch.object(verify_tempest_config,
                                'print_and_or_update') as print_mock:
@@ -143,9 +138,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': [{'id': 'v1.0'}]}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         with mock.patch.object(verify_tempest_config,
                                'print_and_or_update') as print_mock:
@@ -159,9 +153,8 @@
             return_value='http://fake_endpoint:5000'))
         fake_resp = {'versions': [{'id': 'v2.0'}]}
         fake_resp = json.dumps(fake_resp)
-        self.useFixture(mockpatch.PatchObject(
-            verify_tempest_config.RAW_HTTP, 'request',
-            return_value=(None, fake_resp)))
+        self.useFixture(mockpatch.Patch('httplib2.Http.request',
+                                        return_value=(None, fake_resp)))
         fake_os = mock.MagicMock()
         with mock.patch.object(verify_tempest_config,
                                'print_and_or_update') as print_mock:
diff --git a/tempest/tests/common/test_accounts.py b/tempest/tests/common/test_accounts.py
index 58e3c0c..2a98a06 100644
--- a/tempest/tests/common/test_accounts.py
+++ b/tempest/tests/common/test_accounts.py
@@ -14,10 +14,11 @@
 
 import hashlib
 import os
-import tempfile
 
 import mock
-from oslo.config import cfg
+from oslo_concurrency.fixture import lockutils as lockutils_fixtures
+from oslo_concurrency import lockutils
+from oslo_config import cfg
 from oslotest import mockpatch
 
 from tempest import auth
@@ -36,9 +37,7 @@
         super(TestAccount, self).setUp()
         self.useFixture(fake_config.ConfigFixture())
         self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
-        self.temp_dir = tempfile.mkdtemp()
-        cfg.CONF.set_default('lock_path', self.temp_dir)
-        self.addCleanup(os.rmdir, self.temp_dir)
+        self.useFixture(lockutils_fixtures.ExternalLockFixture())
         self.test_accounts = [
             {'username': 'test_user1', 'tenant_name': 'test_tenant1',
              'password': 'p'},
@@ -83,7 +82,7 @@
     def test_get_hash(self):
         self.stubs.Set(token_client.TokenClientJSON, 'raw_request',
                        fake_identity._fake_v2_response)
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
         hash_list = self._get_hash_list(self.test_accounts)
         test_cred_dict = self.test_accounts[3]
         test_creds = auth.get_credentials(fake_identity.FAKE_AUTH_URL,
@@ -92,7 +91,7 @@
         self.assertEqual(hash_list[3], results)
 
     def test_get_hash_dict(self):
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
         hash_dict = test_account_class.get_hash_dict(self.test_accounts)
         hash_list = self._get_hash_list(self.test_accounts)
         for hash in hash_list:
@@ -103,7 +102,7 @@
         # Emulate the lock existing on the filesystem
         self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
         with mock.patch('__builtin__.open', mock.mock_open(), create=True):
-            test_account_class = accounts.Accounts('test_name')
+            test_account_class = accounts.Accounts('v2', 'test_name')
             res = test_account_class._create_hash_file('12345')
         self.assertFalse(res, "_create_hash_file should return False if the "
                          "pseudo-lock file already exists")
@@ -112,46 +111,48 @@
         # Emulate the lock not existing on the filesystem
         self.useFixture(mockpatch.Patch('os.path.isfile', return_value=False))
         with mock.patch('__builtin__.open', mock.mock_open(), create=True):
-            test_account_class = accounts.Accounts('test_name')
+            test_account_class = accounts.Accounts('v2', 'test_name')
             res = test_account_class._create_hash_file('12345')
         self.assertTrue(res, "_create_hash_file should return True if the "
                         "pseudo-lock doesn't already exist")
 
-    @mock.patch('tempest.openstack.common.lockutils.lock')
+    @mock.patch('oslo_concurrency.lockutils.lock')
     def test_get_free_hash_no_previous_accounts(self, lock_mock):
         # Emulate no pre-existing lock
         self.useFixture(mockpatch.Patch('os.path.isdir', return_value=False))
         hash_list = self._get_hash_list(self.test_accounts)
         mkdir_mock = self.useFixture(mockpatch.Patch('os.mkdir'))
         self.useFixture(mockpatch.Patch('os.path.isfile', return_value=False))
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
         with mock.patch('__builtin__.open', mock.mock_open(),
                         create=True) as open_mock:
             test_account_class._get_free_hash(hash_list)
-            lock_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+            lock_path = os.path.join(lockutils.get_lock_path(accounts.CONF),
+                                     'test_accounts',
                                      hash_list[0])
             open_mock.assert_called_once_with(lock_path, 'w')
-        mkdir_path = os.path.join(accounts.CONF.lock_path, 'test_accounts')
+        mkdir_path = os.path.join(accounts.CONF.oslo_concurrency.lock_path,
+                                  'test_accounts')
         mkdir_mock.mock.assert_called_once_with(mkdir_path)
 
-    @mock.patch('tempest.openstack.common.lockutils.lock')
+    @mock.patch('oslo_concurrency.lockutils.lock')
     def test_get_free_hash_no_free_accounts(self, lock_mock):
         hash_list = self._get_hash_list(self.test_accounts)
         # Emulate pre-existing lock dir
         self.useFixture(mockpatch.Patch('os.path.isdir', return_value=True))
         # Emulate all locks in list are in use
         self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
         with mock.patch('__builtin__.open', mock.mock_open(), create=True):
             self.assertRaises(exceptions.InvalidConfiguration,
                               test_account_class._get_free_hash, hash_list)
 
-    @mock.patch('tempest.openstack.common.lockutils.lock')
+    @mock.patch('oslo_concurrency.lockutils.lock')
     def test_get_free_hash_some_in_use_accounts(self, lock_mock):
         # Emulate no pre-existing lock
         self.useFixture(mockpatch.Patch('os.path.isdir', return_value=True))
         hash_list = self._get_hash_list(self.test_accounts)
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
 
         def _fake_is_file(path):
             # Fake isfile() to return that the path exists unless a specific
@@ -164,28 +165,31 @@
         with mock.patch('__builtin__.open', mock.mock_open(),
                         create=True) as open_mock:
             test_account_class._get_free_hash(hash_list)
-            lock_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+            lock_path = os.path.join(lockutils.get_lock_path(accounts.CONF),
+                                     'test_accounts',
                                      hash_list[3])
             open_mock.assert_has_calls([mock.call(lock_path, 'w')])
 
-    @mock.patch('tempest.openstack.common.lockutils.lock')
+    @mock.patch('oslo_concurrency.lockutils.lock')
     def test_remove_hash_last_account(self, lock_mock):
         hash_list = self._get_hash_list(self.test_accounts)
         # Pretend the pseudo-lock is there
         self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
         # Pretend the lock dir is empty
         self.useFixture(mockpatch.Patch('os.listdir', return_value=[]))
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
         remove_mock = self.useFixture(mockpatch.Patch('os.remove'))
         rmdir_mock = self.useFixture(mockpatch.Patch('os.rmdir'))
         test_account_class.remove_hash(hash_list[2])
-        hash_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+        hash_path = os.path.join(lockutils.get_lock_path(accounts.CONF),
+                                 'test_accounts',
                                  hash_list[2])
-        lock_path = os.path.join(accounts.CONF.lock_path, 'test_accounts')
+        lock_path = os.path.join(accounts.CONF.oslo_concurrency.lock_path,
+                                 'test_accounts')
         remove_mock.mock.assert_called_once_with(hash_path)
         rmdir_mock.mock.assert_called_once_with(lock_path)
 
-    @mock.patch('tempest.openstack.common.lockutils.lock')
+    @mock.patch('oslo_concurrency.lockutils.lock')
     def test_remove_hash_not_last_account(self, lock_mock):
         hash_list = self._get_hash_list(self.test_accounts)
         # Pretend the pseudo-lock is there
@@ -193,17 +197,18 @@
         # Pretend the lock dir is empty
         self.useFixture(mockpatch.Patch('os.listdir', return_value=[
             hash_list[1], hash_list[4]]))
-        test_account_class = accounts.Accounts('test_name')
+        test_account_class = accounts.Accounts('v2', 'test_name')
         remove_mock = self.useFixture(mockpatch.Patch('os.remove'))
         rmdir_mock = self.useFixture(mockpatch.Patch('os.rmdir'))
         test_account_class.remove_hash(hash_list[2])
-        hash_path = os.path.join(accounts.CONF.lock_path, 'test_accounts',
+        hash_path = os.path.join(lockutils.get_lock_path(accounts.CONF),
+                                 'test_accounts',
                                  hash_list[2])
         remove_mock.mock.assert_called_once_with(hash_path)
         rmdir_mock.mock.assert_not_called()
 
     def test_is_multi_user(self):
-        test_accounts_class = accounts.Accounts('test_name')
+        test_accounts_class = accounts.Accounts('v2', 'test_name')
         self.assertTrue(test_accounts_class.is_multi_user())
 
     def test_is_not_multi_user(self):
@@ -211,14 +216,14 @@
         self.useFixture(mockpatch.Patch(
             'tempest.common.accounts.read_accounts_yaml',
             return_value=self.test_accounts))
-        test_accounts_class = accounts.Accounts('test_name')
+        test_accounts_class = accounts.Accounts('v2', 'test_name')
         self.assertFalse(test_accounts_class.is_multi_user())
 
     def test__get_creds_by_roles_one_role(self):
         self.useFixture(mockpatch.Patch(
             'tempest.common.accounts.read_accounts_yaml',
             return_value=self.test_accounts))
-        test_accounts_class = accounts.Accounts('test_name')
+        test_accounts_class = accounts.Accounts('v2', 'test_name')
         hashes = test_accounts_class.hash_dict['roles']['role4']
         temp_hash = hashes[0]
         get_free_hash_mock = self.useFixture(mockpatch.PatchObject(
@@ -235,7 +240,7 @@
         self.useFixture(mockpatch.Patch(
             'tempest.common.accounts.read_accounts_yaml',
             return_value=self.test_accounts))
-        test_accounts_class = accounts.Accounts('test_name')
+        test_accounts_class = accounts.Accounts('v2', 'test_name')
         hashes = test_accounts_class.hash_dict['roles']['role4']
         hashes2 = test_accounts_class.hash_dict['roles']['role2']
         hashes = list(set(hashes) & set(hashes2))
@@ -254,7 +259,7 @@
         self.useFixture(mockpatch.Patch(
             'tempest.common.accounts.read_accounts_yaml',
             return_value=self.test_accounts))
-        test_accounts_class = accounts.Accounts('test_name')
+        test_accounts_class = accounts.Accounts('v2', 'test_name')
         hashes = test_accounts_class.hash_dict['creds'].keys()
         admin_hashes = test_accounts_class.hash_dict['roles'][
             cfg.CONF.identity.admin_role]
@@ -277,9 +282,7 @@
         super(TestNotLockingAccount, self).setUp()
         self.useFixture(fake_config.ConfigFixture())
         self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
-        self.temp_dir = tempfile.mkdtemp()
-        cfg.CONF.set_default('lock_path', self.temp_dir)
-        self.addCleanup(os.rmdir, self.temp_dir)
+        self.useFixture(lockutils_fixtures.ExternalLockFixture())
         self.test_accounts = [
             {'username': 'test_user1', 'tenant_name': 'test_tenant1',
              'password': 'p'},
@@ -295,7 +298,7 @@
         self.useFixture(mockpatch.Patch('os.path.isfile', return_value=True))
 
     def test_get_creds(self):
-        test_accounts_class = accounts.NotLockingAccounts('test_name')
+        test_accounts_class = accounts.NotLockingAccounts('v2', 'test_name')
         for i in xrange(len(self.test_accounts)):
             creds = test_accounts_class.get_creds(i)
             msg = "Empty credentials returned for ID %s" % str(i)
diff --git a/tempest/tests/common/test_cred_provider.py b/tempest/tests/common/test_cred_provider.py
index 3f7c0f8..76430ac 100644
--- a/tempest/tests/common/test_cred_provider.py
+++ b/tempest/tests/common/test_cred_provider.py
@@ -12,16 +12,18 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from oslo.config import cfg
+from oslo_config import cfg
 
 from tempest import auth
 from tempest.common import cred_provider
 from tempest.common import tempest_fixtures as fixtures
+from tempest import config
 from tempest.services.identity.v2.json import token_client as v2_client
 from tempest.services.identity.v3.json import token_client as v3_client
+from tempest.tests import fake_config
 from tempest.tests import fake_identity
-# Note: eventually the auth module will move to tempest-lib, and so wil its
-# unit tests. *CredentialsTests will be imported from tempest-lib then.
+# Note(andreaf): once credentials tests move to tempest-lib, I will copy the
+# parts of them required by these here.
 from tempest.tests import test_credentials as test_creds
 
 
@@ -39,6 +41,8 @@
 
     def setUp(self):
         super(ConfiguredV2CredentialsTests, self).setUp()
+        self.useFixture(fake_config.ConfigFixture())
+        self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
         self.stubs.Set(self.tokenclient_class, 'raw_request',
                        self.identity_response)
 
diff --git a/tempest/tests/common/utils/linux/test_remote_client.py b/tempest/tests/common/utils/linux/test_remote_client.py
index e8650c5..40b7b32 100644
--- a/tempest/tests/common/utils/linux/test_remote_client.py
+++ b/tempest/tests/common/utils/linux/test_remote_client.py
@@ -14,11 +14,11 @@
 
 import time
 
-from oslo.config import cfg
+from oslo_config import cfg
+from oslotest import mockpatch
 
 from tempest.common.utils.linux import remote_client
 from tempest import config
-from tempest.openstack.common.fixture import mockpatch
 from tempest.tests import base
 from tempest.tests import fake_config
 
diff --git a/tempest/tests/fake_config.py b/tempest/tests/fake_config.py
index 2f8efa1..4898c9c 100644
--- a/tempest/tests/fake_config.py
+++ b/tempest/tests/fake_config.py
@@ -14,19 +14,17 @@
 
 import os
 
-from oslo.config import cfg
+from oslo_concurrency import lockutils
+from oslo_config import cfg
+from oslo_config import fixture as conf_fixture
 
 from tempest import config
-from tempest.openstack.common.fixture import config as conf_fixture
-from tempest.openstack.common import importutils
 
 
 class ConfigFixture(conf_fixture.Config):
 
     def __init__(self):
         config.register_opts()
-        # Register locking options
-        importutils.import_module('tempest.openstack.common.lockutils')
         super(ConfigFixture, self).__init__()
 
     def setUp(self):
@@ -43,8 +41,9 @@
         self.conf.set_default('heat', True, group='service_available')
         if not os.path.exists(str(os.environ.get('OS_TEST_LOCK_PATH'))):
             os.mkdir(str(os.environ.get('OS_TEST_LOCK_PATH')))
-        self.conf.set_default('lock_path',
-                              str(os.environ.get('OS_TEST_LOCK_PATH')))
+        lockutils.set_defaults(
+            lock_path=str(os.environ.get('OS_TEST_LOCK_PATH')),
+        )
         self.conf.set_default('auth_version', 'v2', group='identity')
         for config_option in ['username', 'password', 'tenant_name']:
             # Identity group items
diff --git a/tempest/tests/fake_credentials.py b/tempest/tests/fake_credentials.py
index 48f67d2..649d51d 100644
--- a/tempest/tests/fake_credentials.py
+++ b/tempest/tests/fake_credentials.py
@@ -43,7 +43,8 @@
             username='fake_username',
             password='fake_password',
             user_domain_name='fake_domain_name',
-            project_name='fake_tenant_name'
+            project_name='fake_tenant_name',
+            project_domain_name='fake_domain_name'
         )
         super(FakeKeystoneV3Credentials, self).__init__(**creds)
 
diff --git a/tempest/tests/stress/test_stress.py b/tempest/tests/stress/test_stress.py
index 9c3533d..3a7b436 100644
--- a/tempest/tests/stress/test_stress.py
+++ b/tempest/tests/stress/test_stress.py
@@ -18,7 +18,7 @@
 
 from tempest_lib import exceptions
 
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 from tempest.tests import base
 
 LOG = logging.getLogger(__name__)
diff --git a/tempest/tests/test_auth.py b/tempest/tests/test_auth.py
index f54ff4f..eb63b30 100644
--- a/tempest/tests/test_auth.py
+++ b/tempest/tests/test_auth.py
@@ -19,12 +19,10 @@
 from oslotest import mockpatch
 
 from tempest import auth
-from tempest import config
 from tempest import exceptions
 from tempest.services.identity.v2.json import token_client as v2_client
 from tempest.services.identity.v3.json import token_client as v3_client
 from tempest.tests import base
-from tempest.tests import fake_config
 from tempest.tests import fake_credentials
 from tempest.tests import fake_http
 from tempest.tests import fake_identity
@@ -46,8 +44,6 @@
 
     def setUp(self):
         super(BaseAuthTestsSetUp, self).setUp()
-        self.useFixture(fake_config.ConfigFixture())
-        self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
         self.fake_http = fake_http.fake_httplib2(return_type=200)
         self.stubs.Set(auth, 'get_credentials', fake_get_credentials)
         self.auth_provider = self._auth(self.credentials,
diff --git a/tempest/tests/test_credentials.py b/tempest/tests/test_credentials.py
index 350b190..bf44d11 100644
--- a/tempest/tests/test_credentials.py
+++ b/tempest/tests/test_credentials.py
@@ -16,13 +16,10 @@
 import copy
 
 from tempest import auth
-from tempest.common import tempest_fixtures as fixtures
-from tempest import config
 from tempest import exceptions
 from tempest.services.identity.v2.json import token_client as v2_client
 from tempest.services.identity.v3.json import token_client as v3_client
 from tempest.tests import base
-from tempest.tests import fake_config
 from tempest.tests import fake_identity
 
 
@@ -47,11 +44,6 @@
             else:
                 self.assertIsNone(getattr(credentials, attr))
 
-    def setUp(self):
-        super(CredentialsTests, self).setUp()
-        self.useFixture(fake_config.ConfigFixture())
-        self.stubs.Set(config, 'TempestConfigPrivate', fake_config.FakePrivate)
-
     def test_create(self):
         creds = self._get_credentials()
         self.assertEqual(self.attributes, creds._initial)
@@ -91,12 +83,10 @@
         self._check(creds, credentials_class, filled)
 
     def test_get_credentials(self):
-        self.useFixture(fixtures.LockFixture('auth_version'))
         self._verify_credentials(credentials_class=self.credentials_class,
                                  creds_dict=self.attributes)
 
     def test_get_credentials_not_filled(self):
-        self.useFixture(fixtures.LockFixture('auth_version'))
         self._verify_credentials(credentials_class=self.credentials_class,
                                  creds_dict=self.attributes,
                                  filled=False)
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index 5149ba6..0cd54b9 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -15,7 +15,7 @@
 import uuid
 
 import mock
-from oslo.config import cfg
+from oslo_config import cfg
 from oslotest import mockpatch
 import testtools
 
diff --git a/tempest/tests/test_glance_http.py b/tempest/tests/test_glance_http.py
index 852dd4b..84b66d7 100644
--- a/tempest/tests/test_glance_http.py
+++ b/tempest/tests/test_glance_http.py
@@ -18,12 +18,12 @@
 import socket
 
 import mock
+from oslotest import mockpatch
 import six
 from tempest_lib import exceptions as lib_exc
 
 from tempest.common import glance_http
 from tempest import exceptions
-from tempest.openstack.common.fixture import mockpatch
 from tempest.tests import base
 from tempest.tests import fake_auth_provider
 from tempest.tests import fake_http
diff --git a/tempest/tests/test_tenant_isolation.py b/tempest/tests/test_tenant_isolation.py
index a420a8f..82cbde9 100644
--- a/tempest/tests/test_tenant_isolation.py
+++ b/tempest/tests/test_tenant_isolation.py
@@ -13,13 +13,13 @@
 #    under the License.
 
 import mock
-from oslo.config import cfg
+from oslo_config import cfg
+from oslotest import mockpatch
 
 from tempest.common import isolated_creds
 from tempest.common import service_client
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common.fixture import mockpatch
 from tempest.services.identity.v2.json import identity_client as \
     json_iden_client
 from tempest.services.identity.v2.json import token_client as json_token_client
@@ -41,9 +41,10 @@
                        fake_identity._fake_v2_response)
         cfg.CONF.set_default('operator_role', 'FakeRole',
                              group='object-storage')
+        self._mock_list_ec2_credentials('fake_user_id', 'fake_tenant_id')
 
     def test_tempest_client(self):
-        iso_creds = isolated_creds.IsolatedCreds('test class')
+        iso_creds = isolated_creds.IsolatedCreds(name='test class')
         self.assertTrue(isinstance(iso_creds.identity_admin_client,
                                    json_iden_client.IdentityClientJSON))
         self.assertTrue(isinstance(iso_creds.network_admin_client,
@@ -102,6 +103,18 @@
                           (200, [{'id': '1', 'name': 'FakeRole'}]))))
         return roles_fix
 
+    def _mock_list_ec2_credentials(self, user_id, tenant_id):
+        ec2_creds_fix = self.useFixture(mockpatch.PatchObject(
+            json_iden_client.IdentityClientJSON,
+            'list_user_ec2_credentials',
+            return_value=(service_client.ResponseBodyList
+                          (200, [{'access': 'fake_access',
+                                  'secret': 'fake_secret',
+                                  'tenant_id': tenant_id,
+                                  'user_id': user_id,
+                                  'trust_id': None}]))))
+        return ec2_creds_fix
+
     def _mock_network_create(self, iso_creds, id, name):
         net_fix = self.useFixture(mockpatch.PatchObject(
             iso_creds.network_admin_client,
@@ -126,7 +139,7 @@
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_primary_creds(self, MockRestClient):
         cfg.CONF.set_default('neutron', False, 'service_available')
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_assign_user_role()
         self._mock_list_role()
@@ -142,7 +155,7 @@
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_admin_creds(self, MockRestClient):
         cfg.CONF.set_default('neutron', False, 'service_available')
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_list_roles('1234', 'admin')
         self._mock_user_create('1234', 'fake_admin_user')
@@ -166,7 +179,7 @@
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_role_creds(self, MockRestClient):
         cfg.CONF.set_default('neutron', False, 'service_available')
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds('v2', 'test class',
                                                  password='fake_password')
         self._mock_list_2_roles()
         self._mock_user_create('1234', 'fake_role_user')
@@ -194,7 +207,7 @@
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_all_cred_cleanup(self, MockRestClient):
         cfg.CONF.set_default('neutron', False, 'service_available')
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_assign_user_role()
         roles_fix = self._mock_list_role()
@@ -238,7 +251,7 @@
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_alt_creds(self, MockRestClient):
         cfg.CONF.set_default('neutron', False, 'service_available')
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_assign_user_role()
         self._mock_list_role()
@@ -253,7 +266,7 @@
 
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_network_creation(self, MockRestClient):
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_assign_user_role()
         self._mock_list_role()
@@ -285,7 +298,7 @@
                                          "description": args['name'],
                                          "security_group_rules": [],
                                          "id": "sg-%s" % args['tenant_id']}]}
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         # Create primary tenant and network
         self._mock_assign_user_role()
@@ -402,7 +415,7 @@
 
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_network_alt_creation(self, MockRestClient):
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_assign_user_role()
         self._mock_list_role()
@@ -428,7 +441,7 @@
 
     @mock.patch('tempest_lib.common.rest_client.RestClient')
     def test_network_admin_creation(self, MockRestClient):
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password')
         self._mock_assign_user_role()
         self._mock_user_create('1234', 'fake_admin_user')
@@ -460,7 +473,7 @@
             'subnet': False,
             'dhcp': False,
         }
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password',
                                                  network_resources=net_dict)
         self._mock_assign_user_role()
@@ -496,7 +509,7 @@
             'subnet': False,
             'dhcp': False,
         }
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password',
                                                  network_resources=net_dict)
         self._mock_assign_user_role()
@@ -514,7 +527,7 @@
             'subnet': True,
             'dhcp': False,
         }
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password',
                                                  network_resources=net_dict)
         self._mock_assign_user_role()
@@ -532,7 +545,7 @@
             'subnet': False,
             'dhcp': True,
         }
-        iso_creds = isolated_creds.IsolatedCreds('test class',
+        iso_creds = isolated_creds.IsolatedCreds(name='test class',
                                                  password='fake_password',
                                                  network_resources=net_dict)
         self._mock_assign_user_role()
diff --git a/tempest/thirdparty/boto/test.py b/tempest/thirdparty/boto/test.py
index 5b2ed70..cd35e7f 100644
--- a/tempest/thirdparty/boto/test.py
+++ b/tempest/thirdparty/boto/test.py
@@ -23,14 +23,15 @@
 from boto import ec2
 from boto import exception
 from boto import s3
-import keystoneclient.exceptions
+from oslo_log import log as logging
 import six
 
+from tempest_lib import exceptions as lib_exc
+
 import tempest.clients
 from tempest.common.utils import file_utils
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 import tempest.test
 from tempest.thirdparty.boto.utils import wait
 
@@ -65,6 +66,8 @@
         if not secret_matcher.match(connection_data["aws_secret_access_key"]):
             raise Exception("Invalid AWS secret Key")
         raise Exception("Unknown (Authentication?) Error")
+    # NOTE(andreaf) Setting up an extra manager here is redundant,
+    # and should be removed.
     openstack = tempest.clients.Manager()
     try:
         if urlparse.urlparse(CONF.boto.ec2_url).hostname is None:
@@ -77,9 +80,9 @@
                     raise Exception("EC2 target does not looks EC2 service")
                 _cred_sub_check(ec2client.connection_data)
 
-    except keystoneclient.exceptions.Unauthorized:
+    except lib_exc.Unauthorized:
         EC2_CAN_CONNECT_ERROR = "AWS credentials not set," +\
-                                " failed to get them even by keystoneclient"
+                                " also failed to get them from keystone"
     except Exception as exc:
         EC2_CAN_CONNECT_ERROR = str(exc)
 
@@ -94,7 +97,7 @@
                 _cred_sub_check(s3client.connection_data)
     except Exception as exc:
         S3_CAN_CONNECT_ERROR = str(exc)
-    except keystoneclient.exceptions.Unauthorized:
+    except lib_exc.Unauthorized:
         S3_CAN_CONNECT_ERROR = "AWS credentials not set," +\
                                " failed to get them even by keystoneclient"
     boto_logger.logger.setLevel(level)
@@ -199,6 +202,9 @@
         super(BotoTestCase, cls).skip_checks()
         if not CONF.compute_feature_enabled.ec2_api:
             raise cls.skipException("The EC2 API is not available")
+        if not CONF.identity_feature_enabled.api_v2 or \
+                CONF.identity.auth_version != 'v2':
+            raise cls.skipException("Identity v2 is not available")
 
     @classmethod
     def setup_credentials(cls):
@@ -273,7 +279,6 @@
                 LOG.exception("Cleanup failed %s" % func_name)
             finally:
                 del cls._resource_trash_bin[key]
-        cls.clear_isolated_creds()
         super(BotoTestCase, cls).resource_cleanup()
         # NOTE(afazekas): let the super called even on exceptions
         # The real exceptions already logged, if the super throws another,
diff --git a/tempest/thirdparty/boto/test_ec2_instance_run.py b/tempest/thirdparty/boto/test_ec2_instance_run.py
index 19be559..8894de0 100644
--- a/tempest/thirdparty/boto/test_ec2_instance_run.py
+++ b/tempest/thirdparty/boto/test_ec2_instance_run.py
@@ -13,12 +13,12 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
 from tempest_lib.common.utils import data_utils
 
 from tempest.common.utils.linux import remote_client
 from tempest import config
 from tempest import exceptions
-from tempest.openstack.common import log as logging
 from tempest import test
 from tempest.thirdparty.boto import test as boto_test
 from tempest.thirdparty.boto.utils import s3
@@ -49,8 +49,8 @@
         aki_manifest = CONF.boto.aki_manifest
         ari_manifest = CONF.boto.ari_manifest
         cls.instance_type = CONF.boto.instance_type
-        cls.bucket_name = data_utils.rand_name("s3bucket-")
-        cls.keypair_name = data_utils.rand_name("keypair-")
+        cls.bucket_name = data_utils.rand_name("s3bucket")
+        cls.keypair_name = data_utils.rand_name("keypair")
         cls.keypair = cls.ec2_client.create_key_pair(cls.keypair_name)
         cls.addResourceCleanUp(cls.ec2_client.delete_key_pair,
                                cls.keypair_name)
@@ -60,13 +60,13 @@
                                cls.bucket_name)
         s3.s3_upload_dir(bucket, cls.materials_path)
         cls.images = {"ami":
-                      {"name": data_utils.rand_name("ami-name-"),
+                      {"name": data_utils.rand_name("ami-name"),
                        "location": cls.bucket_name + "/" + ami_manifest},
                       "aki":
-                      {"name": data_utils.rand_name("aki-name-"),
+                      {"name": data_utils.rand_name("aki-name"),
                        "location": cls.bucket_name + "/" + aki_manifest},
                       "ari":
-                      {"name": data_utils.rand_name("ari-name-"),
+                      {"name": data_utils.rand_name("ari-name"),
                        "location": cls.bucket_name + "/" + ari_manifest}}
         for image in cls.images.itervalues():
             image["image_id"] = cls.ec2_client.register_image(
@@ -219,7 +219,7 @@
     def test_compute_with_volumes(self):
         # EC2 1. integration test (not strict)
         image_ami = self.ec2_client.get_image(self.images["ami"]["image_id"])
-        sec_group_name = data_utils.rand_name("securitygroup-")
+        sec_group_name = data_utils.rand_name("securitygroup")
         group_desc = sec_group_name + " security group description "
         security_group = self.ec2_client.create_security_group(sec_group_name,
                                                                group_desc)
@@ -273,7 +273,7 @@
         ssh = remote_client.RemoteClient(address.public_ip,
                                          CONF.compute.ssh_user,
                                          pkey=self.keypair.material)
-        text = data_utils.rand_name("Pattern text for console output -")
+        text = data_utils.rand_name("Pattern text for console output")
         resp = ssh.write_to_console(text)
         self.assertFalse(resp)
 
diff --git a/tempest/thirdparty/boto/test_ec2_keys.py b/tempest/thirdparty/boto/test_ec2_keys.py
index 2272a5c..58a5776 100644
--- a/tempest/thirdparty/boto/test_ec2_keys.py
+++ b/tempest/thirdparty/boto/test_ec2_keys.py
@@ -40,7 +40,7 @@
     @test.idempotent_id('54236804-01b7-4cfe-a6f9-bce1340feec8')
     def test_create_ec2_keypair(self):
         # EC2 create KeyPair
-        key_name = data_utils.rand_name("keypair-")
+        key_name = data_utils.rand_name("keypair")
         self.addResourceCleanUp(self.client.delete_key_pair, key_name)
         keypair = self.client.create_key_pair(key_name)
         self.assertTrue(compare_key_pairs(keypair,
@@ -49,7 +49,7 @@
     @test.idempotent_id('3283b898-f90c-4952-b238-3e42b8c3f34f')
     def test_delete_ec2_keypair(self):
         # EC2 delete KeyPair
-        key_name = data_utils.rand_name("keypair-")
+        key_name = data_utils.rand_name("keypair")
         self.client.create_key_pair(key_name)
         self.client.delete_key_pair(key_name)
         self.assertIsNone(self.client.get_key_pair(key_name))
@@ -57,7 +57,7 @@
     @test.idempotent_id('fd89bd26-4d4d-4cf3-a303-65dd9158fcdc')
     def test_get_ec2_keypair(self):
         # EC2 get KeyPair
-        key_name = data_utils.rand_name("keypair-")
+        key_name = data_utils.rand_name("keypair")
         self.addResourceCleanUp(self.client.delete_key_pair, key_name)
         keypair = self.client.create_key_pair(key_name)
         self.assertTrue(compare_key_pairs(keypair,
@@ -66,7 +66,7 @@
     @test.idempotent_id('daa73da1-e11c-4558-8d76-a716be79a401')
     def test_duplicate_ec2_keypair(self):
         # EC2 duplicate KeyPair
-        key_name = data_utils.rand_name("keypair-")
+        key_name = data_utils.rand_name("keypair")
         self.addResourceCleanUp(self.client.delete_key_pair, key_name)
         keypair = self.client.create_key_pair(key_name)
         self.assertBotoError(self.ec.client.InvalidKeyPair.Duplicate,
diff --git a/tempest/thirdparty/boto/test_ec2_security_groups.py b/tempest/thirdparty/boto/test_ec2_security_groups.py
index ef1ef52..94fab09 100644
--- a/tempest/thirdparty/boto/test_ec2_security_groups.py
+++ b/tempest/thirdparty/boto/test_ec2_security_groups.py
@@ -29,7 +29,7 @@
     @test.idempotent_id('519b566e-0c38-4629-905e-7d6b6355f524')
     def test_create_authorize_security_group(self):
         # EC2 Create, authorize/revoke security group
-        group_name = data_utils.rand_name("securty_group-")
+        group_name = data_utils.rand_name("securty_group")
         group_description = group_name + " security group description "
         group = self.client.create_security_group(group_name,
                                                   group_description)
diff --git a/tempest/thirdparty/boto/test_ec2_volumes.py b/tempest/thirdparty/boto/test_ec2_volumes.py
index 9a6d13f..483d4c3 100644
--- a/tempest/thirdparty/boto/test_ec2_volumes.py
+++ b/tempest/thirdparty/boto/test_ec2_volumes.py
@@ -13,8 +13,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from oslo_log import log as logging
+
 from tempest import config
-from tempest.openstack.common import log as logging
 from tempest import test
 from tempest.thirdparty.boto import test as boto_test
 
diff --git a/tempest/thirdparty/boto/test_s3_buckets.py b/tempest/thirdparty/boto/test_s3_buckets.py
index 451ae59..45401fd 100644
--- a/tempest/thirdparty/boto/test_s3_buckets.py
+++ b/tempest/thirdparty/boto/test_s3_buckets.py
@@ -29,7 +29,7 @@
     @test.idempotent_id('4678525d-8da0-4518-81c1-f1f67d595b00')
     def test_create_and_get_delete_bucket(self):
         # S3 Create, get and delete bucket
-        bucket_name = data_utils.rand_name("s3bucket-")
+        bucket_name = data_utils.rand_name("s3bucket")
         cleanup_key = self.addResourceCleanUp(self.client.delete_bucket,
                                               bucket_name)
         bucket = self.client.create_bucket(bucket_name)
diff --git a/tempest/thirdparty/boto/test_s3_ec2_images.py b/tempest/thirdparty/boto/test_s3_ec2_images.py
index 49749bc..1521249 100644
--- a/tempest/thirdparty/boto/test_s3_ec2_images.py
+++ b/tempest/thirdparty/boto/test_s3_ec2_images.py
@@ -46,7 +46,7 @@
         cls.ami_path = cls.materials_path + os.sep + cls.ami_manifest
         cls.aki_path = cls.materials_path + os.sep + cls.aki_manifest
         cls.ari_path = cls.materials_path + os.sep + cls.ari_manifest
-        cls.bucket_name = data_utils.rand_name("bucket-")
+        cls.bucket_name = data_utils.rand_name("bucket")
         bucket = cls.s3_client.create_bucket(cls.bucket_name)
         cls.addResourceCleanUp(cls.destroy_bucket,
                                cls.s3_client.connection_data,
@@ -56,7 +56,7 @@
     @test.idempotent_id('f9d360a5-0188-4c77-9db2-4c34c28d12a5')
     def test_register_get_deregister_ami_image(self):
         # Register and deregister ami image
-        image = {"name": data_utils.rand_name("ami-name-"),
+        image = {"name": data_utils.rand_name("ami-name"),
                  "location": self.bucket_name + "/" + self.ami_manifest,
                  "type": "ami"}
         image["image_id"] = self.images_client.register_image(
@@ -80,7 +80,7 @@
     @test.idempotent_id('42cca5b0-453b-4618-b99f-dbc039db426f')
     def test_register_get_deregister_aki_image(self):
         # Register and deregister aki image
-        image = {"name": data_utils.rand_name("aki-name-"),
+        image = {"name": data_utils.rand_name("aki-name"),
                  "location": self.bucket_name + "/" + self.aki_manifest,
                  "type": "aki"}
         image["image_id"] = self.images_client.register_image(
@@ -104,7 +104,7 @@
     @test.idempotent_id('1359e860-841c-43bb-80f3-bb389cbfd81d')
     def test_register_get_deregister_ari_image(self):
         # Register and deregister ari image
-        image = {"name": data_utils.rand_name("ari-name-"),
+        image = {"name": data_utils.rand_name("ari-name"),
                  "location": "/" + self.bucket_name + "/" + self.ari_manifest,
                  "type": "ari"}
         image["image_id"] = self.images_client.register_image(
diff --git a/tempest/thirdparty/boto/test_s3_objects.py b/tempest/thirdparty/boto/test_s3_objects.py
index dee6a7c..dba231c 100644
--- a/tempest/thirdparty/boto/test_s3_objects.py
+++ b/tempest/thirdparty/boto/test_s3_objects.py
@@ -32,8 +32,8 @@
     @test.idempotent_id('4eea567a-b46a-405b-a475-6097e1faebde')
     def test_create_get_delete_object(self):
         # S3 Create, get and delete object
-        bucket_name = data_utils.rand_name("s3bucket-")
-        object_name = data_utils.rand_name("s3object-")
+        bucket_name = data_utils.rand_name("s3bucket")
+        object_name = data_utils.rand_name("s3object")
         content = 'x' * 42
         bucket = self.client.create_bucket(bucket_name)
         self.addResourceCleanUp(self.destroy_bucket,
diff --git a/tempest/thirdparty/boto/utils/s3.py b/tempest/thirdparty/boto/utils/s3.py
index ff5e332..55c1b0a 100644
--- a/tempest/thirdparty/boto/utils/s3.py
+++ b/tempest/thirdparty/boto/utils/s3.py
@@ -20,7 +20,7 @@
 import boto
 import boto.s3.key
 
-from tempest.openstack.common import log as logging
+from oslo_log import log as logging
 
 LOG = logging.getLogger(__name__)
 
diff --git a/tempest/thirdparty/boto/utils/wait.py b/tempest/thirdparty/boto/utils/wait.py
index 752ed0f..8771ed7 100644
--- a/tempest/thirdparty/boto/utils/wait.py
+++ b/tempest/thirdparty/boto/utils/wait.py
@@ -17,10 +17,10 @@
 import time
 
 import boto.exception
+from oslo_log import log as logging
 import testtools
 
 from tempest import config
-from tempest.openstack.common import log as logging
 
 CONF = config.CONF
 LOG = logging.getLogger(__name__)
diff --git a/test-requirements.txt b/test-requirements.txt
index 6a9111e..76ae521 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5,9 +5,9 @@
 # needed for doc build
 sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 python-subunit>=0.0.18
-oslosphinx>=2.2.0  # Apache-2.0
+oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
 mox>=0.5.3
 mock>=1.0
 coverage>=3.6
-oslotest>=1.2.0  # Apache-2.0
-stevedore>=1.1.0  # Apache-2.0
+oslotest>=1.5.1,<1.6.0  # Apache-2.0
+stevedore>=1.3.0,<1.4.0  # Apache-2.0
diff --git a/tools/check_uuid.py b/tools/check_uuid.py
old mode 100644
new mode 100755
index 541e6c3..34effe4
--- a/tools/check_uuid.py
+++ b/tools/check_uuid.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
 # Copyright 2014 Mirantis, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -119,8 +121,10 @@
         idempotent_id = None
         for decorator in test_node.decorator_list:
             if (hasattr(decorator, 'func') and
-                    decorator.func.attr == DECORATOR_NAME and
-                    decorator.func.value.id == DECORATOR_MODULE):
+                hasattr(decorator.func, 'attr') and
+                decorator.func.attr == DECORATOR_NAME and
+                hasattr(decorator.func, 'value') and
+                decorator.func.value.id == DECORATOR_MODULE):
                 for arg in decorator.args:
                     idempotent_id = ast.literal_eval(arg)
         return idempotent_id
diff --git a/tools/config/config-generator.tempest.conf b/tools/config/config-generator.tempest.conf
index e5a02f8..d718f93 100644
--- a/tools/config/config-generator.tempest.conf
+++ b/tools/config/config-generator.tempest.conf
@@ -1,3 +1,8 @@
 [DEFAULT]
 output_file = etc/tempest.conf.sample
 namespace = tempest.config
+namespace = oslo.concurrency
+namespace = oslo.i18n
+namespace = oslo.log
+namespace = oslo.serialization
+namespace = oslo.utils