Merge "Return complete response from availability_zone_client"
diff --git a/.gitignore b/.gitignore
index 1777cb9..f584532 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,7 @@
 ChangeLog
 *.pyc
 etc/tempest.conf
+etc/tempest.conf.sample
 etc/logging.conf
 include/swift_objects/swift_small
 include/swift_objects/swift_medium
@@ -18,3 +19,4 @@
 .coverage*
 !.coveragerc
 cover/
+doc/source/_static/tempest.conf
diff --git a/HACKING.rst b/HACKING.rst
index 45c35df..6ddb8ac 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -275,7 +275,7 @@
 Test Documentation
 ------------------
 For tests being added we need to require inline documentation in the form of
-docstings to explain what is being tested. In API tests for a new API a class
+docstrings to explain what is being tested. In API tests for a new API a class
 level docstring should be added to an API reference doc. If one doesn't exist
 a TODO comment should be put indicating that the reference needs to be added.
 For individual API test cases a method level docstring should be used to
diff --git a/README.rst b/README.rst
index d7063ba..431be7c 100644
--- a/README.rst
+++ b/README.rst
@@ -41,11 +41,13 @@
 will tell Tempest where to find the various OpenStack services and
 other testing behavior switches.
 
-The easiest way to create a configuration file is to copy the sample
-one in the ``etc/`` directory ::
+The easiest way to create a configuration file is to generate a sample
+in the ``etc/`` directory ::
 
     $> cd $TEMPEST_ROOT_DIR
-    $> cp etc/tempest.conf.sample etc/tempest.conf
+    $> oslo-config-generator --config-file \
+        tools/config/config-generator.tempest.conf \
+        --output-file etc/tempest.conf
 
 After that, open up the ``etc/tempest.conf`` file and edit the
 configuration variables to match valid data in your environment.
diff --git a/tempest/openstack/__init__.py b/doc/source/_static/.keep
similarity index 100%
rename from tempest/openstack/__init__.py
rename to doc/source/_static/.keep
diff --git a/doc/source/conf.py b/doc/source/conf.py
index daa293c..3ec25ea 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -13,6 +13,19 @@
 
 import sys
 import os
+import subprocess
+
+# Build a tempest sample config file:
+def build_sample_config(app):
+    root_dir = os.path.dirname(
+        os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+    subprocess.call(["oslo-config-generator", "--config-file",
+                     "tools/config/config-generator.tempest.conf",
+                     "--output-file", "doc/source/_static/tempest.conf"],
+                    cwd=root_dir)
+
+def setup(app):
+    app.connect('builder-inited', build_sample_config)
 
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f925018..e9f2161 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -10,6 +10,7 @@
    overview
    HACKING
    REVIEWING
+   plugin
 
 ------------
 Field Guides
@@ -37,6 +38,7 @@
    :maxdepth: 2
 
    configuration
+   sampleconf
 
 ---------------------
 Command Documentation
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
new file mode 100644
index 0000000..4e97dbe
--- /dev/null
+++ b/doc/source/plugin.rst
@@ -0,0 +1,120 @@
+=============================
+Tempest Test Plugin Interface
+=============================
+
+Tempest has an external test plugin interface which enables anyone to integrate
+an external test suite as part of a tempest run. This lets any project be run
+alongside the rest of the tempest suite without requiring that its tests live
+in the tempest tree.
+
+Creating a plugin
+=================
+
+Creating a plugin is fairly straightforward and doesn't require much additional
+effort on top of creating a test suite using tempest-lib. One thing to note is
+that the interfaces exposed by tempest are not considered stable (with the
+exception of configuration variables, where every effort goes into ensuring
+backwards compatibility). You should not need to import anything from tempest
+itself except where explicitly noted. If there is an interface from tempest
+that you need to rely on in your plugin, it likely needs to be migrated to
+tempest-lib. In that situation, file a bug, push a migration patch, etc. to
+expedite providing the interface in a reliable manner.
+
+Plugin Class
+------------
+
+To provide tempest with all the information it needs to run your plugin, you
+need to create a plugin class which tempest will load and call to get that
+information when it needs it. To simplify this, tempest provides an abstract
+class that should be used as the parent for your plugin. To use it you would
+do something like the following::
+
+  from tempest.test_discover import plugin
+
+  class MyPlugin(plugin.TempestPlugin):
+
+Then you need to ensure you locally define all of the methods in the abstract
+class; refer to the api doc below for a reference of what that entails.
+
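+As a rough sketch (the authoritative list of methods and their signatures is
+the abstract class documented below; the plugin name, paths, and method bodies
+here are only illustrative), a minimal plugin class could look something
+like::
+
+  import os
+
+  from tempest.test_discover import plugin
+
+  class MyPlugin(plugin.TempestPlugin):
+
+      def load_tests(self):
+          # Tell tempest where to discover this plugin's tests: return the
+          # full path to the test directory and the top level package path.
+          base_path = os.path.split(os.path.dirname(
+              os.path.abspath(__file__)))[0]
+          test_dir = "my_plugin/tests"
+          full_test_dir = os.path.join(base_path, test_dir)
+          return full_test_dir, base_path
+
+      def register_opts(self, conf):
+          # Register any plugin specific config options on the passed in
+          # config object; a plugin with no options can simply pass.
+          pass
+
+      def get_opt_lists(self):
+          # Return a list of (group name, options list) tuples used for
+          # sample config generation; empty when there are no options.
+          return []
+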
+Also note that eventually this abstract class will likely live in tempest-lib.
+When that migration occurs a deprecation shim will be added to tempest so as
+not to break any existing plugins, but at that point migrating to tempest-lib
+as the source for the abstract class will be prudent.
+
+Abstract Plugin Class
+^^^^^^^^^^^^^^^^^^^^^
+
+.. autoclass:: tempest.test_discover.plugins.TempestPlugin
+   :members:
+
+Entry Point
+-----------
+
+Once you've created your plugin class you need to add an entry point to your
+project to enable tempest to find the plugin. The entry point must be added
+to the "tempest.test_plugins" namespace.
+
+If you are using pbr this is fairly straightforward: in the setup.cfg just add
+something like the following::
+
+  [entry_points]
+  tempest.test_plugins =
+      plugin_name = module.path:PluginClass
+
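+Once the package containing your plugin is installed, you can quickly check
+that the entry point is visible. The snippet below uses pkg_resources directly
+as a sanity check; it is not a tempest interface::
+
+  import pkg_resources
+
+  for ep in pkg_resources.iter_entry_points('tempest.test_plugins'):
+      print('%s -> %s' % (ep.name, ep.module_name))
+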
+Plugin Structure
+----------------
+
+There are no hard and fast rules for the structure of a plugin; as long as the
+2 steps above are done there are basically no constraints on what the plugin
+looks like. However, there are some recommended patterns to follow to make it
+easy for people to contribute to and work with your plugin. For example, if
+you create a directory structure with something like::
+
+    plugin_dir/
+      config.py
+      plugin.py
+      tests/
+        api/
+        scenario/
+      services/
+        client.py
+
+That will mirror what people expect from tempest. The files and directories
+are used as follows:
+
+* **config.py**: contains any plugin specific configuration variables
+* **plugin.py**: contains the plugin class used for the entry point
+* **tests**: the directory where test discovery will be run, all tests should
+             be under this dir
+* **services**: where the plugin specific service clients are
+
+Additionally, when you're creating the plugin you likely want to follow all
+of the tempest developer and reviewer documentation to ensure that the tests
+being added in the plugin act and behave like the rest of tempest.
+
+Using Plugins
+=============
+
+Tempest will automatically discover any installed plugins when it is run, so
+simply installing the python packages which contain your plugins is enough to
+use them with tempest; nothing else is required.
+
+However, you should take care when installing plugins. By their very nature
+there are no guarantees about the quality of a plugin when running tempest
+with it enabled. Additionally, while there is no limitation on running with
+multiple plugins, it's worth noting that poorly written plugins might not
+properly isolate their tests, which could cause unexpected cross interactions
+between plugins.
+
+Notes for using plugins with virtualenvs
+----------------------------------------
+
+When using tempest inside a virtualenv (like when running under tox) you have
+to ensure that the package that contains your plugin is either installed in
+the venv too or that you have system site-packages enabled. The virtualenv
+will isolate the tempest install from the rest of your system, so just
+installing the plugin package on your system and then running tempest inside
+a venv will not work.
+
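+For example, assuming a tox env named full and a plugin package named
+my-tempest-plugin (both names are placeholders here), one way to make the
+plugin visible inside the tox created venv is to install it there directly::
+
+    $> tox -e full --notest
+    $> .tox/full/bin/pip install my-tempest-plugin
+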
+Tempest also exposes a tox job, all-plugin, which will set up a tox virtualenv
+with system site-packages enabled. This will let you leverage tox without
+requiring you to manually install plugins in the tox venv before running tests.
diff --git a/doc/source/sampleconf.rst b/doc/source/sampleconf.rst
new file mode 100644
index 0000000..2a72971
--- /dev/null
+++ b/doc/source/sampleconf.rst
@@ -0,0 +1,14 @@
+.. _tempest-sampleconf:
+
+Sample Configuration File
+==========================
+
+The following is a sample Tempest configuration for adaptation and use. It is
+auto-generated from Tempest when this documentation is built, so
+if you are having issues with an option, please compare your version of
+Tempest with the version of this documentation.
+
+The sample configuration can also be viewed in `file form <_static/tempest.conf>`_.
+
+.. include:: _static/tempest.conf
+   :code:
diff --git a/etc/tempest.conf.sample b/etc/tempest.conf.sample
deleted file mode 100644
index c97eb97..0000000
--- a/etc/tempest.conf.sample
+++ /dev/null
@@ -1,1243 +0,0 @@
-[DEFAULT]
-
-#
-# From oslo.log
-#
-
-# Print debugging output (set logging level to DEBUG instead of
-# default INFO level). (boolean value)
-#debug = false
-
-# If set to false, will disable INFO logging level, making WARNING the
-# default. (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#verbose = true
-
-# The name of a logging configuration file. This file is appended to
-# any existing logging configuration files. For details about logging
-# configuration files, see the Python logging module documentation.
-# (string value)
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# DEPRECATED. A logging.Formatter log message format string which may
-# use any of the available logging.LogRecord attributes. This option
-# is deprecated.  Please use logging_context_format_string and
-# logging_default_format_string instead. (string value)
-#log_format = <None>
-
-# Format string for %%(asctime)s in log records. Default: %(default)s
-# . (string value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to output to. If no default is set,
-# logging will go to stdout. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative --log-file paths.
-# (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and
-# will be changed later to honor RFC5424. (boolean value)
-#use_syslog = false
-
-# (Optional) Enables or disables syslog rfc5424 format for logging. If
-# enabled, prefixes the MSG part of the syslog message with APP-NAME
-# (RFC5424). The format without the APP-NAME is deprecated in K, and
-# will be removed in M, along with this option. (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#use_syslog_rfc_format = true
-
-# Syslog facility to receive log lines. (string value)
-#syslog_log_facility = LOG_USER
-
-# Log output to standard error. (boolean value)
-#use_stderr = true
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages without context. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Data to append to log format when level is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string
-# value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# List of logger=LEVEL pairs. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message.
-# (string value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message.
-# (string value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From tempest.config
-#
-
-# Prefix to be added when generating the name for test resources. It
-# can be used to discover all resources associated with a specific
-# test run when running tempest on a real-life cloud (string value)
-#resources_prefix = tempest
-
-
-[auth]
-
-#
-# From tempest.config
-#
-
-# Path to the yaml file that contains the list of credentials to use
-# for running tests. If used when running in parallel you have to make
-# sure sufficient credentials are provided in the accounts file. For
-# example if no tests with roles are being run it requires at least `2
-# * CONC` distinct accounts configured in  the `test_accounts_file`,
-# with CONC == the number of concurrent test processes. (string value)
-#test_accounts_file = <None>
-
-# Allows test cases to create/destroy tenants and users. This option
-# requires that OpenStack Identity API admin credentials are known. If
-# false, isolated test cases and parallel execution, can still be
-# achieved configuring a list of test accounts (boolean value)
-# Deprecated group/name - [compute]/allow_tenant_isolation
-# Deprecated group/name - [orchestration]/allow_tenant_isolation
-#allow_tenant_isolation = true
-
-# Roles to assign to all users created by tempest (list value)
-#tempest_roles =
-
-# Default domain used when getting v3 credentials. This is the name
-# keystone uses for v2 compatibility. (string value)
-# Deprecated group/name - [auth]/tenant_isolation_domain_name
-#default_credentials_domain_name = Default
-
-# If allow_tenant_isolation is set to True and Neutron is enabled
-# Tempest will try to create a useable network, subnet, and router
-# when needed for each tenant it  creates. However in some neutron
-# configurations, like with VLAN provider networks, this doesn't work.
-# So if set to False the isolated networks will not be created
-# (boolean value)
-#create_isolated_networks = true
-
-
-[baremetal]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the baremetal provisioning service (string value)
-#catalog_type = baremetal
-
-# Whether the Ironic nova-compute driver is enabled (boolean value)
-#driver_enabled = false
-
-# Driver name which Ironic uses (string value)
-#driver = fake
-
-# The endpoint type to use for the baremetal provisioning service
-# (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Timeout for Ironic node to completely provision (integer value)
-#active_timeout = 300
-
-# Timeout for association of Nova instance and Ironic node (integer
-# value)
-#association_timeout = 30
-
-# Timeout for Ironic power transitions. (integer value)
-#power_timeout = 60
-
-# Timeout for unprovisioning an Ironic node. Takes longer since Kilo
-# as Ironic performs an extra step in Node cleaning. (integer value)
-#unprovision_timeout = 300
-
-
-[boto]
-
-#
-# From tempest.config
-#
-
-# EC2 URL (string value)
-#ec2_url = http://localhost:8773/services/Cloud
-
-# S3 URL (string value)
-#s3_url = http://localhost:8080
-
-# AWS Secret Key (string value)
-#aws_secret = <None>
-
-# AWS Access Key (string value)
-#aws_access = <None>
-
-# AWS Zone for EC2 tests (string value)
-#aws_zone = nova
-
-# S3 Materials Path (string value)
-#s3_materials_path = /opt/stack/devstack/files/images/s3-materials/cirros-0.3.0
-
-# ARI Ramdisk Image manifest (string value)
-#ari_manifest = cirros-0.3.0-x86_64-initrd.manifest.xml
-
-# AMI Machine Image manifest (string value)
-#ami_manifest = cirros-0.3.0-x86_64-blank.img.manifest.xml
-
-# AKI Kernel Image manifest (string value)
-#aki_manifest = cirros-0.3.0-x86_64-vmlinuz.manifest.xml
-
-# Instance type (string value)
-#instance_type = m1.tiny
-
-# boto Http socket timeout (integer value)
-#http_socket_timeout = 3
-
-# boto num_retries on error (integer value)
-#num_retries = 1
-
-# Status Change Timeout (integer value)
-#build_timeout = 60
-
-# Status Change Test Interval (integer value)
-#build_interval = 1
-
-
-[compute]
-
-#
-# From tempest.config
-#
-
-# Valid primary image reference to be used in tests. This is a
-# required option (string value)
-#image_ref = <None>
-
-# Valid secondary image reference to be used in tests. This is a
-# required option, but if only one image is available duplicate the
-# value of image_ref above (string value)
-#image_ref_alt = <None>
-
-# Valid primary flavor to use in tests. (string value)
-#flavor_ref = 1
-
-# Valid secondary flavor to be used in tests. (string value)
-#flavor_ref_alt = 2
-
-# User name used to authenticate to an instance. (string value)
-#image_ssh_user = root
-
-# Password used to authenticate to an instance. (string value)
-#image_ssh_password = password
-
-# User name used to authenticate to an instance using the alternate
-# image. (string value)
-#image_alt_ssh_user = root
-
-# Time in seconds between build status checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for an instance to build. Other services
-# that do not define build_timeout will inherit this value. (integer
-# value)
-#build_timeout = 300
-
-# Shell fragments to use before executing a command when sshing to a
-# guest. (string value)
-#ssh_shell_prologue = set -eu -o pipefail; PATH=$$PATH:/sbin;
-
-# Auth method used for authenticate to the instance. Valid choices
-# are: keypair, configured, adminpass and disabled. Keypair: start the
-# servers with a ssh keypair. Configured: use the configured user and
-# password. Adminpass: use the injected adminPass. Disabled: avoid
-# using ssh when it is an option. (string value)
-#ssh_auth_method = keypair
-
-# How to connect to the instance? fixed: using the first ip belongs
-# the fixed network floating: creating and using a floating ip.
-# (string value)
-#ssh_connect_method = floating
-
-# User name used to authenticate to an instance. (string value)
-#ssh_user = root
-
-# Timeout in seconds to wait for ping to succeed. (integer value)
-#ping_timeout = 120
-
-# The packet size for ping packets originating from remote linux hosts
-# (integer value)
-#ping_size = 56
-
-# The number of ping packets originating from remote linux hosts
-# (integer value)
-#ping_count = 1
-
-# Additional wait time for clean state, when there is no OS-EXT-STS
-# extension available (integer value)
-#ready_wait = 0
-
-# Name of the fixed network that is visible to all test tenants. If
-# multiple networks are available for a tenant this is the network
-# which will be used for creating servers if tempest does not create a
-# network or a network is not specified elsewhere. It may be used for
-# ssh validation only if floating IPs are disabled. (string value)
-#fixed_network_name = <None>
-
-# Network used for SSH connections. Ignored if
-# use_floatingip_for_ssh=true or run_validation=false. (string value)
-#network_for_ssh = public
-
-# Does SSH use Floating IPs? (boolean value)
-#use_floatingip_for_ssh = true
-
-# Catalog type of the Compute service. (string value)
-#catalog_type = compute
-
-# The compute region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the compute service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Expected device name when a volume is attached to an instance
-# (string value)
-#volume_device_name = vdb
-
-# Time in seconds before a shelved instance is eligible for removing
-# from a host.  -1 never offload, 0 offload when shelved. This time
-# should be the same as the time of nova.conf, and some tests will run
-# for as long as the time. (integer value)
-#shelved_offload_time = 0
-
-# Unallocated floating IP range, which will be used to test the
-# floating IP bulk feature for CRUD operation. This block must not
-# overlap an existing floating IP pool. (string value)
-#floating_ip_range = 10.0.0.0/29
-
-
-[compute-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# If false, skip disk config tests (boolean value)
-#disk_config = true
-
-# A list of enabled compute extensions with a special entry all which
-# indicates every extension is enabled. Each extension should be
-# specified with alias name. Empty list indicates all extensions are
-# disabled (list value)
-#api_extensions = all
-
-# Does the test environment support changing the admin password?
-# (boolean value)
-#change_password = false
-
-# Does the test environment support obtaining instance serial console
-# output? (boolean value)
-#console_output = true
-
-# Does the test environment support resizing? (boolean value)
-#resize = false
-
-# Does the test environment support pausing? (boolean value)
-#pause = true
-
-# Does the test environment support shelving/unshelving? (boolean
-# value)
-#shelve = true
-
-# Does the test environment support suspend/resume? (boolean value)
-#suspend = true
-
-# Does the test environment support live migration available? (boolean
-# value)
-#live_migration = true
-
-# Does the test environment support metadata service? Ignored unless
-# validation.run_validation=true. (boolean value)
-#metadata_service = true
-
-# Does the test environment use block devices for live migration
-# (boolean value)
-#block_migration_for_live_migration = false
-
-# Does the test environment block migration support cinder iSCSI
-# volumes. Note, libvirt doesn't support this, see
-# https://bugs.launchpad.net/nova/+bug/1398999 (boolean value)
-#block_migrate_cinder_iscsi = false
-
-# Does the test system allow live-migration of paused instances? Note,
-# this is more than just the ANDing of paused and live_migrate, but
-# all 3 should be set to True to run those tests (boolean value)
-#live_migrate_paused_instances = false
-
-# Enable VNC console. This configuration value should be same as
-# [nova.vnc]->vnc_enabled in nova.conf (boolean value)
-#vnc_console = false
-
-# Enable Spice console. This configuration value should be same as
-# [nova.spice]->enabled in nova.conf (boolean value)
-#spice_console = false
-
-# Enable RDP console. This configuration value should be same as
-# [nova.rdp]->enabled in nova.conf (boolean value)
-#rdp_console = false
-
-# Does the test environment support instance rescue mode? (boolean
-# value)
-#rescue = true
-
-# Enables returning of the instance password by the relevant server
-# API calls such as create, rebuild or rescue. (boolean value)
-#enable_instance_password = true
-
-# Does the test environment support dynamic network interface
-# attachment? (boolean value)
-#interface_attach = true
-
-# Does the test environment support creating snapshot images of
-# running instances? (boolean value)
-#snapshot = true
-
-# Does the test environment have the ec2 api running? (boolean value)
-#ec2_api = true
-
-# Does Nova preserve preexisting ports from Neutron when deleting an
-# instance? This should be set to True if testing Kilo+ Nova. (boolean
-# value)
-#preserve_ports = false
-
-# Does the test environment support attaching an encrypted volume to a
-# running server instance? This may depend on the combination of
-# compute_driver in nova and the volume_driver(s) in cinder. (boolean
-# value)
-#attach_encrypted_volume = true
-
-# Does the test environment support creating instances with multiple
-# ports on the same network? This is only valid when using Neutron.
-# (boolean value)
-#allow_duplicate_networks = false
-
-
-[dashboard]
-
-#
-# From tempest.config
-#
-
-# Where the dashboard can be found (string value)
-#dashboard_url = http://localhost/
-
-# Login page for the dashboard (string value)
-#login_url = http://localhost/auth/login/
-
-
-[data_processing]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the data processing service. (string value)
-#catalog_type = data_processing
-
-# The endpoint type to use for the data processing service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-
-[data_processing-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# List of enabled data processing plugins (list value)
-#plugins = vanilla,hdp
-
-
-[database]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Database service. (string value)
-#catalog_type = database
-
-# Valid primary flavor to use in database tests. (string value)
-#db_flavor_ref = 1
-
-# Current database version to use in database tests. (string value)
-#db_current_version = v1.0
-
-
-[debug]
-
-#
-# From tempest.config
-#
-
-# A regex to determine which requests should be traced.
-#
-# This is a regex to match the caller for rest client requests to be
-# able to
-# selectively trace calls out of specific classes and methods. It
-# largely
-# exists for test development, and is not expected to be used in a
-# real deploy
-# of tempest. This will be matched against the discovered
-# ClassName:method
-# in the test environment.
-#
-# Expected values for this field are:
-#
-#  * ClassName:test_method_name - traces one test_method
-#  * ClassName:setUp(Class) - traces specific setup functions
-#  * ClassName:tearDown(Class) - traces specific teardown functions
-#  * ClassName:_run_cleanups - traces the cleanup functions
-#
-# If nothing is specified, this feature is not enabled. To trace
-# everything
-# specify .* as the regex.
-#  (string value)
-#trace_requests =
-
-
-[identity]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Identity service. (string value)
-#catalog_type = identity
-
-# Set to True if using self-signed SSL certificates. (boolean value)
-#disable_ssl_certificate_validation = false
-
-# Specify a CA bundle file to use in verifying a TLS (https) server
-# certificate. (string value)
-#ca_certificates_file = <None>
-
-# Full URI of the OpenStack Identity API (Keystone), v2 (string value)
-#uri = <None>
-
-# Full URI of the OpenStack Identity API (Keystone), v3 (string value)
-#uri_v3 = <None>
-
-# Identity API version to be used for authentication for API tests.
-# (string value)
-#auth_version = v2
-
-# The identity region name to use. Also used as the other services'
-# region name unless they are set explicitly. If no such region is
-# found in the service catalog, the first found one is used. (string
-# value)
-#region = RegionOne
-
-# The endpoint type to use for the identity service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Username to use for Nova API requests. (string value)
-#username = <None>
-
-# Tenant name to use for Nova API requests. (string value)
-#tenant_name = <None>
-
-# Role required to administrate keystone. (string value)
-#admin_role = admin
-
-# API key to use when authenticating. (string value)
-#password = <None>
-
-# Domain name for authentication (Keystone V3).The same domain applies
-# to user and project (string value)
-#domain_name = <None>
-
-# Username of alternate user to use for Nova API requests. (string
-# value)
-#alt_username = <None>
-
-# Alternate user's Tenant name to use for Nova API requests. (string
-# value)
-#alt_tenant_name = <None>
-
-# API key to use when authenticating as alternate user. (string value)
-#alt_password = <None>
-
-# Alternate domain name for authentication (Keystone V3).The same
-# domain applies to user and project (string value)
-#alt_domain_name = <None>
-
-# Administrative Username to use for Keystone API requests. (string
-# value)
-#admin_username = <None>
-
-# Administrative Tenant name to use for Keystone API requests. (string
-# value)
-#admin_tenant_name = <None>
-
-# API key to use when authenticating as admin. (string value)
-#admin_password = <None>
-
-# Admin domain name for authentication (Keystone V3).The same domain
-# applies to user and project (string value)
-#admin_domain_name = <None>
-
-# ID of the default domain (string value)
-#default_domain_id = default
-
-
-[identity-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Does the identity service have delegation and impersonation enabled
-# (boolean value)
-#trust = true
-
-# Is the v2 identity API enabled (boolean value)
-#api_v2 = true
-
-# Is the v3 identity API enabled (boolean value)
-#api_v3 = true
-
-
-[image]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Image service. (string value)
-#catalog_type = image
-
-# The image region name to use. If empty, the value of identity.region
-# is used instead. If no such region is found in the service catalog,
-# the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the image service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# http accessible image (string value)
-#http_image = http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-uec.tar.gz
-
-# Timeout in seconds to wait for an image to become available.
-# (integer value)
-#build_timeout = 300
-
-# Time in seconds between image operation status checks. (integer
-# value)
-#build_interval = 1
-
-
-[image-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Is the v2 image API enabled (boolean value)
-#api_v2 = true
-
-# Is the v1 image API enabled (boolean value)
-#api_v1 = true
-
-# Is the deactivate-image feature enabled. The feature has been
-# integrated since Kilo. (boolean value)
-#deactivate_image = false
-
-
-[input-scenario]
-
-#
-# From tempest.config
-#
-
-# Matching images become parameters for scenario tests (string value)
-#image_regex = ^cirros-0.3.1-x86_64-uec$
-
-# Matching flavors become parameters for scenario tests (string value)
-#flavor_regex = ^m1.nano$
-
-# SSH verification in tests is skippedfor matching images (string
-# value)
-#non_ssh_image_regex = ^.*[Ww]in.*$
-
-# List of user mapped to regex to matching image names. (string value)
-#ssh_user_regex = [["^.*[Cc]irros.*$", "cirros"]]
-
-
-[messaging]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Messaging service. (string value)
-#catalog_type = messaging
-
-# The maximum number of queue records per page when listing queues
-# (integer value)
-#max_queues_per_page = 20
-
-# The maximum metadata size for a queue (integer value)
-#max_queue_metadata = 65536
-
-# The maximum number of queue message per page when listing (or)
-# posting messages (integer value)
-#max_messages_per_page = 20
-
-# The maximum size of a message body (integer value)
-#max_message_size = 262144
-
-# The maximum number of messages per claim (integer value)
-#max_messages_per_claim = 20
-
-# The maximum ttl for a message (integer value)
-#max_message_ttl = 1209600
-
-# The maximum ttl for a claim (integer value)
-#max_claim_ttl = 43200
-
-# The maximum grace period for a claim (integer value)
-#max_claim_grace = 43200
-
-
-[negative]
-
-#
-# From tempest.config
-#
-
-# Test generator class for all negative tests (string value)
-#test_generator = tempest.common.generator.negative_generator.NegativeTestGenerator
-
-
-[network]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Neutron service. (string value)
-#catalog_type = network
-
-# The network region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the network service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# The cidr block to allocate tenant ipv4 subnets from (string value)
-#tenant_network_cidr = 10.100.0.0/16
-
-# The mask bits for tenant ipv4 subnets (integer value)
-#tenant_network_mask_bits = 28
-
-# The cidr block to allocate tenant ipv6 subnets from (string value)
-#tenant_network_v6_cidr = 2003::/48
-
-# The mask bits for tenant ipv6 subnets (integer value)
-#tenant_network_v6_mask_bits = 64
-
-# Whether tenant networks can be reached directly from the test
-# client. This must be set to True when the 'fixed' ssh_connect_method
-# is selected. (boolean value)
-#tenant_networks_reachable = false
-
-# Id of the public network that provides external connectivity (string
-# value)
-#public_network_id =
-
-# Default floating network name. Used to allocate floating IPs when
-# neutron is enabled. (string value)
-#floating_network_name = <None>
-
-# Id of the public router that provides external connectivity. This
-# should only be used when Neutron's 'allow_overlapping_ips' is set to
-# 'False' in neutron.conf. usually not needed past 'Grizzly' release
-# (string value)
-#public_router_id =
-
-# Timeout in seconds to wait for network operation to complete.
-# (integer value)
-#build_timeout = 300
-
-# Time in seconds between network operation status checks. (integer
-# value)
-#build_interval = 1
-
-# List of dns servers which should be used for subnet creation (list
-# value)
-#dns_servers = 8.8.8.8,8.8.4.4
-
-# vnic_type to use when Launching instances with pre-configured ports.
-# Supported ports are: ['normal','direct','macvtap'] (string value)
-# Allowed values: <None>, normal, direct, macvtap
-#port_vnic_type = <None>
-
-
-[network-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Allow the execution of IPv6 tests (boolean value)
-#ipv6 = true
-
-# A list of enabled network extensions with a special entry all which
-# indicates every extension is enabled. Empty list indicates all
-# extensions are disabled. To get the list of extensions run: 'neutron
-# ext-list' (list value)
-#api_extensions = all
-
-# Allow the execution of IPv6 subnet tests that use the extended IPv6
-# attributes ipv6_ra_mode and ipv6_address_mode (boolean value)
-#ipv6_subnet_attributes = false
-
-# Does the test environment support changing port admin state (boolean
-# value)
-#port_admin_state_change = true
-
-
-[object-storage]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Object-Storage service. (string value)
-#catalog_type = object-store
-
-# The object-storage region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the object-store service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Number of seconds to time on waiting for a container to container
-# synchronization complete. (integer value)
-#container_sync_timeout = 600
-
-# Number of seconds to wait while looping to check the status of a
-# container to container synchronization (integer value)
-#container_sync_interval = 5
-
-# Role to add to users created for swift tests to enable creating
-# containers (string value)
-#operator_role = Member
-
-# User role that has reseller admin (string value)
-#reseller_admin_role = ResellerAdmin
-
-# Name of sync realm. A sync realm is a set of clusters that have
-# agreed to allow container syncing with each other. Set the same
-# realm name as Swift's container-sync-realms.conf (string value)
-#realm_name = realm1
-
-# One name of cluster which is set in the realm whose name is set in
-# 'realm_name' item in this file. Set the same cluster name as Swift's
-# container-sync-realms.conf (string value)
-#cluster_name = name1
-
-
-[object-storage-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# A list of the enabled optional discoverable apis. A single entry,
-# all, indicates that all of these features are expected to be enabled
-# (list value)
-#discoverable_apis = all
-
-# Execute (old style) container-sync tests (boolean value)
-#container_sync = true
-
-# Execute object-versioning tests (boolean value)
-#object_versioning = true
-
-# Execute discoverability tests (boolean value)
-#discoverability = true
-
-
-[orchestration]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Orchestration service. (string value)
-#catalog_type = orchestration
-
-# The orchestration region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the orchestration service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Role required for users to be able to manage stacks (string value)
-#stack_owner_role = heat_stack_owner
-
-# Time in seconds between build status checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for a stack to build. (integer value)
-#build_timeout = 1200
-
-# Instance type for tests. Needs to be big enough for a full OS plus
-# the test workload (string value)
-#instance_type = m1.micro
-
-# Name of existing keypair to launch servers with. (string value)
-#keypair_name = <None>
-
-# Value must match heat configuration of the same name. (integer
-# value)
-#max_template_size = 524288
-
-# Value must match heat configuration of the same name. (integer
-# value)
-#max_resources_per_stack = 1000
-
-
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-# Deprecated group/name - [DEFAULT]/disable_process_locking
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified
-# directory should only be writable by the user running the processes
-# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
-# If external locks are used, a lock path must be set. (string value)
-# Deprecated group/name - [DEFAULT]/lock_path
-#lock_path = <None>
-
-
-[scenario]
-
-#
-# From tempest.config
-#
-
-# Directory containing image files (string value)
-#img_dir = /opt/stack/new/devstack/files/images/cirros-0.3.1-x86_64-uec
-
-# Image file name (string value)
-# Deprecated group/name - [DEFAULT]/qcow2_img_file
-#img_file = cirros-0.3.1-x86_64-disk.img
-
-# Image disk format (string value)
-#img_disk_format = qcow2
-
-# Image container format (string value)
-#img_container_format = bare
-
-# Glance image properties. Use for custom images which require them
-# (dict value)
-#img_properties = <None>
-
-# AMI image file name (string value)
-#ami_img_file = cirros-0.3.1-x86_64-blank.img
-
-# ARI image file name (string value)
-#ari_img_file = cirros-0.3.1-x86_64-initrd
-
-# AKI image file name (string value)
-#aki_img_file = cirros-0.3.1-x86_64-vmlinuz
-
-# ssh username for the image file (string value)
-#ssh_user = cirros
-
-# specifies how many resources to request at once. Used for large
-# operations testing. (integer value)
-#large_ops_number = 0
-
-# DHCP client used by images to renew DCHP lease. If left empty,
-# update operation will be skipped. Supported clients: "udhcpc",
-# "dhclient" (string value)
-# Allowed values: udhcpc, dhclient
-#dhcp_client = udhcpc
-
-
-[service_available]
-
-#
-# From tempest.config
-#
-
-# Whether or not cinder is expected to be available (boolean value)
-#cinder = true
-
-# Whether or not neutron is expected to be available (boolean value)
-#neutron = false
-
-# Whether or not glance is expected to be available (boolean value)
-#glance = true
-
-# Whether or not swift is expected to be available (boolean value)
-#swift = true
-
-# Whether or not nova is expected to be available (boolean value)
-#nova = true
-
-# Whether or not Heat is expected to be available (boolean value)
-#heat = false
-
-# Whether or not Ceilometer is expected to be available (boolean
-# value)
-#ceilometer = true
-
-# Whether or not Horizon is expected to be available (boolean value)
-#horizon = true
-
-# Whether or not Sahara is expected to be available (boolean value)
-#sahara = false
-
-# Whether or not Ironic is expected to be available (boolean value)
-#ironic = false
-
-# Whether or not Trove is expected to be available (boolean value)
-#trove = false
-
-# Whether or not Zaqar is expected to be available (boolean value)
-#zaqar = false
-
-
-[stress]
-
-#
-# From tempest.config
-#
-
-# Directory containing log files on the compute nodes (string value)
-#nova_logdir = <None>
-
-# Maximum number of instances to create during test. (integer value)
-#max_instances = 16
-
-# Controller host. (string value)
-#controller = <None>
-
-# Controller host. (string value)
-#target_controller = <None>
-
-# ssh user. (string value)
-#target_ssh_user = <None>
-
-# Path to private key. (string value)
-#target_private_key_path = <None>
-
-# regexp for list of log files. (string value)
-#target_logfiles = <None>
-
-# time (in seconds) between log file error checks. (integer value)
-#log_check_interval = 60
-
-# The number of threads created while stress test. (integer value)
-#default_thread_number_per_action = 4
-
-# Prevent the cleaning (tearDownClass()) between each stress test run
-# if an exception occurs during this run. (boolean value)
-#leave_dirty_stack = false
-
-# Allows a full cleaning process after a stress test. Caution : this
-# cleanup will remove every objects of every tenant. (boolean value)
-#full_clean_stack = false
-
-
-[telemetry]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Telemetry service. (string value)
-#catalog_type = metering
-
-# The endpoint type to use for the telemetry service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# This variable is used as flag to enable notification tests (boolean
-# value)
-#too_slow_to_test = true
-
-
-[telemetry-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Runs Ceilometer event-related tests (boolean value)
-#events = false
-
-
-[validation]
-
-#
-# From tempest.config
-#
-
-# Enable ssh on created servers and creation of additional validation
-# resources to enable remote access (boolean value)
-# Deprecated group/name - [compute]/run_ssh
-#run_validation = false
-
-# Default IP type used for validation: -fixed: uses the first IP
-# belonging to the fixed network -floating: creates and uses a
-# floating IP (string value)
-# Allowed values: fixed, floating
-#connect_method = floating
-
-# Default authentication method to the instance. Only ssh via keypair
-# is supported for now. Additional methods will be handled in a
-# separate spec. (string value)
-# Allowed values: keypair
-#auth_method = keypair
-
-# Default IP version for ssh connections. (integer value)
-# Deprecated group/name - [compute]/ip_version_for_ssh
-#ip_version_for_ssh = 4
-
-# Timeout in seconds to wait for ping to succeed. (integer value)
-#ping_timeout = 120
-
-# Timeout in seconds to wait for the TCP connection to be successful.
-# (integer value)
-# Deprecated group/name - [compute]/ssh_channel_timeout
-#connect_timeout = 60
-
-# Timeout in seconds to wait for the ssh banner. (integer value)
-# Deprecated group/name - [compute]/ssh_timeout
-#ssh_timeout = 300
-
-
-[volume]
-
-#
-# From tempest.config
-#
-
-# Time in seconds between volume availability checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for a volume to become available.
-# (integer value)
-#build_timeout = 300
-
-# Catalog type of the Volume Service (string value)
-#catalog_type = volume
-
-# The volume region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the volume service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Name of the backend1 (must be declared in cinder.conf) (string
-# value)
-#backend1_name = BACKEND_1
-
-# Name of the backend2 (must be declared in cinder.conf) (string
-# value)
-#backend2_name = BACKEND_2
-
-# Backend protocol to target when creating volume types (string value)
-#storage_protocol = iSCSI
-
-# Backend vendor to target when creating volume types (string value)
-#vendor_name = Open Source
-
-# Disk format to use when copying a volume to image (string value)
-#disk_format = raw
-
-# Default size in GB for volumes created by volumes tests (integer
-# value)
-#volume_size = 1
-
-
-[volume-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Runs Cinder multi-backend test (requires 2 backends) (boolean value)
-#multi_backend = false
-
-# Runs Cinder volumes backup test (boolean value)
-#backup = true
-
-# Runs Cinder volume snapshot test (boolean value)
-#snapshot = true
-
-# A list of enabled volume extensions with a special entry all which
-# indicates every extension is enabled. Empty list indicates all
-# extensions are disabled (list value)
-#api_extensions = all
-
-# Is the v1 volume API enabled (boolean value)
-#api_v1 = true
-
-# Is the v2 volume API enabled (boolean value)
-#api_v2 = true
-
-# Update bootable status of a volume Not implemented on icehouse
-# (boolean value)
-#bootable = false
diff --git a/openstack-common.conf b/openstack-common.conf
index 16ba6a7..acb1437 100644
--- a/openstack-common.conf
+++ b/openstack-common.conf
@@ -2,7 +2,6 @@
 
 # The list of modules to copy from openstack-common
 module=install_venv_common
-module=versionutils
 module=with_venv
 module=install_venv
 
diff --git a/requirements.txt b/requirements.txt
index 415eaa5..cc2a187 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,8 +1,8 @@
 # The order of packages is significant, because pip processes them in the order
 # of appearance. Changing the order has an impact on the overall integration
 # process, which may cause wedges in the gate later.
-pbr<2.0,>=1.3
-cliff>=1.13.0 # Apache-2.0
+pbr<2.0,>=1.4
+cliff>=1.14.0 # Apache-2.0
 anyjson>=0.3.3
 httplib2>=0.7.5
 jsonschema!=2.5.0,<3.0.0,>=2.0.0
@@ -13,11 +13,11 @@
 testrepository>=0.0.18
 pyOpenSSL>=0.14
 oslo.concurrency>=2.3.0 # Apache-2.0
-oslo.config>=1.11.0 # Apache-2.0
+oslo.config>=2.1.0 # Apache-2.0
 oslo.i18n>=1.5.0 # Apache-2.0
-oslo.log>=1.6.0 # Apache-2.0
+oslo.log>=1.8.0 # Apache-2.0
 oslo.serialization>=1.4.0 # Apache-2.0
-oslo.utils>=1.9.0 # Apache-2.0
+oslo.utils>=2.0.0 # Apache-2.0
 six>=1.9.0
 iso8601>=0.1.9
 fixtures>=1.3.1
diff --git a/tempest/api/baremetal/admin/test_nodes.py b/tempest/api/baremetal/admin/test_nodes.py
index 4830dcd..b6dee18 100644
--- a/tempest/api/baremetal/admin/test_nodes.py
+++ b/tempest/api/baremetal/admin/test_nodes.py
@@ -20,7 +20,7 @@
 
 
 class TestNodes(base.BaseBaremetalTest):
-    '''Tests for baremetal nodes.'''
+    """Tests for baremetal nodes."""
 
     def setUp(self):
         super(TestNodes, self).setUp()
diff --git a/tempest/api/compute/admin/test_agents.py b/tempest/api/compute/admin/test_agents.py
index d9a1ee5..38f5fb7 100644
--- a/tempest/api/compute/admin/test_agents.py
+++ b/tempest/api/compute/admin/test_agents.py
@@ -38,7 +38,7 @@
             hypervisor='common', os='linux', architecture='x86_64',
             version='7.0', url='xxx://xxxx/xxx/xxx',
             md5hash='add6bb58e139be103324d04d82d8f545')
-        body = self.client.create_agent(**params)
+        body = self.client.create_agent(**params)['agent']
         self.agent_id = body['agent_id']
 
     def tearDown(self):
@@ -67,7 +67,7 @@
             hypervisor='kvm', os='win', architecture='x86',
             version='7.0', url='xxx://xxxx/xxx/xxx',
             md5hash='add6bb58e139be103324d04d82d8f545')
-        body = self.client.create_agent(**params)
+        body = self.client.create_agent(**params)['agent']
         self.addCleanup(self.client.delete_agent, body['agent_id'])
         for expected_item, value in params.items():
             self.assertEqual(value, body[expected_item])
@@ -78,7 +78,7 @@
         params = self._param_helper(
             version='8.0', url='xxx://xxxx/xxx/xxx2',
             md5hash='add6bb58e139be103324d04d82d8f547')
-        body = self.client.update_agent(self.agent_id, **params)
+        body = self.client.update_agent(self.agent_id, **params)['agent']
         for expected_item, value in params.items():
             self.assertEqual(value, body[expected_item])
 
@@ -88,13 +88,13 @@
         self.client.delete_agent(self.agent_id)
 
         # Verify the list doesn't contain the deleted agent.
-        agents = self.client.list_agents()
+        agents = self.client.list_agents()['agents']
         self.assertNotIn(self.agent_id, map(lambda x: x['agent_id'], agents))
 
     @test.idempotent_id('6a326c69-654b-438a-80a3-34bcc454e138')
     def test_list_agents(self):
         # List all agents.
-        agents = self.client.list_agents()
+        agents = self.client.list_agents()['agents']
         self.assertTrue(len(agents) > 0, 'Cannot get any agents.(%s)' % agents)
         self.assertIn(self.agent_id, map(lambda x: x['agent_id'], agents))
 
@@ -105,11 +105,12 @@
             hypervisor='xen', os='linux', architecture='x86',
             version='7.0', url='xxx://xxxx/xxx/xxx1',
             md5hash='add6bb58e139be103324d04d82d8f546')
-        agent_xen = self.client.create_agent(**params)
+        agent_xen = self.client.create_agent(**params)['agent']
         self.addCleanup(self.client.delete_agent, agent_xen['agent_id'])
 
         agent_id_xen = agent_xen['agent_id']
-        agents = self.client.list_agents(hypervisor=agent_xen['hypervisor'])
+        agents = (self.client.list_agents(hypervisor=agent_xen['hypervisor'])
+                  ['agents'])
         self.assertTrue(len(agents) > 0, 'Cannot get any agents.(%s)' % agents)
         self.assertIn(agent_id_xen, map(lambda x: x['agent_id'], agents))
         self.assertNotIn(self.agent_id, map(lambda x: x['agent_id'], agents))
diff --git a/tempest/api/compute/admin/test_aggregates.py b/tempest/api/compute/admin/test_aggregates.py
index 9334fb6..e42131d 100644
--- a/tempest/api/compute/admin/test_aggregates.py
+++ b/tempest/api/compute/admin/test_aggregates.py
@@ -40,7 +40,7 @@
         cls.aggregate_name_prefix = 'test_aggregate'
         cls.az_name_prefix = 'test_az'
 
-        hosts_all = cls.os_adm.hosts_client.list_hosts()
+        hosts_all = cls.os_adm.hosts_client.list_hosts()['hosts']
         hosts = map(lambda x: x['host_name'],
                     filter(lambda y: y['service'] == 'compute', hosts_all))
         cls.host = hosts[0]
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index 231c88f..02e0af0 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -39,7 +39,7 @@
         cls.aggregate_name_prefix = 'test_aggregate'
         cls.az_name_prefix = 'test_az'
 
-        hosts_all = cls.os_adm.hosts_client.list_hosts()
+        hosts_all = cls.os_adm.hosts_client.list_hosts()['hosts']
         hosts = map(lambda x: x['host_name'],
                     filter(lambda y: y['service'] == 'compute', hosts_all))
         cls.host = hosts[0]
@@ -131,7 +131,7 @@
     @test.idempotent_id('0ef07828-12b4-45ba-87cc-41425faf5711')
     def test_aggregate_add_non_exist_host(self):
         # Adding a non-exist host to an aggregate should raise exceptions.
-        hosts_all = self.os_adm.hosts_client.list_hosts()
+        hosts_all = self.os_adm.hosts_client.list_hosts()['hosts']
         hosts = map(lambda x: x['host_name'], hosts_all)
         while True:
             non_exist_host = data_utils.rand_name('nonexist_host')
diff --git a/tempest/api/compute/admin/test_baremetal_nodes.py b/tempest/api/compute/admin/test_baremetal_nodes.py
index 4d95f0a..2599d86 100644
--- a/tempest/api/compute/admin/test_baremetal_nodes.py
+++ b/tempest/api/compute/admin/test_baremetal_nodes.py
@@ -46,11 +46,11 @@
         # List all baremetal nodes and ensure our created test nodes are
         # listed
         bm_node_ids = set([n['id'] for n in
-                           self.client.list_baremetal_nodes()])
+                           self.client.list_baremetal_nodes()['nodes']])
         test_node_ids = set([n['uuid'] for n in test_nodes])
         self.assertTrue(test_node_ids.issubset(bm_node_ids))
 
         # Test getting each individually
         for node in test_nodes:
             baremetal_node = self.client.show_baremetal_node(node['uuid'])
-            self.assertEqual(node['uuid'], baremetal_node['id'])
+            self.assertEqual(node['uuid'], baremetal_node['node']['id'])
diff --git a/tempest/api/compute/admin/test_fixed_ips.py b/tempest/api/compute/admin/test_fixed_ips.py
index 3e20b46..669585c 100644
--- a/tempest/api/compute/admin/test_fixed_ips.py
+++ b/tempest/api/compute/admin/test_fixed_ips.py
@@ -51,7 +51,7 @@
     @test.services('network')
     def test_list_fixed_ip_details(self):
         fixed_ip = self.client.show_fixed_ip(self.ip)
-        self.assertEqual(fixed_ip['address'], self.ip)
+        self.assertEqual(fixed_ip['fixed_ip']['address'], self.ip)
 
     @test.idempotent_id('5485077b-7e46-4cec-b402-91dc3173433b')
     @test.services('network')
diff --git a/tempest/api/compute/admin/test_floating_ips_bulk.py b/tempest/api/compute/admin/test_floating_ips_bulk.py
index 4ac1915..c8ca938 100644
--- a/tempest/api/compute/admin/test_floating_ips_bulk.py
+++ b/tempest/api/compute/admin/test_floating_ips_bulk.py
@@ -45,7 +45,7 @@
     @classmethod
     def verify_unallocated_floating_ip_range(cls, ip_range):
         # Verify whether configure floating IP range is not already allocated.
-        body = cls.client.list_floating_ips_bulk()
+        body = cls.client.list_floating_ips_bulk()['floating_ip_info']
         allocated_ips_list = map(lambda x: x['address'], body)
         for ip_addr in netaddr.IPNetwork(ip_range).iter_hosts():
             if str(ip_addr) in allocated_ips_list:
@@ -70,12 +70,13 @@
         # anywhere. Using the below mentioned interface which is not ever
         # expected to be used. Clean Up has been done for created IP range
         interface = 'eth0'
-        body = self.client.create_floating_ips_bulk(self.ip_range,
-                                                    pool,
-                                                    interface)
+        body = (self.client.create_floating_ips_bulk(self.ip_range,
+                                                     pool,
+                                                     interface)
+                ['floating_ips_bulk_create'])
         self.addCleanup(self._delete_floating_ips_bulk, self.ip_range)
         self.assertEqual(self.ip_range, body['ip_range'])
-        ips_list = self.client.list_floating_ips_bulk()
+        ips_list = self.client.list_floating_ips_bulk()['floating_ip_info']
         self.assertNotEqual(0, len(ips_list))
         for ip in netaddr.IPNetwork(self.ip_range).iter_hosts():
             self.assertIn(str(ip), map(lambda x: x['address'], ips_list))
diff --git a/tempest/api/compute/admin/test_hosts.py b/tempest/api/compute/admin/test_hosts.py
index 0dadea5..6d8788f 100644
--- a/tempest/api/compute/admin/test_hosts.py
+++ b/tempest/api/compute/admin/test_hosts.py
@@ -30,15 +30,15 @@
 
     @test.idempotent_id('9bfaf98d-e2cb-44b0-a07e-2558b2821e4f')
     def test_list_hosts(self):
-        hosts = self.client.list_hosts()
+        hosts = self.client.list_hosts()['hosts']
         self.assertTrue(len(hosts) >= 2, str(hosts))
 
     @test.idempotent_id('5dc06f5b-d887-47a2-bb2a-67762ef3c6de')
     def test_list_hosts_with_zone(self):
         self.useFixture(fixtures.LockFixture('availability_zone'))
-        hosts = self.client.list_hosts()
+        hosts = self.client.list_hosts()['hosts']
         host = hosts[0]
-        hosts = self.client.list_hosts(zone=host['zone'])
+        hosts = self.client.list_hosts(zone=host['zone'])['hosts']
         self.assertTrue(len(hosts) >= 1)
         self.assertIn(host, hosts)
 
@@ -46,26 +46,26 @@
     def test_list_hosts_with_a_blank_zone(self):
         # If the request is sent with a blank zone, it will be successful
         # and will return the full list of hosts
-        hosts = self.client.list_hosts(zone='')
+        hosts = self.client.list_hosts(zone='')['hosts']
         self.assertNotEqual(0, len(hosts))
 
     @test.idempotent_id('c6ddbadb-c94e-4500-b12f-8ffc43843ff8')
     def test_list_hosts_with_nonexistent_zone(self):
         # If the request is sent with a nonexistent zone, it will be
         # successful and no hosts will be returned
-        hosts = self.client.list_hosts(zone='xxx')
+        hosts = self.client.list_hosts(zone='xxx')['hosts']
         self.assertEqual(0, len(hosts))
 
     @test.idempotent_id('38adbb12-aee2-4498-8aec-329c72423aa4')
     def test_show_host_detail(self):
-        hosts = self.client.list_hosts()
+        hosts = self.client.list_hosts()['hosts']
 
         hosts = [host for host in hosts if host['service'] == 'compute']
         self.assertTrue(len(hosts) >= 1)
 
         for host in hosts:
             hostname = host['host_name']
-            resources = self.client.show_host(hostname)
+            resources = self.client.show_host(hostname)['host']
             self.assertTrue(len(resources) >= 1)
             host_resource = resources[0]['resource']
             self.assertIsNotNone(host_resource)
diff --git a/tempest/api/compute/admin/test_hosts_negative.py b/tempest/api/compute/admin/test_hosts_negative.py
index b2d2a04..2ea7f1a 100644
--- a/tempest/api/compute/admin/test_hosts_negative.py
+++ b/tempest/api/compute/admin/test_hosts_negative.py
@@ -32,7 +32,7 @@
         cls.non_admin_client = cls.os.hosts_client
 
     def _get_host_name(self):
-        hosts = self.client.list_hosts()
+        hosts = self.client.list_hosts()['hosts']
         self.assertTrue(len(hosts) >= 1)
         hostname = hosts[0]['host_name']
         return hostname
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 6ffa4e9..d6bc6f5 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -32,6 +32,7 @@
         super(LiveBlockMigrationTestJSON, cls).setup_clients()
         cls.admin_hosts_client = cls.os_adm.hosts_client
         cls.admin_servers_client = cls.os_adm.servers_client
+        cls.admin_migration_client = cls.os_adm.migrations_client
 
     @classmethod
     def resource_setup(cls):
@@ -40,7 +41,7 @@
         cls.created_server_ids = []
 
     def _get_compute_hostnames(self):
-        body = self.admin_hosts_client.list_hosts()
+        body = self.admin_hosts_client.list_hosts()['hosts']
         return [
             host_record['host_name']
             for host_record in body
@@ -55,9 +56,10 @@
         return self._get_server_details(server_id)[self._host_key]
 
     def _migrate_server_to(self, server_id, dest_host):
+        bmflm = CONF.compute_feature_enabled.block_migration_for_live_migration
         body = self.admin_servers_client.live_migrate_server(
-            server_id, dest_host,
-            CONF.compute_feature_enabled.block_migration_for_live_migration)
+            server_id, host=dest_host, block_migration=bmflm,
+            disk_over_commit=False)
         return body
 
     def _get_host_other_than(self, host):
@@ -109,7 +111,16 @@
 
         self._migrate_server_to(server_id, target_host)
         waiters.wait_for_server_status(self.servers_client, server_id, state)
-        self.assertEqual(target_host, self._get_host_for_server(server_id))
+        migration_list = self.admin_migration_client.list_migrations()
+
+        msg = ("Live Migration failed. Migrations list for Instance "
+               "%s: [" % server_id)
+        for live_migration in migration_list:
+            if live_migration['instance_uuid'] == server_id:
+                msg += "\n%s" % live_migration
+        msg += "]"
+        self.assertEqual(target_host, self._get_host_for_server(server_id),
+                         msg)
 
     @test.idempotent_id('1dce86b8-eb04-4c03-a9d8-9c1dc3ee0c7b')
     @testtools.skipUnless(CONF.compute_feature_enabled.live_migration,
@@ -153,7 +164,7 @@
         self.addCleanup(self._volume_clean_up, server_id, volume['id'])
 
         # Attach the volume to the server
-        self.servers_client.attach_volume(server_id, volume['id'],
+        self.servers_client.attach_volume(server_id, volumeId=volume['id'],
                                           device='/dev/xvdb')
         self.volumes_client.wait_for_volume_status(volume['id'], 'in-use')
 
diff --git a/tempest/api/compute/admin/test_quotas.py b/tempest/api/compute/admin/test_quotas.py
index 47bdfa6..3416eae 100644
--- a/tempest/api/compute/admin/test_quotas.py
+++ b/tempest/api/compute/admin/test_quotas.py
@@ -118,6 +118,8 @@
                                            password=password,
                                            tenant_id=tenant_id,
                                            email=email)
+        if 'user' in user:
+            user = user['user']
         user_id = user['id']
         self.addCleanup(identity_client.delete_user, user_id)
 
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 9acf23b..8dcd0b2 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -170,4 +170,5 @@
         # will be raised when out of quota
         self.assertRaises((lib_exc.OverLimit, lib_exc.Forbidden),
                           self.sgr_client.create_security_group_rule,
-                          secgroup_id, ip_protocol, 1025, 1025)
+                          parent_group_id=secgroup_id, ip_protocol=ip_protocol,
+                          from_port=1025, to_port=1025)
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 2126787..1ec2b56 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -129,7 +129,8 @@
 
         for server in cls.servers:
             try:
-                cls.servers_client.wait_for_server_termination(server['id'])
+                waiters.wait_for_server_termination(cls.servers_client,
+                                                    server['id'])
             except Exception:
                 LOG.exception('Waiting for deletion of server %s failed'
                               % server['id'])
@@ -150,7 +151,8 @@
             except Exception as exc:
                 LOG.exception(exc)
                 cls.servers_client.delete_server(cls.server_id)
-                cls.servers_client.wait_for_server_termination(cls.server_id)
+                waiters.wait_for_server_termination(cls.servers_client,
+                                                    cls.server_id)
                 cls.server_id = None
                 raise
 
@@ -279,7 +281,7 @@
         if 'name' in kwargs:
             name = kwargs.pop('name')
 
-        image = cls.images_client.create_image(server_id, name)
+        image = cls.images_client.create_image(server_id, name=name)
         image_id = data_utils.parse_image_id(image.response['location'])
         cls.images.append(image_id)
 
@@ -300,7 +302,8 @@
         if server_id:
             try:
                 cls.servers_client.delete_server(server_id)
-                cls.servers_client.wait_for_server_termination(server_id)
+                waiters.wait_for_server_termination(cls.servers_client,
+                                                    server_id)
             except Exception:
                 LOG.exception('Failed to delete server %s' % server_id)
 
@@ -316,7 +319,8 @@
         """Deletes an existing server and waits for it to be gone."""
         try:
             cls.servers_client.delete_server(server_id)
-            cls.servers_client.wait_for_server_termination(server_id)
+            waiters.wait_for_server_termination(cls.servers_client,
+                                                server_id)
         except Exception:
             LOG.exception('Failed to delete server %s' % server_id)
 
diff --git a/tempest/api/compute/certificates/test_certificates.py b/tempest/api/compute/certificates/test_certificates.py
index 5f68786..78a0a93 100644
--- a/tempest/api/compute/certificates/test_certificates.py
+++ b/tempest/api/compute/certificates/test_certificates.py
@@ -24,13 +24,14 @@
     @test.idempotent_id('c070a441-b08e-447e-a733-905909535b1b')
     def test_create_root_certificate(self):
         # create certificates
-        body = self.certificates_client.create_certificate()
+        body = self.certificates_client.create_certificate()['certificate']
         self.assertIn('data', body)
         self.assertIn('private_key', body)
 
     @test.idempotent_id('3ac273d0-92d2-4632-bdfc-afbc21d4606c')
     def test_get_root_certificate(self):
         # get the root certificate
-        body = self.certificates_client.show_certificate('root')
+        body = (self.certificates_client.show_certificate('root')
+                ['certificate'])
         self.assertIn('data', body)
         self.assertIn('private_key', body)
diff --git a/tempest/api/compute/floating_ips/test_list_floating_ips.py b/tempest/api/compute/floating_ips/test_list_floating_ips.py
index d26a5e5..7a5bcff 100644
--- a/tempest/api/compute/floating_ips/test_list_floating_ips.py
+++ b/tempest/api/compute/floating_ips/test_list_floating_ips.py
@@ -78,5 +78,5 @@
     def test_list_floating_ip_pools(self):
         # Positive test: Should return the list of floating IP Pools
         floating_ip_pools = self.pools_client.list_floating_ip_pools()
-        self.assertNotEqual(0, len(floating_ip_pools),
+        self.assertNotEqual(0, len(floating_ip_pools['floating_ip_pools']),
                             "Expected floating IP Pools. Got zero.")
diff --git a/tempest/api/compute/images/test_image_metadata.py b/tempest/api/compute/images/test_image_metadata.py
index ab82d91..d16c020 100644
--- a/tempest/api/compute/images/test_image_metadata.py
+++ b/tempest/api/compute/images/test_image_metadata.py
@@ -48,7 +48,7 @@
         body = cls.glance_client.create_image(name=name,
                                               container_format='bare',
                                               disk_format='raw',
-                                              is_public=False)
+                                              is_public=False)['image']
         cls.image_id = body['id']
         cls.images.append(cls.image_id)
         image_file = six.StringIO(('*' * 1024))
diff --git a/tempest/api/compute/images/test_images_negative.py b/tempest/api/compute/images/test_images_negative.py
index 9721fa5..7f23730 100644
--- a/tempest/api/compute/images/test_images_negative.py
+++ b/tempest/api/compute/images/test_images_negative.py
@@ -50,7 +50,7 @@
 
         # Delete server before trying to create an image from it
         self.servers_client.delete_server(server['id'])
-        self.servers_client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.servers_client, server['id'])
         # Create a new image after server is deleted
         name = data_utils.rand_name('image')
         meta = {'image_type': 'test'}
@@ -93,7 +93,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 35)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
-                          test_uuid, snapshot_name)
+                          test_uuid, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('36741560-510e-4cc2-8641-55fe4dfb2437')
@@ -102,7 +102,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 37)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
-                          test_uuid, snapshot_name)
+                          test_uuid, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('381acb65-785a-4942-94ce-d8f8c84f1f0f')
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index 40a781c..06b7cac 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -80,7 +80,8 @@
         # Create a new image
         name = data_utils.rand_name('image')
         meta = {'image_type': 'test'}
-        body = self.client.create_image(self.server_id, name, meta)
+        body = self.client.create_image(self.server_id, name=name,
+                                        metadata=meta)
         image_id = data_utils.parse_image_id(body.response['location'])
         waiters.wait_for_image_status(self.client, image_id, 'ACTIVE')
 
@@ -112,6 +113,6 @@
         # #1370954 in glance which will 500 if mysql is used as the
         # backend and it attempts to store a 4 byte utf-8 character
         utf8_name = data_utils.rand_name('\xe2\x82\xa1')
-        body = self.client.create_image(self.server_id, utf8_name)
+        body = self.client.create_image(self.server_id, name=utf8_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.addCleanup(self.client.delete_image, image_id)
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 1a74e52..9ea62fb 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -93,7 +93,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         meta = {'': ''}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name, meta)
+                          self.server_id, name=snapshot_name, metadata=meta)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('3d24d11f-5366-4536-bd28-cff32b748eca')
@@ -102,7 +102,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         meta = {'a' * 260: 'b' * 260}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name, meta)
+                          self.server_id, name=snapshot_name, metadata=meta)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('0460efcf-ee88-4f94-acef-1bf658695456')
@@ -111,8 +111,7 @@
 
         # Create first snapshot
         snapshot_name = data_utils.rand_name('test-snap')
-        body = self.client.create_image(self.server_id,
-                                        snapshot_name)
+        body = self.client.create_image(self.server_id, name=snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
         self.addCleanup(self._reset_server)
@@ -120,7 +119,7 @@
         # Create second snapshot
         alt_snapshot_name = data_utils.rand_name('test-snap')
         self.assertRaises(lib_exc.Conflict, self.client.create_image,
-                          self.server_id, alt_snapshot_name)
+                          self.server_id, name=alt_snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('084f0cbc-500a-4963-8a4e-312905862581')
@@ -129,7 +128,7 @@
 
         snapshot_name = data_utils.rand_name('a' * 260)
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name)
+                          self.server_id, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('0894954d-2db2-4195-a45b-ffec0bc0187e')
@@ -137,7 +136,7 @@
         # Return an error when trying to delete an image that is being created
 
         snapshot_name = data_utils.rand_name('test-snap')
-        body = self.client.create_image(self.server_id, snapshot_name)
+        body = self.client.create_image(self.server_id, name=snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
         self.addCleanup(self._reset_server)
diff --git a/tempest/api/compute/images/test_list_image_filters.py b/tempest/api/compute/images/test_list_image_filters.py
index 2c0ce59..247a57b 100644
--- a/tempest/api/compute/images/test_list_image_filters.py
+++ b/tempest/api/compute/images/test_list_image_filters.py
@@ -54,7 +54,7 @@
             body = cls.glance_client.create_image(name=name,
                                                   container_format='bare',
                                                   disk_format='raw',
-                                                  is_public=False)
+                                                  is_public=False)['image']
             image_id = body['id']
             cls.images.append(image_id)
             # Wait 1 second between creation and upload to ensure a delta
diff --git a/tempest/api/compute/keypairs/base.py b/tempest/api/compute/keypairs/base.py
new file mode 100644
index 0000000..76e5573
--- /dev/null
+++ b/tempest/api/compute/keypairs/base.py
@@ -0,0 +1,38 @@
+# Copyright 2015 Deutsche Telekom AG
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.compute import base
+
+
+class BaseKeypairTest(base.BaseComputeTest):
+    """Base test case class for all keypair API tests."""
+
+    _api_version = 2
+
+    @classmethod
+    def setup_clients(cls):
+        super(BaseKeypairTest, cls).setup_clients()
+        cls.client = cls.keypairs_client
+
+    def _delete_keypair(self, keypair_name):
+        self.client.delete_keypair(keypair_name)
+
+    def _create_keypair(self, keypair_name, pub_key=None):
+        kwargs = {'name': keypair_name}
+        if pub_key:
+            kwargs.update({'public_key': pub_key})
+        body = self.client.create_keypair(**kwargs)['keypair']
+        self.addCleanup(self._delete_keypair, keypair_name)
+        return body
diff --git a/tempest/api/compute/keypairs/test_keypairs.py b/tempest/api/compute/keypairs/test_keypairs.py
index 9243fdf..d10bf14 100644
--- a/tempest/api/compute/keypairs/test_keypairs.py
+++ b/tempest/api/compute/keypairs/test_keypairs.py
@@ -13,31 +13,12 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.compute import base
+from tempest.api.compute.keypairs import base
 from tempest.common.utils import data_utils
 from tempest import test
 
 
-class KeyPairsV2TestJSON(base.BaseComputeTest):
-
-    _api_version = 2
-
-    @classmethod
-    def setup_clients(cls):
-        super(KeyPairsV2TestJSON, cls).setup_clients()
-        cls.client = cls.keypairs_client
-
-    def _delete_keypair(self, keypair_name):
-        self.client.delete_keypair(keypair_name)
-
-    def _create_keypair(self, keypair_name, pub_key=None):
-        kwargs = {'name': keypair_name}
-        if pub_key:
-            kwargs.update({'public_key': pub_key})
-        body = self.client.create_keypair(**kwargs)
-        self.addCleanup(self._delete_keypair, keypair_name)
-        return body
-
+class KeyPairsV2TestJSON(base.BaseKeypairTest):
     @test.idempotent_id('1d1dbedb-d7a0-432a-9d09-83f543c3c19b')
     def test_keypairs_create_list_delete(self):
         # Keypairs created should be available in the response list
@@ -53,9 +34,7 @@
             key_list.append(keypair)
         # Fetch all keypairs and verify the list
         # has all created keypairs
-        fetched_list = self.client.list_keypairs()
-        # We need to remove the extra 'keypair' element in the
-        # returned dict. See comment in keypairs_client.list_keypairs()
+        fetched_list = self.client.list_keypairs()['keypairs']
         new_list = list()
         for keypair in fetched_list:
             new_list.append(keypair['keypair'])
@@ -84,7 +63,7 @@
         # Keypair should be created, Got details by name and deleted
         k_name = data_utils.rand_name('keypair')
         self._create_keypair(k_name)
-        keypair_detail = self.client.show_keypair(k_name)
+        keypair_detail = self.client.show_keypair(k_name)['keypair']
         self.assertIn('name', keypair_detail)
         self.assertIn('public_key', keypair_detail)
         self.assertEqual(keypair_detail['name'], k_name,
diff --git a/tempest/api/compute/keypairs/test_keypairs_negative.py b/tempest/api/compute/keypairs/test_keypairs_negative.py
index 3e6d400..0ab78fb 100644
--- a/tempest/api/compute/keypairs/test_keypairs_negative.py
+++ b/tempest/api/compute/keypairs/test_keypairs_negative.py
@@ -16,25 +16,12 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api.compute import base
+from tempest.api.compute.keypairs import base
 from tempest.common.utils import data_utils
 from tempest import test
 
 
-class KeyPairsNegativeTestJSON(base.BaseV2ComputeTest):
-
-    @classmethod
-    def setup_clients(cls):
-        super(KeyPairsNegativeTestJSON, cls).setup_clients()
-        cls.client = cls.keypairs_client
-
-    def _create_keypair(self, keypair_name, pub_key=None):
-        kwargs = {'name': keypair_name}
-        if pub_key:
-            kwargs.update({'public_key': pub_key})
-        self.client.create_keypair(**kwargs)
-        self.addCleanup(self.client.delete_keypair, keypair_name)
-
+class KeyPairsNegativeTestJSON(base.BaseKeypairTest):
     @test.attr(type=['negative'])
     @test.idempotent_id('29cca892-46ae-4d48-bc32-8fe7e731eb81')
     def test_keypair_create_with_invalid_pub_key(self):
diff --git a/tempest/api/compute/security_groups/test_security_group_rules.py b/tempest/api/compute/security_groups/test_security_group_rules.py
index 4596e1f..b5eff70 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules.py
@@ -69,11 +69,11 @@
         security_group = self.create_security_group()
         securitygroup_id = security_group['id']
         # Adding rules to the created Security Group
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port)
         self.expected['parent_group_id'] = securitygroup_id
         self.expected['ip_range'] = {'cidr': '0.0.0.0/0'}
         self._check_expected_response(rule)
@@ -91,12 +91,12 @@
 
         # Adding rules to the created Security Group with optional cidr
         cidr = '10.2.3.124/24'
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port,
-                                                   cidr=cidr)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=parent_group_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            cidr=cidr)
         self.expected['parent_group_id'] = parent_group_id
         self.expected['ip_range'] = {'cidr': cidr}
         self._check_expected_response(rule)
@@ -118,12 +118,12 @@
         group_name = security_group['name']
 
         # Adding rules to the created Security Group with optional group_id
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port,
-                                                   group_id=group_id)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=parent_group_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            group_id=group_id)
         self.expected['parent_group_id'] = parent_group_id
         self.expected['group'] = {'tenant_id': self.client.tenant_id,
                                   'name': group_name}
@@ -140,21 +140,22 @@
         securitygroup_id = security_group['id']
 
         # Add a first rule to the created Security Group
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port)
         rule1_id = rule['id']
 
         # Add a second rule to the created Security Group
         ip_protocol2 = 'icmp'
         from_port2 = -1
         to_port2 = -1
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   ip_protocol2,
-                                                   from_port2, to_port2)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=ip_protocol2,
+            from_port=from_port2,
+            to_port=to_port2)
         rule2_id = rule['id']
         # Delete the Security Group rule2 at the end of this method
         self.addCleanup(self.client.delete_security_group_rule, rule2_id)
@@ -176,11 +177,12 @@
         security_group = self.create_security_group()
         sg2_id = security_group['id']
         # Adding a rule to Group1
-        self.client.create_security_group_rule(sg1_id,
-                                               self.ip_protocol,
-                                               self.from_port,
-                                               self.to_port,
-                                               group_id=sg2_id)
+        self.client.create_security_group_rule(
+            parent_group_id=sg1_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            group_id=sg2_id)
 
         # Delete group2
         self.security_groups_client.delete_security_group(sg2_id)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules_negative.py b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
index e2a1034..d12306a 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules_negative.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
@@ -51,7 +51,9 @@
         to_port = 22
         self.assertRaises(lib_exc.NotFound,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('2244d7e4-adb7-4ecb-9930-2d77e123ce4f')
@@ -66,7 +68,9 @@
         to_port = 22
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('8bd56d02-3ffa-4d67-9933-b6b9a01d6089')
@@ -81,17 +85,17 @@
         from_port = 22
         to_port = 22
 
-        rule = \
-            self.rules_client.create_security_group_rule(parent_group_id,
-                                                         ip_protocol,
-                                                         from_port,
-                                                         to_port)
+        rule = self.rules_client.create_security_group_rule(
+            parent_group_id=parent_group_id, ip_protocol=ip_protocol,
+            from_port=from_port, to_port=to_port)
         self.addCleanup(self.rules_client.delete_security_group_rule,
                         rule['id'])
         # Adding the same rule to the group should fail
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('84c81249-9f6e-439c-9bbf-cbb0d2cddbdf')
@@ -109,7 +113,9 @@
 
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('12bbc875-1045-4f7a-be46-751277baedb9')
@@ -126,7 +132,9 @@
         to_port = 22
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('ff88804d-144f-45d1-bf59-dd155838a43a')
@@ -143,7 +151,9 @@
         to_port = data_utils.rand_int_id(start=65536)
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('00296fa9-0576-496a-ae15-fbab843189e0')
@@ -160,7 +170,9 @@
         to_port = 21
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          secgroup_id, ip_protocol, from_port, to_port)
+                          parent_group_id=secgroup_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('56fddcca-dbb8-4494-a0db-96e9f869527c')
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index bd252b0..7fff8bf 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -123,7 +123,7 @@
         # Shutdown the server and then verify we can destroy the
         # security groups, since no active server instance is using them
         self.servers_client.delete_server(server_id)
-        self.servers_client.wait_for_server_termination(server_id)
+        waiters.wait_for_server_termination(self.servers_client, server_id)
 
         self.client.delete_security_group(sg['id'])
         self.client.delete_security_group(sg2['id'])
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index e62a52b..c6fb2fb 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -21,6 +21,7 @@
 from tempest.api.compute import base
 from tempest.common.utils import data_utils
 from tempest.common.utils.linux import remote_client
+from tempest.common import waiters
 from tempest import config
 from tempest import test
 
@@ -178,7 +179,8 @@
         # we're OK.
         def cleanup_server():
             self.client.delete_server(server_multi_nics['id'])
-            self.client.wait_for_server_termination(server_multi_nics['id'])
+            waiters.wait_for_server_termination(self.client,
+                                                server_multi_nics['id'])
 
         self.addCleanup(cleanup_server)
 
@@ -218,7 +220,8 @@
 
         def cleanup_server():
             self.client.delete_server(server_multi_nics['id'])
-            self.client.wait_for_server_termination(server_multi_nics['id'])
+            waiters.wait_for_server_termination(self.client,
+                                                server_multi_nics['id'])
 
         self.addCleanup(cleanup_server)
 
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index b2acd34..551f8b4 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -38,14 +38,14 @@
         # Delete a server while its VM state is Building
         server = self.create_test_server(wait_until='BUILD')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('925fdfb4-5b13-47ea-ac8a-c36ae6fddb05')
     def test_delete_active_server(self):
         # Delete a server while its VM state is Active
         server = self.create_test_server(wait_until='ACTIVE')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('546d368c-bb6c-4645-979a-83ed16f3a6be')
     def test_delete_server_while_in_shutoff_state(self):
@@ -54,7 +54,7 @@
         self.client.stop(server['id'])
         waiters.wait_for_server_status(self.client, server['id'], 'SHUTOFF')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('943bd6e8-4d7a-4904-be83-7a6cc2d4213b')
     @testtools.skipUnless(CONF.compute_feature_enabled.pause,
@@ -65,7 +65,7 @@
         self.client.pause_server(server['id'])
         waiters.wait_for_server_status(self.client, server['id'], 'PAUSED')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('1f82ebd3-8253-4f4e-b93f-de9b7df56d8b')
     @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
@@ -76,7 +76,7 @@
         self.client.suspend_server(server['id'])
         waiters.wait_for_server_status(self.client, server['id'], 'SUSPENDED')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad')
     @testtools.skipUnless(CONF.compute_feature_enabled.shelve,
@@ -95,7 +95,7 @@
             waiters.wait_for_server_status(self.client, server['id'],
                                            'SHELVED')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('ab0c38b4-cdd8-49d3-9b92-0cb898723c01')
     @testtools.skipIf(not CONF.compute_feature_enabled.resize,
@@ -107,7 +107,7 @@
         waiters.wait_for_server_status(self.client, server['id'],
                                        'VERIFY_RESIZE')
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
     @test.idempotent_id('d0f3f0d6-d9b6-4a32-8da4-23015dcab23c')
     @test.services('volume')
@@ -122,13 +122,13 @@
         waiters.wait_for_volume_status(volumes_client,
                                        volume['id'], 'available')
         self.client.attach_volume(server['id'],
-                                  volume['id'],
+                                  volumeId=volume['id'],
                                   device=device)
         waiters.wait_for_volume_status(volumes_client,
                                        volume['id'], 'in-use')
 
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
         waiters.wait_for_volume_status(volumes_client,
                                        volume['id'], 'available')
 
@@ -152,12 +152,13 @@
         server = self.non_admin_client.show_server(server['id'])
         self.assertEqual(server['status'], 'ERROR')
         self.non_admin_client.delete_server(server['id'])
-        self.servers_client.wait_for_server_termination(server['id'],
-                                                        ignore_error=True)
+        waiters.wait_for_server_termination(self.servers_client,
+                                            server['id'],
+                                            ignore_error=True)
 
     @test.idempotent_id('73177903-6737-4f27-a60c-379e8ae8cf48')
     def test_admin_delete_servers_of_others(self):
         # Administrator can delete servers of others
         server = self.create_test_server(wait_until='ACTIVE')
         self.admin_client.delete_server(server['id'])
-        self.servers_client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.servers_client, server['id'])
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index a75cb3e..6160844 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -305,12 +305,20 @@
             params = {'ip': ip}
         else:
             params = {'ip6': ip}
+        # capture all servers in case something goes wrong
+        all_servers = self.client.list_servers(detail=True)
         body = self.client.list_servers(**params)
         servers = body['servers']
 
-        self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
-        self.assertIn(self.s2_name, map(lambda x: x['name'], servers))
-        self.assertIn(self.s3_name, map(lambda x: x['name'], servers))
+        self.assertIn(self.s1_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s1_name, servers, all_servers))
+        self.assertIn(self.s2_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s2_name, servers, all_servers))
+        self.assertIn(self.s3_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s3_name, servers, all_servers))
 
     @test.idempotent_id('67aec2d0-35fe-4503-9f92-f13272b867ed')
     def test_list_servers_detailed_limit_results(self):
diff --git a/tempest/api/compute/servers/test_list_servers_negative.py b/tempest/api/compute/servers/test_list_servers_negative.py
index def6cf5..f205ddf 100644
--- a/tempest/api/compute/servers/test_list_servers_negative.py
+++ b/tempest/api/compute/servers/test_list_servers_negative.py
@@ -17,6 +17,7 @@
 from tempest_lib import exceptions as lib_exc
 
 from tempest.api.compute import base
+from tempest.common import waiters
 from tempest import test
 
 
@@ -47,8 +48,8 @@
         # be put into ERROR status on a quick spawn, then delete,
         # as the compute node expects the instance local status
         # to be spawning, not deleted. See LP Bug#1061167
-        cls.client.wait_for_server_termination(srv['id'],
-                                               ignore_error=True)
+        waiters.wait_for_server_termination(cls.client, srv['id'],
+                                            ignore_error=True)
         cls.deleted_fixtures.append(srv)
 
     @test.attr(type=['negative'])
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index f0f6b8c..a20f7f5 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -323,7 +323,7 @@
             properties=properties,
             status='active',
             sort_key='created_at',
-            sort_dir='asc')
+            sort_dir='asc')['images']
         self.assertEqual(2, len(image_list))
         self.assertEqual((backup1, backup2),
                          (image_list[0]['name'], image_list[1]['name']))
@@ -347,7 +347,7 @@
             properties=properties,
             status='active',
             sort_key='created_at',
-            sort_dir='asc')
+            sort_dir='asc')['images']
         self.assertEqual(2, len(image_list),
                          'Unexpected number of images for '
                          'v2:test_create_backup; was the oldest backup not '
@@ -474,6 +474,7 @@
     def test_lock_unlock_server(self):
         # Lock the server, try to stop it (should fail), unlock it and retry
         self.client.lock_server(self.server_id)
+        self.addCleanup(self.client.unlock_server, self.server_id)
         server = self.client.show_server(self.server_id)
         self.assertEqual(server['status'], 'ACTIVE')
         # Locked server is not allowed to be stopped by non-admin user
diff --git a/tempest/api/compute/servers/test_server_rescue_negative.py b/tempest/api/compute/servers/test_server_rescue_negative.py
index 2fe63ed..7a25526 100644
--- a/tempest/api/compute/servers/test_server_rescue_negative.py
+++ b/tempest/api/compute/servers/test_server_rescue_negative.py
@@ -137,7 +137,7 @@
         self.assertRaises(lib_exc.Conflict,
                           self.servers_client.attach_volume,
                           self.server_id,
-                          volume['id'],
+                          volumeId=volume['id'],
                           device='/dev/%s' % self.device)
 
     @test.idempotent_id('f56e465b-fe10-48bf-b75d-646cda3a8bc9')
@@ -148,7 +148,7 @@
 
         # Attach the volume to the server
         self.servers_client.attach_volume(self.server_id,
-                                          volume['id'],
+                                          volumeId=volume['id'],
                                           device='/dev/%s' % self.device)
         waiters.wait_for_volume_status(self.volumes_extensions_client,
                                        volume['id'], 'in-use')
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index fe05456..f5d99fc 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -171,7 +171,7 @@
         # Rebuild and Reboot a deleted server
         server = self.create_test_server()
         self.client.delete_server(server['id'])
-        self.client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.client, server['id'])
 
         self.assertRaises(lib_exc.NotFound,
                           self.client.rebuild,
diff --git a/tempest/api/compute/test_authorization.py b/tempest/api/compute/test_authorization.py
index 1d7f7fa..b542d7f 100644
--- a/tempest/api/compute/test_authorization.py
+++ b/tempest/api/compute/test_authorization.py
@@ -70,10 +70,11 @@
         body = cls.glance_client.create_image(name=name,
                                               container_format='bare',
                                               disk_format='raw',
-                                              is_public=False)
+                                              is_public=False)['image']
         image_id = body['id']
         image_file = six.StringIO(('*' * 1024))
-        body = cls.glance_client.update_image(image_id, data=image_file)
+        body = cls.glance_client.update_image(image_id,
+                                              data=image_file)['image']
         cls.glance_client.wait_for_image_status(image_id, 'active')
         cls.image = cls.images_client.show_image(image_id)
 
@@ -90,7 +91,8 @@
         from_port = 22
         to_port = 22
         cls.rule = cls.rule_client.create_security_group_rule(
-            parent_group_id, ip_protocol, from_port, to_port)
+            parent_group_id=parent_group_id, ip_protocol=ip_protocol,
+            from_port=from_port, to_port=to_port)
 
     @classmethod
     def resource_cleanup(cls):
@@ -173,7 +175,7 @@
         # A create image request for another user's server should fail
         self.assertRaises(lib_exc.NotFound,
                           self.alt_images_client.create_image,
-                          self.server['id'], 'testImage')
+                          self.server['id'], name='testImage')
 
     @test.idempotent_id('95d445f6-babc-4f2e-aea3-aa24ec5e7f0d')
     def test_create_server_with_unauthorized_image(self):
@@ -304,8 +306,9 @@
             self.assertRaises(lib_exc.BadRequest,
                               self.alt_rule_client.
                               create_security_group_rule,
-                              parent_group_id, ip_protocol, from_port,
-                              to_port)
+                              parent_group_id=parent_group_id,
+                              ip_protocol=ip_protocol,
+                              from_port=from_port, to_port=to_port)
         finally:
             # Next request the base_url is back to normal
             if resp['status'] is not None:
diff --git a/tempest/api/compute/test_live_block_migration_negative.py b/tempest/api/compute/test_live_block_migration_negative.py
index fabe55d..2cd85f2 100644
--- a/tempest/api/compute/test_live_block_migration_negative.py
+++ b/tempest/api/compute/test_live_block_migration_negative.py
@@ -40,10 +40,10 @@
         cls.admin_servers_client = cls.os_adm.servers_client
 
     def _migrate_server_to(self, server_id, dest_host):
+        bmflm = CONF.compute_feature_enabled.block_migration_for_live_migration
         body = self.admin_servers_client.live_migrate_server(
-            server_id, dest_host,
-            CONF.compute_feature_enabled.
-            block_migration_for_live_migration)
+            server_id, host=dest_host, block_migration=bmflm,
+            disk_over_commit=False)
         return body
 
     @test.attr(type=['negative'])
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 8e4278a..6496854 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -83,7 +83,7 @@
         # Attach the volume to the server
         self.attachment = self.servers_client.attach_volume(
             self.server['id'],
-            self.volume['id'],
+            volumeId=self.volume['id'],
             device='/dev/%s' % self.device)
         self.volumes_client.wait_for_volume_status(self.volume['id'], 'in-use')
 
diff --git a/tempest/api/data_processing/base.py b/tempest/api/data_processing/base.py
index 904cbb6..5d78539 100644
--- a/tempest/api/data_processing/base.py
+++ b/tempest/api/data_processing/base.py
@@ -297,6 +297,7 @@
                                                           flavor_id,
                                                           node_configs,
                                                           **kwargs)
+        resp_body = resp_body['node_group_template']
         # store id of created node group template
         cls._node_group_templates.append(resp_body['id'])
 
@@ -316,6 +317,7 @@
                                                        node_groups,
                                                        cluster_configs,
                                                        **kwargs)
+        resp_body = resp_body['cluster_template']
         # store id of created cluster template
         cls._cluster_templates.append(resp_body['id'])
 
@@ -330,6 +332,7 @@
         removed in tearDownClass method.
         """
         resp_body = cls.client.create_data_source(name, type, url, **kwargs)
+        resp_body = resp_body['data_source']
         # store id of created data source
         cls._data_sources.append(resp_body['id'])
 
@@ -343,6 +346,7 @@
         be automatically removed in tearDownClass method.
         """
         resp_body = cls.client.create_job_binary_internal(name, data)
+        resp_body = resp_body['job_binary_internal']
         # store id of created job binary internal
         cls._job_binary_internals.append(resp_body['id'])
 
@@ -357,6 +361,7 @@
         removed in tearDownClass method.
         """
         resp_body = cls.client.create_job_binary(name, url, extra, **kwargs)
+        resp_body = resp_body['job_binary']
         # store id of created job binary
         cls._job_binaries.append(resp_body['id'])
 
@@ -372,6 +377,7 @@
         """
         resp_body = cls.client.create_job(name,
                                           job_type, mains, libs, **kwargs)
+        resp_body = resp_body['job']
         # store id of created job
         cls._jobs.append(resp_body['id'])
 
@@ -400,7 +406,7 @@
         """
         if not cls.default_plugin:
             return None
-        plugin = cls.client.get_plugin(cls.default_plugin)
+        plugin = cls.client.get_plugin(cls.default_plugin)['plugin']
 
         for version in DEFAULT_TEMPLATES[cls.default_plugin].keys():
             if version in plugin['versions']:
diff --git a/tempest/api/data_processing/test_cluster_templates.py b/tempest/api/data_processing/test_cluster_templates.py
index e357a85..42cbd14 100644
--- a/tempest/api/data_processing/test_cluster_templates.py
+++ b/tempest/api/data_processing/test_cluster_templates.py
@@ -98,7 +98,7 @@
         template_info = self._create_cluster_template()
 
         # check for cluster template in list
-        templates = self.client.list_cluster_templates()
+        templates = self.client.list_cluster_templates()['cluster_templates']
         templates_info = [(template['id'], template['name'])
                           for template in templates]
         self.assertIn(template_info, templates_info)
@@ -110,6 +110,7 @@
 
         # check cluster template fetch by id
         template = self.client.get_cluster_template(template_id)
+        template = template['cluster_template']
         self.assertEqual(template_name, template['name'])
         self.assertDictContainsSubset(self.cluster_template, template)
 
diff --git a/tempest/api/data_processing/test_data_sources.py b/tempest/api/data_processing/test_data_sources.py
index dd16b2f..67d09a0 100644
--- a/tempest/api/data_processing/test_data_sources.py
+++ b/tempest/api/data_processing/test_data_sources.py
@@ -68,13 +68,13 @@
 
     def _list_data_sources(self, source_info):
         # check for data source in list
-        sources = self.client.list_data_sources()
+        sources = self.client.list_data_sources()['data_sources']
         sources_info = [(source['id'], source['name']) for source in sources]
         self.assertIn(source_info, sources_info)
 
     def _get_data_source(self, source_id, source_name, source_body):
         # check data source fetch by id
-        source = self.client.get_data_source(source_id)
+        source = self.client.get_data_source(source_id)['data_source']
         self.assertEqual(source_name, source['name'])
         self.assertDictContainsSubset(source_body, source)
 
diff --git a/tempest/api/data_processing/test_job_binaries.py b/tempest/api/data_processing/test_job_binaries.py
index fb21270..98b7e24 100644
--- a/tempest/api/data_processing/test_job_binaries.py
+++ b/tempest/api/data_processing/test_job_binaries.py
@@ -80,7 +80,7 @@
         binary_info = self._create_job_binary(self.swift_job_binary_with_extra)
 
         # check for job binary in list
-        binaries = self.client.list_job_binaries()
+        binaries = self.client.list_job_binaries()['binaries']
         binaries_info = [(binary['id'], binary['name']) for binary in binaries]
         self.assertIn(binary_info, binaries_info)
 
@@ -91,7 +91,7 @@
             self._create_job_binary(self.swift_job_binary_with_extra))
 
         # check job binary fetch by id
-        binary = self.client.get_job_binary(binary_id)
+        binary = self.client.get_job_binary(binary_id)['job_binary']
         self.assertEqual(binary_name, binary['name'])
         self.assertDictContainsSubset(self.swift_job_binary, binary)
 
@@ -115,7 +115,7 @@
         binary_info = self._create_job_binary(self.internal_db_job_binary)
 
         # check for job binary in list
-        binaries = self.client.list_job_binaries()
+        binaries = self.client.list_job_binaries()['binaries']
         binaries_info = [(binary['id'], binary['name']) for binary in binaries]
         self.assertIn(binary_info, binaries_info)
 
@@ -126,7 +126,7 @@
             self._create_job_binary(self.internal_db_job_binary))
 
         # check job binary fetch by id
-        binary = self.client.get_job_binary(binary_id)
+        binary = self.client.get_job_binary(binary_id)['job_binary']
         self.assertEqual(binary_name, binary['name'])
         self.assertDictContainsSubset(self.internal_db_job_binary, binary)
 
diff --git a/tempest/api/data_processing/test_job_binary_internals.py b/tempest/api/data_processing/test_job_binary_internals.py
index 3d76ebe..6919fa5 100644
--- a/tempest/api/data_processing/test_job_binary_internals.py
+++ b/tempest/api/data_processing/test_job_binary_internals.py
@@ -57,7 +57,7 @@
         binary_info = self._create_job_binary_internal()
 
         # check for job binary internal in list
-        binaries = self.client.list_job_binary_internals()
+        binaries = self.client.list_job_binary_internals()['binaries']
         binaries_info = [(binary['id'], binary['name']) for binary in binaries]
         self.assertIn(binary_info, binaries_info)
 
@@ -68,7 +68,7 @@
 
         # check job binary internal fetch by id
         binary = self.client.get_job_binary_internal(binary_id)
-        self.assertEqual(binary_name, binary['name'])
+        self.assertEqual(binary_name, binary['job_binary_internal']['name'])
 
     @test.attr(type='smoke')
     @test.idempotent_id('b3568c33-4eed-40d5-aae4-6ff3b2ac58f5')
diff --git a/tempest/api/data_processing/test_jobs.py b/tempest/api/data_processing/test_jobs.py
index 83eb54d..7798056 100644
--- a/tempest/api/data_processing/test_jobs.py
+++ b/tempest/api/data_processing/test_jobs.py
@@ -71,7 +71,7 @@
         job_info = self._create_job()
 
         # check for job in list
-        jobs = self.client.list_jobs()
+        jobs = self.client.list_jobs()['jobs']
         jobs_info = [(job['id'], job['name']) for job in jobs]
         self.assertIn(job_info, jobs_info)
 
@@ -81,7 +81,7 @@
         job_id, job_name = self._create_job()
 
         # check job fetch by id
-        job = self.client.get_job(job_id)
+        job = self.client.get_job(job_id)['job']
         self.assertEqual(job_name, job['name'])
 
     @test.attr(type='smoke')
diff --git a/tempest/api/data_processing/test_node_group_templates.py b/tempest/api/data_processing/test_node_group_templates.py
index 102799d..388bb58 100644
--- a/tempest/api/data_processing/test_node_group_templates.py
+++ b/tempest/api/data_processing/test_node_group_templates.py
@@ -65,6 +65,7 @@
 
         # check for node group template in list
         templates = self.client.list_node_group_templates()
+        templates = templates['node_group_templates']
         templates_info = [(template['id'], template['name'])
                           for template in templates]
         self.assertIn(template_info, templates_info)
@@ -76,6 +77,7 @@
 
         # check node group template fetch by id
         template = self.client.get_node_group_template(template_id)
+        template = template['node_group_template']
         self.assertEqual(template_name, template['name'])
         self.assertDictContainsSubset(self.node_group_template, template)
 
diff --git a/tempest/api/data_processing/test_plugins.py b/tempest/api/data_processing/test_plugins.py
index 92a5bd0..14594e4 100644
--- a/tempest/api/data_processing/test_plugins.py
+++ b/tempest/api/data_processing/test_plugins.py
@@ -25,7 +25,7 @@
 
         It ensures main plugins availability.
         """
-        plugins = self.client.list_plugins()
+        plugins = self.client.list_plugins()['plugins']
         plugins_names = [plugin['name'] for plugin in plugins]
         for enabled_plugin in CONF.data_processing_feature_enabled.plugins:
             self.assertIn(enabled_plugin, plugins_names)
@@ -41,12 +41,13 @@
     @test.idempotent_id('53cf6487-2cfb-4a6f-8671-97c542c6e901')
     def test_plugin_get(self):
         for plugin_name in self._list_all_plugin_names():
-            plugin = self.client.get_plugin(plugin_name)
+            plugin = self.client.get_plugin(plugin_name)['plugin']
             self.assertEqual(plugin_name, plugin['name'])
 
             for plugin_version in plugin['versions']:
                 detailed_plugin = self.client.get_plugin(plugin_name,
                                                          plugin_version)
+                detailed_plugin = detailed_plugin['plugin']
                 self.assertEqual(plugin_name, detailed_plugin['name'])
 
                 # check that required image tags contains name and version
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
new file mode 100644
index 0000000..3af2e90
--- /dev/null
+++ b/tempest/api/identity/admin/v2/test_endpoints.py
@@ -0,0 +1,90 @@
+# Copyright 2013 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.identity import base
+from tempest.common.utils import data_utils
+from tempest import test
+
+
+class EndPointsTestJSON(base.BaseIdentityV2AdminTest):
+
+    @classmethod
+    def resource_setup(cls):
+        super(EndPointsTestJSON, cls).resource_setup()
+        cls.service_ids = list()
+        s_name = data_utils.rand_name('service')
+        s_type = data_utils.rand_name('type')
+        s_description = data_utils.rand_name('description')
+        cls.service_data =\
+            cls.client.create_service(s_name, s_type,
+                                      description=s_description)
+        cls.service_id = cls.service_data['id']
+        cls.service_ids.append(cls.service_id)
+        # Create endpoints to use in the LIST and GET test cases
+        cls.setup_endpoints = list()
+        for i in range(2):
+            region = data_utils.rand_name('region')
+            url = data_utils.rand_url()
+            endpoint = cls.client.create_endpoint(cls.service_id,
+                                                  region,
+                                                  publicurl=url,
+                                                  adminurl=url,
+                                                  internalurl=url)
+            # list_endpoints() will return the 'enabled' field
+            endpoint['enabled'] = True
+            cls.setup_endpoints.append(endpoint)
+
+    @classmethod
+    def resource_cleanup(cls):
+        for e in cls.setup_endpoints:
+            cls.client.delete_endpoint(e['id'])
+        for s in cls.service_ids:
+            cls.client.delete_service(s)
+        super(EndPointsTestJSON, cls).resource_cleanup()
+
+    @test.idempotent_id('11f590eb-59d8-4067-8b2b-980c7f387f51')
+    def test_list_endpoints(self):
+        # Get a list of endpoints
+        fetched_endpoints = self.client.list_endpoints()
+        # Asserting LIST endpoints
+        missing_endpoints =\
+            [e for e in self.setup_endpoints if e not in fetched_endpoints]
+        self.assertEqual(0, len(missing_endpoints),
+                         "Failed to find endpoint %s in fetched list" %
+                         ', '.join(str(e) for e in missing_endpoints))
+
+    @test.idempotent_id('9974530a-aa28-4362-8403-f06db02b26c1')
+    def test_create_list_delete_endpoint(self):
+        region = data_utils.rand_name('region')
+        url = data_utils.rand_url()
+        endpoint = self.client.create_endpoint(self.service_id,
+                                               region,
+                                               publicurl=url,
+                                               adminurl=url,
+                                               internalurl=url)
+        # Asserting Create Endpoint response body
+        self.assertIn('id', endpoint)
+        self.assertEqual(region, endpoint['region'])
+        self.assertEqual(url, endpoint['publicurl'])
+        # Checking if created endpoint is present in the list of endpoints
+        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
+        self.assertIn(endpoint['id'], fetched_endpoints_id)
+        # Deleting the endpoint created in this method
+        self.client.delete_endpoint(endpoint['id'])
+        # Checking whether endpoint is deleted successfully
+        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
+        self.assertNotIn(endpoint['id'], fetched_endpoints_id)
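The new v2 endpoints test relies on a particular response shape from the identity v2 admin client: create_endpoint() is assumed to return a single endpoint dict carrying 'id', 'region' and 'publicurl', while list_endpoints() returns a flat list of such dicts that additionally include an 'enabled' flag, which is why resource_setup() patches 'enabled' onto each created endpoint before saving it for comparison. A hedged sketch of that shape using plain dicts (illustrative only, not actual client output):

    # Illustrative stand-ins for the v2 endpoint bodies the test compares.
    created = {'id': 'e1', 'region': 'RegionOne',
               'publicurl': 'http://localhost:5000/v2.0'}
    created['enabled'] = True            # field only present in the list response

    fetched_endpoints = [dict(created)]  # shape assumed for list_endpoints()
    missing = [e for e in [created] if e not in fetched_endpoints]
    assert missing == []                 # mirrors test_list_endpoints above
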
diff --git a/tempest/api/identity/admin/v2/test_roles.py b/tempest/api/identity/admin/v2/test_roles.py
index 1babc45..0b28a07 100644
--- a/tempest/api/identity/admin/v2/test_roles.py
+++ b/tempest/api/identity/admin/v2/test_roles.py
@@ -76,7 +76,7 @@
         self.data.setup_test_role()
         role_id = self.data.role['id']
         role_name = self.data.role['name']
-        body = self.client.get_role(role_id)
+        body = self.client.get_role(role_id)['role']
         self.assertEqual(role_id, body['id'])
         self.assertEqual(role_name, body['name'])
 
diff --git a/tempest/api/identity/admin/v2/test_tenants.py b/tempest/api/identity/admin/v2/test_tenants.py
index f828f66..9fff5f3 100644
--- a/tempest/api/identity/admin/v2/test_tenants.py
+++ b/tempest/api/identity/admin/v2/test_tenants.py
@@ -32,7 +32,7 @@
             self.data.tenants.append(tenant)
             tenants.append(tenant)
         tenant_ids = map(lambda x: x['id'], tenants)
-        body = self.client.list_tenants()
+        body = self.client.list_tenants()['tenants']
         found = [t for t in body if t['id'] in tenant_ids]
         self.assertEqual(len(found), len(tenants), 'Tenants not created')
 
@@ -40,7 +40,7 @@
             self.client.delete_tenant(tenant['id'])
             self.data.tenants.remove(tenant)
 
-        body = self.client.list_tenants()
+        body = self.client.list_tenants()['tenants']
         found = [tenant for tenant in body if tenant['id'] in tenant_ids]
         self.assertFalse(any(found), 'Tenants failed to delete')
 
diff --git a/tempest/api/identity/admin/v3/test_credentials.py b/tempest/api/identity/admin/v3/test_credentials.py
index 662d06c..d22b27f 100644
--- a/tempest/api/identity/admin/v3/test_credentials.py
+++ b/tempest/api/identity/admin/v3/test_credentials.py
@@ -33,12 +33,12 @@
         for i in range(2):
             cls.project = cls.client.create_project(
                 data_utils.rand_name('project'),
-                description=data_utils.rand_name('project-desc'))
+                description=data_utils.rand_name('project-desc'))['project']
             cls.projects.append(cls.project['id'])
 
         cls.user_body = cls.client.create_user(
             u_name, description=u_desc, password=u_password,
-            email=u_email, project_id=cls.projects[0])
+            email=u_email, project_id=cls.projects[0])['user']
 
     @classmethod
     def resource_cleanup(cls):
@@ -57,7 +57,7 @@
                 data_utils.rand_name('Secret')]
         cred = self.creds_client.create_credential(
             keys[0], keys[1], self.user_body['id'],
-            self.projects[0])
+            self.projects[0])['credential']
         self.addCleanup(self._delete_credential, cred['id'])
         for value1 in self.creds_list[0]:
             self.assertIn(value1, cred)
@@ -68,14 +68,14 @@
                     data_utils.rand_name('NewSecret')]
         update_body = self.creds_client.update_credential(
             cred['id'], access_key=new_keys[0], secret_key=new_keys[1],
-            project_id=self.projects[1])
+            project_id=self.projects[1])['credential']
         self.assertEqual(cred['id'], update_body['id'])
         self.assertEqual(self.projects[1], update_body['project_id'])
         self.assertEqual(self.user_body['id'], update_body['user_id'])
         self.assertEqual(update_body['blob']['access'], new_keys[0])
         self.assertEqual(update_body['blob']['secret'], new_keys[1])
 
-        get_body = self.creds_client.get_credential(cred['id'])
+        get_body = self.creds_client.get_credential(cred['id'])['credential']
         for value1 in self.creds_list[0]:
             self.assertEqual(update_body[value1],
                              get_body[value1])
@@ -92,11 +92,11 @@
             cred = self.creds_client.create_credential(
                 data_utils.rand_name('Access'),
                 data_utils.rand_name('Secret'),
-                self.user_body['id'], self.projects[0])
+                self.user_body['id'], self.projects[0])['credential']
             created_cred_ids.append(cred['id'])
             self.addCleanup(self._delete_credential, cred['id'])
 
-        creds = self.creds_client.list_credentials()
+        creds = self.creds_client.list_credentials()['credentials']
 
         for i in creds:
             fetched_cred_ids.append(i['id'])
diff --git a/tempest/api/identity/admin/v3/test_default_project_id.py b/tempest/api/identity/admin/v3/test_default_project_id.py
index 98fff09..4c69758 100644
--- a/tempest/api/identity/admin/v3/test_default_project_id.py
+++ b/tempest/api/identity/admin/v3/test_default_project_id.py
@@ -39,13 +39,14 @@
     def test_default_project_id(self):
         # create a domain
         dom_name = data_utils.rand_name('dom')
-        domain_body = self.client.create_domain(dom_name)
+        domain_body = self.client.create_domain(dom_name)['domain']
         dom_id = domain_body['id']
         self.addCleanup(self._delete_domain, dom_id)
 
         # create a project in the domain
         proj_name = data_utils.rand_name('proj')
-        proj_body = self.client.create_project(proj_name, domain_id=dom_id)
+        proj_body = self.client.create_project(proj_name,
+                                               domain_id=dom_id)['project']
         proj_id = proj_body['id']
         self.addCleanup(self.client.delete_project, proj_id)
         self.assertEqual(proj_body['domain_id'], dom_id,
@@ -57,7 +58,7 @@
         user_name = data_utils.rand_name('user')
         user_body = self.client.create_user(user_name, password=user_name,
                                             domain_id=dom_id,
-                                            default_project_id=proj_id)
+                                            default_project_id=proj_id)['user']
         user_id = user_body['id']
         self.addCleanup(self.client.delete_user, user_id)
         self.assertEqual(user_body['domain_id'], dom_id,
@@ -82,6 +83,6 @@
 
         # verify the user's token and see that it is scoped to the project
         token, auth_data = admin_client.auth_provider.get_auth()
-        result = admin_client.identity_v3_client.get_token(token)
+        result = admin_client.identity_v3_client.get_token(token)['token']
         self.assertEqual(result['project']['domain']['id'], dom_id)
         self.assertEqual(result['project']['id'], proj_id)
diff --git a/tempest/api/identity/admin/v3/test_domains.py b/tempest/api/identity/admin/v3/test_domains.py
index 5bfb981..742d737 100644
--- a/tempest/api/identity/admin/v3/test_domains.py
+++ b/tempest/api/identity/admin/v3/test_domains.py
@@ -37,12 +37,12 @@
         for _ in range(3):
             domain = self.client.create_domain(
                 data_utils.rand_name('domain'),
-                description=data_utils.rand_name('domain-desc'))
+                description=data_utils.rand_name('domain-desc'))['domain']
             # Delete the domain at the end of this method
             self.addCleanup(self._delete_domain, domain['id'])
             domain_ids.append(domain['id'])
         # List and Verify Domains
-        body = self.client.list_domains()
+        body = self.client.list_domains()['domains']
         for d in body:
             fetched_ids.append(d['id'])
         missing_doms = [d for d in domain_ids if d not in fetched_ids]
@@ -54,7 +54,7 @@
         d_name = data_utils.rand_name('domain')
         d_desc = data_utils.rand_name('domain-desc')
         domain = self.client.create_domain(
-            d_name, description=d_desc)
+            d_name, description=d_desc)['domain']
         self.addCleanup(self._delete_domain, domain['id'])
         self.assertIn('id', domain)
         self.assertIn('description', domain)
@@ -69,7 +69,7 @@
         new_name = data_utils.rand_name('new-name')
 
         updated_domain = self.client.update_domain(
-            domain['id'], name=new_name, description=new_desc)
+            domain['id'], name=new_name, description=new_desc)['domain']
         self.assertIn('id', updated_domain)
         self.assertIn('description', updated_domain)
         self.assertIn('name', updated_domain)
@@ -80,7 +80,7 @@
         self.assertEqual(new_desc, updated_domain['description'])
         self.assertEqual(True, updated_domain['enabled'])
 
-        fetched_domain = self.client.get_domain(domain['id'])
+        fetched_domain = self.client.get_domain(domain['id'])['domain']
         self.assertEqual(new_name, fetched_domain['name'])
         self.assertEqual(new_desc, fetched_domain['description'])
         self.assertEqual(True, fetched_domain['enabled'])
@@ -91,7 +91,7 @@
         d_name = data_utils.rand_name('domain')
         d_desc = data_utils.rand_name('domain-desc')
         domain = self.client.create_domain(
-            d_name, description=d_desc, enabled=False)
+            d_name, description=d_desc, enabled=False)['domain']
         self.addCleanup(self.client.delete_domain, domain['id'])
         self.assertEqual(d_name, domain['name'])
         self.assertFalse(domain['enabled'])
@@ -101,7 +101,7 @@
     def test_create_domain_without_description(self):
         # Create domain only with name
         d_name = data_utils.rand_name('domain')
-        domain = self.client.create_domain(d_name)
+        domain = self.client.create_domain(d_name)['domain']
         self.addCleanup(self._delete_domain, domain['id'])
         self.assertIn('id', domain)
         expected_data = {'name': d_name, 'enabled': True}
@@ -119,6 +119,6 @@
     @test.attr(type='smoke')
     @test.idempotent_id('17a5de24-e6a0-4e4a-a9ee-d85b6e5612b5')
     def test_default_domain_exists(self):
-        domain = self.client.get_domain(self.domain_id)
+        domain = self.client.get_domain(self.domain_id)['domain']
 
         self.assertTrue(domain['enabled'])
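Across the v3 identity tests the convention is uniform: create, get and update calls wrap a single resource under its singular key ('domain', 'project', 'group', 'user', 'role', ...), and list calls wrap a list under the plural key. A small illustrative helper, not part of Tempest, showing how either form is consumed once unwrapped:

    # Illustrative only: unwrap a wrapped identity v3 body by its top-level key.
    def unwrap(body, key):
        return body[key]

    create_body = {'domain': {'id': 'd1', 'name': 'dom-1', 'enabled': True}}
    list_body = {'domains': [{'id': 'd1', 'name': 'dom-1', 'enabled': True}]}

    domain = unwrap(create_body, 'domain')
    fetched_ids = [d['id'] for d in unwrap(list_body, 'domains')]
    assert domain['id'] in fetched_ids   # same check as test_list_domains
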
diff --git a/tempest/api/identity/admin/v3/test_domains_negative.py b/tempest/api/identity/admin/v3/test_domains_negative.py
index e2f3ef5..156179c 100644
--- a/tempest/api/identity/admin/v3/test_domains_negative.py
+++ b/tempest/api/identity/admin/v3/test_domains_negative.py
@@ -28,7 +28,8 @@
     def test_delete_active_domain(self):
         d_name = data_utils.rand_name('domain')
         d_desc = data_utils.rand_name('domain-desc')
-        domain = self.client.create_domain(d_name, description=d_desc)
+        domain = self.client.create_domain(d_name,
+                                           description=d_desc)['domain']
         domain_id = domain['id']
 
         self.addCleanup(self.delete_domain, domain_id)
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index 9a8104f..e44a96b 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -36,6 +36,7 @@
         cls.service_data =\
             cls.service_client.create_service(s_name, s_type,
                                               description=s_description)
+        cls.service_data = cls.service_data['service']
         cls.service_id = cls.service_data['id']
         cls.service_ids.append(cls.service_id)
         # Create endpoints so as to use for LIST and GET test cases
@@ -44,8 +45,8 @@
             region = data_utils.rand_name('region')
             url = data_utils.rand_url()
             interface = 'public'
-            endpoint = cls.client.create_endpoint(
-                cls.service_id, interface, url, region=region, enabled=True)
+            endpoint = (cls.client.create_endpoint(cls.service_id, interface,
+                        url, region=region, enabled=True))['endpoint']
             cls.setup_endpoints.append(endpoint)
 
     @classmethod
@@ -59,7 +60,7 @@
     @test.idempotent_id('c19ecf90-240e-4e23-9966-21cee3f6a618')
     def test_list_endpoints(self):
         # Get a list of endpoints
-        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints = self.client.list_endpoints()['endpoints']
         # Asserting LIST endpoints
         missing_endpoints =\
             [e for e in self.setup_endpoints if e not in fetched_endpoints]
@@ -72,21 +73,20 @@
         region = data_utils.rand_name('region')
         url = data_utils.rand_url()
         interface = 'public'
-        endpoint =\
-            self.client.create_endpoint(self.service_id, interface, url,
-                                        region=region, enabled=True)
+        endpoint = (self.client.create_endpoint(self.service_id, interface,
+                    url, region=region, enabled=True)['endpoint'])
         # Asserting Create Endpoint response body
         self.assertIn('id', endpoint)
         self.assertEqual(region, endpoint['region'])
         self.assertEqual(url, endpoint['url'])
         # Checking if created endpoint is present in the list of endpoints
-        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints = self.client.list_endpoints()['endpoints']
         fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
         self.assertIn(endpoint['id'], fetched_endpoints_id)
         # Deleting the endpoint created in this method
         self.client.delete_endpoint(endpoint['id'])
         # Checking whether endpoint is deleted successfully
-        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints = self.client.list_endpoints()['endpoints']
         fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
         self.assertNotIn(endpoint['id'], fetched_endpoints_id)
 
@@ -101,7 +101,7 @@
         endpoint_for_update =\
             self.client.create_endpoint(self.service_id, interface1,
                                         url1, region=region1,
-                                        enabled=True)
+                                        enabled=True)['endpoint']
         self.addCleanup(self.client.delete_endpoint, endpoint_for_update['id'])
         # Creating service so as update endpoint with new service ID
         s_name = data_utils.rand_name('service')
@@ -110,6 +110,7 @@
         service2 =\
             self.service_client.create_service(s_name, s_type,
                                                description=s_description)
+        service2 = service2['service']
         self.service_ids.append(service2['id'])
         # Updating endpoint with new values
         region2 = data_utils.rand_name('region')
@@ -119,7 +120,8 @@
             self.client.update_endpoint(endpoint_for_update['id'],
                                         service_id=service2['id'],
                                         interface=interface2, url=url2,
-                                        region=region2, enabled=False)
+                                        region=region2,
+                                        enabled=False)['endpoint']
         # Asserting if the attributes of endpoint are updated
         self.assertEqual(service2['id'], endpoint['service_id'])
         self.assertEqual(interface2, endpoint['interface'])
diff --git a/tempest/api/identity/admin/v3/test_endpoints_negative.py b/tempest/api/identity/admin/v3/test_endpoints_negative.py
index b043415..8cf853b 100644
--- a/tempest/api/identity/admin/v3/test_endpoints_negative.py
+++ b/tempest/api/identity/admin/v3/test_endpoints_negative.py
@@ -38,7 +38,8 @@
         s_description = data_utils.rand_name('description')
         cls.service_data = (
             cls.service_client.create_service(s_name, s_type,
-                                              description=s_description))
+                                              description=s_description)
+            ['service'])
         cls.service_id = cls.service_data['id']
         cls.service_ids.append(cls.service_id)
 
@@ -78,7 +79,8 @@
         interface1 = 'public'
         endpoint_for_update = (
             self.client.create_endpoint(self.service_id, interface1,
-                                        url1, region=region1, enabled=True))
+                                        url1, region=region1,
+                                        enabled=True))['endpoint']
         self.addCleanup(self.client.delete_endpoint, endpoint_for_update['id'])
 
         self.assertRaises(lib_exc.BadRequest, self.client.update_endpoint,
diff --git a/tempest/api/identity/admin/v3/test_groups.py b/tempest/api/identity/admin/v3/test_groups.py
index 88e2959..5ce6354 100644
--- a/tempest/api/identity/admin/v3/test_groups.py
+++ b/tempest/api/identity/admin/v3/test_groups.py
@@ -25,7 +25,7 @@
         name = data_utils.rand_name('Group')
         description = data_utils.rand_name('Description')
         group = self.client.create_group(name,
-                                         description=description)
+                                         description=description)['group']
         self.addCleanup(self.client.delete_group, group['id'])
         self.assertEqual(group['name'], name)
         self.assertEqual(group['description'], description)
@@ -34,11 +34,11 @@
         new_desc = data_utils.rand_name('UpdateDescription')
         updated_group = self.client.update_group(group['id'],
                                                  name=new_name,
-                                                 description=new_desc)
+                                                 description=new_desc)['group']
         self.assertEqual(updated_group['name'], new_name)
         self.assertEqual(updated_group['description'], new_desc)
 
-        new_group = self.client.get_group(group['id'])
+        new_group = self.client.get_group(group['id'])['group']
         self.assertEqual(group['id'], new_group['id'])
         self.assertEqual(new_name, new_group['name'])
         self.assertEqual(new_desc, new_group['description'])
@@ -47,25 +47,25 @@
     @test.idempotent_id('1598521a-2f36-4606-8df9-30772bd51339')
     def test_group_users_add_list_delete(self):
         name = data_utils.rand_name('Group')
-        group = self.client.create_group(name)
+        group = self.client.create_group(name)['group']
         self.addCleanup(self.client.delete_group, group['id'])
         # add user into group
         users = []
         for i in range(3):
             name = data_utils.rand_name('User')
-            user = self.client.create_user(name)
+            user = self.client.create_user(name)['user']
             users.append(user)
             self.addCleanup(self.client.delete_user, user['id'])
             self.client.add_group_user(group['id'], user['id'])
 
         # list users in group
-        group_users = self.client.list_group_users(group['id'])
+        group_users = self.client.list_group_users(group['id'])['users']
         self.assertEqual(sorted(users), sorted(group_users))
         # delete user in group
         for user in users:
             self.client.delete_group_user(group['id'],
                                           user['id'])
-        group_users = self.client.list_group_users(group['id'])
+        group_users = self.client.list_group_users(group['id'])['users']
         self.assertEqual(len(group_users), 0)
 
     @test.idempotent_id('64573281-d26a-4a52-b899-503cb0f4e4ec')
@@ -73,18 +73,18 @@
         # create a user
         user = self.client.create_user(
             data_utils.rand_name('User'),
-            password=data_utils.rand_name('Pass'))
+            password=data_utils.rand_name('Pass'))['user']
         self.addCleanup(self.client.delete_user, user['id'])
         # create two groups, and add user into them
         groups = []
         for i in range(2):
             name = data_utils.rand_name('Group')
-            group = self.client.create_group(name)
+            group = self.client.create_group(name)['group']
             groups.append(group)
             self.addCleanup(self.client.delete_group, group['id'])
             self.client.add_group_user(group['id'], user['id'])
         # list groups which user belongs to
-        user_groups = self.client.list_user_groups(user['id'])
+        user_groups = self.client.list_user_groups(user['id'])['groups']
         self.assertEqual(sorted(groups), sorted(user_groups))
         self.assertEqual(2, len(user_groups))
 
@@ -97,11 +97,11 @@
             name = data_utils.rand_name('Group')
             description = data_utils.rand_name('Description')
             group = self.client.create_group(name,
-                                             description=description)
+                                             description=description)['group']
             self.addCleanup(self.client.delete_group, group['id'])
             group_ids.append(group['id'])
         # List and Verify Groups
-        body = self.client.list_groups()
+        body = self.client.list_groups()['groups']
         for g in body:
             fetched_ids.append(g['id'])
         missing_groups = [g for g in group_ids if g not in fetched_ids]
diff --git a/tempest/api/identity/admin/v3/test_list_projects.py b/tempest/api/identity/admin/v3/test_list_projects.py
index 12d80bb..5185fea 100644
--- a/tempest/api/identity/admin/v3/test_list_projects.py
+++ b/tempest/api/identity/admin/v3/test_list_projects.py
@@ -28,22 +28,23 @@
         # Create project with domain
         cls.p1_name = data_utils.rand_name('project')
         cls.p1 = cls.client.create_project(
-            cls.p1_name, enabled=False, domain_id=cls.data.domain['id'])
+            cls.p1_name, enabled=False,
+            domain_id=cls.data.domain['id'])['project']
         cls.data.projects.append(cls.p1)
         cls.project_ids.append(cls.p1['id'])
         # Create default project
         p2_name = data_utils.rand_name('project')
-        cls.p2 = cls.client.create_project(p2_name)
+        cls.p2 = cls.client.create_project(p2_name)['project']
         cls.data.projects.append(cls.p2)
         cls.project_ids.append(cls.p2['id'])
 
     @test.idempotent_id('1d830662-22ad-427c-8c3e-4ec854b0af44')
     def test_projects_list(self):
         # List projects
-        list_projects = self.client.list_projects()
+        list_projects = self.client.list_projects()['projects']
 
         for p in self.project_ids:
-            get_project = self.client.get_project(p)
+            get_project = self.client.get_project(p)['project']
             self.assertIn(get_project, list_projects)
 
     @test.idempotent_id('fab13f3c-f6a6-4b9f-829b-d32fd44fdf10')
@@ -63,6 +64,6 @@
         self._list_projects_with_params({'name': self.p1_name}, 'name')
 
     def _list_projects_with_params(self, params, key):
-        body = self.client.list_projects(params)
+        body = self.client.list_projects(params)['projects']
         self.assertIn(self.p1[key], map(lambda x: x[key], body))
         self.assertNotIn(self.p2[key], map(lambda x: x[key], body))
diff --git a/tempest/api/identity/admin/v3/test_list_users.py b/tempest/api/identity/admin/v3/test_list_users.py
index d3d51b4..320b479 100644
--- a/tempest/api/identity/admin/v3/test_list_users.py
+++ b/tempest/api/identity/admin/v3/test_list_users.py
@@ -25,7 +25,7 @@
         # assert the response based on expected and not_expected
         # expected: user expected in the list response
         # not_expected: user, which should not be present in list response
-        body = self.client.get_users(params)
+        body = self.client.get_users(params)['users']
         self.assertIn(expected[key], map(lambda x: x[key], body))
         self.assertNotIn(not_expected[key],
                          map(lambda x: x[key], body))
@@ -41,13 +41,13 @@
         u1_name = data_utils.rand_name('test_user')
         cls.domain_enabled_user = cls.client.create_user(
             u1_name, password=alt_password,
-            email=cls.alt_email, domain_id=cls.data.domain['id'])
+            email=cls.alt_email, domain_id=cls.data.domain['id'])['user']
         cls.data.v3_users.append(cls.domain_enabled_user)
         # Create default not enabled user
         u2_name = data_utils.rand_name('test_user')
         cls.non_domain_enabled_user = cls.client.create_user(
             u2_name, password=alt_password,
-            email=cls.alt_email, enabled=False)
+            email=cls.alt_email, enabled=False)['user']
         cls.data.v3_users.append(cls.non_domain_enabled_user)
 
     @test.idempotent_id('08f9aabb-dcfe-41d0-8172-82b5fa0bd73d')
@@ -77,7 +77,7 @@
     @test.idempotent_id('b30d4651-a2ea-4666-8551-0c0e49692635')
     def test_list_users(self):
         # List users
-        body = self.client.get_users()
+        body = self.client.get_users()['users']
         fetched_ids = [u['id'] for u in body]
         missing_users = [u['id'] for u in self.data.v3_users
                          if u['id'] not in fetched_ids]
@@ -88,7 +88,7 @@
     @test.idempotent_id('b4baa3ae-ac00-4b4e-9e27-80deaad7771f')
     def test_get_user(self):
         # Get a user detail
-        user = self.client.get_user(self.data.v3_users[0]['id'])
+        user = self.client.get_user(self.data.v3_users[0]['id'])['user']
         self.assertEqual(self.data.v3_users[0]['id'], user['id'])
         self.assertEqual(self.data.v3_users[0]['name'], user['name'])
         self.assertEqual(self.alt_email, user['email'])
diff --git a/tempest/api/identity/admin/v3/test_projects.py b/tempest/api/identity/admin/v3/test_projects.py
index 17712f3..f014307 100644
--- a/tempest/api/identity/admin/v3/test_projects.py
+++ b/tempest/api/identity/admin/v3/test_projects.py
@@ -26,13 +26,13 @@
         project_name = data_utils.rand_name('project')
         project_desc = data_utils.rand_name('desc')
         project = self.client.create_project(
-            project_name, description=project_desc)
+            project_name, description=project_desc)['project']
         self.data.projects.append(project)
         project_id = project['id']
         desc1 = project['description']
         self.assertEqual(desc1, project_desc, 'Description should have '
                          'been sent in response for create')
-        body = self.client.get_project(project_id)
+        body = self.client.get_project(project_id)['project']
         desc2 = body['description']
         self.assertEqual(desc2, project_desc, 'Description does not appear'
                          'to be set')
@@ -43,12 +43,12 @@
         self.data.setup_test_domain()
         project_name = data_utils.rand_name('project')
         project = self.client.create_project(
-            project_name, domain_id=self.data.domain['id'])
+            project_name, domain_id=self.data.domain['id'])['project']
         self.data.projects.append(project)
         project_id = project['id']
         self.assertEqual(project_name, project['name'])
         self.assertEqual(self.data.domain['id'], project['domain_id'])
-        body = self.client.get_project(project_id)
+        body = self.client.get_project(project_id)['project']
         self.assertEqual(project_name, body['name'])
         self.assertEqual(self.data.domain['id'], body['domain_id'])
 
@@ -57,12 +57,12 @@
         # Create a project that is enabled
         project_name = data_utils.rand_name('project')
         project = self.client.create_project(
-            project_name, enabled=True)
+            project_name, enabled=True)['project']
         self.data.projects.append(project)
         project_id = project['id']
         en1 = project['enabled']
         self.assertTrue(en1, 'Enable should be True in response')
-        body = self.client.get_project(project_id)
+        body = self.client.get_project(project_id)['project']
         en2 = body['enabled']
         self.assertTrue(en2, 'Enable should be True in lookup')
 
@@ -71,12 +71,12 @@
         # Create a project that is not enabled
         project_name = data_utils.rand_name('project')
         project = self.client.create_project(
-            project_name, enabled=False)
+            project_name, enabled=False)['project']
         self.data.projects.append(project)
         en1 = project['enabled']
         self.assertEqual('false', str(en1).lower(),
                          'Enable should be False in response')
-        body = self.client.get_project(project['id'])
+        body = self.client.get_project(project['id'])['project']
         en2 = body['enabled']
         self.assertEqual('false', str(en2).lower(),
                          'Enable should be False in lookup')
@@ -85,17 +85,18 @@
     def test_project_update_name(self):
         # Update name attribute of a project
         p_name1 = data_utils.rand_name('project')
-        project = self.client.create_project(p_name1)
+        project = self.client.create_project(p_name1)['project']
         self.data.projects.append(project)
 
         resp1_name = project['name']
 
         p_name2 = data_utils.rand_name('project2')
-        body = self.client.update_project(project['id'], name=p_name2)
+        body = self.client.update_project(project['id'],
+                                          name=p_name2)['project']
         resp2_name = body['name']
         self.assertNotEqual(resp1_name, resp2_name)
 
-        body = self.client.get_project(project['id'])
+        body = self.client.get_project(project['id'])['project']
         resp3_name = body['name']
 
         self.assertNotEqual(resp1_name, resp3_name)
@@ -108,17 +109,17 @@
         p_name = data_utils.rand_name('project')
         p_desc = data_utils.rand_name('desc')
         project = self.client.create_project(
-            p_name, description=p_desc)
+            p_name, description=p_desc)['project']
         self.data.projects.append(project)
         resp1_desc = project['description']
 
         p_desc2 = data_utils.rand_name('desc2')
         body = self.client.update_project(
-            project['id'], description=p_desc2)
+            project['id'], description=p_desc2)['project']
         resp2_desc = body['description']
         self.assertNotEqual(resp1_desc, resp2_desc)
 
-        body = self.client.get_project(project['id'])
+        body = self.client.get_project(project['id'])['project']
         resp3_desc = body['description']
 
         self.assertNotEqual(resp1_desc, resp3_desc)
@@ -130,18 +131,18 @@
         # Update the enabled attribute of a project
         p_name = data_utils.rand_name('project')
         p_en = False
-        project = self.client.create_project(p_name, enabled=p_en)
+        project = self.client.create_project(p_name, enabled=p_en)['project']
         self.data.projects.append(project)
 
         resp1_en = project['enabled']
 
         p_en2 = True
         body = self.client.update_project(
-            project['id'], enabled=p_en2)
+            project['id'], enabled=p_en2)['project']
         resp2_en = body['enabled']
         self.assertNotEqual(resp1_en, resp2_en)
 
-        body = self.client.get_project(project['id'])
+        body = self.client.get_project(project['id'])['project']
         resp3_en = body['enabled']
 
         self.assertNotEqual(resp1_en, resp3_en)
@@ -153,7 +154,7 @@
         # Associate a user to a project
         # Create a Project
         p_name = data_utils.rand_name('project')
-        project = self.client.create_project(p_name)
+        project = self.client.create_project(p_name)['project']
         self.data.projects.append(project)
 
         # Create a User
@@ -163,12 +164,12 @@
         u_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             u_name, description=u_desc, password=u_password,
-            email=u_email, project_id=project['id'])
+            email=u_email, project_id=project['id'])['user']
         # Delete the User at the end of this method
         self.addCleanup(self.client.delete_user, user['id'])
 
         # Get User To validate the user details
-        new_user_get = self.client.get_user(user['id'])
+        new_user_get = self.client.get_user(user['id'])['user']
         # Assert response body of GET
         self.assertEqual(u_name, new_user_get['name'])
         self.assertEqual(u_desc, new_user_get['description'])
diff --git a/tempest/api/identity/admin/v3/test_projects_negative.py b/tempest/api/identity/admin/v3/test_projects_negative.py
index d5ee5a7..9b60d54 100644
--- a/tempest/api/identity/admin/v3/test_projects_negative.py
+++ b/tempest/api/identity/admin/v3/test_projects_negative.py
@@ -34,7 +34,7 @@
     def test_project_create_duplicate(self):
         # Project names should be unique
         project_name = data_utils.rand_name('project-dup')
-        project = self.client.create_project(project_name)
+        project = self.client.create_project(project_name)['project']
         self.data.projects.append(project)
 
         self.assertRaises(
@@ -69,7 +69,7 @@
     def test_project_delete_by_unauthorized_user(self):
         # Non-admin user should not be able to delete a project
         project_name = data_utils.rand_name('project')
-        project = self.client.create_project(project_name)
+        project = self.client.create_project(project_name)['project']
         self.data.projects.append(project)
         self.assertRaises(
             lib_exc.Forbidden, self.non_admin_client.delete_project,
diff --git a/tempest/api/identity/admin/v3/test_regions.py b/tempest/api/identity/admin/v3/test_regions.py
index 7eb92bc..e96e0f5 100644
--- a/tempest/api/identity/admin/v3/test_regions.py
+++ b/tempest/api/identity/admin/v3/test_regions.py
@@ -33,7 +33,7 @@
         cls.setup_regions = list()
         for i in range(2):
             r_description = data_utils.rand_name('description')
-            region = cls.client.create_region(r_description)
+            region = cls.client.create_region(r_description)['region']
             cls.setup_regions.append(region)
 
     @classmethod
@@ -51,7 +51,8 @@
     def test_create_update_get_delete_region(self):
         r_description = data_utils.rand_name('description')
         region = self.client.create_region(
-            r_description, parent_region_id=self.setup_regions[0]['id'])
+            r_description,
+            parent_region_id=self.setup_regions[0]['id'])['region']
         self.addCleanup(self._delete_region, region['id'])
         self.assertEqual(r_description, region['description'])
         self.assertEqual(self.setup_regions[0]['id'],
@@ -61,12 +62,12 @@
         region = self.client.update_region(
             region['id'],
             description=r_alt_description,
-            parent_region_id=self.setup_regions[1]['id'])
+            parent_region_id=self.setup_regions[1]['id'])['region']
         self.assertEqual(r_alt_description, region['description'])
         self.assertEqual(self.setup_regions[1]['id'],
                          region['parent_region_id'])
         # Get the details of region
-        region = self.client.get_region(region['id'])
+        region = self.client.get_region(region['id'])['region']
         self.assertEqual(r_alt_description, region['description'])
         self.assertEqual(self.setup_regions[1]['id'],
                          region['parent_region_id'])
@@ -78,7 +79,7 @@
         r_region_id = data_utils.rand_uuid()
         r_description = data_utils.rand_name('description')
         region = self.client.create_region(
-            r_description, unique_region_id=r_region_id)
+            r_description, unique_region_id=r_region_id)['region']
         self.addCleanup(self._delete_region, region['id'])
         # Asserting Create Region with specific id response body
         self.assertEqual(r_region_id, region['id'])
@@ -87,7 +88,7 @@
     @test.idempotent_id('d180bf99-544a-445c-ad0d-0c0d27663796')
     def test_list_regions(self):
         # Get a list of regions
-        fetched_regions = self.client.list_regions()
+        fetched_regions = self.client.list_regions()['regions']
         missing_regions =\
             [e for e in self.setup_regions if e not in fetched_regions]
         # Asserting List Regions response
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index f58a5c5..ffc991a 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -25,7 +25,7 @@
         super(RolesV3TestJSON, cls).resource_setup()
         for _ in range(3):
             role_name = data_utils.rand_name(name='role')
-            role = cls.client.create_role(role_name)
+            role = cls.client.create_role(role_name)['role']
             cls.data.v3_roles.append(role)
         cls.fetched_role_ids = list()
         u_name = data_utils.rand_name('user')
@@ -34,20 +34,20 @@
         cls.u_password = data_utils.rand_name('pass')
         cls.domain = cls.client.create_domain(
             data_utils.rand_name('domain'),
-            description=data_utils.rand_name('domain-desc'))
+            description=data_utils.rand_name('domain-desc'))['domain']
         cls.project = cls.client.create_project(
             data_utils.rand_name('project'),
             description=data_utils.rand_name('project-desc'),
-            domain_id=cls.domain['id'])
+            domain_id=cls.domain['id'])['project']
         cls.group_body = cls.client.create_group(
             data_utils.rand_name('Group'), project_id=cls.project['id'],
-            domain_id=cls.domain['id'])
+            domain_id=cls.domain['id'])['group']
         cls.user_body = cls.client.create_user(
             u_name, description=u_desc, password=cls.u_password,
             email=u_email, project_id=cls.project['id'],
-            domain_id=cls.domain['id'])
+            domain_id=cls.domain['id'])['user']
         cls.role = cls.client.create_role(
-            data_utils.rand_name('Role'))
+            data_utils.rand_name('Role'))['role']
 
     @classmethod
     def resource_cleanup(cls):
@@ -69,23 +69,23 @@
     @test.idempotent_id('18afc6c0-46cf-4911-824e-9989cc056c3a')
     def test_role_create_update_get_list(self):
         r_name = data_utils.rand_name('Role')
-        role = self.client.create_role(r_name)
+        role = self.client.create_role(r_name)['role']
         self.addCleanup(self.client.delete_role, role['id'])
         self.assertIn('name', role)
         self.assertEqual(role['name'], r_name)
 
         new_name = data_utils.rand_name('NewRole')
-        updated_role = self.client.update_role(new_name, role['id'])
+        updated_role = self.client.update_role(new_name, role['id'])['role']
         self.assertIn('name', updated_role)
         self.assertIn('id', updated_role)
         self.assertIn('links', updated_role)
         self.assertNotEqual(r_name, updated_role['name'])
 
-        new_role = self.client.get_role(role['id'])
+        new_role = self.client.get_role(role['id'])['role']
         self.assertEqual(new_name, new_role['name'])
         self.assertEqual(updated_role['id'], new_role['id'])
 
-        roles = self.client.list_roles()
+        roles = self.client.list_roles()['roles']
         self.assertIn(role['id'], [r['id'] for r in roles])
 
     @test.idempotent_id('c6b80012-fe4a-498b-9ce8-eb391c05169f')
@@ -94,7 +94,7 @@
             self.project['id'], self.user_body['id'], self.role['id'])
 
         roles = self.client.list_user_roles_on_project(
-            self.project['id'], self.user_body['id'])
+            self.project['id'], self.user_body['id'])['roles']
 
         for i in roles:
             self.fetched_role_ids.append(i['id'])
@@ -111,7 +111,7 @@
             self.domain['id'], self.user_body['id'], self.role['id'])
 
         roles = self.client.list_user_roles_on_domain(
-            self.domain['id'], self.user_body['id'])
+            self.domain['id'], self.user_body['id'])['roles']
 
         for i in roles:
             self.fetched_role_ids.append(i['id'])
@@ -129,7 +129,7 @@
             self.project['id'], self.group_body['id'], self.role['id'])
         # List group roles on project
         roles = self.client.list_group_roles_on_project(
-            self.project['id'], self.group_body['id'])
+            self.project['id'], self.group_body['id'])['roles']
 
         for i in roles:
             self.fetched_role_ids.append(i['id'])
@@ -158,7 +158,7 @@
             self.domain['id'], self.group_body['id'], self.role['id'])
 
         roles = self.client.list_group_roles_on_domain(
-            self.domain['id'], self.group_body['id'])
+            self.domain['id'], self.group_body['id'])['roles']
 
         for i in roles:
             self.fetched_role_ids.append(i['id'])
@@ -172,6 +172,6 @@
     @test.idempotent_id('f5654bcc-08c4-4f71-88fe-05d64e06de94')
     def test_list_roles(self):
         # Return a list of all roles
-        body = self.client.list_roles()
+        body = self.client.list_roles()['roles']
         found = [role for role in body if role in self.data.v3_roles]
         self.assertEqual(len(found), len(self.data.v3_roles))
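The role-assignment tests combine an assignment call (whose return value is not inspected) with a list call whose body is now unwrapped with ['roles'] before the IDs are collected. A short sketch of that collect-and-compare step with a hypothetical list body:

    # Hypothetical list_user_roles_on_project() body after this change.
    roles_body = {'roles': [{'id': 'r1', 'name': 'Role-1'},
                            {'id': 'r2', 'name': 'Role-2'}]}

    fetched_role_ids = [r['id'] for r in roles_body['roles']]
    assigned_role_id = 'r1'
    assert assigned_role_id in fetched_role_ids   # membership check as above
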
diff --git a/tempest/api/identity/admin/v3/test_services.py b/tempest/api/identity/admin/v3/test_services.py
index 95a7dcc..d920f64 100644
--- a/tempest/api/identity/admin/v3/test_services.py
+++ b/tempest/api/identity/admin/v3/test_services.py
@@ -37,7 +37,7 @@
         serv_type = data_utils.rand_name('type')
         desc = data_utils.rand_name('description')
         create_service = self.service_client.create_service(
-            serv_type, name=name, description=desc)
+            serv_type, name=name, description=desc)['service']
         self.addCleanup(self._del_service, create_service['id'])
         self.assertIsNotNone(create_service['id'])
 
@@ -50,13 +50,13 @@
         resp1_desc = create_service['description']
         s_desc2 = data_utils.rand_name('desc2')
         update_service = self.service_client.update_service(
-            s_id, description=s_desc2)
+            s_id, description=s_desc2)['service']
         resp2_desc = update_service['description']
 
         self.assertNotEqual(resp1_desc, resp2_desc)
 
         # Get service
-        fetched_service = self.service_client.get_service(s_id)
+        fetched_service = self.service_client.get_service(s_id)['service']
         resp3_desc = fetched_service['description']
 
         self.assertEqual(resp2_desc, resp3_desc)
@@ -68,7 +68,7 @@
         name = data_utils.rand_name('service')
         serv_type = data_utils.rand_name('type')
         service = self.service_client.create_service(
-            serv_type, name=name)
+            serv_type, name=name)['service']
         self.addCleanup(self.service_client.delete_service, service['id'])
         self.assertIn('id', service)
         expected_data = {'name': name, 'type': serv_type}
@@ -82,13 +82,13 @@
             name = data_utils.rand_name('service')
             serv_type = data_utils.rand_name('type')
             create_service = self.service_client.create_service(
-                serv_type, name=name)
+                serv_type, name=name)['service']
             self.addCleanup(self.service_client.delete_service,
                             create_service['id'])
             service_ids.append(create_service['id'])
 
         # List and Verify Services
-        services = self.service_client.list_services()
+        services = self.service_client.list_services()['services']
         fetched_ids = [service['id'] for service in services]
         found = [s for s in fetched_ids if s in service_ids]
         self.assertEqual(len(found), len(service_ids))
diff --git a/tempest/api/identity/admin/v3/test_tokens.py b/tempest/api/identity/admin/v3/test_tokens.py
index 951bc78..5681ac6 100644
--- a/tempest/api/identity/admin/v3/test_tokens.py
+++ b/tempest/api/identity/admin/v3/test_tokens.py
@@ -32,14 +32,14 @@
         u_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             u_name, description=u_desc, password=u_password,
-            email=u_email)
+            email=u_email)['user']
         self.addCleanup(self.client.delete_user, user['id'])
         # Perform Authentication
         resp = self.token.auth(user_id=user['id'],
                                password=u_password).response
         subject_token = resp['x-subject-token']
         # Perform GET Token
-        token_details = self.client.get_token(subject_token)
+        token_details = self.client.get_token(subject_token)['token']
         self.assertEqual(resp['x-subject-token'], subject_token)
         self.assertEqual(token_details['user']['id'], user['id'])
         self.assertEqual(token_details['user']['name'], u_name)
@@ -61,21 +61,22 @@
         # Create a user.
         user_name = data_utils.rand_name(name='user')
         user_password = data_utils.rand_name(name='pass')
-        user = self.client.create_user(user_name, password=user_password)
+        user = self.client.create_user(user_name,
+                                       password=user_password)['user']
         self.addCleanup(self.client.delete_user, user['id'])
 
         # Create a couple projects
         project1_name = data_utils.rand_name(name='project')
-        project1 = self.client.create_project(project1_name)
+        project1 = self.client.create_project(project1_name)['project']
         self.addCleanup(self.client.delete_project, project1['id'])
 
         project2_name = data_utils.rand_name(name='project')
-        project2 = self.client.create_project(project2_name)
+        project2 = self.client.create_project(project2_name)['project']
         self.addCleanup(self.client.delete_project, project2['id'])
 
         # Create a role
         role_name = data_utils.rand_name(name='role')
-        role = self.client.create_role(role_name)
+        role = self.client.create_role(role_name)['role']
         self.addCleanup(self.client.delete_role, role['id'])
 
         # Grant the user the role on both projects.
diff --git a/tempest/api/identity/admin/v3/test_trusts.py b/tempest/api/identity/admin/v3/test_trusts.py
index 1ac34eb..b8700a6 100644
--- a/tempest/api/identity/admin/v3/test_trusts.py
+++ b/tempest/api/identity/admin/v3/test_trusts.py
@@ -48,7 +48,7 @@
         # create a project that trusts will be granted on
         self.trustor_project_name = data_utils.rand_name(name='project')
         project = self.client.create_project(self.trustor_project_name,
-                                             domain_id='default')
+                                             domain_id='default')['project']
         self.trustor_project_id = project['id']
         self.assertIsNotNone(self.trustor_project_id)
 
@@ -63,17 +63,17 @@
             password=self.trustor_password,
             email=u_email,
             project_id=self.trustor_project_id,
-            domain_id='default')
+            domain_id='default')['user']
         self.trustor_user_id = user['id']
 
         # And two roles, one we'll delegate and one we won't
         self.delegated_role = data_utils.rand_name('DelegatedRole')
         self.not_delegated_role = data_utils.rand_name('NotDelegatedRole')
 
-        role = self.client.create_role(self.delegated_role)
+        role = self.client.create_role(self.delegated_role)['role']
         self.delegated_role_id = role['id']
 
-        role = self.client.create_role(self.not_delegated_role)
+        role = self.client.create_role(self.not_delegated_role)['role']
         self.not_delegated_role_id = role['id']
 
         # Assign roles to trustor
@@ -118,7 +118,7 @@
             project_id=self.trustor_project_id,
             role_names=[self.delegated_role],
             impersonation=impersonate,
-            expires_at=expires)
+            expires_at=expires)['trust']
         self.trust_id = trust_create['id']
         return trust_create
 
@@ -141,7 +141,7 @@
             self.assertEqual(1, len(trust['roles']))
 
     def get_trust(self):
-        trust_get = self.trustor_client.get_trust(self.trust_id)
+        trust_get = self.trustor_client.get_trust(self.trust_id)['trust']
         return trust_get
 
     def validate_role(self, role):
@@ -157,12 +157,12 @@
     def check_trust_roles(self):
         # Check we find the delegated role
         roles_get = self.trustor_client.get_trust_roles(
-            self.trust_id)
+            self.trust_id)['roles']
         self.assertEqual(1, len(roles_get))
         self.validate_role(roles_get[0])
 
         role_get = self.trustor_client.get_trust_role(
-            self.trust_id, self.delegated_role_id)
+            self.trust_id, self.delegated_role_id)['role']
         self.validate_role(role_get)
 
         role_get = self.trustor_client.check_trust_role(
@@ -245,7 +245,7 @@
 
     @test.idempotent_id('3e48f95d-e660-4fa9-85e0-5a3d85594384')
     def test_trust_expire_invalid(self):
-        # Test case to check we can check an invlaid expiry time
+        # Test case to check we can check an invalid expiry time
         # is rejected with the correct error
         # with an expiry specified
         expires_str = 'bad.123Z'
@@ -257,7 +257,7 @@
     def test_get_trusts_query(self):
         self.create_trust()
         trusts_get = self.trustor_client.get_trusts(
-            trustor_user_id=self.trustor_user_id)
+            trustor_user_id=self.trustor_user_id)['trusts']
         self.assertEqual(1, len(trusts_get))
         self.validate_trust(trusts_get[0], summary=True)
 
@@ -265,7 +265,7 @@
     @test.idempotent_id('4773ebd5-ecbf-4255-b8d8-b63e6f72b65d')
     def test_get_trusts_all(self):
         self.create_trust()
-        trusts_get = self.client.get_trusts()
+        trusts_get = self.client.get_trusts()['trusts']
         trusts = [t for t in trusts_get
                   if t['id'] == self.trust_id]
         self.assertEqual(1, len(trusts))
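The trust tests apply the same convention to a multi-step flow: create_trust() and get_trust() wrap the trust under 'trust', the per-trust role queries return 'roles'/'role', and get_trusts() returns 'trusts'. A minimal sketch of filtering the trusts listing down to the one created by the test, using a stand-in body:

    # Stand-in for a get_trusts() body after this change; not real output.
    trusts_body = {'trusts': [{'id': 't1', 'trustor_user_id': 'u1'},
                              {'id': 't2', 'trustor_user_id': 'u2'}]}
    trust_id = 't1'

    matching = [t for t in trusts_body['trusts'] if t['id'] == trust_id]
    assert len(matching) == 1            # as in test_get_trusts_all
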
diff --git a/tempest/api/identity/admin/v3/test_users.py b/tempest/api/identity/admin/v3/test_users.py
index 19cb24e..8fac0b3 100644
--- a/tempest/api/identity/admin/v3/test_users.py
+++ b/tempest/api/identity/admin/v3/test_users.py
@@ -30,13 +30,13 @@
         u_password = data_utils.rand_name('pass')
         user = self.client.create_user(
             u_name, description=u_desc, password=u_password,
-            email=u_email, enabled=False)
+            email=u_email, enabled=False)['user']
         # Delete the User at the end of this method
         self.addCleanup(self.client.delete_user, user['id'])
         # Creating second project for updation
         project = self.client.create_project(
             data_utils.rand_name('project'),
-            description=data_utils.rand_name('project-desc'))
+            description=data_utils.rand_name('project-desc'))['project']
         # Delete the Project at the end of this method
         self.addCleanup(self.client.delete_project, project['id'])
         # Updating user details with new values
@@ -46,7 +46,7 @@
         update_user = self.client.update_user(
             user['id'], name=u_name2, description=u_description2,
             project_id=project['id'],
-            email=u_email2, enabled=False)
+            email=u_email2, enabled=False)['user']
         self.assertEqual(u_name2, update_user['name'])
         self.assertEqual(u_description2, update_user['description'])
         self.assertEqual(project['id'],
@@ -54,7 +54,7 @@
         self.assertEqual(u_email2, update_user['email'])
         self.assertEqual(False, update_user['enabled'])
         # GET by id after updation
-        new_user_get = self.client.get_user(user['id'])
+        new_user_get = self.client.get_user(user['id'])['user']
         # Assert response body of GET after updation
         self.assertEqual(u_name2, new_user_get['name'])
         self.assertEqual(u_description2, new_user_get['description'])
@@ -69,7 +69,7 @@
         u_name = data_utils.rand_name('user')
         original_password = data_utils.rand_name('pass')
         user = self.client.create_user(
-            u_name, password=original_password)
+            u_name, password=original_password)['user']
         # Delete the User at the end all test methods
         self.addCleanup(self.client.delete_user, user['id'])
         # Update user with new password
@@ -80,7 +80,7 @@
                                password=new_password).response
         subject_token = resp['x-subject-token']
         # Perform GET Token to verify and confirm password is updated
-        token_details = self.client.get_token(subject_token)
+        token_details = self.client.get_token(subject_token)['token']
         self.assertEqual(resp['x-subject-token'], subject_token)
         self.assertEqual(token_details['user']['id'], user['id'])
         self.assertEqual(token_details['user']['name'], u_name)
@@ -92,7 +92,7 @@
         fetched_project_ids = list()
         u_project = self.client.create_project(
             data_utils.rand_name('project'),
-            description=data_utils.rand_name('project-desc'))
+            description=data_utils.rand_name('project-desc'))['project']
         # Delete the Project at the end of this method
         self.addCleanup(self.client.delete_project, u_project['id'])
         # Create a user.
@@ -102,23 +102,23 @@
         u_password = data_utils.rand_name('pass')
         user_body = self.client.create_user(
             u_name, description=u_desc, password=u_password,
-            email=u_email, enabled=False, project_id=u_project['id'])
+            email=u_email, enabled=False, project_id=u_project['id'])['user']
         # Delete the User at the end of this method
         self.addCleanup(self.client.delete_user, user_body['id'])
         # Creating Role
         role_body = self.client.create_role(
-            data_utils.rand_name('role'))
+            data_utils.rand_name('role'))['role']
         # Delete the Role at the end of this method
         self.addCleanup(self.client.delete_role, role_body['id'])
 
-        user = self.client.get_user(user_body['id'])
-        role = self.client.get_role(role_body['id'])
+        user = self.client.get_user(user_body['id'])['user']
+        role = self.client.get_role(role_body['id'])['role']
         for i in range(2):
             # Creating project so as to assign role
             project_body = self.client.create_project(
                 data_utils.rand_name('project'),
-                description=data_utils.rand_name('project-desc'))
-            project = self.client.get_project(project_body['id'])
+                description=data_utils.rand_name('project-desc'))['project']
+            project = self.client.get_project(project_body['id'])['project']
             # Delete the Project at the end of this method
             self.addCleanup(self.client.delete_project, project_body['id'])
             # Assigning roles to user on project
@@ -126,7 +126,7 @@
                                          user['id'],
                                          role['id'])
             assigned_project_ids.append(project['id'])
-        body = self.client.list_user_projects(user['id'])
+        body = self.client.list_user_projects(user['id'])['projects']
         for i in body:
             fetched_project_ids.append(i['id'])
         # verifying the project ids in list
@@ -142,5 +142,5 @@
     def test_get_user(self):
         # Get a user detail
         self.data.setup_test_v3_user()
-        user = self.client.get_user(self.data.v3_user['id'])
+        user = self.client.get_user(self.data.v3_user['id'])['user']
         self.assertEqual(self.data.v3_user['id'], user['id'])
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 0654f37..7b23e66 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -48,8 +48,12 @@
     def get_tenant_by_name(cls, name):
         try:
             tenants = cls.client.list_tenants()
+            # TODO(jswarren): always retrieve 'tenants' value
+            # once both clients return full response objects
+            if 'tenants' in tenants:
+                tenants = tenants['tenants']
         except AttributeError:
-            tenants = cls.client.list_projects()
+            tenants = cls.client.list_projects()['projects']
         tenant = [t for t in tenants if t['name'] == name]
         if len(tenant) > 0:
             return tenant[0]
@@ -153,21 +157,21 @@
 
     @classmethod
     def get_user_by_name(cls, name):
-        users = cls.client.get_users()
+        users = cls.client.get_users()['users']
         user = [u for u in users if u['name'] == name]
         if len(user) > 0:
             return user[0]
 
     @classmethod
     def get_tenant_by_name(cls, name):
-        tenants = cls.client.list_projects()
+        tenants = cls.client.list_projects()['projects']
         tenant = [t for t in tenants if t['name'] == name]
         if len(tenant) > 0:
             return tenant[0]
 
     @classmethod
     def get_role_by_name(cls, name):
-        roles = cls.client.list_roles()
+        roles = cls.client.list_roles()['roles']
         role = [r for r in roles if r['name'] == name]
         if len(role) > 0:
             return role[0]
@@ -237,7 +241,7 @@
                 self.test_user,
                 password=self.test_password,
                 project_id=self.project['id'],
-                email=self.test_email)
+                email=self.test_email)['user']
             self.v3_users.append(self.v3_user)
 
         def setup_test_project(self):
@@ -246,13 +250,13 @@
             self.test_description = data_utils.rand_name('desc')
             self.project = self.client.create_project(
                 name=self.test_project,
-                description=self.test_description)
+                description=self.test_description)['project']
             self.projects.append(self.project)
 
         def setup_test_v3_role(self):
             """Set up a test v3 role."""
             self.test_role = data_utils.rand_name('role')
-            self.v3_role = self.client.create_role(self.test_role)
+            self.v3_role = self.client.create_role(self.test_role)['role']
             self.v3_roles.append(self.v3_role)
 
         def setup_test_domain(self):
@@ -261,7 +265,7 @@
             self.test_description = data_utils.rand_name('desc')
             self.domain = self.client.create_domain(
                 name=self.test_domain,
-                description=self.test_description)
+                description=self.test_description)['domain']
             self.domains.append(self.domain)
 
         @staticmethod
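
A minimal sketch (not part of the patch above) of the transitional unwrapping
that the TODO in get_tenant_by_name describes, usable while only some clients
return the full response object; the helper name is hypothetical:

    def unwrap(body, key):
        # e.g. unwrap(client.list_tenants(), 'tenants') works whether the
        # client returns {'tenants': [...]} or still returns the bare list.
        if isinstance(body, dict) and key in body:
            return body[key]
        return body
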
diff --git a/tempest/api/image/base.py b/tempest/api/image/base.py
index 87013db..4572310 100644
--- a/tempest/api/image/base.py
+++ b/tempest/api/image/base.py
@@ -90,6 +90,26 @@
         super(BaseV1ImageTest, cls).setup_clients()
         cls.client = cls.os.image_client
 
+    # TODO(jswarren) Remove this method once the v2 client also returns the
+    # full response object, not just the ['image'] value. At that
+    # point BaseImageTest.create_image will need to retrieve the
+    # ['image'] value.
+    @classmethod
+    def create_image(cls, **kwargs):
+        """Wrapper that returns a test image."""
+        name = data_utils.rand_name(cls.__name__ + "-instance")
+
+        if 'name' in kwargs:
+            name = kwargs.pop('name')
+
+        container_format = kwargs.pop('container_format')
+        disk_format = kwargs.pop('disk_format')
+
+        image = cls.client.create_image(name, container_format,
+                                        disk_format, **kwargs)['image']
+        cls.created_images.append(image['id'])
+        return image
+
 
 class BaseV1ImageMembersTest(BaseV1ImageTest):
 
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
index 8beed32..7739d16 100644
--- a/tempest/api/image/v1/test_images.py
+++ b/tempest/api/image/v1/test_images.py
@@ -45,7 +45,7 @@
 
         # Now try uploading an image file
         image_file = moves.cStringIO(data_utils.random_bytes())
-        body = self.client.update_image(image_id, data=image_file)
+        body = self.client.update_image(image_id, data=image_file)['image']
         self.assertIn('size', body)
         self.assertEqual(1024, body.get('size'))
 
@@ -168,14 +168,14 @@
     @test.idempotent_id('246178ab-3b33-4212-9a4b-a7fe8261794d')
     def test_index_no_params(self):
         # Simple test to see all fixture images returned
-        images_list = self.client.list_images()
+        images_list = self.client.list_images()['images']
         image_list = map(lambda x: x['id'], images_list)
         for image_id in self.created_images:
             self.assertIn(image_id, image_list)
 
     @test.idempotent_id('f1755589-63d6-4468-b098-589820eb4031')
     def test_index_disk_format(self):
-        images_list = self.client.list_images(disk_format='ami')
+        images_list = self.client.list_images(disk_format='ami')['images']
         for image in images_list:
             self.assertEqual(image['disk_format'], 'ami')
         result_set = set(map(lambda x: x['id'], images_list))
@@ -184,7 +184,8 @@
 
     @test.idempotent_id('2143655d-96d9-4bec-9188-8674206b4b3b')
     def test_index_container_format(self):
-        images_list = self.client.list_images(container_format='bare')
+        images_list = (self.client.list_images(container_format='bare')
+                       ['images'])
         for image in images_list:
             self.assertEqual(image['container_format'], 'bare')
         result_set = set(map(lambda x: x['id'], images_list))
@@ -193,7 +194,7 @@
 
     @test.idempotent_id('feb32ac6-22bb-4a16-afd8-9454bb714b14')
     def test_index_max_size(self):
-        images_list = self.client.list_images(size_max=42)
+        images_list = self.client.list_images(size_max=42)['images']
         for image in images_list:
             self.assertTrue(image['size'] <= 42)
         result_set = set(map(lambda x: x['id'], images_list))
@@ -202,7 +203,7 @@
 
     @test.idempotent_id('6ffc16d0-4cbf-4401-95c8-4ac63eac34d8')
     def test_index_min_size(self):
-        images_list = self.client.list_images(size_min=142)
+        images_list = self.client.list_images(size_min=142)['images']
         for image in images_list:
             self.assertTrue(image['size'] >= 142)
         result_set = set(map(lambda x: x['id'], images_list))
@@ -214,7 +215,7 @@
         images_list = self.client.list_images(detail=True,
                                               status='active',
                                               sort_key='size',
-                                              sort_dir='desc')
+                                              sort_dir='desc')['images']
         top_size = images_list[0]['size']  # We have non-zero sized images
         for image in images_list:
             size = image['size']
@@ -226,7 +227,7 @@
     def test_index_name(self):
         images_list = self.client.list_images(
             detail=True,
-            name='New Remote Image dup')
+            name='New Remote Image dup')['images']
         result_set = set(map(lambda x: x['id'], images_list))
         for image in images_list:
             self.assertEqual(image['name'], 'New Remote Image dup')
@@ -272,7 +273,7 @@
         self.assertEqual(metadata['properties'], {'key1': 'value1'})
         metadata['properties'].update(req_metadata)
         metadata = self.client.update_image(
-            self.image_id, properties=metadata['properties'])
+            self.image_id, properties=metadata['properties'])['image']
 
         resp_metadata = self.client.get_image_meta(self.image_id)
         expected = {'key1': 'alt1', 'key2': 'value2'}
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index f0923d2..4b4a4e2 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -25,7 +25,7 @@
 
 class FloatingIPTestJSON(base.BaseNetworkTest):
     """
-    Tests the following operations in the Quantum API using the REST client for
+    Tests the following operations in the Neutron API using the REST client for
     Neutron:
 
         Create a Floating IP
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index 78b51c8..1308414 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -277,11 +277,11 @@
         test_routes = []
         routes_num = 5
         # Create a router
-        self.router = self._create_router(
+        router = self._create_router(
             data_utils.rand_name('router-'), True)
         self.addCleanup(
             self._delete_extra_routes,
-            self.router['id'])
+            router['id'])
         # Update router extra route, second ip of the range is
         # used as next hop
         for i in range(routes_num):
@@ -290,7 +290,7 @@
             next_cidr = next_cidr.next()
 
             # Add router interface with subnet id
-            self.create_router_interface(self.router['id'], subnet['id'])
+            self.create_router_interface(router['id'], subnet['id'])
 
             cidr = netaddr.IPNetwork(subnet['cidr'])
             next_hop = str(cidr[2])
@@ -300,9 +300,9 @@
             )
 
         test_routes.sort(key=lambda x: x['destination'])
-        extra_route = self.client.update_extra_routes(self.router['id'],
+        extra_route = self.client.update_extra_routes(router['id'],
                                                       test_routes)
-        show_body = self.client.show_router(self.router['id'])
+        show_body = self.client.show_router(router['id'])
         # Assert the number of routes
         self.assertEqual(routes_num, len(extra_route['router']['routes']))
         self.assertEqual(routes_num, len(show_body['router']['routes']))
@@ -327,13 +327,13 @@
 
     @test.idempotent_id('a8902683-c788-4246-95c7-ad9c6d63a4d9')
     def test_update_router_admin_state(self):
-        self.router = self._create_router(data_utils.rand_name('router-'))
-        self.assertFalse(self.router['admin_state_up'])
+        router = self._create_router(data_utils.rand_name('router-'))
+        self.assertFalse(router['admin_state_up'])
         # Update router admin state
-        update_body = self.client.update_router(self.router['id'],
+        update_body = self.client.update_router(router['id'],
                                                 admin_state_up=True)
         self.assertTrue(update_body['router']['admin_state_up'])
-        show_body = self.client.show_router(self.router['id'])
+        show_body = self.client.show_router(router['id'])
         self.assertTrue(show_body['router']['admin_state_up'])
 
     @test.attr(type='smoke')
diff --git a/tempest/api/orchestration/base.py b/tempest/api/orchestration/base.py
index 6578680..f2c59f3 100644
--- a/tempest/api/orchestration/base.py
+++ b/tempest/api/orchestration/base.py
@@ -96,7 +96,7 @@
     @classmethod
     def _create_keypair(cls, name_start='keypair-heat-'):
         kp_name = data_utils.rand_name(name_start)
-        body = cls.keypairs_client.create_keypair(name=kp_name)
+        body = cls.keypairs_client.create_keypair(name=kp_name)['keypair']
         cls.keypairs.append(kp_name)
         return body
 
diff --git a/tempest/api/telemetry/base.py b/tempest/api/telemetry/base.py
index 0f9b7dd..5d1784f 100644
--- a/tempest/api/telemetry/base.py
+++ b/tempest/api/telemetry/base.py
@@ -85,6 +85,11 @@
         body = client.create_image(
             data_utils.rand_name('image'), container_format='bare',
             disk_format='raw', visibility='private')
+        # TODO(jswarren) Move ['image'] up to initial body value assignment
+        # once both v1 and v2 glance clients include the full response
+        # object.
+        if 'image' in body:
+            body = body['image']
         cls.image_ids.append(body['id'])
         return body
 
diff --git a/tempest/api/volume/admin/test_snapshots_actions.py b/tempest/api/volume/admin/test_snapshots_actions.py
index c860b4b..66973a7 100644
--- a/tempest/api/volume/admin/test_snapshots_actions.py
+++ b/tempest/api/volume/admin/test_snapshots_actions.py
@@ -41,8 +41,8 @@
         # Create a test shared snapshot for tests
         snap_name = data_utils.rand_name(cls.__name__ + '-Snapshot')
         params = {cls.name_field: snap_name}
-        cls.snapshot = \
-            cls.client.create_snapshot(cls.volume['id'], **params)
+        cls.snapshot = cls.client.create_snapshot(
+            cls.volume['id'], **params)['snapshot']
         cls.client.wait_for_snapshot_status(cls.snapshot['id'],
                                             'available')
 
@@ -86,8 +86,8 @@
         status = 'creating'
         self.admin_snapshots_client.\
             reset_snapshot_status(self.snapshot['id'], status)
-        snapshot_get \
-            = self.admin_snapshots_client.show_snapshot(self.snapshot['id'])
+        snapshot_get = self.admin_snapshots_client.show_snapshot(
+            self.snapshot['id'])['snapshot']
         self.assertEqual(status, snapshot_get['status'])
 
     @test.idempotent_id('41288afd-d463-485e-8f6e-4eea159413eb')
@@ -103,8 +103,8 @@
         progress_alias = self._get_progress_alias()
         self.client.update_snapshot_status(self.snapshot['id'],
                                            status, progress)
-        snapshot_get \
-            = self.admin_snapshots_client.show_snapshot(self.snapshot['id'])
+        snapshot_get = self.admin_snapshots_client.show_snapshot(
+            self.snapshot['id'])['snapshot']
         self.assertEqual(status, snapshot_get['status'])
         self.assertEqual(progress, snapshot_get[progress_alias])
 
diff --git a/tempest/api/volume/admin/test_volume_hosts.py b/tempest/api/volume/admin/test_volume_hosts.py
index dd14d8c..b28488a 100644
--- a/tempest/api/volume/admin/test_volume_hosts.py
+++ b/tempest/api/volume/admin/test_volume_hosts.py
@@ -21,7 +21,7 @@
 
     @test.idempotent_id('d5f3efa2-6684-4190-9ced-1c2f526352ad')
     def test_list_hosts(self):
-        hosts = self.hosts_client.list_hosts()
+        hosts = self.hosts_client.list_hosts()['hosts']
         self.assertTrue(len(hosts) >= 2, "No. of hosts are < 2,"
                         "response of list hosts is: % s" % hosts)
 
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index b67a6d2..c987100 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -122,8 +122,8 @@
     @classmethod
     def create_snapshot(cls, volume_id=1, **kwargs):
         """Wrapper utility that returns a test snapshot."""
-        snapshot = cls.snapshots_client.create_snapshot(volume_id,
-                                                        **kwargs)
+        snapshot = cls.snapshots_client.create_snapshot(
+            volume_id, **kwargs)['snapshot']
         cls.snapshots.append(snapshot)
         cls.snapshots_client.wait_for_snapshot_status(snapshot['id'],
                                                       'available')
@@ -217,7 +217,7 @@
         name = name or data_utils.rand_name(cls.__name__ + '-QoS')
         consumer = consumer or 'front-end'
         qos_specs = cls.volume_qos_client.create_qos(name, consumer,
-                                                     **kwargs)
+                                                     **kwargs)['qos_specs']
         cls.qos_specs.append(qos_specs['id'])
         return qos_specs
 
diff --git a/tempest/api/volume/test_availability_zone.py b/tempest/api/volume/test_availability_zone.py
index f188fa9..366b8d2 100644
--- a/tempest/api/volume/test_availability_zone.py
+++ b/tempest/api/volume/test_availability_zone.py
@@ -31,7 +31,8 @@
     @test.idempotent_id('01f1ae88-eba9-4c6b-a011-6f7ace06b725')
     def test_get_availability_zone_list(self):
         # List of availability zone
-        availability_zone = self.client.list_availability_zones()
+        availability_zone = (self.client.list_availability_zones()
+                             ['availabilityZoneInfo'])
         self.assertTrue(len(availability_zone) > 0)
 
 
diff --git a/tempest/api/volume/test_extensions.py b/tempest/api/volume/test_extensions.py
index 17db45f..cce9ace 100644
--- a/tempest/api/volume/test_extensions.py
+++ b/tempest/api/volume/test_extensions.py
@@ -30,7 +30,8 @@
     @test.idempotent_id('94607eb0-43a5-47ca-82aa-736b41bd2e2c')
     def test_list_extensions(self):
         # List of all extensions
-        extensions = self.volumes_extension_client.list_extensions()
+        extensions = (self.volumes_extension_client.list_extensions()
+                      ['extensions'])
         if len(CONF.volume_feature_enabled.api_extensions) == 0:
             raise self.skipException('There are not any extensions configured')
         extension_list = [extension.get('alias') for extension in extensions]
diff --git a/tempest/api/volume/test_qos.py b/tempest/api/volume/test_qos.py
index 84fd7f6..5a58e2c 100644
--- a/tempest/api/volume/test_qos.py
+++ b/tempest/api/volume/test_qos.py
@@ -47,7 +47,7 @@
         self.volume_qos_client.wait_for_resource_deletion(body['id'])
 
         # validate the deletion
-        list_qos = self.volume_qos_client.list_qos()
+        list_qos = self.volume_qos_client.list_qos()['qos_specs']
         self.assertNotIn(body, list_qos)
 
     def _create_test_volume_type(self):
@@ -64,7 +64,7 @@
 
     def _test_get_association_qos(self):
         body = self.volume_qos_client.show_association_qos(
-            self.created_qos['id'])
+            self.created_qos['id'])['qos_associations']
 
         associations = []
         for association in body:
@@ -99,24 +99,27 @@
     @test.idempotent_id('7aa214cc-ac1a-4397-931f-3bb2e83bb0fd')
     def test_get_qos(self):
         """Tests the detail of a given qos-specs"""
-        body = self.volume_qos_client.show_qos(self.created_qos['id'])
+        body = self.volume_qos_client.show_qos(
+            self.created_qos['id'])['qos_specs']
         self.assertEqual(self.qos_name, body['name'])
         self.assertEqual(self.qos_consumer, body['consumer'])
 
     @test.idempotent_id('75e04226-bcf7-4595-a34b-fdf0736f38fc')
     def test_list_qos(self):
         """Tests the list of all qos-specs"""
-        body = self.volume_qos_client.list_qos()
+        body = self.volume_qos_client.list_qos()['qos_specs']
         self.assertIn(self.created_qos, body)
 
     @test.idempotent_id('ed00fd85-4494-45f2-8ceb-9e2048919aed')
     def test_set_unset_qos_key(self):
         """Test the addition of a specs key to qos-specs"""
         args = {'iops_bytes': '500'}
-        body = self.volume_qos_client.set_qos_key(self.created_qos['id'],
-                                                  iops_bytes='500')
+        body = self.volume_qos_client.set_qos_key(
+            self.created_qos['id'],
+            iops_bytes='500')['qos_specs']
         self.assertEqual(args, body)
-        body = self.volume_qos_client.show_qos(self.created_qos['id'])
+        body = self.volume_qos_client.show_qos(
+            self.created_qos['id'])['qos_specs']
         self.assertEqual(args['iops_bytes'], body['specs']['iops_bytes'])
 
         # test the deletion of a specs key from qos-specs
@@ -125,7 +128,8 @@
         operation = 'qos-key-unset'
         self.volume_qos_client.wait_for_qos_operations(self.created_qos['id'],
                                                        operation, keys)
-        body = self.volume_qos_client.show_qos(self.created_qos['id'])
+        body = self.volume_qos_client.show_qos(
+            self.created_qos['id'])['qos_specs']
         self.assertNotIn(keys[0], body['specs'])
 
     @test.idempotent_id('1dd93c76-6420-485d-a771-874044c416ac')
diff --git a/tempest/api/volume/test_snapshot_metadata.py b/tempest/api/volume/test_snapshot_metadata.py
index 641317a..ce6ba90 100644
--- a/tempest/api/volume/test_snapshot_metadata.py
+++ b/tempest/api/volume/test_snapshot_metadata.py
@@ -48,16 +48,18 @@
                     "key3": "value3"}
         expected = {"key2": "value2",
                     "key3": "value3"}
-        body = self.client.create_snapshot_metadata(self.snapshot_id,
-                                                    metadata)
+        body = self.client.create_snapshot_metadata(
+            self.snapshot_id, metadata)['metadata']
         # Get the metadata of the snapshot
-        body = self.client.show_snapshot_metadata(self.snapshot_id)
+        body = self.client.show_snapshot_metadata(
+            self.snapshot_id)['metadata']
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
 
         # Delete one item metadata of the snapshot
         self.client.delete_snapshot_metadata_item(
             self.snapshot_id, "key1")
-        body = self.client.show_snapshot_metadata(self.snapshot_id)
+        body = self.client.show_snapshot_metadata(
+            self.snapshot_id)['metadata']
         self.assertThat(body.items(), matchers.ContainsAll(expected.items()))
         self.assertNotIn("key1", body)
 
@@ -70,17 +72,19 @@
         update = {"key3": "value3_update",
                   "key4": "value4"}
         # Create metadata for the snapshot
-        body = self.client.create_snapshot_metadata(self.snapshot_id,
-                                                    metadata)
+        body = self.client.create_snapshot_metadata(
+            self.snapshot_id, metadata)['metadata']
         # Get the metadata of the snapshot
-        body = self.client.show_snapshot_metadata(self.snapshot_id)
+        body = self.client.show_snapshot_metadata(
+            self.snapshot_id)['metadata']
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
 
         # Update metadata item
         body = self.client.update_snapshot_metadata(
-            self.snapshot_id, update)
+            self.snapshot_id, update)['metadata']
         # Get the metadata of the snapshot
-        body = self.client.show_snapshot_metadata(self.snapshot_id)
+        body = self.client.show_snapshot_metadata(
+            self.snapshot_id)['metadata']
         self.assertEqual(update, body)
 
     @test.idempotent_id('e8ff85c5-8f97-477f-806a-3ac364a949ed')
@@ -94,16 +98,18 @@
                   "key2": "value2",
                   "key3": "value3_update"}
         # Create metadata for the snapshot
-        body = self.client.create_snapshot_metadata(self.snapshot_id,
-                                                    metadata)
+        body = self.client.create_snapshot_metadata(
+            self.snapshot_id, metadata)['metadata']
         # Get the metadata of the snapshot
-        body = self.client.show_snapshot_metadata(self.snapshot_id)
+        body = self.client.show_snapshot_metadata(
+            self.snapshot_id)['metadata']
         self.assertThat(body.items(), matchers.ContainsAll(metadata.items()))
         # Update metadata item
         body = self.client.update_snapshot_metadata_item(
-            self.snapshot_id, "key3", update_item)
+            self.snapshot_id, "key3", update_item)['meta']
         # Get the metadata of the snapshot
-        body = self.client.show_snapshot_metadata(self.snapshot_id)
+        body = self.client.show_snapshot_metadata(
+            self.snapshot_id)['metadata']
         self.assertThat(body.items(), matchers.ContainsAll(expect.items()))
 
 
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index 067c0c1..58c5ba9 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -53,7 +53,8 @@
     def resource_cleanup(cls):
         # Delete the test instance
         cls.servers_client.delete_server(cls.server['id'])
-        cls.servers_client.wait_for_server_termination(cls.server['id'])
+        waiters.wait_for_server_termination(cls.servers_client,
+                                            cls.server['id'])
 
         super(VolumesV2ActionsTest, cls).resource_cleanup()
 
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index 5203444..48f40f0 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -181,8 +181,8 @@
     def test_attach_volumes_with_nonexistent_volume_id(self):
         srv_name = data_utils.rand_name('Instance')
         server = self.create_server(srv_name)
-        self.addCleanup(self.servers_client.wait_for_server_termination,
-                        server['id'])
+        self.addCleanup(waiters.wait_for_server_termination,
+                        self.servers_client, server['id'])
         self.addCleanup(self.servers_client.delete_server, server['id'])
         waiters.wait_for_server_status(self.servers_client, server['id'],
                                        'ACTIVE')
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 1df1896..058e220 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -49,12 +49,11 @@
         and validates result.
         """
         if with_detail:
-            fetched_snap_list = \
-                self.snapshots_client.\
-                list_snapshots(detail=True, params=params)
+            fetched_snap_list = self.snapshots_client.list_snapshots(
+                detail=True, params=params)['snapshots']
         else:
-            fetched_snap_list = \
-                self.snapshots_client.list_snapshots(params=params)
+            fetched_snap_list = self.snapshots_client.list_snapshots(
+                params=params)['snapshots']
 
         # Validating params of fetched snapshots
         for snap in fetched_snap_list:
@@ -75,7 +74,8 @@
                                        'ACTIVE')
         mountpoint = '/dev/%s' % CONF.compute.volume_device_name
         self.servers_client.attach_volume(
-            server['id'], self.volume_origin['id'], mountpoint)
+            server['id'], volumeId=self.volume_origin['id'],
+            device=mountpoint)
         self.volumes_client.wait_for_volume_status(self.volume_origin['id'],
                                                    'in-use')
         self.addCleanup(self.volumes_client.wait_for_volume_status,
@@ -98,14 +98,15 @@
         snapshot = self.create_snapshot(self.volume_origin['id'], **params)
 
         # Get the snap and check for some of its details
-        snap_get = self.snapshots_client.show_snapshot(snapshot['id'])
+        snap_get = self.snapshots_client.show_snapshot(
+            snapshot['id'])['snapshot']
         self.assertEqual(self.volume_origin['id'],
                          snap_get['volume_id'],
                          "Referred volume origin mismatch")
 
         # Compare also with the output from the list action
         tracking_data = (snapshot['id'], snapshot[self.name_field])
-        snaps_list = self.snapshots_client.list_snapshots()
+        snaps_list = self.snapshots_client.list_snapshots()['snapshots']
         snaps_data = [(f['id'], f[self.name_field]) for f in snaps_list]
         self.assertIn(tracking_data, snaps_data)
 
@@ -114,14 +115,14 @@
         new_desc = 'This is the new description of snapshot.'
         params = {self.name_field: new_s_name,
                   self.descrip_field: new_desc}
-        update_snapshot = \
-            self.snapshots_client.update_snapshot(snapshot['id'], **params)
+        update_snapshot = self.snapshots_client.update_snapshot(
+            snapshot['id'], **params)['snapshot']
         # Assert response body for update_snapshot method
         self.assertEqual(new_s_name, update_snapshot[self.name_field])
         self.assertEqual(new_desc, update_snapshot[self.descrip_field])
         # Assert response body for show_snapshot method
-        updated_snapshot = \
-            self.snapshots_client.show_snapshot(snapshot['id'])
+        updated_snapshot = self.snapshots_client.show_snapshot(
+            snapshot['id'])['snapshot']
         self.assertEqual(new_s_name, updated_snapshot[self.name_field])
         self.assertEqual(new_desc, updated_snapshot[self.descrip_field])
 
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index d1eb694..ddc6822 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -118,7 +118,7 @@
         else:
             remaining = None
 
-        # Mark that we are not comming from a next link
+        # Mark that the current iteration is not from a 'next' link
         next = None
 
         while True:
@@ -149,8 +149,8 @@
                     # We no longer expect it
                     remaining.remove(element_id)
 
-            # If we come from a next link check that absolute url is the same
-            # as the one used for this request
+            # If the current iteration is from a 'next' link, check that the
+            # absolute url is the same as the one used for this request
             if next:
                 self.assertEqual(next, response.response['content-location'])
 
diff --git a/tempest/clients.py b/tempest/clients.py
index e32d401..7cb4347 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -243,8 +243,8 @@
         # with identity v2
         if CONF.identity_feature_enabled.api_v2 and \
                 CONF.identity.auth_version == 'v2':
-            # EC2 and S3 clients, if used, will check onfigured AWS credentials
-            # and generate new ones if needed
+            # EC2 and S3 clients, if used, will check configured AWS
+            # credentials and generate new ones if needed
             self.ec2api_client = botoclients.APIClientEC2(self.identity_client)
             self.s3_client = botoclients.ObjectClientS3(self.identity_client)
 
@@ -344,15 +344,25 @@
     def _set_identity_clients(self):
         params = {
             'service': CONF.identity.catalog_type,
-            'region': CONF.identity.region,
-            'endpoint_type': 'adminURL'
+            'region': CONF.identity.region
         }
         params.update(self.default_params_with_timeout_values)
-
+        params_v2_admin = params.copy()
+        params_v2_admin['endpoint_type'] = CONF.identity.v2_admin_endpoint_type
+        # Client uses admin endpoint type of Keystone API v2
         self.identity_client = IdentityClient(self.auth_provider,
-                                              **params)
+                                              **params_v2_admin)
+        params_v2_public = params.copy()
+        params_v2_public['endpoint_type'] = (
+            CONF.identity.v2_public_endpoint_type)
+        # Client uses public endpoint type of Keystone API v2
+        self.identity_public_client = IdentityClient(self.auth_provider,
+                                                     **params_v2_public)
+        params_v3 = params.copy()
+        params_v3['endpoint_type'] = CONF.identity.v3_endpoint_type
+        # Client uses the endpoint type of Keystone API v3
         self.identity_v3_client = IdentityV3Client(self.auth_provider,
-                                                   **params)
+                                                   **params_v3)
         self.endpoints_client = EndPointClient(self.auth_provider,
                                                **params)
         self.service_client = ServiceClient(self.auth_provider, **params)
diff --git a/tempest/cmd/account_generator.py b/tempest/cmd/account_generator.py
index 0360146..e05cab3 100755
--- a/tempest/cmd/account_generator.py
+++ b/tempest/cmd/account_generator.py
@@ -160,7 +160,7 @@
                 raise exceptions.TempestException(
                     "Role: %s - doesn't exist" % r
                 )
-    existing = [x['name'] for x in identity_admin.list_tenants()]
+    existing = [x['name'] for x in identity_admin.list_tenants()['tenants']]
     for tenant in resources['tenants']:
         if tenant not in existing:
             identity_admin.create_tenant(tenant)
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index 2e96c81..3550842 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -245,7 +245,7 @@
 
     def list(self):
         client = self.client
-        keypairs = client.list_keypairs()
+        keypairs = client.list_keypairs()['keypairs']
         LOG.debug("List count, %s Keypairs" % len(keypairs))
         return keypairs
 
@@ -889,7 +889,7 @@
 
     def list(self):
         client = self.client
-        tenants = client.list_tenants()
+        tenants = client.list_tenants()['tenants']
         if not self.is_save_state:
             tenants = [tenant for tenant in tenants if (tenant['id']
                        not in self.saved_state_json['tenants'].keys()
diff --git a/tempest/cmd/init.py b/tempest/cmd/init.py
index c13fbe5..289b978 100644
--- a/tempest/cmd/init.py
+++ b/tempest/cmd/init.py
@@ -15,6 +15,7 @@
 import os
 import shutil
 import subprocess
+import sys
 
 from cliff import command
 from oslo_log import log as logging
@@ -33,13 +34,44 @@
 """
 
 
+def get_tempest_default_config_dir():
+    """Returns the correct default config dir, supporting both the case of
+    tempest installed in a virtualenv and the case where it is not.
+    Cases considered:
+    - no virtual env, python2: real_prefix and base_prefix not set
+    - no virtual env, python3: real_prefix not set, base_prefix set and
+      identical to prefix
+    - virtualenv, python2: real_prefix and prefix are set and different
+    - virtualenv, python3: real_prefix not set, base_prefix and prefix are
+      set and identical
+    - pyvenv, any python version: real_prefix not set, base_prefix and prefix
+      are set and different
+
+    :return: default config dir
+    """
+    real_prefix = getattr(sys, 'real_prefix', None)
+    base_prefix = getattr(sys, 'base_prefix', None)
+    prefix = sys.prefix
+    if real_prefix is None and base_prefix is None:
+        # Not running in a virtual environment of any kind
+        return '/etc/tempest'
+    elif (real_prefix is None and base_prefix is not None and
+            base_prefix == prefix):
+        # Probably not running in a virtual environment
+        # NOTE(andreaf) we cannot distinguish this case from the case of
+        # a virtual environment created with virtualenv, and running python3.
+        return '/etc/tempest'
+    else:
+        return os.path.join(sys.prefix, 'etc/tempest')
+
+
 class TempestInit(command.Command):
     """Setup a local working environment for running tempest"""
 
     def get_parser(self, prog_name):
         parser = super(TempestInit, self).get_parser(prog_name)
         parser.add_argument('dir', nargs='?', default=os.getcwd())
-        parser.add_argument('--config-dir', '-c', default='/etc/tempest')
+        parser.add_argument('--config-dir', '-c', default=None)
         return parser
 
     def generate_testr_conf(self, local_path):
@@ -67,6 +99,11 @@
     def copy_config(self, etc_dir, config_dir):
         shutil.copytree(config_dir, etc_dir)
 
+    def generate_sample_config(self, local_dir):
+        subprocess.call(['oslo-config-generator', '--config-file',
+                         'tools/config/config-generator.tempest.conf'],
+                        cwd=local_dir)
+
     def create_working_dir(self, local_dir, config_dir):
         # Create local dir if missing
         if not os.path.isdir(local_dir):
@@ -87,6 +124,8 @@
             os.mkdir(log_dir)
         # Create and copy local etc dir
         self.copy_config(etc_dir, config_dir)
+        # Generate the sample config file
+        self.generate_sample_config(local_dir)
         # Update local confs to reflect local paths
         self.update_local_conf(config_path, lock_dir, log_dir)
         # Generate a testr conf file
@@ -96,4 +135,5 @@
             subprocess.call(['testr', 'init'], cwd=local_dir)
 
     def take_action(self, parsed_args):
-        self.create_working_dir(parsed_args.dir, parsed_args.config_dir)
+        config_dir = parsed_args.config_dir or get_tempest_default_config_dir()
+        self.create_working_dir(parsed_args.dir, config_dir)
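
For illustration only, a condensed sketch of the interpreter-prefix checks
that the get_tempest_default_config_dir() helper above relies on, under the
assumptions listed in its docstring; the paths are the defaults used there:

    import os
    import sys

    def default_config_dir():
        # virtualenv sets sys.real_prefix; pyvenv leaves it unset but makes
        # sys.base_prefix differ from sys.prefix; otherwise assume a system
        # install (or a python3 virtualenv, which is indistinguishable).
        real_prefix = getattr(sys, 'real_prefix', None)
        base_prefix = getattr(sys, 'base_prefix', sys.prefix)
        if real_prefix is None and base_prefix == sys.prefix:
            return '/etc/tempest'
        return os.path.join(sys.prefix, 'etc/tempest')
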
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index 9402154..71aacbd 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -119,6 +119,7 @@
 from tempest_lib import exceptions as lib_exc
 import yaml
 
+from tempest.common import waiters
 from tempest import config
 from tempest.services.compute.json import flavors_client
 from tempest.services.compute.json import floating_ips_client
@@ -273,7 +274,7 @@
     Don't create the tenants if they already exist.
     """
     admin = keystone_admin()
-    body = admin.identity.list_tenants()
+    body = admin.identity.list_tenants()['tenants']
     existing = [x['name'] for x in body]
     for tenant in tenants:
         if tenant not in existing:
@@ -503,7 +504,7 @@
     def check_telemetry(self):
         """Check that ceilometer provides a sane sample.
 
-        Confirm that there are more than one sample and that they have the
+        Confirm that there is more than one sample and that they have the
         expected metadata.
 
         If in check mode confirm that the oldest sample available is from
@@ -680,7 +681,7 @@
 
         response = _get_image_by_name(client, image['name'])
         if not response:
-            LOG.info("Image '%s' does not exists" % image['name'])
+            LOG.info("Image '%s' does not exist" % image['name'])
             continue
         client.images.delete_image(response['id'])
 
@@ -729,7 +730,7 @@
         # only create a network if the name isn't here
         body = client.networks.list_networks()
         if any(item['name'] == network['name'] for item in body['networks']):
-            LOG.warning("Dupplicated network name: %s" % network['name'])
+            LOG.warning("Duplicated network name: %s" % network['name'])
             continue
 
         client.networks.create_network(name=network['name'])
@@ -781,7 +782,7 @@
         # only create a router if the name isn't here
         body = client.networks.list_routers()
         if any(item['name'] == router['name'] for item in body['routers']):
-            LOG.warning("Dupplicated router name: %s" % router['name'])
+            LOG.warning("Duplicated router name: %s" % router['name'])
             continue
 
         client.networks.create_router(router['name'])
@@ -813,7 +814,7 @@
             # connect routers to their subnets
             client.networks.add_router_interface_with_subnet_id(router_id,
                                                                 subnet_id)
-        # connect routers to exteral network if set to "gateway"
+        # connect routers to external network if set to "gateway"
         if router['gateway']:
             if CONF.network.public_network_id:
                 ext_net = CONF.network.public_network_id
@@ -871,7 +872,7 @@
             server['name'], image_id, flavor_id, **kwargs)
         server_id = body['id']
         client.servers.wait_for_server_status(server_id, 'ACTIVE')
-        # create to security group(s) after server spawning
+        # create security group(s) after server spawning
         for secgroup in server['secgroups']:
             client.servers.add_security_group(server_id, secgroup)
         if CONF.compute.use_floatingip_for_ssh:
@@ -896,8 +897,8 @@
 
         # TODO(EmilienM): disassociate floating IP from server and release it.
         client.servers.delete_server(response['id'])
-        client.servers.wait_for_server_termination(response['id'],
-                                                   ignore_error=True)
+        waiters.wait_for_server_termination(client.servers, response['id'],
+                                            ignore_error=True)
 
 
 def create_secgroups(secgroups):
@@ -921,7 +922,8 @@
         for rule in secgroup['rules']:
             ip_proto, from_port, to_port, cidr = rule.split()
             client.secrules.create_security_group_rule(
-                secgroup_id, ip_proto, from_port, to_port, cidr=cidr)
+                parent_group_id=secgroup_id, ip_protocol=ip_proto,
+                from_port=from_port, to_port=to_port, cidr=cidr)
 
 
 def destroy_secgroups(secgroups):
@@ -994,7 +996,7 @@
 def create_resources():
     LOG.info("Creating Resources")
     # first create keystone level resources, and we need to be admin
-    # for those.
+    # for this.
     create_tenants(RES['tenants'])
     create_users(RES['users'])
     collect_users(RES['users'])
@@ -1014,7 +1016,7 @@
     create_volumes(RES['volumes'])
 
     # Only attempt attaching the volumes if servers are defined in the
-    # resourcefile
+    # resource file
     if 'servers' in RES:
         create_servers(RES['servers'])
         attach_volumes(RES['volumes'])
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index 06e3493..05ea393 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -26,7 +26,7 @@
 LOG = logging.getLogger(__name__)
 
 
-def create_test_server(clients, validatable, validation_resources=None,
+def create_test_server(clients, validatable=False, validation_resources=None,
                        tenant_network=None, **kwargs):
     """Common wrapper utility returning a test server.
 
diff --git a/tempest/common/glance_http.py b/tempest/common/glance_http.py
index 4be3da1..868a3e9 100644
--- a/tempest/common/glance_http.py
+++ b/tempest/common/glance_http.py
@@ -330,7 +330,7 @@
             try:
                 self.context.load_verify_locations(self.ca_certs)
             except Exception as e:
-                msg = 'Unable to load CA from "%s"' % (self.ca_certs, e)
+                msg = 'Unable to load CA from "%s" %s' % (self.ca_certs, e)
                 raise exc.SSLConfigurationError(msg)
         else:
             self.context.set_default_verify_paths()
diff --git a/tempest/common/isolated_creds.py b/tempest/common/isolated_creds.py
index 7888811..6dca3a3 100644
--- a/tempest/common/isolated_creds.py
+++ b/tempest/common/isolated_creds.py
@@ -45,6 +45,8 @@
     def create_user(self, username, password, project, email):
         user = self.identity_client.create_user(
             username, password, project['id'], email)
+        if 'user' in user:
+            user = user['user']
         return user
 
     @abc.abstractmethod
@@ -113,7 +115,7 @@
             # Domain names must be unique, in any case a list is returned,
             # selecting the first (and only) element
             self.creds_domain = self.identity_client.list_domains(
-                params={'name': domain_name})[0]
+                params={'name': domain_name})['domains'][0]
         except lib_exc.NotFound:
             # TODO(andrea) we could probably create the domain on the fly
             msg = "Configured domain %s could not be found" % domain_name
@@ -122,7 +124,7 @@
     def create_project(self, name, description):
         project = self.identity_client.create_project(
             name=name, description=description,
-            domain_id=self.creds_domain['id'])
+            domain_id=self.creds_domain['id'])['project']
         return project
 
     def get_credentials(self, user, project, password):
@@ -136,6 +138,10 @@
     def delete_project(self, project_id):
         self.identity_client.delete_project(project_id)
 
+    def _list_roles(self):
+        roles = self.identity_client.list_roles()['roles']
+        return roles
+
 
 def get_creds_client(identity_client, project_domain_name=None):
     if isinstance(identity_client, v2_identity.IdentityClient):
@@ -206,6 +212,8 @@
         email = data_utils.rand_name(root) + suffix + "@example.com"
         user = self.creds_client.create_user(
             username, user_password, project, email)
+        if 'user' in user:
+            user = user['user']
         role_assigned = False
         if admin:
             self.creds_client.assign_user_role(user, project,
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 93c2c10..a567c6a 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -106,7 +106,8 @@
 
     def get_nic_name(self, address):
         cmd = "ip -o addr | awk '/%s/ {print $2}'" % address
-        return self.exec_command(cmd)
+        nic = self.exec_command(cmd)
+        return nic.strip().strip(":").lower()
 
     def get_ip_list(self):
         cmd = "ip address"
@@ -144,7 +145,6 @@
         """Renews DHCP lease via udhcpc client. """
         file_path = '/var/run/udhcpc.'
         nic_name = self.get_nic_name(fixed_ip)
-        nic_name = nic_name.strip().lower()
         pid = self.exec_command('cat {path}{nic}.pid'.
                                 format(path=file_path, nic=nic_name))
         pid = pid.strip()
diff --git a/tempest/common/validation_resources.py b/tempest/common/validation_resources.py
index 15c452f..d018aed 100644
--- a/tempest/common/validation_resources.py
+++ b/tempest/common/validation_resources.py
@@ -23,17 +23,19 @@
 
 
 def create_ssh_security_group(os, add_rule=False):
-    security_group_client = os.security_groups_client
+    security_groups_client = os.security_groups_client
+    security_group_rules_client = os.security_group_rules_client
     sg_name = data_utils.rand_name('securitygroup-')
     sg_description = data_utils.rand_name('description-')
-    security_group = \
-        security_group_client.create_security_group(name=sg_name,
-                                                    description=sg_description)
+    security_group = security_groups_client.create_security_group(
+        name=sg_name, description=sg_description)
     if add_rule:
-        security_group_client.create_security_group_rule(security_group['id'],
-                                                         'tcp', 22, 22)
-        security_group_client.create_security_group_rule(security_group['id'],
-                                                         'icmp', -1, -1)
+        security_group_rules_client.create_security_group_rule(
+            parent_group_id=security_group['id'], ip_protocol='tcp',
+            from_port=22, to_port=22)
+        security_group_rules_client.create_security_group_rule(
+            parent_group_id=security_group['id'], ip_protocol='icmp',
+            from_port=-1, to_port=-1)
     LOG.debug("SSH Validation resource security group with tcp and icmp "
               "rules %s created"
               % sg_name)
@@ -46,8 +48,8 @@
     if validation_resources:
         if validation_resources['keypair']:
             keypair_name = data_utils.rand_name('keypair')
-            validation_data['keypair'] = \
-                os.keypairs_client.create_keypair(name=keypair_name)
+            validation_data.update(os.keypairs_client.create_keypair(
+                name=keypair_name))
             LOG.debug("Validation resource key %s created" % keypair_name)
         add_rule = False
         if validation_resources['security_group']:
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 85a03cf..7aa4fc2 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -15,6 +15,7 @@
 
 from oslo_log import log as logging
 from tempest_lib.common.utils import misc as misc_utils
+from tempest_lib import exceptions as lib_exc
 
 from tempest import config
 from tempest import exceptions
@@ -96,6 +97,25 @@
         old_task_state = task_state
 
 
+def wait_for_server_termination(client, server_id, ignore_error=False):
+    """Waits for server to reach termination."""
+    start_time = int(time.time())
+    while True:
+        try:
+            body = client.show_server(server_id)
+        except lib_exc.NotFound:
+            return
+
+        server_status = body['status']
+        if server_status == 'ERROR' and not ignore_error:
+            raise exceptions.BuildErrorException(server_id=server_id)
+
+        if int(time.time()) - start_time >= client.build_timeout:
+            raise exceptions.TimeoutException
+
+        time.sleep(client.build_interval)
+
+
 def wait_for_image_status(client, image_id, status):
     """Waits for an image to reach a given status.
 
@@ -144,6 +164,8 @@
         volume_status = body['status']
         if volume_status == 'error':
             raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
+        if volume_status == 'error_restoring':
+            raise exceptions.VolumeRestoreErrorException(volume_id=volume_id)
 
         if int(time.time()) - start >= client.build_timeout:
             message = ('Volume %s failed to reach %s status (current %s) '
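
For context, a small sketch (not part of the patch) of how call sites migrate
to the module-level waiter added above, matching the changes made earlier in
this diff to the volume tests and javelin:

    from tempest.common import waiters

    def delete_server_and_wait(servers_client, server_id):
        # Replaces the old servers_client.wait_for_server_termination() call
        # with the new waiters.wait_for_server_termination() helper.
        servers_client.delete_server(server_id)
        waiters.wait_for_server_termination(servers_client, server_id,
                                            ignore_error=True)
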
diff --git a/tempest/config.py b/tempest/config.py
index ab503e3..0262d1b 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -112,11 +112,30 @@
                     "services' region name unless they are set explicitly. "
                     "If no such region is found in the service catalog, the "
                     "first found one is used."),
-    cfg.StrOpt('endpoint_type',
+    cfg.StrOpt('v2_admin_endpoint_type',
+               default='adminURL',
+               choices=['public', 'admin', 'internal',
+                        'publicURL', 'adminURL', 'internalURL'],
+               help="The admin endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v2",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
+    cfg.StrOpt('v2_public_endpoint_type',
                default='publicURL',
                choices=['public', 'admin', 'internal',
                         'publicURL', 'adminURL', 'internalURL'],
-               help="The endpoint type to use for the identity service."),
+               help="The public endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v2",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
+    cfg.StrOpt('v3_endpoint_type',
+               default='adminURL',
+               choices=['public', 'admin', 'internal',
+                        'publicURL', 'adminURL', 'internalURL'],
+               help="The endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v3",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
     cfg.StrOpt('username',
                help="Username to use for Nova API requests."),
     cfg.StrOpt('tenant_name',
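
A hedged sketch of how the deprecated-option aliasing above behaves with
oslo.config; the option and group names mirror the diff, everything else is
illustrative:

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('v3_endpoint_type',
                    default='adminURL',
                    deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
                                                       group='identity')])],
        group='identity')
    # A deployment that still sets only [identity]/endpoint_type will have that
    # legacy value returned by CONF.identity.v3_endpoint_type (with a
    # deprecation warning) until its configuration is migrated.
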
@@ -1224,7 +1243,10 @@
     The purpose of this is to allow tools like the Oslo sample config file
     generator to discover the options exposed to users.
     """
-    return [(getattr(g, 'name', None), o) for g, o in _opts]
+    ext_plugins = plugins.TempestTestPluginManager()
+    opt_list = [(getattr(g, 'name', None), o) for g, o in _opts]
+    opt_list.extend(ext_plugins.get_plugin_options_list())
+    return opt_list
 
 
 # this should never be called outside of this class
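
A short sketch of how a consumer such as oslo-config-generator effectively uses
the extended list_opts(): it walks (group, options) pairs, which now also
include options contributed by external test plugins. The import path mirrors
the file above; the loop itself is only illustrative:

    from tempest import config

    for group_name, opts in config.list_opts():
        for opt in opts:
            print('%s %s' % (group_name, opt.name))
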
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index 15617c6..15482ab 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -92,6 +92,10 @@
     message = "Volume %(volume_id)s failed to build and is in ERROR status"
 
 
+class VolumeRestoreErrorException(TempestException):
+    message = "Volume %(volume_id)s failed to restore and is in ERROR status"
+
+
 class SnapshotBuildErrorException(TempestException):
     message = "Snapshot %(snapshot_id)s failed to build and is in ERROR status"
 
diff --git a/tempest/openstack/common/__init__.py b/tempest/openstack/common/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/openstack/common/__init__.py
+++ /dev/null
diff --git a/tempest/openstack/common/_i18n.py b/tempest/openstack/common/_i18n.py
deleted file mode 100644
index 5bbc77d..0000000
--- a/tempest/openstack/common/_i18n.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""oslo.i18n integration module.
-
-See http://docs.openstack.org/developer/oslo.i18n/usage.html
-
-"""
-
-try:
-    import oslo_i18n
-
-    # NOTE(dhellmann): This reference to o-s-l-o will be replaced by the
-    # application name when this module is synced into the separate
-    # repository. It is OK to have more than one translation function
-    # using the same domain, since there will still only be one message
-    # catalog.
-    _translators = oslo_i18n.TranslatorFactory(domain='tempest')
-
-    # The primary translation function using the well-known name "_"
-    _ = _translators.primary
-
-    # Translators for log levels.
-    #
-    # The abbreviated names are meant to reflect the usual use of a short
-    # name like '_'. The "L" is for "log" and the other letter comes from
-    # the level.
-    _LI = _translators.log_info
-    _LW = _translators.log_warning
-    _LE = _translators.log_error
-    _LC = _translators.log_critical
-except ImportError:
-    # NOTE(dims): Support for cases where a project wants to use
-    # code from oslo-incubator, but is not ready to be internationalized
-    # (like tempest)
-    _ = _LI = _LW = _LE = _LC = lambda x: x
diff --git a/tempest/openstack/common/versionutils.py b/tempest/openstack/common/versionutils.py
deleted file mode 100644
index 12d2e14..0000000
--- a/tempest/openstack/common/versionutils.py
+++ /dev/null
@@ -1,263 +0,0 @@
-# Copyright (c) 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-Helpers for comparing version strings.
-"""
-
-import copy
-import functools
-import inspect
-import logging
-
-from oslo_config import cfg
-import pkg_resources
-import six
-
-from tempest.openstack.common._i18n import _
-from oslo_log import log as logging
-
-
-LOG = logging.getLogger(__name__)
-CONF = cfg.CONF
-
-
-deprecated_opts = [
-    cfg.BoolOpt('fatal_deprecations',
-                default=False,
-                help='Enables or disables fatal status of deprecations.'),
-]
-
-
-def list_opts():
-    """Entry point for oslo.config-generator.
-    """
-    return [(None, copy.deepcopy(deprecated_opts))]
-
-
-class deprecated(object):
-    """A decorator to mark callables as deprecated.
-
-    This decorator logs a deprecation message when the callable it decorates is
-    used. The message will include the release where the callable was
-    deprecated, the release where it may be removed and possibly an optional
-    replacement.
-
-    Examples:
-
-    1. Specifying the required deprecated release
-
-    >>> @deprecated(as_of=deprecated.ICEHOUSE)
-    ... def a(): pass
-
-    2. Specifying a replacement:
-
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, in_favor_of='f()')
-    ... def b(): pass
-
-    3. Specifying the release where the functionality may be removed:
-
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, remove_in=+1)
-    ... def c(): pass
-
-    4. Specifying the deprecated functionality will not be removed:
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, remove_in=0)
-    ... def d(): pass
-
-    5. Specifying a replacement, deprecated functionality will not be removed:
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, in_favor_of='f()', remove_in=0)
-    ... def e(): pass
-
-    """
-
-    # NOTE(morganfainberg): Bexar is used for unit test purposes, it is
-    # expected we maintain a gap between Bexar and Folsom in this list.
-    BEXAR = 'B'
-    FOLSOM = 'F'
-    GRIZZLY = 'G'
-    HAVANA = 'H'
-    ICEHOUSE = 'I'
-    JUNO = 'J'
-    KILO = 'K'
-    LIBERTY = 'L'
-
-    _RELEASES = {
-        # NOTE(morganfainberg): Bexar is used for unit test purposes, it is
-        # expected we maintain a gap between Bexar and Folsom in this list.
-        'B': 'Bexar',
-        'F': 'Folsom',
-        'G': 'Grizzly',
-        'H': 'Havana',
-        'I': 'Icehouse',
-        'J': 'Juno',
-        'K': 'Kilo',
-        'L': 'Liberty',
-    }
-
-    _deprecated_msg_with_alternative = _(
-        '%(what)s is deprecated as of %(as_of)s in favor of '
-        '%(in_favor_of)s and may be removed in %(remove_in)s.')
-
-    _deprecated_msg_no_alternative = _(
-        '%(what)s is deprecated as of %(as_of)s and may be '
-        'removed in %(remove_in)s. It will not be superseded.')
-
-    _deprecated_msg_with_alternative_no_removal = _(
-        '%(what)s is deprecated as of %(as_of)s in favor of %(in_favor_of)s.')
-
-    _deprecated_msg_with_no_alternative_no_removal = _(
-        '%(what)s is deprecated as of %(as_of)s. It will not be superseded.')
-
-    def __init__(self, as_of, in_favor_of=None, remove_in=2, what=None):
-        """Initialize decorator
-
-        :param as_of: the release deprecating the callable. Constants
-            are define in this class for convenience.
-        :param in_favor_of: the replacement for the callable (optional)
-        :param remove_in: an integer specifying how many releases to wait
-            before removing (default: 2)
-        :param what: name of the thing being deprecated (default: the
-            callable's name)
-
-        """
-        self.as_of = as_of
-        self.in_favor_of = in_favor_of
-        self.remove_in = remove_in
-        self.what = what
-
-    def __call__(self, func_or_cls):
-        if not self.what:
-            self.what = func_or_cls.__name__ + '()'
-        msg, details = self._build_message()
-
-        if inspect.isfunction(func_or_cls):
-
-            @six.wraps(func_or_cls)
-            def wrapped(*args, **kwargs):
-                report_deprecated_feature(LOG, msg, details)
-                return func_or_cls(*args, **kwargs)
-            return wrapped
-        elif inspect.isclass(func_or_cls):
-            orig_init = func_or_cls.__init__
-
-            # TODO(tsufiev): change `functools` module to `six` as
-            # soon as six 1.7.4 (with fix for passing `assigned`
-            # argument to underlying `functools.wraps`) is released
-            # and added to the oslo-incubator requrements
-            @functools.wraps(orig_init, assigned=('__name__', '__doc__'))
-            def new_init(self, *args, **kwargs):
-                report_deprecated_feature(LOG, msg, details)
-                orig_init(self, *args, **kwargs)
-            func_or_cls.__init__ = new_init
-            return func_or_cls
-        else:
-            raise TypeError('deprecated can be used only with functions or '
-                            'classes')
-
-    def _get_safe_to_remove_release(self, release):
-        # TODO(dstanek): this method will have to be reimplemented once
-        #    when we get to the X release because once we get to the Y
-        #    release, what is Y+2?
-        new_release = chr(ord(release) + self.remove_in)
-        if new_release in self._RELEASES:
-            return self._RELEASES[new_release]
-        else:
-            return new_release
-
-    def _build_message(self):
-        details = dict(what=self.what,
-                       as_of=self._RELEASES[self.as_of],
-                       remove_in=self._get_safe_to_remove_release(self.as_of))
-
-        if self.in_favor_of:
-            details['in_favor_of'] = self.in_favor_of
-            if self.remove_in > 0:
-                msg = self._deprecated_msg_with_alternative
-            else:
-                # There are no plans to remove this function, but it is
-                # now deprecated.
-                msg = self._deprecated_msg_with_alternative_no_removal
-        else:
-            if self.remove_in > 0:
-                msg = self._deprecated_msg_no_alternative
-            else:
-                # There are no plans to remove this function, but it is
-                # now deprecated.
-                msg = self._deprecated_msg_with_no_alternative_no_removal
-        return msg, details
-
-
-def is_compatible(requested_version, current_version, same_major=True):
-    """Determine whether `requested_version` is satisfied by
-    `current_version`; in other words, `current_version` is >=
-    `requested_version`.
-
-    :param requested_version: version to check for compatibility
-    :param current_version: version to check against
-    :param same_major: if True, the major version must be identical between
-        `requested_version` and `current_version`. This is used when a
-        major-version difference indicates incompatibility between the two
-        versions. Since this is the common-case in practice, the default is
-        True.
-    :returns: True if compatible, False if not
-    """
-    requested_parts = pkg_resources.parse_version(requested_version)
-    current_parts = pkg_resources.parse_version(current_version)
-
-    if same_major and (requested_parts[0] != current_parts[0]):
-        return False
-
-    return current_parts >= requested_parts
-
-
-# Track the messages we have sent already. See
-# report_deprecated_feature().
-_deprecated_messages_sent = {}
-
-
-def report_deprecated_feature(logger, msg, *args, **kwargs):
-    """Call this function when a deprecated feature is used.
-
-    If the system is configured for fatal deprecations then the message
-    is logged at the 'critical' level and :class:`DeprecatedConfig` will
-    be raised.
-
-    Otherwise, the message will be logged (once) at the 'warn' level.
-
-    :raises: :class:`DeprecatedConfig` if the system is configured for
-             fatal deprecations.
-    """
-    stdmsg = _("Deprecated: %s") % msg
-    CONF.register_opts(deprecated_opts)
-    if CONF.fatal_deprecations:
-        logger.critical(stdmsg, *args, **kwargs)
-        raise DeprecatedConfig(msg=stdmsg)
-
-    # Using a list because a tuple with dict can't be stored in a set.
-    sent_args = _deprecated_messages_sent.setdefault(msg, list())
-
-    if args in sent_args:
-        # Already logged this message, so don't log it again.
-        return
-
-    sent_args.append(args)
-    logger.warn(stdmsg, *args, **kwargs)
-
-
-class DeprecatedConfig(Exception):
-    message = _("Fatal call to deprecated config: %(msg)s")
-
-    def __init__(self, msg):
-        super(Exception, self).__init__(self.message % dict(msg=msg))
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 869de2d..766042e 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -96,10 +96,11 @@
 
     def addCleanup_with_wait(self, waiter_callable, thing_id, thing_id_param,
                              cleanup_callable, cleanup_args=None,
-                             cleanup_kwargs=None, ignore_error=True):
+                             cleanup_kwargs=None, waiter_client=None):
         """Adds wait for async resource deletion at the end of cleanups
 
         @param waiter_callable: callable to wait for the resource to delete
+            using the given waiter_client, if one is specified.
         @param thing_id: the id of the resource to be cleaned-up
         @param thing_id_param: the name of the id param in the waiter
         @param cleanup_callable: method to load pass to self.addCleanup with
@@ -115,6 +116,8 @@
             'waiter_callable': waiter_callable,
             thing_id_param: thing_id
         }
+        if waiter_client:
+            wait_dict['client'] = waiter_client
         self.cleanup_waits.append(wait_dict)
 
     def _wait_for_cleanups(self):
@@ -142,7 +145,7 @@
         # We don't need to create a keypair by pubkey in scenario
         body = client.create_keypair(name=name)
         self.addCleanup(client.delete_keypair, name)
-        return body
+        return body['keypair']
 
     def create_server(self, name=None, image=None, flavor=None,
                       wait_on_boot=True, wait_on_delete=True,
@@ -172,13 +175,15 @@
         server = self.servers_client.create_server(name, image, flavor,
                                                    **create_kwargs)
         if wait_on_delete:
-            self.addCleanup(self.servers_client.wait_for_server_termination,
+            self.addCleanup(waiters.wait_for_server_termination,
+                            self.servers_client,
                             server['id'])
         self.addCleanup_with_wait(
-            waiter_callable=self.servers_client.wait_for_server_termination,
+            waiter_callable=waiters.wait_for_server_termination,
             thing_id=server['id'], thing_id_param='server_id',
             cleanup_callable=self.delete_wrapper,
-            cleanup_args=[self.servers_client.delete_server, server['id']])
+            cleanup_args=[self.servers_client.delete_server, server['id']],
+            waiter_client=self.servers_client)
         if wait_on_boot:
             waiters.wait_for_server_status(self.servers_client,
                                            server_id=server['id'],
@@ -233,14 +238,14 @@
         rulesets = [
             {
                 # ssh
-                'ip_proto': 'tcp',
+                'ip_protocol': 'tcp',
                 'from_port': 22,
                 'to_port': 22,
                 'cidr': '0.0.0.0/0',
             },
             {
                 # ping
-                'ip_proto': 'icmp',
+                'ip_protocol': 'icmp',
                 'from_port': -1,
                 'to_port': -1,
                 'cidr': '0.0.0.0/0',
@@ -248,8 +253,8 @@
         ]
         rules = list()
         for ruleset in rulesets:
-            sg_rule = _client_rules.create_security_group_rule(secgroup_id,
-                                                               **ruleset)
+            sg_rule = _client_rules.create_security_group_rule(
+                parent_group_id=secgroup_id, **ruleset)
             self.addCleanup(self.delete_wrapper,
                             _client_rules.delete_security_group_rule,
                             sg_rule['id'])
@@ -342,7 +347,7 @@
             'is_public': 'False',
         }
         params['properties'] = properties
-        image = self.image_client.create_image(**params)
+        image = self.image_client.create_image(**params)['image']
         self.addCleanup(self.image_client.delete_image, image['id'])
         self.assertEqual("queued", image['status'])
         self.image_client.update_image(image['id'], data=image_file)
@@ -402,7 +407,7 @@
         if name is None:
             name = data_utils.rand_name('scenario-snapshot')
         LOG.debug("Creating a snapshot image for server: %s", server['name'])
-        image = _images_client.create_image(server['id'], name)
+        image = _images_client.create_image(server['id'], name=name)
         image_id = image.response['location'].split('images/')[1]
         _image_client.wait_for_image_status(image_id, 'active')
         self.addCleanup_with_wait(
@@ -419,7 +424,7 @@
 
     def nova_volume_attach(self):
         volume = self.servers_client.attach_volume(
-            self.server['id'], self.volume['id'], '/dev/%s'
+            self.server['id'], volumeId=self.volume['id'], device='/dev/%s'
             % CONF.compute.volume_device_name)
         self.assertEqual(self.volume['id'], volume['id'])
         self.volumes_client.wait_for_volume_status(volume['id'], 'in-use')
@@ -663,14 +668,18 @@
     def _get_server_port_id_and_ip4(self, server, ip_addr=None):
         ports = self._list_ports(device_id=server['id'],
                                  fixed_ip=ip_addr)
-        self.assertEqual(len(ports), 1,
-                         "Unable to determine which port to target.")
         # it might happen here that this port has more than one ip address
         # as in the case of dual stack, when this port is created on 2 subnets
-        for ip46 in ports[0]['fixed_ips']:
-            ip = ip46['ip_address']
-            if netaddr.valid_ipv4(ip):
-                return ports[0]['id'], ip
+        port_map = [(p["id"], fxip["ip_address"])
+                    for p in ports
+                    for fxip in p["fixed_ips"]
+                    if netaddr.valid_ipv4(fxip["ip_address"])]
+
+        self.assertEqual(len(port_map), 1,
+                         "Found multiple IPv4 addresses: %s. "
+                         "Unable to determine which port to target."
+                         % port_map)
+        return port_map[0]
 
     def _get_network_by_name(self, network_name):
         net = self._list_networks(name=network_name)
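
A minimal sketch, not the actual _wait_for_cleanups body, of how the optional
'client' entry added above can be consumed when the waiter is a module-level
function taking the client as its first argument:

    def run_wait(wait_dict):
        params = dict(wait_dict)                # e.g. {'server_id': ...}
        waiter = params.pop('waiter_callable')  # e.g. waiters.wait_for_server_termination
        client = params.pop('client', None)
        if client is not None:
            waiter(client, **params)
        else:
            waiter(**params)
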
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index f5f4a61..fcde561 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -58,7 +58,7 @@
         self.aggregates_client.delete_aggregate(aggregate['id'])
 
     def _get_host_name(self):
-        hosts = self.hosts_client.list_hosts()
+        hosts = self.hosts_client.list_hosts()['hosts']
         self.assertTrue(len(hosts) >= 1)
         computes = [x for x in hosts if x['service'] == 'compute']
         return computes[0]['host_name']
diff --git a/tempest/scenario/test_dashboard_basic_ops.py b/tempest/scenario/test_dashboard_basic_ops.py
index eb018eb..8e91a6d 100644
--- a/tempest/scenario/test_dashboard_basic_ops.py
+++ b/tempest/scenario/test_dashboard_basic_ops.py
@@ -26,6 +26,7 @@
 class HorizonHTMLParser(HTMLParser.HTMLParser):
     csrf_token = None
     region = None
+    login = None
 
     def _find_name(self, attrs, name):
         for attrpair in attrs:
@@ -39,12 +40,20 @@
                 return attrpair[1]
         return None
 
+    def _find_attr_value(self, attrs, attr_name):
+        for attrpair in attrs:
+            if attrpair[0] == attr_name:
+                return attrpair[1]
+        return None
+
     def handle_starttag(self, tag, attrs):
         if tag == 'input':
             if self._find_name(attrs, 'csrfmiddlewaretoken'):
                 self.csrf_token = self._find_value(attrs)
             if self._find_name(attrs, 'region'):
                 self.region = self._find_value(attrs)
+        if tag == 'form':
+            self.login = self._find_attr_value(attrs, 'action')
 
 
 class TestDashboardBasicOps(manager.ScenarioTest):
@@ -79,8 +88,12 @@
         parser = HorizonHTMLParser()
         parser.feed(response)
 
+        # construct the dashboard login url; discovery accommodates a non-/
+        # web root for the dashboard
+        login_url = CONF.dashboard.dashboard_url + parser.login[1:]
+
         # Prepare login form request
-        req = request.Request(CONF.dashboard.login_url)
+        req = request.Request(login_url)
         req.add_header('Content-type', 'application/x-www-form-urlencoded')
         req.add_header('Referer', CONF.dashboard.dashboard_url)
         params = {'username': username,
diff --git a/tempest/scenario/test_large_ops.py b/tempest/scenario/test_large_ops.py
index c44557e..4e6358e 100644
--- a/tempest/scenario/test_large_ops.py
+++ b/tempest/scenario/test_large_ops.py
@@ -109,8 +109,8 @@
         for server in self.servers:
             # after deleting all servers - wait for all servers to clear
             # before cleanup continues
-            self.addCleanupClass(self.servers_client.
-                                 wait_for_server_termination,
+            self.addCleanupClass(waiters.wait_for_server_termination,
+                                 self.servers_client,
                                  server['id'])
         for server in self.servers:
             self.addCleanupClass(self.servers_client.delete_server,
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index e676063..c194103 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -20,6 +20,7 @@
 import testtools
 
 from tempest.common.utils import data_utils
+from tempest.common import waiters
 from tempest import config
 from tempest import exceptions
 from tempest.scenario import manager
@@ -338,8 +339,8 @@
 
         for remote_ip in address_list:
             if should_connect:
-                msg = "Timed out waiting for "
-                "%s to become reachable" % remote_ip
+                msg = ("Timed out waiting for %s to become "
+                       "reachable") % remote_ip
             else:
                 msg = "ip address %s is reachable" % remote_ip
             try:
@@ -643,7 +644,7 @@
         self.assertEqual(self.port_id, port_list[0]['id'])
         # Delete the server.
         self.servers_client.delete_server(server['id'])
-        self.servers_client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.servers_client, server['id'])
         # Assert the port still exists on the network but is unbound from
         # the deleted server.
         port = self.network_client.show_port(self.port_id)['port']
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index fba839a..a9394cb 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -27,13 +27,17 @@
 
 
 class TestGettingAddress(manager.NetworkScenarioTest):
-    """Create network with subnets: one IPv4 and
-    one or few IPv6 in a given address mode
-    Boot 2 VMs on this network
-    Allocate and assign 2 FIP4
-    Check that vNICs of all VMs gets all addresses actually assigned
-    Ping4 to one VM from another one
-    If ping6 available in VM, do ping6 to all v6 addresses
+    """Test Summary:
+
+    1. Create network with subnets:
+        1.1. one IPv4 and
+        1.2. one or more IPv6 in a given address mode
+    2. Boot 2 VMs on this network
+    3. Allocate and assign 2 FIP4
+    4. Check that vNICs of all VMs get all addresses actually assigned
+    5. Each VM will ping the other's v4 private address
+    6. If ping6 is available in the VM, each VM will ping all of the other's
+       v6 addresses as well as the router's
     """
 
     @classmethod
@@ -65,23 +69,30 @@
             'key_name': self.keypair['name'],
             'security_groups': [{'name': self.sec_grp['name']}]}
 
-    def prepare_network(self, address6_mode, n_subnets6=1):
+    def prepare_network(self, address6_mode, n_subnets6=1, dualnet=False):
         """Creates network with
          given number of IPv6 subnets in the given mode and
          one IPv4 subnet
          Creates router with ports on all subnets
+         if dualnet is True, create the IPv6 subnets on a separate network
+         :return: list of created networks
         """
         self.network = self._create_network(tenant_id=self.tenant_id)
+        if dualnet:
+            self.network_v6 = self._create_network(tenant_id=self.tenant_id)
+
         sub4 = self._create_subnet(network=self.network,
                                    namestart='sub4',
-                                   ip_version=4,)
+                                   ip_version=4)
 
         router = self._get_router(tenant_id=self.tenant_id)
         sub4.add_to_router(router_id=router['id'])
         self.addCleanup(sub4.delete)
 
+        self.subnets_v6 = []
         for _ in range(n_subnets6):
-            sub6 = self._create_subnet(network=self.network,
+            net6 = self.network_v6 if dualnet else self.network
+            sub6 = self._create_subnet(network=net6,
                                        namestart='sub6',
                                        ip_version=6,
                                        ipv6_ra_mode=address6_mode,
@@ -89,6 +100,9 @@
 
             sub6.add_to_router(router_id=router['id'])
             self.addCleanup(sub6.delete)
+            self.subnets_v6.append(sub6)
+
+        return [self.network, self.network_v6] if dualnet else [self.network]
 
     @staticmethod
     def define_server_ips(srv):
@@ -101,11 +115,12 @@
                     ips['4'] = nic['addr']
         return ips
 
-    def prepare_server(self):
+    def prepare_server(self, networks=None):
         username = CONF.compute.image_ssh_user
 
         create_kwargs = self.srv_kwargs
-        create_kwargs['networks'] = [{'uuid': self.network.id}]
+        networks = networks or [self.network]
+        create_kwargs['networks'] = [{'uuid': n.id} for n in networks]
 
         srv = self.create_server(create_kwargs=create_kwargs)
         fip = self.create_floating_ip(thing=srv)
@@ -113,18 +128,42 @@
         ssh = self.get_remote_client(
             server_or_ip=fip.floating_ip_address,
             username=username)
-        return ssh, ips
+        return ssh, ips, srv["id"]
 
-    def _prepare_and_test(self, address6_mode, n_subnets6=1):
-        self.prepare_network(address6_mode=address6_mode,
-                             n_subnets6=n_subnets6)
+    def turn_nic6_on(self, ssh, sid):
+        """Turns the IPv6 vNIC on
 
-        sshv4_1, ips_from_api_1 = self.prepare_server()
-        sshv4_2, ips_from_api_2 = self.prepare_server()
+        Required because guest images usually set only the first vNIC on boot.
+        Searches for the IPv6 vNIC's MAC and brings it up.
+
+        @param ssh: RemoteClient ssh instance to server
+        @param sid: server uuid
+        """
+        ports = [p["mac_address"] for p in
+                 self._list_ports(device_id=sid,
+                                  network_id=self.network_v6.id)]
+        self.assertEqual(1, len(ports),
+                         message="Multiple IPv6 ports found on network %s"
+                         % self.network_v6)
+        mac6 = ports[0]
+        ssh.turn_nic_on(ssh.get_nic_name(mac6))
+
+    def _prepare_and_test(self, address6_mode, n_subnets6=1, dualnet=False):
+        net_list = self.prepare_network(address6_mode=address6_mode,
+                                        n_subnets6=n_subnets6,
+                                        dualnet=dualnet)
+
+        sshv4_1, ips_from_api_1, sid1 = self.prepare_server(networks=net_list)
+        sshv4_2, ips_from_api_2, sid2 = self.prepare_server(networks=net_list)
 
         def guest_has_address(ssh, addr):
             return addr in ssh.get_ip_list()
 
+        # Turn on 2nd NIC for Cirros when dualnet
+        if dualnet:
+            self.turn_nic6_on(sshv4_1, sid1)
+            self.turn_nic6_on(sshv4_2, sid2)
+
         # get addresses assigned to vNIC as reported by 'ip address' utility
         ips_from_ip_1 = sshv4_1.get_ip_list()
         ips_from_ip_2 = sshv4_2.get_ip_list()
@@ -145,23 +184,32 @@
             self.assertTrue(test.call_until_true(srv2_v6_addr_assigned,
                                                  CONF.compute.ping_timeout, 1))
 
-        result = sshv4_1.ping_host(ips_from_api_2['4'])
-        self.assertIn('0% packet loss', result)
-        result = sshv4_2.ping_host(ips_from_api_1['4'])
-        self.assertIn('0% packet loss', result)
+        self._check_connectivity(sshv4_1, ips_from_api_2['4'])
+        self._check_connectivity(sshv4_2, ips_from_api_1['4'])
 
         # Some VM (like cirros) may not have ping6 utility
         result = sshv4_1.exec_command('whereis ping6')
         is_ping6 = False if result == 'ping6:\n' else True
         if is_ping6:
             for i in range(n_subnets6):
-                result = sshv4_1.ping_host(ips_from_api_2['6'][i])
-                self.assertIn('0% packet loss', result)
-                result = sshv4_2.ping_host(ips_from_api_1['6'][i])
-                self.assertIn('0% packet loss', result)
+                self._check_connectivity(sshv4_1,
+                                         ips_from_api_2['6'][i])
+                self._check_connectivity(sshv4_1,
+                                         self.subnets_v6[i].gateway_ip)
+                self._check_connectivity(sshv4_2,
+                                         ips_from_api_1['6'][i])
+                self._check_connectivity(sshv4_2,
+                                         self.subnets_v6[i].gateway_ip)
         else:
             LOG.warning('Ping6 is not available, skipping')
 
+    def _check_connectivity(self, source, dest):
+        self.assertTrue(
+            self._check_remote_connectivity(source, dest),
+            "Timed out waiting for %s to become reachable from %s" %
+            (dest, source.ssh_client.host)
+        )
+
     @test.idempotent_id('2c92df61-29f0-4eaa-bee3-7c65bef62a43')
     @test.services('compute', 'network')
     def test_slaac_from_os(self):
@@ -181,3 +229,25 @@
     @test.services('compute', 'network')
     def test_multi_prefix_slaac(self):
         self._prepare_and_test(address6_mode='slaac', n_subnets6=2)
+
+    @test.idempotent_id('b6399d76-4438-4658-bcf5-0d6c8584fde2')
+    @test.services('compute', 'network')
+    def test_dualnet_slaac_from_os(self):
+        self._prepare_and_test(address6_mode='slaac', dualnet=True)
+
+    @test.idempotent_id('76f26acd-9688-42b4-bc3e-cd134c4cb09e')
+    @test.services('compute', 'network')
+    def test_dualnet_dhcp6_stateless_from_os(self):
+        self._prepare_and_test(address6_mode='dhcpv6-stateless', dualnet=True)
+
+    @test.idempotent_id('cf1c4425-766b-45b8-be35-e2959728eb00')
+    @test.services('compute', 'network')
+    def test_dualnet_multi_prefix_dhcpv6_stateless(self):
+        self._prepare_and_test(address6_mode='dhcpv6-stateless', n_subnets6=2,
+                               dualnet=True)
+
+    @test.idempotent_id('9178ad42-10e4-47e9-8987-e02b170cc5cd')
+    @test.services('compute', 'network')
+    def test_dualnet_multi_prefix_slaac(self):
+        self._prepare_and_test(address6_mode='slaac', n_subnets6=2,
+                               dualnet=True)
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index f61b151..3019cc4 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -16,6 +16,7 @@
 from oslo_log import log as logging
 
 from tempest import config
+from tempest import exceptions
 from tempest.scenario import manager
 from tempest.scenario import utils as test_utils
 from tempest import test
@@ -98,9 +99,24 @@
     def verify_metadata(self):
         if self.run_ssh and CONF.compute_feature_enabled.metadata_service:
             # Verify metadata service
-            result = self.ssh_client.exec_command(
-                "curl http://169.254.169.254/latest/meta-data/public-ipv4")
-            self.assertEqual(self.floating_ip['ip'], result)
+            md_url = 'http://169.254.169.254/latest/meta-data/public-ipv4'
+
+            def exec_cmd_and_verify_output():
+                cmd = 'curl ' + md_url
+                floating_ip = self.floating_ip['ip']
+                result = self.ssh_client.exec_command(cmd)
+                if result:
+                    msg = ('Failed while verifying metadata on server. Result '
+                           'of command "%s" is NOT "%s".' % (cmd, floating_ip))
+                    self.assertEqual(floating_ip, result, msg)
+                    return 'Verification is successful!'
+
+            if not test.call_until_true(exec_cmd_and_verify_output,
+                                        CONF.compute.build_timeout,
+                                        CONF.compute.build_interval):
+                raise exceptions.TimeoutException('Timed out while waiting to '
+                                                  'verify metadata on server. '
+                                                  '%s is empty.' % md_url)
 
     @test.idempotent_id('7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba')
     @test.attr(type='smoke')
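
The metadata check above follows a generic poll-until-true pattern; a hedged
sketch with assumed names (ssh_client, md_url, expected_ip) from the test's
scope:

    def probe():
        # True only once the metadata service echoes the expected address.
        return ssh_client.exec_command('curl ' + md_url) == expected_ip

    if not test.call_until_true(probe, CONF.compute.build_timeout,
                                CONF.compute.build_interval):
        raise exceptions.TimeoutException('metadata check timed out')
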
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index c1d9a1b..a7bdba3 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -80,12 +80,13 @@
     def _create_volume_snapshot(self, volume):
         snapshot_name = data_utils.rand_name('scenario-snapshot')
         _, snapshot = self.snapshots_client.create_snapshot(
-            volume['id'], display_name=snapshot_name)
+            volume['id'], display_name=snapshot_name)['snapshot']
 
         def cleaner():
             self.snapshots_client.delete_snapshot(snapshot['id'])
             try:
-                while self.snapshots_client.show_snapshot(snapshot['id']):
+                while self.snapshots_client.show_snapshot(
+                    snapshot['id'])['snapshot']:
                     time.sleep(1)
             except lib_exc.NotFound:
                 pass
@@ -104,7 +105,7 @@
 
     def _attach_volume(self, server, volume):
         attached_volume = self.servers_client.attach_volume(
-            server['id'], volume['id'], device='/dev/%s'
+            server['id'], volumeId=volume['id'], device='/dev/%s'
             % CONF.compute.volume_device_name)
         self.assertEqual(volume['id'], attached_volume['id'])
         self._wait_for_volume_status(attached_volume, 'in-use')
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 3809831..f69b7d2 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -69,7 +69,7 @@
         snap = self.snapshots_client.create_snapshot(
             volume_id=vol_id,
             force=True,
-            display_name=snap_name)
+            display_name=snap_name)['snapshot']
         self.addCleanup(
             self.snapshots_client.wait_for_resource_deletion, snap['id'])
         self.addCleanup(self.snapshots_client.delete_snapshot, snap['id'])
@@ -122,7 +122,7 @@
 
     def _delete_server(self, server):
         self.servers_client.delete_server(server['id'])
-        self.servers_client.wait_for_server_termination(server['id'])
+        waiters.wait_for_server_termination(self.servers_client, server['id'])
 
     def _check_content_of_written_file(self, ssh_client, expected):
         actual = self._get_content(ssh_client)
diff --git a/tempest/services/compute/json/agents_client.py b/tempest/services/compute/json/agents_client.py
index 1a1d832..d38c8cd 100644
--- a/tempest/services/compute/json/agents_client.py
+++ b/tempest/services/compute/json/agents_client.py
@@ -32,7 +32,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.list_agents, resp, body)
-        return service_client.ResponseBodyList(resp, body['agents'])
+        return service_client.ResponseBody(resp, body)
 
     def create_agent(self, **kwargs):
         """Create an agent build."""
@@ -40,7 +40,7 @@
         resp, body = self.post('os-agents', post_body)
         body = json.loads(body)
         self.validate_response(schema.create_agent, resp, body)
-        return service_client.ResponseBody(resp, body['agent'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_agent(self, agent_id):
         """Delete an existing agent build."""
@@ -53,4 +53,4 @@
         put_body = json.dumps({'para': kwargs})
         resp, body = self.put('os-agents/%s' % agent_id, put_body)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['agent'])
+        return service_client.ResponseBody(resp, body)
diff --git a/tempest/services/compute/json/baremetal_nodes_client.py b/tempest/services/compute/json/baremetal_nodes_client.py
index 8165292..15f883a 100644
--- a/tempest/services/compute/json/baremetal_nodes_client.py
+++ b/tempest/services/compute/json/baremetal_nodes_client.py
@@ -33,7 +33,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.list_baremetal_nodes, resp, body)
-        return service_client.ResponseBodyList(resp, body['nodes'])
+        return service_client.ResponseBody(resp, body)
 
     def show_baremetal_node(self, baremetal_node_id):
         """Returns the details of a single baremetal node."""
@@ -41,4 +41,4 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.get_baremetal_node, resp, body)
-        return service_client.ResponseBody(resp, body['node'])
+        return service_client.ResponseBody(resp, body)
diff --git a/tempest/services/compute/json/certificates_client.py b/tempest/services/compute/json/certificates_client.py
index c25b273..d6c72f4 100644
--- a/tempest/services/compute/json/certificates_client.py
+++ b/tempest/services/compute/json/certificates_client.py
@@ -26,7 +26,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.get_certificate, resp, body)
-        return service_client.ResponseBody(resp, body['certificate'])
+        return service_client.ResponseBody(resp, body)
 
     def create_certificate(self):
         """create certificates."""
@@ -34,4 +34,4 @@
         resp, body = self.post(url, None)
         body = json.loads(body)
         self.validate_response(schema.create_certificate, resp, body)
-        return service_client.ResponseBody(resp, body['certificate'])
+        return service_client.ResponseBody(resp, body)
diff --git a/tempest/services/compute/json/fixed_ips_client.py b/tempest/services/compute/json/fixed_ips_client.py
index d0d9ca1..23401c3 100644
--- a/tempest/services/compute/json/fixed_ips_client.py
+++ b/tempest/services/compute/json/fixed_ips_client.py
@@ -26,7 +26,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.get_fixed_ip, resp, body)
-        return service_client.ResponseBody(resp, body['fixed_ip'])
+        return service_client.ResponseBody(resp, body)
 
     def reserve_fixed_ip(self, fixed_ip, **kwargs):
         """This reserves and unreserves fixed ips."""
diff --git a/tempest/services/compute/json/floating_ip_pools_client.py b/tempest/services/compute/json/floating_ip_pools_client.py
index 1e2133b..7a4434f 100644
--- a/tempest/services/compute/json/floating_ip_pools_client.py
+++ b/tempest/services/compute/json/floating_ip_pools_client.py
@@ -24,7 +24,7 @@
 class FloatingIPPoolsClient(service_client.ServiceClient):
 
     def list_floating_ip_pools(self, params=None):
-        """Returns a list of all floating IP Pools."""
+        """Gets the list of all floating IP Pools."""
         url = 'os-floating-ip-pools'
         if params:
             url += '?%s' % urllib.urlencode(params)
@@ -32,4 +32,4 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.list_floating_ip_pools, resp, body)
-        return service_client.ResponseBodyList(resp, body['floating_ip_pools'])
+        return service_client.ResponseBody(resp, body)
diff --git a/tempest/services/compute/json/floating_ips_bulk_client.py b/tempest/services/compute/json/floating_ips_bulk_client.py
index 8b1c5a9..c51f77e 100644
--- a/tempest/services/compute/json/floating_ips_bulk_client.py
+++ b/tempest/services/compute/json/floating_ips_bulk_client.py
@@ -32,18 +32,17 @@
         resp, body = self.post('os-floating-ips-bulk', post_body)
         body = json.loads(body)
         self.validate_response(schema.create_floating_ips_bulk, resp, body)
-        return service_client.ResponseBody(resp,
-                                           body['floating_ips_bulk_create'])
+        return service_client.ResponseBody(resp, body)
 
     def list_floating_ips_bulk(self):
-        """Returns a list of all floating IPs bulk."""
+        """Gets all floating IPs in bulk."""
         resp, body = self.get('os-floating-ips-bulk')
         body = json.loads(body)
         self.validate_response(schema.list_floating_ips_bulk, resp, body)
-        return service_client.ResponseBodyList(resp, body['floating_ip_info'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_floating_ips_bulk(self, ip_range):
-        """Deletes the provided floating IPs bulk."""
+        """Deletes the provided floating IPs in bulk."""
         post_body = json.dumps({'ip_range': ip_range})
         resp, body = self.put('os-floating-ips-bulk/delete', post_body)
         body = json.loads(body)
diff --git a/tempest/services/compute/json/hosts_client.py b/tempest/services/compute/json/hosts_client.py
index 752af68..3d3cb18 100644
--- a/tempest/services/compute/json/hosts_client.py
+++ b/tempest/services/compute/json/hosts_client.py
@@ -31,7 +31,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.validate_response(schema.list_hosts, resp, body)
-        return service_client.ResponseBodyList(resp, body['hosts'])
+        return service_client.ResponseBody(resp, body)
 
     def show_host(self, hostname):
         """Show detail information for the host."""
@@ -39,7 +39,7 @@
         resp, body = self.get("os-hosts/%s" % hostname)
         body = json.loads(body)
         self.validate_response(schema.get_host_detail, resp, body)
-        return service_client.ResponseBodyList(resp, body['host'])
+        return service_client.ResponseBody(resp, body)
 
     def update_host(self, hostname, **kwargs):
         """Update a host."""
@@ -62,7 +62,7 @@
         resp, body = self.get("os-hosts/%s/startup" % hostname)
         body = json.loads(body)
         self.validate_response(schema.startup_host, resp, body)
-        return service_client.ResponseBody(resp, body['host'])
+        return service_client.ResponseBody(resp, body)
 
     def shutdown_host(self, hostname):
         """Shutdown a host."""
@@ -70,7 +70,7 @@
         resp, body = self.get("os-hosts/%s/shutdown" % hostname)
         body = json.loads(body)
         self.validate_response(schema.shutdown_host, resp, body)
-        return service_client.ResponseBody(resp, body['host'])
+        return service_client.ResponseBody(resp, body)
 
     def reboot_host(self, hostname):
         """reboot a host."""
@@ -78,4 +78,4 @@
         resp, body = self.get("os-hosts/%s/reboot" % hostname)
         body = json.loads(body)
         self.validate_response(schema.reboot_host, resp, body)
-        return service_client.ResponseBody(resp, body['host'])
+        return service_client.ResponseBody(resp, body)
diff --git a/tempest/services/compute/json/images_client.py b/tempest/services/compute/json/images_client.py
index b0ce2dc..4e7e93f 100644
--- a/tempest/services/compute/json/images_client.py
+++ b/tempest/services/compute/json/images_client.py
@@ -23,18 +23,10 @@
 
 class ImagesClient(service_client.ServiceClient):
 
-    def create_image(self, server_id, name, meta=None):
+    def create_image(self, server_id, **kwargs):
         """Creates an image of the original server."""
 
-        post_body = {
-            'createImage': {
-                'name': name,
-            }
-        }
-
-        if meta is not None:
-            post_body['createImage']['metadata'] = meta
-
+        post_body = {'createImage': kwargs}
         post_body = json.dumps(post_body)
         resp, body = self.post('servers/%s/action' % server_id,
                                post_body)
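
A hedged sketch of a call site after the **kwargs refactor; 'metadata' is the
request-body key the old 'meta' argument used to populate, and the client and
id variable names are assumed:

    body = images_client.create_image(server_id, name='snap-1',
                                      metadata={'purpose': 'backup'})
    image_id = body.response['location'].split('images/')[1]
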
diff --git a/tempest/services/compute/json/keypairs_client.py b/tempest/services/compute/json/keypairs_client.py
index e51671f..2e22bc6 100644
--- a/tempest/services/compute/json/keypairs_client.py
+++ b/tempest/services/compute/json/keypairs_client.py
@@ -24,26 +24,21 @@
     def list_keypairs(self):
         resp, body = self.get("os-keypairs")
         body = json.loads(body)
-        # Each returned keypair is embedded within an unnecessary 'keypair'
-        # element which is a deviation from other resources like floating-ips,
-        # servers, etc. A bug?
-        # For now we shall adhere to the spec, but the spec for keypairs
-        # is yet to be found
         self.validate_response(schema.list_keypairs, resp, body)
-        return service_client.ResponseBodyList(resp, body['keypairs'])
+        return service_client.ResponseBody(resp, body)
 
     def show_keypair(self, keypair_name):
         resp, body = self.get("os-keypairs/%s" % keypair_name)
         body = json.loads(body)
         self.validate_response(schema.get_keypair, resp, body)
-        return service_client.ResponseBody(resp, body['keypair'])
+        return service_client.ResponseBody(resp, body)
 
     def create_keypair(self, **kwargs):
         post_body = json.dumps({'keypair': kwargs})
         resp, body = self.post("os-keypairs", body=post_body)
         body = json.loads(body)
         self.validate_response(schema.create_keypair, resp, body)
-        return service_client.ResponseBody(resp, body['keypair'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_keypair(self, keypair_name):
         resp, body = self.delete("os-keypairs/%s" % keypair_name)
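
A hedged sketch of the caller-side unwrapping these clients now require; the
nested 'keypair' element in the list response matches the comment removed
above, and the client variable name is assumed:

    keypair = keypairs_client.create_keypair(name='kp-1')['keypair']
    names = [k['keypair']['name']
             for k in keypairs_client.list_keypairs()['keypairs']]
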
diff --git a/tempest/services/compute/json/security_group_rules_client.py b/tempest/services/compute/json/security_group_rules_client.py
index f570eb7..9a7c881 100644
--- a/tempest/services/compute/json/security_group_rules_client.py
+++ b/tempest/services/compute/json/security_group_rules_client.py
@@ -23,8 +23,7 @@
 
 class SecurityGroupRulesClient(service_client.ServiceClient):
 
-    def create_security_group_rule(self, parent_group_id, ip_proto, from_port,
-                                   to_port, **kwargs):
+    def create_security_group_rule(self, **kwargs):
         """
         Creates a new security group rule.
         parent_group_id :ID of Security group
@@ -35,15 +34,7 @@
         cidr     : CIDR for address range.
         group_id : ID of the Source group
         """
-        post_body = {
-            'parent_group_id': parent_group_id,
-            'ip_protocol': ip_proto,
-            'from_port': from_port,
-            'to_port': to_port,
-            'cidr': kwargs.get('cidr'),
-            'group_id': kwargs.get('group_id'),
-        }
-        post_body = json.dumps({'security_group_rule': post_body})
+        post_body = json.dumps({'security_group_rule': kwargs})
         url = 'os-security-group-rules'
         resp, body = self.post(url, post_body)
         body = json.loads(body)
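
A hedged sketch of a call site after the signature change; the field names
mirror the scenario ruleset updated earlier in this change, and the client
variable name is assumed:

    rule = security_group_rules_client.create_security_group_rule(
        parent_group_id=secgroup_id, ip_protocol='tcp',
        from_port=22, to_port=22, cidr='0.0.0.0/0')
    rule_id = rule['id']
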
diff --git a/tempest/services/compute/json/servers_client.py b/tempest/services/compute/json/servers_client.py
index 1159a58..7022418 100644
--- a/tempest/services/compute/json/servers_client.py
+++ b/tempest/services/compute/json/servers_client.py
@@ -14,15 +14,11 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import time
-
 from oslo_serialization import jsonutils as json
 from six.moves.urllib import parse as urllib
-from tempest_lib import exceptions as lib_exc
 
 from tempest.api_schema.response.compute.v2_1 import servers as schema
 from tempest.common import service_client
-from tempest import exceptions
 
 
 class ServersClient(service_client.ServiceClient):
@@ -165,24 +161,6 @@
         self.validate_response(_schema, resp, body)
         return service_client.ResponseBody(resp, body)
 
-    def wait_for_server_termination(self, server_id, ignore_error=False):
-        """Waits for server to reach termination."""
-        start_time = int(time.time())
-        while True:
-            try:
-                body = self.show_server(server_id)
-            except lib_exc.NotFound:
-                return
-
-            server_status = body['status']
-            if server_status == 'ERROR' and not ignore_error:
-                raise exceptions.BuildErrorException(server_id=server_id)
-
-            if int(time.time()) - start_time >= self.build_timeout:
-                raise exceptions.TimeoutException
-
-            time.sleep(self.build_interval)
-
     def list_addresses(self, server_id):
         """Lists all addresses for a server."""
         resp, body = self.get("servers/%s/ips" % server_id)
@@ -206,15 +184,7 @@
                                post_body)
         if response_key is not None:
             body = json.loads(body)
-            # Check for Schema as 'None' because if we do not have any server
-            # action schema implemented yet then they can pass 'None' to skip
-            # the validation.Once all server action has their schema
-            # implemented then, this check can be removed if every actions are
-            # supposed to validate their response.
-            # TODO(GMann): Remove the below 'if' check once all server actions
-            # schema are implemented.
-            if schema is not None:
-                self.validate_response(schema, resp, body)
+            self.validate_response(schema, resp, body)
             body = body[response_key]
         else:
             self.validate_response(schema, resp, body)
@@ -341,14 +311,9 @@
     def start(self, server_id, **kwargs):
         return self.action(server_id, 'os-start', None, **kwargs)
 
-    def attach_volume(self, server_id, volume_id, device='/dev/vdz'):
+    def attach_volume(self, server_id, **kwargs):
         """Attaches a volume to a server instance."""
-        post_body = json.dumps({
-            'volumeAttachment': {
-                'volumeId': volume_id,
-                'device': device,
-            }
-        })
+        post_body = json.dumps({'volumeAttachment': kwargs})
         resp, body = self.post('servers/%s/os-volume_attachments' % server_id,
                                post_body)
         body = json.loads(body)
@@ -386,16 +351,10 @@
         """Removes a security group from the server."""
         return self.action(server_id, 'removeSecurityGroup', None, name=name)
 
-    def live_migrate_server(self, server_id, dest_host, use_block_migration):
+    def live_migrate_server(self, server_id, **kwargs):
         """This should be called with administrator privileges."""
 
-        migrate_params = {
-            "disk_over_commit": False,
-            "block_migration": use_block_migration,
-            "host": dest_host
-        }
-
-        req_body = json.dumps({'os-migrateLive': migrate_params})
+        req_body = json.dumps({'os-migrateLive': kwargs})
 
         resp, body = self.post("servers/%s/action" % server_id, req_body)
         self.validate_response(schema.server_actions_common_schema,
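
A hedged sketch of call sites after the **kwargs refactors above; the keyword
names mirror the request-body fields the old signatures hard-coded, and the
variable names are assumed:

    servers_client.attach_volume(server_id, volumeId=volume_id,
                                 device='/dev/vdb')
    servers_client.live_migrate_server(server_id, host=target_host,
                                       block_migration=True,
                                       disk_over_commit=False)
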
diff --git a/tempest/services/data_processing/v1_1/data_processing_client.py b/tempest/services/data_processing/v1_1/data_processing_client.py
index bbc0f2a..cba4c42 100644
--- a/tempest/services/data_processing/v1_1/data_processing_client.py
+++ b/tempest/services/data_processing/v1_1/data_processing_client.py
@@ -39,8 +39,8 @@
         self.expected_success(resp_status, resp.status)
         return resp, body
 
-    def _request_check_and_parse_resp(self, request_func, uri, resp_status,
-                                      resource_name, *args, **kwargs):
+    def _request_check_and_parse_resp(self, request_func, uri,
+                                      resp_status, *args, **kwargs):
         """Make a request using specified request_func, check response status
         code and parse response body.
 
@@ -50,36 +50,19 @@
         resp, body = request_func(uri, headers=headers, *args, **kwargs)
         self.expected_success(resp_status, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body[resource_name])
-
-    def _request_check_and_parse_resp_list(self, request_func, uri,
-                                           resp_status, resource_name,
-                                           *args, **kwargs):
-        """Make a request using specified request_func, check response status
-        code and parse response body.
-
-        It returns a ResponseBodyList.
-        """
-        headers = {'Content-Type': 'application/json'}
-        resp, body = request_func(uri, headers=headers, *args, **kwargs)
-        self.expected_success(resp_status, resp.status)
-        body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body[resource_name])
+        return service_client.ResponseBody(resp, body)
 
     def list_node_group_templates(self):
         """List all node group templates for a user."""
 
         uri = 'node-group-templates'
-        return self._request_check_and_parse_resp_list(self.get, uri,
-                                                       200,
-                                                       'node_group_templates')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_node_group_template(self, tmpl_id):
         """Returns the details of a single node group template."""
 
         uri = 'node-group-templates/%s' % tmpl_id
-        return self._request_check_and_parse_resp(self.get, uri,
-                                                  200, 'node_group_template')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def create_node_group_template(self, name, plugin_name, hadoop_version,
                                    node_processes, flavor_id,
@@ -100,7 +83,6 @@
             'node_configs': node_configs or dict(),
         })
         return self._request_check_and_parse_resp(self.post, uri, 202,
-                                                  'node_group_template',
                                                   body=json.dumps(body))
 
     def delete_node_group_template(self, tmpl_id):
@@ -113,8 +95,7 @@
         """List all enabled plugins."""
 
         uri = 'plugins'
-        return self._request_check_and_parse_resp_list(self.get,
-                                                       uri, 200, 'plugins')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_plugin(self, plugin_name, plugin_version=None):
         """Returns the details of a single plugin."""
@@ -122,22 +103,19 @@
         uri = 'plugins/%s' % plugin_name
         if plugin_version:
             uri += '/%s' % plugin_version
-        return self._request_check_and_parse_resp(self.get, uri, 200, 'plugin')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def list_cluster_templates(self):
         """List all cluster templates for a user."""
 
         uri = 'cluster-templates'
-        return self._request_check_and_parse_resp_list(self.get, uri,
-                                                       200,
-                                                       'cluster_templates')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_cluster_template(self, tmpl_id):
         """Returns the details of a single cluster template."""
 
         uri = 'cluster-templates/%s' % tmpl_id
-        return self._request_check_and_parse_resp(self.get,
-                                                  uri, 200, 'cluster_template')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def create_cluster_template(self, name, plugin_name, hadoop_version,
                                 node_groups, cluster_configs=None,
@@ -157,7 +135,6 @@
             'cluster_configs': cluster_configs or dict(),
         })
         return self._request_check_and_parse_resp(self.post, uri, 202,
-                                                  'cluster_template',
                                                   body=json.dumps(body))
 
     def delete_cluster_template(self, tmpl_id):
@@ -170,16 +147,13 @@
         """List all data sources for a user."""
 
         uri = 'data-sources'
-        return self._request_check_and_parse_resp_list(self.get,
-                                                       uri, 200,
-                                                       'data_sources')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_data_source(self, source_id):
         """Returns the details of a single data source."""
 
         uri = 'data-sources/%s' % source_id
-        return self._request_check_and_parse_resp(self.get,
-                                                  uri, 200, 'data_source')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def create_data_source(self, name, data_source_type, url, **kwargs):
         """Creates data source with specified params.
@@ -195,8 +169,7 @@
             'url': url
         })
         return self._request_check_and_parse_resp(self.post, uri,
-                                                  202, 'data_source',
-                                                  body=json.dumps(body))
+                                                  202, body=json.dumps(body))
 
     def delete_data_source(self, source_id):
         """Deletes the specified data source by id."""
@@ -208,22 +181,19 @@
         """List all job binary internals for a user."""
 
         uri = 'job-binary-internals'
-        return self._request_check_and_parse_resp_list(self.get,
-                                                       uri, 200, 'binaries')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_job_binary_internal(self, job_binary_id):
         """Returns the details of a single job binary internal."""
 
         uri = 'job-binary-internals/%s' % job_binary_id
-        return self._request_check_and_parse_resp(self.get, uri,
-                                                  200, 'job_binary_internal')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def create_job_binary_internal(self, name, data):
         """Creates job binary internal with specified params."""
 
         uri = 'job-binary-internals/%s' % name
-        return self._request_check_and_parse_resp(self.put, uri, 202,
-                                                  'job_binary_internal', data)
+        return self._request_check_and_parse_resp(self.put, uri, 202, data)
 
     def delete_job_binary_internal(self, job_binary_id):
         """Deletes the specified job binary internal by id."""
@@ -241,15 +211,13 @@
         """List all job binaries for a user."""
 
         uri = 'job-binaries'
-        return self._request_check_and_parse_resp_list(self.get,
-                                                       uri, 200, 'binaries')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_job_binary(self, job_binary_id):
         """Returns the details of a single job binary."""
 
         uri = 'job-binaries/%s' % job_binary_id
-        return self._request_check_and_parse_resp(self.get,
-                                                  uri, 200, 'job_binary')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def create_job_binary(self, name, url, extra=None, **kwargs):
         """Creates job binary with specified params.
@@ -265,8 +233,7 @@
             'extra': extra or dict(),
         })
         return self._request_check_and_parse_resp(self.post, uri,
-                                                  202, 'job_binary',
-                                                  body=json.dumps(body))
+                                                  202, body=json.dumps(body))
 
     def delete_job_binary(self, job_binary_id):
         """Deletes the specified job binary by id."""
@@ -284,14 +251,13 @@
         """List all jobs for a user."""
 
         uri = 'jobs'
-        return self._request_check_and_parse_resp_list(self.get,
-                                                       uri, 200, 'jobs')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def get_job(self, job_id):
         """Returns the details of a single job."""
 
         uri = 'jobs/%s' % job_id
-        return self._request_check_and_parse_resp(self.get, uri, 200, 'job')
+        return self._request_check_and_parse_resp(self.get, uri, 200)
 
     def create_job(self, name, job_type, mains, libs=None, **kwargs):
         """Creates job with specified params.
@@ -307,8 +273,8 @@
             'mains': mains,
             'libs': libs or list(),
         })
-        return self._request_check_and_parse_resp(self.post, uri, 202,
-                                                  'job', body=json.dumps(body))
+        return self._request_check_and_parse_resp(self.post, uri,
+                                                  202, body=json.dumps(body))
 
     def delete_job(self, job_id):
         """Deletes the specified job by id."""
diff --git a/tempest/services/identity/v2/json/identity_client.py b/tempest/services/identity/v2/json/identity_client.py
index 1076fca..e6416d6 100644
--- a/tempest/services/identity/v2/json/identity_client.py
+++ b/tempest/services/identity/v2/json/identity_client.py
@@ -56,7 +56,7 @@
         resp, body = self.get('OS-KSADM/roles/%s' % role_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['role'])
+        return service_client.ResponseBody(resp, body)
 
     def create_tenant(self, name, **kwargs):
         """
@@ -125,10 +125,10 @@
         resp, body = self.get('tenants')
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['tenants'])
+        return service_client.ResponseBody(resp, body)
 
     def get_tenant_by_name(self, tenant_name):
-        tenants = self.list_tenants()
+        tenants = self.list_tenants()['tenants']
         for tenant in tenants:
             if tenant['name'] == tenant_name:
                 return tenant
@@ -259,6 +259,33 @@
         self.expected_success(204, resp.status)
         return service_client.ResponseBody(resp, body)
 
+    def create_endpoint(self, service_id, region_id, **kwargs):
+        """Create an endpoint for a service."""
+        post_body = {
+            'service_id': service_id,
+            'region': region_id,
+            'publicurl': kwargs.get('publicurl'),
+            'adminurl': kwargs.get('adminurl'),
+            'internalurl': kwargs.get('internalurl')
+        }
+        post_body = json.dumps({'endpoint': post_body})
+        resp, body = self.post('/endpoints', post_body)
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBody(resp, self._parse_resp(body))
+
+    def list_endpoints(self):
+        """List Endpoints - Returns Endpoints."""
+        resp, body = self.get('/endpoints')
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBodyList(resp, self._parse_resp(body))
+
+    def delete_endpoint(self, endpoint_id):
+        """Delete an endpoint."""
+        url = '/endpoints/%s' % endpoint_id
+        resp, body = self.delete(url)
+        self.expected_success(204, resp.status)
+        return service_client.ResponseBody(resp, body)
+
     def update_user_password(self, user_id, new_pass):
         """Update User Password."""
         put_body = {
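The endpoint calls added to the v2 client above follow the same complete-response convention. A hedged usage sketch, assuming an authenticated v2 identity admin client named ``identity_client`` and pre-existing ``service_id``/``region_id`` values (all placeholders; the exact keys of the parsed create response depend on ``_parse_resp``) ::

    url = 'http://localhost:8776/v1/%(tenant_id)s'
    endpoint = identity_client.create_endpoint(service_id, region_id,
                                               publicurl=url,
                                               adminurl=url,
                                               internalurl=url)

    # list_endpoints returns a ResponseBodyList, iterable like a plain list.
    for ep in identity_client.list_endpoints():
        print(ep['id'])

    identity_client.delete_endpoint(endpoint['id'])
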
diff --git a/tempest/services/identity/v3/json/credentials_client.py b/tempest/services/identity/v3/json/credentials_client.py
index e27f960..decf3a8 100644
--- a/tempest/services/identity/v3/json/credentials_client.py
+++ b/tempest/services/identity/v3/json/credentials_client.py
@@ -36,11 +36,11 @@
         self.expected_success(201, resp.status)
         body = json.loads(body)
         body['credential']['blob'] = json.loads(body['credential']['blob'])
-        return service_client.ResponseBody(resp, body['credential'])
+        return service_client.ResponseBody(resp, body)
 
     def update_credential(self, credential_id, **kwargs):
         """Updates a credential."""
-        body = self.get_credential(credential_id)
+        body = self.get_credential(credential_id)['credential']
         cred_type = kwargs.get('type', body['type'])
         access_key = kwargs.get('access_key', body['blob']['access'])
         secret_key = kwargs.get('secret_key', body['blob']['secret'])
@@ -59,7 +59,7 @@
         self.expected_success(200, resp.status)
         body = json.loads(body)
         body['credential']['blob'] = json.loads(body['credential']['blob'])
-        return service_client.ResponseBody(resp, body['credential'])
+        return service_client.ResponseBody(resp, body)
 
     def get_credential(self, credential_id):
         """To GET Details of a credential."""
@@ -67,14 +67,14 @@
         self.expected_success(200, resp.status)
         body = json.loads(body)
         body['credential']['blob'] = json.loads(body['credential']['blob'])
-        return service_client.ResponseBody(resp, body['credential'])
+        return service_client.ResponseBody(resp, body)
 
     def list_credentials(self):
         """Lists out all the available credentials."""
         resp, body = self.get('credentials')
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['credentials'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_credential(self, credential_id):
         """Deletes a credential."""
diff --git a/tempest/services/identity/v3/json/endpoints_client.py b/tempest/services/identity/v3/json/endpoints_client.py
index f93fb74..6bdf8b3 100644
--- a/tempest/services/identity/v3/json/endpoints_client.py
+++ b/tempest/services/identity/v3/json/endpoints_client.py
@@ -26,7 +26,7 @@
         resp, body = self.get('endpoints')
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['endpoints'])
+        return service_client.ResponseBody(resp, body)
 
     def create_endpoint(self, service_id, interface, url, **kwargs):
         """Create endpoint.
@@ -51,7 +51,7 @@
         resp, body = self.post('endpoints', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['endpoint'])
+        return service_client.ResponseBody(resp, body)
 
     def update_endpoint(self, endpoint_id, service_id=None, interface=None,
                         url=None, region=None, enabled=None, **kwargs):
@@ -78,7 +78,7 @@
         resp, body = self.patch('endpoints/%s' % endpoint_id, post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['endpoint'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_endpoint(self, endpoint_id):
         """Delete endpoint."""
diff --git a/tempest/services/identity/v3/json/identity_client.py b/tempest/services/identity/v3/json/identity_client.py
index 87d4b79..9a60a24 100644
--- a/tempest/services/identity/v3/json/identity_client.py
+++ b/tempest/services/identity/v3/json/identity_client.py
@@ -49,11 +49,11 @@
         resp, body = self.post('users', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['user'])
+        return service_client.ResponseBody(resp, body)
 
     def update_user(self, user_id, name, **kwargs):
         """Updates a user."""
-        body = self.get_user(user_id)
+        body = self.get_user(user_id)['user']
         email = kwargs.get('email', body['email'])
         en = kwargs.get('enabled', body['enabled'])
         project_id = kwargs.get('project_id', body['project_id'])
@@ -78,7 +78,7 @@
         resp, body = self.patch('users/%s' % user_id, post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['user'])
+        return service_client.ResponseBody(resp, body)
 
     def update_user_password(self, user_id, password, original_password):
         """Updates a user password."""
@@ -96,7 +96,7 @@
         resp, body = self.get('users/%s/projects' % user_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['projects'])
+        return service_client.ResponseBody(resp, body)
 
     def get_users(self, params=None):
         """Get the list of users."""
@@ -106,14 +106,14 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['users'])
+        return service_client.ResponseBody(resp, body)
 
     def get_user(self, user_id):
         """GET a user."""
         resp, body = self.get("users/%s" % user_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['user'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_user(self, user_id):
         """Deletes a User."""
@@ -136,7 +136,7 @@
         resp, body = self.post('projects', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['project'])
+        return service_client.ResponseBody(resp, body)
 
     def list_projects(self, params=None):
         url = "projects"
@@ -145,10 +145,10 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['projects'])
+        return service_client.ResponseBody(resp, body)
 
     def update_project(self, project_id, **kwargs):
-        body = self.get_project(project_id)
+        body = self.get_project(project_id)['project']
         name = kwargs.get('name', body['name'])
         desc = kwargs.get('description', body['description'])
         en = kwargs.get('enabled', body['enabled'])
@@ -164,14 +164,14 @@
         resp, body = self.patch('projects/%s' % project_id, post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['project'])
+        return service_client.ResponseBody(resp, body)
 
     def get_project(self, project_id):
         """GET a Project."""
         resp, body = self.get("projects/%s" % project_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['project'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_project(self, project_id):
         """Delete a project."""
@@ -188,21 +188,21 @@
         resp, body = self.post('roles', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['role'])
+        return service_client.ResponseBody(resp, body)
 
     def get_role(self, role_id):
         """GET a Role."""
         resp, body = self.get('roles/%s' % str(role_id))
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['role'])
+        return service_client.ResponseBody(resp, body)
 
     def list_roles(self):
         """Get the list of Roles."""
         resp, body = self.get("roles")
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['roles'])
+        return service_client.ResponseBody(resp, body)
 
     def update_role(self, name, role_id):
         """Update a Role."""
@@ -213,7 +213,7 @@
         resp, body = self.patch('roles/%s' % str(role_id), post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['role'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_role(self, role_id):
         """Delete a role."""
@@ -241,7 +241,7 @@
         resp, body = self.post('domains', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['domain'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_domain(self, domain_id):
         """Delete a domain."""
@@ -257,11 +257,11 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['domains'])
+        return service_client.ResponseBody(resp, body)
 
     def update_domain(self, domain_id, **kwargs):
         """Updates a domain."""
-        body = self.get_domain(domain_id)
+        body = self.get_domain(domain_id)['domain']
         description = kwargs.get('description', body['description'])
         en = kwargs.get('enabled', body['enabled'])
         name = kwargs.get('name', body['name'])
@@ -274,14 +274,14 @@
         resp, body = self.patch('domains/%s' % domain_id, post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['domain'])
+        return service_client.ResponseBody(resp, body)
 
     def get_domain(self, domain_id):
         """Get Domain details."""
         resp, body = self.get('domains/%s' % domain_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['domain'])
+        return service_client.ResponseBody(resp, body)
 
     def get_token(self, resp_token):
         """Get token details."""
@@ -289,7 +289,7 @@
         resp, body = self.get("auth/tokens", headers=headers)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['token'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_token(self, resp_token):
         """Deletes token."""
@@ -313,25 +313,25 @@
         resp, body = self.post('groups', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['group'])
+        return service_client.ResponseBody(resp, body)
 
     def get_group(self, group_id):
         """Get group details."""
         resp, body = self.get('groups/%s' % group_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['group'])
+        return service_client.ResponseBody(resp, body)
 
     def list_groups(self):
         """Lists the groups."""
         resp, body = self.get('groups')
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['groups'])
+        return service_client.ResponseBody(resp, body)
 
     def update_group(self, group_id, **kwargs):
         """Updates a group."""
-        body = self.get_group(group_id)
+        body = self.get_group(group_id)['group']
         name = kwargs.get('name', body['name'])
         description = kwargs.get('description', body['description'])
         post_body = {
@@ -342,7 +342,7 @@
         resp, body = self.patch('groups/%s' % group_id, post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['group'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_group(self, group_id):
         """Delete a group."""
@@ -362,14 +362,14 @@
         resp, body = self.get('groups/%s/users' % group_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['users'])
+        return service_client.ResponseBody(resp, body)
 
     def list_user_groups(self, user_id):
         """Lists groups which a user belongs to."""
         resp, body = self.get('users/%s/groups' % user_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['groups'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_group_user(self, group_id, user_id):
         """Delete user in group."""
@@ -397,7 +397,7 @@
                               (project_id, user_id))
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['roles'])
+        return service_client.ResponseBody(resp, body)
 
     def list_user_roles_on_domain(self, domain_id, user_id):
         """List roles of a user on a domain."""
@@ -405,7 +405,7 @@
                               (domain_id, user_id))
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['roles'])
+        return service_client.ResponseBody(resp, body)
 
     def revoke_role_from_user_on_project(self, project_id, user_id, role_id):
         """Delete role of a user on a project."""
@@ -441,7 +441,7 @@
                               (project_id, group_id))
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['roles'])
+        return service_client.ResponseBody(resp, body)
 
     def list_group_roles_on_domain(self, domain_id, group_id):
         """List roles of a group on a domain."""
@@ -449,7 +449,7 @@
                               (domain_id, group_id))
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['roles'])
+        return service_client.ResponseBody(resp, body)
 
     def revoke_role_from_group_on_project(self, project_id, group_id, role_id):
         """Delete role of a group on a project."""
@@ -481,7 +481,7 @@
         resp, body = self.post('OS-TRUST/trusts', post_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['trust'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_trust(self, trust_id):
         """Deletes a trust."""
@@ -501,21 +501,21 @@
             resp, body = self.get("OS-TRUST/trusts")
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['trusts'])
+        return service_client.ResponseBody(resp, body)
 
     def get_trust(self, trust_id):
         """GET trust."""
         resp, body = self.get("OS-TRUST/trusts/%s" % trust_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['trust'])
+        return service_client.ResponseBody(resp, body)
 
     def get_trust_roles(self, trust_id):
         """GET roles delegated by a trust."""
         resp, body = self.get("OS-TRUST/trusts/%s/roles" % trust_id)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['roles'])
+        return service_client.ResponseBody(resp, body)
 
     def get_trust_role(self, trust_id, role_id):
         """GET role delegated by a trust."""
@@ -523,7 +523,7 @@
                               % (trust_id, role_id))
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['role'])
+        return service_client.ResponseBody(resp, body)
 
     def check_trust_role(self, trust_id, role_id):
         """HEAD Check if role is delegated by a trust."""
diff --git a/tempest/services/identity/v3/json/region_client.py b/tempest/services/identity/v3/json/region_client.py
index 43226be..24c6f33 100644
--- a/tempest/services/identity/v3/json/region_client.py
+++ b/tempest/services/identity/v3/json/region_client.py
@@ -37,7 +37,7 @@
             resp, body = self.post('regions', req_body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['region'])
+        return service_client.ResponseBody(resp, body)
 
     def update_region(self, region_id, **kwargs):
         """Updates a region."""
@@ -50,7 +50,7 @@
         resp, body = self.patch('regions/%s' % region_id, post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['region'])
+        return service_client.ResponseBody(resp, body)
 
     def get_region(self, region_id):
         """Get region."""
@@ -58,7 +58,7 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['region'])
+        return service_client.ResponseBody(resp, body)
 
     def list_regions(self, params=None):
         """List regions."""
@@ -68,7 +68,7 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['regions'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_region(self, region_id):
         """Delete region."""
diff --git a/tempest/services/identity/v3/json/service_client.py b/tempest/services/identity/v3/json/service_client.py
index 52ff479..2acc3a8 100644
--- a/tempest/services/identity/v3/json/service_client.py
+++ b/tempest/services/identity/v3/json/service_client.py
@@ -23,7 +23,7 @@
 
     def update_service(self, service_id, **kwargs):
         """Updates a service."""
-        body = self.get_service(service_id)
+        body = self.get_service(service_id)['service']
         name = kwargs.get('name', body['name'])
         type = kwargs.get('type', body['type'])
         desc = kwargs.get('description', body['description'])
@@ -36,7 +36,7 @@
         resp, body = self.patch('services/%s' % service_id, patch_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['service'])
+        return service_client.ResponseBody(resp, body)
 
     def get_service(self, service_id):
         """Get Service."""
@@ -44,7 +44,7 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['service'])
+        return service_client.ResponseBody(resp, body)
 
     def create_service(self, serv_type, name=None, description=None,
                        enabled=True):
@@ -58,7 +58,7 @@
         resp, body = self.post("services", body)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body["service"])
+        return service_client.ResponseBody(resp, body)
 
     def delete_service(self, serv_id):
         url = "services/" + serv_id
@@ -70,4 +70,4 @@
         resp, body = self.get('services')
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['services'])
+        return service_client.ResponseBody(resp, body)
diff --git a/tempest/services/image/v1/json/image_client.py b/tempest/services/image/v1/json/image_client.py
index a07612a..d97da36 100644
--- a/tempest/services/image/v1/json/image_client.py
+++ b/tempest/services/image/v1/json/image_client.py
@@ -130,7 +130,7 @@
         self._error_checker('POST', '/v1/images', headers, data, resp,
                             body_iter)
         body = json.loads(''.join([c for c in body_iter]))
-        return service_client.ResponseBody(resp, body['image'])
+        return service_client.ResponseBody(resp, body)
 
     def _update_with_data(self, image_id, headers, data):
         url = '/v1/images/%s' % image_id
@@ -139,7 +139,7 @@
         self._error_checker('PUT', url, headers, data,
                             resp, body_iter)
         body = json.loads(''.join([c for c in body_iter]))
-        return service_client.ResponseBody(resp, body['image'])
+        return service_client.ResponseBody(resp, body)
 
     @property
     def http(self):
@@ -169,7 +169,7 @@
         resp, body = self.post('v1/images', None, headers)
         self.expected_success(201, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['image'])
+        return service_client.ResponseBody(resp, body)
 
     def update_image(self, image_id, name=None, container_format=None,
                      data=None, properties=None):
@@ -193,7 +193,7 @@
         resp, body = self.put(url, data, headers)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['image'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_image(self, image_id):
         url = 'v1/images/%s' % image_id
@@ -223,7 +223,7 @@
         resp, body = self.get(url)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBodyList(resp, body['images'])
+        return service_client.ResponseBody(resp, body)
 
     def get_image_meta(self, image_id):
         url = 'v1/images/%s' % image_id
diff --git a/tempest/services/object_storage/container_client.py b/tempest/services/object_storage/container_client.py
index b31fe1b..e8ee20b 100644
--- a/tempest/services/object_storage/container_client.py
+++ b/tempest/services/object_storage/container_client.py
@@ -119,24 +119,6 @@
             params={'limit': limit, 'format': 'json'})
         self.expected_success(200, resp.status)
         return objlist
-        """tmp = []
-        for obj in objlist:
-            tmp.append(obj['name'])
-        objlist = tmp
-
-        if len(objlist) >= limit:
-
-            # Increment marker
-            marker = objlist[len(objlist) - 1]
-
-            # Get the next chunk of the list
-            objlist.extend(_list_all_container_objects(container,
-                                                      params={'marker': marker,
-                                                              'limit': limit}))
-            return objlist
-        else:
-            # Return final, complete list
-            return objlist"""
 
     def list_container_contents(self, container, params=None):
         """
diff --git a/tempest/services/volume/json/admin/volume_hosts_client.py b/tempest/services/volume/json/admin/volume_hosts_client.py
index 6801453..ab9cd5a 100644
--- a/tempest/services/volume/json/admin/volume_hosts_client.py
+++ b/tempest/services/volume/json/admin/volume_hosts_client.py
@@ -34,7 +34,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBodyList(resp, body['hosts'])
+        return service_client.ResponseBody(resp, body)
 
 
 class VolumeHostsClient(BaseVolumeHostsClient):
diff --git a/tempest/services/volume/json/availability_zone_client.py b/tempest/services/volume/json/availability_zone_client.py
index 13d5d55..4d24ede 100644
--- a/tempest/services/volume/json/availability_zone_client.py
+++ b/tempest/services/volume/json/availability_zone_client.py
@@ -24,7 +24,7 @@
         resp, body = self.get('os-availability-zone')
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['availabilityZoneInfo'])
+        return service_client.ResponseBody(resp, body)
 
 
 class VolumeAvailabilityZoneClient(BaseVolumeAvailabilityZoneClient):
diff --git a/tempest/services/volume/json/extensions_client.py b/tempest/services/volume/json/extensions_client.py
index 1098e1e..5744d4a 100644
--- a/tempest/services/volume/json/extensions_client.py
+++ b/tempest/services/volume/json/extensions_client.py
@@ -25,7 +25,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBodyList(resp, body['extensions'])
+        return service_client.ResponseBody(resp, body)
 
 
 class ExtensionsClient(BaseExtensionsClient):
diff --git a/tempest/services/volume/json/qos_client.py b/tempest/services/volume/json/qos_client.py
index e3d6a29..c79168c 100644
--- a/tempest/services/volume/json/qos_client.py
+++ b/tempest/services/volume/json/qos_client.py
@@ -48,15 +48,15 @@
         start_time = int(time.time())
         while True:
             if operation == 'qos-key-unset':
-                body = self.show_qos(qos_id)
+                body = self.show_qos(qos_id)['qos_specs']
                 if not any(key in body['specs'] for key in args):
                     return
             elif operation == 'disassociate':
-                body = self.show_association_qos(qos_id)
+                body = self.show_association_qos(qos_id)['qos_associations']
                 if not any(args in body[i]['id'] for i in range(0, len(body))):
                     return
             elif operation == 'disassociate-all':
-                body = self.show_association_qos(qos_id)
+                body = self.show_association_qos(qos_id)['qos_associations']
                 if not body:
                     return
             else:
@@ -79,7 +79,7 @@
         resp, body = self.post('qos-specs', post_body)
         self.expected_success(200, resp.status)
         body = json.loads(body)
-        return service_client.ResponseBody(resp, body['qos_specs'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_qos(self, qos_id, force=False):
         """Delete the specified QoS specification."""
@@ -94,7 +94,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBodyList(resp, body['qos_specs'])
+        return service_client.ResponseBody(resp, body)
 
     def show_qos(self, qos_id):
         """Get the specified QoS specification."""
@@ -102,7 +102,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['qos_specs'])
+        return service_client.ResponseBody(resp, body)
 
     def set_qos_key(self, qos_id, **kwargs):
         """Set the specified keys/values of QoS specification.
@@ -113,7 +113,7 @@
         resp, body = self.put('qos-specs/%s' % qos_id, put_body)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['qos_specs'])
+        return service_client.ResponseBody(resp, body)
 
     def unset_qos_key(self, qos_id, keys):
         """Unset the specified keys of QoS specification.
@@ -139,7 +139,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBodyList(resp, body['qos_associations'])
+        return service_client.ResponseBody(resp, body)
 
     def disassociate_qos(self, qos_id, vol_type_id):
         """Disassociate the specified QoS with specified volume-type."""
diff --git a/tempest/services/volume/json/snapshots_client.py b/tempest/services/volume/json/snapshots_client.py
index fa1f9dd..3fcf18c 100644
--- a/tempest/services/volume/json/snapshots_client.py
+++ b/tempest/services/volume/json/snapshots_client.py
@@ -40,7 +40,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBodyList(resp, body['snapshots'])
+        return service_client.ResponseBody(resp, body)
 
     def show_snapshot(self, snapshot_id):
         """Returns the details of a single snapshot."""
@@ -48,7 +48,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['snapshot'])
+        return service_client.ResponseBody(resp, body)
 
     def create_snapshot(self, volume_id, **kwargs):
         """
@@ -64,7 +64,7 @@
         resp, body = self.post('snapshots', post_body)
         body = json.loads(body)
         self.expected_success(self.create_resp, resp.status)
-        return service_client.ResponseBody(resp, body['snapshot'])
+        return service_client.ResponseBody(resp, body)
 
     def update_snapshot(self, snapshot_id, **kwargs):
         """Updates a snapshot."""
@@ -72,11 +72,11 @@
         resp, body = self.put('snapshots/%s' % snapshot_id, put_body)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['snapshot'])
+        return service_client.ResponseBody(resp, body)
 
     # NOTE(afazekas): just for the wait function
     def _get_snapshot_status(self, snapshot_id):
-        body = self.show_snapshot(snapshot_id)
+        body = self.show_snapshot(snapshot_id)['snapshot']
         status = body['status']
         # NOTE(afazekas): snapshot can reach an "error"
         # state in a "normal" lifecycle
@@ -155,7 +155,7 @@
         resp, body = self.post(url, put_body)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['metadata'])
+        return service_client.ResponseBody(resp, body)
 
     def show_snapshot_metadata(self, snapshot_id):
         """Get metadata of the snapshot."""
@@ -163,7 +163,7 @@
         resp, body = self.get(url)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['metadata'])
+        return service_client.ResponseBody(resp, body)
 
     def update_snapshot_metadata(self, snapshot_id, metadata):
         """Update metadata for the snapshot."""
@@ -172,7 +172,7 @@
         resp, body = self.put(url, put_body)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['metadata'])
+        return service_client.ResponseBody(resp, body)
 
     def update_snapshot_metadata_item(self, snapshot_id, id, meta_item):
         """Update metadata item for the snapshot."""
@@ -181,7 +181,7 @@
         resp, body = self.put(url, put_body)
         body = json.loads(body)
         self.expected_success(200, resp.status)
-        return service_client.ResponseBody(resp, body['meta'])
+        return service_client.ResponseBody(resp, body)
 
     def delete_snapshot_metadata_item(self, snapshot_id, id):
         """Delete metadata item for the snapshot."""
diff --git a/tempest/stress/actions/server_create_destroy.py b/tempest/stress/actions/server_create_destroy.py
index 9f41526..17f4bc9 100644
--- a/tempest/stress/actions/server_create_destroy.py
+++ b/tempest/stress/actions/server_create_destroy.py
@@ -37,5 +37,6 @@
         self.logger.info("created %s" % server_id)
         self.logger.info("deleting %s" % name)
         self.manager.servers_client.delete_server(server_id)
-        self.manager.servers_client.wait_for_server_termination(server_id)
+        waiters.wait_for_server_termination(self.manager.servers_client,
+                                            server_id)
         self.logger.info("deleted %s" % server_id)
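The same substitution is applied to the other stress actions below: the wait moves from a servers-client method to the shared waiters helper, which takes the client as its first argument. A minimal sketch of the new calling convention (``servers_client`` and ``server_id`` are assumed to come from the surrounding manager setup) ::

    from tempest.common import waiters

    servers_client.delete_server(server_id)
    # Polls the servers client until the server is fully gone.
    waiters.wait_for_server_termination(servers_client, server_id)
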
diff --git a/tempest/stress/actions/ssh_floating.py b/tempest/stress/actions/ssh_floating.py
index 03a2d27..2a7a85c 100644
--- a/tempest/stress/actions/ssh_floating.py
+++ b/tempest/stress/actions/ssh_floating.py
@@ -86,7 +86,8 @@
     def _destroy_vm(self):
         self.logger.info("deleting %s" % self.server_id)
         self.manager.servers_client.delete_server(self.server_id)
-        self.manager.servers_client.wait_for_server_termination(self.server_id)
+        waiters.wait_for_server_termination(self.manager.servers_client,
+                                            self.server_id)
         self.logger.info("deleted %s" % self.server_id)
 
     def _create_sec_group(self):
@@ -96,8 +97,10 @@
         self.sec_grp = sec_grp_cli.create_security_group(
             name=s_name, description=s_description)
         create_rule = sec_grp_cli.create_security_group_rule
-        create_rule(self.sec_grp['id'], 'tcp', 22, 22)
-        create_rule(self.sec_grp['id'], 'icmp', -1, -1)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
+                    from_port=22, to_port=22)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
+                    from_port=-1, to_port=-1)
 
     def _destroy_sec_grp(self):
         sec_grp_cli = self.manager.security_groups_client
diff --git a/tempest/stress/actions/volume_attach_delete.py b/tempest/stress/actions/volume_attach_delete.py
index d6965c7..68e2989 100644
--- a/tempest/stress/actions/volume_attach_delete.py
+++ b/tempest/stress/actions/volume_attach_delete.py
@@ -49,8 +49,8 @@
         self.logger.info("attach volume (%s) to vm %s" %
                          (volume['id'], server_id))
         self.manager.servers_client.attach_volume(server_id,
-                                                  volume['id'],
-                                                  '/dev/vdc')
+                                                  volumeId=volume['id'],
+                                                  device='/dev/vdc')
         self.manager.volumes_client.wait_for_volume_status(volume['id'],
                                                            'in-use')
         self.logger.info("volume (%s) attached to vm %s" %
@@ -59,7 +59,8 @@
         # Step 4: delete vm
         self.logger.info("deleting vm: %s" % vm_name)
         self.manager.servers_client.delete_server(server_id)
-        self.manager.servers_client.wait_for_server_termination(server_id)
+        waiters.wait_for_server_termination(self.manager.servers_client,
+                                            server_id)
         self.logger.info("deleted vm: %s" % server_id)
 
         # Step 5: delete volume
diff --git a/tempest/stress/actions/volume_attach_verify.py b/tempest/stress/actions/volume_attach_verify.py
index 93a443e..038569a 100644
--- a/tempest/stress/actions/volume_attach_verify.py
+++ b/tempest/stress/actions/volume_attach_verify.py
@@ -26,7 +26,8 @@
 
     def _create_keypair(self):
         keyname = data_utils.rand_name("key")
-        self.key = self.manager.keypairs_client.create_keypair(name=keyname)
+        self.key = (self.manager.keypairs_client.create_keypair(name=keyname)
+                    ['keypair'])
 
     def _delete_keypair(self):
         self.manager.keypairs_client.delete_keypair(self.key['name'])
@@ -48,7 +49,8 @@
     def _destroy_vm(self):
         self.logger.info("deleting server: %s" % self.server_id)
         self.manager.servers_client.delete_server(self.server_id)
-        self.manager.servers_client.wait_for_server_termination(self.server_id)
+        waiters.wait_for_server_termination(self.manager.servers_client,
+                                            self.server_id)
         self.logger.info("deleted server: %s" % self.server_id)
 
     def _create_sec_group(self):
@@ -58,8 +60,10 @@
         self.sec_grp = sec_grp_cli.create_security_group(
             name=s_name, description=s_description)
         create_rule = sec_grp_cli.create_security_group_rule
-        create_rule(self.sec_grp['id'], 'tcp', 22, 22)
-        create_rule(self.sec_grp['id'], 'icmp', -1, -1)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
+                    from_port=22, to_port=22)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
+                    from_port=-1, to_port=-1)
 
     def _destroy_sec_grp(self):
         sec_grp_cli = self.manager.security_groups_client
@@ -163,7 +167,7 @@
         if not self.new_server:
             self.new_server_ops()
 
-    # now we just test is number of partition increased or decrised
+    # now we just test that the number of partitions has increased or decreased
     def part_wait(self, num_match):
         def _part_state():
             self.partitions = self.remote_client.get_partitions().split('\n')
@@ -189,8 +193,8 @@
         self.logger.info("attach volume (%s) to vm %s" %
                          (self.volume['id'], self.server_id))
         servers_client.attach_volume(self.server_id,
-                                     self.volume['id'],
-                                     self.part_name)
+                                     volumeId=self.volume['id'],
+                                     device=self.part_name)
         self.manager.volumes_client.wait_for_volume_status(self.volume['id'],
                                                            'in-use')
         if self.enable_ssh_verify:
@@ -203,7 +207,7 @@
         self.manager.volumes_client.wait_for_volume_status(self.volume['id'],
                                                            'available')
         if self.enable_ssh_verify:
-            self.logger.info("Scanning for block device disapperance on %s"
+            self.logger.info("Scanning for block device disappearance on %s"
                              % self.server_id)
             self.part_wait(self.detach_match_count)
         if self.new_volume:
diff --git a/tempest/stress/cleanup.py b/tempest/stress/cleanup.py
index b785156..9e21326 100644
--- a/tempest/stress/cleanup.py
+++ b/tempest/stress/cleanup.py
@@ -17,6 +17,7 @@
 from oslo_log import log as logging
 
 from tempest import clients
+from tempest.common import waiters
 
 LOG = logging.getLogger(__name__)
 
@@ -34,11 +35,12 @@
 
     for s in body['servers']:
         try:
-            admin_manager.servers_client.wait_for_server_termination(s['id'])
+            waiters.wait_for_server_termination(admin_manager.servers_client,
+                                                s['id'])
         except Exception:
             pass
 
-    keypairs = admin_manager.keypairs_client.list_keypairs()
+    keypairs = admin_manager.keypairs_client.list_keypairs()['keypairs']
     LOG.info("Cleanup::remove %s keypairs" % len(keypairs))
     for k in keypairs:
         try:
@@ -70,7 +72,7 @@
         if user['name'].startswith("stress_user"):
             admin_manager.identity_client.delete_user(user['id'])
 
-    tenants = admin_manager.identity_client.list_tenants()
+    tenants = admin_manager.identity_client.list_tenants()['tenants']
     LOG.info("Cleanup::remove %s tenants" % len(tenants))
     for tenant in tenants:
         if tenant['name'].startswith("stress_tenant"):
@@ -79,8 +81,8 @@
     # We have to delete snapshots first or
     # volume deletion may block
 
-    _, snaps = admin_manager.snapshots_client.\
-        list_snapshots(params={"all_tenants": True})
+    snaps = admin_manager.snapshots_client.list_snapshots(
+        params={"all_tenants": True})['snapshots']
     LOG.info("Cleanup::remove %s snapshots" % len(snaps))
     for v in snaps:
         try:
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index 45cd609..640b004 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -51,6 +51,16 @@
         """
         return
 
+    @abc.abstractmethod
+    def get_opt_lists(self):
+        """Method to get a list of options for sample config generation
+
+        :return option_list: A list of tuples with the group name and options
+                             in that group.
+        :rtype: list
+        """
+        return []
+
 
 @misc.singleton
 class TempestTestPluginManager(object):
@@ -79,3 +89,11 @@
     def register_plugin_opts(self, conf):
         for plug in self.ext_plugins:
             plug.obj.register_opts(conf)
+
+    def get_plugin_options_list(self):
+        plugin_options = []
+        for plug in self.ext_plugins:
+            opt_list = plug.obj.get_opt_lists()
+            if opt_list:
+                plugin_options.extend(opt_list)
+        return plugin_options
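A plugin wires into the new hook by returning (group name, options) tuples, which ``get_plugin_options_list()`` then feeds into sample-config generation. A hypothetical plugin-side sketch; the class, group, and option names are illustrative only, and the other two hooks are stubbed merely to keep the example self-contained ::

    from oslo_config import cfg

    from tempest.test_discover import plugins

    my_service_opts = [
        cfg.BoolOpt('my_service', default=False,
                    help="Whether my service is expected to be available"),
    ]

    class MyPlugin(plugins.TempestPlugin):
        def load_tests(self):
            # (test dir, top level dir) for this plugin's tests.
            return 'my_plugin/tests', 'my_plugin'

        def register_opts(self, conf):
            conf.register_opts(my_service_opts, group='service_available')

        def get_opt_lists(self):
            # One tuple per config group this plugin contributes options to.
            return [('service_available', my_service_opts)]
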
diff --git a/tempest/tests/cmd/test_javelin.py b/tempest/tests/cmd/test_javelin.py
index 3a3e46e..d1dee54 100644
--- a/tempest/tests/cmd/test_javelin.py
+++ b/tempest/tests/cmd/test_javelin.py
@@ -85,7 +85,7 @@
 class TestCreateResources(JavelinUnitTest):
     def test_create_tenants(self):
 
-        self.fake_client.identity.list_tenants.return_value = []
+        self.fake_client.identity.list_tenants.return_value = {'tenants': []}
         self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
                                               return_value=self.fake_client))
 
@@ -95,8 +95,8 @@
         mocked_function.assert_called_once_with(self.fake_object['name'])
 
     def test_create_duplicate_tenant(self):
-        self.fake_client.identity.list_tenants.return_value = [
-            {'name': self.fake_object['name']}]
+        self.fake_client.identity.list_tenants.return_value = {'tenants': [
+            {'name': self.fake_object['name']}]}
         self.useFixture(mockpatch.PatchObject(javelin, "keystone_admin",
                                               return_value=self.fake_client))
 
diff --git a/tempest/tests/test_waiters.py b/tempest/tests/common/test_waiters.py
similarity index 65%
rename from tempest/tests/test_waiters.py
rename to tempest/tests/common/test_waiters.py
index 329d610..7aa6595 100644
--- a/tempest/tests/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -18,6 +18,7 @@
 
 from tempest.common import waiters
 from tempest import exceptions
+from tempest.services.volume.json import volumes_client
 from tempest.tests import base
 
 
@@ -47,3 +48,21 @@
         self.assertRaises(exceptions.AddImageException,
                           waiters.wait_for_image_status,
                           self.client, 'fake_image_id', 'active')
+
+    @mock.patch.object(time, 'sleep')
+    def test_wait_for_volume_status_error_restoring(self, mock_sleep):
+        # Tests that the wait method raises VolumeRestoreErrorException if
+        # the volume status is 'error_restoring'.
+        client = mock.Mock(spec=volumes_client.BaseVolumesClient,
+                           build_interval=1)
+        volume1 = {'status': 'restoring-backup'}
+        volume2 = {'status': 'error_restoring'}
+        mock_show = mock.Mock(side_effect=(volume1, volume2))
+        client.show_volume = mock_show
+        volume_id = '7532b91e-aa0a-4e06-b3e5-20c0c5ee1caa'
+        self.assertRaises(exceptions.VolumeRestoreErrorException,
+                          waiters.wait_for_volume_status,
+                          client, volume_id, 'available')
+        mock_show.assert_has_calls([mock.call(volume_id),
+                                    mock.call(volume_id)])
+        mock_sleep.assert_called_once_with(1)
diff --git a/tempest/tests/services/compute/test_agents_client.py b/tempest/tests/services/compute/test_agents_client.py
index 8316c90..d14d8bf 100644
--- a/tempest/tests/services/compute/test_agents_client.py
+++ b/tempest/tests/services/compute/test_agents_client.py
@@ -34,7 +34,7 @@
         body = '{"agents": []}'
         if bytes_body:
             body = body.encode('utf-8')
-        expected = []
+        expected = {"agents": []}
         response = (httplib2.Response({'status': 200}), body)
         self.useFixture(mockpatch.Patch(
             'tempest.common.service_client.ServiceClient.get',
@@ -42,10 +42,11 @@
         self.assertEqual(expected, self.client.list_agents())
 
     def _test_create_agent(self, bytes_body=False):
-        expected = {"url": "http://foo.com", "hypervisor": "kvm",
-                    "md5hash": "md5", "version": "2", "architecture": "x86_64",
-                    "os": "linux", "agent_id": 1}
-        serialized_body = json.dumps({"agent": expected})
+        expected = {"agent": {"url": "http://foo.com", "hypervisor": "kvm",
+                              "md5hash": "md5", "version": "2",
+                              "architecture": "x86_64",
+                              "os": "linux", "agent_id": 1}}
+        serialized_body = json.dumps(expected)
         if bytes_body:
             serialized_body = serialized_body.encode('utf-8')
 
@@ -67,9 +68,9 @@
         self.client.delete_agent("1")
 
     def _test_update_agent(self, bytes_body=False):
-        expected = {"url": "http://foo.com", "md5hash": "md5", "version": "2",
-                    "agent_id": 1}
-        serialized_body = json.dumps({"agent": expected})
+        expected = {"agent": {"url": "http://foo.com", "md5hash": "md5",
+                              "version": "2", "agent_id": 1}}
+        serialized_body = json.dumps(expected)
         if bytes_body:
             serialized_body = serialized_body.encode('utf-8')
 
diff --git a/tempest/tests/services/compute/test_aggregates_client.py b/tempest/tests/services/compute/test_aggregates_client.py
index 9fe4544..eacc251 100644
--- a/tempest/tests/services/compute/test_aggregates_client.py
+++ b/tempest/tests/services/compute/test_aggregates_client.py
@@ -14,6 +14,7 @@
 
 import httplib2
 
+from oslo_serialization import jsonutils as json
 from oslotest import mockpatch
 
 from tempest.services.compute.json import aggregates_client
@@ -45,3 +46,92 @@
 
     def test_list_aggregates_with_bytes_body(self):
         self._test_list_aggregates(bytes_body=True)
+
+    def _test_show_aggregate(self, bytes_body=False):
+        expected = {"name": "hoge",
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at":
+                    "2015-07-16T03:07:32.000000",
+                    "updated_at": None,
+                    "hosts": [],
+                    "deleted_at": None,
+                    "id": 1,
+                    "metadata": {}}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_aggregate(1)
+        self.assertEqual(expected, resp)
+
+    def test_show_aggregate_with_str_body(self):
+        self._test_show_aggregate()
+
+    def test_show_aggregate_with_bytes_body(self):
+        self._test_show_aggregate(bytes_body=True)
+
+    def _test_create_aggregate(self, bytes_body=False):
+        expected = {"name": u'\xf4',
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at": "2015-07-21T04:11:18.000000",
+                    "updated_at": None,
+                    "deleted_at": None,
+                    "id": 1}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.post',
+            return_value=mocked_resp))
+        resp = self.client.create_aggregate(name='hoge')
+        self.assertEqual(expected, resp)
+
+    def test_create_aggregate_with_str_body(self):
+        self._test_create_aggregate()
+
+    def test_create_aggregate_with_bytes_body(self):
+        self._test_create_aggregate(bytes_body=True)
+
+    def test_delete_aggregate(self):
+        expected = {}
+        mocked_resp = (httplib2.Response({'status': 200}), None)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.delete',
+            return_value=mocked_resp))
+        resp = self.client.delete_aggregate("1")
+        self.assertEqual(expected, resp)
+
+    def _test_update_aggregate(self, bytes_body=False):
+        expected = {"name": u'\xe9',
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at": "2015-07-16T03:07:32.000000",
+                    "updated_at": "2015-07-23T05:16:29.000000",
+                    "hosts": [],
+                    "deleted_at": None,
+                    "id": 1,
+                    "metadata": {}}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.update_aggregate(1)
+        self.assertEqual(expected, resp)
+
+    def test_update_aggregate_with_str_body(self):
+        self._test_update_aggregate()
+
+    def test_update_aggregate_with_bytes_body(self):
+        self._test_update_aggregate(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_extensions_client.py b/tempest/tests/services/compute/test_extensions_client.py
new file mode 100644
index 0000000..aa46efa
--- /dev/null
+++ b/tempest/tests/services/compute/test_extensions_client.py
@@ -0,0 +1,75 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import extensions_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestExtensionsClient(base.TestCase):
+
+    def setUp(self):
+        super(TestExtensionsClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = extensions_client.ExtensionsClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_list_extensions(self, bytes_body=False):
+        body = '{"extensions": []}'
+        if bytes_body:
+            body = body.encode('utf-8')
+        expected = []
+        response = (httplib2.Response({'status': 200}), body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=response))
+        self.assertEqual(expected, self.client.list_extensions())
+
+    def test_list_extensions_with_str_body(self):
+        self._test_list_extensions()
+
+    def test_list_extensions_with_bytes_body(self):
+        self._test_list_extensions(bytes_body=True)
+
+    def _test_show_extension(self, bytes_body=False):
+        expected = {
+            "updated": "2011-06-09T00:00:00Z",
+            "name": "Multinic",
+            "links": [],
+            "namespace":
+            "http://docs.openstack.org/compute/ext/multinic/api/v1.1",
+            "alias": "NMN",
+            "description": u'\u2740(*\xb4\u25e1`*)\u2740'
+        }
+        serialized_body = json.dumps({"extension": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_extension("NMN")
+        self.assertEqual(expected, resp)
+
+    def test_show_extension_with_str_body(self):
+        self._test_show_extension()
+
+    def test_show_extension_with_bytes_body(self):
+        self._test_show_extension(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_keypairs_client.py b/tempest/tests/services/compute/test_keypairs_client.py
index e79e411..8a0edd0 100644
--- a/tempest/tests/services/compute/test_keypairs_client.py
+++ b/tempest/tests/services/compute/test_keypairs_client.py
@@ -12,8 +12,10 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import copy
 import httplib2
 
+from oslo_serialization import jsonutils as json
 from oslotest import mockpatch
 
 from tempest.services.compute.json import keypairs_client
@@ -23,6 +25,13 @@
 
 class TestKeyPairsClient(base.TestCase):
 
+    FAKE_KEYPAIR = {"keypair": {
+        "public_key": "ssh-rsa foo Generated-by-Nova",
+        "name": u'\u2740(*\xb4\u25e1`*)\u2740',
+        "user_id": "525d55f98980415ba98e634972fa4a10",
+        "fingerprint": "76:24:66:49:d7:ca:6e:5c:77:ea:8e:bb:9c:15:5f:98"
+        }}
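+    # The show/create tests below deep-copy this fixture and extend it with
+    # the extra fields each API call is expected to return.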
+
     def setUp(self):
         super(TestKeyPairsClient, self).setUp()
         fake_auth = fake_auth_provider.FakeAuthProvider()
@@ -33,7 +42,7 @@
         body = '{"keypairs": []}'
         if bytes_body:
             body = body.encode('utf-8')
-        expected = []
+        expected = {"keypairs": []}
         response = (httplib2.Response({'status': 200}), body)
         self.useFixture(mockpatch.Patch(
             'tempest.common.service_client.ServiceClient.get',
@@ -45,3 +54,58 @@
 
     def test_list_keypairs_with_bytes_body(self):
         self._test_list_keypairs(bytes_body=True)
+
+    def _test_show_keypair(self, bytes_body=False):
+        fake_keypair = copy.deepcopy(self.FAKE_KEYPAIR)
+        fake_keypair["keypair"].update({
+            "deleted": False,
+            "created_at": "2015-07-22T04:53:52.000000",
+            "updated_at": None,
+            "deleted_at": None,
+            "id": 1
+            })
+        serialized_body = json.dumps(fake_keypair)
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_keypair("test")
+        self.assertEqual(fake_keypair, resp)
+
+    def test_show_keypair_with_str_body(self):
+        self._test_show_keypair()
+
+    def test_show_keypair_with_bytes_body(self):
+        self._test_show_keypair(bytes_body=True)
+
+    def _test_create_keypair(self, bytes_body=False):
+        fake_keypair = copy.deepcopy(self.FAKE_KEYPAIR)
+        fake_keypair["keypair"].update({"private_key": "foo"})
+        serialized_body = json.dumps(fake_keypair)
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.post',
+            return_value=mocked_resp))
+        resp = self.client.create_keypair(name='test')
+        self.assertEqual(fake_keypair, resp)
+
+    def test_create_keypair_with_str_body(self):
+        self._test_create_keypair()
+
+    def test_create_keypair_with_bytes_body(self):
+        self._test_create_keypair(bytes_body=True)
+
+    def test_delete_keypair(self):
+        expected = {}
+        mocked_resp = (httplib2.Response({'status': 202}), None)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.delete',
+            return_value=mocked_resp))
+        resp = self.client.delete_keypair('test')
+        self.assertEqual(expected, resp)
diff --git a/tempest/tests/services/compute/test_limits_client.py b/tempest/tests/services/compute/test_limits_client.py
new file mode 100644
index 0000000..4086210
--- /dev/null
+++ b/tempest/tests/services/compute/test_limits_client.py
@@ -0,0 +1,69 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import limits_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestLimitsClient(base.TestCase):
+
+    def setUp(self):
+        super(TestLimitsClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = limits_client.LimitsClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_show_limits(self, bytes_body=False):
+        expected = {"rate": [],
+                    "absolute": {"maxServerMeta": 128,
+                                 "maxPersonality": 5,
+                                 "totalServerGroupsUsed": 0,
+                                 "maxImageMeta": 128,
+                                 "maxPersonalitySize": 10240,
+                                 "maxServerGroups": 10,
+                                 "maxSecurityGroupRules": 20,
+                                 "maxTotalKeypairs": 100,
+                                 "totalCoresUsed": 0,
+                                 "totalRAMUsed": 0,
+                                 "totalInstancesUsed": 0,
+                                 "maxSecurityGroups": 10,
+                                 "totalFloatingIpsUsed": 0,
+                                 "maxTotalCores": 20,
+                                 "totalSecurityGroupsUsed": 0,
+                                 "maxTotalFloatingIps": 10,
+                                 "maxTotalInstances": 10,
+                                 "maxTotalRAMSize": 51200,
+                                 "maxServerGroupMembers": 10}}
+        serialized_body = json.dumps({"limits": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_limits()
+        self.assertEqual(expected, resp)
+
+    def test_show_limits_with_str_body(self):
+        self._test_show_limits()
+
+    def test_show_limits_with_bytes_body(self):
+        self._test_show_limits(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_networks_client.py b/tempest/tests/services/compute/test_networks_client.py
new file mode 100644
index 0000000..5e57dea
--- /dev/null
+++ b/tempest/tests/services/compute/test_networks_client.py
@@ -0,0 +1,106 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import networks_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestNetworksClient(base.TestCase):
+
+    FAKE_NETWORK = {
+        "bridge": None,
+        "vpn_public_port": None,
+        "dhcp_start": None,
+        "bridge_interface": None,
+        "share_address": None,
+        "updated_at": None,
+        "id": "34d5ae1e-5659-49cf-af80-73bccd7d7ad3",
+        "cidr_v6": None,
+        "deleted_at": None,
+        "gateway": None,
+        "rxtx_base": None,
+        "label": u'30d7',
+        "priority": None,
+        "project_id": None,
+        "vpn_private_address": None,
+        "deleted": None,
+        "vlan": None,
+        "broadcast": None,
+        "netmask": None,
+        "injected": None,
+        "cidr": None,
+        "vpn_public_address": None,
+        "multi_host": None,
+        "enable_dhcp": None,
+        "dns2": None,
+        "created_at": None,
+        "host": None,
+        "mtu": None,
+        "gateway_v6": None,
+        "netmask_v6": None,
+        "dhcp_server": None,
+        "dns1": None
+        }
+
+    network_id = "34d5ae1e-5659-49cf-af80-73bccd7d7ad3"
+
+    FAKE_NETWORKS = [FAKE_NETWORK]
+
+    def setUp(self):
+        super(TestNetworksClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = networks_client.NetworksClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_list_networks(self, bytes_body=False):
+        serialized_body = json.dumps({"networks": self.FAKE_NETWORKS})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.list_networks()
+        self.assertEqual(self.FAKE_NETWORKS, resp)
+
+    def test_list_networks_with_str_body(self):
+        self._test_list_networks()
+
+    def test_list_networks_with_bytes_body(self):
+        self._test_list_networks(bytes_body=True)
+
+    def _test_show_network(self, bytes_body=False):
+        serialized_body = json.dumps({"network": self.FAKE_NETWORK})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_network(self.network_id)
+        self.assertEqual(self.FAKE_NETWORK, resp)
+
+    def test_show_network_with_str_body(self):
+        self._test_show_network()
+
+    def test_show_network_with_bytes_body(self):
+        self._test_show_network(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_quota_classes_client.py b/tempest/tests/services/compute/test_quota_classes_client.py
new file mode 100644
index 0000000..ff9b310
--- /dev/null
+++ b/tempest/tests/services/compute/test_quota_classes_client.py
@@ -0,0 +1,81 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import copy
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import quota_classes_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestQuotaClassesClient(base.TestCase):
+
+    FAKE_QUOTA_CLASS_SET = {
+        "injected_file_content_bytes": 10240,
+        "metadata_items": 128,
+        "server_group_members": 10,
+        "server_groups": 10,
+        "ram": 51200,
+        "floating_ips": 10,
+        "key_pairs": 100,
+        "id": u'\u2740(*\xb4\u25e1`*)\u2740',
+        "instances": 10,
+        "security_group_rules": 20,
+        "security_groups": 10,
+        "injected_files": 5,
+        "cores": 20,
+        "fixed_ips": -1,
+        "injected_file_path_bytes": 255,
+        }
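+    # The update test drops the read-only "id" field from a deep copy, since
+    # update_quota_class_set is not expected to return it.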
+
+    def setUp(self):
+        super(TestQuotaClassesClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = quota_classes_client.QuotaClassesClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_show_quota_class_set(self, bytes_body=False):
+        serialized_body = json.dumps({
+            "quota_class_set": self.FAKE_QUOTA_CLASS_SET})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_quota_class_set("test")
+        self.assertEqual(self.FAKE_QUOTA_CLASS_SET, resp)
+
+    def test_show_quota_class_set_with_str_body(self):
+        self._test_show_quota_class_set()
+
+    def test_show_quota_class_set_with_bytes_body(self):
+        self._test_show_quota_class_set(bytes_body=True)
+
+    def test_update_quota_class_set(self):
+        fake_quota_class_set = copy.deepcopy(self.FAKE_QUOTA_CLASS_SET)
+        fake_quota_class_set.pop("id")
+        serialized_body = json.dumps({"quota_class_set": fake_quota_class_set})
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.update_quota_class_set("test")
+        self.assertEqual(fake_quota_class_set, resp)
diff --git a/tempest/tests/services/compute/test_quotas_client.py b/tempest/tests/services/compute/test_quotas_client.py
new file mode 100644
index 0000000..a9bd0a1
--- /dev/null
+++ b/tempest/tests/services/compute/test_quotas_client.py
@@ -0,0 +1,106 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import copy
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import quotas_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestQuotasClient(base.TestCase):
+
+    FAKE_QUOTA_SET = {"injected_file_content_bytes": 10240,
+                      "metadata_items": 128,
+                      "server_group_members": 10,
+                      "server_groups": 10,
+                      "ram": 51200,
+                      "floating_ips": 10,
+                      "key_pairs": 100,
+                      "id": "8421f7be61064f50b680465c07f334af",
+                      "instances": 10,
+                      "security_group_rules": 20,
+                      "injected_files": 5,
+                      "cores": 20,
+                      "fixed_ips": -1,
+                      "injected_file_path_bytes": 255,
+                      "security_groups": 10}
+
+    project_id = "8421f7be61064f50b680465c07f334af"
+
+    def setUp(self):
+        super(TestQuotasClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = quotas_client.QuotasClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_show_quota_set(self, bytes_body=False):
+        serialized_body = json.dumps({"quota_set": self.FAKE_QUOTA_SET})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_quota_set(self.project_id)
+        self.assertEqual(self.FAKE_QUOTA_SET, resp)
+
+    def test_show_quota_set_with_str_body(self):
+        self._test_show_quota_set()
+
+    def test_show_quota_set_with_bytes_body(self):
+        self._test_show_quota_set(bytes_body=True)
+
+    def _test_show_default_quota_set(self, bytes_body=False):
+        serialized_body = json.dumps({"quota_set": self.FAKE_QUOTA_SET})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_default_quota_set(self.project_id)
+        self.assertEqual(self.FAKE_QUOTA_SET, resp)
+
+    def test_show_default_quota_set_with_str_body(self):
+        self._test_show_default_quota_set()
+
+    def test_show_default_quota_set_with_bytes_body(self):
+        self._test_show_default_quota_set(bytes_body=True)
+
+    def test_update_quota_set(self):
+        fake_quota_set = copy.deepcopy(self.FAKE_QUOTA_SET)
+        fake_quota_set.pop("id")
+        serialized_body = json.dumps({"quota_set": fake_quota_set})
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.update_quota_set(self.project_id)
+        self.assertEqual(fake_quota_set, resp)
+
+    def test_delete_quota_set(self):
+        expected = {}
+        mocked_resp = (httplib2.Response({'status': 202}), None)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.delete',
+            return_value=mocked_resp))
+        resp = self.client.delete_quota_set(self.project_id)
+        self.assertEqual(expected, resp)
diff --git a/tempest/tests/services/compute/test_services_client.py b/tempest/tests/services/compute/test_services_client.py
new file mode 100644
index 0000000..61ca830
--- /dev/null
+++ b/tempest/tests/services/compute/test_services_client.py
@@ -0,0 +1,106 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import copy
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import services_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestServicesClient(base.TestCase):
+
+    FAKE_SERVICES = [{
+        "status": "enabled",
+        "binary": "nova-conductor",
+        "zone": "internal",
+        "state": "up",
+        "updated_at": "2015-08-19T06:50:55.000000",
+        "host": "controller",
+        "disabled_reason": None,
+        "id": 1
+        }]
+
+    FAKE_SERVICE = {
+        "status": "enabled",
+        "binary": "nova-conductor",
+        "host": "controller"
+        }
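+    # enable_service and disable_service are both PUT calls, so the tests
+    # below mock ServiceClient.put and compare against the serialized body.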
+
+    def setUp(self):
+        super(TestServicesClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = services_client.ServicesClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_list_services(self, bytes_body=False):
+        serialized_body = json.dumps({"services": self.FAKE_SERVICES})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.list_services()
+        self.assertEqual(self.FAKE_SERVICES, resp)
+
+    def test_list_services_with_str_body(self):
+        self._test_list_services()
+
+    def test_list_services_with_bytes_body(self):
+        self._test_list_services(bytes_body=True)
+
+    def _test_enable_service(self, bytes_body=False):
+        serialized_body = json.dumps({"service": self.FAKE_SERVICE})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.enable_service("nova-conductor", "controller")
+        self.assertEqual(self.FAKE_SERVICE, resp)
+
+    def test_enable_service_with_str_body(self):
+        self._test_enable_service()
+
+    def test_enable_service_with_bytes_body(self):
+        self._test_enable_service(bytes_body=True)
+
+    def _test_disable_service(self, bytes_body=False):
+        fake_service = copy.deepcopy(self.FAKE_SERVICE)
+        fake_service["status"] = "disable"
+
+        serialized_body = json.dumps({"service": self.FAKE_SERVICE})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.disable_service("nova-conductor", "controller")
+        self.assertEqual(fake_service, resp)
+
+    def test_disable_service_with_str_body(self):
+        self._test_disable_service()
+
+    def test_disable_service_with_bytes_body(self):
+        self._test_disable_service(bytes_body=True)
diff --git a/tempest/thirdparty/boto/test.py b/tempest/thirdparty/boto/test.py
index 1ff4dee..9f119b4 100644
--- a/tempest/thirdparty/boto/test.py
+++ b/tempest/thirdparty/boto/test.py
@@ -505,7 +505,7 @@
             LOG.critical("%s Volume has %s snapshot(s)", volume.id,
                          map(snaps.id, snaps))
 
-        # NOTE(afazekas): detaching/attching not valid EC2 status
+        # NOTE(afazekas): detaching/attaching not valid EC2 status
         def _volume_state():
             volume.update(validate=True)
             try:
diff --git a/test-requirements.txt b/test-requirements.txt
index 2ea30ec..db2b2ce 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9,4 +9,4 @@
 mox>=0.5.3
 mock>=1.2
 coverage>=3.6
-oslotest>=1.9.0 # Apache-2.0
+oslotest>=1.10.0 # Apache-2.0
diff --git a/tools/config/check_uptodate.sh b/tools/config/check_uptodate.sh
deleted file mode 100755
index 7b08695..0000000
--- a/tools/config/check_uptodate.sh
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/usr/bin/env bash
-
-PROJECT_NAME=${PROJECT_NAME:-tempest}
-CFGFILE_NAME=${PROJECT_NAME}.conf.sample
-
-if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then
-    CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME}
-elif [ -e etc/${CFGFILE_NAME} ]; then
-    CFGFILE=etc/${CFGFILE_NAME}
-else
-    echo "${0##*/}: can not find config file"
-    exit 1
-fi
-
-TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX`
-trap "rm -rf $TEMPDIR" EXIT
-
-oslo-config-generator --config-file tools/config/config-generator.tempest.conf --output-file ${TEMPDIR}/${CFGFILE_NAME}
-if [ $? != 0 ]
-then
-    exit 1
-fi
-
-if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE}
-then
-   echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date."
-   echo "${0##*/}: Please run tox -egenconfig."
-   exit 1
-fi
diff --git a/tox.ini b/tox.ini
index 389fee2..15652e8 100644
--- a/tox.ini
+++ b/tox.ini
@@ -108,12 +108,13 @@
 commands = {posargs}
 
 [testenv:docs]
-commands = python setup.py build_sphinx {posargs}
+# The sample config file is included in the sphinx docs; it is generated
+# automatically during the docs build (see doc/source/conf.py).
+commands =
+   python setup.py build_sphinx {posargs}
 
 [testenv:pep8]
 commands =
    flake8 {posargs}
-   {toxinidir}/tools/config/check_uptodate.sh
    python tools/check_uuid.py
 
 [testenv:uuidgen]