Merge "Add unit test for method show_limits"
diff --git a/.gitignore b/.gitignore
index 1777cb9..f584532 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,7 @@
 ChangeLog
 *.pyc
 etc/tempest.conf
+etc/tempest.conf.sample
 etc/logging.conf
 include/swift_objects/swift_small
 include/swift_objects/swift_medium
@@ -18,3 +19,4 @@
 .coverage*
 !.coveragerc
 cover/
+doc/source/_static/tempest.conf
diff --git a/HACKING.rst b/HACKING.rst
index c776c49..45c35df 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -314,6 +314,39 @@
          * Check written content in the instance booted from snapshot
         """
 
+Test Identification with Idempotent ID
+--------------------------------------
+
+Every function that provides a test must have an ``idempotent_id`` decorator
+that is a unique ``uuid-4`` instance. This ID is used to complement the fully
+qualified test name and track test functionality through refactoring. The
+format of the metadata looks like::
+
+    @test.idempotent_id('585e934c-448e-43c4-acbf-d06a9b899997')
+    def test_list_servers_with_detail(self):
+        # The created server should be in the detailed list of all servers
+        ...
+
+Tempest includes a ``check_uuid.py`` tool that will test for the existence
+and uniqueness of ``idempotent_id`` metadata for every test. By default the
+tool runs against the Tempest package by calling::
+
+    python check_uuid.py
+
+It can be invoked against any test suite by passing a package name::
+
+    python check_uuid.py --package <package_name>
+
+Tests without an ``idempotent_id`` can be automatically fixed by running
+the command with the ``--fix`` flag, which will modify the source package
+by inserting randomly generated uuids for every test that does not have
+one::
+
+    python check_uuid.py --fix
+
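+If you prefer to pick the ID yourself instead of relying on ``--fix``, any
+freshly generated ``uuid-4`` value will do; for example (a minimal sketch
+using only the Python standard library)::
+
+    python -c "import uuid; print(uuid.uuid4())"
+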
+The ``check_uuid.py`` tool is used as part of the tempest gate job
+to ensure that all tests have an ``idempotent_id`` decorator.
+
 Branchless Tempest Considerations
 ---------------------------------
 
diff --git a/README.rst b/README.rst
index af24569..d7063ba 100644
--- a/README.rst
+++ b/README.rst
@@ -107,7 +107,7 @@
 ----------
 
 Tempest also has a set of unit tests which test the Tempest code itself. These
-tests can be run by specifing the test discovery path::
+tests can be run by specifying the test discovery path::
 
     $> OS_TEST_PATH=./tempest/tests testr run --parallel
 
diff --git a/tempest/openstack/__init__.py b/doc/source/_static/.keep
similarity index 100%
rename from tempest/openstack/__init__.py
rename to doc/source/_static/.keep
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 0805544..3e6013d 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -142,7 +142,7 @@
  #. alt_password
  #. alt_tenant_name
 
-And in the auth secion:
+And in the auth section:
 
  #. allow_tenant_isolation = False
  #. comment out 'test_accounts_file' or keep it as empty
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f925018..e9f2161 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -10,6 +10,7 @@
    overview
    HACKING
    REVIEWING
+   plugin
 
 ------------
 Field Guides
@@ -37,6 +38,7 @@
    :maxdepth: 2
 
    configuration
+   sampleconf
 
 ---------------------
 Command Documentation
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
new file mode 100644
index 0000000..4e97dbe
--- /dev/null
+++ b/doc/source/plugin.rst
@@ -0,0 +1,120 @@
+=============================
+Tempest Test Plugin Interface
+=============================
+
+Tempest has an external test plugin interface which enables anyone to integrate
+an external test suite as part of a tempest run. This will let any project
+leverage being run with the rest of the tempest suite while not requiring that
+the tests live in the tempest tree.
+
+Creating a plugin
+=================
+
+Creating a plugin is fairly straightforward and doesn't require much additional
+effort on top of creating a test suite using tempest-lib. One thing to note with
+doing this is that the interfaces exposed by tempest are not considered stable
+(with the exception of configuration variables, where every effort goes into
+ensuring backwards compatibility). You should not need to import anything from
+tempest itself except where explicitly noted. If there is an interface from
+tempest that you need to rely on in your plugin, it likely needs to be migrated
+to tempest-lib. In that situation, file a bug, push a migration patch, etc. to
+expedite providing the interface in a reliable manner.
+
+Plugin Class
+------------
+
+To provide tempest with all the information it needs to run your plugin, you
+need to create a plugin class which tempest will load and call to get that
+information when it needs it. To simplify this, tempest provides an abstract
+class that should be used as the parent for your plugin. To use it you would
+do something like the following::
+
+  from tempest.test_discover import plugins
+
+  class MyPlugin(plugins.TempestPlugin):
+
+Then you need to ensure you locally define all of the methods in the abstract
+class; you can refer to the api doc below for a reference of what that entails.
+
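+A minimal sketch of what that might look like (the method names below assume
+the abstract class requires ``load_tests``, ``register_opts`` and
+``get_opt_lists``; check the api doc below for the authoritative list, and the
+``my_plugin`` paths are purely illustrative)::
+
+  import os
+
+  from tempest.test_discover import plugins
+
+
+  class MyPlugin(plugins.TempestPlugin):
+
+      def load_tests(self):
+          # Tell tempest where this plugin's tests live: return the full
+          # path to the test directory and the top level directory.
+          base_path = os.path.split(os.path.dirname(
+              os.path.abspath(__file__)))[0]
+          test_dir = "my_plugin/tests"
+          full_test_dir = os.path.join(base_path, test_dir)
+          return full_test_dir, base_path
+
+      def register_opts(self, conf):
+          # Register any plugin specific config options on the passed
+          # config object; this sketch registers none.
+          pass
+
+      def get_opt_lists(self):
+          # Return a list of (group name, options list) tuples; empty here.
+          return []
+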
+Also note that this abstract class will likely eventually live in tempest-lib.
+When that migration occurs a deprecation shim will be added to tempest so as
+not to break any existing plugins, but migrating to tempest-lib as the source
+for the abstract class will be prudent at that point.
+
+Abstract Plugin Class
+^^^^^^^^^^^^^^^^^^^^^
+
+.. autoclass:: tempest.test_discover.plugins.TempestPlugin
+   :members:
+
+Entry Point
+-----------
+
+Once you've created your plugin class you need to add an entry point to your
+project to enable tempest to find the plugin. The entry point must be added
+to the ``tempest.test_plugins`` namespace.
+
+If you are using pbr this is fairly straightforward; in your setup.cfg just add
+something like the following::
+
+  [entry_points]
+  tempest.test_plugins =
+      plugin_name = module.path:PluginClass
+
+Plugin Structure
+----------------
+
+There are no hard and fast rules for the structure of a plugin; as long as the
+two steps above are done there are basically no constraints on what it looks
+like. However, there are some recommended patterns to follow to make it easy
+for people to contribute to and work with your plugin. For example, if you
+create a directory structure with something like::
+
+    plugin_dir/
+      config.py
+      plugin.py
+      tests/
+        api/
+        scenario/
+      services/
+        client.py
+
+That will mirror what people expect from tempest. The files and directories
+serve the following purposes:
+
+* **config.py**: contains any plugin specific configuration variables
+* **plugin.py**: contains the plugin class used for the entry point
+* **tests**: the directory where test discovery will be run; all tests should
+             live under this directory
+* **services**: where the plugin specific service clients are
+
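+For example, the ``config.py`` above would typically just register the
+plugin's option group and options with oslo.config so the plugin class can
+expose them. A minimal sketch (the group and option names here are purely
+illustrative)::
+
+  from oslo_config import cfg
+
+  my_plugin_group = cfg.OptGroup(name='my_plugin',
+                                 title='My plugin options')
+
+  MyPluginOpts = [
+      cfg.StrOpt('catalog_type',
+                 default='my_service',
+                 help='Catalog type of the service under test.'),
+  ]
+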
+Additionally, when you're creating the plugin you likely want to follow all
+of the tempest developer and reviewer documentation to ensure that the tests
+being added in the plugin act and behave like the rest of tempest.
+
+Using Plugins
+=============
+
+Tempest will automatically discover any installed plugins when it is run, so
+simply installing the python packages which contain your plugin is enough to
+use them with tempest; nothing else is required.
+
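+For example, assuming your plugin is packaged as ``my-tempest-plugin`` (a
+purely illustrative name)::
+
+    pip install my-tempest-plugin
+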
+However, you should take care when installing plugins. By their very nature
+there are no guarantees about the quality of a plugin when running tempest
+with it enabled. Additionally, while there is no limitation on running with
+multiple plugins, it's worth noting that poorly written plugins might not
+properly isolate their tests, which could cause unexpected cross interactions
+between plugins.
+
+Notes for using plugins with virtualenvs
+----------------------------------------
+
+When using tempest inside a virtualenv (for example when running under tox) you
+have to ensure that the package that contains your plugin is either installed
+in the venv too or that you have system site-packages enabled. The virtualenv
+will isolate the tempest install from the rest of your system, so just
+installing the plugin package on your system and then running tempest inside a
+venv will not work.
+
+Tempest also exposes a tox job, all-plugin, which will set up a tox virtualenv
+with system site-packages enabled. This will let you leverage tox without
+having to manually install plugins in the tox venv before running tests.
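+
+For example, running the full suite (including any installed plugins) through
+that job is just::
+
+    tox -e all-plugin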
diff --git a/doc/source/sampleconf.rst b/doc/source/sampleconf.rst
new file mode 100644
index 0000000..2a72971
--- /dev/null
+++ b/doc/source/sampleconf.rst
@@ -0,0 +1,14 @@
+.. _tempest-sampleconf:
+
+Sample Configuration File
+==========================
+
+The following is a sample Tempest configuration for adaptation and use. It is
+auto-generated from Tempest when this documentation is built, so
+if you are having issues with an option, please compare your version of
+Tempest with the version of this documentation.
+
+The sample configuration can also be viewed in `file form <_static/tempest.conf>`_.
+
+.. include:: _static/tempest.conf
+   :code:
diff --git a/etc/tempest.conf.sample b/etc/tempest.conf.sample
deleted file mode 100644
index 27d65e6..0000000
--- a/etc/tempest.conf.sample
+++ /dev/null
@@ -1,1216 +0,0 @@
-[DEFAULT]
-
-#
-# From oslo.log
-#
-
-# Print debugging output (set logging level to DEBUG instead of
-# default WARNING level). (boolean value)
-#debug = false
-
-# Print more verbose output (set logging level to INFO instead of
-# default WARNING level). (boolean value)
-#verbose = false
-
-# The name of a logging configuration file. This file is appended to
-# any existing logging configuration files. For details about logging
-# configuration files, see the Python logging module documentation.
-# (string value)
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# DEPRECATED. A logging.Formatter log message format string which may
-# use any of the available logging.LogRecord attributes. This option
-# is deprecated.  Please use logging_context_format_string and
-# logging_default_format_string instead. (string value)
-#log_format = <None>
-
-# Format string for %%(asctime)s in log records. Default: %(default)s
-# . (string value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to output to. If no default is set,
-# logging will go to stdout. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative --log-file paths.
-# (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and
-# will be changed later to honor RFC5424. (boolean value)
-#use_syslog = false
-
-# (Optional) Enables or disables syslog rfc5424 format for logging. If
-# enabled, prefixes the MSG part of the syslog message with APP-NAME
-# (RFC5424). The format without the APP-NAME is deprecated in K, and
-# will be removed in M, along with this option. (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#use_syslog_rfc_format = true
-
-# Syslog facility to receive log lines. (string value)
-#syslog_log_facility = LOG_USER
-
-# Log output to standard error. (boolean value)
-#use_stderr = true
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages without context. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Data to append to log format when level is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string
-# value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# List of logger=LEVEL pairs. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message.
-# (string value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message.
-# (string value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From tempest.config
-#
-
-# Prefix to be added when generating the name for test resources. It
-# can be used to discover all resources associated with a specific
-# test run when running tempest on a real-life cloud (string value)
-#resources_prefix = tempest
-
-
-[auth]
-
-#
-# From tempest.config
-#
-
-# Path to the yaml file that contains the list of credentials to use
-# for running tests. If used when running in parallel you have to make
-# sure sufficient credentials are provided in the accounts file. For
-# example if no tests with roles are being run it requires at least `2
-# * CONC` distinct accounts configured in  the `test_accounts_file`,
-# with CONC == the number of concurrent test processes. (string value)
-#test_accounts_file = <None>
-
-# Allows test cases to create/destroy tenants and users. This option
-# requires that OpenStack Identity API admin credentials are known. If
-# false, isolated test cases and parallel execution, can still be
-# achieved configuring a list of test accounts (boolean value)
-# Deprecated group/name - [compute]/allow_tenant_isolation
-# Deprecated group/name - [orchestration]/allow_tenant_isolation
-#allow_tenant_isolation = true
-
-# Roles to assign to all users created by tempest (list value)
-#tempest_roles =
-
-# Only applicable when identity.auth_version is v3.Domain within which
-# isolated credentials are provisioned.The default "None" means that
-# the domain from theadmin user is used instead. (string value)
-#tenant_isolation_domain_name = <None>
-
-# If allow_tenant_isolation is set to True and Neutron is enabled
-# Tempest will try to create a useable network, subnet, and router
-# when needed for each tenant it  creates. However in some neutron
-# configurations, like with VLAN provider networks, this doesn't work.
-# So if set to False the isolated networks will not be created
-# (boolean value)
-#create_isolated_networks = true
-
-
-[baremetal]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the baremetal provisioning service (string value)
-#catalog_type = baremetal
-
-# Whether the Ironic nova-compute driver is enabled (boolean value)
-#driver_enabled = false
-
-# Driver name which Ironic uses (string value)
-#driver = fake
-
-# The endpoint type to use for the baremetal provisioning service
-# (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Timeout for Ironic node to completely provision (integer value)
-#active_timeout = 300
-
-# Timeout for association of Nova instance and Ironic node (integer
-# value)
-#association_timeout = 30
-
-# Timeout for Ironic power transitions. (integer value)
-#power_timeout = 60
-
-# Timeout for unprovisioning an Ironic node. Takes longer since Kilo
-# as Ironic performs an extra step in Node cleaning. (integer value)
-#unprovision_timeout = 300
-
-
-[boto]
-
-#
-# From tempest.config
-#
-
-# EC2 URL (string value)
-#ec2_url = http://localhost:8773/services/Cloud
-
-# S3 URL (string value)
-#s3_url = http://localhost:8080
-
-# AWS Secret Key (string value)
-#aws_secret = <None>
-
-# AWS Access Key (string value)
-#aws_access = <None>
-
-# AWS Zone for EC2 tests (string value)
-#aws_zone = nova
-
-# S3 Materials Path (string value)
-#s3_materials_path = /opt/stack/devstack/files/images/s3-materials/cirros-0.3.0
-
-# ARI Ramdisk Image manifest (string value)
-#ari_manifest = cirros-0.3.0-x86_64-initrd.manifest.xml
-
-# AMI Machine Image manifest (string value)
-#ami_manifest = cirros-0.3.0-x86_64-blank.img.manifest.xml
-
-# AKI Kernel Image manifest (string value)
-#aki_manifest = cirros-0.3.0-x86_64-vmlinuz.manifest.xml
-
-# Instance type (string value)
-#instance_type = m1.tiny
-
-# boto Http socket timeout (integer value)
-#http_socket_timeout = 3
-
-# boto num_retries on error (integer value)
-#num_retries = 1
-
-# Status Change Timeout (integer value)
-#build_timeout = 60
-
-# Status Change Test Interval (integer value)
-#build_interval = 1
-
-
-[compute]
-
-#
-# From tempest.config
-#
-
-# Valid primary image reference to be used in tests. This is a
-# required option (string value)
-#image_ref = <None>
-
-# Valid secondary image reference to be used in tests. This is a
-# required option, but if only one image is available duplicate the
-# value of image_ref above (string value)
-#image_ref_alt = <None>
-
-# Valid primary flavor to use in tests. (string value)
-#flavor_ref = 1
-
-# Valid secondary flavor to be used in tests. (string value)
-#flavor_ref_alt = 2
-
-# User name used to authenticate to an instance. (string value)
-#image_ssh_user = root
-
-# Password used to authenticate to an instance. (string value)
-#image_ssh_password = password
-
-# User name used to authenticate to an instance using the alternate
-# image. (string value)
-#image_alt_ssh_user = root
-
-# Time in seconds between build status checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for an instance to build. Other services
-# that do not define build_timeout will inherit this value. (integer
-# value)
-#build_timeout = 300
-
-# Shell fragments to use before executing a command when sshing to a
-# guest. (string value)
-#ssh_shell_prologue = set -eu -o pipefail; PATH=$$PATH:/sbin;
-
-# Auth method used for authenticate to the instance. Valid choices
-# are: keypair, configured, adminpass and disabled. Keypair: start the
-# servers with a ssh keypair. Configured: use the configured user and
-# password. Adminpass: use the injected adminPass. Disabled: avoid
-# using ssh when it is an option. (string value)
-#ssh_auth_method = keypair
-
-# How to connect to the instance? fixed: using the first ip belongs
-# the fixed network floating: creating and using a floating ip.
-# (string value)
-#ssh_connect_method = floating
-
-# User name used to authenticate to an instance. (string value)
-#ssh_user = root
-
-# Timeout in seconds to wait for ping to succeed. (integer value)
-#ping_timeout = 120
-
-# The packet size for ping packets originating from remote linux hosts
-# (integer value)
-#ping_size = 56
-
-# The number of ping packets originating from remote linux hosts
-# (integer value)
-#ping_count = 1
-
-# Additional wait time for clean state, when there is no OS-EXT-STS
-# extension available (integer value)
-#ready_wait = 0
-
-# Name of the fixed network that is visible to all test tenants. If
-# multiple networks are available for a tenant this is the network
-# which will be used for creating servers if tempest does not create a
-# network or a network is not specified elsewhere. It may be used for
-# ssh validation only if floating IPs are disabled. (string value)
-#fixed_network_name = <None>
-
-# Network used for SSH connections. Ignored if
-# use_floatingip_for_ssh=true or run_validation=false. (string value)
-#network_for_ssh = public
-
-# Does SSH use Floating IPs? (boolean value)
-#use_floatingip_for_ssh = true
-
-# Catalog type of the Compute service. (string value)
-#catalog_type = compute
-
-# The compute region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the compute service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Expected device name when a volume is attached to an instance
-# (string value)
-#volume_device_name = vdb
-
-# Time in seconds before a shelved instance is eligible for removing
-# from a host.  -1 never offload, 0 offload when shelved. This time
-# should be the same as the time of nova.conf, and some tests will run
-# for as long as the time. (integer value)
-#shelved_offload_time = 0
-
-# Unallocated floating IP range, which will be used to test the
-# floating IP bulk feature for CRUD operation. This block must not
-# overlap an existing floating IP pool. (string value)
-#floating_ip_range = 10.0.0.0/29
-
-
-[compute-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# If false, skip disk config tests (boolean value)
-#disk_config = true
-
-# A list of enabled compute extensions with a special entry all which
-# indicates every extension is enabled. Each extension should be
-# specified with alias name. Empty list indicates all extensions are
-# disabled (list value)
-#api_extensions = all
-
-# Does the test environment support changing the admin password?
-# (boolean value)
-#change_password = false
-
-# Does the test environment support obtaining instance serial console
-# output? (boolean value)
-#console_output = true
-
-# Does the test environment support resizing? (boolean value)
-#resize = false
-
-# Does the test environment support pausing? (boolean value)
-#pause = true
-
-# Does the test environment support shelving/unshelving? (boolean
-# value)
-#shelve = true
-
-# Does the test environment support suspend/resume? (boolean value)
-#suspend = true
-
-# Does the test environment support live migration available? (boolean
-# value)
-#live_migration = true
-
-# Does the test environment use block devices for live migration
-# (boolean value)
-#block_migration_for_live_migration = false
-
-# Does the test environment block migration support cinder iSCSI
-# volumes. Note, libvirt doesn't support this, see
-# https://bugs.launchpad.net/nova/+bug/1398999 (boolean value)
-#block_migrate_cinder_iscsi = false
-
-# Does the test system allow live-migration of paused instances? Note,
-# this is more than just the ANDing of paused and live_migrate, but
-# all 3 should be set to True to run those tests (boolean value)
-#live_migrate_paused_instances = false
-
-# Enable VNC console. This configuration value should be same as
-# [nova.vnc]->vnc_enabled in nova.conf (boolean value)
-#vnc_console = false
-
-# Enable Spice console. This configuration value should be same as
-# [nova.spice]->enabled in nova.conf (boolean value)
-#spice_console = false
-
-# Enable RDP console. This configuration value should be same as
-# [nova.rdp]->enabled in nova.conf (boolean value)
-#rdp_console = false
-
-# Does the test environment support instance rescue mode? (boolean
-# value)
-#rescue = true
-
-# Enables returning of the instance password by the relevant server
-# API calls such as create, rebuild or rescue. (boolean value)
-#enable_instance_password = true
-
-# Does the test environment support dynamic network interface
-# attachment? (boolean value)
-#interface_attach = true
-
-# Does the test environment support creating snapshot images of
-# running instances? (boolean value)
-#snapshot = true
-
-# Does the test environment have the ec2 api running? (boolean value)
-#ec2_api = true
-
-# Does Nova preserve preexisting ports from Neutron when deleting an
-# instance? This should be set to True if testing Kilo+ Nova. (boolean
-# value)
-#preserve_ports = false
-
-# Does the test environment support attaching an encrypted volume to a
-# running server instance? This may depend on the combination of
-# compute_driver in nova and the volume_driver(s) in cinder. (boolean
-# value)
-#attach_encrypted_volume = true
-
-# Does the test environment support creating instances with multiple
-# ports on the same network? This is only valid when using Neutron.
-# (boolean value)
-#allow_duplicate_networks = false
-
-
-[dashboard]
-
-#
-# From tempest.config
-#
-
-# Where the dashboard can be found (string value)
-#dashboard_url = http://localhost/
-
-# Login page for the dashboard (string value)
-#login_url = http://localhost/auth/login/
-
-
-[data_processing]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the data processing service. (string value)
-#catalog_type = data_processing
-
-# The endpoint type to use for the data processing service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-
-[data_processing-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# List of enabled data processing plugins (list value)
-#plugins = vanilla,hdp
-
-
-[database]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Database service. (string value)
-#catalog_type = database
-
-# Valid primary flavor to use in database tests. (string value)
-#db_flavor_ref = 1
-
-# Current database version to use in database tests. (string value)
-#db_current_version = v1.0
-
-
-[debug]
-
-#
-# From tempest.config
-#
-
-# A regex to determine which requests should be traced.  This is a
-# regex to match the caller for rest client requests to be able to
-# selectively trace calls out of specific classes and methods. It
-# largely exists for test development, and is not expected to be used
-# in a real deploy of tempest. This will be matched against the
-# discovered ClassName:method in the test environment.  Expected
-# values for this field are:   * ClassName:test_method_name - traces
-# one test_method  * ClassName:setUp(Class) - traces specific setup
-# functions  * ClassName:tearDown(Class) - traces specific teardown
-# functions  * ClassName:_run_cleanups - traces the cleanup functions
-# If nothing is specified, this feature is not enabled. To trace
-# everything specify .* as the regex.  (string value)
-#trace_requests =
-
-
-[identity]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Identity service. (string value)
-#catalog_type = identity
-
-# Set to True if using self-signed SSL certificates. (boolean value)
-#disable_ssl_certificate_validation = false
-
-# Specify a CA bundle file to use in verifying a TLS (https) server
-# certificate. (string value)
-#ca_certificates_file = <None>
-
-# Full URI of the OpenStack Identity API (Keystone), v2 (string value)
-#uri = <None>
-
-# Full URI of the OpenStack Identity API (Keystone), v3 (string value)
-#uri_v3 = <None>
-
-# Identity API version to be used for authentication for API tests.
-# (string value)
-#auth_version = v2
-
-# The identity region name to use. Also used as the other services'
-# region name unless they are set explicitly. If no such region is
-# found in the service catalog, the first found one is used. (string
-# value)
-#region = RegionOne
-
-# The endpoint type to use for the identity service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Username to use for Nova API requests. (string value)
-#username = <None>
-
-# Tenant name to use for Nova API requests. (string value)
-#tenant_name = <None>
-
-# Role required to administrate keystone. (string value)
-#admin_role = admin
-
-# API key to use when authenticating. (string value)
-#password = <None>
-
-# Domain name for authentication (Keystone V3).The same domain applies
-# to user and project (string value)
-#domain_name = <None>
-
-# Username of alternate user to use for Nova API requests. (string
-# value)
-#alt_username = <None>
-
-# Alternate user's Tenant name to use for Nova API requests. (string
-# value)
-#alt_tenant_name = <None>
-
-# API key to use when authenticating as alternate user. (string value)
-#alt_password = <None>
-
-# Alternate domain name for authentication (Keystone V3).The same
-# domain applies to user and project (string value)
-#alt_domain_name = <None>
-
-# Administrative Username to use for Keystone API requests. (string
-# value)
-#admin_username = <None>
-
-# Administrative Tenant name to use for Keystone API requests. (string
-# value)
-#admin_tenant_name = <None>
-
-# API key to use when authenticating as admin. (string value)
-#admin_password = <None>
-
-# Admin domain name for authentication (Keystone V3).The same domain
-# applies to user and project (string value)
-#admin_domain_name = <None>
-
-# ID of the default domain (string value)
-#default_domain_id = default
-
-
-[identity-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Does the identity service have delegation and impersonation enabled
-# (boolean value)
-#trust = true
-
-# Is the v2 identity API enabled (boolean value)
-#api_v2 = true
-
-# Is the v3 identity API enabled (boolean value)
-#api_v3 = true
-
-
-[image]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Image service. (string value)
-#catalog_type = image
-
-# The image region name to use. If empty, the value of identity.region
-# is used instead. If no such region is found in the service catalog,
-# the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the image service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# http accessible image (string value)
-#http_image = http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-uec.tar.gz
-
-# Timeout in seconds to wait for an image to become available.
-# (integer value)
-#build_timeout = 300
-
-# Time in seconds between image operation status checks. (integer
-# value)
-#build_interval = 1
-
-
-[image-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Is the v2 image API enabled (boolean value)
-#api_v2 = true
-
-# Is the v1 image API enabled (boolean value)
-#api_v1 = true
-
-# Is the deactivate-image feature enabled. The feature has been
-# integrated since Kilo. (boolean value)
-#deactivate_image = false
-
-
-[input-scenario]
-
-#
-# From tempest.config
-#
-
-# Matching images become parameters for scenario tests (string value)
-#image_regex = ^cirros-0.3.1-x86_64-uec$
-
-# Matching flavors become parameters for scenario tests (string value)
-#flavor_regex = ^m1.nano$
-
-# SSH verification in tests is skippedfor matching images (string
-# value)
-#non_ssh_image_regex = ^.*[Ww]in.*$
-
-# List of user mapped to regex to matching image names. (string value)
-#ssh_user_regex = [["^.*[Cc]irros.*$", "cirros"]]
-
-
-[messaging]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Messaging service. (string value)
-#catalog_type = messaging
-
-# The maximum number of queue records per page when listing queues
-# (integer value)
-#max_queues_per_page = 20
-
-# The maximum metadata size for a queue (integer value)
-#max_queue_metadata = 65536
-
-# The maximum number of queue message per page when listing (or)
-# posting messages (integer value)
-#max_messages_per_page = 20
-
-# The maximum size of a message body (integer value)
-#max_message_size = 262144
-
-# The maximum number of messages per claim (integer value)
-#max_messages_per_claim = 20
-
-# The maximum ttl for a message (integer value)
-#max_message_ttl = 1209600
-
-# The maximum ttl for a claim (integer value)
-#max_claim_ttl = 43200
-
-# The maximum grace period for a claim (integer value)
-#max_claim_grace = 43200
-
-
-[negative]
-
-#
-# From tempest.config
-#
-
-# Test generator class for all negative tests (string value)
-#test_generator = tempest.common.generator.negative_generator.NegativeTestGenerator
-
-
-[network]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Neutron service. (string value)
-#catalog_type = network
-
-# The network region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the network service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# The cidr block to allocate tenant ipv4 subnets from (string value)
-#tenant_network_cidr = 10.100.0.0/16
-
-# The mask bits for tenant ipv4 subnets (integer value)
-#tenant_network_mask_bits = 28
-
-# The cidr block to allocate tenant ipv6 subnets from (string value)
-#tenant_network_v6_cidr = 2003::/48
-
-# The mask bits for tenant ipv6 subnets (integer value)
-#tenant_network_v6_mask_bits = 64
-
-# Whether tenant networks can be reached directly from the test
-# client. This must be set to True when the 'fixed' ssh_connect_method
-# is selected. (boolean value)
-#tenant_networks_reachable = false
-
-# Id of the public network that provides external connectivity (string
-# value)
-#public_network_id =
-
-# Default floating network name. Used to allocate floating IPs when
-# neutron is enabled. (string value)
-#floating_network_name = <None>
-
-# Id of the public router that provides external connectivity. This
-# should only be used when Neutron's 'allow_overlapping_ips' is set to
-# 'False' in neutron.conf. usually not needed past 'Grizzly' release
-# (string value)
-#public_router_id =
-
-# Timeout in seconds to wait for network operation to complete.
-# (integer value)
-#build_timeout = 300
-
-# Time in seconds between network operation status checks. (integer
-# value)
-#build_interval = 1
-
-# List of dns servers which should be used for subnet creation (list
-# value)
-#dns_servers = 8.8.8.8,8.8.4.4
-
-# vnic_type to use when Launching instances with pre-configured ports.
-# Supported ports are: ['normal','direct','macvtap'] (string value)
-# Allowed values: <None>, normal, direct, macvtap
-#port_vnic_type = <None>
-
-
-[network-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Allow the execution of IPv6 tests (boolean value)
-#ipv6 = true
-
-# A list of enabled network extensions with a special entry all which
-# indicates every extension is enabled. Empty list indicates all
-# extensions are disabled. To get the list of extensions run: 'neutron
-# ext-list' (list value)
-#api_extensions = all
-
-# Allow the execution of IPv6 subnet tests that use the extended IPv6
-# attributes ipv6_ra_mode and ipv6_address_mode (boolean value)
-#ipv6_subnet_attributes = false
-
-# Does the test environment support changing port admin state (boolean
-# value)
-#port_admin_state_change = true
-
-
-[object-storage]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Object-Storage service. (string value)
-#catalog_type = object-store
-
-# The object-storage region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the object-store service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Number of seconds to time on waiting for a container to container
-# synchronization complete. (integer value)
-#container_sync_timeout = 600
-
-# Number of seconds to wait while looping to check the status of a
-# container to container synchronization (integer value)
-#container_sync_interval = 5
-
-# Role to add to users created for swift tests to enable creating
-# containers (string value)
-#operator_role = Member
-
-# User role that has reseller admin (string value)
-#reseller_admin_role = ResellerAdmin
-
-# Name of sync realm. A sync realm is a set of clusters that have
-# agreed to allow container syncing with each other. Set the same
-# realm name as Swift's container-sync-realms.conf (string value)
-#realm_name = realm1
-
-# One name of cluster which is set in the realm whose name is set in
-# 'realm_name' item in this file. Set the same cluster name as Swift's
-# container-sync-realms.conf (string value)
-#cluster_name = name1
-
-
-[object-storage-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# A list of the enabled optional discoverable apis. A single entry,
-# all, indicates that all of these features are expected to be enabled
-# (list value)
-#discoverable_apis = all
-
-# Execute (old style) container-sync tests (boolean value)
-#container_sync = true
-
-# Execute object-versioning tests (boolean value)
-#object_versioning = true
-
-# Execute discoverability tests (boolean value)
-#discoverability = true
-
-
-[orchestration]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Orchestration service. (string value)
-#catalog_type = orchestration
-
-# The orchestration region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the orchestration service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Role required for users to be able to manage stacks (string value)
-#stack_owner_role = heat_stack_owner
-
-# Time in seconds between build status checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for a stack to build. (integer value)
-#build_timeout = 1200
-
-# Instance type for tests. Needs to be big enough for a full OS plus
-# the test workload (string value)
-#instance_type = m1.micro
-
-# Name of existing keypair to launch servers with. (string value)
-#keypair_name = <None>
-
-# Value must match heat configuration of the same name. (integer
-# value)
-#max_template_size = 524288
-
-# Value must match heat configuration of the same name. (integer
-# value)
-#max_resources_per_stack = 1000
-
-
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-# Deprecated group/name - [DEFAULT]/disable_process_locking
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified
-# directory should only be writable by the user running the processes
-# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
-# If external locks are used, a lock path must be set. (string value)
-# Deprecated group/name - [DEFAULT]/lock_path
-#lock_path = <None>
-
-
-[scenario]
-
-#
-# From tempest.config
-#
-
-# Directory containing image files (string value)
-#img_dir = /opt/stack/new/devstack/files/images/cirros-0.3.1-x86_64-uec
-
-# Image file name (string value)
-# Deprecated group/name - [DEFAULT]/qcow2_img_file
-#img_file = cirros-0.3.1-x86_64-disk.img
-
-# Image disk format (string value)
-#img_disk_format = qcow2
-
-# Image container format (string value)
-#img_container_format = bare
-
-# Glance image properties. Use for custom images which require them
-# (dict value)
-#img_properties = <None>
-
-# AMI image file name (string value)
-#ami_img_file = cirros-0.3.1-x86_64-blank.img
-
-# ARI image file name (string value)
-#ari_img_file = cirros-0.3.1-x86_64-initrd
-
-# AKI image file name (string value)
-#aki_img_file = cirros-0.3.1-x86_64-vmlinuz
-
-# ssh username for the image file (string value)
-#ssh_user = cirros
-
-# specifies how many resources to request at once. Used for large
-# operations testing. (integer value)
-#large_ops_number = 0
-
-# DHCP client used by images to renew DCHP lease. If left empty,
-# update operation will be skipped. Supported clients: "udhcpc",
-# "dhclient" (string value)
-# Allowed values: udhcpc, dhclient
-#dhcp_client = udhcpc
-
-
-[service_available]
-
-#
-# From tempest.config
-#
-
-# Whether or not cinder is expected to be available (boolean value)
-#cinder = true
-
-# Whether or not neutron is expected to be available (boolean value)
-#neutron = false
-
-# Whether or not glance is expected to be available (boolean value)
-#glance = true
-
-# Whether or not swift is expected to be available (boolean value)
-#swift = true
-
-# Whether or not nova is expected to be available (boolean value)
-#nova = true
-
-# Whether or not Heat is expected to be available (boolean value)
-#heat = false
-
-# Whether or not Ceilometer is expected to be available (boolean
-# value)
-#ceilometer = true
-
-# Whether or not Horizon is expected to be available (boolean value)
-#horizon = true
-
-# Whether or not Sahara is expected to be available (boolean value)
-#sahara = false
-
-# Whether or not Ironic is expected to be available (boolean value)
-#ironic = false
-
-# Whether or not Trove is expected to be available (boolean value)
-#trove = false
-
-# Whether or not Zaqar is expected to be available (boolean value)
-#zaqar = false
-
-
-[stress]
-
-#
-# From tempest.config
-#
-
-# Directory containing log files on the compute nodes (string value)
-#nova_logdir = <None>
-
-# Maximum number of instances to create during test. (integer value)
-#max_instances = 16
-
-# Controller host. (string value)
-#controller = <None>
-
-# Controller host. (string value)
-#target_controller = <None>
-
-# ssh user. (string value)
-#target_ssh_user = <None>
-
-# Path to private key. (string value)
-#target_private_key_path = <None>
-
-# regexp for list of log files. (string value)
-#target_logfiles = <None>
-
-# time (in seconds) between log file error checks. (integer value)
-#log_check_interval = 60
-
-# The number of threads created while stress test. (integer value)
-#default_thread_number_per_action = 4
-
-# Prevent the cleaning (tearDownClass()) between each stress test run
-# if an exception occurs during this run. (boolean value)
-#leave_dirty_stack = false
-
-# Allows a full cleaning process after a stress test. Caution : this
-# cleanup will remove every objects of every tenant. (boolean value)
-#full_clean_stack = false
-
-
-[telemetry]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Telemetry service. (string value)
-#catalog_type = metering
-
-# The endpoint type to use for the telemetry service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# This variable is used as flag to enable notification tests (boolean
-# value)
-#too_slow_to_test = true
-
-
-[validation]
-
-#
-# From tempest.config
-#
-
-# Enable ssh on created servers and creation of additional validation
-# resources to enable remote access (boolean value)
-# Deprecated group/name - [compute]/run_ssh
-#run_validation = false
-
-# Default IP type used for validation: -fixed: uses the first IP
-# belonging to the fixed network -floating: creates and uses a
-# floating IP (string value)
-# Allowed values: fixed, floating
-#connect_method = floating
-
-# Default authentication method to the instance. Only ssh via keypair
-# is supported for now. Additional methods will be handled in a
-# separate spec. (string value)
-# Allowed values: keypair
-#auth_method = keypair
-
-# Default IP version for ssh connections. (integer value)
-# Deprecated group/name - [compute]/ip_version_for_ssh
-#ip_version_for_ssh = 4
-
-# Timeout in seconds to wait for ping to succeed. (integer value)
-#ping_timeout = 120
-
-# Timeout in seconds to wait for the TCP connection to be successful.
-# (integer value)
-# Deprecated group/name - [compute]/ssh_channel_timeout
-#connect_timeout = 60
-
-# Timeout in seconds to wait for the ssh banner. (integer value)
-# Deprecated group/name - [compute]/ssh_timeout
-#ssh_timeout = 300
-
-
-[volume]
-
-#
-# From tempest.config
-#
-
-# Time in seconds between volume availability checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for a volume to become available.
-# (integer value)
-#build_timeout = 300
-
-# Catalog type of the Volume Service (string value)
-#catalog_type = volume
-
-# The volume region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the volume service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Name of the backend1 (must be declared in cinder.conf) (string
-# value)
-#backend1_name = BACKEND_1
-
-# Name of the backend2 (must be declared in cinder.conf) (string
-# value)
-#backend2_name = BACKEND_2
-
-# Backend protocol to target when creating volume types (string value)
-#storage_protocol = iSCSI
-
-# Backend vendor to target when creating volume types (string value)
-#vendor_name = Open Source
-
-# Disk format to use when copying a volume to image (string value)
-#disk_format = raw
-
-# Default size in GB for volumes created by volumes tests (integer
-# value)
-#volume_size = 1
-
-
-[volume-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Runs Cinder multi-backend test (requires 2 backends) (boolean value)
-#multi_backend = false
-
-# Runs Cinder volumes backup test (boolean value)
-#backup = true
-
-# Runs Cinder volume snapshot test (boolean value)
-#snapshot = true
-
-# A list of enabled volume extensions with a special entry all which
-# indicates every extension is enabled. Empty list indicates all
-# extensions are disabled (list value)
-#api_extensions = all
-
-# Is the v1 volume API enabled (boolean value)
-#api_v1 = true
-
-# Is the v2 volume API enabled (boolean value)
-#api_v2 = true
-
-# Update bootable status of a volume Not implemented on icehouse
-# (boolean value)
-#bootable = false
diff --git a/openstack-common.conf b/openstack-common.conf
index 16ba6a7..acb1437 100644
--- a/openstack-common.conf
+++ b/openstack-common.conf
@@ -2,7 +2,6 @@
 
 # The list of modules to copy from openstack-common
 module=install_venv_common
-module=versionutils
 module=with_venv
 module=install_venv
 
diff --git a/requirements.txt b/requirements.txt
index d0419f7..415eaa5 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,7 +12,7 @@
 netaddr>=0.7.12
 testrepository>=0.0.18
 pyOpenSSL>=0.14
-oslo.concurrency>=2.1.0 # Apache-2.0
+oslo.concurrency>=2.3.0 # Apache-2.0
 oslo.config>=1.11.0 # Apache-2.0
 oslo.i18n>=1.5.0 # Apache-2.0
 oslo.log>=1.6.0 # Apache-2.0
diff --git a/setup.cfg b/setup.cfg
index f28c481..ab40f12 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -22,7 +22,7 @@
 packages =
     tempest
 data_files =
-    /etc/tempest = etc/*
+    etc/tempest = etc/*
 
 [entry_points]
 console_scripts =
diff --git a/tempest/api/compute/admin/test_flavors.py b/tempest/api/compute/admin/test_flavors.py
index 364d080..a3c25a2 100644
--- a/tempest/api/compute/admin/test_flavors.py
+++ b/tempest/api/compute/admin/test_flavors.py
@@ -63,13 +63,13 @@
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
 
         # Create the flavor
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           flavor_id,
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=flavor_id,
                                            ephemeral=self.ephemeral,
                                            swap=self.swap,
-                                           rxtx=self.rxtx)
+                                           rxtx_factor=self.rxtx)
         self.addCleanup(self.flavor_clean_up, flavor['id'])
         self.assertEqual(flavor['name'], flavor_name)
         self.assertEqual(flavor['vcpus'], self.vcpus)
@@ -115,13 +115,13 @@
         new_flavor_id = data_utils.rand_int_id(start=1000)
 
         # Create the flavor
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           new_flavor_id,
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=new_flavor_id,
                                            ephemeral=self.ephemeral,
                                            swap=self.swap,
-                                           rxtx=self.rxtx)
+                                           rxtx_factor=self.rxtx)
         self.addCleanup(self.flavor_clean_up, flavor['id'])
         flag = False
         # Verify flavor is retrieved
@@ -147,10 +147,10 @@
         new_flavor_id = data_utils.rand_int_id(start=1000)
 
         # Create the flavor
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           new_flavor_id)
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=new_flavor_id)
         self.addCleanup(self.flavor_clean_up, flavor['id'])
         self.assertEqual(flavor['name'], flavor_name)
         self.assertEqual(flavor['ram'], self.ram)
@@ -182,10 +182,10 @@
         new_flavor_id = data_utils.rand_int_id(start=1000)
 
         # Create the flavor
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           new_flavor_id,
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=new_flavor_id,
                                            is_public="False")
         self.addCleanup(self.flavor_clean_up, flavor['id'])
         # Verify flavor is retrieved
@@ -211,10 +211,10 @@
         new_flavor_id = data_utils.rand_int_id(start=1000)
 
         # Create the flavor
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           new_flavor_id,
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=new_flavor_id,
                                            is_public="False")
         self.addCleanup(self.flavor_clean_up, flavor['id'])
 
@@ -231,10 +231,10 @@
         new_flavor_id = data_utils.rand_int_id(start=1000)
 
         # Create the flavor
-        flavor = self.client.create_flavor(flavor_name,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           new_flavor_id,
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=new_flavor_id,
                                            is_public="True")
         self.addCleanup(self.flavor_clean_up, flavor['id'])
         flag = False
@@ -254,18 +254,18 @@
         flavor_name_public = data_utils.rand_name(self.flavor_name_prefix)
 
         # Create a non public flavor
-        flavor = self.client.create_flavor(flavor_name_not_public,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           flavor_id_not_public,
+        flavor = self.client.create_flavor(name=flavor_name_not_public,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=flavor_id_not_public,
                                            is_public="False")
         self.addCleanup(self.flavor_clean_up, flavor['id'])
 
         # Create a public flavor
-        flavor = self.client.create_flavor(flavor_name_public,
-                                           self.ram, self.vcpus,
-                                           self.disk,
-                                           flavor_id_public,
+        flavor = self.client.create_flavor(name=flavor_name_public,
+                                           ram=self.ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=flavor_id_public,
                                            is_public="True")
         self.addCleanup(self.flavor_clean_up, flavor['id'])
 
@@ -294,10 +294,10 @@
         new_flavor_id = data_utils.rand_int_id(start=1000)
 
         ram = "1024"
-        flavor = self.client.create_flavor(flavor_name,
-                                           ram, self.vcpus,
-                                           self.disk,
-                                           new_flavor_id)
+        flavor = self.client.create_flavor(name=flavor_name,
+                                           ram=ram, vcpus=self.vcpus,
+                                           disk=self.disk,
+                                           id=new_flavor_id)
         self.addCleanup(self.flavor_clean_up, flavor['id'])
         self.assertEqual(flavor['name'], flavor_name)
         self.assertEqual(flavor['vcpus'], self.vcpus)
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index 2baa53e..ccfe20b 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -56,10 +56,10 @@
         # private flavor will return an empty access list
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='False')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
         flavor_access = self.client.list_flavor_access(new_flavor_id)
@@ -70,10 +70,10 @@
         # Test to add and remove flavor access to a given tenant.
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='False')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
         # Add flavor access to a tenant.
diff --git a/tempest/api/compute/admin/test_flavors_access_negative.py b/tempest/api/compute/admin/test_flavors_access_negative.py
index e5ae23b..03898c2 100644
--- a/tempest/api/compute/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/admin/test_flavors_access_negative.py
@@ -57,10 +57,10 @@
         # Test to list flavor access with exceptions by querying public flavor
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='True')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
         self.assertRaises(lib_exc.NotFound,
@@ -73,10 +73,10 @@
         # Test to add flavor access as a user without admin privileges.
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='False')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
         self.assertRaises(lib_exc.Forbidden,
@@ -90,10 +90,10 @@
         # Test to remove flavor access as a user without admin privileges.
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='False')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
         # Add flavor access to a tenant.
@@ -111,10 +111,10 @@
         # Create a new flavor.
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='False')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
 
@@ -136,10 +136,10 @@
         # Create a new flavor.
         flavor_name = data_utils.rand_name(self.flavor_name_prefix)
         new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.client.create_flavor(flavor_name,
-                                               self.ram, self.vcpus,
-                                               self.disk,
-                                               new_flavor_id,
+        new_flavor = self.client.create_flavor(name=flavor_name,
+                                               ram=self.ram, vcpus=self.vcpus,
+                                               disk=self.disk,
+                                               id=new_flavor_id,
                                                is_public='False')
         self.addCleanup(self.client.delete_flavor, new_flavor['id'])
 
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs.py b/tempest/api/compute/admin/test_flavors_extra_specs.py
index 6866c1a..6039cb2 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs.py
@@ -50,12 +50,12 @@
         swap = 1024
         rxtx = 1
         # Create a flavor so as to set/get/unset extra specs
-        cls.flavor = cls.client.create_flavor(flavor_name,
-                                              ram, vcpus,
-                                              disk,
-                                              cls.new_flavor_id,
+        cls.flavor = cls.client.create_flavor(name=flavor_name,
+                                              ram=ram, vcpus=vcpus,
+                                              disk=disk,
+                                              id=cls.new_flavor_id,
                                               ephemeral=ephemeral,
-                                              swap=swap, rxtx=rxtx)
+                                              swap=swap, rxtx_factor=rxtx)
 
     @classmethod
     def resource_cleanup(cls):
@@ -71,7 +71,7 @@
         specs = {"key1": "value1", "key2": "value2"}
         # SET extra specs to the flavor created in setUp
         set_body = \
-            self.client.set_flavor_extra_spec(self.flavor['id'], specs)
+            self.client.set_flavor_extra_spec(self.flavor['id'], **specs)
         self.assertEqual(set_body, specs)
         # GET extra specs and verify
         get_body = self.client.list_flavor_extra_specs(self.flavor['id'])
@@ -96,7 +96,7 @@
     @test.idempotent_id('a99dad88-ae1c-4fba-aeb4-32f898218bd0')
     def test_flavor_non_admin_get_all_keys(self):
         specs = {"key1": "value1", "key2": "value2"}
-        self.client.set_flavor_extra_spec(self.flavor['id'], specs)
+        self.client.set_flavor_extra_spec(self.flavor['id'], **specs)
         body = self.flavors_client.list_flavor_extra_specs(self.flavor['id'])
 
         for key in specs:
@@ -104,8 +104,8 @@
 
     @test.idempotent_id('12805a7f-39a3-4042-b989-701d5cad9c90')
     def test_flavor_non_admin_get_specific_key(self):
-        specs = {"key1": "value1", "key2": "value2"}
-        body = self.client.set_flavor_extra_spec(self.flavor['id'], specs)
+        body = self.client.set_flavor_extra_spec(self.flavor['id'],
+                                                 key1="value1", key2="value2")
         self.assertEqual(body['key1'], 'value1')
         self.assertIn('key2', body)
         body = self.flavors_client.show_flavor_extra_spec(
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
index 8c5a103..f1e11f4 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
@@ -53,12 +53,12 @@
         swap = 1024
         rxtx = 1
         # Create a flavor
-        cls.flavor = cls.client.create_flavor(flavor_name,
-                                              ram, vcpus,
-                                              disk,
-                                              cls.new_flavor_id,
+        cls.flavor = cls.client.create_flavor(name=flavor_name,
+                                              ram=ram, vcpus=vcpus,
+                                              disk=disk,
+                                              id=cls.new_flavor_id,
                                               ephemeral=ephemeral,
-                                              swap=swap, rxtx=rxtx)
+                                              swap=swap, rxtx_factor=rxtx)
 
     @classmethod
     def resource_cleanup(cls):
@@ -70,19 +70,17 @@
     @test.idempotent_id('a00a3b81-5641-45a8-ab2b-4a8ec41e1d7d')
     def test_flavor_non_admin_set_keys(self):
         # Test to SET flavor extra spec as a user without admin privileges.
-        specs = {"key1": "value1", "key2": "value2"}
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.set_flavor_extra_spec,
                           self.flavor['id'],
-                          specs)
+                          key1="value1", key2="value2")
 
     @test.attr(type=['negative'])
     @test.idempotent_id('1ebf4ef8-759e-48fe-a801-d451d80476fb')
     def test_flavor_non_admin_update_specific_key(self):
         # non admin user is not allowed to update flavor extra spec
-        specs = {"key1": "value1", "key2": "value2"}
         body = self.client.set_flavor_extra_spec(
-            self.flavor['id'], specs)
+            self.flavor['id'], key1="value1", key2="value2")
         self.assertEqual(body['key1'], 'value1')
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.
@@ -94,8 +92,8 @@
     @test.attr(type=['negative'])
     @test.idempotent_id('28f12249-27c7-44c1-8810-1f382f316b11')
     def test_flavor_non_admin_unset_keys(self):
-        specs = {"key1": "value1", "key2": "value2"}
-        self.client.set_flavor_extra_spec(self.flavor['id'], specs)
+        self.client.set_flavor_extra_spec(self.flavor['id'],
+                                          key1="value1", key2="value2")
 
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.unset_flavor_extra_spec,
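
For the extra-specs calls the change goes one step further: the specs dictionary is no longer
passed as a single positional argument but expanded into keyword arguments. A sketch of the
before/after, using the same admin ``self.client`` as in the hunks above::

    # Previously the specs travelled as one positional dict:
    #     self.client.set_flavor_extra_spec(self.flavor['id'], specs)
    # Now each extra spec becomes a keyword argument of its own.
    body = self.client.set_flavor_extra_spec(self.flavor['id'],
                                             key1="value1", key2="value2")
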
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 798bd30..8dcd0b2 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -32,6 +32,7 @@
         cls.client = cls.os.quotas_client
         cls.adm_client = cls.os_adm.quotas_client
         cls.sg_client = cls.security_groups_client
+        cls.sgr_client = cls.security_group_rules_client
 
     @classmethod
     def resource_setup(cls):
@@ -128,7 +129,7 @@
         # will be raised when out of quota
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.sg_client.create_security_group,
-                          "sg-overlimit", "sg-desc")
+                          name="sg-overlimit", description="sg-desc")
 
     @decorators.skip_because(bug="1186354",
                              condition=CONF.service_available.neutron)
@@ -156,7 +157,8 @@
         s_name = data_utils.rand_name('securitygroup')
         s_description = data_utils.rand_name('description')
         securitygroup =\
-            self.sg_client.create_security_group(s_name, s_description)
+            self.sg_client.create_security_group(name=s_name,
+                                                 description=s_description)
         self.addCleanup(self.sg_client.delete_security_group,
                         securitygroup['id'])
 
@@ -167,5 +169,6 @@
         # A 403 Forbidden or 413 Overlimit (old behaviour) exception
         # will be raised when out of quota
         self.assertRaises((lib_exc.OverLimit, lib_exc.Forbidden),
-                          self.sg_client.create_security_group_rule,
-                          secgroup_id, ip_protocol, 1025, 1025)
+                          self.sgr_client.create_security_group_rule,
+                          parent_group_id=secgroup_id, ip_protocol=ip_protocol,
+                          from_port=1025, to_port=1025)
diff --git a/tempest/api/compute/admin/test_security_group_default_rules.py b/tempest/api/compute/admin/test_security_group_default_rules.py
index 13d6cc0..5ae6553 100644
--- a/tempest/api/compute/admin/test_security_group_default_rules.py
+++ b/tempest/api/compute/admin/test_security_group_default_rules.py
@@ -45,9 +45,9 @@
                                              cidr='10.10.0.0/24'):
         # Create Security Group default rule
         rule = self.adm_client.create_security_default_group_rule(
-            ip_protocol,
-            from_port,
-            to_port,
+            ip_protocol=ip_protocol,
+            from_port=from_port,
+            to_port=to_port,
             cidr=cidr)
         self.assertEqual(ip_protocol, rule['ip_protocol'])
         self.assertEqual(from_port, rule['from_port'])
@@ -73,9 +73,9 @@
         from_port = 80
         to_port = 80
         rule = self.adm_client.create_security_default_group_rule(
-            ip_protocol,
-            from_port,
-            to_port)
+            ip_protocol=ip_protocol,
+            from_port=from_port,
+            to_port=to_port)
         self.addCleanup(self.adm_client.delete_security_group_default_rule,
                         rule['id'])
         self.assertNotEqual(0, rule['id'])
@@ -88,9 +88,9 @@
         to_port = 10
         cidr = ''
         rule = self.adm_client.create_security_default_group_rule(
-            ip_protocol,
-            from_port,
-            to_port,
+            ip_protocol=ip_protocol,
+            from_port=from_port,
+            to_port=to_port,
             cidr=cidr)
         self.addCleanup(self.adm_client.delete_security_group_default_rule,
                         rule['id'])
diff --git a/tempest/api/compute/admin/test_security_groups.py b/tempest/api/compute/admin/test_security_groups.py
index ff87a4f..b2a1b98 100644
--- a/tempest/api/compute/admin/test_security_groups.py
+++ b/tempest/api/compute/admin/test_security_groups.py
@@ -51,7 +51,8 @@
             name = data_utils.rand_name('securitygroup')
             description = data_utils.rand_name('description')
             securitygroup = (self.client
-                             .create_security_group(name, description))
+                             .create_security_group(name=name,
+                                                    description=description))
             self.addCleanup(self._delete_security_group,
                             securitygroup['id'], admin=False)
             security_group_list.append(securitygroup)
@@ -61,9 +62,8 @@
         for i in range(2):
             name = data_utils.rand_name('securitygroup')
             description = data_utils.rand_name('description')
-            adm_securitygroup = (self.adm_client
-                                 .create_security_group(name,
-                                                        description))
+            adm_securitygroup = self.adm_client.create_security_group(
+                name=name, description=description)
             self.addCleanup(self._delete_security_group,
                             adm_securitygroup['id'])
             security_group_list.append(adm_securitygroup)
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index b93aaca..0241e70 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -73,9 +73,9 @@
         ram = int(quota_set['ram']) + 1
         vcpus = 8
         disk = 10
-        flavor_ref = self.flavors_client.create_flavor(flavor_name,
-                                                       ram, vcpus, disk,
-                                                       flavor_id)
+        flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
+                                                       ram=ram, vcpus=vcpus,
+                                                       disk=disk, id=flavor_id)
         self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.client.resize,
@@ -95,9 +95,9 @@
         quota_set = self.quotas_client.show_default_quota_set(self.tenant_id)
         vcpus = int(quota_set['cores']) + 1
         disk = 10
-        flavor_ref = self.flavors_client.create_flavor(flavor_name,
-                                                       ram, vcpus, disk,
-                                                       flavor_id)
+        flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
+                                                       ram=ram, vcpus=vcpus,
+                                                       disk=disk, id=flavor_id)
         self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.client.resize,
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 759bb8c..b2effc2 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -64,6 +64,7 @@
         cls.floating_ip_pools_client = cls.os.floating_ip_pools_client
         cls.floating_ips_client = cls.os.floating_ips_client
         cls.keypairs_client = cls.os.keypairs_client
+        cls.security_group_rules_client = cls.os.security_group_rules_client
         cls.security_groups_client = cls.os.security_groups_client
         cls.quotas_client = cls.os.quotas_client
         # NOTE(mriedem): os-quota-class-sets is v2 API only
@@ -221,8 +222,8 @@
         if description is None:
             description = data_utils.rand_name('description')
         body = \
-            cls.security_groups_client.create_security_group(name,
-                                                             description)
+            cls.security_groups_client.create_security_group(
+                name=name, description=description)
         cls.security_groups.append(body)
 
         return body
@@ -278,7 +279,7 @@
         if 'name' in kwargs:
             name = kwargs.pop('name')
 
-        image = cls.images_client.create_image(server_id, name)
+        image = cls.images_client.create_image(server_id, name=name)
         image_id = data_utils.parse_image_id(image.response['location'])
         cls.images.append(image_id)
 
diff --git a/tempest/api/compute/images/test_images_negative.py b/tempest/api/compute/images/test_images_negative.py
index 9721fa5..84a8258 100644
--- a/tempest/api/compute/images/test_images_negative.py
+++ b/tempest/api/compute/images/test_images_negative.py
@@ -93,7 +93,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 35)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
-                          test_uuid, snapshot_name)
+                          test_uuid, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('36741560-510e-4cc2-8641-55fe4dfb2437')
@@ -102,7 +102,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 37)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
-                          test_uuid, snapshot_name)
+                          test_uuid, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('381acb65-785a-4942-94ce-d8f8c84f1f0f')
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index 40a781c..06b7cac 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -80,7 +80,8 @@
         # Create a new image
         name = data_utils.rand_name('image')
         meta = {'image_type': 'test'}
-        body = self.client.create_image(self.server_id, name, meta)
+        body = self.client.create_image(self.server_id, name=name,
+                                        metadata=meta)
         image_id = data_utils.parse_image_id(body.response['location'])
         waiters.wait_for_image_status(self.client, image_id, 'ACTIVE')
 
@@ -112,6 +113,6 @@
         # #1370954 in glance which will 500 if mysql is used as the
         # backend and it attempts to store a 4 byte utf-8 character
         utf8_name = data_utils.rand_name('\xe2\x82\xa1')
-        body = self.client.create_image(self.server_id, utf8_name)
+        body = self.client.create_image(self.server_id, name=utf8_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.addCleanup(self.client.delete_image, image_id)
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 1a74e52..9ea62fb 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -93,7 +93,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         meta = {'': ''}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name, meta)
+                          self.server_id, name=snapshot_name, metadata=meta)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('3d24d11f-5366-4536-bd28-cff32b748eca')
@@ -102,7 +102,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         meta = {'a' * 260: 'b' * 260}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name, meta)
+                          self.server_id, name=snapshot_name, metadata=meta)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('0460efcf-ee88-4f94-acef-1bf658695456')
@@ -111,8 +111,7 @@
 
         # Create first snapshot
         snapshot_name = data_utils.rand_name('test-snap')
-        body = self.client.create_image(self.server_id,
-                                        snapshot_name)
+        body = self.client.create_image(self.server_id, name=snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
         self.addCleanup(self._reset_server)
@@ -120,7 +119,7 @@
         # Create second snapshot
         alt_snapshot_name = data_utils.rand_name('test-snap')
         self.assertRaises(lib_exc.Conflict, self.client.create_image,
-                          self.server_id, alt_snapshot_name)
+                          self.server_id, name=alt_snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('084f0cbc-500a-4963-8a4e-312905862581')
@@ -129,7 +128,7 @@
 
         snapshot_name = data_utils.rand_name('a' * 260)
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name)
+                          self.server_id, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('0894954d-2db2-4195-a45b-ffec0bc0187e')
@@ -137,7 +136,7 @@
         # Return an error while trying to delete an image that is being created
 
         snapshot_name = data_utils.rand_name('test-snap')
-        body = self.client.create_image(self.server_id, snapshot_name)
+        body = self.client.create_image(self.server_id, name=snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
         self.addCleanup(self._reset_server)
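
``create_image`` follows the same pattern in the image tests above: the server id stays
positional while the snapshot name and metadata move to keyword arguments. An illustrative
sketch, assuming the ``self.client`` and ``self.server_id`` fixtures used in those tests::

    snapshot_name = data_utils.rand_name('test-snap')
    body = self.client.create_image(self.server_id, name=snapshot_name,
                                    metadata={'image_type': 'test'})
    # The image id is still parsed out of the response's Location header.
    image_id = data_utils.parse_image_id(body.response['location'])
    self.addCleanup(self.client.delete_image, image_id)
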
diff --git a/tempest/api/compute/keypairs/base.py b/tempest/api/compute/keypairs/base.py
new file mode 100644
index 0000000..b742c8c
--- /dev/null
+++ b/tempest/api/compute/keypairs/base.py
@@ -0,0 +1,38 @@
+# Copyright 2015 Deutsche Telekom AG
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.compute import base
+
+
+class BaseKeypairTest(base.BaseComputeTest):
+    """Base test case class for all keypair API tests."""
+
+    _api_version = 2
+
+    @classmethod
+    def setup_clients(cls):
+        super(BaseKeypairTest, cls).setup_clients()
+        cls.client = cls.keypairs_client
+
+    def _delete_keypair(self, keypair_name):
+        self.client.delete_keypair(keypair_name)
+
+    def _create_keypair(self, keypair_name, pub_key=None):
+        kwargs = {'name': keypair_name}
+        if pub_key:
+            kwargs.update({'public_key': pub_key})
+        body = self.client.create_keypair(**kwargs)
+        self.addCleanup(self._delete_keypair, keypair_name)
+        return body
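
The new ``BaseKeypairTest`` centralises the client setup and the create/delete helpers that the
positive and negative keypair suites previously duplicated. A sketch of how a test body built on
it looks (the test body itself is illustrative, not part of this change)::

    # Inside a test method of a class derived from BaseKeypairTest:
    k_name = data_utils.rand_name('keypair')
    # _create_keypair registers the delete cleanup itself.
    keypair = self._create_keypair(k_name)
    self.assertEqual(k_name, keypair['name'])
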
diff --git a/tempest/api/compute/keypairs/test_keypairs.py b/tempest/api/compute/keypairs/test_keypairs.py
index 45eaa97..225af12 100644
--- a/tempest/api/compute/keypairs/test_keypairs.py
+++ b/tempest/api/compute/keypairs/test_keypairs.py
@@ -13,28 +13,12 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-from tempest.api.compute import base
+from tempest.api.compute.keypairs import base
 from tempest.common.utils import data_utils
 from tempest import test
 
 
-class KeyPairsV2TestJSON(base.BaseComputeTest):
-
-    _api_version = 2
-
-    @classmethod
-    def setup_clients(cls):
-        super(KeyPairsV2TestJSON, cls).setup_clients()
-        cls.client = cls.keypairs_client
-
-    def _delete_keypair(self, keypair_name):
-        self.client.delete_keypair(keypair_name)
-
-    def _create_keypair(self, keypair_name, pub_key=None):
-        body = self.client.create_keypair(keypair_name, pub_key)
-        self.addCleanup(self._delete_keypair, keypair_name)
-        return body
-
+class KeyPairsV2TestJSON(base.BaseKeypairTest):
     @test.idempotent_id('1d1dbedb-d7a0-432a-9d09-83f543c3c19b')
     def test_keypairs_create_list_delete(self):
         # Keypairs created should be available in the response list
diff --git a/tempest/api/compute/keypairs/test_keypairs_negative.py b/tempest/api/compute/keypairs/test_keypairs_negative.py
index 54b07f0..0ab78fb 100644
--- a/tempest/api/compute/keypairs/test_keypairs_negative.py
+++ b/tempest/api/compute/keypairs/test_keypairs_negative.py
@@ -16,22 +16,12 @@
 
 from tempest_lib import exceptions as lib_exc
 
-from tempest.api.compute import base
+from tempest.api.compute.keypairs import base
 from tempest.common.utils import data_utils
 from tempest import test
 
 
-class KeyPairsNegativeTestJSON(base.BaseV2ComputeTest):
-
-    @classmethod
-    def setup_clients(cls):
-        super(KeyPairsNegativeTestJSON, cls).setup_clients()
-        cls.client = cls.keypairs_client
-
-    def _create_keypair(self, keypair_name, pub_key=None):
-        self.client.create_keypair(keypair_name, pub_key)
-        self.addCleanup(self.client.delete_keypair, keypair_name)
-
+class KeyPairsNegativeTestJSON(base.BaseKeypairTest):
     @test.attr(type=['negative'])
     @test.idempotent_id('29cca892-46ae-4d48-bc32-8fe7e731eb81')
     def test_keypair_create_with_invalid_pub_key(self):
@@ -72,7 +62,7 @@
     def test_create_keypair_with_duplicate_name(self):
         # Keypairs with duplicate names should not be created
         k_name = data_utils.rand_name('keypair')
-        self.client.create_keypair(k_name)
+        self.client.create_keypair(name=k_name)
         # Now try the same keyname to create another key
         self.assertRaises(lib_exc.Conflict, self._create_keypair,
                           k_name)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules.py b/tempest/api/compute/security_groups/test_security_group_rules.py
index ff3f25b..b5eff70 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules.py
@@ -25,7 +25,7 @@
     @classmethod
     def setup_clients(cls):
         super(SecurityGroupRulesTestJSON, cls).setup_clients()
-        cls.client = cls.security_groups_client
+        cls.client = cls.security_group_rules_client
 
     @classmethod
     def resource_setup(cls):
@@ -69,11 +69,11 @@
         security_group = self.create_security_group()
         securitygroup_id = security_group['id']
         # Adding rules to the created Security Group
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port)
         self.expected['parent_group_id'] = securitygroup_id
         self.expected['ip_range'] = {'cidr': '0.0.0.0/0'}
         self._check_expected_response(rule)
@@ -91,12 +91,12 @@
 
         # Adding rules to the created Security Group with optional cidr
         cidr = '10.2.3.124/24'
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port,
-                                                   cidr=cidr)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=parent_group_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            cidr=cidr)
         self.expected['parent_group_id'] = parent_group_id
         self.expected['ip_range'] = {'cidr': cidr}
         self._check_expected_response(rule)
@@ -118,12 +118,12 @@
         group_name = security_group['name']
 
         # Adding rules to the created Security Group with optional group_id
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port,
-                                                   group_id=group_id)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=parent_group_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            group_id=group_id)
         self.expected['parent_group_id'] = parent_group_id
         self.expected['group'] = {'tenant_id': self.client.tenant_id,
                                   'name': group_name}
@@ -140,21 +140,22 @@
         securitygroup_id = security_group['id']
 
         # Add a first rule to the created Security Group
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port)
         rule1_id = rule['id']
 
         # Add a second rule to the created Security Group
         ip_protocol2 = 'icmp'
         from_port2 = -1
         to_port2 = -1
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   ip_protocol2,
-                                                   from_port2, to_port2)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=ip_protocol2,
+            from_port=from_port2,
+            to_port=to_port2)
         rule2_id = rule['id']
         # Delete the Security Group rule2 at the end of this method
         self.addCleanup(self.client.delete_security_group_rule, rule2_id)
@@ -176,14 +177,15 @@
         security_group = self.create_security_group()
         sg2_id = security_group['id']
         # Adding rules to the Group1
-        self.client.create_security_group_rule(sg1_id,
-                                               self.ip_protocol,
-                                               self.from_port,
-                                               self.to_port,
-                                               group_id=sg2_id)
+        self.client.create_security_group_rule(
+            parent_group_id=sg1_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            group_id=sg2_id)
 
         # Delete group2
-        self.client.delete_security_group(sg2_id)
+        self.security_groups_client.delete_security_group(sg2_id)
         # Get rules of the Group1
         rules = \
             self.client.list_security_group_rules(sg1_id)
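
Security group rule operations now go through the dedicated ``security_group_rules_client``
(wired into ``tempest/api/compute/base.py`` earlier in this change) and take keyword arguments
only. A sketch of the call pattern the hunks above converge on, with ``self.client`` bound to
that rules client as in this test class::

    rule = self.client.create_security_group_rule(
        parent_group_id=securitygroup_id,
        ip_protocol=self.ip_protocol,
        from_port=self.from_port,
        to_port=self.to_port)
    self.addCleanup(self.client.delete_security_group_rule, rule['id'])
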
diff --git a/tempest/api/compute/security_groups/test_security_group_rules_negative.py b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
index 15e79ac..d12306a 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules_negative.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
@@ -36,6 +36,7 @@
     def setup_clients(cls):
         super(SecurityGroupRulesNegativeTestJSON, cls).setup_clients()
         cls.client = cls.security_groups_client
+        cls.rules_client = cls.security_group_rules_client
 
     @test.attr(type=['negative'])
     @test.idempotent_id('1d507e98-7951-469b-82c3-23f1e6b8c254')
@@ -49,8 +50,10 @@
         from_port = 22
         to_port = 22
         self.assertRaises(lib_exc.NotFound,
-                          self.client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('2244d7e4-adb7-4ecb-9930-2d77e123ce4f')
@@ -64,8 +67,10 @@
         from_port = 22
         to_port = 22
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('8bd56d02-3ffa-4d67-9933-b6b9a01d6089')
@@ -80,16 +85,17 @@
         from_port = 22
         to_port = 22
 
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   ip_protocol,
-                                                   from_port,
-                                                   to_port)
-        self.addCleanup(self.client.delete_security_group_rule, rule['id'])
+        rule = self.rules_client.create_security_group_rule(
+            parent_group_id=parent_group_id, ip_protocol=ip_protocol,
+            from_port=from_port, to_port=to_port)
+        self.addCleanup(self.rules_client.delete_security_group_rule,
+                        rule['id'])
         # Add the same rule to the group should fail
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('84c81249-9f6e-439c-9bbf-cbb0d2cddbdf')
@@ -106,8 +112,10 @@
         to_port = 22
 
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('12bbc875-1045-4f7a-be46-751277baedb9')
@@ -123,8 +131,10 @@
         from_port = data_utils.rand_int_id(start=65536)
         to_port = 22
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('ff88804d-144f-45d1-bf59-dd155838a43a')
@@ -140,8 +150,10 @@
         from_port = 22
         to_port = data_utils.rand_int_id(start=65536)
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('00296fa9-0576-496a-ae15-fbab843189e0')
@@ -157,8 +169,10 @@
         from_port = 22
         to_port = 21
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group_rule,
-                          secgroup_id, ip_protocol, from_port, to_port)
+                          self.rules_client.create_security_group_rule,
+                          parent_group_id=secgroup_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('56fddcca-dbb8-4494-a0db-96e9f869527c')
@@ -168,5 +182,5 @@
         # with non existent id
         non_existent_rule_id = not_existing_id()
         self.assertRaises(lib_exc.NotFound,
-                          self.client.delete_security_group_rule,
+                          self.rules_client.delete_security_group_rule,
                           non_existent_rule_id)
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index d8cbe3d..642aca1 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -72,16 +72,17 @@
         s_description = data_utils.rand_name('description')
         # Create Security Group with empty string as group name
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group, "", s_description)
+                          self.client.create_security_group,
+                          name="", description=s_description)
         # Create Security Group with white space in group name
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group, " ",
-                          s_description)
+                          self.client.create_security_group,
+                          name=" ", description=s_description)
         # Create Security Group with group name longer than 255 chars
         s_name = 'securitygroup-'.ljust(260, '0')
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group, s_name,
-                          s_description)
+                          self.client.create_security_group,
+                          name=s_name, description=s_description)
 
     @decorators.skip_because(bug="1161411",
                              condition=CONF.service_available.neutron)
@@ -96,8 +97,8 @@
         # Create Security Group with group description longer than 255 chars
         s_description = 'description-'.ljust(260, '0')
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group, s_name,
-                          s_description)
+                          self.client.create_security_group,
+                          name=s_name, description=s_description)
 
     @test.idempotent_id('9fdb4abc-6b66-4b27-b89c-eb215a956168')
     @testtools.skipIf(CONF.service_available.neutron,
@@ -109,11 +110,11 @@
         # be created
         s_name = data_utils.rand_name('securitygroup')
         s_description = data_utils.rand_name('description')
-        self.create_security_group(s_name, s_description)
+        self.create_security_group(name=s_name, description=s_description)
         # Now try the Security Group with the same 'Name'
         self.assertRaises(lib_exc.BadRequest,
-                          self.client.create_security_group, s_name,
-                          s_description)
+                          self.client.create_security_group,
+                          name=s_name, description=s_description)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('36a1629f-c6da-4a26-b8b8-55e7e5d5cd58')
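
Plain security group creation keeps using the ``security_groups_client`` but likewise switches
to keyword arguments; a short sketch, with ``self.client`` being the security groups client used
in the negative tests above::

    s_name = data_utils.rand_name('securitygroup')
    s_description = data_utils.rand_name('description')
    securitygroup = self.client.create_security_group(
        name=s_name, description=s_description)
    self.addCleanup(self.client.delete_security_group, securitygroup['id'])
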
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index 23a9cb3..e62a52b 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -269,9 +269,9 @@
 
             # Create a flavor with extra specs
             flavor = (self.flavor_client.
-                      create_flavor(flavor_with_eph_disk_name,
-                                    ram, vcpus, disk,
-                                    flavor_with_eph_disk_id,
+                      create_flavor(name=flavor_with_eph_disk_name,
+                                    ram=ram, vcpus=vcpus, disk=disk,
+                                    id=flavor_with_eph_disk_id,
                                     ephemeral=1))
             self.addCleanup(flavor_clean_up, flavor['id'])
 
@@ -287,9 +287,9 @@
 
             # Create a flavor without extra specs
             flavor = (self.flavor_client.
-                      create_flavor(flavor_no_eph_disk_name,
-                                    ram, vcpus, disk,
-                                    flavor_no_eph_disk_id))
+                      create_flavor(name=flavor_no_eph_disk_name,
+                                    ram=ram, vcpus=vcpus, disk=disk,
+                                    id=flavor_no_eph_disk_id))
             self.addCleanup(flavor_clean_up, flavor['id'])
 
             return flavor['id']
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index a75cb3e..6160844 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -305,12 +305,20 @@
             params = {'ip': ip}
         else:
             params = {'ip6': ip}
+        # capture all servers in case something goes wrong
+        all_servers = self.client.list_servers(detail=True)
         body = self.client.list_servers(**params)
         servers = body['servers']
 
-        self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
-        self.assertIn(self.s2_name, map(lambda x: x['name'], servers))
-        self.assertIn(self.s3_name, map(lambda x: x['name'], servers))
+        self.assertIn(self.s1_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s1_name, servers, all_servers))
+        self.assertIn(self.s2_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s2_name, servers, all_servers))
+        self.assertIn(self.s3_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s3_name, servers, all_servers))
 
     @test.idempotent_id('67aec2d0-35fe-4503-9f92-f13272b867ed')
     def test_list_servers_detailed_limit_results(self):
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index 98a2f9d..7e09096 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -48,9 +48,8 @@
         # Security group creation
         cls.sg_name = data_utils.rand_name('sg')
         cls.sg_desc = data_utils.rand_name('sg-desc')
-        cls.sg = \
-            cls.security_groups_client.create_security_group(cls.sg_name,
-                                                             cls.sg_desc)
+        cls.sg = cls.security_groups_client.create_security_group(
+            name=cls.sg_name, description=cls.sg_desc)
         cls.sg_id = cls.sg['id']
 
         # Server for positive tests
diff --git a/tempest/api/compute/servers/test_servers.py b/tempest/api/compute/servers/test_servers.py
index 2c1e69c..c243adf 100644
--- a/tempest/api/compute/servers/test_servers.py
+++ b/tempest/api/compute/servers/test_servers.py
@@ -63,7 +63,7 @@
         # Specify a keypair while creating a server
 
         key_name = data_utils.rand_name('key')
-        self.keypairs_client.create_keypair(key_name)
+        self.keypairs_client.create_keypair(name=key_name)
         self.addCleanup(self.keypairs_client.delete_keypair, key_name)
         self.keypairs_client.list_keypairs()
         server = self.create_test_server(key_name=key_name)
diff --git a/tempest/api/compute/test_authorization.py b/tempest/api/compute/test_authorization.py
index 58c2206..e7111b0 100644
--- a/tempest/api/compute/test_authorization.py
+++ b/tempest/api/compute/test_authorization.py
@@ -52,11 +52,13 @@
         cls.glance_client = cls.os.image_client
         cls.keypairs_client = cls.os.keypairs_client
         cls.security_client = cls.os.security_groups_client
+        cls.rule_client = cls.os.security_group_rules_client
 
         cls.alt_client = cls.alt_manager.servers_client
         cls.alt_images_client = cls.alt_manager.images_client
         cls.alt_keypairs_client = cls.alt_manager.keypairs_client
         cls.alt_security_client = cls.alt_manager.security_groups_client
+        cls.alt_rule_client = cls.alt_manager.security_group_rules_client
 
     @classmethod
     def resource_setup(cls):
@@ -76,19 +78,20 @@
         cls.image = cls.images_client.show_image(image_id)
 
         cls.keypairname = data_utils.rand_name('keypair')
-        cls.keypairs_client.create_keypair(cls.keypairname)
+        cls.keypairs_client.create_keypair(name=cls.keypairname)
 
         name = data_utils.rand_name('security')
         description = data_utils.rand_name('description')
         cls.security_group = cls.security_client.create_security_group(
-            name, description)
+            name=name, description=description)
 
         parent_group_id = cls.security_group['id']
         ip_protocol = 'tcp'
         from_port = 22
         to_port = 22
-        cls.rule = cls.security_client.create_security_group_rule(
-            parent_group_id, ip_protocol, from_port, to_port)
+        cls.rule = cls.rule_client.create_security_group_rule(
+            parent_group_id=parent_group_id, ip_protocol=ip_protocol,
+            from_port=from_port, to_port=to_port)
 
     @classmethod
     def resource_cleanup(cls):
@@ -171,7 +174,7 @@
         # A create image request for another user's server should fail
         self.assertRaises(lib_exc.NotFound,
                           self.alt_images_client.create_image,
-                          self.server['id'], 'testImage')
+                          self.server['id'], name='testImage')
 
     @test.idempotent_id('95d445f6-babc-4f2e-aea3-aa24ec5e7f0d')
     def test_create_server_with_unauthorized_image(self):
@@ -207,7 +210,8 @@
             resp = {}
             resp['status'] = None
             self.assertRaises(lib_exc.BadRequest,
-                              self.alt_keypairs_client.create_keypair, k_name)
+                              self.alt_keypairs_client.create_keypair,
+                              name=k_name)
         finally:
             # Next request the base_url is back to normal
             if (resp['status'] is not None):
@@ -259,7 +263,7 @@
             resp['status'] = None
             self.assertRaises(lib_exc.BadRequest,
                               self.alt_security_client.create_security_group,
-                              s_name, s_description)
+                              name=s_name, description=s_description)
         finally:
             # Next request the base_url is back to normal
             if resp['status'] is not None:
@@ -292,21 +296,22 @@
         to_port = -1
         try:
             # Change the base URL to impersonate another user
-            self.alt_security_client.auth_provider.set_alt_auth_data(
+            self.alt_rule_client.auth_provider.set_alt_auth_data(
                 request_part='url',
-                auth_data=self.security_client.auth_provider.auth_data
+                auth_data=self.rule_client.auth_provider.auth_data
             )
             resp = {}
             resp['status'] = None
             self.assertRaises(lib_exc.BadRequest,
-                              self.alt_security_client.
+                              self.alt_rule_client.
                               create_security_group_rule,
-                              parent_group_id, ip_protocol, from_port,
-                              to_port)
+                              parent_group_id=parent_group_id,
+                              ip_protocol=ip_protocol,
+                              from_port=from_port, to_port=to_port)
         finally:
             # Next request the base_url is back to normal
             if resp['status'] is not None:
-                self.alt_security_client.delete_security_group_rule(resp['id'])
+                self.alt_rule_client.delete_security_group_rule(resp['id'])
                 LOG.error("Create security group rule request should not "
                           "happen if the tenant id does not match the"
                           " current user")
@@ -316,7 +321,7 @@
         # A DELETE request for another user's security group rule
         # should fail
         self.assertRaises(lib_exc.NotFound,
-                          self.alt_security_client.delete_security_group_rule,
+                          self.alt_rule_client.delete_security_group_rule,
                           self.rule['id'])
 
     @test.idempotent_id('c5f52351-53d9-4fc9-83e5-917f7f5e3d71')
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
new file mode 100644
index 0000000..3af2e90
--- /dev/null
+++ b/tempest/api/identity/admin/v2/test_endpoints.py
@@ -0,0 +1,90 @@
+# Copyright 2013 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.identity import base
+from tempest.common.utils import data_utils
+from tempest import test
+
+
+class EndPointsTestJSON(base.BaseIdentityV2AdminTest):
+
+    @classmethod
+    def resource_setup(cls):
+        super(EndPointsTestJSON, cls).resource_setup()
+        cls.service_ids = list()
+        s_name = data_utils.rand_name('service')
+        s_type = data_utils.rand_name('type')
+        s_description = data_utils.rand_name('description')
+        cls.service_data =\
+            cls.client.create_service(s_name, s_type,
+                                      description=s_description)
+        cls.service_id = cls.service_data['id']
+        cls.service_ids.append(cls.service_id)
+        # Create endpoints so as to use for LIST and GET test cases
+        cls.setup_endpoints = list()
+        for i in range(2):
+            region = data_utils.rand_name('region')
+            url = data_utils.rand_url()
+            endpoint = cls.client.create_endpoint(cls.service_id,
+                                                  region,
+                                                  publicurl=url,
+                                                  adminurl=url,
+                                                  internalurl=url)
+            # list_endpoints() will return 'enabled' field
+            endpoint['enabled'] = True
+            cls.setup_endpoints.append(endpoint)
+
+    @classmethod
+    def resource_cleanup(cls):
+        for e in cls.setup_endpoints:
+            cls.client.delete_endpoint(e['id'])
+        for s in cls.service_ids:
+            cls.client.delete_service(s)
+        super(EndPointsTestJSON, cls).resource_cleanup()
+
+    @test.idempotent_id('11f590eb-59d8-4067-8b2b-980c7f387f51')
+    def test_list_endpoints(self):
+        # Get a list of endpoints
+        fetched_endpoints = self.client.list_endpoints()
+        # Asserting LIST endpoints
+        missing_endpoints =\
+            [e for e in self.setup_endpoints if e not in fetched_endpoints]
+        self.assertEqual(0, len(missing_endpoints),
+                         "Failed to find endpoint %s in fetched list" %
+                         ', '.join(str(e) for e in missing_endpoints))
+
+    @test.idempotent_id('9974530a-aa28-4362-8403-f06db02b26c1')
+    def test_create_list_delete_endpoint(self):
+        region = data_utils.rand_name('region')
+        url = data_utils.rand_url()
+        endpoint = self.client.create_endpoint(self.service_id,
+                                               region,
+                                               publicurl=url,
+                                               adminurl=url,
+                                               internalurl=url)
+        # Asserting Create Endpoint response body
+        self.assertIn('id', endpoint)
+        self.assertEqual(region, endpoint['region'])
+        self.assertEqual(url, endpoint['publicurl'])
+        # Checking if created endpoint is present in the list of endpoints
+        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
+        self.assertIn(endpoint['id'], fetched_endpoints_id)
+        # Deleting the endpoint created in this method
+        self.client.delete_endpoint(endpoint['id'])
+        # Checking whether endpoint is deleted successfully
+        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
+        self.assertNotIn(endpoint['id'], fetched_endpoints_id)
diff --git a/tempest/api/orchestration/base.py b/tempest/api/orchestration/base.py
index 266f726..6578680 100644
--- a/tempest/api/orchestration/base.py
+++ b/tempest/api/orchestration/base.py
@@ -96,7 +96,7 @@
     @classmethod
     def _create_keypair(cls, name_start='keypair-heat-'):
         kp_name = data_utils.rand_name(name_start)
-        body = cls.keypairs_client.create_keypair(kp_name)
+        body = cls.keypairs_client.create_keypair(name=kp_name)
         cls.keypairs.append(kp_name)
         return body
 
diff --git a/tempest/api/telemetry/base.py b/tempest/api/telemetry/base.py
index 3be807b..0f9b7dd 100644
--- a/tempest/api/telemetry/base.py
+++ b/tempest/api/telemetry/base.py
@@ -121,3 +121,27 @@
             'Sample for metric:%s with query:%s has not been added to the '
             'database within %d seconds' % (metric, query,
                                             CONF.compute.build_timeout))
+
+
+class BaseTelemetryAdminTest(BaseTelemetryTest):
+    """Base test case class for admin Telemetry API tests."""
+
+    credentials = ['primary', 'admin']
+
+    @classmethod
+    def setup_clients(cls):
+        super(BaseTelemetryAdminTest, cls).setup_clients()
+        cls.telemetry_admin_client = cls.os_adm.telemetry_client
+
+    def await_events(self, query):
+        timeout = CONF.compute.build_timeout
+        start = timeutils.utcnow()
+        while timeutils.delta_seconds(start, timeutils.utcnow()) < timeout:
+            body = self.telemetry_admin_client.list_events(query)
+            if body:
+                return body
+            time.sleep(CONF.compute.build_interval)
+
+        raise exceptions.TimeoutException(
+            'Event with query:%s has not been added to the '
+            'database within %d seconds' % (query, CONF.compute.build_timeout))
diff --git a/tempest/api/telemetry/test_telemetry_notification_api.py b/tempest/api/telemetry/test_telemetry_notification_api.py
index 52793c8..71a00c9 100644
--- a/tempest/api/telemetry/test_telemetry_notification_api.py
+++ b/tempest/api/telemetry/test_telemetry_notification_api.py
@@ -10,6 +10,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from tempest_lib import decorators
 import testtools
 
 from tempest.api.telemetry import base
@@ -29,8 +30,7 @@
                                     "is disabled")
 
     @test.idempotent_id('d7f8c1c8-d470-4731-8604-315d3956caad')
-    @testtools.skipIf(not CONF.service_available.nova,
-                      "Nova is not available.")
+    @test.services('compute')
     def test_check_nova_notification(self):
 
         body = self.create_server()
@@ -71,3 +71,28 @@
 
         for metric in self.glance_v2_notifications:
             self.await_samples(metric, query)
+
+
+class TelemetryNotificationAdminAPITestJSON(base.BaseTelemetryAdminTest):
+
+    @classmethod
+    def skip_checks(cls):
+        super(TelemetryNotificationAdminAPITestJSON, cls).skip_checks()
+        if CONF.telemetry.too_slow_to_test:
+            raise cls.skipException("Ceilometer feature for fast work mysql "
+                                    "is disabled")
+
+    @test.idempotent_id('29604198-8b45-4fc0-8af8-1cae4f94ebe9')
+    @test.services('compute')
+    @decorators.skip_because(bug='1480490')
+    def test_check_nova_notification_event_and_meter(self):
+
+        body = self.create_server()
+
+        if CONF.telemetry_feature_enabled.events:
+            query = ('instance_id', 'eq', body['id'])
+            self.await_events(query)
+
+        query = ('resource', 'eq', body['id'])
+        for metric in self.nova_notifications:
+            self.await_samples(metric, query)
diff --git a/tempest/clients.py b/tempest/clients.py
index 6a2c601..b3fb8a8 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -42,9 +42,9 @@
 from tempest.services.compute.json.fixed_ips_client import FixedIPsClient
 from tempest.services.compute.json.flavors_client import FlavorsClient
 from tempest.services.compute.json.floating_ip_pools_client import \
-    FloatingIpPoolsClient
+    FloatingIPPoolsClient
 from tempest.services.compute.json.floating_ips_bulk_client import \
-    FloatingIpsBulkClient
+    FloatingIPsBulkClient
 from tempest.services.compute.json.floating_ips_client import \
     FloatingIPsClient
 from tempest.services.compute.json.hosts_client import HostsClient
@@ -65,6 +65,8 @@
 from tempest.services.compute.json.quotas_client import QuotasClient
 from tempest.services.compute.json.security_group_default_rules_client import \
     SecurityGroupDefaultRulesClient
+from tempest.services.compute.json.security_group_rules_client import \
+    SecurityGroupRulesClient
 from tempest.services.compute.json.security_groups_client import \
     SecurityGroupsClient
 from tempest.services.compute.json.server_groups_client import \
@@ -280,12 +282,14 @@
         self.flavors_client = FlavorsClient(self.auth_provider, **params)
         self.extensions_client = ExtensionsClient(self.auth_provider,
                                                   **params)
-        self.floating_ip_pools_client = FloatingIpPoolsClient(
+        self.floating_ip_pools_client = FloatingIPPoolsClient(
             self.auth_provider, **params)
-        self.floating_ips_bulk_client = FloatingIpsBulkClient(
+        self.floating_ips_bulk_client = FloatingIPsBulkClient(
             self.auth_provider, **params)
         self.floating_ips_client = FloatingIPsClient(self.auth_provider,
                                                      **params)
+        self.security_group_rules_client = SecurityGroupRulesClient(
+            self.auth_provider, **params)
         self.security_groups_client = SecurityGroupsClient(
             self.auth_provider, **params)
         self.interfaces_client = InterfacesClient(self.auth_provider,
@@ -340,15 +344,25 @@
     def _set_identity_clients(self):
         params = {
             'service': CONF.identity.catalog_type,
-            'region': CONF.identity.region,
-            'endpoint_type': 'adminURL'
+            'region': CONF.identity.region
         }
         params.update(self.default_params_with_timeout_values)
-
+        params_v2_admin = params.copy()
+        params_v2_admin['endpoint_type'] = CONF.identity.v2_admin_endpoint_type
+        # Client uses admin endpoint type of Keystone API v2
         self.identity_client = IdentityClient(self.auth_provider,
-                                              **params)
+                                              **params_v2_admin)
+        params_v2_public = params.copy()
+        params_v2_public['endpoint_type'] = (
+            CONF.identity.v2_public_endpoint_type)
+        # Client uses public endpoint type of Keystone API v2
+        self.identity_public_client = IdentityClient(self.auth_provider,
+                                                     **params_v2_public)
+        params_v3 = params.copy()
+        params_v3['endpoint_type'] = CONF.identity.v3_endpoint_type
+        # Client uses the endpoint type of Keystone API v3
         self.identity_v3_client = IdentityV3Client(self.auth_provider,
-                                                   **params)
+                                                   **params_v3)
         self.endpoints_client = EndPointClient(self.auth_provider,
                                                **params)
         self.service_client = ServiceClient(self.auth_provider, **params)
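For reference, a hedged summary of which new ``[identity]`` option each of the identity clients above now reads, with the defaults added to ``config.py`` later in this change (the dict is purely illustrative)::

    # assumption: keys mirror the assignments in _set_identity_clients above
    endpoint_type_defaults = {
        'identity_client': ('v2_admin_endpoint_type', 'adminURL'),
        'identity_public_client': ('v2_public_endpoint_type', 'publicURL'),
        'identity_v3_client': ('v3_endpoint_type', 'adminURL'),
    }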
diff --git a/tempest/cmd/cleanup_service.py b/tempest/cmd/cleanup_service.py
index dcdf7c5..2e96c81 100644
--- a/tempest/cmd/cleanup_service.py
+++ b/tempest/cmd/cleanup_service.py
@@ -76,12 +76,12 @@
                   CONF.identity.alt_username]
 
     if IS_NEUTRON:
-        CONF_PRIV_NETWORK = _get_priv_net_id(CONF.compute.fixed_network_name,
-                                             CONF.identity.tenant_name)
+        CONF_PRIV_NETWORK = _get_network_id(CONF.compute.fixed_network_name,
+                                            CONF.identity.tenant_name)
         CONF_NETWORKS = [CONF_PUB_NETWORK, CONF_PRIV_NETWORK]
 
 
-def _get_priv_net_id(prv_net_name, tenant_name):
+def _get_network_id(net_name, tenant_name):
     am = clients.AdminManager()
     net_cl = am.network_client
     id_cl = am.identity_client
@@ -91,7 +91,7 @@
     t_id = tenant['id']
     n_id = None
     for net in networks['networks']:
-        if (net['tenant_id'] == t_id and net['name'] == prv_net_name):
+        if (net['tenant_id'] == t_id and net['name'] == net_name):
             n_id = net['id']
             break
     return n_id
@@ -103,6 +103,10 @@
         for key, value in kwargs.items():
             setattr(self, key, value)
 
+        self.tenant_filter = {}
+        if hasattr(self, 'tenant_id'):
+            self.tenant_filter['tenant_id'] = self.tenant_id
+
     def _filter_by_tenant_id(self, item_list):
         if (item_list is None
                 or len(item_list) == 0
@@ -387,8 +391,8 @@
 
     def list(self):
         client = self.client
-        networks = client.list_networks()
-        networks = self._filter_by_tenant_id(networks['networks'])
+        networks = client.list_networks(**self.tenant_filter)
+        networks = networks['networks']
         # filter out networks declared in tempest.conf
         if self.is_preserve:
             networks = [network for network in networks
@@ -414,9 +418,8 @@
 
     def list(self):
         client = self.client
-        flips = client.list_floatingips()
+        flips = client.list_floatingips(**self.tenant_filter)
         flips = flips['floatingips']
-        flips = self._filter_by_tenant_id(flips)
         LOG.debug("List count, %s Network Floating IPs" % len(flips))
         return flips
 
@@ -438,9 +441,8 @@
 
     def list(self):
         client = self.client
-        routers = client.list_routers()
+        routers = client.list_routers(**self.tenant_filter)
         routers = routers['routers']
-        routers = self._filter_by_tenant_id(routers)
         if self.is_preserve:
             routers = [router for router in routers
                        if router['id'] != CONF_PUB_ROUTER]
@@ -454,11 +456,12 @@
         for router in routers:
             try:
                 rid = router['id']
-                ports = client.list_router_interfaces(rid)
-                ports = ports['ports']
+                ports = [port for port
+                         in client.list_router_interfaces(rid)['ports']
+                         if port["device_owner"] == "network:router_interface"]
                 for port in ports:
-                    subid = port['fixed_ips'][0]['subnet_id']
-                    client.remove_router_interface_with_subnet_id(rid, subid)
+                    client.remove_router_interface_with_port_id(rid,
+                                                                port['id'])
                 client.delete_router(rid)
             except Exception:
                 LOG.exception("Delete Router exception.")
@@ -616,11 +619,14 @@
 
     def list(self):
         client = self.client
-        ports = client.list_ports()
-        ports = ports['ports']
-        ports = self._filter_by_tenant_id(ports)
+        ports = [port for port in
+                 client.list_ports(**self.tenant_filter)['ports']
+                 if port["device_owner"] == "" or
+                 port["device_owner"].startswith("compute:")]
+
         if self.is_preserve:
             ports = self._filter_by_conf_networks(ports)
+
         LOG.debug("List count, %s Ports" % len(ports))
         return ports
 
@@ -638,13 +644,40 @@
         self.data['ports'] = ports
 
 
+class NetworkSecGroupService(NetworkService):
+    def list(self):
+        client = self.client
+        filter = self.tenant_filter
+        # cannot delete default sec group so never show it.
+        secgroups = [secgroup for secgroup in
+                     client.list_security_groups(**filter)['security_groups']
+                     if secgroup['name'] != 'default']
+
+        if self.is_preserve:
+            secgroups = self._filter_by_conf_networks(secgroups)
+        LOG.debug("List count, %s securtiy_groups" % len(secgroups))
+        return secgroups
+
+    def delete(self):
+        client = self.client
+        secgroups = self.list()
+        for secgroup in secgroups:
+            try:
+                client.delete_secgroup(secgroup['id'])
+            except Exception:
+                LOG.exception("Delete security_group exception.")
+
+    def dry_run(self):
+        secgroups = self.list()
+        self.data['secgroups'] = secgroups
+
+
 class NetworkSubnetService(NetworkService):
 
     def list(self):
         client = self.client
-        subnets = client.list_subnets()
+        subnets = client.list_subnets(**self.tenant_filter)
         subnets = subnets['subnets']
-        subnets = self._filter_by_tenant_id(subnets)
         if self.is_preserve:
             subnets = self._filter_by_conf_networks(subnets)
         LOG.debug("List count, %s Subnets" % len(subnets))
@@ -761,8 +794,8 @@
         self.data['images'] = images
 
     def save_state(self):
-        images = self.list()
         self.data['images'] = {}
+        images = self.list()
         for image in images:
             self.data['images'][image['id']] = image['name']
 
@@ -928,7 +961,6 @@
 
 def get_tenant_cleanup_services():
     tenant_services = []
-
     if IS_CEILOMETER:
         tenant_services.append(TelemetryAlarmService)
     if IS_NOVA:
@@ -942,14 +974,15 @@
     if IS_HEAT:
         tenant_services.append(StackService)
     if IS_NEUTRON:
+        tenant_services.append(NetworkFloatingIpService)
         if test.is_extension_enabled('metering', 'network'):
             tenant_services.append(NetworkMeteringLabelRuleService)
             tenant_services.append(NetworkMeteringLabelService)
         tenant_services.append(NetworkRouterService)
-        tenant_services.append(NetworkFloatingIpService)
         tenant_services.append(NetworkPortService)
         tenant_services.append(NetworkSubnetService)
         tenant_services.append(NetworkService)
+        tenant_services.append(NetworkSecGroupService)
     if IS_CINDER:
         tenant_services.append(SnapshotService)
         tenant_services.append(VolumeService)
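The ``tenant_filter`` added to the service constructor above is what lets the Neutron-based services filter server-side instead of calling ``_filter_by_tenant_id`` afterwards. A minimal sketch of just that behaviour (the class name is illustrative)::

    class FilterDemo(object):
        def __init__(self, **kwargs):
            for key, value in kwargs.items():
                setattr(self, key, value)
            self.tenant_filter = {}
            if hasattr(self, 'tenant_id'):
                self.tenant_filter['tenant_id'] = self.tenant_id

    FilterDemo(tenant_id='abc123').tenant_filter   # {'tenant_id': 'abc123'}
    FilterDemo().tenant_filter                     # {} -> unfiltered listing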
diff --git a/tempest/cmd/init.py b/tempest/cmd/init.py
index c13fbe5..289b978 100644
--- a/tempest/cmd/init.py
+++ b/tempest/cmd/init.py
@@ -15,6 +15,7 @@
 import os
 import shutil
 import subprocess
+import sys
 
 from cliff import command
 from oslo_log import log as logging
@@ -33,13 +34,44 @@
 """
 
 
+def get_tempest_default_config_dir():
+    """Returns the correct default config dir to support both cases of
+    tempest being or not installed in a virtualenv.
+    Cases considered:
+    - no virtual env, python2: real_prefix and base_prefix not set
+    - no virtual env, python3: real_prefix not set, base_prefix set and
+      identical to prefix
+    - virtualenv, python2: real_prefix and prefix are set and different
+    - virtualenv, python3: real_prefix not set, base_prefix and prefix are
+      set and identical
+    - pyvenv, any python version: real_prefix not set, base_prefix and prefix
+      are set and different
+
+    :return: default config dir
+    """
+    real_prefix = getattr(sys, 'real_prefix', None)
+    base_prefix = getattr(sys, 'base_prefix', None)
+    prefix = sys.prefix
+    if real_prefix is None and base_prefix is None:
+        # Not running in a virtual environment of any kind
+        return '/etc/tempest'
+    elif (real_prefix is None and base_prefix is not None and
+            base_prefix == prefix):
+        # Probably not running in a virtual environment
+        # NOTE(andreaf) we cannot distinguish this case from the case of
+        # a virtual environment created with virtualenv, and running python3.
+        return '/etc/tempest'
+    else:
+        return os.path.join(sys.prefix, 'etc/tempest')
+
+
 class TempestInit(command.Command):
     """Setup a local working environment for running tempest"""
 
     def get_parser(self, prog_name):
         parser = super(TempestInit, self).get_parser(prog_name)
         parser.add_argument('dir', nargs='?', default=os.getcwd())
-        parser.add_argument('--config-dir', '-c', default='/etc/tempest')
+        parser.add_argument('--config-dir', '-c', default=None)
         return parser
 
     def generate_testr_conf(self, local_path):
@@ -67,6 +99,11 @@
     def copy_config(self, etc_dir, config_dir):
         shutil.copytree(config_dir, etc_dir)
 
+    def generate_sample_config(self, local_dir):
+        subprocess.call(['oslo-config-generator', '--config-file',
+                         'tools/config/config-generator.tempest.conf'],
+                        cwd=local_dir)
+
     def create_working_dir(self, local_dir, config_dir):
         # Create local dir if missing
         if not os.path.isdir(local_dir):
@@ -87,6 +124,8 @@
             os.mkdir(log_dir)
         # Create and copy local etc dir
         self.copy_config(etc_dir, config_dir)
+        # Generate the sample config file
+        self.generate_sample_config(local_dir)
         # Update local confs to reflect local paths
         self.update_local_conf(config_path, lock_dir, log_dir)
         # Generate a testr conf file
@@ -96,4 +135,5 @@
             subprocess.call(['testr', 'init'], cwd=local_dir)
 
     def take_action(self, parsed_args):
-        self.create_working_dir(parsed_args.dir, parsed_args.config_dir)
+        config_dir = parsed_args.config_dir or get_tempest_default_config_dir()
+        self.create_working_dir(parsed_args.dir, config_dir)
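A hedged usage sketch of the new default-config-dir helper (the virtualenv path below is a placeholder; the real prefix depends on where the interpreter lives, and ``tempest init --config-dir`` still overrides it)::

    >>> from tempest.cmd.init import get_tempest_default_config_dir
    >>> get_tempest_default_config_dir()   # system python, no virtualenv
    '/etc/tempest'
    >>> get_tempest_default_config_dir()   # same call inside a virtualenv
    '<venv-prefix>/etc/tempest'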
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index f091cd3..30fb38c 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -122,6 +122,7 @@
 from tempest import config
 from tempest.services.compute.json import flavors_client
 from tempest.services.compute.json import floating_ips_client
+from tempest.services.compute.json import security_group_rules_client
 from tempest.services.compute.json import security_groups_client
 from tempest.services.compute.json import servers_client
 from tempest.services.identity.v2.json import identity_client
@@ -202,6 +203,8 @@
             _auth, **compute_params)
         self.secgroups = security_groups_client.SecurityGroupsClient(
             _auth, **compute_params)
+        self.secrules = security_group_rules_client.SecurityGroupRulesClient(
+            _auth, **compute_params)
         self.objects = object_client.ObjectClient(_auth,
                                                   **object_storage_params)
         self.containers = container_client.ContainerClient(
@@ -912,13 +915,14 @@
             continue
 
         body = client.secgroups.create_security_group(
-            secgroup['name'], secgroup['description'])
+            name=secgroup['name'], description=secgroup['description'])
         secgroup_id = body['id']
         # for each security group, create the rules
         for rule in secgroup['rules']:
             ip_proto, from_port, to_port, cidr = rule.split()
-            client.secgroups.create_security_group_rule(
-                secgroup_id, ip_proto, from_port, to_port, cidr=cidr)
+            client.secrules.create_security_group_rule(
+                parent_group_id=secgroup_id, ip_protocol=ip_proto,
+                from_port=from_port, to_port=to_port, cidr=cidr)
 
 
 def destroy_secgroups(secgroups):
diff --git a/tempest/common/accounts.py b/tempest/common/accounts.py
index 78e0e72..27b44f6 100644
--- a/tempest/common/accounts.py
+++ b/tempest/common/accounts.py
@@ -216,7 +216,7 @@
             if ('user_domain_name' in init_attributes and 'user_domain_name'
                     not in hash_attributes):
                 # Allow for the case of domain_name populated from config
-                domain_name = CONF.identity.admin_domain_name
+                domain_name = CONF.auth.default_credentials_domain_name
                 hash_attributes['user_domain_name'] = domain_name
             if all([getattr(creds, k) == hash_attributes[k] for
                    k in init_attributes]):
diff --git a/tempest/common/cred_provider.py b/tempest/common/cred_provider.py
index 2b7e0db..783a5fc 100644
--- a/tempest/common/cred_provider.py
+++ b/tempest/common/cred_provider.py
@@ -84,9 +84,9 @@
         domain_fields = set(x for x in auth.KeystoneV3Credentials.ATTRIBUTES
                             if 'domain' in x)
         if not domain_fields.intersection(kwargs.keys()):
-            # TODO(andreaf) It might be better here to use a dedicated config
-            # option such as CONF.auth.tenant_isolation_domain_name
-            params['user_domain_name'] = CONF.identity.admin_domain_name
+            domain_name = CONF.auth.default_credentials_domain_name
+            params['user_domain_name'] = domain_name
+
         auth_url = CONF.identity.uri_v3
     else:
         auth_url = CONF.identity.uri
diff --git a/tempest/common/isolated_creds.py b/tempest/common/isolated_creds.py
index ff4eda9..7888811 100644
--- a/tempest/common/isolated_creds.py
+++ b/tempest/common/isolated_creds.py
@@ -163,8 +163,8 @@
         self.creds_domain_name = None
         if self.identity_version == 'v3':
             self.creds_domain_name = (
-                CONF.auth.tenant_isolation_domain_name or
-                self.default_admin_creds.project_domain_name)
+                self.default_admin_creds.project_domain_name or
+                CONF.auth.default_credentials_domain_name)
         self.creds_client = get_creds_client(
             self.identity_admin_client, self.creds_domain_name)
 
diff --git a/tempest/common/validation_resources.py b/tempest/common/validation_resources.py
index 18f0b1d..402638d 100644
--- a/tempest/common/validation_resources.py
+++ b/tempest/common/validation_resources.py
@@ -27,12 +27,15 @@
     sg_name = data_utils.rand_name('securitygroup-')
     sg_description = data_utils.rand_name('description-')
     security_group = \
-        security_group_client.create_security_group(sg_name, sg_description)
+        security_group_client.create_security_group(name=sg_name,
+                                                    description=sg_description)
     if add_rule:
-        security_group_client.create_security_group_rule(security_group['id'],
-                                                         'tcp', 22, 22)
-        security_group_client.create_security_group_rule(security_group['id'],
-                                                         'icmp', -1, -1)
+        security_group_client.create_security_group_rule(
+            parent_group_id=security_group['id'], ip_protocol='tcp',
+            from_port=22, to_port=22)
+        security_group_client.create_security_group_rule(
+            parent_group_id=security_group['id'], ip_protocol='icmp',
+            from_port=-1, to_port=-1)
     LOG.debug("SSH Validation resource security group with tcp and icmp "
               "rules %s created"
               % sg_name)
@@ -46,7 +49,7 @@
         if validation_resources['keypair']:
             keypair_name = data_utils.rand_name('keypair')
             validation_data['keypair'] = \
-                os.keypairs_client.create_keypair(keypair_name)
+                os.keypairs_client.create_keypair(name=keypair_name)
             LOG.debug("Validation resource key %s created" % keypair_name)
         add_rule = False
         if validation_resources['security_group']:
diff --git a/tempest/config.py b/tempest/config.py
index 5ea4d10..0262d1b 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -67,12 +67,13 @@
     cfg.ListOpt('tempest_roles',
                 help="Roles to assign to all users created by tempest",
                 default=[]),
-    cfg.StrOpt('tenant_isolation_domain_name',
-               default=None,
-               help="Only applicable when identity.auth_version is v3."
-                    "Domain within which isolated credentials are provisioned."
-                    "The default \"None\" means that the domain from the"
-                    "admin user is used instead."),
+    cfg.StrOpt('default_credentials_domain_name',
+               default='Default',
+               help="Default domain used when getting v3 credentials. "
+                    "This is the name keystone uses for v2 compatibility.",
+               deprecated_opts=[cfg.DeprecatedOpt(
+                                'tenant_isolation_domain_name',
+                                group='auth')]),
     cfg.BoolOpt('create_isolated_networks',
                 default=True,
                 help="If allow_tenant_isolation is set to True and Neutron is "
@@ -111,11 +112,30 @@
                     "services' region name unless they are set explicitly. "
                     "If no such region is found in the service catalog, the "
                     "first found one is used."),
-    cfg.StrOpt('endpoint_type',
+    cfg.StrOpt('v2_admin_endpoint_type',
+               default='adminURL',
+               choices=['public', 'admin', 'internal',
+                        'publicURL', 'adminURL', 'internalURL'],
+               help="The admin endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v2",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
+    cfg.StrOpt('v2_public_endpoint_type',
                default='publicURL',
                choices=['public', 'admin', 'internal',
                         'publicURL', 'adminURL', 'internalURL'],
-               help="The endpoint type to use for the identity service."),
+               help="The public endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v2",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
+    cfg.StrOpt('v3_endpoint_type',
+               default='adminURL',
+               choices=['public', 'admin', 'internal',
+                        'publicURL', 'adminURL', 'internalURL'],
+               help="The endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v3",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
     cfg.StrOpt('username',
                help="Username to use for Nova API requests."),
     cfg.StrOpt('tenant_name',
@@ -330,6 +350,10 @@
                 default=True,
                 help="Does the test environment support live migration "
                      "available?"),
+    cfg.BoolOpt('metadata_service',
+                default=True,
+                help="Does the test environment support metadata service? "
+                     "Ignored unless validation.run_validation=true."),
     cfg.BoolOpt('block_migration_for_live_migration',
                 default=False,
                 help="Does the test environment use block devices for live "
@@ -838,6 +862,16 @@
 ]
 
 
+telemetry_feature_group = cfg.OptGroup(name='telemetry-feature-enabled',
+                                       title='Enabled Ceilometer Features')
+
+TelemetryFeaturesGroup = [
+    cfg.BoolOpt('events',
+                default=False,
+                help="Runs Ceilometer event-related tests"),
+]
+
+
 dashboard_group = cfg.OptGroup(name="dashboard",
                                title="Dashboard options")
 
@@ -1178,6 +1212,7 @@
     (database_group, DatabaseGroup),
     (orchestration_group, OrchestrationGroup),
     (telemetry_group, TelemetryGroup),
+    (telemetry_feature_group, TelemetryFeaturesGroup),
     (dashboard_group, DashboardGroup),
     (data_processing_group, DataProcessingGroup),
     (data_processing_feature_group, DataProcessingFeaturesGroup),
@@ -1208,7 +1243,10 @@
     The purpose of this is to allow tools like the Oslo sample config file
     generator to discover the options exposed to users.
     """
-    return [(getattr(g, 'name', None), o) for g, o in _opts]
+    ext_plugins = plugins.TempestTestPluginManager()
+    opt_list = [(getattr(g, 'name', None), o) for g, o in _opts]
+    opt_list.extend(ext_plugins.get_plugin_options_list())
+    return opt_list
 
 
 # this should never be called outside of this class
@@ -1245,6 +1283,7 @@
         self.orchestration = _CONF.orchestration
         self.messaging = _CONF.messaging
         self.telemetry = _CONF.telemetry
+        self.telemetry_feature_enabled = _CONF['telemetry-feature-enabled']
         self.dashboard = _CONF.dashboard
         self.data_processing = _CONF.data_processing
         self.data_processing_feature_enabled = _CONF[
@@ -1257,9 +1296,11 @@
         self.baremetal = _CONF.baremetal
         self.input_scenario = _CONF['input-scenario']
         self.negative = _CONF.negative
-        _CONF.set_default('domain_name', self.identity.admin_domain_name,
+        _CONF.set_default('domain_name',
+                          self.auth.default_credentials_domain_name,
                           group='identity')
-        _CONF.set_default('alt_domain_name', self.identity.admin_domain_name,
+        _CONF.set_default('alt_domain_name',
+                          self.auth.default_credentials_domain_name,
                           group='identity')
 
     def __init__(self, parse_conf=True, config_path=None):
diff --git a/tempest/openstack/common/__init__.py b/tempest/openstack/common/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tempest/openstack/common/__init__.py
+++ /dev/null
diff --git a/tempest/openstack/common/_i18n.py b/tempest/openstack/common/_i18n.py
deleted file mode 100644
index 5bbc77d..0000000
--- a/tempest/openstack/common/_i18n.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""oslo.i18n integration module.
-
-See http://docs.openstack.org/developer/oslo.i18n/usage.html
-
-"""
-
-try:
-    import oslo_i18n
-
-    # NOTE(dhellmann): This reference to o-s-l-o will be replaced by the
-    # application name when this module is synced into the separate
-    # repository. It is OK to have more than one translation function
-    # using the same domain, since there will still only be one message
-    # catalog.
-    _translators = oslo_i18n.TranslatorFactory(domain='tempest')
-
-    # The primary translation function using the well-known name "_"
-    _ = _translators.primary
-
-    # Translators for log levels.
-    #
-    # The abbreviated names are meant to reflect the usual use of a short
-    # name like '_'. The "L" is for "log" and the other letter comes from
-    # the level.
-    _LI = _translators.log_info
-    _LW = _translators.log_warning
-    _LE = _translators.log_error
-    _LC = _translators.log_critical
-except ImportError:
-    # NOTE(dims): Support for cases where a project wants to use
-    # code from oslo-incubator, but is not ready to be internationalized
-    # (like tempest)
-    _ = _LI = _LW = _LE = _LC = lambda x: x
diff --git a/tempest/openstack/common/versionutils.py b/tempest/openstack/common/versionutils.py
deleted file mode 100644
index 12d2e14..0000000
--- a/tempest/openstack/common/versionutils.py
+++ /dev/null
@@ -1,263 +0,0 @@
-# Copyright (c) 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-"""
-Helpers for comparing version strings.
-"""
-
-import copy
-import functools
-import inspect
-import logging
-
-from oslo_config import cfg
-import pkg_resources
-import six
-
-from tempest.openstack.common._i18n import _
-from oslo_log import log as logging
-
-
-LOG = logging.getLogger(__name__)
-CONF = cfg.CONF
-
-
-deprecated_opts = [
-    cfg.BoolOpt('fatal_deprecations',
-                default=False,
-                help='Enables or disables fatal status of deprecations.'),
-]
-
-
-def list_opts():
-    """Entry point for oslo.config-generator.
-    """
-    return [(None, copy.deepcopy(deprecated_opts))]
-
-
-class deprecated(object):
-    """A decorator to mark callables as deprecated.
-
-    This decorator logs a deprecation message when the callable it decorates is
-    used. The message will include the release where the callable was
-    deprecated, the release where it may be removed and possibly an optional
-    replacement.
-
-    Examples:
-
-    1. Specifying the required deprecated release
-
-    >>> @deprecated(as_of=deprecated.ICEHOUSE)
-    ... def a(): pass
-
-    2. Specifying a replacement:
-
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, in_favor_of='f()')
-    ... def b(): pass
-
-    3. Specifying the release where the functionality may be removed:
-
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, remove_in=+1)
-    ... def c(): pass
-
-    4. Specifying the deprecated functionality will not be removed:
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, remove_in=0)
-    ... def d(): pass
-
-    5. Specifying a replacement, deprecated functionality will not be removed:
-    >>> @deprecated(as_of=deprecated.ICEHOUSE, in_favor_of='f()', remove_in=0)
-    ... def e(): pass
-
-    """
-
-    # NOTE(morganfainberg): Bexar is used for unit test purposes, it is
-    # expected we maintain a gap between Bexar and Folsom in this list.
-    BEXAR = 'B'
-    FOLSOM = 'F'
-    GRIZZLY = 'G'
-    HAVANA = 'H'
-    ICEHOUSE = 'I'
-    JUNO = 'J'
-    KILO = 'K'
-    LIBERTY = 'L'
-
-    _RELEASES = {
-        # NOTE(morganfainberg): Bexar is used for unit test purposes, it is
-        # expected we maintain a gap between Bexar and Folsom in this list.
-        'B': 'Bexar',
-        'F': 'Folsom',
-        'G': 'Grizzly',
-        'H': 'Havana',
-        'I': 'Icehouse',
-        'J': 'Juno',
-        'K': 'Kilo',
-        'L': 'Liberty',
-    }
-
-    _deprecated_msg_with_alternative = _(
-        '%(what)s is deprecated as of %(as_of)s in favor of '
-        '%(in_favor_of)s and may be removed in %(remove_in)s.')
-
-    _deprecated_msg_no_alternative = _(
-        '%(what)s is deprecated as of %(as_of)s and may be '
-        'removed in %(remove_in)s. It will not be superseded.')
-
-    _deprecated_msg_with_alternative_no_removal = _(
-        '%(what)s is deprecated as of %(as_of)s in favor of %(in_favor_of)s.')
-
-    _deprecated_msg_with_no_alternative_no_removal = _(
-        '%(what)s is deprecated as of %(as_of)s. It will not be superseded.')
-
-    def __init__(self, as_of, in_favor_of=None, remove_in=2, what=None):
-        """Initialize decorator
-
-        :param as_of: the release deprecating the callable. Constants
-            are define in this class for convenience.
-        :param in_favor_of: the replacement for the callable (optional)
-        :param remove_in: an integer specifying how many releases to wait
-            before removing (default: 2)
-        :param what: name of the thing being deprecated (default: the
-            callable's name)
-
-        """
-        self.as_of = as_of
-        self.in_favor_of = in_favor_of
-        self.remove_in = remove_in
-        self.what = what
-
-    def __call__(self, func_or_cls):
-        if not self.what:
-            self.what = func_or_cls.__name__ + '()'
-        msg, details = self._build_message()
-
-        if inspect.isfunction(func_or_cls):
-
-            @six.wraps(func_or_cls)
-            def wrapped(*args, **kwargs):
-                report_deprecated_feature(LOG, msg, details)
-                return func_or_cls(*args, **kwargs)
-            return wrapped
-        elif inspect.isclass(func_or_cls):
-            orig_init = func_or_cls.__init__
-
-            # TODO(tsufiev): change `functools` module to `six` as
-            # soon as six 1.7.4 (with fix for passing `assigned`
-            # argument to underlying `functools.wraps`) is released
-            # and added to the oslo-incubator requrements
-            @functools.wraps(orig_init, assigned=('__name__', '__doc__'))
-            def new_init(self, *args, **kwargs):
-                report_deprecated_feature(LOG, msg, details)
-                orig_init(self, *args, **kwargs)
-            func_or_cls.__init__ = new_init
-            return func_or_cls
-        else:
-            raise TypeError('deprecated can be used only with functions or '
-                            'classes')
-
-    def _get_safe_to_remove_release(self, release):
-        # TODO(dstanek): this method will have to be reimplemented once
-        #    when we get to the X release because once we get to the Y
-        #    release, what is Y+2?
-        new_release = chr(ord(release) + self.remove_in)
-        if new_release in self._RELEASES:
-            return self._RELEASES[new_release]
-        else:
-            return new_release
-
-    def _build_message(self):
-        details = dict(what=self.what,
-                       as_of=self._RELEASES[self.as_of],
-                       remove_in=self._get_safe_to_remove_release(self.as_of))
-
-        if self.in_favor_of:
-            details['in_favor_of'] = self.in_favor_of
-            if self.remove_in > 0:
-                msg = self._deprecated_msg_with_alternative
-            else:
-                # There are no plans to remove this function, but it is
-                # now deprecated.
-                msg = self._deprecated_msg_with_alternative_no_removal
-        else:
-            if self.remove_in > 0:
-                msg = self._deprecated_msg_no_alternative
-            else:
-                # There are no plans to remove this function, but it is
-                # now deprecated.
-                msg = self._deprecated_msg_with_no_alternative_no_removal
-        return msg, details
-
-
-def is_compatible(requested_version, current_version, same_major=True):
-    """Determine whether `requested_version` is satisfied by
-    `current_version`; in other words, `current_version` is >=
-    `requested_version`.
-
-    :param requested_version: version to check for compatibility
-    :param current_version: version to check against
-    :param same_major: if True, the major version must be identical between
-        `requested_version` and `current_version`. This is used when a
-        major-version difference indicates incompatibility between the two
-        versions. Since this is the common-case in practice, the default is
-        True.
-    :returns: True if compatible, False if not
-    """
-    requested_parts = pkg_resources.parse_version(requested_version)
-    current_parts = pkg_resources.parse_version(current_version)
-
-    if same_major and (requested_parts[0] != current_parts[0]):
-        return False
-
-    return current_parts >= requested_parts
-
-
-# Track the messages we have sent already. See
-# report_deprecated_feature().
-_deprecated_messages_sent = {}
-
-
-def report_deprecated_feature(logger, msg, *args, **kwargs):
-    """Call this function when a deprecated feature is used.
-
-    If the system is configured for fatal deprecations then the message
-    is logged at the 'critical' level and :class:`DeprecatedConfig` will
-    be raised.
-
-    Otherwise, the message will be logged (once) at the 'warn' level.
-
-    :raises: :class:`DeprecatedConfig` if the system is configured for
-             fatal deprecations.
-    """
-    stdmsg = _("Deprecated: %s") % msg
-    CONF.register_opts(deprecated_opts)
-    if CONF.fatal_deprecations:
-        logger.critical(stdmsg, *args, **kwargs)
-        raise DeprecatedConfig(msg=stdmsg)
-
-    # Using a list because a tuple with dict can't be stored in a set.
-    sent_args = _deprecated_messages_sent.setdefault(msg, list())
-
-    if args in sent_args:
-        # Already logged this message, so don't log it again.
-        return
-
-    sent_args.append(args)
-    logger.warn(stdmsg, *args, **kwargs)
-
-
-class DeprecatedConfig(Exception):
-    message = _("Fatal call to deprecated config: %(msg)s")
-
-    def __init__(self, msg):
-        super(Exception, self).__init__(self.message % dict(msg=msg))
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 03e572f..60bf7cb 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -54,6 +54,8 @@
         cls.keypairs_client = cls.manager.keypairs_client
         # Nova security groups client
         cls.security_groups_client = cls.manager.security_groups_client
+        cls.security_group_rules_client = (
+            cls.manager.security_group_rules_client)
         cls.servers_client = cls.manager.servers_client
         cls.volumes_client = cls.manager.volumes_client
         cls.snapshots_client = cls.manager.snapshots_client
@@ -138,7 +140,7 @@
             client = self.keypairs_client
         name = data_utils.rand_name(self.__class__.__name__)
         # We don't need to create a keypair by pubkey in scenario
-        body = client.create_keypair(name)
+        body = client.create_keypair(name=name)
         self.addCleanup(client.delete_keypair, name)
         return body
 
@@ -217,6 +219,7 @@
 
     def _create_loginable_secgroup_rule(self, secgroup_id=None):
         _client = self.security_groups_client
+        _client_rules = self.security_group_rules_client
         if secgroup_id is None:
             sgs = _client.list_security_groups()
             for sg in sgs:
@@ -230,14 +233,14 @@
         rulesets = [
             {
                 # ssh
-                'ip_proto': 'tcp',
+                'ip_protocol': 'tcp',
                 'from_port': 22,
                 'to_port': 22,
                 'cidr': '0.0.0.0/0',
             },
             {
                 # ping
-                'ip_proto': 'icmp',
+                'ip_protocol': 'icmp',
                 'from_port': -1,
                 'to_port': -1,
                 'cidr': '0.0.0.0/0',
@@ -245,10 +248,10 @@
         ]
         rules = list()
         for ruleset in rulesets:
-            sg_rule = _client.create_security_group_rule(secgroup_id,
-                                                         **ruleset)
+            sg_rule = _client_rules.create_security_group_rule(
+                parent_group_id=secgroup_id, **ruleset)
             self.addCleanup(self.delete_wrapper,
-                            _client.delete_security_group_rule,
+                            _client_rules.delete_security_group_rule,
                             sg_rule['id'])
             rules.append(sg_rule)
         return rules
@@ -258,7 +261,7 @@
         sg_name = data_utils.rand_name(self.__class__.__name__)
         sg_desc = sg_name + " description"
         secgroup = self.security_groups_client.create_security_group(
-            sg_name, sg_desc)
+            name=sg_name, description=sg_desc)
         self.assertEqual(secgroup['name'], sg_name)
         self.assertEqual(secgroup['description'], sg_desc)
         self.addCleanup(self.delete_wrapper,
@@ -399,7 +402,7 @@
         if name is None:
             name = data_utils.rand_name('scenario-snapshot')
         LOG.debug("Creating a snapshot image for server: %s", server['name'])
-        image = _images_client.create_image(server['id'], name)
+        image = _images_client.create_image(server['id'], name=name)
         image_id = image.response['location'].split('images/')[1]
         _image_client.wait_for_image_status(image_id, 'active')
         self.addCleanup_with_wait(
@@ -1077,6 +1080,11 @@
                 port = self._create_port(network_id=net_id,
                                          client=net_client,
                                          **create_port_body)
+                # If port_vnic_type is set, the ports passed in
+                # create_kwargs are overridden, which causes an
+                # inconsistency. Set port_id according to the network id.
+                if net_id == self.network['id']:
+                    self.port_id = port.id
                 ports.append({'port': port.id})
             if ports:
                 create_kwargs['networks'] = ports
diff --git a/tempest/scenario/test_large_ops.py b/tempest/scenario/test_large_ops.py
index fa70d3f..c44557e 100644
--- a/tempest/scenario/test_large_ops.py
+++ b/tempest/scenario/test_large_ops.py
@@ -87,7 +87,7 @@
         # Since no traffic is tested, we don't need to actually add rules to
         # secgroup
         secgroup = self.security_groups_client.create_security_group(
-            'secgroup-%s' % name, 'secgroup-desc-%s' % name)
+            name='secgroup-%s' % name, description='secgroup-desc-%s' % name)
         self.addCleanupClass(self.security_groups_client.delete_security_group,
                              secgroup['id'])
         create_kwargs = {
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index e676063..b31ba69 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -101,6 +101,7 @@
         self.servers = []
 
     def _setup_network_and_servers(self, **kwargs):
+        vnic_type = CONF.network.port_vnic_type
         boot_with_port = kwargs.pop('boot_with_port', False)
         self.security_group = \
             self._create_security_group(tenant_id=self.tenant_id)
@@ -108,7 +109,9 @@
         self.check_networks()
 
         self.port_id = None
-        if boot_with_port:
+        # When vnic_type is set, ports will be created in create_server,
+        # so there is no need to create a port here in this case.
+        if boot_with_port and not vnic_type:
             # create a port on the network and boot with that
             self.port_id = self._create_port(self.network['id']).id
 
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index fba839a..9481e58 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -27,13 +27,17 @@
 
 
 class TestGettingAddress(manager.NetworkScenarioTest):
-    """Create network with subnets: one IPv4 and
-    one or few IPv6 in a given address mode
-    Boot 2 VMs on this network
-    Allocate and assign 2 FIP4
-    Check that vNICs of all VMs gets all addresses actually assigned
-    Ping4 to one VM from another one
-    If ping6 available in VM, do ping6 to all v6 addresses
+    """Test Summary:
+
+    1. Create network with subnets:
+        1.1. one IPv4 and
+        1.2. one or more IPv6 in a given address mode
+    2. Boot 2 VMs on this network
+    3. Allocate and assign 2 FIP4
+    4. Check that the vNICs of all VMs get all addresses actually assigned
+    5. Each VM will ping the other's v4 private address
+    6. If ping6 is available in the VM, each VM will ping all of the other's
+       v6 addresses as well as the router's
     """
 
     @classmethod
@@ -74,12 +78,13 @@
         self.network = self._create_network(tenant_id=self.tenant_id)
         sub4 = self._create_subnet(network=self.network,
                                    namestart='sub4',
-                                   ip_version=4,)
+                                   ip_version=4)
 
         router = self._get_router(tenant_id=self.tenant_id)
         sub4.add_to_router(router_id=router['id'])
         self.addCleanup(sub4.delete)
 
+        self.subnets_v6 = []
         for _ in range(n_subnets6):
             sub6 = self._create_subnet(network=self.network,
                                        namestart='sub6',
@@ -89,6 +94,7 @@
 
             sub6.add_to_router(router_id=router['id'])
             self.addCleanup(sub6.delete)
+            self.subnets_v6.append(sub6)
 
     @staticmethod
     def define_server_ips(srv):
@@ -145,23 +151,32 @@
             self.assertTrue(test.call_until_true(srv2_v6_addr_assigned,
                                                  CONF.compute.ping_timeout, 1))
 
-        result = sshv4_1.ping_host(ips_from_api_2['4'])
-        self.assertIn('0% packet loss', result)
-        result = sshv4_2.ping_host(ips_from_api_1['4'])
-        self.assertIn('0% packet loss', result)
+        self._check_connectivity(sshv4_1, ips_from_api_2['4'])
+        self._check_connectivity(sshv4_2, ips_from_api_1['4'])
 
         # Some VM (like cirros) may not have ping6 utility
         result = sshv4_1.exec_command('whereis ping6')
         is_ping6 = False if result == 'ping6:\n' else True
         if is_ping6:
             for i in range(n_subnets6):
-                result = sshv4_1.ping_host(ips_from_api_2['6'][i])
-                self.assertIn('0% packet loss', result)
-                result = sshv4_2.ping_host(ips_from_api_1['6'][i])
-                self.assertIn('0% packet loss', result)
+                self._check_connectivity(sshv4_1,
+                                         ips_from_api_2['6'][i])
+                self._check_connectivity(sshv4_1,
+                                         self.subnets_v6[i].gateway_ip)
+                self._check_connectivity(sshv4_2,
+                                         ips_from_api_1['6'][i])
+                self._check_connectivity(sshv4_2,
+                                         self.subnets_v6[i].gateway_ip)
         else:
             LOG.warning('Ping6 is not available, skipping')
 
+    def _check_connectivity(self, source, dest):
+        self.assertTrue(
+            self._check_remote_connectivity(source, dest),
+            "Timed out waiting for %s to become reachable from %s" %
+            (dest, source.ssh_client.host)
+        )
+
     @test.idempotent_id('2c92df61-29f0-4eaa-bee3-7c65bef62a43')
     @test.services('compute', 'network')
     def test_slaac_from_os(self):
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index d9918f3..f61b151 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -37,6 +37,7 @@
      * Add simple permissive rules to the security group
      * Launch an instance
      * Perform ssh to instance
+     * Verify metadata service
      * Terminate the instance
     """
 
@@ -81,19 +82,26 @@
     def verify_ssh(self):
         if self.run_ssh:
             # Obtain a floating IP
-            floating_ip = self.floating_ips_client.create_floating_ip()
+            self.floating_ip = self.floating_ips_client.create_floating_ip()
             self.addCleanup(self.delete_wrapper,
                             self.floating_ips_client.delete_floating_ip,
-                            floating_ip['id'])
+                            self.floating_ip['id'])
             # Attach a floating IP
             self.floating_ips_client.associate_floating_ip_to_server(
-                floating_ip['ip'], self.instance['id'])
+                self.floating_ip['ip'], self.instance['id'])
             # Check ssh
-            self.get_remote_client(
-                server_or_ip=floating_ip['ip'],
+            self.ssh_client = self.get_remote_client(
+                server_or_ip=self.floating_ip['ip'],
                 username=self.image_utils.ssh_user(self.image_ref),
                 private_key=self.keypair['private_key'])
 
+    def verify_metadata(self):
+        if self.run_ssh and CONF.compute_feature_enabled.metadata_service:
+            # Verify metadata service
+            result = self.ssh_client.exec_command(
+                "curl http://169.254.169.254/latest/meta-data/public-ipv4")
+            self.assertEqual(self.floating_ip['ip'], result)
+
     @test.idempotent_id('7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba')
     @test.attr(type='smoke')
     @test.services('compute', 'network')
@@ -102,4 +110,5 @@
         self.security_group = self._create_security_group()
         self.boot_instance()
         self.verify_ssh()
+        self.verify_metadata()
         self.servers_client.delete_server(self.instance['id'])
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 45b7b74..3809831 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -106,8 +106,7 @@
                 floating_ip['ip'], server['id'])
             ip = floating_ip['ip']
         else:
-            network_name_for_ssh = CONF.compute.network_for_ssh
-            ip = server.networks[network_name_for_ssh][0]
+            ip = server
 
         return self.get_remote_client(ip, private_key=keypair['private_key'],
                                       log_console_of_servers=[server])
diff --git a/tempest/services/compute/json/agents_client.py b/tempest/services/compute/json/agents_client.py
index 1269991..1a1d832 100644
--- a/tempest/services/compute/json/agents_client.py
+++ b/tempest/services/compute/json/agents_client.py
@@ -52,4 +52,5 @@
         """Update an agent build."""
         put_body = json.dumps({'para': kwargs})
         resp, body = self.put('os-agents/%s' % agent_id, put_body)
-        return service_client.ResponseBody(resp, self._parse_resp(body))
+        body = json.loads(body)
+        return service_client.ResponseBody(resp, body['agent'])
diff --git a/tempest/services/compute/json/flavors_client.py b/tempest/services/compute/json/flavors_client.py
index b928f9f..1422944 100644
--- a/tempest/services/compute/json/flavors_client.py
+++ b/tempest/services/compute/json/flavors_client.py
@@ -47,24 +47,19 @@
         self.validate_response(schema.create_get_flavor_details, resp, body)
         return service_client.ResponseBody(resp, body['flavor'])
 
-    def create_flavor(self, name, ram, vcpus, disk, flavor_id, **kwargs):
-        """Creates a new flavor or instance type."""
-        post_body = {
-            'name': name,
-            'ram': ram,
-            'vcpus': vcpus,
-            'disk': disk,
-            'id': flavor_id,
-        }
+    def create_flavor(self, **kwargs):
+        """Creates a new flavor or instance type.
+        Parameters other than the following are passed to the API without
+        any changes.
+        :param ephemeral: The name is changed to OS-FLV-EXT-DATA:ephemeral
+        :param is_public: The name is changed to os-flavor-access:is_public
+        """
         if kwargs.get('ephemeral'):
-            post_body['OS-FLV-EXT-DATA:ephemeral'] = kwargs.get('ephemeral')
-        if kwargs.get('swap'):
-            post_body['swap'] = kwargs.get('swap')
-        if kwargs.get('rxtx'):
-            post_body['rxtx_factor'] = kwargs.get('rxtx')
+            kwargs['OS-FLV-EXT-DATA:ephemeral'] = kwargs.pop('ephemeral')
         if kwargs.get('is_public'):
-            post_body['os-flavor-access:is_public'] = kwargs.get('is_public')
-        post_body = json.dumps({'flavor': post_body})
+            kwargs['os-flavor-access:is_public'] = kwargs.pop('is_public')
+
+        post_body = json.dumps({'flavor': kwargs})
         resp, body = self.post('flavors', post_body)
 
         body = json.loads(body)
@@ -92,9 +87,9 @@
         """Returns the primary type of resource this client works with."""
         return 'flavor'
 
-    def set_flavor_extra_spec(self, flavor_id, specs):
+    def set_flavor_extra_spec(self, flavor_id, **kwargs):
         """Sets extra Specs to the mentioned flavor."""
-        post_body = json.dumps({'extra_specs': specs})
+        post_body = json.dumps({'extra_specs': kwargs})
         resp, body = self.post('flavors/%s/os-extra_specs' % flavor_id,
                                post_body)
         body = json.loads(body)
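With the positional signature removed, callers now pass every flavor field as a keyword. A hedged sketch, assuming a ``flavors_client`` obtained from a configured Tempest ``Manager`` (field values are illustrative)::

    flavors_client.create_flavor(name='tempest-example-flavor', ram=64,
                                 vcpus=1, disk=1, id='tempest-flavor-id',
                                 ephemeral=1, is_public='True')
    flavors_client.set_flavor_extra_spec('tempest-flavor-id', key1='value1')

``ephemeral`` and ``is_public`` are still renamed to their API forms before the request is sent.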
diff --git a/tempest/services/compute/json/floating_ip_pools_client.py b/tempest/services/compute/json/floating_ip_pools_client.py
index 1cc411b..1e2133b 100644
--- a/tempest/services/compute/json/floating_ip_pools_client.py
+++ b/tempest/services/compute/json/floating_ip_pools_client.py
@@ -21,7 +21,7 @@
 from tempest.common import service_client
 
 
-class FloatingIpPoolsClient(service_client.ServiceClient):
+class FloatingIPPoolsClient(service_client.ServiceClient):
 
     def list_floating_ip_pools(self, params=None):
         """Returns a list of all floating IP Pools."""
diff --git a/tempest/services/compute/json/floating_ips_bulk_client.py b/tempest/services/compute/json/floating_ips_bulk_client.py
index c8e7350..8b1c5a9 100644
--- a/tempest/services/compute/json/floating_ips_bulk_client.py
+++ b/tempest/services/compute/json/floating_ips_bulk_client.py
@@ -19,7 +19,7 @@
 from tempest.common import service_client
 
 
-class FloatingIpsBulkClient(service_client.ServiceClient):
+class FloatingIPsBulkClient(service_client.ServiceClient):
 
     def create_floating_ips_bulk(self, ip_range, pool, interface):
         """Allocate floating IPs in bulk."""
diff --git a/tempest/services/compute/json/images_client.py b/tempest/services/compute/json/images_client.py
index b0ce2dc..4e7e93f 100644
--- a/tempest/services/compute/json/images_client.py
+++ b/tempest/services/compute/json/images_client.py
@@ -23,18 +23,10 @@
 
 class ImagesClient(service_client.ServiceClient):
 
-    def create_image(self, server_id, name, meta=None):
+    def create_image(self, server_id, **kwargs):
         """Creates an image of the original server."""
 
-        post_body = {
-            'createImage': {
-                'name': name,
-            }
-        }
-
-        if meta is not None:
-            post_body['createImage']['metadata'] = meta
-
+        post_body = {'createImage': kwargs}
         post_body = json.dumps(post_body)
         resp, body = self.post('servers/%s/action' % server_id,
                                post_body)
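
Under the new signature, whatever keyword arguments the caller supplies become
the ``createImage`` action body. A hedged sketch, assuming an existing
``images_client`` instance and illustrative values::

    # both entries land unchanged under the 'createImage' key
    images_client.create_image(server_id,
                               name='backup-snapshot',
                               metadata={'purpose': 'backup'})
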
diff --git a/tempest/services/compute/json/keypairs_client.py b/tempest/services/compute/json/keypairs_client.py
index 6f819ae..e51671f 100644
--- a/tempest/services/compute/json/keypairs_client.py
+++ b/tempest/services/compute/json/keypairs_client.py
@@ -38,11 +38,8 @@
         self.validate_response(schema.get_keypair, resp, body)
         return service_client.ResponseBody(resp, body['keypair'])
 
-    def create_keypair(self, name, pub_key=None):
-        post_body = {'keypair': {'name': name}}
-        if pub_key:
-            post_body['keypair']['public_key'] = pub_key
-        post_body = json.dumps(post_body)
+    def create_keypair(self, **kwargs):
+        post_body = json.dumps({'keypair': kwargs})
         resp, body = self.post("os-keypairs", body=post_body)
         body = json.loads(body)
         self.validate_response(schema.create_keypair, resp, body)
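
Callers now name every field explicitly. A minimal sketch of the updated call,
assuming an existing ``keypairs_client`` and a ``pub_key`` string to import::

    keypairs_client.create_keypair(name='stress-key')
    # importing an existing public key instead of generating one
    keypairs_client.create_keypair(name='imported-key', public_key=pub_key)
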
diff --git a/tempest/services/compute/json/security_group_default_rules_client.py b/tempest/services/compute/json/security_group_default_rules_client.py
index fcc715a..658b89a 100644
--- a/tempest/services/compute/json/security_group_default_rules_client.py
+++ b/tempest/services/compute/json/security_group_default_rules_client.py
@@ -22,8 +22,7 @@
 
 class SecurityGroupDefaultRulesClient(service_client.ServiceClient):
 
-    def create_security_default_group_rule(self, ip_protocol, from_port,
-                                           to_port, **kwargs):
+    def create_security_default_group_rule(self, **kwargs):
         """
         Creating security group default rules.
         ip_protocol : ip_protocol (icmp, tcp, udp).
@@ -31,13 +30,7 @@
         to_port  : Port at end of range.
         cidr     : CIDR for address range.
         """
-        post_body = {
-            'ip_protocol': ip_protocol,
-            'from_port': from_port,
-            'to_port': to_port,
-            'cidr': kwargs.get('cidr'),
-        }
-        post_body = json.dumps({'security_group_default_rule': post_body})
+        post_body = json.dumps({'security_group_default_rule': kwargs})
         url = 'os-security-group-default-rules'
         resp, body = self.post(url, post_body)
         body = json.loads(body)
diff --git a/tempest/services/compute/json/security_group_rules_client.py b/tempest/services/compute/json/security_group_rules_client.py
new file mode 100644
index 0000000..9a7c881
--- /dev/null
+++ b/tempest/services/compute/json/security_group_rules_client.py
@@ -0,0 +1,59 @@
+# Copyright 2012 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import json
+
+from tempest_lib import exceptions as lib_exc
+
+from tempest.api_schema.response.compute.v2_1 import security_groups as schema
+from tempest.common import service_client
+
+
+class SecurityGroupRulesClient(service_client.ServiceClient):
+
+    def create_security_group_rule(self, **kwargs):
+        """
+        Create a new security group rule.
+        parent_group_id : ID of the security group.
+        ip_protocol : ip_protocol (icmp, tcp, udp).
+        from_port : Port at start of range.
+        to_port  : Port at end of range.
+        The following optional keyword arguments are accepted:
+        cidr     : CIDR for address range.
+        group_id : ID of the source group.
+        """
+        post_body = json.dumps({'security_group_rule': kwargs})
+        url = 'os-security-group-rules'
+        resp, body = self.post(url, post_body)
+        body = json.loads(body)
+        self.validate_response(schema.create_security_group_rule, resp, body)
+        return service_client.ResponseBody(resp, body['security_group_rule'])
+
+    def delete_security_group_rule(self, group_rule_id):
+        """Deletes the provided Security Group rule."""
+        resp, body = self.delete('os-security-group-rules/%s' %
+                                 group_rule_id)
+        self.validate_response(schema.delete_security_group_rule, resp, body)
+        return service_client.ResponseBody(resp, body)
+
+    def list_security_group_rules(self, security_group_id):
+        """List all rules for a security group."""
+        resp, body = self.get('os-security-groups')
+        body = json.loads(body)
+        self.validate_response(schema.list_security_groups, resp, body)
+        for sg in body['security_groups']:
+            if sg['id'] == security_group_id:
+                return service_client.ResponseBodyList(resp, sg['rules'])
+        raise lib_exc.NotFound('No such Security Group')
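
The stress actions further down exercise this new client through the service
manager; as a standalone sketch, assuming ``auth_provider`` is an
already-built Tempest auth provider and ``sec_grp`` an existing group::

    from tempest.services.compute.json import security_group_rules_client

    rules_client = security_group_rules_client.SecurityGroupRulesClient(
        auth_provider, 'compute', 'regionOne')
    # open SSH to the group created elsewhere
    rules_client.create_security_group_rule(
        parent_group_id=sec_grp['id'], ip_protocol='tcp',
        from_port=22, to_port=22, cidr='0.0.0.0/0')
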
diff --git a/tempest/services/compute/json/security_groups_client.py b/tempest/services/compute/json/security_groups_client.py
index 5a3d771..c0b667b 100644
--- a/tempest/services/compute/json/security_groups_client.py
+++ b/tempest/services/compute/json/security_groups_client.py
@@ -43,36 +43,26 @@
         self.validate_response(schema.get_security_group, resp, body)
         return service_client.ResponseBody(resp, body['security_group'])
 
-    def create_security_group(self, name, description):
+    def create_security_group(self, **kwargs):
         """
         Creates a new security group.
         name (Required): Name of security group.
         description (Required): Description of security group.
         """
-        post_body = {
-            'name': name,
-            'description': description,
-        }
-        post_body = json.dumps({'security_group': post_body})
+        post_body = json.dumps({'security_group': kwargs})
         resp, body = self.post('os-security-groups', post_body)
         body = json.loads(body)
         self.validate_response(schema.get_security_group, resp, body)
         return service_client.ResponseBody(resp, body['security_group'])
 
-    def update_security_group(self, security_group_id, name=None,
-                              description=None):
+    def update_security_group(self, security_group_id, **kwargs):
         """
         Update a security group.
         security_group_id: a security_group to update
         name: new name of security group
         description: new description of security group
         """
-        post_body = {}
-        if name:
-            post_body['name'] = name
-        if description:
-            post_body['description'] = description
-        post_body = json.dumps({'security_group': post_body})
+        post_body = json.dumps({'security_group': kwargs})
         resp, body = self.put('os-security-groups/%s' % security_group_id,
                               post_body)
         body = json.loads(body)
@@ -86,50 +76,6 @@
         self.validate_response(schema.delete_security_group, resp, body)
         return service_client.ResponseBody(resp, body)
 
-    def create_security_group_rule(self, parent_group_id, ip_proto, from_port,
-                                   to_port, **kwargs):
-        """
-        Creating a new security group rules.
-        parent_group_id :ID of Security group
-        ip_protocol : ip_proto (icmp, tcp, udp).
-        from_port: Port at start of range.
-        to_port  : Port at end of range.
-        Following optional keyword arguments are accepted:
-        cidr     : CIDR for address range.
-        group_id : ID of the Source group
-        """
-        post_body = {
-            'parent_group_id': parent_group_id,
-            'ip_protocol': ip_proto,
-            'from_port': from_port,
-            'to_port': to_port,
-            'cidr': kwargs.get('cidr'),
-            'group_id': kwargs.get('group_id'),
-        }
-        post_body = json.dumps({'security_group_rule': post_body})
-        url = 'os-security-group-rules'
-        resp, body = self.post(url, post_body)
-        body = json.loads(body)
-        self.validate_response(schema.create_security_group_rule, resp, body)
-        return service_client.ResponseBody(resp, body['security_group_rule'])
-
-    def delete_security_group_rule(self, group_rule_id):
-        """Deletes the provided Security Group rule."""
-        resp, body = self.delete('os-security-group-rules/%s' %
-                                 group_rule_id)
-        self.validate_response(schema.delete_security_group_rule, resp, body)
-        return service_client.ResponseBody(resp, body)
-
-    def list_security_group_rules(self, security_group_id):
-        """List all rules for a security group."""
-        resp, body = self.get('os-security-groups')
-        body = json.loads(body)
-        self.validate_response(schema.list_security_groups, resp, body)
-        for sg in body['security_groups']:
-            if sg['id'] == security_group_id:
-                return service_client.ResponseBodyList(resp, sg['rules'])
-        raise lib_exc.NotFound('No such Security Group')
-
     def is_resource_deleted(self, id):
         try:
             self.show_security_group(id)
diff --git a/tempest/services/identity/v2/json/identity_client.py b/tempest/services/identity/v2/json/identity_client.py
index 1076fca..c9345e0 100644
--- a/tempest/services/identity/v2/json/identity_client.py
+++ b/tempest/services/identity/v2/json/identity_client.py
@@ -259,6 +259,33 @@
         self.expected_success(204, resp.status)
         return service_client.ResponseBody(resp, body)
 
+    def create_endpoint(self, service_id, region_id, **kwargs):
+        """Create an endpoint for service."""
+        post_body = {
+            'service_id': service_id,
+            'region': region_id,
+            'publicurl': kwargs.get('publicurl'),
+            'adminurl': kwargs.get('adminurl'),
+            'internalurl': kwargs.get('internalurl')
+        }
+        post_body = json.dumps({'endpoint': post_body})
+        resp, body = self.post('/endpoints', post_body)
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBody(resp, self._parse_resp(body))
+
+    def list_endpoints(self):
+        """List Endpoints - Returns Endpoints."""
+        resp, body = self.get('/endpoints')
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBodyList(resp, self._parse_resp(body))
+
+    def delete_endpoint(self, endpoint_id):
+        """Delete an endpoint."""
+        url = '/endpoints/%s' % endpoint_id
+        resp, body = self.delete(url)
+        self.expected_success(204, resp.status)
+        return service_client.ResponseBody(resp, body)
+
     def update_user_password(self, user_id, new_pass):
         """Update User Password."""
         put_body = {
diff --git a/tempest/services/telemetry/json/telemetry_client.py b/tempest/services/telemetry/json/telemetry_client.py
index 2b1cdc0..1f181e3 100644
--- a/tempest/services/telemetry/json/telemetry_client.py
+++ b/tempest/services/telemetry/json/telemetry_client.py
@@ -84,6 +84,10 @@
         uri = '%s/meters/%s' % (self.uri_prefix, meter_id)
         return self._helper_list(uri, query)
 
+    def list_events(self, query=None):
+        uri = '%s/events' % self.uri_prefix
+        return self._helper_list(uri, query)
+
     def show_resource(self, resource_id):
         uri = '%s/resources/%s' % (self.uri_prefix, resource_id)
         resp, body = self.get(uri)
diff --git a/tempest/stress/actions/ssh_floating.py b/tempest/stress/actions/ssh_floating.py
index 4a27466..09e6d88 100644
--- a/tempest/stress/actions/ssh_floating.py
+++ b/tempest/stress/actions/ssh_floating.py
@@ -93,11 +93,13 @@
         sec_grp_cli = self.manager.security_groups_client
         s_name = data_utils.rand_name('sec_grp')
         s_description = data_utils.rand_name('desc')
-        self.sec_grp = sec_grp_cli.create_security_group(s_name,
-                                                         s_description)
+        self.sec_grp = sec_grp_cli.create_security_group(
+            name=s_name, description=s_description)
         create_rule = sec_grp_cli.create_security_group_rule
-        create_rule(self.sec_grp['id'], 'tcp', 22, 22)
-        create_rule(self.sec_grp['id'], 'icmp', -1, -1)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
+                    from_port=22, to_port=22)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
+                    from_port=-1, to_port=-1)
 
     def _destroy_sec_grp(self):
         sec_grp_cli = self.manager.security_groups_client
diff --git a/tempest/stress/actions/volume_attach_verify.py b/tempest/stress/actions/volume_attach_verify.py
index 3ba2a91..0e0141f 100644
--- a/tempest/stress/actions/volume_attach_verify.py
+++ b/tempest/stress/actions/volume_attach_verify.py
@@ -26,7 +26,7 @@
 
     def _create_keypair(self):
         keyname = data_utils.rand_name("key")
-        self.key = self.manager.keypairs_client.create_keypair(keyname)
+        self.key = self.manager.keypairs_client.create_keypair(name=keyname)
 
     def _delete_keypair(self):
         self.manager.keypairs_client.delete_keypair(self.key['name'])
@@ -55,11 +55,13 @@
         sec_grp_cli = self.manager.security_groups_client
         s_name = data_utils.rand_name('sec_grp')
         s_description = data_utils.rand_name('desc')
-        self.sec_grp = sec_grp_cli.create_security_group(s_name,
-                                                         s_description)
+        self.sec_grp = sec_grp_cli.create_security_group(
+            name=s_name, description=s_description)
         create_rule = sec_grp_cli.create_security_group_rule
-        create_rule(self.sec_grp['id'], 'tcp', 22, 22)
-        create_rule(self.sec_grp['id'], 'icmp', -1, -1)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
+                    from_port=22, to_port=22)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
+                    from_port=-1, to_port=-1)
 
     def _destroy_sec_grp(self):
         sec_grp_cli = self.manager.security_groups_client
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index 45cd609..640b004 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -51,6 +51,16 @@
         """
         return
 
+    @abc.abstractmethod
+    def get_opt_lists(self):
+        """Method to get a list of options for sample config generation
+
+        :return option_list: A list of tuples with the group name and options
+                             in that group.
+        :rtype: list
+        """
+        return []
+
 
 @misc.singleton
 class TempestTestPluginManager(object):
@@ -79,3 +89,11 @@
     def register_plugin_opts(self, conf):
         for plug in self.ext_plugins:
             plug.obj.register_opts(conf)
+
+    def get_plugin_options_list(self):
+        plugin_options = []
+        for plug in self.ext_plugins:
+            opt_list = plug.obj.get_opt_lists()
+            if opt_list:
+                plugin_options.extend(opt_list)
+        return plugin_options
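
A plugin satisfies the new abstract method by pairing each config group name
with its option list. A minimal sketch, assuming a hypothetical ``my_service``
group that is not part of this patch::

    from oslo_config import cfg

    my_service_group = cfg.OptGroup(name='my_service',
                                    title='My service options')
    MyServiceOpts = [cfg.BoolOpt('run_my_tests', default=True,
                                 help='Whether to run the plugin tests')]

    def get_opt_lists(self):
        # one (group name, options) tuple per group the plugin registers
        return [(my_service_group.name, MyServiceOpts)]
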
diff --git a/tempest/tests/cmd/test_javelin.py b/tempest/tests/cmd/test_javelin.py
index f0921d5..3a3e46e 100644
--- a/tempest/tests/cmd/test_javelin.py
+++ b/tempest/tests/cmd/test_javelin.py
@@ -278,8 +278,8 @@
 
         mocked_function = self.fake_client.secgroups.create_security_group
         mocked_function.assert_called_once_with(
-            self.fake_object['name'],
-            self.fake_object['description'])
+            name=self.fake_object['name'],
+            description=self.fake_object['description'])
 
 
 class TestDestroyResources(JavelinUnitTest):
diff --git a/tempest/tests/common/test_service_clients.py b/tempest/tests/common/test_service_clients.py
index 695d4a4..00b8470 100644
--- a/tempest/tests/common/test_service_clients.py
+++ b/tempest/tests/common/test_service_clients.py
@@ -40,6 +40,7 @@
 from tempest.services.compute.json import quotas_client
 from tempest.services.compute.json import security_group_default_rules_client \
     as nova_secgrop_default_client
+from tempest.services.compute.json import security_group_rules_client
 from tempest.services.compute.json import security_groups_client
 from tempest.services.compute.json import server_groups_client
 from tempest.services.compute.json import servers_client
@@ -115,8 +116,8 @@
             extensions_client.ExtensionsClient,
             fixed_ips_client.FixedIPsClient,
             flavors_client.FlavorsClient,
-            floating_ip_pools_client.FloatingIpPoolsClient,
-            floating_ips_bulk_client.FloatingIpsBulkClient,
+            floating_ip_pools_client.FloatingIPPoolsClient,
+            floating_ips_bulk_client.FloatingIPsBulkClient,
             floating_ips_client.FloatingIPsClient,
             hosts_client.HostsClient,
             hypervisor_client.HypervisorClient,
@@ -130,6 +131,7 @@
             quotas_client.QuotasClient,
             quota_classes_client.QuotaClassesClient,
             nova_secgrop_default_client.SecurityGroupDefaultRulesClient,
+            security_group_rules_client.SecurityGroupRulesClient,
             security_groups_client.SecurityGroupsClient,
             server_groups_client.ServerGroupsClient,
             servers_client.ServersClient,
diff --git a/tempest/tests/services/compute/test_agents_client.py b/tempest/tests/services/compute/test_agents_client.py
index d268a18..8316c90 100644
--- a/tempest/tests/services/compute/test_agents_client.py
+++ b/tempest/tests/services/compute/test_agents_client.py
@@ -14,19 +14,18 @@
 
 import httplib2
 
+from oslo_serialization import jsonutils as json
 from oslotest import mockpatch
 
 from tempest.services.compute.json import agents_client
 from tempest.tests import base
 from tempest.tests import fake_auth_provider
-from tempest.tests import fake_config
 
 
 class TestAgentsClient(base.TestCase):
 
     def setUp(self):
         super(TestAgentsClient, self).setUp()
-        self.useFixture(fake_config.ConfigFixture())
         fake_auth = fake_auth_provider.FakeAuthProvider()
         self.client = agents_client.AgentsClient(fake_auth,
                                                  'compute', 'regionOne')
@@ -34,7 +33,7 @@
     def _test_list_agents(self, bytes_body=False):
         body = '{"agents": []}'
         if bytes_body:
-            body = bytes(body.encode('utf-8'))
+            body = body.encode('utf-8')
         expected = []
         response = (httplib2.Response({'status': 200}), body)
         self.useFixture(mockpatch.Patch(
@@ -42,8 +41,64 @@
             return_value=response))
         self.assertEqual(expected, self.client.list_agents())
 
+    def _test_create_agent(self, bytes_body=False):
+        expected = {"url": "http://foo.com", "hypervisor": "kvm",
+                    "md5hash": "md5", "version": "2", "architecture": "x86_64",
+                    "os": "linux", "agent_id": 1}
+        serialized_body = json.dumps({"agent": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.post',
+            return_value=mocked_resp))
+        resp = self.client.create_agent(
+            url="http://foo.com", hypervisor="kvm", md5hash="md5",
+            version="2", architecture="x86_64", os="linux"
+        )
+        self.assertEqual(expected, resp)
+
+    def _test_delete_agent(self):
+        mocked_resp = (httplib2.Response({'status': 200}), None)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.delete',
+            return_value=mocked_resp))
+        self.client.delete_agent("1")
+
+    def _test_update_agent(self, bytes_body=False):
+        expected = {"url": "http://foo.com", "md5hash": "md5", "version": "2",
+                    "agent_id": 1}
+        serialized_body = json.dumps({"agent": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.update_agent(
+            "1", url="http://foo.com", md5hash="md5", version="2"
+        )
+        self.assertEqual(expected, resp)
+
     def test_list_agents_with_str_body(self):
         self._test_list_agents()
 
     def test_list_agents_with_bytes_body(self):
         self._test_list_agents(bytes_body=True)
+
+    def test_create_agent_with_str_body(self):
+        self._test_create_agent()
+
+    def test_create_agent_with_bytes_body(self):
+        self._test_create_agent(bytes_body=True)
+
+    def test_delete_agent(self):
+        self._test_delete_agent()
+
+    def test_update_agent_with_str_body(self):
+        self._test_update_agent()
+
+    def test_update_agent_with_bytes_body(self):
+        self._test_update_agent(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_aggregates_client.py b/tempest/tests/services/compute/test_aggregates_client.py
new file mode 100644
index 0000000..eacc251
--- /dev/null
+++ b/tempest/tests/services/compute/test_aggregates_client.py
@@ -0,0 +1,137 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import aggregates_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestAggregatesClient(base.TestCase):
+
+    def setUp(self):
+        super(TestAggregatesClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = aggregates_client.AggregatesClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_list_aggregates(self, bytes_body=False):
+        body = '{"aggregates": []}'
+        if bytes_body:
+            body = body.encode('utf-8')
+        expected = []
+        response = (httplib2.Response({'status': 200}), body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=response))
+        self.assertEqual(expected, self.client.list_aggregates())
+
+    def test_list_aggregates_with_str_body(self):
+        self._test_list_aggregates()
+
+    def test_list_aggregates_with_bytes_body(self):
+        self._test_list_aggregates(bytes_body=True)
+
+    def _test_show_aggregate(self, bytes_body=False):
+        expected = {"name": "hoge",
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at":
+                    "2015-07-16T03:07:32.000000",
+                    "updated_at": None,
+                    "hosts": [],
+                    "deleted_at": None,
+                    "id": 1,
+                    "metadata": {}}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_aggregate(1)
+        self.assertEqual(expected, resp)
+
+    def test_show_aggregate_with_str_body(self):
+        self._test_show_aggregate()
+
+    def test_show_aggregate_with_bytes_body(self):
+        self._test_show_aggregate(bytes_body=True)
+
+    def _test_create_aggregate(self, bytes_body=False):
+        expected = {"name": u'\xf4',
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at": "2015-07-21T04:11:18.000000",
+                    "updated_at": None,
+                    "deleted_at": None,
+                    "id": 1}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.post',
+            return_value=mocked_resp))
+        resp = self.client.create_aggregate(name='hoge')
+        self.assertEqual(expected, resp)
+
+    def test_create_aggregate_with_str_body(self):
+        self._test_create_aggregate()
+
+    def test_create_aggregate_with_bytes_body(self):
+        self._test_create_aggregate(bytes_body=True)
+
+    def test_delete_aggregate(self):
+        expected = {}
+        mocked_resp = (httplib2.Response({'status': 200}), None)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.delete',
+            return_value=mocked_resp))
+        resp = self.client.delete_aggregate("1")
+        self.assertEqual(expected, resp)
+
+    def _test_update_aggregate(self, bytes_body=False):
+        expected = {"name": u'\xe9',
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at": "2015-07-16T03:07:32.000000",
+                    "updated_at": "2015-07-23T05:16:29.000000",
+                    "hosts": [],
+                    "deleted_at": None,
+                    "id": 1,
+                    "metadata": {}}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.update_aggregate(1)
+        self.assertEqual(expected, resp)
+
+    def test_update_aggregate_with_str_body(self):
+        self._test_update_aggregate()
+
+    def test_update_aggregate_with_bytes_body(self):
+        self._test_update_aggregate(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_keypairs_client.py b/tempest/tests/services/compute/test_keypairs_client.py
new file mode 100644
index 0000000..e79e411
--- /dev/null
+++ b/tempest/tests/services/compute/test_keypairs_client.py
@@ -0,0 +1,47 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import httplib2
+
+from oslotest import mockpatch
+
+from tempest.services.compute.json import keypairs_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestKeyPairsClient(base.TestCase):
+
+    def setUp(self):
+        super(TestKeyPairsClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = keypairs_client.KeyPairsClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_list_keypairs(self, bytes_body=False):
+        body = '{"keypairs": []}'
+        if bytes_body:
+            body = body.encode('utf-8')
+        expected = []
+        response = (httplib2.Response({'status': 200}), body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=response))
+        self.assertEqual(expected, self.client.list_keypairs())
+
+    def test_list_keypairs_with_str_body(self):
+        self._test_list_keypairs()
+
+    def test_list_keypairs_with_bytes_body(self):
+        self._test_list_keypairs(bytes_body=True)
diff --git a/test-requirements.txt b/test-requirements.txt
index 65e3531..2ea30ec 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -9,4 +9,4 @@
 mox>=0.5.3
 mock>=1.2
 coverage>=3.6
-oslotest>=1.7.0 # Apache-2.0
+oslotest>=1.9.0 # Apache-2.0
diff --git a/tools/config/check_uptodate.sh b/tools/config/check_uptodate.sh
deleted file mode 100755
index 7b08695..0000000
--- a/tools/config/check_uptodate.sh
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/usr/bin/env bash
-
-PROJECT_NAME=${PROJECT_NAME:-tempest}
-CFGFILE_NAME=${PROJECT_NAME}.conf.sample
-
-if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then
-    CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME}
-elif [ -e etc/${CFGFILE_NAME} ]; then
-    CFGFILE=etc/${CFGFILE_NAME}
-else
-    echo "${0##*/}: can not find config file"
-    exit 1
-fi
-
-TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX`
-trap "rm -rf $TEMPDIR" EXIT
-
-oslo-config-generator --config-file tools/config/config-generator.tempest.conf --output-file ${TEMPDIR}/${CFGFILE_NAME}
-if [ $? != 0 ]
-then
-    exit 1
-fi
-
-if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE}
-then
-   echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date."
-   echo "${0##*/}: Please run tox -egenconfig."
-   exit 1
-fi
diff --git a/tox.ini b/tox.ini
index 389fee2..eae6fc7 100644
--- a/tox.ini
+++ b/tox.ini
@@ -108,12 +108,14 @@
 commands = {posargs}
 
 [testenv:docs]
-commands = python setup.py build_sphinx {posargs}
+# The generated sample config file is included in the sphinx docs, so generate it first.
+commands =
+   oslo-config-generator --config-file tools/config/config-generator.tempest.conf --output-file doc/source/_static/tempest.conf
+   python setup.py build_sphinx {posargs}
 
 [testenv:pep8]
 commands =
    flake8 {posargs}
-   {toxinidir}/tools/config/check_uptodate.sh
    python tools/check_uuid.py
 
 [testenv:uuidgen]