Merge "Updated from global requirements"
diff --git a/.gitignore b/.gitignore
index 1777cb9..f584532 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,7 @@
 ChangeLog
 *.pyc
 etc/tempest.conf
+etc/tempest.conf.sample
 etc/logging.conf
 include/swift_objects/swift_small
 include/swift_objects/swift_medium
@@ -18,3 +19,4 @@
 .coverage*
 !.coveragerc
 cover/
+doc/source/_static/tempest.conf
diff --git a/doc/source/_static/.keep b/doc/source/_static/.keep
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/doc/source/_static/.keep
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f925018..e9f2161 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -10,6 +10,7 @@
    overview
    HACKING
    REVIEWING
+   plugin
 
 ------------
 Field Guides
@@ -37,6 +38,7 @@
    :maxdepth: 2
 
    configuration
+   sampleconf
 
 ---------------------
 Command Documentation
diff --git a/doc/source/plugin.rst b/doc/source/plugin.rst
new file mode 100644
index 0000000..4e97dbe
--- /dev/null
+++ b/doc/source/plugin.rst
@@ -0,0 +1,120 @@
+=============================
+Tempest Test Plugin Interface
+=============================
+
+Tempest has an external test plugin interface which enables anyone to integrate
+an external test suite to run as part of a tempest run. This lets any project
+be run alongside the rest of the tempest suite without requiring that its tests
+live in the tempest tree.
+
+Creating a plugin
+=================
+
+Creating a plugin is fairly straightforward and doesn't require much additional
+effort on top of creating a test suite using tempest-lib. One thing to note with
+doing this is that the interfaces exposed by tempest are not considered stable
+(with the exception of configuration variables, where every effort goes into
+ensuring backwards compatibility). You should not need to import anything from
+tempest itself except where explicitly noted. If there is an interface from
+tempest that you need to rely on in your plugin it likely needs to be migrated
+to tempest-lib. In that situation, file a bug, push a migration patch, etc. to
+expedite providing the interface in a reliable manner.
+
+Plugin Class
+------------
+
+To provide tempest with all the information it needs to run your plugin you
+need to create a plugin class which tempest will load and call when it needs
+information. To simplify this, tempest provides an abstract class that should
+be used as the parent of your plugin class. To use it you would do something
+like the following::
+
+  from tempest.test_discover import plugins
+
+  class MyPlugin(plugins.TempestPlugin):
+
+Then you need to ensure you locally define all of the methods in the abstract
+class; you can refer to the API doc below for a reference of what that entails.
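+
+For illustration, a minimal sketch of what a complete plugin class might look
+like follows. It assumes the abstract methods documented in the API reference
+below (load_tests, register_opts and get_opt_lists at the time of writing),
+and ``my_plugin`` is a hypothetical package name::
+
+  import os
+
+  from tempest.test_discover import plugins
+
+
+  class MyPlugin(plugins.TempestPlugin):
+      def load_tests(self):
+          # Tell tempest's test discovery where this plugin's tests live by
+          # returning the full path to the tests dir and the top level dir.
+          base_path = os.path.split(os.path.dirname(
+              os.path.abspath(__file__)))[0]
+          test_dir = "my_plugin/tests"  # hypothetical package layout
+          full_test_dir = os.path.join(base_path, test_dir)
+          return full_test_dir, base_path
+
+      def register_opts(self, conf):
+          # Register any plugin specific config options on the passed conf
+          # object; this sketch has none to register.
+          pass
+
+      def get_opt_lists(self):
+          # Return a list of (group name, options) tuples used for sample
+          # config generation; empty here.
+          return []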
+
+Also, note that this abstract class will likely eventually live in tempest-lib.
+When that migration occurs a deprecation shim will be added to tempest so as
+not to break any existing plugins, but at that point migrating to tempest-lib
+as the source for the abstract class will be prudent.
+
+Abstract Plugin Class
+^^^^^^^^^^^^^^^^^^^^^
+
+.. autoclass:: tempest.test_discover.plugins.TempestPlugin
+   :members:
+
+Entry Point
+-----------
+
+Once you've created your plugin class you need to add an entry point to your
+project to enable tempest to find the plugin. The entry point must be added
+to the "tempest.test_plugins" namespace.
+
+If you are using pbr this is fairly straightforward; in your setup.cfg just add
+something like the following::
+
+  [entry_points]
+  tempest.test_plugins =
+      plugin_name = module.path:PluginClass
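+
+Once the package containing the plugin is installed, one quick way to confirm
+the entry point is registered (purely an illustration, not something tempest
+requires) is::
+
+  # pkg_resources ships with setuptools and scans the same entry point
+  # namespace that tempest's plugin discovery relies on.
+  import pkg_resources
+
+  for ep in pkg_resources.iter_entry_points('tempest.test_plugins'):
+      print("%s -> %s" % (ep.name, ep.module_name))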
+
+Plugin Structure
+----------------
+
+There are no hard and fast rules for the structure of a plugin: as long as the
+two steps above are done there are essentially no constraints on what it looks
+like. However, there are some recommended patterns to follow to make it easy
+for people to contribute to and work with your plugin. For example, if you
+create a directory structure with something like::
+
+    plugin_dir/
+      config.py
+      plugin.py
+      tests/
+        api/
+        scenario/
+      services/
+        client.py
+
+That will mirror what people expect from tempest. The files and directories
+serve the following purposes:
+
+* **config.py**: contains any plugin specific configuration variables (see
+                 the sketch after this list)
+* **plugin.py**: contains the plugin class used for the entry point
+* **tests**: the directory where test discovery will be run; all tests should
+             be under this dir
+* **services**: where the plugin specific service clients live
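+
+The ``config.py`` sketch referred to in the list above could look something
+like the following, using oslo.config. The group and option names here are
+purely hypothetical, and the options would be registered from the plugin
+class's ``register_opts`` method::
+
+  from oslo_config import cfg
+
+  # Hypothetical option group and options for the plugin's own service.
+  my_service_group = cfg.OptGroup(name="my-service",
+                                  title="My service options")
+
+  MyServiceOptions = [
+      cfg.StrOpt("catalog_type",
+                 default="my-service",
+                 help="Catalog type of the my-service endpoint."),
+      cfg.BoolOpt("run_slow_tests",
+                  default=False,
+                  help="Whether the plugin's slow tests should be run."),
+  ]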
+
+Additionally, when you're creating the plugin you likely want to follow all of
+the tempest developer and reviewer documentation to ensure that the tests being
+added in the plugin behave like the rest of tempest.
+
+Using Plugins
+=============
+
+Tempest will automatically discover any installed plugins when it is run, so
+simply installing the python packages which contain your plugin is enough to
+use them with tempest; nothing else is required.
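+
+Under the hood the discovery is entry point based. As a rough, standalone
+illustration of what gets picked up, the following uses stevedore (an entry
+point scanning library); whether tempest makes this exact call internally is
+an implementation detail::
+
+  from stevedore import ExtensionManager
+
+  # Scan the tempest.test_plugins namespace and instantiate each plugin class.
+  for ext in ExtensionManager('tempest.test_plugins', invoke_on_load=True):
+      print("%s: %s" % (ext.name, type(ext.obj)))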
+
+However, you should take care when installing plugins. By their very nature
+tempest can make no guarantees about the quality of a plugin it runs with.
+Additionally, while there is no limitation on running with multiple plugins,
+poorly written plugins might not properly isolate their tests, which could
+cause unexpected interactions between plugins.
+
+Notes for using plugins with virtualenvs
+----------------------------------------
+
+When using tempest inside a virtualenv (such as when running under tox) you
+have to ensure that the package containing your plugin is either installed in
+the venv too or that the venv has system site-packages enabled. The virtualenv
+isolates the tempest install from the rest of your system, so installing the
+plugin package only on your system and then running tempest inside a venv will
+not work.
+
+Tempest also exposes a tox job, all-plugin, which will set up a tox virtualenv
+with system site-packages enabled. This lets you leverage tox without having to
+manually install plugins in the tox venv before running tests.
diff --git a/doc/source/sampleconf.rst b/doc/source/sampleconf.rst
new file mode 100644
index 0000000..2a72971
--- /dev/null
+++ b/doc/source/sampleconf.rst
@@ -0,0 +1,14 @@
+.. _tempest-sampleconf:
+
+Sample Configuration File
+==========================
+
+The following is a sample Tempest configuration for adaptation and use. It is
+auto-generated from Tempest when this documentation is built, so
+if you are having issues with an option, please compare your version of
+Tempest with the version of this documentation.
+
+The sample configuration can also be viewed in `file form <_static/tempest.conf>`_.
+
+.. include:: _static/tempest.conf
+   :code:
diff --git a/etc/tempest.conf.sample b/etc/tempest.conf.sample
deleted file mode 100644
index c97eb97..0000000
--- a/etc/tempest.conf.sample
+++ /dev/null
@@ -1,1243 +0,0 @@
-[DEFAULT]
-
-#
-# From oslo.log
-#
-
-# Print debugging output (set logging level to DEBUG instead of
-# default INFO level). (boolean value)
-#debug = false
-
-# If set to false, will disable INFO logging level, making WARNING the
-# default. (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#verbose = true
-
-# The name of a logging configuration file. This file is appended to
-# any existing logging configuration files. For details about logging
-# configuration files, see the Python logging module documentation.
-# (string value)
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# DEPRECATED. A logging.Formatter log message format string which may
-# use any of the available logging.LogRecord attributes. This option
-# is deprecated.  Please use logging_context_format_string and
-# logging_default_format_string instead. (string value)
-#log_format = <None>
-
-# Format string for %%(asctime)s in log records. Default: %(default)s
-# . (string value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to output to. If no default is set,
-# logging will go to stdout. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative --log-file paths.
-# (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and
-# will be changed later to honor RFC5424. (boolean value)
-#use_syslog = false
-
-# (Optional) Enables or disables syslog rfc5424 format for logging. If
-# enabled, prefixes the MSG part of the syslog message with APP-NAME
-# (RFC5424). The format without the APP-NAME is deprecated in K, and
-# will be removed in M, along with this option. (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#use_syslog_rfc_format = true
-
-# Syslog facility to receive log lines. (string value)
-#syslog_log_facility = LOG_USER
-
-# Log output to standard error. (boolean value)
-#use_stderr = true
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages without context. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Data to append to log format when level is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string
-# value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# List of logger=LEVEL pairs. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message.
-# (string value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message.
-# (string value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From tempest.config
-#
-
-# Prefix to be added when generating the name for test resources. It
-# can be used to discover all resources associated with a specific
-# test run when running tempest on a real-life cloud (string value)
-#resources_prefix = tempest
-
-
-[auth]
-
-#
-# From tempest.config
-#
-
-# Path to the yaml file that contains the list of credentials to use
-# for running tests. If used when running in parallel you have to make
-# sure sufficient credentials are provided in the accounts file. For
-# example if no tests with roles are being run it requires at least `2
-# * CONC` distinct accounts configured in  the `test_accounts_file`,
-# with CONC == the number of concurrent test processes. (string value)
-#test_accounts_file = <None>
-
-# Allows test cases to create/destroy tenants and users. This option
-# requires that OpenStack Identity API admin credentials are known. If
-# false, isolated test cases and parallel execution, can still be
-# achieved configuring a list of test accounts (boolean value)
-# Deprecated group/name - [compute]/allow_tenant_isolation
-# Deprecated group/name - [orchestration]/allow_tenant_isolation
-#allow_tenant_isolation = true
-
-# Roles to assign to all users created by tempest (list value)
-#tempest_roles =
-
-# Default domain used when getting v3 credentials. This is the name
-# keystone uses for v2 compatibility. (string value)
-# Deprecated group/name - [auth]/tenant_isolation_domain_name
-#default_credentials_domain_name = Default
-
-# If allow_tenant_isolation is set to True and Neutron is enabled
-# Tempest will try to create a useable network, subnet, and router
-# when needed for each tenant it  creates. However in some neutron
-# configurations, like with VLAN provider networks, this doesn't work.
-# So if set to False the isolated networks will not be created
-# (boolean value)
-#create_isolated_networks = true
-
-
-[baremetal]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the baremetal provisioning service (string value)
-#catalog_type = baremetal
-
-# Whether the Ironic nova-compute driver is enabled (boolean value)
-#driver_enabled = false
-
-# Driver name which Ironic uses (string value)
-#driver = fake
-
-# The endpoint type to use for the baremetal provisioning service
-# (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Timeout for Ironic node to completely provision (integer value)
-#active_timeout = 300
-
-# Timeout for association of Nova instance and Ironic node (integer
-# value)
-#association_timeout = 30
-
-# Timeout for Ironic power transitions. (integer value)
-#power_timeout = 60
-
-# Timeout for unprovisioning an Ironic node. Takes longer since Kilo
-# as Ironic performs an extra step in Node cleaning. (integer value)
-#unprovision_timeout = 300
-
-
-[boto]
-
-#
-# From tempest.config
-#
-
-# EC2 URL (string value)
-#ec2_url = http://localhost:8773/services/Cloud
-
-# S3 URL (string value)
-#s3_url = http://localhost:8080
-
-# AWS Secret Key (string value)
-#aws_secret = <None>
-
-# AWS Access Key (string value)
-#aws_access = <None>
-
-# AWS Zone for EC2 tests (string value)
-#aws_zone = nova
-
-# S3 Materials Path (string value)
-#s3_materials_path = /opt/stack/devstack/files/images/s3-materials/cirros-0.3.0
-
-# ARI Ramdisk Image manifest (string value)
-#ari_manifest = cirros-0.3.0-x86_64-initrd.manifest.xml
-
-# AMI Machine Image manifest (string value)
-#ami_manifest = cirros-0.3.0-x86_64-blank.img.manifest.xml
-
-# AKI Kernel Image manifest (string value)
-#aki_manifest = cirros-0.3.0-x86_64-vmlinuz.manifest.xml
-
-# Instance type (string value)
-#instance_type = m1.tiny
-
-# boto Http socket timeout (integer value)
-#http_socket_timeout = 3
-
-# boto num_retries on error (integer value)
-#num_retries = 1
-
-# Status Change Timeout (integer value)
-#build_timeout = 60
-
-# Status Change Test Interval (integer value)
-#build_interval = 1
-
-
-[compute]
-
-#
-# From tempest.config
-#
-
-# Valid primary image reference to be used in tests. This is a
-# required option (string value)
-#image_ref = <None>
-
-# Valid secondary image reference to be used in tests. This is a
-# required option, but if only one image is available duplicate the
-# value of image_ref above (string value)
-#image_ref_alt = <None>
-
-# Valid primary flavor to use in tests. (string value)
-#flavor_ref = 1
-
-# Valid secondary flavor to be used in tests. (string value)
-#flavor_ref_alt = 2
-
-# User name used to authenticate to an instance. (string value)
-#image_ssh_user = root
-
-# Password used to authenticate to an instance. (string value)
-#image_ssh_password = password
-
-# User name used to authenticate to an instance using the alternate
-# image. (string value)
-#image_alt_ssh_user = root
-
-# Time in seconds between build status checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for an instance to build. Other services
-# that do not define build_timeout will inherit this value. (integer
-# value)
-#build_timeout = 300
-
-# Shell fragments to use before executing a command when sshing to a
-# guest. (string value)
-#ssh_shell_prologue = set -eu -o pipefail; PATH=$$PATH:/sbin;
-
-# Auth method used for authenticate to the instance. Valid choices
-# are: keypair, configured, adminpass and disabled. Keypair: start the
-# servers with a ssh keypair. Configured: use the configured user and
-# password. Adminpass: use the injected adminPass. Disabled: avoid
-# using ssh when it is an option. (string value)
-#ssh_auth_method = keypair
-
-# How to connect to the instance? fixed: using the first ip belongs
-# the fixed network floating: creating and using a floating ip.
-# (string value)
-#ssh_connect_method = floating
-
-# User name used to authenticate to an instance. (string value)
-#ssh_user = root
-
-# Timeout in seconds to wait for ping to succeed. (integer value)
-#ping_timeout = 120
-
-# The packet size for ping packets originating from remote linux hosts
-# (integer value)
-#ping_size = 56
-
-# The number of ping packets originating from remote linux hosts
-# (integer value)
-#ping_count = 1
-
-# Additional wait time for clean state, when there is no OS-EXT-STS
-# extension available (integer value)
-#ready_wait = 0
-
-# Name of the fixed network that is visible to all test tenants. If
-# multiple networks are available for a tenant this is the network
-# which will be used for creating servers if tempest does not create a
-# network or a network is not specified elsewhere. It may be used for
-# ssh validation only if floating IPs are disabled. (string value)
-#fixed_network_name = <None>
-
-# Network used for SSH connections. Ignored if
-# use_floatingip_for_ssh=true or run_validation=false. (string value)
-#network_for_ssh = public
-
-# Does SSH use Floating IPs? (boolean value)
-#use_floatingip_for_ssh = true
-
-# Catalog type of the Compute service. (string value)
-#catalog_type = compute
-
-# The compute region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the compute service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Expected device name when a volume is attached to an instance
-# (string value)
-#volume_device_name = vdb
-
-# Time in seconds before a shelved instance is eligible for removing
-# from a host.  -1 never offload, 0 offload when shelved. This time
-# should be the same as the time of nova.conf, and some tests will run
-# for as long as the time. (integer value)
-#shelved_offload_time = 0
-
-# Unallocated floating IP range, which will be used to test the
-# floating IP bulk feature for CRUD operation. This block must not
-# overlap an existing floating IP pool. (string value)
-#floating_ip_range = 10.0.0.0/29
-
-
-[compute-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# If false, skip disk config tests (boolean value)
-#disk_config = true
-
-# A list of enabled compute extensions with a special entry all which
-# indicates every extension is enabled. Each extension should be
-# specified with alias name. Empty list indicates all extensions are
-# disabled (list value)
-#api_extensions = all
-
-# Does the test environment support changing the admin password?
-# (boolean value)
-#change_password = false
-
-# Does the test environment support obtaining instance serial console
-# output? (boolean value)
-#console_output = true
-
-# Does the test environment support resizing? (boolean value)
-#resize = false
-
-# Does the test environment support pausing? (boolean value)
-#pause = true
-
-# Does the test environment support shelving/unshelving? (boolean
-# value)
-#shelve = true
-
-# Does the test environment support suspend/resume? (boolean value)
-#suspend = true
-
-# Does the test environment support live migration available? (boolean
-# value)
-#live_migration = true
-
-# Does the test environment support metadata service? Ignored unless
-# validation.run_validation=true. (boolean value)
-#metadata_service = true
-
-# Does the test environment use block devices for live migration
-# (boolean value)
-#block_migration_for_live_migration = false
-
-# Does the test environment block migration support cinder iSCSI
-# volumes. Note, libvirt doesn't support this, see
-# https://bugs.launchpad.net/nova/+bug/1398999 (boolean value)
-#block_migrate_cinder_iscsi = false
-
-# Does the test system allow live-migration of paused instances? Note,
-# this is more than just the ANDing of paused and live_migrate, but
-# all 3 should be set to True to run those tests (boolean value)
-#live_migrate_paused_instances = false
-
-# Enable VNC console. This configuration value should be same as
-# [nova.vnc]->vnc_enabled in nova.conf (boolean value)
-#vnc_console = false
-
-# Enable Spice console. This configuration value should be same as
-# [nova.spice]->enabled in nova.conf (boolean value)
-#spice_console = false
-
-# Enable RDP console. This configuration value should be same as
-# [nova.rdp]->enabled in nova.conf (boolean value)
-#rdp_console = false
-
-# Does the test environment support instance rescue mode? (boolean
-# value)
-#rescue = true
-
-# Enables returning of the instance password by the relevant server
-# API calls such as create, rebuild or rescue. (boolean value)
-#enable_instance_password = true
-
-# Does the test environment support dynamic network interface
-# attachment? (boolean value)
-#interface_attach = true
-
-# Does the test environment support creating snapshot images of
-# running instances? (boolean value)
-#snapshot = true
-
-# Does the test environment have the ec2 api running? (boolean value)
-#ec2_api = true
-
-# Does Nova preserve preexisting ports from Neutron when deleting an
-# instance? This should be set to True if testing Kilo+ Nova. (boolean
-# value)
-#preserve_ports = false
-
-# Does the test environment support attaching an encrypted volume to a
-# running server instance? This may depend on the combination of
-# compute_driver in nova and the volume_driver(s) in cinder. (boolean
-# value)
-#attach_encrypted_volume = true
-
-# Does the test environment support creating instances with multiple
-# ports on the same network? This is only valid when using Neutron.
-# (boolean value)
-#allow_duplicate_networks = false
-
-
-[dashboard]
-
-#
-# From tempest.config
-#
-
-# Where the dashboard can be found (string value)
-#dashboard_url = http://localhost/
-
-# Login page for the dashboard (string value)
-#login_url = http://localhost/auth/login/
-
-
-[data_processing]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the data processing service. (string value)
-#catalog_type = data_processing
-
-# The endpoint type to use for the data processing service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-
-[data_processing-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# List of enabled data processing plugins (list value)
-#plugins = vanilla,hdp
-
-
-[database]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Database service. (string value)
-#catalog_type = database
-
-# Valid primary flavor to use in database tests. (string value)
-#db_flavor_ref = 1
-
-# Current database version to use in database tests. (string value)
-#db_current_version = v1.0
-
-
-[debug]
-
-#
-# From tempest.config
-#
-
-# A regex to determine which requests should be traced.
-#
-# This is a regex to match the caller for rest client requests to be
-# able to
-# selectively trace calls out of specific classes and methods. It
-# largely
-# exists for test development, and is not expected to be used in a
-# real deploy
-# of tempest. This will be matched against the discovered
-# ClassName:method
-# in the test environment.
-#
-# Expected values for this field are:
-#
-#  * ClassName:test_method_name - traces one test_method
-#  * ClassName:setUp(Class) - traces specific setup functions
-#  * ClassName:tearDown(Class) - traces specific teardown functions
-#  * ClassName:_run_cleanups - traces the cleanup functions
-#
-# If nothing is specified, this feature is not enabled. To trace
-# everything
-# specify .* as the regex.
-#  (string value)
-#trace_requests =
-
-
-[identity]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Identity service. (string value)
-#catalog_type = identity
-
-# Set to True if using self-signed SSL certificates. (boolean value)
-#disable_ssl_certificate_validation = false
-
-# Specify a CA bundle file to use in verifying a TLS (https) server
-# certificate. (string value)
-#ca_certificates_file = <None>
-
-# Full URI of the OpenStack Identity API (Keystone), v2 (string value)
-#uri = <None>
-
-# Full URI of the OpenStack Identity API (Keystone), v3 (string value)
-#uri_v3 = <None>
-
-# Identity API version to be used for authentication for API tests.
-# (string value)
-#auth_version = v2
-
-# The identity region name to use. Also used as the other services'
-# region name unless they are set explicitly. If no such region is
-# found in the service catalog, the first found one is used. (string
-# value)
-#region = RegionOne
-
-# The endpoint type to use for the identity service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Username to use for Nova API requests. (string value)
-#username = <None>
-
-# Tenant name to use for Nova API requests. (string value)
-#tenant_name = <None>
-
-# Role required to administrate keystone. (string value)
-#admin_role = admin
-
-# API key to use when authenticating. (string value)
-#password = <None>
-
-# Domain name for authentication (Keystone V3).The same domain applies
-# to user and project (string value)
-#domain_name = <None>
-
-# Username of alternate user to use for Nova API requests. (string
-# value)
-#alt_username = <None>
-
-# Alternate user's Tenant name to use for Nova API requests. (string
-# value)
-#alt_tenant_name = <None>
-
-# API key to use when authenticating as alternate user. (string value)
-#alt_password = <None>
-
-# Alternate domain name for authentication (Keystone V3).The same
-# domain applies to user and project (string value)
-#alt_domain_name = <None>
-
-# Administrative Username to use for Keystone API requests. (string
-# value)
-#admin_username = <None>
-
-# Administrative Tenant name to use for Keystone API requests. (string
-# value)
-#admin_tenant_name = <None>
-
-# API key to use when authenticating as admin. (string value)
-#admin_password = <None>
-
-# Admin domain name for authentication (Keystone V3).The same domain
-# applies to user and project (string value)
-#admin_domain_name = <None>
-
-# ID of the default domain (string value)
-#default_domain_id = default
-
-
-[identity-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Does the identity service have delegation and impersonation enabled
-# (boolean value)
-#trust = true
-
-# Is the v2 identity API enabled (boolean value)
-#api_v2 = true
-
-# Is the v3 identity API enabled (boolean value)
-#api_v3 = true
-
-
-[image]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Image service. (string value)
-#catalog_type = image
-
-# The image region name to use. If empty, the value of identity.region
-# is used instead. If no such region is found in the service catalog,
-# the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the image service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# http accessible image (string value)
-#http_image = http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-uec.tar.gz
-
-# Timeout in seconds to wait for an image to become available.
-# (integer value)
-#build_timeout = 300
-
-# Time in seconds between image operation status checks. (integer
-# value)
-#build_interval = 1
-
-
-[image-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Is the v2 image API enabled (boolean value)
-#api_v2 = true
-
-# Is the v1 image API enabled (boolean value)
-#api_v1 = true
-
-# Is the deactivate-image feature enabled. The feature has been
-# integrated since Kilo. (boolean value)
-#deactivate_image = false
-
-
-[input-scenario]
-
-#
-# From tempest.config
-#
-
-# Matching images become parameters for scenario tests (string value)
-#image_regex = ^cirros-0.3.1-x86_64-uec$
-
-# Matching flavors become parameters for scenario tests (string value)
-#flavor_regex = ^m1.nano$
-
-# SSH verification in tests is skippedfor matching images (string
-# value)
-#non_ssh_image_regex = ^.*[Ww]in.*$
-
-# List of user mapped to regex to matching image names. (string value)
-#ssh_user_regex = [["^.*[Cc]irros.*$", "cirros"]]
-
-
-[messaging]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Messaging service. (string value)
-#catalog_type = messaging
-
-# The maximum number of queue records per page when listing queues
-# (integer value)
-#max_queues_per_page = 20
-
-# The maximum metadata size for a queue (integer value)
-#max_queue_metadata = 65536
-
-# The maximum number of queue message per page when listing (or)
-# posting messages (integer value)
-#max_messages_per_page = 20
-
-# The maximum size of a message body (integer value)
-#max_message_size = 262144
-
-# The maximum number of messages per claim (integer value)
-#max_messages_per_claim = 20
-
-# The maximum ttl for a message (integer value)
-#max_message_ttl = 1209600
-
-# The maximum ttl for a claim (integer value)
-#max_claim_ttl = 43200
-
-# The maximum grace period for a claim (integer value)
-#max_claim_grace = 43200
-
-
-[negative]
-
-#
-# From tempest.config
-#
-
-# Test generator class for all negative tests (string value)
-#test_generator = tempest.common.generator.negative_generator.NegativeTestGenerator
-
-
-[network]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Neutron service. (string value)
-#catalog_type = network
-
-# The network region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the network service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# The cidr block to allocate tenant ipv4 subnets from (string value)
-#tenant_network_cidr = 10.100.0.0/16
-
-# The mask bits for tenant ipv4 subnets (integer value)
-#tenant_network_mask_bits = 28
-
-# The cidr block to allocate tenant ipv6 subnets from (string value)
-#tenant_network_v6_cidr = 2003::/48
-
-# The mask bits for tenant ipv6 subnets (integer value)
-#tenant_network_v6_mask_bits = 64
-
-# Whether tenant networks can be reached directly from the test
-# client. This must be set to True when the 'fixed' ssh_connect_method
-# is selected. (boolean value)
-#tenant_networks_reachable = false
-
-# Id of the public network that provides external connectivity (string
-# value)
-#public_network_id =
-
-# Default floating network name. Used to allocate floating IPs when
-# neutron is enabled. (string value)
-#floating_network_name = <None>
-
-# Id of the public router that provides external connectivity. This
-# should only be used when Neutron's 'allow_overlapping_ips' is set to
-# 'False' in neutron.conf. usually not needed past 'Grizzly' release
-# (string value)
-#public_router_id =
-
-# Timeout in seconds to wait for network operation to complete.
-# (integer value)
-#build_timeout = 300
-
-# Time in seconds between network operation status checks. (integer
-# value)
-#build_interval = 1
-
-# List of dns servers which should be used for subnet creation (list
-# value)
-#dns_servers = 8.8.8.8,8.8.4.4
-
-# vnic_type to use when Launching instances with pre-configured ports.
-# Supported ports are: ['normal','direct','macvtap'] (string value)
-# Allowed values: <None>, normal, direct, macvtap
-#port_vnic_type = <None>
-
-
-[network-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Allow the execution of IPv6 tests (boolean value)
-#ipv6 = true
-
-# A list of enabled network extensions with a special entry all which
-# indicates every extension is enabled. Empty list indicates all
-# extensions are disabled. To get the list of extensions run: 'neutron
-# ext-list' (list value)
-#api_extensions = all
-
-# Allow the execution of IPv6 subnet tests that use the extended IPv6
-# attributes ipv6_ra_mode and ipv6_address_mode (boolean value)
-#ipv6_subnet_attributes = false
-
-# Does the test environment support changing port admin state (boolean
-# value)
-#port_admin_state_change = true
-
-
-[object-storage]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Object-Storage service. (string value)
-#catalog_type = object-store
-
-# The object-storage region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the object-store service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Number of seconds to time on waiting for a container to container
-# synchronization complete. (integer value)
-#container_sync_timeout = 600
-
-# Number of seconds to wait while looping to check the status of a
-# container to container synchronization (integer value)
-#container_sync_interval = 5
-
-# Role to add to users created for swift tests to enable creating
-# containers (string value)
-#operator_role = Member
-
-# User role that has reseller admin (string value)
-#reseller_admin_role = ResellerAdmin
-
-# Name of sync realm. A sync realm is a set of clusters that have
-# agreed to allow container syncing with each other. Set the same
-# realm name as Swift's container-sync-realms.conf (string value)
-#realm_name = realm1
-
-# One name of cluster which is set in the realm whose name is set in
-# 'realm_name' item in this file. Set the same cluster name as Swift's
-# container-sync-realms.conf (string value)
-#cluster_name = name1
-
-
-[object-storage-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# A list of the enabled optional discoverable apis. A single entry,
-# all, indicates that all of these features are expected to be enabled
-# (list value)
-#discoverable_apis = all
-
-# Execute (old style) container-sync tests (boolean value)
-#container_sync = true
-
-# Execute object-versioning tests (boolean value)
-#object_versioning = true
-
-# Execute discoverability tests (boolean value)
-#discoverability = true
-
-
-[orchestration]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Orchestration service. (string value)
-#catalog_type = orchestration
-
-# The orchestration region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the orchestration service. (string
-# value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Role required for users to be able to manage stacks (string value)
-#stack_owner_role = heat_stack_owner
-
-# Time in seconds between build status checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for a stack to build. (integer value)
-#build_timeout = 1200
-
-# Instance type for tests. Needs to be big enough for a full OS plus
-# the test workload (string value)
-#instance_type = m1.micro
-
-# Name of existing keypair to launch servers with. (string value)
-#keypair_name = <None>
-
-# Value must match heat configuration of the same name. (integer
-# value)
-#max_template_size = 524288
-
-# Value must match heat configuration of the same name. (integer
-# value)
-#max_resources_per_stack = 1000
-
-
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-# Deprecated group/name - [DEFAULT]/disable_process_locking
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified
-# directory should only be writable by the user running the processes
-# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
-# If external locks are used, a lock path must be set. (string value)
-# Deprecated group/name - [DEFAULT]/lock_path
-#lock_path = <None>
-
-
-[scenario]
-
-#
-# From tempest.config
-#
-
-# Directory containing image files (string value)
-#img_dir = /opt/stack/new/devstack/files/images/cirros-0.3.1-x86_64-uec
-
-# Image file name (string value)
-# Deprecated group/name - [DEFAULT]/qcow2_img_file
-#img_file = cirros-0.3.1-x86_64-disk.img
-
-# Image disk format (string value)
-#img_disk_format = qcow2
-
-# Image container format (string value)
-#img_container_format = bare
-
-# Glance image properties. Use for custom images which require them
-# (dict value)
-#img_properties = <None>
-
-# AMI image file name (string value)
-#ami_img_file = cirros-0.3.1-x86_64-blank.img
-
-# ARI image file name (string value)
-#ari_img_file = cirros-0.3.1-x86_64-initrd
-
-# AKI image file name (string value)
-#aki_img_file = cirros-0.3.1-x86_64-vmlinuz
-
-# ssh username for the image file (string value)
-#ssh_user = cirros
-
-# specifies how many resources to request at once. Used for large
-# operations testing. (integer value)
-#large_ops_number = 0
-
-# DHCP client used by images to renew DCHP lease. If left empty,
-# update operation will be skipped. Supported clients: "udhcpc",
-# "dhclient" (string value)
-# Allowed values: udhcpc, dhclient
-#dhcp_client = udhcpc
-
-
-[service_available]
-
-#
-# From tempest.config
-#
-
-# Whether or not cinder is expected to be available (boolean value)
-#cinder = true
-
-# Whether or not neutron is expected to be available (boolean value)
-#neutron = false
-
-# Whether or not glance is expected to be available (boolean value)
-#glance = true
-
-# Whether or not swift is expected to be available (boolean value)
-#swift = true
-
-# Whether or not nova is expected to be available (boolean value)
-#nova = true
-
-# Whether or not Heat is expected to be available (boolean value)
-#heat = false
-
-# Whether or not Ceilometer is expected to be available (boolean
-# value)
-#ceilometer = true
-
-# Whether or not Horizon is expected to be available (boolean value)
-#horizon = true
-
-# Whether or not Sahara is expected to be available (boolean value)
-#sahara = false
-
-# Whether or not Ironic is expected to be available (boolean value)
-#ironic = false
-
-# Whether or not Trove is expected to be available (boolean value)
-#trove = false
-
-# Whether or not Zaqar is expected to be available (boolean value)
-#zaqar = false
-
-
-[stress]
-
-#
-# From tempest.config
-#
-
-# Directory containing log files on the compute nodes (string value)
-#nova_logdir = <None>
-
-# Maximum number of instances to create during test. (integer value)
-#max_instances = 16
-
-# Controller host. (string value)
-#controller = <None>
-
-# Controller host. (string value)
-#target_controller = <None>
-
-# ssh user. (string value)
-#target_ssh_user = <None>
-
-# Path to private key. (string value)
-#target_private_key_path = <None>
-
-# regexp for list of log files. (string value)
-#target_logfiles = <None>
-
-# time (in seconds) between log file error checks. (integer value)
-#log_check_interval = 60
-
-# The number of threads created while stress test. (integer value)
-#default_thread_number_per_action = 4
-
-# Prevent the cleaning (tearDownClass()) between each stress test run
-# if an exception occurs during this run. (boolean value)
-#leave_dirty_stack = false
-
-# Allows a full cleaning process after a stress test. Caution : this
-# cleanup will remove every objects of every tenant. (boolean value)
-#full_clean_stack = false
-
-
-[telemetry]
-
-#
-# From tempest.config
-#
-
-# Catalog type of the Telemetry service. (string value)
-#catalog_type = metering
-
-# The endpoint type to use for the telemetry service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# This variable is used as flag to enable notification tests (boolean
-# value)
-#too_slow_to_test = true
-
-
-[telemetry-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Runs Ceilometer event-related tests (boolean value)
-#events = false
-
-
-[validation]
-
-#
-# From tempest.config
-#
-
-# Enable ssh on created servers and creation of additional validation
-# resources to enable remote access (boolean value)
-# Deprecated group/name - [compute]/run_ssh
-#run_validation = false
-
-# Default IP type used for validation: -fixed: uses the first IP
-# belonging to the fixed network -floating: creates and uses a
-# floating IP (string value)
-# Allowed values: fixed, floating
-#connect_method = floating
-
-# Default authentication method to the instance. Only ssh via keypair
-# is supported for now. Additional methods will be handled in a
-# separate spec. (string value)
-# Allowed values: keypair
-#auth_method = keypair
-
-# Default IP version for ssh connections. (integer value)
-# Deprecated group/name - [compute]/ip_version_for_ssh
-#ip_version_for_ssh = 4
-
-# Timeout in seconds to wait for ping to succeed. (integer value)
-#ping_timeout = 120
-
-# Timeout in seconds to wait for the TCP connection to be successful.
-# (integer value)
-# Deprecated group/name - [compute]/ssh_channel_timeout
-#connect_timeout = 60
-
-# Timeout in seconds to wait for the ssh banner. (integer value)
-# Deprecated group/name - [compute]/ssh_timeout
-#ssh_timeout = 300
-
-
-[volume]
-
-#
-# From tempest.config
-#
-
-# Time in seconds between volume availability checks. (integer value)
-#build_interval = 1
-
-# Timeout in seconds to wait for a volume to become available.
-# (integer value)
-#build_timeout = 300
-
-# Catalog type of the Volume Service (string value)
-#catalog_type = volume
-
-# The volume region name to use. If empty, the value of
-# identity.region is used instead. If no such region is found in the
-# service catalog, the first found one is used. (string value)
-#region =
-
-# The endpoint type to use for the volume service. (string value)
-# Allowed values: public, admin, internal, publicURL, adminURL, internalURL
-#endpoint_type = publicURL
-
-# Name of the backend1 (must be declared in cinder.conf) (string
-# value)
-#backend1_name = BACKEND_1
-
-# Name of the backend2 (must be declared in cinder.conf) (string
-# value)
-#backend2_name = BACKEND_2
-
-# Backend protocol to target when creating volume types (string value)
-#storage_protocol = iSCSI
-
-# Backend vendor to target when creating volume types (string value)
-#vendor_name = Open Source
-
-# Disk format to use when copying a volume to image (string value)
-#disk_format = raw
-
-# Default size in GB for volumes created by volumes tests (integer
-# value)
-#volume_size = 1
-
-
-[volume-feature-enabled]
-
-#
-# From tempest.config
-#
-
-# Runs Cinder multi-backend test (requires 2 backends) (boolean value)
-#multi_backend = false
-
-# Runs Cinder volumes backup test (boolean value)
-#backup = true
-
-# Runs Cinder volume snapshot test (boolean value)
-#snapshot = true
-
-# A list of enabled volume extensions with a special entry all which
-# indicates every extension is enabled. Empty list indicates all
-# extensions are disabled (list value)
-#api_extensions = all
-
-# Is the v1 volume API enabled (boolean value)
-#api_v1 = true
-
-# Is the v2 volume API enabled (boolean value)
-#api_v2 = true
-
-# Update bootable status of a volume Not implemented on icehouse
-# (boolean value)
-#bootable = false
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 9acf23b..8dcd0b2 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -170,4 +170,5 @@
         # will be raised when out of quota
         self.assertRaises((lib_exc.OverLimit, lib_exc.Forbidden),
                           self.sgr_client.create_security_group_rule,
-                          secgroup_id, ip_protocol, 1025, 1025)
+                          parent_group_id=secgroup_id, ip_protocol=ip_protocol,
+                          from_port=1025, to_port=1025)
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 2126787..b2effc2 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -279,7 +279,7 @@
         if 'name' in kwargs:
             name = kwargs.pop('name')
 
-        image = cls.images_client.create_image(server_id, name)
+        image = cls.images_client.create_image(server_id, name=name)
         image_id = data_utils.parse_image_id(image.response['location'])
         cls.images.append(image_id)
 
diff --git a/tempest/api/compute/images/test_images_negative.py b/tempest/api/compute/images/test_images_negative.py
index 9721fa5..84a8258 100644
--- a/tempest/api/compute/images/test_images_negative.py
+++ b/tempest/api/compute/images/test_images_negative.py
@@ -93,7 +93,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 35)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
-                          test_uuid, snapshot_name)
+                          test_uuid, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('36741560-510e-4cc2-8641-55fe4dfb2437')
@@ -102,7 +102,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         test_uuid = ('a' * 37)
         self.assertRaises(lib_exc.NotFound, self.client.create_image,
-                          test_uuid, snapshot_name)
+                          test_uuid, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('381acb65-785a-4942-94ce-d8f8c84f1f0f')
diff --git a/tempest/api/compute/images/test_images_oneserver.py b/tempest/api/compute/images/test_images_oneserver.py
index 40a781c..06b7cac 100644
--- a/tempest/api/compute/images/test_images_oneserver.py
+++ b/tempest/api/compute/images/test_images_oneserver.py
@@ -80,7 +80,8 @@
         # Create a new image
         name = data_utils.rand_name('image')
         meta = {'image_type': 'test'}
-        body = self.client.create_image(self.server_id, name, meta)
+        body = self.client.create_image(self.server_id, name=name,
+                                        metadata=meta)
         image_id = data_utils.parse_image_id(body.response['location'])
         waiters.wait_for_image_status(self.client, image_id, 'ACTIVE')
 
@@ -112,6 +113,6 @@
         # #1370954 in glance which will 500 if mysql is used as the
         # backend and it attempts to store a 4 byte utf-8 character
         utf8_name = data_utils.rand_name('\xe2\x82\xa1')
-        body = self.client.create_image(self.server_id, utf8_name)
+        body = self.client.create_image(self.server_id, name=utf8_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.addCleanup(self.client.delete_image, image_id)
diff --git a/tempest/api/compute/images/test_images_oneserver_negative.py b/tempest/api/compute/images/test_images_oneserver_negative.py
index 1a74e52..9ea62fb 100644
--- a/tempest/api/compute/images/test_images_oneserver_negative.py
+++ b/tempest/api/compute/images/test_images_oneserver_negative.py
@@ -93,7 +93,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         meta = {'': ''}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name, meta)
+                          self.server_id, name=snapshot_name, metadata=meta)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('3d24d11f-5366-4536-bd28-cff32b748eca')
@@ -102,7 +102,7 @@
         snapshot_name = data_utils.rand_name('test-snap')
         meta = {'a' * 260: 'b' * 260}
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name, meta)
+                          self.server_id, name=snapshot_name, metadata=meta)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('0460efcf-ee88-4f94-acef-1bf658695456')
@@ -111,8 +111,7 @@
 
         # Create first snapshot
         snapshot_name = data_utils.rand_name('test-snap')
-        body = self.client.create_image(self.server_id,
-                                        snapshot_name)
+        body = self.client.create_image(self.server_id, name=snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
         self.addCleanup(self._reset_server)
@@ -120,7 +119,7 @@
         # Create second snapshot
         alt_snapshot_name = data_utils.rand_name('test-snap')
         self.assertRaises(lib_exc.Conflict, self.client.create_image,
-                          self.server_id, alt_snapshot_name)
+                          self.server_id, name=alt_snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('084f0cbc-500a-4963-8a4e-312905862581')
@@ -129,7 +128,7 @@
 
         snapshot_name = data_utils.rand_name('a' * 260)
         self.assertRaises(lib_exc.BadRequest, self.client.create_image,
-                          self.server_id, snapshot_name)
+                          self.server_id, name=snapshot_name)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('0894954d-2db2-4195-a45b-ffec0bc0187e')
@@ -137,7 +136,7 @@
         # Return an error while trying to delete an image what is creating
 
         snapshot_name = data_utils.rand_name('test-snap')
-        body = self.client.create_image(self.server_id, snapshot_name)
+        body = self.client.create_image(self.server_id, name=snapshot_name)
         image_id = data_utils.parse_image_id(body.response['location'])
         self.image_ids.append(image_id)
         self.addCleanup(self._reset_server)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules.py b/tempest/api/compute/security_groups/test_security_group_rules.py
index 4596e1f..b5eff70 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules.py
@@ -69,11 +69,11 @@
         security_group = self.create_security_group()
         securitygroup_id = security_group['id']
         # Adding rules to the created Security Group
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port)
         self.expected['parent_group_id'] = securitygroup_id
         self.expected['ip_range'] = {'cidr': '0.0.0.0/0'}
         self._check_expected_response(rule)
@@ -91,12 +91,12 @@
 
         # Adding rules to the created Security Group with optional cidr
         cidr = '10.2.3.124/24'
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port,
-                                                   cidr=cidr)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=parent_group_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            cidr=cidr)
         self.expected['parent_group_id'] = parent_group_id
         self.expected['ip_range'] = {'cidr': cidr}
         self._check_expected_response(rule)
@@ -118,12 +118,12 @@
         group_name = security_group['name']
 
         # Adding rules to the created Security Group with optional group_id
-        rule = \
-            self.client.create_security_group_rule(parent_group_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port,
-                                                   group_id=group_id)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=parent_group_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            group_id=group_id)
         self.expected['parent_group_id'] = parent_group_id
         self.expected['group'] = {'tenant_id': self.client.tenant_id,
                                   'name': group_name}
@@ -140,21 +140,22 @@
         securitygroup_id = security_group['id']
 
         # Add a first rule to the created Security Group
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   self.ip_protocol,
-                                                   self.from_port,
-                                                   self.to_port)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port)
         rule1_id = rule['id']
 
         # Add a second rule to the created Security Group
         ip_protocol2 = 'icmp'
         from_port2 = -1
         to_port2 = -1
-        rule = \
-            self.client.create_security_group_rule(securitygroup_id,
-                                                   ip_protocol2,
-                                                   from_port2, to_port2)
+        rule = self.client.create_security_group_rule(
+            parent_group_id=securitygroup_id,
+            ip_protocol=ip_protocol2,
+            from_port=from_port2,
+            to_port=to_port2)
         rule2_id = rule['id']
         # Delete the Security Group rule2 at the end of this method
         self.addCleanup(self.client.delete_security_group_rule, rule2_id)
@@ -176,11 +177,12 @@
         security_group = self.create_security_group()
         sg2_id = security_group['id']
         # Adding rules to the Group1
-        self.client.create_security_group_rule(sg1_id,
-                                               self.ip_protocol,
-                                               self.from_port,
-                                               self.to_port,
-                                               group_id=sg2_id)
+        self.client.create_security_group_rule(
+            parent_group_id=sg1_id,
+            ip_protocol=self.ip_protocol,
+            from_port=self.from_port,
+            to_port=self.to_port,
+            group_id=sg2_id)
 
         # Delete group2
         self.security_groups_client.delete_security_group(sg2_id)
diff --git a/tempest/api/compute/security_groups/test_security_group_rules_negative.py b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
index e2a1034..d12306a 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules_negative.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules_negative.py
@@ -51,7 +51,9 @@
         to_port = 22
         self.assertRaises(lib_exc.NotFound,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('2244d7e4-adb7-4ecb-9930-2d77e123ce4f')
@@ -66,7 +68,9 @@
         to_port = 22
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('8bd56d02-3ffa-4d67-9933-b6b9a01d6089')
@@ -81,17 +85,17 @@
         from_port = 22
         to_port = 22
 
-        rule = \
-            self.rules_client.create_security_group_rule(parent_group_id,
-                                                         ip_protocol,
-                                                         from_port,
-                                                         to_port)
+        rule = self.rules_client.create_security_group_rule(
+            parent_group_id=parent_group_id, ip_protocol=ip_protocol,
+            from_port=from_port, to_port=to_port)
         self.addCleanup(self.rules_client.delete_security_group_rule,
                         rule['id'])
         # Add the same rule to the group should fail
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('84c81249-9f6e-439c-9bbf-cbb0d2cddbdf')
@@ -109,7 +113,9 @@
 
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('12bbc875-1045-4f7a-be46-751277baedb9')
@@ -126,7 +132,9 @@
         to_port = 22
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('ff88804d-144f-45d1-bf59-dd155838a43a')
@@ -143,7 +151,9 @@
         to_port = data_utils.rand_int_id(start=65536)
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          parent_group_id, ip_protocol, from_port, to_port)
+                          parent_group_id=parent_group_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('00296fa9-0576-496a-ae15-fbab843189e0')
@@ -160,7 +170,9 @@
         to_port = 21
         self.assertRaises(lib_exc.BadRequest,
                           self.rules_client.create_security_group_rule,
-                          secgroup_id, ip_protocol, from_port, to_port)
+                          parent_group_id=secgroup_id,
+                          ip_protocol=ip_protocol, from_port=from_port,
+                          to_port=to_port)
 
     @test.attr(type=['negative'])
     @test.idempotent_id('56fddcca-dbb8-4494-a0db-96e9f869527c')
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index a75cb3e..6160844 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -305,12 +305,20 @@
             params = {'ip': ip}
         else:
             params = {'ip6': ip}
+        # capture all servers in case something goes wrong
+        all_servers = self.client.list_servers(detail=True)
         body = self.client.list_servers(**params)
         servers = body['servers']
 
-        self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
-        self.assertIn(self.s2_name, map(lambda x: x['name'], servers))
-        self.assertIn(self.s3_name, map(lambda x: x['name'], servers))
+        self.assertIn(self.s1_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s1_name, servers, all_servers))
+        self.assertIn(self.s2_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s2_name, servers, all_servers))
+        self.assertIn(self.s3_name, map(lambda x: x['name'], servers),
+                      "%s not found in %s, all servers %s" %
+                      (self.s3_name, servers, all_servers))
 
     @test.idempotent_id('67aec2d0-35fe-4503-9f92-f13272b867ed')
     def test_list_servers_detailed_limit_results(self):
diff --git a/tempest/api/compute/test_authorization.py b/tempest/api/compute/test_authorization.py
index 1d7f7fa..e7111b0 100644
--- a/tempest/api/compute/test_authorization.py
+++ b/tempest/api/compute/test_authorization.py
@@ -90,7 +90,8 @@
         from_port = 22
         to_port = 22
         cls.rule = cls.rule_client.create_security_group_rule(
-            parent_group_id, ip_protocol, from_port, to_port)
+            parent_group_id=parent_group_id, ip_protocol=ip_protocol,
+            from_port=from_port, to_port=to_port)
 
     @classmethod
     def resource_cleanup(cls):
@@ -173,7 +174,7 @@
         # A create image request for another user's server should fail
         self.assertRaises(lib_exc.NotFound,
                           self.alt_images_client.create_image,
-                          self.server['id'], 'testImage')
+                          self.server['id'], name='testImage')
 
     @test.idempotent_id('95d445f6-babc-4f2e-aea3-aa24ec5e7f0d')
     def test_create_server_with_unauthorized_image(self):
@@ -304,8 +305,9 @@
             self.assertRaises(lib_exc.BadRequest,
                               self.alt_rule_client.
                               create_security_group_rule,
-                              parent_group_id, ip_protocol, from_port,
-                              to_port)
+                              parent_group_id=parent_group_id,
+                              ip_protocol=ip_protocol,
+                              from_port=from_port, to_port=to_port)
         finally:
             # Next request the base_url is back to normal
             if resp['status'] is not None:
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
new file mode 100644
index 0000000..3af2e90
--- /dev/null
+++ b/tempest/api/identity/admin/v2/test_endpoints.py
@@ -0,0 +1,90 @@
+# Copyright 2013 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.identity import base
+from tempest.common.utils import data_utils
+from tempest import test
+
+
+class EndPointsTestJSON(base.BaseIdentityV2AdminTest):
+
+    @classmethod
+    def resource_setup(cls):
+        super(EndPointsTestJSON, cls).resource_setup()
+        cls.service_ids = list()
+        s_name = data_utils.rand_name('service')
+        s_type = data_utils.rand_name('type')
+        s_description = data_utils.rand_name('description')
+        cls.service_data =\
+            cls.client.create_service(s_name, s_type,
+                                      description=s_description)
+        cls.service_id = cls.service_data['id']
+        cls.service_ids.append(cls.service_id)
+        # Create endpoints so as to use for LIST and GET test cases
+        cls.setup_endpoints = list()
+        for i in range(2):
+            region = data_utils.rand_name('region')
+            url = data_utils.rand_url()
+            endpoint = cls.client.create_endpoint(cls.service_id,
+                                                  region,
+                                                  publicurl=url,
+                                                  adminurl=url,
+                                                  internalurl=url)
+            # list_endpoints() will return the 'enabled' field
+            endpoint['enabled'] = True
+            cls.setup_endpoints.append(endpoint)
+
+    @classmethod
+    def resource_cleanup(cls):
+        for e in cls.setup_endpoints:
+            cls.client.delete_endpoint(e['id'])
+        for s in cls.service_ids:
+            cls.client.delete_service(s)
+        super(EndPointsTestJSON, cls).resource_cleanup()
+
+    @test.idempotent_id('11f590eb-59d8-4067-8b2b-980c7f387f51')
+    def test_list_endpoints(self):
+        # Get a list of endpoints
+        fetched_endpoints = self.client.list_endpoints()
+        # Asserting LIST endpoints
+        missing_endpoints =\
+            [e for e in self.setup_endpoints if e not in fetched_endpoints]
+        self.assertEqual(0, len(missing_endpoints),
+                         "Failed to find endpoint %s in fetched list" %
+                         ', '.join(str(e) for e in missing_endpoints))
+
+    @test.idempotent_id('9974530a-aa28-4362-8403-f06db02b26c1')
+    def test_create_list_delete_endpoint(self):
+        region = data_utils.rand_name('region')
+        url = data_utils.rand_url()
+        endpoint = self.client.create_endpoint(self.service_id,
+                                               region,
+                                               publicurl=url,
+                                               adminurl=url,
+                                               internalurl=url)
+        # Asserting Create Endpoint response body
+        self.assertIn('id', endpoint)
+        self.assertEqual(region, endpoint['region'])
+        self.assertEqual(url, endpoint['publicurl'])
+        # Checking if created endpoint is present in the list of endpoints
+        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
+        self.assertIn(endpoint['id'], fetched_endpoints_id)
+        # Deleting the endpoint created in this method
+        self.client.delete_endpoint(endpoint['id'])
+        # Checking whether endpoint is deleted successfully
+        fetched_endpoints = self.client.list_endpoints()
+        fetched_endpoints_id = [e['id'] for e in fetched_endpoints]
+        self.assertNotIn(endpoint['id'], fetched_endpoints_id)
diff --git a/tempest/clients.py b/tempest/clients.py
index e32d401..b3fb8a8 100644
--- a/tempest/clients.py
+++ b/tempest/clients.py
@@ -344,15 +344,25 @@
     def _set_identity_clients(self):
         params = {
             'service': CONF.identity.catalog_type,
-            'region': CONF.identity.region,
-            'endpoint_type': 'adminURL'
+            'region': CONF.identity.region
         }
         params.update(self.default_params_with_timeout_values)
-
+        params_v2_admin = params.copy()
+        params_v2_admin['endpoint_type'] = CONF.identity.v2_admin_endpoint_type
+        # Client uses the admin endpoint type of Keystone API v2
         self.identity_client = IdentityClient(self.auth_provider,
-                                              **params)
+                                              **params_v2_admin)
+        params_v2_public = params.copy()
+        params_v2_public['endpoint_type'] = (
+            CONF.identity.v2_public_endpoint_type)
+        # Client uses the public endpoint type of Keystone API v2
+        self.identity_public_client = IdentityClient(self.auth_provider,
+                                                     **params_v2_public)
+        params_v3 = params.copy()
+        params_v3['endpoint_type'] = CONF.identity.v3_endpoint_type
+        # Client uses the endpoint type of Keystone API v3
         self.identity_v3_client = IdentityV3Client(self.auth_provider,
-                                                   **params)
+                                                   **params_v3)
         self.endpoints_client = EndPointClient(self.auth_provider,
                                                **params)
         self.service_client = ServiceClient(self.auth_provider, **params)
diff --git a/tempest/cmd/init.py b/tempest/cmd/init.py
index c13fbe5..289b978 100644
--- a/tempest/cmd/init.py
+++ b/tempest/cmd/init.py
@@ -15,6 +15,7 @@
 import os
 import shutil
 import subprocess
+import sys
 
 from cliff import command
 from oslo_log import log as logging
@@ -33,13 +34,44 @@
 """
 
 
+def get_tempest_default_config_dir():
+    """Returns the correct default config dir to support both cases of
+    tempest being or not installed in a virtualenv.
+    Cases considered:
+    - no virtual env, python2: real_prefix and base_prefix not set
+    - no virtual env, python3: real_prefix not set, base_prefix set and
+      identical to prefix
+    - virtualenv, python2: real_prefix and prefix are set and different
+    - virtualenv, python3: real_prefix not set, base_prefix and prefix are
+      set and identical
+    - pyvenv, any python version: real_prefix not set, base_prefix and prefix
+      are set and different
+
+    :return: default config dir
+    """
+    real_prefix = getattr(sys, 'real_prefix', None)
+    base_prefix = getattr(sys, 'base_prefix', None)
+    prefix = sys.prefix
+    if real_prefix is None and base_prefix is None:
+        # Not running in a virtual environment of any kind
+        return '/etc/tempest'
+    elif (real_prefix is None and base_prefix is not None and
+            base_prefix == prefix):
+        # Probably not running in a virtual environment
+        # NOTE(andreaf) we cannot distinguish this case from the case of
+        # a virtual environment created with virtualenv, and running python3.
+        return '/etc/tempest'
+    else:
+        return os.path.join(sys.prefix, 'etc/tempest')
+
+
 class TempestInit(command.Command):
     """Setup a local working environment for running tempest"""
 
     def get_parser(self, prog_name):
         parser = super(TempestInit, self).get_parser(prog_name)
         parser.add_argument('dir', nargs='?', default=os.getcwd())
-        parser.add_argument('--config-dir', '-c', default='/etc/tempest')
+        parser.add_argument('--config-dir', '-c', default=None)
         return parser
 
     def generate_testr_conf(self, local_path):
@@ -67,6 +99,11 @@
     def copy_config(self, etc_dir, config_dir):
         shutil.copytree(config_dir, etc_dir)
 
+    def generate_sample_config(self, local_dir):
+        subprocess.call(['oslo-config-generator', '--config-file',
+                         'tools/config/config-generator.tempest.conf'],
+                        cwd=local_dir)
+
     def create_working_dir(self, local_dir, config_dir):
         # Create local dir if missing
         if not os.path.isdir(local_dir):
@@ -87,6 +124,8 @@
             os.mkdir(log_dir)
         # Create and copy local etc dir
         self.copy_config(etc_dir, config_dir)
+        # Generate the sample config file
+        self.generate_sample_config(local_dir)
         # Update local confs to reflect local paths
         self.update_local_conf(config_path, lock_dir, log_dir)
         # Generate a testr conf file
@@ -96,4 +135,5 @@
             subprocess.call(['testr', 'init'], cwd=local_dir)
 
     def take_action(self, parsed_args):
-        self.create_working_dir(parsed_args.dir, parsed_args.config_dir)
+        config_dir = parsed_args.config_dir or get_tempest_default_config_dir()
+        self.create_working_dir(parsed_args.dir, config_dir)
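
The new helper keys off the interpreter attributes that virtualenv and pyvenv
set. A rough sketch of how the resolution behaves (the paths shown are
illustrative, not taken from this change)::

  import sys

  from tempest.cmd.init import get_tempest_default_config_dir

  # System interpreter: real_prefix/base_prefix are unset (python2) or
  # base_prefix equals prefix (python3), so the packaged default is used.
  get_tempest_default_config_dir()  # -> '/etc/tempest'

  # Inside a virtualenv (python2) or a pyvenv, sys.prefix points at the
  # environment, so the result is e.g. '/home/user/venv/etc/tempest'.
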
diff --git a/tempest/cmd/javelin.py b/tempest/cmd/javelin.py
index 9402154..30fb38c 100755
--- a/tempest/cmd/javelin.py
+++ b/tempest/cmd/javelin.py
@@ -921,7 +921,8 @@
         for rule in secgroup['rules']:
             ip_proto, from_port, to_port, cidr = rule.split()
             client.secrules.create_security_group_rule(
-                secgroup_id, ip_proto, from_port, to_port, cidr=cidr)
+                parent_group_id=secgroup_id, ip_protocol=ip_proto,
+                from_port=from_port, to_port=to_port, cidr=cidr)
 
 
 def destroy_secgroups(secgroups):
diff --git a/tempest/common/validation_resources.py b/tempest/common/validation_resources.py
index 15c452f..402638d 100644
--- a/tempest/common/validation_resources.py
+++ b/tempest/common/validation_resources.py
@@ -30,10 +30,12 @@
         security_group_client.create_security_group(name=sg_name,
                                                     description=sg_description)
     if add_rule:
-        security_group_client.create_security_group_rule(security_group['id'],
-                                                         'tcp', 22, 22)
-        security_group_client.create_security_group_rule(security_group['id'],
-                                                         'icmp', -1, -1)
+        security_group_client.create_security_group_rule(
+            parent_group_id=security_group['id'], ip_protocol='tcp',
+            from_port=22, to_port=22)
+        security_group_client.create_security_group_rule(
+            parent_group_id=security_group['id'], ip_protocol='icmp',
+            from_port=-1, to_port=-1)
     LOG.debug("SSH Validation resource security group with tcp and icmp "
               "rules %s created"
               % sg_name)
diff --git a/tempest/config.py b/tempest/config.py
index ab503e3..0262d1b 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -112,11 +112,30 @@
                     "services' region name unless they are set explicitly. "
                     "If no such region is found in the service catalog, the "
                     "first found one is used."),
-    cfg.StrOpt('endpoint_type',
+    cfg.StrOpt('v2_admin_endpoint_type',
+               default='adminURL',
+               choices=['public', 'admin', 'internal',
+                        'publicURL', 'adminURL', 'internalURL'],
+               help="The admin endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v2",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
+    cfg.StrOpt('v2_public_endpoint_type',
                default='publicURL',
                choices=['public', 'admin', 'internal',
                         'publicURL', 'adminURL', 'internalURL'],
-               help="The endpoint type to use for the identity service."),
+               help="The public endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v2",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
+    cfg.StrOpt('v3_endpoint_type',
+               default='adminURL',
+               choices=['public', 'admin', 'internal',
+                        'publicURL', 'adminURL', 'internalURL'],
+               help="The endpoint type to use for OpenStack Identity "
+                    "(Keystone) API v3",
+               deprecated_opts=[cfg.DeprecatedOpt('endpoint_type',
+                                                  group='identity')]),
     cfg.StrOpt('username',
                help="Username to use for Nova API requests."),
     cfg.StrOpt('tenant_name',
@@ -1224,7 +1243,10 @@
     The purpose of this is to allow tools like the Oslo sample config file
     generator to discover the options exposed to users.
     """
-    return [(getattr(g, 'name', None), o) for g, o in _opts]
+    ext_plugins = plugins.TempestTestPluginManager()
+    opt_list = [(getattr(g, 'name', None), o) for g, o in _opts]
+    opt_list.extend(ext_plugins.get_plugin_options_list())
+    return opt_list
 
 
 # this should never be called outside of this class
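
With the plugin options folded in, the list handed to the sample config
generator now carries both tempest's own groups and any groups contributed by
installed plugins. A minimal sketch of walking that list (assuming the usual
list_opts() entry point in tempest.config)::

  from tempest import config

  # Each entry is a (group_name, options) tuple; plugin-provided groups
  # are appended after tempest's built-in ones.
  for group_name, opts in config.list_opts():
      print(group_name, [opt.name for opt in opts])
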
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index bb0ccdd..60bf7cb 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -233,14 +233,14 @@
         rulesets = [
             {
                 # ssh
-                'ip_proto': 'tcp',
+                'ip_protocol': 'tcp',
                 'from_port': 22,
                 'to_port': 22,
                 'cidr': '0.0.0.0/0',
             },
             {
                 # ping
-                'ip_proto': 'icmp',
+                'ip_protocol': 'icmp',
                 'from_port': -1,
                 'to_port': -1,
                 'cidr': '0.0.0.0/0',
@@ -248,8 +248,8 @@
         ]
         rules = list()
         for ruleset in rulesets:
-            sg_rule = _client_rules.create_security_group_rule(secgroup_id,
-                                                               **ruleset)
+            sg_rule = _client_rules.create_security_group_rule(
+                parent_group_id=secgroup_id, **ruleset)
             self.addCleanup(self.delete_wrapper,
                             _client_rules.delete_security_group_rule,
                             sg_rule['id'])
@@ -402,7 +402,7 @@
         if name is None:
             name = data_utils.rand_name('scenario-snapshot')
         LOG.debug("Creating a snapshot image for server: %s", server['name'])
-        image = _images_client.create_image(server['id'], name)
+        image = _images_client.create_image(server['id'], name=name)
         image_id = image.response['location'].split('images/')[1]
         _image_client.wait_for_image_status(image_id, 'active')
         self.addCleanup_with_wait(
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index fba839a..9481e58 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -27,13 +27,17 @@
 
 
 class TestGettingAddress(manager.NetworkScenarioTest):
-    """Create network with subnets: one IPv4 and
-    one or few IPv6 in a given address mode
-    Boot 2 VMs on this network
-    Allocate and assign 2 FIP4
-    Check that vNICs of all VMs gets all addresses actually assigned
-    Ping4 to one VM from another one
-    If ping6 available in VM, do ping6 to all v6 addresses
+    """Test Summary:
+
+    1. Create network with subnets:
+        1.1. one IPv4 and
+        1.2. one or more IPv6 in a given address mode
+    2. Boot 2 VMs on this network
+    3. Allocate and assign 2 FIP4
+    4. Check that the vNICs of all VMs get all of the addresses actually assigned
+    5. Each VM will ping the other's v4 private address
+    6. If ping6 is available in the VM, each VM will ping all of the other's v6
+       addresses as well as the router's
     """
 
     @classmethod
@@ -74,12 +78,13 @@
         self.network = self._create_network(tenant_id=self.tenant_id)
         sub4 = self._create_subnet(network=self.network,
                                    namestart='sub4',
-                                   ip_version=4,)
+                                   ip_version=4)
 
         router = self._get_router(tenant_id=self.tenant_id)
         sub4.add_to_router(router_id=router['id'])
         self.addCleanup(sub4.delete)
 
+        self.subnets_v6 = []
         for _ in range(n_subnets6):
             sub6 = self._create_subnet(network=self.network,
                                        namestart='sub6',
@@ -89,6 +94,7 @@
 
             sub6.add_to_router(router_id=router['id'])
             self.addCleanup(sub6.delete)
+            self.subnets_v6.append(sub6)
 
     @staticmethod
     def define_server_ips(srv):
@@ -145,23 +151,32 @@
             self.assertTrue(test.call_until_true(srv2_v6_addr_assigned,
                                                  CONF.compute.ping_timeout, 1))
 
-        result = sshv4_1.ping_host(ips_from_api_2['4'])
-        self.assertIn('0% packet loss', result)
-        result = sshv4_2.ping_host(ips_from_api_1['4'])
-        self.assertIn('0% packet loss', result)
+        self._check_connectivity(sshv4_1, ips_from_api_2['4'])
+        self._check_connectivity(sshv4_2, ips_from_api_1['4'])
 
         # Some VM (like cirros) may not have ping6 utility
         result = sshv4_1.exec_command('whereis ping6')
         is_ping6 = False if result == 'ping6:\n' else True
         if is_ping6:
             for i in range(n_subnets6):
-                result = sshv4_1.ping_host(ips_from_api_2['6'][i])
-                self.assertIn('0% packet loss', result)
-                result = sshv4_2.ping_host(ips_from_api_1['6'][i])
-                self.assertIn('0% packet loss', result)
+                self._check_connectivity(sshv4_1,
+                                         ips_from_api_2['6'][i])
+                self._check_connectivity(sshv4_1,
+                                         self.subnets_v6[i].gateway_ip)
+                self._check_connectivity(sshv4_2,
+                                         ips_from_api_1['6'][i])
+                self._check_connectivity(sshv4_2,
+                                         self.subnets_v6[i].gateway_ip)
         else:
             LOG.warning('Ping6 is not available, skipping')
 
+    def _check_connectivity(self, source, dest):
+        self.assertTrue(
+            self._check_remote_connectivity(source, dest),
+            "Timed out waiting for %s to become reachable from %s" %
+            (dest, source.ssh_client.host)
+        )
+
     @test.idempotent_id('2c92df61-29f0-4eaa-bee3-7c65bef62a43')
     @test.services('compute', 'network')
     def test_slaac_from_os(self):
diff --git a/tempest/services/compute/json/images_client.py b/tempest/services/compute/json/images_client.py
index b0ce2dc..4e7e93f 100644
--- a/tempest/services/compute/json/images_client.py
+++ b/tempest/services/compute/json/images_client.py
@@ -23,18 +23,10 @@
 
 class ImagesClient(service_client.ServiceClient):
 
-    def create_image(self, server_id, name, meta=None):
+    def create_image(self, server_id, **kwargs):
         """Creates an image of the original server."""
 
-        post_body = {
-            'createImage': {
-                'name': name,
-            }
-        }
-
-        if meta is not None:
-            post_body['createImage']['metadata'] = meta
-
+        post_body = {'createImage': kwargs}
         post_body = json.dumps(post_body)
         resp, body = self.post('servers/%s/action' % server_id,
                                post_body)
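
Since the body keys are now passed straight through, callers supply the
'createImage' fields as keyword arguments, and the old meta parameter becomes
the API's metadata key. A usage sketch (the names and values are
illustrative)::

  # old form: images_client.create_image(server_id, 'my-snap', meta={'k': 'v'})
  images_client.create_image(server_id, name='my-snap',
                             metadata={'k': 'v'})
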
diff --git a/tempest/services/compute/json/security_group_rules_client.py b/tempest/services/compute/json/security_group_rules_client.py
index f570eb7..9a7c881 100644
--- a/tempest/services/compute/json/security_group_rules_client.py
+++ b/tempest/services/compute/json/security_group_rules_client.py
@@ -23,8 +23,7 @@
 
 class SecurityGroupRulesClient(service_client.ServiceClient):
 
-    def create_security_group_rule(self, parent_group_id, ip_proto, from_port,
-                                   to_port, **kwargs):
+    def create_security_group_rule(self, **kwargs):
         """
         Creating a new security group rules.
         parent_group_id :ID of Security group
@@ -35,15 +34,7 @@
         cidr     : CIDR for address range.
         group_id : ID of the Source group
         """
-        post_body = {
-            'parent_group_id': parent_group_id,
-            'ip_protocol': ip_proto,
-            'from_port': from_port,
-            'to_port': to_port,
-            'cidr': kwargs.get('cidr'),
-            'group_id': kwargs.get('group_id'),
-        }
-        post_body = json.dumps({'security_group_rule': post_body})
+        post_body = json.dumps({'security_group_rule': kwargs})
         url = 'os-security-group-rules'
         resp, body = self.post(url, post_body)
         body = json.loads(body)
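
Every call site updated in this change follows the same pattern: the rule
fields are passed as keyword arguments and serialized directly into the
'security_group_rule' body. A sketch of a typical call (the values are
illustrative)::

  rules_client.create_security_group_rule(parent_group_id=sg_id,
                                          ip_protocol='tcp',
                                          from_port=22,
                                          to_port=22,
                                          cidr='0.0.0.0/0')
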
diff --git a/tempest/services/identity/v2/json/identity_client.py b/tempest/services/identity/v2/json/identity_client.py
index 1076fca..c9345e0 100644
--- a/tempest/services/identity/v2/json/identity_client.py
+++ b/tempest/services/identity/v2/json/identity_client.py
@@ -259,6 +259,33 @@
         self.expected_success(204, resp.status)
         return service_client.ResponseBody(resp, body)
 
+    def create_endpoint(self, service_id, region_id, **kwargs):
+        """Create an endpoint for service."""
+        post_body = {
+            'service_id': service_id,
+            'region': region_id,
+            'publicurl': kwargs.get('publicurl'),
+            'adminurl': kwargs.get('adminurl'),
+            'internalurl': kwargs.get('internalurl')
+        }
+        post_body = json.dumps({'endpoint': post_body})
+        resp, body = self.post('/endpoints', post_body)
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBody(resp, self._parse_resp(body))
+
+    def list_endpoints(self):
+        """List Endpoints - Returns Endpoints."""
+        resp, body = self.get('/endpoints')
+        self.expected_success(200, resp.status)
+        return service_client.ResponseBodyList(resp, self._parse_resp(body))
+
+    def delete_endpoint(self, endpoint_id):
+        """Delete an endpoint."""
+        url = '/endpoints/%s' % endpoint_id
+        resp, body = self.delete(url)
+        self.expected_success(204, resp.status)
+        return service_client.ResponseBody(resp, body)
+
     def update_user_password(self, user_id, new_pass):
         """Update User Password."""
         put_body = {
diff --git a/tempest/stress/actions/ssh_floating.py b/tempest/stress/actions/ssh_floating.py
index 03a2d27..09e6d88 100644
--- a/tempest/stress/actions/ssh_floating.py
+++ b/tempest/stress/actions/ssh_floating.py
@@ -96,8 +96,10 @@
         self.sec_grp = sec_grp_cli.create_security_group(
             name=s_name, description=s_description)
         create_rule = sec_grp_cli.create_security_group_rule
-        create_rule(self.sec_grp['id'], 'tcp', 22, 22)
-        create_rule(self.sec_grp['id'], 'icmp', -1, -1)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
+                    from_port=22, to_port=22)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
+                    from_port=-1, to_port=-1)
 
     def _destroy_sec_grp(self):
         sec_grp_cli = self.manager.security_groups_client
diff --git a/tempest/stress/actions/volume_attach_verify.py b/tempest/stress/actions/volume_attach_verify.py
index 93a443e..0e0141f 100644
--- a/tempest/stress/actions/volume_attach_verify.py
+++ b/tempest/stress/actions/volume_attach_verify.py
@@ -58,8 +58,10 @@
         self.sec_grp = sec_grp_cli.create_security_group(
             name=s_name, description=s_description)
         create_rule = sec_grp_cli.create_security_group_rule
-        create_rule(self.sec_grp['id'], 'tcp', 22, 22)
-        create_rule(self.sec_grp['id'], 'icmp', -1, -1)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
+                    from_port=22, to_port=22)
+        create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
+                    from_port=-1, to_port=-1)
 
     def _destroy_sec_grp(self):
         sec_grp_cli = self.manager.security_groups_client
diff --git a/tempest/test_discover/plugins.py b/tempest/test_discover/plugins.py
index 45cd609..640b004 100644
--- a/tempest/test_discover/plugins.py
+++ b/tempest/test_discover/plugins.py
@@ -51,6 +51,16 @@
         """
         return
 
+    @abc.abstractmethod
+    def get_opt_lists(self):
+        """Method to get a list of options for sample config generation
+
+        :return option_list: A list of tuples with the group name and options
+                             in that group.
+        :rtype: list
+        """
+        return []
+
 
 @misc.singleton
 class TempestTestPluginManager(object):
@@ -79,3 +89,11 @@
     def register_plugin_opts(self, conf):
         for plug in self.ext_plugins:
             plug.obj.register_opts(conf)
+
+    def get_plugin_options_list(self):
+        plugin_options = []
+        for plug in self.ext_plugins:
+            opt_list = plug.obj.get_opt_lists()
+            if opt_list:
+                plugin_options.extend(opt_list)
+        return plugin_options
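
For plugin authors, get_opt_lists() pairs with register_opts(): the latter
registers options at runtime while the former only feeds the sample config
generator. A minimal sketch of a plugin exposing one option group (the group,
option and class names are hypothetical, and the other abstract methods are
omitted)::

  from oslo_config import cfg

  from tempest.test_discover import plugins

  my_group = cfg.OptGroup(name='my_service', title='My service options')
  MyServiceOpts = [
      cfg.BoolOpt('enabled', default=True,
                  help='Whether the my_service tests should run'),
  ]

  class MyPlugin(plugins.TempestPlugin):

      def register_opts(self, conf):
          # Called through TempestTestPluginManager.register_plugin_opts()
          conf.register_group(my_group)
          conf.register_opts(MyServiceOpts, group=my_group)

      def get_opt_lists(self):
          # Collected by get_plugin_options_list() and merged into
          # tempest.config's option list for sample config generation.
          return [(my_group.name, MyServiceOpts)]
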
diff --git a/tempest/tests/services/compute/test_aggregates_client.py b/tempest/tests/services/compute/test_aggregates_client.py
index 9fe4544..eacc251 100644
--- a/tempest/tests/services/compute/test_aggregates_client.py
+++ b/tempest/tests/services/compute/test_aggregates_client.py
@@ -14,6 +14,7 @@
 
 import httplib2
 
+from oslo_serialization import jsonutils as json
 from oslotest import mockpatch
 
 from tempest.services.compute.json import aggregates_client
@@ -45,3 +46,92 @@
 
     def test_list_aggregates_with_bytes_body(self):
         self._test_list_aggregates(bytes_body=True)
+
+    def _test_show_aggregate(self, bytes_body=False):
+        expected = {"name": "hoge",
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at":
+                    "2015-07-16T03:07:32.000000",
+                    "updated_at": None,
+                    "hosts": [],
+                    "deleted_at": None,
+                    "id": 1,
+                    "metadata": {}}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_aggregate(1)
+        self.assertEqual(expected, resp)
+
+    def test_show_aggregate_with_str_body(self):
+        self._test_show_aggregate()
+
+    def test_show_aggregate_with_bytes_body(self):
+        self._test_show_aggregate(bytes_body=True)
+
+    def _test_create_aggregate(self, bytes_body=False):
+        expected = {"name": u'\xf4',
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at": "2015-07-21T04:11:18.000000",
+                    "updated_at": None,
+                    "deleted_at": None,
+                    "id": 1}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.post',
+            return_value=mocked_resp))
+        resp = self.client.create_aggregate(name='hoge')
+        self.assertEqual(expected, resp)
+
+    def test_create_aggregate_with_str_body(self):
+        self._test_create_aggregate()
+
+    def test_create_aggregate_with_bytes_body(self):
+        self._test_create_aggregate(bytes_body=True)
+
+    def test_delete_aggregate(self):
+        expected = {}
+        mocked_resp = (httplib2.Response({'status': 200}), None)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.delete',
+            return_value=mocked_resp))
+        resp = self.client.delete_aggregate("1")
+        self.assertEqual(expected, resp)
+
+    def _test_update_aggregate(self, bytes_body=False):
+        expected = {"name": u'\xe9',
+                    "availability_zone": None,
+                    "deleted": False,
+                    "created_at": "2015-07-16T03:07:32.000000",
+                    "updated_at": "2015-07-23T05:16:29.000000",
+                    "hosts": [],
+                    "deleted_at": None,
+                    "id": 1,
+                    "metadata": {}}
+        serialized_body = json.dumps({"aggregate": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.put',
+            return_value=mocked_resp))
+        resp = self.client.update_aggregate(1)
+        self.assertEqual(expected, resp)
+
+    def test_update_aggregate_with_str_body(self):
+        self._test_update_aggregate()
+
+    def test_update_aggregate_with_bytes_body(self):
+        self._test_update_aggregate(bytes_body=True)
diff --git a/tempest/tests/services/compute/test_limits_client.py b/tempest/tests/services/compute/test_limits_client.py
new file mode 100644
index 0000000..4086210
--- /dev/null
+++ b/tempest/tests/services/compute/test_limits_client.py
@@ -0,0 +1,69 @@
+# Copyright 2015 NEC Corporation.  All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import httplib2
+
+from oslo_serialization import jsonutils as json
+from oslotest import mockpatch
+
+from tempest.services.compute.json import limits_client
+from tempest.tests import base
+from tempest.tests import fake_auth_provider
+
+
+class TestLimitsClient(base.TestCase):
+
+    def setUp(self):
+        super(TestLimitsClient, self).setUp()
+        fake_auth = fake_auth_provider.FakeAuthProvider()
+        self.client = limits_client.LimitsClient(
+            fake_auth, 'compute', 'regionOne')
+
+    def _test_show_limits(self, bytes_body=False):
+        expected = {"rate": [],
+                    "absolute": {"maxServerMeta": 128,
+                                 "maxPersonality": 5,
+                                 "totalServerGroupsUsed": 0,
+                                 "maxImageMeta": 128,
+                                 "maxPersonalitySize": 10240,
+                                 "maxServerGroups": 10,
+                                 "maxSecurityGroupRules": 20,
+                                 "maxTotalKeypairs": 100,
+                                 "totalCoresUsed": 0,
+                                 "totalRAMUsed": 0,
+                                 "totalInstancesUsed": 0,
+                                 "maxSecurityGroups": 10,
+                                 "totalFloatingIpsUsed": 0,
+                                 "maxTotalCores": 20,
+                                 "totalSecurityGroupsUsed": 0,
+                                 "maxTotalFloatingIps": 10,
+                                 "maxTotalInstances": 10,
+                                 "maxTotalRAMSize": 51200,
+                                 "maxServerGroupMembers": 10}}
+        serialized_body = json.dumps({"limits": expected})
+        if bytes_body:
+            serialized_body = serialized_body.encode('utf-8')
+
+        mocked_resp = (httplib2.Response({'status': 200}), serialized_body)
+        self.useFixture(mockpatch.Patch(
+            'tempest.common.service_client.ServiceClient.get',
+            return_value=mocked_resp))
+        resp = self.client.show_limits()
+        self.assertEqual(expected, resp)
+
+    def test_show_limits_with_str_body(self):
+        self._test_show_limits()
+
+    def test_show_limits_with_bytes_body(self):
+        self._test_show_limits(bytes_body=True)
diff --git a/tools/config/check_uptodate.sh b/tools/config/check_uptodate.sh
deleted file mode 100755
index 7b08695..0000000
--- a/tools/config/check_uptodate.sh
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/usr/bin/env bash
-
-PROJECT_NAME=${PROJECT_NAME:-tempest}
-CFGFILE_NAME=${PROJECT_NAME}.conf.sample
-
-if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then
-    CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME}
-elif [ -e etc/${CFGFILE_NAME} ]; then
-    CFGFILE=etc/${CFGFILE_NAME}
-else
-    echo "${0##*/}: can not find config file"
-    exit 1
-fi
-
-TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX`
-trap "rm -rf $TEMPDIR" EXIT
-
-oslo-config-generator --config-file tools/config/config-generator.tempest.conf --output-file ${TEMPDIR}/${CFGFILE_NAME}
-if [ $? != 0 ]
-then
-    exit 1
-fi
-
-if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE}
-then
-   echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date."
-   echo "${0##*/}: Please run tox -egenconfig."
-   exit 1
-fi
diff --git a/tox.ini b/tox.ini
index 389fee2..eae6fc7 100644
--- a/tox.ini
+++ b/tox.ini
@@ -108,12 +108,14 @@
 commands = {posargs}
 
 [testenv:docs]
-commands = python setup.py build_sphinx {posargs}
+# The generated sample config file is included in the sphinx docs, so generate it first.
+commands =
+   oslo-config-generator --config-file tools/config/config-generator.tempest.conf --output-file doc/source/_static/tempest.conf
+   python setup.py build_sphinx {posargs}
 
 [testenv:pep8]
 commands =
    flake8 {posargs}
-   {toxinidir}/tools/config/check_uptodate.sh
    python tools/check_uuid.py
 
 [testenv:uuidgen]