Merge "test_neutron_resources.py exception handler"
diff --git a/HACKING.rst b/HACKING.rst
index a209b3f..910a977 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -313,7 +313,7 @@
 qualified test name and track test functionality through refactoring. The
 format of the metadata looks like::
 
-    @test.idempotent_id('585e934c-448e-43c4-acbf-d06a9b899997')
+    @decorators.idempotent_id('585e934c-448e-43c4-acbf-d06a9b899997')
     def test_list_servers_with_detail(self):
         # The created server should be in the detailed list of all servers
         ...
diff --git a/README.rst b/README.rst
index 281516b..3d7c804 100644
--- a/README.rst
+++ b/README.rst
@@ -209,15 +209,9 @@
 Python 3.x
 ----------
 
-Starting during the Liberty release development cycle work began on enabling
-Tempest to run under both Python 2.7 and Python 3.4. Tempest strives to fully
-support running with Python 3.4 and newer. A gating unit test job was added to
-also run Tempest's unit tests under Python 3. This means that the Tempest
-code at least imports under Python 3.4 and things that have unit test coverage
-will work on Python 3.4. However, because large parts of Tempest are
-self-verifying there might be uncaught issues running on Python 3. So until
-there is a gating job which does a full Tempest run using Python 3 there
-isn't any guarantee that running Tempest under Python 3 is bug free.
+Starting during the Pike cycle, Tempest has a gating CI job that runs Tempest
+with Python 3. Any Tempest release after 15.0.0 should fully support running
+under Python 3 as well as Python 2.7.
 
 Legacy run method
 -----------------
@@ -263,9 +257,7 @@
 
     $ testr run tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
 
-Alternatively, you can use the run_tempest.sh script which will create a venv
-and run the tests or use tox to do the same. Tox also contains several existing
-job configurations. For example::
+Tox also contains several existing job configurations. For example::
 
     $ tox -efull
 
diff --git a/doc/source/test-removal.rst b/doc/source/test-removal.rst
index 79a5846..d06e4ba 100644
--- a/doc/source/test-removal.rst
+++ b/doc/source/test-removal.rst
@@ -38,8 +38,10 @@
  #. The test proposed for removal has a failure rate <  0.50% in the gate over
     the past release (the value and interval will likely be adjusted in the
     future)
- #. There must not be an external user/consumer of tempest that depends on the
-    test proposed for removal
+
+    .. _`prong #3`:
+ #. There must not be an external user/consumer of tempest
+    that depends on the test proposed for removal
 
 The answers to 1 and 2 are easy to verify. For 1 just provide a link to the new
 test location. If you are linking to the tempest removal patch please also put
@@ -62,7 +64,7 @@
 
 SELECT * from tests where test_id like "%test_id%";
 (where $test_id is the full test_id, but truncated to the class because of
-setupClass or tearDownClass failures)
+setUpClass or tearDownClass failures)
 
 You can access the infra mysql subunit2sql db w/ read-only permissions with:
 
@@ -80,7 +82,7 @@
  #. run the query: MySQL [subunit2sql]> select * from tests where test_id like
     "tempest.api.compute.admin.test_flavors_negative.FlavorsAdminNegativeTestJSON%";
     which will return a table of all the tests in the class (but it will also
-    catch failures in setupClass and tearDownClass)
+    catch failures in setUpClass and tearDownClass)
  #. paste the output table with numbers and the mysql command you ran to
     generate it into the etherpad.
 
@@ -133,6 +135,10 @@
  #. A revert for a patch which added a broken test, or testing which didn't
     actually run in the gate (basically any revert for something which
     shouldn't have been added)
+ #. Tests that would become out of scope as a consequence of an API change,
+    as described in `API Compatibility`_.
+    Such tests cannot live in Tempest because of the branchless nature of
+    Tempest. Such tests must still honor `prong #3`_.
 
 For the first exception type the only types of testing in tree which have been
 declared out of scope at this point are:
@@ -149,7 +155,7 @@
 Tempest Scope
 ^^^^^^^^^^^^^
 
-Also starting in the liberty cycle tempest has defined a set of projects which
+Starting in the liberty cycle, tempest has defined a set of projects which
 are defined as in scope for direct testing in tempest. As of today that list
 is:
 
@@ -166,3 +172,17 @@
 to maintain continuity after migrating the tests out of tempest.
 
 .. _tempest plugin mechanism: http://docs.openstack.org/developer/tempest/plugin.html
+
+API Compatibility
+"""""""""""""""""
+
+If an API introduces a non-discoverable, backward incompatible change, and
+that change is not backported to all versions supported by Tempest, tests for
+that API can no longer live in Tempest.
+This is because tests would not be able to know or control which API response
+to expect, and thus would not be able to enforce a specific behavior.
+
+If a test in Tempest would meet these criteria as a consequence of a change,
+the test must be removed according to the procedure discussed in this
+document. The API change should not be merged until all conditions required
+for test removal can be met.
diff --git a/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml b/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
index aa3a78e..389b29f 100644
--- a/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
+++ b/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
@@ -1,4 +1,13 @@
 ---
+prelude: >
+  This release marks the end of Liberty release support in Tempest.
 upgrade:
   - The Stress tests framework and all the stress tests have been removed.
+other:
+  - |
+    OpenStack releases supported at this time are **Mitaka** and **Newton**.
 
+    The release under current development as of this tag is Ocata, meaning that
+    every Tempest commit is also tested against master during the Ocata cycle.
+    However, this does not necessarily mean that using Tempest as of this tag
+    will work against an Ocata (or future release) cloud.
diff --git a/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml b/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
index 8665b8b..104bf27 100644
--- a/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
+++ b/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
@@ -1,4 +1,6 @@
 ---
+prelude: >
+    This release marks the start of Ocata release support in Tempest.
 upgrade:
   - |
     Below deprecated config options from compute group have been removed.
@@ -11,4 +13,13 @@
     - ``compute.ping_size `` (available as ``validation.ping_size``)
     - ``compute.ping_count `` (available as ``validation.ping_count``)
     - ``compute.floating_ip_range `` (available as ``validation.floating_ip_range``)
+other:
+  - |
+    OpenStack releases supported at this time are **Mitaka**, **Newton**,
+    and **Ocata**.
 
+    The release under current development as of this tag is Pike,
+    meaning that every Tempest commit is also tested against master during
+    the Pike cycle. However, this does not necessarily mean that using
+    Tempest as of this tag will work against a Pike (or future release)
+    cloud.
diff --git a/releasenotes/notes/15.0.0-start-of-pike-support-4925678d477b0745.yaml b/releasenotes/notes/15.0.0-start-of-pike-support-4925678d477b0745.yaml
deleted file mode 100644
index 5555949..0000000
--- a/releasenotes/notes/15.0.0-start-of-pike-support-4925678d477b0745.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
----
-prelude: >
-    This release is marking the start of Ocata release support in Tempest
-other:
-  - |
-    OpenStack releases supported at this time are **Mitaka**, **Newton**,
-    and **Ocata**.
-
-    The release under current development as of this tag is Pike,
-    meaning that every Tempest commit is also tested against master during
-    the Pike cycle. However, this does not necessarily mean that using
-    Tempest as of this tag will work against a Pike (or future releases)
-    cloud.
diff --git a/releasenotes/notes/add-tempest-run-combine-option-e94c1049ba8985d5.yaml b/releasenotes/notes/add-tempest-run-combine-option-e94c1049ba8985d5.yaml
new file mode 100644
index 0000000..73900ca
--- /dev/null
+++ b/releasenotes/notes/add-tempest-run-combine-option-e94c1049ba8985d5.yaml
@@ -0,0 +1,6 @@
+---
+features:
+  - |
+    Adds a new CLI option to tempest run, ``--combine``, which is used to
+    indicate that the subunit stream output should be combined with the
+    previous run's stream in the testr repository.
diff --git a/releasenotes/notes/create-server-tags-client-8c0042a77e859af6.yaml b/releasenotes/notes/create-server-tags-client-8c0042a77e859af6.yaml
new file mode 100644
index 0000000..9927971
--- /dev/null
+++ b/releasenotes/notes/create-server-tags-client-8c0042a77e859af6.yaml
@@ -0,0 +1,8 @@
+---
+features:
+  - |
+    Add server tags APIs to the servers_client library.
+    This feature enables updating, deleting and checking
+    the existence of a tag on a server, as well as updating
+    and deleting all tags on a server.
+
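For illustration, here is a minimal sketch of how the new tag calls can be
exercised from a compute API test. The method names mirror the
``test_server_tags.py`` module added later in this change; the surrounding
test class, the server, and the literal tag values are assumptions::

    # Fragment from within a BaseV2ComputeTest-style test method; a real
    # test also needs an @decorators.idempotent_id and compute
    # microversion >= 2.26.
    server = self.create_test_server(wait_until='ACTIVE')

    # Add a single tag and confirm it exists (check_tag_existence raises
    # NotFound otherwise).
    self.servers_client.update_tag(server['id'], 'backup')
    self.servers_client.check_tag_existence(server['id'], 'backup')

    # Replace the whole tag list, read it back, then remove tags one by
    # one or all at once.
    self.servers_client.update_all_tags(server['id'], ['db', 'web'])
    tags = self.servers_client.list_tags(server['id'])['tags']
    self.servers_client.delete_tag(server['id'], 'db')
    self.servers_client.delete_all_tags(server['id'])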
diff --git a/releasenotes/notes/deprecate-resources-prefix-option-ad490c0a30a0266b.yaml b/releasenotes/notes/deprecate-resources-prefix-option-ad490c0a30a0266b.yaml
new file mode 100644
index 0000000..f679208
--- /dev/null
+++ b/releasenotes/notes/deprecate-resources-prefix-option-ad490c0a30a0266b.yaml
@@ -0,0 +1,10 @@
+---
+upgrade:
+  - The default value of rand_name()'s prefix argument is changed
+    from None to 'tempest' to identify resources created by
+    Tempest.
+deprecations:
+  - The resources_prefix option is deprecated because setting
+    'tempest' as the prefix on rand_name() is enough to
+    identify resources created by Tempest, and no projects
+    in the OpenStack dev community set this option.
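A hedged sketch of the effect, assuming the ``rand_name()`` helper imported as
in the test modules touched by this change (the resource name and the exact
random suffix are illustrative)::

    from tempest.common.utils import data_utils

    # With the prefix now defaulting to 'tempest', a generated name looks
    # roughly like 'tempest-myserver-1234567890' rather than
    # 'myserver-1234567890', making Tempest-created resources easy to spot.
    name = data_utils.rand_name('myserver')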
diff --git a/releasenotes/notes/deprecate-skip_unless_attr-decorator-450a1ed727494724.yaml b/releasenotes/notes/deprecate-skip_unless_attr-decorator-450a1ed727494724.yaml
new file mode 100644
index 0000000..4d8b941
--- /dev/null
+++ b/releasenotes/notes/deprecate-skip_unless_attr-decorator-450a1ed727494724.yaml
@@ -0,0 +1,5 @@
+---
+deprecations:
+  - The ``skip_unless_attr`` decorator in lib/decorators.py has been deprecated;
+    please use the standard ``testtools.skipUnless`` and ``testtools.skipIf``
+    decorators.
diff --git a/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml b/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml
new file mode 100644
index 0000000..6285ea6
--- /dev/null
+++ b/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml
@@ -0,0 +1,5 @@
+---
+deprecations:
+  - The ``skip_unless_config`` and ``skip_if_config`` decorators in the
+    ``config`` module have been deprecated and will be removed in the Queens
+    dev cycle. Use ``testtools.skipUnless`` (or a variation of it) instead.
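A minimal sketch of the suggested replacement for the deprecated skip
decorators, following the same pattern this change applies in
``test_list_server_filters.py`` (the class name, test name, and the particular
config condition are only examples)::

    import testtools

    from tempest.api.compute import base
    from tempest import config

    CONF = config.CONF


    class ExampleImagesTest(base.BaseV2ComputeTest):

        # A real Tempest test also carries an @decorators.idempotent_id.
        @testtools.skipUnless(
            CONF.compute.image_ref != CONF.compute.image_ref_alt,
            'Need distinct images to run this test')
        def test_something_needing_two_images(self):
            ...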
diff --git a/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml b/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml
new file mode 100644
index 0000000..5670821
--- /dev/null
+++ b/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+  - The *call_until_true* function of the *test* module is removed because it
+    was deprecated and Tempest provides it from *test_utils* as a stable
+    interface instead. Please switch to *test_utils.call_until_true* if
+    necessary.
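For reference, a hedged sketch of the stable interface; the marker-file
predicate is just a stand-in for any "is the resource ready yet?" check, and
the helper is assumed to keep the ``(func, duration, sleep_for)`` signature it
has in ``tempest.lib``::

    import os

    from tempest.lib.common.utils import test_utils

    # Example predicate: wait for a marker file to appear.
    marker = '/tmp/resource-ready'

    # Poll for up to 60 seconds, sleeping 5 seconds between attempts;
    # call_until_true returns True as soon as the predicate does, or False
    # once the duration runs out.
    reached = test_utils.call_until_true(lambda: os.path.exists(marker), 60, 5)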
diff --git a/releasenotes/notes/use-keystone-v3-api-935860d30ddbb8e9.yaml b/releasenotes/notes/use-keystone-v3-api-935860d30ddbb8e9.yaml
new file mode 100644
index 0000000..dd6e924
--- /dev/null
+++ b/releasenotes/notes/use-keystone-v3-api-935860d30ddbb8e9.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+  - Tempest now defaults to using the Keystone v3 API for
+    authentication, because the Keystone v3 API is CURRENT and
+    the v2 API is deprecated.
diff --git a/requirements.txt b/requirements.txt
index d9a9ebb..124da7a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,7 +12,7 @@
 oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
 oslo.log>=3.11.0 # Apache-2.0
 oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.18.0 # Apache-2.0
+oslo.utils>=3.20.0 # Apache-2.0
 six>=1.9.0 # MIT
 fixtures>=3.0.0 # Apache-2.0/BSD
 PyYAML>=3.10.0 # MIT
diff --git a/run_tempest.sh b/run_tempest.sh
deleted file mode 100755
index 414146b..0000000
--- a/run_tempest.sh
+++ /dev/null
@@ -1,135 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-function usage {
-  echo "Usage: $0 [OPTION]..."
-  echo "Run Tempest test suite"
-  echo ""
-  echo "  -V, --virtual-env        Always use virtualenv.  Install automatically if not present"
-  echo "  -N, --no-virtual-env     Don't use virtualenv.  Run tests in local environment"
-  echo "  -n, --no-site-packages   Isolate the virtualenv from the global Python environment"
-  echo "  -f, --force              Force a clean re-build of the virtual environment. Useful when dependencies have been added."
-  echo "  -u, --update             Update the virtual environment with any newer package versions"
-  echo "  -s, --smoke              Only run smoke tests"
-  echo "  -t, --serial             Run testr serially"
-  echo "  -C, --config             Config file location"
-  echo "  -h, --help               Print this usage message"
-  echo "  -d, --debug              Run tests with testtools instead of testr. This allows you to use PDB"
-  echo "  -- [TESTROPTIONS]        After the first '--' you can pass arbitrary arguments to testr "
-}
-
-testrargs=""
-venv=${VENV:-.venv}
-with_venv=tools/with_venv.sh
-serial=0
-always_venv=0
-never_venv=0
-no_site_packages=0
-debug=0
-force=0
-wrapper=""
-config_file=""
-update=0
-
-if ! options=$(getopt -o VNnfusthdC:lL: -l virtual-env,no-virtual-env,no-site-packages,force,update,smoke,serial,help,debug,config: -- "$@")
-then
-    # parse error
-    usage
-    exit 1
-fi
-
-eval set -- $options
-first_uu=yes
-while [ $# -gt 0 ]; do
-  case "$1" in
-    -h|--help) usage; exit;;
-    -V|--virtual-env) always_venv=1; never_venv=0;;
-    -N|--no-virtual-env) always_venv=0; never_venv=1;;
-    -n|--no-site-packages) no_site_packages=1;;
-    -f|--force) force=1;;
-    -u|--update) update=1;;
-    -d|--debug) debug=1;;
-    -C|--config) config_file=$2; shift;;
-    -s|--smoke) testrargs+="smoke";;
-    -t|--serial) serial=1;;
-    --) [ "yes" == "$first_uu" ] || testrargs="$testrargs $1"; first_uu=no  ;;
-    *) testrargs="$testrargs $1";;
-  esac
-  shift
-done
-
-if [ -n "$config_file" ]; then
-    config_file=`readlink -f "$config_file"`
-    export TEMPEST_CONFIG_DIR=`dirname "$config_file"`
-    export TEMPEST_CONFIG=`basename "$config_file"`
-fi
-
-cd `dirname "$0"`
-
-if [ $no_site_packages -eq 1 ]; then
-  installvenvopts="--no-site-packages"
-fi
-
-function testr_init {
-  if [ ! -d .testrepository ]; then
-      ${wrapper} testr init
-  fi
-}
-
-function run_tests {
-  testr_init
-  ${wrapper} find . -type f -name "*.pyc" -delete
-  export OS_TEST_PATH=./tempest/test_discover
-  if [ $debug -eq 1 ]; then
-      if [ "$testrargs" = "" ]; then
-           testrargs="discover ./tempest/test_discover"
-      fi
-      ${wrapper} python -m testtools.run $testrargs
-      return $?
-  fi
-
-  if [ $serial -eq 1 ]; then
-      ${wrapper} testr run --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  else
-      ${wrapper} testr run --parallel --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  fi
-}
-
-if [ $never_venv -eq 0 ]
-then
-  # Remove the virtual environment if --force used
-  if [ $force -eq 1 ]; then
-    echo "Cleaning virtualenv..."
-    rm -rf ${venv}
-  fi
-  if [ $update -eq 1 ]; then
-      echo "Updating virtualenv..."
-      virtualenv $installvenvopts $venv
-      $venv/bin/pip install -U -r requirements.txt
-  fi
-  if [ -e ${venv} ]; then
-    wrapper="${with_venv}"
-  else
-    if [ $always_venv -eq 1 ]; then
-      # Automatically install the virtualenv
-      virtualenv $installvenvopts $venv
-      wrapper="${with_venv}"
-      ${wrapper} pip install -U -r requirements.txt
-    else
-      echo -e "No virtual environment found...create one? (Y/n) \c"
-      read use_ve
-      if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
-        # Install the virtualenv and run the test suite in it
-        virtualenv $installvenvopts $venv
-        wrapper=${with_venv}
-        ${wrapper} pip install -U -r requirements.txt
-      fi
-    fi
-  fi
-fi
-
-run_tests
-retval=$?
-
-exit $retval
diff --git a/run_tests.sh b/run_tests.sh
deleted file mode 100755
index a856bb4..0000000
--- a/run_tests.sh
+++ /dev/null
@@ -1,193 +0,0 @@
-#!/usr/bin/env bash
-
-function usage {
-  echo "Usage: $0 [OPTION]..."
-  echo "Run Tempest unit tests"
-  echo ""
-  echo "  -V, --virtual-env        Always use virtualenv.  Install automatically if not present"
-  echo "  -N, --no-virtual-env     Don't use virtualenv.  Run tests in local environment"
-  echo "  -n, --no-site-packages   Isolate the virtualenv from the global Python environment"
-  echo "  -f, --force              Force a clean re-build of the virtual environment. Useful when dependencies have been added."
-  echo "  -u, --update             Update the virtual environment with any newer package versions"
-  echo "  -t, --serial             Run testr serially"
-  echo "  -p, --pep8               Just run pep8"
-  echo "  -c, --coverage           Generate coverage report"
-  echo "  -h, --help               Print this usage message"
-  echo "  -d, --debug              Run tests with testtools instead of testr. This allows you to use PDB"
-  echo "  -- [TESTROPTIONS]        After the first '--' you can pass arbitrary arguments to testr "
-}
-
-function deprecation_warning {
-  cat <<EOF
--------------------------------------------------------------------------
-WARNING: run_tests.sh is deprecated and this script will be removed after
-the Newton release. All tests should be run through testr/ostestr or tox.
-
-To run style checks:
-
- tox -e pep8
-
-To run python 2.7 unit tests
-
- tox -e py27
-
-To run unit tests and generate coverage report
-
- tox -e cover
-
-To run a subset of any of these tests:
-
- tox -e py27 someregex
-
- i.e.: tox -e py27 test_servers
-
-Additional tox targets are available in tox.ini. For more information
-see:
-http://docs.openstack.org/project-team-guide/project-setup/python.html
-
-NOTE: if you want to use testr to run tests, you can instead use:
-
- OS_TEST_PATH=./tempest/tests testr run
-
-Documentation on using testr directly can be found at
-http://testrepository.readthedocs.org/en/latest/MANUAL.html
--------------------------------------------------------------------------
-EOF
-}
-
-testrargs=""
-just_pep8=0
-venv=${VENV:-.venv}
-with_venv=tools/with_venv.sh
-serial=0
-always_venv=0
-never_venv=0
-no_site_packages=0
-debug=0
-force=0
-coverage=0
-wrapper=""
-config_file=""
-update=0
-
-deprecation_warning
-
-if ! options=$(getopt -o VNnfuctphd -l virtual-env,no-virtual-env,no-site-packages,force,update,serial,coverage,pep8,help,debug -- "$@")
-then
-    # parse error
-    usage
-    exit 1
-fi
-
-eval set -- $options
-first_uu=yes
-while [ $# -gt 0 ]; do
-  case "$1" in
-    -h|--help) usage; exit;;
-    -V|--virtual-env) always_venv=1; never_venv=0;;
-    -N|--no-virtual-env) always_venv=0; never_venv=1;;
-    -n|--no-site-packages) no_site_packages=1;;
-    -f|--force) force=1;;
-    -u|--update) update=1;;
-    -d|--debug) debug=1;;
-    -p|--pep8) let just_pep8=1;;
-    -c|--coverage) coverage=1;;
-    -t|--serial) serial=1;;
-    --) [ "yes" == "$first_uu" ] || testrargs="$testrargs $1"; first_uu=no  ;;
-    *) testrargs="$testrargs $1";;
-  esac
-  shift
-done
-
-
-cd `dirname "$0"`
-
-if [ $no_site_packages -eq 1 ]; then
-  installvenvopts="--no-site-packages"
-fi
-
-function testr_init {
-  if [ ! -d .testrepository ]; then
-      ${wrapper} testr init
-  fi
-}
-
-function run_tests {
-  testr_init
-  ${wrapper} find . -type f -name "*.pyc" -delete
-  export OS_TEST_PATH=./tempest/tests
-  if [ $debug -eq 1 ]; then
-      if [ "$testrargs" = "" ]; then
-          testrargs="discover ./tempest/tests"
-      fi
-      ${wrapper} python -m testtools.run $testrargs
-      return $?
-  fi
-
-  if [ $coverage -eq 1 ]; then
-      ${wrapper} python setup.py test --coverage
-      return $?
-  fi
-
-  if [ $serial -eq 1 ]; then
-      ${wrapper} testr run --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  else
-      ${wrapper} testr run --parallel --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  fi
-}
-
-function run_pep8 {
-  echo "Running flake8 ..."
-  if [ $never_venv -eq 1 ]; then
-      echo "**WARNING**:" >&2
-      echo "Running flake8 without virtual env may miss OpenStack HACKING detection" >&2
-  fi
-  ${wrapper} flake8
-}
-
-if [ $never_venv -eq 0 ]
-then
-  # Remove the virtual environment if --force used
-  if [ $force -eq 1 ]; then
-    echo "Cleaning virtualenv..."
-    rm -rf ${venv}
-  fi
-  if [ $update -eq 1 ]; then
-      echo "Updating virtualenv..."
-      virtualenv $installvenvopts $venv
-      $venv/bin/pip install -U -r requirements.txt -r test-requirements.txt
-  fi
-  if [ -e ${venv} ]; then
-    wrapper="${with_venv}"
-  else
-    if [ $always_venv -eq 1 ]; then
-      # Automatically install the virtualenv
-      virtualenv $installvenvopts $venv
-      wrapper="${with_venv}"
-      ${wrapper} pip install -U -r requirements.txt -r test-requirements.txt
-    else
-      echo -e "No virtual environment found...create one? (Y/n) \c"
-      read use_ve
-      if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
-        # Install the virtualenv and run the test suite in it
-        virtualenv $installvenvopts $venv
-        wrapper=${with_venv}
-        ${wrapper} pip install -U -r requirements.txt -r test-requirements.txt
-      fi
-    fi
-  fi
-fi
-
-if [ $just_pep8 -eq 1 ]; then
-    run_pep8
-    exit
-fi
-
-run_tests
-retval=$?
-
-if [ -z "$testrargs" ]; then
-    run_pep8
-fi
-
-exit $retval
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index 04b0c2d..5a38acc 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -14,7 +14,6 @@
 #    under the License.
 
 from tempest.api.compute import base
-from tempest.common.utils import data_utils
 from tempest.lib import decorators
 from tempest import test
 
@@ -38,7 +37,6 @@
 
         # Non admin tenant ID
         cls.tenant_id = cls.flavors_client.tenant_id
-        cls.flavor_name_prefix = 'test_flavor_access_'
         cls.ram = 512
         cls.vcpus = 1
         cls.disk = 10
@@ -47,51 +45,37 @@
     def test_flavor_access_list_with_private_flavor(self):
         # Test to make sure that list flavor access on a newly created
         # private flavor will return an empty access list
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
+
         flavor_access = (self.admin_flavors_client.list_flavor_access(
-            new_flavor_id)['flavor_access'])
+                         flavor['id'])['flavor_access'])
         self.assertEqual(len(flavor_access), 0, str(flavor_access))
 
     @decorators.idempotent_id('59e622f6-bdf6-45e3-8ba8-fedad905a6b4')
     def test_flavor_access_add_remove(self):
         # Test to add and remove flavor access to a given tenant.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
+
         # Add flavor access to a tenant.
         resp_body = {
             "tenant_id": str(self.tenant_id),
-            "flavor_id": str(new_flavor['id']),
+            "flavor_id": str(flavor['id']),
         }
         add_body = (self.admin_flavors_client.add_flavor_access(
-            new_flavor['id'], self.tenant_id)['flavor_access'])
+            flavor['id'], self.tenant_id)['flavor_access'])
         self.assertIn(resp_body, add_body)
 
         # The flavor is present in list.
         flavors = self.flavors_client.list_flavors(detail=True)['flavors']
-        self.assertIn(new_flavor['id'], map(lambda x: x['id'], flavors))
+        self.assertIn(flavor['id'], map(lambda x: x['id'], flavors))
 
         # Remove flavor access from a tenant.
         remove_body = (self.admin_flavors_client.remove_flavor_access(
-            new_flavor['id'], self.tenant_id)['flavor_access'])
+            flavor['id'], self.tenant_id)['flavor_access'])
         self.assertNotIn(resp_body, remove_body)
 
         # The flavor is not present in list.
         flavors = self.flavors_client.list_flavors(detail=True)['flavors']
-        self.assertNotIn(new_flavor['id'], map(lambda x: x['id'], flavors))
+        self.assertNotIn(flavor['id'], map(lambda x: x['id'], flavors))
diff --git a/tempest/api/compute/admin/test_flavors_access_negative.py b/tempest/api/compute/admin/test_flavors_access_negative.py
index 9fe1f74..12e4587 100644
--- a/tempest/api/compute/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/admin/test_flavors_access_negative.py
@@ -14,7 +14,6 @@
 #    under the License.
 
 from tempest.api.compute import base
-from tempest.common.utils import data_utils
 from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 from tempest import test
@@ -40,7 +39,6 @@
         super(FlavorsAccessNegativeTestJSON, cls).resource_setup()
 
         cls.tenant_id = cls.flavors_client.tenant_id
-        cls.flavor_name_prefix = 'test_flavor_access_'
         cls.ram = 512
         cls.vcpus = 1
         cls.disk = 10
@@ -49,108 +47,69 @@
     @decorators.idempotent_id('0621c53e-d45d-40e7-951d-43e5e257b272')
     def test_flavor_access_list_with_public_flavor(self):
         # Test to list flavor access with exceptions by querying public flavor
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='True')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='True')
         self.assertRaises(lib_exc.NotFound,
                           self.admin_flavors_client.list_flavor_access,
-                          new_flavor_id)
+                          flavor['id'])
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('41eaaade-6d37-4f28-9c74-f21b46ca67bd')
     def test_flavor_non_admin_add(self):
         # Test to add flavor access as a user without admin privileges.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.add_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.tenant_id)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('073e79a6-c311-4525-82dc-6083d919cb3a')
     def test_flavor_non_admin_remove(self):
         # Test to remove flavor access as a user without admin privileges.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
+
         # Add flavor access to a tenant.
-        self.admin_flavors_client.add_flavor_access(new_flavor['id'],
+        self.admin_flavors_client.add_flavor_access(flavor['id'],
                                                     self.tenant_id)
         self.addCleanup(self.admin_flavors_client.remove_flavor_access,
-                        new_flavor['id'], self.tenant_id)
+                        flavor['id'], self.tenant_id)
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.remove_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.tenant_id)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('f3592cc0-0306-483c-b210-9a7b5346eddc')
     def test_add_flavor_access_duplicate(self):
         # Create a new flavor.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
 
         # Add flavor access to a tenant.
-        self.admin_flavors_client.add_flavor_access(new_flavor['id'],
+        self.admin_flavors_client.add_flavor_access(flavor['id'],
                                                     self.tenant_id)
         self.addCleanup(self.admin_flavors_client.remove_flavor_access,
-                        new_flavor['id'], self.tenant_id)
+                        flavor['id'], self.tenant_id)
 
         # An exception should be raised when adding flavor access to the same
         # tenant
         self.assertRaises(lib_exc.Conflict,
                           self.admin_flavors_client.add_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.tenant_id)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('1f710927-3bc7-4381-9f82-0ca6e42644b7')
     def test_remove_flavor_access_not_found(self):
         # Create a new flavor.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
 
         # An exception should be raised when flavor access is not found
         self.assertRaises(lib_exc.NotFound,
                           self.admin_flavors_client.remove_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.os_alt.servers_client.tenant_id)
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index aa75348..18655cb 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -30,7 +30,6 @@
     def setup_clients(cls):
         super(MigrationsAdminTest, cls).setup_clients()
         cls.client = cls.os_adm.migrations_client
-        cls.flavors_admin_client = cls.os_adm.flavors_client
 
     @decorators.idempotent_id('75c0b83d-72a0-4cf8-a153-631e83e7d53f')
     def test_list_migrations(self):
@@ -54,8 +53,8 @@
 
     def _flavor_clean_up(self, flavor_id):
         try:
-            self.flavors_admin_client.delete_flavor(flavor_id)
-            self.flavors_admin_client.wait_for_resource_deletion(flavor_id)
+            self.admin_flavors_client.delete_flavor(flavor_id)
+            self.admin_flavors_client.wait_for_resource_deletion(flavor_id)
         except exceptions.NotFound:
             pass
 
@@ -68,9 +67,9 @@
 
         # First we have to create a flavor that we can delete so make a copy
         # of the normal flavor from which we'd create a server.
-        flavor = self.flavors_admin_client.show_flavor(
+        flavor = self.admin_flavors_client.show_flavor(
             self.flavor_ref)['flavor']
-        flavor = self.flavors_admin_client.create_flavor(
+        flavor = self.admin_flavors_client.create_flavor(
             name=data_utils.rand_name('test_resize_flavor_'),
             ram=flavor['ram'],
             disk=flavor['disk'],
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index 5220c97..adb49a5 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -34,7 +34,6 @@
         super(ServersAdminNegativeTestJSON, cls).setup_clients()
         cls.client = cls.os_adm.servers_client
         cls.non_adm_client = cls.servers_client
-        cls.flavors_client = cls.os_adm.flavors_client
         cls.quotas_client = cls.os_adm.quotas_client
 
     @classmethod
@@ -45,16 +44,6 @@
         server = cls.create_test_server(wait_until='ACTIVE')
         cls.s1_id = server['id']
 
-    def _get_unused_flavor_id(self):
-        flavor_id = data_utils.rand_int_id(start=1000)
-        while True:
-            try:
-                self.flavors_client.show_flavor(flavor_id)
-            except lib_exc.NotFound:
-                break
-            flavor_id = data_utils.rand_int_id(start=1000)
-        return flavor_id
-
     @decorators.idempotent_id('28dcec23-f807-49da-822c-56a92ea3c687')
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize not available.')
@@ -62,8 +51,6 @@
     def test_resize_server_using_overlimit_ram(self):
         # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
         self.useFixture(fixtures.LockFixture('compute_quotas'))
-        flavor_name = data_utils.rand_name("flavor")
-        flavor_id = self._get_unused_flavor_id()
         quota_set = self.quotas_client.show_quota_set(
             self.tenant_id)['quota_set']
         ram = quota_set['ram']
@@ -73,11 +60,7 @@
         ram += 1
         vcpus = 1
         disk = 5
-        flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
-                                                       ram=ram, vcpus=vcpus,
-                                                       disk=disk,
-                                                       id=flavor_id)['flavor']
-        self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
+        flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.client.resize_server,
                           self.servers[0]['id'],
@@ -90,8 +73,6 @@
     def test_resize_server_using_overlimit_vcpus(self):
         # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
         self.useFixture(fixtures.LockFixture('compute_quotas'))
-        flavor_name = data_utils.rand_name("flavor")
-        flavor_id = self._get_unused_flavor_id()
         quota_set = self.quotas_client.show_quota_set(
             self.tenant_id)['quota_set']
         vcpus = quota_set['cores']
@@ -101,11 +82,7 @@
         vcpus += 1
         ram = 512
         disk = 5
-        flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
-                                                       ram=ram, vcpus=vcpus,
-                                                       disk=disk,
-                                                       id=flavor_id)['flavor']
-        self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
+        flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.client.resize_server,
                           self.servers[0]['id'],
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 706b859..55cc293 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -445,8 +445,8 @@
         attach_kwargs = dict(volumeId=volume['id'])
         if device:
             attach_kwargs['device'] = device
-        self.servers_client.attach_volume(
-            server['id'], **attach_kwargs)
+        attachment = self.servers_client.attach_volume(
+            server['id'], **attach_kwargs)['volumeAttachment']
         # On teardown detach the volume and wait for it to be available. This
         # is so we don't error out when trying to delete the volume during
         # teardown.
@@ -459,6 +459,7 @@
                         server['id'], volume['id'])
         waiters.wait_for_volume_resource_status(self.volumes_client,
                                                 volume['id'], 'in-use')
+        return attachment
 
 
 class BaseV2ComputeAdminTest(BaseV2ComputeTest):
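With this change the base-class volume-attach helper hands the
``volumeAttachment`` document from the compute API back to the caller. A
hedged sketch of how a test might use it (the helper and field names follow
the standard os-volume_attachments response; the surrounding test method is
assumed)::

    # Fragment from within a BaseV2ComputeTest-derived test method.
    server = self.create_test_server(wait_until='ACTIVE')
    volume = self.create_volume()

    # The helper now returns the attachment record instead of discarding it.
    attachment = self.attach_volume(server, volume)

    # For example, the device path chosen by the compute service and the
    # volume id are available to the test.
    device = attachment['device']
    self.assertEqual(volume['id'], attachment['volumeId'])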
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index a70c0a9..b313f44 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -21,6 +21,7 @@
 from tempest.common import image as common_image
 from tempest.common.utils import data_utils
 from tempest import config
+from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 from tempest import test
 
@@ -43,7 +44,7 @@
 
     @test.attr(type=['negative'])
     @test.services('image')
-    @test.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
+    @decorators.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
     def test_boot_with_low_ram(self):
         """Try boot a vm with lower than min ram
 
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index 349bfda..e90a1fc 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -145,7 +145,7 @@
         self.assertEqual(s_new_name, fetched_group['name'])
         self.assertEqual(s_new_des, fetched_group['description'])
 
-    @test.idempotent_id('79517d60-535a-438f-af3d-e6feab1cbea7')
+    @decorators.idempotent_id('79517d60-535a-438f-af3d-e6feab1cbea7')
     @test.services('network')
     def test_list_security_groups_by_server(self):
         # Create a couple security groups that we will use
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index a94c20b..38dbb50 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -122,7 +122,8 @@
             self.validation_resources['keypair']['private_key'],
             server=self.server,
             servers_client=self.client)
-        self.assertEqual(flavor['vcpus'], linux_client.get_number_of_vcpus())
+        output = linux_client.exec_command('grep -c ^processor /proc/cpuinfo')
+        self.assertEqual(flavor['vcpus'], int(output))
 
     @decorators.idempotent_id('ac1ad47f-984b-4441-9274-c9079b7a0666')
     @testtools.skipUnless(CONF.validation.run_validation,
@@ -136,7 +137,7 @@
             self.validation_resources['keypair']['private_key'],
             server=self.server,
             servers_client=self.client)
-        hostname = linux_client.get_hostname()
+        hostname = linux_client.exec_command("hostname").rstrip()
         msg = ('Failed while verifying servername equals hostname. Expected '
                'hostname "%s" but got "%s".' % (self.name, hostname))
         self.assertEqual(self.name.lower(), hostname, msg)
@@ -236,7 +237,6 @@
     @classmethod
     def setup_clients(cls):
         super(ServersWithSpecificFlavorTestJSON, cls).setup_clients()
-        cls.flavor_client = cls.os_adm.flavors_client
         cls.client = cls.servers_client
 
     @classmethod
@@ -254,7 +254,6 @@
             self.flavor_ref)['flavor']
 
         def create_flavor_with_ephemeral(ephem_disk):
-            flavor_id = data_utils.rand_int_id(start=1000)
             name = 'flavor_with_ephemeral_%s' % ephem_disk
             flavor_name = data_utils.rand_name(name)
 
@@ -263,17 +262,10 @@
             disk = flavor_base['disk']
 
             # Create a flavor with ephemeral disk
-            flavor = self.flavor_client.create_flavor(
-                name=flavor_name, ram=ram, vcpus=vcpus, disk=disk,
-                id=flavor_id, ephemeral=ephem_disk)['flavor']
-            self.addCleanup(flavor_clean_up, flavor['id'])
-
+            flavor = self.create_flavor(name=flavor_name, ram=ram, vcpus=vcpus,
+                                        disk=disk, ephemeral=ephem_disk)
             return flavor['id']
 
-        def flavor_clean_up(flavor_id):
-            self.flavor_client.delete_flavor(flavor_id)
-            self.flavor_client.wait_for_resource_deletion(flavor_id)
-
         flavor_with_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=1)
         flavor_no_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=0)
 
diff --git a/tempest/api/compute/servers/test_list_server_filters.py b/tempest/api/compute/servers/test_list_server_filters.py
index c0a8eae..7b782de 100644
--- a/tempest/api/compute/servers/test_list_server_filters.py
+++ b/tempest/api/compute/servers/test_list_server_filters.py
@@ -12,13 +12,17 @@
 #    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 #    License for the specific language governing permissions and limitations
 #    under the License.
+import testtools
 
 from tempest.api.compute import base
 from tempest.common import fixed_network
 from tempest.common.utils import data_utils
 from tempest.common import waiters
+from tempest import config
 from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
+
+
+CONF = config.CONF
 
 
 class ListServerFiltersTestJSON(base.BaseV2ComputeTest):
@@ -37,31 +41,6 @@
     def resource_setup(cls):
         super(ListServerFiltersTestJSON, cls).resource_setup()
 
-        # Check to see if the alternate image ref actually exists...
-        images_client = cls.compute_images_client
-        images = images_client.list_images()['images']
-
-        if cls.image_ref != cls.image_ref_alt and \
-            any([image for image in images
-                 if image['id'] == cls.image_ref_alt]):
-            cls.multiple_images = True
-        else:
-            cls.image_ref_alt = cls.image_ref
-
-        # Do some sanity checks here. If one of the images does
-        # not exist, fail early since the tests won't work...
-        try:
-            cls.compute_images_client.show_image(cls.image_ref)
-        except lib_exc.NotFound:
-            raise RuntimeError("Image %s (image_ref) was not found!" %
-                               cls.image_ref)
-
-        try:
-            cls.compute_images_client.show_image(cls.image_ref_alt)
-        except lib_exc.NotFound:
-            raise RuntimeError("Image %s (image_ref_alt) was not found!" %
-                               cls.image_ref_alt)
-
         network = cls.get_tenant_network()
         if network:
             cls.fixed_network_name = network.get('name')
@@ -74,9 +53,12 @@
                                         **network_kwargs)
 
         cls.s2_name = data_utils.rand_name(cls.__name__ + '-instance')
-        cls.s2 = cls.create_test_server(name=cls.s2_name,
-                                        image_id=cls.image_ref_alt,
-                                        wait_until='ACTIVE')
+        # If image_ref_alt is "" or None then we still want to boot a server
+        # but we rely on the `testtools.skipUnless` decorator to actually skip
+        # the irrelevant tests.
+        cls.s2 = cls.create_test_server(
+            name=cls.s2_name, image_id=cls.image_ref_alt or cls.image_ref,
+            wait_until='ACTIVE')
 
         cls.s3_name = data_utils.rand_name(cls.__name__ + '-instance')
         cls.s3 = cls.create_test_server(name=cls.s3_name,
@@ -84,7 +66,8 @@
                                         wait_until='ACTIVE')
 
     @decorators.idempotent_id('05e8a8e7-9659-459a-989d-92c2f501f4ba')
-    @decorators.skip_unless_attr('multiple_images', 'Only one image found')
+    @testtools.skipUnless(CONF.compute.image_ref != CONF.compute.image_ref_alt,
+                          "Need distinct images to run this test")
     def test_list_servers_filter_by_image(self):
         # Filter the list of servers by image
         params = {'image': self.image_ref}
@@ -169,7 +152,8 @@
                          len([x for x in servers['servers'] if 'id' in x]))
 
     @decorators.idempotent_id('b3304c3b-97df-46d2-8cd3-e2b6659724e7')
-    @decorators.skip_unless_attr('multiple_images', 'Only one image found')
+    @testtools.skipUnless(CONF.compute.image_ref != CONF.compute.image_ref_alt,
+                          "Need distinct images to run this test")
     def test_list_servers_detailed_filter_by_image(self):
         # Filter the detailed list of servers by image
         params = {'image': self.image_ref}
@@ -269,16 +253,34 @@
         if not self.fixed_network_name:
             msg = 'fixed_network_name needs to be configured to run this test'
             raise self.skipException(msg)
+
+        # Listing servers filtered by ip is a "regexp match", i.e. filtering
+        # by "10.1.1.1" will return both "10.1.1.1" and "10.1.1.10". So look
+        # for the longest server ip here and filter by that ip, to ensure
+        # only one server is returned.
+        ip_list = {}
         self.s1 = self.client.show_server(self.s1['id'])['server']
         # Get first ip address inspite of v4 or v6
-        addr_spec = self.s1['addresses'][self.fixed_network_name][0]
-        params = {'ip': addr_spec['addr']}
+        ip_addr = self.s1['addresses'][self.fixed_network_name][0]['addr']
+        ip_list[ip_addr] = self.s1['id']
+
+        self.s2 = self.client.show_server(self.s2['id'])['server']
+        ip_addr = self.s2['addresses'][self.fixed_network_name][0]['addr']
+        ip_list[ip_addr] = self.s2['id']
+
+        self.s3 = self.client.show_server(self.s3['id'])['server']
+        ip_addr = self.s3['addresses'][self.fixed_network_name][0]['addr']
+        ip_list[ip_addr] = self.s3['id']
+
+        longest_ip = max([[len(ip), ip] for ip in ip_list])[1]
+        params = {'ip': longest_ip}
         body = self.client.list_servers(**params)
         servers = body['servers']
 
-        self.assertIn(self.s1_name, map(lambda x: x['name'], servers))
-        self.assertNotIn(self.s2_name, map(lambda x: x['name'], servers))
-        self.assertNotIn(self.s3_name, map(lambda x: x['name'], servers))
+        self.assertIn(ip_list[longest_ip], map(lambda x: x['id'], servers))
+        del ip_list[longest_ip]
+        for ip in ip_list:
+            self.assertNotIn(ip_list[ip], map(lambda x: x['id'], servers))
 
     @decorators.skip_because(bug="1540645")
     @decorators.idempotent_id('a905e287-c35e-42f2-b132-d02b09f3654a')
diff --git a/tempest/api/compute/servers/test_server_actions.py b/tempest/api/compute/servers/test_server_actions.py
index 6160024..b915739 100644
--- a/tempest/api/compute/servers/test_server_actions.py
+++ b/tempest/api/compute/servers/test_server_actions.py
@@ -471,7 +471,7 @@
 
         # NOTE: SHUTOFF is irregular status. To avoid test instability,
         #       one server is created only for this test without using
-        #       the server that was created in setupClass.
+        #       the server that was created in setUpClass.
         server = self.create_test_server(wait_until='ACTIVE')
         temp_server_id = server['id']
 
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index dfda51b..cf4ed85 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -49,7 +49,7 @@
         # We do not know the exact network configuration, but an instance
         # should at least have a single public or private address
         self.assertGreaterEqual(len(addresses), 1)
-        for network_name, network_addresses in addresses.items():
+        for network_addresses in addresses.values():
             self.assertGreaterEqual(len(network_addresses), 1)
             for address in network_addresses:
                 self.assertTrue(address['addr'])
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index 209ab38..75ba15c 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -58,10 +58,8 @@
         cls.password = data_utils.rand_password()
         # Server for positive tests
         server = cls.create_test_server(adminPass=cls.password,
-                                        wait_until='BUILD')
+                                        wait_until='ACTIVE')
         cls.server_id = server['id']
-        waiters.wait_for_server_status(cls.servers_client, cls.server_id,
-                                       'ACTIVE')
 
     @classmethod
     def resource_cleanup(cls):
diff --git a/tempest/api/compute/servers/test_server_tags.py b/tempest/api/compute/servers/test_server_tags.py
new file mode 100644
index 0000000..20e2cee
--- /dev/null
+++ b/tempest/api/compute/servers/test_server_tags.py
@@ -0,0 +1,108 @@
+# Copyright 2017 AT&T Corp.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import six
+
+from tempest.api.compute import base
+from tempest.common.utils import data_utils
+from tempest.lib import decorators
+from tempest import test
+
+
+class ServerTagsTestJSON(base.BaseV2ComputeTest):
+
+    min_microversion = '2.26'
+    max_microversion = 'latest'
+
+    @classmethod
+    def skip_checks(cls):
+        super(ServerTagsTestJSON, cls).skip_checks()
+        if not test.is_extension_enabled('os-server-tags', 'compute'):
+            msg = "os-server-tags extension is not enabled."
+            raise cls.skipException(msg)
+
+    @classmethod
+    def setup_clients(cls):
+        super(ServerTagsTestJSON, cls).setup_clients()
+        cls.client = cls.servers_client
+
+    @classmethod
+    def resource_setup(cls):
+        super(ServerTagsTestJSON, cls).resource_setup()
+        cls.server = cls.create_test_server(wait_until='ACTIVE')
+
+    def _update_server_tags(self, server_id, tags):
+        if not isinstance(tags, (list, tuple)):
+            tags = [tags]
+        for tag in tags:
+            self.client.update_tag(server_id, tag)
+        self.addCleanup(self.client.delete_all_tags, server_id)
+
+    @decorators.idempotent_id('8d95abe2-c658-4c42-9a44-c0258500306b')
+    def test_create_delete_tag(self):
+        # Check that no tags exist.
+        fetched_tags = self.client.list_tags(self.server['id'])['tags']
+        self.assertEmpty(fetched_tags)
+
+        # Add server tag to the server.
+        assigned_tag = data_utils.rand_name('tag')
+        self._update_server_tags(self.server['id'], assigned_tag)
+
+        # Check that added tag exists.
+        fetched_tags = self.client.list_tags(self.server['id'])['tags']
+        self.assertEqual([assigned_tag], fetched_tags)
+
+        # Remove assigned tag from server and check that it was removed.
+        self.client.delete_tag(self.server['id'], assigned_tag)
+        fetched_tags = self.client.list_tags(self.server['id'])['tags']
+        self.assertEmpty(fetched_tags)
+
+    @decorators.idempotent_id('a2c1af8c-127d-417d-974b-8115f7e3d831')
+    def test_update_all_tags(self):
+        # Add server tags to the server.
+        tags = [data_utils.rand_name('tag'), data_utils.rand_name('tag')]
+        self._update_server_tags(self.server['id'], tags)
+
+        # Replace tags with new tags and check that they are present.
+        new_tags = [data_utils.rand_name('tag'), data_utils.rand_name('tag')]
+        replaced_tags = self.client.update_all_tags(
+            self.server['id'], new_tags)['tags']
+        six.assertCountEqual(self, new_tags, replaced_tags)
+
+        # List the tags and check that the tags were replaced.
+        fetched_tags = self.client.list_tags(self.server['id'])['tags']
+        six.assertCountEqual(self, new_tags, fetched_tags)
+
+    @decorators.idempotent_id('a63b2a74-e918-4b7c-bcab-10c855f3a57e')
+    def test_delete_all_tags(self):
+        # Add server tags to the server.
+        assigned_tags = [data_utils.rand_name('tag'),
+                         data_utils.rand_name('tag')]
+        self._update_server_tags(self.server['id'], assigned_tags)
+
+        # Delete tags from the server and check that they were deleted.
+        self.client.delete_all_tags(self.server['id'])
+        fetched_tags = self.client.list_tags(self.server['id'])['tags']
+        self.assertEmpty(fetched_tags)
+
+    @decorators.idempotent_id('81279a66-61c3-4759-b830-a2dbe64cbe08')
+    def test_check_tag_existence(self):
+        # Add server tag to the server.
+        assigned_tag = data_utils.rand_name('tag')
+        self._update_server_tags(self.server['id'], assigned_tag)
+
+        # Check that added tag exists. Throws a 404 if not found, else a 204,
+        # which was already checked by the schema validation.
+        self.client.check_tag_existence(self.server['id'], assigned_tag)
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index b22a434..1418b3f 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -176,7 +176,7 @@
 
         self.assertRaises(lib_exc.NotFound,
                           self.client.rebuild_server,
-                          server['id'], self.image_ref_alt)
+                          server['id'], self.image_ref)
 
     @test.related_bug('1660878', status_code=409)
     @test.attr(type=['negative'])
@@ -198,7 +198,7 @@
         self.assertRaises(lib_exc.NotFound,
                           self.client.rebuild_server,
                           nonexistent_server,
-                          self.image_ref_alt)
+                          self.image_ref)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('fd57f159-68d6-4c2a-902b-03070828a87e')
diff --git a/tempest/api/compute/test_versions.py b/tempest/api/compute/test_versions.py
index c9f0724..dcab067 100644
--- a/tempest/api/compute/test_versions.py
+++ b/tempest/api/compute/test_versions.py
@@ -14,11 +14,13 @@
 
 from tempest.api.compute import base
 from tempest.lib import decorators
+from tempest import test
 
 
 class TestVersions(base.BaseV2ComputeTest):
 
     @decorators.idempotent_id('6c0a0990-43b6-4529-9b61-5fd8daf7c55c')
+    @test.attr(type='smoke')
     def test_list_api_versions(self):
         """Test that a get of the unversioned url returns the choices doc.
 
@@ -37,6 +39,7 @@
                          "The first listed version should be v2.0")
 
     @decorators.idempotent_id('b953a29e-929c-4a8e-81be-ec3a7e03cb76')
+    @test.attr(type='smoke')
     def test_get_version_details(self):
         """Test individual version endpoints info works.
 
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index 5304944..73c7614 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -22,7 +22,6 @@
 from tempest.common import waiters
 from tempest import config
 from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
 
 CONF = config.CONF
 
@@ -61,38 +60,14 @@
             server['id'])['addresses']
         return server
 
-    def _detach_volume(self, server_id, volume_id):
-        try:
-            self.servers_client.detach_volume(server_id, volume_id)
-            waiters.wait_for_volume_resource_status(self.volumes_client,
-                                                    volume_id, 'available')
-        except lib_exc.NotFound:
-            LOG.warning("Unable to detach volume %s from server %s "
-                        "possibly it was already detached", volume_id,
-                        server_id)
-
-    def _attach_volume(self, server_id, volume_id, device=None):
-        # Attach the volume to the server
-        kwargs = {'volumeId': volume_id}
-        if device:
-            kwargs.update({'device': '/dev/%s' % device})
-        attachment = self.servers_client.attach_volume(
-            server_id, **kwargs)['volumeAttachment']
-        waiters.wait_for_volume_resource_status(self.volumes_client,
-                                                volume_id, 'in-use')
-        self.addCleanup(self._detach_volume, server_id,
-                        volume_id)
-
-        return attachment
-
     @decorators.idempotent_id('52e9045a-e90d-4c0d-9087-79d657faffff')
     def test_attach_detach_volume(self):
         # Stop and Start a server with an attached volume, ensuring that
         # the volume remains attached.
         server = self._create_server()
         volume = self.create_volume()
-        attachment = self._attach_volume(server['id'], volume['id'],
-                                         device=self.device)
+        attachment = self.attach_volume(server, volume,
+                                        device=('/dev/%s' % self.device))
 
         self.servers_client.stop_server(server['id'])
         waiters.wait_for_server_status(self.servers_client, server['id'],
@@ -115,7 +90,10 @@
             device_name_to_match = '\n' + self.device + ' '
             self.assertIn(device_name_to_match, disks)
 
-        self._detach_volume(server['id'], attachment['volumeId'])
+        self.servers_client.detach_volume(server['id'], attachment['volumeId'])
+        waiters.wait_for_volume_resource_status(
+            self.volumes_client, attachment['volumeId'], 'available')
+
         self.servers_client.stop_server(server['id'])
         waiters.wait_for_server_status(self.servers_client, server['id'],
                                        'SHUTOFF')
@@ -141,8 +119,8 @@
         # List volume attachment of the server
         server = self._create_server()
         volume = self.create_volume()
-        attachment = self._attach_volume(server['id'], volume['id'],
-                                         device=self.device)
+        attachment = self.attach_volume(server, volume,
+                                        device=('/dev/%s' % self.device))
         body = self.servers_client.list_volume_attachments(
             server['id'])['volumeAttachments']
         self.assertEqual(1, len(body))
@@ -165,8 +143,8 @@
         server = self._create_server()
         volume_1st = self.create_volume()
         volume_2nd = self.create_volume()
-        attachment_1st = self._attach_volume(server['id'], volume_1st['id'])
-        attachment_2nd = self._attach_volume(server['id'], volume_2nd['id'])
+        attachment_1st = self.attach_volume(server, volume_1st)
+        attachment_2nd = self.attach_volume(server, volume_2nd)
 
         body = self.servers_client.list_volume_attachments(
             server['id'])['volumeAttachments']
@@ -253,8 +231,8 @@
         volume = self.create_volume()
         num_vol = self._count_volumes(server)
         self._shelve_server(server)
-        attachment = self._attach_volume(server['id'], volume['id'],
-                                         device=self.device)
+        attachment = self.attach_volume(server, volume,
+                                        device=('/dev/%s' % self.device))
 
         # Unshelve the instance and check that attached volume exists
         self._unshelve_server_and_check_volumes(server, num_vol + 1)
@@ -279,9 +257,12 @@
         volume = self.create_volume()
         num_vol = self._count_volumes(server)
         self._shelve_server(server)
-        self._attach_volume(server['id'], volume['id'], device=self.device)
-        # Detach the volume
-        self._detach_volume(server['id'], volume['id'])
+
+        # Attach and then detach the volume
+        self.attach_volume(server, volume, device=('/dev/%s' % self.device))
+        self.servers_client.detach_volume(server['id'], volume['id'])
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Unshelve the instance and check that we have the expected number of
         # volume(s)
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
index df55d2f..0ea2eb3 100644
--- a/tempest/api/identity/admin/v2/test_endpoints.py
+++ b/tempest/api/identity/admin/v2/test_endpoints.py
@@ -27,10 +27,10 @@
         s_name = data_utils.rand_name('service')
         s_type = data_utils.rand_name('type')
         s_description = data_utils.rand_name('description')
-        cls.service_data = cls.services_client.create_service(
+        service_data = cls.services_client.create_service(
             name=s_name, type=s_type,
             description=s_description)['OS-KSADM:service']
-        cls.service_id = cls.service_data['id']
+        cls.service_id = service_data['id']
         cls.service_ids.append(cls.service_id)
         # Create endpoints so as to use for LIST and GET test cases
         cls.setup_endpoints = list()
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index 686743b..9a0b3e4 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -33,11 +33,10 @@
         s_name = data_utils.rand_name('service')
         s_type = data_utils.rand_name('type')
         s_description = data_utils.rand_name('description')
-        cls.service_data = (
+        service_data = (
             cls.services_client.create_service(name=s_name, type=s_type,
                                                description=s_description))
-        cls.service_data = cls.service_data['service']
-        cls.service_id = cls.service_data['id']
+        cls.service_id = service_data['service']['id']
         cls.service_ids.append(cls.service_id)
         # Create endpoints so as to use for LIST and GET test cases
         cls.setup_endpoints = list()
diff --git a/tempest/api/identity/admin/v3/test_endpoints_negative.py b/tempest/api/identity/admin/v3/test_endpoints_negative.py
index 53c2b1f..8e00193 100644
--- a/tempest/api/identity/admin/v3/test_endpoints_negative.py
+++ b/tempest/api/identity/admin/v3/test_endpoints_negative.py
@@ -35,11 +35,11 @@
         s_name = data_utils.rand_name('service')
         s_type = data_utils.rand_name('type')
         s_description = data_utils.rand_name('description')
-        cls.service_data = (
+        service_data = (
             cls.services_client.create_service(name=s_name, type=s_type,
                                                description=s_description)
             ['service'])
-        cls.service_id = cls.service_data['id']
+        cls.service_id = service_data['id']
         cls.service_ids.append(cls.service_id)
 
     @classmethod
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index 445d928..9bee24a 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -15,11 +15,14 @@
 
 from tempest.api.identity import base
 from tempest.common.utils import data_utils
+from tempest import config
 from tempest.lib.common.utils import test_utils
 from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 from tempest import test
 
+CONF = config.CONF
+
 
 class RolesV3TestJSON(base.BaseIdentityV3AdminTest):
 
@@ -306,3 +309,75 @@
         roles_ids = [assignment['role']['id']
                      for assignment in role_assignments]
         self.assertIn(self.roles[0]['id'], roles_ids)
+
+    @decorators.idempotent_id('d92a41d2-5501-497a-84bb-6e294330e8f8')
+    def test_domain_roles_create_delete(self):
+        domain_role = self.roles_client.create_role(
+            name=data_utils.rand_name('domain_role'),
+            domain_id=self.domain['id'])['role']
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.roles_client.delete_role,
+            domain_role['id'])
+
+        domain_roles = self.roles_client.list_roles(
+            domain_id=self.domain['id'])['roles']
+        self.assertEqual(1, len(domain_roles))
+        self.assertIn(domain_role, domain_roles)
+
+        self.roles_client.delete_role(domain_role['id'])
+        domain_roles = self.roles_client.list_roles(
+            domain_id=self.domain['id'])['roles']
+        self.assertEmpty(domain_roles)
+
+    @decorators.idempotent_id('eb1e1c24-1bc4-4d47-9748-e127a1852c82')
+    def test_implied_domain_roles(self):
+        # Create two roles in the same domain
+        domain_role1 = self.setup_test_role(domain_id=self.domain['id'])
+        domain_role2 = self.setup_test_role(domain_id=self.domain['id'])
+
+        # Check if we can create an inference rule from roles in the same
+        # domain
+        self._create_implied_role(domain_role1['id'], domain_role2['id'])
+
+        # Create another role in a different domain
+        domain2 = self.setup_test_domain()
+        domain_role3 = self.setup_test_role(domain_id=domain2['id'])
+
+        # Check if we can create cross domain implied roles
+        self._create_implied_role(domain_role1['id'], domain_role3['id'])
+
+        # Finally, we should also be able to create an implied role from a
+        # domain role to a global one
+        self._create_implied_role(domain_role1['id'], self.role['id'])
+
+        if CONF.identity_feature_enabled.forbid_global_implied_dsr:
+            # The contrary is not true: we can't create an inference rule
+            # from a global role to a domain role
+            self.assertRaises(
+                lib_exc.Forbidden,
+                self.roles_client.create_role_inference_rule,
+                self.role['id'],
+                domain_role1['id'])
+
+    @decorators.idempotent_id('3859df7e-5b78-4e4d-b10e-214c8953842a')
+    def test_assignments_for_domain_roles(self):
+        domain_role = self.setup_test_role(domain_id=self.domain['id'])
+
+        # Create a grant using "domain_role"
+        self.roles_client.create_user_role_on_project(
+            self.project['id'], self.user_body['id'], domain_role['id'])
+        self.addCleanup(
+            self.roles_client.delete_role_from_user_on_project,
+            self.project['id'], self.user_body['id'], domain_role['id'])
+
+        # NOTE(rodrigods): Regular roles would appear in the effective
+        # list of role assignments (meaning the role would be returned in
+        # a token) as a result from the grant above. This is not the case
+        # for domain roles; they should not appear in the effective role
+        # assignments list.
+        params = {'scope.project.id': self.project['id'],
+                  'user.id': self.user_body['id']}
+        role_assignments = self.role_assignments.list_role_assignments(
+            effective=True, **params)['role_assignments']
+        self.assertEmpty(role_assignments)
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 3bbe47a..344779c 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -15,6 +15,7 @@
 
 from tempest.common.utils import data_utils
 from tempest import config
+from tempest.lib.common.utils import test_utils
 import tempest.test
 
 CONF = config.CONF
@@ -72,15 +73,22 @@
             kwargs['password'] = user_password
         user = self.users_client.create_user(**kwargs)['user']
         # Delete the user at the end of the test
-        self.addCleanup(self.users_client.delete_user, user['id'])
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.users_client.delete_user, user['id'])
         return user
 
-    def setup_test_role(self):
+    def setup_test_role(self, domain_id=None):
         """Set up a test role."""
-        role = self.roles_client.create_role(
-            name=data_utils.rand_name('test_role'))['role']
+        params = {'name': data_utils.rand_name('test_role')}
+        if domain_id:
+            params['domain_id'] = domain_id
+
+        role = self.roles_client.create_role(**params)['role']
         # Delete the role at the end of the test
-        self.addCleanup(self.roles_client.delete_role, role['id'])
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.roles_client.delete_role, role['id'])
         return role
 
 
@@ -149,7 +157,9 @@
             name=data_utils.rand_name('test_tenant'),
             description=data_utils.rand_name('desc'))['tenant']
         # Delete the tenant at the end of the test
-        self.addCleanup(self.tenants_client.delete_tenant, tenant['id'])
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.tenants_client.delete_tenant, tenant['id'])
         return tenant
 
 
@@ -243,12 +253,16 @@
             name=data_utils.rand_name('test_project'),
             description=data_utils.rand_name('desc'))['project']
         # Delete the project at the end of the test
-        self.addCleanup(self.projects_client.delete_project, project['id'])
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.projects_client.delete_project, project['id'])
         return project
 
     def setup_test_domain(self):
         """Set up a test domain."""
         domain = self.create_domain()
         # Delete the domain at the end of the test
-        self.addCleanup(self.delete_domain, domain['id'])
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.delete_domain, domain['id'])
         return domain
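
The identity base-class hunk above converts every setup helper to register its
cleanup through ``test_utils.call_and_ignore_notfound_exc``, so cleanup no
longer fails when a test has already deleted the resource itself. A minimal
sketch of that pattern, using a hypothetical ``widgets_client``, looks like::

    from tempest.lib.common.utils import test_utils

    def setup_test_widget(self):
        # 'widgets_client' and its methods are illustrative placeholders.
        widget = self.widgets_client.create_widget()['widget']
        # Wrap the delete call so a NotFound raised during cleanup (because
        # the test already removed the resource) is silently ignored.
        self.addCleanup(test_utils.call_and_ignore_notfound_exc,
                        self.widgets_client.delete_widget, widget['id'])
        return widget
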
diff --git a/tempest/api/volume/admin/v2/test_snapshot_manage.py b/tempest/api/volume/admin/v2/test_snapshot_manage.py
index eed7dd1..e8bd477 100644
--- a/tempest/api/volume/admin/v2/test_snapshot_manage.py
+++ b/tempest/api/volume/admin/v2/test_snapshot_manage.py
@@ -61,8 +61,8 @@
         new_snapshot = self.admin_snapshot_manage_client.manage_snapshot(
             volume_id=volume['id'],
             ref={'source-name': snapshot_ref})['snapshot']
-        self.addCleanup(self.delete_snapshot,
-                        self.admin_snapshots_client, new_snapshot['id'])
+        self.addCleanup(self.delete_snapshot, new_snapshot['id'],
+                        self.admin_snapshots_client)
 
         # Wait for the snapshot to be available after manage operation
         waiters.wait_for_volume_resource_status(self.admin_snapshots_client,
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index f8c435f..fd10fb3 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -145,7 +145,7 @@
 
         snapshot = cls.snapshots_client.create_snapshot(
             volume_id=volume_id, **kwargs)['snapshot']
-        cls.snapshots.append(snapshot)
+        cls.snapshots.append(snapshot['id'])
         waiters.wait_for_volume_resource_status(cls.snapshots_client,
                                                 snapshot['id'], 'available')
         return snapshot
@@ -171,11 +171,14 @@
         client.delete_volume(volume_id)
         client.wait_for_resource_deletion(volume_id)
 
-    @staticmethod
-    def delete_snapshot(client, snapshot_id):
+    def delete_snapshot(self, snapshot_id, snapshots_client=None):
         """Delete snapshot by the given client"""
-        client.delete_snapshot(snapshot_id)
-        client.wait_for_resource_deletion(snapshot_id)
+        if snapshots_client is None:
+            snapshots_client = self.snapshots_client
+        snapshots_client.delete_snapshot(snapshot_id)
+        snapshots_client.wait_for_resource_deletion(snapshot_id)
+        if snapshot_id in self.snapshots:
+            self.snapshots.remove(snapshot_id)
 
     def attach_volume(self, server_id, volume_id):
         """Attach a volume to a server"""
@@ -207,12 +210,12 @@
     def clear_snapshots(cls):
         for snapshot in cls.snapshots:
             test_utils.call_and_ignore_notfound_exc(
-                cls.snapshots_client.delete_snapshot, snapshot['id'])
+                cls.snapshots_client.delete_snapshot, snapshot)
 
         for snapshot in cls.snapshots:
             test_utils.call_and_ignore_notfound_exc(
                 cls.snapshots_client.wait_for_resource_deletion,
-                snapshot['id'])
+                snapshot)
 
     def create_server(self, **kwargs):
         name = kwargs.pop(
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 9f4ce95..5abda5e 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -10,6 +10,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from testtools import matchers
+
 from tempest.api.volume import base
 from tempest.common.utils import data_utils
 from tempest import config
@@ -34,12 +36,6 @@
         cls.name_field = cls.special_fields['name_field']
         cls.descrip_field = cls.special_fields['descrip_field']
 
-    def cleanup_snapshot(self, snapshot):
-        # Delete the snapshot
-        self.snapshots_client.delete_snapshot(snapshot['id'])
-        self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
-        self.snapshots.remove(snapshot)
-
     @decorators.idempotent_id('b467b54c-07a4-446d-a1cf-651dedcc3ff1')
     @test.services('compute')
     def test_snapshot_create_with_volume_in_use(self):
@@ -52,7 +48,7 @@
         snapshot = self.create_snapshot(self.volume_origin['id'],
                                         force=True)
         # Delete the snapshot
-        self.cleanup_snapshot(snapshot)
+        self.delete_snapshot(snapshot['id'])
 
     @decorators.idempotent_id('8567b54c-4455-446d-a1cf-651ddeaa3ff2')
     @test.services('compute')
@@ -68,9 +64,9 @@
 
         # Delete the snapshots. Some snapshot implementations can take
         # different paths according to order they are deleted.
-        self.cleanup_snapshot(snapshot1)
-        self.cleanup_snapshot(snapshot3)
-        self.cleanup_snapshot(snapshot2)
+        self.delete_snapshot(snapshot1['id'])
+        self.delete_snapshot(snapshot3['id'])
+        self.delete_snapshot(snapshot2['id'])
 
     @decorators.idempotent_id('5210a1de-85a0-11e6-bb21-641c676a5d61')
     @test.services('compute')
@@ -89,14 +85,18 @@
 
         # Delete the snapshots. Some snapshot implementations can take
         # different paths according to order they are deleted.
-        self.cleanup_snapshot(snapshot3)
-        self.cleanup_snapshot(snapshot1)
-        self.cleanup_snapshot(snapshot2)
+        self.delete_snapshot(snapshot3['id'])
+        self.delete_snapshot(snapshot1['id'])
+        self.delete_snapshot(snapshot2['id'])
 
     @decorators.idempotent_id('2a8abbe4-d871-46db-b049-c41f5af8216e')
     def test_snapshot_create_get_list_update_delete(self):
-        # Create a snapshot
-        snapshot = self.create_snapshot(self.volume_origin['id'])
+        # Create a snapshot with metadata
+        metadata = {"snap-meta1": "value1",
+                    "snap-meta2": "value2",
+                    "snap-meta3": "value3"}
+        snapshot = self.create_snapshot(self.volume_origin['id'],
+                                        metadata=metadata)
 
         # Get the snap and check for some of its details
         snap_get = self.snapshots_client.show_snapshot(
@@ -105,6 +105,10 @@
                          snap_get['volume_id'],
                          "Referred volume origin mismatch")
 
+        # Verify snapshot metadata
+        self.assertThat(snap_get['metadata'].items(),
+                        matchers.ContainsAll(metadata.items()))
+
         # Compare also with the output from the list action
         tracking_data = (snapshot['id'], snapshot[self.name_field])
         snaps_list = self.snapshots_client.list_snapshots()['snapshots']
@@ -129,7 +133,7 @@
         self.assertEqual(new_desc, updated_snapshot[self.descrip_field])
 
         # Delete the snapshot
-        self.cleanup_snapshot(snapshot)
+        self.delete_snapshot(snapshot['id'])
 
     @decorators.idempotent_id('677863d1-3142-456d-b6ac-9924f667a7f4')
     def test_volume_from_snapshot(self):
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
index a3d91b0..9e44379 100644
--- a/tempest/api/volume/test_volumes_snapshots_negative.py
+++ b/tempest/api/volume/test_volumes_snapshots_negative.py
@@ -62,6 +62,13 @@
                           size=src_size - 1,
                           snapshot_id=src_snap['id'])
 
+    @test.attr(type=['negative'])
+    @decorators.idempotent_id('8fd92339-e22f-4591-86b4-1e2215372a40')
+    def test_list_snapshot_invalid_param_limit(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.snapshots_client.list_snapshots,
+                          limit='invalid')
+
 
 class VolumesV1SnapshotNegativeTestJSON(VolumesV2SnapshotNegativeTestJSON):
     _api_version = 1
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index 8b51e64..d2328c8 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -37,13 +37,12 @@
         super(VolumesV2ListTestJSON, cls).resource_setup()
 
         # Create 3 test volumes
-        metadata = {'Type': 'work'}
         # NOTE(zhufl): When using pre-provisioned credentials, the project
         # may have volumes other than those created below.
         existing_volumes = cls.volumes_client.list_volumes()['volumes']
         cls.volume_id_list = [vol['id'] for vol in existing_volumes]
         for _ in range(3):
-            volume = cls.create_volume(metadata=metadata)
+            volume = cls.create_volume()
             cls.volume_id_list.append(volume['id'])
 
     @decorators.idempotent_id('2a7064eb-b9c3-429b-b888-33928fc5edd3')
diff --git a/tempest/api/volume/v2/test_volumes_snapshots_negative.py b/tempest/api/volume/v2/test_volumes_snapshots_negative.py
new file mode 100644
index 0000000..e5581b9
--- /dev/null
+++ b/tempest/api/volume/v2/test_volumes_snapshots_negative.py
@@ -0,0 +1,46 @@
+# Copyright 2017 Red Hat, Inc.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.volume import base
+from tempest.common.utils import data_utils
+from tempest import config
+from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
+from tempest import test
+
+CONF = config.CONF
+
+
+class VolumesV2SnapshotNegativeTest(base.BaseVolumeTest):
+
+    @classmethod
+    def skip_checks(cls):
+        super(VolumesV2SnapshotNegativeTest, cls).skip_checks()
+        if not CONF.volume_feature_enabled.snapshot:
+            raise cls.skipException("Cinder volume snapshots are disabled")
+
+    @test.attr(type=['negative'])
+    @decorators.idempotent_id('27b5f37f-bf69-4e8c-986e-c44f3d6819b8')
+    def test_list_snapshots_invalid_param_sort(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.snapshots_client.list_snapshots,
+                          sort_key='invalid')
+
+    @test.attr(type=['negative'])
+    @decorators.idempotent_id('b68deeda-ca79-4a32-81af-5c51179e553a')
+    def test_list_snapshots_invalid_param_marker(self):
+        self.assertRaises(lib_exc.NotFound,
+                          self.snapshots_client.list_snapshots,
+                          marker=data_utils.rand_uuid())
diff --git a/tempest/cmd/run.py b/tempest/cmd/run.py
index 54b844a..b36bf5c 100644
--- a/tempest/cmd/run.py
+++ b/tempest/cmd/run.py
@@ -78,11 +78,20 @@
 subunit-trace output filter. But, if you would prefer a subunit v2 stream be
 output to STDOUT use the **--subunit** flag
 
+Combining Runs
+==============
+
+There are certain situations in which you want to split a single run of tempest
+across 2 executions of tempest run (for example, to run part of the tests
+serially and the rest in parallel). To accomplish this, but still treat the
+results as a single run, you can leverage the **--combine** option, which will
+append the current run's results to the previous run's in the repository.
 """
 
 import io
 import os
 import sys
+import tempfile
 import threading
 
 from cliff import command
@@ -165,6 +174,12 @@
         else:
             print("No .testr.conf file was found for local execution")
             sys.exit(2)
+        if parsed_args.combine:
+            temp_stream = tempfile.NamedTemporaryFile()
+            return_code = run_argv(['tempest', 'last', '--subunit'], sys.stdin,
+                                   temp_stream, sys.stderr)
+            if return_code > 0:
+                sys.exit(return_code)
 
         regex = self._build_regex(parsed_args)
         if parsed_args.list_tests:
@@ -173,6 +188,16 @@
         else:
             options = self._build_options(parsed_args)
             returncode = self._run(regex, options)
+            if returncode > 0:
+                sys.exit(returncode)
+
+        if parsed_args.combine:
+            return_code = run_argv(['tempest', 'last', '--subunit'], sys.stdin,
+                                   temp_stream, sys.stderr)
+            if return_code > 0:
+                sys.exit(return_code)
+            returncode = run_argv(['tempest', 'load', temp_stream.name],
+                                  sys.stdin, sys.stdout, sys.stderr)
         sys.exit(returncode)
 
     def get_description(self):
@@ -231,6 +256,10 @@
         # output args
         parser.add_argument("--subunit", action='store_true',
                             help='Enable subunit v2 output')
+        parser.add_argument("--combine", action='store_true',
+                            help='Combine the output of this run with the '
+                                 "previous run's as a combined stream in the "
+                                 "testr repository after it finish")
 
         parser.set_defaults(parallel=True)
         return parser
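
A hedged sketch of the workflow the new ``--combine`` option enables, driven
through ``subprocess`` purely for illustration; the regex values and the
serial/parallel split are examples, not part of this change::

    import subprocess

    # First pass: run one slice of the tests, serially in this example.
    subprocess.check_call(['tempest', 'run', '--serial',
                           '--regex', 'tempest.scenario'])
    # Second pass: run the remaining tests and merge their results into the
    # previous run's entry in the repository via --combine.
    subprocess.check_call(['tempest', 'run', '--combine',
                           '--regex', 'tempest.api'])
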
diff --git a/tempest/cmd/subunit_describe_calls.py b/tempest/cmd/subunit_describe_calls.py
index 0f868a9..8ee3055 100644
--- a/tempest/cmd/subunit_describe_calls.py
+++ b/tempest/cmd/subunit_describe_calls.py
@@ -294,7 +294,8 @@
             outfile.write(json.dumps(url_parser.test_logs))
         return
 
-    for test_name, items in url_parser.test_logs.iteritems():
+    for test_name in url_parser.test_logs:
+        items = url_parser.test_logs[test_name]
         sys.stdout.write('{0}\n'.format(test_name))
         if not items:
             sys.stdout.write('\n')
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 1487c1d..1aa09e6 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -75,9 +75,13 @@
         """
         self.server = server
         self.servers_client = servers_client
+
         ssh_timeout = CONF.validation.ssh_timeout
         connect_timeout = CONF.validation.connect_timeout
         self.log_console = CONF.compute_feature_enabled.console_output
+        self.ssh_shell_prologue = CONF.validation.ssh_shell_prologue
+        self.ping_count = CONF.validation.ping_count
+        self.ping_size = CONF.validation.ping_size
 
         self.ssh_client = ssh.Client(ip_address, username, password,
                                      ssh_timeout, pkey=pkey,
@@ -87,7 +91,7 @@
     def exec_command(self, cmd):
         # Shell options below add more clearness on failures,
         # path is extended for some non-cirros guest oses (centos7)
-        cmd = CONF.validation.ssh_shell_prologue + " " + cmd
+        cmd = self.ssh_shell_prologue + " " + cmd
         LOG.debug("Remote command: %s", cmd)
         return self.ssh_client.exec_command(cmd)
 
@@ -99,20 +103,6 @@
         """
         self.ssh_client.test_connection_auth()
 
-    def get_hostname(self):
-        # Get host name using command "hostname"
-        actual_hostname = self.exec_command("hostname").rstrip()
-        return actual_hostname
-
-    def get_ram_size_in_mb(self):
-        output = self.exec_command('free -m | grep Mem')
-        if output:
-            return output.split()[1]
-
-    def get_number_of_vcpus(self):
-        output = self.exec_command('grep -c ^processor /proc/cpuinfo')
-        return int(output)
-
     def get_disks(self):
         # Select root disk devices as shown by lsblk
         command = 'lsblk -lb --nodeps'
@@ -142,8 +132,12 @@
         cmd = 'sudo sh -c "echo \\"%s\\" >/dev/console"' % message
         return self.exec_command(cmd)
 
-    def ping_host(self, host, count=CONF.validation.ping_count,
-                  size=CONF.validation.ping_size, nic=None):
+    def ping_host(self, host, count=None, size=None, nic=None):
+        if count is None:
+            count = self.ping_count
+        if size is None:
+            size = self.ping_size
+
         addr = netaddr.IPAddress(host)
         cmd = 'ping6' if addr.version == 6 else 'ping'
         if nic:
@@ -176,11 +170,9 @@
         cmd = "ip address"
         return self.exec_command(cmd)
 
-    def assign_static_ip(self, nic, addr):
+    def assign_static_ip(self, nic, addr, network_mask_bits=28):
         cmd = "sudo ip addr add {ip}/{mask} dev {nic}".format(
-            ip=addr, mask=CONF.network.project_network_mask_bits,
-            nic=nic
-        )
+            ip=addr, mask=network_mask_bits, nic=nic)
         return self.exec_command(cmd)
 
     def set_nic_state(self, nic, state="up"):
@@ -218,7 +210,7 @@
         cmd = "sudo /sbin/dhclient -r && sudo /sbin/dhclient"
         self.exec_command(cmd)
 
-    def renew_lease(self, fixed_ip=None):
+    def renew_lease(self, fixed_ip=None, dhcp_client='udhcpc'):
         """Wrapper method for renewing DHCP lease via given client
 
         Supporting:
@@ -227,7 +219,6 @@
         """
         # TODO(yfried): add support for dhcpcd
         supported_clients = ['udhcpc', 'dhclient']
-        dhcp_client = CONF.scenario.dhcp_client
         if dhcp_client not in supported_clients:
             raise tempest.lib.exceptions.InvalidConfiguration(
                 '%s DHCP client unsupported' % dhcp_client)
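
The ``RemoteClient`` changes above move the ``CONF`` reads from default
argument values into ``__init__``. The reason is a general Python behaviour:
default values are evaluated once, at definition time. A small illustration
with a stand-in config object (none of these names come from the change)::

    class _FakeConf(object):
        ping_count = 1

    CONF = _FakeConf()

    def ping_frozen(count=CONF.ping_count):
        # Default captured when the function is defined.
        return count

    def ping_lazy(count=None):
        # Default resolved at call time, so later config changes are seen.
        if count is None:
            count = CONF.ping_count
        return count

    CONF.ping_count = 3
    assert ping_frozen() == 1   # still the definition-time value
    assert ping_lazy() == 3     # picks up the updated value
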
diff --git a/tempest/config.py b/tempest/config.py
index b4d88c5..274cd21 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -15,12 +15,15 @@
 
 from __future__ import print_function
 
+import functools
 import os
 import tempfile
 
+import debtcollector.removals
 from oslo_concurrency import lockutils
 from oslo_config import cfg
 from oslo_log import log as logging
+import testtools
 
 from tempest.lib import exceptions
 from tempest.lib.services import clients
@@ -130,7 +133,7 @@
     cfg.StrOpt('uri_v3',
                help='Full URI of the OpenStack Identity API (Keystone), v3'),
     cfg.StrOpt('auth_version',
-               default='v2',
+               default='v3',
                help="Identity API version to be used for authentication "
                     "for API tests."),
     cfg.StrOpt('region',
@@ -222,6 +225,13 @@
                 deprecated_for_removal=True,
                 deprecated_reason="All supported version of OpenStack now "
                                   "supports the 'reseller' feature"),
+    # TODO(rodrigods): This is a feature flag for bug 1590578 which is fixed
+    # in Newton and Ocata. This option can be removed after Mitaka is end of
+    # life.
+    cfg.BoolOpt('forbid_global_implied_dsr',
+                default=False,
+                help='Does the environment forbid global roles implying '
+                     'domain specific ones?'),
     cfg.BoolOpt('security_compliance',
                 default=False,
                 help='Does the environment have the security compliance '
@@ -1018,7 +1028,12 @@
                help="Prefix to be added when generating the name for "
                     "test resources. It can be used to discover all "
                     "resources associated with a specific test run when "
-                    "running tempest on a real-life cloud"),
+                    "running tempest on a real-life cloud",
+               deprecated_for_removal=True,
+               deprecated_reason="It is enough to add 'tempest' as this "
+                                 "prefix to identify resources which are "
+                                 "created by Tempest, and no project in the "
+                                 "OpenStack community sets this option."),
 ]
 
 _opts = [
@@ -1189,6 +1204,79 @@
 CONF = TempestConfigProxy()
 
 
+@debtcollector.removals.remove(
+    message='use testtools.skipUnless instead', removal_version='Queens')
+def skip_unless_config(*args):
+    """Decorator to raise a skip if a config opt doesn't exist or is False
+
+    :param str group: The first arg, the option group to check
+    :param str name: The second arg, the option name to check
+    :param str msg: Optional third arg, the skip msg to use if a skip is raised
+    :raises testtools.TestCase.skipException: If the specified config option
+        doesn't exist or it exists and evaluates to False
+    """
+    def decorator(f):
+        group = args[0]
+        name = args[1]
+
+        @functools.wraps(f)
+        def wrapper(self, *func_args, **func_kwargs):
+            if not hasattr(CONF, group):
+                msg = "Config group %s doesn't exist" % group
+                raise testtools.TestCase.skipException(msg)
+
+            conf_group = getattr(CONF, group)
+            if not hasattr(conf_group, name):
+                msg = "Config option %s.%s doesn't exist" % (group,
+                                                             name)
+                raise testtools.TestCase.skipException(msg)
+
+            value = getattr(conf_group, name)
+            if not value:
+                if len(args) == 3:
+                    msg = args[2]
+                else:
+                    msg = "Config option %s.%s is false" % (group,
+                                                            name)
+                raise testtools.TestCase.skipException(msg)
+            return f(self, *func_args, **func_kwargs)
+        return wrapper
+    return decorator
+
+
+@debtcollector.removals.remove(
+    message='use testtools.skipIf instead', removal_version='Queens')
+def skip_if_config(*args):
+    """Raise a skipException if a config exists and is True
+
+    :param str group: The first arg, the option group to check
+    :param str name: The second arg, the option name to check
+    :param str msg: Optional third arg, the skip msg to use if a skip is raised
+    :raises testtools.TestCase.skipException: If the specified config option
+        exists and evaluates to True
+    """
+    def decorator(f):
+        group = args[0]
+        name = args[1]
+
+        @functools.wraps(f)
+        def wrapper(self, *func_args, **func_kwargs):
+            if hasattr(CONF, group):
+                conf_group = getattr(CONF, group)
+                if hasattr(conf_group, name):
+                    value = getattr(conf_group, name)
+                    if value:
+                        if len(args) == 3:
+                            msg = args[2]
+                        else:
+                            msg = "Config option %s.%s is false" % (group,
+                                                                    name)
+                        raise testtools.TestCase.skipException(msg)
+            return f(self, *func_args, **func_kwargs)
+        return wrapper
+    return decorator
+
+
 def service_client_config(service_client_name=None):
     """Return a dict with the parameters to init service clients
 
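
A hedged usage sketch for the two (already deprecated) decorators added to
``tempest/config.py`` above; the test class is illustrative, while the option
referenced is the ``forbid_global_implied_dsr`` flag introduced earlier in
this change::

    from tempest import config
    from tempest import test

    class ExampleSkipTest(test.BaseTestCase):

        @config.skip_unless_config('identity_feature_enabled',
                                   'forbid_global_implied_dsr',
                                   'global implied DSR is not forbidden here')
        def test_runs_only_when_flag_is_true(self):
            pass

        @config.skip_if_config('identity_feature_enabled',
                               'forbid_global_implied_dsr')
        def test_runs_only_when_flag_is_false(self):
            pass
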
diff --git a/tempest/lib/api_schema/response/compute/v2_1/keypairs.py b/tempest/lib/api_schema/response/compute/v2_1/keypairs.py
index 2828097..e7dcf79 100644
--- a/tempest/lib/api_schema/response/compute/v2_1/keypairs.py
+++ b/tempest/lib/api_schema/response/compute/v2_1/keypairs.py
@@ -34,12 +34,9 @@
 
                 },
                 'additionalProperties': False,
-                # When we run the get keypair API, response body includes
-                # all the above mentioned attributes.
-                # But in Nova API sample file, response body includes only
-                # 'public_key', 'name' & 'fingerprint'. So only 'public_key',
-                # 'name' & 'fingerprint' are defined as 'required'.
-                'required': ['public_key', 'name', 'fingerprint']
+                'required': ['public_key', 'name', 'fingerprint', 'user_id',
+                             'deleted', 'created_at', 'updated_at',
+                             'deleted_at', 'id']
             }
         },
         'additionalProperties': False,
diff --git a/tempest/lib/api_schema/response/compute/v2_26/servers.py b/tempest/lib/api_schema/response/compute/v2_26/servers.py
index bc5d18e..d873402 100644
--- a/tempest/lib/api_schema/response/compute/v2_26/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_26/servers.py
@@ -1,4 +1,5 @@
 # Copyright 2016 IBM Corp.
+# Copyright 2017 AT&T Corp.
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -45,3 +46,41 @@
 # list response schema wasn't changed for v2.26 so use v2.1
 
 list_servers = copy.deepcopy(servers21.list_servers)
+
+list_tags = {
+    'status_code': [200],
+    'response_body': {
+        'type': 'object',
+        'properties': {
+            'tags': {
+                'type': 'array',
+                'items': {
+                    'type': 'string'
+                }
+            }
+        },
+        'additionalProperties': False,
+        'required': ['tags']
+    }
+}
+
+update_all_tags = copy.deepcopy(list_tags)
+
+delete_all_tags = {'status_code': [204]}
+
+check_tag_existence = {'status_code': [204]}
+
+update_tag = {
+    'status_code': [201, 204],
+    'response_header': {
+        'type': 'object',
+        'properties': {
+            'location': {
+                'type': 'string'
+            }
+        },
+        'required': ['location']
+    }
+}
+
+delete_tag = {'status_code': [204]}
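
The ``response_body`` entries above are ordinary JSON Schema documents; the
client validates bodies against them (plus the expected status codes and
headers) through ``validate_response``. A minimal stand-alone check with the
``jsonschema`` library, using a made-up body, would look like::

    import jsonschema

    sample_body = {'tags': ['database', 'web']}
    # Raises jsonschema.exceptions.ValidationError on a mismatch, e.g. if a
    # tag were not a string or an unexpected key were present.
    jsonschema.validate(sample_body, list_tags['response_body'])
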
diff --git a/tempest/lib/cmd/check_uuid.py b/tempest/lib/cmd/check_uuid.py
index 283b10f..eafde44 100755
--- a/tempest/lib/cmd/check_uuid.py
+++ b/tempest/lib/cmd/check_uuid.py
@@ -26,10 +26,6 @@
 from oslo_utils import uuidutils
 import six.moves.urllib.parse as urlparse
 
-# TODO(oomichi): Need to remove this after switching all modules to decorators
-# on all OpenStack projects because they runs check-uuid on their own gates.
-OLD_DECORATOR_MODULE = 'test'
-
 DECORATOR_MODULE = 'decorators'
 DECORATOR_NAME = 'idempotent_id'
 DECORATOR_IMPORT = 'tempest.%s' % DECORATOR_MODULE
@@ -128,8 +124,7 @@
                 hasattr(decorator.func, 'attr') and
                 decorator.func.attr == DECORATOR_NAME and
                 hasattr(decorator.func, 'value') and
-                (decorator.func.value.id == DECORATOR_MODULE or
-                 decorator.func.value.id == OLD_DECORATOR_MODULE)):
+                decorator.func.value.id == DECORATOR_MODULE):
                 for arg in decorator.args:
                     idempotent_id = ast.literal_eval(arg)
         return idempotent_id
@@ -361,7 +356,7 @@
         sys.exit("@decorators.idempotent_id existence and uniqueness checks "
                  "failed\n"
                  "Run 'tox -v -euuidgen' to automatically fix tests with\n"
-                 "missing @test.idempotent_id decorators.")
+                 "missing @decorators.idempotent_id decorators.")
 
 if __name__ == '__main__':
     run()
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index 5e65bee..657c0c1 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -111,6 +111,7 @@
             except (EOFError,
                     socket.error, socket.timeout,
                     paramiko.SSHException) as e:
+                ssh.close()
                 if self._is_timed_out(_start_time):
                     LOG.exception("Failed to establish authenticated ssh"
                                   " connection to %s@%s after %d attempts",
diff --git a/tempest/lib/common/utils/data_utils.py b/tempest/lib/common/utils/data_utils.py
index 642514b..a0941ef 100644
--- a/tempest/lib/common/utils/data_utils.py
+++ b/tempest/lib/common/utils/data_utils.py
@@ -43,7 +43,7 @@
     return uuid.uuid4().hex
 
 
-def rand_name(name='', prefix=None):
+def rand_name(name='', prefix='tempest'):
     """Generate a random name that includes a random number
 
     :param str name: The name that you want to include
diff --git a/tempest/lib/decorators.py b/tempest/lib/decorators.py
index 6ed99b4..92f9698 100644
--- a/tempest/lib/decorators.py
+++ b/tempest/lib/decorators.py
@@ -15,6 +15,7 @@
 import functools
 import uuid
 
+import debtcollector.removals
 import six
 import testtools
 
@@ -61,6 +62,7 @@
     return decorator
 
 
+@debtcollector.removals.remove(removal_version='Queens')
 class skip_unless_attr(object):
     """Decorator to skip tests if a specified attr does not exists or False"""
     def __init__(self, attr, msg=None):
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
index adff244..c167d81 100644
--- a/tempest/lib/services/compute/servers_client.py
+++ b/tempest/lib/services/compute/servers_client.py
@@ -1,5 +1,6 @@
 # Copyright 2012 OpenStack Foundation
 # Copyright 2013 Hewlett-Packard Development Company, L.P.
+# Copyright 2017 AT&T Corp.
 # All Rights Reserved.
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -732,3 +733,92 @@
         self.validate_response(security_groups_schema.list_security_groups,
                                resp, body)
         return rest_client.ResponseBody(resp, body)
+
+    def list_tags(self, server_id):
+        """Lists all tags for a server.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/compute/#list-tags
+        """
+        url = 'servers/%s/tags' % server_id
+        resp, body = self.get(url)
+        body = json.loads(body)
+        schema = self.get_schema(self.schema_versions_info)
+        self.validate_response(schema.list_tags, resp, body)
+        return rest_client.ResponseBody(resp, body)
+
+    def update_all_tags(self, server_id, tags):
+        """Replaces all tags on specified server with the new set of tags.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/compute/#replace-tags
+
+        :param tags: List of tags to replace current server tags with.
+        """
+        url = 'servers/%s/tags' % server_id
+        put_body = {'tags': tags}
+        resp, body = self.put(url, json.dumps(put_body))
+        body = json.loads(body)
+        schema = self.get_schema(self.schema_versions_info)
+        self.validate_response(schema.update_all_tags, resp, body)
+        return rest_client.ResponseBody(resp, body)
+
+    def delete_all_tags(self, server_id):
+        """Deletes all tags from the specified server.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/compute/#delete-all-tags
+        """
+        url = 'servers/%s/tags' % server_id
+        resp, body = self.delete(url)
+        schema = self.get_schema(self.schema_versions_info)
+        self.validate_response(schema.delete_all_tags, resp, body)
+        return rest_client.ResponseBody(resp, body)
+
+    def check_tag_existence(self, server_id, tag):
+        """Checks tag existence on the server.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/compute/#check-tag-existence
+
+        :param tag: Check for existence of tag on specified server.
+        """
+        url = 'servers/%s/tags/%s' % (server_id, tag)
+        resp, body = self.get(url)
+        schema = self.get_schema(self.schema_versions_info)
+        self.validate_response(schema.check_tag_existence, resp, body)
+        return rest_client.ResponseBody(resp, body)
+
+    def update_tag(self, server_id, tag):
+        """Adds a single tag to the server if server has no specified tag.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/compute/#add-a-single-tag
+
+        :param tag: Tag to be added to the specified server.
+        """
+        url = 'servers/%s/tags/%s' % (server_id, tag)
+        resp, body = self.put(url, None)
+        schema = self.get_schema(self.schema_versions_info)
+        self.validate_response(schema.update_tag, resp, body)
+        return rest_client.ResponseBody(resp, body)
+
+    def delete_tag(self, server_id, tag):
+        """Deletes a single tag from the specified server.
+
+        For a full list of available parameters, please refer to the official
+        API reference:
+        https://developer.openstack.org/api-ref/compute/#delete-a-single-tag
+
+        :param tag: Tag to be removed from the specified server.
+        """
+        url = 'servers/%s/tags/%s' % (server_id, tag)
+        resp, body = self.delete(url)
+        schema = self.get_schema(self.schema_versions_info)
+        self.validate_response(schema.delete_tag, resp, body)
+        return rest_client.ResponseBody(resp, body)
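
A brief, illustrative sequence for the tag calls added above; ``servers_client``
and ``server_id`` are placeholders for a configured ``ServersClient`` (with a
microversion of at least 2.26, matching the schema file used) and an existing
server::

    servers_client.update_tag(server_id, 'database')           # add one tag
    tags = servers_client.list_tags(server_id)['tags']         # ['database']
    servers_client.check_tag_existence(server_id, 'database')  # 204 if present
    servers_client.update_all_tags(server_id, ['web', 'cache'])
    servers_client.delete_tag(server_id, 'web')
    servers_client.delete_all_tags(server_id)
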
diff --git a/tempest/lib/services/identity/v2/services_client.py b/tempest/lib/services/identity/v2/services_client.py
index b3f94aa..47398db 100644
--- a/tempest/lib/services/identity/v2/services_client.py
+++ b/tempest/lib/services/identity/v2/services_client.py
@@ -26,7 +26,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#create-service-admin-extension
+        http://developer.openstack.org/api-ref/identity/v2-ext/#create-service-admin-extension
         """
         post_body = json.dumps({'OS-KSADM:service': kwargs})
         resp, body = self.post('/OS-KSADM/services', post_body)
@@ -47,7 +47,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#list-services-admin-extension
+        http://developer.openstack.org/api-ref/identity/v2-ext/#list-services-admin-extension
         """
         url = '/OS-KSADM/services'
         if params:
diff --git a/tempest/lib/services/identity/v3/role_assignments_client.py b/tempest/lib/services/identity/v3/role_assignments_client.py
index 10de03f..a426e69 100644
--- a/tempest/lib/services/identity/v3/role_assignments_client.py
+++ b/tempest/lib/services/identity/v3/role_assignments_client.py
@@ -26,7 +26,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/identity/v3/?expanded=list-effective-role-assignments-detail
+        http://developer.openstack.org/api-ref/identity/v3/#list-role-assignments
 
         :param effective: If True, returns the effective assignments, including
                           any assignments gained by virtue of group membership
diff --git a/tempest/lib/services/image/v2/namespace_tags_client.py b/tempest/lib/services/image/v2/namespace_tags_client.py
index ac8b569..a7f8c39 100644
--- a/tempest/lib/services/image/v2/namespace_tags_client.py
+++ b/tempest/lib/services/image/v2/namespace_tags_client.py
@@ -115,5 +115,11 @@
         """
         url = 'metadefs/namespaces/%s/tags' % namespace
         resp, _ = self.delete(url)
-        self.expected_success(200, resp.status)
+
+        # NOTE(rosmaita): Bug 1656183 fixed the success response code for
+        # this call to make it consistent with the other metadefs delete
+        # calls.  Accept both codes in case tempest is being run against
+        # an old Glance.
+        self.expected_success([200, 204], resp.status)
+
         return rest_client.ResponseBody(resp)
diff --git a/tempest/lib/services/image/v2/resource_types_client.py b/tempest/lib/services/image/v2/resource_types_client.py
index 1b6889f..13259d1 100644
--- a/tempest/lib/services/image/v2/resource_types_client.py
+++ b/tempest/lib/services/image/v2/resource_types_client.py
@@ -26,7 +26,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-types
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-resource-types
         """
         url = 'metadefs/resource_types'
         resp, body = self.get(url)
@@ -39,7 +39,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#create-resource-type-association
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#create-resource-type-association
         """
         url = 'metadefs/namespaces/%s/resource_types' % namespace_id
         data = json.dumps(kwargs)
@@ -53,7 +53,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-type-associations
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-resource-type-associations
         """
         url = 'metadefs/namespaces/%s/resource_types' % namespace_id
         resp, body = self.get(url)
@@ -66,7 +66,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#remove-resource-type-association
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-resource-type-association
         """
         url = 'metadefs/namespaces/%s/resource_types/%s' % (namespace_id,
                                                             resource_name)
diff --git a/tempest/lib/services/network/ports_client.py b/tempest/lib/services/network/ports_client.py
index 93138b9..daa15d7 100644
--- a/tempest/lib/services/network/ports_client.py
+++ b/tempest/lib/services/network/ports_client.py
@@ -73,7 +73,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=#bulk-create-ports
+        http://developer.openstack.org/api-ref/networking/v2/index.html#bulk-create-ports
         """
         uri = '/ports'
         return self.create_resource(uri, kwargs)
diff --git a/tempest/lib/services/volume/v2/qos_client.py b/tempest/lib/services/volume/v2/qos_client.py
index 40d4a3f..47d3914 100644
--- a/tempest/lib/services/volume/v2/qos_client.py
+++ b/tempest/lib/services/volume/v2/qos_client.py
@@ -43,9 +43,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/block-storage/v2/index.html
-                                ?expanded=create-qos-specification-detail
-                                #quality-of-service-qos-specifications-qos-specs
+        http://developer.openstack.org/api-ref/block-storage/v2/#create-qos-specification
         """
         post_body = json.dumps({'qos_specs': kwargs})
         resp, body = self.post('qos-specs', post_body)
@@ -81,9 +79,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/block-storage/v2/index.html
-                            ?expanded=set-keys-in-qos-specification-detail
-                            #quality-of-service-qos-specifications-qos-specs
+        http://developer.openstack.org/api-ref/block-storage/v2/#set-keys-in-qos-specification
         """
         put_body = json.dumps({"qos_specs": kwargs})
         resp, body = self.put('qos-specs/%s' % qos_id, put_body)
@@ -98,9 +94,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/block-storage/v2/index.html
-                            ?expanded=unset-keys-in-qos-specification-detail
-                            #quality-of-service-qos-specifications-qos-specs
+        http://developer.openstack.org/api-ref/block-storage/v2/#unset-keys-in-qos-specification
         """
         put_body = json.dumps({'keys': keys})
         resp, body = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index e670216..e5f5f68 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -731,36 +731,6 @@
                         network['id'])
         return network
 
-    def _list_networks(self, *args, **kwargs):
-        """List networks using admin creds """
-        networks_list = self.admin_manager.networks_client.list_networks(
-            *args, **kwargs)
-        return networks_list['networks']
-
-    def _list_subnets(self, *args, **kwargs):
-        """List subnets using admin creds """
-        subnets_list = self.admin_manager.subnets_client.list_subnets(
-            *args, **kwargs)
-        return subnets_list['subnets']
-
-    def _list_routers(self, *args, **kwargs):
-        """List routers using admin creds """
-        routers_list = self.admin_manager.routers_client.list_routers(
-            *args, **kwargs)
-        return routers_list['routers']
-
-    def _list_ports(self, *args, **kwargs):
-        """List ports using admin creds """
-        ports_list = self.admin_manager.ports_client.list_ports(
-            *args, **kwargs)
-        return ports_list['ports']
-
-    def _list_agents(self, *args, **kwargs):
-        """List agents using admin creds """
-        agents_list = self.admin_manager.network_agents_client.list_agents(
-            *args, **kwargs)
-        return agents_list['agents']
-
     def _create_subnet(self, network, subnets_client=None,
                        routers_client=None, namestart='subnet-smoke',
                        **kwargs):
@@ -779,7 +749,8 @@
             :returns: True if subnet with cidr already exists in tenant,
                   False otherwise
             """
-            cidr_in_use = self._list_subnets(tenant_id=tenant_id, cidr=cidr)
+            cidr_in_use = self.admin_manager.subnets_client.list_subnets(
+                tenant_id=tenant_id, cidr=cidr)['subnets']
             return len(cidr_in_use) != 0
 
         ip_version = kwargs.pop('ip_version', 4)
@@ -827,7 +798,8 @@
         return subnet
 
     def _get_server_port_id_and_ip4(self, server, ip_addr=None):
-        ports = self._list_ports(device_id=server['id'], fixed_ip=ip_addr)
+        ports = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'], fixed_ip=ip_addr)['ports']
         # A port can have more than one IP address in some cases.
         # If the network is dual-stack (IPv4 + IPv6), this port is associated
         # with 2 subnets
@@ -856,7 +828,8 @@
         return port_map[0]
 
     def _get_network_by_name(self, network_name):
-        net = self._list_networks(name=network_name)
+        net = self.admin_manager.networks_client.list_networks(
+            name=network_name)['networks']
         self.assertNotEqual(len(net), 0,
                             "Unable to get network by name: %s" % network_name)
         return net[0]
@@ -939,7 +912,7 @@
         # The target login is assumed to have been configured for
         # key-based authentication by cloud-init.
         try:
-            for net_name, ip_addresses in server['addresses'].items():
+            for ip_addresses in server['addresses'].values():
                 for ip_address in ip_addresses:
                     self.check_vm_connectivity(ip_address['addr'],
                                                username,
@@ -953,14 +926,15 @@
 
     def _check_remote_connectivity(self, source, dest, should_succeed=True,
                                    nic=None):
-        """check ping server via source ssh connection
+        """assert ping server via source ssh connection
+
+        Note: This is an internal method.  Use check_remote_connectivity
+        instead.
 
         :param source: RemoteClient: an ssh connection from which to ping
         :param dest: an IP to ping against
         :param should_succeed: boolean should ping succeed or not
         :param nic: specific network interface to ping from
-        :returns: boolean -- should_succeed == ping
-        :returns: ping is false if ping failed
         """
         def ping_remote():
             try:
@@ -975,6 +949,25 @@
                                           CONF.validation.ping_timeout,
                                           1)
 
+    def check_remote_connectivity(self, source, dest, should_succeed=True,
+                                  nic=None):
+        """assert ping server via source ssh connection
+
+        :param source: RemoteClient: an ssh connection from which to ping
+        :param dest: an IP to ping against
+        :param should_succeed: boolean should ping succeed or not
+        :param nic: specific network interface to ping from
+        """
+        result = self._check_remote_connectivity(source, dest, should_succeed,
+                                                 nic)
+        source_host = source.ssh_client.host
+        if should_succeed:
+            msg = "Timed out waiting for %s to become reachable from %s" \
+                % (dest, source_host)
+        else:
+            msg = "%s is reachable from %s" % (dest, source_host)
+        self.assertTrue(result, msg)
+
     def _create_security_group(self, security_group_rules_client=None,
                                tenant_id=None,
                                namestart='secgroup-smoke',
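
The manager.py change above splits connectivity checking into a private helper that returns a boolean and a public check_remote_connectivity() that asserts on it with a message naming both endpoints. A standalone sketch of that wrap-and-assert shape (the function name and the ping_result argument are hypothetical; the real helper drives a ping over SSH with a timeout)::

    def check_reachability(ping_result, dest, source_host, should_succeed=True):
        # ping_result mirrors _check_remote_connectivity(): True when the
        # observed reachability matches the expectation in should_succeed.
        if should_succeed:
            msg = ("Timed out waiting for %s to become reachable from %s"
                   % (dest, source_host))
        else:
            msg = "%s is reachable from %s" % (dest, source_host)
        if not ping_result:
            raise AssertionError(msg)
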
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 95c2d32..b0b516a 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -82,7 +82,7 @@
         aggregate = self.aggregates_client.set_metadata(aggregate['id'],
                                                         metadata=meta)
 
-        for key, value in meta.items():
+        for key in meta.keys():
             self.assertEqual(meta[key],
                              aggregate['aggregate']['metadata'][key])
 
@@ -96,6 +96,7 @@
         return aggregate
 
     @decorators.idempotent_id('cb2b4c4f-0c7c-4164-bdde-6285b302a081')
+    @test.attr(type='slow')
     @test.services('compute')
     def test_aggregate_basic_ops(self):
         self.useFixture(fixtures.LockFixture('availability_zone'))
diff --git a/tempest/scenario/test_encrypted_cinder_volumes.py b/tempest/scenario/test_encrypted_cinder_volumes.py
index da29485..a05b1b1 100644
--- a/tempest/scenario/test_encrypted_cinder_volumes.py
+++ b/tempest/scenario/test_encrypted_cinder_volumes.py
@@ -62,6 +62,7 @@
         self.nova_volume_detach(server, attached_volume)
 
     @decorators.idempotent_id('79165fb4-5534-4b9d-8429-97ccffb8f86e')
+    @test.attr(type='slow')
     @test.services('compute', 'volume', 'image')
     def test_encrypted_cinder_volumes_luks(self):
         server = self.launch_instance()
@@ -71,6 +72,7 @@
         self.attach_detach_volume(server, volume)
 
     @decorators.idempotent_id('cbc752ed-b716-4717-910f-956cce965722')
+    @test.attr(type='slow')
     @test.services('compute', 'volume', 'image')
     def test_encrypted_cinder_volumes_cryptsetup(self):
         server = self.launch_instance()
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 27c45cb..5fee801 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -94,7 +94,7 @@
             raise exceptions.TimeoutException(msg)
 
     def _get_floating_ip_in_server_addresses(self, floating_ip, server):
-        for network_name, addresses in server['addresses'].items():
+        for addresses in server['addresses'].values():
             for address in addresses:
                 if (address['OS-EXT-IPS:type'] == 'floating' and
                         address['addr'] == floating_ip['ip']):
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index 1196659..6665fa7 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -104,6 +104,7 @@
         return body['OS-EXT-SRV-ATTR:host']
 
     @decorators.idempotent_id('61f1aa9a-1573-410e-9054-afa557cab021')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_stop_start(self):
         keypair = self.create_keypair()
@@ -129,6 +130,7 @@
             server, keypair, floating_ip)
 
     @decorators.idempotent_id('88a529c2-1daa-4c85-9aec-d541ba3eb699')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_rebuild(self):
         keypair = self.create_keypair()
@@ -143,6 +145,7 @@
     @decorators.idempotent_id('2b2642db-6568-4b35-b812-eceed3fa20ce')
     @testtools.skipUnless(CONF.compute_feature_enabled.pause,
                           'Pause is not available.')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_pause_unpause(self):
         keypair = self.create_keypair()
@@ -160,6 +163,7 @@
     @decorators.idempotent_id('5cdf9499-541d-4923-804e-b9a60620a7f0')
     @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
                           'Suspend is not available.')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_suspend_resume(self):
         keypair = self.create_keypair()
@@ -177,6 +181,7 @@
     @decorators.idempotent_id('719eb59d-2f42-4b66-b8b1-bb1254473967')
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize is not available.')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_resize(self):
         resize_flavor = CONF.compute.flavor_ref_alt
@@ -200,6 +205,7 @@
     @testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
                           'Less than 2 compute nodes, skipping multinode '
                           'tests.')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_cold_migration(self):
         keypair = self.create_keypair()
@@ -225,6 +231,7 @@
     @testtools.skipUnless(CONF.compute.min_compute_nodes > 1,
                           'Less than 2 compute nodes, skipping multinode '
                           'tests.')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_server_connectivity_cold_migration_revert(self):
         keypair = self.create_keypair()
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 4dae564..85d7e37 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -127,23 +127,23 @@
         via checking the result of list_[networks,routers,subnets]
         """
 
-        seen_nets = self._list_networks()
-        seen_names = [n['name'] for n in seen_nets]
-        seen_ids = [n['id'] for n in seen_nets]
+        seen_nets = self.admin_manager.networks_client.list_networks()
+        seen_names = [n['name'] for n in seen_nets['networks']]
+        seen_ids = [n['id'] for n in seen_nets['networks']]
         self.assertIn(self.network['name'], seen_names)
         self.assertIn(self.network['id'], seen_ids)
 
         if self.subnet:
-            seen_subnets = self._list_subnets()
-            seen_net_ids = [n['network_id'] for n in seen_subnets]
-            seen_subnet_ids = [n['id'] for n in seen_subnets]
+            seen_subnets = self.admin_manager.subnets_client.list_subnets()
+            seen_net_ids = [n['network_id'] for n in seen_subnets['subnets']]
+            seen_subnet_ids = [n['id'] for n in seen_subnets['subnets']]
             self.assertIn(self.network['id'], seen_net_ids)
             self.assertIn(self.subnet['id'], seen_subnet_ids)
 
         if self.router:
-            seen_routers = self._list_routers()
-            seen_router_ids = [n['id'] for n in seen_routers]
-            seen_router_names = [n['name'] for n in seen_routers]
+            seen_routers = self.admin_manager.routers_client.list_routers()
+            seen_router_ids = [n['id'] for n in seen_routers['routers']]
+            seen_router_names = [n['name'] for n in seen_routers['routers']]
             self.assertIn(self.router['name'],
                           seen_router_names)
             self.assertIn(self.router['id'],
@@ -240,7 +240,8 @@
             ip_address, private_key=private_key)
         old_nic_list = self._get_server_nics(ssh_client)
         # get a port from a list of one item
-        port_list = self._list_ports(device_id=server['id'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'])['ports']
         self.assertEqual(1, len(port_list))
         old_port = port_list[0]
         interface = self.interface_client.create_interface(
@@ -253,9 +254,12 @@
                         server['id'], interface['port_id'])
 
         def check_ports():
-            self.new_port_list = [port for port in
-                                  self._list_ports(device_id=server['id'])
-                                  if port['id'] != old_port['id']]
+            self.new_port_list = [
+                port for port in
+                self.admin_manager.ports_client.list_ports(
+                    device_id=server['id'])['ports']
+                if port['id'] != old_port['id']
+            ]
             return len(self.new_port_list) == 1
 
         if not test_utils.call_until_true(
@@ -281,9 +285,9 @@
                                               % CONF.network.build_timeout)
 
         num, new_nic = self.diff_list[0]
-        ssh_client.assign_static_ip(nic=new_nic,
-                                    addr=new_port['fixed_ips'][0][
-                                        'ip_address'])
+        ssh_client.assign_static_ip(
+            nic=new_nic, addr=new_port['fixed_ips'][0]['ip_address'],
+            network_mask_bits=CONF.network.project_network_mask_bits)
         ssh_client.set_nic_state(nic=new_nic)
 
     def _get_server_nics(self, ssh_client):
@@ -301,10 +305,13 @@
         floating_ip, server = self.floating_ip_tuple
         # get internal ports' ips:
         # get all network ports in the new network
-        internal_ips = (p['fixed_ips'][0]['ip_address'] for p in
-                        self._list_ports(tenant_id=server['tenant_id'],
-                                         network_id=network['id'])
-                        if p['device_owner'].startswith('network'))
+        internal_ips = (
+            p['fixed_ips'][0]['ip_address'] for p in
+            self.admin_manager.ports_client.list_ports(
+                tenant_id=server['tenant_id'],
+                network_id=network['id'])['ports']
+            if p['device_owner'].startswith('network')
+        )
 
         self._check_server_connectivity(floating_ip,
                                         internal_ips,
@@ -320,8 +327,11 @@
         # We ping the external IP from the instance using its floating IP
         # which is always IPv4, so we must only test connectivity to
         # external IPv4 IPs if the external network is dualstack.
-        v4_subnets = [s for s in self._list_subnets(
-            network_id=CONF.network.public_network_id) if s['ip_version'] == 4]
+        v4_subnets = [
+            s for s in self.admin_manager.subnets_client.list_subnets(
+                network_id=CONF.network.public_network_id)['subnets']
+            if s['ip_version'] == 4
+        ]
         self.assertEqual(1, len(v4_subnets),
                          "Found %d IPv4 subnets" % len(v4_subnets))
 
@@ -337,20 +347,8 @@
             ip_address, private_key=private_key)
 
         for remote_ip in address_list:
-            if should_connect:
-                msg = ("Timed out waiting for %s to become "
-                       "reachable") % remote_ip
-            else:
-                msg = "ip address %s is reachable" % remote_ip
-            try:
-                self.assertTrue(self._check_remote_connectivity
-                                (ssh_source, remote_ip, should_connect),
-                                msg)
-            except Exception:
-                LOG.exception("Unable to access {dest} via ssh to "
-                              "floating-ip {src}".format(dest=remote_ip,
-                                                         src=floating_ip))
-                raise
+            self.check_remote_connectivity(ssh_source, remote_ip,
+                                           should_connect)
 
     @test.attr(type='smoke')
     @decorators.idempotent_id('f323b3ba-82f8-4db7-8ea6-6a895869ec49')
@@ -408,6 +406,7 @@
     @decorators.idempotent_id('b158ea55-472e-4086-8fa9-c64ac0c6c1d0')
     @testtools.skipUnless(test.is_extension_enabled('net-mtu', 'network'),
                           'No way to calculate MTU for networks')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_mtu_sized_frames(self):
         """Validate that network MTU sized frames fit through."""
@@ -420,6 +419,7 @@
                       'Connectivity can only be tested when in a '
                       'multitenant network environment')
     @decorators.skip_because(bug="1610994")
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_connectivity_between_vms_on_different_networks(self):
         """Test connectivity between VMs on different networks
@@ -495,6 +495,7 @@
     @testtools.skipIf(CONF.network.shared_physical_network,
                       'Router state can be altered only with multitenant '
                       'networks capabilities')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_update_router_admin_state(self):
         """Test to update admin state up of router
@@ -528,6 +529,7 @@
                       'network isolation not available')
     @testtools.skipUnless(CONF.scenario.dhcp_client,
                           "DHCP client is not available.")
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_subnet_details(self):
         """Tests that subnet's extra configuration details are affecting VMs.
@@ -594,7 +596,8 @@
             # NOTE(amuller): we are renewing the lease as part of the retry
             # because Neutron updates dnsmasq asynchronously after the
             # subnet-update API call returns.
-            ssh_client.renew_lease(fixed_ip=floating_ip['fixed_ip_address'])
+            ssh_client.renew_lease(fixed_ip=floating_ip['fixed_ip_address'],
+                                   dhcp_client=CONF.scenario.dhcp_client)
             if ssh_client.get_dns_servers() != [alt_dns_server]:
                 LOG.debug("Failed to update DNS nameservers")
                 return False
@@ -610,6 +613,7 @@
     @testtools.skipUnless(CONF.network_feature_enabled.port_admin_state_change,
                           "Changing a port's admin state is not supported "
                           "by the test environment")
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_update_instance_port_admin_state(self):
         """Test to update admin_state_up attribute of instance port
@@ -624,7 +628,8 @@
         self._setup_network_and_servers()
         floating_ip, server = self.floating_ip_tuple
         server_id = server['id']
-        port_id = self._list_ports(device_id=server_id)[0]['id']
+        port_id = self.admin_manager.ports_client.list_ports(
+            device_id=server_id)['ports'][0]['id']
         server_pip = server['addresses'][self.network['name']][0]['addr']
 
         server2 = self._create_server(self.network)
@@ -637,23 +642,24 @@
         self.check_public_network_connectivity(
             should_connect=True, msg="before updating "
             "admin_state_up of instance port to False")
-        self._check_remote_connectivity(ssh_client, dest=server_pip,
-                                        should_succeed=True)
+        self.check_remote_connectivity(ssh_client, dest=server_pip,
+                                       should_succeed=True)
         self.ports_client.update_port(port_id, admin_state_up=False)
         self.check_public_network_connectivity(
             should_connect=False, msg="after updating "
             "admin_state_up of instance port to False",
             should_check_floating_ip_status=False)
-        self._check_remote_connectivity(ssh_client, dest=server_pip,
-                                        should_succeed=False)
+        self.check_remote_connectivity(ssh_client, dest=server_pip,
+                                       should_succeed=False)
         self.ports_client.update_port(port_id, admin_state_up=True)
         self.check_public_network_connectivity(
             should_connect=True, msg="after updating "
             "admin_state_up of instance port to True")
-        self._check_remote_connectivity(ssh_client, dest=server_pip,
-                                        should_succeed=True)
+        self.check_remote_connectivity(ssh_client, dest=server_pip,
+                                       should_succeed=True)
 
     @decorators.idempotent_id('759462e1-8535-46b0-ab3a-33aa45c55aaa')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_preserve_preexisting_port(self):
         """Test preserve pre-existing port
@@ -677,8 +683,8 @@
                              'Server should have been created from a '
                              'pre-existing port.')
         # Assert the port is bound to the server.
-        port_list = self._list_ports(device_id=server['id'],
-                                     network_id=self.network['id'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'], network_id=self.network['id'])['ports']
         self.assertEqual(1, len(port_list),
                          'There should only be one port created for '
                          'server %s.' % server['id'])
@@ -696,8 +702,8 @@
         # Boot another server with the same port to make sure nothing was
         # left around that could cause issues.
         server = self._create_server(self.network, port['id'])
-        port_list = self._list_ports(device_id=server['id'],
-                                     network_id=self.network['id'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'], network_id=self.network['id'])['ports']
         self.assertEqual(1, len(port_list),
                          'There should only be one port created for '
                          'server %s.' % server['id'])
@@ -705,6 +711,7 @@
 
     @test.requires_ext(service='network', extension='l3_agent_scheduler')
     @decorators.idempotent_id('2e788c46-fb3f-4ac9-8f82-0561555bea73')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_router_rescheduling(self):
         """Tests that router can be removed from agent and add to a new agent.
@@ -727,9 +734,11 @@
         unschedule_router = (self.admin_manager.network_agents_client.
                              delete_router_from_l3_agent)
 
-        agent_list_alive = set(a["id"] for a in
-                               self._list_agents(agent_type="L3 agent") if
-                               a["alive"] is True)
+        agent_list_alive = set(
+            a["id"] for a in
+            self.admin_manager.network_agents_client.list_agents(
+                agent_type="L3 agent")['agents'] if a["alive"] is True
+        )
         self._setup_network_and_servers()
 
         # NOTE(kevinbenton): we have to use the admin credentials to check
@@ -782,6 +791,7 @@
     @testtools.skipUnless(CONF.compute_feature_enabled.interface_attach,
                           'NIC hotplug not available')
     @decorators.idempotent_id('7c0bb1a2-d053-49a4-98f9-ca1a1d849f63')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_port_security_macspoofing_port(self):
         """Tests port_security extension enforces mac spoofing
@@ -811,8 +821,8 @@
         self._create_new_network()
         self._hotplug_server()
         fip, server = self.floating_ip_tuple
-        new_ports = self._list_ports(device_id=server["id"],
-                                     network_id=self.new_net["id"])
+        new_ports = self.admin_manager.ports_client.list_ports(
+            device_id=server["id"], network_id=self.new_net["id"])['ports']
         spoof_port = new_ports[0]
         private_key = self._get_server_key(server)
         ssh_client = self.get_remote_client(fip['floating_ip_address'],
@@ -820,15 +830,15 @@
         spoof_nic = ssh_client.get_nic_name_by_mac(spoof_port["mac_address"])
         peer = self._create_server(self.new_net)
         peer_address = peer['addresses'][self.new_net['name']][0]['addr']
-        self._check_remote_connectivity(ssh_client, dest=peer_address,
-                                        nic=spoof_nic, should_succeed=True)
+        self.check_remote_connectivity(ssh_client, dest=peer_address,
+                                       nic=spoof_nic, should_succeed=True)
         ssh_client.set_mac_address(spoof_nic, spoof_mac)
         new_mac = ssh_client.get_mac_address(nic=spoof_nic)
         self.assertEqual(spoof_mac, new_mac)
-        self._check_remote_connectivity(ssh_client, dest=peer_address,
-                                        nic=spoof_nic, should_succeed=False)
+        self.check_remote_connectivity(ssh_client, dest=peer_address,
+                                       nic=spoof_nic, should_succeed=False)
         self.ports_client.update_port(spoof_port["id"],
                                       port_security_enabled=False,
                                       security_groups=[])
-        self._check_remote_connectivity(ssh_client, dest=peer_address,
-                                        nic=spoof_nic, should_succeed=True)
+        self.check_remote_connectivity(ssh_client, dest=peer_address,
+                                       nic=spoof_nic, should_succeed=True)
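
Every call-site change in this file follows the same pattern as the helper removal in manager.py: the old _list_networks/_list_subnets/_list_ports/_list_routers/_list_agents wrappers are replaced by direct calls on the admin manager's clients, unwrapping the top-level response key. A small sketch of the resulting shape as plain functions, assuming the client arguments are the corresponding tempest.lib network service clients::

    def list_server_ports(ports_client, server_id):
        """Return the port dicts bound to a server."""
        return ports_client.list_ports(device_id=server_id)['ports']

    def list_alive_l3_agent_ids(network_agents_client):
        """Return the ids of L3 agents currently reported alive."""
        agents = network_agents_client.list_agents(
            agent_type="L3 agent")['agents']
        return {a['id'] for a in agents if a['alive']}
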
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index 2d6ea75..d8a1363 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -110,7 +110,7 @@
     @staticmethod
     def define_server_ips(srv):
         ips = {'4': None, '6': []}
-        for net_name, nics in srv['addresses'].items():
+        for nics in srv['addresses'].values():
             for nic in nics:
                 if nic['version'] == 6:
                     ips['6'].append(nic['addr'])
@@ -143,9 +143,11 @@
         @param ssh: RemoteClient ssh instance to server
         @param sid: server uuid
         """
-        ports = [p["mac_address"] for p in
-                 self._list_ports(device_id=sid,
-                                  network_id=self.network_v6['id'])]
+        ports = [
+            p["mac_address"] for p in
+            self.admin_manager.ports_client.list_ports(
+                device_id=sid, network_id=self.network_v6['id'])['ports']
+        ]
         self.assertEqual(1, len(ports),
                          message=("Multiple IPv6 ports found on network %s. "
                                   "ports: %s")
@@ -189,25 +191,18 @@
             self.assertTrue(test_utils.call_until_true(srv2_v6_addr_assigned,
                             CONF.validation.ping_timeout, 1))
 
-        self._check_connectivity(sshv4_1, ips_from_api_2['4'])
-        self._check_connectivity(sshv4_2, ips_from_api_1['4'])
+        self.check_remote_connectivity(sshv4_1, ips_from_api_2['4'])
+        self.check_remote_connectivity(sshv4_2, ips_from_api_1['4'])
 
         for i in range(n_subnets6):
-            self._check_connectivity(sshv4_1,
-                                     ips_from_api_2['6'][i])
-            self._check_connectivity(sshv4_1,
-                                     self.subnets_v6[i]['gateway_ip'])
-            self._check_connectivity(sshv4_2,
-                                     ips_from_api_1['6'][i])
-            self._check_connectivity(sshv4_2,
-                                     self.subnets_v6[i]['gateway_ip'])
-
-    def _check_connectivity(self, source, dest):
-        self.assertTrue(
-            self._check_remote_connectivity(source, dest),
-            "Timed out waiting for %s to become reachable from %s" %
-            (dest, source.ssh_client.host)
-        )
+            self.check_remote_connectivity(sshv4_1,
+                                           ips_from_api_2['6'][i])
+            self.check_remote_connectivity(sshv4_1,
+                                           self.subnets_v6[i]['gateway_ip'])
+            self.check_remote_connectivity(sshv4_2,
+                                           ips_from_api_1['6'][i])
+            self.check_remote_connectivity(sshv4_2,
+                                           self.subnets_v6[i]['gateway_ip'])
 
     @test.attr(type='slow')
     @decorators.idempotent_id('2c92df61-29f0-4eaa-bee3-7c65bef62a43')
@@ -245,6 +240,7 @@
     def test_dualnet_dhcp6_stateless_from_os(self):
         self._prepare_and_test(address6_mode='dhcpv6-stateless', dualnet=True)
 
+    @test.attr(type='slow')
     @decorators.idempotent_id('cf1c4425-766b-45b8-be35-e2959728eb00')
     @test.services('compute', 'network')
     def test_dualnet_multi_prefix_dhcpv6_stateless(self):
diff --git a/tempest/scenario/test_object_storage_basic_ops.py b/tempest/scenario/test_object_storage_basic_ops.py
index c989e01..7fd8c91 100644
--- a/tempest/scenario/test_object_storage_basic_ops.py
+++ b/tempest/scenario/test_object_storage_basic_ops.py
@@ -46,6 +46,7 @@
         self.delete_container(container_name)
 
     @decorators.idempotent_id('916c7111-cb1f-44b2-816d-8f760e4ea910')
+    @test.attr(type='slow')
     @test.services('object_storage')
     def test_swift_acl_anonymous_download(self):
         """This test will cover below steps:
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 5565cb8..fa12f33 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -220,22 +220,24 @@
         # Checks that we see the newly created network/subnet/router via
         # checking the result of list_[networks,routers,subnets]
         # Check that (router, subnet) couple exist in port_list
-        seen_nets = self._list_networks()
-        seen_names = [n['name'] for n in seen_nets]
-        seen_ids = [n['id'] for n in seen_nets]
+        seen_nets = self.admin_manager.networks_client.list_networks()
+        seen_names = [n['name'] for n in seen_nets['networks']]
+        seen_ids = [n['id'] for n in seen_nets['networks']]
 
         self.assertIn(tenant.network['name'], seen_names)
         self.assertIn(tenant.network['id'], seen_ids)
 
-        seen_subnets = [(n['id'], n['cidr'], n['network_id'])
-                        for n in self._list_subnets()]
+        seen_subnets = [
+            (n['id'], n['cidr'], n['network_id']) for n in
+            self.admin_manager.subnets_client.list_subnets()['subnets']
+        ]
         mysubnet = (tenant.subnet['id'], tenant.subnet['cidr'],
                     tenant.network['id'])
         self.assertIn(mysubnet, seen_subnets)
 
-        seen_routers = self._list_routers()
-        seen_router_ids = [n['id'] for n in seen_routers]
-        seen_router_names = [n['name'] for n in seen_routers]
+        seen_routers = self.admin_manager.routers_client.list_routers()
+        seen_router_ids = [n['id'] for n in seen_routers['routers']]
+        seen_router_names = [n['name'] for n in seen_routers['routers']]
 
         self.assertIn(tenant.router['name'], seen_router_names)
         self.assertIn(tenant.router['id'], seen_router_ids)
@@ -243,9 +245,11 @@
         myport = (tenant.router['id'], tenant.subnet['id'])
         router_ports = [
             (i['device_id'], f['subnet_id'])
-            for i in self._list_ports(device_id=tenant.router['id'])
+            for i in self.admin_manager.ports_client.list_ports(
+                device_id=tenant.router['id'])['ports']
             if net_info.is_router_interface_port(i)
-            for f in i['fixed_ips']]
+            for f in i['fixed_ips']
+        ]
 
         self.assertIn(myport, router_ports)
 
@@ -364,20 +368,12 @@
             access_point_ssh, private_key=private_key)
         return access_point_ssh
 
-    def _check_connectivity(self, access_point, ip, should_succeed=True):
-        if should_succeed:
-            msg = "Timed out waiting for %s to become reachable" % ip
-        else:
-            msg = "%s is reachable" % ip
-        self.assertTrue(self._check_remote_connectivity(access_point, ip,
-                                                        should_succeed), msg)
-
     def _test_in_tenant_block(self, tenant):
         access_point_ssh = self._connect_to_access_point(tenant)
         for server in tenant.servers:
-            self._check_connectivity(access_point=access_point_ssh,
-                                     ip=self._get_server_ip(server),
-                                     should_succeed=False)
+            self.check_remote_connectivity(source=access_point_ssh,
+                                           dest=self._get_server_ip(server),
+                                           should_succeed=False)
 
     def _test_in_tenant_allow(self, tenant):
         ruleset = dict(
@@ -392,8 +388,8 @@
         )
         access_point_ssh = self._connect_to_access_point(tenant)
         for server in tenant.servers:
-            self._check_connectivity(access_point=access_point_ssh,
-                                     ip=self._get_server_ip(server))
+            self.check_remote_connectivity(source=access_point_ssh,
+                                           dest=self._get_server_ip(server))
 
     def _test_cross_tenant_block(self, source_tenant, dest_tenant):
         # if public router isn't defined, then dest_tenant access is via
@@ -401,8 +397,8 @@
         access_point_ssh = self._connect_to_access_point(source_tenant)
         ip = self._get_server_ip(dest_tenant.access_point,
                                  floating=self.floating_ip_access)
-        self._check_connectivity(access_point=access_point_ssh, ip=ip,
-                                 should_succeed=False)
+        self.check_remote_connectivity(source=access_point_ssh, dest=ip,
+                                       should_succeed=False)
 
     def _test_cross_tenant_allow(self, source_tenant, dest_tenant):
         """check for each direction:
@@ -423,7 +419,7 @@
         access_point_ssh = self._connect_to_access_point(source_tenant)
         ip = self._get_server_ip(dest_tenant.access_point,
                                  floating=self.floating_ip_access)
-        self._check_connectivity(access_point_ssh, ip)
+        self.check_remote_connectivity(access_point_ssh, ip)
 
         # test that reverse traffic is still blocked
         self._test_cross_tenant_block(dest_tenant, source_tenant)
@@ -440,7 +436,7 @@
         access_point_ssh_2 = self._connect_to_access_point(dest_tenant)
         ip = self._get_server_ip(source_tenant.access_point,
                                  floating=self.floating_ip_access)
-        self._check_connectivity(access_point_ssh_2, ip)
+        self.check_remote_connectivity(access_point_ssh_2, ip)
 
     def _verify_mac_addr(self, tenant):
         """Verify that VM has the same ip, mac as listed in port"""
@@ -450,7 +446,8 @@
         mac_addr = mac_addr.strip().lower()
         # Get the fixed_ips and mac_address fields of all ports. Select
         # only those two columns to reduce the size of the response.
-        port_list = self._list_ports(fields=['fixed_ips', 'mac_address'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            fields=['fixed_ips', 'mac_address'])['ports']
         port_detail_list = [
             (port['fixed_ips'][0]['subnet_id'],
              port['fixed_ips'][0]['ip_address'],
@@ -497,6 +494,7 @@
             raise
 
     @decorators.idempotent_id('f4d556d7-1526-42ad-bafb-6bebf48568f6')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_port_update_new_security_group(self):
         """Verifies the traffic after updating the vm port
@@ -532,24 +530,26 @@
         # Check connectivity failure with default security group
         try:
             access_point_ssh = self._connect_to_access_point(new_tenant)
-            self._check_connectivity(access_point=access_point_ssh,
-                                     ip=self._get_server_ip(server),
-                                     should_succeed=False)
+            self.check_remote_connectivity(source=access_point_ssh,
+                                           dest=self._get_server_ip(server),
+                                           should_succeed=False)
             server_id = server['id']
-            port_id = self._list_ports(device_id=server_id)[0]['id']
+            port_id = self.admin_manager.ports_client.list_ports(
+                device_id=server_id)['ports'][0]['id']
 
             # update port with new security group and check connectivity
             self.ports_client.update_port(port_id, security_groups=[
                 new_tenant.security_groups['new_sg']['id']])
-            self._check_connectivity(
-                access_point=access_point_ssh,
-                ip=self._get_server_ip(server))
+            self.check_remote_connectivity(
+                source=access_point_ssh,
+                dest=self._get_server_ip(server))
         except Exception:
             for tenant in self.tenants.values():
                 self._log_console_output(servers=tenant.servers)
             raise
 
     @decorators.idempotent_id('d2f77418-fcc4-439d-b935-72eca704e293')
+    @test.attr(type='slow')
     @test.services('compute', 'network')
     def test_multiple_security_groups(self):
         """Verify multiple security groups and checks that rules
@@ -581,6 +581,7 @@
                                    private_key=private_key,
                                    should_connect=True)
 
+    @test.attr(type='slow')
     @test.requires_ext(service='network', extension='port-security')
     @decorators.idempotent_id('7c811dcc-263b-49a3-92d2-1b4d8405f50c')
     @test.services('compute', 'network')
@@ -598,28 +599,30 @@
 
         access_point_ssh = self._connect_to_access_point(new_tenant)
         server_id = server['id']
-        port_id = self._list_ports(device_id=server_id)[0]['id']
+        port_id = self.admin_manager.ports_client.list_ports(
+            device_id=server_id)['ports'][0]['id']
 
         # Flip the port's port security and check connectivity
         try:
             self.ports_client.update_port(port_id,
                                           port_security_enabled=True,
                                           security_groups=[])
-            self._check_connectivity(access_point=access_point_ssh,
-                                     ip=self._get_server_ip(server),
-                                     should_succeed=False)
+            self.check_remote_connectivity(source=access_point_ssh,
+                                           dest=self._get_server_ip(server),
+                                           should_succeed=False)
 
             self.ports_client.update_port(port_id,
                                           port_security_enabled=False,
                                           security_groups=[])
-            self._check_connectivity(
-                access_point=access_point_ssh,
-                ip=self._get_server_ip(server))
+            self.check_remote_connectivity(
+                source=access_point_ssh,
+                dest=self._get_server_ip(server))
         except Exception:
             for tenant in self.tenants.values():
                 self._log_console_output(servers=tenant.servers)
             raise
 
+    @test.attr(type='slow')
     @test.requires_ext(service='network', extension='port-security')
     @decorators.idempotent_id('13ccf253-e5ad-424b-9c4a-97b88a026699')
     @testtools.skipUnless(
@@ -642,7 +645,8 @@
         sec_groups = []
         server = self._create_server(name, tenant, sec_groups)
         server_id = server['id']
-        ports = self._list_ports(device_id=server_id)
+        ports = self.admin_manager.ports_client.list_ports(
+            device_id=server_id)['ports']
         self.assertEqual(1, len(ports))
         for port in ports:
             self.assertEmpty(port['security_groups'],
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 4d9e59c..1960e9a 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -48,6 +48,7 @@
         cls.set_network_resources()
         super(TestServerAdvancedOps, cls).setup_credentials()
 
+    @test.attr(type='slow')
     @decorators.idempotent_id('e6c28180-7454-4b59-b188-0257af08a63b')
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize is not available.')
@@ -69,37 +70,22 @@
         waiters.wait_for_server_status(self.servers_client, instance_id,
                                        'ACTIVE')
 
+    @test.attr(type='slow')
     @decorators.idempotent_id('949da7d5-72c8-4808-8802-e3d70df98e2c')
     @testtools.skipUnless(CONF.compute_feature_enabled.suspend,
                           'Suspend is not available.')
     @test.services('compute')
     def test_server_sequence_suspend_resume(self):
         # We create an instance for use in this test
-        instance = self.create_server()
-        instance_id = instance['id']
-        LOG.debug("Suspending instance %s. Current status: %s",
-                  instance_id, instance['status'])
-        self.servers_client.suspend_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'SUSPENDED')
-        fetched_instance = (self.servers_client.show_server(instance_id)
-                            ['server'])
-        LOG.debug("Resuming instance %s. Current status: %s",
-                  instance_id, fetched_instance['status'])
-        self.servers_client.resume_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'ACTIVE')
-        fetched_instance = (self.servers_client.show_server(instance_id)
-                            ['server'])
-        LOG.debug("Suspending instance %s. Current status: %s",
-                  instance_id, fetched_instance['status'])
-        self.servers_client.suspend_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'SUSPENDED')
-        fetched_instance = (self.servers_client.show_server(instance_id)
-                            ['server'])
-        LOG.debug("Resuming instance %s. Current status: %s",
-                  instance_id, fetched_instance['status'])
-        self.servers_client.resume_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'ACTIVE')
+        instance_id = self.create_server()['id']
+
+        for _ in range(2):
+            LOG.debug("Suspending instance %s", instance_id)
+            self.servers_client.suspend_server(instance_id)
+            waiters.wait_for_server_status(self.servers_client, instance_id,
+                                           'SUSPENDED')
+
+            LOG.debug("Resuming instance %s", instance_id)
+            self.servers_client.resume_server(instance_id)
+            waiters.wait_for_server_status(self.servers_client, instance_id,
+                                           'ACTIVE')
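
The rewritten test above collapses two copy-pasted suspend/resume sequences into a loop around servers_client calls plus waiters.wait_for_server_status(). The same pattern, pulled out as a helper for clarity (a sketch; the helper name is hypothetical and the client and server id come from the test's fixtures)::

    from tempest.common import waiters

    def cycle_suspend_resume(servers_client, server_id, cycles=2):
        for _ in range(cycles):
            servers_client.suspend_server(server_id)
            waiters.wait_for_server_status(servers_client, server_id,
                                           'SUSPENDED')
            servers_client.resume_server(server_id)
            waiters.wait_for_server_status(servers_client, server_id, 'ACTIVE')
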
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index 75cef88..9e763f8 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -74,6 +74,7 @@
                                         private_key=keypair['private_key'])
         self.assertEqual(timestamp, timestamp2)
 
+    @test.attr(type='slow')
     @decorators.idempotent_id('1164e700-0af0-4a4c-8792-35909a88743c')
     @testtools.skipUnless(CONF.network.public_network_id,
                           'The public_network_id option must be specified.')
@@ -81,6 +82,7 @@
     def test_shelve_instance(self):
         self._create_server_then_shelve_and_unshelve()
 
+    @test.attr(type='slow')
     @decorators.idempotent_id('c1b6318c-b9da-490b-9c67-9339b627271f')
     @testtools.skipUnless(CONF.network.public_network_id,
                           'The public_network_id option must be specified.')
diff --git a/tempest/scenario/test_snapshot_pattern.py b/tempest/scenario/test_snapshot_pattern.py
index 6dedd1d..a699de2 100644
--- a/tempest/scenario/test_snapshot_pattern.py
+++ b/tempest/scenario/test_snapshot_pattern.py
@@ -41,6 +41,7 @@
             raise cls.skipException("Snapshotting is not available.")
 
     @decorators.idempotent_id('608e604b-1d63-4a82-8e3e-91bc665c90b4')
+    @test.attr(type='slow')
     @testtools.skipUnless(CONF.network.public_network_id,
                           'The public_network_id option must be specified.')
     @test.services('compute', 'network', 'image')
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index ef9664d..aabb767 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -88,6 +88,8 @@
                                           CONF.compute.build_interval):
             raise lib_exc.TimeoutException
 
+    @test.attr(type='slow')
+    @decorators.skip_because(bug="1664793")
     @decorators.idempotent_id('10fd234a-515c-41e5-b092-8323060598c5')
     @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
                           'Snapshotting is not available.')
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 9c33b71..8cab19c 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -43,16 +43,13 @@
         return self.create_volume(name=vol_name, imageRef=img_uuid)
 
     def _get_bdm(self, source_id, source_type, delete_on_termination=False):
-        # NOTE(gfidente): the syntax for block_device_mapping is
-        # dev_name=id:type:size:delete_on_terminate
-        # where type needs to be "snap" if the server is booted
-        # from a snapshot, size instead can be safely left empty
-
-        bd_map = [{
-            'device_name': 'vda',
-            '{}_id'.format(source_type): source_id,
-            'delete_on_termination': str(int(delete_on_termination))}]
-        return {'block_device_mapping': bd_map}
+        bd_map_v2 = [{
+            'uuid': source_id,
+            'source_type': source_type,
+            'destination_type': 'volume',
+            'boot_index': 0,
+            'delete_on_termination': delete_on_termination}]
+        return {'block_device_mapping_v2': bd_map_v2}
 
     def _boot_instance_from_resource(self, source_id,
                                      source_type,
@@ -98,7 +95,6 @@
         waiters.wait_for_server_termination(self.servers_client, server['id'])
 
     @decorators.idempotent_id('557cd2c2-4eb8-4dce-98be-f86765ff311b')
-    @test.attr(type='smoke')
     @testtools.skipUnless(CONF.network.public_network_id,
                           'The public_network_id option must be specified.')
     @test.services('compute', 'volume', 'image')
@@ -180,6 +176,7 @@
         self.assertEqual(timestamp, timestamp3)
 
     @decorators.idempotent_id('05795fb2-b2a7-4c9f-8fac-ff25aedb1489')
+    @test.attr(type='slow')
     @test.services('compute', 'image', 'volume')
     def test_create_server_from_volume_snapshot(self):
         # Create a volume from an image
@@ -236,14 +233,3 @@
 
         # delete instance
         self._delete_server(instance)
-
-
-class TestVolumeBootPatternV2(TestVolumeBootPattern):
-    def _get_bdm(self, source_id, source_type, delete_on_termination=False):
-        bd_map_v2 = [{
-            'uuid': source_id,
-            'source_type': source_type,
-            'destination_type': 'volume',
-            'boot_index': 0,
-            'delete_on_termination': delete_on_termination}]
-        return {'block_device_mapping_v2': bd_map_v2}
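
With the duplicate TestVolumeBootPatternV2 class removed, _get_bdm() now always builds a block_device_mapping_v2 entry. A sketch of how such a mapping is typically fed into a server-create call; only the mapping keys come from the hunk above, while the helper name and the image_id='' convention in the trailing comment are assumptions::

    def boot_from_volume_kwargs(volume_id, delete_on_termination=False):
        """Build server-create kwargs that boot from an existing volume."""
        return {
            'block_device_mapping_v2': [{
                'uuid': volume_id,
                'source_type': 'volume',      # or 'snapshot' / 'image'
                'destination_type': 'volume',
                'boot_index': 0,
                'delete_on_termination': delete_on_termination,
            }],
        }

    # e.g. server = self.create_server(image_id='',
    #                                  **boot_from_volume_kwargs(vol_id))
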
diff --git a/tempest/scenario/test_volume_migrate_attached.py b/tempest/scenario/test_volume_migrate_attached.py
index 891e22d..f580ea6 100644
--- a/tempest/scenario/test_volume_migrate_attached.py
+++ b/tempest/scenario/test_volume_migrate_attached.py
@@ -91,6 +91,7 @@
         waiters.wait_for_volume_retype(self.volumes_client,
                                        volume_id, new_volume_type)
 
+    @test.attr(type='slow')
     @decorators.idempotent_id('deadd2c2-beef-4dce-98be-f86765ff311b')
     @test.services('compute', 'volume')
     def test_volume_migrate_attached(self):
diff --git a/tempest/test.py b/tempest/test.py
index 06de520..52994ac 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -31,7 +31,6 @@
 from tempest import config
 from tempest import exceptions
 from tempest.lib.common import cred_client
-from tempest.lib.common.utils import test_utils
 from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 
@@ -644,12 +643,11 @@
             cred_provider, networks_client, CONF.compute.fixed_network_name)
 
     def assertEmpty(self, list, msg=None):
+        if msg is None:
+            msg = "list is not empty: %s" % list
         self.assertEqual(0, len(list), msg)
 
     def assertNotEmpty(self, list, msg=None):
+        if msg is None:
+            msg = "list is empty."
         self.assertGreater(len(list), 0, msg)
-
-
-call_until_true = debtcollector.moves.moved_function(
-    test_utils.call_until_true, 'call_until_true', __name__,
-    version='Newton', removal_version='Ocata')
diff --git a/tempest/tests/cmd/test_subunit_describe_calls.py b/tempest/tests/cmd/test_subunit_describe_calls.py
index 1c24c37..5f3d770 100644
--- a/tempest/tests/cmd/test_subunit_describe_calls.py
+++ b/tempest/tests/cmd/test_subunit_describe_calls.py
@@ -33,6 +33,16 @@
         p.communicate()
         self.assertEqual(0, p.returncode)
 
+    def test_return_code_no_output(self):
+        subunit_file = os.path.join(
+            os.path.dirname(os.path.abspath(__file__)),
+            'sample_streams/calls.subunit')
+        p = subprocess.Popen([
+            'subunit-describe-calls', '-s', subunit_file],
+            stdin=subprocess.PIPE)
+        p.communicate()
+        self.assertEqual(0, p.returncode)
+
     def test_parse(self):
         subunit_file = os.path.join(
             os.path.dirname(os.path.abspath(__file__)),
diff --git a/tempest/tests/common/utils/linux/test_remote_client.py b/tempest/tests/common/utils/linux/test_remote_client.py
index 5be6229..1c5e89f 100644
--- a/tempest/tests/common/utils/linux/test_remote_client.py
+++ b/tempest/tests/common/utils/linux/test_remote_client.py
@@ -67,16 +67,6 @@
         self.ssh_mock = self.useFixture(mockpatch.PatchObject(self.conn,
                                                               'ssh_client'))
 
-    def test_get_hostname(self):
-        self.ssh_mock.mock.exec_command.return_value = 'fake_hostname'
-        self.assertEqual(self.conn.get_hostname(), 'fake_hostname')
-
-    def test_get_ram_size(self):
-        free_output = "Mem:         48294      45738       2555          0" \
-                      "402      40346"
-        self.ssh_mock.mock.exec_command.return_value = free_output
-        self.assertEqual(self.conn.get_ram_size_in_mb(), '48294')
-
     def test_write_to_console_regular_str(self):
         self.conn.write_to_console('test')
         self._assert_exec_called_with(
@@ -102,11 +92,6 @@
         cmd = "set -eu -o pipefail; PATH=$PATH:/sbin; " + cmd
         self.ssh_mock.mock.exec_command.assert_called_with(cmd)
 
-    def test_get_number_of_vcpus(self):
-        self.ssh_mock.mock.exec_command.return_value = '16'
-        self.assertEqual(self.conn.get_number_of_vcpus(), 16)
-        self._assert_exec_called_with('grep -c ^processor /proc/cpuinfo')
-
     def test_get_disks(self):
         output_lsblk = """\
 NAME       MAJ:MIN    RM          SIZE RO TYPE MOUNTPOINT
diff --git a/tempest/tests/lib/common/utils/test_data_utils.py b/tempest/tests/lib/common/utils/test_data_utils.py
index 4446e5c..8bdf70e 100644
--- a/tempest/tests/lib/common/utils/test_data_utils.py
+++ b/tempest/tests/lib/common/utils/test_data_utils.py
@@ -37,16 +37,20 @@
         actual2 = data_utils.rand_uuid_hex()
         self.assertNotEqual(actual, actual2)
 
-    def test_rand_name(self):
-        actual = data_utils.rand_name()
+    def test_rand_name_with_default_prefix(self):
+        actual = data_utils.rand_name('foo')
         self.assertIsInstance(actual, str)
-        actual2 = data_utils.rand_name()
+        self.assertTrue(actual.startswith('tempest-foo'))
+        actual2 = data_utils.rand_name('foo')
+        self.assertTrue(actual2.startswith('tempest-foo'))
         self.assertNotEqual(actual, actual2)
 
-        actual = data_utils.rand_name('foo')
+    def test_rand_name_with_none_prefix(self):
+        actual = data_utils.rand_name('foo', prefix=None)
+        self.assertIsInstance(actual, str)
         self.assertTrue(actual.startswith('foo'))
-        actual2 = data_utils.rand_name('foo')
-        self.assertTrue(actual.startswith('foo'))
+        actual2 = data_utils.rand_name('foo', prefix=None)
+        self.assertTrue(actual2.startswith('foo'))
         self.assertNotEqual(actual, actual2)
 
     def test_rand_name_with_prefix(self):
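
The reworked tests above pin down rand_name()'s prefix handling: with no prefix argument the default 'tempest' prefix is prepended, and passing prefix=None yields a bare name. A short usage sketch (the exact random suffix shown in the comments is illustrative, not asserted by these tests)::

    from tempest.lib.common.utils import data_utils

    name = data_utils.rand_name('foo')               # e.g. 'tempest-foo-1763921905'
    bare = data_utils.rand_name('foo', prefix=None)  # e.g. 'foo-1763921905'
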
diff --git a/tempest/tests/lib/services/compute/test_servers_client.py b/tempest/tests/lib/services/compute/test_servers_client.py
index b563ab2..8d391c1 100644
--- a/tempest/tests/lib/services/compute/test_servers_client.py
+++ b/tempest/tests/lib/services/compute/test_servers_client.py
@@ -1,4 +1,5 @@
 # Copyright 2015 IBM Corp.
+# Copyright 2017 AT&T Corp.
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -14,6 +15,9 @@
 
 import copy
 
+import mock
+
+from tempest.lib.services.compute import base_compute_client
 from tempest.lib.services.compute import servers_client
 from tempest.tests.lib import fake_auth_provider
 from tempest.tests.lib.services import base
@@ -186,6 +190,9 @@
     FAKE_REBUILD_SERVER = copy.deepcopy(FAKE_SERVER_GET)
     FAKE_REBUILD_SERVER['server']['adminPass'] = 'fake-admin-pass'
 
+    FAKE_TAGS = ["foo", "bar"]
+    REPLACE_FAKE_TAGS = ["baz", "qux"]
+
     server_id = FAKE_SERVER_GET['server']['id']
     network_id = 'a6b0875b-6b5d-4a5a-81eb-0c3aa62e5fdb'
 
@@ -194,6 +201,7 @@
         fake_auth = fake_auth_provider.FakeAuthProvider()
         self.client = servers_client.ServersClient(
             fake_auth, 'compute', 'regionOne')
+        self.addCleanup(mock.patch.stopall)
 
     def test_list_servers_with_str_body(self):
         self._test_list_servers()
@@ -1031,3 +1039,113 @@
             {'security_groups': self.FAKE_SECURITY_GROUPS},
             server_id=self.server_id,
             )
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_list_tags_str_body(self, _):
+        self._test_list_tags()
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_list_tags_byte_body(self, _):
+        self._test_list_tags(bytes_body=True)
+
+    def _test_list_tags(self, bytes_body=False):
+        expected = {"tags": self.FAKE_TAGS}
+        self.check_service_client_function(
+            self.client.list_tags,
+            'tempest.lib.common.rest_client.RestClient.get',
+            expected,
+            server_id=self.server_id,
+            to_utf=bytes_body)
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_update_all_tags_str_body(self, _):
+        self._test_update_all_tags()
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_update_all_tags_byte_body(self, _):
+        self._test_update_all_tags(bytes_body=True)
+
+    def _test_update_all_tags(self, bytes_body=False):
+        expected = {"tags": self.REPLACE_FAKE_TAGS}
+        self.check_service_client_function(
+            self.client.update_all_tags,
+            'tempest.lib.common.rest_client.RestClient.put',
+            expected,
+            server_id=self.server_id,
+            tags=self.REPLACE_FAKE_TAGS,
+            to_utf=bytes_body)
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_delete_all_tags(self, _):
+        self.check_service_client_function(
+            self.client.delete_all_tags,
+            'tempest.lib.common.rest_client.RestClient.delete',
+            {},
+            server_id=self.server_id,
+            status=204)
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_check_tag_existence_str_body(self, _):
+        self._test_check_tag_existence()
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_check_tag_existence_byte_body(self, _):
+        self._test_check_tag_existence(bytes_body=True)
+
+    def _test_check_tag_existence(self, bytes_body=False):
+        self.check_service_client_function(
+            self.client.check_tag_existence,
+            'tempest.lib.common.rest_client.RestClient.get',
+            {},
+            server_id=self.server_id,
+            tag=self.FAKE_TAGS[0],
+            status=204,
+            to_utf=bytes_body)
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_update_tag_str_body(self, _):
+        self._test_update_tag()
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_update_tag_byte_body(self, _):
+        self._test_update_tag(bytes_body=True)
+
+    def _test_update_tag(self, bytes_body=False):
+        self.check_service_client_function(
+            self.client.update_tag,
+            'tempest.lib.common.rest_client.RestClient.put',
+            {},
+            server_id=self.server_id,
+            tag=self.FAKE_TAGS[0],
+            status=201,
+            headers={'location': 'fake_location'},
+            to_utf=bytes_body)
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_delete_tag_str_body(self, _):
+        self._test_delete_tag()
+
+    @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+                       new_callable=mock.PropertyMock(return_value='2.26'))
+    def test_delete_tag_byte_body(self, _):
+        self._test_delete_tag(bytes_body=True)
+
+    def _test_delete_tag(self, bytes_body=False):
+        self.check_service_client_function(
+            self.client.delete_tag,
+            'tempest.lib.common.rest_client.RestClient.delete',
+            {},
+            server_id=self.server_id,
+            tag=self.FAKE_TAGS[0],
+            status=204,
+            to_utf=bytes_body)
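
All of the tag tests above patch base_compute_client.COMPUTE_MICROVERSION to '2.26', since server tags only exist from compute microversion 2.26 onwards and that module-level attribute is what the compute clients consult when sending the microversion header. The patching idiom itself is generic: passing a mock.PropertyMock instance as new_callable makes patch() call it and install its return_value for the duration of the test, restoring the original value afterwards. A self-contained sketch of the mechanics (the _FakeModule stand-in is hypothetical)::

    import mock

    class _FakeModule(object):
        # Stand-in for base_compute_client; only the attribute matters here.
        COMPUTE_MICROVERSION = None

    with mock.patch.object(_FakeModule, 'COMPUTE_MICROVERSION',
                           new_callable=mock.PropertyMock(
                               return_value='2.26')):
        # patch() calls the PropertyMock, so the attribute becomes '2.26'.
        assert _FakeModule.COMPUTE_MICROVERSION == '2.26'
    # On exit the original value is restored.
    assert _FakeModule.COMPUTE_MICROVERSION is None

The addCleanup(mock.patch.stopall) added to setUp above acts as a safety net that undoes any patches a test forgets to stop.
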
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index a069a81..ae2f2a3 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -199,3 +199,96 @@
                           self._test_requires_ext_helper,
                           extension='enabled_ext',
                           service='bad_service')
+
+
+class TestConfigDecorators(BaseDecoratorsTest):
+    def setUp(self):
+        super(TestConfigDecorators, self).setUp()
+        cfg.CONF.set_default('nova', True, 'service_available')
+        cfg.CONF.set_default('glance', False, 'service_available')
+
+    def _assert_skip_message(self, func, skip_msg):
+        try:
+            func()
+            self.fail()
+        except testtools.TestCase.skipException as skip_exc:
+            self.assertEqual(skip_exc.args[0], skip_msg)
+
+    def _test_skip_unless_config(self, expected_to_skip=True, *decorator_args):
+
+        class TestFoo(test.BaseTestCase):
+            @config.skip_unless_config(*decorator_args)
+            def test_bar(self):
+                return 0
+
+        t = TestFoo('test_bar')
+        if expected_to_skip:
+            self.assertRaises(testtools.TestCase.skipException, t.test_bar)
+            if (len(decorator_args) >= 3):
+                # decorator_args[2]: skip message specified
+                self._assert_skip_message(t.test_bar, decorator_args[2])
+        else:
+            try:
+                self.assertEqual(t.test_bar(), 0)
+            except testtools.TestCase.skipException:
+                # We caught a skipException but we didn't expect to skip
+                # this test so raise a hard test failure instead.
+                raise testtools.TestCase.failureException(
+                    "Not supposed to skip")
+
+    def _test_skip_if_config(self, expected_to_skip=True,
+                             *decorator_args):
+
+        class TestFoo(test.BaseTestCase):
+            @config.skip_if_config(*decorator_args)
+            def test_bar(self):
+                return 0
+
+        t = TestFoo('test_bar')
+        if expected_to_skip:
+            self.assertRaises(testtools.TestCase.skipException, t.test_bar)
+            if (len(decorator_args) >= 3):
+                # decorator_args[2]: skip message specified
+                self._assert_skip_message(t.test_bar, decorator_args[2])
+        else:
+            try:
+                self.assertEqual(t.test_bar(), 0)
+            except testtools.TestCase.skipException:
+                # We caught a skipException but we didn't expect to skip
+                # this test so raise a hard test failure instead.
+                raise testtools.TestCase.failureException(
+                    "Not supposed to skip")
+
+    def test_skip_unless_no_group(self):
+        self._test_skip_unless_config(True, 'fake_group', 'an_option')
+
+    def test_skip_unless_no_option(self):
+        self._test_skip_unless_config(True, 'service_available',
+                                      'not_an_option')
+
+    def test_skip_unless_false_option(self):
+        self._test_skip_unless_config(True, 'service_available', 'glance')
+
+    def test_skip_unless_false_option_msg(self):
+        self._test_skip_unless_config(True, 'service_available', 'glance',
+                                      'skip message')
+
+    def test_skip_unless_true_option(self):
+        self._test_skip_unless_config(False,
+                                      'service_available', 'nova')
+
+    def test_skip_if_no_group(self):
+        self._test_skip_if_config(False, 'fake_group', 'an_option')
+
+    def test_skip_if_no_option(self):
+        self._test_skip_if_config(False, 'service_available', 'not_an_option')
+
+    def test_skip_if_false_option(self):
+        self._test_skip_if_config(False, 'service_available', 'glance')
+
+    def test_skip_if_true_option(self):
+        self._test_skip_if_config(True, 'service_available', 'nova')
+
+    def test_skip_if_true_option_msg(self):
+        self._test_skip_if_config(True, 'service_available', 'nova',
+                                  'skip message')
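
The TestConfigDecorators cases above fix the expected semantics of the two config decorators: skip_unless_config skips when the group or option is missing or when the option is falsy, skip_if_config skips only when the option exists and is truthy, and an optional third argument becomes the skip message. A hedged usage sketch (the test class and messages are hypothetical; the import path is assumed to match the tests above)::

    from tempest import config
    from tempest import test

    class HypotheticalTest(test.BaseTestCase):

        @config.skip_unless_config('service_available', 'nova',
                                   'nova is required for this test')
        def test_needs_nova(self):
            pass  # runs only when [service_available] nova = True

        @config.skip_if_config('service_available', 'glance')
        def test_without_glance(self):
            pass  # skipped whenever [service_available] glance = True
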
diff --git a/tempest/tests/test_wrappers.py b/tempest/tests/test_wrappers.py
deleted file mode 100644
index a4ef699..0000000
--- a/tempest/tests/test_wrappers.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2013 IBM Corp.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import os
-import shutil
-import subprocess
-import tempfile
-
-import six
-
-from tempest.tests import base
-
-DEVNULL = open(os.devnull, 'wb')
-
-
-class TestWrappers(base.TestCase):
-    def setUp(self):
-        super(TestWrappers, self).setUp()
-        # Setup test dirs
-        self.directory = tempfile.mkdtemp(prefix='tempest-unit')
-        self.addCleanup(shutil.rmtree, self.directory)
-        self.test_dir = os.path.join(self.directory, 'tests')
-        os.mkdir(self.test_dir)
-        # Setup Test files
-        self.testr_conf_file = os.path.join(self.directory, '.testr.conf')
-        self.setup_cfg_file = os.path.join(self.directory, 'setup.cfg')
-        self.passing_file = os.path.join(self.test_dir, 'test_passing.py')
-        self.failing_file = os.path.join(self.test_dir, 'test_failing.py')
-        self.init_file = os.path.join(self.test_dir, '__init__.py')
-        self.setup_py = os.path.join(self.directory, 'setup.py')
-        shutil.copy('tempest/tests/files/testr-conf', self.testr_conf_file)
-        shutil.copy('tempest/tests/files/passing-tests', self.passing_file)
-        shutil.copy('tempest/tests/files/failing-tests', self.failing_file)
-        shutil.copy('setup.py', self.setup_py)
-        shutil.copy('tempest/tests/files/setup.cfg', self.setup_cfg_file)
-        shutil.copy('tempest/tests/files/__init__.py', self.init_file)
-        # copy over the pretty_tox scripts
-        shutil.copy('tools/pretty_tox.sh',
-                    os.path.join(self.directory, 'pretty_tox.sh'))
-        shutil.copy('tools/pretty_tox_serial.sh',
-                    os.path.join(self.directory, 'pretty_tox_serial.sh'))
-
-        self.stdout = six.StringIO()
-        self.stderr = six.StringIO()
-        # Change directory, run wrapper and check result
-        self.addCleanup(os.chdir, os.path.abspath(os.curdir))
-        os.chdir(self.directory)
-
-    def assertRunExit(self, cmd, expected):
-        p = subprocess.Popen(
-            "bash %s" % cmd, shell=True,
-            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-        out, err = p.communicate()
-
-        self.assertEqual(
-            p.returncode, expected,
-            "Stdout: %s; Stderr: %s" % (out, err))
-
-    def test_pretty_tox(self):
-        # Git init is required for the pbr testr command. pbr requires a git
-        # version or an sdist to work. so make the test directory a git repo
-        # too.
-        subprocess.call(['git', 'init'], stderr=DEVNULL)
-        self.assertRunExit('pretty_tox.sh passing', 0)
-
-    def test_pretty_tox_fails(self):
-        # Git init is required for the pbr testr command. pbr requires a git
-        # version or an sdist to work. so make the test directory a git repo
-        # too.
-        subprocess.call(['git', 'init'], stderr=DEVNULL)
-        self.assertRunExit('pretty_tox.sh', 1)
-
-    def test_pretty_tox_serial(self):
-        self.assertRunExit('pretty_tox_serial.sh passing', 0)
-
-    def test_pretty_tox_serial_fails(self):
-        self.assertRunExit('pretty_tox_serial.sh', 1)
diff --git a/test-requirements.txt b/test-requirements.txt
index f7d63a8..936d5aa 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,9 +1,9 @@
 # The order of packages is significant, because pip processes them in the order
 # of appearance. Changing the order has an impact on the overall integration
 # process, which may cause wedges in the gate later.
-hacking<0.13,>=0.12.0 # Apache-2.0
+hacking>=0.12.0,!=0.13.0,<0.14 # Apache-2.0
 # needed for doc build
-sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
+sphinx>=1.5.1 # BSD
 oslosphinx>=4.7.0 # Apache-2.0
 reno>=1.8.0 # Apache-2.0
 mock>=2.0 # BSD
diff --git a/tools/pretty_tox.sh b/tools/pretty_tox.sh
deleted file mode 100755
index 0b83b91..0000000
--- a/tools/pretty_tox.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-set -o pipefail
-
-TESTRARGS=$1
-python setup.py testr --testr-args="--subunit $TESTRARGS" | subunit-trace --no-failure-debug -f
-retval=$?
-# NOTE(mtreinish) The pipe above would eat the slowest display from pbr's testr
-# wrapper so just manually print the slowest tests.
-echo -e "\nSlowest Tests:\n"
-testr slowest
-exit $retval
diff --git a/tools/pretty_tox_serial.sh b/tools/pretty_tox_serial.sh
deleted file mode 100755
index 1f8204e..0000000
--- a/tools/pretty_tox_serial.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-set -o pipefail
-
-TESTRARGS=$@
-
-if [ ! -d .testrepository ]; then
-    testr init
-fi
-testr run --subunit $TESTRARGS | subunit-trace -f -n
-retval=$?
-testr slowest
-
-exit $retval
diff --git a/tools/tox_install.sh b/tools/tox_install.sh
new file mode 100755
index 0000000..43468e4
--- /dev/null
+++ b/tools/tox_install.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+
+# The client constraint file pins the version of this client, which conflicts
+# with installing the client from source. We need to remove that version pin
+# from the constraints file before applying it for a from-source installation.
+
+CONSTRAINTS_FILE=$1
+shift 1
+
+set -e
+
+# NOTE(tonyb): Place this in the tox environment's log dir so it will get
+# published to logs.openstack.org for easy debugging.
+localfile="$VIRTUAL_ENV/log/upper-constraints.txt"
+
+if [[ $CONSTRAINTS_FILE != http* ]]; then
+    CONSTRAINTS_FILE=file://$CONSTRAINTS_FILE
+fi
+# NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep
+curl $CONSTRAINTS_FILE --insecure --progress-bar --output $localfile
+
+pip install -c$localfile openstack-requirements
+
+# This is the main purpose of the script: allow local installation of
+# the current repo. It is listed in the constraints file, so any install
+# would otherwise be constrained; unconstrain it here.
+edit-constraints $localfile -- $CLIENT_NAME
+
+pip install -c$localfile -U $*
+exit $?
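
The header comments in tox_install.sh describe the whole flow: download the upper-constraints file, strip the pin for the package being installed from source (tempest itself, selected through CLIENT_NAME), then hand the edited file to pip. As a rough illustration of that constraint handling only, and not part of this change, a Python equivalent might look like::

    import subprocess
    import tempfile

    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    CONSTRAINTS_URL = ('https://git.openstack.org/cgit/openstack/requirements'
                       '/plain/upper-constraints.txt')
    CLIENT_NAME = 'tempest'

    def install_unconstrained(package='.'):
        constraints = urlopen(CONSTRAINTS_URL).read().decode('utf-8')
        # Drop the pin for the package we are about to install from source,
        # mirroring what edit-constraints does in the shell script.
        kept = [line for line in constraints.splitlines()
                if not line.startswith(CLIENT_NAME + '===')]
        with tempfile.NamedTemporaryFile(mode='w', suffix='.txt',
                                         delete=False) as f:
            f.write('\n'.join(kept))
        subprocess.check_call(['pip', 'install', '-c', f.name, '-U', package])

In the change itself the script is wired in through tox.ini's install_command (see the next hunk), so every tox environment installs through it.
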
diff --git a/tox.ini b/tox.ini
index d8d390e..dfa8332 100644
--- a/tox.ini
+++ b/tox.ini
@@ -8,6 +8,8 @@
 setenv =
     VIRTUAL_ENV={envdir}
     OS_TEST_PATH=./tempest/test_discover
+    BRANCH_NAME=master
+    CLIENT_NAME=tempest
 deps =
     setuptools
     -r{toxinidir}/requirements.txt
@@ -17,9 +19,12 @@
     VIRTUAL_ENV={envdir}
     OS_TEST_PATH=./tempest/tests
     PYTHONWARNINGS=default::DeprecationWarning
-passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH OS_TEST_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
+    BRANCH_NAME=master
+    CLIENT_NAME=tempest
+passenv = OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_TEST_TIMEOUT OS_TEST_LOCK_PATH OS_TEST_PATH TEMPEST_CONFIG TEMPEST_CONFIG_DIR http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY ZUUL_CACHE_DIR REQUIREMENTS_PIP_LOCATION
 usedevelop = True
-install_command = pip install -U {opts} {packages}
+install_command =
+    {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
 whitelist_externals = *
 deps =
     -r{toxinidir}/requirements.txt
@@ -78,7 +83,8 @@
 # See the testrepository bug: https://bugs.launchpad.net/testrepository/+bug/1208610
 commands =
     find . -type f -name "*.pyc" -delete
-    tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
+    tempest run --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.api)' {posargs}
+    tempest run --combine --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)' {posargs}
 
 [testenv:full-serial]
 envdir = .tox/tempest
@@ -91,6 +97,16 @@
     find . -type f -name "*.pyc" -delete
     tempest run --serial --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))' {posargs}
 
+[testenv:scenario]
+envdir = .tox/tempest
+sitepackages = {[tempestenv]sitepackages}
+setenv = {[tempestenv]setenv}
+deps = {[tempestenv]deps}
+# The regex below is used to select all scenario tests
+commands =
+    find . -type f -name "*.pyc" -delete
+    tempest run --serial --regex '(^tempest\.scenario)' {posargs}
+
 [testenv:smoke]
 envdir = .tox/tempest
 sitepackages = {[tempestenv]sitepackages}