Merge "Adds missing server tags APIs to servers client."
diff --git a/README.rst b/README.rst
index 281516b..9d19c23 100644
--- a/README.rst
+++ b/README.rst
@@ -263,9 +263,7 @@
 
     $ testr run tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
 
-Alternatively, you can use the run_tempest.sh script which will create a venv
-and run the tests or use tox to do the same. Tox also contains several existing
-job configurations. For example::
+Tox also contains several existing job configurations. For example::
 
     $ tox -efull
 
diff --git a/doc/source/test-removal.rst b/doc/source/test-removal.rst
index 79a5846..4757dc4 100644
--- a/doc/source/test-removal.rst
+++ b/doc/source/test-removal.rst
@@ -38,8 +38,10 @@
  #. The test proposed for removal has a failure rate <  0.50% in the gate over
     the past release (the value and interval will likely be adjusted in the
     future)
- #. There must not be an external user/consumer of tempest that depends on the
-    test proposed for removal
+
+    .. _`prong #3`:
+ #. There must not be an external user/consumer of tempest
+    that depends on the test proposed for removal
 
 The answers to 1 and 2 are easy to verify. For 1 just provide a link to the new
 test location. If you are linking to the tempest removal patch please also put
@@ -133,6 +135,10 @@
  #. A revert for a patch which added a broken test, or testing which didn't
     actually run in the gate (basically any revert for something which
     shouldn't have been added)
+ #. Tests that would become out of scope as a consequence of an API change,
+    as described in `API Compatibility`_. Such tests cannot live in Tempest
+    because of its branchless nature, and must still honor `prong #3`_.
 
 For the first exception type the only types of testing in tree which have been
 declared out of scope at this point are:
@@ -149,7 +155,7 @@
 Tempest Scope
 ^^^^^^^^^^^^^
 
-Also starting in the liberty cycle tempest has defined a set of projects which
+Starting in the liberty cycle tempest has defined a set of projects which
 are defined as in scope for direct testing in tempest. As of today that list
 is:
 
@@ -166,3 +172,17 @@
 to maintain continuity after migrating the tests out of tempest.
 
 .. _tempest plugin mechanism: http://docs.openstack.org/developer/tempest/plugin.html
+
+API Compatibility
+"""""""""""""""""
+
+If an API introduces a non-discoverable, backward-incompatible change, and
+that change is not backported to all versions supported by Tempest, tests for
+that API can no longer live in Tempest.
+This is because tests would not be able to know or control which API response
+to expect, and thus would not be able to enforce a specific behavior.
+
+If a test exists in Tempest that would meet these criteria as a consequence
+of a change, the test must be removed according to the procedure discussed in
+this document. The API change should not be merged until all conditions
+required for test removal can be met.
\ No newline at end of file
diff --git a/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml b/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
index aa3a78e..389b29f 100644
--- a/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
+++ b/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
@@ -1,4 +1,13 @@
 ---
+prelude: >
+  This release marks the end of Liberty release support in Tempest
 upgrade:
   - The Stress tests framework and all the stress tests have been removed.
+other:
+  - |
+    OpenStack releases supported at this time are **Mitaka** and **Newton**.
 
+    The release under current development as of this tag is Ocata, meaning that
+    every Tempest commit is also tested against master during the Ocata cycle.
+    However, this does not necessarily mean that using Tempest as of this tag
+    will work against an Ocata (or future releases) cloud.
diff --git a/releasenotes/notes/add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml b/releasenotes/notes/15.0.0-add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml
similarity index 100%
rename from releasenotes/notes/add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml
rename to releasenotes/notes/15.0.0-add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml
diff --git a/releasenotes/notes/add-image-clients-tests-49dbc0a0a4281a77.yaml b/releasenotes/notes/15.0.0-add-image-clients-tests-49dbc0a0a4281a77.yaml
similarity index 100%
rename from releasenotes/notes/add-image-clients-tests-49dbc0a0a4281a77.yaml
rename to releasenotes/notes/15.0.0-add-image-clients-tests-49dbc0a0a4281a77.yaml
diff --git a/releasenotes/notes/add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml b/releasenotes/notes/15.0.0-add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml
similarity index 100%
rename from releasenotes/notes/add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml
rename to releasenotes/notes/15.0.0-add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml
diff --git a/releasenotes/notes/add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml b/releasenotes/notes/15.0.0-add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml
similarity index 100%
rename from releasenotes/notes/add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml
rename to releasenotes/notes/15.0.0-add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml
diff --git a/releasenotes/notes/deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml b/releasenotes/notes/15.0.0-deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml
rename to releasenotes/notes/15.0.0-deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml
diff --git a/releasenotes/notes/deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml b/releasenotes/notes/15.0.0-deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml
rename to releasenotes/notes/15.0.0-deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml
diff --git a/releasenotes/notes/deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml b/releasenotes/notes/15.0.0-deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml
rename to releasenotes/notes/15.0.0-deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml
diff --git a/releasenotes/notes/jsonschema-validator-2377ba131e12d3c7.yaml b/releasenotes/notes/15.0.0-jsonschema-validator-2377ba131e12d3c7.yaml
similarity index 100%
rename from releasenotes/notes/jsonschema-validator-2377ba131e12d3c7.yaml
rename to releasenotes/notes/15.0.0-jsonschema-validator-2377ba131e12d3c7.yaml
diff --git a/releasenotes/notes/remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml b/releasenotes/notes/15.0.0-remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml
similarity index 100%
rename from releasenotes/notes/remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml
rename to releasenotes/notes/15.0.0-remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml
diff --git a/releasenotes/notes/remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml b/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
similarity index 60%
rename from releasenotes/notes/remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
rename to releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
index 8665b8b..104bf27 100644
--- a/releasenotes/notes/remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
+++ b/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
@@ -1,4 +1,6 @@
 ---
+prelude: >
+    This release marks the start of Ocata release support in Tempest
 upgrade:
   - |
     Below deprecated config options from compute group have been removed.
@@ -11,4 +13,13 @@
     - ``compute.ping_size `` (available as ``validation.ping_size``)
     - ``compute.ping_count `` (available as ``validation.ping_count``)
     - ``compute.floating_ip_range `` (available as ``validation.floating_ip_range``)
+other:
+  - |
+    OpenStack releases supported at this time are **Mitaka**, **Newton**,
+    and **Ocata**.
 
+    The release under current development as of this tag is Pike,
+    meaning that every Tempest commit is also tested against master during
+    the Pike cycle. However, this does not necessarily mean that using
+    Tempest as of this tag will work against a Pike (or future releases)
+    cloud.
diff --git a/releasenotes/notes/remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml b/releasenotes/notes/15.0.0-remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml
similarity index 100%
rename from releasenotes/notes/remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml
rename to releasenotes/notes/15.0.0-remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml
diff --git a/releasenotes/notes/remove-deprecated-network-config-options-f9ce276231578fe6.yaml b/releasenotes/notes/15.0.0-remove-deprecated-network-config-options-f9ce276231578fe6.yaml
similarity index 100%
rename from releasenotes/notes/remove-deprecated-network-config-options-f9ce276231578fe6.yaml
rename to releasenotes/notes/15.0.0-remove-deprecated-network-config-options-f9ce276231578fe6.yaml
diff --git a/releasenotes/notes/15.0.0-start-of-pike-support-4925678d477b0745.yaml b/releasenotes/notes/15.0.0-start-of-pike-support-4925678d477b0745.yaml
deleted file mode 100644
index 5555949..0000000
--- a/releasenotes/notes/15.0.0-start-of-pike-support-4925678d477b0745.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
----
-prelude: >
-    This release is marking the start of Ocata release support in Tempest
-other:
-  - |
-    OpenStack releases supported at this time are **Mitaka**, **Newton**,
-    and **Ocata**.
-
-    The release under current development as of this tag is Pike,
-    meaning that every Tempest commit is also tested against master during
-    the Pike cycle. However, this does not necessarily mean that using
-    Tempest as of this tag will work against a Pike (or future releases)
-    cloud.
diff --git a/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml b/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml
new file mode 100644
index 0000000..6285ea6
--- /dev/null
+++ b/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml
@@ -0,0 +1,5 @@
+---
+deprecations:
+  - The ``skip_unless_config`` and ``skip_if_config`` decorators in the
+    ``config`` module have been deprecated and will be removed in the Queens
+    dev cycle. Use ``testtools.skipUnless`` (or a variation thereof) instead.
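
For instance, a config-driven skip previously written with
``skip_unless_config`` can use ``testtools.skipUnless`` directly. A minimal
sketch; the ``volume_feature_enabled.snapshot`` option here is illustrative::

    import testtools

    from tempest import config

    CONF = config.CONF


    class SnapshotTest(testtools.TestCase):

        @testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
                              'volume snapshots are not available')
        def test_snapshot_create(self):
            pass  # test body elided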
diff --git a/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml b/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml
new file mode 100644
index 0000000..5670821
--- /dev/null
+++ b/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+  - The *call_until_true* function of the *test* module has been removed
+    because it was deprecated; Tempest now provides it from *test_utils* as
+    a stable interface. Please switch to *test_utils.call_until_true* if
+    necessary.
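
A minimal sketch of the stable replacement, assuming a ``servers_client``
and ``server_id`` are already in scope::

    from tempest.lib.common.utils import test_utils

    def server_is_active():
        body = servers_client.show_server(server_id)['server']
        return body['status'] == 'ACTIVE'

    # Poll the predicate every 5 seconds for at most 300 seconds; the
    # return value is True if the predicate became true in time.
    became_active = test_utils.call_until_true(server_is_active, 300, 5)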
diff --git a/requirements.txt b/requirements.txt
index d9a9ebb..124da7a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,7 +12,7 @@
 oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
 oslo.log>=3.11.0 # Apache-2.0
 oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.18.0 # Apache-2.0
+oslo.utils>=3.20.0 # Apache-2.0
 six>=1.9.0 # MIT
 fixtures>=3.0.0 # Apache-2.0/BSD
 PyYAML>=3.10.0 # MIT
diff --git a/run_tempest.sh b/run_tempest.sh
deleted file mode 100755
index 414146b..0000000
--- a/run_tempest.sh
+++ /dev/null
@@ -1,135 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-function usage {
-  echo "Usage: $0 [OPTION]..."
-  echo "Run Tempest test suite"
-  echo ""
-  echo "  -V, --virtual-env        Always use virtualenv.  Install automatically if not present"
-  echo "  -N, --no-virtual-env     Don't use virtualenv.  Run tests in local environment"
-  echo "  -n, --no-site-packages   Isolate the virtualenv from the global Python environment"
-  echo "  -f, --force              Force a clean re-build of the virtual environment. Useful when dependencies have been added."
-  echo "  -u, --update             Update the virtual environment with any newer package versions"
-  echo "  -s, --smoke              Only run smoke tests"
-  echo "  -t, --serial             Run testr serially"
-  echo "  -C, --config             Config file location"
-  echo "  -h, --help               Print this usage message"
-  echo "  -d, --debug              Run tests with testtools instead of testr. This allows you to use PDB"
-  echo "  -- [TESTROPTIONS]        After the first '--' you can pass arbitrary arguments to testr "
-}
-
-testrargs=""
-venv=${VENV:-.venv}
-with_venv=tools/with_venv.sh
-serial=0
-always_venv=0
-never_venv=0
-no_site_packages=0
-debug=0
-force=0
-wrapper=""
-config_file=""
-update=0
-
-if ! options=$(getopt -o VNnfusthdC:lL: -l virtual-env,no-virtual-env,no-site-packages,force,update,smoke,serial,help,debug,config: -- "$@")
-then
-    # parse error
-    usage
-    exit 1
-fi
-
-eval set -- $options
-first_uu=yes
-while [ $# -gt 0 ]; do
-  case "$1" in
-    -h|--help) usage; exit;;
-    -V|--virtual-env) always_venv=1; never_venv=0;;
-    -N|--no-virtual-env) always_venv=0; never_venv=1;;
-    -n|--no-site-packages) no_site_packages=1;;
-    -f|--force) force=1;;
-    -u|--update) update=1;;
-    -d|--debug) debug=1;;
-    -C|--config) config_file=$2; shift;;
-    -s|--smoke) testrargs+="smoke";;
-    -t|--serial) serial=1;;
-    --) [ "yes" == "$first_uu" ] || testrargs="$testrargs $1"; first_uu=no  ;;
-    *) testrargs="$testrargs $1";;
-  esac
-  shift
-done
-
-if [ -n "$config_file" ]; then
-    config_file=`readlink -f "$config_file"`
-    export TEMPEST_CONFIG_DIR=`dirname "$config_file"`
-    export TEMPEST_CONFIG=`basename "$config_file"`
-fi
-
-cd `dirname "$0"`
-
-if [ $no_site_packages -eq 1 ]; then
-  installvenvopts="--no-site-packages"
-fi
-
-function testr_init {
-  if [ ! -d .testrepository ]; then
-      ${wrapper} testr init
-  fi
-}
-
-function run_tests {
-  testr_init
-  ${wrapper} find . -type f -name "*.pyc" -delete
-  export OS_TEST_PATH=./tempest/test_discover
-  if [ $debug -eq 1 ]; then
-      if [ "$testrargs" = "" ]; then
-           testrargs="discover ./tempest/test_discover"
-      fi
-      ${wrapper} python -m testtools.run $testrargs
-      return $?
-  fi
-
-  if [ $serial -eq 1 ]; then
-      ${wrapper} testr run --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  else
-      ${wrapper} testr run --parallel --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  fi
-}
-
-if [ $never_venv -eq 0 ]
-then
-  # Remove the virtual environment if --force used
-  if [ $force -eq 1 ]; then
-    echo "Cleaning virtualenv..."
-    rm -rf ${venv}
-  fi
-  if [ $update -eq 1 ]; then
-      echo "Updating virtualenv..."
-      virtualenv $installvenvopts $venv
-      $venv/bin/pip install -U -r requirements.txt
-  fi
-  if [ -e ${venv} ]; then
-    wrapper="${with_venv}"
-  else
-    if [ $always_venv -eq 1 ]; then
-      # Automatically install the virtualenv
-      virtualenv $installvenvopts $venv
-      wrapper="${with_venv}"
-      ${wrapper} pip install -U -r requirements.txt
-    else
-      echo -e "No virtual environment found...create one? (Y/n) \c"
-      read use_ve
-      if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
-        # Install the virtualenv and run the test suite in it
-        virtualenv $installvenvopts $venv
-        wrapper=${with_venv}
-        ${wrapper} pip install -U -r requirements.txt
-      fi
-    fi
-  fi
-fi
-
-run_tests
-retval=$?
-
-exit $retval
diff --git a/run_tests.sh b/run_tests.sh
deleted file mode 100755
index a856bb4..0000000
--- a/run_tests.sh
+++ /dev/null
@@ -1,193 +0,0 @@
-#!/usr/bin/env bash
-
-function usage {
-  echo "Usage: $0 [OPTION]..."
-  echo "Run Tempest unit tests"
-  echo ""
-  echo "  -V, --virtual-env        Always use virtualenv.  Install automatically if not present"
-  echo "  -N, --no-virtual-env     Don't use virtualenv.  Run tests in local environment"
-  echo "  -n, --no-site-packages   Isolate the virtualenv from the global Python environment"
-  echo "  -f, --force              Force a clean re-build of the virtual environment. Useful when dependencies have been added."
-  echo "  -u, --update             Update the virtual environment with any newer package versions"
-  echo "  -t, --serial             Run testr serially"
-  echo "  -p, --pep8               Just run pep8"
-  echo "  -c, --coverage           Generate coverage report"
-  echo "  -h, --help               Print this usage message"
-  echo "  -d, --debug              Run tests with testtools instead of testr. This allows you to use PDB"
-  echo "  -- [TESTROPTIONS]        After the first '--' you can pass arbitrary arguments to testr "
-}
-
-function deprecation_warning {
-  cat <<EOF
--------------------------------------------------------------------------
-WARNING: run_tests.sh is deprecated and this script will be removed after
-the Newton release. All tests should be run through testr/ostestr or tox.
-
-To run style checks:
-
- tox -e pep8
-
-To run python 2.7 unit tests
-
- tox -e py27
-
-To run unit tests and generate coverage report
-
- tox -e cover
-
-To run a subset of any of these tests:
-
- tox -e py27 someregex
-
- i.e.: tox -e py27 test_servers
-
-Additional tox targets are available in tox.ini. For more information
-see:
-http://docs.openstack.org/project-team-guide/project-setup/python.html
-
-NOTE: if you want to use testr to run tests, you can instead use:
-
- OS_TEST_PATH=./tempest/tests testr run
-
-Documentation on using testr directly can be found at
-http://testrepository.readthedocs.org/en/latest/MANUAL.html
--------------------------------------------------------------------------
-EOF
-}
-
-testrargs=""
-just_pep8=0
-venv=${VENV:-.venv}
-with_venv=tools/with_venv.sh
-serial=0
-always_venv=0
-never_venv=0
-no_site_packages=0
-debug=0
-force=0
-coverage=0
-wrapper=""
-config_file=""
-update=0
-
-deprecation_warning
-
-if ! options=$(getopt -o VNnfuctphd -l virtual-env,no-virtual-env,no-site-packages,force,update,serial,coverage,pep8,help,debug -- "$@")
-then
-    # parse error
-    usage
-    exit 1
-fi
-
-eval set -- $options
-first_uu=yes
-while [ $# -gt 0 ]; do
-  case "$1" in
-    -h|--help) usage; exit;;
-    -V|--virtual-env) always_venv=1; never_venv=0;;
-    -N|--no-virtual-env) always_venv=0; never_venv=1;;
-    -n|--no-site-packages) no_site_packages=1;;
-    -f|--force) force=1;;
-    -u|--update) update=1;;
-    -d|--debug) debug=1;;
-    -p|--pep8) let just_pep8=1;;
-    -c|--coverage) coverage=1;;
-    -t|--serial) serial=1;;
-    --) [ "yes" == "$first_uu" ] || testrargs="$testrargs $1"; first_uu=no  ;;
-    *) testrargs="$testrargs $1";;
-  esac
-  shift
-done
-
-
-cd `dirname "$0"`
-
-if [ $no_site_packages -eq 1 ]; then
-  installvenvopts="--no-site-packages"
-fi
-
-function testr_init {
-  if [ ! -d .testrepository ]; then
-      ${wrapper} testr init
-  fi
-}
-
-function run_tests {
-  testr_init
-  ${wrapper} find . -type f -name "*.pyc" -delete
-  export OS_TEST_PATH=./tempest/tests
-  if [ $debug -eq 1 ]; then
-      if [ "$testrargs" = "" ]; then
-          testrargs="discover ./tempest/tests"
-      fi
-      ${wrapper} python -m testtools.run $testrargs
-      return $?
-  fi
-
-  if [ $coverage -eq 1 ]; then
-      ${wrapper} python setup.py test --coverage
-      return $?
-  fi
-
-  if [ $serial -eq 1 ]; then
-      ${wrapper} testr run --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  else
-      ${wrapper} testr run --parallel --subunit $testrargs | ${wrapper} subunit-trace -n -f
-  fi
-}
-
-function run_pep8 {
-  echo "Running flake8 ..."
-  if [ $never_venv -eq 1 ]; then
-      echo "**WARNING**:" >&2
-      echo "Running flake8 without virtual env may miss OpenStack HACKING detection" >&2
-  fi
-  ${wrapper} flake8
-}
-
-if [ $never_venv -eq 0 ]
-then
-  # Remove the virtual environment if --force used
-  if [ $force -eq 1 ]; then
-    echo "Cleaning virtualenv..."
-    rm -rf ${venv}
-  fi
-  if [ $update -eq 1 ]; then
-      echo "Updating virtualenv..."
-      virtualenv $installvenvopts $venv
-      $venv/bin/pip install -U -r requirements.txt -r test-requirements.txt
-  fi
-  if [ -e ${venv} ]; then
-    wrapper="${with_venv}"
-  else
-    if [ $always_venv -eq 1 ]; then
-      # Automatically install the virtualenv
-      virtualenv $installvenvopts $venv
-      wrapper="${with_venv}"
-      ${wrapper} pip install -U -r requirements.txt -r test-requirements.txt
-    else
-      echo -e "No virtual environment found...create one? (Y/n) \c"
-      read use_ve
-      if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
-        # Install the virtualenv and run the test suite in it
-        virtualenv $installvenvopts $venv
-        wrapper=${with_venv}
-        ${wrapper} pip install -U -r requirements.txt -r test-requirements.txt
-      fi
-    fi
-  fi
-fi
-
-if [ $just_pep8 -eq 1 ]; then
-    run_pep8
-    exit
-fi
-
-run_tests
-retval=$?
-
-if [ -z "$testrargs" ]; then
-    run_pep8
-fi
-
-exit $retval
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index 04b0c2d..a9daba8 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -14,7 +14,6 @@
 #    under the License.
 
 from tempest.api.compute import base
-from tempest.common.utils import data_utils
 from tempest.lib import decorators
 from tempest import test
 
@@ -47,51 +46,37 @@
     def test_flavor_access_list_with_private_flavor(self):
         # Test to make sure that list flavor access on a newly created
         # private flavor will return an empty access list
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
+
         flavor_access = (self.admin_flavors_client.list_flavor_access(
-            new_flavor_id)['flavor_access'])
+                         flavor['id'])['flavor_access'])
         self.assertEqual(len(flavor_access), 0, str(flavor_access))
 
     @decorators.idempotent_id('59e622f6-bdf6-45e3-8ba8-fedad905a6b4')
     def test_flavor_access_add_remove(self):
         # Test to add and remove flavor access to a given tenant.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
+
         # Add flavor access to a tenant.
         resp_body = {
             "tenant_id": str(self.tenant_id),
-            "flavor_id": str(new_flavor['id']),
+            "flavor_id": str(flavor['id']),
         }
         add_body = (self.admin_flavors_client.add_flavor_access(
-            new_flavor['id'], self.tenant_id)['flavor_access'])
+            flavor['id'], self.tenant_id)['flavor_access'])
         self.assertIn(resp_body, add_body)
 
         # The flavor is present in list.
         flavors = self.flavors_client.list_flavors(detail=True)['flavors']
-        self.assertIn(new_flavor['id'], map(lambda x: x['id'], flavors))
+        self.assertIn(flavor['id'], map(lambda x: x['id'], flavors))
 
         # Remove flavor access from a tenant.
         remove_body = (self.admin_flavors_client.remove_flavor_access(
-            new_flavor['id'], self.tenant_id)['flavor_access'])
+            flavor['id'], self.tenant_id)['flavor_access'])
         self.assertNotIn(resp_body, remove_body)
 
         # The flavor is not present in list.
         flavors = self.flavors_client.list_flavors(detail=True)['flavors']
-        self.assertNotIn(new_flavor['id'], map(lambda x: x['id'], flavors))
+        self.assertNotIn(flavor['id'], map(lambda x: x['id'], flavors))
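
For contrast, the consolidated pattern these flavor-access tests now share,
grounded in the hunks above; ``create_flavor`` is the compute base-class
helper this change set relies on, assumed to pick a random name/id and
register the delete for cleanup itself::

    # Old pattern: build a name and id by hand, create via the admin
    # client, and remember to schedule the delete.
    flavor_name = data_utils.rand_name(self.flavor_name_prefix)
    new_flavor_id = data_utils.rand_int_id(start=1000)
    new_flavor = self.admin_flavors_client.create_flavor(
        name=flavor_name, ram=self.ram, vcpus=self.vcpus,
        disk=self.disk, id=new_flavor_id, is_public='False')['flavor']
    self.addCleanup(self.admin_flavors_client.delete_flavor,
                    new_flavor['id'])

    # New pattern: one call; naming, id generation and cleanup are
    # handled by the base-class helper.
    flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
                                disk=self.disk, is_public='False')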
diff --git a/tempest/api/compute/admin/test_flavors_access_negative.py b/tempest/api/compute/admin/test_flavors_access_negative.py
index bd72d13..33d5d73 100644
--- a/tempest/api/compute/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/admin/test_flavors_access_negative.py
@@ -14,7 +14,6 @@
 #    under the License.
 
 from tempest.api.compute import base
-from tempest.common.utils import data_utils
 from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 from tempest import test
@@ -26,6 +25,8 @@
     Add and remove Flavor Access require admin privileges.
     """
 
+    credentials = ['primary', 'admin', 'alt']
+
     @classmethod
     def skip_checks(cls):
         super(FlavorsAccessNegativeTestJSON, cls).skip_checks()
@@ -47,108 +48,69 @@
     @decorators.idempotent_id('0621c53e-d45d-40e7-951d-43e5e257b272')
     def test_flavor_access_list_with_public_flavor(self):
         # Test to list flavor access with exceptions by querying public flavor
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='True')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='True')
         self.assertRaises(lib_exc.NotFound,
                           self.admin_flavors_client.list_flavor_access,
-                          new_flavor_id)
+                          flavor['id'])
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('41eaaade-6d37-4f28-9c74-f21b46ca67bd')
     def test_flavor_non_admin_add(self):
         # Test to add flavor access as a user without admin privileges.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.add_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.tenant_id)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('073e79a6-c311-4525-82dc-6083d919cb3a')
     def test_flavor_non_admin_remove(self):
         # Test to remove flavor access as a user without admin privileges.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
+
         # Add flavor access to a tenant.
-        self.admin_flavors_client.add_flavor_access(new_flavor['id'],
+        self.admin_flavors_client.add_flavor_access(flavor['id'],
                                                     self.tenant_id)
         self.addCleanup(self.admin_flavors_client.remove_flavor_access,
-                        new_flavor['id'], self.tenant_id)
+                        flavor['id'], self.tenant_id)
         self.assertRaises(lib_exc.Forbidden,
                           self.flavors_client.remove_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.tenant_id)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('f3592cc0-0306-483c-b210-9a7b5346eddc')
     def test_add_flavor_access_duplicate(self):
         # Create a new flavor.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
 
         # Add flavor access to a tenant.
-        self.admin_flavors_client.add_flavor_access(new_flavor['id'],
+        self.admin_flavors_client.add_flavor_access(flavor['id'],
                                                     self.tenant_id)
         self.addCleanup(self.admin_flavors_client.remove_flavor_access,
-                        new_flavor['id'], self.tenant_id)
+                        flavor['id'], self.tenant_id)
 
         # An exception should be raised when adding flavor access to the same
         # tenant
         self.assertRaises(lib_exc.Conflict,
                           self.admin_flavors_client.add_flavor_access,
-                          new_flavor['id'],
+                          flavor['id'],
                           self.tenant_id)
 
     @test.attr(type=['negative'])
     @decorators.idempotent_id('1f710927-3bc7-4381-9f82-0ca6e42644b7')
     def test_remove_flavor_access_not_found(self):
         # Create a new flavor.
-        flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-        new_flavor_id = data_utils.rand_int_id(start=1000)
-        new_flavor = self.admin_flavors_client.create_flavor(
-            name=flavor_name,
-            ram=self.ram, vcpus=self.vcpus,
-            disk=self.disk,
-            id=new_flavor_id,
-            is_public='False')['flavor']
-        self.addCleanup(self.admin_flavors_client.delete_flavor,
-                        new_flavor['id'])
+        flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+                                    disk=self.disk, is_public='False')
 
         # An exception should be raised when flavor access is not found
         self.assertRaises(lib_exc.NotFound,
                           self.admin_flavors_client.remove_flavor_access,
-                          new_flavor['id'],
-                          data_utils.rand_uuid())
+                          flavor['id'],
+                          self.os_alt.servers_client.tenant_id)
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index aa75348..18655cb 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -30,7 +30,6 @@
     def setup_clients(cls):
         super(MigrationsAdminTest, cls).setup_clients()
         cls.client = cls.os_adm.migrations_client
-        cls.flavors_admin_client = cls.os_adm.flavors_client
 
     @decorators.idempotent_id('75c0b83d-72a0-4cf8-a153-631e83e7d53f')
     def test_list_migrations(self):
@@ -54,8 +53,8 @@
 
     def _flavor_clean_up(self, flavor_id):
         try:
-            self.flavors_admin_client.delete_flavor(flavor_id)
-            self.flavors_admin_client.wait_for_resource_deletion(flavor_id)
+            self.admin_flavors_client.delete_flavor(flavor_id)
+            self.admin_flavors_client.wait_for_resource_deletion(flavor_id)
         except exceptions.NotFound:
             pass
 
@@ -68,9 +67,9 @@
 
         # First we have to create a flavor that we can delete so make a copy
         # of the normal flavor from which we'd create a server.
-        flavor = self.flavors_admin_client.show_flavor(
+        flavor = self.admin_flavors_client.show_flavor(
             self.flavor_ref)['flavor']
-        flavor = self.flavors_admin_client.create_flavor(
+        flavor = self.admin_flavors_client.create_flavor(
             name=data_utils.rand_name('test_resize_flavor_'),
             ram=flavor['ram'],
             disk=flavor['disk'],
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 0850205..ca8382f 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -87,6 +87,7 @@
 
     @decorators.skip_because(bug="1186354",
                              condition=CONF.service_available.neutron)
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('7c6c8f3b-2bf6-4918-b240-57b136a66aa0')
     @test.services('network')
     def test_security_groups_exceed_limit(self):
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index 5220c97..adb49a5 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -34,7 +34,6 @@
         super(ServersAdminNegativeTestJSON, cls).setup_clients()
         cls.client = cls.os_adm.servers_client
         cls.non_adm_client = cls.servers_client
-        cls.flavors_client = cls.os_adm.flavors_client
         cls.quotas_client = cls.os_adm.quotas_client
 
     @classmethod
@@ -45,16 +44,6 @@
         server = cls.create_test_server(wait_until='ACTIVE')
         cls.s1_id = server['id']
 
-    def _get_unused_flavor_id(self):
-        flavor_id = data_utils.rand_int_id(start=1000)
-        while True:
-            try:
-                self.flavors_client.show_flavor(flavor_id)
-            except lib_exc.NotFound:
-                break
-            flavor_id = data_utils.rand_int_id(start=1000)
-        return flavor_id
-
     @decorators.idempotent_id('28dcec23-f807-49da-822c-56a92ea3c687')
     @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                           'Resize not available.')
@@ -62,8 +51,6 @@
     def test_resize_server_using_overlimit_ram(self):
         # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
         self.useFixture(fixtures.LockFixture('compute_quotas'))
-        flavor_name = data_utils.rand_name("flavor")
-        flavor_id = self._get_unused_flavor_id()
         quota_set = self.quotas_client.show_quota_set(
             self.tenant_id)['quota_set']
         ram = quota_set['ram']
@@ -73,11 +60,7 @@
         ram += 1
         vcpus = 1
         disk = 5
-        flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
-                                                       ram=ram, vcpus=vcpus,
-                                                       disk=disk,
-                                                       id=flavor_id)['flavor']
-        self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
+        flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.client.resize_server,
                           self.servers[0]['id'],
@@ -90,8 +73,6 @@
     def test_resize_server_using_overlimit_vcpus(self):
         # NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
         self.useFixture(fixtures.LockFixture('compute_quotas'))
-        flavor_name = data_utils.rand_name("flavor")
-        flavor_id = self._get_unused_flavor_id()
         quota_set = self.quotas_client.show_quota_set(
             self.tenant_id)['quota_set']
         vcpus = quota_set['cores']
@@ -101,11 +82,7 @@
         vcpus += 1
         ram = 512
         disk = 5
-        flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
-                                                       ram=ram, vcpus=vcpus,
-                                                       disk=disk,
-                                                       id=flavor_id)['flavor']
-        self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
+        flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
         self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
                           self.client.resize_server,
                           self.servers[0]['id'],
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
index e4f4846..45472df 100644
--- a/tempest/api/compute/admin/test_volume_swap.py
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -54,10 +54,10 @@
         # Swap volume from "volume1" to "volume2"
         self.admin_servers_client.update_attached_volume(
             server['id'], volume1['id'], volumeId=volume2['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume1['id'], 'available')
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume2['id'], 'in-use')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume1['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume2['id'], 'in-use')
         self.addCleanup(self.servers_client.detach_volume,
                         server['id'], volume2['id'])
         # Verify "volume2" is attached to the server
diff --git a/tempest/api/compute/admin/test_volumes_negative.py b/tempest/api/compute/admin/test_volumes_negative.py
index ecb9092..905bc3d 100644
--- a/tempest/api/compute/admin/test_volumes_negative.py
+++ b/tempest/api/compute/admin/test_volumes_negative.py
@@ -36,6 +36,7 @@
         super(VolumesAdminNegativeTest, cls).resource_setup()
         cls.server = cls.create_test_server(wait_until='ACTIVE')
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('309b5ecd-0585-4a7e-a36f-d2b2bf55259d')
     def test_update_attached_volume_with_nonexistent_volume_in_uri(self):
         volume = self.create_volume()
@@ -46,6 +47,7 @@
                           volumeId=volume['id'])
 
     @test.related_bug('1629110', status_code=400)
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('7dcac15a-b107-46d3-a5f6-cb863f4e454a')
     def test_update_attached_volume_with_nonexistent_volume_in_body(self):
         volume = self.create_volume()
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index f660fa4..55cc293 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -410,8 +410,8 @@
             kwargs['imageRef'] = image_ref
         volume = cls.volumes_client.create_volume(**kwargs)['volume']
         cls.volumes.append(volume)
-        waiters.wait_for_volume_status(cls.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(cls.volumes_client,
+                                                volume['id'], 'available')
         return volume
 
     @classmethod
@@ -445,20 +445,21 @@
         attach_kwargs = dict(volumeId=volume['id'])
         if device:
             attach_kwargs['device'] = device
-        self.servers_client.attach_volume(
-            server['id'], **attach_kwargs)
+        attachment = self.servers_client.attach_volume(
+            server['id'], **attach_kwargs)['volumeAttachment']
         # On teardown detach the volume and wait for it to be available. This
         # is so we don't error out when trying to delete the volume during
         # teardown.
-        self.addCleanup(waiters.wait_for_volume_status,
+        self.addCleanup(waiters.wait_for_volume_resource_status,
                         self.volumes_client, volume['id'], 'available')
         # Ignore 404s on detach in case the server is deleted or the volume
         # is already detached.
         self.addCleanup(test_utils.call_and_ignore_notfound_exc,
                         self.servers_client.detach_volume,
                         server['id'], volume['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'in-use')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'in-use')
+        return attachment
 
 
 class BaseV2ComputeAdminTest(BaseV2ComputeTest):
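
Two behaviors of the reworked base class are worth spelling out: the renamed
waiter takes a client plus a resource id, so the same call serves volumes
here and snapshots in the hunks further below, and ``attach_volume`` now
returns the attachment body. A usage sketch inside a test method::

    from tempest.common import waiters

    # Wait on a volume with the generic resource-status waiter.
    waiters.wait_for_volume_resource_status(self.volumes_client,
                                            volume['id'], 'available')

    # attach_volume returns the volumeAttachment body, so callers can
    # assert on it directly.
    attachment = self.attach_volume(server, volume)
    self.assertEqual(volume['id'], attachment['volumeId'])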
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
index a70c0a9..b313f44 100644
--- a/tempest/api/compute/flavors/test_flavors_negative.py
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -21,6 +21,7 @@
 from tempest.common import image as common_image
 from tempest.common.utils import data_utils
 from tempest import config
+from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 from tempest import test
 
@@ -43,7 +44,7 @@
 
     @test.attr(type=['negative'])
     @test.services('image')
-    @test.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
+    @decorators.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
     def test_boot_with_low_ram(self):
         """Try boot a vm with lower than min ram
 
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index 349bfda..e90a1fc 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -145,7 +145,7 @@
         self.assertEqual(s_new_name, fetched_group['name'])
         self.assertEqual(s_new_des, fetched_group['description'])
 
-    @test.idempotent_id('79517d60-535a-438f-af3d-e6feab1cbea7')
+    @decorators.idempotent_id('79517d60-535a-438f-af3d-e6feab1cbea7')
     @test.services('network')
     def test_list_security_groups_by_server(self):
         # Create a couple security groups that we will use
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index a94c20b..fd5e50e 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -236,7 +236,6 @@
     @classmethod
     def setup_clients(cls):
         super(ServersWithSpecificFlavorTestJSON, cls).setup_clients()
-        cls.flavor_client = cls.os_adm.flavors_client
         cls.client = cls.servers_client
 
     @classmethod
@@ -254,7 +253,6 @@
             self.flavor_ref)['flavor']
 
         def create_flavor_with_ephemeral(ephem_disk):
-            flavor_id = data_utils.rand_int_id(start=1000)
             name = 'flavor_with_ephemeral_%s' % ephem_disk
             flavor_name = data_utils.rand_name(name)
 
@@ -263,17 +261,10 @@
             disk = flavor_base['disk']
 
             # Create a flavor with ephemeral disk
-            flavor = self.flavor_client.create_flavor(
-                name=flavor_name, ram=ram, vcpus=vcpus, disk=disk,
-                id=flavor_id, ephemeral=ephem_disk)['flavor']
-            self.addCleanup(flavor_clean_up, flavor['id'])
-
+            flavor = self.create_flavor(name=flavor_name, ram=ram, vcpus=vcpus,
+                                        disk=disk, ephemeral=ephem_disk)
             return flavor['id']
 
-        def flavor_clean_up(flavor_id):
-            self.flavor_client.delete_flavor(flavor_id)
-            self.flavor_client.wait_for_resource_deletion(flavor_id)
-
         flavor_with_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=1)
         flavor_no_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=0)
 
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 83b2e1b..8ed55e0 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -115,8 +115,8 @@
 
         self.client.delete_server(server['id'])
         waiters.wait_for_server_termination(self.client, server['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
 
 class DeleteServersAdminTestJSON(base.BaseV2ComputeAdminTest):
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index 209ab38..75ba15c 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -58,10 +58,8 @@
         cls.password = data_utils.rand_password()
         # Server for positive tests
         server = cls.create_test_server(adminPass=cls.password,
-                                        wait_until='BUILD')
+                                        wait_until='ACTIVE')
         cls.server_id = server['id']
-        waiters.wait_for_server_status(cls.servers_client, cls.server_id,
-                                       'ACTIVE')
 
     @classmethod
     def resource_cleanup(cls):
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index cbe7178..73c7614 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -22,7 +22,6 @@
 from tempest.common import waiters
 from tempest import config
 from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
 
 CONF = config.CONF
 
@@ -61,38 +60,14 @@
             server['id'])['addresses']
         return server
 
-    def _detach_volume(self, server_id, volume_id):
-        try:
-            self.servers_client.detach_volume(server_id, volume_id)
-            waiters.wait_for_volume_status(self.volumes_client,
-                                           volume_id, 'available')
-        except lib_exc.NotFound:
-            LOG.warning("Unable to detach volume %s from server %s "
-                        "possibly it was already detached", volume_id,
-                        server_id)
-
-    def _attach_volume(self, server_id, volume_id, device=None):
-        # Attach the volume to the server
-        kwargs = {'volumeId': volume_id}
-        if device:
-            kwargs.update({'device': '/dev/%s' % device})
-        attachment = self.servers_client.attach_volume(
-            server_id, **kwargs)['volumeAttachment']
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume_id, 'in-use')
-        self.addCleanup(self._detach_volume, server_id,
-                        volume_id)
-
-        return attachment
-
     @decorators.idempotent_id('52e9045a-e90d-4c0d-9087-79d657faffff')
     def test_attach_detach_volume(self):
         # Stop and Start a server with an attached volume, ensuring that
         # the volume remains attached.
         server = self._create_server()
         volume = self.create_volume()
-        attachment = self._attach_volume(server['id'], volume['id'],
-                                         device=self.device)
+        attachment = self.attach_volume(server, volume,
+                                        device=('/dev/%s' % self.device))
 
         self.servers_client.stop_server(server['id'])
         waiters.wait_for_server_status(self.servers_client, server['id'],
@@ -115,7 +90,10 @@
             device_name_to_match = '\n' + self.device + ' '
             self.assertIn(device_name_to_match, disks)
 
-        self._detach_volume(server['id'], attachment['volumeId'])
+        self.servers_client.detach_volume(server['id'], attachment['volumeId'])
+        waiters.wait_for_volume_resource_status(
+            self.volumes_client, attachment['volumeId'], 'available')
+
         self.servers_client.stop_server(server['id'])
         waiters.wait_for_server_status(self.servers_client, server['id'],
                                        'SHUTOFF')
@@ -141,8 +119,8 @@
         # List volume attachment of the server
         server = self._create_server()
         volume = self.create_volume()
-        attachment = self._attach_volume(server['id'], volume['id'],
-                                         device=self.device)
+        attachment = self.attach_volume(server, volume,
+                                        device=('/dev/%s' % self.device))
         body = self.servers_client.list_volume_attachments(
             server['id'])['volumeAttachments']
         self.assertEqual(1, len(body))
@@ -165,8 +143,8 @@
         server = self._create_server()
         volume_1st = self.create_volume()
         volume_2nd = self.create_volume()
-        attachment_1st = self._attach_volume(server['id'], volume_1st['id'])
-        attachment_2nd = self._attach_volume(server['id'], volume_2nd['id'])
+        attachment_1st = self.attach_volume(server, volume_1st)
+        attachment_2nd = self.attach_volume(server, volume_2nd)
 
         body = self.servers_client.list_volume_attachments(
             server['id'])['volumeAttachments']
@@ -253,8 +231,8 @@
         volume = self.create_volume()
         num_vol = self._count_volumes(server)
         self._shelve_server(server)
-        attachment = self._attach_volume(server['id'], volume['id'],
-                                         device=self.device)
+        attachment = self.attach_volume(server, volume,
+                                        device=('/dev/%s' % self.device))
 
         # Unshelve the instance and check that attached volume exists
         self._unshelve_server_and_check_volumes(server, num_vol + 1)
@@ -279,9 +257,12 @@
         volume = self.create_volume()
         num_vol = self._count_volumes(server)
         self._shelve_server(server)
-        self._attach_volume(server['id'], volume['id'], device=self.device)
-        # Detach the volume
-        self._detach_volume(server['id'], volume['id'])
+
+        # Attach and then detach the volume
+        self.attach_volume(server, volume, device=('/dev/%s' % self.device))
+        self.servers_client.detach_volume(server['id'], volume['id'])
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Unshelve the instance and check that we have the expected number of
         # volume(s)
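
Note that ``attach_volume`` still registers a detach-and-wait cleanup in the
base class (ignoring 404s), so tests only issue an explicit detach when they
need the volume back in ``available`` mid-test, as the hunks above do::

    self.servers_client.detach_volume(server['id'],
                                      attachment['volumeId'])
    waiters.wait_for_volume_resource_status(
        self.volumes_client, attachment['volumeId'], 'available')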
diff --git a/tempest/api/compute/volumes/test_volume_snapshots.py b/tempest/api/compute/volumes/test_volume_snapshots.py
index 3d5d23b..4b06867 100644
--- a/tempest/api/compute/volumes/test_volume_snapshots.py
+++ b/tempest/api/compute/volumes/test_volume_snapshots.py
@@ -54,9 +54,9 @@
             display_name=s_name)['snapshot']
 
         def delete_snapshot(snapshot_id):
-            waiters.wait_for_snapshot_status(self.snapshots_client,
-                                             snapshot_id,
-                                             'available')
+            waiters.wait_for_volume_resource_status(self.snapshots_client,
+                                                    snapshot_id,
+                                                    'available')
             # Delete snapshot
             self.snapshots_client.delete_snapshot(snapshot_id)
             self.snapshots_client.wait_for_resource_deletion(snapshot_id)
diff --git a/tempest/api/compute/volumes/test_volumes_get.py b/tempest/api/compute/volumes/test_volumes_get.py
index 63c247e..0eaa359 100644
--- a/tempest/api/compute/volumes/test_volumes_get.py
+++ b/tempest/api/compute/volumes/test_volumes_get.py
@@ -57,7 +57,8 @@
         self.assertIsNotNone(volume['id'],
                              "Field volume id is empty or not found.")
         # Wait for Volume status to become ACTIVE
-        waiters.wait_for_volume_status(self.client, volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.client, volume['id'],
+                                                'available')
         # GET Volume
         fetched_volume = self.client.show_volume(volume['id'])['volume']
         # Verification of details of fetched Volume
diff --git a/tempest/api/identity/admin/v3/test_roles.py b/tempest/api/identity/admin/v3/test_roles.py
index 445d928..b7b6596 100644
--- a/tempest/api/identity/admin/v3/test_roles.py
+++ b/tempest/api/identity/admin/v3/test_roles.py
@@ -306,3 +306,66 @@
         roles_ids = [assignment['role']['id']
                      for assignment in role_assignments]
         self.assertIn(self.roles[0]['id'], roles_ids)
+
+    @decorators.idempotent_id('d92a41d2-5501-497a-84bb-6e294330e8f8')
+    def test_domain_roles_create_delete(self):
+        domain_role = self.roles_client.create_role(
+            name=data_utils.rand_name('domain_role'),
+            domain_id=self.domain['id'])['role']
+        self.addCleanup(
+            test_utils.call_and_ignore_notfound_exc,
+            self.roles_client.delete_role,
+            domain_role['id'])
+
+        domain_roles = self.roles_client.list_roles(
+            domain_id=self.domain['id'])['roles']
+        self.assertEqual(1, len(domain_roles))
+        self.assertIn(domain_role, domain_roles)
+
+        self.roles_client.delete_role(domain_role['id'])
+        domain_roles = self.roles_client.list_roles(
+            domain_id=self.domain['id'])['roles']
+        self.assertEmpty(domain_roles)
+
+    @decorators.idempotent_id('eb1e1c24-1bc4-4d47-9748-e127a1852c82')
+    def test_implied_domain_roles(self):
+        # Create two roles in the same domain
+        domain_role1 = self.setup_test_role(domain_id=self.domain['id'])
+        domain_role2 = self.setup_test_role(domain_id=self.domain['id'])
+
+        # Check if we can create an inference rule from roles in the same
+        # domain
+        self._create_implied_role(domain_role1['id'], domain_role2['id'])
+
+        # Create another role in a different domain
+        domain2 = self.setup_test_domain()
+        domain_role3 = self.setup_test_role(domain_id=domain2['id'])
+
+        # Check if we can create cross domain implied roles
+        self._create_implied_role(domain_role1['id'], domain_role3['id'])
+
+        # Finally, we also should be able to create an implied from a
+        # domain role to a global one
+        self._create_implied_role(domain_role1['id'], self.role['id'])
+
+    @decorators.idempotent_id('3859df7e-5b78-4e4d-b10e-214c8953842a')
+    def test_assignments_for_domain_roles(self):
+        domain_role = self.setup_test_role(domain_id=self.domain['id'])
+
+        # Create a grant using "domain_role"
+        self.roles_client.create_user_role_on_project(
+            self.project['id'], self.user_body['id'], domain_role['id'])
+        self.addCleanup(
+            self.roles_client.delete_role_from_user_on_project,
+            self.project['id'], self.user_body['id'], domain_role['id'])
+
+        # NOTE(rodrigods): Regular roles would appear in the effective
+        # list of role assignments (meaning the role would be returned in
+        # a token) as a result of the grant above. This is not the case
+        # for domain roles: they should not appear in the effective role
+        # assignments list.
+        params = {'scope.project.id': self.project['id'],
+                  'user.id': self.user_body['id']}
+        role_assignments = self.role_assignments.list_role_assignments(
+            effective=True, **params)['role_assignments']
+        self.assertEmpty(role_assignments)
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index 3bbe47a..80e7936 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -75,10 +75,13 @@
         self.addCleanup(self.users_client.delete_user, user['id'])
         return user
 
-    def setup_test_role(self):
+    def setup_test_role(self, domain_id=None):
         """Set up a test role."""
-        role = self.roles_client.create_role(
-            name=data_utils.rand_name('test_role'))['role']
+        params = {'name': data_utils.rand_name('test_role')}
+        if domain_id:
+            params['domain_id'] = domain_id
+
+        role = self.roles_client.create_role(**params)['role']
         # Delete the role at the end of the test
         self.addCleanup(self.roles_client.delete_role, role['id'])
         return role
diff --git a/tempest/api/network/admin/test_negative_quotas.py b/tempest/api/network/admin/test_negative_quotas.py
index 435e672..2c639da 100644
--- a/tempest/api/network/admin/test_negative_quotas.py
+++ b/tempest/api/network/admin/test_negative_quotas.py
@@ -39,6 +39,7 @@
             msg = "quotas extension not enabled."
             raise cls.skipException(msg)
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('644f4e1b-1bf9-4af0-9fd8-eb56ac0f51cf')
     def test_network_quota_exceeding(self):
         # Set the network quota to two
diff --git a/tempest/api/object_storage/test_object_formpost_negative.py b/tempest/api/object_storage/test_object_formpost_negative.py
index 2174940..52b3978 100644
--- a/tempest/api/object_storage/test_object_formpost_negative.py
+++ b/tempest/api/object_storage/test_object_formpost_negative.py
@@ -125,6 +125,7 @@
 
     @decorators.idempotent_id('b277257f-113c-4499-b8d1-5fead79f7360')
     @test.requires_ext(extension='formpost', service='object')
+    @test.attr(type=['negative'])
     def test_post_object_using_form_invalid_signature(self):
         self.key = "Wrong"
         body, content_type = self.get_multipart_form()
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index 1b97e4a..c3e904a 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -74,8 +74,8 @@
         else:
             cls.volume_id_list_without_prefix.append(
                 cls.volume['id'])
-        waiters.wait_for_volume_status(cls.admin_volume_client,
-                                       cls.volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(cls.admin_volume_client,
+                                                cls.volume['id'], 'available')
 
     @classmethod
     def resource_cleanup(cls):
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index e8222bf..83fca45 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -146,7 +146,7 @@
             transfer_id, auth_key=auth_key)['transfer']
 
         # Verify volume transferred is available
-        waiters.wait_for_volume_status(
+        waiters.wait_for_volume_resource_status(
             self.alt_client, volume['id'], 'available')
 
         # List of tenants quota usage post transfer
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
index 213723c..5d08416 100644
--- a/tempest/api/volume/admin/test_volume_types.py
+++ b/tempest/api/volume/admin/test_volume_types.py
@@ -58,14 +58,14 @@
                          "to the requested name")
         self.assertIsNotNone(volume['id'],
                              "Field volume id is empty or not found.")
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Update volume with new volume_type
         self.volumes_client.retype_volume(volume['id'],
                                           new_type=volume_types[1]['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Get volume details and Verify
         fetched_volume = self.volumes_client.show_volume(
diff --git a/tempest/api/volume/admin/test_volume_types_negative.py b/tempest/api/volume/admin/test_volume_types_negative.py
index e8694b2..69e9cc0 100644
--- a/tempest/api/volume/admin/test_volume_types_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_negative.py
@@ -17,10 +17,12 @@
 from tempest.lib.common.utils import data_utils
 from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
+from tempest import test
 
 
 class VolumeTypesNegativeV2Test(base.BaseVolumeAdminTest):
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('b48c98f2-e662-4885-9b71-032256906314')
     def test_create_with_nonexistent_volume_type(self):
         # Should not be able to create volume with nonexistent volume_type.
@@ -30,6 +32,7 @@
         self.assertRaises(lib_exc.NotFound,
                           self.volumes_client.create_volume, **params)
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('878b4e57-faa2-4659-b0d1-ce740a06ae81')
     def test_create_with_empty_name(self):
         # Should not be able to create volume type with an empty name.
@@ -37,6 +40,7 @@
             lib_exc.BadRequest,
             self.admin_volume_types_client.create_volume_type, name='')
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('994610d6-0476-4018-a644-a2602ef5d4aa')
     def test_get_nonexistent_type_id(self):
         # Should not be able to get volume type with nonexistent type id.
@@ -44,6 +48,7 @@
                           self.admin_volume_types_client.show_volume_type,
                           data_utils.rand_uuid())
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('6b3926d2-7d73-4896-bc3d-e42dfd11a9f6')
     def test_delete_nonexistent_type_id(self):
         # Should not be able to delete volume type with nonexistent type id.
@@ -51,6 +56,7 @@
                           self.admin_volume_types_client.delete_volume_type,
                           data_utils.rand_uuid())
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('8c09f849-f225-4d78-ba87-bffd9a5e0c6f')
     def test_create_volume_with_private_volume_type(self):
         # Should not be able to create volume with private volume type.
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index 04d27ea..13b7384 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -94,8 +94,9 @@
         self.addCleanup(self._delete_backup, new_id)
         self.assertIn("id", import_backup)
         self.assertEqual(new_id, import_backup['id'])
-        waiters.wait_for_backup_status(self.admin_backups_client,
-                                       import_backup['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.admin_backups_client,
+                                                import_backup['id'],
+                                                'available')
 
         # Verify Import Backup
         backups = self.admin_backups_client.list_backups(
@@ -108,14 +109,16 @@
         self.addCleanup(self.admin_volume_client.delete_volume,
                         restore['volume_id'])
         self.assertEqual(backup['id'], restore['backup_id'])
-        waiters.wait_for_volume_status(self.admin_volume_client,
-                                       restore['volume_id'], 'available')
+        waiters.wait_for_volume_resource_status(self.admin_volume_client,
+                                                restore['volume_id'],
+                                                'available')
 
         # Verify if restored volume is there in volume list
         volumes = self.admin_volume_client.list_volumes()['volumes']
         self.assertIn(restore['volume_id'], [v['id'] for v in volumes])
-        waiters.wait_for_backup_status(self.admin_backups_client,
-                                       import_backup['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.admin_backups_client,
+                                                import_backup['id'],
+                                                'available')
 
     @decorators.idempotent_id('47a35425-a891-4e13-961c-c45deea21e94')
     def test_volume_backup_reset_status(self):
@@ -131,8 +134,8 @@
         # Reset backup status to error
         self.admin_backups_client.reset_backup_status(backup_id=backup['id'],
                                                       status="error")
-        waiters.wait_for_backup_status(self.admin_backups_client,
-                                       backup['id'], 'error')
+        waiters.wait_for_volume_resource_status(self.admin_backups_client,
+                                                backup['id'], 'error')
 
 
 class VolumesBackupsAdminV1Test(VolumesBackupsAdminV2Test):
diff --git a/tempest/api/volume/admin/v2/test_snapshot_manage.py b/tempest/api/volume/admin/v2/test_snapshot_manage.py
index 1114924..e8bd477 100644
--- a/tempest/api/volume/admin/v2/test_snapshot_manage.py
+++ b/tempest/api/volume/admin/v2/test_snapshot_manage.py
@@ -61,13 +61,13 @@
         new_snapshot = self.admin_snapshot_manage_client.manage_snapshot(
             volume_id=volume['id'],
             ref={'source-name': snapshot_ref})['snapshot']
-        self.addCleanup(self.delete_snapshot,
-                        self.admin_snapshots_client, new_snapshot['id'])
+        self.addCleanup(self.delete_snapshot, new_snapshot['id'],
+                        self.admin_snapshots_client)
 
         # Wait for the snapshot to be available after manage operation
-        waiters.wait_for_snapshot_status(self.admin_snapshots_client,
-                                         new_snapshot['id'],
-                                         'available')
+        waiters.wait_for_volume_resource_status(self.admin_snapshots_client,
+                                                new_snapshot['id'],
+                                                'available')
 
         # Verify the managed snapshot has the expected parent volume
         self.assertEqual(new_snapshot['volume_id'], volume['id'])
diff --git a/tempest/api/volume/admin/v2/test_volumes_list.py b/tempest/api/volume/admin/v2/test_volumes_list.py
index b0a37fb..6bab373 100644
--- a/tempest/api/volume/admin/v2/test_volumes_list.py
+++ b/tempest/api/volume/admin/v2/test_volumes_list.py
@@ -45,8 +45,8 @@
         # Create a volume in admin tenant
         adm_vol = self.admin_volume_client.create_volume(
             size=CONF.volume.volume_size)['volume']
-        waiters.wait_for_volume_status(self.admin_volume_client,
-                                       adm_vol['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.admin_volume_client,
+                                                adm_vol['id'], 'available')
         self.addCleanup(self.admin_volume_client.delete_volume, adm_vol['id'])
         params = {'all_tenants': 1,
                   'project_id': self.volumes_client.tenant_id}
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 98e050e..fd10fb3 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -131,8 +131,8 @@
 
         volume = cls.volumes_client.create_volume(**kwargs)['volume']
         cls.volumes.append(volume)
-        waiters.wait_for_volume_status(cls.volumes_client, volume['id'],
-                                       wait_until)
+        waiters.wait_for_volume_resource_status(cls.volumes_client,
+                                                volume['id'], wait_until)
         return volume
 
     @classmethod
@@ -145,9 +145,9 @@
 
         snapshot = cls.snapshots_client.create_snapshot(
             volume_id=volume_id, **kwargs)['snapshot']
-        cls.snapshots.append(snapshot)
-        waiters.wait_for_snapshot_status(cls.snapshots_client,
-                                         snapshot['id'], 'available')
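+        # Track only the snapshot id: delete_snapshot() and
+        # clear_snapshots() both operate on ids.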
+        cls.snapshots.append(snapshot['id'])
+        waiters.wait_for_volume_resource_status(cls.snapshots_client,
+                                                snapshot['id'], 'available')
         return snapshot
 
     def create_backup(self, volume_id, backup_client=None, **kwargs):
@@ -158,8 +158,8 @@
         backup = backup_client.create_backup(
             volume_id=volume_id, **kwargs)['backup']
         self.addCleanup(backup_client.delete_backup, backup['id'])
-        waiters.wait_for_backup_status(backup_client, backup['id'],
-                                       'available')
+        waiters.wait_for_volume_resource_status(backup_client, backup['id'],
+                                                'available')
         return backup
 
     # NOTE(afazekas): these create_* and clean_* could be defined
@@ -171,21 +171,24 @@
         client.delete_volume(volume_id)
         client.wait_for_resource_deletion(volume_id)
 
-    @staticmethod
-    def delete_snapshot(client, snapshot_id):
+    def delete_snapshot(self, snapshot_id, snapshots_client=None):
         """Delete snapshot by the given client"""
-        client.delete_snapshot(snapshot_id)
-        client.wait_for_resource_deletion(snapshot_id)
+        if snapshots_client is None:
+            snapshots_client = self.snapshots_client
+        snapshots_client.delete_snapshot(snapshot_id)
+        snapshots_client.wait_for_resource_deletion(snapshot_id)
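+        # Drop the id from the class-level tracking list so
+        # clear_snapshots() does not attempt a second delete.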
+        if snapshot_id in self.snapshots:
+            self.snapshots.remove(snapshot_id)
 
     def attach_volume(self, server_id, volume_id):
         """Attach a volume to a server"""
         self.servers_client.attach_volume(
             server_id, volumeId=volume_id,
             device='/dev/%s' % CONF.compute.volume_device_name)
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume_id, 'in-use')
-        self.addCleanup(waiters.wait_for_volume_status, self.volumes_client,
-                        volume_id, 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume_id, 'in-use')
+        self.addCleanup(waiters.wait_for_volume_resource_status,
+                        self.volumes_client, volume_id, 'available')
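+        # Cleanups run in reverse order: the detach below executes first,
+        # then the wait for the volume to become 'available' again.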
         self.addCleanup(self.servers_client.detach_volume, server_id,
                         volume_id)
 
@@ -207,12 +210,12 @@
     def clear_snapshots(cls):
         for snapshot in cls.snapshots:
             test_utils.call_and_ignore_notfound_exc(
-                cls.snapshots_client.delete_snapshot, snapshot['id'])
+                cls.snapshots_client.delete_snapshot, snapshot)
 
         for snapshot in cls.snapshots:
             test_utils.call_and_ignore_notfound_exc(
                 cls.snapshots_client.wait_for_resource_deletion,
-                snapshot['id'])
+                snapshot)
 
     def create_server(self, **kwargs):
         name = kwargs.pop(
diff --git a/tempest/api/volume/test_volume_transfers.py b/tempest/api/volume/test_volume_transfers.py
index 5477770..9f63b14 100644
--- a/tempest/api/volume/test_volume_transfers.py
+++ b/tempest/api/volume/test_volume_transfers.py
@@ -43,8 +43,8 @@
             volume_id=volume['id'])['transfer']
         transfer_id = transfer['id']
         auth_key = transfer['auth_key']
-        waiters.wait_for_volume_status(self.client,
-                                       volume['id'], 'awaiting-transfer')
+        waiters.wait_for_volume_resource_status(
+            self.client, volume['id'], 'awaiting-transfer')
 
         # Get a volume transfer
         body = self.client.show_volume_transfer(transfer_id)['transfer']
@@ -58,8 +58,8 @@
         # Accept a volume transfer by alt_tenant
         body = self.alt_client.accept_volume_transfer(
             transfer_id, auth_key=auth_key)['transfer']
-        waiters.wait_for_volume_status(self.alt_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.alt_client,
+                                                volume['id'], 'available')
 
     @decorators.idempotent_id('ab526943-b725-4c07-b875-8e8ef87a2c30')
     def test_create_list_delete_volume_transfer(self):
@@ -71,8 +71,8 @@
         body = self.client.create_volume_transfer(
             volume_id=volume['id'])['transfer']
         transfer_id = body['id']
-        waiters.wait_for_volume_status(self.client,
-                                       volume['id'], 'awaiting-transfer')
+        waiters.wait_for_volume_resource_status(
+            self.client, volume['id'], 'awaiting-transfer')
 
         # List all volume transfers (looking for the one we created)
         body = self.client.list_volume_transfers()['transfers']
@@ -84,7 +84,8 @@
 
         # Delete a volume transfer
         self.client.delete_volume_transfer(transfer_id)
-        waiters.wait_for_volume_status(self.client, volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(
+            self.client, volume['id'], 'available')
 
 
 class VolumesV1TransfersTest(VolumesV2TransfersTest):
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index c0cc74d..0a6901c 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -60,11 +60,11 @@
                                   instance_uuid=server['id'],
                                   mountpoint='/dev/%s' %
                                              CONF.compute.volume_device_name)
-        waiters.wait_for_volume_status(self.client,
-                                       self.volume['id'], 'in-use')
+        waiters.wait_for_volume_resource_status(self.client,
+                                                self.volume['id'], 'in-use')
         self.client.detach_volume(self.volume['id'])
-        waiters.wait_for_volume_status(self.client,
-                                       self.volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.client,
+                                                self.volume['id'], 'available')
 
     @decorators.idempotent_id('63e21b4c-0a0c-41f6-bfc3-7c2816815599')
     def test_volume_bootable(self):
@@ -91,11 +91,10 @@
                                   instance_uuid=server['id'],
                                   mountpoint='/dev/%s' %
                                              CONF.compute.volume_device_name)
-        waiters.wait_for_volume_status(self.client,
-                                       self.volume['id'], 'in-use')
-        self.addCleanup(waiters.wait_for_volume_status, self.client,
-                        self.volume['id'],
-                        'available')
+        waiters.wait_for_volume_resource_status(self.client, self.volume['id'],
+                                                'in-use')
+        self.addCleanup(waiters.wait_for_volume_resource_status, self.client,
+                        self.volume['id'], 'available')
         self.addCleanup(self.client.detach_volume, self.volume['id'])
         volume = self.client.show_volume(self.volume['id'])['volume']
         self.assertIn('attachments', volume)
@@ -124,8 +123,8 @@
                         self.image_client.delete_image,
                         image_id)
         waiters.wait_for_image_status(self.image_client, image_id, 'active')
-        waiters.wait_for_volume_status(self.client,
-                                       self.volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.client,
+                                                self.volume['id'], 'available')
 
     @decorators.idempotent_id('92c4ef64-51b2-40c0-9f7e-4749fbaaba33')
     def test_reserve_unreserve_volume(self):
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 939f1ac..e664ff7 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -40,11 +40,11 @@
         self.addCleanup(self.volumes_client.delete_volume,
                         restored_volume['volume_id'])
         self.assertEqual(backup_id, restored_volume['backup_id'])
-        waiters.wait_for_backup_status(self.backups_client,
-                                       backup_id, 'available')
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       restored_volume['volume_id'],
-                                       'available')
+        waiters.wait_for_volume_resource_status(self.backups_client,
+                                                backup_id, 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                restored_volume['volume_id'],
+                                                'available')
         return restored_volume
 
     @decorators.idempotent_id('a66eb488-8ee1-47d4-8e9f-575a095728c6')
@@ -60,8 +60,8 @@
                                     name=backup_name,
                                     description=description)
         self.assertEqual(backup_name, backup['name'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         # Get a given backup
         backup = self.backups_client.show_backup(backup['id'])['backup']
diff --git a/tempest/api/volume/test_volumes_clone_negative.py b/tempest/api/volume/test_volumes_clone_negative.py
index fa827cd..5331243 100644
--- a/tempest/api/volume/test_volumes_clone_negative.py
+++ b/tempest/api/volume/test_volumes_clone_negative.py
@@ -17,6 +17,7 @@
 from tempest import config
 from tempest.lib import decorators
 from tempest.lib import exceptions
+from tempest import test
 
 CONF = config.CONF
 
@@ -29,6 +30,7 @@
         if not CONF.volume_feature_enabled.clone:
             raise cls.skipException("Cinder volume clones are disabled")
 
+    @test.attr(type=['negative'])
     @decorators.idempotent_id('9adae371-a257-43a5-459a-dc7c88e66e0e')
     def test_create_from_volume_decreasing_size(self):
         # Creates a volume from another volume passing a size different from
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 2378790..3df9b00 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -27,8 +27,8 @@
         extend_size = volume['size'] + 1
         self.volumes_client.extend_volume(volume['id'],
                                           new_size=extend_size)
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
         volume = self.volumes_client.show_volume(volume['id'])['volume']
         self.assertEqual(volume['size'], extend_size)
 
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
index d1a1c2f..a3e46a8 100644
--- a/tempest/api/volume/test_volumes_get.py
+++ b/tempest/api/volume/test_volumes_get.py
@@ -41,8 +41,8 @@
         volume = self.volumes_client.create_volume(**kwargs)['volume']
         self.assertIn('id', volume)
         self.addCleanup(self.delete_volume, self.volumes_client, volume['id'])
-        waiters.wait_for_volume_status(self.volumes_client, volume['id'],
-                                       'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
         self.assertIn(name_field, volume)
         self.assertEqual(volume[name_field], v_name,
                          "The created volume name is not equal "
@@ -106,8 +106,8 @@
         self.assertIn('id', new_volume)
         self.addCleanup(self.delete_volume, self.volumes_client,
                         new_volume['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       new_volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                new_volume['id'], 'available')
 
         params = {name_field: volume[name_field],
                   descrip_field: volume[descrip_field]}
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index 9f4ce95..5abda5e 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -10,6 +10,8 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+from testtools import matchers
+
 from tempest.api.volume import base
 from tempest.common.utils import data_utils
 from tempest import config
@@ -34,12 +36,6 @@
         cls.name_field = cls.special_fields['name_field']
         cls.descrip_field = cls.special_fields['descrip_field']
 
-    def cleanup_snapshot(self, snapshot):
-        # Delete the snapshot
-        self.snapshots_client.delete_snapshot(snapshot['id'])
-        self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
-        self.snapshots.remove(snapshot)
-
     @decorators.idempotent_id('b467b54c-07a4-446d-a1cf-651dedcc3ff1')
     @test.services('compute')
     def test_snapshot_create_with_volume_in_use(self):
@@ -52,7 +48,7 @@
         snapshot = self.create_snapshot(self.volume_origin['id'],
                                         force=True)
         # Delete the snapshot
-        self.cleanup_snapshot(snapshot)
+        self.delete_snapshot(snapshot['id'])
 
     @decorators.idempotent_id('8567b54c-4455-446d-a1cf-651ddeaa3ff2')
     @test.services('compute')
@@ -68,9 +64,9 @@
 
         # Delete the snapshots. Some snapshot implementations can take
         # different paths according to the order in which they are deleted.
-        self.cleanup_snapshot(snapshot1)
-        self.cleanup_snapshot(snapshot3)
-        self.cleanup_snapshot(snapshot2)
+        self.delete_snapshot(snapshot1['id'])
+        self.delete_snapshot(snapshot3['id'])
+        self.delete_snapshot(snapshot2['id'])
 
     @decorators.idempotent_id('5210a1de-85a0-11e6-bb21-641c676a5d61')
     @test.services('compute')
@@ -89,14 +85,18 @@
 
         # Delete the snapshots. Some snapshot implementations can take
         # different paths according to the order in which they are deleted.
-        self.cleanup_snapshot(snapshot3)
-        self.cleanup_snapshot(snapshot1)
-        self.cleanup_snapshot(snapshot2)
+        self.delete_snapshot(snapshot3['id'])
+        self.delete_snapshot(snapshot1['id'])
+        self.delete_snapshot(snapshot2['id'])
 
     @decorators.idempotent_id('2a8abbe4-d871-46db-b049-c41f5af8216e')
     def test_snapshot_create_get_list_update_delete(self):
-        # Create a snapshot
-        snapshot = self.create_snapshot(self.volume_origin['id'])
+        # Create a snapshot with metadata
+        metadata = {"snap-meta1": "value1",
+                    "snap-meta2": "value2",
+                    "snap-meta3": "value3"}
+        snapshot = self.create_snapshot(self.volume_origin['id'],
+                                        metadata=metadata)
 
         # Get the snap and check for some of its details
         snap_get = self.snapshots_client.show_snapshot(
@@ -105,6 +105,10 @@
                          snap_get['volume_id'],
                          "Referred volume origin mismatch")
 
+        # Verify snapshot metadata
+        self.assertThat(snap_get['metadata'].items(),
+                        matchers.ContainsAll(metadata.items()))
+
         # Compare also with the output from the list action
         tracking_data = (snapshot['id'], snapshot[self.name_field])
         snaps_list = self.snapshots_client.list_snapshots()['snapshots']
@@ -129,7 +133,7 @@
         self.assertEqual(new_desc, updated_snapshot[self.descrip_field])
 
         # Delete the snapshot
-        self.cleanup_snapshot(snapshot)
+        self.delete_snapshot(snapshot['id'])
 
     @decorators.idempotent_id('677863d1-3142-456d-b6ac-9924f667a7f4')
     def test_volume_from_snapshot(self):
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
index a3d91b0..9e44379 100644
--- a/tempest/api/volume/test_volumes_snapshots_negative.py
+++ b/tempest/api/volume/test_volumes_snapshots_negative.py
@@ -62,6 +62,13 @@
                           size=src_size - 1,
                           snapshot_id=src_snap['id'])
 
+    @test.attr(type=['negative'])
+    @decorators.idempotent_id('8fd92339-e22f-4591-86b4-1e2215372a40')
+    def test_list_snapshot_invalid_param_limit(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.snapshots_client.list_snapshots,
+                          limit='invalid')
+
 
 class VolumesV1SnapshotNegativeTestJSON(VolumesV2SnapshotNegativeTestJSON):
     _api_version = 1
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index 8b51e64..d2328c8 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -37,13 +37,12 @@
         super(VolumesV2ListTestJSON, cls).resource_setup()
 
         # Create 3 test volumes
-        metadata = {'Type': 'work'}
         # NOTE(zhufl): When using pre-provisioned credentials, the project
         # may have volumes other than those created below.
         existing_volumes = cls.volumes_client.list_volumes()['volumes']
         cls.volume_id_list = [vol['id'] for vol in existing_volumes]
         for _ in range(3):
-            volume = cls.create_volume(metadata=metadata)
+            volume = cls.create_volume()
             cls.volume_id_list.append(volume['id'])
 
     @decorators.idempotent_id('2a7064eb-b9c3-429b-b888-33928fc5edd3')
diff --git a/tempest/api/volume/v2/test_volumes_snapshots_negative.py b/tempest/api/volume/v2/test_volumes_snapshots_negative.py
new file mode 100644
index 0000000..e5581b9
--- /dev/null
+++ b/tempest/api/volume/v2/test_volumes_snapshots_negative.py
@@ -0,0 +1,46 @@
+# Copyright 2017 Red Hat, Inc.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from tempest.api.volume import base
+from tempest.common.utils import data_utils
+from tempest import config
+from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
+from tempest import test
+
+CONF = config.CONF
+
+
+class VolumesV2SnapshotNegativeTest(base.BaseVolumeTest):
+
+    @classmethod
+    def skip_checks(cls):
+        super(VolumesV2SnapshotNegativeTest, cls).skip_checks()
+        if not CONF.volume_feature_enabled.snapshot:
+            raise cls.skipException("Cinder volume snapshots are disabled")
+
+    @test.attr(type=['negative'])
+    @decorators.idempotent_id('27b5f37f-bf69-4e8c-986e-c44f3d6819b8')
+    def test_list_snapshots_invalid_param_sort(self):
+        self.assertRaises(lib_exc.BadRequest,
+                          self.snapshots_client.list_snapshots,
+                          sort_key='invalid')
+
+    @test.attr(type=['negative'])
+    @decorators.idempotent_id('b68deeda-ca79-4a32-81af-5c51179e553a')
+    def test_list_snapshots_invalid_param_marker(self):
+        self.assertRaises(lib_exc.NotFound,
+                          self.snapshots_client.list_snapshots,
+                          marker=data_utils.rand_uuid())
diff --git a/tempest/cmd/subunit_describe_calls.py b/tempest/cmd/subunit_describe_calls.py
index 0f868a9..8ee3055 100644
--- a/tempest/cmd/subunit_describe_calls.py
+++ b/tempest/cmd/subunit_describe_calls.py
@@ -294,7 +294,8 @@
             outfile.write(json.dumps(url_parser.test_logs))
         return
 
-    for test_name, items in url_parser.test_logs.iteritems():
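+    # dict.iteritems() does not exist under Python 3; iterate over the
+    # keys and index into the mapping instead.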
+    for test_name in url_parser.test_logs:
+        items = url_parser.test_logs[test_name]
         sys.stdout.write('{0}\n'.format(test_name))
         if not items:
             sys.stdout.write('\n')
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index 55bc93e..99da983 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -124,8 +124,9 @@
                   'imageRef': image_id,
                   'size': CONF.volume.volume_size}
         volume = volumes_client.create_volume(**params)
-        waiters.wait_for_volume_status(volumes_client,
-                                       volume['volume']['id'], 'available')
+        waiters.wait_for_volume_resource_status(volumes_client,
+                                                volume['volume']['id'],
+                                                'available')
 
         bd_map_v2 = [{
             'uuid': volume['volume']['id'],
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 15619f4..3e5600c 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -10,7 +10,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-
+import re
 import time
 
 from oslo_log import log as logging
@@ -179,25 +179,33 @@
     raise lib_exc.TimeoutException(message)
 
 
-def wait_for_volume_status(client, volume_id, status):
-    """Waits for a Volume to reach a given status."""
-    body = client.show_volume(volume_id)['volume']
-    volume_status = body['status']
+def wait_for_volume_resource_status(client, resource_id, status):
+    """Waits for a volume resource to reach a given status.
+
+    This is a common function for volume, snapshot and backup resources.
+    It derives the name of the resource to poll from the class name of
+    the given client.
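+
+    For example, when called with a SnapshotsClient instance this
+    resolves show_snapshot() via getattr and polls the returned
+    snapshot's 'status' field.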
+    """
+    resource_name = re.findall(r'(Volume|Snapshot|Backup)',
+                               client.__class__.__name__)[0].lower()
+    show_resource = getattr(client, 'show_' + resource_name)
+    resource_status = show_resource(resource_id)[resource_name]['status']
     start = int(time.time())
 
-    while volume_status != status:
+    while resource_status != status:
         time.sleep(client.build_interval)
-        body = client.show_volume(volume_id)['volume']
-        volume_status = body['status']
-        if volume_status == 'error' and status != 'error':
-            raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
-        if volume_status == 'error_restoring':
-            raise exceptions.VolumeRestoreErrorException(volume_id=volume_id)
+        resource_status = show_resource(resource_id)[resource_name]['status']
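+        # Fail fast if the resource went into 'error', unless 'error' is
+        # the very status being waited for.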
+        if resource_status == 'error' and resource_status != status:
+            raise exceptions.VolumeResourceBuildErrorException(
+                resource_name=resource_name, resource_id=resource_id)
+        if resource_name == 'volume' and resource_status == 'error_restoring':
+            raise exceptions.VolumeRestoreErrorException(volume_id=resource_id)
 
         if int(time.time()) - start >= client.build_timeout:
-            message = ('Volume %s failed to reach %s status (current %s) '
+            message = ('%s %s failed to reach %s status (current %s) '
                        'within the required time (%s s).' %
-                       (volume_id, status, volume_status,
+                       (resource_name, resource_id, status, resource_status,
                         client.build_timeout))
             raise lib_exc.TimeoutException(message)
 
@@ -221,48 +229,6 @@
             raise lib_exc.TimeoutException(message)
 
 
-def wait_for_snapshot_status(client, snapshot_id, status):
-    """Waits for a Snapshot to reach a given status."""
-    body = client.show_snapshot(snapshot_id)['snapshot']
-    snapshot_status = body['status']
-    start = int(time.time())
-
-    while snapshot_status != status:
-        time.sleep(client.build_interval)
-        body = client.show_snapshot(snapshot_id)['snapshot']
-        snapshot_status = body['status']
-        if snapshot_status == 'error':
-            raise exceptions.SnapshotBuildErrorException(
-                snapshot_id=snapshot_id)
-        if int(time.time()) - start >= client.build_timeout:
-            message = ('Snapshot %s failed to reach %s status (current %s) '
-                       'within the required time (%s s).' %
-                       (snapshot_id, status, snapshot_status,
-                        client.build_timeout))
-            raise lib_exc.TimeoutException(message)
-
-
-def wait_for_backup_status(client, backup_id, status):
-    """Waits for a Backup to reach a given status."""
-    body = client.show_backup(backup_id)['backup']
-    backup_status = body['status']
-    start = int(time.time())
-
-    while backup_status != status:
-        time.sleep(client.build_interval)
-        body = client.show_backup(backup_id)['backup']
-        backup_status = body['status']
-        if backup_status == 'error' and backup_status != status:
-            raise lib_exc.VolumeBackupException(backup_id=backup_id)
-
-        if int(time.time()) - start >= client.build_timeout:
-            message = ('Volume backup %s failed to reach %s status '
-                       '(current %s) within the required time (%s s).' %
-                       (backup_id, status, backup_status,
-                        client.build_timeout))
-            raise lib_exc.TimeoutException(message)
-
-
 def wait_for_qos_operations(client, qos_id, operation, args=None):
     """Waits for a qos operations to be completed.
 
diff --git a/tempest/config.py b/tempest/config.py
index b4d88c5..83c5c0e 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -15,12 +15,15 @@
 
 from __future__ import print_function
 
+import functools
 import os
 import tempfile
 
+import debtcollector.removals
 from oslo_concurrency import lockutils
 from oslo_config import cfg
 from oslo_log import log as logging
+import testtools
 
 from tempest.lib import exceptions
 from tempest.lib.services import clients
@@ -1189,6 +1192,79 @@
 CONF = TempestConfigProxy()
 
 
+@debtcollector.removals.remove(
+    message='use testtools.skipUnless instead', removal_version='Queens')
+def skip_unless_config(*args):
+    """Decorator to raise a skip if a config opt doesn't exist or is False
+
+    :param str group: The first arg, the option group to check
+    :param str name: The second arg, the option name to check
+    :param str msg: Optional third arg, the skip msg to use if a skip is raised
+    :raises testtools.TestCase.skipException: If the specified config option
+        doesn't exist or it exists and evaluates to False
+    """
+    def decorator(f):
+        group = args[0]
+        name = args[1]
+
+        @functools.wraps(f)
+        def wrapper(self, *func_args, **func_kwargs):
+            if not hasattr(CONF, group):
+                msg = "Config group %s doesn't exist" % group
+                raise testtools.TestCase.skipException(msg)
+
+            conf_group = getattr(CONF, group)
+            if not hasattr(conf_group, name):
+                msg = "Config option %s.%s doesn't exist" % (group,
+                                                             name)
+                raise testtools.TestCase.skipException(msg)
+
+            value = getattr(conf_group, name)
+            if not value:
+                if len(args) == 3:
+                    msg = args[2]
+                else:
+                    msg = "Config option %s.%s is false" % (group,
+                                                            name)
+                raise testtools.TestCase.skipException(msg)
+            return f(self, *func_args, **func_kwargs)
+        return wrapper
+    return decorator
+
+
+@debtcollector.removals.remove(
+    message='use testtools.skipIf instead', removal_version='Queens')
+def skip_if_config(*args):
+    """Raise a skipException if a config exists and is True
+
+    :param str group: The first arg, the option group to check
+    :param str name: The second arg, the option name to check
+    :param str msg: Optional third arg, the skip msg to use if a skip is raised
+    :raises testtools.TestCase.skipException: If the specified config option
+        exists and evaluates to True
+    """
+    def decorator(f):
+        group = args[0]
+        name = args[1]
+
+        @functools.wraps(f)
+        def wrapper(self, *func_args, **func_kwargs):
+            if hasattr(CONF, group):
+                conf_group = getattr(CONF, group)
+                if hasattr(conf_group, name):
+                    value = getattr(conf_group, name)
+                    if value:
+                        if len(args) == 3:
+                            msg = args[2]
+                        else:
+                            msg = "Config option %s.%s is true" % (group,
+                                                                   name)
+                        raise testtools.TestCase.skipException(msg)
+            return f(self, *func_args, **func_kwargs)
+        return wrapper
+    return decorator
+
+
 def service_client_config(service_client_name=None):
     """Return a dict with the parameters to init service clients
 
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index 45bbc11..f48d7ac 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -37,18 +37,15 @@
     message = "Image %(image_id)s failed to become ACTIVE in the allotted time"
 
 
-class VolumeBuildErrorException(exceptions.TempestException):
-    message = "Volume %(volume_id)s failed to build and is in ERROR status"
+class VolumeResourceBuildErrorException(exceptions.TempestException):
+    message = ("%(resource_name)s %(resource_id)s failed to build and is in "
+               "ERROR status")
 
 
 class VolumeRestoreErrorException(exceptions.TempestException):
     message = "Volume %(volume_id)s failed to restore and is in ERROR status"
 
 
-class SnapshotBuildErrorException(exceptions.TempestException):
-    message = "Snapshot %(snapshot_id)s failed to build and is in ERROR status"
-
-
 class StackBuildErrorException(exceptions.TempestException):
     message = ("Stack %(stack_identifier)s is in %(stack_status)s status "
                "due to '%(stack_status_reason)s'")
diff --git a/tempest/lib/services/identity/v2/services_client.py b/tempest/lib/services/identity/v2/services_client.py
index b3f94aa..47398db 100644
--- a/tempest/lib/services/identity/v2/services_client.py
+++ b/tempest/lib/services/identity/v2/services_client.py
@@ -26,7 +26,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#create-service-admin-extension
+        http://developer.openstack.org/api-ref/identity/v2-ext/#create-service-admin-extension
         """
         post_body = json.dumps({'OS-KSADM:service': kwargs})
         resp, body = self.post('/OS-KSADM/services', post_body)
@@ -47,7 +47,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#list-services-admin-extension
+        http://developer.openstack.org/api-ref/identity/v2-ext/#list-services-admin-extension
         """
         url = '/OS-KSADM/services'
         if params:
diff --git a/tempest/lib/services/identity/v3/role_assignments_client.py b/tempest/lib/services/identity/v3/role_assignments_client.py
index 10de03f..a426e69 100644
--- a/tempest/lib/services/identity/v3/role_assignments_client.py
+++ b/tempest/lib/services/identity/v3/role_assignments_client.py
@@ -26,7 +26,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/identity/v3/?expanded=list-effective-role-assignments-detail
+        http://developer.openstack.org/api-ref/identity/v3/#list-role-assignments
 
         :param effective: If True, returns the effective assignments, including
                           any assignments gained by virtue of group membership
diff --git a/tempest/lib/services/image/v2/namespace_tags_client.py b/tempest/lib/services/image/v2/namespace_tags_client.py
index ac8b569..a7f8c39 100644
--- a/tempest/lib/services/image/v2/namespace_tags_client.py
+++ b/tempest/lib/services/image/v2/namespace_tags_client.py
@@ -115,5 +115,11 @@
         """
         url = 'metadefs/namespaces/%s/tags' % namespace
         resp, _ = self.delete(url)
-        self.expected_success(200, resp.status)
+
+        # NOTE(rosmaita): Bug 1656183 fixed the success response code for
+        # this call to make it consistent with the other metadefs delete
+        # calls.  Accept both codes in case tempest is being run against
+        # an old Glance.
+        self.expected_success([200, 204], resp.status)
+
         return rest_client.ResponseBody(resp)
diff --git a/tempest/lib/services/image/v2/resource_types_client.py b/tempest/lib/services/image/v2/resource_types_client.py
index 1b6889f..13259d1 100644
--- a/tempest/lib/services/image/v2/resource_types_client.py
+++ b/tempest/lib/services/image/v2/resource_types_client.py
@@ -26,7 +26,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-types
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-resource-types
         """
         url = 'metadefs/resource_types'
         resp, body = self.get(url)
@@ -39,7 +39,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#create-resource-type-association
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#create-resource-type-association
         """
         url = 'metadefs/namespaces/%s/resource_types' % namespace_id
         data = json.dumps(kwargs)
@@ -53,7 +53,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-type-associations
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-resource-type-associations
         """
         url = 'metadefs/namespaces/%s/resource_types' % namespace_id
         resp, body = self.get(url)
@@ -66,7 +66,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#remove-resource-type-association
+        http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-resource-type-association
         """
         url = 'metadefs/namespaces/%s/resource_types/%s' % (namespace_id,
                                                             resource_name)
diff --git a/tempest/lib/services/network/ports_client.py b/tempest/lib/services/network/ports_client.py
index 93138b9..daa15d7 100644
--- a/tempest/lib/services/network/ports_client.py
+++ b/tempest/lib/services/network/ports_client.py
@@ -73,7 +73,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=#bulk-create-ports
+        http://developer.openstack.org/api-ref/networking/v2/index.html#bulk-create-ports
         """
         uri = '/ports'
         return self.create_resource(uri, kwargs)
diff --git a/tempest/lib/services/volume/v2/qos_client.py b/tempest/lib/services/volume/v2/qos_client.py
index 40d4a3f..47d3914 100644
--- a/tempest/lib/services/volume/v2/qos_client.py
+++ b/tempest/lib/services/volume/v2/qos_client.py
@@ -43,9 +43,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/block-storage/v2/index.html
-                                ?expanded=create-qos-specification-detail
-                                #quality-of-service-qos-specifications-qos-specs
+        http://developer.openstack.org/api-ref/block-storage/v2/#create-qos-specification
         """
         post_body = json.dumps({'qos_specs': kwargs})
         resp, body = self.post('qos-specs', post_body)
@@ -81,9 +79,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/block-storage/v2/index.html
-                            ?expanded=set-keys-in-qos-specification-detail
-                            #quality-of-service-qos-specifications-qos-specs
+        http://developer.openstack.org/api-ref/block-storage/v2/#set-keys-in-qos-specification
         """
         put_body = json.dumps({"qos_specs": kwargs})
         resp, body = self.put('qos-specs/%s' % qos_id, put_body)
@@ -98,9 +94,7 @@
 
         For a full list of available parameters, please refer to the official
         API reference:
-        http://developer.openstack.org/api-ref/block-storage/v2/index.html
-                            ?expanded=unset-keys-in-qos-specification-detail
-                            #quality-of-service-qos-specifications-qos-specs
+        http://developer.openstack.org/api-ref/block-storage/v2/#unset-keys-in-qos-specification
         """
         put_body = json.dumps({'keys': keys})
         resp, body = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 6014c8c..e58031b 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -241,8 +241,8 @@
             self.assertEqual(name, volume['display_name'])
         else:
             self.assertEqual(name, volume['name'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
         # The volume retrieved on creation has a non-up-to-date status.
         # Retrieval after it becomes active ensures correct details.
         volume = self.volumes_client.show_volume(volume['id'])['volume']
@@ -481,8 +481,9 @@
                 self.addCleanup(test_utils.call_and_ignore_notfound_exc,
                                 self.snapshots_client.delete_snapshot,
                                 snapshot_id)
-                waiters.wait_for_snapshot_status(self.snapshots_client,
-                                                 snapshot_id, 'available')
+                waiters.wait_for_volume_resource_status(self.snapshots_client,
+                                                        snapshot_id,
+                                                        'available')
         image_name = snapshot_image['name']
         self.assertEqual(name, image_name)
         LOG.debug("Created snapshot image %s for server %s",
@@ -494,16 +495,16 @@
             server['id'], volumeId=volume_to_attach['id'], device='/dev/%s'
             % CONF.compute.volume_device_name)['volumeAttachment']
         self.assertEqual(volume_to_attach['id'], volume['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'in-use')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'in-use')
 
         # Return the updated volume after the attachment
         return self.volumes_client.show_volume(volume['id'])['volume']
 
     def nova_volume_detach(self, server, volume):
         self.servers_client.detach_volume(server['id'], volume['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
 
         volume = self.volumes_client.show_volume(volume['id'])['volume']
         self.assertEqual('available', volume['status'])
@@ -730,36 +731,6 @@
                         network['id'])
         return network
 
-    def _list_networks(self, *args, **kwargs):
-        """List networks using admin creds """
-        networks_list = self.admin_manager.networks_client.list_networks(
-            *args, **kwargs)
-        return networks_list['networks']
-
-    def _list_subnets(self, *args, **kwargs):
-        """List subnets using admin creds """
-        subnets_list = self.admin_manager.subnets_client.list_subnets(
-            *args, **kwargs)
-        return subnets_list['subnets']
-
-    def _list_routers(self, *args, **kwargs):
-        """List routers using admin creds """
-        routers_list = self.admin_manager.routers_client.list_routers(
-            *args, **kwargs)
-        return routers_list['routers']
-
-    def _list_ports(self, *args, **kwargs):
-        """List ports using admin creds """
-        ports_list = self.admin_manager.ports_client.list_ports(
-            *args, **kwargs)
-        return ports_list['ports']
-
-    def _list_agents(self, *args, **kwargs):
-        """List agents using admin creds """
-        agents_list = self.admin_manager.network_agents_client.list_agents(
-            *args, **kwargs)
-        return agents_list['agents']
-
     def _create_subnet(self, network, subnets_client=None,
                        routers_client=None, namestart='subnet-smoke',
                        **kwargs):
@@ -778,7 +749,8 @@
             :returns: True if a subnet with the given cidr already exists
                   in the tenant, False otherwise
             """
-            cidr_in_use = self._list_subnets(tenant_id=tenant_id, cidr=cidr)
+            cidr_in_use = self.admin_manager.subnets_client.list_subnets(
+                tenant_id=tenant_id, cidr=cidr)['subnets']
             return len(cidr_in_use) != 0
 
         ip_version = kwargs.pop('ip_version', 4)
@@ -826,7 +798,8 @@
         return subnet
 
     def _get_server_port_id_and_ip4(self, server, ip_addr=None):
-        ports = self._list_ports(device_id=server['id'], fixed_ip=ip_addr)
+        ports = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'], fixed_ip=ip_addr)['ports']
         # A port can have more than one IP address in some cases.
         # If the network is dual-stack (IPv4 + IPv6), this port is associated
         # with 2 subnets
@@ -855,7 +828,8 @@
         return port_map[0]
 
     def _get_network_by_name(self, network_name):
-        net = self._list_networks(name=network_name)
+        net = self.admin_manager.networks_client.list_networks(
+            name=network_name)['networks']
         self.assertNotEqual(len(net), 0,
                             "Unable to get network by name: %s" % network_name)
         return net[0]
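
The removed ``_list_*`` helpers were thin wrappers over the admin clients;
call sites now invoke the client directly and unwrap the body key inline.
A minimal sketch of the pattern, reusing the client names above::

    # Direct admin-client calls replace the old _list_* helpers: each
    # response is a dict keyed by resource name, unwrapped inline.
    ports = self.admin_manager.ports_client.list_ports(
        device_id=server['id'])['ports']
    nets = self.admin_manager.networks_client.list_networks(
        name=network_name)['networks']
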
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index 4dae564..d8c8d4a 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -127,23 +127,23 @@
         by checking the result of list_[networks,routers,subnets]
         """
 
-        seen_nets = self._list_networks()
-        seen_names = [n['name'] for n in seen_nets]
-        seen_ids = [n['id'] for n in seen_nets]
+        seen_nets = self.admin_manager.networks_client.list_networks()
+        seen_names = [n['name'] for n in seen_nets['networks']]
+        seen_ids = [n['id'] for n in seen_nets['networks']]
         self.assertIn(self.network['name'], seen_names)
         self.assertIn(self.network['id'], seen_ids)
 
         if self.subnet:
-            seen_subnets = self._list_subnets()
-            seen_net_ids = [n['network_id'] for n in seen_subnets]
-            seen_subnet_ids = [n['id'] for n in seen_subnets]
+            seen_subnets = self.admin_manager.subnets_client.list_subnets()
+            seen_net_ids = [n['network_id'] for n in seen_subnets['subnets']]
+            seen_subnet_ids = [n['id'] for n in seen_subnets['subnets']]
             self.assertIn(self.network['id'], seen_net_ids)
             self.assertIn(self.subnet['id'], seen_subnet_ids)
 
         if self.router:
-            seen_routers = self._list_routers()
-            seen_router_ids = [n['id'] for n in seen_routers]
-            seen_router_names = [n['name'] for n in seen_routers]
+            seen_routers = self.admin_manager.routers_client.list_routers()
+            seen_router_ids = [n['id'] for n in seen_routers['routers']]
+            seen_router_names = [n['name'] for n in seen_routers['routers']]
             self.assertIn(self.router['name'],
                           seen_router_names)
             self.assertIn(self.router['id'],
@@ -240,7 +240,8 @@
             ip_address, private_key=private_key)
         old_nic_list = self._get_server_nics(ssh_client)
         # get a port from a list of one item
-        port_list = self._list_ports(device_id=server['id'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'])['ports']
         self.assertEqual(1, len(port_list))
         old_port = port_list[0]
         interface = self.interface_client.create_interface(
@@ -253,9 +254,12 @@
                         server['id'], interface['port_id'])
 
         def check_ports():
-            self.new_port_list = [port for port in
-                                  self._list_ports(device_id=server['id'])
-                                  if port['id'] != old_port['id']]
+            self.new_port_list = [
+                port for port in
+                self.admin_manager.ports_client.list_ports(
+                    device_id=server['id'])['ports']
+                if port['id'] != old_port['id']
+            ]
             return len(self.new_port_list) == 1
 
         if not test_utils.call_until_true(
@@ -301,10 +305,13 @@
         floating_ip, server = self.floating_ip_tuple
         # get internal ports' ips:
         # get all network ports in the new network
-        internal_ips = (p['fixed_ips'][0]['ip_address'] for p in
-                        self._list_ports(tenant_id=server['tenant_id'],
-                                         network_id=network['id'])
-                        if p['device_owner'].startswith('network'))
+        internal_ips = (
+            p['fixed_ips'][0]['ip_address'] for p in
+            self.admin_manager.ports_client.list_ports(
+                tenant_id=server['tenant_id'],
+                network_id=network['id'])['ports']
+            if p['device_owner'].startswith('network')
+        )
 
         self._check_server_connectivity(floating_ip,
                                         internal_ips,
@@ -320,8 +327,11 @@
         # We ping the external IP from the instance using its floating IP
         # which is always IPv4, so we must only test connectivity to
         # external IPv4 IPs if the external network is dualstack.
-        v4_subnets = [s for s in self._list_subnets(
-            network_id=CONF.network.public_network_id) if s['ip_version'] == 4]
+        v4_subnets = [
+            s for s in self.admin_manager.subnets_client.list_subnets(
+                network_id=CONF.network.public_network_id)['subnets']
+            if s['ip_version'] == 4
+        ]
         self.assertEqual(1, len(v4_subnets),
                          "Found %d IPv4 subnets" % len(v4_subnets))
 
@@ -624,7 +634,8 @@
         self._setup_network_and_servers()
         floating_ip, server = self.floating_ip_tuple
         server_id = server['id']
-        port_id = self._list_ports(device_id=server_id)[0]['id']
+        port_id = self.admin_manager.ports_client.list_ports(
+            device_id=server_id)['ports'][0]['id']
         server_pip = server['addresses'][self.network['name']][0]['addr']
 
         server2 = self._create_server(self.network)
@@ -677,8 +688,8 @@
                              'Server should have been created from a '
                              'pre-existing port.')
         # Assert the port is bound to the server.
-        port_list = self._list_ports(device_id=server['id'],
-                                     network_id=self.network['id'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'], network_id=self.network['id'])['ports']
         self.assertEqual(1, len(port_list),
                          'There should only be one port created for '
                          'server %s.' % server['id'])
@@ -696,8 +707,8 @@
         # Boot another server with the same port to make sure nothing was
         # left around that could cause issues.
         server = self._create_server(self.network, port['id'])
-        port_list = self._list_ports(device_id=server['id'],
-                                     network_id=self.network['id'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            device_id=server['id'], network_id=self.network['id'])['ports']
         self.assertEqual(1, len(port_list),
                          'There should only be one port created for '
                          'server %s.' % server['id'])
@@ -727,9 +738,11 @@
         unschedule_router = (self.admin_manager.network_agents_client.
                              delete_router_from_l3_agent)
 
-        agent_list_alive = set(a["id"] for a in
-                               self._list_agents(agent_type="L3 agent") if
-                               a["alive"] is True)
+        agent_list_alive = set(
+            a["id"] for a in
+            self.admin_manager.network_agents_client.list_agents(
+                agent_type="L3 agent")['agents'] if a["alive"] is True
+        )
         self._setup_network_and_servers()
 
         # NOTE(kevinbenton): we have to use the admin credentials to check
@@ -811,8 +824,8 @@
         self._create_new_network()
         self._hotplug_server()
         fip, server = self.floating_ip_tuple
-        new_ports = self._list_ports(device_id=server["id"],
-                                     network_id=self.new_net["id"])
+        new_ports = self.admin_manager.ports_client.list_ports(
+            device_id=server["id"], network_id=self.new_net["id"])['ports']
         spoof_port = new_ports[0]
         private_key = self._get_server_key(server)
         ssh_client = self.get_remote_client(fip['floating_ip_address'],
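
The ``check_ports`` helper above is driven by ``test_utils.call_until_true``,
which polls a predicate until it returns True or a timeout elapses, returning
False rather than raising. A condensed sketch, assuming the usual network
timeout options::

    from tempest.lib import exceptions as lib_exc
    from tempest.lib.common.utils import test_utils

    def check_ports():
        ports = self.admin_manager.ports_client.list_ports(
            device_id=server['id'])['ports']
        # True once exactly one port besides the old one exists
        return len([p for p in ports if p['id'] != old_port['id']]) == 1

    # Poll every build_interval seconds, up to build_timeout in total
    if not test_utils.call_until_true(check_ports,
                                      CONF.network.build_timeout,
                                      CONF.network.build_interval):
        raise lib_exc.TimeoutException()
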
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index 2d6ea75..f33784e 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -143,9 +143,11 @@
         @param ssh: RemoteClient ssh instance to server
         @param sid: server uuid
         """
-        ports = [p["mac_address"] for p in
-                 self._list_ports(device_id=sid,
-                                  network_id=self.network_v6['id'])]
+        ports = [
+            p["mac_address"] for p in
+            self.admin_manager.ports_client.list_ports(
+                device_id=sid, network_id=self.network_v6['id'])['ports']
+        ]
         self.assertEqual(1, len(ports),
                          message=("Multiple IPv6 ports found on network %s. "
                                   "ports: %s")
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index 5565cb8..a01124d 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -220,22 +220,24 @@
         # Checks that we see the newly created network/subnet/router by
         # checking the result of list_[networks,routers,subnets]
         # Check that the (router, subnet) pair exists in port_list
-        seen_nets = self._list_networks()
-        seen_names = [n['name'] for n in seen_nets]
-        seen_ids = [n['id'] for n in seen_nets]
+        seen_nets = self.admin_manager.networks_client.list_networks()
+        seen_names = [n['name'] for n in seen_nets['networks']]
+        seen_ids = [n['id'] for n in seen_nets['networks']]
 
         self.assertIn(tenant.network['name'], seen_names)
         self.assertIn(tenant.network['id'], seen_ids)
 
-        seen_subnets = [(n['id'], n['cidr'], n['network_id'])
-                        for n in self._list_subnets()]
+        seen_subnets = [
+            (n['id'], n['cidr'], n['network_id']) for n in
+            self.admin_manager.subnets_client.list_subnets()['subnets']
+        ]
         mysubnet = (tenant.subnet['id'], tenant.subnet['cidr'],
                     tenant.network['id'])
         self.assertIn(mysubnet, seen_subnets)
 
-        seen_routers = self._list_routers()
-        seen_router_ids = [n['id'] for n in seen_routers]
-        seen_router_names = [n['name'] for n in seen_routers]
+        seen_routers = self.admin_manager.routers_client.list_routers()
+        seen_router_ids = [n['id'] for n in seen_routers['routers']]
+        seen_router_names = [n['name'] for n in seen_routers['routers']]
 
         self.assertIn(tenant.router['name'], seen_router_names)
         self.assertIn(tenant.router['id'], seen_router_ids)
@@ -243,9 +245,11 @@
         myport = (tenant.router['id'], tenant.subnet['id'])
         router_ports = [
             (i['device_id'], f['subnet_id'])
-            for i in self._list_ports(device_id=tenant.router['id'])
+            for i in self.admin_manager.ports_client.list_ports(
+                device_id=tenant.router['id'])['ports']
             if net_info.is_router_interface_port(i)
-            for f in i['fixed_ips']]
+            for f in i['fixed_ips']
+        ]
 
         self.assertIn(myport, router_ports)
 
@@ -450,7 +454,8 @@
         mac_addr = mac_addr.strip().lower()
         # Get the fixed_ips and mac_address fields of all ports. Select
         # only those two columns to reduce the size of the response.
-        port_list = self._list_ports(fields=['fixed_ips', 'mac_address'])
+        port_list = self.admin_manager.ports_client.list_ports(
+            fields=['fixed_ips', 'mac_address'])['ports']
         port_detail_list = [
             (port['fixed_ips'][0]['subnet_id'],
              port['fixed_ips'][0]['ip_address'],
@@ -536,7 +541,8 @@
                                      ip=self._get_server_ip(server),
                                      should_succeed=False)
             server_id = server['id']
-            port_id = self._list_ports(device_id=server_id)[0]['id']
+            port_id = self.admin_manager.ports_client.list_ports(
+                device_id=server_id)['ports'][0]['id']
 
             # update port with new security group and check connectivity
             self.ports_client.update_port(port_id, security_groups=[
@@ -598,7 +604,8 @@
 
         access_point_ssh = self._connect_to_access_point(new_tenant)
         server_id = server['id']
-        port_id = self._list_ports(device_id=server_id)[0]['id']
+        port_id = self.admin_manager.ports_client.list_ports(
+            device_id=server_id)['ports'][0]['id']
 
         # Flip the port's port security and check connectivity
         try:
@@ -642,7 +649,8 @@
         sec_groups = []
         server = self._create_server(name, tenant, sec_groups)
         server_id = server['id']
-        ports = self._list_ports(device_id=server_id)
+        ports = self.admin_manager.ports_client.list_ports(
+            device_id=server_id)['ports']
         self.assertEqual(1, len(ports))
         for port in ports:
             self.assertEmpty(port['security_groups'],
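
One detail worth noting in the hunk above: Neutron list calls accept a
``fields`` filter, so a caller can request just the columns it needs and keep
the response small. Sketch::

    # Only fixed_ips and mac_address are returned for each port
    port_list = self.admin_manager.ports_client.list_ports(
        fields=['fixed_ips', 'mac_address'])['ports']
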
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 4d9e59c..ec839cd 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -75,31 +75,15 @@
     @test.services('compute')
     def test_server_sequence_suspend_resume(self):
         # We create an instance for use in this test
-        instance = self.create_server()
-        instance_id = instance['id']
-        LOG.debug("Suspending instance %s. Current status: %s",
-                  instance_id, instance['status'])
-        self.servers_client.suspend_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'SUSPENDED')
-        fetched_instance = (self.servers_client.show_server(instance_id)
-                            ['server'])
-        LOG.debug("Resuming instance %s. Current status: %s",
-                  instance_id, fetched_instance['status'])
-        self.servers_client.resume_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'ACTIVE')
-        fetched_instance = (self.servers_client.show_server(instance_id)
-                            ['server'])
-        LOG.debug("Suspending instance %s. Current status: %s",
-                  instance_id, fetched_instance['status'])
-        self.servers_client.suspend_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'SUSPENDED')
-        fetched_instance = (self.servers_client.show_server(instance_id)
-                            ['server'])
-        LOG.debug("Resuming instance %s. Current status: %s",
-                  instance_id, fetched_instance['status'])
-        self.servers_client.resume_server(instance_id)
-        waiters.wait_for_server_status(self.servers_client, instance_id,
-                                       'ACTIVE')
+        instance_id = self.create_server()['id']
+
+        for _ in range(2):
+            LOG.debug("Suspending instance %s", instance_id)
+            self.servers_client.suspend_server(instance_id)
+            waiters.wait_for_server_status(self.servers_client, instance_id,
+                                           'SUSPENDED')
+
+            LOG.debug("Resuming instance %s", instance_id)
+            self.servers_client.resume_server(instance_id)
+            waiters.wait_for_server_status(self.servers_client, instance_id,
+                                           'ACTIVE')
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index 8661217..716c0bf 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -64,10 +64,10 @@
         self.addCleanup(self.snapshots_client.wait_for_resource_deletion,
                         snapshot['id'])
         self.addCleanup(self.snapshots_client.delete_snapshot, snapshot['id'])
-        waiters.wait_for_volume_status(self.volumes_client,
-                                       volume['id'], 'available')
-        waiters.wait_for_snapshot_status(self.snapshots_client,
-                                         snapshot['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.volumes_client,
+                                                volume['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.snapshots_client,
+                                                snapshot['id'], 'available')
         if 'display_name' in snapshot:
             self.assertEqual(snapshot_name, snapshot['display_name'])
         else:
@@ -88,6 +88,7 @@
                                           CONF.compute.build_interval):
             raise lib_exc.TimeoutException
 
+    @decorators.skip_because(bug="1664793")
     @decorators.idempotent_id('10fd234a-515c-41e5-b092-8323060598c5')
     @testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
                           'Snapshotting is not available.')
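
``decorators.skip_because`` raises a skip exception that names the tracking
bug, so the test stays visible in results while the bug is open. Usage, with
a hypothetical test name::

    @decorators.skip_because(bug="1664793")
    @decorators.idempotent_id('10fd234a-515c-41e5-b092-8323060598c5')
    def test_example(self):
        ...
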
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 43dcf96..b72dae9 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -43,16 +43,13 @@
         return self.create_volume(name=vol_name, imageRef=img_uuid)
 
     def _get_bdm(self, source_id, source_type, delete_on_termination=False):
-        # NOTE(gfidente): the syntax for block_device_mapping is
-        # dev_name=id:type:size:delete_on_terminate
-        # where type needs to be "snap" if the server is booted
-        # from a snapshot, size instead can be safely left empty
-
-        bd_map = [{
-            'device_name': 'vda',
-            '{}_id'.format(source_type): source_id,
-            'delete_on_termination': str(int(delete_on_termination))}]
-        return {'block_device_mapping': bd_map}
+        bd_map_v2 = [{
+            'uuid': source_id,
+            'source_type': source_type,
+            'destination_type': 'volume',
+            'boot_index': 0,
+            'delete_on_termination': delete_on_termination}]
+        return {'block_device_mapping_v2': bd_map_v2}
 
     def _boot_instance_from_resource(self, source_id,
                                      source_type,
@@ -82,8 +79,8 @@
         self.addCleanup(
             self.snapshots_client.wait_for_resource_deletion, snap['id'])
         self.addCleanup(self.snapshots_client.delete_snapshot, snap['id'])
-        waiters.wait_for_snapshot_status(self.snapshots_client,
-                                         snap['id'], 'available')
+        waiters.wait_for_volume_resource_status(self.snapshots_client,
+                                                snap['id'], 'available')
 
         # NOTE(e0ne): Cinder API v2 uses name instead of display_name
         if 'display_name' in snap:
@@ -236,14 +233,3 @@
 
         # delete instance
         self._delete_server(instance)
-
-
-class TestVolumeBootPatternV2(TestVolumeBootPattern):
-    def _get_bdm(self, source_id, source_type, delete_on_termination=False):
-        bd_map_v2 = [{
-            'uuid': source_id,
-            'source_type': source_type,
-            'destination_type': 'volume',
-            'boot_index': 0,
-            'delete_on_termination': delete_on_termination}]
-        return {'block_device_mapping_v2': bd_map_v2}
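
With ``_get_bdm`` now emitting the v2 format directly, the subclass that only
overrode it becomes redundant and is dropped. The v2 mapping spells out each
field instead of packing them into the v1
``dev_name=id:type:size:delete_on_terminate`` string. A hedged boot sketch,
with kwargs plumbing as in ``_boot_instance_from_resource``::

    bd_map_v2 = [{
        'uuid': volume['id'],           # id of the boot source
        'source_type': 'volume',        # or 'snapshot' / 'image'
        'destination_type': 'volume',
        'boot_index': 0,                # this is the boot disk
        'delete_on_termination': False}]
    server = self.create_server(
        image_id='',                    # empty: boot from the volume
        block_device_mapping_v2=bd_map_v2)
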
diff --git a/tempest/test.py b/tempest/test.py
index 06de520..970e97c 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -31,7 +31,6 @@
 from tempest import config
 from tempest import exceptions
 from tempest.lib.common import cred_client
-from tempest.lib.common.utils import test_utils
 from tempest.lib import decorators
 from tempest.lib import exceptions as lib_exc
 
@@ -648,8 +647,3 @@
 
     def assertNotEmpty(self, list, msg=None):
         self.assertGreater(len(list), 0, msg)
-
-
-call_until_true = debtcollector.moves.moved_function(
-    test_utils.call_until_true, 'call_until_true', __name__,
-    version='Newton', removal_version='Ocata')
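
The deleted alias was a deprecation shim: ``debtcollector.moves.moved_function``
wraps the relocated callable so old imports keep working but emit a
DeprecationWarning pointing at the new home. The mechanism, with hypothetical
names::

    import debtcollector.moves

    def new_helper():
        return 42

    # old_helper() still works, but warns that it moved and names the
    # deprecation and removal versions.
    old_helper = debtcollector.moves.moved_function(
        new_helper, 'old_helper', __name__,
        version='Newton', removal_version='Ocata')
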
diff --git a/tempest/tests/cmd/test_subunit_describe_calls.py b/tempest/tests/cmd/test_subunit_describe_calls.py
index 1c24c37..5f3d770 100644
--- a/tempest/tests/cmd/test_subunit_describe_calls.py
+++ b/tempest/tests/cmd/test_subunit_describe_calls.py
@@ -33,6 +33,16 @@
         p.communicate()
         self.assertEqual(0, p.returncode)
 
+    def test_return_code_no_output(self):
+        subunit_file = os.path.join(
+            os.path.dirname(os.path.abspath(__file__)),
+            'sample_streams/calls.subunit')
+        p = subprocess.Popen([
+            'subunit-describe-calls', '-s', subunit_file],
+            stdin=subprocess.PIPE)
+        p.communicate()
+        self.assertEqual(0, p.returncode)
+
     def test_parse(self):
         subunit_file = os.path.join(
             os.path.dirname(os.path.abspath(__file__)),
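
The new test covers the case where no output file is supplied, in which case
the report goes to stdout. Equivalent command line, with a hypothetical
stream file::

    $ subunit-describe-calls -s my_run.subunit
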
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index 46f9526..c2f622c 100644
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -66,7 +66,7 @@
         client.show_volume = mock_show
         volume_id = '7532b91e-aa0a-4e06-b3e5-20c0c5ee1caa'
         self.assertRaises(exceptions.VolumeRestoreErrorException,
-                          waiters.wait_for_volume_status,
+                          waiters.wait_for_volume_resource_status,
                           client, volume_id, 'available')
         mock_show.assert_has_calls([mock.call(volume_id),
                                     mock.call(volume_id)])
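
The unit test now targets the consolidated waiter:
``wait_for_volume_resource_status`` replaces both ``wait_for_volume_status``
and ``wait_for_snapshot_status``, dispatching on the client it is handed.
As used in the scenario updates above::

    waiters.wait_for_volume_resource_status(
        self.volumes_client, volume['id'], 'available')
    waiters.wait_for_volume_resource_status(
        self.snapshots_client, snapshot['id'], 'available')
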
diff --git a/tempest/tests/test_decorators.py b/tempest/tests/test_decorators.py
index a069a81..ae2f2a3 100644
--- a/tempest/tests/test_decorators.py
+++ b/tempest/tests/test_decorators.py
@@ -199,3 +199,96 @@
                           self._test_requires_ext_helper,
                           extension='enabled_ext',
                           service='bad_service')
+
+
+class TestConfigDecorators(BaseDecoratorsTest):
+    def setUp(self):
+        super(TestConfigDecorators, self).setUp()
+        cfg.CONF.set_default('nova', True, 'service_available')
+        cfg.CONF.set_default('glance', False, 'service_available')
+
+    def _assert_skip_message(self, func, skip_msg):
+        try:
+            func()
+            self.fail()
+        except testtools.TestCase.skipException as skip_exc:
+            self.assertEqual(skip_exc.args[0], skip_msg)
+
+    def _test_skip_unless_config(self, expected_to_skip=True, *decorator_args):
+
+        class TestFoo(test.BaseTestCase):
+            @config.skip_unless_config(*decorator_args)
+            def test_bar(self):
+                return 0
+
+        t = TestFoo('test_bar')
+        if expected_to_skip:
+            self.assertRaises(testtools.TestCase.skipException, t.test_bar)
+            if len(decorator_args) >= 3:
+                # decorator_args[2]: skip message specified
+                self._assert_skip_message(t.test_bar, decorator_args[2])
+        else:
+            try:
+                self.assertEqual(t.test_bar(), 0)
+            except testtools.TestCase.skipException:
+                # We caught a skipException but we didn't expect to skip
+                # this test so raise a hard test failure instead.
+                raise testtools.TestCase.failureException(
+                    "Not supposed to skip")
+
+    def _test_skip_if_config(self, expected_to_skip=True,
+                             *decorator_args):
+
+        class TestFoo(test.BaseTestCase):
+            @config.skip_if_config(*decorator_args)
+            def test_bar(self):
+                return 0
+
+        t = TestFoo('test_bar')
+        if expected_to_skip:
+            self.assertRaises(testtools.TestCase.skipException, t.test_bar)
+            if len(decorator_args) >= 3:
+                # decorator_args[2]: skip message specified
+                self._assert_skip_message(t.test_bar, decorator_args[2])
+        else:
+            try:
+                self.assertEqual(t.test_bar(), 0)
+            except testtools.TestCase.skipException:
+                # We caught a skipException but we didn't expect to skip
+                # this test so raise a hard test failure instead.
+                raise testtools.TestCase.failureException(
+                    "Not supposed to skip")
+
+    def test_skip_unless_no_group(self):
+        self._test_skip_unless_config(True, 'fake_group', 'an_option')
+
+    def test_skip_unless_no_option(self):
+        self._test_skip_unless_config(True, 'service_available',
+                                      'not_an_option')
+
+    def test_skip_unless_false_option(self):
+        self._test_skip_unless_config(True, 'service_available', 'glance')
+
+    def test_skip_unless_false_option_msg(self):
+        self._test_skip_unless_config(True, 'service_available', 'glance',
+                                      'skip message')
+
+    def test_skip_unless_true_option(self):
+        self._test_skip_unless_config(False,
+                                      'service_available', 'nova')
+
+    def test_skip_if_no_group(self):
+        self._test_skip_if_config(False, 'fake_group', 'an_option')
+
+    def test_skip_if_no_option(self):
+        self._test_skip_if_config(False, 'service_available', 'not_an_option')
+
+    def test_skip_if_false_option(self):
+        self._test_skip_if_config(False, 'service_available', 'glance')
+
+    def test_skip_if_true_option(self):
+        self._test_skip_if_config(True, 'service_available', 'nova')
+
+    def test_skip_if_true_option_msg(self):
+        self._test_skip_if_config(True, 'service_available', 'nova',
+                                  'skip message')
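
Taken together, these tests pin down the decorator contract: a missing group
or option is treated as False, ``skip_unless_config`` skips on False,
``skip_if_config`` skips on True, and an optional third argument overrides
the skip message. A usage sketch against the options configured in setUp::

    from tempest import config
    from tempest import test

    class FooTest(test.BaseTestCase):

        @config.skip_unless_config('service_available', 'nova')
        def test_with_nova(self):
            pass  # runs, since nova is True

        @config.skip_if_config('service_available', 'nova',
                               'skip message')
        def test_without_nova(self):
            pass  # skipped, with the custom message
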
diff --git a/tempest/tests/test_wrappers.py b/tempest/tests/test_wrappers.py
deleted file mode 100644
index a4ef699..0000000
--- a/tempest/tests/test_wrappers.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2013 IBM Corp.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-import os
-import shutil
-import subprocess
-import tempfile
-
-import six
-
-from tempest.tests import base
-
-DEVNULL = open(os.devnull, 'wb')
-
-
-class TestWrappers(base.TestCase):
-    def setUp(self):
-        super(TestWrappers, self).setUp()
-        # Setup test dirs
-        self.directory = tempfile.mkdtemp(prefix='tempest-unit')
-        self.addCleanup(shutil.rmtree, self.directory)
-        self.test_dir = os.path.join(self.directory, 'tests')
-        os.mkdir(self.test_dir)
-        # Setup Test files
-        self.testr_conf_file = os.path.join(self.directory, '.testr.conf')
-        self.setup_cfg_file = os.path.join(self.directory, 'setup.cfg')
-        self.passing_file = os.path.join(self.test_dir, 'test_passing.py')
-        self.failing_file = os.path.join(self.test_dir, 'test_failing.py')
-        self.init_file = os.path.join(self.test_dir, '__init__.py')
-        self.setup_py = os.path.join(self.directory, 'setup.py')
-        shutil.copy('tempest/tests/files/testr-conf', self.testr_conf_file)
-        shutil.copy('tempest/tests/files/passing-tests', self.passing_file)
-        shutil.copy('tempest/tests/files/failing-tests', self.failing_file)
-        shutil.copy('setup.py', self.setup_py)
-        shutil.copy('tempest/tests/files/setup.cfg', self.setup_cfg_file)
-        shutil.copy('tempest/tests/files/__init__.py', self.init_file)
-        # copy over the pretty_tox scripts
-        shutil.copy('tools/pretty_tox.sh',
-                    os.path.join(self.directory, 'pretty_tox.sh'))
-        shutil.copy('tools/pretty_tox_serial.sh',
-                    os.path.join(self.directory, 'pretty_tox_serial.sh'))
-
-        self.stdout = six.StringIO()
-        self.stderr = six.StringIO()
-        # Change directory, run wrapper and check result
-        self.addCleanup(os.chdir, os.path.abspath(os.curdir))
-        os.chdir(self.directory)
-
-    def assertRunExit(self, cmd, expected):
-        p = subprocess.Popen(
-            "bash %s" % cmd, shell=True,
-            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-        out, err = p.communicate()
-
-        self.assertEqual(
-            p.returncode, expected,
-            "Stdout: %s; Stderr: %s" % (out, err))
-
-    def test_pretty_tox(self):
-        # Git init is required for the pbr testr command. pbr requires a git
-        # version or an sdist to work. so make the test directory a git repo
-        # too.
-        subprocess.call(['git', 'init'], stderr=DEVNULL)
-        self.assertRunExit('pretty_tox.sh passing', 0)
-
-    def test_pretty_tox_fails(self):
-        # Git init is required for the pbr testr command. pbr requires a git
-        # version or an sdist to work. so make the test directory a git repo
-        # too.
-        subprocess.call(['git', 'init'], stderr=DEVNULL)
-        self.assertRunExit('pretty_tox.sh', 1)
-
-    def test_pretty_tox_serial(self):
-        self.assertRunExit('pretty_tox_serial.sh passing', 0)
-
-    def test_pretty_tox_serial_fails(self):
-        self.assertRunExit('pretty_tox_serial.sh', 1)
diff --git a/test-requirements.txt b/test-requirements.txt
index f7d63a8..844d32c 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -3,7 +3,7 @@
 # process, which may cause wedges in the gate later.
 hacking<0.13,>=0.12.0 # Apache-2.0
 # needed for doc build
-sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
+sphinx>=1.5.1 # BSD
 oslosphinx>=4.7.0 # Apache-2.0
 reno>=1.8.0 # Apache-2.0
 mock>=2.0 # BSD
diff --git a/tools/pretty_tox.sh b/tools/pretty_tox.sh
deleted file mode 100755
index 0b83b91..0000000
--- a/tools/pretty_tox.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-set -o pipefail
-
-TESTRARGS=$1
-python setup.py testr --testr-args="--subunit $TESTRARGS" | subunit-trace --no-failure-debug -f
-retval=$?
-# NOTE(mtreinish) The pipe above would eat the slowest display from pbr's testr
-# wrapper so just manually print the slowest tests.
-echo -e "\nSlowest Tests:\n"
-testr slowest
-exit $retval
diff --git a/tools/pretty_tox_serial.sh b/tools/pretty_tox_serial.sh
deleted file mode 100755
index 1f8204e..0000000
--- a/tools/pretty_tox_serial.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-set -o pipefail
-
-TESTRARGS=$@
-
-if [ ! -d .testrepository ]; then
-    testr init
-fi
-testr run --subunit $TESTRARGS | subunit-trace -f -n
-retval=$?
-testr slowest
-
-exit $retval
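
Both wrapper scripts already printed a deprecation warning pointing at
``tempest run``, which together with the tox jobs is the supported way to
launch the suite. For example (regex value hypothetical)::

    $ tempest run --regex tempest.api.compute
    $ tempest run --serial
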