Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  https://docs.openstack.org/hacking/latest/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api &
  tempest/scenario tests
- [T104] Scenario tests require a services decorator
- [T105] Tests cannot use setUpClass/tearDownClass
- [T106] vim configuration should not be kept in source files
- [T107] Check that a service tag isn't in the module path
- [T108] Check no hyphen at the end of rand_name() argument
- [T109] Cannot use testtools.skip decorator; instead use
  decorators.skip_because from tempest.lib
- [T110] Check that service client names of GET should be consistent
- [T111] Check that service client names of DELETE should be consistent
- [T112] Check that tempest.lib should not import local tempest code
- [T113] Check that tests use data_utils.rand_uuid() instead of uuid.uuid4()
- [T114] Check that tempest.lib does not use tempest config
- [T115] Check that admin tests should exist under admin path
- [N322] Method's default argument shouldn't be mutable
- [T116] Unsupported 'message' Exception attribute in PY3

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment


Exception Handling
------------------
According to ``The Zen of Python``, ``Errors should never pass silently.``
Tempest usually runs in a special environment (Jenkins gate jobs); in every
error or failure situation we should provide as much error-related
information as possible, because we usually do not have the chance to
investigate the situation after the issue has happened.

In every test case, abnormal situations must be explained very verbosely
by the exception and the log.

In most cases the very first issue is the most important information.

Try to avoid using ``try`` blocks in the test cases, as both the ``except``
and ``finally`` blocks could replace the original exception
when the additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case
at all.

Try to avoid using any exception handling construct which can hide the
error's origin.

If you really need to use a ``try`` block, please ensure the original
exception is at least logged. When the exception is logged you usually need
to ``raise`` the same or a different exception anyway.
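
For example, a minimal sketch of this pattern (the ``LOG`` object, client and
variable names are illustrative, not a prescribed API)::

    try:
        self.servers_client.delete_server(server_id)
    except Exception:
        # Log the original exception, traceback included, before
        # re-raising so the first failure is not masked.
        LOG.exception('Failed to delete server %s', server_id)
        raise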

Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails part way through.
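
For example, registering the cleanup right after the resource is created
means it runs even if a later step fails (a sketch; the helper and client
names are illustrative)::

    server = self.create_test_server(wait_until='ACTIVE')
    self.addCleanup(self.servers_client.delete_server, server['id'])
    # Any failure after this point still triggers the cleanup above.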

Use the ``self.assert*`` methods provided by the unit test framework.
This signals failures early on.

Avoid using ``self.fail`` alone; its stack trace will signal
the ``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertion.
``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know anything
about the values used in the formula; the ``msg`` argument might be good enough
for providing more information.

Most other assert methods can include more information by default.
For example ``self.assertIn`` can include the whole set.

It is recommended to use testtools `matcher`_ for the more tricky assertions.
You can implement your own specific `matcher`_ as well.
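
For example, matchers can express richer assertions than a plain boolean
check (a sketch using standard testtools matchers)::

    from testtools import matchers

    self.assertThat(
        server['status'],
        matchers.MatchesAny(matchers.Equals('ACTIVE'),
                            matchers.Equals('SHUTOFF')))
    self.assertThat(addresses, matchers.HasLength(2))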

.. _matcher: https://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers

If the test case fails you can see the related logs and the information
carried by the exception (exception class, backtrace and exception info).
This and the service logs are your only guide to finding the root cause of flaky
issues.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend on
any other ``test_method`` or ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credentials management, testresources and so on. These facilities MUST be able
to work even if just one ``test_method`` is selected for execution.

Service Tagging
---------------
Service tagging is used to specify which services are exercised by a particular
test method. You specify the services with the ``tempest.common.utils.services``
decorator. For example:

``@utils.services('compute', 'image')``

Valid service tag names are the same as the list of directories in tempest.api
that have tests.

For scenario tests having a service tag is required. For the API tests service
tags are only needed if the test method makes an API call (either directly or
indirectly through another service) that differs from the parent directory
name. For example, any test that makes an API call to a service other than Nova
in ``tempest.api.compute`` would require a service tag for those services;
however, they do not need to be tagged as ``compute``.

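A tagged test under ``tempest.api.compute`` might look like the following
sketch (the class, base class and test body are illustrative only)::

    from tempest.common import utils


    class ImagesTestJSON(base.BaseV2ComputeTest):

        @utils.services('image')
        def test_create_image_from_server(self):
            # Exercises Glance indirectly through the Nova createImage
            # action, so it is tagged 'image' but not 'compute'.
            ...
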
Test Attributes
---------------
Tempest leverages `test attributes`_ which are a simple but effective way of
distinguishing between different "types" of API tests. A test can be "tagged"
with such attributes using the ``decorators.attr`` decorator, for example::

    @decorators.attr(type=['negative'])
    def test_aggregate_create_aggregate_name_length_less_than_1(self):
        [...]

These test attributes can be used for test selection via regular expressions.
For example, ``(?!.*\[.*\bslow\b.*\])(^tempest\.scenario)`` runs all the tests
in the ``scenario`` test module, *except* for those tagged with the ``slow``
attribute (via a negative lookahead in the regular expression). These
attributes are used in Tempest's ``tox.ini`` as well as Tempest's Zuul job
definitions for specifying particular batches of Tempest test suites to run.

.. _test attributes: https://testtools.readthedocs.io/en/latest/for-test-authors.html?highlight=attr#test-attributes

Negative Attribute
^^^^^^^^^^^^^^^^^^
The ``type='negative'`` attribute is used to signify that a test is a negative
test, which is a test that handles invalid input gracefully. This attribute
should be applied to all negative test scenarios.

This attribute must be applied to each test that belongs to a negative test
class, i.e. a test class whose name matches the "Negative.*" pattern.

.. todo::

    Add a hacking check for ensuring that all classes that contain the
    substring "Negative" have the negative attribute decorator applied above
    each test.

Slow Attribute
^^^^^^^^^^^^^^
The ``type='slow'`` attribute is used to signify that a test takes a long time
to run, relatively speaking. This attribute is usually applied to
:ref:`scenario tests <scenario_field_guide>`, which involve a complicated
series of API operations, the total runtime of which can be relatively long.
This long runtime has performance implications on `Zuul`_ jobs, which is why
the ``slow`` attribute is leveraged to run slow tests on a selective basis,
to keep total `Zuul`_ job runtime down to a reasonable time frame.

.. _Zuul: https://docs.openstack.org/infra/zuul/

Smoke Attribute
^^^^^^^^^^^^^^^
The ``type='smoke'`` attribute is used to signify that a test is a so-called
smoke test, which is a type of test that exercises the most vital OpenStack
functionality, like listing servers or flavors or creating volumes. The
attribute should be sparingly applied to only the tests that sanity-check the
most essential functionality of an OpenStack cloud.

Test fixtures and resources
---------------------------
Test level resources should be cleaned up after the test execution. Clean-up
is best scheduled using ``addCleanup``, which ensures that the resource cleanup
code is always invoked, and in reverse order with respect to the creation
order.

Test class level resources should be defined in the ``resource_setup`` method
of the test class, except for any credential obtained from the credentials
provider, which should be set up in the ``setup_credentials`` method.
Cleanup is best scheduled using ``addClassResourceCleanup``, which ensures that
the cleanup code is always invoked, and in reverse order with respect to the
creation order.

In both cases - test level and class level cleanups - a wait loop should be
scheduled before the actual delete of resources whose delete is asynchronous.

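Because cleanups run in reverse order, registering the wait step before the
delete step makes the wait run after the delete has been issued. A sketch
(the client methods shown are illustrative)::

    volume = self.volumes_client.create_volume(size=1)['volume']
    # Cleanups execute last-registered-first: delete_volume runs first,
    # then the wait loop confirms the asynchronous delete completed.
    self.addCleanup(self.volumes_client.wait_for_resource_deletion,
                    volume['id'])
    self.addCleanup(self.volumes_client.delete_volume, volume['id'])
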
The test base class ``BaseTestCase`` defines the Tempest framework for class
level fixtures. ``setUpClass`` and ``tearDownClass`` are defined here and
cannot be overwritten by subclasses (enforced via hacking rule T105).

Set-up is split in a series of steps (setup stages), which can be overwritten
by test classes. Set-up stages are:

- ``skip_checks``
- ``setup_credentials``
- ``setup_clients``
- ``resource_setup``

Tear-down is also split in a series of steps (teardown stages), which are
stacked for execution only if the corresponding setup stage had been
reached during the setup phase. Tear-down stages are:

- ``clear_credentials`` (defined in the base test class)
- ``resource_cleanup``

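For example, a test class typically overrides only the stages it needs,
calling the parent implementation first (a sketch; the base class, client
and helper names are illustrative)::

    class VolumesListTest(base.BaseVolumeTest):

        @classmethod
        def setup_clients(cls):
            super(VolumesListTest, cls).setup_clients()
            cls.client = cls.os_primary.volumes_client

        @classmethod
        def resource_setup(cls):
            super(VolumesListTest, cls).resource_setup()
            # Class level resources, released in resource_cleanup or via
            # addClassResourceCleanup.
            cls.volume = cls.create_volume()
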
Skipping Tests
--------------
Skipping tests should be based on configuration only. If that is not possible,
it is likely that either a configuration flag is missing, or the test should
fail rather than be skipped.
Using discovery for skipping tests is generally discouraged.

When running a test that requires a certain "feature" in the target
cloud, if that feature is missing we should fail, because either the test
configuration is invalid, or the cloud is broken and the expected "feature" is
not there even if the cloud was configured with it.

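A configuration driven skip is usually expressed in the ``skip_checks`` setup
stage, for example (a sketch; the class name is illustrative and ``CONF`` is
assumed to be the Tempest configuration object)::

    @classmethod
    def skip_checks(cls):
        super(VolumesSnapshotTest, cls).skip_checks()
        if not CONF.volume_feature_enabled.snapshot:
            raise cls.skipException('Volume snapshots are not available')
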
Negative Tests
--------------
Error handling is an important aspect of API design and usage. Negative
tests are a way to ensure that an application can gracefully handle
invalid or unexpected input. However, as a black box integration test
suite, Tempest is not suitable for handling all negative test cases, as
the wide variety and complexity of negative tests can lead to long test
runs and knowledge of internal implementation details. The bulk of
negative testing should be handled with project functional tests.
All negative tests should be based on the `API-WG guideline`_. Such negative
tests can block any change from an accurate failure code to an invalid one.

.. _API-WG guideline: https://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications

If facing some gray area which is not clarified by the above guideline, propose
a new guideline to the API-WG. With a proposal to the API-WG we will be able to
build a consensus across all OpenStack projects and improve the quality and
consistency of all the APIs.

In addition, we have some guidelines for additional negative tests.

- About the BadRequest (HTTP 400) case: We can add a single negative test of
  BadRequest for each resource and method (POST, PUT), as shown in the sketch
  after this list.
  Please don't implement more negative tests on the same combination of
  resource and method even if the API request parameters are different from
  the existing test.
- About the NotFound (HTTP 404) case: We can add a single negative test of
  NotFound for each resource and method (GET, PUT, DELETE, HEAD).
  Please don't implement more negative tests on the same combination
  of resource and method.

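For instance, a single BadRequest test per resource and method pair might look
like the following sketch (the client call is illustrative, the uuid is a
randomly generated example, and ``decorators`` and ``lib_exc`` are assumed to
come from ``tempest.lib``)::

    @decorators.attr(type=['negative'])
    @decorators.idempotent_id('d0f1f2a3-4b5c-4d6e-8f90-123456789abc')
    def test_create_volume_with_invalid_size(self):
        self.assertRaises(lib_exc.BadRequest,
                          self.volumes_client.create_volume,
                          size='invalid')
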
The above guidelines don't cover all cases and we will grow these guidelines
organically over time. Patches outside of the above guidelines are left up to
the reviewers' discretion and if we face some conflicts between reviewers, we
will expand the guideline based on our discussion and experience.

Test skips because of Known Bugs
--------------------------------
If a test is broken because of a bug it is appropriate to skip the test until
the bug has been fixed. You should use the ``skip_because`` decorator so that
Tempest's skip tracking tool can watch the bug status.

Example::

    @skip_because(bug="980688")
    def test_this_and_that(self):
        ...

Guidelines
----------
- Do not submit changesets with only testcases which are skipped as
  they will not be merged.
- Consistently check the status code of responses in testcases. The
  earlier a problem is detected the easier it is to debug, especially
  where there is complicated setup required.

Parallel Test Execution
-----------------------
Tempest by default runs its tests in parallel; this creates the possibility for
interesting interactions between tests which can cause unexpected failures.
Dynamic credentials provide protection from most of the potential race
conditions between tests outside the same class. But there are still a few
things to watch out for to try to avoid issues when running your tests in
parallel.

- Resources outside of a project scope still have the potential to conflict. This
  is a larger concern for the admin tests since most resources and actions that
  require admin privileges are outside of projects.

- Races between methods in the same class are not a problem because
  parallelization in Tempest is at the test class level, but if there is a json
  and xml version of the same test class there could still be a race between
  methods.

- The rand_name() function from tempest.lib.common.utils.data_utils should be
  used anywhere a resource is created with a name, as shown in the sketch after
  this list. Static naming should be avoided to prevent resource conflicts.

- If the execution of a set of tests is required to be serialized then locking
  can be used to perform this. See usage of ``LockFixture`` for examples of
  using locking.

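A minimal sketch of random resource naming (the server creation helper is
illustrative)::

    from tempest.lib.common.utils import data_utils

    # Produces a unique name such as 'my-server-1234567890' on each call.
    name = data_utils.rand_name('my-server')
    server = self.create_test_server(name=name)
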
Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are made
to the config variables in tempest/config.py then the sample config file must be
regenerated. This can be done by running::

    tox -e genconfig

Unit Tests
----------
Unit tests are a separate class of tests in Tempest. They verify Tempest
itself, and thus have a different set of guidelines around them:

1. They cannot require anything running externally. All you should need to
   run the unit tests is the git tree, python and the dependencies installed.
   This rules out depending on running services, a config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and testresources
   should be used for shared state between tests.
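
For example, shared or per-test state can be managed with the ``fixtures``
library instead of ``setUpClass`` (a sketch; the base class is illustrative)::

    import fixtures
    from oslotest import base


    class TestSomething(base.BaseTestCase):

        def setUp(self):
            super(TestSomething, self).setUp()
            # A temporary directory that is cleaned up automatically.
            self.tempdir = self.useFixture(fixtures.TempDir()).path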


.. _TestDocumentation:

Test Documentation
------------------
For tests being added we need to require inline documentation in the form of
docstrings to explain what is being tested. In API tests for a new API a class
level docstring should be added to an API reference doc. If one doesn't exist
a TODO comment should be put indicating that the reference needs to be added.
For individual API test cases a method level docstring should be used to
explain the functionality being tested if the test name isn't descriptive
enough. For example::

    def test_get_role_by_id(self):
        """Get a role by its id."""

the docstring there is superfluous and shouldn't be added, but for a method
like::

    def test_volume_backup_create_get_detailed_list_restore_delete(self):
        pass

a docstring would be useful because while the test title is fairly descriptive
the operations being performed are complex enough that a bit more explanation
will help people figure out the intent of the test.

For scenario tests a class level docstring describing the steps in the scenario
is required. If there is more than one test case in the class individual
docstrings for the workflow in each test method can be used instead. A good
example of this would be::

    class TestVolumeBootPattern(manager.ScenarioTest):
        """
        This test case attempts to reproduce the following steps:

        * Create in Cinder some bootable volume importing a Glance image
        * Boot an instance from the bootable volume
        * Write content to the volume
        * Delete an instance and Boot a new instance from the volume
        * Check written content in the instance
        * Create a volume snapshot while the instance is running
        * Boot an additional instance from the new snapshot based volume
        * Check written content in the instance booted from snapshot
        """

Test Identification with Idempotent ID
--------------------------------------

Every function that provides a test must have an ``idempotent_id`` decorator
with a unique ``uuid-4`` value. This ID is used to complement the fully
qualified test name and track test functionality through refactoring. The
format of the metadata looks like::

    @decorators.idempotent_id('585e934c-448e-43c4-acbf-d06a9b899997')
    def test_list_servers_with_detail(self):
        # The created server should be in the detailed list of all servers
        ...

Tempest.lib includes a ``check-uuid`` tool that will test for the existence
and uniqueness of idempotent_id metadata for every test. If you have
Tempest installed you can run the tool against Tempest by calling from the
Tempest repo::

    check-uuid

It can be invoked against any test suite by passing a package name::

    check-uuid --package <package_name>

Tests without an ``idempotent_id`` can be automatically fixed by running
the command with the ``--fix`` flag, which will modify the source package
by inserting randomly generated uuids for every test that does not have
one::

    check-uuid --fix

The ``check-uuid`` tool is used as part of the Tempest gate job
to ensure that all tests have an ``idempotent_id`` decorator.

Branchless Tempest Considerations
---------------------------------

Starting with the OpenStack Icehouse release Tempest no longer has any stable
branches. This is to better ensure API consistency between releases because
the API behavior should not change between releases. This means that the stable
branches are also gated by the Tempest master branch, which also means that
proposed commits to Tempest must work against both the master and all the
currently supported stable branches of the projects. As such there are a few
special considerations that have to be accounted for when pushing new changes
to Tempest.

1. New Tests for new features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When adding tests for new features that were not in previous releases of the
projects the new test has to be properly skipped with a feature flag. This can
be as simple as using the ``@utils.requires_ext()`` or
``testtools.skipUnless`` decorators to check if the required extension (or
discoverable optional API) or feature is enabled, or can be as involved as
adding a new config option to the appropriate section. If there isn't a method
of selecting the new **feature** from the config file then there won't be a
mechanism to disable the test with older stable releases and the new test
won't be able to merge.

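For example (a sketch; the extension and config option names are purely
illustrative, and the ``utils``, ``testtools`` and ``CONF`` imports are
assumed)::

    @utils.requires_ext(extension='os-fancy-feature', service='compute')
    @testtools.skipUnless(CONF.compute_feature_enabled.fancy_feature,
                          'The fancy feature is not available.')
    def test_fancy_feature(self):
        ...
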
Introduction of a new feature flag requires specifying a default value for
the corresponding config option that is appropriate in the latest OpenStack
release. Because Tempest is branchless, the feature flag's default value will
need to be overridden to a value that is appropriate in earlier releases
in which the feature isn't available. In DevStack, this can be accomplished
by modifying Tempest's `lib installation script`_ for previous branches
(because DevStack is branched).

.. _lib installation script: http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest

2. Bug fix on core project needing Tempest changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When trying to land a bug fix which changes a tested API you'll have to use the
following procedure::

    1. Propose change to the project, get a +2 on the change even with the
       Tempest test failing
    2. Propose skip on Tempest which will only be approved after the
       corresponding change in the project has a +2 on change
    3. Land project change in master and all open stable branches (if required)
    4. Land changed test in Tempest

Otherwise the bug fix won't be able to land in the project.

Handily, `Zuul's cross-repository dependencies
<https://docs.openstack.org/infra/zuul/user/gating.html#cross-project-dependencies>`_
can be leveraged to do without step 2 and to have steps 3 and 4 happen
"atomically". To do that, make the patch written in step 1 depend (refer to
Zuul's documentation above) on the patch written in step 4. The commit message
for the Tempest change should have a link to the Gerrit review that justifies
that change.

3. New Tests for existing features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If a test is being added for a feature that exists in all the current releases
of the projects then the only concern is that the API behavior is the same
across all the versions of the project being tested. If the behavior is not
consistent the test will not be able to merge.

API Stability
-------------

For new tests being added to Tempest the assumption is that the API being
tested is considered stable and adheres to the OpenStack API stability
guidelines. If an API is still considered experimental or in development then
it should not be tested by Tempest until it is considered stable.