Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api &
  tempest/scenario tests
- [T104] Scenario tests require a services decorator
- [T105] Tests cannot use setUpClass/tearDownClass
- [T106] vim configuration should not be kept in source files
- [T107] Check that a service tag isn't in the module path
- [T108] Check no hyphen at the end of rand_name() argument
- [T109] Cannot use testtools.skip decorator; instead use
  decorators.skip_because from tempest.lib
- [T110] Check that service client names of GET should be consistent
- [T111] Check that service client names of DELETE should be consistent
- [T112] Check that tempest.lib should not import local tempest code
- [T113] Check that tests use data_utils.rand_uuid() instead of uuid.uuid4()
- [N322] Method's default argument shouldn't be mutable
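As an illustration of the [N322] rule above, a mutable default argument is
created once and shared by every call, so state leaks between calls; the
usual fix is to default to ``None``. A minimal sketch (the function names
here are hypothetical):

```python
# Bad: the default list is created once at definition time and shared
# by every call that relies on the default.
def add_server_bad(server, servers=[]):
    servers.append(server)
    return servers


# Good: use None as the sentinel and create a fresh list per call.
def add_server(server, servers=None):
    if servers is None:
        servers = []
    servers.append(server)
    return servers
```

Calling ``add_server_bad`` twice returns ``['a']`` then ``['a', 'b']``,
while ``add_server`` returns an independent list each time.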

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment
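As a sketch of the configuration-driven approach, environment-specific
values can be read from a config file instead of being hard-coded in tests.
This uses the standard library for illustration; the section and option
names below are hypothetical, not Tempest's actual schema:

```python
import configparser

# A hypothetical tempest-style config holding environment-specific values.
SAMPLE_CONFIG = """
[compute]
flavor_ref = 42
image_ref = cirros-0.3.4
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE_CONFIG)

# Tests then reference the option instead of a hard-coded value, so the
# same test runs unchanged against differently configured clouds.
flavor = parser.get('compute', 'flavor_ref')
image = parser.get('compute', 'image_ref')
```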


Exception Handling
------------------
According to ``The Zen of Python``:
``Errors should never pass silently.``
Tempest usually runs in a special environment (Jenkins gate jobs); in every
error or failure situation we should provide as much error-related
information as possible, because we usually do not have a chance to
investigate the situation after the issue has happened.

In every test case, abnormal situations must be explained very verbosely
by the exception and the log.

In most cases the very first issue is the most important information.

Try to avoid using ``try`` blocks in the test cases, as both the ``except``
and ``finally`` blocks could replace the original exception
when the additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case
at all.

Try to avoid using any exception handling construct which can hide the
error's origin.

If you really need to use a ``try`` block, please ensure the original
exception is at least logged. When the exception is logged you usually need
to ``raise`` the same or a different exception anyway.

Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails part way through.
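A minimal sketch of this pattern with the standard ``unittest`` framework
(the "resource" here is a stand-in for e.g. a server or volume):

```python
import unittest

created = []  # stand-in for resources created in a real cloud


def create_resource(name):
    created.append(name)
    return name


def delete_resource(name):
    created.remove(name)


class ResourceTest(unittest.TestCase):
    def test_resource_lifecycle(self):
        name = create_resource('res-1')
        # Registered immediately after creation, so the cleanup runs even
        # if a later call or assertion in this test raises.
        self.addCleanup(delete_resource, name)
        self.assertIn(name, created)


suite = unittest.TestLoader().loadTestsFromTestCase(ResourceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

After the run, the cleanup has removed the resource even though no ``try``
block appears anywhere in the test.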

Use the ``self.assert*`` methods provided by the unit test framework.
This signals failures early on.

Avoid using ``self.fail`` alone; its stack trace will signal
the ``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertions.
``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know anything
about the values used in the formula; the ``msg`` argument might be good
enough for providing more information.

Most other assert methods can include more information by default.
For example ``self.assertIn`` can include the whole set.

It is recommended to use testtools `matcher`_ for the more tricky assertions.
You can implement your own specific `matcher`_ as well.

.. _matcher: http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers

If the test case fails you can see the related logs and the information
carried by the exception (exception class, backtrace and exception info).
This and the service logs are your only guide to finding the root cause of
flaky issues.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend on
any other ``test_method`` or on ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credentials management, testresources and so on. These facilities MUST be
able to work even if just one ``test_method`` is selected for execution.

Service Tagging
---------------
Service tagging is used to specify which services are exercised by a
particular test method. You specify the services with the
``tempest.test.services`` decorator. For example::

    @services('compute', 'image')

Valid service tag names are the same as the list of directories in
tempest.api that have tests.

For scenario tests having a service tag is required. For the API tests
service tags are only needed if the test method makes an API call (either
directly or indirectly through another service) that differs from the
parent directory name. For example, any test that makes an API call to a
service other than nova in tempest.api.compute would require a service tag
for those services; however, they do not need to be tagged as compute.

Test fixtures and resources
---------------------------
Test level resources should be cleaned-up after the test execution. Clean-up
is best scheduled using `addCleanup` which ensures that the resource cleanup
code is always invoked, and in reverse order with respect to the creation
order.

Test class level resources should be defined in the `resource_setup` method
of the test class, except for any credential obtained from the credentials
provider, which should be set-up in the `setup_credentials` method.

The test base class `BaseTestCase` defines the Tempest framework for class
level fixtures. `setUpClass` and `tearDownClass` are defined here and cannot
be overwritten by subclasses (enforced via hacking rule T105).

Set-up is split into a series of steps (setup stages), which can be
overwritten by test classes. Set-up stages are:

- `skip_checks`
- `setup_credentials`
- `setup_clients`
- `resource_setup`

Tear-down is also split into a series of steps (teardown stages), which are
stacked for execution only if the corresponding setup stage had been
reached during the setup phase. Tear-down stages are:

- `clear_credentials` (defined in the base test class)
- `resource_cleanup`
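The stacking behavior can be sketched in plain Python. This is an
illustrative model of the set-up/tear-down staging described above, not
Tempest's actual implementation:

```python
class BaseTestCaseSketch:
    """Illustrative model: a teardown stage is stacked once its setup
    counterpart is reached, and stacked stages unwind in reverse order."""

    def __init__(self):
        self.events = []           # records stage execution order
        self._teardown_stack = []  # teardown stages pending execution

    # --- setup stages (overridable by test classes) ---
    def skip_checks(self):
        self.events.append('skip_checks')

    def setup_credentials(self):
        self.events.append('setup_credentials')

    def setup_clients(self):
        self.events.append('setup_clients')

    def resource_setup(self):
        self.events.append('resource_setup')

    # --- teardown stages ---
    def clear_credentials(self):
        self.events.append('clear_credentials')

    def resource_cleanup(self):
        self.events.append('resource_cleanup')

    def set_up(self):
        self.skip_checks()
        # Stack the matching teardown stage as each setup stage is reached.
        self._teardown_stack.append(self.clear_credentials)
        self.setup_credentials()
        self.setup_clients()
        self._teardown_stack.append(self.resource_cleanup)
        self.resource_setup()

    def tear_down(self):
        # Only stages whose setup counterpart was reached ever run, and
        # they run in reverse order of registration.
        while self._teardown_stack:
            self._teardown_stack.pop()()


case = BaseTestCaseSketch()
case.set_up()
case.tear_down()
```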

Skipping Tests
--------------
Skipping tests should be based on configuration only. If that is not
possible, it is likely that either a configuration flag is missing, or the
test should fail rather than be skipped.
Using discovery for skipping tests is generally discouraged.

When running a test that requires a certain "feature" in the target
cloud, if that feature is missing we should fail, because either the test
configuration is invalid, or the cloud is broken and the expected "feature"
is not there even if the cloud was configured with it.

Negative Tests
--------------
TODO: Write the guideline related to negative tests.

Test skips because of Known Bugs
--------------------------------

If a test is broken because of a bug, it is appropriate to skip the test
until the bug has been fixed. You should use the skip_because decorator so
that Tempest's skip tracking tool can watch the bug status.

Example::

    @skip_because(bug="980688")
    def test_this_and_that(self):
        ...

Guidelines
----------
- Do not submit changesets with only testcases which are skipped as
  they will not be merged.
- Consistently check the status code of responses in testcases. The
  earlier a problem is detected the easier it is to debug, especially
  where there is complicated setup required.

Parallel Test Execution
-----------------------
Tempest by default runs its tests in parallel; this creates the possibility
for interesting interactions between tests which can cause unexpected
failures. Dynamic credentials provide protection from most of the potential
race conditions between tests outside the same class. But there are still a
few things to watch out for to try to avoid issues when running your tests
in parallel.

- Resources outside of a project scope still have the potential to
  conflict. This is a larger concern for the admin tests since most
  resources and actions that require admin privileges are outside of
  projects.

- Races between methods in the same class are not a problem because
  parallelization in tempest is at the test class level, but if there is a
  json and xml version of the same test class there could still be a race
  between methods.

- The rand_name() function from tempest.common.utils.data_utils should be
  used anywhere a resource is created with a name. Static naming should be
  avoided to prevent resource conflicts.
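A sketch of what such a helper does; this mirrors the spirit of
``data_utils.rand_name`` (a random suffix appended to a descriptive base
name), though the exact Tempest implementation may differ:

```python
import random


def rand_name_sketch(name=''):
    """Append a random suffix so parallel tests never collide on names."""
    randbits = str(random.randint(1, 0x7fffffff))
    return name + '-' + randbits if name else randbits


# Each worker gets a unique resource name instead of a static one.
server_name = rand_name_sketch('test-server')
```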

- If the execution of a set of tests is required to be serialized then
  locking can be used to perform this. See AggregatesAdminTest in
  tempest.api.compute.admin for an example of using locking.

Stress Tests in Tempest
-----------------------
Any tempest test case can be flagged as a stress test. With this flag it
will be automatically discovered and used in the stress test runs. The
stress test framework itself is a facility to spawn and control worker
processes in order to find race conditions (see ``tempest/stress/`` for
more information). Please note that these stress tests can't be used for
benchmarking purposes since they don't measure any performance
characteristics.

Example::

    @stresstest(class_setup_per='process')
    def test_this_and_that(self):
        ...

This will flag the test ``test_this_and_that`` as a stress test. The
parameter ``class_setup_per`` gives control over when the setUpClass
function should be called.

Good candidates for stress tests are:

- Scenario tests
- API tests that have a wide focus

Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are
made to the config variables in tempest/config.py then the sample config
file must be regenerated. This can be done by running::

    tox -egenconfig

Unit Tests
----------
Unit tests are a separate class of tests in tempest. They verify tempest
itself, and thus have a different set of guidelines around them:

1. They cannot require anything running externally. All you should need to
   run the unit tests is the git tree, python and the dependencies
   installed. This includes running services, a config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and testresources
   should be used for shared state between tests.
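A sketch of the fixture-per-test idea, using only the standard library for
illustration (in tempest itself the third-party ``fixtures`` package
provides this via ``useFixture``; the class names here are hypothetical):

```python
import unittest


class FakeServiceFixture:
    """Stand-in for shared state that each test sets up and tears down
    itself, instead of sharing it through setUpClass."""

    def __init__(self):
        self.started = False

    def setUp(self):
        self.started = True

    def cleanUp(self):
        self.started = False


class SampleUnitTest(unittest.TestCase):
    def use_fixture(self, fixture):
        # Mirrors the useFixture pattern: set up now, clean up
        # automatically when the test ends.
        fixture.setUp()
        self.addCleanup(fixture.cleanUp)
        return fixture

    def test_service_is_started(self):
        svc = self.use_fixture(FakeServiceFixture())
        self.assertTrue(svc.started)


suite = unittest.TestLoader().loadTestsFromTestCase(SampleUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```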


.. _TestDocumentation:

Test Documentation
------------------
For tests being added we need to require inline documentation in the form
of docstrings to explain what is being tested. In API tests for a new API a
class level docstring should be added to an API reference doc. If one
doesn't exist a TODO comment should be put indicating that the reference
needs to be added. For individual API test cases a method level docstring
should be used to explain the functionality being tested if the test name
isn't descriptive enough. For example::

    def test_get_role_by_id(self):
        """Get a role by its id."""

the docstring there is superfluous and shouldn't be added. But for a method
like::

    def test_volume_backup_create_get_detailed_list_restore_delete(self):
        pass

a docstring would be useful because while the test title is fairly
descriptive the operations being performed are complex enough that a bit
more explanation will help people figure out the intent of the test.

For scenario tests a class level docstring describing the steps in the
scenario is required. If there is more than one test case in the class,
individual docstrings for the workflow in each test method can be used
instead. A good example of this would be::

    class TestVolumeBootPattern(manager.ScenarioTest):
        """
        This test case attempts to reproduce the following steps:

        * Create in Cinder some bootable volume importing a Glance image
        * Boot an instance from the bootable volume
        * Write content to the volume
        * Delete an instance and Boot a new instance from the volume
        * Check written content in the instance
        * Create a volume snapshot while the instance is running
        * Boot an additional instance from the new snapshot based volume
        * Check written content in the instance booted from snapshot
        """

Test Identification with Idempotent ID
--------------------------------------

Every function that provides a test must have an ``idempotent_id`` decorator
that is a unique ``uuid-4`` instance. This ID is used to complement the
fully qualified test name and track test functionality through refactoring.
The format of the metadata looks like::

    @test.idempotent_id('585e934c-448e-43c4-acbf-d06a9b899997')
    def test_list_servers_with_detail(self):
        # The created server should be in the detailed list of all servers
        ...

Tempest.lib includes a ``check-uuid`` tool that will test for the existence
and uniqueness of idempotent_id metadata for every test. If you have
tempest installed you can run the tool against Tempest by calling it from
the tempest repo::

    check-uuid

It can be invoked against any test suite by passing a package name::

    check-uuid --package <package_name>

Tests without an ``idempotent_id`` can be automatically fixed by running
the command with the ``--fix`` flag, which will modify the source package
by inserting randomly generated uuids for every test that does not have
one::

    check-uuid --fix

The ``check-uuid`` tool is used as part of the tempest gate job
to ensure that all tests have an ``idempotent_id`` decorator.

Branchless Tempest Considerations
---------------------------------

Starting with the OpenStack Icehouse release Tempest no longer has any
stable branches. This is to better ensure API consistency between releases
because the API behavior should not change between releases. This means
that the stable branches are also gated by the Tempest master branch, which
also means that proposed commits to Tempest must work against both the
master and all the currently supported stable branches of the projects. As
such there are a few special considerations that have to be accounted for
when pushing new changes to tempest.

1. New Tests for new features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When adding tests for new features that were not in previous releases of
the projects the new test has to be properly skipped with a feature flag.
This can be as simple as using the @test.requires_ext() decorator to check
if the required extension (or discoverable optional API) is enabled, or it
may require adding a new config option to the appropriate section. If there
isn't a method of selecting the new **feature** from the config file then
there won't be a mechanism to disable the test with older stable releases
and the new test won't be able to merge.
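A sketch of the feature-flag idea using a standard ``unittest`` skip (in
tempest itself this is driven by ``@test.requires_ext()`` or a config
option; the flag name below is hypothetical):

```python
import unittest

# Stand-in for a value parsed from tempest.conf; in a real deployment this
# reflects whether the target cloud enables the new feature.
NEW_FEATURE_ENABLED = False


class NewFeatureTest(unittest.TestCase):
    @unittest.skipUnless(NEW_FEATURE_ENABLED,
                         'new-feature extension is not enabled')
    def test_new_feature_behavior(self):
        # Would exercise the new feature here; never reached when the
        # flag is off, so older stable clouds still pass the suite.
        self.fail('feature exercised unexpectedly')


suite = unittest.TestLoader().loadTestsFromTestCase(NewFeatureTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With the flag disabled the test is skipped rather than failed, which is
exactly the mechanism that lets the same test run against both master and
older stable branches.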

2. Bug fix on core project needing Tempest changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When trying to land a bug fix which changes a tested API you'll have to use
the following procedure::

    - Propose change to the project, get a +2 on the change even with
      failing
    - Propose skip on Tempest which will only be approved after the
      corresponding change in the project has a +2 on change
    - Land project change in master and all open stable branches (if
      required)
    - Land changed test in Tempest

Otherwise the bug fix won't be able to land in the project.

3. New Tests for existing features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If a test is being added for a feature that exists in all the current
releases of the projects then the only concern is that the API behavior is
the same across all the versions of the project being tested. If the
behavior is not consistent the test will not be able to merge.

API Stability
-------------

For new tests being added to Tempest the assumption is that the API being
tested is considered stable and adheres to the OpenStack API stability
guidelines. If an API is still considered experimental or in development
then it should not be tested by Tempest until it is considered stable.