Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api &
  tempest/scenario tests
- [T104] Scenario tests require a services decorator
- [T105] Unit tests cannot use setUpClass
- [T106] vim configuration should not be kept in source files
- [N322] Method's default argument shouldn't be mutable

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment

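The points above can be sketched as a self-contained test that creates its
own data and registers its cleanup immediately. ``FakeVolumeClient`` below is
a stand-in invented for illustration, not a real Tempest client::

    import unittest


    class FakeVolumeClient(object):
        """Stand-in for a service client; invented for illustration only."""

        def __init__(self):
            self.volumes = {}

        def create_volume(self, name):
            self.volumes[name] = {'name': name, 'status': 'available'}
            return self.volumes[name]

        def delete_volume(self, name):
            del self.volumes[name]


    class VolumesTest(unittest.TestCase):

        def setUp(self):
            super(VolumesTest, self).setUp()
            self.client = FakeVolumeClient()

        def test_volume_create(self):
            # Provide our own data instead of assuming a volume exists ...
            volume = self.client.create_volume('test-volume')
            # ... and register cleanup immediately, so the data is removed
            # even if a later assertion fails.
            self.addCleanup(self.client.delete_volume, 'test-volume')
            self.assertEqual('available', volume['status'])
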
Exception Handling
------------------
According to ``The Zen of Python``:
``Errors should never pass silently.``
Tempest usually runs in a special environment (Jenkins gate jobs); in every
error or failure situation we should provide as much error-related
information as possible, because we usually do not have the chance to
investigate the situation after the issue has happened.

In every test case the abnormal situations must be very verbosely explained
by the exception and the log.

In most cases the very first issue is the most important information.

Try to avoid using ``try`` blocks in the test cases, as both the ``except``
and ``finally`` blocks could replace the original exception
when the additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case
at all.

Try to avoid using any exception handling construct which can hide the
error's origin.

If you really need to use a ``try`` block, please ensure the original
exception is at least logged. When the exception is logged you usually need
to ``raise`` the same or a different exception anyway.

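That pattern can be sketched with the standard ``logging`` module (Tempest's
own logger setup may differ, and the client here is hypothetical)::

    import logging

    LOG = logging.getLogger(__name__)


    def delete_server_with_logging(client, server_id):
        """Log the original exception, then re-raise it unchanged."""
        try:
            client.delete_server(server_id)
        except Exception:
            # LOG.exception records both the message and the traceback of
            # the original error, so its origin is preserved in the logs.
            LOG.exception('Failed to delete server %s', server_id)
            raise
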
Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails partway through.

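A small runnable illustration: cleanups registered with ``addCleanup`` run
last-in first-out after the test body, whether or not it failed, so no
``try``/``finally`` is needed::

    import unittest


    class CleanupOrderTest(unittest.TestCase):
        """Cleanups run last-in first-out, even if the test body fails."""

        events = []

        def test_cleanup_runs_in_reverse_order(self):
            self.addCleanup(self.events.append, 'delete network')
            self.addCleanup(self.events.append, 'delete server')
            # No try/finally needed: both cleanups above run after this
            # point whether the assertion below passes or fails.
            self.assertTrue(True)
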
Use the ``self.assert*`` methods provided by the unit test framework.
This signals failures early on.

Avoid using ``self.fail`` alone; its stack trace will point to
the ``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertion.
``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know anything
about the values used in the formula; the ``msg`` argument might be good
enough for providing more information.

Most other assert methods can include more information by default.
For example ``self.assertIn`` can include the whole set.

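The difference in failure output can be seen with plain ``unittest``::

    import unittest

    case = unittest.TestCase()

    # assertTrue collapses everything to one boolean first, so the
    # failure message carries none of the operands:
    try:
        case.assertTrue('cirros' in ['centos', 'fedora'])
    except AssertionError as error:
        print(error)        # False is not true

    # assertIn keeps both operands, so the message shows the whole set:
    try:
        case.assertIn('cirros', ['centos', 'fedora'])
    except AssertionError as error:
        print(error)
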
It is recommended to use testtools matchers for the trickier assertions.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers>`_

You can implement your own specific matcher as well.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#writing-your-own-matchers>`_

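A custom matcher only needs a ``match`` method that returns ``None`` on
success or a mismatch object otherwise. The sketch below follows that
testtools protocol without importing the library, purely for illustration::

    class Mismatch(object):
        """Describes a failed match, like testtools.matchers.Mismatch."""

        def __init__(self, description):
            self._description = description

        def describe(self):
            return self._description


    class IsActive(object):
        """Matches a server dict whose 'status' field is ACTIVE."""

        def __str__(self):
            return 'IsActive()'

        def match(self, actual):
            # Return None on success, or a mismatch describing the
            # failure -- the same contract testtools matchers follow.
            if actual.get('status') == 'ACTIVE':
                return None
            return Mismatch('status was %r, expected ACTIVE'
                            % actual.get('status'))

With testtools installed, the same class can be passed straight to
``self.assertThat(server, IsActive())``.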
If the test case fails you can see the related logs and the information
carried by the exception (exception class, backtrace and exception info).
This and the service logs are your only guide to finding the root cause of
flaky issues.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend on
any other ``test_method`` or on ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credentials management, testresources and so on. These facilities MUST be
able to work even if just one ``test_method`` is selected for execution.

Service Tagging
---------------
Service tagging is used to specify which services are exercised by a
particular test method. You specify the services with the
``tempest.test.services`` decorator. For example::

    @services('compute', 'image')

Valid service tag names are the same as the list of directories in
tempest.api that have tests.

For scenario tests having a service tag is required. For the api tests
service tags are only needed if the test method makes an api call (either
directly or indirectly through another service) that differs from the parent
directory name. For example, any test that makes an api call to a service
other than nova in tempest.api.compute would require a service tag for those
services, however they do not need to be tagged as compute.

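In spirit, the decorator just records the tagged services on the test
method. The simplified stand-in below is not the real
``tempest.test.services`` implementation; it only shows the usage shape::

    def services(*service_names):
        """Simplified stand-in for tempest.test.services (illustration)."""
        def decorator(func):
            func.services = service_names
            return func
        return decorator


    class TestImageSnapshot(object):

        @services('compute', 'image')
        def test_snapshot_server(self):
            # Boots a server (compute), then snapshots it to an image
            # (image), so both services must be tagged.
            pass
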
Negative Tests
--------------
Newly added negative tests should use the negative test framework. The first
step is to create an interface description in a json file under
``etc/schemas``. These descriptions consist of two important sections for the
test (one of those is mandatory):

- A resource (part of the URL of the request): Resources needed for a test
  must be created in ``setUpClass`` and registered with ``set_resource``,
  e.g.: ``cls.set_resource("server", server['id'])``

- A json schema: defines properties for a request.

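A hypothetical interface description combining both sections could look like
this; the field names are modeled on existing files, so consult the files
under ``etc/schemas`` for the authoritative format::

    {
        "name": "get-console-output",
        "http-method": "POST",
        "url": "servers/%s/action",
        "resources": ["server"],
        "json-schema": {
            "type": "object",
            "properties": {
                "os-getConsoleOutput": {
                    "type": "object",
                    "properties": {
                        "length": {"type": ["integer", "string"]}
                    }
                }
            }
        }
    }
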
After that a test class must be added to automatically generate test scenarios
out of the given interface description::

    load_tests = test.NegativeAutoTest.load_tests

    class SampleTestNegativeTestJSON(<your base class>, test.NegativeAutoTest):
        _interface = 'json'
        _service = 'compute'
        _schema_file = <your schema file>

Negative tests must be marked with a negative attribute::

    @test.attr(type=['negative', 'gate'])
    def test_get_console_output(self):
        self.execute(self._schema_file)

All negative tests should be added into a separate negative test file.
If such a file doesn't exist for the particular resource being tested a new
test file should be added. Old XML based negative tests can be kept but should
be renamed to ``_xml.py``.

Test skips because of Known Bugs
--------------------------------

If a test is broken because of a bug it is appropriate to skip the test until
the bug has been fixed. You should use the ``skip_because`` decorator so that
Tempest's skip tracking tool can watch the bug status.

Example::

    @skip_because(bug="980688")
    def test_this_and_that(self):
        ...

Guidelines
----------
- Do not submit changesets with only test cases which are skipped as
  they will not be merged.
- Consistently check the status code of responses in test cases. The
  earlier a problem is detected the easier it is to debug, especially
  where there is complicated setup required.

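Checking the status code immediately after each call keeps the failure close
to its cause. A sketch with a hypothetical client (real Tempest clients of
this era return a ``(resp, body)`` pair)::

    import unittest


    class FakeServersClient(object):
        """Hypothetical client; invented for illustration only."""

        def create_server(self, name):
            resp = {'status': '202'}
            body = {'id': 'fake-id', 'name': name}
            return resp, body


    class ServersTest(unittest.TestCase):

        def test_create_server(self):
            client = FakeServersClient()
            resp, server = client.create_server('test-server')
            # Check the status code right away: if creation already went
            # wrong, later assertions would only produce confusing errors.
            self.assertEqual('202', resp['status'])
            self.assertEqual('test-server', server['name'])
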
Parallel Test Execution
-----------------------
Tempest by default runs its tests in parallel; this creates the possibility
for interesting interactions between tests which can cause unexpected
failures. Tenant isolation provides protection from most of the potential
race conditions between tests outside the same class. But there are still a
few things to watch out for to try to avoid issues when running your tests
in parallel.

- Resources outside of a tenant scope still have the potential to conflict.
  This is a larger concern for the admin tests since most resources and
  actions that require admin privileges are outside of tenants.

- Races between methods in the same class are not a problem because
  parallelization in tempest is at the test class level, but if there is a
  json and xml version of the same test class there could still be a race
  between methods.

- The rand_name() function from tempest.common.utils.data_utils should be
  used anywhere a resource is created with a name. Static naming should be
  avoided to prevent resource conflicts.

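The idea behind ``rand_name`` can be sketched as appending a random suffix to
a readable prefix; this is an illustration of the concept, not the exact
tempest implementation::

    import random


    def rand_name(name='test'):
        """Append a random numeric suffix so that two tests creating a
        resource with the same base name never collide."""
        return name + '-' + str(random.randint(0, 99999999))

    # Static naming risks a conflict under parallel execution:
    #     server_name = 'test-server'
    # A randomized name keeps workers from stepping on each other:
    server_name = rand_name('test-server')
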
- If the execution of a set of tests is required to be serialized then locking
  can be used to perform this. See AggregatesAdminTest in
  tempest.api.compute.admin for an example of using locking.

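The underlying idea is an inter-process lock keyed on a shared resource name.
A stdlib-only sketch of that idea follows; it is not tempest's actual locking
mechanism (tempest uses a lock fixture), and note that this toy version
errors on contention instead of blocking::

    import os
    import tempfile


    class SimpleFileLock(object):
        """Minimal cross-process lock via an exclusively created file.

        Illustration only; tempest relies on a lock fixture instead.
        """

        def __init__(self, name):
            self.path = os.path.join(tempfile.gettempdir(), name + '.lock')

        def __enter__(self):
            # O_EXCL makes creation atomic: a concurrent acquisition by
            # another worker fails until the holder releases the lock.
            self.fd = os.open(self.path,
                              os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            return self

        def __exit__(self, *exc):
            os.close(self.fd)
            os.unlink(self.path)

    with SimpleFileLock('aggregates-demo'):
        pass  # tests touching the shared resource would run here
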
Stress Tests in Tempest
-----------------------
Any tempest test case can be flagged as a stress test. With this flag it will
be automatically discovered and used in the stress test runs. The stress test
framework itself is a facility to spawn and control worker processes in order
to find race conditions (see ``tempest/stress/`` for more information). Please
note that these stress tests can't be used for benchmarking purposes since
they don't measure any performance characteristics.

Example::

    @stresstest(class_setup_per='process')
    def test_this_and_that(self):
        ...

This will flag the test ``test_this_and_that`` as a stress test. The parameter
``class_setup_per`` controls when the setUpClass function should be called.

Good candidates for stress tests are:

- Scenario tests
- API tests that have a wide focus

Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are
made to the config variables in tempest/config.py then the sample config file
must be regenerated. This can be done by running::

    tox -egenconfig

Unit Tests
----------
Unit tests are a separate class of tests in tempest. They verify tempest
itself, and thus have a different set of guidelines around them:

1. They cannot require anything running externally. All you should need to
   run the unit tests is the git tree, python and the dependencies installed.
   This includes running services, a config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and testresources
   should be used for shared state between tests.

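The shape of the fixture approach, sketched with a minimal fixture-like class
rather than the real ``fixtures`` library::

    import unittest


    class FakeConfigFixture(object):
        """Minimal stand-in for a fixtures.Fixture; illustration only."""

        def setUp(self):
            self.conf = {'debug': True}

        def cleanUp(self):
            self.conf = None


    class ConfigTest(unittest.TestCase):

        def setUp(self):
            super(ConfigTest, self).setUp()
            # Shared state comes from a fixture set up per test, rather
            # than from setUpClass -- each test stays independently
            # runnable.
            self.fixture = FakeConfigFixture()
            self.fixture.setUp()
            self.addCleanup(self.fixture.cleanUp)

        def test_debug_enabled(self):
            self.assertTrue(self.fixture.conf['debug'])

With testtools-based unit tests the fixture lines collapse to one call:
``self.useFixture(...)``.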
.. _TestDocumentation:

Test Documentation
------------------
For tests being added we require inline documentation in the form of
docstrings to explain what is being tested. In API tests for a new API a
class level docstring should be added to an API reference doc. If one doesn't
exist a TODO comment should be put in indicating that the reference needs to
be added. For individual API test cases a method level docstring should be
used to explain the functionality being tested if the test name isn't
descriptive enough. For example::

    def test_get_role_by_id(self):
        """Get a role by its id."""

the docstring there is superfluous and shouldn't be added. But for a method
like::

    def test_volume_backup_create_get_detailed_list_restore_delete(self):
        pass

a docstring would be useful because while the test title is fairly descriptive
the operations being performed are complex enough that a bit more explanation
will help people figure out the intent of the test.

For scenario tests a class level docstring describing the steps in the
scenario is required. If there is more than one test case in the class
individual docstrings for the workflow in each test method can be used
instead. A good example of this would be::

    class TestVolumeBootPattern(manager.ScenarioTest):
        """
        This test case attempts to reproduce the following steps:

        * Create in Cinder some bootable volume importing a Glance image
        * Boot an instance from the bootable volume
        * Write content to the volume
        * Delete an instance and Boot a new instance from the volume
        * Check written content in the instance
        * Create a volume snapshot while the instance is running
        * Boot an additional instance from the new snapshot based volume
        * Check written content in the instance booted from snapshot
        """