Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api tests
- [T104] Scenario tests require a services decorator
- [T105] Unit tests cannot use setUpClass

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self-contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment


Exception Handling
------------------
According to ``The Zen of Python``: ``Errors should never pass silently.``
Tempest usually runs in a special environment (Jenkins gate jobs), so in
every error or failure situation we should provide as much error-related
information as possible, because we usually do not get a chance to
investigate the situation after the issue has happened.

In every test case, abnormal situations must be explained very verbosely
by both the exception and the log.

In most cases the very first issue is the most important information.

Try to avoid using ``try`` blocks in test cases: both the ``except``
and the ``finally`` block can replace the original exception
when their additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case
at all.

Try to avoid using any exception handling construct which can hide the
error's origin.

If you really need to use a ``try`` block, please ensure that the original
exception is at least logged. When the exception is logged you usually need
to ``raise`` the same or a different exception anyway.

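A minimal sketch of this log-and-re-raise pattern. The helper name, the client object, and its ``delete_server`` call are illustrative, not real tempest APIs:

```python
import logging

LOG = logging.getLogger(__name__)


def delete_server_quietly(client, server_id):
    # Log the original exception before re-raising so the failure's
    # origin is preserved in the logs rather than silently replaced.
    try:
        client.delete_server(server_id)
    except Exception:
        LOG.exception('Failed to delete server %s', server_id)
        raise
```

The key point is the bare ``raise``: it re-raises the original exception with its original traceback after the details have been logged.
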
Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails part way through.

Use the ``self.assert*`` methods provided by the unit test framework
to signal failures early.

Avoid using ``self.fail`` alone: its stack trace will point to the
``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertions.
``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know
anything about the values used in the formula; the ``msg`` argument may
be good enough for providing more information.

Most other assert methods can include more information by default.
For example ``self.assertIn`` can include the whole set.

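The difference can be seen side by side. In this sketch (the flavor names are illustrative data), the ``assertTrue`` call needs an explicit ``msg`` to be useful on failure, while ``assertIn`` reports the whole container by default:

```python
import unittest


class FlavorListTest(unittest.TestCase):
    def test_flavor_present(self):
        flavors = ['m1.tiny', 'm1.small']  # illustrative data
        # Without the msg argument, a failure here would only say
        # "False is not true".
        self.assertTrue('m1.tiny' in flavors,
                        'm1.tiny not found in %s' % flavors)
        # assertIn includes the whole container in its failure
        # message by default, so no msg argument is needed.
        self.assertIn('m1.tiny', flavors)
```
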
It is recommended to use testtools matchers for the more tricky assertions.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers>`_

You can implement your own specific matcher as well.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#writing-your-own-matchers>`_

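A custom matcher follows a simple protocol: ``match()`` returns ``None`` on success or a mismatch object describing the failure. The sketch below is standalone (it defines its own minimal mismatch class rather than importing from testtools) so the shape of the protocol is visible:

```python
class Mismatch(object):
    """Minimal stand-in for testtools.matchers.Mismatch."""
    def __init__(self, description):
        self.description = description

    def describe(self):
        return self.description


class IsMultipleOf(object):
    """Matcher following the testtools protocol: match() returns
    None on success, or a mismatch object on failure."""
    def __init__(self, base):
        self.base = base

    def __str__(self):
        return 'IsMultipleOf(%d)' % self.base

    def match(self, actual):
        if actual % self.base == 0:
            return None
        return Mismatch('%d is not a multiple of %d'
                        % (actual, self.base))
```

With testtools installed, such a matcher would be used as ``self.assertThat(10, IsMultipleOf(5))``.
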
If a test case fails you can see the related logs and the information
carried by the exception (exception class, backtrace and exception info).
These and the service logs are your only guides for finding the root cause
of a flaky issue.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend on
any other ``test_method`` or on ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credentials management, testresources and so on. These facilities MUST be
able to work even if just one ``test_method`` is selected for execution.

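A small sketch of the intent (the quota values are illustrative): the class-level facility is shared, but each test method reads it independently and works even when run on its own:

```python
import unittest


class QuotasTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Shared facility: initialized once per class, but it must
        # work even when only one test method is selected to run.
        cls.quotas = {'cores': 20, 'ram': 51200}

    def test_cores_quota(self):
        self.assertEqual(20, self.quotas['cores'])

    def test_ram_quota(self):
        # Does not rely on test_cores_quota having run first.
        self.assertEqual(51200, self.quotas['ram'])
```
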
Service Tagging
---------------
Service tagging is used to specify which services are exercised by a
particular test method. You specify the services with the
``tempest.test.services`` decorator. For example::

    @services('compute', 'image')

Valid service tag names are the same as the list of directories in
tempest.api that have tests.

For scenario tests having a service tag is required. For the api tests
service tags are only needed if the test method makes an api call (either
directly or indirectly through another service) that differs from the
parent directory name. For example, any test that makes an api call to a
service other than nova in tempest.api.compute would require a service tag
for those services, however they do not need to be tagged as compute.

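The effect of the decorator can be sketched standalone. This is a simplified stand-in, not the real ``tempest.test.services`` implementation (which also validates the tag names); it only shows that the decorator records the service names on the test method:

```python
def services(*args):
    # Simplified stand-in for tempest.test.services: record the
    # services a test method exercises so the runner can filter on
    # them.
    def decorator(f):
        f._services = args
        return f
    return decorator


class ImagesTest(object):  # stand-in for a tempest test class
    @services('compute', 'image')
    def test_snapshot_server(self):
        pass
```
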
Negative Tests
--------------
When adding negative tests to tempest there are two requirements. First, the
tests must be marked with a negative attribute. For example::

    @attr(type='negative')
    def test_resource_no_uuid(self):
        ...

The second requirement is that all negative tests must be added to a
negative test file. If such a file doesn't exist for the particular
resource being tested, a new test file should be added.

Test skips because of Known Bugs
--------------------------------

If a test is broken because of a bug it is appropriate to skip the test
until the bug has been fixed. You should use the ``skip_because`` decorator
so that Tempest's skip tracking tool can watch the bug status.

Example::

    @skip_because(bug="980688")
    def test_this_and_that(self):
        ...

Guidelines
----------
- Do not submit changesets with only testcases which are skipped as
  they will not be merged.
- Consistently check the status code of responses in testcases. The
  earlier a problem is detected the easier it is to debug, especially
  where there is complicated setup required.

Parallel Test Execution
-----------------------
Tempest by default runs its tests in parallel. This creates the possibility
of interesting interactions between tests which can cause unexpected
failures. Tenant isolation provides protection from most of the potential
race conditions between tests outside the same class. But there are still a
few things to watch out for to try to avoid issues when running your tests
in parallel.

- Resources outside of a tenant scope still have the potential to conflict.
  This is a larger concern for the admin tests since most resources and
  actions that require admin privileges are outside of tenants.

- Races between methods in the same class are not a problem because
  parallelization in tempest is at the test class level, but if there is a
  json and an xml version of the same test class there could still be a
  race between methods.

- The rand_name() function from tempest.common.utils.data_utils should be
  used anywhere a resource is created with a name. Static naming should be
  avoided to prevent resource conflicts.

- If the execution of a set of tests is required to be serialized then
  locking can be used to perform this. See AggregatesAdminTest in
  tempest.api.compute.admin for an example of using locking.

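The naming point can be sketched standalone. The real helper lives in ``tempest.common.utils.data_utils``; this simplified stand-in mirrors its append-a-random-suffix behavior:

```python
import random


def rand_name(name='test'):
    # Simplified stand-in for data_utils.rand_name: append a random
    # numeric suffix so resources created concurrently by parallel
    # tests do not collide on name.
    return name + '-' + str(random.randint(1, 0x7fffffff))


server_name = rand_name('server')  # e.g. 'server-1638276485'
```
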
Stress Tests in Tempest
-----------------------
Any tempest test case can be flagged as a stress test. With this flag it
will be automatically discovered and used in the stress test runs. The
stress test framework itself is a facility to spawn and control worker
processes in order to find race conditions (see ``tempest/stress/`` for
more information). Please note that these stress tests can't be used for
benchmarking purposes since they don't measure any performance
characteristics.

Example::

    @stresstest(class_setup_per='process')
    def test_this_and_that(self):
        ...

This will flag the test ``test_this_and_that`` as a stress test. The
parameter ``class_setup_per`` controls when the setUpClass function should
be called.

Good candidates for stress tests are:

- Scenario tests
- API tests that have a wide focus

Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are
made to the config variables in tempest then the sample config file must be
regenerated. This can be done by running the script
``tools/generate_sample.sh``.

Unit Tests
----------
Unit tests are a separate class of tests in tempest. They verify tempest
itself, and thus have a different set of guidelines around them:

1. They cannot require anything running externally. All you should need to
   run the unit tests is the git tree, python and the dependencies
   installed. This includes running services, a config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and testresources
   should be used for shared state between tests.
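
A minimal standalone sketch of the per-test fixture pattern. The fixture class, its ``setUp``/``cleanUp`` methods, and the config values are illustrative; real unit tests would use the fixtures library and testtools' ``useFixture``:

```python
import unittest


class FakeConfFixture(object):
    """Minimal stand-in for a fixtures.Fixture supplying fake config."""
    def setUp(self):
        self.conf = {'lock_path': '/tmp/tempest-lock'}  # illustrative

    def cleanUp(self):
        self.conf = None


class TestWithFixture(unittest.TestCase):
    def setUp(self):
        super(TestWithFixture, self).setUp()
        # Per-test fixture instead of setUpClass; the cleanup is
        # registered immediately, mirroring what useFixture() does.
        self.conf_fixture = FakeConfFixture()
        self.conf_fixture.setUp()
        self.addCleanup(self.conf_fixture.cleanUp)

    def test_lock_path(self):
        self.assertEqual('/tmp/tempest-lock',
                         self.conf_fixture.conf['lock_path'])
```

Because the state is built per test rather than per class, each test method stays independently runnable, which also matches the test-independence guideline above.
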