Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api tests
- [T104] Scenario tests require a services decorator
- [T105] Unit tests cannot use setUpClass
- [T106] vim configuration should not be kept in source files.

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment


Exception Handling
------------------
According to ``The Zen of Python``,
``Errors should never pass silently.``
Tempest usually runs in a special environment (jenkins gate jobs); in every
error or failure situation we should provide as much error-related
information as possible, because we usually do not have the chance to
investigate the situation after the issue has happened.

In every test case, abnormal situations must be explained very verbosely
by the exception and the log.

In most cases the very first issue is the most important piece of information.

Try to avoid using ``try`` blocks in the test cases; both the ``except``
and ``finally`` blocks can replace the original exception
when the additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case
at all.

Try to avoid using any exception handling construct which can hide the error's
origin.

If you really need to use a ``try`` block, please ensure the original
exception is at least logged. When the exception is logged you usually need
to ``raise`` the same or a different exception anyway.

Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails part way through.
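
A minimal sketch of the pattern, using a made-up ``FakeClient`` in place of a
real Tempest client (the class and method names here are illustrative, not
Tempest's API)::

    import unittest

    class FakeClient:
        """Hypothetical stand-in for a real service client."""
        def __init__(self):
            self.servers = []

        def create_server(self, name):
            self.servers.append(name)
            return name

        def delete_server(self, name):
            self.servers.remove(name)

    class ServerTest(unittest.TestCase):
        client = FakeClient()

        def test_server_lifecycle(self):
            server = self.client.create_server('demo')
            # The registered cleanup runs even if a later assertion fails,
            # so no try/finally block is needed.
            self.addCleanup(self.client.delete_server, server)
            self.assertIn(server, self.client.servers)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ServerTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print(result.wasSuccessful(), ServerTest.client.servers)

After the run the server list is empty again, because the cleanup fired at the
end of the test regardless of how the test body exited.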

Use the ``self.assert*`` methods provided by the unit test framework
to signal failures early.

Avoid using ``self.fail`` alone; its stack trace will signal
the ``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertion.
The ``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know anything
about the values used in the formula; the ``msg`` argument might be good enough
for providing more information.

Most other assert methods can include more information by default.
For example ``self.assertIn`` can include the whole set.

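As an illustration of the difference, here is a toy comparison (the quota
values are invented) of the failure messages the two styles produce::

    import unittest

    class QuotaTest(unittest.TestCase):
        """Deliberately failing tests, to compare failure messages."""

        def test_bare_boolean(self):
            used, limit = 12, 10
            # Without a msg argument this failure only says
            # "False is not true".
            self.assertTrue(used <= limit)

        def test_dedicated_assert(self):
            used, limit = 12, 10
            # A dedicated assert method reports both operands by default.
            self.assertLessEqual(used, limit)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(QuotaTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    for test, trace in result.failures:
        print(trace.splitlines()[-1])

The first failure reports only ``False is not true``, while the second
includes both values, which is what you want when debugging a gate log.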
It is recommended to use testtools matchers for the trickier assertions.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers>`_

You can implement your own specific matchers as well.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#writing-your-own-matchers>`_

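A tiny sketch of the matcher protocol testtools uses (``match()`` returns
``None`` on success, or an object whose ``describe()`` explains the mismatch);
written here without importing testtools so it stays self-contained::

    class Mismatch:
        def __init__(self, description):
            self._description = description

        def describe(self):
            return self._description

    class IsActiveServer:
        """Custom matcher: checks a server dict reports an ACTIVE status."""

        def match(self, server):
            if server.get('status') != 'ACTIVE':
                return Mismatch('server %s is %s, expected ACTIVE'
                                % (server.get('id'), server.get('status')))
            return None  # None means the match succeeded

    matcher = IsActiveServer()
    print(matcher.match({'id': 's1', 'status': 'ACTIVE'}))
    mismatch = matcher.match({'id': 's2', 'status': 'ERROR'})
    print(mismatch.describe())

A matcher like this keeps the failure description next to the matching logic,
so every test using it fails with the same informative message.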
If the test case fails you can see the related logs and the information
carried by the exception (exception class, backtrace and exception info).
These and the service logs are your only guide to finding the root cause of a
flaky issue.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend on
any other ``test_method`` or on ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credential management, testresources and so on. These facilities MUST be able
to work even if just one ``test_method`` is selected for execution.

Service Tagging
---------------
Service tagging is used to specify which services are exercised by a particular
test method. You specify the services with the ``tempest.test.services``
decorator. For example::

    @services('compute', 'image')

Valid service tag names are the same as the list of directories in tempest.api
that have tests.

For scenario tests having a service tag is required. For the api tests service
tags are only needed if the test method makes an api call (either directly or
indirectly through another service) that differs from the parent directory
name. For example, any test that makes an api call to a service other than nova
in tempest.api.compute would require a service tag for those services; however,
they do not need to be tagged as compute.

Negative Tests
--------------
Newly added negative tests should use the negative test framework. The first
step is to create an interface description in a json file under `etc/schemas`.
These descriptions consist of two important sections for the test
(one of those is mandatory):

- A resource (part of the URL of the request): Resources needed for a test
  must be created in `setUpClass` and registered with `set_resource` e.g.:
  `cls.set_resource("server", server['id'])`

- A json schema: defines properties for a request.

After that a test class must be added to automatically generate test scenarios
out of the given interface description::

    load_tests = test.NegativeAutoTest.load_tests

    class SampleTestNegativeTestJSON(<your base class>, test.NegativeAutoTest):
        _interface = 'json'
        _service = 'compute'
        _schema_file = <your schema file>

Negative tests must be marked with a negative attribute::

    @test.attr(type=['negative', 'gate'])
    def test_get_console_output(self):
        self.execute(self._schema_file)

All negative tests should be added into a separate negative test file.
If such a file doesn't exist for the particular resource being tested a new
test file should be added. Old XML based negative tests can be kept but should
be renamed to `_xml.py`.

Test skips because of Known Bugs
--------------------------------

If a test is broken because of a bug it is appropriate to skip the test until
that bug has been fixed. You should use the ``skip_because`` decorator so that
Tempest's skip tracking tool can watch the bug status.

Example::

    @skip_because(bug="980688")
    def test_this_and_that(self):
        ...

Guidelines
----------
- Do not submit changesets with only testcases which are skipped, as
  they will not be merged.
- Consistently check the status code of responses in testcases. The
  earlier a problem is detected the easier it is to debug, especially
  where there is complicated setup required.

Parallel Test Execution
-----------------------
Tempest by default runs its tests in parallel. This creates the possibility for
interesting interactions between tests which can cause unexpected failures.
Tenant isolation provides protection from most of the potential race conditions
between tests outside the same class. But there are still a few things to
watch out for to try to avoid issues when running your tests in parallel.

- Resources outside of a tenant scope still have the potential to conflict. This
  is a larger concern for the admin tests since most resources and actions that
  require admin privileges are outside of tenants.

- Races between methods in the same class are not a problem because
  parallelization in tempest is at the test class level, but if there is a json
  and xml version of the same test class there could still be a race between
  methods.

- The ``rand_name()`` function from ``tempest.common.utils.data_utils`` should
  be used anywhere a resource is created with a name. Static naming should be
  avoided to prevent resource conflicts.

- If the execution of a set of tests is required to be serialized then locking
  can be used to perform this. See ``AggregatesAdminTest`` in
  ``tempest.api.compute.admin`` for an example of using locking.

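The random-naming advice above can be sketched as follows (this is only an
illustration of what a ``rand_name()``-style helper provides; the real
implementation in tempest.common.utils.data_utils may differ in detail)::

    import uuid

    def rand_name(name='test'):
        # Append a random suffix so parallel tests never pick the same name.
        return name + '-' + uuid.uuid4().hex

    a = rand_name('tempest-server')
    b = rand_name('tempest-server')
    print(a != b)

Two workers calling the helper with the same base name get distinct resource
names, so creation requests do not collide.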
Stress Tests in Tempest
-----------------------
Any tempest test case can be flagged as a stress test. With this flag it will
be automatically discovered and used in the stress test runs. The stress test
framework itself is a facility to spawn and control worker processes in order
to find race conditions (see ``tempest/stress/`` for more information). Please
note that these stress tests can't be used for benchmarking purposes since they
don't measure any performance characteristics.

Example::

    @stresstest(class_setup_per='process')
    def test_this_and_that(self):
        ...

This will flag the test ``test_this_and_that`` as a stress test. The parameter
``class_setup_per`` gives control over when the setUpClass function should be
called.

Good candidates for stress tests are:

- Scenario tests
- API tests that have a wide focus

Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are made
to the config variables in tempest then the sample config file must be
regenerated. This can be done by running the script ``tools/generate_sample.sh``.

Unit Tests
----------
Unit tests are a separate class of tests in tempest. They verify tempest
itself, and thus have a different set of guidelines around them:

1. They can not require anything running externally. All you should need to
   run the unit tests is the git tree, python and the dependencies installed.
   This means no running services, no config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and testresources
   should be used for shared state between tests.
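
A rough sketch of the second point, using only the standard library (the real
unit tests use the fixtures and testresources libraries; the ``FakeConfig``
resource and helper here are hypothetical): expensive shared state is created
lazily by a manager function instead of in ``setUpClass``::

    import unittest

    class FakeConfig:
        """Hypothetical expensive shared resource."""
        instances_created = 0

        def __init__(self):
            FakeConfig.instances_created += 1

    _cached_config = None

    def get_shared_config():
        # Created once and reused across tests, roughly what a
        # testresources.TestResourceManager provides.
        global _cached_config
        if _cached_config is None:
            _cached_config = FakeConfig()
        return _cached_config

    class SampleUnitTest(unittest.TestCase):
        def setUp(self):
            super(SampleUnitTest, self).setUp()
            self.config = get_shared_config()

        def test_first(self):
            self.assertIsInstance(self.config, FakeConfig)

        def test_second(self):
            self.assertIsInstance(self.config, FakeConfig)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SampleUnitTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print(result.wasSuccessful(), FakeConfig.instances_created)

Both tests see the same resource, yet either can run alone and only one
instance is ever created, which is the behavior ``setUpClass`` was previously
used to get.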