Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api tests
- [T104] Scenario tests require a services decorator
- [T105] Unit tests cannot use setUpClass

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self-contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment


Exception Handling
------------------
According to ``The Zen of Python``, ``Errors should never pass silently.``
Tempest usually runs in a special environment (Jenkins gate jobs), so in
every error or failure situation we should provide as much error-related
information as possible, because we usually do not have the chance to
investigate the situation after the issue has happened.

In every test case, abnormal situations must be explained very verbosely,
both by the exception and in the log.

In most cases the very first issue is the most important piece of
information.

Try to avoid using ``try`` blocks in the test cases: both the ``except``
and ``finally`` blocks could replace the original exception when their
additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case at
all.

Try to avoid using any exception handling construct which can hide the
error's origin.

If you really need to use a ``try`` block, please ensure the original
exception is at least logged. When the exception is logged you usually
need to ``raise`` the same or a different exception anyway.

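A minimal sketch of that log-and-reraise pattern; the ``delete_server_loudly``
helper and the client call are hypothetical stand-ins for illustration, not
real Tempest APIs:

```python
import logging

LOG = logging.getLogger(__name__)


def delete_server_loudly(client, server_id):
    """Log the original failure, then re-raise it unchanged.

    The first exception is usually the most important information, so
    record it before propagating rather than replacing it.
    """
    try:
        client.delete_server(server_id)
    except Exception:
        # LOG.exception captures the full traceback of the original error.
        LOG.exception("Failed to delete server %s", server_id)
        raise
```
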
Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails part way through.

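A small self-contained sketch of the pattern; ``create_server`` and
``delete_server`` below are hypothetical stand-ins for real service client
calls:

```python
import unittest

created = []


class ServerTest(unittest.TestCase):
    # Hypothetical stand-ins for real service client calls.
    def create_server(self):
        created.append("server-1")
        return "server-1"

    def delete_server(self, server):
        created.remove(server)

    def test_reboot(self):
        server = self.create_server()
        # Register the cleanup immediately after creation: it runs even
        # if a later call or assertion raises, so no try/finally needed.
        self.addCleanup(self.delete_server, server)
        self.assertEqual("server-1", server)
```
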
Use the ``self.assert*`` methods provided by the unit test framework to
signal failures early.

Avoid using ``self.fail`` alone: its stack trace will signal the
``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertions.
``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know
anything about the values used in the expression; the ``msg`` argument
might be good enough for providing more information.

Most other assert methods can include more information by default.
For example, ``self.assertIn`` can include the whole set.

It is recommended to use testtools matchers for more tricky assertions.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers>`_

You can implement your own specific matcher as well.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#writing-your-own-matchers>`_

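The matcher protocol itself is small: a matcher implements ``match(actual)``,
returning ``None`` on success or a mismatch object with a ``describe()``
method on failure. A standalone sketch of a custom matcher following that
protocol (no testtools import; the class names are illustrative):

```python
class _EvenMismatch:
    """Describes why a value failed to match."""

    def __init__(self, actual):
        self.actual = actual

    def describe(self):
        return "%r is not an even number" % self.actual


class IsEven:
    """Custom matcher following the testtools match() protocol."""

    def __str__(self):
        return "IsEven()"

    def match(self, actual):
        # Return None when the value matches, a mismatch otherwise.
        if actual % 2 == 0:
            return None
        return _EvenMismatch(actual)
```
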
If the test case fails, you can see the related logs and the information
carried by the exception (exception class, backtrace and exception
info). These and the service logs are your only guide to finding the
root cause of a flaky issue.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend
on any other ``test_method`` or on ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credentials management, testresources and so on. These facilities MUST
be able to work even if just one ``test_method`` is selected for
execution.

Service Tagging
---------------
Service tagging is used to specify which services are exercised by a
particular test method. You specify the services with the
``tempest.test.services`` decorator. For example::

  @services('compute', 'image')

Valid service tag names are the same as the list of directories in
tempest.api that have tests.

For scenario tests, having a service tag is required. For the API tests,
service tags are only needed if the test method makes an API call
(either directly or indirectly through another service) that differs
from the parent directory name. For example, any test that makes an API
call to a service other than nova in tempest.api.compute would require a
service tag for those services; however, they do not need to be tagged
as compute.

Negative Tests
--------------
When adding negative tests to tempest there are two requirements. First,
the tests must be marked with a negative attribute. For example::

  @attr(type=negative)
  def test_resource_no_uuid(self):
      ...

The second requirement is that all negative tests must be added to a
negative test file. If such a file doesn't exist for the particular
resource being tested, a new test file should be added.

Test skips because of Known Bugs
--------------------------------

If a test is broken because of a bug, it is appropriate to skip the test
until that bug has been fixed. You should use the ``skip_because``
decorator so that Tempest's skip tracking tool can watch the bug status.

Example::

  @skip_because(bug="980688")
  def test_this_and_that(self):
      ...

Guidelines
----------
- Do not submit changesets with only testcases which are skipped, as
  they will not be merged.
- Consistently check the status code of responses in testcases. The
  earlier a problem is detected, the easier it is to debug, especially
  where complicated setup is required.

Parallel Test Execution
-----------------------
Tempest by default runs its tests in parallel; this creates the
possibility for interesting interactions between tests which can cause
unexpected failures. Tenant isolation provides protection from most of
the potential race conditions between tests outside the same class. But
there are still a few things to watch out for to try to avoid issues
when running your tests in parallel.

- Resources outside of a tenant scope still have the potential to
  conflict. This is a larger concern for the admin tests since most
  resources and actions that require admin privileges are outside of
  tenants.

- Races between methods in the same class are not a problem because
  parallelization in tempest is at the test class level, but if there is
  a json and xml version of the same test class there could still be a
  race between methods.

- The rand_name() function from tempest.common.utils.data_utils should
  be used anywhere a resource is created with a name. Static naming
  should be avoided to prevent resource conflicts.

- If the execution of a set of tests is required to be serialized then
  locking can be used to perform this. See AggregatesAdminTest in
  tempest.api.compute.admin for an example of using locking.

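The random-naming idea behind ``rand_name()`` can be sketched as follows
(a stand-in for illustration only; the real helper lives in
``tempest.common.utils.data_utils`` and may differ in detail):

```python
import random
import string


def rand_name(prefix='tempest'):
    # Stand-in for tempest.common.utils.data_utils.rand_name: append a
    # random suffix so two parallel tests creating a "server" resource
    # do not collide on the same static name.
    suffix = ''.join(random.choice(string.ascii_lowercase + string.digits)
                     for _ in range(8))
    return '%s-%s' % (prefix, suffix)
```
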
Stress Tests in Tempest
-----------------------
Any tempest test case can be flagged as a stress test. With this flag it
will be automatically discovered and used in the stress test runs. The
stress test framework itself is a facility to spawn and control worker
processes in order to find race conditions (see ``tempest/stress/`` for
more information). Please note that these stress tests can't be used for
benchmarking purposes since they don't measure any performance
characteristics.

Example::

  @stresstest(class_setup_per='process')
  def test_this_and_that(self):
      ...

This will flag the test ``test_this_and_that`` as a stress test. The
parameter ``class_setup_per`` controls when the setUpClass function
should be called.

Good candidates for stress tests are:

- Scenario tests
- API tests that have a wide focus

Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes
are made to the config variables in tempest then the sample config file
must be regenerated. This can be done by running the script
``tools/generate_sample.sh``.

Unit Tests
----------
Unit tests are a separate class of tests in tempest. They verify tempest
itself, and thus have a different set of guidelines around them:

1. They cannot require anything running externally. All you should need
   to run the unit tests is the git tree, python and the dependencies
   installed. This includes running services, a config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and
   testresources should be used for shared state between tests.