Tempest Coding Guide
====================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on

Tempest Specific Commandments
-----------------------------

- [T102] Cannot import OpenStack python clients in tempest/api tests
- [T104] Scenario tests require a services decorator
- [T105] Unit tests cannot use setUpClass
- [T106] vim configuration should not be kept in source files
- [N322] Method's default argument shouldn't be mutable

Test Data/Configuration
-----------------------
- Assume nothing about existing test data
- Tests should be self contained (provide their own data)
- Clean up test data at the completion of each test
- Use configuration files for values that will vary by environment


Exception Handling
------------------
According to ``The Zen of Python``, ``Errors should never pass silently.``
Tempest usually runs in a special environment (Jenkins gate jobs); in every
error or failure situation we should provide as much error-related
information as possible, because we usually do not have the chance to
investigate the situation after the issue has happened.

In every test case, abnormal situations must be explained very verbosely
by the exception and the log.

In most cases the very first issue is the most important piece of information.

Try to avoid using ``try`` blocks in test cases: both the ``except``
and ``finally`` blocks can replace the original exception
when the additional operations lead to another exception.

Just letting an exception propagate is not a bad idea in a test case
at all.

Try to avoid using any exception handling construct which can hide the
error's origin.

If you really need to use a ``try`` block, please ensure the original
exception is at least logged. When the exception is logged you usually need
to ``raise`` the same or a different exception anyway.

Use of ``self.addCleanup`` is often a good way to avoid having to catch
exceptions and still ensure resources are correctly cleaned up if the
test fails part way through.

Use the ``self.assert*`` methods provided by the unit test framework
to signal failures early.

Avoid using ``self.fail`` alone; its stack trace will signal
the ``self.fail`` line as the origin of the error.

Avoid constructing complex boolean expressions for assertion.
``self.assertTrue`` or ``self.assertFalse`` without a ``msg`` argument
will just tell you the single boolean value, and you will not know anything
about the values used in the formula; the ``msg`` argument might be good
enough for providing more information.

Most other assert methods can include more information by default.
For example ``self.assertIn`` can include the whole set.

It is recommended to use testtools matchers for the more tricky assertions.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers>`_

You can implement your own specific matcher as well.
`[doc] <http://testtools.readthedocs.org/en/latest/for-test-authors.html#writing-your-own-matchers>`_

If the test case fails you can see the related logs and the information
carried by the exception (exception class, backtrace and exception info).
These, together with the service logs, are your only guide to finding the
root cause of a flaky issue.

Test cases are independent
--------------------------
Every ``test_method`` must be callable individually and MUST NOT depend on
any other ``test_method`` or on ``test_method`` ordering.

Test cases MAY depend on commonly initialized resources/facilities, like
credentials management, testresources and so on. These facilities MUST be
able to work even if just one ``test_method`` is selected for execution.

Service Tagging
---------------
Service tagging is used to specify which services are exercised by a particular
test method. You specify the services with the ``tempest.test.services``
decorator. For example::

    @services('compute', 'image')

Valid service tag names are the same as the list of directories in
``tempest.api`` that have tests.

For scenario tests having a service tag is required. For the api tests, service
tags are only needed if the test method makes an api call (either directly or
indirectly through another service) that differs from the parent directory
name. For example, any test that makes an api call to a service other than nova
in ``tempest.api.compute`` would require a service tag for those services;
however, they do not need to be tagged as compute.

Negative Tests
--------------
Newly added negative tests should use the negative test framework. The first
step is to create an interface description in a json file under `etc/schemas`.
These descriptions consist of two important sections for the test
(one of which is mandatory):

 - A resource (part of the URL of the request): Resources needed for a test
   must be created in `setUpClass` and registered with `set_resource`, e.g.:
   `cls.set_resource("server", server['id'])`

 - A json schema: defines properties for a request.

After that a test class must be added to automatically generate test scenarios
out of the given interface description::

    load_tests = test.NegativeAutoTest.load_tests

    class SampleTestNegativeTestJSON(<your base class>, test.NegativeAutoTest):
        _interface = 'json'
        _service = 'compute'
        _schema_file = <your schema file>

Negative tests must be marked with a negative attribute::

    @test.attr(type=['negative', 'gate'])
    def test_get_console_output(self):
        self.execute(self._schema_file)

All negative tests should be added into a separate negative test file.
If such a file doesn't exist for the particular resource being tested, a new
test file should be added. Old XML based negative tests can be kept but should
be renamed to `_xml.py`.

Test skips because of Known Bugs
--------------------------------

If a test is broken because of a bug it is appropriate to skip the test until
the bug has been fixed. You should use the ``skip_because`` decorator so that
Tempest's skip tracking tool can watch the bug status.

Example::

    @skip_because(bug="980688")
    def test_this_and_that(self):
        ...

Guidelines
----------
- Do not submit changesets with only testcases which are skipped as
  they will not be merged.
- Consistently check the status code of responses in testcases. The
  earlier a problem is detected the easier it is to debug, especially
  where there is complicated setup required.
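
For example (hypothetical client and response objects, shown with plain
``unittest``), checking the status code right after each call localizes a
failure to the earliest broken step instead of a confusing secondary error:

```python
import unittest


class FakeResponse(object):
    """Stand-in for a client response carrying an HTTP status code."""

    def __init__(self, status):
        self.status = status


class StatusCheckExample(unittest.TestCase):
    def _create_server(self):
        # Hypothetical call; a real tempest client returns (resp, body).
        return FakeResponse(202), {'id': 'srv-1'}

    def test_create_server(self):
        resp, body = self._create_server()
        # Check the code immediately, before using the body in later steps.
        self.assertEqual(202, resp.status)
        self.assertIn('id', body)
```
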
Matthew Treinish96c28d12013-09-16 17:05:09 +0000164
DennyZhang900f02b2013-09-23 08:34:04 -0500165Parallel Test Execution
166-----------------------
Matthew Treinish96c28d12013-09-16 17:05:09 +0000167Tempest by default runs its tests in parallel this creates the possibility for
168interesting interactions between tests which can cause unexpected failures.
169Tenant isolation provides protection from most of the potential race conditions
170between tests outside the same class. But there are still a few of things to
171watch out for to try to avoid issues when running your tests in parallel.
172
173- Resources outside of a tenant scope still have the potential to conflict. This
174 is a larger concern for the admin tests since most resources and actions that
DennyZhang900f02b2013-09-23 08:34:04 -0500175 require admin privileges are outside of tenants.
Matthew Treinish96c28d12013-09-16 17:05:09 +0000176
- Races between methods in the same class are not a problem because
  parallelization in tempest is at the test class level, but if there are json
  and xml versions of the same test class there could still be a race between
  methods.

- The ``rand_name()`` function from ``tempest.common.utils.data_utils`` should
  be used anywhere a resource is created with a name. Static naming should be
  avoided to prevent resource conflicts.

- If the execution of a set of tests is required to be serialized then locking
  can be used to perform this. See ``AggregatesAdminTest`` in
  ``tempest.api.compute.admin`` for an example of using locking.
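
The naming guideline above can be sketched as follows (an approximation of
``rand_name``; the real helper's exact suffix format may differ):

```python
import uuid


def rand_name(name='test'):
    # Append a random suffix so concurrently running tests never race on
    # the same resource name (approximates data_utils.rand_name).
    return name + '-' + uuid.uuid4().hex[:8]


# Each call yields a distinct name such as 'server-<random hex>'.
server_name = rand_name('server')
```
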

Stress Tests in Tempest
-----------------------
Any tempest test case can be flagged as a stress test. With this flag it will
be automatically discovered and used in the stress test runs. The stress test
framework itself is a facility to spawn and control worker processes in order
to find race conditions (see ``tempest/stress/`` for more information). Please
note that these stress tests can't be used for benchmarking purposes since they
don't measure any performance characteristics.

Example::

    @stresstest(class_setup_per='process')
    def test_this_and_that(self):
        ...

This will flag the test ``test_this_and_that`` as a stress test. The parameter
``class_setup_per`` gives control over when the setUpClass function should be
called.

Good candidates for stress tests are:

- Scenario tests
- API tests that have a wide focus

Sample Configuration File
-------------------------
The sample config file is autogenerated using a script. If any changes are made
to the config variables in tempest then the sample config file must be
regenerated. This can be done by running the script ``tools/generate_sample.sh``.

Unit Tests
----------
Unit tests are a separate class of tests in tempest. They verify tempest
itself, and thus have a different set of guidelines around them:

1. They cannot require anything running externally. All you should need to
   run the unit tests is the git tree, python and the dependencies installed.
   This includes running services, a config file, etc.

2. The unit tests cannot use setUpClass; instead fixtures and testresources
   should be used for shared state between tests.
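
A minimal sketch of per-test setup replacing class-level setup (plain
``unittest`` is used here so the example is self-contained; tempest's unit
tests would use the ``fixtures`` library, and the config fixture below is a
hypothetical stand-in):

```python
import unittest


class FakeConfFixture(object):
    """Hypothetical stand-in for a fixtures.Fixture providing config state."""

    def setUp(self):
        self.conf = {'debug': True}

    def cleanUp(self):
        self.conf = None


class ExampleUnitTest(unittest.TestCase):
    def setUp(self):
        super(ExampleUnitTest, self).setUp()
        # Per-test state instead of setUpClass: each test gets a fresh
        # fixture and registers its own cleanup.
        self.useful = FakeConfFixture()
        self.useful.setUp()
        self.addCleanup(self.useful.cleanUp)

    def test_debug_enabled(self):
        self.assertTrue(self.useful.conf['debug'])
```
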