Only attempt to detach an in-use volume during cleanup
test_list_get_volume_attachments intermittently fails
during cleanup because it tries to detach an already
detached volume, which results in a 400 response.
Tempest, as the client, should check the volume status
before making the detach request. The only reason this
ever worked before Pike was an (incorrect) ordering of
operations in the compute service during detach, which
made the compute API return a 404 rather than a 400.
That changed in Pike with
I2581ff9f9c0e7cfc14a25acf45eb1860df69eacf, which deletes
the BDM in nova *after* marking the volume as 'available'
in Cinder. The os-volume_attachments API checks for the
existence of the BDM and, if it exists, attempts the
detach, which then fails with a 400 from Cinder. This is
what exposed the race on the Tempest side.
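Roughly, the failing flow looks like this (a condensed,
hypothetical sketch of the test, not the actual test code):

    def test_list_get_volume_attachments(self):
        server = self.create_test_server(wait_until='ACTIVE')
        volume = self.create_volume()
        # attach_volume() registers a cleanup that detaches the volume.
        self.attach_volume(server, volume)
        # ... list/show the attachment ...
        # The test then detaches the volume itself and waits for it to
        # become 'available'.
        self.servers_client.detach_volume(server['id'], volume['id'])
        waiters.wait_for_volume_resource_status(
            self.volumes_client, volume['id'], 'available')
        # When cleanups run, the old cleanup calls detach_volume again
        # on the already-available volume and gets the 400 back.
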
Change-Id: Id2d22cbb86d8d5fa7f71202b274260c1367e8a0f
Closes-Bug: #1722577
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index 5c4767c..5504e05 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -422,6 +422,23 @@
                                                 volume['id'], 'available')
         return volume
 
+    def _detach_volume(self, server, volume):
+        """Helper method to detach a volume.
+
+        Ignores 404 responses if the volume or server do not exist, or the
+        volume is already detached from the server.
+        """
+        try:
+            volume = self.volumes_client.show_volume(volume['id'])['volume']
+            # Check the status. You can only detach an in-use volume, otherwise
+            # the compute API will return a 400 response.
+            if volume['status'] == 'in-use':
+                self.servers_client.detach_volume(server['id'], volume['id'])
+        except exceptions.NotFound:
+            # Ignore 404s on detach in case the server is deleted or the volume
+            # is already detached.
+            pass
+
     def attach_volume(self, server, volume, device=None, check_reserved=False):
         """Attaches volume to server and waits for 'in-use' volume status.
@@ -449,9 +466,7 @@
                         self.volumes_client, volume['id'], 'available')
         # Ignore 404s on detach in case the server is deleted or the volume
         # is already detached.
-        self.addCleanup(test_utils.call_and_ignore_notfound_exc,
-                        self.servers_client.detach_volume,
-                        server['id'], volume['id'])
+        self.addCleanup(self._detach_volume, server, volume)
         statuses = ['in-use']
         if check_reserved:
             statuses.append('reserved')