Fail a test on pre-provisioned Cinder volume deletion error #95003
Conversation
"cinder show <id>" output will help us to debug what's wrong with the Cinder volume.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jsafrane. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
cc @kubernetes/cloud-provider-openstack-maintainers for review
/test help
@jsafrane: The specified target(s) for
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I don't see any e2e openstack job in the list.
framework.Logf("Waiting up to %v for removal of cinder volume %s / %s", timeout, id, name) | ||
for start := time.Now(); time.Since(start) < timeout; time.Sleep(5 * time.Second) { | ||
output, err = exec.Command("cinder", "delete", name).CombinedOutput() | ||
output, err = exec.Command("cinder", "delete", id).CombinedOutput() |
+1 to specifying the volume id instead of the volume name, because the id is a UUID and is safe to specify. "cinder delete" accepts an id, as documented at https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-delete
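As a small illustrative fragment (not the PR's exact code), the comment below spells out why the id is the safer argument; the variables id, output, and err are assumed from the surrounding function:

// The id is a UUID and uniquely identifies the volume, whereas the
// human-readable name may collide with volumes created by other CI jobs
// running in the same OpenStack project.
output, err = exec.Command("cinder", "delete", id).CombinedOutput()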
} else {
	framework.Logf("Volume %s / %s:\n%s", id, name, string(showOutput))
}
framework.Failf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, string(output[:]))
Do we need to output this message? I guess new line 1210 can output almost the same message just before this one.
/cc @oomichi
Failf records this error as the reason the test failed. See a random test failure here: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/94881/pull-kubernetes-e2e-gce-csi-serial/1307100981535707136
If you click on a failed test, it shows nice output:
test/e2e/storage/testsuites/volume_expand.go:238
Sep 19 00:13:41.897: While creating pods for resizing
Unexpected error:
<*errors.errorString | 0xc002ba1130>: {
s: "pod \"pod-7ee298ca-bc8a-4b19-b0fd-0ae251e22924\" is not Running: timed out waiting for the condition",
}
pod "pod-7ee298ca-bc8a-4b19-b0fd-0ae251e22924" is not Running: timed out waiting for the condition
occurred
test/e2e/storage/testsuites/volume_expand.go:255
And with Failf here, we get stdout/stderr in that UI.
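As a rough contrast, a short sketch of the two calls (assuming the variables id, name, err, and output from the surrounding code; this is not the PR's exact wording):

// Logf only writes to the build log; the test keeps running and can still
// be reported as passed even though the volume leaked.
framework.Logf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, string(output))

// Failf aborts the current test and records this formatted message, including
// the captured cinder stdout/stderr, as the failure reason shown in the UI.
framework.Failf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, string(output))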
Ah, I see the point.
Thanks for your explanation.
/lgtm
// Timed out, try to get "cinder show <volume>" output for easier debugging
showOutput, showErr := exec.Command("cinder", "show", id).CombinedOutput()
if showErr != nil {
	framework.Logf("Failed to show volume %s / %s: %v\n%s", id, name, showErr, string(showOutput))
@jsafrane I doubt there would be any data in showOutput in case of an error?
What I noticed in our (OpenShift) env is that cinder delete failed with "Invalid volume: Volume status must be available or error or [...]", and cinder show showed that the volume was in some bad state.
Sorry, re-reading your comment again: if cinder show returns an error, it probably won't show the volume status. Still, showing something like "Invalid filters all_tenants,name are found in query options. (HTTP 400)" is more useful than a plain exit status 1.
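A tiny standalone sketch of that behavior, assuming only the Go standard library and a made-up volume name: CombinedOutput returns whatever the command wrote to stdout and stderr even when it exits non-zero, so the client's own error text is available next to the bare exit status.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "no-such-volume" is a hypothetical argument; the point is that even when
	// err is non-nil (typically just "exit status 1"), out can still contain
	// the cinder client's own error message, e.g. an HTTP 400 response.
	out, err := exec.Command("cinder", "show", "no-such-volume").CombinedOutput()
	if err != nil {
		fmt.Printf("cinder show failed: %v\noutput:\n%s", err, out)
	}
}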
Looks good to me. Thanks
@jsafrane: The following test failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
Review the full test history for this PR. Silence the bot with an
/kind cleanup
What this PR does / why we need it:
We see a number of leaked Cinder volumes in a downstream test suite. While we're debugging this, we noticed a few issues:
- The test does not fail when it cannot delete a pre-provisioned volume with "cinder delete <xyz>".
- The volume should be deleted by its id rather than by name in "cinder delete <xyz>", since multiple CI jobs can run in a single OpenStack project and accidentally choose the same volume name.
In addition, I added "cinder show <xyz>" output when something fails, to be able to diagnose volume health from Cinder's point of view.
This PR does not fix the leak; that one is still under investigation.
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
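For reference, a hedged end-to-end sketch that assembles the diff fragments quoted above into one flow (hypothetical function name and import list; the real change lives in the e2e Cinder test helper):

import (
	"os/exec"
	"time"

	"k8s.io/kubernetes/test/e2e/framework"
)

// cleanupCinderVolume deletes a pre-provisioned volume by its UUID, retries
// until the timeout, dumps "cinder show <id>" for easier debugging when the
// deletion keeps failing, and fails the test instead of merely logging the leak.
func cleanupCinderVolume(id, name string, timeout time.Duration) {
	var output []byte
	var err error
	framework.Logf("Waiting up to %v for removal of cinder volume %s / %s", timeout, id, name)
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(5 * time.Second) {
		if output, err = exec.Command("cinder", "delete", id).CombinedOutput(); err == nil {
			return // volume deleted
		}
		framework.Logf("Failed to delete volume %s / %s: %v, retrying", id, name, err)
	}
	// Timed out; capture the volume state from Cinder's point of view.
	showOutput, showErr := exec.Command("cinder", "show", id).CombinedOutput()
	if showErr != nil {
		framework.Logf("Failed to show volume %s / %s: %v\n%s", id, name, showErr, string(showOutput))
	} else {
		framework.Logf("Volume %s / %s:\n%s", id, name, string(showOutput))
	}
	// Failf records this message as the failure reason, so the cinder output
	// appears directly in the test result.
	framework.Failf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, string(output))
}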