
Conversation

jsafrane
Member

/kind cleanup
What this PR does / why we need it:

We see a number of leaked Cinder volumes in a downstream test suite. While debugging this, we noticed a few issues:

  1. The tests should log the error output of each failed `cinder delete <xyz>`.
  2. The tests should use the volume ID instead of the volume name in `cinder delete <xyz>`, since multiple CI jobs can run in a single OpenStack project and accidentally choose the same volume name.
  3. The tests should be marked as failed when pre-provisioned volume deletion fails.

In addition, I added `cinder show <xyz>` output when something fails, so that volume health can be diagnosed from Cinder's point of view.

This PR does not fix the leak; that one is still under investigation. A minimal sketch of the resulting cleanup flow is below.
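For orientation, here is a minimal standalone sketch of the cleanup flow described above (plain `log`/`os/exec` in place of the e2e framework helpers; the `id`, `name`, and `timeout` inputs are assumed to come from the volume-provisioning step, and the function name is hypothetical):

```go
package cindercleanup

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// deleteCinderVolume sketches the fixed flow: delete by ID (not name), log the
// CLI output of every failed attempt, and report a hard failure instead of a
// silent leak when the volume never goes away.
func deleteCinderVolume(id, name string, timeout time.Duration) error {
	var output []byte
	var err error
	log.Printf("Waiting up to %v for removal of cinder volume %s / %s", timeout, id, name)
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(5 * time.Second) {
		// Delete by UUID: names are not unique across CI jobs sharing one project.
		output, err = exec.Command("cinder", "delete", id).CombinedOutput()
		if err == nil {
			return nil
		}
		log.Printf("Failed to delete volume %s / %s: %v\n%s", id, name, err, output)
	}
	// Timed out: capture "cinder show" output to diagnose the volume state.
	showOutput, showErr := exec.Command("cinder", "show", id).CombinedOutput()
	if showErr != nil {
		log.Printf("Failed to show volume %s / %s: %v\n%s", id, name, showErr, showOutput)
	} else {
		log.Printf("Volume %s / %s:\n%s", id, name, showOutput)
	}
	// In the e2e suite this is framework.Failf, which marks the test as failed.
	return fmt.Errorf("failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, output)
}
```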

Does this PR introduce a user-facing change?:

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 23, 2020
@k8s-ci-robot k8s-ci-robot added area/test sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 23, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jsafrane

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 23, 2020
@jsafrane
Member Author

cc @kubernetes/cloud-provider-openstack-maintainers for review

@jsafrane
Member Author

/test help

@k8s-ci-robot
Contributor

@jsafrane: The specified target(s) for /test were not found.
The following commands are available to trigger jobs:

  • /test pull-kubernetes-bazel-build
  • /test pull-kubernetes-bazel-test
  • /test pull-kubernetes-conformance-image-test
  • /test pull-kubernetes-conformance-kind-ipv6-parallel
  • /test pull-kubernetes-dependencies
  • /test pull-kubernetes-dependencies-canary
  • /test pull-kubernetes-e2e-aws-eks-1-13-correctness
  • /test pull-kubernetes-files-remake
  • /test pull-kubernetes-e2e-gce
  • /test pull-kubernetes-e2e-gce-no-stage
  • /test pull-kubernetes-e2e-gce-kubetest2
  • /test pull-kubernetes-e2e-gce-canary
  • /test pull-kubernetes-e2e-gce-ubuntu
  • /test pull-kubernetes-e2e-gce-ubuntu-containerd
  • /test pull-kubernetes-e2e-gce-ubuntu-containerd-canary
  • /test pull-kubernetes-e2e-gce-rbe
  • /test pull-kubernetes-e2e-gce-alpha-features
  • /test pull-kubernetes-e2e-gce-device-plugin-gpu
  • /test pull-kubernetes-integration
  • /test pull-kubernetes-cross
  • /test pull-kubernetes-e2e-kind
  • /test pull-kubernetes-e2e-kind-canary
  • /test pull-kubernetes-e2e-kind-ipv6
  • /test pull-kubernetes-e2e-kind-ipv6-canary
  • /test pull-kubernetes-conformance-kind-ga-only
  • /test pull-kubernetes-conformance-kind-ga-only-parallel
  • /test pull-kubernetes-e2e-kops-aws
  • /test pull-kubernetes-bazel-build-canary
  • /test pull-kubernetes-bazel-test-canary
  • /test pull-kubernetes-bazel-test-integration-canary
  • /test pull-kubernetes-local-e2e
  • /test pull-publishing-bot-validate
  • /test pull-kubernetes-e2e-gce-network-proxy-http-connect
  • /test pull-kubernetes-e2e-gce-network-proxy-grpc
  • /test pull-kubernetes-e2e-gci-gce-autoscaling
  • /test pull-kubernetes-e2e-aks-engine-azure
  • /test pull-kubernetes-e2e-azure-disk
  • /test pull-kubernetes-e2e-azure-disk-vmss
  • /test pull-kubernetes-e2e-azure-file
  • /test pull-kubernetes-e2e-kind-dual-canary
  • /test pull-kubernetes-e2e-gci-gce-ipvs
  • /test pull-kubernetes-node-e2e
  • /test pull-kubernetes-e2e-containerd-gce
  • /test pull-kubernetes-node-e2e-containerd
  • /test pull-kubernetes-node-e2e-alpha
  • /test pull-kubernetes-node-kubelet-serial-cpu-manager
  • /test pull-kubernetes-node-kubelet-serial-topology-manager
  • /test pull-kubernetes-node-kubelet-serial-hugepages
  • /test pull-kubernetes-node-crio-cgrpv2-e2e
  • /test pull-kubernetes-node-crio-e2e
  • /test pull-kubernetes-e2e-gce-100-performance
  • /test pull-kubernetes-e2e-gce-big-performance
  • /test pull-kubernetes-e2e-gce-correctness
  • /test pull-kubernetes-e2e-gce-large-performance
  • /test pull-kubernetes-kubemark-e2e-gce-big
  • /test pull-kubernetes-kubemark-e2e-gce-scale
  • /test pull-kubernetes-e2e-gce-storage-slow
  • /test pull-kubernetes-e2e-gce-storage-snapshot
  • /test pull-kubernetes-e2e-gce-storage-slow-rbe
  • /test pull-kubernetes-e2e-gce-csi-serial
  • /test pull-kubernetes-e2e-gce-iscsi
  • /test pull-kubernetes-e2e-gce-iscsi-serial
  • /test pull-kubernetes-e2e-gce-storage-disruptive
  • /test pull-kubernetes-e2e-aks-engine-azure-windows
  • /test pull-kubernetes-e2e-azure-disk-windows
  • /test pull-kubernetes-e2e-azure-file-windows
  • /test pull-kubernetes-typecheck
  • /test pull-kubernetes-verify
  • /test pull-kubernetes-e2e-windows-gce

Use /test all to run the following jobs:

  • pull-kubernetes-bazel-build
  • pull-kubernetes-bazel-test
  • pull-kubernetes-conformance-kind-ipv6-parallel
  • pull-kubernetes-dependencies
  • pull-kubernetes-e2e-gce-ubuntu-containerd
  • pull-kubernetes-integration
  • pull-kubernetes-e2e-kind
  • pull-kubernetes-e2e-kind-ipv6
  • pull-kubernetes-conformance-kind-ga-only-parallel
  • pull-kubernetes-node-e2e
  • pull-kubernetes-e2e-gce-100-performance
  • pull-kubernetes-e2e-gce-storage-slow
  • pull-kubernetes-e2e-gce-storage-snapshot
  • pull-kubernetes-e2e-gce-storage-slow-rbe
  • pull-kubernetes-e2e-gce-csi-serial
  • pull-kubernetes-typecheck
  • pull-kubernetes-verify

In response to this:

/test help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jsafrane
Member Author

I don't see any e2e openstack job in the list.

framework.Logf("Waiting up to %v for removal of cinder volume %s / %s", timeout, id, name)
for start := time.Now(); time.Since(start) < timeout; time.Sleep(5 * time.Second) {
output, err = exec.Command("cinder", "delete", name).CombinedOutput()
output, err = exec.Command("cinder", "delete", id).CombinedOutput()
Member

+1 to specifying the volume id instead of the volume name, because the id is a UUID and is safe to specify.
cinder delete accepts an id, see https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-delete

 	} else {
 		framework.Logf("Volume %s / %s:\n%s", id, name, string(showOutput))
 	}
 	framework.Failf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, string(output[:]))
Member

Do we need to output this message?
I guess the new line 1210 can output almost the same message right before this.

/cc @oomichi

Member Author

Failf records this error as the reason the test failed. See a random test failure here: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/94881/pull-kubernetes-e2e-gce-csi-serial/1307100981535707136

If you click on a failed test, it shows nice output:

test/e2e/storage/testsuites/volume_expand.go:238
Sep 19 00:13:41.897: While creating pods for resizing
Unexpected error:
    <*errors.errorString | 0xc002ba1130>: {
        s: "pod \"pod-7ee298ca-bc8a-4b19-b0fd-0ae251e22924\" is not Running: timed out waiting for the condition",
    }
    pod "pod-7ee298ca-bc8a-4b19-b0fd-0ae251e22924" is not Running: timed out waiting for the condition
occurred
test/e2e/storage/testsuites/volume_expand.go:255

And with Failf here, we get stdout/stderr in that UI.
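In short, the difference between the two calls (a paraphrase of the framework's behavior, not code from this PR):

```go
// Diagnostic only: written to the build log, the test keeps running.
framework.Logf("Failed to show volume %s / %s: %v\n%s", id, name, showErr, showOutput)

// Recorded as the failure reason shown in the test-result UI;
// it also aborts the current test.
framework.Failf("Failed to delete pre-provisioned volume %s / %s: %v\n%s", id, name, err, output)
```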

Member

Ah, I see the point.
Thanks for your explanation.

/lgtm

 	// Timed out, try to get "cinder show <volume>" output for easier debugging
 	showOutput, showErr := exec.Command("cinder", "show", id).CombinedOutput()
 	if showErr != nil {
 		framework.Logf("Failed to show volume %s / %s: %v\n%s", id, name, showErr, string(showOutput))
Contributor

@jsafrane I doubt there would be any data in showOutput in case of an error?

Member Author

What I noticed in our (OpenShift) env is that cinder delete failed with

Invalid volume: Volume status must be available or error or [...]

And cinder show showed that the volume was in some bad state.

Member Author

Sorry, re-reading your comment: if cinder show returns an error, it probably won't show the volume status. Still, output like `Invalid filters all_tenants,name are found in query options. (HTTP 400)` is more useful than a plain exit status 1.
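For what it's worth, exec's CombinedOutput does return whatever the command wrote to stdout and stderr even when the command exits non-zero, so showOutput can still carry Cinder's error message alongside showErr. A tiny standalone demonstration (not PR code; the shell command stands in for a failing cinder show):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The command prints an error message to stderr and exits non-zero,
	// like a failing "cinder show"; CombinedOutput captures it anyway.
	out, err := exec.Command("sh", "-c", "echo 'HTTP 400: invalid filter' >&2; exit 1").CombinedOutput()
	fmt.Printf("err=%v\noutput=%q\n", err, out)
	// Prints: err=exit status 1
	//         output="HTTP 400: invalid filter\n"
}
```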

@ramineni
Contributor

looks good to me. Thanks

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 25, 2020
@k8s-ci-robot
Contributor

k8s-ci-robot commented Sep 25, 2020

@jsafrane: The following test failed, say /retest to rerun all failed tests:

Test name                                 Commit   Details  Rerun command
pull-kubernetes-e2e-gce-storage-snapshot  76254d8  link     /test pull-kubernetes-e2e-gce-storage-snapshot

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@k8s-ci-robot k8s-ci-robot merged commit d666956 into kubernetes:master Sep 25, 2020