
Conversation

mfojtik
Contributor

@mfojtik mfojtik commented Oct 13, 2020

What type of PR is this?

/kind cleanup
/kind deprecation

What this PR does / why we need it:

This removes the `--insecure-port` option and the already-deprecated insecure serving from kube-apiserver, kube-controller-manager and kube-scheduler.
This means health checks are possible only via the secure port.
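
For illustration, a minimal sketch (not code from this PR; the address and port are the kube-apiserver defaults, and the default `system:public-info-viewer` RBAC binding is assumed to expose `/healthz` to unauthenticated callers) of what a health check looks like once only secure serving remains:

```go
// Probe /healthz over HTTPS, since the insecure HTTP port is gone.
// InsecureSkipVerify is for illustration only; a real probe should load
// the cluster CA into tls.Config.RootCAs instead.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:6443/healthz") // default secure port
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // expect "200 OK"
}
```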

Does this PR introduce a user-facing change?:
YES

xref: #91506

ACTION REQUIRED: Deprecated `insecure-port` flag has been removed from kube-apiserver, kube-controller-manager and kube-scheduler binaries. Insecure serving is no longer possible and all API servers have to use secure serving (HTTPS).

/cc @deads2k @sttts

@k8s-ci-robot k8s-ci-robot requested review from deads2k and sttts October 13, 2020 11:37
@k8s-ci-robot k8s-ci-robot added release-note, do-not-merge/work-in-progress, kind/cleanup, size/XL, kind/deprecation, cncf-cla: yes, needs-sig, needs-triage, needs-priority, area/apiserver, area/kubeadm, area/provider/gcp, area/test, sig/api-machinery, sig/cloud-provider, sig/cluster-lifecycle, sig/scheduling, sig/testing and removed needs-sig labels Oct 13, 2020
@mfojtik mfojtik force-pushed the deprecate-insecure branch 2 times, most recently from f284883 to 977a71d on October 13, 2020 12:08
@k8s-ci-robot k8s-ci-robot added size/XXL and removed size/XL labels Oct 13, 2020
@mfojtik mfojtik force-pushed the deprecate-insecure branch 2 times, most recently from aa6ac78 to 3f82c2b on October 13, 2020 12:19
@sttts sttts added the priority/important-soon label Oct 13, 2020
@k8s-ci-robot k8s-ci-robot removed the needs-priority label Oct 13, 2020
@sttts sttts added this to the v1.20 milestone Oct 13, 2020
@k8s-ci-robot
Contributor

@mfojtik: The following tests failed, say /retest to rerun all failed tests:

| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-integration | 7e2d75f | link | /test pull-kubernetes-integration |
| pull-kubernetes-node-e2e | 7e2d75f | link | /test pull-kubernetes-node-e2e |
| pull-kubernetes-e2e-kind | 7e2d75f | link | /test pull-kubernetes-e2e-kind |
| pull-kubernetes-e2e-kind-ipv6 | 7e2d75f | link | /test pull-kubernetes-e2e-kind-ipv6 |
| pull-kubernetes-conformance-kind-ipv6-parallel | 7e2d75f | link | /test pull-kubernetes-conformance-kind-ipv6-parallel |
| pull-kubernetes-conformance-kind-ga-only-parallel | 7e2d75f | link | /test pull-kubernetes-conformance-kind-ga-only-parallel |
| pull-kubernetes-e2e-gce-ubuntu-containerd | 7e2d75f | link | /test pull-kubernetes-e2e-gce-ubuntu-containerd |
| pull-kubernetes-verify | 7e2d75f | link | /test pull-kubernetes-verify |
| pull-kubernetes-bazel-test | 7e2d75f | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-e2e-gce-100-performance | 7e2d75f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-local-e2e | 7e2d75f | link | /test pull-kubernetes-local-e2e |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fedebongio
Contributor

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted and removed needs-triage labels Oct 15, 2020
@neolit123
Member

This needs to merge first:
#94723

@k8s-ci-robot
Contributor

@mfojtik: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase label Oct 24, 2020
@MonzElmasry

Hello 👋! Bug triage here. This PR has not been updated for a long time, so I'd like to check on its status. Code freeze starts 12.11.2020 (about two weeks from now), and while there is still plenty of time, we want to ensure that each PR has a chance to be merged on time.

As the PR is tagged for 1.20, is it still planned for this release?

@neolit123
Member

neolit123 commented Oct 26, 2020 via email

@cheftako
Member

I think we should consider leaving the insecure-port flag registered, marked deprecated (so it doesn't appear in help), inert (not wired to anything) and only accepting a value of '0' for several releases.

Having the insecure-port wired to zero seems more vague than just removing it. Removing it sends a clear message that it's gone and you cannot rely on it anymore. If you have something like health checks or component status calls being sent to the insecure port, they will stop working. Setting the value to 0 implies the same thing, but I think it's better to be clear in this instance.

@liggitt
Member

liggitt commented Oct 27, 2020

Having the insecure-port wired to zero seems more vague than just removing it. Removing it sends a clear message that it's gone and you cannot rely on it anymore. If you have something like health checks or component status calls being sent to the insecure port, they will stop working. Setting the value to 0 implies the same thing, but I think it's better to be clear in this instance.

The rationale was stated in #95522 (review), and focused on preventing accidents when deployers stop setting the flag to 0 too early:

Well-behaved deployers are passing --insecure-port=0 everywhere right now in order to disable insecure serving. Removing the flag completely will force them to instantly update configs/manifests/deployment processes to omit that flag. If those updates accidentally omit the flag from a manifest for a previous version, that will re-enable insecure serving on that version.

I would much rather have an "only accept 0" flag and leave it silently registered for a few releases than risk an accident.
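
A minimal sketch of that shape (illustrative only, not the actual component code; the flag-set name and messages are assumptions), using spf13/pflag as the Kubernetes components do — keep `--insecure-port` registered but hidden via deprecation, never wire its value to anything, and reject anything but 0:

```go
// Register --insecure-port as deprecated, inert, and only accepting 0.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("component", pflag.ExitOnError)
	insecurePort := fs.Int("insecure-port", 0,
		"This flag has no effect and will be removed in a future release.")
	// MarkDeprecated hides the flag from --help and prints a warning when it is used.
	_ = fs.MarkDeprecated("insecure-port",
		"This flag has no effect and will be removed in a future release.")
	_ = fs.Parse(os.Args[1:])

	// Inert: the value is never wired to a listener; anything but 0 fails fast.
	if *insecurePort != 0 {
		fmt.Fprintln(os.Stderr, "error: --insecure-port can only be set to 0; insecure serving has been removed")
		os.Exit(1)
	}
}
```

With that shape, deployers can keep passing `--insecure-port=0` across version skew with no behavior change, while any non-zero value is rejected immediately.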

@ingvagabund
Contributor

@cheftako @liggitt @neolit123 @mfojtik are there any other bits/blockers to discuss before this can be merged? @mfojtik is busy, so I was politely asked to take over. We have one week left before the code freeze.

@ingvagabund
Contributor

#96216 is doing the same for kube-controller-manager. It might be better to do this per component rather than in one huge PR, so we don't have to rebase as often and can divide the work between individuals.

@ahg-g @alculquicondor for kube-scheduler. Are you aware of this effort?

@liggitt
Member

liggitt commented Nov 5, 2020

The insecure API server port was already removed in #95856, which was the priority for 1.20.

I tend to agree that going component by component and neutering the insecure port is likely to be simpler.

@alculquicondor
Member

+1 on a PR per component

@ingvagabund
Contributor

kube-scheduler bits: #96345

@liggitt
Member

liggitt commented Nov 17, 2020

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.20 milestone Nov 17, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Feb 15, 2021
@liggitt liggitt assigned sttts and unassigned liggitt Feb 16, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Mar 18, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
