chore: WIP - native sidecar fixes and test run #14566
Draft: alpeb wants to merge 1 commit into main from alpeb/native-sidecar-fixups
Conversation
alpeb force-pushed from 8924ad4 to 705b1fb
* Where we iterate over containers, also iterate over init containers
* Set `proxy.nativeSidecar: true` so that all tests run in that mode without further changes
* Update golden files accordingly
* Fix `curl.rs` in the policy tests so it doesn't block waiting for the proxy container to terminate
* Fix integration tests dealing with injection

Note that k8s started supporting native sidecars without additional feature flags in v1.28. For this reason the following tests aren't expected to pass:

* test-policy with k8s v1.23
* test-multicluster with k8s v1.23
* CNI integration test with k8s v1.27
alpeb added a commit that referenced this pull request on Nov 28, 2025:
(Extracted from #14566) This improves the injector logic by accounting for ports exposed in native sidecar containers:

* when consuming `config.linkerd.io/opaque-ports` annotations pointing to (named or integer) ports in init containers
* when populating the proxy's `LINKERD2_PROXY_INBOUND_PORTS` env var

The `webhook_tests.go` suite has been expanded to test injection into a pod with a memcached native sidecar container (never mind the contrivedness of the example).
alpeb added a commit that referenced this pull request on Dec 4, 2025:
…ck --proxy` (Extracted from #14566) The "opaque ports are properly annotated" check had a bug: it only validated regular containers, missing ports in init containers (native sidecars). When a service had the opaque-ports annotation but the corresponding pod did not, the check would incorrectly pass if the port was defined in an init container instead of a regular container, so the mismatch went undetected. This commit extends the check to validate ports in both regular containers and init containers, ensuring consistent opaque-ports annotations across pods and services regardless of where the port is defined.
alpeb added a commit that referenced this pull request on Dec 4, 2025, with the same commit message as the entry above.
alpeb added a commit that referenced this pull request on Dec 4, 2025:
(Extracted from #14566) The logic behind the `linkerd authz` command wasn't accounting for ports in init containers, so authorization policies pointing to those ports were not reported by the command.

Say, for example, you had a strict auth policy for the `linkerd-admin` port, allowing access only from prometheus. For emojivoto's web workload you could set that up like this:

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: Server
metadata:
  annotations:
  name: admin
  namespace: emojivoto
spec:
  accessPolicy: deny
  podSelector:
    matchLabels:
      app: web-svc
  port: linkerd-admin
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: emojivoto
  name: prometheus
spec:
  identities:
    - "prometheus.linkerd-viz.serviceaccount.identity.linkerd.cluster.local"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: emojivoto
  name: web-http-sa
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: admin
  requiredAuthenticationRefs:
    - name: prometheus
      kind: MeshTLSAuthentication
      group: policy.linkerd.io
```

Invoking `linkerd authz` would return nothing, but after this change we can see the auth:

```
$ linkerd authz -n emojivoto deploy/web
ROUTE  SERVER  AUTHORIZATION_POLICY  SERVER_AUTHORIZATION
*      admin   web-http-sa
```
alpeb added a commit that referenced this pull request on Dec 5, 2025: …hz` (#14780), with the same commit message as the Dec 4 entry above.
alpeb added a commit that referenced this pull request on Dec 5, 2025: …ck --proxy` (#14779), with the same commit message as the Dec 4 entry above.
alpeb added a commit that referenced this pull request on Dec 5, 2025, with the same commit message as the Nov 28 entry above.
alpeb added a commit that referenced this pull request on Dec 5, 2025:
Host ports in native sidecars were getting ignored in `getProfile` responses. A new test case was added that demonstrates an example that wouldn't pass before this fix. (Extracted from #14566)
alpeb added a commit that referenced this pull request on Dec 8, 2025, with the same commit message as the Dec 5 entry above.
alpeb added a commit that referenced this pull request on Dec 9, 2025:
…etProfile (Extracted from #14566) When hitting pods directly at their IPs, ports in native sidecars that were marked as opaque via the `config.linkerd.io/opaque-ports` annotation weren't actually treated as opaque. More concretely, the issue lay in the getProfile API, which was skipping init containers when iterating over containers in this particular case. The endpoint profile translator tests were expanded to cover opaque ports in both init and regular containers.
alpeb added a commit that referenced this pull request on Dec 16, 2025: …etProfile (#14791), with the same commit message as the Dec 9 entry above.
alpeb added a commit that referenced this pull request on Dec 17, 2025:
(Extracted from #14566) The `Get` API wasn't surfacing ports marked as opaque in Servers when:

1. the port belonged to a native sidecar (inside an init container), and
2. the port was referred to by name.

Also changed a test fixture to use a port name instead of a number in order to exercise this issue.
GTRekter pushed commits to GTRekter/linkerd2 that referenced this pull request on Dec 18, 2025, cherry-picking the commits above (linkerd#14767, linkerd#14779, linkerd#14780, linkerd#14786, linkerd#14791; each Signed-off-by: Ivan Porta <[email protected]>).
alpeb added a commit that referenced this pull request on Dec 18, 2025, with the same commit message as the Dec 17 entry above.
Code Fixes
Running services in init containers is rare, and the following changes address those rare cases. Most importantly, though, when Linkerd runs in native sidecar mode it exposes its metrics on a port owned by an init container, which may be affected by these issues:
* `endpoints_watcher.go`: Addressed in fix(destination): properly discover native sidecar ports #14814
  * In the destination controller, listeners weren't getting updated on changes in a Server resource with a port in an init container.
  * Opaque protocols from Servers with ports on init containers weren't being applied to those ports.
* `k8s.go`: Addressed in fix(destination): properly discover hostports in native sidecars #14786
  * Init containers with host ports were not being considered in the HostIPIndex informer index, and thus couldn't be discovered.
* `workload_watcher.go`: Addressed in fix(destination): properly deal with native sidecar port opacity in getProfile #14791
  * The `opaque-ports` annotation wasn't considering ports in init containers.
* Proxy injection (`inject.go`): Addressed in #14767
  * Opaque ports annotation overrides weren't having an effect on ports in init containers.
  * `LINKERD2_PROXY_INBOUND_PORTS` wasn't considering ports exposed by init containers.
* Healthchecks (`healthcheck.go`): Addressed in #14779
  * The check "opaque ports are properly annotated" wasn't considering opaque ports in init containers.
* CLI (`policy.go`): Addressed in #14780
  * The `linkerd authz` command wasn't considering policies applied to ports in init containers.

Tests changes
* Set `proxy.nativeSidecar: true` for all tests to run in that mode without further changes
* Fix `curl.rs` in the policy tests so it doesn't block waiting for the proxy container to terminate

Note that k8s started supporting native sidecars without additional feature flags in v1.28. For this reason the following tests aren't expected to pass:

* test-policy with k8s v1.23
* test-multicluster with k8s v1.23
* CNI integration test with k8s v1.27