Description
What happened (please include outputs or screenshots):
We are running our tool Gefyra to fetch environment variables from a running pod/container via the usual `stream` over `connect_get_namespaced_pod_exec`. In one of our environments, I am receiving an HTTP 400 for the WebSocket upgrade call, while the equivalent `kubectl exec ...` executes properly.
From somewhere in the stack trace: `websocket._exceptions.WebSocketBadStatusException: Handshake status 400 Bad Request - ...`
So I looked for the differences between `kubectl` (v1.31.1) and the Python package (31.0.0) and was able to trace the problem back to the `sec-websocket-protocol` header, which defaults to `v4.channel.k8s.io`:

python/kubernetes/base/stream/ws_client.py, lines 468 to 472 in 230925f
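Paraphrasing those lines (not a verbatim copy of `ws_client.py`): the handshake uses a caller-supplied `sec-websocket-protocol` header if one is present and only falls back to `v4.channel.k8s.io` otherwise:

```python
# Paraphrase of the header selection in create_websocket()
# (kubernetes/base/stream/ws_client.py); not a verbatim copy.
DEFAULT_PROTOCOL = "v4.channel.k8s.io"

def pick_ws_protocol(headers):
    """Return the sec-websocket-protocol value the handshake would send."""
    if headers and "sec-websocket-protocol" in headers:
        return headers["sec-websocket-protocol"]
    return DEFAULT_PROTOCOL

print(pick_ws_protocol(None))
print(pick_ws_protocol({"sec-websocket-protocol": "v5.channel.k8s.io"}))
```

So the hardcoded `v4.channel.k8s.io` is only the fallback; the open question for me is how to get a different value into that `headers` dict from user code.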
The value of `sec-websocket-version` is set to `13` by both `kubectl` and `kubernetes-python`.
Unfortunately, I couldn't find many sources on this, except https://kubernetes.io/blog/2024/08/20/websockets-transition/.
After patching `kubernetes/base/stream/ws_client.py` in my site-packages to use `v5.channel.k8s.io`, it started to work again.
Now I am looking for a way to pass this header from our code to get rid of the monkeypatch, but without constructing the `WSClient` myself. Is this some special setting in our environment (i.e. something that prevents `v4.channel.k8s.io` from working), or does this apply to all K8s clusters starting from 1.30.x?
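For reference, the direction I am currently exploring (a sketch only, not verified beyond the monkeypatch; it assumes `ApiClient` default headers are forwarded into the WebSocket handshake, and pod/namespace names are placeholders):

```python
# Sketch of a possible monkeypatch-free workaround; assumes ApiClient
# default headers reach the websocket handshake in ws_client.py.
def fetch_env_v5(pod="my-pod", namespace="default"):
    # Local imports so the sketch is readable without a cluster at hand.
    from kubernetes import client, config
    from kubernetes.stream import stream

    config.load_kube_config()
    api_client = client.ApiClient()
    # Ask for the v5 subprotocol instead of the hardcoded v4 fallback.
    api_client.set_default_header("sec-websocket-protocol", "v5.channel.k8s.io")
    api = client.CoreV1Api(api_client=api_client)
    return stream(
        api.connect_get_namespaced_pod_exec,
        pod,
        namespace,
        command=["env"],
        stderr=True, stdin=False, stdout=True, tty=False,
    )
```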
What you expected to happen:
The `stream` over `connect_get_namespaced_pod_exec` to work with `sec-websocket-protocol` set to `v5.channel.k8s.io`.
How to reproduce it (as minimally and precisely as possible):
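A minimal sketch of the failing call against the affected (v1.30.x) cluster; pod and namespace names are placeholders for any running pod:

```python
# Reproduction sketch; "my-pod"/"default" are placeholders for an
# existing pod on the affected cluster.
POD_NAME = "my-pod"
NAMESPACE = "default"
COMMAND = ["env"]  # fetch environment variables, as Gefyra does

def reproduce():
    # Local imports so the sketch is readable without kubernetes installed.
    from kubernetes import client, config
    from kubernetes.stream import stream

    config.load_kube_config()
    api = client.CoreV1Api()
    # On the affected cluster this raises:
    # websocket._exceptions.WebSocketBadStatusException:
    #   Handshake status 400 Bad Request
    return stream(
        api.connect_get_namespaced_pod_exec,
        POD_NAME,
        NAMESPACE,
        command=COMMAND,
        stderr=True, stdin=False, stdout=True, tty=False,
    )
```

Meanwhile, `kubectl exec <pod> -- env` against the same cluster succeeds.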
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`):
  Client Version: v1.31.1
  Kustomize Version: v5.4.2
  Server Version: v1.30.1
- OS (e.g., MacOS 10.13.6): Linux/Container
- Python version (`python --version`): Python 3.9.20
- Python client version (`pip list | grep kubernetes`): kubernetes 31.0.0
Thank you for your good work.