Closed
Description
I'm using version 8.0.0a1 and doing
api_client = config.new_client_from_config(kube_config_yaml_file)
v1_core = client.CoreV1Api(api_client)
where kube_config_yaml_file points to the following kubeconfig:
apiVersion: v1
clusters:
- cluster:
    server: https://<ajsdhajkshdka>.eks.amazonaws.com
    certificate-authority-data: <asdjasjdhas>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: test_user
  name: test_name
current-context: test_name
kind: Config
preferences: {}
users:
- name: test_user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "cluster-test"
      - "-r"
      - "arn:aws:iam::<87787898789>:role/cluster-test-k8s-access-role"
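As I understand it, the exec provider assembles the authenticator invocation from the user.exec stanza above, command plus args, and runs it as a subprocess. A simplified sketch of that assembly (this mirrors my kubeconfig but is not the client's actual code):

```python
# Simplified sketch: how an exec credential plugin invocation is assembled
# from the kubeconfig's user.exec stanza (not the actual exec_provider code).
exec_stanza = {
    "command": "aws-iam-authenticator",
    "args": [
        "token",
        "-i", "cluster-test",
        "-r", "arn:aws:iam::<87787898789>:role/cluster-test-k8s-access-role",
    ],
}

# The subprocess argv is just the command followed by its args.
argv = [exec_stanza["command"]] + exec_stanza["args"]
```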
This passes almost every time, but every now and then I get an error from the Python client:
ERROR:root:exec: process returned 1. could not get token: AccessDenied: Access denied
status code: 403, request id: 296d0777-de24-12b8-b352-c942b2ac475e
which seems to be raised here in the exec_provider in python-base.
The main difference I can think of is that I'm passing the -r
flag with an access role to the authenticator command, which I don't see a test for in the exec_provider. Even with the flag the command passes sometimes, but fails at other times.
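Since the failure looks transient, I've been papering over it with a simple retry around the call; a minimal sketch, where retry and flaky_token are hypothetical names and not part of the client API (demonstrated here with a stand-in function that fails once, then succeeds):

```python
import time

def retry(fn, attempts=3, delay=1.0, exc=(Exception,)):
    """Call fn(), retrying up to `attempts` times on the given exceptions."""
    for i in range(attempts):
        try:
            return fn()
        except exc:
            if i == attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(delay)

# Stand-in for the intermittently failing token fetch: fails on the
# first call, succeeds on the second (hypothetical, for illustration).
calls = {"n": 0}

def flaky_token():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("AccessDenied: Access denied")
    return "token-ok"

result = retry(flaky_token, attempts=3, delay=0)
```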
This issue only occurs with the Python client, not when invoking kubectl
via subprocess calls.
I'm using EKS with aws-iam-authenticator.