Closed
Labels: kind/bug, needs-triage
Description
What happened?
kubectl fails to authenticate with an AWS EKS cluster with the error "the server has asked for the client to provide credentials", even though:
- AWS SSO authentication is valid and working
- Manual token generation via aws eks get-token succeeds
- Using the generated token directly with curl successfully accesses the API
- The exec credential plugin configuration appears correct in kubeconfig
What did you expect to happen?
kubectl should successfully authenticate using the exec credential plugin to call aws eks get-token and access the EKS cluster.
How can we reproduce it (as minimally and precisely as possible)?
- Configure AWS SSO and authenticate:
  aws sso login --profile staging
  aws --profile staging sts get-caller-identity   # works correctly
- Update the kubeconfig for the EKS cluster:
  aws eks update-kubeconfig --region us-east-2 --name <cluster-name> --profile staging
- Verify the kubeconfig has the correct exec configuration:
  users:
  - name: arn:aws:eks:us-east-2:XXXXXXXXXXXX:cluster/<cluster-name>
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
        - --region
        - us-east-2
        - eks
        - get-token
        - --cluster-name
        - <cluster-name>
        - --output
        - json
        command: aws
        env:
        - name: AWS_PROFILE
          value: staging
- Test kubectl:
  kubectl get namespaces
  # Error: You must be logged in to the server (the server has asked for the client to provide credentials)
- Verify token generation works manually:
  aws --profile staging --region us-east-2 eks get-token --cluster-name <cluster-name> --output json
  # Successfully returns a valid token (expected ExecCredential shape shown below)
- Verify the token works with curl:
  TOKEN=$(aws --profile staging --region us-east-2 eks get-token --cluster-name <cluster-name> --output json | jq -r ".status.token")
  curl -k -H "Authorization: Bearer $TOKEN" https://<eks-endpoint>/api/v1/namespaces
  # Successfully returns the namespace list
Anything else we need to know?
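For reference, kubectl treats whatever the plugin prints to stdout as an ExecCredential document; a successful aws eks get-token run emits JSON shaped like the following (token and timestamp are placeholders):

  {
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
      "expirationTimestamp": "2025-01-01T00:00:00Z",
      "token": "k8s-aws-v1.<base64url-encoded-presigned-url>"
    }
  }

kubectl requires the apiVersion in this document to match the apiVersion declared in the kubeconfig exec stanza, so a mismatch there is worth ruling out, although that normally produces an explicit error rather than a silent failure.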
Workaround: I created a wrapper script that fetches the token manually and passes it to kubectl:
  #!/bin/bash
  TOKEN=$(aws --profile staging --region us-east-2 eks get-token --cluster-name <cluster-name> --output json | jq -r ".status.token")
  kubectl --token="$TOKEN" "$@"
This workaround confirms that:
- The AWS credentials are valid
- The token generation works
- kubectl can authenticate when provided the token directly
- The issue is specifically with kubectl's exec credential plugin execution
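To see exactly what kubectl hands the plugin, a minimal diagnostic shim can stand in for aws in the kubeconfig's exec stanza (the script path and log file below are illustrative; keep the original args and point command: at the shim):

  #!/bin/bash
  # aws-debug-shim.sh (illustrative): set the kubeconfig exec "command"
  # to this script's path so it runs in place of "aws".
  {
    echo "=== $(date) ==="
    echo "argv: $*"
    env | sort            # the environment kubectl passes to the plugin
  } >> /tmp/aws-debug-shim.log
  exec aws "$@"           # then delegate to the real AWS CLI

Comparing the logged environment against an interactive shell's env output should show whether AWS_PROFILE, or anything else the SSO flow depends on, is missing when kubectl invokes the plugin.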
Additional observations:
- No error output from the exec plugin when running kubectl with -v=9
- The exec plugin appears to run but doesn't provide credentials
- Issue persists across kubectl restarts and kubeconfig regeneration
- AWS CLI version: aws-cli/2.28.21 Python/3.13.7 Darwin/25.1.0 source/arm64
- SSO session is valid throughout testing
Environment
# Kubernetes client and server versions
Client Version: v1.34.2
Kustomize Version: v5.7.1
Server Version: v1.34.1-eks-3cfe0ce
# Operating system
macOS (Darwin 25.1.0)
# AWS CLI version
aws-cli/2.28.21 Python/3.13.7 Darwin/25.1.0 source/arm64
# Authentication method
AWS SSO with assumed role: AWSReservedSSO_RestrictedDeveloper_<hash>
# EKS cluster configuration
- Private endpoint only (no public access)
- Accessed via SSM port forwarding through a bastion host (illustrative command below)
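For context, this kind of access is typically established with something like the following (instance ID, endpoint host, and ports are placeholders; the exact tunnel setup used here may differ):

  aws ssm start-session \
    --target <bastion-instance-id> \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["<eks-endpoint-host>"],"portNumber":["443"],"localPortNumber":["8443"]}'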
Related Issues
- Unable to connect to the server: getting credentials: exec: exit status 255 #747 - similar exec credential failures, but with exit status 255
- Authentication with AWS broke with version 1.24 #1210 - AWS authentication broke in v1.24 (resolved)
Possible Root Causes
- kubectl's exec plugin may not be properly inheriting or passing environment variables such as AWS_PROFILE (a quick check is sketched after this list)
- The exec plugin execution context might differ from shell execution
- Potential issue with how kubectl handles the exec credential response format
- SSO token refresh might not be triggered properly within the exec plugin context
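One way to test the first two hypotheses, assuming the difference really is the execution environment, is to run the exact plugin invocation under a stripped environment (env -i clears everything except what is passed explicitly; HOME is kept so the AWS CLI can find ~/.aws and the SSO cache):

  env -i HOME="$HOME" PATH="$PATH" AWS_PROFILE=staging \
    aws --region us-east-2 eks get-token --cluster-name <cluster-name> --output json

If this fails while the normal shell invocation succeeds, an environment variable the SSO flow needs, such as AWS_CONFIG_FILE when a non-default config location is in use, is a likely culprit.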
Impact
This issue affects users who:
- Use AWS SSO for authentication
- Access private EKS clusters via port forwarding
- Rely on exec credential plugins for token generation
The issue forces users to implement workarounds that bypass kubectl's intended authentication flow, reducing security and complicating cluster access.