kubectl fails to authenticate with AWS EKS despite valid credentials and working token generation #1799

@xchen1189

Description

What happened?

kubectl fails to authenticate with an AWS EKS cluster, reporting "the server has asked for the client to provide credentials", even though:

  1. AWS SSO authentication is valid and working
  2. Manual token generation via aws eks get-token succeeds
  3. Using the generated token directly with curl successfully accesses the API
  4. The exec credential plugin configuration appears correct in kubeconfig

What did you expect to happen?

kubectl should successfully authenticate using the exec credential plugin to call aws eks get-token and access the EKS cluster.

How can we reproduce it (as minimally and precisely as possible)?

  1. Configure AWS SSO and authenticate:
aws sso login --profile staging
aws --profile staging sts get-caller-identity  # Works correctly
  2. Update kubeconfig for the EKS cluster:
aws eks update-kubeconfig --region us-east-2 --name <cluster-name> --profile staging
  3. Verify kubeconfig has the correct exec configuration:
users:
- name: arn:aws:eks:us-east-2:XXXXXXXXXXXX:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --output
      - json
      command: aws
      env:
      - name: AWS_PROFILE
        value: staging
  4. Test kubectl:
kubectl get namespaces
# Error: You must be logged in to the server (the server has asked for the client to provide credentials)
  5. Verify token generation works manually:
aws --profile staging --region us-east-2 eks get-token --cluster-name <cluster-name> --output json
# Successfully returns a valid token
  6. Verify the token works with curl:
TOKEN=$(aws --profile staging --region us-east-2 eks get-token --cluster-name <cluster-name> --output json | jq -r ".status.token")
curl -k -H "Authorization: Bearer $TOKEN" https://<eks-endpoint>/api/v1/namespaces
# Successfully returns namespace list
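
To narrow this down, one option is to point the kubeconfig's exec command at a small logging wrapper instead of aws, capturing exactly what kubectl invokes and what the plugin returns. A minimal sketch (the script path and log location are arbitrary; KUBERNETES_EXEC_INFO is only populated by kubectl when provideClusterInfo is enabled):

#!/bin/bash
# Hypothetical debugging wrapper: set the exec "command" in kubeconfig to this
# script's path, keeping the existing args; it logs the invocation and then
# runs the real plugin.
set -o pipefail
LOG=/tmp/eks-exec-debug.log
{
  echo "=== $(date) args: $*"
  echo "AWS_PROFILE=${AWS_PROFILE:-<unset>}"
  echo "HOME=${HOME:-<unset>}"
  echo "KUBERNETES_EXEC_INFO=${KUBERNETES_EXEC_INFO:-<unset>}"
} >> "$LOG"
# Pass the ExecCredential JSON through to kubectl on stdout while appending a
# copy to the log; pipefail preserves a non-zero exit status from aws.
aws "$@" | tee -a "$LOG"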

Anything else we need to know?

Workaround: I created a wrapper script that fetches the token manually and passes it to kubectl:

#!/bin/bash
# Fetch a fresh token outside kubectl and pass it in with --token, bypassing
# the exec credential plugin entirely.
TOKEN=$(aws --profile staging --region us-east-2 eks get-token --cluster-name <cluster-name> --output json | jq -r ".status.token")
kubectl --token="$TOKEN" "$@"
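
The wrapper is used in place of kubectl for each command, e.g. ./kubectl-token.sh get namespaces (the script name here is illustrative).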

This workaround confirms that:

  • The AWS credentials are valid
  • The token generation works
  • kubectl can authenticate when provided the token directly
  • The issue is specifically with kubectl's exec credential plugin execution (a kubeconfig variant that avoids the env block is sketched below)
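
One way to test the environment-inheritance hypothesis (the first possible root cause below) is to drop the env block entirely and pass the profile as a plain argument; --profile is a standard AWS CLI global option, so this variant does not depend on kubectl forwarding AWS_PROFILE. A sketch with the same values as above:

users:
- name: arn:aws:eks:us-east-2:XXXXXXXXXXXX:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --profile
      - staging
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --output
      - json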

Additional observations:

  • No error output from the exec plugin when running kubectl with -v=9
  • The exec plugin appears to run but doesn't provide credentials
  • Issue persists across kubectl restarts and kubeconfig regeneration
  • AWS CLI version: aws-cli/2.28.21 Python/3.13.7 Darwin/25.1.0 source/arm64
  • SSO session is valid throughout testing
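
The exec plugin inherits kubectl's process environment, which can differ from an interactive shell (for example, when kubectl is launched by an IDE or another tool). A rough way to approximate a reduced environment and check whether token generation still succeeds:

# If this fails where the plain command succeeds, the plugin likely depends on
# an environment variable that is missing in kubectl's execution context.
env -i HOME="$HOME" PATH="$PATH" \
  aws --profile staging --region us-east-2 eks get-token \
  --cluster-name <cluster-name> --output json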

Environment

# Kubernetes client and server versions
Client Version: v1.34.2
Kustomize Version: v5.7.1
Server Version: v1.34.1-eks-3cfe0ce

# Operating system
macOS (Darwin 25.1.0)

# AWS CLI version
aws-cli/2.28.21 Python/3.13.7 Darwin/25.1.0 source/arm64

# Authentication method
AWS SSO with assumed role: AWSReservedSSO_RestrictedDeveloper_<hash>

# EKS cluster configuration
- Private endpoint only (no public access)
- Accessed via SSM port forwarding through bastion host
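
For reference, the access path described above looks roughly like the following; the instance ID, endpoint host, and local port are placeholders, and the kubeconfig server address must point at the forwarded local port for this topology:

# Sketch of the SSM port forward used to reach the private EKS endpoint
# through the bastion host.
aws ssm start-session --profile staging --region us-east-2 \
  --target <bastion-instance-id> \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["<eks-endpoint-host>"],"portNumber":["443"],"localPortNumber":["4443"]}'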

Possible Root Causes

  1. kubectl's exec plugin may not be properly inheriting or passing environment variables (AWS_PROFILE)
  2. The exec plugin execution context might differ from shell execution
  3. Potential issue with how kubectl handles the exec credential response format (a quick check is sketched after this list)
  4. SSO token refresh might not be triggered properly within the exec plugin context
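
For the response-format hypothesis, the envelope that aws eks get-token emits can be inspected directly: the kind must be ExecCredential and the apiVersion must match the exec stanza's apiVersion (client.authentication.k8s.io/v1beta1 here), otherwise kubectl rejects the credentials. A quick check:

# Show just the fields kubectl validates, plus the token's expiry.
aws --profile staging --region us-east-2 eks get-token \
  --cluster-name <cluster-name> --output json \
  | jq '{kind, apiVersion, expiration: .status.expirationTimestamp}'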

Impact

This issue affects users who:

  • Use AWS SSO for authentication
  • Access private EKS clusters via port forwarding
  • Rely on exec credential plugins for token generation

The issue forces users to implement workarounds that bypass kubectl's intended authentication flow, reducing security and complicating cluster access.
