You are trying to use Kubernetes in AWS. You have installed kubectl and you run this command:
kubectl get svc
You receive this: 'error: the server doesn't have a resource type "svc"'
You try again with a more verbose output. You run this command:
kubectl get svc -v=8
In the output you see messages like this: "Response Status: 401 Unauthorized in 73 milliseconds"
What should you do to use kubectl with AWS?
Possible Solution #1
Go to the .kube directory (e.g.,
cd ~/.kube) and modify the config file. Change the stanza "name: kubernetes" to "name: NameOfYourCluster" (where "NameOfYourCluster" is the name of your cluster).
Alternatively you could run a command like this:
kubectl config set-cluster NameOfYourCluster # where "NameOfYourCluster" is the name of your cluster
If you want more detail, try this posting.
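For reference, the stanzas involved in the change above sit in ~/.kube/config and look roughly like the following sketch (the cluster name "my-eks-cluster" and the server endpoint are placeholder values, not taken from your environment):

```yaml
# ~/.kube/config (fragment) -- placeholder names and endpoint
clusters:
- cluster:
    server: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com
    certificate-authority-data: <base64-encoded-CA-data>
  name: my-eks-cluster          # was "kubernetes"; must match the reference below
contexts:
- context:
    cluster: my-eks-cluster     # must match the cluster name above
    user: aws
  name: aws
current-context: aws
```

The key point is that the name under "clusters:" and the cluster referenced under "contexts:" must agree; if one still says "kubernetes" while the other was renamed, kubectl cannot resolve the cluster entry.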
Possible Solution #2
1. Log into the AWS console: https://console.aws.amazon.com/eks/home?
2. Click on the Kubernetes cluster you want to obtain more information about (the one your kubectl commands on the back-end should apply to).
3. Find the "Role ARN".
4. Go to the Linux server that you run your "kubectl" commands from. Go to the .kube directory. Find the config-NameOfCluster file (where "NameOfCluster" is the name of the cluster you want to run kubectl commands against).
5. In this file make sure you have an "args" section with stanzas such as these (where "arn:aws:iam::123456789:role/nameofcluster" is the value of the 'Role ARN' obtained in step #3):
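A sketch of that section, following the format AWS documents for aws-iam-authenticator-based kubeconfig files (the cluster name, ARN, and apiVersion shown are placeholders/assumptions; match them to your own file):

```yaml
# config-NameOfCluster (fragment) -- placeholder cluster name and role ARN
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "NameOfCluster"
        - "-r"
        - "arn:aws:iam::123456789:role/nameofcluster"
```

The "-r" flag and the role ARN line are the two stanzas in question; without them, the authenticator requests a token as your default identity rather than assuming the cluster's role.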
The two stanzas above are commented out in this thorough explanation of the kubeconfig file: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Possible Solution #3
Install and configure the AWS CLI again; if you need assistance with this, see this posting. Then create a .kube directory and recreate a kubeconfig file inside it by following the directions here:
Possible Solution #4
Was your Kubernetes cluster created by one user while your AWS CLI has been configured with a different user? One user can have more than one AWS access key. The problem you are having is consistent with the cluster having been created by a different user than the one trying to interact with it. If you can create an access key for the user that created the cluster, then reconfigure the AWS CLI to use that key, the problem may subside. Alternatively, if you can use the user that you are using to run the "kubectl get svc" command to create a new cluster, that may eliminate the problem too.
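A related remedy AWS documents for this situation is to grant the second IAM user access to the existing cluster by editing the aws-auth ConfigMap (kubectl edit -n kube-system configmap/aws-auth), run from a session authenticated as the user that created the cluster. A sketch, where the account ID, user name, and group are placeholder values:

```yaml
# kube-system/aws-auth ConfigMap (fragment) -- placeholder ARN and user name
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::123456789:user/second-user
      username: second-user
      groups:
        - system:masters
```

With a mapping like this in place, "second-user" can run kubectl against the cluster without recreating access keys or the cluster itself.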