How Do You Troubleshoot “Warning FailedScheduling … default-scheduler no nodes available to schedule pods”?

Problem scenario
You are running EKS in AWS. You get this message: "Warning FailedScheduling … default-scheduler no nodes available to schedule pods". How do you troubleshoot it?

Verify your nodes are healthy with this command: kubectl get nodes
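If the cluster has nodes but they are not Ready, the scheduler still has nowhere to place pods. A minimal sketch of filtering the output of kubectl get nodes for unhealthy nodes is below; the sample output is illustrative (a real run would pipe the live command into awk instead):

```shell
# Sample output in the shape that `kubectl get nodes --no-headers` produces
# (illustrative data; in practice run the live command instead of using $nodes)
nodes='ip-10-0-1-1   NotReady   <none>   5m   v1.29.0
ip-10-0-1-2   Ready      <none>   5m   v1.29.0'

# Print only nodes whose STATUS column is not exactly "Ready"
# Live equivalent: kubectl get nodes --no-headers | awk '$2 != "Ready"'
printf '%s\n' "$nodes" | awk '$2 != "Ready"'
```

If this prints nothing and `kubectl get nodes` shows no nodes at all, the cluster has zero registered nodes and you likely need to create a node group, as described next.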

If you are using EKS, you may need to create nodes. Here is a command to do that. Replace "foo" with the name of your EKS cluster, "bar" with the name of the node group you want to create, "subnet-123456" with your subnet ID, and "arn:aws:iam::123456789:role/nameofrole" with the ARN of the IAM role that the worker nodes will use:

aws eks create-nodegroup --cluster-name foo --nodegroup-name "bar" --subnets subnet-123456 --node-role arn:aws:iam::123456789:role/nameofrole

# If you want to see what subnets you can choose from, run this command:
# aws eks describe-cluster --name foo --query 'cluster.resourcesVpcConfig.subnetIds'
# If you can remember a pattern in the name of the relevant role (e.g., "foobar"), try a command like this to find the exact IAM role:
# aws iam list-roles | grep -i foobar

The command kubectl describe pods may give you some clues. You may want to see this posting if the worker nodes have an issue with subnet.env. Another related posting is How Do You Get Kubernetes Nodes to Be Ready?.
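The Events section at the bottom of the kubectl describe pods output is usually where the scheduler explains itself. A small sketch of reading that message, using a sample event line matching the warning from this scenario (the pod name and the live commands are assumptions):

```shell
# Live command to show just the Events section of a pending pod:
#   kubectl describe pod my-pending-pod | sed -n '/^Events:/,$p'
# Sample FailedScheduling event text for illustration:
events='Warning  FailedScheduling  default-scheduler  no nodes available to schedule pods'

# "no nodes available" means the cluster has zero schedulable nodes;
# other FailedScheduling messages usually point at taints, selectors, or resources
case "$events" in
  *"no nodes available"*) echo "cluster has no schedulable nodes" ;;
  *) echo "check taints, resource requests, and node selectors" ;;
esac
```

In this scenario the "no nodes available" branch applies, which is why the fix above focuses on creating (or repairing) worker nodes rather than on pod-level settings.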
