How Do You Troubleshoot “timed out waiting for the condition” after Running “kubeadm init”?

Problem scenario
You run "sudo kubeadm init", and you get this message:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp connect: connection refused.

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

What should you do?

If this is a production cluster or one holding sensitive data, we would be concerned, and we have no definitive solution for you. However, if there is no real SLA and you are doing development work (e.g., a test or proof-of-concept), you can often just continue. Ignore the above error, as it will likely not affect or block you. Nodes can report "Ready" despite the above error; run "kubectl get nodes" to check. You may just need to install flannel with commands like these and continue:

kubectl apply -f

kubectl -n kube-system apply -f
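The commands above were published without their "-f" arguments. As a sketch, assuming the flannel manifest URL published in the flannel-io GitHub repository (an assumption on our part; verify it against the current flannel documentation before running), the full sequence might look like this:

```shell
# Standard post-"kubeadm init" step from the kubeadm documentation:
# copy the admin kubeconfig so kubectl can reach the cluster.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Check whether the nodes are Ready despite the earlier error.
kubectl get nodes

# Apply the flannel CNI manifest. This URL is an assumption based on
# flannel's public documentation; confirm it before running.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```

Note that recent versions of this manifest create their resources in a dedicated kube-flannel namespace, while older versions used kube-system, which may explain the second command in the original post.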

You may need to log into the worker nodes and create /run/flannel/subnet.env (create the path if necessary). Here is the suggested content of this subnet.env file:


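The suggested file content was omitted from the original post. A commonly seen subnet.env for flannel's default 10.244.0.0/16 pod network looks like the following; the exact values depend on the --pod-network-cidr you passed to "kubeadm init" and on the individual node, so treat these lines as an example only:

```
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```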
We were surprised that the above error could be ignored without affecting Kubernetes' functionality. However, for anything important, you should investigate the error rather than ignore it.
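If you do need to investigate, one common cause of this particular failure is a cgroup driver mismatch between the container runtime and the kubelet. A minimal check, assuming Docker is the container runtime (adapt the first command if you use containerd or CRI-O), might be:

```shell
# Show which cgroup driver Docker is using ("cgroupfs" vs. "systemd").
docker info 2>/dev/null | grep -i "cgroup driver"

# Inspect the kubelet's recent log entries for the actual failure reason,
# as the kubeadm output itself suggests.
sudo journalctl -xeu kubelet | tail -n 50
```

If the drivers differ, the kubeadm documentation recommends configuring both the runtime and the kubelet to use systemd; on kubeadm-provisioned nodes the kubelet's driver is set via the cgroupDriver field in /var/lib/kubelet/config.yaml.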
