#### `ebtables` or `ethtool` not found during installation

If you see the following warnings while running `kubeadm init`:

```
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
```

then you may be missing `ebtables` and `ethtool` on your Linux machine. You can install them with the following commands:

```bash
# For Ubuntu/Debian users, try
apt install ebtables ethtool

# For CentOS/Fedora users, try
yum install ebtables ethtool
```
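One quick way to confirm the warnings will go away is to check that both binaries are now discoverable on the system path:

```bash
# Prints the resolved path of each tool; no output for a tool means
# it is still missing from $PATH.
command -v ebtables ethtool
```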
#### Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state

Right after `kubeadm init` there should not be any Pods in these states; if there are, please open an issue in the kubeadm repo. `kube-dns` should be in the `Pending` state until you have deployed the network solution.

However, if you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state after deploying the network solution and nothing happens to `kube-dns`, it's very likely that the Pod Network solution that you installed is somehow broken. You might have to grant it more RBAC privileges or use a newer version. Please file an issue in the Pod Network provider's issue tracker and get the issue triaged there.
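To see which state each Pod is in, list the Pods in the `kube-system` namespace, where kubeadm places the control-plane components and `kube-dns`:

```bash
# -o wide also shows which node each Pod was scheduled onto.
kubectl get pods -n kube-system -o wide
```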
#### `kube-dns` is stuck in the `Pending` state

This is expected and part of the design. kubeadm is network provider-agnostic, so the admin should install the Pod Network solution of choice. You have to install a Pod Network before `kube-dns` can be fully deployed. Hence the `Pending` state before the network is set up.
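After applying a network add-on, you can watch `kube-dns` move out of `Pending`. This sketch assumes the `k8s-app=kube-dns` label that kubeadm's `kube-dns` Deployment uses:

```bash
# The Pods should transition from Pending to Running shortly after
# a Pod Network add-on is installed.
kubectl get pods -n kube-system -l k8s-app=kube-dns --watch
```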
#### `HostPort` services do not work

The `HostPort` and `HostIP` functionality is available depending on your Pod Network provider. Please contact the author of the Pod Network solution to find out whether `HostPort` and `HostIP` functionality are available.
For a list of CNI providers with verified `HostPort` support, and for more information, read the CNI portmap documentation.
If your network provider does not support the portmap CNI plugin, you may need to use the NodePort feature of services or use `HostNetwork=true`.
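As a sketch of the NodePort workaround, a hypothetical Deployment named `my-app` serving on port 80 could be exposed on every node like so:

```bash
# Create a NodePort Service for the (hypothetical) my-app Deployment;
# Kubernetes allocates a port from the cluster's NodePort range.
kubectl expose deployment my-app --port=80 --type=NodePort

# Show the allocated node port.
kubectl get service my-app
```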
Many network add-ons do not yet enable hairpin mode, which allows Pods to access themselves via their Service IP if they don't know about their podIP. This is an issue related to CNI. Please contact the network add-on providers to find out whether they support hairpin mode.
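A rough way to test for this, assuming a Pod `my-pod` that backs a Service `my-svc` on port 80 (both names hypothetical) and a container image that ships `wget`:

```bash
# From inside the Pod, call the Service that routes back to the same
# Pod; a timeout here is consistent with hairpin mode being disabled.
kubectl exec my-pod -- wget -qO- -T 5 http://my-svc:80
```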
If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one). By default, it doesn't do this, and the kubelet ends up using the first non-loopback network interface, which is usually NATed. Workaround: modify `/etc/hosts`; take a look at this Vagrantfile (ubuntu-vagrantfile) for how this can be achieved.
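To check what the kubelet will pick up inside the VM:

```bash
# hostname -i should print a routable address on the private/host-only
# interface, not the NATed 10.0.2.x address VirtualBox uses by default.
hostname -i
ip -4 addr show
```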
The following error indicates a possible certificate mismatch:

```
# kubectl get po
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```

Verify that the `$HOME/.kube/config` file contains a valid certificate, and regenerate a certificate if necessary.
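One way to inspect the client certificate embedded in the kubeconfig, assuming the `client-certificate-data` field is present inline and base64-encoded (as kubeadm writes it):

```bash
# Decode the embedded client certificate and print its issuer and
# validity window; the issuer should be the cluster CA ("kubernetes").
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -issuer -dates
```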
Another workaround is to overwrite the default `kubeconfig` for the "admin" user:

```bash
mv $HOME/.kube $HOME/.kube.bak
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
If you are using CentOS and encounter difficulty while setting up the master node, verify that your Docker cgroup driver matches the kubelet config:

```bash
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
If the Docker cgroup driver and the kubelet config don't match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is `--cgroup-driver`. If it's already set, you can update it like so:

```bash
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

Otherwise, you will need to open the systemd file and add the flag to an existing environment line.
Then restart kubelet:

```bash
systemctl daemon-reload
systemctl restart kubelet
```
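A quick way to confirm that the kubelet came back up cleanly:

```bash
# Check the service state and the most recent startup log lines.
systemctl status kubelet --no-pager
journalctl -u kubelet --since "2 minutes ago"
```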
The `kubectl describe pod` or `kubectl logs` commands can help you diagnose errors. For example:

```bash
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}
```