Kubernetes container runtime network not ready [closed]

Asked by fdx2calv, 5 months ago, in Kubernetes

**Closed.** This question is not about programming or software development. It is not currently accepting answers.

This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you think it is on-topic for another Stack Exchange site, you can leave a comment explaining where it can be answered.
Closed 9 days ago.
I installed a Kubernetes cluster consisting of three nodes. The control-plane node looks fine, but when I joined the other two nodes, both of them show status NotReady.
On the control-plane node:

[root@alva-anstawx01 /]# kubectl get nodes     
NAME                             STATUS     ROLES                  AGE     VERSION
alva-anstawx01.<domain name>   Ready      control-plane,master   7d20h   v1.21.1
alva-anstawx02.<domain name>   NotReady   <none>                 22h     v1.21.1
alva-anstawx03.<domain name>   NotReady   <none>                 22h     v1.21.1

The pods look fine and are all running:

NAME                                                     READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-c8p97                                 1/1     Running   0          7d20h
coredns-558bd4d5db-hjb8g                                 1/1     Running   0          7d20h
etcd-alva-anstawx01.alv.autoliv.int                      1/1     Running   2          7d20h
kube-apiserver-alva-anstawx01.alv.autoliv.int            1/1     Running   2          7d20h
kube-controller-manager-alva-anstawx01.alv.autoliv.int   1/1     Running   2          7d20h
kube-proxy-b8ft2                                         1/1     Running   0          7d20h
kube-proxy-frr7c                                         1/1     Running   0          23h
kube-proxy-ztxbf                                         1/1     Running   0          23h
kube-scheduler-alva-anstawx01.alv.autoliv.int            1/1     Running   2          7d20h


Looking further into the logs, it seems something is missing, so the CNI plugin does not start on those nodes; I'm not sure how to proceed:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 09 Jun 2021 12:24:34 +0200   Tue, 08 Jun 2021 14:00:45 +0200   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
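
(For reference: the conditions above come from kubectl describe node on the control plane, and the kubelet messages further down are from the systemd journal on the affected node. A sketch of the commands used, assuming kubelet runs as a systemd unit:)

kubectl describe node alva-anstawx02.<domain name>
journalctl -u kubelet --no-pager | tail -n 50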


I have a single network interface on each node.
On an affected node:

Jun 09 12:34:19 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:19.209657    1026 kubelet.go:2211] "Container runtime network not ready" networkReady="N
Jun 09 12:34:19 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:19.698034    1026 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"Sta
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817375    1026 remote_image.go:114] "PullImage from image service failed" err="rpc er
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817429    1026 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code =
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817627    1026 kuberuntime_manager.go:864] container &Container{Name:calico-typha,Ima
Jun 09 12:34:21 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:21.817706    1026 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"Sta
Jun 09 12:34:24 alva-anstawx02.alv.<Domain Name> kubelet[1026]: E0609 12:34:24.211195    1026 kubelet.go:2211] "Container runtime network not ready" networkReady="N


I'm using Calico with its default configuration, and there is one interface on each node:
Control-plane node:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ac:64:8c brd ff:ff:ff:ff:ff:ff
    inet 10.4.9.73/21 brd 10.4.15.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feac:648c/64 scope link 
       valid_lft forever preferred_lft forever
3: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:c5:7d:06:e5:fe brd ff:ff:ff:ff:ff:ff
    inet 192.168.228.192/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever
    inet6 fe80::64c5:7dff:fe06:e5fe/64 scope link 
       valid_lft forever preferred_lft forever
4: cali5441eeb56bd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
5: cali389c5f98ecc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
6: calicc306a285eb@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever


The other nodes:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ac:67:61 brd ff:ff:ff:ff:ff:ff
    inet 10.4.9.80/21 brd 10.4.15.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:feac:6761/64 scope link 
       valid_lft forever preferred_lft forever


Can anyone help me get Calico initialized on the other two nodes?
Edit:
I have already fixed a disk space issue on /var, but that did not help:

[root@alva-anstawx03 ~]# df -kh
Filesystem                     Size  Used Avail Use% Mounted on
devtmpfs                       1.9G     0  1.9G   0% /dev
tmpfs                          1.9G     0  1.9G   0% /dev/shm
tmpfs                          1.9G   60M  1.8G   4% /run
tmpfs                          1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/vg_system-lv_root  9.8G  2.1G  7.2G  23% /
/dev/sda1                      976M  206M  704M  23% /boot
/dev/mapper/vg_system-lv_var    19G  1.1G   17G   6% /var
/dev/mapper/vg_system-lv_opt   3.9G   72M  3.6G   2% /opt
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/77b1f7f0-8598-4640-af2f-e960c4c76120/volumes/kubernetes.io~projected/kube-api-access-7xnp8
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/4398eeeb-0f74-477c-a066-403ecab4abe1/volumes/kubernetes.io~projected/kube-api-access-9bh4m
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/56cbc698b06f57937128eadc74cc098c4dfb9f5566e941d7a93baab9695ec22e/shm
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/3cb246ac67ca0826ca86f8adb5c5c1b8802c4f96ca330456aea67aec02231f9c/shm
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/56cbc698b06f57937128eadc74cc098c4dfb9f5566e941d7a93baab9695ec22e/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/3cb246ac67ca0826ca86f8adb5c5c1b8802c4f96ca330456aea67aec02231f9c/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3e2bedafb61411951557c6e66d037536240bf25a185e6b3e6da0b6ad0b91a38/rootfs
tmpfs                          378M     0  378M   0% /run/user/0
[root@alva-anstawx03 ~]#


The same on the other node:

[root@alva-anstawx02 ~]# df -kh
Filesystem                     Size  Used Avail Use% Mounted on
devtmpfs                       1.9G     0  1.9G   0% /dev
tmpfs                          1.9G     0  1.9G   0% /dev/shm
tmpfs                          1.9G   68M  1.8G   4% /run
tmpfs                          1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/vg_system-lv_root  9.8G  2.1G  7.2G  23% /
/dev/sda1                      976M  206M  704M  23% /boot
/dev/mapper/vg_system-lv_opt   3.9G   72M  3.6G   2% /opt
/dev/mapper/vg_system-lv_var    19G  1.1G   17G   6% /var
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/31e01070-282a-453b-8e7f-fe0d93e359ec/volumes/kubernetes.io~projected/kube-api-access-4qhqs
tmpfs                          1.9G   12K  1.9G   1% /var/lib/kubelet/pods/4208e857-28e7-4005-bbe1-8bed0b08548b/volumes/kubernetes.io~projected/kube-api-access-bvjhg
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/f3b43b5c1e46236e3e01536cff2089c788e0b39e34e43165608dbb2ea9906cb5/shm
shm                             64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/285018acde46e519f9ae74551d06028304ca19ab76813ed1ca43a4b6e617e4f4/shm
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/285018acde46e519f9ae74551d06028304ca19ab76813ed1ca43a4b6e617e4f4/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3b43b5c1e46236e3e01536cff2089c788e0b39e34e43165608dbb2ea9906cb5/rootfs
overlay                         19G  1.1G   17G   6% /run/containerd/io.containerd.runtime.v2.task/k8s.io/42aaae5f8c681ffa5fd0bf6ed3fcf4d8447962131459d4592d1bbb73a320edca/rootfs
tmpfs                          378M     0  378M   0% /run/user/0
[root@alva-anstawx02 ~]#


Below is the kubectl output:

[root@alva-anstawx01 ~]#  kubectl get node
NAME                             STATUS     ROLES                  AGE   VERSION
alva-anstawx01.<Domain Name>   Ready      control-plane,master   8d    v1.21.1
alva-anstawx02.<Domain Name>   NotReady   <none>                 43h   v1.21.1
alva-anstawx03.<Domain Name>   NotReady   <none>                 43h   v1.21.1
[root@alva-anstawx01 ~]# kubectl describe pod calico-node-dshv9 -n kube-system
Name:                 calico-node-dshv9
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 alva-anstawx03.<Domain Name>/10.4.9.96
Start Time:           Wed, 09 Jun 2021 20:39:52 +0200
Labels:               controller-revision-hash=c54f47b5c
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   10.4.9.96
IPs:
  IP:           10.4.9.96
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  
    Image:         docker.io/calico/cni:v3.19.1
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
  install-cni:
    Container ID:  
    Image:         docker.io/calico/cni:v3.19.1
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
  flexvol-driver:
    Container ID:   
    Image:          docker.io/calico/pod2daemon-flexvol:v3.19.1
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
Containers:
  calico-node:
    Container ID:   
    Image:          docker.io/calico/node:v3.19.1
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_HEALTHENABLED:                true
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/ from sysfs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bh4m (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sysfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  kube-api-access-9bh4m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  13h                     default-scheduler  Successfully assigned kube-system/calico-node-dshv9 to alva-anstawx03.<Domain Name>
  Warning  Failed     13h (x2 over 13h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:51f294c56842e731fa8d7bdf6b9ba39771f69ba4eda28e186461be2662e599df: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    13h (x4 over 13h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     13h (x4 over 13h)       kubelet            Error: ErrImagePull
  Warning  Failed     13h (x2 over 13h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     13h (x6 over 13h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    12h (x61 over 13h)      kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    12h (x4 over 12h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x4 over 12h)       kubelet            Error: ErrImagePull
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:51f294c56842e731fa8d7bdf6b9ba39771f69ba4eda28e186461be2662e599df: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     12h (x6 over 12h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    12h (x18 over 12h)      kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
  Normal   Pulling    12h (x4 over 12h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x4 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     12h (x4 over 12h)       kubelet            Error: ErrImagePull
  Warning  Failed     12h (x6 over 12h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    12h (x81 over 12h)      kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    12h (x4 over 12h)       kubelet            Pulling image "docker.io/calico/cni:v3.19.1"
  Warning  Failed     12h (x2 over 12h)       kubelet            Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:51f294c56842e731fa8d7bdf6b9ba39771f69ba4eda28e186461be2662e599df: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     12h (x4 over 12h)       kubelet            Error: ErrImagePull
  Warning  Failed     12h (x6 over 12h)       kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x3192 over 12h)  kubelet            Back-off pulling image "docker.io/calico/cni:v3.19.1"
[root@alva-anstawx01 ~]# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS                  RESTARTS   AGE   IP                NODE                             NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-78d6f96c7b-wb96g                 1/1     Running                 1          13h   192.168.228.198   alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   calico-node-dshv9                                        0/1     Init:ImagePullBackOff   0          13h   10.4.9.96         alva-anstawx03.<Domain Name>   <none>           <none>
kube-system   calico-node-rfrnq                                        0/1     Init:ImagePullBackOff   0          13h   10.4.9.80         alva-anstawx02.<Domain Name>   <none>           <none>
kube-system   calico-node-sl864                                        1/1     Running                 1          13h   10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   coredns-558bd4d5db-c8p97                                 1/1     Running                 2          8d    192.168.228.200   alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   coredns-558bd4d5db-hjb8g                                 1/1     Running                 2          8d    192.168.228.199   alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   etcd-alva-anstawx01.<Domain Name>                      1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-apiserver-alva-anstawx01.<Domain Name>            1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-controller-manager-alva-anstawx01.<Domain Name>   1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-proxy-b8ft2                                         1/1     Running                 2          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>
kube-system   kube-proxy-frr7c                                         1/1     Running                 4          43h   10.4.9.80         alva-anstawx02.<Domain Name>   <none>           <none>
kube-system   kube-proxy-ztxbf                                         1/1     Running                 4          43h   10.4.9.96         alva-anstawx03.<Domain Name>   <none>           <none>
kube-system   kube-scheduler-alva-anstawx01.<Domain Name>            1/1     Running                 4          8d    10.4.9.73         alva-anstawx01.<Domain Name>   <none>           <none>

**Answer 1 (vjhs03f7):**

After looking at the whole log line entry:

Failed to pull image "docker.io/calico/cni:v3.19.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/cni:v3.19.1": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/cni/manifests/sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

we are interested in this part:

429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit.

It looks like the problem is not with your Calico/Kubernetes cluster itself, but with pulling the Docker images for it.
As described at the link in that message (https://www.docker.com/increase-rate-limit):
A rate limit of 100 container image requests per six hours for anonymous usage, and 200 container image requests per six hours for free Docker accounts, is now in effect. Image requests exceeding these limits will be denied until the six-hour window elapses.
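
To check whether the window has expired, you can try pulling the image manually on one of the affected nodes. A sketch assuming containerd's ctr client (the runtime paths in your df output point to containerd):

ctr image pull docker.io/calico/cni:v3.19.1

If this also fails with 429 Too Many Requests, the limit is still in effect.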
If there have been no pull requests from this IP to Docker Hub recently, the images should download now. To retry, you can delete the pods responsible for Calico networking on the affected nodes; since they are managed by a DaemonSet (DaemonSet/calico-node in the describe output above), the controller will recreate them for you. To do that, run:

kubectl delete pod calico-node-rfrnq -n kube-system
kubectl delete pod calico-node-dshv9 -n kube-system
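
Afterwards you can watch the DaemonSet recreate the pods and retry the image pull; a sketch using the k8s-app=calico-node label from the pod description above:

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide -w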


If for some reason your nodes still cannot pull the images from Docker Hub, note that the Calico pod is already running on the control plane. That means you can transfer the Docker images manually from the control plane to the worker nodes.

Docker

Run the following on the control plane:

docker save -o ~/calico-cni.tar calico/cni:v3.19.1


Copy the file to the worker nodes using sftp, scp, or any other method.
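A sketch using scp (the destination path and the use of root are assumptions):

scp ~/calico-cni.tar root@alva-anstawx02.<Domain Name>:~/
scp ~/calico-cni.tar root@alva-anstawx03.<Domain Name>:~/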
On the worker node, run:

docker load -i calico-cni.tar


If the containers do not start after the image is loaded, consider deleting the pods on the worker node to force the scheduler to recreate them.
Note that if you go the manual route, there may be more images to transfer. For example, on my worker node there are four Calico-related images:

docker images | grep calico

calico/node                                  v3.19.1       c4d75af7e098   3 weeks ago     168MB
calico/pod2daemon-flexvol                    v3.19.1       5660150975fb   3 weeks ago     21.7MB
calico/cni                                   v3.19.1       5749e8b276f9   3 weeks ago     146MB
calico/kube-controllers                      v3.19.1       5d3d5ddc8605   3 weeks ago     60.6MB
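
Since docker save accepts several images at once, all four can go into a single archive; a sketch assuming these exact tags:

docker save -o ~/calico-images.tar calico/node:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 calico/cni:v3.19.1 calico/kube-controllers:v3.19.1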

ContainerD

Export the image with containerd:

ctr image export <output-filename> <image-name>


Example:

ctr image export calico-node-v3.11.2.tar \
docker.io/calico/node:v3.11.2


Copy the resulting file to the worker nodes and import it there:

ctr image import <filename-from-previous-step>


Please double-check the containerd syntax.
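
One detail to watch: the kubelet's CRI integration keeps its images in containerd's k8s.io namespace, so the import on the worker node may need to target that namespace explicitly. A sketch, reusing the archive name from the example above:

ctr -n k8s.io image import calico-node-v3.11.2.tar
ctr -n k8s.io images ls | grep calico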

**Answer 2 (72qzrwbm):**

I got the same issue, but with a slightly different error message: "no networks found in /etc/cni/net.d".
I fixed it by doing the following (see the sketch after this list):

1. Create the folder /etc/cni/net.d
2. Copy /etc/cni/net.d/10-flannel.conflist from a working node to the currently failing node
3. Run systemctl restart kubelet
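
A sketch of those steps as shell commands, assuming a working node reachable over ssh that uses the same flannel conflist:

mkdir -p /etc/cni/net.d
scp root@<working-node>:/etc/cni/net.d/10-flannel.conflist /etc/cni/net.d/
systemctl restart kubelet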
