Contents

1. Error when initializing the k8s cluster with kubeadm

Solution:

2. kube-apiserver[11821]: Error: unknown flag: --insecure-port

Solution:

3. E0210 16:20:54.432245   11274 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

Solution:


1. Error when initializing the k8s cluster with kubeadm

The preflight check warns that the Docker version has not been validated: "this Docker version is not on the list of validated versions: 20.10.18. Latest validated version: 19.03":

# If the first kubeadm init fails, running the init command a second time reports these errors; a reset is required first
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

#1. Check the current docker version
docker version

#2. List the docker versions available in the current repo
yum list docker-ce --showduplicates | sort -r

#3. Downgrade docker to 19.03.9-3.el7
yum downgrade --setopt=obsoletes=0 -y docker-ce-19.03.9-3.el7 docker-ce-cli-19.03.9-3.el7 containerd.io

#4. Confirm the docker version is now 19.03
docker version

#5. Make the cgroup driver of docker match k8s: k8s uses systemd, while docker defaults to cgroupfs. Change this on all three nodes:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker
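Before restarting Docker it is worth confirming that daemon.json is syntactically valid, since a malformed file prevents the daemon from starting. A minimal sketch, written to /tmp for illustration and assuming python3 is installed:

```shell
# Write the same daemon.json to a temp path and validate it as JSON
# before copying it into /etc/docker/ and restarting the daemon.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```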

#6. Reset
kubeadm reset

#7. Run the initialization again

2. kube-apiserver[11821]: Error: unknown flag: --insecure-port

Starting the service with systemctl start kube-apiserver fails; journalctl -xe shows the "unknown flag: --insecure-port" error in the section title.

Solution:

With a binary installation of kube-apiserver, the kube-apiserver.conf configuration file contained the --insecure-port and --enable-swagger-ui flags. Both flags have since been removed, so simply delete the lines containing them, then run systemctl daemon-reload and systemctl enable --now kube-apiserver; kube-apiserver now starts successfully.
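The cleanup can be scripted with sed. A minimal sketch against a stand-in file in /tmp (the contents below are illustrative; the real kube-apiserver.conf path and option list depend on how the binaries were laid out):

```shell
# Stand-in for the real kube-apiserver.conf; --insecure-port and
# --enable-swagger-ui are the two removed flags, the rest is illustrative.
cat > /tmp/kube-apiserver.conf <<'EOF'
KUBE_APISERVER_OPTS="--v=2 \
  --insecure-port=0 \
  --enable-swagger-ui=true \
  --secure-port=6443"
EOF

# Delete the lines carrying the removed flags.
sed -i '/--insecure-port/d; /--enable-swagger-ui/d' /tmp/kube-apiserver.conf

if ! grep -qE -- '--insecure-port|--enable-swagger-ui' /tmp/kube-apiserver.conf; then
  echo "deprecated flags removed"
fi
```

After editing the real file, reload and restart as described above.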

3. E0210 16:20:54.432245   11274 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"

Starting the kubelet service fails; the log shows:

[root@k8s-node1 ~]# journalctl -u kubelet --no-pager
...
Feb 10 16:20:54 k8s-node1 kubelet[11274]: E0210 16:20:54.432245   11274 remote_runtime.go:189] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
Feb 10 16:20:54 k8s-node1 kubelet[11274]: E0210 16:20:54.441347   11274 kuberuntime_manager.go:226] "Get runtime version failed" err="get remote runtime typed version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
Feb 10 16:20:54 k8s-node1 kubelet[11274]: E0210 16:20:54.454734   11274 run.go:74] "command failed" err="failed to run Kubelet: failed to create kubelet: get remote runtime typed version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
Feb 10 16:20:54 k8s-node1 systemd[1]: kubelet.service failed.

Searching for the error online turned up suggestions that containerd was the problem, but containerd's status is normal:

[root@k8s-node1 ~]# systemctl status containerd.service 
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-02-10 09:29:25 CST; 6h ago
     Docs: https://containerd.io
 Main PID: 6452 (containerd)
    Tasks: 9
   Memory: 60.0M
   CGroup: /system.slice/containerd.service
           └─6452 /usr/bin/containerd

Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.037368050+08:00" level=info msg="loading plugin \"io.containerd.grpc.v....grpc.v1
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.037698981+08:00" level=info msg="loading plugin \"io.containerd.grpc.v....grpc.v1
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.038195521+08:00" level=info msg="loading plugin \"io.containerd.tracin...essor.v1
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.038598917+08:00" level=info msg="skip loading plugin \"io.containerd.t...essor.v1
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.038942566+08:00" level=info msg="loading plugin \"io.containerd.intern...ernal.v1
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.039410096+08:00" level=error msg="failed to initialize a tracing proce... plugin"
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.041889291+08:00" level=info msg=serving... address=/run/containerd/con...ck.ttrpc
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.043283896+08:00" level=info msg=serving... address=/run/containerd/con...erd.sock
Feb 10 09:29:25 k8s-node1 systemd[1]: Started containerd container runtime.
Feb 10 09:29:25 k8s-node1 containerd[6452]: time="2023-02-10T09:29:25.046373959+08:00" level=info msg="containerd successfully booted in 0.090041s"
Hint: Some lines were ellipsized, use -l to show in full.

Others pointed to disabled_plugins = ["cri"] in the /etc/containerd/config.toml configuration file; see https://github.com/containerd/containerd/issues/4581 for details.

Solution:

Move the /etc/containerd/config.toml configuration file out of the way:

[root@k8s-node1 ~]# grep "disabled_plugins" /etc/containerd/config.toml
disabled_plugins = ["cri"]

[root@k8s-node1 ~]# mv /etc/containerd/config.toml /tmp/
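Moving the file aside works because containerd then falls back to its built-in defaults, where the CRI plugin is enabled. An alternative sketch is to keep the file and just clear the disabled-plugin list, shown here on a temp copy rather than the real /etc/containerd/config.toml:

```shell
# Temp copy standing in for /etc/containerd/config.toml.
cat > /tmp/config.toml <<'EOF'
disabled_plugins = ["cri"]
EOF

# Re-enable the CRI plugin by emptying the list.
sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /tmp/config.toml
grep disabled_plugins /tmp/config.toml
# -> disabled_plugins = []
```

Either way, containerd must be restarted afterwards for the change to take effect.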

# Restart containerd and kubelet; kubelet starts successfully
[root@k8s-node1 ~]# systemctl restart containerd.service 
[root@k8s-node1 ~]# systemctl status containerd.service 
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-02-10 16:32:58 CST; 7s ago
     Docs: https://containerd.io
  Process: 12941 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 12943 (containerd)
    Tasks: 9
   Memory: 26.2M
   CGroup: /system.slice/containerd.service
           └─12943 /usr/bin/containerd

Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.590059879+08:00" level=info msg=serving... address=/run/containerd/co...ck.ttrpc
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.590126675+08:00" level=info msg=serving... address=/run/containerd/co...erd.sock
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.590230987+08:00" level=info msg="containerd successfully booted in 0.046454s"
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.603214508+08:00" level=info msg="Start subscribing containerd event"
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.603295930+08:00" level=info msg="Start recovering state"
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.605726964+08:00" level=info msg="Start event monitor"
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.605833622+08:00" level=info msg="Start snapshots syncer"
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.605868370+08:00" level=info msg="Start cni network conf syncer for default"
Feb 10 16:32:58 k8s-node1 containerd[12943]: time="2023-02-10T16:32:58.605885452+08:00" level=info msg="Start streaming server"
Feb 10 16:32:59 k8s-node1 containerd[12943]: time="2023-02-10T16:32:59.331379148+08:00" level=info msg="No cni config template is specified,...config."
Hint: Some lines were ellipsized, use -l to show in full.


[root@k8s-node1 ~]# systemctl restart kubelet.service 
[root@k8s-node1 ~]# systemctl status kubelet.service 
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-02-10 16:33:24 CST; 5s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 13110 (kubelet)
    Tasks: 13
   Memory: 28.5M
   CGroup: /system.slice/kubelet.service
           └─13110 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/ssl --kubeco...

Feb 10 16:33:24 k8s-node1 kubelet[13110]: E0210 16:33:24.419342   13110 kubelet.go:2373] "Container runtime network not ready" networkReady=...ialized"
Feb 10 16:33:24 k8s-node1 kubelet[13110]: I0210 16:33:24.433580   13110 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 10 16:33:24 k8s-node1 kubelet[13110]: I0210 16:33:24.433641   13110 status_manager.go:161] "Starting to sync pod status with apiserver"
Feb 10 16:33:24 k8s-node1 kubelet[13110]: I0210 16:33:24.433696   13110 kubelet.go:2010] "Starting kubelet main sync loop"
Feb 10 16:33:24 k8s-node1 kubelet[13110]: E0210 16:33:24.433788   13110 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not hea...cessful"
Feb 10 16:33:24 k8s-node1 kubelet[13110]: I0210 16:33:24.443412   13110 kubelet_node_status.go:108] "Node was previously registered" node="k8s-node1"
Feb 10 16:33:24 k8s-node1 kubelet[13110]: I0210 16:33:24.443603   13110 kubelet_node_status.go:73] "Successfully registered node" node="k8s-node1"
Feb 10 16:33:25 k8s-node1 kubelet[13110]: I0210 16:33:25.277260   13110 apiserver.go:52] "Watching apiserver"
Feb 10 16:33:25 k8s-node1 kubelet[13110]: I0210 16:33:25.282694   13110 kubelet.go:2096] "SyncLoop ADD" source="api" pods=[]
Feb 10 16:33:25 k8s-node1 kubelet[13110]: I0210 16:33:25.329130   13110 reconciler.go:169] "Reconciler: start to sync state"
Hint: Some lines were ellipsized, use -l to show in full.