Installing KubeEdge v1.17 on Ubuntu 22.04

1. Introduction

  • Deployment requirements:
    (image: deployment requirements)
  • Versions installed:
    • Ubuntu: 22.04
    • Kubernetes: v1.27
    • KubeEdge: v1.17
  • For the Kubernetes cluster setup, see the companion post "Installing a Kubernetes v1.27 cluster on Ubuntu 22.04"
  • KubeEdge node layout:
    • Cloud side: the Kubernetes cluster; here a single master node with IP 192.168.247.131
    • Edge side: an NVIDIA Jetson Orin Nano board; this post also covers using an Ubuntu 22.04 VM as the edge node
  • Set the hostnames:
hostnamectl set-hostname master1  && bash
hostnamectl set-hostname edge1  && bash

2. Cloud side

2-1. Deploy MetalLB (recommended)

  • MetalLB provides LoadBalancer support; it maps an external IP address so pod services can be reached directly
    • NodePort also works, but ports then have to be mapped later on
  • Edit kube-proxy and change lines 41 and 48 shown below
kubectl edit configmap -n kube-system kube-proxy

41       strictARP: false   # change to true
......
47     metricsBindAddress: ""
48     mode: ""             # change to ipvs (no quotes!)
49     nodePortAddresses: null
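The two kube-proxy edits can also be applied non-interactively instead of via `kubectl edit` (the same sed pattern MetalLB's own docs use for strictARP). This is a sketch; the pipeline assumes the default ConfigMap layout:

```shell
# Real, non-interactive pipeline (requires a cluster):
#   kubectl get configmap kube-proxy -n kube-system -o yaml \
#     | sed -e 's/strictARP: false/strictARP: true/' -e 's/mode: ""/mode: ipvs/' \
#     | kubectl apply -f -
# The sed transformation itself, demonstrated on a sample fragment:
printf 'strictARP: false\nmode: ""\n' \
  | sed -e 's/strictARP: false/strictARP: true/' -e 's/mode: ""/mode: ipvs/'
# prints:
# strictARP: true
# mode: ipvs
```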
  • Deploy MetalLB on the cluster; version 0.13.5 is used here
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.5/config/manifests/metallb-native.yaml

# or alternatively:
wget https://raw.githubusercontent.com/metallb/metallb/v0.13.5/config/manifests/metallb-native.yaml -O metallb-native.yaml
kubectl apply -f ./metallb-native.yaml 
  • Configure MetalLB by creating two files:
    • advertise.yaml and ip-pool.yaml
    • Note: the IP range under spec.addresses in ip-pool.yaml must include the Kubernetes cluster's node IPs
# advertise.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adver
  namespace: metallb-system
spec:
  ipAddressPools: # if this is omitted, every IP pool's addresses are advertised
    - ip-pool
# ip-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.247.120-192.168.247.140 # set according to the cluster nodes' IP addresses; be sure to include the cluster IPs
  • Apply the configuration:
kubectl apply -f advertise.yaml
kubectl apply -f ip-pool.yaml

kubectl get ipaddresspool -n metallb-system
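Before applying, you can sanity-check that the pool range actually covers the node IPs. The helper below is my own sketch (not part of MetalLB), in pure bash with no cluster required:

```shell
#!/usr/bin/env bash
# Verify that an address-pool range covers a given node IP.

ip_to_int() {            # dotted-quad IPv4 -> integer
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_pool() {              # in_pool <ip> <range-start> <range-end>
  local ip lo hi
  ip=$(ip_to_int "$1"); lo=$(ip_to_int "$2"); hi=$(ip_to_int "$3")
  if (( ip >= lo && ip <= hi )); then echo "in-pool"; else echo "out-of-pool"; fi
}

in_pool 192.168.247.131 192.168.247.120 192.168.247.140   # master1 -> in-pool
in_pool 192.168.247.150 192.168.247.120 192.168.247.140   # -> out-of-pool
```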
  • Enable layer-2 forwarding so services can be reached from outside the cluster nodes
    • L2 mode does not bind the IP to a node's network interface. It works by answering ARP requests on the local network directly, handing the node's MAC address to clients.
    • To advertise IPs from an IPAddressPool, an L2Advertisement instance must be associated with the IPAddressPool
vim l2forward.yaml
# contents:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system

# apply the configuration
kubectl apply -f l2forward.yaml

2-2. Download and install cloudcore

  • Download and install the KubeEdge binaries
 mkdir kubeEdge
 cd kubeEdge/
 curl -LO  https://github.com/kubeedge/kubeedge/releases/download/v1.17.0/keadm-v1.17.0-linux-amd64.tar.gz
 # or: wget https://github.com/kubeedge/kubeedge/releases/download/v1.17.0/keadm-v1.17.0-linux-amd64.tar.gz
sudo tar -xvzf keadm-v1.17.0-linux-amd64.tar.gz
cd keadm-v1.17.0-linux-amd64
sudo cp keadm/keadm /usr/local/bin/
keadm -h

2-3. Initialize cloudcore

  • Remove the taint from the Kubernetes control-plane node; master1 is the cluster's master node name
    • Note: this cluster has only a single master node, so the taint must be removed so pods can run on it. If the cluster has worker nodes, this step can be skipped!
kubectl taint nodes master1  node-role.kubernetes.io/control-plane:NoSchedule-
kubectl describe node master1 | grep Taints
  • CloudCore handles the interaction between the cloud and the edge nodes. Install and initialize it as follows:
    • Note: if the cluster has worker nodes, the advertised IPs must include the worker node IPs as well
    • The image can be pulled in advance:
    • sudo ctr images pull docker.m.daocloud.io/kubeedge/cloudcore:v1.17.0
# the IPs below must cover the cluster's master (and any worker) node IPs
keadm init --advertise-address=192.168.247.131  --kubeedge-version=1.17.0 --kube-config=$HOME/.kube/config --set iptablesManager.mode="external"

# expected output
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
=========CHART DETAILS=======
Name: cloudcore
LAST DEPLOYED: Wed Jul  3 10:00:54 2024
NAMESPACE: kubeedge
STATUS: deployed
REVISION: 1

# check the pods
kubectl get pods -n kubeedge
# If the image pull failed, `kubectl get pods -A` shows it; switch to a mirror registry:
kubectl get pod  cloudcore-dc75f4b46-f9822 -n kubeedge -o yaml > ./kubeedge_cloudcore.yaml
# edit the yaml and change the image to: docker.m.daocloud.io/kubeedge/cloudcore:v1.17.0

# Delete the old pod; if there are many pods, delete all pods in the namespace at once:
# kubectl delete pod --all -n kubeedge
kubectl delete pod cloudcore-dc75f4b46-f9822  -n kubeedge 
kubectl apply -f ./kubeedge_cloudcore.yaml

kubectl get all -n kubeedge

2-4. Disable the cloud-side firewall

  • Disable the firewall on the cloud node
systemctl status ufw.service
systemctl stop ufw.service
systemctl disable ufw.service

2-5. Keep cloud-side DaemonSet pods off the edge nodes

  • Patch node affinity so that cloud-side DaemonSet pods are not scheduled onto edge nodes
kubectl get daemonset -n kube-system | grep -v NAME | awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op":"replace", "path":"/spec/template/spec/affinity", "value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'

kubectl get daemonset -n metallb-system | grep -v NAME | awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op":"replace", "path":"/spec/template/spec/affinity", "value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
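For readability, the JSON patch applied by both commands corresponds to this nodeAffinity stanza: pods are only scheduled on nodes that do not carry the node-role.kubernetes.io/edge label.

```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/edge
                operator: DoesNotExist
```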

2-6. Expose the cloud-side ports

  • cloudcore exposes ports 10000-10004; the edge side talks to the cloud through these ports

  • Change how the service is exposed so it can be reached from outside

    • NodePort also works, but the mapped ports must then be used when initializing edgecore
      • On NodePort, see the Kubernetes docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/
    • (Recommended) If MetalLB is installed, set the type field to LoadBalancer; for details see the post "KubeEdge v1.17.0 deployment tutorial"
  • Run the commands below; EXTERNAL-IP is the externally exposed IP, here 192.168.247.120

# edit the cloudcore service; on line 59, set the type field to LoadBalancer
kubectl edit svc cloudcore -n kubeedge

 55   selector:
 56     k8s-app: kubeedge
 57     kubeedge: cloudcore
 58   sessionAffinity: None
 59   type: LoadBalancer


kube@master1:~$ kubectl edit svc cloudcore -n kubeedge 
service/cloudcore edited

kube@master1:~$ kubectl get all -n kubeedge
NAME                            READY   STATUS    RESTARTS   AGE
pod/cloudcore-dc75f4b46-qxqzx   1/1     Running   0          45m

NAME                TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                           AGE
service/cloudcore   LoadBalancer   10.103.251.179   192.168.247.120   10000:30183/TCP,10001:30171/TCP,10002:30126/TCP,10003:31795/TCP,10004:32125/TCP   45m

NAME                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/edge-eclipse-mosquitto   0         0         0       0            0           <none>          45m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cloudcore   1/1     1            1           45m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/cloudcore-dc75f4b46   1         1         1       45m

2-7. Download and install metrics-server (to query cluster resource usage)

  • Reference: https://release-1-17.docs.kubeedge.io/zh/docs/advanced/metrics
  • Download and install metrics-server (use the components.yaml that matches the image version below)
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.1/components.yaml 
# switch the image to a mirror registry
vim ./components.yaml


image: k8s.rainbond.cc/metrics-server/metrics-server:v0.7.1

# set node affinity; add the following under the Deployment's spec.template.spec

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
              - key: node-role.kubernetes.io/agent
                operator: DoesNotExist
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule

      containers:
      - args:
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s


# apply the configuration
kubectl apply -f ./components.yaml
kubectl get pods -n kube-system
  • Skip kubelet TLS certificate verification
vim ./patch.json

[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-insecure-tls"
  }
]


kubectl patch deploy metrics-server -n kube-system --type='json' --patch-file=patch.json

kubectl top nodes
kubectl top pods -n kube-system

2-8. Print the token on the cloud side

  • Print the token
keadm gettoken --kube-config=$HOME/.kube/config

3. Edge side

3-1. Install and configure the container runtime

  • Docker can be installed following the Kubernetes installation guide
  • Install the base packages and containerd; the containerd steps from the Kubernetes setup above also apply
  • The commands below work on both the Ubuntu VM and the ARM platform
sudo apt-get update
sudo apt-get install -y curl wget apt-transport-https
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release

curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# add the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  
sudo apt-get update
sudo apt-get install -y containerd.io

sudo containerd config default | sudo tee /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep -A 5 -B 5 "disabled_plugins"
sudo containerd --version

# Edit the /etc/containerd/config.toml file:
# update the sandbox (pause) image by changing this setting in containerd's config
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# set containerd's cgroup driver to systemd
SystemdCgroup = true
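The two edits above can also be scripted. The sketch below demonstrates the sed substitutions on a scratch copy; for real use, run the two sed commands with sudo against /etc/containerd/config.toml:

```shell
# Demonstrate the two config.toml substitutions on a scratch copy.
CONF=$(mktemp)
printf 'sandbox_image = "registry.k8s.io/pause:3.8"\nSystemdCgroup = false\n' > "$CONF"

sed -i 's#^\(\s*\)sandbox_image = .*#\1sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$CONF"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONF"

grep -E 'sandbox_image|SystemdCgroup' "$CONF"
# prints:
# sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# SystemdCgroup = true
rm -f "$CONF"
```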

# start and enable the containerd service
sudo systemctl restart containerd
sudo systemctl enable containerd

sudo systemctl status containerd

3-2. Install and configure the CNI plugins

  • Download and install the CNI plugins
  • On the Ubuntu VM
sudo mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz | sudo tar -C /opt/cni/bin -xz
# Note: you can also download the tgz with wget and then extract it
wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
sudo tar -C /opt/cni/bin -xzvf cni-plugins-linux-amd64-v1.2.0.tgz

ls -al /opt/cni/bin
  • On the ARM platform
sudo mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-arm64-v1.2.0.tgz | sudo tar -C /opt/cni/bin -xz

# or download the tgz with wget and then extract it
wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-arm64-v1.2.0.tgz
sudo tar -C /opt/cni/bin -xzvf cni-plugins-linux-arm64-v1.2.0.tgz

ls -al /opt/cni/bin
  • Configure the CNI network
sudo mkdir -p /etc/cni/net.d
cat <<EOF | sudo tee /etc/cni/net.d/05-containerd-net.conflist
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "192.168.0.0/16"}]
        ],
        "routes": [
          {"dst": "0.0.0.0/0"}
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
EOF

3-3. Download and install the KubeEdge package

  • On the Ubuntu VM:
mkdir kubeEdge
cd kubeEdge/
wget https://github.com/kubeedge/kubeedge/releases/download/v1.17.0/keadm-v1.17.0-linux-amd64.tar.gz
tar -xvzf keadm-v1.17.0-linux-amd64.tar.gz
cd keadm-v1.17.0-linux-amd64
sudo cp keadm/keadm /usr/local/bin/
  • On the ARM platform:
mkdir kubeEdge
cd kubeEdge/
wget https://github.com/kubeedge/kubeedge/releases/download/v1.17.0/keadm-v1.17.0-linux-arm64.tar.gz
tar -xvzf keadm-v1.17.0-linux-arm64.tar.gz
cd keadm-v1.17.0-linux-arm64
sudo cp keadm/keadm /usr/local/bin/

3-4. Test network connectivity

  • From the edge node, ping the cloud and check whether the ports are reachable
    • With NodePort:
ping 192.168.247.131
# cloudcore listens on ports 10000-10004; try each of them
# because the service maps 10000-10004 to NodePorts, test the mapped NodePorts instead
# 10000:30890/TCP,10001:30040/TCP,10002:30978/TCP,10003:30798/TCP,10004:30983/TCP
telnet 192.168.247.131 30890 # NodePort for 10000; the others work the same way
  • With MetalLB:
ping 192.168.247.131
# test against the EXTERNAL-IP reported by `kubectl get all -n kubeedge` on the cloud side
# no port mapping is needed, unlike NodePort

kube@master2:~$ telnet 192.168.247.120  10000
Trying 192.168.247.120...
Connected to 192.168.247.120.
Escape character is '^]'.
^CConnection closed by foreign host.
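If telnet is not installed on the edge device, the same reachability check can be done with bash's built-in /dev/tcp. This is a helper of my own, not part of KubeEdge:

```shell
#!/usr/bin/env bash
# check_port <host> <port> -> prints "open" or "closed"
check_port() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# probe every cloudcore port on the EXTERNAL-IP
for p in 10000 10001 10002 10003 10004; do
  echo "192.168.247.120:$p -> $(check_port 192.168.247.120 "$p")"
done
```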

3-5. Get the token and join the cloud

  • Get the token on the cloud side
  • Join the edge node to the cloud; the MetalLB route is used here
  • With the MetalLB (LoadBalancer) type, the join fails with: failed to verify certificate: x509: certificate is valid for 192.168.247.130, not 192.168.247.120
  • Fix: https://github.com/kubeedge/kubeedge/issues/5681

# 1. Edit the cloudcore configuration
kubectl edit configmap cloudcore -n kubeedge
# find the advertiseAddress parameter and set it to the external IP, here 192.168.247.120

# 2. Delete the casecret and cloudcoresecret secrets
kubectl get secrets -n kubeedge
kubectl delete secret casecret -n kubeedge
kubectl delete secret cloudcoresecret -n kubeedge

# 3. Delete the cloudcore pod (i.e. restart the cloudcore service)
kubectl delete pod cloudcore-dc75f4b46-zrrkn -n kubeedge

# 4. Join the edge node to the cloud; adjust the IP and the edge node name below!
sudo keadm join --cloudcore-ipport=192.168.247.120:10000 \
--token=$token \
--edgenode-name=edge1 \
--kubeedge-version v1.17.0 \
--remote-runtime-endpoint=unix:///run/containerd/containerd.sock \
--cgroupdriver=systemd 

# If the command above fails to pull images, pull them first, then rerun keadm join
sudo ctr images pull docker.m.daocloud.io/kubeedge/installation-package:v1.17.0

# 5. Check whether the join succeeded
kubectl get nodes
  • Result:
    (images)

  • Errors and troubleshooting
  • If the join fails, detach the edge node from the cloud:
# cloud side
# remove the node following the standard Kubernetes node-deletion steps

# edge side
systemctl stop edgecore
rm -rf /var/lib/kubeedge /var/lib/edged /etc/kubeedge
rm -rf /etc/systemd/system/edgecore.service
rm -rf /usr/local/bin/edgecore

3-6. Fixing problems after the edge node has joined

  • Issue 1: image pull failure

    "Error syncing pod, skipping" err="failed to "StartContainer" for "edge-eclipse-mosquitto" with ImagePullBackOff: "Back-off pulling image \"eclipse-mosquitto:1.6.15\""" pod="kubeedge/edge-eclipse-mosquitto-5xnwj" podUID="a411f137-3fbc-4cc5-b641-b31e609b6848"

# pull the image manually
sudo ctr images pull docker.m.daocloud.io/eclipse-mosquitto:1.6.15

  • Issue 2: failed to get CA certificate, err: Get "https://192.168.15.128:10002/ca.crt": dial tcp 192.168.15.128:10002: connect: connection refused
    • Fix: see https://github.com/kubeedge/kubeedge/issues/5450

  • Issue 3: the kube-flannel pod shows CrashLoopBackOff

"Error syncing pod, skipping" err="failed to "StartContainer" for "kube-flannel" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-mprd5_kube-flannel(f829e77e-db7a-478f-9707-c99601c9a5da)"" pod="kube-flannel/kube-flannel-ds-mprd5" podUID="f829e77e-db7a-478f-9707-c99601c9a5da"

cp ./kube-flannel.yaml ./kube-flannel-cloud.yaml
vim ./kube-flannel-cloud.yaml

# 1. At line 103 of the original file, change the block to the following

100 apiVersion: apps/v1
101 kind: DaemonSet
102 metadata:
103   name: kube-flannel-cloud-ds
104   namespace: kube-flannel
105   labels:
106     tier: node
107     app: flannel
108     k8s-app: flannel


# 2. After line 127, add the following content

118     spec:
119       affinity:
120         nodeAffinity:
121           requiredDuringSchedulingIgnoredDuringExecution:
122             nodeSelectorTerms:
123             - matchExpressions:
124               - key: kubernetes.io/os
125                 operator: In
126                 values:
127                 - linux
128               - key: node-role.kubernetes.io/agent
129                 operator: DoesNotExist


# finally, apply the configuration file
kubectl apply -f kube-flannel-cloud.yaml
  • Make another copy, kube-flannel-edge.yaml
    • change the metadata.name field
    • add the extra key under spec.affinity.nodeAffinity, i.e. set node affinity so the pods only run on nodes with the agent role; note: if the edge node is an ARM machine, the key and value differ (kubernetes.io/arch / arm64)
    • add a kube-api-url field under the container's args
cp ./kube-flannel.yaml ./kube-flannel-edge.yaml
vim ./kube-flannel-edge.yaml

# 1. At line 103 of the original file, change the block to the following

100 apiVersion: apps/v1
101 kind: DaemonSet
102 metadata:
103   name: kube-flannel-edge-ds
104   namespace: kube-flannel
105   labels:
106     tier: node
107     app: flannel
108     k8s-app: flannel


# 2. After line 127, add the following content

118     spec:
119       affinity:
120         nodeAffinity:
121           requiredDuringSchedulingIgnoredDuringExecution:
122             nodeSelectorTerms:
123             - matchExpressions:
124               - key: kubernetes.io/os  # on the ARM platform: kubernetes.io/arch
125                 operator: In
126                 values:
127                 - linux  # on the ARM platform: arm64
128               - key: node-role.kubernetes.io/agent
129                 operator: Exists


# 3. Add this at line 169

161       containers:
162       - name: kube-flannel
163         image: docker.m.daocloud.io/flannel/flannel:v0.25.4
164         command:
165         - /opt/bin/flanneld
166         args:
167         - --ip-masq
168         - --kube-subnet-mgr
169         - --kube-api-url=http://127.0.0.1:10550


# finally, apply the configuration file
kubectl apply -f kube-flannel-edge.yaml
  • Update the cloud-side configuration
kubectl edit configmap cloudcore -n kubeedge

modules:
  ...
  dynamicController:
    enable: true


# restart cloudcore
# deleting its pod is enough
kubectl delete pod cloudcore-dc75f4b46-pf7rt -n kubeedge
  • Update the EdgeCore configuration: for flannel to reach http://127.0.0.1:10550, EdgeCore's metaServer feature must be enabled
    • set the metaManager.metaServer.enable field in edgecore.yaml to true
vim /etc/kubeedge/config/edgecore.yaml


153   metaManager:
154     contextSendGroup: hub
155     contextSendModule: websocket
156     enable: true
157     metaServer:
158       apiAudiences: null
159       dummyServer: 169.254.30.10:10550
160       enable: true
161       server: 127.0.0.1:10550
162       serviceAccountIssuers:
163       - https://kubernetes.default.svc.cluster.local
164       serviceAccountKeyFiles: null
165       tlsCaFile: /etc/kubeedge/ca/rootCA.crt
166       tlsCertFile: /etc/kubeedge/certs/server.crt
167       tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
168     remoteQueryTimeout: 60


# restart the edgecore service
sudo systemctl restart edgecore

4. Enable kubectl logs/exec/attach

# check whether ca.crt and ca.key exist
ls /etc/kubernetes/pki/
# set the CLOUDCOREIPS env var to the IP cloudcore runs on
export CLOUDCOREIPS="192.168.247.120"
echo $CLOUDCOREIPS
# generate certificates for CloudStream on the cloud node
wget https://raw.githubusercontent.com/kubeedge/kubeedge/release-1.17/build/tools/certgen.sh # download the script for the matching release
chmod +x certgen.sh
./certgen.sh stream

# set up iptables
# on the master node
# find the port from the ipTunnelPort field in the output; here it is 10352
kubectl get cm tunnelport -nkubeedge -oyaml  
sudo su
# flush iptables (caution: this clears all existing rules)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

iptables -t nat -A OUTPUT -p tcp --dport 10352 -j DNAT --to 192.168.247.120:10003

# edit the cloud-side cloudcore configuration
kubectl edit configmap cloudcore -n kubeedge
# set cloudStream.enable to true

# on the edge node, edit the edge-side configuration
vim /etc/kubeedge/config/edgecore.yaml
# set edgeStream.enable to true

# restart the cloud and edge sides
kubectl delete pod cloudcore-dc75f4b46-jgks2  -n kubeedge
# run on the edge node
sudo systemctl restart edgecore

# view the logs of a pod running on the edge node
kubectl logs edge-eclipse-mosquitto-nxwz4 -n kubeedge 

References

  1. KubeEdge v1.17.0 detailed installation tutorial
  2. KubeEdge v1.17.0 deployment tutorial
  3. Deploying the cloud-native edge computing platform KubeEdge
  4. Setting up a KubeEdge environment (with flannel network plugin support)
  5. KubeEdge official documentation
  6. Edge-node flannel pod error: Error from server: Get "https://192.168.50.3:10350/containerLogs/kube-flannel/kube-flannel-ds-q5gh9/kube-flannel": dial tcp XXXXX:10350: connect: connection refused
  7. Run istio bookinfo demo on kubeedge #2677
  8. Collecting edge monitoring metrics