I. Environment Preparation

  KubeEdge runs on top of Kubernetes, so you first need to set up a Kubernetes cluster to serve as the cloud side.

Prerequisites

  • A Kubernetes master node is required; it will host cloudcore.
  • The kube-flannel network plugin does not need to be deployed.
  • kube-proxy does not need to be deployed on the edge side (a quick check is shown below).
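
A quick sanity check of these points from the cloud node, assuming a kubeadm-based cluster and a working kubectl context:

$ kubectl get nodes -o wide              # the master node should be Ready
$ kubectl get daemonset -n kube-system   # flannel should not appear in the list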

II. Installing KubeEdge

1. Download the keadm package from the official releases page

# Download directly
$ wget https://github.com/kubeedge/kubeedge/releases/download/v1.7.1/keadm-v1.7.1-linux-amd64.tar.gz

# Extract the archive
$ tar -zxvf keadm-v1.7.1-linux-amd64.tar.gz

# Put the keadm binary on the PATH (the tarball extracts to keadm-v1.7.1-linux-amd64/)
$ mv ./keadm-v1.7.1-linux-amd64/keadm/keadm /usr/local/bin/

2. Initialize the cloudcore node

# --advertise-address is supported only since the v1.3 release
$ keadm init --advertise-address="THE-EXPOSED-IP"

  If GitHub cannot be reached, look up the IPs for the domains involved and add them to /etc/hosts:

# GitHub Start
52.74.223.119 github.com
192.30.253.119 gist.github.com
54.169.195.247 api.github.com
185.199.111.153 assets-cdn.github.com
185.199.110.133 raw.githubusercontent.com
151.101.108.133 user-images.githubusercontent.com
185.199.110.133 gist.githubusercontent.com
185.199.110.133 cloud.githubusercontent.com
185.199.110.133 camo.githubusercontent.com
185.199.110.133 avatars0.githubusercontent.com
185.199.110.133 avatars1.githubusercontent.com
185.199.110.133 avatars2.githubusercontent.com
185.199.110.133 avatars3.githubusercontent.com
185.199.110.133 avatars4.githubusercontent.com
185.199.110.133 avatars5.githubusercontent.com
185.199.110.133 avatars6.githubusercontent.com
185.199.110.133 avatars7.githubusercontent.com
185.199.110.133 avatars8.githubusercontent.com
# GitHub End
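
Once keadm init finishes, it is worth confirming that cloudcore is actually up. A quick check, assuming keadm's defaults (cloudcore runs as a plain process and listens on websocket port 10000):

$ ps -ef | grep [c]loudcore       # the cloudcore process should be running
$ ss -tlnp | grep 10000           # the default websocket port should be listening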

3. Join an edgecore node

# 1. Get the token from the master node
$ keadm gettoken

# 2. Join the edge node
$ keadm join --cloudcore-ipport=192.168.11.100:10000 --token=6ee3556fc2d8e6736326e701b17169a90663f5de4ee37ad9e795ea9f76f8dcbe.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MjY2ODM4MDd9.tAaT0GwHWMLQ330sx0dBteNiQS2IeJH_fVmdip_gHfk

# Optionally specify the certificate directory
--certPath="/etc/kubeedge/certs"
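
After the join succeeds, the edge node should appear on the cloud side within a minute or so:

$ kubectl get nodes -o wide       # the edge node typically shows ROLES agent,edge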

4. Install metrics-server

Enable the kubectl logs feature

kubectl logs must be set up before metrics-server can be used; activate the feature with the following steps:

  1. Make sure you can locate the Kubernetes ca.crt and ca.key files. If you installed the cluster with kubeadm, they are under /etc/kubernetes/pki/.

    ls /etc/kubernetes/pki/

  2. Set the CLOUDCOREIPS environment variable to the IP address of cloudcore. If you run a highly available cluster, you can specify a VIP (an elastic/virtual IP) instead.

    export CLOUDCOREIPS="192.168.11.100"


    (Note: use the same terminal session throughout so the variable stays in effect, and re-export it whenever you open a new one.) Check the variable with:

    echo $CLOUDCOREIPS

  3. Generate the certificates for CloudStream on the cloud node. The generator script is not shipped under /etc/kubeedge/, so copy it from the cloned GitHub repository.

    Switch to the root user:

sudo su

    Copy the certificate generator from the cloned repository:

cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/

    Change into the kubeedge directory:

cd /etc/kubeedge/

    Generate the certificates with certgen.sh:

/etc/kubeedge/certgen.sh stream
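
The script should have written the stream CA and certificates; a quick check (the paths match those referenced by cloudcore.yaml below):

$ ls /etc/kubeedge/ca /etc/kubeedge/certs    # expect streamCA.crt and stream.crt/stream.key
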
  4. Set up iptables on the host. (This command must be run on every node where an apiserver is deployed; in this setup that means running it on the master node as root.) On each host running an apiserver, run:

    Note: you need to set the CLOUDCOREIPS variable first
$ iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to $CLOUDCOREIPS:10003

# Ports 10003 and 10350 are the default ports for CloudStream and edgecore; if you changed them, use your own values.

If you are not sure whether iptables rules were already set and want to clear all of them (a wrong rule will block the kubectl logs feature), clean them up with:

$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
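
To confirm the DNAT rule landed where expected, list the nat OUTPUT chain:

$ iptables -t nat -L OUTPUT -n --line-numbers
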
  5. Modify /etc/kubeedge/config/cloudcore.yaml on the cloud side and /etc/kubeedge/config/edgecore.yaml on the edge side. Set cloudStream and edgeStream to enable: true, and change the server IP to the cloudcore IP (the same as $CLOUDCOREIPS).

    Open the YAML file on the cloudcore side:

    vim /etc/kubeedge/config/cloudcore.yaml

    Set enable: true in the following section:
cloudStream:
  enable: true
  streamPort: 10003
  tlsStreamCAFile: /etc/kubeedge/ca/streamCA.crt
  tlsStreamCertFile: /etc/kubeedge/certs/stream.crt
  tlsStreamPrivateKeyFile: /etc/kubeedge/certs/stream.key
  tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
  tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
  tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
  tunnelPort: 10004

Open the YAML file on the edgecore side:

vim /etc/kubeedge/config/edgecore.yaml

Set enable: true and point server at the cloudcore IP and tunnel port (server: 192.168.11.100:10004 in this setup):

edgeStream:
  enable: true
  handshakeTimeout: 30
  readDeadline: 15
  server: 192.168.11.100:10004
  tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
  tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
  tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
  writeDeadline: 15
  6. Restart cloudcore and edgecore:

    sudo su

cloudcore:

pkill cloudcore
nohup cloudcore > cloudcore.log 2>&1 &

edgecore:

systemctl restart edgecore.service

If edgecore fails to restart, check whether kube-proxy is the culprit and kill that process. KubeEdge does not manage kube-proxy by default; edgemesh is used in its place.

Note: it is best to keep kube-proxy from being deployed on edge nodes at all. There are two ways to do this:

  1. Add the following settings by running kubectl edit daemonsets.apps -n kube-system kube-proxy:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/edge
              operator: DoesNotExist
  2. If you still want to run kube-proxy, tell edgecore to skip the environment check by adding an env variable to edgecore.service:
sudo vi /etc/kubeedge/edgecore.service
  • Add the following line to the edgecore.service file:
Environment="CHECK_EDGECORE_ENVIRONMENT=false"
  • The final file should look like this:
[Unit]
Description=edgecore.service

[Service]
Type=simple
ExecStart=/root/cmd/ke/edgecore --logtostderr=false --log-file=/root/cmd/ke/edgecore.log
Environment="CHECK_EDGECORE_ENVIRONMENT=false"

[Install]
WantedBy=multi-user.target
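
After editing the unit file, reload systemd and restart the service so the new environment variable takes effect:

$ systemctl daemon-reload
$ systemctl restart edgecore.service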

Support metrics-server on the cloud side

  1. This feature reuses the cloudstream and edgestream modules, so you also need to perform all the steps under Enable the kubectl logs feature above.

  2. Deploy metrics-server (version 0.4.0 or later is required):

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: bitnami/metrics-server:0.4.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        master: "true"
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
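
Save the manifest as metrics-server-deployment.yaml (the file name used in the original listing), apply it, and check that node metrics start flowing once the pod is Ready:

$ kubectl apply -f metrics-server-deployment.yaml
$ kubectl get pods -n kube-system | grep metrics-server
$ kubectl top nodes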


Important notes

  • metrics-server must run in host network mode.
  • If you use an image you compiled yourself, set imagePullPolicy to Never.
  • Enable the --kubelet-use-node-status-port feature for metrics-server.

5. Remove an edge node

# cloud
$ keadm reset # --kube-config=$HOME/.kube/config


# edge
$ keadm reset # on the edge node, keadm reset stops edgecore; it does not uninstall or delete any prerequisites
# Stop the remaining processes one by one and delete the related files and dependencies
rm -rf /var/lib/kubeedge /var/lib/edged /etc/kubeedge
rm -rf /etc/systemd/system/edgecore.service
rm -rf /usr/local/bin/edgecore
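
Stopping edgecore does not remove the Node object from the API server; to remove the node completely, delete it on the cloud side as well (the node name below is a placeholder):

$ kubectl delete node <edge-node-name>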

6. Problems encountered

1. edgecore and docker use different cgroup drivers

# Check the logs
journalctl -f -u edgecore

# The cause may be that docker and edgecore use different cgroup drivers (edgecore defaults to cgroupfs). Either modify docker's daemon.json:
{
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
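
After changing /etc/docker/daemon.json, restart docker so the new cgroup driver takes effect:

$ systemctl daemon-reload
$ systemctl restart docker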

# Or change edgecore's cgroup driver to systemd in /etc/kubeedge/config/edgecore.yaml:
modules:
  edged:
    cgroupDriver: systemd
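
Restart edgecore after changing its config:

$ systemctl restart edgecore.service
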
2. Edge applications must be deployed in hostNetwork mode

  KubeEdge does not ship a CNI plugin such as flannel by default, so the pod network cannot communicate across hosts. With hostNetwork enabled, the application is accessed directly through the host's port; a minimal sketch follows.
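
A minimal sketch of such a deployment (the name and image are placeholders for illustration; node-role.kubernetes.io/edge is the label applied to edge nodes, as referenced in the kube-proxy affinity above):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-edge                 # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      hostNetwork: true            # share the host network; no CNI needed
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # schedule onto an edge node
      containers:
      - name: nginx
        image: nginx:1.21          # placeholder image; must be pullable from the edge node
        ports:
        - containerPort: 80
EOF

With hostNetwork, the application is then reachable at http://<edge-node-ip>:80.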
