Adapting Kata Containers v2 to Kubernetes

Stack: CRI + containerd + Kata Containers v2 + QEMU

Deployment environment

Kubernetes talks to containerd through the CRI, and containerd loads Kata Containers v2 through its runtime plugin (shim v2) mechanism.

containerd manages container images on its own and cannot share Docker's existing image store. containerd images are also isolated by namespace: CRI defaults to the k8s.io namespace, while containerd's own default namespace is default. Keep both points in mind when debugging.
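For example, an image imported without the -n flag lands in the default namespace and stays invisible to CRI. A quick sketch of the two views:

ctr namespaces list          # typically lists default and k8s.io
ctr images list              # images in the default namespace only
ctr -n k8s.io images list    # images the CRI plugin (and thus kubelet) can see
crictl images                # CRI's view, matching the k8s.io namespace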

Installing Kata from the binary release

Kata can be installed from the prebuilt release package: download the static binary tarball and extract it at the filesystem root. The files install under /opt/kata; afterwards, copy (or symlink) containerd-shim-kata-v2 and kata-runtime into /usr/bin, as shown after the extraction steps below.

Download the release package:
wget https://github.com/kata-containers/kata-containers/releases/download/2.0.3/kata-static-2.0.3-x86_64.tar.xz

Decompress the package:
xz -d kata-static-2.0.3-x86_64.tar.xz

Extract the tarball to the root directory:
tar xvf kata-static-2.0.3-x86_64.tar -C /
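The static tarball only unpacks under /opt/kata, so putting the binaries on PATH and sanity-checking the host are manual steps; a minimal sketch (symlinks shown, plain copies work just as well):

ln -sf /opt/kata/bin/containerd-shim-kata-v2 /usr/bin/containerd-shim-kata-v2
ln -sf /opt/kata/bin/kata-runtime /usr/bin/kata-runtime
kata-runtime check    # verifies KVM support and other host requirements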

Installing containerd with yum

After containerd is installed, generate its default configuration file:

[root@master1 ~]# yum install containerd -y
[root@master1 ~]# containerd config default > /etc/containerd/config.toml
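One extra check worth making here (an assumption on my part, not a step from the original): kubelet is configured later with cgroup-driver=systemd, so the runc runtime in the generated config.toml should use the systemd cgroup driver as well. The relevant snippet for containerd 1.3+:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true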

Installing the CRI tools

go get github.com/kubernetes-incubator/cri-tools
cd /root/go/pkg/mod/github.com/kubernetes-incubator/cri-tools@v1.20.0
go mod vendor
make
make install
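The project has since moved to kubernetes-sigs/cri-tools, and a prebuilt release binary is a simpler alternative to building from source. crictl also needs to be pointed at the containerd socket; the version and install paths below are assumptions:

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
tar zxvf crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/bin
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF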

Configuring the Kata environment

Add the kata runtime configuration (the diff below shows the lines added to /etc/containerd/config.toml, compared against the backup):

[root@master1 ~]# diff /etc/containerd/config.toml /etc/containerd/config.toml.bak 
90,93d89
<         [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
<           runtime_type = "io.containerd.kata.v2"
<           privileged_without_host_devices = true
< 
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart containerd
[root@master1 ~]# systemctl enable containerd

Use crictl to check that the runtime loaded successfully: if the kata entry appears in the runtimes section, the configuration has been picked up.

[root@master1 ~]# crictl info
      "runtimes": {
        "kata": {
          "runtimeType": "io.containerd.kata.v2",
          "runtimeEngine": "",
          "PodAnnotations": null,
          "ContainerAnnotations": null,
          "runtimeRoot": "",
          "options": null,
          "privileged_without_host_devices": true,
          "baseRuntimeSpec": ""
        },
        "runc": {
          "runtimeType": "io.containerd.runc.v2",
          "runtimeEngine": "",
          "PodAnnotations": null,
          "ContainerAnnotations": null,
          "runtimeRoot": "",
          "options": {},
          "privileged_without_host_devices": false,
          "baseRuntimeSpec": ""
        }

Deploying Kubernetes

1. Add kubelet startup arguments

[root@master1 ~]# cat << EOF | sudo tee  /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]                                                 
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
[root@master1 ~]# systemctl daemon-reload

2. If you bootstrap the cluster with the kubeadm command, point --cri-socket at the containerd socket:

kubeadm init --cri-socket /run/containerd/containerd.sock

If you bootstrap with a config file instead, add the criSocket path under nodeRegistration in InitConfiguration:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cgroup-driver: "systemd"
  criSocket: /var/run/containerd/containerd.sock

3. Docker and containerd each manage container images independently, so after replacing the old kubelet + docker-shim + docker + containerd + runc stack with kubelet + CRI + containerd + kata, the images previously stored in Docker must be migrated into containerd; otherwise CRI cannot find them.

Save an image from Docker:
docker save k8s.gcr.io/kube-apiserver:v1.18.0 -o kube-apiserver:v1.18.0

Import it into containerd:
ctr -n k8s.io images import kube-apiserver:v1.18.0

The CRI environment reads images from the k8s.io namespace by default, so the namespace (-n k8s.io) must be specified when loading images into containerd.
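To migrate all required images in one pass instead of one at a time, docker save can be piped straight into ctr; a sketch, where the grep filter is an assumption matching the image list below:

for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'k8s.gcr.io|flannel'); do
    docker save "$img" | ctr -n k8s.io images import -
done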

The images that need migrating for a basic Kubernetes cluster are as follows:

[root@master1 ~]# ls images/
coredns:1.6.7  etcd:3.4.3-0  flannel:v0.10.0-amd64  kube-apiserver:v1.18.0
kube-controller-manager:v1.18.0  kube-proxy:v1.18.0  kube-scheduler:v1.18.0
nginx-ingress-controller:0.25.1  pause:3.2
[root@master1 ~]# ctr -n k8s.io images list | grep -E "k8s.gcr.io|flannel"
100.64.0.1:4000/caas/coreos/flannel:v0.10.0-amd64  application/vnd.docker.distribution.manifest.v2+json  sha256:bdb90aecca393aac2c046c2c587bb2dec640055f07010b0f196096d3452aba92  43.2 MiB   linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/coredns:1.6.7  application/vnd.docker.distribution.manifest.v2+json  sha256:b437cfc11a022d3040ac125d89a59ed569f588bfa5d8ef5d15e5a82d95cf6bff  41.9 MiB   linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/coreos/etcd:3.4.3-0  application/vnd.docker.distribution.manifest.v2+json  sha256:ac826dda76c582d6c0cdcb4f81091b6c088158cebedd4ebe31c929f3724688a8  276.6 MiB  linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/etcd:3.4.3-0  application/vnd.docker.distribution.manifest.v2+json  sha256:ac826dda76c582d6c0cdcb4f81091b6c088158cebedd4ebe31c929f3724688a8  276.6 MiB  linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/kube-apiserver:v1.18.0  application/vnd.docker.distribution.manifest.v2+json  sha256:4be4022e938ce8a11247148e3777914c772b6e57e2b186c00d17e1da06dbb6d9  166.4 MiB  linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/kube-controller-manager:v1.18.0  application/vnd.docker.distribution.manifest.v2+json  sha256:6371a09a9bc5b98d12ab660447e6f47bb159d4f23e82ff5907ca09912566d452  156.3 MiB  linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/kube-proxy:v1.18.0  application/vnd.docker.distribution.manifest.v2+json  sha256:2e586437365602e203e935ccb8125f9d9e881b7a242dff03b3eca6064165403f  113.0 MiB  linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/kube-scheduler:v1.18.0  application/vnd.docker.distribution.manifest.v2+json  sha256:40593ab271c29fc5ececa097ecce422b8ba3e5a8473b64dbef5b459aefd09fd0  92.3 MiB   linux/amd64  io.cri-containerd.image=managed
k8s.gcr.io/pause:3.2  application/vnd.docker.distribution.manifest.v2+json  sha256:61e45779fc594fcc1062bb9ed2cf5745b19c7ba70f0c93eceae04ffb5e402269  669.7 KiB  linux/amd64  io.cri-containerd.image=managed
quay.io/coreos/flannel:v0.10.0-amd64  application/vnd.docker.distribution.manifest.v2+json  sha256:ac4e1a9f333444e7985fa155f36ba311038e0115f75f388444529cd4ff6d43b0  43.2 MiB   linux/amd64  io.cri-containerd.image=managed

Deploy the Kubernetes cluster:

[root@master1 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cgroup-driver: "systemd"
  criSocket: /var/run/containerd/containerd.sock
localAPIEndpoint:
  advertiseAddress: "10.0.1.25"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
apiServer:
  certSANs:
  - "10.0.1.25"
  - "master1"
  - "master2"
  - "master3"
  timeoutForControlPlane: 4m0s
controlPlaneEndpoint: "10.0.1.25:6443"
networking:
  podSubnet: "10.223.0.0/16"
  serviceSubnet: "10.96.0.0/16"
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
[root@master1 ~]# kubectl apply -f templates/flannel.yaml

Add a RuntimeClass so that kata is registered as an additional runtime in Kubernetes:

[root@master1 ~]# cat kata-runtime.yaml
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
  name: kata-runtime
handler: kata
[root@master1 ~]# kubectl apply -f kata-runtime.yaml
runtimeclass.node.k8s.io/kata-runtime created
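A quick way to confirm the RuntimeClass registered (note that on Kubernetes 1.20+ RuntimeClass is GA and the apiVersion becomes node.k8s.io/v1, with the same fields):

kubectl get runtimeclass kata-runtime -o yaml    # the handler field should read: kata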

Deploying a container with the runc runtime

[root@master1 ~]# kubectl apply -f busybox-untrusted.yaml
pod/nginx-untrusted created
[root@master1 ~]# cat busybox-untrusted.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
spec:
  containers:
  - name: nginx
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sleep"]
    args: ["1000000"]
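As a baseline for the kata pod below: a runc container shares the host kernel, which is easy to confirm (a hedged check, not part of the original transcript):

kubectl exec nginx-untrusted -- uname -r    # prints the host kernel version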

Deploying a container with the kata runtime. Both containerd and kubelet on the node running the pod must already carry the Kata configuration described above.

[root@master1 ~]# kubectl apply -f busybox-trusted.yaml
pod/nginx-trusted created
[root@master1 ~]# cat busybox-trusted.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-trusted
spec:
  runtimeClassName: kata-runtime
  containers:
  - name: nginx
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sleep"]
    args: ["1000000"]
[root@master1 ~]# kubectl get pod
NAME            READY   STATUS              RESTARTS   AGE
nginx-trusted   0/1     ContainerCreating   0          2s
[root@master1 ~]# ps -ef | grep qemu
root      2702  2694  0 20:36 ?        00:00:00 /opt/kata/libexec/kata-qemu/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
root      2708     1 92 20:36 ?        00:00:04 /opt/kata/bin/qemu-system-x86_64 -name sandbox-1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142 -uuid 3ba6df51-f442-4557-9664-121dbdef8d2d -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host,pmu=off -qmp unix:/run/vc/vm/1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=8985M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=true,id=serial0,romfile= -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/kata-containers-image_clearlinux_2.0.3_agent_ef11ce13ea.img,size=134217728 -device virtio-scsi-pci,id=scsi0,disable-modern=true,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0,romfile= -device vhost-vsock-pci,disable-modern=true,vhostfd=3,id=vsock-1395707738,guest-cid=1395707738,romfile= -chardev socket,id=char-d3b6cd716e79b01e,path=/run/vc/vm/1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-d3b6cd716e79b01e,tag=kataShared,romfile= -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=0a:b5:7b:cd:b7:92,disable-modern=true,mq=on,vectors=4,romfile= -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/vmlinux-5.4.71-84 -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 quiet systemd.show_status=false panic=1 nr_cpus=4 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none -pidfile /run/vc/vm/1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142/pid -smp 1,cores=1,threads=1,sockets=4,maxcpus=4
root      2711  2702  0 20:36 ?        00:00:00 /opt/kata/libexec/kata-qemu/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/1dad0d41a934278271b165bd1e3c1b1ea7e6e2e2a990381e225b50ec37e5f142/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
root      2773 15696  0 20:36 pts/1    00:00:00 grep --color=auto qemu
root      3267     1  0 Apr25 ?        00:00:00 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook
[root@master1 ~]# kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
nginx-trusted   1/1     Running   0          25s
[root@master1 ~]# kubectl exec -it nginx-trusted sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # uname -r
5.4.71
/ #
[root@master1 ~]# uname -r
5.11.16-1.el7.elrepo.x86_64

Deploying with a Deployment

[root@master1 ~]# cat busybox-ds-trust.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-runtime
      containers:
      - name: nginx
        image: caas.io/caas/nginx
        ports:
        - containerPort: 80
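Each kata replica runs in its own lightweight VM, so the scheduled node should show one qemu-system process per pod; a hedged way to verify:

kubectl get pods -l app=nginx -o wide    # see which nodes the replicas landed on
pgrep -fc qemu-system                    # on a node: roughly one qemu per kata pod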

For a single-master deployment, the steps above are all that is needed. When more nodes join the cluster, however, kubelet configuration synchronization becomes an issue, so further adjustment is required.

Multi-node deployment

After kubeadm has deployed the cluster, a newly joined node whose kubelet configuration was not synchronized from the master defaults to the dockerd runtime. Kata v2 cannot run under Docker, so pod creation fails with the following error:

Apr 28 16:28:57 node1 kubelet[19932]: E0428 16:28:57.883336   19932 pod_workers.go:191] Error syncing pod 53839381-1a15-4a63-ad93-6de771e85764 ("nginx-trusted_default(53839381-1a15-4a63-ad93-6de771e85764)"), skipping: failed to "CreatePodSandbox" for "nginx-trusted_default(53839381-1a15-4a63-ad93-6de771e85764)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-trusted_default(53839381-1a15-4a63-ad93-6de771e85764)\" failed: rpc error: code = Unknown desc = RuntimeHandler \"kata\" not supported"

The message indicates that the kata runtime handler cannot be found.

There are two solutions:

1. Edit the /var/lib/kubelet/kubeadm-flags.env file on the new node and set the container-runtime flags to containerd; copying the master's version of this file to the new node is sufficient.

Ref: Configuring each kubelet in your cluster using kubeadm | Kubernetes

Original configuration:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"

New configuration:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=/var/run/containerd/containerd.sock"
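After syncing the file, reload and restart kubelet on the new node so the flags take effect:

systemctl daemon-reload
systemctl restart kubelet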

2. When deploying the cluster, enable kubelet configuration synchronization in the kubeadm config so that newly joined nodes automatically pick up the master's kubelet settings.

TODO: the official configuration instructions for this do not work at the moment; still investigating.
