kind (Kubernetes IN Docker) packages all the components a Kubernetes node needs into a Docker container. It is an out-of-the-box solution for standing up a k8s environment and lets you build a test platform quickly. Because each container simulates one k8s node, you can easily deploy a "multi-node" cluster, or even a "highly available" cluster, on a single machine, and you can deploy and manage clusters of several different versions. When building a personal learning environment, a cluster with multiple control-plane and multiple worker nodes is usually beyond the resources of a personal computer, which makes deploying the cluster with kind very worthwhile.

References:

kind – Quick Start
Install Tools | Kubernetes

Download kind

Download the release you need from Releases · kubernetes-sigs/kind · GitHub

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind

Create a cluster

kind create cluster # the name defaults to "kind" if --name is not given
kind create cluster --name kind-2

View cluster info

The kubectl context is kind- plus the name given at creation time; since the name above was kind-2, the context is kind-kind-2:

kubectl cluster-info --context kind-kind
kubectl cluster-info --context kind-kind-2

You can set an alias so you do not have to pass --context every time:

alias kubectl='kubectl --context kind-kind-2'

Load images

Images on the host cannot be used directly; they must be loaded into the kind nodes (containers):

kind load docker-image my-custom-image-0 my-custom-image-1
# For example (--name is the name used when the cluster was created):
kind load docker-image nginx:1.13 --name kind-2

List images

docker exec -it my-node-name crictl images   # my-node-name is the node container name, e.g. kind-2-control-plane

Multi-node cluster

A multi-node cluster here is simulated: the "nodes" are really just multiple containers on one host.

config.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Deploy the cluster

kind create cluster --config config.yaml
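Each node entry can also pin the node image explicitly, which keeps cluster versions reproducible across rebuilds. A minimal sketch, assuming kindest/node:v1.21.1 (the default node image for kind v0.11.1):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.21.1   # pin the Kubernetes version of this node
- role: worker
  image: kindest/node:v1.21.1
```

Pinning the image this way is also how different clusters on the same host can run different Kubernetes versions.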

High-availability cluster

Simulate multiple control-plane nodes

config.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

Deploy the cluster

kind create cluster --config config.yaml

Test

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.13
        # do not use the :latest tag here, or set an explicit image pull policy
From the kind documentation: The Kubernetes default pull policy is IfNotPresent unless the image tag is :latest or omitted (and implicitly :latest) in which case the default policy is Always. IfNotPresent causes the Kubelet to skip pulling an image if it already exists. If you want those images loaded into node to work as expected, please:

don't use a :latest tag
and / or:

specify imagePullPolicy: IfNotPresent or imagePullPolicy: Never on your container(s).
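Following the second option, a sketch of the container spec with the pull policy pinned, so the kubelet uses the image loaded via kind load instead of pulling from a registry:

```yaml
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.13
        imagePullPolicy: IfNotPresent   # never re-pull if the image is already on the node
```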

Apply the deployment

kubectl apply -f deployment.yaml
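To check the result and reach nginx from outside the pods, you could expose the Deployment with a NodePort Service; the name and port numbers below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: deployment-example
spec:
  type: NodePort
  selector:
    app: nginx          # matches the Deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080     # arbitrary port in the NodePort range
```

Keep in mind the nodes are containers, so reaching a NodePort from the host additionally requires the node's port to be published, e.g. via extraPortMappings in the kind config.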

Additional notes

How kind images are built

There are three main images

The base image

It is built on the debian:bullseye-slim image and mainly does the following:

  1. Installs the necessary packages on top of it, such as systemd, iptables, mount and bash
  2. Generates service configuration files for kubelet, containerd, etc., and enables them to start at boot
  3. Compiles the containerd, runc, cni, etc. binaries from their sources

See images/base/Dockerfile in the kind source; an excerpt:

...

RUN echo "Installing Packages ..." \
    && DEBIAN_FRONTEND=noninteractive clean-install \
      systemd \
      conntrack iptables iproute2 ethtool util-linux mount ebtables kmod \
      libseccomp2 pigz fuse-overlayfs \
      nfs-common open-iscsi \
      bash ca-certificates curl jq procps \
    && find /lib/systemd/system/sysinit.target.wants/ -name "systemd-tmpfiles-setup.service" -delete \
    && rm -f /lib/systemd/system/multi-user.target.wants/* \
    && rm -f /etc/systemd/system/*.wants/* \
    && rm -f /lib/systemd/system/local-fs.target.wants/* \
    && rm -f /lib/systemd/system/sockets.target.wants/*udev* \
    && rm -f /lib/systemd/system/sockets.target.wants/*initctl* \
    && rm -f /lib/systemd/system/basic.target.wants/* \
    && echo "ReadKMsg=no" >> /etc/systemd/journald.conf \
    && ln -s "$(which systemd)" /sbin/init

RUN echo "Enabling services ... " \
    && systemctl enable kubelet.service \
    && systemctl enable containerd.service \
    && systemctl enable undo-mount-hacks.service

RUN echo "Ensuring /etc/kubernetes/manifests" \
    && mkdir -p /etc/kubernetes/manifests
...

# stage for building cni-plugins
FROM go-build as build-cni
ARG TARGETARCH GO_VERSION
ARG CNI_PLUGINS_VERSION="v1.3.0"
ARG CNI_PLUGINS_CLONE_URL="https://github.com/containernetworking/plugins"
RUN git clone --filter=tree:0 "${CNI_PLUGINS_CLONE_URL}" /cni-plugins \
    && cd /cni-plugins \
    && git checkout "${CNI_PLUGINS_VERSION}" \
    && eval "$(gimme "${GO_VERSION}")" \
    && mkdir ./bin \
    && export GOARCH=$TARGETARCH && export CC=$(target-cc) && export CGO_ENABLED=1 \
    && go build -o ./bin/host-local -mod=vendor ./plugins/ipam/host-local \
    && go build -o ./bin/loopback -mod=vendor ./plugins/main/loopback \
    && go build -o ./bin/ptp -mod=vendor ./plugins/main/ptp \
    && go build -o ./bin/portmap -mod=vendor ./plugins/meta/portmap \
    && GOARCH=$TARGETARCH go-licenses save --save_path=/_LICENSES \
        ./plugins/ipam/host-local \
        ./plugins/main/loopback ./plugins/main/ptp \
        ./plugins/meta/portmap
...
The node image

It is built on the base image and mainly does the following:

  1. When building the node image, you point kind at a k8s source directory; kind switches into that directory and runs the make targets from the k8s source to build the binaries and images. See pkg/build/nodeimage/internal/kube/builder_docker.go in the kind source; an excerpt:

...

binDir := filepath.Join(b.kubeRoot,
   "_output", "dockerized", "bin", "linux", b.arch,
)
imageDir := filepath.Join(b.kubeRoot,
   "_output", "release-images", b.arch,
)

return &bits{
   binaryPaths: []string{
      filepath.Join(binDir, "kubeadm"),
      filepath.Join(binDir, "kubelet"),
      filepath.Join(binDir, "kubectl"),
   },
   imagePaths: []string{
      filepath.Join(imageDir, "kube-apiserver.tar"),
      filepath.Join(imageDir, "kube-controller-manager.tar"),
      filepath.Join(imageDir, "kube-scheduler.tar"),
      filepath.Join(imageDir, "kube-proxy.tar"),
   },
   version: sourceVersionRaw,
}, nil
...

  2. Const files (.go) under pkg/build/nodeimage/ are used to generate the CSI and CNI YAML manifests for k8s. Their contents are hard-coded, and kind downloads the required images ahead of time; those image names are hard-coded as well. See pkg/build/nodeimage/buildcontext.go in the kind source; an excerpt:

...
// write the default CNI manifest
if err := createFile(cmder, defaultCNIManifestLocation, defaultCNIManifest); err != nil {
   c.logger.Errorf("Image build Failed! Failed write default CNI Manifest: %v", err)
   return nil, err
}
// all builds should install the default CNI images from the above manifest currently
requiredImages = append(requiredImages, defaultCNIImages...)

// write the default Storage manifest
if err := createFile(cmder, defaultStorageManifestLocation, defaultStorageManifest); err != nil {
   c.logger.Errorf("Image build Failed! Failed write default Storage Manifest: %v", err)
   return nil, err
}
// all builds should install the default storage driver images currently
requiredImages = append(requiredImages, defaultStorageImages...)
...

  3. After the relevant steps are applied on top of the base image, the node image is produced with docker commit. See pkg/build/nodeimage/buildcontext.go:

// Save the image changes to a new image
if err = exec.Command(
   "docker", "commit",
   // we need to put this back after changing it when running the image
   "--change", `ENTRYPOINT [ "/usr/local/bin/entrypoint", "/sbin/init" ]`,
   containerID, c.image,
).Run(); err != nil {
   c.logger.Errorf("Image build Failed! Failed to save image: %v", err)
   return err
}

c.logger.V(0).Infof("Image %q build completed.", c.image)
return nil
Kubernetes component images

The component images are produced while the node image is being built. There are four of them, found under the following path in the k8s source tree:

# ls _output/release-images/amd64/
kube-apiserver.tar           kube-controller-manager.tar  kube-proxy.tar               kube-scheduler.tar

How a kind cluster starts

Starting a kind cluster

The ClusterOptions struct in pkg/cluster/internal/create/create.go defines the creation options. ClusterOptions.Config holds the configurable cluster settings and matches the official kind – Configuration documentation, though the docs lag slightly behind the source. You can put these settings into a YAML file and pass it with --config when creating the cluster.

// ClusterOptions holds cluster creation options
type ClusterOptions struct {
   Config       *config.Cluster
   NameOverride string // overrides config.Name
   // NodeImage overrides the nodes' images in Config if non-zero
   NodeImage      string
   Retain         bool
   WaitForReady   time.Duration
   KubeconfigPath string
   // see https://github.com/kubernetes-sigs/kind/issues/324
   StopBeforeSettingUpKubernetes bool // if false kind should setup kubernetes after creating nodes
   // Options to control output
   DisplayUsage      bool
   DisplaySalutation bool
}
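A config file passed via --config populates ClusterOptions.Config. As a sketch, here is a file that sets the cluster name and pod subnet (the field names follow the kind configuration docs; the values are arbitrary):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: demo                        # same effect as --name demo
networking:
  podSubnet: "10.244.0.0/16"      # CIDR assigned to pods
```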

The same file, pkg/cluster/internal/create/create.go, defines how the k8s cluster is brought up. For example, if the config file disables the default CNI, the CNI installation step is skipped; the storage installation, by contrast, cannot be turned off and always runs:

actionsToRun := []actions.Action{
   loadbalancer.NewAction(), // setup external loadbalancer
   configaction.NewAction(), // setup kubeadm config
}
if !opts.StopBeforeSettingUpKubernetes {
   actionsToRun = append(actionsToRun,
      kubeadminit.NewAction(opts.Config), // run kubeadm init
   )
   // this step might be skipped, but is next after init
   if !opts.Config.Networking.DisableDefaultCNI {
      actionsToRun = append(actionsToRun,
         installcni.NewAction(), // install CNI
      )
   }
   // add remaining steps
   actionsToRun = append(actionsToRun,
      installstorage.NewAction(),                // install StorageClass
      kubeadmjoin.NewAction(),                   // run kubeadm join
      waitforready.NewAction(opts.WaitForReady), // wait for cluster readiness
   )
}

// run all actions
actionsContext := actions.NewActionContext(logger, status, p, opts.Config)
for _, action := range actionsToRun {
   if err := action.Execute(actionsContext); err != nil {
      if !opts.Retain {
         _ = delete.Cluster(logger, p, opts.Config.Name, opts.KubeconfigPath)
      }
      return err
   }
}

The concrete implementations live in the corresponding code under pkg/cluster/internal/create/actions. CNI installation, for example, simply applies the YAML manifest that was generated when the node image was built, and the images it needs were likewise downloaded into the node image at build time:

...
// Execute runs the action
func (a *action) Execute(ctx *actions.ActionContext) error {
   ctx.Status.Start("Installing CNI")
   defer ctx.Status.End(false)

   allNodes, err := ctx.Nodes()
   if err != nil {
      return err
   }

   // get the target node for this task
   controlPlanes, err := nodeutils.ControlPlaneNodes(allNodes)
   if err != nil {
      return err
   }
   node := controlPlanes[0] // kind expects at least one always

   // read the manifest from the node
   var raw bytes.Buffer
   if err := node.Command("cat", "/kind/manifests/default-cni.yaml").SetStdout(&raw).Run(); err != nil {
      return errors.Wrap(err, "failed to read CNI manifest")
   }
   manifest := raw.String()
...
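Tying this back to the DisableDefaultCNI branch above: to skip kind's built-in CNI and install your own afterwards, the config would look like the sketch below (the networking field is the real kind option; which CNI you then apply is up to you):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # kind will skip the install CNI action
```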
