K8S Setup

Note: K8S (Kubernetes) is a cluster management platform for containers.

1. Environment Overview

This setup uses local virtual machines; three servers are needed, each with 2 CPU cores, 4 GB of RAM, and a 50 GB disk, running CentOS 7.6.

Hostname    IP                Role
K8S01       192.168.223.50    master node
K8S02       192.168.223.51    worker node
K8S03       192.168.223.52    worker node

Important things are said three times:
Sections 2 through 4 must be run on every machine.
Sections 2 through 4 must be run on every machine.
Sections 2 through 4 must be run on every machine.

2. Base Configuration

2.1. Update yum and install required packages

yum update -y
yum install bash-completion wget net-tools yum-utils device-mapper-persistent-data lvm2 -y

2.2. Configure local hostname resolution

vi /etc/hosts

192.168.223.50 k8s01
192.168.223.51 k8s02
192.168.223.52 k8s03

2.3. Disable the firewall

Note: K8S uses a large number of ports, and this cluster runs on an internal network, so the simplest option there is not to run a firewall at all.
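If you would rather keep firewalld running, the commonly documented Kubernetes ports can be opened instead. This is only a sketch of the typical port list and is not used anywhere else in this guide:

# Sketch: open the usual K8S ports instead of disabling firewalld (alternative, not used below)
firewall-cmd --permanent --add-port=6443/tcp         # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd
firewall-cmd --permanent --add-port=10250/tcp        # kubelet
firewall-cmd --permanent --add-port=10251-10252/tcp  # kube-scheduler / kube-controller-manager
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort services
firewall-cmd --reload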

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2.4. Disable swap

swapoff -a
cp /etc/fstab  /etc/fstab.bak
cat /etc/fstab.bak | grep -v swap > /etc/fstab
## Verify that swap is disabled
free -m

If the Swap row in the free -m output shows all zeros, swap has been disabled successfully.
2.5. Adjust iptables-related kernel settings
Note: add the kernel bridge parameters so that bridged traffic is processed by iptables.

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
## Apply the settings
sysctl --system
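kubeadm's preflight checks also expect IP forwarding to be enabled. On many CentOS 7 systems it already is, but it is commonly added to the same file; treat this as an optional addition to the original steps:

# Optional: enable IP forwarding alongside the bridge settings (addition, not in the original guide)
cat >> /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
EOF
sysctl --system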

2.6. Upgrade the kernel

For this K8S installation the kernel must be upgraded to 4.4 or newer; here it is upgraded straight to 5.4.

# Download the rpm packages (this can take quite a while)
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.186-1.el7.elrepo.x86_64.rpm
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.186-1.el7.elrepo.x86_64.rpm
yum -y install kernel-lt-5.4.186-1.el7.elrepo.x86_64.rpm kernel-lt-devel-5.4.186-1.el7.elrepo.x86_64.rpm
# List the available kernel boot entries
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

# Set the new kernel as the default boot entry

 grub2-set-default "CentOS Linux (5.4.186-1.el7.elrepo.x86_64) 7 (Core)"

Then reboot; the machine comes back up on the new kernel.
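After the reboot, the running kernel can be confirmed; it should now report the 5.4.186 version installed above:

# Verify the running kernel
uname -r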
2.7. Enable IPVS support

vi /etc/sysconfig/modules/ipvs.sh

#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done

# Make the script executable, load the modules, and verify

chmod 755 /etc/sysconfig/modules/ipvs.sh
/etc/sysconfig/modules/ipvs.sh
lsmod | grep ip_vs


3. Install Docker

3.1. Install Docker

yum-config-manager --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

# Enable Docker at boot and start it
systemctl enable docker && systemctl start docker
# Check the version
docker version

3.2. Configure an image registry mirror

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://la57w4kb.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Docker is now installed.
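One extra setting the kubeadm documentation commonly recommends (an optional addition, not part of the original steps) is running Docker with the systemd cgroup driver; kubeadm detects Docker's driver automatically, but double-check that kubelet ends up using the same one. It can be merged into the same daemon.json, keeping the registry-mirrors entry from above:

# Optional sketch: add the systemd cgroup driver to /etc/docker/daemon.json
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://la57w4kb.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker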

4. Install kubeadm, kubelet, and kubectl

With the base system configuration in place, we can now install kubeadm and kubelet. Here the installation is done by adding a dedicated yum repo pointing at the Aliyun mirror:

cat <<EOF > /etc/yum.repos.d/k8s.repo 
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

Install the packages:

yum install -y kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0
# Enable kubelet at boot
systemctl daemon-reload && systemctl enable kubelet
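A quick sanity check that the expected 1.21.0 binaries are installed (kubelet will keep restarting until kubeadm init/join runs, which is normal at this stage):

# Confirm the installed versions
kubeadm version -o short
kubectl version --client --short
kubelet --version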

5. Installation Reminder

Important things are said three times:
Sections 2 through 4 must be run on every machine.
Sections 2 through 4 must be run on every machine.
Sections 2 through 4 must be run on every machine.

6. Initialize K8S (master node only)

6.1. Review the defaults and required images

# Print the default init configuration
kubeadm config print init-defaults
# List the images that will be needed
kubeadm config images list
# List the images with the K8S image repository set to the Aliyun mirror
kubeadm config images list --kubernetes-version=v1.21.0 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

6.2. Export the initialization file

 kubeadm config print init-defaults > kubeadm.yaml

6.3. Modify the defaults in the file

advertiseAddress: 192.168.223.50
name: k8s01
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.21.0
podSubnet: 10.244.0.0/16

Notes:
192.168.223.50 is the master node IP
name is the node name (the master's hostname)
imageRepository: the image registry to pull from
kubernetesVersion: the K8S version
podSubnet: the pod network CIDR
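For reference, the same edits can be scripted. This is only a sketch and assumes the field names and default values printed by kubeadm config print init-defaults for v1.21 (default node name is node, and there is no podSubnet entry); adjust if your file differs:

# Hypothetical sketch: apply the edits with sed instead of an editor (values from this guide)
sed -i 's/advertiseAddress: .*/advertiseAddress: 192.168.223.50/' kubeadm.yaml
sed -i 's/  name: node/  name: k8s01/' kubeadm.yaml
sed -i 's#imageRepository: .*#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#' kubeadm.yaml
sed -i 's/kubernetesVersion: .*/kubernetesVersion: v1.21.0/' kubeadm.yaml
# podSubnet is not in the default file; add it under networking next to serviceSubnet
sed -i '/serviceSubnet:/a\  podSubnet: 10.244.0.0/16' kubeadm.yaml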
6.4. Initialize
Note: this step pulls the required images and starts the control-plane components.

 kubeadm init --config=kubeadm.yaml --upload-certs

Note that a total of 7 images will be downloaded.

If initialization fails (see the error notes at the end of this article), you can pull the images on the master node in advance and then run the initialization again.
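kubeadm has a built-in helper for pre-pulling; point it at the same Aliyun repository used in kubeadm.yaml. Note that the coredns image may still fail here for the path reason described in the error notes below, in which case pull or copy it manually:

# Pre-pull the control-plane images on the master before (re)running kubeadm init
kubeadm config images pull --kubernetes-version=v1.21.0 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers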
Once the master node is up, add the worker nodes.
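The kubeadm init output ends with a kubeadm join command to run on each worker node. The token and hash below are placeholders; use the values printed by your own init, or regenerate them on the master with kubeadm token create --print-join-command:

# Run on k8s02 and k8s03 (token and hash are placeholders from your own kubeadm init output)
kubeadm join 192.168.223.50:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>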

Note: right after the workers join, running kubectl on the master to view the nodes still fails with:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This is because kubectl is not yet pointed at the cluster's admin config; setting the KUBECONFIG environment variable on the master solves the problem.

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

At this point the installation works, but the nodes report NotReady because no network plugin has been installed yet.
Before installing the network plugin, copy admin.conf into place for kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bashrc

7. Install the Network Plugin (master node only)

Flannel is used as the network plugin here. Create the plugin manifest:

vi kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

kubectl apply -f  kube-flannel.yml

After this, checking the status again shows all nodes as Ready, and the installation is complete.
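The node status can be checked with:

kubectl get nodes -o wide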
List all pods:

kubectl get pods --all-namespaces

Done!
Below are the common errors and the images installed on each node.

Node Images

Images installed on the master node:

Image                                                                           Version
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver             v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                 v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler             v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager    v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                      3.4.1
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns            v1.8.0
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                       3.4.13-0
quay.io/coreos/flannel                                                          v0.11.0-amd64

Images installed on the worker nodes:

Image                                                                   Version
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy         v1.21.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause              3.4.1
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns    v1.8.0
quay.io/coreos/flannel                                                  v0.11.0-amd64

Error Notes

Note: during installation, many Docker images can fail to download because of network issues, which leads to errors.
For example:
Initialization error:
[ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require ‘docker login’: denied: requested access to the resource is denied

Node join errors:
coredns-57d4cbf879-p2h6l ImagePullBackOff
kube-flannel-ds-amd64-mqt4r:Init:ImagePullBackOff

Most of these are caused by images that failed to download (see the Node Images section above for which images each node needs).


Inspect the failing pods with describe:

 kubectl describe pod coredns-57d4cbf879-p2h6l -n kube-system

nodes are available: 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn’t tolerate

kubelet Failed to pull image “quay.io/coreos/flannel:v0.11.0-amd64”: rpc error: code = Unknown desc = context canceled
Normal BackOff 2m17s (x217 over 70m) kubelet Back-off pulling image “quay.io/coreos/flannel:v0.11.0-amd64”
Problems like these are all caused by missing images (install the required images on the corresponding node).
For the errors above, coredns is missing on k8s02 and k8s03, so copy the coredns image from k8s01 to the other two nodes.
The image bundle can also be downloaded from the link below.
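A minimal sketch of copying an image from the master to a worker with docker save/load; the image name comes from the master image list above, and the file name and target host are examples:

# On k8s01: export the coredns image and copy it to a worker
docker save registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 -o coredns-v1.8.0.tar
scp coredns-v1.8.0.tar root@192.168.223.51:/root/
# On k8s02 (repeat for k8s03): import it
docker load -i /root/coredns-v1.8.0.tar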

Image Bundle

Link: https://pan.baidu.com/s/1qgh5J2d8ls8C3ra_iYrF-g
Extraction code: k8fx

Import an image:

docker load -i xxx