I. CentOS environment setup (run on every host in the cluster)

1. Disable the firewall, SELinux, and swap

# Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
firewall-cmd --state

# Disable SELinux (only the SELINUX= line needs to change)
sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config

grep --color=auto '^SELINUX' /etc/selinux/config
setenforce 0


# Disable swap (comment out the swap entry in /etc/fstab)
sed -i.bak '/swap/s/^/#/' /etc/fstab
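The fstab change only takes effect after a reboot. If you want swap off immediately in the current session, you can also run:

# Turn swap off right away (the reboot in step 7 makes the fstab change permanent)
swapoff -a
# Verify: the Swap line should show 0
free -m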

2. Time synchronization

If the clocks on the hosts are out of sync, worker nodes may fail to join the cluster during kubeadm join.

yum install -y ntp

/usr/sbin/ntpdate ntp6.aliyun.com

echo "*/3 * * * * /usr/sbin/ntpdate ntp6.aliyun.com &> /dev/null" > /tmp/crontab

crontab /tmp/crontab
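A quick way to confirm the cron entry was installed and the clocks agree is to run the following on each host and compare the output:

# List the installed crontab and show the current time
crontab -l
date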

3. Configure host mappings and hostnames (mappings for all hosts in the cluster)

cat >> /etc/hosts << EOF
192.168.0.100   master
192.168.0.101   work1
192.168.0.102   work2
EOF

# Based on the mapping above, set this host's hostname. This host is 192.168.0.100, so it is named master; if the host is .101 name it work1, if .102 name it work2. The hostname must match the name written to /etc/hosts.
# Hostnames must be unique across all hosts in the K8S cluster, otherwise a later join will fail with a "node already exists" error.
hostnamectl set-hostname master
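To verify the hostname and the /etc/hosts mappings took effect, you can run:

# Should print the name you just set (master on this host)
hostname
# Each cluster name should resolve to the IP written to /etc/hosts
ping -c 1 work1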

4. Adjust kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
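These bridge settings only take effect once the br_netfilter module is loaded and sysctl has been reloaded (the reboot in step 7 takes care of the reload, but loading the module explicitly avoids surprises). To apply them right away:

# Load the bridge netfilter module and have it loaded on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-read all sysctl settings, including /etc/sysctl.d/k8s.conf
sysctl --system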

5. Configure the K8S yum repo so the K8S packages can be pulled quickly from the Aliyun mirror

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
# repo_gpgcheck must be set to 0; with 1, installing kubelet/kubeadm/kubectl later fails with "[Errno -1] repomd.xml signature could not be verified for kubernetes Trying other mirror."
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
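Before installing anything, you can confirm yum can see the new repo:

# The kubernetes repo should show up in the enabled repo list
yum repolist enabled | grep -i kubernetes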

6. Configure the Docker yum repo

# Install Docker dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Aliyun (China) Docker repo
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Refresh the yum cache
yum clean all
yum -y makecache

7. Reboot the host so the settings take effect

# Reboot
reboot

II. Install Docker (run on every host in the cluster)

# List the available Docker versions and pick a suitable one
yum list docker-ce --showduplicates | sort -r
# I install Docker 19.03 here
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io


# Install bash-completion so docker commands can be tab-completed on the command line
yum -y install bash-completion
source /etc/profile.d/bash_completion.sh


# Configure a Docker registry mirror (accelerator)
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://23h04een.mirror.aliyuncs.com"]
}
EOF
# Reload the Docker daemon configuration
systemctl daemon-reload


# Check the Docker version
docker --version
# Start Docker
systemctl start docker 
# Enable Docker on boot
systemctl enable docker
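Once Docker is running, a quick check confirms the mirror configuration was picked up:

# The mirror from daemon.json should be listed under Registry Mirrors
docker info | grep -A1 -i "registry mirrors"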

III. Install K8S (run on every host in the cluster)

# List the available K8S versions
yum list kubelet --showduplicates | sort -r

# Pick a version to install; all three components must be the same version
# kubelet runs on every node in the cluster and is responsible for starting Pods and containers
# kubeadm is the command-line tool that initializes and bootstraps the cluster
# kubectl is the command-line client for talking to the cluster: deploy and manage applications, inspect resources, and create, delete and update components
yum install -y kubelet-1.22.8 kubeadm-1.22.8 kubectl-1.22.8
# yum remove -y kubelet-1.22.8 kubeadm-1.22.8 kubectl-1.22.8


# Enable kubelet on boot
systemctl enable kubelet
# Start kubelet
systemctl start kubelet

# Check for errors; status running means it started successfully. If it fails to start, check whether Docker and kubelet are using different cgroup drivers (see the note below)
systemctl status kubelet

# Enable kubectl command completion
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile

# Download the images
vim image.sh 
# Copy the following content into the image.sh file
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
# The version number must match the installed K8S version
version=v1.22.8
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done

# Make the script executable
chmod +x ./image.sh
./image.sh

Check the downloaded images with docker images; there should be roughly 7 of them (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns).

Note: if kubelet fails to start with "kubelet.service: main process exited, code=exited, status=1/FAILURE", try switching Docker's cgroup driver; this fixed the problem for me. (The master needs a working kubelet; on worker nodes a failing kubelet can be ignored for now, it usually starts successfully once the node has joined the cluster with kubeadm join.)

Change the Docker driver by editing /etc/docker/daemon.json (create the file if it does not exist) and adding the following:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
so that the file becomes:
{
  "registry-mirrors": ["https://23h04een.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Then restart docker and kubelet
systemctl daemon-reload
systemctl restart docker

systemctl daemon-reload
systemctl restart kubelet
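After the restart, you can confirm Docker is now using the systemd cgroup driver and that kubelet came back up:

# Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"
systemctl status kubelet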

IV. Initialize the Master node (run on the Master node)

4.1 Master initialization

# apiserver-advertise-address is the master's own address; the version number must match the installed K8S version; pod-network-cidr is the Pod network range (the flannel network add-on is used here).
# On success, a kubeadm join command is printed; save it, it is needed later to add the worker nodes to the cluster.
kubeadm init --apiserver-advertise-address=192.168.0.100 --kubernetes-version v1.22.8 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

## If an error occurs during initialization, reset and then run init again
# kubeadm reset
# rm -rf $HOME/.kube/config

# Check whether all pods are in Running state
kubectl get pod -n kube-system -o wide

Note: during init you may hit the error "Error response from daemon: Get https://k8s.gcr.io/v2/", because init tries to pull some images from that registry and they cannot be downloaded.

These images were already pulled from the Aliyun mirror in the earlier step, so re-tagging them is enough; init then no longer needs to pull from https://k8s.gcr.io/v2/. For example:

docker tag k8s.gcr.io/coredns:latest k8s.gcr.io/coredns/coredns:v1.8.4

Note: if "kubectl get pod" fails with "The connection to the server localhost:8080 was refused - did you specify the right host or port?", point KUBECONFIG at a valid config, for example (section 4.2 below sets KUBECONFIG to admin.conf, which is the usual choice):

echo "export KUBECONFIG=/etc/kubernetes/kubelet.conf" >> /etc/profile

source /etc/profile

Note: after init there are several pods in kube-system; the two coredns pods stay in Pending and only turn Running after the flannel network is installed in 4.3.

Note: init prints the command for joining nodes to the cluster; remember to copy it down.

4.2 Load environment variables

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

## If the steps above are not run as root, run the following instead
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
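At this point kubectl can reach the cluster. The master will show as NotReady until the flannel network is installed in 4.3, which is expected:

kubectl get nodes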

4.3 Install the flannel network

cat >> kube-flannel.yml << EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: jmgao1983/flannel #quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: jmgao1983/flannel #quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

EOF
# Apply the file you just created
kubectl apply -f kube-flannel.yml 

# Check that everything was created, otherwise joining the nodes in the next step will not succeed
kubectl get pod --all-namespaces
 
# Check the status; once everything is Running, join the other 2 nodes to the cluster
kubectl get pod -n kube-system -o wide
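Once the flannel pods are Running, the master node should switch to Ready:

# The flannel DaemonSet pod (label app=flannel) should be Running
kubectl get pod -n kube-system -l app=flannel
# The master should now report Ready
kubectl get nodes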

V. Join the Worker nodes to the cluster (run on the Worker nodes)

# Copy the kubeadm join command that was printed when init ran on the Master node and execute it on each Worker node
kubeadm join 192.168.0.100:6443 --token hkjfasd.dfghjk23hgj66ghhjj     --discovery-token-ca-cert-hash sha256:95628217e0519cbcb2d59c836dda9c210e6262698da30c96f14a568hdk89hd

# To inspect the tokens, run this on the Master
# kubeadm token list
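After each worker joins successfully, verify on the Master that the new nodes show up; they turn Ready once their flannel pod is running:

# Run on the Master
kubectl get nodes -o wide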

Note: if the join gets stuck at "[preflight] Running pre-flight checks", the clocks may be out of sync or the join token may have expired.

Check the time on the master and worker hosts; if they are out of sync, repeat step 2 of part I.

If the token has expired, generate a new one and re-run the join on the worker node with it, as follows:

# Run on the master node
kubeadm token create --ttl 0 --print-join-command

Note: if the join fails because "/etc/kubernetes/pki/ca.crt already exists", run a reset on that host first and then join the cluster again.

kubeadm reset
rm -rf $HOME/.kube/config

~~All done 🎉~~
