Installing a Highly Available Kubernetes Cluster

1. Install Docker

1.1 List the available Docker versions

yum list docker-ce --showduplicates|sort -r

1.2 Install Docker

yum install -y docker-ce-20.10.7-3.el7
Enable Docker on boot and start it now:
systemctl enable docker && systemctl start docker

1.3 Adjust the Docker configuration

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

An explanation of the daemon.json options: https://www.cnblogs.com/golinux/p/12759674.html

1.4 Restart Docker

systemctl daemon-reload && systemctl restart docker
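
A quick check that the cgroup-driver change took effect; docker info reports the active driver:
docker info | grep -i 'cgroup driver'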

1.5 Route bridged traffic through iptables and make the sysctl settings persistent

echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
""" > /etc/sysctl.conf

Apply the settings now (the file keeps them persistent across reboots):
sysctl -p
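
If sysctl -p fails with "No such file or directory" on the net.bridge.* keys, the br_netfilter module is not loaded yet. Loading it now and on boot (the modules-load.d path is the systemd convention) clears that up:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p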

1.6 Enable IPVS

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Load the modules and check their status:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Some setups load more IPVS modules; these few are enough to start with, and more can be added if problems appear (note the module rename on newer kernels below).
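
On kernels 4.19 and newer, nf_conntrack_ipv4 was merged into nf_conntrack, so the last modprobe fails there; on such kernels substitute nf_conntrack in ipvs.modules and in the lsmod filter:
modprobe -- nf_conntrack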
That completes the base environment configuration for installing k8s.

1.7 Install Kubernetes

yum search shows 1.21.2 as the latest version; 1.18.2 is used here to stay consistent with the video. kubectl is pinned to the same version so yum does not pull a newer one in as a dependency:
yum install kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2 -y
Enable kubelet on boot:
systemctl enable kubelet
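
Note: the yum install above assumes a Kubernetes yum repo is already configured. If yum cannot find the kubeadm/kubelet packages, a commonly used Aliyun mirror definition is the following (an assumption; adjust the baseurl and gpg settings to taste):

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum makecache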

2. Pull the Component Images

2.1 Pull the master-node images

Create a script file:
vim /etc/docker/pull.sh

#!/bin/bash
# download k8s 1.18.2 images
# get image-list by 'kubeadm config images list --kubernetes-version=v1.18.2'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
images=(kube-apiserver:v1.18.2
kube-controller-manager:v1.18.2
kube-scheduler:v1.18.2
kube-proxy:v1.18.2
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
)
for imageName in "${images[@]}"; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Make the script executable and run it:
chmod +x /etc/docker/pull.sh && bash /etc/docker/pull.sh
The images come through Aliyun; they pulled quickly even without a registry mirror configured in /etc/docker/daemon.json. One puzzling thing: searching the Aliyun Container Registry console did not turn up this repository.
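
A quick check that the pull-and-retag loop worked; all seven images should now carry the k8s.gcr.io prefix:
docker images | grep k8s.gcr.io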

2.2 Pull the node images

Same procedure as on the master nodes, except workers need fewer images: only kube-proxy:v1.18.2, pause:3.2, and coredns:1.6.7 (a node-side variant of the script is sketched below).
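
A minimal sketch of that node-side variant, reusing the pull-tag-rmi loop from the master script:

#!/bin/bash
# download the worker-node subset of the k8s 1.18.2 images
images=(kube-proxy:v1.18.2
pause:3.2
coredns:1.6.7
)
for imageName in "${images[@]}"; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done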

2.3 Deploy keepalived + LVS to make the masters' kube-apiserver highly available

1) Install the keepalived + LVS packages, on every master node:
yum install -y socat keepalived ipvsadm conntrack
2) Configure keepalived on master1:
vim /etc/keepalived/keepalived.conf
Delete the existing contents and paste in the following:

global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass justokk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

3) Configure keepalived on master2 and master3
Only the priority changes: 80 on master2 and 60 on master3; everything else stays identical.
The virtual IP 192.168.0.199 must not be in use by any other host on the network segment.

Special reminder:
keepalived must be configured with state BACKUP on every master, in non-preempt (nopreempt) mode. That way all masters are equal: the priorities differ, but no forced failback occurs. For example, when master1 goes down and comes back up, its apiserver and other components are not immediately ready while the service is still running on master2 or master3; if master1 were configured as MASTER instead, the VIP would transfer straight back to it. That is why preempt mode must not be used.
Enable on boot, start, and check keepalived:
systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
active (running) means it started successfully.
Then ping 192.168.0.199 from every host; if it answers, the VIP is up.
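
Two more checks worth running: the VIP should be bound on exactly one master's interface (ens33, as configured above), and ipvsadm should list the virtual server with its three real servers:
ip addr show ens33 | grep 192.168.0.199
ipvsadm -Ln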

Initialize the k8s cluster

kubeadm init \
--apiserver-advertise-address=192.168.0.6 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.2 \
--pod-network-cidr=10.244.0.0/16
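
Note: the init above advertises master1's own address, so API clients bypass the VIP. For the keepalived address configured earlier to actually front the control plane, kubeadm is typically pointed at it instead; a sketch, not used in this walkthrough (both extra flags exist in kubeadm 1.18, addresses taken from this guide):

kubeadm init \
--control-plane-endpoint=192.168.0.199:6443 \
--upload-certs \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.2 \
--pod-network-cidr=10.244.0.0/16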


# Set up kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check component status:
kubectl get cs
-------------------------------
# Copy this down; it is needed when joining nodes to the cluster
kubeadm join 192.168.0.6:6443 --token tnzy95.o2v3a7d5nr5k87dz \
    --discovery-token-ca-cert-hash sha256:b76088384580f4dba3c58924e7e56cb44ea48cbf005d0681e548f15b558cc0d7
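
The token expires after 24 hours by default; if it has lapsed by the time a node joins, a fresh join command can be printed with:
kubeadm token create --print-join-command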

Check the cluster status: the nodes show NotReady.
Install the Calico network plugin:

kubectl apply -f https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml

Check the cluster status again: Ready.
Use the join command copied earlier to add the nodes to the cluster.
At this point the single-master k8s cluster, fronted by the HA VIP, is set up.

Install Traefik

1) Generate the Traefik certificate, on master1:

mkdir -p ~/ikube/tls/
cat > ~/ikube/tls/openssl.cnf <<EOF
[req]
distinguished_name = req_distinguished_name
prompt = yes
[ req_distinguished_name ]
countryName                     = Country Name (2 letter code)
countryName_value               = CN
stateOrProvinceName             = State or Province Name (full name)
stateOrProvinceName_value       = Beijing
localityName                    = Locality Name (eg, city)
localityName_value              = Haidian
organizationName                = Organization Name (eg, company)
organizationName_value          = Channelsoft
organizationalUnitName          = Organizational Unit Name (eg, section)
organizationalUnitName_value    = R & D Department
commonName                      = Common Name (eg, your name or your server's hostname)
commonName_value                = *.multi.io
emailAddress                    = Email Address
emailAddress_value              = lentil1016@gmail.com
EOF
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
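
A quick sanity check on the generated certificate and the secret (standard openssl and kubectl commands):
openssl x509 -in ~/ikube/tls/tls.crt -noout -subject -dates
kubectl -n kube-system get secret ssl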

2) Pull the image (on every node):
docker pull emilevauge/traefik

3) Create the traefik.yaml file (on every node):

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    insecureSkipVerify = true
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
          [[entryPoints.https.tls.certificates]]
          CertFile = "/ssl/tls.crt"
          KeyFile = "/ssl/tls.key"
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      volumes:
      - name: ssl
        secret:
          secretName: ssl
      - name: config
        configMap:
          name: traefik-conf
      containers:
      - image: emilevauge/traefik:latest
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        securityContext:
          privileged: true
        args:
        - --configfile=/config/traefik.toml
        - -d
        - --web
        - --kubernetes
        volumeMounts:
        - mountPath: "/ssl"
          name: "ssl"
        - mountPath: "/config"
          name: "config"
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
    - protocol: TCP
      port: 443
      name: https
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: ingress.multi.io
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80

Note: while running kubectl on the other nodes, an error appeared: The connection to the server localhost:8080 was refused.
Copy /etc/kubernetes/admin.conf from master1 to the other nodes, then point kubectl at it via an environment variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

4) Apply the yaml file to create Traefik (applying it once from any node with kubectl configured is enough; the DaemonSet schedules a pod onto every node)

Add Traefik:

kubectl apply -f traefik.yaml

Check whether Traefik started successfully:

kubectl get pods -n kube-system
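
A hedged smoke test, from any machine that can reach a node: the host name comes from the Ingress rule in traefik.yaml, <node-ip> is a placeholder for any node's address, and -k is needed because the certificate is self-signed. A response from the Traefik dashboard indicates the ingress path works:
curl -k -H 'Host: ingress.multi.io' https://<node-ip>/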

Install kubernetes-dashboard v2 (the Kubernetes web UI)

Check whether the dashboard installed successfully:

kubectl get pods -n kubernetes-dashboard
Look at the dashboard's front-end service:

kubectl get svc -n kubernetes-dashboard

Change the service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort, then save and quit.

# Query the service port
kubectl get svc -n kubernetes-dashboard
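
To print just the allocated NodePort (jsonpath is built into kubectl):
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
The dashboard is then reachable at https://<any-node-ip>:<that-port>/.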


3.1 Log in to the dashboard with the default token specified in the yaml file

1) List the secrets in the kubernetes-dashboard namespace:

kubectl get secret -n kubernetes-dashboard

NAME                               TYPE                                  DATA   AGE
default-token-r9td4                kubernetes.io/service-account-token   3      5d
kubernetes-dashboard-certs         Opaque                                0      5d
kubernetes-dashboard-csrf          Opaque                                1      5d
kubernetes-dashboard-key-holder    Opaque                                2      5d
kubernetes-dashboard-token-blxvz   kubernetes.io/service-account-token   3      5d
2) Find the secret that carries the token, kubernetes-dashboard-token-blxvz:

kubectl describe secret kubernetes-dashboard-token-blxvz -n kubernetes-dashboard
Copy the token and use it to log in.
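
The manual lookup can also be collapsed into one line (a sketch; the token-secret name pattern is taken from the listing above):
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}') | grep '^token'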
