Preface: this is the first post in the k8s setup series. The other three are:
1. AlmaLinux + haproxy + keepalived + containerd + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125018220?spm=1001.2014.3001.5501

2. AlmaLinux + haproxy + keepalived + cri-o + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125025782?spm=1001.2014.3001.5501

3. AlmaLinux + kube-vip + cri-o + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125036070
Together these cover essentially every k8s setup scenario; for other requirements please refer to the documents above.

I. Deployment Node Overview

1. Kubernetes Components

Name / Description
kubectl: the command-line tool for managing k8s; it operates on the cluster's resource objects.
etcd: a highly available key-value store that holds the cluster's resource state and network information; all changes to etcd data go through the API server.
apiserver: exposes the Kubernetes API and is the system's external interface and the single entry point for all resource operations, called by clients and the other components. It provides create/read/update/delete operations for every k8s resource object (Pod, Deployment, Service, etc.), acts as the data bus and data hub of the whole system, handles authentication, authorization, access control, API registration and discovery, and persists the objects it operates on to etcd. Think of it as the "service counter".
scheduler: responsible for scheduling Pods in the cluster. When it learns through the apiserver that a new Pod replica has been created, it builds the list of worker nodes that satisfy the Pod's requirements, runs the scheduling logic, and on success binds the Pod to the chosen node. Think of it as the "dispatch office".
controller-manager: the management and control center of the cluster, responsible for Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager notices promptly and runs automated repair flows so the cluster stays in its desired state.
kubelet: the kubelet on every Node periodically calls the API Server's REST interface to report its own status, and the API Server writes that node state into etcd. The kubelet also watches Pod information through the API Server and manages the Pods on its node: creating, deleting, and updating them.
kube-proxy: provides network proxying and load balancing and is the key component behind Service communication. kube-proxy fetches all Service information from the apiserver and builds proxy rules from it, routing and forwarding requests for a Service to its backend Pods, which implements the k8s-level virtual forwarding network.
Calico: a pure layer-3 network plugin; Calico's BGP mode is similar to flannel's host-gw mode. In Kubernetes, Calico provides both networking and network policy.
Flannel: a network fabric designed by the CoreOS team for Kubernetes. In short, it gives containers created on different nodes cluster-wide unique virtual IP addresses. Flannel project: https://github.com/coreos/flannel
CoreDNS: before k8s 1.11 kube-dns was used; from 1.11 on CoreDNS is available. CoreDNS is a DNS server that provides DNS records for Kubernetes Services.
Docker: a container engine used to run containers.
Web UI (Dashboard): a web-based Kubernetes user interface for deploying containerized applications, troubleshooting them, and managing the cluster itself and its resources.
prometheus + alertmanager + Grafana: a monitoring stack that can monitor the Kubernetes components themselves as well as physical nodes and containers, and fires alerts when monitored values cross the configured thresholds; alerts can be delivered to targets such as DingTalk, WeChat, QQ, or Slack.
EFK (Elasticsearch, Fluentd, Kibana): a log management stack that collects logs from physical nodes and containers and displays them in Kibana, which supports searching and filtering by given conditions.
Metrics: collects resource metrics; HPA relies on metrics to implement automatic scaling.
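Once the cluster built in the rest of this guide is up, most of these components can be seen directly; a minimal sketch, assuming kubectl is already configured against the cluster:

# Control-plane components, kube-proxy, CoreDNS and the CNI plugin all run as pods in kube-system
kubectl get pods -n kube-system -o wide
# kube-proxy and the network plugin are deployed as DaemonSets
kubectl get daemonset -n kube-system
# Client and server versions
kubectl version --short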

2. Basic Information

OS                            Hostname    External IP     Internal IP
Virtual load balancer (VIP)   master      192.168.3.30    -
CentOS 7.9.2009               master01    192.168.3.31    172.18.3.31
CentOS 7.9.2009               master02    192.168.3.32    172.18.3.32
CentOS 7.9.2009               master03    192.168.3.33    172.18.3.33
CentOS 7.9.2009               node01      192.168.3.41    172.18.3.41
CentOS 7.9.2009               node02      192.168.3.42    172.18.3.42
CentOS 7.9.2009               node03      192.168.3.43    172.18.3.43

3. Initialize every node from the template machine.
See the previous post for how the template machine is prepared: https://blog.csdn.net/lic95/article/details/124897104
       

II. High availability: deploy haproxy + keepalived on master01, master02, and master03

1. Deploy haproxy with Docker
https://github.com/haproxy/haproxy
Create the configuration file /etc/haproxy/haproxy.cfg on master01, master02, and master03; the important settings are called out in the comments:

#Create the configuration directory on master01, master02, and master03
mkdir -p /etc/haproxy

#Create the configuration file /etc/haproxy/haproxy.cfg:
tee /etc/haproxy/haproxy.cfg << 'EOF'
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    #chroot      /var/lib/haproxy
    #pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          5m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master01 192.168.3.31:6443 check
    server  master02 192.168.3.32:6443 check
    server  master03 192.168.3.33:6443 check
EOF

#Start haproxy on each of the three nodes
docker run -d --restart=always --name=diamond-haproxy --net=host  -v /etc/haproxy:/usr/local/etc/haproxy:ro haproxy
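To confirm that each haproxy container came up and is listening on port 16443, a quick hedged check (the container name diamond-haproxy comes from the run command above):

# Verify the container is running and the frontend port is bound (with --net=host it shows up on the host)
docker ps | grep diamond-haproxy
ss -tnlp | grep 16443
docker logs diamond-haproxy | tail -n 20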

2. Deploy keepalived with Docker
https://github.com/osixia/docker-keepalived
keepalived is built on VRRP (Virtual Router Redundancy Protocol) and runs with one master and several backups. The master holds the VIP and serves traffic while sending VRRP advertisements; when the backups stop receiving those packets they assume the master is down, elect the remaining node with the highest priority as the new master, and that node takes over the VIP. keepalived is the key component that provides high availability here.

#Create the configuration directory on master01, master02, and master03
mkdir -p /etc/keepalived/

tee /etc/keepalived/keepalived.conf << 'EOF'
global_defs {
   script_user root 
   enable_script_security

}

vrrp_script chk_haproxy {
    #vrrp_script checks whether the local haproxy is healthy. If haproxy is down, holding the VIP is pointless because traffic could not be forwarded to the apiservers.
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 16443) ]]; then exit 0; else exit 1; fi'"  # haproxy health check
    interval 2  # run the check every 2 seconds
    weight 11 # priority is raised by 11 while the check succeeds
}

vrrp_instance VI_1 {
  interface ens33

  state MASTER # set to BACKUP on the backup nodes
  virtual_router_id 51 # must be identical on all nodes of the same virtual router group
  priority 100 # initial priority
  nopreempt # do not preempt: a recovered higher-priority node will not take the VIP back

  unicast_peer {

  }

  virtual_ipaddress {
    192.168.3.30  # vip
  }

  authentication {
    auth_type PASS
    auth_pass password
  }

  track_script {
      chk_haproxy
  }

  notify "/container/service/keepalived/assets/notify.sh"
}
EOF

3. Start keepalived on each of the three nodes

docker run  -d --restart=always \
    --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
    --volume /etc/keepalived/keepalived.conf:/container/service/keepalived/assets/keepalived.conf:ro \
    osixia/keepalived --copy-service
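After all three keepalived containers are started, exactly one node should hold the VIP; a minimal check, assuming the NIC is ens33 as in the configuration above:

# Run on each master: only the current VRRP master should show the VIP
ip addr show ens33 | grep 192.168.3.30
# Once the control plane from section III is up, the VIP endpoint should answer through haproxy
curl -k https://192.168.3.30:16443/healthz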

4. Check the haproxy and keepalived container logs on the master nodes to troubleshoot any problems

docker ps -a | grep keepalived
docker ps -a | grep haproxy
docker logs 7e5484cb6a75

III. Install the master nodes
1. Initialize the Kubernetes master01 node

kubeadm init \
        --kubernetes-version v1.23.6 \
        --image-repository registry.aliyuncs.com/google_containers \
        --service-cidr=172.18.0.0/16 \
        --pod-network-cidr=10.244.0.0/16 \
        --control-plane-endpoint=192.168.3.30:6443 \
        --upload-certs \
        --v=5

Option descriptions:
  --image-repository: the registry to pull control-plane images from (default "k8s.gcr.io")
  --kubernetes-version: the specific Kubernetes version to install (default "stable-1")
  --service-cidr: the IP address range used for Service virtual IPs (default "10.96.0.0/12")
  --pod-network-cidr: the IP address range for the Pod network; when set, a Pod CIDR is automatically allocated to every node
  --control-plane-endpoint: the shared endpoint for all control-plane nodes, here the keepalived VIP
  --upload-certs: upload the control-plane certificates to the cluster so additional masters can join with a certificate key

Note:
  Because flannel will be deployed later, the Pod network CIDR is set to 10.244.0.0/16, as required by the flannel documentation.
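The defaults mentioned above can be inspected, and the images pre-pulled, before running init; a small sketch using commands that ship with kubeadm:

# Show kubeadm's built-in defaults (image repository, service CIDR, version, ...)
kubeadm config print init-defaults
# Optionally pre-pull the control-plane images from the mirror used above
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.6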

2. The output shows that initialization succeeded, along with some follow-up hints

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.3.30:6443 --token 8uurvh.xufnqomp9zoo4wd4 \
        --discovery-token-ca-cert-hash sha256:9e0449ef88621fc99cd38ef123367151fde69509eb2b1d3fb9a67d6a3ba1a052 \
        --control-plane --certificate-key ab2227d8677a9d1ba61e8efea0812a29098c2bfbcf3affb102180675a1abdb06

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.30:6443 --token 8uurvh.xufnqomp9zoo4wd4 \
        --discovery-token-ca-cert-hash sha256:9e0449ef88621fc99cd38ef123367151fde69509eb2b1d3fb9a67d6a3ba1a052 

3. Follow the hints above and run the following

# To start using the cluster, run the following as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Alternatively, if you are the root user, you can run the following
export KUBECONFIG=/etc/kubernetes/admin.conf

# Add it to .bashrc so it is set automatically on future logins
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/root/.bashrc

4. Join master02 and master03

# [root@master02 ~]# 
kubeadm join 192.168.3.30:6443 --token 8uurvh.xufnqomp9zoo4wd4 \
  --discovery-token-ca-cert-hash sha256:9e0449ef88621fc99cd38ef123367151fde69509eb2b1d3fb9a67d6a3ba1a052 \
  --control-plane \
  --certificate-key ab2227d8677a9d1ba61e8efea0812a29098c2bfbcf3affb102180675a1abdb06

# [root@master03 ~]# 
kubeadm join 192.168.3.30:6443 --token 8uurvh.xufnqomp9zoo4wd4 \
  --discovery-token-ca-cert-hash sha256:9e0449ef88621fc99cd38ef123367151fde69509eb2b1d3fb9a67d6a3ba1a052 \
  --control-plane \
  --certificate-key ab2227d8677a9d1ba61e8efea0812a29098c2bfbcf3affb102180675a1abdb06


#  On master02 and master03, follow the hints above and run
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/root/.bashrc

IV. Install the flannel network plugin (if the manifest cannot be downloaded, you may need a proxy to reach raw.githubusercontent.com)

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

The following warning can be ignored:

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master01 ~]# 

# Check the flannel status
[root@master01 ~]# kubectl get pods -A | grep flannel
kube-system   kube-flannel-ds-d6zfq              1/1     Running   0          28s
[root@master01 ~]# 

kubectl describe pod kube-flannel-ds-d6zfq -n kube-system
#Wait until the status becomes Running
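If a flannel pod does not reach Running, its logs usually explain why; a hedged sketch, assuming the default app=flannel label from kube-flannel.yml:

# Tail the logs of all flannel pods selected by label
kubectl logs -n kube-system -l app=flannel --tail=20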

V. Add the 3 worker nodes to the cluster

# On master01, print the join command
[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.3.30:6443 --token 186dgf.yfl9vccn4m2hx6v5 --discovery-token-ca-cert-hash sha256:9e0449ef88621fc99cd38ef123367151fde69509eb2b1d3fb9a67d6a3ba1a052 
[root@master01 ~]# 

# Run the join command printed above on node01, node02, and node03; one way to do this from master01 is sketched below
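A convenience sketch, assuming passwordless root SSH from master01 to the worker nodes (otherwise simply paste the command on each node):

for n in node01 node02 node03; do
  ssh root@$n "kubeadm join 192.168.3.30:6443 --token 186dgf.yfl9vccn4m2hx6v5 \
    --discovery-token-ca-cert-hash sha256:9e0449ef88621fc99cd38ef123367151fde69509eb2b1d3fb9a67d6a3ba1a052"
done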

#Verify from any master node
kubectl get nodes
kubectl get nodes -o wide
kubectl get pods --all-namespaces

#Wait until all nodes become Ready
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   19m   v1.23.6
master02   Ready    control-plane,master   13m   v1.23.6
master03   Ready    control-plane,master   12m   v1.23.6
node01     Ready    <none>                 66s   v1.23.6
node02     Ready    <none>                 56s   v1.23.6
node03     Ready    <none>                 52s   v1.23.6
[root@master01 ~]# 

VI. Deploy the dashboard
  Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage the cluster itself and its resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, start a rolling update, restart a Pod, or deploy a new application using a wizard.
   Install the dashboard (https://github.com/kubernetes/dashboard):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

#Wait patiently until the pods are Running
kubectl get pods -n kubernetes-dashboard


# Expose the service externally
[root@master01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
Change type: ClusterIP to type: NodePort and save.
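If you prefer not to open an interactive editor, the same change can be made with a patch; a hedged one-liner:

# Switch the dashboard Service to NodePort non-interactively
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'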


# Get the exposed NodePort
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.18.236.240   <none>        8000/TCP        85s
kubernetes-dashboard        NodePort    172.18.223.13    <none>        443:30981/TCP   85s
[root@master01 ~]# 

Open it in a browser:
https://192.168.3.30:30981/#/login

Create a service account, bind it to the cluster-admin role, and then obtain its token
[root@master01 ~]# cat << EOF >token.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF


[root@master01 ~]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@master01 ~]# 

#Retrieve the token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

[root@master01 ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6IkFLQkR6VnU5NHdOakpBNU5kcVdZd3pRZmlhUFNueV8yV19JVmVfaGlxeDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWh2NzR2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMzg2NGQ3OC1iZDdmLTQ4N2UtOWY2OS04OWJhZjUxMzNmZjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.XJz3R7SV_bi1zr83Yl0c6RnboLzxsTVYmDzJJrJWoKBf8o0AAvQspUV2hngKumk5x_NI-GT3HnCZNdO1Inah6t92O8YBj4DCzx0ELeRr2tY4dcGjntHREOnvsCFnyDeqrzj0MZmtjdxZCPyAUAgogpnHtH5ljPiUUM48b6kADsFok0RzribJqW1Ta6zmCyZ3hBE4cgI2bD5nrRGslkn4DwQWrNFw2O4AiwR2iLm6CpRNIigBUy819khk9x87mMtPVKv5zbxfojD7eqXrnZL9LihuzZRPsWrZYWpnjIIrzsvDWHT2yMYn_1J7t1z8bG-G4p99LuIm2tk_xi7iYcomdQ[root@master01 ~]
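The secret-based lookup above works because on Kubernetes 1.23 a token Secret is still created automatically for every ServiceAccount; a slightly more readable alternative, plus the command used from 1.24 onward, as a hedged sketch:

# Human-readable view of the same token (Kubernetes <= 1.23)
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}")
# From Kubernetes 1.24 on, token Secrets are no longer auto-created; request a token instead:
# kubectl -n kubernetes-dashboard create token admin-user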

#Paste the token to log in to the kubernetes-dashboard

VII. Cluster setup complete

Deploy an nginx service to verify the cluster
[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master01 ~]#

[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master01 ~]# 

[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.18.0.1      <none>        443/TCP        24m
nginx        NodePort    172.18.76.241   <none>        80:30111/TCP   23s
[root@master01 ~]# 

[root@master01 ~]#  curl http://192.168.3.30:30111
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]# 
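Once the test succeeds, the verification workload can be removed again; an optional cleanup sketch:

kubectl delete service nginx
kubectl delete deployment nginx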