Kubernetes installation (based on v1.23.1)

Prerequisites

  • Servers (CentOS 7)
    • master server
    • k8s-Node-01
    • k8s-Node-02
  • Routing
    • Router
  • Harbor registry

Installation

| Hostname      | OS         | Spec          | IP             | Notes           |
|---------------|------------|---------------|----------------|-----------------|
| k8s-master-01 | centos8.2  | 2c 4Gb *100Gb | 192.168.66.140 | k8s master node |
| k8s-node-01   | centos8.2  | 2c 4Gb *100Gb | 192.168.66.141 | k8s worker node |
| k8s-node-02   | centos8.2  | 2c 4Gb *100Gb | 192.168.66.142 | k8s worker node |
| k8s-harbor    | centos8.2  | 2c 4Gb *100Gb | 192.168.66.143 | registry        |
| koolshare     | win10 64   | 1c 4Gb *20Gb  | 192.168.66.144 | soft router     |

Base environment configuration

Set a static IP
  • Edit the config file

    On CentOS 7, the network IP configuration files live under /etc/sysconfig/network-scripts (a fuller example file is shown below).
    
    BOOTPROTO="static"
    DNS1="192.168.66.2" 
    IPADDR="192.168.66.141"
    NETMASK="255.255.255.0"
    GATEWAY="192.168.66.2"
    
  • Restart the network service

    service network restart
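
  • Example config file (a minimal sketch, assuming the interface is named ens33; substitute the actual file name under /etc/sysconfig/network-scripts, e.g. ifcfg-ens33, and keep any UUID/HWADDR lines your system generated)

    TYPE="Ethernet"
    BOOTPROTO="static"
    NAME="ens33"
    DEVICE="ens33"
    ONBOOT="yes"
    IPADDR="192.168.66.141"
    NETMASK="255.255.255.0"
    GATEWAY="192.168.66.2"
    DNS1="192.168.66.2"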
    
Disable firewalld and enable iptables
  • Stop and disable firewalld

    systemctl stop firewalld && systemctl disable firewalld
    
  • Install and enable iptables

    yum -y install iptables-services &&  systemctl start iptables && systemctl enable iptables  && iptables -F && service iptables save
    
Hostname / hosts file resolution
  • In larger environments it is recommended to map hostnames to IPs via DNS

    Set the hostnames:
    	hostnamectl set-hostname k8s-master-01
    	hostnamectl set-hostname k8s-node-01
    	hostnamectl set-hostname k8s-node-02
    	hostnamectl set-hostname k8s-harbor
    	hostnamectl set-hostname koolshare
    Set up hosts resolution:
    	vim /etc/hosts
    	192.168.66.140 k8s-master-01
    	192.168.66.141 k8s-node-01
    	192.168.66.142 k8s-node-02
    	192.168.66.143 k8s-harbor
    	192.168.66.144 koolshare
    Copy the file to the other servers (a loop covering all nodes is sketched below):
    	scp /etc/hosts root@k8s-node-01:/etc/hosts
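
    To copy the hosts file to every machine in one pass, a small loop works (a sketch; it assumes root SSH access to each host and will prompt for a password unless SSH keys are configured):
    	for h in k8s-node-01 k8s-node-02 k8s-harbor; do
    	  scp /etc/hosts root@${h}:/etc/hosts
    	done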
    
    
Disable the swap partition
Turn off swap now && disable it permanently (alternatively, comment out the swap line in /etc/fstab)
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Confirm swap is off; all zeros on the Swap line means it is disabled
free -m
Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
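
To confirm SELinux is now off, getenforce should report Permissive for the current session (and Disabled after a reboot); an optional check:
getenforce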
Cluster time synchronization
Pick one node as the server
	We use k8s-master-01 as the NTP server; all other nodes are its clients
Install the time server
	yum install -y chrony
	Edit the config file (master)
		vi /etc/chrony.conf
			server 192.168.66.140 iburst
			allow 192.168.66.0/24
			local stratum 10
	Edit the config file (nodes)
		vi /etc/chrony.conf
			server 192.168.66.140 iburst
	Confirm the node can synchronize
		chronyc sources
Start the service
	systemctl start chronyd
	Verify it is listening on UDP port 123
	ss -unl | grep 123
Enable the service at boot
	systemctl enable chronyd
    
	

# Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog 
systemctl restart crond
System log storage configuration
  • Reason: from CentOS 7 onward the system boots with systemd, so two logging systems (journald and rsyslog) run side by side; the steps below keep only one of them (journald)

  • Configure rsyslogd and systemd-journald

    # Directory for persistent log storage
    mkdir /var/log/journal 
    mkdir /etc/systemd/journald.conf.d
    
    cat  >  /etc/systemd/journald.conf.d/99-prophet.conf  <<EOF
    [Journal]
    # Persist logs to disk
    Storage=persistent
    # Compress historical logs
    Compress=yes
    SyncIntervalSec=5m
    RateLimitInterval=30s
    RateLimitBurst=1000
    # Cap total disk usage at 10G
    SystemMaxUse=10G
    # Cap a single journal file at 200M
    SystemMaxFileSize=200M
    # Keep logs for 2 weeks
    MaxRetentionSec=2week
    # Do not forward logs to syslog
    ForwardToSyslog=no
    EOF
    
    # Restart journald to apply the config
    systemctl restart systemd-journald
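
    # Optional check (not in the original notes): confirm the journal is now persisted to disk and see how much space it uses
    journalctl --disk-usage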
    
Upgrade the kernel (if needed)
- Check the current kernel version
uname -r

- Upgrade the kernel
- Install the ELRepo repository:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

Enable the ELRepo kernel repository and list the available kernels:
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

Install the new kernel:
yum -y --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel

------ If all went well, the latest kernel is now installed. ------

Update the grub configuration to boot the new kernel
Check the current default boot kernel:
yum install dnf
dnf install grubby
grubby --default-kernel

If it is not the new kernel, list all installed kernels:
grubby --info=ALL
Then set the new kernel as the default:
grubby --set-default /boot/vmlinuz-5.3.8-1.el8.elrepo.x86_64
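
Reboot into the new kernel and confirm the running version (the vmlinuz path passed to grubby above must match one of the kernels listed by grubby --info=ALL on your machine):
reboot
# after the machine comes back up
uname -r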
Install utility packages
yum install -y conntrack  ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Install Docker
# Docker dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Aliyun docker-ce repository
yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# The CentOS 8 yum repos do not carry a containerd.io new enough for the latest docker-ce; docker-ce-3:19.03.11-3.el7.x86_64 requires containerd.io >= 1.2.2-3
# Install a containerd.io that matches the latest docker-ce from the Aliyun mirror
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm

# Install
yum -y install docker-ce docker-ce-cli

# Start Docker
systemctl start docker

# Enable at boot
systemctl enable docker

# Configure the registry mirror in the daemon config
cd /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://4bsnyw1n.mirror.aliyuncs.com"],
  "exec-opts":["native.cgroupdriver=systemd"]
}
EOF

# Restart Docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
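
# Optional check (not in the original notes): confirm Docker is now using the systemd cgroup driver configured in daemon.json
docker info | grep -i "cgroup driver"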
Prerequisites for enabling IPVS in kube-proxy
# 1. Load the br_netfilter module
modprobe br_netfilter  

# 2. Create the module config file
cat  >  /etc/sysconfig/modules/ipvs.modules  <<EOF
#!/bin/bash
modprobe  --  ip_vs
modprobe  --  ip_vs_rr
modprobe  --  ip_vs_wrr
modprobe  --  ip_vs_sh
modprobe  --  nf_conntrack_ipv4
EOF

# 3. Make it executable, load the modules, and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Error: on newer CentOS kernels nf_conntrack_ipv4 has been replaced by nf_conntrack, so it cannot be loaded
Module nf_conntrack_ipv4 not found.
# Fix
modprobe -- nf_conntrack
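
Alongside loading br_netfilter, kubeadm also expects bridged traffic to be visible to iptables; the sysctl settings below are not in the original notes, but are a standard companion step:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system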

Cluster installation

Configure the Kubernetes yum mirror (all nodes)
  • Add the Aliyun YUM repositories
Method 1:
# Go to /etc/yum.repos.d/ and back up the existing CentOS-Base.repo
cd /etc/yum.repos.d/
mv CentOS-Base.repo CentOS-Base.repo.bak

# Download the Aliyun yum repo files
# CentOS 8:
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

# CentOS 7:
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Cache the package metadata locally to speed up searching and installing
yum makecache
If the command above fails with "Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again",
the suggested fix is: check whether /etc/yum.repos.d/ contains an epel.repo file; if so, rename it to epel.repo_bak (do not keep the .repo extension for the backup), then rerun the command above.
CentOS 7 repo URL: http://mirrors.aliyun.com/repo/Centos-7.repo
CentOS 8 repo URL: http://mirrors.aliyun.com/repo/Centos-8.repo

# Method 2:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the command-line tools (all nodes)
# Install the init tool (kubeadm), the CLI management tool (kubectl), and kubelet, which talks to Docker via CRI to create containers
yum -y install kubeadm kubectl kubelet --disableexcludes=kubernetes

# Enable kubelet at boot and start it
systemctl enable kubelet.service && systemctl start kubelet.service
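
If you want the installed versions to match the v1.23.1 images used throughout these notes, rather than whatever is newest in the repository, pin them explicitly (a sketch, assuming the mirror carries that version):
yum -y install kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1 --disableexcludes=kubernetes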
Command tab completion (all nodes)
  • Enable tab completion for kubectl and kubeadm (it is not enabled by default)

    kubectl completion bash >/etc/bash_completion.d/kubectl
    kubeadm completion bash >/etc/bash_completion.d/kubeadm
    # Takes effect after you reopen the terminal
    
Download the required images (all nodes)
  • List the required images

    [root@k8s-master-01 kubernetes]# kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.23.1
    k8s.gcr.io/kube-controller-manager:v1.23.1
    k8s.gcr.io/kube-scheduler:v1.23.1
    k8s.gcr.io/kube-proxy:v1.23.1
    k8s.gcr.io/pause:3.6
    k8s.gcr.io/etcd:3.5.1-0
    k8s.gcr.io/coredns/coredns:v1.8.6
    
  • Generate a config file for downloading the images

    # Dump the default init configuration
    kubeadm config print init-defaults >init.default.yaml
    # Save a copy as init-config.yaml for later use
    cp init.default.yaml  init-config.yaml
    
  • Edit the config file

    # Change the image repository to the Aliyun mirror
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    
    # Full contents of the config file
    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.66.140
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      imagePullPolicy: IfNotPresent
      name: k8s-master-01
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    # Specify the image repository
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    # Specify the Kubernetes version
    kubernetesVersion: 1.23.0
    # Specify the pod network CIDR
    networking:
      dnsDomain: cluster.local
      podSubnet: "10.244.0.0/16"
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
    
  • Pull the Kubernetes images (all nodes)

    # Pull the images using the config file created above
    kubeadm config images pull --config=init-config.yaml
    
    # Pull output (sample from an earlier run; with the config above, image names and versions come from the configured repository and version)
    [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.0
    [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.0
    [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.0
    [config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.0
    [config/images] Pulled k8s.gcr.io/pause:3.2
    [config/images] Pulled k8s.gcr.io/etcd:3.4.9-1
    [config/images] Pulled k8s.gcr.io/coredns:1.7.0
    
    # Once the images are downloaded, the installation can proceed
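
  • A quick way to confirm the pull succeeded (an optional check, not in the original notes) is to list the local images and look for the repository configured above:

    docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers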
    
Initialize the master node
  • Initialize

    # For older kubeadm versions
    kubeadm init --config=init-config.yaml --experimental-upload-certs | tee kubeadm-init.log
    
    # For newer kubeadm versions
    kubeadm init --config=init-config.yaml --upload-certs | tee kubeadm-init.log
    
  • The output looks like this:

    [root@k8s-master-01 conf]# kubeadm init --config=init-config.yaml --upload-certs | tee kubeadm-init.log
    [init] Using Kubernetes version: v1.23.0
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.140]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [192.168.66.140 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [192.168.66.140 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 8.503905 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
    NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    ed51127d80b0fd5841cf3caf3b024e5cdf1e0883fc146a2577018dbb25c46400
    [mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: abcdef.0123456789abcdef
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.66.140:6443 --token abcdef.0123456789abcdef \
            --discovery-token-ca-cert-hash sha256:5ce43af4ee1c8d7e0185e6149dd697571e801480ebcf38c69d65977a1cdb749d 
    
  • Run the commands from the prompt above

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  • Inspect the certificates

    ll /etc/kubernetes/pki
    total 56
    -rw-r--r--. 1 root root 1273 Aug 29 22:19 apiserver.crt
    -rw-r--r--. 1 root root 1135 Aug 29 22:19 apiserver-etcd-client.crt
    -rw-------. 1 root root 1675 Aug 29 22:19 apiserver-etcd-client.key
    -rw-------. 1 root root 1679 Aug 29 22:19 apiserver.key
    -rw-r--r--. 1 root root 1143 Aug 29 22:19 apiserver-kubelet-client.crt
    -rw-------. 1 root root 1679 Aug 29 22:19 apiserver-kubelet-client.key
    -rw-r--r--. 1 root root 1066 Aug 29 22:19 ca.crt
    -rw-------. 1 root root 1679 Aug 29 22:19 ca.key
    drwxr-xr-x. 2 root root  162 Aug 29 22:19 etcd
    -rw-r--r--. 1 root root 1078 Aug 29 22:19 front-proxy-ca.crt
    -rw-------. 1 root root 1679 Aug 29 22:19 front-proxy-ca.key
    -rw-r--r--. 1 root root 1103 Aug 29 22:19 front-proxy-client.crt
    -rw-------. 1 root root 1679 Aug 29 22:19 front-proxy-client.key
    -rw-------. 1 root root 1675 Aug 29 22:19 sa.key
    -rw-------. 1 root root  451 Aug 29 22:19 sa.pub
    
  • At this point Kubernetes is installed on the master host, but the cluster has no usable worker nodes yet and the container network has not been configured.

    Pay attention to the last few lines of the kubeadm init output: they contain the join command (kubeadm join) and the token required to add nodes.

    You can now verify the ConfigMaps with kubectl:

    kubectl get -n kube-system configmap
    
      You can see that a ConfigMap named kubeadm-config has been created:
      [root@k8s-master-01]# kubectl get -n kube-system configmap
      NAME                                 DATA   AGE
      coredns                              1      2m42s
      extension-apiserver-authentication   6      2m44s
      kube-proxy                           2      2m41s
      kubeadm-config                       2      2m43s
      kubelet-config-1.19                  1      2m43s
    
Join the worker nodes to the cluster
  • Join each worker node to the master

    # Command
    kubeadm join 192.168.66.140:6443 --token abcdef.0123456789abcdef \
            --discovery-token-ca-cert-hash sha256:5ce43af4ee1c8d7e0185e6149dd697571e801480ebcf38c69d65977a1cdb749d 
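
    # The bootstrap token is only valid for 24 hours (ttl: 24h0m0s in the config above).
    # If it has expired by the time a node joins, regenerate the full join command on the master:
    kubeadm token create --print-join-command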
    
Install the network plugin (flannel) (master node)
  • Plugin comparison

    | Plugin      | Performance | Isolation policy | Developer  |
    |-------------|-------------|------------------|------------|
    | kube-router | highest     | supported        |            |
    | calico      | 2           | supported        |            |
    | canal       | 3           | supported        |            |
    | flannel     | 3           |                  | CoreOS     |
    | romana      | 3           | supported        |            |
    | Weave       | 3           | supported        | Weaveworks |

    When you run kubectl get nodes, the master node shows NotReady; this is because no CNI network plugin has been installed yet.

  • Install the flannel plugin

    # Method 1
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    # Verify the flannel plugin deployed successfully (Running means success)
    kubectl get pods -n kube-system |grep flannel 
    
    # Method 2
    # GitHub releases: https://github.com/flannel-io/flannel/tags
    # Download the image file:
    flanneld-v0.15.1-amd64.docker
    # Load it into Docker:
    docker load -i flanneld-v0.15.1-amd64.docker
    # Edit the local kube-flannel.yml to point the image field at the locally loaded image
    # Finally apply the manifest
    kubectl apply -f kube-flannel.yml
    
  • Verify the plugin status

kubectl get pod -n kube-system   
  • Verify the cluster installation is complete
# Run the following commands

# Get all nodes
kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   33m   v1.19.0
k8s-node-01     Ready    <none>   34s   v1.19.0
k8s-node-02     Ready    <none>   28s   v1.19.0

kubectl get pod -n kube-system -o wide

# If a pod is in an error state, run kubectl describe pod <pod_name> --namespace=kube-system to find the cause; a common one is that an image has not finished downloading

# If the installation fails, you can reset the node to its initial state and rerun kubeadm init (see the sketch below)
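
# The reset mentioned above is kubeadm reset; a sketch of starting over on a node:
kubeadm reset
# kubeadm reset does not clean the CNI config or the local kubeconfig, so remove those as well
rm -rf /etc/cni/net.d $HOME/.kube/config
# then rerun kubeadm init (master) or kubeadm join (worker)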
Common commands
# View pods in the kube-system namespace
kubectl get pod -n kube-system  

# Watch for changes
kubectl get pod -n kube-system -w   

# Detailed info
kubectl get pod -n kube-system -o wide   

kubectl describe pod [pod name]

kubectl delete pod [pod name]

kubectl create -f [file name]

A Docker image archive is attached; load the k8s 1.23.1 images with docker load -i k8s-images.tar
Link: https://pan.baidu.com/s/1Cu0rf8m2CGD3fmyFmEpwuA
Access code: 0x0z
