I. Environment Overview


For a highly available Kubernetes cluster, an odd number of control-plane nodes (3 or more) is generally recommended. I use 3 masters for HA here. If you are running on virtual machines, it is best not to clone them.

  • 192.168.31.105:6443 #the VIP
  • kube-apiserver #on all three master nodes
  • kube-scheduler #on all three master nodes
  • kube-controller-manager #on all three master nodes
  • ETCD #on all three master nodes

Note that each master node must be given 2 CPUs; kubeadm requires 2 CPUs to install.

II. Initialize the Environment

1. Set the hostnames in bulk and configure passwordless SSH

hostnamectl set-hostname k8s-01  #run on every machine with the hostname assigned to it
bash        #start a new shell to pick up the new hostname

#configure /etc/hosts
cat >> /etc/hosts <<EOF
192.168.31.100  k8s-01
192.168.31.101  k8s-02
192.168.31.102  k8s-03
192.168.31.103  k8s-04
192.168.31.104  k8s-05
EOF

#use k8s-01 as the distribution host (the following only needs to run on k8s-01)
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y expect
#distribute the public key
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in k8s-01 k8s-02 k8s-03 k8s-04 k8s-05;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
        expect {
                \"*yes/no*\" {send \"yes\r\"; exp_continue}
                \"*password*\" {send \"123456\r\"; exp_continue}
                \"*Password*\" {send \"123456\r\";}
        } "
done 
Disable SELinux, firewalld/iptables and swap on all nodes

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

2. Upgrade the kernel (optional)
The official recommendation is a kernel newer than 3.10. Since Kubernetes 1.18 is still a fairly recent release, I personally worry about hitting problems on an old kernel, so I upgrade.

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
#installs the latest mainline kernel by default
yum --enablerepo=elrepo-kernel install kernel-ml

#set the new kernel as the default boot entry
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

#use the command below to confirm the default boot kernel now points to the kernel installed above
grubby --default-kernel
#the output should show the upgraded kernel

reboot
#you can postpone the reboot until all initialization steps are finished

Kubernetes 1.18 has shown DNS resolution failures because recent Kubernetes relies on newer IPVS modules that require kernel support, so upgrading to a recent kernel is recommended (see the related upstream issue).

3. Configure the yum repositories on all nodes

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

On a freshly installed server you can install the packages below, which resolves 99% of dependency problems.

yum -y install gcc gcc-c++ make autoconf libtool-ltdl-devel gd-devel freetype-devel libxml2-devel libjpeg-devel libpng-devel openssh-clients openssl-devel curl-devel bison patch libmcrypt-devel libmhash-devel ncurses-devel binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libgcj libtiff pam-devel libicu libicu-devel gettext-devel libaio-devel libaio libgcc libstdc++ libstdc++-devel unixODBC unixODBC-devel numactl-devel glibc-headers sudo bzip2 mlocate flex lrzsz sysstat lsof setuptool system-config-network-tui system-config-firewall-tui ntsysv ntp pv lz4 dos2unix unix2dos rsync dstat iotop innotop mytop telnet iftop expect cmake nc gnuplot screen xorg-x11-utils xorg-x11-xinit rdate bc expat-devel compat-expat1 tcpdump sysstat man nmap curl lrzsz elinks finger bind-utils traceroute mtr ntpdate zip unzip vim wget net-tools

4. Enabling kernel IPv4 forwarding requires the br_netfilter module, so load it on every node:
modprobe br_netfilter
modprobe ip_conntrack

5. Tune kernel parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# vm.swappiness=0: never use swap unless the system is about to OOM
vm.swappiness=0
# vm.overcommit_memory=1: do not check whether enough physical memory is available
vm.overcommit_memory=1
# vm.panic_on_oom=0: do not panic on OOM, let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

#distribute to the remaining nodes
for i in k8s-02 k8s-03 k8s-04 k8s-05
do
    scp kubernetes.conf root@$i:/etc/sysctl.d/
    ssh root@$i sysctl -p /etc/sysctl.d/kubernetes.conf
done
bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets that cross a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. The common options are listed below, followed by a quick verification snippet:

net.bridge.bridge-nf-call-arptables: whether bridged ARP packets are filtered by the arptables FORWARD chain

net.bridge.bridge-nf-call-ip6tables: whether bridged IPv6 packets are filtered by the ip6tables chains

net.bridge.bridge-nf-call-iptables: whether bridged IPv4 packets are filtered by the iptables chains

net.bridge.bridge-nf-filter-vlan-tagged: whether VLAN-tagged packets are filtered by iptables/arptables
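
To confirm the settings above actually took effect, a quick check (simple verification, assuming the kubernetes.conf above has already been applied):

#verify the bridge-nf and forwarding settings on the current node
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
#expected output:
#net.bridge.bridge-nf-call-iptables = 1
#net.bridge.bridge-nf-call-ip6tables = 1
#net.ipv4.ip_forward = 1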

6. Enable IPVS on all nodes

Why IPVS? Starting with Kubernetes 1.8, kube-proxy gained an IPVS mode. Like the iptables mode it is built on Netfilter, but it uses hash tables for lookups, so once the number of Services grows large the hash-lookup advantage shows and Service performance improves.
IPVS depends on the nf_conntrack_ipv4 kernel module, which was renamed to nf_conntrack in kernel 4.19 and later. kube-proxy versions before 1.13.1 hard-coded nf_conntrack_ipv4; later versions detect which name to load. In my tests kube-proxy loads nf_conntrack and IPVS works fine (see the hedged fallback after the module check below).

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

#verify that the required kernel modules are loaded
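
As mentioned above, on kernels older than 4.19 the conntrack module still carries its old name; a hedged variant of the last modprobe line that covers both cases:

#try the pre-4.19 module name first, fall back to the new name on recent kernels
modprobe -- nf_conntrack_ipv4 || modprobe -- nf_conntrack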

7. Install ipset on all nodes
yum install ipset -y

iptables is the core network-isolation mechanism on Linux servers. The kernel evaluates the iptables rules one by one, so efficiency drops as the rule count grows. With ipset, the 5-tuples from the rules (protocol, source address, source port, destination address, destination port) are collapsed into a bounded set, which greatly reduces the number of iptables rules and improves performance. Published tests show the ipset approach can be up to 100x faster than plain iptables.
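
To illustrate the idea outside of Kubernetes (a minimal sketch; the set name blacklist and the addresses are made up for this example):

ipset create blacklist hash:ip      #create a set keyed by IP address
ipset add blacklist 10.10.0.1       #add as many entries as needed
ipset add blacklist 10.10.0.2
iptables -I INPUT -m set --match-set blacklist src -j DROP   #one iptables rule matches the whole set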

To make IPVS easier to manage, also install ipvsadm:
yum install ipvsadm -y
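
Once kube-proxy is running in IPVS mode later on, ipvsadm can be used to inspect the virtual servers, for example:

ipvsadm -Ln          #list IPVS virtual servers and their real servers
ipvsadm -Ln --stats  #same list with traffic statistics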

8. Set the system time zone on all nodes

#set the time zone to Asia/Shanghai and keep the hardware clock in UTC
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0

#restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

9. As a final step it is best to run an update (optional)
yum update -y

III. Install the Components

Install and configure Docker

1. Docker needs to be installed and configured on all nodes
export VERSION=19.03
curl -fsSL "https://get.docker.com/" | bash -s -- --mirror Aliyun

On all machines, configure registry mirrors and set Docker's cgroup driver to systemd; systemd is the officially recommended driver.

mkdir -p /etc/docker/
cat>/etc/docker/daemon.json<<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
      "https://fz5yth0r.mirror.aliyuncs.com",
      "https://dockerhub.mirrors.nwafu.edu.cn/",
      "https://mirror.ccs.tencentyun.com",
      "https://docker.mirrors.ustc.edu.cn/",
      "https://reg-mirror.qiniu.com",
      "http://hub-mirror.c.163.com/",
      "https://registry.docker-cn.com"
  ],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF

2. Start Docker and check that it is running properly
systemctl enable --now docker

3. Check docker info

[root@k8s-02 sysctl.d]# docker info
Client:
 Debug Mode: false
Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.10
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.6.4-1.el7.elrepo.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 1.915GiB
 Name: k8s-02
 ID: NVAE:PBC5:5AE5:TEW5:FWUQ:H4RQ:J6TD:J5KP:ZZS3:QUJF:ZOS3:4QET
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://fz5yth0r.mirror.aliyuncs.com/
  https://dockerhub.mirrors.nwafu.edu.cn/
  https://mirror.ccs.tencentyun.com/
  https://docker.mirrors.ustc.edu.cn/
  https://reg-mirror.qiniu.com/
  http://hub-mirror.c.163.com/
  https://registry.docker-cn.com/
 Live Restore Enabled: false

4. Check the Docker version

[root@k8s-01 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.10
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9424aeaee9
 Built:             Thu May 28 22:18:06 2020
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Community
 Engine:
  Version:          19.03.10
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9424aeaee9
  Built:            Thu May 28 22:16:43 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Install kubeadm

1. The default yum repository is hosted abroad; switch to the Aliyun mirror

cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

2. Install on the master nodes

#I run the master-node installation on k8s-01, k8s-02 and k8s-03
yum install -y \
    kubeadm-1.18.3 \
    kubectl-1.18.3 \
    kubelet-1.18.3 \
    --disableexcludes=kubernetes && \
    systemctl enable kubelet
    
#kubeadm  installs the cluster
#kubectl  command-line client for talking to the apiserver
#kubelet  creates, stops and otherwise manages the containers behind Pods
#worker nodes do not need kubectl; it is just a CLI client that reads a kubeconfig and calls the api-server, which is rarely needed on workers
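
A quick sanity check after the packages are installed (simple verification commands):

kubeadm version -o short           #should print v1.18.3
kubelet --version                  #should print Kubernetes v1.18.3
kubectl version --client --short   #client version only; the apiserver is not running yet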

3. Install on the worker nodes

#normally this would run on every node, but the masters already have it, so only install on k8s-04 and k8s-05
yum install -y \
    kubeadm-1.18.3 \
    kubelet-1.18.3 \
    --disableexcludes=kubernetes && \
    systemctl enable kubelet

api-server high-availability setup (skip this for a single master)

My environment is virtual machines. In a cloud environment you can simply use an SLB (cloud load balancer) and skip this step. On virtual machines I use an nginx local proxy.

1. Install on the master nodes

#first add a few more host entries on top of the existing ones; this only needs to be done on the master nodes
cat >>/etc/hosts<< EOF
192.168.31.100  k8s-master-01
192.168.31.101  k8s-master-02
192.168.31.102  k8s-master-03
192.168.31.105  k8s-master
EOF

2. Add the nginx configuration file

mkdir -p /etc/kubernetes

cat > /etc/kubernetes/nginx.conf << 'EOF'    # quote EOF so $remote_addr below is not expanded by the shell
user nginx nginx;
worker_processes auto;
events {
    worker_connections  20240;
    use epoll;
}
error_log /var/log/nginx_error.log info;

stream {
    upstream kube-servers {
        hash $remote_addr consistent;
        
        server k8s-master-01:6443 weight=5 max_fails=1 fail_timeout=3s;  #IP addresses can be used here instead of hostnames
        server k8s-master-02:6443 weight=5 max_fails=1 fail_timeout=3s;
        server k8s-master-03:6443 weight=5 max_fails=1 fail_timeout=3s;
    }
    
    server {
        listen 8443 reuseport;
        proxy_connect_timeout 3s;
        # increase the timeout
        proxy_timeout 3000s;
        proxy_pass kube-servers;
    }
}
EOF

3. Start the nginx container on the master nodes

#here I run nginx in a container; you could also write it as a static Pod manifest and drop it into the manifests directory during the init phase (a sketch follows the docker run command below), or install nginx from a package/binary

docker run --restart=always \
    -v /etc/kubernetes/nginx.conf:/etc/nginx/nginx.conf \
    -v /etc/localtime:/etc/localtime:ro \
    --name k8s \
    --net host \
    -d \
    nginx:alpine
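
For reference, a minimal sketch of the static Pod alternative mentioned above, assuming the manifest is placed in /etc/kubernetes/manifests/ once the kubelet is running; the file name apiserver-proxy.yaml and the Pod name are my own choices:

#/etc/kubernetes/manifests/apiserver-proxy.yaml  (hypothetical file name)
apiVersion: v1
kind: Pod
metadata:
  name: apiserver-proxy
  namespace: kube-system
spec:
  hostNetwork: true            #listen on the node's port 8443 directly, like --net host above
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      readOnly: true
  volumes:
  - name: nginx-conf
    hostPath:
      path: /etc/kubernetes/nginx.conf
      type: File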

4. Once it is up, check it

[root@k8s-01 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
d8b3955bda68        nginx:alpine        "nginx -g 'daemon of…"   2 minutes ago       Up 2 minutes                            k8s
[root@k8s-01 ~]# lsof -i:8443
lsof: no pwd entry for UID 101
COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   3360     root    5u  IPv4  58860      0t0  TCP *:pcsync-https (LISTEN)
lsof: no pwd entry for UID 101
nginx   3374      101    5u  IPv4  58860      0t0  TCP *:pcsync-https (LISTEN)

Configure the keepalived service

1. The HA setup needs a VIP for access from inside the cluster; install keepalived on all master nodes
yum install -y keepalived

2. Configure keepalived

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 192.168.31.100     #node IP; each master configures its own IP here
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.31.100    #node IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        192.168.31.105   #VIP
    }
}
EOF
#write the health-check script
vim  /etc/keepalived/check_port.sh
#!/bin/bash
CHK_PORT=$1
 if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`   # -n keeps the ports numeric so the grep matches
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
 else
        echo "Check Port Cant Be Empty!"
 fi
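
Make the script executable and give it a quick test (it should return 0 while nginx is listening on 8443):

chmod +x /etc/keepalived/check_port.sh
/etc/keepalived/check_port.sh 8443; echo $?   #prints 0 when the port is in use, 1 otherwise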

3. Start keepalived
systemctl enable --now keepalived

Check that the VIP answers:
ping -c 3 192.168.31.105   #the VIP

Configure kubeadm

1. On k8s-01, print the default init configuration
kubeadm config print init-defaults >kubeadm-init.yaml

2. The default configuration looks like this

[root@k8s-01 ~]# cat  kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

3. Modify the initialization file
Adjust the IPs to match your environment (mainly the master IPs). You can copy mine, but then your hostnames and the rest must match mine too. Save the edited file as init.yaml; the commands below use that name.

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.100   #master IP; the VIP or a domain name must not be used here
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-01                  #the node that bootstraps the cluster
  taints:
  - effect: NoSchedule           #taint; by default workloads are not scheduled onto masters
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    authorization-mode: "Node,RBAC"
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeClaimResize,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority,PodPreset"
    runtime-config: api/all=true,settings.k8s.io/v1alpha1=true
    storage-backend: etcd3
    etcd-servers: https://192.168.31.100:2379,https://192.168.31.101:2379,https://192.168.31.102:2379     #etcd cluster node IPs
  certSANs:             #extra SANs for the apiserver certificate
  - 10.96.0.1
  - 127.0.0.1
  - localhost
  - k8s-master
  - k8s-master-01
  - k8s-master-02
  - k8s-master-03
  - 192.168.31.100
  - 192.168.31.101
  - 192.168.31.102
  - master
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
    experimental-cluster-signing-duration: 867000h
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
dns:
  type: CoreDNS
  imageRepository: coredns
  imageTag: 1.6.7       #CoreDNS version
etcd:
  local:
    dataDir: /var/lib/etcd     #etcd data directory
    imageRepository: quay.io/coreos
    imageTag: v3.4.7      #etcd version
    serverCertSANs:
    - master
    - 192.168.31.100
    - 192.168.31.101
    - 192.168.31.102
    - k8s-01
    - k8s-02
    - k8s-03
    peerCertSANs:
    - master
    - 192.168.31.100
    - 192.168.31.101
    - 192.168.31.102
    - k8s-01
    - k8s-02
    - k8s-03
    extraArgs:
      auto-compaction-retention: "1h"
      max-request-bytes: "33554432"
      quota-backend-bytes: "8589934592"
      enable-v2: "false"
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.3   #Kubernetes version (match the installed kubeadm/kubelet)
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12     #Service IP range
  podSubnet: 10.244.0.0/16        #Pod network CIDR
controlPlaneEndpoint: k8s-master:8443    #VIP domain name or IP (points at the nginx proxy port)
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
  extraVolumes:
  - hostPath: /etc/localtime      #keep container time in sync with the host
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration # https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
mode: ipvs # or iptables
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration # https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
cgroupDriver: systemd
failSwapOn: true # set this to false if swap is enabled

4. Check the file for problems. Warnings can be ignored; real problems show up as errors. If nothing is wrong, the output ends with a kubeadm join xxx line
kubeadm init --config init.yaml --dry-run

kubeadm init parameter reference

      --apiserver-advertise-address string   The IP address the API server advertises and binds to.
      --apiserver-bind-port int32            The port the API server listens on. (default 6443)
      --apiserver-cert-extra-sans strings    Extra Subject Alternative Names (SANs) for the apiserver certificate; these can be IP addresses or DNS names. The certificate is bound to its SANs.
      --cert-dir string                      Directory where certificates are stored. (default "/etc/kubernetes/pki")
      --certificate-key string               The key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to the kubeadm configuration file.
      --cri-socket string                    Path to the CRI socket. If empty, kubeadm tries to auto-detect it; set it only when there are multiple CRI sockets or a non-standard one.
      --dry-run                              Do not apply any changes; only print what would be done.
      --feature-gates string                 Extra feature gates to enable, given as key=value pairs.
  -h, --help                                 Show the help text.
      --ignore-preflight-errors strings      Pre-flight errors to ignore; they are shown as warnings instead. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Registry to pull the control plane images from. (default "k8s.gcr.io")
      --kubernetes-version string            Kubernetes version to install. (default "stable-1")
      --node-name string                     Name of the node; defaults to the node's hostname.
      --pod-network-cidr string              The Pod network range; the control plane automatically allocates per-node subnets from it for the containers started there.
      --service-cidr string                  The Service IP range. (default "10.96.0.0/12")
      --service-dns-domain string            DNS suffix for Services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Do not print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  Phases to skip.
      --skip-token-print                     Do not print the default bootstrap token generated by kubeadm init.
      --token string                         Token used for mutual authentication between nodes and the control plane, format [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   How long before the token is automatically deleted (e.g. 1s, 2m, 3h). '0' means the token never expires. (default 24h0m0s)
      --upload-certs                         Upload the control-plane certificates to the kubeadm-certs Secret.

5. Check that the image list is correct; if the version is wrong, set kubernetesVersion in the yaml to the version that matches your installation
kubeadm config images list --config init.yaml

6. Pre-pull the images
kubeadm config images pull --config init.yaml

7. Initialize on k8s-01
kubeadm init --config init.yaml --upload-certs

Keep the join commands printed at the end of the output!

8. The init workflow is roughly as follows

preflight                    Run pre-flight checks
kubelet-start                Generate the kubelet configuration and restart the kubelet
certs                        Generate certificates
  /etcd-ca                   Generate the self-signed CA that provisions identities for etcd
  /apiserver-etcd-client     Generate the certificate the apiserver uses to access etcd
  /etcd-healthcheck-client   Generate the certificate used by liveness probes to health-check etcd
  /etcd-server               Generate the certificate for serving etcd
  /etcd-peer                 Generate the certificate for etcd nodes to communicate with each other
  /ca                        Generate the self-signed Kubernetes CA that provisions identities for the other Kubernetes components
  /apiserver                 Generate the certificate for serving the Kubernetes API (apiserver server certificate)
  /apiserver-kubelet-client  Generate the certificate for the API server to connect to the kubelet
  /front-proxy-ca            Generate the self-signed CA that provisions identities for the front proxy
  /front-proxy-client        Generate the certificate for the front proxy client
  /sa                        Generate the private key (and its public key) used to sign service account tokens
kubeconfig                   Generate the kubeconfig files for the control plane and the admin user
  /admin                     Generate the kubeconfig file used by the admin and by kubeadm itself
  /kubelet                   Generate the kubeconfig file used by the kubelet, only for cluster bootstrapping
  /controller-manager        Generate the kubeconfig file used by the controller manager
  /scheduler                 Generate the kubeconfig file used by the scheduler
kubelet-start                Write the kubelet environment file /var/lib/kubelet/kubeadm-flags.env and the config file /var/lib/kubelet/config.yaml, then start/restart the kubelet (via systemd)
control-plane                Generate the static Pod manifests that bring up the control plane (masters)
  /apiserver                 Generate the static Pod manifest for kube-apiserver
  /controller-manager        Generate the static Pod manifest for kube-controller-manager
  /scheduler                 Generate the static Pod manifest for kube-scheduler
etcd                         Generate the static Pod manifest for local etcd
  /local                     Generate the static Pod manifest for a single-node local etcd instance
upload-config                Upload the kubeadm and kubelet configuration as ConfigMaps
  /kubeadm                   Upload the kubeadm ClusterConfiguration as a ConfigMap
  /kubelet                   Upload the kubelet component config as a ConfigMap
upload-certs                 Upload the certificates to kubeadm-certs
mark-control-plane           Mark the node as a control-plane node
bootstrap-token              Generate the bootstrap tokens other nodes use to join the cluster
addon                        Install the add-ons required to pass the conformance tests
  /coredns                   Install the CoreDNS add-on
  /kube-proxy                Install the kube-proxy add-on

9. Save the token printed by init and copy the kubeconfig for kubectl; its default path is ~/.kube/config
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

10. The initialization configuration is saved in a ConfigMap
kubectl -n kube-system get cm kubeadm-config -o yaml

11. Copy the certificates manually (only needed on older versions that cannot upload certificates); with 1.18 this step can be skipped (optional)

We already passed the --upload-certs flag, which uploads the certificate files into a Secret, so there is no need to copy them by hand. Older versions may require this step.

for node in k8s-02 k8s-03;do
    ssh $node 'mkdir -p /etc/kubernetes/pki/etcd'
    scp -r /etc/kubernetes/pki/ca.* $node:/etc/kubernetes/pki/
    scp -r /etc/kubernetes/pki/sa.* $node:/etc/kubernetes/pki/
    scp -r /etc/kubernetes/pki/front-proxy-ca.* $node:/etc/kubernetes/pki/
    scp -r /etc/kubernetes/pki/etcd/ca.* $node:/etc/kubernetes/pki/etcd/
done

12. Run join on the other master nodes

#if you forget the token, list it with kubeadm token list
 kubeadm join k8s-master:8443 --token 58msro.ou3s6067slh6orw7 \
    --discovery-token-ca-cert-hash sha256:b2ffc7bd4b8c5d4cd6f5f016f7a19d49dba3090c5cb018827b712fa1138961b5 \
    --control-plane --certificate-key d8272e844a395ad81d1cced7a6de6ebb52dd9be6ea93897fd608bd54aebdc45f
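
If the token (valid for 24h) or the certificate key (valid for 2h) has expired, both can be regenerated on an existing master; a hedged example:

#print a fresh worker join command with a new token
kubeadm token create --print-join-command
#re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs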

13. Create the kubeconfig on all masters
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Configure etcdctl

1. Ideally run this on all master nodes
docker cp $(docker ps -a | awk '/k8s_etcd/{print $1}'):/usr/local/bin/etcdctl /usr/local/bin/etcdctl

2. Configure the etcdctl parameters

#remember to change the etcd node IPs
cat >/etc/profile.d/etcd.sh<<'EOF'
ETCD_CERET_DIR=/etc/kubernetes/pki/etcd/
ETCD_CA_FILE=ca.crt
ETCD_KEY_FILE=healthcheck-client.key
ETCD_CERT_FILE=healthcheck-client.crt
ETCD_EP=https://192.168.31.100:2379,https://192.168.31.101:2379,https://192.168.31.102:2379
alias etcd_v2="etcdctl --cert-file ${ETCD_CERET_DIR}/${ETCD_CERT_FILE} \
              --key-file ${ETCD_CERET_DIR}/${ETCD_KEY_FILE}  \
              --ca-file ${ETCD_CERET_DIR}/${ETCD_CA_FILE}  \
              --endpoints $ETCD_EP"
alias etcd_v3="ETCDCTL_API=3 \
    etcdctl   \
   --cert ${ETCD_CERET_DIR}/${ETCD_CERT_FILE} \
   --key ${ETCD_CERET_DIR}/${ETCD_KEY_FILE} \
   --cacert ${ETCD_CERET_DIR}/${ETCD_CA_FILE} \
    --endpoints $ETCD_EP"
EOF

3. Source the environment file manually; if you want to run these checks from several masters, just distribute the script to them

[root@k8s-01 ~]# . /etc/profile.d/etcd.sh
[root@k8s-01 ~]# etcd_v3 endpoint status --write-out=table  #sample output below
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.31.100:2379 |   c95fd6fdbb91a5 |   3.4.7 |  2.4 MB |     false |      false |         6 |       5656 |               5656 |        |
| https://192.168.31.101:2379 | cfee13793e1cc392 |   3.4.7 |  2.3 MB |      true |      false |         6 |       5656 |               5656 |        |
| https://192.168.31.102:2379 | c9662d268621483c |   3.4.7 |  2.3 MB |     false |      false |         6 |       5656 |               5656 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
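
With the same alias, a couple of other useful checks:

etcd_v3 endpoint health                   #per-endpoint health
etcd_v3 member list --write-out=table     #cluster membership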

Worker node setup

  • Run the initialization steps
  • Install Docker
  • Install nginx
  • Install kubeadm and kubelet

1. To add a worker node, just run the join command below

#this comes from the kubeadm init output; there are two join commands, one for masters and one for workers. Do not copy mine verbatim; use the one from your own output
kubeadm join k8s-master:8443 --token 58msro.ou3s6067slh6orw7 \
    --discovery-token-ca-cert-hash sha256:b2ffc7bd4b8c5d4cd6f5f016f7a19d49dba3090c5cb018827b712fa1138961b5

file

2. After the workers have joined, they show up in kubectl get node

[root@k8s-01 ~]# kubectl  get node
NAME     STATUS     ROLES    AGE   VERSION
k8s-01   NotReady   master   44m   v1.18.3
k8s-02   NotReady   master   26m   v1.18.3
k8s-03   NotReady   master   25m   v1.18.3
k8s-04   NotReady   <none>   68s   v1.18.3
k8s-05   NotReady   <none>   63s   v1.18.3
[root@k8s-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}  

3. By default the master nodes do not schedule regular workloads. If your machines are short on resources and you want to open the masters up, run the commands below (an example of restoring the taint follows them):

kubectl taint nodes k8s-01 node-role.kubernetes.io/master-
kubectl taint nodes k8s-02 node-role.kubernetes.io/master-
kubectl taint nodes k8s-03 node-role.kubernetes.io/master-
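
To restore the default behaviour later, the taint can be added back; a hedged example for one node:

kubectl taint nodes k8s-01 node-role.kubernetes.io/master=:NoSchedule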

Deploy flannel

1. There is no container network yet, so the coredns Pods cannot get an IP and stay Pending; the flannel plugin has to be deployed manually.

[root@k8s-01 ~]# kubectl -n kube-system get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-78c7b4d59d-c4dcl         0/1     Pending   0          45m     <none>           <none>   <none>           <none>
coredns-78c7b4d59d-d6rrv         0/1     Pending   0          45m     <none>           <none>   <none>           <none>
etcd-k8s-01                      1/1     Running   0          46m     192.168.31.100   k8s-01   <none>           <none>
etcd-k8s-02                      1/1     Running   0          28m     192.168.31.101   k8s-02   <none>           <none>
etcd-k8s-03                      1/1     Running   0          27m     192.168.31.102   k8s-03   <none>           <none>
kube-apiserver-k8s-01            1/1     Running   0          46m     192.168.31.100   k8s-01   <none>           <none>
kube-apiserver-k8s-02            1/1     Running   0          28m     192.168.31.101   k8s-02   <none>           <none>
kube-apiserver-k8s-03            1/1     Running   0          27m     192.168.31.102   k8s-03   <none>           <none>
kube-controller-manager-k8s-01   1/1     Running   1          46m     192.168.31.100   k8s-01   <none>           <none>
kube-controller-manager-k8s-02   1/1     Running   0          28m     192.168.31.101   k8s-02   <none>           <none>
kube-controller-manager-k8s-03   1/1     Running   0          27m     192.168.31.102   k8s-03   <none>           <none>
kube-proxy-7rlb4                 1/1     Running   0          27m     192.168.31.102   k8s-03   <none>           <none>
kube-proxy-c8kbl                 1/1     Running   0          45m     192.168.31.100   k8s-01   <none>           <none>
kube-proxy-f87b8                 1/1     Running   0          28m     192.168.31.101   k8s-02   <none>           <none>
kube-proxy-pcx6p                 1/1     Running   1          2m59s   192.168.31.104   k8s-05   <none>           <none>
kube-proxy-zscwf                 1/1     Running   1          3m4s    192.168.31.103   k8s-04   <none>           <none>
kube-scheduler-k8s-01            1/1     Running   1          46m     192.168.31.100   k8s-01   <none>           <none>
kube-scheduler-k8s-02            1/1     Running   0          28m     192.168.31.101   k8s-02   <none>           <none>
kube-scheduler-k8s-03            1/1     Running   0          27m     192.168.31.102   k8s-03   <none>           <none>

2. Install flannel

#patch the nodes manually; remember to do the same for any node added later
nodes=`kubectl get node --no-headers | awk '{print $1}'`
for node in $nodes;do
    cidr=`kubectl get node "$node" -o jsonpath='{.spec.podCIDRs[0]}'`
    [ -z "$(kubectl get node $node -o jsonpath='{.spec.podCIDR}')" ] && {
        kubectl patch node "$node" -p '{"spec":{"podCIDR":"'"$cidr"'"}}' 
    }
done
wget http://down.i4t.com/k8s1.18/kube-flannel.yml
kubectl apply -f kube-flannel.yml
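
After applying the manifest, verify that flannel is up and the nodes become Ready (the app=flannel label matches how the upstream manifest labels its Pods; adjust it if your copy differs):

kubectl -n kube-system get pod -l app=flannel -o wide   #one kube-flannel Pod per node, all Running
kubectl get node                                        #nodes move from NotReady to Ready once the CNI is up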

3. If a node has multiple NICs, specify the right one in kube-flannel.yml

  containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0  # with multiple NICs, set this to the name of the internal NIC

Tip: the podSubnet set in the kubeadm config file and the network configured in flannel must be the same (with my defaults they already match).

IV. Verify the Cluster

1. Check the Pods
kubectl -n kube-system get pod -o wide

2. Once all Pods in the kube-system namespace are Running, first test whether DNS works

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30001
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: abcdocker9/centos:v1
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

3. After creating them, check the Pods

[root@k8s-01 ~]# kubectl  get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/busybox                 1/1     Running   0          4m21s
pod/nginx-97499b967-lfvcq   1/1     Running   0          4m21s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        16h
service/nginx        NodePort    10.96.21.46     <none>        80:30001/TCP   15h

4. Use nslookup to check that names resolve

[root@k8s-01 ~]# kubectl exec -ti busybox -- nslookup kubernetes
Server:     10.96.0.10
Address:    10.96.0.10#53
Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

5. Test that the nginx Service and Pod-to-Pod networking work

for i in k8s-01 k8s-02 k8s-03 k8s-04 k8s-05
do
   ssh root@$i curl -s 10.96.21.46   #the nginx Service IP
   ssh root@$i curl -s 10.244.3.4   #a Pod IP
done

6. I exposed the Service with a NodePort; access nodeIP:30001 from any cluster node and check that it responds.

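For example, from any machine that can reach the nodes (using one of the node IPs from this environment):

curl -I http://192.168.31.100:30001   #should return HTTP/1.1 200 OK served by the nginx Pod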

Resetting the cluster (if anything goes wrong along the way, you can reset with the command below and start over)
kubeadm reset
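
kubeadm reset does not clean up everything; a hedged set of follow-up commands that are commonly run on the node before re-initializing:

#flush rules and state left behind by kube-proxy and flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear                      #clear IPVS rules if ipvs mode was used
rm -rf $HOME/.kube /etc/cni/net.d    #remove the old kubeconfig and CNI configuration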
