1. Environment Preparation

1.1 Server Plan

Role           IP                Components
k8s-master1    192.168.131.128   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2    192.168.131.131   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1      192.168.131.129   kubelet, kube-proxy, docker
k8s-node2      192.168.131.130   kubelet, kube-proxy, docker
Nginx LB       192.168.131.132   Nginx, keepalived
Nginx LB       192.168.131.137   Nginx, keepalived
VIP            192.168.131.200   LB

1.2 Software Versions
docker 19.03.4
kubernetes 1.20.15
etcd 3.4.9
Cluster architecture diagram

This K8s high-availability cluster is built in two stages: first a single-master architecture is deployed, and later it is scaled out into a multi-master high-availability cluster.

Single-master server plan

Role           IP                Components
k8s-master1    192.168.131.128   kube-apiserver, kube-controller-manager, kube-scheduler, etcd (etcd also on master2)
k8s-node1      192.168.131.129   kubelet, kube-proxy, docker
k8s-node2      192.168.131.130   kubelet, kube-proxy, docker

1.3 System Initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the masters
cat >> /etc/hosts << EOF
192.168.131.128 k8s-master1
192.168.131.131 k8s-master2
192.168.131.129 k8s-node1
192.168.131.130 k8s-node2
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

2. Deploy the etcd Cluster

etcd is a distributed key-value store. Kubernetes uses etcd to store all of its cluster data, so an etcd database is prepared first. Two machines are used for the cluster here; more can be added.

Node name   IP
etcd-1      192.168.131.128
etcd-2      192.168.131.131

2.1 Generate etcd Certificates

Download the certificate generation tools (cfssl)

cat > /opt/k8s/etcd-cert/cfssl.sh << EOF
#!/bin/bash
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
EOF
bash /opt/k8s/etcd-cert/cfssl.sh

Generate a self-signed CA certificate

cat > ca-config.json<< EOF
{
    "signing": {
    "default": {
    "expiry": "87600h"
    },
    "profiles": {
        "www": {
        "expiry": "87600h",
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
        }
    }
    }
}
EOF

cat > ca-csr.json<< EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
    {
        "C": "CN",
        "L": "Beijing",
        "ST": "Beijing"
    }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# This produces ca.pem and ca-key.pem


Use the self-signed CA to issue the etcd server (HTTPS) certificate

cat > server-csr.json<< EOF
{
    "CN": "etcd",
    "hosts": [
    	"127.0.0.1"
        "192.168.131.128",
        "192.168.131.131"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing"
    }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# This produces server.pem and server-key.pem


2.2 Deploy the etcd Cluster

Binary package download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

Create the etcd working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Create the etcd configuration file (master1)

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.131.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.131.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.131.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.131.128:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.131.128:2380,etcd-2=https://192.168.131.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Configuration notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing for joining an existing one

Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Copy the certificates generated earlier into the etcd working directory

cp /opt/k8s/etcd-cert/*.pem /opt/etcd/ssl/

Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

On the first node, etcd appears to hang at startup while it waits for the other members to join; deploy the remaining nodes first and then start them.
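
To watch what the first member is doing while it waits for its peers, the unit logs can be followed (an optional check, not required by the procedure):

journalctl -u etcd -f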

Copy the etcd working directory to the other node and adjust etcd.conf

scp -r /opt/etcd/ root@192.168.131.131:/opt/etcd/
scp /usr/lib/systemd/system/etcd.service root@192.168.131.131:/usr/lib/systemd/system/

Change the node IP addresses and the node name

#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.131.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.131.131:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.131.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.131.131:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.131.128:2380,etcd-2=https://192.168.131.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Check the cluster status

cat > etcd_status.sh << EOF
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.131.128:2379,https://192.168.131.131:2379" endpoint health --write-out=table
EOF
bash etcd_status.sh

If every endpoint reports healthy, the cluster is working.
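
Optionally, the member list can be inspected with the same etcdctl flags (an extra check, not required by the procedure):

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.131.128:2379,https://192.168.131.131:2379" member list --write-out=table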

3. Deploy Docker on All Nodes

Docker binary download: https://download.docker.com/linux/static/stable/x86_64/

tar -xvf docker-19.03.4.tgz
cp docker/* /usr/bin/

cat > /etc/systemd/system/docker.service <<- EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

chmod +x /etc/systemd/system/docker.service
# Configure your own registry mirror (accelerator) address
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://htpyh0m5.mirror.aliyuncs.com"]
}
EOF
# Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker.service

The other nodes are installed the same way.

4. Deploy the Master Node

4.1 Self-Signed apiserver Certificates

The following script creates all certificates needed by the master components

cd /opt/k8s/apiserver-cert
vim k8s-cert.sh
#!/bin/bash
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#---------- server certificate -------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.131.128",
      "192.168.131.129",
      "192.168.131.130",
      "192.168.131.131",
      "192.168.131.132",
      "192.168.131.137",
      "192.168.131.200",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#----------- admin certificate, used by kubectl to connect to the cluster -------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#---------- kube-controller-manager certificate -------------

cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# The hosts field can be empty, or list the IPs of all kube-controller-manager nodes
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager


#---------- kube-scheduler certificate -------------
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler


#---------- kube-proxy certificate -------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# The hosts field in server-csr.json must list the IPs of all cluster nodes involved (masters, LB nodes, VIP)
# Run the script to create the certificates
bash k8s-cert.sh
# List the generated certificates


4.2 Master Node Deployment

Download the binary package from GitHub and unpack it

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
(Download the Server Binaries package; it contains the binaries for both the master and node components.)

Create the working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

Deploy kube-apiserver

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.131.128:2379,https://192.168.131.131:2379 \\
--bind-address=192.168.131.128 \\
--secure-port=6443 \\
--advertise-address=192.168.131.128 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: of the two backslashes above, the first is an escape character and the second is the line-continuation backslash; the escape is needed so the heredoc (EOF) writes a literal backslash/newline into the generated file.

--logtostderr: enable logging
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificates the apiserver uses to access kubelets
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

2. Copy the certificates into the configuration directory
cp /opt/k8s/apiserver-cert/*.pem /opt/kubernetes/ssl/
3. Enable the TLS Bootstrapping mechanism
# Generate a random token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cat > /opt/kubernetes/cfg/token.csv << EOF
c3bd844c9abd6ac0bbc15347b504cd0d,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
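
The token above is a value generated earlier with the command shown; if you generate a fresh one, it must match the token placed into bootstrap.kubeconfig later (section 5.2.4). A small sketch of doing both in one step (the TOKEN variable is only for illustration):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo "bootstrap token: ${TOKEN}"  # reuse this value in bootstrap.kubeconfig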

4. Manage the service with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

5. Start the service and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

Create the kubeconfig files
Create the kubeconfig files that kube-controller-manager, kube-scheduler and kubectl use to connect to the apiserver

cat > /opt/k8s-cert/kubeconfig.sh << 'EOF'
#!/bin/bash
KUBE_CONFIG="/opt/kubernetes/cfg"
KUBE_APISERVER="https://192.168.131.128:6443"

## Create kube-controller-manager.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}/kube-controller-manager.kubeconfig
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}/kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}/kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}/kube-controller-manager.kubeconfig

## Create kube-scheduler.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}/kube-scheduler.kubeconfig
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}/kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}/kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}/kube-scheduler.kubeconfig

## Create the config file used by kubectl to connect to the cluster
KUBE_CONFIG_DIR="/root/.kube"
if [ ! -d $KUBE_CONFIG_DIR ];then
	mkdir $KUBE_CONFIG_DIR
fi
KUBE_CONFIG_ADMIN="$KUBE_CONFIG_DIR/config"


kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG_ADMIN}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG_ADMIN}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG_ADMIN}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG_ADMIN}
EOF
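
The script above uses relative paths such as ./kube-controller-manager.pem and ./admin.pem, so run it from the directory that holds the certificates generated by k8s-cert.sh. A sketch, assuming the paths used earlier in this article:

cd /opt/k8s/apiserver-cert
bash /opt/k8s-cert/kubeconfig.sh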

Deploy kube-controller-manager

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/16 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
2. systemd unit file
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Deploy kube-scheduler

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
2. systemd unit file
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Check the current status of the cluster components with the kubectl tool
kubectl get cs
Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

5. Deploy the Worker Nodes

5.1 Create the working directory on all worker nodes and copy the binaries

mkdir /opt/kubernetes/{bin,cfg,ssl,logs}
scp kubelet kube-proxy root@192.168.131.129:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.131.130:/opt/kubernetes/bin/
scp kube-proxy*.pem ca*.pem root@192.168.131.129:/opt/kubernetes/ssl/
scp kube-proxy*.pem ca*.pem root@192.168.131.130:/opt/kubernetes/ssl/
# Copy the kube-proxy and CA certificates generated earlier (they can also be regenerated on the worker nodes)

5.2 Deploy kubelet

5.2.1 Create the kubelet.conf configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Parameter notes:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty at first; generated automatically and used later to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image that manages the Pod network namespace (pause container)

5.2.2 Create the parameter configuration file kubelet-config.yml

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

5.2.3 Create the systemd unit file

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

5.2.4 Create the bootstrap kubeconfig file used when kubelet first joins the cluster
Run a script that creates the kubeconfig files (the kubelet bootstrap and kube-proxy kubeconfigs are created together)

cat > /opt/k8s-cert/kubeconfig.sh << 'EOF'
#!/bin/bash
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=c3bd844c9abd6ac0bbc15347b504cd0d \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
EOF
# Run the script (if a worker node reports that the kubectl command is missing, copy kubectl over from the master along with /root/.kube/config, otherwise kubectl cannot reach the apiserver; alternatively, create the kubeconfig files on the master and copy them over)
bash kubeconfig.sh 192.168.131.128 /opt/kubernetes/ssl
#  $1 is the apiserver IP address, $2 is the directory that holds the required certificates
#  After creation, copy the generated files into /opt/kubernetes/cfg/ (see the example below)
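
For example, assuming the script was run in the current directory:

cp bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/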

5.2.5 Start kubelet

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

5.2.6 Approve the kubelet certificate request so the node joins the cluster

# View pending certificate requests
root@k8s-master1:/opt/kubernetes/cfg # kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-YsHygH1fyVyWY4quYCl4-1BNfCLrtjiJj24fuZ-mpZI   24s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
root@k8s-master1:/opt/kubernetes/cfg # kubectl certificate approve node-csr-YsHygH1fyVyWY4quYCl4-1BNfCLrtjiJj24fuZ-mpZI
certificatesigningrequest.certificates.k8s.io/node-csr-YsHygH1fyVyWY4quYCl4-1BNfCLrtjiJj24fuZ-mpZI approved
root@k8s-master1:/opt/kubernetes/cfg # kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-YsHygH1fyVyWY4quYCl4-1BNfCLrtjiJj24fuZ-mpZI   4m23s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# View the nodes
root@k8s-node1:/opt/kubernetes/cfg # kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   55s   v1.20.15
# Deploy worker node2 the same way (kubelet can also be deployed on the masters to join them to the cluster; the procedure is identical to that for the nodes)

5.3 Deploy kube-proxy

5.3.1 Create the kube-proxy configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

5.3.2 Create the parameter configuration file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.244.0.0/16
EOF

# hostnameOverride is the hostname of the current node; clusterCIDR is the Pod network CIDR and should match --cluster-cidr in kube-controller-manager
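
When repeating these steps on k8s-node2, the hostname has to change in both kubelet.conf and kube-proxy-config.yml; a sketch using sed, assuming the node1 files were copied over unchanged:

sed -i 's/k8s-node1/k8s-node2/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml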

5.3.3 Create the systemd unit file

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

5.3.4 Start kube-proxy

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

After deployment, check the node status; the nodes show NotReady because no network plugin has been deployed yet.

6. Deploy the Calico Network Plugin

6.1 Deploy the network component

Download the calico.yaml manifest for the Calico plugin
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -o calico.yaml

# Deploy
kubectl apply -f calico.yaml
# Output like the following indicates success

Check the node status again (this is slow because many images need to be pulled before the nodes become Ready).

6.2 Authorize the apiserver to Access kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
# Apply
kubectl apply -f apiserver-to-kubelet-rbac.yaml

7. Scale Out to a Multi-Master Architecture

Copy all the Kubernetes files from the existing master over to the new one, then adjust the server IP and hostname and start the services.
7.1 Copy the files

scp -r /opt/kubernetes/{bin,cfg,ssl} root@192.168.131.131:/opt/kubernetes/
scp -r /usr/lib/systemd/system/kube* root@192.168.131.131:/usr/lib/systemd/system/
scp -r /usr/bin/kubectl root@192.168.131.131:/usr/bin/
scp -r /root/.kube/config root@192.168.131.131:/root/.kube/
# Delete the following files (they are regenerated automatically when kubelet starts)
rm -rf /opt/kubernetes/ssl/kubelet*
rm -rf /opt/kubernetes/cfg/kubelet.kubeconfig

7.2 Change the IP and Hostname

# Change the apiserver, kubelet and kube-proxy configuration files to use the local IP and hostname
vim /opt/kubernetes/cfg/kube-apiserver.conf

--bind-address=192.168.131.131
--advertise-address=192.168.131.131

vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
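
If you prefer sed to editing by hand, a sketch of the same changes (assuming the files copied from master1 are otherwise unchanged, and that master1's kubelet and kube-proxy used the hostname k8s-master1):

sed -i 's#--bind-address=192.168.131.128#--bind-address=192.168.131.131#' /opt/kubernetes/cfg/kube-apiserver.conf
sed -i 's#--advertise-address=192.168.131.128#--advertise-address=192.168.131.131#' /opt/kubernetes/cfg/kube-apiserver.conf
sed -i 's/k8s-master1/k8s-master2/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml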

7.3 Start the Services

systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet
# After starting, approve the new node's certificate request so it joins the cluster (see below)
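
Approving the request works exactly as in section 5.2.6 (the CSR name below is illustrative):

kubectl get csr
kubectl certificate approve <csr-name>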

7.4 Check the Cluster Status

Point kubectl at the apiserver of the newly added master2:
vim ~/.kube/config
server: https://192.168.131.131:6443


8. Deploy Nginx + Keepalived

Install the packages

yum install epel-release -y
yum install nginx keepalived -y

8.1 Deploy the Nginx Load Balancer

Nginx configuration file (identical on both LB nodes)

cat > /etc/nginx/nginx.conf << 'EOF'
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}
# Layer-4 load balancing for the two master kube-apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.131.128:6443;   # Master1 APISERVER IP:PORT
       server 192.168.131.131:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 16443;
       proxy_pass k8s-apiserver;
    }
}
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}
EOF
# Start Nginx
systemctl daemon-reload
systemctl start nginx
systemctl enable nginx 
Startup may fail with the errors below.
# If the following error appears, install the stream module
1. nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:12
yum install nginx-mod-stream -y

2. nginx: [emerg] bind() to 0.0.0.0:16443 failed (13: Permission denied) 
# SELinux: add port 16443 to the ports allowed for http
# Check the current list
semanage port -l | grep http_port_t
http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000
# Add the port
semanage port -a -t http_port_t  -p tcp 16443

8.2 Deploy Keepalived

keepalived configuration files

# Master (primary) LB node
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.131.200/24
    }
    track_script {
    	check_nginx
    }
}
EOF
# Backup LB node
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.131.200/24
    }
    track_script {
    	check_nginx
    }
}
EOF
# Script that checks whether Nginx is still listening
cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
count=`netstat -ntpl|grep 16443|egrep -cv "grep|$$"`
if [ "$count" -eq 0 ];then
	exit 1
else
	exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Start keepalived

systemctl start keepalived
systemctl enable keepalived
# After startup, one of the LB servers has the VIP as an additional address on its NIC; stop Nginx on that server and the VIP should fail over to the backup node automatically, which shows everything is working.

From any node, query the apiserver version information through the VIP address; output like the following confirms it works (see the check below).
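
A quick way to verify the VIP and the failover (ens32 is the interface name used in the keepalived configuration above; the curl check through the VIP is a sketch):

ip addr show ens32                              # on the LB nodes: check which one currently holds 192.168.131.200
curl -k https://192.168.131.200:16443/version   # query the apiserver version through the VIP and the 16443 listener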
8.3 Point the apiserver Connection Address at the VIP

The following files are involved:
/opt/kubernetes/cfg/bootstrap.kubeconfig   # certificate request file used when a node first joins the cluster
/opt/kubernetes/cfg/kubelet.kubeconfig   # kubelet's apiserver connection config
/opt/kubernetes/cfg/kube-proxy.kubeconfig   # kube-proxy's apiserver connection config
/opt/kubernetes/cfg/kube-scheduler.kubeconfig   # kube-scheduler's apiserver connection config
/opt/kubernetes/cfg/kube-controller-manager.kubeconfig   # kube-controller-manager's apiserver connection config
/root/.kube/config   # kubectl's apiserver connection config; modify on all nodes, including the masters

Modify them with sed

sed -i 's#192.168.131.128:6443#192.168.131.200:16443#' /opt/kubernetes/cfg/*
sed -i 's#192.168.131.131:6443#192.168.131.200:16443#' /opt/kubernetes/cfg/*
systemctl restart kubelet kube-proxy

Check that the nodes are still healthy

kubectl get nodes


9. Deploy CoreDNS and the Dashboard

9.1 Deploy CoreDNS

The CoreDNS project provides an official deployment template and deploy script for Kubernetes; download them from:
https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed
https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh

Download both files into the same directory and run them from there.

# Create a directory
mkdir /opt/kubernetes/coredns/
# Put both files into this directory and run
bash deploy.sh -i 10.0.0.2 > coredns.yaml
# The -i argument sets the cluster DNS clusterIP
# Deploy with kubectl
kubectl apply -f coredns.yaml
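
An optional check that cluster DNS works (busybox:1.28 is a commonly used test image; this step is not part of the original procedure):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes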

9.2 Deploy the Dashboard

# Working directory
mkdir -p /opt/kubernetes/dashboard
cd /opt/kubernetes/dashboard
 
# Download
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
 
# Rename
mv recommended.yaml kubernetes-dashboard.yaml

In the official kubernetes-dashboard.yaml, the Service type is ClusterIP, which means the dashboard can only be reached through a proxy. Change it to spec.type: NodePort with spec.ports.nodePort: 30001 so that, after deployment, the dashboard is reachable directly at nodeIP:port.

Modify:

spec:
  type: NodePort
  ports:
    - nodePort: 30001
      port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

# Deploy
kubectl apply -f kubernetes-dashboard.yaml 
# Check the deployment
kubectl get pods,svc -n kubernetes-dashboard

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
 
# Get the Kubernetes Dashboard token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

# Use the token from the output to log in to the Dashboard

Alternatively, the user can be created as follows

cat > dashuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

kubectl apply -f dashuser.yaml
# View the token (same approach as above)
kubectl describe secret admin-user-token-x8z27 -n kubernetes-dashboard

Addendum

After installing the Calico (CNI) network plugin, errors may appear when viewing pod information; this is caused by kube-proxy not being installed on the master nodes, and it can optionally be installed there as well.
