K8s Cluster Setup with kubeadm
I. Environment Preparation
1. Server Overview
Five CentOS 7 virtual machines are used, with the following details:
OS | IP Address | Node Role | CPU | Memory | Hostname
---|---|---|---|---|---
centos-7.6 | 192.168.11.100 | master | >=2 | >=2G | m1
centos-7.6 | 192.168.11.101 | master | >=2 | >=2G | m2
centos-7.6 | 192.168.11.102 | master | >=2 | >=2G | m3
centos-7.6 | 192.168.11.103 | worker | >=2 | >=2G | n1
centos-7.6 | 192.168.11.104 | worker | >=2 | >=2G | n2
2. System Settings (all nodes)
2.1 Hostname
Every node's hostname must be unique, and all nodes must be able to reach one another by hostname.
# Show the current hostname
$ hostname
# Set the hostname
$ hostnamectl set-hostname <your_hostname>
# Configure /etc/hosts so all nodes can reach each other by hostname
$ vi /etc/hosts
# <node-ip> <node-hostname>
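For the five machines in the table above, the entries would look like this (a sketch based on the IPs and hostnames from section 1):
# Example /etc/hosts entries for this cluster
192.168.11.100 m1
192.168.11.101 m2
192.168.11.102 m3
192.168.11.103 n1
192.168.11.104 n2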
2.2 Installing Dependencies
# Update yum
$ yum update
# Install dependencies
$ yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
2.3 Disable the Firewall and swap, Reset iptables
# Stop and disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
# Reset iptables
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Turn off swap, and comment it out of fstab so it stays off after reboot
$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux (setenforce only applies until reboot)
$ setenforce 0
# Stop dnsmasq (otherwise Docker containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
2.4 Kernel Parameters
# Write the config file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
# Apply the settings
$ sysctl -p /etc/sysctl.d/kubernetes.conf
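If the sysctl step above fails with "No such file or directory" for the net.bridge.* keys, the br_netfilter kernel module is not loaded yet, which is common on a fresh CentOS 7 install. A minimal fix (the file name k8s.conf is an arbitrary choice for persistence):
# Load the bridge netfilter module now and on every boot, then rerun sysctl
$ modprobe br_netfilter
$ echo "br_netfilter" > /etc/modules-load.d/k8s.conf
$ sysctl -p /etc/sysctl.d/kubernetes.conf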
# Sync the system clock with a time server
$ yum install -y ntpdate && ntpdate time.windows.com
3. Installing Docker (all nodes)
# Install build dependencies
$ yum -y install gcc
$ yum -y install gcc-c++
# Install required packages
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker yum repo; within mainland China the Aliyun mirror is recommended (adding either one is enough)
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# List all Docker versions available in the repo
$ yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64 3:20.10.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.2-3.el7 docker-ce-stable
# Install a specific version; prefer a recent one, since Kubernetes reports an error when Docker is too old
$ yum install docker-ce-19.03.13
# Set the cgroup driver to systemd (the default is cgroupfs; this must match the kubelet's setting, or alternatively skip this and set the kubelet to cgroupfs later)
$ cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Start the Docker service
$ service docker restart
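To confirm Docker picked up the systemd cgroup driver, `docker info` can be checked (expected output shown as a comment):
# Verify the cgroup driver now reported by Docker
$ docker info | grep -i "cgroup driver"
# Cgroup Driver: systemd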
4. Installing the Required Tools (all nodes)
4.1 Tool Overview
- kubeadm: the command used to bootstrap the cluster
- kubelet: the component that runs on every machine in the cluster and manages the lifecycle of pods and containers
- kubectl: the cluster management CLI (optional; it only needs to be installed on nodes used to control the cluster)
4.2 Installation
# Configure the yum repo; within China, use the Aliyun mirror
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install the tools
# Find the version to install
$ yum list kubeadm --showduplicates | sort -r
# Install a specific version (1.18.5 here); note the package order: kubelet kubectl kubeadm
$ yum install -y kubelet-1.18.5 kubectl-1.18.5 kubeadm-1.18.5
# Make the kubelet's cgroup driver match Docker's (if Docker's exec-opts was not set to systemd above, switch the kubelet to cgroupfs here)
$ sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Enable and start kubelet
$ systemctl enable kubelet && systemctl start kubelet
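At this point the kubelet will restart in a loop until kubeadm init (or join) generates its configuration; that is expected. A quick sketch to double-check the installed versions:
# Confirm the installed versions match (1.18.5 here)
$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client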
5. One-Shot Setup Script (all nodes)
The kube-init.sh file:
#!/usr/bin/bash
# Replace the yum repos with Aliyun mirrors (these machines run CentOS 7, so fetch the CentOS 7 repo)
rm -rfv /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Disable swap and set kernel parameters
swapoff -a
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# Install tooling, Docker dependencies, and Docker itself
yum install vim bash-completion net-tools gcc -y
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm -y
yum -y install docker-ce
# Configure a registry mirror and start Docker
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# Add the Kubernetes repo and install kubeadm/kubelet/kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubectl kubelet kubeadm -y
systemctl enable kubelet
echo "--------------------------------------Installed And Starting KubeFamily-------------------------------------------------------"
echo "--------------------------------------Kubernetes Env Init Down---------------------------------------------------------------"
echo "--------------------------------------Wait Token and Hash To Join Cluster-----------------------------------------------------"
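Assuming the script is saved as kube-init.sh on each node, a typical invocation would be:
# Run the one-shot setup on a node (as root)
$ chmod +x kube-init.sh
$ ./kube-init.sh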
II. Building a High-Availability Cluster
Decide as needed whether to use keepalived for high availability.
1. Deploy keepalived for apiserver HA (pick any two master nodes)
1.1 Install keepalived
# Install keepalived on the two chosen masters (one MASTER, one BACKUP)
$ yum install -y keepalived
1.2 Create the keepalived Configuration Files
# Create the directories
$ ssh <user>@<master-ip> "mkdir -p /etc/keepalived"
$ ssh <user>@<backup-ip> "mkdir -p /etc/keepalived"
# Distribute the config files
$ scp target/configs/keepalived-master.conf <user>@<master-ip>:/etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-master
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}
vrrp_instance VI-kube-master {
    state MASTER
    interface {{VIP_IF}}
    virtual_router_id 68
    priority 100
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        {{MASTER_VIP}}
    }
    track_script {
        check_apiserver
    }
}
$ scp target/configs/keepalived-backup.conf <user>@<backup-ip>:/etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-backup
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}
vrrp_instance VI-kube-master {
    state BACKUP
    interface {{VIP_IF}}
    virtual_router_id 68
    priority 99
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        {{MASTER_VIP}}
    }
    track_script {
        check_apiserver
    }
}
# Distribute the health-check script (check-apiserver.sh, shown below)
$ scp target/scripts/check-apiserver.sh <user>@<master-ip>:/etc/keepalived/
$ scp target/scripts/check-apiserver.sh <user>@<backup-ip>:/etc/keepalived/
#!/bin/sh
netstat -ntlp|grep 6443 || exit 1
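The script exits non-zero when nothing is listening on port 6443, which makes the vrrp_script fail and, via weight -2, lowers the node's priority so the VIP fails over. An equivalent sketch that probes the apiserver's /healthz endpoint instead of just the listening port (assumes curl is installed on the masters):
#!/bin/sh
# Fail unless the local apiserver answers its health endpoint
curl -sfk https://localhost:6443/healthz || exit 1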
1.3 Start keepalived
# Start the service on both the master and the backup
$ systemctl enable keepalived && service keepalived start
# Check the status
$ service keepalived status
# Follow the logs
$ journalctl -f -u keepalived
# Check the virtual IP (it should appear on the MASTER node's interface)
$ ip a
2. Deploying the First Master Node
# Prepare the config file
$ vi ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# The Kubernetes version; it must match the installed kubeadm version, or startup will fail
kubernetesVersion: v1.18.5
# Image registry; k8s.gcr.io is unreachable from China without a proxy, so use a mirror (see http://mirror.azure.cn/help/gcr-proxy-cache.html)
imageRepository: registry.aliyuncs.com/google_containers
# Cluster name
clusterName: kubernetes
# Cluster-wide apiserver endpoint; if keepalived is in use, put the VIP here (this example uses the first master's IP, i.e. no VIP)
controlPlaneEndpoint: "192.168.11.100:6443"
networking:
  # Pod CIDR (matches the flannel default of 10.244.0.0/16)
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
  dnsDomain: cluster.local
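Optionally, the control-plane images can be pulled ahead of time so the init step itself is faster; this reads the same config file:
# Pre-pull control-plane images from the mirror configured above (optional)
$ kubeadm config images pull --config kubeadm-config.yaml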
# SSH to the first master and run kubeadm init (save the join commands printed at the end)
$ kubeadm init --config=kubeadm-config.yaml --upload-certs
kubeadm join 192.168.11.100:6443 --token cjv6wg.owc80yu1blftw8ya \
--discovery-token-ca-cert-hash sha256:5920dc6c6efd58d6647232b423b397b2f3ddd04faede38679807b1c7e2a1025b \
--control-plane --certificate-key 90f1528e53ac3f6f7a1a27d2b396896615832d860ea19df489bda7ce79b53b87
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.11.100:6443 --token cjv6wg.owc80yu1blftw8ya \
--discovery-token-ca-cert-hash sha256:5920dc6c6efd58d6647232b423b397b2f3ddd04faede38679807b1c7e2a1025b
# Copy the kubectl config (the init output also prints these commands)
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
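When working as root, pointing KUBECONFIG at the admin config is a common alternative to copying it:
# Alternative for the root user
$ export KUBECONFIG=/etc/kubernetes/admin.conf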
# Test kubectl
$ kubectl get pods --all-namespaces
3. Deploying the Network Plugin (flannel)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
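The flannel pods take a moment to come up (in kube-system for this era of the manifest; newer manifests use a kube-flannel namespace), and nodes switch from NotReady to Ready once the CNI is running:
# Watch the flannel pods start, then confirm the nodes go Ready
$ kubectl get pods --all-namespaces -w
$ kubectl get nodes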
4. Joining the Remaining Nodes
# Join additional master nodes
$ kubeadm join 192.168.11.100:6443 --token cjv6wg.owc80yu1blftw8ya \
--discovery-token-ca-cert-hash sha256:5920dc6c6efd58d6647232b423b397b2f3ddd04faede38679807b1c7e2a1025b \
--control-plane --certificate-key 90f1528e53ac3f6f7a1a27d2b396896615832d860ea19df489bda7ce79b53b87
# Join worker nodes
$ kubeadm join 192.168.11.100:6443 --token cjv6wg.owc80yu1blftw8ya \
--discovery-token-ca-cert-hash sha256:5920dc6c6efd58d6647232b423b397b2f3ddd04faede38679807b1c7e2a1025b
# List the nodes to verify they joined
$ kubectl get nodes
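The bootstrap token printed by kubeadm init expires after 24 hours. If nodes are added later, a fresh worker join command can be generated on a master (for an additional master, combine it with the certificate key reuploaded via `kubeadm init phase upload-certs --upload-certs`, as noted in the init output):
# Regenerate a join command for workers after the original token expires
$ kubeadm token create --print-join-command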
III. Cluster Availability Testing
1. Create an nginx DaemonSet
# Write the manifest
$ cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
# Note: apps/v1 is required on 1.18 (extensions/v1beta1 was removed) and needs an explicit selector
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
# Create the resources
$ kubectl create -f nginx-ds.yml
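To confirm the DaemonSet scheduled one pod per worker, a quick check (the app=nginx-ds label comes from the manifest above):
# One nginx-ds pod should be running on each worker node
$ kubectl get ds nginx-ds
$ kubectl get pods -l app=nginx-ds -o wide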
2. Check IP Connectivity
# Check pod IP connectivity across nodes
$ kubectl get pods -o wide
# Ping each pod IP from every node
$ ping <pod-ip>
# Check service reachability
$ kubectl get svc
# Access the service from every node (depending on kube-proxy mode this may not work from the node itself; test from inside a pod if needed)
$ curl <service-ip>:<port>
# Check the NodePort from every node (the port is the 3xxxx one shown by kubectl get svc)
$ curl <node-ip>:<node-port>
3. Check DNS
# Create an nginx pod
$ cat > pod-nginx.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
# Create the pod
$ kubectl create -f pod-nginx.yaml
# Open a shell in the pod to inspect DNS
$ kubectl exec nginx -i -t -- /bin/bash
# Check the DNS configuration
root@nginx:/# cat /etc/resolv.conf
# Check that the service name resolves (resolution is what matters; service IPs are virtual and may not answer ICMP)
root@nginx:/# ping nginx-ds
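If the nginx image lacks ping, a throwaway busybox pod is a handy alternative for testing resolution (busybox:1.28 is commonly suggested because nslookup is broken in some later busybox builds):
# One-off DNS lookup from inside the cluster
$ kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup nginx-ds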
IV. Deploying the Dashboard
# Download the manifest
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# Edit it to expose the service via NodePort
$ vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443 # added
  selector:
    k8s-app: kubernetes-dashboard
# Install
$ kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
# Confirm the pod and service status
$ kubectl get pod,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-c79c65bb7-bpnbq 1/1 Running 0 2m52s
pod/kubernetes-dashboard-56484d4c5-cthdm 1/1 Running 0 2m52s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.105.74.63 <none> 8000/TCP 2m52s
service/kubernetes-dashboard NodePort 10.98.84.244 <none> 443:30444/TCP 2m52s
# Create an admin user
$ vim adminuser.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Grant the admin role
$ kubectl create -f adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Note: if something goes wrong, delete the resources and recreate them
$ kubectl delete -f ***.yaml
# Open the Dashboard in a browser
Visit https://IP:30443 (the Dashboard serves a self-signed certificate, so the browser warning has to be accepted)
# Retrieve the login token
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-k4gdg
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: d116f560-15a2-45ca-930f-40f4fc12ce44
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjRyekdOTFRia1VNX1Q0eTVFM1RSandBNHN5S2xBX21VaW90YVRDMkdxMGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWw1d2JkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0YTY2NmNlOS1kMmM4LTQzYTYtOGQyZS0xNWJkY2JjZGMyYWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.MKnTnSMxF4hlWE2O0cf0XYIV9COQrY-XTZmd4XnC_U26hZXlPPkePpDXrKPj5VRdNIw_YdEmBXWrsh9rMRW7Pu0R3tM4MftxBwtupJwqcaH-_r_DPFVwOj0j3mdNUVpuMYWqVhbXZgte-aTdXBaUBj7PYgGoQ9Q_1yu_TTGu1YHQ3ImvJfMB0Z250k8ID_tLuz7JuWkMo6hvY5xfRXXldO5WfWsg9BtJJHeex4W7oNXSashBKPyA6NUxug9dgGQy3Tiz4itsGnEK5ue89-EmtYBH1wZHOzQ_l1OuROuSmTJhjBFobnHowbOibnvZwzepPT9E8H-QvfqAKfwSJ30XdA
# If the token is misplaced later, it can be pulled straight out of the admin-user secret
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') | grep token: