K8s Installation and Deployment (kubeadm, 1 Master + 2 Workers)
This guide installs and deploys Kubernetes with kubeadm.
1. Prepare three servers
Server IP | Node name | Role
192.168.190.200 | master | control plane (master)
192.168.190.201 | node1 | worker
192.168.190.202 | node2 | worker
2. Host initialization (all hosts)
2.1 Set each hostname according to the plan
# On 192.168.190.200
hostnamectl set-hostname master
# On 192.168.190.201
hostnamectl set-hostname node1
# On 192.168.190.202
hostnamectl set-hostname node2
Then add hostname-to-IP mappings on all three hosts (these must match the table in step 1):
vim /etc/hosts
192.168.190.200 master
192.168.190.201 node1
192.168.190.202 node2
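An optional sanity check that the mappings resolve from each host:
ping -c 1 master
ping -c 1 node1
ping -c 1 node2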
2.2 Time synchronization (skip this step if the servers' clocks are already in sync)
# Start the chronyd service
systemctl start chronyd
systemctl enable chronyd
date
💡 Tip: before running these commands, check whether chrony is installed with rpm -qa | grep chrony; if it is missing, install it with yum install chrony.
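To confirm the host is actually syncing, chrony's client can list its time sources (an optional check; output depends on the configured NTP servers):
chronyc sources -v   # '*' marks the source currently synced to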
2.3 Install Docker (all three servers)
1 Install the yum utilities: yum install -y yum-utils
2 Switch to the Aliyun repo: yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3 Check the kernel version (uname -r): 3.10 or above is required; CentOS 7 or later is recommended
4 List the installable docker-ce packages:
yum list docker-ce --showduplicates | sort -r
5 Install and start Docker:
yum install docker-ce-20.10.21-3.el7 -y
systemctl start docker
systemctl enable docker
Running yum install docker-ce-20.10.21-3.el7 -y failed here with the following errors:
---> Package docker-compose-plugin.x86_64 0:2.16.0-1.el7 will be installed
---> Package docker-scan-plugin.x86_64 0:0.23.0-3.el7 will be installed
--> Finished Dependency Resolution
Error: Package: docker-ce-rootless-extras-23.0.1-1.el7.x86_64 (docker-ce-stable)
       Requires: fuse-overlayfs >= 0.7
Error: Package: docker-ce-rootless-extras-23.0.1-1.el7.x86_64 (docker-ce-stable)
       Requires: slirp4netns >= 0.4
Error: Package: containerd.io-1.6.18-3.1.el7.x86_64 (docker-ce-stable)
       Requires: container-selinux >= 2:2.74
Error: Package: 3:docker-ce-20.10.21-3.el7.x86_64 (docker-ce-stable)
       Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
To fix the errors, run the commands below so the CentOS base/extras repos (which provide these dependencies) become reachable. Reference: the CSDN post “RedHat7使用阿里云镜像建立元数据缓存时404解决_yum makecache aliyun 404”.
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache   # this fails at first (404); continue with the commands below
sed -i 's/\$releasever/7/' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's#$basearch#x86_64#g' /etc/yum.repos.d/CentOS-Base.repo
yum clean all
yum makecache
Finally, re-run the install and start commands, which now succeed:
yum install docker-ce-20.10.21-3.el7 -y
systemctl start docker
systemctl enable docker
Check the installed version: docker version
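Step 6 below requires Docker's cgroup driver to be systemd for kubelet to start cleanly; checking the current driver now can save a failed kubeadm init later:
docker info --format '{{.CgroupDriver}}'   # prints cgroupfs or systemd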
2.4 Disable the firewall and SELinux, and disable swap (all three servers)
### Disable the firewall
systemctl disable --now firewalld
### Disable SELinux so containers can read the host filesystem
getenforce  # check the current mode
setenforce 0  # disable temporarily
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # disable permanently
### Disable swap
swapoff -a  # disable swap temporarily
sed -i 's/.*swap.*/#&/' /etc/fstab  # disable swap permanently
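An optional check that swap is really off:
free -h        # the Swap line should read 0B
swapon --show  # should print nothing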
3. Enable bridge traffic filtering and IP forwarding (all three servers)
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Then apply the settings
sysctl --system
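The bridge-nf-call settings only take effect while the br_netfilter kernel module is loaded, which is not guaranteed on a fresh CentOS 7 install; loading it explicitly and persisting it across reboots is a safe extra step:
modprobe br_netfilter
lsmod | grep br_netfilter                           # verify it is loaded
echo "br_netfilter" > /etc/modules-load.d/k8s.conf  # reload on boot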
4. Switch the Kubernetes package repo to a domestic mirror (all three servers)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
5. Install pinned versions of kubeadm, kubelet, and kubectl (all three servers)
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5
# Enable kubelet at boot
systemctl enable kubelet
systemctl start kubelet
Note: until kubeadm init (or join) has run, kubelet restarts in a crash loop; that is expected at this stage.
Check the installed versions:
kubelet --version
kubeadm version
kubectl version
6. Deploy Kubernetes
💡 Tip: the following runs only on the master node. When initialization completes it prints a join command; save it, it is needed in step 11.
kubeadm init \
--apiserver-advertise-address=192.168.190.200 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.5 \
--service-cidr=10.1.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
# 192.168.190.200 is the master host's IP
--apiserver-advertise-address  # cluster advertise address (the master's IP)
--image-repository  # the default registry k8s.gcr.io is unreachable from mainland China, so point at the Aliyun mirror
--kubernetes-version  # the K8s version, matching what was installed above
--service-cidr  # the cluster-internal virtual network for Services, the unified access entry to Pods
--pod-network-cidr  # the Pod network; must match the CNI component's YAML deployed below
The command above may fail with:
The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz"
# To fix this, apply the change below on every server that does not yet have it. This is also where to configure a domestic Docker registry mirror, otherwise image pulls will be slow.
# Add the following
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
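If you also want the domestic registry mirror mentioned above, both settings can live in the same file. A sketch of a combined /etc/docker/daemon.json (the mirror URL is a placeholder; substitute one you can reach):
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://your-mirror.example.com"]
}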
# Restart docker
systemctl restart docker
# Re-initialize
kubeadm reset  # reset first
# then re-run the kubeadm init command above
On success, initialization prints a kubeadm join command with a token and CA cert hash (needed in step 11).
If you lose it, run kubeadm token create --print-join-command on the master to print a fresh join command.
7. Configure the kubectl command-line tool on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
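kubectl should now reach the cluster. Until the Pod network from step 9 is applied, the master usually reports NotReady; that is normal:
kubectl get nodes   # master shows NotReady until the CNI plugin is installed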
8. Download the flannel network manifest on the master node
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Below is the content of the downloaded kube-flannel.yml; if the download fails, save this copy. Make sure the Network address in the file matches the value configured in step 6:
--pod-network-cidr=10.244.0.0/16
---
kind: Namespace
apiVersion: v1
metadata:
name: kube-flannel
labels:
pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-flannel
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-flannel
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
#image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
#image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
9. Apply the flannel network manifest on the master node
kubectl apply -f kube-flannel.yml
On success, kubectl reports each resource (namespace, ClusterRole, ClusterRoleBinding, ServiceAccount, ConfigMap, DaemonSet) as created.
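You can then watch the flannel DaemonSet come up; each node turns Ready once its flannel pod is Running:
kubectl get pods -n kube-flannel -o wide   # one kube-flannel-ds pod per node, eventually Running
kubectl get nodes                          # nodes flip to Ready once flannel is up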
10. Pre-pull the flannel image on node1 and node2
The tag must match the image referenced in kube-flannel.yml; with the manifest above that is docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2 (older revisions of this guide pulled quay.io/coreos/flannel:v0.11.0-amd64, which only fits the older manifest).
docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
11. Run the following on node1 and node2 to join both nodes to the Kubernetes cluster
Use your own values here; running kubeadm token create --print-join-command on the master prints the current join command.
kubeadm join 192.168.190.200:6443 --token 7cmfxu.smm6jqlx0llsym4p --discovery-token-ca-cert-hash sha256:4af0d23c767f2a23605747963d8d0de78082bbd8147e261469b6fb75fe136a63
If the join fails on a node, you can append --ignore-preflight-errors=all.
On success, kubeadm reports that the node has joined the cluster.
You may also hit the same kubelet error here as during kubeadm init in step 6; the fix is identical: adjust the Docker daemon config.
Fixing the error: It seems like the kubelet isn't running or healthy
Edit the Docker config file /etc/docker/daemon.json and add the following:
"exec-opts": [
"native.cgroupdriver=systemd"
],
# Restart docker
systemctl restart docker
# Restart kubelet (optional)
systemctl restart kubelet
# Run the join command again
kubeadm join 192.168.190.200:6443 --token 7cmfxu.smm6jqlx0llsym4p --discovery-token-ca-cert-hash sha256:4af0d23c767f2a23605747963d8d0de78082bbd8147e261469b6fb75fe136a63
12. View the cluster nodes on the master (installation complete)
kubectl get nodes
View detailed node information:
kubectl get nodes -o wide
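As a final health check, every pod in the system namespaces should reach Running:
kubectl get pods -A   # pods in kube-system and kube-flannel should all be Running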