I: Pre-deployment information

1: Server information

IP               HOSTNAME     CPU  MEM  DISK
192.168.109.100  master.k8s   1c   2G   50G
192.168.109.101  node01.k8s   2c   4G   100G
192.168.109.102  node02.k8s   2c   4G   100G

2: Packages required for the offline installation

Type    PackageName                                             DeployPosition
rpm     conntrack-tools-1.4.4-7.el7.x86_64.rpm                  ALL
rpm     libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm           ALL
rpm     libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm           ALL
rpm     libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm             ALL
rpm     cri-tools-1.19.0-0.x86_64.rpm                           ALL
rpm     kubeadm-1.21.10-0.x86_64.rpm                            ALL
rpm     kubectl-1.21.10-0.x86_64.rpm                            ALL
rpm     kubelet-1.21.10-0.x86_64.rpm                            ALL
rpm     kubernetes-cni-0.8.7-0.x86_64.rpm                       ALL
image   k8s.gcr.io/kube-apiserver:v1.21.11                      master
image   k8s.gcr.io/kube-scheduler:v1.21.11                      master
image   k8s.gcr.io/kube-controller-manager:v1.21.11             master
image   k8s.gcr.io/etcd:3.4.13-0                                master
image   k8s.gcr.io/coredns/coredns:v1.8.0                       ALL
image   k8s.gcr.io/pause:3.4.1                                  ALL
image   k8s.gcr.io/kube-proxy:v1.21.11                          ALL
image   rancher/mirrored-flannelcni-flannel:v0.17.0             ALL
image   rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1   ALL
docker  docker-20.10.9.tgz                                      ALL
file    kube-flannel.yml                                        master

II: Preparation

# Run as root; required on all nodes

1: Configure hosts

## 1. Configure hosts
cat >> /etc/hosts << EOF
192.168.109.100 master.k8s
192.168.109.101 node01.k8s
192.168.109.102 node02.k8s
EOF

2: Disable the swap partition

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
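
The sed pattern comments out every fstab line containing "swap" while leaving other mounts untouched. A quick dry run on a throwaway copy (the sample entries are hypothetical) shows the effect:

```shell
# Build a sample fstab (hypothetical entries) and apply the same sed edit
cat > /tmp/fstab.sample << 'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample   # the swap line is now prefixed with '#'
```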

3: Disable SELinux

setenforce 0                                        # takes effect immediately
sed -i 's/enforcing/disabled/' /etc/selinux/config  # persists across reboots

4: Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

5: Load the br_netfilter module

cat >> /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
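
The modules-load.d entry only takes effect at the next boot; to load the module immediately and confirm it is present:

```shell
modprobe br_netfilter        # load the module now, without rebooting
lsmod | grep br_netfilter    # verify it is loaded
```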

6: Adjust kernel parameters

cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

III: Offline Docker deployment

# Run as root; required on all nodes

1: Upload the deployment package and extract it

tar -xf docker-20.10.9.tgz
cp docker/* /usr/bin/

2: Configure the systemd unit

cat > /etc/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

TimeoutStartSec=0
Delegate=yes
KillMode=process

Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

3: Tune the Docker configuration and set the cgroup driver to systemd (required by Kubernetes)

mkdir -p /etc/docker
cat >/etc/docker/daemon.json<< EOF
{
  "data-root":"/app/docker/DataRoot",
  "log-driver":"json-file",
  "log-opts":{"max-size":"50m","max-file":"3"},
  "exec-opts":["native.cgroupdriver=systemd"],
  "registry-mirrors": [
      "https://hub.atomgit.com",
      "https://docker.itelyou.cf",
      "https://huecker.io"

  ]
}
EOF

4: Enable start on boot

systemctl daemon-reload          # reload unit files
systemctl start docker           # start Docker
systemctl enable docker.service  # enable at boot
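
With Docker running, it is worth confirming that the cgroup driver really is systemd, since a kubelet/Docker cgroup-driver mismatch is a common cause of kubeadm init failures:

```shell
docker info --format '{{.CgroupDriver}}'   # should print: systemd
```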

IV: Install Kubernetes

# Run as root; required on all nodes

1: Install dependencies

yum install socat-1.7.3.2-2.el7.x86_64  # this package is provided on the OS installation ISO
rpm -ivh libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
rpm -ivh libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
rpm -ivh libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
rpm -ivh conntrack-tools-1.4.4-7.el7.x86_64.rpm

2: Install kubelet, kubeadm, and kubectl

rpm -ivh kubelet-1.21.10-0.x86_64.rpm kubernetes-cni-0.8.7-0.x86_64.rpm
rpm -ivh kubectl-1.21.10-0.x86_64.rpm
rpm -ivh kubeadm-1.21.10-0.x86_64.rpm cri-tools-1.19.0-0.x86_64.rpm
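
Before moving on, a quick check that the expected versions were installed:

```shell
kubeadm version -o short    # expected: v1.21.10
kubectl version --client --short
kubelet --version
```
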
3: Deploy the Kubernetes cluster with kubeadm

3.1: List the images required by kubeadm init

kubeadm config images list
#k8s.gcr.io/kube-apiserver:v1.21.14
#k8s.gcr.io/kube-controller-manager:v1.21.14
#k8s.gcr.io/kube-scheduler:v1.21.14
#k8s.gcr.io/kube-proxy:v1.21.14
#k8s.gcr.io/pause:3.4.1
#k8s.gcr.io/etcd:3.4.13-0
#k8s.gcr.io/coredns/coredns:v1.8.0

3.2: On the master node, upload the pre-downloaded image archives to the server and load them all.

docker load -i kube-apiserver_v1.21.11.tar.gz 
docker load -i kube-controller-manager_v1.21.11.tar.gz 
docker load -i kube-scheduler_v1.21.11.tar.gz 
docker load -i kube-proxy_v1.21.11.tar.gz 
docker load -i pause_3.4.1.tar.gz 
docker load -i etcd_3.4.13-0.tar.gz 
docker load -i coredns_v1.8.0.tar.gz 
docker load -i flannel.tar.gz 
docker load -i flannel-cni-plugin.tar.gz

3.3: Retag the images (the v1.21.11 archives are retagged to the v1.21.14 names that kubeadm expects)

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.11  k8s.gcr.io/kube-apiserver:v1.21.14
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.11  k8s.gcr.io/kube-scheduler:v1.21.14
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.11  k8s.gcr.io/kube-controller-manager:v1.21.14
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.11  k8s.gcr.io/kube-proxy:v1.21.14
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1   k8s.gcr.io/pause:3.4.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0  k8s.gcr.io/coredns/coredns:v1.8.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0  k8s.gcr.io/etcd:3.4.13-0
4: Initialize the cluster

kubeadm init \
        --apiserver-advertise-address=192.168.109.100 \
        --kubernetes-version v1.21.14 \
        --service-cidr=10.1.0.0/16 \
        --pod-network-cidr=10.244.0.0/16

Note: the output will contain lines similar to the following. Record them; they are needed when new nodes join the cluster.

kubeadm join 192.168.109.100:6443 --token u8ij9v.l6dq4qbc7d4rj3a2 \
	--discovery-token-ca-cert-hash sha256:b4fc35eb96a6961555a97bd327e89fb0e6432e1df06ba0c040870a2585c05c58
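
The token and hash above are specific to one run. If they are lost, the full join command can be regenerated on the master with `kubeadm token create --print-join-command`; the hash itself is just the SHA-256 digest of the cluster CA public key and can be recomputed from the CA certificate:

```shell
# Recompute --discovery-token-ca-cert-hash from the kubeadm CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```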

Error 1: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Cause: the file /proc/sys/net/bridge/bridge-nf-call-iptables is not set to 1.

Fix:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

Error 2: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix:

Temporary:
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

Permanent:
ls -l /etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

Error 3: checking the pod status shows the coredns pods stuck in Pending

Fix: install a network plugin.

Download the plugin:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Deploy the plugin:
kubectl apply -f kube-flannel.yml

Check the pod status again:
kubectl get pods -A
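
Rather than polling manually, kubectl can block until the flannel pods report Ready (assuming the manifest deploys into kube-system with the app=flannel pod label, as the flannel v0.17 manifest does; adjust if your manifest differs):

```shell
# Wait up to two minutes for the flannel DaemonSet pods to become Ready
kubectl -n kube-system wait --for=condition=Ready pod -l app=flannel --timeout=120s
```
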
5: Deploy the worker nodes. They require the packages below; load and retag them the same way as on the master.

PackageName                                      Tag
k8s.gcr.io/kube-proxy                            v1.21.14
k8s.gcr.io/pause                                 3.4.1
rancher/mirrored-flannelcni-flannel              v0.17.0
rancher/mirrored-flannelcni-flannel-cni-plugin   v1.0.1

5.1: Load the image archives

docker load -i kube-proxy_v1.21.11.tar.gz 
docker load -i pause_3.4.1.tar.gz 
docker load -i flannel.tar.gz 
docker load -i flannel-cni-plugin.tar.gz

5.2: Retag the images

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.11  k8s.gcr.io/kube-proxy:v1.21.14
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1   k8s.gcr.io/pause:3.4.1

5.3: Join the node to the cluster

kubeadm join 192.168.109.100:6443 --token so6k49.p7es8ynez1jxslre \
        --discovery-token-ca-cert-hash sha256:b4e630a31f3cf813dad4d9ab1bfa0bae67cada7afefff10b94f2cf4b799aa5fd

5.4: Check the nodes (run on the master)

kubectl get node

V: Deploy the dashboard

1: Run the command to deploy the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

2: Check the deployment status

kubectl get pod -n kubernetes-dashboard

3: Check detailed status

kubectl get pods --namespace=kubernetes-dashboard -o wide

4: Switch to NodePort access. The default is access through the API Server, which is cumbersome; with NodePort the dashboard can be reached directly via the VM's IP address.
kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
The current TYPE is ClusterIP:

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.101.250.179   <none>        443/TCP   12m

5: Edit the service configuration and change ClusterIP to NodePort. Run the following command to edit it.

kubectl --namespace=kubernetes-dashboard edit service kubernetes-dashboard

Find the following section and change the type to NodePort:

  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort         # change to NodePort access
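
As a non-interactive alternative to kubectl edit, the same change can be applied with kubectl patch:

```shell
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort"}}'
```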

6: Check again; the type has changed to NodePort

kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard

7: Access now goes through NodePort, on port 31454 in this example, so the dashboard can be reached at https://192.168.109.100:31454. Before logging in, a user and role must be configured.

7.1: Configure the user: run the following command to create admin-user.yaml

cat > admin-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

7.2: Create the user

kubectl create -f admin-user.yaml

7.3: Create the role-binding definition role-binding.yaml

cat > role-binding.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

7.4: Apply the role binding

kubectl create -f role-binding.yaml

7.5: Get the token

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

7.6: Log in to the dashboard at https://192.168.109.100:31454

Copy the token and paste it into the token field on the login page (watch for extra or missing characters when copying).
