【Preparation】

Three hosts:

master01 192.168.137.20

master02 192.168.137.21

node01 192.168.137.22

【Installation and Deployment】

I. Set the system hostnames and configure mutual resolution via the hosts file

Run the matching command on its own host:

hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname node01

Then log in to each of the three hosts and add the resolution entries:

echo '192.168.137.20 master01' >> /etc/hosts
echo '192.168.137.21 master02' >> /etc/hosts
echo '192.168.137.22 node01' >> /etc/hosts
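As a variation, the appends can be made idempotent so that re-running the setup does not duplicate lines in /etc/hosts. `ensure_host_entry` below is a hypothetical helper introduced here, not part of the original steps:

```shell
#!/bin/sh
# Hypothetical helper: append "<ip> <hostname>" to a hosts file only if the
# exact entry is not already present, so the setup can be re-run safely.
ensure_host_entry() {
    entry="$1 $2"
    file="${3:-/etc/hosts}"
    grep -qxF "$entry" "$file" 2>/dev/null || echo "$entry" >> "$file"
}

# Usage on each of the three hosts:
# ensure_host_entry 192.168.137.20 master01
# ensure_host_entry 192.168.137.21 master02
# ensure_host_entry 192.168.137.22 node01
```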

II. Install dependency packages (masters and node)

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

III. Set the firewall to iptables with an empty rule set

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables   && iptables -F && service iptables save

IV. Disable swap and permanently disable SELinux (masters and node)

swapoff -a && sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
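Note that the sed above comments every line containing the word "swap" and, on a second run, prefixes already-commented lines with another `#`. A slightly stricter variant (a sketch; `disable_swap_in_fstab` is a name introduced here) touches only active entries and is safe to repeat:

```shell
#!/bin/sh
# Sketch: comment out only active (not already commented) fstab lines whose
# fields include the word "swap"; running it twice changes nothing further.
disable_swap_in_fstab() {
    sed -i '/^[^#].*\bswap\b/s/^/#/' "${1:-/etc/fstab}"
}

# disable_swap_in_fstab    # operates on /etc/fstab by default
```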

V. Adjust kernel parameters (masters and node)

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
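To verify the parameters took effect, the conf file can be compared against the live values under /proc/sys. `check_sysctl_conf` is a helper sketched here, not a standard tool; the net.bridge.* keys only exist once the br_netfilter module is loaded (done in step X below), so they may report as missing until then:

```shell
#!/bin/sh
# Sketch: compare each key=value in a sysctl conf file with the running
# kernel value, reading /proc/sys directly.
check_sysctl_conf() {
    while IFS='=' read -r key want; do
        case "$key" in ''|'#'*) continue ;; esac
        path="/proc/sys/$(echo "$key" | tr '.' '/')"
        if [ ! -r "$path" ]; then
            echo "MISSING  $key (module not loaded?)"
        elif [ "$(cat "$path")" != "$want" ]; then
            echo "MISMATCH $key: want=$want have=$(cat "$path")"
        fi
    done < "$1"
}

if [ -f /etc/sysctl.d/kubernetes.conf ]; then
    check_sysctl_conf /etc/sysctl.d/kubernetes.conf
fi
```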


VI. Set the system time zone (masters and node)

# Set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai

# Keep the hardware clock in UTC
timedatectl set-local-rtc 0

# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

 

VII. Stop services the system does not need (masters and node)

systemctl stop postfix && systemctl disable postfix

VIII. Configure rsyslogd and systemd journald (masters and node)

mkdir -p /var/log/journal
mkdir -p /etc/systemd/journald.conf.d

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Cap total disk usage at 10G
SystemMaxUse=10G

# Cap a single log file at 200M
SystemMaxFileSize=200M

# Keep logs for two weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF



Restart the systemd-journald service:
systemctl restart systemd-journald

IX. Upgrade the kernel to 4.4

The stock 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable, so upgrade it via the ELRepo repository:

# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

# After installation, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!

# yum --enablerepo=elrepo-kernel install -y kernel-lt

# Set the machine to boot from the new kernel

# grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
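The exact menuentry title varies with the kernel-lt build, so it helps to list the titles before running grub2-set-default, and to confirm the running kernel after the reboot. A quick check, assuming the standard grub2 paths:

```shell
#!/bin/sh
# List GRUB menu entry titles so the exact string for grub2-set-default
# can be copied verbatim:
if [ -f /boot/grub2/grub.cfg ]; then
    awk -F"'" '/^menuentry /{print $2}' /boot/grub2/grub.cfg
fi

# After `reboot`, confirm the new kernel is active:
uname -r    # should report a 4.4.x version
```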

X. Prerequisites for enabling IPVS mode in kube-proxy (masters and node)

modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

XI. Install Docker

For details, see the separate Docker installation guide: https://blog.csdn.net/hedao0515/article/details/125629426

XII. Install kubeadm

1. Configure kubernetes.repo on the master nodes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF

2. Install version 1.15.0 from the yum repo on the master nodes

First uninstall any preinstalled versions:

rpm -qa | grep -E 'kubeadm|kubectl|kubelet'
rpm -e --nodeps kubeadm-1.24.2-0.x86_64
rpm -e --nodeps kubectl-1.24.2-0.x86_64
rpm -e --nodeps kubelet-1.24.2-0.x86_64

yum install -y kubeadm-1.15.0 kubectl-1.15.0 kubelet-1.15.0

3. Enable the kubelet service on the master nodes

systemctl enable kubelet.service

4. Initialize the primary master

kubeadm config print init-defaults > kubeadm-config.yaml

# vim kubeadm-config.yaml   # edit it as follows

localAPIEndpoint:
  advertiseAddress: 192.168.137.20
kubernetesVersion: v1.15.0
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Here advertiseAddress and kubernetesVersion are changed from the generated defaults; podSubnet and the trailing KubeProxyConfiguration document are additions.

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

If initialization fails because the images cannot be pulled:

kubeadm pulls its images from k8s.gcr.io, which is not reachable from mainland China without a proxy. The equivalent images can be pulled from https://hub.docker.com/r/mirrorgooglecontainers/ instead:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.15.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.0
docker pull mirrorgooglecontainers/kube-proxy:v1.15.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

Retag the images:

docker image tag mirrorgooglecontainers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
docker image tag mirrorgooglecontainers/kube-apiserver-amd64:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0
docker image tag mirrorgooglecontainers/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0
docker image tag mirrorgooglecontainers/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0
docker image tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker image tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker image tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

Then remove the mirrorgooglecontainers images:

docker image rm mirrorgooglecontainers/kube-apiserver-amd64:v1.15.0
docker image rm mirrorgooglecontainers/kube-controller-manager:v1.15.0
docker image rm mirrorgooglecontainers/kube-scheduler:v1.15.0
docker image rm mirrorgooglecontainers/kube-proxy:v1.15.0
docker image rm mirrorgooglecontainers/pause:3.1
docker image rm mirrorgooglecontainers/etcd:3.3.10
docker image rm coredns/coredns:1.3.1
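The pull/tag/remove sequence above can also be written as a single loop. This is a sketch: the image list and the -amd64 suffix handling mirror the manual commands, and the `command -v docker` guard is only there so the script is harmless to dry-run on a machine without Docker:

```shell
#!/bin/sh
# Map a mirror image name to its k8s.gcr.io name (the -amd64 suffix used by
# the mirror is dropped in the official name).
target_name() {
    echo "k8s.gcr.io/$(echo "$1" | sed 's/-amd64//')"
}

images="kube-apiserver-amd64:v1.15.0 kube-controller-manager:v1.15.0 \
kube-scheduler:v1.15.0 kube-proxy:v1.15.0 pause:3.1 etcd:3.3.10"

if command -v docker >/dev/null 2>&1; then
    for img in $images; do
        docker pull "mirrorgooglecontainers/$img"
        docker image tag "mirrorgooglecontainers/$img" "$(target_name "$img")"
        docker image rm "mirrorgooglecontainers/$img"
    done
    # coredns lives in its own repository:
    docker pull coredns/coredns:1.3.1
    docker image tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
    docker image rm coredns/coredns:1.3.1
fi
```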

The v1.15.0 images can then be saved, copied to the other nodes, and imported there. To avoid the imported images losing their tags, save them by repository:tag rather than by image ID (saving by ID drops the repository and tag, so the image imports as <none>:<none>):

# docker save <repository>:<tag> > /path/file.tar
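Saving every k8s.gcr.io image in one pass can be sketched like this; `tar_name` is a helper introduced here only to derive a filesystem-safe filename, and the docker guard keeps the sketch harmless where Docker is absent:

```shell
#!/bin/sh
# Derive a tar filename from an image reference, e.g.
# k8s.gcr.io/kube-proxy:v1.15.0 -> k8s.gcr.io_kube-proxy_v1.15.0.tar
tar_name() {
    echo "$(echo "$1" | tr '/:' '__').tar"
}

if command -v docker >/dev/null 2>&1; then
    # Save by repository:tag so the tag survives the later `docker load`.
    for img in $(docker images 'k8s.gcr.io/*' --format '{{.Repository}}:{{.Tag}}'); do
        docker save "$img" -o "$(tar_name "$img")"
    done
fi

# On each receiving node, after copying the tarballs over:
# for f in *.tar; do docker load -i "$f"; done
```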

Copy the image tarballs to the other nodes, run the import script on each node, and check the image list.

Then run the initialization on the master again.

The other master and the remaining worker nodes can then join using the command recorded in the init log.

Check the node status (kubectl get nodes).

7. Deploy the flannel network

Move the files generated by kubeadm into an install-k8s directory, creating a few subdirectories first:

mkdir -p /usr/local/install-k8s/core
mkdir -p /usr/local/install-k8s/plugin/flannel
mv kubeadm-config.yaml kubeadm-init.log  /usr/local/install-k8s/core

cd /usr/local/install-k8s/plugin/flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download times out, add a hosts entry for raw.githubusercontent.com on the master and retry:

echo '199.232.68.133 raw.githubusercontent.com' >> /etc/hosts

Create the flannel resources:

# kubectl create -f kube-flannel.yml

Check that the flannel pods are running, then check the node status again.

A flannel.1 interface now appears in each host's network configuration.

Joining the remaining nodes (with the docker images already imported and kubelet enabled via systemctl enable kubelet.service):

The log from kubeadm init (kubeadm-init.log above) records the command for adding nodes:

kubeadm join 192.168.137.20:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ef9afe61635f3b122e0d729017dd23fe100d5ee12a3af812e8b57596161f65db
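Bootstrap tokens expire after 24 hours by default, so a join command copied from an old log may stop working. A fresh one can be printed on master01 with `kubeadm token create --print-join-command`, and the CA cert hash can be recomputed with the openssl pipeline from the kubeadm documentation (wrapped here in a `ca_cert_hash` helper introduced for illustration):

```shell
#!/bin/sh
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA
# certificate: sha256 of the DER-encoded public key.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
}

# On master01:
# kubeadm token create --print-join-command   # prints a ready-to-use join line
# ca_cert_hash /etc/kubernetes/pki/ca.crt     # prints the sha256:... hash value
```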

Run the same join command on master02 and node01.

If systemctl enable kubelet.service was not run first, the join fails with an error.

If a previously installed version was not removed, an error is reported as well.

Check the node information again (kubectl get nodes).

[root@k8s-master core]# kubectl get pod -n kube-system -o wide   # check network initialization
