Installing and Deploying a Kubernetes (K8s) Cluster with kubeadm
The following environment is used as an example:
Role | Hostname | IP Address | Memory | CPU
master | k8s-master | 192.168.242.10 | 4G | 2C
node | k8s-node01 | 192.168.242.11 | 4G | 2C
node | k8s-node02 | 192.168.242.12 | 4G | 2C
Prerequisites
1. Configure hostnames and IP addresses
On the master node:
[root@localhost ~]# hostnamectl set-hostname k8s-master
[root@localhost ~]# su -
On the node machines (one command per node):
[root@localhost ~]# hostnamectl set-hostname k8s-node01
[root@localhost ~]# hostnamectl set-hostname k8s-node02
2. Configure hosts resolution (required on all three machines)
[root@k8s-master ~]# vi /etc/hosts
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.242.10 k8s-master
192.168.242.11 k8s-node01
192.168.242.12 k8s-node02
3. Disable the firewall (required on all three machines)
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
4. Disable SELinux (required on all three machines)
[root@k8s-master ~]# sed -i '/SELINUX=/ cSELINUX=disabled' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
5. Disable swap (required on all three machines)
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/\/dev\/mapper\/centos-swap/ s/^/#/' /etc/fstab
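Optionally, confirm that swap is really off (the Swap line should show 0 total):
free -m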
6. Configure time synchronization (required on all three machines)
[root@k8s-master ~]# date
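The date command above only checks the current clock; the synchronization setup itself is not shown. A minimal sketch using chrony (assuming Internet access; ntp.aliyun.com is only an example server, not taken from the original notes):
yum -y install chrony
# optionally point /etc/chrony.conf at a nearby server such as ntp.aliyun.com
systemctl start chronyd
systemctl enable chronyd
chronyc sources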
7. Install common utilities (required on all three machines)
[root@k8s-master ~]# yum install bash-completion wget tree psmisc net-tools vim lrzsz dos2unix -y
8. Add bridge filtering (required on all three machines)
Add bridge filtering and address forwarding:
[root@k8s-master ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness = 0
> EOF
Load the br_netfilter module:
[root@k8s-master ~]# modprobe br_netfilter
Check that it is loaded:
[root@k8s-master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
Load the bridge-filter configuration:
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
Enable IPVS
1. Install ipset and ipvsadm:
[root@k8s-master ~]# yum -y install ipset ipvsadm
2. On all nodes, run the following to add the modules that need to be loaded:
[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
3. Make the script executable, run it, and check that the modules are loaded:
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139224  6 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
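Note: on kernels 4.19 and later, nf_conntrack_ipv4 has been merged into nf_conntrack, so on newer systems the last modprobe line and the lsmod filter would change accordingly, for example:
modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack
The CentOS 7 kernel used in this setup (3.10) still provides nf_conntrack_ipv4, so the script above works as written.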
Install docker-ce (see the earlier Docker notes); here a specific version is installed: yum -y install docker-ce-18.06.3.ce-3.el7.
After the installation, the docker-ce service configuration file needs to be adjusted.
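The exact change is covered in the earlier Docker notes; a minimal sketch of what is typically put in /etc/docker/daemon.json so that Docker's cgroup driver matches the systemd driver configured for kubelet below (this file content is an assumption, not taken from the original notes):
# sketch: align Docker's cgroup driver with kubelet's systemd driver (assumed change)
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
With Docker in place, configure the Kubernetes yum repository (Aliyun mirror) and install kubeadm, kubelet, and kubectl: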
[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master ~]# yum -y install kubeadm-1.18.6-0 kubelet-1.18.6-0 kubectl-1.18.6-0
Configure the kubelet cgroup driver by adding the following line:
[root@k8s-master ~]# vim /etc/sysconfig/kubelet
[root@k8s-master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
Set the kubelet service to start on boot:
[root@k8s-master ~]# systemctl enable kubelet.service
I. Initialize the master node (only done on the master node)
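Pulling the control-plane images is what makes init slow (noted below), so they can optionally be pre-pulled first; a sketch using the same repository and version as the init command that follows:
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.6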
Then run kubeadm init to initialize the cluster:
[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.242.10 \
> --token-ttl 0 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.18.6 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
The init output includes a --token and a --discovery-token-ca-cert-hash that need to be saved. Because the images have to be pulled, this step takes a while.
kubeadm join 192.168.242.10:6443 --token 0ugsv4.dv2kk6ru5iqkqbc9 \
--discovery-token-ca-cert-hash sha256:205ec1189ad156bc02c3e977def000ffafef7f6bc7814bf1acc6b79aa6b593d0
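If this join command is not saved, an equivalent one can be printed again later on the master with kubeadm's built-in subcommand:
kubeadm token create --print-join-command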
Configure kubectl for the current user, as instructed by the init output:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Verification:
[root@k8s-master ~]# ss -nutlp | grep :6443
tcp LISTEN 0 128 [::]:6443 [::]:* users:(("kube-apiserver",pid=30132,fd=5))
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
The Unhealthy status comes from the --port=0 flag in /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-scheduler.yaml.
Remove the --port=0 setting from both manifests, restart kubelet, and check again:
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
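Equivalently, the same edit can be made non-interactively; a sketch that assumes the flag appears as "- --port=0" in each manifest:
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml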
[root@k8s-master manifests]# systemctl restart kubelet
[root@k8s-master manifests]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
II. Deploy the network plugin
1. Import the calico images from the local tar archives:
[root@k8s-master ~]# docker load -i calico-cni.tar
[root@k8s-master ~]# docker load -i calico-node.tar
[root@k8s-master ~]# docker load -i pod2daemon-flexvol.tar
[root@k8s-master ~]# docker load -i kube-controllers.tar
2. Modify the calico resource manifest
[root@k8s-master ~]# vim calico.yml
# Add the following two lines after line 606. Calico's default interface auto-detection
# can pick the wrong NIC, so the physical interface calico should use is set explicitly:
607             - name: IP_AUTODETECTION_METHOD
608               value: "interface=ens.*"
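For orientation, these two lines belong in the env: list of the calico-node container in the DaemonSet; a rough sketch of the surrounding context (exact line numbers vary between calico versions):
        env:
          # tell calico which physical interface to use for node-to-node traffic
          - name: IP_AUTODETECTION_METHOD
            value: "interface=ens.*"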
3. Distribute the calico images to the node machines
[root@k8s-master ~]# scp calico* pod2daemon-flexvol.tar kube-controllers.tar k8s-node01:/root
[root@k8s-master ~]# scp calico* pod2daemon-flexvol.tar kube-controllers.tar k8s-node02:/root
Import the images on node01 and node02:
docker load -i calico-cni.tar
docker load -i calico-node.tar
docker load -i kube-controllers.tar
docker load -i pod2daemon-flexvol.tar
4. Deploy calico
[root@k8s-master ~]# kubectl apply -f calico.yml
Verify again
The master node is now Ready:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 14m v1.18.6
Query the kube-system namespace (-n specifies the namespace):
[root@k8s-master ~]# kubectl get pods -n kube-system | grep calico
calico-kube-controllers-7f969645d4-cqcvz 1/1 Running 0 69s
calico-node-dxwtf 1/1 Running 0 69s
III. Initialize the Kubernetes node machines (the following is done on both nodes)
Before joining, the kube-proxy and pause images need to be imported on the nodes; save them on the master and copy them over:
[root@k8s-master ~]# docker save -o pause.tar registry.aliyuncs.com/google_containers/pause
[root@k8s-master ~]# docker save -o kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy
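Note that docker save without a tag exports every locally present tag of that repository. If only the running version is wanted, the tag can be given explicitly; a sketch assuming the tags kubeadm pulled for this release (check with docker images first):
docker images | grep -E 'kube-proxy|pause'
docker save -o kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.18.6
docker save -o pause.tar registry.aliyuncs.com/google_containers/pause:3.2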
[root@k8s-master ~]# scp kube-proxy.tar pause.tar k8s-node01:/root
[root@k8s-master ~]# scp kube-proxy.tar pause.tar k8s-node02:/root
[root@k8s-node01 ~]# docker load -i pause.tar
[root@k8s-node01 ~]# docker load -i kube-proxy.tar
[root@k8s-node02 ~]# docker load -i pause.tar
[root@k8s-node02 ~]# docker load -i kube-proxy.tar
[root@k8s-node01 ~]# kubeadm join 192.168.242.10:6443 --token 0ugsv4.dv2kk6ru5iqkqbc9 \
> --discovery-token-ca-cert-hash sha256:205ec1189ad156bc02c3e977def000ffafef7f6bc7814bf1acc6b79aa6b593d0
Check from the master (the same kubeadm join command is also run on k8s-node02):
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h40m v1.18.6
k8s-node01 Ready <none> 4m58s v1.18.6
k8s-node02 Ready <none> 4m9s v1.18.6
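As a final check, every kube-system pod (calico, kube-proxy, coredns, and the control-plane components) should be Running, with one calico-node and one kube-proxy instance per node:
kubectl get pods -n kube-system -o wide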