One-click "offline" deployment of a k8s v1.14.4 cluster in just 10 minutes
Readers have commented that deploying Kubernetes is too cumbersome, with configuration everywhere. Here is the remedy: a one-click installer.
One-click install command (requires a freshly installed CentOS 7 system with no other software on it and internet access). Cloning the git repository is not recommended, since it is about 1.5 GB; download the offline package directly instead.
Tested cloud environments:
- Single-node version: tested on Tencent Cloud servers
- Cluster version: tested on Tianyi (China Telecom) Cloud servers
- Cluster version: tested on Alibaba Cloud servers (set a Chinese character set beforehand, otherwise the menu interface shows garbled text instead of Chinese; for Alibaba Cloud debugging issues you can reach me on QQ 790160000)
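The Chinese character set mentioned above can be configured before running the installer by setting a zh_CN UTF-8 locale system-wide. On CentOS 7 this is a sketch of /etc/locale.conf (running `localectl set-locale LANG=zh_CN.UTF-8` has the same effect); the installer itself does not document the exact mechanism, so treat this as one plausible way to meet the requirement:

```ini
# /etc/locale.conf -- set a Chinese UTF-8 locale so the installer's menu renders correctly
LANG=zh_CN.UTF-8
```

Log out and back in (or reboot) so the new locale takes effect before starting the installer.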
Features:
- Fully offline, no internet dependency (freshly install the OS, configure an IP, and power on; nothing else needs to be installed)
- Builds a k8s cluster in one step on a genuinely stock CentOS 7.3–7.6 Minimal fresh install (only a uniform root password across the cluster is required)
- One-click install for a single machine or any number of servers (currently each node also runs an etcd member; separating etcd and making it customizable is planned)
- One-click batch addition/removal of node servers (newly added servers must have a clean OS and the same root password)
- IPVS load balancing; a shared internal yum repository is served on port 42344
- Graphical wizard-menu installation; Dashboard web management UI on port 42345
- Heketi + GlusterFS (distributed storage cluster) + Helm, all deployed offline in one step
- Default version is v1.14.4; package bundles for other versions can be swapped in. Cluster installs of 1–30 machines have been tested to work in one click.
- For clusters of 4 or more machines, the k8s data-persistence solution (GlusterFS + Heketi, minimum 3 storage nodes) is enabled by default. Installation is fully automatic; each storage node must have an empty spare disk (e.g. /dev/sdb, no partitioning needed; by default 40% is used for cluster persistence and 60% is mounted at /data on the host). When Heketi + GlusterFS is enabled, a PVC is created by default to verify dynamic storage provisioning.
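Other machines can consume the shared yum repository from the feature list above with a client-side repo file. A sketch follows; the repo id, master IP, and path are illustrative assumptions, not values confirmed by the installer:

```ini
# /etc/yum.repos.d/k8s-offline.repo -- hypothetical client config for the
# internal repo served on port 42344 (repo id, IP, and path are assumptions)
[k8s-offline]
name=K8s offline yum repo (shared by the installer)
baseurl=http://192.168.123.51:42344/
enabled=1
gpgcheck=0
```

After dropping the file in place, `yum makecache` on the client should list the shared repo.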
One-click install channel (via the Gitee mirror):
yum install -y wget unzip; rm -fv master.zip*; until wget https://gitee.com/q7104475/K8s/repository/archive/master.zip; do sleep 3; done; unzip master.zip && cd K8s/ && sh install.sh
============== master node health check: kube-apiserver kube-controller-manager kube-scheduler etcd kubelet kube-proxy docker ==============
192.168.123.51 | CHANGED | rc=0 >>
active active active active active active
============== node health check: etcd kubelet kube-proxy docker ==============
192.168.123.55 | CHANGED | rc=0 >>
active active active active
192.168.123.53 | CHANGED | rc=0 >>
active active active active
192.168.123.52 | CHANGED | rc=0 >>
active active active active
192.168.123.56 | CHANGED | rc=0 >>
active active active active
192.168.123.54 | CHANGED | rc=0 >>
active active active active
192.168.123.57 | CHANGED | rc=0 >>
active active active active
192.168.123.60 | CHANGED | rc=0 >>
active active active active
192.168.123.59 | CHANGED | rc=0 >>
active active active active
192.168.123.58 | CHANGED | rc=0 >>
active active active active
============== checking csr, cs, pvc, pv, storageclasses ==============
NAME                                                                                                 AGE   REQUESTOR           CONDITION
certificatesigningrequest.certificates.k8s.io/node-csr-8Dumqf_K9A_fQONoJOWpa_KgyZP3wzAe6Z5iGJIuKmk   16m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-AdU7hmZ-km7TX4VrWsV7iWpvIzhgO4ZPZaYRKgE8f1c   15m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-EWZEaK-iQem_08frMwTvJ7QdB8PTLFZh4GGECeKhrxc   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-G3AoLefbIeK6Al-sWW331YfjnIKpivLizekc8dd27N8   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-TXTxvkenqC9t5BytKOpu__8JoopEA4nijZQdMeoYj8c   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-dY4r6C5MzxNSMyumlSL0pJkMS8374onjL-O8rP7QbPw   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-drpTmdveqOdl7y2x5DWTOo8gcqhO1dewC5RAFqhnHmA   16m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-k1Wp5XvX3oOO0UeJO4gtZ1dJkK3BunceoCr-A4sRyfk   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-mE5hluSfa_ieJiskGS8iOFBy3TMUymDV8kW4bVTKwd4   16m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-mW-vtE_JrIC8DWVptPyadVZdH48PY_bXH4N0GknksMg   16m   kubelet-bootstrap   Approved,Issued
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-5               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-9               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-8               Healthy   {"health":"true"}
componentstatus/etcd-6               Healthy   {"health":"true"}
componentstatus/etcd-7               Healthy   {"health":"true"}
componentstatus/etcd-4               Healthy   {"health":"true"}
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/gluster1-test   Bound    pvc-c668a6fd-d612-11e9-983b-000c29c7746f   1Gi        RWX            gluster-heketi   6m13s
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS     REASON   AGE
persistentvolume/pvc-c668a6fd-d612-11e9-983b-000c29c7746f   1Gi        RWX            Delete           Bound    default/gluster1-test   gluster-heketi            5m59s
NAME                                         PROVISIONER               AGE
storageclass.storage.k8s.io/gluster-heketi   kubernetes.io/glusterfs   6m13s
============== checking node labels ==============
NAME             STATUS   ROLES    AGE   VERSION   LABELS
192.168.123.51   Ready    master   15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dashboard=master,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.51,kubernetes.io/os=linux,node-role.kubernetes.io/master=master
192.168.123.52   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.52,kubernetes.io/os=linux,node-role.kubernetes.io/node=node,storagenode=glusterfs
192.168.123.53   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.53,kubernetes.io/os=linux,node-role.kubernetes.io/node=node,storagenode=glusterfs
192.168.123.54   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.54,kubernetes.io/os=linux,node-role.kubernetes.io/node=node,storagenode=glusterfs
192.168.123.55   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.55,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.56   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.56,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.57   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.57,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.58   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.58,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.59   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.59,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.60   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.60,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
============== checking that coredns works ==============
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
pod "dns-test" deleted
============== checking pod status ==============
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
default       glusterfs-r67db                       1/1     Running   0          14m     192.168.123.52   192.168.123.52
default       glusterfs-smmrk                       1/1     Running   0          14m     192.168.123.54   192.168.123.54
default       glusterfs-zswmm                       1/1     Running   0          14m     192.168.123.53   192.168.123.53
default       heketi-74cc7bb45c-sq87r               1/1     Running   0          6m34s   172.17.21.4      192.168.123.51
kube-system   coredns-57656b67bb-m7sl2              1/1     Running   0          15m     172.17.38.2      192.168.123.54
kube-system   kubernetes-dashboard-5b5697d4-wtn2w   1/1     Running   0          14m     172.17.21.2      192.168.123.51
kube-system   tiller-deploy-7f4d76c4b6-78x55        1/1     Running   0          15m     172.17.21.3      192.168.123.51
============== checking node status ==============
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION            CONTAINER-RUNTIME
192.168.123.51   Ready    master   15m   v1.14.4   192.168.123.51                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.52   Ready    node     15m   v1.14.4   192.168.123.52                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.53   Ready    node     15m   v1.14.4   192.168.123.53                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.54   Ready    node     15m   v1.14.4   192.168.123.54                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.55   Ready    node     15m   v1.14.4   192.168.123.55                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.56   Ready    node     15m   v1.14.4   192.168.123.56                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.57   Ready    node     15m   v1.14.4   192.168.123.57                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.58   Ready    node     15m   v1.14.4   192.168.123.58                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.59   Ready    node     15m   v1.14.4   192.168.123.59                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
192.168.123.60   Ready    node     15m   v1.14.4   192.168.123.60                 CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64     docker://18.9.7
============== checking helm version ==============
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
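The gluster1-test PVC shown in the health-check output corresponds roughly to this manifest (a sketch reconstructed from that output; only the name, 1Gi size, RWX access mode, and the gluster-heketi StorageClass are confirmed there, the rest is standard PVC boilerplate):

```yaml
# Sketch of the verification PVC, reconstructed from the health-check output above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1-test
  namespace: default
spec:
  accessModes:
    - ReadWriteMany            # shown as RWX in the output
  storageClassName: gluster-heketi
  resources:
    requests:
      storage: 1Gi
```

A claim like this binding to a dynamically provisioned PV is what confirms that the GlusterFS + Heketi storage path works end to end.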
PS: currently single-master only; multi-master high availability is planned.
PS: code has been committed very frequently recently, so there may occasionally be bugs; feel free to report them in the group at any time.
[Warning] The system must have exactly one fixed IP address: one NIC, one IP. Never configure multiple IPs or multiple NICs.
[Warning] Only CentOS 7.3–7.6 is supported; CentOS 7.2 and earlier versions are not supported.
[Warning] The host IP must not be in the 10.0.0.0 network; avoid the 172.17.x.x and 10.0.0.x ranges, which the system uses internally (otherwise the installation will break).
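The network-range warning above can be turned into a quick pre-flight check before running the installer. This is a sketch: the sample IP is a placeholder, and the two conflicting prefixes are taken directly from the warning:

```shell
# Pre-flight sketch: flag a host IP that falls in the ranges the installer
# uses internally (10.0.0.x for cluster services, 172.17.x.x for pod/docker networking).
HOST_IP=192.168.123.51   # replace with this server's actual IP

case "$HOST_IP" in
  10.0.0.*|172.17.*)
    echo "conflict: $HOST_IP overlaps an internal range, pick another subnet" ;;
  *)
    echo "ok: $HOST_IP does not collide with internal ranges" ;;
esac
```

Run it on every machine before kicking off the cluster install; any "conflict" host needs re-addressing first.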
That's it: the deployment is complete. If you run into any problems, use the contact channels mentioned above.