Create three CentOS 7.5 Linux VMs for deploying k8s; all subsequent software installation happens on this k8s cluster.
Reference articles:
http://docs.kubernetes.org.cn/457.html
http://docs.kubernetes.org.cn/459.html
1. Download VMware 16 Pro and the CentOS 7.9 image
Use bridged networking in VMware and configure three Linux machines with static IPs:
* 192.168.192.130 master
* 192.168.192.131 slaver1
* 192.168.192.132 slaver2
Set up passwordless SSH login among the three machines.
Permanently disable the firewall.
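The passwordless-login step can be sketched as follows. This is a minimal sketch assuming the root user and the IPs above; the key is generated into a scratch directory here for illustration, whereas on the real master you would accept the default ~/.ssh/id_rsa path.

```shell
# Generate a key pair without a passphrase (scratch directory for illustration).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"
# On the real master you would then push the public key to the other two nodes:
#   ssh-copy-id root@192.168.192.131
#   ssh-copy-id root@192.168.192.132
```

ssh-copy-id prompts for the remote password once; afterwards `ssh root@192.168.192.131` logs in without a password.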
2. Install Docker 1.12
The official documentation recommends version 1.12.
I originally wanted to install docker-engine 1.12 via yum, but I could not reach Docker's repository, so I used the Aliyun mirror instead.
Note: all three machines need their yum configuration updated. Downloading on each one from the internet is unnecessary; you can connect just one machine to the Aliyun repository and point the other two at that machine as their yum source.
1. Use the Aliyun yum source
#cd /etc/yum.repos.d/
This directory holds the system's yum repository configuration.
#wget http://mirrors.aliyun.com/repo/Centos-7.repo
Download the Aliyun yum repository configuration.
#mv CentOS-Base.repo CentOS-Base.repo.bak
Back up the previous configuration.
#mv Centos-7.repo CentOS-Base.repo
Switch to the Aliyun configuration.
#yum clean all && yum makecache
Rebuild the yum cache (note the double &&; a single & would background the first command instead of chaining them).
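The backup-and-replace dance above can be rehearsed safely in a scratch directory before touching /etc/yum.repos.d; the file contents below are placeholders, not real repo configs.

```shell
DIR=$(mktemp -d)                                  # stands in for /etc/yum.repos.d
echo 'old base config' > "$DIR/CentOS-Base.repo"
echo 'aliyun config' > "$DIR/Centos-7.repo"       # what wget would have fetched
mv "$DIR/CentOS-Base.repo" "$DIR/CentOS-Base.repo.bak"  # keep a backup
mv "$DIR/Centos-7.repo" "$DIR/CentOS-Base.repo"         # switch to Aliyun
ls "$DIR"
```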
2. Install Docker
Query the Docker versions in the repository; it should be this one:
# yum list docker --showduplicates | sort -r
docker.x86_64 2:1.13.1-208.git7d71120.el7_9 extras
...
* base: mirrors.aliyun.com
Install:
#yum install -y docker
Check the version:
# docker --version
Docker version 1.13.1, build 7d71120/1.13.1
Enable Docker on boot:
#systemctl enable docker
Start Docker:
#systemctl start docker
3. Install kubectl
Download the latest release:
#curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
Make the kubectl binary executable:
#chmod +x ./kubectl
Move the binary onto the PATH:
#sudo mv ./kubectl /usr/local/bin/kubectl
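It is also worth verifying the download before installing it; the Kubernetes release site publishes a checksum alongside each binary. The check itself works on any file, so it is demonstrated here on a placeholder (/tmp/kubectl.demo is a stand-in, not the real binary).

```shell
# Create a stand-in file and record its checksum, as the published .sha256 would.
echo 'placeholder-kubectl-binary' > /tmp/kubectl.demo
sha256sum /tmp/kubectl.demo | awk '{print $1}' > /tmp/kubectl.demo.sha256
# Verify: sha256sum --check prints "<file>: OK" when the hashes match.
echo "$(cat /tmp/kubectl.demo.sha256)  /tmp/kubectl.demo" | sha256sum --check
```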
4. Install kubelet and kubeadm
Install them with yum:
#cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note: SELinux must be disabled by running setenforce 0:
#setenforce 0
Install:
#yum install -y kubelet kubeadm
Enable on boot and start:
#systemctl enable kubelet && systemctl start kubelet
5. Deploy k8s
5.1 Initialize the master node
Log in to the Linux machine with the master role, in my case 192.168.192.130.
==================================
Following the official deployment steps requires connectivity to Google, so I abandoned the steps above and uninstalled the software from all three machines.
Starting over! January 4, 2022
==================================
Reference link:
https://blog.csdn.net/jianzhang11/article/details/105549071
1. Install Docker
1.1 k8s cluster plan
Host | IP | Role | Linux version
---|---|---|---
master01 | 192.168.192.130 | master | CentOS-7-x86_64-Minimal-2009
slaver01 | 192.168.192.131 | worker | CentOS-7-x86_64-Minimal-2009
slaver02 | 192.168.192.132 | worker | CentOS-7-x86_64-Minimal-2009
1.2 Pre-installation preparation
- Docker requires a CentOS kernel version above 3.10:
# uname -r
3.10.0-1160.49.1.el7.x86_64
- Install yum utilities to make repository configuration easier:
#yum install -y yum-utils device-mapper-persistent-data lvm2
- Add the Docker CE repository:
#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- List the available Docker versions:
#yum list docker-ce --showduplicates | sort -r
1.3 Install Docker
- Install a specific Docker version:
#yum install -y docker-ce-18.06.3.ce-3.el7
- Start Docker and verify the installation succeeded:
#systemctl start docker
#docker run hello-world
Hello from Docker!
- Enable Docker on boot:
#systemctl enable docker
2. Install Kubeadm
2.1 Pre-installation preparation
- Stop the firewall, disable it, and check its status:
#systemctl stop firewalld.service
#systemctl disable firewalld.service
#systemctl list-unit-files | grep firewalld.service
- Disable SELinux on every node:
#vi /etc/selinux/config  # set SELINUX=disabled
- Disable swap on every node:
#swapoff -a
#vi /etc/fstab  # delete or comment out the swap line, e.g. /mnt/swap swap swap defaults 0 0
#free -m  # confirm swap is off
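The fstab edit can be scripted instead of done in vi. A sketch against a temporary copy follows; the sample swap line is an assumption, so match whatever your /etc/fstab actually contains, and back the real file up first (cp /etc/fstab /etc/fstab.bak).

```shell
# Build a sample fstab (on a real node you would operate on /etc/fstab).
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out any swap entry so swap stays off after a reboot.
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```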
- Point the Kubernetes yum source at the Aliyun mirror:
#cd /etc/yum.repos.d/
#cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Set the hostname on each of the three machines:
#hostnamectl set-hostname master01  # run on the master host
#hostnamectl set-hostname slaver01  # run on slaver01
#hostnamectl set-hostname slaver02  # run on slaver02
#reboot  # restart the machine
#hostname  # verify the change took effect
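A common companion step (not in the original write-up, so treat it as an assumption) is mapping the hostnames to their IPs in /etc/hosts on every node, so the machines can reach each other by name. Shown here against a temporary file:

```shell
HOSTS=/tmp/hosts.demo    # would be /etc/hosts on a real node
cat >> "$HOSTS" <<'EOF'
192.168.192.130 master01
192.168.192.131 slaver01
192.168.192.132 slaver02
EOF
grep slaver01 "$HOSTS"   # confirm the entry landed
```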
- Remove any previously installed kubeadm:
#yum remove -y kubelet kubeadm kubectl
2.2 Install kubeadm
- Install the packages:
#yum install -y kubelet kubeadm kubectl
- Restart Docker and start kubelet:
#systemctl daemon-reload
#systemctl restart docker
#systemctl enable kubelet && systemctl start kubelet
Everything above runs on all three machines; from here on, commands target specific machines!
2.2.1 Initialize the master node
- Generate the kubeadm configuration file kubeadm.yml:
#mkdir /root/apps/k8s  # create the k8s directory
#cd /root/apps/k8s
#kubeadm config print init-defaults --kubeconfig ClusterConfiguration > kubeadm.yml
#vi kubeadm.yml
kubeadm.yml needs the following edits:
[root@master01 k8s]# cat kubeadm.yml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.192.130
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubernetes-master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  podSubnet: "192.168.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
- Clean up any previous kubeadm state:
#kubeadm reset && rm -rf /etc/cni/net.d && rm -rf $HOME/.kube/config && rm -rf /etc/kubernetes/
- Check the kubelet status: systemctl status kubelet
kubeadm init has not run yet, so kubelet is expected to be in an error state; if you see anything other than this, kubelet was not configured correctly.
Inspect the kubelet error logs:
#tail /var/log/messages
#systemctl status kubelet
#journalctl -xeu kubelet
My kubelet also failed: the Docker engine's cgroup driver was cgroupfs while kubeadm's was systemd. The two must match; here I changed kubeadm's side to cgroupfs:
#vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Change it to:
[root@master01 kubelet.service.d]# cat 10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
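An alternative fix, commonly recommended though not what this write-up does, is to leave the kubelet alone and switch Docker to the systemd cgroup driver via /etc/docker/daemon.json. Sketched against a temporary path so nothing real is touched:

```shell
DAEMON_JSON=/tmp/daemon.json   # would be /etc/docker/daemon.json on a real node
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$DAEMON_JSON"
# On a real node, follow up with:
#   systemctl daemon-reload && systemctl restart docker
#   docker info | grep -i cgroup   # should now report systemd
```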
- Pre-pull the images kubeadm needs:
# List the required images
#kubeadm config images list --config kubeadm.yml
# Pull the images
#kubeadm config images pull --config kubeadm.yml
- Initialize the master node:
#cd /root/apps/k8s
#kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
On success, the output ends with:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.192.130:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e2bdd3b89fd91d44955c652742821cc56bd8040ed63b82a7ac43310caad22f11
- Check the kubelet status again:
#systemctl status kubelet
Errors at this point are normal, because the k8s network has not been configured yet.
- Configure kubectl access:
#rm -rf /root/.kube/
#mkdir /root/.kube/
#cp -i /etc/kubernetes/admin.conf /root/.kube/config
- Verify the setup with kubectl get node:
#kubectl get node
The nodes will show NotReady at this point; that is normal, since we have not deployed any network plugin yet.
2.2.2 Initialize each of the two slaver nodes
- Clean up any previous kubeadm state:
#kubeadm reset && rm -rf /etc/cni/net.d && rm -rf $HOME/.kube/config && rm -rf /etc/kubernetes/
- Align Docker's and kubeadm's cgroup drivers:
#cd /usr/lib/systemd/system/kubelet.service.d
#cat <<'EOF' > 10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
EOF
Note: the heredoc delimiter is quoted ('EOF') so the $KUBELET_* variables are written into the file literally instead of being expanded by the shell.
- Restart kubelet on this node:
#systemctl daemon-reload
#systemctl restart kubelet
- Check the kubelet status:
#systemctl status kubelet
At this point it should be in the running state.
- Initialize the slaver node:
First, on the master node, print the join command:
#kubeadm token create --print-join-command
Then run the printed command on the slaver node.
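If you saved the init log, the token can also be recovered from the join command string with standard text tools. A sketch, reusing the sample values from the kubeadm-init.log output above:

```shell
# Sample join command (values copied from the init output above).
JOIN_CMD='kubeadm join 192.168.192.130:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e2bdd3b89fd91d44955c652742821cc56bd8040ed63b82a7ac43310caad22f11'
# Pull out the value following --token.
TOKEN=$(echo "$JOIN_CMD" | awk '{for (i = 1; i < NF; i++) if ($i == "--token") print $(i + 1)}')
echo "token: $TOKEN"   # prints: token: abcdef.0123456789abcdef
```

Bootstrap tokens expire (the default TTL is 24h), which is why `kubeadm token create --print-join-command` on the master is usually the simpler route.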
=========================
The newest approach, and it is really simple!!
January 8, 2022
The steps above still ended in errors that were painful to resolve.
In the end I recommend installing with Kuboard, a one-click installer:
https://kuboard.cn/install/install-k8s.html#%E9%85%8D%E7%BD%AE%E8%A6%81%E6%B1%82
Notes on installing a production k8s cluster with Kuboard:
- First install the Kuboard-Spray service; once it is installed, its node can no longer be used as a k8s node. (Kuboard-Spray can be installed on a separate Linux machine.)
- The k8s cluster should use static IPs.
- The node hosting Kuboard-Spray must have Docker installed in advance; the k8s cluster nodes do not need Docker preinstalled (they use containerd).
- If the k8s cluster uses Docker as its container engine, configure the Docker environment in Kuboard-Spray.
- My own deployment choices:
  - Deploy Kuboard-Spray on 192.168.192.130
  - Deploy node131 on 192.168.192.131 (k8s master + worker node)
  - Deploy node132 on 192.168.192.132 (k8s worker node)
- k8s nodes need at least 1 CPU core and 2 GB of RAM
- Install the dashboard: https://www.kuboard.cn/install/v3/install-in-k8s.html#%E6%96%B9%E6%B3%95%E4%B8%80-%E4%BD%BF%E7%94%A8-hostpath-%E6%8F%90%E4%BE%9B%E6%8C%81%E4%B9%85%E5%8C%96