A Simple Kubernetes Cluster with kubeadm (v1.18.0)
Deploying with kubeadm is mainly a convenient way to learn Kubernetes; it removes a large amount of tedious manual deployment work.
It is recommended to give each machine at least the resources listed below, otherwise you may run into many small problems.
The version deployed is 1.18.0, with images pulled from the Alibaba Cloud mirror site, so the VMs only need outbound internet access.
Host planning
Host role | IP address | Resources |
k8s-master01 | 192.168.1.20 | 2C2G |
k8s-node01 | 192.168.1.21 | 2C2G |
k8s-node02 | 192.168.1.22 | 2C2G |
Make sure every node can ping external hosts.
Environment initialization (all hosts)
vim init-host.sh
# Role-to-IP mapping
kube_m_ip=192.168.1.20
kube_1_ip=192.168.1.21
kube_2_ip=192.168.1.22
# Pick this host's name based on the IP on ens33 (quoting guards against an empty value)
temp=$(ifconfig ens33 | grep "inet " | awk -F " " '{print $2}')
if [ "$temp" = "$kube_m_ip" ];then
hostnamectl set-hostname k8s-master01
elif [ "$temp" = "$kube_1_ip" ];then
hostnamectl set-hostname k8s-node01
elif [ "$temp" = "$kube_2_ip" ];then
hostnamectl set-hostname k8s-node02
fi
# Hostname resolution
cat <<EOF>>/etc/hosts
$kube_m_ip k8s-master01
$kube_1_ip k8s-node01
$kube_2_ip k8s-node02
EOF
# Stop and disable firewalld (this tutorial runs without a host firewall)
systemctl stop firewalld && systemctl disable firewalld
# Disable swap (kubelet refuses to run with swap enabled, to keep containers out of virtual memory)
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system
# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
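The if/elif branch at the top of the script can also be factored into a small function, which makes the IP-to-hostname mapping easy to sanity-check before it ever touches hostnamectl (the function name is illustrative):

```shell
# Map a node IP to its planned hostname; prints nothing for unknown IPs.
node_name_for_ip() {
  case "$1" in
    192.168.1.20) echo k8s-master01 ;;
    192.168.1.21) echo k8s-node01 ;;
    192.168.1.22) echo k8s-node02 ;;
  esac
}

node_name_for_ip 192.168.1.21   # -> k8s-node01
# In the script: hostnamectl set-hostname "$(node_name_for_ip "$temp")"
```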
Since the steps are identical on all three nodes, you can distribute the script with the ansible tool introduced earlier.
If ansible is not installed, just execute the script on each of the three nodes by hand.
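If you do have ansible, a minimal sketch could look like the following (the inventory path and group name are illustrative, and SSH key access to the nodes is assumed):

```shell
# Write a throwaway inventory listing the three nodes
cat > /tmp/k8s-inventory.ini << 'EOF'
[k8s]
192.168.1.20
192.168.1.21
192.168.1.22
EOF

# The ansible "script" module copies a local script to each node and runs it
if command -v ansible >/dev/null 2>&1; then
  ansible -i /tmp/k8s-inventory.ini k8s -m script -a "init-host.sh"
else
  echo "ansible not installed; run init-host.sh on each node by hand"
fi
```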
1. Deploy Docker
# Fetch the Docker CE repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Fetch the Alibaba Cloud YUM repo
wget -O /etc/yum.repos.d/aliyun-yilai.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Install Docker
yum -y install docker-ce-18.06.1.ce-3.el7
Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
# Configure the Alibaba Cloud registry mirror
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
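A malformed daemon.json will keep Docker from starting at all, so it is worth checking that the file is well-formed JSON before restarting the service. A quick sketch, validated here against a temp copy and assuming python3 is available:

```shell
# Validate daemon.json syntax before restarting Docker
cfg=/tmp/daemon.json   # use /etc/docker/daemon.json on a real host
cat > "$cfg" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json: valid JSON"
```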
# Restart Docker to pick up the config
systemctl restart docker
2. Install kubeadm
# Add the Kubernetes YUM repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# From the official Alibaba Cloud mirror site
#https://developer.aliyun.com/mirror/
# Install dependencies
yum -y install conntrack
# Install the components
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# Enable at boot
systemctl enable kubelet
3. Initialize the cluster
kubeadm init --apiserver-advertise-address=192.168.1.20 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
# Parameter notes
--apiserver-advertise-address=192.168.1.20 # the master host's IP
--image-repository registry.aliyuncs.com/google_containers # where to pull images from
# the default registry k8s.gcr.io is unreachable from mainland China, so the Alibaba Cloud mirror is used instead
--kubernetes-version v1.18.0 # cluster version
--service-cidr=10.96.0.0/12 # virtual IP range for Services
--pod-network-cidr=10.244.0.0/16 # Pod network range; must match the flannel manifest and must not overlap the service CIDR
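The same init command can be assembled from variables so the whole addressing plan lives in one place; a sketch (variable names are illustrative) that only prints the command so you can review it before running:

```shell
MASTER_IP=192.168.1.20
K8S_VERSION=v1.18.0
SERVICE_CIDR=10.96.0.0/12    # virtual IPs for Services
POD_CIDR=10.244.0.0/16       # flannel's default Pod network

init_cmd="kubeadm init \
--apiserver-advertise-address=${MASTER_IP} \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version ${K8S_VERSION} \
--service-cidr=${SERVICE_CIDR} \
--pod-network-cidr=${POD_CIDR}"

echo "$init_cmd"   # inspect, then run it on the master
```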
4. Set up the kubectl config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
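For the root user there is also the shortcut documented by kubeadm itself: point KUBECONFIG straight at the admin config instead of copying it:

```shell
# Root-only alternative to copying admin.conf into ~/.kube
export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl now reads this file for every command in the current shell
```

Note this only lasts for the current shell session; the copy into `$HOME/.kube/config` above is the persistent setup.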
5. Join the nodes
# Run this on both node machines
kubeadm join 192.168.1.20:6443 --token qjnr5p.4wo2xf7gwtrov71g \
--discovery-token-ca-cert-hash sha256:88b02261b35e9772a3b80db9db3875546c906cca1500917e8c65bb62b3ed488a
# The token is valid for 24 hours; once it expires it can no longer be used and a new one must be generated
kubeadm token create --print-join-command
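The --discovery-token-ca-cert-hash is just the SHA-256 of the cluster CA's public key, so it can also be recomputed by hand from /etc/kubernetes/pki/ca.crt on the master. The sketch below runs the same pipeline against a throwaway self-signed cert so it works anywhere:

```shell
# Generate a throwaway cert (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Hash the cert's public key, the same way kubeadm computes the CA cert hash
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```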
6. Check the cluster status
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 6m31s v1.18.0
k8s-node01 NotReady <none> 54s v1.18.0
k8s-node02 NotReady <none> 48s v1.18.0
7. Install the CNI network plugin
# Download the YAML manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Apply the manifest
kubectl apply -f kube-flannel.yml
A possible error
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Connection refused.
Fix: add static resolution entries. Note the append operator `>>`; a single `>` would overwrite /etc/hosts and wipe out the node entries added by the init script.
cat >> /etc/hosts <<EOF
52.74.223.119 github.com
192.30.253.119 gist.github.com
54.169.195.247 api.github.com
185.199.111.153 assets-cdn.github.com
151.101.76.133 raw.githubusercontent.com
151.101.108.133 user-images.githubusercontent.com
151.101.76.133 gist.githubusercontent.com
151.101.76.133 cloud.githubusercontent.com
151.101.76.133 camo.githubusercontent.com
151.101.76.133 avatars0.githubusercontent.com
151.101.76.133 avatars1.githubusercontent.com
151.101.76.133 avatars2.githubusercontent.com
151.101.76.133 avatars3.githubusercontent.com
151.101.76.133 avatars4.githubusercontent.com
151.101.76.133 avatars5.githubusercontent.com
151.101.76.133 avatars6.githubusercontent.com
151.101.76.133 avatars7.githubusercontent.com
151.101.76.133 avatars8.githubusercontent.com
EOF
8. Check the cluster status again
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 33m v1.18.0
k8s-node01 Ready <none> 27m v1.18.0
k8s-node02 Ready <none> 27m v1.18.0
# Check component status
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-j8tgd 1/1 Running 0 32m
coredns-7ff77c879f-xqm7r 1/1 Running 0 32m
etcd-k8s-master01 1/1 Running 0 33m
kube-apiserver-k8s-master01 1/1 Running 0 33m
kube-controller-manager-k8s-master01 1/1 Running 0 33m
kube-flannel-ds-jm8r6 1/1 Running 0 58s
kube-flannel-ds-lb5hf 1/1 Running 0 58s
kube-flannel-ds-m49rb 1/1 Running 0 58s
kube-proxy-qx8w7 1/1 Running 0 27m
kube-proxy-rpv4w 1/1 Running 0 32m
kube-proxy-wxjnj 1/1 Running 0 27m
kube-scheduler-k8s-master01 1/1 Running 0 33m
9. Test the cluster
# Create a Pod
kubectl create deployment nginx --image=nginx
# Expose the running deployment as a Service (--port: the port to expose)
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
[root@k8s-master01 ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-f89759699-t6w8n 1/1 Running 0 82s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 40m
service/nginx NodePort 10.97.172.154 <none> 80:31947/TCP 14s
10. Access the service
# Address format: http://NodeIP:NodePort
http://192.168.1.21:31947/
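The node port (31947 here) is assigned at random from the 30000-32767 range, so it will differ per cluster. On a live cluster, `kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'` prints it directly; as a text-processing fallback it can also be cut out of the PORT(S) column, sketched here against the sample value above:

```shell
# Extract the node port from a PORT(S) value like "80:31947/TCP"
ports="80:31947/TCP"    # sample from the kubectl output above
node_port=$(echo "$ports" | sed -E 's|^[0-9]+:([0-9]+)/.*|\1|')
echo "http://192.168.1.21:${node_port}/"   # -> http://192.168.1.21:31947/
```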