Kubernetes (k8s): Quick Deployment
Quickly deploy a K8S cluster
- Install Docker
- Create a master node
- Join a node to the current cluster
- Deploy the container network (CNI)
- Deploy the Web UI (Dashboard)
What is the purpose of the network component we deploy?
Deploying a network component connects Pod-to-Pod and Node-to-Pod traffic, so that packets can travel anywhere in the cluster, forming a flat network.
Mainstream network components include Flannel, Calico, and others.
CNI (Container Network Interface) is simply the interface through which Kubernetes plugs in these third-party network components.
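As a quick way to see which CNI plugin a node actually ended up with, you can list the CNI configuration directory on that node (a hedged sketch, not part of the original walkthrough; the exact file name depends on the plugin):
[root@node1 ~]# ls /etc/cni/net.d/ # Flannel, for example, typically installs 10-flannel.conflist here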
Checking the cluster state:
Check master component status:
kubectl get cs
Check node status:
kubectl get node
Show the URLs proxied by the apiserver:
kubectl cluster-info
Dump detailed cluster information:
kubectl cluster-info dump
List all Deployment controller objects:
kubectl get deployment
Now for the deployment itself.
The environment:
Role | IP
--- | ---
master | 192.168.230.131
node1 | 192.168.230.132
node2 | 192.168.230.143
Setting up the environment:
Run the following on all three hosts.
Disable the firewall:
[root@localhost ~]# systemctl disable --now firewalld
Permanently disable SELinux:
[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]# setenforce 0
Disable swap:
[root@localhost ~]# vi /etc/fstab
# /dev/mapper/cs-swap none swap defaults 0 0 # comment out the swap entry
[root@localhost ~]# mount -a
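Commenting out the fstab entry only keeps swap from being mounted at the next boot. To turn swap off for the current session as well (a common extra step, not shown in the original), you can run:
[root@localhost ~]# swapoff -a
[root@localhost ~]# free -m # the Swap line should now show 0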
Set the hostnames:
[root@localhost ~]# hostnamectl set-hostname master.example.com
[root@localhost ~]# bash
[root@master ~]#
[root@localhost ~]# hostnamectl set-hostname node1.example.com
[root@localhost ~]# bash
[root@node1 ~]#
[root@localhost ~]# hostnamectl set-hostname node2.example.com
[root@localhost ~]# bash
[root@node2 ~]#
Add host entries on the master:
[root@master ~]# vi /etc/hosts
192.168.230.131 master master.example.com
192.168.230.132 node1 node1.example.com
192.168.230.143 node2 node2.example.com
Pass bridged IPv4 traffic to the iptables chains:
[root@master ~]# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# sysctl --system # apply the settings
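If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A hedged extra step (not in the original) to load it now and on every boot:
[root@master ~]# modprobe br_netfilter
[root@master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf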
Time synchronization (all three hosts run CentOS 8, so the configuration is identical):
[root@master ~]# vim /etc/chrony.conf
pool time1.aliyun.com iburst
[root@master ~]# systemctl enable --now chronyd
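To confirm that chrony is actually synchronizing against the Aliyun pool (an optional check, not in the original):
[root@master ~]# chronyc sources -v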
Reboot once the environment configuration is done:
[root@master ~]# reboot
Set up passwordless SSH:
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id master
[root@master ~]# ssh-copy-id node1
[root@master ~]# ssh-copy-id node2
Test that the connections work (using the date command):
[root@master ~]# for i in master node1 node2;do ssh $i 'date';done
Fri Dec 17 23:29:47 CST 2021
Fri Dec 17 10:29:48 EST 2021
Fri Dec 17 10:29:48 EST 2021
That completes the environment setup. Next, install Docker, kubeadm, and kubelet on all nodes.
This deployment uses Docker as the container runtime (accessed by Kubernetes through the CRI), so install Docker first.
Installing Docker
Run the following on all three hosts.
Configure the Docker repository:
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
Install Docker, set it to start on boot, and start it:
[root@master ~]# yum -y install docker-ce
[root@master ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
Check the installed version:
[root@master ~]# docker --version
Docker version 20.10.12, build e91ed57
Configure the Docker daemon:
[root@master ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
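Because daemon.json changes (in particular the cgroup driver) only take effect after Docker reloads its configuration, it is worth restarting Docker and verifying the driver (an extra check, not in the original):
[root@master ~]# systemctl restart docker
[root@master ~]# docker info | grep -i 'cgroup driver' # should report systemd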
Add the Aliyun Kubernetes YUM repository:
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
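Before pinning a version, you can check which kubeadm/kubelet/kubectl versions the new repository actually offers (an optional check, not in the original):
[root@master ~]# yum list --showduplicates kubeadm | tail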
Install kubeadm, kubelet, and kubectl (on all three hosts)
Kubernetes releases frequently, so a specific version is pinned here:
[root@master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@master ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
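Optionally, the control-plane images can be pre-pulled on the master before initialization so that kubeadm init itself goes faster (a sketch; the repository and version match the init command below):
[root@master ~]# kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.0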
Initialize the master (run this on the master node only):
[root@master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.230.131 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
Flags:
- --apiserver-advertise-address: the address the API server advertises, i.e. the master's IP
- --image-repository: pull the control-plane images from the Aliyun mirror, since the default registry is not reachable from mainland China
- --kubernetes-version: the version to deploy
- --service-cidr: the Service network
- --pod-network-cidr: the Pod network (Flannel's default)
When it finishes, output like the following means the initialization succeeded. The commands near the end will be needed later, so it is best to save them to a file.
To start using your cluster, you need to run the following as a regular user: # i.e. if you are not the administrator, run the commands below to start using the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run: # if you are the administrator, write the following setting into a profile file instead
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: #你需要部署你的网络到kubectl apply -f [podnetwork].yaml里面去,而文件的位置在下面
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.230.131:6443 --token smr2gr.y6aa7ojhicl7zcd9 \
--discovery-token-ca-cert-hash sha256:5309298933d7699c59cf3462c4a07925163cdbeea878926f1044625b9860763b
Following the hint above: since I am using the root account, only one line of configuration is needed.
[root@master kubernetes]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@master kubernetes]# source /etc/profile.d/k8s.sh
Check node status:
[root@master kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com NotReady control-plane,master 11m v1.20.0
Install the Pod network add-on (CNI)
[root@master kubernetes]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
My network was too slow to pull it directly, so I downloaded kube-flannel.yml separately and copied it onto the master:
[root@master ~]# ls
anaconda-ks.cfg kube-flannel.yml test
[root@master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
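To watch the Flannel DaemonSet Pods come up across the nodes (assuming the manifest labels them app=flannel, as the upstream kube-flannel.yml does):
[root@master ~]# kubectl get pods -n kube-system -l app=flannel -w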
Join the Kubernetes nodes
Run the following on 192.168.230.132 and 192.168.230.143 (the nodes).
To add new nodes to the cluster, run the kubeadm join command printed by kubeadm init:
kubeadm join 192.168.230.131:6443 --token smr2gr.y6aa7ojhicl7zcd9 \
--discovery-token-ca-cert-hash sha256:5309298933d7699c59cf3462c4a07925163cdbeea878926f1044625b9860763b
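If that token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master:
[root@master ~]# kubeadm token create --print-join-command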
After the join, the nodes look like this. NotReady here does not mean the join failed; the node simply has not finished pulling its images yet:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready control-plane,master 29m v1.20.0
node1.example.com NotReady <none> 5m48s v1.20.0
node2.example.com Ready <none> 5m44s v1.20.0
Check on the node:
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/mirrored-flannelcni-flannel-cni-plugin v1.0.0 cd5235cd7dc2 7 weeks ago 9.03MB
registry.aliyuncs.com/google_containers/kube-proxy v1.20.0 10cc881966cf 12 months ago 118MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 22 months ago 683kB
One image is still missing, so it just needs a little more time. Once all nodes are Ready, move on to the next step:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready control-plane,master 31m v1.20.0
node1.example.com Ready <none> 8m26s v1.20.0
node2.example.com Ready <none> 8m22s v1.20.0
[root@master ~]# kubectl get pods # nothing is running in the default namespace
No resources found in default namespace.
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 36m
kube-node-lease Active 36m
kube-public Active 36m
kube-system Active 36m
[root@master ~]# kubectl get pods -n kube-system # the system containers only show up when you specify the namespace
NAME READY STATUS RESTARTS AGE
coredns-7f89b7bc75-bdgsr 1/1 Running 0 36m
coredns-7f89b7bc75-k8c6g 1/1 Running 0 36m
etcd-master.example.com 1/1 Running 0 36m
kube-apiserver-master.example.com 1/1 Running 0 36m
kube-controller-manager-master.example.com 1/1 Running 0 36m
kube-flannel-ds-7lnw5 1/1 Running 0 13m
kube-flannel-ds-bjz2t 1/1 Running 0 12m
kube-flannel-ds-sqwj5 1/1 Running 0 15m
kube-proxy-fqdvj 1/1 Running 0 12m
kube-proxy-hjnxj 1/1 Running 0 36m
kube-proxy-wfh9b 1/1 Running 0 13m
kube-scheduler-master.example.com 1/1 Running 0 36m
To see which node each Pod is running on, add the -o wide option:
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7f89b7bc75-bdgsr 1/1 Running 0 38m 10.244.0.3 master.example.com <none> <none>
coredns-7f89b7bc75-k8c6g 1/1 Running 0 38m 10.244.0.2 master.example.com <none> <none>
etcd-master.example.com 1/1 Running 0 38m 192.168.230.131 master.example.com <none> <none>
kube-apiserver-master.example.com 1/1 Running 0 38m 192.168.230.131 master.example.com <none> <none>
kube-controller-manager-master.example.com 1/1 Running 0 38m 192.168.230.131 master.example.com <none> <none>
kube-flannel-ds-7lnw5 1/1 Running 0 14m 192.168.230.132 node1.example.com <none> <none>
kube-flannel-ds-bjz2t 1/1 Running 0 14m 192.168.230.143 node2.example.com <none> <none>
kube-flannel-ds-sqwj5 1/1 Running 0 17m 192.168.230.131 master.example.com <none> <none>
kube-proxy-fqdvj 1/1 Running 0 14m 192.168.230.143 node2.example.com <none> <none>
kube-proxy-hjnxj 1/1 Running 0 38m 192.168.230.131 master.example.com <none> <none>
kube-proxy-wfh9b 1/1 Running 0 14m 192.168.230.132 node1.example.com <none> <none>
kube-scheduler-master.example.com 1/1 Running 0 38m 192.168.230.131 master.example.com <none> <none>
Testing the Kubernetes cluster
Create a Pod in the cluster and verify that it runs correctly.
Create a Deployment named nginx that uses the nginx image:
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
Expose port 80 with a NodePort-type Service:
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
Check which node port it was mapped to:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 44m
nginx NodePort 10.103.97.182 <none> 80:32220/TCP 74s
Notes:
--image specifies the image to use.
--port specifies the port the container exposes.
--replicas specifies how many Pod replicas the controller object should create automatically.
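As an illustration of --replicas (not actually run in this walkthrough), the existing Deployment could be scaled out after creation, or a new one could be created with several replicas up front on a recent kubectl; the Deployment name web below is just a placeholder:
[root@master ~]# kubectl scale deployment nginx --replicas=3
[root@master ~]# kubectl create deployment web --image=nginx --replicas=3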
[root@master ~]# kubectl get pod,svc # list Pods and Services together
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-bmwdr 0/1 ContainerCreating 0 5m45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 45m
service/nginx NodePort 10.103.97.182 <none> 80:32220/TCP 2m41s
[root@master ~]# kubectl get pods -o wide # show detailed Pod information
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-6799fc88d8-bmwdr 0/1 ImagePullBackOff 0 23m 10.244.1.2 node1.example.com <none> <none>
[root@master ~]# ping 10.103.97.182
PING 10.103.97.182 (10.103.97.182) 56(84) bytes of data.
^C
--- 10.103.97.182 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2061ms
[root@master ~]# curl http://10.103.97.182
curl: (7) Failed to connect to 10.103.97.182 port 80: Connection refused
[root@master ~]# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.730 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.452 ms
64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.431 ms
^C
--- 10.244.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2072ms
rtt min/avg/max/mdev = 0.431/0.537/0.730/0.138 ms
[root@master ~]# curl http://10.244.1.2
curl: (7) Failed to connect to 10.244.1.2 port 80: Connection refused
Why is the connection refused? Two things are going on. First, the ClusterIP (10.103.97.182) is a virtual IP implemented by kube-proxy rules, so it is normal that ping to it gets no reply. Second, and more importantly, the Pod is still in ContainerCreating / ImagePullBackOff, so nothing is listening on port 80 yet.
A check confirms that nginx is not actually up yet:
[root@master ~]# ps -ax | grep nginx
301168 pts/2 S+ 0:00 grep --color=auto nginx
The conclusion: the node was simply slow to pull the nginx image. Wait until the Pod is Running and the access will succeed.
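Once the Pod shows Running, the same checks should go through; for example (using the NodePort 32220 from the Service output above, reachable on any node's IP):
[root@master ~]# curl http://10.103.97.182 # the ClusterIP, from the master or any node
[root@master ~]# curl http://192.168.230.132:32220 # the NodePort on node1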