I. Test Environment

Node name    IP address
master       192.168.100.10
node1        192.168.100.20
node2        192.168.100.30

II. Enabling IPVS Mode in Kubernetes

1. Check the status of the cluster nodes
[root@master ~]# kubectl get nodes 
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   2d20h   v1.18.2
node1    Ready    <none>   2d20h   v1.18.2
node2    Ready    <none>   2d20h   v1.18.2
2. Change the kube-proxy mode

Previously the default mode was iptables.
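Before switching, every node needs the IPVS kernel modules loaded, otherwise kube-proxy falls back to iptables. A minimal sketch for CentOS 7 (module names vary by kernel version; kernels 4.19 and later use nf_conntrack instead of nf_conntrack_ipv4):

[root@master ~]# for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $mod; done
[root@master ~]# yum install -y ipset ipvsadm    # optional userspace tools for inspecting IPVS rules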

A ConfigMap is a centralized configuration store, commonly used to hold service configuration files; editing the file inside it changes the configuration for every consumer at once.
[root@master ~]# kubectl edit cm -n kube-system kube-proxy 
mode: "ipvs"    # change this value to ipvs
-n selects the namespace; kube-proxy is the Kubernetes proxy component
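If you prefer a non-interactive change, a sed pipeline is a workable sketch, assuming the kubeadm default of an empty mode value in config.conf:

[root@master ~]# kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -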

Deleting the kube-proxy pods has the effect of restarting them, because the DaemonSet recreates them:
[root@master ~]# kubectl get pods -n kube-system | grep kube-proxy 
kube-proxy-f8v7d                           1/1     Running   0          45h
kube-proxy-fczf4                           1/1     Running   0          45h
kube-proxy-jqpbp                           1/1     Running   0          45h
[root@master ~]# kubectl delete pods -n kube-system kube-proxy-f8v7d kube-proxy-fczf4 kube-proxy-jqpbp
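On a kubeadm cluster the kube-proxy pods carry the label k8s-app=kube-proxy, so they can also be deleted in one go without copying pod names:

[root@master ~]# kubectl -n kube-system delete pods -l k8s-app=kube-proxy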

Check the result:
[root@master ~]# kubectl logs -n kube-system kube-proxy-l9hq5
....
I0918 03:21:27.223909       1 server_others.go:259] Using ipvs Proxier.
....
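If ipvsadm is installed, you can also confirm on a node that IPVS virtual server rules now exist:

[root@master ~]# ipvsadm -Ln    # list virtual servers and their backends numerically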

Note:
On Kubernetes 1.18.x and later with CentOS 7, switching to IPVS can produce the following error:
1393 proxier.go:1950] Failed to list IPVS destinations, error: parseIP Error ip=[192 168 50 65 0 0 0 0 0 0 0 0 0 0 0 0]
The IPVS code in newer kube-proxy releases is not supported by the CentOS 7 kernel; upgrading the OS to CentOS 8 or later also resolves it.
Workaround: downgrade kube-proxy
[root@master ~]# kubectl -n kube-system set image daemonset/kube-proxy *=registry.aliyuncs.com/k8sxio/kube-proxy:v1.17.6
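You can watch the DaemonSet finish rolling out the downgraded image before checking the logs:

[root@master ~]# kubectl -n kube-system rollout status daemonset/kube-proxy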

After the downgrade, a healthy log looks like this:
I0918 03:32:50.394185       1 node.go:135] Successfully retrieved node IP: 192.168.100.10
I0918 03:32:50.394211       1 server_others.go:172] Using ipvs Proxier.
W0918 03:32:50.394429       1 proxier.go:420] IPVS scheduler not specified, use rr by default
I0918 03:32:50.394810       1 server.go:571] Version: v1.17.6
I0918 03:32:50.395204       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0918 03:32:50.395567       1 config.go:131] Starting endpoints config controller
I0918 03:32:50.395592       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0918 03:32:50.395638       1 config.go:313] Starting service config controller
I0918 03:32:50.395641       1 shared_informer.go:197] Waiting for caches to sync for service config
I0918 03:32:50.495972       1 shared_informer.go:204] Caches are synced for service config 
I0918 03:32:50.495983       1 shared_informer.go:204] Caches are synced for endpoints config 

III. Scheduling onto the Master Node

By default, scheduling in Kubernetes is handled by the Scheduler, and the master node is excluded from scheduling for safety reasons. If you want the master node to take part in scheduling as well, the corresponding taint has to be removed.

The master is unschedulable by default because of a taint:
[root@master ~]# kubectl describe nodes master                                  
Taints:             node-role.kubernetes.io/master:NoSchedule 

Remove the taint to make the master node schedulable:
[root@master ~]# kubectl taint node master node-role.kubernetes.io/master- 
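To restore the default behavior later, re-apply the taint:

[root@master ~]# kubectl taint node master node-role.kubernetes.io/master=:NoSchedule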

IV. Scaling Pods Up and Down

Scaling up or down simply means increasing or decreasing the number of pod replicas managed by a controller.

Check the current replica count of the Deployment:
[root@master nginx]# kubectl get deploy 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           30s

1. Temporary change (overwritten the next time the manifest is applied):
[root@master nginx]# kubectl scale deploy nginx --replicas=2
deployment.apps/nginx scaled

2. Permanent change (edit the replicas field in the manifest and re-apply it):
....
spec:
  replicas: 2
....
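For context, a complete minimal Deployment manifest with the replicas field in place might look like this (a sketch; the names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Re-apply it with kubectl apply -f to make the new replica count take effect. Related: if the replica count should track load instead of being set by hand, a HorizontalPodAutoscaler can be created in one command (requires metrics-server and CPU requests on the containers):

[root@master nginx]# kubectl autoscale deploy nginx --min=2 --max=5 --cpu-percent=80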

V. Rolling Updates and Rollbacks

When the image version needs to be updated, a rolling update replaces the pods gradually.

Run a Deployment using the httpd:v1.1 image:
[root@master]# kubectl get deploy -o wide 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES       SELECTOR
httpd   1/1     1            1           9m13s   httpd        httpd:v1.1   app=httpd

1. Command-line change (temporary; updates the image to v2.2):
[root@master ~]# kubectl set image deploy/httpd httpd=httpd:v2.2 
deployment.apps/httpd image updated

2. YAML file change (permanent)
Prepare a manifest in advance with the image version set to v2.2:
[root@master httpd]# kubectl apply -f httpdv1.1.yaml --record 
deployment.apps/httpd created
[root@master httpd]# kubectl apply -f httpdv2.2.yaml --record    
deployment.apps/httpd configured
(--record stores the command that was run as the revision's change cause, which makes rollbacks easier to trace)
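Note that --record was deprecated in later kubectl releases; the same CHANGE-CAUSE entry can be written by annotating the Deployment directly, for example:

[root@master httpd]# kubectl annotate deploy/httpd kubernetes.io/change-cause="upgrade to httpd:v2.2"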

Check the result:
[root@master httpd]# kubectl get deploy -o wide 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR
httpd   1/1     1            1           66s   httpd        httpd:v2.2   app=httpd
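Whichever method you use, the rolling update can be watched while it replaces pods:

[root@master httpd]# kubectl rollout status deploy/httpd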

View the recorded revision history:
[root@master httpd]# kubectl rollout history deploy 
deployment.apps/httpd 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpdv1.1.yaml --record=true
2         kubectl apply --filename=httpdv2.2.yaml --record=true
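Each revision's pod template can be inspected before deciding where to roll back:

[root@master httpd]# kubectl rollout history deploy/httpd --revision=2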


Rollback
Roll the Deployment back to v1.1 (revision 1):
[root@master httpd]# kubectl rollout undo deploy/httpd --to-revision=1
deployment.apps/httpd rolled back
[root@master httpd]# kubectl get deploy -o wide 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES       SELECTOR
httpd   1/1     1            1           3m37s   httpd        httpd:v1.1   app=httpd

VI. Isolating and Restoring a Node

In a Kubernetes cluster, nodes fail and need maintenance from time to time. When that happens, the node should be isolated so that no more pods are scheduled onto it; once the problem is fixed, the node is restored.

1. Isolation
Check which nodes the cluster's pods are currently running on:
[root@master nginx]# kubectl get pods -o wide 
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
httpd-6f6cc47d44-xhln6   1/1     Running   0          6m50s   10.244.104.12    node2   <none>           <none>
nginx-89b8b949c-5jd2d    1/1     Running   0          24s     10.244.166.142   node1   <none>           <none>
nginx-89b8b949c-kjvfd    1/1     Running   0          24s     10.244.104.13    node2   <none>           <none>

Suppose node2 has failed and urgently needs to be isolated:
[root@master ~]# kubectl cordon node2
node/node2 cordoned
[root@master ~]# kubectl get nodes  (node2 is now unschedulable)
NAME     STATUS                     ROLES    AGE   VERSION
master   Ready                      master   8d    v1.18.2
node1    Ready                      <none>   8d    v1.18.2
node2    Ready,SchedulingDisabled   <none>   8d    v1.18.2

Pods already running on node2 are not deleted automatically; the administrator removes them by hand, and their controllers recreate them elsewhere:
[root@master ~]# kubectl delete pods httpd-6f6cc47d44-xhln6 nginx-89b8b949c-kjvfd 
pod "httpd-6f6cc47d44-xhln6" deleted
pod "nginx-89b8b949c-kjvfd" deleted

Check again which node the pods are running on (all of them are now on node1):
[root@master ~]# kubectl get pods -o wide 
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
httpd-6f6cc47d44-bxwfw   1/1     Running   0          30s     10.244.166.144   node1   <none>           <none>
nginx-89b8b949c-5jd2d    1/1     Running   0          4m12s   10.244.166.142   node1   <none>           <none>
nginx-89b8b949c-vwddn    1/1     Running   0          30s     10.244.166.143   node1   <none>           <none>
2. Recovery
Suppose the node2 failure has been fixed and the node needs to be restored:
[root@master ~]# kubectl uncordon node2
node/node2 uncordoned
[root@master ~]# kubectl get nodes 
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   8d    v1.18.2
node1    Ready    <none>   8d    v1.18.2
node2    Ready    <none>   8d    v1.18.2

After recovery, pods do not move back automatically either; rebalancing is a manual step, and it is also fine to leave them where they are.
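One way to spread pods back out after recovery is to restart each Deployment so the scheduler places fresh pods across all schedulable nodes, for example:

[root@master ~]# kubectl rollout restart deploy/nginx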

VII. Scheduling a Pod to a Specific Node

Normally, the Pods we create are scheduled onto suitable nodes by the Scheduler, fully automatically. To pin a Pod to a specific Node, we have to arrange it manually with labels and a node selector.

Label the target node:
[root@master ~]# kubectl label nodes node2 test=node2    (label key=label value)
node/node2 labeled

Write the manifest:
[root@master ~]# vi nginx.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
  nodeSelector:   # selects nodes by label
    test: node2

Apply the manifest and check the result:
[root@master ~]# kubectl apply -f nginx.yaml               
pod/nginx created
[root@master ~]# kubectl get pods -o wide 
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          39s   10.244.104.2   node2   <none>           <none>

The Pod is running on node2, as intended.
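nodeSelector is the simplest placement mechanism. The same constraint can also be expressed as node affinity, which supports richer operators; a sketch of the equivalent spec fragment:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: test
            operator: In
            values:
            - node2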

Remove the label:
[root@master ~]# kubectl label node node2 test-     
node/node2 labeled
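You can confirm at any time whether the label is present:

[root@master ~]# kubectl get nodes -l test=node2    # empty once the label is removed
[root@master ~]# kubectl get nodes --show-labels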