k8s Pod Scheduling Strategies (Beginner's Guide)
In Kubernetes, the scheduler looks at each worker node's configuration and load and builds priority (scoring) functions from them; when the master hands down a task, these functions place the pod on the node best suited to run it.
Beyond that, we can influence pod scheduling ourselves in the following three ways:
- Node scheduling (nodeSelector)
- Affinity scheduling
- Taints and tolerations
Differences and hands-on examples
I. Node scheduling
This is the most direct way to schedule a pod, simple and blunt, so it is usually used in small, simple clusters; it is not a good fit for workloads that need fine-grained resource classification and organization.
Explanation: the idea is to attach a unique label to a worker node and then add a matching nodeSelector to the pod's YAML. This overrides the scheduler's scoring and makes the pod start on the chosen node.
1. Label a node
Format:
kubectl label nodes <node-name> <label-key>=<label-value>
[root@k8s-master ~]# kubectl label nodes k8s-node01 zone=sh
[root@k8s-master ~]# kubectl get nodes --show-labels
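If you ever need to undo this, the same command with a trailing minus sign removes the label (shown here against the k8s-node01 node labelled above):
[root@k8s-master ~]# kubectl label nodes k8s-node01 zone-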
2. Schedule the pod onto a node with nodeSelector
- The pod definition specifies labels through nodeSelector; the pod will only be scheduled onto nodes that carry those labels.
[root@k8s-master ~]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
- In this example the pod will only be scheduled onto a node labeled disktype=ssd.
- Verify node scheduling
[root@k8s-master ~]# kubectl apply -f pod-demo.yaml
[root@k8s-master ~]# kubectl get pods -o wide
[root@k8s-master ~]# kubectl describe pod nginx  ## view the pod's events
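Note that if no node carries the disktype=ssd label, this pod simply stays Pending. A quick way out, assuming you want it on k8s-node01, is to add the label and check again:
[root@k8s-master ~]# kubectl label nodes k8s-node01 disktype=ssd
[root@k8s-master ~]# kubectl get pods -o wide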
II. Affinity scheduling
More complex; used in medium to large clusters where nodes are grouped and resources are classified and managed. There are hard affinity, soft affinity, pod affinity and pod anti-affinity, forming two pairs of opposites.
Hard affinity: the node must match one or more of the listed labels (at least one must exist).
Soft affinity: if a node matches one of the listed labels it is preferred; if none does, the scheduler falls back to its priority functions.
[root@k8s-master ~]# vim pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-demo.yaml
[root@k8s-master ~]# kubectl describe pod pod-node-affinity-demo
# Result:
Warning FailedScheduling 2s (x8 over 20s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
# Label one of the nodes with zone=foo
[root@k8s-master ~]# kubectl label node k8s-node01 zone=foo
# The pod now starts normally
[root@k8s-master ~]# kubectl get pods
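To confirm the pod really landed on the node that was just labelled, -o wide adds the NODE column:
[root@k8s-master ~]# kubectl get pods -o wide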
[root@k8s-master ~]# vim pod-nodeaffinity-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60
      - preference:
          matchExpressions:
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 10
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-demo-2.yaml
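Even if no node matched any of these labels, the pod would still be scheduled somewhere; that is exactly what soft affinity means. Since k8s-node01 got zone=foo in the previous step, it should be the preferred choice here, which can be checked with:
[root@k8s-master ~]# kubectl get pods -o wide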
A single pod can also combine hard and soft node affinity:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - dev
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: with-node-affinity
    image: nginx
Result: when both are present, the hard (required) rule acts as a filter first, keeping only the nodes that match at least one of its labels; the soft (preferred) rule then scores those remaining nodes and the pod goes to the best one. The order of the two blocks in the YAML does not change this behaviour.
Pod affinity: there is already a pod running somewhere; a new pod with affinity towards it (matched by label) follows it and is scheduled onto the same node.
Pod anti-affinity: the new pod checks which node the earlier pod is running on and must be scheduled onto a different node.
[root@k8s-master ~]# vim pod-required-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl apply -f pod-required-affinity-demo.yaml
[root@k8s-master ~]# kubectl get pods -o wide
# Result: the two pods run on the same node
NAME READY STATUS RESTARTS AGE IP NODE
pod-first 1/1 Running 0 11s 10.244.1.6 k8s-node01
pod-second 1/1 Running 0 11s 10.244.1.5 k8s-node01
[root@k8s-master ~]# kubectl delete -f pod-required-affinity-demo.yaml
[root@k8s-master ~]# vim pod-required-anti-affinity-demo.yaml
# Contents:
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl apply -f pod-required-anti-affinity-demo.yaml
[root@k8s-master ~]# kubectl get pods -o wide
# Result: the two pods are not on the same node
NAME READY STATUS RESTARTS AGE IP NODE
pod-first 1/1 Running 0 5s 10.244.2.4 k8s-node02
pod-second 1/1 Running 0 5s 10.244.1.7 k8s-node01
[root@k8s-master ~]# kubectl delete -f pod-required-anti-affinity-demo.yaml
What about combining pod affinity and pod anti-affinity on the same pod? No full walkthrough here, just the outcome (a minimal sketch follows below).
First, the two rules obviously must not use the same match conditions.
The scheduler then has to satisfy both: it first narrows the candidates to nodes that satisfy the affinity rule, and from those it throws out any node that matches the anti-affinity rule, placing the pod on one of the remaining nodes.
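A minimal sketch of such a combination, not taken from the demos above (the pod name pod-third and its app=web label are made up; the app=myapp and app=db labels are reused from the earlier examples): the pod must share a node with a myapp pod and must avoid any node running a db pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-third
  labels:
    app: web
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # must land on a node that already runs a pod with app=myapp
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # must not land on a node that runs a pod with app=db
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["db"]}
        topologyKey: kubernetes.io/hostname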
III. Taints and tolerations
Unlike the two previous approaches, here you first attach a taint to a node. Its purpose is to protect the node: the scheduler will no longer pick this node as a place to run pods (unless a pod explicitly tolerates the taint).
The master in our cluster is tainted this way, which is why pods you start never run on the master; this keeps the master free to do its own work.
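You can see this default taint yourself; the exact key depends on the Kubernetes version (node-role.kubernetes.io/master on older clusters, node-role.kubernetes.io/control-plane on newer ones):
[root@k8s-master ~]# kubectl describe node k8s-master | grep Taints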
A few parameters used below:
- operator can be:
- Equal: the key and value must both match; this is the default
- Exists: only the key has to exist; no value needs to be defined
- A taint's effect defines how pods are repelled:
- NoSchedule: only affects scheduling; pods already running on the node are not touched
- NoExecute: affects both scheduling and existing pods; pods that do not tolerate the taint are evicted (see the tolerationSeconds sketch below)
- PreferNoSchedule: try not to schedule onto the node, but it is not guaranteed
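For the NoExecute case, a toleration may also carry tolerationSeconds, meaning the pod tolerates the taint only for that long before being evicted. A minimal sketch, using a made-up key check-node:
tolerations:
- key: "check-node"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
  tolerationSeconds: 3600  # evicted 3600s after the taint appears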
Check the node's current taints, then add a node-type=production:NoSchedule taint to k8s-node01:
[root@k8s-master ~]# kubectl describe node k8s-node01 | grep Taints
[root@k8s-master ~]# kubectl taint node k8s-node01 node-type=production:NoSchedule
[root@k8s-master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@k8s-master ~]# kubectl apply -f deploy-demo.yaml
[root@k8s-master ~]# kubectl get pods -o wide
# Result:
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-cwt79 1/1 Running 0 5s 10.244.2.6 k8s-node02
myapp-deploy-69b47bc96d-qqrwq 1/1 Running 0 5s 10.244.2.5 k8s-node02
Because k8s-node01 carries the taint, the pods can only start on k8s-node02, which has no taint.
[root@k8s-master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"
[root@k8s-master ~]# kubectl apply -f deploy-demo.yaml
Test
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-65cc47f858-tmpnz 1/1 Running 0 10s 10.244.1.10 k8s-node01
myapp-deploy-65cc47f858-xnklh 1/1 Running 0 13s 10.244.1.9 k8s-node01
- This Toleration uses operator Exists: it matches any taint whose key is node-type and whose effect is NoSchedule, without needing a value
[root@k8s-master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: "NoSchedule"
[root@k8s-master ~]# kubectl apply -f deploy-demo.yaml
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-559f559bcc-6jfqq 1/1 Running 0 10s 10.244.1.11 k8s-node01
myapp-deploy-559f559bcc-rlwp2 1/1 Running 0 9s 10.244.1.12 k8s-node01
- This Toleration checks only that the key node-type exists; with an empty effect, it matches taints of that key with any effect
[root@k8s-master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""
[root@k8s-master ~]# kubectl apply -f deploy-demo.yaml
# The two pods are spread evenly across the two nodes
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-5d9c6985f5-hn4k2 1/1 Running 0 2m 10.244.1.13 k8s-node01
myapp-deploy-5d9c6985f5-lkf9q 1/1 Running 0 2m 10.244.2.7 k8s-node02
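To clean up after these experiments, the taint added earlier can be removed by repeating the taint command with a trailing minus sign:
[root@k8s-master ~]# kubectl taint node k8s-node01 node-type=production:NoSchedule-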
------ Without putting in enough hard work, don't talk about the future and faraway dreams, because anyone can talk big ------