Taint effects:
- NoSchedule: if a node carries a taint with effect NoSchedule and a Pod has no matching toleration, Kubernetes will not schedule the Pod onto that node.
- PreferNoSchedule: if a node carries a taint with effect PreferNoSchedule, Kubernetes tries to avoid scheduling the Pod onto that node, but may still do so if no other node fits.
- NoExecute: if a node carries a taint with effect NoExecute, Pods already running on the node that do not tolerate the taint are evicted, and Pods without a matching toleration cannot be scheduled onto it.
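For NoExecute taints, a toleration can also bound how long an already-running Pod stays on the node before being evicted, via `tolerationSeconds`. A minimal Pod-spec fragment sketching this (the taint key and value here are hypothetical examples, not from the commands below):

```yaml
# Illustrative Pod spec fragment: tolerate a NoExecute taint for a limited time.
tolerations:
- key: "example-key"        # hypothetical taint key
  operator: "Equal"
  value: "example-value"    # hypothetical taint value
  effect: "NoExecute"
  tolerationSeconds: 3600   # Pod may remain on the node up to 1 hour after the taint is added
```

Without `tolerationSeconds`, a Pod that tolerates the NoExecute taint stays on the node indefinitely.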
Taint effect values:
- A taint's effect must be one of NoSchedule, PreferNoSchedule, or NoExecute; the value part is an arbitrary string and may be empty.
Taint properties:
- A taint is a property of a node in a Kubernetes cluster; the matching property on a Pod is a toleration.
- A taint's effect is one of the three types listed above.
Taint structure:
- A taint consists of three parts: a key, an optional value, and an effect, written as
<key>=<value>:<effect>
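A Pod opts in to a tainted node through a matching toleration in its spec. A minimal sketch, using the taint `key1=value1:NoSchedule` set in the examples below (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "key1"
    operator: "Equal"        # matches key AND value; use "Exists" to match any value for the key
    value: "value1"
    effect: "NoSchedule"
```

With `operator: "Exists"`, the `value` field is omitted and the toleration matches any taint with that key and effect.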
1. Setting a single taint on a node
kubectl taint nodes master1 node-role.kubernetes.io/master=:NoSchedule
kubectl taint node node1 key1=value1:NoSchedule # taint with a value
kubectl taint node master1 key2=:PreferNoSchedule # taint without a value
2. Setting multiple taints on a node
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoExecute
kubectl taint nodes node1 key2=value2:NoSchedule
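With the three taints above on node1, a Pod is only schedulable there if it tolerates both NoSchedule taints, and it only keeps running there if it also tolerates the NoExecute taint. A sketch of the matching tolerations:

```yaml
# Pod spec fragment tolerating all three taints set on node1 above.
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
- key: "key2"
  operator: "Equal"
  value: "value2"
  effect: "NoSchedule"
```

If the Pod tolerated only the two key1 taints, the untolerated key2 NoSchedule taint would still keep it off the node.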
3. Viewing a node's taints
[root@master1 ~]# kubectl describe nodes master1
Name: master1
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=master1
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"36:51:e1:31:e5:9e"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.200.3
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 13 Jan 2021 06:04:10 -0500
Taints: node-role.kubernetes.io/master:NoSchedule # the node's taint
Unschedulable: false
Lease:
HolderIdentity: master1
AcquireTime: <unset>
RenewTime: Thu, 14 Jan 2021 01:14:07 -0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 13 Jan 2021 06:12:43 -0500 Wed, 13 Jan 2021 06:12:43 -0500 FlannelIsUp Flannel is running on this node
MemoryPressure False Thu, 14 Jan 2021 01:11:17 -0500 Wed, 13 Jan 2021 06:50:32 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Jan 2021 01:11:17 -0500 Wed, 13 Jan 2021 06:50:32 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Jan 2021 01:11:17 -0500 Wed, 13 Jan 2021 06:50:32 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Jan 2021 01:11:17 -0500 Wed, 13 Jan 2021 06:50:32 -0500 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.200.3
Hostname: master1
Capacity:
cpu: 4
ephemeral-storage: 17394Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2897500Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 16415037823
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2795100Ki
pods: 110
System Info:
Machine ID: feb4edfea2404d3c8ad028ca4593bb32
System UUID: C6F44D56-0F24-6114-23E7-8DF6CD4E4CFE
Boot ID: afcc0ef6-d767-4b97-9a7b-9b2500757f2e
Kernel Version: 3.10.0-862.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.0
Kubelet Version: v1.18.2
Kube-Proxy Version: v1.18.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-master1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-apiserver-master1 250m (6%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-controller-manager-master1 200m (5%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-flannel-ds-wzf7w 100m (2%) 100m (2%) 50Mi (1%) 50Mi (1%) 19h
kube-system kube-proxy-7h5sb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-scheduler-master1 100m (2%) 0 (0%) 0 (0%) 0 (0%) 19h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (16%) 100m (2%)
memory 50Mi (1%) 50Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>
4. Checking which nodes carry taints, and what those taints are
[root@master1 ~]# kubectl describe node master1 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master1 ~]# kubectl describe node master2 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master1 ~]# kubectl describe node master3 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
5. Output with and without a taint
Taints: node-role.kubernetes.io/master:NoSchedule # taint present
Taints: <none> # no taint
6. Removing a taint so that Pods can be scheduled onto the node
kubectl taint node master1 node-role.kubernetes.io/master:NoSchedule- # the trailing '-' removes the taint
kubectl taint nodes master1 key:NoSchedule-