Environment

Due to limited resources, I deployed only a single master node, and found that pods always stayed in the Pending state when I tried to run them.

[root@iZwz9gwr0avfoncztr5y2jZ ~]# kubectl get pod --namespace=wordpress
NAME        READY   STATUS    RESTARTS   AGE
wordpress   0/2     Pending   0          2m7s

Inspecting the pod shows that the master node carries a taint, so the pod cannot be scheduled onto it.
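
The scheduling events below would typically come from kubectl describe, using the pod name and namespace shown above:

kubectl describe pod wordpress --namespace=wordpress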

Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  78s (x7 over 8m42s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Modify the YAML file used to deploy the pod

Setting a toleration for the taint

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Equal"
  value: ""
  effect: "NoSchedule"

The full YAML file

apiVersion: v1
kind: Namespace
metadata:
  name: wordpress

---
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  namespace: wordpress
spec:
  containers:
  - name: wordpress
    image: wordpress
    ports:
    - containerPort: 80
      name: wdport
    env:
    - name: WORDPRESS_DB_HOST
      value: localhost:3306
    - name: WORDPRESS_DB_USER
      value: wordpress
    - name: WORDPRESS_DB_PASSWORD
      value: wordpress
  - name: mysql
    image: mysql:5.7
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 3306
      name: dbport
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: dayi123
    - name: MYSQL_DATABASE
      value: wordpress
    - name: MYSQL_USER
      value: wordpress
    - name: MYSQL_PASSWORD
      value: wordpress
    volumeMounts:
    - name: db
      mountPath: /var/lib/mysql
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  volumes:
  - name: db
    hostPath:
      path: /var/lib/mysql
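
To deploy it, save the manifest and apply it; the file name here is only an example:

kubectl apply -f wordpress.yaml
kubectl get pod --namespace=wordpress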

More about taints

For a manually deployed k8s cluster, the taints on the master node need to be set manually.

Setting a taint
Syntax:

kubectl taint node [node] key=value:[effect]
     where [effect] can be one of: [ NoSchedule | PreferNoSchedule | NoExecute ]
      NoSchedule: pods that do not tolerate the taint will never be scheduled onto the node
      PreferNoSchedule: the scheduler tries to avoid scheduling pods onto the node
      NoExecute: new pods are not scheduled, and existing Pods on the node are evicted

Example:

kubectl taint node node1 key1=value1:NoSchedule
kubectl taint node node1 key1=value1:NoExecute
kubectl taint node node1 key2=value2:NoSchedule

View taints:

kubectl describe node node1
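
To see the taints of every node at a glance instead of one node's full description, a jsonpath query also works (taints are stored under .spec.taints):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'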

Remove a taint:

kubectl taint node node1 key1:NoSchedule-  # the value does not need to be specified here
kubectl taint node node1 key1:NoExecute-
kubectl taint node node1 key1-             # removes all effects for the given key
kubectl taint node node1 key2:NoSchedule-
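
The explicit key=value:effect- form is also accepted when you want to name exactly which taint to remove, for example:

kubectl taint node node1 key1=value1:NoSchedule-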

Setting the taint on the master node

kubectl taint nodes master1 node-role.kubernetes.io/master=:NoSchedule

Note ⚠️: in this taint set on the master, node-role.kubernetes.io/master is the key, the value is empty, and the effect is NoSchedule.

If you drop the = sign when typing the command and write node-role.kubernetes.io/master:NoSchedule instead, you will get the error: at least one taint update is required.

Tolerating the master node's taints
Taking the taint set on master1 above as an example, you need to add the following configuration to your YAML file so that the pod can tolerate the master node's taint.

Set the tolerations field in the pod's spec:

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Equal"
  value: ""
  effect: "NoSchedule"

Below is how I removed the taints in my own environment.

Check the taints; all three master nodes show NoSchedule:

[root@master1 ~]# kubectl get no -o yaml | grep taint -A 5
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: <master1 IP>
--
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: <master2 IP>
--
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: <master3 IP>

# Remove the taints so that pods can be scheduled on the master nodes

[root@master1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/master1 untainted
node/master2 untainted
node/master3 untainted
error: taint "node-role.kubernetes.io/master" not found

# Check again: no output means the taints were removed successfully
# (the "not found" error above just means one of the nodes matched by --all no longer had that taint; it is harmless)

[root@master1 ~]# kubectl get no -o yaml | grep taint -A 5