1. DNS in Kubernetes

The DNS service inside a K8s cluster:

kubectl get svc -n kube-system |grep dns

[root@aminglinux01 ~]# kubectl get svc -n kube-system |grep dns
kube-dns   ClusterIP   10.15.0.10   <none>        53/UDP,53/TCP,9153/TCP   10d
[root@aminglinux01 ~]#

Test:

Install bind-utils on aminglinux01 (this provides the dig command):
yum install -y bind-utils

Resolve an external domain name:
dig @10.15.0.10 www.baidu.com

[root@aminglinux01 ~]# dig @10.15.0.10 www.baidu.com

; <<>> DiG 9.11.36-RedHat-9.11.36-14.el8_10 <<>> @10.15.0.10 www.baidu.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39465
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 1441cb1e9ce39e75 (echoed)
;; QUESTION SECTION:
;www.baidu.com.			IN	A

;; ANSWER SECTION:
www.baidu.com.		30	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	30	IN	A	110.242.68.3
www.a.shifen.com.	30	IN	A	110.242.68.4

;; Query time: 13 msec
;; SERVER: 10.15.0.10#53(10.15.0.10)
;; WHEN: Mon Jul 15 03:27:29 CST 2024
;; MSG SIZE  rcvd: 161

[root@aminglinux01 ~]# 

Resolve an internal domain name:

dig @10.15.0.10 ngx-svc.default.svc.cluster.local

The full Service domain name has the form <servicename>.<namespace>.svc.<clusterdomain>, where servicename is the Service's name, namespace is the namespace the Service lives in, and clusterdomain is the cluster's domain suffix, which defaults to cluster.local.

[root@aminglinux01 ~]# dig @10.15.0.10 ngx-svc.default.svc.cluster.local

; <<>> DiG 9.11.36-RedHat-9.11.36-14.el8_10 <<>> @10.15.0.10 ngx-svc.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40305
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 46e55f77a8dd5366 (echoed)
;; QUESTION SECTION:
;ngx-svc.default.svc.cluster.local. IN	A

;; ANSWER SECTION:
ngx-svc.default.svc.cluster.local. 30 IN A	10.15.157.72

;; Query time: 0 msec
;; SERVER: 10.15.0.10#53(10.15.0.10)
;; WHEN: Mon Jul 15 03:32:40 CST 2024
;; MSG SIZE  rcvd: 123

[root@aminglinux01 ~]# 

Pods can be resolved as well. Pod domain names are a little special: the format is <pod-ip>.<namespace>.pod.<cluster-domain>, where the dots in the Pod IP are replaced with dashes. For example, for the Pod below with IP address 10.18.68.140, the system assigns the DNS name 10-18-68-140.default.pod.cluster.local; resolving that name with dig returns the Pod IP 10.18.68.140.

dig @10.15.0.10 10-18-68-140.default.pod.cluster.local

[root@aminglinux01 ~]# dig @10.15.0.10 10-18-68-140.default.pod.cluster.local

; <<>> DiG 9.11.36-RedHat-9.11.36-14.el8_10 <<>> @10.15.0.10 10-18-68-140.default.pod.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21202
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 2b2d4dded38ef1c3 (echoed)
;; QUESTION SECTION:
;10-18-68-140.default.pod.cluster.local.	IN A

;; ANSWER SECTION:
10-18-68-140.default.pod.cluster.local.	30 IN A	10.18.68.140

;; Query time: 0 msec
;; SERVER: 10.15.0.10#53(10.15.0.10)
;; WHEN: Mon Jul 15 03:40:20 CST 2024
;; MSG SIZE  rcvd: 133

[root@aminglinux01 ~]# 

The Pods behind this Service are the coredns Pods:

kubectl get pod -n kube-system

[root@aminglinux01 ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS       AGE
calico-kube-controllers-57b57c56f-h2znw   1/1     Running   4 (2d2h ago)   6d6h
calico-node-6tnmp                         1/1     Running   0              100m
calico-node-gf6vm                         1/1     Running   0              99m
calico-node-gzxh9                         1/1     Running   0              100m
coredns-567c556887-pqv8h                  1/1     Running   8 (2d2h ago)   10d
coredns-567c556887-vgsth                  1/1     Running   8 (2d2h ago)   10d
etcd-aminglinux01                         1/1     Running   8 (2d2h ago)   10d
kube-apiserver-aminglinux01               1/1     Running   8 (2d2h ago)   10d
kube-controller-manager-aminglinux01      1/1     Running   8 (2d2h ago)   10d
kube-proxy-fbzxg                          1/1     Running   8 (2d2h ago)   10d
kube-proxy-k82tm                          1/1     Running   4 (6d2h ago)   10d
kube-proxy-zl2dc                          1/1     Running   3 (6d2h ago)   10d
kube-scheduler-aminglinux01               1/1     Running   8 (2d2h ago)   10d
nfs-client-provisioner-d79cfd7f6-q2n4z    1/1     Running   0              5d23h
[root@aminglinux01 ~]# 

Check /etc/resolv.conf inside a Pod in the default namespace:

[root@aminglinux01 ~]# kubectl exec -it ng-deploy-6d94878b66-8t2hq -- cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.15.0.10
options ndots:5
[root@aminglinux01 ~]# 

Check /etc/resolv.conf inside a Pod in the yeyunyi namespace:

kubectl exec -it quota-pod -n yeyunyi -- cat /etc/resolv.conf

[root@aminglinux01 ~]# kubectl exec -it quota-pod -n yeyunyi  -- cat /etc/resolv.conf 
search yeyunyi.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.15.0.10
options ndots:5
[root@aminglinux01 ~]#

Explanation:

  • nameserver: the IP of the DNS server, which is simply the ClusterIP of the kube-dns Service.
  • search: the suffix list used when resolving short names; the more entries configured, the more lookup attempts a resolution may need. With the three suffixes default.svc.cluster.local, svc.cluster.local and cluster.local, up to 8 queries (4 each for IPv4 and IPv6) may be issued before the final answer arrives. The value differs between namespaces.
  • options: resolver options, supporting multiple key:value pairs. With ndots:5, a name containing 5 or more dots is treated as fully qualified and resolved as-is; a name with fewer dots has the search suffixes appended before querying.
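To see the search list and ndots in action, resolve the bare Service name from inside a Pod: since ngx-svc has fewer than 5 dots, the resolver appends default.svc.cluster.local first and the short name resolves. A quick check, reusing the ng-deploy Pod from above (getent is used here because the nginx image may not ship nslookup):

kubectl exec -it ng-deploy-6d94878b66-8t2hq -- getent hosts ngx-svc
## expected output: the ClusterIP of ngx-svc, e.g.
## 10.15.157.72    ngx-svc.default.svc.cluster.local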

DNS configuration

The DNS configuration can be read from the coredns ConfigMap:

[root@aminglinux01 ~]# kubectl describe cm coredns -n kube-system
Name:         coredns
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
Corefile:
----
.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}


BinaryData
====

Events:  <none>
[root@aminglinux01 ~]# 

Notes:
errors: log errors to standard output.
health: CoreDNS's own health report, listening on port 8080 by default and typically used for liveness checks. The status is available at http://10.18.206.207:8080/health (10.18.206.207 is the IP of one of the coredns Pods).
ready: plugin readiness report, listening on port 8181 by default and typically used for readiness checks; http://10.18.206.207:8181/ready returns 200 once all plugins are up.
kubernetes: the CoreDNS kubernetes plugin, providing in-cluster service name resolution.
prometheus: CoreDNS's metrics endpoint; http://10.15.0.10:9153/metrics serves Prometheus-format monitoring data (10.15.0.10 is the kube-dns Service IP).
forward (or proxy): forwards queries to upstream DNS servers. In the default configuration, any name outside the kubernetes domain is forwarded to the predefined resolvers (the host's /etc/resolv.conf).
cache: DNS cache duration, in seconds.
loop: loop detection; CoreDNS stops if a forwarding loop is found.
reload: automatically reloads a changed Corefile. After editing the ConfigMap, allow about two minutes for the change to take effect.
loadbalance: a round-robin DNS load balancer that randomizes the order of A, AAAA and MX records in answers.
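These endpoints can be probed directly with curl, using the IPs mentioned above (a coredns Pod IP and the kube-dns Service IP; substitute your own):

curl http://10.18.206.207:8080/health   ## liveness: prints OK
curl http://10.18.206.207:8181/ready    ## readiness: HTTP 200 once all plugins are up
curl http://10.15.0.10:9153/metrics     ## Prometheus-format metrics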

2. The Ingress API resource

With a Service in place, we can reach the matching Pods via the Service's ClusterIP, but only from inside the cluster.

To let external users reach the workload we could use NodePort, which exposes a port on each node, but that is quite inflexible. To solve this, K8s introduced a new API resource, Ingress: a layer-7 load balancer, similar to Nginx.

Three concepts: Ingress, Ingress Controller, and IngressClass.
Ingress: defines the concrete routing rules, i.e. what access behavior you want to achieve;
Ingress Controller: the component that implements the rules an Ingress defines; in K8s it runs as concrete Pods;

IngressClass: the mediator between Ingress and Ingress Controller. Its purpose is that, when several Ingress Controllers exist, an Ingress and an Ingress Controller stay decoupled rather than directly linked, and are associated through the IngressClass instead.

Ingress YAML example:

vi mying.yaml

[root@aminglinux01 ~]# cat mying.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mying ## name of this Ingress
spec:
  ingressClassName: myingc ## the IngressClass to associate with
  rules: ## the concrete routing rules
  - host: aminglinux.com ## the target domain
    http:
      paths:
      - path: /
        pathType: Exact
        backend: ## the backend Service object
          service:
            name: ngx-svc
            port:
              number: 80
[root@aminglinux01 ~]# 

Apply and inspect the Ingress:

kubectl apply -f mying.yaml

kubectl get ing

kubectl describe ing mying

[root@aminglinux01 ~]# kubectl apply -f mying.yaml 
ingress.networking.k8s.io/mying created
[root@aminglinux01 ~]# kubectl get ing
NAME    CLASS    HOSTS            ADDRESS   PORTS   AGE
mying   myingc   aminglinux.com             80      11s
[root@aminglinux01 ~]# kubectl decribe ing mying
error: unknown command "decribe" for "kubectl"

Did you mean this?
	describe
[root@aminglinux01 ~]# kubectl describe ing mying
Name:             mying
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    myingc
Default backend:  <default>
Rules:
  Host            Path  Backends
  ----            ----  --------
  aminglinux.com  
                  /   ngx-svc:80 (10.18.206.207:80,10.18.68.140:80)
Annotations:      <none>
Events:           <none>
[root@aminglinux01 ~]# 

IngressClass YAML example:

vi myingc.yaml

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: myingc
spec:
  controller: nginx.org/ingress-controller ## which controller to use

Check the IngressClass:

kubectl get ingressclass

[root@aminglinux01 ~]# kubectl get ingressclass
NAME     CONTROLLER                     PARAMETERS   AGE
myingc   nginx.org/ingress-controller   <none>       31s
[root@aminglinux01 ~]# 

Install the ingress-controller (using the official Nginx one from https://github.com/nginxinc/kubernetes-ingress):

curl -O 'https://gitee.com/aminglinux/linux_study/raw/master/k8s/ingress.tar.gz'
tar zxf ingress.tar.gz
cd ingress
./setup.sh ## note: this script deploys several ingress-related resources, including a namespace, configmap, secret, etc.

[root@aminglinux01 ~]# curl -O 'https://gitee.com/aminglinux/linux_study/raw/master/k8s/ingress.tar.gz'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9434    0  9434    0     0  33935      0 --:--:-- --:--:-- --:--:-- 33935
[root@aminglinux01 ~]# tar zxf ingress.tar.gz 
[root@aminglinux01 ~]# cd ingress/
[root@aminglinux01 ingress]# 
[root@aminglinux01 ingress]# ./setup.sh
namespace/nginx-ingress created
serviceaccount/nginx-ingress created
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
secret/default-server-secret created
configmap/nginx-config created
namespace/nginx-ingress unchanged
serviceaccount/nginx-ingress unchanged
customresourcedefinition.apiextensions.k8s.io/globalconfigurations.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/policies.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/transportservers.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/virtualserverroutes.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/virtualservers.k8s.nginx.org created
[root@aminglinux01 ingress]# 

 vi ingress-controller.yaml

[root@aminglinux01 ingress]# cat ingress-controller.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-ing
  namespace: nginx-ingress

spec:
  replicas: 1
  selector:
    matchLabels:
      app: ngx-ing

  template:
    metadata:
      labels:
        app: ngx-ing
      #annotations:
        #prometheus.io/scrape: "true"
        #prometheus.io/port: "9113"
        #prometheus.io/scheme: http
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/daliyused/nginx-ingress:2.2-alpine
        imagePullPolicy: IfNotPresent
        name: ngx-ing
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: readiness-port
          containerPort: 8081
        - name: prometheus
          containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -ingress-class=myingc
          - -health-status
          - -ready-status
          - -nginx-status
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tlssecret=$(POD_NAMESPACE)/default-server-secret
[root@aminglinux01 ingress]# 

Apply the YAML:

kubectl apply -f ingress-controller.yaml

[root@aminglinux01 ingress]# kubectl apply -f ingress-controller.yaml
deployment.apps/ngx-ing created

Check the Pod and Deployment:

kubectl get po -n nginx-ingress
kubectl get deploy -n nginx-ingress

Map the ingress Pod's port onto the master for a temporary test:

kubectl port-forward -n nginx-ingress ngx-ing-55cddf555-5sj4k 8888:80 &

Before testing, you can edit the /usr/share/nginx/html/index.html file in each of the two ng-deploy Pods so the two Pods are distinguishable, as shown below.
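For example, you could write a distinct page into each replica (a sketch; the Pod names are the ng-deploy replicas from this cluster, adjust to yours):

kubectl exec ng-deploy-6d94878b66-8t2hq -- sh -c 'echo pod-8t2hq > /usr/share/nginx/html/index.html'
kubectl exec ng-deploy-6d94878b66-gh95m -- sh -c 'echo pod-gh95m > /usr/share/nginx/html/index.html'

Repeating the curl test below should then alternate between the two pages.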

Test:

curl -x127.0.0.1:8888 aminglinux.com
or:
curl -H 'Host: aminglinux.com' http://127.0.0.1:8888

Port-forwarding the ingress Pod and reaching it through one node's IP is only a stopgap. What does a proper setup look like? There are three common schemes:
1) Deployment + LoadBalancer Service
A good fit when the ingress runs in a public cloud. Deploy the ingress-controller as a Deployment and create a Service of type LoadBalancer pointing at those Pods. Most public clouds automatically provision a load balancer for a LoadBalancer Service, usually with a public address attached; point your DNS at that address and the cluster's services are exposed.
2) Deployment + NodePort Service (see the sketch after this list)
Deploy the ingress-controller as a Deployment as before, but create a Service of type NodePort, which exposes the ingress on a port of every node's IP. Because the NodePort is allocated from a high, random range, a load balancer is usually placed in front to forward requests. This suits environments where the hosts and their IPs are relatively fixed. NodePort is simple and convenient, but it adds a NAT hop, which can cost some performance under heavy request volume.
3) DaemonSet + HostNetwork + nodeSelector
Deploy the ingress-controller with a DaemonSet plus a nodeSelector onto specific nodes, and use HostNetwork to share the host's network directly (conceptually like the kubectl port-forward stopgap above), so the service is reachable on the host's ports 80/443. The nodes running the ingress-controller then resemble the edge nodes of a traditional architecture, like the nginx boxes at a datacenter entrance. This gives the shortest request path and better performance than NodePort; the drawback is that, since it occupies the host's network and ports, only one ingress-controller Pod can run per node. It suits high-concurrency production environments.

3. Understanding Kubernetes scheduling

The K8s scheduler, kube-scheduler, places newly created Pods onto suitable nodes in the cluster. Its scheduling algorithm is very flexible and can be customized for different needs, such as resource limits, affinity and anti-affinity.

1) How kube-scheduler works:

Watch the API Server: kube-scheduler watches Pod objects on the API Server to learn which Pods need scheduling, fetching each Pod's information (labels, resource requests, etc.) through the REST API.
Filter candidate nodes: based on the Pod's resource requests and constraints (e.g. required node labels), it filters the nodes registered in the cluster down to those that qualify.
Score nodes: it computes a score for every candidate node to decide which is the best fit. How scores are computed is determined by the scheduling algorithm; by default, factors such as node resource utilization and network latency are taken into account.
Pick a node: the highest-scoring node wins and the Pod is bound to it; if several nodes tie, one of them is picked at random.
Update the API Server: the scheduler updates the Pod object on the API Server, writing the chosen node into the Pod's spec, and the kubelet on that node then binds the Pod and starts its containers.
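You can observe this pipeline from the outside through the scheduler's events; for example:

kubectl get events --field-selector reason=Scheduled        ## successful placements
kubectl get events --field-selector reason=FailedScheduling ## Pods no node could take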

2) kube-scheduler's internal flow

① The scheduler registers client-go informer handlers to watch pod and node change events from the api-server, caching pod and node information in the informers.
② The informer handlers push events into the ActiveQ (ActiveQ, UnschedulableQ and PodBackoffQ are the three scheduling queues; ActiveQ is a heap ordered by Pod priority, and each scheduling cycle pops the highest-priority Pod from it).
③ The scheduling loop takes the next Pod to schedule from ActiveQ via the NextPod method.
④ The scheduling algorithm matches and scores nodes against the Pod to determine the target node.
⑤ If the scheduler errors out or fails, sched.Error is called and the Pod is written to the UnschedulableQ.
⑥ Once a Pod's unschedulable time exceeds the backoff period, it moves from UnschedulableQ to PodBackoffQ.
⑦ client-go sends a bind request to the api-server, making the binding asynchronous.

Binding is an asynchronous process: the scheduler first creates an "assumed" Pod in its cache, identical to the original, to simulate a completed binding (e.g. it sets the assumed Pod's NodeName to the chosen node) while issuing the actual bind asynchronously. Before a Pod is bound to a node, the scheduler must make sure its volumes are bound; once all pre-binding work is confirmed, it sends a Bind object to the api-server, and the kubelet on that node starts the Pod.

3) Computing node scores

Node scoring is implemented by the scheduler's algorithm rather than being fixed. By default, kube-scheduler's score is computed from the following aspects:

  • Node resource utilization: each node's CPU and memory utilization factors into its score; nodes with lower utilization score higher.
  • Number of Pods on the node: the number of Pods already running on a node factors into its score. If a node already hosts many Pods, a new Pod risks resource contention and congestion, so that node scores lower.
  • Pod-node affinity and anti-affinity: if the Pod has affinity with a node (e.g. it requires a particular node label, or the node is in the Pod's zone), that node scores higher; if there is anti-affinity (e.g. the Pod must not share a node with certain other Pods), the score drops accordingly.
  • Network latency between nodes: inter-node network latency factors in; nodes with lower latency score higher.
  • Pod priority: the Pod's priority factors in; if the Pod has high priority, e.g. it is part of a critical workload, the node's score rises accordingly.

The relative weights of these factors can be tuned through kube-scheduler's command-line flags or its configuration file. Note that the scheduler's algorithm is extensible: custom scheduling algorithms can be written to compute node scores as needed.
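For example, the weight of individual score plugins can be adjusted in a KubeSchedulerConfiguration file passed to kube-scheduler via --config (a minimal sketch, not taken from this cluster; the plugin names are from the default plugin set):

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation  ## prefer balanced CPU/memory usage
        weight: 2
      - name: ImageLocality                    ## prefer nodes that already have the image
        weight: 1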

4) Scheduling policies

Default policy (DefaultPreemption): kube-scheduler's default behavior; its basic principle is to pick the lowest-cost node that satisfies the Pod's requirements. If no such node exists, it may preempt, i.e. evict already-scheduled Pods to make room for the new one.
Priority-based scheduling (Priority): sorts nodes by Pod priority so higher-priority Pods are preferred; configured by assigning the Pod a PriorityClass (see the example after this list).

Node affinity (NodeAffinity): selects nodes matching the labels or other conditions the Pod requires, configured via nodeAffinity in the Pod definition.
Pod affinity (PodAffinity): selects nodes hosting other Pods similar to this one, based on labels and other conditions, configured via podAffinity in the Pod definition.

Pod anti-affinity (PodAntiAffinity): selects nodes hosting Pods unlike this one, to avoid co-locating similar Pods on the same node; configured via podAntiAffinity in the Pod definition.
Resource-limit scheduling (ResourceLimits): selects the node with the most available resources to satisfy the Pod's resource needs, configured via resource limits in the Pod definition.
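As a concrete example of the Priority policy, a PriorityClass plus a Pod that references it could look like this (a sketch; the names are illustrative):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000            ## higher value = higher priority
globalDefault: false
description: "for critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx:latest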

4. The node selector: NodeSelector

NodeSelector places a Pod onto nodes whose labels match the ones it defines. Example:

[root@aminglinux01 ~]# cat nodeselector.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  containers:
  - name: nginx-ssd
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
  nodeSelector:
    disktype: ssd
[root@aminglinux01 ~]#

Apply the YAML:

kubectl apply -f nodeselector.yaml

[root@aminglinux01 ~]# kubectl apply -f nodeselector.yaml
pod/nginx-ssd created
[root@aminglinux01 ~]# 

Check the Pod's status:

kubectl describe po nginx-ssd

[root@aminglinux01 ~]# kubectl describe po nginx-ssd
Name:             nginx-ssd
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           <none>
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  nginx-ssd:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tmzbj (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-tmzbj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              disktype=ssd
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  32s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
[root@aminglinux01 ~]# 

Label the node:

kubectl label node aminglinux02 disktype=ssd

[root@aminglinux01 ~]# kubectl label node aminglinux02 disktype=ssd
node/aminglinux02 labeled
[root@aminglinux01 ~]# 

Check the node labels:

kubectl get node --show-labels

[root@aminglinux01 ~]# kubectl get node --show-labels
NAME           STATUS   ROLES           AGE   VERSION   LABELS
aminglinux01   Ready    control-plane   10d   v1.26.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=aminglinux01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
aminglinux02   Ready    <none>          10d   v1.26.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=aminglinux02,kubernetes.io/os=linux
aminglinux03   Ready    <none>          10d   v1.26.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=aminglinux03,kubernetes.io/os=linux
[root@aminglinux01 ~]# 

Check the Pod again:

kubectl describe po nginx-ssd |grep -i node

[root@aminglinux01 ~]# kubectl describe po nginx-ssd |grep -i node
Node:             aminglinux02/192.168.100.152
Node-Selectors:              disktype=ssd
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Warning  FailedScheduling  3m35s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
[root@aminglinux01 ~]# 
[root@aminglinux01 ~]# kubectl describe po nginx-ssd
Name:             nginx-ssd
Namespace:        default
Priority:         0
Service Account:  default
Node:             aminglinux02/192.168.100.152
Start Time:       Mon, 15 Jul 2024 05:31:35 +0800
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: c58eea849ef758e97acb957b0550f3bfbe5e81e8538fb8142c69da04452ccf3a
                  cni.projectcalico.org/podIP: 10.18.206.233/32
                  cni.projectcalico.org/podIPs: 10.18.206.233/32
Status:           Running
IP:               10.18.206.233
IPs:
  IP:  10.18.206.233
Containers:
  nginx-ssd:
    Container ID:   containerd://2e0478a27f5c82c716e84bd28f3edb07e497d2146ff93af4582c6f205d69e9fc
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 15 Jul 2024 05:31:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tmzbj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-tmzbj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              disktype=ssd
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  4m1s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
  Normal   Scheduled         99s   default-scheduler  Successfully assigned default/nginx-ssd to aminglinux02
  Normal   Pulled            98s   kubelet            Container image "nginx:latest" already present on machine
  Normal   Created           98s   kubelet            Created container nginx-ssd
  Normal   Started           98s   kubelet            Started container nginx-ssd
[root@aminglinux01 ~]# 

5. Node affinity: NodeAffinity

Node affinity (NodeAffinity) also targets nodes; the goal is to place Pods onto nodes that meet the stated requirements.
Key fields:
requiredDuringSchedulingIgnoredDuringExecution: hard matching, must be satisfied;
preferredDuringSchedulingIgnoredDuringExecution: soft matching, satisfied as far as possible but not guaranteed.

Example:

apiVersion: v1
kind: Pod
metadata:
   name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:     ## the rules below must be satisfied
        nodeSelectorTerms:
        - matchExpressions:
          - key: env
            operator: In ## supported operators: In, NotIn, Exists, DoesNotExist, Gt, Lt
            values:
            - test
            - dev
      preferredDuringSchedulingIgnoredDuringExecution:     ## satisfied when possible, not guaranteed
      - weight: 1
        preference:
          matchExpressions:
          - key: project
            operator: In
            values:
            - aminglinux
  containers:
  - name: with-node-affinity
    image: redis:6.0.6

Notes on the matching logic (see the fragment below):
① If both nodeSelector and nodeAffinity are specified, both must be satisfied;
② If nodeAffinity specifies multiple nodeSelectorTerms, satisfying any one of them is enough;
③ If a nodeSelectorTerms entry contains multiple matchExpressions, all of them must be satisfied.
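The OR-versus-AND distinction of points ② and ③ is visible in the structure itself. This fragment (a sketch, to be placed under a Pod's spec.affinity) matches nodes with env=test AND disktype=ssd, OR nodes with project=aminglinux:

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:      ## term 1: both expressions must match
      - key: env
        operator: In
        values: ["test"]
      - key: disktype
        operator: In
        values: ["ssd"]
    - matchExpressions:      ## term 2: an alternative on its own
      - key: project
        operator: In
        values: ["aminglinux"]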

Demo:
Edit the Pod YAML:

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
    - name: my-container
      image: nginx:latest
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: special-node
                operator: Exists

Label one of the nodes:

kubectl label nodes aminglinux03 special-node=true

[root@aminglinux01 ~]# kubectl label nodes aminglinux03 special-node=true
node/aminglinux03 labeled

Apply the Pod YAML:

kubectl apply -f nodeAffinity.yaml

[root@aminglinux01 ~]# kubectl apply -f nodeAffinity.yaml
pod/node-affinity created

Check which node the Pod landed on:

kubectl get po -o wide

[root@aminglinux01 ~]# kubectl get po -o wide
NAME                         READY   STATUS      RESTARTS         AGE     IP              NODE           NOMINATED NODE   READINESS GATES
ds-demo-7kqhx                1/1     Running     0                6d2h    10.18.68.146    aminglinux03   <none>           <none>
ds-demo-js2rl                1/1     Running     0                6d2h    10.18.206.212   aminglinux02   <none>           <none>
ds-demo-pkpb6                1/1     Running     1 (2d4h ago)     6d2h    10.18.61.15     aminglinux01   <none>           <none>
job-demo-fg2pg               0/1     Completed   0                5d11h   10.18.206.215   aminglinux02   <none>           <none>
lucky-6cdcf8b9d4-qslbj       1/1     Running     2 (6d4h ago)     9d      10.18.68.141    aminglinux03   <none>           <none>
ng-deploy-6d94878b66-8t2hq   1/1     Running     2 (6d4h ago)     7d3h    10.18.68.140    aminglinux03   <none>           <none>
ng-deploy-6d94878b66-gh95m   1/1     Running     2 (6d4h ago)     7d3h    10.18.206.207   aminglinux02   <none>           <none>
nginx-ssd                    1/1     Running     0                19m     10.18.206.233   aminglinux02   <none>           <none>
ngnix                        1/1     Running     2 (6d4h ago)     9d      10.18.206.203   aminglinux02   <none>           <none>
node-affinity                1/1     Running     0                22s     10.18.68.161    aminglinux03   <none>           <none>
pod-demo                     1/1     Running     2 (6d4h ago)     9d      10.18.206.202   aminglinux02   <none>           <none>
pod-demo1                    1/1     Running     2 (6d4h ago)     9d      10.18.68.139    aminglinux03   <none>           <none>
redis-sts-0                  1/1     Running     0                5d1h    10.18.206.255   aminglinux02   <none>           <none>
redis-sts-1                  1/1     Running     0                6d1h    10.18.68.148    aminglinux03   <none>           <none>
testpod                      1/1     Running     0                5d7h    10.18.206.236   aminglinux02   <none>           <none>
testpod2                     1/1     Running     647 (112s ago)   5d1h    10.18.68.177    aminglinux03   <none>           <none>
[root@aminglinux01 ~]# 
