
statefulset.apps/web created

  2. Inspect the Service and StatefulSet that were just created

View the Service:

kubectl get service nginx

[root@k8s-master statefulset]# kubectl get service nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP None <none> 80/TCP 7m49s

View the StatefulSet (short name: sts):

kubectl get statefulset web

[root@k8s-master statefulset]# kubectl get statefulset web
NAME READY AGE
web 2/2 32s

  3. View the created Pods

[root@k8s-master statefulset]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-deploy-754898b577-8ns26 1/1 Running 0 5h14m
nginx-deploy-754898b577-g9q9h 1/1 Running 0 5h14m
nginx-deploy-754898b577-ksfbw 1/1 Running 0 5h14m
nginx-deploy-754898b577-rwbxg 1/1 Running 0 4h44m
nginx-deploy-754898b577-xc88j 1/1 Running 0 4h44m
nginx-deploy-754898b577-xtmmc 1/1 Running 0 4h44m
web-0 1/1 Running 0 2m25s
web-1 1/1 Running 0 2m23s

List the created Pods with the label selector; note that they are ordered:

kubectl get pods -l app=nginx

[root@k8s-master statefulset]# kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 4m10s
web-1 1/1 Running 0 4m8s

  4. Test that the service is reachable (check these Pods' DNS records)

Run a new Pod based on the busybox toolbox image; the nslookup tool inside it shows the DNS records:

kubectl run -i --tty --image busybox:1.28.4 dns-test /bin/sh

nslookup web-0.nginx

[root@k8s-master statefulset]# kubectl run -i --tty --image busybox:1.28.4 dns-test /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: web-0.nginx
Address 1: 10.244.36.97 web-0.nginx.default.svc.cluster.local
/ #
/ # nslookup web-1.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: web-1.nginx
Address 1: 10.244.169.153 web-1.nginx.default.svc.cluster.local
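
Each Pod in a StatefulSet gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local via the headless Service. A quick sketch of using it from the same dns-test shell (the wget target assumes the default nginx welcome page is being served):

nslookup web-1.nginx.default.svc.cluster.local
wget -qO- http://web-0.nginx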

3.2 Scaling up / down

Scaling is triggered only after the replicas attribute in the StatefulSet configuration is modified.

Modifying attributes other than replicas, or only editing the local file /opt/k8s/statefulset/web.yaml, does not trigger it.

3.2.1 Scaling up

Via a command:

kubectl scale statefulset web --replicas=5

Or by editing the configuration (change the value of spec.replicas):

kubectl edit statefulset web
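
A one-line patch works as well and is easier to script; a sketch:

kubectl patch statefulset web -p '{"spec":{"replicas":5}}'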

Replica count before and after scaling up:

[root@k8s-master ~]# kubectl get sts
NAME READY AGE
web 2/2 22h

[root@k8s-master ~]# kubectl scale statefulset web --replicas=5
statefulset.apps/web scaled

[root@k8s-master ~]# kubectl get sts
NAME READY AGE
web 5/5 22h

Details of the scale-up (web-2, web-3, and web-4 are created in order):

[root@k8s-master ~]# kubectl describe sts web
Name: web
Namespace: default
CreationTimestamp: Fri, 29 Dec 2023 22:30:25 +0800
Selector: app=nginx
Labels:
Annotations:
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:
Volume Claims:
Events:
Type Reason Age From Message


Normal SuccessfulCreate 107s statefulset-controller create Pod web-2 in StatefulSet web successful
Normal SuccessfulCreate 105s statefulset-controller create Pod web-3 in StatefulSet web successful
Normal SuccessfulCreate 103s statefulset-controller create Pod web-4 in StatefulSet web successful

3.2.2 Scaling down

The approach is the same as in 3.2.1 Scaling up; here we scale down to 2 and watch what changes.

kubectl scale statefulset web --replicas=2

Replica count before and after scaling down:

[root@k8s-master ~]# kubectl get sts
NAME READY AGE
web 5/5 22h

[root@k8s-master ~]# kubectl scale statefulset web --replicas=2
statefulset.apps/web scaled

[root@k8s-master ~]# kubectl get sts
NAME READY AGE
web 3/2 22h
[root@k8s-master ~]# kubectl get sts
NAME READY AGE
web 2/2 22h

Details of the scale-down (Pods are deleted from the highest ordinal down: web-4, then web-3, then web-2, leaving only web-0 and web-1):

[root@k8s-master ~]# kubectl describe sts web
Name: web
Namespace: default
CreationTimestamp: Fri, 29 Dec 2023 22:30:25 +0800
Selector: app=nginx
Labels:
Annotations:
Replicas: 2 desired | 2 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:
Volume Claims:
Events:
Type Reason Age From Message


Normal SuccessfulCreate 17m statefulset-controller create Pod web-2 in StatefulSet web successful
Normal SuccessfulCreate 17m statefulset-controller create Pod web-3 in StatefulSet web successful
Normal SuccessfulCreate 17m statefulset-controller create Pod web-4 in StatefulSet web successful
Normal SuccessfulDelete 12s statefulset-controller delete Pod web-4 in StatefulSet web successful
Normal SuccessfulDelete 10s statefulset-controller delete Pod web-3 in StatefulSet web successful
Normal SuccessfulDelete 9s statefulset-controller delete Pod web-2 in StatefulSet web successful

3.3 Image updates

An update is triggered only after a field under template in the StatefulSet configuration is modified.

Modifying fields outside template, or only editing the local file /opt/k8s/statefulset/web.yaml, does not trigger it.

The recommended way is to edit the configuration. (After the scale-down in 3.2.2 only web-0 and web-1 remain; now change the image value under template from 1.7.9 to 1.9.1.)

kubectl edit statefulset web
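
A non-interactive alternative to kubectl edit is kubectl set image; a sketch, using the container name nginx from the configuration in 3.6:

kubectl set image statefulset/web nginx=nginx:1.9.1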

Revision changes:

[root@k8s-master ~]# kubectl rollout history sts web
statefulset.apps/web
REVISION CHANGE-CAUSE
1 <none>

[root@k8s-master ~]# kubectl rollout history sts web --revision=1
statefulset.apps/web with revision #1
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:

[root@k8s-master ~]# kubectl edit sts web
statefulset.apps/web edited

[root@k8s-master ~]# kubectl rollout history sts web
statefulset.apps/web
REVISION CHANGE-CAUSE
1 <none>
2 <none>

[root@k8s-master ~]# kubectl rollout history sts web --revision=2
statefulset.apps/web with revision #2
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:
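
If the new revision misbehaves, the revision history above can be used to roll back; a sketch:

kubectl rollout undo sts web                  # back to the previous revision
kubectl rollout undo sts web --to-revision=1  # or to a specific revision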

Details of the image update (the last four events show web-1 deleted then recreated, followed by web-0 deleted then recreated):

[root@k8s-master ~]# kubectl describe sts web
Name: web
Namespace: default
CreationTimestamp: Fri, 29 Dec 2023 22:30:25 +0800
Selector: app=nginx
Labels:
Annotations:
Replicas: 2 desired | 2 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:
Volume Claims:
Events:
Type Reason Age From Message


Normal SuccessfulCreate 36m statefulset-controller create Pod web-2 in StatefulSet web successful
Normal SuccessfulCreate 36m statefulset-controller create Pod web-3 in StatefulSet web successful
Normal SuccessfulCreate 36m statefulset-controller create Pod web-4 in StatefulSet web successful
Normal SuccessfulDelete 19m statefulset-controller delete Pod web-4 in StatefulSet web successful
Normal SuccessfulDelete 19m statefulset-controller delete Pod web-3 in StatefulSet web successful
Normal SuccessfulDelete 19m statefulset-controller delete Pod web-2 in StatefulSet web successful
Normal SuccessfulDelete 7m11s statefulset-controller delete Pod web-1 in StatefulSet web successful
Normal SuccessfulCreate 7m10s (x2 over 22h) statefulset-controller create Pod web-1 in StatefulSet web successful
Normal SuccessfulDelete 7m8s statefulset-controller delete Pod web-0 in StatefulSet web successful
Normal SuccessfulCreate 7m6s (x2 over 22h) statefulset-controller create Pod web-0 in StatefulSet web successful

3.3.1 RollingUpdate

A StatefulSet can also use a rolling-update strategy: as above, modifying a template field triggers the update, but because the Pods are ordered, a StatefulSet updates them in reverse ordinal order.
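
To watch the reverse-order roll as it happens, something like the following can be used:

kubectl rollout status sts web      # blocks until the rollout completes
kubectl get po -l app=nginx -w      # watch Pods recreate from the highest ordinal down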

3.3.2 Gray release / canary release

The partition field under updateStrategy.rollingUpdate can be used for a simple gray (canary) release. The aim is to keep the impact of any problem introduced by a release as small as possible.

With this mechanism we can control the partition value so that only some of the Pods are updated; once those are confirmed healthy, we progressively enlarge the set of updated Pods until all of them run the new version.

updateStrategy:
  rollingUpdate:
    partition: 0
  type: RollingUpdate

For example, with 5 Pods and partition set to 3, a rolling update only updates the Pods whose ordinal is >= 3. (As always in a StatefulSet, the update runs in reverse ordinal order.)

Once the Pods with ordinal >= 3 are updated, setting partition to 2 or 1 continues the update to the Pods with ordinal >= 2 or >= 1, stepping gradually down to 0.
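
The partition value can be changed with kubectl edit as in the steps below, or non-interactively with a patch; a sketch:

kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'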

Steps

  1. Scale the web StatefulSet out to 5 replicas (web-0 through web-4 all run image 1.9.1):

kubectl scale statefulset web --replicas=5

  2. Change partition under updateStrategy.rollingUpdate from 0 to 3, and change the image from 1.9.1 to 1.7.9:

kubectl edit statefulset web

  3. Check each Pod's image. (Only web-4 and web-3 changed from 1.9.1 to 1.7.9; web-2, web-1, and web-0 still run 1.9.1.)

Inspect web-4 and web-3, taking web-3 as the example:

kubectl describe po web-4
kubectl describe po web-3

[root@k8s-master statefulset]# kubectl describe po web-3
Name: web-3
Namespace: default
Priority: 0
Node: k8s-node1/192.168.3.242
Start Time: Sun, 31 Dec 2023 09:39:49 +0800
Labels: app=nginx
controller-revision-hash=web-6c5c7fd59b
statefulset.kubernetes.io/pod-name=web-3
Annotations: cni.projectcalico.org/containerID: 3d2d85e0bfc230a058952778c01b1f32d6b780dbfb1186d108e24cc33e1da107
cni.projectcalico.org/podIP: 10.244.36.78/32
cni.projectcalico.org/podIPs: 10.244.36.78/32
Status: Running
IP: 10.244.36.78
IPs:
IP: 10.244.36.78
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://a515f130287700a6dc9a5feb6fa180ea8b91d4eb47051aee5e731169c4b9f5e1
Image: nginx:1.7.9
Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 31 Dec 2023 09:39:50 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7cznh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-7cznh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 112s default-scheduler Successfully assigned default/web-3 to k8s-node1
Normal Pulled 111s kubelet Container image "nginx:1.7.9" already present on machine
Normal Created 111s kubelet Created container nginx
Normal Started 111s kubelet Started container nginx

Inspect web-2, web-1, and web-0, taking web-2 as the example:

kubectl describe po web-2
kubectl describe po web-1
kubectl describe po web-0

[root@k8s-master statefulset]# kubectl describe po web-2
Name: web-2
Namespace: default
Priority: 0
Node: k8s-node1/192.168.3.242
Start Time: Sun, 31 Dec 2023 09:34:14 +0800
Labels: app=nginx
controller-revision-hash=web-6bc849cb6b
statefulset.kubernetes.io/pod-name=web-2
Annotations: cni.projectcalico.org/containerID: 0ba36375d747e1055a0215fc41520a3084622a995053af5055f083d08a37a547
cni.projectcalico.org/podIP: 10.244.36.76/32
cni.projectcalico.org/podIPs: 10.244.36.76/32
Status: Running
IP: 10.244.36.76
IPs:
IP: 10.244.36.76
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://d14a9dedbbb33b86a45e7feb4717fb4b4b5def92507a7f0b92e601132634988f
Image: nginx:1.9.1
Image ID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 31 Dec 2023 09:34:15 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-544jm (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-544jm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 7m40s default-scheduler Successfully assigned default/web-2 to k8s-node1
Normal Pulled 7m39s kubelet Container image "nginx:1.9.1" already present on machine
Normal Created 7m39s kubelet Created container nginx
Normal Started 7m39s kubelet Started container nginx

  4. Next, change partition under updateStrategy.rollingUpdate from 3 to 1, keeping the template image change from 1.9.1 to 1.7.9.
  5. Check each Pod's image again. (Now web-2 and web-1, in addition to web-4 and web-3, have moved from 1.9.1 to 1.7.9; web-0 still runs 1.9.1.)

Inspect web-2 and web-1, taking web-1 as the example:

[root@k8s-master statefulset]# kubectl describe po web-1
Name: web-1
Namespace: default
Priority: 0
Node: k8s-node2/192.168.3.243
Start Time: Sun, 31 Dec 2023 09:50:11 +0800
Labels: app=nginx
controller-revision-hash=web-6c5c7fd59b
statefulset.kubernetes.io/pod-name=web-1
Annotations: cni.projectcalico.org/containerID: d386a2c85ea388ce70b9a98ccb64032d4364ad548a0d59bc751ca91ac33c6e9b
cni.projectcalico.org/podIP: 10.244.169.143/32
cni.projectcalico.org/podIPs: 10.244.169.143/32
Status: Running
IP: 10.244.169.143
IPs:
IP: 10.244.169.143
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://a40879bd7a3d561a73e026ff039745f3587427c2725fcda649938be6adaeed2a
Image: nginx:1.7.9
Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 31 Dec 2023 09:50:12 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qrsdj (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-qrsdj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 17s default-scheduler Successfully assigned default/web-1 to k8s-node2
Normal Pulled 16s kubelet Container image "nginx:1.7.9" already present on machine
Normal Created 16s kubelet Created container nginx
Normal Started 16s kubelet Started container nginx

Inspect web-0:

[root@k8s-master statefulset]# kubectl describe po web-0
Name: web-0
Namespace: default
Priority: 0
Node: k8s-node2/192.168.3.243
Start Time: Sun, 31 Dec 2023 09:34:18 +0800
Labels: app=nginx
controller-revision-hash=web-6bc849cb6b
statefulset.kubernetes.io/pod-name=web-0
Annotations: cni.projectcalico.org/containerID: cac2f3c8afa1daf2b2d4805fe1c92356aa7c8f6fa5f6bd07acc4d3a50be7c41c
cni.projectcalico.org/podIP: 10.244.169.141/32
cni.projectcalico.org/podIPs: 10.244.169.141/32
Status: Running
IP: 10.244.169.141
IPs:
IP: 10.244.169.141
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://536426425e1524fd3d71c318aaddd884867f7bb68d9ced331e587c9799f713b7
Image: nginx:1.9.1
Image ID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 31 Dec 2023 09:34:19 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2pgwf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-2pgwf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 16m default-scheduler Successfully assigned default/web-0 to k8s-node2
Normal Pulled 16m kubelet Container image "nginx:1.9.1" already present on machine
Normal Created 16m kubelet Created container nginx
Normal Started 16m kubelet Started container nginx

  6. Finally, change partition under updateStrategy.rollingUpdate from 1 to 0; the remaining Pod rolls and the whole image update is complete.

3.3.3 OnDelete

An update happens only when a Pod is deleted: after deleting a Pod, a new Pod with the same name is created from the current template, which is how the update takes effect.

This makes it possible to update only specific, chosen Pods.

updateStrategy:
  # rollingUpdate:
  #   partition: 0
  # type: RollingUpdate
  type: OnDelete

After the canary exercise in 3.3.2 finished, every Pod's image had been changed from 1.9.1 to 1.7.9.

Steps

  1. Comment out the rollingUpdate settings under updateStrategy and change the update strategy type from RollingUpdate to OnDelete (then change the image from 1.7.9 to 1.9.1):

kubectl edit statefulset web

  2. Describe Pod web-4. (Its image is still 1.7.9, and the Events list at the bottom shows no new activity.)

[root@k8s-master statefulset]# kubectl describe po web-4
Name: web-4
Namespace: default
Priority: 0
Node: k8s-node1/192.168.3.242
Start Time: Sun, 31 Dec 2023 09:39:47 +0800
Labels: app=nginx
controller-revision-hash=web-6c5c7fd59b
statefulset.kubernetes.io/pod-name=web-4
Annotations: cni.projectcalico.org/containerID: 5fea7938b6dabd02a07ece3afb77eb827c16bf96f0902ed2d3e84584b41b2b19
cni.projectcalico.org/podIP: 10.244.36.77/32
cni.projectcalico.org/podIPs: 10.244.36.77/32
Status: Running
IP: 10.244.36.77
IPs:
IP: 10.244.36.77
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://d4ead41d6f391de3151bf5bb4b3498418184691c9fc824fda48d25aca8afb28d
Image: nginx:1.7.9
Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 31 Dec 2023 09:39:48 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clh6d (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-clh6d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 25m default-scheduler Successfully assigned default/web-4 to k8s-node1
Normal Pulled 25m kubelet Container image "nginx:1.7.9" already present on machine
Normal Created 25m kubelet Created container nginx
Normal Started 25m kubelet Started container nginx

  3. Delete Pod web-4:

kubectl delete po web-4

[root@k8s-master statefulset]# kubectl delete po web-4
pod "web-4" deleted

  4. Describe web-4 again. (Its image is now 1.9.1, and the Events list shows the change happened about 18s ago.)

[root@k8s-master statefulset]# kubectl describe po web-4
Name: web-4
Namespace: default
Priority: 0
Node: k8s-node1/192.168.3.242
Start Time: Sun, 31 Dec 2023 10:08:53 +0800
Labels: app=nginx
controller-revision-hash=web-6bc849cb6b
statefulset.kubernetes.io/pod-name=web-4
Annotations: cni.projectcalico.org/containerID: 638ae0252ecff158173d47826483023b157695ef5d35bce8db2c775e9b4c4a02
cni.projectcalico.org/podIP: 10.244.36.79/32
cni.projectcalico.org/podIPs: 10.244.36.79/32
Status: Running
IP: 10.244.36.79
IPs:
IP: 10.244.36.79
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://a038da90c25fa18096ba833a0a51a65a27576c552609c3e72c67b397133513fc
Image: nginx:1.9.1
Image ID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 31 Dec 2023 10:08:54 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ccnbf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-ccnbf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 19s default-scheduler Successfully assigned default/web-4 to k8s-node1
Normal Pulled 18s kubelet Container image "nginx:1.9.1" already present on machine
Normal Created 18s kubelet Created container nginx
Normal Started 18s kubelet Started container nginx

  5. Deleting web-3, web-2, web-1, and web-0 in turn rolls the image update out to each of them.

3.4 Deleting a StatefulSet and its associated resources

A StatefulSet is associated with a Service, PVCs, and Pods when created; there is no ReplicaSet (RS) in between.

Cascading deletion
When a StatefulSet is deleted, its associated Pods are deleted with it by default (a cascading delete), but the PVCs and the Service are not.

Cascading delete: deleting the statefulset also deletes its pods:

kubectl delete statefulset web

[root@k8s-master statefulset]# kubectl delete sts web
statefulset.apps "web" deleted

[root@k8s-master statefulset]# kubectl get sts
No resources found in default namespace.

[root@k8s-master statefulset]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d17h
nginx ClusterIP None <none> 80/TCP 87m

[root@k8s-master statefulset]# kubectl get po
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 1 (85m ago) 86m

[root@k8s-master statefulset]# kubectl get pvc
No resources found in default namespace.


Non-cascading deletion: only the StatefulSet object itself is removed; its associated Pods are kept, and the PVCs and Service are not deleted either.

Non-cascading delete: the pods survive, but with the sts gone they are orphaned; deleting such a pod afterwards will not recreate it:

kubectl delete sts web --cascade=orphan

[root@k8s-master statefulset]# kubectl delete sts web --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
statefulset.apps "web" deleted

[root@k8s-master statefulset]# kubectl get sts
No resources found in default namespace.

[root@k8s-master statefulset]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d17h
nginx ClusterIP None <none> 80/TCP 2m42s

[root@k8s-master statefulset]# kubectl get pod
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 1 (92m ago) 93m
web-0 1/1 Running 0 2m50s
web-1 1/1 Running 0 2m48s

[root@k8s-master statefulset]# kubectl get pvc
No resources found in default namespace.

Deleting the Pods:

[root@k8s-master statefulset]# kubectl get po
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 1 (95m ago) 96m
web-0 1/1 Running 0 6m22s
web-1 1/1 Running 0 6m20s

[root@k8s-master statefulset]# kubectl delete po web-0 web-1
pod "web-0" deleted
pod "web-1" deleted

[root@k8s-master statefulset]# kubectl get po
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 1 (96m ago) 97m

Deleting the Service:

[root@k8s-master statefulset]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d17h
nginx ClusterIP None <none> 80/TCP 7m53s

[root@k8s-master statefulset]# kubectl delete svc nginx
service "nginx" deleted

[root@k8s-master statefulset]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d17h

3.5 Deleting the PVCs associated with a StatefulSet

If associated PVCs exist, delete them; if not, there is nothing to delete.

The PVCs survive after the StatefulSet itself is deleted, so once the data is no longer needed they must be removed separately.
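
PVC names follow <volume-claim-template-name>-<pod-name> (the template in 3.6 is named www, hence www-web-0 and www-web-1), so it is easy to check what was left behind first:

kubectl get pvc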

$ kubectl delete pvc www-web-0 www-web-1

3.6 Configuration file (the same one used to create the StatefulSet in 3.1)

Note: the file contains a --- separator, which marks the boundary between the two YAML documents kept in this one file.


apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet                    # a StatefulSet resource
metadata:
  name: web                          # name of the StatefulSet object
spec:
  serviceName: "nginx"               # which Service manages DNS (the Service above, whose metadata.name is nginx)
  replicas: 2
  selector:                          # selector used to find the Pods it manages
    matchLabels:                     # match by label
      app: nginx                     # the label key/value to match
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:                       # ports the container exposes
        - containerPort: 80          # the specific port number inside the container
          name: web                  # name given to this port
        volumeMounts:                # mount volumes
        - name: www                  # which volume to mount
          mountPath: /usr/share/nginx/html   # where to mount it inside the container
  volumeClaimTemplates:              # volume claim templates
  - metadata:                        # claim metadata
      name: www                      # name of the claim
      annotations:                   # claim annotations
        volume.alpha.kubernetes.io/storage-class: anything
    spec:                            # claim spec
      accessModes: [ "ReadWriteOnce" ]   # access mode
      resources:
        requests:
          storage: 1Gi               # requested storage size

4 DaemonSet

A DaemonSet automatically deploys one daemon Pod onto every Node whose labels match the DaemonSet's node binding.

Even when new nodes are added later, as long as their labels match those the DaemonSet is bound to, it deploys a daemon Pod onto them as well.

(Figure: collecting the logs produced by Node1, Node2, and Node3.)

4.1 Configuration file

apiVersion: apps/v1
kind: DaemonSet                      # a DaemonSet resource
metadata:
  name: fluentd                      # name of the DaemonSet
spec:
  selector:
    matchLabels:
      app: logging                   # matches template.metadata.labels.app below
  template:
    metadata:
      labels:
        app: logging
        id: fluentd
      name: fluentd                  # name of the Pod
    spec:
      containers:
      - name: fluentd-es             # container name
        # image: k8s.gcr.io/fluentd-elasticsearch:v1.3.0    # original image, swapped for the one below
        image: agilestacks/fluentd-elasticsearch:v1.3.0     # image the container uses
        env:                         # environment variables
        - name: FLUENTD_ARGS         # env var key
          value: -qq                 # env var value
        volumeMounts:                # mount volumes so the data is not lost
        - name: containers           # volume name
          mountPath: /var/lib/docker/containers   # where to mount it inside the container
        - name: varlog
          mountPath: /varlog
      volumes:                       # volume definitions
      - hostPath:                    # hostPath type, i.e. a directory shared with the node
          path: /var/lib/docker/containers   # shared directory on the node (created automatically if absent)
        name: containers             # name of this volume
      - hostPath:
          path: /var/log
        name: varlog

4.2 Creating the DaemonSet

  1. Create a directory for the DaemonSet:

mkdir -p /opt/k8s/daemonset/

  2. In /opt/k8s/daemonset/, write the configuration file fluentd-ds.yaml (taken from 4.1; no node binding is specified):

(The file contents are identical to the configuration shown in 4.1.)
  3. Create the DaemonSet from the configuration file:

kubectl create -f fluentd-ds.yaml

[root@k8s-master daemonset]# kubectl create -f fluentd-ds.yaml
daemonset.apps/fluentd created

  4. Check the DaemonSet that was created.
    Its READY count is 0. Looking further at the Pods, they are stuck in ContainerCreating or ImagePullBackOff; both come down to a failed image pull, mostly caused by a slow network. The workaround is to pull the image manually with Docker on the nodes. (The pull never succeeded here, so we skip past this for now.)

kubectl get daemonset

kubectl get ds

[root@k8s-master daemonset]# kubectl get daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 2 2 0 2 0 <none> 20s

[root@k8s-master daemonset]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 2 2 0 2 0 <none> 22s

[root@k8s-master daemonset]# kubectl get po
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 1 (9h ago) 9h
fluentd-96ms8 0/1 ContainerCreating 0 5m21s
fluentd-vbttv 0/1 ImagePullBackOff 0 5m21s

[root@k8s-master daemonset]# kubectl describe po fluentd-vbttv
Name: fluentd-vbttv
Namespace: default
Priority: 0
Node: k8s-node1/192.168.3.242
Start Time: Sun, 31 Dec 2023 18:29:32 +0800
Labels: app=logging
controller-revision-hash=b96747bc7
id=fluentd
pod-template-generation=1
Annotations: cni.projectcalico.org/containerID: 7df8b185447eedde818ee0acdebceb1cf02fe85d6b17a409f2749239a010ba93
cni.projectcalico.org/podIP: 10.244.36.83/32
cni.projectcalico.org/podIPs: 10.244.36.83/32
Status: Pending
IP: 10.244.36.83
IPs:
IP: 10.244.36.83
Controlled By: DaemonSet/fluentd
Containers:
fluentd-es:
Container ID:
Image: agilestacks/fluentd-elasticsearch:v1.3.0
Image ID:
Port:
Host Port:
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
FLUENTD_ARGS: -qq
Mounts:
/var/lib/docker/containers from containers (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6cwq (ro)
/varlog from varlog (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
containers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
HostPathType:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
HostPathType:
kube-api-access-q6cwq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message


Normal Scheduled 8m41s default-scheduler Successfully assigned default/fluentd-trxm7 to k8s-node1
Warning Failed 3m36s kubelet Failed to pull image "agilestacks/fluentd-elasticsearch:v1.3.0": rpc error: code = Unknown desc = context canceled
Warning Failed 3m36s kubelet Error: ErrImagePull
Normal BackOff 3m35s kubelet Back-off pulling image "agilestacks/fluentd-elasticsearch:v1.3.0"
Warning Failed 3m35s kubelet Error: ImagePullBackOff
Normal Pulling 3m24s (x2 over 8m40s) kubelet Pulling image "agilestacks/fluentd-elasticsearch:v1.3.0"

  5. Check which nodes the fluentd Pods landed on.
    fluentd-vbttv is on k8s-node1 and fluentd-96ms8 is on k8s-node2; neither is on the k8s-master node.

When a DaemonSet is not bound to specific nodes, it deploys a Pod onto every worker node by default.

[root@k8s-master daemonset]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns-test 1/1 Running 1 (9h ago) 9h 10.244.36.71 k8s-node1 <none> <none>
fluentd-vbttv 0/1 ContainerCreating 0 4m13s <none> k8s-node1 <none> <none>
fluentd-96ms8 0/1 ContainerCreating 0 4m13s <none> k8s-node2 <none> <none>

4.3 Targeting specific Nodes

A DaemonSet ignores a Node's unschedulable state. There are three ways to restrict its Pods to chosen Nodes:

  • nodeSelector: schedule only onto Nodes carrying a matching label.
  • nodeAffinity: a more expressive Node selector that supports, for example, set operations.
  • podAffinity: schedule onto Nodes where Pods matching the given conditions already run.
4.3.1 nodeSelector

You can edit /opt/k8s/daemonset/fluentd-ds.yaml directly, but then the DaemonSet has to be deleted and recreated.

Alternatively, modify the nodeSelector attribute on the live DaemonSet object, which triggers the update automatically. (Recommended.)

  1. k8s-node1 加标签

k8s-node1

kubectl label node k8s-node1 type=microservices

[root@k8s-master daemonset]# kubectl label node k8s-node1 type=microservices
node/k8s-node1 labeled

View the nodes' labels: compared with k8s-node2, k8s-node1 now ends with type=microservices:

kubectl get node --show-labels

[root@k8s-master daemonset]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready control-plane,master 5d16h v1.23.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1 Ready 5d16h v1.23.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,type=microservices
k8s-node2 Ready 5d16h v1.23.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
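
If the label ever needs to be removed again, a trailing dash does it; a sketch:

kubectl label node k8s-node1 type-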

  2. Set nodeSelector in the DaemonSet configuration:

kubectl edit ds fluentd

Example:

spec:
  template:
    spec:
      nodeSelector:
        type: microservices

Full configuration: (screenshot omitted)

  3. Check which node fluentd now runs on. (Because the fluentd image never pulled successfully, fluentd-96ms8 on k8s-node2 is still listed, but it is already Terminating.)

kubectl get ds

[root@k8s-master daemonset]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 1 1 0 0 0 type=microservices 39h

kubectl get po -l app=logging -o wide
[root@k8s-master daemonset]# kubectl get po -l app=logging -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fluentd-96ms8 0/1 Terminating 0 4m27s <none> k8s-node2 <none> <none>
fluentd-vbttv 0/1 ContainerCreating 0 25s <none> k8s-node1 <none> <none>

  1. k8s-node2 加标签

k8s-node2

kubectl label node k8s-node2 type=microservices

[root@k8s-master daemonset]# kubectl label node k8s-node2 type=microservices
node/k8s-node2 labeled

View the nodes' labels again: k8s-node2 now also carries type=microservices:

kubectl get node --show-labels

[root@k8s-master daemonset]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready control-plane,master 6d4h v1.23.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1 Ready 6d4h v1.23.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,type=microservices
k8s-node2 Ready 6d4h v1.23.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,type=microservices

  5. Check the fluentd Pods again. (The image still fails to pull, but fluentd-hshqf and fluentd-vbttv are both in ImagePullBackOff after re-pull attempts, which shows they are now deployed on k8s-node1 and k8s-node2.)

kubectl get po -l app=logging -o wide

[root@k8s-master daemonset]# kubectl get po -l app=logging -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fluentd-hshqf 0/1 ImagePullBackOff 0 2m15s 10.244.169.136 k8s-node2 <none> <none>
fluentd-vbttv 0/1 ImagePullBackOff 0 14m 10.244.36.71 k8s-node1 <none> <none>

4.3.2 nodeAffinity (not yet tried; skipped for now)

nodeAffinity currently supports two variants, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, expressing hard requirements and preferences respectively.

The example below schedules onto Nodes carrying the label wolfcode.cn/framework-name with value spring or springboot, and prefers Nodes that additionally carry the label another-node-label-key=another-node-label-value.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: wolfcode.cn/framework-name
            operator: In
            values:
            - spring
            - springboot
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: pauseyyf/pause
4.3.3 podAffinity (not yet tried; skipped for now)

podAffinity selects Nodes based on the labels of the Pods already running on them, scheduling only onto Nodes whose Pods satisfy the given conditions; both podAffinity and podAntiAffinity are supported. The semantics take some untangling; in the example below:

  • If a Node's topology domain contains at least one running Pod labeled auth=oauth2, the new Pod may be scheduled there.
  • Avoid scheduling onto Nodes that contain at least one running Pod labeled auth=jwt.

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: auth
            operator: In
            values:
            - oauth2
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: auth
              operator: In
              values:
              - jwt
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: pauseyyf/pause

4.4 Rolling updates

For DaemonSets the RollingUpdate strategy is not recommended; prefer the OnDelete strategy, which avoids updating the ds (DaemonSet) too frequently.
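
A sketch of what that looks like in the DaemonSet spec:

updateStrategy:
  type: OnDelete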

Bookmark

All of the practice configuration files so far are on Aliyun Drive, in the archive k8s_2024年1月2日.zip.

5 HPA: automatic scale-up / scale-down

Horizontal Pod Autoscaler (HPA)
automatically scales Pods out or in based on CPU utilization or custom metrics.

  • The controller manager polls resource usage from the metrics every 30s (configurable via --horizontal-pod-autoscaler-sync-period).
  • Three metrics types are supported:
  • predefined metrics (such as Pod CPU), computed as utilization;
  • custom Pod metrics, computed as raw values;
  • custom object metrics.
  • Two metrics query mechanisms are supported: Heapster and a custom REST API.
  • Multiple metrics are supported.

PodTemplate
A Pod Template is a Pod definition embedded inside other Kubernetes objects (controllers such as Deployment, StatefulSet, and DaemonSet). The controller creates its Pods from the Pod Template.

LimitRange
A LimitRange puts a global, uniform constraint on Requests and Limits across a scope, in effect batch-setting resource limits for all Pods in a namespace.
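
A minimal LimitRange sketch (the name and values here are only illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range   # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:           # applied when a container declares no requests
      cpu: 100m
      memory: 128Mi
    default:                  # applied when a container declares no limits
      cpu: 500m
      memory: 256Mi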


An HPA watches Pods' CPU and memory utilization, or custom metrics, and automatically scales the number of Pods up or down.

It is usually applied to a Deployment; it does not apply to objects that cannot scale, such as a DaemonSet.

The controller manager polls the metrics' resource usage every 30s (configurable via --horizontal-pod-autoscaler-sync-period).

5.1 Enabling the metrics service

Download the metrics-server component manifest:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml

Change the image registry to a mirror inside China:

sed -i 's/k8s.gcr.io\/metrics-server/registry.cn-hangzhou.aliyuncs.com\/google_containers/g' metrics-server-components.yaml

Relax the container's TLS settings so certificates are not verified: add the --kubelet-insecure-tls flag to the container's args.

Install the component:

kubectl apply -f metrics-server-components.yaml

Check the Pod status:

kubectl get pods --all-namespaces | grep metrics

5.2 CPU and memory metrics

A prerequisite for CPU or memory monitoring is that the target object has resources.requests.cpu or resources.requests.memory configured; scaling up or down can then be triggered when cpu/memory usage reaches the configured percentage of those requests.
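
For CPU-based autoscaling, the target workload's container spec therefore needs something like this (a sketch; the values are only illustrative):

resources:
  requests:
    cpu: 100m      # HPA computes CPU utilization as a percentage of this request
  limits:
    cpu: 200m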

Create an HPA:
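
A minimal sketch, assuming the nginx-deploy Deployment seen earlier has its CPU requests configured:

kubectl autoscale deployment nginx-deploy --cpu-percent=20 --min=2 --max=5

kubectl get hpa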
