Linux: K8s Cluster Resources
YAML file fields explained
- apiVersion: API version of the resource
- kind: the kind of resource object
- metadata: metadata; the name field is required
- spec: the state the user desires
- labels: labels attached to the pod
- status: the current state of the resource
- replicas: the number of replicas
- selector: label selector that matches pod labels
- template: the pod template
- containers: the container definitions
Enabling an additional apiVersion
Add the flag below under the command section of the kube-apiserver manifest:
[root@master yaml]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
......
spec:
  containers:
  - command:
    - kube-apiserver
    - --runtime-config=batch/v2alpha1=true    # add this line
[root@master ~]# systemctl restart kubelet.service
[root@master ~]# kubectl api-versions
......
batch/v2alpha1
......
1. Namespace
PS: the Namespace resource object only partitions resource objects; it does not block communication between pods in different namespaces. That is the job of the NetworkPolicy resource.
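For reference, traffic isolation would be done with a NetworkPolicy object. A minimal sketch (not part of the original lab; the name is hypothetical, and enforcement requires a network plugin that supports NetworkPolicy) that denies all ingress to pods in the test namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-ingress    # hypothetical name
  namespace: test
spec:
  podSelector: {}           # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress                 # no ingress rules are listed, so all inbound traffic is denied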
A Kubernetes cluster comes with a namespace called default. Strictly speaking, there are three out of the box:
- default: your services and apps are created here unless you specify otherwise.
- kube-system: used by Kubernetes system components.
- kube-public: intended for public resources, though in practice it is rarely used.
[root@master yaml]# kubectl get ns
NAME STATUS AGE
default Active 23d
kube-public Active 23d
kube-system Active 23d
1.1 Creating from the command line
Create a namespace, list it, show its details, then delete it:
[root@master ~]# kubectl create namespace test
namespace/test created
[root@master ~]# kubectl get namespaces test
NAME STATUS AGE
test Active 56s
[root@master ~]# kubectl describe namespaces test
Name: test
Labels: <none>
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
[root@master ~]# kubectl delete namespaces test
namespace "test" deleted
PS: do not delete a namespace lightly: once deleted, every resource in that namespace is deleted with it.
1.2 Creating from a YAML file
[root@master ~]# cd yaml/
[root@master yaml]# vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: test
[root@master yaml]# kubectl apply -f namespace.yaml
namespace/test created
[root@master yaml]# kubectl get ns test
NAME STATUS AGE
test Active 27m
2. Deployment
2.1 Creating from the command line
Create a deploy controller, list it, and show its details:
[root@master ~]# kubectl run t1 --image=nginx --replicas=3
deployment.apps/t1 created
[root@master ~]# kubectl get deployments.
NAME READY UP-TO-DATE AVAILABLE AGE
t1 3/3 3 2 35s
[root@master ~]# kubectl describe deployments. t1
Name: t1
Namespace: default
CreationTimestamp: Mon, 02 Nov 2020 15:22:51 +0800
Labels: run=t1
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=t1
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=t1
Containers:
t1:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: t1-55f6c78557 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 64s deployment-controller Scaled up replica set t1-55f6c78557 to 3
2.2 Creating from a YAML file
PS: create a deploy resource named t2 in the test namespace, using the nginx image.
[root@master yaml]# vim deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: t2
  namespace: test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: t2
    spec:
      containers:
      - name: t2
        image: nginx
[root@master yaml]# kubectl apply -f deployment.yaml
deployment.extensions/t2 created
[root@master yaml]# kubectl get deployments. -n test
NAME READY UP-TO-DATE AVAILABLE AGE
t2 3/3 3 3 34s
3. Service
3.1 Creating from the command line
PS: to make a service reachable from outside the cluster, you can expose the deployment resource to get a service resource; the svc type must be NodePort.
Create, list, and describe:
[root@master ~]# kubectl expose deployment t1 --name=t1-svc --port=80 --type=NodePort
service/t1-svc exposed
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
t1-svc NodePort 10.103.79.74 <none> 80:31880/TCP 8s
[root@master ~]# curl 10.103.79.74
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
......
[root@master ~]# kubectl describe svc t1-svc
Name: t1-svc
Namespace: default
Labels: run=t1
Annotations: <none>
Selector: run=t1
Type: NodePort
IP: 10.102.137.239
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31417/TCP
Endpoints: 10.244.1.3:80,10.244.2.2:80,10.244.2.3:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
3.2 Creating from a YAML file
PS: this time the Service goes into the test namespace.
[root@master ~]# vim service.yaml
kind: Service
apiVersion: v1
metadata:
  name: t2-svc
  namespace: test
spec:
  type: NodePort
  selector:
    app: t2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30031
[root@master ~]# kubectl apply -f service.yaml
service/t2-svc created
[root@master ~]# kubectl get svc -n test t2-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
t2-svc NodePort 10.109.99.243 <none> 80:30031/TCP 77s
[root@master ~]# curl 10.109.99.243
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
......
4. Pod
4.1 Creating a Pod
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod
    image: httpd
[root@master yaml]# kubectl apply -f pod.yaml
pod/test-pod created
[root@master yaml]# kubectl get pod -n test test-pod
NAME READY STATUS RESTARTS AGE
test-pod 1/1 Running 0 38s
4.2 Image pull policy
Depending on the image tag, k8s picks one of three pull policies by default:
- Always: when the tag is "latest" or no tag is given, always pull the newest image from the configured registry (the official registry by default, or a private one).
- IfNotPresent: pull from the registry only when the image is not present locally; if a local copy exists it is used directly, with no download.
- Never: never pull from a registry; only a local image will be used.
[root@master yaml]# vim pod1.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod1
  namespace: test
spec:
  containers:
  - name: test-pod
    image: httpd
    imagePullPolicy: IfNotPresent
[root@master yaml]# kubectl apply -f pod1.yaml
pod/test-pod1 created
[root@master yaml]# kubectl get pod -n test test-pod1
NAME READY STATUS RESTARTS AGE
test-pod1 1/1 Running 0 2m42s
4.3 Container restart policy
k8s offers three restart policies:
- Always: restart whenever the Pod's containers terminate; this is the default.
- OnFailure: restart only when the Pod's containers exit with an error.
- Never: never restart.
[root@master yaml]# vim pod2.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod2
  namespace: test
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-pod
    image: httpd
    imagePullPolicy: IfNotPresent
[root@master yaml]# kubectl apply -f pod2.yaml
pod/test-pod2 created
[root@master yaml]# kubectl get pod -n test test-pod2
NAME READY STATUS RESTARTS AGE
test-pod2 1/1 Running 0 25s
4.4 Container health checks
- The kubelet uses a liveness probe to decide when to restart a container. For example, when an application is still running but can no longer make progress, the liveness probe catches the deadlock and the container in that state is restarted, so the application can keep serving even in the presence of bugs.
- The kubelet uses a readiness probe to decide whether a container is ready to accept traffic. A Pod is considered ready only when all of its containers are ready. This signal controls which Pods serve as backends of a Service: Pods that are not ready are removed from the Service's load balancer.
4.4.1 Probes support three check methods
1. exec
Run a command once inside the container. Exit code 0 means the application is healthy; any other exit code means it is not.
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/test
2. tcpSocket
Try to open a TCP socket to the container (IP address + port). If the connection can be established the application is considered healthy; otherwise it is not.
livenessProbe:
  tcpSocket:
    port: 8080
3. httpGet
Send an HTTP GET request to the web application inside the container. A response status code between 200 and 399 means the application is healthy; anything else means it is not. Every HTTP health check is one request to the configured URL.
httpGet:              # HTTP GET health check; a 200-399 response means healthy
  path: /             # URI path
  port: 80            # port number
  host: 127.0.0.1     # host address
  scheme: HTTP        # protocol, HTTP or HTTPS
  httpHeaders:        # custom request headers (a list of name/value pairs)
4.4.2 Probe parameters
- initialDelaySeconds: how many seconds to wait after the container has started before running the first probe.
- periodSeconds: how often to probe. Default 10 seconds, minimum 1 second.
- timeoutSeconds: probe timeout. Default 1 second, minimum 1 second.
- successThreshold: after a failure, the number of consecutive successes required for the probe to count as successful again. Default 1; must be 1 for liveness; minimum 1.
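A minimal sketch combining these parameters in one probe (values illustrative, not from the original):
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/test
  initialDelaySeconds: 10    # wait 10s after the container starts before the first probe
  periodSeconds: 5           # probe every 5s
  timeoutSeconds: 1          # each probe attempt times out after 1s
  successThreshold: 1        # one success counts as recovered (must be 1 for liveness)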
A probe yields one of three results:
- Success: the container passed the check.
- Failure: the container failed the check.
- Unknown: the check could not be performed, so no action is taken.
4.4.3 LivenessProbe (liveness)
[root@master yaml]# vim livenessprobe.yaml
kind: Pod
apiVersion: v1
metadata:
  name: liveness
  labels:
    test: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test; sleep 60; rm -rf /tmp/test; sleep 300
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/test
      initialDelaySeconds: 10
      periodSeconds: 5
[root@master yaml]# kubectl apply -f livenessprobe.yaml
pod/liveness created
PS: this pod manifest creates a single container. periodSeconds makes the kubelet run the liveness probe every 5 seconds, and initialDelaySeconds tells the kubelet to wait 10 seconds before the first probe. The probe command cat /tmp/test runs inside the container: if it succeeds (returns 0), the kubelet considers the container alive and healthy; on a non-zero return the kubelet kills the container and restarts it.
[root@master yaml]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness 1/1 Running 1 4m26s
liveness 1/1 Running 2 4m59s
......
Liveness probing here checks whether a file exists to decide whether a service is running properly: if the file exists the container is healthy; otherwise the Pod is handled according to the restart policy you configured.
4.4.4 ReadinessProbe (readiness)
- The use case for ReadinessProbe differs slightly from livenessProbe. Sometimes an application temporarily cannot serve requests, for example a Pod is already Running but the application inside it has not finished starting. Without a ReadinessProbe, Kubernetes would assume the Pod can handle requests, yet we know the application cannot accept user traffic before it has started successfully; the ReadinessProbe keeps Kubernetes from scheduling requests to it.
- ReadinessProbe and livenessProbe support the same probe methods; they differ in what is done with the Pod. A failed ReadinessProbe removes the Pod IP:Port from the corresponding EndPoint list, while a failed livenessProbe deletes the container and acts according to the Pod's restart policy.
- A ReadinessProbe checks whether the container is ready; if it is not, Kubernetes does not forward traffic to that Pod.
- Like livenessProbe, ReadinessProbe supports the exec method with identical configuration; just replace the livenessProbe field with readinessProbe.
[root@master yaml]# vim readiness.yaml
kind: Pod
apiVersion: v1
metadata:
  name: readiness
  labels:
    test: readiness
spec:
  restartPolicy: OnFailure
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test; sleep 60; rm -rf /tmp/test; sleep 300
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/test
      initialDelaySeconds: 10
      periodSeconds: 5
[root@master yaml]# kubectl apply -f readiness.yaml
pod/readiness created
[root@master yaml]# kubectl get pod readiness -w
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 4m11s
4.4.5 Summary of liveness and readiness probing
- liveness and readiness are two health-check mechanisms. When neither is configured, k8s applies the same default behavior to both: it judges health by whether the container's main process exits with a zero return value.
- The two probes are configured in exactly the same way; they differ only in what happens after a failed probe:
  - a failed liveness probe acts on the container according to the restart policy, which in most cases means restarting it;
  - a failed readiness probe marks the container as unavailable, so it no longer receives requests forwarded by the Service.
- The two probes can be used independently or together: use liveness to decide whether a restart is needed (self-healing), and readiness to decide whether the container is ready to serve; a sketch combining both follows this list.
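A minimal sketch of one container carrying both probes (the name and the /tmp/healthy file are illustrative, not from the original lab):
containers:
- name: app
  image: busybox
  args:
  - /bin/sh
  - -c
  - touch /tmp/healthy; sleep 3000
  livenessProbe:             # on failure: the container is restarted per restartPolicy
    exec:
      command:
      - cat
      - /tmp/healthy
    periodSeconds: 5
  readinessProbe:            # on failure: the Pod is removed from the Service endpoints
    exec:
      command:
      - cat
      - /tmp/healthy
    periodSeconds: 5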
4.4.6 Using readiness probes when scaling out
[root@master yaml]# vim kuorong.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kuorong
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: kuorong
    spec:
      containers:
      - name: kuorong
        image: httpd
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /healthy
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: kuorong-svc
spec:
  type: NodePort
  selector:
    run: kuorong
  ports:
  - protocol: TCP
    port: 90
    targetPort: 80
    nodePort: 30321
[root@master yaml]# kubectl apply -f kuorong.yaml
deployment.extensions/kuorong created
service/kuorong-svc created
PS: when scaling a service out without a targeted readiness probe, the new Pods often look fine on the surface, but whether the service inside each Pod actually runs correctly is what we care about, and that is exactly where readiness focuses. If the service inside a Pod is broken, the newly scaled-out Pods are simply never added to the Service's forwarding list, which guarantees users at least a minimally working service.
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
kuorong-79fc944586-czgpz 0/1 Running 0 91s
kuorong-79fc944586-m9rdm 0/1 Running 0 91s
kuorong-79fc944586-vjxbr 0/1 Running 0 91s
[root@master yaml]# kubectl exec -it kuorong-79fc944586-czgpz bash
root@kuorong-79fc944586-czgpz:/usr/local/apache2# cd htdocs/
root@kuorong-79fc944586-czgpz:/usr/local/apache2/htdocs# ls
index.html
root@kuorong-79fc944586-czgpz:/usr/local/apache2/htdocs#
root@kuorong-79fc944586-czgpz:/usr/local/apache2/htdocs# mkdir healthy
root@kuorong-79fc944586-czgpz:/usr/local/apache2/htdocs# ls
healthy index.html
root@kuorong-79fc944586-czgpz:/usr/local/apache2/htdocs# exit
exit
[root@master mysql]# kubectl get pod
NAME READY STATUS RESTARTS AGE
kuorong-79fc944586-czgpz 1/1 Running 0 4m1s
kuorong-79fc944586-m9rdm 0/1 Running 0 4m1s
kuorong-79fc944586-vjxbr 0/1 Running 0 4m1s
4.4.7 Using readiness probes during updates
[root@master yaml]# vim gengxin.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5
[root@master yaml]# kubectl apply -f gengxin.yaml --record
deployment.extensions/app created
[root@master yaml]# kubectl rollout history deployment app
deployment.extensions/app
REVISION CHANGE-CAUSE
1 kubectl apply --filename=gengxin.yaml --record=true
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
app-68b9b5ddb4-465zv 1/1 Running 0 9m28s
app-68b9b5ddb4-58cbw 1/1 Running 0 9m28s
app-68b9b5ddb4-6xbvd 1/1 Running 0 9m28s
app-68b9b5ddb4-7f5v4 1/1 Running 0 9m28s
app-68b9b5ddb4-bdn5r 1/1 Running 0 9m28s
app-68b9b5ddb4-dvdhv 1/1 Running 0 9m28s
app-68b9b5ddb4-ls2k9 1/1 Running 0 9m28s
app-68b9b5ddb4-rlf8q 1/1 Running 0 9m28s
app-68b9b5ddb4-tnzqf 1/1 Running 0 9m28s
app-68b9b5ddb4-vpw99 1/1 Running 0 9m28s
4.4.7.2 Upgrading the deployment
[root@master yaml]# cp gengxin.yaml gengxin1.yaml
[root@master yaml]# vim gengxin1.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000        # modified: /tmp/healthy is never created, so the readiness probe fails
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5
[root@master yaml]# kubectl apply -f gengxin1.yaml --record
deployment.extensions/app configured
[root@master yaml]# kubectl rollout history deployment app
deployment.extensions/app
REVISION CHANGE-CAUSE
1 kubectl apply --filename=gengxin.yaml --record=true
2 kubectl apply --filename=gengxin1.yaml --record=true
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
app-68b9b5ddb4-7w77z 1/1 Running 0 116s
app-68b9b5ddb4-854vv 1/1 Running 0 116s
app-68b9b5ddb4-gmgs5 1/1 Running 0 116s
app-68b9b5ddb4-grxvs 1/1 Running 0 116s
app-68b9b5ddb4-jprz9 1/1 Running 0 116s
app-68b9b5ddb4-psgv6 1/1 Running 0 116s
app-68b9b5ddb4-qbgs6 1/1 Running 0 116s
app-68b9b5ddb4-qtnfw 1/1 Running 0 116s
app-68b9b5ddb4-wtzp8 1/1 Running 0 116s
app-7d7559dd99-qk4l2 0/1 Running 0 110s
app-7d7559dd99-vxt4n 0/1 Running 0 110s
4.4.7.3 Upgrading the deployment again
[root@master yaml]# cp gengxin1.yaml gengxin2.yaml
[root@master yaml]# vim gengxin2.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000
        # the readinessProbe block is removed entirely in this version
[root@master yaml]# kubectl apply -f gengxin2.yaml --record
deployment.extensions/app configured
[root@master yaml]# kubectl rollout history deployment app
deployment.extensions/app
REVISION CHANGE-CAUSE
1 kubectl apply --filename=gengxin.yaml --record=true
2 kubectl apply --filename=gengxin1.yaml --record=true
3 kubectl apply --filename=gengxin2.yaml --record=true
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
app-ffb66fc54-6gl4w 1/1 Running 0 111s
app-ffb66fc54-9nl7t 1/1 Running 0 59s
app-ffb66fc54-hstxr 1/1 Running 0 51s
app-ffb66fc54-jpw5j 1/1 Running 0 77s
app-ffb66fc54-mxxjr 1/1 Running 0 2m8s
app-ffb66fc54-qcjmq 1/1 Running 0 63s
app-ffb66fc54-wfwps 1/1 Running 0 111s
app-ffb66fc54-wlbqk 1/1 Running 0 94s
app-ffb66fc54-wt6mj 1/1 Running 0 2m8s
app-ffb66fc54-xvznq 1/1 Running 0 72s
4.5 Rollback
4.5.1 Rolling back to revision 2
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
app-7d7559dd99-jdlbd 0/1 Running 0 74s
app-7d7559dd99-lfd9p 0/1 Running 0 74s
app-ffb66fc54-6gl4w 1/1 Running 0 5m35s
app-ffb66fc54-9nl7t 1/1 Running 0 4m43s
app-ffb66fc54-jpw5j 1/1 Running 0 5m1s
app-ffb66fc54-mxxjr 1/1 Running 0 5m52s
app-ffb66fc54-qcjmq 1/1 Running 0 4m47s
app-ffb66fc54-wfwps 1/1 Running 0 5m35s
app-ffb66fc54-wlbqk 1/1 Running 0 5m18s
app-ffb66fc54-wt6mj 1/1 Running 0 5m52s
app-ffb66fc54-xvznq 1/1 Running 0 4m56s
[root@master yaml]# kubectl rollout history deployment app
deployment.extensions/app
REVISION CHANGE-CAUSE
1 kubectl apply --filename=gengxin.yaml --record=true
2 kubectl apply --filename=gengxin1.yaml --record=true
3 kubectl apply --filename=gengxin2.yaml --record=true
[root@master yaml]# kubectl rollout undo deployment app --to-revision=2
deployment.extensions/app rolled back
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
app-7d7559dd99-jdlbd 0/1 Running 0 74s
app-7d7559dd99-lfd9p 0/1 Running 0 74s
app-ffb66fc54-6gl4w 1/1 Running 0 5m35s
app-ffb66fc54-9nl7t 1/1 Running 0 4m43s
app-ffb66fc54-jpw5j 1/1 Running 0 5m1s
app-ffb66fc54-mxxjr 1/1 Running 0 5m52s
app-ffb66fc54-qcjmq 1/1 Running 0 4m47s
app-ffb66fc54-wfwps 1/1 Running 0 5m35s
app-ffb66fc54-wlbqk 1/1 Running 0 5m18s
app-ffb66fc54-wt6mj 1/1 Running 0 5m52s
app-ffb66fc54-xvznq 1/1 Running 0 4m56s
4.5.2 Writing a YAML file
During the update we still want 10 desired replicas, but we change the rollingupdate strategy so that at most 2 Pods are unavailable at any moment and at most 12 Pods exist in total at the same time.
[root@master yaml]# vim huigun.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5
- maxSurge: caps by how much the total number of Pods may exceed the desired count (replicas) during a rolling update. Integer or percentage; default 1.
- maxUnavailable: caps how many Pods may be unavailable during the update. Integer or percentage; default 1. A percentage-based sketch follows.
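For reference, the same strategy written with percentages (illustrative values, not from the original; percentages are computed against replicas, rounding up for maxSurge and down for maxUnavailable):
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%          # with replicas: 10, up to 13 Pods may exist at once
      maxUnavailable: 25%    # and at least 8 Pods must stay available
  replicas: 10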
[root@master yaml]# kubectl apply -f huigun.yaml --record
deployment.extensions/app configured
[root@master yaml]# kubectl rollout history deployment app
deployment.extensions/app
REVISION CHANGE-CAUSE
1 kubectl apply --filename=gengxin.yaml --record=true
3 kubectl apply --filename=gengxin2.yaml --record=true
4 kubectl apply --filename=huigun.yaml --record=true
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
app-7d7559dd99-ghrhp 0/1 Running 0 76s
app-7d7559dd99-jdlbd 0/1 Running 0 4m31s
app-7d7559dd99-lfd9p 0/1 Running 0 4m31s
app-7d7559dd99-zl46p 0/1 Running 0 76s
app-ffb66fc54-6gl4w 1/1 Running 0 8m52s
app-ffb66fc54-9nl7t 1/1 Running 0 8m
app-ffb66fc54-jpw5j 1/1 Running 0 8m18s
app-ffb66fc54-mxxjr 1/1 Running 0 9m9s
app-ffb66fc54-wfwps 1/1 Running 0 8m52s
app-ffb66fc54-wlbqk 1/1 Running 0 8m35s
app-ffb66fc54-wt6mj 1/1 Running 0 9m9s
app-ffb66fc54-xvznq 1/1 Running 0 8m13s
5. Job
- Service-style Pod containers: RC, RS, DS, Deployment (the service inside the Pod is meant to run continuously).
- Work-style Pod containers: Job, which runs once, or runs a batch of processing tasks, and exits the container when done.
- If the task inside the container fails, the container is handled according to its restart policy, except that for a Job the restart policy may only be Never or OnFailure.
5.1 Parameters that improve a Job's throughput
- Add parallelism under Job.spec: how many Pods run the task concurrently.
- Add completions under Job.spec: how many Pods must complete in total.
......
spec:
  parallelism: 2
  completions: 8
Examples:
5.2 Never
[root@master yaml]# vim job.yaml
kind: Job
apiVersion: batch/v1
metadata:
  name: test-job
spec:
  parallelism: 2
  completions: 8
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo","hello k8s job!"]
      restartPolicy: Never
[root@master yaml]# kubectl apply -f job.yaml
job.batch/test-job created
Watching the pods verifies the result:
[root@master yaml]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
test-job-mz8j8 0/1 ContainerCreating 0 11s
test-job-tf5gt 0/1 ContainerCreating 0 11s
test-job-mz8j8 0/1 Completed 0 11s
test-job-7bt4q 0/1 Pending 0 0s
test-job-7bt4q 0/1 Pending 0 0s
test-job-7bt4q 0/1 ContainerCreating 0 0s
test-job-tf5gt 0/1 Completed 0 12s
test-job-4qct9 0/1 Pending 0 0s
test-job-4qct9 0/1 Pending 0 0s
test-job-4qct9 0/1 ContainerCreating 0 0s
test-job-4qct9 0/1 Completed 0 8s
test-job-5f9b9 0/1 Pending 0 0s
test-job-5f9b9 0/1 Pending 0 0s
test-job-5f9b9 0/1 ContainerCreating 0 0s
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-job-2bjb9 0/1 Completed 0 67s
test-job-4qct9 0/1 Completed 0 77s
test-job-5f9b9 0/1 Completed 0 69s
test-job-7bt4q 0/1 Completed 0 78s
test-job-crjll 0/1 Completed 0 59s
test-job-mz8j8 0/1 Completed 0 89s
test-job-tf5gt 0/1 Completed 0 89s
test-job-ttm62 0/1 Completed 0 57s
[root@master yaml]# kubectl logs test-job-2bjb9
hello k8s job!
[root@master yaml]# kubectl logs test-job-5f9b9
hello k8s job!
[root@master yaml]# kubectl logs test-job-crjll
hello k8s job!
......
PS: you can see that a Job differs from other resource objects: it runs a one-off task, and by default the Job finishes once its pods have run, leaving them in the Completed state.
5.3 OnFailure
To show the restart policy in action, change the command in the YAML file to gibberish.
[root@master yaml]# cp job.yaml job1.yaml
[root@master yaml]# vim job1.yaml
kind: Job
apiVersion: batch/v1
metadata:
  name: test-job1
spec:
  parallelism: 2
  completions: 8
  template:
    metadata:
      name: test-job1
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sdadada","hello k8s job!"]    # modified: not a real command
      restartPolicy: OnFailure
[root@master yaml]# kubectl apply -f job1.yaml
job.batch/test-job1 created
Check whether the pods keep restarting:
[root@master yaml]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
test-job1-6sn2v 0/1 RunContainerError 1 38s
test-job1-llcxf 0/1 RunContainerError 1 38s
test-job1-llcxf 0/1 CrashLoopBackOff 1 46s
test-job1-6sn2v 0/1 CrashLoopBackOff 1 49s
test-job1-llcxf 0/1 RunContainerError 2 63s
test-job1-llcxf 0/1 CrashLoopBackOff 2 78s
test-job1-6sn2v 0/1 RunContainerError 2 80s
......
PS: the pods keep restarting to try to finish the command, until after a certain number of restarts the Job is given up on. That retry cap can be set explicitly, as sketched below.
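The retry cap is controlled by the Job's backoffLimit field, which the original lab does not set. A sketch (illustrative value):
spec:
  backoffLimit: 4      # mark the Job as failed after 4 retries; the default is 6
  parallelism: 2
  completions: 8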
6. CronJob
PS: a CronJob is simply a Job set to run on a schedule.
cronjob also supports the throughput parameters:
- Add parallelism under cronjob.spec.jobTemplate.spec: how many Pods run the task concurrently.
- Add completions under cronjob.spec.jobTemplate.spec: how many Pods must complete in total.
6.1 Writing the YAML file
[root@master yaml]# vim cronjob.yaml
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: cronjob
spec:
  schedule: "*/1 * * * *"    # standard cron syntax: every minute
  jobTemplate:
    spec:
      parallelism: 2
      completions: 8
      template:
        spec:
          containers:
          - name: cronjob
            image: busybox
            command:
            - echo
            - /tmp/test.txt
          restartPolicy: OnFailure
[root@master yaml]# kubectl apply -f cronjob.yaml
cronjob.batch/cronjob created
6.2 Checking the result
PS: watching the Pods now, you will see a new batch of Pods run the scheduled command every minute.
[root@master yaml]# kubectl get pod -w
cronjob-1604475240-bl89v 0/1 Pending 0 0s
cronjob-1604475240-bl89v 0/1 Pending 0 0s
cronjob-1604475240-7kljq 0/1 Pending 0 0s
cronjob-1604475240-7kljq 0/1 Pending 0 0s
cronjob-1604475240-bl89v 0/1 ContainerCreating 0 0s
cronjob-1604475240-7kljq 0/1 ContainerCreating 0 0s
cronjob-1604475240-bl89v 0/1 Completed 0 17s
cronjob-1604475240-7kljq 0/1 Completed 0 17s
......
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
cronjob-1604475240-7kljq 0/1 Completed 0 2m13s
cronjob-1604475240-bl89v 0/1 Completed 0 2m13s
......
[root@master ~]# kubectl logs cronjob-1604475240-7kljq
/tmp/test.txt
[root@master ~]# kubectl logs cronjob-1604475240-bl89v
/tmp/test.txt
PS: a cronJob set to run at a specific point in time may still fail to run properly at this stage; official Kubernetes support for the cronjob resource object has not completed this functionality, and it is still under development.
7. ReplicaSet
[root@master yaml]# vim ReplicaSet.yaml
apiVersion: extensions/v1beta1    # API version
kind: ReplicaSet                  # resource type: ReplicaSet
metadata:                         # metadata
  name: myapp                     # ReplicaSet name
spec:                             # ReplicaSet spec
  replicas: 2                     # two replicas
  selector:                       # label selector matching pod labels
    matchLabels:
      app: myapp
      release: canary
  template:                       # pod template
    metadata:                     # pod metadata
      name: myapp-pod             # pod name
      labels:                     # pod labels: must cover the selector labels above; extra labels are allowed
        app: myapp
        release: canary
        environment: qa
    spec:                         # pod spec
      containers:                 # container definitions
      - name: myapp-container     # container name
        image: ikubernetes/myapp:v1    # container image
        ports:                    # exposed ports
        - name: http
          containerPort: 80
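The original stops at the manifest. Applying and checking it would follow the same pattern as the earlier sections (commands sketched here, not run in the original):
[root@master yaml]# kubectl apply -f ReplicaSet.yaml
[root@master yaml]# kubectl get rs myapp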
8. DaemonSet
DaemonSet update strategies:
- OnDelete: after the DaemonSet template is updated, new DaemonSet Pods are created only when you delete the old DaemonSet Pods by hand.
- RollingUpdate: after the DaemonSet template is updated, old DaemonSet Pods are deleted and new ones created automatically.
[root@master yaml]# vim DaemonSet.yaml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: daemonset
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: test-web
        app: httpd
    spec:
      containers:
      - name: web
        image: httpd
        ports:
        - containerPort: 80
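For comparison, the OnDelete strategy would only change the updateStrategy block (a sketch, not part of the original lab):
spec:
  updateStrategy:
    type: OnDelete    # new Pods are created only after the old ones are deleted by hand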
9. Kube-Proxy (load balancing)
kube-proxy is a core k8s component. A kube-proxy instance runs on every node; it watches the API server for changes to Services and Endpoints, and configures load balancing for the backend services through iptables (among other mechanisms).
As the earlier curl commands showed, when you access a CLUSTER-IP the backend Pods take turns serving the request, i.e. load balancing; without a Service resource, kube-proxy has nothing to act on.
[root@master ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc NodePort 10.103.132.109 <none> 80:31601/TCP 3h37m
With describe you can see the Endpoints behind the SVC resource, which reveals the real backend Pods:
[root@master ~]# kubectl describe svc http-svc
......
Endpoints: 10.244.1.8:80,10.244.1.9:80,10.244.2.8:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The iptables rules reveal how this works underneath.
Traffic whose destination is 10.103.132.109/32 on TCP port 80 jumps to the KUBE-SVC-LLMSFKRLGJ6BVN7Z chain; traffic arriving from outside the pod network is first marked for SNAT (source address translation) via KUBE-MARK-MASQ:
[root@master ~]# iptables-save | grep 10.103.132.109
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.132.109/32 -p tcp -m comment --comment "default/http-svc: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.132.109/32 -p tcp -m comment --comment "default/http-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-LLMSFKRLGJ6BVN7Z
Following the chain further shows the probability with which each backend service is hit; this is how the svc reaches the backends at random (MASQ: MASQUERADE, dynamic source address translation):
[root@master ~]# iptables-save | grep KUBE-SVC-LLMSFKRLGJ6BVN7Z
:KUBE-SVC-LLMSFKRLGJ6BVN7Z - [0:0]
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/http-svc:" -m tcp --dport 31601 -j KUBE-SVC-LLMSFKRLGJ6BVN7Z
-A KUBE-SERVICES -d 10.103.132.109/32 -p tcp -m comment --comment "default/http-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-LLMSFKRLGJ6BVN7Z
-A KUBE-SVC-LLMSFKRLGJ6BVN7Z -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-GKJOAGOA2547HWSI
-A KUBE-SVC-LLMSFKRLGJ6BVN7Z -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-P7NHLPBULC4DX3KP
-A KUBE-SVC-LLMSFKRLGJ6BVN7Z -j KUBE-SEP-4E2QKSE3ZPWCPABP
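A quick check of those numbers (reasoning added here, not in the original): the first rule fires with probability 1/3; when it does not, the second fires with probability 1/2 of the remaining 2/3, i.e. 2/3 × 1/2 = 1/3; the final rule catches the last 1/3. Each of the three endpoints therefore receives an equal share of the traffic.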
Analyzing KUBE-SEP-GKJOAGOA2547HWSI in turn shows that its job is to DNAT the traffic to 10.244.1.8 on port 80; the other two chains likewise use DNAT (destination address translation):
[root@master ~]# iptables-save | grep KUBE-SEP-GKJOAGOA2547HWSI
:KUBE-SEP-GKJOAGOA2547HWSI - [0:0]
-A KUBE-SEP-GKJOAGOA2547HWSI -s 10.244.1.8/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GKJOAGOA2547HWSI -p tcp -m tcp -j DNAT --to-destination 10.244.1.8:80
-A KUBE-SVC-LLMSFKRLGJ6BVN7Z -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-GKJOAGOA2547HWSI