【K8s Study Notes 003】K8s Workloads (Namespace, Pod, Deployment, Multiple Replicas, Scaling, Rolling Update, Rollback)
Namespace, Pod, Deployment, multiple replicas, Deployment scaling, self-healing and failover, rolling updates, rollback
Course video: https://www.bilibili.com/video/BV13Q4y1C7hS?p=41&vd_source=0bf662c33adfc181186b04ba57e11dff
Accompanying notes: https://www.yuque.com/leifengyang/oncloud/kgheaf
Namespace
With 3 machines, modules that belong together can be grouped into a single namespace, for example prod or dev.
[root@k8s-master ~]# kubectl get ns
NAME STATUS AGE
default Active 173m
kube-node-lease Active 173m
kube-public Active 173m
kube-system Active 173m
kubernetes-dashboard Active 87m
[root@k8s-master ~]# kubectl create ns hello
namespace/hello created
[root@k8s-master ~]# kubectl get ns hello
NAME STATUS AGE
hello Active 7s
Delete a namespace
[root@k8s-master ~]# kubectl delete ns hello
namespace "hello" deleted
Pod
A Pod is an extra layer of encapsulation wrapped around one or more containers.
The number in the red circle (in the Dashboard screenshot) is the total count of applications inside the Pod.
Create a Pod from the command line (named mynginx, pulled from the nginx image):
[root@k8s-master ~]# kubectl run mynginx --image=nginx
pod/mynginx created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mynginx 0/1 ContainerCreating 0 67s
# the image is still being pulled
Inspect the Pod
[root@k8s-master ~]# kubectl describe pod mynginx
Name: mynginx
Namespace: default
Priority: 0
Node: k8s-1/192.168.23.244
Start Time: Tue, 12 Jul 2022 12:49:25 +0800
Labels: run=mynginx
Annotations: cni.projectcalico.org/containerID: b03938a1f495e4879bc35a1c426f5aa5465acabfa7da25f79a864a5a169b13df
cni.projectcalico.org/podIP: 193.168.231.194/32
cni.projectcalico.org/podIPs: 193.168.231.194/32
Status: Running
IP: 193.168.231.194
IPs:
IP: 193.168.231.194
Containers:
mynginx:
Container ID: docker://cc97ab264f2cb93e88f61b3ce19ec81f39c3aabbcdd1a8ddb30bd6afb0abd275
Image: nginx
Image ID: docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 12 Jul 2022 12:50:36 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lhbbl (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-lhbbl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lhbbl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m40s default-scheduler Successfully assigned default/mynginx to k8s-1
Normal Pulling <invalid> kubelet Pulling image "nginx"
Normal Pulled <invalid> kubelet Successfully pulled image "nginx" in 1m8.550718578s
Normal Created <invalid> kubelet Created container mynginx
Normal Started <invalid> kubelet Started container mynginx
Note the Message column of the Events section above: the Pod was scheduled to node k8s-1, so we can check it with docker on that node.
After a short wait the Pod reaches Running.
Common commands
# list Pods in the default namespace
kubectl get pod
# describe a Pod
kubectl describe pod <your-pod-name>
# delete a Pod
kubectl delete pod <pod-name>
# view a Pod's running logs
kubectl logs <pod-name>
# k8s assigns every Pod its own IP
kubectl get pod -owide
# access the app via the Pod IP plus the port of the container running inside the Pod
curl 192.168.169.136
# any machine and any application in the cluster can reach this Pod through its assigned IP
Open a shell inside the Pod
[root@k8s-master ~]# kubectl exec -it mynginx -- /bin/bash
root@mynginx:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
A Pod can also be created from a YAML manifest, which can be written in the Dashboard's web UI or directly on the machine.
Below we use the web UI to deploy a multi-container Pod as an example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat
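A short sketch of creating and later removing the Pod from this manifest, assuming it is saved as myapp.yaml:
# create (or update) the Pod declared above
kubectl apply -f myapp.yaml
# check its status
kubectl get pod myapp
# tear it down again
kubectl delete -f myapp.yaml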
The yellow status means that myapp on the left (like mynginx on the right) is still being created; the Events panel in the Dashboard shows the same sequence.
After a while it turns green (Running).
The different containers in the Pod can then be curled on their respective ports:
[root@k8s-1 ~]# curl 193.168.200.194:8080
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/8.5.68</h3></body></html>
[root@k8s-1 ~]# curl 193.168.200.194:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
This is the structure of our Pod so far.
In the Dashboard's Pod view, the Exec option lets you choose which container to open a shell in.
Containers in the same Pod can reach each other over localhost.
If the same Pod contained two nginx containers, their ports would conflict and one would fail with an error.
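With a multi-container Pod, kubectl exec can target a specific container with -c, the command-line equivalent of the Dashboard's Exec choice. A sketch that also demonstrates the localhost access described above (assuming curl is available inside the nginx image):
# open a shell in the tomcat container of the myapp Pod
kubectl exec -it myapp -c tomcat -- /bin/bash
# from inside the nginx container, tomcat is reachable on localhost:8080
kubectl exec -it myapp -c nginx -- curl -s http://localhost:8080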
Deployment
A Deployment controls Pods, giving them multi-replica, self-healing and scaling capabilities.
Delete the two previous Pods
kubectl delete -n default pod myapp mynginx
Create fresh Pods
[root@k8s-master ~]# kubectl run mynginx --image=nginx
pod/mynginx created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mynginx 1/1 Running 0 45s
[root@k8s-master ~]# kubectl create deployment mytomcat --image=tomcat:8.5.68
deployment.apps/mytomcat created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mynginx 1/1 Running 0 75s
mytomcat-6f5f895f4f-z8vmp 1/1 Running 0 18s
Self-healing
Delete both Pods and compare, to see k8s's self-healing in action.
Unlike mynginx, which stays deleted, the tomcat Pod is immediately replaced with a new one after we delete it, because it is managed by a Deployment.
To remove it for good, we have to delete the Deployment itself:
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mytomcat-6f5f895f4f-dwjgd 1/1 Running 0 4m8s
[root@k8s-master ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
mytomcat 1/1 1 1 7m22s
[root@k8s-master ~]# kubectl delete deploy mytomcat
deployment.apps "mytomcat" deleted
Multiple replicas
Deploy the same Pod several times
[root@k8s-master ~]# kubectl create deploy my-dep --image=nginx --replicas=3
deployment.apps/my-dep created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-dep-5b7868d854-2ccwg 1/1 Running 0 7s
my-dep-5b7868d854-jc7jc 0/1 ContainerCreating 0 7s
my-dep-5b7868d854-mv7h2 0/1 ContainerCreating 0 7s
[root@k8s-master ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
my-dep 3/3 3 3 34s
Check it in the Dashboard.
[root@k8s-master ~]# kubectl delete deploy my-dep
This can also be configured in the Dashboard.
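The same Deployment can also be written declaratively; a minimal sketch equivalent to the create command above, assuming it is saved as my-dep.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - name: nginx
        image: nginx
Running kubectl apply -f my-dep.yaml then produces the same three Pods.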
Scaling a Deployment
First prepare the following Deployment
[root@k8s-master ~]# kubectl create deploy my-dep --image=nginx --replicas=3
deployment.apps/my-dep created
Scale up to 5 replicas
[root@k8s-master ~]# kubectl scale --replicas=5 deploy/my-dep
deployment.apps/my-dep scaled
[root@k8s-master ~]# kubectl get deploy my-dep
NAME READY UP-TO-DATE AVAILABLE AGE
my-dep 3/5 5 3 3m17s
Scale down to 2 replicas
[root@k8s-master ~]# kubectl scale --replicas=2 deploy/my-dep
deployment.apps/my-dep scaled
[root@k8s-master ~]# kubectl get deploy my-dep
NAME READY UP-TO-DATE AVAILABLE AGE
my-dep 2/2 2 2 4m18s
The same can be done in the Dashboard.
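Scaling can also be done by editing the live object; a sketch:
# opens the Deployment spec in an editor; change spec.replicas and save
kubectl edit deploy my-dep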
Self-healing and failover
Prepare as follows
[root@k8s-master ~]# kubectl create deploy my-dep --image=nginx --replicas=3
deployment.apps/my-dep created
[root@k8s-master ~]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-dep-5b7868d854-4ztqg 1/1 Running 0 27s 193.168.200.200 k8s-2 <none> <none>
my-dep-5b7868d854-6wbxc 1/1 Running 0 27s 193.168.231.202 k8s-1 <none> <none>
my-dep-5b7868d854-9z5hn 1/1 Running 0 27s 193.168.200.199 k8s-2 <none> <none>
Self-healing
The Pod my-dep-5b7868d854-6wbxc is running on node k8s-1, so we go to k8s-1 and stop its container to simulate an unexpected Pod failure.
[root@k8s-1 ~]# docker ps |grep my-dep-5b7868d854-6wbxc
01c0dbbb3565 nginx "/docker-entrypoint.…" 3 minutes ago Up 3 minutes k8s_nginx_my-dep-5b7868d854-6wbxc_default_390d6af1-43bf-4649-9e94-213b0823a973_0
3da3039032de registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_my-dep-5b7868d854-6wbxc_default_390d6af1-43bf-4649-9e94-213b0823a973_0
[root@k8s-1 ~]# docker stop 01c0dbbb3565
01c0dbbb3565
Back on the master node, the container we stopped has been restarted and the Pod is Ready again (note RESTARTS is now 1).
Failover
[root@k8s-master ~]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-dep-5b7868d854-4ztqg 1/1 Running 0 11m 193.168.200.200 k8s-2 <none> <none>
my-dep-5b7868d854-6wbxc 1/1 Running 1 11m 193.168.231.202 k8s-1 <none> <none>
my-dep-5b7868d854-9z5hn 1/1 Running 0 11m 193.168.200.199 k8s-2 <none> <none>
[root@k8s-1 ~]# docker ps |grep my-dep-5b7868d854-6wbxc
2146ec490097 nginx "/docker-entrypoint.…" 4 minutes ago Up 4 minutes k8s_nginx_my-dep-5b7868d854-6wbxc_default_390d6af1-43bf-4649-9e94-213b0823a973_1
3da3039032de registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.2 "/pause" 9 minutes ago Up 9 minutes k8s_POD_my-dep-5b7868d854-6wbxc_default_390d6af1-43bf-4649-9e94-213b0823a973_0
Again, my-dep-5b7868d854-6wbxc is on k8s-1, but this time we shut down k8s-1 itself to simulate a node outage.
[root@k8s-1 ~]# shutdown
Watch the Pods from the master node
[root@k8s-master ~]# kubectl get pod -w
Meanwhile:
[root@k8s-master ~]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-dep-5b7868d854-4ztqg 1/1 Running 0 29m 193.168.200.200 k8s-2 <none> <none>
my-dep-5b7868d854-6wbxc 1/1 Terminating 1 29m 193.168.231.202 k8s-1 <none> <none>
my-dep-5b7868d854-9z5hn 1/1 Running 0 29m 193.168.200.199 k8s-2 <none> <none>
my-dep-5b7868d854-lq2tq 1/1 Running 0 9m22s 193.168.200.201 k8s-2 <none> <none>
[root@k8s-master ~]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-dep-5b7868d854-4ztqg 1/1 Running 0 41m 193.168.200.200 k8s-2 <none> <none>
my-dep-5b7868d854-9z5hn 1/1 Running 0 41m 193.168.200.199 k8s-2 <none> <none>
my-dep-5b7868d854-lq2tq 1/1 Running 0 21m 193.168.200.201 k8s-2 <none> <none>
The Pod from the downed node has been recreated on k8s-2 (the old copy on k8s-1 is left in Terminating).
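Failover is not instant: the default tolerations shown in the kubectl describe output earlier allow a Pod to stay bound to an unreachable node for 300s before it is evicted, so the replacement Pod only appears roughly five minutes after the node goes down. A quick check from the master (sketch):
# the downed node should eventually show NotReady
kubectl get nodes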
Rolling updates
Update without downtime: one Pod is replaced at a time, and the next one is only touched after the previous one is done.
Prepare as follows
[root@k8s-master ~]# kubectl scale --replicas=4 deploy/my-dep
deployment.apps/my-dep scaled
Update the image to nginx:1.16.1 and watch the rollout at the same time
[root@k8s-master ~]# kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record
deployment.apps/my-dep image updated
[root@k8s-master ~]# kubectl get pod -w
The watch output shows a rolling update: old Pods are killed and new ones started one at a time.
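This behaviour is governed by the Deployment's update strategy; a minimal sketch of the relevant spec fields, using the default RollingUpdate values:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most a quarter of the replicas may be down during the update
      maxSurge: 25%         # at most a quarter extra replicas may be created temporarily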
Rollback
# rollout history
[root@k8s-master ~]# kubectl rollout history deployment/my-dep
deployment.apps/my-dep
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record=true
# inspect the details of a specific revision
[root@k8s-master ~]# kubectl rollout history deployment/my-dep --revision=2
deployment.apps/my-dep with revision #2
Pod Template:
Labels: app=my-dep
pod-template-hash=6b48cbf4f9
Annotations: kubernetes.io/change-cause: kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record=true
Containers:
nginx:
Image: nginx:1.16.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
# roll back (to the previous revision)
kubectl rollout undo deployment/my-dep
# roll back (to a specific revision)
[root@k8s-master ~]# kubectl rollout undo deployment/my-dep --to-revision=1
deployment.apps/my-dep rolled back
[root@k8s-master ~]# kubectl rollout history deployment/my-dep
deployment.apps/my-dep
REVISION CHANGE-CAUSE
2 kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record=true
3 <none>
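The progress of an update or a rollback can also be followed explicitly; a sketch:
# blocks until the rollout (or rollback) of my-dep has finished
kubectl rollout status deployment/my-dep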
More:
Besides Deployment, k8s has StatefulSet, DaemonSet, Job and other resource types, which are all collectively called workloads.
Stateful applications are deployed with StatefulSet, stateless applications with Deployment.
https://kubernetes.io/zh/docs/concepts/workloads/controllers/