k8s (Part 1) — Pod Management and Writing Resource Manifests
1. Pod management
[root@server2 ~]# kubectl run nginx --image=nginx    run a container using the nginx image
pod/nginx created
[root@server2 ~]# kubectl get pod    list pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 8s
[root@server2 ~]# kubectl describe pod nginx    show detailed pod info; check here for errors
[root@server2 ~]# kubectl logs nginx    if the pod misbehaves after starting, check its logs
[root@server2 ~]# kubectl get ns    list all namespaces; there are four by default
NAME STATUS AGE
default Active 20h
kube-node-lease Active 20h
kube-public Active 20h
kube-system Active 20h
The default namespace is default; another namespace can be selected explicitly:
[root@server2 ~]# kubectl get pod -n kube-system    -n selects the namespace
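Besides using the built-in namespaces, a namespace can also be created declaratively; a minimal sketch (the name demo-ns is invented for illustration):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns    # hypothetical name; pods can then be created with -n demo-ns
```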
[root@server2 ~]# kubectl get pod -o wide    show pod IPs
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 69m 10.244.1.2 server3 <none> <none>
[root@server2 ~]# curl 10.244.1.2    curl the pod IP; nginx answers
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
[root@server1 ~]# kubectl run -it demo --image=busyboxplus --restart=Never    start an interactive container; --restart=Never makes the pod one-shot, so it is not restarted after it exits (without it the pod keeps getting restarted)
/ # curl 10.244.1.2    nginx is still reachable: pods can talk to each other
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
/ # Session ended, resume using 'kubectl attach demo -c demo -i -t' command when the pod is running    after exiting, this command re-attaches to the container
If the pod has multiple containers, pick one with -c; with a single container it can be omitted.
[root@server2 ~]# kubectl delete pod nginx    delete the pod
pod "nginx" deleted
[root@server2 ~]# kubectl create deployment nginx --image=myapp:v1    create a pod managed by a Deployment controller
deployment.apps/nginx created
[root@server2 ~]# kubectl get pod    list pods
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 2 (21m ago) 30m
nginx-6b68957c7-jgqb5 1/1 Running 0 55s    pod managed by the controller
[root@server2 ~]# kubectl delete pod nginx-6b68957c7-jgqb5    delete the controller-managed pod
pod "nginx-6b68957c7-jgqb5" deleted
[root@server2 ~]# kubectl get pod    a pod is still there (with a new name): the controller immediately recreates deleted pods, so they cannot be removed this way
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 2 (23m ago) 32m
nginx-6b68957c7-c5mh2 1/1 Running 0 32s
[root@server2 ~]# kubectl scale --replicas=2 deployment nginx    scale the Deployment to 2 replicas; nginx is the Deployment name
deployment.apps/nginx scaled
[root@server2 ~]# kubectl get pod    list pods; there are now 2 replicas
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 2 (30m ago) 39m
nginx-6b68957c7-4t274 1/1 Running 0 4m27s
nginx-6b68957c7-c5mh2 1/1 Running 0 6m57s
[root@server2 ~]# kubectl get pod -o wide    show pod IP addresses
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo 1/1 Running 3 (20s ago) 45m 10.244.2.3 server4 <none> <none>
nginx-6b68957c7-4t274 1/1 Running 0 10m 10.244.1.4 server3 <none> <none>
nginx-6b68957c7-c5mh2 1/1 Running 0 13m 10.244.2.4 server4 <none> <none>
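The imperative create/scale steps above have a declarative equivalent; a sketch of the same Deployment with 2 replicas (the app: nginx label is an assumption, chosen only so the selector and pod template match):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2            # same effect as kubectl scale --replicas=2
  selector:
    matchLabels:
      app: nginx         # hypothetical label, must match the template below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: myapp
        image: myapp:v1
```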
Load balancing between pods inside the cluster:
[root@server2 ~]# kubectl expose deployment nginx --port=80 --target-port=80    expose the Deployment on service port 80, forwarding to container port 80
[root@server2 ~]# kubectl get svc    the command above created a Service; list it
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24h
nginx ClusterIP 10.98.124.46 <none> 80/TCP 11s    10.98.124.46 is the assigned cluster IP
[root@server2 ~]# curl 10.98.124.46    curl the cluster IP (from a node or a container); requests are load-balanced across the pods
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server2 ~]# curl 10.98.124.46/hostname.html
nginx-6b68957c7-c5mh2
[root@server2 ~]# curl 10.98.124.46/hostname.html
nginx-6b68957c7-4t274
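kubectl expose generates a Service roughly like the following sketch (the app: nginx selector is an assumption; in reality the selector copies the Deployment's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx       # hypothetical label; must match the backing pods
  ports:
  - port: 80         # service port (--port)
    targetPort: 80   # container port (--target-port)
```

With no type given, the Service defaults to type: ClusterIP, reachable only inside the cluster.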
Allowing access from outside the cluster, still load-balanced:
[root@server2 ~]# kubectl edit svc nginx    edit the Service, changing type: ClusterIP to type: NodePort
[root@server2 ~]# kubectl get svc    list Services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.98.124.46 <none> 80:30836/TCP 105m    note 80:30836 — node port 30836 is now mapped
[root@server2 ~]# netstat -antlp | grep 30836    port 30836 is listening on the node
tcp 0 0 0.0.0.0:30836 0.0.0.0:* LISTEN 4920/kube-proxy
[root@server1 ~]# curl 172.25.50.2:30836    the cluster is now reachable from outside via node IP and node port, load-balanced as follows:
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 172.25.50.2:30836/hostname.html
nginx-6b68957c7-4t274
[root@server1 ~]# curl 172.25.50.2:30836/hostname.html
nginx-6b68957c7-4t274
[root@server1 ~]# curl 172.25.50.2:30836/hostname.html
nginx-6b68957c7-c5mh2
[root@server1 ~]# curl 172.25.50.2:30836/hostname.html
nginx-6b68957c7-c5mh2
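After the edit, the Service spec looks roughly like this sketch; normally the nodePort is auto-allocated from the 30000-32767 range, but it can also be pinned explicitly (30836 here matches the port seen above; the app: nginx selector is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort       # changed from ClusterIP
  selector:
    app: nginx         # hypothetical label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30836    # exposed on every node's IP at this port
```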
Updating the pod image and rolling back
[root@server2 ~]# kubectl set image deployment nginx myapp=myapp:v2 --record    change the image from myapp:v1 to myapp:v2; --record stores the command in the rollout history
[root@server2 ~]# kubectl rollout history deployment nginx    view the rollout history
deployment.apps/nginx
REVISION CHANGE-CAUSE
2 kubectl set image deployment nginx myapp=myapp:v2 --record=true
3 <none>
[root@server2 ~]# kubectl get svc    list Services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.98.124.46 <none> 80:30836/TCP 126m
[root@server2 ~]# curl 10.98.124.46    access the service
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>    the version is now v2
[root@server2 ~]# kubectl rollout undo deployment nginx --to-revision=1    roll back to revision 1
deployment.apps/nginx rolled back
[root@server2 ~]# kubectl get svc    list Services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.98.124.46 <none> 80:30836/TCP 133m
[root@server2 ~]# curl 10.98.124.46    access the service again
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>    the version is back to v1
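Note that --record is deprecated in newer kubectl releases; the CHANGE-CAUSE column is read from the kubernetes.io/change-cause annotation, which can be set directly on the Deployment instead — a sketch (the annotation text is free-form):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
    kubernetes.io/change-cause: "update image to myapp:v2"   # shown in rollout history
```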
2. Resource manifests
[root@server2 ~]# kubectl get all    list all resource objects
[root@server2 ~]# kubectl delete deployments.apps nginx    delete the Deployment
deployment.apps "nginx" deleted
[root@server2 ~]# kubectl delete svc nginx    delete the Service
service "nginx" deleted
[root@server2 ~]# kubectl api-versions    list available API versions; different resources are served by different API groups
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
autoscaling/v2beta1
[root@server2 ~]# kubectl explain pod    built-in reference docs, useful when writing manifests
[root@server2 ~]# kubectl explain pod.apiVersion    the docs can be drilled into level by level
[root@server2 ~]# vim myapp.yaml
apiVersion: v1    API version
kind: Pod    resource type to create
metadata:    metadata
  name: myapp    object name
spec:    desired specification
  containers:    containers
  - name: myapp    note the leading "-": containers is a list, because one pod can run several containers
    image: myapp:v1
[root@server2 ~]# kubectl apply -f myapp.yaml    apply the manifest; the pod is created
pod/myapp created
[root@server2 ~]# kubectl get pod    list pods
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 3 (19h ago) 19h
myapp 1/1 Running 0 2m8s    the pod was created successfully
[root@server2 ~]# kubectl delete -f myapp.yaml    delete the pod
pod "myapp" deleted
Adding an image pull policy
[root@server2 ~]# kubectl explain pod.spec.containers.imagePullPolicy    reference docs for the image pull policy
Always pulls the image on every start; IfNotPresent pulls from the registry only when the image is not already present locally
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent    pull only if the image is not present locally
[root@server2 ~]# kubectl apply -f myapp.yaml    deploy
pod/myapp created
[root@server2 ~]# kubectl delete -f myapp.yaml    delete
pod "myapp" deleted
Specifying a port mapping
[root@server2 ~]# kubectl explain pod.spec.containers.ports
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    ports:    declare ports
    - name: http    port name
      containerPort: 80    container port
      hostPort: 80    host port
[root@server2 ~]# kubectl apply -f myapp.yaml    deploy
pod/myapp created
[root@server2 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo 1/1 Running 3 (24h ago) 25h 10.244.2.3 server4 <none> <none>
myapp 1/1 Running 0 10m 10.244.1.10 server3 <none> <none>    myapp is running on server3
[root@server3 ~]# iptables -t nat -nL | grep :80    on server3, filter the NAT rules for port 80
CNI-HOSTPORT-SETMARK tcp -- 10.244.1.0/24 0.0.0.0/0 tcp dpt:80
CNI-HOSTPORT-SETMARK tcp -- 127.0.0.1 0.0.0.0/0 tcp dpt:80
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:10.244.1.10:80    a DNAT rule was added, mapping the host port to the container
[root@server2 ~]# curl 172.25.50.3:80    from server2 the container on server3 is reachable through the host port
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Starting two containers in one pod that both listen on port 80 causes a conflict
[root@server2 ~]# kubectl delete pod myapp
pod "myapp" deleted
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    #ports:
    #- name: http
    #  containerPort: 80    both containers listen on port 80
    #  hostPort: 80
  - name: myapp-v1
    image: myapp:v1
    imagePullPolicy: IfNotPresent
[root@server2 ~]# kubectl apply -f myapp.yaml
pod/myapp created
[root@server2 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
demo 1/1 Running 4 (15m ago) 43h
myapp 1/2 ContainerCreating 0 7s    only one of the two containers is running
[root@server2 ~]# kubectl logs myapp -c myapp-v1    check the logs; -c selects the container by name
2022/03/19 02:03:43 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)    port 80 is already in use
2022/03/19 02:03:43 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
A pod can run multiple containers, but they share one network namespace, so two containers cannot bind the same port
[root@server2 ~]# kubectl delete -f myapp.yaml    delete
pod "myapp" deleted
[root@server2 ~]# kubectl delete pod demo    delete
pod "demo" deleted
Running containers that use different ports in one pod, with interactive parameters
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    #ports:
    #- name: http
    #  containerPort: 80
    #  hostPort: 80
  - name: demo
    image: busyboxplus:latest
    imagePullPolicy: IfNotPresent
    stdin: true    enable interactive input
    tty: true    allocate a TTY
[root@server2 ~]# kubectl apply -f myapp.yaml    deploy
pod/myapp created
[root@server2 ~]# kubectl get pod    both containers are now running
NAME READY STATUS RESTARTS AGE
myapp 2/2 Running 0 67s
Adding container resource limits
[root@server2 ~]# kubectl explain pod.spec.containers.resources    reference docs
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    resources:
      limits:    limits is the resource ceiling
        memory: 512Mi
        cpu: 500m    half a CPU; one full CPU is 1000m
      requests:    requests is the guaranteed minimum
        memory: 100Mi
        cpu: 0.1    one tenth of a CPU, equivalent to 100m
    #ports:
    #- name: http
    #  containerPort: 80
    #  hostPort: 80
  - name: demo
    image: busyboxplus:latest
    imagePullPolicy: IfNotPresent
    stdin: true
    tty: true
[root@server2 ~]# kubectl describe pod myapp    show pod details
[root@server2 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp 2/2 Running 0 8m27s 10.244.1.12 server3 <none> <none>    the pod is running on server3
[root@server3 ~]# cd /sys/fs/cgroup/memory/    the cgroup tree where these limits are enforced
[root@server2 ~]# kubectl delete pod myapp    delete
pod "myapp" deleted
[root@server2 ~]# kubectl run demo --image=busyboxplus -it --restart=Never    --restart=Never means the pod is not restarted when it terminates; it just completes
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 62:48:73:64:a8:a1 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.13/24 brd 10.244.1.255 scope global eth0
valid_lft forever preferred_lft forever
/ #    when the process exits, the pod completes and is not restarted
[root@server2 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
demo 0/1 Completed 0 2m44s    not restarted
[root@server2 ~]# kubectl delete pod demo    a Completed pod like this is deleted quickly
pod "demo" deleted
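The --restart=Never flag corresponds to the pod-level restartPolicy field; a minimal manifest sketch of the same one-shot pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  restartPolicy: Never    # pod goes to Completed instead of restarting
  containers:
  - name: demo
    image: busyboxplus
    stdin: true
    tty: true
```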
Defining node selector labels
[root@server2 ~]# kubectl get node --show-labels    show all node labels
NAME STATUS ROLES AGE VERSION LABELS
server2 Ready control-plane,master 2d20h v1.23.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
server3 Ready <none> 2d11h v1.23.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4 Ready <none> 2d11h v1.23.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux
[root@server2 ~]# kubectl run nginx --image=myapp:v1    run
pod/nginx created
[root@server2 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 43s 10.244.1.14 server4 <none> <none>    the pod is on server4
[root@server2 ~]# kubectl delete pod nginx    delete
pod "nginx" deleted
[root@server2 ~]# kubectl run nginx --image=myapp:v1    create it again
pod/nginx created
[root@server2 ~]# kubectl get pod -o wide    the pod lands on server4 again: the image is already cached there, so running it there is cheaper
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 ContainerCreating 0 2s <none> server4 <none> <none>
[root@server2 ~]# kubectl delete pod nginx 删除
pod "nginx" deleted
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: 512Mi
        cpu: 500m
      requests:
        memory: 100Mi
        cpu: 0.1
  nodeSelector:    node selector label
    kubernetes.io/hostname: server3    select server3 via its hostname label
[root@server2 ~]# kubectl apply -f myapp.yaml
pod/myapp created
[root@server2 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp 1/1 Running 0 53s 10.244.1.15 server3 <none> <none>    the pod is now running on server3
Drawback: binding to the hostname ties the pod to server3; if that node fails, the pod can no longer be scheduled
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: 512Mi
        cpu: 500m
      requests:
        memory: 100Mi
        cpu: 0.1
  nodeSelector:
    kubernetes.io/hostname: server5    server5 does not exist, so scheduling fails
[root@server2 ~]# kubectl apply -f myapp.yaml    deploy
pod/myapp created
[root@server2 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 Pending 0 8s    the pod stays Pending and never starts
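A more robust alternative to the hostname label is a custom label: tag one or more nodes with e.g. kubectl label nodes server3 disktype=ssd (the disktype=ssd key/value is invented for illustration) and select on that label, so the pod can land on any node carrying it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
  nodeSelector:
    disktype: ssd    # hypothetical label; any node labeled disktype=ssd qualifies
```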
Sharing the host network with the container
[root@server2 ~]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v2
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: 512Mi
        cpu: 500m
      requests:
        memory: 100Mi
        cpu: 0.1
  nodeSelector:
    kubernetes.io/hostname: server3
  hostNetwork: true    share the host's network namespace
[root@server2 ~]# kubectl apply -f myapp.yaml    deploy
pod/myapp created
[root@server3 memory]# netstat -antlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11150/nginx: master    nginx binds port 80 directly on the host
[root@server2 ~]# curl 172.25.50.3    reachable from outside
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>