Basic Concepts and Operations in Kubernetes
Table of Contents
- Preface
- Basic usage of kubectl
- k8s nodes and labels
- Pod, the smallest scheduling unit in k8s
- Namespaces
- Creating your own context
- Controller and Deployment
- The role of ReplicaSet in Deployment updates
Preface
Environment preparation
Use a k8s cluster set up with minikube or kubeadm as the lab environment.
For setting up the environment on CentOS 7, see one of my earlier articles.
Basic usage of kubectl
kubectl is the main command-line tool for interacting with a k8s cluster.
Command auto-completion
Run kubectl completion -h to see how command auto-completion is configured, then execute the suggested command in your shell. Note that the exact command differs between environments; if your k8s was set up on Windows or macOS, run kubectl completion -h first to check which command applies before executing it.
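As a concrete illustration (a sketch for a bash shell on Linux; zsh and other shells use a different command, which kubectl completion -h lists):

# Enable kubectl completion for the current bash session
source <(kubectl completion bash)
# Make it permanent for this user (bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc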
With completion enabled, we can use it to look at the k8s nodes: type n and press the Tab key, and the following suggestions appear.
[vagrant@vagrant1 ~]$ kubectl get n
namespaces networkpolicies.networking.k8s.io nodes
Viewing node information
[vagrant@vagrant1 ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
vagrant1 Ready master 39h v1.19.3
vagrant2 Ready <none> 39h v1.19.3
Viewing cluster information
After the k8s cluster is created successfully, a .kube folder appears in the user's home directory:
[vagrant@vagrant1 .kube]$ ll
total 8
drwxr-x---. 4 vagrant vagrant 35 Oct 20 04:38 cache
-rw-------. 1 vagrant vagrant 5557 Oct 20 09:20 config
The config file: I built this k8s cluster with kubeadm, so this file is the configuration kubectl uses to communicate with Kubernetes.
The contents of the config file (shown originally as a screenshot) break down into three main parts:
- clusters: endpoint data for each cluster, including the full apiserver URL and certificates.
- contexts: each context defines a working environment, i.e., which user to use and which cluster to enter, identified by a name.
- users: client credentials used to authenticate to the Kubernetes cluster.
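A heavily trimmed sketch of what such a kubeconfig looks like (the server address and the base64 data are placeholders, not the actual values from this cluster; the current-context field records which context is active):

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://10.0.2.15:6443        # apiserver endpoint (illustrative)
    certificate-authority-data: <base64>  # cluster CA certificate
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64>     # client credentials
    client-key-data: <base64>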
View the current context:
[vagrant@vagrant1 .kube]$ kubectl config current-context
kubernetes-admin@kubernetes
List the contexts in the config:
[vagrant@vagrant1 .kube]$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
Create a new context, then list the contexts in the config again:
[vagrant@vagrant1 .kube]$ kubectl config set-context new_context
Context "new_context" created.
[vagrant@vagrant1 .kube]$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          new_context
Switch to another context:
[vagrant@vagrant1 .kube]$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[vagrant@vagrant1 .kube]$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          new_context
Delete a context:
[vagrant@vagrant1 .kube]$ kubectl config delete-context new_context
deleted context new_context from /home/vagrant/.kube/config
[vagrant@vagrant1 .kube]$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
k8s nodes and labels
- View node information:
[vagrant@vagrant1 .kube]$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
vagrant1   Ready    master   41h   v1.19.3
vagrant2   Ready    <none>   40h   v1.19.3
[vagrant@vagrant1 .kube]$ kubectl get node vagrant1
NAME       STATUS   ROLES    AGE   VERSION
vagrant1   Ready    master   41h   v1.19.3
- kubectl describe node shows detailed information about a node:
[vagrant@vagrant1 ~]$ kubectl describe node vagrant1
Name:               vagrant1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=vagrant1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 20 Oct 2020 09:09:31 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  vagrant1
  AcquireTime:     <unset>
  RenewTime:       Thu, 22 Oct 2020 02:43:10 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 20 Oct 2020 09:32:51 +0000   Tue, 20 Oct 2020 09:32:51 +0000   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Thu, 22 Oct 2020 02:39:47 +0000   Tue, 20 Oct 2020 09:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 22 Oct 2020 02:39:47 +0000   Tue, 20 Oct 2020 09:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 22 Oct 2020 02:39:47 +0000   Tue, 20 Oct 2020 09:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 22 Oct 2020 02:39:47 +0000   Tue, 20 Oct 2020 09:33:02 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    vagrant1
Capacity:
  cpu:                2
  ephemeral-storage:  41921540Ki
  hugepages-2Mi:      0
  memory:             1881936Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  38634891201
  hugepages-2Mi:      0
  memory:             1779536Ki
  pods:               110
System Info:
  Machine ID:                 cbbaf9d37827a2459390178b0258c55f
  System UUID:                CBBAF9D3-7827-A245-9390-178B0258C55F
  Boot ID:                    425bb8b3-8659-4bdb-bf2b-331495fc64e4
  Kernel Version:             3.10.0-1127.19.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.19.3
  Kube-Proxy Version:         v1.19.3
PodCIDR:                      172.100.0.0/24
PodCIDRs:                     172.100.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace     Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------     ----                               ------------  ----------  ---------------  -------------  ---
  kube-system   coredns-6d56c8448f-7kn5f           100m (5%)     0 (0%)      70Mi (4%)        170Mi (9%)     41h
  kube-system   coredns-6d56c8448f-lqw9t           100m (5%)     0 (0%)      70Mi (4%)        170Mi (9%)     41h
  kube-system   etcd-vagrant1                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         41h
  kube-system   kube-apiserver-vagrant1            250m (12%)    0 (0%)      0 (0%)           0 (0%)         41h
  kube-system   kube-controller-manager-vagrant1   200m (10%)    0 (0%)      0 (0%)           0 (0%)         41h
  kube-system   kube-proxy-5hwx8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         41h
  kube-system   kube-scheduler-vagrant1            100m (5%)     0 (0%)      0 (0%)           0 (0%)         41h
  kube-system   weave-net-bwgwf                    100m (5%)     0 (0%)      200Mi (11%)      0 (0%)         41h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (42%)   0 (0%)
  memory             340Mi (19%)  340Mi (19%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>
You can also pass the -o flag to kubectl get node to change the output format:
- kubectl get node -o wide shows more information than plain kubectl get node:
[vagrant@vagrant1 ~]$ kubectl get node -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
vagrant1   Ready    master   41h   v1.19.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.13
vagrant2   Ready    <none>   40h   v1.19.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.13
- kubectl get node -o yaml shows node details in YAML form.
- kubectl get node vagrant1 -o json shows node details in JSON form.
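Building on the JSON output (an aside, not in the original): you can extract a single field with -o jsonpath, which follows the Node object's structure:

# Print only the kubelet version field of the node object
kubectl get node vagrant1 -o jsonpath='{.status.nodeInfo.kubeletVersion}'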
- View node label information
At the top of the kubectl describe node output there are key-value Labels entries; these are the node's labels, and you can filter by them when querying nodes (see the selector example below). View the label information:
[vagrant@vagrant1 ~]$ kubectl get nodes --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
vagrant1   Ready    master   41h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vagrant1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
vagrant2   Ready    <none>   41h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vagrant2,kubernetes.io/os=linux
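A hedged example of filtering by these labels with the -l (--selector) flag, using only labels that appear in the listing above:

# Nodes that carry the master role label (existence selector)
kubectl get nodes -l node-role.kubernetes.io/master
# Nodes with a specific label value
kubectl get nodes -l kubernetes.io/hostname=vagrant2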
Updating labels
kubectl label node vagrant1 role=master sets the label role=master on node vagrant1. Checking vagrant1's labels again, there is now an extra role=master key-value pair:
[vagrant@vagrant1 ~]$ kubectl label node vagrant1 role=master
node/vagrant1 labeled
[vagrant@vagrant1 ~]$ kubectl get node vagrant1 --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
vagrant1   Ready    master   41h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vagrant1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,role=master
When listing nodes you can see that vagrant2 has no role:
[vagrant@vagrant1 ~]$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
vagrant1   Ready    master   42h   v1.19.3
vagrant2   Ready    <none>   41h   v1.19.3
You can make vagrant2 display a role by adding a node-role label to it:
[vagrant@vagrant1 ~]$ kubectl label node vagrant2 node-role.kubernetes.io/worker1=
node/vagrant2 labeled
[vagrant@vagrant1 ~]$ kubectl get node
NAME       STATUS   ROLES     AGE   VERSION
vagrant1   Ready    master    42h   v1.19.3
vagrant2   Ready    worker1   41h   v1.19.3
Delete a label:
[vagrant@vagrant1 ~]$ kubectl label node vagrant1 role-
node/vagrant1 labeled
Pod, the smallest scheduling unit in k8s
The relationship between kubelet, Docker, nodes, Pods, and containers (diagram omitted): the kubelet service runs on top of the Docker service; a Node hosts multiple Pods, and a Pod contains one or more containers plus volumes.
- Pod definition
  - one or a group of application containers that share resources (such as volumes)
  - containers that share the same namespaces (such as the network namespace)
  - the Pod is the smallest scheduling unit in k8s
- Creating a Pod
A Pod can be created from a yaml file, for example a Pod containing nginx and busybox:
[vagrant@vagrant1 vagrant]$ cat nginx_busybox.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-busybox
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: busybox
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]
Run kubectl create -f nginx_busybox.yml to create the Pod; the -f flag specifies the yaml file:
[vagrant@vagrant1 vagrant]$ kubectl create -f nginx_busybox.yml
pod/nginx-busybox created
- Viewing the Pod
[vagrant@vagrant1 vagrant]$ kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
nginx-busybox   2/2     Running   0          106s
- NAME: the name specified in the yaml file.
- READY: the denominator (2) is the number of containers in this Pod; the numerator (2) is the number of those containers that are running correctly.
- STATUS: the running state of the Pod; Running means the Pod is operating normally.
- AGE: how long the Pod has been running.
- View the Pod details:
[vagrant@vagrant1 vagrant]$ kubectl describe pod nginx-busybox
Name:         nginx-busybox
Namespace:    default
Priority:     0
Node:         vagrant2/10.0.2.15
Start Time:   Thu, 22 Oct 2020 03:45:07 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.32.0.2
IPs:
  IP:  10.32.0.2
Containers:
  nginx:
    Container ID:   docker://78137f996dc9b2a003b6f65502c5b6f3c58390d3cf99055161fe06b8ed9010ab
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 22 Oct 2020 03:45:25 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mqsmr (ro)
  busybox:
    Container ID:  docker://53ad2c72c74fe4ddb7c605743a7885066dbbe618fe13c0099624ec6bb252be5e
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:a9286defaba7b3a519d585ba0e37d0b2cbee74ebfe590960b0b1d6a5e97d1e1d
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo hello; sleep 10; done
    State:          Running
      Started:      Thu, 22 Oct 2020 03:45:35 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mqsmr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-mqsmr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mqsmr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m4s   default-scheduler  Successfully assigned default/nginx-busybox to vagrant2
  Normal  Pulling    8m4s   kubelet            Pulling image "nginx"
  Normal  Pulled     7m47s  kubelet            Successfully pulled image "nginx" in 16.874039181s
  Normal  Created    7m47s  kubelet            Created container nginx
  Normal  Started    7m47s  kubelet            Started container nginx
  Normal  Pulling    7m47s  kubelet            Pulling image "busybox"
  Normal  Pulled     7m37s  kubelet            Successfully pulled image "busybox" in 9.398186417s
  Normal  Created    7m37s  kubelet            Created container busybox
  Normal  Started    7m37s  kubelet            Started container busybox
View the details in a more concise form:
[vagrant@vagrant1 vagrant]$ kubectl get pod nginx-busybox -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP          NODE       NOMINATED NODE   READINESS GATES
nginx-busybox   2/2     Running   0          9m28s   10.32.0.2   vagrant2   <none>           <none>
- Use exec to run commands inside the Pod:
[vagrant@vagrant1 vagrant]$ kubectl exec nginx-busybox -- date
Defaulting container name to nginx.
Use 'kubectl describe pod/nginx-busybox -n default' to see all of the containers in this pod.
Thu Oct 22 05:41:40 UTC 2020
If no container is specified, the command runs in the first container, i.e., the first container defined in the yaml file.
Use -c to specify a container:
[vagrant@vagrant1 vagrant]$ kubectl exec nginx-busybox -c busybox -- date
Thu Oct 22 05:43:21 UTC 2020
Use -it to get an interactive shell inside a container:
[vagrant@vagrant1 vagrant]$ kubectl exec -it nginx-busybox -c busybox -- sh
/ #
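Closely related to exec (an aside, not in the original walkthrough): the busybox container in this Pod echoes "hello" every 10 seconds, and you can watch that output with kubectl logs:

# Stream the logs of the busybox container in the nginx-busybox Pod
kubectl logs -f nginx-busybox -c busybox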
- Deleting the Pod
[vagrant@vagrant1 vagrant]$ kubectl delete -f nginx_busybox.yml
pod "nginx-busybox" deleted
Namespaces
- Namespaces provide isolation between different teams and different projects.
- Resource names are independent across namespaces; for example, Pods with the same name can exist in different namespaces.
Listing namespaces
[vagrant@vagrant1 vagrant]$ kubectl get namespaces
NAME STATUS AGE
default Active 44h
kube-node-lease Active 44h
kube-public Active 44h
kube-system Active 44h
Listing the Pods in a namespace
[vagrant@vagrant1 ~]$ kubectl get pod --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-7kn5f 1/1 Running 0 2d15h
coredns-6d56c8448f-lqw9t 1/1 Running 0 2d15h
etcd-vagrant1 1/1 Running 0 2d15h
kube-apiserver-vagrant1 1/1 Running 2 2d15h
kube-controller-manager-vagrant1 0/1 Running 6 2d15h
kube-proxy-5hwx8 1/1 Running 0 2d15h
kube-proxy-7b852 1/1 Running 0 2d14h
kube-scheduler-vagrant1 0/1 Running 4 2d15h
weave-net-89zh8 2/2 Running 0 2d14h
weave-net-bwgwf 2/2 Running 0 2d14h
Creating a namespace
[vagrant@vagrant1 ~]$ kubectl create namespace demo
namespace/demo created
[vagrant@vagrant1 ~]$ kubectl get namespaces | grep demo
demo Active 18s
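Equivalently (a sketch of the declarative route; the file name namespace_demo.yml is hypothetical), a namespace can be created from a manifest:

# namespace_demo.yml, the declarative equivalent of "kubectl create namespace demo"
apiVersion: v1
kind: Namespace
metadata:
  name: demo

Apply it with kubectl apply -f namespace_demo.yml.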
Create two nginx Pods, one in the default namespace and one in the newly created demo namespace:
- The yaml file for the default namespace:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
- The yaml file for the demo namespace:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
- Run the create commands:
[vagrant@vagrant1 vagrant]$ kubectl create -f nginx.yml
pod/nginx created
[vagrant@vagrant1 vagrant]$ kubectl create -f nginx_demo.yml
pod/nginx created
- List the Pods in all namespaces:
[vagrant@vagrant1 vagrant]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       nginx                              1/1     Running   0          68s
demo          nginx                              1/1     Running   0          60s
kube-system   coredns-6d56c8448f-7kn5f           1/1     Running   0          2d15h
kube-system   coredns-6d56c8448f-lqw9t           1/1     Running   0          2d15h
kube-system   etcd-vagrant1                      1/1     Running   0          2d15h
kube-system   kube-apiserver-vagrant1            1/1     Running   2          2d15h
kube-system   kube-controller-manager-vagrant1   1/1     Running   6          2d15h
kube-system   kube-proxy-5hwx8                   1/1     Running   0          2d15h
kube-system   kube-proxy-7b852                   1/1     Running   0          2d14h
kube-system   kube-scheduler-vagrant1            1/1     Running   4          2d15h
kube-system   weave-net-89zh8                    2/2     Running   0          2d14h
kube-system   weave-net-bwgwf                    2/2     Running   0          2d15h
There are two Pods named nginx, but they run in different namespaces.
- To delete one of the nginx Pods you must specify its namespace; otherwise the nginx Pod in the default namespace is deleted (see the example below).
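For example (a sketch using the -n/--namespace flag):

# Deletes the nginx Pod in the demo namespace; without -n the default namespace is assumed
kubectl delete pod nginx -n demo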
Deleting a namespace
Everything inside the namespace is deleted along with it.
[vagrant@vagrant1 vagrant]$ kubectl delete namespaces demo
namespace "demo" deleted
Creating your own context
- If several people work in the same context, they may create Pods with the same name in different namespaces. Everyone then has to specify the namespace on every operation, otherwise they risk modifying a Pod of the same name that someone else created in another namespace.
- By creating your own context, with the same cluster and user but a different namespace, you can work inside it without affecting others.
Creating a context
[vagrant@vagrant1 vagrant]$ kubectl config set-context demo --user=kubernetes-admin --cluster=kubernetes --namespace=demo
Context "demo" created.
Switch the context to demo. Pods created in this context land in the demo namespace by default, i.e., the namespace specified when the context was created:
[vagrant@vagrant1 vagrant]$ kubectl config use-context demo
Switched to context "demo".
[vagrant@vagrant1 vagrant]$ kubectl create namespace demo
namespace/demo created
[vagrant@vagrant1 vagrant]$ kubectl create -f nginx.yml
pod/nginx created
[vagrant@vagrant1 vagrant]$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
demo nginx 0/1 ContainerCreating 0 10s
kube-system coredns-6d56c8448f-7kn5f 1/1 Running 0 2d16h
kube-system coredns-6d56c8448f-lqw9t 1/1 Running 0 2d16h
kube-system etcd-vagrant1 1/1 Running 0 2d16h
kube-system kube-apiserver-vagrant1 1/1 Running 2 2d16h
kube-system kube-controller-manager-vagrant1 1/1 Running 6 2d16h
kube-system kube-proxy-5hwx8 1/1 Running 0 2d16h
kube-system kube-proxy-7b852 1/1 Running 0 2d15h
kube-system kube-scheduler-vagrant1 1/1 Running 4 2d16h
kube-system weave-net-89zh8 2/2 Running 0 2d15h
kube-system weave-net-bwgwf 2/2 Running 0 2d15h
Deleting a context
Switch to another context before deleting it:
[vagrant@vagrant1 vagrant]$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[vagrant@vagrant1 vagrant]$ kubectl config delete-context demo
deleted context demo from /home/vagrant/.kube/config
Controller and Deployment
The k8s architecture (diagram omitted): one master node manages multiple worker nodes, and the master node consists mainly of the following four components:
- etcd: a distributed key-value store database
- API Server: exposes the interface for accessing k8s
- Scheduler: decides, according to its scheduling algorithm, which Node a workload runs on
- Controller Manager: drives the current state toward the desired state. For example, if a service is expected to run on Node1 but Node1 goes down, the controller runs the service on another node, or Node1 is restarted, so that the desired state is reached.
Deployment: you describe a desired state in a Deployment object, and a controller changes the current state into the desired state.
Deployment update demo:
- First create the following three yaml files. When kind is set to Deployment, the Pods it creates are managed by the Deployment so that the current state is driven to the desired state:
nginx_deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
nginx_deployment_update.yml: upgrades nginx from 1.7.9 to 1.8
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
nginx_deployment_scale.yml: scales nginx out to 4 replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
- Then create the Pods from nginx_deployment.yml:
[vagrant@vagrant1 vagrant]$ kubectl create -f nginx_deployment.yml
deployment.apps/nginx-deployment created
Check the deployment's current and desired state:
[vagrant@vagrant1 vagrant]$ kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES        SELECTOR
nginx-deployment   2/2     2            2           2m35s   nginx        nginx:1.7.9   app=nginx
In the READY column the numerator is the number of replicas that are currently ready and the denominator is the number of desired replicas, so 2/2 means the current state matches the desired state.
Check the pod details:
[vagrant@vagrant1 vagrant]$ kubectl get pod -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5d59d67564-4tjsj   1/1     Running   0          8m50s
nginx-deployment-5d59d67564-q5gr9   1/1     Running   0          8m50s
Two nginx Pods are running. Delete one of them, then check the pods again right away: there are still two nginx-deployment Pods running:
[vagrant@vagrant1 vagrant]$ kubectl delete pod nginx-deployment-5d59d67564-4tjsj
pod "nginx-deployment-5d59d67564-4tjsj" deleted
[vagrant@vagrant1 vagrant]$ kubectl get pod -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5d59d67564-c85l6   1/1     Running   0          7s
nginx-deployment-5d59d67564-q5gr9   1/1     Running   0          11m
This is the Deployment keeping the desired state unchanged: it created another identical Pod to replace the deleted one.
- Now update the nginx of the deployed nginx-deployment service to version 1.8:
Use kubectl apply -f nginx_deployment_update.yml; apply covers both updating and creating, while create only creates:
[vagrant@vagrant1 vagrant]$ kubectl get deployments.apps nginx-deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES        SELECTOR
nginx-deployment   2/2     2            2           18m   nginx        nginx:1.7.9   app=nginx
[vagrant@vagrant1 vagrant]$ kubectl apply -f nginx_deployment_update.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/nginx-deployment configured
[vagrant@vagrant1 vagrant]$ kubectl get deployments.apps nginx-deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES      SELECTOR
nginx-deployment   2/2     2            2           22m   nginx        nginx:1.8   app=nginx
You can see that nginx changed from 1.7.9 to 1.8.
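An optional aside (not part of the original walkthrough): before applying an updated manifest, kubectl diff can preview what apply would change by doing a server-side dry run and printing a diff:

# Preview the changes the update manifest would make
kubectl diff -f nginx_deployment_update.yml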
- Now scale the nginx-deployment service out to 4 replicas:
As above, run kubectl apply -f nginx_deployment_scale.yml:
[vagrant@vagrant1 vagrant]$ kubectl apply -f nginx_deployment_scale.yml
deployment.apps/nginx-deployment configured
Check the deployment and pod details again; the service has been scaled out to 4:
[vagrant@vagrant1 vagrant]$ kubectl get deployments.apps nginx-deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES      SELECTOR
nginx-deployment   4/4     4            4           26m   nginx        nginx:1.8   app=nginx
[vagrant@vagrant1 vagrant]$ kubectl get pod -l app=nginx -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP          NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-64c9d67564-4b4zv   1/1     Running   0          51s     10.32.0.2   vagrant2   <none>           <none>
nginx-deployment-64c9d67564-hzv8q   1/1     Running   0          51s     10.32.0.5   vagrant2   <none>           <none>
nginx-deployment-64c9d67564-m2x5f   1/1     Running   0          5m32s   10.32.0.4   vagrant2   <none>           <none>
nginx-deployment-64c9d67564-nqm67   1/1     Running   0          4m44s   10.32.0.3   vagrant2   <none>           <none>
- You can also update without a yaml file by using kubectl edit deployment directly, which opens a vi editing interface:
[vagrant@vagrant1 vagrant]$ kubectl edit deployment nginx-deployment
After saving and exiting, a confirmation message is shown (screenshot omitted).
Then check the Deployment status and Pod details of the nginx-deployment service; both now show 3 replicas (screenshot omitted).
The role of ReplicaSet in Deployment updates
After a Deployment is created, a history record is kept every time the service is updated.
Use kubectl describe deployments.apps nginx-deployment-test to view the details of the newly created nginx-deployment-test service (output omitted):
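Each Deployment revision is backed by a ReplicaSet: when the Pod template changes, the Deployment creates a new ReplicaSet and scales the old one down. A hedged way to observe this on your own cluster (commands only; the exact output depends on your cluster):

# ReplicaSets owned by the deployment; after an image update you should see
# a new ReplicaSet scaled up and the old one scaled down to 0
kubectl get replicaset -l app=nginx
# The ScalingReplicaSet events are also visible in the deployment description
kubectl describe deployment nginx-deployment-test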
Now use the kubectl scale command to change the service to 3 replicas:
[vagrant@vagrant1 vagrant]$ kubectl scale --current-replicas=4 --replicas=3 deployment/nginx-deployment-test
Check the nginx-deployment-test details again (output omitted):
Then update the image to 1.9.1 and check the deployment details again (output omitted):
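One way to perform that image update from the command line (a sketch; the original showed this step only as a screenshot, so the exact method used there is not known) is kubectl set image:

# Update the nginx container of the deployment to nginx:1.9.1
kubectl set image deployment/nginx-deployment-test nginx=nginx:1.9.1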
Rollback history
If you want to roll back to the version before an update, use the kubectl rollout command.
- First use kubectl rollout history to view the revision history:
[vagrant@vagrant1 vagrant]$ kubectl rollout history deployment nginx-deployment-test
deployment.apps/nginx-deployment-test
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
Use kubectl rollout history deployment nginx-deployment-test --revision 1 to inspect revision 1 (where nginx is 1.7.9), and likewise revision 2 (the latest, where nginx is 1.9.1). Only changes to the Pod template create a new revision, which is why the scale-to-4 change does not show up here:
[vagrant@vagrant1 vagrant]$ kubectl rollout history deployment nginx-deployment-test --revision 1
deployment.apps/nginx-deployment-test with revision #1
Pod Template:
  Labels:       app=nginx
        pod-template-hash=5d59d67564
  Containers:
   nginx:
    Image:      nginx:1.7.9
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
[vagrant@vagrant1 vagrant]$ kubectl rollout history deployment nginx-deployment-test --revision 2
deployment.apps/nginx-deployment-test with revision #2
Pod Template:
  Labels:       app=nginx
        pod-template-hash=69c44dfb78
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
- Use kubectl rollout undo to roll back to the previous revision:
[vagrant@vagrant1 vagrant]$ kubectl rollout undo deployment nginx-deployment-test
deployment.apps/nginx-deployment-test rolled back
Check the current deployment details; the nginx version is back to 1.7.9:
[vagrant@vagrant1 vagrant]$ kubectl get deployments.apps nginx-deployment-test -o wide
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES        SELECTOR
nginx-deployment-test   3/3     3            3           57m   nginx        nginx:1.7.9   app=nginx
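If you need to jump to a specific revision rather than just the previous one, rollout undo also accepts --to-revision (a brief sketch):

# Roll back explicitly to revision 1 from the history listed above
kubectl rollout undo deployment nginx-deployment-test --to-revision=1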