Trying out Pods that use other volume types: Projected Volume
Secret type
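
For reference, the Pod being created here looks roughly like the sketch below. The pod name, volume name, and the two secret names are taken from the events that follow; the container image, command, and mount path are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
  - name: test-secret-volume
    image: busybox            # image and command are assumptions
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: mysql-cred
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: mysql-cred          # the projected volume named in the FailedMount event
    projected:
      sources:
      - secret:
          name: user
      - secret:
          name: pass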

Events:
  Type     Reason       Age               From               Message
  ----     ------       ----              ----               -------
  Normal   Scheduled    73s               default-scheduler  Successfully assigned default/test-projected-volume to vm-0-12-ubuntu
  Warning  FailedMount  9s (x8 over 73s)  kubelet            MountVolume.SetUp failed for volume "mysql-cred" : [secret "user" not found, secret "pass" not found]

The secrets have to be created first before the mount can find them.

kubectl create secret generic user --from-file=/home/ubuntu/username.txt
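
The pass secret is created the same way; the password file path below is an assumption.

kubectl create secret generic pass --from-file=/home/ubuntu/password.txt   # file path assumed
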
root@VM-0-7-ubuntu:/home/ubuntu# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-2gs6z   kubernetes.io/service-account-token   3      9h
pass                  Opaque                                1      47s
user                  Opaque                                1      70s

Alternatively, a Secret can also be defined as a Secret object and created from a YAML manifest.
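
For example, a minimal Secret manifest looks like this; the values under data must be base64 encoded, and the ones below are just sample encodings of "admin" and "1f2d1e2e67df".

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  user: YWRtaW4=              # base64 of "admin" (sample value)
  pass: MWYyZDFlMmU2N2Rm      # base64 of "1f2d1e2e67df" (sample value)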

Somewhere along the way I ended up using the tree command, which turned out to be very handy. (Unrelated to the topic here.)

root@VM-0-13-ubuntu:/home/ubuntu# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           17s
root@VM-0-13-ubuntu:/home/ubuntu# kubectl rollout status
error: required resource not specified
root@VM-0-13-ubuntu:/home/ubuntu# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
root@VM-0-13-ubuntu:/home/ubuntu# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5d59d67564   2         2         2       103s

kubectl describe deployment/nginx-deployment
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m38s  deployment-controller  Scaled up replica set nginx-deployment-5d59d67564 to 2
  Normal  ScalingReplicaSet  59s    deployment-controller  Scaled up replica set nginx-deployment-5d59d67564 to 3
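
The rolling update shown next was triggered by editing the Deployment's image field in place; kubectl edit opens the live object (whose source of truth is stored in etcd) in an editor and writes the change back through the API server.

kubectl edit deployment/nginx-deployment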

After changing the image version with kubectl edit, the rolling update proceeded as follows:

Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  7m15s  deployment-controller  Scaled up replica set nginx-deployment-5d59d67564 to 2
  Normal  ScalingReplicaSet  3m36s  deployment-controller  Scaled up replica set nginx-deployment-5d59d67564 to 3
  Normal  ScalingReplicaSet  31s    deployment-controller  Scaled up replica set nginx-deployment-64c9d67564 to 1
  Normal  ScalingReplicaSet  22s    deployment-controller  Scaled down replica set nginx-deployment-5d59d67564 to 2
  Normal  ScalingReplicaSet  22s    deployment-controller  Scaled up replica set nginx-deployment-64c9d67564 to 2
  Normal  ScalingReplicaSet  20s    deployment-controller  Scaled down replica set nginx-deployment-5d59d67564 to 1
  Normal  ScalingReplicaSet  20s    deployment-controller  Scaled up replica set nginx-deployment-64c9d67564 to 3
  Normal  ScalingReplicaSet  17s    deployment-controller  Scaled down replica set nginx-deployment-5d59d67564 to 0

Note the indentation of strategy: it is a top-level field under spec, at the same level as template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  strategy:
    type: RollingUpdate 
    rollingUpdate: 
      maxSurge: 1 
      maxUnavailable: 1

Both kubectl set image and kubectl edit can be used to change the configuration.
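
For example, switching the container image with kubectl set image (the container name nginx matches the Deployment above; the target tag is just an example):

kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1   # tag is only an example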

Common commands

Start a temporary container:

kubectl run -i --tty centos --image=centos --restart=Never
root@VM-0-13-ubuntu:/home/ubuntu# kubectl run -i --tty --image busybox:1.28.4 dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup web-0.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-0.nginx
Address 1: 10.32.0.11 web-0.nginx.default.svc.cluster.local
/ # nslookup web-1.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
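
The web-0.nginx and web-1.nginx records above come from a StatefulSet named web fronted by a headless Service named nginx; below is a sketch of manifests that would produce exactly these DNS names (the image and port details are assumptions).

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None            # headless Service: gives each Pod a stable DNS record
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx         # must match the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # image is an assumption
        ports:
        - containerPort: 80
          name: web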

Next, create the PVC that will be bound to a PV.
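
A sketch of the PVC manifest: the size and access mode are taken from the describe output further down, and no storageClassName is set, which matches the "no storage class is set" event.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi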

persistentvolumeclaim/pv-claim created
root@VM-0-13-ubuntu:/home/ubuntu# kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim   Pending                                                     9s

Look up the IP addresses of the services inside rook-ceph (the PV definition references them):

kubectl -n rook-ceph get service
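
The pv.yaml used next references those addresses. A rough sketch, assuming an in-tree Ceph RBD volume backed by rook-ceph; the monitor entry is the rook-ceph-mon service ClusterIP from the command above, and the pool, image, user, and secret names are placeholders.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  rbd:
    monitors:
    - "<rook-ceph-mon-cluster-ip>:6789"   # ClusterIP from kubectl -n rook-ceph get service
    pool: replicapool                     # placeholder pool name
    image: pv-image                       # placeholder RBD image name
    user: admin
    secretRef:
      name: ceph-secret                   # placeholder Secret holding the Ceph key
    fsType: ext4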

After creating the PV with the commands below (the IPs come from the step above), compare the PVC before and after binding:

kubectl create -f pv.yaml

kubectl describe pvc pv-claim

Before binding:

root@VM-0-13-ubuntu:/home/ubuntu# kubectl describe pvc pv-claim
Name:          pv-claim
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       pv-pod
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  14s (x10 over 2m23s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

After binding:

Name:          pv-claim
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        pv-volume
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       pv-pod
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  2m29s (x42 over 12m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
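
The pv-pod shown under "Used By" is simply a Pod that mounts this claim; a sketch is below (the container image and mount path are assumptions).

apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  containers:
  - name: pv-container
    image: nginx                         # image is an assumption
    volumeMounts:
    - name: pv-storage
      mountPath: /usr/share/nginx/html   # mount path is an assumption
  volumes:
  - name: pv-storage
    persistentVolumeClaim:
      claimName: pv-claim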

The PV / PVC / StorageClass relationship below is quoted from fellow student shadow's comment in the course; it's a great summary.

A user submits a request to create a Pod. Kubernetes sees that the Pod declares a PVC, so the PersistentVolumeController tries to find a PV to pair with it.

If no existing PV fits, it looks up the corresponding StorageClass, which creates a new PV on its behalf, and the PV and PVC are then bound.

A newly created PV is still just an API object. It only becomes useful after "two-phase processing" turns it into a real persistent volume on the host:
the first phase is handled by the AttachDetachController running on the master, which performs the Attach operation for this PV and attaches the remote disk to the host;
the second phase runs inside the kubelet on each node, which mounts the disk attached in step one onto a host directory. This control loop is called VolumeManagerReconciler; it runs in its own goroutine and does not block the kubelet's main loop.

Once these two steps are done, the PV's persistent volume is ready; the Pod can start normally and mount the persistent volume at the path specified inside the container.

On over-engineering (quoting 耀's summary):

Without PVC, a user's declaration would have to name a concrete storage type, and once that type changed, every microservice would have to change with it; that is why PVC and PV are kept separate. Without StorageClass, binding PVCs to PVs would have to be done entirely by hand, which would become one of the most repetitive and inefficient chores in the whole cluster. So this design is just right rather than over-engineered.
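
As a concrete illustration, dynamic provisioning only needs a StorageClass plus a PVC that names it. A sketch; the provisioner value is a placeholder for whatever driver the cluster actually runs (for example Rook's Ceph RBD CSI driver).

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-service
provisioner: example.com/block-driver   # placeholder: the cluster's actual storage driver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-with-sc
spec:
  storageClassName: block-service       # ties the claim to the StorageClass above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi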

My own take:
PVC: I'm the boss. These are the resources I want; you figure out how to get them.
PV: Here's the stuff, and here's the manual for how to use it.
SC: Don't come to me for every little thing; if it's not enough, help yourself.

I just couldn't get the binding problem solved in the end...
Changing tack: let's go with hostPath instead...
