K8S Study – Kubeadm install kubernetes – 1 – Component overview
K8S Study – Kubeadm install kubernetes – 2 – Installation and deployment
K8S Study – Kubeadm – 3 – Dashboard deployment and upgrade
K8S Study – Kubeadm – 4 – Test-running Nginx + Tomcat

1. k8s design philosophy: layered architecture

http://docs.kubernetes.org.cn/251.html#Kubernetes架构

Kubernetes' design philosophy and functionality form a layered architecture similar to Linux, as shown below.

Core layer: Kubernetes' most essential functionality; it exposes APIs outward for building higher-level applications and provides a plugin-style application execution environment inward.
Application layer: deployment (stateless applications (no cluster relationships), stateful applications (database master/slave, Redis clusters), batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.). Stateful applications generally run on physical machines.
Management layer: system metrics (infrastructure, container, and network metrics), automation (auto-scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.).
Interface layer: the kubectl command-line tool, client SDKs, and cluster federation.
Ecosystem: the large ecosystem of container cluster management and scheduling above the interface layer, which splits into two categories:
1. Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
2. Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, the cluster's own configuration and management, etc.

2. k8s design philosophy: API design principles

https://www.kubernetes.org.cn/kubernetes%e8%ae%be%e8%ae%a1%e7%90%86%e5%bf%b5 (API design principles)

•All APIs should be declarative.
•API objects are complementary and composable.
•High-level APIs are designed around operational intent.
•Low-level APIs are designed around the control needs of high-level APIs.
•Avoid trivial wrappers; there should be no hidden internal mechanisms that the external API cannot reveal.
•API operation complexity should be proportional to the number of objects.
•API object state must not depend on network connectivity.
•Avoid making operating mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard.

2.1 Kubernetes design philosophy and distributed systems

Analyzing and understanding Kubernetes' design philosophy gives us a deeper understanding of the system and helps us use it better to manage distributed, cloud-native applications; it also lets us learn from its experience in distributed system design.

2.2 API design principles

For a cloud computing system, the system API occupies the commanding position in the system design. As noted earlier, every time the K8s cluster system gains a new feature or adopts a new technology, a corresponding API object is introduced to support management operations on that feature, so understanding and mastering the API is like grabbing the K8s system by the nose ring. The K8s system API is designed according to the following principles:

1. All APIs should be declarative.
As mentioned earlier, declarative operations, unlike imperative ones, have stable effects under repetition, which matters in distributed environments prone to data loss or duplication. Declarative operations are also easier for users: the system can hide implementation details from them, and hiding those details preserves the system's room for future optimization. Furthermore, a declarative API implies that all API objects are nouns; Service, Volume, and the other APIs are nouns describing the target distributed object the user expects to obtain.
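As a sketch of this noun-based, declarative style (the object and label names below are illustrative, not taken from the text), a Service manifest describes only the target object; the steps to reach it are left to the system:

```yaml
# A declarative object: the user states the desired end state,
# and the cluster converges to it; re-applying is idempotent.
apiVersion: v1
kind: Service
metadata:
  name: demo-service        # illustrative name
spec:
  selector:
    app: demo               # send traffic to pods carrying this label
  ports:
  - port: 80                # desired state only; no imperative steps
```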

2. API objects are complementary and composable.
In effect this encourages API objects to meet the goals of object-oriented design, i.e. "high cohesion, loose coupling": a sensible decomposition of business-related concepts that makes the resulting objects reusable. K8s, as a distributed system management platform, is itself a kind of business system; its business just happens to be scheduling and managing container services.

3. High-level APIs are designed around operational intent.
Designing a good API has much in common with designing a good application with object-oriented methods: the high-level design must start from the business, not prematurely from the technical implementation. For K8s, the high-level API design must therefore start from K8s' business, that is, from the operational intent of scheduling and managing containers.

4. Low-level APIs are designed around the control needs of high-level APIs.
Low-level APIs exist to be used by high-level APIs. To reduce redundancy and improve reusability, low-level API design should also be driven by need, resisting the temptation to be shaped by the technical implementation.

5. Avoid trivial wrappers; there should be no hidden internal mechanisms that the external API cannot reveal.
A trivial wrapper adds no new functionality; it only adds a dependency on the wrapped API. Hidden internal mechanisms also make a system very hard to maintain. For example, PetSet and ReplicaSet are both collections of Pods, so K8s defines them as distinct API objects rather than using a single ReplicaSet and internally distinguishing stateful from stateless ReplicaSets with some special algorithm.

6. API operation complexity should be proportional to the number of objects.
This is mainly about performance: to keep the whole system from slowing to the point of unusability as it scales, the minimum requirement is that API operation complexity not exceed O(N), where N is the number of objects; otherwise the system is not horizontally scalable.

7. API object state must not depend on network connectivity.
As everyone knows, network partitions are common in distributed environments, so API object state must tolerate network instability and cannot depend on the connection state. Local access always works, but cross-host access still depends on the network; kubeadm does not expose the insecure port by default, while binary installs expose 8080 listening on 127.0.0.1, so connecting from the local machine is enough to reach the API.

8. Avoid making operating mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard.

API objects are the units of management and operation in a K8S cluster.

3. Using k8s commands


3.1 kubectl overview

https://kubernetes.io/zh/docs/reference/kubectl/overview/ (kubectl command-line interface)

Run kubectl commands from a terminal window using the following syntax:

kubectl [command] [TYPE] [NAME] [flags]
where command, TYPE, NAME, and flags are:

command: the operation to perform on one or more resources, e.g. create, get, describe, delete.

TYPE: the resource type. Resource types are case-insensitive and may be given in singular, plural, or abbreviated form. For example, the following commands produce the same output:
  kubectl get pod 
  kubectl get pods 
  kubectl get po
root@master-1:~# kubectl get pod 
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          2d16h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          2d16h
net-test1-5fcc69db59-xthfd           1/1     Running   2          2d16h
nginx-deployment-574b87c764-9m59f    1/1     Running   1          34h
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          34h
root@master-1:~# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          2d16h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          2d16h
net-test1-5fcc69db59-xthfd           1/1     Running   2          2d16h
nginx-deployment-574b87c764-9m59f    1/1     Running   1          34h
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          34h
root@master-1:~# kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          2d16h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          2d16h
net-test1-5fcc69db59-xthfd           1/1     Running   2          2d16h
nginx-deployment-574b87c764-9m59f    1/1     Running   1          34h
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          34h

3.2 Operation commands and syntax

https://kubernetes.io/zh/docs/reference/kubectl/overview/#参考链接

root@master-1:~# kubectl get service
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes              ClusterIP   192.168.0.1      <none>        443/TCP        2d19h
magedu-nginx-service    NodePort    192.168.9.132    <none>        80:30004/TCP   37h
magedu-tomcat-service   NodePort    192.168.14.155   <none>        80:31849/TCP   35h

root@master-1:~# kubectl describe service magedu-tomcat-service # without the final argument, all services are described
Name:                     magedu-tomcat-service
Namespace:                default
Labels:                   app=kaivi-tomcat-service-label
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":
                            {"annotations":{},"labels":{"app":"kaivi-tomcat-service-
                            label"},"name":"magedu-tomcat-servi...
Selector:                 app=tomcat
Type:                     NodePort
IP:                       192.168.14.155
Port:                     http  80/TCP
TargetPort:               8080/TCP
NodePort:                 http  31849/TCP
Endpoints:                10.10.4.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@master-1:~# kubectl explain pod  # inspect an object's fields, nested level by level

root@master-1:~# kubectl explain pod.apiVersion

root@master-1:~# kubectl explain deployment.spec.selector

The difference between create and apply:

apply supports repeated modification of the yaml file with changes taking effect dynamically: edit the file and re-run apply -f file.yml.
create creates a resource once; to make later changes to the yml file take effect, you must delete the existing resource, modify the file, and create the resource again (delete first, then edit, then re-create).

root@master-1:~# kubectl cluster-info 
Kubernetes master is running at https://172.20.10.248:6443
KubeDNS is running at https://172.20.10.248:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@master-1:~# kubectl cordon --help
Mark node as unschedulable.

Examples:
  # Mark node "foo" as unschedulable.
  kubectl cordon foo

Options:
      --dry-run=false: If true, only print the object that would be sent, without sending it.
  -l, --selector='': Selector (label query) to filter on

Usage:
  kubectl cordon NODE [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
root@master-1:~# kubectl cordon master-1 # mark the node unschedulable
node/master-1 cordoned
root@master-1:~# kubectl get node 
NAME       STATUS                     ROLES    AGE     VERSION
master-1   Ready,SchedulingDisabled   master   2d19h   v1.17.4
master-2   Ready                      master   2d19h   v1.17.4
master-3   Ready                      master   2d18h   v1.17.4
node-1     Ready                      <none>   2d18h   v1.17.4
node-2     Ready                      <none>   2d18h   v1.17.4
node-3     Ready                      <none>   2d18h   v1.17.4
root@master-1:~# kubectl uncordon master-1 # make the node schedulable again
node/master-1 uncordoned
root@master-1:~# kubectl get node 
NAME       STATUS   ROLES    AGE     VERSION
master-1   Ready    master   2d19h   v1.17.4
master-2   Ready    master   2d19h   v1.17.4
master-3   Ready    master   2d18h   v1.17.4
node-1     Ready    <none>   2d18h   v1.17.4
node-2     Ready    <none>   2d18h   v1.17.4
node-3     Ready    <none>   2d18h   v1.17.4
root@master-1:~# kubectl drain --help # evict pods from a node, used to take a node offline in an emergency

Usage:
  kubectl drain NODE [options]
root@master-1:~# kubectl api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
controllerrevisions                            apps                           true         ControllerRevision
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
replicasets                       rs           apps                           true         ReplicaSet
statefulsets                      sts          apps                           true         StatefulSet
tokenreviews                                   authentication.k8s.io          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch                          true         CronJob
jobs                                           batch                          true         Job
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
leases                                         coordination.k8s.io            true         Lease
endpointslices                                 discovery.k8s.io               true         EndpointSlice
events                            ev           events.k8s.io                  true         Event
ingresses                         ing          extensions                     true         Ingress
ingresses                         ing          networking.k8s.io              true         Ingress
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy
runtimeclasses                                 node.k8s.io                    false        RuntimeClass
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
roles                                          rbac.authorization.k8s.io      true         Role
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
csidrivers                                     storage.k8s.io                 false        CSIDriver
csinodes                                       storage.k8s.io                 false        CSINode
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment

3.3 Output options and syntax


root@master-1:~# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          2d16h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          2d16h
net-test1-5fcc69db59-xthfd           1/1     Running   2          2d16h
nginx-deployment-574b87c764-9m59f    1/1     Running   1          35h
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          35h
root@master-1:~# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP          NODE     NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-jz944           1/1     Running   1          2d16h   10.10.3.3   node-1   <none>           <none>
net-test1-5fcc69db59-wzlmg           1/1     Running   2          2d16h   10.10.4.9   node-2   <none>           <none>
net-test1-5fcc69db59-xthfd           1/1     Running   2          2d16h   10.10.5.8   node-3   <none>           <none>
nginx-deployment-574b87c764-9m59f    1/1     Running   1          35h     10.10.5.9   node-3   <none>           <none>
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          35h     10.10.4.7   node-2   <none>           <none>

4. The K8S API

Several important k8s concepts:

Objects: what you interact with when using k8s
The K8s declarative API
Take the following yaml file as an example:

apiVersion: apps/v1  # Kubernetes API version used to create this object
kind: Deployment     # kind: the type of object to create
metadata:            # data that identifies the object, including a name and an optional namespace
  name: nginx-deployment
  labels:
    app: nginx
spec:            # desired state of the containers; multiple containers may be defined, names must not clash
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.linux39.com/baseimages/nginx:1.14.2
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kaivi-nginx-service-label
  name: magedu-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80  # service port; traffic to service port 80 is forwarded to the pod port (targetPort) 80
    protocol: TCP
    targetPort: 80 # pod port
    nodePort: 30004  # host port; service port 80 is reached via host port 30004
  selector:
    app: nginx

How are the required fields declared?
1. apiVersion: the Kubernetes API version used to create the object
2. kind: the type of object to create
3. metadata: data that identifies the object, including a name and an optional namespace
4. spec: the desired state of the containers; multiple containers may be defined, names must not clash
5. status: generated automatically by k8s after the Pod is created
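A minimal (hypothetical) Pod manifest shows the four user-supplied fields together; status is appended by k8s after creation:

```yaml
apiVersion: v1        # 1. API version used to create the object
kind: Pod             # 2. type of object to create
metadata:             # 3. data identifying the object: a name, optional namespace
  name: demo-pod      #    illustrative name
spec:                 # 4. desired state: the containers to run
  containers:
  - name: demo
    image: nginx:1.14.2
# status: ...         # 5. generated by k8s once the Pod exists
```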

root@master-1:~# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          2d18h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          2d18h
net-test1-5fcc69db59-xthfd           1/1     Running   2          2d18h
nginx-deployment-574b87c764-9m59f    1/1     Running   1          36h
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          36h

root@master-1:~# kubectl describe pod tomcat-deployment-7cd955f48c-lthk2
Name:         tomcat-deployment-7cd955f48c-lthk2
Namespace:    default
Priority:     0
Node:         node-2/172.20.10.201
Start Time:   Mon, 30 Mar 2020 22:15:46 +0800
Labels:       app=tomcat
              pod-template-hash=7cd955f48c
Annotations:  <none>
Status:       Running   # check that the status is Running
IP:           10.10.4.7
IPs:
  IP:           10.10.4.7
Controlled By:  ReplicaSet/tomcat-deployment-7cd955f48c
Containers:
  tomcat:
    Container ID:   docker://3ad024e3fa02efb0013ea776200ad70d684ee468b76b777d2c21a5017e65c660
    Image:          harbor.linux39.com/baseimages/tomcat:app
    Image ID:       docker-pullable://harbor.linux39.com/baseimages/tomcat@sha256:de80cfab99f015db3c47ea33cab64cc4e6
    5dd5d41a147fd6c9fc41fcfaeb69f1
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running      # check that the state is Running
      Started:      Wed, 01 Apr 2020 08:32:44 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 30 Mar 2020 22:19:37 +0800
      Finished:     Wed, 01 Apr 2020 08:26:14 +0800
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9n88d (ro)
Conditions:
  Type              Status  # querying the condition status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-9n88d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9n88d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

yaml files and syntax basics:

Create the yaml files in advance, along with the namespace and other resources the pod needs to run.

root@master-1:~# cd /opt/
root@master-1:/opt# mkdir file-yaml
root@master-1:/opt# cd file-yaml/
root@master-1:/opt/file-yaml# vim linux39-ns.yml
apiVersion: v1 # API version
kind: Namespace # kind is Namespace
metadata: # metadata
  name: linux39 # namespace name

root@master-1:/opt/file-yaml# kubectl get ns
NAME                   STATUS   AGE
default                Active   2d20h
kube-node-lease        Active   2d20h
kube-public            Active   2d20h
kube-system            Active   2d20h
kubernetes-dashboard   Active   2d16h

root@master-1:/opt/file-yaml# kubectl apply -f linux39-ns.yml 
namespace/linux39 created

root@master-1:/opt/file-yaml# kubectl get ns
NAME                   STATUS   AGE
default                Active   2d20h
kube-node-lease        Active   2d20h
kube-public            Active   2d20h
kube-system            Active   2d20h
kubernetes-dashboard   Active   2d16h
linux39                Active   8s       # the newly created namespace

The linux39 namespace is now stored in etcd, but it holds no data yet.

http://www.bejson.com/validators/yaml_editor/ (online YAML and JSON editor)

Case sensitive.
Indentation expresses hierarchy.
Tabs are not allowed for indentation; only spaces.
The number of spaces does not matter, as long as elements at the same level are left-aligned.
"#" starts a comment; everything from that character to the end of the line is ignored by the parser.
Better suited to configuration files than JSON.

Main yaml features:

Parent/child hierarchy
Lists
Key-value pairs (also called maps, i.e. data in key: value form)
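All three features appear side by side in a small illustrative fragment:

```yaml
metadata:               # indentation expresses the parent/child hierarchy
  labels:               # a map: key: value pairs
    app: demo
ports:                  # a list: each "-" begins one item
- containerPort: 80
- containerPort: 443
```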

A detailed look at the Nginx application yaml file

# pwd
/opt/k8s-data/yaml/linux39
# mkdir nginx-tomcat-app1 tomcat-app2
# cd nginx/

# pwd
/opt/k8s-data/yaml/linux39/nginx

# cat nginx.yaml
kind: Deployment  # kind: a Deployment controller; see kubectl explain Deployment
apiVersion: apps/v1 # API version; check with kubectl explain Deployment.apiVersion and keep them consistent
metadata: # pod metadata, kubectl explain Deployment.metadata
  labels: # custom pod labels, kubectl explain Deployment.metadata.labels; used by the service selector
    app: linux39-nginx-deployment-label # label "app" with value linux39-nginx-deployment-label, used again later
  name: linux39-nginx-deployment # pod name
  namespace: linux39 # pod namespace, default is "default"
spec: # detailed container definition for this deployment, kubectl explain Deployment.spec
  replicas: 1 # number of pod replicas, default 1; set according to production needs, kubectl explain Deployment.spec.replicas
  selector: # label selector; required in recent versions
    matchLabels: # labels to match; must be set
      app: linux39-nginx-selector # target label; must equal Deployment.spec.template.metadata.labels
  template: # the template describing the pods to create; required
    metadata: # template metadata
      labels: # template labels, Deployment.spec.template.metadata.labels
        app: linux39-nginx-selector # must equal Deployment.spec.selector.matchLabels
    spec: # pod spec: the containers in this pod
      containers: # list of containers in the pod, at least one; containers cannot be added or removed dynamically, only by deleting and re-creating the pod
      - name: linux39-nginx-container # container name; "-" starts a list item, several may be written at the same level
        image: harbor.magedu.net/linux39/nginx-web1:v1 # image address
        #command: ["/apps/tomcat/bin/run_tomcat.sh"] # command or script run at container start
        #imagePullPolicy: IfNotPresent # use a local image if present, pull only if missing
        imagePullPolicy: Always # pull policy: always re-pull the image, which uses more bandwidth
        ports: # list of container ports
        - containerPort: 80 # define a port
          protocol: TCP # port protocol; only TCP and UDP
          name: http # port name
        - containerPort: 443 # define another port
          protocol: TCP # port protocol
          name: https # port name
        env: # environment variables passed into the container
        - name: "password" # variable name; must be quoted
          value: "123456" # value of this variable
        - name: "age" # another variable name
          value: "18" # its value
        resources: # resource requests and limits
          limits: # hard limits (upper bound)
            cpu: 2 # CPU limit in cores; fractional values such as 0.5 or 500m (millicores) are allowed
            memory: 2Gi # memory limit in Mi/Gi; maps to the docker run --memory parameter
          requests: # soft limits (requests)
            cpu: 1 # CPU request: initial amount available at container start; 0.5 or 500m also allowed
            memory: 512Mi # memory request: initial amount available at container start, used when scheduling the pod
---
kind: Service # kind: a Service, K8S-internal load balancing of requests
apiVersion: v1 # service API version, service.apiVersion
metadata: # service metadata, service.metadata
  labels: # custom labels, service.metadata.labels; used by HPA dynamic scaling in K8S
    app: linux39-nginx # contents of the service label
  name: linux38-nginx-spec # service name; this name is resolvable via DNS
  namespace: linux39 # the namespace this service belongs to, i.e. where the service is created
spec: # detailed service definition, service.spec
  type: NodePort # service type, defines how the service is accessed; default is ClusterIP, service.spec.type
  ports: # access ports, service.spec.ports
  - name: http # a port name
    port: 80 # service port 80
    protocol: TCP # protocol type
    targetPort: 80 # target pod port
    nodePort: 30001 # port exposed on the node
  - name: https # SSL port
    port: 443 # service port 443
    protocol: TCP # port protocol
    targetPort: 443 # target pod port
    nodePort: 30043 # SSL port exposed on the node
  selector: # service label selector: the target pods to reach
    app: linux39-nginx # route traffic to the selected pods; must equal Deployment.spec.selector.matchLabels

The difference between spec and status:
spec is the desired state.
status is the actual observed state.
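Both sections live side by side in the object as stored by the API server; an abridged, illustrative Deployment read back with kubectl get -o yaml looks like:

```yaml
spec:                    # desired state, written by the user
  replicas: 2
status:                  # observed state, maintained by k8s controllers
  availableReplicas: 2
  readyReplicas: 2
```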

Pod overview:
1. The pod is the smallest unit in k8s.
2. A pod can run one container or several.
3. When it runs several containers, those containers are scheduled together.
4. A pod's lifecycle is short: pods do not heal themselves and are disposable entities.
5. Pods are normally created and managed through a Controller.
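Points 2 and 3 can be sketched in a manifest: two containers declared in one pod are always scheduled onto the same node together (the names below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod   # illustrative name
spec:
  containers:               # the containers share the pod's network namespace
  - name: web
    image: nginx:1.14.2
  - name: sidecar           # hypothetical helper container, co-scheduled with "web"
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```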

Controllers

https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicationcontroller/#ReplicationController

https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/#标签选择器

5. Lab examples

root@master-1:/opt/k8s-data/yaml/namespaces# pwd
/opt/k8s-data/yaml/namespaces
root@master-1:/opt/k8s-data/yaml/namespaces# ll
total 16
drwxr-xr-x 2 root root 4096 Apr  1 21:03 ./
drwxr-xr-x 6 root root 4096 Apr  1 20:58 ../
-rw-r--r-- 1 root root  121 Apr  1 21:00 linux39-ns.yml
-rw-r--r-- 1 root root  121 Apr  1 21:03 linux40-ns.yml
root@master-1:/opt/k8s-data/yaml/namespaces# cat linux*
apiVersion: v1 # API version
kind: Namespace # kind is Namespace
metadata: # metadata
  name: linux39 # namespace name


apiVersion: v1 # API version
kind: Namespace # kind is Namespace
metadata: # metadata
  name: linux40 # namespace name
root@master-1:/opt/k8s-data/yaml/namespaces# kubectl apply -f linux39-ns.yml  
root@master-1:/opt/k8s-data/yaml/namespaces# kubectl apply -f linux40-ns.yml
root@master-1:/opt/k8s-data/yaml/namespaces# kubectl get ns
NAME                   STATUS   AGE
default                Active   3d6h
kube-node-lease        Active   3d6h
kube-public            Active   3d6h
kube-system            Active   3d6h
kubernetes-dashboard   Active   3d2h
linux39                Active   6h
linux40                Active   12s

case1: pod controller types

root@master-1:/opt/k8s-data/yaml/linux39/case1# pwd
/opt/k8s-data/yaml/linux39/case1
root@master-1:/opt/k8s-data/yaml/linux39/case1# ll
total 20
drwxr-xr-x 2 root root 4096 Apr  1 21:14 ./
drwxr-xr-x 8 root root 4096 Apr  1 21:08 ../
-rw-r--r-- 1 root root  523 Apr  1 21:14 deployment.yml
-rw-r--r-- 1 root root  388 Mar 30 18:33 rc.yml
-rw-r--r-- 1 root root  459 Mar 30 18:33 rs.yml

Deployment

root@master-1:/opt/k8s-data/yaml/linux39/case1# vim deployment.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: linux39
spec:
  replicas: 2
  selector:
    #app: ng-deploy-80 #rc
    #matchLabels: #rs or deployment
    #  app: ng-deploy-80

    matchExpressions:
      - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f deployment.yml 
deployment.apps/nginx-deployment created

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-6997c89dfb-8gk5t   0/1     ContainerCreating   0          15s # still being created: the image is being pulled
nginx-deployment-6997c89dfb-dvg9b   0/1     ContainerCreating   0          15s

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6997c89dfb-8gk5t   1/1     Running   0          2m22s  # now running normally
nginx-deployment-6997c89dfb-dvg9b   1/1     Running   0          2m22s

ReplicaSet

root@master-1:/opt/k8s-data/yaml/linux39/case1# vim rs.yml # ReplicaSet controller
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: ReplicaSet  # ReplicaSet controller
metadata:
  name: frontend
  namespace: linux39
spec:
  replicas: 3
  selector:
    #matchLabels:
    #  app: ng-rs-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-rs-80,ng-rs-81]} # match any of several label values
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:
      containers:
      - name: ng-rs-80
        image: nginx
        ports:
        - containerPort: 80

First delete the resources created earlier:

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl delete -f deployment.yml 
deployment.apps "nginx-deployment" deleted
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
No resources found in linux39 namespace.

Create new resources with a ReplicaSet:

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f rs.yml 
replicaset.apps/frontend created
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME             READY   STATUS              RESTARTS   AGE
frontend-749kw   0/1     ContainerCreating   0          6s
frontend-krjxb   0/1     ContainerCreating   0          6s
frontend-xgrcl   0/1     ContainerCreating   0          6s

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME             READY   STATUS    RESTARTS   AGE
frontend-749kw   1/1     Running   0          110s
frontend-krjxb   1/1     Running   0          110s
frontend-xgrcl   1/1     Running   0          110s

The rs must be deleted before it can be modified again, unless certain options are added:

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl delete -f rs.yml 
replicaset.apps "frontend" deleted

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl create --help

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl create -f rs.yml --save-config=true # create again
replicaset.apps/frontend created
root@master-1:/opt/k8s-data/yaml/linux39/case1# vim rs.yml 
  replicas: 2 # change the replica count to 2
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f rs.yml  # subsequent changes still rely on apply
replicaset.apps/frontend configured
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME             READY   STATUS    RESTARTS   AGE
frontend-4bhjv   1/1     Running   0          62s
frontend-r6trh   1/1     Running   0          62s

ReplicationController (being phased out)

root@master-1:/opt/k8s-data/yaml/linux39/case1# vim rc.yml # ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: ng-rc
  namespace: linux39
spec:
  replicas: 2
  selector:
    app: ng-rc-80
    #app1: ng-rc-81

  template:
    metadata:
      labels:
        app: ng-rc-80
        #app1: ng-rc-81
    spec:
      containers:
      - name: ng-rc-80
        image: nginx
        ports:
        - containerPort: 80
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl delete -f deployment.yml 
deployment.apps "nginx-deployment" deleted
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
No resources found in linux39 namespace.

Create new resources with a ReplicationController:

root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f rc.yml 
replicationcontroller/ng-rc created
root@master-1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME          READY   STATUS    RESTARTS   AGE
ng-rc-4tp5p   1/1     Running   0          17s
ng-rc-j7tfx   1/1     Running   0          17s

Rc, Rs, and Deployment
•ReplicationController: replica controller (selector supports = and !=, equal and not-equal)
•ReplicaSet: replica set; differs from the replication controller in selector support (selector also supports in and notin)
•Deployment: a higher-level controller than rs; besides rs functionality it adds many advanced features, most importantly rolling upgrades and rollbacks
https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/

Service

•Why: a pod's IP changes when the pod restarts, so direct pod-to-pod access breaks.
•What: decouples the service from the application and simplifies calling the service.
•How: declare a service object.

Two kinds are commonly used:
•Service inside the k8s cluster: the selector picks the pods, and Endpoints are created automatically.
•Service for targets outside the k8s cluster: Endpoints are created manually, specifying the external service's IP, port, and protocol.
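For the second case, a Service without a selector gets no automatic Endpoints, so an Endpoints object with the same name is created by hand and pointed at the external server (the names and addresses below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-mysql      # no selector: Endpoints are not auto-created
spec:
  ports:
  - port: 3306
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql      # must match the Service name
subsets:
- addresses:
  - ip: 172.20.10.250       # the external server's IP (example value)
  ports:
  - port: 3306
```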

The relationship between kube-proxy and service:
kube-proxy ——watch——> k8s-apiserver
kube-proxy watches the k8s-apiserver; as soon as a service resource changes (service information modified through the k8s API), kube-proxy adjusts its load-balancing rules accordingly, so the service always reflects its latest state.

kube-proxy has three proxy modes:
•userspace: before k8s 1.1 (obsolete)
•iptables: before k8s 1.10 (still usable)
•ipvs: since k8s 1.11; if ipvs is not enabled, it automatically falls back to iptables

Implementing nginx with a Service and a Deployment

root@master-1:/opt/k8s-data/yaml/linux39/case2# pwd
/opt/k8s-data/yaml/linux39/case2
root@master-1:/opt/k8s-data/yaml/linux39/case2# ll
total 3916
drwxr-xr-x 2 root root    4096 Apr  1 21:11 ./
drwxr-xr-x 8 root root    4096 Apr  1 21:08 ../
-rw-r--r-- 1 root root     542 Mar 30 18:33 1-deploy_node.yml
-rw-r--r-- 1 root root     214 Mar 30 18:33 2-svc_service.yml
-rw-r--r-- 1 root root     233 Mar 30 18:33 3-svc_NodePort.yml
-rw-r--r-- 1 root root 3983872 Mar 30 18:33 busybox-online.tar.gz
-rw-r--r-- 1 root root     277 Mar 30 18:33 busybox.yaml
root@master-1:/opt/k8s-data/yaml/linux39/case2# 
root@master-1:/opt/k8s-data/yaml/linux39/case2# vim 1-deploy_node.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: linux39
spec:
  replicas: 1
  selector:
    #matchLabels: #rs or deployment
    #  app: ng-deploy3-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5
        ports:
        - containerPort: 80
      #nodeSelector:
      #  env: group1

root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f 1-deploy_node.yml 
deployment.apps/nginx-deployment created
root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-85ff9fcf5b-b77fp   1/1     Running   0          105s

ClusterIP: access from inside the cluster

root@master-1:/opt/k8s-data/yaml/linux39/case2# pwd
/opt/k8s-data/yaml/linux39/case2
root@master-1:/opt/k8s-data/yaml/linux39/case2# vim 2-svc_service.yml 
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
  namespace: linux39
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: ng-deploy-80

root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f 2-svc_service.yml 
service/ng-deploy-80 created

root@master-1:~# kubectl get svc -n linux39  # get the svc cluster IP in the linux39 namespace
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
ng-deploy-80   ClusterIP   192.168.15.201   <none>        80/TCP    3m45s

Exec into a pod:

root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl get pod 
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          3d6h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          3d6h
net-test1-5fcc69db59-xthfd           1/1     Running   2          3d6h
nginx-deployment-574b87c764-9m59f    1/1     Running   1          2d
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          2d

root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl exec -it net-test1-5fcc69db59-xthfd  sh
/ # 
/ # apk add curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20191127-r1)
(2/4) Installing nghttp2-libs (1.40.0-r0)
(3/4) Installing libcurl (7.67.0-r0)
(4/4) Installing curl (7.67.0-r0)
Executing busybox-1.31.1-r9.trigger
Executing ca-certificates-20191127-r1.trigger
OK: 7 MiB in 18 packages
/ # 
/ # curl 192.168.15.201 # curl the service ClusterIP to verify connectivity
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...(output truncated)
/ # 
/ # ping ng-deploy-80  # ping the service by name: it does not resolve from this namespace
ping: bad address 'ng-deploy-80'

Creating and pushing a busybox image:

root@master-1:/opt/k8s-data/yaml/linux39/case2# docker pull busybox

root@master-1:/opt/k8s-data/yaml/linux39/case2# docker images
busybox                        latest              83aa35aa1c79      3 weeks ago         1.22MB

root@master-1:/opt/k8s-data/yaml/linux39/case2# docker tag  83aa35aa1c79  harbor.linux39.com/baseimages/busybox 

root@master-1:/opt/k8s-data/yaml/linux39/case2# docker push harbor.linux39.com/baseimages/busybox 


root@master-1:/opt/k8s-data/yaml/linux39/case2# vim busybox.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: linux39  # same namespace as the service, so its DNS name resolves directly
spec:
  containers:
  - image: harbor.linux39.com/baseimages/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: busybox
  restartPolicy: Always
  
root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f busybox.yaml 
pod/busybox created
root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
busybox                             1/1     Running   0          35s
nginx-deployment-85ff9fcf5b-b77fp   1/1     Running   0          25m

Test that the service name resolves within the same namespace.
Exec into the busybox pod to test:

root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl exec -it busybox sh -n linux39
/ # 
/ # ping ng-deploy-80 # access the service by name
PING ng-deploy-80 (192.168.15.201): 56 data bytes # ping gets no reply, but the name resolves via DNS, which shows the service is reachable
^C                            
--- ng-deploy-80 ping statistics ---
26 packets transmitted, 0 packets received, 100% packet loss
/ # wget ng-deploy-80
Connecting to ng-deploy-80 (192.168.15.201:80)
saving to 'index.html'
index.html           100% |*********************************************************************|   612  0:00:00 ETA
'index.html' saved
/ # cat index.html 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>

Conclusion: within the same namespace, a service can be reached directly by its name, so a YAML file can simply reference the service name.
This lets microservices talk to each other internally without being exposed outside the cluster.
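As a hedged illustration of the cross-namespace case: a pod outside linux39 can still reach the service through its fully qualified DNS name in the form service.namespace.svc.<cluster-domain>. The pod name below and the cluster.local domain are assumptions, not taken from this cluster's configuration.

```yaml
# Hypothetical test pod in the default namespace; the short name
# "ng-deploy-80" would NOT resolve here, but the FQDN does
# (assuming the default cluster domain cluster.local).
apiVersion: v1
kind: Pod
metadata:
  name: dns-test            # hypothetical name
  namespace: default
spec:
  containers:
  - name: dns-test
    image: harbor.linux39.com/baseimages/busybox
    command: ["sleep", "3600"]
# then, inside the pod:
#   wget ng-deploy-80.linux39.svc.cluster.local
```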

Exposing the service externally (NodePort)
root@master-1:/opt/k8s-data/yaml/linux39/case2# vim 3-svc_NodePort.yml 
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
  namespace: linux39
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80  # the service port inside the pod; must match the container port
    nodePort: 30012 # port exposed on every host
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@master-1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f 3-svc_NodePort.yml 
service/ng-deploy-80 configured

Port 30012 is a custom port that every host in the K8s cluster listens on; hitting any node's IP on this port reaches the service, which in turn forwards the traffic to the backend pods.

#ss -tnl  # verify the port is listening on each node

The service is reachable through any node in the K8s cluster.

Volume

Files in a container are stored on disk only for the container's lifetime, which causes problems for some applications. First, when a container crashes, the kubelet restarts it and the files are lost, because the container restarts in a clean state. Second, containers running together in a Pod often need to share files. Kubernetes abstracts the Volume object to solve both problems.

Volumes decouple data from the image and allow data to be shared between containers.
A Volume is an object Kubernetes abstracts for holding data, i.e. for storage.
Commonly used volume types:
emptyDir: local ephemeral volume
hostPath: local volume
nfs and similar: shared volumes
configmap: configuration files

CASE3:emptyDir

https://kubernetes.io/zh/docs/concepts/storage/volumes/#emptydir # emptyDir

An emptyDir volume is first created when a Pod is assigned to a node, and it exists as long as that Pod runs on that node. As the name says, it is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though it can be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.

Example Pod

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
root@master-1:/opt/k8s-data/yaml/linux39/case3# pwd
/opt/k8s-data/yaml/linux39/case3

root@master-1:/opt/k8s-data/yaml/linux39/case3# vim deploy_empty.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: linux39
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
root@master-1:/opt/k8s-data/yaml/linux39/case3# kubectl apply -f deploy_empty.yml 
deployment.apps/nginx-deployment created

root@master-1:/opt/k8s-data/yaml/linux39/case3# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7cc86d98d5-8gbcb   1/1     Running   0          62s

root@master-1:/opt/k8s-data/yaml/linux39/case3# kubectl exec -it nginx-deployment-7cc86d98d5-8gbcb bash -n linux39
root@nginx-deployment-7cc86d98d5-8gbcb:/# cd /cache/
root@nginx-deployment-7cc86d98d5-8gbcb:/cache# echo 112233 > linux39.txt
root@nginx-deployment-7cc86d98d5-8gbcb:/cache# exit
exit
root@master-1:/opt/k8s-data/yaml/linux39/case3#
root@master-1:/opt/k8s-data/yaml/linux39/case3# kubectl get pod -n linux39 -o wide # check which node the pod landed on: node-3
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-7cc86d98d5-8gbcb   1/1     Running   0          11m   10.10.5.19   node-3   <none>           <none>

Find where the data is stored on node-3:

root@Node-3:~# find / -name linux39.txt
/var/lib/kubelet/pods/014f1689-ca48-4f1b-b955-9381f9148dbf/volumes/kubernetes.io~empty-dir/cache-volume/linux39.txt
^C
root@Node-3:~# cat /var/lib/kubelet/pods/014f1689-ca48-4f1b-b955-9381f9148dbf/volumes/kubernetes.io~empty-dir/cache-volume/linux39.txt
112233

The path above is fixed except for the pod UID (014f1689-ca48-4f1b-b955-9381f9148dbf), which can be matched with a * wildcard.
The data in an emptyDir volume is deleted together with the pod.
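A related variant worth knowing (a sketch, not part of the case above): emptyDir can be backed by RAM by setting medium: Memory, optionally capped with sizeLimit; the contents then live in tmpfs and still disappear with the pod. The pod name here is illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-cache-demo    # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: mem-cache
  volumes:
  - name: mem-cache
    emptyDir:
      medium: Memory         # tmpfs instead of node disk
      sizeLimit: 64Mi        # cap the RAM the volume may use
```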

CASE4:hostPath

https://kubernetes.io/zh/docs/concepts/storage/volumes/#hostpath # hostPath

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something most Pods will need, but it offers a powerful escape hatch for some applications.

A hostPath volume mounts a file or directory from the node's filesystem into the cluster; the volume is not deleted when the pod is deleted.

#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/mysql
          name: data-volume  # must match the volume name defined under volumes below
      volumes:
      - name: data-volume # name of the volume
        hostPath:   # the volume type and its parameters
          path: /data/mysql
root@master-1:/opt/k8s-data/yaml/linux39/case4# kubectl apply -f deploy_hostPath.yml 
deployment.apps/nginx-deployment-2 created

root@master-1:/opt/k8s-data/yaml/linux39/case4# kubectl get pod 
NAME                                  READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944            1/1     Running   1          4d6h
net-test1-5fcc69db59-wzlmg            1/1     Running   2          4d6h
net-test1-5fcc69db59-xthfd            1/1     Running   2          4d6h
nginx-deployment-2-7944748bc4-xpbln   1/1     Running   0          34s
nginx-deployment-574b87c764-9m59f     1/1     Running   1          3d
tomcat-deployment-7cd955f48c-lthk2    1/1     Running   1          3d
root@master-1:/opt/k8s-data/yaml/linux39/case4# kubectl get pod -o wide # check which host it runs on: node-3
NAME                                  READY   STATUS    RESTARTS   AGE         IP           NODE     NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-jz944            1/1     Running   1          4d4h        10.10.3.3    node-1   <none>           <none>
net-test1-5fcc69db59-wzlmg            1/1     Running   2          4d4h        10.10.4.9    node-2   <none>           <none>
net-test1-5fcc69db59-xthfd            1/1     Running   2          4d4h        10.10.5.8    node-3   <none>           <none>
nginx-deployment-2-7944748bc4-xpbln   1/1     Running   0          <invalid>   10.10.5.20   node-3   <none>           <none>
nginx-deployment-574b87c764-9m59f     1/1     Running   1          2d22h       10.10.5.9    node-3   <none>           <none>
tomcat-deployment-7cd955f48c-lthk2    1/1     Running   1          2d22h       10.10.4.7    node-2   <none>           <none>

Check on node-3 whether a /data/mysql directory was created automatically:

root@master-1:/opt/k8s-data/yaml/linux39/case4# kubectl exec -it nginx-deployment-2-66d68c95d9-46z6b bash
root@nginx-deployment-2-66d68c95d9-46z6b:/# cd /data/
root@nginx-deployment-2-66d68c95d9-46z6b:/data# mkdir logs
root@nginx-deployment-2-66d68c95d9-46z6b:/data# echo likai > logs/nginx.logs

On Node-3:

root@Node-3:/data# cd mysql/
root@Node-3:/data/mysql# ll
total 12
drwxr-xr-x 3 root root 4096 Apr  2 23:10 ./
drwxr-xr-x 4 root root 4096 Apr  2 22:47 ../
drwxr-xr-x 2 root root 4096 Apr  2 23:11 logs/
root@Node-3:/data/mysql# cat logs/nginx.logs  # the host and the pod share this one copy of the file: changes and deletions on either side are seen by both
likai

Verify: is the data deleted when the pod is deleted?

root@master-1:/opt/k8s-data/yaml/linux39/case4# kubectl delete -f deploy_hostPath.yml 
deployment.apps "nginx-deployment-2" deleted
root@Node-3:/data/mysql# cat logs/nginx.logs 
likai

As shown: a hostPath volume mounts a file or directory from the node's filesystem into the cluster, and the volume survives pod deletion.
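A hedged refinement of the hostPath spec above: the optional type field lets kubelet check or create the host path before mounting, which makes the auto-creation seen above explicit. Only the volumes fragment changes:

```yaml
      volumes:
      - name: data-volume
        hostPath:
          path: /data/mysql
          type: DirectoryOrCreate  # create /data/mysql on the host if it does not exist
```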

CASE5:nfs Volume

nfs:
An nfs volume mounts an existing NFS (Network File System) share into your Pod. Unlike emptyDir, the contents of an nfs volume are preserved when the Pod is deleted; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, the data can be shared ("handed off") between Pods, and the share can be mounted by multiple writers at the same time.

Warning:
Before you can use an NFS volume, you must be running your own NFS server with the target share exported.

This experiment reuses the HA-server1 machine: 172.20.10.22

root@HA-server1:~# apt install nfs-server -y

root@HA-server1:~# mkdir /data/k8sdata -p

root@HA-server1:~# vim /etc/exports 
/data/k8sdata *(rw,no_root_squash)

root@HA-server1:~# systemctl restart nfs-server.service 
root@HA-server1:~# systemctl enable nfs-server.service 

On a node, check that the exported share is visible. If it is not, the export is misconfigured and the later mounts will fail.

root@Node-3:~# apt install nfs-common -y
root@Node-3:~# showmount -e 172.20.10.22
Export list for 172.20.10.22:
/data/k8sdata *

Test-mount it on a node host:

root@Node-3:~# mount -t nfs 172.20.10.22:/data/k8sdata /mnt # mount the shared directory on /mnt

Experiment:

root@master-1:/opt/k8s-data/yaml/linux39/case5# pwd
/opt/k8s-data/yaml/linux39/case5
root@master-1:/opt/k8s-data/yaml/linux39/case5# ll
total 16
drwxr-xr-x 2 root root 4096 Apr  2 23:54 ./
drwxr-xr-x 8 root root 4096 Apr  1 21:08 ../
-rw-r--r-- 1 root root  804 Mar 30 18:33 deploy_nfs2.yml
-rw-r--r-- 1 root root  978 Apr  2 23:54 deploy_nfs.yml

root@master-1:/opt/k8s-data/yaml/linux39/case5# vim deploy_nfs.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite # mount the NFS export /data/k8sdata at this path
          name: my-nfs-volume
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 172.20.10.22 # NFS server address
          path: /data/k8sdata

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30011 # externally exposed port
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl apply -f deploy_nfs.yml 
deployment.apps/nginx-deployment-3 created
service/ng-deploy-80 created

root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944            1/1     Running   1          4d7h
net-test1-5fcc69db59-wzlmg            1/1     Running   2          4d7h
net-test1-5fcc69db59-xthfd            1/1     Running   2          4d7h
nginx-deployment-3-587f55c665-jxjt2   1/1     Running   0          29s
nginx-deployment-574b87c764-9m59f     1/1     Running   1          3d1h
tomcat-deployment-7cd955f48c-lthk2    1/1     Running   1          3d1h

root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl exec -it nginx-deployment-3-587f55c665-jxjt2 bash
root@nginx-deployment-3-587f55c665-jxjt2:/# df -Th
Filesystem                 Type     Size  Used Avail Use% Mounted on
overlay                    overlay   92G  4.0G   83G   5% /
tmpfs                      tmpfs     64M     0   64M   0% /dev
tmpfs                      tmpfs    953M     0  953M   0% /sys/fs/cgroup
/dev/sda1                  ext4      92G  4.0G   83G   5% /etc/hosts
shm                        tmpfs     64M     0   64M   0% /dev/shm
tmpfs                      tmpfs    953M   12K  953M   1% /run/secrets/kubernetes.io/serviceaccount
172.20.10.22:/data/k8sdata nfs4      46G   52M   44G   1% /usr/share/nginx/html/mysite
tmpfs                      tmpfs    953M     0  953M   0% /proc/acpi
tmpfs                      tmpfs    953M     0  953M   0% /proc/scsi
tmpfs                      tmpfs    953M     0  953M   0% /sys/firmware

Create data in the shared directory and check that the pods mounting it can see it:

root@HA-server1:/data/k8sdata# vim linux39.html
root@HA-server1:/data/k8sdata# cat linux39.html 
linux39 test page

Create another pod to verify that one share can be mounted by multiple pods:

root@master-1:/opt/k8s-data/yaml/linux39/case5# vim deploy_nfs2.yml 

#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-site2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-82
  template:
    metadata:
      labels:
        app: ng-deploy-82
    spec:
      containers:
      - name: ng-deploy-82
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite
          name: my-nfs-volume
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 172.20.10.22
          path: /data/k8sdata

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-82
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30032 # externally exposed port
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-82

root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl apply -f deploy_nfs2.yml 
deployment.apps/nginx-deployment-site2 created
service/ng-deploy-82 created
root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944                1/1     Running   1          4d8h
net-test1-5fcc69db59-wzlmg                1/1     Running   2          4d8h
net-test1-5fcc69db59-xthfd                1/1     Running   2          4d8h
nginx-deployment-574b87c764-9m59f         1/1     Running   1          3d2h
nginx-deployment-site2-659498cf9c-8cth6   1/1     Running   0          11s
tomcat-deployment-7cd955f48c-lthk2        1/1     Running   1          3d2h


root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl exec -it nginx-deployment-site2-659498cf9c-8cth6 bash
root@nginx-deployment-site2-659498cf9c-8cth6:/# df -TH
Filesystem                 Type     Size  Used Avail Use% Mounted on
overlay                    overlay   98G  4.3G   89G   5% /
tmpfs                      tmpfs     68M     0   68M   0% /dev
tmpfs                      tmpfs    1.0G     0  1.0G   0% /sys/fs/cgroup
/dev/sda1                  ext4      98G  4.3G   89G   5% /etc/hosts
shm                        tmpfs     68M     0   68M   0% /dev/shm
172.20.10.22:/data/k8sdata nfs4      49G   55M   47G   1% /usr/share/nginx/html/mysite
tmpfs                      tmpfs    1.0G   13k  1.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                      tmpfs    1.0G     0  1.0G   0% /proc/acpi
tmpfs                      tmpfs    1.0G     0  1.0G   0% /proc/scsi
tmpfs                      tmpfs    1.0G     0  1.0G   0% /sys/firmware

This confirms that one NFS share can be mounted by multiple pods at once, sharing data between them.

Next, mount multiple NFS exports inside one pod so it has several data sources, e.g. to keep data for different sites of a service separate.

root@HA-server1:/data/k8sdata# mkdir /data/linux39

root@HA-server1:/data/k8sdata# vim /etc/exports # export multiple shares
/data/k8sdata *(rw,no_root_squash)

/data/linux39 *(rw,no_root_squash)

root@HA-server1:/data/k8sdata# systemctl restart nfs-server.service 

Check on a node that both shares are exported:

root@Node-3:~# showmount -e 172.20.10.22
Export list for 172.20.10.22:
/data/linux39 *
/data/k8sdata *

#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite # mounts the /data/k8sdata export
          name: my-nfs-volume # selects which nfs volume below
        - mountPath: /data/nginx/html # mounts the /data/linux39 export
          name: linux39-nfs-volume  # selects which nfs volume below
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 172.20.10.22
          path: /data/k8sdata
      - name: linux39-nfs-volume
        nfs:
          server: 172.20.10.22
          path: /data/linux39

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30016
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl apply -f deploy_nfs.yml 
deployment.apps/nginx-deployment created
service/ng-deploy-80 created

root@master-1:/opt/k8s-data/yaml/linux39/case5# kubectl exec -it nginx-deployment-79bf555474-tvgx5 bash
root@nginx-deployment-79bf555474-tvgx5:/# df -TH
Filesystem                 Type     Size  Used Avail Use% Mounted on
overlay                    overlay   98G  4.2G   89G   5% /
tmpfs                      tmpfs     68M     0   68M   0% /dev
tmpfs                      tmpfs    1.0G     0  1.0G   0% /sys/fs/cgroup
/dev/sda1                  ext4      98G  4.2G   89G   5% /etc/hosts
shm                        tmpfs     68M     0   68M   0% /dev/shm
172.20.10.22:/data/linux39 nfs4      49G   55M   47G   1% /data/nginx/html
172.20.10.22:/data/k8sdata nfs4      49G   55M   47G   1% /usr/share/nginx/html/mysite
tmpfs                      tmpfs    1.0G   13k  1.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                      tmpfs    1.0G     0  1.0G   0% /proc/acpi
tmpfs                      tmpfs    1.0G     0  1.0G   0% /proc/scsi
tmpfs                      tmpfs    1.0G     0  1.0G   0% /sys/firmware

When exporting the NFS shares, grant access not only to the pod network but also to the hosts' network; otherwise the mount fails.
Pay attention to the authorization scope, and note that the volume is mounted by kubelet, not by a manual mount command.

Or, scope the exports to specific networks:

root@HA-server1:/data/k8sdata# vim /etc/exports 
/data/k8sdata 10.10.0.0/16(rw,no_root_squash) 172.20.10.0/24(rw,no_root_squash) # separate each network entry with a space; each carries its own options
/data/linux39 10.10.0.0/16(rw,no_root_squash) 172.20.10.0/24(rw,no_root_squash)

CASE6: configmap

https://kubernetes.io/zh/docs/concepts/storage/volumes/#configmap # configmap

The ConfigMap resource provides a way to inject configuration data into Pods. Data stored in a ConfigMap object can be referenced by a configMap-type volume and consumed by containerized applications running in the Pod.

Most configuration is baked into the image; configuration that needs to be reused by multiple pods can be put into a ConfigMap.

root@master-1:/opt/k8s-data/yaml/linux39/case6# vim  deploy_configmap.yml 

apiVersion: v1
kind: ConfigMap     # the resource type is ConfigMap
metadata:
  name: nginx-config # referenced below by this name
data:
  default: |         # "default" is the key; the block below is its value, an nginx server config
    server {
       listen       80;
       server_name  www.mysite.com; 
       index        index.html;

       location / {
           root /data/nginx/html;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }


---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/html
          name: nginx-static-dir
        - name: nginx-config       # mounts the configMap volume defined below
          mountPath:  /etc/nginx/conf.d
      volumes:
      - name: nginx-static-dir
        hostPath:
          path: /data/nginx/linux39
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
             - key: default  # the key defined in the ConfigMap above
               path: mysite.conf # file name inside the mount; the full path becomes /etc/nginx/conf.d/mysite.conf
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019 # externally exposed port
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80                         
root@master-1:/opt/k8s-data/yaml/linux39/case6# kubectl apply -f deploy_configmap.yml 
configmap/nginx-config created
deployment.apps/nginx-deployment created
service/ng-deploy-80 created

root@master-1:/opt/k8s-data/yaml/linux39/case6# kubectl get pod 
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-jz944           1/1     Running   1          4d16h
net-test1-5fcc69db59-wzlmg           1/1     Running   2          4d16h
net-test1-5fcc69db59-xthfd           1/1     Running   2          4d16h
nginx-deployment-8c449b55f-c5pkn     1/1     Running   0          13s
tomcat-deployment-7cd955f48c-lthk2   1/1     Running   1          3d10h
root@master-1:/opt/k8s-data/yaml/linux39/case6# kubectl get service
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes              ClusterIP   192.168.0.1      <none>        443/TCP        4d18h
magedu-nginx-service    NodePort    192.168.9.132    <none>        80:30004/TCP   3d12h
magedu-tomcat-service   NodePort    192.168.14.155   <none>        80:31849/TCP   3d10h
ng-deploy-80            NodePort    192.168.3.122    <none>        81:30019/TCP   24s
ng-deploy-81            NodePort    192.168.9.29     <none>        80:30015/TCP   8h

root@master-1:/opt/k8s-data/yaml/linux39/case6# kubectl exec -it nginx-deployment-8c449b55f-c5pkn bash
root@nginx-deployment-8c449b55f-c5pkn:/# cat /etc/nginx/conf.d/mysite.conf  # inspect the rendered configuration
server {
   listen       80;
   server_name  www.mysite.com;
   index        index.html;

   location / {
       root /data/nginx/html;
       if (!-e $request_filename) {
           rewrite ^/(.*) /index.html last;
       }
   }
}

Create an index page on node-3 and check that the configMap-driven site serves it:

root@Node-3:~# cd /data/nginx/linux39/
root@Node-3:/data/nginx/linux39# vim index.html
root@Node-3:/data/nginx/linux39# cat index.html
configMap test page
root@Node-3:/data/nginx/linux39# 

Access the page via node-3.
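For comparison (a sketch under the assumption that the nginx-config ConfigMap above exists): ConfigMap data can also be injected as environment variables instead of files. The pod and variable names here are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: MYSITE_CONF      # hypothetical variable name
      valueFrom:
        configMapKeyRef:
          name: nginx-config # the ConfigMap created above
          key: default       # the key holding the server block
```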

DaemonSet

https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When a node joins the cluster, a Pod is added for it; when a node is removed from the cluster, that Pod is garbage-collected. Deleting a DaemonSet deletes all the Pods it created.

A DaemonSet creates one identical pod on every node in the current K8s cluster; it is mainly used when the same task must run on all nodes, for example:
1. log collection
2. Prometheus node metrics
3. flannel network agents

root@master-1:/opt/k8s-data/yaml/linux39# vim Daemonset.yml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log  # mount the host's /var/log into the pod
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers # mount the host's /var/lib/docker/containers into the pod

root@master-1:/opt/k8s-data/yaml/linux39# kubectl apply -f Daemonset.yml 
daemonset.apps/fluentd-elasticsearch created
root@master-1:/opt/k8s-data/yaml/linux39# kubectl get pod -n kube-system
NAME                               READY   STATUS              RESTARTS   AGE
coredns-7f9c544f75-dd7gc           1/1     Running             4          4d13h
coredns-7f9c544f75-z5pds           1/1     Running             0          4d13h
etcd-master-1                      1/1     Running             26         4d19h
etcd-master-2                      1/1     Running             24         4d18h
etcd-master-3                      1/1     Running             20         4d18h
fluentd-elasticsearch-26jsw        0/1     ContainerCreating   0          29s # being created
fluentd-elasticsearch-8b2bl        0/1     ContainerCreating   0          29s
fluentd-elasticsearch-gx6c4        0/1     ContainerCreating   0          29s
fluentd-elasticsearch-hb4hc        0/1     ContainerCreating   0          29s
fluentd-elasticsearch-hpjwd        0/1     ContainerCreating   0          29s
fluentd-elasticsearch-qttwk        0/1     ContainerCreating   0          29s

Every master and node in the cluster now runs an identical copy of the pod; this pattern is useful whenever the same agent must run, or the same resource must be collected, on each node.
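To run a DaemonSet on only some nodes rather than all of them, a nodeSelector can be added to the pod template. This is a sketch; the disktype=ssd label is an assumption (nodes would first be labeled with kubectl label node <name> disktype=ssd).

```yaml
# fragment of the DaemonSet pod template spec
    spec:
      nodeSelector:
        disktype: ssd        # only nodes carrying this label run the pod
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
```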
