1. Common commands

kubectl create deployment web --image=nginx
kubectl get pods -o wide
kubectl scale deployment web --replicas=5
kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml
kubectl apply -f web.yaml

kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1 -o yaml > web1.yaml
kubectl exec -it pod-name -- bash

kubectl --help    #show help
kubectl cluster-info #show cluster info
kubectl get cs    #show component status
kubectl get nodes #show node status
kubectl get ns   #list all namespaces
kubectl get pods -n namespace-name # list the pods in a given namespace
kubectl get pods --all-namespaces #list the pods in all namespaces
kubectl get pod,svc --all-namespaces
kubectl get pod -o wide -n kubernetes-dashboard
kubectl get all -o wide --all-namespaces   #show all resources in all namespaces
 
 
#labels
kubectl label node nodename key=value   #add a label to a node
kubectl label node node1 env_role=prod
#for example, label k8s-node1 as a node equipped with SSDs:
kubectl label node k8s-node1 disktype=ssd
#then inspect the nodes with: kubectl get node --show-labels

kubectl label node nodename key-     #remove the label "key" from a node (note the trailing dash)
kubectl get node --show-labels #show the labels on all nodes
kubectl get nodes node1 --show-labels #show the labels on a single node

kubectl get deployment --all-namespaces #list the deployments in all namespaces
kubectl get svc -n ns-2



#log commands
kubectl logs pod-name  #show the logs written by the pod's containers
kubectl logs -f podname -c containername  #follow the logs of a specific container, similar to tail -f
kubectl exec pod-name -- cmd  #run "cmd" inside the pod (quote the command if it contains spaces)
kubectl exec pod-name -c containername -- cmd  #run a command in container "containername" of the pod
kubectl exec -it common-1-controller-786c6c76dd-lqzc8 -c common-0 -n ns-2 -- /bin/sh   # open a shell inside the pod



#resource usage
kubectl top node
kubectl top pod -n ns-222222
kubectl api-resources
kubectl describe node 10.19.10.25  #show detailed information about a node
kubectl get pods -A -o wide | grep nodeip #list all pods running on a given node
kubectl top nodes
kubectl top pods -n kube-system


kubectl get ingresses -n ns-yancheng
kubectl get storageclass -n ns-yancheng

Pod

1 The smallest deployable unit
2 Can contain multiple containers (a group of containers)
3 The containers share the network, storage and Linux namespaces
4 Pods are ephemeral

A pod can hold several containers, and each container runs one application.

Pods exist for closely coupled applications, i.e. two applications that need to call each other frequently:

1 interaction between the processes

2 interaction over the network

Pod mechanics: shared network and shared storage.

The containers themselves are isolated with Linux namespaces and cgroups.

Inside one pod, multiple containers (multiple workloads) share the same namespaces.

apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
  - name: write
    image: centos
    command: ["bash", "-c", "for i in $(seq 1 100); do echo $i >> /data/hello; sleep 1; done"]
    volumeMounts:            #mount the data volume
    - name: data
      mountPath: /data

  - name: read
    image: centos
    command: ["bash", "-c", "tail -f /data/hello"]
    volumeMounts:            #mount the data volume
    - name: data
      mountPath: /data
  volumes:                 #the data volume shared by both containers
  - name: data
    emptyDir: {}
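
A quick way to confirm the two containers really share the emptyDir volume (assuming the pod above is saved as mypod1.yaml; the file name is not given in the original notes):

kubectl apply -f mypod1.yaml
kubectl logs mypod1 -c read -f      # the "read" container should stream the numbers written by "write"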

Image pull policy

apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
    - name: nginx
      image: nginx:1.14
      imagePullPolicy: Always
#IfNotPresent: the default (unless the tag is :latest); the image is pulled only if it is not already present on the node
#Always: the image is pulled every time the Pod is created
#Never: the image is never pulled; only a locally present image is used
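
To check which policy a running pod ended up with (a quick sanity check, assuming the pod above is named mypod2):

kubectl get pod mypod2 -o jsonpath='{.spec.containers[0].imagePullPolicy}'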

Pod resource limits

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:                #the size used when scheduling the pod
        memory: "64Mi"
        cpu: "250m"
      limits:                  #the upper bound; 1 CPU = 1000m
        memory: "128Mi"
        cpu: "500m"
  restartPolicy: Never         #Pod restart policy
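
After the pod is scheduled you can see how much of a node's capacity these requests and limits consume (node1 is just a placeholder node name):

kubectl describe node node1 | grep -A 8 "Allocated resources"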

Pod restart policy

apiVersion: v1
kind: Pod
metadata:
  name: mypod3
spec:
  containers:
  - name: busybox
    image: busybox:1.20.4
    args:
    - /bin/sh
    - -c
    - sleep 3600
  restartPolicy: Never       #Pod restart policy

Always: always restart the container after it terminates; this is the default policy
OnFailure: restart the container only when it exits abnormally (non-zero exit code)
Never: never restart the container after it terminates
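
The effect of the policy shows up in the RESTARTS column (mypod3 as defined above):

kubectl get pod mypod3 -w   # with restartPolicy: Never the RESTARTS count stays at 0 once the container exits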

Pod health checks

A process can be alive while the application inside it is broken (for example after a Java heap OOM), so checking that the process exists is not enough.

Container probes

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600

    livenessProbe:   #liveness probe (exec variant)
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

#A container may define only one livenessProbe; the two variants below are alternatives to the exec probe above.
#httpGet variant:
#    livenessProbe:
#      httpGet:
#        path: /healthz
#        port: 8080
#        httpHeaders:
#        - name: X-Custom-Header
#          value: Awesome
#      initialDelaySeconds: 3
#      periodSeconds: 3
#tcpSocket variant:
#    livenessProbe:
#      tcpSocket:
#        port: 8080
#      initialDelaySeconds: 15
#      periodSeconds: 20

1 livenessProbe (liveness check)
If the check fails, the container is killed and handled according to the Pod's restartPolicy.

2 readinessProbe (readiness check)
If the check fails, Kubernetes removes the Pod from the Service endpoints.

#Probes support the following three check methods
1 httpGet: send an HTTP request; a status code in the 200-399 range counts as success
2 exec: run a shell command; exit code 0 counts as success
3 tcpSocket: try to open a TCP socket; a successful connection counts as success
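
Only livenessProbe appears in the example above; a minimal readinessProbe sketch (assuming nginx serving on port 80) that keeps the Pod out of the Service endpoints until the check passes could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5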

Pod scheduling: how Pods get assigned to nodes

kubectl apply -f
kubectl get pods
kubectl get pods -o wide

master -> create pod -> apiserver -> stored in etcd
scheduler -> apiserver (any new pods to create?) -> read etcd -> scheduling algorithm -> bind the pod to a node
node -> kubelet -> apiserver -> read etcd, pick up the pods assigned to this node -> create them with docker

Properties that influence scheduling

1 Pod resource requests (resources): a node without enough free resources will not be chosen
2 Node selector labels: used to group nodes

First label the node:

kubectl label node node1 env_role=prod

kubectl get nodes node1 --show-labels

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  nodeSelector:
    env_role: prod
  containers:
  - name: nginx
    image: nginx:1.15

3 Node affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    #hard affinity: the constraints must be satisfied
        nodeSelectorTerms:
        - matchExpressions:
          - key: env_role
            operator: In
            values:
            - dev
            - test
      preferredDuringSchedulingIgnoredDuringExecution:  #soft affinity: best effort, not guaranteed
      - weight: 1
        preference:
          matchExpressions:
          - key: group
            operator: In       #common operators: In, NotIn, Exists, Gt, Lt, DoesNotExist
            values:
            - otherprod
  containers:
  - name: webdemo
    image: nginx

4 Taints and tolerations

Dedicated nodes
Nodes with special hardware
Taint-based eviction

Check a node's taints:
kubectl describe node k8s-master | grep Taint
NoSchedule: pods will never be scheduled onto the node
PreferNoSchedule: the scheduler tries to avoid the node
NoExecute: no new pods are scheduled and existing pods on the node are evicted

Set a taint:
kubectl taint node nodename key=value:effect   #effect is one of the three values above
Remove a taint:
kubectl taint node nodename key:effect-   #note the trailing dash

Tolerations: a pod with a matching toleration may still be scheduled onto the tainted node (comparable to soft affinity)

spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
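
A minimal end-to-end sketch (node name node1 and key gpu=true are assumptions for illustration):

kubectl taint node node1 gpu=true:NoSchedule      #taint the node
kubectl describe node node1 | grep Taint          #verify
kubectl taint node node1 gpu=true:NoSchedule-     #remove the taint again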

controller

1 What is a controller: an object that manages and runs containers on the cluster

2 Relationship between Pods and controllers: controllers handle the operational side of Pods (scaling, rolling upgrades, etc.); they are linked to Pods via labels; also called workloads

3 Deployment controller use cases

Deploying stateless applications, managing Pods and ReplicaSets: web services, microservices

4 YAML field walkthrough
kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

Expose the Deployment externally
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1 -o yaml > web1.yaml

5 Deployment controller use cases: web services, microservices

6 Upgrade and rollback
kubectl set image deployment web nginx=nginx:1.15  #upgrade the image
kubectl rollout status deployment web  #check the rollout status
kubectl rollout history deployment web  #show the rollout history
kubectl rollout undo deployment web  #roll back to the previous revision
kubectl rollout undo deployment web --to-revision=2  #roll back to a specific revision

7 Scaling
kubectl scale deployment web --replicas=10  #scale to 10 replicas
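
Beyond manual scaling, a Deployment can also be scaled automatically with a HorizontalPodAutoscaler (this assumes a metrics server is installed; the thresholds below are illustrative):

kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
kubectl get hpa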

service

A Service defines an access policy for a group of Pods.
It keeps Pods reachable (service discovery).
A Service has a virtual IP (ClusterIP).
A Service is linked to Pods through labels and a selector, and load-balances across them:
  selector:
    app: nginx
  labels:
    app: nginx
Common Service types:
ClusterIP: the default, for access inside the cluster
NodePort: exposes the Service on every node, for external access
LoadBalancer: for external access through a public-cloud load balancer

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
status:
  loadBalancer: {}

kubectl get pod,svc #access via the internal (cluster) IP

curl 10.1.196.78

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort
status:
  loadBalancer: {}

kubectl get pod,svc #check the externally exposed NodePort
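
With type NodePort the Service is reachable from outside the cluster on any node's IP; substitute the real values from kubectl get nodes -o wide and kubectl get svc web1:

curl http://<node-ip>:<node-port>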

controller

1. Stateful vs. stateless

Stateless

1 All Pods are considered identical

2 No ordering requirements

3 It does not matter which node a Pod runs on

4 Can be scaled up and down freely

Stateful

All of the factors above must be considered

Each Pod is kept independent, with a stable identity and an ordered startup

Ordered, e.g. a MySQL primary/replica setup

2. Deploying a stateful application

* Headless Service

kubectl get pod,svc

ClusterIP: None


apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet                          #stateful workload
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        

Differences between Deployment and StatefulSet

StatefulSet Pods have an identity (a unique, stable name)

A DNS name is generated from the pod's host name plus the headless Service, following a fixed pattern

Format: <pod-name>.<service-name>.<namespace>.svc.cluster.local

e.g. nginx-statefulset-0.nginx.default.svc.cluster.local
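
A quick way to verify the per-pod DNS records from a throwaway pod (busybox:1.28 is chosen because its nslookup is known to work reliably):

kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup nginx-statefulset-0.nginx.default.svc.cluster.local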

Deploying a daemon with DaemonSet

Runs one Pod on every node; newly added nodes automatically get a Pod as well.

Example: installing a log-collection agent on every node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-test
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: logs
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: varlog
          mountPath: /tmp/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   

4 Job: run-to-completion (one-off) tasks

apiVersion: batch/v1
kind: Job
metadata:
  name: p1
spec:
  template:
    spec:
      containers:
      - name: p1
        image: perl
        command: ['perl', '-Mbignum=bpi', '-wle', 'print bpi(2000)']
      restartPolicy: Never
  backoffLimit: 4
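
After applying the Job, check it and read the computed result from the pod's log (the Job name p1 matches the manifest above; the job-name label is added automatically by the Job controller):

kubectl get jobs
kubectl get pods --selector=job-name=p1
kubectl logs -l job-name=p1        #prints pi to 2000 digits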

5 CronJob: scheduled tasks

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
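
With the schedule above the CronJob spawns a new Job every minute; you can watch them appear:

kubectl get cronjob hello
kubectl get jobs --watch      #a new Job should show up roughly every minute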

Secret

Purpose: store sensitive data in etcd (base64-encoded) and let Pod containers consume it as environment variables or a mounted Volume.

1 Create the secret

echo admin | base64

echo 123456 | base64
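
Note that plain echo appends a newline, which ends up inside the decoded value (hence the trailing K / Cg== in the strings used below). If that is not what you want, encode with -n instead:

echo -n admin | base64     # YWRtaW4=
echo -n 123456 | base64    # MTIzNDU2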

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4K
  password: MTIzNDU2Cg==

2 Consume the Secret as environment variables

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password

kubectl exec -it mypod -- bash

root@mypod:/# echo $SECRET_USERNAME
admin
root@mypod:/# echo $SECRET_PASSWORD
123456

3 Mount the Secret as a volume

apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret

kubectl exec -it mypod2 -- bash

root@mypod2:/# cd /etc/foo/
root@mypod2:/etc/foo# ls
password username
root@mypod2:/etc/foo# cat username
admin
root@mypod2:/etc/foo# cat password
123456

ConfigMap

Purpose: store non-sensitive configuration data in etcd and expose it to Pods as environment variables or mounted files.

Typical use case: configuration files, for example the redis.properties used below.
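
The content of redis.properties is not shown in the original notes; any simple key/value file works, for instance:

redis.host=127.0.0.1
redis.port=6379
redis.password=123456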

kubectl create configmap redis-config --from-file=redis.properties

kubectl get configmap

kubectl describe cm redis-config


Mount the ConfigMap as files

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh","-c","cat /etc/config/redis.properties"]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config/
  volumes:
    - name: config-volume
      configMap:
        name: redis-config
  restartPolicy: Never

Expose the ConfigMap as environment variables

Create the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello

kubectl apply -f myconf.yaml

kubectl get cm

kubectl describe cm myconf

Consume it from a Pod as environment variables

apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c","echo $(LEVEL) $(TYPE)"]
      env:
        - name: LEVEL
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: special.level
        - name: TYPE
          valueFrom:
            configMapKeyRef:
              name: myconfig
              key: special.type
  restartPolicy: Never

kubectl describe pods mypod2

k8s cluster security model

1. Overview

Every request to the cluster goes through three steps:

Step 1: authentication

Step 2: authorization

Step 3: admission control

2. All access goes through the apiserver

The apiserver acts as the gatekeeper; requests must carry a certificate, a token, or a username and password.

Access from a Pod requires a ServiceAccount.

Step 1: transport security and authentication

Common client authentication methods:

HTTPS certificate authentication, based on a CA certificate

HTTP token authentication: the user is identified by a token

HTTP basic authentication: username + password

Step 2: authorization

Authorization is based on RBAC,

i.e. role-based access control

Step 3: admission control

A list of admission controllers is consulted; if the request passes them all it is admitted, otherwise it is rejected.

#create a namespace
kubectl create ns roleddemo
kubectl get ns
#create a pod
kubectl run nginx --image=nginx -n roleddemo
kubectl get pods -n roleddemo
#create a role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: roleddemo
  name: pod-reader
rules:
- apiGroups: [""]       # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

kubectl apply -f role.yaml 
kubectl get role -n roleddemo
#bind the role to a user
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: roleddemo
subjects:
- kind: User
  name: lucy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
  
kubectl apply -f roler.yaml
kubectl get role,rolebinding -n roleddemo
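
With the Role bound, you can check what the user is allowed to do even before generating a certificate, using impersonation (lucy matches the RoleBinding above):

kubectl auth can-i list pods -n roleddemo --as lucy     # yes
kubectl auth can-i create pods -n roleddemo --as lucy   # no
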
#create the client certificate for the user

Ingress

Exposing a port externally and accessing the application via IP + port can be done with a NodePort Service.

Drawback: the port is opened on every node and the application can be reached through any node IP plus the exposed port;

this also means each port can only be used once, i.e. one port per application.

Relationship between Ingress and Pods

Pods and the Ingress are linked through a Service.

The Ingress is the single entry point; the Service behind it selects a group of Pods.

Using Ingress

Step 1: deploy an Ingress controller

Step 2: create Ingress rules

#create an nginx application and expose it with a NodePort Service
kubectl create deployment web --image=nginx
#expose the port
kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
#deploy the ingress controller
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container

kubectl apply -f ingressco.yaml

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30028
  selector:
    app.kubernetes.io/name: ingress-nginx

#check the status

kubectl get pods -n ingress-nginx

#create the ingress rule

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ingredemo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80

kubectl apply -f ingresshttp.yaml

kubectl get ing
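
To test the rule from a client machine, point the host name at the node running the ingress controller and curl it (the node IP is a placeholder; because the controller above runs with hostNetwork: true it listens on port 80 of that node):

echo "<node-ip> example.ingredemo.com" >> /etc/hosts
curl http://example.ingredemo.com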

helm

Writing yaml files by hand means every deployment needs its own

**Deployment

**Service

**Ingress

With a large number of microservices this becomes unwieldy; helm solves that problem:

1 With helm, all of an application's yaml files can be managed as one unit

2 yaml files can be reused (templating)

3 Application-level release/version management

The three core helm concepts

1 helm: the command-line tool, used for packaging and releasing charts

2 Chart: a collection of yaml files

3 Release: a deployed instance of a chart; one release per deployed application version

Installing helm

1 Download the helm tarball and upload it to the Linux server

2 Unpack it and copy the binary to /usr/bin

tar -zxf helm-v3.0.0-linux-amd64.tar.gz

cd linux-amd64/

mv helm /usr/bin/

3 Configure helm repositories

helm repo add <repo-name> <repo-url>

helm repo add stable http://mirror.azure.cn/kubernetes/charts #Microsoft mirror

helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts #Aliyun mirror

helm repo update

#list the configured repositories
helm repo list
helm search repo stable
#remove a repository
helm repo remove aliyun

Quickly deploying an application

Step 1: search for the application

helm search repo <name>

helm search repo weave

Step 2: install the application

helm install <release-name> <chart-name>

helm install ui stable/weave-scope

Step 3: check the result

helm list

helm status ui

kubectl get pods

kubectl get svc

Edit the Service yaml (e.g. change its type to NodePort to expose the UI):

kubectl edit svc ui-weave-scope


kubectl get svc

Creating your own chart

#1 use the create command
helm create <chart-name>
helm create mychart
ls mychart/     #inspect the layout

Chart.yaml: metadata/properties of this chart

templates/: the yaml manifests of the application go into this directory

values.yaml: global variables that the templates can reference

kubectl create deployment web1 --image=nginx --dry-run=client -o yaml > deployment.yaml

kubectl expose deployment web1 --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > service.yaml

Put both files into /root/linux-amd64/mychart/templates

Go back to the parent directory and install the chart:

helm install web1 mychart

helm list

kubectl get pods

kubectl get svc

4 Upgrading a release

#when a new version of the chart is released, or when you want to change the configuration of a release, use helm upgrade:
helm upgrade web1 mychart
helm upgrade --set imageTag=1.17 web nginx
helm upgrade -f values.yaml web nginx
#if the release does not behave as expected, roll back to a previous revision with helm rollback,
#e.g. roll the application back to revision 1:
helm rollback web 1
#to uninstall a release, use helm uninstall:
helm uninstall web
#show the full configuration of a historical revision:
helm get all --revision 1 web

Template reuse and parameters

What usually differs between the yaml files:

1 image 2 tag 3 label 4 port 5 replicas

#1 define the variables and their values in values.yaml
vim values.yaml
replicas: 1
image: nginx
tag: 1.16
label: nginx
port: 80
#2 reference the variables from the yaml files under templates/
# {{ .Values.<variable-name> }}
# {{ .Release.Name }}  the release name, generated dynamically
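
A minimal templates/deployment.yaml sketch using those values (field names follow the values.yaml above; this is an illustration, not the file generated by helm create):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deploy
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - name: {{ .Values.image }}
        image: {{ .Values.image }}:{{ .Values.tag }}
        ports:
        - containerPort: {{ .Values.port }}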

helm install --dry-run web2 mychart

Persistent data storage

NFS network storage

yum -y install nfs-utils

vim /etc/exports
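
The notes do not show the exports file; a typical single-line export for the directory used by the Deployment example below might look like this (adjust path and allowed network to your environment):

/data/nfs *(rw,no_root_squash)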

systemctl start nfs

ps -elf | grep nfs

showmount -e localhost

kubectl describe pod nginx-dep1-5c75b5798c-gxkqp

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
        - name: wwwroot
          nfs:
            server: 192.168.44.134
            path: /data/nfs

https://blog.51cto.com/14154700/2450847

Persistent volume types

A PersistentVolume (PV) is a piece of storage in the cluster, managed by the cluster administrator or provisioned automatically through a StorageClass. Like Pods, Deployments and Services, a PV is a resource object.

Create the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi                           #the capacity this PV offers (1Gi, matching the output below)
  accessModes:
    - ReadWriteOnce                        #can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Recycle   #reclaim policy: Recycle
  storageClassName: nfs                    #storage class name; the PVC must reference the same name
  nfs:                                     #backing NFS share
    path: /home/date                       #directory exported by the NFS server
    server: 192.1.3.57                     #IP of the NFS server
#Explanation of the fields above
#capacity: the size of the PV
#accessModes: the access modes
    #ReadWriteOnce: mounted read-write by a single node (which implies it can be claimed by a single PVC)
    #ReadOnlyMany: mounted read-only by many nodes
    #ReadWriteMany: mounted read-write by many nodes
#persistentVolumeReclaimPolicy: what happens after the PV is released
    #Recycle: scrub the data in the PV, then make it available again
    #Retain: manual reclamation
    #Delete: delete the underlying cloud storage resource (cloud storage only)
    #Note: the reclaim policy decides whether the files stored under this PV are deleted after the PV is released
#storageClassName: the key used to match PVs with PVCs

kubectl apply -f test-pv.yaml
[root@master ~]# kubectl get pv test-pv    #a PV is a resource object, so its state can be inspected like any other
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Recycle          Available           nfs                     38s
#the PV must be in the Available state before it can be used

Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:          #the access mode must match the one defined by the PV
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          #request the full capacity
  storageClassName: nfs      #must match the storage class name defined in the PV

[root@master ~]# kubectl apply -f test-pvc.yaml     #apply the yaml

#check the PV and PVC again (status Bound means the PV is now in use)
[root@master ~]# kubectl get pvc      #check the PVC
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   1Gi        RWO            nfs            2m10s
[root@master ~]# kubectl get pv      #check the PV
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Recycle          Bound    default/test-pvc   nfs                     8m24s

Create a pod that uses the PVC

[root@master ~]# vim test-pod.yaml       #write the pod yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: /testdata
      name: volumedata     #an arbitrary name for the mount
  volumes:
    - name: volumedata      #must match the volumeMounts name above
      persistentVolumeClaim:
        claimName: test-pvc
[root@master ~]# kubectl apply -f test-pod.yaml        #apply the yaml
[root@master ~]# kubectl get pod     #check the pod; it is stuck in ContainerCreating
#what is going on?
NAME       READY   STATUS              RESTARTS   AGE
test-pod   0/1     ContainerCreating   0          23s
#when a pod is in an abnormal state there are generally three ways to troubleshoot:
#1. kubectl describe to see the pod's events and details
#2. kubectl logs to read the pod's logs
#3. check the host's message log
#here the first method is used
[root@master ~]# kubectl describe pod test-pod
#the last event reads:
mount.nfs: mounting 192.168.20.6:/nfsdata/test-pv failed, reason given by server: No such file or directory
#the directory specified for the NFS mount does not exist
#so create it on the NFS server (here the local machine)
[root@master ~]# mkdir -p /nfsdata/test-pv      #create the directory
[root@master ~]# kubectl get pod test-pod   #check the pod again
#if the pod is still being created, the kubelet on that node simply has not retried yet
#to speed things up you can restart the kubelet on the pod's node
[root@master ~]# kubectl get pod test-pod    #after a short wait the pod is Running
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          8m
[root@master ~]# kubectl exec -it test-pod -- /bin/sh   #enter the pod
/ # echo "test pv pvc" > /testdata/test.txt       #write a test file into the persistent directory
#back on the NFS server, check whether the shared directory contains the data written from the container
[root@master ~]# cat /nfsdata/test-pv/test.txt   #it does
test pv pvc
#now find the node the pod runs on and delete the container there
[root@master ~]# kubectl get pod -o wide       #here it runs on node02
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-pod   1/1     Running   0          11m   10.244.2.2   node02   <none>           <none>
#on node02, find the container ID and remove the container
[root@node02 ~]# docker ps      #get the container ID
[root@node02 ~]# docker rm -f dd445dce9530   #remove the container just created
#back on the NFS server the data is still there
[root@master ~]# cat /nfsdata/test-pv/test.txt 
test pv pvc
#now delete the pod itself: does the data on the NFS server survive?
[root@master ~]# kubectl delete -f test-pod.yaml 
[root@master ~]# cat /nfsdata/test-pv/test.txt      #yes, the data is still there
test pv pvc
#and what if the PVC is deleted?
[root@master ~]# kubectl delete -f test-pvc.yaml 
[root@master ~]# cat /nfsdata/test-pv/test.txt       #now the data is gone (Recycle reclaim policy)
cat: /nfsdata/test-pv/test.txt: No such file or directory

Dynamic PV/PVC provisioning in k8s with NFS (e.g. a NAS)

1 Create the provisioner

nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; it acts as an NFS client and provisions storage for a StorageClass.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
    
  selector:
    matchLabels:
      app: nfs-client-provisioner
      
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
        
    spec:
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.1.3.57
            - name: NFS_PATH
              value: /home/data
              
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.1.3.57
            path: /home/data
         
#PROVISIONER_NAME: the provisioner name; it must be referenced when creating the StorageClass
#NFS_SERVER: the address of the NFS server
#NFS_PATH: the path exported by the NFS server

kubectl apply -f nfs-deployment.yaml
kubectl get deploy
kubectl get pods
#everything is running, but watch out for RBAC problems; check the logs of the nfs-client-provisioner pod:
kubectl logs --tail 10 -f nfs-client-provisioner
#the log says: in the default namespace, the default ServiceAccount cannot get endpoints resources in API group ""

2 Create a role with permission on the endpoints resource (and the other resources the provisioner needs), and bind that role to the service account

# create the ClusterRole: nfs-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

#run kubectl apply -f nfs-clusterrole.yaml to create the nfs-provisioner-runner ClusterRole

#bind the role to the service account: nfs-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
#run kubectl apply -f nfs-clusterrolebinding.yaml to create the binding
#check the logs again with kubectl logs --tail 10 -f nfs-client-provisioner: no more errors

3 Next, create the StorageClass, as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs   #provisioner: must match PROVISIONER_NAME in the deployment
parameters:
  archiveOnDelete: "true"


#pick one of the two definitions
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs

#reclaimPolicy supports three values: Delete, Retain, Recycle
#Retain: after the PV is deleted the data on the backend storage remains; delete the backend volume manually for full cleanup
#Delete: delete both the PV released by the PVC and the backend storage volume
#Recycle: keep the PV but wipe its data (deprecated)
#kubectl describe StorageClass nfs-storage

4 Once everything is deployed, create a StatefulSet to test it:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web2
spec:
  serviceName: "nginx1"
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"    #must match the StorageClass above
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.7.9
        volumeMounts:
        - mountPath: "/mnt"    #mount point inside the container
          name: test

kubectl get pods

kubectl get pvc #the PVC was not created successfully

#error

kubectl logs --tail 10 nfs-client-provisioner-787986fdd4-kfd8z

E0326 06:43:42.362272 1 controller.go:1004] provision "default/test-web2-0" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

Fix: edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the flag

- --feature-gates=RemoveSelfLink=false

kube-apiserver is a static pod, so the kubelet restarts it automatically once the manifest is saved; no kubectl apply is needed.

#the NFS shared directory is then created automatically
[root@RuiGIS data]# pwd
/home/data
[root@RuiGIS data]# ls
1.txt  default-test-web2-0-pvc-da4d5557-f11f-4ac1-bcc1-bb6b245ab633  harbor
[root@RuiGIS data]#

k8s 1.18.2 deployment walkthrough

https://blog.51cto.com/leejia/2495558#h7
