3. k8s Core Technologies

3.5. The kubectl Command-Line Tool

kubectl [action] [type] [name] [option]

  • action: the verb, e.g. create, get, apply, delete, logs, describe. An action may span more than one word, e.g. set image
  • type: the resource type, e.g. pod, nodes
  • name: the resource name, e.g. abcmaster
  • option: flags, e.g. -n

3.6. Pod

How a Pod works
  • Shared network
    • The containers in a Pod are bridged by the root container (pause) into the same network namespace
  • Shared storage
    • The containers share Volume data volumes
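The shared-storage mechanism can be sketched as a Pod whose two containers mount the same emptyDir volume; this is a hypothetical example (names, images, and commands are illustrative, not from the notes above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo      # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data         # both containers mount the same volume
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data          # /data/msg written by "writer" is visible here
  volumes:
  - name: shared-data
    emptyDir: {}                # ephemeral volume shared by the Pod's containers
```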
Pod image pull policy and restart policy
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    imagePullPolicy: Always
  restartPolicy: Never

imagePullPolicy (image pull policy)

  • IfNotPresent: the default (for tags other than :latest); pull only when the image is not already present on the node
  • Always: pull the image every time the Pod is created
  • Never: never pull the image

restartPolicy (container restart policy)

  • Always: always restart the container when it exits
  • OnFailure: restart only on abnormal exit (non-zero exit code)
  • Never: never restart
Pod resource limits
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"
    
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
        
      limits:
        memory: "128Mi"
        cpu: "500m"

resources: resource constraints for the containers in a Pod

  • requests: the minimum resources the container needs (used for scheduling)
  • limits: the maximum resources the container may use

Note: for CPU, 1c == 1000m (millicores).

Pod node-scheduling strategies
  • 1. Container resource requests and limits
  • 2. nodeSelector: label selector
  • 3. nodeAffinity: node-affinity selector (a more flexible selector)
    • Hard affinity (must be satisfied)
    • Soft affinity (satisfied when possible)
    • Operators: In, NotIn, Gt, Lt, Exists, ...
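The hard and soft variants map to the required/preferred fields of nodeAffinity. A minimal sketch (the label keys env_role and disktype and their values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo           # hypothetical name
spec:
  affinity:
    nodeAffinity:
      # hard affinity: the node MUST carry one of these labels
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env_role
            operator: In
            values: ["dev", "test"]
      # soft affinity: prefer such nodes, but schedule elsewhere if none match
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: nginx
    image: nginx
```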

3.7. Controller

3.7.1. The relationship between Pods and Controllers
  • "Controller" here refers specifically to Pod controllers
  • Controllers carry out operational tasks on Pods, such as scaling, rollback, and rolling upgrades
  • A Controller is linked to the Pods it manages through labels (selector)
3.7.2. The Deployment controller

Purpose:

  • Deploy stateless applications
  • Manage Pods and ReplicaSets
  • Deployment and rolling upgrades

YAML configuration notes

  • Use the --dry-run flag to generate a Deployment YAML file
root@ubuntu:~# kubectl create deployment web --image=nginx --dry-run -o yaml > web.yaml
W0919 06:22:09.514872   82865 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.
root@ubuntu:~# ls
kube-flannel.yml  kubernetes-dashboard.yaml  pullk8s.sh  snap  web.yaml

Inspect the generated web.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1  # number of replicas
  selector:
    matchLabels: # selected via labels
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:  # define the labels
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
  • Deploy the Deployment from the YAML
root@ubuntu:~# kubectl apply -f web.yaml
deployment.apps/web created

root@ubuntu:~# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-n7ht9   1/1     Running   0          39s
  • Expose the Deployment created from YAML to the outside
# generate the YAML for the Service
root@ubuntu:~# kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=nginx1 -o yaml > web1.yaml
# apply it
root@ubuntu:~# kubectl apply -f web1.yaml
Warning: resource services/nginx1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/nginx1 configured

Inspect the generated web1.yaml

apiVersion: v1
kind: Service # expose generates a Service, which is why it is reachable from outside
metadata:
  creationTimestamp: "2021-09-19T06:36:32Z"
  labels:
    app: web
  name: nginx1
  namespace: default
  resourceVersion: "66025"
  uid: b56685c8-6505-4c15-92a7-031f6d5df9a5
spec:
  clusterIP: 10.109.236.204
  clusterIPs:
  - 10.109.236.204
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31150
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Application upgrades, rollbacks, and scaling

  • Upgrade the Deployment's application with the set image command
kubectl set image deployment web nginx=nginx:1.15
  • View the rollout history
kubectl rollout history deployment web
  • Roll back to the previous revision
kubectl rollout undo deployment web
  • Scale to 5 replicas with the scale command
kubectl scale deployment web --replicas=5
3.7.3. Service
  • A Service defines the access rules for a group of Pods, similar to a registry in a microservice architecture; abbreviated svc in kubectl commands
  • 1. Service discovery keeps Pods reachable: Pod IPs are not fixed, so each Pod registers its IP with the Service
  • 2. Defines the access policy for the Pods (load balancing)

The relationship between Service and Pod

  • As with Controllers, the link is established via selector – labels

Common Service types

A Service is created with the expose command; its --type flag sets the Service type, one of:

  • ClusterIP: for access from inside the cluster only
  • NodePort: exposes the application externally
  • LoadBalancer: exposes the application externally, for public clouds
3.7.4. Stateful controller – StatefulSet

Characteristics of a stateful controller

  • The Pods have a required startup order
  • A Pod may be pinned to a specific node (fixed IP)
  • Each Pod is independent and has a unique network identifier; Pods cannot be scaled or extended arbitrarily
  • Persistent storage

Deploying a stateful controller

***Headless Service:*** a Service whose clusterIP is None, accessed through specific domain names.
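A minimal sketch of a headless Service paired with a StatefulSet (the names and image are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless          # hypothetical name
spec:
  clusterIP: None               # headless: no virtual IP; DNS returns the Pod IPs
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset       # hypothetical name
spec:
  serviceName: nginx-headless   # must reference the headless Service
  replicas: 3                   # Pods start in order: -0, then -1, then -2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

Each replica then gets a stable DNS name of the form `<pod-name>.<service-name>.<namespace>.svc.cluster.local`.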

3.7.5. DaemonSet (daemon controller)
  • Ensures every node runs the same Pod, e.g. deploying a data-collection agent on each node
# enter a pod
kubectl exec -it ${podname} bash
# exit
exit
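The one-Pod-per-node behavior can be sketched as follows (a hypothetical log-collector example; note a DaemonSet has no replicas field, because the number of nodes determines the number of Pods):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent               # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: busybox
        command: ["sh", "-c", "sleep 3600"]   # placeholder for a real collector
        volumeMounts:
        - name: varlog
          mountPath: /var/log   # read the node's logs from inside the Pod
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log        # each Pod sees its own node's /var/log
```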
3.7.6. Job and CronJob
  • One-off tasks and periodic tasks
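Both kinds can be sketched as follows (hypothetical examples; the batch/v1 CronJob API assumes k8s 1.21+, older clusters use batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: Job                       # one-off task
metadata:
  name: pi                      # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never      # a Job may not use restartPolicy: Always
  backoffLimit: 4               # retry at most 4 times on failure
---
apiVersion: batch/v1
kind: CronJob                   # periodic task
metadata:
  name: hello                   # hypothetical name
spec:
  schedule: "*/1 * * * *"       # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure
```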

3.8. Configuration Management

3.8.1. Secret
  • Stores base64-encoded data in etcd, which Pods can consume as a Volume (or as environment variables)
  • Use case: storing login credentials

Creating a Secret

  • Write the YAML file
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=   # base64 of "admin"
  password: MTIzNDU2   # base64 of "123456"
  • Create the Secret from the YAML
root@ubuntu:~# kubectl create -f mysecret.yaml
secret/mysecret created
root@ubuntu:~# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-pxtlb   kubernetes.io/service-account-token   3      13d
mysecret              Opaque                                2      22s

Mounting into a Pod as environment variables

  • Create a Pod that uses the Secret
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: sng
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  • Deploy the Pod and inspect the variables
# deploy
root@ubuntu:~# kubectl apply -f sng.yaml
pod/mypod created
root@ubuntu:~# kubectl get pods
NAME                  READY   STATUS              RESTARTS   AGE
mypod                 0/1     ContainerCreating   0          15s
web-96d5df5c8-n7ht9   1/1     Running             0          117m
root@ubuntu:~# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
mypod                 1/1     Running   0          25s
web-96d5df5c8-n7ht9   1/1     Running   0          117m

# enter the pod
root@ubuntu:~# kubectl exec -it mypod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# print the variable
root@mypod:/# echo $SECRET_USERNAME
admin

Consuming the Secret as a Volume

  • Create a Pod that mounts the Secret as a volume
apiVersion: v1
kind: Pod
metadata:
  name: vng
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"   # mount point inside the container
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
  • Create the Pod and inspect the mounted secret
root@ubuntu:~# kubectl apply -f vng.yaml
pod/vng created
root@ubuntu:~# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
mypod                 1/1     Running   0          15m
vng                   1/1     Running   0          23s
web-96d5df5c8-n7ht9   1/1     Running   0          132m
root@ubuntu:~# kubectl exec -it vng bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# check the mount
root@vng:/# ls
bin   docker-entrypoint.d   home   media  proc  sbin  tmp
boot  docker-entrypoint.sh  lib    mnt    root  srv   usr
dev   etc                   lib64  opt    run   sys   var
root@vng:/# ls /etc/foo
password  username
root@vng:/# cat /etc/foo/username
admin
3.8.2. ConfigMap
  • Stores unencrypted data in etcd; Pods can consume it as environment variables or as a data volume, similar in function to Secret
  • Commonly used for application configuration files, e.g. redis.properties

Creating a ConfigMap

  • Create a configuration file redis.properties
redis.host=127.0.0.1
redis.port=7397
redis.password=123456
  • Create it with the create command
# create (ConfigMap names must be valid DNS names — lowercase letters, digits, '-'; an underscore as in redis_conf would be rejected)
kubectl create configmap redis-conf --from-file=redis.properties
# list
kubectl get cm
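A Pod can then mount the ConfigMap as a volume so the file appears inside the container. A hypothetical sketch, assuming the ConfigMap is named redis-conf:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo                 # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /config/redis.properties && sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /config        # redis.properties appears under this path
  volumes:
  - name: config
    configMap:
      name: redis-conf          # the ConfigMap created above
```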

3.9. Cluster Security

Three steps of resource access:

  • Authentication
  • Authorization
  • Admission control
3.9.1. Steps of the security mechanism

Common authentication methods

  • HTTPS authentication based on CA certificates
  • Tokens
  • HTTP username + password

Authorization models

  • RBAC-based permission system

Admission control

  • If the requested resource is in the admission-control list, the request passes; otherwise it is rejected
3.9.2. RBAC-based authorization
  • Roles
    • Role: access within a specific namespace
    • ClusterRole: access across all namespaces
  • Subjects
    • user
    • group
    • serviceaccount
  • Resources
    • namespace
    • pod
    • node
  • Role bindings
    • RoleBinding: binds a Role to subjects
    • ClusterRoleBinding: binds a ClusterRole to subjects

Creating a Role

  • Create a namespace
root@ubuntu:~# kubectl create ns roledemo
namespace/roledemo created
  • Create a Pod in roledemo
root@ubuntu:~# kubectl run nginx --image=nginx -n roledemo
pod/nginx created
root@ubuntu:~# kubectl get pods -n roledemo
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          22s
  • Define the Role in YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: roledemo
  name: pod-reader
rules:
- apiGroups: [""]  # the leading dash marks one element of a list
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
  • Apply it
# create
root@ubuntu:~# kubectl apply -f rbac-role.yaml
role.rbac.authorization.k8s.io/pod-reader created

# query
root@ubuntu:~# kubectl get roles -n roledemo
NAME         CREATED AT
pod-reader   2021-09-19T11:34:24Z

Creating a RoleBinding

  • Define the role binding in YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: roledemo
  name: read-pods
subjects:
- kind: User
  name: arsiya
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
  • Apply it
root@ubuntu:~# kubectl apply -f rb.yaml
rolebinding.rbac.authorization.k8s.io/read-pods created

root@ubuntu:~# kubectl get rolebindings -n roledemo
NAME        ROLE              AGE
read-pods   Role/pod-reader   31s

Creating the certificate

  • Write the certificate script
cat > arsiya-csr.json <<EOF
{
    "CN": "arsiya",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "L": "Beijing",
        "ST": "Beijing"
    }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes arsiya-csr.json | cfssljson -bare arsiya

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.18.11:6443 --kubeconfig=arsiya-kubeconfig

kubectl config set-credentials arsiya --client-key=arsiya-key.pem --client-certificate=arsiya.pem --embed-certs=true --kubeconfig=arsiya-kubeconfig

kubectl config set-context default --cluster=kubernetes --user=arsiya --kubeconfig=arsiya-kubeconfig

kubectl config use-context default --kubeconfig=arsiya-kubeconfig

3.10. Ingress

Ingress acts like a gateway: a single entry point for reaching the Pods behind multiple Services. Ingress is not a built-in k8s component and must be installed separately.

  • A Service of type NodePort can expose internal services, but NodePort claims the same port on every node.
  • With Ingress, the applications in Pods can be reached by domain name.
3.10.1. Deploying the Ingress Controller
  • Define the Ingress controller in YAML
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
    
  • Run the create command
root@ubuntu:~# kubectl apply -f ingress-controller.yaml
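The controller above only routes traffic; the actual routing rules live in an Ingress resource. A minimal sketch (the host and backend Service name are hypothetical; note the older 0.30.0 controller may require the networking.k8s.io/v1beta1 API instead of v1):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress             # hypothetical name
spec:
  rules:
  - host: example.ingredemo.com # hypothetical domain, resolved to a node IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web           # assumes the "web" Service created earlier
            port:
              number: 80
```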

3.13. Deploying a Java Project

3.13.1. Building an image with a Dockerfile
  • Define the Dockerfile
FROM adoptopenjdk/openjdk8:latest
VOLUME /tmp
ADD ./demo2-0.0.1-SNAPSHOT.jar /demo2.jar
ENTRYPOINT ["java","-jar","/demo2.jar","&"]
  • Build the image
root@ubuntu:~# docker build -t demo2:latest .
Sending build context to Docker daemon  36.65MB
Step 1/4 : FROM adoptopenjdk/openjdk8:latest
 ---> 6331f760afd4
Step 2/4 : VOLUME /tmp
 ---> Running in ed378ef2df18
Removing intermediate container ed378ef2df18
 ---> 0737cfe3e07c
Step 3/4 : ADD ./demo2-0.0.1-SNAPSHOT.jar /demo2.jar
 ---> 955e0a724698
Step 4/4 : ENTRYPOINT ["java","-jar","/demo2.jar","&"]
 ---> Running in c364d268ffd1
Removing intermediate container c364d268ffd1
 ---> b39f955ca2cc
Successfully built b39f955ca2cc
Successfully tagged demo2:latest
  • Test the image
root@ubuntu:~# docker run -d -p 8089:8089 demo2:latest -t
d069b5822920d2c3d32ad53a15bdbc3436e580c061a2431d67b3c9f26e403db3
3.13.2. Creating a Deployment from YAML
  • Generate the YAML
root@ubuntu:~# kubectl create deployment demo2 --image=demo2:latest --dry-run -o yaml > demo2.yaml
W0920 11:07:09.196773  137211 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.