Deploying Kong on Kubernetes (3rd edition)

Official documentation

Newer Kong releases support a DB-less mode: all configuration is expressed as Kubernetes resources, which the cluster stores in etcd.

1. Install Kong

Create the namespace:

kubectl create namespace kong

Download the all-in-one DB-less manifest:

wget https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/v2.9.3/deploy/single/all-in-one-dbless.yaml

First change the kong-proxy Service to type NodePort (LoadBalancer also works, if your environment supports it). Edit all-in-one-dbless.yaml and find the kong-proxy Service section:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: proxy-kong
  # type: LoadBalancer
  type: NodePort

Next, add port 9080. The downloaded manifest only exposes HTTP and HTTPS; we also need a port for gRPC traffic.

Find the proxy-kong Deployment and make the following change:

      containers:
      - env:
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000 reuseport backlog=16384, 0.0.0.0:9080 http2 reuseport backlog=16384, 0.0.0.0:8443 http2 ssl reuseport
            backlog=16384
        - name: KONG_PORT_MAPS
          value: 80:8000, 9080:9080, 443:8443

The additions are the `0.0.0.0:9080 http2 reuseport backlog=16384` listener and the `9080:9080` port mapping.

Then find the container ports and modify them as follows:

        name: proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 8100
          name: metrics
          protocol: TCP
        - containerPort: 9080
          name: grpc
          protocol: TCP

The addition is the last three lines (the 9080 containerPort entry).

Then find the kong-proxy Service again and make this change:

spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: grpc
    port: 9080
    protocol: TCP
    targetPort: 9080

Again, the addition is the last four lines (the grpc port entry). With these changes Kong can route gRPC calls (useful when developers put gRPC behind Kong as a gateway; some teams skip the gateway and use a registry such as Consul or etcd instead).

Then apply the manifest:

kubectl apply -f all-in-one-dbless.yaml

Check the result:

kubectl get all -n kong


NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-66ffc7f58-ffqgt   1/1     Running   0          60m
pod/proxy-kong-5b968f958f-87nrq    1/1     Running   0          60m
pod/proxy-kong-5b968f958f-mrsbv    1/1     Running   0          60m

NAME                              TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   None              <none>        8444/TCP                     60m
service/kong-proxy                NodePort    192.168.252.190   <none>        80:32248/TCP,443:30683/TCP   60m
service/kong-validation-webhook   ClusterIP   192.168.254.177   <none>        443/TCP                      60m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           60m
deployment.apps/proxy-kong     2/2     2            2           60m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-66ffc7f58   1         1         1       60m
replicaset.apps/proxy-kong-5b968f958f    2         2         2       60m
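With the manifest applied, the gRPC listener can be sanity-checked from outside the cluster. This is only a sketch: grpcurl must be installed, the backend must expose gRPC server reflection, and the node IP and NodePort are placeholders for your cluster's values.

```shell
# Find the NodePort that the kong-proxy Service assigned to the grpc port
kubectl get svc kong-proxy -n kong \
  -o jsonpath='{.spec.ports[?(@.name=="grpc")].nodePort}'

# Call through Kong; the 9080 listener has no TLS, so use -plaintext.
# <node-ip> and <nodeport> are placeholders.
grpcurl -plaintext -authority cs.test.com <node-ip>:<nodeport> list
```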

2. Create an Ingress

First, the end-to-end request flow:

The domain cs.test.com resolves to the load balancer's IP. The load balancer's listener forwards to the Kubernetes node IPs on the kong-proxy Service's NodePort (the Service we changed above). The IngressClass binds the Ingress to the Ingress Controller (kong-proxy is the controller's proxy process). The controller matches the kong-ing Ingress by host name, then matches the URI against the paths to pick the backend Service, and routes traffic accordingly.
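You can exercise this flow without DNS by calling a node directly and supplying the Host header, for example using the NodePort 32248 shown in the `kubectl get all` output above (substitute your own node IP and assigned port):

```shell
# <node-ip> is a placeholder for any Kubernetes node's address
curl -H 'Host: cs.test.com' http://<node-ip>:32248/customer
```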

IngressClass is a Kubernetes object that manages the binding between Ingresses and Ingress Controllers, letting you group routing rules and reduce maintenance cost. It makes that binding flexible: multiple Ingress objects can be grouped under one IngressClass, and each IngressClass can be handled by a different Ingress Controller. In short, IngressClass decouples Ingress from Ingress Controller, so you can manage IngressClasses as business-level groupings and keep the Ingress rules themselves simpler.

kong-proxy is the proxy process within Kong Ingress Controller, handling traffic forwarding and routing. An Ingress Controller, in turn, is the Kubernetes component that manages Ingress resources and controls traffic entering and leaving the cluster. Kong Ingress Controller watches Ingress objects and its custom resource definitions (CRDs), translating the rules into Kong routes and plugin configuration, and thereby manages cluster ingress traffic. In other words, kong-proxy is the data-plane half of Kong Ingress Controller.

vim cs-test-com-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing
  namespace: dev
spec:
  ingressClassName: kong
  rules:
  - host: cs.test.com
    http:
      paths:
      - path: /customer
        pathType: Prefix
        backend:
          service:
            name: customer-service
            port:
              number: 20008

The Deployment and Service below are specific to my own app. To test, you can instead run a plain nginx Deployment and Service, change the Ingress path above to / and the port to 80; the effect is the same.
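For reference, a minimal nginx stand-in might look like this (the names and image tag are illustrative; pair it with an Ingress whose path is / and whose backend port is 80):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  namespace: dev
spec:
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80
```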

customer-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
  labels:
    app: customer
spec:
  selector:
    matchLabels:
      app: customer
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  # minReadySeconds: 30
  template:
    metadata:
      labels:
        app: customer
        tag: kobe
    spec:
      containers:
        - name: customer
          image: ccr.ccs.tencentyun.com/chens/kobe:jenkins-kobe-customer-dev-4-1b2fe90f6
          imagePullPolicy: IfNotPresent
          volumeMounts: # mount the ConfigMap into the container
          - name: config-kobe
            mountPath: /biz-code/configs/
          env:
            - name: TZ
              value: "Asia/Shanghai"
            - name: LANG
              value: C.UTF-8
            - name: LC_ALL
              value: C.UTF-8
          livenessProbe:
            failureThreshold: 2
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            grpc:
              port: 21008
            timeoutSeconds: 2
          ports:
          - containerPort: 20008
            protocol: TCP
          - containerPort: 21008
            protocol: TCP
          readinessProbe:
            failureThreshold: 2
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            httpGet:
              path: /Health
              port: 20008
              scheme: HTTP
            timeoutSeconds: 2
          resources:
            limits:
              cpu: 194m
              memory: 170Mi
            requests:
              cpu: 80m
              memory: 50Mi
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      securityContext: {}
      serviceAccountName: default
      volumes: # reference the ConfigMap
      - name: config-kobe
        configMap:
          name: config-kobe
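The Deployment above mounts a ConfigMap named config-kobe, which must already exist in the same namespace. Its contents depend on the application; a placeholder shape would be:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kobe
  namespace: dev
data:
  # file name and contents are placeholders for the app's real config
  config.yaml: |
    env: dev
```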

customer-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: customer
  name: customer-service
  namespace: dev
spec:
  ports:
    - name: http
      protocol: TCP
      port: 20008
      targetPort: 20008
    - name: grpc
      protocol: TCP
      port: 21008
      targetPort: 21008
  selector:
    app: customer
  sessionAffinity: None
  type: NodePort

This installation method is much simpler than it used to be, but the underlying mechanics are more involved, so it is worth understanding the workflow.


Running multiple Kongs in one Kubernetes cluster (the steps below still have some rough edges; for a single cluster it is better to deploy just one Kong and use multiple Ingresses to proxy different domains, i.e. different environments)

Say your dev, test and acceptance environments share one Kubernetes cluster, separated by namespace, and you have already deployed one Kong following the steps above. That install created a number of cluster-scoped roles and bindings, so a second install would conflict with them. The steps below let you deploy additional Kong instances in other namespaces.

1. Download the manifest

wget https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/v2.9.3/deploy/single/all-in-one-dbless.yaml

2. Modify the manifest

Find the ClusterRoleBinding objects and add the target namespace's ServiceAccount to each one's subjects. There are three bindings in total; add the account for each namespace you need:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: kong
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-gateway
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress-gateway
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: kong
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-knative
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-ingress-knative
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: kong
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test

3. Deploy

First apply all-in-one-dbless.yaml:

kubectl create -f all-in-one-dbless.yaml

This creates a Kong in the kong namespace:

kubectl get all -n kong

NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-66ffc7f58-x4l2j   1/1     Running   0          8m42s
pod/proxy-kong-5b968f958f-hz6tl    1/1     Running   0          8m42s
pod/proxy-kong-5b968f958f-sqxhd    1/1     Running   0          8m42s

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   None             <none>        8444/TCP                     8m42s
service/kong-proxy                NodePort    192.168.253.97   <none>        80:31231/TCP,443:30508/TCP   8m42s
service/kong-validation-webhook   ClusterIP   192.168.253.22   <none>        443/TCP                      8m42s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           8m42s
deployment.apps/proxy-kong     2/2     2            2           8m42s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-66ffc7f58   1         1         1       8m42s
replicaset.apps/proxy-kong-5b968f958f    2         2         2       8m42s

Then, using the file below, you can deploy a new Kong in the test namespace (this file is essentially the second half of all-in-one-dbless.yaml):

test-all-in-one-dbless.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-serviceaccount
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kong-leader-election
  namespace: test
rules:
- apiGroups:
  - ""
  - coordination.k8s.io
  resources:
  - configmaps
  - leases
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kong-leader-election
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong-leader-election
subjects:
- kind: ServiceAccount
  name: kong-serviceaccount
  namespace: test
---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
  namespace: test
spec:
  clusterIP: None
  ports:
  - name: admin
    port: 8444
    protocol: TCP
    targetPort: 8444
  selector:
    app: proxy-kong
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: test
spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: proxy-kong
  # type: LoadBalancer
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: kong-validation-webhook
  namespace: test
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        kuma.io/service-account-token-volume: kong-serviceaccount-token
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: ingress-kong
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: CONTROLLER_KONG_ADMIN_SVC
          value: test/kong-admin # point this controller at its own namespace's admin Service
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_PUBLISH_SERVICE
          value: test/kong-proxy # publish the test namespace's proxy Service in Ingress status
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: kong/kubernetes-ingress-controller:2.9.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: ingress-controller
        ports:
        - containerPort: 8080
          name: webhook
          protocol: TCP
        - containerPort: 10255
          name: cmetrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kong-serviceaccount-token
          readOnly: true
      serviceAccountName: kong-serviceaccount
      volumes:
      - name: kong-serviceaccount-token
        projected:
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: proxy-kong
  name: proxy-kong
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        kuma.io/service-account-token-volume: kong-serviceaccount-token
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: proxy-kong
    spec:
      automountServiceAccountToken: false
      containers:
      - env:
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000 reuseport backlog=16384, 0.0.0.0:8443 http2 ssl reuseport
            backlog=16384
        - name: KONG_PORT_MAPS
          value: 80:8000, 443:8443
        - name: KONG_ADMIN_LISTEN
          value: 0.0.0.0:8444 http2 ssl reuseport backlog=16384
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "2"
        - name: KONG_KIC
          value: "on"
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ROUTER_FLAVOR
          value: traditional
        image: kong:3.2
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/bash
              - -c
              - kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 8100
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      serviceAccountName: kong-serviceaccount
      volumes:
      - name: kong-serviceaccount-token
        projected:
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: kong-test
spec:
  controller: ingress-controllers.konghq.com/kong
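An Ingress in the test namespace would then reference this class. Note that for the test controller to actually watch kong-test rather than the default kong class, the ingress-kong Deployment above would also need CONTROLLER_INGRESS_CLASS=kong-test set (one of the rough edges mentioned earlier). A sketch, with illustrative host and backend names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ing-test        # illustrative name
  namespace: test
spec:
  ingressClassName: kong-test
  rules:
  - host: cs-test.test.com   # placeholder host for the test environment
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-test # placeholder backend Service
            port:
              number: 80
```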

4. Check

It looks the same as the Kong in the kong namespace; the two share the same cluster roles:

kubectl get all -n test


NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-66ffc7f58-8b8kk   1/1     Running   0          2m44s
pod/proxy-kong-5b968f958f-vg7nh    1/1     Running   0          2m44s
pod/proxy-kong-5b968f958f-zkww7    1/1     Running   0          2m44s

NAME                              TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   None              <none>        8444/TCP                     2m44s
service/kong-proxy                NodePort    192.168.254.246   <none>        80:30101/TCP,443:31093/TCP   2m44s
service/kong-validation-webhook   ClusterIP   192.168.252.220   <none>        443/TCP                      2m44s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           2m44s
deployment.apps/proxy-kong     2/2     2            2           2m44s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-66ffc7f58   1         1         1       2m44s
replicaset.apps/proxy-kong-5b968f958f    2         2         2       2m44s

5. Test

Testing works the same as before: create an Ingress, plus a Deployment and a Service.
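As a quick check, after creating those resources, a request through the test namespace's kong-proxy NodePort should reach the backend. The node IP, port and host below are placeholders:

```shell
# Note the NodePort assigned to port 80
kubectl get svc kong-proxy -n test

# <node-ip>, <nodeport> and the host are placeholders for your setup
curl -H 'Host: <your-test-host>' http://<node-ip>:<nodeport>/
```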
