Background

Kubernetes guarantees that whenever a replica (Pod) dies, a new one is automatically started on another machine, and it can also scale dynamically. In plain terms, a Pod may appear on any node at any moment, and may die on any node at any moment; so as Pods are created and destroyed, Pod IPs inevitably keep changing. How, then, do we expose these dynamic Pod IPs? Kubernetes solves this with the Service mechanism: a Service selects the group of Pods carrying a given label, tracks their Pod IPs, and load-balances across them automatically, so we only need to expose the Service IP to the outside. This is what the NodePort mode does: it opens a port on every node and forwards traffic from that port to the Pod IPs behind the Service, as shown in the figure below:
[Figure: NodePort forwarding from a node port to Pod IPs]
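As a sketch, a minimal NodePort Service selecting Pods by label might look like the following (the name, label, and ports are illustrative, not taken from this article's setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app           # selects all Pods carrying this label
  ports:
    - port: 80            # cluster-internal Service port
      targetPort: 8080    # container port inside each Pod
      nodePort: 30080     # port opened on every node (30000-32767 by default)
```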

The problem with exposing services via NodePort is that once the number of services grows, the set of ports NodePort opens on every node becomes enormous and hard to maintain.

How Ingress works

Ingress exists to solve this port-management problem while still handling the dynamic routing changes caused by Pods scaling up and down; it builds on Services to load-balance traffic to Pods.

[Figure: Ingress architecture]

As shown in the figure above, the Ingress controller watches Ingress objects and dynamically updates the forwarding rules inside the controller accordingly, while each Ingress object references a Service to discover that Service's Pods, across which the controller then load-balances.

A simple way to understand Ingress: previously you had to edit the Nginx configuration (it is not necessarily Nginx; it could also be HAProxy, Envoy, etc., though Nginx is the official default implementation) to map each domain to the right Service. Ingress abstracts that action into an object you can create from YAML: instead of touching Nginx every time, you just create or update the YAML. Which raises the question: "who takes care of Nginx, then?"

That is exactly what the Ingress controller is for. The Ingress controller talks to the Kubernetes API to watch for changes to the cluster's Ingress rules, reads them, renders an Nginx configuration from its own template, writes it into the Nginx Pod, and finally reloads Nginx.
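As an illustration only (the controller's real output is far more elaborate), the rendered nginx.conf for an Ingress rule mapping a host to a Service might contain something like the fragment below; the upstream Pod IPs here are hypothetical example values:

```nginx
# hypothetical fragment of the nginx.conf an ingress controller might render
upstream development-nacos-headless-8848 {
    # Pod IPs resolved from the Service's Endpoints (example values)
    server 10.244.1.12:8848;
    server 10.244.2.15:8848;
}

server {
    listen 80;
    server_name nacos.xxx.com;

    location / {
        proxy_pass http://development-nacos-headless-8848;
    }
}
```

When Pods come and go, the controller re-renders this file with the new Endpoints and reloads Nginx, which is how dynamic Pod IPs stay hidden from the outside.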

In fact, Ingress is one of the standard Kubernetes API resource types. It is simply a set of rules that forward requests to specified Service resources based on DNS name (host) or URL path, used to publish services by routing traffic from outside the cluster to the inside. The important point is that an Ingress resource cannot carry traffic by itself; it is only a collection of rules. Something else has to listen on a socket and route traffic according to those rules, and the component that listens on sockets and forwards traffic on behalf of Ingress resources is the Ingress controller.

Headless Services deserve some extra explanation here.

In Kubernetes, a Service load-balances across Pods. There are three main Service types (ClusterIP, NodePort, LoadBalancer). The ClusterIP type itself has two variants: when clusterIP is set to None, we call it a headless Service. How does a headless Service differ from a normal one?

Because a headless Service sets clusterIP to None, kube-proxy does not proxy it: when clients inside the cluster resolve the service, they get back the IPs of all of the service's Pods, and it is up to the client to load-balance across that IP list itself. Let's verify this claim.
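For reference, the only thing that makes a Service headless is `clusterIP: None`; a minimal sketch (the name and selector are hypothetical) looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc   # hypothetical name
spec:
  clusterIP: None         # headless: no virtual IP, no kube-proxy load balancing
  selector:
    app: my-app           # DNS returns the IPs of all Pods matching this label
  ports:
    - port: 8848
```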

As shown in the figure below, I created two Services. nacos-headless is the one with clusterIP set to None, and both Services front the same nacos StatefulSet (two Pod replicas).
[Figure: kubectl output showing the two Services]

I then logged into a Pod and looked up the service names via DNS. The lookup for nacos-headless returned two IPs, while the normal Service returned a single IP, which confirms the claim above: for a headless Service used inside the cluster, the Service itself does no load balancing.
[Figure: DNS lookup results inside the Pod]

Main use cases for headless Services

1. When you want to do load balancing yourself inside the k8s cluster

2. When used together with Ingress, letting Ingress handle load balancing, routing, and traffic rules

3. Others?

Deploying ingress-nginx

1. Download the YAML file

https://github.com/kubernetes/ingress-nginx/blob/nginx-0.30.0/deploy/static/mandatory.yaml

The YAML file above already contains all the important pieces: the Namespace, ConfigMaps, ServiceAccount, Role, ClusterRole, the corresponding bindings, and the Deployment.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: 10.1.12.29/k8s/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
2. Apply the YAML to deploy the ingress-nginx controller
2.1 On a server that can reach the Internet, pull the nginx-ingress-controller Docker image
# pull the image
docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
2.2 Push the image to the company's Docker registry
# retag the image
docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0  10.1.12.29/k8s/nginx-ingress-controller:0.30.0
# push the image to the company registry
docker push 10.1.12.29/k8s/nginx-ingress-controller:0.30.0
2.3 Update the image name in mandatory.yaml
# open mandatory.yaml and change the image name to 10.1.12.29/k8s/nginx-ingress-controller so that it points to the company registry

This step is not strictly required. Because our company network restricts Internet access, pushing the image to the internal Harbor registry lets the Kubernetes deployment come up quickly; otherwise it would have to wait for a very slow image download.

2.4 Deploy
kubectl create -f mandatory.yaml
3. Deploy the NodePort Service
# vi service-nodeport.yaml to create the ingress-nginx Service, with the following content

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 32080  #http
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443  #https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    
    
# create the service
kubectl create -f service-nodeport.yaml
4. Verify that ingress is deployed

Visit 10.1.12.27:32080 and you will get a 503 response. This means ingress-nginx is installed; it just cannot reverse-proxy anything yet, because no Ingress object has been created.

Creating an Ingress object (using nacos as an example)

Here we reverse-proxy nacos. The original nacos Service looks like this:

[Figure: nacos Service details]

Create an Ingress (the Service an Ingress points to can be a headless Service, i.e. one whose clusterIP is None).

# vi ingress-nacos.yaml, with the following content

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nacos
  namespace: development
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nacos.xxx.com # externally accessible domain
    http:
      paths:
      - path: /
        backend:
          serviceName: nacos-headless # nacos service name
          servicePort: 8848 # nacos service port
          
# create the ingress
kubectl create -f ingress-nacos.yaml

Configure name resolution for the domain. In this test environment we use the hosts file (a production environment would need a real DNS server).
[Figure: hosts file entry]
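Assuming 10.1.12.27 is one of the cluster nodes (the IP used in the earlier 503 check), the hosts entry is a single line mapping that node to the Ingress host:

```
10.1.12.27    nacos.xxx.com
```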

Access nacos through the domain: http://nacos.xxx.com:32080/nacos
[Figure: nacos console]

Summary

Once the ingress controller is installed, the k8s cluster exposes a single port, 32080, to the outside (choose this port as you see fit). From then on, any Service that needs external access can simply create a matching Ingress object and let the ingress controller do the forwarding.
