What is Ingress? Ingress proxies traffic to backend services in Kubernetes; you can loosely think of it as an nginx. It consists of two parts:
1. ingress-controller: essentially an nginx freshly built from source, except that its base configuration is already fairly complete; it rewrites that configuration as the cluster changes.
2. the Ingress resource: a rule object declared much like a Service, from which the controller discovers the running backend services and records their pod endpoint IPs in nginx upstream blocks.

Installing Ingress
Ingress needs a default backend; create default-backend.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
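Assuming the manifest is saved as default-backend.yaml, it can be applied and checked like this (a sketch; the pod name suffix will differ on your cluster):

```shell
kubectl create -f default-backend.yaml
# the backend pod runs in kube-system
kubectl get pods -n kube-system -l k8s-app=default-http-backend
# a quick sanity check against the pod, from any cluster node:
#   curl http://<pod-ip>:8080/healthz   -> 200
#   curl http://<pod-ip>:8080/          -> 404
```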

On Kubernetes 1.6.2 (with RBAC enabled) you need to create a Role, ClusterRole, RoleBinding, and ClusterRoleBinding. Create ingress-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: clusterrole-ingress
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  - pods
  verbs:
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - events
  - services
  verbs:
  - create
  - list
  - update
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  - ingresses
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: role-ingress
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterrole-ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system

Create the service account, ingress-ServiceAccount.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system
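With ingress-role.yaml and ingress-ServiceAccount.yaml saved, both can be created together; the bindings above grant the `ingress` service account the list/watch/update permissions the controller needs (commands are a sketch):

```shell
kubectl create -f ingress-role.yaml
kubectl create -f ingress-ServiceAccount.yaml
# confirm the cluster role binding points at the service account
kubectl get clusterrolebinding ingress-clusterrolebinding -o yaml
```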

Create the controller, nginx-ingress-controller.yaml.
Make sure ports 80 and 443 on the host are not already in use.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      serviceAccountName: ingress
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      nodeSelector:
        kubernetes.io/hostname: 10.1.0.21    # nodeSelector pins the pod to this node; external requests are forwarded to it
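The controller can then be deployed and verified (a sketch; 10.1.0.21 is the node pinned by nodeSelector in the manifest above):

```shell
kubectl create -f nginx-ingress-controller.yaml
kubectl get pods -n kube-system -l k8s-app=nginx-ingress-controller
# with no Ingress rules yet, any request hits the default backend:
curl -I http://10.1.0.21/        # expect HTTP/1.1 404 Not Found
```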

This completes the Ingress setup.


Testing Ingress
Create the frontend service (the guestbook example from 《Kubernetes权威指南》).
Create frontend-controller.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 1
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: kubeguide/guestbook-php-frontend:latest
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80

Create frontend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: frontend
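Both frontend manifests can now be applied (a sketch; filenames as above, and the `kubeguide/guestbook-php-frontend` image must be pullable):

```shell
kubectl create -f frontend-controller.yaml
kubectl create -f frontend-service.yaml
# the RC's pod should appear shortly
kubectl get pods -l name=frontend
```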

View the Service:

[root@k8s-master ~]# kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
frontend     10.254.179.200   <nodes>       80:8821/TCP   1h

Create the forwarding rule, frontend-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
  - host: guestbook.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80

Note that servicePort here must be 80, the Service's own port, not the NodePort 8821; otherwise requests fail with a 503 error.
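The rule can then be applied (a sketch, assuming the filename above):

```shell
kubectl create -f frontend-ingress.yaml
kubectl describe ing frontend-ingress
```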
View the ingress:

[root@k8s-master ingress]# kubectl get ing
NAME                HOSTS                  ADDRESS   PORTS     AGE
frontend-ingress    guestbook.test.com             80        1h

Now forward guestbook.test.com from the public network to the node running the ingress controller, and the guestbook page appears.
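Before public DNS is in place, the rule can be exercised directly against the node by forcing the Host header (10.1.0.21 is the node from the controller's nodeSelector):

```shell
# matches the guestbook.test.com rule and reaches the frontend Service
curl -H "Host: guestbook.test.com" http://10.1.0.21/
# without a matching Host header the request falls through to the
# default backend and returns 404
curl -I http://10.1.0.21/
```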
Exec into the nginx-ingress pod and inspect nginx.conf:

[root@k8s-master ingress]# kubectl get pods -n=kube-system | grep ingress
nginx-ingress-controller-1894093054-835nx   1/1       Running   0          3h
[root@k8s-master ingress]# kubectl exec -it nginx-ingress-controller-1894093054-835nx bash -n=kube-system
root@nginx-ingress-controller-1894093054-835nx:/# cat /etc/nginx/nginx.conf

    ...
    upstream default-frontend-80 {
        least_conn;
        server 172.30.29.7:80 max_fails=0 fail_timeout=0;
    }
    ...
    server {
        server_name guestbook.test.com;
        listen 80;
        listen [::]:80;

        location / {
        ...
        }
    }
    ...

Updating the Ingress
Scale frontend: change replicas in frontend-controller.yaml from 1 to 3, apply it, and re-apply frontend-ingress.yaml (the controller also watches endpoints, so it usually reloads on its own):

[root@k8s-master frontend]#  kubectl apply -f frontend-controller.yaml
replicationcontroller "frontend" configured
[root@k8s-master frontend]# kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
frontend-40c70             1/1       Running   0          7m
frontend-4m67t             1/1       Running   0          53m
frontend-xv1ck             1/1       Running   0          36s
[root@k8s-master ingress]# kubectl apply -f frontend-ingress.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
ingress "frontend-ingress" configured

Check how the upstream in nginx.conf changed:

    upstream default-frontend-80 {
        least_conn;
        server 172.30.29.7:80 max_fails=0 fail_timeout=0;
        server 172.30.95.5:80 max_fails=0 fail_timeout=0;
        server 172.30.95.6:80 max_fails=0 fail_timeout=0;
    }