Exposing services on Alibaba Cloud via nginx-ingress-controller
This article draws on the Alibaba Cloud documentation at https://developer.aliyun.com/article/721569 and cross-checks it against an actually deployed Kubernetes cluster, walking through how different applications running in the cluster are reached from the public internet under different domain names.
Key idea:
The nginx-ingress-controller is essentially an application deployed in the Kubernetes cluster that plays the role of an nginx instance. This nginx application accepts all requests arriving for the various domain names, then forwards each request to the corresponding Service according to the rules defined by the individual Ingress resources. The chain traced below, object by object, is: DNS A record → LoadBalancer IP → Service nginx-ingress-lb → Deployment/Pod nginx-ingress-controller → Ingress rule → backend Service.
- First, look at the nginx-ingress Service:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-06-04T11:49:59Z'
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
  resourceVersion: '147529875'
  selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-lb
  uid: 6106e164-7f80-11e8-b128-00163e10ac41
spec:
  clusterIP: 10.1.1.135
  externalTrafficPolicy: Local
  healthCheckNodePort: 31999
  ports:
  - name: http
    nodePort: 30613
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31070
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xxx.xxx
Explanation: this Service is exposed with type LoadBalancer, which makes it reachable from the public internet; the LoadBalancer's actual IP address is xx.xx.xxx.xxx. Suppose the cluster runs two applications, A and B, which should be reachable at a.com and b.com respectively. Then the A records of both domains are simply pointed at that same LoadBalancer IP, xx.xx.xxx.xxx.
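As a quick check, the external IP that those DNS records should point at can be read straight off the Service. A command sketch follows, with illustrative output reconstructed from the values in the YAML above (the EXTERNAL-IP column carries the LoadBalancer address):
kubectl get svc -n kube-system nginx-ingress-lb
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-lb   LoadBalancer   10.1.1.135   xx.xx.xxx.xxx   80:30613/TCP,443:31070/TCP   ...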
- Step 1 above only created a Service and declared which application it selects. In the running cluster, the following command shows the corresponding Deployment:
kubectl get deploy -n kube-system -l app=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-ingress-controller 1/1 1 1 2y24d
Here is the Deployment's YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    component.revision: v5
    component.version: v0.22.0
    deployment.kubernetes.io/revision: '2'
  creationTimestamp: '2018-07-04T11:49:59Z'
  generation: 2
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: kube-system
  resourceVersion: '169291104'
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/nginx-ingress-controller
  uid: 610c6dac-7f80-11e8-b128-00163e10ac41
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        scheduler.alpha.kubernetes.io/critical-pod: ''
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - /nginx-ingress-controller
        - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
        - '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
        - '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
        - '--annotations-prefix=nginx.ingress.kubernetes.io'
        - '--publish-service=$(POD_NAMESPACE)/nginx-ingress-lb'
        - '--v=2'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: >-
          registry-vpc.cn-beijing.aliyuncs.com/acs/aliyun-ingress-controller:v0.22.0.5-552e0db-aliyun
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/sh
        - '-c'
        - |
          mount -o remount rw /proc/sys
          sysctl -w net.core.somaxconn=65535
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
          sysctl -w fs.file-max=1048576
          sysctl -w fs.inotify.max_user_instances=16384
          sysctl -w fs.inotify.max_user_watches=524288
          sysctl -w fs.inotify.max_queued_events=16384
        image: 'registry-vpc.cn-beijing.aliyuncs.com/acs/busybox:v1.29.2'
        imagePullPolicy: IfNotPresent
        name: init-sysctl
        resources: {}
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            drop:
            - ALL
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-controller
      serviceAccountName: nginx-ingress-controller
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /etc/localtime
          type: File
        name: localtime
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: '2018-07-04T11:51:09Z'
    lastUpdateTime: '2018-07-04T11:51:09Z'
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: 'True'
    type: Available
  - lastTransitionTime: '2018-07-04T11:51:09Z'
    lastUpdateTime: '2020-04-01T22:02:46Z'
    message: ReplicaSet "nginx-ingress-controller-5fc8b968d6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: 'True'
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
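One argument in the Deployment above is worth calling out: --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb tells the controller to publish the address of the nginx-ingress-lb Service from step 1 as the status address of every Ingress it manages, which is why the Ingress shown later in step 4 reports the same xx.xx.xxx.xxx in its status. As a sketch, it can be read back for the admin-ingress from step 4 with:
kubectl get ingress admin-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'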
- Going one step further, this command shows the Pod behind the Deployment:
kubectl get pod -n kube-system -l app=ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-5fc8b968d6-s8gmc 1/1 Running 1 78d
Exec into the Pod and inspect /etc/nginx/nginx.conf: the domain names configured in the Ingress resources show up in nginx.conf. Seeing this makes the point concrete: nginx-ingress-controller really is, in effect, an nginx instance whose configuration is generated from the Ingress objects.
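For example (a sketch, using the Pod name from the listing above; the Ingress hosts appear in the generated config as server_name entries):
kubectl exec -n kube-system nginx-ingress-controller-5fc8b968d6-s8gmc -- grep 'server_name' /etc/nginx/nginx.conf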
- Finally, look at the YAML of one Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-weight: ''
  creationTimestamp: '2019-09-25T07:01:52Z'
  generation: 3
  name: admin-ingress
  namespace: default
  resourceVersion: '169280718'
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/admin-ingress
  uid: e15d1ea1-c090-11e8-b6f2-00163e109a9e
spec:
  rules:
  - host: admin.com
    http:
      paths:
      - backend:
          serviceName: service-admin
          servicePort: 80
        path: /
  tls:
  - hosts:
    - admin.com
    secretName: admin.com
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xxx.xxx
Explanation: this Ingress forwards requests whose domain is admin.com to the Service named service-admin. The domain admin.com is exactly what shows up in the nginx.conf inspected in step 3.
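To expose a second application, say the one behind b.com from the earlier example, another Ingress of the same shape would be added. A minimal sketch follows; the names b-ingress and service-b are hypothetical, and the API version matches the extensions/v1beta1 used above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: b-ingress                   # hypothetical name
  namespace: default
spec:
  rules:
  - host: b.com                     # second domain, also resolving to the LoadBalancer IP
    http:
      paths:
      - backend:
          serviceName: service-b    # hypothetical backend Service
          servicePort: 80
        path: /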
That completes the walkthrough. To summarize: the different domain names all resolve to the same LoadBalancer IP, call it ip-a. The nginx-ingress-controller's Service is of type LoadBalancer and owns that ip-a, so requests for all of the domains land on the nginx-ingress-controller Service. The Pod behind that Service then applies the rules defined by each Ingress and forwards requests for the different domains to their respective backend Services.
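An end-to-end check from outside the cluster can be done with curl, pinning the domain to the LoadBalancer IP so it works even before the DNS records are updated (a sketch; admin.com and xx.xx.xxx.xxx are the values from the example above, and -k skips certificate verification):
curl --resolve admin.com:443:xx.xx.xxx.xxx https://admin.com/ -k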