Kubernetes External Services: Ingress
Service plays a role in two respects:

| Scope | Role |
| --- | --- |
| Inside the cluster | Continuously tracks Pod changes and updates the Pod objects in the Endpoints; this is a service-discovery mechanism built on the constantly changing Pod IP addresses |
| Outside the cluster | Acts like a load balancer, forwarding traffic (IP + port) to the Pods; it does not route by URL (HTTP/HTTPS) |
In Kubernetes, Pod IP addresses and a Service's ClusterIP are usable only inside the cluster network and are invisible to applications outside it. To let external applications reach in-cluster services, Kubernetes currently offers the following options:
| Option | Description |
| --- | --- |
| NodePort | Exposes the Service on the node network. Behind NodePort sits kube-proxy, the bridge between the Service network, the Pod network, and the node network. Fine for test environments, but with dozens or hundreds of services running in the cluster, NodePort port management becomes a disaster: each port can serve only one service, and the default port range is limited to 30000-32767. |
| LoadBalancer | Maps the Service to a LoadBalancer address supplied by a cloud provider. Only applicable when the cluster runs on a public-cloud platform; it is platform-dependent, and a cloud LoadBalancer usually costs extra. Once the Service is submitted, Kubernetes calls the CloudProvider to create a load balancer on the public cloud and configures the proxied Pods' IP addresses as its backends. |
| externalIPs | A Service may be assigned external IPs. If those external IPs route to one or more nodes in the cluster, the Service is exposed on them; traffic entering the cluster through an external IP is routed to the Service's Endpoints. |
| Ingress | With just one (or a few) public IPs and load balancers, many HTTP services can be exposed to the outside at once: a layer-7 reverse proxy. Think of it as the "Service of Services": a set of rules, keyed by domain name and URL path, that forward user requests to one or more Services. |
| Method | Recap |
| --- | --- |
| NodePort | Traffic flows container port -> Service port -> NodePort. Setting a NodePort opens the same port on every node, in the range 30000-32767; access is via node IP + a 30000-32767 port, load-balanced across Pods (see the sketch after this table). |
| LoadBalancer | A Service type on cloud platforms; the platform supplies the load balancer's IP address. |
| externalIPs | Domain-to-external-IP mapping. |
| Ingress | Maps by domain name: forwards URL (HTTP/HTTPS) requests to a Service, which forwards them to each Pod. Ingress needs only one or a few public IPs or LBs to expose many HTTP services externally: a layer-7 reverse proxy. The "Service of Services" is a set of domain and URL-path rules that forward requests to one or more Services, i.e. layer 7 proxies down to layer 4 and finally to the Pods: ingress -> service -> nginx (pod). |
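As a minimal sketch of the NodePort flow recapped above (all names and the port below are illustrative, not taken from this walkthrough):

```bash
# Minimal NodePort sketch: containerPort -> Service port -> nodePort.
# After applying, the app answers on <any-node-ip>:30080.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo           # must match the target Pods' labels
  ports:
  - port: 8080          # ClusterIP (Service) port
    targetPort: 80      # container port
    nodePort: 30080     # must fall within 30000-32767
EOF
```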
Ingress components

An Ingress is an API object configured through a YAML file. Its job is to define the rules for forwarding requests to Services, i.e. a routing template. Ingress exposes in-cluster Services over HTTP and HTTPS, giving a Service an external URL plus load balancing and SSL/TLS (HTTPS) termination, which makes domain-based load balancing possible. Ingress relies on an ingress-controller to actually implement all of the above.
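A minimal Ingress template, sketched with placeholder names (the networking.k8s.io/v1 API matches the manifests used later in this post):

```bash
# Skeleton of the rule: "this host + this path -> that Service:port".
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: www.example.com        # placeholder domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc       # placeholder Service name
            port:
              number: 80
EOF
```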
Ingress-controller
The program that actually implements the reverse proxying and load balancing: it parses the rules defined in Ingress objects and forwards traffic accordingly. The ingress-controller is not a built-in Kubernetes component; it is an umbrella term. Open-source options include nginx ingress controller and traefik, both of which are ingress-controllers; users can pick whichever implementation they prefer. At present, the only ingress-controllers maintained by the Kubernetes project are Google Cloud's GCE controller and ingress-nginx; many others are maintained by third parties (see the official documentation). Whichever controller you choose, the mechanism is much the same; only the concrete configuration differs. An ingress-controller typically runs as a Pod containing a daemon process and a reverse-proxy program. The daemon continuously watches the cluster for changes, generates configuration from the Ingress objects, and applies it to the proxy. ingress-nginx, for example, dynamically renders the nginx configuration, updates the upstreams, and reloads nginx when needed to apply the new configuration.
How Ingress-Nginx works
1. The ingress-controller interacts with the Kubernetes APIServer to dynamically sense changes to Ingress rules in the cluster.
2. It then reads them and, following the user-defined rules (which state which domain maps to which Service), generates a snippet of nginx configuration.
3. That configuration is written into the nginx-ingress-controller Pod, which runs an nginx server; the controller writes the generated config into /etc/nginx/nginx.conf.
4. Finally nginx is reloaded so the configuration takes effect. This yields per-domain routing and dynamic updates.
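A quick way to watch this in action, sketched under the assumption that the controller Pod carries the labels used in the manifest later in this post:

```bash
# Fetch the controller Pod's name, then peek at the nginx config it rendered;
# each Ingress host should appear as a server_name entry.
POD=$(kubectl -n ingress-nginx get pod \
      -l app.kubernetes.io/name=ingress-nginx \
      -o jsonpath='{.items[0].metadata.name}')
kubectl -n ingress-nginx exec "$POD" -- grep 'server_name' /etc/nginx/nginx.conf
```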
What an Ingress resource defines
| # | Item |
| --- | --- |
| 1 | Routing rules for external traffic |
| 2 | How a service is exposed: hostname, access path, and other options |
| 3 | Load balancing (performed by the ingress-controller) |
Ways Ingress exposes services
| Mode | Description |
| --- | --- |
| Deployment + LoadBalancer | For Ingress deployed on a public cloud. The controller's Service manifest sets type: LoadBalancer; the cloud platform then automatically creates a load balancer for that Service and binds a public IP to it. Pointing a domain at that public IP exposes the cluster externally. |
| DaemonSet + HostNetwork + nodeSelector | The DaemonSet creates one Pod on each selected node. hostNetwork: the Pod shares the host's network namespace, so its containers use the host's IP and ports directly and can access host network resources. nodeSelector: chooses the nodes that run nginx-ingress-controller by label. Drawback: because it occupies the host's network and ports, only one ingress-controller Pod fits per node. Best performance; well suited to high-concurrency production environments. |
| Deployment + NodePort | The controller is reached through a NodePort Service: NodePort -> controller -> Ingress rules -> Service -> Pod. Exposing via NodePort is the simplest method, but it adds an extra NAT (address translation) hop, so performance suffers somewhat under heavy concurrency. |
Two modes are demonstrated hands-on below: DaemonSet+HostNetwork+nodeSelector, then Deployment+NodePort.

DaemonSet+HostNetwork+nodeSelector mode
[root@master01 ~]# cd /opt
[root@master01 opt]# mkdir ingress
[root@master01 opt]# cd ingress/
[root@master01 ingress]# wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/mandatory.yaml
[root@master01 ingress]# ls
mandatory.yaml
[root@master01 ingress]# vim mandatory.yaml
188 ---
189
190 apiVersion: apps/v1
191 #kind: Deployment
192 kind: DaemonSet
193 metadata:
194   name: nginx-ingress-controller
195   namespace: ingress-nginx
196   labels:
197     app.kubernetes.io/name: ingress-nginx
198     app.kubernetes.io/part-of: ingress-nginx
199 spec:
200 #  replicas: 1
201   selector:
202     matchLabels:
203       app.kubernetes.io/name: ingress-nginx
204       app.kubernetes.io/part-of: ingress-nginx
205   template:
206     metadata:
207       labels:
208         app.kubernetes.io/name: ingress-nginx
209         app.kubernetes.io/part-of: ingress-nginx
210       annotations:
211         prometheus.io/port: "10254"
212         prometheus.io/scrape: "true"
213     spec:
214       # wait up to five minutes for the drain of connections
215       hostNetwork: true
216       terminationGracePeriodSeconds: 300
217       serviceAccountName: nginx-ingress-serviceaccount
218       nodeSelector:
219         #kubernetes.io/os: linux
220         test1: "true"
221       containers:
222         - name: nginx-ingress-controller
223           image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
[root@master01 ingress]# tar -xf ingree.contro-0.30.0.tar.gz
[root@master01 ingress]# docker load -i ingree.contro-0.30.0.tar
[root@master01 ingress]# kubectl label nodes node02 test1=true
node/node02 labeled
This labels node02 so the ingress controller will be deployed there.
[root@master01 ingress]# kubectl apply -f mandatory.yaml
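Before checking ports on node02, it is worth confirming the DaemonSet Pod actually landed on the labeled node:

```bash
# The controller Pod should be Running on node02 (the node labeled test1=true)
kubectl get pods -n ingress-nginx -o wide
```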
[root@node02 opt]# netstat -antp | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 7677/nginx: master
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN 7677/nginx: master
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN
[root@master01 ingress]# kubectl delete -f service-nginx.yaml
First clean out anything left over from before.
[root@master01 ingress]# vim /etc/hosts
192.168.233.81 master01
192.168.233.82 node01
192.168.233.83 node02 www.test1.com www.test2.com
192.168.233.84 hub.test.com
[root@master01 ingress]# vim service-nginx.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        volumeMounts:
        - name: nfs-pvc
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx1
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
  - host: www.test1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc
            port:
              number: 80
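The walkthrough does not show the apply step here; presumably the manifest is applied and the Ingress checked along these lines:

```bash
kubectl apply -f service-nginx.yaml
kubectl get ingress        # nginx-app-ingress should list host www.test1.com
```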
[root@master01 ingress]# vim service-nginx1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app2
  labels:
    app: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        volumeMounts:
        - name: nfs-pvc1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs-pvc1
        persistentVolumeClaim:
          claimName: nfs-pvc1
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc2
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress2
spec:
  rules:
  - host: www.test2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc2
            port:
              number: 80
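Again the apply step is implied; something like:

```bash
kubectl apply -f service-nginx1.yaml
kubectl get ingress        # both www.test1.com and www.test2.com rules should now exist
```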
[root@k8s4 opt]# cd k8s/
[root@k8s4 k8s]# ls
default-nfs-pvc-pvc-00a414eb-2d9e-483b-b519-0a75009ec4ff
[root@k8s4 k8s]# cd default-nfs-pvc-pvc-00a414eb-2d9e-483b-b519-0a75009ec4ff/
[root@k8s4 default-nfs-pvc-pvc-00a414eb-2d9e-483b-b519-0a75009ec4ff]# echo 123 > index.html
[root@k8s4 default-nfs-pvc-pvc-00a414eb-2d9e-483b-b519-0a75009ec4ff]# cat index.html
123
[root@k8s4 default-nfs-pvc-pvc-00a414eb-2d9e-483b-b519-0a75009ec4ff]# cd ..
[root@k8s4 k8s]# ls
default-nfs-pvc1-pvc-df312601-40ea-42b6-945d-c0689a4d1d6c
default-nfs-pvc-pvc-00a414eb-2d9e-483b-b519-0a75009ec4ff
[root@k8s4 k8s]# cd default-nfs-pvc1-pvc-df312601-40ea-42b6-945d-c0689a4d1d6c/
[root@k8s4 default-nfs-pvc1-pvc-df312601-40ea-42b6-945d-c0689a4d1d6c]# echo 666 > index.html
[root@k8s4 default-nfs-pvc1-pvc-df312601-40ea-42b6-945d-c0689a4d1d6c]# cat index.html
666
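With hostNetwork the controller answers on node02's port 80 directly, so given the /etc/hosts entries added earlier, the test (not shown in the original) would be:

```bash
curl www.test1.com    # expect: 123
curl www.test2.com    # expect: 666
```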
Deployment+NodePort mode

On master01:
vim mandatory.yaml
# switch back to Deployment mode by changing these lines:
191 kind: Deployment
200   replicas: 1
215       #hostNetwork: true
219         kubernetes.io/os: linux
220         #test1: "true"
kubectl apply -f mandatory.yaml
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
# fetch the NodePort Service manifest
vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# Applying this manifest creates a Service in the ingress-nginx namespace.
# All requests to the controller enter through this Service's NodePort
# and are then forwarded on to your own Services' Pods.
kubectl apply -f service-nodeport.yaml
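Check which NodePorts were assigned; the HTTP one (31456 later in this walkthrough) is what external clients must use:

```bash
# The PORT(S) column shows mappings such as 80:31456/TCP and 443:3xxxx/TCP
kubectl get svc -n ingress-nginx
```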
vim nodeport.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app2
  labels:
    app: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        volumeMounts:
        - name: nfs-pvc2
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs-pvc2
        persistentVolumeClaim:
          claimName: nfs-pvc2
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc1
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
  - host: www.test2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc1
            port:
              number: 80
kubectl apply -f nodeport.yaml
On k8s5 (the NFS server):
Locate the mounted directory and create a test page:
echo 123 > index.html
On master01:
vim /etc/hosts
20.0.0.32 master01
20.0.0.34 node01
20.0.0.35 node02 www.test1.com www.test2.com
20.0.0.36 hub.test.com k8s5 www.test1.com
curl www.test2.com:31456
Experiment complete!
HTTP proxying via virtual hosts
vim pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
  labels:
    test: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      test: nginx1
  template:
    metadata:
      labels:
        test: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    test: nginx1
vim pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment2
  labels:
    test2: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      test2: nginx2
  template:
    metadata:
      labels:
        test2: nginx2
    spec:
      containers:
      - name: nginx2
        image: nginx:1.22
---
apiVersion: v1
kind: Service
metadata:
  name: svc-2
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    test2: nginx2
vim pod-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
  - host: www.test1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-1
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
spec:
  rules:
  - host: www.test2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-2
            port:
              number: 80
kubectl apply -f pod1.yaml
kubectl apply -f pod2.yaml
kubectl apply -f pod-ingress.yaml
vim /etc/hosts
20.0.0.32 master01
20.0.0.34 node01 www.test1.com www.test2.com
20.0.0.35 node02 www.test1.com www.test2.com
20.0.0.36 hub.test.com k8s5 www.test1.com
curl www.test1.com:31456
curl www.test2.com:31456
Access succeeds; experiment complete!
HTTPS proxying with Ingress

Create a certificate and private key (self-signed, custom-made here); store the key material in a Secret, and reference the Secret when deploying so it is available to the Pods.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
#req: the certificate-generation command
#-x509: produce a self-signed X.509 certificate
#-sha256: use the SHA-256 hash algorithm
#-nodes: leave the generated private key unencrypted
#-days 365: the certificate is valid for 365 days
#-newkey rsa:2048: generate a new 2048-bit RSA key pair
#-keyout tls.key: write the private key to tls.key
#-out tls.crt: write the certificate to tls.crt
#-subj "/CN=nginxsvc/O=nginxsvc": set the subject
#CN: common name
#O: organization
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
#create a Secret holding the key and the certificate
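A couple of optional sanity checks on the generated pair:

```bash
kubectl describe secret tls-secret                 # should show tls.crt and tls.key
openssl x509 -in tls.crt -noout -subject -dates    # inspect subject and validity
```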
vim ingress-https.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-https
  labels:
    app: https
spec:
  replicas: 3
  selector:
    matchLabels:
      app: https
  template:
    metadata:
      labels:
        app: https
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: https
---
# Ingress with the TLS key and certificate
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-https
spec:
  tls:
  - hosts:
    - www.123ccc.com
    secretName: tls-secret
    # the TLS configuration is stored in the Ingress:
    # request -> ingress-controller -> ingress -> forwarded to the Service
    # the point is to pass TLS verification first, and only then forward
    # the request to the Pods behind the Service
  rules:
  - host: www.123ccc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc   # which Service to use
            port:
              number: 80      # which Service port to use
kubectl apply -f ingress-https.yaml
kubectl get svc -n ingress-nginx -o wide
curl -k https://www.123ccc.com:<NodePort>
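If www.123ccc.com is not mapped in /etc/hosts, curl's --resolve can pin the name to a node IP; the IP and port below are illustrative:

```bash
# 32000 stands in for the HTTPS NodePort from `kubectl get svc -n ingress-nginx`
curl -k --resolve www.123ccc.com:32000:20.0.0.34 https://www.123ccc.com:32000
```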
Traefik ingress controller demo. First, a custom nginx.conf for the backend Pods:
[root@master01 k2s]# vim nginx.conf
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    server {
        listen       88;
        server_name  localhost;
        #access_log  /var/log/nginx/host.access.log  main;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
}
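The DaemonSet below mounts a ConfigMap named nginx-conn; its creation is omitted in the original and was presumably done from this nginx.conf, along these lines:

```bash
# Assumption: the ConfigMap wraps the file edited above
kubectl create configmap nginx-conn --from-file=nginx.conf
```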
[root@master01 k2s]# wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-ds.yaml
[root@master01 k2s]# wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-rbac.yaml
[root@master01 k2s]# wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/ui.yaml
[root@master01 k2s]# kubectl apply -f traefik-rbac.yaml
[root@master01 k2s]# kubectl apply -f traefik-ds.yaml
[root@master01 k2s]# kubectl apply -f ui.yaml
[root@master01 k2s]# kubectl get svc -n kube-system
[root@master01 k2s]# kubectl describe nodes master01 | grep -i taints
#downloaded files:
[root@master01 k2s]# ls
1.18.yaml a.yaml nginx.conf service.yaml traefik-ds.yaml traefik-rbac.yaml ui.yaml
[root@master01 k2s]# vim a.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-traefik
  labels:
    nginx: traefik
spec:
  selector:
    matchLabels:
      nginx: traefik
  template:
    metadata:
      labels:
        nginx: traefik
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        ports:
        - containerPort: 86
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-conn
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-traefik-svc1
spec:
  ports:
  - port: 86
    targetPort: 86
    protocol: TCP
  selector:
    nginx: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-traefik-test1
spec:
  rules:
  - host: www.123ccc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-traefik-svc1
            port:
              number: 86
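The apply step is again implied before checking the dashboard:

```bash
kubectl apply -f a.yaml
kubectl get pods -o wide    # the nginx-traefik DaemonSet Pods should be Running
```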
Open the Traefik dashboard in a browser: http://192.168.233.82:8080/
[root@master01 k2s]# kubectl edit cm nginx-conn
Change the nginx listen port to 80 inside the ConfigMap.
[root@master01 k2s]# curl www.123ccc.com
Traefik ingress controller
Traefik is an HTTP reverse proxy and load balancer created to make deploying microservices fast. Its defining trait: traefik was designed to interact with the Kubernetes API in real time, so it senses changes to backend Services and Pods and can update and reload its configuration automatically.

Example: if nginx inside a Pod moves from port 80 to 8081, the change is detected automatically; this is automatic reloading. Its edge over (nginx-)ingress is precisely this automatic awareness of backend changes.
Traefik deployment modes
| Mode | Trade-offs |
| --- | --- |
| DaemonSet | Pros: one traefik per node; with node awareness it can automatically discover and update container configuration, no manual reload needed. Cons: resource usage; in a large cluster a DaemonSet may run many traefik instances, especially on nodes that don't need many containers, and there is no way to scale in or out. DaemonSet mode generally serves externally facing clusters, where the exposed services change often and the DaemonSet is better at automatically picking up configuration changes. |
| Deployment | Pros: centralized control; a small number of instances can handle the whole cluster's traffic, and it is easier to upgrade and maintain. Cons: the load is not spread evenly across nodes, updates are manual, and it cannot sense configuration changes inside containers. Deployment generally serves internally facing clusters, which are relatively stable with few updates and changes. |
Tag the two tiers with labels, e.g. traffic-type: internal and traffic-type: external for externally facing services.

| Comparison | nginx-ingress vs. traefik-ingress |
| --- | --- |
| Differences | No major differences. |
| How they work | Both are fundamentally layer-7 proxies; both can update configuration dynamically and discover services automatically. nginx-ingress: the most commonly seen option, relatively slower to reload. traefik-ingress: reloads faster and more conveniently on automatic updates, but its concurrency capacity is only about 60% of nginx-ingress's. |
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-deployment.yaml
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-rbac.yaml
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-ds.yaml
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/ui.yaml
Summary: Ingress / nginx-ingress-controller
- Deployment+LoadBalancer: the public cloud provides the load balancer's public address.
- DaemonSet+HostNetwork+nodeSelector: shares the node's network; only one controller Pod per node; uses the host's ports; best performance, suited to high concurrency.
- Deployment+NodePort: the most common, most used, and simplest method; performance is weaker because of the extra NAT (address translation) hop.

traefik-ingress-controller
- DaemonSet: for externally facing clusters; can update container configuration automatically; uses the host node's network.
- Deployment: for internally facing clusters; cannot update configuration automatically; uses NodePort.
HTTPS: encrypted and authenticated access.