Kubernetes: Service (5)
1. Service basic concepts
- A Service can be seen as the external access point for a group of Pods that provide the same service. With a Service, applications get service discovery and load balancing with little effort.
- By default a Service only provides layer-4 load balancing; it has no layer-7 capability (which can be added with Ingress).
Service types:
- ClusterIP: the default; a virtual IP that Kubernetes assigns to the Service, reachable only from inside the cluster.
- NodePort: exposes the Service on a port of each node; a request to any NodeIP:NodePort is routed to the ClusterIP.
- LoadBalancer: builds on NodePort; a cloud provider provisions an external load balancer that forwards requests to NodeIP:NodePort. This mode is only usable on cloud platforms.
- ExternalName: forwards the Service to a given domain name via a DNS CNAME record (set with spec.externalName).
A Service is implemented by the kube-proxy component together with iptables.
When kube-proxy handles Services via iptables, it has to maintain a great many iptables rules on the host; with a large number of Pods, constantly refreshing those rules consumes a lot of CPU.
IPVS-mode Services allow Kubernetes to scale to far more Pods.
2. Switching the mode to IPVS
1. Enable IPVS mode in kube-proxy:
yum install -y ipvsadm #install on all nodes
kubectl edit cm kube-proxy -n kube-system #set mode: "ipvs"
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}' #recreate the kube-proxy pods
Once ipvsadm is installed, the ip_vs kernel modules are present.
Check the IPVS rules:
Each node now carries the VIP.
The Service can now be accessed from inside the cluster; a quick verification is sketched below.
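Two commands that show this (a sketch; ipvsadm must already be installed, and kube-ipvs0 is the dummy interface kube-proxy creates in IPVS mode):
ipvsadm -ln #list the IPVS virtual servers and their Pod backends
ip addr show kube-ipvs0 #every Service VIP is bound to this interface on each node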
Kubernetes provides dynamic DNS resolution: after deleting the Service (kubectl delete -f service.yml) and creating it again, the VIP on every node changes accordingly.
3. Headless services and external access
3.1 Headless service
A headless service is not assigned a VIP; instead, DNS records resolve directly to the addresses of the Pods it fronts.
Domain-name format: $(servicename).$(namespace).svc.cluster.local
[root@node1 manifest]# cat service.yml
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: myapp
clusterIP: None #with this line, the Service gets no ClusterIP
[root@node1 manifest]# kubectl apply -f service.yml
service/myservice created
[root@node1 manifest]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
myservice ClusterIP None <none> 80/TCP 5s
Change the image in pod2.yml from myapp:v2 to v1, re-run kubectl apply -f pod2.yml, and watch the DNS records update.
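One way to watch the headless records is to query the cluster DNS directly (a sketch; 10.96.0.10 is the default kubeadm cluster-DNS address, adjust to your cluster):
dig +short myservice.default.svc.cluster.local @10.96.0.10 #returns the Pod IPs directly, since there is no ClusterIP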
3.2 LoadBalancer
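The post shows no manifest for this type; a minimal sketch (the name is a placeholder; on bare metal the EXTERNAL-IP stays pending unless something like MetalLB provides the load balancer):
kind: Service
apiVersion: v1
metadata:
  name: lb-service #placeholder name
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: myapp
  type: LoadBalancer #a cloud provider provisions the external load balancer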
3.3 External access, method three: ExternalName
[root@node1 manifest]# cat service.yml
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: myapp
type: ExternalName
externalName: www.westos.org
[root@node1 manifest]# kubectl apply -f service.yml
[root@node1 manifest]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d1h
myservice ExternalName <none> www.westos.org 80/TCP 63m #check the DNS server's IP below
[root@node1 manifest]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-bd97f9cd9-8kvmd 1/1 Running 5 3d1h 10.244.1.44 node2 <none> <none> #DNS-pod1
coredns-bd97f9cd9-sck57 1/1 Running 5 3d1h 10.244.2.52 node3 <none> <none> #DNS-pod2
etcd-node1 1/1 Running 6 3d1h 172.25.26.1 node1 <none> <none>
kube-apiserver-node1 1/1 Running 6 3d1h 172.25.26.1 node1 <none> <none>
kube-controller-manager-node1 1/1 Running 16 3d1h 172.25.26.1 node1 <none> <none>
kube-flannel-ds-amd64-2x8l8 1/1 Running 6 3d 172.25.26.2 node2 <none> <none>
kube-flannel-ds-amd64-bsgm5 1/1 Running 5 3d 172.25.26.3 node3 <none> <none>
kube-flannel-ds-amd64-r7ltg 1/1 Running 7 3d 172.25.26.1 node1 <none> <none>
kube-proxy-h87zv 1/1 Running 2 5h7m 172.25.26.1 node1 <none> <none>
kube-proxy-l4p44 1/1 Running 0 5h8m 172.25.26.3 node3 <none> <none>
kube-proxy-pghfl 1/1 Running 0 5h8m 172.25.26.2 node2 <none> <none>
kube-scheduler-node1 1/1 Running 17 3d1h 172.25.26.1 node1 <none> <none>
The Service is published as a DNS CNAME alias record pointing to www.westos.org.
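A sketch of verifying the alias, querying one of the coredns Pod IPs listed above (assumed reachable from the node):
dig +short myservice.default.svc.cluster.local CNAME @10.244.1.44 #expected answer: www.westos.org.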
4. Calico
Example 1: Calico with IPIP off
1. Download from the official site: https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Pull the images required by calico.yaml (four of them) locally, then push them to the private registry.
2. Create a new project in the private registry, a public repository: calico
3. Pull the images ahead of time:
docker pull calico/cni:v3.14.1
docker pull calico/pod2daemon-flexvol:v3.14.1
docker pull calico/kube-controllers:v3.14.1
docker pull calico/node:v3.14.1
Re-tag them:
for i in `docker images | grep calico|awk '{print($1":"$2)}'`;do docker tag $i reg.westos.org/$i;done
Push them:
for i in ` docker images | grep reg.westos.org/calico|awk '{print($1":"$2)}'`;do docker push $i;done
4. To avoid interference from the kube-flannel plugin, delete it:
[root@node1 manifest]# kubectl delete -f kube-flannel.yml
5. After deleting it, also remove the files under /etc/cni/net.d/ (back them up somewhere else):
[root@node1 ~]# cd /etc/cni/net.d/
[root@node1 net.d]# ls
10-flannel.conflist
[root@node1 net.d]# mv 10-flannel.conflist /mnt
[root@node1 net.d]# ls
[root@node1 net.d]#
Move the file away on every node.
6. Set IPIP to off in calico.yaml, as sketched below.
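The setting lives in the env section of the calico-node container (a sketch of the relevant fragment; "Off" or "Never" disables the IPIP tunnel so Calico routes with pure BGP):
- name: CALICO_IPV4POOL_IPIP
  value: "Off" #the shipped default is "Always"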
7. Apply it:
[root@node1 manifest]# kubectl apply -f calico.yaml
[root@node1 manifest]# kubectl get pod -n kube-system #all Running
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-76d4774d89-szd62 1/1 Running 2 80m
calico-node-4h5p5 1/1 Running 5 13h
calico-node-cnrkm 1/1 Running 5 13h
calico-node-ct75k 1/1 Running 2 13h
coredns-bd97f9cd9-5qsnt 1/1 Running 1 163m
coredns-bd97f9cd9-bgvtm 1/1 Running 1 163m
etcd-node1 1/1 Running 8 3d16h
kube-apiserver-node1 1/1 Running 9 3d16h
kube-controller-manager-node1 1/1 Running 43 28m
kube-proxy-h87zv 1/1 Running 6 20h
kube-proxy-l4p44 1/1 Running 4 20h
kube-proxy-pghfl 1/1 Running 4 20h
kube-scheduler-node1 1/1 Running 41 3d16h
After the switch, test that the service is still reachable.
Example 2: Calico with IPIP Always
[root@node1 manifest]# kubectl delete -f calico.yaml
[root@node1 manifest]# kubectl apply -f calico.yaml #create it again
Check that everything is normal.
ip a #check the addresses; an extra tunl0 tunnel interface appears
Test again: no more stalls. In flannel mode, access to certain nodes would stall.
Example 3: flannel with Type set to host-gw, as sketched below
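The change goes into the net-conf.json data in kube-flannel.yml (a sketch; assumes the default 10.244.0.0/16 Pod network, with the Type field previously "vxlan"):
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
With host-gw, flannel installs plain routes on each node instead of encapsulating traffic, so the nodes must share a layer-2 network.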
[root@node1 manifest]# kubectl delete -f calico.yaml
[root@node1 manifest]# kubectl apply -f kube-flannel.yml
[root@node1 ~]# kubectl get pod -n kube-system #check the status
NAME READY STATUS RESTARTS AGE
coredns-bd97f9cd9-5qsnt 1/1 Running 1 5h55m
coredns-bd97f9cd9-bgvtm 1/1 Running 1 5h54m
etcd-node1 1/1 Running 8 3d19h
kube-apiserver-node1 1/1 Running 9 3d19h
kube-controller-manager-node1 1/1 Running 43 3h39m
kube-flannel-ds-amd64-9cjw4 1/1 Running 0 37m
kube-flannel-ds-amd64-x56sd 1/1 Running 0 37m
kube-flannel-ds-amd64-zbmlv 1/1 Running 0 37m
kube-proxy-h87zv 1/1 Running 6 23h
kube-proxy-l4p44 1/1 Running 4 23h
kube-proxy-pghfl 1/1 Running 4 23h
kube-scheduler-node1 1/1 Running 41 3d19h
5. The Ingress service
- Ingress is the Kubernetes service that provides global load balancing in front of different backend Services.
- Ingress consists of two parts: the Ingress controller and the Ingress objects.
- The Ingress controller provides proxying according to the Ingress objects you define. The common reverse-proxy projects, such as Nginx, Envoy, and Traefik, all maintain dedicated Ingress controllers for Kubernetes.
1. Download the deploy.yaml file from the ingress-nginx project:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
Check which images the file requires, pull them ahead of time, and push them to the private registry:
[root@reg ~]# docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0 #required image 1
[root@reg ~]# docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0 reg.westos.org/library/nginx-ingress-controller:0.33.0
[root@reg ~]# docker push reg.westos.org/library/nginx-ingress-controller:0.33.0
[root@reg ~]# docker pull docker.io/jettech/kube-webhook-certgen:v1.2.0 #required image 2
[root@reg harbor]# docker tag jettech/kube-webhook-certgen:v1.2.0 reg.westos.org/library/kube-webhook-certgen:v1.2.0
[root@reg harbor]# docker push reg.westos.org/library/kube-webhook-certgen:v1.2.0
Point the image references in the downloaded deploy.yaml at the local registry; one possible approach is sketched below.
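A hypothetical way to rewrite the references (verify the exact image strings first, since they vary between deploy.yaml versions):
grep "image:" deploy.yaml #confirm the image references before editing
sed -i 's#quay.io/kubernetes-ingress-controller/nginx-ingress-controller#reg.westos.org/library/nginx-ingress-controller#' deploy.yaml
sed -i 's#docker.io/jettech/kube-webhook-certgen#reg.westos.org/library/kube-webhook-certgen#' deploy.yaml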
2. Run it on the master node:
[root@node1 manifest]# kubectl apply -f deploy.yaml
[root@node1 manifest]# kubectl get namespace
NAME STATUS AGE
default Active 3d21h
ingress-nginx Active 7s #the newly added namespace
kube-node-lease Active 3d21h
kube-public Active 3d21h
kube-system Active 3d21h
[root@node1 manifest]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.110.37.207 <none> 80:30552/TCP,443:31110/TCP 26s
ingress-nginx-controller-admission ClusterIP 10.99.106.33 <none> 443/TCP 27s
External access used to be: user -> svc -> pod.
Now it is: user -> ingress-svc -> pod;
and Ingress itself works internally as: ingress -> svc -> pod.
Try accessing from a client.
Paste in the manifest below, adapted from:
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/
[root@node1 manifest]# cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-myservicea
annotations:
# use the shared ingress-nginx
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: www1.westos.org
http:
paths:
- path: /
backend:
serviceName: myservice
servicePort: 80
[root@node1 manifest]# kubectl apply -f ingress.yml
ingress.networking.k8s.io/ingress-myservicea created
[root@node1 manifest]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-myservicea <none> www1.westos.org 172.25.26.2 80 90s
Test from a client: add a local hosts entry, and include the port number when accessing (sketched below).
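A sketch of the client-side test, using the NodePort 30552 and the ADDRESS 172.25.26.2 reported above (yours will differ):
echo "172.25.26.2 www1.westos.org" >> /etc/hosts #local name resolution
curl www1.westos.org:30552 #the HTTP NodePort of ingress-nginx-controller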
Although www1.westos.org points at node2, the pod doing the actual work is ingress-nginx-controller-77b5fc5746-hxsps; exec into it to take a look:
[root@node1 manifest]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-77b5fc5746-hxsps -- sh
/etc/nginx $ ls
fastcgi.conf koi-utf modsecurity owasp-modsecurity-crs uwsgi_params.default
fastcgi.conf.default koi-win modules scgi_params win-utf
fastcgi_params lua nginx.conf scgi_params.default
fastcgi_params.default mime.types nginx.conf.default template
geoip mime.types.default opentracing.json uwsgi_params
/etc/nginx $ vi nginx.conf #all the rules are rendered into this file
2. Routing different hostnames to different backend Pods:
www1.westos.org -> myservice
www2.westos.org -> myservice2
[root@node1 manifest]# cat service.yml
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: myapp
type: NodePort
---
kind: Service
apiVersion: v1
metadata:
name: myservice2
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: myapp2
type: NodePort
[root@node1 manifest]# kubectl apply -f service.yml
[root@node1 manifest]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h
myservice NodePort 10.97.2.132 <none> 80:31148/TCP 24h
myservice2 NodePort 10.103.204.5 <none> 80:30640/TCP 9m12s
[root@node1 manifest]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-v1
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-v2
spec:
replicas: 2
selector:
matchLabels:
app: myapp2
template:
metadata:
labels:
app: myapp2
spec:
containers:
- name: myapp2
image: myapp:v2
[root@node1 manifest]# kubectl apply -f deployment.yml
[root@node1 manifest]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-v1-7449b5b68f-28wz8 1/1 Running 0 2m1s
deployment-v1-7449b5b68f-hm24k 1/1 Running 0 118s
deployment-v2-6589799486-6f655 1/1 Running 0 4m
deployment-v2-6589799486-8dmf6 1/1 Running 0 4m
[root@node1 manifest]# cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-myservicea
annotations:
# use the shared ingress-nginx
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: www1.westos.org
http:
paths:
- path: /
        backend:
          serviceName: myservice
          servicePort: 80
  - host: www2.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice2
          servicePort: 80
[root@node1 manifest]# kubectl apply -f ingress.yml
Test from a client: point www2.westos.org at any node in the local hosts file, then test (sketched below):
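A sketch, again going through the controller's HTTP NodePort (30552 from earlier; any node IP works for the hosts entry):
echo "172.25.26.3 www2.westos.org" >> /etc/hosts
curl www1.westos.org:30552 #answered by the myapp:v1 pods behind myservice
curl www2.westos.org:30552 #answered by the myapp:v2 pods behind myservice2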
3. DaemonSet combined with nodeSelector
Use a DaemonSet together with a nodeSelector to deploy the ingress-controller onto specific nodes, then use hostNetwork to wire the pod directly into the node's network, so the service is reachable straight on the host's ports 80/443.
The advantage is the simplest possible request path, with better performance than NodePort mode.
The drawback is that, because it occupies the host's network and ports directly, only one ingress-controller pod can run per node.
It is well suited to high-concurrency production environments.
1. Make the following changes to deploy.yaml (sketched below):
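The original post showed this edit as a screenshot; a sketch of the usual changes to the controller manifest (the nodeSelector label is an assumption, pick your own):
kind: DaemonSet #changed from Deployment; drop the replicas/strategy fields
spec:
  template:
    spec:
      hostNetwork: true #bind the controller straight to the node's 80/443
      nodeSelector:
        ingress: "true" #hypothetical label; apply with: kubectl label node node3 ingress=true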
2. Run it:
[root@node1 manifest]# kubectl apply -f deploy.yaml
The previous deployment/ingress-nginx-controller is deleted and replaced by the DaemonSet.
The ports are now open on node3.
Test:
From the client, curl www1.westos.org is served normally.