Enterprise Kubernetes in Practice (Part 5): Service
I. Service Overview
A Service is a stable access point in front of a group of pods that provide the same application. With a Service, clients get service discovery and load balancing without having to track individual pod IPs.
By default a Service only provides layer-4 (TCP/UDP) load balancing; layer-7 features require an Ingress.
Service types:
- ClusterIP: the default. Kubernetes assigns the service a virtual IP that is reachable only from inside the cluster.
- NodePort: exposes the service on a port of every node; a request to any NodeIP:nodePort is routed to the ClusterIP.
- LoadBalancer: builds on NodePort and asks the cloud provider to create an external load balancer that forwards to NodeIP:NodePort. This type normally requires a cloud platform.
- ExternalName: maps the service to an external domain name via a DNS CNAME record (set with spec.externalName).
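As a minimal sketch of the default type (the service name here is illustrative, not from the article), a ClusterIP service selecting pods labeled app: nginx might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc        # illustrative name
spec:
  selector:
    app: nginx             # pods backing the service
  ports:
  - protocol: TCP
    port: 80               # port the service listens on (cluster IP)
    targetPort: 80         # port on the pods traffic is forwarded to
```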
II. Accessing a Service from Outside the Cluster
1. NodePort
A NodePort service is exposed on a chosen port of every node; a request to any NodeIP:nodePort is routed to the ClusterIP.
Edit mysvc and switch it to NodePort mode:
ports:
- nodePort: 30175
  port: 80
  protocol: TCP
  targetPort: 80
selector:
  app: nginx
sessionAffinity: None
type: NodePort
[root@server1 ~]# kubectl edit svc mysvc
Edit cancelled, no changes made.
Check the listening port and test access from each node:
[root@server1 ~]# netstat -antlp| grep 30175
tcp 0 0 0.0.0.0:30175 0.0.0.0:* LISTEN 5221/kube-proxy
[root@server1 ~]# curl 172.25.3.1:30175
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 172.25.3.1:30175
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 172.25.3.2:30175
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 172.25.3.4:30175
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Check mysvc; the open port mapping and the cluster IP are visible:
[root@server1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d16h
mysvc NodePort 10.96.97.61 <none> 80:30175/TCP 17h
Access via the cluster IP:
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
2. LoadBalancer
The second way to reach a Service from outside, intended for Kubernetes running on a public cloud: declare a Service of type LoadBalancer.
- Load balancing itself is done by the service.
- The external IP is allocated by the cloud provider.
Once the service is submitted, Kubernetes calls the cloud provider to create an external load balancer and configures the proxied pods' IPs as its backends. On bare metal, MetalLB can take the cloud provider's role, which is what this walkthrough does.
Change the kube-proxy configuration to set strictARP: true (required by MetalLB's layer-2 mode when kube-proxy runs in IPVS mode):
[root@server1 ~]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
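The edit above happens inside the kube-proxy ConfigMap; the relevant fragment of its embedded KubeProxyConfiguration looks roughly like this (only the two lines under ipvs matter here):

```yaml
# Fragment of "kubectl edit cm kube-proxy -n kube-system";
# MetalLB layer-2 mode needs strict ARP so nodes do not answer
# ARP requests for IPs bound to the kube-ipvs0 dummy interface.
ipvs:
  strictARP: true
```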
[root@server1 ~]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-g2f8d" deleted
pod "kube-proxy-k84lt" deleted
pod "kube-proxy-l8d5b" deleted
Push the MetalLB images to the local Harbor registry, then deploy MetalLB:
[root@server1 metallb]# kubectl apply -f metallb.yaml
namespace/metallb-system created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
role.rbac.authorization.k8s.io/controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
rolebinding.rbac.authorization.k8s.io/controller created
daemonset.apps/speaker created
deployment.apps/controller created
[root@server1 metallb]# kubectl -n metallb-system get all
NAME READY STATUS RESTARTS AGE
pod/controller-674f4b76b8-vtwht 1/1 Running 0 14s
pod/speaker-2s8hn 0/1 CreateContainerConfigError 0 15s
pod/speaker-m72pz 0/1 CreateContainerConfigError 0 15s
pod/speaker-xjsc8 0/1 CreateContainerConfigError 0 15s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 3 3 0 3 0 kubernetes.io/os=linux 15s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 15s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-674f4b76b8 1 1 1 14s
[root@server1 metallb]# kubectl -n metallb-system get secrets
NAME TYPE DATA AGE
controller-token-rfwng kubernetes.io/service-account-token 3 49s
default-token-k7pcz kubernetes.io/service-account-token 3 49s
memberlist Opaque 1 45s
speaker-token-wjsmf kubernetes.io/service-account-token 3 49s
[root@server1 metallb]# kubectl -n metallb-system get pod
NAME READY STATUS RESTARTS AGE
controller-674f4b76b8-vtwht 1/1 Running 0 90s
speaker-2s8hn 1/1 Running 0 91s
speaker-m72pz 1/1 Running 0 91s
speaker-xjsc8 1/1 Running 0 91s
Configure MetalLB with the address pool it may allocate from:
[root@server1 metallb]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.25.3.10-172.25.3.20
Apply the configuration:
[root@server1 metallb]# kubectl apply -f configmap.yaml
configmap/config created
[root@server1 metallb]# kubectl get cm
NAME DATA AGE
kube-root-ca.crt 1 3d17h
[root@server1 metallb]# kubectl get cm -n metallb-system
NAME DATA AGE
config 1 21s
kube-root-ca.crt 1 2m17s
Create the service, note the allocated external IP, and access it:
[root@server1 metallb]# vim lb-svr.yml
[root@server1 metallb]# kubectl apply -f lb-svr.yml
service/lb-svc created
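The contents of lb-svr.yml are not shown in the transcript; a plausible sketch, assuming the same app: nginx selector used elsewhere in this series, would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-svc
spec:
  type: LoadBalancer       # MetalLB assigns an IP from the configured pool
  selector:
    app: nginx             # assumed selector, matching the nginx deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

This matches the output below: lb-svc gets EXTERNAL-IP 172.25.3.10, the first address in the pool.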
[root@server1 metallb]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d17h
lb-svc LoadBalancer 10.100.47.19 172.25.3.10 80:30238/TCP 9s
mysvc NodePort 10.96.97.61 <none> 80:30175/TCP 18h
nginx-svc ClusterIP None <none> 80/TCP 58m
[root@server1 metallb]# curl 172.25.3.10
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 metallb]# curl 172.25.3.10/hostname.html
nginx-deployment-6456d7c676-ldq7x
[root@server1 metallb]# curl 172.25.3.10/hostname.html
nginx-deployment-6456d7c676-rlhkv
[root@server1 metallb]# curl 172.25.3.10/hostname.html
nginx-deployment-6456d7c676-7zhl5
[root@server1 metallb]# curl 172.25.3.10/hostname.html
nginx-deployment-6456d7c676-ldq7x
3. ExternalName
The third way is an ExternalName service, which maps a service name to an external domain:
[root@server1 metallb]# cat host.yml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: test.westos.org
[root@server1 metallb]# kubectl apply -f host.yml
service/my-service created
[root@server1 metallb]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d18h
my-service ExternalName <none> test.westos.org <none> 8s
Use dig to inspect the resolution; the service name resolves to a CNAME:
[root@server1 metallb]# dig -t A my-service.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A my-service.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15259
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;my-service.default.svc.cluster.local. IN A
;; ANSWER SECTION:
my-service.default.svc.cluster.local. 30 IN CNAME test.westos.org.
;; Query time: 2001 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Jul 28 00:17:26 EDT 2021
;; MSG SIZE rcvd: 130
A service can also be assigned a public IP directly via externalIPs. Note that externalIPs only provides an externally reachable address; if no pod serves the corresponding port, no content can be retrieved, so the service must still select real pods.
externalIPs format:
$ vim ex-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ex-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 172.25.0.100
III. IPVS Mode
- A Service is implemented by the kube-proxy component together with iptables.
- When kube-proxy handles services through iptables, it must maintain a large number of iptables rules on each host; with many pods, constantly refreshing these rules consumes significant CPU.
- In IPVS mode, the cluster can support a much larger number of pods.
Enable IPVS mode in kube-proxy
Install ipvsadm and the IPVS kernel modules, then switch the proxy mode:
[root@server1 ~]# kubectl -n kube-system get cm
NAME DATA AGE
coredns 1 2d23h
extension-apiserver-authentication 6 2d23h
kube-flannel-cfg 2 2d22h
kube-proxy 2 2d23h
kube-root-ca.crt 1 2d23h
kubeadm-config 2 2d23h
kubelet-config-1.21 1 2d23h
[root@server1 ~]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
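The edit above switches the proxy mode; the relevant fragment of the kube-proxy ConfigMap (inside its embedded KubeProxyConfiguration) is just:

```yaml
# Fragment of "kubectl edit cm kube-proxy -n kube-system":
# an empty mode ("") means the default, iptables; set it to ipvs.
mode: "ipvs"
```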
[root@server1 ~]# lsmod | grep ip_vs
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133095 10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
Recreate the kube-proxy pods so they pick up the new configuration:
[root@server1 ~]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-p78dh" deleted
pod "kube-proxy-rpqxv" deleted
pod "kube-proxy-tgvkq" deleted
Apply the manifests to create the deployment and the service; mysvc is assigned cluster IP 10.96.97.61:
[root@server1 ~]# kubectl apply -f deployment.yml
deployment.apps/nginx-deployment configured
[root@server1 ~]# vim deployment.yml
[root@server1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6456d7c676-7zhl5 1/1 Running 0 32s
nginx-deployment-6456d7c676-ldq7x 1/1 Running 0 33s
nginx-deployment-6456d7c676-rlhkv 1/1 Running 0 35s
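The deployment.yml applied above is not shown in the transcript; a sketch consistent with the pod names and responses (the image name is an assumption, inferred from the "Hello MyApp | Version: v1" output) would be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx          # label the services select on
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1   # assumed image; serves the "Hello MyApp" page
        ports:
        - containerPort: 80
```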
[root@server1 ~]# kubectl apply -f svc.yml
service/mysvc created
[root@server1 ~]# kubectl describe svc mysvc
Name: mysvc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.97.61
IPs: 10.96.97.61
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.28:80,10.244.1.29:80,10.244.2.35:80
Session Affinity: None
Events: <none>
In IPVS mode, after a service is created kube-proxy adds a dummy network interface on each host, kube-ipvs0, and assigns the service IP to it:
9: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 2a:78:08:d8:21:10 brd ff:ff:ff:ff:ff:ff
inet 10.96.97.61/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
kube-proxy uses the Linux kernel's IPVS module to schedule requests across the service's pods (round-robin, rr), forwarding traffic with far less rule-processing overhead than iptables:
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# curl 10.96.97.61
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.97.61:80 rr
-> 10.244.1.28:80 Masq 1 0 3
-> 10.244.1.29:80 Masq 1 0 4
-> 10.244.2.35:80 Masq 1 0 4
IV. kube-dns
Kubernetes ships a built-in DNS service (CoreDNS) so that pods and services can be reached by name.
The related pods and service:
[root@server1 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-7777df944c-brss6 1/1 Running 5 3d16h
coredns-7777df944c-mzblw 1/1 Running 5 3d16h
[root@server1 ~]# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3d16h
Create an interactive pod and use nslookup to query DNS records; for example, mysvc resolves to its assigned cluster IP:
[root@server1 ~]# kubectl run demo --image=busyboxplus -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ # nslookup mysvc
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: mysvc
Address 1: 10.96.97.61 mysvc.default.svc.cluster.local
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # nslookup mysvc.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: mysvc.default
Address 1: 10.96.97.61 mysvc.default.svc.cluster.local
Inspect kube-dns: DNS itself is served on port 53, metrics on port 9153, and the endpoints are the CoreDNS pods at 10.244.0.12 and 10.244.0.13:
[root@server1 ~]# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3d16h
[root@server1 ~]# kubectl -n kube-system describe svc kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=CoreDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.0.10
IPs: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.12:53,10.244.0.13:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.12:53,10.244.0.13:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.244.0.12:9153,10.244.0.13:9153
Session Affinity: None
Events: <none>
View the CoreDNS pods' details, such as their IPs:
[root@server1 ~]# kubectl -n kube-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7777df944c-brss6 1/1 Running 5 3d16h 10.244.0.12 server1 <none> <none>
coredns-7777df944c-mzblw 1/1 Running 5 3d16h 10.244.0.13 server1 <none> <none>
V. Headless Service
A "headless" service:
- is not assigned a virtual IP; instead, DNS resolves the service name directly to the IPs of the proxied pods;
- has the domain format $(servicename).$(namespace).svc.cluster.local;
- still resolves correctly after the pods are replaced by a rolling update.
Create a headless service (clusterIP: None):
[root@server1 ~]# cat svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
[root@server1 ~]# cat headless.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx
  clusterIP: None
Check the service; its DNS name is nginx-svc and it has no cluster IP:
[root@server1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d16h
nginx-svc ClusterIP None <none> 80/TCP 14s
Enter an interactive pod, check the resolution, and access the pods by name:
[root@server1 ~]# kubectl run demo --image=busyboxplus -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx-svc
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-svc
Address 1: 10.244.1.32 10-244-1-32.nginx-svc.default.svc.cluster.local
Address 2: 10.244.2.37 10-244-2-37.mysvc.default.svc.cluster.local
Address 3: 10.244.1.33 10-244-1-33.mysvc.default.svc.cluster.local
/ # curl nginx-svc
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl nginx-svc/hostname.html
nginx-deployment-6456d7c676-ldq7x
/ # curl nginx-svc/hostname.html
nginx-deployment-6456d7c676-7zhl5
/ # curl nginx-svc/hostname.html
nginx-deployment-6456d7c676-ldq7x
/ # curl nginx-svc/hostname.html
nginx-deployment-6456d7c676-7zhl5
/ # curl nginx-svc/hostname.html
nginx-deployment-6456d7c676-rlhkv
/ # curl nginx-svc/hostname.html
nginx-deployment-6456d7c676-7zhl5
/ #
/ # exit
Use dig to inspect the resolution: nginx-svc resolves directly to the pod IPs, and requests are balanced across them:
[root@server1 ~]# dig -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40562
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx-svc.default.svc.cluster.local. IN A
;; ANSWER SECTION:
nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.33
nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.32
nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.37
;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Jul 27 22:38:39 EDT 2021
;; MSG SIZE rcvd: 217