K8S-Service
What problem does Service solve?
As we all know, Pods have a lifecycle, and each Pod has its own IP address. Once the container process inside a Pod dies, the Pod's lifecycle ends, and the controller that created the Pod automatically starts a replacement replica Pod. At that point the new Pod's IP has changed.
So how can we make sure clients can still reach the service this Pod provides? How do we keep upstream services unaffected by Pod churn?
This is exactly the problem the Service concept solves.
1: Introduction to Service
A Service is also a Kubernetes resource. It is a policy for accessing a logical group of Pods, and it associates that logical group of Pods through a label selector.
A Service provides load balancing, but only at layer 4 (TCP/UDP); it has no layer-7 capabilities.
2: Service Types
1. ClusterIP: the default type. A virtual IP reachable only from inside the cluster is allocated automatically; the built-in `kubernetes` Service that fronts the API server is an example of this type.
2. NodePort
Allows access to the cluster from outside. On top of ClusterIP, the Service binds a port on every node, so the service can be reached via any <NodeIP>:<NodePort>.
3. LoadBalancer
On top of NodePort, provisions an external load balancer that forwards requests to the NodePorts.
4. ExternalName
Maps an external service into the cluster so it can be used directly from inside; no proxy of any kind is created. This requires kube-dns on Kubernetes 1.7 or later.
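As a sketch, an ExternalName Service can be declared like the fragment below; the Service name `my-db` and the external hostname are hypothetical, not taken from this article:

```
apiVersion: v1
kind: Service
metadata:
  name: my-db                      # hypothetical name
  namespace: default
spec:
  type: ExternalName
  externalName: db.example.com     # external hostname mapped into the cluster's DNS
```

Inside the cluster, resolving `my-db.default.svc.cluster.local` would then return a CNAME pointing at `db.example.com`.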
3: Service Proxy Modes: userspace, iptables, and ipvs
1. userspace proxy mode: used in early Kubernetes versions (before v1.2); kube-proxy runs as a process in user space.
Workflow: when Pods on different nodes communicate, the client Pod first sends traffic to the Service IP; the kernel hands the packet to the user-space kube-proxy process, which picks a backend and sends it back through the Service IP, where iptables rules dispatch it to the target Pod.
Because every packet crosses between kernel space and user space, this mode is very inefficient.
2. iptables proxy mode: kube-proxy only installs iptables rules; traffic to the Service IP is redirected to a backend Pod entirely in kernel space, without passing through any user-space process.
3. ipvs proxy mode:
When Pods inside the cluster access each other, the request first reaches the Service IP in kernel space, and IPVS rules dispatch it directly to one of the Pods the Service manages.
When a new Pod is created, the change is recorded through kube-apiserver into etcd; kube-proxy watches the API server for such changes and promptly adds the IPVS rules needed to reach the new Pod.
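On kubeadm-style clusters the proxy mode is usually selected in the kube-proxy configuration; a minimal sketch of the relevant fragment (assuming a kubeadm setup, where this lives in the `kube-proxy` ConfigMap in the `kube-system` namespace):

```
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"     # "" or "iptables" selects iptables mode; "ipvs" requires the ip_vs kernel modules to be loaded
```

After changing the mode, the kube-proxy Pods must be restarted for it to take effect.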
4: Creating and Using a Service
ClusterIP type
[root@k8s-master manifests]# vim redis.svc.yaml    # edit the YAML file that defines the Service
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97    # cluster IP (optional; one is allocated automatically if omitted)
  type: ClusterIP           # Service type; ClusterIP is for in-cluster communication only
  ports:
  - port: 6379              # port the Service listens on
    targetPort: 6379        # port of the process inside the Pod's container
[root@k8s-master manifests]# kubectl apply -f redis.svc.yaml    # declaratively create the Service
[root@k8s-master manifests]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 48d
redis ClusterIP 10.97.97.97 <none> 6379/TCP 5d16h
Note: whenever we create a Service, a DNS record is registered for it in CoreDNS, named after the Service.
The default name follows the format svc_name.ns_name.svc.cluster.local
For example: redis.default.svc.cluster.local
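You can check that record from inside the cluster; a sketch, assuming a throwaway Pod with DNS tools (the `busybox:1.28` image and Pod name are assumptions, not from the original):

```
# run a temporary Pod and resolve the Service name from inside the cluster
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup redis.default.svc.cluster.local
```

The answer should contain the Service's cluster IP (10.97.97.97 in the example above).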
NodePort type
[root@k8s-master manifests]# vim redis.svc.nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis1
  namespace: default
spec:
  selector:
    app: myapp              # label selector that associates the backend Pods
  clusterIP: 10.99.99.99
  type: NodePort            # Service type NodePort
  ports:
  - port: 80                # port the Service listens on
    targetPort: 80          # port the Pod's container listens on
    nodePort: 30080         # port bound on every node
[root@k8s-master manifests]# kubectl apply -f redis.svc.nodeport.yaml
[root@k8s-master manifests]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 49d
redis ClusterIP 10.97.97.97 <none> 6379/TCP 5d20h
redis1 NodePort 10.99.99.99 <none> 80:30080/TCP 119s
[root@k8s-master manifests]# kubectl describe svc redis1    # view the Service's details
Name: redis1
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.99.99
IPs: 10.99.99.99
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 10.244.1.27:80,10.244.1.28:80,10.244.1.29:80 + 2 more...    # addresses of the associated Pods
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
[root@k8s-master manifests]# kubectl get pods --show-labels -l app=myapp    # these are the specific Pods the Service is associated with
NAME READY STATUS RESTARTS AGE LABELS
myapp-deploy-6f67f89d48-2dtcq 1/1 Running 0 12d app=myapp,pod-template-hash=6f67f89d48,release=canary
myapp-deploy-6f67f89d48-4kt4m 1/1 Running 0 12d app=myapp,pod-template-hash=6f67f89d48,release=canary
myapp-deploy-6f67f89d48-62txf 1/1 Running 0 12d app=myapp,pod-template-hash=6f67f89d48,release=canary
myapp-deploy-6f67f89d48-8xp84 1/1 Running 0 12d app=myapp,pod-template-hash=6f67f89d48,release=canary
myapp-deploy-6f67f89d48-mwx2t 1/1 Running 0 12d app=myapp,pod-template-hash=6f67f89d48,release=canary
Test: access from outside the cluster.
[root@git ~]# while true;do curl 10.21.41.42:30080/hostname.html;sleep 1;done    # accessing the Pods from outside the cluster; the NodePort Service load-balances the requests
myapp-deploy-6f67f89d48-2dtcq
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-8xp84
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-8xp84
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-4kt4m
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-2dtcq
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-4kt4m
myapp-deploy-6f67f89d48-2dtcq
5: How a Service Pins the Same Client IP to the Same Pod
[root@k8s-master manifests]# kubectl explain svc.spec    # view the fields a Service supports
sessionAffinity <string>    # pins requests from the same client to the same Pod
It supports two values: ClientIP (route the same client to the same Pod) and None (the default; requests are load-balanced across Pods).
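The same setting can also be written declaratively in the Service spec rather than patched; a sketch of the relevant fragment (the timeout value shown is an example, not from this article):

```
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # how long a client stays pinned to one Pod; 10800 (3 hours) is the default
```

The stickiness expires after `timeoutSeconds` of inactivity, after which the client may be dispatched to a different Pod.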
[root@k8s-master manifests]# kubectl patch svc redis1 -p '{"spec":{"sessionAffinity":"ClientIP"}}'    # add the field by patching the live object
service/redis1 patched
[root@k8s-master manifests]# kubectl describe svc redis1
Name: redis1
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.99.99
IPs: 10.99.99.99
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 10.244.1.27:80,10.244.1.28:80,10.244.1.29:80 + 2 more...
Session Affinity: ClientIP
External Traffic Policy: Cluster
Events: <none>
Scheduling result:
[root@git ~]# while true;do curl 10.21.41.42:30080/hostname.html;sleep 1;done
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
[root@k8s-master manifests]# kubectl patch svc redis1 -p '{"spec":{"sessionAffinity":"None"}}'    # set the value back to None
service/redis1 patched
[root@k8s-master manifests]# kubectl describe svc redis1
Name: redis1
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.99.99
IPs: 10.99.99.99
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 10.244.1.27:80,10.244.1.28:80,10.244.1.29:80 + 2 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
[root@git ~]# while true;do curl 10.21.41.42:30080/hostname.html;sleep 1;done    # with the value set back to None, requests are load-balanced again
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-62txf
myapp-deploy-6f67f89d48-8xp84
myapp-deploy-6f67f89d48-8xp84
myapp-deploy-6f67f89d48-2dtcq
myapp-deploy-6f67f89d48-mwx2t
myapp-deploy-6f67f89d48-2dtcq