Kubernetes (k8s) Service: Introduction, Usage, and Worked Examples
How a Service Exposes Ports
Why Services Exist
A Service solves the problem of Pods changing dynamically by providing a single, stable access point:
- Keeps Pods reachable: clients can always find the Pods that provide a given service
- Defines an access policy for a group of Pods
- A Service associates with its group of Pods via labels
- A Service load-balances across the group using iptables or IPVS
Service
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you do not need to modify your application to use an unfamiliar service-discovery mechanism. Kubernetes gives Pods their own IP addresses, provides a single DNS name for a set of Pods, and can load-balance across them.
Service Resources
A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them — often called a microservice. The set of Pods a Service targets is usually determined by a selector. For other ways of defining Service endpoints, see Services without selectors.
For example, consider an image-processing backend running 3 replicas. The replicas are interchangeable — the frontend does not care which backend replica it calls. The Pods that make up the backend may change over time, but the frontend clients should not need to know that, nor should they track the state of the backend set themselves.
The Service abstraction decouples the frontend from the backend.
Cloud-Native Service Discovery
If you want to use the Kubernetes API for service discovery in your application, you can query the API server for Endpoints resources, which are updated whenever the set of Pods in a Service changes.
For non-native applications, Kubernetes offers ways to place a network port or load balancer between your application and the backend Pods.
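What such an Endpoints object looks like, as a hedged sketch — the Service name matches the example later in this article, but the Pod IPs are illustrative; for a Service with a selector, Kubernetes creates and maintains this object automatically:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: myapp-nodeport      # must match the Service name
subsets:
- addresses:
  - ip: 10.244.1.17         # illustrative Pod IPs; updated as Pods come and go
  - ip: 10.244.2.23
  ports:
  - name: httpd
    port: 80
```

Watching this resource (rather than polling) is how cluster-aware clients keep an up-to-date backend list.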
Examples
ClusterIP
ClusterIP: the default type. Automatically allocates a virtual IP that is reachable only from within the cluster.
[root@master manifest]# cat network.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: v1
  template:
    metadata:
      labels:
        app: myapp
        release: v1
    spec:
      containers:
      - name: myapp
        image: hyhxy0206/apache:v1
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: v1
  ports:
  - name: httpd
    port: 80
    targetPort: 80
[root@master manifest]# kubectl apply -f network.yaml
deployment.apps/myapp-deploy created
service/myapp-nodeport created
[root@master manifest]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-65b8f7b7fd-6df4n 1/1 Running 0 2m59s
pod/myapp-deploy-65b8f7b7fd-8v2gv 1/1 Running 0 2m59s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-nodeport ClusterIP 10.100.149.133 <none> 80/TCP 2m59s
[root@master manifest]# curl 10.100.149.133
test page with v1
NodePort
NodePort: exposes the Service on each node's IP at a static port (the NodePort, in the range 30000-32767). It builds on ClusterIP: requests to <NodeIP>:<NodePort> are routed to the ClusterIP Service, so a NodePort Service can be reached from outside the cluster. The ClusterIP and routing rules are created automatically.
[root@master manifest]# cat network.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: v1
  template:
    metadata:
      labels:
        app: myapp
        release: v1
    spec:
      containers:
      - name: myapp
        image: hyhxy0206/apache:v1
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: NodePort          # Service type
  selector:
    app: myapp
    release: v1
  ports:
  - name: httpd
    port: 80
    targetPort: 80
    nodePort: 30001       # port exposed on every node
[root@master manifest]# kubectl apply -f network.yaml
deployment.apps/myapp-deploy created
service/myapp-nodeport created
[root@master manifest]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-65b8f7b7fd-mmwhk 1/1 Running 0 23s
pod/myapp-deploy-65b8f7b7fd-wttt4 1/1 Running 0 23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-nodeport NodePort 10.105.76.230 <none> 80:30001/TCP 23s
[root@master manifest]# curl 192.168.143.140:30001
test page with v1
LoadBalancer
LoadBalancer: uses a cloud provider's load balancer to expose the Service externally. The external load balancer routes to the automatically created NodePort and ClusterIP Services.
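The article gives no LoadBalancer example, so here is a minimal sketch; the Service name is hypothetical, and the selector reuses the labels from the earlier examples. Note that an EXTERNAL-IP is only provisioned on a cloud provider (or an on-premises substitute):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer   # hypothetical name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: myapp
    release: v1
  ports:
  - name: httpd
    port: 80        # port served by the load balancer
    targetPort: 80  # container port on the backend Pods
```

On a bare-metal cluster without a cloud controller the EXTERNAL-IP column stays `<pending>`; tools such as MetalLB can fill that role.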
ExternalName
ExternalName: creates a DNS alias (CNAME) that points to an external name, mainly to insulate clients from changes to that name. It must be used together with a DNS add-on (e.g. CoreDNS).
[root@master manifest]# vim network.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: v1
  template:
    metadata:
      labels:
        app: myapp
        release: v1
    spec:
      containers:
      - name: httpd
        image: hyhxy0206/httpd:v0.1   # CentOS-based Apache built from source
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-externalname
  namespace: default
spec:
  type: ExternalName      # Service type
  externalName: my.k8s.example.com
[root@master manifest]# kubectl apply -f network.yaml
deployment.apps/myapp-deploy created
service/myapp-externalname created
[root@master manifest]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-5c8c56f8fc-4cmp9 1/1 Running 0 49s
pod/myapp-deploy-5c8c56f8fc-z4lqx 1/1 Running 0 49s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-externalname ExternalName <none> my.k8s.example.com <none> 49s
// enter the container
[root@master manifest]# kubectl exec -it myapp-deploy-5c8c56f8fc-4cmp9 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.4# cd /usr/local/
sh-4.4# ls
apache apr-util etc include lib64 sbin src
apr bin games lib libexec share
// install DNS tools (bind-utils provides nslookup and dig)
sh-4.4# #yum install -y bind-utils
sh-4.4# nslookup myapp-externalname.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
myapp-externalname.default.svc.cluster.local canonical name = my.k8s.example.com.
** server can't find my.k8s.example.com: NXDOMAIN
// The CNAME for the Service resolves correctly; the NXDOMAIN is expected because my.k8s.example.com is a placeholder domain with no real DNS record.
// query via dig (note: `@10.96.0.10` is glued onto the name here, so dig treats the whole string as the query name; the correct form is `dig -t A myapp-externalname.default.svc.cluster.local @10.96.0.10`, which explains the SERVFAIL below)
sh-4.4# dig -t A myapp-externalname.default.svc.cluster.local.@10.96.0.10
; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> -t A myapp-externalname.default.svc.cluster.local.@10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 50466
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: c5b3afaa1d57af33 (echoed)
;; QUESTION SECTION:
;myapp-externalname.default.svc.cluster.local.\@10.96.0.10. IN A
;; Query time: 1014 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Dec 26 15:50:02 UTC 2021
;; MSG SIZE rcvd: 97
ExternalIP
[root@master manifest]# vim network.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: v1
  template:
    metadata:
      labels:
        app: myapp
        release: v1
    spec:
      containers:
      - name: httpd
        image: hyhxy0206/httpd:v0.1
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-externalip
  namespace: default
spec:
  selector:
    app: myapp
    release: v1
  ports:
  - name: httpd
    port: 80
    targetPort: 80
  externalIPs:
  - 10.0.0.240
[root@master manifest]# kubectl apply -f network.yaml
deployment.apps/myapp-deploy created
service/myapp-externalip created
[root@master manifest]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-5c8c56f8fc-ppgtc 1/1 Running 0 53s
pod/myapp-deploy-5c8c56f8fc-vz9nb 1/1 Running 0 53s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-externalip ClusterIP 10.111.117.175 10.0.0.240 80/TCP 53s
[root@master manifest]# curl 10.111.117.175
<html><body><h1>It works!</h1></body></html>
[root@master manifest]# curl 10.0.0.240
<html><body><h1>It works!</h1></body></html>
Service Proxy Modes
userspace proxy mode
In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoints objects. For each Service it opens a randomly chosen port on the local node. Any connection to this "proxy port" is proxied to one of the Service's backend Pods (as reported in Endpoints). Which backend Pod is used is decided by kube-proxy based on the Service's SessionAffinity setting.
Finally, kube-proxy installs iptables rules that capture traffic to the Service's clusterIP (a virtual IP) and Port and redirect it to the proxy port, which in turn proxies the request to a backend Pod.
By default, kube-proxy in userspace mode chooses a backend using a round-robin algorithm.
iptables proxy mode
In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoints objects. For each Service, it installs iptables rules that capture traffic to the Service's clusterIP and port, and redirect that traffic to one of the Service's backend Pods. For each Endpoints object, it installs iptables rules that select a backend Pod.
By default, kube-proxy in iptables mode chooses a backend at random.
Using iptables to handle traffic has lower system overhead, because the traffic is handled by Linux netfilter without the need to switch between user space and kernel space. This approach is also likely to be more reliable.
If kube-proxy is running in iptables mode and the first Pod selected does not respond, the connection fails. This differs from userspace mode: there, kube-proxy would detect that the connection to the first Pod had failed and automatically retry with a different backend Pod.
You can use Pod readiness probes to verify that backend Pods are working, so that kube-proxy in iptables mode only sees backends that test healthy. Doing this means you avoid having traffic sent via kube-proxy to a Pod that is known to have failed.
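The readiness-probe idea can be sketched as a container-spec fragment; the probe path and timings below are illustrative assumptions, not taken from the article. While the probe fails, the Pod's IP is removed from the Service's Endpoints:

```yaml
spec:
  containers:
  - name: myapp
    image: hyhxy0206/apache:v1
    ports:
    - containerPort: 80
    readinessProbe:           # Pod leaves the Service endpoints while this fails
      httpGet:
        path: /index.html     # illustrative path
        port: 80
      initialDelaySeconds: 3  # wait before the first probe
      periodSeconds: 5        # probe interval
```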
IPVS proxy mode
FEATURE STATE: Kubernetes v1.11 [stable]
In ipvs mode, kube-proxy watches Kubernetes Services and Endpoints, calls the netlink interface to create IPVS rules accordingly, and synchronizes IPVS rules with Services and Endpoints periodically. This control loop ensures that the IPVS state matches the desired state. When a Service is accessed, IPVS directs traffic to one of the backend Pods.
The IPVS proxy mode is based on netfilter hook functions similar to the iptables mode, but uses a hash table as the underlying data structure and works in kernel space. That means kube-proxy in IPVS mode redirects traffic with lower latency than kube-proxy in iptables mode, and with much better performance when synchronizing proxy rules. Compared to the other proxy modes, IPVS mode also supports higher network-traffic throughput.
IPVS provides more options for balancing traffic to backend Pods. These are:
- rr: Round-Robin
- lc: Least Connection (the backend with the fewest open connections is preferred)
- dh: Destination Hashing
- sh: Source Hashing
- sed: Shortest Expected Delay
- nq: Never Queue
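Selecting IPVS mode and one of the schedulers above is done through kube-proxy's configuration file; a minimal sketch (field names follow the kubeproxy.config.k8s.io/v1alpha1 API; the scheduler choice is illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"   # least-connection; an empty value defaults to rr
```

On a kubeadm cluster this configuration lives in the kube-proxy ConfigMap in the kube-system namespace; restart the kube-proxy Pods after editing it.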
Note:
To run kube-proxy in IPVS mode, IPVS must be made available on the node before starting kube-proxy.
When kube-proxy starts in IPVS proxy mode, it verifies that the IPVS kernel modules are available. If they are not detected, kube-proxy falls back to running in iptables proxy mode.
In all of these proxy modes, traffic bound for the Service's IP and Port is proxied to an appropriate backend without the client knowing anything about Kubernetes, Services, or Pods.
If you want to make sure that connections from a particular client are passed to the same Pod each time, set service.spec.sessionAffinity to "ClientIP" (the default is "None") to select session affinity based on the client's IP address. You can also set the maximum session sticky time via service.spec.sessionAffinityConfig.clientIP.timeoutSeconds (the default is 10800 seconds, i.e. 3 hours).
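The session-affinity settings just described, applied to the ClusterIP Service from the earlier example — a sketch in which only the sessionAffinity fields are new:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: v1
  sessionAffinity: ClientIP        # default is None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600         # max sticky time; default 10800 (3 hours)
  ports:
  - name: httpd
    port: 80
    targetPort: 80
```

With this in place, repeated requests from the same client IP land on the same backend Pod until the timeout elapses.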
k8s Practice Examples
# Rolling update and rollback
// build the apache image
[root@master apache]# cat Dockerfile
FROM busybox
RUN mkdir /data && \
echo "test page with v1" > /data/index.html
ENTRYPOINT ["/bin/httpd","-f","-h","/data"]
[root@master apache]# docker build -t hyhxy0206/apache:v1 .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM busybox
---> ffe9d497c324
Step 2/3 : RUN mkdir /data && echo "test page with v1" > /data/index.html
---> Running in 6ef5a9999027
Removing intermediate container 6ef5a9999027
---> 1f55d1956ec6
Step 3/3 : ENTRYPOINT ["/bin/httpd","-f","-h","/data"]
---> Running in 9e953c76320f
Removing intermediate container 9e953c76320f
---> 8171aabbee2d
Successfully built 8171aabbee2d
Successfully tagged hyhxy0206/apache:v1
[root@master apache]# vim Dockerfile
[root@master apache]# docker build -t hyhxy0206/apache:v2 .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM busybox
---> ffe9d497c324
Step 2/3 : RUN mkdir /data && echo "test page with v2" > /data/index.html
---> Running in 591e79d258a9
Removing intermediate container 591e79d258a9
---> 47cda9c6f56f
Step 3/3 : ENTRYPOINT ["/bin/httpd","-f","-h","/data"]
---> Running in 03aab99e17ce
Removing intermediate container 03aab99e17ce
---> 9731bddd2d85
Successfully built 9731bddd2d85
Successfully tagged hyhxy0206/apache:v2
[root@master apache]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hyhxy0206/apache v2 9731bddd2d85 About a minute ago 1.24MB
hyhxy0206/apache v1 8171aabbee2d About a minute ago 1.24MB
[root@master apache]# docker login
[root@master apache]# docker push hyhxy0206/apache:v1
[root@master apache]# docker push hyhxy0206/apache:v2
// run 3 Pods with the v1 image
[root@master ~]# kubectl create deployment web --image hyhxy0206/apache:v1 --replicas 3
deployment.apps/web created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-65bbbfdf58-7gfnr 0/1 ContainerCreating 0 6s
web-65bbbfdf58-k52hl 0/1 ContainerCreating 0 6s
web-65bbbfdf58-nf6tg 0/1 ContainerCreating 0 6s
[root@master ~]# kubectl expose deploy web --port 80 --target-port 80
service/web exposed
[root@master ~]# kubectl get svc,pods
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
service/web ClusterIP 10.106.145.169 <none> 80/TCP 13s
NAME READY STATUS RESTARTS AGE
pod/web-65bbbfdf58-7gfnr 1/1 Running 0 60s
pod/web-65bbbfdf58-k52hl 1/1 Running 0 60s
pod/web-65bbbfdf58-nf6tg 1/1 Running 0 60s
[root@master ~]# while :;do curl 10.106.145.169;done
// update the image to trigger a rolling upgrade
[root@master ~]# kubectl set image deploy/web apache=hyhxy0206/apache:v2
deployment.apps/web image updated
[root@master ~]# while :;do curl 10.101.67.172;done
test page with v2
test page with v2
test page with v2
test page with v2
test page with v2
test page with v2
// roll back to the previous revision
[root@master ~]# kubectl rollout undo deploy/web
deployment.apps/web rolled back
[root@master ~]# while :;do curl 10.101.67.172;done
test page with v1
test page with v1
test page with v1
test page with v1
test page with v1
test page with v1
// a second undo switches back to v2: `rollout undo` moves to the previous revision, which is now v2
[root@master ~]# kubectl rollout undo deploy/web
deployment.apps/web rolled back
[root@master ~]# while :;do curl 10.101.67.172;done
test page with v2
test page with v2
test page with v2
test page with v2
test page with v2
test page with v2
// scale the Deployment up by one replica
[root@master manifest]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 8m7s
web-64bbf59bf8-dm7c5 1/1 Running 0 9m23s
web-64bbf59bf8-gmp78 1/1 Running 0 9m23s
web-64bbf59bf8-kzx4p 1/1 Running 0 9m23s
web-64bbf59bf8-r4dlb 1/1 Running 0 2m36s
# One Pod running containers from different images
[root@master manifest]# cat redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: good
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
[root@master manifest]# kubectl apply -f redis.yaml
pod/test created
[root@master manifest]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 0/3 ContainerCreating 0 14s
[root@master manifest]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 3/3 Running 0 72s
[root@master manifest]# kubectl describe pods test
Name: test
Namespace: default
Priority: 0
Node: node2.example.com/192.168.143.142
Start Time: Mon, 27 Dec 2021 00:56:38 +0800
Labels: app=good
Annotations: <none>
Status: Running
IP: 10.244.2.116
IPs:
IP: 10.244.2.116
Containers:
nginx:
Container ID: docker://57fd71cacf1d8fecc6241d9c86ada46a4e4cea93f0a62db16b928bdb9a3a32f1
Image: nginx
Image ID: docker-pullable://nginx@sha256:366e9f1ddebdb844044c2fafd13b75271a9f620819370f8971220c2b330a9254
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 27 Dec 2021 00:56:42 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rbqzh (ro)
redis:
Container ID: docker://99d40d9c03f3097e9096224d3cea1a588e895fcd26355db455da7667632ae2f4
Image: redis
Image ID: docker-pullable://redis@sha256:db485f2e245b5b3329fdc7eff4eb00f913e09d8feb9ca720788059fdc2ed8339
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 27 Dec 2021 00:57:08 +0800
Ready: True
Restart Count: 0
# Create a Service for a Pod, reachable via ClusterIP and NodePort
[root@master manifest]# vim nodeport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      release: v1
  template:
    metadata:
      labels:
        app: myapp
        release: v1
    spec:
      containers:
      - name: myapp
        image: hyhxy0206/httpd:v0.1
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: v1
  ports:
  - name: httpd
    port: 80
    targetPort: 80
    nodePort: 31960
[root@master manifest]# kubectl apply -f nodeport.yaml
deployment.apps/myapp-deploy created
service/myapp-nodeport created
[root@master manifest]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-ccf9fc64b-9pjnb 1/1 Running 0 85s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-nodeport NodePort 10.99.165.44 <none> 80:31960/TCP 85s
[root@master manifest]# curl 10.99.165.44
<html><body><h1>It works!</h1></body></html>
[root@master manifest]# curl 192.168.143.140:31960
<html><body><h1>It works!</h1></body></html>
# Create a Deployment and a Service, then resolve the Service with nslookup from a CentOS container
[root@master manifest]# vim nslookup.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      release: v1
  template:
    metadata:
      labels:
        app: myapp
        release: v1
    spec:
      containers:
      - name: myapp
        image: hyhxy0206/httpd:v0.1
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-externalname
  namespace: default
spec:
  type: ExternalName
  externalName: web.test.example.com
[root@master manifest]# kubectl apply -f nslookup.yaml
deployment.apps/myapp-deploy created
service/myapp-externalname created
[root@master manifest]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-ccf9fc64b-lhntk 1/1 Running 0 23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-externalname ExternalName <none> web.test.example.com <none> 23s
// examine a busybox resource definition file
[root@master manifest]# cat busybox.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: b4
        image: busybox
        command: ["/bin/sh","-c","sleep 9000"]
[root@master manifest]# kubectl apply -f busybox.yaml
deployment.apps/xx created
[root@master manifest]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-ccf9fc64b-lhntk 1/1 Running 0 2m36s
pod/xx-758696cd47-6nt24 0/1 ContainerCreating 0 7s
pod/xx-758696cd47-jsq6p 0/1 ContainerCreating 0 7s
pod/xx-758696cd47-qp6mq 1/1 Running 0 7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-externalname ExternalName <none> web.test.example.com <none> 2m36s
[root@master manifest]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/myapp-deploy-ccf9fc64b-lhntk 1/1 Running 0 3m14s
pod/xx-758696cd47-6nt24 1/1 Running 0 45s
pod/xx-758696cd47-jsq6p 1/1 Running 0 45s
pod/xx-758696cd47-qp6mq 1/1 Running 0 45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
service/myapp-externalname ExternalName <none> web.test.example.com <none> 3m14s
// verify with an interactive exec (note: `kubectl exec` takes a Pod name, not a Deployment name, hence the NotFound error below)
[root@master manifest]# kubectl exec -it xx /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "xx" not found
[root@master manifest]# kubectl exec -it xx-758696cd47-6nt24 /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # nslookup myapp-externalname.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
myapp-externalname.default.svc.cluster.local canonical name = web.test.example.com
*** Can't find myapp-externalname.default.svc.cluster.local: No answer
// Expected: the CNAME to web.test.example.com is returned, but that placeholder domain has no real A record.