Service Mesh: Advanced Topics

1. Analyzing the BookInfo Example

We have already deployed BookInfo and demonstrated some of its features, which gave us a general sense of what Istio is and what problems it solves. Below we use the BookInfo example to clear up some common questions.

1.1. Why configure a gateway and a virtual service?

1.1.1. Concepts

First, we need the concepts of gateways and virtual services.

  • Gateway

A gateway is the boundary of the service mesh. It handles inbound and outbound HTTP and TCP traffic; services inside the mesh can only expose interfaces to the outside through a gateway, which makes them easier to govern. A gateway's configuration defines the ports to expose and transport-level settings (for example, whether to enable TLS).
BookInfo configures a gateway that exposes port 80 and binds no domain name at all (which is what allows access by plain IP address).

(Figure: the bookinfo-gateway definition. If the gateway's hosts field were bound to a specific domain instead, the mesh could only be entered through that domain.)
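
For contrast, here is a sketch of a hypothetical variant (not part of BookInfo) that binds the gateway to a domain and terminates TLS; the resource name, domain, and credentialName secret are placeholders:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway-tls      # hypothetical name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443                 # expose HTTPS instead of plain HTTP
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                # terminate TLS at the gateway
      credentialName: bookinfo-cert   # assumed Kubernetes TLS secret
    hosts:
    - "bookinfo.example.com"      # bound to a domain: requests must use this host

With hosts narrowed like this, a request is admitted only if its Host header (or SNI) matches the listed domain.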

  • Virtual service (VirtualService)

A virtual service configures how requests are routed to services within the mesh, building on the basic connectivity and service discovery provided by Istio and the platform. Each virtual service contains a set of routing rules that Istio evaluates in order, matching each given request to a concrete destination defined in the virtual service.

(Figure: an example VirtualService definition; its fields are explained below.)

hosts: the virtual service's hosts can be IP addresses, DNS names, or platform-dependent short names (for example the short name of a Kubernetes Service). A wildcard ("*") prefix can also be used to create one set of routing rules that matches all services.

  • Routing rules

The http field contains the virtual service's routing rules, describing match conditions and routing behavior. They send HTTP/1.1, HTTP/2, and gRPC traffic to the destinations named in the hosts field (tcp and tls sections can likewise define routing rules for TCP and unterminated TLS traffic).
The first routing rule in the example has a condition and therefore begins with a match field. We want this route applied to all requests from the user "jason", so the headers, end-user, and exact fields are used to select the appropriate requests.
The destination field of the route section specifies the actual target for traffic matching the condition. In this case it is a Kubernetes service name.

  • Route priority

Routing rules are selected in order from top to bottom: the first rule defined in a virtual service has the highest priority.
In this example, traffic that does not satisfy the first routing rule flows to a default destination given by the second rule. The second rule therefore has no match condition and simply directs traffic to the v3 subset; see the sketch after this list.
A gateway must be bound to a virtual service before it takes effect (bookinfo-gateway.yaml).
(Figure: the gateways field of a VirtualService binding it to a gateway; the full bookinfo example follows in 1.1.2.)
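
The two rules discussed above come from BookInfo's reviews virtual service (the sample file virtual-service-reviews-jason-v2-v3.yaml, reconstructed here for reference): requests from jason go to the v2 subset, everything else falls through to v3.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:                  # first rule: only requests whose end-user header is exactly "jason"
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                  # second rule: no match condition, the default destination
    - destination:
        host: reviews
        subset: v3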

1.1.2. IngressGateway

istio-ingressgateway is a service that starts as soon as Istio is installed. It acts as the edge gateway of the whole mesh: client requests enter the mesh through it. Standing a gateway at the edge is not just a unified proxy layer; it is also a single point of management for service interfaces.
Note that the IngressGateway Istio describes here is not a single application named IngressGateway; it is a generic term for the traffic-entry gateway. Its counterpart, the EgressGateway, is the unified proxy for outgoing traffic.
By default, Envoy carries no routing or forwarding rules at all; rules only take effect after being configured centrally through Pilot, and that configuration is exactly the VirtualService.
bookinfo-gateway.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway # gateway name, referenced later by the virtual service
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:  # ingress server configuration
  - port:
      number: 80  # the port exposed externally
      name: http
      protocol: HTTP  # the protocol exposed externally
    hosts: # no domain binding, so the gateway can be reached directly by IP address
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080

With this configuration, whenever the gateway receives an HTTP request matching one of these paths, it forwards the request to the productpage service.

1.2. How do services communicate with each other?

1.2.1. Kubernetes' four-layer deployment structure

Istio runs on the Kubernetes platform. In Kubernetes, physical resources are allocated per Node (the largest unit of resource allocation). A Node can host many Pods, each Pod can contain multiple Containers, and multiple Pods are aggregated under one Service name to serve traffic, while the Deployment is an abstraction of the overall service system whose concrete form is the YAML file. Deployment, Service, Pod, and Container together form Kubernetes' four-layer deployment structure.
(Figure: the Kubernetes four-layer deployment structure.)
A Kubernetes Service is allowed to own multiple Pods for two reasons:

  • First, so that an application can exist in multiple versions. For example, an app may run as v1, v2, and v3 while exposing a single service name; without any further control, traffic is spread evenly across the three versions.
  • Second, for better fault tolerance: replicas of a service's Pods can be placed on multiple Nodes, so the application keeps working even when one machine goes down.

1.2.2. Inter-service communication

As we saw, the reviews service is split into v1, v2, and v3 versions, yet listing the services shows only a single reviews Service.
Each service is assigned a cluster-ip. This is not an address on any network interface but a virtual IP allocated by the Kubernetes platform; calls to this IP are distributed by Kubernetes across the Pods behind it.
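
How one Service spans all three versions is visible in BookInfo's own manifests (shown in full in section 2.1): the reviews Service selects Pods by the app label only, while the reviews-v1/v2/v3 Deployments add a version label on top, so Pods of every version become endpoints of the same ClusterIP.

apiVersion: v1
kind: Service
metadata:
  name: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews   # no "version" key here: v1, v2 and v3 Pods all match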

The cluster-ip also gets an internal domain name. For example, a service named reviews in the default namespace gets the default Kubernetes domain suffix svc.cluster.local, so accessing reviews.default.svc.cluster.local resolves to its ClusterIP; the bare service name works as well.

In the reviews source code, the ratings service is called over HTTP as follows:

//LibertyRestEndpoint.java
@Path("/")
public class LibertyRestEndpoint extends Application {
    ......
    private final static String services_domain =
        System.getenv("SERVICES_DOMAIN") == null ? "" : ("." + System.getenv("SERVICES_DOMAIN"));
    private final static String ratings_hostname =
        System.getenv("RATINGS_HOSTNAME") == null ? "ratings" : System.getenv("RATINGS_HOSTNAME");
    // Build the request URL from the service name (plus optional domain suffix) and port
    private final static String ratings_service =
        "http://" + ratings_hostname + services_domain + ":9080/ratings";
    ............
    private JsonObject getRatings(String productId, HttpHeaders requestHeaders) {
        ClientBuilder cb = ClientBuilder.newBuilder();
        Integer timeout = star_color.equals("black") ? 10000 : 2500;
        cb.property("com.ibm.ws.jaxrs.client.connection.timeout", timeout);
        cb.property("com.ibm.ws.jaxrs.client.receive.timeout", timeout);
        Client client = cb.build();
        // Issue the request
        WebTarget ratingsTarget = client.target(ratings_service + "/" + productId);
        Invocation.Builder builder = ratingsTarget.request(MediaType.APPLICATION_JSON);
        for (String header : headers_to_propagate) {
            String value = requestHeaders.getHeaderString(header);
            if (value != null) {
                builder.header(header, value);
            }
        }
        try {
            Response r = builder.get();
            int statusCode = r.getStatusInfo().getStatusCode();
            if (statusCode == Response.Status.OK.getStatusCode()) {
                try (StringReader stringReader = new StringReader(r.readEntity(String.class));
                     JsonReader jsonReader = Json.createReader(stringReader)) {
                    return jsonReader.readObject();
                }
            } else {
                System.out.println("Error: unable to contact " + ratings_service + " got status of " + statusCode);
                return null;
            }
        } catch (ProcessingException e) {
            System.err.println("Error: unable to contact " + ratings_service + " got exception " + e);
            return null;
        }
    }
}

Let's verify this inside a container. The reviews container lacks basic commands such as curl and wget, so we use an alpine container as a helper.

# Pull the image
docker pull alpine:3.8
[root@node2 ~]# docker run -it --name=alpine alpine:3.8
/ # apk update
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
v3.8.5-53-g001e8c2217 [http://dl-cdn.alpinelinux.org/alpine/v3.8/main]
v3.8.5-37-gf06ffe835a [http://dl-cdn.alpinelinux.org/alpine/v3.8/community]
OK: 9563 distinct packages available
/ # apk add curl
(1/5) Installing ca-certificates (20191127-r2)
(2/5) Installing nghttp2-libs (1.39.2-r0)
(3/5) Installing libssh2 (1.9.0-r1)
(4/5) Installing libcurl (7.61.1-r3)
(5/5) Installing curl (7.61.1-r3)
Executing busybox-1.28.4-r3.trigger
Executing ca-certificates-20191127-r2.trigger
OK: 6 MiB in 18 packages
/ # # Commit this container as a new image for later reuse
docker commit alpine alpine:my-3.8
# The --ipc, --net and --pid flags share the target container's namespaces, and with them
# its resources; this is effectively the same as operating inside the target container.
# To fill in container:7583cba4a6d2, look up the id of the k8s_reviews_reviews-v3-xxxx
# container with: docker ps | grep reviews
[root@node2 ~]# docker run -it --rm --net=container:7583cba4a6d2 --pid=container:7583cba4a6d2 --ipc=container:7583cba4a6d2 alpine:my-3.8
/ # ping ratings # ping the short name: the resolved IP matches the one shown by kubectl get services, so the short domain name reaches the target
PING ratings (10.98.194.79): 56 data bytes
^C
--- ratings ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/ # ping ratings.default.svc.cluster.local # the fully-qualified name gives the same result
PING ratings.default.svc.cluster.local (10.98.194.79): 56 data bytes
^C
--- ratings.default.svc.cluster.local ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
# (A ClusterIP is virtual and does not answer ICMP, hence the packet loss; what matters is that the name resolves.)
/ #
/ # curl http://ratings:9080/ratings/123 # curl returns the correct result, so the service is reachable
{"id":123,"ratings":{"Reviewer1":5,"Reviewer2":4}}/ #
/ #
2. Advanced Istio

2.1. How the sidecar takes over traffic

Sidecar deployment supports both automatic injection and manual configuration. Automatic injection currently requires Kubernetes support: simply add a label to the application's Namespace. For the default Namespace, for example:

kubectl label namespace default istio-injection=enabled

For manual injection, do the following instead:

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

In the Istio architecture there is exactly one sidecar per application, so where exactly does it live?
(Figure: docker ps output for the reviews-v3 pod.)

Three containers relate to reviews-v3:

  • k8s_istio-proxy_reviews-v3-xxx
  • k8s_reviews_reviews-v3-xxx
  • k8s_POD_reviews-v3-xxx

The second is the application itself, the third is the pause container, and the first is the sidecar; every pod carries such a trio of containers.
The role of the pause container:
When a Linux system boots, every process descends from the first process, init, so under normal circumstances they all share the same set of namespaces. When a process exits or crashes, its parent is responsible for collecting its exit status so that the operating system can reclaim the process entry. Sometimes, however, a parent handles child exits poorly, that is, it never calls wait to reap the return code, whether through sloppy code or an unexpected crash.
In that case the operating system re-parents the abandoned process to the PID 1 process, normally init; such processes are aptly called orphan processes. In a Docker container, though, each application process in the container's own namespaces is itself the first process (PID = 1). If that application, say Nginx, forks or execs further children, then when something goes wrong those children get re-parented under Nginx. The hierarchy is formally correct, but unlike init, Nginx has no logic for reaping zombie processes, so hanging them under it achieves nothing.
The pause container has no real business logic; it merely handles the signals of exiting child processes so they can terminate cleanly.

How does the k8s_istio-proxy container hijack traffic? Use the alpine container to inspect the application container and the sidecar container in turn:

[root@node3 ~]# docker ps | grep reviews-v3
53eaaf7250a7 istio/proxyv2 "/usr/local/bin/pilo…" 40 minutes ago Up 40 minutes k8s_istio-proxy_reviews-v3-58fc46b64-l6244_default_92bd4472-340e-4c45-87d9-7edb1aabf025_0
eeafad8f956b istio/examples-bookinfo-reviews-v3 "/opt/ibm/helpers/ru…" 40 minutes ago Up 40 minutes k8s_reviews_reviews-v3-58fc46b64-l6244_default_92bd4472-340e-4c45-87d9-7edb1aabf025_0
0eb274c2f015 registry.cn-hangzhou.aliyuncs.com/itcast/pause:3.2 "/pause" 41 minutes ago Up 41 minutes k8s_POD_reviews-v3-58fc46b64-l6244_default_92bd4472-340e-4c45-87d9-7edb1aabf025_0
[root@node3 ~]#
# The sidecar container
[root@node3 ~]# docker run -it --rm --net=container:53eaaf7250a7 --pid=container:53eaaf7250a7 --ipc=container:53eaaf7250a7 alpine:my-3.8
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 06:E2:F2:9E:B9:72
     inet addr:10.244.2.34 Bcast:10.244.2.255 Mask:255.255.255.0
     UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
     RX packets:9425 errors:0 dropped:0 overruns:0 frame:0
     TX packets:6544 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:0
     RX bytes:1216915 (1.1 MiB) TX bytes:5622857 (5.3 MiB)
lo   Link encap:Local Loopback
     inet addr:127.0.0.1 Mask:255.0.0.0
     UP LOOPBACK RUNNING MTU:65536 Metric:1
     RX packets:8158 errors:0 dropped:0 overruns:0 frame:0
     TX packets:8158 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:1000
     RX bytes:6129436 (5.8 MiB) TX bytes:6129436 (5.8 MiB)
/ # ^C
/ # [root@node3 ~]#
# The application container
[root@node3 ~]# docker run -it --rm --net=container:eeafad8f956b --pid=container:eeafad8f956b --ipc=container:eeafad8f956b alpine:my-3.8
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 06:E2:F2:9E:B9:72
     inet addr:10.244.2.34 Bcast:10.244.2.255 Mask:255.255.255.0
     UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
     RX packets:9550 errors:0 dropped:0 overruns:0 frame:0
     TX packets:6631 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:0
     RX bytes:1229027 (1.1 MiB) TX bytes:5687028 (5.4 MiB)
lo   Link encap:Local Loopback
     inet addr:127.0.0.1 Mask:255.0.0.0
     UP LOOPBACK RUNNING MTU:65536 Metric:1
     RX packets:8266 errors:0 dropped:0 overruns:0 frame:0
     TX packets:8266 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:1000
     RX bytes:6204098 (5.9 MiB) TX bytes:6204098 (5.9 MiB)
/ # [root@node3 ~]#

As shown above, the sidecar container and the application container have exactly the same IP address; in other words, they share it. Given a shared IP, how does the sidecar actually hijack traffic? To answer that, we need to look at what Istio does when it injects the sidecar.

# Copyright 2017 Istio Authors
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: details
    version: v1
  name: details-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"ed4e6e8ed4ffa03fe7d5b9d2c27cc8c478625ade64be2859cae3da0db9e5ee2e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: details
        istio.io/rev: ""
        security.istio.io/tlsMode: istio
        version: v1
    spec:
      containers:
      - image: docker.io/istio/examples-bookinfo-details-v1:1.15.1
        imagePullPolicy: IfNotPresent
        name: details
        ports:
        - containerPort: 9080
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --serviceCluster
        - details.$(POD_NAMESPACE)
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --trust-domain=cluster.local
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CANONICAL_SERVICE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-name']
        - name: CANONICAL_REVISION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-revision']
        - name: PROXY_CONFIG
          value: |
            {"proxyMetadata":{"DNS_AGENT":""}}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                details
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: details-v1
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/details-v1
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: DNS_AGENT
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        env:
        - name: DNS_AGENT
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      securityContext:
        fsGroup: 1337
      serviceAccountName: bookinfo-details
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ratings
    version: v1
  name: ratings-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"ed4e6e8ed4ffa03fe7d5b9d2c27cc8c478625ade64be2859cae3da0db9e5ee2e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: ratings
        istio.io/rev: ""
        security.istio.io/tlsMode: istio
        version: v1
    spec:
      containers:
      - image: docker.io/istio/examples-bookinfo-ratings-v1:1.15.1
        imagePullPolicy: IfNotPresent
        name: ratings
        ports:
        - containerPort: 9080
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --serviceCluster
        - ratings.$(POD_NAMESPACE)
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --trust-domain=cluster.local
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CANONICAL_SERVICE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-name']
        - name: CANONICAL_REVISION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-revision']
        - name: PROXY_CONFIG
          value: |
            {"proxyMetadata":{"DNS_AGENT":""}}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                ratings
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: ratings-v1
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/ratings-v1
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: DNS_AGENT
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        env:
        - name: DNS_AGENT
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      securityContext:
        fsGroup: 1337
      serviceAccountName: bookinfo-ratings
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: reviews
    version: v1
  name: reviews-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"ed4e6e8ed4ffa03fe7d5b9d2c27cc8c478625ade64be2859cae3da0db9e5ee2e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: reviews
        istio.io/rev: ""
        security.istio.io/tlsMode: istio
        version: v1
    spec:
      containers:
      - env:
        - name: LOG_DIR
          value: /tmp/logs
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.15.1
        imagePullPolicy: IfNotPresent
        name: reviews
        ports:
        - containerPort: 9080
        resources: {}
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /opt/ibm/wlp/output
          name: wlp-output
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --serviceCluster
        - reviews.$(POD_NAMESPACE)
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --trust-domain=cluster.local
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CANONICAL_SERVICE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-name']
        - name: CANONICAL_REVISION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-revision']
        - name: PROXY_CONFIG
          value: |
            {"proxyMetadata":{"DNS_AGENT":""}}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                reviews
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: reviews-v1
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/reviews-v1
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: DNS_AGENT
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        env:
        - name: DNS_AGENT
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      securityContext:
        fsGroup: 1337
      serviceAccountName: bookinfo-reviews
      volumes:
      - emptyDir: {}
        name: wlp-output
      - emptyDir: {}
        name: tmp
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: reviews
    version: v2
  name: reviews-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"ed4e6e8ed4ffa03fe7d5b9d2c27cc8c478625ade64be2859cae3da0db9e5ee2e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: reviews
        istio.io/rev: ""
        security.istio.io/tlsMode: istio
        version: v2
    spec:
      containers:
      - env:
        - name: LOG_DIR
          value: /tmp/logs
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.15.1
        imagePullPolicy: IfNotPresent
        name: reviews
        ports:
        - containerPort: 9080
        resources: {}
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /opt/ibm/wlp/output
          name: wlp-output
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --serviceCluster
        - reviews.$(POD_NAMESPACE)
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --trust-domain=cluster.local
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CANONICAL_SERVICE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-name']
        - name: CANONICAL_REVISION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-revision']
        - name: PROXY_CONFIG
          value: |
            {"proxyMetadata":{"DNS_AGENT":""}}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                reviews
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: reviews-v2
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/reviews-v2
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: DNS_AGENT
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        env:
        - name: DNS_AGENT
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      securityContext:
        fsGroup: 1337
      serviceAccountName: bookinfo-reviews
      volumes:
      - emptyDir: {}
        name: wlp-output
      - emptyDir: {}
        name: tmp
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: reviews
    version: v3
  name: reviews-v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"ed4e6e8ed4ffa03fe7d5b9d2c27cc8c478625ade64be2859cae3da0db9e5ee2e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: reviews
        istio.io/rev: ""
        security.istio.io/tlsMode: istio
        version: v3
    spec:
      containers:
      - env:
        - name: LOG_DIR
          value: /tmp/logs
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.15.1
        imagePullPolicy: IfNotPresent
        name: reviews
        ports:
        - containerPort: 9080
        resources: {}
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /opt/ibm/wlp/output
          name: wlp-output
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --serviceCluster
        - reviews.$(POD_NAMESPACE)
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --trust-domain=cluster.local
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CANONICAL_SERVICE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-name']
        - name: CANONICAL_REVISION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-revision']
        - name: PROXY_CONFIG
          value: |
            {"proxyMetadata":{"DNS_AGENT":""}}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                reviews
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: reviews-v3
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/reviews-v3
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: DNS_AGENT
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        env:
        - name: DNS_AGENT
        image: docker.io/istio/proxyv2:1.6.5
        imagePullPolicy: Always
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      securityContext:
        fsGroup: 1337
      serviceAccountName: bookinfo-reviews
      volumes:
      - emptyDir: {}
        name: wlp-output
      - emptyDir: {}
        name: tmp
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: productpage
    version: v1
  name: productpage-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: REDIRECT
        sidecar.istio.io/status: '{"version":"ed4e6e8ed4ffa03fe7d5b9d2c27cc8c478625ade64be2859cae3da0db9e5ee2e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/excludeInboundPorts: "15020"
        traffic.sidecar.istio.io/includeInboundPorts: "9080"
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
      creationTimestamp: null
      labels:
        app: productpage
        istio.io/rev: ""
        security.istio.io/tlsMode: istio
        version: v1
    spec:
      containers:
      - image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.1
        imagePullPolicy: IfNotPresent
        name: productpage
        ports:
        - containerPort: 9080  # the port the service exposes
        resources: {}
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --serviceCluster
        - productpage.$(POD_NAMESPACE)
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --trust-domain=cluster.local
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: first-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CANONICAL_SERVICE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-name']
        - name: CANONICAL_REVISION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['service.istio.io/canonical-revision']
        - name: PROXY_CONFIG
          value: |
            {"proxyMetadata":{"DNS_AGENT":""}}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":9080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: |-
            [
                productpage
            ]
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: productpage-v1
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/default/deployments/productpage-v1
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: DNS_AGENT
        - name: ISTIO_KUBE_APP_PROBERS
          value: '{}'
        image: docker.io/istio/proxyv2:1.6.5  # the sidecar proxy
        imagePullPolicy: Always
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        env:
        - name: DNS_AGENT
        image: docker.io/istio/proxyv2:1.6.5  # the init container
        imagePullPolicy: Always
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      securityContext:
        fsGroup: 1337
      serviceAccountName: bookinfo-productpage
      volumes:
      - emptyDir: {}
        name: tmp
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---

The configuration Istio injects into the application Pod consists mainly of:

  • the Init container istio-init: sets up iptables port redirection inside the pod;
  • the sidecar container istio-proxy: runs the sidecar proxy, such as Envoy or MOSN.

2.1.1. The Init container in detail

The Init container Istio injects into the pod is named istio-init. In the post-injection YAML above we saw that its startup command is:

istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i '*' -x "" -b '*' -d 15090,15021,15020

The Init container's entry point is the istio-iptables command-line tool, whose usage is:

$ istio-iptables [flags]
  -p: the sidecar port to which all TCP traffic will be redirected (default $ENVOY_PORT = 15001)
  -m: the mode used to redirect inbound connections to the sidecar, either "REDIRECT" or "TPROXY" (default $ISTIO_INBOUND_INTERCEPTION_MODE)
  -b: comma-separated list of inbound ports whose traffic is redirected to Envoy (optional); the wildcard "*" redirects all ports, and an empty value disables all inbound redirection (default $ISTIO_INBOUND_PORTS)
  -d: comma-separated list of inbound ports to exclude from redirection to the sidecar (optional); the wildcard "*" redirects all inbound traffic (default $ISTIO_LOCAL_EXCLUDE_PORTS)
  -o: comma-separated list of outbound ports excluded from redirection to Envoy
  -i: comma-separated list of IP ranges in CIDR form to redirect to the sidecar (optional); the wildcard "*" redirects all outbound traffic, and an empty list disables all outbound redirection (default $ISTIO_SERVICE_CIDR)
  -x: comma-separated list of IP ranges in CIDR form to exclude from redirection; the wildcard "*" redirects all outbound traffic (default $ISTIO_SERVICE_EXCLUDE_CIDR)
  -k: comma-separated list of virtual interfaces whose inbound traffic (from a VM) is treated as outbound
  -g: the GID of the user whose traffic is exempt from redirection (defaults to the same value as -u)
  -u: the UID of the user whose traffic is exempt from redirection; typically the UID of the proxy container (default 1337, the UID of istio-proxy)
  -z: the port to which all inbound TCP traffic entering the pod/VM is redirected (default $INBOUND_CAPTURE_PORT = 15006)

Command analysis:
This startup command:

  • redirects all traffic of the application container to port 15006 of the sidecar;
  • runs as the istio-proxy user with UID 1337, i.e. the user space the sidecar lives in; this is also the default user of the istio-proxy container, see the runAsUser field in the YAML above;
  • uses the default REDIRECT mode to redirect traffic;
  • redirects all outbound traffic to the sidecar proxy (through port 15001).

Because the Init container terminates automatically once initialization finishes, we cannot log into it to inspect the iptables state; its results, however, persist into the application container and the sidecar container.

2.1.2. The injected iptables rules

The iptables rules related to productpage look like this:

[root@node3 ~]# docker top `docker ps|grep "istio-proxy_productpage"|cut -d " " -f1`
UID    PID    PPID   C   STIME   TTY   TIME       CMD
1337   5941   5924   0   02:30   ?     00:00:11   /usr/local/bin/pilot-agent proxy sidecar --domain default.svc.cluster.local --serviceCluster productpage.default --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --trust-domain=cluster.local --concurrency 2
1337   5968   5941   0   02:30   ?     00:01:15   /usr/local/bin/envoy -c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster productpage.default --service-node sidecar~10.244.2.32~productpage-v1-7df7cb7f86-wvxst.default~default.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --log-format %Y-%m-%dT%T.%fZ?%l?envoy %n?%v -l warning --component-log-level misc:error --concurrency 2
# Enter the sidecar container's network namespace with nsenter (either PID above works)
nsenter -n --target 5941
# Show the NAT table rules in detail
[root@node3 ~]# iptables -t nat -L -v
# PREROUTING chain: destination NAT (DNAT); jumps all inbound TCP traffic to the ISTIO_INBOUND chain.
Chain PREROUTING (policy ACCEPT 14987 packets, 899K bytes)
 pkts bytes target prot opt in out source destination
14987  899K ISTIO_INBOUND tcp -- any any anywhere anywhere
# INPUT chain: processes incoming packets; non-TCP traffic continues to the OUTPUT chain.
Chain INPUT (policy ACCEPT 14987 packets, 899K bytes)
 pkts bytes target prot opt in out source destination
# OUTPUT chain: jumps all outbound packets to the ISTIO_OUTPUT chain.
Chain OUTPUT (policy ACCEPT 3795 packets, 332K bytes)
 pkts bytes target prot opt in out source destination
  431 25860 ISTIO_OUTPUT tcp -- any any anywhere anywhere
# POSTROUTING chain: every packet passes through POSTROUTING before leaving the network interface, and the kernel decides by destination whether it must be forwarded; nothing is done here.
Chain POSTROUTING (policy ACCEPT 3795 packets, 332K bytes)
 pkts bytes target prot opt in out source destination
# ISTIO_INBOUND chain: redirects all inbound traffic to the ISTIO_IN_REDIRECT chain, except traffic destined for ports 15090 (used by Mixer) and 15020 (used by the ingress gateway for Pilot health checks); traffic to those ports RETURNs to the caller of this chain, i.e. continues after PREROUTING toward POSTROUTING.
# In Istio 1.6 the health-check port changed to 15021.
Chain ISTIO_INBOUND (1 references)
 pkts bytes target prot opt in out source destination
    0     0 RETURN tcp -- any any anywhere anywhere tcp dpt:ssh
    5   300 RETURN tcp -- any any anywhere anywhere tcp dpt:15090
14982  899K RETURN tcp -- any any anywhere anywhere tcp dpt:15021
    0     0 RETURN tcp -- any any anywhere anywhere tcp dpt:15020
    0     0 ISTIO_IN_REDIRECT tcp -- any any anywhere anywhere
# ISTIO_IN_REDIRECT chain: redirects all inbound traffic to local port 15006; at this point traffic has been successfully intercepted into the sidecar.
Chain ISTIO_IN_REDIRECT (3 references)
 pkts bytes target prot opt in out source destination
    0     0 REDIRECT tcp -- any any anywhere anywhere redir ports 15006
# ISTIO_OUTPUT chain: selects the outbound traffic that must be redirected to Envoy (i.e. to localhost); all non-localhost traffic is forwarded to ISTIO_REDIRECT. To keep traffic from looping forever inside the pod, anything coming from the istio-proxy user space RETURNs to the next rule at its call site, here the OUTPUT chain, after which it proceeds to POSTROUTING. In short: if the destination is not localhost, jump to ISTIO_REDIRECT; if the traffic comes from the istio-proxy user space, leave this chain and continue with the next OUTPUT rule (no processing needed); all remaining traffic destined for localhost that is not from istio-proxy jumps to ISTIO_REDIRECT.
Chain ISTIO_OUTPUT (1 references)
 pkts bytes target prot opt in out source destination
    0     0 RETURN all -- any lo 127.0.0.6 anywhere
    0     0 ISTIO_IN_REDIRECT all -- any lo anywhere !localhost owner UID match 1337
    0     0 RETURN all -- any lo anywhere anywhere ! owner UID match 1337
  431 25860 RETURN all -- any any anywhere anywhere owner UID match 1337
    0     0 ISTIO_IN_REDIRECT all -- any lo anywhere !localhost owner GID match 1337
    0     0 RETURN all -- any lo anywhere anywhere ! owner GID match 1337
    0     0 RETURN all -- any any anywhere anywhere owner GID match 1337
    0     0 RETURN all -- any any anywhere localhost
    0     0 ISTIO_REDIRECT all -- any any anywhere anywhere
# ISTIO_REDIRECT chain: redirects all traffic to port 15001 of the sidecar (i.e. local).
Chain ISTIO_REDIRECT (1 references)
 pkts bytes target prot opt in out source destination
    0     0 REDIRECT tcp -- any any anywhere anywhere redir ports 15001

Flow summary:
An inbound packet traverses PREROUTING → ISTIO_INBOUND → ISTIO_IN_REDIRECT and is redirected to Envoy on port 15006, which then hands it to the application. The application's outbound call traverses OUTPUT → ISTIO_OUTPUT → ISTIO_REDIRECT and is redirected to Envoy on port 15001, which proxies it out of the pod; Envoy's own traffic (UID/GID 1337) RETURNs from ISTIO_OUTPUT and leaves directly.

2.2. Timeouts and retries

2.2.1. Timeouts

A timeout is the amount of time the Envoy proxy waits for a reply from a given service; it ensures that a service does not hang indefinitely waiting for an answer and that calls succeed or fail within a predictable window. The default timeout for HTTP requests is 15 seconds, meaning the call fails if the service has not responded within 15 seconds.
For some applications and services Istio's default may not be appropriate: a timeout that is too long can add excessive latency while waiting for replies from a failed service, while one that is too short can cause unnecessary failures while waiting on operations that involve multiple services.
To find and use the optimal timeout settings, Istio lets you easily and dynamically tune timeouts per service using virtual services, without touching your business code.

The following example is a virtual service that applies a 10-second timeout to calls to the v1 subset of the ratings service:

apiVersion: networking.istio.io/v1alpha3 
kind: VirtualService 
metadata: 
  name: ratings 
spec: 
  hosts: 
  - ratings 
  http: 
  - route: 
    - destination: 
        host: ratings 
        subset: v1 
    timeout: 10s
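
To actually watch a timeout fire, Istio's request-timeouts task combines two pieces: an artificial delay injected into ratings, and a shorter timeout configured on the route of a caller such as reviews. Here is a sketch of the delay half (the values are illustrative; the timeout half looks just like the example above):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 100      # delay every request
        fixedDelay: 2s    # make ratings respond two seconds late
    route:
    - destination:
        host: ratings
        subset: v1

With this in place, reviews v2/v3 (which call ratings) slow down by two seconds, so a 1-second timeout on the reviews route would trip for every request.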

2.2.2. Retries

Retry settings specify the maximum number of times the Envoy proxy attempts to reach a service if the initial call fails. By ensuring that calls do not fail permanently because of transient problems such as a momentarily overloaded service or network, retries improve service availability and application performance. The interval between retries (25 ms and up) is variable and determined automatically by Istio, which prevents the called service from being flooded with requests. By default, the Envoy proxy does not retry after the first failure.
As with timeouts, Istio's default retry behavior may not fit your application's needs in terms of latency (too many retries against a failing service can slow things down) or availability.
You can tune retry settings per service in a virtual service without touching business code. You can also refine retry behavior by adding a per-retry timeout, specifying how long each individual attempt waits to successfully connect to the service.
The following example configures at most 3 retries after the initial call fails, each with a 2-second timeout:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s
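
The retry policy can also be limited to particular failure classes through the retryOn field, so that, for example, only connection-level failures and gateway errors are retried. A sketch of the stanza above extended this way (the condition list here is illustrative):

    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: gateway-error,connect-failure,refused-stream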

2.3 Circuit Breakers

Circuit breaking is another useful mechanism Istio provides for building resilient microservice applications. In a circuit breaker you set limits for calls to individual hosts within a service, such as the number of concurrent connections or how often calls to that host have failed. Once a limit is reached, the circuit breaker "trips" and stops further connections to that host.
Using a circuit-breaking pattern enables fast failure rather than letting clients keep trying to connect to an overloaded or failing host.

2.3.1 Deploying httpbin

httpbin is an open-source project written in Python with Flask; it can be used to test all kinds of HTTP requests and responses. Website: http://httpbin.org/
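
The walkthrough below assumes httpbin is already running in the mesh. A minimal deployment sketch using the manifest shipped in the Istio release directory (assuming automatic sidecar injection is enabled for the namespace):

kubectl apply -f samples/httpbin/httpbin.yaml

If automatic injection is not enabled, inject the sidecar manually:

kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)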

2.3.2 Configuring the circuit breaker

Create a destination rule that applies circuit-breaking settings when calling the httpbin service:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1  # maximum number of connections
      http:
        http1MaxPendingRequests: 1  # maximum number of pending HTTP requests
        maxRequestsPerConnection: 1  # maximum number of requests per backend connection
    outlierDetection:  # circuit-breaking settings
      consecutiveErrors: 1  # consecutive failures before a host is ejected from the pool; over HTTP, return codes 502, 503 and 504 count as errors
      interval: 1s  # ejection analysis interval: the breaker trips when consecutiveErrors (1) errors occur within the interval (1s); format 1h/1m/1s/1ms, must be >= 1ms; default 10s
      baseEjectionTime: 3m  # minimum ejection duration; an ejected host stays out of the load-balancing pool at least this long, longer with repeated ejections; format 1h/1m/1s/1ms, must be >= 1ms; default 30s
      maxEjectionPercent: 100  # maximum percentage of hosts in the load-balancing pool that may be ejected; default 10%
EOF

Verify that the destination rule was created correctly:

kubectl get destinationrule httpbin -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  ...
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
      tcp:
        maxConnections: 1
    outlierDetection:
      baseEjectionTime: 180.000s
      consecutiveErrors: 1
      interval: 1.000s
      maxEjectionPercent: 100
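
Note: consecutiveErrors was deprecated in later Istio releases and split into more specific counters. A sketch of the equivalent stanza under the newer field names (verify against your Istio version before relying on it):

    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100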
2.3.3 The client

Create a client to send traffic to the httpbin service. The client is a load-testing tool called Fortio, which lets you control the number of connections, the concurrency, and the delays of outgoing HTTP requests. Fortio makes it possible to effectively trigger the circuit-breaking policies set in the DestinationRule above.

  1. Inject the Istio sidecar proxy into the client so Istio can manage its network interactions:

$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/sample-client/fortio-deploy.yaml)

  2. Log in to the client pod and call the httpbin service with the Fortio tool; the -curl argument sends a single call:

$ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
$ kubectl exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Tue, 16 Jan 2018 23:47:00 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 445
x-envoy-upstream-service-time: 36
{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "istio/fortio-0.6.2",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "824fbd828d809bf4",
    "X-B3-Traceid": "824fbd828d809bf4",
    "X-Ot-Span-Context":
  "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
   " X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
 },
   "origin": "127.0.0.1",
   "url": "http://httpbin:8000/get"
}

You can see the request to the backend service succeeded. Now you can test circuit breaking.

2.3.4 Tripping the circuit breaker

The DestinationRule configuration defines maxConnections: 1 and http1MaxPendingRequests: 1. These rules mean that if the number of concurrent connections and requests exceeds one, the istio-proxy blocks any further requests and connections.

  1. Send the request with 2 concurrent connections (-c 2), 20 requests in total (-n 20):

[root@node1 istio-1.6.5]# kubectl exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
03:59:25 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.3.1 running at 0 queries per second, 2->2 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
03:59:25 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:59:25 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:59:25 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:59:25 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:59:25 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
03:59:25 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 79.166124ms : 20 calls. qps=252.63
Aggregated Function Time : count 20 avg 0.0064311497 +/- 0.007472 min 0.000340298 max 0.032824602 sum 0.128622994
# range, mid point, percentile, count
>= 0.000340298 <= 0.001 , 0.000670149 , 10.00, 2
> 0.001 <= 0.002 , 0.0015 , 20.00, 2
> 0.002 <= 0.003 , 0.0025 , 40.00, 4
> 0.003 <= 0.004 , 0.0035 , 60.00, 4
> 0.004 <= 0.005 , 0.0045 , 65.00, 1
> 0.006 <= 0.007 , 0.0065 , 80.00, 3
> 0.012 <= 0.014 , 0.013 , 85.00, 1
> 0.014 <= 0.016 , 0.015 , 90.00, 1
> 0.016 <= 0.018 , 0.017 , 95.00, 1
> 0.03 <= 0.0328246 , 0.0314123 , 100.00, 1
# target 50% 0.0035
# target 75% 0.00666667
# target 90% 0.016
# target 99% 0.0322597
# target 99.9% 0.0327681
Sockets used: 8 (for perfect keepalive, would be 2)
Code 200 : 14 (70.0 %)
Code 503 : 6 (30.0 %)
Response Header Sizes : count 20 avg 161.15 +/- 105.5 min 0 max 231 sum 3223
Response Body/Total Sizes : count 20 avg 668.15 +/- 279.6 min 241 max 852 sum 13363
All done 20 calls (plus 0 warmup) 6.431 ms avg, 252.6 qps

Result:

Code 200 : 14 (70.0 %)
Code 503 : 6 (30.0 %)
  2. Bring the number of concurrent connections up to 3:

[root@node1 istio-1.6.5]# kubectl exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
04:01:42 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.3.1 running at 0 queries per second, 2->2 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
04:01:42 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 32.153704ms : 30 calls. qps=933.02
Aggregated Function Time : count 30 avg 0.0019156712 +/- 0.001801 min 0.000270969 max 0.006581956 sum 0.057470135
# range, mid point, percentile, count
>= 0.000270969 <= 0.001 , 0.000635485 , 56.67, 17
> 0.002 <= 0.003 , 0.0025 , 70.00, 4
> 0.003 <= 0.004 , 0.0035 , 86.67, 5
> 0.004 <= 0.005 , 0.0045 , 93.33, 2
> 0.005 <= 0.006 , 0.0055 , 96.67, 1
> 0.006 <= 0.00658196 , 0.00629098 , 100.00, 1
# target 50% 0.000908871
# target 75% 0.0033
# target 90% 0.0045
# target 99% 0.00640737
# target 99.9% 0.0065645
Sockets used: 20 (for perfect keepalive, would be 3)
Code 200 : 11 (36.7 %)
Code 503 : 19 (63.3 %)
Response Header Sizes : count 30 avg 84.333333 +/- 110.8 min 0 max 230 sum 2530
Response Body/Total Sizes : count 30 avg 464.66667 +/- 294 min 241 max 851 sum 13940
All done 30 calls (plus 0 warmup) 1.916 ms avg, 933.0 qps

Now the circuit breaking kicks in as expected: only 36.7% of the requests succeeded, and the rest were trapped by the circuit breaker:

Code 200 : 11 (36.7 %)
Code 503 : 19 (63.3 %)
  3. Query the istio-proxy stats to see more details about the circuit breaker:

[root@node1 istio-1.6.5]# kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 72
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 59

Note the upstream_rq_pending_overflow value of 72: 72 calls so far have been flagged for circuit breaking.

2.3.5 Cleanup

  • Remove the rules:

$ kubectl delete destinationrule httpbin

  • Shut down the httpbin service and the client:

$ kubectl delete deploy httpbin fortio-deploy
$ kubectl delete svc httpbin

2.4 Visualizing the mesh

In Istio you can use Kiali to manage the service mesh visually. Kiali website: https://www.kiali.io/
The demonstration below again uses the demo environment and BookInfo. The demo profile installs Kiali by default; the default username and password are both admin.
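
If the login credentials are missing in your environment, the Kiali secret can be created by hand, following the pattern from the older Istio docs (a sketch; the demo profile normally creates it for you, and the base64 strings below decode to admin):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
type: Opaque
data:
  username: YWRtaW4=
  passphrase: YWRtaW4=
EOF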

2.4.1 Starting the service

  1. To verify that the service is running in your cluster:

$ kubectl -n istio-system get svc kiali

  2. To determine the BookInfo URL:

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

  3. Send traffic to the mesh; there are three options:
  • Visit http://$GATEWAY_URL/productpage in a browser
  • Issue the following command repeatedly:

$ curl http://$GATEWAY_URL/productpage

  • If the watch command is installed on your system, send requests continuously at 1-second intervals:

$ watch -n 1 curl -o /dev/null -s -w %{http_code} $GATEWAY_URL/productpage

  4. To open the Kiali UI, execute the following command in your Kubernetes environment:

$ istioctl dashboard kiali --address 192.168.31.106

  5. To log in to the Kiali UI, go to the Kiali login page and enter the default username and password.
  6. The Overview page, shown right after login, gives an overview of the mesh and lists all namespaces that have services. A screenshot of a similar page: (figure)
2.4.2 Graph

(figure)
View the traffic distribution percentages:
(figure)
Request statistics: RPS data (min/max):
(figure)
Different graph types can be displayed; there are four:

  • App

The App graph type aggregates all versions of an application into a single graph node.
(figure)

  • Versioned App

The Versioned App graph type shows a node for each version of an application, with all versions of a particular application grouped together.
(figure)

  • Workload

The Workload graph type shows a node for each workload in the service mesh.
(figure)

  • Service

The Service graph type shows a node for each service in the mesh.
(figure)

2.4.3 Route weighting

The default routing rules distribute traffic evenly across all available nodes; Kiali lets you adjust this visually:
Step 1: open the Services list:
(figure)
Step 2: open the reviews service:
(figure)
Step 3: delete the existing routing rules:
(figure)
Step 4: create a weighted routing rule (a YAML sketch of the equivalent rule follows this list):
(figure)
The default weights:
(figure)
Adjust them:
(figure)
and save.
Step 5: drive traffic with watch for a while and observe the effect:
(figure)
You can see that the percentages of traffic going to reviews v1, v2 and v3 have changed.
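
Under the hood, the weighting wizard amounts to applying a weighted VirtualService (together with a DestinationRule defining the v1/v2/v3 subsets). A sketch of what the generated route looks like; the weights here are illustrative, not the values from the screenshots:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 70
    - destination:
        host: reviews
        subset: v2
      weight: 20
    - destination:
        host: reviews
        subset: v3
      weight: 10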

2.4.4 Viewing workloads

(figure)
Inbound and outbound information:
(figure)
Logs:
(figure)
Inbound metrics:
(figure)
Outbound metrics:
(figure)

3 Hands-on

In this hands-on part we take a monolithic project, split it into microservices, and then evolve it onto the service mesh.

3.1 The project

The project is a movie-information site modeled on the Douban movie site. The underlying data lives in Neo4j (a graph database) and the services are built with Spring Boot.
Note: if you are not familiar with graph databases, focus on the service-evolution process rather than on Neo4j; just treat it like a MySQL database (they are, of course, not the same thing).
Below is a screenshot of a movie page. Analyzing it, we can carve out 4 microservices:
Area 1: the movie information area, one microservice: movie-info
Area 2: the movie rating area, one microservice: movie-rating
Area 3: the movie recommendations, one microservice: movie-recommend
The page itself is one microservice: movie-web.
(figure)
(figure)
Orange nodes: movies; blue nodes: cast and crew; red nodes: ordinary users.
Project structure:
(figure)

3.2 Setting up Neo4j

Set up the Neo4j environment with Docker:

# create the container
docker create --name neo4j -p 7474:7474 -p 7687:7687 -v neo4j:/data neo4j:4.0.0
# start the container
docker start neo4j

Log in via http://192.168.31.106:7474/browser/:
(figure)
The first login requires a password change; set it to neo4j123.
(figure)
Import the data.

After the import has executed successfully, run a query with the command: match (n) return n
(figure)

3.3 Creating the project

(figure)
The key classes are shown below.

import lombok.Setter;
import org.springframework.boot.autoconfigure.data.neo4j.Neo4jProperties;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableNeo4jRepositories(basePackages = "cn.itcast.douban.repository")
@EntityScan("cn.itcast.douban.pojo") // entity scan package
@EnableTransactionManagement // enable transaction management
@ConfigurationProperties(prefix = "spring.data.neo4j")
@Setter // generate setters; without them the property values are not injected
public class AutoConfiguration {

    private Integer connectionPoolSize;
    private Integer connectionLivenessCheckTimeout;


    /**
     * Custom Neo4j OGM configuration.
     *
     * @param neo4jProperties the auto-configured connection properties
     * @return the OGM configuration
     */
    @Bean
    public org.neo4j.ogm.config.Configuration configuration(Neo4jProperties neo4jProperties) {
        org.neo4j.ogm.config.Configuration.Builder builder = new org.neo4j.ogm.config.Configuration.Builder();

        builder.uri(neo4jProperties.getUri())
                .credentials(neo4jProperties.getUsername(), neo4jProperties.getPassword())
                .connectionPoolSize(connectionPoolSize)
                .connectionLivenessCheckTimeout(connectionLivenessCheckTimeout);

        return builder.build();
    }
}

Configuration:

server.port=8081

spring.mvc.view.prefix=/WEB-INF/views/
spring.mvc.view.suffix=.jsp

# connection URI
#spring.data.neo4j.uri=bolt://127.0.0.1:7687
#spring.data.neo4j.uri=bolt+routing://node1:7687
spring.data.neo4j.uri=neo4j://node1:7687

# username
spring.data.neo4j.username=neo4j
# password
spring.data.neo4j.password=neo4j123

# connection pool size
spring.data.neo4j.connection-pool-size=200
# connection liveness check timeout
spring.data.neo4j.connection-liveness-check-timeout=100

#logging.level.org.springframework=DEBUG

3.4 Building the Docker images

#itcast-service-mesh-movie-info
FROM openjdk:8-jdk-alpine
COPY ./itcast-service-mesh-movie-info-1.0-SNAPSHOT.jar /movie/itcast-service-mesh-movie-info-1.0-SNAPSHOT.jar
EXPOSE 18082
ENTRYPOINT [ "java","-jar","/movie/itcast-service-mesh-movie-info-1.0-SNAPSHOT.jar" ]


#itcast-service-mesh-movie-rating
FROM openjdk:8-jdk-alpine
COPY ./itcast-service-mesh-movie-rating-1.0-SNAPSHOT.jar /movie/itcast-service-mesh-movie-rating-1.0-SNAPSHOT.jar
EXPOSE 18084
ENTRYPOINT [ "java","-jar","/movie/itcast-service-mesh-movie-rating-1.0-SNAPSHOT.jar" ]
#itcast-service-mesh-movie-recommend
FROM openjdk:8-jdk-alpine
COPY ./itcast-service-mesh-movie-recommend-1.0-SNAPSHOT.jar /movie/itcast-service-mesh-movie-recommend-1.0-SNAPSHOT.jar
EXPOSE 18083
ENTRYPOINT [ "java","-jar","/movie/itcast-service-mesh-movie-recommend-1.0-SNAPSHOT.jar" ]
#itcast-service-mesh-movie-web
FROM openjdk:8-jdk-alpine
COPY ./itcast-service-mesh-movie-web-1.0-SNAPSHOT.jar /movie/itcast-service-mesh-movie-web-1.0-SNAPSHOT.jar
EXPOSE 18081
ENTRYPOINT [ "java","-jar","/movie/itcast-service-mesh-movie-web-1.0-SNAPSHOT.jar" ]

Build the images:

mkdir -p /movie/movie-info
cd /movie/movie-info
vim Dockerfile
# paste the corresponding Dockerfile content above into the file
# build the image
docker build -t itcast-service-mesh-movie-info:1.0 .
# create a container from the image and test it
docker create --name movie-info -p 18082:18082 --add-host=neo4j-server:192.168.31.106 itcast-service-mesh-movie-info:1.0
docker start movie-info
# open a browser to test: http://192.168.31.106:18082/movie
# once the test succeeds, stop and remove the container
docker stop movie-info
docker rm movie-info
docker build -t itcast-service-mesh-movie-recommend:1.0 .

docker create --name movie-recommend -p 18083:18083 --add-host=neo4j-server:192.168.31.106 itcast-service-mesh-movie-recommend:1.0
docker start movie-recommend
docker build -t itcast-service-mesh-movie-rating:1.0 .
docker create --name movie-rating -p 18084:18084 --add-host=neo4j-server:192.168.31.106 itcast-service-mesh-movie-rating:1.0
docker start movie-rating
docker create --name movie-rating-v2 -p 18085:18084 --add-host=neo4j-server:192.168.31.106 --env MOVIE_VERSION=v2 itcast-service-mesh-movie-rating:1.0
docker start movie-rating-v2
docker build -t itcast-service-mesh-movie-web:1.0 .
docker create --name movie-web -p 18081:18081 --add-host=movie-info:192.168.31.106 --add-host=movie-recommend:192.168.31.106 --add-host=movie-rating:192.168.31.106 itcast-service-mesh-movie-web:1.0
docker start movie-web
# for convenience, push the images to the Alibaba Cloud image registry; it is free, just apply for an account
# log in to Alibaba Cloud; replace with your own account
[root@node1 movie-info]# docker login --username=hi31888179@aliyun.com registry.cn-hangzhou.aliyuncs.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
#info
docker tag itcast-service-mesh-movie-info:1.0 registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-info:1.0
docker push registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-info:1.0
#recommend
docker tag itcast-service-mesh-movie-recommend:1.0 registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-recommend:1.0
docker push registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-recommend:1.0
#rating
docker tag itcast-service-mesh-movie-rating:1.0 registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-rating:1.0
docker push registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-rating:1.0
#web
docker tag itcast-service-mesh-movie-web:1.0 registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-web:1.0
docker push registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-web:1.0

3.10 Writing the Istio files

3.10.1 Orchestrating the containers with Kubernetes

movie.yaml

##################################################################################################
# neo4j service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: neo4j-server
spec:
  ports:
  - port: 7474
    name: http
    nodePort: 31001
  - port: 7687
    name: bolt
    nodePort: 31002
  selector:
    app: neo4j-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j-deployment
  labels:
    app: neo4j-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neo4j-app
  template:
    metadata:
      labels:
        app: neo4j-app
    spec:
      containers:
      - name: neo4j
        image: neo4j:4.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 7474
        - containerPort: 7687
---
##################################################################################################
# movie info service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: movie-info
spec:
  ports:
  - port: 18082
    name: http
  selector:
    app: movie-info-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-info-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: movie-info-app
      version: v1
  template:
    metadata:
      labels:
        app: movie-info-app
        version: v1
    spec:
      containers:
      - name: movie-info
        image: registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-info:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 18082
---
##################################################################################################
# movie recommend service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: movie-recommend
spec:
  ports:
  - port: 18083
    name: http
  selector:
    app: movie-recommend-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-recommend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movie-recommend-app
      version: v1
  template:
    metadata:
      labels:
        app: movie-recommend-app
        version: v1
    spec:
      containers:
      - name: movie-recommend
        image: registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-recommend:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 18083
---
##################################################################################################
# movie rating service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: movie-rating
spec:
  ports:
  - port: 18084
    name: http
  selector:
    app: movie-rating-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-rating-deployment-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: movie-rating-app
      version: v1
  template:
    metadata:
      labels:
        app: movie-rating-app
        version: v1
    spec:
      containers:
      - name: movie-rating
        image: registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-rating:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 18084
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-rating-deployment-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: movie-rating-app
      version: v2
  template:
    metadata:
      labels:
        app: movie-rating-app
        version: v2
    spec:
      containers:
      - name: movie-rating
        image: registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-rating:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 18084
        env:
        - name: MOVIE_VERSION
          value: "v2"
---
##################################################################################################
# movie web service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: movie-web
spec:
  ports:
  - port: 18081
    name: http
  selector:
    app: movie-web-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movie-web-app
      version: v1
  template:
    metadata:
      labels:
        app: movie-web-app
        version: v1
    spec:
      containers:
      - name: movie-web
        image: registry.cn-hangzhou.aliyuncs.com/itcast/itcast-service-mesh-movie-web:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 18081
---
3.10.2 Gateway

movie-gateway.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: movie-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: movie
spec:
  hosts:
  - "*"
  gateways:
  - movie-gateway
  http:
  - match:
    - uri:
        prefix: /index
    - uri:
        prefix: /image
    - uri:
        prefix: /js
    route:
    - destination:
        host: movie-web
        port:
          number: 18081
3.10.3 Default routing rules

destination-rule-all.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movie-web
spec:
  host: movie-web
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movie-info
spec:
  host: movie-info
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movie-recommend
spec:
  host: movie-recommend
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movie-rating
spec:
  host: movie-rating
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movie-web
spec:
  host: movie-web
  subsets:
  - name: v1
    labels:
      version: v1
---
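Subsets can also carry their own traffic policy. For instance, a sketch (not part of the project files) that pins the movie-rating v2 subset to round-robin load balancing while leaving v1 on the mesh default:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: movie-rating
spec:
  host: movie-rating
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
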
3.10.4 Sending all traffic to v2

virtual-service-rating-v2.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: movie-rating
spec:
  hosts:
  - movie-rating
  http:
  - route:
    - destination:
        host: movie-rating
        subset: v2
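
Before cutting all traffic over to v2, a gradual canary is often safer. A sketch (not part of the project files) that keeps 90% of the traffic on v1 and sends 10% to v2; shifting the weights step by step completes the rollout:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: movie-rating
spec:
  hosts:
  - movie-rating
  http:
  - route:
    - destination:
        host: movie-rating
        subset: v1
      weight: 90
    - destination:
        host: movie-rating
        subset: v2
      weight: 10
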
3.11 Deployment

Before deploying, it is recommended to rebuild the Kubernetes and Istio environments following the earlier documentation, then start the rollout.

3.11.1 Istio initialization failures

If an error like the following appears:
(figure)
The workaround is:

# note: by default the kiali image is pulled from quay.io, which sometimes fails
# workaround: pull it from the Alibaba Cloud registry and re-tag it
docker pull registry.cn-hangzhou.aliyuncs.com/itcast/kiali:v1.18
docker tag registry.cn-hangzhou.aliyuncs.com/itcast/kiali:v1.18 quay.io/kiali/kiali:v1.18
# note: use kubectl get pod -n istio-system -o wide to find the node the pod is scheduled on, then run the commands above on that machine
3.11.2 Rolling out

mkdir /movie
cd /movie/
# upload destination-rule-all.yaml, movie-gateway.yaml, movie.yaml and virtual-service-rating-v2.yaml to this directory
# enable automatic sidecar injection
kubectl label namespace default istio-injection=enabled
# deploy the application
[root@node1 movie]# kubectl apply -f movie.yaml
service/neo4j-server created
deployment.apps/neo4j-deployment created
service/movie-info created
deployment.apps/movie-info-deployment created
service/movie-recommend created
deployment.apps/movie-recommend-deployment created
service/movie-rating created
deployment.apps/movie-rating-deployment-v1 created
deployment.apps/movie-rating-deployment-v2 created
service/movie-web created
deployment.apps/movie-web-deployment created
# list the pods; you can see they are still initializing
[root@node1 movie]# kubectl get pod -o wide
NAME                                          READY   STATUS            RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
movie-info-deployment-c8f455f69-4hknf         0/2     Init:0/1          0          40s   <none>       node3   <none>           <none>
movie-info-deployment-c8f455f69-psbb4         0/2     Init:0/1          0          40s   <none>       node2   <none>           <none>
movie-rating-deployment-v1-667fb7c8cc-j8ggj   0/2     PodInitializing   0          40s   10.244.1.8   node2   <none>           <none>
movie-rating-deployment-v2-577d48b98f-fg2nj   0/2     Init:0/1          0          40s   <none>       node3   <none>           <none>
movie-recommend-deployment-747d6dc646-8zv7j   0/2     PodInitializing   0          40s   10.244.1.7   node2   <none>           <none>
movie-recommend-deployment-747d6dc646-g9bcb   0/2     PodInitializing   0          40s   10.244.2.7   node3   <none>           <none>
movie-recommend-deployment-747d6dc646-sxbj7   0/2     Init:0/1          0          40s   <none>       node3   <none>           <none>
movie-web-deployment-6996d99464-cc88t         0/2     Init:0/1          0          40s   <none>       node2   <none>           <none>
movie-web-deployment-6996d99464-gjfm6         0/2     Init:0/1          0          40s   <none>       node3   <none>           <none>
movie-web-deployment-6996d99464-t29tz         0/2     Init:0/1          0          40s   <none>       node2   <none>           <none>
neo4j-deployment-6bb95596c8-sld6n             0/2     PodInitializing   0          40s   10.244.2.6   node3   <none>           <none>
# list the services; neo4j-server is of type NodePort and is directly reachable from outside
[root@node1 movie]# kubectl get service -o wide
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE     SELECTOR
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                         3h15m   <none>
movie-info        ClusterIP   10.108.31.87     <none>        18082/TCP                       117s    app=movie-info-app
movie-rating      ClusterIP   10.107.230.15    <none>        18084/TCP                       117s    app=movie-rating-app
movie-recommend   ClusterIP   10.97.144.103    <none>        18083/TCP                       117s    app=movie-recommend-app
movie-web         ClusterIP   10.104.57.57     <none>        18081/TCP                       117s    app=movie-web-app
neo4j-server      NodePort    10.106.196.213   <none>        7474:31001/TCP,7687:31002/TCP   117s    app=neo4j-app

Import the Neo4j data via http://192.168.31.106:31001/browser/
在这里插入图片描述

# check whether the pods are ready
[root@node1 movie]# kubectl get pod -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
movie-info-deployment-c8f455f69-4hknf         2/2     Running   0          10m   10.244.2.9    node3   <none>           <none>
movie-info-deployment-c8f455f69-psbb4         2/2     Running   0          10m   10.244.1.10   node2   <none>           <none>
movie-rating-deployment-v1-667fb7c8cc-j8ggj   2/2     Running   0          10m   10.244.1.8    node2   <none>           <none>
movie-rating-deployment-v2-577d48b98f-fg2nj   2/2     Running   0          10m   10.244.2.11   node3   <none>           <none>
movie-recommend-deployment-747d6dc646-8zv7j   2/2     Running   0          10m   10.244.1.7    node2   <none>           <none>
movie-recommend-deployment-747d6dc646-g9bcb   2/2     Running   0          10m   10.244.2.7    node3   <none>           <none>
movie-recommend-deployment-747d6dc646-sxbj7   2/2     Running   0          10m   10.244.2.8    node3   <none>           <none>
movie-web-deployment-6996d99464-cc88t         2/2     Running   0          10m   10.244.1.11   node2   <none>           <none>
movie-web-deployment-6996d99464-gjfm6         2/2     Running   0          10m   10.244.2.10   node3   <none>           <none>
movie-web-deployment-6996d99464-t29tz         2/2     Running   0          10m   10.244.1.9    node2   <none>           <none>
neo4j-deployment-6bb95596c8-sld6n             2/2     Running   0          10m   10.244.2.6    node3   <none>           <none>
# all pods are Running, so everything is ready
# create the gateway
[root@node1 movie]# kubectl apply -f movie-gateway.yaml
gateway.networking.istio.io/movie-gateway created
virtualservice.networking.istio.io/movie created
# query the istio-ingressgateway entry port
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
[root@node1 movie]# kubectl -n istio-system get service istio-ingressgateway
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.97.58.190   <pending>     15020:31630/TCP,80:31876/TCP,443:31678/TCP,31400:30106/TCP,15443:30236/TCP   3h17m
# 31876 is the NodePort mapped to port 80, i.e. the entry port

Test: http://192.168.31.106:31876/index?id=10574622
(figure)

# apply the default routing rules
[root@node1 movie]# kubectl apply -f destination-rule-all.yaml
destinationrule.networking.istio.io/movie-web created
destinationrule.networking.istio.io/movie-info created
destinationrule.networking.istio.io/movie-recommend created
destinationrule.networking.istio.io/movie-rating created
destinationrule.networking.istio.io/movie-web unchanged
# test again: refresh the page a few times and the rating section alternates between red and black
# to route all traffic to v2, apply the following rule
[root@node1 movie]# kubectl apply -f virtual-service-rating-v2.yaml
virtualservice.networking.istio.io/movie-rating created
# no matter how often you refresh, the rating now stays red, so the v1 rating service could be taken offline
# if the rule is no longer needed, delete it
[root@node1 movie]# kubectl delete -f virtual-service-rating-v2.yaml
virtualservice.networking.istio.io "movie-rating" deleted
# to scale out instances, use Kubernetes as usual; the new instances are equally managed by Istio
kubectl scale deployment/movie-rating-deployment-v2 --replicas=3
# list the pods: movie-rating-deployment-v2 has been scaled out to 3 instances
[root@node1 movie]# kubectl get pod
NAME READY STATUS RESTARTS AGE
movie-info-deployment-c8f455f69-4hknf 2/2 Running 0 30m
movie-info-deployment-c8f455f69-psbb4 2/2 Running 0 30m
movie-rating-deployment-v1-667fb7c8cc-j8ggj 2/2 Running 0 30m
movie-rating-deployment-v2-577d48b98f-d46jj 2/2 Running 0 42s
movie-rating-deployment-v2-577d48b98f-fg2nj 2/2 Running 0 30m
movie-rating-deployment-v2-577d48b98f-kbmbv 2/2 Running 0 42s
movie-recommend-deployment-747d6dc646-8zv7j 2/2 Running 0 30m
movie-recommend-deployment-747d6dc646-g9bcb 2/2 Running 0 30m
movie-recommend-deployment-747d6dc646-sxbj7 2/2 Running 0 30m
movie-web-deployment-6996d99464-cc88t 2/2 Running 0 30m
movie-web-deployment-6996d99464-gjfm6 2/2 Running 0 30m
movie-web-deployment-6996d99464-t29tz 2/2 Running 0 30m
neo4j-deployment-6bb95596c8-sld6n 2/2 Running 0 30m