Kubernetes application deployment example: PHP Guestbook application
In my previous article, "Installing a Kubernetes 1.12.3 cluster with kubeadm", we finished building the k8s cluster; now let's start deploying an application.
Following the official Kubernetes document "Example: Deploying PHP Guestbook application with Redis", this article walks through deploying the PHP Guestbook application.
1. Kubernetes cluster architecture
Let's first review the architecture of Kubernetes:
2. Structure of the PHP Guestbook application
- A single-instance Redis master used to store (write) guestbook entries
- Multiple replicated Redis instances used to read guestbook entries
- Multiple frontend web instances
3. Task list
- Prepare all the required Docker images on both master and node1 (to avoid the image-pull error from k8s.gcr.io that shows up in section 3.1):
Error from server (BadRequest): container "master" in pod "redis-master-57fc67768d-nfd88" is waiting to start: trying and failing to pull image
- Start a Redis master
- Start multiple Redis slaves
- Start the guestbook frontend service
- View the frontend service
- Tear down the environment
3.0 Prepare the image files
- Image list
We need three Docker images: redis-master, redis-slave, and the guestbook frontend. Looking at their respective deployment.yaml files (how to obtain these files is covered below), the image list is as follows (a grep one-liner that extracts it is sketched right after the list):
containers:
- name: master
image: k8s.gcr.io/redis:e2e # or just image: redis
containers:
- name: slave
image: gcr.io/google_samples/gb-redisslave:v1
containers:
- name: php-redis
image: gcr.io/google-samples/gb-frontend:v4
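If the three deployment yaml files have already been downloaded (the wget commands for them appear in sections 3.1 to 3.3), the same list can be pulled straight out of the files; this is only a sketch assuming the file names used later in this article:
# List every container image referenced by the three deployments
grep -h "image:" redis-master-deployment.yaml \
                 redis-slave-deployment.yaml \
                 frontend-deployment.yaml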
- Download (the registries are blocked from where I am, so we use mirrors)
We can download the k8s.gcr.io images from the mirrorgooglecontainers organization on Docker Hub:
docker pull mirrorgooglecontainers/redis:e2e
svw@master:~$ docker pull mirrorgooglecontainers/redis:e2e
e2e: Pulling from mirrorgooglecontainers/redis
a3ed95caeb02: Pull complete
7059585c469e: Pull complete
782c76bb9e67: Pull complete
706514fbad74: Pull complete
62f9861bf413: Pull complete
d9a5cf315f9b: Pull complete
43310c2277ff: Pull complete
b3e03532a808: Pull complete
9c59e8378f86: Pull complete
Digest: sha256:66f74d88ea60b7efb8358e859f29aa7129137d4641a9571f14165a70cfb0915a
Status: Downloaded newer image for mirrorgooglecontainers/redis:e2e
According to the README of https://github.com/anjia0532/gcr.io_mirror, the naming convention is:
gcr.io/namespace/image_name:image_tag
# is equivalent to
anjia0532/namespace.image_name:image_tag
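A small shell sketch of that renaming rule (assuming the image path has exactly one namespace level, as in the images used below):
SRC=gcr.io/google-samples/gb-frontend:v4
# Turn "gcr.io/<namespace>/<image>:<tag>" into "anjia0532/<namespace>.<image>:<tag>"
MIRROR=$(echo "$SRC" | sed -E 's#^gcr\.io/([^/]+)/#anjia0532/\1.#')
echo "$MIRROR"    # prints anjia0532/google-samples.gb-frontend:v4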
So we can download the gcr.io images via anjia0532:
docker pull anjia0532/google-samples.gb-redisslave:v1
docker pull anjia0532/google-samples.gb-frontend:v4
svw@master:~$ docker pull anjia0532/google-samples.gb-redisslave:v1
v1: Pulling from anjia0532/google-samples.gb-redisslave
1102f2c5d7dc: Pull complete
4f4fb700ef54: Pull complete
d708c13edccd: Pull complete
5d127b50e8ce: Pull complete
126635d625a2: Pull complete
b3a98f8260db: Pull complete
be96fffd3751: Pull complete
7b9ea9317bce: Pull complete
21a213534980: Pull complete
b7840ba2acc4: Pull complete
Digest: sha256:12e31f7f4f8fa2f63c338bdf3e1b1fe04e95246ebfc5bacd7fa75125e7255a7e
Status: Downloaded newer image for anjia0532/google-samples.gb-redisslave:v1
svw@master:~$ docker pull anjia0532/google-samples.gb-frontend:v4
v4: Pulling from anjia0532/google-samples.gb-frontend
870b960cd011: Pull complete
4f4fb700ef54: Pull complete
49e956485146: Pull complete
123f4a89e8a3: Pull complete
1390c0ae18c3: Pull complete
97e08dceb133: Pull complete
66bc1b9e5a1b: Pull complete
9837f592ee86: Pull complete
2341e9101beb: Pull complete
63864f93a61b: Pull complete
b0914ec4b166: Pull complete
924fe246f99f: Pull complete
ede82e1fc00d: Pull complete
70c58d1c0430: Pull complete
d611a9c4e035: Pull complete
03fb534a580c: Pull complete
65b110c88b43: Pull complete
b9303713687d: Pull complete
71f7e9eba1dc: Pull complete
bded5dd5c9b8: Pull complete
1f4f5e771868: Pull complete
ae8930901656: Pull complete
Digest: sha256:aaa5b327ef3b4cb705513ab674fa40df66981616950c7de4912a621f9ee03dd4
Status: Downloaded newer image for anjia0532/google-samples.gb-frontend:v4
- Retag
svw@master:~$ docker tag mirrorgooglecontainers/redis:e2e k8s.gcr.io/redis:e2e
svw@master:~$ docker tag anjia0532/google-samples.gb-redisslave:v1 gcr.io/google_samples/gb-redisslave:v1
svw@master:~$ docker tag anjia0532/google-samples.gb-frontend:v4 gcr.io/google-samples/gb-frontend:v4
Copy these images from master to node1 (an SSH-based alternative to the shared folder is sketched after the output below):
svw@master:~$ sudo docker save k8s.gcr.io/redis:e2e -o /media/sf_VMshare/redis.tar
[sudo] password for svw:
svw@master:~$ sudo docker save gcr.io/google_samples/gb-redisslave:v1 gcr.io/google-samples/gb-frontend:v4 -o /media/sf_VMshare/guestbook.tar
[sudo] password for svw:
svw@node1:~$ sudo docker load -i /media/sf_VMshare/redis.tar
[sudo] password for svw:
4781101e0522: Loading layer [==================================================>] 197.2 MB/197.2 MB
357b5eff542e: Loading layer [==================================================>] 208.9 kB/208.9 kB
e8061ac24ae3: Loading layer [==================================================>] 4.608 kB/4.608 kB
5f70bf18a086: Loading layer [==================================================>] 1.024 kB/1.024 kB
0d9dceed9901: Loading layer [==================================================>] 231.2 MB/231.2 MB
adb177e98c68: Loading layer [==================================================>] 3.584 kB/3.584 kB
5aac04a72bda: Loading layer [==================================================>] 3.072 kB/3.072 kB
018e5230ea9b: Loading layer [==================================================>] 84.99 kB/84.99 kB
29a374be30cf: Loading layer [==================================================>] 8.852 MB/8.852 MB
Loaded image: k8s.gcr.io/redis:e2e
svw@node1:~$ sudo docker load -i /media/sf_VMshare/guestbook.tar
[sudo] password for svw:
becb2b6a6d5a: Loading layer 13.21 MB/13.21 MB
30a01e9ae3ca: Loading layer 104.4 kB/104.4 kB
31169a6909a6: Loading layer 2.146 MB/2.146 MB
5f70bf18a086: Loading layer 1.024 kB/1.024 kB
df885c3af9c1: Loading layer 9.509 MB/9.509 MB
a6a56651a923: Loading layer 1.536 kB/1.536 kB
7986f971c50f: Loading layer 2.048 kB/2.048 kB
1d697caea41f: Loading layer 2.56 kB/2.56 kB
Loaded image: gcr.io/google_samples/gb-redisslave:v1
c12ecfd4861d: Loading layer 130.9 MB/130.9 MB
5f70bf18a086: Loading layer 1.024 kB/1.024 kB
816f1903c60f: Loading layer 181.1 MB/181.1 MB
79d2abbb4495: Loading layer 17.41 MB/17.41 MB
68bf10b691d4: Loading layer 3.584 kB/3.584 kB
8e1ac573880e: Loading layer 7.856 MB/7.856 MB
0e6095a19b91: Loading layer 7.68 kB/7.68 kB
05dbef3c79d9: Loading layer 9.728 kB/9.728 kB
91f74371a1cf: Loading layer 14.34 kB/14.34 kB
6ac717e42869: Loading layer 4.096 kB/4.096 kB
9a0d67e615b4: Loading layer 171.4 MB/171.4 MB
1590ea6fca45: Loading layer 8.192 kB/8.192 kB
5433617a39a5: Loading layer 3.584 kB/3.584 kB
0df78d5f6966: Loading layer 9.698 MB/9.698 MB
1851913768f8: Loading layer 17 MB/17 MB
8ab02d8c861b: Loading layer 12.8 kB/12.8 kB
4da4caa36d67: Loading layer 2.571 MB/2.571 MB
a4e39fdb0c5b: Loading layer 4.096 kB/4.096 kB
62d128586574: Loading layer 4.096 kB/4.096 kB
0728393ef00e: Loading layer 4.096 kB/4.096 kB
cdc990c9b585: Loading layer 4.096 kB/4.096 kB
3a31f3bf94a2: Loading layer 4.096 kB/4.096 kB
Loaded image: gcr.io/google-samples/gb-frontend:v4
svw@node1:~$ docker images
k8s.gcr.io/redis e2e e5e67996c442 3 years ago 419 MB
gcr.io/google-samples/gb-frontend v4 e2b3e8542af7 2 years ago 512.2 MB
gcr.io/google_samples/gb-redisslave v1 5f026ddffa27 3 years ago 109.5 MB
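The transfer above goes through a VirtualBox shared folder (/media/sf_VMshare). If master can reach node1 over SSH and the svw user on node1 can run docker without sudo, the save/load can also be piped directly; this is only a sketch under those assumptions:
# Stream all three images from master to node1 without an intermediate tar file
docker save k8s.gcr.io/redis:e2e \
            gcr.io/google_samples/gb-redisslave:v1 \
            gcr.io/google-samples/gb-frontend:v4 \
  | ssh svw@node1 'docker load'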
We can also switch the image source by editing the image field in the yaml file.
We used this approach for the frontend container image: the frontend image pulled from anjia0532 did not work properly, so we edited frontend-deployment.yaml and pointed the image at https://hub.docker.com/u/kubeguide.
Note how the original image configuration is changed below.
Before:
containers:
- name: php-redis
image: gcr.io/google-samples/gb-frontend:v4
After:
spec:
containers:
- name: frontend
image: kubeguide/guestbook-php-frontend
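The same edit can be scripted once frontend-deployment.yaml has been downloaded (see section 3.3); a minimal sketch, assuming the file still contains the original image and container name:
# Point the frontend at the Docker Hub image instead of gcr.io
sed -i 's#gcr.io/google-samples/gb-frontend:v4#kubeguide/guestbook-php-frontend#' frontend-deployment.yaml
# Optional: rename the container to match the listing above
sed -i 's#name: php-redis#name: frontend#' frontend-deployment.yaml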
3.1 Start the Redis master
(1) Create the redis-master deployment
- Get the yaml file
It can be fetched directly with wget https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml:
svw@master:~$ wget https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
--2018-12-07 10:48:11-- https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
Resolving k8s.io (k8s.io)... 23.236.58.218
Connecting to k8s.io (k8s.io)|23.236.58.218|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://kubernetes.io/examples/application/guestbook/redis-master-deployment.yaml [following]
--2018-12-07 10:48:13-- https://kubernetes.io/examples/application/guestbook/redis-master-deployment.yaml
Resolving kubernetes.io (kubernetes.io)... 45.54.44.102
Connecting to kubernetes.io (kubernetes.io)|45.54.44.102|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 586 [application/x-yaml]
Saving to: ‘redis-master-deployment.yaml’
redis-master-deployment.yaml 100%[================================================================================================================>] 586 --.-KB/s in 0s
2018-12-07 10:48:15 (9.95 MB/s) - ‘redis-master-deployment.yaml’ saved [586/586]
The file contents are as follows:
svw@master:~$ cat redis-master-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: redis-master
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master
image: k8s.gcr.io/redis:e2e # or just image: redis
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# Using `GET_HOSTS_FROM=dns` requires your cluster to
# provide a dns service. As of Kubernetes 1.3, DNS is a built-in
# service launched automatically. However, if the cluster you are using
# does not have a built-in DNS service, you can instead
# access an environment variable to find the master
# service's host. To do so, comment out the 'value: dns' line above, and
# uncomment the line below:
# value: env
ports:
- containerPort: 6379
- Run the create command
svw@master:~$ kubectl apply -f redis-master-deployment.yaml
deployment.apps/redis-master created
Or apply it directly from the URL:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
- Check whether the redis-master pod is running. Clearly the image cannot be pulled, which is exactly the annoyance mentioned in the task list:
svw@master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-57fc67768d-zr5zl 0/1 ErrImagePull 0 59s
- Check the logs
svw@master:~$ kubectl logs -f redis-master-57fc67768d-zr5zl
Error from server (BadRequest): container "master" in pod "redis-master-57fc67768d-zr5zl" is waiting to start: trying and failing to pull image
svw@master:~$ kubectl describe pod redis-master-57fc67768d-zr5zl
Name:               redis-master-57fc67768d-zr5zl
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node1/192.168.122.30
Start Time:         Fri, 07 Dec 2018 11:50:47 +0800
Labels:             app=redis
                    pod-template-hash=57fc67768d
                    role=master
                    tier=backend
Annotations:        <none>
Status:             Pending
IP:                 10.244.1.37
Controlled By:      ReplicaSet/redis-master-57fc67768d
Containers:
  master:
    Container ID:
    Image:          k8s.gcr.io/redis:e2e
    Image ID:
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6qbd7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-6qbd7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6qbd7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       3m25s                  default-scheduler  Successfully assigned default/redis-master-57fc67768d-zr5zl to node1
  Warning  Failed          3m20s                  kubelet, node1     Failed to pull image "k8s.gcr.io/redis:e2e": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v1/_ping: read tcp 10.0.2.6:39310->74.125.23.82:443: read: connection reset by peer
  Warning  Failed          3m1s (x2 over 3m20s)   kubelet, node1     Error: ErrImagePull
  Warning  Failed          3m1s                   kubelet, node1     Failed to pull image "k8s.gcr.io/redis:e2e": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v1/_ping: read tcp 10.0.2.6:39314->74.125.23.82:443: read: connection reset by peer
  Normal   SandboxChanged  3m (x5 over 3m20s)     kubelet, node1     Pod sandbox changed, it will be killed and re-created.
  Warning  Failed          2m47s (x6 over 3m17s)  kubelet, node1     Error: ImagePullBackOff
  Normal   BackOff         2m47s (x6 over 3m17s)  kubelet, node1     Back-off pulling image "k8s.gcr.io/redis:e2e"
  Normal   Pulling         2m34s (x3 over 3m26s)  kubelet, node1     pulling image "k8s.gcr.io/redis:e2e"
  Warning  Failed          2m28s                  kubelet, node1     Failed to pull image "k8s.gcr.io/redis:e2e": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v1/_ping: read tcp 10.0.2.6:39340->74.125.23.82:443: read: connection reset by peer
This makes it perfectly clear: "Successfully assigned default/redis-master-57fc67768d-zr5zl to node1", and node1 cannot pull the image. So before this step we must have the relevant Docker images ready on node1, which is exactly what section 3.0 (prepare the image files) does.
After completing section 3.0, check the redis-master pod again and everything is normal; a quick functional check is sketched after the output below.
svw@master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-57fc67768d-zr5zl 1/1 Running 0 17m
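As a quick smoke test (assuming redis-cli is present inside the image, which it is for the stock Redis image; substitute your own pod name):
# Ping the Redis server inside the master pod; a healthy server answers PONG
kubectl exec -it redis-master-57fc67768d-zr5zl -- redis-cli ping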
(2) Create the redis-master service
- Get the yaml file
wget https://k8s.io/examples/application/guestbook/redis-master-service.yaml
- Run the create command
svw@master:~$ kubectl apply -f redis-master-service.yaml
service/redis-master created
- Check whether the redis-master service is running (a DNS resolution check is sketched at the end of this subsection)
svw@master:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
redis-master ClusterIP 10.100.206.174 <none> 6379/TCP 17h
- Check the redis-master pod logs
svw@master:~$ kubectl logs redis-master-57fc67768d-zr5zl
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 2.8.19 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in stand alone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 1
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
[1] 20 Dec 09:37:15.732 # Server started, Redis version 2.8.19
[1] 20 Dec 09:37:15.741 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
[1] 20 Dec 09:37:15.741 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[1] 20 Dec 09:37:15.741 * The server is now ready to accept connections on port 6379
[1] 20 Dec 09:50:46.890 * Slave 10.244.1.252:6379 asks for synchronization
[1] 20 Dec 09:50:46.890 * Full resync requested by slave 10.244.1.252:6379
[1] 20 Dec 09:50:46.890 * Starting BGSAVE for SYNC with target: disk
[1] 20 Dec 09:50:46.890 * Background saving started by pid 16
[1] 20 Dec 09:50:46.892 * Slave 10.244.1.251:6379 asks for synchronization
[1] 20 Dec 09:50:46.892 * Full resync requested by slave 10.244.1.251:6379
[1] 20 Dec 09:50:46.892 * Waiting for end of BGSAVE for SYNC
[16] 20 Dec 09:50:46.909 * DB saved on disk
[16] 20 Dec 09:50:46.909 * RDB: 6 MB of memory used by copy-on-write
[1] 20 Dec 09:50:46.961 * Background saving terminated with success
[1] 20 Dec 09:50:46.962 * Synchronization with slave 10.244.1.252:6379 succeeded
[1] 20 Dec 09:50:46.962 * Synchronization with slave 10.244.1.251:6379 succeeded
[1] 21 Dec 08:04:29.521 * 1 changes in 900 seconds. Saving...
[1] 21 Dec 08:04:29.521 * Background saving started by pid 17
[17] 21 Dec 08:04:29.553 * DB saved on disk
[17] 21 Dec 08:04:29.553 * RDB: 4 MB of memory used by copy-on-write
[1] 21 Dec 08:04:29.622 * Background saving terminated with success
[1] 21 Dec 08:19:30.093 * 1 changes in 900 seconds. Saving...
[1] 21 Dec 08:19:30.094 * Background saving started by pid 18
[18] 21 Dec 08:19:30.109 * DB saved on disk
[18] 21 Dec 08:19:30.110 * RDB: 4 MB of memory used by copy-on-write
[1] 21 Dec 08:19:30.196 * Background saving terminated with success
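Before moving on to the slaves, it is worth confirming that cluster DNS resolves the new service name, since the slaves and the frontend rely on GET_HOSTS_FROM=dns. A sketch using a throwaway busybox pod (the busybox image must be pullable or pre-loaded on the nodes; the pod name dns-test is arbitrary):
# Resolve the redis-master service name via the cluster DNS
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis-master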
3.2 Start the Redis slaves
(1) Create the redis-slave deployment
- Get the yaml file
It can be fetched directly with wget https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml.
The file contents are as follows:
svw@master:~$ cat redis-slave-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: redis-slave
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: slave
tier: backend
replicas: 2
template:
metadata:
labels:
app: redis
role: slave
tier: backend
spec:
containers:
- name: slave
image: gcr.io/google_samples/gb-redisslave:v1
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# Using `GET_HOSTS_FROM=dns` requires your cluster to
# provide a dns service. As of Kubernetes 1.3, DNS is a built-in
# service launched automatically. However, if the cluster you are using
# does not have a built-in DNS service, you can instead
# access an environment variable to find the master
# service's host. To do so, comment out the 'value: dns' line above, and
# uncomment the line below:
# value: env
ports:
- containerPort: 6379
- Run the create command
svw@master:~$ kubectl apply -f redis-slave-deployment.yaml
deployment.apps/redis-slave created
- Check whether the redis-slave pods are running
svw@master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-57fc67768d-zr5zl 1/1 Running 2 4h29m
redis-slave-57f9f8db74-7m5w6 1/1 Running 0 3h28m
redis-slave-57f9f8db74-w84fw 1/1 Running 0 3h28m
- Check the logs (omitted; a quick replication check is sketched below)
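A quick way to confirm the slaves actually connected to the master (assuming redis-cli exists in the slave image; substitute one of your own slave pod names):
# "role:slave" and "master_link_status:up" indicate replication is working
kubectl exec -it redis-slave-57f9f8db74-7m5w6 -- redis-cli info replication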
(2) Create the redis-slave service
- Get the yaml file
wget https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
- Run the create command
svw@master:~$ kubectl apply -f redis-slave-service.yaml
service/redis-slave created
- Check whether the redis-slave service is running
svw@master:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
redis-master ClusterIP 10.100.206.174 <none> 6379/TCP 17h
redis-slave ClusterIP 10.98.90.234 <none> 6379/TCP 17h
3.3 Start the guestbook frontend service
(1) Create the frontend deployment
- Get the yaml file
It can be fetched directly with wget https://k8s.io/examples/application/guestbook/frontend-deployment.yaml.
As mentioned in section 3.0, here we switch the image source by editing the image field in the yaml file.
svw@master:~$ cat frontend-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: frontend
labels:
app: guestbook
spec:
selector:
matchLabels:
app: guestbook
tier: frontend
replicas: 3
template:
metadata:
labels:
app: guestbook
tier: frontend
spec:
containers:
# - name: php-redis
- name: frontend
# image: gcr.io/google-samples/gb-frontend:v4
image: kubeguide/guestbook-php-frontend
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# Using `GET_HOSTS_FROM=dns` requires your cluster to
# provide a dns service. As of Kubernetes 1.3, DNS is a built-in
# service launched automatically. However, if the cluster you are using
# does not have a built-in DNS service, you can instead
# access an environment variable to find the master
# service's host. To do so, comment out the 'value: dns' line above, and
# uncomment the line below:
# value: env
ports:
- containerPort: 80
- Run the create command
svw@master:~$ kubectl apply -f frontend-deployment.yaml
deployment.apps/frontend created
- Check whether the frontend pods are running
svw@master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-654c699bc8-7jds2 1/1 Running 0 16s
frontend-654c699bc8-jrccw 1/1 Running 0 16s
frontend-654c699bc8-mztnt 1/1 Running 0 16s
redis-master-57fc67768d-zr5zl 1/1 Running 2 4h40m
redis-slave-57f9f8db74-7m5w6 1/1 Running 0 3h39m
redis-slave-57f9f8db74-w84fw 1/1 Running 0 3h39m
- Check the logs (omitted; a simple HTTP smoke test is sketched below)
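Before exposing the frontend as a service, a simple HTTP check against one of the pods can confirm the page renders; this is only a sketch, and the pod name must be replaced with one from your own cluster:
# Forward local port 8080 to a frontend pod, then fetch the guestbook page
kubectl port-forward frontend-654c699bc8-7jds2 8080:80 &
sleep 2 && curl -s http://localhost:8080 | head -n 5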
(2) Create the frontend service
- Get the yaml file
wget https://k8s.io/examples/application/guestbook/frontend-service.yaml
svw@master:~$ cat frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# comment or delete the following line if you want to use a LoadBalancer
type: NodePort
# if your cluster supports it, uncomment the following to automatically create
# an external load-balanced IP for the frontend service.
# type: LoadBalancer
ports:
- port: 80
selector:
app: guestbook
tier: frontend
- Run the create command
svw@master:~$ kubectl apply -f frontend-service.yaml
service/frontend created
- Check whether the frontend service is running (a sketch after the pod listing below shows how to read the assigned NodePort programmatically)
svw@master:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend NodePort 10.103.216.141 <none> 80:31955/TCP 7s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
redis-master ClusterIP 10.100.206.174 <none> 6379/TCP 18h
redis-slave ClusterIP 10.98.90.234 <none> 6379/TCP 17h
- Check the created pods
svw@master:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
busybox 1/1 Running 3 3h55m 10.244.1.253 node1 <none>
frontend-654c699bc8-8j6rk 1/1 Running 0 166m 10.244.1.2 node1 <none>
frontend-654c699bc8-tgq7g 1/1 Running 0 166m 10.244.1.3 node1 <none>
frontend-654c699bc8-vzrx9 1/1 Running 0 166m 10.244.1.254 node1 <none>
redis-master-67bb458fd6-2p5st 1/1 Running 0 20h 10.244.1.250 node1 <none>
redis-slave-57f9f8db74-lrt88 1/1 Running 0 20h 10.244.1.251 node1 <none>
redis-slave-57f9f8db74-qlsfv 1/1 Running 0 20h 10.244.1.252 node1 <none>
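The NodePort that Kubernetes assigned above (31955 here) can also be read programmatically, which is handy in scripts:
# Print only the NodePort of the frontend service
kubectl get service frontend -o jsonpath='{.spec.ports[0].nodePort}'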
4. View the frontend service
From another machine on the same network, open http://192.168.122.20:31955 (a node IP plus the NodePort assigned above).
Or access the ClusterIP http://10.103.216.141 directly from a cluster node.
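The same check can be done from the command line; a sketch assuming curl is installed and that the guestbook page title contains "Guestbook":
# Via the NodePort, from any machine that can reach the node
curl -s http://192.168.122.20:31955 | grep -i guestbook
# Via the ClusterIP, from a cluster node
curl -s http://10.103.216.141 | grep -i guestbook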
5. Tear down the environment
svw@master:~$ kubectl delete deployment -l app=redis
deployment.extensions "redis-master" deleted
deployment.extensions "redis-slave" deleted
svw@master:~$ kubectl delete service -l app=redis
service "redis-master" deleted
service "redis-slave" deleted
svw@master:~$ kubectl delete deployment -l app=guestbook
deployment.extensions "frontend" deleted
svw@master:~$ kubectl delete service -l app=guestbook
service "frontend" deleted
Deleting the deployments and services removes the corresponding pods as well. Two deployments carry app=redis here: one for the master and one for the slave.
svw@master:~$ kubectl get pods
No resources found.
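For reference, the four delete commands above can also be combined into a single call using a set-based label selector; a sketch:
# Delete every deployment and service labeled app=redis or app=guestbook
kubectl delete deployment,service -l 'app in (redis, guestbook)'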