K8S Learning -- Kubeadm - 4 - Test running Nginx + Tomcat
K8S Learning -- Kubeadm install kubernetes - 1 - Component overview
K8S Learning -- Kubeadm install kubernetes - 2 - Installation and deployment
K8S Learning -- Kubeadm - 3 - Dashboard deployment and upgrade
1.1 Version reference
https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
# Test running nginx and eventually implement dynamic/static separation; Kubernetes 1.17 is used here.
1.2 Environment configuration
Based on the cluster built in the previous articles:
Role         | Hostname    | IP address
k8s-master1  | master1     | 172.20.10.100
k8s-master2  | master2     | 172.20.10.101
k8s-master3  | master3     | 172.20.10.102
k8s-HA-1     | HA-server1  | 172.20.10.22
k8s-HA-2     | HA-server2  | 172.20.10.44
k8s-harbor   | Harbor      | 172.20.10.33
k8s-node1    | Node-1      | 172.20.10.200
k8s-node2    | Node-2      | 172.20.10.201
k8s-node3    | Node-3      | 172.20.10.202
root@master-1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
master-1 Ready master 29h v1.17.4
master-2 Ready master 29h v1.17.4
master-3 Ready master 28h v1.17.4
node-1 Ready <none> 28h v1.17.4
node-2 Ready <none> 28h v1.17.4
node-3 Ready <none> 28h v1.17.4
First pull the required nginx image and push it to the self-hosted Harbor registry.
1.3 Experiment
root@master-1:~# docker pull nginx:1.14.2
root@master-1:~# docker tag nginx:1.14.2 harbor.linux39.com/baseimages/nginx:1.14.2
root@master-1:~# docker push harbor.linux39.com/baseimages/nginx:1.14.2
root@master-1:~# cd /usr/local/src/
root@master-1:/usr/local/src# mkdir kubeadm-nginx
root@master-1:/usr/local/src# cd kubeadm-nginx/
root@master-1:/usr/local/src/kubeadm-nginx# vim nginx-1.14.2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.linux39.com/baseimages/nginx:1.14.2 # custom image address on the self-hosted Harbor
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kaivi-nginx-service-label
  name: magedu-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004 # external access port, opened on every node
  selector:
    app: nginx
Create the pod:
root@master-1:/usr/local/src/kubeadm-nginx# kubectl apply -f nginx-1.14.2.yml
deployment.apps/nginx-deployment created
service/magedu-nginx-service created
Check whether the nginx service is up:
root@master-1:/usr/local/src/kubeadm-nginx# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1-5fcc69db59-jz944 1/1 Running 1 28h
net-test1-5fcc69db59-wzlmg 1/1 Running 1 28h
net-test1-5fcc69db59-xthfd 1/1 Running 1 28h
nginx-deployment-66fc88798-6pnv4 1/1 Running 0 49s
Access the web page via any node IP on port 30004.
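The same check can also be done from a shell (a minimal sketch; 172.20.10.200 is node-1 from the table above, and any node IP works because the NodePort is opened on every node):
kubectl get svc magedu-nginx-service      # confirm NodePort 30004 is bound
curl http://172.20.10.200:30004/          # should return the default nginx page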
Exec into the created nginx pod(s) and create an index.html homepage in each, with content that makes the serving pod identifiable.
The first pod is used as the example here.
root@master-1:/usr/local/src/kubeadm-nginx# kubectl exec -it nginx-deployment-66fc88798-6pnv4 bash
root@nginx-deployment-66fc88798-6pnv4:/# cd /usr/share/nginx/html
root@nginx-deployment-66fc88798-6pnv4:/usr/share/nginx/html# ls
50x.html index.html
root@nginx-deployment-66fc88798-6pnv4:/usr/share/nginx/html# vim index.html
bash: vim: command not found
root@nginx-deployment-66fc88798-6pnv4:/usr/share/nginx/html# echo "pod1 page test" > index.html
root@nginx-deployment-66fc88798-6pnv4:/usr/share/nginx/html# exit
Refresh the web page again.
Add a new listener on the HAProxy load balancer:
root@HA-server1:~# vim /etc/haproxy/haproxy.cfg    # append a new listen section at the end
listen k8s-web-80
  bind 172.20.10.249:80
  mode tcp
  server node1 172.20.10.200:30004 check inter 3s fall 3 rise 5
  server node2 172.20.10.201:30004 check inter 3s fall 3 rise 5
  server node3 172.20.10.202:30004 check inter 3s fall 3 rise 5
A 172.20.10.249 VIP also needs to be created:
root@HA-server1:~# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 56
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.20.10.248 dev eth0 label eth0:1
        172.20.10.249 dev eth0 label eth0:2   # newly added VIP
    }
}
Restart keepalived and haproxy:
root@HA-server1:~# systemctl restart keepalived
root@HA-server1:~# systemctl restart haproxy
root@HA-server1:~# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 2000 172.20.10.248:6443 0.0.0.0:*
LISTEN 0 2000 172.20.10.249:80 0.0.0.0:*
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
Accessing through the VIP also works correctly.
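For example, a quick check from any host that can reach the VIP (a sketch):
curl http://172.20.10.249/          # HAProxy forwards the request to the nginx NodePort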
Add a hostname entry on Windows:
C:\Windows\System32\drivers\etc\hosts
172.20.10.249 www.linux39.com
Tomcat deployment:
root@master-1:~# docker run -it --rm -p 8080:8080 tomcat
......
30-Mar-2020 13:27:18.528 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 4320 ms
30-Mar-2020 13:27:18.646 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
30-Mar-2020 13:27:18.646 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.53
30-Mar-2020 13:27:19.128 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
30-Mar-2020 13:27:19.626 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1097 ms
root@master-1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b885fdda940f tomcat "catalina.sh run" 2 minutes ago Up About a minute 0.0.0.0:8080->8080/tcp sweet_cray
root@master-1:~# docker exec -it b885fdda940f bash    # enter the tomcat container
root@b885fdda940f:/usr/local/tomcat#
root@b885fdda940f:/usr/local/tomcat# cd webapps
root@b885fdda940f:/usr/local/tomcat/webapps# mkdir app
root@b885fdda940f:/usr/local/tomcat/webapps# echo "tomcat app" > app/index.html
root@b885fdda940f:/usr/local/tomcat/webapps#
Access /app on port 8080 of the Tomcat host.
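For example, from another shell on master-1 (a sketch; 172.20.10.100 is master-1's IP from the table above):
curl http://172.20.10.100:8080/app/index.html    # should return "tomcat app"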
Create a Dockerfile:
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# pwd
/usr/local/src/kubeadm-linux39/tomcat-dockerfile
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# vim Dockerfile
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# cat Dockerfile
FROM tomcat
ADD ./app /usr/local/tomcat/webapps/app/
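Note that FROM tomcat pulls whatever :latest points to at build time, so the build is not reproducible; the earlier run log shows Apache Tomcat/8.5.53, so pinning that tag is an option (a sketch, assuming the tomcat:8.5.53 tag is available in the registry being used):
FROM tomcat:8.5.53
ADD ./app /usr/local/tomcat/webapps/app/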
Create a test page on the local machine:
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# mkdir app
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# vim app/index.html
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# cat app/index.html
Tomcat app images page test for kaivi
Pull the base image and build an image tagged for the self-hosted Harbor:
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# docker pull tomcat
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# docker build -t harbor.linux39.com/baseimages/tomcat:app .
Sending build context to Docker daemon 3.584kB
Step 1/2 : FROM tomcat
---> a7fa4ac97be4
Step 2/2 : ADD ./app /usr/local/tomcat/webapps/app/
---> ab563ed5aebe
Successfully built ab563ed5aebe
Successfully tagged harbor.linux39.com/baseimages/tomcat:app
Re-run the tomcat service with the new Harbor image address and check whether it is accessible:
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# docker run -it --rm -p 8080:8080 harbor.linux39.com/baseimages/tomcat:app
......
30-Mar-2020 13:53:01.280 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.53
30-Mar-2020 13:53:01.301 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/app]
30-Mar-2020 13:53:01.517 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/app] has finished in [216] ms
30-Mar-2020 13:53:01.520 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
30-Mar-2020 13:53:01.534 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 269 ms
Access the tomcat web page again:
The access succeeds.
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
harbor.linux39.com/baseimages/tomcat app ab563ed5aebe 5 minutes ago 528MB
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-dockerfile# docker push harbor.linux39.com/baseimages/tomcat:app    # the trailing :app is the image tag
The push refers to repository [harbor.linux39.com/baseimages/tomcat]
f4a0078a4e57: Pushed
690fbbe97481: Pushed
d27e164cc159: Pushed
3c1fd77de487: Pushed
ac3e2c206c49: Pushed
3663b7fed4c9: Pushed
832f129ebea4: Pushed
6670e930ed33: Pushed
c7f27a4eb870: Pushed
e70dfb4c3a48: Pushed
1c76bd0dc325: Pushed
app: digest: sha256:de80cfab99f015db3c47ea33cab64cc4e65dd5d41a147fd6c9fc41fcfaeb69f1 size: 2628
Now that the service runs in Docker, migrate it into a pod. Create the yml file:
root@master-1:~# cd /usr/local/src/kubeadm-linux39/
root@master-1:/usr/local/src/kubeadm-linux39# mkdir tomcat-yml
root@master-1:/usr/local/src/kubeadm-linux39# cd tomcat-yml
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# cp /usr/local/src/kubeadm-nginx/nginx-1.14.2.yml ./nginx-tomcat.yml
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# ll
total 12
drwxr-xr-x 2 root root 4096 Mar 30 22:06 ./
drwxr-xr-x 5 root root 4096 Mar 30 22:00 ../
-rw-r--r-- 1 root root 654 Mar 30 22:06 nginx-tomcat.yml
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# vim nginx-tomcat.yml
# every nginx name needs to be changed to tomcat here, e.g. :%s/nginx/tomcat/g in vim
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# cat nginx-tomcat.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1 # one replica is enough for this experiment; more would just consume resources
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: harbor.linux39.com/baseimages/tomcat:app # the tomcat image just pushed to Harbor
        ports:
        - containerPort: 8080 # tomcat listens on 8080
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kaivi-tomcat-service-label
  name: magedu-tomcat-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080 # forward to tomcat's 8080
    nodePort: 30005 # nodePort must be unique; 30004 is already used by nginx
  selector:
    app: tomcat
Create the pod:
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# kubectl apply -f nginx-tomcat.yml
deployment.apps/tomcat-deployment created
service/magedu-tomcat-service created
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1-5fcc69db59-jz944 1/1 Running 1 29h
net-test1-5fcc69db59-wzlmg 1/1 Running 1 29h
net-test1-5fcc69db59-xthfd 1/1 Running 1 29h
nginx-deployment-66fc88798-6pnv4 1/1 Running 0 111m
tomcat-deployment-7cd955f48c-lthk2 1/1 Running 0 1m
Access port 30005 on any node.
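For example (a sketch; 172.20.10.200 is node-1, any node IP works):
kubectl get svc magedu-tomcat-service             # confirm NodePort 30005 is bound
curl http://172.20.10.200:30005/app/index.html    # should return the page built into the image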
Dynamic/static separation:
Normally the tomcat server does not need to be exposed externally, so the nodePort: 30005 line in the yml file can be commented out. (Note that as long as type: NodePort is kept, Kubernetes will still assign a random NodePort; changing the type to ClusterIP makes the service truly internal-only.)
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# pwd
/usr/local/src/kubeadm-linux39/tomcat-yml
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# vim nginx-tomcat.yml
    #nodePort: 30005
Re-apply the manifest:
root@master-1:/usr/local/src/kubeadm-linux39/tomcat-yml# kubectl apply -f nginx-tomcat.yml
deployment.apps/tomcat-deployment unchanged
service/magedu-tomcat-service configured
Next, nginx needs to forward requests for the dynamic content to the tomcat service:
This can also be done directly through the dashboard by exec-ing into the nginx pod (note that the nginx pod name below differs from the earlier one, so the test index page is written again).
root@nginx-deployment-574b87c764-9m59f:/etc/nginx/conf.d# pwd
/etc/nginx/conf.d
root@nginx-deployment-574b87c764-9m59f:/etc/nginx/conf.d# vim default.conf
root@nginx-deployment-574b87c764-9m59f:/etc/nginx/conf.d# echo "pod for nginx" > /usr/share/nginx/html/index.html
root@nginx-deployment-574b87c764-9m59f:/etc/nginx/conf.d# cat /usr/share/nginx/html/index.html
pod for nginx
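The contents written into default.conf are not shown above; below is a minimal sketch of what the forwarding rule could look like. It assumes the server_name matches the www.linux39.com entry added to the Windows hosts file, and it relies on CoreDNS resolving the bare Service name magedu-tomcat-service (same namespace, service port 80) from inside the nginx pod. Static content keeps being served from /usr/share/nginx/html, while /app is proxied to tomcat; after editing, reload nginx inside the pod with nginx -s reload.
server {
    listen       80;
    server_name  www.linux39.com;

    # static content is served directly by nginx
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    # dynamic requests under /app are forwarded to the tomcat Service
    location /app {
        proxy_pass http://magedu-tomcat-service;
    }
}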
Check access through the web page: the nginx static page and the /app page served by tomcat should now both be reachable through the same entry address.