Building the K8S Platform

1. Install the K8S cluster
(1) Configure the yum repository
On all nodes, upload the provided archive K8S.tar.gz to the /root directory and extract it.

[root@master ~]# tar -zxvf K8S.tar.gz

Configure a local yum repository on all nodes.

[root@master ~]# cat /etc/yum.repos.d/local.repo
[kubernetes]
name=kubernetes
baseurl=file:///root/Kubernetes
gpgcheck=0
enabled=1

(2) Upgrade the system kernel
Upgrade the system (including the kernel) on all nodes.

[root@master ~]# yum upgrade -y

(3) Configure host name mapping
On all nodes, edit the /etc/hosts file.

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.20.10 master
192.168.20.20 node

(4) Configure the firewall and SELinux
Configure the firewall and SELinux on all nodes.

[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@master ~]# iptables -F
[root@master ~]# iptables -X
[root@master ~]# iptables -Z
[root@master ~]# /usr/sbin/iptables-save
[root@master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@master ~]# reboot                  // the configuration takes effect after reboot

(5) Disable swap
Kubernetes aims to pack instances as close to 100% utilization as possible, and every deployment should be pinned to CPU and memory limits. If the scheduler sends a Pod to a machine, that machine should never use swap, because swapping slows everything down. Disabling swap is therefore mainly a performance consideration.
Disable swap on all nodes.

[root@master ~]# swapoff -a
[root@master ~]# sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
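
As a quick optional sanity check (not part of the original steps), swap should now report a size of zero and the swap entry in /etc/fstab should be commented out:

[root@master ~]# free -m | grep -i swap     # the Swap line should show 0 total
[root@master ~]# grep swap /etc/fstab       # the centos-swap line should now start with '#'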

(6) Configure time synchronization
Install the chrony service on all nodes.

[root@master ~]# yum install -y chrony

On the master node, edit the /etc/chrony.conf file: comment out the default NTP servers, configure the master itself as the time source, and allow the other nodes to synchronize from it.

[root@master ~]# sed -i 's/^server/#&/' /etc/chrony.conf
[root@master ~]# cat >> /etc/chrony.conf << EOF
local stratum 10
server master iburst
allow all
EOF

On the master node, enable chronyd to start on boot, restart the service, and turn on network time synchronization.

[root@master ~]# systemctl enable chronyd && systemctl restart chronyd
[root@master ~]# timedatectl set-ntp true

On the node, edit the /etc/chrony.conf file to point at the internal master node as the upstream NTP server, then enable the service to start on boot and restart it.

[root@node ~]# sed -i 's/^server/#&/' /etc/chrony.conf
[root@node ~]# echo server 192.168.20.10 iburst >> /etc/chrony.conf  // the IP is the master node's address
[root@node ~]# systemctl enable chronyd && systemctl restart chronyd

Run the chronyc sources command on all nodes; if the output contains a line beginning with "^*", synchronization has succeeded.

[root@master ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address            Stratum Poll Reach LastRx Last sample
==================================================================
^* master                       10   6    77    7   +13ns[-2644ns] +/-   13us
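
For more detail, the standard chronyc tracking command can be used as an optional extra check; on the node it should report the master as its reference source:

[root@node ~]# chronyc tracking     # 'Reference ID' should resolve to the master and 'Leap status' should read 'Normal'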

(7) Configure routing and forwarding
Some users on RHEL/CentOS 7 have reported traffic being routed incorrectly because iptables was bypassed, so forwarding must be enabled on every node.
On all nodes, create the file /etc/sysctl.d/K8S.conf and add the following content.

[root@master ~]# cat << EOF | tee /etc/sysctl.d/K8S.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/K8S.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

(8) Configure IPVS
IPVS has been merged into the mainline kernel, so the following kernel modules need to be loaded to enable IPVS support for kube-proxy.
Perform the following on all nodes.

[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures that the required modules are loaded automatically after a node reboot. Use the lsmod | grep -e ip_vs -e nf_conntrack_ipv4 command to verify that the required kernel modules have been loaded correctly.

[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh              12688  0 
ip_vs_wrr             12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack           139224  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

Install the ipset and ipvsadm packages on all nodes.

[root@master ~]# yum install ipset ipvsadm -y

(9) Install Docker
Kubernetes' default container runtime is still Docker, using the dockershim CRI implementation built into the kubelet. Note that Kubernetes 1.14 supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09, so Docker 18.09 is used here throughout.
Install Docker on all nodes, start the Docker engine, and enable it to start on boot.

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io -y
[root@master ~]# mkdir -p /etc/docker
[root@master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# ./kubernetes_base.sh
[root@master ~]# docker info |grep Cgroup
 Cgroup Driver: systemd

(10) Install the tools
The kubelet is responsible for communicating with the rest of the cluster and for managing the lifecycle of the Pods and containers on its node. Kubeadm is Kubernetes' automated deployment tool, which lowers the difficulty of deployment and improves efficiency. Kubectl is the Kubernetes cluster management tool.
Install the Kubernetes tools on all nodes and start the kubelet.

[root@master ~]# yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
// It is normal for the kubelet to fail to start at this point; it will start successfully after the cluster is initialized.
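
If you want to see why the kubelet is not yet running, its unit status and journal can be inspected (an optional check); the complaint about the missing /var/lib/kubelet/config.yaml goes away once kubeadm init has run:

[root@master ~]# systemctl status kubelet --no-pager    # expect 'activating (auto-restart)' before initialization
[root@master ~]# journalctl -u kubelet --no-pager | tail -n 5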

(11) Initialize the Kubernetes cluster
Log in to the master node and initialize the Kubernetes cluster.

[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.20.10 --kubernetes-version="v1.14.1" --pod-network-cidr=10.16.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.24.2.10]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.24.2.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.24.2.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.502812 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qf4lef.d83xqvv00l1zces9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.10:6443 --token qf4lef.d83xqvv00l1zces9 \
    --discovery-token-ca-cert-hash sha256:ec7c7db41a13958891222b2605065564999d124b43c8b02a3b32a6b2ca1a1c6c  // record this line; it is used when joining the node to the cluster.

The initialization goes through the following 15 phases; the output for each phase begins with [phase-name]:
- [init]: initialize with the specified version.
- [preflight]: run pre-initialization checks and download the required Docker images.
- [kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without this file the kubelet cannot start, which is why the kubelet failed to start before initialization.
- [certs]: generate the certificates used by Kubernetes and store them in the /etc/kubernetes/pki directory.
- [kubeconfig]: generate the kubeconfig files and store them in the /etc/kubernetes directory; the components use these files to communicate with each other.
- [control-plane]: install the master components from the YAML files in the /etc/kubernetes/manifests directory.
- [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
- [wait-control-plane]: wait for the master components deployed by the control-plane phase to start.
- [apiclient]: check the health of the master components.
- [upload-config]: upload the configuration used for initialization.
- [kubelet]: configure the kubelet via a ConfigMap.
- [patchnode]: record CNI information on the Node by means of annotations.
- [mark-control-plane]: label the current node with the master role and the NoSchedule taint, so that by default the master node is not used to run Pods.
- [bootstrap-token]: generate the token; record it, as it is needed later when adding nodes to the cluster with kubeadm join.
- [addons]: install the add-ons CoreDNS and kube-proxy.
By default, kubectl looks for its config file in the .kube directory under the home directory of the user running it, so configure the kubectl tool as follows.

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
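
At this point kubectl should be able to talk to the API server; a quick optional check is to list the nodes (the master may show NotReady until the network add-on is installed in a later step):

[root@master ~]# kubectl get nodes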

(12) Check the cluster status.

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

(13) Configure the Kubernetes network
Log in to the master node and deploy the flannel network.

[root@master ~]# kubectl apply -f yaml/kube-flannel.yaml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-c2t9g         1/1     Running   0          109m
coredns-8686dcc4fd-tdd8l         1/1     Running   0          109m
etcd-master                      1/1     Running   0          108m
kube-apiserver-master            1/1     Running   0          108m
kube-controller-manager-master   1/1     Running   0          108m
kube-flannel-ds-amd64-65n9h      1/1     Running   0          30s
kube-proxy-l8w4q                 1/1     Running   0          109m
kube-scheduler-master            1/1     Running   0          108m

(14) Join the node to the cluster
Log in to the node and use the kubeadm join command to add it to the cluster.

[root@node ~]# kubeadm join 192.168.20.10:6443 --token qf4lef.d83xqvv00l1zces9 --discovery-token-ca-cert-hash sha256:ec7c7db41a13958891222b2605065564999d124b43c8b02a3b32a6b2ca1a1c6c 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Log in to the master node and check the status of each node.

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   117m    v1.14.1
node     Ready    <none>   6m30s   v1.14.1

(15) Install the Dashboard
Install the Dashboard with the kubectl apply command.

[root@master ~]# kubectl apply -f yaml/kubernetes-dashboard.yaml 
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
[root@master ~]# kubectl create -f yaml/dashboard-adminuser.yaml 
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created

Check the status of all Pods.

[root@master ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-8jqzh                1/1     Running   23         33d
coredns-8686dcc4fd-dkbhw                1/1     Running   23         33d
etcd-master                             1/1     Running   17         33d
kube-apiserver-master                   1/1     Running   19         33d
kube-controller-manager-master          1/1     Running   9          33d
kube-flannel-ds-amd64-49ssg             1/1     Running   4          33d
kube-flannel-ds-amd64-rt5j8             1/1     Running   2          33d
kube-proxy-frz2q                        1/1     Running   2          33d
kube-proxy-xzq4t                        1/1     Running   3          33d
kube-scheduler-master                   1/1     Running   9          33d
kubernetes-dashboard-5f7b999d65-djgxj   1/1     Running   3          33d
kuboard-b8999d698-b576t                 1/1     Running   2          33d

(16) Click "Advanced" → "Accept the Risk and Continue" to reach the Kubernetes Dashboard login page; logging in to the Kubernetes Dashboard requires entering a token, which is obtained with the command below, as shown in Figure 1-2.

Figure 1-2 Kubernetes Dashboard
(17) Obtain the authentication token for accessing the Dashboard with the following command; after authenticating you will see the console, as shown in Figure 1-4.

[root@master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')
Name:         kubernetes-dashboard-admin-token-j5dvd
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: 1671a1e1-cbb9-11e9-8009-ac1f6b169b00

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1qNWR2ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE2NzFhMWUxLWNiYjktMTFlOS04MDA5LWFjMWY2YjE2OWIwMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.u6ZaVO-WR632jpFimnXTk5O376IrZCCReVnu2Brd8QqsM7qgZNTHD191Zdem46ummglbnDF9Mz4wQBaCUeMgG0DqCAh1qhwQfV6gVLVFDjHZ2tu5yn0bSmm83nttgwMlOoFeMLUKUBkNJLttz7-aDhydrbJtYU94iG75XmrOwcVglaW1qpxMtl6UMj4-bzdMLeOCGRQBSpGXmms4CP3LkRKXCknHhpv-pqzynZu1dzNKCuZIo_vv-kO7bpVvi5J8nTdGkGTq3FqG6oaQIO-BPM6lMWFeLEUkwe-EOVcg464L1i6HVsooCESNfTBHjjLXZ0WxXeOOslyoZE7pFzA0qg


Figure 1-4 Kubernetes console

(18) Configure Kuboard
Kuboard is a free graphical management tool for Kubernetes that aims to help users quickly bring microservices onto Kubernetes. Log in to the master node and deploy Kuboard using the kuboard.yaml file.

[root@master ~]# kubectl create -f yaml/kuboard.yaml 
deployment.apps/kuboard created
service/kuboard created
serviceaccount/kuboard-user created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-user created
serviceaccount/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer-node created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer-pvp created
ingress.extensions/kuboard created
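
Before opening the browser, you can optionally confirm that the Kuboard Pod is running and that its Service exposes NodePort 31000 (the kube-system namespace matches the Pod listing shown earlier):

[root@master ~]# kubectl get pods -n kube-system | grep kuboard
[root@master ~]# kubectl get svc -n kube-system | grep kuboard    # should show a NodePort mapping to 31000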

Open http://192.168.20.10:31000 in a browser to reach the Kuboard login page, shown in Figure 1-5; after entering the token in the Token field you can access the Kuboard console, shown in Figure 1-6.

Figure 1-5 Kuboard login page


Figure 1-6 Kuboard console

Lab 3: Basic use of the K8S platform
1. Deploy the WordPress service
The benefits of running WordPress on Kubernetes are obvious. First, installation is very simple (given an existing cluster); second, reliability is higher; third, the service can scale up and down. Being able to migrate more easily between clouds is, of course, also an important point.
Running WordPress on Kubernetes is a classic example of a scalable service on a cloud-native cluster, and an excellent one to learn from.
(1) Create a namespace
Create a blog namespace; the application will be deployed into this namespace.

[root@master ~]# kubectl create namespace blog
namespace/blog created

(2) Write the YAML file
Write the YAML file wordpress-db.yaml.

[root@master ~]# vi wordpress-db.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql-deploy
  namespace: blog
  labels:
    app: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
          name: dbport
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassW0rd
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          value: wordpress
        volumeMounts:
        - name: db
          mountPath: /var/lib/mysql
      volumes:
      - name: db
        hostPath:
          path: /var/lib/mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: blog
spec:
  selector:
    app: mysql
  ports:
  - name: mysqlport
    protocol: TCP
    port: 3306
    targetPort: dbport

Then create the resources defined in the wordpress-db.yaml file above.

[root@master ~]# kubectl create -f wordpress-db.yaml 
deployment.apps/mysql-deploy created
service/mysql created

Then check the details of the Service.

[root@master ~]#  kubectl describe svc mysql -n blog
Name:              mysql
Namespace:         blog
Labels:            <none>
Annotations:       <none>
Selector:          app=mysql
Type:              ClusterIP
IP:                10.96.21.56
Port:              mysqlport  3306/TCP
TargetPort:        dbport/TCP
Endpoints:         10.16.1.16:3306
Session Affinity:  None
Events:            <none>

You can see that the Endpoints section has matched one Pod and that a ClusterIP of 10.96.21.56 has been assigned; the MySQL service can now be reached through this ClusterIP on the defined port 3306, as sketched below.
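
As an optional check (a sketch, not part of the original steps), a throwaway MySQL client Pod can be used to confirm that the Service answers on the ClusterIP and that the wordpress user can log in; the Pod name mysql-client is arbitrary, and an empty result with no error is a pass:

[root@master ~]# kubectl run mysql-client -n blog --rm -it --restart=Never --image=mysql:5.6 -- \
    mysql -h10.96.21.56 -P3306 -uwordpress -pwordpress wordpress -e 'SHOW TABLES;'
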
(3) Create the WordPress service
Create the WordPress service, defining WordPress as a Deployment object (plus a Service) in wordpress.yaml.

[root@master ~]# vi wordpress.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wordpress-deploy
  namespace: blog
  labels:
    app: wordpress
spec:
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: wdport
        env:
        - name: WORDPRESS_DB_HOST
          value: 10.96.21.56:3306
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          value: wordpress
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: blog
spec:
  type: NodePort
  selector:
    app: wordpress
  ports:
  - name: wordpressport
    protocol: TCP
    port: 80
    targetPort: wdport 

Note: the attribute type: NodePort must be added. Then create the resources from the wordpress.yaml file.

[root@master ~]# kubectl create -f wordpress.yaml
deployment.apps/wordpress-deploy created
service/wordpress created
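
Before moving on, it may help to confirm that the Deployment's Pod is up (an optional check); STATUS should read Running:

[root@master ~]# kubectl get pods -n blog -l app=wordpress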

(4) Write the YAML file
Write the YAML file wordpress-pod.yaml.

[root@master ~]# cat wordpress-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  namespace: blog
spec:
  containers:
  - name: wordpress
    image: wordpress
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: wdport
    env:
    - name: WORDPRESS_DB_HOST
      value: localhost:3306
    - name: WORDPRESS_DB_USER
      value: wordpress
    - name: WORDPRESS_DB_PASSWORD
      value: wordpress
  - name: mysql
    image: mysql:5.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 3306
      name: dbport
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: rootPassW0rd
    - name: MYSQL_DATABASE
      value: wordpress
    - name: MYSQL_USER
      value: wordpress
    - name: MYSQL_PASSWORD
      value: wordpress
    volumeMounts:
    - name: db
      mountPath: /var/lib/mysql
  volumes:
  - name: db
    hostPath:
      path: /var/lib/mysql

Note: a data volume mount is configured for the MySQL container here so that MySQL's data is persisted on the node; this way the data is not lost the next time the MySQL container restarts.
Create the Pod.

[root@master ~]# kubectl create -f wordpress-pod.yaml 
pod/wordpress created
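
Because the volume is a hostPath, the MySQL data directory should appear on whichever node the Pod was scheduled to. As an optional check, assuming the Pod landed on the node host:

[root@master ~]# kubectl get pod wordpress -n blog -o wide    # the NODE column shows where the Pod is running
[root@node ~]# ls /var/lib/mysql                              # the MySQL data files should be present on that node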

(5) Access the service
Check the Services.

[root@master ~]# kubectl get svc -n blog
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
mysql       ClusterIP   10.96.21.56     <none>        3306/TCP       23m
wordpress   NodePort    10.107.243.36   <none>        80:32002/TCP   21m

Open the page using the master node's IP address plus the NodePort shown for the wordpress Service, and verify that you can log in, as shown in Figure 1-7.
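
The same endpoint can also be probed from the command line with curl; the port 32002 below is the NodePort reported by kubectl get svc above and will differ from cluster to cluster:

[root@master ~]# curl -I http://192.168.20.10:32002    # any HTTP response (e.g. a redirect to the install page) shows WordPress is up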

Figure 1-7 Access page
