Table of Contents

Project Description / Features:

Project Environment:

Environment Preparation:

IP Address Plan:

Disable SELinux and firewalld:

Configure static IP addresses:

Set hostnames

Update the system

Add hosts entries

Project Steps:

1. Plan the servers' IP addresses and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes)

Question 1: Why disable the swap partition?

Question 2: What does sysctl do?

Question 3: Why run modprobe br_netfilter?

Question 4: Why enable net.ipv4.ip_forward = 1?

2. Deploy the automation tool Ansible, the firewall server, and the bastion host

Deploy the bastion host (Bastionhost)

Deploy the firewall server

3. Deploy the NFS server to provide data for the whole web cluster

4. Start the MySQL pod to provide database services for the web cluster

5. Deploy Harbor and push images to the Harbor registry

6. Build the web service into an image and deploy it to Kubernetes as a web application

Issues while deploying Metrics Server

Start the web app from a YAML file and expose the service

7. Use Ingress for load balancing

Install the ingress controller

Create and expose pods

Enable the ingress and connect it to the ingress controller + service

View the rules inside the ingress controller

Start the pods

8. Install Prometheus for monitoring

9. Stress testing (to be completed)


Project Description / Features:

Simulate a production Kubernetes environment: deploy web, MySQL, NFS, Harbor, Prometheus, and other applications to build a highly available, high-performance web system, while monitoring resource usage across the whole Kubernetes cluster.

Project Environment:

Kubernetes v1.20/v1.23/v1.25, Docker, CentOS 7.9, Harbor 2.4.1, NFS v4, etc.

Topology diagram (image omitted)

Environment Preparation:

5 fresh Linux servers; disable firewalld and SELinux, configure static IP addresses, set the hostnames, and add hosts entries.

IP Address Plan:

Server              IP
k8s-master          192.168.74.141
k8s-node-1          192.168.74.142
k8s-node-2          192.168.74.143
ansible             192.168.74.144
Bastionhost         192.168.74.145
firewall (ens33)    192.168.74.148
firewall (ens34)    192.168.74.149

Disable SELinux and firewalld:

# Stop the firewall and disable it at boot
service firewalld stop
systemctl disable firewalld


# Temporarily disable SELinux
setenforce 0
 
# Permanently disable SELinux (takes effect after reboot)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config



[root@k8smaster ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8smaster ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8smaster ~]# reboot
[root@k8smaster ~]# getenforce 
Disabled

Configure static IP addresses:

cd /etc/sysconfig/network-scripts/
vim  ifcfg-ens33

#master
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.74.141"
PREFIX=24
GATEWAY="192.168.74.2"
DNS1=114.114.114.114

#node-1
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.74.142"
PREFIX=24
GATEWAY="192.168.74.2"
DNS1=114.114.114.114
 
#node-2 
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.74.143"
PREFIX=24
GATEWAY="192.168.74.2"
DNS1=114.114.114.114

Set hostnames

hostnamectl set-hostname master && bash
hostnamectl set-hostname node-1 && bash
hostnamectl set-hostname node-2 && bash

Update the system

yum update -y

Add hosts entries

vim /etc/hosts

# Add the following entries on the master and both nodes
192.168.74.141 master
192.168.74.142 node-1
192.168.74.143 node-2

Project Steps:

1. Plan the servers' IP addresses and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes)

#1. Set up passwordless SSH between the nodes
ssh-keygen

ssh-copy-id master
ssh-copy-id node-1
ssh-copy-id node-2



#2. Disable the swap partition to improve performance
# Disable temporarily
swapoff -a
# Disable permanently
vim /etc/fstab
# comment out the swap line:
#/dev/mapper/centos-swap swap      swap    defaults        0 0
Question 1: Why disable the swap partition?

Swap is the swap partition: when the machine runs low on memory it swaps to disk, which is much slower. For performance reasons Kubernetes does not allow swap by default. kubeadm checks during initialization whether swap is disabled and fails if it is not. If you do not want to disable swap, you can pass --ignore-preflight-errors=Swap when installing Kubernetes.
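The permanent change can also be made with a one-liner instead of editing the file by hand (a sketch; it assumes the only uncommented /etc/fstab line containing the word "swap" is the swap entry):

# comment out the swap entry, turn swap off, and verify
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab
swapoff -a
free -m    # the Swap row should now show 0 total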

#3. Adjust kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
Question 2: What does sysctl do?

sysctl configures kernel parameters at runtime.

   -p   load settings from the specified file; if no file is given, settings are loaded from /etc/sysctl.conf

Question 3: Why run modprobe br_netfilter?

Loading the br_netfilter kernel module makes traffic that passes through Linux bridges visible to iptables/netfilter. Without it, the net.bridge.bridge-nf-call-* parameters below do not exist and cannot be set. After loading the module, add the following three lines to /etc/sysctl.d/k8s.conf:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1
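A quick way to confirm the module and parameters took effect (verification commands added here for convenience; not part of the original steps):

lsmod | grep br_netfilter                   # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables   # should print ... = 1
sysctl net.ipv4.ip_forward                  # should print ... = 1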

Question 4: Why enable net.ipv4.ip_forward = 1?

For a Linux system to forward packets between interfaces (i.e. act as a router), the kernel parameter net.ipv4.ip_forward must be enabled.

This parameter indicates whether the system currently supports IP forwarding: a value of 0 means IP forwarding is disabled, and 1 means it is enabled.

#4. Configure the Alibaba Cloud (domestic mirror) repo source
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
 
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet ipvsadm


#5. Configure the Alibaba Cloud repo needed to install the Kubernetes components
[root@master .ssh]# vim  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# Copy the Kubernetes repo file from the master to node-1 and node-2
scp /etc/yum.repos.d/kubernetes.repo node-1:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo node-2:/etc/yum.repos.d/



#6. Configure time synchronization
# Install the ntpdate command
yum install ntpdate -y
# Sync with a public NTP server
ntpdate cn.pool.ntp.org
# Turn the time sync into a cron job
crontab -e
* */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org
# Restart the crond service
service crond restart



#7. Install the Docker service
yum install docker-ce-20.10.6 -y
 
# Start Docker and enable it at boot
systemctl start docker && systemctl enable docker.service


#8. Configure Docker registry mirrors and the systemd cgroup driver
vim  /etc/docker/daemon.json 

{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 

systemctl daemon-reload  
systemctl restart docker
systemctl status docker


#9. Install the packages needed to initialize Kubernetes
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

# Enable kubelet at boot
systemctl enable kubelet 

Note: what each package does

kubeadm: the tool used to bootstrap (initialize) the Kubernetes cluster

kubelet: installed on every node of the cluster; responsible for starting Pods

kubectl: the command-line tool used to deploy and manage applications, inspect resources, and create, delete, and update components

#10. Obtain the Kubernetes image archive k8simage-1-20-6.tar.gz
# Upload it to the master, then scp it to node-1 and node-2
scp k8simage-1-20-6.tar.gz root@node-1:/root
scp k8simage-1-20-6.tar.gz root@node-2:/root

docker load -i k8simage-1-20-6.tar.gz


#11. Initialize the Kubernetes cluster with kubeadm
kubeadm config print init-defaults > kubeadm.yaml

# Edit kubeadm.yaml as follows
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.74.141 # IP of the control-plane node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master # hostname of the control-plane node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    # switched to the Alibaba Cloud image registry
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 # pod network CIDR; this line must be added
scheduler: {}
# Append the following lines
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd



#12. Initialize Kubernetes from the kubeadm.yaml file
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

When kubeadm prints "Your Kubernetes control-plane has initialized successfully!", the installation has completed (screenshot omitted).

#13. Set up the kubectl config file; this authorizes kubectl to use the admin certificate to manage the Kubernetes cluster

[root@master ~]#mkdir -p $HOME/.kube
[root@master ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   5m22s   v1.20.6




#14. Scale out the cluster - add the worker nodes
# On the master, print the join command:
[root@master ~]#kubeadm token create --print-join-command
# Output
kubeadm join 192.168.74.141:6443 --token 7jhffy.6u8okuyrgwl5dec9     --discovery-token-ca-cert-hash sha256:dc29b7c1f794be3313e0ae5dce550ec76fdd532529a37e86eec894db4e89cd88

Run the join command above on node-1 and node-2 to add them to the Kubernetes cluster (output omitted).

A ROLES value of <none> means the node is a worker node.

At this point all nodes show NotReady, because the network plugin has not been installed yet.

#15. Label the nodes so their ROLES show as worker
kubectl label node node-1 node-role.kubernetes.io/worker=worker
kubectl label node node-2 node-role.kubernetes.io/worker=worker

#16. Install the Kubernetes network plugin - Calico
# Upload calico.yaml to the master and install the Calico network plugin from the YAML file
kubectl apply -f  calico.yaml

kubectl get pod -n kube-system 
# Check the node status again
kubectl get nodes
# A STATUS of Ready means the Kubernetes cluster is running normally

2. Deploy the automation tool Ansible, the firewall server, and the bastion host

Deploy the automation tool Ansible

# 1. Set up passwordless SSH: generate a key pair on the ansible host
ssh-keygen -t ecdsa


[root@ansible ~]# ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/root/.ssh/id_ecdsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_ecdsa.
Your public key has been saved in /root/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:+iyMCbF0S7/ob+aUTWjVvikqnzWSV4LNFrSPquO2JY8 root@ansible
The key's randomart image is:
+---[ECDSA 256]---+
|        .        |
|       . o       |
|        + .      |
|  o o  * =       |
| . = o+ S +      |
|  o ...O o o     |
|   ..=O.* o      |
|    BB*O o       |
|   +EXB.o        |
+----[SHA256]-----+





# 2. Copy the public key to the root user's home directory on every server
# On every server: make sure sshd is running, port 22 is open, and root login is permitted


# Copy the public key to k8s-master
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.74.141

# Copy the public key to k8s-node-1 and k8s-node-2
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.74.142
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.74.143

# Verify that passwordless key-based authentication works
[root@ansible .ssh]# ssh root@192.168.74.142


# 3. Install ansible on the control node
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum  install ansible -y

# Check the ansible version
[root@ansible ~]# ansible --version


# 4. Write the host inventory
cd /etc/ansible
vim hosts
# Add the following to hosts (more entries to be added later)
[k8smaster]
192.168.74.141
[k8snode]
192.168.74.142
192.168.74.143

# Test: run an ad-hoc command on all hosts
[root@ansible ansible]# ansible all -m shell -a "ip add"
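Ad-hoc commands like the one above can also be turned into a playbook. A minimal sketch (the file name test.yaml and its task are illustrative only, not part of the original setup):

[root@ansible ansible]# cat test.yaml
- hosts: k8snode
  tasks:
    - name: drop a marker file on every worker node
      file:
        path: /tmp/managed_by_ansible
        state: touch
[root@ansible ansible]# ansible-playbook test.yaml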
Deploy the bastion host (Bastionhost)

JumpServer can be installed quickly in just two steps:

Prepare a 64-bit Linux host with at least 2 cores and 4 GB of RAM and Internet access;

As root, run the following one-line command to install JumpServer.

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash

Deploy the firewall server
# Both NICs on the firewall machine (ens33, ens34) use bridged mode
# ens33 is statically configured in the same subnet as the cluster, for internal traffic
# ens34 obtains an external IP address dynamically

# Goals of the SNAT/DNAT setup:
#     outbound traffic from the internal network is masqueraded behind the ens34 address
#     traffic arriving on ens34 from outside is forwarded (DNAT) to the master node


# Script implementing the SNAT/DNAT rules
[root@firewalld ~]# cat snat_dnat.sh 

#!/bin/bash
 
# enable IP forwarding
echo 1 >/proc/sys/net/ipv4/ip_forward
 
# stop and disable firewalld
systemctl   stop  firewalld
systemctl disable firewalld
 
# flush existing iptables rules
iptables -F
iptables -t nat -F
 
# SNAT: masquerade internal addresses behind the external interface (ens34)
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens34 -j MASQUERADE
 
# DNAT: forward incoming requests (ssh on 2233, http on 80) to the master node
iptables -t nat -A PREROUTING -d 10.112.116.102 -i ens34 -p tcp --dport 2233 -j DNAT --to-destination 192.168.74.141:22
iptables -t nat -A PREROUTING -d 10.112.116.102 -i ens34 -p tcp --dport 80 -j DNAT --to-destination 192.168.74.141:80
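After running the script, the rules can be checked and exercised like this (verification commands added here; they assume 10.112.116.102 is the address currently held by ens34):

# list the NAT rules with packet counters
iptables -t nat -L -n -v
# from an external machine, reach the master's sshd through the firewall on port 2233
ssh -p 2233 root@10.112.116.102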







# Script that sets firewall rules restricting access to the services running on this server. The rules are as follows:
# run on the web server (the master node)
[root@k8smaster ~]# cat open_app.sh 
#!/bin/bash
 
# open ssh
iptables -t filter  -A INPUT  -p tcp  --dport  22 -j ACCEPT
 
# open dns
iptables -t filter  -A INPUT  -p udp  --dport 53 -s 192.168.2.0/24 -j ACCEPT
 
# open dhcp 
iptables -t filter  -A INPUT  -p udp   --dport 67 -j ACCEPT
 
# open http/https
iptables -t filter  -A INPUT -p tcp   --dport 80 -j ACCEPT
iptables -t filter  -A INPUT -p tcp   --dport 443 -j ACCEPT
 
# open mysql
iptables  -t filter  -A INPUT -p tcp  --dport 3306  -j ACCEPT
 
# default policy DROP
iptables  -t filter  -P INPUT DROP
 
# drop icmp request
iptables -t filter  -A INPUT -p icmp  --icmp-type 8 -j DROP
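Note that these iptables rules live only in memory. One way to keep them across reboots on CentOS 7 (an extra step, not part of the original write-up) is the iptables-services package:

yum install iptables-services -y
service iptables save        # writes the current rules to /etc/sysconfig/iptables
systemctl enable iptables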

3. Deploy the NFS server to provide data for the whole web cluster

# 1. Set up the NFS server
# nfs-utils is installed on the bastion host (which acts as the NFS server) and on every node inside the Kubernetes cluster
yum install nfs-utils -y

# Start the NFS service
[root@master ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

# Check the process list for nfs-related processes
[root@master ~]# ps aux | grep nfs
root      40406  0.0  0.0      0     0 ?        S<   09:30   0:00 [nfsd4_callbacks]
root      40412  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40413  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40414  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40415  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40416  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40417  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40418  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40419  0.0  0.0      0     0 ?        S    09:30   0:00 [nfsd]
root      40880  0.0  0.0 112828   972 pts/0    R+   09:31   0:00 grep --color=auto nf

# Configure the shared directory
[root@localhost ~]# vim /etc/exports
[root@localhost ~]# cat /etc/exports
# Share /web with the 192.168.74.0/24 network: rw = read/write, no_root_squash = root on the client keeps root privileges on the share, sync = data is written synchronously.
/web   192.168.74.0/24(rw,no_root_squash,sync)



# Create the shared directory and some test content
[root@localhost ~]# mkdir /web
[root@localhost ~]# cd /web/
[root@localhost web]# echo "welcome to changsha" >index.html
[root@localhost web]# ls
index.html
[root@localhost web]# ll -d /web
drwxr-xr-x. 2 root root 24 9月  19 09:39 /web

# Refresh / re-export the shared directories
[root@localhost web]# exportfs -r
[root@localhost web]# exportfs -v
/web          	192.168.74.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)


# Restart NFS and enable it at boot
[root@localhost web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.


# Try mounting the share on node-1 of the Kubernetes cluster
# Problem encountered --> cause: blocked by the firewall --> fix: disable the firewall or open the required ports
[root@node-1 ~]# mkdir /node1_nfs
[root@node-1 ~]# mount 192.168.74.145:/web /node1_nfs
mount.nfs: No route to host

# After fixing the firewall, the mount succeeds
[root@node-1 ~]# mount 192.168.74.145:/web /node1_nfs
[root@node-1 ~]# df -Th|grep nfs
192.168.74.145:/web     nfs4       36G  8.0G   28G   23% /node1_nfs

# Unmount
[root@node-1 ~]# umount /node1_nfs
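If the share should be remounted automatically after a reboot, an /etc/fstab entry on the node is one option (illustrative only; the cluster itself consumes the share through the PV/PVC defined below):

# /etc/fstab
192.168.74.145:/web  /node1_nfs  nfs  defaults,_netdev  0  0
# then remount everything listed in fstab
mount -a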



# Create a PV backed by the shared directory on the NFS server
[root@master pv]# vim nfs-pv.yml
[root@master pv]# cat nfs-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany
  storageClassName: nfs         # storage class name that the PVC will reference
  nfs:
    path: "/web"       # directory shared by the NFS server
    server: 192.168.74.145   # IP address of the NFS server
    readOnly: false   # not read-only
    
[root@master pv]# kubectl apply -f nfs-pv.yml 
persistentvolume/pv-web created   
[root@master pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Available           nfs                     13s




# Create a PVC that binds to the PV
[root@master pv]# vim nfs-pvc.yml
[root@master pv]# cat nfs-pvc.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 1Gi
  storageClassName: nfs # use the PV with the nfs storage class
 
[root@master pv]# kubectl apply -f nfs-pvc.yml 
persistentvolumeclaim/pvc-web created
[root@master pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            15s




# Create pods that use the PVC
[root@master pv]# vim nginx-deployment.yaml 
[root@master pv]# cat nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
 
[root@master pv]# kubectl apply -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created
 
[root@master pv]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-76855d4d79-52495   1/1     Running   0          61s   10.244.84.139   node-1   <none>           <none>
nginx-deployment-76855d4d79-btf92   1/1     Running   0          61s   10.244.84.138   node-1   <none>           <none>
nginx-deployment-76855d4d79-jc28s   1/1     Running   0          61s   10.244.84.137   node-1   <none>           <none>


# Test access
[root@master pv]# curl 10.244.84.139
welcome to changsha
[root@master pv]# curl 10.244.84.138
welcome to changsha
[root@master pv]# curl 10.244.84.137
welcome to changsha

[root@node-1 ~]# curl 10.244.84.137
welcome to changsha
[root@node-1 ~]# curl 10.244.84.138
welcome to changsha
[root@node-1 ~]# curl 10.244.84.139
welcome to changsha

# Append content on the NFS server
[root@localhost web]# cd /web/
[root@localhost web]# ls
index.html
[root@localhost web]# echo "hello,world" >> index.html
[root@localhost web]# cat index.html 
welcome to changsha
hello,world


# Access the pods again
[root@node-1 ~]# curl 10.244.84.139
welcome to changsha
hello,world

4. Start the MySQL pod to provide database services for the web cluster

# Create a mysql directory to keep things organized for later reference
[root@master ~]# mkdir mysql
[root@master ~]# cd mysql/
[root@master mysql]# vim mysql-deployment.yaml
[root@master mysql]# cat mysql-deployment.yaml

# Deployment for MySQL
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.42
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD   
          value: "123456"
        ports:
        - containerPort: 3306
---
# Service for MySQL
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007

# Apply the YAML file
[root@master mysql]# kubectl apply -f mysql-deployment.yaml 
deployment.apps/mysql unchanged
service/svc-mysql created
[root@master mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          7d16h
svc-mysql    NodePort    10.107.193.182   <none>        3306:30007/TCP   27s
[root@master mysql]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
mysql-5f9bccd855-gb69w              1/1     Running   0          5m56s

# Enter the pod and connect to the MySQL database
[root@master mysql]# kubectl exec -it mysql-5f9bccd855-gb69w -- bash 

bash-4.2# mysql -uroot -p123456

mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> exit
Bye
bash-4.2# exit
exit
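Other pods in the cluster can reach this database through the service name svc-mysql on port 3306. A throwaway client pod makes a quick connectivity test (a sketch; the pod name mysql-client is arbitrary):

[root@master mysql]# kubectl run mysql-client --image=mysql:5.7.42 -it --rm --restart=Never -- mysql -h svc-mysql -P 3306 -uroot -p123456
# from outside the cluster, the NodePort works as well: <any-node-ip>:30007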


5. Deploy Harbor and push images to the Harbor registry

# Deploy Harbor on the node-1 machine
# Prerequisite: Docker and docker-compose are installed
# Docker was installed earlier when the Kubernetes cluster was set up; the docker-compose installation is shown below


# Install docker-compose
# The docker-compose binary of the required version was downloaded in advance
chmod +x docker-compose
mv docker-compose /usr/local/sbin/docker-compose


# Install Harbor; harbor-offline-installer-v2.4.1.tgz was downloaded in advance
[root@node-1 ~]#tar xf harbor-offline-installer-v2.4.1.tgz

[root@node-1 ~]# cd harbor
[root@node-1 harbor]# cd harbor/
[root@node-1 harbor]# ls
common  common.sh  docker-compose.yml  harbor.v2.1.0.tar.gz  harbor.yml  install.sh  LICENSE  prepare
[root@node-1 harbor]# pwd
# The path must stay exactly like this, otherwise the installation will fail; alternatively, adjust the path used in the install script
/root/harbor/harbor


# Edit the configuration file
# Set hostname in harbor.yml to the host's IP address and comment out the https section
[root@node-1 harbor]#cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.74.142

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 5000

# https related config
#https:
  # https port for harbor, default is 443
 # port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

# Run the install script
./install.sh

# Configure Harbor to start at boot (via rc.local)
[root@node-1 harbor]# vim /etc/rc.local
[root@node-1 harbor]# cat /etc/rc.local 
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
 
touch /var/lock/subsys/local
/usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d


# Make rc.local executable
[root@node-1 harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local

# Harbor can now be reached in a browser; account: admin, password: Harbor12345
http://192.168.74.142:5000/
# In the web UI, create a new project named test, which is used below to test pushing the nginx image

# Edit the Docker daemon config so the insecure (HTTP) registry is trusted
[root@node-1 harbor]# vim /etc/docker/daemon.json 
{
 "insecure-registries":["192.168.74.142:5000"]
} 
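After changing daemon.json, Docker has to reload it, and pushing requires being logged in to Harbor first (reminder commands; the credentials are the defaults shown above):

[root@node-1 harbor]# systemctl daemon-reload && systemctl restart docker
[root@node-1 harbor]# docker login 192.168.74.142:5000    # admin / Harbor12345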



# Test: tag and push an nginx image to the Harbor registry
[root@node-1 harbor]# docker tag nginx:latest 192.168.74.142:5000/test/nginx1:v1
[root@node-1 harbor]# docker image ls | grep nginx
nginx                                                             latest     605c77e624dd   20 months ago   141MB
192.168.74.142:5000/test/nginx1                                   v1         605c77e624dd   20 months ago   141MB
goharbor/nginx-photon                                             v2.1.0     470ffa4a837e   3 years ago     40.1MB

[root@node-1 harbor]# docker push 192.168.74.142:5000/test/nginx1:v1

If the push succeeds, the nginx image is visible in the Harbor web UI (screenshot omitted).

6. Build the web service into an image and deploy it to Kubernetes as a web application

# First, configure every Kubernetes node to use the Harbor registry
# Harbor is deployed on node-1, so the master is used here for the demonstration
# Edit the Docker config on the master node
[root@master ~]# cat /etc/docker/daemon.json 
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
"insecure-registries":["192.168.74.142:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 

# Reload the configuration and restart the Docker service
[root@master mysql]# systemctl daemon-reload  && systemctl restart docker


# Try logging in to Harbor
[root@master mysql]# docker login 192.168.74.142:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded



# Test: pull the previously pushed nginx image from Harbor
[root@master mysql]# docker pull 192.168.74.142:5000/test/nginx1:v1
v1: Pulling from test/nginx1
a2abf6c4d29d: Pull complete 
a9edb18cadd1: Pull complete 
589b7251471a: Pull complete 
186b1aaa4aa6: Pull complete 
b4df32aa5a72: Pull complete 
a0bcbecc962e: Pull complete 
Digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3
Status: Downloaded newer image for 192.168.74.142:5000/test/nginx1:v1
192.168.74.142:5000/test/nginx1:v1

# List images to confirm the pull succeeded
[root@master mysql]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
192.168.74.142:5000/test/nginx1                                   v1         605c77e624dd   20 months ago   141MB







# Build the image
# The go directory contains the Dockerfile and myweb, a binary compiled from server.go
[root@node-1 go]# cat Dockerfile 
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/myweb"]

[root@node-1 go]# docker build -t myweb:1.1 .

[root@node-1 go]# docker tag myweb:1.1 192.168.74.142:5000/test/web:v2

[root@node-1 go]# docker image ls | grep web
192.168.74.142:5000/test/web                                      v2         55577cce84c2   2 hours ago     215MB
myweb                                                             1.1        55577cce84c2   2 hours ago     215MB

# Push the image to Harbor
[root@node-1 go]# docker push 192.168.74.142:5000/test/web:v2
The push refers to repository [192.168.74.142:5000/test/web]
d3219e169556: Pushed 
abb811c318b3: Pushed 
c662ba1aa3d7: Pushed 
174f56854903: Pushed 
v2: digest: sha256:0957d7b16a89b26ea6a05c8820d5108e1405a637be0dda5dd0c27e2a0fc63476 size: 1153
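Before deploying to Kubernetes, the image can be smoke-tested directly with Docker (illustrative; it assumes the myweb binary listens on port 8000, as the Deployment below does):

[root@node-1 go]# docker run -d --name myweb-test -p 8000:8000 192.168.74.142:5000/test/web:v2
[root@node-1 go]# curl http://127.0.0.1:8000/
[root@node-1 go]# docker rm -f myweb-test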

Issues while deploying Metrics Server
# Install metrics-server (errors were encountered along the way)
# Download the components.yaml manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# The GitHub link above is not always stable/reachable

# Edit the downloaded components.yaml
# Replace the image
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
        args:
#        add the two arguments below
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname

# Apply the manifest
kubectl apply -f components.yaml

# Check the result
kubectl get pod -n kube-system
# The pod starts normally with a Running status

# However, kubectl top node fails; the metrics apiservice shows Available = False
[root@master ~]# kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

metrics-server itself starts successfully, but kubectl top node fails with: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)



# Fix
# Modify the metrics-server Deployment YAML:
 deployment.spec.template.spec.hostNetwork: true
# or
# pin the metrics-server service address and add a route manually.
 
At the same time, add --enable-aggregator-routing=true to the apiserver arguments in /etc/kubernetes/manifests/kube-apiserver.yaml.
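The hostNetwork change can also be applied in place with kubectl patch instead of re-editing components.yaml (a sketch; equivalent to setting the field in the manifest and re-applying):

kubectl patch deployment metrics-server -n kube-system \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'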


# If the command below returns node metrics, metrics-server has been installed successfully
[root@master ~]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8smaster   322m         16%    1226Mi          71%       
k8snode1    215m         10%    874Mi           50%       
k8snode2    190m         9%     711Mi           41% 



# Check the pod and the apiservice to verify that metrics-server is healthy
[root@master ~]# kubectl get pod -n kube-system|grep metrics
metrics-server-6c75959ddf-hw7cs            1/1     Running      35m
 
[root@master ~]# kubectl get apiservice |grep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        35m
 
[root@master ~]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   349m         17%    1160Mi              67%       
node-1    271m         13%    1074Mi              62%       
node-2    226m         11%    1224Mi              71%  
 
 # Check the image on node-1
[root@node-1 ~]# docker images|grep metrics
registry.aliyuncs.com/google_containers/metrics-server            v0.6.0     5787924fe1d8   14 months ago   68.8MB

Start the web app from a YAML file and expose the service
[root@master HPA]# cat my-web.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.74.142:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001
 
[root@master HPA]# kubectl apply -f my-web.yaml 
deployment.apps/myweb created
service/myweb-svc created
 
# Create the HPA (Horizontal Pod Autoscaler)
[root@master HPA]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled
 
[root@master HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-9q85g   1/1     Running   0          9s
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          9s
myweb-6dc7b4dfcb-l7sw7   1/1     Running   0          9s
[root@k8smaster HPA]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d2h
myweb-svc    NodePort    10.102.83.168   <none>        8000:30001/TCP   15s
[root@k8smaster HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   <unknown>/50%   1         10        3          16s
 
# Access the web app via the NodePort
http://192.168.74.142:30001/
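To see the HPA actually scale, some load can be thrown at the NodePort (illustrative; ApacheBench is not installed as part of the original steps):

yum install httpd-tools -y
ab -n 100000 -c 500 http://192.168.74.142:30001/
# in another terminal, watch the replica count grow and later shrink back
kubectl get hpa myweb -w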
 
[root@master HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   1%/50%    1         10        1          40m
 
[root@master HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          48m
 
# Delete the HPA
[root@master HPA]# kubectl delete hpa myweb

7. Use Ingress for load balancing

Install the ingress controller
# scp the images to all of the node servers
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz node-1:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                  100%  276MB 101.1MB/s   00:02       
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-1:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                     100%   47MB  93.3MB/s   00:00    
  
 
# Load the images on every node server
[root@node-1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@node-1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
 
[root@node-1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
nginx                                                                          latest     605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0     ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1     c41e9fcadf5a   20 months ago   47.7MB
 

 
# Create the ingress controller
[root@k8smaster ingress]# kubectl apply -f ingress-controller-deploy.yaml 

namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
 
# View the ingress controller's namespace
[root@master ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   10h
ingress-nginx          Active   30s
kube-node-lease        Active   10h
kube-public            Active   10h
kube-system            Active   10h
 
 
# View the ingress controller's services
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   64s
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      64s
 
# View the ingress controller's pods
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          80s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          80s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          80s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          80s

Create and expose pods
[root@master ~]# cat sc-nginx-svc-1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy
  labels:
    app: sc-nginx-feng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng
  template:
    metadata:
      labels:
        app: sc-nginx-feng
    spec:
      containers:
      - name: sc-nginx-feng
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name:  sc-nginx-svc
  labels:
    app: sc-nginx-svc
spec:
  selector:
    app: sc-nginx-feng
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
 
[root@master ~]# kubectl apply -f sc-nginx-svc-1.yaml 
deployment.apps/sc-nginx-deploy created
service/sc-nginx-svc created
 
[root@master ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
sc-nginx-deploy-7bb895f9f5-hmf2n    1/1     Running   0          7s
sc-nginx-deploy-7bb895f9f5-mczzg    1/1     Running   0          7s
sc-nginx-deploy-7bb895f9f5-zzndv    1/1     Running   0          7s
 
[root@master ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1     <none>        443/TCP   20h
sc-nginx-svc   ClusterIP   10.96.76.55   <none>        80/TCP    26s
 
# Check the Endpoints (pod IP:port) behind the service
[root@master ingress]# kubectl describe svc sc-nginx-svc
 
# Access the IP exposed by the service (its cluster IP)
[root@k8smaster ingress]# curl <the service's cluster IP>

Enable the ingress and connect it to the ingress controller + service
# Start the ingress
[root@master ingress]# cat sc-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx  # annotation associating this Ingress with the ingress controller
spec:
  ingressClassName: nginx  # binds this Ingress to the nginx ingress controller
  rules:
  - host: www.feng.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc
            port:
              number: 80
  - host: www.zhang.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc-2
            port:
              number: 80
 
[root@master ingress]# kubectl apply -f sc-ingress.yaml 
ingress.networking.k8s.io/sc-ingress created
 
# View the ingress
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.74.142,192.168.74.143   80      52s

View the rules inside the ingress controller
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          6m53s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          6m53s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          6m53s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          6m53s
 
[root@master ingress]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-6c8ffbbfcf-bmdj9 -- bash
bash-5.1$ cat nginx.conf |grep feng.com
    ## start server www.feng.com
        server_name www.feng.com ;
    ## end server www.feng.com
bash-5.1$ cat nginx.conf |grep zhang.com
    ## start server www.zhang.com
        server_name www.zhang.com ;
    ## end server www.zhang.com
bash-5.1$ cat nginx.conf|grep -C3 upstream_balancer
      
    error_log  /var/log/nginx/error.log notice;
    
    upstream upstream_balancer {
        server 0.0.0.1:1234; # placeholder
        
# Access the corresponding ports to test the load balancing
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   8m12s
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      8m12s
 
# On another host or on a Windows machine, add hosts entries and access by domain name
vim /etc/hosts
cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.74.142 www.feng.com
192.168.74.143 www.zhang.com
 
# Because the load balancing is configured per domain name, you must access it by domain name in the browser, not by IP address
# The ingress controller load-balances HTTP traffic, i.e. layer-7 load balancing.
 
[root@zabbix ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
 
# Accessing www.zhang.com returns a 503 error: the backend service sc-nginx-svc-2 does not exist yet (it is created in the next step), so the ingress controller has no upstream for that host
[root@zabbix ~]# curl www.zhang.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>


Start the pods
[root@master pv]# pwd
/root/pv
[root@master pv]# ls
nfs-pvc.yml  nfs-pv.yml  nginx-deployment.yml
 
[root@master pv]#cat nfs-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany
  storageClassName: nfs         # storage class name that the PVC will reference
  nfs:
    path: "/web"       # directory shared by the NFS server
    server: 192.168.74.142   # IP address of the NFS server
    readOnly: false   # not read-only
 
[root@master pv]# kubectl apply -f nfs-pv.yml
[root@master pv]# kubectl apply -f nfs-pvc.yml
 
[root@k8smaster pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Bound    default/pvc-web   nfs                     10h
[root@master pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            10h
 
 
[root@master ingress]# cat nginx-deployment-nginx-svc-2.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-2
  template:
    metadata:
      labels:
        app: sc-nginx-feng-2
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
  name:  sc-nginx-svc-2
  labels:
    app: sc-nginx-svc-2
spec:
  selector:
    app: sc-nginx-feng-2
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
 
[root@master ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml 
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created
 
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   24m
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      24m
 
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.74.142,192.168.74.143   80      18m

# Now access port 80 with the two domain names to reach both backends
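The same routing can be checked from the command line without editing hosts files, by sending the Host header explicitly (illustrative; any node IP works):

curl -H "Host: www.feng.com"  http://192.168.74.142/
curl -H "Host: www.zhang.com" http://192.168.74.142/   # should now return the page served from the NFS share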

8. Install Prometheus for monitoring

# Prometheus deployment
# Pull the required images
docker pull prom/node-exporter 
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:6.1.4
 
 # Pull the images on every node of the cluster; only the master is shown here
[root@master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
prom/node-exporter                                                latest     1dbe0e931976   18 months ago   20.9MB
grafana/grafana                                                   6.1.4      d9bdb6044027   4 years ago     245MB
prom/prometheus                                                                v2.0.0     67141fa03496   5 years ago     80.2MB
 
 
# Deploy node-exporter
[root@master prometheus]# cat node-exporter.yaml 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
 
[root@master prometheus]# kubectl apply -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
 
[root@master prometheus]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
kube-system            node-exporter-fcmx5                          1/1     Running     0          50s
kube-system            node-exporter-qccwb                          1/1     Running     0          50s
 
[root@master prometheus]# kubectl get daemonset -A
NAMESPACE     NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node     3         3         3       3            3           kubernetes.io/os=linux   7d
kube-system   kube-proxy      3         3         3       3            3           kubernetes.io/os=linux   7d
kube-system   node-exporter   2         2         2       2            2           <none>                   2m32s
 
[root@master prometheus]# kubectl get service -A
NAMESPACE              NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kube-system            node-exporter                        NodePort    10.111.247.142   <none>        9100:31672/TCP               3m24s
 
# Deploy Prometheus
[root@master prometheus]# cat rbac-setup.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@k8smaster prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
 
[root@k8smaster prometheus]# cat configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:
 
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
 
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
 
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
 
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
 
[root@master prometheus]# kubectl apply -f configmap.yaml
configmap/prometheus-config created
 
[root@master prometheus]# cat prometheus.deploy.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
 
[root@master prometheus]# kubectl apply -f prometheus.deploy.yml
deployment.apps/prometheus created
 
[root@master prometheus]# cat prometheus.svc.yml 
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
[root@k8smaster prometheus]# kubectl apply -f prometheus.svc.yml
service/prometheus created
 
# Deploy Grafana
[root@master prometheus]# cat grafana-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        #volumeMounts:   # not mounting persistent storage for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
        #emptyDir: {}
 
[root@master prometheus]# kubectl apply -f grafana-deploy.yaml
deployment.apps/grafana-core created
 
[root@master prometheus]# cat grafana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core
[root@master prometheus]# kubectl apply -f grafana-svc.yaml 
service/grafana created
[root@master prometheus]# cat grafana-ing.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: grafana
   namespace: kube-system
spec:
   rules:
   - host: k8s.grafana
     http:
       paths:
       - path: /
         backend:
          serviceName: grafana
          servicePort: 3000
 
[root@master prometheus]# kubectl apply -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
 
# Test
[root@master prometheus]# kubectl get pods -A
 [root@k8smaster mysql]# kubectl get svc -A
NAMESPACE              NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kube-system            grafana                              NodePort    10.110.87.158    <none>        3000:31267/TCP               31m
kube-system            node-exporter                        NodePort    10.111.247.142   <none>        9100:31672/TCP               39m
kube-system            prometheus                           NodePort    10.102.0.186     <none>        9090:30003/TCP               32m
 
# Access
# node-exporter UI
http://192.168.74.141:31672/metrics
 
# Prometheus UI
http://192.168.74.141:30003
 
# Grafana UI
http://192.168.74.141:31267
# Initial account / password: admin / admin
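A quick command-line check that the three NodePort endpoints answer (verification commands added for convenience):

curl -s  http://192.168.74.141:31672/metrics | head     # node-exporter metrics
curl -sI http://192.168.74.141:30003/graph              # Prometheus web UI
curl -sI http://192.168.74.141:31267/login              # Grafana login page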

9. Stress testing (to be completed)
