Contents

Project Framework

Project Description

Project Environment

I. Prepare the Environment

1.1 Prepare the servers needed for the k8s cluster

1.2 Use kubeadm to install a single-master k8s cluster (1 master + 2 worker nodes)

1. Set up passwordless SSH between nodes

2. Disable swap

3. Tune kernel parameters

4. Update and configure package repositories

5. Enable IPVS support

6. Configure time synchronization

7. Install Docker

8. Configure the k8s package repository

9. Initialize the master node

II. Use the master node as the Ansible control server

1. Set up passwordless SSH

1.1 Test that passwordless SSH works

1.2 Set up passwordless SSH to all other servers

2. Install Ansible on the control node

2.1 Configure the host inventory

2.2 Test that the Ansible server can manage all servers

III. Deploy GitLab, Jenkins, and Harbor

1. Deploy GitLab

2. Deploy Jenkins

2.1 Download the official k8s deployment YAML files

2.2 Create the namespace

2.3 Create the service account, cluster role, and binding

2.4 Create the volume used to store data

2.5 Deploy Jenkins

2.6 Get the login password from inside the pod

3. Deploy Harbor

3.1 Configure the Alibaba Cloud repo source

3.2 Install Docker

3.3 Install docker compose and upload it

3.4 Download the installer from the Harbor website

IV. Deploy the NFS server and create nfs-pv and nfs-pvc

1. Install nfs-utils

2. Verify the NFS mount on the web machines

3. Create the PV

4. Create a PVC that uses the PV

5. Create a Deployment that starts multiple pod replicas using the PVC

V. Build a web image, push it to Harbor, and use HPA for automatic scaling

1. Build the web image

1.1 Pull the image from the Harbor server on both worker nodes

2. Install metrics-server

2.1 Download the components.yaml manifest

2.2 Modify the components.yaml manifest

2.3 Run the install command

2.4 Start the web service from a YAML file and expose it

3. Create the HPA

3.1 Access the service

VI. Start a MySQL pod to provide database services for the web application

VII. Use probes to monitor the web pods and restart them immediately on failure

VIII. Use Ingress for load balancing

1. Prepare the environment

1.1 Upload the installation packages

1.2 Distribute them to each node

1.3 Import the images on the worker nodes

1.4 Create the resources required by the ingress controller

1.5 Check that the ingress controller resources were created

2 Create pods and expose them as a service

3 Enable Ingress to link the ingress controller and the service

3.1 Create a YAML file to start the Ingress

3.2 View the Ingress

3.3 Access via domain name from another host or a Windows machine

IX. Install Prometheus to monitor the entire cluster's resources

1. Pull the images on all nodes in advance

2. Deploy node-exporter as a DaemonSet

3. Deploy Prometheus

4. Deploy Grafana

X. Stress-test the whole k8s cluster and related servers




Project Framework

Project Description

Simulate an enterprise web application deployed on k8s: web service, MySQL, NFS, Harbor, Zabbix, Prometheus, GitLab, Jenkins, and an Ansible environment, ensuring high availability of the web service.

Project Environment

CentOS 7.9, Nginx 1.25, ansible 2.9.27, Prometheus 2.34.0, grafana 10.0.0, ssh, docker 20.10.6, mysql 5.7.42, dashboard v2.5.0, docker compose 2.18.1, kubernetes 1.23.6, Calico 3.23, Zabbix 5.0, ingress-nginx-controller 1.1.0, kube-webhook-certgen 1.1.0, Jenkins, Gitlab 16.0.4-jh, metrics-server 0.6.0

I. Prepare the Environment

1.1 Prepare the servers needed for the k8s cluster

1. Disable the firewall and SELinux
        #systemctl stop firewalld

        #systemctl disable firewalld

        #setenforce 0

        #sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

2. Configure static IP addresses

        #vim /etc/sysconfig/network-scripts/ifcfg-ens33

#

master:
BOOTPROTO="static"
DEFROUTE="yes"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.181"
NETMASK="255.255.255.0"
GATEWAY="192.168.10.2"
DNS1="114.114.114.114"

node1:

BOOTPROTO="static"
DEFROUTE="yes"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.180"
NETMASK="255.255.255.0"
GATEWAY="192.168.10.2"
DNS1="114.114.114.114"

node2:

BOOTPROTO="static"
DEFROUTE="yes"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.182"
NETMASK="255.255.255.0"
GATEWAY="192.168.10.2"
DNS1="114.114.114.114"

#

        #systemctl restart network

3. Set the hostnames

  1. hostnamectl set-hostname k8s-master

  2. hostnamectl set-hostname k8s-node1

  3. hostnamectl set-hostname k8s-node2

4. Add hosts entries (required on all three servers)

vim /etc/hosts
 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.181 k8s-master
192.168.10.180 k8s-node1
192.168.10.182 k8s-node2

1.2 Use kubeadm to install a single-master k8s cluster (1 master + 2 worker nodes)

1. Set up passwordless SSH between the nodes

ssh-keygen    # press Enter at every prompt

ssh-copy-id k8s-master

ssh-copy-id k8s-node1

ssh-copy-id k8s-node2

2. Disable swap

Temporarily:

swapoff -a

Permanently:

sed -i '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab
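
You can confirm swap is off with free; the Swap line should show all zeros:

free -m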

3. Tune kernel parameters

a. Load the br_netfilter and overlay modules so IPv4 traffic is forwarded and iptables can see bridged traffic

modprobe br_netfilter
modprobe overlay

b. Make the modules load automatically at boot

cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF

c. Check that the br_netfilter and overlay modules loaded successfully

lsmod | grep -e br_netfilter -e overlay

br_netfilter 22256 0

bridge 151336 1 br_netfilter

overlay 91659 0

d. Create /etc/sysctl.d/k8s.conf and add the following settings

cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

e. Apply the sysctl parameters without rebooting
sysctl --system
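
To confirm the settings took effect, the individual keys can also be queried directly (all three should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward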

4. Update and configure package repositories

a. Add the Alibaba Cloud yum repository

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

b. Rebuild the yum metadata cache

yum clean all && yum makecache

c. Install basic packages

yum install -y vim wget

d. Configure the Alibaba Cloud Docker yum repository

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

5. Enable IPVS support

a. Install ipset and ipvsadm

yum install -y ipset ipvsadm

b. Write the required modules into a script file so they are loaded automatically after a node reboot

cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

c. Make the script executable

chmod +x /etc/sysconfig/modules/ipvs.modules

d. Run the script

/bin/bash /etc/sysconfig/modules/ipvs.modules

e. Check that the modules loaded successfully

lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 15053 24
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 105
ip_vs 145497 111 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 139264 10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack

6. Configure time synchronization

a. Start and enable the chronyd service

systemctl start chronyd && systemctl enable chronyd

b. Set the time zone

timedatectl set-timezone Asia/Shanghai

7. Install Docker

a. Install Docker
yum install -y docker-ce-20.10.24-3.el7 docker-ce-cli-20.10.24-3.el7 containerd.io

b. Create the configuration directory

mkdir -p /etc/docker

c. Write the daemon configuration

cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://youraddr.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://docker.mirrors.ustc.edu.cn"
],
"insecure-registries": ["harbor ip:port"],
"exec-opts": ["native.cgroupdriver=systemd"],
"data-root": "/opt/lib/docker"
}
EOF
# Notes: JSON does not allow comments, so keep them out of the file itself.
# Optionally replace youraddr.mirror.aliyuncs.com with your own Alibaba Cloud mirror accelerator address,
# set "insecure-registries" to your private Harbor address, and point "data-root" at a suitable Docker storage path.

d. Start Docker and enable it at boot

Start Docker:

systemctl start docker

Enable Docker to start at boot:

systemctl enable docker

Verify:

systemctl status docker

8. Configure the k8s package repository

a. Configure the repository

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

b. Build the local yum cache

yum makecache

c. Install kubeadm, kubelet, and kubectl

yum install -y kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0 --disableexcludes=kubernetes

d. Edit the kubelet configuration

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF

e. Enable kubelet at boot

systemctl enable --now kubelet

9. Initialize the master node

a. Run on the master

kubeadm init \
--kubernetes-version=v1.23.17 \
--pod-network-cidr=10.224.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.10.181 \
--image-repository=registry.aliyuncs.com/google_containers
# --apiserver-advertise-address is the master's IP

On success, the output ends with a join command and token:

kubeadm join 192.168.10.181:6443 --token nyjc81.tvhtt2h67snpkf48 \
--discovery-token-ca-cert-hash sha256:cf8458e93e3510cf77dd96a73d39acd3f6284034177f8bad4d8452bb7f5f6e62
# save this command for later
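
If this token is lost or expires (the default TTL is 24 hours), a fresh join command can be generated on the master at any time:

kubeadm token create --print-join-command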

b. Next, allow the current user to talk to the cluster with kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

c. Join the worker nodes to the cluster

Run the join command obtained above on each node:

kubeadm join 192.168.10.181:6443 --token nyjc81.tvhtt2h67snpkf48 \
--discovery-token-ca-cert-hash sha256:cf8458e93e3510cf77dd96a73d39acd3f6284034177f8bad4d8452bb7f5f6e62

d. Check the node status on the master

[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   53m   v1.23.17
k8s-node1    NotReady   <none>                 1m    v1.23.17
k8s-node2    NotReady   <none>                 1m    v1.23.17

e. Assign the worker role (run on the master)

kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker

# Note: the node names k8s-node1 and k8s-node2 must match the hostnames recorded in /etc/hosts

f. Install the Calico network plugin (run on the master)

kubectl apply -f https://docs.projectcalico.org/archive/v3.25/manifests/calico.yaml

Verify: the node status changes from NotReady to Ready

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   2h   v1.23.17
k8s-node1    Ready    worker                 1h    v1.23.17
k8s-node2    Ready    worker                 1h    v1.23.17

g. Configure kube-proxy to use IPVS

kubectl edit configmap kube-proxy -n kube-system

Change the mode setting (around line 44 of the file):

mode: "ipvs"

Delete all kube-proxy pods so they restart with the new mode

kubectl delete pods -n kube-system -l k8s-app=kube-proxy

# Verify: list the pods in the kube-system namespace
kubectl get pods -n kube-system
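
Once the kube-proxy pods are back up, the active IPVS rule set can also be listed with ipvsadm (installed in step 5); seeing virtual servers for the service IPs confirms IPVS mode is in effect:

ipvsadm -Ln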

II. Use the master node as the Ansible control server

1. Set up passwordless SSH

[root@k8s-master ~]#ssh-keygen

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.180

Are you sure you want to continue connecting (yes/no)? yes

1.1 Test that passwordless SSH works

[root@k8s-master ~]#ssh 'root@192.168.10.180'

Last login: Sat Sep 23 09:22:11 2023 from 192.168.10.180

[root@web1 ~]# exit    # log out

1.2 Set up passwordless SSH to all other servers

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.180

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.182

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.183

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.184

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.185

[root@k8s-master ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub  root@192.168.10.186

2. Install Ansible on the control node

[root@k8s-master ~]#yum install -y ansible

2.1 Configure the host inventory

[root@k8s-master ~]# vim /etc/ansible/hosts

Add groups:

[k8s]
192.168.10.180
192.168.10.182
[harbor]
192.168.10.183
[gitlab]
192.168.10.184
[zabbix]
192.168.10.185

2.2 Test that the Ansible server can manage all servers

[root@k8s-master ~]# ansible all -m shell  -a 'ip add'

192.168.10.180 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b8:b9:fe brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.180/24 brd 192.168.10.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::d2cc:c11d:55fc:392c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
Output like this from every host means it is working.

III. Deploy GitLab, Jenkins, and Harbor

1. Deploy GitLab

1.1 Download and install JiHu GitLab

On CentOS 7, the commands below open HTTP, HTTPS, and SSH access in the system firewall. This step is optional and can be skipped if you only plan to access JiHu GitLab from the local network.

sudo yum install -y curl policycoreutils-python openssh-server perl
sudo systemctl enable sshd
sudo systemctl start sshd
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo systemctl reload firewalld

Run the following command to configure the JiHu GitLab package repository mirror.

[root@gitlab~]# curl -fsSL https://get.gitlab.cn | /bin/bash

==> Detected OS centos

==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo

[gitlab-jh]

name=JiHu GitLab

baseurl=https://packages.gitlab.cn/repository/el/$releasever/

gpgcheck=1

gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key

priority=1

enabled=1

==> Generate yum cache for gitlab-jh

==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".

1.2 Edit the repo file and change gpgcheck=1 to gpgcheck=0 to skip GPG checking.

[root@db-mysql yum.repos.d]# vim /etc/yum.repos.d/gitlab-jh.repo

[gitlab-jh]

name=JiHu GitLab

baseurl=https://packages.gitlab.cn/repository/el/$releasever/

gpgcheck=0

gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key

priority=1

enabled=1

[root@db-mysql yum.repos.d]#

Next, run the following command to start the installation:

[root@db-mysql ~]#  sudo EXTERNAL_URL="https://gitlab.sc.com" yum install -y gitlab-jh

Start, stop, or restart the service:

[root@db-mysql yum.repos.d]# sudo gitlab-ctl start|stop|restart

1.3 Access over HTTPS

username: root

password: stored in /etc/gitlab/initial_root_password

[root@gitlab ~]# cat /etc/gitlab/initial_root_password 
# WARNING: This value is valid only in the following conditions
#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
#          2. Password hasn't been changed manually, either via UI or via command line.
#
#          If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

Password: IAV1U+sn64VAos5Rzs+IuCl0QA6qIWOKzoj1Ox2I4kQ=

# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.

1.4 Create a new project to test GitLab

Generate an SSH key and add it to GitLab to authorize the connection.

Install git on another machine and test:

[root@k8s-master]# yum install git -y

Use the IP address rather than the domain name when cloning:

[root@k8s-master gitlab]# git clone http://192.168.10.184/root/wangyong-homework.git

Cloning into 'wangyong-homework'...

remote: Enumerating objects: 3, done.

remote: Counting objects: 100% (3/3), done.

remote: Compressing objects: 100% (2/2), done.

remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0

Unpacking objects: 100% (3/3), done.

[root@k8s-master gitlab]# git clone  git@192.168.10.184:root/wangyong-homework.git

Cloning into 'wangyong-homework'...

The authenticity of host '192.168.10.184 (192.168.10.184)' can't be established.

ECDSA key fingerprint is SHA256:HNcJ/a8fFGYJ1ui9ID/nWjDyqNGLujKJsytiak58/iw.

ECDSA key fingerprint is MD5:fc:52:fe:58:bf:40:14:a3:6f:22:64:bc:4d:8f:ae:8e.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.10.184' (ECDSA) to the list of known hosts.

remote: Enumerating objects: 3, done.

remote: Counting objects: 100% (3/3), done.

remote: Compressing objects: 100% (2/2), done.

remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0

Receiving objects: 100% (3/3), done.

2. Deploy Jenkins

2.1 Download the official k8s deployment YAML files

[root@k8s-master ~]# mkdir /jenkins

[root@k8s-master ~]# cd /jenkins/

[root@k8s-master jenkins]# yum install git -y

[root@k8s-master jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins

[root@k8s-master jenkins]# ls

kubernetes-jenkins

[root@k8s-master jenkins]# cd kubernetes-jenkins/

[root@k8s-master kubernetes-jenkins]# ls

deployment.yaml  namespace.yaml  README.md  serviceAccount.yaml  service.yaml  volume.yaml

2.2 Create the namespace

[root@k8s-master kubernetes-jenkins]# cat namespace.yaml

apiVersion: v1

kind: Namespace

metadata:

  name: devops-tools

[root@k8s-master kubernetes-jenkins]# kubectl apply -f namespace.yaml

namespace/devops-tools created

[root@k8s-master kubernetes-jenkins]# kubectl get ns | grep devops-tools

devops-tools      Active   46s

2.3 Create the service account, cluster role, and binding

[root@k8s-master kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml

clusterrole.rbac.authorization.k8s.io/jenkins-admin created

serviceaccount/jenkins-admin created

clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created

[root@k8s-master kubernetes-jenkins]# kubectl get sa -n devops-tools

NAME            SECRETS   AGE

default         1         6h27m

jenkins-admin   1         55s

2.4 Create the volume used to store data

[root@k8smaster kubernetes-jenkins]# cat volume.yaml

kind: StorageClass

apiVersion: storage.k8s.io/v1

metadata:

  name: local-storage

provisioner: kubernetes.io/no-provisioner

volumeBindingMode: WaitForFirstConsumer

---

apiVersion: v1

kind: PersistentVolume

metadata:

  name: jenkins-pv-volume

  labels:

    type: local

spec:

  storageClassName: local-storage

  claimRef:

    name: jenkins-pv-claim

    namespace: devops-tools

  capacity:

    storage: 10Gi

  accessModes:

    - ReadWriteOnce

  local:

    path: /mnt

  nodeAffinity:

    required:

      nodeSelectorTerms:

      - matchExpressions:

        - key: kubernetes.io/hostname

          operator: In

          values: 

          - k8s-node1   # change this to the name of a worker node in your cluster

[root@k8s-master kubernetes-jenkins]# kubectl apply -f volume.yaml

storageclass.storage.k8s.io/local-storage created

persistentvolume/jenkins-pv-volume created

persistentvolumeclaim/jenkins-pv-claim created

[root@k8s-master kubernetes-jenkins]# kubectl get pv

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE

jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            20s

wy-pv               10Gi       RWX            Retain           Bound    default/wy-pvc                  nfs                      7d22h

2.5 Deploy Jenkins

[root@k8s-master kubernetes-jenkins]# kubectl apply -f deployment.yaml

deployment.apps/jenkins created

[root@k8s-master kubernetes-jenkins]# kubectl get deployment -n devops-tools

NAME      READY   UP-TO-DATE   AVAILABLE   AGE

jenkins   1/1     1            0           3m43s

# Expose the service

[root@k8s-master kubernetes-jenkins]# kubectl apply -f service.yaml

service/jenkins-service created

[root@k8s-master kubernetes-jenkins]# kubectl get svc -n devops-tools

NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE

jenkins-service   NodePort   10.104.209.34   <none>        8080:32000/TCP   20s

2.6 Get the login password from inside the pod

[root@k8-smaster kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q  -n devops-tools -- bash

bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword

b0232e2dad164f89ad2221e4c46b0d46

Access Jenkins from a Windows machine using a node IP plus the NodePort:
http://192.168.10.180:32000/login?from=%2F

3. Deploy Harbor

3.1 Configure the Alibaba Cloud repo source

[root@localhost ~]# hostnamectl set-hostname harbor

[root@localhost ~]# su

su

[root@harbor ~]#

[root@harbor ~]# yum install -y yum-utils

[root@harbor ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.2 Install Docker

[root@harbor ~]#yum install docker-ce-20.10.6 -y

[root@harbor ~]# systemctl start docker && systemctl enable docker.service

[root@harbor ~]# docker version

Client: Docker Engine - Community

 Version:           26.0.0

 API version:       1.41 (downgraded from 1.45)

 Go version:        go1.21.8

 Git commit:        2ae903e

 Built:             Wed Mar 20 15:21:09 2024

 OS/Arch:           linux/amd64

 Context:           default

3.3 Install docker compose (upload the binary)

https://github.com/docker/compose/releases/download/v2.7.0/docker-compose-linux-x86_64

[root@localhost ~]# ls

anaconda-ks.cfg  docker-compose-linux-x86_64_(1)  harbor  harbor-offline-installer-v2.8.3.tgz

[root@localhost ~]# chmod +x docker-compose-linux-x86_64_\(1\)

[root@localhost ~]# mv docker-compose-linux-x86_64_\(1\)  /usr/local/sbin/docker-compose

3.4 Download the installer from the Harbor website


[root@localhost ~]# ls

anaconda-ks.cfg  harbor-offline-installer-v2.8.3.tgz

a. Extract the archive

[root@localhost ~]# tar -xvf harbor-offline-installer-v2.8.3.tgz

b. Edit the configuration file

[root@localhost ~]# cd harbor

[root@localhost harbor]# ls

common.sh  harbor.v2.8.3.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare

[root@localhost harbor]# mv harbor.yml.tmpl  harbor.yml

[root@localhost harbor]# vim harbor.yml

# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.

# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.

hostname: 192.168.10.183  # change to this host's IP address

# http related config

http:

  # port for http, default is 80. If https enabled, this port will redirect to https port

  port: 5000  # change to a different port

# https can be disabled entirely

# https related config

#https:

  # https port for harbor, default is 443

  #port: 443

  # The path of cert and key files for nginx

  #certificate: /your/certificate/path

  #private_key: /your/private/key/path

# # Uncomment following will enable tls communication between all harbor components

# internal_tls:

#   # set enabled to true means internal tls is enabled

#   enabled: true

#   # put your cert and key files on dir

#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy

# And when it enabled the hostname will no longer used

# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin

# It only works in first time to install harbor

# Remember Change the admin password from UI after launching Harbor.

harbor_admin_password: 123456  # admin login password

c. Run the install script

[root@harbor harbor]# ./install.sh

[+] Running 9/10

 ⠇ Network harbor_harbor        Created                                                                                            7.8s

 ✔ Container harbor-log         Started                                                                                            1.3s

 ✔ Container redis              Started                                                                                            4.3s

 ✔ Container registry           Started                                                                                            3.9s

 ✔ Container registryctl        Started                                                                                            4.3s

 ✔ Container harbor-db          Started                                                                                            3.8s

 ✔ Container harbor-portal      Started                                                                                            3.8s

 ✔ Container harbor-core        Started                                                                                            5.2s

 ✔ Container harbor-jobservice  Started                                                                                            7.3s

 ✔ Container nginx              Started                                                                                            7.4s

✔ ----Harbor has been installed and started successfully.----

3.5 Access the Harbor web UI

http://192.168.10.183:5000

In Harbor, create a project named k8s.

Create a new user wy with the password Wy123456.

Grant the user wy access to the k8s project with the project admin role.

Configure the other worker nodes to use this registry:

[root@docker1 ~]# cat /etc/docker/daemon.json

{

"registry-mirrors": ["https://registry.docker-cn.com"],

"insecure-registries" : ["192.168.10.183:5000"]

}
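
After changing daemon.json, Docker on each node needs a restart for the insecure-registry setting to take effect, and each node has to log in to Harbor before it can pull images. A minimal sketch, using the wy account created above:

systemctl daemon-reload && systemctl restart docker
docker login 192.168.10.183:5000 -u wy -p Wy123456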

IV. Deploy the NFS server and create nfs-pv and nfs-pvc

1. Install nfs-utils

[root@k8s-master ~]# yum install -y nfs-utils

Create the shared directory:

[root@k8s-master ~]# mkdir /web/html -p

[root@k8s-master ~]# echo "this is test" > /web/html/index.html

Export the shared directory:

[root@k8s-master ~]# echo "/web/html  192.168.10.0/24(rw,sync)" > /etc/exports

[root@k8s-master ~]# exportfs -rv

exporting 192.168.10.0/24:/web/html

Set permissions:

[root@k8s-master ~]# chmod 777 /web/html

[root@k8s-master ~]# ll /web/html -d

drwxrwxrwx 2 root root 24 Apr  3 16:29 /web/html

2. Verify the NFS mount on the web machines

[root@k8s-node2 ~]# showmount -e 192.168.10.181

Export list for 192.168.10.181:

/web/html 192.168.10.0/24

Mount it:

[root@k8s-node1 ~]# mkdir /pv_nfs

[root@k8s-node1 ~]# mount 192.168.10.181:/web/html /pv_nfs/

[root@k8s-node1 ~]# df -Th | grep /web

192.168.10.181:/web/html nfs4       46G  4.6G   41G   11% /pv_nfs

3. Create the PV

[root@k8s-master web]# mkdir nfs

[root@k8s-master web]# cd nfs/

[root@k8s-master nfs]# vim  nfs-pv.yaml

apiVersion: v1

kind: PersistentVolume

metadata:

  name: wy-pv

  labels:

    type: wy-pv

spec:

  capacity:

    storage: 10Gi

  accessModes:

    - ReadWriteMany

  storageClassName: nfs      

  nfs:

    path: "/web/data"       

    server: 192.168.10.181   

    readOnly: false  

[root@k8s-master nfs]# kubectl apply -f nfs-pv.yaml

[root@k8s-master nfs]# kubectl get pv

NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE

wy-pv   10Gi       RWX            Retain           Available           nfs                     2m8s

4. Create a PVC that uses the PV

[root@k8s-master nfs]# vim nfs-pvc.yaml

apiVersion: v1

kind: PersistentVolumeClaim

metadata:

  name: wy-pvc

spec:

  accessModes:

  - ReadWriteMany      

  resources:

     requests:

       storage: 10Gi

  storageClassName: nfs

[root@k8s-master nfs]# kubectl apply -f nfs-pvc.yaml

persistentvolumeclaim/wy-pvc created

5. Create a Deployment that starts multiple pod replicas using the PVC

[root@k8s-master nfs]# cat pod-nfs.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx-deployment-nfs

  labels:

    app: nginx-nfs

spec:

  replicas: 4

  selector:

    matchLabels:

      app: nginx-nfs

  template:

    metadata:

      labels:

        app: nginx-nfs

    spec:

      # define the volume

      volumes:

      - name: sc-pv-storage-nfs

        persistentVolumeClaim:

          claimName: wy-pvc

      containers:

      - name: nginx

        image: nginx:latest

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 80

      # mount the volume in the container

        volumeMounts:

        - mountPath: "/usr/share/nginx/html"

          name: sc-pv-storage-nfs

Apply the manifest and check that the pods are running:

[root@k8s-master nfs]# kubectl apply -f pod-nfs.yaml

[root@k8s-master nfs]# kubectl get pod -o wide

NAME                                    READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES

nginx-deployment-nfs-74b5494f49-4d8c9   1/1     Running   0          3h34m   10.224.36.65     k8s-node1   <none>           <none>

nginx-deployment-nfs-74b5494f49-zcxkm   1/1     Running   0          3h34m   10.224.169.129   k8s-node2   <none>           <none>

[root@k8s-master nfs]# curl 10.224.36.65

this is test

[root@k8s-master nfs]# curl 10.224.169.129

this is test

Test by modifying the file in the shared directory:

[root@k8s-master nfs]# echo "hello world " >> /web/html/index.html

[root@k8s-master nfs]# curl 10.224.169.129

this is test

hello world

V. Build a web image, push it to Harbor, and use HPA for automatic scaling

1. Build the web image

[root@harbor ~]# ls

flask.tar.gz   

[root@harbor app]# tar zxvf flask.tar.gz
[root@harbor app]#cd /root/flask/app

[root@harbor app]# cat dockerfile

FROM centos7

WORKDIR /app

ADD .  /app/

RUN /usr/local/bin/python -m pip install --upgrade pip

RUN pip install -r requirements.txt

EXPOSE 5000

CMD ["python","/app/manager.py"]

[root@harbor app]# docker build  -t scweb:1.1 .

[root@harbor app]# docker image ls | grep scweb

scweb                            1.1       f845e97e9dfd   4 hours ago      214MB

[root@harbor app]#  docker tag scweb:1.1 192.168.10.183:5000/test/web:v2

[root@harbor app]# docker image ls | grep web

192.168.10.183:5000/test/web     v2        00900ace4935   4 minutes ago   214MB

scweb                            1.1       00900ace4935   4 minutes ago   214MB

[root@harbor app]# docker push  192.168.10.183:5000/test/web:v2

The push refers to repository [192.168.10.183:5000/test/web]

3e252407b5c2: Pushed

193a27e04097: Pushed

b13a87e7576f: Pushed

174f56854903: Pushed

v2: digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29 size: 1153

1.1 Pull the image from the Harbor server on both worker nodes

[root@k8s-node1 ~]# docker login 192.168.10.183:5000

Authenticating with existing credentials...

WARNING! Your password will be stored unencrypted in /root/.docker/config.json.

Configure a credential helper to remove this warning. See

https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8snode1 ~]# docker pull 192.168.10.183:5000/test/web:v2

v2: Pulling from test/web

2d473b07cdd5: Pull complete

bc5e56dd1476: Pull complete

694440c745ce: Pull complete

78694d1cffbb: Pull complete

Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29

Status: Downloaded newer image for 192.168.10.183:5000/test/web:v2

192.168.10.183:5000/test/web:v2

[root@k8s-node1 ~]# docker images

REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE

192.168.10.183:5000/test/web                                                    v2         f845e97e9dfd   4 hours ago     214MB

[root@k8s-node2 ~]# docker login 192.168.10.183:5000

Authenticating with existing credentials...

WARNING! Your password will be stored unencrypted in /root/.docker/config.json.

Configure a credential helper to remove this warning. See

https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8s-node2 ~]# docker pull 192.168.10.183:5000/test/web:v2

v2: Pulling from test/web

2d473b07cdd5: Pull complete

bc5e56dd1476: Pull complete

694440c745ce: Pull complete

78694d1cffbb: Pull complete

Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29

Status: Downloaded newer image for 192.168.10.183:5000/test/web:v2

192.168.10.183:5000/test/web:v2

[root@k8s-node2 ~]# docker images

REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE

192.168.10.183:5000/test/web                                                    v2         f845e97e9dfd   4 hours ago     214MB

# Use HPA so that when CPU utilization reaches 70%, the workload scales horizontally between a minimum of 1 and a maximum of 10 pods

A HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a Deployment) so that it scales automatically to match demand.
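
Besides the imperative kubectl autoscale command used below, the same autoscaler can be written declaratively. A minimal sketch, assuming the target is the myweb Deployment created in section 2.4 (apply it with kubectl apply -f):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myweb
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myweb               # the Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target average CPU utilization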

2. Install metrics-server

2.1 Download the components.yaml manifest


[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

2.2 Modify the components.yaml manifest

[root@k8s-master metrics~]#vim components.yaml
# replace the image with the Alibaba Cloud mirror
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
        args:
#        add the following two arguments
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname

 
[root@k8s-master metrics~]# cat components.yaml
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP 
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
 

2.3 Run the install command

[root@k8s-master metrics]# kubectl apply -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
 
# Check the result
[root@k8s-master metrics]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-xdk88   1/1     Running   1          22h
calico-node-4knc8                          1/1     Running   4          22h
calico-node-8jzrn                          1/1     Running   1          22h
calico-node-9d7pt                          1/1     Running   2          22h
coredns-7f89b7bc75-52c4x                   1/1     Running   2          22h
coredns-7f89b7bc75-82jrx                   1/1     Running   1          22h
etcd-k8smaster                             1/1     Running   1          22h
kube-apiserver-k8smaster                   1/1     Running   1          22h
kube-controller-manager-k8smaster          1/1     Running   1          22h
kube-proxy-8wp9c                           1/1     Running   2          22h
kube-proxy-d46jp                           1/1     Running   1          22h
kube-proxy-whg4f                           1/1     Running   1          22h
kube-scheduler-k8smaster                   1/1     Running   1          22h
metrics-server-6c75959ddf-hw7cs            1/1     Running   0          61s
 
# If the command below returns node metrics, metrics-server was installed successfully
[root@k8s-master metrics]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   322m         16%    1226Mi          71%       
k8s-node1    215m         10%    874Mi           50%       
k8s-node2    190m         9%     711Mi           41% 

2.4 Start the web service from a YAML file and expose it

[root@k8s-master hpa]# cat my-web.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.10.183:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001
 
[root@k8s-master HPA]# kubectl apply -f my-web.yaml 
deployment.apps/myweb created
service/myweb-svc created

3. Create the HPA

[root@k8s-master HPA]# kubectl autoscale deployment myweb --cpu-percent=70 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled
 
[root@k8smaster HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-9q85g   1/1     Running   0          9s
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          9s
[root@k8s-master HPA]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d2h
myweb-svc    NodePort    10.102.83.168   <none>        8000:30001/TCP   15s
[root@k8smaster HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   <unknown>/70%   1         10        3          16s
 

3.1 Access the service

http://192.168.10.182:30001/
 
[root@k8s-master HPA]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   1%/70%    1         10        1          11m
 
[root@k8s-master HPA]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-6dc7b4dfcb-ddq82   1/1     Running   0          10m
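
To watch the HPA actually scale out, sustained load has to be generated against the service. A common sketch is a temporary busybox pod that requests the NodePort in a loop (the URL reuses the address from this section; stop it with Ctrl+C and the replica count should drop back after the scale-down window):

kubectl run load-generator --rm -it --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://192.168.10.182:30001/; done"

kubectl get hpa -w    # watch the TARGETS and REPLICAS columns change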

VI. Start a MySQL pod to provide database services for the web application

[root@k8s-master mysql]# cat mysql-deployment.yaml

# Define the MySQL Deployment

apiVersion: apps/v1

kind: Deployment

metadata:

  labels:

    app: mysql

  name: mysql

spec:

  replicas: 1

  selector:

    matchLabels:

      app: mysql

  template:

    metadata:

      labels:

        app: mysql

    spec:

      containers:

      - image: mysql:5.7.42

        name: mysql

        imagePullPolicy: IfNotPresent

        env:

        - name: MYSQL_ROOT_PASSWORD   

          value: "123456"

        ports:

        - containerPort: 3306

---

# Define the MySQL Service

apiVersion: v1

kind: Service

metadata:

  labels:

    app: svc-mysql

  name: svc-mysql

spec:

  selector:

    app: mysql

  type: NodePort

  ports:

  - port: 3306

    protocol: TCP

    targetPort: 3306

    nodePort: 30007

[root@k8s-master mysql]# kubectl apply -f mysql-deployment.yaml

deployment.apps/mysql created

service/svc-mysql created

[root@k8s-master mysql]# kubectl get svc

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE

kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP          28h

svc-mysql        NodePort    10.105.96.217   <none>        3306:30007/TCP   10m

[root@k8s-master mysql]# kubectl get pod

NAME                                READY   STATUS    RESTARTS   AGE

mysql-5f9bccd855-6kglf              1/1     Running   0          8m59s

[root@k8s-master mysql]# kubectl exec -it mysql-5f9bccd855-6kglf -- bash

bash-4.2# mysql -uroot -p123456

mysql: [Warning] Using a password on the command line interface can be insecure.

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 2

Server version: 5.7.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| sys                |

+--------------------+

4 rows in set (0.01 sec)

mysql> exit

Bye

bash-4.2# exit

exit

[root@k8s-master mysql]#

# Connect the web service to the MySQL database

# Option 1: add the following to the MySQL Service

  ports:

    - name: mysql

      protocol: TCP

      port: 3306

      targetPort: 3306

# and add the following to the web pod spec

        env:

          - name: MYSQL_HOST

            value: mysql

          - name: MYSQL_PORT

            value: "3306"

VII. Use probes to monitor the web pods and restart them immediately on failure

[root@k8s-master probe]# vim probe.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.10.183:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5   
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001

[root@k8s-master probe]# kubectl apply -f probe.yaml

deployment.apps/myweb created

service/myweb-svc created

[root@k8s-master probe]# kubectl get pod

NAME                     READY   STATUS    RESTARTS   AGE

myweb-6b89fb9c7b-4cdh9   1/1     Running   0          53s

myweb-6b89fb9c7b-dh87w   1/1     Running   0          53s

myweb-6b89fb9c7b-zvc52   1/1     Running   0          53s

Inspect the details of a pod that has the probes applied:

[root@k8s-master probe]# kubectl describe pod myweb-6b89fb9c7b-4cdh9

Name:         myweb-6b89fb9c7b-4cdh9

Namespace:    default

Priority:     0

Node:         k8s-node2/192.168.10.183

Start Time:   Thu, 22 Jun 2023 16:47:20 +0800

Labels:       app=myweb

              pod-template-hash=6b89fb9c7b

Annotations:  cni.projectcalico.org/podIP: 10.244.185.219/32

              cni.projectcalico.org/podIPs: 10.244.185.219/32

Status:       Running

IP:           10.244.185.219

IPs:

  IP:           10.244.185.219

Controlled By:  ReplicaSet/myweb-6b89fb9c7b

Containers:

  myweb:

    Container ID:   docker://8c55c0c825483f86e4b3c87413984415b2ccf5cad78ed005eed8bedb4252c130

    Image:          192.168.10.183:5000/test/web:v2

    Image ID:       docker-pullable://192.168.10.183:5000/test/web@sha256:3bef039aa5c13103365a6868c9f052a000de376a45eaffcbad27d6ddb1f6e354

    Port:           8000/TCP

    Host Port:      0/TCP

    State:          Running

      Started:      Thu, 22 Jun 2023 16:47:23 +0800

    Ready:          True

    Restart Count:  0

    Limits:

      cpu:  300m

    Requests:

      cpu:        100m

    Liveness:     exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3

    Readiness:    exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3

    Startup:     http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30

    Environment:  <none>

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-24tkk (ro)

Conditions:

  Type              Status

  Initialized       True

  Ready             True

  ContainersReady   True

  PodScheduled      True

Volumes:

  default-token-24tkk:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-24tkk

    Optional:    false

QoS Class:       Burstable

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

  Type    Reason     Age   From               Message

  ----    ------     ----  ----               -------

  Normal  Scheduled  55s   default-scheduler  Successfully assigned default/myweb-6b89fb9c7b-4cdh9 to k8snode2

  Normal  Pulled     52s   kubelet            Container image "192.168.10.183:5000/test/web:v2" already present on machine

  Normal  Created    52s   kubelet            Created container myweb

  Normal  Started    52s   kubelet            Started container myweb

VIII. Use Ingress for load balancing

1. Prepare the environment

1.1 Upload the installation packages

[root@k8s-master ~]# mkdir /ingress

[root@k8s-master ~]# cd /ingress

[root@k8s-master ingress]# ls

ingress-controller-deploy.yaml  ingress-nginx-controllerv1.1.0.tar.gz  kube-webhook-certgen-v1.1.0.tar.gz

1.2 Distribute them to each node

[root@k8s-master ingress]# scp  kube-webhook-certgen-v1.1.0.tar.gz  k8s-node1:/root

[root@k8s-master ingress]# scp  kube-webhook-certgen-v1.1.0.tar.gz  k8s-node2:/root

[root@k8s-master ingress]# scp  ingress-nginx-controllerv1.1.0.tar.gz  k8s-node1:/root

[root@k8s-master ingress]# scp  ingress-nginx-controllerv1.1.0.tar.gz  k8s-node2:/root

1.3 Import the images on the worker nodes

[root@k8s-node1~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz

[root@k8s-node1~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz

1.4 Create the resources required by the ingress controller

[root@k8s-master ingress]#kubectl apply -f ingress-controller-deploy.yaml

namespace/ingress-nginx created

serviceaccount/ingress-nginx created

configmap/ingress-nginx-controller created

clusterrole.rbac.authorization.k8s.io/ingress-nginx created

clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created

role.rbac.authorization.k8s.io/ingress-nginx created

rolebinding.rbac.authorization.k8s.io/ingress-nginx created

service/ingress-nginx-controller-admission created

service/ingress-nginx-controller created

deployment.apps/ingress-nginx-controller created

ingressclass.networking.k8s.io/nginx created

validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

serviceaccount/ingress-nginx-admission created

clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created

clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created

role.rbac.authorization.k8s.io/ingress-nginx-admission created

rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created

job.batch/ingress-nginx-admission-create created

job.batch/ingress-nginx-admission-patch created

1.5 Check that the ingress controller resources were created

[root@k8s-master ingress]# kubectl get ns

NAME              STATUS   AGE

default           Active   9d

devops-tools      Active   25h

ingress-nginx     Active   2d23h

kube-node-lease   Active   9d

kube-public       Active   9d

kube-system       Active   9d

[root@k8s-master ingress]# kubectl get svc -n ingress-nginx

NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE

ingress-nginx-controller             NodePort    10.99.68.171    <none>        80:31588/TCP,443:30528/TCP   2d23h

ingress-nginx-controller-admission   ClusterIP   10.96.116.201   <none>        443/TCP                      2d23h

2 Create pods and expose them as a service

[root@k8s-master ingress]# cat sc-nginx-svc.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: sc-nginx-deploy

  labels:

    app: sc-nginx-wang

spec:

  replicas: 2

  selector:

    matchLabels:

      app: sc-nginx-wang

  template:

    metadata:

      labels:

        app: sc-nginx-wang

    spec:

      containers:

      - name: sc-nginx-wang

        image: nginx

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 80

---

apiVersion: v1

kind: Service

metadata:

  name:  sc-nginx-svc

  labels:

    app: sc-nginx-svc

spec:

  selector:

    app: sc-nginx-wang

  ports:

  - name: name-of-service-port

    protocol: TCP

    port: 80

    targetPort: 80

[root@k8s-master ingress]# kubectl apply -f sc-nginx-svc.yaml

deployment.apps/sc-nginx-deploy created

service/sc-nginx-svc created

[root@k8s-master ingress]# kubectl get pod

NAME                                READY   STATUS    RESTARTS   AGE

sc-nginx-deploy-7bb895f9f5-hmf2n    1/1     Running   0          7s

sc-nginx-deploy-7bb895f9f5-mczzg    1/1     Running   0          7s

[root@k8s-master ingress]# kubectl get svc

NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE

kubernetes     ClusterIP   10.96.0.1     <none>        443/TCP   20h

sc-nginx-svc   ClusterIP   10.96.76.55   <none>        80/TCP    26s

[root@k8s-master ingress]# kubectl describe svc sc-nginx-svc

Name:              sc-nginx-svc

Namespace:         default

Labels:            app=sc-nginx-wang

Annotations:       <none>

Selector:          app=sc-nginx-wang

Type:              ClusterIP

IP Family Policy:  SingleStack

IP Families:       IPv4

IP:                10.109.195.186

IPs:               10.109.195.186

Port:              name-of-service-port  80/TCP

TargetPort:        80/TCP

Endpoints:         10.224.169.151:80,10.224.169.152:80

Session Affinity:  None

Events:            <none>

[root@k8s-master ingress]# curl 10.109.195.186

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

html { color-scheme: light dark; }

body { width: 35em; margin: 0 auto;

font-family: Tahoma, Verdana, Arial, sans-serif; }

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

3 Enable Ingress to link the ingress controller and the service

3.1 Create a YAML file to start the Ingress

[root@k8s-master ingress]# cat sc-ingress.yaml

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

  name: sc-ingress

  annotations:

    kubernetes.io/ingress.class: nginx  # annotation linking this Ingress to the ingress controller

spec:

  ingressClassName: nginx  # bind to the ingress controller

  rules:

  - host: www.wang.com

    http:

      paths:

      - pathType: Prefix

        path: /

        backend:

          service:

            name: sc-nginx-svc

            port:

              number: 80

[root@k8s-master ingress]# kubectl apply -f  sc-ingress.yaml

ingress.networking.k8s.io/sc-ingress created

3.2 View the Ingress

[root@k8s-master ingress]# kubectl get ingress

NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE

sc-ingress   nginx   www.wang.com   192.168.10.180,192.168.10.182   80      52s

[root@k8s-master ingress]# kubectl get svc -n ingress-nginx

NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE

ingress-nginx-controller             NodePort    10.99.68.171    <none>        80:31588/TCP,443:30528/TCP   3d

ingress-nginx-controller-admission   ClusterIP   10.96.116.201   <none>        443/TCP                      3d

3.3 Access via domain name from another host or a Windows machine

[root@prometheus ~]# vim /etc/hosts

[root@prometheus ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.180  www.wang.com

[root@prometheus ~]# curl www.wang.com

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

html { color-scheme: light dark; }

body { width: 35em; margin: 0 auto;

font-family: Tahoma, Verdana, Arial, sans-serif; }

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

IX. Install Prometheus to monitor the entire cluster's resources

1. Pull the images on all nodes in advance

docker pull prom/node-exporter

docker pull prom/prometheus:v2.0.0

docker pull grafana/grafana:6.1.4

[root@k8s-master ~]# docker images

REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE

prom/node-exporter                                                latest     1dbe0e931976   18 months ago   20.9MB

grafana/grafana                                                   6.1.4      d9bdb6044027   4 years ago     245MB

prom/prometheus                                                                v2.0.0     67141fa03496   5 years ago     80.2MB

[root@k8s-node1 ~]# docker images

REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE

prom/node-exporter                                                             latest     1dbe0e931976   18 months ago   20.9MB

grafana/grafana                                                                6.1.4      d9bdb6044027   4 years ago     245MB

prom/prometheus

[root@k8s-node2 ~]# docker images

REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE

prom/node-exporter                                                             latest     1dbe0e931976   18 months ago   20.9MB

grafana/grafana                                                                6.1.4      d9bdb6044027   4 years ago     245MB

prom/prometheus                                                                v2.0.0     67141fa03496   5 years ago     80.2MB

2. Deploy node-exporter as a DaemonSet

[root@k8smaster prometheus]# cat node-exporter.yaml

---

apiVersion: apps/v1

kind: DaemonSet

metadata:

  name: node-exporter

  namespace: kube-system

  labels:

    k8s-app: node-exporter

spec:

  selector:

    matchLabels:

      k8s-app: node-exporter

  template:

    metadata:

      labels:

        k8s-app: node-exporter

    spec:

      containers:

      - image: prom/node-exporter

        name: node-exporter

        ports:

        - containerPort: 9100

          protocol: TCP

          name: http

---

apiVersion: v1

kind: Service

metadata:

  labels:

    k8s-app: node-exporter

  name: node-exporter

  namespace: kube-system

spec:

  ports:

  - name: http

    port: 9100

    nodePort: 31672

    protocol: TCP

  type: NodePort

  selector:

    k8s-app: node-exporter

[root@k8s-master prometheus]# kubectl apply -f node-exporter.yaml

daemonset.apps/node-exporter created

service/node-exporter created

[root@k8s-master prometheus]# kubectl get pod -A | grep node-exporter

kube-system     node-exporter-n9jl8                         1/1     Running   0                38s

kube-system     node-exporter-rhf9q                         1/1     Running   0                38s

[root@k8s-master prometheus]# kubectl get daemonset -A

NAMESPACE     NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE

kube-system   calico-node     3         3         3       3            3           kubernetes.io/os=linux   9d

kube-system   kube-proxy      3         3         3       3            3           kubernetes.io/os=linux   9d

kube-system   node-exporter   2         2         2       2            2           <none>                   90s

[root@k8s-master prometheus]# kubectl get svc -A | grep node-exporter

kube-system     node-exporter                        NodePort    10.111.61.108    <none>        9100:31672/TCP               2m22s
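
Each node now exposes metrics on the NodePort, which can be spot-checked with curl from any machine that can reach the nodes (the IP below is k8s-node1 in this environment):

curl -s http://192.168.10.180:31672/metrics | head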

3. Deploy Prometheus

3.1 Create the RBAC roles and bindings

[root@k8s-master prometheus]# cat rbac-setup.yaml

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  name: prometheus

rules:

- apiGroups: [""]

  resources:

  - nodes

  - nodes/proxy

  - services

  - endpoints

  - pods

  verbs: ["get", "list", "watch"]

- apiGroups:

  - extensions

  resources:

  - ingresses

  verbs: ["get", "list", "watch"]

- nonResourceURLs: ["/metrics"]

  verbs: ["get"]

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: prometheus

  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: prometheus

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: prometheus

subjects:

- kind: ServiceAccount

  name: prometheus

  namespace: kube-system

[root@k8s-master prometheus]# kubectl apply -f rbac-setup.yaml

clusterrole.rbac.authorization.k8s.io/prometheus created

serviceaccount/prometheus created

clusterrolebinding.rbac.authorization.k8s.io/prometheus created

3.2 Pass the configuration via a ConfigMap

[root@k8s-master prometheus]# cat configmap.yaml

apiVersion: v1

kind: ConfigMap

metadata:

  name: prometheus-config

  namespace: kube-system

data:

  prometheus.yml: |

    global:

      scrape_interval:     15s

      evaluation_interval: 15s

    scrape_configs:

    - job_name: 'kubernetes-apiservers'

      kubernetes_sd_configs:

      - role: endpoints

      scheme: https

      tls_config:

        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      relabel_configs:

      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]

        action: keep

        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'

      kubernetes_sd_configs:

      - role: node

      scheme: https

      tls_config:

        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      relabel_configs:

      - action: labelmap

        regex: __meta_kubernetes_node_label_(.+)

      - target_label: __address__

        replacement: kubernetes.default.svc:443

      - source_labels: [__meta_kubernetes_node_name]

        regex: (.+)

        target_label: __metrics_path__

        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'

      kubernetes_sd_configs:

      - role: node

      scheme: https

      tls_config:

        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      relabel_configs:

      - action: labelmap

        regex: __meta_kubernetes_node_label_(.+)

      - target_label: __address__

        replacement: kubernetes.default.svc:443

      - source_labels: [__meta_kubernetes_node_name]

        regex: (.+)

        target_label: __metrics_path__

        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'

      kubernetes_sd_configs:

      - role: endpoints

      relabel_configs:

      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]

        action: keep

        regex: true

      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]

        action: replace

        target_label: __scheme__

        regex: (https?)

      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]

        action: replace

        target_label: __metrics_path__

        regex: (.+)

      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]

        action: replace

        target_label: __address__

        regex: ([^:]+)(?::\d+)?;(\d+)

        replacement: $1:$2

      - action: labelmap

        regex: __meta_kubernetes_service_label_(.+)

      - source_labels: [__meta_kubernetes_namespace]

        action: replace

        target_label: kubernetes_namespace

      - source_labels: [__meta_kubernetes_service_name]

        action: replace

        target_label: kubernetes_name

    - job_name: 'kubernetes-services'

      kubernetes_sd_configs:

      - role: service

      metrics_path: /probe

      params:

        module: [http_2xx]

      relabel_configs:

      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]

        action: keep

        regex: true

      - source_labels: [__address__]

        target_label: __param_target

      - target_label: __address__

        replacement: blackbox-exporter.example.com:9115

      - source_labels: [__param_target]

        target_label: instance

      - action: labelmap

        regex: __meta_kubernetes_service_label_(.+)

      - source_labels: [__meta_kubernetes_namespace]

        target_label: kubernetes_namespace

      - source_labels: [__meta_kubernetes_service_name]

        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'

      kubernetes_sd_configs:

      - role: ingress

      relabel_configs:

      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]

        action: keep

        regex: true

      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]

        regex: (.+);(.+);(.+)

        replacement: ${1}://${2}${3}

        target_label: __param_target

      - target_label: __address__

        replacement: blackbox-exporter.example.com:9115

      - source_labels: [__param_target]

        target_label: instance

      - action: labelmap

        regex: __meta_kubernetes_ingress_label_(.+)

      - source_labels: [__meta_kubernetes_namespace]

        target_label: kubernetes_namespace

      - source_labels: [__meta_kubernetes_ingress_name]

        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'

      kubernetes_sd_configs:

      - role: pod

      relabel_configs:

      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]

        action: keep

        regex: true

      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]

        action: replace

        target_label: __metrics_path__

        regex: (.+)

      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]

        action: replace

        regex: ([^:]+)(?::\d+)?;(\d+)

        replacement: $1:$2

        target_label: __address__

      - action: labelmap

        regex: __meta_kubernetes_pod_label_(.+)

      - source_labels: [__meta_kubernetes_namespace]

        action: replace

        target_label: kubernetes_namespace

      - source_labels: [__meta_kubernetes_pod_name]

        action: replace

        target_label: kubernetes_pod_name

[root@k8s-master prometheus]# kubectl apply -f configmap.yaml

configmap/prometheus-config created
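Note that the kubernetes-service-endpoints and kubernetes-pods jobs above only keep targets whose Service or Pod carries the prometheus.io/scrape: "true" annotation, and they honor the optional prometheus.io/port and prometheus.io/path annotations. As an illustration only (the exporter Service name here is hypothetical), annotating a Service so Prometheus discovers it would look roughly like:

apiVersion: v1
kind: Service
metadata:
  name: some-exporter          # hypothetical name, for illustration
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9100"
    prometheus.io/path: "/metrics"

A quick sanity check that the ConfigMap landed where the Deployment below expects it:

kubectl -n kube-system get configmap prometheus-config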

[root@k8s-master prometheus]# cat prometheus.deploy.yml

apiVersion: apps/v1

kind: Deployment

metadata:

  labels:

    name: prometheus-deployment

  name: prometheus

  namespace: kube-system

spec:

  replicas: 1

  selector:

    matchLabels:

      app: prometheus

  template:

    metadata:

      labels:

        app: prometheus

    spec:

      containers:

      - image: prom/prometheus:v2.0.0

        name: prometheus

        command:

        - "/bin/prometheus"

        args:

        - "--config.file=/etc/prometheus/prometheus.yml"

        - "--storage.tsdb.path=/prometheus"

        - "--storage.tsdb.retention=24h"

        ports:

        - containerPort: 9090

          protocol: TCP

        volumeMounts:

        - mountPath: "/prometheus"

          name: data

        - mountPath: "/etc/prometheus"

          name: config-volume

        resources:

          requests:

            cpu: 100m

            memory: 100Mi

          limits:

            cpu: 500m

            memory: 2500Mi

      serviceAccountName: prometheus

      volumes:

      - name: data

        emptyDir: {}

      - name: config-volume

        configMap:

          name: prometheus-config

[root@k8s-master prometheus]# kubectl apply -f prometheus.deploy.yml

deployment.apps/prometheus created
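The Deployment runs as serviceAccountName: prometheus, and the scrape jobs above authenticate with that account's token to list nodes, endpoints and pods. If the ServiceAccount and its cluster-read RBAC were not already created in an earlier step of this section, a minimal sketch (names assumed to match the Deployment) would be:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
# read-only access to the objects the kubernetes_sd_configs jobs discover
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
# allow scraping the /metrics non-resource URL
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system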

3.3 Expose the Prometheus service

[root@k8smaster prometheus]# cat prometheus.svc.yml

apiVersion: v1

kind: Service

metadata:

  labels:

    app: prometheus

  name: prometheus

  namespace: kube-system

spec:

  type: NodePort

  ports:

  - port: 9090

    targetPort: 9090

  selector:

    app: prometheus

[root@k8smaster prometheus]# kubectl apply -f prometheus.svc.yml

service/prometheus created
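Before wiring up Grafana it is worth confirming that the Service got a NodePort and that Prometheus answers on it. 30839 is the port this particular run was assigned (see the kubectl get svc -A output further down); yours may differ:

kubectl -n kube-system get svc prometheus
# hit the health endpoint through any node IP and the assigned NodePort
curl -s http://192.168.10.180:30839/-/healthy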

4. Deploy Grafana

[root@k8s-master prometheus]# cat grafana-deploy.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: grafana-core

  namespace: kube-system

  labels:

    app: grafana

    component: core

spec:

  replicas: 1

  selector:

    matchLabels:

      app: grafana

  template:

    metadata:

      labels:

        app: grafana

        component: core

    spec:

      containers:

      - image: grafana/grafana:6.1.4

        name: grafana-core

        imagePullPolicy: IfNotPresent

        # env:

        resources:

          # keep request = limit to keep this container in guaranteed class

          limits:

            cpu: 100m

            memory: 100Mi

          requests:

            cpu: 100m

            memory: 100Mi

        env:

          # The following env variables set up basic auth with the default admin user and admin password.

          - name: GF_AUTH_BASIC_ENABLED

            value: "true"

          - name: GF_AUTH_ANONYMOUS_ENABLED

            value: "false"

          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE

          #   value: Admin

          # does not really work, because of template variables in exported dashboards:

          # - name: GF_DASHBOARDS_JSON_ENABLED

          #   value: "true"

        readinessProbe:

          httpGet:

            path: /login

            port: 3000

          # initialDelaySeconds: 30

          # timeoutSeconds: 1

        #volumeMounts:   # skip persistent-storage mounts for now

        #- name: grafana-persistent-storage

        #  mountPath: /var

      #volumes:

      #- name: grafana-persistent-storage

        #emptyDir: {}

[root@k8smaster prometheus]# kubectl apply -f grafana-deploy.yaml

deployment.apps/grafana-core created

[root@k8smaster prometheus]# cat grafana-svc.yaml

apiVersion: v1

kind: Service

metadata:

  name: grafana

  namespace: kube-system

  labels:

    app: grafana

    component: core

spec:

  type: NodePort

  ports:

    - port: 3000

  selector:

    app: grafana

    component: core

[root@k8smaster prometheus]# kubectl apply -f grafana-svc.yaml

service/grafana created
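The Grafana Service is likewise a NodePort with a randomly assigned port; one way to find it and log in (default credentials admin/admin, as noted at the end of this section):

kubectl -n kube-system get svc grafana
# then browse to http://<any-node-ip>:<nodePort>/login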

[root@k8smaster prometheus]# cat grafana-ing.yaml

apiVersion: networking.k8s.io/v1

kind: Ingress

metadata:

   name: grafana

   namespace: kube-system

spec:

   ingressClassName: nginx

   rules:

   - host: k8s.grafana

     http:

       paths:

       - path: /

         pathType: Prefix

         backend:

           service:

             name: grafana

             port:

               number: 3000

[root@k8smaster prometheus]# kubectl apply -f grafana-ing.yaml

ingress.networking.k8s.io/grafana created
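This Ingress routes the host k8s.grafana to the Grafana Service. To test it from another machine you would either point that name at a node running the ingress-nginx controller (for example with an /etc/hosts entry) or pass the host explicitly with curl; 31588 is the controller's HTTP NodePort shown in the svc listing below:

curl -H "Host: k8s.grafana" http://192.168.10.180:31588/login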

# 5. Check and test

[root@k8smaster prometheus]# kubectl get pods -A

NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE

kube-system            grafana-core-78958d6d67-49c56                1/1     Running     0          31m

kube-system            node-exporter-fcmx5                          1/1     Running     0          9m33s

kube-system            node-exporter-qccwb                          1/1     Running     0          9m33s

kube-system            prometheus-68546b8d9-qxsm7                   1/1     Running     0          2m47s

[root@k8s-master prometheus]# kubectl get svc -A

NAMESPACE       NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE

default         kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                      9d

default         myweb-svc                            NodePort    10.98.160.67     <none>        8000:30001/TCP               4h39m

default         sc-nginx-svc-3                       ClusterIP   10.109.195.186   <none>        80/TCP                       3d

default         sc-nginx-svc-4                       ClusterIP   10.109.249.255   <none>        80/TCP                       3d

devops-tools    jenkins-service                      NodePort    10.104.209.34    <none>        8080:32000/TCP               22h

ingress-nginx   ingress-nginx-controller             NodePort    10.99.68.171     <none>        80:31588/TCP,443:30528/TCP   3d3h

ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.96.116.201    <none>        443/TCP                      3d3h

kube-system     grafana                              NodePort    10.101.243.14    <none>        3000:31262/TCP               13m

kube-system     kube-dns                             ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       9d

kube-system     metrics-server                       ClusterIP   10.103.109.255   <none>        443/TCP                      5h32m

kube-system     node-exporter                        NodePort    10.111.61.108    <none>        9100:31672/TCP               24m

kube-system     prometheus                           NodePort    10.103.108.12    <none>        9090:30839/TCP               2m53s

# Access

# Metrics collected by node-exporter

http://192.168.10.180:31672/metrics

# Prometheus web UI

http://192.168.10.180:30839

# Grafana web UI (username: admin; password: admin)

http://192.168.10.180:31262
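After logging in, add Prometheus as a Grafana data source; since both run in kube-system, the in-cluster Service URL works. This can be done in the UI (Configuration → Data Sources) or, as a sketch, through Grafana's HTTP API using the NodePort from the listing above:

curl -u admin:admin -H "Content-Type: application/json" \
  -X POST http://192.168.10.180:31262/api/datasources \
  -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus.kube-system.svc:9090","access":"proxy","isDefault":true}'

A community Kubernetes dashboard can then be imported to visualize the node-exporter and cAdvisor metrics scraped above.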

10. Stress-test the entire k8s cluster and related servers with load-testing tools

# 1. Run the php-apache server and expose it with a Service

[root@k8s-master hpa]# ls

php-apache.yaml

[root@k8smaster hpa]# cat php-apache.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: php-apache

spec:

  selector:

    matchLabels:

      run: php-apache

  template:

    metadata:

      labels:

        run: php-apache

    spec:

      containers:

      - name: php-apache

        image: k8s.gcr.io/hpa-example

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 80

        resources:

          limits:

            cpu: 500m

          requests:

            cpu: 200m

---

apiVersion: v1

kind: Service

metadata:

  name: php-apache

  labels:

    run: php-apache

spec:

  ports:

  - port: 80

  selector:

    run: php-apache

[root@k8smaster hpa]# kubectl apply -f php-apache.yaml

deployment.apps/php-apache created

service/php-apache created

[root@k8smaster hpa]# kubectl get deploy

NAME         READY   UP-TO-DATE   AVAILABLE   AGE

php-apache   1/1     1            1           93s

[root@k8smaster hpa]# kubectl get pod

NAME                         READY   STATUS    RESTARTS   AGE

php-apache-567d9f79d-mhfsp   1/1     Running   0          44s

# Create the HPA

[root@k8s-master hpa]# kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10

horizontalpodautoscaler.autoscaling/php-apache autoscaled

[root@k8s-master hpa]# kubectl get hpa

NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE

php-apache   Deployment/php-apache   <unknown>/10%   1         10        0          7s
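The <unknown> target simply means metrics-server has not reported a CPU sample for the pod yet; it resolves to a percentage within a minute or so. The imperative kubectl autoscale command is also equivalent to a declarative HPA manifest; a sketch using the autoscaling/v2 API (GA in Kubernetes 1.23):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 10   # same 10% CPU target as the autoscale command above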

# Test: generate load

[root@k8s-master hpa]# kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

If you don't see a command prompt, try pressing enter.

OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK

[root@k8s-master hpa]# kubectl get hpa

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE

php-apache   Deployment/php-apache   0%/10%    1         10        1          3m24s

[root@k8s-master hpa]# kubectl get hpa

NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE

php-apache   Deployment/php-apache   238%/10%   1         10        1          3m41s

[root@k8s-master hpa]# kubectl get hpa

NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE

php-apache   Deployment/php-apache   250%/10%   1         10        4          3m57s

# Once CPU utilization drops back to 0, the HPA scales the replicas back down to 1; the autoscaler can take a few minutes to complete the replica change.
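To follow the scale-out and the later scale-in without rerunning kubectl get hpa by hand, the status and scaling events can be streamed:

kubectl get hpa php-apache --watch
kubectl describe hpa php-apache     # shows the scaling events and their reasons
kubectl get deployment php-apache   # replica count should track the HPA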

# 2. Stress-test the web service and watch Prometheus and the dashboard

# Hit the service with ab, then open Prometheus at 192.168.10.180:30839 to observe the pods

# Observe

kubectl top pod

http://192.168.10.180:30839/

Stress test

[root@nfs ~]# yum install httpd-tools -y

[root@nfs data]# ab -n 1000000 -c 10000 -g output.dat http://192.168.10.180:30839/

This is ApacheBench, Version 2.3 <$Revision: 1430300 $>

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/

Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.180 (be patient)

apr_socket_recv: Connection reset by peer (104)

Total of 3694 requests completed

ab -n 1000 -c 10 -g output.dat http://192.168.10.180:30839/

-t 60   send as many requests as possible within 60 seconds
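The apr_socket_recv: Connection reset by peer error above is typical when the concurrency (-c 10000) exceeds what the target or the client's socket limits can sustain. A gentler run, and the time-bounded form mentioned on the last line, might look like this (30001 is the myweb-svc NodePort from the earlier svc listing; substitute whichever endpoint you are actually testing):

# fixed request count; -g writes per-request timing data in gnuplot format
ab -n 10000 -c 100 -g output.dat http://192.168.10.180:30001/
# run for 60 seconds instead of a fixed request count
ab -t 60 -c 100 http://192.168.10.180:30001/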
