1. k8s knowledge map

2. k8s features

3. k8s architecture

4. k8s cluster deployment

  1. Server preparation

kubeadm requires at least 2 CPU cores to install a kubernetes cluster. The OS used here is CentOS 7.6.1810; version 7.6.1810 or later is recommended.
Suggested lab configuration: 4 cores, 8 GB RAM, 100 GB of disk space (a 50 GB system disk, 50 GB for /var/lib/docker, and 50 GB for Ceph RBD).
Server IP addresses and hostnames are planned as follows:
master1 acts as both the control-plane (master) node and a worker (node); worker1 and worker2 are workers.

IP              Role     CPU  Memory  Data disk
192.168.47.128  master1  2    2G      100G
192.168.47.129  worker1  2    2G      100G
192.168.47.130  worker2  2    2G      100G

 2. Component versions at a glance

Software        Version   Release date
kubernetes      v1.22.4   2021
docker-ce       20.10.10  2021
etcd            3.5.0-0   2021-06-16
coredns         v1.8.4    2021-05-28
calico          v3.21.0   2021-07-31
dashboard       v2.3.1    2021-06-16
ingress-nginx   v1.0.0    2021-08-24
metrics-server  v0.5.0    2021-05-28
prometheus      v2.26.0   2021-03-31
grafana         7.5.4     2021-04-14
istio           1.11.1    2021-08-25

3. Environment setup: install VMware, create the virtual machines, and install CentOS 7.8 on each.

4. Configure the environment before installing

   4.1 Disable SELinux and the firewall

Disable the firewall

systemctl stop firewalld      # stop the service
systemctl disable firewalld   # do not start it at boot
firewall-cmd --state          # verify

Disable SELinux (the config change takes full effect after a reboot)

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # persist across reboots
getenforce  # verify

   4.2 Hostnames

# master1
hostnamectl set-hostname master1
# worker1
hostnamectl set-hostname worker1
# worker2
hostnamectl set-hostname worker2

4.3 Host IP and name resolution

Check the IP and netmask: ip a s

Check DNS: cat /etc/resolv.conf

Check the default gateway: route -n (the route tool may need installing: yum -y install net-tools)

Configure the IP: vi /etc/sysconfig/network-scripts/ifcfg-ens33

# master
...
BOOTPROTO=none
IPADDR=192.168.47.128
NETMASK=255.255.255.0
GATEWAY=192.168.47.2
DNS1=119.29.29.29

 Configure name resolution: vi /etc/hosts

192.168.47.128 master1
192.168.47.129 worker1
192.168.47.130 worker2

 4.4 Set up a periodic time-sync cron job (against Aliyun's NTP server)

yum -y install ntpdate

crontab -e
# add this entry to sync from Aliyun's NTP server every 5 hours:
0 */5 * * * ntpdate ntp.aliyun.com

4.5 Disable swap

swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
free -m  # verify that swap now shows 0

 4.6 Set up passwordless SSH across the three machines (run on every node so that each machine holds all the public keys; a loop sketch follows)

ssh-keygen -t rsa -b 2048 -P '' -f ~/.ssh/id_rsa
ssh-copy-id master1
ssh-copy-id worker1
ssh-copy-id worker2
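
The key distribution can also be scripted; a small sketch, assuming the hostnames from /etc/hosts above:

for h in master1 worker1 worker2; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h   # push this node's key to every host
done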

 4.7 Kernel parameters: enable bridge filtering so that bridged IPv4 traffic is passed to the iptables chains
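
The original leaves this step without commands; a minimal sketch of the standard settings, assuming the stock br_netfilter module:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

modprobe br_netfilter   # the bridge-nf settings require this module
sysctl --system         # reload all sysctl configuration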

 4.8 Deploy ipvs (more efficient than iptables)

Install ipset and ipvsadm

yum -y install ipset ipvsadm

Run this script to load the required kernel modules

#!/bin/bash
# kernel modules required for kube-proxy's ipvs mode
varr='ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4'

for mod in $varr
do
  modprobe $mod
done
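
Loading the modules does not by itself switch kube-proxy to ipvs; once the cluster is initialized (section 5 below), kube-proxy's mode also needs setting. A hedged sketch:

kubectl edit configmap kube-proxy -n kube-system          # set: mode: "ipvs"
kubectl delete po -n kube-system -l k8s-app=kube-proxy    # recreate the kube-proxy pods to pick it up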

 4.9 SSH connection optimization (disable reverse DNS lookups)

sed -ri 's/^#(UseDNS )yes/\1no/' /etc/ssh/sshd_config

 5. Deploying with kubeadm

  1. Install docker-ce per the runoob guide (usually with the Tsinghua mirror)

After installation, review Docker's startup configuration:

vi /usr/lib/systemd/system/docker.service

On the ExecStart=/usr/bin/dockerd line, delete everything from -H onward (inclusive), as scripted below.
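
One way to script that edit (a sketch; it simply truncates the line after the dockerd binary):

sed -i 's@^ExecStart=/usr/bin/dockerd.*@ExecStart=/usr/bin/dockerd@' /usr/lib/systemd/system/docker.service
systemctl daemon-reload   # systemd must re-read the unit file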

Configure daemon.json:

Set the registry mirror acceleration and the logging options.

Kubernetes officially recommends the systemd cgroup driver.

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://mciwm180.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn/",
    "https://registry.docker-cn.com"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "10",
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl enable --now docker

   2. Install kubeadm, kubelet, and kubectl (best to configure the Aliyun mirror repo before installing)

  Add the Aliyun Kubernetes repo (optional; add it only if the default install fails):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then install:

yum -y install kubeadm-1.22.4 kubectl-1.22.4 kubelet-1.22.4

Configure kubelet

Point it at the systemd cgroup driver

tee /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

Set kubelet to start at boot

systemctl daemon-reload
systemctl enable kubelet

3. Pull the k8s images. An Aliyun account is needed here; log in first, then pull (and set the Docker registry mirror to Aliyun).

  First, list the images kubeadm needs:

  kubeadm config images list

k8s defaults to Google's registry, which is unreachable from mainland China, so Aliyun is used instead.

When pulling the k8s images, substitute the Aliyun registry path directly.

Once the pulls finish, retag the images so that the names match the output of kubeadm config images list, as sketched below.
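
A sketch of the pull-and-retag loop, assuming the public registry.aliyuncs.com/google_containers mirror (the coredns image carries an extra path segment upstream, so it may need adjusting by hand):

for img in $(kubeadm config images list --kubernetes-version v1.22.4); do
  mirror=registry.aliyuncs.com/google_containers/${img##*/}   # swap in the mirror registry prefix
  docker pull $mirror
  docker tag $mirror $img                                     # restore the name kubeadm expects
done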

4. Initialize the cluster

  Initialize on master1:

kubeadm init --kubernetes-version=v1.22.4 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.47.128

Copy the admin kubeconfig into place

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install Calico (configure the pod network)

Network plugin options include flannel, calico, canal, kube-router, weave net, and so on; calico is used here.

Download the yaml

curl https://docs.projectcalico.org/manifests/calico.yaml -O

 Modify the yaml: change the interface match from eth to ens (see the sketch below)

 Before applying the calico manifest, it is best to pre-pull the images: configure the Aliyun mirror accelerator and list the images the manifest needs:
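
A sketch of both steps (the sed anchor assumes the stock manifest, where calico-node's IP env var is set to "autodetect"):

# list the images calico.yaml references, then docker pull each one on every node
grep 'image:' calico.yaml | sort -u

# pin calico-node to the ens* NICs (VMware uses ens33 here) instead of the default eth*
sed -i '/value: "autodetect"/a\            - name: IP_AUTODETECTION_METHOD\n              value: "interface=ens.*"' calico.yaml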

 Finally, apply the manifest:

kubectl apply -f calico.yaml

 To finish initialization, join the worker nodes: run this on worker1 and worker2 respectively

kubeadm join 192.168.47.128:6443 --token e2qxrq.8ijfcx3muenl7sh7 \
        --discovery-token-ca-cert-hash sha256:c530e95aa852fd21fd91b2b975f83d355fe5fa40b7eb8d32cc3f684c3d081b81
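
Join tokens expire after 24 hours by default; if the command above is rejected later, a fresh one can be printed on master1:

kubeadm token create --print-join-command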

5. Check the cluster status

 kubectl get nodes

 kubectl get cs

 kubectl cluster-info

 kubectl get pods -n kube-system

Create a deployment and a service to test
kubectl create deployment nginx --image=nginx:alpine
kubectl expose deployment/nginx --name=nginx-svc --port=80 --type=NodePort
kubectl get po
Output:

NAME                     READY   STATUS    RESTARTS   AGE
nginx-565785f75c-rr8vh   0/1     Pending   0          88s
Describe the pod

kubectl describe po nginx-565785f75c-rr8vh
0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Because the master node also serves as a worker, the master label and taint have to be removed; by default the master is unschedulable.

Remove the master label

kubectl label node master1 node-role.kubernetes.io/master-
Remove the taint

kubectl taint node master1 node-role.kubernetes.io/master:NoSchedule-
Check again; the pod is now scheduled onto the master node.

kubectl get po
NAME                     READY   STATUS              RESTARTS   AGE
nginx-565785f75c-rr8vh   0/1     ContainerCreating   0          3m55s
View the pod and service
kubectl get svc -l app=nginx
Output:

NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-svc   NodePort   10.110.108.4   <none>        80:31026/TCP   4m34s
Access test
curl 10.110.108.4
Output:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If you see the page above, the service path is working; the author takes this as confirmation that kube-proxy's ipvs mode is in effect.
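
ipvs mode is easier to confirm directly; a hedged check, using the ipvsadm installed in step 4.8:

ipvsadm -Ln | head                                           # cluster service IPs should appear as ipvs virtual servers
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=20  # look for "Using ipvs Proxier"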

5. Working with kubectl

 6. Managing k8s cluster resources with yaml files

7. Namespaces (create / get / delete)

8. Pods

9. Controllers

10. Services

11. With the cluster deployed, install the core add-ons: ingress, metrics-server, dashboard, the EFK logging stack, and prometheus + grafana. Ingress comes first.

11-1. Bare-metal ingress installation

       1. Deploy metallb: first enable strictARP in kube-proxy

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system

       2. Create the namespace

Note: raw.githubusercontent.com gets DNS-poisoned every so often, so a fresh IP has to be looked up (resolve it at www.ipaddress.com) and pinned in /etc/hosts, as below. An infuriating problem.
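
For example (the IP below is illustrative only; verify the current one on www.ipaddress.com before pinning):

echo '185.199.108.133 raw.githubusercontent.com' >> /etc/hosts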

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml

       3. Deploy metallb

         The download may need several attempts.

wget https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml

# testing showed the USTC mirror offers no speedup here
#sed -i 's@quay.io@quay.mirrors.ustc.edu.cn@g' metallb.yaml
kubectl apply -f metallb.yaml

      4. metallb supports layer-2, BGP, and other modes; the simple layer-2 configuration is used here. Create the ConfigMap for metallb:

tee metallb-config.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.47.150-192.168.47.250
EOF

kubectl apply -f metallb-config.yaml

kubectl get po -n metallb-system

        5. Deploy ingress-nginx

curl -o ingress-nginx.yaml \
  https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml

sed -i 's@k8s.gcr.io/ingress-nginx/controller:v1.0.0\(.*\)@willdockerhub/ingress-nginx-controller:v1.0.0@' ingress-nginx.yaml
sed -i 's@k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0\(.*\)$@hzde0128/kube-webhook-certgen:v1.0@' \
  ingress-nginx.yaml

kubectl apply -f ingress-nginx.yaml

kubectl get po -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-k9b75     0/1     Completed   0          14s
ingress-nginx-admission-patch--1-jsrsj      0/1     Completed   0          14s
ingress-nginx-controller-79887d48bf-txxvd   0/1     Running     0          15s


# if anything goes wrong, inspect the pod:
kubectl describe pod ingress-nginx-controller-6b64bc6f47-294jr  --namespace=ingress-nginx

        6. With metallb deployed, the controller service can be changed from NodePort to LoadBalancer

kubectl patch svc -n ingress-nginx ingress-nginx-controller  -p '{"spec":{"type": "LoadBalancer"}}'
service/ingress-nginx-controller patched

         Check the installation

kubectl get po -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS     AGE
ingress-nginx-admission-create--1-k9b75     0/1     Completed   0            72s
ingress-nginx-admission-patch--1-jsrsj      0/1     Completed   0            72s
ingress-nginx-controller-79887d48bf-txxvd   0/1     Running     1 (2s ago)   73s

kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.111.114.169   192.168.100.101   80:32236/TCP,443:30083/TCP   84s
ingress-nginx-controller-admission   ClusterIP      10.96.117.247    <none>            443/TCP                      85s

         Check the controller version

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.0.0
  Build:         041eb167c7bfccb1d1653f194924b0c5fd885e10
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.20.1

-------------------------------------------------------------------------------

Example: proxying a backend application through ingress

Create the myapp application

kubectl apply -f - <<EOF
---
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:alpine
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
EOF
service/myapp created
deployment.apps/myapp created

Create the ingress manifest

kubectl apply -f - <<EOF
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-myapp
  annotations:
    # select the ingress controller class
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.hzde.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF
ingress.networking.k8s.io/test-myapp created

Add a hosts entry and try to access

Check the ingress controller's node ports

kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.207.65   <none>        80:30348/TCP,443:30473/TCP   9m9s
ingress-nginx-controller-admission   ClusterIP   10.96.36.183    <none>        443/TCP                      9m9s
echo '192.168.100.10 myapp.hzde.com' >> /etc/hosts
curl myapp.hzde.com:30348
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The second installation method: use hostNetwork directly

curl -o ingress-nginx.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0-beta.3/deploy/static/provider/baremetal/deploy.yaml

sed -i 's@k8s.gcr.io/ingress-nginx/controller@willdockerhub/ingress-nginx-controller@' ingress-nginx.yaml
sed -i 's@k8s.gcr.io/ingress-nginx/kube-webhook-certgen@hzde0128/kube-webhook-certgen@' ingress-nginx.yaml
sed -i 'N;315a\ \ \ \ \ \ hostNetwork: true' ingress-nginx.yaml
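# note: the '315a' address in the sed above depends on the exact yaml revision;
# sanity-check that hostNetwork landed inside the controller pod spec before applying:
grep -n -B2 'hostNetwork: true' ingress-nginx.yaml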
kubectl apply -f ingress-nginx.yaml
kubectl get po -n ingress-nginx -owide
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5dfbcfd5d9-k2fjc   1/1     Running   0          33s   192.168.100.10   k8s-m1   <none>           <none>

Accessing myapp.hzde.com without editing hosts

Specify the host in a request header

curl 192.168.100.10 -H "host:myapp.hzde.com"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Ingress over HTTPS

Reference: the ingress-nginx TLS/HTTPS documentation

Create a self-signed certificate

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginx/O=nginx"
Generating a 2048 bit RSA private key
......................................................+++
..........................+++
writing new private key to 'tls.key'
-----

Create the secret

kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created

Create the TLS ingress

kubectl apply -f - <<EOF
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-myapp
  annotations:
    # select the ingress controller class
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - myapp2.hzde.com
    secretName: tls-secret
  rules:
  - host: myapp2.hzde.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF
ingress.networking.k8s.io/tls-myapp created

Test

echo '192.168.100.10 myapp2.hzde.com' >> /etc/hosts

curl -sSk https://myapp2.hzde.com:30473   # 30473 is the NodePort mapped to 443
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

   11-2. Deploy the prometheus + grafana monitoring stack

        1. Download kube-prometheus (again, if the domain is DNS-poisoned, resolve the IP and pin it in /etc/hosts first)

curl -L -o kube-prometheus_v0.9.0.tar.gz \
https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.9.0.tar.gz

tar xf kube-prometheus_v0.9.0.tar.gz

cd kube-prometheus-0.9.0

     2. Modify the yaml manifests

       Expose grafana-service via NodePort

cp manifests/grafana-service.yaml{,.ori}
sed -i '/spec:/a\  type: NodePort' manifests/grafana-service.yaml
sed -i '/targetPort:/a\    nodePort: 30200' manifests/grafana-service.yaml

      Review the change

diff manifests/grafana-service.yaml.ori  manifests/grafana-service.yaml
11a12
>   type: NodePort
15a17
>     nodePort: 30200

       Modify prometheus-service the same way

cp manifests/prometheus-service.yaml{,.ori}
sed -i '/spec:/a\  type: NodePort' manifests/prometheus-service.yaml
sed -i '/targetPort:/a\    nodePort: 30100' manifests/prometheus-service.yaml

        Review the change

diff manifests/prometheus-service.yaml.ori manifests/prometheus-service.yaml
12a13
>   type: NodePort
16a18
>     nodePort: 30100

     3. Point the image references at a reachable registry

sed -i '/image:/s@k8s.gcr.io/kube-state-metrics@willdockerhub@' $(grep -l image: manifests/*.yaml)
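
An optional sanity check that no manifest still points at an unreachable registry:

grep -h 'image:' manifests/*.yaml | sort -u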

     4. Deploy the CRDs

kubectl apply -f manifests/setup
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created

     5. Deploy kube-prometheus

kubectl apply -f manifests

        Output:

alertmanager.monitoring.coreos.com/main created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
servicemonitor.monitoring.coreos.com/prometheus-operator created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created

        6. Check the deployment

Some pods may well be unhealthy at this point; their logs usually show failed image pulls, which switching to manual pulls resolves.

Images that refuse to download at all can usually be found mirrored on Aliyun; there is always a way.
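
One way to list every image the namespace wants, so that each can be pulled by hand on every node (a hedged helper, not part of the original steps):

kubectl get po -n monitoring -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' | sort -u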

kubectl get po -n monitoring

           Output:

NAME                                   READY   STATUS    RESTARTS        AGE
alertmanager-main-0                    2/2     Running   0               9m2s
alertmanager-main-1                    2/2     Running   0               9m2s
alertmanager-main-2                    2/2     Running   0               9m2s
blackbox-exporter-567dc8c7d4-ts5rr     3/3     Running   0               11m
grafana-6dd5b5f65-vxld6                1/1     Running   0               11m
kube-state-metrics-69b5f46bb9-bjjch    3/3     Running   0               11m
node-exporter-7rpts                    2/2     Running   0               11m
node-exporter-f9m77                    2/2     Running   0               11m
node-exporter-xlz2t                    2/2     Running   0               11m
prometheus-adapter-59df95d9f5-kglql    1/1     Running   0               11m
prometheus-adapter-59df95d9f5-n99fb    1/1     Running   0               11m
prometheus-k8s-0                       2/2     Running   1 (9m ago)      9m2s
prometheus-k8s-1                       2/2     Running   1 (8m59s ago)   9m2s
prometheus-operator-7775c66ccf-npmxw   2/2     Running   0               11m
kubectl get svc -n monitoring

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.109.167.164   <none>        9093/TCP                     12m
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   9m46s
blackbox-exporter       ClusterIP   10.109.154.125   <none>        9115/TCP,19115/TCP           12m
grafana                 NodePort    10.99.112.125    <none>        3000:30200/TCP               12m
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            11m
node-exporter           ClusterIP   None             <none>        9100/TCP                     11m
prometheus-adapter      ClusterIP   10.107.102.121   <none>        443/TCP                      11m
prometheus-k8s          NodePort    10.103.179.200   <none>        9090:30100/TCP               11m
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     9m46s
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     12m

 If the errors cannot be resolved, wipe it clean and start over. The monitoring namespace tends to get stuck in Terminating; to delete it:

kubectl get namespace monitoring -o json > tmp.json 

Delete the contents of the spec block (the finalizers), save and exit:
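
If jq is available, the manual edit can be skipped (a sketch, not part of the original steps):

kubectl get namespace monitoring -o json | jq '.spec.finalizers=[]' > tmp.json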

Then open a new terminal and run kubectl proxy to serve the API on local port 8081:

kubectl proxy --port=8081

Finally, call the namespace finalize endpoint:

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8081/api/v1/namespaces/monitoring/finalize

Check the namespaces again; monitoring is now gone.

Then reboot the VMs (and check whether they have enough memory allocated); after the reboot everything was OK.

Then reinstall kube-prometheus.

One more problem: with the kube-prometheus-0.9.0 download, the deployment wants the image

k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.0

which cannot be found anywhere (short of building it yourself from the git source). The workaround: pull v0.8.4 instead and retag it:

docker pull directxman12/k8s-prometheus-adapter:v0.8.4

docker tag directxman12/k8s-prometheus-adapter:v0.8.4 k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.0

Check again afterwards; everything comes up fine.

 7. Access in the browser

Open a browser at http://192.168.47.128:30200/ to reach Grafana (Prometheus is on port 30100).

The default username and password are admin / admin; change the password on first login.

11-3. Deploy the EFK logging stack (web-based log viewing)

      1. 
