Versions

Image version: openEuler-21.03-x86_64-dvd.iso
k8s version: 1.21.0
isula version: 2.0.8

Preparation

1. Prepare one host as the k8s master node, with at least 4 GB of memory
2. Prepare 1-2 hosts as k8s worker nodes

Set the hostnames of the master and the nodes:
On the master host: hostnamectl set-hostname k8s-master01
On the node1 host: hostnamectl set-hostname k8s-node01

Edit /etc/hosts on both the master and the nodes and append the following entries:
<master IP> <master hostname>
<node IP> <node hostname>
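
For example, using the master IP that appears later in this guide and a made-up node IP (adjust both to your environment):

192.168.43.128 k8s-master01
192.168.43.129 k8s-node01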

Note: if you are using virtual machines, a bridged network is recommended; otherwise you may later hit the error: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

Change the repo source and download the k8s packages

bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'

Install the latest packages

yum install -y kubelet kubeadm kubectl iSulad
systemctl enable --now kubelet.service
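
If you would rather pin to the versions listed at the top of this guide than install whatever is latest, a sketch (the exact version strings available in the repository may differ):

yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 iSulad
systemctl enable --now kubelet.service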

Turn off and configure system services

#systemctl stop firewalld && systemctl disable firewalld
#iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
#swapoff -a
#sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
#setenforce 0
#service dnsmasq stop && systemctl disable dnsmasq
#systemctl restart isulad

Kernel parameter settings:

Create the configuration file

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF

#Apply the configuration file
#sysctl -p /etc/sysctl.d/kubernetes.conf

Check the system images required by k8s

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.21.0
k8s.gcr.io/kube-controller-manager:v1.21.0
k8s.gcr.io/kube-scheduler:v1.21.0
k8s.gcr.io/kube-proxy:v1.21.0
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

As you can see, the default k8s image registry is k8s.gcr.io, which is unreachable from mainland China. This guide uses the Aliyun mirror instead, so the downloaded images will be named:
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

Note: the iSulad configuration below uses the k8s system image registry.aliyuncs.com/google_containers/pause:3.4.1
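
Optionally, that pause image can be pulled in advance with isula so it is already present before kubeadm init runs. A minimal sketch, assuming the Aliyun mirror serves this tag (it must match the pod-sandbox-image configured below):

isula pull registry.aliyuncs.com/google_containers/pause:3.4.1
isula images | grep pause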

Configure the iSulad /etc/isulad/daemon.json file

Edit /etc/isulad/daemon.json and change or add the following fields; if you are running a newer version, compare against your existing file before modifying:

{
    "group": "isula",
    "default-runtime": "lcr",
    "graph": "/var/lib/isulad",
    "state": "/var/run/isulad",
    "engine": "lcr",
    "log-level": "ERROR",
    "pidfile": "/var/run/isulad.pid",
    "log-opts": {
        "log-file-mode": "0600",
        "log-path": "/var/lib/isulad",
        "max-file": "1",
        "max-size": "30KB"
    },
    "log-driver": "stdout",
    "container-log": {
        "driver": "json-file"
    },
    "hook-spec": "/etc/default/isulad/hooks/default.json",
    "start-timeout": "2m",
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "registry-mirrors": [
        "docker.io"
    ],
    "insecure-registries": [
        "rnd-dockerhub.huawei.com"
    ],
    "pod-sandbox-image": "registry.aliyuncs.com/google_containers/pause:3.4.1",
    "image-opt-timeout": "5m",
    "image-server-sock-addr": "unix:///var/run/isulad/isula_image.sock",
    "native.umask": "secure",
    "network-plugin": "cni",
    "cni-bin-dir": "/opt/cni/bin",
    "cni-conf-dir": "/etc/cni/net.d",
    "image-layer-check": false,
    "use-decrypted-key": true,
    "insecure-skip-verify-enforce": false
}

Restart isulad: systemctl restart isulad
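
To confirm that iSulad came back up with the new configuration, a quick check (a sketch using standard systemd and isula commands):

systemctl status isulad
isula info
ls -l /var/run/isulad.sock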

Note: all of the steps above must be performed on both the k8s master node and the worker nodes

Install the kube-system images on the master host (run on the master only)

1. Run the command: #kubeadm init --kubernetes-version=1.21.0 --apiserver-advertise-address=192.168.43.128 --cri-socket=/var/run/isulad.sock --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

#--apiserver-advertise-address: set this to the master node IP
#--service-cidr: the IP range to assign to services
#--pod-network-cidr: the IP range to assign to pods
#--cri-socket=/var/run/isulad.sock: selects isulad as the CRI engine (important)
#--image-repository: the default registry is unreachable, so point to the Aliyun mirror
If the k8s system images were not imported with isula beforehand, kubeadm will now pull them from the Aliyun mirror.

At this point the following error is reported:

failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: time="2021-04-29T10:20:07+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = Failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 with error: registry response invalid status code 401"

The cause is that coredns/coredns:v1.8.0 does not exist under the Aliyun mirror path. Download it manually with isula pull coredns/coredns:1.8.0, then retag it with isula tag once the pull succeeds:
isula tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
isula rmi coredns/coredns:1.8.0

Use isula images to confirm the images have been downloaded.
Run the command again: kubeadm init --kubernetes-version=1.21.0 --apiserver-advertise-address=192.168.43.128 --cri-socket=/var/run/isulad.sock --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
After a short wait the initialization completes and a success message is printed, ending with a kubeadm join command.
Save that join command; it is needed later when the worker nodes join the cluster.
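
If the join command is lost, it can be regenerated on the master at any time with kubeadm:

kubeadm token create --print-join-command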

Start the k8s service on the master

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Check the health status

Running kubectl get cs shows:

NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"} 

Fix: comment out the "- --port=0" line in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests.
After the change, run kubectl get cs again; all components should now report Healthy.
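
A hedged sketch of making that change with sed instead of editing the manifests by hand (back up the files first; the kubelet restarts the static pods automatically once the manifests change):

cd /etc/kubernetes/manifests
sed -i 's/- --port=0/#- --port=0/' kube-controller-manager.yaml kube-scheduler.yaml
kubectl get cs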

Install the CNI plugin

mkdir -p /etc/kubernetes/addons
Create the calico-rbac-kdd.yaml configuration file in that directory:

# Calico Version v3.1.3
# Project Calico Documentation Archives
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

Then run the following commands to complete the Calico installation:
kubectl apply -f /etc/kubernetes/addons/calico-rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
Use kubectl get pod -n kube-system to check whether Calico was installed successfully
Use kubectl get pod -n kube-system to check whether all pods are in the Running state
Use kubectl get node to check whether the master is Ready
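
Put together, the verification can look like this (a minimal sketch):

kubectl get pod -n kube-system | grep calico
kubectl get pod -n kube-system
kubectl get node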

Add the worker nodes

Run the following command on the node host(s):
kubeadm join 192.168.43.128:6443 --token gcxc55.i7bv4dm2yq48onde --discovery-token-ca-cert-hash sha256:d1b92434da21ffb13851d33170dc6599fb4133623dca540b3059abadbe302794 --cri-socket=/var/run/isulad.sock
Once it succeeds, run kubectl get node on the master and confirm the node shows as Ready.

Install Prometheus

1. Download the Prometheus archive from GitHub at:
https://codeload.github.com/prometheus-operator/kube-prometheus/zip/refs/heads/main
2. Copy the archive to /root on the master host and unpack it to get the kube-prometheus-main directory
3. cd into the /root/kube-prometheus-main directory
4. Modify /root/kube-prometheus-main/manifests/grafana-service.yaml (the change was shown in a screenshot that is not reproduced here; see the sketch after this list)
5. Modify /root/kube-prometheus-main/manifests/prometheus-service.yaml (screenshot not reproduced; see the sketch after this list)
6. Modify the file under /root/kube-prometheus-main/manifests/ (the file name and screenshot are not reproduced)
7. Run the commands:
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces; do date; sleep 1; echo ""; done
kubectl create -f manifests/
8. Run kubectl get pod -n monitoring to check the status of each pod
9. You will see kube-state-metrics-76f6cb7996-c8xn7 2/3 ImagePullBackOff. This happens because the image is not available from the Aliyun mirror; download it with isula pull bitnami/kube-state-metrics:2.0.0 (both the master and the nodes need to pull it)
Then change the tag:
#isula tag bitnami/kube-state-metrics:2.0.0 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
#isula rmi bitnami/kube-state-metrics:2.0.0
10. Check again with kubectl get pod -n monitoring; once every pod in the monitoring namespace is Running, the installation has succeeded.
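
The screenshots for steps 4-6 are not reproduced here; a common goal of those edits is to expose the Grafana and Prometheus services outside the cluster. As an alternative sketch (assuming the default kube-prometheus service names grafana and prometheus-k8s in the monitoring namespace), the same effect can be achieved after creation with kubectl patch:

kubectl -n monitoring patch svc grafana -p '{"spec":{"type":"NodePort"}}'
kubectl -n monitoring patch svc prometheus-k8s -p '{"spec":{"type":"NodePort"}}'
kubectl -n monitoring get svc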
