【k8s】kubeasz 3.6.3 + VirtualBox: offline deployment of a local three-node openEuler 22.03 cluster
kubeasz project source:
GitHub - easzlab/kubeasz: installs a K8s cluster with Ansible playbooks, explains how the components interact, is straightforward to use, and is unaffected by mainland-China network restrictions
Clone the code and check out the most recent release:
git clone https://github.com/easzlab/kubeasz
cd kubeasz
git checkout 3.6.3
Copy the SSH key to each node with ssh-copy-id (run as root):
ssh-copy-id root@10.47.76.73
ssh-copy-id root@10.47.76.74
ssh-copy-id root@10.47.76.76
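The three ssh-copy-id calls above can be collapsed into a loop. A minimal sketch, assuming the key pair already exists (`ssh-keygen`) and that the node list matches the three IPs used in this article; the leading `echo` makes it a dry run:

```shell
# Dry-run sketch: print one ssh-copy-id command per node.
# NODES mirrors the three IPs used in this article (an assumption).
NODES="10.47.76.73 10.47.76.74 10.47.76.76"
for ip in $NODES; do
  echo ssh-copy-id "root@${ip}"   # drop 'echo' to push the keys for real
done
```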
Install ansible on the deploy machine (Ubuntu 22.04 x86_64):
sudo apt install ansible -y
Download the resources:
yeqiang@yeqiang-MS-7B23:~/Downloads/src/kubeasz$ sudo ./ezdown -D
2024-03-25 10:03:40 INFO Action begin: download_all
2024-03-25 10:03:40 INFO downloading docker binaries, arch:x86_64, version:24.0.7
--2024-03-25 10:03:40-- https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/docker-24.0.7.tgz
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.15.130, 2402:f000:1:400::2
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.15.130|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 69831072 (67M) [application/octet-stream]
Saving to: ‘docker-24.0.7.tgz’
docker-24.0.7.tgz 100%[=====================================================================================>] 66.60M 1.54MB/s in 41s
2024-03-25 10:04:21 (1.64 MB/s) - ‘docker-24.0.7.tgz’ saved [69831072/69831072]
2024-03-25 10:04:22 WARN docker is already running.
2024-03-25 10:04:22 INFO downloading kubeasz: 3.6.3
3.6.3: Pulling from easzlab/kubeasz
f56be85fc22e: Pull complete
ea5757f4b3f8: Pull complete
bd0557c686d8: Pull complete
37d4153ce1d0: Pull complete
b39eb9b4269d: Pull complete
a3cff94972c7: Pull complete
b66d4ab4ee64: Pull complete
Digest: sha256:13135e1ef95ecdb392677b9b7067923cf41fc4371cd0c1eb8b024cf442512a63
Status: Downloaded newer image for easzlab/kubeasz:3.6.3
docker.io/easzlab/kubeasz:3.6.3
2024-03-25 10:05:39 DEBUG run a temporary container
7b65d19edc6efd95cc4bc646401407fcff91e0aa7681a60ae1c84a5108a30ee8
2024-03-25 10:05:42 DEBUG cp kubeasz code from the temporary container
Successfully copied 2.89MB to /etc/kubeasz
2024-03-25 10:05:42 DEBUG stop&remove temporary container
temp_easz
2024-03-25 10:05:44 INFO downloading kubernetes: v1.29.0 binaries
v1.29.0: Pulling from easzlab/kubeasz-k8s-bin
1b7ca6aea1dd: Already exists
1cf75602dde9: Pull complete
4ae371062546: Pull complete
Digest: sha256:adf57dbaec3f7c08b2276aac03a1bb4feae5e0ef294dfdc191c6603e85cf6ccd
Status: Downloaded newer image for easzlab/kubeasz-k8s-bin:v1.29.0
docker.io/easzlab/kubeasz-k8s-bin:v1.29.0
2024-03-25 10:21:01 DEBUG run a temporary container
09a9a5c39ea39f98176ff67418654965bb6885e33db405ae84371dbe2a2861dc
2024-03-25 10:21:13 DEBUG cp k8s binaries
Successfully copied 515MB to /etc/kubeasz/k8s_bin_tmp
2024-03-25 10:21:14 DEBUG stop&remove temporary container
temp_k8s_bin
2024-03-25 10:21:14 INFO downloading extral binaries kubeasz-ext-bin:1.9.0
1.9.0: Pulling from easzlab/kubeasz-ext-bin
070eb51debd9: Pull complete
824ac05263f5: Pull complete
6ab8bf2594e2: Pull complete
cb81b024c20f: Pull complete
e4d14742b324: Pull complete
f84999fd6cee: Pull complete
50eb857ee625: Pull complete
89e5b14263dd: Pull complete
Digest: sha256:aaf5296518cb3f03602e545bac9216925184dbfcbb6c70e4bde76f9751cf21c3
Status: Downloaded newer image for easzlab/kubeasz-ext-bin:1.9.0
docker.io/easzlab/kubeasz-ext-bin:1.9.0
2024-03-25 10:25:26 DEBUG run a temporary container
fa445258b9b658dfe599946d00f1e4e570994d3a8e69a88dfc92aba420cae614
2024-03-25 10:25:30 DEBUG cp extral binaries
Successfully copied 648MB to /etc/kubeasz/extra_bin_tmp
2024-03-25 10:25:31 DEBUG stop&remove temporary container
temp_ext_bin
2: Pulling from library/registry
619be1103602: Pull complete
5daf2fb85fb9: Pull complete
ca5f23059090: Pull complete
8f2a82336004: Pull complete
68c26f40ad80: Pull complete
Digest: sha256:fb9c9aef62af3955f6014613456551c92e88a67dcf1fc51f5f91bcbd1832813f
Status: Downloaded newer image for registry:2
docker.io/library/registry:2
2024-03-25 10:25:47 INFO start local registry ...
c3830f310e24c9a3cb310cec259a74f438fb438103612e4919842a41016f7dae
2024-03-25 10:25:49 INFO download default images, then upload to the local registry
v3.26.4: Pulling from calico/cni
2a2cc8873d88: Pull complete
f689a1b6ffc9: Pull complete
222ddc102977: Pull complete
bb231ec660e2: Pull complete
c274814db7a5: Pull complete
c04ab43d8c14: Pull complete
56e4809beb2c: Pull complete
82a9d7b9ead4: Pull complete
2e8423cc9523: Pull complete
dbb2b79785d1: Pull complete
15e4b2899800: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:7c5895c5d6ed3266bcd405fbcdbb078ca484688673c3479f0f18bf072d58c242
Status: Downloaded newer image for calico/cni:v3.26.4
docker.io/calico/cni:v3.26.4
v3.26.4: Pulling from calico/kube-controllers
312c81d49b31: Pull complete
21f1655e08ac: Pull complete
807fead6050f: Pull complete
1abfcfa9d8cd: Pull complete
9398ffacf522: Pull complete
3379ce07ff21: Pull complete
f5745fd91cba: Pull complete
b2d1ec87e4a2: Pull complete
9ebe38a91c19: Pull complete
d92a41934dc3: Pull complete
7427cd509920: Pull complete
1726ce00d070: Pull complete
dcd892b22925: Pull complete
8b58b0d1e6a1: Pull complete
Digest: sha256:5fce14b4dfcd63f1a4663176be4f236600b410cd896d054f56291c566292c86e
Status: Downloaded newer image for calico/kube-controllers:v3.26.4
docker.io/calico/kube-controllers:v3.26.4
v3.26.4: Pulling from calico/node
c596d07e602a: Pull complete
9ae8e7f0c0b3: Pull complete
Digest: sha256:a8b77a5f27b167501465f7f5fb7601c44af4df8dccd1c7201363bbb301d1fe40
Status: Downloaded newer image for calico/node:v3.26.4
docker.io/calico/node:v3.26.4
The push refers to repository [easzlab.io.local:5000/calico/cni]
5f70bf18a086: Pushed
7dff43aa1268: Pushed
14fdc63b97b8: Pushed
ae844ae009c7: Pushed
3d2540981e86: Pushed
5743eb3b1640: Pushed
6c2e5970601b: Pushed
50fa5e13eb34: Pushed
468901d6015e: Pushed
e4dea417b6a9: Pushed
fbe0fc515554: Pushed
8a287df44e83: Pushed
v3.26.4: digest: sha256:3540aa94aea8fcd41edd8490a82847bbf6a9a52215f0550c27e196441d234f57 size: 2823
The push refers to repository [easzlab.io.local:5000/calico/kube-controllers]
15e2f86dd9c8: Pushed
6de775fe835c: Pushed
2bf7b670d125: Pushed
c40c18a1888a: Pushed
f65cfcb50057: Pushed
999a8e768b19: Pushed
04873e012646: Pushed
73e66a55b78b: Pushed
aff2e5741039: Pushed
69fff1fdf097: Pushed
1fe60555ee28: Pushed
1e3024c01822: Pushed
ff28c98ce459: Pushed
2235e9b55c14: Pushed
v3.26.4: digest: sha256:b7625323054de4420ba27761d4120ad300d3aa7e0109c8bc41a24ca4bcdd3471 size: 3240
The push refers to repository [easzlab.io.local:5000/calico/node]
c0eef34472c4: Pushed
f4270759c5ec: Pushed
v3.26.4: digest: sha256:0b242b133d70518988a5a36c1401ee4f37bf937743ecceafd242bd821b6645c6 size: 737
1.11.1: Pulling from coredns/coredns
dd5ad9c9c29f: Pull complete
960043b8858c: Pull complete
b4ca4c215f48: Pull complete
eebb06941f3e: Pull complete
02cd68c0cbf6: Pull complete
d3c894b5b2b0: Pull complete
b40161cd83fc: Pull complete
46ba3f23f1d3: Pull complete
4fa131a1b726: Pull complete
860aeecad371: Pull complete
c54d895c1975: Pull complete
Digest: sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
Status: Downloaded newer image for coredns/coredns:1.11.1
docker.io/coredns/coredns:1.11.1
The push refers to repository [easzlab.io.local:5000/coredns/coredns]
545a68d51bc4: Pushed
aec96fc6d10e: Pushed
4cb10dd2545b: Pushed
d2d7ec0f6756: Pushed
1a73b54f556b: Pushed
e624a5370eca: Pushed
d52f02c6501c: Pushed
ff5700ec5418: Pushed
7bea6b893187: Pushed
6fbdf253bbc2: Pushed
e023e0e48e6e: Pushed
1.11.1: digest: sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870 size: 2612
1.22.23: Pulling from easzlab/k8s-dns-node-cache
6c4682e3383e: Pull complete
89414bc462f0: Pull complete
e11308cddc2e: Pull complete
ac73bbef8d7c: Pull complete
07a0455b7f8d: Pull complete
772dc49a1658: Pull complete
Digest: sha256:9fced15a756c8cec1fd8347a268958d49a2927f713bf742a821752b9f39bcead
Status: Downloaded newer image for easzlab/k8s-dns-node-cache:1.22.23
docker.io/easzlab/k8s-dns-node-cache:1.22.23
The push refers to repository [easzlab.io.local:5000/easzlab/k8s-dns-node-cache]
4f165a38d33f: Pushed
71ff73bde640: Pushed
0b6ea7c7e5fa: Pushed
5c3659a2da85: Pushed
66673051b8a2: Pushed
2e1e0b8e464d: Pushed
1.22.23: digest: sha256:9cebf9ba45e040b2b4bc3a3c6e9e2662a080e4a588750bc3a3477fec51f9f395 size: 1571
v2.7.0: Pulling from kubernetesui/dashboard
ee3247c7e545: Pull complete
8e052fd7e2d0: Pull complete
Digest: sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Status: Downloaded newer image for kubernetesui/dashboard:v2.7.0
docker.io/kubernetesui/dashboard:v2.7.0
The push refers to repository [easzlab.io.local:5000/kubernetesui/dashboard]
c88361932af5: Pushed
bd8a70623766: Pushed
v2.7.0: digest: sha256:ef134f101e8a4e96806d0dd839c87c7f76b87b496377422d20a65418178ec289 size: 736
v1.0.8: Pulling from kubernetesui/metrics-scraper
978be80e3ee3: Pull complete
5866d2c04d96: Pull complete
Digest: sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.8
docker.io/kubernetesui/metrics-scraper:v1.0.8
The push refers to repository [easzlab.io.local:5000/kubernetesui/metrics-scraper]
bcec7eb9e567: Pushed
d01384fea991: Pushed
v1.0.8: digest: sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a size: 736
v0.6.4: Pulling from easzlab/metrics-server
a7ca0d9ba68f: Pull complete
fe5ca62666f0: Pull complete
b02a7525f878: Pull complete
fcb6f6d2c998: Pull complete
e8c73c638ae9: Pull complete
1e3d9b7d1452: Pull complete
4aa0ea1413d3: Pull complete
7c881f9ab25e: Pull complete
5627a970d25e: Pull complete
c11e15826cd6: Pull complete
Digest: sha256:08b3388f924fa52a3c9d0a9bad43746250a3e82c1414e6cefb7966dd29a9e760
Status: Downloaded newer image for easzlab/metrics-server:v0.6.4
docker.io/easzlab/metrics-server:v0.6.4
The push refers to repository [easzlab.io.local:5000/easzlab/metrics-server]
2e843aeae1b3: Pushed
4cb10dd2545b: Mounted from coredns/coredns
d2d7ec0f6756: Mounted from coredns/coredns
1a73b54f556b: Mounted from coredns/coredns
e624a5370eca: Mounted from coredns/coredns
d52f02c6501c: Mounted from coredns/coredns
ff5700ec5418: Mounted from coredns/coredns
7bea6b893187: Mounted from coredns/coredns
6fbdf253bbc2: Mounted from coredns/coredns
e023e0e48e6e: Mounted from coredns/coredns
v0.6.4: digest: sha256:3f9cbdca6bedc8cac2d7575d29ceb2be5d17ea3dc812de9631f95ba48205d1b3 size: 2402
3.9: Pulling from easzlab/pause
61fec91190a0: Pull complete
Digest: sha256:d5fee2a95eaaefc3a0b8a914601b685e4170cb870ac319ac5a9bfb7938389852
Status: Downloaded newer image for easzlab/pause:3.9
docker.io/easzlab/pause:3.9
The push refers to repository [easzlab.io.local:5000/easzlab/pause]
e3e5579ddd43: Pushed
3.9: digest: sha256:3ec9d4ec5512356b5e77b13fddac2e9016e7aba17dd295ae23c94b2b901813de size: 527
2024-03-25 10:38:57 INFO Action successed: download_all
Main resources downloaded
See the figure below. Since ezdown does not support openEuler 22.03, the OS offline packages will have to be downloaded manually later.
Switch to root and cd into the downloaded resources directory:
sudo su
cd /etc/kubeasz/
Run kubeasz in a container:
root@yeqiang-MS-7B23:/etc/kubeasz# ./ezdown -S
2024-03-25 10:49:31 INFO Action begin: start_kubeasz_docker
Loaded image: easzlab/kubeasz:3.6.3
2024-03-25 10:49:32 INFO try to run kubeasz in a container
2024-03-25 10:49:32 DEBUG get host IP: 10.47.76.45
77883f725775c025a4cde5aa1d8d148089499d292795c5899cdb9cef5bebb832
2024-03-25 10:49:33 INFO Action successed: start_kubeasz_docker
The started container should now be visible with docker ps.
Create a cluster named k8s-local:
root@yeqiang-MS-7B23:/etc/kubeasz# docker exec -it kubeasz ezctl new k8s-local
2024-03-25 10:54:05 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-local
2024-03-25 10:54:05 DEBUG set versions
2024-03-25 10:54:06 DEBUG disable registry mirrors
2024-03-25 10:54:06 DEBUG cluster k8s-local: files successfully created.
2024-03-25 10:54:06 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-local/hosts'
2024-03-25 10:54:06 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-local/config.yml'
Configure /etc/kubeasz/clusters/k8s-local/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.47.76.73
10.47.76.74
10.47.76.76
# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_master]
10.47.76.73 k8s_nodename='master-01'
10.47.76.74 k8s_nodename='master-02'
10.47.76.76 k8s_nodename='master-03'
# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_node]
10.47.76.73 k8s_nodename='worker-01'
10.47.76.74 k8s_nodename='worker-02'
10.47.76.76 k8s_nodename='worker-03'
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
10.47.76.73 NEW_INSTALL=true
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
10.47.76.73 LB_ROLE=backup EX_APISERVER_VIP=10.47.76.201 EX_APISERVER_PORT=8443
10.47.76.74 LB_ROLE=master EX_APISERVER_VIP=10.47.76.201 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
10.47.76.73
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
CONTAINER_RUNTIME="containerd"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-32767"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"
# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-local"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Default 'k8s_nodename' is empty
k8s_nodename=''
# Default python interpreter
ansible_python_interpreter=/usr/bin/python3
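Before moving on it is worth confirming that ansible can reach every host in the new inventory. A sketch, run from /etc/kubeasz on the deploy machine (the inventory path matches the cluster created above):

```shell
INV=clusters/k8s-local/hosts
# List the unique host IPs in the inventory (lines that start with a digit):
if [ -f "$INV" ]; then
  awk '/^[0-9]/{print $1}' "$INV" | sort -u
fi
# Every node should answer "pong":
echo ansible -i "$INV" all -m ping   # drop the leading 'echo' to run it
```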
Configure /etc/kubeasz/clusters/k8s-local/config.yml
############################
# prepare
############################
# optional: install OS packages from the offline source (offline|online)
INSTALL_SOURCE: "offline"
# optional: OS security hardening, see github.com/dev-sec/ansible-collection-hardening
# (deprecated) upstream is unmaintained and unverified against recent k8s releases; not recommended
OS_HARDEN: false
############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# force to recreate CA and other certs, not suggested to set 'true'
CHANGE_CA: false
# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
# k8s version
K8S_VER: "1.29.0"
# set unique 'k8s_nodename' for each node, if not set(default:'') ip add will be used
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character (e.g. 'example.com'),
# regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
K8S_NODENAME: "{%- if k8s_nodename != '' -%} \
{{ k8s_nodename|replace('_', '-')|lower }} \
{%- else -%} \
{{ inventory_hostname }} \
{%- endif -%}"
############################
# role:etcd
############################
# a separate wal directory avoids disk IO contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# [.] enable mirror registries for faster image pulls
ENABLE_MIRROR_REGISTRY: false
# [.] trusted private/insecure registries
INSECURE_REG:
- "http://easzlab.io.local:5000"
- "https://{{ HARBOR_REGISTRY }}"
# [.] sandbox (pause) image
SANDBOX_IMAGE: "easzlab.io.local:5000/easzlab/pause:3.9"
# [containerd] container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"
# [docker] enable the RESTful remote API
DOCKER_ENABLE_REMOTE_API: false
############################
# role:kube-master
############################
# extra IPs/domains to put in the master node certificates (e.g. a public IP or domain)
MASTER_CERT_HOSTS:
- "10.47.76.73"
# pod subnet mask length per node (determines how many pod IPs each node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this value to assign each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# maximum pods per node
MAX_PODS: 110
# reserve resources for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"
# k8s upstream advises against enabling system-reserved casually -- only do so after
# long-term monitoring of actual system resource usage, and raise the reservation as
# uptime grows; see templates/kubelet-config.yaml.j2 for the values
# the defaults assume a minimally installed 4c/8g VM; increase them on high-end physical machines
# note: apiserver and friends spike during cluster install -- reserve at least 1g of memory
SYS_RESERVED_ENABLED: "no"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel]
flannel_ver: "v0.22.2"
# ------------------------------------------- calico
# [calico] IPIP tunnel mode: [Always, CrossSubnet, Never]. Across subnets use Always
# or CrossSubnet (on public clouds Always is simplest; anything else means adjusting
# the provider's network configuration -- see each cloud's documentation).
# CrossSubnet is a hybrid tunnel+BGP-routing mode that improves performance;
# within a single subnet, Never is enough.
CALICO_IPV4POOL_IPIP: "Always"
# [calico] host IP used by calico-node; BGP peering is established via this address
# (set manually or auto-detected)
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico] networking backend: bird, vxlan, none
CALICO_NETWORKING_BACKEND: "bird"
# [calico] whether to use route reflectors
# recommended once the cluster exceeds about 50 nodes
CALICO_RR_ENABLED: false
# CALICO_RR_NODES: the route-reflector nodes; defaults to the cluster master nodes if unset
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []
# [calico] supported calico versions: ["3.19", "3.23"]
calico_ver: "v3.26.4"
# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# ------------------------------------------- cilium
# [cilium] image version
cilium_ver: "1.14.5"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false
# ------------------------------------------- kube-ovn
# [kube-ovn] offline image tarball version
kube_ovn_ver: "v1.11.5"
# ------------------------------------------- kube-router
# [kube-router] public clouds impose restrictions, so ipinip usually stays always-on;
# on your own infrastructure this can be set to "subnet"
OVERLAY_TYPE: "full"
# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: true
# [kube-router] image version
kube_router_ver: "v1.5.4"
############################
# role:cluster-addon
############################
# install coredns automatically
dns_install: "yes"
corednsVer: "1.11.1"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.22.23"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"
# install metrics-server automatically
metricsserver_install: "yes"
metricsVer: "v0.6.4"
# install dashboard automatically
dashboard_install: "yes"
dashboardVer: "v2.7.0"
dashboardMetricsScraperVer: "v1.0.8"
# install prometheus automatically
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "45.23.0"
# install kubeapps automatically; if enabled, local-storage is installed as well
# (provides storageClass: "local-path")
kubeapps_install: "no"
kubeapps_install_namespace: "kubeapps"
kubeapps_working_namespace: "default"
kubeapps_storage_class: "local-path"
kubeapps_chart_ver: "12.4.3"
# install local-storage (local-path-provisioner) automatically
local_path_provisioner_install: "no"
local_path_provisioner_ver: "v0.0.24"
# default local storage path
local_path_provisioner_dir: "/opt/local-path-provisioner"
# install nfs-provisioner automatically
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
# install network-check automatically
network_check_enabled: false
network_check_schedule: "*/5 * * * *"
############################
# role:harbor
############################
# harbor version (full version string)
HARBOR_VER: "v2.8.4"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{ HARBOR_DOMAIN }}:{{ HARBOR_TLS_PORT }}"
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true
# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
Point the local-registry domain at the deploy machine (10.47.76.45) in all three VMs' /etc/hosts ahead of time:
ansible -i clusters/k8s-local/hosts etcd -m shell -a "echo '10.47.76.45 easzlab.io.local' >> /etc/hosts"
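Note that the `>>` append is not idempotent: running the ad-hoc command twice leaves a duplicate entry. A guarded sketch (the scratch-file default is for safe local testing; point HOSTS_FILE at /etc/hosts on the nodes):

```shell
# Idempotent append: add the registry mapping only when the exact line is missing.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"   # set to /etc/hosts on the real nodes
ENTRY='10.47.76.45 easzlab.io.local'
touch "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
```

Ansible's lineinfile module achieves the same effect remotely, e.g. `ansible -i clusters/k8s-local/hosts etcd -m lineinfile -a "path=/etc/hosts line='10.47.76.45 easzlab.io.local'"`.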
Run the install:
docker exec -it kubeasz ezctl setup k8s-local all
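If an `all` run fails midway, ezctl can also run the installation stage by stage, which makes it easier to pinpoint where things stop; kubeasz numbers the stages 01 through 07 (prepare through cluster-addon). This loop only prints the per-stage commands for reference:

```shell
# Dry run: print the staged setup commands (stages 01..07).
for step in 01 02 03 04 05 06 07; do
  echo docker exec -it kubeasz ezctl setup k8s-local "$step"
done
```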
etcd startup failure:
{"changed": true, "cmd": "systemctl daemon-reload && systemctl restart etcd", "delta": "0:01:30.273138", "end": "2024-03-25 15:59:28.935152", "msg": "non-zero return code", "rc": 1, "start": "2024-03-25 15:57:58.662014", "stderr": "Job for etcd.service failed because a timeout was exceeded.\nSee \"systemctl status etcd.service\" and \"journalctl -xeu etcd.service\" for details.", "stderr_lines": ["Job for etcd.service failed because a timeout was exceeded.", "See \"systemctl status etcd.service\" and \"journalctl -xeu etcd.service\" for details."], "stdout": "", "stdout_lines": []}
Since these are brand-new VMs, environmental causes can be ruled out; most likely the etcd cluster failed to form within the startup timeout. The underlying error:
"error":"dial tcp 10.47.76.74:2380: connect: no route to host"
Manually restarting etcd also fails:
root@yeqiang-MS-7B23:/etc/kubeasz# ansible -i clusters/k8s-local/hosts etcd -m shell -a "systemctl restart etcd"
10.47.76.76 | FAILED | rc=1 >>
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.non-zero return code
10.47.76.74 | FAILED | rc=1 >>
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.non-zero return code
10.47.76.73 | FAILED | rc=1 >>
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.non-zero return code
This looks like a deployment-script bug; simply reboot all three VMs:
ansible -i clusters/k8s-local/hosts etcd -m shell -a "reboot"
After the reboot the failure persisted. Checking the firewall showed kubeasz had not turned it off, so disable firewalld directly:
ansible -i clusters/k8s-local/hosts etcd -m shell -a "systemctl disable firewalld --now"
That fixed it.
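With the firewall out of the way, etcd health can be verified directly with etcdctl before rerunning setup. A sketch only: the certificate paths assume kubeasz's defaults under /etc/kubernetes/ssl and may differ on your nodes, and the leading `echo` keeps it a dry run:

```shell
# Build the endpoint list from the three etcd nodes, then print the health check.
ENDPOINTS=$(printf 'https://%s:2379,' 10.47.76.73 10.47.76.74 10.47.76.76 | sed 's/,$//')
echo ETCDCTL_API=3 etcdctl --endpoints="$ENDPOINTS" \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health   # drop 'echo' to run on a node
```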
Re-run setup:
docker exec -it kubeasz ezctl setup k8s-local all
It errored again:
TASK [kube-master : 创建user:kubernetes角色绑定] ************************************************************************************************************************
fatal: [10.47.76.73]: FAILED! => {"changed": true, "cmd": ["/etc/kubeasz/bin/kubectl", "create", "clusterrolebinding", "kubernetes-crb", "--clusterrole=system:kubelet-api-admin", "--user=kubernetes"], "delta": "0:00:07.036646", "end": "2024-03-25 16:17:22.368261", "msg": "non-zero return code", "rc": 1, "start": "2024-03-25 16:17:15.331615", "stderr": "error: failed to create clusterrolebinding: etcdserver: request timed out", "stderr_lines": ["error: failed to create clusterrolebinding: etcdserver: request timed out"], "stdout": "", "stdout_lines": []}
Another etcd-induced failure. Presumably the iptables rules were not flushed when firewalld was stopped, so reboot the three VMs and run setup once more.
After the reboot, setup fails at a new spot:
TASK [kube-node : 轮询等待node达到Ready状态] ****************************************************************************************************************************
fatal: [10.47.76.73]: FAILED! => {"attempts": 8, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get node worker-01|awk 'NR>1{print $2}'", "delta": "0:00:00.044266", "end": "2024-03-25 16:29:30.470189", "msg": "", "rc": 0, "start": "2024-03-25 16:29:30.425923", "stderr": "Error from server (NotFound): nodes \"worker-01\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"worker-01\" not found"], "stdout": "", "stdout_lines": []}
fatal: [10.47.76.74]: FAILED! => {"attempts": 8, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get node worker-02|awk 'NR>1{print $2}'", "delta": "0:00:00.047791", "end": "2024-03-25 16:29:30.473716", "msg": "", "rc": 0, "start": "2024-03-25 16:29:30.425925", "stderr": "Error from server (NotFound): nodes \"worker-02\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"worker-02\" not found"], "stdout": "", "stdout_lines": []}
fatal: [10.47.76.76]: FAILED! => {"attempts": 8, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get node worker-03|awk 'NR>1{print $2}'", "delta": "0:00:00.057308", "end": "2024-03-25 16:29:30.483426", "msg": "", "rc": 0, "start": "2024-03-25 16:29:30.426118", "stderr": "Error from server (NotFound): nodes \"worker-03\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"worker-03\" not found"], "stdout": "", "stdout_lines": []}
Log in to one of the nodes and try a basic kubectl command:
[root@localhost ~]# kubectl get ns
E0325 16:32:29.117598 29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.118017 29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.119581 29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.120518 29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.122964 29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@localhost ~]# ls ~/.kube/
[root@localhost ~]# ls ~/.kube/ -la
total 8
drwxr-xr-x. 2 root root 4096 Mar 25 15:57 .
dr-xr-x---. 5 root root 4096 Mar 25 16:28 ..
No ~/.kube/config was generated for kubectl on the node, so kubectl falls back to its default http://localhost:8080/api address.
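kubectl looks for $HOME/.kube/config (or whatever $KUBECONFIG points at). kubeasz generates the admin kubeconfig on the deploy host, so to use kubectl on the nodes it has to be copied over. A dry-run sketch; the source path is an assumption, so verify it exists on the deploy machine first:

```shell
# Print the commands that would distribute the deploy host's admin kubeconfig.
SRC=/root/.kube/config   # assumed location on the deploy host -- verify first
for ip in 10.47.76.73 10.47.76.74 10.47.76.76; do
  echo ssh "root@${ip}" "mkdir -p /root/.kube"
  echo scp "$SRC" "root@${ip}:/root/.kube/config"
done
```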
bug?
To be continued...
References:
https://github.com/easzlab/kubeasz/blob/3.6.3/docs/setup/00-planning_and_overall_intro.md