KubeSphere - Basics - 1.2 - Deploying KubeSphere with KubeKey


1. Preparation

Install KubeSphere and Kubernetes (k8s) with KubeKey.

1.1 Documentation

https://kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/multioverview/

1.2 Machines

| Hostname | IP | Notes |
| --- | --- | --- |
| master | 192.168.187.111 | 4 cores, 8 GB RAM, 20 GB disk (2 cores minimum); roles: control plane, etcd |
| node1 | 192.168.187.112 | 4 cores, 8 GB RAM, 20 GB disk (2 cores minimum); role: worker |
| node2 | 192.168.187.113 | 4 cores, 8 GB RAM, 20 GB disk (2 cores minimum); role: worker |

1.3 Dependency requirements

KubeKey can install Kubernetes and KubeSphere together. Depending on the Kubernetes version to be installed, the dependencies that must be present may differ. Check the dependency table in the official documentation (linked above) to see whether you need to install them on your nodes in advance.


# Install the required dependencies on all nodes
yum install -y conntrack socat
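
As a quick sanity check, you can confirm on each node that both binaries are on the PATH; this is a minimal sketch, and the expected output is the full path of each command:

# Verify the dependencies are installed on this node
command -v conntrack socat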

2. Install KubeKey (on the master node)

2.1 Set the correct region for downloading KubeKey

# Run this first to make sure KubeKey is downloaded from the correct region
export KKZONE=cn

2.2 Download KubeKey

# Download online
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
  
# If the command above does not work, download the archive directly
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz

# Extract the archive
tar -xvf kubekey-v3.0.7-linux-amd64.tar.gz 

# Remove the archive
rm -rf kubekey-v3.0.7-linux-amd64.tar.gz 


# Verify
[root@zhoufei ~]# ll 
-rwxr-xr-x. 1 root root 78901793 Jan 18 2023 kk
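
To confirm the binary itself works, you can print the client version (the version subcommand is listed in the kk help output below), assuming kk was extracted to the current directory:

# Print the KubeKey client version
./kk version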

2.3 The kk command

Use kk -h to view the parameters of each command.

2.3.1 ./kk -h

[root@zhoufei ~]# ./kk -h 
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  alpha       Commands for features in alpha
  artifact    Manage a KubeKey offline installation package
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete node or cluster
  help        Help about any command
  init        Initializes the installation environment
  plugin      Provides utilities for interacting with plugins
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
  -h, --help   help for kk

Use "kk [command] --help" for more information about a command.

2.3.2 ./kk create -h


[root@zhoufei ~]# ./kk create -h
Create a cluster or a cluster configuration file

Usage:
  kk create [command]

Available Commands:
  cluster     Create a Kubernetes or KubeSphere cluster
  config      Create cluster configuration file
  manifest    Create an offline installation package configuration file

Flags:
      --debug              Print detailed information
  -h, --help               help for create
      --ignore-err         Ignore the error message, remove the host which reported error and force to continue
      --namespace string   KubeKey namespace to use (default "kubekey-system")
  -y, --yes                Skip confirm check

Use "kk create [command] --help" for more information about a command.

2.3.3 ./kk create config -h


[root@zhoufei ~]# ./kk create config -h
Create cluster configuration file

Usage:
  kk create config [flags]

Flags:
      --debug                    Print detailed information
  -f, --filename string          Specify a configuration file path
      --from-cluster             Create a configuration based on existing cluster
  -h, --help                     help for config
      --ignore-err               Ignore the error message, remove the host which reported error and force to continue
      --kubeconfig string        Specify a kubeconfig file
      --name string              Specify a name of cluster object (default "sample")
      --namespace string         KubeKey namespace to use (default "kubekey-system")
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.3.2)
  -y, --yes                      Skip confirm check


2.3.4 View the supported Kubernetes versions

[root@zhoufei ~]# ./kk version --show-supported-k8s
v1.19.0
v1.19.8
v1.19.9
v1.19.15
v1.20.4
v1.20.6
v1.20.10
v1.21.0
v1.21.1
v1.21.2
v1.21.3
v1.21.4
v1.21.5
v1.21.6
v1.21.7
v1.21.8
v1.21.9
v1.21.10
v1.21.11
v1.21.12
v1.21.13
v1.21.14
v1.22.0
v1.22.1
v1.22.2
v1.22.3
v1.22.4
v1.22.5
v1.22.6
v1.22.7
v1.22.8
v1.22.9
v1.22.10
v1.22.11
v1.22.12
v1.22.13
v1.22.14
v1.22.15
v1.22.16
v1.22.17
v1.23.0
v1.23.1
v1.23.2
v1.23.3
v1.23.4
v1.23.5
v1.23.6
v1.23.7
v1.23.8
v1.23.9
v1.23.10
v1.23.11
v1.23.12
v1.23.13
v1.23.14
v1.23.15
v1.24.0
v1.24.1
v1.24.2
v1.24.3
v1.24.4
v1.24.5
v1.24.6
v1.24.7
v1.24.8
v1.24.9
v1.25.0
v1.25.1
v1.25.2
v1.25.3
v1.25.4
v1.25.5
v1.26.0

3. Create the cluster

3.1 Create the configuration file

# KubeKey will install Kubernetes v1.23.10 by default

./kk create config --with-kubesphere v3.3.2
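
If you need a specific Kubernetes version rather than the default, the --with-kubernetes and -f flags shown in 2.3.3 can be combined with --with-kubesphere. The following is only a sketch, using v1.23.10 from the supported-version list in 2.3.4 and naming the output file explicitly:

# Example: pin the Kubernetes version and write the config to config-sample.yaml
./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.3.2 -f config-sample.yaml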
 

3.2 Modify the configuration file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.187.111, internalAddress: 192.168.187.111, user: root, password: "root"}
  - {name: node1, address: 192.168.187.112, internalAddress: 192.168.187.112, user: root, password: "root"}
  - {name: node2, address: 192.168.187.113, internalAddress: 192.168.187.113, user: root, password: "root"}
  roleGroups:
    etcd:
    - master
    control-plane: 
    - master
    worker:
    - node1
    - node2

....

name: the hostname of the instance.
address: the IP address used for SSH connections between the task machine (where KubeKey runs) and the other instances.
internalAddress: the private IP address of the instance.
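
If you prefer key-based SSH over plain-text passwords in the hosts list, a KubeKey host entry can reference a private key instead. The line below is a hypothetical sketch, assuming the key lives at ~/.ssh/id_rsa; it is not part of the configuration generated above:

# Hypothetical host entry authenticating with an SSH private key instead of a password
- {name: master, address: 192.168.187.111, internalAddress: 192.168.187.111, user: root, privateKeyPath: "~/.ssh/id_rsa"}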

3.3 Create the cluster from the configuration file

# Set the correct region
export KKZONE=cn

# Create the cluster
./kk create cluster -f config-sample.yaml
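
The installer pauses at an interactive confirmation (visible in the log below). For an unattended run, the -y/--yes flag listed in the kk create help output in 2.3.2 should skip that prompt; a sketch:

# Non-interactive variant that skips the confirmation prompt
./kk create cluster -f config-sample.yaml -y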

 

3.3.1 Installation log

The log shows what operations are performed.

[root@zhoufei ~]# ./kk create cluster -f config-sample.yaml


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

17:15:38 CST [GreetingsModule] Greetings
17:15:39 CST message: [node2]
Greetings, KubeKey!
17:15:39 CST message: [node1]
Greetings, KubeKey!
17:15:41 CST message: [master]
Greetings, KubeKey!
17:15:41 CST success: [node2]
17:15:41 CST success: [node1]
17:15:41 CST success: [master]
17:15:41 CST [NodePreCheckModule] A pre-check on nodes
17:15:48 CST success: [node1]
17:15:48 CST success: [node2]
17:15:48 CST success: [master]
17:15:48 CST [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master | y    | y    | y       | y        | y     | y     |         | y         |        |        | 1.4.12     |            |             |                  | CST 17:15:48 |
| node1  | y    | y    | y       | y        | y     | y     |         | y         |        |        | 1.4.12     |            |             |                  | CST 17:15:43 |
| node2  | y    | y    | y       | y        | y     | y     |         | y         |        |        | 1.4.12     |            |             |                  | CST 17:15:43 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
17:16:13 CST success: [LocalHost]
17:16:13 CST [NodeBinariesModule] Download installation binaries
17:16:13 CST message: [localhost]
downloading amd64 kubeadm v1.23.10 ...
17:16:13 CST message: [localhost]
kubeadm is existed
17:16:13 CST message: [localhost]
downloading amd64 kubelet v1.23.10 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  118M  100  118M    0     0   479k      0  0:04:12  0:04:12 --:--:--  554k
17:20:28 CST message: [localhost]
downloading amd64 kubectl v1.23.10 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.4M  100 44.4M    0     0   563k      0  0:01:20  0:01:20 --:--:--  756k
17:21:51 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.0M  100 44.0M    0     0   409k      0  0:01:50  0:01:50 --:--:-- 1179k
17:23:41 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 37.9M  100 37.9M    0     0   984k      0  0:00:39  0:00:39 --:--:-- 1003k
17:24:21 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.8M  100 13.8M    0     0   939k      0  0:00:15  0:00:15 --:--:-- 1036k
17:24:36 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.5M  100 16.5M    0     0   807k      0  0:00:21  0:00:21 --:--:-- 1053k
17:24:57 CST message: [localhost]
downloading amd64 docker 20.10.8 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 58.1M  100 58.1M    0     0  1984k      0  0:00:29  0:00:29 --:--:-- 2687k
17:25:27 CST success: [LocalHost]
17:25:27 CST [ConfigureOSModule] Get OS release
17:25:28 CST success: [node2]
17:25:28 CST success: [node1]
17:25:28 CST success: [master]
17:25:28 CST [ConfigureOSModule] Prepare to init OS
17:25:52 CST success: [node1]
17:25:52 CST success: [node2]
17:25:52 CST success: [master]
17:25:52 CST [ConfigureOSModule] Generate init os script
17:25:55 CST success: [node2]
17:25:55 CST success: [node1]
17:25:55 CST success: [master]
17:25:55 CST [ConfigureOSModule] Exec init os script
17:25:56 CST stdout: [node2]
Permissive
vm.swappiness = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_syn_backlog = 262144
net.core.somaxconn = 4096
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
17:25:56 CST stdout: [node1]
Permissive
vm.swappiness = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_syn_backlog = 262144
net.core.somaxconn = 4096
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
17:25:57 CST stdout: [master]
Permissive
vm.swappiness = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_syn_backlog = 262144
net.core.somaxconn = 4096
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
17:25:57 CST success: [node2]
17:25:57 CST success: [node1]
17:25:57 CST success: [master]
17:25:57 CST [ConfigureOSModule] configure the ntp server for each node
17:25:57 CST skipped: [node1]
17:25:57 CST skipped: [master]
17:25:57 CST skipped: [node2]
17:25:57 CST [KubernetesStatusModule] Get kubernetes cluster status
17:25:57 CST success: [master]
17:25:57 CST [InstallContainerModule] Sync docker binaries
17:26:08 CST success: [node1]
17:26:08 CST success: [node2]
17:26:08 CST success: [master]
17:26:08 CST [InstallContainerModule] Generate docker service
17:26:11 CST success: [node1]
17:26:11 CST success: [node2]
17:26:11 CST success: [master]
17:26:11 CST [InstallContainerModule] Generate docker config
17:26:30 CST success: [node1]
17:26:30 CST success: [node2]
17:26:30 CST success: [master]
17:26:30 CST [InstallContainerModule] Enable docker
17:26:33 CST success: [node2]
17:26:33 CST success: [node1]
17:26:33 CST success: [master]
17:26:33 CST [InstallContainerModule] Add auths to container runtime
17:26:33 CST skipped: [node1]
17:26:33 CST skipped: [node2]
17:26:33 CST skipped: [master]
17:26:33 CST [PullModule] Start to pull images on all nodes
17:26:33 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
17:26:33 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
17:26:33 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
17:26:38 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.10
17:26:39 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
17:26:46 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
17:28:47 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
17:29:11 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
17:29:35 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
17:30:11 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.10
17:31:00 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
17:31:35 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
17:33:10 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.10
17:34:12 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
17:34:20 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
17:35:17 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
17:38:21 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
17:38:37 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
17:39:43 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
17:41:58 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
17:42:28 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
17:44:59 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
17:46:02 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
17:47:41 CST message: [node2]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
17:51:02 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
17:51:07 CST message: [node1]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
17:53:38 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
17:53:55 CST success: [node2]
17:53:55 CST success: [node1]
17:53:55 CST success: [master]
17:53:55 CST [ETCDPreCheckModule] Get etcd status
17:53:55 CST success: [master]
17:53:55 CST [CertsModule] Fetch etcd certs
17:53:55 CST success: [master]
17:53:55 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master node1 node2] and IPs [127.0.0.1 ::1 192.168.187.111 192.168.187.112 192.168.187.113]
[certs] member-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master node1 node2] and IPs [127.0.0.1 ::1 192.168.187.111 192.168.187.112 192.168.187.113]
[certs] node-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master node1 node2] and IPs [127.0.0.1 ::1 192.168.187.111 192.168.187.112 192.168.187.113]
17:53:55 CST success: [LocalHost]
17:53:55 CST [CertsModule] Synchronize certs file
17:54:17 CST success: [master]
17:54:17 CST [CertsModule] Synchronize certs file to master
17:54:17 CST skipped: [master]
17:54:17 CST [InstallETCDBinaryModule] Install etcd using binary
17:54:20 CST success: [master]
17:54:20 CST [InstallETCDBinaryModule] Generate etcd service
17:54:22 CST success: [master]
17:54:22 CST [InstallETCDBinaryModule] Generate access address
17:54:22 CST success: [master]
17:54:22 CST [ETCDConfigureModule] Health check on exist etcd
17:54:22 CST skipped: [master]
17:54:22 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
17:54:25 CST success: [master]
17:54:25 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
17:54:33 CST success: [master]
17:54:33 CST [ETCDConfigureModule] Restart etcd
17:54:35 CST stdout: [master]
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
17:54:35 CST success: [master]
17:54:35 CST [ETCDConfigureModule] Health check on all etcd
17:54:35 CST success: [master]
17:54:35 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
17:54:38 CST success: [master]
17:54:38 CST [ETCDConfigureModule] Health check on all etcd
17:54:38 CST success: [master]
17:54:38 CST [ETCDBackupModule] Backup etcd data regularly
17:54:41 CST success: [master]
17:54:41 CST [ETCDBackupModule] Generate backup ETCD service
17:54:44 CST success: [master]
17:54:44 CST [ETCDBackupModule] Generate backup ETCD timer
17:54:46 CST success: [master]
17:54:46 CST [ETCDBackupModule] Enable backup etcd service
17:54:47 CST success: [master]
17:54:47 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
17:55:27 CST success: [node2]
17:55:27 CST success: [master]
17:55:27 CST success: [node1]
17:55:27 CST [InstallKubeBinariesModule] Synchronize kubelet
17:55:27 CST success: [node1]
17:55:27 CST success: [node2]
17:55:27 CST success: [master]
17:55:27 CST [InstallKubeBinariesModule] Generate kubelet service
17:55:30 CST success: [node1]
17:55:30 CST success: [node2]
17:55:30 CST success: [master]
17:55:30 CST [InstallKubeBinariesModule] Enable kubelet service
17:55:31 CST success: [node2]
17:55:31 CST success: [node1]
17:55:31 CST success: [master]
17:55:31 CST [InstallKubeBinariesModule] Generate kubelet env
17:55:33 CST success: [node1]
17:55:33 CST success: [node2]
17:55:33 CST success: [master]
17:55:33 CST [InitKubernetesModule] Generate kubeadm config
17:55:37 CST success: [master]
17:55:37 CST [InitKubernetesModule] Init cluster using kubeadm
17:55:51 CST stdout: [master]
W0101 17:55:38.146882    6643 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.23.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local node1 node1.cluster.local node2 node2.cluster.local] and IPs [10.233.0.1 192.168.187.111 127.0.0.1 192.168.187.112 192.168.187.113]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.502938 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: e42f3o.0qpc56tkrt5bnl20
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token e42f3o.0qpc56tkrt5bnl20 \
	--discovery-token-ca-cert-hash sha256:275b0bed0d29cf48d577c10d6af97dc775dc36425626622c925057a634eb43ed \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token e42f3o.0qpc56tkrt5bnl20 \
	--discovery-token-ca-cert-hash sha256:275b0bed0d29cf48d577c10d6af97dc775dc36425626622c925057a634eb43ed
17:55:51 CST success: [master]
17:55:51 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
17:55:54 CST success: [master]
17:55:54 CST [InitKubernetesModule] Remove master taint
17:55:54 CST skipped: [master]
17:55:54 CST [InitKubernetesModule] Add worker label
17:55:54 CST skipped: [master]
17:55:54 CST [ClusterDNSModule] Generate coredns service
17:55:57 CST success: [master]
17:55:57 CST [ClusterDNSModule] Override coredns service
17:55:58 CST stdout: [master]
service "kube-dns" deleted
17:56:00 CST stdout: [master]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
17:56:00 CST success: [master]
17:56:00 CST [ClusterDNSModule] Generate nodelocaldns
17:56:02 CST success: [master]
17:56:02 CST [ClusterDNSModule] Deploy nodelocaldns
17:56:03 CST stdout: [master]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
17:56:03 CST success: [master]
17:56:03 CST [ClusterDNSModule] Generate nodelocaldns configmap
17:56:07 CST success: [master]
17:56:07 CST [ClusterDNSModule] Apply nodelocaldns configmap
17:56:08 CST stdout: [master]
configmap/nodelocaldns created
17:56:08 CST success: [master]
17:56:08 CST [KubernetesStatusModule] Get kubernetes cluster status
17:56:09 CST stdout: [master]
v1.23.10
17:56:10 CST stdout: [master]
master   v1.23.10   [map[address:192.168.187.111 type:InternalIP] map[address:master type:Hostname]]
17:56:22 CST stdout: [master]
I0101 17:56:20.435275    8628 version.go:255] remote version is much newer: v1.29.0; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7624f015c15ab476a12a253c023f982e498aeacddf2558a913edddd08d0694fa
17:56:22 CST stdout: [master]
secret/kubeadm-certs patched
17:56:23 CST stdout: [master]
secret/kubeadm-certs patched
17:56:24 CST stdout: [master]
secret/kubeadm-certs patched
17:56:24 CST stdout: [master]
mobyoc.9zyfk7h9wlxbe782
17:56:24 CST success: [master]
17:56:24 CST [JoinNodesModule] Generate kubeadm config
17:56:25 CST skipped: [master]
17:56:25 CST success: [node1]
17:56:25 CST success: [node2]
17:56:25 CST [JoinNodesModule] Join control-plane node
17:56:25 CST skipped: [master]
17:56:25 CST [JoinNodesModule] Join worker node
17:56:32 CST stdout: [node2]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0101 17:56:27.192089    5487 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
17:56:32 CST stdout: [node1]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0101 17:56:27.192470    5175 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
17:56:32 CST success: [node2]
17:56:32 CST success: [node1]
17:56:32 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
17:56:32 CST skipped: [master]
17:56:32 CST [JoinNodesModule] Remove master taint
17:56:32 CST skipped: [master]
17:56:32 CST [JoinNodesModule] Add worker label to master
17:56:32 CST skipped: [master]
17:56:32 CST [JoinNodesModule] Synchronize kube config to worker
17:56:33 CST success: [node1]
17:56:33 CST success: [node2]
17:56:33 CST [JoinNodesModule] Add worker label to worker
17:56:33 CST stdout: [node1]
node/node1 labeled
17:56:33 CST stdout: [node2]
node/node2 labeled
17:56:33 CST success: [node1]
17:56:33 CST success: [node2]
17:56:33 CST [DeployNetworkPluginModule] Generate calico
17:56:36 CST success: [master]
17:56:36 CST [DeployNetworkPluginModule] Deploy calico
17:56:37 CST stdout: [master]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
17:56:37 CST success: [master]
17:56:37 CST [ConfigureKubernetesModule] Configure kubernetes
17:56:37 CST success: [node2]
17:56:37 CST success: [master]
17:56:37 CST success: [node1]
17:56:37 CST [ChownModule] Chown user $HOME/.kube dir
17:56:39 CST success: [node1]
17:56:39 CST success: [node2]
17:56:39 CST success: [master]
17:56:39 CST [AutoRenewCertsModule] Generate k8s certs renew script
17:56:41 CST success: [master]
17:56:41 CST [AutoRenewCertsModule] Generate k8s certs renew service
17:56:44 CST success: [master]
17:56:44 CST [AutoRenewCertsModule] Generate k8s certs renew timer
17:56:47 CST success: [master]
17:56:47 CST [AutoRenewCertsModule] Enable k8s certs renew service
17:56:47 CST success: [master]
17:56:47 CST [SaveKubeConfigModule] Save kube config as a configmap
17:56:47 CST success: [LocalHost]
17:56:47 CST [AddonsModule] Install addons
17:56:47 CST success: [LocalHost]
17:56:47 CST [DeployStorageClassModule] Generate OpenEBS manifest
17:56:51 CST success: [master]
17:56:51 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
17:56:53 CST success: [master]
17:56:53 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
17:56:55 CST success: [master]
17:56:55 CST [DeployKubeSphereModule] Apply ks-installer
17:56:56 CST stdout: [master]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
17:56:56 CST success: [master]
17:56:56 CST [DeployKubeSphereModule] Add config to ks-installer manifests
17:56:57 CST success: [master]
17:56:57 CST [DeployKubeSphereModule] Create the kubesphere namespace
17:56:58 CST success: [master]
17:56:58 CST [DeployKubeSphereModule] Setup ks-installer config
17:56:58 CST stdout: [master]
secret/kube-etcd-client-certs created
17:57:00 CST success: [master]
17:57:00 CST [DeployKubeSphereModule] Apply ks-installer
17:57:01 CST stdout: [master]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
17:57:01 CST success: [master]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.187.111:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-01-01 18:13:04
#####################################################
18:13:07 CST success: [master]
18:13:07 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

	kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
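
Besides tailing the ks-installer log, you can check the overall cluster state from the master node with kubectl; a minimal check looks like this:

# All nodes should report Ready
kubectl get nodes

# All pods should eventually be Running or Completed
kubectl get pods -A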

3.3.2 Log in

Open the console at http://192.168.187.111:30880 in a browser and log in with the default account admin and password P@88w0rd shown at the end of the installation log, then change the default password after the first login.
