Table of Contents

1 Configuration File
1.1 Creating the Configuration File
1.2 Editing the Configuration File
1.2.1 etcd Configuration
1.2.2 worker Configuration
1.2.3 addons Configuration
2 Creating the Cluster
2.1 Installation Process
2.2 Installation Success
3 Expanding the Open NodePort Range
3.1 Modifying the Configuration File
3.2 Restart
3.3 Drawbacks of Not Expanding It
4 Accessing KubeSphere
5 A Few Words from the Author
6 Full kk Installation Log


The previous post covered installing and deploying NFS. This topic is actually mentioned in the kk documentation as well, but I did not understand it the first time I read it, which is why I wrote that separate walkthrough.

kk's storage client guide:

https://github.com/kubesphere/kubekey/blob/master/docs/storage-client.md

As for why NFS was covered first: if kk is installed before NFS, the kk installation has to be rebuilt afterwards. Reversing the order looks harmless on the surface, but it actually changes how the installation is layered; this is explained in more detail later.

All of the following operations are performed on the master node.

1 Configuration File

kk's installation and deployment is driven entirely by a file named config-sample.yaml, so the first step is to create and edit this configuration file.

1.1 Creating the Configuration File

cd /kk/kubekey && ./kk create config --with-kubesphere v3.1.0

After the command completes, a config-sample.yaml file is generated in the current directory.

If the kk command above is unfamiliar, please refer to the kk documentation; it already describes these options in detail, so I will not repeat them here.
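
If the defaults are not what you want, the config generation command can also pin the Kubernetes version and name the output file. This is only a sketch based on my reading of the kk docs; the exact flag spellings may differ per kk release, so verify with ./kk create config --help:

# Sketch: pin the Kubernetes version and name the output file explicitly (flags assumed from the kk docs)
./kk create config --with-kubernetes v1.19.8 --with-kubesphere v3.1.0 -f config-sample.yaml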

1.2 Editing the Configuration File

The generated file is itself a template; we simply modify it against that template. Below is my configuration file, followed by a short explanation:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.211.3, internalAddress: 192.168.211.3, user: root, password: 1}
  - {name: worker1, address: 192.168.211.4, internalAddress: 192.168.211.4, user: root, password: 1}
  roleGroups:
    etcd:
    - master
    master: 
    - master
    worker:
    - master
    - worker1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.19.8
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        values:
        - nfs.server=192.168.211.4
        - nfs.path=/nfs/data
        - storageClass.defaultClass=true


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.0
spec:
  persistence:
    storageClass: ""       
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""        
  etcd:
    monitoring: false      
    endpointIps: localhost  
    port: 2379             
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi 
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi  
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:  
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi   
      logMaxAge: 7          
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""  
  console:
    enableMultiLogin: true 
    port: 30880
  alerting:       
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:    
    enabled: false
  devops:           
    enabled: false
    jenkinsMemoryLim: 2Gi     
    jenkinsMemoryReq: 1500Mi 
    jenkinsVolumeSize: 8Gi   
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:          
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:         
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:             
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi  
    prometheusVolumeSize: 20Gi  
  multicluster:
    clusterRole: none 
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:    
    enabled: false  
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""           
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

First, a note: I took a lot of detours earlier, trying several different ways to deploy k8s. But precisely because of those detours, this configuration file now reads quite easily; I can basically tell at a glance what each part is for.

So what did I change relative to the generated config-sample.yaml template?

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: Qcloud@123}  # changed: replace with the master's info; user/password are that machine's login credentials
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: Qcloud@123}  # changed: likewise, replace with the worker's info
  roleGroups:
    etcd:
    - node1  # changed: etcd is deployed on the master, so just set this to master
    master:
    - node1  # changed: set this to the master node's hostname
    worker:
    - node1  # changed: to avoid wasting a machine, the master should also be schedulable, so add master here
    - node2  # changed: set this to the worker node's name
...
  addons:  # changed: this is where self-added plugins go, e.g. the nfs-client addon we added

So the changes are actually not many. A few key points are worth describing in more detail.

1.2.1 etcd Configuration

etcd is a distributed, consistent key-value store used for shared configuration and service discovery. It is an open-source project started by CoreOS and released under the Apache license; see the project homepage and GitHub for details. For a more detailed introduction, see:

ETCD相关介绍--整体概念及原理方面 (博客园): https://www.cnblogs.com/softidea/p/6517959.html

1.2.2 worker Configuration

The worker entry does not mean "configure the worker machines"; it lists the machines where you want application workloads to be scheduled later on. In the earlier [k8s on VMs] series (link below) I covered the master taint: by default, applications are not scheduled onto the master node. But when running locally, resources are usually tight, so you may choose to allow workloads on the master. That is why the worker list here should also include the master node.

[vm搭建k8s] 8 k8s部署应用和管理 (CSDN): https://weilintao.blog.csdn.net/article/details/120433864
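
As a side note, you can check after installation whether the master taint is really gone; kk removes it for you when master is listed under worker (the log in section 6 shows "node/master untainted"). The commands below are plain kubectl, included here only as a convenience:

# Check whether the master node still carries the NoSchedule taint
kubectl describe node master | grep -i taint

# Remove it manually if ever needed (this is effectively what kk did for us)
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-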

1.2.3 addons Configuration

One more word on addons. The kk documentation also covers this, and I recommend going straight to its description:

kubekey/addons.md at master · kubesphere/kubekey · GitHub: https://github.com/kubesphere/kubekey/blob/master/docs/addons.md
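
To make the addon block in the config above more concrete: kk essentially installs the Helm chart named under sources.chart with the listed values. The sketch below shows a rough hand-run equivalent, using only the chart name, repo and values from the addon block; it is for understanding, not something you need to run if kk already applied the addon:

# Rough equivalent of the nfs-client addon (do not run if kk already installed it)
helm upgrade --install nfs-client nfs-client-provisioner \
  --repo https://charts.kubesphere.io/main \
  --namespace kube-system \
  --set nfs.server=192.168.211.4 \
  --set nfs.path=/nfs/data \
  --set storageClass.defaultClass=true

# Afterwards, the storage class should exist and be marked as the default
kubectl get storageclass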

2 Creating the Cluster

Creating the cluster is the process of deploying both k8s and KubeSphere.

During creation, kk relies on the config-sample.yaml configuration file.

Before you run the create command below, a heads-up: this step takes a long time and depends heavily on your network. In my case it took around 40 minutes each time, so you can usually just let it run and go have a cup of tea.

The command is as follows:

./kk create cluster -f config-sample.yaml

2.1 Installation Process

The earlier part of the installation goes fairly smoothly, but once it reaches:

Please wait for the installation to complete: ...

things become very slow. If you are not watching the machine's resource usage, you might think the process has hung.

At this step I opened the system resource monitor and watched the machine's resources spike: the machine has 2 CPUs and both were essentially maxed out, and the bandwidth was fully used as well, which means it was still downloading and installing things. To confirm this, I also ran kubectl get pods,svc -A on the worker1 node to watch the pods change state. (The kubectl command is covered in a later post.)
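
If you prefer watching the installation itself rather than the resource monitor, the usual approach is to tail the ks-installer pod log and poll pod status. This is the standard command from the KubeSphere docs, sketched here; the app=ks-install label is worth double-checking on your version:

# Follow the ks-installer log on the master
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

# Or poll for pods that are not Running yet
watch -n 10 "kubectl get pods -A | grep -v Running"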

2.2 Installation Success

After spending some more time at the "Please wait for the installation to complete" step (how long varies), the installation finally succeeds. The tail of the log is the part we care about: under "Welcome to KubeSphere" there are three lines:

Console: http://192.168.211.3:30880

Account: admin

Password: P@88w0rd

These are the credentials we need to log in to KubeSphere. (The whole point of saying this: take a screenshot, or save this part of the log, so you can refer to it later.)
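
Before moving on, a quick sanity check with plain kubectl (sketched here) to confirm the nodes are Ready and nothing is stuck:

kubectl get nodes -o wide
kubectl get pods -A | grep -v Running   # anything listed here is still starting or has failed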

3 Expanding the Open NodePort Range

This step is optional, but it is worth doing; afterwards your applications' NodePorts will look a lot tidier.

3.1 Modifying the Configuration File

The default NodePort range of a k8s cluster is 30000-32767. We can change it by modifying kube-apiserver; its manifest lives at /etc/kubernetes/manifests/kube-apiserver.yaml. If you do not know the path, run ps -ef | grep kube-apiserver and the manifest path used at startup will show up there.

Edit the file and add one line to the command list under spec (right below - kube-apiserver):

- --service-node-port-range=10000-32000

The modified manifest then looks like the example below. Note that this example was taken from a different cluster, so the addresses and the exact range differ from mine; the line that matters is --service-node-port-range at the end:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.0.254
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.1.0.0/16
    - --service-node-port-range=1-65535

3.2 Restart

A restart is required for the change to take effect. The commands are:

systemctl daemon-reload
systemctl restart kubelet
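
To confirm the change took effect (plain commands, sketched here):

# kubelet should be active again
systemctl status kubelet

# the restarted kube-apiserver should now carry the new flag
ps -ef | grep kube-apiserver | grep service-node-port-range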

3.3 Drawbacks of Not Expanding It

If you do not expand the range, any NodePort you expose cannot be lower than 30000; otherwise k8s reports:

provided port is not in the valid range. The range of valid ports is 30000-32767
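
For example, creating a NodePort service below 30000 on a default cluster reproduces that error, while the same command succeeds once the range has been expanded. The service name and ports here are made up purely for illustration:

# Hypothetical service; fails with the port-range error above on a default cluster
kubectl create service nodeport demo-web --tcp=80:8080 --node-port=10080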

4 Accessing KubeSphere

Using the information from the installation log above, open the KubeSphere console:

Console: http://192.168.211.3:30880

Account: admin

Password: P@88w0rd

If you can log in successfully, congratulations, the deployment is complete. From here on it is all play: deploy applications on k8s however you like and enjoy what a service-oriented setup brings.

The next post starts a new series: [applications on k8s].

5 A Few Words from the Author

Using kk really does speed up a k8s installation, by at least 80% in my experience. That said, the detours I took earlier taught me what k8s is actually made of, and that matters: having now installed with kk, my understanding of the features inside, and my impression of how to use them, is that much deeper.

6 Full kk Installation Log

I added a few notes inside the log; it is worth skimming through, as it should be helpful.

[root@master kk]# mkdir kubekey
[root@master kk]# dior
bash: dior: command not found...
[root@master kk]# dir
kubekey  kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz
[root@master kk]# tar -zxvf kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz  kubekey
tar: kubekey: Not found in archive
tar: Exiting with failure status due to previous errors
[root@master kk]# mv kubekey
kubekey/                                   kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz
[root@master kk]# mv kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz  kubekey
[root@master kk]# dir
kubekey
[root@master kk]# cd kubekey/
[root@master kubekey]# dir
kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz
[root@master kubekey]# tar -zxvf kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz
README.md
README_zh-CN.md
kk
[root@master kubekey]# dir
kk  kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz  README.md  README_zh-CN.md
[root@master kubekey]# cd ../
[root@master kk]# cd kubekey/
[root@master kubekey]# ./kk create config
[root@master kubekey]# dir
config-sample.yaml  kk	kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz  README.md  README_zh-CN.md
[root@master kubekey]# vi config-sample.yaml 
... 
(steps for editing config-sample.yaml omitted here)
(ssh configuration steps omitted here)
[root@master kubekey]# yum install -y socat conntrack ebtables ipset
...
(installation of the dependencies above omitted here)

[root@master kubekey]# export KKZONE=cn
[root@master kubekey]# ./kk create cluster -f config-sample.yaml 
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| master  | y    | y    | y       | y        | y     | y     | y         |        | y          |             | y                | CST 09:24:10 |
| worker1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             | y                | CST 09:24:10 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[09:24:13 CST] Downloading Installation Files               
INFO[09:24:13 CST] Downloading kubeadm ...                      
INFO[09:24:51 CST] Downloading kubelet ...                      
INFO[09:26:37 CST] Downloading kubectl ...                      
INFO[09:27:19 CST] Downloading helm ...                         
INFO[09:27:56 CST] Downloading kubecni ...                      
INFO[09:28:31 CST] Configuring operating system ...             
[worker1 192.168.211.4] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[master 192.168.211.3] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[09:28:38 CST] Installing docker ...                        
INFO[09:31:44 CST] Start to download images on all nodes        
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.19.8
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.19.8
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.19.8
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.19.8
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.19.8
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
INFO[09:59:23 CST] Getting etcd status                          
[master 192.168.211.3] MSG:
Configuration file will be created
INFO[09:59:23 CST] Generating etcd certs                        
INFO[09:59:25 CST] Synchronizing etcd certs                     
INFO[09:59:25 CST] Creating etcd service                        
[master 192.168.211.3] MSG:
etcd will be installed
INFO[09:59:29 CST] Starting etcd cluster                        
INFO[09:59:29 CST] Refreshing etcd configuration                
[master 192.168.211.3] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
INFO[09:59:35 CST] Backup etcd data regularly                   
INFO[09:59:42 CST] Get cluster status                           
[master 192.168.211.3] MSG:
Cluster will be created.
INFO[09:59:42 CST] Installing kube binaries                     
Push /kk/kubekey/kubekey/v1.19.8/amd64/kubeadm to 192.168.211.3:/tmp/kubekey/kubeadm   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/kubeadm to 192.168.211.4:/tmp/kubekey/kubeadm   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/kubelet to 192.168.211.4:/tmp/kubekey/kubelet   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/kubelet to 192.168.211.3:/tmp/kubekey/kubelet   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/kubectl to 192.168.211.4:/tmp/kubekey/kubectl   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/kubectl to 192.168.211.3:/tmp/kubekey/kubectl   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/helm to 192.168.211.3:/tmp/kubekey/helm   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/helm to 192.168.211.4:/tmp/kubekey/helm   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.211.4:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /kk/kubekey/kubekey/v1.19.8/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.211.3:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[10:00:31 CST] Initializing kubernetes cluster              
[master 192.168.211.3] MSG:
W0730 10:00:31.484886   69804 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0730 10:00:33.651779   69804 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.8
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local worker1 worker1.cluster.local] and IPs [10.233.0.1 192.168.211.3 127.0.0.1 192.168.211.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 65.015523 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r93lct.a45jo4gjka9l1ra7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token r93lct.a45jo4gjka9l1ra7 \
    --discovery-token-ca-cert-hash sha256:8c9e6466e8f9e9780a403052ab45fecf2faf4b33d6d3d9ade30c70d1cf416dc1 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token r93lct.a45jo4gjka9l1ra7 \
    --discovery-token-ca-cert-hash sha256:8c9e6466e8f9e9780a403052ab45fecf2faf4b33d6d3d9ade30c70d1cf416dc1
[master 192.168.211.3] MSG:
node/master untainted
[master 192.168.211.3] MSG:
node/master labeled
[master 192.168.211.3] MSG:
service "kube-dns" deleted
[master 192.168.211.3] MSG:
service/coredns created
[master 192.168.211.3] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master 192.168.211.3] MSG:
configmap/nodelocaldns created
[master 192.168.211.3] MSG:
W0730 10:02:22.305093   71662 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0730 10:02:22.305171   71662 version.go:104] falling back to the local client version: v1.19.8
W0730 10:02:22.305393   71662 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
01efa1675d506785450ee9f3b13cdc582d489ae1480d1b2ec97d0342302447a9
[master 192.168.211.3] MSG:
secret/kubeadm-certs patched
[master 192.168.211.3] MSG:
secret/kubeadm-certs patched
[master 192.168.211.3] MSG:
secret/kubeadm-certs patched
[master 192.168.211.3] MSG:
W0730 10:02:22.991074   71923 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join lb.kubesphere.local:6443 --token oneyts.yot6wk1f0jrrjfsf     --discovery-token-ca-cert-hash sha256:8c9e6466e8f9e9780a403052ab45fecf2faf4b33d6d3d9ade30c70d1cf416dc1
[master 192.168.211.3] MSG:
master   v1.19.8   [map[address:192.168.211.3 type:InternalIP] map[address:master type:Hostname]]
INFO[10:02:23 CST] Joining nodes to cluster                     
[worker1 192.168.211.4] MSG:
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0730 10:02:31.021480   60515 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[worker1 192.168.211.4] MSG:
node/worker1 labeled
INFO[10:02:39 CST] Deploying network plugin ...                 
[master 192.168.211.3] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[10:02:46 CST] Congratulations! Installation is successful. 
You have mail in /var/spool/mail/root
