What is multi-cluster?

  • If you haven't created any clusters yet, install k8s and set up a cluster first. One post in my blog covers this: 【kubernetes】Detailed k8s installation guide【create cluster, join cluster, remove nodes, reset cluster…】 — have a look if you're interested.

  • As the figure below shows, each cluster has one master plus its node(s),
    and by themselves the clusters have no relationship to each other.
    As shown, I created 2 clusters and powered them all on.
    【figure: two independent clusters, each with its own master and nodes】

  • Each cluster serves independently and the two have nothing to do with each other. Normally, managing multiple clusters means ssh-ing into the master node of whichever cluster you want to operate on.
    With multi-cluster configured, you can manage the other masters from one master node, no ssh login required.

  • My 2 clusters are configured identically except for their IPs; the demonstration below uses these 2 clusters.
    Hostnames and IPs are as follows.

# Cluster 1
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   5d2h   v1.21.0
node1    Ready    <none>                 5d2h   v1.21.0
node2    Ready    <none>                 5d1h   v1.21.0
[root@master ~]# 
[root@master ~]# ip a | grep 59
    inet 192.168.59.142/24 brd 192.168.59.255 scope global noprefixroute dynamic ens33
[root@master ~]# 
[root@master ~]# hostname
master
[root@master ~]# 

# Cluster 2
[root@master2 ~]# kubectl get node
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   5d2h   v1.21.0
node1    Ready    <none>                 5d2h   v1.21.0
node2    Ready    <none>                 5d1h   v1.21.0
[root@master2 ~]# 
[root@master2 ~]# ip a | grep 59
    inet 192.168.59.151/24 brd 192.168.59.255 scope global noprefixroute ens33
[root@master2 ~]# 
[root@master2 ~]# hostname
master2
[root@master2 ~]# 

Writing the kubeconfig file contents【done on the master node】

  • The default kubeconfig file is: ~/.kube/config
  • To view the current kubeconfig: kubectl config view
[root@master .kube]# kubectl config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.59.142:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@master .kube]# 
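  • A side note: kubectl config view redacts the certificate fields (the DATA+OMITTED / REDACTED above). If you need the actual base64 content, which the manual editing below relies on, either of these shows it:

kubectl config view --raw   # print the config with certificate/key data included
cat ~/.kube/config          # or just read the file itself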

Modifying the config file for a single cluster

  • This is for when you only want to modify the current cluster's info.
  • The config file is ~/.kube/config. The meaning of its contents is explained under Method 1 below, and the editing procedure is exactly the same; you simply keep just the one cluster's entries.

Method 1【manual editing】

Backing up the file

  • This is the most primitive approach: edit the config file directly. Before you do, make absolutely sure to back the file up【the keys are inside it】.
[root@master ~]# cd .kube/
[root@master .kube]# cp config config.bak
[root@master .kube]# 

Annotated config file【contexts explained in detail】

  • With the backup done, edit the ~/.kube/config file directly.
    To make editing easier, I suggest deleting all 3 keys first, which leaves the tidy file shown below【if you keep them, the key blobs are so long they get in the way】.
    To make it easy to follow, I've put a comment after each line; read them carefully, every line's purpose is spelled out【this is the file's original content, unmodified, with explanations only】.
  1 apiVersion: v1
  2 clusters: # the clusters to connect to【lines 3-6 below form one group】; for every new cluster, copy lines 3-6, add them below line 6, and edit the details
  3 - cluster: # start of a cluster entry【lines 3-6 form one group】
  4     certificate-authority-data: # this is a key; how to view it comes later
  5     server: https://192.168.59.142:6443 # this cluster's master node IP
  6   name: kubernetes # the cluster name; can be customized
  7 contexts: # a context binds a cluster above【clusters】to a user below【users】【until bound they are independent of each other】
  8 - context: # start of a context entry【lines 8-12 form one group】; add one context per cluster, below line 12
  9     cluster: kubernetes # the name of one of the clusters above
 10     namespace: default # the namespace to work in【fine to leave unchanged】
 11     user: kubernetes-admin # the name of one of the users below
 12   name: kubernetes-admin@kubernetes # the context name; can be customized
 13 current-context: kubernetes-admin@kubernetes # the default context name【a context name, not a cluster name】【with multiple clusters, whichever context is set here is the cluster you operate on by default】
 14 kind: Config
 15 preferences: {}
 16 users: # the user credentials【lines 17-20 below form one group】; add one user per cluster【copy the block and put it below line 20】
 17 - name: kubernetes-admin  # the user name; can be customized
 18   user:
 19     client-certificate-data: # this is a key; how to view it comes later
 20     client-key-data: # this is a key; how to view it comes later
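  • Before hand-editing, it's worth knowing that kubectl can merge several kubeconfig files by itself via the KUBECONFIG variable, which reaches the same end state as the manual edit below. A sketch, assuming cluster 2's config has been copied over first【the paths here are examples; also note that if both files keep the default names kubernetes / kubernetes-admin, rename the entries in one file first, because on a name clash the first file listed wins】:

# copy cluster 2's kubeconfig to cluster 1's master (example path)
scp root@192.168.59.151:/root/.kube/config /root/.kube/config-cluster2

# merge both files; --flatten embeds the certificate data inline
KUBECONFIG=/root/.kube/config:/root/.kube/config-cluster2 \
    kubectl config view --flatten > /root/.kube/config-merged

# inspect the result before replacing the original (back it up first)
kubectl --kubeconfig /root/.kube/config-merged config get-contexts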

Editing the config

Make the edits according to the explanation above; here is my finished version:

[root@master .kube]# vim config
  1 apiVersion: v1
  2 clusters:
  3 - cluster:
  4     certificate-authority-data:
  5     server: https://192.168.59.142:6443
  6   name: master
  7 - cluster:
  8     certificate-authority-data:
  9     server: https://192.168.59.151:6443
 10   name: master1
 11 contexts:
 12 - context:
 13     cluster: master
 14     namespace: default
 15     user: ccx
 16   name: context
 17 - context:
 18     cluster: master1
 19     namespace: default
 20     user: ccx1
 21   name: context1
 22 current-context: context
 23 kind: Config
 24 preferences: {}
 25 users:
 26 - name: ccx
 27   user:
 28     client-certificate-data:
 29     client-key-data:
 30 - name: ccx1
 31   user:
 32     client-certificate-data:
 33     client-key-data:
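  • Before dealing with the keys, a quick sanity check that the edited file still parses; any YAML mistake shows up as a load error immediately:

kubectl config get-contexts   # both commands fail loudly if the YAML is malformed
kubectl config view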

Viewing the certificates【keys】

  • Since we deleted the current cluster's keys above, we now need to copy them back in, and bring the other cluster's keys over as well.
    Every cluster's keys are different; don't mix them up.

  • The keys live in: ~/.kube/config 【yes, the very file we've been editing】

    • For the cluster being edited, read the config backup made at the start【open a new terminal for it】; for the other cluster, read its config file and copy the corresponding values across.
    • Matching them up shouldn't need much explanation: with multiple clusters, just make sure the keys you paste belong to the exact cluster and user each context points at.
    • Note: there is one space before each key and none at the end; don't add or lose that space.
  • The fully finished config looks like this:

[root@master .kube]# cat config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EY3dNakF4TXpVeU9Gb1hEVE14TURZek1EQXhNelV5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUGFBCnQwaG5LOEJTYWQ1VmhOY1Q0c2tDWUs5NVhWUndPQ3RJd29qVXNnU2lzTzJSazl5aG1hMnl2OE5EaTlmYmpzQ0sKaGd4VDJkZDI2Z2FyampXcTNXaWNmclNjVm5MV0ZXY1BZOHFyQ3hIYzFhbDh5N2t6YnMvaklhYkVsTm5QMXVFYwprQmpFYWtMMnIzN0cxOXpyM3BPcUd1S2p1OURUUGxpbmcrRjlPQTRHaURWRS9vNjVXM1ZQY3hFZmw4NVJ6REo4CmlaRGgvbjNiS2YrOEZSdTdCZHdpWDBidFVsUHIzMlVxNXROVzNsS3lJNjhsSkNCc2UvZ2ZnYkpkbFBXZjQ1SUUKRW43UUVqNlMyVm1JMHNISVA3MUNYNlpkMG83RlNPRWpmbGpGZ24xdWFxdnltdFFPN1lYcW9uWjR2bGlDeDA5TQpwT3VGaTZlZ2F1QkNYZWlTbUtFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNN0Nmc2FudWRjVEZIdG5vZXk4aC9aUXFFWnJNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCZ1BFNmR5VVh5dDEySWdyVTRKTEFwQmZjUW5zODFPeFVWVkluTFhFL2hHQlZVY0Ywagp3d3F4cG9FUVRZcDFpTytQczlZN0NBazVSdzJvMnJkNlhScDVhdFllZVo4V1Z5YXZXcGhsLzkxd2d1d1Yrdm9oCmMwMFNmWExnVEpkbGZKY250TVNzRUxaQkU5dlprZFVJa2dCTXlOelUxVk0wdnpySDV4WEEvTHJmNW9LUkVTdWUKNk5iRGcyMmJzQlk5MnpINUxnNmEraWxKRTVyKzgvS1JFbVRUL2VlUmZFdVRSMnMwSHN4ZEl0cENMell2RndicgorL2pEK084RHlkcFFLMUxWaDREbyt2ZFQvVlBYb2hNU05oekJTVzlmdXg0OWV1M3dsazkrL25mUnRoeWg3TjZHCjRzTVA0OGVacUJsTm5JRzRzdU1PQW9UejdMeTlKZ2JSWXd5WQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.59.142:6443
  name: master
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EY3dNakF4TXpVeU9Gb1hEVE14TURZek1EQXhNelV5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUGFBCnQwaG5LOEJTYWQ1VmhOY1Q0c2tDWUs5NVhWUndPQ3RJd29qVXNnU2lzTzJSazl5aG1hMnl2OE5EaTlmYmpzQ0sKaGd4VDJkZDI2Z2FyampXcTNXaWNmclNjVm5MV0ZXY1BZOHFyQ3hIYzFhbDh5N2t6YnMvaklhYkVsTm5QMXVFYwprQmpFYWtMMnIzN0cxOXpyM3BPcUd1S2p1OURUUGxpbmcrRjlPQTRHaURWRS9vNjVXM1ZQY3hFZmw4NVJ6REo4CmlaRGgvbjNiS2YrOEZSdTdCZHdpWDBidFVsUHIzMlVxNXROVzNsS3lJNjhsSkNCc2UvZ2ZnYkpkbFBXZjQ1SUUKRW43UUVqNlMyVm1JMHNISVA3MUNYNlpkMG83RlNPRWpmbGpGZ24xdWFxdnltdFFPN1lYcW9uWjR2bGlDeDA5TQpwT3VGaTZlZ2F1QkNYZWlTbUtFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNN0Nmc2FudWRjVEZIdG5vZXk4aC9aUXFFWnJNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCZ1BFNmR5VVh5dDEySWdyVTRKTEFwQmZjUW5zODFPeFVWVkluTFhFL2hHQlZVY0Ywagp3d3F4cG9FUVRZcDFpTytQczlZN0NBazVSdzJvMnJkNlhScDVhdFllZVo4V1Z5YXZXcGhsLzkxd2d1d1Yrdm9oCmMwMFNmWExnVEpkbGZKY250TVNzRUxaQkU5dlprZFVJa2dCTXlOelUxVk0wdnpySDV4WEEvTHJmNW9LUkVTdWUKNk5iRGcyMmJzQlk5MnpINUxnNmEraWxKRTVyKzgvS1JFbVRUL2VlUmZFdVRSMnMwSHN4ZEl0cENMell2RndicgorL2pEK084RHlkcFFLMUxWaDREbyt2ZFQvVlBYb2hNU05oekJTVzlmdXg0OWV1M3dsazkrL25mUnRoeWg3TjZHCjRzTVA0OGVacUJsTm5JRzRzdU1PQW9UejdMeTlKZ2JSWXd5WQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.59.151:6443
  name: master1
contexts:
- context:
    cluster: master
    namespace: default
    user: ccx
  name: context
- context:
    cluster: master1
    namespace: default
    user: ccx1
  name: context1
current-context: context
kind: Config
preferences: {}
users:
- name: ccx
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWEZvemlkeHc5dGt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBM01ESXdNVE0xTWpoYUZ3MHlNakEzTURJd01UTTFNekphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXkyRVN4VGsxY2pCQWRpUVUKNnVNd2Q1dm4rNy91TjVjakxpUE9SNTd3d1hvRFo0SUpXMEFtYWIxNzBNUGN2eGRIdW9uS21aQUxJSlFtMFNkWgpLYjFmNDZMd09MTUU0bjBoVjNFZ1ZidlZNUktBM09vUU05dm1rY2w5VlB0SXQvY0YyLzJlMWhrT3RkSEhZZ1NFCnVEd0lGL3NhQ1JjK1ZMWjNhbDRsTkU5TVJKZThaVUxUU3ovUGloRlNiaEkyMzVHaWNMS0lsNzJiYUw3UzFuVUcKVHVOU0RpM3A4ckl1NWZldkNyelBNN0tYdnEremdJK0o3N3B6d29YeFlBbzEvVkNiSEZmalk1SEt3TGZIejZEZQpHdXFGY2FUVkZObmhaek1NcXVvQmFmNFZiYWlBZDJHb04yNWlTTmtxYWFWTS8xRTc2cVdUTjVIQnkwcE5kQmVnCkNGZEtnd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JUT3duN0dwN25YRXhSN1o2SHN2SWYyVUtoRwphekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBWG5FU0VDYTZTMDIxQXdiMXd5SlFBbUxHRVpMRFNyREpYaVZRCkhGc25mRGN1WTNMendRdVhmRUJhNkRYbjZQUVRmbUJ1cFBOdlNJbWdYNGQzQ3MzZVhjQWFmalhrOVlnMk94TjUKRTh1a3FaSmhIbDhxVmVGNzBhUTBTME14R3NkVVVTTDJscUNFWGEzMVB4SWxVKzBVOG9zblFIM05vdGZVMVJIMgoxaGtrMi8zeWtvUHJxRXpsbEp3enFTRHAvVDBmcS91MHI2SlkzdWZkKzRXaHVTSGRQU2hvN1JOT0FJVitUcllVCklXOTY1c3lyeGp6TTUwcFNIS3IralJQNnZNUmNaRm42UjhhZ1dvVXBPMnMzcFVVVUtNWjdxa2tOWWNiRmQ0M3IKR2xoWDB5M1U3MW1VUHpaci9FYjQvUDBGcytSUlV6eGFCRDFJVUl2NjdKa0trc2UzRFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeTJFU3hUazFjakJBZGlRVTZ1TXdkNXZuKzcvdU41Y2pMaVBPUjU3d3dYb0RaNElKClcwQW1hYjE3ME1QY3Z4ZEh1b25LbVpBTElKUW0wU2RaS2IxZjQ2THdPTE1FNG4waFYzRWdWYnZWTVJLQTNPb1EKTTl2bWtjbDlWUHRJdC9jRjIvMmUxaGtPdGRISFlnU0V1RHdJRi9zYUNSYytWTFozYWw0bE5FOU1SSmU4WlVMVApTei9QaWhGU2JoSTIzNUdpY0xLSWw3MmJhTDdTMW5VR1R1TlNEaTNwOHJJdTVmZXZDcnpQTTdLWHZxK3pnSStKCjc3cHp3b1h4WUFvMS9WQ2JIRmZqWTVIS3dMZkh6NkRlR3VxRmNhVFZGTm5oWnpNTXF1b0JhZjRWYmFpQWQyR28KTjI1aVNOa3FhYVZNLzFFNzZxV1RONUhCeTBwTmRCZWdDRmRLZ3dJREFRQUJBb0lCQUVtQ3BNNDBoMlRtbStZWAoxSmV4MW1ybEoweVBhd01jMWRKdmpyZkVjekQ3Y1ErUXFPRWFwc2ZCZldkUDVCSU4wQmRVaHE1S3FqcjBVYk4zCmpYclF3RC8vUE9UQmtCcHRNQWZ6RThUcFIzMmRPb2FlODR4TEIyUGFlRHFuT1BtRmg5Q2tNeTBma1htV2dZS2sKTDNTSC9rVHN0ZFJqV2x3ME42VnlzZS9lV2FyUXFEZXJzNFFTU2xaVGxlcUVGdVFadk1rNXp4QVZmYjEzUFBJcApORTZySW9pajZIOVdvWmdoMmYzNDdyME52VmJpekdYNUE5RTNjOG51NkVWQlpqS0FqeWRaTkpyU2VRTEtMZDF3CmtGL0J1ZTM2RkltRWtLdlByNTAxSnJ4VEJEUnFsMEdkUnpxY3RxTDdkaHdKa0t4TUt1akpCeGFucTNRSGQzSmcKR0VnMFBNRUNnWUVBMWlhOHZMM1crbnRjNEZOcDZjbzJuaHVIcTNKdGVrQlVILzlrUnFUYlFnaUpBVW9ld3BGdwprRjROMUhsR21WWngwWkVTTVhkTGozYXdzY2lWUFRqZVRJK216VDA0MUVyRzdNVnhuM05KZy96VlhUY0FITmJqCmZhR3o1UlZwaERieUZtRTRvcUlpWHFuY050Y2ZmRE9XQ3NNU2ZCODlPMW1LTFpEYjJ4WjZsQ2NDZ1lFQTh4OXgKZWhFeHNKRFljOXFTSnlCcVQyNlozek5GRmlWV1JNUE9NWEYrbFNBZDc4eWpVanZEaDg3NXRzdG4wV1pBY05kUApQcGg1Tm1wbzVNUTVSMmE5Z2J1NGVxZjArZ2o0WHM0RytySW5iYlE2Q3ZwZW9RYVVtZUJ3UVBqcXJFZnVPZUpvCjU1VnRYNGRHc0EzOHV0dllycmsyYU16ejdmcnRON3JjNVo5VlJFVUNnWUVBeFp0MUtVeWI1UUtVajBNcFJsd2IKemdWbFNXVUxkSFdMcXdNRlN0S3dwOXdzWUE0L0dCY1FvWWJJaURsb1ZmSVlrT0ttd1JKdG5QSk8xWjViWitUago3QTNhUXlTdEhlZnFhMjArRFg1YVpmcVYvNi9TNE1uQm5abnE0QWJFR1FhQ21QZ1pSS2tMd2dKSGZDdEJtR0FaCm9kQ2phL2wvalJad2xOOUlvSCs3bUowQ2dZQUVEUTRTL3A1WlZ0Q0VmYXZad3d5Q2JsRmFDcnluOWM5T0xnVU4KaGRxYUdZTG1MLzY0ckE1Q0FRemdJdHVEL2JRdExTbEEzY0dIU3BhYzJUZ3JIR2NqOWtESXFtdkdqc2UwckxJcApFemJjK1JmT2Z3VjhvV053ZlBEaDVFUGt3djRSTU5pV28wTERTTG5BelRyYzBqVDJGRmYzdnhLQmNLRHJRTTNWCmRhWXlFUUtCZ0ROMUVRNUhYdGlEbi9nMllUMzVwQWNNZjNrbVZFYTVPWHZONDFXSHJOZHhieUJJZ2Fkc2pUZjIKZTJiZnBKNzBJcHFqV2JITG1ENzZjUzNTZmFXSDFCVDdiRnZ4cWg4dUtXUXFoeWhzdTZWZ1VWNWNyQnp5cVVtNgpRcXFoeEx2STZsaWgrRGUwU3F3eDh5b1J5QUNXU0NYRDM2K3JmbGZoZnExd2pqdFF1VHBsCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- name: ccx1
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWEZvemlkeHc5dGt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBM01ESXdNVE0xTWpoYUZ3MHlNakEzTURJd01UTTFNekphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXkyRVN4VGsxY2pCQWRpUVUKNnVNd2Q1dm4rNy91TjVjakxpUE9SNTd3d1hvRFo0SUpXMEFtYWIxNzBNUGN2eGRIdW9uS21aQUxJSlFtMFNkWgpLYjFmNDZMd09MTUU0bjBoVjNFZ1ZidlZNUktBM09vUU05dm1rY2w5VlB0SXQvY0YyLzJlMWhrT3RkSEhZZ1NFCnVEd0lGL3NhQ1JjK1ZMWjNhbDRsTkU5TVJKZThaVUxUU3ovUGloRlNiaEkyMzVHaWNMS0lsNzJiYUw3UzFuVUcKVHVOU0RpM3A4ckl1NWZldkNyelBNN0tYdnEremdJK0o3N3B6d29YeFlBbzEvVkNiSEZmalk1SEt3TGZIejZEZQpHdXFGY2FUVkZObmhaek1NcXVvQmFmNFZiYWlBZDJHb04yNWlTTmtxYWFWTS8xRTc2cVdUTjVIQnkwcE5kQmVnCkNGZEtnd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JUT3duN0dwN25YRXhSN1o2SHN2SWYyVUtoRwphekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBWG5FU0VDYTZTMDIxQXdiMXd5SlFBbUxHRVpMRFNyREpYaVZRCkhGc25mRGN1WTNMendRdVhmRUJhNkRYbjZQUVRmbUJ1cFBOdlNJbWdYNGQzQ3MzZVhjQWFmalhrOVlnMk94TjUKRTh1a3FaSmhIbDhxVmVGNzBhUTBTME14R3NkVVVTTDJscUNFWGEzMVB4SWxVKzBVOG9zblFIM05vdGZVMVJIMgoxaGtrMi8zeWtvUHJxRXpsbEp3enFTRHAvVDBmcS91MHI2SlkzdWZkKzRXaHVTSGRQU2hvN1JOT0FJVitUcllVCklXOTY1c3lyeGp6TTUwcFNIS3IralJQNnZNUmNaRm42UjhhZ1dvVXBPMnMzcFVVVUtNWjdxa2tOWWNiRmQ0M3IKR2xoWDB5M1U3MW1VUHpaci9FYjQvUDBGcytSUlV6eGFCRDFJVUl2NjdKa0trc2UzRFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeTJFU3hUazFjakJBZGlRVTZ1TXdkNXZuKzcvdU41Y2pMaVBPUjU3d3dYb0RaNElKClcwQW1hYjE3ME1QY3Z4ZEh1b25LbVpBTElKUW0wU2RaS2IxZjQ2THdPTE1FNG4waFYzRWdWYnZWTVJLQTNPb1EKTTl2bWtjbDlWUHRJdC9jRjIvMmUxaGtPdGRISFlnU0V1RHdJRi9zYUNSYytWTFozYWw0bE5FOU1SSmU4WlVMVApTei9QaWhGU2JoSTIzNUdpY0xLSWw3MmJhTDdTMW5VR1R1TlNEaTNwOHJJdTVmZXZDcnpQTTdLWHZxK3pnSStKCjc3cHp3b1h4WUFvMS9WQ2JIRmZqWTVIS3dMZkh6NkRlR3VxRmNhVFZGTm5oWnpNTXF1b0JhZjRWYmFpQWQyR28KTjI1aVNOa3FhYVZNLzFFNzZxV1RONUhCeTBwTmRCZWdDRmRLZ3dJREFRQUJBb0lCQUVtQ3BNNDBoMlRtbStZWAoxSmV4MW1ybEoweVBhd01jMWRKdmpyZkVjekQ3Y1ErUXFPRWFwc2ZCZldkUDVCSU4wQmRVaHE1S3FqcjBVYk4zCmpYclF3RC8vUE9UQmtCcHRNQWZ6RThUcFIzMmRPb2FlODR4TEIyUGFlRHFuT1BtRmg5Q2tNeTBma1htV2dZS2sKTDNTSC9rVHN0ZFJqV2x3ME42VnlzZS9lV2FyUXFEZXJzNFFTU2xaVGxlcUVGdVFadk1rNXp4QVZmYjEzUFBJcApORTZySW9pajZIOVdvWmdoMmYzNDdyME52VmJpekdYNUE5RTNjOG51NkVWQlpqS0FqeWRaTkpyU2VRTEtMZDF3CmtGL0J1ZTM2RkltRWtLdlByNTAxSnJ4VEJEUnFsMEdkUnpxY3RxTDdkaHdKa0t4TUt1akpCeGFucTNRSGQzSmcKR0VnMFBNRUNnWUVBMWlhOHZMM1crbnRjNEZOcDZjbzJuaHVIcTNKdGVrQlVILzlrUnFUYlFnaUpBVW9ld3BGdwprRjROMUhsR21WWngwWkVTTVhkTGozYXdzY2lWUFRqZVRJK216VDA0MUVyRzdNVnhuM05KZy96VlhUY0FITmJqCmZhR3o1UlZwaERieUZtRTRvcUlpWHFuY050Y2ZmRE9XQ3NNU2ZCODlPMW1LTFpEYjJ4WjZsQ2NDZ1lFQTh4OXgKZWhFeHNKRFljOXFTSnlCcVQyNlozek5GRmlWV1JNUE9NWEYrbFNBZDc4eWpVanZEaDg3NXRzdG4wV1pBY05kUApQcGg1Tm1wbzVNUTVSMmE5Z2J1NGVxZjArZ2o0WHM0RytySW5iYlE2Q3ZwZW9RYVVtZUJ3UVBqcXJFZnVPZUpvCjU1VnRYNGRHc0EzOHV0dllycmsyYU16ejdmcnRON3JjNVo5VlJFVUNnWUVBeFp0MUtVeWI1UUtVajBNcFJsd2IKemdWbFNXVUxkSFdMcXdNRlN0S3dwOXdzWUE0L0dCY1FvWWJJaURsb1ZmSVlrT0ttd1JKdG5QSk8xWjViWitUago3QTNhUXlTdEhlZnFhMjArRFg1YVpmcVYvNi9TNE1uQm5abnE0QWJFR1FhQ21QZ1pSS2tMd2dKSGZDdEJtR0FaCm9kQ2phL2wvalJad2xOOUlvSCs3bUowQ2dZQUVEUTRTL3A1WlZ0Q0VmYXZad3d5Q2JsRmFDcnluOWM5T0xnVU4KaGRxYUdZTG1MLzY0ckE1Q0FRemdJdHVEL2JRdExTbEEzY0dIU3BhYzJUZ3JIR2NqOWtESXFtdkdqc2UwckxJcApFemJjK1JmT2Z3VjhvV053ZlBEaDVFUGt3djRSTU5pV28wTERTTG5BelRyYzBqVDJGRmYzdnhLQmNLRHJRTTNWCmRhWXlFUUtCZ0ROMUVRNUhYdGlEbi9nMllUMzVwQWNNZjNrbVZFYTVPWHZONDFXSHJOZHhieUJJZ2Fkc2pUZjIKZTJiZnBKNzBJcHFqV2JITG1ENzZjUzNTZmFXSDFCVDdiRnZ4cWg4dUtXUXFoeWhzdTZWZ1VWNWNyQnp5cVVtNgpRcXFoeEx2STZsaWgrRGUwU3F3eDh5b1J5QUNXU0NYRDM2K3JmbGZoZnExd2pqdFF1VHBsCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
[root@master .kube]# 
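  • If hand-copying the base64 blobs feels error-prone, each value can also be pulled out of a cluster's original config with jsonpath, run on the respective master (the [0] index assumes a single-cluster file):

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}'
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}'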

At this point, the multi-cluster configuration is complete.

Handling "cluster port 6443 unreachable" errors

  • The error says the IP/port of the cluster you are currently pointed at cannot be reached:
The connection to the server 192.168.59.151:6443 was refused - did you specify the right host or port?
  • Troubleshooting approach:
    • 1. Check that the docker service is active: systemctl is-active docker
    • 2. Check that the kubelet service is active: systemctl is-active kubelet
    • 3. Check the config file ~/.kube/config for mistakes
    • 4. Check whether images are missing: docker images【if any are missing, pull them, then restart the two services above and every container comes back automatically (don't try to start containers by hand, there is an ordering; restarting the two services lets docker bring them up in order)】
    • 5. Check that the port is listening: netstat -ntlp | grep 6443
    • 6. Check that the containers are up: docker ps【a healthy master has around 17】
    • 7. Check the system logs: journalctl -xeu kubelet【the cause usually shows up in here】
[root@master ~]# systemctl is-active docker
active
[root@master ~]# systemctl is-active kubelet
active
[root@master ~]# 
[root@master ~]# netstat -ntlp | grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      19672/kube-apiserve 
[root@master ~]# 
[root@master ~]# docker ps | wc -l
17
[root@master ~]# 
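  • You can also probe the apiserver directly, which quickly separates "port unreachable" from "bad kubeconfig"【assuming curl is installed】:

# "connection refused" here means the apiserver isn't serving on this address;
# any HTTP response at all (ok / 401 / a TLS error) means the port itself is fine
curl -k https://192.168.59.151:6443/healthz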
  • My case was special, though: every service on the problem cluster was fine, and the cluster worked when used on its own master directly. After all troubleshooting came up empty, I remembered that my second cluster was cloned from the first. Sure enough, the config files showed the 2 clusters have identical keys, so in theory they are exact copies of each other and the IP must not change.
    The error at this point:
[root@master ~]# kubectl config use-context context1
Switched to context "context1".
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
          context    master    ccx        default
*         context1   master1   ccx1       default
[root@master ~]# 
[root@master ~]# kubectl get nodes
The connection to the server 192.168.59.151:6443 was refused - did you specify the right host or port?
[root@master ~]# 
  • To verify the theory, I changed the second cluster's IP in the config file to be the same as the first cluster's, touching nothing else【the keys are still the second cluster's keys】, then tried again, and it worked:
[root@master ~]# cat .kube/config | grep server
    server: https://192.168.59.142:6443
    server: https://192.168.59.142:6443
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
          context    master    ccx        default
*         context1   master1   ccx1       default
[root@master ~]# 
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   6d      v1.21.0
node1    Ready    <none>                 6d      v1.21.0
node2    Ready    <none>                 5d22h   v1.21.0
[root@master ~]# 
[root@master ~]# kubectl get pods
No resources found in default namespace.
[root@master ~]# kubectl get pods -n ccx
NAME                          READY   STATUS             RESTARTS   AGE
centos-7846bf67c6-s9tmg       0/1     ImagePullBackOff   0          23h
nginx-test-795d659f45-j9m9b   0/1     ImagePullBackOff   0          24h
nginx-test-795d659f45-txf8l   0/1     ImagePullBackOff   0          24h
[root@master ~]#
  • The takeaway: do not clone a cluster's master!!!!!! A pit that took me half a day to climb out of.
    That said, cloning isn't strictly impossible: my blog post 【kubernetes】Detailed k8s installation guide【create cluster, join cluster, remove nodes, reset cluster…】 covers resetting a cluster. Reset the cloned cluster that way, then repeat the steps above, and everything works:

    Tears… half a day of lessons learned.
[root@master ~]# cat .kube/config| grep server
    server: https://192.168.59.142:6443
    server: https://192.168.59.151:6443
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME           CLUSTER   AUTHINFO   NAMESPACE
*         context        master    ccx        default
          context1-new   master1   ccx1       default
[root@master ~]# kubecon^C
[root@master ~]# 
[root@master ~]# kubectl config use-context context1-new 
Switched to context "context1-new".
[root@master ~]# kubectl get pods
No resources found in default namespace.
[root@master ~]# kubectl get pods -n ccx
No resources found in ccx namespace.
[root@master ~]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-78d6f96c7b-q25lf   1/1     Running   0          7m36s
calico-node-g2wfj                          1/1     Running   0          7m36s
calico-node-gtfbc                          1/1     Running   0          7m36s
calico-node-nd29t                          1/1     Running   0          6m49s
coredns-545d6fc579-j4ckc                   1/1     Running   0          12m
coredns-545d6fc579-qn4kz                   1/1     Running   0          12m
etcd-master2                               1/1     Running   0          12m
kube-apiserver-master2                     1/1     Running   1          12m
kube-controller-manager-master2            1/1     Running   0          12m
kube-proxy-c4x8j                           1/1     Running   0          12m
kube-proxy-khh2z                           1/1     Running   0          10m
kube-proxy-pckf7                           1/1     Running   0          6m49s
kube-scheduler-master2                     1/1     Running   0          12m
[root@master ~]# 
[root@master ~]# 

kubectl get node is missing nodes

  • Everything else is perfectly normal【see the services above】, but the master is missing one or more node entries, like this:
[root@master ~]# kubectl  get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    <none>   16m     v1.21.0
node1    Ready    <none>   2m19s   v1.21.0
[root@master ~]# 
  • Fix
    Log in to the missing node and restart the docker and kubelet services; wait a moment, go back to the master, and the node shows up again.
[root@master ~]# kubectl  get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    <none>   16m     v1.21.0
node1    Ready    <none>   2m19s   v1.21.0
[root@master ~]# 
[root@master ~]# ssh node2
root@node2's password: 
Last login: Fri Jul  2 12:19:44 2021 from 192.168.59.1
[root@node2 ~]# systemctl restart docker
[root@node2 ~]# systemctl restart kubelet
[root@node2 ~]# 
[root@node2 ~]# exit
logout
Connection to node2 closed.
[root@master ~]# 
[root@master ~]# kubectl  get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   Ready      <none>   20m    v1.21.0
node1    Ready      <none>   7m5s   v1.21.0
node2    NotReady   <none>   2s     v1.21.0
[root@master ~]# 

Method 2【generated with commands】

Multi-cluster config management (contexts)

Viewing all cluster info

Command: kubectl config get-contexts

[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
          context    master    ccx        default
*         context1   master1   ccx1       default
[root@master ~]# 
  • Output fields:
    • CURRENT: the * marks which cluster you are currently in
    • NAME: context name
    • CLUSTER: cluster name
    • AUTHINFO: user name
    • NAMESPACE: the namespace in use
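  • Related: to print just the name of the active context (the row marked *), there is also:

kubectl config current-context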

Switching contexts【switching clusters】

  • Command: kubectl config use-context <context-name>【the NAME column of kubectl config get-contexts is the context name】
  • For example, I switch to context.
    The * marks the cluster currently in use.
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
          context    master    ccx        default
*         context1   master1   ccx1       default
[root@master ~]# 
[root@master ~]# kubectl config use-context context
Switched to context "context".
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
*         context    master    ccx        default
          context1   master1   ccx1       default
[root@master ~]# 
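  • You don't always have to switch: a one-off command can target another cluster via the --context flag, leaving the current context untouched:

# query cluster 2 while the current context stays on cluster 1
kubectl --context context1 get nodes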

Renaming a context

  • Command: kubectl config rename-context <current-name> <new-name>
  • For example, I rename context1 to context1-new:
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
*         context    master    ccx        default
          context1   master1   ccx1       default
[root@master ~]# 
[root@master ~]# kubectl config rename-context context1 context1-new
Context "context1" renamed to "context1-new".
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME           CLUSTER   AUTHINFO   NAMESPACE
*         context        master    ccx        default
          context1-new   master1   ccx1       default
[root@master ~]#

Creating a context【not recommended this way】

  • Command: kubectl config set-context <custom-context-name>
  • Creating one this bare is pointless by itself, because no cluster and no user are specified【it has to be combined with the modification step below】.
    For example, I create my-context; listing shows it contains nothing:
[root@master ~]# kubectl config set-context my-context
Context "my-context" created.
[root@master ~]#
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME         CLUSTER   AUTHINFO   NAMESPACE
          context      master    ccx        default
*         context1     master1   ccx1       default
          my-context                        
[root@master ~]# 
  • As shown above, this generates a context in ~/.kube/config, but with none of the key information; you have to specify the namespace, user, and cluster manually via the modification step:
- context:
    cluster:
    namespace:
    user:
  name: my-context

Modifying the info

  • This effectively has 2 uses:
  • 1. modifying an existing context's info
  • 2. completing a context created by command【filling in the remaining fields】
  • Syntax【a fully command-driven alternative is sketched right after this list】:
kubectl config set-context my-context【an existing context name】 --namespace=ccx【must be an existing namespace】 --cluster=my-context【custom name (modification only)】 --user=my-user【custom name (modification only)】
  • Looking up the non-custom values:
    • contexts: kubectl config get-contexts
    • namespaces: kubectl get ns
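  • For completeness, the cluster and user entries can themselves be generated by command, so the whole method-1 config can be built without opening an editor. A sketch, assuming cluster 2's CA and admin certificate/key have already been copied to /tmp【the /tmp paths are placeholders; on a kubeadm cluster the CA sits at /etc/kubernetes/pki/ca.crt】:

# create the cluster entry; --embed-certs stores the CA inline as
# certificate-authority-data, matching the hand-edited file above
kubectl config set-cluster master1 \
    --server=https://192.168.59.151:6443 \
    --certificate-authority=/tmp/cluster2-ca.crt \
    --embed-certs=true

# create the user entry with its client certificate and key
kubectl config set-credentials ccx1 \
    --client-certificate=/tmp/cluster2-admin.crt \
    --client-key=/tmp/cluster2-admin.key \
    --embed-certs=true

# bind them together in a context
kubectl config set-context context1 --cluster=master1 --user=ccx1 --namespace=default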

A special note on errors when adding new info this way

  • As I said before, this modification route is not recommended for adding new context entries. The reason:
    a cluster has only one set of keys, so in theory only one context should exist per cluster. If you create a context by command and fill it with custom values, ~/.kube/config only ends up with the block below, and the cluster and user it names don't exist. Using this context then fails complaining about port 8080, because the context points at no real server or user, so no cluster info can be found【in this situation you can bind the context's cluster and user to an existing cluster instead, as shown after the demo below】.
- context:
    cluster: my-context
    namespace: ccx
    user: my-user
  name: my-context
  • For example, I add info to the my-context created above.
    After switching to it, I'm told port 8080 isn't up【that is the error; this is a counter-example, so don't create contexts by command, specify them in the config file directly】:
[root@master ~]# kubectl  config set-context my-context --namespace=ccx --cluster=my-context --user=my-user
Context "my-context" modified.
[root@master ~]#
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
*         context      master       ccx        default
          context1     master1      ccx1       default
          my-context   my-context   my-user    ccx
[root@master ~]# 
[root@master ~]# kubectl config use-context my-context 
Switched to context "my-context".
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
          context      master       ccx        default
          context1     master1      ccx1       default
*         my-context   my-context   my-user    ccx
[root@master ~]# 
[root@master ~]# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master ~]# 
[root@master ~]# netstat -ntlp | grep 8080
[root@master ~]# 
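  • As noted above, the stuck context can be rescued by binding it to a cluster and user that actually exist in the file, for example pointing my-context back at cluster 1's entries:

kubectl config set-context my-context --cluster=master --user=ccx --namespace=ccx
kubectl get nodes   # now reaches https://192.168.59.142:6443 again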

Deleting a context

  • Command: kubectl config delete-context <context-name>
    If you don't know what you are doing, don't delete things.【A single cluster usually has exactly one context; delete it and the cluster becomes unusable. This is generally used to remove clusters from a multi-cluster config.】
    Deleting a context with the command also removes the related content from ~/.kube/config.
  • For example, I delete my-context【the unusable one created by command above, so it can simply be killed】:
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
*         context      master       ccx        default
          context1     master1      ccx1       default
          my-context   my-context   my-user    ccx
[root@master ~]# kubectl config delete-context my-context 
deleted context my-context from /root/.kube/config
[root@master ~]# 
[root@master ~]# kubectl config get-contexts 
CURRENT   NAME       CLUSTER   AUTHINFO   NAMESPACE
*         context    master    ccx        default
          context1   master1   ccx1       default
[root@master ~]# 
[root@master ~]# cat ~/.kube/config | grep my-con
[root@master ~]# 
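  • One caveat: delete-context removes only the context entry; any cluster and user sections it referenced stay in the file【in my-context's case there were none, since they never existed】. To strip a real cluster out of the config completely, remove all three, e.g. for cluster 2:

kubectl config delete-context context1-new   # the context
kubectl config delete-cluster master1        # the cluster entry
kubectl config unset users.ccx1              # the user entry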