1. Upgrading the Rancher service with rke

  • rke 0.2 (can install Rancher 2.2.4; supports k8s 1.13.5)
  • rke 1.2 (can install Rancher 2.5.11; supports k8s 1.20.12)
  • Upgrade approach: list the k8s versions the current rke supports, download the newer rke, use it to upgrade the underlying k8s cluster, then upgrade Rancher to 2.5 with helm, and finally upgrade the downstream k8s clusters through the new Rancher (see the etcd snapshot sketch below before starting)
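
Before changing anything, it is worth saving a one-off etcd snapshot so the underlying cluster can be restored if the upgrade goes wrong. A minimal sketch using RKE's built-in snapshot command (the snapshot name "pre-upgrade" is just an example):

# Save a one-off etcd snapshot with the currently working rke binary;
# snapshots land under /opt/rke/etcd-snapshots/ on the etcd nodes
./rke.0.2 etcd snapshot-save --config cluster.yml --name pre-upgrade
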
[root@c73 rancher-packs]# ./rke.0.2 --version
rke version v0.2.4
[root@c73 rancher-packs]# ./rke.0.2 config --system-images -all |grep hyperkube
rancher/hyperkube:v1.11.9-rancher1
rancher/hyperkube:v1.12.7-rancher1
rancher/hyperkube:v1.13.5-rancher1
rancher/hyperkube:v1.14.1-rancher1

[root@c73 rancher-packs]# ./rke.1.2 --version
rke version v1.2.14
[root@c73 rancher-packs]# ./rke.1.2 config --list-version -all
v1.17.17-rancher2-3
v1.19.16-rancher1-1
v1.20.12-rancher1-1
v1.18.20-rancher1-2

1.1 Edit cluster.yml to set the k8s version and upgrade the underlying cluster

[root@c73 rancher-packs]# cat cluster.yml
nodes:
    - address: 192.168.56.73
      user: docker
      ssh_key_path: /home/docker/.ssh/id_rsa
      #ssh_cert_path: /home/docker/.ssh/id_rsa.pub
      role:
        - controlplane
        - etcd
        - worker
private_registries:
- url: harbor01.io # private registry url
  user: admin
  password: "Harbor12345"
  is_default: true
# Set the underlying k8s version to one of the versions listed by rke
kubernetes_version: v1.20.12-rancher1-1
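
Because this is an offline install pulling from a private registry, every system image for the target version must already exist there. A rough sketch for mirroring them, assuming a staging host with internet access (the retag target matches the private_registries entry above; a fully air-gapped flow would use docker save/load instead):

# Mirror the system images listed by the new rke into the private registry
./rke.1.2 config --system-images -all | while read img; do
    docker pull "$img"
    docker tag "$img" "harbor01.io/$img"
    docker push "harbor01.io/$img"
done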

[root@c73 rancher-packs]# ./rke.1.2 up
INFO[0000] Running RKE version: v1.2.14
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.56.73]
INFO[0000] [network] No hosts added existing cluster, skipping port check
INFO[0000] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0000] Checking if container [cert-deployer] is running on host [192.168.56.73], try #1
INFO[0000] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0000] Starting container [cert-deployer] on host [192.168.56.73], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.168.56.73], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.168.56.73], try #1
INFO[0006] Removing container [cert-deployer] on host [192.168.56.73], try #1
INFO[0006] [reconcile] Rebuilding and updating local kube config
INFO[0006] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0006] [reconcile] host [192.168.56.73] is a control plane node with reachable Kubernetes API endpoint in the cluster
INFO[0006] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0006] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.56.73]
INFO[0006] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0006] Starting container [file-deployer] on host [192.168.56.73], try #1
INFO[0006] Successfully started [file-deployer] container on host [192.168.56.73]
INFO[0006] Waiting for [file-deployer] container to exit on host [192.168.56.73]
INFO[0006] Waiting for [file-deployer] container to exit on host [192.168.56.73]
INFO[0007] Container [file-deployer] is still running on host [192.168.56.73]: stderr: [], stdout: []
INFO[0008] Waiting for [file-deployer] container to exit on host [192.168.56.73]
INFO[0008] Removing container [file-deployer] on host [192.168.56.73], try #1
INFO[0008] [remove/file-deployer] Successfully removed container on host [192.168.56.73]
INFO[0008] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
INFO[0008] [reconcile] Reconciling cluster state
INFO[0008] [reconcile] Check etcd hosts to be deleted
INFO[0008] [reconcile] Check etcd hosts to be added
INFO[0008] [reconcile] Rebuilding and updating local kube config
INFO[0008] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0008] [reconcile] host [192.168.56.73] is a control plane node with reachable Kubernetes API endpoint in the cluster
INFO[0008] [reconcile] Reconciled cluster state successfully
INFO[0008] max_unavailable_worker got rounded down to 0, resetting to 1
INFO[0008] Setting maxUnavailable for worker nodes to: 1
INFO[0008] Setting maxUnavailable for controlplane nodes to: 1
INFO[0008] Pre-pulling kubernetes images
INFO[0008] Image [harbor-pre.jdfmgt.com/rancher/hyperkube:v1.20.12-rancher1] exists on host [192.168.56.73]
INFO[0008] Kubernetes images pulled successfully
INFO[0008] [etcd] Building up etcd plane..
INFO[0008] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0008] Starting container [etcd-fix-perm] on host [192.168.56.73], try #1
INFO[0008] Successfully started [etcd-fix-perm] container on host [192.168.56.73]
INFO[0008] Waiting for [etcd-fix-perm] container to exit on host [192.168.56.73]
INFO[0008] Waiting for [etcd-fix-perm] container to exit on host [192.168.56.73]
INFO[0008] Removing container [etcd-fix-perm] on host [192.168.56.73], try #1
INFO[0008] [remove/etcd-fix-perm] Successfully removed container on host [192.168.56.73]
INFO[0008] Checking if container [etcd] is running on host [192.168.56.73], try #1
INFO[0008] Image [harbor-pre.jdfmgt.com/rancher/mirrored-coreos-etcd:v3.4.15-rancher1] exists on host [192.168.56.73]
INFO[0008] Checking if container [old-etcd] is running on host [192.168.56.73], try #1
INFO[0008] Stopping container [etcd] on host [192.168.56.73] with stopTimeoutDuration [5s], try #1
INFO[0013] Waiting for [etcd] container to exit on host [192.168.56.73]
INFO[0013] Renaming container [etcd] to [old-etcd] on host [192.168.56.73], try #1
INFO[0013] Starting container [etcd] on host [192.168.56.73], try #1
INFO[0014] [etcd] Successfully updated [etcd] container on host [192.168.56.73]
INFO[0014] Removing container [old-etcd] on host [192.168.56.73], try #1
INFO[0014] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.56.73]
INFO[0014] Removing container [etcd-rolling-snapshots] on host [192.168.56.73], try #1
INFO[0015] [remove/etcd-rolling-snapshots] Successfully removed container on host [192.168.56.73]
INFO[0015] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0015] Starting container [etcd-rolling-snapshots] on host [192.168.56.73], try #1
INFO[0015] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.56.73]
INFO[0020] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0020] Starting container [rke-bundle-cert] on host [192.168.56.73], try #1
INFO[0021] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.56.73]
INFO[0021] Waiting for [rke-bundle-cert] container to exit on host [192.168.56.73]
INFO[0021] Container [rke-bundle-cert] is still running on host [192.168.56.73]: stderr: [], stdout: []
INFO[0022] Waiting for [rke-bundle-cert] container to exit on host [192.168.56.73]
INFO[0022] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.56.73]
INFO[0022] Removing container [rke-bundle-cert] on host [192.168.56.73], try #1
INFO[0022] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0023] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0023] [etcd] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0023] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0024] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0024] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0024] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0025] [etcd] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0025] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0025] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0025] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0028] [etcd] etcd host [192.168.56.73] reported healthy=true
INFO[0028] [controlplane] Now checking status of node 192.168.56.73, try #1
INFO[0028] [controlplane] Processing controlplane hosts for upgrade 1 at a time
INFO[0028] Processing controlplane host 192.168.56.73
INFO[0028] [controlplane] Now checking status of node 192.168.56.73, try #1
INFO[0028] [controlplane] Getting list of nodes for upgrade
INFO[0028] Upgrading controlplane components for control host 192.168.56.73
INFO[0028] Checking if container [service-sidekick] is running on host [192.168.56.73], try #1
INFO[0028] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0028] Removing container [service-sidekick] on host [192.168.56.73], try #1
INFO[0028] [remove/service-sidekick] Successfully removed container on host [192.168.56.73]
INFO[0029] Checking if container [kube-apiserver] is running on host [192.168.56.73], try #1
INFO[0029] Image [harbor-pre.jdfmgt.com/rancher/hyperkube:v1.20.12-rancher1] exists on host [192.168.56.73]
INFO[0029] Checking if container [old-kube-apiserver] is running on host [192.168.56.73], try #1
INFO[0029] Stopping container [kube-apiserver] on host [192.168.56.73] with stopTimeoutDuration [5s], try #1
INFO[0029] Waiting for [kube-apiserver] container to exit on host [192.168.56.73]
INFO[0029] Renaming container [kube-apiserver] to [old-kube-apiserver] on host [192.168.56.73], try #1
INFO[0029] Starting container [kube-apiserver] on host [192.168.56.73], try #1
INFO[0030] [controlplane] Successfully updated [kube-apiserver] container on host [192.168.56.73]
INFO[0030] Removing container [old-kube-apiserver] on host [192.168.56.73], try #1
INFO[0030] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.56.73]
INFO[0049] [healthcheck] service [kube-apiserver] on host [192.168.56.73] is healthy
INFO[0049] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0049] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0050] [controlplane] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0050] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0050] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0050] Checking if container [kube-controller-manager] is running on host [192.168.56.73], try #1
INFO[0050] Image [harbor-pre.jdfmgt.com/rancher/hyperkube:v1.20.12-rancher1] exists on host [192.168.56.73]
INFO[0050] Checking if container [old-kube-controller-manager] is running on host [192.168.56.73], try #1
INFO[0050] Stopping container [kube-controller-manager] on host [192.168.56.73] with stopTimeoutDuration [5s], try #1
INFO[0055] Waiting for [kube-controller-manager] container to exit on host [192.168.56.73]
INFO[0055] Renaming container [kube-controller-manager] to [old-kube-controller-manager] on host [192.168.56.73], try #1
INFO[0055] Starting container [kube-controller-manager] on host [192.168.56.73], try #1
INFO[0056] [controlplane] Successfully updated [kube-controller-manager] container on host [192.168.56.73]
INFO[0056] Removing container [old-kube-controller-manager] on host [192.168.56.73], try #1
INFO[0056] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.56.73]
INFO[0062] [healthcheck] service [kube-controller-manager] on host [192.168.56.73] is healthy
INFO[0062] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0063] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0063] [controlplane] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0063] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0064] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0064] Checking if container [kube-scheduler] is running on host [192.168.56.73], try #1
INFO[0064] Image [harbor-pre.jdfmgt.com/rancher/hyperkube:v1.20.12-rancher1] exists on host [192.168.56.73]
INFO[0064] Checking if container [old-kube-scheduler] is running on host [192.168.56.73], try #1
INFO[0064] Stopping container [kube-scheduler] on host [192.168.56.73] with stopTimeoutDuration [5s], try #1
INFO[0069] Waiting for [kube-scheduler] container to exit on host [192.168.56.73]
INFO[0069] Renaming container [kube-scheduler] to [old-kube-scheduler] on host [192.168.56.73], try #1
INFO[0069] Starting container [kube-scheduler] on host [192.168.56.73], try #1
INFO[0070] [controlplane] Successfully updated [kube-scheduler] container on host [192.168.56.73]
INFO[0070] Removing container [old-kube-scheduler] on host [192.168.56.73], try #1
INFO[0070] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.56.73]
INFO[0076] [healthcheck] service [kube-scheduler] on host [192.168.56.73] is healthy
INFO[0076] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0077] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0077] [controlplane] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0077] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0077] Upgrading workerplane components for control host 192.168.56.73
INFO[0077] Checking if container [service-sidekick] is running on host [192.168.56.73], try #1
INFO[0077] [sidekick] Sidekick container already created on host [192.168.56.73]
INFO[0077] Checking if container [kubelet] is running on host [192.168.56.73], try #1
INFO[0077] Image [harbor-pre.jdfmgt.com/rancher/hyperkube:v1.20.12-rancher1] exists on host [192.168.56.73]
INFO[0077] Checking if container [old-kubelet] is running on host [192.168.56.73], try #1
INFO[0077] Stopping container [kubelet] on host [192.168.56.73] with stopTimeoutDuration [5s], try #1
INFO[0078] Waiting for [kubelet] container to exit on host [192.168.56.73]
INFO[0078] Renaming container [kubelet] to [old-kubelet] on host [192.168.56.73], try #1
INFO[0078] Starting container [kubelet] on host [192.168.56.73], try #1
INFO[0079] [worker] Successfully updated [kubelet] container on host [192.168.56.73]
INFO[0079] Removing container [old-kubelet] on host [192.168.56.73], try #1
INFO[0079] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.56.73]
INFO[0086] [healthcheck] service [kubelet] on host [192.168.56.73] is healthy
INFO[0086] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0087] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0089] [worker] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0089] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0090] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0090] Checking if container [kube-proxy] is running on host [192.168.56.73], try #1
INFO[0090] Image [harbor-pre.jdfmgt.com/rancher/hyperkube:v1.20.12-rancher1] exists on host [192.168.56.73]
INFO[0090] Checking if container [old-kube-proxy] is running on host [192.168.56.73], try #1
INFO[0090] Stopping container [kube-proxy] on host [192.168.56.73] with stopTimeoutDuration [5s], try #1
INFO[0095] Waiting for [kube-proxy] container to exit on host [192.168.56.73]
INFO[0095] Renaming container [kube-proxy] to [old-kube-proxy] on host [192.168.56.73], try #1
INFO[0095] Starting container [kube-proxy] on host [192.168.56.73], try #1
INFO[0096] [worker] Successfully updated [kube-proxy] container on host [192.168.56.73]
INFO[0096] Removing container [old-kube-proxy] on host [192.168.56.73], try #1
INFO[0096] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.56.73]
INFO[0097] [healthcheck] service [kube-proxy] on host [192.168.56.73] is healthy
INFO[0097] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0098] Starting container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0099] [worker] Successfully started [rke-log-linker] container on host [192.168.56.73]
INFO[0099] Removing container [rke-log-linker] on host [192.168.56.73], try #1
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [192.168.56.73]
INFO[0099] [controlplane] Now checking status of node 192.168.56.73, try #1
INFO[0099] [controlplane] Successfully upgraded Controller Plane..
INFO[0099] [authz] Creating rke-job-deployer ServiceAccount
INFO[0099] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0099] [authz] Creating system:node ClusterRoleBinding
INFO[0100] [authz] system:node ClusterRoleBinding created successfully
INFO[0100] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0100] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0100] Successfully Deployed state file at [./cluster.rkestate]
INFO[0100] [state] Saving full cluster state to Kubernetes
INFO[0100] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state
INFO[0100] [worker] Upgrading Worker Plane..
INFO[0100] [worker] Successfully upgraded Worker Plane..
INFO[0100] Image [harbor-pre.jdfmgt.com/rancher/rke-tools:v0.1.78] exists on host [192.168.56.73]
INFO[0100] Starting container [rke-log-cleaner] on host [192.168.56.73], try #1
INFO[0101] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.56.73]
INFO[0101] Removing container [rke-log-cleaner] on host [192.168.56.73], try #1
INFO[0101] [remove/rke-log-cleaner] Successfully removed container on host [192.168.56.73]
INFO[0101] [sync] Syncing nodes Labels and Taints
INFO[0101] [sync] Successfully synced nodes Labels and Taints
INFO[0101] [network] Setting up network plugin: canal
INFO[0101] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0101] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0101] [addons] Executing deploy job rke-network-plugin
INFO[0123] [dns] removing DNS provider kube-dns
INFO[0143] [dns] DNS provider kube-dns removed successfully
INFO[0143] [addons] Setting up coredns
INFO[0143] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0143] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0143] [addons] Executing deploy job rke-coredns-addon
INFO[0149] [addons] CoreDNS deployed successfully
INFO[0149] [dns] DNS provider coredns deployed successfully
INFO[0149] [addons] Setting up Metrics Server
INFO[0149] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0149] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0149] [addons] Executing deploy job rke-metrics-addon
INFO[0159] [addons] Metrics Server deployed successfully
INFO[0159] [ingress] Setting up nginx ingress controller
INFO[0159] [ingress] removing admission batch jobs if they exist
INFO[0159] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0159] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0159] [addons] Executing deploy job rke-ingress-controller
INFO[0174] [ingress] ingress controller nginx deployed successfully
INFO[0174] [addons] Setting up user addons
INFO[0174] [addons] no user addons defined
INFO[0174] Finished building Kubernetes cluster successfully

# Configure the kubectl environment
[root@c73 rancher-packs]# ln -sf /export/rancher-packs/kube_config_cluster.yml ~/.kube/config
[root@c73 rancher-packs]# kubectl get nodes
NAME            STATUS   ROLES                      AGE   VERSION
192.168.56.73   Ready    controlplane,etcd,worker   40h   v1.20.12


# Verify that workloads deployed before the upgrade are still present
[root@c73 rancher-packs]# kubectl get po
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-5886446787-m5tcj   1/1     Running   1          50m
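
Beyond the default namespace, a quick way to confirm the rolling upgrade left everything healthy is to look for any pod that is not Running (Completed jobs are expected and filtered out):

# Pods stuck in Pending/CrashLoopBackOff after the upgrade will show up here
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'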


# Extract the Rancher 2.5 charts package and upgrade the Rancher release with helm
[root@c73 rancher-packs]# ./helm upgrade rancher  ./rancher2.5 \
     --namespace cattle-system \
     --set hostname=rancher-my.test.com \
     --set ingress.tls.source=secret \
     --set privateCA=true \
     --set rancherImage=harbor-pre.jdfmgt.com/rancher/rancher  --kubeconfig kube_config_cluster.yml

WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: kube_config_cluster.yml
W0712 16:34:37.519095   22777 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0712 16:34:41.179811   22777 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0712 16:34:44.362542   22777 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Release "rancher" has been upgraded. Happy Helming!
NAME: rancher
LAST DEPLOYED: Tue Jul 12 16:34:00 2022
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Rancher Server has been installed.
NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.
Check out our docs at https://rancher.com/docs/rancher/v2.x/en/
Browse to https://rancher-my.test.com
Happy Containering!
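
The NOTES above only confirm that the release was recorded; whether the new Rancher pods actually rolled out can be checked with standard kubectl, using the same kubeconfig as the helm command:

# Block until the upgraded Rancher deployment is fully available
kubectl --kubeconfig kube_config_cluster.yml -n cattle-system rollout status deploy/rancher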



# Inspect Rancher after the upgrade
[root@c73 ~]# kubectl get all  -n cattle-system
NAME                                   READY   STATUS    RESTARTS   AGE
pod/rancher-67594dd6f4-mfzwv           1/1     Running   0          42m
pod/rancher-webhook-5c9b54bb48-9bnwg   1/1     Running   0          39m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/rancher           ClusterIP   10.43.119.221   <none>        80/TCP    17h
service/rancher-webhook   ClusterIP   10.43.40.119    <none>        443/TCP   16h

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher           1/1     1            1           17h
deployment.apps/rancher-webhook   1/1     1            1           16h

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-67594dd6f4           1         1         1       16h
replicaset.apps/rancher-6d7f94d8cc           0         0         0       17h
replicaset.apps/rancher-webhook-5c9b54bb48   1         1         1       16h

2. Upgrading the k8s clusters created by the old Rancher

  • k8s 1.13.5 is upgraded to 1.17.17
  • In the Global view, select the cluster --> Upgrade, choose the higher Kubernetes version, then copy the docker command provided and run it where required (a kubectl-based view of the same change is sketched after this list)
  • Wait for the upgrade to complete
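
For reference, the same version bump can be inspected from the local cluster that runs Rancher, since each downstream cluster is represented by a Cluster custom resource. A hedged sketch only; the cluster ID c-xxxxx is a placeholder to look up first, and the UI flow above remains the supported path:

# List the clusters Rancher manages; IDs look like c-xxxxx (placeholder)
kubectl get clusters.management.cattle.io
# For RKE-provisioned clusters the version lives at
# spec.rancherKubernetesEngineConfig.kubernetesVersion
kubectl edit clusters.management.cattle.io c-xxxxx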