1 Node Planning

The first step in building the cluster is to divide the available servers by node function. The node plan is as follows.

IP              Role
192.168.120.11  deploy node; master: api-server, etcd, scheduler, controller-manager
192.168.120.12  master: api-server, etcd, scheduler, controller-manager
192.168.120.13  master: api-server, etcd, scheduler, controller-manager
192.168.120.14  worker: kubelet, kube-proxy
192.168.120.15  worker: kubelet, kube-proxy


Planning notes:

  1. A separate machine, 192.168.120.11, serves as the deploy node. If machines are limited, the deploy node can also join the k8s cluster; in this deployment it additionally acts as a master node.
  2. For availability, three machines were chosen to run the k8s master components. If resources allow, etcd can be deployed separately from the other master components, which gives more flexibility to scale the instance counts as needed.
  3. The remaining 2 machines serve as k8s worker nodes. The number of workers should be adjusted to the actual workload.

2 Environment Preparation

With the node plan in place, the environment needs to be prepared. The main tasks are the following:

2.1 Install RKE

Install the RKE binary on the deploy node (192.168.120.11); the installation procedure is described in download-the-rke-binary.
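
A minimal download sketch (v1.0.14 and linux-amd64 are assumptions here; pick the asset matching your platform and version from the RKE releases page):

#Download the rke binary from GitHub releases and make it executable
wget https://github.com/rancher/rke/releases/download/v1.0.14/rke_linux-amd64 -O rke
chmod +x rke

After downloading, you can check which k8s versions this rke release can install: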

#Check the rke version
[root@localhost rke]# ./rke -v
rke version v1.0.14

#List the k8s versions this rke can install
[root@localhost rke]# ./rke config --system-images --all
INFO[0000] Generating images list for version [v1.16.15-rancher1-2]: 
rancher/coreos-etcd:v3.3.15-rancher1
rancher/rke-tools:v0.1.65
rancher/k8s-dns-kube-dns:1.15.0
rancher/k8s-dns-dnsmasq-nanny:1.15.0
rancher/k8s-dns-sidecar:1.15.0
rancher/cluster-proportional-autoscaler:1.7.1
rancher/coredns-coredns:1.6.2
rancher/k8s-dns-node-cache:1.15.7
rancher/hyperkube:v1.16.15-rancher1
rancher/coreos-flannel:v0.12.0
rancher/flannel-cni:v0.3.0-rancher6
rancher/calico-node:v3.13.4
rancher/calico-cni:v3.13.4
rancher/calico-kube-controllers:v3.13.4
rancher/calico-ctl:v3.13.4
rancher/calico-pod2daemon-flexvol:v3.13.4
weaveworks/weave-kube:2.6.4
weaveworks/weave-npc:2.6.4
rancher/pause:3.1
rancher/nginx-ingress-controller:nginx-0.35.0-rancher1
rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
rancher/metrics-server:v0.3.4
INFO[0000] Generating images list for version [v1.17.14-rancher1-1]: 
rancher/coreos-etcd:v3.4.3-rancher1
rancher/rke-tools:v0.1.66
rancher/k8s-dns-kube-dns:1.15.0
rancher/k8s-dns-dnsmasq-nanny:1.15.0
rancher/k8s-dns-sidecar:1.15.0
rancher/cluster-proportional-autoscaler:1.7.1
rancher/coredns-coredns:1.6.5
rancher/k8s-dns-node-cache:1.15.7
rancher/hyperkube:v1.17.14-rancher1
rancher/coreos-flannel:v0.12.0
rancher/flannel-cni:v0.3.0-rancher6
rancher/calico-node:v3.13.4
rancher/calico-cni:v3.13.4
rancher/calico-kube-controllers:v3.13.4
rancher/calico-ctl:v3.13.4
rancher/calico-pod2daemon-flexvol:v3.13.4
weaveworks/weave-kube:2.6.4
weaveworks/weave-npc:2.6.4
rancher/pause:3.1
rancher/nginx-ingress-controller:nginx-0.35.0-rancher2
rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
rancher/metrics-server:v0.3.6
INFO[0000] Generating images list for version [v1.15.12-rancher2-3]: 
rancher/coreos-etcd:v3.3.10-rancher1
rancher/rke-tools:v0.1.58
rancher/k8s-dns-kube-dns:1.15.0
rancher/k8s-dns-dnsmasq-nanny:1.15.0
rancher/k8s-dns-sidecar:1.15.0
rancher/cluster-proportional-autoscaler:1.3.0
rancher/coredns-coredns:1.3.1
rancher/k8s-dns-node-cache:1.15.7
rancher/hyperkube:v1.15.12-rancher2
rancher/coreos-flannel:v0.12.0
rancher/flannel-cni:v0.3.0-rancher6
rancher/calico-node:v3.13.4
rancher/calico-cni:v3.13.4
rancher/calico-kube-controllers:v3.13.4
rancher/calico-ctl:v3.13.4
rancher/calico-pod2daemon-flexvol:v3.13.4
weaveworks/weave-kube:2.6.4
weaveworks/weave-npc:2.6.4
rancher/pause:3.1
rancher/nginx-ingress-controller:nginx-0.32.0-rancher1
rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
rancher/metrics-server:v0.3.3

2.2 Configure Passwordless SSH

RKE deploys the k8s cluster over SSH tunnels, so the node running RKE must have passwordless SSH access to every k8s node. If the RKE node itself is to join the k8s cluster, it also needs passwordless SSH access to itself.

#Create a regular user (the rancher user must exist on every node)
[root@localhost rke]# useradd rancher
[root@localhost rke]# echo "password" | passwd --stdin rancher
[root@localhost rke]# usermod -aG docker rancher

#Generate a key pair for the user running rke up, if one does not exist yet
[root@localhost rke]# ssh-keygen -t rsa

#Copy the public key to rancher@ on each host for passwordless login
[root@localhost rke]# ssh-copy-id -i rancher@192.168.120.11
[root@localhost rke]# ssh-copy-id -i rancher@192.168.120.12
[root@localhost rke]# ssh-copy-id -i rancher@192.168.120.13
[root@localhost rke]# ssh-copy-id -i rancher@192.168.120.14
[root@localhost rke]# ssh-copy-id -i rancher@192.168.120.15
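
One way to verify both the passwordless login and the docker group membership in a single step (using one node as an example):

#Should print the Docker version without prompting for a password
ssh rancher@192.168.120.12 docker version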

2.3 Install Docker

RKE starts the k8s components from the docker image rancher/hyperkube, so Docker must be installed on every node of the k8s cluster (the 5 machines 192.168.120.11 ~ 192.168.120.15). Note that the Docker version must match the target k8s version.

The rancher user also needs permission to run docker; it was already added to the docker group above.
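
One common install path is Rancher's version-pinned Docker install scripts (a sketch; 19.03 is an assumption, so check the RKE support matrix for the Docker version matching your k8s release):

#Install a pinned Docker version and start it on boot
curl https://releases.rancher.com/install-docker/19.03.sh | sh
systemctl enable --now docker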


2.4 Disable Swap, Firewall, etc.

Since k8s 1.8, the system swap must be disabled; with the default configuration, kubelet will not start otherwise. Disable swap on all k8s worker nodes.
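
A typical sketch for CentOS (assuming firewalld; adjust for your distribution):

#Disable swap now and persist the change across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

#Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld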

3 Installation Steps

3.1 Prepare the cluster.yml File

The full set of cluster.yml options is documented on the official site; the configuration used here is:

nodes:
  - address: 192.168.120.11
    user: rancher
    role:
      - controlplane
      - etcd
    labels:
      isNode: true
  - address: 192.168.120.12
    user: rancher
    role:
      - controlplane
      - etcd
    labels:
      isNode: true
  - address: 192.168.120.13
    user: rancher
    role:
      - controlplane
      - etcd
    labels:
      isNode: true
  - address: 192.168.120.14
    user: rancher
    role:
      - worker
    labels:
      isNode: true
  - address: 192.168.120.15
    user: rancher
    role:
      - worker
    labels:
      isNode: true
kubernetes_version: "v1.17.14-rancher1-1"
private_registries:
  - url: 192.168.120.16
    user: admin
    password: 123456
cluster_name: test
services:
  kube-api:
    service_cluster_ip_range: 100.66.0.0/16
    extra_args:
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds"
  kube-controller:
    cluster_cidr: 100.86.0.0/16
    service_cluster_ip_range: 100.66.0.0/16    
  kubelet:
    cluster_dns_server: 100.66.0.10
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      masquerade-all: true
network:
  plugin: flannel
dns:
  provider: coredns
  upstreamnameservers:
  - 255.255.255.55
  - 255.255.254.55
ingress:
  provider: none
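
Because proxy-mode: ipvs is set above, the IPVS kernel modules must be available on every node. A hedged sketch for a CentOS 7 kernel (module names vary by kernel version, e.g. nf_conntrack replaces nf_conntrack_ipv4 on newer kernels):

#Load the kernel modules kube-proxy needs in ipvs mode
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4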

3.2 Run rke up in the rke Directory

rke up --config cluster.yml
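
On success, RKE writes a kubeconfig (kube_config_cluster.yml) and a state file (cluster.rkestate) next to cluster.yml. A quick sanity check from the deploy node:

#All nodes should show up as Ready
kubectl --kubeconfig=kube_config_cluster.yml get nodes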

3.3 kubectl Auto-completion

For the other master nodes to run kubectl commands, copy the rke directory to those master hosts first.

kubectl --kubeconfig=/opt/rke/kube_config_cluster.yml completion bash
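
Note that the command above only prints the completion script to stdout (the --kubeconfig flag is not actually required for generating it). To enable completion, source the script in the current shell or install it system-wide (assuming the bash-completion package is present):

#Enable completion for the current shell
source <(kubectl completion bash)

#Or install it for all users
kubectl completion bash > /etc/bash_completion.d/kubectl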
