Source: DevOpSec (WeChat official account)
Author: DevOpSec

Background

When installing a k8s cluster inside mainland China, image downloads often fail or crawl along for hours, so the installation never finishes and the mood is ruined.
Can kubespray deploy a k8s cluster fully offline instead?

It absolutely can. Once the preparation below is done, a highly available k8s cluster comes up in a little over ten minutes.

Preparation

  1. A machine with unrestricted internet access, used to download the k8s images and installation packages.
  2. An nginx download site that serves the packages needed to install the k8s cluster.
  3. A Harbor registry that stores the images needed to install the k8s cluster.
  4. A dedicated management machine with Docker installed, from which the cluster is deployed.

Implementation

Resources

Hostname                  Role                     IP              OS
master1                   master                   192.168.1.11    CentOS 7.9.x, kernel 4.19.x
master2                   master                   192.168.1.12    CentOS 7.9.x, kernel 4.19.x
master3                   master                   192.168.1.13    CentOS 7.9.x, kernel 4.19.x
node1                     node                     192.168.1.14    CentOS 7.9.x, kernel 4.19.x
manager1                  management machine       192.168.1.253   CentOS 7.9.x, kernel 4.19.x
nogfw001                  internet-access machine  192.168.2.111   CentOS 7.9.x, kernel 4.19.x
yourdownload.domain.com   download site            -               CentOS 7.9.x, kernel 4.19.x
yourharbor.domain.com     image registry           -               CentOS 7.9.x, kernel 4.19.x
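
The two service domains above are placeholders. If internal DNS does not resolve them, a minimal workaround is to map them in /etc/hosts on the management machine and every cluster node; the IPs below are hypothetical, so substitute the real addresses of your nginx and Harbor hosts.

# append host entries for the download site and the Harbor registry (example IPs)
cat >> /etc/hosts <<'EOF'
192.168.1.250 yourdownload.domain.com
192.168.1.251 yourharbor.domain.com
EOF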

Do the following on the internet-access machine

  1. Download kubespray, choosing the latest stable release
a. Check out the kubespray code
git clone git@github.com:kubernetes-sigs/kubespray.git

b. The latest release at the time of writing is v2.21.0
cd kubespray
git checkout v2.21.0
  2. Download the required packages for offline use
a. Generate the lists of packages and images
yum install ansible -y   # install ansible
cd contrib/offline
sh generate_list.sh

b. Inspect the generated directory structure
tree temp/
temp/
├── files.list          # list of required packages
├── files.list.template
├── images.list         # list of required images
└── images.list.template

c. Download the packages into temp/files
wget -x -P temp/files -i temp/files.list
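
Individual downloads can still fail on a slow or flaky link. A minimal retry sketch, assuming GNU wget (-c resumes partially downloaded files):

# re-run the mirror-style download a few times so transient failures get another chance
for i in 1 2 3; do
    wget -x -c -P temp/files -i temp/files.list && break
    echo "download pass $i finished with errors"
done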

  3. Set up the nginx download site (it does not have to run on the internet-access machine)
a. nginx server block
server {
        listen 80;
        server_name yourdownload.domain.com;
        location /k8s/ {
                alias /path/kubespray/contrib/offline/temp/files/;
        }
}

Then test the configuration and reload the nginx process, as shown below.
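
A minimal sketch, assuming nginx runs directly on the host rather than in a container:

nginx -t          # validate the configuration first
nginx -s reload   # then reload the running nginx process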

b. Test it (adjust the version to match your files.list); if the download succeeds, the download site works
wget http://yourdownload.domain.com/k8s/github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz

  4. Download the required images
a. Install and start docker-ce
yum install docker-ce
service docker start

b. Install skopeo, which syncs the required images into the private registry
yum install skopeo   # note: the CentOS version must be 7.x

c. In Harbor (domain: yourharbor.domain.com), create a public project named k8s
If the registry requires authentication, log in first:
docker login yourharbor.domain.com

d. Sync the images
for image in $(cat temp/images.list); do skopeo copy docker://${image} docker://yourharbor.domain.com/k8s/${image#*/}; done
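
After the sync loop finishes, it is worth spot-checking that the images actually landed in Harbor. A minimal verification sketch using skopeo (add --tls-verify=false to the inspect call if Harbor is served over plain HTTP):

# every entry in images.list should now resolve from the private registry
for image in $(cat temp/images.list); do
    skopeo inspect docker://yourharbor.domain.com/k8s/${image#*/} > /dev/null || echo "missing: ${image}"
done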

  5. Download the kubespray image, which saves installing kubespray's many dependencies by hand
Pull the image
docker pull quay.io/kubespray/kubespray:v2.21.0

Tag the image
docker tag quay.io/kubespray/kubespray:v2.21.0 yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0

Push it to your Harbor registry
docker push yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0

Do the following on the management machine

  1. Install Docker and pull the kubespray image
a. Install docker-ce and start it
b. Pull the kubespray image
docker pull yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0
  2. Set up passwordless SSH to the cluster machines
a. Generate a key pair
ssh-keygen -b 4096
Press Enter when prompted for a passphrase
b. Copy the public key to the master and node machines
ssh-copy-id master1
ssh-copy-id master2
ssh-copy-id master3
ssh-copy-id node1
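
A quick sketch to confirm passwordless SSH really works for every host before running any playbooks (BatchMode makes ssh fail instead of prompting for a password):

for host in master1 master2 master3 node1; do
    ssh -o BatchMode=yes "$host" hostname || echo "passwordless SSH to $host failed"
done
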
  3. Get kubespray (latest stable release), or copy it from the internet-access machine

a. Check out the kubespray code
git clone git@github.com:kubernetes-sigs/kubespray.git

b. The latest release at the time of writing is v2.21.0
cd kubespray
git checkout v2.21.0

Or copy it from the `internet-access machine`:

scp -rp 192.168.2.111:/path/kubespray .
  4. Configure the offline installation
cp -rp /path/kubespray/inventory/sample/ /path/kubespray/inventory/mycluster/
vim /path/kubespray/inventory/mycluster/group_vars/all/offline.yml

# Point the image repositories at the private registry
registry_host: "yourharbor.domain.com/k8s"
kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
github_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"

# Point the package downloads at the download site
files_repo: "http://yourdownload.domain.com/k8s"
kubeadm_download_url: "{{ files_repo }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
kubelet_download_url: "{{ files_repo }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
cni_download_url: "{{ files_repo }}/github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
crictl_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
flannel_cni_download_url: "{{ files_repo }}/github.com/flannel-io/cni-plugin/releases/download/{{ flannel_cni_version }}/flannel-{{ image_arch }}"
helm_download_url: "{{ files_repo }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
crun_download_url: "{{ files_repo }}/github.com/containers/crun/releases/download/{{ crun_version }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
kata_containers_download_url: "{{ files_repo }}/github.com/kata-containers/kata-containers/releases/download/{{ kata_containers_version }}/kata-static-{{ kata_containers_version }}-{{ ansible_architecture }}.tar.xz"
runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
containerd_download_url: "{{ files_repo }}/github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
nerdctl_download_url: "{{ files_repo }}/github.com/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
krew_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
cri_dockerd_download_url: "{{ files_repo }}/github.com/Mirantis/cri-dockerd/releases/download/{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}-linux-{{ image_arch }}.tar.gz"
gvisor_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
youki_download_url: "{{ files_repo }}/github.com/containers/youki/releases/download/v{{ youki_version }}/youki_v{{ youki_version | regex_replace('\\.', '_') }}_linux.tar.gz"
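
Before running the playbook, it can save time to confirm both offline endpoints are reachable from a cluster node. A sketch for a machine that has curl and skopeo installed (the kubeadm version below is only an example; substitute a path that exists in your files.list):

# the download site should answer for one of the mirrored files
curl -sfI http://yourdownload.domain.com/k8s/storage.googleapis.com/kubernetes-release/release/v1.25.6/bin/linux/amd64/kubeadm && echo "files repo OK"
# the kubespray image pushed earlier should resolve from the private registry
skopeo inspect docker://yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0 > /dev/null && echo "image registry OK"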

  5. Configure the machine inventory
vim  /path/kubespray/inventory/mycluster/inventory.ini

[all]
master1 ansible_host=192.168.1.11
master2 ansible_host=192.168.1.12
master3 ansible_host=192.168.1.13
node1 ansible_host=192.168.1.14

[kube_control_plane]
master1
master2
master3

[etcd]
master1
master2
master3

[kube_node]
node1

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
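
With the inventory in place, basic connectivity can be verified before the full install. A sketch to run from inside the kubespray container described in the next section, where ansible and the mounted inventory are already available:

# every host in the inventory should answer the ansible ping module
ANSIBLE_HOST_KEY_CHECKING=False ansible -i /inventory/inventory.ini all --private-key /root/.ssh/id_rsa -m ping
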
  6. Configure the cluster settings
vim /path/kubespray/inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

a. Configure the pod subnet
kube_pods_subnet: 10.233.64.0/18  # a /18 holds roughly 16k IPs; size it to your environment

b. Configure the service subnet
kube_service_addresses: 10.233.0.0/18  # a /18 holds roughly 16k IPs; size it to your environment

c. Choose the network plugin (calico is the default)
kube_network_plugin: calico

d. Container runtime
container_manager: containerd  # containerd is recommended

e. Whether to enable Kata containers (recommended)
kata_containers_enabled: true

f. Whether to renew certificates automatically (recommended)
auto_renew_certificates: true

Certificates can also be renewed manually by running the following on every master node:
sh /usr/local/bin/k8s-certs-renew.sh
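
To see when the current certificates expire (and confirm a renewal took effect), kubeadm can report this directly; a sketch to run on a master node:

# list the expiry dates of the control-plane certificates managed by kubeadm
kubeadm certs check-expiration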

With the preparation above complete, the cluster installation itself can begin; on an internal network it takes a little over ten minutes.

Operating the k8s cluster with kubespray (run on the management machine)

  1. Start the kubespray container
The container start script looks like this:
cat runc.sh

# mount the configuration we just prepared into the container
docker run --rm -it --mount type=bind,source=/path/kubespray/inventory/mycluster,dst=/inventory \
  --mount type=bind,source=/path/kubespray,dst=/apps/kubespray \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0 bash
  2. Install the k8s cluster
a. Start the kubespray container
sh runc.sh

b. Run the install command inside the container
# ANSIBLE_HOST_KEY_CHECKING=False skips the interactive "yes" host-key confirmation on first connection
# make sure every cluster node can resolve and reach yourharbor.domain.com and yourdownload.domain.com
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml

If the installation fails with a missing jmespath dependency, see the Problems section below.
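
Once the playbook finishes, a quick check from any master node confirms the cluster is healthy; a sketch assuming the kubeadm-generated admin kubeconfig:

# point kubectl at the admin kubeconfig if it is not already configured for this user
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes -o wide        # every node should report Ready
kubectl get pods -n kube-system  # core components should be Running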

  3. Add a node to the k8s cluster
a. Edit /path/kubespray/inventory/mycluster/inventory.ini (also add the new node's ansible_host entry under [all])
[kube_node]
node1
node2  # new node

b. Start the container
sh runc.sh

c. Run the scale playbook
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa scale.yml -b -v
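
To avoid touching the existing nodes during a scale-up, the play can be limited to the new node with ansible's --limit option; a sketch using node2 from above (confirm this matches the scaling procedure documented for your kubespray version):

# restrict the scale playbook to the newly added node only
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa scale.yml -b -v --limit node2
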
  4. Remove nodes from the k8s cluster
a. Edit /path/kubespray/inventory/mycluster/inventory.ini
[kube_node]
node1
node2  # node to remove
node3  # node to remove

b. Start the container
sh runc.sh

c. Run the removal playbook
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa remove-node.yml -b -v --extra-vars "node=node2,node3"
  5. Upgrade the k8s cluster
a. Download the files and images for the target version (see the steps above)

b. Start the container
sh runc.sh

c. Run the upgrade playbook
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -b -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa upgrade-cluster.yml -e kube_version=v1.26.3
  6. Tear down the k8s cluster
a. Start the container
sh runc.sh

b. Run the reset playbook
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa reset.yml

Problems

  1. Missing jmespath dependency
If the installation fails because the jmespath package is missing, rebuild the kubespray image with it included
cat Dockerfile
FROM yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0
RUN pip3 install jmespath -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

docker build -t yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0 .
docker push yourharbor.domain.com/k8s/quay.io/kubespray/kubespray:v2.21.0
  2. kubelet errors on a node
Error:
Apr 27 13:06:37 plat-data-mysql-sh-001 kubelet[26809]: E0427 13:06:37.782074   26809 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for "/kube.slice/containerd.service": failed to get container info for "/kube.slice/containerd.service": unknown container "/kube.slice/containerd.service"" containerName="/kube.slice/containerd.service"

Fix:
sed -i 's#/kube.slice/containerd.service#/system.slice/kubelet.service#g' /etc/kubernetes/kubelet.env ; service kubelet restart