Kubernetes deployment notes (based on the official documentation)

I. Before you begin deployment

1. Edit the /etc/hosts file so that all node machines can resolve each other locally

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.253.176 master
192.168.253.196 node1
192.168.253.190 node2

2. The Ansible inventory

[k8s_master]
master

[k8s_nodes]
node1
node2

[k8s]

[k8s:children]
k8s_master
k8s_nodes

3. Distribute the public key (note the ansible_ssh_pass setting)

cat send-pubkey.yml
---
- hosts: all
  gather_facts: no
  remote_user: root
  vars:
    ansible_ssh_pass: 1
  tasks:
  - name: Set authorized key taken from file
    authorized_key:
      user: root
      state: present
      key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
...
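
A minimal way to run this step, assuming the control node's key pair does not exist yet and that the inventory file is named hosts.ini (the name used by the commands later in these notes):

# Generate the key pair on the Ansible control node if it does not exist yet
ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
# Push the public key to every host; this first run authenticates with ansible_ssh_pass
ansible-playbook -i hosts.ini send-pubkey.yml
# Afterwards, key-based login can be verified with the ping module
ansible all -i hosts.ini -m ping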

4. Distribute /etc/hosts and set the hostnames

---
- name: Sync /etc/hosts to all nodes and set the hostnames
  hosts: k8s
  gather_facts: no
  tasks:
    - name: Sync the hosts file
      copy: src=etc-hosts dest=/etc/hosts
    - name: Set each node's hostname
      shell:
        cmd: hostnamectl set-hostname "{{ inventory_hostname }}"
      register: sethostname
    - name: Verify that the hostname was set successfully
      debug: var=sethostname.rc
...
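
A sketch of running and verifying this playbook; sync-hosts.yml is an assumed file name (the original notes do not show one), and hosts.ini is the inventory used elsewhere in these notes:

ansible-playbook -i hosts.ini sync-hosts.yml
# Spot-check the hostname and the /etc/hosts entries on every node
ansible k8s -i hosts.ini -m shell -a 'hostname; grep -E "master|node" /etc/hosts'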

II. Pre-deployment environment checks

cat before-you-begin.yml
---
- name: Checks and settings before deploying the cluster
  hosts: k8s
  gather_facts: no
  tasks:
    - name: Disable SELinux
      shell: |
        setenforce 0;
        sed -ri '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config
      tags:
        - selinux

    - name: Disable the swap partition
      shell:
        cmd: swapoff -a; sed -ri 's/.*swap.*/#&/g' /etc/fstab
        warn: no
      tags:
        - swap

    - name: Create the module config file /etc/modules-load.d/k8s.conf
      blockinfile:
        path: /etc/modules-load.d/k8s.conf
        create: yes
        block: |
          br_netfilter

    - name: Ensure iptables on the nodes can see bridged traffic
      blockinfile:
        path: /etc/sysctl.d/k8s.conf
        create: yes
        block: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1

    - name: Load the br_netfilter module
      shell: modprobe br_netfilter

    - name: Check SELinux, swap, br_netfilter and sysctl settings
      shell: |
        hostname > /tmp/host-info;
        getenforce >> /tmp/host-info;
        free -m | grep 'Swap' >> /tmp/host-info;
        lsmod | grep br_netfilter >> /tmp/host-info;
        sysctl --system | grep 'k8s.conf' -A 2 >> /tmp/host-info;

    - name: Collect MAC addresses and append them to the report file
      shell: |
        host=$(hostname);
        ip link |
        awk -v host=$host '/link\/ether/ {print $2, host}' >> /tmp/host-info ;
        echo "---------------------------" >> /tmp/host-info

    - name: Fetch the report for comparison
      fetch:
        src: /tmp/host-info
        dest: ./
...
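
Running it and reading the fetched reports could look like this; with fetch dest: ./ each host's report lands in a per-host directory next to the playbook:

ansible-playbook -i hosts.ini before-you-begin.yml
# One report per host, e.g. ./master/tmp/host-info, ./node1/tmp/host-info, ...
cat ./*/tmp/host-info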

2. Check the required ports with a check script (based on Python 2.7)

  • check-port.yml
- hosts: k8s
  gather_facts: no
  tasks:
    - name: Run the port check script
      script: ./check-port.py
      register: ret
    - debug: var=item
      loop: "{{ ret.stdout_lines }}"
  • check-port.py
#!/bin/env python
#coding:utf-8
import re
import subprocess
import socket

hostname = socket.gethostname()

ports_set = set()
if 'master' in hostname:
    # Ports required by the control-plane components
    check_ports = {"6443", "10250", "10251", "10252", "2379", "2380"}
else:
    # NodePort range plus the kubelet port on worker nodes
    check_ports = {str(i) for i in xrange(30000, 32768)}
    check_ports.add("10250")

r = subprocess.Popen("ss -nta", stdout=subprocess.PIPE, shell=True)
result = r.stdout.read()

# Collect the local port of every listening or active connection
for line in result.splitlines():
    if re.match('^(ESTAB|LISTEN|SYN-SENT)', line):
        local_addr = line.split()[3]
        port = local_addr.split(':')[-1]
        ports_set.add(port)

used_ports = check_ports & ports_set
used_ports = ' '.join(used_ports)
if used_ports:
    print("These ports are already in use: %s" % used_ports)
else:
    print("None of the required ports are in use")
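
Running the check, plus an optional manual spot check of the control-plane ports on the master (the ss flags below are standard):

ansible-playbook -i hosts.ini check-port.yml
# On the master, none of these ports should be listening before the install
ss -ntl | grep -E ':(6443|10250|10251|10252|2379|2380)\b'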

III. Deploy the Docker container runtime

cat docker/deploy-docker.yml
---
- name: deploy docker
  hosts: k8s
  gather_facts: no
  vars:
    pkg_dir: /yum-pkg
    pkgs:
      - device-mapper-persistent-data
      - lvm2
      - containerd.io-1.2.13
      - docker-ce-19.03.11
      - docker-ce-cli-19.03.11


    # The variable download_host must be set manually.
    # Its value must be one of this playbook's target hosts,
    # written exactly as it appears in the inventory file.
    download_host: "master"
    local_pkg_dir: "{{ playbook_dir }}/{{ download_host }}"

  tasks:
    - name: Test whether -e overrides the variables
      debug:
        msg: "{{ local_pkg_dir }} {{ download_host }}"
      tags:
        - deploy
        - test

    - name: "Install the repo file only on {{ download_host }}"
      when: inventory_hostname == download_host
      get_url:
        url: https://download.docker.com/linux/centos/docker-ce.repo
        dest: /etc/yum.repos.d/docker-ce.repo
      tags:
        - deploy

    - name: Create a directory for the rpm packages
      when: inventory_hostname == download_host
      file:
        path: "{{ pkg_dir }}"
        state: directory
      tags:
        - deploy

    - name: Download the packages
      when: inventory_hostname == download_host
      yum:
        name: "{{ pkgs }}"
        download_only: yes
        download_dir: "{{ pkg_dir }}"
      tags:
        - deploy

    - name: List the files in the download directory "{{ pkg_dir }}"
      when: inventory_hostname == download_host
      shell: ls -1 "{{ pkg_dir }}"
      register: files
      tags:
        - deploy

    - name: Fetch the downloaded packages back to the Ansible control node
      when: inventory_hostname == download_host
      fetch:
        src: "{{ pkg_dir }}/{{ item }}"
        dest: ./
      loop: "{{files.stdout_lines}}"
      tags:
        - deploy

    - name: Copy the rpm packages to the remote nodes
      when: inventory_hostname != download_host
      copy:
        src: "{{ local_pkg_dir }}{{ pkg_dir }}"
        dest: "/"
      tags:
        - deploy

    - name: Install the packages from the local directory
      shell:
        cmd: yum -y localinstall *
        chdir: "{{ pkg_dir }}"
        warn: no
      async: 600
      poll: 0
      register: yum_info
      tags:
        - deploy

    - name: Print the async install job id
      debug: var=yum_info.ansible_job_id
      tags:
        - deploy

    - name: Configure /etc/docker/daemon.json
      copy: src=file/daemon.json dest=/etc/docker/daemon.json
      notify: restart docker
      tags:
        - start
        - update

    - name: Start docker
      systemd:
        name: docker
        enabled: yes
        state: started
      tags:
        - start
  handlers:
    - name: restart docker
      systemd:
        name: docker
        state: restarted
...
  • Configure the docker daemon file /etc/docker/daemon.json
{
   "registry-mirrors": ["https://<your-aliyun-accelerator-id>.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
      "max-size": "100m"
   },
   "storage-driver": "overlay2"
}

  • Deploy (the install step runs asynchronously and may be slow; execute it twice if necessary):
ansible-playbook -i hosts.ini docker/deploy-docker.yml  -t deploy
  • Start the docker service:
ansible-playbook -i hosts.ini docker/deploy-docker.yml  -t start
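
Once docker is running, it is worth confirming that daemon.json took effect (systemd cgroup driver, overlay2 storage driver); a quick ad-hoc check:

ansible k8s -i hosts.ini -m shell -a "docker info 2>/dev/null | grep -iE 'cgroup driver|storage driver'"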

IV. Install the deployment tools: kubeadm, kubelet, kubectl

---
- name: Deploy kubeadm, kubelet and kubectl
  hosts: k8s
  gather_facts: no
  vars:
    pkg_dir: /kubeadm-pkg
    pkg_names: ["kubelet", "kubeadm", "kubectl"]

    # The variable download_host must be set manually.
    # Its value must be one of this playbook's target hosts,
    # written exactly as it appears in the inventory file.
    download_host: "master"
    local_pkg_dir: "{{ playbook_dir }}/{{ download_host }}"

  tasks:
    - name: Test whether -e sets and overrides the variables
      debug:
        msg: "{{ local_pkg_dir }} {{ download_host }}"
      tags:
        - deploy
        - test

    - name: "Install the repo file only on {{ download_host }}"
      when: inventory_hostname == download_host
      copy:
        src: file/kubernetes.repo
        dest: /etc/yum.repos.d/kubernetes.repo
      tags:
        - deploy

    - name: Create a directory for the rpm packages
      when: inventory_hostname == download_host
      file:
        path: "{{ pkg_dir }}"
        state: directory
      tags:
        - deploy

    - name: Download the packages
      when: inventory_hostname == download_host
      yum:
        name: "{{ pkg_names }}"
        download_only: yes
        download_dir: "{{ pkg_dir }}"
      tags:
        - deploy

    - name: List the files in the download directory "{{ pkg_dir }}"
      when: inventory_hostname == download_host
      shell: ls -1 "{{ pkg_dir }}"
      register: files
      tags:
        - deploy

    - name: Fetch the downloaded packages back to the Ansible control node
      when: inventory_hostname == download_host
      fetch:
        src: "{{ pkg_dir }}/{{ item }}"
        dest: ./
      loop: "{{files.stdout_lines}}"
      tags:
        - deploy

    - name: Copy the rpm packages to the remote nodes
      when: inventory_hostname != download_host
      copy:
        src: "{{ local_pkg_dir }}{{ pkg_dir }}"
        dest: "/"
      tags:
        - deploy

    - name: Install the packages from the local directory
      shell:
        cmd: yum -y localinstall *
        chdir: "{{ pkg_dir }}"
        warn: no
      async: 600
      poll: 0
      register: yum_info
      tags:
        - deploy

    - name: Print the async install job id
      debug: var=yum_info.ansible_job_id
      tags:
        - deploy
    - name: Enable and start the kubelet service
      systemd:
        name: kubelet
        enabled: yes
        state: started
      tags:
        - start
...
  • file/kubernetes.repo
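
The contents of file/kubernetes.repo are not shown in the original notes; a commonly used definition (here assuming the Aliyun mirror, in line with the Aliyun registry mirror used for docker above) looks roughly like this:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg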

  • Run the playbook:
ansible-playbook -i hosts.ini deploy-kubeadm.yml

At this point there is no kubelet configuration file yet, so checking the service status will report an error.

  • On the master node, the configuration is generated during cluster initialization; once it exists, the init process starts the kubelet service automatically.

  • On the worker nodes, it is generated when the node is joined to the cluster; once it exists, the kubelet service is started automatically.
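
Until init/join has been run, kubelet typically sits in a restart loop; checking its state on any node looks roughly like this:

systemctl status kubelet            # expect activating (auto-restart) or an error at this stage
journalctl -u kubelet --no-pager | tail -n 20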

V. Join the nodes to the cluster

Run on the master node (read the notes below first):

kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.253.176 --ignore-preflight-errors=Swap
# 10.244.0.0/16 is the pod CIDR expected by the network plugin installed below
# 192.168.253.176 is the address of the master node
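
After a successful kubeadm init, the output also shows how to configure kubectl for the current user on the master; the standard steps are:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config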

(Screenshot: kubeadm init output, including the generated kubeadm join command)

  • Copy the kubeadm join command from the output; it is used later when the worker nodes join.

Following the official documentation, a network plugin also needs to be installed; its URL appears in the output above. Some docker images must be downloaded for it. Install the plugin as described in the official documentation, otherwise the worker nodes will fail when joining the master.

Run on all nodes: docker pulls the image components Kubernetes needs. You can also download the images yourself and upload them, since pulls may time out for network reasons.

Alternatively, rent a server that is able to pull the images, download them there, and then import them on every node (see the sketch below).
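
A sketch of that image transfer, letting kubeadm report the exact image list instead of hard-coding tags:

# On the machine that can reach the registry: list and pull the images for this version
kubeadm config images list --kubernetes-version v1.20.4
kubeadm config images pull --kubernetes-version v1.20.4
# Pack everything into one tarball (the image list is passed to docker save)
docker save -o k8s-v1.20.4-images.tar $(kubeadm config images list --kubernetes-version v1.20.4)
# Copy the tarball to every node, then import it there
docker load -i k8s-v1.20.4-images.tar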

Install the network plugin (flannel):

containers:
- name: kube-flannel
  image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld

  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33   # must match the NIC that carries the master node IP configured above
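
A sketch of preparing the manifest before editing it; the raw GitHub URL below was the commonly used location at the time and should be checked against the current flannel documentation:

mkdir -p ~/flannel && cd ~/flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Confirm which NIC carries the master address before setting --iface
ip -o -4 addr show | grep 192.168.253.176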

Start it by applying the manifest:

# kubectl apply -f ~/flannel/kube-flannel.yml

Confirm that the network is configured successfully; every pod should show READY 1/1:

# kubectl get pods --namespace kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-qgtjx          1/1     Running   2          2d9h
coredns-74ff55c5b-tx2wp          1/1     Running   2          2d9h
etcd-master                      1/1     Running   2          2d9h
kube-apiserver-master            1/1     Running   2          2d9h
kube-controller-manager-master   1/1     Running   2          2d2h
kube-flannel-ds-424cw            1/1     Running   2          2d8h
kube-flannel-ds-5vnx5            1/1     Running   2          2d8h
kube-flannel-ds-6wbz9            1/1     Running   3          2d8h
kube-proxy-qg5cg                 1/1     Running   3          2d8h
kube-proxy-sxbfm                 1/1     Running   2          2d9h
kube-proxy-xfhn7                 1/1     Running   2          2d8h
kube-scheduler-master            1/1     Running   4          2d2h

Run on the worker nodes to join the cluster.

For the details of the kubeadm join command, see the official documentation:
Creating a cluster with kubeadm

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
kubeadm join 192.168.253.176:6443 --token g6q5vb.ent4ydhy1ajftxmk \
    --discovery-token-ca-cert-hash sha256:7ed9be5461ce46a43ad76af314cc7ad4075e057e51e38670eb99a6bb5147e48e
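
If the token from the init output has expired (the default lifetime is 24 hours), a fresh join command can be generated on the master:

kubeadm token create --print-join-command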

Wait a moment, then run kubectl get nodes on the master and check that every node's STATUS is Ready:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   11h   v1.20.4
node1    Ready    <none>                 10h   v1.20.4
node2    Ready    <none>                 10h   v1.20.4

If the nodes do not become Ready, check the kubelet logs or consult the official documentation on Kubernetes errors.

更多推荐