Deploying a Kubernetes Cluster from Binaries

1. Installation Requirements

Before starting, the machines used for the Kubernetes cluster must meet the following conditions:

  • One or more machines running CentOS 7

  • Hardware: 2 GB of RAM or more, 2 or more CPUs, 30 GB of disk or more

  • Network connectivity between all machines in the cluster

  • Internet access to pull images; if a server cannot reach the internet, download the images in advance and import them on each node

  • Swap disabled

2. Prepare the Environment
  Role               IP               Components
  m1 (master node)   192.168.174.141  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
  n1 (worker node)   192.168.174.142  kubelet, kube-proxy, docker, etcd
3. Operating System Initialization
  • Set the hostnames

    1. m1 node

      hostnamectl set-hostname m1
      
    2. n1 node

      hostnamectl set-hostname n1
      
  • The following steps are identical on m1 and n1

    1. Disable the firewall

      systemctl stop firewalld
      systemctl disable firewalld
      
    2. Disable SELinux

      setenforce 0 # temporary
      sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanent
      
    3. Disable swap

      swapoff -a # temporary; swap is disabled mainly for performance reasons
      sed -ri 's/.*swap.*/#&/' /etc/fstab
      free # check that swap is now off
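For a scripted sanity check (my own sketch, not part of the original steps): `swapon --show` prints nothing at all once no swap device is active, which makes the result easy to test.

```shell
# Report whether any swap device is still active.
# `swapon --show` prints nothing when swap is fully off.
check_swap() {
  if [ -z "$(swapon --show 2>/dev/null)" ]; then
    echo "swap is off"
  else
    echo "swap is still active"
  fi
}
check_swap
```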
      

    4. Synchronize the system time (against time.windows.com)

      yum install ntpdate -y
      ntpdate time.windows.com
      
    5. Pass bridged IPv4 traffic to iptables chains

      cat > /etc/sysctl.d/k8s.conf << EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      
      sysctl --system
      
    6. Map hostnames to IP addresses

      vim /etc/hosts
      
      192.168.174.141 m1
      192.168.174.142 n1
      
      
4. Deploy the etcd Cluster (required on both m1 and n1)
  • Prepare the cfssl certificate tooling (run on the master)

    cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl. Any one server will do; here we use the master node. (If wget is too slow, download the files on another machine, for example on Windows through a proxy, and copy them to the CentOS host; copied files usually lack the execute bit, so add it as shown below.)

    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    chmod +x /usr/bin/cfssl*
    
  • Create a working directory

    mkdir -p ~/TLS/{etcd,k8s}
    cd ~/TLS/etcd
    
  • Create a self-signed CA

    cat > ca-config.json << EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > ca-csr.json << EOF
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    
  • Generate the CA certificate

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    ls *pem
    
  • Sign an etcd HTTPS certificate with the self-signed CA. Create the certificate request file (adjust the master and node IP addresses as needed):

    cat > server-csr.json << EOF
    {
        "CN": "etcd",
        "hosts": [
        "192.168.174.141",
        "192.168.174.142"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    EOF
    
  • Generate the server certificate

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    ls server*pem
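To double-check that both node IPs actually ended up in the certificate's Subject Alternative Names, a small helper can print them (my own sketch; `cert_sans` is a hypothetical name, and it assumes `openssl` is installed):

```shell
# Print the Subject Alternative Name entries of a PEM certificate read from
# stdin; in `openssl x509 -text` output the SAN header line is followed by an
# indented line listing the entries.
cert_sans() {
  openssl x509 -noout -text | awk '
    /Subject Alternative Name/ { getline; gsub(/^[ \t]+/, ""); print }'
}
# After generating the cert:
#   cert_sans < server.pem   # should list 192.168.174.141 and 192.168.174.142
```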
    
  • Download the etcd binaries from GitHub (use a proxy if needed) and upload them to the CentOS host; download address:

    https://github.com/etcd-io/etcd/releases
    
  • Create the working directory and unpack the binaries

    mkdir /opt/etcd/{bin,cfg,ssl} -p
    tar zxvf etcd-v3.4.18-linux-amd64.tar.gz
    mv etcd-v3.4.18-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
    cp /opt/etcd/bin/etcdctl /usr/bin
    
  • Create etcd.conf (adjust the master IP address)

    cat > /opt/etcd/cfg/etcd.conf << EOF
    ETCD_OPTS="--name=etcd-1 \\
    --data-dir=/var/lib/etcd/default.etcd \\
    --listen-peer-urls=https://192.168.174.141:2380 \\
    --listen-client-urls=https://192.168.174.141:2379,http://127.0.0.1:2379 \\
    --advertise-client-urls=https://192.168.174.141:2379 \\
    --initial-advertise-peer-urls=https://192.168.174.141:2380 \\
    --initial-cluster=etcd-1=https://192.168.174.141:2380,etcd-2=https://192.168.174.142:2380 \\
    --initial-cluster-token=etcd-cluster \\
    --initial-cluster-state=new \\
    --cert-file=/opt/etcd/ssl/server.pem \\
    --key-file=/opt/etcd/ssl/server-key.pem \\
    --peer-cert-file=/opt/etcd/ssl/server.pem \\
    --peer-key-file=/opt/etcd/ssl/server-key.pem \\
    --trusted-ca-file=/opt/etcd/ssl/ca.pem \\
    --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
    --logger=zap"
    EOF
    
  • Manage etcd with systemd

    cat > /usr/lib/systemd/system/etcd.service << EOF
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    [Service]
    Type=notify
    EnvironmentFile=/opt/etcd/cfg/etcd.conf
    ExecStart=/opt/etcd/bin/etcd \$ETCD_OPTS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Copy the certificates generated earlier into the path referenced by the config file

    cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
    
  • Copy all of the files generated above to node n1

    scp -r /opt/etcd/ root@192.168.174.142:/opt/
    scp /usr/lib/systemd/system/etcd.service root@192.168.174.142:/usr/lib/systemd/system/
    
  • On n1, edit etcd.conf and change the node name (etcd-2) and the IP addresses to this server's

    vi /opt/etcd/cfg/etcd.conf
    


  • Finally, start etcd on both n1 and m1 and enable it at boot (start them close together; the first instance waits for its peer before finishing startup)

    systemctl daemon-reload
    systemctl start etcd
    systemctl enable etcd
    systemctl status etcd
    


  • On either node, write a test key

    etcdctl put test hello
    
  • Read it back on the other node; if the value comes back, the cluster is working

    etcdctl get test
    
  • Check the cluster status

    etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.174.141:2379,https://192.168.174.142:2379" endpoint status --write-out=table
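The IS LEADER column of that table shows which member currently leads the cluster. If you want that endpoint in a script, a throwaway filter (my own sketch; the field positions match the table layout produced by the command above) can extract it:

```shell
# Read `etcdctl endpoint status --write-out=table` output on stdin and print
# the endpoint(s) whose IS LEADER column (6th "|"-separated field) is true.
etcd_leader() {
  awk -F'|' '$6 ~ /true/ { gsub(/ /, "", $2); print $2 }'
}
# Usage:
#   etcdctl ... endpoint status --write-out=table | etcd_leader
```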
    


5. Install Docker

Download: https://download.docker.com/linux/static/stable/x86_64/docker-20.10.3.tgz

  • Unpack the binaries

    tar zxvf docker-20.10.3.tgz 
    mv docker/* /usr/bin
    
  • Manage Docker with systemd

    cat > /usr/lib/systemd/system/docker.service << EOF
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target firewalld.service
    Wants=network-online.target
    [Service]
    Type=notify
    ExecStart=/usr/bin/dockerd
    ExecReload=/bin/kill -s HUP \$MAINPID
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    TimeoutStartSec=0
    Delegate=yes
    KillMode=process
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Configure the Aliyun registry mirror

    mkdir /etc/docker
    cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
    }
    EOF
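dockerd refuses to start if daemon.json is malformed, so it is worth validating the file before (re)starting Docker. A minimal check (my own sketch; `json_ok` is a hypothetical name and it assumes python3 is available, which is not the case on a stock CentOS 7 install):

```shell
# Exit non-zero (printing the parse error) if the given file is not valid JSON.
json_ok() {
  python3 -m json.tool "$1" > /dev/null
}
# Usage:
#   json_ok /etc/docker/daemon.json && systemctl restart docker
```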
    
  • Start Docker and enable it at boot

    systemctl daemon-reload
    systemctl start docker
    systemctl enable docker
    systemctl status docker
    


  • Verify that Docker installed successfully

    docker -v
    


6. Deploy the Master Node

Begin deploying kube-apiserver

  • Generate the kube-apiserver certificates

    cd ~/TLS/k8s
    
    cat > ca-config.json << EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > ca-csr.json << EOF
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
  • Generate the CA certificate

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    ls *pem
    
  • Sign the kube-apiserver HTTPS certificate with the self-signed CA; create the certificate request file

    cat > server-csr.json << EOF
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",
          "127.0.0.1",
          "192.168.174.141",
          "192.168.174.142",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
  • Generate the server certificate

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    ls server*pem
    
  • Download the Kubernetes server binaries from GitHub and upload them to the CentOS host

    Address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183


  • Unpack the binaries

    mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
    tar zxvf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes/server/bin
    cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
    cp kubectl /usr/bin/
    
  • Configure kube-apiserver

    cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
    KUBE_APISERVER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --etcd-servers=https://192.168.174.141:2379,https://192.168.174.142:2379 \\
    --bind-address=192.168.174.141 \\
    --secure-port=6443 \\
    --advertise-address=192.168.174.141 \\
    --allow-privileged=true \\
    --service-cluster-ip-range=10.0.0.0/24 \\
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
    --authorization-mode=RBAC,Node \\
    --enable-bootstrap-token-auth=true \\
    --token-auth-file=/opt/kubernetes/cfg/token.csv \\
    --service-node-port-range=30000-32767 \\
    --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
    --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --etcd-cafile=/opt/etcd/ssl/ca.pem \\
    --etcd-certfile=/opt/etcd/ssl/server.pem \\
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
    --audit-log-maxage=30 \\
    --audit-log-maxbackup=3 \\
    --audit-log-maxsize=100 \\
    --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
    EOF
    
  • Copy the certificates generated earlier into the path referenced by the config file

    cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
    
  • Create the token file

    cat > /opt/kubernetes/cfg/token.csv << EOF
    c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
    EOF
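The token above is a fixed sample value; for a real deployment, generate your own random 32-character hex token and use it both here and later in bootstrap.kubeconfig. A sketch using only coreutils:

```shell
# Produce a random 32-hex-character bootstrap token plus the matching csv line.
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\""
```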
    
  • Manage the apiserver with systemd

    cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
    ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Start kube-apiserver and enable it at boot

    systemctl daemon-reload
    systemctl start kube-apiserver
    systemctl enable kube-apiserver
    systemctl status kube-apiserver 
    


kube-apiserver deployment complete

Begin deploying kube-controller-manager

  • Authorize the kubelet-bootstrap user to request certificates

    kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
    
  • Configure kube-controller-manager

    cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --leader-elect=true \\
    --master=127.0.0.1:8080 \\
    --bind-address=127.0.0.1 \\
    --allocate-node-cidrs=true \\
    --cluster-cidr=10.244.0.0/16 \\
    --service-cluster-ip-range=10.0.0.0/24 \\
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
    --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --experimental-cluster-signing-duration=87600h0m0s"
    EOF
    
  • Manage controller-manager with systemd

    cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
    ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Start kube-controller-manager and enable it at boot

    systemctl daemon-reload
    systemctl start kube-controller-manager
    systemctl enable kube-controller-manager
    systemctl status kube-controller-manager
    


kube-controller-manager deployment complete

Begin deploying kube-scheduler

  • Configure kube-scheduler

    cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
    KUBE_SCHEDULER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --leader-elect \\
    --master=127.0.0.1:8080 \\
    --bind-address=127.0.0.1"
    EOF
    
  • Manage kube-scheduler with systemd

    cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
    ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Start kube-scheduler and enable it at boot

    systemctl daemon-reload
    systemctl start kube-scheduler
    systemctl enable kube-scheduler
    systemctl status kube-scheduler
    


kube-scheduler deployment complete

  • Check the cluster status

    kubectl get cs
    


7. Deploy the Worker Node

Begin deploying kubelet

  • Unpack the binaries

    mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
    tar zxvf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes/server/bin
    cp kubelet kube-proxy /opt/kubernetes/bin
    cp kubectl /usr/bin/
    
  • Configure the kubelet

    cat > /opt/kubernetes/cfg/kubelet.conf << EOF
    KUBELET_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --hostname-override=n1 \\
    --network-plugin=cni \\
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
    --config=/opt/kubernetes/cfg/kubelet-config.yml \\
    --cert-dir=/opt/kubernetes/ssl \\
    --pod-infra-container-image=lizhenliang/pause-amd64:3.0"
    EOF
    
    cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.0.0.2
    clusterDomain: cluster.local 
    failSwapOn: false
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /opt/kubernetes/ssl/ca.pem 
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    maxOpenFiles: 1000000
    maxPods: 110
    EOF
    
  • Copy the needed files (the certificates) from the master to the worker node

    scp -r /opt/kubernetes/ssl root@192.168.174.142:/opt/kubernetes
    
  • Generate bootstrap.kubeconfig directly from the command line

    KUBE_APISERVER="https://192.168.174.141:6443" # apiserver IP:PORT
    TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match the value in token.csv
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-credentials "kubelet-bootstrap" \
      --token=${TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user="kubelet-bootstrap" \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
  • Move it into the config directory

    mv bootstrap.kubeconfig /opt/kubernetes/cfg
    
  • Manage the kubelet with systemd

    cat > /usr/lib/systemd/system/kubelet.service << EOF
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
    ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Start the kubelet and enable it at boot

    systemctl daemon-reload
    systemctl start kubelet
    systemctl enable kubelet
    systemctl status kubelet
    


kubelet deployment complete

Begin deploying kube-proxy

  • Approve the kubelet certificate request and join the node to the cluster (on the master)

    kubectl get csr
    
    kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A
    
    kubectl get node
    

    Note: the node will show NotReady because the network plugin has not been deployed yet
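With more workers there will be several pending CSRs, and approving each one by name gets tedious. A small filter (my own sketch; `pending_csrs` is a hypothetical name) picks the Pending requests out of `kubectl get csr` so they can be piped into approval:

```shell
# Read `kubectl get csr` output on stdin and print the NAME of every request
# whose CONDITION column (the last field) is Pending.
pending_csrs() {
  awk 'NR > 1 && $NF == "Pending" { print $1 }'
}
# On the master:
#   kubectl get csr | pending_csrs | xargs -r kubectl certificate approve
```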

  • Configure kube-proxy

    cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
    KUBE_PROXY_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --config=/opt/kubernetes/cfg/kube-proxy-config.yml"
    EOF
    
    cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    metricsBindAddress: 0.0.0.0:10249
    clientConnection:
      kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
    hostnameOverride: n1
    clusterCIDR: 10.0.0.0/24
    EOF
    
  • Generate the certificate request file (on the master)

    cd ~/TLS/k8s
    
    # create the certificate request file
    cat > kube-proxy-csr.json << EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    
  • Generate the certificate (on the master)

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    
  • Send it to the n1 node (on the master)

    scp -r /root/TLS/k8s root@192.168.174.142:/opt/TLS/
    
  • Generate the kubeconfig file directly from the command line (on the worker node)

    cd /opt/TLS/k8s # the directory the certs were copied into in the previous step
    KUBE_APISERVER="https://192.168.174.141:6443"
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-credentials kube-proxy \
      --client-certificate=./kube-proxy.pem \
      --client-key=./kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    
  • Manage kube-proxy with systemd

    cat > /usr/lib/systemd/system/kube-proxy.service << EOF
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
    ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    
  • Start kube-proxy and enable it at boot

    systemctl daemon-reload
    systemctl start kube-proxy
    systemctl enable kube-proxy
    systemctl status kube-proxy
    


kube-proxy deployment complete

8. Deploy the CNI Network
  • First prepare the CNI plugin binaries; download: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

  • Upload the archive to the CentOS host, then unpack it into the default working directory (on the worker node)

    mkdir -p /opt/cni/bin
    tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
    
  • Deploy the CNI network (on the master); the kube-flannel.yml manifest used here is shared at:

    Link: https://pan.baidu.com/s/1SVnH0DjMMD4gyUh2NishPg
    Code: 67di

    kubectl apply -f kube-flannel.yml
    


  • Deploy nginx as a test (on the master)

    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get pods,svc
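The NodePort assigned to the nginx service is picked from the 30000-32767 range, so it varies between clusters. A helper (my own sketch; `svc_nodeport` is a hypothetical name) pulls it out of the `kubectl get svc` output, whose PORT(S) column looks like 80:33210/TCP:

```shell
# Read `kubectl get svc` output on stdin and print the NodePort of the service
# named in $1; the PORT(S) column has the form <port>:<nodePort>/<proto>.
svc_nodeport() {
  awk -v name="$1" '$1 == name { split($5, p, "[:/]"); print p[2] }'
}
# On the master:
#   PORT=$(kubectl get svc | svc_nodeport nginx)
#   curl "http://192.168.174.142:${PORT}"
```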
    


  • If the page is reachable at the worker node's IP on port 33210 (the NodePort assigned here), the cluster is up and working
