K8s Offline Deployment Guide

The offline package: you can download everything yourself by following the steps below.

https://download.csdn.net/download/u010952056/86748944

Related reading: "A 10,000-word deep dive into K8s offline deployment for PaaS toB scenarios"

| Item | Language | Offline deployment support |
| --- | --- | --- |
| kops | Golang | Not supported |
| kubespray | Ansible | Supported; you build the install package yourself |
| kubeasz | Ansible | Supported; you build the install package yourself |
| sealos | Golang | Supported; requires a paid membership |
| RKE | Golang | Not supported; Docker must be installed manually |
| sealer | Golang | Supported; derived from sealos |
| kubekey | Golang | Partially supported; only images can be taken offline |

Minimum machine spec: 2 CPU cores and 3 GB RAM (installation fails below this), with more than 130 GB of disk space. Virtual machines are fine for practice.

If the machine has network access, install these basics:

apt-get install -y vim               # vim editor
apt-get install -y net-tools         # provides ifconfig
apt-get install -y openssh-server    # ssh server

Example IP plan for the machines (for reference only):

192.168.1.xx1  master01

192.168.1.xx2  node01

For static IP configuration, see: "Ubuntu Server 22.04 static IP setup" (CSDN blog)

Environment: Ubuntu 22.04, Kubernetes v1.20.10, Docker 19.03

1. Environment Initialization

Apply these settings on every machine.

1.1 Disable swap

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
rm -f /swap.img

Or as a one-liner: swapoff -a && sed -i '/swap/d' /etc/fstab

To make sure the change persists, edit /etc/fstab and comment out the swap.img line:

vim /etc/fstab

# /swap.img    <- comment out this line if it exists
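To confirm swap is fully off (a quick check, not part of the original steps):

swapon --show    # no output means swap is disabled
free -h          # the Swap row should read 0B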

1.2 Disable the firewall

systemctl disable ufw && systemctl stop ufw

# disable SELinux permanently (only applies if SELinux is present; Ubuntu uses AppArmor by default)
sed -i 's/enforcing/disabled/' /etc/selinux/config

1.3 Enable IP forwarding

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward=1
# pass bridged IPv4 traffic to iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF

sysctl --system    # apply the settings
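Note: the two bridge-nf settings only apply when the br_netfilter kernel module is loaded. A minimal sketch to load it now and at every boot (an addition to the original steps):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
lsmod | grep br_netfilter    # verify the module is loaded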

1.4 Time synchronization

With network access, sync the time via NTP:

apt-get install -y ntpdate
ntpdate time.windows.com

Without network access, set the time manually with date (all cluster machines must have the same time):

date -s 'YYYY-MM-DD HH:MM:SS'

After synchronizing, apply the certificate workaround: set the clock far into the future (this guide jumps ahead 95 years), install the cluster, then change the time back afterwards (see section 3.8).

date -s 'YYYY-MM-DD'     # changing only the date is enough

Set the timezone:

timedatectl set-timezone Asia/Shanghai

(With an interactive tool such as tzselect, you would instead enter the numbers for Asia and then Chongqing when prompted.)

Switch to the 24-hour time format:

vim /etc/default/locale

Add one line:

LC_TIME=en_DK.UTF-8

To make system-log timestamps reflect the change immediately:

systemctl restart rsyslog
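To verify the settings (the LC_TIME change only applies to new login sessions):

timedatectl    # Time zone should show Asia/Shanghai
date           # should print a 24-hour timestamp after re-login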

1.5 Enable SSH root login

passwd root    # set the root password

vim /etc/ssh/sshd_config

PermitRootLogin yes    # add this line

sudo systemctl restart sshd.service
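Equivalently, a non-interactive edit (a hypothetical alternative to editing the file by hand; the sed pattern assumes a stock sshd_config):

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd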

1.6 Set hostnames and update hosts

# run the matching command on each machine; continue the numbering for however many masters and nodes you have
hostnamectl set-hostname master01
hostnamectl set-hostname master0N    # further master nodes
hostnamectl set-hostname node01
hostnamectl set-hostname node0N      # further worker nodes

cat >> /etc/hosts << EOF
<node IP> <hostname>    # one line per machine; use your real IPs and hostnames
EOF

Example (substitute your own IPs):

cat >> /etc/hosts << EOF

172.16.106.38 master01

172.16.106.39 node01

172.16.106.50 node02

172.16.106.51 master02

EOF

After completing the settings above, reboot the machine and verify (combined commands follow this list):

a. Firewall disabled: systemctl status ufw

b. Hostname changed: hostnamectl

c. Hosts entries present: cat /etc/hosts

d. Time correct: date

e. IP address: ip addr

f. Root SSH login works: ssh root@<ip>, then enter the password
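The same checks as copy-pasteable commands (a convenience, not in the original):

systemctl is-active ufw    # expect "inactive"
hostnamectl --static       # expect master01 / node01 / ...
cat /etc/hosts
date
ip -br addr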

Next, find a machine with network access to download all the materials. Work as root for convenience.

2. Docker Installation

The downloads here require network access, but the installation itself works offline (when installing offline, skip the download steps). Install on every machine.

2.1 Download the Docker packages

Version: 19.03

Package index: Index of linux/ubuntu/dists/bionic/pool/stable/amd64/

Download with wget:

wget -P /home/deploy/deb/docker/ https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce_19.03.13~3-0~ubuntu-bionic_amd64.deb

wget -P /home/deploy/deb/docker/ https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/containerd.io_1.3.7-1_amd64.deb

wget -P /home/deploy/deb/docker/ https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce-cli_19.03.13~3-0~ubuntu-bionic_amd64.deb

The downloaded files land in /home/deploy/deb/docker:

containerd.io_1.3.7-1_amd64.deb

docker-ce_19.03.13~3-0~ubuntu-bionic_amd64.deb

docker-ce-cli_19.03.13~3-0~ubuntu-bionic_amd64.deb

2.2 Install Docker

cd /home/deploy/deb/docker 

dpkg -i ./*.deb

After installation the default cgroup driver is cgroupfs; it must be changed to systemd. Edit the Docker config file:

vi /etc/docker/daemon.json

Add the following content:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Reload and restart Docker:

systemctl daemon-reload && sudo systemctl restart docker
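Verify that the driver actually switched (a quick check, not in the original):

docker info 2>/dev/null | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd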

2.3 Download the K8s packages

If curl is not installed, install it first:

apt-get update    # optional

apt-get install curl -y

Add the GPG key:

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

Add the apt source:

cat > /etc/apt/sources.list.d/kubernetes.list << ERIC

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

ERIC

apt-get update

List the installable versions:

apt-cache madison kubeadm

Pick one from the long list of versions (this guide uses 1.20.10-00).

Pin the chosen version and prepare the download directory:

VERSION=1.20.10-00

mkdir -p /home/deploy/deb/k8s/partial

apt-get autoclean

The two commands above prevent apt's "No such file or directory" error: the download cache needs an empty partial/ subdirectory.

Download the packages locally without installing them:

apt-get install -y --download-only -o dir::cache::archives=/home/deploy/deb/k8s kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION

Download time depends on your network speed.

Afterwards /home/deploy/deb/k8s should contain these files:

conntrack_1%3a1.4.5-2_amd64.deb

cri-tools_1.25.0-00_amd64.deb

ebtables_2.0.11-3build1_amd64.deb

kubeadm_1.20.10-00_amd64.deb

kubectl_1.20.10-00_amd64.deb

kubelet_1.20.10-00_amd64.deb

kubernetes-cni_1.1.1-00_amd64.deb

socat_1.7.3.3-2_amd64.deb
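Optionally record checksums so the files can be verified again after copying them to the offline machines (an added precaution, not in the original):

cd /home/deploy/deb/k8s
sha256sum *.deb > SHA256SUMS
sha256sum -c SHA256SUMS    # rerun on the target machine to verify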

2.4 Install K8s

cd /home/deploy/deb/k8s

dpkg -i ./*.deb

Enable kubelet at boot:

systemctl enable kubelet && sudo systemctl start kubelet
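If the machine may later regain network access, pin the versions so a routine apt upgrade cannot break the cluster (an optional addition; apt-mark is a standard apt tool):

apt-mark hold kubelet kubeadm kubectl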

2.5 List the images needed for initialization

Disconnect the machine from the network before running the command below; otherwise it lists images for the latest version.

kubeadm config images list --kubernetes-version=v1.20.10

It prints the 7 required images:

k8s.gcr.io/kube-apiserver:v1.20.10

k8s.gcr.io/kube-controller-manager:v1.20.10

k8s.gcr.io/kube-scheduler:v1.20.10

k8s.gcr.io/kube-proxy:v1.20.10

k8s.gcr.io/pause:3.2

k8s.gcr.io/etcd:3.4.13-0

k8s.gcr.io/coredns:1.7.0

Reconnect the machine to the network.

2.6 Pull the images through a domestic mirror

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.10

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.10

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.10

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.10

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

Re-tag the pulled images to the names kubeadm expects:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.10              k8s.gcr.io/kube-proxy:v1.20.10             

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.10 k8s.gcr.io/kube-controller-manager:v1.20.10

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.10          k8s.gcr.io/kube-apiserver:v1.20.10         

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.10          k8s.gcr.io/kube-scheduler:v1.20.10         

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0                    k8s.gcr.io/etcd:3.4.13-0                   

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0                    k8s.gcr.io/coredns:1.7.0                   

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2

Verify the 7 re-tagged images:

docker images | grep k8s.

k8s.gcr.io/kube-proxy v1.20.10

k8s.gcr.io/kube-apiserver v1.20.10

k8s.gcr.io/kube-controller-manager v1.20.10

k8s.gcr.io/kube-scheduler v1.20.10

k8s.gcr.io/etcd 3.4.13-0

k8s.gcr.io/coredns 1.7.0

k8s.gcr.io/pause 3.2
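Optionally drop the now-redundant Aliyun tags so only the k8s.gcr.io names remain (the underlying layers are shared, so this only tidies the listing; a sketch that removes every aliyuncs-tagged image):

docker images | grep aliyuncs | awk '{print $1":"$2}' | xargs -r docker rmi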

The flannel network plugin needs one more image, so download it as well. (Why not Calico? It is much harder to install offline; try it yourself if you like: https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart)

docker pull quay.io/coreos/flannel:v0.14.0

That makes 8 images in total:

k8s.gcr.io/kube-proxy:v1.20.10

k8s.gcr.io/kube-apiserver:v1.20.10

k8s.gcr.io/kube-controller-manager:v1.20.10

k8s.gcr.io/kube-scheduler:v1.20.10

k8s.gcr.io/etcd:3.4.13-0

k8s.gcr.io/coredns:1.7.0

k8s.gcr.io/pause:3.2

quay.io/coreos/flannel:v0.14.0

Export the images to a tar archive:

cd /home/deploy/deb    # note: the docker save command below is a single line

docker save -o k8simages.tar k8s.gcr.io/kube-proxy:v1.20.10 k8s.gcr.io/kube-apiserver:v1.20.10 k8s.gcr.io/kube-controller-manager:v1.20.10 k8s.gcr.io/kube-scheduler:v1.20.10 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 k8s.gcr.io/pause:3.2 quay.io/coreos/flannel:v0.14.0
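Equivalently, the image list can be built programmatically instead of typing all 8 names (a hypothetical variant of the one-liner above):

docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'k8s.gcr.io|flannel' | xargs docker save -o k8simages.tar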

2.7 Get the network plugin manifest

Source: "Self-study k8s: downloading the flannel.yml config file" (iXiAo9, cnblogs)

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Copy the text above and save it as kube-flannel.yml.

All downloads are now complete. The full file set:

a. docker/

containerd.io_1.3.7-1_amd64.deb

docker-ce_19.03.13~3-0~ubuntu-bionic_amd64.deb

docker-ce-cli_19.03.13~3-0~ubuntu-bionic_amd64.deb

b. k8s/

conntrack_1%3a1.4.5-2_amd64.deb

cri-tools_1.25.0-00_amd64.deb

ebtables_2.0.11-3build1_amd64.deb

kubeadm_1.20.10-00_amd64.deb

kubectl_1.20.10-00_amd64.deb

kubelet_1.20.10-00_amd64.deb

kubernetes-cni_1.1.1-00_amd64.deb

socat_1.7.3.3-2_amd64.deb

c. k8simages.tar

Copy the /home/deploy/deb directory to every master and worker node, for example with scp (see the sketch below).
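Using scp with the hostnames from the earlier /etc/hosts example (a sketch; adjust hosts and paths to your environment):

for host in node01 node02 master02; do scp -r /home/deploy/deb root@$host:/home/deploy/; done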

3. Offline Installation

Work on your own cluster nodes; perform the next three steps on every machine.

3.1 Install Docker offline

Follow section 2.2.

3.2 Install K8s offline

Follow section 2.4.

3.3 Load the image archive

docker load < k8simages.tar
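Confirm that all 8 images came back (a quick check):

docker images | grep -E 'k8s.gcr.io|flannel'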

3.4 Initialize the master node

Before initializing, run:

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

On newer Ubuntu 22.04 installs, also remove AppArmor: apt-get remove apparmor

kubeadm init --kubernetes-version=v1.20.10 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

On success, kubeadm prints output like the following; the highlighted commands (the kubeconfig setup, and the kubeadm join line for the workers) must be run afterwards:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.245.133:6443 --token 9c11at.6ebo4g8lfcnqujwr \

    --discovery-token-ca-cert-hash sha256:382e46547d0242164825d6799895f8a263e896f409ab417a03ae1464b3e7f7fa

Use the token and hash from your own output.
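If the join token has expired by the time you add workers (tokens last 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command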

3.5 Install the Flannel network plugin on the master

kubectl apply -f kube-flannel.yml

Run this only on the `master` node; the plugin is deployed as a DaemonSet, so it lands on every node automatically.
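To watch the DaemonSet roll out to every node (optional; the names come from the manifest above):

kubectl -n kube-system get ds kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide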

3.6 Check control-plane pod status

kubectl get pod -n kube-system

3.7 Join the worker nodes

On each worker, run the kubeadm join command printed during initialization (shown above).

Then, on the master, label the worker:

kubectl label node <node_name> node-role.kubernetes.io/worker=worker

Check node status on the master:

kubectl get nodes

Note:

kubectl runs as kubernetes-admin, which requires the admin.conf file. To run kubectl on a worker node, copy $HOME/.kube/config from the master to the worker, then on the worker run:

chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=$HOME/.kube/config

3.8 Set the time back

Check certificate expiry:

kubeadm alpha certs check-expiration

Back up the data before generating new certificates:

cp -rp /etc/kubernetes /etc/kubernetes.bak

cp -rp /var/lib/etcd /var/lib/etcd.bak

Generate new certificates:

kubeadm alpha certs renew all

Then set the previously adjusted clock back to the real current time (all machines must agree):

date -s 'YYYY-MM-DD'     # changing only the date is enough

After changing it, check the certificate expiry once more:

kubeadm alpha certs check-expiration

The basic installation is now complete; what remains is using and testing the cluster.
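As a final smoke test (a hypothetical example, not part of the original; in a fully offline environment the nginx image must first be loaded with docker load on each node):

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide    # the pod should reach Running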

4. References

4.1 Certificate expiry

Kubernetes kubeadm certificate expiry and renewal (大漠知秋, CSDN blog)

Compiling kubeadm for Kubernetes v1.25 to extend certificate validity to 100 years (sysin, cnblogs)

4.2 Dashboard

https://kuboard.cn/install/maintain/certs.html

Network plugin download:

Self-study k8s: downloading the flannel.yml config file (iXiAo9, cnblogs)

4.3 Installation references

Ubuntu 20.04 offline deployment of k8s 1.20.10 (Zhihu)

Installing k8s on Ubuntu 20.04 (Professorboy, CSDN blog)

Offline K8s deployment in an intranet Ubuntu environment (MadLife)

https://baijiahao.baidu.com/s?id=1724382781225457375&wfr=spider&for=pc

Calico network plugin (optional):

Quickstart for Calico on Kubernetes

Installing k8s (1.18.4 & 1.22.0) on Ubuntu 20.04 (还在下雨吗, CSDN blog)

4.4 What did not work

Installing a cluster with sealos:

https://huaweicloud.csdn.net/63311fb4d3efff3090b52d55.html?spm=1001.2101.3001.6650.1&utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7ECTRLIST%7Eactivity-1-119522721-blog-124778122.pcrelevantt0_20220926_downloadratepraise_v1&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7ECTRLIST%7Eactivity-1-119522721-blog-124778122.pcrelevantt0_20220926_downloadratepraise_v1&utm_relevant_index=2
