Environment:

Host assignments:

u-k8s-master   192.168.100.100
u-k8s-node-01  192.168.100.101
u-nfs          192.168.100.199

Steps

1. Post-install system initialization
# Set the timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Set the root password to 123456
echo 'root:123456' | chpasswd
# Allow root login over SSH
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
systemctl restart sshd
# Rename the NIC to eth0 and disable IPv6
sed -i 's#GRUB_CMDLINE_LINUX="#GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 ipv6.disable=1#g' /etc/default/grub
update-grub
# Reboot
reboot
# After the reboot, delete the 'ubuntu' user created during installation
userdel ubuntu && rm -fr /home/ubuntu
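The GRUB sed above is the easiest line to get wrong, so here is a sketch of what it does, applied to a sample line in a scratch file (the temp path is illustrative; the real command edits /etc/default/grub):

```shell
# Scratch copy of Ubuntu's default (empty) GRUB_CMDLINE_LINUX line
echo 'GRUB_CMDLINE_LINUX=""' > /tmp/grub.demo
# Same sed as above: splice the kernel parameters in right after the opening quote
sed -i 's#GRUB_CMDLINE_LINUX="#GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 ipv6.disable=1#g' /tmp/grub.demo
cat /tmp/grub.demo
# GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 ipv6.disable=1"
```

Note that the replacement has no trailing space, so if your GRUB_CMDLINE_LINUX already has content, add a space after `ipv6.disable=1` in the replacement.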
2. System configuration changes & essential tool installation
# The DEBIAN_FRONTEND=noninteractive prefix stops apt from prompting about which services to restart
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
DEBIAN_FRONTEND=noninteractive apt install open-vm-tools vim lrzsz nfs-common -y
# Disable swap and enable the settings required for traffic forwarding
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
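The fstab edit above uses sed's `&` (re-insert the whole matched line) to comment the swap entry out rather than delete it; a sketch on a scratch file (illustrative path; the real command edits /etc/fstab):

```shell
# Scratch fstab with a root filesystem line and one swap entry
printf '%s\n' '/dev/sda2 / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > /tmp/fstab.demo
# Same sed as above: any line mentioning swap gets a '#' prefixed
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
# /dev/sda2 / ext4 defaults 0 1
# #/swap.img none swap sw 0 0
```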
3. Downloading the required software

Below are download links for fairly recent (April 2024) container-related files. A complete bundle is also available on Baidu Netdisk: https://pan.baidu.com/s/1TzYf0uD7VYSCg8D52Om5JQ?pwd=kube (access code: kube). Ping me on QQ if the link goes dead and I'll post a fresh one.

1. https://github.com/opencontainers/runc/releases
Download runc.amd64
2. https://github.com/containerd/containerd/releases
Download containerd
3. https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Download containerd.service
4. https://github.com/containernetworking/plugins/releases
Download cni-plugins-linux-amd64-v1.4.1.tgz
5. # config.toml: you can generate a default one and edit it by hand; for convenience I have uploaded an already-modified copy
6. https://github.com/kubernetes-sigs/cri-tools/releases
Download crictl-v1.28.0-linux-amd64.tar.gz
7. https://github.com/flannel-io/flannel/releases
Download kube-flannel.yml                 # see note 1
# Also: my kube-flannel.yml was modified to use images mirrored inside China for faster pulls; feel free to download the original from upstream instead.

Assume all of these files have been placed under /root/kubernetes.

Begin the installation
install -m 755 /root/kubernetes/runc.amd64 /usr/local/sbin/runc
tar Cxzvf /usr/local /root/kubernetes/containerd-1.6.31-linux-amd64.tar.gz
mv /root/kubernetes/containerd.service /usr/lib/systemd/system/containerd.service
chmod 644 /usr/lib/systemd/system/containerd.service   # unit files should not be executable
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin /root/kubernetes/cni-plugins-linux-amd64-v1.4.1.tgz
mkdir -p /etc/containerd
mv /root/kubernetes/config.toml /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd --now
tar zxvf /root/kubernetes/crictl-v1.28.0-linux-amd64.tar.gz -C /usr/local/bin

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false
EOF
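With containerd running and /etc/crictl.yaml in place, the runtime can be sanity-checked. A quick sketch (run on the node itself; output depends on the host, so none is shown):

```shell
systemctl is-active containerd   # expect "active"
crictl version                   # client version plus containerd's runtime version
crictl info                      # runtime status as JSON; confirms the socket in /etc/crictl.yaml works
```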
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install kubeadm=1.28.2-00 kubectl=1.28.2-00 kubelet=1.28.2-00 -y
systemctl enable kubelet
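It is also worth pinning these three packages so a later apt-get upgrade cannot bump the cluster version unexpectedly (a common extra step, not part of the original write-up):

```shell
# Hold the packages at 1.28.2; release with "apt-mark unhold" when upgrading deliberately
apt-mark hold kubelet kubeadm kubectl
```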

At this point you'll notice one file in the bundle hasn't been used yet: kubeadm. That one is a kubeadm I rebuilt from source with the certificate lifetime extended to ten years, so certificates don't have to be rotated every year.

mv -f /root/kubernetes/kubeadm /usr/bin/kubeadm

At this point all the required tools are in place; rebooting the host is recommended.

Cluster initialization

Initialize the cluster with the following command:

kubeadm init \
 --kubernetes-version=1.28.2 \
 --apiserver-cert-extra-sans 0.0.0.0 \
 --image-repository registry.aliyuncs.com/google_containers \
 --pod-network-cidr 172.16.0.0/16 \
 --service-cidr 172.17.0.0/16

Note 1: the --pod-network-cidr value (172.16.0.0/16) must match the Network field around line 91 of kube-flannel.yml.
When output roughly like the following appears on screen, copy it and run it; kubectl can then query cluster resources:
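To make that concrete: flannel's net-conf.json (embedded in the kube-flannel.yml ConfigMap) ships with 10.244.0.0/16 by default; after editing, that section should look like the sketch below, with Network equal to the --pod-network-cidr above (the scratch path is for illustration only):

```shell
# Sketch of the edited net-conf.json block from kube-flannel.yml
cat > /tmp/net-conf.demo.json <<'EOF'
{
  "Network": "172.16.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
EOF
# The Network value must equal the --pod-network-cidr passed to kubeadm init
grep '"Network"' /tmp/net-conf.demo.json
```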

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node

At this point k8s-master shows NotReady because no network plugin has been installed yet; install one with:

kubectl apply -f kube-flannel.yml

Once all the pods are up, kubectl get node will show the master as Ready.
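Pod startup can be watched with a sketch like the following (the kube-flannel namespace assumes a recent upstream kube-flannel.yml):

```shell
kubectl get pods -A                 # everything should eventually reach Running
kubectl -n kube-flannel get pods    # one flannel pod per node
```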

Installing worker nodes

Apart from the final step, node installation is identical: instead of kubeadm init, run the join command printed at the end of the master's init output, which looks like this:

kubeadm join 192.168.100.100:6443 --token abcdef.4wh0nv4v0oredvkg \
	--discovery-token-ca-cert-hash sha256:9b3f8e99693e2e07f0a40858fbd2231e0f5728430016d81dfa0ce92ad93ee1ad
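If the printed join command has been lost, or the token (valid for 24 hours by default) has expired, a fresh one can be generated on the master:

```shell
# Prints a complete, ready-to-run "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
```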
Installing NFS

On the remaining machine, 192.168.100.199 (after completing the same base initialization), run the following to set up NFS:

apt-get install -y nfs-kernel-server
mkdir -p /data/nfs
chmod 777 /data/nfs/
cat <<EOF | sudo tee /etc/exports
/data/nfs 192.168.100.0/24(rw,sync,no_subtree_check)
EOF
exportfs -a
systemctl enable nfs-kernel-server
reboot
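Once the NFS server is back up, the export can be verified from any host with nfs-common installed, e.g. the master (a quick sketch):

```shell
showmount -e 192.168.100.199   # should list /data/nfs exported to 192.168.100.0/24
```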

Below are a few YAML manifests.
nfs.yaml:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: nfs
    pod-security.kubernetes.io/enforce: privileged
  name: nfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: nfs-provisioner # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-chengdu.aliyuncs.com/kube_cn/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: 192.168.100.199 # change to your NFS server's IP
            - name: NFS_PATH
              value: /data/nfs # change to your NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.199 # change to your NFS server's IP
            path: /data/nfs # change to your NFS export path

Once that's ready, run kubectl apply -f nfs.yaml and verify the pod is healthy:

root@k8s-master:~#  kubectl -n nfs-provisioner get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6794d866b5-bp5pr   1/1     Running   0          44m

After that, the following can be applied (edited 2024-09-12 to add the missing kind field):
test-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

It runs like this:

root@k8s-master:~/kubernetes/nfs-subdir-external-provisioner/deploy# kubectl create -f test-claim.yaml
persistentvolumeclaim/test-claim configured
root@k8s-master:~/kubernetes/nfs-subdir-external-provisioner/deploy# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-a49cb8eb-7c6a-455d-8d7a-eb2376b23f2a   1Mi        RWX            nfs-client     50m

Now look at the export directory on the NFS server; a new subdirectory has appeared:

root@nfs:/data/nfs# pwd
/data/nfs
root@nfs:/data/nfs# ll
total 12
drwxrwxrwx 3 root   root    4096 Apr  9 18:13 ./
drwxr-xr-x 3 root   root    4096 Apr  9 17:36 ../
drwxrwxrwx 2 nobody nogroup 4096 Apr  9 18:14 default/

which confirms the NFS mount works. You can now run

kubectl delete -f test-claim.yaml

to delete the test PVC, and then mark the StorageClass as the default:

kubectl patch storageclasses.storage.k8s.io nfs-client -p \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
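With the default-class annotation set, new PVCs no longer need an explicit storageClassName. A minimal sketch (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim              # hypothetical name
spec:
  # no storageClassName: the default class (nfs-client) is picked automatically
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```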

Next, install KubeSphere:

wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1-patch.0/cluster-configuration.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1-patch.0/kubesphere-installer.yaml

Both files need a few edits:

# cluster-configuration.yaml, line 15, change to:
local_registry: "registry.cn-beijing.aliyuncs.com"
# cluster-configuration.yaml, lines 135-136, uncomment and change to:
    prometheus:
      replicas: 1
# kubesphere-installer.yaml, line 295, change to:
image: registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1-patch.0

Then apply them in order:

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Then just wait for the installation to finish.
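Progress can be followed from the installer's log; a sketch (ks-installer runs as a Deployment in the kubesphere-system namespace):

```shell
kubectl -n kubesphere-system get pods                      # wait for ks-installer to be Running
kubectl -n kubesphere-system logs -f deploy/ks-installer   # follow installation progress
```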
