Our current stack is managed and deployed with Docker containers. Kubernetes releases after 1.23 no longer support Docker as a container runtime (the dockershim was removed in favor of containerd), so we chose version 1.23.4-00 for this deployment.

I. Environment Configuration

1. Disable swap

 

swapoff -a

 

vi /etc/fstab

Remove (or comment out) the line containing the swap keyword.
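The fstab edit can also be done non-interactively; a minimal sketch, assuming the swap entry is an ordinary fstab line containing the word swap:

# Comment out any swap entries rather than deleting them
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# Verify swap is off (the Swap row should read 0)
free -h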

2. Configure kernel parameters

 

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

 

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl settings, then update the apt package index and install the packages needed to use the Kubernetes apt repository

 

sudo sysctl --system
sudo apt-get install -y apt-transport-https ca-certificates curl
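Note that modules-load.d only takes effect at the next boot; to load the module immediately and sanity-check the settings (an optional verification step):

# Load the bridge netfilter module now
sudo modprobe br_netfilter
lsmod | grep br_netfilter
# All three values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward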

3. Set the Docker cgroup driver to systemd

 

vi /etc/docker/daemon.json

 

{ "exec-opts": ["native.cgroupdriver=systemd"] }

 

systemctl daemon-reload

 

systemctl restart docker
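To confirm the change took effect:

# Should report: Cgroup Driver: systemd
docker info | grep -i 'cgroup driver'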

II. Install Kubernetes

1. Add the Kubernetes apt signing key

 

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

2. Add the Kubernetes package repository

 

sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

3. Install kubelet, kubeadm, and kubectl at version 1.23.4-00

 

sudo apt-get update

 

sudo apt-get install -y kubelet=1.23.4-00 kubeadm=1.23.4-00 kubectl=1.23.4-00

 

sudo apt-mark hold kubectl kubeadm kubelet

# Worker nodes do not need kubectl; install only kubeadm and kubelet there

 

sudo apt-get install -y kubeadm=1.23.4-00 kubelet=1.23.4-00

 

sudo apt-mark hold kubeadm kubelet

4. Initialize the control plane (master node only)

 

kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.4 \
  --pod-network-cidr=10.10.0.0/16 \
  --apiserver-advertise-address=0.0.0.0

5. Configure kubeconfig

 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
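kubectl should now be able to reach the cluster:

kubectl get nodes
# The master will report NotReady until a pod network add-on (Calico, below) is installed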

6. Join worker nodes to the cluster: run the command below on the master to print the kubeadm join command, then run that command on each worker node.

 
 

kubeadm token create --print-join-command
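The printed command looks roughly like the following (the address, token, and hash here are placeholders, not real values); run it as root on each worker node:

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>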

Install Calico
  1. Create the tigera-operator namespace.

 

# Create the namespace
kubectl create namespace tigera-operator

  2. Install the Tigera Calico operator and custom resource definitions using the Helm chart:

 

# Install Calico
helm install calico projectcalico/tigera-operator --version v3.25.1 --namespace tigera-operator
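This assumes the projectcalico chart repository has already been added; if not, the Calico docs add it with:

helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update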

  3. Confirm that all of the pods are running with the following command.

 

watch kubectl get pods -n calico-system

Configure the Calico network

The official documentation is at:

https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises

Install Calico configuration
 

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml

 

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml -O

  1. After downloading custom-resources.yaml, edit the value of cidr: to match the pod-network-cidr passed to kubeadm init (see the snippet after this list).

  2. If you wish to customize the Calico install, customize the downloaded custom-resources.yaml manifest locally.

  3. Create the manifest in order to install Calico.

 

kubectl create -f custom-resources.yaml
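For reference, the section of custom-resources.yaml that item 1 refers to looks like this (the cidr value below is already set to this guide's pod-network-cidr; the other fields follow the v3.25.1 defaults):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.10.0.0/16   # must match kubeadm init --pod-network-cidr
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()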

Applying this manifest completes the installation.

Kubernetes troubleshooting and common commands

 

# List nodes
kubectl get nodes
# List namespaces
kubectl get ns
# List pods
kubectl get pod
# List pods in a given namespace
kubectl get pod -n <namespace>
# Inspect the containers inside a pod
kubectl describe pod <pod-name> -n <namespace>
# Show cluster info
kubectl cluster-info
# Dump detailed cluster state
kubectl cluster-info dump
# List validating webhooks
kubectl get ValidatingWebhookConfiguration

 

# Error encountered:
helm install budi-release budibase
Error: INSTALLATION FAILED: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found

# Fix: list the validating webhooks
kubectl get ValidatingWebhookConfiguration
# Delete the stale ingress-nginx-admission webhook
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
# Then re-run the helm install

Viewing pod logs

 

# Find the pod name
kubectl get pod --all-namespaces | grep budibase
# Inspect the pod (events and status) by namespace and pod name
kubectl describe pod budibase-ingress-nginx-admission-create-542qm -n budibase
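Note that kubectl describe shows a pod's events and status rather than its log output; for the container logs themselves, kubectl logs is the right verb:

# Print the container logs for the same pod
kubectl logs budibase-ingress-nginx-admission-create-542qm -n budibase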

 

# Install ingress-nginx (NGINX Inc. chart, pulled from the OCI registry)
helm install ingress-nginx oci://ghcr.io/nginxinc/charts/nginx-ingress --version 1.0.2 \
  --set controller.image.repository=myregistry.example.com/nginx-plus-ingress \
  --set controller.nginxplus=true

After starting Budibase, couchdb, minio, and redis were all stuck in the Pending state. Their common trait is that they all need local storage, so that is where we looked for the cause.

First, confirm whether a StorageClass exists, with the command:

 

kubectl get storageclass

It turned out there was none, so the backing storage service had to be installed ourselves.

First, install the NFS server:

 

apt update
# Install the latest nfs-kernel-server
apt install nfs-kernel-server
# Create the shared directory
mkdir -p /data/k8s
# Configure the shared directory and its permissions
vim /etc/exports
# Append the following line at the end of the file
/data/k8s *(rw,sync,no_root_squash,no_subtree_check)
# Restart the services and check the export
service rpcbind restart
service nfs-kernel-server restart
showmount -e
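An NFS server alone does not give Kubernetes a StorageClass; a dynamic provisioner is still needed so the Pending PVCs can bind. A minimal sketch using the nfs-subdir-external-provisioner Helm chart (the server address is a placeholder for this NFS host):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=<nfs-server-ip> \
  --set nfs.path=/data/k8s
# A StorageClass named "nfs-client" should now exist
kubectl get storageclass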
