Setting up a k8s cluster on CentOS 8 / CentOS 7
These are just my personal notes.
Official k8s installation documentation
-
Disable SELinux, swap, and the firewall. This is insecure; skip it if you have your own approach.
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
iptables -F
iptables -X
systemctl stop firewalld
swapoff -a
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
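The `sed` edit of /etc/fstab above is easy to get wrong, so it can be tried on a throwaway copy first. A minimal sketch (the /tmp path and sample line are illustrative; `|` is used as the sed delimiter to avoid escaping the slashes):

```shell
# Write a sample fstab swap entry to a scratch file.
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.demo
# Same substitution as above: prefix the swap entry with '#' so it is ignored at boot.
sed -i 's|^/dev/mapper/centos-swap|#/dev/mapper/centos-swap|' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Once the output shows the line commented out, the same substitution can be run against the real /etc/fstab.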
-
Install Docker (skip if already installed)
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install docker-ce -y
service docker start
# Point Docker at a registry mirror inside China; skip if not needed
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl daemon-reload
service docker restart
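A malformed daemon.json will prevent Docker from starting, so it is worth validating the JSON before restarting the daemon. A minimal sketch, assuming `python3` is available (the /tmp path is illustrative; validate, then copy the file to /etc/docker/daemon.json):

```shell
# Write the intended daemon.json to a scratch file.
cat <<'EOF' > /tmp/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# json.tool exits non-zero on invalid JSON, so a typo is caught here
# instead of by a failed "service docker restart".
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```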
-
Install Kubernetes (run on both master and worker nodes)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
service kubelet start
modprobe br_netfilter
lsmod | grep br_netfilter
# Check that the environment can pull the images
kubeadm config images pull
# Check the kubernetes version
kubeadm version
# List the component image versions for a release, e.g. ( kubeadm config images list --kubernetes-version v1.19.0 )
kubeadm config images list --kubernetes-version <kubeadm_git_version>
# If you are building on a server inside China, the mirror workaround below is required; otherwise ignore it
## Mirror workaround --------------------start
# Pull the images from a mirror inside China instead of k8s.gcr.io (reference: https://blog.csdn.net/sjyu_ustc/article/details/79990858)
# List the required images and versions (these were the versions when I installed; they change over time, so judge for yourself)
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.9-1
k8s.gcr.io/coredns:1.7.0
# Pull the mirrored images (makai554892700 can be replaced with your own account's mirror, and "latest" with the matching version; at the time of writing I could not fetch the newest tags, so adjust the versions as needed)
docker pull makai554892700/kube-apiserver:latest
docker pull makai554892700/kube-controller-manager:latest
docker pull makai554892700/kube-scheduler:latest
docker pull makai554892700/kube-proxy:latest
docker pull makai554892700/etcd:latest
docker pull makai554892700/coredns:latest
docker pull makai554892700/pause:latest
# Retag the downloaded images with the upstream names so the image puller is fooled into finding them locally (adjust the versions as needed)
docker tag makai554892700/kube-apiserver:latest k8s.gcr.io/kube-apiserver:v1.19.0
docker tag makai554892700/kube-controller-manager:latest k8s.gcr.io/kube-controller-manager:v1.19.0
docker tag makai554892700/kube-scheduler:latest k8s.gcr.io/kube-scheduler:v1.19.0
docker tag makai554892700/kube-proxy:latest k8s.gcr.io/kube-proxy:v1.19.0
docker tag makai554892700/etcd:latest k8s.gcr.io/etcd:3.4.9-1
docker tag makai554892700/coredns:latest k8s.gcr.io/coredns:1.7.0
docker tag makai554892700/pause:latest k8s.gcr.io/pause:3.2
## Mirror workaround --------------------end
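The pull-and-retag sequence above can be condensed into a loop. A minimal sketch that only prints the docker commands (set `RUN` to the empty string to actually execute them; the mirror account and version tags are the ones used above and may need adjusting for your release):

```shell
MIRROR=makai554892700   # replace with your own mirror account
RUN=echo                # set to "" to really run docker instead of printing
for pair in \
  "kube-apiserver:v1.19.0" \
  "kube-controller-manager:v1.19.0" \
  "kube-scheduler:v1.19.0" \
  "kube-proxy:v1.19.0" \
  "etcd:3.4.9-1" \
  "coredns:1.7.0" \
  "pause:3.2"; do
  name=${pair%%:*}      # image name, e.g. kube-apiserver
  ver=${pair#*:}        # upstream tag, e.g. v1.19.0
  $RUN docker pull "$MIRROR/$name:latest"
  $RUN docker tag "$MIRROR/$name:latest" "k8s.gcr.io/$name:$ver"
done
```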
-
Run on the master node
# Initialization may hang (especially on hosts on the public internet). If it does, edit the /etc/kubernetes/manifests/etcd.yaml file as below; if not, ignore this step
vim /etc/kubernetes/manifests/etcd.yaml
- --listen-client-urls=https://127.0.0.1:2379
- --listen-peer-urls=https://127.0.0.1:2380
# Initialize kubeadm
kubeadm init --apiserver-advertise-address 0.0.0.0 --pod-network-cidr=10.244.0.0/16
# If a usable image repository is available, the command below can be used instead; otherwise ignore it
# kubeadm init --kubernetes-version=<kubernetes_version> --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address <host_ip> --pod-network-cidr=10.244.0.0/16
# Set up the kubectl config as kubeadm's output instructs
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Get the token and copy it for later
kubeadm token list
# If the listed token was created more than 24 hours ago, recreate it with the command below; otherwise ignore this
kubeadm token create
# Get the sha256 hash of the CA public key and copy it for later
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
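That openssl pipeline just takes the SHA-256 of the CA's DER-encoded public key. It can be tried end-to-end on a throwaway self-signed certificate; a sketch (the /tmp paths and CN are illustrative, and the cluster's real /etc/kubernetes/pki/ca.crt is substituted in practice):

```shell
# Generate a throwaway RSA CA certificate to run the pipeline against.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Same pipeline as above: extract the public key, convert to DER, hash it,
# and strip openssl's "(stdin)= " prefix, leaving 64 hex characters.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"
```

The value is what `kubeadm join` expects after `--discovery-token-ca-cert-hash sha256:`.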
# Install the flannel network plugin (see pitfall references 1 and 2 if you hit problems)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
# Check whether flannel is running normally
kubectl get pod -n kube-system
# View the pod's details (replace kube-flannel-ds-amd64-q8kvb with your actual pod name)
kubectl describe pod kube-flannel-ds-amd64-q8kvb -n kube-system
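To spot unhealthy pods at a glance, the `kubectl get pod` output can be filtered with awk. A sketch run against sample output (the pod names, statuses, and ages are illustrative; pipe the real `kubectl get pod -n kube-system` output through the same awk in practice):

```shell
# Sample of what "kubectl get pod -n kube-system" prints.
sample='NAME                          READY   STATUS             RESTARTS   AGE
coredns-f9fd979d6-abcde       1/1     Running            0          5m
kube-flannel-ds-amd64-q8kvb   0/1     CrashLoopBackOff   3          5m'
# Print only pods whose STATUS column is not "Running", skipping the header row.
printf '%s\n' "$sample" | awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
```

Any pod this prints is a candidate for the `kubectl describe pod` command above.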
# If nothing went wrong, you are done here. If flannel failed to install, continue with the mirror workaround
kubeadm reset
kubeadm init --apiserver-advertise-address 0.0.0.0 --pod-network-cidr=10.244.0.0/16
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
docker pull makai554892700/flannel:latest
docker tag makai554892700/flannel:latest quay.io/coreos/flannel:v0.12.0-amd64
kubectl apply -f kube-flannel.yml
-
Joining nodes (official documentation)
-
Run on the worker node
# Copy the master node's /run/systemd/resolve/resolv.conf file to the same path on the worker node
# Join the master node, e.g. ( kubeadm join 192.168.169.128:6443 --token j0xuqn.u9fge2i8uo7dpxsj --discovery-token-ca-cert-hash sha256:93628f27ce0f5738fe1e1b63b1610c60d82a3d55669a025be841c94d547fdf85 )
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
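The join command can be assembled from the values copied earlier, which helps avoid paste errors on the worker node. A sketch using the example values from above (all three values are placeholders; substitute your real token, hash, and API server address, and drop the leading `echo` to run it):

```shell
TOKEN="j0xuqn.u9fge2i8uo7dpxsj"
CA_HASH="93628f27ce0f5738fe1e1b63b1610c60d82a3d55669a025be841c94d547fdf85"
APISERVER="192.168.169.128:6443"
# Print the command first so it can be inspected before running.
echo kubeadm join "$APISERVER" --token "$TOKEN" \
  --discovery-token-ca-cert-hash "sha256:$CA_HASH"
```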
-
Run on the master node
# List the nodes
kubectl get nodes
-
Removing a node
-
Run on the worker node
# Reset kubeadm state
kubeadm reset
-
Run on the master node, e.g. ( kubectl drain 192.168.169.134 --delete-local-data --force --ignore-daemonsets )
kubectl delete pod kube-proxy-fbp57 -n kube-system
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
-
-
Installing the dashboard UI for k8s
# Download the official manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
# Edit the manifest so the service is exposed as a NodePort
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
# Create the resources
kubectl create -f recommended.yaml
# Check the running service
kubectl get svc -n kubernetes-dashboard
# Start a proxy for outside access
kubectl proxy --address=0.0.0.0 --disable-filter=true
# Check the logs for permission problems
kubectl logs -f -n kubernetes-dashboard
# Fix permissions by binding the service accounts to cluster-admin
kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccount
# Open https://<host_ip>:30000 to reach the dashboard, e.g. ( https://192.168.169.128:30000/ )
# Get the login token
kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token