Because of version differences and restricted network access to the default image registries, this post records the detailed deployment process for Kubernetes v1.20.0.

Environment preparation

  1. Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
  2. Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config 
setenforce 0
  3. Disable swap:
swapoff -a  # disable temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab  # disable permanently
  4. Set the hostname
hostnamectl set-hostname <hostname>
  5. Add hostname-to-IP mappings (remember to set the hostnames first):
cat /etc/hosts
192.168.116.129 master
192.168.116.130 note1
192.168.116.131 note2

This must be configured on every machine.
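For example, the entries can be appended on each machine with a heredoc; this is just a convenience sketch reusing the same IPs and hostnames as above:

cat >> /etc/hosts << EOF
192.168.116.129 master
192.168.116.130 note1
192.168.116.131 note2
EOF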

  6. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
 
sysctl --system
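If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is most likely not loaded yet; loading it first and re-applying the settings (an optional extra step, not part of the original list) usually resolves this:

modprobe br_netfilter
lsmod | grep br_netfilter   # confirm the module is loaded
sysctl --system             # re-apply the settings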
  7. Install Docker
    a. Add the Docker yum repository
yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

b. Install Docker

yum -y install docker-ce-18.06.1.ce-3.el7

c. Enable Docker on boot and start it
systemctl enable docker && systemctl start docker

d. Change the Docker cgroup driver to systemd

cat > /etc/docker/daemon.json << EOF
{
 "exec-opts":["native.cgroupdriver=systemd"]
}
EOF

systemctl restart docker
systemctl status docker

e. Check the Docker version

docker --version
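As an optional check that the change from step d took effect, the active cgroup driver can be read from docker info (it should report systemd):

docker info 2>/dev/null | grep -i 'cgroup driver'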
  8. Install kubeadm, kubelet, and kubectl
    a. Add the Aliyun Kubernetes YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

b. Install kubeadm, kubelet, and kubectl
Releases are frequent, so pin the version explicitly:

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

c. Enable kubelet on boot

systemctl enable kubelet
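Optionally confirm on every machine that the pinned 1.20.0 packages were installed (a quick check, not in the original steps):

kubeadm version -o short         # should print v1.20.0
kubelet --version
kubectl version --client --short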

Deploy Kubernetes

  1. Deploy the Kubernetes master
    a. Initialize with kubeadm (set --apiserver-advertise-address to the master node's own IP)
kubeadm init \
--apiserver-advertise-address=192.168.100.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

Or, using a config file:

mkdir /home/conf/k8s -p

cat > /home/conf/k8s/kubeadm-k8s.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
controllerManager:
    extraArgs:
        horizontal-pod-autoscaler-use-rest-clients: "true"
        horizontal-pod-autoscaler-sync-period: "10s"
        node-monitor-grace-period: "10s"
apiServer:
    extraArgs:
        runtime-config: "api/all=true"
kubernetesVersion: "v1.20.0"
EOF

kubeadm init --config /home/conf/k8s/kubeadm-k8s.yaml
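If image pulls are slow or fail during init (the network issues mentioned at the top), the control-plane images can be pre-pulled from the Aliyun mirror before re-running kubeadm init; a sketch using the same config file and flags as above:

kubeadm config images pull --config /home/conf/k8s/kubeadm-k8s.yaml
# or, matching the command-line form:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.0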
  2. Output on successful initialization
To start using your cluster, you need to run the following as a regular user:
Run these commands:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:
If you did not run the commands above, every terminal session will need:
  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
Use this command to join new worker nodes:
kubeadm join 192.168.100.10:6443 --token x6vg4b.iy09si7o7mrtlpe9 \
    --discovery-token-ca-cert-hash sha256:71c7bdd46b66bce00258f708f148daa9590426cce5aebd8a10f6144cddf16d38
  3. Install a pod network add-on
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=1.20.0"
# generic form:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  4. Check the running status
kubectl get pods -n kube-system

All pods should be in the Running state.
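If some pods stay in Pending or ContainerCreating while the network add-on starts, it can help to watch them and inspect a stuck pod (the pod name below is a placeholder):

kubectl get pods -n kube-system -w
kubectl describe pod <pod-name> -n kube-system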

  5. Deploy the worker nodes
    Run the join command on each worker node:
kubeadm join 192.168.100.10:6443 --token x6vg4b.iy09si7o7mrtlpe9 \
    --discovery-token-ca-cert-hash sha256:71c7bdd46b66bce00258f708f148daa9590426cce5aebd8a10f6144cddf16d38

If the token has expired:

kubeadm token generate  # generate a new token
kubeadm token create {token} --print-join-command --ttl=0  # print the join command for that token
# e.g.: kubeadm token create oqv894.0ogmmqkhimyw9fye --print-join-command --ttl=0
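A shorter equivalent is to let kubeadm print a full join command with a fresh token in one step; if only the CA cert hash is needed, it can be recomputed from the cluster CA (the openssl pipeline below follows the upstream kubeadm documentation and is included here as an extra):

kubeadm token create --print-join-command
# recompute the --discovery-token-ca-cert-hash value:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'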
  6. On the master node, check the node status
kubectl get nodes

The nodes joined successfully; the installation is complete!

Deploy kubernetes-dashboard

Kubernetes v1.20.0 requires dashboard version v2.1.0.
The corresponding deployment manifest can be obtained from GitHub: https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
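For example, the v2.1.0 manifest can be fetched directly (assuming the usual raw.githubusercontent.com layout for that tag) and edited locally before applying:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml -O recommended.yaml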

  1. After downloading it, edit the file to expose an external port:
# Service changes to expose the port:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
# add this
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # exposed NodePort
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

# ClusterRole changes (otherwise the dashboard may not have permission to read cluster information):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  - verbs: ["*"]
    nonResourceURLs: ["*"]
  2. Apply the manifest: kubectl apply -f xxx.yaml
    The default namespace is kubernetes-dashboard.
  3. Check the startup status: kubectl get pods,svc -n kubernetes-dashboard
# Get the token of the dashboard's default service account:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard | awk '{print $1}')
  4. Create a dedicated admin account:
    Create a service account and bind it to the built-in cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# Get the login token
kubectl describe secrets -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard | awk '/dashboard-admin/{print $1}')
  5. Delete the account:
kubectl get secret,sa,role,rolebinding,clusterrolebinding,services,deployments --namespace=kubernetes-dashboard | grep dashboard-admin
# secret/dashboard-admin-token-hkhn8        kubernetes.io/service-account-token   3      37m
# serviceaccount/dashboard-admin        1         37m
kubectl delete secret dashboard-admin-token-hkhn8 --namespace=kubernetes-dashboard
kubectl delete serviceaccount dashboard-admin --namespace=kubernetes-dashboard
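The clusterrolebinding created in step 4 is cluster-scoped and is not removed by the commands above; delete it as well:

kubectl delete clusterrolebinding dashboard-admin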
  6. Logging in with a username and password
    See the separate article: Logging in to the Kubernetes Dashboard with a username and password.