OS: CentOS 7.8
Kernel: 3.10.0

I. Install Docker on both the master and the worker nodes

1. Remove any existing Docker packages

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

2. Update the package index

sudo yum update -y

3. Add the official Docker yum repository

sudo yum install -y yum-utils  # provides yum-config-manager

sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

4. Install Docker

sudo yum install docker-ce docker-ce-cli containerd.io
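If you need a specific Docker release rather than the latest, the repository supports version pinning. A minimal sketch (the version string below is only an example; pick one from the list output):

yum list docker-ce --showduplicates | sort -r  # list the docker-ce versions available in the repo

sudo yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io  # install a pinned version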

5. Check the Docker version

docker --version

6. Enable Docker at boot

systemctl enable --now docker

7. Set the Docker cgroup driver to systemd, the same driver Kubernetes uses

# change the Docker cgroup driver: native.cgroupdriver=systemd
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker  # restart Docker to apply the new configuration
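To confirm the new driver took effect, check docker info:

docker info 2>/dev/null | grep -i 'cgroup driver'  # should print: Cgroup Driver: systemd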

II. Install kubelet, kubeadm, and kubectl

kubelet, kubeadm, and kubectl must be installed on both the master and the worker nodes.

The upstream yum repository for these packages is packages.cloud.google.com, which is not reachable from mainland China, so we use the Alibaba Cloud mirror instead.

1. Add the repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Disable SELinux

setenforce 0

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

3. Install kubelet, kubeadm, and kubectl

yum install -y kubelet-1.18.15 kubeadm-1.18.15 kubectl-1.18.15 --disableexcludes=kubernetes

systemctl enable --now kubelet  # enable kubelet at boot
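Until kubeadm init (or kubeadm join) runs, the kubelet has no configuration and restarts in a loop; that is expected at this point. To inspect it:

systemctl status kubelet  # shows activating/auto-restart before the cluster exists

journalctl -xeu kubelet --no-pager | tail -n 20  # recent kubelet log lines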

4. On CentOS 7, bridged traffic must also be routed through iptables:

yum install -y bridge-utils.x86_64

modprobe br_netfilter  # load the br_netfilter module; verify with lsmod
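Note that modprobe only loads the module for the current boot. A small sketch to have it loaded automatically on every boot, using systemd's modules-load mechanism:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF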

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # reload all sysctl configuration files

systemctl disable --now firewalld  # disable the firewall

5. Kubernetes requires swap to be disabled

swapoff -a && sysctl -w vm.swappiness=0  # turn off swap
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # comment out the swap entry so it stays off after reboot
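To confirm swap is really off:

free -h | grep -i swap  # the Swap line should show 0B everywhere

swapon --show  # prints nothing when no swap device is active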

III. Preparation: pull the cluster images

1. Master node

# Master node:
kubeadm config images pull  # pull the images the cluster needs; requires access to k8s.gcr.io

# --- if k8s.gcr.io is unreachable, try the following ---
kubeadm config images list  # list the required images

k8s.gcr.io/kube-apiserver:v1.18.15
k8s.gcr.io/kube-controller-manager:v1.18.15
k8s.gcr.io/kube-scheduler:v1.18.15
k8s.gcr.io/kube-proxy:v1.18.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

# (the exact mirror names may differ; adjust to what is actually available)
# pull the equivalents from a domestic mirror first (search hub.docker.com)
docker pull kubeimage/kube-proxy-amd64:v1.18.15 
docker pull kubeimage/kube-scheduler-amd64:v1.18.15 
docker pull kubeimage/kube-controller-manager-amd64:v1.18.15 
docker pull kubeimage/kube-apiserver-amd64:v1.18.15 
docker pull kubeimage/etcd-amd64:3.4.3-0 
docker pull kubeimage/pause-amd64:3.2
docker pull coredns/coredns:1.6.7  # not available under mirrorgooglecontainers

# re-tag the images to the names kubeadm expects
docker tag kubeimage/kube-apiserver-amd64:v1.18.15 k8s.gcr.io/kube-apiserver:v1.18.15
docker tag kubeimage/kube-controller-manager-amd64:v1.18.15 k8s.gcr.io/kube-controller-manager:v1.18.15
docker tag kubeimage/kube-scheduler-amd64:v1.18.15 k8s.gcr.io/kube-scheduler:v1.18.15
docker tag kubeimage/kube-proxy-amd64:v1.18.15 k8s.gcr.io/kube-proxy:v1.18.15
docker tag kubeimage/pause-amd64:3.2 k8s.gcr.io/pause:3.2
docker tag kubeimage/etcd-amd64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag coredns/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

# With all required images pre-pulled, kubeadm init will not try (and fail) to reach the Google registry.

# remove the mirror-tagged images
docker rmi kubeimage/kube-proxy-amd64:v1.18.15 
docker rmi kubeimage/kube-scheduler-amd64:v1.18.15 
docker rmi kubeimage/kube-controller-manager-amd64:v1.18.15 
docker rmi kubeimage/kube-apiserver-amd64:v1.18.15 
docker rmi kubeimage/etcd-amd64:3.4.3-0 
docker rmi kubeimage/pause-amd64:3.2
docker rmi coredns/coredns:1.6.7 
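The pull/tag/rmi sequence above is mechanical, so it can be scripted. A sketch, assuming each image (except coredns) has a kubeimage/<name>-amd64 mirror; adjust the mapping if a mirror is missing:

#!/bin/bash
# pull from the mirror, re-tag to k8s.gcr.io, then drop the mirror tag
images=(
  kube-apiserver:v1.18.15
  kube-controller-manager:v1.18.15
  kube-scheduler:v1.18.15
  kube-proxy:v1.18.15
  pause:3.2
  etcd:3.4.3-0
)
for img in "${images[@]}"; do
  name=${img%%:*}  # image name without the tag
  tag=${img##*:}   # tag without the name
  docker pull "kubeimage/${name}-amd64:${tag}"
  docker tag  "kubeimage/${name}-amd64:${tag}" "k8s.gcr.io/${img}"
  docker rmi  "kubeimage/${name}-amd64:${tag}"
done

# coredns lives in its own repository, not under kubeimage
docker pull coredns/coredns:1.6.7
docker tag  coredns/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker rmi  coredns/coredns:1.6.7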

2. Worker nodes

# Worker node:
# pull the equivalents from a domestic mirror first (the v1.14.1 tags below are from an older setup; match your cluster version)
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1


# re-tag the images to the names kubeadm expects
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

# remove the mirror-tagged images
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1

IV. Create the cluster with kubeadm

1. Initialize the master

# Initialize the master (requires at least 2 CPU cores); most failures and errors show up at this step
kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16 --kubernetes-version 1.18.15
# --apiserver-advertise-address  the address the API server advertises to the other nodes
# --pod-network-cidr             the pod network subnet; flannel requires this exact CIDR
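If kubeadm init fails partway through (image pulls, port conflicts, leftover state), a common recovery sequence is to wipe the partial state and rerun the same init command:

kubeadm reset -f  # clear any partial control-plane state

systemctl restart kubelet

kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16 --kubernetes-version 1.18.15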

2. Initialization output

# sample output (captured from a v1.14.1 run; the details will differ for v1.18.15):
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503375 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# The token and hash below are unique to each cluster; save your own copy.
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16 

3. Set up kubectl access for a regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Apply the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
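Once the manifest is applied, the flannel DaemonSet should bring up one pod per node. A quick check (app=flannel is the label used by the coreos manifest):

kubectl -n kube-system get pods -l app=flannel -o wide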

V. Join the worker nodes to the cluster

If you saved the join command printed at the end of kubeadm init on the master, run it directly:

# node1:
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16  
# node2:
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16 

If you did not save it, run the following on the master:

kubeadm token create --print-join-command
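Tokens created by kubeadm init expire after 24 hours by default, and kubeadm token create issues a fresh one. To inspect the existing tokens and their TTLs:

kubeadm token list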

Then use the printed join command to add new nodes to the cluster.

VI. Verify the cluster

The following command lists all the nodes in the cluster:

kubectl get nodes
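Nodes only report Ready after the network addon is running. To check the system pods as well:

kubectl get nodes -o wide

kubectl get pods -n kube-system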

VII. Teardown

To undo what kubeadm set up, first drain and delete the node. Run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all kubeadm-installed state:

kubeadm reset