Preparation

Install VirtualBox.

Download the CentOS 7 image (Alibaba Cloud mirror: http://mirrors.aliyun.com/centos/7/isos/x86_64/). The version I downloaded is CentOS-7-x86_64-DVD-1908.iso.

1. Install and Configure the Virtual Machine

Open VirtualBox and click New.

Enter the virtual machine's name, type, and version.

Set the memory to 4 GB.


Choose dynamically allocated storage with a file size of 100 GB.


Settings -> Storage -> + (add the location of the CentOS 7 ISO).

After adding, it looks like this:

Now start the installation: Install CentOS 7 -> choose your language -> under "Software Selection", do not pick "Minimal Install"; I recommend the last option, "Development and Creative Workstation". For the installation destination, accept the default automatic partitioning; disable Kdump; and enable the network so your VM can reach the internet -> set the root password and start the installation -> reboot to finish the configuration.


Also, after the VM is installed and while it is still powered off, go to Settings -> Processor and raise the processor count to 2 as shown; otherwise building the cluster later will report an error about the number of CPUs.
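If you prefer the command line, the same change can be made with VBoxManage; a sketch, assuming your VM is named "k8s-node1":

VBoxManage modifyvm "k8s-node1" --cpus 2  # run on the host while the VM is powered off
VBoxManage showvminfo "k8s-node1" | grep -i cpu  # verify the new CPU count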


Configure terminal access: give the installed VM an IP so it can be reached from Xshell or similar tools. For the details, see my other post on configuring a static IP for a virtual machine; a minimal sketch follows.
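For reference, a minimal sketch of that static-IP setup on CentOS 7, assuming the host-only NIC is enp0s8 (the NIC name and addresses must match your own setup; the IPs below are the ones used later in this post):

# /etc/sysconfig/network-scripts/ifcfg-enp0s8  (assumed NIC name)
TYPE=Ethernet
BOOTPROTO=static
NAME=enp0s8
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.10.9  # this node's static IP
NETMASK=255.255.255.0

# restart networking to apply the change
systemctl restart network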


2. Set Up the VM Environment

With the static IP configured, you can now log in to the VM with Xshell. By default the VM boots into the resource-hungry graphical interface; with three VMs running it will be unbearably slow. Switch the default boot mode with the following command:

# Run the first line only
systemctl set-default multi-user.target  # text (multi-user) mode
systemctl set-default graphical.target  # graphical mode, to switch back
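You can check which target is currently the default; note the change only applies from the next boot (standard systemd commands):

systemctl get-default  # prints the current default target, e.g. multi-user.target
reboot  # the new default takes effect on the next boot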

Replace the stock CentOS 7 yum source with Alibaba Cloud's mirror by running the following commands, which overwrite /etc/yum.repos.d/CentOS-Base.repo:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
yum makecache

Disable the firewall and turn off swap:

systemctl stop firewalld && systemctl disable firewalld

swapoff -a  # turns swap off temporarily; the next line keeps it off after reboots
sed -i '/ swap / s/^/#/' /etc/fstab  # or edit /etc/fstab yourself and comment out the swap line
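Besides the top check below, free gives a quick numeric confirmation that swap is gone:

free -m | grep -i swap  # should print: Swap:  0  0  0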

Run top; if it displays as below, swap has been turned off correctly.


Install Docker

Add the Alibaba Cloud Docker repo and install Docker:

yum install -y yum-utils  # provides yum-config-manager, in case it is not already installed
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache

yum install docker-ce -y
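Note that this installs the newest docker-ce, which kubeadm's preflight later flags as a non-validated version (see the 19.03.1 warning in the init log below). If you would rather install a validated release, a sketch of pinning one (the exact version string is an assumption; take it from the list output):

yum list docker-ce --showduplicates | sort -r  # list all available docker-ce versions
yum install -y docker-ce-18.09.9  # assumed version string, picked from the list above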

You can check the Docker version:

[root@localhost ~]# docker --version
Docker version 18.03.1-ce, build 9ee9f40

Start the Docker service and enable it at boot:

systemctl start docker && systemctl enable docker

Test Docker; if you see output like the figure below, Docker was installed successfully:

docker run hello-world


Install Kubernetes

Configure the Kubernetes yum source; just paste the following:


cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Run the following to install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
# installs the latest by default; to pin a version: yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
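A quick sanity check that the three tools landed (standard version commands):

kubeadm version -o short  # e.g. v1.15.1
kubectl version --client --short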

Configure kubelet's cgroup driver so that it matches Docker's cgroup driver:


docker info | grep -i cgroup

cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf  # if this file is not found,
# then try: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# edit 10-kubeadm.conf
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# add the following line to the opened 10-kubeadm.conf, at the position shown in the figure:
# Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl daemon-reload  # reload after changing the driver so the change takes effect


Enable and start kubelet:

systemctl enable kubelet && systemctl start kubelet

Download the K8S Docker images

If you can reach the external network, this step is simple: just docker pull each required image. The command to list exactly which images your k8s version needs is:

kubeadm config images list
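For v1.15.1 the list should look roughly like the following (tags recalled for that release; treat them as indicative and trust the command's actual output):

k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1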


If you cannot get past the firewall to the external network, you can use images someone else has packaged; below is a Baidu Netdisk link for the 1.15.1 versions. Alternatively, generate the images through Alibaba Cloud's overseas mirror, pull them locally, and retag them, as sketched below; that method will be covered in another post of mine (forthcoming).

链接:https://pan.baidu.com/s/1Pk5B6e2-14yZW11PYMdtbQ 
提取码:7wox 
With that, the Docker image download is complete (the screenshot contains some other images; ignore them).
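A sketch of the mirror-and-retag approach mentioned above, assuming Alibaba Cloud's google_containers mirror carries these tags (the mirror path and tag list are assumptions; align the list with what kubeadm config images list printed):

#!/bin/bash
# pull each image from the Aliyun mirror, then retag it to the k8s.gcr.io name kubeadm expects
MIRROR=registry.aliyuncs.com/google_containers  # assumed mirror path
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull ${MIRROR}/${img}
    docker tag ${MIRROR}/${img} k8s.gcr.io/${img}
    docker rmi ${MIRROR}/${img}  # remove the mirror-named duplicate
done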

Clone the virtual machine

The VM's base environment is now ready. Shut the VM down normally, then clone two identical copies of it to make building the cluster easier later. Right-click the VM and configure the clone as shown.

Once cloned, give these two VMs a static IP each, just like the first one above.

3. Deploy the Cluster

First change the hostnames of the three VMs to k8s-node1, k8s-node2, and k8s-node3:

hostnamectl set-hostname k8s-node1  # use k8s-node2 / k8s-node3 on the other two VMs
bash  # start a new shell so the prompt picks up the new hostname

Then edit the hosts file on each VM and add all three addresses. 192.168.10.9, 192.168.10.10, and 192.168.10.11 are the static IPs I assigned to the three VMs.

vi /etc/hosts  # append the last three lines below, one per VM
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.9 k8s-node1
192.168.10.10 k8s-node2
192.168.10.11 k8s-node3
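A quick check that the names resolve and the nodes can reach each other (run from any of the three):

ping -c 1 k8s-node2  # should resolve via /etc/hosts and get a reply
ping -c 1 k8s-node3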

With all the preparation done, we can now actually create the cluster. I use the official kubeadm tool, which creates a K8S cluster quickly and conveniently; see the official documentation for a full introduction to kubeadm.


On the VM that will serve as the master, run the following command:

kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9

What the options mean:
1. --pod-network-cidr=192.168.0.0/16 declares that the cluster will use the Calico network, so Calico's subnet range has to be specified in advance.
2. --kubernetes-version=v1.15.1 pins the K8S version; it must match the Docker images imported earlier, otherwise kubeadm will reach out to Google to download the latest K8S images.
3. --apiserver-advertise-address is the NIC IP to bind. Be sure to bind the enp0s8 NIC mentioned earlier; otherwise the enp0s3 NIC is used by default.
4. If kubeadm init fails or is forcibly interrupted, run kubeadm reset to reset state before running init again.

If you see the following error, apply the fix it suggests and run init again:


echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

A successful initialization looks like this:

[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node1 ~]#  echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.9]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.003530 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ts9i67.6sn3ylpxri4qimgr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 
[root@k8s-node1 ~]# 

As the init output instructs, run the following three commands to finish the setup:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The init output also prints the join command below; run kubeadm join … on the other two nodes to add them to the cluster.

kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 
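The bootstrap token in that command expires after 24 hours by default; if you join a node later than that, you can have the master print a fresh join command:

kubeadm token create --print-join-command  # run on the master; outputs a ready-to-use kubeadm join line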


Create the network

Without a pod network, checking the pod status shows the DNS component (coredns) stuck in Pending, and the cluster is unusable:

[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-8nftr            0/1     Pending   0          3m28s # stuck
coredns-5c98db65d4-n2zbj            0/1     Pending   0          3m28s # stuck
etcd-k8s-node1                      1/1     Running   0          2m44s
kube-apiserver-k8s-node1            1/1     Running   0          2m51s
kube-controller-manager-k8s-node1   1/1     Running   0          2m41s
kube-proxy-cdvhk                    1/1     Running   0          3m28s
kube-scheduler-k8s-node1            1/1     Running   0          2m35s

Per the official documentation, run the following on the master node to create the network:

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

Note that this fetches the latest calico.yaml, and three image references inside it require external network access. You can download calico.yaml first, change those three image references to versions you have already pulled, and then run kubectl apply -f calico.yaml; otherwise the command above will time out and fail.
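A sketch of that offline-friendly workflow (grep merely locates the image lines; edit them by hand to the tags you already have locally):

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
grep -n "image:" calico.yaml  # shows the three image references to edit
# point those lines at images you have already pulled, then:
kubectl apply -f calico.yaml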

Run the query again and you can see that the whole cluster is now up:

[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-etcd-2hmhv                          1/1     Running   0          60s
calico-kube-controllers-6b6f4f7c64-c7v8p   1/1     Running   0          115s
calico-node-fzzmh                          2/2     Running   2          115s
coredns-5c98db65d4-8nftr                   1/1     Running   0          6m33s
coredns-5c98db65d4-n2zbj                   1/1     Running   0          6m33s
etcd-k8s-node1                             1/1     Running   0          5m49s
kube-apiserver-k8s-node1                   1/1     Running   0          5m56s
kube-controller-manager-k8s-node1          1/1     Running   0          5m46s
kube-proxy-cdvhk                           1/1     Running   0          6m33s
kube-scheduler-k8s-node1                   1/1     Running   0          5m40s
[root@k8s-node1 ~]# 

Use the Master as a worker node
By default K8S does not schedule Pods onto the Master, which leaves the Master's resources wasted. On the Master (i.e. k8s-node1), you can run the following command to let it serve as a worker node as well (with this trick you can also build a single-node K8S cluster without minikube):

[root@k8s-node1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-node1 untainted
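If you later want the Master back on control-plane-only duty, the taint can be restored with the standard kubectl taint syntax:

kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule  # re-add the master taint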


[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   17m     v1.15.1
k8s-node2   Ready    <none>   5m44s   v1.15.1
k8s-node3   Ready    <none>   4m50s   v1.15.1
[root@k8s-node1 ~]# 
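As a final smoke test, you can deploy something small and watch it get scheduled (nginx here is just an arbitrary public image):

kubectl create deployment nginx --image=nginx  # create a test deployment
kubectl get pods -o wide  # the pod should reach Running on one of the nodes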

That is the entire k8s deployment process.

References:

Building a Kubernetes Cluster from Scratch (Part 1: Introduction), a 4-part series

Super-detailed deployment of the latest K8S cluster (k8s 1.5.1, Docker 19.03.1), with basic operations and an introduction to services
