Easily set up a k8s cluster with Vagrant and VirtualBox (scripts included)
Environment preparation
This walkthrough sets up a k8s cluster on a Windows machine using Vagrant and VirtualBox.
If you are not familiar with Vagrant and VirtualBox, see this article first: 使用VirtualBox和Vagrant搭建Docker环境 (Setting up a Docker environment with VirtualBox and Vagrant).
You also need the vagrant-vbguest plugin: vagrant plugin install vagrant-vbguest --plugin-version 0.21.0
If you have no VPN (i.e. no direct access to k8s.gcr.io), prepare the following images in advance:
k8s.gcr.io/kube-apiserver v1.23.5 3fc1d62d6587 37 hours ago 135MB
k8s.gcr.io/kube-proxy v1.23.5 3c53fa8541f9 37 hours ago 112MB
k8s.gcr.io/kube-scheduler v1.23.5 884d49d6d8c9 37 hours ago 53.5MB
k8s.gcr.io/kube-controller-manager v1.23.5 b0c9e5e4dbb1 37 hours ago 125MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
ghcr.io/weaveworks/launcher/weave-npc 2.8.1 7f92d556d4ff 13 months ago 39.3MB
ghcr.io/weaveworks/launcher/weave-kube 2.8.1 df29c0a4002c 13 months ago 89MB
I have uploaded these images to Docker Hub:
docker pull itnoobzzy/k8s:weaveworks_launcher_weave-npc_2.8.1
docker pull itnoobzzy/k8s:pause_3.6
docker pull itnoobzzy/k8s:coredns_coredns_v1.8.6
docker pull itnoobzzy/k8s:etcd_3.5.1-0
docker pull itnoobzzy/k8s:kube-scheduler_v1.23.5
docker pull itnoobzzy/k8s:kube-controller-manager_v1.23.5
docker pull itnoobzzy/k8s:kube-proxy_v1.23.5
docker pull itnoobzzy/k8s:kube-apiserver_v1.23.5
After pulling the images above on the master node, retag them with the official names (the worker nodes need the kube-proxy, pause, and weave images prepared the same way):
docker tag itnoobzzy/k8s:weaveworks_launcher_weave-npc_2.8.1 ghcr.io/weaveworks/launcher/weave-npc:2.8.1
docker tag itnoobzzy/k8s:pause_3.6 k8s.gcr.io/pause:3.6
docker tag itnoobzzy/k8s:coredns_coredns_v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
docker tag itnoobzzy/k8s:etcd_3.5.1-0 k8s.gcr.io/etcd:3.5.1-0
docker tag itnoobzzy/k8s:kube-scheduler_v1.23.5 k8s.gcr.io/kube-scheduler:v1.23.5
docker tag itnoobzzy/k8s:kube-controller-manager_v1.23.5 k8s.gcr.io/kube-controller-manager:v1.23.5
docker tag itnoobzzy/k8s:kube-proxy_v1.23.5 k8s.gcr.io/kube-proxy:v1.23.5
docker tag itnoobzzy/k8s:kube-apiserver_v1.23.5 k8s.gcr.io/kube-apiserver:v1.23.5
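If you prefer, the pulls and retags can be scripted in one loop. A minimal sketch that simply mirrors the commands above:

#!/bin/sh
# map: mirror tag on Docker Hub => official image name (same pairs as the commands above)
images="
weaveworks_launcher_weave-npc_2.8.1=ghcr.io/weaveworks/launcher/weave-npc:2.8.1
pause_3.6=k8s.gcr.io/pause:3.6
coredns_coredns_v1.8.6=k8s.gcr.io/coredns/coredns:v1.8.6
etcd_3.5.1-0=k8s.gcr.io/etcd:3.5.1-0
kube-scheduler_v1.23.5=k8s.gcr.io/kube-scheduler:v1.23.5
kube-controller-manager_v1.23.5=k8s.gcr.io/kube-controller-manager:v1.23.5
kube-proxy_v1.23.5=k8s.gcr.io/kube-proxy:v1.23.5
kube-apiserver_v1.23.5=k8s.gcr.io/kube-apiserver:v1.23.5
"
for pair in $images; do
  src="itnoobzzy/k8s:${pair%%=*}"   # mirror tag
  dst="${pair#*=}"                  # official name
  docker pull "$src" && docker tag "$src" "$dst"
done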
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_version ">= 1.6.0"

boxes = [
  {
    :name => "k8s-master",
    :eth1 => "192.168.1.105",
    :mem => "2048",
    :cpu => "2"
  },
  {
    :name => "k8s-node1",
    :eth1 => "192.168.1.106",
    :mem => "2048",
    :cpu => "1"
  },
  {
    :name => "k8s-node2",
    :eth1 => "192.168.1.107",
    :mem => "2048",
    :cpu => "1"
  }
]

Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end
      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end
      config.vm.network :public_network, ip: opts[:eth1]
    end
  end
end
Put this Vagrantfile in an empty folder and create the VMs with vagrant. Once the VMs are up, run the script further below on each of them to install Docker and the k8s services.
First check that all three node VMs are running properly:
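For reference, the standard Vagrant commands, run from the folder holding the Vagrantfile:

vagrant up                # create and boot all three VMs
vagrant status            # each VM should report "running (virtualbox)"
vagrant ssh k8s-master    # log in to a node to run the setup script below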
Script to install Docker and the k8s services
#!/bin/sh
# install some tools
sudo yum update -y
sudo yum install -y gcc kernel-devel
sudo yum install -y vim telnet bind-utils wget
# install docker
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
sudo mkdir -p /etc/docker
# configure docker: systemd cgroup driver (recommended for kubeadm), log rotation, overlay2 storage
sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "data-root": "/data/docker"
}
EOF'
# create the docker group if it doesn't exist and add the current user to it
if ! getent group docker > /dev/null; then
    sudo groupadd docker
else
    echo "docker user group already exists"
fi
sudo gpasswd -a $USER docker
# note: the group membership only takes effect after logging out and back in
rm -f get-docker.sh
# enable password auth as a backup in case the ssh key doesn't work; by default username=vagrant, password=vagrant
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo systemctl restart sshd
sudo bash -c 'cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF'
# another domestic (China) mirror option: http://ljchen.net/2018/10/23/%E5%9F%BA%E4%BA%8E%E9%98%BF%E9%87%8C%E4%BA%91%E9%95%9C%E5%83%8F%E7%AB%99%E5%AE%89%E8%A3%85kubernetes/
# put SELinux into permissive mode (setenforce only lasts until reboot)
sudo setenforce 0
# install kubeadm, kubectl, and kubelet
sudo yum install -y kubelet kubeadm kubectl
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF'
sudo sysctl --system
sudo systemctl stop firewalld
sudo systemctl disable firewalld
# disable swap (the kubelet will not start with swap enabled); comment out the swap line in /etc/fstab to keep this across reboots
sudo swapoff -a
sudo systemctl enable docker.service
sudo systemctl enable kubelet.service
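One way to run this on every node (a sketch; setup.sh is my name for the script above, saved next to the Vagrantfile so it shows up under the default /vagrant synced folder inside each VM):

# run the setup script on all three nodes over vagrant ssh
for node in k8s-master k8s-node1 k8s-node2; do
  vagrant ssh "$node" -c "bash /vagrant/setup.sh"
done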
Check that kubeadm, kubelet, and kubectl are installed on all three nodes, and that docker is running:
[vagrant@k8s-master ~]$ which kubeadm
/usr/bin/kubeadm
[vagrant@k8s-master ~]$ which kubelet
/usr/bin/kubelet
[vagrant@k8s-master ~]$ which kubectl
/usr/bin/kubectl
[vagrant@k8s-node1 ~]$ which kubeadm
/usr/bin/kubeadm
[vagrant@k8s-node1 ~]$ which kubelet
/usr/bin/kubelet
[vagrant@k8s-node1 ~]$ which kubectl
/usr/bin/kubectl
[vagrant@k8s-node2 ~]$ which kubeadm
/usr/bin/kubeadm
[vagrant@k8s-node2 ~]$ which kubelet
/usr/bin/kubelet
[vagrant@k8s-node2 ~]$ which kubectl
/usr/bin/kubectl
[vagrant@k8s-master ~]$ sudo docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-63.git94f4240.el7.centos.x86_64
Go version: go1.9.4
Git commit: 94f4240/1.13.1
Built: Fri May 18 15:44:33 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-63.git94f4240.el7.centos.x86_64
Go version: go1.9.4
Git commit: 94f4240/1.13.1
Built: Fri May 18 15:44:33 2018
OS/Arch: linux/amd64
Experimental: false
Configure the k8s nodes
kubeadm init on the master node
- Pull the images first:
kubeadm config images pull
If you have a VPN, you can run the pull command above directly; note that it pulls the latest release by default, so the version in the init command below must be changed to match what was pulled.
If you don't have a VPN, pull the v1.23.5 images from my Docker Hub repo as described at the top of this article and retag them.
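To keep the pulled images in sync with the init command below, kubeadm also accepts an explicit version flag:

kubeadm config images list --kubernetes-version v1.23.5   # show the exact images required
kubeadm config images pull --kubernetes-version v1.23.5   # pull that specific release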
- Initialize the master node:
sudo kubeadm init --kubernetes-version v1.23.5 --pod-network-cidr 172.100.0.0/16 --apiserver-advertise-address 192.168.1.105
Output like the following means initialization succeeded:
[bootstrap-token] Using token: 2el3ya.rtu3ldy4avpzrt0c
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.105:6443 --token 2el3ya.rtu3ldy4avpzrt0c \
    --discovery-token-ca-cert-hash sha256:6304c84389fd49d357257136d4b2d26d4b6b5b18a520a3cfc82553b82b2388f4
Record this join command; the worker nodes will need it to join the cluster later:
kubeadm join 192.168.1.105:6443 --token 2el3ya.rtu3ldy4avpzrt0c \
    --discovery-token-ca-cert-hash sha256:6304c84389fd49d357257136d4b2d26d4b6b5b18a520a3cfc82553b82b2388f4
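If this command is lost or the token expires (the default token TTL is 24 hours), a fresh join command can be printed on the master at any time:

kubeadm token create --print-join-command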
- Then run the following on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
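A quick sanity check that kubectl is now talking to the new cluster:

kubectl cluster-info   # should report the control plane at https://192.168.1.105:6443
kubectl get nodes      # only k8s-master is listed at this point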
- Check the pods:
[vagrant@k8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-64897985d-2gvcl              0/1     ContainerCreating   0          2m22s
kube-system   coredns-64897985d-97zwx              0/1     ContainerCreating   0          2m22s
kube-system   etcd-k8s-master                      1/1     Running             2          2m36s
kube-system   kube-apiserver-k8s-master            1/1     Running             2          2m36s
kube-system   kube-controller-manager-k8s-master   1/1     Running             3          2m36s
kube-system   kube-proxy-b7jkn                     1/1     Running             0          2m22s
kube-system   kube-scheduler-k8s-master            1/1     Running             3          2m36s
Notice that the two DNS pods are not running yet; that is because the network plugin has not been installed.
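To see why a pod is stuck in ContainerCreating, describe it (pod name taken from the listing above); the Events section at the bottom gives the reason:

kubectl -n kube-system describe pod coredns-64897985d-2gvcl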
kubeadm join
- Don't rush to install the network plugin on the master at this point; join the two worker nodes to the cluster first. (The first time I set this up, I installed the network plugin before joining the nodes, and the DNS pods on the worker nodes sometimes failed to initialize.)
Join node1 to the cluster:
[vagrant@k8s-node1 ~]$ sudo kubeadm join 192.168.1.105:6443 --token 2el3ya.rtu3ldy4avpzrt0c --discovery-token-ca-cert-hash sha256:6304c84389fd49d357257136d4b2d26d4b6b5b18a520a3cfc82553b82b2388f4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Join node2 to the cluster:
[vagrant@k8s-node2 ~]$ sudo kubeadm join 192.168.1.105:6443 --token 2el3ya.rtu3ldy4avpzrt0c --discovery-token-ca-cert-hash sha256:6304c84389fd49d357257136d4b2d26d4b6b5b18a520a3cfc82553b82b2388f4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
On the master node, verify that both nodes have joined:
[vagrant@k8s-master ~]$ kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   12m     v1.23.4
k8s-node1    Ready    <none>                 3m43s   v1.23.4
k8s-node2    Ready    <none>                 2m2s    v1.23.4
Note: the two worker nodes above already show Ready because I had installed the network plugin earlier; with no network plugin installed they would normally be NotReady.
Install the Weave network plugin
- Install the network plugin on the master node:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Note: without a VPN this plugin may also fail to install; the Docker Hub repo mentioned above already includes the required Weave images.
After the plugin is installed, all pods should look like this:
[vagrant@k8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS     AGE
kube-system   coredns-64897985d-2gvcl              1/1     Running   0            16m
kube-system   coredns-64897985d-97zwx              1/1     Running   0            16m
kube-system   etcd-k8s-master                      1/1     Running   2            16m
kube-system   kube-apiserver-k8s-master            1/1     Running   2            16m
kube-system   kube-controller-manager-k8s-master   1/1     Running   3            16m
kube-system   kube-proxy-b7jkn                     1/1     Running   0            16m
kube-system   kube-proxy-s57st                     1/1     Running   0            5m48s
kube-system   kube-proxy-xnhx4                     1/1     Running   0            7m29s
kube-system   kube-scheduler-k8s-master            1/1     Running   3            16m
kube-system   weave-net-5bwzm                      2/2     Running   1 (7s ago)   11s
kube-system   weave-net-6b6bx                      2/2     Running   0            11s
kube-system   weave-net-ldvtv                      2/2     Running   0            11s
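To also see which node each pod landed on, add wide output:

kubectl get pod --all-namespaces -o wide   # adds NODE and IP columns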
If some pods are still not running, the plugin may not have installed successfully; run the install command again and re-check.
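As an optional final check (my own addition; it assumes the busybox image can be pulled), in-cluster DNS can be verified with a throwaway pod:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default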