Setting up a k8s cluster with kubeadm
- Environment
Node | IP | OS | Spec | k8s version
---|---|---|---|---
hw1 (master) | 192.168.0.105 | ubuntu18.04 | 2c4g | 1.13
hw2 (slave) | 192.168.0.165 | ubuntu18.04 | 4c8g | 1.13
- Preparation

  1. Download the k8s binaries and set the environment variables.

  2. Download the images.

     2.1. Configure a proxy. k8s needs to reach Google's services (see the earlier post on configuring a proxy on the server); kubeadm then downloads the images automatically when it starts the services.
     In addition to the proxy settings in that post, be sure to exclude the local IPs from the proxy, otherwise kubeadm will time out when accessing them; see 张克升's blog post on installing Kubernetes.
```bash
startvpn(){
    #sslocal -d start -c ~/.vpn/ss.json
    export http_proxy='http://127.0.0.1:8123'
    export https_proxy='http://127.0.0.1:8123'
    export NO_PROXY=192.168.0.105,192.168.0.165
    echo "proxy enabled"
}
stopvpn(){
    unset http_proxy
    unset https_proxy
    echo "proxy disabled"
}
```
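A quick usage sketch for these helpers; the assumption here is that they are defined in ~/.bashrc (any shell rc file works):

```bash
# Load the helper functions, enable the proxy, verify, then disable it
source ~/.bashrc
startvpn              # prints "proxy enabled"
env | grep -i proxy   # confirm http_proxy/https_proxy/NO_PROXY are set
stopvpn               # prints "proxy disabled"
```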
     2.2. Without a proxy, you can pull the images manually from another registry and re-tag them (see 0x01. 初始化 k8s集群); a sketch follows below.
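A minimal sketch of the pull-and-retag approach. The mirror repository (mirrorgooglecontainers) and the single image shown are illustrative assumptions; run kubeadm config images list first to see exactly which images and tags your kubeadm version expects.

```bash
# List the images this kubeadm version needs
kubeadm config images list

# For each image: pull from a reachable mirror, then re-tag it to the
# k8s.gcr.io name kubeadm expects (mirror repo is an assumption)
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.3
```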
- Starting the services
  - master

    Initialize the cluster with kubeadm:
```bash
root@hw1:~# kubeadm init
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
	[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8123". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.105:6443 --token wes0vi.vq3eycpjgtpj1s1p --discovery-token-ca-cert-hash sha256:2b9ac2e1f1f7168e62b5569681387d39f69aadddc2004ff61dd6f12e95b068c9

root@hw1:~#
root@hw1:~# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
hw1    NotReady   master   76s   v1.13.1
```
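As the preflight output notes, the images can also be pulled ahead of time, which makes the actual init faster:

```bash
# Pre-pull the control-plane images before running kubeadm init
# (this is the command suggested in the preflight output above)
kubeadm config images pull
```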
    Configure kubectl on the master:
```bash
root@hw1:~# mkdir -p $HOME/.kube
root@hw1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@hw1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@hw1:~#
```
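An optional sanity check that kubectl can now reach the API server:

```bash
# Should print the master and cluster service endpoints instead of
# the "localhost:8080 was refused" error discussed later
kubectl cluster-info
```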
  - slave

    Join the cluster:

```bash
root@hw2:~# kubeadm join 192.168.0.105:6443 --token cpxxht.pokfll8tx6k69l82 --discovery-token-ca-cert-hash sha256:8945994a70fdf50d3612de741dc47cb4ab80b39baa3b3dd5e774b0669d4f9d80
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.105:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.105:6443"
[discovery] Requesting info from "https://192.168.0.105:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.105:6443"
[discovery] Successfully established connection with API Server "192.168.0.105:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "hw2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@hw2:~#
```
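Note that join tokens expire (24h by default), so the token printed by kubeadm init will not work forever. A fresh, ready-to-paste join command can be generated on the master at any time:

```bash
# Create a new token and print the matching kubeadm join command
kubeadm token create --print-join-command
```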
  - Check the nodes:
```bash
root@hw1:~# kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
hw1    NotReady   master   3m37s   v1.13.1
hw2    NotReady   <none>   45s     v1.13.3
root@hw1:~#
```
  - Install the network plugin
    - master
```bash
root@hw1:~# kubectl create -f https://raw.githubusercontent.com/scylhy/k8s/master/cni/flannel.yaml
```
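To confirm the plugin actually comes up on both nodes, a quick check (assuming this manifest deploys flannel into kube-system, as the stock flannel manifest does):

```bash
# Watch the flannel pods start on each node
kubectl get pods -n kube-system -o wide | grep flannel
```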
    - slave: configure the CNI environment variables
```bash
root@hw2:~# echo "FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=172.18.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true">/run/flannel/subnet.env
root@hw2:~# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=172.18.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
root@hw2:~#
```
    - master

      Check that both nodes are now Ready:
```bash
root@hw1:~# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
hw1    Ready    master   28m   v1.13.1
hw2    Ready    <none>   27m   v1.13.3
root@hw1:~#
```
  - Deploy a demo
```bash
root@hw1:~# kubectl run test --port=80 --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/test created
root@hw1:~# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-5cc7b49d8d-hrkwr   1/1     Running   0          27s
root@hw1:~#
```
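To reach the nginx pod from outside the cluster, one option is to expose the deployment as a NodePort service. This step is an addition to the original walkthrough, sketched here for completeness:

```bash
# Expose the demo deployment on a NodePort (allocated from 30000-32767)
kubectl expose deployment test --port=80 --type=NodePort
kubectl get svc test
# then test it with: curl http://<node-ip>:<assigned-node-port>
```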
- Problems encountered and solutions
  - After enabling the proxy, kubeadm init times out.

    The locally used IPs must bypass the proxy. Set the environment variable below, then restart the proxy:

```bash
export NO_PROXY=192.168.0.105,192.168.0.165
```
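The HTTPProxyCIDR preflight warning earlier also flags the service CIDR, so it may help to exempt the pod and service ranges as well. A sketch; whether your proxy client honors CIDR entries in NO_PROXY is an assumption to verify:

```bash
# Assumption: also exempt the service CIDR (10.96.0.0/12, from the preflight
# warning) and the flannel pod network (10.244.0.0/16)
export NO_PROXY=192.168.0.105,192.168.0.165,10.96.0.0/12,10.244.0.0/16
```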
  - Images cannot be pulled; connections to gcr.io fail.

    This is a firewall issue. Either go through a proxy (with the cluster IPs excluded from it), or pull the images from another registry and re-tag them, as sketched in 2.2 above.

  - The connection to the server localhost:8080 was refused - did you specify the right host or port?
    When checking the version with kubectl version, this error appears:

```bash
root@hw1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
    On the master, this happens because kubectl has not been configured yet; just run:
```bash
root@hw1:~# mkdir -p $HOME/.kube
root@hw1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@hw1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@hw1:~#
root@hw1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
```
    On the slave, kubectl was never configured at all, so the same error appears there. Since kubectl is not needed on the slave for this setup, the error can be ignored; if you do want it, see the sketch below.
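A minimal sketch for enabling kubectl on the slave anyway, assuming root ssh access from hw2 to the master (this step is my addition, not part of the original setup):

```bash
# Copy the master's admin kubeconfig over to the slave
mkdir -p $HOME/.kube
scp root@192.168.0.105:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes   # should now reach the API server on the master
```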
  - After creating a pod, it stays stuck in ContainerCreating and reports /run/flannel/subnet.env: no such file or directory.

    kubectl describe pod <podname> shows the following error:
```bash
root@hw1:~# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
test-5cc7b49d8d-8242c   0/1     ContainerCreating   0          5s
root@hw1:~# kubectl describe pod test-5cc7b49d8d-8242c
...
Warning  FailedCreatePodSandBox  16s (x4 over 19s)  kubelet, hw2  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7059cab356a433250c6c045a68f98d63005f58fbc1647f6dc8e105e22b21fbf7" network for pod "test-5cc7b49d8d-8242c": NetworkPlugin cni failed to set up pod "test-5cc7b49d8d-8242c_default" network: open /run/flannel/subnet.env: no such file or directory
...
```
    The cause is that the flannel CNI environment file is missing on the slave; fix it as follows:
```bash
root@hw2:~# cat /run/flannel/subnet.env
cat: /run/flannel/subnet.env: No such file or directory
root@hw2:~#
root@hw2:~# echo "FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=172.18.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true">/run/flannel/subnet.env
root@hw2:~# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=172.18.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
root@hw2:~#
```
    Check the pod status again:
```bash
root@hw1:~# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-5cc7b49d8d-8242c   1/1     Running   0          3m51s
```
    Also, after deleting the previous deployment and recreating it, the error is gone:
```bash
root@hw1:~# kubectl delete deploy test
deployment.extensions "test" deleted
root@hw1:~#
root@hw1:~# kubectl run test --port=80 --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/test created
root@hw1:~# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
test-5cc7b49d8d-cmmvw   0/1     ContainerCreating   0          3s
root@hw1:~# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-5cc7b49d8d-cmmvw   1/1     Running   0          6s
root@hw1:~#
root@hw1:~# kubectl describe pods test-5cc7b49d8d-cmmvw
Name:               test-5cc7b49d8d-cmmvw
Namespace:          default
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  23s   default-scheduler  Successfully assigned default/test-5cc7b49d8d-cmmvw to hw2
  Normal  Pulling    23s   kubelet, hw2       pulling image "nginx"
  Normal  Pulled     19s   kubelet, hw2       Successfully pulled image "nginx"
  Normal  Created    19s   kubelet, hw2       Created container
  Normal  Started    18s   kubelet, hw2       Started container
root@hw1:~#
```
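One caveat on this workaround: /run is typically a tmpfs, so a hand-written /run/flannel/subnet.env disappears on reboot. Normally the flannel pod on each node maintains this file, so it is worth checking whether flannel is actually running on the slave (a sketch, assuming flannel was deployed as a DaemonSet in kube-system):

```bash
# If no flannel pod is running on hw2, the hand-written subnet.env will be
# lost on reboot and pods scheduled to hw2 will fail again
kubectl get pods -n kube-system -o wide | grep flannel
```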