Kubernetes 1.11.3 cluster setup + Dashboard
Environment used in this post:
CentOS: 7
Docker: 18.06
Kubernetes: 1.11.3
Prerequisites
# Internal IPs + hostnames of the 3 machines
172.16.0.175 k8s-master
172.16.0.100 k8s-node1
172.16.0.147 k8s-node2
# Add them to /etc/hosts
echo -e "172.16.0.175 k8s-master\n172.16.0.100 k8s-node1\n172.16.0.147 k8s-node2\n" >> /etc/hosts
# Disable swap
swapoff -a
# Add vm.swappiness=0 to /etc/sysctl.d/k8s.conf, then reload it
vim /etc/sysctl.d/k8s.conf
vm.swappiness=0
sysctl -p /etc/sysctl.d/k8s.conf
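The post only tunes vm.swappiness. Two related steps are usually expected by the kubeadm preflight checks as well; the commands below are a sketch of my own, not from the original:
# Keep swap off after a reboot by commenting out the swap entry in /etc/fstab
sed -i '/ swap / s/^/#/' /etc/fstab
# Let iptables see bridged traffic (needed by most CNI plugins, flannel included)
modprobe br_netfilter
cat <<EOF >> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf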
Installation
# Install kubectl, kubelet, and kubeadm
yum install -y kubectl-1.11.3
yum install -y kubelet-1.11.3
yum install -y kubeadm-1.11.3
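These packages come from the Kubernetes yum repository, which the post assumes is already configured. If yum cannot find them, a repo file along these lines is typically added first (the Aliyun mirror here is my assumption, not something the post specifies):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum makecache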
# Check that they are installed, and check the versions
[root@ecs-ca42 sc]# kubectl version
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@ecs-ca42 sc]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:59:42Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@ecs-ca42 sc]# kubelet --version
Kubernetes v1.11.3
# An error hit here: package kubelet-1.13.1.x86_64 (kubernetes) requires kubernetes-cni = 0.7.5
# Fix: install the required kubernetes-cni version
yum -y install kubernetes-cni-0.7.5
# If an older Kubernetes client was installed before, yum may report a conflict; just remove the conflicting package
# Error message: file /usr/bin/kubectl from install of kubectl-1.11.3-0.x86_64 conflicts with file from package kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64
yum remove kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64
# Install Docker 18.06. Mind the version: Kubernetes 1.11+ expects Docker 18.06
yum -y install docker
systemctl enable docker
systemctl start docker
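Note that a plain yum install docker on CentOS 7 usually installs Docker 1.13 from the extras repo rather than 18.06. If 18.06 is required specifically, one common route is the docker-ce repository; a sketch of my own, not from the original:
yum -y install yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce-18.06.1.ce
systemctl enable docker && systemctl start docker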
# Add shell completion for kubectl
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
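To keep the completion in new shells as well, it can be appended to ~/.bashrc (optional, my addition):
echo 'source <(kubectl completion bash)' >> ~/.bashrc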
# It is normal for systemctl status kubelet to report errors at this point; kubeadm init will start it properly later
systemctl status kubelet
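One step the post does not show explicitly, but kubeadm setups normally include: enable kubelet so it comes back after a reboot.
systemctl enable kubelet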
# First, pull the control-plane images through a mirror (the usual workaround for registries blocked in China)
#!/bin/bash
images=(kube-proxy-amd64:v1.11.3 kube-scheduler-amd64:v1.11.3 kube-controller-manager-amd64:v1.11.3
        kube-apiserver-amd64:v1.11.3 etcd-amd64:3.2.18 coredns:1.1.3 pause:3.1)
for imageName in ${images[@]} ; do
  docker pull anjia0532/google-containers.$imageName
  docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
  docker rmi anjia0532/google-containers.$imageName
done
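A quick check of my own before running kubeadm init: all seven images should now be listed under their k8s.gcr.io names.
docker images | grep k8s.gcr.io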
kubeadm init
kubeadm init --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16
Output after a successful kubeadm init
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.0.54:6443 --token 2x5n80.85azoth0ynw8pdck --discovery-token-ca-cert-hash sha256:41672b43a392e241c58d2053c993d762181dc549f5523a537bc7b3c9d8887ca7
Checking kubelet at this point still logs a CNI warning; this is expected until a pod network is installed:
systemctl status kubelet
kubelet Unable to update cni config: No networks found in /etc/cni/net.d
The important lines are:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run these first; otherwise kubectl has no credentials in $HOME/.kube/config and fails with an error like:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
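If you are working as root, pointing KUBECONFIG at the admin config also works instead of copying it (standard kubeadm usage, not quoted from the post):
export KUBECONFIG=/etc/kubernetes/admin.conf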
After running them, everything on the master should be up.
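A quick way to verify (my addition):
# The master node should be listed (NotReady until the network addon is applied),
# and the kube-system pods should be Running or Pending
kubectl get nodes
kubectl get pods -n kube-system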
Joining the nodes to the master
Run on each node:
kubeadm join 172.16.0.100:6443 --token w8tb5q.8k6xikx522tk5w7n --discovery-token-ca-cert-hash sha256:f95393ccba643a856b76fd49f8ffc9cb81c6dcce75581619cb91b266730dc6f9
Seeing NotReady here is normal, because the pod network has not been installed yet.
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
After applying the manifest above, the master node became Ready, but the two nodes stayed NotReady. On each node I tried
kubeadm reset
and joined again.
That still did not help, so I also ran the image-pull script on each node:
#!/bin/bash
images=(kube-proxy-amd64:v1.11.3 kube-scheduler-amd64:v1.11.3 kube-controller-manager-amd64:v1.11.3
        kube-apiserver-amd64:v1.11.3 etcd-amd64:3.2.18 coredns:1.1.3 pause:3.1)
for imageName in ${images[@]} ; do
  docker pull anjia0532/google-containers.$imageName
  docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
  docker rmi anjia0532/google-containers.$imageName
done
and applied the flannel manifest again:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
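Afterwards, checks of my own to confirm the nodes recover: all three machines should eventually show Ready, with one kube-proxy and one kube-flannel pod per node.
kubectl get nodes
kubectl get pods -n kube-system -o wide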
Deploying the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Download the manifest so the image and the Service can be edited locally
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Original image address
# k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
# Change it to
# roeslys/kubernetes-dashboard-amd64:v1.10.1
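One way to make that image substitution in the downloaded file (the sed below is just a sketch of the edit described in the comments above):
sed -i 's#k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1#roeslys/kubernetes-dashboard-amd64:v1.10.1#' kubernetes-dashboard.yaml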
In the same file, change the Service section to NodePort so the dashboard is reachable on port 30000 of every node:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
# Create the dashboard
kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods,svc -n kube-system
# Create a token for logging in to the dashboard
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
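The dashboard should then be reachable at https://<any-node-ip>:30000 thanks to the NodePort above. The token can also be printed directly instead of being read from the describe output (a variant of my own):
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') -o jsonpath='{.data.token}' | base64 -d && echo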