Installing k8s:

Prerequisites

  • Install the operating system on at least three virtual machines
  • Specs: at least 2 CPUs / 2 GB RAM and 30 GB of disk per VM
  • All cluster nodes can reach each other over the network
  • The VMs can reach the public internet
  • Swap is disabled
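A quick way to sanity-check these prerequisites on each VM (my sketch, not part of the original notes):

nproc           # expect >= 2
free -h         # expect >= 2G of memory
df -h /         # expect >= 30G of disk
swapon --show   # prints nothing once swap is fully disabled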

Install Docker

apt-get install -y docker.io    # "docker.ce" is not a real package name; use docker.io (or docker-ce if you have added Docker's own apt repo)

Preparation

Set the hostname

# Edit the /etc/hostname file, then reboot
vi /etc/hostname
lx-001

reboot
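# (Alternative, not used in this walkthrough: hostnamectl applies the new
# name immediately, so the reboot can be skipped)
hostnamectl set-hostname lx-001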
# Configure the IP-to-hostname mappings
vi /etc/hosts
10.1.1.11 lx-001
10.1.1.12 lx-002
10.1.1.13 lx-003

Disable the firewall

systemctl stop ufw			# temporary (until reboot)
systemctl disable ufw		# permanent

Set kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
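One step worth knowing about (the original notes skip it): the bridge sysctls above only exist once the br_netfilter module is loaded, and the new file isn't read until sysctl reloads its settings:

modprobe br_netfilter                              # load the module now
echo br_netfilter > /etc/modules-load.d/k8s.conf   # load it on every boot
sysctl --system                                    # apply /etc/sysctl.d/k8s.conf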

Disable swap

swapoff -a			# temporary (until reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab 	# permanent (comments out the swap entry)

Docker network settings

vi /lib/systemd/system/docker.service 
# Add this line above ExecStart=
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
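Editing a systemd unit file doesn't take effect on its own; reload systemd and restart Docker afterwards (the same two commands appear in the next step, but they're needed here too):

systemctl daemon-reload
systemctl restart docker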

Configure the Docker registry mirror

vi /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl daemon-reload
systemctl restart docker
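To confirm Docker picked up both settings (my addition, not in the original notes):

docker info | grep -A1 "Registry Mirrors"   # should list hub-mirror.c.163.com
docker info | grep "Cgroup Driver"          # should say systemd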

Install k8s

apt-get update
apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
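It's also common practice (not in the original notes) to pin these three packages so a routine apt upgrade can't bump the cluster version under you:

apt-mark hold kubelet kubeadm kubectl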

Problem encountered:

The default apt sources don't include the Kubernetes packages. Snaps with the same names exist, but installing via snap is not recommended.

No apt package “kubeadm”, but there is a snap with that name.
Try “snap install kubeadm”
No apt package “kubectl”, but there is a snap with that name.
Try “snap install kubectl”
No apt package “kubelet”, but there is a snap with that name.
Try “snap install kubelet”

Solution:

1. Open /etc/apt/sources.list and add this line:

deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main

2. Run apt-get update

Our old friend shows up: the public key is missing.

W: GPG error: https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
E: The repository ‘https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease’ is not signed.

Add the public key

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B57C5C2836F4BEB
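An alternative that should also work here (untested in this walkthrough) is to fetch the key straight from the mirror instead of a keyserver:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -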

3. Run update again; this time there are no errors, and the k8s components install successfully.

Initialize the master node

kubeadm init

It throws warnings and errors; let's work through them one at a time:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

docker info | grep Cgroup   # check Docker's cgroup driver (we changed it above)
Add this to /etc/docker/daemon.json:
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl restart docker
systemctl status docker
# Check again
root@lx-001:~# docker info | grep Cgroup 
 Cgroup Driver: systemd
 Cgroup Version: 1

Perfectly solved~

[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Turn swap off

swapoff -a

Solved~

Moving on... and it hangs while pulling images....

Apparently the images are hosted on Google's registry and blocked by the firewall; the advice I found was to pull them with docker directly.

# List the images kubeadm needs
root@lx-001:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

# Pull with docker
docker pull k8s.gcr.io/kube-apiserver:v1.21.2
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
# Error! Connection timed out!

# More digging: Aliyun mirrors these images. Aliyun saves the day!
# Rather than pull them one by one, write a script
vi image.pull
#!/bin/bash
img_list='
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.2
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0
'
for img in $img_list
do
docker pull $img
done
# Save and quit
# Make it executable
chmod +x image.pull
# Run it
./image.pull

With the images pulled, run the init again

Stuck again.... still at the image-pull step....

@#¥%…&&…%¥#@#¥%……&&

The image names don't match what kubeadm expects, so it can't find them; retag them:

docker tag [image-id] [new-name:tag]     # the new name must match the kubeadm list above
# Delete the original images
docker rmi -f [image-name:tag]
# The -f is required, otherwise the delete fails
# I didn't know how to script this at first, so I deleted them by hand, running docker images after every single one. Tedious. Ugh~

# Yours truly got the script written after all! ohhhhhhhhhhhhhhhhhhhhhhh
vi mv_name.sh
#!/bin/bash
img_name='
kube-apiserver:v1.21.2
kube-controller-manager:v1.21.2
kube-scheduler:v1.21.2
kube-proxy:v1.21.2
pause:3.4.1
etcd:3.4.13-0
'

for img in $img_name
do
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done

for img in $img_name
do
docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0

chmod +x mv_name.sh
./mv_name.sh

# Maybe, possibly, perhaps: pointing at the Aliyun image repository could also solve this, but that tutorial used CentOS and we're on Ubuntu; I haven't tried it, so I can't say for sure
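For what it's worth, kubeadm does have an --image-repository flag; pointing it at the Aliyun mirror should (untested here) skip the whole pull-and-retag dance:

kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers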

Re-re-initialize

kubeadm init

Success! ohhhhhhhhhhhhhh

Run the follow-up commands:

mkdir -p $HOME/.kube  # $HOME failed for me; I used mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # likewise, I substituted /root/...
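As root there's also the shortcut kubeadm itself prints at the end of init, which avoids the copy entirely:

export KUBECONFIG=/etc/kubernetes/admin.conf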

Join the worker nodes to the cluster

# kubeadm init prints this command when it finishes; the token and hash are different for every cluster
kubeadm join 10.1.1.11:6443 --token iba6l2.u7bvuhp0wlqglrtz --discovery-token-ca-cert-hash sha256:0599017a841c6a2853a7555728c91e013aca05c0d41a55098fe57e10e94f3be4 

# I didn't save it and briefly lost my mind, but no matter: the command below regenerates it
kubeadm token create --print-join-command

# Seeing this means the node joined successfully
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check from the master node
root@lx-001:~# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
lx-001   NotReady   control-plane,master   24h     v1.21.2
lx-002   NotReady   <none>                 2m26s   v1.21.2
lx-003   NotReady   <none>                 2m20s   v1.21.2
# The nodes are NotReady because the network isn't configured yet; that's the very next step, no need to panic

Configure the k8s network

We'll use Calico as the example (because my attempt to fetch the flannel manifest failed, dammit!)

# Fetch the manifest
wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
# Apply it on the master node only!
kubectl apply -f calico.yaml
# Then check node status from the master
root@lx-001:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
lx-001   Ready    control-plane,master   24h   v1.21.2
lx-002   Ready    <none>                 16m   v1.21.2
lx-003   Ready    <none>                 16m   v1.21.2

Success! yes yes yes yes yes yes yes yes yes yes!

Celebrated too soon: the calico pod on the master node never came up. Dammit, round two!

kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-7c5dd46f7d-4cmp4   1/1     Running   0          12m   192.168.110.65   lx-002   <none>           <none>
calico-node-4d4d6                          1/1     Running   0          12m   10.1.1.12        lx-002   <none>           <none>
calico-node-6vz4r                          1/1     Running   0          12m   10.1.1.13        lx-003   <none>           <none>
calico-node-8ng56                          0/1     Running   0          12m   10.1.1.11        lx-001   <none>           <none>
coredns-558bd4d5db-ppfkr                   1/1     Running   0          24h   192.168.110.67   lx-002   <none>           <none>
coredns-558bd4d5db-svmkv                   1/1     Running   0          24h   192.168.110.66   lx-002   <none>           <none>
etcd-lx-001                                1/1     Running   1          24h   10.1.1.11        lx-001   <none>           <none>
kube-apiserver-lx-001                      1/1     Running   2          24h   10.1.1.11        lx-001   <none>           <none>
kube-controller-manager-lx-001             1/1     Running   1          24h   10.1.1.11        lx-001   <none>           <none>
kube-proxy-6bpst                           1/1     Running   0          24m   10.1.1.13        lx-003   <none>           <none>
kube-proxy-bq2jw                           1/1     Running   0          24m   10.1.1.12        lx-002   <none>           <none>
kube-proxy-jgxm8                           1/1     Running   1          24h   10.1.1.11        lx-001   <none>           <none>
kube-scheduler-lx-001                      1/1     Running   1          24h   10.1.1.11        lx-001   <none>           <none>

# /etc/kubernetes/admin.conf (and its copy at /root/.kube/config)
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EWXlOREEzTVRjeE4xb1hEVE14TURZeU1qQTNNVGN4TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3FnClpXbHBQUmlTVEhiYVZsTjRjTWo2bUZwczlPNHU2K1FEcXRhRWd3dkRnMm90UjROb05OQXJneEQ4SjJLZklrRVQKUGVmbnZPK3ZIbW4zaHdWdC9CU1hOVVQwYUFNNXU0WnphOEkyZFFYK0JsTHVhaTVDWDQwSUg4Y1duandBcnpwKwpxekMyaGRPWFdzRkdhWFVyLzA0YTNSVEpFOTd3OEZQdkJXMW5PUHd6VE1UM1JqZjVrUnp5NmVNNWpIaU1keW9wCllEZTBFc0Q5ZWUzbCthN0FOTTRKL2RKUzhOdURhMlg0T0dUS3ZKUHlzcnJKTUlFeDRTQU1hZFhSY0R1MjF4d1AKWWtnSEF3RWx0VkM0V1hScXRrTXF4cjNiMmNlVG9jdU5NSlpOSUVIUnhpTUhIY2JGb0ZqMDg0djZMOEIwd1BTWgpSbXJSTVN6TTVzRUcvRXBTNjdNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHazFQSnFIRjAraEczaUpyME1xWWw2eXMramdNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBaVRBWmdnODZ5Q2tabVdFYTY3M3VrejZSdFBtQlVNYzE0bHMxOWpPdWxlbGhUTXNESQovUUc4Um5ldHBJNEV6eDhQK3FveVVWVWJSZGdUZUFIMWFwc2hCN1dCeDVqN2toSThwUG1aTXY5amU4RVZIZ2xECnJBRUd6QVlwZVNqaUdBV0g3L01GMUp2ZjA5MHVONlc0SDdoTzB3SDNVa2tjNjJ1OVRXRytUc05WZU9yRUlQb0cKdGlwbWYzNjVnNGhBTzc5MEdDcFIzeVJxdzRCWVowb3JKZ1dXVEZyQTJrenk5TFg0MTU1dGl4ZWxOUFlIS0pBbgpuVWNORklUcUxWNkovNDlnV2wzVG90Qjk2dXRNeU1tT21aNkFtS0t2ODlhQTFxTWxITS9aS05rL09GL1JrTTB0CkNxb3JNMjVXOC9ERXlseXByWm9CdlVZZmVsWG5pRUhRRm5icwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.1.1.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJZmVhMFp2dkFEdmt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMk1qUXdOekUzTVRkYUZ3MHlNakEyTWpRd056RTNNVGxhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpYR29iVGV1QnJ6b2VsVUoKd1M3U3NXQnhsdzdpcXZseUF3TFBVUXJEWWROcnFHZ1lsaE84aHAzdkdLbVFob1BsblZWeE9IT2o3VU9Qd3RtMwpES3E4TjhBcmhHemxsZkw5MW9ycmJXSVFoRFdmKzRYVlVqTmtHRkx3NnFhTkYwTjA4dDBhWGVyQVVHMnhVcHJ0CjZHZHRLUlBlSE5TeTI1SE5xZEYzWFNuV3N6NFlkUHZkSmE3TGUvaGFPZDZ2OXFkZkxxRVVVcW1ybW9FTjdsdm0KdnN6YWpyb1NQenhsZFE2eFVCM0NFenNHT0xZMm15KzhVWWtFblFoTFBSZnV6aVJwcTBXWnFBMXd6T1llR3d1TAplTytaK2NYSkgzR2psai80RmtubEkvMzNRZGNacW1TSXo5TjFxV0JWRU5XaW9sV0RNRFdGaVkrUUxnbXpxYk9pCllsMmpVd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JScE5UeWFoeGRQb1J0NGlhOURLbUplc3JQbwo0REFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBQ3UwVkc2S0VOMU03WWpDVmQybEdGOWVRMnFkWTZxUUFPUkJHCmJQYjIrUTFtMmlUZjFRQVFsaWxJdDRBN0I2THd2YTdHY0RXNGhoRHNCVStUK0ZjOE4zQ0ZyeGg4OGJobWhNVDAKSk5uaUxLZU5Mekd0NHhsMEh6UlJMMHRrS0wyZmxDRmdLRUsyVWtxc1htZDYrbU1OTDFaN2NXU252ZitYZ29HYQplK3gwdGRZKzdGOUl2R2ZkZ0ZUM0JMNllieUwzcW0wRWlzV3lRRE9SWUNLbkxyNTlxL2NYQzBBdkM1U2xod3hKCnhMZU9pOFR3WDFBdHBibVA5WjIybGlzK3VLam1XRE9naktpekVhaDdCczJmTzc3ZzBuZ2FMcy9JWHJSMXhoTXcKMlpwa1R5TlhpaUI0ZzBsVnlJKzBjOHZYUENzWnNFOWEyUDA1ZFhFYW9mazdqdUhnN1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBelhHb2JUZXVCcnpvZWxVSndTN1NzV0J4bHc3aXF2bHlBd0xQVVFyRFlkTnJxR2dZCmxoTzhocDN2R0ttUWhvUGxuVlZ4T0hPajdVT1B3dG0zREtxOE44QXJoR3psbGZMOTFvcnJiV0lRaERXZis0WFYKVWpOa0dGTHc2cWFORjBOMDh0MGFYZXJBVUcyeFVwcnQ2R2R0S1JQZUhOU3kyNUhOcWRGM1hTbldzejRZZFB2ZApKYTdMZS9oYU9kNnY5cWRmTHFFVVVxbXJtb0VON2x2bXZzemFqcm9TUHp4bGRRNnhVQjNDRXpzR09MWTJteSs4ClVZa0VuUWhMUFJmdXppUnBxMFdacUExd3pPWWVHd3VMZU8rWitjWEpIM0dqbGovNEZrbmxJLzMzUWRjWnFtU0kKejlOMXFXQlZFTldpb2xXRE1EV0ZpWStRTGdtenFiT2lZbDJqVXdJREFRQUJBb0lCQUdsZWl3RUJWc3R6NWxTZgordkhQSHhjRW5SM1o3NTI3ZEtOZ3RJNGZWQmgvaEM4S3ZObDBZL1F6V3FjdWlNYkZMV1pscFQxTDZsN05rUlZoCjdzV2JhQSs4QzFYUE9HMlJCR29lTkNPVThWMnQxMUQ4MG0xbm1FWDFmRVVOaVQzT1JsUXQzTkVnanVSeGJrb3MKMWlxbHFWSXhNM0ZjRWlRVmd3TS9RTlpTbUNDem85ckdvZ1FKVGVEUVFZMkRIOHQxUEo4bzc2M2tvclFIR2c3ZAp5UDY2T0c1bFBSNVZJVVJkeGxyWUtscUlhVzhUNmZqYURzSXJ2RE9xNjNEZGdBUEEza2hpTVFjcnA0NVh1Q3pICnZ5K2Ivck4yd0JBOUQ2WHJDWFVBTzRnc0dsZzVLZTNSMVJhTS8rZm5oRmhRRVRtMmlZcERkVndQSis5QlJsbEIKR3N6QVVDRUNnWUVBOHVmcmJtSG9INzVRUDdDLzVHdm1HZmcvTnlyMTAvZTFrc2h3STgzRU5WZVBsZWtOUmlzaQo5czFjbkUvMXgwczZPakhSSEtpMlVsekg2NkNIVUxTKzVLV0hjWDBRMmtzSVVDRW9Kc3JCbUV2SHpmWnRFcG96CmJaWUh4T25MUjMvYkNWWHQyMUNoUk01L0cwT0RxKzdnMXdSNGhNT1dQL2pocmlVaGxaUDI2V01DZ1lFQTJJVEUKUERRM3ZYcFVUUWc2VlNNK2loVHhmMFhRYmJxZWwxSUJEcUpSMmFIbWgrcVkzZ0hqTFlSTDdUVTVjT3gwR0djTgoyYXRvbzZYWmJRS0xnaEh0RWhubUpWclZYQVgxUGpYMjg2SERtM1hoZ0I5SWxEaTRvNkpFUFRZTXpOVmk5cm1ICjg2VldPZ3hobCtDZHdVaUkvWWtCVEtueFZPbElmQUZQcVBtNmVWRUNnWUVBdkIxRHhNNXA5L3RwSm9uNWNpcmwKbm9NVVllTVJVa0RxQzJ6Uys4ZGxCbkp6TG9PMzFmbWVNRWhHU24vYU5hZGF4cXJCNlZIM01MM056ZnNhRURTSwpDWVR2NmNJVGhScktxMU9pUnJpTFNTaVc2ampIcTdwanphQjlEOUNIcnkyak1nMnNFVWJXUGZVMWxxV29tVVI4Ck44aXNsUlRyalV0dmEzQXlIQ2JrOTBVQ2dZQVFoMW9mcW5EUzR5TEtXcVZ4V0dadXpoaDllY3ZtaEllVXo2cksKL2pNM1pQZWZTcFp1NUQvK2VvbjlTc0hlei80dzJyVWc5OGZlTGt3QjJWN2pDQkZMLzNRbFIrRGZ6SWlqUGlWagpCZWRUMTlUbUhmMUJhMjhVOXM4MHlRcURISXNZZ0tOVFF6em80NGNUdkE3dThXV2J6VGl2TEk0Q3lHaERKeXA0Cm9NL09jUUtCZ0dIY3FPLzVZTzA3Ry9DVE1qMkFpZjBwYklOTU9kUEIvQVdUOXZZYzBuVlNLSTh6dUdqMTQ4VE0KOXE5Uyt6Mzh0MXpPUVd1VlZkZEtsWFlRMDZFWEkwY2E5TW5yaFhocmpoVXFYNHF2dWhDenpvRjljQ0dscUdlTwpaNm5FY2x2elZabXNQNFBqQ0ZIQkZ4b2F5Mlg1Y0htVHNwZ29icnpYSWw5TUg4YnpXV1JhCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Whoops, that got pasted in while I was copying the file out.


Picking up where we left off

The calico pod on the master node isn't up

I tried all sorts of fixes, including removing the node from the cluster (do NOT try that, friends: it amounts to redoing everything from init onward, and it keeps going wrong)

All that's actually needed is a change to the calico manifest.

kubectl describe pod -n kube-system calico-node-klcwc
# Scroll to the bottom and read the messages; there's a warning
Warning  Unhealthy  5s (x23153 over 2d16h)  kubelet  (combined from similar events): Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 10.1.1.13,10.1.1.12
2021-06-28 02:14:37.048 [INFO][22669] health.go 156: Number of node(s) with BGP peering established = 0
# There's our problem, no doubt about it
# Add two lines to calico.yaml
- name: CLUSTER_TYPE			# find this entry
  value: "k8s,bgp"				# and add the two lines below it
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"	# a regex works after "interface=", or name your exact interface; mine would be 'value: "interface=ens160"'

# Then check whether calico is up
root@lx-001:~# kubectl -n kube-system get po|grep calico
calico-kube-controllers-7c5dd46f7d-wwrmq   1/1     Running   0          2d17h
calico-node-gdfv2                          1/1     Running   0          20m
calico-node-h8878                          1/1     Running   0          19m
calico-node-z8nxw                          1/1     Running   0          20m

Install the dashboard UI

Fetch and modify the manifest

# Fetch the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

# Edit the manifest: find the kind: Service block and change it to the following

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

Create the ServiceAccount and ClusterRoleBinding

root@lx-001:~# vi kube-dash-admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

root@lx-001:~# kubectl apply -f recommended.yaml
root@lx-001:~# kubectl apply -f kube-dash-admin-user.yml 

Check that the dashboard is running

root@lx-001:~# kubectl get deployment -n kubernetes-dashboard
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
dashboard-metrics-scraper   1/1     1            1           10m			# metrics service is up
kubernetes-dashboard        1/1     1            1           10m			# dashboard UI is up

root@lx-001:~# kubectl  get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-856586f554-2sb85   1/1     Running   0          12m
kubernetes-dashboard-67484c44f6-n6vmj        1/1     Running   0          12m

If the dashboard is already running, you can modify the Service in place: kubectl edit service kubernetes-dashboard -n kubernetes-dashboard

Access the Dashboard UI

https://10.1.1.11:30001 (the master node's IP + port 30001)

Get a token

root@lx-001:~# kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
Name:         namespace-controller-token-zpbqf
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImFzb3B3YnphVFVmSm4yNzFVV3hIUExKQ0x2WWI4YWxCUmdBS2phaDl2WUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi16cGJxZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjI1ZjE4NDYxLTg4NTgtNGIyOS05N2FlLTRlM2Q5MDZhOTMwZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.RmQXrfMtrMn_z7ONomLxxB64Jt_enkO9EYc83lKSGEOZLTVOSqB_qDrtlH5CZOtC-Fj61h-eleAAhjlL9z2_8tc_ku3x7Dy17j62IyLGCyARQF9IFwgAsNwlO4cAg29JErCVznXOykf6JqyCDggFbnu6-0VjwHTL511_vT-h_D0oFL7hEsYdhivcU1PijRGEMoC6FhC_NkYicok2rdlVrogpwY3iXpMp6W6_dmHe40SyoVxiw2Ox0gQq5pGK8LY4OJ7Zd9oAl8c2R1FfXNBMnLHAOs5m0KQuMUnsw9uJfcjIuHLPpfxA2n5ZRtBBGlWLKxEpJF5aYaHdUAmBugXOSA
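Note that this grabs the namespace-controller token from kube-system, which logs you in with that account's limited permissions. To sign in as the admin-user we bound to cluster-admin earlier, pull its token instead; something along these lines should work (my sketch, not from the original notes):

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') | grep token: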

k8s installation complete!

And that's a wrap. Confetti!

Bonus:

Building the cluster from binaries

Steps:

  1. Create the virtual machines and install the OS
  2. Initialize the systems
  3. Generate self-signed certificates for etcd and the apiserver
    • Self-signed certs control access, like badge cards for a building, split into internal and external
  4. Deploy the etcd cluster
  5. Deploy the master components
    • kube-apiserver
    • kube-controller-manager
    • kube-scheduler
    • etcd
    • docker
  6. Deploy the node components
    • kubelet
    • kube-proxy
    • docker
    • etcd
  7. Deploy the cluster network