

Experiment environment

1. Windows 10 host, VMware Workstation VMs;
2. Kubernetes cluster: three CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.21
   CONTAINER-RUNTIME: docker://20.10.7

Test

Now let's start the test.

The kubeconfig authentication file on the master node is located as follows:

[root@k8s-master1 ~]#ll /etc/kubernetes/admin.conf
-rw------- 1 root root 5564 Oct 20 15:55 /etc/kubernetes/admin.conf
[root@k8s-master1 ~]#ll .kube/config
-rw------- 1 root root 5564 Oct 20 15:56 .kube/config
[root@k8s-master1 ~]#

kubectl was already installed on the node, but it cannot query the cluster:

[root@k8s-node1 ~]#kubectl get po
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@k8s-node1 ~]#
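The error above simply means that kubectl found no kubeconfig on this node and fell back to its built-in default of localhost:8080. A quick way to check what configuration (if any) kubectl is picking up, a small sketch using only standard kubectl/shell commands:

# kubectl looks for credentials in this order: the --kubeconfig flag,
# the KUBECONFIG environment variable, and finally $HOME/.kube/config.
# With none of them present it falls back to http://localhost:8080, hence the error above.
kubectl config view                # prints an essentially empty config on this node
echo "${KUBECONFIG:-<not set>}"    # the KUBECONFIG variable is not set either
ls -l $HOME/.kube/config           # the default file does not exist yet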

Now check whether the authentication file exists in the corresponding directories on the node: => it does not exist in either place!

[root@k8s-node1 ~]#ll /etc/kubernetes/
kubelet.conf  manifests/    pki/
[root@k8s-node1 ~]#ll .kube

The following commands are the method used during cluster setup to configure the kubeconfig file that kubectl uses to connect to the cluster:

Copy the kubeconfig file used by kubectl to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, copy this config file from the master to the node and do the corresponding configuration there:

#On the master node, scp the authentication file to the node
[root@k8s-master1 ~]#scp  .kube/config root@172.29.9.43:/etc/kubernetes/
config                                                                                                                                 100% 5564     1.7MB/s   00:00
[root@k8s-master1 ~]#

#On the node, put the file in place
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/config $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test whether the node can now access the k8s cluster: => it can, the test works perfectly!
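For reference, a few commands worth running on the node to confirm access (a minimal sketch; adjust to whatever resources you want to check):

kubectl get nodes                 # should list the 1 master and 2 worker nodes
kubectl get po -A                 # pods in all namespaces
kubectl config current-context    # should print kubernetes-admin@kubernetes
kubectl cluster-info              # should point at the master's apiserver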

[Screenshot: kubectl commands succeeding on the node]

Summary

1. The kubeconfig file


kubectl uses a kubeconfig authentication file to connect to the K8s cluster; such a file can also be generated with the `kubectl config` subcommands (see the sketch after the example below).

Example kubeconfig file for connecting to K8s:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data:
    server: https://192.168.31.61:6443
  name: kubernetes

contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes

users:
- name: kubernetes-admin
  user:
    client-certificate-data:
    client-key-data:
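As mentioned above, a file with this structure can also be assembled step by step with the `kubectl config` subcommands. A minimal sketch; the output file name my-kubeconfig and the certificate/key paths are placeholders for illustration:

# Write to an explicit file so the default ~/.kube/config is left untouched
export KUBECONFIG=./my-kubeconfig

# 1. cluster entry: apiserver address and CA certificate
kubectl config set-cluster kubernetes \
  --server=https://192.168.31.61:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true

# 2. user entry: client certificate and key
kubectl config set-credentials kubernetes-admin \
  --client-certificate=./admin.crt \
  --client-key=./admin.key \
  --embed-certs=true

# 3. context entry: bind the user to the cluster, then make it the current context
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes --user=kubernetes-admin
kubectl config use-context kubernetes-admin@kubernetes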

2. Method: copy the kubeconfig file used by kubectl to its default path

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]#ll /etc/kubernetes/admin.conf
-rw------- 1 root root 5564 Oct 20 15:55 /etc/kubernetes/admin.conf
[root@k8s-master1 ~]#ll .kube/config
-rw------- 1 root root 5564 Oct 20 15:56 .kube/config
[root@k8s-master1 ~]#

3. Note: how the kubeconfig file is generated under different setup methods

1. Single-master cluster:

Running `kubeadm init` directly generates the admin.conf file automatically;
once the command finishes you only need to copy it into place:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2. Highly available (HA) k8s cluster:

Generate the initialization configuration file:
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 9037x2.tcaqnpaqkra9vsbw
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.29.9.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:  # Include all master/LB/VIP IPs, not one can be missing! You can list a few spare IPs to make future scaling easier.
  - k8s-master1
  - k8s-master2
  - 172.29.9.41
  - 172.29.9.42
  - 172.29.9.88
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.29.9.88:16443 # load balancer virtual IP (VIP) and port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:  # use an external etcd cluster
    endpoints:
    - https://172.29.9.41:2379 # the 3 nodes of the etcd cluster
    - https://172.29.9.42:2379
    - https://172.29.9.43:2379
    caFile: /opt/etcd/ssl/ca.pem # certificates required to connect to etcd
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers # the default registry k8s.gcr.io is unreachable from mainland China, so use the Alibaba Cloud mirror instead
kind: ClusterConfiguration
kubernetesVersion: v1.20.0 # K8s version, consistent with the packages installed above
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network, must match the CNI manifest deployed below
  serviceSubnet: 10.96.0.0/12  # cluster-internal virtual network, the unified access entry for Pods
scheduler: {}
EOF
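If you would rather not write this file from scratch, kubeadm can print a default configuration to use as a starting point (a sketch using the standard kubeadm config subcommand); you then edit the addresses, certSANs, etcd and networking sections by hand:

# Print a default InitConfiguration/ClusterConfiguration as a starting point
kubeadm config print init-defaults > kubeadm-config.yaml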

Bootstrap the cluster with the configuration file:
kubeadm init --config kubeadm-config.yaml
[root@k8s-master1 ~]#kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.9. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.9.41 172.29.9.88 172.29.9.42 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.036041 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9037x2.tcaqnpaqkra9vsbw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d

4. Question: does a node need admin.conf?

Answer:

If kubectl is not used on a node, that node does not need admin.conf; all cluster maintenance can be done from the master node alone.

With admin.conf you can reach the api-server not only from a node but from any machine that has network connectivity to it; you only need admin.conf where you actually run kubectl, and spreading it across many nodes is also a security concern.
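For example, on any machine that can reach the apiserver, a copied admin.conf can be used without putting it in the default path (a sketch; the path /tmp/admin.conf is just an example):

# Option 1: one-off flag
kubectl --kubeconfig=/tmp/admin.conf get nodes

# Option 2: environment variable for the whole shell session
export KUBECONFIG=/tmp/admin.conf
kubectl get nodes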

kubectl is simply the client of the k8s cluster. You can run this client on the master node, on a worker node, or even outside the cluster entirely; all it needs is the apiserver address and credentials, much like a mysql client talking to a mysql server.

kubectl needs authentication information configured in order to talk to the apiserver, which in turn persists the data in etcd.
The .kube/config file is that authentication file: kubectl uses it to call the apiserver, and the apiserver reads the data stored in etcd. Everything you see with kubectl get comes from etcd.

It is the apiserver that queries etcd for the data; in the end it is just a query.
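To watch this call chain from the client side, you can raise kubectl's log verbosity so it prints the REST requests it sends to the apiserver (a sketch; -v is a standard kubectl flag):

# -v=8 logs the HTTP requests/responses exchanged with the apiserver;
# the apiserver in turn serves the objects it has stored in etcd.
kubectl get pods -v=8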

About me

The aim of my blog: I hope that everyone who picks up one of my posts can reproduce the experiment, getting the hands-on result first and then going back to the theory to understand the technical points more deeply; that is what makes learning fun and keeps you motivated. My posts include complete steps, and I also share the source code and software used in the experiments. I hope we can all make progress together!

If you run into any questions while following along, feel free to contact me at any time and I will help you solve them for free:

  1. Personal WeChat: x2675263825 (舍得), QQ: 2675263825.


  2. Personal blog: www.onlyonexl.cn


  3. Personal WeChat official account: 云原生架构师实战


  4. Personal CSDN:

    https://blog.csdn.net/weixin_39246554?spm=1010.2135.3001.5421


Finally

Well, that's it for this test of the kubeconfig authentication file used by kubectl. Thanks for reading! To finish, here is a photo of my 美圆. Wishing you a happy life and meaningful days, see you next time!

