Minimal Kubernetes (k8s) Deployment

1. Environment preparation

Three virtual machines, each with 2 GB of RAM and 2 CPUs:

master:192.168.17.20

node01:192.168.17.21

node02:192.168.17.22


The main reason swap is disabled in a Kubernetes cluster is memory management for containerized applications. The key considerations are:

  1. Memory management: Kubernetes manages and allocates container resources, including memory. Swap space changes how memory behaves, so disabling it gives better control over, and better prediction of, container memory usage.
  2. Performance stability: once the system starts swapping, disk I/O latency can degrade container performance. Disabling swap avoids this unpredictable performance drop.
  3. Predictability: Kubernetes emphasizes predictable, isolated resource usage; without swap, memory behavior is easier to reason about and containerized applications get the resources they ask for.
  4. Security: in some situations swap can be a security risk, for example sensitive data being written out to disk. Disabling it helps here as well.

In short, disabling swap lets the cluster manage memory and resources more effectively and improves the performance, predictability, and security of containerized workloads.

#Check whether any swap partitions are still in use
[root@master ~]# swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   2G   0B   -2
#Turn off the swap areas currently in use
[root@master ~]# swapoff -a
[root@master ~]# swapon --show
#To disable swap permanently, open /etc/fstab with a text editor (e.g. vi or nano), find the line for the swap partition, and either comment it out with # or delete it. Save the file.
[root@master ~]# vi /etc/fstab
#Reboot for the change to take effect.
[root@master ~]# reboot
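The manual fstab edit above can be scripted instead of done in vi. Below is a dry-run sketch of the sed rule, applied to a sample file for illustration; on a real node you would target /etc/fstab itself and then run swapoff -a:

```shell
# Comment out swap entries rather than deleting them. Demonstrated on a
# sample file; on a node, replace /tmp/fstab.demo with /etc/fstab.
printf '/dev/dm-1 swap swap defaults 0 0\nUUID=abcd / xfs defaults 0 0\n' > /tmp/fstab.demo
sed -ri '/\sswap\s/s/^[^#]/#&/' /tmp/fstab.demo   # prefix uncommented swap lines with '#'
cat /tmp/fstab.demo
```

The pattern assumes the swap entry contains the word "swap" surrounded by whitespace, which matches the layout CentOS writes by default.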

Install ansible on the management node to make the following steps easier.

Add the host information to the ansible inventory:

[master]
192.168.17.20
[node]
192.168.17.21
192.168.17.22

#Update the /etc/hosts file on all nodes
[root@master ~]# ansible all -m lineinfile -a "dest=/etc/hosts line='192.168.17.20 master' state=present" 
[root@master ~]# ansible all -m lineinfile -a "dest=/etc/hosts line='192.168.17.21 node01' state=present" 
[root@master ~]# ansible all -m lineinfile -a "dest=/etc/hosts line='192.168.17.22 node02' state=present" 
[root@master ~]# ansible all -a 'cat /etc/hosts'
#Stop and disable the firewall
[root@master ~]# ansible all -m service -a "name=firewalld state=stopped"
[root@master ~]# ansible all -m shell -a "systemctl status firewalld"
[root@master ~]# ansible all -m shell -a "systemctl disable firewalld"
[root@master ~]# 
#Disable SELinux
[root@master ~]# ansible all -m selinux -a "state=disabled"
[root@master ~]# ansible all -m command -a "sestatus"
#Reboot for the change to take effect
[root@master ~]# ansible all -m command -a "reboot"
#Pass bridged IPv4 traffic to iptables chains
[root@master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
[root@master ~]# 
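Note that the k8s.conf above was written only on the master. Below is a sketch of distributing it to every node with ansible and loading the br_netfilter kernel module, without which the bridge-nf-call-* keys do not exist (the command -v guard simply skips the push on a machine without ansible):

```shell
# Write the sysctl fragment locally, then distribute and apply it everywhere.
cat > /tmp/k8s.conf << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
if command -v ansible >/dev/null; then
  ansible all -m copy -a 'src=/tmp/k8s.conf dest=/etc/sysctl.d/k8s.conf'
  # br_netfilter must be loaded before sysctl --system can set these keys
  ansible all -m shell -a 'modprobe br_netfilter && sysctl --system'
fi
```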

2. Install docker, kubeadm, and kubelet on all nodes

#Install docker
[root@master ~]# ansible all -m yum -a 'name=wget'
[root@master ~]# ansible all -a 'wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo'
[root@master ~]# ansible all -m yum -a 'name=docker-ce-18.06.1.ce-3.el7 state=present'
[root@master ~]# ansible all -m service -a 'name=docker state=started'
[root@master ~]# ansible all -a 'systemctl status docker'
[root@master ~]# ansible all -a 'docker --version'
#Install kubeadm-1.23.0, kubelet-1.23.0, and kubectl-1.23.0
#Add the Kubernetes yum repository
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@master ~]# ansible all -m yum -a 'name=kubeadm-1.23.0 state=installed'
[root@master ~]# ansible all -m yum -a 'name=kubelet-1.23.0 state=installed'
[root@master ~]# ansible all -m yum -a 'name=kubectl-1.23.0 state=installed'
#Enable kubelet at boot
[root@master ~]# ansible all -a 'systemctl enable kubelet'
#Adjust the docker configuration
#By default docker uses cgroupfs as its cgroup driver, while Kubernetes recommends systemd instead
[root@master ~]# cat <<EOF> /etc/docker/daemon.json
> {
> "exec-opts": ["native.cgroupdriver=systemd"],
> "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
> }
> EOF
[root@master ~]# ansible all -m service -a 'name=docker state=restarted'
[root@master ~]# ansible all -a 'systemctl enable docker'
[root@master ~]# ansible all -a 'kubelet --version'
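Likewise, the daemon.json above was created only on the master, but the systemd cgroup driver is needed on every node. A sketch of rebuilding the file, validating it, and pushing it out (the mirror URL is the one from the original file):

```shell
# Build daemon.json, sanity-check the JSON, then copy it to all nodes
# and restart docker so the new cgroup driver takes effect.
cat > /tmp/daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null   # fail fast on malformed JSON
if command -v ansible >/dev/null; then
  ansible all -m copy -a 'src=/tmp/daemon.json dest=/etc/docker/daemon.json'
  ansible all -m service -a 'name=docker state=restarted'
fi
```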

3. Initialize the master node

The kubeadm init parameters are as follows:

  • --apiserver-advertise-address: the IP address or DNS name the Kubernetes API server advertises.
  • --image-repository: the registry to pull Kubernetes component images from. Here the Aliyun container registry is used as the image repository.
  • --kubernetes-version: the Kubernetes version to install, v1.23.0 in this example.
  • --service-cidr: the IP range used by Kubernetes Services; 10.1.0.0/16 in this example.
  • --pod-network-cidr: the IP range used by the Pod network; 10.244.0.0/16 in this example. This must match the "Network" value in kube-flannel.yml's net-conf.json.
[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.17.20 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.23.0 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.1.0.1 192.168.17.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.17.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.17.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.007258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vkzzf2.hz2ic2bvqgnjcpcz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.20:6443 --token vkzzf2.hz2ic2bvqgnjcpcz \
	--discovery-token-ca-cert-hash sha256:25690372b5265edbe2179b3ff768caf9ffc4df9dd2d99f64acc56dcb7c8246f7 
[root@master ~]# 
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Join the worker nodes to the cluster
[root@master ~]# ansible node -a 'kubeadm join 192.168.17.20:6443 --token vkzzf2.hz2ic2bvqgnjcpcz --discovery-token-ca-cert-hash sha256:25690372b5265edbe2179b3ff768caf9ffc4df9dd2d99f64acc56dcb7c8246f7'
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   12m     v1.23.0
node01   NotReady   <none>                 7m24s   v1.23.0
node02   NotReady   <none>                 7m24s   v1.23.0
#Install the network plugin; the kube-flannel.yml file is listed below
#https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-5kjqs   1/1     Running   0          83s
kube-flannel-ds-666vn   1/1     Running   0          83s
kube-flannel-ds-cmf2k   1/1     Running   0          83s
[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-2x5pk          1/1     Running   0          19m
coredns-6d8c4cb4d-kwf9h          1/1     Running   0          19m
etcd-master                      1/1     Running   0          19m
kube-apiserver-master            1/1     Running   0          19m
kube-controller-manager-master   1/1     Running   0          20m
kube-proxy-ccfgt                 1/1     Running   0          19m
kube-proxy-n7p4d                 1/1     Running   0          14m
kube-proxy-qkxc5                 1/1     Running   0          14m
kube-scheduler-master            1/1     Running   0          19m
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   20m   v1.23.0
node01   Ready    <none>                 14m   v1.23.0
node02   Ready    <none>                 14m   v1.23.0
[root@master ~]# 





The kube-flannel.yml file

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

4. Common commands

#Print a fresh node join command
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.17.20:6443 --token 8my0k2.0jyunlr5abz3086c --discovery-token-ca-cert-hash sha256:25690372b5265edbe2179b3ff768caf9ffc4df9dd2d99f64acc56dcb7c8246f7 
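If the printed hash is ever lost, it can be recomputed from the cluster CA certificate: the value is the SHA-256 digest of the CA's DER-encoded public key. A sketch, where the path is kubeadm's default CA location and CA_CRT is parameterized only for illustration:

```shell
# Recompute --discovery-token-ca-cert-hash from the CA certificate.
CA_CRT="${CA_CRT:-/etc/kubernetes/pki/ca.crt}"
openssl x509 -pubkey -noout -in "$CA_CRT" 2>/dev/null \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print "sha256:" $NF}'
```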
#List all pods
[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-5kjqs            1/1     Running   0          12m   192.168.17.21   node01   <none>           <none>
kube-flannel   kube-flannel-ds-666vn            1/1     Running   0          12m   192.168.17.20   master   <none>           <none>
kube-flannel   kube-flannel-ds-cmf2k            1/1     Running   0          12m   192.168.17.22   node02   <none>           <none>
kube-system    coredns-6d8c4cb4d-2x5pk          1/1     Running   0          29m   10.244.0.2      master   <none>           <none>
kube-system    coredns-6d8c4cb4d-kwf9h          1/1     Running   0          29m   10.244.0.3      master   <none>           <none>
kube-system    etcd-master                      1/1     Running   0          30m   192.168.17.20   master   <none>           <none>
kube-system    kube-apiserver-master            1/1     Running   0          30m   192.168.17.20   master   <none>           <none>
kube-system    kube-controller-manager-master   1/1     Running   0          30m   192.168.17.20   master   <none>           <none>
kube-system    kube-proxy-ccfgt                 1/1     Running   0          29m   192.168.17.20   master   <none>           <none>
kube-system    kube-proxy-n7p4d                 1/1     Running   0          24m   192.168.17.21   node01   <none>           <none>
kube-system    kube-proxy-qkxc5                 1/1     Running   0          24m   192.168.17.22   node02   <none>           <none>
kube-system    kube-scheduler-master            1/1     Running   0          30m   192.168.17.20   master   <none>           <none>
#List all deployments
[root@master ~]# kubectl get deploy --all-namespaces -o wide
NAMESPACE     NAME                   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS             IMAGES                                                   SELECTOR
kube-system   coredns                2/2     2            2           77m     coredns                registry.aliyuncs.com/google_containers/coredns:v1.8.6   k8s-app=kube-dns
kube-system   kubernetes-dashboard   0/1     1            0           5m19s   kubernetes-dashboard   kubernetesui/dashboard:latest                            k8s-app=kubernetes-dashboard
#Delete a deployment
[root@master ~]# kubectl delete deploy kubernetes-dashboard -n kube-system
deployment.apps "kubernetes-dashboard" deleted
#Show detailed information about a pod
[root@master ~]# kubectl describe pod kubernetes-dashboard-6786d8fcc-k8vrb -n kube-system
#Fetch a pod's log output
[root@master ~]# kubectl logs kubernetes-dashboard-6786d8fcc-k8vrb -n kube-system


5. Install the Dashboard web UI

Reference: https://zhuanlan.zhihu.com/p/655525723

#Install the Dashboard
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
--2023-12-02 15:09:10--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7621 (7.4K) [text/plain]
Saving to: 'recommended.yaml.2'

100%[=======================================================================================================================================================================>] 7,621       --.-K/s 用时 0s      

2023-12-02 15:09:11 (71.6 MB/s) - 'recommended.yaml.2' saved [7621/7621]
#To make the Dashboard reachable from outside the cluster, change the Service access type in recommended.yaml to NodePort
#Edit the file and add type: NodePort
# spec:
#  type: NodePort
#  ports:
#    - port: 443
#      targetPort: 8443
#  selector:
#    k8s-app: kubernetes-dashboard
[root@master ~]# vi recommended.yaml
[root@master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master ~]# 
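If the type: NodePort edit was skipped, the already-created Service can also be switched over with kubectl patch instead of re-applying the YAML. A sketch; the JSON patch body is validated locally first, and the kubectl calls are guarded so the snippet is a no-op where no cluster is reachable:

```shell
# Switch the Dashboard Service to NodePort and read back the allocated port.
PATCH='{"spec":{"type":"NodePort"}}'
python3 -c 'import json,sys; json.loads(sys.argv[1])' "$PATCH"   # fail fast on a typo
if command -v kubectl >/dev/null; then
  kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p "$PATCH"
  kubectl -n kubernetes-dashboard get svc kubernetes-dashboard \
    -o jsonpath='{.spec.ports[0].nodePort}'; echo
fi
```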
#Create the Dashboard user
[root@master ~]# kubectl apply -f dashboard-admin.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
#Get the login token
[root@master ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r8wfk
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: abbbf7b1-0df7-4682-a162-4b216ffa7f49

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ilg1anlFbFdSb0Q5SFJpOWkyMkE5SmgtQmhRam5LVXVUOEpuRVRfX0E5a2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI4d2ZrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYmJiZjdiMS0wZGY3LTQ2ODItYTE2Mi00YjIxNmZmYTdmNDkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.DJpdGDhaIxkrrsGFm1zfH0Ue5xincHUBpMkx3X0d9jut-vfFowO32E_funzu-B70uJAhZPbbN09gtmxc4bYIQmtsHN2Q6tw3eJc8MXSc_A8x3cqPjRUq21DnThOZUf7SuqTPblPT8ZZb5jQUuFGQ-1mF_uz_Q_SY7yoB6TYMLoKDq0889w4jfa9SfRKcDaJBfq4SMfrY9cDQDwZxKkDgAjye0GChfE6DytPICmoHDEp-c3twXqx1DxzdAvsvQ5Aq7mqmeO-UElhJzfaV4P-QOzSLy2xLH1JjlmHq0-V-pn6Twh747JP_rUw0M1MM48f2nEUbN00VBl-5ecGoIrgvEg
[root@master ~]# 
#Check the exposed port
[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-6f669b9c9b-n5srb   1/1     Running   0          6m54s
pod/kubernetes-dashboard-758765f476-zq6sr        1/1     Running   0          6m54s

NAME                                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.1.92.156   <none>        8000/TCP        6m54s
service/kubernetes-dashboard        NodePort    10.1.222.45   <none>        443:31259/TCP   6m55s
[root@master ~]# 

The dashboard-admin.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

The recommended.yaml file

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

6. Web access

Open a browser and log in with the node port and token obtained above (e.g. https://192.168.17.20:31259).


7. Install KubeSphere

Reference: https://kubesphere.io/zh/

  1. Make sure port 30880 is open in the security group, then access the web console through the NodePort (IP:30880) using the default account and password (admin/P@88w0rd).
#Prepare the default storage; an NFS service is used here (the example.com/nfs-provisioner named in sc.yaml must already be deployed for volumes to actually be provisioned)
[root@master ~]# vi sc.yaml
[root@master ~]# kubectl apply -f sc.yaml 
storageclass.storage.k8s.io/nfs created
[root@master ~]# kubectl get storageclass
NAME            PROVISIONER                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs (default)   example.com/nfs-provisioner   Retain          Immediate           false                  18m
#Set it as the default StorageClass
[root@master ~]# kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#Install KubeSphere
[root@master ~]# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
[root@master ~]# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created
#Watch the installation logs
[root@master ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f


The sc.yaml file

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs-provisioner
reclaimPolicy: Retain
parameters:
  type: "nfs"
  server: "192.168.17.9"
  path: "/svr/nfs/share"
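Before running the KubeSphere installer it is worth confirming that the default StorageClass can actually provision a volume (again, this only works if the nfs provisioner is deployed and the NFS export is reachable). A minimal sketch of a test claim; the name and size are arbitrary:

```yaml
# pvc-test.yaml: apply with `kubectl apply -f pvc-test.yaml`;
# `kubectl get pvc pvc-test` should then report STATUS Bound.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```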


The password must be changed on first login.

