Version info:

kubernetes: 1.18

centos: 7.9

nacos: 1.3.2

Reference: vmware虚拟机Centos7上部署kubernetes1.18_thehunters的博客-CSDN博客

1. Install the virtual machines

Use VMware to install a single CentOS 7 VM; you only need to install one and can clone the other two from it. Be sure to change the MAC address of each clone, otherwise they may fail to obtain an IP address.

2. System configuration

Hostname      IP address        Role
k8s-master    192.168.125.11    k8s master
k8s-node01    192.168.125.12    k8s node
k8s-node02    192.168.125.13    k8s node
middleware    192.168.125.21    runs MySQL, Redis and other middleware

2.1. Disable the swap partition

swapoff -a
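
Note: swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entry in /etc/fstab; a minimal sketch:

sed -ri 's/.*swap.*/#&/' /etc/fstab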

2.2. Disable the firewall

[root@master01 ~]# systemctl stop firewalld 
[root@master01 ~]# systemctl disable firewalld

2.3. Disable selinux

Note: modify only the SELINUX line in /etc/selinux/config; changing SELINUXTYPE will leave the system unable to boot.
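
The usual commands (a sketch: setenforce 0 takes effect immediately for the current boot, the sed edit makes it permanent):

[root@master01 ~]# setenforce 0
[root@master01 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config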

2.4. Add the Aliyun yum repository

[root@master01 ~]# rm -rfv /etc/yum.repos.d/* 
[root@master01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
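
After replacing the repo files, rebuild the yum cache so the new repository takes effect:

[root@master01 ~]# yum clean all
[root@master01 ~]# yum makecache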

2.5. Configure the hostnames

master01:

[root@master01 ~]# hostnamectl set-hostname master01 
[root@master01 ~]# more /etc/hostname

node01:

[root@k8s-node01 ~]# hostnamectl set-hostname node01 
[root@k8s-node01 ~]# more /etc/hostname

node02:

[root@k8s-node02 ~]# hostnamectl set-hostname node02
[root@k8s-node02 ~]# more /etc/hostname
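
Optionally, add the cluster addresses to /etc/hosts on every machine so the nodes can resolve each other by name. A sketch using the IPs from the table in section 2 and the hostnames set above:

cat >> /etc/hosts <<EOF
192.168.125.11 master01
192.168.125.12 node01
192.168.125.13 node02
EOF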

2.6. Configure kernel parameters to pass bridged IPv4 traffic to iptables chains

Configure this on both the master and the nodes.

[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Note: k8s networking needs the kernel parameter bridge-nf-call-iptables=1; without it, installing the pod network later will fail.
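
The settings only take effect once loaded (or after a reboot). If the bridge-nf-call keys do not exist yet, load the br_netfilter module first:

[root@master01 ~]# modprobe br_netfilter
[root@master01 ~]# sysctl --system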

3. Install software

Install on both the master and the nodes.

3.1. Install development tools

[root@master01 ~]# yum install vim bash-completion net-tools gcc -y

3.2. Install docker-ce

[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 
[root@master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
[root@master01 ~]# yum -y install docker-ce

Note: the yum-config-manager command used to add the aliyun repo comes from yum-utils, so yum-utils has to be installed first.

3.3. After installing docker, configure the Aliyun registry mirror (accelerator)

[root@master01 ~]# mkdir -p /etc/docker 
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF

[root@master01 ~]# systemctl daemon-reload 
[root@master01 ~]# systemctl restart docker 
[root@master01 ~]# systemctl enable docker.service

Note: there is no space after <<-'EOF' in tee /etc/docker/daemon.json <<-'EOF'. The first time I accidentally typed a space there, and entering EOF did not terminate the heredoc.
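
Optional: kubeadm init later warns that docker uses the "cgroupfs" cgroup driver while "systemd" is recommended. To silence the warning, you can set the driver in the same daemon.json and restart docker; a sketch:

{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}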

3.4. Install kubectl, kubelet and kubeadm

First add the Aliyun kubernetes repository:

[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then install:

[root@master01 ~]# yum -y install kubectl-1.18.0 kubelet-1.18.0 kubeadm-1.18.0 
[root@master01 ~]# systemctl enable kubelet

Note: a plain yum install kubectl kubelet kubeadm would install the latest version, which would not match the kubeadm init --kubernetes-version=1.18.0 I run later and would fail, so I pin the version on the yum command line.
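
To see which versions are available in the repo, you can run, for example:

[root@master01 ~]# yum list kubelet --showduplicates | sort -r | head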

4. Initialize the k8s cluster (master node)

From this step on, the commands differ between master and node; everything above is identical on master and nodes.

4.1 Initialize the master

[root@master01 ~]# kubeadm init --kubernetes-version=1.18.0 \
  --apiserver-advertise-address=192.168.125.11 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=10.122.0.0/16

After it finishes, output like the following appears:

W1125 22:47:32.274048   47607 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.137.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.137.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.137.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1125 22:47:35.941950   47607 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1125 22:47:35.943106   47607 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.504047 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: o11ulw.ovtj9vov7ob34hs7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 192.168.125.11:6443 --token o11ulw.ovtj9vov7ob34hs7 \
    --discovery-token-ca-cert-hash sha256:38a93626ac4e86155583e7ef9b32cb13739d5f5bc3da2b4ed7e74aec8112bea7

Save the kubeadm join line at the end of the output; it is what you will run on the other nodes to join them to the Kubernetes cluster.

kubeadm join 192.168.125.11:6443 --token o11ulw.ovtj9vov7ob34hs7 \
    --discovery-token-ca-cert-hash sha256:38a93626ac4e86155583e7ef9b32cb13739d5f5bc3da2b4ed7e74aec8112bea7

4.2 Set up kubectl on the master

[root@master01 ~]# mkdir -p $HOME/.kube 
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Notes:

1. Without $HOME/.kube/config, the kubectl command is unusable.

2. The nodes are slightly different: on a node the copy line is sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
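
A quick sanity check that the kubeconfig is in place:

[root@master01 ~]# kubectl cluster-info
[root@master01 ~]# kubectl get nodes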

5. Install the calico network (master node)

[root@master01 ~]# kubectl apply -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml

Note: the current calico release is 3.23, so you must pin the version here or the apply will fail. The calico/kubernetes version compatibility matrix is here: https://docs.tigera.io/archive/v3.18/getting-started/kubernetes/requirements

A little while after installing the calico network, run kubectl get node again; the nodes' STATUS changes from NotReady to Ready:

[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES    AGE    VERSION
master01   Ready    master   2d1h   v1.18.0
node01     Ready    <none>   2d1h   v1.18.0
node02     Ready    <none>   2d1h   v1.18.0
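
You can also watch the calico and coredns pods come up; the nodes go Ready once they are all Running:

[root@master01 ~]# kubectl get pods -n kube-system -w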

6. Join the nodes to the cluster (run on both nodes)

6.1 Join the master

[root@node01 ~]# kubeadm join 192.168.125.11:6443 --token o11ulw.ovtj9vov7ob34hs7 --discovery-token-ca-cert-hash sha256:38a93626ac4e86155583e7ef9b32cb13739d5f5bc3da2b4ed7e74aec8112bea7

Notes:

1. The token produced by kubeadm init is valid for 24 hours; once it expires, create a new one by running kubeadm token create (see the example below).

2. kubeadm token list shows the existing tokens.
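
A convenient variant prints a complete join command, including the discovery token CA cert hash:

[root@master01 ~]# kubeadm token create --print-join-command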

6.2 Set up kubectl (all nodes)

[root@node01 ~]# mkdir -p $HOME/.kube 
[root@node01 ~]# sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config 
[root@node01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, with all the nodes joined to the master, the k8s environment itself is fully installed.

[root@node01 ~]# kubectl get node
NAME       STATUS   ROLES    AGE    VERSION
master01   Ready    master   2d1h   v1.18.0
node01     Ready    <none>   2d1h   v1.18.0
node02     Ready    <none>   2d1h   v1.18.0

7. Install kubernetes-dashboard (master node)

Dashboard is a visualization add-on that gives users a web UI for all kinds of information about the current cluster. With Kubernetes Dashboard you can deploy containerized applications, monitor application state, troubleshoot, and manage the various Kubernetes resources.

7.1 Fetch and modify the configuration file

The official dashboard deployment does not expose a NodePort, so download the yaml file locally and add a nodePort to the Service:

[root@master01 ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
[root@master01 ~]# vim recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc7
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

7.2 Apply it

kubectl create -f recommended.yaml

If something goes wrong, e.g. a configuration mistake, you can run the following command to clean up and then run create or apply again:

kubectl delete -f recommended.yaml

After running kubectl create -f recommended.yaml, run the line below to see that the dashboard has started:

[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.247.69   <none>        8000/TCP        46h
kubernetes-dashboard        NodePort    10.10.120.13   <none>        443:30001/TCP   46h

7.3 Open the login page

In a browser, go to this machine's IP plus the nodePort configured above (e.g. https://192.168.125.11:30000) to reach the login page.

Because the certificate is self-signed, Chrome may block the page; click anywhere on the blank warning page and type thisisunsafe to proceed.

7.4 Get the token

kubectl -n kubernetes-dashboard get secret
kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-tljmr  | grep token | awk 'NR==3{print $2}'

Paste the token into the token input box on the login page and click Sign in.

7.5 Handling login errors

After logging in, however, every page is blank and errors pop up in the top-right corner.

Cause: a serviceaccount problem. The serviceaccount the dashboard ships with has too few permissions, so configure an admin user:

[root@master01 ~]# cat > user.yml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
EOF

Create ClusterRoleBinding.yml, binding the admin serviceaccount to the built-in cluster-admin role:

[root@master01 ~]# cat > ClusterRoleBinding.yml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
EOF

Apply both files and check the results:


[root@master01 ~]# kubectl apply -f user.yml 
serviceaccount/admin created
[root@master01 ~]# kubectl apply -f ClusterRoleBinding.yml 
clusterrolebinding.rbac.authorization.k8s.io/admin created
 
[root@master01 ~]# kubectl get -f user.yml 
NAME    SECRETS   AGE
admin   1         47h
[root@master01 ~]#  kubectl get -f ClusterRoleBinding.yml
NAME    ROLE                        AGE
admin   ClusterRole/cluster-admin   47h
 
[root@master01 ~]# kubectl get serviceaccount -n kubernetes-dashboard
NAME                   SECRETS   AGE
admin                  1         47h
default                1         47h
kubernetes-dashboard   1         47h 

Fetch the token again and log in:


#Method 1: get the token
[root@k8s-master juc]# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/admin-token/{print $1}')    
Name:         admin-token-j6tkt                                                                                                                          
Namespace:    kubernetes-dashboard                                                                                                                       
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin                                                                                                  
              kubernetes.io/service-account.uid: d9ed28a3-4e32-4888-afd0-89e3acdafad6                                                                    
                                                                                                                                                         
Type:  kubernetes.io/service-account-token                                                                                                               
                                                                                                                                                         
Data                                                                                                                                                     
====                                                                                                                                                     
ca.crt:     1025 bytes                                                                                                                                   
namespace:  20 bytes                                                                                                                                     
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBCUEZwUlZSTUNyckMzOF9nRE45UTlzUWRMb1JnbGtmRG0tRHNpazRPUHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3
ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2t
lbi1qNnRrdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50
LnVpZCI6ImQ5ZWQyOGEzLTRlMzItNDg4OC1hZmQwLTg5ZTNhY2RhZmFkNiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.Xe6QvckRW8Nfryvt
Rg5VvGcdK755bUd5IsWq3NbKunYcSUybLUoBYOs7sBuCbUZJREoKYKY8kA68Rh8ISKzkWdSi_q1gsEuqqVFSXpPLfReZjFUcAlW6mcuP3965YhnL7Y9gsH51MxL2qiw1UMmnQLjff4w8hWXuygKQICiAQ
MCn4yN46DQG9B2xDEZESmwc1qDPGuhrqeXS_tjMBYaysOkKaTrdp7j6xyqlQMgaocjBZ3pTg0yNwWKV8kveUMevdghBaQ7XlH5L2h4mezfd_WVQ7Sg2-hX-kdLqXV-9BHJ40r--iTssxq5lqFKl0yq57Q
tb5N3RfjUh3FrrkHB__A                                                                                                                                     
[root@k8s-master juc]#  

#Method 2: get the token

[root@master01 ~]# kubectl -n kubernetes-dashboard get secret
NAME                               TYPE                                  DATA   AGE
admin-token-wmr7b                  kubernetes.io/service-account-token   3      47h
default-token-srztq                kubernetes.io/service-account-token   3      47h
kubernetes-dashboard-certs         Opaque                                0      47h
kubernetes-dashboard-csrf          Opaque                                1      47h
kubernetes-dashboard-key-holder    Opaque                                2      47h
kubernetes-dashboard-token-js42w   kubernetes.io/service-account-token   3      47h
[root@master01 ~]# kubectl describe secrets -n kubernetes-dashboard admin-token-wmr7b
Name:         admin-token-wmr7b
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 6b3a1d5c-8529-4381-bce9-5504d01d2983
 
Type:  kubernetes.io/service-account-token
 
Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IklCY3FzM1A2N0Z4dFMzQmVSa0NzTEJ4VVkyVFI1T3lqWnBKV1EzVWh5eWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi13bXI3YiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZiM2ExZDVjLTg1MjktNDM4MS1iY2U5LTU1MDRkMDFkMjk4MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.w4ATRjr624fMAGwRlz6n7VlxphUU9fSM2rPK71uJK-KO1fWKN1LyDmf2ZGHv3G-jlWpXEINSGmXwjiPl9TXLkTriX3rwOJ8jZUsziBiNig1dx8rA-wgqLTwGmsVILBtvwkMvTNhqTl_CVQj3P8d98l8S_eXTL_ZcykaWLyaltGIwGUYZLnyyL3fyQonJudgc2NmAU2jOZnUaWqw1zsElE_XJKzkVgE16ETWxwiOmSVClo771zdpVcWkzk5FRk2fN4R4rHoRoC9kkCMvkHBtz7bT28m-fYGYrANCaJUiXwLaok1UQIj9J9Dt6Ja2qksknkGIQ5kezefBG-zgUoP5dJQ
[root@master01 ~]# 

8 Common commands

8.1 Scale a deployment's replicas up or down

kubectl scale deployment kube-state-metrics -n kubernetes-dashboard --replicas=0
kubectl scale deployment kube-state-metrics -n kubernetes-dashboard --replicas=1
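
After scaling, you can confirm the new replica count, for example:

kubectl get deployment kube-state-metrics -n kubernetes-dashboard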

8.2 View pod details

[root@k8s-master kubeconfig]# kubectl describe -n kubernetes-dashboard pod  kubernetes-dashboard-5d4dc8b976-mq4mq

8.3 View pod logs

[root@master01 ~]# kubectl logs kubernetes-dashboard-74d688b6bc-6fjnj -n kubernetes-dashboard

8.4 Edit deployment files

#edit a stateful (StatefulSet) deployment configuration
kubectl edit statefulsets nacos -n middleware -o yaml

#edit a stateless (Deployment) configuration
kubectl edit deploy weeget-infinite-id-generator -n inspire -o yaml

9. k8s installation and startup problems

9.1 After the VM is suspended and resumed, https://192.168.137.11:30000 will not load

9.1.1 Symptom

[root@master01 ~]# kubectl logs kubernetes-dashboard-74d688b6bc-6fjnj -n kubernetes-dashboard
2020/09/03 09:27:34 Starting overwatch
2020/09/03 09:27:34 Using namespace: kubernetes-dashboard
2020/09/03 09:27:34 Using in-cluster config to connect to apiserver
2020/09/03 09:27:34 Using secret token for csrf signing
2020/09/03 09:27:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00000d7a0)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000486680)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000486680)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550
main.main()
    /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d

panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

9.1.2 Solution

1. Turn off the iptables/firewalld firewall (if the firewall is what prevents the master node from pinging Pods on the nodes):

[root@master01 ~]# systemctl stop firewalld
[root@master01 ~]# systemctl disable firewalld

Even with firewalld disabled, containers and pods on different hosts still could not ping each other.

This is because of the underlying linux iptables rules, so run the following on each node:

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -L -n

This resolves the problem completely.

If the master node can now ping the Pod IPs on the nodes, the web UI should work again:

[root@CentOS kubernetes]# ping 172.16.59.2
PING 172.16.59.2 (172.16.59.2) 56(84) bytes of data.
64 bytes from 172.16.59.2: icmp_seq=1 ttl=63 time=0.568 ms
64 bytes from 172.16.59.2: icmp_seq=2 ttl=63 time=0.486 ms
64 bytes from 172.16.59.2: icmp_seq=3 ttl=63 time=0.460 ms
64 bytes from 172.16.59.2: icmp_seq=4 ttl=63 time=0.485 ms

2. Caused by the network not being refreshed after docker was stopped or restarted.

When starting the master node, note the order: start kubernetes first, then docker (if the network problem came from stopping or restarting docker, restart the master and nodes, paying attention to the order below).

Restart order on the master node:

systemctl enable docker

systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager

systemctl restart etcd kube-apiserver kube-scheduler kube-controller-manager

systemctl restart flanneld docker

#start the network-related services (flanneld and docker) last to reset the network

Restart order on the nodes:

systemctl restart kubelet kube-proxy

systemctl restart flanneld docker

#start the network-related services (flanneld and docker) last to reset the network

systemctl enable flanneld kubelet kube-proxy docker

3. If it still does not work, delete and reinstall the dashboard:

kubectl delete -f recommended.yaml
kubectl apply -f recommended.yaml

Use the yaml file name you created your pods with.

9.2 After the VM is suspended and resumed, https://192.168.137.11:30000 will not load (dashboard running on a different node)

9.2.1 Symptom

[root@k8s-master kubeconfig]# kubectl logs -n kubernetes-dashboard    dashboard-metrics-scraper-dc6947fbf-n4v2j
{"level":"info","msg":"Kubernetes host: https://10.10.0.1:443","time":"2023-10-14T14:26:18Z"}
192.168.247.13 - - [14/Oct/2023:14:26:49 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
192.168.247.13 - - [14/Oct/2023:14:26:59 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
192.168.247.13 - - [14/Oct/2023:14:27:09 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2023-10-14T14:27:18Z"}

[root@k8s-master kubeconfig]# kubectl logs -n kubernetes-dashboard   kubernetes-dashboard-5d4dc8b976-q42bj 
2023/10/14 12:14:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.

9.2.2 Solution

[root@k8s-master kubeconfig]# kubectl describe -n kubernetes-dashboard pod  kubernetes-dashboard-5d4dc8b976-mq4mq
Name:         kubernetes-dashboard-5d4dc8b976-mq4mq
Namespace:    kubernetes-dashboard
Priority:     0
Node:         k8s-node2/192.168.247.13
Start Time:   Sat, 14 Oct 2023 22:28:10 +0800
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=5d4dc8b976
Annotations:  cni.projectcalico.org/containerID: fc09f4d5a903cf088200d751dc55b98d8ebd899df332826e654c839f75f7aa6b
              cni.projectcalico.org/podIP: 10.122.169.139/32
              cni.projectcalico.org/podIPs: 10.122.169.139/32
Status:       Running
IP:           10.122.169.139
IPs:
  IP:           10.122.169.139
Controlled By:  ReplicaSet/kubernetes-dashboard-5d4dc8b976
Containers:
  kubernetes-dashboard:
    Container ID:  docker://ca0933819362efe3132f4d81296fffc8d5c9c091a67040dd4fd4a0f1fe629bf7
    Image:         kubernetesui/dashboard:v2.0.0-rc7
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:24b77588e57e55da43db45df0c321de1f48488fa637926b342129783ff76abd4
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Running
      Started:      Sat, 14 Oct 2023 22:28:37 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-krxgx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-krxgx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-krxgx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                Message
  ----    ------     ----       ----                -------
  Normal  Scheduled  <unknown>  default-scheduler   Successfully assigned kubernetes-dashboard/kubernetes-dashboard-5d4dc8b976-mq4mq to k8s-node2
  Normal  Pulling    14m        kubelet, k8s-node2  Pulling image "kubernetesui/dashboard:v2.0.0-rc7"
  Normal  Pulled     14m        kubelet, k8s-node2  Successfully pulled image "kubernetesui/dashboard:v2.0.0-rc7"
  Normal  Created    14m        kubelet, k8s-node2  Created container kubernetes-dashboard
  Normal  Started    14m        kubelet, k8s-node2  Started container kubernetes-dashboard

kubernetes-dashboard was started on node k8s-node2, so access the dashboard via node2's IP address (192.168.247.13) instead.

9.3 Image pull fails: repository does not exist or may require 'docker login'

9.3.1 Symptom

Failed to pull image "registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-dev-weeget-infinite-id-generator-business:202310190744": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-dev-weeget-infinite-id-generator-business, 
repository does not exist or may require 'docker login': denied: requested access to the resource is denied

9.3.2 Solution

9.3.2.1 Check whether the image registry is accessible
  • Can the image be pulled manually?

docker pull registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-dev-weeget-infinite-id-generator-business:202310190744

  • Is the repository type set to public?

9.3.2.2 Add a pull secret in k8s and reference it
  • Create the secret

kubectl create secret docker-registry aliyun-secret --docker-server=registry.cn-shenzhen.aliyuncs.com --docker-username=ju12456 --docker-password=ju12456 --docker-email=ju12456@163.com

  • Confirm the secret was created successfully

#Method 1
kubectl get secret aliyun-secret -o json | jq '.data | map_values(@base64d)'
{
  ".dockerconfigjson": "{\"auths\":{\"https://index.docker.io/v1\":{\"username\":\"docker-username\",\"password\":\"password\",\"email\":\"docker-email@gmail.com\",\"auth\":\"base64encodedtoken\"}}}"
}

#Method 2
[root@k8s-master juc]# kubectl get secret aliyun-secret -o yaml                                                                                          
apiVersion: v1                                                                                                                                           
data:                                                                                                                                                    
  .dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5jbi1zaGVuemhlbi5hbGl5dW5jcy5jb20iOnsidXNlcm5hbWUiOiJqdTE5ODkxMzI2MTIiLCJwYXNzd29yZCI6IjE1ODk4NTEwNTE1cSIs
ImVtYWlsIjoianUxOTg5MTMyNkAxNjMuY29tIiwiYXV0aCI6ImFuVXhPVGc1TVRNeU5qRXlPakUxT0RrNE5URXdOVEUxY1E9PSJ9fX0=                                                 
kind: Secret                                                                                                                                             
metadata:                                                                                                                                                
  creationTimestamp: "2023-10-12T14:05:03Z"                                                                                                              
  managedFields:                                                                                                                                         
  - apiVersion: v1                                                                                                                                       
    fieldsType: FieldsV1                                                                                                                                 
    fieldsV1:                                                                                                                                            
      f:data:                                                                                                                                            
        .: {}                                                                                                                                            
        f:.dockerconfigjson: {}                                                                                                                          
      f:type: {}                                                                                                                                         
    manager: kubectl                                                                                                                                     
    operation: Update                                                                                                                                    
    time: "2023-10-12T14:05:03Z"                                                                                                                         
  name: aliyun-secret                                                                                                                                    
  namespace: default                                                                                                                                     
  resourceVersion: "453920"                                                                                                                              
  selfLink: /api/v1/namespaces/default/secrets/aliyun-secret                                                                                             
  uid: b488fe4e-0439-4384-8a45-b69305b66c06                                                                                                              
type: kubernetes.io/dockerconfigjson                                                                                                                     
[root@k8s-master juc]#  
[root@k8s-master juc]# echo 'eyJhdXRocyI6eyJyZWdpc3RyeS5jbi1zaGVuemhlbi5hbGl5dW5jcy5jb20iOnsidXNlcm5hbWUiOiJqdTE5ODkxMzI2MTIiLCJwYXNzd29yZCI6IjE1ODk4NTEw
NTE1cSIsImVtYWlsIjoianUxOTg5MTMyNkAxNjMuY29tIiwiYXV0aCI6ImFuVXhPVGc1TVRNeU5qRXlPakUxT0RrNE5URXdOVEUxY1E9PSJ9fX0=' |base64 -d                             
{"auths":{"registry.cn-shenzhen.aliyuncs.com":{"username":"juxxxx","password":"xxxxxx","email":"juxxxx@163.com","auth":"anUxOTg5MTMyNjEyO
jE1ODk4NTEwNTE1cQ=="}}}
[root@k8s-master juc]# 

  • Reference the secret in the Pod deployment

1. Set the image pull secret:

imagePullSecrets:
  - name: aliyun-secret

2. Replace the image:

image: >-
            registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-dev-weeget-infinite-id-generator-business:202310190744

The full Deployment then looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '13'
    meta.helm.sh/release-name: weeget-infinite-id-generator
    meta.helm.sh/release-namespace: inspire
  creationTimestamp: '2021-04-19T14:17:26Z'
  generation: 13
  labels:
    app.kubernetes.io/instance: weeget-infinite-id-generator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: inspire-weeget-infinite-id-generator
    helm.sh/chart: weeget-20211124183501
  name: weeget-infinite-id-generator
  namespace: inspire
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app.kubernetes.io/instance: weeget-infinite-id-generator
      app.kubernetes.io/name: inspire-weeget-infinite-id-generator
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: '2022-04-09T15:30:03+08:00'
        redeploy-timestamp: '1696471202'
      labels:
        app.kubernetes.io/instance: weeget-infinite-id-generator
        app.kubernetes.io/name: inspire-weeget-infinite-id-generator
    spec:
      containers:
        - env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: DEPLOYMENT_NAME
              value: weeget-infinite-id-generator
            - name: DEPLOYMENT_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: DEPLOYMENT_VERSION
              value: '20210630162147'
            - name: DEPLOYMENT_HOSTNAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: CUSTOM_JAVA_OPTS
              value: >-
                -server -Xms2560m -Xmx2560m -Xmn1200m -XX:MetaspaceSize=256m
                -XX:MaxMetaspaceSize=256m -XX:+UseConcMarkSweepGC
                -XX:+UseCMSCompactAtFullCollection
                -XX:CMSInitiatingOccupancyFraction=70
                -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=50
                -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8
                -XX:-UseParNewGC
            - name: DISCOVERY_ADDRESS
              value: nacos-headless.middleware
            - name: DISCOVERY_NAMESPACE
              value: ali-pressure
            - name: JAVA_HOME
              value: /usr/local/jdk1.8.0_201
            - name: LANG
              value: C.UTF-8
            - name: LANGUAGE
              value: C.UTF-8
            - name: LC_ALL
              value: C.UTF-8
            - name: SW_AGENT_NAME
              value: weeget-infinite-id-generator-business
            - name: aliyun_logs_test-dev-inspire-weeget-infinite-id-generator
              value: stdout
          image: >-
            registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-dev-weeget-infinite-id-generator-business:202310190744
          imagePullPolicy: IfNotPresent
          name: weeget-infinite-id-generator
          ports:
            - containerPort: 9999
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: '0'
              memory: '0'
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/localtime
              name: volume-localtime
            - mountPath: /data/logs/inspire/weeget-infinite-id-generator/
              name: volumn-sls-temp-a22fa710a07535409aa326aa93785174
            - mountPath: /data/logs/weeget-infinite-id-generator/
              name: volumn-sls-temp-376789191988b074fd2eae11d5d1b9cb
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: aliyun-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /etc/localtime
            type: ''
          name: volume-localtime
        - emptyDir: {}
          name: volumn-sls-temp-a22fa710a07535409aa326aa93785174
        - emptyDir: {}
          name: volumn-sls-temp-376789191988b074fd2eae11d5d1b9cb

10 Install nacos on k8s

10.1 Deploy nacos on k8s (cluster mode)

10.1.1 Install MySQL externally
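
This step happens outside k8s: install MySQL on the middleware host (192.168.125.21 in the table above) and prepare a database that matches the ConfigMap below. A minimal sketch, assuming the nacos-mysql.sql schema file shipped inside the nacos 1.x release package:

CREATE DATABASE nacos DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'nacos'@'%' IDENTIFIED BY 'nacos';
GRANT ALL PRIVILEGES ON nacos.* TO 'nacos'@'%';
FLUSH PRIVILEGES;

#then import the table schema from the nacos release package
mysql -unacos -pnacos nacos < nacos-mysql.sql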

10.1.2 Nacos configuration - ConfigMap

apiVersion: v1                                                                                                                                           
kind: ConfigMap                                                                                                                                          
metadata:                                                                                                                                                
  name: nacos-cm                                                                                                                                         
  namespace: middleware                                                                                                                                  
data:                                                                                                                                                    
  mysql.host: "192.168.125.21"                                                                                                                           
  mysql.db.name: "nacos"                                                                                                                                 
  mysql.port: "3306"                                                                                                                                     
  mysql.user: "nacos"                                                                                                                                    
  mysql.password: "nacos"
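
Everything nacos-related lives in the middleware namespace; if it does not exist yet, create it before applying the ConfigMap (the file name nacos-cm.yaml is just an example here):

kubectl create namespace middleware
kubectl apply -f nacos-cm.yaml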

10.1.3 Deploy nacos - StatefulSet

apiVersion: apps/v1                                                                                                                                      
kind: StatefulSet                                                                                                                                        
metadata:                                                                                                                                                
  generation: 21                                                                                                                                         
  name: nacos                                                                                                                                            
  namespace: middleware                                                                                                                                  
spec:                                                                                                                                                    
  podManagementPolicy: OrderedReady                                                                                                                      
  replicas: 3                                                                                                                                            
  revisionHistoryLimit: 10                                                                                                                               
  selector:                                                                                                                                              
    matchLabels:                                                                                                                                         
      app: nacos                                                                                                                                         
  serviceName: nacos-headless                                                                                                                            
  template:                                                                                                                                              
    metadata:                                                                                                                                            
      annotations:                                                                                                                                       
        pod.alpha.kubernetes.io/initialized: 'true'                                                                                                      
        redeploy-timestamp: '1691472918997'                                                                                                              
      labels:                                                                                                                                            
        app: nacos                                                                                                                                       
    spec:                                                                                                                                                
      affinity:                                                                                                                                          
        podAntiAffinity:                                                                                                                                 
          requiredDuringSchedulingIgnoredDuringExecution:                                                                                                
            - labelSelector:                                                                                                                             
                matchExpressions:                                                                                                                        
                  - key: app                                                                                                                             
                    operator: In                                                                                                                         
                    values:                                                                                                                              
                      - nacos
              topologyKey: kubernetes.io/hostname                                                                                                        
      containers:                                                                                                                                        
        - name: nacos                                                                                                                                    
          imagePullPolicy: IfNotPresent                                                                                                                  
          image: nacos/nacos-server:1.3.2                                                                                                                
          resources:                                                                                                                                     
            requests:                                                                                                                                    
              memory: "2Gi"                                                                                                                              
              cpu: "500m"                                                                                                                                
          ports:                                                                                                                                         
            - containerPort: 8848                                                                                                                        
              name: client-port                                                                                                                          
            - containerPort: 9848                                                                                                                        
              name: client-rpc                                                                                                                           
            - containerPort: 9849                                                                                                                        
              name: raft-rpc                                                                                                                             
            - containerPort: 7848                                                                                                                        
              name: old-raft-rpc                                                                                                                         
          env:                                                                                                                                           
            - name: NACOS_REPLICAS                                                                                                                       
              value: "3"                                                                                                                                 
            - name: SERVICE_NAME                                                                                                                         
              value: "nacos-headless"                                                                                                                    
            - name: DOMAIN_NAME                                                                                                                          
              value: "cluster.local"                                                                                                                     
            - name: POD_NAMESPACE                                                                                                                        
              valueFrom:                                                                                                                                 
                fieldRef:                                                                                                                                
                  apiVersion: v1                                                                                                                         
                  fieldPath: metadata.namespace                                                                                                          
            - name: MYSQL_SERVICE_HOST                                                                                                                   
              valueFrom:                                                                                                                                 
                configMapKeyRef:                                                                                                                         
                  name: nacos-cm                                                                                                                         
                  key: mysql.host                                                                                                                        
            - name: MYSQL_SERVICE_DB_NAME                                                                                                                
              valueFrom:                                                                                                                                 
                configMapKeyRef:                                                                                                                         
                  name: nacos-cm                                                                                                                         
                  key: mysql.db.name                                                                                                                     
            - name: MYSQL_SERVICE_PORT                                                                                                                   
              valueFrom:                                                                                                                                 
                configMapKeyRef:                                                                                                                         
                  name: nacos-cm                                                                                                                         
                  key: mysql.port                                                                                                                        
            - name: MYSQL_SERVICE_USER                                                                                                                   
              valueFrom:                                                                                                                                 
                configMapKeyRef:                                                                                                                         
                  name: nacos-cm                                                                                                                         
                  key: mysql.user                                                                                                                        
            - name: MYSQL_SERVICE_PASSWORD                                                                                                               
              valueFrom:                                                                                                                                 
                configMapKeyRef:                                                                                                                         
                  name: nacos-cm                                                                                                                         
                  key: mysql.password                                                                                                                    
            - name: SPRING_DATASOURCE_PLATFORM                                                                                                           
              value: "mysql"                                                                                                                             
            - name: NACOS_SERVER_PORT                                                                                                                    
              value: "8848"                                                                                                                              
            - name: NACOS_APPLICATION_PORT                                                                                                               
              value: "8848"                                                                                                                              
            - name: PREFER_HOST_MODE                                                                                                                     
              value: "hostname"                                                                                                                          
            - name: NACOS_SERVERS                                                                                                                        
              value: >-                                                                                                                                  
                nacos-0.nacos-headless.middleware.svc.cluster.local:8848                                                                                 
                nacos-1.nacos-headless.middleware.svc.cluster.local:8848                                                                                 
                nacos-2.nacos-headless.middleware.svc.cluster.local:8848                                                                                 
          image: 'nacos/nacos-server:1.3.2'                                                                                                              
          imagePullPolicy: IfNotPresent                                                                                                                  
          name: k8snacos                                                                                                                                 
          ports:                                                                                                                                         
            - containerPort: 8848                                                                                                                        
              name: client                                                                                                                               
              protocol: TCP                                                                                                                              
          resources:                                                                                                                                     
            requests:                                                                                                                                    
              cpu: 500m                                                                                                                                  
              memory: 2Gi                                                                                                                                
          terminationMessagePath: /dev/termination-log                                                                                                   
          terminationMessagePolicy: File                                                                                                                 
      dnsPolicy: ClusterFirst                                                                                                                            
      restartPolicy: Always                                                                                                                              
      schedulerName: default-scheduler                                                                                                                   
      securityContext: {}                                                                                                                                
      terminationGracePeriodSeconds: 30                                                                                                                  
  updateStrategy:                                                                                                                                        
    rollingUpdate:                                                                                                                                       
      partition: 0                                                                                                                                       
    type: RollingUpdate

10.1.3 Configure the Nacos Service (SVC) address

apiVersion: v1                                                                                                                                           
kind: Service                                                                                                                                            
metadata:                                                                                                                                                
  name: nacos-headless                                                                                                                                   
  namespace: middleware                                                                                                                                  
  labels:                                                                                                                                                
    app: nacos                                                                                                                                           
spec:                                                                                                                                                    
  publishNotReadyAddresses: true                                                                                                                         
  ports:                                                                                                                                                 
    - port: 8848                                                                                                                                         
      name: server                                                                                                                                       
      targetPort: 8848                                                                                                                                   
    - port: 9848                                                                                                                                         
      name: client-rpc                                                                                                                                   
      targetPort: 9848                                                                                                                                   
    - port: 9849                                                                                                                                         
      name: raft-rpc                                                                                                                                     
      targetPort: 9849                                                                                                                                   
    ## Election port kept for compatibility with Nacos 1.4.x
    - port: 7848                                                                                                                                         
      name: old-raft-rpc                                                                                                                                 
      targetPort: 7848                                                                                                                                   
  clusterIP: None                                                                                                                                        
  selector:                                                                                                                                              
    app: nacos 

Expose external access via a NodePort Service:

apiVersion: v1                                                                                                                                           
kind: Service                                                                                                                                            
metadata:                                                                                                                                                
  name: nacos-svc                                                                                                                                        
  namespace: middleware                                                                                                                                  
  labels:                                                                                                                                                
    app: nacos                                                                                                                                           
spec:                                                                                                                                                    
  publishNotReadyAddresses: true                                                                                                                         
  ports:                                                                                                                                                 
    - port: 8848                                                                                                                                         
      name: server                                                                                                                                       
      targetPort: 8848                                                                                                                                   
  type: NodePort                                                                                                                                         
  selector:                                                                                                                                              
    app: nacos 
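
After the StatefulSet and both Services are applied, the rollout can be verified; a minimal sketch, assuming the manifests above were saved into a single nacos.yaml (the file name is illustrative):

# Apply the Nacos manifests and watch the three replicas come up
[root@k8s-master juc]# kubectl apply -f nacos.yaml
[root@k8s-master juc]# kubectl get pods -n middleware -l app=nacos -w
# The NodePort assigned to nacos-svc appears in the PORT(S) column
[root@k8s-master juc]# kubectl get svc -n middleware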

11 Install Ingress in k8s

11.1 Installation

Run the following on the master node; for details see https://github.com/nginxinc/kubernetes-ingress/blob/v1.5.3/docs/installation.md

[root@k8s-master juc]# kubectl apply -f https://kuboard.cn/install-script/v1.16.0/nginx-ingress.yaml
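
Once the apply completes, the controller pods can be checked; a minimal sketch, assuming this manifest creates its resources in the nginx-ingress namespace:

[root@k8s-master juc]# kubectl get pods -n nginx-ingress -o wide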

11.2 Create an Ingress rule

The nacos-ingress.yml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nacos-ingress
  namespace: middleware
spec:
  rules:
    - host: nacos.juc.com                    # Proxy requests for nacos.juc.com to nacos-headless:8848
      http:
        paths:
          - backend:
              serviceName: nacos-headless
              servicePort: 8848
    - host: nacos2.juc.com                   # Proxy requests for nacos2.juc.com to nacos-headless:9848
      http:
        paths:
          - backend:
              serviceName: nacos-headless
              servicePort: 9848
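
Apply the rule on the master node (the namespace is already set inside the manifest):

[root@k8s-master juc]# kubectl apply -f nacos-ingress.yml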

11.3 View the Ingress

# View the ingress
[root@k8s-master juc]#  kubectl get ingress nacos-ingress
NAME                HOSTS                      ADDRESS      PORTS   AGE
nacos-ingress   nacos.juc.com,nacos2.juc.com   10.10.33.4   80      3h14m


[root@k8s-master juc]#  kubectl describe ingress nacos-ingress
Name:             nacos-ingress
Namespace:        middleware
Address:          10.10.33.4
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host         Path  Backends
  ----         ----  --------
  nacos.juc.com
                  nacos-headless:8848(<none>)
  nacos2.juc.com
                  nacos-headless:9848 (<none>)

11.4 Access
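
To reach the console through the Ingress, resolve the two hostnames to the Ingress address reported above; a minimal sketch using client-side hosts entries (the address 10.10.33.4 is taken from the kubectl describe output):

# /etc/hosts on the client machine
10.10.33.4  nacos.juc.com
10.10.33.4  nacos2.juc.com

Then open http://nacos.juc.com/nacos in a browser (/nacos is the default console context path of nacos-server).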

12 Development environment: building the Java project image and pushing it to the image registry

12.1 Build the baseImage

# Create a working directory
mkdir baseImage && cd baseImage

# Download the JDK archive jdk-8u261-linux-x64.tar.gz from:
https://www.oracle.com/java/technologies/javase/javase8u211-later-archive-downloads.html

[root@localhost baseImage]# vi Dockerfile 
# Base image
FROM centos:7
# Image maintainer
MAINTAINER juc
# Switch the working directory
WORKDIR /usr
RUN mkdir /usr/local/java
# ADD extracts the JDK tarball (relative path) into the container
ADD jdk-8u261-linux-x64.tar.gz /usr/local/java/

# Configure the Java environment variables
ENV JAVA_HOME /usr/local/java/jdk1.8.0_261
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
ENV PATH $JAVA_HOME/bin:$PATH
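
With the Dockerfile and the JDK tarball in place, build the base image; a minimal sketch, assuming the tag juc-business-jdk8u261-v1 that appears in the docker images listing below:

[root@localhost baseImage]# docker build -t juc-business-jdk8u261-v1 .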

12.2 Push the baseImage to the image registry

[root@localhost baseImage]# docker images                                                                                                                
REPOSITORY                      TAG            IMAGE ID       CREATED         SIZE        
juc-business-jdk8u261-v1        latest         29982c0b71b0   24 hours ago    556MB       
mysql                           latest         2d9aad1b5856   2 months ago    574MB       
hello-world                     latest         9c7a54a9a43c   5 months ago    13.3kB      
springhgui/fasttunnel           latest         cd73623873df   10 months ago   213MB       
[root@localhost baseImage]#  
[root@localhost baseImage]# docker login --username=juxxx registry.cn-shenzhen.aliyuncs.com
[root@localhost baseImage]# docker tag 29982c0b71b0 registry.cn-shenzhen.aliyuncs.com/juc_java/juc_bussiness_jdk261:v1
[root@localhost baseImage]# docker push registry.cn-shenzhen.aliyuncs.com/juc_java/juc_bussiness_jdk261:v1

12.3 Set the registry account and password in Maven settings.xml

    <servers>
        <server>
            <id>docker-hub</id>
            <username>ju123456</username>
            <password>xxxx</password>
        </server>
    </servers>

12.4 Changes to the pom.xml of the Java project weeget-infinite-id-generator

The <id> of the server in settings.xml (typically ~/.m2/settings.xml) must match the <serverId> in pom.xml.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>cn.weeget</groupId>
        <artifactId>weeget-infinite-id-generator</artifactId>
        <version>1.0.0-SNAPSHOT</version>
    </parent>

    <name>weeget-infinite-id-generator-business</name>
    <artifactId>weeget-infinite-id-generator-business</artifactId>
    <packaging>jar</packaging>
    <version>1.0.0-SNAPSHOT</version>

    <description>id生成器</description>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <java.version>1.8</java.version>
        <dockerfile.ali.imageName>registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-${docker-env}-${project.build.finalName}:${docker-tag}</dockerfile.ali.imageName>
        <dockerfile.ali.baseImage>registry.cn-shenzhen.aliyuncs.com/juc_java/juc_bussiness_jdk261:v1</dockerfile.ali.baseImage>
    </properties>


    <build>
        <finalName>${project.artifactId}</finalName>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <!-- Docker Maven plugin; see https://github.com/spotify/docker-maven-plugin for configuration details -->
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>

                <version>1.2.0</version>
                <configuration>
                    <serverId>docker-hub</serverId>
                    <registryUrl>https://registry.cn-shenzhen.aliyuncs.com/juc_java/inspire-${docker-env}-${project.build.finalName}/</registryUrl>
                    <imageName>${dockerfile.ali.imageName}</imageName>
                    <baseImage>${dockerfile.ali.baseImage}</baseImage>
                    <entryPoint>["java", "-jar", "/usr/local/${project.build.finalName}/${project.build.finalName}.jar"]</entryPoint>
                    <dockerHost>http://192.168.125.21:2375</dockerHost>
                    <env>
                        <CUSTOM_JAVA_OPTS>-XX:+UseContainerSupport -XX:-UseAdaptiveSizePolicy -XX:MaxDirectMemorySize=512M -Xss512K -XX:MetaspaceSize=256m  -XX:MaxMetaspaceSize=512m -XX:MaxRAMPercentage=75.0 -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/jvm/oom/${project.build.finalName}.hprof -XX:ErrorFile=/data/logs/jvm/${project.build.finalName}_hs_err_pid.log</CUSTOM_JAVA_OPTS>
                        <JAR_PATH>/usr/local/${project.build.finalName}/${project.build.finalName}.jar</JAR_PATH>
                        <DISCOVERY_ADDRESS>nacos-headless.middleware</DISCOVERY_ADDRESS>
                    </env>
                    <resources>
                        <resource>
                            <targetPath>/usr/local/${project.build.finalName}/</targetPath>
                            <directory>./target/</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                </configuration>
            </plugin>

        </plugins>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>true</filtering>
                <excludes>
                    <exclude>filters/*</exclude>
                </excludes>
            </resource>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>true</filtering>
                <includes>
                    <include>bootstrap.yml</include>
                </includes>
            </resource>
        </resources>
    </build>

</project>
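
Note that <dockerHost> points the plugin at the Docker daemon on the middleware host (192.168.125.21), so that host must expose the Docker remote API on TCP port 2375. A minimal sketch of a systemd drop-in enabling it (assumed configuration, not taken from the original setup; exposing 2375 without TLS is only acceptable on a trusted lab network):

# /etc/systemd/system/docker.service.d/override.conf on 192.168.125.21
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker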

12.5 Use IDEA to build the jar, build the Docker image, and push it to the registry
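
The same flow can be driven from the command line; a minimal sketch using the plugin's docker:build goal (docker-env and docker-tag are the properties referenced in the pom above; the values here are illustrative):

mvn clean package docker:build -DpushImage -Ddocker-env=dev -Ddocker-tag=v1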
