K8s Management System Project [K8s Environment - Application Deployment]

There is little point in running the database on K8s itself, so we keep using the MySQL instance from the previous experiment, running in Docker at 192.168.31.24:3306.

1. Backend Deployment

1.1 Configuration File

config.go is the main configuration file of the backend API. It mainly covers the following settings:

  1. Listen IP and port
  2. The kubernetes connection file (changeable through a ConfigMap)
  3. Console login username and password
  4. Database IP (changeable through a Secret)
  5. Database port (changeable through a Secret)
  6. Database name
  7. Database username (changeable through a Secret)
  8. Database password (changeable through a Secret)

To keep the project flexible, some of these parameters can be overridden at deploy time. (The rest could be handled the same way; I was just feeling a bit lazy...)

package config

import "time"

const (
	//listen address
	ListenAddr = "0.0.0.0:9091"
	//kubeconfig path
	Kubeconfig = "config/config"
	//number of lines shown when tailing pod logs
	PodLogTailLine = 2000
	//login account and password
	AdminUser = "admin"
	AdminPwd  = "123456"

	//database configuration
	DbType = "mysql"
	DbHost = "Db_Ip"
	DbPort = Db_Port
	DbName = "k8s_dashboard"
	DbUser = "Db_User"
	DbPwd  = "Db_Pass"
	//print mysql debug sql logs
	LogMode = false
	//connection pool configuration
	MaxIdleConns = 10               //maximum idle connections
	MaxOpenConns = 100              //maximum open connections
	MaxLifeTime  = 30 * time.Second //maximum connection lifetime
)

1.2 Building the API Image

1.2.1 Startup Script

This script substitutes the variables defined in the Secret into config.go, so the configuration can be changed without rebuilding the image.

#!/bin/bash
echo "export GO111MODULE=on" >> ~/.profile
echo "export GOPROXY=https://goproxy.cn" >> ~/.profile
echo ${Db_Ip}>/data/dip.txt
echo ${Db_Port}>/data/dprt.txt
echo ${Db_User}>/data/duser.txt
echo ${Db_Pass}>/data/dpas.txt
dip=`cat /data/dip.txt`
dprt=`cat /data/dprt.txt`
duser=`cat /data/duser.txt`
dpas=`cat /data/dpas.txt`
[ ${Db_Ip} ] && sed -Ei "s/Db_Ip/${dip}/g" /data/k8s-plantform/config/config.go
[ ${Db_Port} ] && sed -Ei "s/Db_Port/${dprt}/g" /data/k8s-plantform/config/config.go
[ ${Db_User} ] && sed -Ei "s/Db_User/${duser}/g" /data/k8s-plantform/config/config.go
[ ${Db_Pass} ] && sed -Ei "s/Db_Pass/${dpas}/g" /data/k8s-plantform/config/config.go
sleep 3
rm -f /data/*.txt
source ~/.profile
cd /data/k8s-plantform
go run main.go
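
To see what this does to config.go, here is a minimal, standalone sketch of the substitution step (example values only; paths moved to /tmp so it can run anywhere):

#!/bin/bash
# Minimal illustration of the placeholder substitution performed by start.sh.
# The real values arrive as environment variables injected from the Secret.
export Db_Ip=192.168.31.24

echo ${Db_Ip} > /tmp/dip.txt
dip=$(cat /tmp/dip.txt)   # command substitution strips the trailing newline

# Apply the same sed to a sample line from config.go:
echo 'DbHost = "Db_Ip"' | sed -E "s/Db_Ip/${dip}/g"
# Output: DbHost = "192.168.31.24"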

1.2.2 Dockerfile

# Base image
FROM centos:7.9.2009

# Maintainer information
LABEL maintainer="qiuqin <13917099322@139.com>"

# Create the data directory
RUN mkdir -p /data/

# Copy the application
ADD ./k8s-plantform.tar /data/

# Install Go and download module dependencies
RUN cd /etc/yum.repos.d  && \
    rm -f *.repo 
RUN cd /data/k8s-plantform&& \
    curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && \
    echo "export GO111MODULE=on" >> ~/.profile&& \
    echo "export GOPROXY=https://goproxy.cn" >> ~/.profile&& \
    source ~/.profile&& \
    rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO &&\
    curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo &&\
    yum install go -y &&\
    go mod tidy

WORKDIR /data/k8s-plantform
ADD ./start.sh /data/k8s-plantform
# Expose the port
EXPOSE 9091

# Start the application
CMD ["/data/k8s-plantform/start.sh"]

1.2.3 Image Build Script

Since the image needs frequent changes during testing, build.sh builds the image and pushes it to the Harbor registry.

#!/bin/bash
docker build -t harbor.intra.com/k8s-dashboard/api:v$1 .
docker push harbor.intra.com/k8s-dashboard/api:v$1

Build the image:

./build.sh 5

This produces an image named harbor.intra.com/k8s-dashboard/api:v5.
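
As a quick sanity check before deploying, you can confirm the placeholders are still present in the image, so the startup script has something to replace (tag v5 matches the build above):

# Confirm the placeholders survived the image build
docker run --rm harbor.intra.com/k8s-dashboard/api:v5 \
    grep -nE 'Db_Ip|Db_Port|Db_User|Db_Pass' /data/k8s-plantform/config/config.go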

1.3 YAML Files

1.3.1 ConfigMap

The ConfigMap holds the kubeconfig file, copied from ~/.kube/config on the master node; the API uses it to connect to Kubernetes and perform CRUD operations on resources through kube-api. There are two common ways to create the ConfigMap resource:

  1. With the command line
  2. With a YAML manifest

1.3.1.1 Creating with the command line

kubectl create configmap api-config-configmap --from-file=config=config

Confirm the ConfigMap was created:

# kubectl get configmap api-config-configmap
NAME                   DATA   AGE
api-config-configmap   1      60m

1.3.1.2 Creating with YAML

Of course, the ConfigMap can also be defined and deployed with a YAML manifest.

api-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config-configmap
data:
  config: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1USXhNakExTkRNME0xb1hEVE15TVRJd09UQTFORE0wTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFl0CnY1bnlrY1FtMjRoejVXYTZTZHhOSXoxMjdubUdFNVlQcGM3eUFOdFRXc1ViQ2Y4dmt3R3BRTjVVVjJjUzJnSmQKaGl3d21Cb3RnUHd6MWdqNDZTOFdCZHFqOUZScmh3bzJieWVLQlNCSldSdG1iZXlFb2lxQ2N4V1dCekFZUWlFQwpQRFBuajgvbm9HL0hiYWpxUS9QbzVxZklTOVFaV1ZQN1NIenByMENseEt4RlNEdEM5VWdDRmp3QTV0MTdKakZ5CkZndFJUdVIrTEJJQjVSdkdsb3dQbk9CUWQ1cnlUcVhKWS9hcGxQZ1NmajJqVkl4b1gzOEtNaUI5bmhlNGIyS0IKNk9tdWFUYUdZTy9DaEMwRzI5bCtKYWprNHVJWlhZSlRUZXQ3NGUzbzNpK0o0WnQzUkhVVXdiVUYxamhZVXpkdwpPL3U2bDg5Q3BNSUZ0NTJZaWNjQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRm5zbWxvSHYzRGQzMlJncUZXRzE2QUpKRjhNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCNTNiVHdBeENNekVDZGJKK3NEZVJkNnVlTm4xSkxIMHV5UmF4Zk54bHdSb0pPOVVYRQpCQWRwdUpGeWpPWVdQSlVyTVRiY3RSd0hRQ3RSR29kMlJZNW5FZkhXR1hjYUp0VEJ4bGNQWU5hcWJYMDB0NWZkCjVBdzdsWmN1Z0hFdzdRQk0rNW51UnU5MkZrZ1p4Q0NwVHZYdHFyKzVCTCtSVHVVV3IwQ3A1ZjJFOGIyZUpxSzkKV01oOXdnazVIWW1ZZEJMaGlrcXVMMHBEZ3RUWHJ3MmJIREVZZGZCT1I2SldjT0dLWEQ1Y2QvUXB6ZUI4MnhmSQo5RDl3Q3lwUktjb3RIakY5eEVvNGpOL3l6MU1PdUpBdnRNa3VPY1NnQmpyUDEwUTF1WW5aV2lmbWg1b0pjRExiClJ0M3ZxaW1XSHJyWTUwanBpOW1aV3dMMUNheDFpQ3NvZkJ0cwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
        server: https://192.168.31.41:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: kubernetes-admin
      name: kubernetes-admin@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJTWxBQ0lmOUs2UDh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpFeU1USXdOVFF6TkROYUZ3MHlNekV5TVRJd05UUXpORFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhJQ0drV25iWi9DRmhTdWcKbjROVXpMWUVnM09rSlByc2ZWZkZvUHVYc0x2U3dVMHAydzRLUXlTa2dVd29hRDdSYzQ1emtaQlNUTFJrVzRreApUYWt5Mlo1cHNEQngwSURzN1dFTG90MXlCWEpHaUJaN1RBYmo1dXFCZXFUUkt6d1FxaVhrNDY5aDBHQmdTK3BlCkRHNERlRXJwdUdvMkxJdHhYTHF5N2huRDB2eGtOblRVL0IvTnl3dy9MQXZTNzZvWDJhQW1xN3FZM3VRekV0WXQKWjJ3Rk9GSkdmYmpZNm1uK0NzK21jdjhWNEM4V0RDdlpaVTNYMzh4V0R1aGJvM1F2LzlwYTVZa2dlbTZJaFlOUQplS0hyb0dOUGNlVHZyYjlyWFdRRmFzVmtZd082ZFJKZ25mcFlJNXhlc3FvNjVWY1Q4dEhrUTJuaVJJT2Vxd3FBCks0YU5FUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVFV2V5YVdnZS9jTjNmWkdDb1ZZYlhvQWtrWHd3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFJaWZaZmxncWljMlllY3lQYXNYK0Znd25RcTkrdkVobk4wWmFuUldkQXRQb0xTMnpDMThzMHRDCk9FRlYxQ2sya1pBL3NZd24yWE9kV2dvLzF2ZEFuWXF1WFN0OGM4WEhTa05UVGFJYWZNUVRmbXNkd3dyQjlsSFQKUHdPMjFuVllVOGo0eHJZbS9JYlppYm1tTUQ5ZUxqYytGWXc2bElrN0RuakJOZ3RWajBhSmFxRE9CbHpJa2t2TwpVa1B5ek9SQ0h0bTA2cVNjaXQ2R0syTDNhdTZDWkJpdFcwQjM4SXdPMExLcXZ2a3NCZjlSaDlWaTFjWkxPUGNaCmFNbTIvU3dGaFd6bWZySndJaDZyYlVuYytraXBEV1U5YkhQUFlKVkpOdHdjQzZmd0RrSDh6WWU4RVFUNWJFUDcKREViTzAwL3J6TDFGdk5GeEwvc20zWWlvR1N4bzVJWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeElDR2tXbmJaL0NGaFN1Z240TlV6TFlFZzNPa0pQcnNmVmZGb1B1WHNMdlN3VTBwCjJ3NEtReVNrZ1V3b2FEN1JjNDV6a1pCU1RMUmtXNGt4VGFreTJaNXBzREJ4MElEczdXRUxvdDF5QlhKR2lCWjcKVEFiajV1cUJlcVRSS3p3UXFpWGs0NjloMEdCZ1MrcGVERzREZUVycHVHbzJMSXR4WExxeTdobkQwdnhrTm5UVQovQi9OeXd3L0xBdlM3Nm9YMmFBbXE3cVkzdVF6RXRZdFoyd0ZPRkpHZmJqWTZtbitDcyttY3Y4VjRDOFdEQ3ZaClpVM1gzOHhXRHVoYm8zUXYvOXBhNVlrZ2VtNkloWU5RZUtIcm9HTlBjZVR2cmI5clhXUUZhc1ZrWXdPNmRSSmcKbmZwWUk1eGVzcW82NVZjVDh0SGtRMm5pUklPZXF3cUFLNGFORVFJREFRQUJBb0lCQUQyOFc0cm9CU1M4cmxaTwpoS0pZOHBWMlFpakNkam1nRkJpMU1NUUpCM2xoS1MvTi9HNTBGTWxQZzllVGc4WnNwZ1YySmR6L3lMdU1tVk1nCjRUcVRCQVRXL2tGNmx1ZDQrZmNDWEZPSTJ6L1d6VTRJTWlpS3FhTnMzYzBZWnhiOFFnZ1M2N3lVNTFnK1QwTEsKbVUyeWFxaXFjSStkM3ZOVHhBUHNMRGNlSlNYdDFPSVRZTFV5bXYvZ2JJUjY4SDFrZDk2NXNLWXdQUnFlNVZJYgpYNmc5SG82WVlaOUE3eWVJc1YwYXJaOUoyUVhjNEtVcEVvemVkQUh6K0NTb0VZM3JOeDhJM2ZZMTloL3lnd21tCjYyRFY4cFB6T2Z0N1BTdzA0cWsrZ2JpcFhXckcxcUh1Z1dmbnZXNWJybVJ4LzRiRnJKTTNsWHVTVVZ2Q2JMTm8KajVyeWZTRUNnWUVBNUVHRTBjVVpsSllQRmNwQWNwSWw0UGUremxDbVFBY0cwQjk5UXpoQ0N3T29UdUxoNnhOcgpWNWMwV3oxS1ZjcDYxc3FZZUxsUUZkM3lyZ0krWUo0bW1PYkJTTnlUSWFpbDRoR1hENUJYVmZkeFpTenRDT0dpCmovMmVyS3FaY2EwYVN5blU0UjRFYVY0NjRqREE0QVNhejk0Y2R4eUlobWY1WWhsT2JWSkpFNjBDZ1lFQTNHTHgKaFFrVVpSY2JTM3pPWDRwamN4Z1IxNnhvRk1YcFJmNWF3Z0ZBT1JKV0tpTlRGUVlFNFdtVnZWS0NDelc3dWI5dApva3VMOVViYTJVMVd4bHFQZjR2aHJqVXNwY3g2dE4rUitiNGdVYWJJdGxib0lNcnhlcVJjNVVnQTV6N1N3NDZECndRU1BKS0ZvL0pmS0JneUNvamJTejA5cVF3czNqTXBkWHA5c3EzVUNnWUVBeE9yTlJoZVBnUU9RVWhFaFZuWkYKSFhjK2NrbGJrK003K25NZ0lzeTNGVDk3aFVydzhsZlhoRUpiRmRlamVLM3RHYjdBbVczdDdGK0ZESis4NXFlcAp6c0ZNd0tvaWVLaEJLKzVXNzBOc1JTcnE5Z2t6R1RWbmhHZWQ2NEptVEk2MUgyRWdXWElIQmt3WDZxbDZ6QWpNCjhrWEJNdlUzeHhTT0xoWjg5WTFHcENVQ2dZQnlNWHAvdW5LczVzb24xU1dCNzgwVUIvYkd6L2ltT3Q1aWZDYysKdXpNeDMwUnlWUmRwbjFMTUVjK2E1N09tWjFNOExlcDYyN1pMZzBsR3E0STVDUmV0dVNkWkF3aDlhSFIwWUJ2ZApVaHlnOGxDeDJsb3hFN2NJR3o1Zk4yM3daR2NGR1VVL3NFTVRjZWRhYXJRdGFqSU9KMllZTVVnWU1TbTVjK25wCmE2WDlPUUtCZ1FEVy85VFQrVTRobW1zbTBnUFdTUDg0RmlHYjBBaXhySFZad3RHODNweElaVUpZWDUxMVlqd08KZTBYelNZN2RDc08vQndWOEZxU2lrRUU0NVFwUG9nZnRUbWV2Q2pZa2lXUzNPZzkyYTBlSEgrdGZqcnNJaU1sNQpRaEQxSVg1SW1YczBKTVdRTURuZGtuKzdHNVBiQmZMTDlvS3I4VEIwNGhqUk5oMFM4VWRnWGc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

Deploy the ConfigMap:

kubectl apply -f api-configmap.yaml
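
Either way, it is worth confirming that the embedded kubeconfig survived intact, for example:

# Print the first few lines of the kubeconfig stored under the "config" key
kubectl get configmap api-config-configmap -o jsonpath='{.data.config}' | head -n 5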

1.3.2 Secret

api-secret.yaml

The Secret defines:

  1. Db_Ip: database IP
  2. Db_User: database username
  3. Db_Pass: database password
  4. Db_Port: database port

These variables are stored in the Secret as key-value pairs, with the values base64-encoded so that the plaintext is not exposed directly (base64 is only an encoding, though, and can be trivially decoded). Note that a plain echo appends a trailing newline, which ends up in the encoded value (hence the Cg==/Co= endings below); echo -n avoids that, and either way the startup script's command substitution strips it. The base64 value is obtained like this:

# echo "192.168.31.24"|base64
MTkyLjE2OC4zMS4yNAo=

Work out the values for each of the variables above in the same way and put them into the right places in api-secret.yaml.

apiVersion: v1
kind: Secret
metadata:
  name: dashboard-api-secret
type: Opaque
data:
  Db_Ip: MTkyLjE2OC4zMS4yNAo=
  Db_User: cm9vdAo=
  Db_Pass: MTIzNDU2Cg==
  Db_Port: MzMwNgo=

Deploy the Secret:

kubectl apply -f api-secret.yaml
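
After applying it, the stored values can be decoded back out of the cluster to double-check them (shown here for Db_Ip; the other keys work the same way):

# Decode one key of the Secret to verify the value
kubectl get secret dashboard-api-secret -o jsonpath='{.data.Db_Ip}' | base64 -d
# Expected output: 192.168.31.24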

1.3.3 Deployment

The Deployment manifest mainly defines the following:

  1. image: the core piece, i.e. the image we built in section 1.2
  2. resources: resource requests and limits
  3. env: imports the variables defined in the Secret, which start.sh then substitutes into the configuration file
  4. volumeMounts: mounts the kubeconfig from the ConfigMap at the path where the project's configuration expects it
  5. affinity: pod anti-affinity so that, once there are multiple replicas, pods are spread across nodes and a single node failure does not take the whole service down

api-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard-api-deployment
  labels:
    app: dashboard-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard-api
  template:
    metadata:
      labels:
        app: dashboard-api
    spec:
      containers:
      - name: dashboard-api
        image: harbor.intra.com/k8s-dashboard/api:v10
        resources:
          requests:
            memory: "100Mi"
            cpu: "0.1"
          limits:
            memory: "4096Mi"
            cpu: "4"
        env:
        - name: Db_Ip
          valueFrom:
            secretKeyRef:
              name: dashboard-api-secret
              key: Db_Ip
        - name: Db_User
          valueFrom:
            secretKeyRef:
              name: dashboard-api-secret
              key: Db_User
        - name: Db_Pass
          valueFrom:
            secretKeyRef:
              name: dashboard-api-secret
              key: Db_Pass
        - name: Db_Port
          valueFrom:
            secretKeyRef:
              name: dashboard-api-secret
              key: Db_Port
        volumeMounts:
        - name: api-config-configmap
          mountPath: /data/k8s-plantform/config/config
          subPath: config
        ports:
        - containerPort: 9091
      imagePullSecrets:
        - name: dashboard-api-secret
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: project
                  operator: In
                  values:
                  - dashboard-api
              topologyKey: topology.kubernetes.io/zone
              namespaces:
                - default
      volumes:
      - name: api-config-configmap
        configMap:
          name: api-config-configmap
          items:
          - key: config
            path: config

Deploy the Deployment:

kubectl apply -f api-deployment.yaml
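
Once the pod is running, you can check that the Secret values were injected as environment variables and that the ConfigMap landed where config.go expects the kubeconfig:

# Find the pod created by the Deployment
POD=$(kubectl get pod -l app=dashboard-api -o jsonpath='{.items[0].metadata.name}')

# Environment variables injected from the Secret
kubectl exec "$POD" -- env | grep -E '^Db_'

# kubeconfig mounted from the ConfigMap at the path used by config.go
kubectl exec "$POD" -- head -n 3 /data/k8s-plantform/config/config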

1.3.4 Service

api-service.yaml

The Service selects the pods of the dashboard-api-deployment Deployment through the app: dashboard-api label.

apiVersion: v1
kind: Service
metadata:
  name: dashboard-api-service
  labels:
    app: dashboard-api
spec:
  ports:
    - name: http
      port: 9091
      protocol: TCP
      targetPort: 9091
  selector:
    app: dashboard-api
  sessionAffinity: None
  type: ClusterIP 

Deploy the Service:

kubectl apply -f api-service.yaml

At this point the pod has been picked up by the Service:

# kubectl get svc,ep
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/dashboard-api-service   ClusterIP   10.200.97.144   <none>        9091/TCP   5m29s

NAME                              ENDPOINTS           AGE
endpoints/dashboard-api-service   10.100.2.131:9091   5m29s
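
If you want to exercise the Service before the Ingress exists, a throwaway pod can hit it by its DNS name (assuming the cluster can pull curlimages/curl; the root path may well return 404, which still proves the Service routes traffic):

# One-off curl against the Service from inside the cluster
kubectl run api-svc-test --rm -it --restart=Never \
    --image=curlimages/curl --command -- \
    curl -s -o /dev/null -w '%{http_code}\n' http://dashboard-api-service:9091/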

1.3.5 Ingress

The Ingress binds to the Service so the API can be reached by its domain name.

api-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-api-ingress
spec:
  rules:
    - host: api.k8s.intra.com
      http:
        paths:
          - backend:
              service:
                name: dashboard-api-service
                port:
                  number: 9091
            path: /
            pathType: Prefix

Deploy the Ingress:

kubectl apply -f api-ingress.yaml

Ingress status at this point:

# kubectl get ingress
NAME                    CLASS    HOSTS               ADDRESS                       PORTS   AGE
dashboard-api-ingress   <none>   api.k8s.intra.com   192.168.31.51,192.168.31.52   80      67s

At this point the backend API has been deployed, so let's test it.

1.4 API Access Test

Bind the domain name in the hosts file on the test machine, then run the tests.
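
For example, pointing the API hostname at one of the ingress nodes shown in the Ingress output above:

# Map the API hostname to an ingress node (run as root on the test machine)
echo "192.168.31.51 api.k8s.intra.com" >> /etc/hosts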

1.4.1 Testing Resource Access
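
A quick smoke test from the test machine is to hit one of the list endpoints the frontend will later call (taken from Config.js; whether extra query parameters or a login token are required depends on the backend handlers):

# List pods through the API via the Ingress hostname
curl -s "http://api.k8s.intra.com/api/k8s/pods" | head -c 300; echo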

If nothing comes back from the request, the ConfigMap is the most likely culprit. First exec into the container and check whether /data/k8s-plantform/config/config matches what you defined.
If the file is there and matches, then the kubeconfig itself is probably invalid.
The last possibility is that the server: line in the kubeconfig was not substituted, or points to a local-only address; change it to a kube-apiserver address that is reachable from the pod and it will work.


1.4.2 Testing the Database Connection

The workflow endpoint queries the k8s_dashboard database. If it returns a response, the connection is fine; total may well be 0 simply because k8s_dashboard has no data yet. As long as msg reports no error, everything is OK.
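
The same check from the command line, assuming the workflow list endpoint from Config.js; look at the msg and total fields in the response:

# Query the workflow list, which is backed by the k8s_dashboard database
curl -s "http://api.k8s.intra.com/api/k8s/workflows" | head -c 300; echo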


2. Frontend Deployment

2.1 Configuration Files

2.1.1 Changing the API URLs

Point the request URLs in k8s-plantform-fe/src/views/common/Config.js at the API domain:

export default {
    //backend API endpoint paths
    loginAuth: 'http://api.k8s.intra.com/api/login',
    k8sWorkflowCreate: 'http://api.k8s.intra.com/api/k8s/workflow/create',
    k8sWorkflowDetail: 'http://api.k8s.intra.com/api/k8s/workflow/detail',
    k8sWorkflowList: 'http://api.k8s.intra.com/api/k8s/workflows',
    k8sWorkflowDel: 'http://api.k8s.intra.com/api/k8s/workflow/del',
    k8sDeploymentList: 'http://api.k8s.intra.com/api/k8s/deployments',
    k8sDeploymentDetail: 'http://api.k8s.intra.com/api/k8s/deployment/detail',
    k8sDeploymentUpdate: 'http://api.k8s.intra.com/api/k8s/deployment/update',
    k8sDeploymentScale: 'http://api.k8s.intra.com/api/k8s/deployment/scale',
    k8sDeploymentRestart: 'http://api.k8s.intra.com/api/k8s/deployment/restart',
    k8sDeploymentDel: 'http://api.k8s.intra.com/api/k8s/deployment/del',
    k8sDeploymentCreate: 'http://api.k8s.intra.com/api/k8s/deployment/create',
    k8sDeploymentNumNp: 'http://api.k8s.intra.com/api/k8s/deployment/numnp',
    k8sPodList: 'http://api.k8s.intra.com/api/k8s/pods',
    k8sPodDetail: 'http://api.k8s.intra.com/api/k8s/pod/detail',
    k8sPodUpdate: 'http://api.k8s.intra.com/api/k8s/pod/update',
    k8sPodDel: 'http://api.k8s.intra.com/api/k8s/pod/del',
    k8sPodContainer: 'http://api.k8s.intra.com/api/k8s/pod/container',
    k8sPodLog: 'http://api.k8s.intra.com/api/k8s/pod/log',
    k8sPodNumNp: 'http://api.k8s.intra.com/api/k8s/pod/numnp',
    k8sDaemonSetList: 'http://api.k8s.intra.com/api/k8s/daemonsets',
    k8sDaemonSetDetail: 'http://api.k8s.intra.com/api/k8s/daemonset/detail',
    k8sDaemonSetUpdate: 'http://api.k8s.intra.com/api/k8s/daemonset/update',
    k8sDaemonSetDel: 'http://api.k8s.intra.com/api/k8s/daemonset/del',
    k8sStatefulSetList: 'http://api.k8s.intra.com/api/k8s/statefulsets',
    k8sStatefulSetDetail: 'http://api.k8s.intra.com/api/k8s/statefulset/detail',
    k8sStatefulSetUpdate: 'http://api.k8s.intra.com/api/k8s/statefulset/update',
    k8sStatefulSetDel: 'http://api.k8s.intra.com/api/k8s/statefulset/del',
    k8sServiceList: 'http://api.k8s.intra.com/api/k8s/services',
    k8sServiceDetail: 'http://api.k8s.intra.com/api/k8s/service/detail',
    k8sServiceUpdate: 'http://api.k8s.intra.com/api/k8s/service/update',
    k8sServiceDel: 'http://api.k8s.intra.com/api/k8s/service/del',
    k8sServiceCreate: 'http://api.k8s.intra.com/api/k8s/service/create',
    k8sIngressList: 'http://api.k8s.intra.com/api/k8s/ingresses',
    k8sIngressDetail: 'http://api.k8s.intra.com/api/k8s/ingress/detail',
    k8sIngressUpdate: 'http://api.k8s.intra.com/api/k8s/ingress/update',
    k8sIngressDel: 'http://api.k8s.intra.com/api/k8s/ingress/del',
    k8sIngressCreate: 'http://api.k8s.intra.com/api/k8s/ingress/create',
    k8sConfigMapList: 'http://api.k8s.intra.com/api/k8s/configmaps',
    k8sConfigMapDetail: 'http://api.k8s.intra.com/api/k8s/configmap/detail',
    k8sConfigMapUpdate: 'http://api.k8s.intra.com/api/k8s/configmap/update',
    k8sConfigMapDel: 'http://api.k8s.intra.com/api/k8s/configmap/del',
    k8sSecretList: 'http://api.k8s.intra.com/api/k8s/secrets',
    k8sSecretDetail: 'http://api.k8s.intra.com/api/k8s/secret/detail',
    k8sSecretUpdate: 'http://api.k8s.intra.com/api/k8s/secret/update',
    k8sSecretDel: 'http://api.k8s.intra.com/api/k8s/secret/del',
    k8sPvcList: 'http://api.k8s.intra.com/api/k8s/pvcs',
    k8sPvcDetail: 'http://api.k8s.intra.com/api/k8s/pvc/detail',
    k8sPvcUpdate: 'http://api.k8s.intra.com/api/k8s/pvc/update',
    k8sPvcDel: 'http://api.k8s.intra.com/api/k8s/pvc/del',
    k8sNodeList: 'http://api.k8s.intra.com/api/k8s/nodes',
    k8sNodeDetail: 'http://api.k8s.intra.com/api/k8s/node/detail',
    k8sNamespaceList: 'http://api.k8s.intra.com/api/k8s/namespaces',
    k8sNamespaceDetail: 'http://api.k8s.intra.com/api/k8s/namespace/detail',
    k8sNamespaceDel: 'http://api.k8s.intra.com/api/k8s/namespace/del',
    k8sPvList: 'http://api.k8s.intra.com/api/k8s/pvs',
    k8sPvDetail: 'http://api.k8s.intra.com/api/k8s/pv/detail',
    k8sTerminalWs: 'ws://api.k8s.intra.com:8081/ws',
    //editor configuration
    cmOptions: {
        // language and syntax mode
        mode: 'text/yaml',
        // theme
        theme: 'idea',
        // show line numbers
        lineNumbers: true,
        smartIndent: true, // smart indentation
        indentUnit: 4, // indent unit of 4 spaces
        styleActiveLine: true, // highlight the active line
        matchBrackets: true, // highlight matching brackets next to the cursor
        readOnly: false,
        lineWrapping: true // wrap long lines
    }
}

2.1.2 Fixing the "Invalid Host header" Error

Edit k8s-plantform-fe/vue.config.js and add the hostnames that need to be allowed:

const { defineConfig } = require('@vue/cli-service')

module.exports = defineConfig({
  devServer:{
    host: '0.0.0.0',      // listen address
    port: 9090,           // listen port
    open: true,           // open the browser after startup
    allowedHosts: [
      'web.k8s.intra.com',
      '.intra.com'
    ],
  },
  // disable lint-on-save
  lintOnSave: false,
})

2.2 Building the Web Image

Compared with the API, the web side is much simpler.

2.2.1 Startup Script

start.sh

#!/bin/bash
echo 'export NODE_HOME=/usr/local/node' >> /etc/profile  
echo 'export PATH=$NODE_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
cd /data/k8s-plantform-fe/
npm run serve

2.2.2 Dockerfile

Dockerfile

# Base image
#FROM centos:7.9.2009
FROM harbor.intra.com/k8s-dashboard/web:v1

# Maintainer information
LABEL maintainer="qiuqin <13917099322@139.com>"

# Create the data directory
RUN mkdir -p /data/

# Copy the application
ADD ./k8s-plantform-fe.tar /data/

# Install Node.js dependencies
RUN	cd /data/k8s-plantform-fe/ &&\
		echo 'export NODE_HOME=/usr/local/node' >> /etc/profile  &&\ 
		echo 'export PATH=$NODE_HOME/bin:$PATH' >> /etc/profile &&\
		source /etc/profile &&\
		npm install


WORKDIR /data/k8s-plantform-fe
ADD ./start.sh /data/k8s-plantform-fe/
# Expose the port
EXPOSE 9090

# Start the application
CMD ["/data/k8s-plantform-fe/start.sh"]

2.2.3 Image Build Script

As with the API, since the image needs frequent changes during testing, build.sh builds the image and pushes it to the Harbor registry.

#!/bin/bash
docker build -t harbor.intra.com/k8s-dashboard/web:v$1 .
docker push harbor.intra.com/k8s-dashboard/web:v$1

Build the image:

./build.sh 3

This produces an image named harbor.intra.com/k8s-dashboard/web:v3.

2.3 YAML Files

Compared with the API, the web side needs fewer manifests, three in total:

  1. deployment
  2. service
  3. ingress

2.3.1 Deployment

The fields are the same as for the API, so they are not explained one by one here.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard-web-deployment
  labels:
    app: dashboard-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard-web
  template:
    metadata:
      labels:
        app: dashboard-web
    spec:
      containers:
      - name: dashboard-web
        image: harbor.intra.com/k8s-dashboard/web:v3
        resources:
          requests:
            memory: "100Mi"
            cpu: "0.1"
          limits:
            memory: "4096Mi"
            cpu: "4"
        ports:
        - containerPort: 9090
      imagePullSecrets:
        - name: dashboard-web-secret
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: project
                  operator: In
                  values:
                  - dashboard-web
              topologyKey: topology.kubernetes.io/zone
              namespaces:
                - default

2.3.2 Service

apiVersion: v1
kind: Service
metadata:
  name: dashboard-web-service
  labels:
    app: dashboard-web
spec:
  ports:
    - name: http
      port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    app: dashboard-web
  sessionAffinity: None
  type: ClusterIP 

2.3.3 Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-web-ingress
spec:
  rules:
    - host: web.k8s.intra.com
      http:
        paths:
          - backend:
              service:
                name: dashboard-web-service
                port:
                  number: 9090
            path: /
            pathType: Prefix
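
Apply the three web manifests the same way as for the API; the filenames below are just examples of however you saved them:

# Deploy the web Deployment, Service and Ingress, then check their status
kubectl apply -f web-deployment.yaml
kubectl apply -f web-service.yaml
kubectl apply -f web-ingress.yaml
kubectl get deploy,svc,ingress | grep dashboard-web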

That's it. Let's look at the result.

2.4 Web Access Test

Bind the web hostname in the hosts file, just like in section 1.4, then test in the browser.

2.4.1 Login Page

The default user is admin with password 123456, as defined in the API's config/config.go.


2.4.2 Other Pages

I won't go through every page here; they were all covered in the earlier functional-testing post, which you can look back at if interested.


With that, both the frontend web and the backend API are deployed on the Kubernetes environment.
