Installing Kuboard — Using a StorageClass for Persistence
This guide installs the Kuboard v3 web UI, persisting its data on dynamically provisioned storage volumes. Official documentation: Install Kuboard v3 in Kubernetes | Kuboard

Contents
1. Install NFS
2. Create the namespace and configuration
3. Create the ServiceAccount
4. Create the nfs-client-provisioner
5. Create the StorageClass
6. Create kuboard-etcd
7. Create the Services
8. Create the kuboard-agent
1. Install NFS
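The original post does not show the NFS server setup itself. A minimal sketch for a CentOS/RHEL host, assuming the server IP 192.168.162.130 and export path /data/nfs/data/Kuboard/data used by the provisioner later in this guide (on Debian/Ubuntu the package is nfs-kernel-server):

```
# Install and start the NFS server
yum install -y nfs-utils rpcbind
systemctl enable --now rpcbind nfs-server

# Create the directory that will back the dynamic volumes
mkdir -p /data/nfs/data/Kuboard/data

# Export it to the cluster subnet (adjust the CIDR to your environment)
echo '/data/nfs/data/Kuboard/data 192.168.162.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -arv
```

From any worker node, `showmount -e 192.168.162.130` should list the export; every node that may run the provisioner pod also needs the NFS client utilities (nfs-utils) installed.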
2. Create the namespace and configuration
---
apiVersion: v1
kind: Namespace
metadata:
  name: kuboard
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kuboard-v3-config
  namespace: kuboard
data:
  # For an explanation of these parameters, see https://kuboard.cn/install/v3/install-built-in.html
  # [common]
  KUBOARD_ENDPOINT: 'http://192.168.162.130:30080'
  KUBOARD_AGENT_SERVER_UDP_PORT: '30081'
  KUBOARD_AGENT_SERVER_TCP_PORT: '30081'
  KUBOARD_SERVER_LOGRUS_LEVEL: info  # error / debug / trace
  # KUBOARD_AGENT_KEY is the secret used between the Agent and Kuboard. Change it to any
  # 32-character string of letters and digits. If this key is changed later, the Kuboard
  # Agent must be deleted and re-imported.
  KUBOARD_AGENT_KEY: 32b7d6572c6255211b4eec9009e4a816
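Rather than reusing the sample value, you can generate your own 32-character alphanumeric KUBOARD_AGENT_KEY, for example:

```shell
# Generate a random 32-character hex string suitable for KUBOARD_AGENT_KEY
KEY=$(tr -dc 'a-f0-9' < /dev/urandom | head -c 32)
echo "$KEY"
```

Paste the output into the ConfigMap before applying it.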
3. Create the ServiceAccount
# ServiceAccount and RBAC authorization for the nfs-client provisioner
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kuboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kuboard
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kuboard
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kuboard
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kuboard
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
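After applying the RBAC manifests, it is worth confirming that the service account actually received the permissions the provisioner needs. A quick check with standard kubectl (the manifest file name here is an assumption):

```
kubectl apply -f nfs-rbac.yaml

# Each of these should print "yes"
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:kuboard:nfs-client-provisioner
kubectl auth can-i update persistentvolumeclaims \
  --as=system:serviceaccount:kuboard:nfs-client-provisioner
kubectl auth can-i update endpoints -n kuboard \
  --as=system:serviceaccount:kuboard:nfs-client-provisioner
```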
4. Create the nfs-client-provisioner
# Deploy the nfs-client provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: kuboard
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.162.130
            - name: NFS_PATH
              value: /data/nfs/data/Kuboard/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.162.130
            path: /data/nfs/data/Kuboard/data
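Once the Deployment is applied, confirm the provisioner pod is running and watch its logs; most NFS mount and RBAC problems show up here first (the manifest file name is an assumption):

```
kubectl apply -f nfs-client-provisioner.yaml
kubectl get pods -n kuboard -l app=nfs-client-provisioner
kubectl logs -n kuboard -l app=nfs-client-provisioner --tail=50
```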
5. Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kuboard-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # make this the default StorageClass
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
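Before moving on to kuboard-etcd, you can verify dynamic provisioning end to end with a throwaway PVC (the claim name here is arbitrary):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: kuboard
spec:
  storageClassName: kuboard-sc
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 1Mi
```

Apply it, check that it reaches the Bound state with `kubectl get pvc -n kuboard`, then delete it.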
6. Create kuboard-etcd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kuboard-etcd
  namespace: kuboard
  labels:
    app: kuboard-etcd
spec:
  serviceName: kuboard-etcd
  replicas: 3
  selector:
    matchLabels:
      app: kuboard-etcd
  template:
    metadata:
      name: kuboard-etcd
      labels:
        app: kuboard-etcd
    spec:
      containers:
        - name: kuboard-etcd
          image: swr.cn-east-2.myhuaweicloud.com/kuboard/etcd:v3.4.14
          ports:
            - containerPort: 2379
              name: client
            - containerPort: 2380
              name: peer
          env:
            - name: KUBOARD_ETCD_ENDPOINTS
              value: >-
                kuboard-etcd-0.kuboard-etcd:2379,kuboard-etcd-1.kuboard-etcd:2379,kuboard-etcd-2.kuboard-etcd:2379
          volumeMounts:
            - name: data
              mountPath: /data
          command:
            - /bin/sh
            - -c
            - |
              PEERS="kuboard-etcd-0=http://kuboard-etcd-0.kuboard-etcd:2380,kuboard-etcd-1=http://kuboard-etcd-1.kuboard-etcd:2380,kuboard-etcd-2=http://kuboard-etcd-2.kuboard-etcd:2380"
              # The advertised peer URL must match this member's entry in --initial-cluster,
              # so it uses the headless-service DNS name, not the bare hostname.
              exec etcd --name ${HOSTNAME} \
                --listen-peer-urls http://0.0.0.0:2380 \
                --listen-client-urls http://0.0.0.0:2379 \
                --advertise-client-urls http://${HOSTNAME}.kuboard-etcd:2379 \
                --initial-advertise-peer-urls http://${HOSTNAME}.kuboard-etcd:2380 \
                --initial-cluster-token kuboard-etcd-cluster-1 \
                --initial-cluster ${PEERS} \
                --initial-cluster-state new \
                --data-dir /data/kuboard.etcd
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        # Fill in a valid StorageClass name here
        storageClassName: kuboard-sc
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
[Note] This step may fail: the nfs-client-provisioner log shows that kube-apiserver needs a configuration change. On Kubernetes v1.20+ this legacy provisioner depends on the removed selfLink field and logs an error like "selfLink was empty, can't make reference" (see https://blog.csdn.net/ag1942/article/details/115371793).
Add the following flag to the kube-apiserver configuration:
--feature-gates=RemoveSelfLink=false \
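After the three etcd pods are up, you can check cluster health from inside one of them, assuming etcdctl is present in the image (it ships in the standard etcd v3.4 images, where the v3 API is the default):

```
kubectl exec -n kuboard kuboard-etcd-0 -- etcdctl \
  --endpoints=http://kuboard-etcd-0.kuboard-etcd:2379,http://kuboard-etcd-1.kuboard-etcd:2379,http://kuboard-etcd-2.kuboard-etcd:2379 \
  endpoint health
```

All three endpoints should report healthy before you deploy kuboard-v3 itself.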
7. Create the Services
---
apiVersion: v1
kind: Service
metadata:
  name: kuboard-etcd
  namespace: kuboard
spec:
  type: ClusterIP
  ports:
    - port: 2379
      name: client
    - port: 2380
      name: peer
  selector:
    app: kuboard-etcd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    k8s.kuboard.cn/ingress: 'false'
    k8s.kuboard.cn/service: NodePort
    k8s.kuboard.cn/workload: kuboard-v3
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-v3
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-v3
    spec:
      containers:
        - env:
            - name: KUBOARD_ETCD_ENDPOINTS
              value: >-
                kuboard-etcd-0.kuboard-etcd:2379,kuboard-etcd-1.kuboard-etcd:2379,kuboard-etcd-2.kuboard-etcd:2379
          envFrom:
            - configMapRef:
                name: kuboard-v3-config
          image: 'swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3'
          imagePullPolicy: Always
          name: kuboard
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.kuboard.cn/workload: kuboard-v3
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
spec:
  ports:
    - name: webui
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 80
    - name: agentservertcp
      nodePort: 30081
      port: 10081
      protocol: TCP
      targetPort: 10081
    - name: agentserverudp
      nodePort: 30081
      port: 10081
      protocol: UDP
      targetPort: 10081
  selector:
    k8s.kuboard.cn/name: kuboard-v3
  sessionAffinity: None
  type: NodePort
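With the Service in place, confirm the kuboard-v3 pod is running and that the web UI answers on the NodePort (using the KUBOARD_ENDPOINT address from the ConfigMap above):

```
kubectl get pods -n kuboard -l k8s.kuboard.cn/name=kuboard-v3
curl -I http://192.168.162.130:30080
```

An HTTP response from curl means the UI is reachable and you can log in.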
8. Create the kuboard-agent
Log in at the platform address (the KUBOARD_ENDPOINT configured above):
Click to choose the import mode:
Get the agent configuration yaml:
After applying it, the cluster appears in Kuboard.
The yaml files used in this install are available at: https://download.csdn.net/download/qq_35008624/34197302