StatefulSet MongoDB + StorageClass + NFS
There are plenty of tutorials online, but the official project is still the most reliable source (the others all derive from it anyway): https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
I. Required images
- mongo:3.4.4
- registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
- cvallance/mongo-k8s-sidecar:latest
II. System and services
- Operating System: CentOS Linux release 7.9.2009 (Core), kernel 5.4.163-1.el7.elrepo.x86_64
- Kubernetes: v1.20.13
- Package: nfs-utils (install it directly with yum, or download the RPM from the Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/centos/7.9.2009/os/x86_64/Packages/)
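Before the provisioner can do anything, the export it points at must already be shared by the NFS server. A minimal /etc/exports sketch; the path /data/mongo matches the manifests below, and the 192.168.15.0/24 subnet is an assumption based on the server address used later (adjust both for your environment):

```
# /etc/exports on the NFS server (subnet is illustrative; match it to your nodes)
/data/mongo 192.168.15.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, run `exportfs -arv` on the NFS server to reload the export list.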
III. Deployment notes
The YAML files below all specify namespace: dev. To use a different namespace, rewrite them in one pass:
sed -i 's#namespace: dev#namespace: <your-namespace>#g' *.yaml
Then create the namespace:
[root@master ~]$ kubectl create ns dev
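The substitution can be sanity-checked on a scratch copy before touching the real manifests. A small demo (the file name sample.yaml and the target namespace prod are made up for illustration):

```shell
# Work in a throwaway directory so the real manifests are untouched.
mkdir -p /tmp/ns-demo && cd /tmp/ns-demo

# A minimal stand-in manifest (hypothetical).
cat > sample.yaml <<'EOF'
metadata:
  name: demo
  namespace: dev
EOF

# Same substitution as above, with "prod" as the example namespace.
sed -i 's#namespace: dev#namespace: prod#g' *.yaml

grep 'namespace:' sample.yaml
```

Using `#` as the sed delimiter avoids having to escape anything in the match, and the glob rewrites every manifest in the directory in one command.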
IV. YAML manifests
1. To fix the permission errors from cvallance/mongo-k8s-sidecar, bind the default ServiceAccount to the view ClusterRole.
default-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: dev
2. Deploy the StorageClass.
mongo-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
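nfs-subdir-external-provisioner also honors optional StorageClass parameters. One worth knowing about is archiveOnDelete; this is an optional addition to the manifest above, not part of it:

```yaml
parameters:
  # When a PV is deleted, rename its data directory to archived-<name>
  # under the export instead of leaving it in place as-is.
  archiveOnDelete: "true"
```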
3. Note that nfs-subdir-external-provisioner must be v4.0 or later; with older versions, PVs are not created automatically and PVC binding fails.
nfs-client-provisioner-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: dev
  labels:
    app: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: sa-nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        # the image is already present on the node, so imagePullPolicy is set to Never
        image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
        imagePullPolicy: Never
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        - name: localtime
          mountPath: /etc/localtime
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.15.103
        - name: NFS_PATH
          value: /data/mongo # NFS export path; replace with your own
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.15.103
          path: /data/mongo # NFS export path; replace with your own
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
4. RBAC for nfs-client-provisioner; the individual permission grants are self-explanatory.
nfs-client-provisioner-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-nfs-client-provisioner
  namespace: dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-nfs-client-provisioner
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rb-nfs-client-provisioner
  namespace: dev
subjects:
- kind: ServiceAccount
  name: sa-nfs-client-provisioner
  namespace: dev
roleRef:
  kind: Role
  name: role-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cr-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: sa-nfs-client-provisioner
  namespace: dev
roleRef:
  kind: ClusterRole
  name: cr-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
5. Deploy the mongo StatefulSet.
mongo-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  type: NodePort
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    k8s-app: mongodb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 3
  selector:
    matchLabels:
      k8s-app: mongodb
  template:
    metadata:
      labels:
        k8s-app: mongodb
        role: mongo
        environment: test
    spec:
      containers:
      - name: mongo
        image: mongo:3.4.4
        imagePullPolicy: IfNotPresent
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - 0.0.0.0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
          protocol: TCP
        volumeMounts:
        - name: pvc
          mountPath: /data/mongo # PVC mount path inside the container; replace with your own
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: pvc
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 3Gi
Troubleshooting
Issue 1:
E0110 15:44:15.839489 1 leaderelection.go:320] error retrieving resource lock dev/fuseim.pri-ifs: Unauthorized
Check that your ServiceAccount and its RBAC bindings are deployed correctly.
Issue 2:
Normally the deployment finishes and the mongo replica set is assembled automatically, but sometimes you enter mongo and find the replica set never fully formed. This is usually because nfs-client-provisioner could not create the PVs; double-check your volumeClaimTemplates configuration and the StorageClass.
For other errors, see the project's issue tracker on GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues
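A quick way to confirm that provisioning actually happened is to look at the export on the NFS server: for each PV, the provisioner creates a subdirectory named <namespace>-<pvcName>-<pvName>. The sketch below simulates that layout locally under a throwaway path (the directory names reuse the PV names from the check output below; on the real server you would simply run `ls /data/mongo`):

```shell
# Simulate the export directory as it should look after the three PVs
# below are provisioned, using the provisioner's
# <namespace>-<pvcName>-<pvName> naming convention.
export_dir=/tmp/nfs-export-demo
mkdir -p "$export_dir"/dev-pvc-mongodb-0-pvc-ef170e2d-5c20-4b9f-8297-6ffe86cfd655
mkdir -p "$export_dir"/dev-pvc-mongodb-1-pvc-af3236cc-6d0c-40a8-a7db-7e8991475dce
mkdir -p "$export_dir"/dev-pvc-mongodb-2-pvc-311cd7a5-ce08-4e72-a144-c78adf74e982

# One directory per provisioned PV; on the real NFS server: ls /data/mongo
ls "$export_dir" | wc -l
```

If a PVC is stuck Pending and no matching directory appears under the export, the provisioner never got the create request through, which points back at RBAC or the StorageClass.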
Post-deployment checks
# pv, pvc, svc, po, storageclass
[root@master ~]$ kubectl get pv,pvc,svc,po,storageclass | grep mongo
persistentvolume/pvc-311cd7a5-ce08-4e72-a144-c78adf74e982 3Gi RWX Retain Bound dev/pvc-mongodb-2 managed-nfs-storage 4d
persistentvolume/pvc-af3236cc-6d0c-40a8-a7db-7e8991475dce 3Gi RWX Retain Bound dev/pvc-mongodb-1 managed-nfs-storage 4d
persistentvolume/pvc-ef170e2d-5c20-4b9f-8297-6ffe86cfd655 3Gi RWX Retain Bound dev/pvc-mongodb-0 managed-nfs-storage 4d
persistentvolumeclaim/pvc-mongodb-0 Bound pvc-ef170e2d-5c20-4b9f-8297-6ffe86cfd655 3Gi RWX managed-nfs-storage 4d
persistentvolumeclaim/pvc-mongodb-1 Bound pvc-af3236cc-6d0c-40a8-a7db-7e8991475dce 3Gi RWX managed-nfs-storage 4d
persistentvolumeclaim/pvc-mongodb-2 Bound pvc-311cd7a5-ce08-4e72-a144-c78adf74e982 3Gi RWX managed-nfs-storage 4d
service/mongodb NodePort 10.99.16.192 <none> 27017:32407/TCP 3d1h
pod/mongodb-0 2/2 Running 0 3d1h
pod/mongodb-1 2/2 Running 0 3d1h
pod/mongodb-2 2/2 Running 0 3d1h
# storageclass
[root@master ~]$ kubectl get sc | grep fu
managed-nfs-storage (default) fuseim.pri/ifs Retain Immediate false 3d1h
After deployment, verify that the mongo replica set has been established:
[root@master ~]$kubectl exec -it mongodb-0 -- mongo
Defaulting container name to mongo.
Use 'kubectl describe pod/mongodb-0 -n dev' to see all of the containers in this pod.
MongoDB shell version v3.4.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.4
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten]
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten]
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten]
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2022-01-07T05:54:27.788+0000 I CONTROL [initandlisten]
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2022-01-10T07:36:57.168Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "10.244.2.151:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 265350,
"optime" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2022-01-10T07:36:54Z"),
"electionTime" : Timestamp(1641534868, 2),
"electionDate" : ISODate("2022-01-07T05:54:28Z"),
"configVersion" : 4,
"self" : true
},
{
"_id" : 1,
"name" : "10.244.1.81:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 265343,
"optime" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2022-01-10T07:36:54Z"),
"optimeDurableDate" : ISODate("2022-01-10T07:36:54Z"),
"lastHeartbeat" : ISODate("2022-01-10T07:36:56.605Z"),
"lastHeartbeatRecv" : ISODate("2022-01-10T07:36:56.605Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.244.2.151:27017",
"configVersion" : 4
},
{
"_id" : 2,
"name" : "10.244.6.230:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 265343,
"optime" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1641800214, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2022-01-10T07:36:54Z"),
"optimeDurableDate" : ISODate("2022-01-10T07:36:54Z"),
"lastHeartbeat" : ISODate("2022-01-10T07:36:56.605Z"),
"lastHeartbeatRecv" : ISODate("2022-01-10T07:36:56.606Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.244.2.151:27017",
"configVersion" : 4
}
],
"ok" : 1
}