Containerizing NFS with a PVC
Modern application services almost always rely on shared configuration, shared files, shared logs, and similar facilities. NFS (Network File System) is one of the most widely used data-storage services in today's internet architectures and fits distributed systems well. This article walks through containerizing an NFS server and carving out a PVC for it.
Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: auto-high-cbs
parameters:
paymode: PREPAID
type: CLOUD_PREMIUM
zone: "150001"
provisioner: cloud.tencent.com/qcloud-cbs
reclaimPolicy: Retain
volumeBindingMode: Immediate
- Because multiple services or Pods will share the NFS service, the disk needs relatively high read/write performance. This example uses a Premium Cloud Disk (CLOUD_PREMIUM) under the prepaid (PREPAID) billing mode; Tencent Cloud also offers Basic Cloud Disks (CLOUD_BASIC) and SSDs (CLOUD_SSD).
- Since NFS here mainly stores configuration and similar data, the reclaim policy is set to Retain.
Create the PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"d1ef5ddc-ba81-11e9-9b04-1a812535d98f","leaseDurationSeconds":15,"acquireTime":"2019-09-06T01:02:07Z","renewTime":"2019-09-06T01:02:15Z","leaderTransitions":0}'
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: cloud.tencent.com/qcloud-cbs
finalizers:
- kubernetes.io/pvc-protection
name: nfs-pvc
namespace: common
spec:
accessModes:
- ReadWriteOnce
dataSource: null
resources:
requests:
storage: 200Gi
storageClassName: auto-high-cbs
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 200Gi
phase: Bound
This provisions a 200 GiB disk (not memory, despite the original wording) with the access mode set to ReadWriteOnce.
Build the NFS base image
FROM alpine:latest
LABEL maintainer="Tayl0r Dang(dangtuo888@gmail.com)"
LABEL source="https://github.com/dtcka/nfs-server-alpine"
LABEL branch="master"
RUN apk update && apk add tzdata && \
    # Note: Etc/GMT+8 is actually UTC-8 (POSIX sign inversion); Asia/Shanghai gives UTC+8
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    apk add --no-cache --update --verbose nfs-utils bash iproute2 && \
    rm -rf /var/cache/apk /tmp/* /sbin/halt /sbin/poweroff /sbin/reboot && \
    mkdir -p /var/lib/nfs/rpc_pipefs /var/lib/nfs/v4recovery && \
    echo "rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs defaults 0 0" >> /etc/fstab && \
    echo "nfsd /proc/fs/nfsd nfsd defaults 0 0" >> /etc/fstab
COPY exports /etc/
COPY nfsd.sh /usr/bin/nfsd.sh
COPY .bashrc /root/.bashrc
RUN chmod +x /usr/bin/nfsd.sh
ENTRYPOINT ["/usr/bin/nfsd.sh"]
This builds a minimal nfs-server image on Alpine, with the container timezone set to China Standard Time (UTC+8).
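The contents of the exports file copied into the image are not shown in the post. A minimal sketch of /etc/exports, assuming a single NFSv4-style root export of the shared directory (all option flags here are illustrative assumptions, not taken from the original repository):

```
/nfs-pvc *(rw,fsid=0,async,no_subtree_check,insecure,no_root_squash)
```

fsid=0 marks the directory as the NFSv4 pseudo-root, so clients can mount it simply as server:/; no_root_squash is convenient for shared config volumes but should be tightened in security-sensitive setups.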
Create the NFS Deployment
apiVersion: apps/v1beta2
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "13"
generation: 35
labels:
k8s-app: nfs-server
qcloud-app: nfs-server
name: nfs-server
namespace: common
spec:
minReadySeconds: 10
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: nfs-server
qcloud-app: nfs-server
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: nfs-server
qcloud-app: nfs-server
spec:
containers:
- env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: SHARED_DIRECTORY
value: /nfs-pvc
image: xxx/nfs-alpine:1.0.0
imagePullPolicy: Always
name: nfs
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 250m
memory: 256Mi
securityContext:
privileged: true
procMount: Default
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /nfs-pvc
name: data
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: qcloudregistrykey
- name: tencenthubkey
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: data
persistentVolumeClaim:
claimName: nfs-pvc
- Set the Pod's resources: requests (CPU/MEM): 250m / 256Mi, limits (CPU/MEM): 500m / 512Mi
- Add the PVC volume nfs-pvc and mount it via volumeMounts
- Set the NFS export path via SHARED_DIRECTORY: /nfs-pvc (a directory of your choosing)
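Other Pods in the cluster can then consume the share through a built-in nfs volume. A sketch of a consuming Pod spec; the server address uses the Service clusterIP from the manifest later in this article (node-level NFS mounts often cannot resolve cluster DNS names, so an IP is safer than nfs-server.common.svc.cluster.local):

```yaml
# Fragment of a hypothetical consuming Pod spec (illustrative only)
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    nfs:
      server: 172.16.255.212   # clusterIP of the nfs-server Service (assumed reachable from nodes)
      path: /
```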
Add a HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
qcloud-app: hpa-bwsrr9sttds0
name: hpa-bwsrr9sttds0
namespace: common
spec:
maxReplicas: 4
metrics:
- pods:
metricName: k8s_pod_rate_cpu_core_used_limit
targetAverageValue: "80"
type: Pods
- pods:
metricName: k8s_pod_rate_mem_usage_limit
targetAverageValue: "80"
type: Pods
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1beta2
kind: Deployment
name: nfs-server
Scaling is driven by two metrics (the metric names above are Tencent Cloud custom Pod metrics):
- CPU usage exceeding 80% of the Pod's limit
- Memory usage exceeding 80% of the Pod's limit
Note that because the backing PVC is ReadWriteOnce, additional replicas can only run on the node where the disk is attached, so scale-out is limited in practice.
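On vanilla Kubernetes, where those Tencent-specific metric names are unavailable, the same intent can be expressed with standard resource metrics against the same autoscaling/v2beta1 API (a sketch; note that resource-metric utilization is computed against requests rather than limits):

```yaml
metrics:
- type: Resource
  resource:
    name: cpu
    targetAverageUtilization: 80
- type: Resource
  resource:
    name: memory
    targetAverageUtilization: 80
```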
Expose the service inside the VPC
apiVersion: v1
kind: Service
metadata:
annotations:
service.kubernetes.io/loadbalance-id: lb-8le93js3
service.kubernetes.io/qcloud-loadbalancer-clusterid: cls-7sxwq4u8
service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-oatgu5wo
name: nfs-server
namespace: common
spec:
clusterIP: 172.16.255.212
externalTrafficPolicy: Cluster
ports:
- name: tcp-2049-2049
nodePort: 30034
port: 2049
protocol: TCP
targetPort: 2049
- name: tcp-20048-20048
nodePort: 30659
port: 20048
protocol: TCP
targetPort: 20048
- name: tcp-111-111
nodePort: 32241
port: 111
protocol: TCP
targetPort: 111
selector:
k8s-app: nfs-server
qcloud-app: nfs-server
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 172.26.0.x
NFS starts three services:
- portmapper: maps RPC program numbers to ports (default port 111)
- mountd: handles mount requests for the exported file systems (default port 20048, listed in /etc/services)
- nfs: serves client access (default port 2049)
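A client inside the VPC can then mount the share via the LoadBalancer address. An illustrative /etc/fstab entry; the IP is the masked placeholder from the Service status above, and the NFS version and mount options are assumptions:

```
172.26.0.x:/  /mnt/nfs  nfs  vers=4,noatime  0 0
```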
At this point, the NFS service is fully up and running.
Appendix:
- This setup was built on Tencent Cloud TKE; other cloud vendors or vanilla Kubernetes will differ in the details.
- PVC access modes:
  - ReadWriteOnce: the volume can be mounted read-write by a single node
  - ReadOnlyMany: the volume can be mounted read-only by many nodes
  - ReadWriteMany: the volume can be mounted read-write by many nodes
- PV reclaim policies:
  - Retain: manual reclamation
  - Recycle: the volume is scrubbed before reuse
  - Delete: the associated storage asset (AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted
For more content, visit 云原生建筑师: https://blog.dtcka.com