I. Setting up an NFS server

1. Install the server and client packages

sudo apt install nfs-kernel-server nfs-common

Here nfs-kernel-server is the server package and nfs-common is the client package.
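On Ubuntu the NFS service is registered with systemd after installation; a quick way to confirm it is present and running:

sudo systemctl status nfs-kernel-server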

2. Configure the NFS shared directory

Create a shared directory under your home directory and export it in /etc/exports:

mkdir ~/nfs-share
sudo vim /etc/exports 
/home/XX/nfs-share *(rw,sync,no_root_squash,no_subtree_check)

The format of each line is: <shared directory> <hosts allowed to access it>(export options)

The fields break down as follows:

/home/xx/nfs-share: the directory to share.

*: the hosts allowed to access the share. * means any host; a network prefix such as 192.168.3.0/24 limits access to a subnet, and 192.168.3.29 limits it to a single IP (see the example lines after this list).

rw: read-write access. Specify ro instead if you want the share to be read-only.

sync: data is written synchronously to memory and disk.

async: data is buffered in memory first rather than being written straight to disk.

no_root_squash: if the user mounting the shared directory on the NFS client is root, they keep root privileges over the shared directory. This is highly insecure and generally discouraged, but if you need to write to the NFS directory from the client as root, you have to enable it; convenience and security are at odds here.

root_squash: if the user mounting the shared directory is root, their privileges are squashed down to an anonymous user, usually with the UID and GID of the nobody system account.

subtree_check: forces NFS to check permissions on parent directories (the default).

no_subtree_check: skips the parent-directory permission check.
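For example, hypothetical export lines that restrict access to a subnet or to a single client would look like this (paths and addresses are placeholders):

/home/xx/nfs-share 192.168.3.0/24(rw,sync,no_root_squash,no_subtree_check)
/home/xx/nfs-share 192.168.3.29(ro,sync,root_squash,no_subtree_check)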

Once the file is configured, run the following commands to export the share and restart the NFS service:

sudo exportfs -a    
sudo service nfs-kernel-server restart
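To confirm the export is active, you can list it on the server; showmount ships with nfs-common:

showmount -e localhost
sudo exportfs -v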

3. Client mount test

sudo mount localhost:/home/xx/nfs-share /mnt

This mounts the nfs-share exported file system at /mnt.
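As a quick sanity check you can write a file through the mount point and confirm that it shows up in the exported directory; the filename here is arbitrary:

sudo touch /mnt/hello.txt
ls ~/nfs-share        # hello.txt should appear here
sudo umount /mnt      # unmount when you are done testing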

II. The relationship between PV, PVC and StorageClass

Reference: PV, PVC and StorageClass - Cloud Container Engine (CCE) - Kubernetes basics - Persistent storage - Huawei Cloud
https://blog.csdn.net/weixin_41947378/article/details/111509849

III. Creating an NFS persistent volume (static provisioning)

1. Create the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi                             # storage capacity
  volumeMode: Filesystem                     # volume mode: Filesystem or Block
  accessModes:
    - ReadWriteOnce                          # access mode
  persistentVolumeReclaimPolicy: Recycle     # reclaim policy
  mountOptions:                              # mount options
    - hard
    - nfsvers=4.1
  storageClassName: slow                     # storage class name
  nfs:                                       # NFS server details
    path: /home/xx/nfs-share
    server: <NFS server IP>

Apply the manifest:

kubectl apply -f xx.yaml
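A quick check, assuming the manifest above was applied: the new PV should be listed as Available until a claim binds it.

kubectl get pv pv0003    # STATUS should read Available for now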

Access modes

ReadWriteOnce

The volume can be mounted read-write by a single node. ReadWriteOnce still allows multiple Pods running on that same node to access the volume.

ReadOnlyMany

The volume can be mounted read-only by many nodes.

ReadWriteMany

The volume can be mounted read-write by many nodes.

ReadWriteOncePod

The volume can be mounted read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to guarantee that only one Pod in the whole cluster can read or write the PVC.

Reclaim policies

  • Retain -- manual reclamation

  • Recycle -- basic scrub (rm -rf /thevolume/*)

  • Delete -- the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk or OpenStack Cinder volume, is deleted as well

2. Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 3Gi
  storageClassName: slow
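Once applied, the claim should bind to the matching PV created above (same storage class, and the 3Gi request fits within the 5Gi capacity):

kubectl get pvc task-pv-claim    # STATUS should read Bound, VOLUME pv0003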

3. Create the Pod

apiVersion: v1
kind: Pod
metadata:
  name: test-nfs
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    volumeMounts:
    - mountPath: /data
      name: nfs-volume
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: task-pv-claim
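To confirm the share really is mounted at /data, you can write a file from inside the container and look for it in the exported directory on the NFS server; the filename is arbitrary:

kubectl exec test-nfs -- sh -c 'echo hello > /data/from-pod.txt'
ls ~/nfs-share        # run on the NFS server; from-pod.txt should be there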

IV. Creating an NFS persistent volume (dynamic provisioning)

Dynamic provisioning has a clear advantage: there is no need to create PVs in advance.

The three manifests used below can be downloaded from the upstream repository:

https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

1. Deploy rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

2. Deploy deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0 # changed from the original image: the old nfs-client-provisioner hits the SelfLink issue on newer Kubernetes
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.128.1.117    # change to your NFS server address
            - name: NFS_PATH
              value: /nfsdata        # change to your exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.128.1.117   # change to your NFS server address
            path: /nfsdata         # change to your exported path
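After applying rbac.yaml and deployment.yaml, the provisioner Pod should come up in the default namespace; a quick check using the labels above:

kubectl get pods -l app=nfs-client-provisioner    # should show 1/1 Running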

3. Deploy the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: fuseim.pri/ifs  # or choose another name; it must match the PROVISIONER_NAME env in the deployment
parameters:
  server: nfs-server.example.com
  path: /root/nfs-share
  readOnly: "false"

  • server: the hostname or IP address of the NFS server.

  • path: the path exported by the NFS server.

  • readOnly: a flag indicating whether the storage is mounted read-only (defaults to false).
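Once applied, the class should be listed along with its provisioner; the manifest filename here is just an example:

kubectl apply -f class.yaml
kubectl get storageclass example-nfs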

4. Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 3Gi
  storageClassName: example-nfs    # must reference the StorageClass name, not the provisioner name
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
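With dynamic provisioning in place, applying this claim should make the provisioner create a matching PV (named pvc-<uid>) automatically, backed by a subdirectory of the NFS export. If the claim stays Pending, one common culprit is the selector block: claims with a non-empty selector generally cannot have a PV provisioned dynamically, so dropping it is worth trying.

kubectl get pvc task-pv-claim
kubectl get pv        # a dynamically created pvc-<uid> volume should appear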

V. Creating a hostPath persistent volume

1. Create the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

2. Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

3. Create the Pod

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
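Because the volume is mounted as nginx's web root, you can verify it from inside the Pod; curl is not in the nginx image by default, so it is installed first:

kubectl exec -it task-pv-pod -- sh -c 'apt-get update && apt-get install -y curl && curl http://localhost/'
# should print the index.html content written on the node, e.g. "Hello from Kubernetes storage"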

VI. Configuring Tekton workspaces

1. Task configuration

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  resources:
    inputs:
      - name: repo
        type: git
  workspaces:
  - name: pv-claim # workspace name declared by the Task
  steps:
    - name: clone
      image: ubuntu:18.04
      workingDir: /workspace/repo
      script: |
        #!/usr/bin/env sh
        cd $(workspaces.pv-claim.path) # resolved workspace path
        mkdir xx

2. Pipeline configuration

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-example-new
spec:
  resources:
  - name: repo
    type: git
  workspaces:
  - name: local-pc # workspace name declared by the Pipeline
  tasks:
  - name: downcode
    taskRef:
      name: git-clone
    resources:
      inputs:
      - name: repo
        resource: repo
    workspaces:
    - name: pv-claim # must match the workspace name in the Task
      workspace: local-pc # the Pipeline-level workspace it maps to

3. PipelineRun configuration

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-example-new
spec:
  serviceAccountName: tekton-test
  pipelineRef:
    name: pipeline-example-new
  resources:
  - name: repo
    resourceRef:
      name: git
  workspaces:
  - name: local-pc # must match the Pipeline's workspace name
    persistentVolumeClaim:
      claimName: task-pv-claim # name of the PersistentVolumeClaim to bind
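Apply the PipelineRun and follow its logs; the filename is just an example, and the tkn CLI is assumed to be installed:

kubectl apply -f pipelinerun.yaml
tkn pipelinerun logs pipelinerun-example-new -f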
