Table of Contents

Introduction

Steps to make the storage server pod and provision it as a PV (persistent volume)

1. Get the Dockerfile of the storage server

2. Build the docker image oracle/rac-storage-server

3. Make sure SELinux is disabled for your k8s node

4. Install NFS server binary on your k8s node

5. Create the pod that provides the storage server

6. Get the IP of the storage server pod

7. Create storage pv and pvc


Introduction

Oracle RAC database needs an ASM (Automatic Storage Management) storage device, but we don't usually have one at hand. For test or POC purposes, we can use the Oracle RAC storage server to simulate it. This document describes the steps to create the storage server in a Kubernetes (k8s) cluster, based on the official Docker image.

ASM storage in Oracle RAC uses a private network interface (192.168.x.x), so we assume that your k8s cluster has multiple network interface support enabled.

Steps to make the storage server pod and provision it as a PV (persistent volume)

We are going to start a pod to provide the NFS service and provision the storage as a PV.

1. Get the Dockerfile of the storage server

bash-4.2$ git clone https://github.com/oracle/docker-images.git

2. Build the docker image oracle/rac-storage-server

bash-4.2$ cd docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles

bash-4.2$ ./buildDockerImage.sh -v 12.2.0.1
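Once the build finishes, you can optionally confirm that the image is available locally (the exact tag depends on the version you built):

bash-4.2$ docker images | grep rac-storage-server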

3. Make sure SELinux is disabled for your k8s node

bash-4.2$ cat /etc/selinux/config | grep "^SELINUX="

(The returned value should be "disabled"; otherwise change it, as shown below.)
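A minimal sketch for switching it off, assuming the standard /etc/selinux/config location (the permanent change only takes full effect after a reboot):

bash-4.2$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
bash-4.2$ setenforce 0    # permissive for the current boot; "disabled" applies after reboot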

4. Install NFS server binary on your k8s node

bash-4.2$ yum -y install nfs-utils

5. Create the pod that provides the storage server

Here we basically turn the "docker run ..." command in OracleRACStorageServer/README.md into a pod spec:

racnode-storage.yaml


apiVersion: v1
kind: Pod
metadata:
  name: racnode-storage
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  volumes:
  - name: tempdir
    emptyDir: {}
  - name: oradata
    hostPath:
      path: /home/io/asm_vol/ORCLCDB
  - name: cgr
    hostPath:
      path: /sys/fs/cgroup
  - name: cache-volume
    emptyDir:
      medium: Memory
  initContainers:
  - name: get-priv-ip
    image: oraclelinux:7
    # record the 192.168.x.x private (macvlan) IP into /temp/IP_2ND for later use
    command: ["/bin/bash", "-c", "ip a | grep 192.168 | awk '{ print substr($2, 1, index($2, \"/\") - 1) }' 2>&1 | tee /temp/IP_2ND"]
    volumeMounts:
    - name: "tempdir"
      mountPath: "/temp"
  containers:
  - name: racnode-storage
    image: oracle/rac-storage-server:12.2.0.1
    securityContext:
      capabilities:
        add:
        - SYS_ADMIN
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: http_proxy
      value: "@http_proxy@"
    - name: https_proxy
      value: "@https_proxy@"
    - name: no_proxy
      value: "@no_proxy@"
    volumeMounts:
    - name: "tempdir"
      mountPath: "/temp"
    - name: "oradata"
      mountPath: "/oradata"
    - name: "cgr"
      mountPath: "/sys/fs/cgroup"
      readOnly: true
    - name: "cache-volume"
      mountPath: "/run"

Change @http_proxy@, @https_proxy@, and @no_proxy@ to proper values, then apply:

bash-4.2$ kubectl apply -f racnode-storage.yaml

Note 1:

We are using Multus to provide multiple/customized network interfaces. The pod annotation "k8s.v1.cni.cncf.io/networks: macvlan-conf" indicates that this pod will have two network interfaces: one from flannel (the default) and one from macvlan.
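For reference, macvlan-conf itself is defined by a NetworkAttachmentDefinition (this sketch is not part of the original steps). The master interface (eth0) and the 192.168.1.0/24 subnet below are assumptions and must be adapted to your node's NIC and the private network planned for ASM:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'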

Note 2:

It takes 4 to 10 minutes for the racnode-storage pod to become ready. Use the following command to follow the log:
bash-4.2$ kubectl logs -f racnode-storage

The final output should be something like:

####################################################
NFS Server is up and running
Create NFS volume for /oradata/
####################################################

6. Get the IP of the storage server pod

Because the secondary (macvlan) network interface is not visible to kubelet, we have to use the flannel IP instead (we can treat the flannel interface as the private one):

bash-4.2$ kubectl get po -o wide | grep racnode-storage | awk '{print $6}'

10.244.0.44

We'll use it in the next step.
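Since nfs-utils was installed on the node in step 4, we can optionally verify from the node that the pod is exporting /oradata before creating the PV (replace the IP with the one returned above):

bash-4.2$ showmount -e 10.244.0.44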

7. Create storage pv and pvc

Now create a file racnode-pv.yaml and set spec.nfs.server to the IP we just got:

racnode-pv.yaml


apiVersion: v1
kind: PersistentVolume
metadata:
  name: racnode-storage
spec:
  capacity:
    storage: 60G
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
  - hard
  - rw
  - bg
  - tcp
  - vers=3
  - timeo=600
  - rsize=32768
  - wsize=32768
  - actimeo=0
  nfs:
    path: /oradata
    server: 10.244.0.44
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: racnode-storage
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 60G

As you can see, this turns the "docker volume create ..." command in OracleRACStorageServer/README.md into a PV and a PVC.

bash-4.2$ kubectl apply -f racnode-pv.yaml
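You can then check that the PV and the PVC are created and bound:

bash-4.2$ kubectl get pv,pvc | grep racnode-storage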

Now our ASM storage is ready to be consumed.
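As a sketch of how this storage would be consumed later, a hypothetical RAC node pod (the pod name and image below are placeholders, not part of this document) would mount the claim like this:

apiVersion: v1
kind: Pod
metadata:
  name: racnode1                          # hypothetical consumer pod name
spec:
  volumes:
  - name: asm-storage
    persistentVolumeClaim:
      claimName: racnode-storage          # the PVC created above
  containers:
  - name: racnode1
    image: oracle/database-rac:12.2.0.1   # placeholder image, not built in this document
    volumeMounts:
    - name: asm-storage
      mountPath: /oradata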
