Installation

1. Install the NVIDIA driver on the local node

Requirement: NVIDIA drivers ~= 384.81
First, make sure the NVIDIA driver on the host is working: you should be able to run nvidia-smi successfully and see your GPU name, driver version, and CUDA version:

$ nvidia-smi
Thu Jul 14 11:49:33 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0 Off |                  N/A |
|  0%   48C    P8    11W / 200W |      0MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Note that when the GPU driver is installed for the first time, a server reboot is not required.
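
This works because the kernel modules can be loaded directly after installation; a minimal check, assuming the standard nvidia module name:

$ sudo modprobe nvidia    # load the driver module without rebooting
$ lsmod | grep nvidia     # confirm the nvidia kernel modules are loaded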

2. Install nvidia-docker or nvidia-container-toolkit on the local node

nvidia-docker >= 2.0 || nvidia-container-toolkit >= 1.7.0

Prerequisites for running the NVIDIA Container Toolkit (a quick way to check them is sketched after the list):

  • GNU/Linux x86_64 with kernel version > 3.10
  • Docker >= 19.03 (recommended, but some distributions may ship an older version of Docker; the minimum supported version is 1.12)
  • NVIDIA GPU with architecture >= Kepler (or compute capability 3.0)
  • NVIDIA Linux drivers >= 418.81.07 (note that older driver versions or branches are not supported)
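
A minimal sketch for checking these prerequisites on a node (the nvidia-smi query fields are standard, but the exact output varies by driver):

$ uname -r                                                  # kernel version, must be > 3.10
$ docker version --format '{{.Server.Version}}'             # Docker version, >= 19.03 recommended
$ nvidia-smi --query-gpu=name,driver_version --format=csv   # GPU model and driver version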

For example, installing nvidia-container-toolkit on CentOS:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ sudo yum install -y nvidia-container-toolkit

$ rpm -qa | grep nvidia
libnvidia-container-tools-1.10.0-1.x86_64
libnvidia-container1-1.10.0-1.x86_64
nvidia-container-toolkit-1.10.0-1.x86_64
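
On Debian/Ubuntu the equivalent installation uses apt; a sketch based on the repository layout NVIDIA publishes for nvidia-docker (verify the paths for your distribution):

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit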

3. Set each node's default container runtime to nvidia-container-runtime (for containerd, edit /etc/containerd/config.toml as shown below; for Docker, see the daemon.json sketch that follows)

$ cat /etc/containerd/config.toml  | grep BinaryName -C6
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            NoPivotRoot = false
            NoNewKeyring = false
            ShimCgroup = ""
            IoUid = 0
            IoGid = 0
            BinaryName = "/usr/bin/nvidia-container-runtime"  # modify this line
            Root = ""
            CriuPath = ""
            SystemdCgroup = false
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"

$ systemctl daemon-reload
$ systemctl restart containerd
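
If a node runs Docker rather than containerd, the same default-runtime change goes in /etc/docker/daemon.json instead; a sketch using the standard nvidia-docker2 configuration:

$ cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

$ systemctl daemon-reload
$ systemctl restart docker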

4. Deploy the NVIDIA device plugin: kubectl create -f nvidia-device-plugin.yml

# 1.0.0-beta4
$ docker pull nvidia/k8s-device-plugin:1.0.0-beta4
$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml

# or v1.12
$ docker pull nvidia/k8s-device-plugin:1.11
$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.12/nvidia-device-plugin.yml
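
After deploying, you can confirm the device-plugin pods are running; the manifest creates a DaemonSet in kube-system:

$ kubectl get pods -n kube-system | grep nvidia-device-plugin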

5. Check:

Kubernetes will expose amd.com/gpu or nvidia.com/gpu as schedulable resources.

$ kubectl describe node | grep nvidia.com/gpu
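
On a node with a single GPU, the resource should appear once under Capacity and once under Allocatable, roughly like:

 nvidia.com/gpu:     1
 nvidia.com/gpu:     1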

Verification

$ docker run --name hfftest --rm -it --gpus all nvidia/cuda:10.0-base nvidia-smi
Thu Jul 14 04:54:04 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0 Off |                  N/A |
| 21%   49C    P8    16W / 200W |      0MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

k8s:

apiVersion: v1
kind: Pod
metadata:
  name: test-gpu
spec:
  restartPolicy: OnFailure
  containers:
    - name: test-gpu
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
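
Save the manifest (assuming the file name test-gpu.yaml) and create the Pod; the cuda-vector-add image runs a short CUDA sample whose log should report a pass:

$ kubectl create -f test-gpu.yaml
$ kubectl logs test-gpu    # the log should contain "Test PASSED"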

Some limitations:

  • GPUs can only be specified in the limits section, which means (see the snippet after this list):

you can specify GPU limits without specifying requests, and Kubernetes will use the limit as the default request value;
you can specify both limits and requests, but the two values must be equal;
you cannot specify requests without also specifying limits.

  • GPUs are not shared between containers (or Pods), and GPUs cannot be overcommitted.
  • Each container can request one or more GPUs; requesting a fractional GPU is not allowed.
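
For instance, a container that sets both values explicitly must keep them equal (a minimal snippet; the count of 2 is arbitrary):

resources:
  requests:
    nvidia.com/gpu: 2
  limits:
    nvidia.com/gpu: 2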

References

Installation Guide — NVIDIA Cloud Native Technologies documentation