TensorFlow 2 Object Detection API Tutorial: Installation

This tutorial targets TensorFlow 2.4, which is the latest stable version of TensorFlow 2.x at the time of writing.

This is a step-by-step tutorial/guide to setting up and using TensorFlow's Object Detection API to perform object detection in images/videos.

The software tools we will use throughout this tutorial are listed in the table below:

OS             Windows, Linux
Python         3.8.8
TensorFlow     2.4.1
CUDA Toolkit   11.1
CuDNN          8.0.5
Anaconda       Python 3.8 (Optional)

 

Installation

With TensorFlow 2.x, only the tensorflow package needs to be installed; at runtime it automatically checks whether a GPU can be successfully registered.
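If you want to explicitly check which (if any) GPUs TensorFlow has registered, you can run the following one-liner once the tensorflow package is installed (it simply prints the list of physical GPU devices; an empty list means TensorFlow will fall back to the CPU):

python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"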

Anaconda Python 3.8 (Optional)

Although having Anaconda is not a requirement in order to install and use TensorFlow, I suggest doing so, as it provides an intuitive way of managing packages and setting up new virtual environments.

Anaconda is a pretty useful tool, not only for working with TensorFlow, but in general for anyone working in Python, so if you have not had a chance to work with it, now is a good time to do so.
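As an example, a fresh virtual environment for this tutorial could be created and activated with the commands below (the environment name tensorflow is just a suggestion; any name will do):

conda create -n tensorflow python=3.8
conda activate tensorflow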

TensorFlow Installation

Installing TensorFlow requires just 3 simple steps.

Install the TensorFlow PIP package

Run the following command in a Terminal window:

pip install --ignore-installed --upgrade tensorflow==2.4.1

Verify your Installation

Run the following command in a Terminal window:

python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

Once the above is run, you should see a printout similar to the one below:

>>> python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
2021-03-23 14:57:50.288222: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-03-23 14:58:02.417155: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-03-23 14:58:02.461837: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2021-03-23 14:58:02.465044: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-03-23 14:58:02.478936: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-ANONYM
2021-03-23 14:58:02.481918: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-ANONYM
2021-03-23 14:58:02.502360: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-03-23 14:58:02.512893: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
tf.Tensor(234.78662, shape=(), dtype=float32)

GPU Support (Optional)

Although using a GPU to run TensorFlow is not strictly necessary, the computational gains are substantial.

Therefore, if your machine is equipped with a CUDA-enabled GPU, it is recommended that you follow the steps listed below to install the relevant libraries necessary for TensorFlow to make use of your GPU.

By default, when TensorFlow is run it will attempt to register compatible GPU devices. If this fails, TensorFlow will resort to running on the platform's CPU.

This can also be observed in the printout shown under the Verify your Installation step above, where a number of messages report library files that could not be loaded (e.g. Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found).

In order for TensorFlow to run on your GPU, the following requirements must be met:

Prerequisites

  • Nvidia GPU (GTX 650 or newer)

  • CUDA Toolkit v11.1

  • CuDNN v8.0.5

Install the CUDA Toolkit

Windows:

Follow this link to download and install CUDA Toolkit 11.1.

Installation instructions can be found here.

Linux:

Follow this link to download and install CUDA Toolkit 11.1 for your Linux distribution.

Installation instructions can be found here.

Install CUDNN

Windows:

  1. Go to https://developer.nvidia.com/rdp/cudnn-download

  2. Create a user profile if needed and log in

  3. Select cuDNN v8.0.5, for CUDA 11.1

  4. Download the cuDNN Library for Windows (x86)

  5. Extract the contents of the zip file (i.e. the folder named cuda) inside <INSTALL_PATH>\NVIDIA GPU Computing Toolkit\CUDA\v11.1\, where <INSTALL_PATH> points to the installation directory specified during the installation of the CUDA Toolkit. By default <INSTALL_PATH> = C:\Program Files.

Linux:

  1. Go to https://developer.nvidia.com/rdp/cudnn-download

  2. Create a user profile if needed and log in

  3. Select cuDNN v8.0.5, for CUDA 11.1

  4. Download the cuDNN Library for Linux (x86_64)

  5. Follow the instructions under Section 2.3.1 of the CuDNN Installation Guide to install CuDNN.

Environment Setup

Windows:

  •  Go to Start and Search “environment variables”

  • Click “Edit the system environment variables”. This should open the “System Properties” window

  • In the opened window, click the “Environment Variables…” button to open the “Environment Variables” window.

  • Under “System variables”, search for and click on the Path system variable, then click “Edit…”

  • Add the following paths, then click “OK” to save the changes:

<INSTALL_PATH>\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin

<INSTALL_PATH>\NVIDIA GPU Computing Toolkit\CUDA\v11.1\libnvvp

<INSTALL_PATH>\NVIDIA GPU Computing Toolkit\CUDA\v11.1\extras\CUPTI\lib64

<INSTALL_PATH>\NVIDIA GPU Computing Toolkit\CUDA\v11.1\cuda\bin
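Once the paths have been added, you can open a new Command Prompt and check that the relevant DLLs are found through the Path (a quick sanity check; cudart64_110.dll is the runtime DLL shipped with CUDA 11.x and cudnn64_8.dll the one shipped with cuDNN 8.x):

where cudart64_110.dll
where cudnn64_8.dll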

 

Linux:

As per Section 7.1.1 of the CUDA Installation Guide for Linux, append the following lines to ~/.bashrc:

# CUDA related exports
export PATH=/usr/local/cuda-11.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
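After appending these lines, you can reload the shell configuration and verify that the CUDA compiler is picked up from the expected location (assuming the toolkit was installed under /usr/local/cuda-11.1):

source ~/.bashrc
nvcc --version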

Update your GPU drivers (Optional)

If, during the installation of the CUDA Toolkit (see Install the CUDA Toolkit), you selected the Express Installation option, then your GPU drivers will have been overwritten by those that come bundled with the CUDA Toolkit.

These drivers are typically NOT the latest drivers and, thus, you may wish to update them.

  1. Go to http://www.nvidia.com/Download/index.aspx

  2. Select your GPU version to download

  3. Install the driver for your chosen OS

Verify the installation

Run the following command in a NEW Terminal window:

python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

NOTE: A new Terminal window MUST be opened for the changes to the environment variables to take effect!

>>>python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
2021-03-23 15:12:09.114595: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-03-23 15:12:12.716227: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-03-23 15:12:12.727102: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2021-03-23 15:12:12.732382: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-03-23 15:12:12.738739: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-ANONYM
2021-03-23 15:12:12.742590: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-ANONYM
2021-03-23 15:12:12.745358: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-03-23 15:12:12.750673: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
tf.Tensor(1659.4142, shape=(), dtype=float32)

If the GPU libraries have been installed correctly, the printout above should now report that the relevant library files are Successfully opened, and a debugging message should confirm that TensorFlow has successfully created a TensorFlow device for your GPU. (In the example output shown here, the CUDA driver nvcuda.dll could still not be loaded, so this particular machine keeps falling back to the CPU.)

 

Install the TensorFlow Object Detection API

Now that you have installed TensorFlow, it is time to install the TensorFlow Object Detection API.

Downloading the TensorFlow Model Garden

  • Create a new folder under a path of your choice and name it TensorFlow. (e.g. C:\Users\sglvladi\Documents\TensorFlow).

  • From your Terminal cd into the TensorFlow directory.

  • To download the models you can either use Git to clone the TensorFlow Models repository inside the TensorFlow folder (see the example commands after the folder listing below), or you can simply download it as a ZIP and extract its contents inside the TensorFlow folder. To keep things consistent, in the latter case you will have to rename the extracted folder models-master to models.

  • You should now have a single folder named models under your TensorFlow folder, which contains another 4 folders as such:

 

TensorFlow/
└─ models/
   ├─ community/
   ├─ official/
   ├─ orbit/
   ├─ research/
   └── ...
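For the Git option mentioned above, the commands could look like the following (this simply clones the official tensorflow/models repository; Git must be installed for this to work):

# From within the TensorFlow/ folder
git clone https://github.com/tensorflow/models.git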

Protobuf Installation/Compilation

The Tensorflow Object Detection API uses Protobufs to configure model and training parameters. Before the framework can be used, the Protobuf libraries must be downloaded and compiled.

This should be done as follows:

  • Head to the protoc releases page

  • Download the latest protoc-*-*.zip release (e.g. protoc-3.12.3-win64.zip for 64-bit Windows)

  • Extract the contents of the downloaded protoc-*-*.zip in a directory <PATH_TO_PB> of your choice (e.g. C:\Program Files\Google Protobuf)

  • Add <PATH_TO_PB> to your Path environment variable (see Environment Setup)

  • In a new Terminal, cd into the TensorFlow/models/research/ directory and run the following command:

# From within TensorFlow/models/research/
protoc object_detection/protos/*.proto --python_out=.

NOTE:

If you are on Windows and using Protobuf 3.5 or later, the multi-file selection wildcard (i.e. *.proto) may not work, but you can do one of the following:

Windows Powershell:

# From within TensorFlow/models/research/
Get-ChildItem object_detection/protos/*.proto | foreach {protoc "object_detection/protos/$($_.Name)" --python_out=.}

Command Prompt:

# From within TensorFlow/models/research/
for /f %i in ('dir /b object_detection\protos\*.proto') do protoc object_detection\protos\%i --python_out=.

NOTE: You MUST open a new Terminal for the changes in the environment variables to take effect.
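To check that the compilation step worked, you can verify that a *_pb2.py file has now been generated next to each .proto file (a simple sanity check; on Windows Command Prompt use dir object_detection\protos\*_pb2.py instead):

# From within TensorFlow/models/research/
ls object_detection/protos/*_pb2.py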

 

COCO API installation

As of TensorFlow 2.x, the pycocotools package is listed as a dependency of the Object Detection API.

Ideally, this package should get installed when the Object Detection API is installed (as documented in the Install the Object Detection API section below), but the installation can fail for various reasons, and therefore it is simpler to just install the package beforehand, in which case the later installation will simply be skipped.

Windows:

Run the following commands to install pycocotools with Windows support:

pip install cython
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

Note that, according to the package's instructions, the Visual C++ 2015 build tools must be installed and on your path. If they are not, make sure to install them from here.

Linux:

Download cocoapi to a directory of your choice, then make and copy the pycocotools subfolder into the TensorFlow/models/research directory, as shown below:

git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools <PATH_TO_TF>/TensorFlow/models/research/

NOTE:

The default metrics are based on those used in Pascal VOC evaluation.

  1. To use the COCO object detection metrics, add metrics_set: "coco_detection_metrics" to the eval_config message in the config file, as sketched after this list.
  2. To use the COCO instance segmentation metrics, add metrics_set: "coco_mask_metrics" to the eval_config message in the config file.
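For example, the relevant part of a pipeline config file could look roughly like the sketch below (only the eval_config message is shown; all other fields of your config stay unchanged):

eval_config: {
  metrics_set: "coco_detection_metrics"
}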

 

Install the Object Detection API

Installation of the Object Detection API is achieved by installing the object_detection package.

This is done by running the following commands from within TensorFlow/models/research:

 

# From within TensorFlow/models/research/
cp object_detection/packages/tf2/setup.py .
python -m pip install .

NOTE:

During the above installation, you may come across the following error:

ERROR: Command errored out with exit status 1:
     command: 'C:\Users\sglvladi\Anaconda3\envs\tf2\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sglvladi\\AppData\\Local\\Temp\\pip-install-yn46ecei\\pycocotools\\setup.py'"'"'; __file__='"'"'C:\\Users\\sglvladi\\AppData\\Local\\Temp\\pip-install-yn46ecei\\pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sglvladi\AppData\Local\Temp\pip-record-wpn7b6qo\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\sglvladi\Anaconda3\envs\tf2\Include\pycocotools'
         cwd: C:\Users\sglvladi\AppData\Local\Temp\pip-install-yn46ecei\pycocotools\
    Complete output (14 lines):
    running install
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-3.8
    creating build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\coco.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\mask.py -> build\lib.win-amd64-3.8\pycocotools
    copying pycocotools\__init__.py -> build\lib.win-amd64-3.8\pycocotools
    running build_ext
    skipping 'pycocotools\_mask.c' Cython extension (up-to-date)
    building 'pycocotools._mask' extension
    error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
    ----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\sglvladi\Anaconda3\envs\tf2\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sglvladi\\AppData\\Local\\Temp\\pip-install-yn46ecei\\pycocotools\\setup.py'"'"'; __file__='"'"'C:\\Users\\sglvladi\\AppData\\Local\\Temp\\pip-install-yn46ecei\\pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sglvladi\AppData\Local\Temp\pip-record-wpn7b6qo\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\sglvladi\Anaconda3\envs\tf2\Include\pycocotools' Check the logs for full command output.

This is caused by the installation of the pycocotools package having failed. To fix this, have a look at the COCO API installation section and rerun the above commands.
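Before running the full test below, you can also do a very quick check that the package can be imported from Python (a minimal sanity check and nothing more):

python -c "import object_detection; print('object_detection imported successfully')"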

 

Test your Installation

To test the installation, run the following command from within TensorFlow/models/research:

# From within TensorFlow/models/research/
python object_detection/builders/model_builder_tf2_test.py

Once the above is run, allow some time for the test to complete. Once it has finished, you should observe a printout that reports the number of tests run and ends with an OK status (a few tests may be reported as skipped).

Try out the examples

If the previous steps have completed successfully, it means you have installed all the components necessary to perform object detection using pre-trained models.

If you want to go through a few examples of how to do this, now would be a good time to have a look at the Examples section.
