Converting a Pretrained TensorFlow Model to ONNX

Following tensorflow-onnx (GitHub), we use Colaboratory to convert a pretrained TensorFlow model to an ONNX model.

Requirements

The following versions were confirmed to work:

  • python: 3.6.8
  • tensorflow: 1.14.0
  • onnx: 1.5.0
  • tf2onnx: 1.5.3

Installation

Install it with the following command:

In [1]:
!pip install --user -U tf2onnx
Requirement already up-to-date: tf2onnx in /root/.local/lib/python3.6/site-packages (1.5.3)
Requirement already satisfied, skipping upgrade: onnx>=1.4.1 in /root/.local/lib/python3.6/site-packages (from tf2onnx) (1.5.0)
Requirement already satisfied, skipping upgrade: numpy>=1.14.1 in /usr/local/lib/python3.6/dist-packages (from tf2onnx) (1.16.4)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from tf2onnx) (1.12.0)
Requirement already satisfied, skipping upgrade: requests in /usr/local/lib/python3.6/dist-packages (from tf2onnx) (2.21.0)
Requirement already satisfied, skipping upgrade: typing>=3.6.4 in /usr/local/lib/python3.6/dist-packages (from onnx>=1.4.1->tf2onnx) (3.7.4)
Requirement already satisfied, skipping upgrade: typing-extensions>=3.6.2.1 in /root/.local/lib/python3.6/site-packages (from onnx>=1.4.1->tf2onnx) (3.7.4)
Requirement already satisfied, skipping upgrade: protobuf in /usr/local/lib/python3.6/dist-packages (from onnx>=1.4.1->tf2onnx) (3.7.1)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->tf2onnx) (2019.6.16)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->tf2onnx) (3.0.4)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->tf2onnx) (1.24.3)
Requirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->tf2onnx) (2.8)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->onnx>=1.4.1->tf2onnx) (41.0.1)
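
To double-check which versions are actually in use, a quick sanity check from Python works as well (a minimal sketch; it assumes each package exposes the usual __version__ attribute):

In [0]:
# Print the versions of the relevant packages
import tensorflow as tf
import onnx
import tf2onnx

print("tensorflow:", tf.__version__)
print("onnx:", onnx.__version__)
print("tf2onnx:", tf2onnx.__version__)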

Converting SSD MobileNet from TensorFlow to ONNX

Last time, I wrote about real-time object detection with ONNX Runtime and YOLOv3 (ONNX RuntimeとYoloV3でリアルタイム物体検出 | はやぶさの技術ノート).
This time, we'll put real-time object detection with SSD into practice.

Several deep learning frameworks can export ONNX models, but
for pretrained SSD-family models, the Tensorflow detection model zoo is especially well stocked.

So we convert a TensorFlow model to an ONNX model, and finally run object detection (inference) with ONNX Runtime.

Define some environment variables

First, define the paths, file names, and folder names:

In [0]:
import os
import sys

ROOT = os.getcwd()
WORK = os.path.join(ROOT, "work")
MODEL = "ssdlite_mobilenet_v2_coco_2018_05_09"
os.makedirs(WORK, exist_ok=True)

# force tf2onnx to cpu (hide the GPUs from TensorFlow)
os.environ['CUDA_VISIBLE_DEVICES'] = "-1"
os.environ['MODEL'] = MODEL
os.environ['WORK'] = WORK

Downloading the SSD model

Pick whichever model you like from the Tensorflow detection model zoo.

Personally, I'm fond of real-time object detection, so I'll go with ssdlite_mobilenet_v2_coco, which runs light and fast.

Download it with the following commands:

Download the pretrained SSD model

In [3]:
!cd $WORK; wget -q http://download.tensorflow.org/models/object_detection/$MODEL.tar.gz
!cd $WORK; tar zxvf $MODEL.tar.gz
ssdlite_mobilenet_v2_coco_2018_05_09/checkpoint
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.data-00000-of-00001
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.meta
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.index
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/saved_model.pb
ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb
ssdlite_mobilenet_v2_coco_2018_05_09/
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/variables/
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/

Checking the downloaded model

List the files that were downloaded:

In [4]:
!ls $WORK/$MODEL
checkpoint			model.ckpt.index  saved_model
frozen_inference_graph.pb	model.ckpt.meta
model.ckpt.data-00000-of-00001	pipeline.config

The following command shows how the input and the output names are defined:

In [5]:
!saved_model_cli show --dir $WORK/$MODEL/saved_model/ --tag_set serve  --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
  inputs['inputs'] tensor_info:
      dtype: DT_UINT8
      shape: (-1, -1, -1, 3)
      name: image_tensor:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100, 4)
      name: detection_boxes:0
  outputs['detection_classes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100)
      name: detection_classes:0
  outputs['detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100)
      name: detection_scores:0
  outputs['num_detections'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1)
      name: num_detections:0
Method name is: tensorflow/serving/predict

The input is as follows:

The input is a batch of images in NHWC format, and the data type used is uint8.
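
As a minimal sketch of preparing such an input (assuming numpy and Pillow are available, and using a hypothetical local image sample.jpg):

In [0]:
import numpy as np
from PIL import Image

# Load an image and add a batch dimension: NHWC layout, uint8 dtype
img = Image.open("sample.jpg")               # hypothetical example image
img_data = np.asarray(img, dtype=np.uint8)   # H x W x C
img_data = np.expand_dims(img_data, axis=0)  # 1 x H x W x C (NHWC)
print(img_data.shape, img_data.dtype)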

Convert the model to ONNX

The pretrained model comes saved both as frozen_inference_graph.pb and as a saved_model, so I'll show how to convert each of them to an ONNX model.

From saved_model

Use tf2onnx to convert the TensorFlow saved_model to an ONNX model:

In [6]:
!python -m tf2onnx.convert --opset 10 --fold_const --saved-model $WORK/$MODEL/saved_model --output $WORK/$MODEL.onnx
2019-08-03 15:49:52,485 - WARNING - From /root/.local/lib/python3.6/site-packages/tf2onnx/verbose_logging.py:72: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2019-08-03 15:49:57,917 - INFO - Using tensorflow=1.14.0, onnx=1.5.0, tf2onnx=1.5.3/7b598d
2019-08-03 15:49:57,917 - INFO - Using opset <onnx, 10>
2019-08-03 15:49:59,220 - WARNING - Cannot infer shape for Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros:0
2019-08-03 15:50:26,083 - INFO - Optimizing ONNX model
2019-08-03 15:53:19,932 - INFO - After optimization: Add -88 (382->294), Cast -374 (1269->895), Const -1015 (2732->1717), Div -2 (15->13), Gather +6 (554->560), Identity -944 (947->3), Reshape -30 (327->297), Shape -1 (111->110), Slice -1 (297->296), Squeeze -2 (669->667), Transpose -167 (361->194), Unsqueeze -134 (429->295)
2019-08-03 15:53:21,052 - INFO - 
2019-08-03 15:53:21,053 - INFO - Successfully converted TensorFlow model /content/work/ssdlite_mobilenet_v2_coco_2018_05_09/saved_model to ONNX
2019-08-03 15:53:21,097 - INFO - ONNX model is saved at /content/work/ssdlite_mobilenet_v2_coco_2018_05_09.onnx
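
Before moving on, a quick structural check of the converted model doesn't hurt. A minimal sketch using the onnx package (reusing the WORK and MODEL variables defined earlier):

In [0]:
import os
import onnx

# Load the converted model and run the ONNX checker on it
onnx_model = onnx.load(os.path.join(WORK, MODEL + ".onnx"))
onnx.checker.check_model(onnx_model)

# List the graph's input and output names
print([i.name for i in onnx_model.graph.input])
print([o.name for o in onnx_model.graph.output])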

From frozen_inference_graph.pb

Likewise, use tf2onnx to convert the TensorFlow frozen_inference_graph.pb to an ONNX model:

In [7]:
!python -m tf2onnx.convert --graphdef $WORK/$MODEL/frozen_inference_graph.pb --output $WORK/$MODEL.frozen.onnx \
    --fold_const --opset 10 \
    --inputs image_tensor:0 \
    --outputs num_detections:0,detection_boxes:0,detection_scores:0,detection_classes:0
2019-08-03 15:53:24,563 - WARNING - From /root/.local/lib/python3.6/site-packages/tf2onnx/verbose_logging.py:72: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2019-08-03 15:53:27,696 - INFO - Using tensorflow=1.14.0, onnx=1.5.0, tf2onnx=1.5.3/7b598d
2019-08-03 15:53:27,696 - INFO - Using opset <onnx, 10>
2019-08-03 15:53:28,929 - WARNING - Cannot infer shape for Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros:0
2019-08-03 15:53:55,470 - INFO - Optimizing ONNX model
2019-08-03 15:56:47,374 - INFO - After optimization: Add -88 (382->294), Cast -374 (1269->895), Const -1015 (2732->1717), Div -2 (15->13), Gather +6 (554->560), Identity -944 (947->3), Reshape -30 (327->297), Shape -1 (111->110), Slice -1 (297->296), Squeeze -2 (669->667), Transpose -167 (361->194), Unsqueeze -134 (429->295)
2019-08-03 15:56:48,487 - INFO - 
2019-08-03 15:56:48,487 - INFO - Successfully converted TensorFlow model /content/work/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb to ONNX
2019-08-03 15:56:48,528 - INFO - ONNX model is saved at /content/work/ssdlite_mobilenet_v2_coco_2018_05_09.frozen.onnx
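
Since the end goal is inference with ONNX Runtime, a quick smoke test is possible right here in the notebook. A minimal sketch (it assumes the onnxruntime package is installed via pip, and reuses the NHWC uint8 img_data from the earlier sketch):

In [0]:
import os
import onnxruntime as rt

# Create an inference session from the converted model
sess = rt.InferenceSession(os.path.join(WORK, MODEL + ".onnx"))
input_name = sess.get_inputs()[0].name  # expected: "image_tensor:0"

# Run detection on a single NHWC uint8 image and inspect the outputs
outputs = sess.run(None, {input_name: img_data})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)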

Saving the ONNX model to Google Drive

To use the converted ONNX model on an inference device, we save it to Google Drive and then download it from there.

First, following 【秒速で無料GPUを使う】深層学習実践Tips on Colaboratory, save the ONNX model to Google Drive in the following steps:

  1. Mount Google Drive
  2. Copy the ONNX model to Google Drive

Mount Google Drive with the following code:

In [8]:
from google.colab import drive
drive.mount('/content/drive')
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code

Enter your authorization code:
··········
Mounted at /content/drive

If the mount succeeds, the following command shows the contents of your Google Drive:

In [9]:
!ls /content/drive/'My Drive'
 Chainer  'Colab Notebooks'   Cpp   python   Pytorch   はやぶさの技術ノート集

Copy the ONNX model to Google Drive with the following command:

In [0]:
!cp $WORK/$MODEL.onnx /content/drive/'My Drive'

Listing the contents of Google Drive again, you can confirm that ssdlite_mobilenet_v2_coco_2018_05_09.onnx has been saved:

In [11]:
!ls /content/drive/'My Drive'
 Chainer	    Pytorch
'Colab Notebooks'   ssdlite_mobilenet_v2_coco_2018_05_09.onnx
 Cpp		    はやぶさの技術ノート集
 python

This time we obtained ssdlite_mobilenet_v2_coco_2018_05_09.onnx, but
other ONNX models can be obtained by the same procedure.
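
For example, converting a different archive from the zoo only requires changing the MODEL definition; a sketch follows (ssd_mobilenet_v2_coco_2018_03_29 is assumed here as an illustrative archive name; check the zoo page for the exact names):

In [0]:
# Swap in another archive name from the model zoo; the download,
# conversion, and copy steps above stay exactly the same
MODEL = "ssd_mobilenet_v2_coco_2018_03_29"
os.environ['MODEL'] = MODEL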