This sample, sampleOnnxMnistCoordConvAC, converts a model trained on the MNIST dataset in Open Neural Network Exchange (ONNX) format to a TensorRT network and runs inference on that network. The model was trained in PyTorch and contains custom CoordConv layers in place of standard Conv layers.
Training script for the model with CoordConvAC layers, along with the PyTorch implementation of the CoordConv layers: link
Original model with standard Conv layers: link
CoordConv is a layer proposed by Uber AI Labs in 2018. It improves on the regular Conv layer by appending additional channels containing relative coordinates to the input data, and it is used in classification, detection, segmentation, and other network architectures. The CoordConv layer maps to the CoordConvAC_TRT custom plugin, implemented in TensorRT for fast inference. The plugin can be found at TensorRT/plugin/coordConvACPlugin, and additional information about the layer and the plugin implementation is available in TensorRT/plugin/coordConvACPlugin/README.md.
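As a rough illustration of what the layer adds, the following minimal sketch (an illustration, not the plugin's actual CUDA kernel) fills the two extra coordinate channels for an H x W input; the buffer names and the [-1, 1] normalization from the CoordConv paper are assumptions here:
```
#include <vector>

// Fill two extra channels with relative x and y coordinates normalized to
// [-1, 1], as in the CoordConv paper. H and W are the input's spatial
// dimensions; xChannel and yChannel are hypothetical output buffers that get
// concatenated to the input before the convolution runs.
void fillCoordChannels(int H, int W, std::vector<float>& xChannel, std::vector<float>& yChannel)
{
    xChannel.resize(H * W);
    yChannel.resize(H * W);
    for (int y = 0; y < H; ++y)
    {
        for (int x = 0; x < W; ++x)
        {
            xChannel[y * W + x] = (W > 1) ? 2.0f * x / (W - 1) - 1.0f : 0.0f;
            yChannel[y * W + x] = (H > 1) ? 2.0f * y / (H - 1) - 1.0f : 0.0f;
        }
    }
}
```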
ONNX is a standard for representing deep learning models that enables models to be transferred between frameworks.
This sample creates and runs a TensorRT engine on an ONNX model of MNIST trained with CoordConv layers. It demonstrates how TensorRT can parse and import ONNX models, as well as use plugins to run custom layers in neural networks.
Specifically, this sample:
- Converts the ONNX model with CoordConv custom layers to a TensorRT network
- Builds a TensorRT engine from that network
- Runs inference on the engine to classify a digit
The model file can be converted to a TensorRT network using the ONNX parser. The parser is initialized with the network definition that it will populate and with the logger object:
```
auto parser = nvonnxparser::createParser(*network, sample::gLogger.getTRTLogger());
```
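For context, here is a minimal sketch of how the network definition handed to the parser can be created, assuming the TensorRT 7-era API used elsewhere in this README (the explicit-batch flag is required for networks that will be populated by the ONNX parser):
```
// Create the builder, then an empty explicit-batch network definition that
// the ONNX parser will populate.
auto builder = nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger());
const auto explicitBatch = 1U << static_cast<uint32_t>(
    nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = builder->createNetworkV2(explicitBatch);
```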
The plugin library must also be initialized so that the parser can resolve custom layers implemented as plugins:
```
initLibNvInferPlugins(&sample::gLogger, "ONNXTRT_NAMESPACE");
```
The ONNX model file is then passed to the parser along with the logging level.
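The call itself is not shown above; a minimal sketch, assuming the standard nvonnxparser::IParser::parseFromFile API and the sample's logging helpers (the model path is the one shown in the sample output below):
```
// Parse the ONNX file. Errors are reported through the logger at the given
// verbosity, so on failure we only need to bail out.
if (!parser->parseFromFile("data/mnist/mnist_with_coordconv.onnx",
        static_cast<int>(sample::gLogger.getReportableSeverity())))
{
    return false;
}
```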
To view additional information about the network, including layer information and individual layer dimensions, issue the following call:
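The call is missing from the text above; assuming the ONNX parser API of this TensorRT generation (the same call documented for the sibling sampleOnnxMNIST), it would be:
```
parser->reportParsingInfo();
```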
After the TensorRT network is constructed by parsing the model, the TensorRT engine can be built to run inference.
To build the engine, create the builder and pass it the logger created for TensorRT, which is used to report errors, warnings, and informational messages:
```
IBuilder* builder = createInferBuilder(sample::gLogger);
```
To build the engine from the generated TensorRT network, issue the following call:
```
nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
```
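A minimal sketch of that build step with basic error handling, assuming the builder-level options of this API generation; the workspace size is an arbitrary example value:
```
// Give TensorRT scratch memory to use during engine construction, then build
// and verify the engine. 1 << 24 bytes (16 MiB) is an arbitrary choice.
builder->setMaxWorkspaceSize(1 << 24);
nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
if (!engine)
{
    return false; // build errors were already reported through the logger
}
```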
After you build the engine, verify that the engine is running properly by confirming the output is what you expected. The output format of this sample should be the same as the output of sampleMNIST.
To run inference using the created engine, see Performing Inference In C++.
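As a minimal sketch of that step, assuming device buffers matching the engine's input and output bindings have already been allocated (the real sample wraps buffer management in a common helper; deviceInput and deviceOutput are hypothetical):
```
// Create an execution context and run one synchronous inference pass.
nvinfer1::IExecutionContext* context = engine->createExecutionContext();
void* bindings[] = {deviceInput, deviceOutput}; // hypothetical CUDA buffers
bool status = context->executeV2(bindings);
```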
Note: It is important to preprocess the data and convert it to the format accepted by the network. In this sample, the input is in PGM (portable graymap) format. The model expects a 1x28x28 input image scaled to the range [0,1].

Note: Additional preprocessing must be applied to the data before it is fed to the network, because the same normalization, transforms.Normalize((0.1307,), (0.3081,)), was used when the model was trained.
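A minimal sketch of that preprocessing, assuming the raw 8-bit PGM pixels have been read into pgmPixels and the network's input buffer is hostInput (both names are hypothetical):
```
// Scale each 8-bit pixel to [0, 1], then apply the same normalization used
// at training time: (x - 0.1307) / 0.3081.
for (int i = 0; i < 28 * 28; ++i)
{
    const float scaled = static_cast<float>(pgmPixels[i]) / 255.0f;
    hostInput[i] = (scaled - 0.1307f) / 0.3081f;
}
```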
In this sample, the following layers and plugins are used. For more information about these layers, see the TensorRT Developer Guide: Layers documentation.
- CoordConvAC layer: A custom layer, implemented with the CUDA API, that performs the AddChannels operation. It expands the input data by adding channels with relative coordinates.
- Activation layer: The Activation layer implements element-wise activation functions. Specifically, this sample uses the Activation layer with the type kRELU.
- Convolution layer: The Convolution layer computes a 2D (channel, height, and width) convolution, with or without bias.
- FullyConnected layer: The FullyConnected layer implements a matrix-vector product, with or without bias.
- Pooling layer: The Pooling layer implements pooling within a channel. Supported pooling types are maximum, average, and maximum-average blend.
- Scale layer: The Scale layer implements a per-tensor, per-channel, or per-element affine transformation and/or exponentiation by constant values.
- Shuffle layer: The Shuffle layer implements a reshape and transpose operator for tensors.
Compile this sample by running make in the <TensorRT root directory>/samples/sampleOnnxMnistCoordConvAC directory. The binary named sample_onnx_mnist_coord_conv_ac will be created in the <TensorRT root directory>/bin directory.
```
cd <TensorRT root directory>/samples/sampleOnnxMnistCoordConvAC
make
```
Where <TensorRT root directory> is where you installed TensorRT.
When run, the sample produces output similar to the following:
```
Input filename:   data/mnist/mnist_with_coordconv.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:
Producer version:
Domain:
Model version:    0

[I] Input:
(28x28 ASCII-art rendering of the input digit, a handwritten "2")

[I] Output:
Prob 0  0.0001 Class 0:
Prob 1  0.0003 Class 1:
Prob 2  0.9975 Class 2: **********
Prob 3  0.0009 Class 3:
Prob 4  0.0000 Class 4:
Prob 5  0.0001 Class 5:
Prob 6  0.0001 Class 6:
Prob 7  0.0000 Class 7:
Prob 8  0.0009 Class 8:
Prob 9  0.0000 Class 9:

&&&& PASSED TensorRT.sample_coord_conv_ac_onnx_mnist # ./sample_onnx_mnist_coord_conv_ac
```
This output shows that the sample ran successfully; PASSED.
To see the full list of available options and their descriptions, use the -h or --help command-line option. For example:
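```
./sample_onnx_mnist_coord_conv_ac --help
```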
The following resources provide a deeper understanding of the ONNX project and the MNIST model:
- CoordConv Layer
- ONNX
- Models
- Documentation
For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.
April 2020: This README.md file was recreated, updated, and reviewed.
There are no known issues in this sample.