trtexec
Included in the samples directory is a command-line wrapper tool called trtexec. trtexec is a tool to quickly utilize TensorRT without having to develop your own application. The trtexec tool has two main purposes:
Benchmarking network - If you have a model saved as a UFF file, an ONNX file, or a network description in Caffe prototxt format, you can use the trtexec tool to test the performance of running inference on your network using TensorRT. The trtexec tool has many options for specifying inputs and outputs, iterations for performance timing, allowed precisions, and other options.
Serialized engine generation - If you generate a saved serialized engine file, you can pull it into another application that runs inference. For example, you can use the TensorRT Laboratory to run the engine with multiple execution contexts from multiple threads in a fully pipelined asynchronous way to test parallel inference performance. There are some caveats; for example, if you use a Caffe prototxt file and a model is not supplied, random weights are generated. Also, in INT8 mode, random weights are used, meaning trtexec does not provide calibration capability.
trtexec can be used to build engines, using different TensorRT features (see command line arguments), and run inference. trtexec also measures and reports execution time and can be used to understand performance and possibly locate bottlenecks.
Compile this sample by running make in the <TensorRT root directory>/samples/trtexec directory. The binary named trtexec will be created in the <TensorRT root directory>/bin directory, where <TensorRT root directory> is where you installed TensorRT.
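As a rough sketch, the build step typically looks like the following, with <TensorRT root directory> standing in for your installation path:
cd <TensorRT root directory>/samples/trtexec
make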
trtexec can build engines from models in Caffe, UFF, or ONNX format.
The example below shows how to load a model description and its weights, build the engine that is optimized for batch size 16, and save it to a file.
trtexec --deploy=/path/to/mnist.prototxt --model=/path/to/mnist.caffemodel --output=prob --batch=16 --saveEngine=mnist16.trt
Then, the same engine can be used for benchmarking; the example below shows how to load the engine and run inference on batch 16 inputs (randomly generated).
trtexec --loadEngine=mnist16.trt --batch=16
You can profile a custom layer using the IPluginRegistry for the plugins and trtexec. You will need to first register the plugin with IPluginRegistry.
If you are using TensorRT shipped plugins, you should load the libnvinfer_plugin.so file, as these plugins are pre-registered.
If you have your own plugin, then it has to be registered explicitly. The following macro can be used to register the plugin creator YourPluginCreator with the IPluginRegistry.
REGISTER_TENSORRT_PLUGIN(YourPluginCreator);
To run the AlexNet network on NVIDIA DLA (Deep Learning Accelerator) using trtexec in FP16 mode, issue:
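A representative invocation, assuming a placeholder path to the AlexNet prototxt and DLA core 0, might be:
trtexec --deploy=/path/to/AlexNet.prototxt --output=prob --useDLACore=0 --fp16 --allowGPUFallback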
To run the AlexNet network on DLA using trtexec in INT8 mode, issue:
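Along the same lines, with a placeholder model path:
trtexec --deploy=/path/to/AlexNet.prototxt --output=prob --useDLACore=0 --int8 --allowGPUFallback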
To run the MNIST network on DLA using trtexec, issue:
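A sketch with a placeholder path to the MNIST prototxt (DLA requires FP16 or INT8 precision):
trtexec --deploy=/path/to/mnist.prototxt --output=prob --useDLACore=0 --fp16 --allowGPUFallback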
For more information about DLA, see Working With DLA.
To run an ONNX model in full-dimensions mode with static input shapes:
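A minimal sketch, using model.onnx as a placeholder file name:
trtexec --onnx=model.onnx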
The following examples assume an ONNX model with one dynamic input named input and dimensions [-1, 3, 244, 244].
To run an ONNX model in full-dimensions mode with a given input shape:
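For example, fixing the batch dimension to 32 (the shape values here are illustrative):
trtexec --onnx=model.onnx --shapes=input:32x3x244x244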
To benchmark your ONNX model with a range of possible input shapes:
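A sketch using the min/opt/max shape options, with illustrative batch sizes:
trtexec --onnx=model.onnx --minShapes=input:1x3x244x244 --optShapes=input:16x3x244x244 --maxShapes=input:32x3x244x244 --shapes=input:5x3x244x244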
When running, trtexec prints the measured performance, but can also export the measurement trace to a JSON file:
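For example, reusing the engine saved earlier and a placeholder output file name:
trtexec --loadEngine=mnist16.trt --batch=16 --exportTimes=trace.json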
Once the trace is stored in a file, it can be printed using the tracer.py utility. This tool prints timestamps and durations of input, compute, and output, in different forms:
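A typical invocation, assuming tracer.py accepts the trace file as its argument:
./tracer.py trace.json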
Similarly, profiles can also be printed and stored in a JSON file. The utility profiler.py can be used to read and print the profile from a JSON file.
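As a sketch, first export a per-layer profile and then print it, assuming profiler.py likewise takes the profile file as its argument (file names are placeholders):
trtexec --loadEngine=mnist16.trt --batch=16 --dumpProfile --exportProfile=profile.json
./profiler.py profile.json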
Tuning throughput may require running multiple concurrent streams of execution. This is the case, for example, when the achieved latency is well within the desired threshold and throughput can be increased, even at the expense of some latency. For example, save engines for batch sizes 1 and 2 and assume that both execute within 2 ms, the latency threshold:
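As a sketch, the two engines could be built from a placeholder prototxt, using --buildOnly to skip the timing run:
trtexec --deploy=/path/to/mnist.prototxt --output=prob --batch=1 --saveEngine=b1.trt --buildOnly
trtexec --deploy=/path/to/mnist.prototxt --output=prob --batch=2 --saveEngine=b2.trt --buildOnly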
Now, the saved engines can be tried to find the combination of batch size and number of streams that stays below the 2 ms threshold while maximizing throughput:
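For example, sweeping a few batch/stream combinations with the saved engines:
trtexec --loadEngine=b1.trt --batch=1 --streams=2
trtexec --loadEngine=b1.trt --batch=1 --streams=3
trtexec --loadEngine=b2.trt --batch=2 --streams=2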
To see the full list of available options and their descriptions, issue the ./trtexec --help command.
Note: Specifying the --safe parameter turns the safety mode switch ON. By default, the --safe parameter is not specified; the safety mode switch is OFF. The layers and parameters that are contained within the --safe subset are restricted if the switch is set to ON. The switch is used for prototyping the safety restricted flows until the TensorRT safety runtime is made available. For more information, see the Working With Automotive Safety section in the TensorRT Developer Guide.
The following resources provide more details about trtexec:
Documentation
For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.
April 2019: This is the first release of this README.md file.
There are no known issues in this sample.