NVIDIA TensorRT 7.2.1.6
“Hello World” For Multilayer Perceptron (MLP)

Table Of Contents

Description

This sample, sampleMLP, is a simple hello-world example that shows how to create a network that triggers the multilayer perceptron (MLP) optimizer. The MLP optimizer then generates specialized code that accelerates inference of the network in TensorRT.

How does this sample work?

This sample uses a publicly accessible TensorFlow tutorial to train an MLP network on the MNIST data set, and shows how to convert the trained weights into a format the sample can use.

Specifically, this sample defines the network, triggers the MLP optimizer by creating a sequence of networks to increase performance, and creates a sequence of TensorRT layers that represent an MLP layer.

Defining the network

This sample follows the same flow as sampleMNISTAPI with one exception: the network is defined as a sequence of addMLP calls, each of which adds FullyConnected and Activation layers to the network.

Generally, an MLP layer is:

  • a FullyConnected operation that is followed by an optional Scale and an optional Activation; or
  • a MatrixMultiplication operation followed by an optional bias addition (ElementWiseSum) and an optional Activation.

An MLP network is more than one MLP layer generated sequentially in the TensorRT network. The optimizer will detect this pattern and generate optimized MLP code.

More formally, the following variations of MLP layers will trigger the MLP optimizer:

{MatrixMultiplication [-> ElementWiseSum] [-> Activation]}+
{FullyConnected [-> Scale(with empty scale and power arguments)] [-> Activation]}+
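
As an informal illustration of the computation this pattern describes (this is NumPy, not TensorRT code; the shapes and random weights below are made up for the sketch), each MLP layer is a matrix multiplication, a bias addition, and an activation, and an MLP network is simply several of these applied in sequence:

```python
import numpy as np

def relu(x):
    # Element-wise activation, matching the sample's use of kRELU
    return np.maximum(x, 0.0)

def mlp_layer(x, w, b):
    # One MLP layer: MatrixMultiplication -> ElementWiseSum (bias) -> Activation
    return relu(x @ w + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 784))   # one flattened 28x28 MNIST image
w1, b1 = rng.standard_normal((784, 256)), np.zeros(256)
w2, b2 = rng.standard_normal((256, 256)), np.zeros(256)

# Two MLP layers back to back -- the sequential pattern the optimizer detects
h = mlp_layer(mlp_layer(x, w1, b1), w2, b2)
print(h.shape)  # (1, 256)
```

In the actual sample this stacking is expressed with TensorRT layers (FullyConnected plus Activation), and the builder recognizes the repeated pattern and emits optimized MLP code for it.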

TensorRT API layers and ops

In this sample, the following layers are used. For more information about these layers, see the TensorRT Developer Guide: Layers documentation.

Activation layer The Activation layer implements element-wise activation functions. Specifically, this sample uses the Activation layer with the type kRELU.

FullyConnected layer The FullyConnected layer implements a matrix-vector product, with or without bias.

TopK layer The TopK layer finds the top K maximum (or minimum) elements along a dimension, returning a reduced tensor and a tensor of index positions.
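
The role TopK plays in this sample can be mimicked in NumPy (a sketch of the operation, not the TensorRT API): with K = 1 it reduces the ten output scores to the single most likely digit, which is how the sample reports its classification. The example values below are invented:

```python
import numpy as np

def topk(values, k):
    # Top-k maximum values along the last axis, plus their index
    # positions -- the two tensors a TopK (max) layer returns.
    idx = np.argsort(values, axis=-1)[..., ::-1][..., :k]
    return np.take_along_axis(values, idx, axis=-1), idx

logits = np.array([0.1, -2.0, 0.3, 0.0, 1.2, 0.7, -0.5, 0.2, 0.9, 3.4])
vals, idx = topk(logits, k=1)
print(int(idx[0]))  # 9 -- the classified digit
```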

Training an MLP network

This sample comes with pre-trained weights. However, if you want to train your own MLP network, you first need to generate the weights by training a TensorFlow based neural network using an MLP optimizer, and then verify that the trained weights are converted into a format that sampleMLP can read. If you want to use the weights that are shipped with this sample, see Running the sample.

  1. Install Python.
  2. Install TensorFlow 1.4 or later.
  3. Download the TensorFlow tutorial.
     ```
     git clone https://github.com/aymericdamien/TensorFlow-Examples.git
     cd TensorFlow-Examples
     ```
  4. Apply the `update_mlp.patch` file to save the final result.
     ```
     patch -p1 < <TensorRT Install>/samples/sampleMLP/update_mlp.patch
     ```
  5. Train the MNIST MLP network.
     ```
     python examples/3_NeuralNetworks/multilayer_perceptron.py
     ```
     This step produces the following file:
     ```
     /tmp/sampleMLP.ckpt - Trained MLP checkpoint
     ```
     The `sampleMLP.ckpt` file contains the checkpoint for the parameters and weights.
  6. Convert the trained model weights to a format sampleMLP understands.
     ```
     python <TensorRT Install>/samples/sampleMLP/convert_weights.py -m /tmp/sampleMLP.ckpt -o sampleMLP
     mkdir -p <TensorRT Install>/data/mlp
     cp sampleMLP.wts2 <TensorRT Install>/data/mlp/
     ```

Running the sample

  1. Compile this sample by running `make` in the `<TensorRT root directory>/samples/sampleMLP` directory. The binary named `sample_mlp` will be created in the `<TensorRT root directory>/bin` directory.
     ```
     cd <TensorRT root directory>/samples/sampleMLP
     make
     ```
     Where `<TensorRT root directory>` is where you installed TensorRT.
  2. Run the sample to classify the MNIST digit.
     ```
     cd <TensorRT Install>/bin
     ./sample_mlp
     ```
  3. Verify that the sample ran successfully. If it did, you should see output similar to the following: an ASCII rendering of the input image with digit 9, followed by the classification result:
    &&&& RUNNING TensorRT.sample_mlp # build/x86_64-linux/sample_mlp
    [I] Input:
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@%.-@@@@@@@@@@@
    @@@@@@@@@@@*- %@@@@@@@@@@
    @@@@@@@@@@= .-. *@@@@@@@@@@
    @@@@@@@@@= +@@@ *@@@@@@@@@@
    @@@@@@@@* =@@@@ %@@@@@@@@@@
    @@@@@@@@..@@@@% @@@@@@@@@@@
    @@@@@@@# *@@@@- @@@@@@@@@@@
    @@@@@@@: @@@@% @@@@@@@@@@@
    @@@@@@@: @@@@- @@@@@@@@@@@
    @@@@@@@: =+*= +: *@@@@@@@@@@
    @@@@@@@*. +@: *@@@@@@@@@@
    @@@@@@@@%#**#@@: *@@@@@@@@@@
    @@@@@@@@@@@@@@@: -@@@@@@@@@@
    @@@@@@@@@@@@@@@+ :@@@@@@@@@@
    @@@@@@@@@@@@@@@* @@@@@@@@@@
    @@@@@@@@@@@@@@@@ %@@@@@@@@@
    @@@@@@@@@@@@@@@@ #@@@@@@@@@
    @@@@@@@@@@@@@@@@: +@@@@@@@@@
    @@@@@@@@@@@@@@@@- +@@@@@@@@@
    @@@@@@@@@@@@@@@@*:%@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    [I] Algorithm chose 9
    &&&& PASSED TensorRT.sample_mlp # build/x86_64-linux/sample_mlp

This output shows that the sample ran successfully; PASSED.

Sample --help options

To see the full list of available options and their descriptions, use the -h or --help command line option.

Additional resources

The following resources provide a deeper understanding about MLP:

  • MLP
  • Models
  • Documentation

License

For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.

Changelog

February 2019 This README.md file was recreated, updated and reviewed.

Known issues

  • This sample uses fake INT8 dynamic ranges, so there may be a loss of accuracy when running in INT8 mode, which can lead to an incorrect classification result.