This sample, sampleMLP, is a simple hello world example that shows how to create a network that triggers the multilayer perceptron (MLP) optimizer. The generated MLP optimizer can then accelerate inference in TensorRT.
This sample uses a publicly accessible TensorFlow tutorial to train an MLP network on the MNIST data set, and shows how to transform that data into the format this sample uses.
Specifically, this sample defines the network, triggers the MLP optimizer by creating a sequence of networks to increase performance, and creates a sequence of TensorRT layers that represent an MLP layer.
This sample follows the same flow as sampleMNISTAPI with one exception: the network is defined as a sequence of addMLP calls, each of which adds FullyConnected and Activation layers to the network.
Generally, an MLP layer is a FullyConnected layer followed by an Activation layer.

An MLP network is more than one MLP layer generated sequentially in the TensorRT network. The optimizer detects this pattern and generates optimized MLP code. Certain variations of this FullyConnected-plus-Activation pattern will also trigger the MLP optimizer.
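As a rough illustration of the pattern the optimizer looks for, here is a plain-Python sketch of what one MLP layer computes and how layers chain. This is an illustration only, not the TensorRT API; the function names and the toy sizes are hypothetical:

```python
def fully_connected(x, weights, bias):
    """Matrix-vector product with bias, as in a FullyConnected layer."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    """Element-wise kRELU activation."""
    return [max(0.0, v) for v in x]

def mlp_layer(x, weights, bias):
    """One MLP layer: FullyConnected followed by Activation."""
    return relu(fully_connected(x, weights, bias))

def mlp_network(x, params):
    """An MLP network: several MLP layers applied sequentially."""
    for weights, bias in params:
        x = mlp_layer(x, weights, bias)
    return x

# Toy example: 2 inputs -> 2 hidden units -> 1 output.
params = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    ([[1.0, 1.0]], [0.5]),
]
print(mlp_network([1.0, 2.0], params))  # → [2.0]
```

Because each layer is exactly FullyConnected followed by Activation, and the layers are applied back to back, a network built this way matches the sequential pattern the MLP optimizer detects.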
In this sample, the following layers are used. For more information about these layers, see the TensorRT Developer Guide: Layers documentation.
Activation layer
The Activation layer implements element-wise activation functions. Specifically, this sample uses the Activation layer with type `kRELU`.

FullyConnected layer
The FullyConnected layer implements a matrix-vector product, with or without bias.

TopK layer
The TopK layer finds the top K maximum (or minimum) elements along a dimension, returning a reduced tensor and a tensor of index positions.
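For intuition, the TopK step can be sketched in plain Python (an illustration, not the TensorRT API; the class scores below are made up): for MNIST, picking the top 1 of the 10 class scores yields the predicted digit.

```python
def top_k(values, k):
    """Return the k largest values and their index positions,
    mirroring what a TopK (max) layer produces along one dimension."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)[:k]
    return [values[i] for i in order], order

# Hypothetical scores for the 10 MNIST digit classes:
scores = [0.1, 0.05, 0.7, 0.02, 0.01, 0.03, 0.04, 0.02, 0.02, 0.01]
values, indices = top_k(scores, 1)
print(values, indices)  # → [0.7] [2]  (the network predicts digit 2)
```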
This sample comes with pre-trained weights. However, if you want to train your own MLP network, you first need to generate the weights by training a TensorFlow-based neural network using an MLP optimizer, and then verify that the trained weights are converted into a format that sampleMLP can read. If you want to use the weights that are shipped with this sample, see Running the sample.
Apply the `update_mlp.patch` file to save the final result.
```
patch -p1 < <TensorRT Install>/samples/sampleMLP/update_mlp.patch
```
The `sampleMLP.ckpt` file contains the checkpoint for the trained parameters and weights.

Run `make` in the `<TensorRT root directory>/samples/sampleMLP` directory. The binary named `sample_mlp` will be created in the `<TensorRT root directory>/bin` directory.
```
cd <TensorRT root directory>/samples/sampleMLP
make
```
Where `<TensorRT root directory>` is where you installed TensorRT.

If the sample runs successfully, its output ends with `PASSED`.
To see the full list of available options and their descriptions, use the `-h` or `--help` command line option.
The following resources provide a deeper understanding about MLP:
- MLP
- Models
- Documentation
For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.
February 2019: This `README.md` file was recreated, updated and reviewed.