TensorRT  7.2.1.6
NVIDIA TensorRT
SampleMLP Class Reference

The SampleMLP class implements the MNIST API sample.

Collaboration diagram for SampleMLP:

Public Member Functions

 SampleMLP (const SampleMLPParams &params)
 
bool build ()
 Builds the network engine.
 
bool infer ()
 Runs the TensorRT inference engine for this sample.
 
bool teardown ()
 Cleans up any state created in the sample class.
 

Private Types

template<typename T >
using SampleUniquePtr = std::unique_ptr< T, samplesCommon::InferDeleter >
 

Private Member Functions

bool constructNetwork (SampleUniquePtr< nvinfer1::IBuilder > &builder, SampleUniquePtr< nvinfer1::INetworkDefinition > &network, SampleUniquePtr< nvinfer1::IBuilderConfig > &config)
 Uses the API to create the MLP Network.
 
bool processInput (const samplesCommon::BufferManager &buffers)
 Reads the input and stores the result in a managed buffer.
 
bool verifyOutput (const samplesCommon::BufferManager &buffers)
 Classifies digits and verifies the result.
 
std::map< std::string, std::pair< nvinfer1::Dims, nvinfer1::Weights > > loadWeights (const std::string &file)
 Loads weights from the weights file.
 
nvinfer1::Dims loadShape (std::ifstream &input)
 Loads a shape from the weights file.
 
void transposeWeights (nvinfer1::Weights &wts, int hiddenSize)
 Transposes weights.
 
nvinfer1::ILayer * addMLPLayer (nvinfer1::INetworkDefinition *network, nvinfer1::ITensor &inputTensor, int32_t hiddenSize, nvinfer1::Weights wts, nvinfer1::Weights bias, nvinfer1::ActivationType actType, int idx)
 Adds an MLP layer.
 

Private Attributes

SampleMLPParams mParams
 The parameters for the sample.
 
int mNumber {0}
 The number to classify.
 
std::map< std::string, std::pair< nvinfer1::Dims, nvinfer1::Weights > > mWeightMap
 The weight name to weight value map.
 
std::shared_ptr< nvinfer1::ICudaEngine > mEngine
 The TensorRT engine used to run the network.
 
std::vector< SampleUniquePtr< nvinfer1::IHostMemory > > weightsMemory
 Host weights memory holder.
 
 

Detailed Description

The SampleMLP class implements the MNIST API sample.

It creates the network for MNIST classification using the TensorRT API.

Member Typedef Documentation

◆ SampleUniquePtr

template<typename T >
using SampleMLP::SampleUniquePtr = std::unique_ptr<T, samplesCommon::InferDeleter>
private

Constructor & Destructor Documentation

◆ SampleMLP()

SampleMLP::SampleMLP ( const SampleMLPParams &  params)
inline

Member Function Documentation

◆ build()

bool SampleMLP::build ( )

Builds the network engine.

Creates the network, configures the builder and creates the network engine.

This function creates the MLP network by using the API to create a model and builds the engine that will be used to run MNIST (mEngine).

Returns
Returns true if the engine was created successfully and false otherwise

◆ infer()

bool SampleMLP::infer ( )

Runs the TensorRT inference engine for this sample.

This function is the main execution function of the sample. It allocates the buffer, sets inputs and executes the engine.


◆ teardown()

bool SampleMLP::teardown ( )

Cleans up any state created in the sample class.

◆ constructNetwork()

bool SampleMLP::constructNetwork ( SampleUniquePtr< nvinfer1::IBuilder > &  builder,
SampleUniquePtr< nvinfer1::INetworkDefinition > &  network,
SampleUniquePtr< nvinfer1::IBuilderConfig > &  config 
)
private

Uses the API to create the MLP Network.

Parameters
network	Pointer to the network that will be populated with the MLP network
builder	Pointer to the engine builder

◆ processInput()

bool SampleMLP::processInput ( const samplesCommon::BufferManager &  buffers)
private

Reads the input and stores the result in a managed buffer.


◆ verifyOutput()

bool SampleMLP::verifyOutput ( const samplesCommon::BufferManager &  buffers)
private

Classifies digits and verifies the result.

Returns
whether the classification output matches expectations

◆ loadWeights()

std::map< std::string, std::pair< nvinfer1::Dims, nvinfer1::Weights > > SampleMLP::loadWeights ( const std::string &  file)
private

Loads weights from weights file.

Our weight files are in a very simple space-delimited format:

<number of buffers>
for each buffer: [name] [type] [size] <data x size in hex>

where type is the integer value of the DataType enum in NvInfer.h.


◆ loadShape()

nvinfer1::Dims SampleMLP::loadShape ( std::ifstream &  input)
private

Loads shape from weights file.


◆ transposeWeights()

void SampleMLP::transposeWeights ( nvinfer1::Weights &  wts,
int  hiddenSize 
)
private

Transpose weights.


◆ addMLPLayer()

nvinfer1::ILayer * SampleMLP::addMLPLayer ( nvinfer1::INetworkDefinition *  network,
nvinfer1::ITensor &  inputTensor,
int32_t  hiddenSize,
nvinfer1::Weights  wts,
nvinfer1::Weights  bias,
nvinfer1::ActivationType  actType,
int  idx 
)
private

Add an MLP layer.

The addMLPLayer function is a simple helper that creates the combination of layers required for an MLP layer. By replacing the implementation of this sequence with various alternatives, it can be shown how TensorRT optimizes those layer sequences.


Member Data Documentation

◆ mParams

SampleMLPParams SampleMLP::mParams
private

The parameters for the sample.

◆ mNumber

int SampleMLP::mNumber {0}
private

The number to classify.

◆ mWeightMap

std::map<std::string, std::pair<nvinfer1::Dims, nvinfer1::Weights> > SampleMLP::mWeightMap
private

The weight name to weight value map.

◆ mEngine

std::shared_ptr<nvinfer1::ICudaEngine> SampleMLP::mEngine
private

The TensorRT engine used to run the network.

◆ weightsMemory

std::vector<SampleUniquePtr<nvinfer1::IHostMemory> > SampleMLP::weightsMemory
private

Host weights memory holder.


The documentation for this class was generated from the following file: