TensorRT  7.2.1.6
NVIDIA TensorRT
SampleCharRNNv2 Class Reference
Inheritance diagram for SampleCharRNNv2:
Collaboration diagram for SampleCharRNNv2:

Public Types

template<typename T >
using SampleUniquePtr = std::unique_ptr< T, samplesCommon::InferDeleter >
 

Public Member Functions

 SampleCharRNNv2 (SampleCharRNNParams params)
 
bool build ()
 Builds the network engine. More...
 
bool infer ()
 Runs the TensorRT inference engine for this sample. More...
 
bool teardown ()
 Used to clean up any state created in the sample class. More...
 

Protected Member Functions

nvinfer1::ILayer * addLSTMLayers (SampleCharRNNBase::SampleUniquePtr< nvinfer1::INetworkDefinition > &network) final
 Add inputs to the TensorRT network and configure LSTM layers using network definition API. More...
 
nvinfer1::Weights convertRNNWeights (nvinfer1::Weights input, int dataSize)
 Converts RNN weights from TensorFlow's format to TensorRT's format. More...
 
nvinfer1::Weights convertRNNBias (nvinfer1::Weights input)
 Converts RNN Biases from TensorFlow's format to TensorRT's format. More...
 
nvinfer1::ITensor * addReshape (SampleUniquePtr< nvinfer1::INetworkDefinition > &network, nvinfer1::ITensor &tensor, nvinfer1::Dims dims)
 

Protected Attributes

std::map< std::string, nvinfer1::Weights > mWeightMap
 
std::vector< SampleUniquePtr< nvinfer1::IHostMemory > > weightsMemory
 
SampleCharRNNParams mParams
 

Private Member Functions

std::map< std::string, nvinfer1::Weights > loadWeights (const std::string file)
 Load requested weights from a formatted file into a map. More...
 
void constructNetwork (SampleUniquePtr< nvinfer1::IBuilder > &builder, SampleUniquePtr< nvinfer1::INetworkDefinition > &network, SampleUniquePtr< nvinfer1::IBuilderConfig > &config)
 Create full model using the TensorRT network definition API and build the engine. More...
 
void copyEmbeddingToInput (samplesCommon::BufferManager &buffers, const char &c)
 Looks up the embedding tensor for a given char and copies it to input buffer. More...
 
bool stepOnce (samplesCommon::BufferManager &buffers, SampleUniquePtr< nvinfer1::IExecutionContext > &context, cudaStream_t &stream)
 Perform one time step of inference with the TensorRT execution context. More...
 
void copyRNNOutputsToInputs (samplesCommon::BufferManager &buffers)
 Copies Ct/Ht output from the RNN to the Ct-1/Ht-1 input buffers for next time step. More...
 

Private Attributes

std::shared_ptr< nvinfer1::ICudaEngine > mEngine {nullptr}
 The TensorRT engine used to run the network. More...
 

Member Typedef Documentation

◆ SampleUniquePtr

template<typename T >
using SampleCharRNNBase::SampleUniquePtr = std::unique_ptr<T, samplesCommon::InferDeleter>
inherited

Constructor & Destructor Documentation

◆ SampleCharRNNv2()

SampleCharRNNv2::SampleCharRNNv2 ( SampleCharRNNParams  params)
inline

Member Function Documentation

◆ addLSTMLayers()

nvinfer1::ILayer * SampleCharRNNv2::addLSTMLayers ( SampleCharRNNBase::SampleUniquePtr< nvinfer1::INetworkDefinition > &  network)
final protected virtual

Add inputs to the TensorRT network and configure LSTM layers using network definition API.

Add inputs to the network and configure the RNNv2 layer using network definition API.

Parameters
network: The network that will be used to build the engine.
weightMap: Map that contains all the weights required by the model.
Returns
Configured and added RNNv2 layer.

Implements SampleCharRNNBase.


◆ build()

bool SampleCharRNNBase::build ( )
inherited

Builds the network engine.

Creates the network, configures the builder and creates the network engine.

This function loads weights from a trained TensorFlow model, creates the network using the TensorRT network definition API, and builds a TensorRT engine.

Returns
Returns true if the engine was created successfully and false otherwise

◆ infer()

bool SampleCharRNNBase::infer ( )
inherited

Runs the TensorRT inference engine for this sample.

This function is the main execution function of the sample. It allocates the buffer, sets inputs, executes the engine, and verifies the output.


◆ teardown()

bool SampleCharRNNBase::teardown ( )
inherited

Used to clean up any state created in the sample class.

◆ convertRNNWeights()

nvinfer1::Weights SampleCharRNNBase::convertRNNWeights ( nvinfer1::Weights  input,
int  dataSize 
)
protected inherited

Converts RNN weights from TensorFlow's format to TensorRT's format.

Parameters
input: Weights that are stored in TensorFlow's format.
Returns
Converted weights in TensorRT's format.
Note
TensorFlow weight parameters for BasicLSTMCell are formatted such that each [WR][icfo] block is hiddenSize sequential elements:
CellN Row 0:     WiT, WcT, WfT, WoT
CellN Row 1:     WiT, WcT, WfT, WoT
...
CellN Row M-1:   WiT, WcT, WfT, WoT
CellN Row M+0:   RiT, RcT, RfT, RoT
CellN Row M+1:   RiT, RcT, RfT, RoT
...
CellN Row 2M-1:  RiT, RcT, RfT, RoT

TensorRT expects the format to be laid out in memory as: CellN: Wi, Wc, Wf, Wo, Ri, Rc, Rf, Ro

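As an illustration of the layout change described above, the reordering can be sketched as a standalone host-side function operating on flat float buffers. This is a hypothetical helper for clarity, not the sample's actual implementation (which takes and returns nvinfer1::Weights and tracks the allocation in weightsMemory); it assumes the TensorFlow kernel has shape [dataSize + hiddenSize, 4 * hiddenSize], row-major, with gates interleaved along the columns in i, c, f, o order.

```cpp
#include <cassert>
#include <vector>

// Reorder a flat TensorFlow BasicLSTMCell kernel into TensorRT's layout:
// eight contiguous, transposed gate matrices - Wi, Wc, Wf, Wo
// (each hiddenSize x dataSize) followed by Ri, Rc, Rf, Ro
// (each hiddenSize x hiddenSize).
std::vector<float> reorderLSTMKernel(const std::vector<float>& tf,
                                     int dataSize, int hiddenSize)
{
    const int cols = 4 * hiddenSize;
    assert(static_cast<int>(tf.size()) == (dataSize + hiddenSize) * cols);
    std::vector<float> trt(tf.size());
    float* out = trt.data();
    // W gates: transpose the top dataSize rows, one gate at a time.
    for (int g = 0; g < 4; ++g)
        for (int h = 0; h < hiddenSize; ++h)
            for (int d = 0; d < dataSize; ++d)
                *out++ = tf[d * cols + g * hiddenSize + h];
    // R gates: transpose the bottom hiddenSize rows the same way.
    for (int g = 0; g < 4; ++g)
        for (int h = 0; h < hiddenSize; ++h)
            for (int r = 0; r < hiddenSize; ++r)
                *out++ = tf[(dataSize + r) * cols + g * hiddenSize + h];
    return trt;
}
```

With dataSize = 2 and hiddenSize = 1, the gate-interleaved input {0,1,2,3, 4,5,6,7, 8,9,10,11} becomes {0,4, 1,5, 2,6, 3,7, 8, 9, 10, 11}: each W gate's column is gathered into a contiguous transposed block, then the R gates follow.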

◆ convertRNNBias()

nvinfer1::Weights SampleCharRNNBase::convertRNNBias ( nvinfer1::Weights  input)
protected inherited

Converts RNN Biases from TensorFlow's format to TensorRT's format.

Parameters
input: Biases that are stored in TensorFlow's format.
Returns
Converted bias in TensorRT's format.
Note
TensorFlow bias parameters for BasicLSTMCell are formatted as: CellN: Bi, Bc, Bf, Bo

TensorRT expects the format to be: CellN: Wi, Wc, Wf, Wo, Ri, Rc, Rf, Ro

Since TensorFlow already combines U and W, we double the size and set all of U to zero.

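The bias conversion described above can be sketched the same way, again as a hypothetical host-side helper rather than the sample's nvinfer1::Weights-based implementation: the buffer is doubled so that the W biases keep TensorFlow's combined values and the R biases are zero.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Expand a TensorFlow BasicLSTMCell bias (Bi, Bc, Bf, Bo, each hiddenSize
// elements) into TensorRT's layout: the four W-gate biases copied as-is,
// followed by four R-gate biases set to zero.
std::vector<float> convertBiasSketch(const std::vector<float>& tfBias,
                                     int hiddenSize)
{
    assert(static_cast<int>(tfBias.size()) == 4 * hiddenSize);
    std::vector<float> trt(8 * hiddenSize, 0.0f);         // Ri..Ro stay zero
    std::copy(tfBias.begin(), tfBias.end(), trt.begin()); // Bi..Bo copied
    return trt;
}
```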

◆ addReshape()

nvinfer1::ITensor * SampleCharRNNBase::addReshape ( SampleUniquePtr< nvinfer1::INetworkDefinition > &  network,
nvinfer1::ITensor &  tensor,
nvinfer1::Dims  dims 
)
protected inherited

◆ loadWeights()

std::map< std::string, nvinfer1::Weights > SampleCharRNNBase::loadWeights ( const std::string  file)
private inherited

Load requested weights from a formatted file into a map.

Parameters
file: Path to weights file. File has to be the formatted dump from the dumpTFWts.py script. Otherwise, this function will not work as intended.
Returns
A map containing the extracted weights.
Note
Weight V2 files are in a very simple space-delimited format:
<number of buffers>
for each buffer: [name] [type] [shape] <data as binary blob>
where type is the integer value of the DataType enum in NvInfer.h.
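To make the format above concrete, here is an illustrative parser for a simplified text variant of it. Two details are assumptions made for the sake of a self-contained example and do not match the real dumpTFWts.py output: the shape is flattened to a single element count, and the data blob is written as whitespace-separated decimal floats rather than raw binary.

```cpp
#include <cassert>
#include <istream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Parse a simplified, text-only variant of the weight V2 format:
// <number of buffers>, then per buffer: name, type, count, count floats.
std::map<std::string, std::vector<float>> loadWeightsSketch(std::istream& in)
{
    std::map<std::string, std::vector<float>> weights;
    int numBuffers = 0;
    in >> numBuffers;
    for (int i = 0; i < numBuffers; ++i)
    {
        std::string name;
        int type = 0;  // integer value of the nvinfer1::DataType enum
        int count = 0; // flattened shape (assumption for this sketch)
        in >> name >> type >> count;
        std::vector<float> data(count);
        for (int j = 0; j < count; ++j)
            in >> data[j];
        weights[name] = std::move(data);
    }
    return weights;
}
```

The real loadWeights builds a std::map<std::string, nvinfer1::Weights> instead, keeping the raw blobs alive in weightsMemory for the lifetime of the engine build.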

◆ constructNetwork()

void SampleCharRNNBase::constructNetwork ( SampleUniquePtr< nvinfer1::IBuilder > &  builder,
SampleUniquePtr< nvinfer1::INetworkDefinition > &  network,
SampleUniquePtr< nvinfer1::IBuilderConfig > &  config 
)
private inherited

Create full model using the TensorRT network definition API and build the engine.

Parameters
weightMap: Map that contains all the weights required by the model.
modelStream: The stream within which the engine is serialized once built.

◆ copyEmbeddingToInput()

void SampleCharRNNBase::copyEmbeddingToInput ( samplesCommon::BufferManager &  buffers,
const char &  c 
)
private inherited

Looks up the embedding tensor for a given char and copies it to input buffer.


◆ stepOnce()

bool SampleCharRNNBase::stepOnce ( samplesCommon::BufferManager &  buffers,
SampleUniquePtr< nvinfer1::IExecutionContext > &  context,
cudaStream_t &  stream 
)
private inherited

Perform one time step of inference with the TensorRT execution context.


◆ copyRNNOutputsToInputs()

void SampleCharRNNBase::copyRNNOutputsToInputs ( samplesCommon::BufferManager &  buffers)
private inherited

Copies Ct/Ht output from the RNN to the Ct-1/Ht-1 input buffers for next time step.

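A minimal sketch of this recurrent-state handoff, using plain host vectors in place of the BufferManager-owned device buffers (the real function copies between device-side input and output bindings):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical stand-in for the sample's per-step RNN state buffers.
struct RNNStateBuffers
{
    std::vector<float> hiddenIn, cellIn;   // Ht-1 / Ct-1: next step's inputs
    std::vector<float> hiddenOut, cellOut; // Ht / Ct: this step's outputs
};

void copyOutputsToInputs(RNNStateBuffers& b)
{
    // After a time step, this step's output state becomes the next step's
    // input state. Swapping avoids a copy; the stale buffers are simply
    // overwritten by the next execution.
    std::swap(b.hiddenIn, b.hiddenOut);
    std::swap(b.cellIn, b.cellOut);
}
```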

Member Data Documentation

◆ mWeightMap

std::map<std::string, nvinfer1::Weights> SampleCharRNNBase::mWeightMap
protected inherited

◆ weightsMemory

std::vector<SampleUniquePtr<nvinfer1::IHostMemory> > SampleCharRNNBase::weightsMemory
protected inherited

◆ mParams

SampleCharRNNParams SampleCharRNNBase::mParams
protected inherited

◆ mEngine

std::shared_ptr<nvinfer1::ICudaEngine> SampleCharRNNBase::mEngine {nullptr}
private inherited

The TensorRT engine used to run the network.


The documentation for this class was generated from the following file: