TensorRT  7.2.1.6
NVIDIA TensorRT
SampleINT8 Class Reference

The SampleINT8 class implements the INT8 sample. More...


Public Member Functions

 SampleINT8 (const SampleINT8Params &params)
 
bool build (DataType dataType)
 Function builds the network engine. More...
 
bool isSupported (DataType dataType)
 Checks if the platform supports the data type. More...
 
bool infer (std::vector< float > &score, int firstScoreBatch, int nbScoreBatches)
 Runs the TensorRT inference engine for this sample. More...
 
bool teardown ()
 Cleans up any state created in the sample class. More...
 

Private Types

template<typename T >
using SampleUniquePtr = std::unique_ptr< T, samplesCommon::InferDeleter >
 

Private Member Functions

bool constructNetwork (SampleUniquePtr< nvinfer1::IBuilder > &builder, SampleUniquePtr< nvinfer1::INetworkDefinition > &network, SampleUniquePtr< nvinfer1::IBuilderConfig > &config, SampleUniquePtr< nvcaffeparser1::ICaffeParser > &parser, DataType dataType)
 Parses a Caffe model and creates a TensorRT network. More...
 
bool processInput (const samplesCommon::BufferManager &buffers, const float *data)
 Reads the input and stores it in a managed buffer. More...
 
int calculateScore (const samplesCommon::BufferManager &buffers, float *labels, int batchSize, int outputSize, int threshold)
 Scores the model. More...
 

Private Attributes

SampleINT8Params mParams
 The parameters for the sample. More...
 
nvinfer1::Dims mInputDims
 The dimensions of the input to the network. More...
 
std::shared_ptr< nvinfer1::ICudaEngine > mEngine
 The TensorRT engine used to run the network. More...
 

Detailed Description

The SampleINT8 class implements the INT8 sample.

It creates the network using a Caffe model.
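As background for what running this sample in INT8 mode means (this is illustrative math, not code from the sample): INT8 engines use symmetric linear quantization, mapping each float to an 8-bit integer via a scale derived from the tensor's dynamic range. A minimal sketch, with illustrative names:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Symmetric linear quantization as used by INT8 inference engines:
// x is mapped to round(x / scale), clamped to [-127, 127], where
// scale = dynamicRangeMax / 127. Function names are illustrative.
inline float int8Scale(float dynamicRangeMax)
{
    return dynamicRangeMax / 127.0f;
}

inline int8_t quantizeToInt8(float x, float scale)
{
    float q = std::round(x / scale);
    q = std::max(-127.0f, std::min(127.0f, q));
    return static_cast<int8_t>(q);
}

inline float dequantizeFromInt8(int8_t q, float scale)
{
    return static_cast<float>(q) * scale;
}
```

The calibration step of the sample exists precisely to choose good per-tensor dynamic ranges, from which these scales follow.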

Member Typedef Documentation

◆ SampleUniquePtr

template<typename T >
using SampleINT8::SampleUniquePtr = std::unique_ptr<T, samplesCommon::InferDeleter>
private
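TensorRT objects created through its API are released by calling their destroy() method rather than delete; samplesCommon::InferDeleter adapts that convention to std::unique_ptr. A self-contained sketch of the pattern, using a stand-in Resource type since the real deleter lives in the samples' common code:

```cpp
#include <memory>

// Stand-in for a TensorRT object that must be released via destroy().
struct Resource
{
    static int liveCount;
    Resource() { ++liveCount; }
    void destroy() { delete this; }   // objects free themselves via destroy()
private:
    ~Resource() { --liveCount; }      // private: plain delete would not compile
};
int Resource::liveCount = 0;

// Sketch of samplesCommon::InferDeleter: calls destroy() instead of delete.
struct InferDeleterSketch
{
    template <typename T>
    void operator()(T* obj) const
    {
        if (obj) { obj->destroy(); }
    }
};

// Counterpart of SampleINT8::SampleUniquePtr.
template <typename T>
using SampleUniquePtrSketch = std::unique_ptr<T, InferDeleterSketch>;
```

Making the destructor private forces all deletion through destroy(), which is why the custom deleter is needed at all.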

Constructor & Destructor Documentation

◆ SampleINT8()

SampleINT8::SampleINT8 ( const SampleINT8Params &  params)
inline

Member Function Documentation

◆ build()

bool SampleINT8::build ( DataType  dataType)

Builds the network engine.

Creates the network, configures the builder and creates the network engine.

This function creates the network by parsing the Caffe model and builds the engine that will be used to run the model (mEngine).

Returns
Returns true if the engine was created successfully and false otherwise

◆ isSupported()

bool SampleINT8::isSupported ( DataType  dataType)

Checks if the platform supports the data type.

Returns
Returns true if the platform supports the data type.

◆ infer()

bool SampleINT8::infer ( std::vector< float > &  score,
int  firstScoreBatch,
int  nbScoreBatches 
)

Runs the TensorRT inference engine for this sample.

This function is the main execution function of the sample. It allocates the buffer, sets inputs and executes the engine.
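The scoring part of that loop can be sketched as follows, using plain stand-ins for the sample's buffer and batch machinery (the function name and signature are illustrative, not the sample's API):

```cpp
#include <vector>

// Illustrative stand-in for the accumulation done by infer(): given
// per-batch correct-prediction counts and a fixed batch size, sum the
// results of nbScoreBatches batches starting at firstScoreBatch and
// return the overall accuracy in [0, 1].
float aggregateAccuracy(const std::vector<int>& correctPerBatch,
                        int batchSize, int firstScoreBatch, int nbScoreBatches)
{
    int correct = 0;
    for (int i = firstScoreBatch; i < firstScoreBatch + nbScoreBatches; ++i)
    {
        correct += correctPerBatch[i];
    }
    return static_cast<float>(correct)
        / static_cast<float>(nbScoreBatches * batchSize);
}
```

The firstScoreBatch parameter lets the sample skip the batches that were consumed for calibration before accuracy is measured.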


◆ teardown()

bool SampleINT8::teardown ( )

Cleans up any state created in the sample class.

Cleans up libprotobuf state, as parsing is complete.

Note
It is not safe to use any other part of the protocol buffers library after ShutdownProtobufLibrary() has been called.

◆ constructNetwork()

bool SampleINT8::constructNetwork ( SampleUniquePtr< nvinfer1::IBuilder > &  builder,
SampleUniquePtr< nvinfer1::INetworkDefinition > &  network,
SampleUniquePtr< nvinfer1::IBuilderConfig > &  config,
SampleUniquePtr< nvcaffeparser1::ICaffeParser > &  parser,
DataType  dataType 
)
private

Parses a Caffe model and creates a TensorRT network.

Uses a Caffe parser to create the network and marks the output layers.

Parameters
network	Pointer to the network that will be populated by parsing the Caffe model
builder	Pointer to the engine builder

◆ processInput()

bool SampleINT8::processInput ( const samplesCommon::BufferManager buffers,
const float *  data 
)
private

Reads the input and stores it in a managed buffer.


◆ calculateScore()

int SampleINT8::calculateScore ( const samplesCommon::BufferManager buffers,
float *  labels,
int  batchSize,
int  outputSize,
int  threshold 
)
private

Scores the model: counts how many predictions in a batch are correct, where a prediction counts as correct if the true label ranks within the top `threshold` outputs.
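A self-contained sketch of that top-N scoring logic, using plain vectors in place of the sample's managed buffers (names and the int label type are illustrative):

```cpp
#include <vector>

// Counts how many samples in a batch are "correct": the score at the
// true label's index must rank within the top `threshold` outputs.
// `outputs` is laid out as batchSize rows of outputSize scores each.
int countTopNCorrect(const std::vector<float>& outputs,
                     const std::vector<int>& labels,
                     int batchSize, int outputSize, int threshold)
{
    int correct = 0;
    for (int i = 0; i < batchSize; ++i)
    {
        const float* row = outputs.data() + i * outputSize;
        const float labelScore = row[labels[i]];
        int rank = 0; // classes scoring strictly higher than the true label
        for (int j = 0; j < outputSize; ++j)
        {
            if (row[j] > labelScore)
            {
                ++rank;
            }
        }
        if (rank < threshold)
        {
            ++correct;
        }
    }
    return correct;
}
```

With threshold = 1 this reduces to ordinary top-1 accuracy; larger thresholds give top-N accuracy.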


Member Data Documentation

◆ mParams

SampleINT8Params SampleINT8::mParams
private

The parameters for the sample.

◆ mInputDims

nvinfer1::Dims SampleINT8::mInputDims
private

The dimensions of the input to the network.

◆ mEngine

std::shared_ptr<nvinfer1::ICudaEngine> SampleINT8::mEngine
private

The TensorRT engine used to run the network.


The documentation for this class was generated from the following file: