NVIDIA TensorRT 7.2.1.6
SampleGoogleNet Class Reference

The SampleGoogleNet class implements the GoogleNet sample. More...


Public Member Functions

 SampleGoogleNet (const samplesCommon::CaffeSampleParams &params)
 
bool build ()
 Builds the network engine. More...
 
bool infer ()
 Runs the TensorRT inference engine for this sample. More...
 
bool teardown ()
 Used to clean up any state created in the sample class. More...
 

Public Attributes

samplesCommon::CaffeSampleParams mParams
 

Private Types

template<typename T >
using SampleUniquePtr = std::unique_ptr< T, samplesCommon::InferDeleter >
 

Private Member Functions

void constructNetwork (SampleUniquePtr< nvcaffeparser1::ICaffeParser > &parser, SampleUniquePtr< nvinfer1::INetworkDefinition > &network)
 Parses a Caffe model for GoogleNet and creates a TensorRT network. More...
 

Private Attributes

std::shared_ptr< nvinfer1::ICudaEngine > mEngine {nullptr}
 The TensorRT engine used to run the network. More...
 

Detailed Description

The SampleGoogleNet class implements the GoogleNet sample.

It creates the network using a Caffe model.
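For orientation, here is a minimal sketch of how a driver might exercise the class. The sample's actual main() differs, and the parameter setup (model paths, batch size, output names) is assumed to be filled in elsewhere:

    #include <cstdlib>

    // Hypothetical driver; SampleGoogleNet and samplesCommon::CaffeSampleParams
    // are assumed to be visible (in the sample they sit next to main() in one .cpp).
    int runSample(const samplesCommon::CaffeSampleParams& params)
    {
        SampleGoogleNet sample(params);

        if (!sample.build())    // parse the Caffe model and build mEngine
            return EXIT_FAILURE;
        if (!sample.infer())    // allocate buffers and run one inference pass
            return EXIT_FAILURE;
        if (!sample.teardown()) // shut down the protobuf library
            return EXIT_FAILURE;
        return EXIT_SUCCESS;
    }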

Member Typedef Documentation

◆ SampleUniquePtr

template<typename T >
using SampleGoogleNet::SampleUniquePtr = std::unique_ptr<T, samplesCommon::InferDeleter>
private
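The custom deleter matters because TensorRT 7.x objects obtained from factory functions are released with destroy() rather than delete. A sketch of the kind of deleter SampleUniquePtr is built on; ExampleInferDeleter is a hypothetical stand-in for samplesCommon::InferDeleter:

    // Hypothetical stand-in for samplesCommon::InferDeleter: release TensorRT
    // objects via their destroy() method when the unique_ptr goes out of scope.
    struct ExampleInferDeleter
    {
        template <typename T>
        void operator()(T* obj) const
        {
            if (obj)
                obj->destroy();
        }
    };

    // Usage: SampleUniquePtr<nvinfer1::IBuilder> builder{
    //     nvinfer1::createInferBuilder(logger)};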

Constructor & Destructor Documentation

◆ SampleGoogleNet()

SampleGoogleNet::SampleGoogleNet ( const samplesCommon::CaffeSampleParams & params)
inline

Member Function Documentation

◆ build()

bool SampleGoogleNet::build ( )

Builds the network engine.

Creates the network, configures the builder and creates the network engine.

This function creates the GoogleNet network by parsing the Caffe model, and builds the engine that will be used to run GoogleNet (mEngine).

Returns
Returns true if the engine was created successfully, and false otherwise.
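A hedged sketch of the typical TensorRT 7.x build flow this function follows: create a builder, network, builder config, and Caffe parser, populate the network via constructNetwork(), then build the engine. The logger object and the workspace size below are assumptions, not values taken from the sample:

    bool SampleGoogleNet::build()   // sketch, not the sample's exact code
    {
        auto builder = SampleUniquePtr<nvinfer1::IBuilder>(
            nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger())); // logger name assumed
        if (!builder) return false;

        auto network = SampleUniquePtr<nvinfer1::INetworkDefinition>(builder->createNetwork());
        auto config = SampleUniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
        auto parser = SampleUniquePtr<nvcaffeparser1::ICaffeParser>(nvcaffeparser1::createCaffeParser());
        if (!network || !config || !parser) return false;

        constructNetwork(parser, network);            // parse the Caffe model, mark outputs

        builder->setMaxBatchSize(mParams.batchSize);
        config->setMaxWorkspaceSize(16 * (1 << 20));  // 16 MiB; actual value is sample-specific

        mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
            builder->buildEngineWithConfig(*network, *config), samplesCommon::InferDeleter());
        return mEngine != nullptr;
    }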

◆ infer()

bool SampleGoogleNet::infer ( )

Runs the TensorRT inference engine for this sample.

This function is the main execution function of the sample. It allocates buffers, sets the inputs, and executes the engine.

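A hedged sketch of the usual inference path in the Caffe-based samples, using the samples' BufferManager helper; the input-filling and output-verification steps are elided here:

    bool SampleGoogleNet::infer()   // sketch, not the sample's exact code
    {
        // Host and device buffers for every engine binding.
        samplesCommon::BufferManager buffers(mEngine, mParams.batchSize);

        auto context = SampleUniquePtr<nvinfer1::IExecutionContext>(mEngine->createExecutionContext());
        if (!context) return false;

        // ... fill the host input buffers here ...

        buffers.copyInputToDevice();   // host -> device
        bool status = context->execute(mParams.batchSize, buffers.getDeviceBindings().data());
        buffers.copyOutputToHost();    // device -> host
        return status;
    }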

◆ teardown()

bool SampleGoogleNet::teardown ( )

Used to clean up any state created in the sample class.

Cleans up the libprotobuf state, as parsing is complete.

Note
It is not safe to use any other part of the protocol buffers library after ShutdownProtobufLibrary() has been called.
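In the Caffe-based samples this step reduces to a single call into the parser library; a sketch:

    bool SampleGoogleNet::teardown()
    {
        // Releases protobuf's global state; no further use of the Caffe parser
        // (or anything else built on protobuf) is safe after this point.
        nvcaffeparser1::shutdownProtobufLibrary();
        return true;
    }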

◆ constructNetwork()

void SampleGoogleNet::constructNetwork ( SampleUniquePtr< nvcaffeparser1::ICaffeParser > &  parser,
SampleUniquePtr< nvinfer1::INetworkDefinition > &  network 
)
private

Parses a Caffe model for GoogleNet and creates a TensorRT network.

Uses the Caffe parser to create the GoogleNet network and marks the output layers.

Parameters
parser	Pointer to the Caffe parser used to parse the GoogleNet model
network	Pointer to the network that will be populated with the GoogleNet network
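A hedged sketch of the parse-and-mark-outputs step; the locateFile helper and the DataType::kFLOAT weight type follow the common sample pattern, and the file names are assumed to come from mParams:

    void SampleGoogleNet::constructNetwork(   // sketch, not the sample's exact code
        SampleUniquePtr<nvcaffeparser1::ICaffeParser>& parser,
        SampleUniquePtr<nvinfer1::INetworkDefinition>& network)
    {
        const nvcaffeparser1::IBlobNameToTensor* blobNameToTensor = parser->parse(
            locateFile(mParams.prototxtFileName, mParams.dataDirs).c_str(),
            locateFile(mParams.weightsFileName, mParams.dataDirs).c_str(),
            *network,
            nvinfer1::DataType::kFLOAT);

        // Mark each requested output blob so TensorRT treats it as a network output.
        for (const auto& outputName : mParams.outputTensorNames)
            network->markOutput(*blobNameToTensor->find(outputName.c_str()));
    }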

Member Data Documentation

◆ mParams

samplesCommon::CaffeSampleParams SampleGoogleNet::mParams

◆ mEngine

std::shared_ptr<nvinfer1::ICudaEngine> SampleGoogleNet::mEngine {nullptr}
private

The TensorRT engine used to run the network.


The documentation for this class was generated from the following file: