TensorRT 7.2.1.6
NVIDIA TensorRT
SampleUffFasterRcnn Class Reference

Public Member Functions

 SampleUffFasterRcnn (const SampleUffFasterRcnnParams &params)
 
bool build ()
 Builds the network engine. More...
 
bool infer ()
 Runs the TensorRT inference engine for this sample. More...
 
bool teardown ()
 Cleans up any state created in the sample class. More...
 

Private Types

template<typename T >
using SampleUniquePtr = std::unique_ptr< T, samplesCommon::InferDeleter >
 

Private Member Functions

bool constructNetwork (SampleUniquePtr< nvinfer1::IBuilder > &builder, SampleUniquePtr< nvinfer1::INetworkDefinition > &network, SampleUniquePtr< nvuffparser::IUffParser > &parser)
 Parses a UFF model for Faster R-CNN and creates a TensorRT network. More...
 
bool processInput (const samplesCommon::BufferManager &buffers)
 Reads the input and mean data, preprocesses, and stores the result in a managed buffer. More...
 
bool verifyOutput (const samplesCommon::BufferManager &buffers)
 Filters output detections and verifies results. More...
 
void batch_inverse_transform_classifier (const float *roi_after_nms, int roi_num_per_img, const float *classifier_cls, const float *classifier_regr, std::vector< float > &pred_boxes, std::vector< int > &pred_cls_ids, std::vector< float > &pred_probs, std::vector< int > &box_num_per_img, int N)
 Helper function for post-processing (applies deltas to ROIs). More...
 
std::vector< int > nms_classifier (std::vector< float > &boxes_per_cls, std::vector< float > &probs_per_cls, float NMS_OVERLAP_THRESHOLD, int NMS_MAX_BOXES)
 NMS helper function in post-processing. More...
 
void visualize_boxes (int img_num, int class_num, std::vector< float > &pred_boxes, std::vector< float > &pred_probs, std::vector< int > &pred_cls_ids, std::vector< int > &box_num_per_img, std::vector< vPPM > &ppms)
 Helper function to dump bbox-overlayed images as PPM files. More...
 

Private Attributes

SampleUffFasterRcnnParams mParams
 The parameters for the sample. More...
 
nvinfer1::Dims mInputDims
 The dimensions of the input to the network. More...
 
std::shared_ptr< nvinfer1::ICudaEngine > mEngine
 The TensorRT engine used to run the network. More...
 

Member Typedef Documentation

◆ SampleUniquePtr

template<typename T >
using SampleUffFasterRcnn::SampleUniquePtr = std::unique_ptr<T, samplesCommon::InferDeleter>
private
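
The `SampleUniquePtr` alias works because `samplesCommon::InferDeleter` calls each object's `destroy()` method instead of `delete`, matching how TensorRT 7-era interfaces are released. A minimal, self-contained sketch of the same pattern, using a hypothetical `FakeTrtObject` stand-in (not a real TensorRT type) with a TensorRT-style private destructor:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for a TensorRT interface: objects are freed via
// destroy() rather than operator delete, and the destructor is private.
struct FakeTrtObject {
    static int liveCount;
    FakeTrtObject() { ++liveCount; }
    void destroy() { delete this; } // TensorRT-style destruction
private:
    ~FakeTrtObject() { --liveCount; } // inaccessible to plain delete
};
int FakeTrtObject::liveCount = 0;

// Deleter in the spirit of samplesCommon::InferDeleter: it calls
// destroy(), so std::unique_ptr can own destroy()-based objects.
struct InferDeleterSketch {
    template <typename T>
    void operator()(T* obj) const {
        if (obj) obj->destroy();
    }
};

template <typename T>
using SampleUniquePtrSketch = std::unique_ptr<T, InferDeleterSketch>;
```

When the `SampleUniquePtrSketch` goes out of scope, the deleter runs `destroy()` automatically, which is why the sample can hold builders, networks, and parsers without manual cleanup.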

Constructor & Destructor Documentation

◆ SampleUffFasterRcnn()

SampleUffFasterRcnn::SampleUffFasterRcnn ( const SampleUffFasterRcnnParams & params)
inline

Member Function Documentation

◆ build()

bool SampleUffFasterRcnn::build ( )

Builds the network engine.


◆ infer()

bool SampleUffFasterRcnn::infer ( )

Runs the TensorRT inference engine for this sample.
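
The three public methods form the sample's lifecycle: `build()` creates the engine, `infer()` runs it, and `teardown()` releases state, each returning `false` on failure. A minimal sketch of the driver pattern, using a hypothetical `LifecycleSketch` stand-in rather than the real class (which needs a TensorRT installation):

```cpp
#include <cassert>

// Hypothetical stand-in mirroring the sample's public lifecycle.
// Each step returns false on failure so the driver can bail out early.
struct LifecycleSketch {
    bool built = false;
    bool ran = false;
    bool build() { built = true; return true; }
    bool infer() { if (!built) return false; ran = true; return true; }
    bool teardown() { built = false; return true; }
};

// Driver in the style of the sample's main(): stop at the first failure.
bool runSample(LifecycleSketch& s) {
    return s.build() && s.infer() && s.teardown();
}
```

Short-circuiting on the first `false` mirrors how the sample reports success or failure back to its test harness.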


◆ teardown()

bool SampleUffFasterRcnn::teardown ( )

Cleans up any state created in the sample class.

Cleans up the libprotobuf state once parsing is complete.

Note
It is not safe to use any other part of the protocol buffers library after ShutdownProtobufLibrary() has been called.

◆ constructNetwork()

bool SampleUffFasterRcnn::constructNetwork ( SampleUniquePtr< nvinfer1::IBuilder > &  builder,
SampleUniquePtr< nvinfer1::INetworkDefinition > &  network,
SampleUniquePtr< nvuffparser::IUffParser > &  parser 
)
private

Parses a UFF model for Faster R-CNN and creates a TensorRT network.


◆ processInput()

bool SampleUffFasterRcnn::processInput ( const samplesCommon::BufferManager buffers)
private

Reads the input and mean data, preprocesses, and stores the result in a managed buffer.


◆ verifyOutput()

bool SampleUffFasterRcnn::verifyOutput ( const samplesCommon::BufferManager buffers)
private

Filters output detections and verifies results.


◆ batch_inverse_transform_classifier()

void SampleUffFasterRcnn::batch_inverse_transform_classifier ( const float *  roi_after_nms,
int  roi_num_per_img,
const float *  classifier_cls,
const float *  classifier_regr,
std::vector< float > &  pred_boxes,
std::vector< int > &  pred_cls_ids,
std::vector< float > &  pred_probs,
std::vector< int > &  box_num_per_img,
int  N 
)
private

Helper function for post-processing (applies deltas to ROIs).

Applies the predicted regression deltas to the ROIs.
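
The standard Faster R-CNN inverse bounding-box transform shifts each ROI's center by a fraction of its size and scales its width and height exponentially. A self-contained sketch of that transform for one ROI/class pair (corner-format coordinates and the absence of clipping are assumptions, not taken from the sample):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hedged sketch of the inverse bbox transform this helper applies:
// deltas (dx, dy, dw, dh) move the ROI center and rescale its extent.
std::array<float, 4> applyDelta(const std::array<float, 4>& roi,  // x1,y1,x2,y2
                                const std::array<float, 4>& d) {  // dx,dy,dw,dh
    float w  = roi[2] - roi[0];
    float h  = roi[3] - roi[1];
    float cx = roi[0] + 0.5f * w;
    float cy = roi[1] + 0.5f * h;

    float ncx = cx + d[0] * w;      // shift center by a fraction of the size
    float ncy = cy + d[1] * h;
    float nw  = w * std::exp(d[2]); // scale width/height exponentially
    float nh  = h * std::exp(d[3]);

    return {ncx - 0.5f * nw, ncy - 0.5f * nh,
            ncx + 0.5f * nw, ncy + 0.5f * nh};
}
```

The exponential on `dw`/`dh` keeps predicted sizes positive regardless of the raw regressor output, which is why this parameterization is the conventional choice.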

◆ nms_classifier()

std::vector< int > SampleUffFasterRcnn::nms_classifier ( std::vector< float > &  boxes_per_cls,
std::vector< float > &  probs_per_cls,
float  NMS_OVERLAP_THRESHOLD,
int  NMS_MAX_BOXES 
)
private

NMS helper function in post-processing.

NMS on CPU in post-processing of classifier outputs.
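
Greedy NMS sorts boxes by score, keeps the best, suppresses every remaining box whose overlap with it exceeds the threshold, and repeats until the box budget is reached. A runnable sketch of that algorithm (types and names are illustrative, not the sample's):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

struct Box { float x1, y1, x2, y2; };

// Intersection-over-union of two corner-format boxes.
float iou(const Box& a, const Box& b) {
    float ix = std::max(0.f, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
    float iy = std::max(0.f, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
    float inter = ix * iy;
    float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    return inter / (areaA + areaB - inter);
}

// Greedy CPU NMS in the spirit of nms_classifier(): returns the indices
// of the kept boxes, at most maxBoxes of them, in descending-score order.
std::vector<int> nmsCpu(const std::vector<Box>& boxes,
                        const std::vector<float>& scores,
                        float overlapThreshold, int maxBoxes) {
    std::vector<int> order(boxes.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return scores[a] > scores[b]; });

    std::vector<int> keep;
    std::vector<bool> suppressed(boxes.size(), false);
    for (int i : order) {
        if (suppressed[i]) continue;
        keep.push_back(i);
        if (static_cast<int>(keep.size()) == maxBoxes) break;
        for (int j : order)
            if (!suppressed[j] && j != i &&
                iou(boxes[i], boxes[j]) > overlapThreshold)
                suppressed[j] = true;
    }
    return keep;
}
```

Running NMS per class, as the signature's `boxes_per_cls`/`probs_per_cls` naming suggests, prevents a confident detection of one class from suppressing overlapping detections of another.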

◆ visualize_boxes()

void SampleUffFasterRcnn::visualize_boxes ( int  img_num,
int  class_num,
std::vector< float > &  pred_boxes,
std::vector< float > &  pred_probs,
std::vector< int > &  pred_cls_ids,
std::vector< int > &  box_num_per_img,
std::vector< vPPM > &  ppms 
)
private

Helper function to dump bbox-overlayed images as PPM files.

Dumps the detection results (bboxes) as PPM images, overlaid on the original images.
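
PPM is a convenient output format here because binary P6 is just a short text header followed by raw RGB bytes. A minimal sketch of the file layout (the sample would draw bbox outlines into the pixel buffer first; this function only shows the serialization):

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Minimal P6 PPM serialization sketch: a text header (magic, width,
// height, max channel value) followed by width*height*3 raw RGB bytes.
std::string writePpm(int w, int h, const std::vector<uint8_t>& rgb) {
    std::ostringstream out;
    out << "P6\n" << w << " " << h << "\n255\n";
    out.write(reinterpret_cast<const char*>(rgb.data()),
              static_cast<std::streamsize>(rgb.size()));
    return out.str();
}
```

Writing the same bytes to an `std::ofstream` opened in binary mode produces a file any image viewer can open.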


Member Data Documentation

◆ mParams

SampleUffFasterRcnnParams SampleUffFasterRcnn::mParams
private

The parameters for the sample.

◆ mInputDims

nvinfer1::Dims SampleUffFasterRcnn::mInputDims
private

The dimensions of the input to the network.

◆ mEngine

std::shared_ptr<nvinfer1::ICudaEngine> SampleUffFasterRcnn::mEngine
private

The TensorRT engine used to run the network.


The documentation for this class was generated from the following file: