TensorRT 7.2.1.6
NVIDIA TensorRT
nvinfer1::IPluginExt Class Reference [abstract]

Plugin class for user-implemented layers. More...


Public Member Functions

virtual int32_t getTensorRTVersion () const
 Return the API version with which this plugin was built. More...
 
virtual bool supportsFormat (DataType type, PluginFormat format) const =0
 Check format support. More...
 
virtual void configureWithFormat (const Dims *inputDims, int32_t nbInputs, const Dims *outputDims, int32_t nbOutputs, DataType type, PluginFormat format, int32_t maxBatchSize)=0
 Configure the layer. More...
 
virtual ~IPluginExt ()
 
virtual int32_t getNbOutputs () const =0
 Get the number of outputs from the layer. More...
 
virtual Dims getOutputDimensions (int32_t index, const Dims *inputs, int32_t nbInputDims)=0
 Get the dimension of an output tensor. More...
 
virtual int32_t initialize ()=0
 Initialize the layer for execution. More...
 
virtual void terminate ()=0
 Release resources acquired during plugin layer initialization. More...
 
virtual size_t getWorkspaceSize (int32_t maxBatchSize) const =0
 Find the workspace size required by the layer. More...
 
virtual int32_t enqueue (int32_t batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream)=0
 Execute the layer. More...
 
virtual size_t getSerializationSize ()=0
 Find the size of the serialization buffer required. More...
 
virtual void serialize (void *buffer)=0
 Serialize the layer. More...
 

Protected Member Functions

void configure (const Dims *, int32_t, const Dims *, int32_t, int32_t)
 Derived classes should not implement this. More...
 

Detailed Description

Plugin class for user-implemented layers.

Plugins are a mechanism for applications to implement custom layers. Each plugin is owned by the application, and its lifetime must span any use of it by TensorRT.

Constructor & Destructor Documentation

◆ ~IPluginExt()

virtual nvinfer1::IPluginExt::~IPluginExt ()
inline, virtual

Member Function Documentation

◆ getTensorRTVersion()

virtual int32_t nvinfer1::IPluginExt::getTensorRTVersion () const
inline, virtual

Return the API version with which this plugin was built.

Do not override this method as it is used by the TensorRT library to maintain backwards-compatibility with plugins.

◆ supportsFormat()

virtual bool nvinfer1::IPluginExt::supportsFormat (DataType type, PluginFormat format) const
pure virtual

Check format support.

Parameters
type: DataType requested.
format: PluginFormat requested.
Returns
true if the plugin supports the type-format combination.

This function is called by the implementations of INetworkDefinition, IBuilder, and ICudaEngine. In particular, it is called when creating an engine and when deserializing an engine.

Warning
DataType::kBOOL is not supported.

Implemented in FCPlugin.

◆ configureWithFormat()

virtual void nvinfer1::IPluginExt::configureWithFormat (const Dims *inputDims, int32_t nbInputs, const Dims *outputDims, int32_t nbOutputs, DataType type, PluginFormat format, int32_t maxBatchSize)
pure virtual

Configure the layer.

This function is called by the builder prior to initialize(). It provides an opportunity for the layer to make algorithm choices on the basis of its weights, dimensions, and maximum batch size.

Parameters
inputDims: The input tensor dimensions.
nbInputs: The number of inputs.
outputDims: The output tensor dimensions.
nbOutputs: The number of outputs.
type: The data type selected for the engine.
format: The format selected for the engine.
maxBatchSize: The maximum batch size.

The dimensions passed here do not include the outermost batch size (i.e. for 2-D image networks, they will be 3-dimensional CHW dimensions).

Warning
DataType::kBOOL is not supported.

◆ configure()

void nvinfer1::IPluginExt::configure (const Dims *, int32_t, const Dims *, int32_t, int32_t)
inline, protected, virtual

Derived classes should not implement this.

In a C++11 API it would be override final.

Implements nvinfer1::IPlugin.

◆ getNbOutputs()

virtual int32_t nvinfer1::IPlugin::getNbOutputs () const
pure virtual, inherited

Get the number of outputs from the layer.

Returns
The number of outputs.

This function is called by the implementations of INetworkDefinition and IBuilder. In particular, it is called prior to any call to initialize().

Implemented in FCPlugin, and nmtSample::DebugUtil::DumpTensorPlugin.


◆ getOutputDimensions()

virtual Dims nvinfer1::IPlugin::getOutputDimensions (int32_t index, const Dims *inputs, int32_t nbInputDims)
pure virtual, inherited

Get the dimension of an output tensor.

Parameters
index: The index of the output tensor.
inputs: The input tensors.
nbInputDims: The number of input tensors.

This function is called by the implementations of INetworkDefinition and IBuilder. In particular, it is called prior to any call to initialize().

◆ initialize()

virtual int32_t nvinfer1::IPlugin::initialize ()
pure virtual, inherited

Initialize the layer for execution.

This is called when the engine is created.

Returns
0 for success, else non-zero (which will cause engine termination).

Implemented in FCPlugin, and nmtSample::DebugUtil::DumpTensorPlugin.

◆ terminate()

virtual void nvinfer1::IPlugin::terminate ()
pure virtual, inherited

Release resources acquired during plugin layer initialization.

This is called when the engine is destroyed.

See also
initialize()

Implemented in FCPlugin, and nmtSample::DebugUtil::DumpTensorPlugin.

◆ getWorkspaceSize()

virtual size_t nvinfer1::IPlugin::getWorkspaceSize (int32_t maxBatchSize) const
pure virtual, inherited

Find the workspace size required by the layer.

This function is called during engine startup, after initialize(). The workspace size returned should be sufficient for any batch size up to the maximum.

Returns
The workspace size.

◆ enqueue()

virtual int32_t nvinfer1::IPlugin::enqueue (int32_t batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream)
pure virtual, inherited

Execute the layer.

Parameters
batchSize: The number of inputs in the batch.
inputs: The memory for the input tensors.
outputs: The memory for the output tensors.
workspace: Workspace for execution.
stream: The stream in which to execute the kernels.
Returns
0 for success, else non-zero (which will cause engine termination).

◆ getSerializationSize()

virtual size_t nvinfer1::IPlugin::getSerializationSize ()
pure virtual, inherited

Find the size of the serialization buffer required.

Returns
The size of the serialization buffer.

Implemented in FCPlugin, and nmtSample::DebugUtil::DumpTensorPlugin.

◆ serialize()

virtual void nvinfer1::IPlugin::serialize (void *buffer)
pure virtual, inherited

Serialize the layer.

Parameters
buffer: A pointer to a buffer of size at least that returned by getSerializationSize().
See also
getSerializationSize()

Implemented in FCPlugin, and nmtSample::DebugUtil::DumpTensorPlugin.

