TensorRT  7.2.1.6
NVIDIA TensorRT
nvinfer1::IPoolingLayer Class Reference [abstract]

A Pooling layer in a network definition. More...


Public Member Functions

virtual void setPoolingType (PoolingType type)=0
 Set the type of pooling to be performed. More...
 
virtual PoolingType getPoolingType () const =0
 Get the type of pooling to be performed. More...
 
virtual void setWindowSize (DimsHW windowSize)=0
 Set the window size for pooling. (Deprecated.) More...
 
virtual DimsHW getWindowSize () const =0
 Get the window size for pooling. (Deprecated.) More...
 
virtual void setStride (DimsHW stride)=0
 Set the stride for pooling. (Deprecated.) More...
 
virtual DimsHW getStride () const =0
 Get the stride for pooling. (Deprecated.) More...
 
virtual void setPadding (DimsHW padding)=0
 Set the padding for pooling. (Deprecated.) More...
 
virtual DimsHW getPadding () const =0
 Get the padding for pooling. (Deprecated.) More...
 
virtual void setBlendFactor (float blendFactor)=0
 Set the blending factor for the max_average_blend mode: max_average_blendPool = (1-blendFactor)*maxPool + blendFactor*avgPool. blendFactor is a user value in [0,1] with a default of 0.0. This value applies only to the kMAX_AVERAGE_BLEND mode. More...
 
virtual float getBlendFactor () const =0
 Get the blending factor for the max_average_blend mode: max_average_blendPool = (1-blendFactor)*maxPool + blendFactor*avgPool. blendFactor is a user value in [0,1] with a default of 0.0. In modes other than kMAX_AVERAGE_BLEND, blendFactor is ignored. More...
 
virtual void setAverageCountExcludesPadding (bool exclusive)=0
 Set whether average pooling uses as a denominator the overlap area between the window and the unpadded input. More...
 
virtual bool getAverageCountExcludesPadding () const =0
 Get whether average pooling uses as a denominator the overlap area between the window and the unpadded input. More...
 
virtual void setPrePadding (Dims padding)=0
 Set the multi-dimension pre-padding for pooling. More...
 
virtual Dims getPrePadding () const =0
 Get the pre-padding. More...
 
virtual void setPostPadding (Dims padding)=0
 Set the multi-dimension post-padding for pooling. More...
 
virtual Dims getPostPadding () const =0
 Get the post-padding. More...
 
virtual void setPaddingMode (PaddingMode paddingMode)=0
 Set the padding mode. More...
 
virtual PaddingMode getPaddingMode () const =0
 Get the padding mode. More...
 
virtual void setWindowSizeNd (Dims windowSize)=0
 Set the multi-dimension window size for pooling. More...
 
virtual Dims getWindowSizeNd () const =0
 Get the multi-dimension window size for pooling. More...
 
virtual void setStrideNd (Dims stride)=0
 Set the multi-dimension stride for pooling. More...
 
virtual Dims getStrideNd () const =0
 Get the multi-dimension stride for pooling. More...
 
virtual void setPaddingNd (Dims padding)=0
 Set the multi-dimension padding for pooling. More...
 
virtual Dims getPaddingNd () const =0
 Get the multi-dimension padding for pooling. More...
 
virtual LayerType getType () const =0
 Return the type of a layer. More...
 
virtual void setName (const char *name)=0
 Set the name of a layer. More...
 
virtual const char * getName () const =0
 Return the name of a layer. More...
 
virtual int32_t getNbInputs () const =0
 Get the number of inputs of a layer. More...
 
virtual ITensor * getInput (int32_t index) const =0
 Get the layer input corresponding to the given index. More...
 
virtual int32_t getNbOutputs () const =0
 Get the number of outputs of a layer. More...
 
virtual ITensor * getOutput (int32_t index) const =0
 Get the layer output corresponding to the given index. More...
 
virtual void setInput (int32_t index, ITensor &tensor)=0
 Replace an input of this layer with a specific tensor. More...
 
virtual void setPrecision (DataType dataType)=0
 Set the computational precision of this layer. More...
 
virtual DataType getPrecision () const =0
 Get the computational precision of this layer. More...
 
virtual bool precisionIsSet () const =0
 Whether the computational precision has been set for this layer. More...
 
virtual void resetPrecision ()=0
 Reset the computational precision for this layer. More...
 
virtual void setOutputType (int32_t index, DataType dataType)=0
 Set the output type of this layer. More...
 
virtual DataType getOutputType (int32_t index) const =0
 Get the output type of this layer. More...
 
virtual bool outputTypeIsSet (int32_t index) const =0
 Whether the output type has been set for this layer. More...
 
virtual void resetOutputType (int32_t index)=0
 Reset the output type for this layer. More...
 

Protected Member Functions

virtual ~IPoolingLayer ()
 

Detailed Description

A Pooling layer in a network definition.

The layer applies a reduction operation within a window over the input.

Warning
When running pooling layer with DeviceType::kDLA in Int8 mode, the dynamic ranges for input and output tensors must be equal.
Do not inherit from this class, as doing so will break forward-compatibility of the API and ABI.

Constructor & Destructor Documentation

◆ ~IPoolingLayer()

virtual nvinfer1::IPoolingLayer::~IPoolingLayer ( )
inline protected virtual

Member Function Documentation

◆ setPoolingType()

virtual void nvinfer1::IPoolingLayer::setPoolingType ( PoolingType  type)
pure virtual

Set the type of pooling to be performed.

DLA only supports kMAX and kAVERAGE pooling types.

See also
getPoolingType(), PoolingType

◆ getPoolingType()

virtual PoolingType nvinfer1::IPoolingLayer::getPoolingType ( ) const
pure virtual

Get the type of pooling to be performed.

See also
setPoolingType(), PoolingType

◆ setWindowSize()

virtual void nvinfer1::IPoolingLayer::setWindowSize ( DimsHW  windowSize)
pure virtual

Set the window size for pooling.

If executing this layer on DLA, both height and width of window size must be in the range [1,8].

See also
getWindowSize()
Deprecated:
Superseded by setWindowSizeNd and will be removed in TensorRT 9.0.

◆ getWindowSize()

virtual DimsHW nvinfer1::IPoolingLayer::getWindowSize ( ) const
pure virtual

Get the window size for pooling.

See also
setWindowSize()
Deprecated:
Superseded by getWindowSizeNd and will be removed in TensorRT 9.0.

◆ setStride()

virtual void nvinfer1::IPoolingLayer::setStride ( DimsHW  stride)
pure virtual

Set the stride for pooling.

Default: 1

If executing this layer on DLA, both height and width of stride must be in the range [1,16].

See also
getStride()
Deprecated:
Superseded by setStrideNd and will be removed in TensorRT 9.0.

◆ getStride()

virtual DimsHW nvinfer1::IPoolingLayer::getStride ( ) const
pure virtual

Get the stride for pooling.

See also
setStride()
Deprecated:
Superseded by getStrideNd and will be removed in TensorRT 9.0.

◆ setPadding()

virtual void nvinfer1::IPoolingLayer::setPadding ( DimsHW  padding)
pure virtual

Set the padding for pooling.

Default: 0

If executing this layer on DLA, both height and width of padding must be in the range [0,7].

See also
getPadding()
Deprecated:
Superseded by setPaddingNd and will be removed in TensorRT 9.0.

◆ getPadding()

virtual DimsHW nvinfer1::IPoolingLayer::getPadding ( ) const
pure virtual

Get the padding for pooling.

Default: 0

See also
setPadding()
Deprecated:
Superseded by getPaddingNd and will be removed in TensorRT 9.0.

◆ setBlendFactor()

virtual void nvinfer1::IPoolingLayer::setBlendFactor ( float  blendFactor)
pure virtual

Set the blending factor for the max_average_blend mode: max_average_blendPool = (1-blendFactor)*maxPool + blendFactor*avgPool. blendFactor is a user value in [0,1] with a default of 0.0. This value applies only to the kMAX_AVERAGE_BLEND mode.

Since DLA does not support kMAX_AVERAGE_BLEND, blendFactor is ignored on the DLA.

See also
getBlendFactor()

◆ getBlendFactor()

virtual float nvinfer1::IPoolingLayer::getBlendFactor ( ) const
pure virtual

Get the blending factor for the max_average_blend mode: max_average_blendPool = (1-blendFactor)*maxPool + blendFactor*avgPool. blendFactor is a user value in [0,1] with a default of 0.0. In modes other than kMAX_AVERAGE_BLEND, blendFactor is ignored.

See also
setBlendFactor()

◆ setAverageCountExcludesPadding()

virtual void nvinfer1::IPoolingLayer::setAverageCountExcludesPadding ( bool  exclusive)
pure virtual

Set whether average pooling uses as a denominator the overlap area between the window and the unpadded input.

If this is not set, the denominator is the overlap between the pooling window and the padded input.

Default: true

If executing this layer on the DLA, this setting is ignored, as the DLA supports only inclusive padding.

See also
getAverageCountExcludesPadding()

◆ getAverageCountExcludesPadding()

virtual bool nvinfer1::IPoolingLayer::getAverageCountExcludesPadding ( ) const
pure virtual

Get whether average pooling uses as a denominator the overlap area between the window and the unpadded input.

See also
setAverageCountExcludesPadding()

◆ setPrePadding()

virtual void nvinfer1::IPoolingLayer::setPrePadding ( Dims  padding)
pure virtual

Set the multi-dimension pre-padding for pooling.

The start of the input will be padded by this number of elements in each dimension. The padding value depends on the pooling type: -inf is used for max pooling and zero for average pooling.

Default: (0, 0, ..., 0)

If executing this layer on DLA, only 2D padding is supported; both height and width of the padding must be in the range [0,7].

See also
getPrePadding()

◆ getPrePadding()

virtual Dims nvinfer1::IPoolingLayer::getPrePadding ( ) const
pure virtual

Get the pre-padding.

See also
setPrePadding()

◆ setPostPadding()

virtual void nvinfer1::IPoolingLayer::setPostPadding ( Dims  padding)
pure virtual

Set the multi-dimension post-padding for pooling.

The end of the input will be padded by this number of elements in each dimension. The padding value depends on the pooling type: -inf is used for max pooling and zero for average pooling.

Default: (0, 0, ..., 0)

If executing this layer on DLA, only 2D padding is supported; both height and width of the padding must be in the range [0,7].

See also
getPostPadding()

◆ getPostPadding()

virtual Dims nvinfer1::IPoolingLayer::getPostPadding ( ) const
pure virtual

Get the post-padding.

See also
setPostPadding()

◆ setPaddingMode()

virtual void nvinfer1::IPoolingLayer::setPaddingMode ( PaddingMode  paddingMode)
pure virtual

Set the padding mode.

Padding mode takes precedence if both setPaddingMode and setPre/PostPadding are used.

Default: kEXPLICIT_ROUND_DOWN

See also
getPaddingMode()

◆ getPaddingMode()

virtual PaddingMode nvinfer1::IPoolingLayer::getPaddingMode ( ) const
pure virtual

Get the padding mode.

Default: kEXPLICIT_ROUND_DOWN

See also
setPaddingMode()

◆ setWindowSizeNd()

virtual void nvinfer1::IPoolingLayer::setWindowSizeNd ( Dims  windowSize)
pure virtual

Set the multi-dimension window size for pooling.

If executing this layer on DLA, only a 2D window is supported; both height and width of the window must be in the range [1,8].

See also
getWindowSizeNd() setWindowSize() getWindowSize()

◆ getWindowSizeNd()

virtual Dims nvinfer1::IPoolingLayer::getWindowSizeNd ( ) const
pure virtual

Get the multi-dimension window size for pooling.

See also
setWindowSizeNd()

◆ setStrideNd()

virtual void nvinfer1::IPoolingLayer::setStrideNd ( Dims  stride)
pure virtual

Set the multi-dimension stride for pooling.

Default: (1, 1, ..., 1)

If executing this layer on DLA, only 2D stride is supported; both height and width of the stride must be in the range [1,16].

See also
getStrideNd() setStride() getStride()

◆ getStrideNd()

virtual Dims nvinfer1::IPoolingLayer::getStrideNd ( ) const
pure virtual

Get the multi-dimension stride for pooling.

See also
setStrideNd()

◆ setPaddingNd()

virtual void nvinfer1::IPoolingLayer::setPaddingNd ( Dims  padding)
pure virtual

Set the multi-dimension padding for pooling.

The input will be padded by this number of elements in each dimension. Padding is symmetric. The padding value depends on the pooling type: -inf is used for max pooling and zero for average pooling.

Default: (0, 0, ..., 0)

If executing this layer on DLA, only 2D padding is supported; both height and width of the padding must be in the range [0,7].

See also
getPaddingNd() setPadding() getPadding()

◆ getPaddingNd()

virtual Dims nvinfer1::IPoolingLayer::getPaddingNd ( ) const
pure virtual

Get the multi-dimension padding for pooling.

If the padding is asymmetric, the pre-padding is returned.

See also
setPaddingNd()

◆ getType()

virtual LayerType nvinfer1::ILayer::getType ( ) const
pure virtual inherited

Return the type of a layer.

See also
LayerType

◆ setName()

virtual void nvinfer1::ILayer::setName ( const char *  name)
pure virtual inherited

Set the name of a layer.

This method copies the name string.

See also
getName()

◆ getName()

virtual const char* nvinfer1::ILayer::getName ( ) const
pure virtual inherited

Return the name of a layer.

See also
setName()

◆ getNbInputs()

virtual int32_t nvinfer1::ILayer::getNbInputs ( ) const
pure virtual inherited

Get the number of inputs of a layer.

◆ getInput()

virtual ITensor* nvinfer1::ILayer::getInput ( int32_t  index) const
pure virtual inherited

Get the layer input corresponding to the given index.

Parameters
index The index of the input tensor.
Returns
The input tensor, or nullptr if the index is out of range or the tensor is optional (ISliceLayer, IRNNLayer and IRNNv2Layer).

◆ getNbOutputs()

virtual int32_t nvinfer1::ILayer::getNbOutputs ( ) const
pure virtual inherited

Get the number of outputs of a layer.


◆ getOutput()

virtual ITensor* nvinfer1::ILayer::getOutput ( int32_t  index) const
pure virtual inherited

Get the layer output corresponding to the given index.

Returns
The indexed output tensor, or nullptr if the index is out of range or the tensor is optional (IRNNLayer and IRNNv2Layer).

◆ setInput()

virtual void nvinfer1::ILayer::setInput ( int32_t  index,
ITensor &  tensor 
)
pure virtual inherited

Replace an input of this layer with a specific tensor.

Parameters
index The index of the input to modify.
tensor The new input tensor.

Except for IShuffleLayer, ISliceLayer, IResizeLayer and ILoopOutputLayer, this method cannot change the number of inputs to a layer. The index argument must be less than the value of getNbInputs().

See the overloaded setInput() comments for each layer's special behavior.

Implemented in nvinfer1::IFillLayer, nvinfer1::ILoopOutputLayer, nvinfer1::IRecurrenceLayer, nvinfer1::IResizeLayer, nvinfer1::ISliceLayer, nvinfer1::IShuffleLayer, nvinfer1::IDeconvolutionLayer, nvinfer1::IFullyConnectedLayer, and nvinfer1::IConvolutionLayer.


◆ setPrecision()

virtual void nvinfer1::ILayer::setPrecision ( DataType  dataType)
pure virtual inherited

Set the computational precision of this layer.

Setting the precision allows TensorRT to choose an implementation which runs at this computational precision. The layer input type is also inferred from the layer's computational precision. TensorRT may still choose a faster, non-conforming implementation that ignores the requested precision; use BuilderFlag::kSTRICT_TYPES to force implementations with the requested precision. If no implementation with the requested precision is found, TensorRT falls back to the fastest available implementation. If the precision is not set, TensorRT selects the layer's computational precision and input type based on performance considerations and the flags specified to the builder.

Parameters
dataType The computational precision.
See also
getPrecision() precisionIsSet() resetPrecision()

◆ getPrecision()

virtual DataType nvinfer1::ILayer::getPrecision ( ) const
pure virtual inherited

Get the computational precision of this layer.

Returns
The computational precision.
See also
setPrecision() precisionIsSet() resetPrecision()

◆ precisionIsSet()

virtual bool nvinfer1::ILayer::precisionIsSet ( ) const
pure virtual inherited

Whether the computational precision has been set for this layer.

Returns
Whether the computational precision has been explicitly set.
See also
setPrecision() getPrecision() resetPrecision()

◆ resetPrecision()

virtual void nvinfer1::ILayer::resetPrecision ( )
pure virtual inherited

Reset the computational precision for this layer.

See also
setPrecision() getPrecision() precisionIsSet()

◆ setOutputType()

virtual void nvinfer1::ILayer::setOutputType ( int32_t  index,
DataType  dataType 
)
pure virtual inherited

Set the output type of this layer.

Setting the output type constrains TensorRT to choose implementations which generate output data with the given type. If it is not set, TensorRT selects the output type based on the layer's computational precision. TensorRT may still choose a non-conforming output type based on the fastest implementation; use BuilderFlag::kSTRICT_TYPES to force the requested output type. If the layer precision is not specified, the output type depends on the chosen implementation, based on performance considerations and the flags specified to the builder.

This method cannot be used to set the data type of the second output tensor of the TopK layer, which is always Int32. The output type of all layers that are shape operations must be DataType::kINT32; attempts to set any other output type are ignored, apart from an error message being issued.

Note that the layer output type is generally not identical to the data type of the output tensor, as TensorRT may insert implicit reformatting operations to convert the former to the latter. Calling layer->setOutputType(i, type) has no effect on the data type of the i-th output tensor of layer, and users need to call layer->getOutput(i)->setType(type) to change the tensor data type. This is particularly relevant if the tensor is marked as a network output, since only setType() [but not setOutputType()] will affect the data representation in the corresponding output binding.

Parameters
index The index of the output to set.
dataType The type of the output.
See also
getOutputType() outputTypeIsSet() resetOutputType()

◆ getOutputType()

virtual DataType nvinfer1::ILayer::getOutputType ( int32_t  index) const
pure virtual inherited

Get the output type of this layer.

Parameters
index The index of the output.
Returns
The output precision. If no precision has been set, DataType::kFLOAT will be returned, unless the output type is inherently DataType::kINT32.
See also
setOutputType() outputTypeIsSet() resetOutputType()

◆ outputTypeIsSet()

virtual bool nvinfer1::ILayer::outputTypeIsSet ( int32_t  index) const
pure virtual inherited

Whether the output type has been set for this layer.

Parameters
index The index of the output.
Returns
Whether the output type has been explicitly set.
See also
setOutputType() getOutputType() resetOutputType()

◆ resetOutputType()

virtual void nvinfer1::ILayer::resetOutputType ( int32_t  index)
pure virtual inherited

Reset the output type for this layer.

Parameters
index The index of the output.
See also
setOutputType() getOutputType() outputTypeIsSet()

The documentation for this class was generated from the following file: