TensorRT  7.2.1.6
NVIDIA TensorRT
pytorch_quantization.nn.functional.ClipFunction Class Reference
Inheritance diagram for pytorch_quantization.nn.functional.ClipFunction:
Collaboration diagram for pytorch_quantization.nn.functional.ClipFunction:

Static Public Member Functions

def forward (ctx, input, clip_value_min, clip_value_max)
 
def backward (ctx, grad_output)
 

Detailed Description

A universal tensor clip function

PyTorch's clamp() only supports a scalar range and doesn't support broadcasting. This implementation uses min/max, which
is more general. The gradient is defined according to IBM's PACT paper (https://arxiv.org/abs/1805.06085), which is
also the behavior of TensorFlow's clip_by_value().
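The description above can be sketched as a custom autograd function. This is a minimal illustration, not the shipped implementation: the class name `ClipSketch` is hypothetical, and it assumes the clip bounds are passed as tensors and that no gradient is returned for the bounds (the actual library may also differentiate with respect to the clip values, as PACT does for its learned upper bound).

```python
import torch


class ClipSketch(torch.autograd.Function):
    """Clip input to [clip_value_min, clip_value_max] with broadcasting.

    Sketch of the behavior documented for ClipFunction: forward uses
    torch.min/torch.max (which broadcast, unlike torch.clamp with tensor
    bounds); backward passes the gradient through inside the range and
    zeros it outside, matching tf.clip_by_value().
    """

    @staticmethod
    def forward(ctx, input, clip_value_min, clip_value_max):
        # Elementwise, broadcasting clip: max() enforces the lower bound,
        # min() enforces the upper bound.
        output = torch.min(torch.max(input, clip_value_min), clip_value_max)
        ctx.save_for_backward(input, clip_value_min, clip_value_max)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, clip_value_min, clip_value_max = ctx.saved_tensors
        # Straight-through gradient inside the clip range, zero outside.
        inside = (input >= clip_value_min) & (input <= clip_value_max)
        grad_input = grad_output * inside.to(grad_output.dtype)
        # No gradient for the bounds in this sketch.
        return grad_input, None, None


# Usage: values outside [-1, 1] are clipped and receive zero gradient.
x = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)
y = ClipSketch.apply(x, torch.tensor(-1.0), torch.tensor(1.0))
y.sum().backward()
```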

Member Function Documentation

◆ forward()

def pytorch_quantization.nn.functional.ClipFunction.forward (ctx, input, clip_value_min, clip_value_max)

static

◆ backward()

def pytorch_quantization.nn.functional.ClipFunction.backward (ctx, grad_output)

static

The documentation for this class was generated from the following file: