TensorRT 7.2.1.6
NVIDIA TensorRT
tests.tensor_quant_test.TestTensorQuant Class Reference

Public Member Functions

def test_simple_run(self)
def test_per_tensor_scale(self)
def test_per_channel_scale(self)
def test_backward(self)
def test_unsigned(self)
def test_overflow_fp16(self)
def test_clip_gradient(self)
def test_full_range(self)

Member Function Documentation

◆ test_simple_run()

def tests.tensor_quant_test.TestTensorQuant.test_simple_run(self)
quantizer passes gradcheck
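
This check can be driven with torch.autograd.gradcheck, which compares the quantizer's analytical gradient against a finite-difference estimate. A minimal sketch, assuming fake_tensor_quant is called with just an input tensor and an amax tensor; the shapes, tolerance, and detached amax are illustrative choices, not the test's actual values:

    import torch
    from pytorch_quantization import tensor_quant

    # gradcheck needs double-precision inputs; x is the tensor whose
    # analytical gradient (the straight-through estimator) is checked.
    x = torch.randn(2, 3, dtype=torch.float64, requires_grad=True)
    amax = x.abs().max().detach()  # detached so only d(output)/dx is checked

    # raise_exception=False returns a bool instead of raising on mismatch,
    # so the sketch runs to completion regardless of tolerance settings.
    ok = torch.autograd.gradcheck(tensor_quant.fake_tensor_quant, (x, amax),
                                  atol=1e-4, raise_exception=False)
    print("gradcheck result:", ok)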

◆ test_per_tensor_scale()

def tests.tensor_quant_test.TestTensorQuant.test_per_tensor_scale(self)
tensor_quant matches numpy quantization
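
A sketch of the comparison described above, assuming the toolkit's signed 8-bit convention of mapping amax to the integer code +127 with a narrow range of [-127, 127]; the NumPy reference below is an assumption about the quantization rule, not code taken from the test:

    import numpy as np
    import torch
    from pytorch_quantization import tensor_quant

    x_np = np.random.rand(1023).astype(np.float32)
    x = torch.from_numpy(x_np)
    amax = x.abs().max()

    # Assumed NumPy reference: scale so amax maps to +127, round, then clamp.
    scale = 127.0 / float(amax)
    ref = np.clip(np.round(x_np * scale), -127, 127)

    # tensor_quant returns the integer-valued codes and the scale it used.
    quant_x, quant_scale = tensor_quant.tensor_quant(x, amax)
    np.testing.assert_array_almost_equal(quant_x.numpy(), ref)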

◆ test_per_channel_scale()

def tests.tensor_quant_test.TestTensorQuant.test_per_channel_scale(self)
fake_tensor_quant performs per-channel quantization
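
Per-channel behaviour can be sketched by passing an amax tensor that broadcasts against the input; the shapes and the choice of dim 0 as the channel axis are illustrative assumptions:

    import torch
    from pytorch_quantization import tensor_quant

    x = torch.randn(4, 16, 16)                     # 4 channels along dim 0
    amax = x.abs().amax(dim=(1, 2), keepdim=True)  # one amax per channel, shape (4, 1, 1)

    per_channel = tensor_quant.fake_tensor_quant(x, amax)

    # Quantizing each channel on its own with a scalar amax should match the
    # broadcast per-channel call.
    for c in range(x.size(0)):
        single = tensor_quant.fake_tensor_quant(x[c], amax[c].squeeze())
        assert torch.allclose(per_channel[c], single)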

◆ test_backward()

def tests.tensor_quant_test.TestTensorQuant.test_backward(self)
tensor_quant implements a straight-through estimator on the backward pass.
    Note: this does not work for an integer output_dtype
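
A sketch of a straight-through estimator check: the upstream gradient should come back on the input unchanged as long as no element exceeds amax (amax is deliberately chosen larger than every |x| here, which is an assumption of this sketch, not the test's setup):

    import torch
    from pytorch_quantization import tensor_quant

    x = torch.randn(3, 7, requires_grad=True)
    amax = x.abs().max().detach() * 2  # larger than any |x|, so nothing is clipped

    y = tensor_quant.fake_tensor_quant(x, amax)

    # Push an arbitrary upstream gradient through the quantizer.
    upstream = torch.randn_like(y)
    y.backward(upstream)

    # Straight-through estimator: the rounding step is treated as identity on
    # the backward pass, so the input gradient equals the upstream gradient.
    assert torch.allclose(x.grad, upstream)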

◆ test_unsigned()

def tests.tensor_quant_test.TestTensorQuant.test_unsigned(self)

◆ test_overflow_fp16()

def tests.tensor_quant_test.TestTensorQuant.test_overflow_fp16(self)

◆ test_clip_gradient()

def tests.tensor_quant_test.TestTensorQuant.test_clip_gradient(self)

◆ test_full_range()

def tests.tensor_quant_test.TestTensorQuant.test_full_range(self)
fake_tensor_quant uses the full integer range when narrow=False
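
What "the full integer range" means can be illustrated with a plain reference computation instead of the library call itself; the amax-to-+127 scaling and the [-127, 127] versus [-128, 127] clamp bounds are assumptions about the quantization convention, not code from the toolkit:

    import torch

    x = torch.tensor([-1.2, -1.0, 0.0, 1.0])
    amax = 1.0
    scale = 127.0 / amax                           # amax maps to code +127 (assumed)

    codes = torch.round(x * scale)
    narrow_codes = torch.clamp(codes, -127, 127)   # narrow range keeps codes in [-127, 127]
    full_codes = torch.clamp(codes, -128, 127)     # narrow=False also admits -128

    print(narrow_codes.tolist())   # [-127.0, -127.0, 0.0, 127.0]
    print(full_codes.tolist())     # [-128.0, -127.0, 0.0, 127.0]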

The documentation for this class was generated from the following file: