Public Member Functions

def test_basic_forward (self, verbose)
def test_no_quant_input_hidden (self, verbose)
def test_no_quant_input_hidden_bias (self, verbose)
def test_against_unquantized (self, verbose)
def test_quant_input_hidden (self, verbose)
def test_quant_input_hidden_bias (self, verbose)
def test_quant_different_prec (self, verbose)
Tests for quant_rnn.QuantLSTMCell. Default parameters in QuantLSTMCell: bias=True, num_bits_weight=8, quant_mode_weight='per_channel', num_bits_input=8, quant_mode_input='per_tensor'. Tests of real quantization mode (non-fake) are disabled, as it is not fully supported yet.
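The two quantization modes named in the defaults differ in how the dynamic-range amplitude (amax) is computed. The helpers below are a minimal illustrative sketch of that difference, not the library's API; the function names are hypothetical.

```python
def amax_per_tensor(weights):
    """Per-tensor mode: one amax (and hence one scale) for the whole tensor."""
    return max(abs(v) for row in weights for v in row)

def amax_per_channel(weights):
    """Per-channel mode: one amax per output channel (here, per row)."""
    return [max(abs(v) for v in row) for row in weights]

w = [[0.1, -0.4],
     [2.0,  0.5]]
print(amax_per_tensor(w))    # 2.0
print(amax_per_channel(w))   # [0.4, 2.0]
```

Per-channel scaling keeps a small-magnitude channel (like the first row above) from being crushed by a single large outlier elsewhere in the tensor, which is why it is the default for weights.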
def tests.quant_rnn_test.TestQuantLSTMCell.test_basic_forward ( self, verbose )
Do a forward pass on the cell module and see if anything catches fire.
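A smoke test of this shape only checks that one cell step produces finite outputs. As a dependency-free sketch (not the module under test), here is a pure-Python LSTM cell step using pytorch's gate order (i, f, g, o):

```python
import math

def lstm_cell_step(x, h, c, W, U, b):
    """One LSTM cell step. x, h, c are lists of floats; W and U are
    [4*H]-row weight matrices (gate order i, f, g, o); b has 4*H entries."""
    H = len(h)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # pre-activations: W x + U h + b
    z = [sum(Wr[k] * x[k] for k in range(len(x)))
         + sum(Ur[k] * h[k] for k in range(H)) + br
         for Wr, Ur, br in zip(W, U, b)]
    i = [sig(v) for v in z[0:H]]
    f = [sig(v) for v in z[H:2 * H]]
    g = [math.tanh(v) for v in z[2 * H:3 * H]]
    o = [sig(v) for v in z[3 * H:4 * H]]
    c_new = [f[j] * c[j] + i[j] * g[j] for j in range(H)]
    h_new = [o[j] * math.tanh(c_new[j]) for j in range(H)]
    return h_new, c_new

# smoke test: tiny cell (input size 2, hidden size 2), constant weights
I, H = 2, 2
W = [[0.1] * I for _ in range(4 * H)]
U = [[0.1] * H for _ in range(4 * H)]
b = [0.0] * (4 * H)
h, c = lstm_cell_step([1.0, -1.0], [0.0, 0.0], [0.0, 0.0], W, U, b)
assert all(math.isfinite(v) for v in h + c)  # nothing caught fire
```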
def tests.quant_rnn_test.TestQuantLSTMCell.test_no_quant_input_hidden ( self, verbose )
QuantLSTMCell with quantization disabled vs. pytorch LSTMCell for input and hidden inputs.
def tests.quant_rnn_test.TestQuantLSTMCell.test_no_quant_input_hidden_bias ( self, verbose )
QuantLSTMCell with quantization disabled vs. pytorch LSTMCell for input, hidden inputs, and bias.
def tests.quant_rnn_test.TestQuantLSTMCell.test_against_unquantized ( self, verbose )
Quantization should introduce only a bounded error; use utils.compare against the pytorch implementation.
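The bound comes from the quantization step size: for symmetric fake quantization with range amax and n bits, each element is perturbed by at most half a step, amax / (2^(n-1) - 1) / 2. A minimal sketch of that property (an illustrative stand-in, not the library's tensor_quant implementation):

```python
def fake_quant(x, amax, num_bits=8):
    """Symmetric fake quantization: round onto an integer grid, dequantize."""
    bound = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = bound / amax
    return [max(-bound, min(bound, round(v * scale))) / scale for v in x]

xs = [i / 100 for i in range(-100, 101)]     # values covering [-1, 1]
amax = 1.0
err = max(abs(q - v) for q, v in zip(fake_quant(xs, amax), xs))
step = amax / 127
assert err <= step / 2 + 1e-12               # error bounded by half a step
```

A comparison helper like utils.compare can therefore use a tolerance derived from the step size rather than an exact-equality check.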
def tests.quant_rnn_test.TestQuantLSTMCell.test_quant_input_hidden ( self, verbose )
QuantLSTMCell vs. manual input quantization + pytorch LSTMCell.
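The structure of this kind of test: a module with input quantization enabled should match a plain module fed manually quantized inputs. A toy sketch with a linear op standing in for the cell (all names here are hypothetical, not the test's actual helpers):

```python
def fake_quant(x, amax, num_bits=8):
    """Symmetric fake quantization of a vector (illustrative stand-in)."""
    bound = 2 ** (num_bits - 1) - 1
    scale = bound / amax
    return [max(-bound, min(bound, round(v * scale))) / scale for v in x]

def linear(x, w):
    """Plain (unquantized) op, standing in for pytorch LSTMCell."""
    return sum(xi * wi for xi, wi in zip(x, w))

def quant_linear(x, w, amax):
    """Op with input quantization enabled, standing in for QuantLSTMCell."""
    return linear(fake_quant(x, amax), w)

x, w, amax = [0.3, -0.7], [1.0, 2.0], 1.0
# the test's equivalence: quantized module == manual quantization + plain module
assert quant_linear(x, w, amax) == linear(fake_quant(x, amax), w)
```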
def tests.quant_rnn_test.TestQuantLSTMCell.test_quant_input_hidden_bias ( self, verbose )
QuantLSTMCell vs. manual input quantization + pytorch LSTMCell; bias should not be quantized.
def tests.quant_rnn_test.TestQuantLSTMCell.test_quant_different_prec ( self, verbose )
QuantLSTMCell vs. manual input quantization + pytorch LSTMCell with different input and weight precisions.